From YouTube: [What's New] OpenShift 4.7 [Jan-2021]
Description
Technical Product Manager overview of OCP 4.7, with public Q&A that anyone can see.
Who should attend: Customers: developers, admins, IT professionals
Where: openshift.tv
A
Hello, welcome everyone. This is the What's New in OpenShift 4.7 briefing for the field. I am representing the OpenShift product management team, and my name is Tushar Katarki. Let me turn on my video. First of all, we're going to go through a lot of presentation material today, but if you have any questions, please use the Q&A forum on Prime Time or, if you are on openshift.tv, use the comments. Any outstanding questions will be addressed afterwards.
A
This call is being recorded, so you will get the recording, the slide deck, and the answered Q&A questions after the call. As you can see, we have a lot of content to cover, so without further ado, let's get into it. This is the OpenShift product management team; we do this more or less for every release, just before the release, and the What's New in OpenShift 4.7 content covers what is in it.
A
So, What's New in 4.7 is what we're going to cover today. 4.7 is on deck, and it is going to be released soon, in the next few weeks, so you will get a chance to play with it and take it for a ride. So, what are some of the themes for 4.7?
A
As an example of those improvements from a core platform perspective, we have some exciting changes and features in the scheduler, and we continue our journey on OVN; you will see OVN-Kubernetes IPsec as one of the spotlight features there. Security has always been important for OpenShift and for Red Hat.
A
There is the CIS Kubernetes benchmark being presented a little later. And then finally, on the workloads and stability side, we continue to make improvements to OpenShift GitOps, which we introduced in 4.6, and OpenShift Pipelines, which we introduced even before that. Helm is now at 3.5 and is now GA, so we'll hear all about it soon.
A
So
one
of
the
things
that
we
talk
about
here
is:
what's
what's
our
roadmap
looking
like
and
from
a
roadmap
perspective,
you
have,
you
know
4.7,
as
I
mentioned,
4.8
will
be
in
we're
targeting
q2
of
2021,
and
then
we
will
potentially
have
one,
if
not
two
more
releases
in
the
bottom
half
of
this
calendar
year.
I
won't
go
into
the
details
of
this,
but,
as
you
can
see,
we
have
improvements
planned
for
every
layer
of
this
stack.
A
OpenShift 4.7 is based on Kubernetes 1.20, which GA'd on December 8th, 2020. Contrary to recent releases, this release has more alpha than stable enhancements, showing that there is still much to explore and improve in this cloud native ecosystem. Considering what the world has gone through in 2020, it is a remarkable achievement for the community to come together to make 1.20 happen, and the OpenShift team definitely salutes that. I'm sure you all agree with me on that.
A
So
with
that,
I
will
pass
the
bit
on
on
to
these.
Are
the
openshift
4.7
spotlight
features?
Some
select
features
that
we
selected
as
representing
some
of
the
highlights
of
4.7,
we'll
go
through
them
first
and
then
we'll
get
into
kind
of
the
more
functional
level
features
that
we
talk
about
in
these
presentations.
So
with
that
I'll
hand
it
over
to
moran
to
talk
about
the
assisted
installer.
C
Okay, hi everyone, my name is Moran Goldboim. I'm a product manager in the Cloud Platforms BU, and today I'm going to talk about the Assisted Installer. So what is the Assisted Installer? It is a web-based service, hosted on cloud.redhat.com, made to simplify the flow of deploying OpenShift on bare metal. It is focused on getting a user to an OpenShift cluster up and running with minimal requirements and a great UX.
C
In 4.7 we are basically going to release it as a Tech Preview, where the installation workflow itself is in Tech Preview, but clusters successfully installed with it are supported and upgradable, very much like bare metal IPI clusters.
C
So let's talk for a minute about the flow; the focus is a really simplified flow. What the user needs to do is go to cloud.redhat.com and generate an ISO, which you then take and boot machines with. The boot media is a CoreOS live image plus an agent. The agent calls back home and registers with the service, and from that point on everything is done via the UI. We've got really minimal prerequisites to begin the flow.
C
So
we
don't
need
a
dedicated
bootstrap
node,
which
is
really
critical
on
bare
metal
to
save
the
resources.
We
actually
use
one
of
the
masters
to
pivot
them
to
people
teach
from
bootstrap
to
a
master
machine,
and
we
so
we've
got
a
three
nodes:
minimum
cluster
to
begin
with
master
and
walker
combination.
C
So
that's
the
bare
minimum
compact
bare
metal
clusters,
as
well
as
you
can
add,
as
much
water
as
you
want
as
part
of
this
flow,
and
we
don't
need
any
dhcp
hostname
allocation
and
we've
got
the
option
to
have
a
jumpstart
veep
allocations,
meaning
that
the
the
agent
is
basically
asking
address
allocation
for
the
vips
for
the
load,
but
for
the
load.
Balancer
grips
to
make
the
installation
really
really
easy,
and
we
also
focus
on
pre-installation
validations
to
make
sure
that
the
installation
is
successful.
C
So
before
running
any
openshift
installation,
we
make
sure
that
the
host
meets
minimum
requirements
that
the
network
connectivity
and
addresses
metrics
full
mesh
connectivity
is
there.
Ntp
is
seeing
oh,
and
if
not
it
has,
it
has
the
option
for
to
have
a
chronic
config
and
we
have
the
option
to
select
disk
and
and
validating
the
other
speed
of
those
lists,
and
so
just
making
sure
that
everything
is
ready
for
for
the
installation
and
we
also
kind
of
provide
a
smart
smart
default
configuration
to
make
it
pretty
easy,
but
also
changeable
if
needed.
C
And
of
course
all
the
process
is,
is
actually
progress
monitor.
So
you
can
see
that
the
progress
of
the
installation,
event
and
alerts
you
have
the
option
to
download
the
logs
if
something
goes
wrong
open
a
bug,
if
you
need
the.
C
Assistant
of
of
support,
so
actually,
all
in
all,
we
are
trying
to
really
wrap
the
experience.
This
is
something
that
we
brought
up
this
service
to
air
around
late
september
as
in
a
dev
preview,
and
now
we
are
going
to
advance
with
it
so
4.7
prerelease
installation
is
actually
available
today
and
on
cloud.render.com.
So
you
can
go
ahead
and
to
assist
the
installer
and
try
it
out
using
4.7
bits
at
the
bottom.
D
Hi, my name is Gaurav Singh, and I'm a product manager at OpenShift. I'm going to talk about HPA, horizontal pod autoscaling; it automatically scales up pods based on memory utilization. Take an example: think about an application or website, a customer-facing website.
D
Suddenly a lot of people are logging in, and because of that the memory utilization of the website has increased to 90 percent. HPA will spin up new pods so that the memory load is spread across them, and your website stays functional without any impact. On the right, what you see is a YAML template where you can specify the minimum and maximum scaling, and at the bottom is a target average utilization of 60, that is, 60 percent.
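As a rough sketch of the kind of manifest being described on the slide (the workload name and replica bounds here are illustrative, not from the presentation):

```yaml
# Hypothetical example: scale a deployment between 2 and 10 pods,
# targeting 60% average memory utilization (autoscaling/v2beta2 API,
# available with Kubernetes 1.20 / OpenShift 4.7).
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: web-memory-hpa          # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: customer-website      # illustrative target workload
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 60  # the 60 percent mentioned on the slide
```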
D
Next slide, please. Okay, scheduling profiles; this is in Technology Preview. What it is: you can customize the default, out-of-the-box behavior of the OpenShift scheduler with profiles. It comes in two flavors. One is the pre-built profiles; think about them as best practices. There are three pre-built profiles. One is LowNodeUtilization.
D
It will spread the pods evenly across the nodes. Then HighNodeUtilization packs as many pods as possible onto a small set of dedicated nodes. And NoScoring is for scheduling pods quickly: think about an application which requires low latency, so it schedules those pods very quickly. On the right is build-your-own profile. Think of it as us giving you a bunch of Legos, and you put the Legos together based on your business need. Every OpenShift scheduler has only one scheduling profile at a time.
D
A profile has many plugins, and one plugin has many extension points, so based on this you can build many kinds of logic that map to your business need. Next slide, please.
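For reference, selecting one of the pre-built profiles is done on the cluster's Scheduler config resource; a minimal sketch:

```yaml
# Set a scheduler profile cluster-wide via the Scheduler operator config.
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  # One of: LowNodeUtilization (the default), HighNodeUtilization, NoScoring
  profile: HighNodeUtilization
```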
D
There are three descheduler profiles available today. AffinityAndTaints evicts pods that do not follow affinity and taint rules. TopologyAndDuplicates evicts duplicate pods and balances them across the nodes. And LifecycleAndUtilization evicts pods based on pod lifetime, and evicts low-utilization pods that have been spun up on nodes dedicated for high density.
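These profiles are enabled through the Kube Descheduler Operator's custom resource; a minimal sketch, with an illustrative interval:

```yaml
# Enable descheduler profiles via the Kube Descheduler Operator CR.
apiVersion: operator.openshift.io/v1
kind: KubeDescheduler
metadata:
  name: cluster
  namespace: openshift-kube-descheduler-operator
spec:
  deschedulingIntervalSeconds: 3600   # illustrative: evaluate every hour
  profiles:
  - AffinityAndTaints
  - LifecycleAndUtilization
```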
E
Hello, I'm Erwan Gallen, product manager for hardware accelerators. We have collaborated with NVIDIA to allow GPU sharing for OpenShift on virtualized platforms. To use this feature, your OpenShift worker node should run on a virtual machine supporting virtual GPU, like Red Hat OpenStack Platform or Red Hat Virtualization.
E
You can also choose the vGPU memory size. For the vGPU sharing technology, you can pick time slicing or the new Multi-Instance GPU. Before using the GPU operator with vGPU, you have to install the vGPU host driver on the hypervisor, and you need to set up an NVIDIA license server on your network. Next slide.
F
Right, thanks everyone. An important new feature for OpenShift customers in 4.7 is IPsec. With IPsec, all east-west internode data plane traffic that is governed by OVN will be encrypted, per FIPS requirements. IPsec is enabled at installation time. This next-generation enablement of IPsec is going to improve upon our 3.x implementation in several ways. For one, the specific type and level of encryption will be FIPS compliant. For another, it avoids IPsec between worker and control plane nodes, which are already using HTTPS to meet the requirements of FIPS. Third, while the initial 4.7 implementation encrypts all data plane traffic at the node level, implementing IPsec inside of OVN enables future capabilities in the specificity of the internode traffic that gets encrypted: for example, we could refine encryption to specific namespace-to-namespace traffic versus encrypting all the traffic between the nodes.
F
And finally, the way we've implemented it in 4.7 also provides the foundation for supporting hardware offload of IPsec encryption to smart NICs, which we will support in a future version of OpenShift. Further details about how you would implement IPsec in 4.7 are included in the notes of this slide. Next slide, please.
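As a reference sketch of the install-time enablement described above: IPsec for the OVN-Kubernetes network is turned on by adding an empty ipsecConfig block to the cluster network install manifest:

```yaml
# Fragment of a cluster-network install manifest: an empty ipsecConfig
# block enables east-west IPsec encryption for OVN-Kubernetes.
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      ipsecConfig: {}
```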
G
OpenShift GitOps is a new add-on that will be available with 4.7, and the idea of OpenShift GitOps is to enable customers to adopt GitOps practices for driving infrastructure and application operations through Git workflows. In OpenShift GitOps, customers have access to a supported version of Argo CD, which is the GitOps engine, and through the CR that you can see on the slide they can define which Git repo should be synced with which cluster; that way, they can completely rely on the Git provider.
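A sketch of the kind of Argo CD Application CR being referenced, mapping a Git repo to a target cluster and namespace (the repo URL and names are placeholders):

```yaml
# Declarative Argo CD Application: keep a cluster namespace in sync
# with a path in a Git repository.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                    # placeholder
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://example.com/org/config-repo.git  # placeholder repo
    path: environments/prod
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc   # the cluster to sync to
    namespace: my-app
  syncPolicy:
    automated: {}                 # continuously reconcile with Git
```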
G
These are instances that have more limited access to the cluster and focus on driving application deployments and continuous delivery in a declarative manner. They can't install anything cluster-wide; they don't have privileged access to the cluster that they're running on.
G
It supports disconnected clusters and restricted networks out of the box with the Tech Preview, in both of those scenarios, and we have some guidance on different use cases and different topologies for installing and using Argo CD for different purposes.
G
In addition to the two that were mentioned, beyond Argo CD, OpenShift GitOps also includes a CLI: the OpenShift GitOps Application Manager CLI, abbreviated to kam. Its purpose is to help customers bootstrap a GitOps workflow using Argo CD, Tekton, and some of the other components, in order to simplify adopting GitOps practices. It basically populates a Git repo for them and configures Argo CD, configures Tekton, and sets up the integration points in between them. This is an area where, over the coming months, we'll be focusing a lot on getting customers started. Next slide.
H
Hi everyone, Kirsten Newcomer here. I'm excited to provide an update on a CIS benchmark for OpenShift. We have been working behind the scenes, directly with the Center for Internet Security (CIS), on an OpenShift benchmark. We expect that it will be published late January, potentially early February.
H
It has gone through an initial internal review with CIS. In the interim, we have made available the OpenShift 4 hardening guide; you'll need a Red Hatter to help you get access to that, and once the CIS OpenShift benchmark is published it will replace the hardening guide. We also now have available, for use with 4.6 and then with 4.7 as soon as 4.7 GAs, automated compliance checks aligned with the benchmark that you can run using the OpenShift Compliance Operator.
H
We
have
a
link
here
on
the
slide
to
the
update
that
you
need
to
apply
for
4.6
and
again
these
checks
will
be
available
through
the
optional
compliance
operator
in
4.7
and
4.6.
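With the Compliance Operator installed, wiring up the CIS-aligned checks can be sketched with a ScanSettingBinding like the following (the ocp4-cis profile name follows the operator's shipped content):

```yaml
# Bind the CIS platform profile to the operator's default scan settings.
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: cis-scan
  namespace: openshift-compliance
profiles:
- apiGroup: compliance.openshift.io/v1alpha1
  kind: Profile
  name: ocp4-cis          # CIS-aligned platform checks
settingsRef:
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default
```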
H
So for those of you who may not have heard: on January 7th, Red Hat announced the signature of a definitive agreement to acquire StackRox. The acquisition is subject to certain customary closing conditions. At this time there is a great deal of public information available to you; there are links here to the press release at the top, and to some blogs and an FAQ at the bottom.
H
StackRox is strategically aligned with Red Hat's view on offering security capabilities that enable a full-stack solution for the enterprise-ready hybrid cloud. The capabilities that you see listed here from StackRox complement Red Hat's layered approach to security, where we focus on integrating security capabilities throughout the stack and throughout the life cycle, and we're really excited about this.
H
We're excited about the opportunities this opens up to expand and refine Kube-native security, as well as to shift security left into the container build and into CI/CD pipelines, as StackRox and Red Hat work together to deliver a cohesive and comprehensive solution to enhance security throughout our offerings and throughout the life cycle. Thank you. Next slide.
I
Thank you very much, and good day, everybody. My name is Hisham Murad. Welcome; I'd like to give you a quick update on what's new in Red Hat Advanced Cluster Management 2.2. ACM 2.2 will be releasing on March 4th. ACM in general has four main pillars, if you will, and I want to touch on each of those pillars as I touch on these three main bullets here.
I
We introduced bare metal as well as vSphere in the prior release, and in 2.2 we're excited to announce that we're also supporting Azure Red Hat OpenShift as well as OpenShift Dedicated, again being able to manage OpenShift clusters regardless of where they live. And on that same theme, if you will, we're actually integrating Submariner as a tech preview as part of this. Why is this important for our customers? Because what we're doing is enabling secure communication between clusters, from an application perspective.
I
Now our customers are going to be able to deploy modern applications that will span clusters and data centers, and even span providers as well, so this is an incredibly powerful thing that we're bringing here for our customers. Now, at the heart and soul of what we do from a Red Hat perspective is open source, and we want to continue to expand and embrace open source. So part of the compliance story, and what Kirsten just talked about, is the integration of the Compliance Operator.
I
With the 2.2 release we continue to enhance that as well. We want to make sure that the OpenSCAP scanning, the security scanning that you can perform using that operator against your clusters, can now be surfaced into the ACM UI, so you have unified, centralized visibility of that as well. Another thing that we're bringing in as a tech preview is integration of the Kubernetes Integrity Shield. This is all about resource assurance: making sure that the integrity and authenticity of all the Kubernetes resources, images, and containers are valid.
I
That's also very important; it touches on the governance, risk, security, and compliance pillar of ACM, if you will. Now, another area that's really important is GitOps. GitOps is now a methodology that's used very widely across developers, and has been for some time, and we have now actually integrated Argo CD. Argo CD is widely used by many, many users and many, many customers to deploy applications, and because of this integration with ACM we're now able to surface all the ACM-managed OpenShift clusters to the Argo CD users.
I
So now they can, with confidence, deploy to these securely managed ACM OpenShift clusters. So again, we are continuing to expand and enhance that story, and that really expands our application lifecycle management story as well.
I
Last point: we have so much more detail coming on this. It's going to be on OneStop very quickly, so please go visit OneStop soon; we'll have some more details there, both in a messaging document as well as a PowerPoint presentation on all these topics. Thank you very much.
J
Okay, so this is Sergio Ocón, the product manager for cost management. We've worked a lot in the last release; you have it available today, and it will be updated through 4.7, with a better experience for OCP clusters. In the past we were using Operator Metering; right now there's a new operator, called koku-metrics, until we get it certified, which will happen during 4.7. With only one configuration file and one or two minutes, you can now have it running, and we have reduced the resources that you need to get it running by a factor of 1,000.
J
So we get the same information we got before, but easier and with better maintenance. The other thing we are delivering through 4.7 is support for air-gapped scenarios: there will be an update in a few weeks where you will be able to have your air-gapped cluster gather the information and upload it to the SaaS service to be processed. Next slide.
J
Okay, so we've changed some additional things in the tool. If you go now, you will see that you can filter out tags in the different sources we support. That was important: we have customers with one tag and more than 1.5 million values, and that was basically making it really hard to use the tool. Right now you can select which tags are going to be used. For OpenShift, you basically select the ones you want to include in your interface; for Amazon and Azure, all of them are selected by default, because you already do that in your cloud, but you can deselect one or several of them if you don't need them or you think they're misleading.
J
We also enhanced the cost model with label-based rates. So right now, if you want to put a price on storage, and you want to differentiate between different types of storage, so that it's more pricey to use a gold tier than a bronze tier, you can use a label on your storage or on your nodes, and we will use that. And of course, in order to do that, we also support default rates: if we don't find any of the tags that you have configured, we use the default, which covers many of the use cases. And the other thing is one request we were getting a lot.
J
You can limit visibility to one group, so you don't have access to everything that we have in the database; you only get access to whatever you are entitled to use. Another nice thing coming along with that is that there is now a specific role for creating sources, so you don't need to be an org admin to create new sources: you delegate the role, and the user will be able to create new sources for cost management. And that's all. Next slide, and next presenter.
K
Hey, Jake Lucky here. I just want to talk about managed OpenShift. The first thing I want to touch on is that, from the managed OpenShift side, we really want to highlight the fact that OpenShift is a single product with multiple consumption models. One thing that we're going to be doing in the OpenShift 4.7 time frame, on the OpenShift Cluster Manager side, is to really emphasize that. So when you go into OpenShift Cluster Manager, we want to show you all the different ways that you can create OpenShift: whether you're creating a new cluster with OCP on bare metal, maybe using the Assisted Installer, whether you're creating an OpenShift Dedicated cluster, or whether you're creating a ROKS cluster on IBM Cloud.
K
So what you're going to start to see in the OpenShift Cluster Manager experience is that you'll be able to log in, you'll be able to create a cluster, you'll see all the different ways you can create a cluster, and you'll actually be able to register your ARO clusters and your ROKS clusters back to your cluster list. So you have a true multi-cluster view across all of your OpenShift clusters, no matter where they're deployed. Next slide.
K
Now,
from
the
openshift
dedicated
and
the
redhead
openshift
service
on
the
abs
side,
we
have
actually
quite
a
few
updates
coming
here
in
the
4.7
time
frame
and
keep
in
mind
managed
openshift.
When
we
say
openshift4.7,
we
mean
4.7
time
frame.
These
are
features
coming
in
that
time
frame,
not
with
openshift
4.7.
K
So
we
have
a
new
ui
for
scheduling
cluster
updates,
so
you
can
basically
tell
us
if
you
want
your
cluster
to
automatically
update
when
there's
a
new
openshift
z
stream
available,
you
can
basically
set
a
maintenance
window
for
that.
What
time
and
day,
if
you
want
that
to
happen
on,
you
can
do
manual
updates,
you
can
basically
set
policies
around.
You
know.
K
If
how
you
want
your
nodes
to
drain
when
updates
happen,
we
have
auto
scaling
coming
at
the
the
cluster
level,
so
you'll
basically
be
able
to
set
auto
scaling
at
both
at
the
cluster
level
and
at
the
machine
pool
level.
So
we
have
this
concept
in
dedicated
and
redhead
openshift.
K
The
the
rosa
service
that
you
can
basically
create
machine
pools
for
the
machine
sets
so
machine
pools
are
basically
a
multi-az
machine
set
so
starting
in
the
4.7
time
frame,
you'll
be
able
to
actually
create
machine
pools
with
different
instance
types
previously,
you're
restricted
only
to
a
single
instance
type.
Now
you
can
use
mixed
instance,
types
in
your
openshift,
dedicated
and
rosa
clusters,
and
you
can
set
auto
scaling
on
those
pools.
K
You'll
also
be
able
to
install
into
an
existing
vpc
on
aws,
with
your
osd
erosive
cluster
you'll,
be
able
to
use
larger
instance
sizes
all
the
way
up
to
96b,
cpu
and
760
768
gigabytes
of
memory.
I
believe
that's
the
memory
optimized
24x
large.
K
We
also
have
customer
notifications
so
basically
tied
to
the
ocm
cluster
history,
log,
so
ocm,
tracks
changes
to
your
cluster
version
or
any
other
changes
that
a
cluster
owner
might
make
to
the
cluster
in
the
at
the
ocm.
Layer
also
tracks,
if
sorry
for
doing
support
case,
work
on
your
cluster
you'll,
basically
get
a
notification
and
you
can
set
your
notification
preferences
on
clouderaheld.com.
K
In the 4.7 time frame you might start seeing preview customers, but 4.8 may be more realistic for when we see a GA on egress lockdown, which is basically the ability to restrict outbound traffic from your cluster.
K
We
should
have
that
coming
in
the
near
future
here
so
just
like,
with
with
osd
and
roson
abs.
Bring
your
own
key
disk.
Encryption
is
another
feature
that
we're
working
very
hard
to
get
released
on
aero
and
then
larger
vm
sizes
there
as
well.
K
One
of
the
things
that
you
should
see
sooner
on
the
arrow
side
would
be
the
cluster
create
gui,
so
basically
you'll
be
able
to
log
into
the
azure
portal
and
actually
use
you
know
the
web
azure
web
console
to
set
up
a
new
cluster
and
set
all
of
your
configuration
and
preferences
there,
and
I
think
that's
it.
That's
it
for
me
I'll
hand
it
over
to
danger.
L
Thank you, Jake. Let's talk about workloads. I'm Daniel Messer, the product manager for the Operator Framework, which is the foundation for all our workload operators on OpenShift. For the Operator Lifecycle Manager, our on-cluster component, 4.7 was mostly a bug-fix release, so we put a lot of work into fixing issues with the product and bringing in stability improvements. But one thing that we also put in as a new feature is the ability for operators to communicate their readiness to be updated.
L
This
is
in
order
to
make
operator
updates
themselves
more
reliable
and
robust
and
obviously
not
something
you
would
see
when
the
operator
is
in
a
critical
situation.
So
a
critical
operation,
for
instance,
might
be
a
live
migration
of
a
virtual
machine
and
openshift
virtualization
or
the
restoration
of
a
database
from
a
backup.
L
So
in
these
instances
with
4.7,
an
operator
can
now
raise
the
flag
and
can
say
it's
not
upgradable
and
the
lifecycle
manager
will
hold
any
pending
update
either
manually
or
automatically
approved
until
the
operator
leaves
the
critical
phase
and
sets
the
upgradeable
flag
again
to
true.
So
this
should
help
with
building
up
confidence
with
cluster
administrators
to
update
operators
as
often
as
possible
next
slide.
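Under the hood, this signal is surfaced through an OperatorCondition resource that OLM maintains per operator; as a sketch, a cluster admin can also override a stuck Upgradeable=False signal (the operator name, namespace, and message below are illustrative):

```yaml
# Admin override on an OperatorCondition: tell OLM to proceed with an
# update even though the operator reported Upgradeable=False.
apiVersion: operators.coreos.com/v1
kind: OperatorCondition
metadata:
  name: my-operator.v1.2.3      # illustrative operator CSV name
  namespace: operators          # illustrative namespace
spec:
  overrides:
  - type: Upgradeable
    status: "True"
    reason: "ApproveUpgrade"
    message: "Admin has verified the operator is safe to update."
```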
L
The Operator SDK is now a supported offering in OpenShift 4.7. That means our ISV partners and customers shipping operators can now rely on a supported, official downstream release of the SDK, with all the branding, that they can get from redhat.com and obviously also get support for. On top of that, we are now at version 1.3 of the Operator SDK, which brings a lot of improvements in terms of automating the packaging as well as the testing.
L
With a single command, you can now test and package your operator for OLM and see if it can be deployed and managed by OLM. The SDK also supports the webhook integration that OLM offers: operators that are using admission webhooks or CRD conversion webhooks are usually a little bit harder to install, because the cluster admin needed to go into the system up front, register those webhooks, put TLS certificates in place, and make sure they are rotated, and this is now all being done by OLM.
L
And
last
but
not
least,
the
new
bundle
format
that
we
introduced
in
the
last
release
is
something
that
the
sdk
can
now
scaffold
and
create
automatically
all
to
the
point
that
it
really
just
takes
one
command
to
do
the
whole
bundle,
metadata
and
catalog
creation,
which
should
help
our
customers
to
do
this
actually
as
part
of
their
pipelines
and
continuously
test.
The
operator
on
openshift,
with
o
m.
M
Thank you, Daniel. I'm Karena Angell, and we are further complementing our operator story with Helm charts: more package management, more ways to bring workloads to OpenShift. If you are a Helm shop in particular and are using a lot of Helm charts, you will be excited about Helm 3.5 support. Helm 3.5 is a feature release; go to helm.sh and look at all the great new features in Helm 3.5.
M
So
if
your
development
environment
uses
multiple
repositories-
or
there
are
repositories
out
there-
that
you
would
like
to
pull
into
your
openshift
environment
and
now
do
that
and
support
for
disconnected
environments,
you
can
pull
down
the
default
repo
and
allow
yourself
to
use
your
helm,
charts
in
a
disconnected
environment
for
air
gap
mode.
Also.
Now,
if
your
team
does
not
want
to
see
the
default
repository,
you
can
remove
that
we
will
not
force
you
to
see
the
helm
charts.
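Adding your own chart repository to the cluster can be sketched with the HelmChartRepository CR (the name and URL below are placeholders):

```yaml
# Cluster-scoped Helm repository, surfaced in the developer console's
# Helm chart catalog alongside (or instead of) the default repo.
apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: my-charts                     # placeholder
spec:
  name: my-charts
  connectionConfig:
    url: https://example.com/charts   # placeholder repo URL
```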
M
The OpenShift developer console team has also been working on a lot of great new features, so not only can you use Helm through the CLI, you can also go into the console and deploy Helm charts.
M
First,
I
want
to
highlight
that
we
do
have
an
example:
quarkus
helm
chart
for
you
to
deploy
and
you
can
go
and
copy
that
and
use
it.
How
you
wish
and
pull
it
into
your
own
repo.
You
can
also
select
different
version
if
you're
using
different
versions
for
your
home
charts.
Now
you
can
select
which
one
to
install
as
well
as
filter
out
any
helm,
charts
that
are
not
compatible
to
your
openshift
release.
M
So now, within the console, you have quick starts, which will be mentioned later in this presentation, to get you up and running more quickly, as well as, as I mentioned, the example Helm chart, so that you can go deploy Quarkus from your developer console: go to the Helm chart repository and you will find an example Helm chart there. There's also integration with serverless functions, which is a dev preview that Naina will be talking about later as well.
M
I
would
also
like
to
highlight
that
eap
now
has
runnable
jars
again,
making
it
faster
and
quicker
for
you
to
run
your
applications
and
get
you
up
and
running
and
for
all
you
spring
boot
users,
not
only
support
for
ubi
support
for
java,
8
and
11,
but
decorate
build
hooks
again,
making
it
easier
for
you.
So
that's
tech
preview,
please
play
with
it,
give
us
all
your
feedback
all
right.
The
red
hat
integration,
team
jake
was
talking
about
openshift
dedicated
and
all
the
great
new
enhancements.
M
N
Thanks, Karena. So OpenShift Virtualization is the ability to run VMs inside of OpenShift, based on the upstream KubeVirt project. We've actually been working with some very strategic customers to do things like create hybrid applications out of legacy VMs, like enterprise databases, and connect them to their cloud-native development. And then some other folks are doing what I would call very advanced stuff, using ephemeral VMs inside of, say, developer pipelines. So there are lots of different use cases here.
N
We've also got some quick starts, which are very capable guides that step you through the process of doing something in real time with the console; they step you through, guide you, and help you learn the product more easily. One of the other things we've got coming as well is sizing guidance: once you get through running a couple of virtual machines, you want to start scaling and deploying across your cluster, and what does that look like?
N
Last time we talked, we had robust virtual machine performance benchmarks, running enterprise databases and testing out the performance of KubeVirt itself. Now we've been working very closely with the OCS team, making sure that, as you deploy large VMs with databases inside them, you get both the performance and the scale that you would expect on either bare metal or a virtualized platform.
N
One other thing I want to talk about is that we've done virtualization validation with Microsoft. This has actually been the case since we GA'd the product several releases ago. So anything that's supported from Windows 2012 R2 onwards, all the way up to Windows 10, is a supported, validated configuration, not only from Red Hat but from Microsoft as well.
N
We've started with offline virtual machine snapshots, and then you'll see some future work coming in terms of online snapshots and additional capabilities, in conjunction with the work that the OCS team is doing. On the network side, we've actually got two things; one fell off the list here. We've got a tech preview capability to allow you to run dual-stack IPv4 and IPv6 for the VMs inside your cluster, and then we've also been working very closely with our ecosystem partners.
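The offline snapshot capability mentioned above can be sketched with the snapshot CRD from upstream KubeVirt (the API version and names here are illustrative and may differ by release):

```yaml
# Illustrative offline snapshot of a stopped VM via the snapshot CRD.
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineSnapshot
metadata:
  name: my-vm-snapshot          # illustrative name
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: my-vm                 # illustrative VM to snapshot
```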
O
Thanks, Peter. Hi everyone, I'm Jamie Longmuir. Last November we released Service Mesh 2.0, and that will continue to be the latest supported version of Service Mesh on OpenShift 4.7. Service Mesh is based on Istio, and Service Mesh 2.0 is based on Istio 1.6.
O
Our team always performs a lot of extra testing and validation on new releases of Istio, and so we're usually a version or so behind the fast-moving upstream project. It's worth noting that, due to the substantial changes between 1.1 and 2.0, we strongly recommend that new users start with Service Mesh 2.0.
O
So while we don't have a new major release of Service Mesh, we do have a few updates to provide. Most notably, Service Mesh now supports OVN-Kubernetes networking, which went GA in OCP 4.6.
O
Finally, while Service Mesh is already supported on FIPS-enabled OpenShift clusters, we hope to declare that Service Mesh is officially FIPS validated soon. FIPS ensures that your applications are running using a well-defined, secure, and reliable set of cryptographic functions, as validated by the National Institute of Standards and Technology, or NIST.
O
One of the most notable differences between OpenShift Service Mesh and upstream Istio is our use of OpenSSL for encryption. This allows us to declare Service Mesh as FIPS validated once the applicable version of RHEL OpenSSL receives its validation from NIST — so we're just holding on for that. The next version of OpenShift Service Mesh will be 2.1, which should be the next major version, and it will focus on service mesh federation and more. Next slide.
O
Finally, the team is getting closer to declaring official support for Service Mesh 2.0 on our managed OpenShift offerings — OpenShift Dedicated, Azure Red Hat OpenShift, and the Red Hat OpenShift Service on AWS — where it will be supported as an unmanaged add-on for these platforms. That means customers will install and manage Service Mesh in the same manner that they would on OCP, with access to the same support channels. Thanks everyone — I'll now hand it over to Naina, who will talk about OpenShift Serverless functions.
P
Thank you, Jamie. As you all know, OpenShift Serverless packages and extends Knative, and now — with Eventing at GA — functions stepped in with the Serverless 1.11 release. We wanted to share what is new in Serverless 1.12. Similar to the previous releases, 1.12 updates the Serverless product to the matching upstream versions of Knative Serving, Eventing, the CLI, and so on, which is 0.18.x. In addition to various bug fixes, this release changes the minimum Kubernetes version requirement to 1.17.
P
Another thing to note is that the net-contour project that enables Contour is now in stable stage. The Dev Preview of functions runtimes now boasts the addition of Spring Boot as an available runtime, in addition to Quarkus, Node, and Go, as you can see at the top right corner — and there is news in regards to the Eventing developer experience through the Dev Console.
P
Now, in addition to using the kn CLI, users can use the Dev Console to create channels and subscriptions, and can also add triggers to brokers. This could really aid users in creating and visualizing event-driven applications — you can see a glimpse at the bottom right. We have also added the integration with Service Mesh 2.0; I guess that's missing from the slide, but I thought I'd mention it anyway. So that's about it for this release, and for further information, please refer to our release notes and FAQs.
G
Thank you. So for CI/CD and GitOps, in the offering on OpenShift there are really three pieces that address different levels of complexity of continuous delivery. There's OpenShift Builds, which focuses on building container images on the cluster — from source code, or a Dockerfile, or a binary, and other use cases. There's OpenShift Pipelines, which focuses on CI and more of the push-based side, and OpenShift GitOps, which we briefly talked about. Next slide, please. In 4.7 you have a new release of Pipelines, 1.3, that brings Tekton 0.19.
G
Some of the capabilities in OpenShift Pipelines 1.3: we have reduced the privileges that the pipelines have — both the controllers and also the pipeline itself that is executing — and in the next release we will actually reduce that even further, taking advantage of the user-namespace pod-launching work that is also coming in OpenShift. Also, for cluster-wide proxy configs in OpenShift 4.7: if a customer configures them on the cluster, OpenShift Pipelines consumes them and passes them to the pipelines.
G
They get passed to the task pods as well. HTTPS support for webhooks is another area, so we can have end-to-end integrations for the webhooks that are exposed for each pipeline — this is an RFE that comes from customers. Another area that you will see more of in the following versions, with more automation around it, is that we worked a lot to reduce the amount of resources — the number of pods — for event listeners, so that one event listener can be used across the entire cluster, or across multiple related namespaces.
G
Among the enhancements in the task library that is shipped with OpenShift Pipelines: the S2I and Buildah tasks now expose a result containing the image digest of the image that was just built in that task, which I can consume in the following tasks in a pipeline — for deployment, for example. So again, you don't have to rely on tags, which are inaccurate; you can use the specific digest of the image and deploy it within the pipeline. There are also numerous enhancements in the Dev Console, the UI experience of pipelines. There are metrics.
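As a rough sketch of the digest-result idea described above — note the task names, parameter names, and the IMAGE_DIGEST result name here are illustrative, not the exact shipped task definitions:

```yaml
# Hypothetical Tekton pipeline fragment: the deploy task pins the image by
# the digest the build task produced, instead of a mutable tag.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  tasks:
    - name: build
      taskRef:
        name: buildah            # assumed task exposing an IMAGE_DIGEST result
      params:
        - name: IMAGE
          value: image-registry.openshift-image-registry.svc:5000/demo/app
    - name: deploy
      runAfter: [build]
      taskRef:
        name: deploy-by-digest   # hypothetical deployment task
      params:
        - name: image
          # reference the previous task's result so the exact built image is deployed
          value: "image-registry.openshift-image-registry.svc:5000/demo/app@$(tasks.build.results.IMAGE_DIGEST)"
```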
G
There is a task runs tab, to make it really easy to navigate from a pipeline run to the task runs that are generated for that pipeline run, so you can dig deeper for debugging purposes or get more insight into what's happening in a particular execution of a pipeline. An events tab is added as well, which is hugely beneficial when you're debugging a pipeline: it's a collection, an aggregation, of all the events coming out of a specific pipeline run.
G
That includes all the task runs related to that pipeline run, and any of the pods related to those task runs. So if something goes wrong and you don't see it in the logs, there's a very high chance that you would find it in the events tab of that particular pipeline run. And there are some minor updates in the logs for pipelines as well.
G
There's a button to download the logs if you want to further analyze them. Next slide, please. On the tooling side — the Tekton CLI and the IDE plugins, that is, the VS Code extension and IntelliJ — we now have integration with Tekton Hub for searching and installing tasks. Tekton Hub is a central hub for Tekton for discovering and searching for tasks.
G
These are reusable tasks that you can use for authoring pipelines. It's a project that we launched a while back within the community, and it actually just recently came out of preview. Now we have integration across our tooling, so you don't have to lose your context and leave the CLI, or leave VS Code or IntelliJ, to search for and install tasks — you can install them right from there. In VS Code you can also install a task as a cluster task.
G
Support for creating PVCs is added when you're starting a pipeline in VS Code. It's a very common scenario that someone wants to start a pipeline that has a workspace, but you haven't already created a PVC that you can use with that pipeline run. So on the spot, on the fly, you can create a PVC and start your pipeline run. And a very useful capability added is around notifications as well: for the pipelines that you have opened within your workspace in VS Code, as you execute pipeline runs and they fail or succeed.
G
You would get a notification popping up at the bottom, notifying you about the result of the execution of that pipeline. Next slide, please.
G
GitHub — we have a partnership with GitHub, which was announced at the beginning of December, about a month ago, at GitHub Universe. You can read about the first release in the blog that explains further what is planned.
G
They are all verified, and there are more planned that will come along, and there are more integration plans with GitHub Runner and eventually GitHub Enterprise running on OpenShift. It's an area that is developing to bring OpenShift more into the GitHub ecosystem and the GitHub workflows that are gaining traction and are popular among GitHub users. Next slide.
Q
Hi folks, I'm David Harris, here to talk about developer tooling. We're always looking at new ways to help developers create and deploy applications on OpenShift, and accompanying the 4.7 release we've got some great new updates to our portfolio of developer tools and technology — starting off with this exciting new offering, the Developer Sandbox.
Q
This reduces friction for developers by instantly providing a reasonably sized private OpenShift environment for creating and deploying applications, all at zero cost. You can fully explore the default developer experience that comes out of the box with OpenShift and see how easy it is to take source code and have it running in a container. Developers can create new Quarkus-based applications, connect them with a database running in their private environment, and send links to their team members to provide feedback as they iterate through it.
Q
The sandbox also includes CodeReady Workspaces, so you can even do your inner-loop development all hosted on OpenShift itself. It is available now — to get started, you just need to head over to developers.redhat.com/developer-sandbox. Next slide, please. So, service binding: service binding enables dynamic discovery and configuration between microservices and the services that they depend on. For example, a Java application can easily connect to a Kafka instance without having to manually configure secrets, etc.
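To make the idea concrete, here is a minimal sketch of a service binding request. The API group, version, and the backing-service kind vary across Service Binding Operator releases, so treat every name below as an assumption for illustration:

```yaml
# Hypothetical ServiceBinding: asks the operator to project connection
# details from a backing service into the application's containers.
apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: java-app-kafka
spec:
  application:            # the workload to inject bindings into
    group: apps
    version: v1
    resource: deployments
    name: java-app
  services:               # the backing service(s) to bind against
    - group: kafka.strimzi.io   # assumed Kafka CRD group for this sketch
      version: v1beta1
      kind: Kafka
      name: my-kafka
```

The operator then exposes the service's coordinates and credentials to the application, so the Java code reads them from its environment rather than from hand-maintained secrets.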
Q
As of December, the Service Binding Operator is available on OperatorHub, and as we progress towards GA of both the specification and the operator — hopefully in the second quarter — we're enhancing it with important updates, such as protecting against privilege escalations. Next slide, please.
Q
CodeReady Workspaces is based on the Eclipse Che upstream project. It provides Kubernetes-native development environments, all hosted on OpenShift. It's great for fast onboarding onto projects and for collaboration, with everything you need in a consistent and reproducible environment. There are some exciting new updates either side of the 4.7 release.
Q
We're expanding our support of this capability to also include public and private repositories in Bitbucket. Then in 2.7 we're looking to improve what is often the first-time experience — where you haven't fully configured a workspace yet — by automatically recommending and including plugins, so that you get better IntelliSense out of the box. Next slide, please.
Q
We provide a whole host of client-side tooling for developers; here we're picking out a few examples. Firstly, odo: this is our CLI which abstracts Kubernetes and makes it simpler for developers to create and deploy new microservices.
Q
In version 2 of odo, we switched to the new devfile version 2 format for creating components. Since then, we've updated both of our IDE plugins — the OpenShift Connector for VS Code and IntelliJ — as well as CodeReady Studio, which is our Eclipse-based desktop IDE, to leverage this new capability within odo itself, along with an update to support Service Binding 0.3.
Q
The latest updates have been focused on improving the documentation, to help users get started and be more productive, and on expanding the support of this new devfile v2 specification. In the OpenShift Connector, we'll be looking to provide simpler access to the new Developer Sandbox that we mentioned earlier, like we already do with CodeReady Containers. And with CodeReady Studio, our upcoming release includes new support for versions of WildFly and EAP. Next slide, please.
Q
We have also added system trays on Windows and Mac to help you easily configure CRC, and there's a new installer for Mac to streamline the installation process and the delivery of a signed binary. We're also introducing new telemetry, which will help us better understand usage patterns and errors — hopefully to make CRC even better. And that's it from me; I'd like to pass it to Daniel.
L
Thank you. A quick update on Quay 3.4. Quay 3.4 was originally planned to be a maintenance-only release; it will be released at the beginning of February. This was mainly due to the fact that we were doing a Python 2 to Python 3 migration for the entire code base, and this required a lot of retesting. Still, the team managed to squeeze in a couple of interesting features, one of which is that you can now download Quay like pretty much any other Red Hat product.
L
There are official images going to be available for 3.4 — for Quay, Clair, and all the operators — so no more downloading from quay.io with a shared secret. Next slide. Speaking of operators, we have a completely rewritten Quay operator that manages Quay on OpenShift, and it gives customers the batteries-included experience that many users have been asking and waiting for. The new operator will deploy a completely managed Quay registry, including all the required databases.
L
Obviously, customers can opt out of all these managed services — with, obviously, the exception of Quay and Clair — and provide alternatives, for instance using the Red Hat OCS offering with the OCS operator for storage. The Quay operator is also now level 2, meaning it can update Quay itself to the next version, and it will also be able to migrate existing Quay deployments that used the previous quay-setup operator, used up until version 3.3. On to the next slide.
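The managed-with-opt-out model can be pictured roughly like this — a sketch of a QuayRegistry resource; field names follow the rewritten operator's general shape, but check the current CRD reference before relying on them:

```yaml
# Sketch: a QuayRegistry where most components stay operator-managed,
# but object storage is unmanaged so an existing backend (e.g. OCS)
# can be supplied instead.
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
spec:
  components:
    - kind: objectstorage
      managed: false   # opt out: bring your own storage
    - kind: clair
      managed: true    # image scanning stays operator-managed
```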
L
This is a Tech Preview feature we are introducing with Quay: in the first iteration it will work by launching a containerized virtual machine and executing your container build via a builder. For that you need a bare metal cluster as of today. This does not have to be the same cluster where Quay and the operator are running — it can be an external cluster — and it's obviously also covered by the infrastructure policy, meaning there's no subscription required. In the future, we want to alleviate the need for a bare metal cluster and use some of the awesome build tooling you heard about earlier, based on OpenShift Pipelines. And on the next slide: last but not least, Clair v4 finally graduates out of Tech Preview. This is the component in Quay that does all the image scanning, and it is now the default for Quay as of version 3.4. It now supports reporting many more vulnerabilities, including from programming-language package managers.
L
We are starting with Python here, but obviously we're going to expand with Golang, Node, and Java. And Quay with Clair v4 can now also be part of a disconnected installation, like Quay 3.4 as well, due to the use of offline media for the vulnerability databases. And that's it for Quay — over to Ali for news on the console.
R
Thanks, Daniel. I'm going to focus on the OpenShift console now, where we have four main themes: developing on Kubernetes, learning, extending, and managing. We've done a lot of great work in the console in 4.7 across these themes, and Ali and I are going to highlight some of those today. As Naina mentioned previously, we've added a lot of serverless features to the OpenShift console.
R
The console now has support for brokers and channels. In the first image on the left, we're showing the Add page, where users have the ability to create both brokers and channels; once created, these resources are displayed in Topology. Staying in line with our visually guided focus and keeping things simple — in addition to allowing creation of subscriptions and triggers from action menus — we enable drag and drop to quickly and easily initiate these actions from within Topology. We've also enhanced our event source creation flow, since event sources are CRs.
R
In the screenshot on the bottom right-hand side, you can see that users can view event sources along with other objects in the service catalog, or they can click directly into the event source type and drill into a more focused catalog specifically for event sources. When the Red Hat Integration Camel K operator is installed, our Camel K connectors will also be shown in that catalog. And finally, we've made some changes in the admin perspective around serverless: we now have a Serverless primary nav item containing two nav items.
R
The first one is focused on Serving resources and the second on Eventing; users can find event sources, brokers, triggers, channels, and subscriptions in that Eventing section. Don't forget, these items are still accessible in the admin perspective as well, since the Topology view is also available in admin.
R
When entering the developer catalog, you have the ability to view all content in a single catalog. We provide a number of sub-catalogs by default — the builder images, Helm charts, operator-backed services, and samples — but other sub-catalogs are available based on operator installs, such as event sources and VMs in 4.7, and in the future we'll see more coming. The catalog experience is more contextual when drilling into those sub-catalogs, exposing features and filters that are specific to that type — so, for example, the Helm chart catalog when you drill into that.
R
Finally, I'd like to share some cool new things in Topology on the next screen. First and foremost, we now have persistent storage for user preferences in the console. This is super exciting, because it sets the stage for us to persist layouts in Topology, which has been something that's been requested quite a bit — so we now have that in 4.7.
R
That means you can enter your Topology view for a specific project, move your components and nodes around, leave the project and come back — or leave the entire console and come back — and it will all be persisted. So that's great news there. One of my favorite features in 4.7 is the new quick add in Topology, which is shown in this image.
R
It allows developers to quickly search for an item from the catalog directly from Topology, from the icon in the top left-hand side of the quick bar, and you don't have to change context.
S
Thanks, Serena. So the OpenShift console now has quick starts, which were introduced in 4.6. Quick starts are essentially an in-console guided experience. New in 4.7, we've made this extensible. What does that mean? It means our partners and our customers can now go and create their own quick start experiences that are built right into the console. We've made this super simple, and we focused on a couple of things. First, we provide an excellent default sample right in the console.
S
So if you go into the console and go to create a ConsoleQuickStart CRD instance, there'll be a default example showing you everything you need to do to create it. On top of that, we've provided quick start guidelines: these tell you how to create your content and the formatting, and essentially give you an overview and a feel for how to create a really good quick start.
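A stripped-down sketch of what such a resource looks like — the in-console default sample shows the full field set, and the field names here should be checked against it:

```yaml
# Minimal illustrative ConsoleQuickStart: a partner/customer-authored
# guided tour that shows up in the console alongside the built-in ones.
apiVersion: console.openshift.io/v1
kind: ConsoleQuickStart
metadata:
  name: my-operator-tour
spec:
  displayName: Deploy the sample app
  durationMinutes: 5
  description: Walk through deploying our sample application.
  introduction: This quick start guides you through a first deployment.
  tasks:
    - title: Create the sample app
      description: |-
        Go to **+Add**, select the sample, and click **Create**.
```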
S
We feel like this is a very important piece to help educate users. So again, if you're an operator author and you want to come in and create a quick start to guide your users to deploy some kind of sample app, or to show some really cool functionality.
S
The quick start is for you. A couple of nice additions that we also made in 4.7 for quick starts: we've added the ability to do hints. Hints allow us to highlight certain sections of the UI, again just making it a little bit easier to guide your users through the quick start. And finally, in 4.7 we introduced a bunch of new quick starts.
S
The OCS — OpenShift Container Storage — team added quick starts, the virtualization team did, and you'll see some new ones for Helm charts, Quarkus, and Spring Boot. So a ton of great content coming in 4.7 — look out for that. Next slide, please. All right, it's finally happened, people: in 4.7 we're getting internationalization. What does that mean? It means we're going to have support for Chinese, Japanese, and Korean in 4.7. As you notice in the screenshot, you'll now see a language preference dropdown under the user menu. And what does that mean for internationalization? It means all the client-side code that is in the console has been translated, and we've also translated and localized all dates and times. It doesn't mean anything from the back end — coming back from the Kube API — is going to be translated as well; just all client-side code.
S
On top of that, in 4.7 we've also made improvements on accessibility — another key piece that's important for us. We've enhanced our ability to work with screen readers. Accessibility is super important: we want to make this product available to as many people as possible, and with these accessibility improvements you can do that.
S
I added a couple of links here to some interesting blogs about our process of doing it, so if you're curious, take a look. Next slide, please.
S
Also new in 4.7, we've improved our monitoring and graphing capabilities. We now have the ability to expand stacked graphs, and on top of that we've added enhanced tooltips: at any point in time you can select somewhere in the chart, and we will give you the values of all items in that chart. It is a small improvement but an important one — it's all about improving usability and understanding your environment.
S
All right, so finally today I want to talk about our OperatorHub, specifically catalog sources. This is the mechanism used to populate the operators into the OperatorHub. We've added additional UI to make this easier: now, if you go to the cluster admin section, to the OperatorHub configs, you'll have the ability to disable or enable any of the default catalog sources that come with the product.
S
We've also added a nice UX around additional catalog sources. Again, you can come in and edit configurations via the UI, not the YAML, and we also surface a lot of key statuses for you. And then finally, when you go to your catalog sources, you can actually select each one and see which operators that catalog source provides. Out of the box we provide you a bunch of different catalog sources, but you can yourself add additional operator catalog sources that will populate your OperatorHub.
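Under the hood, the enable/disable toggle the UI drives corresponds to the cluster-scoped OperatorHub config — roughly like the following sketch (the source name is one of the defaults; verify field names against your cluster's resource):

```yaml
# Sketch: disabling one default catalog source on the cluster-wide
# OperatorHub config; the new 4.7 UI edits this for you.
apiVersion: config.openshift.io/v1
kind: OperatorHub
metadata:
  name: cluster
spec:
  sources:
    - name: community-operators
      disabled: true
```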
S
And with that, I'll pass it off to you, Christian — thanks.
T
Hey everyone. There is not a lot to talk about for observability today, only a brief announcement for logging: with OpenShift 4.7, logging will continue on an independent release cycle to the OpenShift Container Platform. With that, we better align ourselves with other layered products such as Service Mesh, Serverless, and others.
T
What it literally means is that with OpenShift 4.7 we will continue under Logging 5.0. Don't be confused about the major version — it is really important to know that we don't really change anything: we don't remove any features, we don't add any features. 5.0 is just because we have been releasing under the OpenShift versions — 4.5, 4.6, 4.7, and so on — so it was just the next iteration. Basically, we go to 5.0, and then we will continue with minor versions, 5.1, 5.2. As I said already, there is no change to how you receive support.
T
There's no change like introducing a new SKU — it will still be available under OCP, and all the features that we currently support are still supported. This is really largely a repackaging exercise, but some of the notable changes, which I believe are benefits for customers, are that with 5.0 you will see more choice in how you want to consume logging. If you go to OperatorHub, there will be a few more channels, like stable and tech preview, and you can basically choose how and what to use.
T
We will also move to a more feature-based release cadence, to be a little bit more flexible to how and what customers need, and there will also be an individual support matrix, as you know it from Service Mesh and Serverless and Pipelines and so on. That means that with logging we will be able to support not only one OCP release, but potentially multiple OCP releases — for EUS. Same topic there: we will not change anything with every EUS release.
A
Hi, I'm going to cover this section on behalf of Katherine, who couldn't make it today. For OpenShift 4 there are two primary installation experiences: the full-stack automation, what is known as IPI, and the pre-existing infrastructure one — you are all probably familiar with them. This just shows you what is now supported with 4.7. Moving to the next slide, this is the support with 4.7: we introduced support in the installer for the AWS Commercial Cloud Services region, or C2S.
A
The installation processes are largely the same as deploying to the AWS GovCloud. The AWS C2S region must be manually configured in the install-config, since Red Hat CoreOS images aren't published to that region, and the Red Hat CoreOS AMIs must be manually uploaded by the user prior to deploying OpenShift; the resulting AMI ID must then be specified in the install-config. The process for importing these images will be included in the product documentation.
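The shape of that install-config fragment would be roughly the following — region name and AMI ID are placeholder values for illustration:

```yaml
# install-config.yaml fragment (illustrative): point the installer at the
# manually imported RHCOS AMI in the target C2S region.
platform:
  aws:
    region: us-iso-east-1            # example C2S region name
    amiID: ami-0123456789abcdef0     # ID of the RHCOS image you imported
```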
A
For user-defined disk encryption keys on GCP — this is new in 4.7, for customers with explicit compliance and security guidelines when deploying to GCP — we have added support for a user-managed KMS key for encrypting data on disks. The KMS key can be configured in the install-config using the optional encryptionKey object and associated fields. The KMS key must be created, along with assigning the proper permissions to the service account, prior to deploying OpenShift.
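As a sketch of where that lands in install-config (all values illustrative; check the documented field layout for your exact release):

```yaml
# install-config.yaml fragment (illustrative): user-managed KMS key for
# control-plane OS disk encryption on GCP. The key, ring, and the
# service-account permissions must exist before installation.
controlPlane:
  platform:
    gcp:
      osDisk:
        encryptionKey:
          kmsKey:
            name: openshift-key
            keyRing: openshift-ring
            location: global
            projectID: my-project
```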
U
Hi Tushar, thank you so much. I'm Maria. Customers want OpenShift to be able to use temporary, short-lived credentials during and post installation. We've started this work with the AWS provider, since STS enables an authentication flow that allows a client to assume a role, resulting in short-lived credentials.
U
AWS extended their SDK to offer web identity token auth. It allows the automation of the process of requesting and refreshing credentials using an OpenID Connect IAM identity provider. OpenShift can sign service account tokens trusted by AWS IAM, and tokens can be projected into a pod so that the pod can use them for authentication.
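The token-projection half of that flow is standard Kubernetes; a minimal sketch, with the role ARN and image as placeholder values:

```yaml
# Sketch: project a signed service account token into a pod so the AWS SDK
# can exchange it for short-lived STS credentials via AssumeRoleWithWebIdentity.
apiVersion: v1
kind: Pod
metadata:
  name: sts-demo
spec:
  serviceAccountName: app-sa
  containers:
    - name: app
      image: example.com/app:latest                      # illustrative image
      env:
        - name: AWS_ROLE_ARN
          value: arn:aws:iam::123456789012:role/app-role # example role to assume
        - name: AWS_WEB_IDENTITY_TOKEN_FILE
          value: /var/run/secrets/tokens/token
      volumeMounts:
        - name: aws-token
          mountPath: /var/run/secrets/tokens
  volumes:
    - name: aws-token
      projected:
        sources:
          - serviceAccountToken:
              audience: sts.amazonaws.com   # token audience AWS IAM trusts
              path: token
```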
U
We plan on releasing an initial implementation of this shortly following the release of OpenShift 4.7, and customers that want to try it will get a support exception. This will allow us to review and inform the implementation — which is currently manual — as well as the plans for upgrading to OpenShift 4.8. In 4.8 we plan to complete the work to GA this feature, meaning all the automation for pre-installation steps and the upgrade path.
V
Hey, this is Anita. I will be covering OpenShift on OpenStack, taking over from Ramon, and we'll keep it short since we are running out of time. OpenShift on OpenStack is now supported with UPI and IPI installers — user-provisioned infrastructure and installer-provisioned infrastructure — and we're adding more features into the IPI support and building flexible UPI Ansible playbooks that allow customization with 4.7.
V
We can run on OpenStack 13 and OpenStack 16.1. We also have OpenStack bare metal integration with Ironic — that was backported into the z-stream — and we are also looking at autoscaling options from zero nodes. We didn't have zero-node support before; now we can scale to and from zero nodes. We have additional telco support, starting with SR-IOV on secondary interfaces, and more is coming with OpenShift 4.8.
V
The telco coverage will go up, with support for IPI with SR-IOV, and we're also looking at OVS-DPDK and OVS hardware offload with OpenShift 4.9. For 4.7 we are also building the Cinder CSI support, and with 4.8 we will add topology awareness with IPI. For 4.7, again, we have bring-your-own load balancer, DNS, and network, with machine sets for custom networks, and Kuryr support for IPv6 and dual stack — no IPI installers yet, but that's coming next.
N
Thanks, Anita. So let's talk a little bit about OpenShift on RHV. We actually delivered a lot of functionality in OpenShift 4.6, if you remember: we did disconnected, CSI storage, and user-provisioned infrastructure.
N
We deploy high-performance VMs for the control plane nodes and the worker nodes, and then automatic guest agent installation, which improves the overall ability to collect debug data and just manage the nodes in general. One important thing I want to point out: from OpenShift 4.6 going forward, the tested, supported configuration will be OpenShift on RHV 4.4.
N
That's a very important thing. We know we have a lot of customers that are running on RHV 4.3 today; that will continue to be supported in an EUS mode, but if you want to run OpenShift 4.6 or later, you're going to need to upgrade your virtual infrastructure to RHV 4.4. Let's talk about some of the control plane improvements.
F
Some of the metrics include API request rates and request durations, and there are several metrics for understanding priority and fairness and how it's functioning, and so on. These plots can help characterize and understand API traffic in the cluster, and accessing this information is as simple as navigating in the GUI to the Monitoring section, then Dashboards, and selecting API Performance from the dropdown selector. Next slide, please.
W
Thanks, Mark. On the cluster infrastructure side, we're continuing to move forward with the Machine API. We've got lots that we're working on, but in this release we focused on three high-priority RFEs, and they all have a security theme, because we know how important security is these days. The first one I've deliberately put at the top of the list — it's a subtle but important change.
W
When we architected the Machine API, we assumed that every cloud provider API would be reachable from inside the cloud provider itself, and we used to ignore the global proxy setting. But for those of you that have customers with inspection proxies who want all traffic to go through them, for security or other reasons — we now actually look at that setting, and we obey it.
W
That's definitely helped us with a few customers where this was a problem and they couldn't install before; we've made that change this release. We've also added support for the AWS dedicated tenancy setting in the Machine API. So for those of you with federal customers, or federal regulations you need to build to, where you've got to run on customer-dedicated hardware in an AWS environment — that is now no longer a blocker to you using the Machine API. And then finally, Google Cloud disk encryption sets.
W
We've had a few customers that have been asking us to use their own keys when we do the encryption, rather than the ones provided by Google, so we've started off by bringing that into this release as well, hopefully to get people over that bump. And I think next we're going to move on to Mark, who's going to tell us some marvelous facts about Red Hat CoreOS.
X
Hey everybody, and thanks Duncan. Mark Russell here to tell you what's new in RHCOS 4.7. In the interest of time, I'll skip to the real headline features; they're all related to disk provisioning. So, starting with new clusters, you can now deploy CoreOS nodes with a mirrored boot device, so the node could lose either drive from that mirror and still be able to function and reboot successfully. For secondary file systems, we're also supporting both mirroring and RAID 5.
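At install time, boot mirroring is expressed in the node's Butane/Ignition config; this is a hedged sketch only, since the Butane variant/version shown and the device names are assumptions, and the exact schema available in 4.7 may differ:

```yaml
variant: openshift
version: 4.9.0
metadata:
  name: worker-mirrored-boot
  labels:
    machineconfiguration.openshift.io/role: worker
boot_device:
  mirror:
    # RHCOS installs onto both drives; the node can boot from either
    devices:
      - /dev/sda
      - /dev/sdb
```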
X
Last but not least, kdump will be tech preview in 4.7. There will be published documentation on how to manually enable capture of kernel core dumps.
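Manual enablement is expected to look roughly like the following MachineConfig, which reserves crash-kernel memory and enables the kdump service; treat this as an assumption-laden sketch, not the published procedure:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-kdump
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  kernelArguments:
    # Reserve memory for the crash-capture kernel
    - crashkernel=256M
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - name: kdump.service
          enabled: true
```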
In any case, we strongly recommend that you open a case with Red Hat support when you're using kdump. I'll pass it back to Mark Curry for networking and routing.
F
Great, thanks Mark. In this one we're going to talk a little bit about configurable application domain support. Customers have asked us for the ability to specify one of their own domain names as the default domain name for application Routes and Ingresses on the cluster, and with that, they'd use their own CA to sign the TLS certificates. So we enabled this capability via an appsDomain specification; there's a brief how-to on the right-hand side here that I created for you. It's pretty straightforward.
F
Once you've set the appsDomain to the new default, you no longer have to specify that domain when you run the oc expose command. And as this is a domain that is owned by the customer, you're going to need to configure DNS for its proper resolution; there's a suggestion on the fourth bullet to do just that. Next slide, please.
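The how-to mentioned above boils down to patching the cluster Ingress config; a minimal sketch, assuming a customer-owned domain of apps.example.com:

```yaml
apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  name: cluster
spec:
  # Routes created after this change default to the
  # customer-owned domain instead of the installer domain
  appsDomain: apps.example.com
```

With that in place, an `oc expose service my-svc` would mint a route under apps.example.com without the domain being spelled out.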
F
On this one we cover several SDN feature developments, the first being SR-IOV support for the Intel N3000. I think that one is pretty self-explanatory: the idea is that the N3000 is now available for accelerated packet processing. The next is OVN egress firewall filtering with DNS names. Egress firewall is an improved version of upstream's network policy egress, which can be used to limit the external hosts to which pods can communicate.
F
In 4.7, we brought the OVN egress firewall to feature parity with our current SDN implementation by adding the ability to use DNS names instead of just CIDRs.
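An egress firewall rule keyed on a DNS name looks roughly like this; the namespace and hostname below are placeholders:

```yaml
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
  namespace: my-app
spec:
  egress:
  # Allow pods in this namespace to reach one external host by name
  - type: Allow
    to:
      dnsName: updates.example.com
  # Deny all other traffic leaving the cluster
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0
```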
Fast data path OpenStack support: in 4.7, to address telco use cases, we've enabled fast data path when OpenShift is running on OpenStack. This covers fast data path technologies like SR-IOV, OVS-DPDK, and OVS TC flower offload. The SR-IOV Operator was originally designed only to work with bare metal.
F
And as you know, that meant it directly controlled the NIC: it could own the NIC's physical functions and manage its virtual functions. But the problem running OpenShift on OpenStack is that the PF is owned by the underlying hypervisor and not present in the VM. So we've extended the capability of the SR-IOV Operator to work with the underlying hypervisor and with the VFs that are created. Next slide, please.
F
Also in 4.7, there will be full support for a new library named app-netutil. This library can be used by an application to assist with gathering the network information associated with a pod. In particular, this targets, and will be very useful for, application programmers that are using SR-IOV VFs in DPDK mode, a common scenario for anyone needing high-performance traffic throughput. There are three API methods implemented in this library.
F
The functionality of each should be self-explanatory from its name. The third one has something worth mentioning, though: the hugepages call will, for the moment, require the extra step of enabling the alpha feature gate for Downward API hugepages (set it to true) in Kubernetes 1.20 or greater. That extra step will be resolved and unnecessary in the 4.8 release.
F
This slide serves both as an announcement of a new API in 4.7 and as a sort of PSA for a change coming in 4.8 that some cluster administrators and app developers may need to prepare for. All the details are in this slide, but briefly: in 4.8, the version of HAProxy will necessarily down-case HTTP header names, which is permitted by the HTTP protocol standard.
F
There's an example of this in the first bullet of the slide. When those header names are down-cased, some legacy applications might be sensitive to that case conversion and could break. So to mitigate the issue, we created a new API that allows cluster admins and app developers to accommodate those older legacy applications until such time as they can be fixed. Essentially, the admin can specify rules that transform the header names in any HTTP/1 request.
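As I understand it, the new API hangs off the IngressController; a hedged sketch that preserves the legacy capitalization of the Host header for HTTP/1 requests:

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  httpHeaders:
    # Re-capitalize these header names on HTTP/1 requests so legacy
    # apps that expect "Host" rather than "host" keep working
    headerNameCaseAdjustments:
    - Host
```

Individual routes then opt in with a haproxy.router.openshift.io/h1-adjust-case=true annotation; treat both names as best-effort recollections of the 4.7 API rather than authoritative.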
W
Next slide. Thanks Mark. On to storage now; just a couple of things worth highlighting here. Upstream, we are now seeing the deprecation of some of the in-tree drivers, so we need to be moving on and getting our CSI drivers out there really quickly. You see here the Cinder CSI driver and the Google Persistent Disk driver joining the list that we already have, and you'll see a lot more movement on this going forward. And the other big thing:
W
We went through a time when everyone was asking for snapshots. Well, snapshots have gone GA upstream, so we get that now in OpenShift, which is great news for all the vendors out there that are investing in CSI drivers and operators to run on OpenShift and use them that way. Next slide, please.
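With snapshots GA, taking one is a small custom resource against the v1 snapshot API; a minimal sketch, where the class and PVC names are placeholders:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: db-snapshot
spec:
  # Provided by the storage vendor's CSI driver
  volumeSnapshotClassName: csi-example-snapclass
  source:
    # The PVC whose contents are being snapshotted
    persistentVolumeClaimName: db-data
```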
On the OpenShift Container Storage side, again, there's a lot going on, a lot on the security side like the Vault key management system integration, but I think we should really move on and let Robert tell you some interesting facts.
Y
Thanks Duncan. The first feature that we've added in 4.7 is a virtual routing and forwarding (VRF) CNI plug-in. This is a meta plug-in that you typically use with another CNI like macvlan or SR-IOV.
Y
Therefore, assuring you that your network function is ready to be deployed onto that cluster. In 4.7, we've added the operating system latency measurement test, oslat to be specific, to this suite of tests. Next slide, please.
H
Hi. So, as we talked about earlier, the OpenShift Compliance Operator gives you the ability to automate audit checks for the technical controls associated with regulatory requirements. Not only does the Compliance Operator allow you to automate that work, you can also automate remediation with it. In parallel, we have been working on documentation to help you understand how OpenShift can meet the technical controls in various regulatory frameworks.
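As a hedged sketch of what automating that work looks like in practice, a ScanSettingBinding ties a compliance profile to the operator's scan schedule; the binding and profile names below are illustrative:

```yaml
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: moderate-compliance
  namespace: openshift-compliance
profiles:
  # Which set of technical controls to audit the cluster against
  - name: ocp4-moderate
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  # When and how the scans run; "default" ships with the operator
  name: default
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
```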
H
We also have an OpenShift 4 security guide, and as I mentioned at the beginning of the call, we just published the OpenShift 4 hardening guide. You will need to ask your Red Hat account team to help you get that. We expect it to be replaced by the CIS OpenShift benchmark in late January or early February, and that's the reason we've chosen not to make it publicly available at this time. We also wanted to mention, for those of you who think about certifications, that our hosted team has been working.
Z
Thanks Kristen. So this feature is about the ability to log out of and destroy all non-active sessions and tokens. First of all, the problem: in OpenShift today, users have the ability to create multiple tokens by running oc login repeatedly. So, for instance, you can log in with the browser, and you can log in with the command shell; in fact, you can open a couple of command shells and just type oc login.
Z
As a consequence, multiple tokens get created. But when you go to one of those terminals and type logout, only the latest token is logged out, and this is important because an attacker could reuse a token from another session, like from a browser or from a terminal where you have not done an oc logout, and use those session credentials or session IDs to basically get access to your cluster. As an analogy, think of Facebook.
Z
If you log into Facebook from your browser, from your mobile device, and from your iPad, but you log out of Facebook only from your iPad, then somebody who gets access to your phone or your browser can pretty much log in. This is exactly the same situation, where we want to give OpenShift users the ability to log out all active sessions. So you would simply open a command prompt, run oc get useroauthaccesstokens, and get a list of all the OAuth access tokens.
Z
And then you can also find out when each token was created, when it expires, and what the token inactivity timeout is on those tokens. Once you've figured all of that out, you can select each token that you want to delete and run oc delete useroauthaccesstoken. Back to the Facebook analogy: it would be something like Facebook letting you list all your active sessions, and for each of those sessions you can go and log out. This is very similar to that. Next slide.
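The token cleanup described above, as a quick command-line sketch; the token name shown is a placeholder:

```shell
# List all of your OAuth access tokens, one per login session
oc get useroauthaccesstokens

# Inspect one token's creation time, expiry, and inactivity timeout
oc describe useroauthaccesstoken sha256~abc123

# Revoke that token, logging only that session out
oc delete useroauthaccesstoken sha256~abc123
```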
Z
Next slide, please. So, Windows: we went generally available with the Windows Machine Config Operator sometime in December of 2020, available on two platforms, AWS and Azure. And with 4.7, we're proud to announce we will be going generally available with vSphere IPI. Next slide, please. What this will let us do is unlock a lot of customers who are on vSphere using IPI, so they can use the Windows Machine Config Operator to bootstrap Windows
Z
nodes that are running on vSphere. A slight point to note is that bring-your-own-host is still work in progress; we are hoping to get that maybe in Q1 or Q2. So until bring-your-own-host comes, we will not be able to support things like vSphere UPI and other platforms like bare metal and Red Hat Virtualization, and so on and so forth. But we feel this is a good start.
W
Thanks Anand. And then, last and by no means least, we come to the multi-architecture updates. There's actually a lot going on this release, but let me just stick to a couple of things. The first big one, up at the front there, is that we're making KVM available for the IBM Z platform. Before, you had to use the z/VM environment, and this is really useful to customers who are already familiar with KVM and want to use it.
W
There are also some price considerations that factor in there. The other big thing for this release is multipath support. With IBM Z and Power, the big applications that customers run are pretty important, and they want to give them a better HA experience, so now we're bringing multipath onto those Power and Z platforms as well. And so let me end by handing you back to our host to wrap up.
A
Thank you all. Thank you to the OpenShift product management team, and thank you to the audience for listening to us. As you saw, we covered a lot of ground here. We are definitely over time, so I apologize for that, but as you saw, the PMs definitely put a lot of heart into it, and they tried to get to the right level of detail so that you have the information out there. As I said earlier, this is recorded, and there are lots of Q&A questions, so bring them on.
A
So that is all good. If we have any unanswered questions, we'll address them offline. With that, without further ado, have a nice day, and thank you all.