From YouTube: [What's New] OpenShift 4.9 [Oct-2021]
Description
Technical Product Manager Overview of Red Hat OpenShift 4.9
B
All right, hi everyone — welcome to today's session on OpenShift 4.9. Happy to have you here. I've got the entire PM team with me, so we're going to shoot through a bunch of cool stuff that's in OpenShift 4.9, and you're going to hear it directly from the mouths of our entire PM team. A reminder of what we're talking about today: OpenShift Platform Plus. This is the holistic ecosystem that we've built around OpenShift.
B
That includes all the bits you know and love in the OpenShift core, as well as Advanced Cluster Management, Advanced Cluster Security, Red Hat Quay, some of our cloud services, and other related offerings — everything you need to be successful in a multi-cloud, hybrid-cloud world. We're going to talk about everything you see on this screen and a little bit more. As always, ask questions in the chat: if you are inside Red Hat, you've got your own chat; if you are outside Red Hat, we'd love to engage with you, get your questions answered, and follow up on anything later as well.

All right, let's dig into it — what is in OpenShift 4.9? Here are our three themes: extended installer flexibility, security, and, of course, continued work on our next-gen developer tools.
B
The first thing under installer flexibility is that single node OpenShift is GA. Folks have been asking for this for a while — it lets you extend OpenShift out to more constrained footprints: thousands of clusters at retail locations, on all kinds of vehicles, and things like that. We're super excited about it, so go ahead and try it out when 4.9 is out.
B
The next one, also long requested, is RHEL 8 workers. This covers both your compute workers and your infrastructure nodes. The control plane remains on RHEL CoreOS, which is itself based on RHEL 8, so you've got RHEL 8 across the board. We'll cover that more in a second.
B
The new platform we're excited about here is Azure Stack Hub — another much-requested feature, delivered as a UPI-based installation. Something we announced a little earlier but will cover again is our bring-your-own Windows node support: if you've got out-of-band management for how you boot and manage your Windows machines, you can bring those machines to your OpenShift cluster.
B
There's
a
number
of
changes
related
to
the
apis
and
1.22,
which
we'll
cover
that
are
important
to
understand,
as
well,
in
the
security
bucket,
some
enhancements
to
our
tls
related
to
etcd,
so
shorter
expiry
and
some
rotation
tools
that
kind
of
bring
it
under
management.
Just
like.
We
have
rotation
for
the
rest
of
the
core
control
plane
and
some
customizable
audit
policy
for
the
cube
audit
log.
B
The next one is expanding our mutual TLS. We've got a big service mesh user base, and this extends mutual TLS into Ingress and Serverless, so you get mTLS between all of those components. Last on the security front, we're carrying our FIPS story through more components of the platform: FIPS-validated cryptography for ACM, OpenShift Virtualization, and sandboxed containers.
B
So if you're taking advantage of any of that in FIPS or otherwise controlled environments, you're good to go. Last, in the developer-tools bucket, we've got automatic RHEL entitlements: the cluster itself gets entitled automatically, and you can then use that in your builds and other code artifacts, which is really great — it removes a little friction that used to exist.
B
I want to cover what's in Kubernetes 1.22. The major theme is API deprecation: a number of Kube APIs have been marked deprecated for a while, and they have now actually been removed. This affects a ton of popular APIs that have essentially moved from beta to stable, and the beta versions are finally going away. OpenShift has a bunch of checks to help you get over this bump, and we'll talk about that more in a second.

CSI for Windows nodes is now GA, which obviously makes using storage on Windows a lot easier — we'll cover that, along with bring-your-own Windows nodes and some other Windows items, shortly. Last, I've put a bunch of things under the secure-by-default category, the main one being a new admission controller that is the replacement for pod security policies. It phases in the new functionality so that pod security policies can be deprecated.
B
The actual PodSecurityPolicy object is slated for removal in 1.25, and just like with the other removals we're discussing, OpenShift will help you through it. Note that the CIS benchmarks still call for using pod security policies, so that guidance will need to get updated, and you might see some friction there from some of your customers. As always, security context constraints in OpenShift keep working — we had your back before pod security policies even existed, they're still there and still good to go, so no issues on that front. We also pick up CRI-O 1.22 here; CRI-O is versioned the same as Kube, so that is in OpenShift 4.9 as well. All right, here's a quick look at the roadmap.
B
Obviously
we're
going
to
hear
about
a
lot
of
these
things
in
the
first
bucket,
I'm
not
going
to
go
over
everything
else,
but
just
tons
of
work
going
into
both
developer
tools,
application
platform,
our
hosted
offerings
our
cloud
services
that
build
on
top
of
openshift,
so
really
cool
stuff.
Going
on
here
we're
going
to
pick
up
different
regions
for
different
cloud
providers
where
we
have
a
upi
install.
We
might
pick
up
an
ipi
install
to
have
kind
of
that
that
full
range
of
flexibility
openshift
on
arm
is
coming.
B
We've
got
enhanced
windows
support
better
stuff
coming
for
serverless
get
ups
pipelines.
So
a
bunch
of
really
great
stuff
here
pause
your
video.
If
you
want
to
go
check
it
out
and
the
slides
will
get
published
as
well
all
right
with
that.
Let's
talk
about
four
nine
spotlight
features
and
I'm
gonna
hand
it
over
to
tony.
C
Thank you, Rob. The first one here is a new upgrade safeguard in response to the Kube API removals. As mentioned earlier, the OCP 4.9 release comes with Kubernetes 1.22, which removes a set of deprecated v1beta1 APIs. What this means for operators is that any operator using the beta version of the affected APIs will need to be updated to use the stable version. So far, partners and their products have been audited and notified of the updates they require, to prevent service breakage.
C
Another potential source of service breakage from the Kube API removals is components that use the removed APIs externally, which we cannot detect. So there is a new cluster upgrade behavior: administrators first need to evaluate their cluster, migrate the affected components to the appropriate new API versions, and then provide a manual acknowledgement before the cluster can be safely upgraded.
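To make that manual acknowledgement concrete, here is a minimal Python sketch. It assumes the documented admin-ack gate for this upgrade — a ConfigMap named admin-acks in the openshift-config namespace with the key ack-4.8-kube-1.22-api-removals-in-4.9 — so verify the exact key against the 4.9 release notes before using it.

```python
# Hedged sketch: acknowledge the Kubernetes 1.22 API removals so a 4.8 cluster
# can continue its upgrade to 4.9. ConfigMap name, namespace, and key follow
# the documented admin-ack gate -- treat them as assumptions and verify first.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

core.patch_namespaced_config_map(
    name="admin-acks",
    namespace="openshift-config",
    body={"data": {"ack-4.8-kube-1.22-api-removals-in-4.9": "true"}},
)
```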
D
Hi everyone. Single node OpenShift is, as the name implies, OpenShift on a single node, with the goal of providing a consistent application platform from the data center to the edge. It extends the OpenShift edge deployment offering — compact three-node clusters and remote worker nodes — to include single node OpenShift as well. It is aimed at production edge use cases: it has no workload or runtime dependency on a centralized control plane, and it is architected to address edge use cases.
D
In
addition
to
that,
we
are
also
offering
a
deployment
via
reddit
advanced
cluster
management,
with
the
mechanism
of
zero
touch,
provisioning
and
centralized
infrastructure
management
which
are
going
to
be
covered
in
later
on,
as
well
as
with
assisted
installer.
The
sas
offering
for
openshift
deployment
olm
is
available
to
install
operators
on
top
of
snl
minimal
requirements.
To
for
this
type
of
deployment,
is
eight
cores
and
32
gigabytes
of
mark
of
ram
with
a
2
core
and
60
gigabytes
platform
footprint
for
vanilla,
openshift,
that's
the
what's
the
platform
consume.
D
There
is
an
attach,
a
link
to
to
kind
of
show
the
deployment
model
yeah,
and
with
that
I
would,
I
would
move
it
to
mark.
I
believe
thank
you.
E
Great, thanks. While OpenShift installations on public clouds take advantage of native load balancing services, there hasn't been a native, out-of-the-box load balancer for OpenShift on-premises bare metal deployments. In 4.9 we enhance bare metal deployments by providing full support for load balancing on bare metal infrastructure clusters using MetalLB in layer 2 mode. Layer 2 mode is a first step; the next step is to support the currently in-progress upstream effort that introduces a BGP/FRR mode, targeting 4.10 for full support. The operation of MetalLB basically involves two components: a cluster-wide controller that handles IP assignments, and the speaker, which runs as a DaemonSet and speaks the protocol of your choice to make the services reachable. There are a couple of network mode types involved; again, layer 2 is what's supported in 4.9.
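As a minimal illustration of what MetalLB changes for application teams: once the operator is installed and the admin has configured an address pool, an ordinary Service of type LoadBalancer gets an external IP from that pool. The sketch below uses the standard kubernetes Python client; the namespace and labels are placeholders.

```python
# Minimal sketch: request a load-balanced Service on bare metal. With MetalLB
# in layer 2 mode, the speaker answers ARP for the assigned external IP.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="demo-lb", namespace="demo"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",                      # MetalLB assigns the external IP
        selector={"app": "demo"},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
core.create_namespaced_service(namespace="demo", body=svc)
```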
F
OpenShift Pipelines: with 4.9 we will have the OpenShift Pipelines 1.6 release, and with this release the triggers subsystem — responsible for the webhook functionality, where events coming from Git providers trigger execution of a pipeline — reaches GA. It had been tech preview through the previous releases. Auto-pruning configuration is enhanced to allow configuration per namespace; in previous releases it was only a global configuration.
F
Now,
every
team
or
every
group
can
go
and
customize
this
for
pair
their
own
needs
and
define
how
many
of
the
python
runs,
for
example,
should
be
kept.
Maybe
the
last
10
or
the
last
five
days
and
the
rest
should
be
automatically
cleaned
up
to
free
up
space,
both
in
http
and
on
the
cluster.
F
From
the
storage
perspective,
python's
code
is
a
feature
that
was
introduced
in
the
previous
release
and
that
allows
to
follow
the
the
git
ops
model
for
your
pipelines
themselves.
Instead
of
creating
the
pipeline
on
the
cluster,
you
put
the
pipeline
inside
a
git
repo
and
add
the
repo
to
the
cluster.
F
Every
time
an
event
comes,
the
pipeline
definition
is
taken
from
the
git
repo
and
executes
on
the
cluster,
so
this
was
introduced
in
the
last
is,
but
we
are
continuing
iteratively,
adding
more
capabilities
to
it
and
verifying
with
customers
of
how
it
works
for
them.
In
this
release,
private
git
repos
are
supported,
and
initially
we
supported
github
and
guitar
enterprise.
This
rule
is
hosted,
bitbucket
is
added,
and
we
know
that
a
lot
of
our
customers
are
using
bitbucket
server
self
self
hosted.
F
More
customization
are
added
in
this
release
so
that
customers
can
control
how
much
metrics,
for
example,
the
the
pipelines
to
generate
for
customers
that
consume
a
lot
of
pipelines.
They
have
a
lot
of
execution.
F
This
translates
to
a
large
volume
under
prometheus,
so
they
want
to
perhaps
reduce
some
of
those
metrics
they're,
also
giving
ways
for
customers
to
customize
how
tickton
works
on
on
openshift
in
general,
the
default
configs
of
takedown
in
a
way
that
is
maintain
across
upgrades
and
and
the
operator
is
aware
of
those
customizations
that
customers
are
doing
on
the
dev
console
side.
The
pipeline
builder,
the
visual
tool
that
allows
users
help
users
to
to
compose
pipelines
from
tasks.
F
There
are
a
lot
of
improvements.
Actually
there,
the
most
prominent
one
is
that
integration
with
takedown
hobbies
there.
So
you
can
search
for
tasks
and
it
doesn't
really
search
on
the
cluster,
but
also
search
on
community
tasks
that
are
coming
upstream.
Gives
you
enough
description
for
for
you
to
choose
and
add
it
to
the
canvas
that
that
is
in
front
of
you
for
designing
your
pipeline.
F
We
are
also
working
with
the
dev
console
team
to
bring
more
and
more
of
the
pipeline
as
code
views
into
the
dev
console.
What
you
see
on
the
screen
to
the
right
is
the
view
of
the
python
runs
that
are
attached
to
a
particular
git
repo
and
matching
github
checks
and
the
pr
status
or
comment
status.
Integration
with
that
so
expect
more
of
that
on
in
the
coming
releases
as
well.
Next
slide,
please.
F
Openshift
git
ops
also
has
a
new
release
on
four
nine.
That's
get
us
1.3
in
this
release
and
we
are
adding
support
for
user
groups
and
cube
admin
user
to
login
into
argo
cd
using
openshift
credentials
and
using
operation
currently
with
argo
cd
was
already
supported,
but
it
didn't
support,
cube
admin,
user
and
it
didn't
seek
the
user
group,
so
those
kind
of
capabilities
are
coming
as
well.
F
The
acm
team
has
done
a
great
job
and
integrating
more
and
more
of
argo
cd
into
the
platform
we're
working
for
a
long
time
with
them
and
in
this
release,
application
set
featuring
feature
matching
racking
that
allows
argo
cd
to
look
up
clusters
in
acm
and
generate
applications
for
them
is
is
available.
F
So
you,
you
have
a
quite
dynamic
environment
in
argo
city,
in
this
case
that
for
every
cluster
that
is
added
to
acm
or
imported
to
acm
for
management,
then
it
automatically
gets
added
to
the
list
of
classes
that
our
cd
is
managing
an
application
is
created
for
it
to
sync
to
to
to
to
a
git
repository
customize
for
support
is
coming
in
in
this
release
of
openshift
get
ups,
and
we
also
have
had
multiple
requests
on
supporting
external
certificate
managers
for
argo
cd
itself,
the
tls
configuration,
so
that
is
also
added
if
customer
wants
to
use,
search
manager
or
some
other
certificate
manager
and
router
charting
also
is
introduced.
F
We're
doing
the
sim
regarding
argo,
cd,
the
dev
console
incrementally
adding
more
capabilities
and
in
this
release
of
four
nine
you'll,
see
more
details
about
application
environments
where
an
application
is
deployed
through
argo
cd
and
there
are
status
the
metrics
about
how
how
successful
how
often
the
frequency
of
deployment,
the
failure
ratio
and
the
health
of
those
deployments
across
these
next
slide.
Please.
H
Hi. OpenShift Serverless, which is based on upstream Knative, will be updated in 4.9 to upstream Knative 0.24. It has a security focus and adds encryption of traffic in flight. Since this is an important feature, we will be backporting it to previous OpenShift Serverless releases. We know how important custom domain mapping is for brand presence, so we have extended that experience from being available only through the CLI to the dev console as well — Serena will cover it later in the presentation. New monitoring dashboards have been added for visualizing your serverless apps. We also added support for emptyDir, so serverless apps can use it for sharing files between a sidecar and the main application container.
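For the custom domain mapping mentioned above, the underlying resource is Knative's DomainMapping. A hedged sketch follows; the API version (v1alpha1 here), namespace, service name, and domain are assumptions to adapt to the Serverless release you are running.

```python
# Hedged sketch: map a custom domain onto an existing Knative service.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

domain_mapping = {
    "apiVersion": "serving.knative.dev/v1alpha1",
    "kind": "DomainMapping",
    "metadata": {"name": "shop.example.com", "namespace": "demo"},
    "spec": {"ref": {                      # points at the Knative service to expose
        "apiVersion": "serving.knative.dev/v1",
        "kind": "Service",
        "name": "shop-frontend",
    }},
}
api.create_namespaced_custom_object(
    group="serving.knative.dev", version="v1alpha1",
    namespace="demo", plural="domainmappings", body=domain_mapping)
```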
H
We keep enhancing our tech preview of Functions. This time around, the new runtimes we've added are TypeScript and Rust, in addition to Node, Quarkus, Go, Python, and Spring Boot. Functions can now also access data stored in Secrets and ConfigMaps, and you can do that with an interactive kn CLI experience. We will also be enabling Google Cloud Functions to run on Knative.
F
Finally, on automatic RHEL entitlements: we've been working toward this goal across multiple teams, and we're finally at the point where it is released as a tech preview, aiming to become GA in 4.10. Since it's tech preview it sits behind the tech preview feature gate and needs to be enabled by the customer. Once they do that, the cluster automatically reaches out, downloads the simple content access certificate carrying the entitlements of the customer's organization, places it on the cluster in a known location, and manages the entitlement on an ongoing basis — it automatically refreshes it, because these entitlements change a lot (they represent the organization and get re-evaluated), fetching a new instance and putting it as a secret on the cluster. The Insights team has been delivering that piece, OCM has exposed the API, and BuildConfigs have added support for mounting secrets and ConfigMaps.
F
So
you
can
use
the
entire
secret
and
map
it
directly
inside
a
build
config
or
inside
a
tick
time
pipeline
or
directly
pause
for
that
matter
and
consume.
The
entitlement
run,
a
dockerfile
build
and
you're.
When
you
run
yum
install
your
subscription
manager
automatically
will
recognize
those
entitlements
and
consume
it
for
for
pulling
the
content
pulling
the
rpms.
F
A requirement for the customer to use this feature is to enable simple content access for their organization. That lets them use a certificate representing the organization, instead of going to every system and entitling them one by one, which is the traditional way some of our customers manage subscriptions across their RHEL nodes. So we're really happy this is finally in place, and we can push it forward and make it simpler over the next couple of releases — more automation around distributing the entitlement secrets themselves and making it easier to deliver them to application teams. Next slide, please.
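One way to picture how an application team consumes the cluster-wide entitlement is sketched below: the certificate the cluster manages is copied into the team's namespace so builds there can mount it. The secret name and namespace (etc-pki-entitlement in openshift-config-managed) reflect the documented default location, but treat them, and the target namespace, as assumptions.

```python
# Hedged sketch: copy the cluster-managed entitlement secret into a build namespace.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

src = core.read_namespaced_secret("etc-pki-entitlement", "openshift-config-managed")
copy = client.V1Secret(
    metadata=client.V1ObjectMeta(name="etc-pki-entitlement", namespace="my-builds"),
    type=src.type,
    data=src.data,            # entitlement certificate and key, base64-encoded
)
core.create_namespaced_secret(namespace="my-builds", body=copy)
```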
G
With OpenShift 4.9, customers can now have multiple logins to the same registry in a single pull secret. Before this, you could only have one login for an entire registry in a single pull secret, which forced the use of many pull secrets for deployments with multiple components — clearly cumbersome. With this change you can use a single secret containing multiple logins for the same registry, either per registry namespace or per image in a registry.
G
This
also
allows
additional
credentials
to
query
dot
io
in
openshift's
global
pull
secret
without
overwriting
existing
credentials
for
openshift
core
image
on
quad.io
next
type.
Please.
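A minimal sketch of what such a pull secret looks like: a single dockerconfigjson secret carrying one registry-wide login plus a more specific login scoped to a registry namespace. Registry paths and credentials are placeholders.

```python
# Minimal sketch: one pull secret, two logins for the same registry.
import base64, json
from kubernetes import client, config

def auth(user, password):
    return {"auth": base64.b64encode(f"{user}:{password}".encode()).decode()}

dockerconfig = {"auths": {
    "quay.io": auth("generic-bot", "token-1"),        # registry-wide credential
    "quay.io/my-team": auth("team-bot", "token-2"),    # namespace-scoped credential
}}

config.load_kube_config()
client.CoreV1Api().create_namespaced_secret(
    namespace="demo",
    body=client.V1Secret(
        metadata=client.V1ObjectMeta(name="combined-pull-secret"),
        type="kubernetes.io/dockerconfigjson",
        string_data={".dockerconfigjson": json.dumps(dockerconfig)},
    ),
)
```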
I
Hey everyone, Ali here. We've got some really great console updates for you. In 4.9 we supercharged our project selector: not only can you do a quick search and star your favorite projects, but for privileged users we added the ability to filter out system projects. Our goal is to help you filter out the noise and get to the projects you care about.
I
In
addition
to
that,
we
added
a
user
preference
section,
which
includes
not
only
your
language
preferences
but
we've
added
ability
to
set
your
default
perspective
view
your
default
projects,
your
default
topology
view,
in
addition
that
we
also
added
the
your
default
added
method.
So
if
you
want
to
choose
form
or
yaml
when
you
come
in,
the
preferences
will
remember
that,
for
you
in
the
future
next
slide.
I
So
another
big
request
we
got
was
in
the
in
the
overview
dashboard.
Our
users
wanted
to
be
able
to
get
the
cluster
utilization,
not
just
by
the
entire
cluster,
but
by
node
types
as
well.
So
in
openshift
we
default
to
both
worker
and
masternodes,
and
I
have
the
ability
to
to
segregate
which
no
type
you
want.
In
addition,
if
users
add
their
own
note
types,
for
example,
they
come
in
and
add
an
infra
type
or
a
gpu
and
no
type
they
could
come
in
and
filter
that
as
well.
I
The
next
item
we've
added,
is
the
ability
to
get
node
level
logs
directly
from
the
console
so
kind
of
like
pod
logs,
you
can
come
into
your
nodes.
Select
logs
you'll
get
a
list
of
all
available
logs
there
and
then,
which
version
of
that
log
you
want
to
access
you'll,
be
able
to
do
that
right
from
the
console
and
finally,
we
added
the
ability
to
clean
up
operators
right.
I
So
when
we
say
by
cleanup
we
mean
not
when
you
uninstall,
you
not
only
uninstall
the
operator,
but
we
also
uninstall
all
the
operands
that
were
created
by
that
operator
as
well.
Give
you
a
full,
clean
uninstall
of
your
operator,
all
right
next
slide.
Please.
J
Thanks, Ali. Hi, I'm Serena, the PM for developer tools, and this is about the developer console — the developer perspective inside the console. This release we really focused on a lot of usability enhancements, plus a few features. The screen on the top left shows our converged import flow: previously we had three separate flows for import from Git, Dockerfile, and devfile, and in 4.9, to improve the experience, we now have a single converged flow where the user just enters their Git repo and we do all the work behind the scenes — a great improvement. The next one, on the top right-hand side, is an easy way to export your application: this is a new dev preview feature, and from the topology view it lets you export your app, giving you the application YAML you can reuse.
J
The bottom left-hand screen is a form-based edit for BuildConfigs. This was an RFE requested by many people — we used to have it in 3.x, so this is a parity feature. Last but not least, on the bottom right-hand side we have improvements for application observability: our Monitoring section has been renamed to Observe, we now have four dashboards available for developers, and you'll see more added in upcoming releases.
J
On
the
next
slide,
we'll
talk
a
little
bit
more
about
the
serverless
changes
that
we
have.
Nana
had
talked
about
some
of
this
as
already
so.
We
do
have
the
domains,
a
mapping,
support
for
serverless
deployments.
J
So,
as
we
know,
the
each
service
kind
of
has
is
automatically
assigned
a
default
name
when
it's
a
domain
name
when
it's
created,
and
this
option
allows
you
to
map
any
custom
domain,
name
that
you
want
to
okay
native
service
through
the
ui
and
on
the
slide
on
the
mockup
on
the
right.
That
is,
our
developer.
Catalog
now
includes
community
camlets.
So
when
the
camel
k
operator
is
installed,
we
get
an
additional
50
plus
event
sources
available
in
the
developer
catalog.
J
So
that's
a
great
feature
as
well
on
the
next
slide
talk
a
little
bit
more
around
integration
with
pipelines
inside
of
the
console,
and
this
is
aligned
with
what
ciana
had
discussed.
So
we
do
the
the
screen
on
the
left
is
showing
that
we
do
have
a
repository
list,
views
for
for
pipelines
as
code,
so
that
allows
you
to
look
at
those
repositories
get
to
the
pipeline,
runs,
etc.
From
from
the
console
and
then
also
as
as
cmec
had
mentioned,
we
do
have
some
nice
enhancements
around
the
pipeline
builder.
K
As I just mentioned, we are adding a new provider in this release: Azure Stack Hub, part of the Azure Stack portfolio of products that extends Azure services and capabilities to the environment of your choice. This new feature allows an OpenShift cluster to be deployed into existing infrastructure on Azure Stack Hub. As part of this provider enablement we have put together some Azure Resource Manager templates — the solution Azure offers its customers for implementing infrastructure as code.
K
If a customer wants to use RHEL for the worker or infrastructure nodes, this can be done as a day-two operation on any cluster deployed via UPI or IPI. In OpenShift 4.9 we are deprecating the addition of new RHEL 7 machines to the cluster, and the path to replace existing RHEL 7 machines with RHEL 8 ones basically consists of adding new RHEL 8 machines and removing the old ones.
K
This enhancement lets the OpenShift installer create subnets as large as possible within the machine CIDR, rather than always taking up an eighth of it regardless of the number of subnets. This is specifically for Microsoft Azure. It allows users to create a machine CIDR that is as small as possible while still accommodating the number of nodes the cluster will have, and there are no changes required to consume this — no additional fields need to be added to the install-config file.
D
Zero touch provisioning is moving forward as a dev preview, now integrated within RHACM, with the additional option to deploy multi-node clusters as well as remote worker nodes on top of the single-node capabilities that were already there, driven by requirements from the telco market to address multi-cluster, regional, planned deployments.
D
Infrastructure configuration and workloads are manifested in Git via Kubernetes-native APIs, providing a fully automated deployment from a regional location. For those who don't know it already, it integrates and leverages the existing technology stack — Red Hat Advanced Cluster Management, Hive, Metal³, and the Assisted Installer — taking the benefits of all of those to create a fully automated flow from infrastructure to application running on an OpenShift cluster. It has minimal requirements.
D
It
can
do
deployment
over
layer,
3
net
networks
with
no
additional
bootstrap
nodes,
so
it's
really
aimed
at
edge
deployment
other
than
that
it
is
highly
customized
deployment.
It
feeds
connected
and
disconnected
ibvc
scipv4.
Dual
stack,
dhcp
static,
ip
and
also
the
all
supported
deployment
options
are
are
feasible
using
this
mechanism.
D
It
is
git
ops
enable
meaning
that
it
is
managed
with
cube,
negative
declarative
api
and
it
works
with
any
deployment
topology.
So
that's
zero
touch,
provisioning
and
it's
provided
via
infrastructure
operator
and
next
slide.
Please.
D
Using
the
same
apis,
the
same
cube
native
apis
and
we
really
wanted
to
touch
additional
flows,
ones
that
are
not
focused
at
plan
deployments,
but
provide
more
dynamic
capabilities
and
basically
a
decoupling
between
two
personas.
So
one
persona
is
the
infra,
I
admit
the
it
which
manages
the
on-prem
compute
across
different
data
centers
or
locations.
D
The
other
person
is
the
cluster
creator,
the
dead
or
ops
or
devops
which
consume
these
allocated
computer
resources
and
create
clusters
from
them,
and
we
really
kind
of
try
to
create
a
different
interface
for
each
one
of
those
one
is
for
managing
what
we
call
an
infrared.
It's
another
custom
record
that
we
added
to
the
operator
which
allows
you
to
organize
your
infrastructure
in
a
much
smaller
structural
way.
D
Still
kubernetes
stated,
so
you
can
divide
pararec
or
pair
location
and
organize
your
hardware
this
way
and
while
creating
and
kind
of
trying
to
preserve
the
same
or
better
user
experience
for
cluster
creation.
We
borrowed
many
of
the
practices
that
we've
learned
from
assisted
installer,
working
on
cloud.trader.com
and
created
the
same
type
of
experience
of
preflight
validation
of
the
monitoring
and
so
and
and
the
same
uxp
around
that.
So
we
kind
of
keep
the
same
structure
and
same
format.
D
L
Thanks. OK, let's talk now about bare metal IPI. One of the features we are adding in 4.9 is the ability to use the regular bare metal IPI installer and workflow against bare metal nodes provided by IBM Cloud.
L
So,
if
you're
familiar
with
the
bare
metal
iti,
you
will
know
that
essentially,
what
we
are
doing
is
we
are
using
diameter
nodes
as
if
they
were
a
cloud
provider
effectively
thanks
to
the
bare
metal
operator.
So
this
is
exactly
what
we
are
doing
against
the
metal
launch
provided
by
ibm
cloud.
This
is
not
a
cloud
provider,
a
new
cloud
provider
that
understands
ibm
cloud,
but
your
regular
ipi
workflow.
L
This
is
an
important
difference,
because
we
are
also
working
on
adding
full
support
for
ibm
cloud
next
slide.
Please.
L
Okay,
one
of
the
things
that
you
may
be
doing
still
talking
about
the
environmental
ipi
is
provisioning.
Your
notes
with
dhcp
and
pixi.
That's
very
common
when
you
provision
diameter
nodes-
and
this
is
integrated
in
the
standard,
ipi
workflow.
When
you
do
this,
you
need
a
provisioning
network.
L
That's
dedicated
for
this
purpose
right
to
do
the
provisioning
over
the
network
of
of
your
nodes,
but
then
you
may
want
to
expand
your
cluster
with
remote
worker
nodes
that
not
necessarily
will
have
access
to
this
provisioning
network
or
you
don't
want
any
provisioning
network
whatsoever,
and
you
can
do
this
with
virtual
media
with
virtual
media.
The
diameter
operator
will
essentially
map
the
installation
image,
which
is
you
know,
part
it's
in
your
cluster.
L
The
bare
metal
operator
is
aware
of
this
image
and
it
will
map
it
to
the
remote
nodes
bmc's
so
that
they
can
be
installed
from
it.
So
if
you've
installed
your
cluster
with
pxe
now
you
can
expand
your
cluster
with
remote
worker
nodes
over
virtual
media,
and
with
that
I
will
pass
it
to
garaf
to
talk
about
control,
plane,
updates.
G
Yeah
I'm
talking
on
behalf
of
ghana,
but
because
he's
on
pto
with
openshift
4.9.
What
you
can
do
is
that
I
mean
customers
all
and
users
always
want
to
choose
different
scheduler
behavior.
That
fits
their
workload.
So
for
that
we
have
two
ways
in
which
users
can
do
that.
One
is
the
pre-built
profiles
which
you
see
on
the
left
here
and
then
the
other
is
building
one's
own
custom
profile
and
both
are
supported
with
4.9.
G
You
know,
as
the
name
suggests,
the
previous
profiles
have
low,
node
utilization,
high
note
utilization
and
no
scoring
with
you
know.
You
know,
I
think
the
names
are
pretty
clear
on
what
they
do
for
building
your
custom
profile.
What
you
do
is
that
you
build
a
you
know
you
build
an
extension
using
the
scheduler
plugin,
and
then
you
can
use
that
in
your
scheduling
profile.
G
Note
that
you
can
only
use
one
scribbling
profile
on
a
cluster
with
that,
hopefully
you'll
use
that
and
I'll
hand
it
over
to
the
next
slide.
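A hedged sketch of selecting one of the pre-built profiles, assuming the cluster Scheduler resource carries a spec.profile field as documented, and a kubernetes Python client recent enough to send dict bodies as a merge patch:

```python
# Hedged sketch: switch the cluster to the HighNodeUtilization scheduling profile.
from kubernetes import client, config

config.load_kube_config()
client.CustomObjectsApi().patch_cluster_custom_object(
    group="config.openshift.io", version="v1",
    plural="schedulers", name="cluster",
    body={"spec": {"profile": "HighNodeUtilization"}},
)
```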
M
Thanks. I want to talk about updates to the control plane, starting with custom route names and certificates for cluster components. The default route names of OpenShift cluster components now allow for more flexibility in customer environments. The current form, name.apps.<cluster>.<domain>, can be customized for both the OAuth server and the OpenShift console. If you've looked at your OpenShift console, the URL is something like console-openshift-console.apps.<cluster name>.<domain name>.
M
You
know,
customers
have,
you
know,
talked
to
us
and
said
you
know.
I
want
this
to
have
like
the
name
of
my
bank
like
an
xyz
bank
or
acme
insurance.
You
know
in
the
cluster
name,
and
so
now
we
allow
for
the
customization
of
both
the
routes
and
pls
search
that
you
can
use
for
those
routes
for
the
award
server
and
the
ocd
console.
M
This was already supported for the console and the Downloads page from 4.8. There are a few other components — the monitoring pieces like Alertmanager, Prometheus, Grafana, and Thanos, and the image registry — that are in progress, so hopefully soon you'll be able to customize the routes for those components as well. Next.
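A hedged sketch of what that customization looks like, assuming the componentRoutes field on the cluster Ingress config as documented; the hostname and TLS secret name are placeholders.

```python
# Hedged sketch: give the web console a custom hostname and serving certificate.
from kubernetes import client, config

config.load_kube_config()
client.CustomObjectsApi().patch_cluster_custom_object(
    group="config.openshift.io", version="v1",
    plural="ingresses", name="cluster",
    body={"spec": {"componentRoutes": [{
        "name": "console",
        "namespace": "openshift-console",
        "hostname": "console.mybank.example.com",
        "servingCertKeyPairSecret": {"name": "console-custom-tls"},  # secret in openshift-config
    }]}},
)
```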
M
So
in
4.6
we
introduced
a
api
audit
log
policy
that
controls
the
amount
of
information
that
is
locked
to
the
epa
audit
logs
by
giving
you
three
profiles.
First
is
a
default
profile
that
lets
you
logs.
Only
metadata
for
read
and
write
request
does
not
log
any
request
bodies,
except
for
the
oauth
access
token.
That
was
the
default
policy.
M
The
second
profile
that
we
let
you
customize
that
will,
let
you
add,
was
write,
request
bodies
which,
in
addition
to
logging
metadata
for
all
requests,
you
could
log
request
bodies
for
every
read
and
write
request
for
every
write
request.
I'm
sorry
that
includes
create
update
and
patch
and
last,
but
not
the
least.
We
created
another
profile
called
all
request
bodies
which,
in
addition
to
logging
metadata
for
all
requests
it
also.
Let
you
log
request
bodies
for
every
read
and
write
request
to
the
api
server,
including
operations
like
get
list
create,
update
and
patch.
M
The
default
profile
obviously
had
the
least
resource
overhead
the
right,
because
bodies
has
a
little
more
resource
overhead.
The
all
rippers
bodies
has
the
most
resources
and
how
you
change
the
profile
is
you
would
edit
the
api
server
object
and
then
you
would
add
a
profile
under
spec.audit
and
then
specify
the
profile
that
you
want
default
or
write
request
all
request,
and
in
4.8
we
said
the
default
log
policy.
M
M
M
Now, in 4.9, you can configure the audit policy with custom rules, which means you can specify multiple groups and define what profile to use for each of those groups. For instance, looking at the same APIServer object under spec.audit, I've defined a section called customRules; under customRules I can have multiple groups. There is one group for all OAuth server requests, and the profile set for it is WriteRequestBodies.
M
I
have
another
group
called
system
that
authenticated,
which
is
pretty
much
all
authenticated
request
to
api
server.
There
is
a
profile
set
called
all
request
bodies
and
for
those
requests
that
do
not
satisfy
the
about
two
criteria.
You
know
we
set
a
default
profile
right,
so
you
can
pretty
much
select
groups
and
for
those
groups
you
can
specify
what
level
of
logging
that
you
want
for
those
groups.
M
And
the
last
update
to
audit
logging
in
4.9
is:
we
have
provided
a
capability
of
disabling
audit
logging
so
again
you
edit
the
api
server
object
and
spec.profile.
You
would,
you
know
flip
it
to
none.
The
reason
why
we
have
given
you
this
switch
is
because
a
lot
of
customers
came
back
to
us
and
said
even
the
default
level
of
logging
was
a
little
excessive
for
them,
so
they
would
like
to
have
an
option
of
you
know,
turning
off
logging
completely
for
the
whole
cluster,
and
so
now
you
have
that
option.
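Putting those pieces together, here is a minimal sketch of the 4.9 audit configuration described above, with per-group custom rules and a default fallback. Field names follow the APIServer config API and the groups shown are the ones mentioned in the talk; setting the profile to None would disable audit logging entirely.

```python
# Hedged sketch: per-group audit profiles plus a default, on the cluster APIServer config.
from kubernetes import client, config

config.load_kube_config()
client.CustomObjectsApi().patch_cluster_custom_object(
    group="config.openshift.io", version="v1",
    plural="apiservers", name="cluster",
    body={"spec": {"audit": {
        "customRules": [
            {"group": "system:authenticated:oauth", "profile": "WriteRequestBodies"},
            {"group": "system:authenticated", "profile": "AllRequestBodies"},
        ],
        "profile": "Default",   # fallback; "None" would turn audit logging off
    }}},
)
```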
M
So
next
couple
of
slides,
I
want
to
talk
about
all
the
latest
and
greatest
updates
to
hcd,
starting
with
cypher's
customization,
so
you
can
now
customize
the
ciphers
that
you
use
for
xcd.
So
again
you
edit
the
api
server
object
under
spec.pls
security
profile.
You
will
define
the
type
of
you
know:
tls
security
profile
you
want
to
use.
There
are
four
profiles
that
you
know:
mozilla
provides
which
is
old,
intermediate,
modern
and
custom.
M
The
intermediate
profile
is,
you
know
the
default,
one
for
the
interest,
controller,
the
cubelet
and
the
control
plane,
and
it
requires
a
minimum
plus
version
of
1.2.
M
The next etcd update is automated certificate rotation. There are basically four sets of certificates involved: serving certificates that processes use when communicating with etcd, and peer certificates used for communication among etcd members — a default OpenShift cluster has three masters, which means three etcd members, and when those members need to communicate they need peer certificates.
M
There are also metrics certificates that metrics consumers use to connect. The peer, client, and serving certificate validity is around three years, and before 4.9, if they expired, you pretty much had to restart a node or reboot the cluster to regenerate them. Now we provide an automated certificate rotation feature: these certificates are automatically rotated by the system prior to expiration, so there is less overhead for the OpenShift cluster admin to worry about. Next slide, please.
M
And
last,
but
not
the
least
for
xcd,
we
have
provided
a
auto
defrag
feature
in
the
controller.
This
feature
enables
the
automated
mechanism
that
provides
defragmentation
as
a
result
of
observation
from
the
cluster.
The
goal
of
the
feature
is
to
provide
a
controller
that
manages
the
automation
of
hcd
fragmentation
based
on
observable
threshold,
again
mind
you.
This
is
not
an
api.
We
provided
to
end
consumers.
It's
just
you
know
an
automated
way.
The
lcd
cluster
hd
operator
has
to
defrag
the
cluster.
M
It
checks
the
lcd
cluster,
every
10
minutes
and
the
criteria
it
uses
is
the
cluster
should
be
held
in
a
highly
available
topology
mode.
The
cluster
member
should
be
healthy.
The
minimum
defrag
bytes,
which
is
a
minimum
database
size
before
the
fragmentation
occurs,
should
be
100
megabytes
and
then
the
max
fragmented
percentage,
which
is
the
percentage
of
the
store
that's
fragmented,
should
be
45.
M
So
if
these
criterias
are,
you
know,
satisfied,
cluster
xd
operator
will
go
ahead
and
you
know
run
a
defrag
on
the
cluster.
You
know
reclaim
law
space,
you
know
free
up
the
cluster
that
leads
to
you
know
fewer.
You
know.
Resource
outages,
like
lack
of
memory
or
resource
bloat
or
downtime,
it
leads
to
you,
know,
better
cluster
performance
and
why
we
did.
This
is
because
you
know
we
observed
you
know
large
scale.
Customer
clusters.
We
observed
some
of
our
internal.
You
know,
clusters
that
are
used
to
run.
M
You
know
rci
jobs,
and
we
said
you
know
this.
Could
you
know
benefit
from
you
know
like
an
auto
defrag
feature,
and
so
this
is,
you
know,
end
of
the
day.
Less.
You
know
overhead
for
the
cluster
admin.
You
know
better
reliability
of
the
cluster
and
inner
stability
of
the
openshift.
M
With this I'll hand off to Mark and Deepthi to talk about all the networking.
E
The first item is the addition of enhanced egress IP load balancing for clusters built with OVN-Kubernetes networking. Egress IP is likely a feature already familiar to networking-minded developers and admins: it provides the capability to define a source IP address, or a predefined range of IP addresses, for a specified application's egress traffic. Cluster admins typically use this source IP reservation to allow-list traffic at the edge of their cluster deployment, filtering which traffic is allowed to travel externally to the cluster. This enhancement removes the previous OVN requirement that egress traffic leave through a single node's interface, where it got NATed to the egress IPs defined on that node. It adds the ability to use multiple cluster nodes to distribute the egress traffic, avoiding a single-node choke point while still reserving source IP addresses for that traffic. Note that this was implemented in the last release, OpenShift 4.8, for our default out-of-the-box networking, so this enhancement completes it for customers using OVN-Kubernetes. Now over to Deepthi. Next slide, please.
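For clusters on OVN-Kubernetes, the feature is driven by the EgressIP custom resource. A hedged sketch follows; the addresses and selector labels are placeholders.

```python
# Hedged sketch: reserve two egress IPs for pods in namespaces labeled team=a.
from kubernetes import client, config

config.load_kube_config()
egress_ip = {
    "apiVersion": "k8s.ovn.org/v1",
    "kind": "EgressIP",
    "metadata": {"name": "egress-team-a"},
    "spec": {
        "egressIPs": ["192.0.2.10", "192.0.2.11"],       # spread across matching nodes
        "namespaceSelector": {"matchLabels": {"team": "a"}},
    },
}
client.CustomObjectsApi().create_cluster_custom_object(
    group="k8s.ovn.org", version="v1", plural="egressips", body=egress_ip)
```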
N
Thanks, Mark. Let's continue with some of the other major enhancements on the networking side. First up is support on OpenShift for the network adapters on the RHEL fast datapath list. Starting with OpenShift 4.9, the NICs supported on the OpenShift platform are aligned with the RHEL fast datapath support matrix. What does this mean for our customers? From now on, any network adapter supported in RHEL will be supported in OpenShift without needing any further certification requirements.
N
The
nik
support
information
can
be
viewed
in
the
support,
metrics
link.
That
is
right
there.
Next
up,
we
have
support
for
sri
oe
on
a
single
node
openshift.
Customers
want
to
run
real
time,
low,
latency
workloads
on
a
far
less
resource
constrained
hardware
and
to
help
them
with
that.
We
now
have
sriv
operator
running
on
single
node
openshift.
This
ensures
we
have
high
performance
network
in
place,
which
is
much
needed
to
onboard
these
critical
workloads.
N
N
N
Next slide, please. Now let's take a look at some of the major Ingress enhancements in this release; the theme has mainly been security. First up is support for mTLS through the Ingress Operator: starting in 4.9 we have a client TLS enhancement in place that enables administrators to configure the OpenShift router to verify client certificates.
N
This
facilitates
mutual
dls
where
both
the
client
and
the
server
authenticate
using
their
own
respective
tls
certificates.
Moving
on.
We
now
have
support
for
tls
version
1.3
for
openshift
ingress.
Now
this
basically
supports
faster
tls
handshakes
with
better
performance,
stronger
security
in
comparison
to
its
predecessor.
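A hedged sketch of enabling client-certificate verification on the default router, assuming the clientTLS stanza of the IngressController API as documented; the CA ConfigMap name is a placeholder (it is expected to live in the openshift-config namespace).

```python
# Hedged sketch: require verified client certificates on the default IngressController.
from kubernetes import client, config

config.load_kube_config()
client.CustomObjectsApi().patch_namespaced_custom_object(
    group="operator.openshift.io", version="v1",
    namespace="openshift-ingress-operator",
    plural="ingresscontrollers", name="default",
    body={"spec": {"clientTLS": {
        "clientCertificatePolicy": "Required",
        "clientCA": {"name": "router-client-ca"},   # ConfigMap holding the CA bundle
    }}},
)
```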
N
Next
up
we
have
global
options
to
enforce
http,
strict
transport
security,
or
we
call
hsts
policy
in
this
release.
So
hstc
policy
basically
enforces
https
in
client
requests
to
the
host
the
policy
covers
without
making
use
of
any
http
redirects.
This
ensures
user
protection.
Minimizes
security
risks
you
know
which
are
basically
based
on
network
traffic,
eavesdropping
or
man
in
the
middle
attacks
in
openshift,
3.x
and
prior
versions
of
openshift
4.
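The per-route piece being enforced is the long-standing HSTS annotation on a Route; a minimal sketch of setting it is below (route name, namespace, and header values are placeholders), with the new global enforcement configured separately on the cluster Ingress config.

```python
# Minimal sketch: add an HSTS header to a Route via the documented HAProxy annotation.
from kubernetes import client, config

config.load_kube_config()
client.CustomObjectsApi().patch_namespaced_custom_object(
    group="route.openshift.io", version="v1",
    namespace="demo", plural="routes", name="frontend",
    body={"metadata": {"annotations": {
        "haproxy.router.openshift.io/hsts_header":
            "max-age=31536000;includeSubDomains;preload",
    }}},
)
```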
N
There are also new tuning options for the cases where your connections would typically time out; these options can be set as part of the IngressController spec under tuningOptions. That's it for now — moving on to Frank. Next slide, please. Thank you.
O
Thanks, Deepthi. The VRF CNI has graduated from technology preview to general availability. The VRF CNI lets you connect a pod to several networks with overlapping IP ranges by creating multiple routing and forwarding domains within the pod, thanks to the Linux kernel feature named virtual routing and forwarding. The VRF CNI can run on top of any secondary CNI as long as it uses netdevs — Linux kernel devices — and not DPDK-bound interfaces. At this point in time, the VRF CNI is deployed on top of the SR-IOV CNI and the MACVLAN CNI. Next slide.
P
Thank you, Frank. OpenShift Service Mesh 2.1 will ship shortly after 4.9. It updates Service Mesh to Istio 1.9 and introduces new resources for federating service meshes across multiple OpenShift clusters. This allows meshes to be connected securely in a multi-tenant, multi-cluster fashion. Compared to upstream Istio's multi-cluster models, OpenShift Service Mesh federation does not require the Istiod control planes to directly access the Kubernetes API servers of other clusters.
P
This
allows
remote
services
to
be
shared
on
a
need
to
know
basis,
as
determined
by
each
individual
mesh
administrator
traffic
to
and
from
the
remote
services
can
then
be
managed
using
istio
resources
such
as
authorization
policies
and
virtual
services,
as
if
those
remote
services
were
in
fact,
local
service
mesh.
Two
one
also
brings
the
service
extensions
api
to
ga,
which
facilitates
the
webassem
web,
facilitates
use
of
webassembly
for
extending
sq
and
envoy
I'll.
Look
for
service
mission,
two
one
in
early
november,
I'll
now
pass
to
anita.
Q
Hello everyone, I'll be covering OpenShift 4.9 on OpenStack, and today we want to look at the Octavia load balancer and support for it as an external load balancer service. OpenShift on-prem, as Mark said earlier, does not have a native load balancer for non-HTTP and general TCP traffic; that's coming with MetalLB in 4.10, but we already have requirements and the expectation that the cloud provider — in this case OpenStack — provides load balancer services.
Q
This
is
to
enable
connectivity
across
openshift
clusters,
as
well
as
to
connect
vm
workloads
with
with
openshift
on
stack
and
shift
on
stack
use
cases
in
for
openshift
4.7.
We
introduced
the
external
load
balancer
with
a
upi
installer.
Q
You could use OpenStack's built-in Octavia load balancer for both L4 and L7 services, and previously Octavia was only available with the Kuryr CNI. Now, with OpenShift 4.9, we have enabled it as a service type LoadBalancer, with the ability to install it via the IPI installer. Octavia has two backend options: Amphora, the HAProxy/IPVS-based load balancer, and the OVN backend.
Q
You
have
you
spawn
a
separate
vm
to
handle
load
balancing
for
every
openshift
cluster
and
it
handles
http,
https,
cls
termination,
all
different
types
of
tcp
ports.
Support
for
udp,
though,
is
work
in
progress
coming
with
openshift
4.10.
Q
It
needs
the
external
cloud
provider
which
will
be
added
in
openshift
4.10
http
support
is
planned
for
openshift
17.
and
for
ovn.
Octavia
is
tech
preview
right
now
you
can
use
it.
It
is.
It
doesn't
have
health
checks
and
health
monitoring
for
its
members,
but
you
it
relies
on
kubernetes,
inbuilt
pod
checks,
health
checks
to
to
do
verify
that
the
members
are
up
and
running.
It
has
the
same
support
for
tcp
udp,
coming
with
4.10
and
http,
with
openstack
17..
Q
The
main
advantage
with
ovn
is
it's
a
distributed
load
balancer
service.
It
is
available
with
the
node
itself.
There's
no
need
to
spawn
an
extra
vm
or
the
extra
hop
for
latency,
and
so
obn
is
definitely
a
potential
for
usage,
with
the
caveat
that
it
might
have
no
health
checks
and
rely
on
kubernetes
moving
to
the
next
slide.
Q
Q
You may want separation at the network level, at the namespace level, or at the service mesh level, and with OpenShift 4.9 we now support and have validated the Octavia load balancer in all of these modes: you can use a separate load balancer with the Ingress Controller for DMZ, internal, and external traffic, for namespace services, or for service mesh separation, using labels, namespace tags, and service mesh tags.
R
Thanks, Anita. Let's talk about running virtual machines in OpenShift. As you know, we've been generally available for close to 18 months now. Customers are using virtual machines in a cloud-native way — such as Lockheed Martin as part of their AI/ML pipelines — and we also have an online retailer building on the excellent work that Ramon and his team did for bare metal, converting their three-tier applications into OpenShift and transforming them; right now they have about 1,400 VMs and they're going to continue to grow from there.
R
So
this
is
where
you
could
actually
spin
up
a
cluster
and
run
vms
in
your
openshift
directly
in
your
openshift
cluster
up
there
in
public
cloud.
The
other
thing
we're
delivering
is
building
on
our
data
protection
story.
We've
got
some
basic
building
blocks
today.
The
next
release
will
continue
that
trend
and
then
you'll
actually
see
a
tech
preview
of
the
openshift
api
for
data
protection,
where
we
work
with
partners
to
not
only
make
sure
that
you
get
the
right
disaster
recovery
for
your
openshift
cluster,
but
the
vms
that
are
within
it
and
those
persistent
workloads.
R
Now,
if
you
were
going
to
run
vms
in
openshift,
I
wouldn't
pick
the
biggest
workloads
to
run
on
it
like
sap,
but
we've
done
exactly
that
so,
based
on
the
because
we're
based
on
kvm
and
using
the
same
capabilities
that
we
do
across
all
the
other
red
hat
virtual
platforms,
we
can
actually
take
advantage
not
only
that
knowledge,
but
that
technology
to
deliver
very
robust
and
capable
performance,
vms
and
you'll
see.
This
is
the
first
view
of
tech
of
a
non-production
deployment
of
sap,
hana,
we're
heading
towards
certification
and
a
future
release.
R
We've
done
a
couple
of
enhancements
for
security
and
performance.
The
main
one
is
to
be
able
to
run
with
virtual
workloads
on
a
fixed
compliant
cluster.
This
will
be
a
big
help
and
a
big
interest
to
our
financial
and
public
sector
customers
and
then,
lastly,
as
we've
said
for
some
time
now,
we
believe
that
vms
and
containers
should
be
able
to
take
advantage
of
developer
tools,
regardless
of
what
format
it's
in.
So
we've
actually
got
some
stuff
where
you
can
use
vms
and
service
mesh
and
get
improved
observability
and
security
of
your
hybrid
applications.
S
Thanks a lot, Peter. As you said, many customers are bringing workloads to OpenShift, whether in containers or VMs, depending on the pace and the degree of containerization of the workload, if I may use that word. So whether you want to bring some workloads you need next to your containers, or you want to bring a lot of workloads and do the modernization at your own pace, you need a tool to do that — and that is the Migration Toolkit for Virtualization.
S
If
there's
any
issue,
what
what
is
happening
with
mascara
integrated
in
in
in
migration
it
for
vitalization,
so
again
we're
ready
to
start
bringing
vms
whether
it
is
to
have
them
next
to
your
containers
or
whether
it
is
as
a
plan
to
start
migrating
at
a
slower
pace
by
bringing
those
those
workloads
into
openshift
as
vms.
And
with
this
I
pass
it
to
another.
M
I
want
to
talk
about
a
new
ga
that
we
had
for
windows
container
a
couple
of
weeks
ago.
Bring
your
own
host
for
windows
notes.
We
announced
general
availability
for
bring
your
own
host
or
short
form
is
vyoh
report
for
windows
notes
into
openshift.
M
Often these instances run on on-prem platforms like vSphere, bare metal, and so on, and it is essential that we take advantage of these servers to run containerized workloads so their computing power can be harnessed in a hybrid cloud world. Enabling the BYOH feature for Windows Servers can help customers lift and shift their on-premises workloads to a cloud-native world. Next slide, please.
M
So
this
is
how
it
works,
so
you
have
a
cluster
with
you
know
three
masters.
You
know,
let's
say
three
workouts.
The
three
workouts
you
know
could
be
built
with
linux.
You
can
add
additional
windows
nodes
using
the
ipi
installer,
so
say
you're
running
on
azure
aws
vsphere.
What
not
using
the
ipi
mechanism,
you
can
add
more
windows,
server,
instances
that
are
managed
through
machine
api.
M
But you may also have a dedicated Windows Server instance sitting in your data center that you regularly patch, update, and manage, and you'd like it to be onboarded to that cluster running on AWS, Azure, vSphere — wherever it is — and be co-located in the same cluster. You can do that with the BYOH feature. So now you have an OpenShift cluster comprising machines managed through the Machine API alongside machines onboarded through the BYOH feature, and you can manage them together.
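As a loose sketch of how a BYOH instance is onboarded, the Windows Machine Config Operator documentation describes a windows-instances ConfigMap in the operator's namespace that lists the instances and the user to connect as; everything below (names, namespace, address, username) should be treated as an assumption and checked against the WMCO docs for your release.

```python
# Heavily hedged sketch: register an existing Windows server with the WMCO for onboarding.
from kubernetes import client, config

config.load_kube_config()
cm = client.V1ConfigMap(
    metadata=client.V1ObjectMeta(
        name="windows-instances",
        namespace="openshift-windows-machine-config-operator"),
    data={"win-node-1.example.internal": "username=Administrator"},  # address -> SSH user
)
client.CoreV1Api().create_namespaced_config_map(
    namespace="openshift-windows-machine-config-operator", body=cm)
```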
M
You
know:
pets
and
cattles
in
one
happy
animal
farm
managed
by
the
same
openshift
control,
plane,
managed
and
scheduled
by
the
openshift
cluster.
So
this
feature
went
ga
a
couple
of
weeks.
Back
with
this,
you
can
now
onboard
you
know:
windows,
server,
instances
onto
new
platforms
like
bare
metal,
vsphere,
and
you
know
so
on
and
so
forth.
So
hopefully
this
will,
you
know,
unlock
customers
who
are
you
know
on
these
platforms
and
who
are
trying
to
use
windows
container
workloads
to
run
on
those
platforms.
T
I am the PM for OpenShift sandboxed containers. Just a reminder: OpenShift sandboxed containers is tech preview in 4.9, as it was in 4.8. It provides an additional runtime for customers who are seeking an extra layer of isolation.
T
It complements our existing stack and follows a defense-in-depth approach — a complementary feature to the existing runtimes. It is based on the Kata Containers upstream project, and in 4.9 we're providing a couple of new features. One: if you have a FIPS-enabled OpenShift cluster, you can now install OpenShift sandboxed containers on top of it without tainting the cluster's existing FIPS state; that has been validated for the operator and the Kata Containers runtime.
T
We
also
now
allow
most
paths
for
updates
and
upgrades
whether
it's
for
the
operator
through
olm
or
through
for
our
runtime
and
virtualization
stack
or
hypervisor
stack
also
for
customers
who
are
having
trouble.
We
have
increased
and
provided
a
must-gather
image
that
you
they
can
use
to
collect
information
in
case
they
have
problems
with
their
clusters
and
in
that
case
they
help
narrow
down,
root,
cause
analysis
and
we're
working
on
even
adding
more
information
across
the
entire
stack.
T
Finally,
we
have
like,
if
you
have
an
openshift,
disconnected
cluster
and
want
to
run
openshift
sandbox
containers.
Now
the
operator,
the
openshift
sandbox
containers
operator,
allows
for
that
and
can
work
in
disconnected
mode
yeah.
With
this
I
will
hand
over
and
recover
hardware
acceleration
with
everyone.
U
Thank you. Hello, I'm Erwan, product manager for OpenShift AI and hardware accelerators. In previous OpenShift releases we enabled new hardware accelerators, including GPUs, FPGAs, and ASICs, and each of these enablements required a dedicated operator. To help standardize hardware accelerator enablement, we are providing two new tech preview components in OpenShift. The first is the Special Resource Operator, an orchestrator that can manage the deployment of software stacks for hardware accelerators.
U
We
have
started
to
create
this
component
with
partners
like
nvidia
intel,
silicon
or
exilinx,
and
we
are
now
providing
this
toolbox
with
openc
openshift.
So
sr
can
manage
data
operations
like
building
or
loading
canal
modules.
You
can
use
it
to
deploy
drivers,
deploy,
device
plugins
on
enable
parameters,
monitoring
stacks
oso
use
recipes
to
enable
out-of-three
drivers
and
manage
drive
all
the
life
cycle
of
drivers,
so
for
specific
out
of
three
driver.
Enablement
will
fall
under
red
hat
third
party
support
and
certification
policies.
U
The
second
component
is
a
driver
toolkit,
so
it's
a
container
image
to
be
used
as
a
base
image
for
driver
containers,
so
the
driver
toolkit
contains
tools
and
kernel
packages
required
to
build
or
install
kernel
modules.
So
you
can
use
the
driver
toolkit
to
build
on
to
to
have
pre-built
containers
or
local
to
reduce
cluster
rental
time
and
requirements.
U
V
Right now — and I'm hoping you've all heard about this — we have a developer preview of Arm available for you. You can go out there and try it, and, let's be honest,
V
This
is
gonna
be
huge,
whether
you're
looking
at
what
cloud
providers
are
charging
for
arm
or
the
new
systems
that
are
coming
out
with
their
low
power
requirements.
It's
just
the
thing
that
you're
going
to
be
asking
about.
You
know
right
now:
people
are
probably
discussing
things
like
software
supply
chain
strategy,
but
by
the
end
of
2021.
V
You
know
in
the
next
few
months
time
everyone's
going
to
be
asking
about
arms,
so
we
have
something
now
that
you
can
not
only
see,
but
you
can
go
and
touch
and
try
and
we'd
be
really
interested
in
hearing
you,
your
feedback.
Our
roadmap
is
going
to
be
really
strong,
so
help
us
guide
that
we
also
work
in
with
ibm
on
there
on
their.
You
know,
power
and
z
features,
and
we
continue
to
innovate
there,
where
we
can
the
one.
Probably
that's
interesting
that
I
want
to
pull
out.
V
There
is
the
multiple
network,
interface
or
so
the
code,
for
that
was
always
there,
but
it
was
a
thing
that
we
never
really
tested
but
based
on
feedback
from
yourselves
and
customers.
We've
gone
out
there
and
made
sure
that
you
know
it
works
and
we're
not
only
making
that
available
for
four
nine
now,
but
we're
going
to
make
sure
that
you're
supported
right
back
to
four
six
and
we
also
don't
want
to
forget
about
our
developers.
We
all
love
developers
and
you
know
great
work
by
the
openshift
pipelines
team.
V
So
not
only
have
they
got
1.6
out,
but
that's
available
for
our
power
and
z
systems
as
well
and
then
for
the
rest
of
it.
Our
concentration's
really
just
been
on
new
hardware
on
the
z
side,
they've
upgraded
their
virtualization
support
system
so
that
zvm
has
gone
up
to
7.1,
so
we're
making
sure
that's
available
and
on
the
power
side.
Well
they've
only
gone
and
brought
out
some
new
hardware.
Hopefully
those
of
you
interested
in
this
area.
V
So
the
announcements
around
support
for
power
10,
so
we're
going
to
make
sure
that
works
well.
Well,
we
have
made
sure
that
works.
Well,
with
openshift
and
we've
seen
good
traction
here,
you
know
one
of
the
things
that
ibm
are
offering
right
now
is
on
demand
pricing
there.
So
you
can
go
and
kind
of
get
access
and
run
openshift
in
that
power,
10,
environment
and
you
know,
if
your
systems,
don't
your
workloads,
don't
fill
up
a
whole
power
10
system,
but
they
maybe
want
to
burst
later
on.
C
Thank you, Duncan. On the Operator SDK front, there are three highlights in this new downstream release. Firstly, in response to the API removals in Kube 1.22, the updated bundle validate command helps developers easily review whether any manifest in the operator bundle still uses the affected APIs. The command also provides guidance on migrating affected manifests, so developers can more easily keep their operators compatible with the Kube ecosystem.
C
Moving on to the next highlight: to support proxy-enabled clusters, operators must inspect the environment for the standard proxy variables and pass the values on to the operand pods. As you can see in this diagram, on a proxy-enabled cluster OLM will read the proxy config and populate it as environment variables in the operator's deployment.
C
Lastly, starting from OCP 4.8, the downstream SDK defaults to UBI and other downstream images in project scaffolding. The downstream base images are also guaranteed compatibility fixes with OCP for two OCP releases, so developers can more easily create and maintain operators in the Red Hat-supported way. Next, I'll hand over to Daniel to talk about OLM updates.
W
Thank you, Tony. Let's talk about Operator Lifecycle Management; a lot of enhancements went into OpenShift 4.9. Let's start with automatic switching of catalogs. This is something that happens under the hood, and most of you are probably unaware of it: in every OpenShift update, all of the catalogs that we ship with the cluster are switched, and this allows us to put operators into a catalog that are really known to work with that particular OpenShift version.
W
While it has always been possible for customers and partners to create and ship their own catalogs, they didn't have access to this automatic switching of catalogs with a cluster update. We enabled that with a way to dynamically reference the image in which this catalog full of operators, which you can install via OLM, is referenced, for instance by the use of template variables that refer to the current major and minor version of Kubernetes, the platform that OLM is running on.
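A sketch of what that could look like on a custom CatalogSource follows. The annotation key (olm.catalogImageTemplate) and the {kube_major_version}/{kube_minor_version} variables are my reading of the 4.9 feature, so verify them against the OLM documentation; the image names are placeholders:

```go
// Sketch of a custom CatalogSource whose image tag tracks the cluster's
// Kubernetes minor version. The annotation key and template variables are an
// assumption to verify against the OLM docs before relying on them.
package main

import "fmt"

const catalogSource = `
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: example-catalog
  namespace: openshift-marketplace
  annotations:
    olm.catalogImageTemplate: "quay.io/example/catalog:v{kube_major_version}.{kube_minor_version}"
spec:
  sourceType: grpc
  image: quay.io/example/catalog:v1.22   # rewritten from the template on cluster update
  displayName: Example Catalog
`

func main() { fmt.Print(catalogSource) }
```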
W
So customers and partners can now use that to ship their own catalogs, which will automatically get switched with a cluster update, in order to take advantage of this way of shipping a supported set of operators for a particular OpenShift release. Furthermore, in the sense of operator release compatibility, in 4.9 we introduced the ability for operator developers to denote in the operator metadata
W
what the maximum OpenShift version is that this operator has been tested with and is known to work with. So when you as a developer put that in your metadata, you're essentially shipping a support matrix boundary to your customers, and this is something administrators will notice when cluster updates are available and they have operators installed that have the maximum OpenShift version set to whatever the current version is.
W
This is actually how we inform administrators, on the update from 4.8 to 4.9, that they have operators installed which are still referring to the APIs that have been removed in the OpenShift 4.9 and Kubernetes 1.22 release. So this is now available for developers as well, and they can really make sure that customers stay within the supported boundaries of the operator version compared to the cluster version.
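A sketch of how an author might express that boundary in the ClusterServiceVersion follows. The olm.properties annotation and the olm.maxOpenShiftVersion property type are my recollection of the mechanism, so check the OLM docs for the authoritative form:

```go
// Sketch of declaring the maximum tested OpenShift version in an operator's
// ClusterServiceVersion metadata. Annotation key and property type are an
// assumption to verify against the OLM docs.
package main

import "fmt"

const csvFragment = `
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-operator.v1.4.0
  annotations:
    # Intended to keep admins from upgrading past 4.8 while this version is installed.
    olm.properties: '[{"type": "olm.maxOpenShiftVersion", "value": "4.8"}]'
`

func main() { fmt.Print(csvFragment) }
```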
W
Another thing we saw is that the bundles in which operators ship their metadata grew in size, mostly due to the fact that the custom resource definitions that operators ship to describe their user interface and their APIs have really large sections of text that describe the API:
W
the OpenAPI spec. This has grown towards the limit of one megabyte which is imposed by the etcd database underneath the cluster, and in the past we had to ask authors to cut and reduce this data by not shipping the OpenAPI spec for all of their APIs. However, we want that data, because it really drives the user experience with validation and also how our UI builds these dynamic forms.
W
So in OpenShift 4.9 we are now using in-line compression to compress the bundle content while we handle and extract it, so we are far below the one megabyte limit in most cases. We've also reduced the amount of resources that OLM itself uses, specifically the catalog pods, which, if the catalogs were large, could take up a significant amount of memory.
W
The RAM utilization of all the OpenShift default catalogs, almost 500 operators nowadays, is now a fraction of what it used to be. We've also added a lot of status information for debugging and troubleshooting to the user-facing APIs of OLM, namely OperatorGroup and Subscription, so admins have one central way to look at what's causing an install to fail or an update to error out, in one high-level API, without the need to go into logs or subordinate object statuses to figure out what's wrong. That's it for Operator Lifecycle Management.
W
Let's continue with Quay. Red Hat Quay 3.6 will ship almost in parallel with OpenShift 4.9, and the first thing I'll introduce is actually not really connected to Quay 3.6 itself: it's a new flavor and version of Quay that we ship as part of OpenShift. We call it the OpenShift mirror registry, and it is essentially a very streamlined, simple-to-use installer for Quay that deploys a very specialized and stripped-down Quay deployment for the sole purpose of bootstrapping a disconnected cluster install.
W
Within a minute, users have a fully laid out registry, which they can then point the oc utility at to start mirroring the OpenShift content for disconnected clusters. So while this is not a fully blown, full-performance version of Quay, which we would like customers to run on top of OpenShift, it allows you to get over this first initial barrier of installing a disconnected cluster.
W
Next slide. Another important aspect for OpenShift users using Quay is the operator. This is our go-to installer for HA deployments of Quay, and it has seen a lot of improvements in 3.6. The highlight is the much sought-after ability to let OpenShift take care of the certificate management for routes; this is now the default, and customers and users can still bring their own TLS certificates, in which case you will not have an edge route but a pass-through route.
W
Last but not least, a feature for users that are using Quay as a central ingress point for all the upstream registries that they let their developers use, or for storing OpenShift core images for disconnected mirroring, is the support of nested repositories. What this allows is to essentially structure content within one Quay organization, so that you can avoid naming collisions and you have some sort of logical structure, if you will, within one bigger bucket.
W
This is important when you bring in content from other registries but want to retain the original structure. You are able to add more path elements to the repository name, and an admin would also quickly see all the various paths in a certain Quay organization that have been mirrored from OpenShift catalogs and OpenShift core images. There's one small wrinkle with that: like the logical subfolders in object storage, these do not come with their own permission management.
W
This feature will also come with Quay 3.6 and will be available on Quay.io, the hosted version of Quay, towards the end of this year as well. With that I hand it over to storage; take it away, Gregory.
X
Hi everyone. On the storage side, we are continuing our cloud providers' journey to CSI, targeted for 4.11, and to be successful we need both the CSI drivers GA as well as the migration path for customers that were using the in-tree drivers. In OCP 4.9 we're graduating new CSI drivers to full support with their respective operators, namely Azure Stack Hub and AWS EBS.
X
On the CSI migration front, we are adding GCE disk and Azure Disk as tech preview. Last but not least, we'd like to give a heads-up on the vSphere migration, as VMware is one of our top OCP infrastructure providers.
X
As mentioned earlier, OCP 4.11 is the release target that will trigger the CSI migration, and then that will be the only option to consume storage. The vSphere CSI driver requires VM hardware version 15 and an underlying vSphere 6.7 U3 version. Therefore, we need to keep customers informed and planning to upgrade. So if you have customers in that situation, please reach out to them, and we will also work with marketing to advertise that.
X
This outlines what's new in ODF for 4.9. Before we go into the features, we want to mention the rebranding of OCS to OpenShift Data Foundation: starting from 4.9, the change can be found across the product and the marketing collateral. There is no migration or pricing change, as ODF SKUs were already introduced earlier this year.
X
On the DR side, we are introducing an asynchronous-replication-based Regional-DR solution, managed by ACM, as tech preview, with failover and failback automation. Next, on the security front, we are adding PV-granularity encryption with KMS integration using a service account, and we currently support Vault as the KMS.
X
Included in ODF deployments, with nice monitoring and dashboards, we added a new capability in the Multicloud Object Gateway namespace bucket to replicate data between ODF object storage and native cloud storage.
X
Last in this list is the managed service for ODF on ROSA, which just started an early trial period for early adopters. This is a new offering, and if you have any customer interested in that offer, please reach out to ODF product management.
Y
Awesome, thank you. So this quarter we really wanted to focus on accelerating Kubernetes security innovation, and we wanted to do that three ways. We wanted to do that through the theme of advanced security use cases; at the end of the day, Red Hat Advanced Cluster Security wouldn't be the same without that. We want to focus on self-service security workflows, so developers and the security team can bridge the skill gap between a security person not actually knowing Kubernetes or the application holistically, and a developer not necessarily understanding security in depth
Y
as much as a security engineer would. We also wanted to focus on expanding our platform support. In terms of advanced security use cases, we've enhanced protections for the Kubernetes API server. What this is going to allow teams to do is monitor for access to their most sensitive secrets and config maps in their environment. In this way you can tell if someone who is unauthorized is accessing one of your secrets with a cluster-admin role.
Y
We also want to help organizations improve the cybersecurity skill gap. Many organizations use the MITRE ATT&CK framework, which is an industry-standard framework that is used to do gap analysis from an attacker's perspective, so that you can see the common tactics and techniques that you see in the cybersecurity wild.
Y
So we want to shorten the feedback loop by enabling teams to use namespace annotations to define where exactly they want that feedback to go. You can use that to send feedback to a Slack distribution or an email distribution, really any way that your team operates today, and we've also enabled scoped access controls for self-service security.
Y
The reason we do that is because teams want to be able to log into a user interface and bridge the context gap between a security and a development team. So this is a way that security teams can share a multi-tenant environment with development teams, but only give them access to the organizational details that they need.
Z
Right on, thanks Jamie; the next slide, here we go. So thank you all for continuing to be with us. I know some of you probably have to drop, but it's only getting better from here. You probably don't even realize that we're putting so many things into OpenShift 4.9, but management is what really makes it easy: easy to spin out clusters left and right, easy to do GitOps at scale, easy to do all those things, and we do it better together.
Z
So ACM really brings us to the center point of OpenShift Platform Plus: bringing those features that Jamie just talked about with cluster security, bringing those features from Submariner and the OpenShift GitOps teams, bringing application sets at scale with cluster selectors, doing things around governance, risk and compliance, centralizing those alerts into the hub Alertmanager with the ability to spread those out to Slack, PagerDuty and other third-party tools that you have for incident management. And you asked for cluster health metrics for non-OpenShift clusters, and we brought that to the table.
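The alert forwarding mentioned above is standard Alertmanager configuration. Here is a generic sketch of a Slack receiver of the kind the hub's Alertmanager would consume; where exactly this config lives on the hub, and the webhook URL and channel, are placeholders to adapt:

```go
// Generic Alertmanager configuration sketch for forwarding critical alerts to
// Slack. Not ACM-specific; URL and channel are placeholders.
package main

import "fmt"

const alertmanagerConfig = `
route:
  receiver: default
  routes:
    - receiver: slack-ops
      match:
        severity: critical
receivers:
  - name: default
  - name: slack-ops
    slack_configs:
      - api_url: https://hooks.slack.com/services/REPLACE/ME
        channel: "#cluster-alerts"
        send_resolved: true
`

func main() { fmt.Print(alertmanagerConfig) }
```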
Z
We're
now
bringing
cluster
health
for
eks,
gke,
aks
and
ips.
So
all
major
cloud
provider
clusters
are
being
reported
back
in
the
cluster
health
metrics,
centralized
at
the
hub
thunders.
On
top
of
that,
we've
brought
the
business
conversation
into
this.
We're
reporting
service
level
objectives
into
that
profano
dashboard.
So
you
know
the
uptime
of
your
api,
so
you
know
your
error
budget
and
how
you're,
targeting
and
tracking
and
trending.
Z
This
is
all
done
from
one
single
pane
of
glass.
We
always
say
that
management
makes
everything
else
easier,
and
you
have
to
put
that
first.
So
keep
that
in
mind
to
our
customers
or
our
partners.
As
you
work
with
openshift
across
this
portfolio,
we
acm
are
at
the
heart
of
it
all
next
slide,
please
going
from
better
together.
Z
moving forward, you can now deploy your hub on IBM Power and Z; that's full GA. So guys like Duncan, don't worry about it, we've got you covered: we can manage from the hub on Power. Indeed, centralized infrastructure management, like Moran talked about, we can do that for bare metal deployments, and that's at tech preview. Also look for enhancements to advanced image registry configs, so that within your public cloud you can define image registries for the cluster and for the add-ons, for all the features that we deploy out there in those public clouds.
Z
We're really trying to make that cluster experience easier for you, so you can do it repeatedly, operationalize your teams, and be successful with clusters at scale. I'm now going to turn it over to my esteemed colleague, Brad Weinberger, who's going to talk to you about management at the edge. Take it away.
AA
Hello, OpenShift world, and belated thanks and birthday wishes to you, birthday boy. Yep, ACM for management at the edge, no matter how far from the traditional core data center your clusters are located. ACM supports deploying a thousand single-node OpenShift clusters and bringing them under management in the single pane Scott mentioned, with no jumping from system to system. Zero-touch provisioning is a big part of that.
AA
We'll call it ZTP from here on out. It's a project that deploys and delivers OpenShift 4 clusters in an architecture named hub-and-spoke, where the hub is a cluster able to manage many of those spoke clusters. The hub cluster will use ACM to manage and deploy those spoke clusters, and with that we have IPv6 dual-stack support along with connected and disconnected scenarios.
AA
So we heard some industry references to telco, and many of those are in a disconnected scenario, especially as you get further away from the core data center, further out on the edge, where the worker nodes may or may not be able to directly access the internet; in the disconnected mode, that may be by design or some type of act of nature. The policy generator is a big part of this, because we've heard other folks mention the pets-versus-cattle analogy; that's where our clusters are disposable, and we can
AA
simplify that with a GitOps approach to the distribution of Kubernetes resources managed through RHACM policy, ACM policy. We expect these to be stored in a Git repository, and as we deploy this through ZTP, we use Argo CD to push to the hub and then the policy engine to push to the managed clusters. Depending on where you're at in that edge location, using the telco industry as an example, which would be a horizontal, we can deploy different profiles, and if it's a vertical market, fewer policies will be deployed.
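For the Argo CD leg of the flow just described, a generic Application pointing at a Git directory of site configs or policies might look like the sketch below; the repository URL, path and namespaces are placeholders, and the real ZTP pipeline defines its own repo layout and generators:

```go
// Generic Argo CD Application sketch: sync a Git directory of site configs or
// policies to the hub. Repo URL, path and namespaces are placeholders.
package main

import "fmt"

const application = `
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ztp-site-configs
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example/ztp-site-configs.git
    targetRevision: main
    path: site-configs
  destination:
    server: https://kubernetes.default.svc
    namespace: ztp-sites
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
`

func main() { fmt.Print(application) }
```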
AA
These managed clusters are handled by backup and restore as part of our business continuity story. And yes, at Red Hat you've heard a few of us mention it, but we think of these clusters as being disposable, and they can be replaced more efficiently than with the traditional business continuity stories when using ACM and a GitOps approach.
AA
So whenever you hear us referring to clusters as cattle and not pets, it's going to save you time and money when you start thinking of it like that, versus the nurturing that was required with clusters where, you know, tribal knowledge has to keep those things running. You can think of ZTP as a project that includes a solution set: it's got the Assisted Installer and single-node OpenShift, referred to as SNO, and these are deployed and managed via ACM.
AA
Next, I'd like to go over some of the provisioning building blocks. ACM deploys the SNO, which is the OpenShift Container Platform installed on single nodes, leveraging ZTP. The initial site plan is broken down into smaller components, and the configuration data is stored in a Git repository:
AA
things such as PTP, performance profile, SR-IOV, and then lastly, downloading images to run workloads, things like CNFs. One of the other components of ZTP is the Assisted Installer. That is a project to help simplify OpenShift Container Platform installation for a number of different platforms you can deploy on. The AI service provides validation and discovery of the targeted hardware and greatly improves the success rate of installation; that's something we're striving to improve with every release. Advance to the next slide, please. And as Scott mentioned, the customers have spoken.
AA
We've
listened
we've
finally
been
able
to
provide
crucial
features
around
business
continuity,
rackham
hub
backup
and
restores
a
tech
preview.
It's
using
a
backup
solution
based
on
openshift
api
for
data
protection,
the
managed
cluster
configurations
can
be
back
up,
backed
up
and
restored
in
a
different
cluster
leveraging
odf,
that
was
openshift
data
foundation,
previously
openshift
container
storage
for
those
who
haven't
seen
that
yet
and
then
the
acm
and
disaster
recovery
across
stateful
workloads.
That's
also
a
tech
preview
for
your
business.
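Since the hub backup tech preview builds on the OpenShift API for Data Protection, which wraps Velero, a minimal Backup resource of the kind involved could look like this sketch; the namespaces shown are illustrative, and ACM's backup component decides which hub namespaces it actually captures:

```go
// Minimal Velero-style Backup sketch of the kind OADP works with. Namespaces
// are illustrative, not the authoritative list ACM backs up.
package main

import "fmt"

const backup = `
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: acm-hub-backup
  namespace: open-cluster-management-backup
spec:
  includedNamespaces:
    - open-cluster-management
  storageLocation: default
  ttl: 720h0m0s
`

func main() { fmt.Print(backup) }
```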
AA
ODF, along with ACM, will ensure you have a robust, multi-site, multi-cluster DR strategy. Both ODF and ACM enable fast and consistent application DR that protects both application data and application state, while ensuring your application data volumes are consistently and frequently replicated, resulting in reduced data loss on recovery. The DR operators, enabled with ACM, automate the DR failover and failback process, ensuring that your recovery is fast and less error-prone than manual operation.
AA
I also want to call out the PV nature of this with VolSync; that's a tech preview. It's ensuring resilience for business-critical stateful apps by enabling a planned application migration strategy across your clusters. You can also use VolSync to create your own DR solution when working with non-ODF storage or heterogeneous storage products, and you can expect this business continuity story to continue to evolve.
AA
As
scott
was
mentioning
some
of
the
features
in
his
slides
and
I'd
also
like
to
encourage
you
to
review
our
blogs
at
cloud.redhat.com
forward,
slash
blog
because
a
lot
of
the
things
we've
discussed
here
in
these
slides
today,
they're
regularly
being
published
out
to
those
blogs
so
deeper
dives
on
ai
or
the
business
continuity
there's
actually
one
of
the
exciting
ones,
a
five-part
blog
series
that
covers
a
lot
of
the
things
we
just
discussed
here
in
the
acm
slides.
AB
We're also hearing customers, so you will see improvements to the overall user experience. You're now capable of distributing the cost of nodes and clusters based on memory instead of using CPU, which was one of the requirements we had from several customers. So if your cluster is memory-limited instead of CPU-limited, your cost will be more accurate and closer to what you want it to reflect for your business.
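The underlying arithmetic is a simple pro-rata split. Here is an illustrative sketch, with made-up numbers and not cost management's exact model, of distributing a cluster's cost by memory request share instead of CPU share:

```go
// Illustrative arithmetic only: pro-rate a cluster's cost across projects by
// their share of memory requests instead of CPU requests.
package main

import "fmt"

func main() {
	clusterCost := 900.0 // total cluster cost for the period (made-up number)

	// Memory requested per project, in GiB (also made up).
	memoryGiB := map[string]float64{
		"checkout":  48,
		"inventory": 24,
		"batch":     8,
	}

	var totalGiB float64
	for _, m := range memoryGiB {
		totalGiB += m
	}

	// Each project's share of the cost is its share of total memory requests.
	for project, m := range memoryGiB {
		share := m / totalGiB
		fmt.Printf("%-10s %5.1f GiB  %5.1f%% of cost  $%.2f\n",
			project, m, share*100, clusterCost*share)
	}
}
```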
AB
We also worked on the overall usability of the tool, and we are now getting rid of the infrastructure-versus-supplementary distinction; you will see that you can show that only if you need it, just to make the interface clearer and show fewer, simpler graphics that are all around easier to understand and follow. Another requirement we had from customers is being able to export labels in CSV files.
AB
Another thing that was really requested by customers is that now you can pause your sources. So if you have a source that is no longer there, because your cluster is transient or you just killed it to create a new one, you can pause that source and you will not see any error messages around it, because we will know that the source is no longer sending information to the cloud. And with that, next slide, I pass over to Frank, who's going to talk about telco and 5G. Thanks.
O
So, high-performance applications running DPDK, in the case of cloud-native network functions, need CPU, network interfaces and memory to be located on the same NUMA node; any cross-NUMA situation leads to a significant and unacceptable performance drop. In other words, CPU, devices and memory absolutely need to be on the same NUMA node. Kubelet relies on the Topology Manager for container NUMA alignment, and so far only CPU and devices were NUMA-aligned; with the addition of the Memory Manager, kubelet can now align regular memory and huge pages as well as devices.
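On OpenShift these kubelet settings are typically applied through a KubeletConfig resource (or via a performance profile for the telco case). A sketch using the upstream kubelet field names follows; the pool selector and reserved sizes are illustrative choices to verify for your own cluster:

```go
// Sketch of a KubeletConfig enabling single-NUMA-node alignment for CPU,
// devices and memory. Field names follow the upstream kubelet configuration;
// pool selector and reserved sizes are illustrative.
package main

import "fmt"

const kubeletConfig = `
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: numa-aligned-workers
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""   # target pool is illustrative
  kubeletConfig:
    cpuManagerPolicy: static
    topologyManagerPolicy: single-numa-node
    memoryManagerPolicy: Static
    reservedMemory:
      - numaNode: 0
        limits:
          memory: 1Gi
`

func main() { fmt.Print(kubeletConfig) }
```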
O
Yeah, so with OpenShift 4.9 we have enhanced our PTP support to support the boundary clock function, which is really related to vRAN deployments, and we comply with the overall O-RAN design. We have also contributed to O-RAN by delivering node-local events related to PTP, so we can deliver them super fast.
O
So
it's
really
so
microseconds
are
precision
so
to
deliver
events
via
a
sidecar
image
that
can
be
injected
in
any
cnf
belonging
belonging
to
the
the
very
far
edge
node
that
we
call
the
du
of
the
distributed
units.
So
basically,
what's
the
first
piece
of
software
running
at
the
bottom
of
the
antenna,
if
you
want-
and
this
is
a
mandatory-
so
ptp
is
a
mandatory
function
for
a
run,
because
you
need
all
of
the
antennas
to
be
synchronized
in
order
to
avoid
destructive
interferences
between
themselves
and
also
to
have
super
high
5g
bandwidths.
AC
Thank you, Frank. Good morning, good afternoon, everyone. We've continued to make some substantial movement here with OpenShift monitoring. With the introduction of some new kube-state-metrics and Alertmanager functionality, we've covered a lot of different customer requests.
AC
Those are the enhancements to Alertmanager rules for the Cluster Monitoring Operator, and refined triggering conditions, such as alerts on kube-state-metrics that are proven to detect more quickly when disk space is actually running low from a container perspective, which is pretty important across customers, plus more frequent intervals, so everything from error detection at the kube-state level to Thanos queries for monitoring user-defined projects.
AC
We've made enhancements for remote write storage for Prometheus metrics, and we also have new support for Prometheus 2.29.2 and Thanos 0.22, so we're making a lot more investments looking down the road at Thanos.
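Remote write for the platform Prometheus is configured through the cluster-monitoring-config ConfigMap. A minimal sketch follows, with a placeholder endpoint; the full set of supported remoteWrite fields (auth, relabeling and so on) is in the 4.9 monitoring docs:

```go
// Sketch of enabling Prometheus remote write for the platform monitoring stack
// via the cluster-monitoring-config ConfigMap. Endpoint is a placeholder.
package main

import "fmt"

const clusterMonitoringConfig = `
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      remoteWrite:
        - url: "https://remote-write.example.com/api/v1/write"
`

func main() { fmt.Print(clusterMonitoringConfig) }
```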
AC
For logging, the next chapter here, we've added new support and flexibility. As we've seen a lot of requests to improve performance, we've started to come up with solutions to the scalability issues with Fluentd: we are now offering a preview in Logging 5, consumable in 4.9, to use Vector, because Vector collectors are going to be important for scalability in the future.
AC
That way we can extend requests down to Vector through API calls, expand our abilities for how we do collection for logging, and really grow, expand and build upon that in the future as we progress forward with OpenShift Logging. We've also seen requests from customers to start assembling multi-line stack trace log messages; essentially what that means is customers didn't want to have to trace logs through multiple separate lines, so they wanted a more unified experience.
AC
We have more flexibility to provide simple log exploration. It's another area where we are going to offer a new API experience in the OpenShift console, where we can basically display the contextualized logs inside an individual alert, and we can start to see more granular relationships between alerting and the capability of using the explorer to expand and drill down for root cause analysis.
AC
And finally, here we have new support and capabilities as we grow and expand with OpenShift: we're making more investments and looking at Loki. Loki is a logging solution, and our Loki Operator is now capable of providing on-cluster solutions, so we'll have the ability to install, update and manage Loki on our cluster as an alternative, essentially, with scalability and improved log performance.
AC
With that said, I'll pass off to the next session.
AD
Thanks, Shannon. So the last thing in this presentation is talking about Insights. Insights is a set of services and features that we offer for free to all our OpenShift customers, and they are available for you on console.redhat.com/openshift. We already talked about cost management as one of the Insights services, and I'll be talking about Insights Advisor. Insights Advisor is all about delivering proactive support to all connected users.
AD
It gives you recommendations on potential issues that might impact your cluster's performance or security or some other aspect of your cluster, and we give you recommendations with remediation steps through which you are able to avoid these situations. In OpenShift 4.9 we improved a couple of things around Advisor. First, if you are an air-gapped customer and you're not connected to our infrastructure, we allow you to manually upload the Insights Operator archive and still get these Advisor recommendations, so you can still check your clusters.
AD
You can still get these cool recommendations and prevent some issues on your clusters. Also, if you're using the Advisor interface and you've also been using console.redhat.com for the different features that it offers, you might have noticed that there is a feature for notifications: right now you can configure notifications over email, Slack message, or through a webhook for critical and important Insights events.
AD
We look at misconfigurations and user management, and we also have some new recommendations if you're running specific workloads on your cluster, like SAP as an example. The Insights Operator actually accompanies our telemetry operator; this feature is known to us as remote health monitoring.
AD
We work with the Insights Operator on a smaller footprint with something we call conditional data gathering. This means that every time a specific condition happens on the cluster, we collect additional data, so we can make the recommendation more spot-on and more accurate, so you can again resolve it. That's all I have for you, and I guess this is the last slide, so bringing it back to Rob.
B
Thank you very much. Thank you all for joining us today to hear about OpenShift 4.9. As a reminder, we have tons of live streaming that we do all the time, so please look for the calendar for that, and please do check out OpenShift 4.9 in your existing clusters, or for new installs in just a few weeks here when it becomes GA. Again, thank you so much for joining, and we'll see you next time.