From YouTube: What's New in OpenShift 4.13 [Technical Product Update]
Description
Red Hat's Technical Product Managers review what to expect in Red Hat OpenShift 4.13.
00:00 Stream beginning
02:38 Event Begins
03:00 4.13 Overview
06:10 Spotlight Features
15:27 Developer Tools Update
16:32 Runtimes Update
20:33 Platform Services Update
27:49 Installer Flexibility
41:02 Control Plane Updates
43:17 Security Updates
55:02 Management Updates
1:02:11 Quay and Quay.io Updates
1:05:18 Observability Updates
1:15:47 Networking and Routing Updates
1:19:59 Virtualization Updates
1:22:42 Operator Framework Updates
1:24:30 Storage Updates
1:28:55 Telco 5G Updates
1:31:00 Conclusion
A: I'm with the OpenShift product management team, and it is my pleasure to be with you today. Thank you for your time. We are going to cover what's new in OpenShift 4.13 for you today, and I'm excited. We have a whole group of people joining to provide a great update for OpenShift 4.13. So let's jump right into it.
A: We have three-node cluster support on AWS, Azure, and GCP, what I believe we called compact clusters in the past, providing a nice, tight OpenShift platform for you to run across those cloud platforms. We've also extended our 4.12 support for a single control plane to Azure and GCP. We're going to hear from our team about the custom metrics autoscaler and cgroups, and of course Kubernetes 1.26, which is part of the foundation for OpenShift in this 4.13 release.
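As a rough sketch of what a three-node compact cluster install looks like, here is an illustrative install-config.yaml excerpt; the cluster name and region are placeholders, and the same shape applies on Azure and GCP with their own platform stanzas:

```yaml
# Illustrative install-config.yaml excerpt for a compact cluster:
# three control plane nodes and zero dedicated workers, so the
# control plane nodes are schedulable and also run workloads.
controlPlane:
  name: master
  replicas: 3
compute:
- name: worker
  replicas: 0
platform:
  aws:
    region: us-east-1   # placeholder region
```

With compute replicas set to 0, the installer marks the control plane nodes as workers too, which is what makes the compact topology work.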
A: Those are exciting things to discuss and share with you. In regards to security, our cert-manager Operator is GA. We've made improvements for deploying OpenShift on encrypted VMs and with encrypted storage for vSphere, and we're also enabling users to manage their own keys to encrypt storage on AWS, Azure, and GCP.
A
And
of
course,
we
can't
have
an
open
shift
presentation
without
talking
about
our
Edge
and
the
edge
parts
for
our
open
shift.
Are
improving
dramatically
and
really
excited
about
single
node
openshift
on
AWS
and
R
bare
metal?
We
use
this
internally
for
our
own
development
purposes,
and
the
arm
footprint
provides
us
great
efficiency
for
being
able
to
control
some
of
our
Cloud
costs.
B: Next, in this latest release we introduced 33 customer-requested product enhancements (RFEs). Our primary focus for this release is quality, stability, scale, and security. Among the most requested enhancements for this release are zone awareness for OpenShift on VMware vSphere, the expansion of cluster networks, the addition of a login capability to nodes via the Red Hat Enterprise Linux CoreOS console, support for Azure user-defined tags, and the ability to install into a shared virtual private cloud on Google Cloud Platform.
C: All right, hi everybody. We are really excited to be releasing OpenShift 4.13 on RHEL 9.2 content. This is a major version upgrade for RHEL CoreOS, and we're extra excited that we're releasing virtually the same week as RHEL 9.2 itself, bringing you absolutely the latest in RHEL hardware support, fixes, and performance enhancements. The 9.2 kernel also brings some key cgroups enhancements that Gaurav will talk about later, and I'll be back to talk about innovation in a bit. But for some other spotlight features, we're starting with Anjali and cert-manager. Anjali?
D: Yes, thank you, Mark. We are extremely excited to announce the general availability of the cert-manager Operator for Red Hat OpenShift. The operator helps streamline the integration of cert-manager into OpenShift clusters. cert-manager makes it very easy for users to integrate with external certificate authorities, manage certificates with built-in tools, and renew certificates automatically. The cert-manager Operator is a day-2 operator managed by OLM, with the ability to install using the CLI or the UI. We provide a published product lifecycle that aligns very closely with upstream.
D: It provides configuration options for ease of deployment, as well as issuer-specific configs. The operator can be configured post-install by updating the cert-manager CR. We have documented and tested the operator against ACME, self-signed, and CA issuers. The cert-manager Operator helps simplify and automate the lifecycle of certificates, and lastly, the operator really helps in managing certificates in bulk, helping users securely expose hundreds and thousands of applications.
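To give a flavor of what the operator manages, here is a minimal sketch using the upstream cert-manager v1 API with a self-signed issuer, one of the issuer types mentioned above; the names and DNS values are illustrative:

```yaml
# A self-signed Issuer plus a Certificate that references it.
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: demo
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: demo-cert
  namespace: demo
spec:
  secretName: demo-cert-tls   # cert-manager writes the signed key pair here
  dnsNames:
  - app.example.com           # placeholder hostname
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer
```

cert-manager then renews the certificate in the secret automatically before it expires.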
E: Thanks, Anjali. Hey everyone. We are excited to continue to make progress with Advanced Cluster Security as a cloud service. We launched this early for field trial in December and we are ready to go live. We're going to announce that at Summit, and it is going to be open in what Red Hat calls limited availability, which is a live service limited to a number of customers. So with ACS as a service, you get all the features, and we run Central for you.
E: You only have to worry about your secured clusters: faster time to value, reduced complexity. We have a team of SREs looking after your service, and we hope you will take advantage of it.
F: Thank you. In 4.13 you can now vertically scale control plane nodes automatically in an OpenShift cluster on Azure and Google Cloud Platform. This is done using control plane machine sets, a new CRD added in the Machine API. As a result, you can manage control plane nodes just like you manage your worker nodes, say, if you want to change the CPU in a control plane node.
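The mechanics look roughly like this: you edit the single ControlPlaneMachineSet resource (named cluster, in openshift-machine-api) and change the instance size in the provider spec, and the operator replaces control plane machines one at a time. A trimmed, illustrative excerpt; only the fields relevant to vertical scaling are shown:

```yaml
# Trimmed ControlPlaneMachineSet sketch (machine.openshift.io/v1).
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
  replicas: 3
  state: Active
  template:
    # ...provider-specific machine template elided...
    # On Azure you would change, e.g., vmSize: Standard_D8s_v3
    # On GCP you would change, e.g., machineType: n2-standard-8
```

The instance types shown are placeholders; consult the control plane machine set documentation for the full template layout on each provider.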
G: Thanks, Subin. This is another big release for the systems enablement team, where we've got advances across the entire portfolio of Arm, Power, Z systems, and multi-architecture.
G: First up is Arm, where we round out support for running on the Azure platform with UPI support, and we've also validated the single node topology on bare metal Arm systems. So if you want to run in a compact environment, with the great performance per watt of Arm, why wouldn't you, that is now all tested and supported. Our multi-arch compute feature, which lets OpenShift run different compute node architectures in a single cluster, comes of age with full Arm support for the AWS and Azure platforms.
G: Add in those Arm nodes as a day-2 operation to gradually move over to Arm, or, you know, to maintain support for those applications that don't run on your default architecture. We also introduce full support for migration to, and upgrade of, this mixed configuration. At this stage the most useful bit is the migration piece, so you can easily move your existing clusters to ones that are ready to receive workloads of different architectures. And then finally, lots of security and network additions for our Power and Z system platforms.
G: Probably of specific interest is the ability to run in FIPS mode, which we're going to enable not only for 4.13 but also for 4.12. Do be careful: this doesn't mean that you're FIPS-validated automatically. Make sure to check that the validations are in place for the release you're running on. And with that, as promised, it's time for Mark Russell again.
C: All right, thanks, Duncan, really appreciate it. This is even more exciting for me: I am overjoyed and proud to announce that CoreOS layering is going GA. Teams have been working super hard for the last few releases to bring this new systems management paradigm to OpenShift. To recap: we're shipping the CoreOS image in a standard OCI-formatted container and allowing users to customize it with standard OCI tooling, like Podman and a Dockerfile, also known as a Containerfile.
C: The OS still runs on a normal physical or virtual machine, like RHEL. This format is for customizing and rebasing the root file system. The way the first iteration works is that you build your image outside the cluster, upload it to your own container registry, and then apply a simple MachineConfig to tell the Machine Config Operator to roll that out across the cluster. Over time we'll be adding on-cluster builds and more automation knobs, but for now you need to do some wiring.
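The wiring looks roughly like this: a Containerfile layered on the released RHCOS base image, built and pushed with standard tooling. The base image pullspec and the package below are placeholders; the pullspec for your release comes from the release payload.

```dockerfile
# Illustrative Containerfile: layer an extra RHEL package onto the
# RHEL CoreOS base image shipped with your OpenShift release.
FROM <rhel-coreos-base-image-pullspec>
RUN rpm-ostree install usbguard && \
    ostree container commit
```

You would then push the result to your registry and apply a MachineConfig whose osImageURL field points at the pushed image, which is what tells the MCO to roll it out; check the 4.13 layering documentation for the exact field names.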
C: So I will try not to overdo it, but it's hard to overstate the similarity between building an app on a base image like UBI and customizing an RHCOS image. In the app dev world, you start with a base image supported by a vendor or a project. You very often add some additional distro dependencies and then your custom content; you build it, push it to a registry, and then deploy it to Kubernetes.
C: Here, you build it, you push it, and you deploy it with the Machine Config Operator; the MCO then rebases the nodes to the new image one at a time. Comparing app state and configuration to the operating system isn't perfect, but there are some obvious analogs of /var and /etc in the Kubernetes realm, and the responsibility model is very similar. We provide a supported, well-tested base image, and if there are RHEL dependencies, we support those packages just like any other RHEL package, but you're responsible for configuring them and making sure they don't get in the way of OpenShift.
So please try it out. There are links to example Containerfiles on the previous slide, and if you'd like a tour of the nitty-gritty, please check out our virtual session at Red Hat Summit. We are incredibly eager to see what people do with this new functionality. Please reach out. Next: Peter Lauterbach and OpenShift Virtualization.
H: Hi. One of the things that we're going to talk about here is running virtual machines with OpenShift clusters that are virtualized on a single bare metal platform. This is a great way to reduce the complexity of your deployment, but it can also be a challenge when you're running on larger hosts. By hosting multiple clusters on the same hardware, you get the benefit of improved utilization and density of your hardware.
H: You also get additional separation, where you can give different teams their own cluster, using RBAC privileges to administer and do their own development within a cluster. It's a great way to also reduce your traditional hypervisor footprint and expand your OpenShift deployment on identical hardware.
A: And I'm back with the developer tools update. As you know, OpenShift is more than Kubernetes: it's about bringing the Kubernetes experience to our developers. So we have some exciting features that we're providing in our developer tools update. When you look at the Developer perspective in 4.13, you'll find the ability to pin resources to the dev navigation, improved Tekton support, and pod traffic visible through the topology views. We also have Podman Desktop.
A: It's a new capability that allows developers to bring containers very, very quickly into their development experience, and we'll be looking to bring Podman Desktop to air-gapped environments in the near future. odo 3.9 is now available, so you can integrate with the OpenShift Toolkit IDE extensions in VS Code and IntelliJ. Janus and Backstage are really, really exciting for us; for what's happening next, you can take a look at the links at the bottom of the presentation for the developer edition and see what we're doing for Janus and Backstage.
I: Thank you. Hello, everyone, I'm James Falkner. I'm going to talk a bit about what's in OpenShift 4.13, as well as give a sneak peek. So in 4.13 we have Kubernetes-native Java with Quarkus. If you're a Quarkus developer or a Java developer, you'll appreciate this: we now have full support for Java 17, and that's GA.
I: That was a tech preview in prior versions. There are several new developer capabilities: a new developer UI, revamped with additional capabilities and a new graphical look and feel. We also have a new Kafka Dev UI, so if you're developing applications talking to Kafka, you can look at topics, add and remove data, and kind of see what's happening in your cluster.
I: We also have a number of new Dev Services. A Dev Service basically spins up a dependency for you as you're doing your development; the new Elasticsearch Dev Service, for example, spins up a copy of Elasticsearch, and then, as you move into production, you can link your app directly to your production Elasticsearch instances. We also have new enhancements to the Infinispan Dev Service, which again sets up a cache for you, with some new features in there.
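The Dev Services model is essentially zero-config: if the relevant extension is on the classpath and you have not pointed it at a real instance, Quarkus starts a containerized one for dev and test. A sketch in application.properties; the property names follow the Quarkus docs, but treat them as illustrative:

```properties
# Dev mode: with the Elasticsearch extension present and no hosts
# configured, a Dev Service container is started automatically
# (requires a local container runtime such as Podman or Docker).
%dev.quarkus.elasticsearch.devservices.enabled=true

# Production: point at the real cluster; no Dev Service is started.
%prod.quarkus.elasticsearch.hosts=elasticsearch.example.com:9200
```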
I: We've also added new social login support: the Dev Services for OpenID Connect make it very easy to protect your APIs and allow your end users to log in with Facebook or GitHub or Google, and you can see the list there. Lastly, we also have new service bindings for reactive SQL clients. So if you're building reactive systems talking to SQL servers like MariaDB or MySQL or Postgres, and the list goes on and on, you can easily integrate your applications and bind to those services without much fuss.
And a bit of late-breaking news: Quarkus version 3 was actually released late last week. That's going to show up in the next version of OpenShift, but I invite you, if you're interested: there are lots of new performance enhancements in Quarkus 3, and developer productivity capabilities. If you head over to quarkus.io, you can find more information about Quarkus 3. Next slide, please.
I: We also have, as part of OpenShift 4.13, JBoss Web Server, our productized Tomcat. In 4.13 we've made updates to the underlying Tomcat and the individual components of the Tomcat distribution, like mod_cluster, Tomcat Vault, and our portable runtime, as well as new versions of the OpenSSL that backs it when you're running on things like Windows; when you're running it on RHEL and OpenShift, it links to the native libraries available there. But we also have new capabilities in the JWS operator for OpenShift: it's now Level 2 compliant, so you can not only install it, you can also do seamless upgrades across different versions. Next slide.
I: This is the last one, also for Java developers: we have support for Eclipse Temurin. This comes out of the Eclipse Adoptium community. It is a free and open source version of Java, and very popular: it has an extremely high number of downloads, around 200,000 per day, so it's a very, very popular Java distribution.
I: We now have full support for that on OpenShift for all three of the LTS versions of Java available today, which includes 8, 11, and 17. And notably, we not only have production support for Linux and Windows, we also have developer support for macOS, both Intel and Arm architectures. That's available through your typical package manager distributions, and there's also a zip distribution as well.
I: If you want to download that, we have official container images, and then lastly, GitHub Actions support: by default it's one of the available JDKs that you can install when you're using GitHub Actions to automate your pipelines. So I think that's it. I will pass it over to the platform services team, and Kristoff, I believe.
J: OpenShift Pipelines 1.10 is now generally available on OpenShift Container Platform 4.13. OpenShift Pipelines 1.10 is based on upstream Tekton 0.44, and as part of this release we have added Tekton v1 API support, along with the v1beta1 API that is currently there. We will continue to support v1beta1 until it is deprecated upstream. With this update, you can specify environment variables in a PipelineRun or TaskRun pod template to override or append to the variables that are configured in a task or step.
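A minimal sketch of that pod-template override, assuming a hypothetical pipeline named build-pipeline; the env entries are applied to the task pods:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: build-with-env
spec:
  pipelineRef:
    name: build-pipeline            # hypothetical Pipeline name
  podTemplate:
    env:                            # overrides/augments task- and step-level env
    - name: HTTP_PROXY
      value: "http://proxy.example.com:3128"
```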
J: Also, in this release we are enabling integrators to define new custom task types as CRDs, which can be run by creating a new Run object. Basically, this plugs alternative execution models into a Tekton pipeline. Custom tasks are intended to support cases where execution via a pod is potentially inefficient. Among the use cases: for example, you have to wait for an external event to occur before a task runs.
J: It can be an approval event or signal, or, let's say, executing some operation outside of the cluster itself, like a cloud build service or maybe a mobile build farm, and then waiting for the execution to complete. So it gives you tremendous capability with certain kinds of conditional strategies around task runs. We are also adding support in the GitHub interceptor for blocking a pull request unless it is triggered or invoked by an owner, with a configurable comment by the owner.
In OpenShift Pipelines 1.10 we have also validated that customers can run OpenShift Pipelines in FIPS-enabled clusters. Pipelines as Code, which is an opinionated CI solution and a kind of GitOps approach towards Tekton, is where we are spending a lot of time. With this update, you can configure a custom console dashboard, in addition to, let's say, the console in OpenShift or the Tekton upstream dashboard.
J: We are also adding better logging for Pipelines as Code. And finally, with respect to the Dev Console, there are UX improvements in regard to OpenShift Pipelines. Basically, in the Dev Console, when you are importing an application using the Git flow, if Pipelines as Code is already configured inside your Git repository, that is, there are PipelineRuns in its .tekton directory, then the Dev Console will automatically detect that, create the Repository resource, and subsequently trigger the pipeline whenever there is a pull request.
J: That's all for OpenShift Pipelines. Over to you.
K: Thanks, good stuff. For those who are unfamiliar with OpenShift GitOps, it's one of the layered products that's included with OpenShift. It enables deployment of clusters, configuration, and applications from a declarative single source of truth, with automated reconciliation and self-healing deployments.
K: The team has released tech preview support for progressive sync in ApplicationSets, so you can now define the order in which you need your applications to sync. You can group applications together with labels, for example based on environments, and you could specify that some applications be automatically synced, some manually via the UI, and some sequentially, at a specified percentage of apps at a time, all from within a single ApplicationSet.
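A trimmed ApplicationSet sketch of progressive sync, grouping by a hypothetical env label so dev apps sync first and prod apps roll out a percentage at a time:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: demo-appset
spec:
  strategy:
    type: RollingSync
    rollingSync:
      steps:
      - matchExpressions:           # step 1: all apps labeled env=dev
        - key: env
          operator: In
          values: [dev]
      - matchExpressions:           # step 2: prod apps, 25% at a time
        - key: env
          operator: In
          values: [prod]
        maxUpdate: 25%
  # generators: and template: omitted for brevity
```

The label key, values, and percentage are illustrative; the generators and template sections would define the actual applications.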
N: Thank you. OpenShift Serverless is an add-on product that makes OpenShift "OpenShift plus plus" by offering better autoscaling and networking for containerized microservices and functions. It is based on the upstream project Knative, and with Serverless 1.29 we will be updating it to Knative 1.8. For serverless functions, we have added Node.js and TypeScript as GA, supported out of the box. Functions increase your developer velocity dramatically, as they provide templates for jump-starting your apps and even do the container creation for you.
Functions also offers multi-container support: deploying a multi-container pod through a single Knative Service is now a tech preview feature.
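A sketch of what such a multi-container Service looks like; the image names are placeholders, and only one container exposes a port:

```yaml
# Illustrative Knative Service with two containers in one pod
# (tech preview): one serving container plus a sidecar.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: multi-container-demo
spec:
  template:
    spec:
      containers:
      - name: app                           # serving container
        image: quay.io/example/app:latest   # placeholder image
        ports:
        - containerPort: 8080
      - name: sidecar                       # helper container, no port
        image: quay.io/example/sidecar:latest
```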
We have also upgraded our Serverless Logic developer preview, which offers workflow capabilities, and there will be a new landing page for the Serverless documentation for ease of access. Next slide, please.
N: Service Mesh helps you create secure, reliable microservices with enforced TLS encryption, zero-trust policies, and visibility through metrics and traces. OpenShift Service Mesh 2.4, which is coming soon, will update Istio to 1.16. Our new GA features are: a cluster-wide topology, which is more performant when there is only one mesh per cluster (we will still support multi-tenant topologies as well); integration with cert-manager, which will allow workloads to use certificates obtained via cert-manager from issuers such as HashiCorp Vault, etc.; and external authorization for HTTP.
N: External authorization will allow Istio authorization policies to delegate decisions to an external authority, such as Open Policy Agent or OAuth2. An extension provider for Prometheus will allow Service Mesh to send metrics to OpenShift monitoring or an external Prometheus instance, and we are also making the deployment of control plane components to OpenShift infrastructure nodes a GA feature. We are upgrading single-stack IPv6 support to developer preview, and finally, this release brings the beta version of the Gateway API to Service Mesh. We are excited about the Gateway API as a common networking standard across both mesh and ingress.
O: Thank you, Maya. With OpenShift we have four use-case-driven installation experiences, with a variety of control and automation. From left to right, we have the fully automated installation experience that we know as IPI, installer-provisioned infrastructure: it creates the infrastructure resources across all the providers that we have, and it's fully automated. And then we can also have full control when we install OpenShift with user-provisioned infrastructure, where the user needs to create the infrastructure.
O: Along with these two, we have the two latest additions. One we call the Assisted Installer; that is the fully interactive and connected installation experience. If you haven't tried it, you will see that it's a hosted, web-based, completely interactive installation, for vSphere, for Nutanix (the latest addition), but also for platform-agnostic and bare metal. And our latest addition, focused on disconnected installations and air-gap types of installations, is what we call the agent-based installer, added in 4.12.
O: Let's see what's new in the integration of OpenShift with our providers. Next slide. Starting with the integration with vSphere: many users were asking us, "hey, I want to use vSphere, but I cannot use something that I can use with other providers such as AWS", which is distributing OpenShift across zones, in the case of AWS, regions and availability zones. So, starting with 4.13, you can do exactly that.
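As a sketch of the shape this takes, here is an illustrative install-config.yaml fragment defining a vSphere failure domain; the field names follow the 4.13 zonal vSphere documentation, and all values are placeholders:

```yaml
# Trimmed install-config.yaml sketch: spreading an OpenShift cluster
# across vSphere failure domains (region/zone pairs mapped to
# vCenter topology).
platform:
  vsphere:
    failureDomains:
    - name: us-east-1
      region: us-east
      zone: us-east-1a
      topology:
        datacenter: dc1
        computeCluster: /dc1/host/cluster1
        datastore: /dc1/datastore/ds1
        networks:
        - vm-network-1
```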
O: Here we go with some notable changes in OpenShift 4.13 that I wanted to cover. First, starting with OpenShift 4.13, vSphere 7.0 Update 1 is not supported anymore; when you want to upgrade to 4.13, make sure that you are at least on Update 2. We are also introducing support for vSphere 8.0 in 4.13, and if you are using 4.12, starting now you are going to be able to install it on vSphere 8 as well.
O: In this release we are also introducing support for three-node clusters, also known as compact clusters, and finally, dual-stack VIPs (IPv4 and IPv6 for our VIPs) are now supported in this release. Let's go to the next slide for a few more notes about vSphere.
O: Another thing that we are adding in this release is encryption: you can deploy OpenShift on encrypted VMs, and you can also encrypt PVs provisioned with the vSphere CSI driver. Talking about CSI, on vSphere CSI migration: new installations of OpenShift 4.13 will have CSI storage by default. Existing clusters updating from 4.12 to 4.13 won't get it by default; you will be able to opt in, and only in the migration to 4.14, our next release, will it be enabled. The reason is an unresolved issue in vSphere.
O: I left some links in case you are interested in following the resolution; we're working with VMware on fixing this user experience issue. There is only one feature affected, which is the first one, the zones support that I introduced earlier: that will only work in new installations of OpenShift. And finally, sometimes users and customers ask us: "hey, I'm working with a cloud provider that's offering vSphere as part of their offering; can I install OpenShift on them?"
O: Well, most of those are VMware Cloud Verified providers, and as long as you meet the vSphere requirements that we have for installing OpenShift, you are supported. We wanted to be very explicit with this, and now, starting in 4.13 (and we are also documenting it for 4.12), this is explicitly included in the documentation. Next slide.
O: And a note about something new in the agent-based installer. Remember, this is the installer that's focused on disconnected installations, air-gap installations. When you install with the agent-based installer, you configure everything for your cluster, then you create an image, and everything that is needed to install is contained in that image. Sometimes what happens is that users don't know the details of their network before creating that image.
O: You may create the image for a cluster with everything except the networking, which you don't know yet. So, starting with 4.13, when you boot the image that installs OpenShift with the agent-based installer, in disconnected environments (or connected, if you want), you will be able to enter the network details interactively. You don't have to; you can still pre-create all the network and host networking configurations. But if you so wish, you can do it at installation time. Next slide.
O: And an update on the Bare Metal Operator. You know that you can install OpenShift on bare metal with platform integration, where OpenShift is aware of the bare metal nodes and you can start them, reboot them, do maintenance, and obviously install and expand the cluster with this platform. But we have some users with large clusters on platform "none" (that would be the agnostic integration, with no platform awareness) who say: "hey, we love the Bare Metal Operator."
O: "We would really like to automate the provisioning of new bare metal nodes using the BMC remotely, like you do with the Bare Metal Operator; that would save us tons of time." This is a kind of automation that's very simple, using Redfish and virtual media mapping of the image that contains OpenShift. Well, starting in 4.13, we allow you to do this with UPI clusters that don't have the bare metal platform integration. And with this, I pass it on to Mac.
P: You can disable these operators by setting the baselineCapabilitySet and additionalEnabledCapabilities parameters in the install-config manifest file prior to the cluster installation. If you choose to disable these capabilities during installation, you can always enable them after the cluster has been installed.
P: If you want to learn a bit more about this feature, please refer to the optional capabilities product documentation, and there is also a blog published on our Hybrid Cloud blog site about making your Kubernetes platform composable, where you can learn more about this particular feature. Next slide, please.
P: Another one is using confidential VMs for OpenShift on Google Cloud, to protect the confidentiality of data by encrypting data in use, while it's being processed. Users running OpenShift on Google Cloud can be confident that their data will stay private and encrypted even while it is being processed. This feature will be supported as a technology preview in this release.
P: The workloads can't run on the same nodes used for the OpenShift control plane. We are also adding support to deploy OpenShift in the most recent regions that have been announced by AWS and Google Cloud, and you can see the whole list of these new regions on this slide. And finally, we have an enhanced installation experience when deploying OpenShift on AWS Local Zones.
B: Yeah, Azure user tags are available now as tech preview in 4.13. With this feature, users can use the new platform.azure.userTags field in the install-config.yaml to configure the required tags to be applied to the Azure resources. Tags can only be configured during cluster creation. Along with user-defined tags, OpenShift adds the tags required for its internal use to all resources. User-defined tags are supported for resources created on the Azure public cloud alone.
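An illustrative excerpt showing where those tags go; the keys and values are placeholders:

```yaml
# install-config.yaml excerpt (tech preview): user-defined tags applied
# to the Azure resources the installer creates.
platform:
  azure:
    region: eastus
    userTags:
      cost-center: "1234"
      owner: platform-team
```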
Next, I'll pass it to Gail.
R: Thank you, Heather. OpenShift on OpenStack: in 4.13 we're introducing dual-stack support in dev preview, mostly driven by our telco customers looking to expand their 5G deployments on OpenStack. We are adding basically a full dual-stack-supported cluster in the preview, which basically means dual stack on both the control plane and the primary interface, in addition to the secondary interfaces via Multus, which already supported dual stack.
Q: Hi. With OpenShift 4.13 we are going to GA crun and cgroups v2, which will be non-default. That means it's not going to be enabled out of the box; customers have to follow the OpenShift documentation to enable it. crun will provide you better performance, as it is faster and has a lower memory footprint, and crun plus cgroups v2 will provide you better stability.
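As a sketch of the opt-in for the crun side (verify the exact recipe against the 4.13 documentation), a ContainerRuntimeConfig that targets a machine config pool:

```yaml
# Illustrative: switch worker nodes to crun as the default runtime.
apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: enable-crun-workers
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  containerRuntimeConfig:
    defaultRuntime: crun   # replace runc with crun for this pool
```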
One important thing to note here is that any critical kernel bugs coming in will be fixed for cgroups v2 and not for cgroups v1. Next slide.
More exciting stuff: we are going to GA the custom metrics autoscaler. Previously it was in tech preview in 4.12; now it is going GA, and along with that we will provide GA support for the Prometheus scaler and tech preview support for the Kafka scaler. Next slide.
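The custom metrics autoscaler is based on KEDA, so a Prometheus-driven scaler looks roughly like this ScaledObject; the target name, query, threshold, and server address are placeholders:

```yaml
# Illustrative ScaledObject: scale a Deployment on a Prometheus query.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: http-scaler
spec:
  scaleTargetRef:
    name: my-app                    # hypothetical Deployment name
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
  - type: prometheus
    metadata:
      serverAddress: https://thanos-querier.openshift-monitoring.svc:9092
      query: sum(rate(http_requests_total{job="my-app"}[2m]))
      threshold: "50"
```

On OpenShift, queries against the in-cluster monitoring stack also need an authentication trigger configured; see the custom metrics autoscaler documentation.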
So, the Run Once Duration Override Operator: there are certain pods or jobs that run only once in their lifetime.
Q: Now, with this operator, you can define the duration of how long these run-once pods or jobs can run.
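The shape of that configuration, as a hedged sketch (field layout per the operator's docs; verify against your release): a CR that caps run-once pods, that is, pods with a restart policy of Never or OnFailure, via activeDeadlineSeconds:

```yaml
# Illustrative RunOnceDurationOverride: cap run-once pods in opted-in
# namespaces at one hour.
apiVersion: operator.openshift.io/v1
kind: RunOnceDurationOverride
metadata:
  name: cluster
spec:
  runOnceDurationOverride:
    spec:
      activeDeadlineSeconds: 3600   # placeholder duration
```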
So you can use this operator to define that duration. Next slide: allowing mirroring by tag. Let's take an example: you have a mirror registry and you pull an image from that mirror registry. You now have the ability to attach a tag to that image, and it will help you improve the lifecycle management of the images.
Q: For example, when you have new images, you pull the new images and add a new tag to them, but you can always refer to the old image that you are using in production. Over to the next presenter.
E: All right, hey, thanks. I'm back with some really exciting news for security and ACS.
E: So we talked about the launch of the cloud service going live. Part of the effort for that was a change in the architecture, where we're now using Postgres. We are about to announce a major release, 4.0, that's coming up very, very soon, where Postgres is now Central's database. What you get out of it, whether you're on-prem or in the cloud, and especially when you're on-prem, is performance and scale improvements.
E: You get an easy upgrade; we've worked hard to make sure that the migration from 3.74 with the operator is really smooth. Some customers will ask themselves, "has the migration really completed?" It's really that smooth. And as you understand, we've been running with Postgres, in fact on Amazon RDS, since 3.73, so we've gained a good amount of experience and caught the issues and fixed them. So we're very excited about 4.0.
E: The next stage in this architecture, and people will be asking this as well: can I use my own Postgres? The answer is absolutely yes. It is now in tech preview, so we do encourage people to start testing your systems with your own Postgres, if that's the direction you want to go, and we're seeking your feedback for any issues that you may find with your own Postgres systems. In the future, that will obviously become GA.
E: Next slide, please. We are also excited to have some really interesting vulnerability management capabilities.
E: Your security for an OpenShift cluster starts at the node, and we are now introducing the ability to scan the Red Hat CoreOS host operating system. Of course, CoreOS is the operating system that runs your OpenShift nodes.
E: There are very few products that do that, and we are proud to be one of them. What this gives you, obviously, is visibility into the state of vulnerabilities on the operating system that you're running, and I say "when" and not "if", because vulnerabilities are always found: when vulnerabilities are found, you are informed, you know what's going on, and you can pursue an upgrade from Red Hat to address them, at your own pace, based on your processes.
E
The other interesting announcement: as people who may have been tracking this know, we are working on consolidating the scanner for ACS with the Red Hat Clair scanner. This is a long path, but we have reached an important milestone where we are now consolidated with Clair version 4. With 3.74 we introduced an integration option so that you can use the upstream Clair v4, and we're continuing to work on our path to a single scanner for Red Hat and ACS.
E
Next, please. Among other improvements we've introduced, you'll be interested to find that we have validated FIPS compliance on Red Hat OpenShift for FIPS-aware customers. We have introduced processes listening on endpoints: customers have asked us, "I want to understand what my processes are listening on" — certain sectors are more sensitive to this than others — and people should also look at the Network Observability operator from Red Hat.
E
This goes a little deeper: now you can actually understand, at a process level, who's listening on what. It is initially available through the API, and in the future we plan to add it to the UI as well. Also very exciting: we announced support for IBM platforms, based on customer demand, so you can now run ACS on IBM Power, IBM Z, and IBM LinuxONE. Next, please. The other really exciting feature is one we've started introducing in steps.
E
It's what we call Network Graph 2.0. The network graph in ACS helps you understand your security posture when it comes to networking: you can look at your connectivity, you can understand which flows are possible, you can manage a baseline to track that, and you can actually create policies based on the flows you allow and generate network policies.
E
This is still being done in steps because we want your feedback, but be aware that we are progressing: Network Graph 1.0 will actually disappear in 4.1, and what we now call Network Graph 2.0, which you can see side by side in ACS, will just become the network graph. Next, we talked about network policy. Some of the field people and customers who have seen this have been really excited — some call it a game changer. What this is about: you are tasked with creating network policies for Kubernetes, for OpenShift.
E
It's a really hard problem, and part of the reason is that it's not always clear who owns it. Is it dev? Is it the security architect? When is it done? If it's done in production, it's too late. If it's done during dev, you're asking your developers to write some quite complex YAML, and they may not have the full picture. So really, the only reasonable answer to this problem is to automate it, and that's exactly what we enable now.
E
You can run the ACS command line in a dev environment, and we will analyze your deployment environment. All we look at are your YAML resources; we're not expecting you to do anything out of the ordinary. If you're following best practices and you have all those resources in a folder, that's all the information we need, and based on that we actually generate network policies — tight, ingress and egress.
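For illustration — assuming the `roxctl netpol generate <folder>` workflow described in the ACS documentation — the generated output is plain Kubernetes NetworkPolicy YAML, roughly of this shape (names, labels, and ports are hypothetical placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-netpol      # generated per deployment; name is illustrative
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes: [Ingress, Egress]
  ingress:
  # only the flows observed in the analyzed manifests are allowed
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - port: 8080
      protocol: TCP
```

Because the output is standard NetworkPolicy resources, it can be committed alongside the application manifests and applied by any GitOps pipeline.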
E
It is out there, ready for people to use — give us feedback. Honestly, we have not yet found an example where this doesn't work; that doesn't mean you won't find one, but please try it out and give us feedback. We're super excited about this capability. And one more exciting announcement is the introduction of a concept that we call ACS collections.
E
What this is, is an object in ACS that gives you better descriptive language to tell us what you care about in your deployment infrastructure. If you want to say, "hey ACS, I want to apply a certain set of policies" or "I want to filter a certain area in my environment," today in ACS you kind of have to spell that out again and again. With ACS collections, we allow you to name those areas, and that naming is reusable.
E
It's a named reference: you can reuse it in multiple places, and it's very powerful because it's recursive, so you can actually create a hierarchical view of your environment — say, this is an area owned by my team, and multiple teams are part of dev. Now you can apply policies or filters, or what have you, to that name, and all those rules are resolved at runtime, which means you can create rules for things that don't even exist yet, based on labels and names.
E
Now, it's important to understand this is a big feature, and we're introducing it in steps. Right now, in 4.0, this feature is only available in the new vulnerability reporting module, and in the future we will introduce it to more and more features in ACS. Again, as all these new features come out, we really want your feedback — so go ahead, use them, tell us what you like, and tell us what you need more of. Next slide.
E
Just to recap: look at all this greatness since 3.70, when we announced the field trial for ACS — now that's going live. You can see the timeline in front of you, with ACS 4.0 coming out in the very near future. And over to you again, Anjali.
D
Thank you, Boaz. I am going to give some updates on OpenShift security and control plane features. One of the requirements we have consistently heard from customers is to use AES-GCM — Galois/Counter Mode — which provides a hardened cipher for encrypting etcd data at rest. AES-GCM is a block cipher mode of operation that provides high-speed authenticated encryption and data integrity. Customers such as our telco customers benefit from having this hardened security cipher to meet their compliance needs.
D
AES-GCM is an improvement over the AES-CBC cipher, which is more vulnerable to padding oracle attacks. You can enable this feature in the API server configuration, as shown on this slide. This configuration enables using AES-GCM ciphers with a random nonce and a 32-byte key to perform the encryption. The encryption keys are automatically rotated once per week. Next slide, please.
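The API server configuration being described looks roughly like the following sketch — the `aesgcm` encryption type is selected on the cluster-scoped `APIServer` resource:

```yaml
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  encryption:
    # switch etcd encryption at rest from aescbc to aesgcm
    type: aesgcm
```

After this is applied, the cluster re-encrypts the relevant etcd resources in the background; progress can be watched via the API server operator conditions.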
D
In keeping up with the upstream Kubernetes policies for pod security admission: in OpenShift 4.11 we introduced PSA with global privileged enforcement, and restricted profiles for warning and audit. This was mainly to help users opt their namespaces into PSA with namespace labels. With our current release, we still have privileged mode by default in the global config, but we are introducing the ability for users to turn on global PSA in enforcement mode using feature gates. This is a Tech Preview feature, and it will help users test their workloads against the enforcement.
D
This feature is also backported to OpenShift 4.12. We intend to move the global configuration to enforce the restricted PSA profile in a subsequent release. Additionally, in this release we made some improvements to the pod security violation alert messages: they are now more informative and contain information on OpenShift-owned workloads and namespaces.
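Independently of the global setting, a namespace can be opted into a PSA profile today with the standard upstream labels — a minimal sketch (the namespace name is a placeholder):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app   # hypothetical namespace
  labels:
    # enforce rejects violating pods; warn/audit only report them
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```

Running with `warn`/`audit` at `restricted` first is a low-risk way to see which workloads would fail before turning on enforcement.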
S
Yes, sir — hello world, and welcome to What's New in RHACM 2.8. We are delivering value from wall to wall, and as you study these slides you're going to scratch your head thinking, how did they do that? First of all, we are the delivery vehicle for hosted control planes, also known as the HyperShift project — so excited, and how many times have you heard the word "excited" today — with a technical preview of the bare metal agent, bare metal KubeVirt, and AWS providers.
S
We continue to see those providers come in, and we continue to see customers telling us they need more. Get in there, kick the tires, and tell us how it works for you. Fine-grained RBAC for observability is in dev preview. We've had this ask across a range of customers, from retail to financial services; being able to provide that fine-grained scope for users, down into namespaces, across the observability stack really helps us explore the space with dev teams and ensure that centralized management can still offer value out to the app developers.
S
We already support AWS STS and, with GCP, token-based authentication, so we ensure that observability functions well with your security posture across the AWS and GCP clouds. And finally, the ApplicationSet pull model within OpenShift GitOps with ACM is a thrilling opportunity for high-scale and edge scenarios: being sure that we can manage out to the edge at high velocity, ensuring that the pull model scales to your needs. Really fun and exciting work with the Argo CD upstream, making sure our team is contributing value out beyond the ACM walls.
S
Next slide, please. Oh yeah, the Global Hub — here we are. We have had customers across retail, telco, and financial services asking for this for over a year now. What's exciting about this Tech Preview release is that our collaboration with IBM Research has provided a deep capability — I mean deep — providing things like a Kafka integration and compliance trends, really ensuring that your team sees the value across the fleet, whether they're regionally separated, globally separated, data-isolated, or at high scale.
S
This is awesome stuff, and I appreciate all of the community that supported us as we delivered this Tech Preview. You'll see the policy compliance status trend, where our customers really tell us they need to see this over 30 days for audit purposes; they might even need to see events within that view as well. The Tech Preview lets you quickly assess and audit the compliance states across those clusters and really narrow in on critical events — for example, in your production environment.
S
This is awesome stuff. I appreciate the teams and their collaboration as they worked hard to deliver this; it has been an effort. We appreciate all that you can do to look into our Tech Preview and tell us how it works for you. Up next is Christian Stark, to tell you a bit about our governance and policy support features.
M
Thank you, Scott. I would like to give you an update about what's new in ACM's governance framework. An often-used feature is templatized policies, and here we have added support for ranges: as a policy user, I would like to use ranges in my policy templates so I can avoid duplication in my template definitions. As a policy user, I would also like to use conditionals, so that I can avoid duplicating policies that are needed for different environments.
M
We have also worked on making the adoption of Gatekeeper easier, to provide a more native user experience. Both improvements are also included in the policy generator, which generates ACM policies based on Gatekeeper resources. Last, I would like to highlight that we provide an out-of-the-box policy set for installing and managing OpenShift Platform Plus, which gives you the option to easily control all of the artifacts from a central place — for example, using GitOps. We would be happy if you would try out the new features and provide us feedback. Passing to Jeff.
A
Thank you, Christian. You know, management at scale is in our DNA for ACM, and we've been on this presentation for several releases now talking about single-node OpenShift and being able to manage thousands of single-node OpenShift clusters in the context of our telco and retail scenarios.
A
We've been taking a look at management and scale for mixed fleets. A lot of our customers are coming to us saying: that's great, but I'm not really a single-node user — what if I have some number of clusters with a three-node configuration, a combination of single-node OpenShift, and what we would call standard clusters, growing the number of control plane nodes and worker nodes within the fleet?
A
We've done a lot of performance evaluation across these mixed fleets, and we're very confident that we're going to be able to provide the necessary scale and management for those mixed fleets. Next slide.
A
And finally, continuing the trend on business continuity: as we saw in 4.12, in conjunction with OpenShift Data Foundation, we delivered a Metro DR scenario where ACM does the heavy lifting — ensuring the clusters are created the same and configured the same, and orchestrating the failover of an application from one cluster to the next — while ODF focuses on data management and synchronization. In this release, in addition to the Metro DR scenario we released in 4.12, there is Regional DR, with asynchronous replication of data across regions.
A
It will not be an RPO of zero, but you can tune and configure the synchronization and replication. Again, this feature is delivered in conjunction with OpenShift Data Foundation, in the advanced capabilities within it. So please take a look at the release notes and find where you can get the combination of ACM 2.8 and ODF 4.13 to provide business continuity, in both Metro DR and now Regional DR. And after this slide, we're going to hand it over to Daniel, who's going to talk to us about Quay.
U
Thank you, Jeff. A lot is happening in the space of central and scalable container registry services, so let's take a quick peek at what you can expect around OpenShift 4.13. Around that date, you will start to see our managed container registry service, Quay.io, move into the Red Hat Hybrid Cloud Console. This will be a phased process, of course, so the Quay.io interface at the existing domain will remain available, but around the 4.13 time frame — end of May — you will start to be able to manage your Quay.io content there.
U
You can manage your Quay.io content inside console.redhat.com, so you don't need another login, and you don't need to switch to a new interface. It will all be there alongside all our other Red Hat managed services and OpenShift cluster management. This also unlocks a couple of additional capabilities that we will add later, around additional billing options alongside the existing credit-card-based payment.
U
You will be able to burn down your committed AWS spend budget via the AWS Marketplace by purchasing Quay.io there, or you will be able to give your Red Hat account team purchase orders for yearly Quay.io plans with upfront payment. And with that move of Quay.io into console.redhat.com, we will also start to see a completely new user interface, for both the managed service Quay.io and the Red Hat Quay self-managed product.
U
It is a long-overdue overhaul of the user interface we currently have. It's much more aligned with the look and feel you're familiar with from Red Hat OpenShift Container Platform, and it will unlock a couple of very effective ways to manage your content — the vast amount of content you typically find in a central registry at scale — in a very effective yet intuitive manner. Quay 3.9, our self-managed product releasing around the same time as OpenShift 4.13, will also boast a completely new logging integration.
U
You will be able to forward audit events from Quay into Splunk, which offloads that traffic from the Quay internal database — which can be quite significant, especially in large environments. We've also increased the logging coverage around pretty much anything concerning users, organizations, and their life cycle in the product, to keep a complete audit trail of everything that happens inside the system. Last but not least, Quay 3.9 will also feature a very refined storage consumption tracking mechanism.
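As a rough sketch of what the Splunk integration looks like in the registry's `config.yaml` — the key names follow the Quay configuration guide, and the host, token, and index values below are placeholders:

```yaml
# Quay config.yaml fragment (illustrative; consult the Quay 3.9 docs
# for the authoritative field names and defaults)
LOGS_MODEL: splunk
LOGS_MODEL_CONFIG:
  producer: splunk
  splunk_config:
    host: splunk.example.com   # placeholder
    port: 8088
    url_scheme: https
    verify_ssl: true
    bearer_token: <hec-token>  # placeholder HTTP Event Collector token
    index_prefix: quay_audit   # placeholder
```

With this in place, usage/audit events are shipped to Splunk instead of accumulating in Quay's internal database.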
U
We introduced this already in 3.8, but in 3.9 storage consumption tracking is much faster, especially for very large registries with a lot of content, and also much more accurate. In particular, it will correctly account for layer sharing — not just within a repository between tags, which is what we already had, but also between different repositories of the same organization. This greatly incentivizes users to use common base images like RHEL 9 or UBI 9.
L
We continue working towards consolidating our OpenShift observability by providing equal emphasis to what we call the five pillars of observability: data collection, data storage, data delivery, data visualization, and data analytics — encompassing multiple data sources such as metrics, logs, and traces. So let's take a look at what's new for each of these areas, starting with data collection and the monitoring parts, to align with more and more requests to customize your data.
L
We are excited to present enhancements in monitoring and optimization: you can customize your node-exporter collectors, toggling which collectors you need and what you're interested in. Continuing with customization, we also now add scrape profiles in the cluster monitoring operator; with the same design that customers have been asking us for, you can now tailor your monitoring system to your own needs. We also added vertical pod autoscaler metrics to help you optimize your infrastructure and your applications.
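The node-exporter collector toggles described above live in the usual cluster monitoring ConfigMap — a sketch along these lines (the specific collector names shown are examples; which collectors are configurable depends on the release):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    nodeExporter:
      collectors:
        # disable a collector you don't need...
        buddyinfo:
          enabled: false
        # ...and enable one you do
        cpufreq:
          enabled: true
```

Trimming unused collectors reduces the metric cardinality each node ships to Prometheus.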
V
Along with this, in OpenShift 4.13 we will have the Logging 5.7 release. On the collection side, in 5.7 we have, within Vector, multi-line exception detection: Java stack traces, for example, will be forwarded as single log entries.
W
For distributed tracing, in 2.8 we introduced as Tech Preview the integration with Tempo as a backend for distributed traces. What Tempo gives you: it is open source, easy to use, highly scalable, and able to process a high volume of tracing signals. Also, from the data perspective, one feature provided in this release in Tech Preview is the ability for users to gather traces from multiple OpenShift clusters and forward them to a single Tempo instance.
W
So if we move to the next slide — data storage. How does it go, Roger?
V
And in 5.7 we also address an outstanding request that customers have had for us: within Loki, OpenShift administrators and application owners can create alerting rules based on logs.
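A log-based alerting rule for Loki is expressed through the Loki Operator's `AlertingRule` custom resource — a minimal sketch, where the namespace, rule name, label selector, and threshold are all hypothetical:

```yaml
apiVersion: loki.grafana.com/v1
kind: AlertingRule
metadata:
  name: app-errors          # hypothetical
  namespace: my-app         # hypothetical application namespace
spec:
  tenantID: application
  groups:
  - name: errors
    interval: 1m
    rules:
    - alert: HighErrorRate
      # LogQL: rate of lines containing "error" over 5 minutes
      expr: |
        sum(rate({kubernetes_namespace_name="my-app"} |= "error" [5m])) > 10
      for: 10m
      labels:
        severity: warning
```

The rule is evaluated inside the Loki stack, and the resulting alerts surface in the usual alerting UI mentioned later in this section.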
W
It actually provides the same data access mechanisms, so from the end-user perspective it enables a black-box replacement of the backend without impacting the data delivery experience at all, which is pretty convenient. So, data visualization is the next one.
V
And in 5.7 we have additional data visualization support. We have support for the log-based alerts again: they are defined within data storage, but you can also find them within the alerting UI for visualization. We also have an improved user experience in the web console: plugin text translation has been added within the logs UI, and users are now able to configure the front-end query limit.
W
For data visualization for distributed tracing: we included Tempo as Tech Preview, but the Jaeger UI is still provided to visualize the data coming from both Tempo and Elasticsearch setups.
L
To conclude, the fifth and last pillar of our observability journey is data analytics. From a monitoring perspective, OCP console users can now filter data by node attributes in the monitoring dashboard, which means you now have better visibility into specific nodes and their availability.
X
The functionality is available in preview, and while using it you don't need to turn on the beta toggle in console.redhat.com. Also, new Insights recommendations targeted at OpenShift Data Foundation, the OpenShift cluster version operator, and the cluster autoscaler operator were added in order to help you with these components of OpenShift. Such recommendations are now also available on the cluster history page, so you can track the history of recommendations raised for your environment. There's also a lot of news within cost management for OpenShift — why don't you take it?
Y
Thank you. Yes — cost management is our service that will tell you how much each namespace, cluster, node, or tag is costing in your OpenShift cluster. We have made a lot of improvements. One of the things we have been improving over the past months: your workloads in OpenShift are not running on thin air. They run on a cluster, which has a control plane that costs some money, and then there's the worker plane.
Y
On the worker plane there are the workloads, but there's also always some spare capacity, and that capacity has a cost — someone has to pay for the control plane and for the cost of the unallocated capacity. We started reporting that a couple of months ago; now we are allowing you to go into the cost models and say, yes, I want to actually distribute these costs on top of the user projects. That's one of the things.
Y
Another thing we have done is a lot of small changes in the way we calculate costs, so that we are sure we are distributing the full cost. Next slide, please. By the way, we now distribute the cost of the clouds in a better way, so that it better reflects your cloud bill.
Y
Another thing we have done: so far in cost management, when you installed the operator on the cluster side, it would only start uploading data from that moment on. What we have done now is upload any past data that you have as well, up to 90 days back, and we will also fill gaps that you may have — say you had a connectivity problem, or you mistakenly uninstalled the cost management operator, causing a data gap. That's no longer the case.
Y
We are going to fill that in for you, so you have 100% of your costs. Another thing: some customers — as most customers do — have a lot of cloud accounts, to the point that they don't even know how many. But others do the opposite: they have one or two big cloud accounts shared with everybody, so they have a lot of things in there.
Y
They have Windows, they have cloud services, they have their Red Hat OpenShift stuff, and that was a lot of data to be sent to console.redhat.com. What we are now allowing users to do on AWS, Azure, and GCP is to say: OK, I want to share only this subset of data with Red Hat. And of course, we are also expanding to new clouds: Oracle Cloud Infrastructure is now available as a cloud itself; the next thing is OpenShift on Oracle Cloud.
Z
Thank you. Let's take a look at some of the networking work we have done — the nuts and bolts of your cluster. First up, we're very happy to announce the general availability of the AWS Load Balancer Operator. It manages an instance of the AWS Load Balancer Controller and serves Kubernetes Ingress resources by provisioning AWS Application Load Balancers. This is a day-2 operator that can be installed on OpenShift 4.13 as well as 4.12.
Z
This is fully supported on ROSA and provides AWS STS support, along with cluster-wide egress proxy support, and it also enables support for clusters without the Cloud Credential Operator.
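As an illustrative sketch (the app name, service, and ports are placeholders), an Ingress served by the AWS Load Balancer Controller typically selects the `alb` ingress class and uses annotations to choose an internet-facing ALB:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo                 # hypothetical application
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
spec:
  ingressClassName: alb      # handled by the AWS Load Balancer Controller
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo
            port:
              number: 80
```

The controller watches for such Ingresses and provisions the corresponding Application Load Balancer in AWS.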
Z
Next up, on the hardware enablement side for OpenShift networking, we're very glad to announce the general availability of switching the BlueField-2 network device from DPU (data processing unit) mode to NIC mode. Along with it, we would also like to announce the general availability of OVS hardware offload support for ConnectX-6, and we're offering a Tech Preview for supporting installations on nodes with dual-port NICs.
Z
These came from customer asks, so this means you can deploy your OpenShift clusters on a bond interface with, say, two virtual functions on physical functions, using the installer. Next slide, please. We're very happy to announce dual-stack support on vSphere: with 4.13 you can use an installer-provisioned vSphere cluster with dual-stack networking, with IPv4 as the primary and IPv6 as the secondary address family.
Z
We already supported dual stack on bare metal environments with IPv4 as primary, and with 4.13 we now support IPv6 as the primary IP address family as well. This means that during your installation process you can configure IPv6 for the machine network, cluster network, service network, API VIPs, and Ingress VIPs. This update will enable users to fully utilize all the IPv6 features and capabilities on their dual-stack bare metal clusters.
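An IPv6-primary dual-stack setup is expressed in `install-config.yaml` by listing the IPv6 entries first — a sketch, with all CIDRs below being illustrative placeholders:

```yaml
# install-config.yaml fragment — IPv6 listed first makes it the primary family
networking:
  networkType: OVNKubernetes
  machineNetwork:
  - cidr: fd2e:6f44:5dd8::/64   # placeholder IPv6 machine network
  - cidr: 192.168.1.0/24        # placeholder IPv4 machine network
  clusterNetwork:
  - cidr: fd01::/48
    hostPrefix: 64
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - fd02::/112
  - 172.30.0.0/16
```

The API and Ingress VIPs would likewise be given with an IPv6 address first.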
Z
Moving on: OpenShift administrators now have the option to not allocate node ports for services of type LoadBalancer. This feature is very useful when you have an implementation of service type LoadBalancer that doesn't need node ports — typically MetalLB deployments. It helps you meet regulatory compliance requirements, because unnecessarily exposed ports are always problematic.
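This maps to the standard Kubernetes `allocateLoadBalancerNodePorts` field on the Service spec — a minimal sketch (the service name, selector, and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app-lb               # hypothetical
spec:
  type: LoadBalancer
  # skip node-port allocation; traffic is delivered directly
  # by the load-balancer implementation (e.g. MetalLB)
  allocateLoadBalancerNodePorts: false
  selector:
    app: app
  ports:
  - port: 80
    targetPort: 8080
```

With this set to `false`, no `nodePort` is opened on the cluster nodes for this service.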
Z
You can also use the OVN-Kubernetes CNI plugin for a secondary interface on your pods. This feature is currently in Tech Preview. Customers can use it to achieve control plane and data plane separation; they can define isolated tenant networks, or they can create a flat L2 network with their virtual instances on a secondary interface.
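A secondary OVN-Kubernetes network is declared via a `NetworkAttachmentDefinition` — a sketch along these lines, where the name and namespace are hypothetical:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: tenant-l2            # hypothetical
  namespace: my-app          # hypothetical
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "tenant-l2",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "netAttachDefName": "my-app/tenant-l2"
    }
```

Pods then attach to it with the usual `k8s.v1.cni.cncf.io/networks: tenant-l2` annotation, gaining an isolated flat L2 segment alongside the default cluster network.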
Z
Now, metrics are very critical in software-defined networking; they are typically used to monitor performance, utilization, and security, and to ensure compliance with your SLAs. We have added a whole bunch of them in this release of Red Hat OpenShift networking, and I request you to please take a look at the documentation on the added metrics. Lastly, we have the latest version of the Network Observability Operator, version 1.2, out now on OperatorHub. For those who have not used it, the Network Observability Operator basically gathers all kinds of data about your network.
Z
It helps you design, plan, and answer questions about your network. It also provides a visual representation to help you understand what is going on and to diagnose and troubleshoot networking issues. It has an eBPF agent that gathers all the information and presents it in a single pane within the OpenShift web console. So if you have not tried it out, I recommend you do. Next slide, please — I think it's over to Peter for virtualization updates. Thank you.
H
Thanks. OpenShift Virtualization is the ability to run VMs on your Kubernetes cluster alongside container workloads. We've done a lot of work introducing some new capabilities, both for traditional virtualization workloads and for new types of workloads. We've got a new VM provisioning workflow that uses cloud-like instance types, which simplifies the process even further — right now it's two clicks, and we're going to make that even simpler. For traditional admins, we've also made specific VM errors much more visible, making it easier to find and fix common errors.
H
We've also published some example Tekton pipelines, where you can do things like automatically build and deploy a virtual machine in a CI/CD pipeline. Also, at Red Hat Summit we've got a couple of customers talking about what they're doing with virtual machines on OpenShift, like the Israel Ministry of Defense, the National Oceanic and Atmospheric Administration, and Morgan Stanley. Skip the next slide and go to sandboxed containers, please.
H
Sandboxed Containers has been GA since 4.10. We have a new peer-pods technology that adds cloud support as Tech Preview. What a peer pod is: it allows you to spin up and orchestrate virtual machines for sandboxed containers outside of your cluster, inside any cloud. In this release, AWS and Azure are the first supported cloud providers, and others will follow along with specific API adapters.
H
Another valuable use case is running sandboxed containers for your CI/CD pipelines: a customer in the financial sector needed enhanced privileges for CI/CD runners and successfully achieved those goals using OpenShift sandboxed containers. And lastly, we've got the proof of concept for confidential containers, together with Microsoft on Azure. We already showed a demo of running Apache Spark workloads at KubeCon EU in Amsterdam, with completely encrypted containerized analytics in the cloud. I'm going to turn it over to Daniel to talk about the Operator Framework.
U
Thanks, Peter. In Operator Framework we are extremely busy at the moment creating a new version of the Operator Lifecycle Manager, with a completely revamped set of APIs that are much more user-friendly and GitOps-compatible. Yet we were able to sneak a really nice feature into 4.13 as well, on the existing OLM version, which allows you, from within your cluster, to look at all the available versions of an operator inside the various release channels — not just the latest version.
U
We would typically still recommend staying on top of the latest version in each operator channel you find in the operator catalog; this is what gets you the latest security advisories as well as any bug fixes. But some of our customers have really extensive test suites, where they move operator versions through various stages of testing into production. So we listened to that feedback and introduced a way for the catalog to surface the entire version history.
U
So you can really discover and take a release older than the newest, install it, and replicate what you tested in a staging environment. It's also very handy for verifying the content of a mirrored catalog if you're in an offline environment, or for finding the particular channel in which a desired release currently sits. Note that this is not quite ready yet for the OpenShift console.
U
This is a CLI/API-only feature; the console will add it in a subsequent OpenShift release. At the same time, we also addressed an issue in one of our release pipelines that led to the occasional disappearance of already-released software when a new version of an operator came out. This has been fixed now, and you should start to see these version histories accumulating with the package manifest API, as depicted on the slide. And that's it for me — I'm going to hand over to Greg, who takes us through OpenShift storage.
T
Thanks, Daniel, and hi everybody. Let's have a look at what's new in storage, starting with CSI drivers. We are introducing full support for Azure File CSI migration; this is automatically enabled by default and does not require any action. vSphere CSI migration has also been promoted to GA; however, we had to take a particular approach for this driver, as was mentioned earlier, due to some unresolved issues: upgraded clusters will have the migration disabled, with an option to opt in.
T
The OpenShift documentation will include all the details, including the list of issues as well as the steps to opt in. Continuing with the vSphere CSI driver, and as mentioned earlier, we are adding support for volume encryption. We are also ensuring support for zones backed by CSI through the installer, as well as post-install. On the AWS front, we are now supporting EFS cross-account mounts; this feature has been requested by various customers who store their EFS data in a different AWS account than the one used by OpenShift.
T
Finally, we are improving the LVM Storage CSI driver with several key features, including multiple storage classes, disconnected installations, and IPv6/dual-stack support, as well as several optimizations to reduce the resource consumption of the LVMS pods.
T
Next slide, please. Okay, next up we have a highly anticipated feature that enables administrators to control how OpenShift storage operators manage their storage classes. Up until now, operators were always reconciling their storage classes, and it wasn't possible to update or delete them. This is now possible with the introduction of a new storage class state parameter. This parameter can be set to Unmanaged, which prevents the operator from reconciling the changes, or set to Removed, which completely removes the storage classes.
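As a minimal sketch of what setting this looks like, the parameter lives on the driver's ClusterCSIDriver resource; the exact field name shown here should be verified against the 4.13 documentation.

```yaml
# Sketch based on the feature as described in the talk; confirm the
# storageClassState field name and accepted values in the 4.13 docs.
apiVersion: operator.openshift.io/v1
kind: ClusterCSIDriver
metadata:
  name: ebs.csi.aws.com          # example: the AWS EBS CSI driver
spec:
  storageClassState: Unmanaged   # or Managed (default) / Removed
```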
T
Please note that this feature is driver dependent and therefore requires explicit driver support, so please contact your driver vendor to verify whether they support it. It is also worth noting that the feature comes with a security admission plugin that controls which kind of namespace can consume inline volumes. This is a per-driver setting, and the default only allows privileged namespaces.
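As a hedged sketch of that per-driver setting, the admission plugin reads a security profile label on the CSIDriver object; the label key and driver name below are assumptions to verify in the OpenShift documentation.

```yaml
# Sketch: a profile label on the CSIDriver object tells the admission
# plugin which pod security level may consume inline (ephemeral) volumes.
# The label key shown is an assumption; unlabeled drivers default to
# privileged-only consumption, per the talk.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: csi.example.vendor.com   # hypothetical third-party driver
  labels:
    security.openshift.io/csi-ephemeral-volume-profile: restricted
```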
Next slide, please. Last but not least, we're introducing a new Tech Preview feature that improves OpenShift behavior when a node is shut down ungracefully.
T
Indeed, when a node shutdown is not detected by the kubelet's node shutdown manager, the pods and volume attachments are not properly deleted from the node, and it requires a manual step to release those and allow the pods to be scheduled elsewhere. By tagging the node with the new out-of-service state, the pods are now automatically deleted and the volume attachments are released. Once the node is back online, remove the change to allow the scheduler to pick it up again.
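The out-of-service marker described here corresponds to the taint from upstream Kubernetes' non-graceful node shutdown feature; a sketch of what it looks like on the Node object (the node name is a placeholder):

```yaml
# Sketch: the out-of-service taint that triggers automatic pod deletion
# and volume-attachment cleanup. Apply it only after confirming the node
# is genuinely down, and remove it once the node is back online.
apiVersion: v1
kind: Node
metadata:
  name: worker-1                 # example node name
spec:
  taints:
  - key: node.kubernetes.io/out-of-service
    value: nodeshutdown
    effect: NoExecute
```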
And that is it for storage, handing over to Rob for Telco. Thank you.
AA
Next slide, please. Hi folks, we're happy to share that we've extended our CPU isolation features to multi-node clusters. Some users may be familiar with CPU reservation, CPU isolation, and workload partitioning on single-node OpenShift clusters, but now we support these features on all deployment models. Note that, for the most part, these features are set when the cluster is installed and are not reversible, but there is some nuance.
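As a sketch of the install-time nature of this, workload partitioning is requested in the install config; the field name below should be verified against the 4.13 installer documentation before use.

```yaml
# Sketch: enabling workload partitioning at install time. The
# cpuPartitioningMode field is an assumption to confirm in the 4.13
# installer docs; it cannot be toggled after installation.
apiVersion: v1
metadata:
  name: example-cluster          # placeholder cluster name
cpuPartitioningMode: AllNodes    # pin platform workloads to reserved CPUs
# ...remaining install-config.yaml fields (platform, networking, etc.)...
```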
AA
I'm also pleased to announce a plethora of operational enhancements for Telco deployments. Most of these are focused on single-node OpenShift, but some are not exclusive to it. crun has matured to the point that it is generally available in 4.13 and provides improved CPU efficiency. LVM Storage has been optimized to improve CPU efficiency as well. The AMQP operator will be deprecated, and we're well ahead of its removal: we've replaced it with a native HTTP-based implementation for our Telco hardware and PTP event bus. This change benefits us in a variety of ways.
AA
We remove the dependency for DU deployments on single-node OpenShift clusters, and we remove an operator and additional processes running on the cluster, thus improving CPU efficiency. Again, there is no impact to the CNF experience when consuming these events with the Red Hat provided event bus sidecar. TALM's pre-caching feature previously downloaded a default set of content; we've improved that process by filtering out the unnecessary data that was previously being downloaded. This improves upgrade times when utilizing pre-caching. With that said, I'll pass it back to Jeff to close out this presentation. Thank you.
A
Thank you, Rob. Thank you very much to everyone who attended today; we really appreciate your love and support. This is a vast community of open projects across the CNCF, and we really appreciate your support. Please come out and touch base with your PMs if you have any feature requests coming into the future, and with that we'll go ahead and...