From YouTube: [What's Next] OpenShift Roadmap Update [Nov-2021]
Description
On November 30, 2021, the OpenShift PM team will broadcast the OpenShift Roadmap Update briefing to internal Red Hatters and directly to customers and partners on OpenShift.tv.
A
Hi, and welcome everyone to the Q4 2021 update to What's Next in OpenShift. As many of you know, the product team does a What's New and a What's Next presentation every quarter or so, and this is the What's Next Q4 update. The What's Next offers an overview of the direction, initiatives, and exciting new use cases and features, over a six- to 18-month time horizon.

A
That covers the overview and motivation for the roadmap that we're going to present to you. Specific details for each of the topics covered here can also be found in the appendix, which is a part that we are not going to cover, but you will have access to it after this presentation, and you can use that as a reference.

A
I have a fantastic team of fellow colleagues and product managers from the OpenShift team, who will be the speakers. In addition, we have the rest of the team present today as well. It is the hard work of not just us presenting, but also all of them, and also, obviously, by extension, all the engineers and the other functions that make this happen. So thanks to all of them, and you'll see them and hear them all talk today. Just in the way of introduction, I don't think I introduced myself. My name is [inaudible]; I'm one of the OpenShift product managers.
A
For the past 10 years or so, we have made the case for the open, hybrid cloud. As many of you know, our customers have innovated, competed, and succeeded in creating value for their customers through applications built for the hybrid cloud.

A
Those applications can range from traditional n-tier applications to more modern, cloud-native, microservices-based applications; they can encode more traditional business logic or rules, or more modern data analytics and AI; and they can be in-house developed applications or packaged applications from ISVs. No matter what, customers have developed and deployed these applications across the hybrid cloud footprint, everything from a physical data center to a public cloud and to the edge, and Red Hat OpenShift, built on Red Hat Enterprise Linux, has been the bedrock platform.

A
That continues to inform our roadmap, our initiatives, and our future. As a quick level set, as you probably already are aware, OpenShift can be consumed as a fully managed cloud service or as a self-managed platform. Managed OpenShift is jointly engineered and offered by Red Hat with the corresponding cloud providers, so that you can get started with a Kubernetes service very quickly.

A
OpenShift managed services include OpenShift Dedicated, Red Hat OpenShift Service on AWS (ROSA), Azure Red Hat OpenShift (ARO), as well as OpenShift on IBM Cloud and Google Cloud. In addition, you have the self-managed products from Red Hat. This includes OpenShift Platform Plus, which I'll touch upon briefly on the next slide, OpenShift Container Platform, OpenShift Kubernetes Engine, as well as other self-managed software offerings.
A
You are all probably already familiar with this: a block diagram, if you will, or a rendition of what constitutes Red Hat OpenShift. As a quick recap, OpenShift Container Platform is Red Hat's distribution of Kubernetes, built on top of Red Hat Enterprise Linux CoreOS, and it also includes many platform, developer, and data tools and services. That being said, especially over the past year, and we certainly anticipate more of this next year:

A
enterprises and organizations need to deploy and manage applications and clusters in a multi-cluster, hybrid cloud environment. They need to answer new questions in this context. How do I deploy applications across multiple clusters and clouds? How do I monitor these applications as well as these clusters, and drive updates and upgrades? Are my images free from vulnerabilities, and how do I ensure a secure supply chain? How do I store images for connected and disconnected users?

A
How can I integrate security into my entire process, from conception to production? OpenShift Platform Plus, which we introduced earlier this year, in the first quarter of 2021, comes as the answer, if you will. It includes, along with OpenShift Container Platform, Red Hat Advanced Cluster Management, Red Hat Advanced Cluster Security, and Red Hat Quay, integrated and tested together to address your management, security, governance, and registry needs, and more. We will discuss that in a separate section later.
A
As you all know, Red Hat is an open source company, and everything we do is upstream first, in the open communities of innovation. OpenShift Platform Plus and its component parts are all made of these fantastic upstream communities shown here; definitely very colorful, to say the least. On behalf of the entire product team, I wanted to acknowledge the work, the many contributions that come to us by way of these communities, and say thank you as we look towards the year 2022 and beyond.

A
Our mission is to enable our customers to accelerate the deployment of applications in hybrid cloud environments, and that obviously includes multiple clusters, through a rich, services-based experience that we are calling the hybrid cloud experience. You already see this in cloud.redhat.com, but more is coming, and a lot of the roadmap that you will see is in support of that. This hybrid cloud experience is comprised of three themes.

A
Unified experience, which you see here on the right, is the first of them: it brings you best-of-breed uniformity of experience for application developers, DevOps engineers, data scientists, data engineers, machine learning engineers, and, of course, admins and operations folks that span the hybrid world end to end. Security everywhere is the second theme, which offers tooling and capabilities to ensure applications run securely from conception to production, and that users interact in a compliant manner, in compliance with internal and industry standards. The third is platform consistency.
A
It provides a platform that tastes, smells, sounds, and feels the same no matter what the hybrid cloud footprint is, and it also provides a rich ecosystem of products and technologies, not only from Red Hat but also from our very strong ISV partner ecosystem, that enables users and gives them the choice to customize and get the best of breed that suits their particular need.

A
No matter what the cloud, or all the way to the edge, we have three pillars of execution, which you also see here in the center, and we do that in the context of the hybrid cloud experience and the three themes that I talked about earlier. You'll find that the rest of the presentation is organized along these three pillars and these three themes.
A
The first of these pillars, here on the left, is the core platform and developer tools pillar, which includes our investments in Kubernetes, Linux, and platform and developer tools. We know that we have added a lot of innovation over the years, four or five, maybe even more if you consider Linux over the past 15 to 20 years.

A
Innovation is not slowing down; there's more coming in response to new hardware accelerators, because of new specialized workloads which need new kinds of scheduling, because of innovations in networking, including network observability, GitOps and DevOps, and exciting new things for developers with regard to serverless, service mesh, and IDE experiences with CodeReady. This pillar is foundational to our other two pillars, which are the managed cloud services and the telco and edge pillars. The telco and edge pillar is in service of rapid innovation and needs from the 5G core and 5G RAN in the telco industry.
A
We have already seen major customer wins and adoptions in these markets, and we will continue to do so, and obviously this segment needs a lot of innovation. This includes their desire to develop and run container-native network functions, or AI and machine learning applications developed at the core and deployed at the edge with containers on a 5G footprint. This could also include collecting data at the edge, anonymizing it, and then cleaning it,

A
and then feeding it back into the core, or acting upon that real-time data. The managed cloud services pillar is the third pillar, and this is all about bringing OpenShift and application services from Red Hat and partners as a fully managed, SRE-backed service on the cloud of your choice.
A
This includes ROSA, ARO, and OpenShift Dedicated, which I touched upon earlier, but it also includes application services with SRE-backed services on top, such as API management, a streams service with Kafka, Red Hat OpenShift Data Science, subscription management, cost management, and Insights, all available via cloud.redhat.com as a rich web-based GUI, or also through APIs. In 2022, we'll be doubling down and introducing more innovations in this space, with more application, data, and managed services, and that informs a lot of the roadmap that you'll see.
A
As you all know (I'll touch upon this very briefly), when we released OpenShift 4, we went to a rolling-window lifecycle, so the lifecycle of a specific 4.y release lasted until the 4.y+3 release GA'd. So when OpenShift 4.8, for example, GA'd, OpenShift 4.5 reached end of life. This typically meant about 10 to 12 months of life, but based on feedback that we have gotten from customers and users,

A
this was constraining to our customers and users from an operational point of view. So we heard them, and therefore we are introducing this lifecycle change, which covers the minor, numbered releases, like the 4.y releases, for example. The highlights really are: we are changing from our current version-based lifecycle policy to a time-based lifecycle of 18 months for all minor releases of OpenShift 4.

A
This change will take effect with Red Hat OpenShift Container Platform 4.7 and higher. We are also designating even-numbered releases as EUS (Extended Update Support) releases, so that we can provide you a rich EUS-to-EUS upgrade experience between these even-numbered releases. And then finally, three OCP releases per year is in cadence with upstream Kubernetes, which is also going to three releases per year. So, what this all means is what you see on this roadmap slide.
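The time-based policy described above makes end-of-maintenance dates simple calendar arithmetic: GA date plus 18 months. A minimal sketch of that arithmetic, assuming a March 2022 GA for 4.10 as stated on the roadmap (authoritative dates live on Red Hat's published lifecycle page):

```python
from datetime import date

def end_of_maintenance(ga: date, months: int = 18) -> date:
    """Add a number of calendar months to a GA date (day clamped to the 1st)."""
    month_index = ga.month - 1 + months
    return date(ga.year + month_index // 12, month_index % 12 + 1, 1)

# OpenShift 4.10 is slated to GA in Q1 2022; assume March for illustration.
print(end_of_maintenance(date(2022, 3, 1)))  # 2023-09-01
```

The same helper shows why the EUS designation matters: with three releases a year, an 18-month window comfortably spans the gap from one even-numbered release to the next.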
A
In a nutshell, this is a one-slide summary of everything. I'm not going to go through each one of these, obviously, but we'll cover some of them throughout the rest of the presentation. We continue to innovate with exciting new capabilities across the core platform, application developer, and managed services pillars, as you can see here, through calendar year 2022 and beyond. OpenShift 4.10 will GA in Q1 of 2022, followed by 4.11

A
in early Q3 2022, and 4.12 will be the last release of the year. You can find details of these features and much more, as I said earlier, in the appendix of this presentation. With that said, and with this introduction, I'll hand it over to Scott Burns to take us through the hybrid cloud and the OpenShift platform.
D
We want our operations teams to be thinking and working on a fleet capability. We're just at that point where you can't have one or two clusters anymore; you even have clusters that are being chiseled out and defined for specific applications and specific workloads to run on that shaped cluster. Those could be clusters from the central data center, all through the edge tiers and beyond.

D
Let's hit the next slide to really hold on to that thought. As you standardize in this space, with one hub or multiple hubs, you start to see how, from your first cluster to the hundredth or the thousandth cluster, you really need the consistency of networking. You need the consistency of storage and tooling, ingress, egress, container registries.
D
Everything that we have built in our hybrid cloud and OpenShift Platform Plus model speaks to the user who has the frustrations, the concerns, and the challenges about managing this new environment: about making sure that the east-west traffic is tunneled correctly from cluster A to cluster B, and ensuring your ingress and egress are all managed consistently.

D
So what you see on this slide really represents our point of view for how your operations teams and your developers can start to interact with this platform in a way that's consistent. It provides the ability to go from one cluster to the next without a huge headache of disruptions in between.

D
We want that to look and feel as one, and that's a consistent delivery in the on-premise model or up in the cloud. So as I navigate the managed clusters, my fleet of clusters from one to a thousand, I don't have these jarring experiences from one tab to the next. I want to be able to flow into those consoles and see the metrics and see the data and see the applications as if it's all been unified, which is what we're going for here. We think that ultimately reduces the total cost of ownership.
D
It reduces the headache of skills and upskilling and the different tools that you need in that space, as we really bring the unification of these OpenShift Platform Plus capabilities to your teams. Then there's security everywhere.

D
That is such an important facet of the entire package here: making sure that your supply chain is secured from the beginning to the end of that application delivery. We're bringing cosign manifests and looking at secret management as areas that are huge headaches for our customers, especially in the GitOps model, where they want to be able to deploy those workloads consistently across clouds, again on premise with bare metal, or up in the clouds with any of your favorite public providers. We think that reduces your exposure and risk.
D
So as you funnel through this map, you understand the inbound traffic coming from the internet, and as it flows through these different protocols, you shouldn't have to care about whether that workload needs to be architected this way or that way. We want to handle that for you, eliminating the risk and challenges around different protocols and ensuring that there's uniformity in that flow of traffic.

D
You can see how we're aligning these layers so that you can take some of that thought off your plate and put it more towards what you're interested in, which is innovating with your applications. It's not that there has to be one single point of view; we want to encourage all of these network capabilities in the box, and you'll see there we highlight the Submariner capability, with that VPN tunnel connecting clusters across clusters in the east-west scenario.

D
So all of this is being built and packaged into OpenShift Platform Plus, ensuring that you have the option to automate and deliver the operations for your workloads.
D
Let's hit the next slide, please. To round this out, it really starts to make the most sense that we bring storage into this picture. It's not just an afterthought; it's something that you bring in on the day-one experience, something that you plan for and work around. So we're really unifying this experience as well, to bring in storage capabilities like CSI migration from the in-tree drivers, a couple of other key features around security everywhere with Kerberos mounts and secret stores, and finally, looking at platform consistency, CSI ephemeral volumes.
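For context on that last item: a CSI ephemeral (inline) volume is declared directly in the pod spec rather than through a PersistentVolumeClaim, so the volume's lifetime is tied to the pod. A minimal sketch; the pod name, image, driver name, and attributes below are hypothetical placeholders for whatever inline-capable CSI driver a cluster provides:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: inline-volume-demo            # hypothetical pod name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # hypothetical image
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    csi:                              # inline CSI volume: no PVC involved
      driver: inline.example.com      # hypothetical driver; must support Ephemeral mode
      volumeAttributes:
        size: "1Gi"
```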
D
This is about ensuring that ODF, OpenShift Data Foundation, and its storage counterpart are really providing the asynchronous capabilities that you're looking for, without having to target separate storage capabilities that you bring into this situation. So we're doing things like DR operators: bringing failover and failback operations to reduce your downtime, easing that disaster recovery scenario, and ensuring these consistent data foundation capabilities everywhere you go. We're all about increasing developer and admin productivity, reducing again the risk of disruptions to your business continuity, and reducing your total cost of ownership by standardizing storage across your fleet.

D
Of course, OpenShift storage is available across clouds; you'll find the CSI drivers available in any of your popular clouds. And with that, I'm going to hand it over to my colleague Jamie Scott, who's going to run you through some of the What's Next features in Advanced Cluster Security for Kubernetes.
E
Awesome, thanks Scott. In our path to enable you to secure and manage your first and your hundredth cluster, OpenShift Platform Plus really brings together a few bundled service offerings. It brings together Advanced Cluster Management, as Scott just described, but it also brings together Advanced Cluster Security for Kubernetes, which came from the StackRox acquisition, in an attempt to help you enable your first and hundredth cluster in a secure and compliant manner.

E
Through that, Advanced Cluster Security is going to focus the next year or so on bringing you a more unified experience. What that means is we want to break down cross-functional barriers to help you reduce cost, and we're going to do that by accelerating operationalization with managed services. This will help enterprise teams decrease the swim lanes within their organization and accelerate operationalization by being able to manage it unilaterally within their own organization.
E
We want to enable teams to innovate with confidence by helping them to bridge the skill gap between a security professional who has lived in the Kubernetes space and someone who might not necessarily have that skill set. We want to do that by identifying different risk indicators across expanded use cases, and also by enabling teams to remediate issues more effectively, giving them the information they need at their fingertips in order to fix issues, versus just identifying the issues

E
that are being deployed on Kubernetes. We do this through a policy engine and an API that works to establish continuous feedback loops throughout application development lifecycles, using tools that security teams and development teams use natively, such as PagerDuty and Slack, or SIEMs such as Splunk, Sumo Logic, or QRadar. And we enable you to do this across public cloud, private cloud, and multi-cluster, in our attempt to enable you to secure your first and hundredth cluster.
E
Consistency comes with a global deployment model, reaching all the way from the core data center to cloud regions and the far edge. Users of Quay are finding that they need suitable content distribution for Kubernetes, and Quay helps you reduce reliance on the single central registry instance that you're using; Quay is looking to do that with geo-replication.

E
This will help teams optimize consoles between hybrid environments. We're also looking to establish additional observability for how the consistent information is stored, and Red Hat is going to provide the necessary tooling to ensure observability can be delivered across multiple environments.

E
So we want to extend the platforms and locations where you can use the existing dashboards within the OpenShift console, and export observability metrics and log metrics. We're going to do this by optimizing an API experience in the OpenShift console. Next slide, please. And finally, to round out our investment in observability:
E
we want to provide additional platform consistency, and we're going to do that by allowing developers and administrators to acquire a common understanding of their traffic within and across multiple cluster boundaries. We're going to do that by establishing a topology for users to understand and share a viewpoint on network traffic flow and visualization.

E
Next slide. For those of you who aren't familiar with HyperShift: HyperShift really brings the externalized control plane to OpenShift in a multi-cluster environment. HyperShift is middleware for hosting OpenShift control planes at scale, and it solves the problem of cost and time to provision multiple clusters, as a means of portability across clouds and implementing cross-cloud workflows. So HyperShift is going to give you a fleet-level provisioning view for your clusters, in a way that gives you a myriad of benefits.
F
Thank you. So, built on top of enterprise OpenShift, these are some of the enhancements we've undertaken specifically to support telco and edge workloads on the platform. Telco workloads require high-performance, low-latency computing, and in order to achieve that, workloads need absolute resource guarantees to enable predictable performance.

F
Running telco workloads as microservices has its added benefits, which include continuous CI and upgrading various parts of the network seamlessly without breaking anything. Now, what we're trying to do is simplify network operations and management by making it practical to run all of the telco workloads on a common platform, and we do have the CNF certification process in place to ease the move. Finally, we've always looked to enable next-generation hardware, be it CPUs, NICs, SmartNICs, or GPUs, to facilitate an agile infrastructure with the latest and most efficient hardware. Next slide, please.
F
So, as we know, telco workloads need coherent and predictable resource alignment. It's about having CPU, memory, and devices, all the resources that are assigned to your pod, belonging to the same NUMA node; without this alignment, we are cognizant of the high performance penalty that one could see.

F
The first implementation of the NUMA-aware scheduler is based on the upstream RTE, or resource topology exporter, component, and we will be switching to the node feature discovery project in the near future. With topology-aware scheduling enabled, workloads should never be placed on nodes that cannot meet their resource needs aligned to their topology preferences. Next slide, please. We've always looked to support bleeding-edge networking hardware and accelerators on the platform.
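To tie the NUMA alignment described above back to what a user actually writes: in Kubernetes, Topology Manager (for example with the `single-numa-node` kubelet policy) only enforces alignment for pods in the Guaranteed QoS class, which requires requests and limits to be equal. A minimal sketch; the image and the device-plugin resource name are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: numa-aligned-workload
spec:
  containers:
  - name: cnf
    image: registry.example.com/telco/cnf:latest   # hypothetical image
    resources:
      # Equal requests and limits => Guaranteed QoS, the prerequisite for
      # Topology Manager to pin CPU, memory, and devices to one NUMA node.
      requests:
        cpu: "4"
        memory: 8Gi
        example.com/sriov-nic: "1"    # hypothetical device-plugin resource
      limits:
        cpu: "4"
        memory: 8Gi
        example.com/sriov-nic: "1"
```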
F
With 5G, this becomes even more essential and critical. With OVN hardware offload, we're looking to offload all of the data plane traffic flows and services to the NIC or FPGAs. Doing this can benefit telco NFV customers, who can now have high-performance data planes with improved networking services. With SmartNICs, one can look to isolate the control plane onto a separate cluster just for running infrastructure services, with those services running on Arm cores on the NIC, while the tenant workloads continue to run on the nodes.

F
This provides managed, accelerated infrastructure services, including networking, storage, and AI/ML, outside of the tenant cluster. Finally, with support for the latest hardware accelerators, like programmable FPGAs, crypto engines, or GPUs, all managed by OpenShift Operators, we're looking to accelerate the 5G core and RAN functions, like inline encryption, data plane encapsulation, and so on and so forth.
F
Next slide, please. So now we get into the RAN. The RAN is on the edge of the network; it's a crucial connection point between the end-user devices and the rest of the operator's network. With the current, ongoing 5G network transformation, one is increasingly seeing container-based, cloud-native solutions for the RAN. It is very important that we simplify network operations and improve stability, availability, and efficiency, all while serving an increasing number of devices with high-bandwidth applications at the edge.
F
Because it is at the edge, it is essential to have a very small-footprint, optimized infrastructure, but with very good performance, to meet 5G requirements. We do have single-node OpenShift, which fits the bill right here, and given that we're going to deploy hundreds and thousands of sites with hundreds and thousands of such devices, it is essential to deploy, manage, and upgrade this in an automated way at scale, thanks to Advanced Cluster Management and zero-touch provisioning.

F
All of these DUs that typically handle antenna coverage do a ton of calculation in real time, and we tune the nodes that run RAN real-time workloads, leveraging advanced timing and supporting hardware on the platform, to achieve such high performance. Next slide, please. ZTP, or zero-touch provisioning, is a way to deploy OpenShift clusters at scale in an automated way via ACM. It uses a declarative GitOps approach and bootstrap-in-place to deploy OpenShift on new compact topologies.
F
We're continuously looking to evolve and enhance this, specifically for the edge. We're looking to enhance this at the scale level: we're looking to support more than 2,000 SNOs (single-node OpenShift clusters) provisioned and managed by a single instance of ACM very soon, and to enable policy-based upgrades by defining groups of SNOs that can be upgraded independently of each other, for more granular multi-cluster management. Ideally, going forward, we would like to ZTP everything.

F
That's right, ZTP everything: the DUs, the C-RAN hubs, the additional infrastructure that is needed, all of which would make our lives easier when deploying, managing, and upgrading clusters at the edge with ease and at scale. Next slide. The challenge with the 5G network is to provide ultra-reliable, highly accurate timing synchronization over the 5G packet network.
F
The answer to this is the Precision Time Protocol, and this is the reason we have invested in enhancing it and making it more robust in the coming days. Given that we already have single-NIC ordinary clock, boundary clock, and an event bus for PTP events, we're now looking to build on top of these enhancements to the PTP stack and the PTP Operator on OpenShift. We're looking to enhance precision time control, and we're also scaling to a larger number of RUs

F
by having richer policies on how the system clock is set, with best master clock selection and SyncE support. We're also looking to upgrade to the linuxptp 3.1 stack, which has much richer features, improved algorithms, robustness, and enhancements, and we're also looking to support the grandmaster clock via the NIC. This would greatly reduce the cost of the cell site by moving this functionality from the cell-site router to the node. For those who are interested, we have a detailed roadmap of this in the later part of the deck. Next slide, please.
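As a rough illustration of how this surfaces to an admin today, the PTP Operator is driven by a `PtpConfig` custom resource that maps a linuxptp profile onto a set of nodes. This is a sketch only: the interface name and option strings are placeholders, and exact fields should be checked against the PTP Operator documentation for your release:

```yaml
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: ordinary-clock
  namespace: openshift-ptp
spec:
  profile:
  - name: ordinary-clock
    interface: ens1f0            # placeholder NIC name
    ptp4lOpts: "-2 -s"           # layer-2 transport, slave-only ordinary clock
    phc2sysOpts: "-a -r"         # sync the system clock from the NIC's PHC
  recommend:
  - profile: ordinary-clock
    priority: 4
    match:
    - nodeLabel: node-role.kubernetes.io/worker
```

The enhancements discussed above (richer best-master policies, SyncE, grandmaster via the NIC) would presumably layer onto this same profile-and-recommend model.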
F
We're increasingly realizing RAN functions on generic off-the-shelf servers with open source software. As part of the next phase, we're looking to see how we can optimize these nodes for power savings without any performance penalty. At the end of the day, we do not want the nodes to consume any more power than is necessary.

F
So, given that many telecom vendors run thousands of these far-edge DUs, power is getting expensive, especially at the far edge, and every little bit of power savings on the node directly translates to huge dollar savings. So we're working up the stack, right from the hardware to the operating system, OpenShift, and finally the workloads themselves, to enable power-saving profiles or knobs, with the end goal that all of this, working together coherently, will result in no wastage of power, especially at the edge. Next slide, please. It's over to managed services. Thank you.
B
Hey, thank you. Hopefully video and audio are coming across. Next slide, please.

B
I just want to level-set a bit about what I'm going to be specifically talking about relating to managed OpenShift. I know Tushar touched upon that much earlier on in the presentation, but as we're all aware, everyone knows about OpenShift, OpenShift Container Platform. This is where you basically buy a subscription, take the bits, and take them home with you, or into your data center or cloud of choice.

B
We do have some partnerships with the large cloud providers, where we provide a first-party native service, where really the entirety is available on your platform of choice.
B
Openshift
service
on
aws
or
what
we
like
to
call
rosa
or
similarly
with
azure,
with
azure
red
hat
openshift
early
on
ibm
cloud
as
well,
with
red
hat
openshift
on
ibm
cloud,
and
we
also
have
a
red
hat
offering
that
gives
you
a
choice
of
cloud
provider
and
that
is
red
hat
openshift
dedicated.
So
you
can
choose
between
running
it
on
gcp
or
aws,
and
just
some
of
the
next
things
we're
going
to
go
through
are
specifically
relating
to
to
these
offerings.
B
So
next
slide.
Please!
So
here
is
a
familiar
theme
that
we've
kind
of
been
going
through.
So
one
of
the
things
that
we're
looking
at
is
really
just
to
give
our
users
a
unified
experience
in
how
they
work
with
their
managed
openshift
clusters.
So
this
is
really
giving
them
one
single
point
of
location
where
they're
going
to
be
able
to
deploy
their
clusters.
You
know
manage
their
clusters,
delete
their
clusters
rather
than
having
to
have
disparate
areas
depending
on
the
service
that
they're
using.
B
Enhancing
that
hybrid
experience
so
allowing
them
to
go
to
openshift
cluster
manager
in
order
to
be
able
to
deploy
their
cluster
or
you
know,
modify
their
cluster
in
in
any
way
currently
what's
available
now
is
we
do
have
openshift
dedicated
that
is
available
from
there
we're
very
close
to
getting
rosa
to
be
available
through
that
as
well
and
hopefully
in
the
not
too
distant
future,
we'll
get
aro
there.
Also.
I
am
happy
to
announce,
though,
that
also
with
aro.
B
Just
this
week,
we've
enabled
a
ui
experience
through
the
azure
console
as
well.
So
as
we
are
kind
of
going
through
this
we're
we're
making
you
know
slow,
but
steady,
strides
and
being
able
to
further
enhance
our
our
customers.
Experiences
with
managed
openshift
talking
about
security
right,
and
everyone
is
interested
in
in
in
this
regard.
So
but
here
we
go,
you
know
we're
making
further
strides
in
achieving
further
compliance
with
industry
leading
certifications.
B
So
one
of
the
big
ones
that
are
that
is
ahead
is
hipaa
that
we're
working
towards
pci
compliance,
we've
actually
already
hit,
and
we
also
are
working
towards
other
government
certifications
such
as
fedramp
high.
B
We
are
looking
this
probably
in
the
you
know
earlier
half
of
2022,
but
really
giving
our
customers
just
more
flexibility
in
the
kind
of
workloads
that
we
can
accept
and
really
still
kind
of
feeding
into
that
hybrid
cloud
model
whereby
they
can
come
to
us
for
one
location
for
openshift
in
order
to
put
their
workloads
on
regardless
of
what
those
workloads
really
become
and
with
a
platform,
consistency
right.
What
we're
this
is
more
about
kind
of.
B
I
guess
like
a
a
mentality
that
our
that
our
team
has
kind
of
taken
on,
and
that
should
really
be
that
you
know
if
it
works
on
openshift
on
ocp,
then
it
should
work
on
managed
openshift
as
well.
Obviously,
there
might
be
certain
things
where
you
know
that
may
not
be
feasible,
but
it
really
the
the
approach
is
going
to
be,
and
is
that
you
know
it
should
be
initially
at
least
treated
as
a
bug
if
it
doesn't
work
on
managed
openshift,
but
it
does
on
ocp.
B
So
really
just
kind
of
you
know
ensuring
further
that
you
know
open
shift
is
open
shift
it's
going
to
work
like
you
expect
it
to
work,
regardless
of
how
you
are
opting
to
consume
it
next
slide,
please,
furthermore,
we're
working
on
just
expanding
the
choices
that
our
customers
have
so
again
in
in
expanding
their
flexibility
to
you
know,
choose
whatever
workloads
that
best
fits
for
their
use
cases,
so
expanding
the
options
that
they
have
in
terms
of
the
worker
nodes
or
kinds
of
work
earners
that
they're
going
to
be
using.
B
So
things
like
spot
instances
which
is
actually
already
available
by
working
on
things.
Let's
say
like
gpu
instances
or
amd
things
like
wavelength
or
dedicated
instances
as
well,
and
just
really
you
know
offering
those
to
be
able
to
just
meet
the
customer
where
they
are
with
the
kind
of
workloads
that
they
want
to
be
running
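For illustration, on a self-managed AWS cluster Spot capacity is requested by adding `spotMarketOptions` to a MachineSet; the managed offerings expose the same idea through their machine-pool tooling. A minimal sketch, with cluster name, zone, and instance type as placeholders (selectors, subnets, and AMI details omitted):

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: demo-spot-us-east-1a
  namespace: openshift-machine-api
spec:
  replicas: 2
  template:
    spec:
      providerSpec:
        value:
          instanceType: m5.xlarge
          # The presence of spotMarketOptions asks AWS for Spot capacity;
          # an empty object caps the bid at the on-demand price.
          spotMarketOptions: {}
```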
Further on the security front, and something we've actually already initiated: supporting bring-your-own-key with KMS for cluster encryption.
B
This is something we've heard repeatedly from our customers. Before, the cluster was encrypted as well, but we would create the key for you; now, at cluster creation time, you can specify your own KMS key, and it will be used during the creation of the cluster.
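As a sketch of what specifying your own key at creation time can look like, the helper below just assembles a ROSA create command rather than running it; the `--kms-key-arn` flag name is my assumption, so verify it against `rosa create cluster --help` for your CLI version.

```shell
# Assemble (not run) a cluster-create command that passes a customer-managed
# KMS key ARN at creation time. The --kms-key-arn flag name is an assumption.
build_rosa_create() {
  local name="$1" kms_arn="$2"
  printf 'rosa create cluster --cluster-name %s --sts --kms-key-arn %s\n' \
    "$name" "$kms_arn"
}

build_rosa_create demo "arn:aws:kms:us-east-1:111122223333:key/example"
```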
And then, lastly, platform efficiency.
B
One thing we keep hearing is that our customers want to pay only for what they use. Along with that, customers sometimes aren't using their clusters at all, say over a weekend, or an extended holiday weekend like we just had here in the US.
B
Since they may not be using their clusters, they want the ability to pause or hibernate them. This should be coming in the near term: customers will be able to pause their clusters, which pauses both the infrastructure costs and the OpenShift subscription costs on top of them, so they truly pay only for what they consume. Next slide, please.
B
There are three relevant links at the top, so I encourage you to take a look: see what new features are being added, how they're progressing through the pipeline, and what has already gone GA. Next slide, please. Along those lines, if you have ideas you'd like to see that aren't there yet, please get in touch with us.
B
You can use the same mechanism to open an RFE, a request for enhancement; we may get in touch with you for more information, and maybe you'll see it on a future roadmap. So that's it from me, and I'll turn it over to Karina and Gurov to talk about the platform and developer tools.
C
Hi, thank you. I'll be joining my colleague Karina to cover this session on the core platform and developer tools. These are essentially the fundamental features and capabilities that make the whole platform faster, better, more secure, and easier to use. Next slide.
C
Installation: with each release we integrate OpenShift with more cloud providers, so you can deploy OpenShift on the provider of your choice. In the near future we'll be integrating with Alibaba Cloud, Azure Stack, IBM Cloud, and Nutanix. We're also looking forward to simplified onboarding of the installation by giving you bootable media so you can bring up your cluster zero as soon as possible, or even better,
C
you can have OpenShift installed at the factory and shipped to your data center. We're also trying to reduce and mitigate risk during upgrades. For example, starting with 4.10, EUS upgrades will require only a single worker reboot. There's also zone awareness during upgrades: if you have OpenShift deployed across multiple fault domains, the upgrade will complete in one fault domain before moving to another. And with targeted upgrade blocking, if a release has a critical bug pertaining to the particular environment you're using,
C
then we give you the flexibility and freedom to skip those releases. Next slide: compute. Going forward, we will support Arm in OpenShift. When you deploy OpenShift on the cloud provider of your choice, you'll have the flexibility to use their cloud-native services such as KMS, DNS, and load balancers. We'll provide improved lifecycle management of certificates through cert-manager. To enhance the experience and decrease operational cost, we're looking to provide a self-driving control plane with automatic scaling and automated backups. Previously, RHCOS was a black box; now we'll give you the capability to customize RHCOS based on your business needs. Next slide, please.
C
Operators: an operator is an easy way to install and manage your applications on OpenShift. While installing OpenShift itself, we'll give you the capability to skip certain operators that you don't want installed, and we'll provide automatic failure recovery for those operators. For specialized schedulers, we'll provide an easy way to install new schedulers on top of OpenShift, so you can deploy AI/ML or HPC types of workloads on top of OpenShift.
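The install-time knob for skipping optional operators could plausibly look like a capability selection in install-config.yaml; a sketch under that assumption (the field names follow the cluster-capabilities mechanism and may differ by release, so check your installer docs):

```yaml
# install-config.yaml fragment: opt out of the default optional operators,
# then re-enable only the ones you want. Field names are assumptions.
capabilities:
  baselineCapabilitySet: None
  additionalEnabledCapabilities:
  - marketplace
  - openshift-samples
```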
For customers who want to deploy OpenShift in an air-gapped environment, we provide a personality called disconnected. With disconnected, you'll have easy oc-mirror functionality to install the whole of OpenShift in that environment.
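The oc-mirror flow described here is driven by a declarative image-set configuration; a minimal sketch, with the registry host and channel as placeholders:

```yaml
apiVersion: mirror.openshift.io/v1alpha2
kind: ImageSetConfiguration
# Where oc-mirror stores its own metadata between runs
storageConfig:
  registry:
    imageURL: registry.example.com/mirror/metadata
mirror:
  platform:
    channels:
    - name: stable-4.10      # release channel to mirror
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
```

Something like `oc mirror --config imageset.yaml docker://registry.example.com/mirror` then pushes that content into the disconnected registry.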
C
Advanced host networking for OpenShift on bare metal: we'll provide functionality like bonds, VLANs, and static IPs. In fact, DHCP is no longer a requirement; you can assign a static IP and stand up OpenShift on bare metal.
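The bonds/VLAN/static-IP configuration mentioned here is declarative NMState; a sketch of a host definition that drops the DHCP requirement (interface names and addresses are examples):

```yaml
interfaces:
- name: eno1
  type: ethernet
  state: up
  ipv4:
    enabled: true
    dhcp: false              # static addressing instead of DHCP
    address:
    - ip: 192.0.2.10
      prefix-length: 24
- name: eno1.100             # VLAN 100 on top of eno1
  type: vlan
  state: up
  vlan:
    base-iface: eno1
    id: 100
dns-resolver:
  config:
    server:
    - 192.0.2.1
routes:
  config:
  - destination: 0.0.0.0/0
    next-hop-address: 192.0.2.1
    next-hop-interface: eno1
```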
Hybrid clusters: now you can run VMs and bare metal next to each other. Take a scenario where you're running the control plane on VMs and the worker nodes on bare metal.
We'll also provide a faster booting mechanism: you'll have external bootable media to help you get your cluster zero up and running in no time. Next slide, please.
C
So
cata
container
so
with
the
ga
of
cata
containers,
we'll
provide
the
help
matrix
functionality
will
provide
no
feature
discovery
so
that
it
will
let
you
know
before
even
installing
the
containers
that,
if
the
environment
is
suitable
to
install
the
category
or
not,
will
provide
the
integration
with
acs
so
that
you
can
have
a
tighter
runtime
control
and
will
provide
the
integration
with
sriv
and
dpdk
so
that
you
can
run
the
application
which
quite
a
high
throughput
low
latency
type
of
workload
in
a
secure
containers.
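Once Kata Containers is installed, opting a workload into the sandboxed runtime is a one-line change on the pod; a sketch, assuming the runtime class is named `kata` (the image is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: isolated-app
spec:
  # Run this pod in a lightweight VM via Kata instead of runc
  runtimeClassName: kata
  containers:
  - name: app
    image: quay.io/example/app:latest
```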
G
Next, we will integrate Advanced Cluster Security, Red Hat Quay, log management, and more into this unified console experience. Our goal is to provide deeper insight about your fleet in this new unified console, and we also have the advent of dynamic plugins.
G
Next slide, please. OpenShift GitOps: we see small and large customers increasingly making use of Git workflows for declaratively driving their cluster and application operations. Compliance is also gaining attention from customers due to the challenges they face running multiple clusters across multiple clouds. OpenShift GitOps enables customers to get started with GitOps workflows, configure their clusters, and deliver their applications declaratively, and as your requirements grow,
G
Advanced Cluster Management, Advanced Cluster Security, and Ansible provide a solid foundation for extending your GitOps workflows into a wide variety of use cases, including supply chain security, edge deployments, cluster lifecycle management, and compliance policy management, as well as AI/ML workloads through MLOps. Next slide, please.
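Declaratively delivering an application with OpenShift GitOps boils down to an Argo CD Application resource; a sketch, with the repo URL, path, and namespaces as placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example/cluster-config.git
    targetRevision: main
    path: apps/guestbook
  destination:
    server: https://kubernetes.default.svc
    namespace: guestbook
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert out-of-band changes
```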
G
This also includes additional capabilities, such as approval processes and pipeline concurrency control, driven directly from the Git repo. OpenShift GitOps focuses on enhancing GitOps workflows for customers that are also using Helm charts to deploy their applications, and it will simplify the bootstrapping and getting-started experience for GitOps workflows with Argo CD.
G
Also on security: supply chain security for application delivery is such a hot topic right now, and rightfully so. It is a top-of-mind challenge for most customers, and it is a key area of focus for OpenShift Pipelines and OpenShift GitOps. You'll be able to enable verifiable builds by signing and verifying your Tekton pipelines, and that will be expanded to image signing, which will be introduced incrementally in the upcoming quarters, along with HashiCorp Vault secrets management.
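Pipeline signing of this kind is typically wired up through Tekton Chains; a rough sketch of its configuration, assuming the `chains-config` ConfigMap keys (the key names and namespace are assumptions, so verify against the Chains documentation for your version):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: chains-config
  namespace: openshift-pipelines
data:
  # Record each TaskRun as an in-toto attestation and store the signed
  # payload alongside the built image in the OCI registry.
  artifacts.taskrun.format: in-toto
  artifacts.taskrun.storage: oci
  artifacts.oci.storage: oci
```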
G
You'll get a consistent, simplified programming model for non-traditional developers such as data scientists and content developers. In our security focus area, Serverless will provide end-to-end encryption for internal and external services, as well as support for multi-tenancy. OpenShift Serverless continues to drive toward a much more consistent platform experience by offering a way to deploy stateless workloads on our managed cloud offerings, which we heard about earlier, as well as providing an application-centric foundation for the centralized, hybrid, and multi-cloud experience that you have also heard about today. Next slide, please.
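The stateless-workload model referenced here is a Knative Service; a minimal sketch (the image and env var are placeholders):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - image: quay.io/example/hello:latest
        env:
        - name: TARGET
          value: "world"   # routing and scale-to-zero are handled by Knative
```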
G
All right: OpenShift Service Mesh provides a well-integrated, out-of-the-box service mesh that is installed and upgraded via an operator.
G
You'll see observability and visualizations with Kiali, automated network and ingress configuration, and integration with 3scale for API management. There are also zero-trust networking policies for enhancing your security: these allow the creation of traffic policies based on service identities rather than traditional IP addresses and ports.
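Identity-based (rather than IP-based) traffic policy in an Istio-based mesh looks roughly like this AuthorizationPolicy; namespaces, labels, and service accounts are placeholders:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: reviews-allow-frontend
  namespace: bookinfo
spec:
  selector:
    matchLabels:
      app: reviews
  action: ALLOW
  rules:
  - from:
    - source:
        # Allow only workloads running as the frontend service account,
        # identified by its mTLS identity, not by IP address.
        principals:
        - cluster.local/ns/bookinfo/sa/frontend
```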
G
Again,
more
consistent
user
experience
as
well
across
the
platform
next
slide.
Please.
G
Openshift
virtualization,
as
virtual
machines
are
in
openshift,
inherit
all
of
the
openshift
features
we're
working
to
enhance
that
experience
by
managing
your
vms,
providing
even
more
detailed
statistics,
aggregated
views
of
your
vms
and
then
also
pulling
in
the
data
protection,
integrating
it
with
the
openshift
api
for
data
protection
disaster
recovery,
and
that
is
all
going
to
be
integrated
into
acm.
Advanced
cluster
management.
G
Security
everywhere
integrations
with
the
compliance
operator,
as
well
as
advanced
cluster
security,
so
you'll
see
even
tighter
integrations
for
openshift
virtualization
across
openshift
platform
plus
and
the
entire
platform,
and
then
also
for
your
platform.
Consistency
you'll
see
the
ability
to
run
the
same
workloads
in
your
public
cloud.
You
can
run
your
vms
and
openshift,
as
well
as
on
your
aws
bare
metal
instances.
That's
currently
tech
preview,
so
you'll
just
see
that
oga
and
then
collaborating
with
other
cloud
vendors
next
slide.
Please.
G
Some things you'll see are migration guidance and further support for bringing in your legacy applications, with guidance on how to do that. The goal of the Migration Toolkit for Applications is to become your ultimate open source toolkit to safely migrate and modernize your application portfolio: bringing in your legacy applications and gathering more insight as you architect the migration of your old applications into OpenShift. There are so many great things coming.
G
Look
at
the
conveyor
tackle
project
upstream
to
see
what
is
coming
down
the
pipe
next
slide.
Please,
and
also
migration
toolkit
for
containers.
G
There
are
still
customers
on
openshift
container
platform.
Three
they're
migrating
from
three
to
four
has
never
been
easier.
So
look
at
the
migration
toolkit
for
containers.
If
you're
still
on
three,
it
continues
to
simplify
and
be
easier
and
easier
cloud
migrations,
and
when
there
are,
there
is
an
increased
demand
for
migrations
to
aro
and
rosa,
and
these
will
be
fully
tested
and
supported
by
the
migration
toolkit
for
containers,
regardless,
if
you're
migrating
from
openshift
container
platform,
three
or
four
also
storage
in
place
migration.