From YouTube: [What's Next] OpenShift Roadmap Update [Mar-2021]
Description
On March 18, 2021, the OpenShift PM team will broadcast the [What's Next] OpenShift Roadmap Update [Mar-2021] briefing directly to customers and partners on OpenShift.tv.
A
B
All right, hey folks, welcome to another session of our first-quarter roadmap for OpenShift. I'm Rob Szumski, part of the product management team. We're super excited to have you here today; we're going to cover a lot for what's next with the platform.
B
As a reminder, this presentation is our What's Next: it's a look ahead as far as we can see out into the future, and the content is directionally correct. You know, plans do change, so keep that in mind as you're watching, but this is a good summary of where we're going. When these features make it into a specific OpenShift release, we're going to tell you about it shortly before that release in a What's New session, which will be for a specific version of OpenShift.
B
Our next one will be OpenShift 4.8, so keep an eye on the streaming schedule for that session. Also coming up really soon, we have Red Hat Summit, where we've got a bunch of great announcements that you're not going to hear about here; they're only at that event, so please sign up. It's a virtual event, we're really excited about it, and we would love for you to join us.
B
So today I'm joined by a few of my colleagues from the product management team, and we're going to be talking you through a bunch of different features in OpenShift. Mark, Daniel, Maria and Jamie are going to break down some of the pillars that we have for the platform itself and where we're going for 2021.
B
As a reminder, OpenShift is everything you need to run a hybrid cloud. This is what our stack diagram looks like, from the Kubernetes layer up to the platform services that help you run your apps, to the development tools that help you create them. We're going to cover what's next in all of these areas today, and if you look really closely, you might see that there's actually a new box on the screen here: right under multi-cluster management is our new Advanced Cluster Security.
B
This is based off of our acquisition of StackRox, which just closed recently. We're super excited to welcome the StackRox team to Red Hat, and we're still working on the plans for ACS, so you're not going to hear much about it today, but know that it'll be coming in later sessions.
B
Lastly, we're going to cover some of the exciting work happening in our multi-cluster management sphere, allowing you to seamlessly scale from one cluster to possibly hundreds of clusters, with all the integrated networking and management that you expect out of the platform. It's truly a game changer for hybrid cloud. So with that, let's dig on into it.
B
Also in the core, we're looking to augment our security tools to help prevent supply chain attacks; you've probably heard a little bit about this in the news. Customers are already using OpenShift to build and ship their software, and they're using our tools to build containers, so we're going to tie all of these together into the best possible protection for that supply chain.
B
In the hybrid pillar, we're investing in multi-cluster networking with Submariner and in GitOps features to help you push down config, applications and policy onto all the clusters that you have under management. In our Kubernetes-native infrastructure pillar, it's all about bringing new capabilities to the unique features of bare metal and edge form factors. We're investing in tools that help VMs, containers and serverless experiences blend together on one site, and if you've got a VM appliance as a dependency, that shouldn't block you from modernizing a different part of your application.
B
Maybe you want to bring in some serverless components; totally possible in our Kubernetes-native infrastructure. And last, in our developer and platform tools, we'll be gaining three services that we've been testing for a while: our v2 version of OpenShift Builds, our new pipelines based on Tekton, and OpenShift GitOps powered by Argo CD. Then last in this category is a full functions-as-a-service experience built off of OpenShift Serverless, to get that Lambda-like experience that runs in a true hybrid fashion, anywhere that you want to run it.
B
So, as you can see, we've got a lot going on, and we're going to dig into more specifics; there's a ton in flight. Here's a quick overview of all of this. Obviously I'm not going to look at every single feature of this larger release, but feel free to review this; pause your video if you want. These slides will also be up on openshift.com in the Learn section under What's New and What's Next, so you can take a look at these at your leisure. All right.
B
C
C
C
So if we conceptualize workload scheduling as a stacked layer cake, and put highly specialized customer workloads like genome sequencing, big data, AI/ML and self-driving cars at the top of that cake, OpenShift provides the application tooling to manage workloads in single or multi-cluster environments and lets customers define their SLAs, quotas and priorities for those jobs. As we move down the layers to the Open Data Hub, OpenShift provides workflows and tools to manage and run the specialized applications, and the topology-aware scheduler helps identify the best places within the cluster to run those specialized workloads.
C
C
So as we evolve our security model, there is a common single theme, and that is zero trust. In a cloud-native world, DevSecOps is critical. Our security features and capabilities must be available in every phase of the build, deploy, run application lifecycle, and we're providing advanced options to address every customer's threat environment.
C
In the deployment phase, we continue to work with DISA to deliver a Security Technical Implementation Guide, or STIG, for OpenShift, and we're adding critical protections like attestation of integrity: we're integrating the Keylime project to focus first on RHEL CoreOS attestation and then on workload attestation in a second phase. We're also going to continue to support security context constraints and to work out how they will coexist with pod security policies, or the PSP replacement, in the future.
C
C
A second key use case is any storage, anytime. OpenShift provides choice, so we continue to focus on CSI drivers and CSI migration. With CSI drivers, you will be able to benefit from operators getting updates to your drivers when and as they're available, without you having to wait for the next OpenShift release. This means faster realization of new features from a very active upstream community.
C
C
Some of this information is available today already, but generally only at the node level. Going forward, we're going to be looking at how we can expose this better and provide more useful, more digestible data, not only with OCS, which already has some pretty good monitoring, but generally across the PVs active in the OpenShift cluster.
C
Next slide, please. OpenShift today has more than one way to ingress traffic into the cluster: customers can choose Routes, or they can choose Kubernetes Ingress, two different technologies with different feature sets. Starting with a technical preview in 4.8, we hope to begin the process of unifying ingress for OpenShift, starting with Gateway API. Gateway API has undergone several name changes; you may have heard it also listed as Ingress v2 or the Services API.
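To ground the two options that exist today, here is a hedged sketch of an OpenShift Route and a roughly equivalent Kubernetes Ingress for the same backend; the hostname and the Service name "frontend" are placeholders, not anything from the talk.

  apiVersion: route.openshift.io/v1
  kind: Route
  metadata:
    name: frontend
  spec:
    host: frontend.apps.example.com
    to:
      kind: Service
      name: frontend
    tls:
      termination: edge          # TLS terminated at the router
  ---
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: frontend
  spec:
    rules:
    - host: frontend.apps.example.com
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: frontend
              port:
                number: 8080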
C
Gateway API is a set of APIs for deploying Layer 4 and Layer 7 routing in Kubernetes that is expressive, extensible and role-oriented across the different API interfaces. Gateway API, then, can be used for unifying and deploying that Layer 4 and Layer 7 routing to provide a singular experience for all traffic workflows from outside the cluster to the application workload, whether that's web traffic, service mesh Envoy traffic, audit capture of traffic, or high-performance throughput traffic where you want to minimize disruptions in the stack; it will handle all of those eventually. Gateway API is still in its infancy, but we'll have a technical preview of it available in 4.8.
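As a rough illustration of that role-oriented model, here is what a minimal Gateway plus HTTPRoute pair looks like. This sketch uses the field names the upstream API later stabilized on, so the exact group, version and fields in the 4.8 tech preview may differ, and every name in it is a placeholder.

  apiVersion: gateway.networking.k8s.io/v1beta1
  kind: Gateway
  metadata:
    name: example-gateway
  spec:
    gatewayClassName: example-class     # supplied by whichever gateway controller is installed
    listeners:
    - name: http
      protocol: HTTP
      port: 80
  ---
  apiVersion: gateway.networking.k8s.io/v1beta1
  kind: HTTPRoute
  metadata:
    name: example-route
  spec:
    parentRefs:
    - name: example-gateway             # attach this route to the gateway above
    hostnames:
    - app.example.com
    rules:
    - backendRefs:
      - name: example-service
        port: 8080

The split between the Gateway (owned by a cluster or network admin) and the HTTPRoute (owned by an application team) is what is meant by role-oriented.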
C
We'll then build upon that to provide it on more platforms, integrated with new tooling such as MetalLB for load balancing if you're on bare-metal deployments. Gateway API will eventually have support that makes a deployment to multiple clouds as simple as replacing that cloud provider, or adding a new cloud provider, in your existing config. Along with Gateway API, the upstream community chose Contour as the ingress controller for all of its testing, and a lot of momentum has built behind it.
C
Among many other things, we will productize Contour in lockstep with Gateway API and support it alongside HAProxy. Separately, we'll support MetalLB for layer 2 initially in 4.9, and then we'll follow that up with MetalLB BGP support in 4.10. Within the cluster itself, there are many new features in the pipeline, but some key ones to note are listed here. For example, OVN: we've supported this since 4.6, and it becomes the default networking solution in 4.9.
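Coming back to MetalLB for a moment: in layer-2 mode the upstream project is configured with a ConfigMap roughly like the sketch below; the address range is a placeholder, and the productized operator may expose this through its own CRDs instead.

  apiVersion: v1
  kind: ConfigMap
  metadata:
    namespace: metallb-system
    name: config
  data:
    config: |
      address-pools:
      - name: default
        protocol: layer2            # announce service IPs via ARP/NDP on the local L2 segment
        addresses:
        - 192.168.10.200-192.168.10.250

Services of type LoadBalancer on bare metal then get an external IP from that pool, which is the gap MetalLB fills compared to cloud deployments.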
C
We fully support IPv6, single and dual stack, end to end in OpenShift. In 4.8 we're going to support hardware offload of OVS, initially to Mellanox ConnectX-5 NICs, but that work will directly help and benefit us in enabling additional NICs, and it also creates the framework for offloading other compute-intensive workloads, like IPsec encryption.
C
We're also creating a closer alignment to host networking, and one of the ways is, for example, to provide multi-NIC traffic flow support. We're adding BGP support to OpenShift for advertising Kubernetes services. Overall, we're increasing networking observability for better understanding and debugging of your networking workloads, we're looking at future support of eBPF for greater precision of traffic control, and finally we'll provide a no-overlay option for environments that prefer to use the underlying underlay networking of the cluster hosts themselves.
C
There are four key takeaways for our installation and update features. The first one is that we're continuing to enable OpenShift deployments on even more platforms, including Alibaba, AWS Outposts, Azure Stack Hub, Equinix Metal, IBM public cloud and Microsoft Hyper-V. We're also expanding our existing provider support to include more regions and instances.
C
The second key takeaway is that, while RHEL CoreOS will still be required for the control plane, we're going to be introducing RHEL 8 server support for compute and infrastructure nodes. This will enable customers to migrate from RHEL 7 to RHEL 8 for hosting their application workloads.
C
Third key takeaway: today we have several installation methods, like user-provisioned infrastructure (UPI), installer-provisioned infrastructure (IPI) and the Assisted Installer. Each method is intended to address different deployment scenarios, but one problem we're facing is that having too many options for users to choose from can be confusing. The other problem we're faced with is how to provide more agile support and faster integration of new providers using the installer-provisioned infrastructure approach.
C
That process is very involved and also takes multiple releases per provider, so we need a more scalable way to integrate new providers without compromising the installation experience or overall reliability. Our goal in the year to come is to unify our overall installation experience, starting with the installer core, by making the provider integration easier and more modular, and then we'll follow that up by improving the cluster lifecycle experience along with our fleet management story at a high level.
C
That effort will involve introducing the OpenShift Hive operator, which will provide a cluster provisioning API upon which we can build a new central host management service, along with improving the cluster provisioning experience within OpenShift, OCM and ACM. And finally, for EUS-to-EUS upgrades, where EUS refers to our Extended Update Support release, we're working to improve the experience for customers while minimizing workload disruption.
C
While intermediate versions can't be skipped for control plane upgrades, we are looking into allowing certain releases to be skipped for compute nodes. This means control plane upgrades will still be done sequentially between EUS releases, but for some intermediate versions it may be possible to skip the upgrade for the compute nodes when progressing to the next EUS release. This process will require pausing the compute machine config pool at specific times during the upgrade, and that will allow the upgrade to essentially be skipped when moving between intermediate releases. Next slide, please.
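The pause itself is expressible with the existing MachineConfigPool API; a minimal sketch of the relevant fragment is below. In practice you would patch the existing worker pool (for example with oc patch) rather than apply a standalone object, and unpause it again once the control plane reaches the target EUS release.

  apiVersion: machineconfiguration.openshift.io/v1
  kind: MachineConfigPool
  metadata:
    name: worker
  spec:
    paused: true    # hold worker node updates while the control plane steps through intermediate releases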
C
And finally, in the future, Windows containers for OpenShift will be available on more platforms across private and public cloud, as well as at the edge. As we plan the installation experience for these platforms, it will include a bring-your-own-host model. This is important to customers, in particular Windows customers, because they tend to treat their Windows instances more as pets rather than cattle.
D
So our goal for OpenShift is to expand to cover all sizes of deployments, right, and to seamlessly scale across all the clusters that you manage. Networking, observability and operations across those clusters are something that we are heavily investing in this year. The bits and pieces of this effort have already started, as a lot of the innovation is actually happening at the cluster level; for example, hardware offloading at the SDN layer is utilized when we expand to multi-cluster networking. So let's take a look at how this looks in more detail.
D
Here's what the standard set of management tools looks like when tackling the multi-cluster arena. Policy and app deployment are driven from central locations, making it easy to span across cluster domains; both north-south as well as east-west networking traffic is actually routed and secured. Observability of your entire OpenShift fleet is enabled, making it easy for developers to keep track of their apps and for cluster admins to ensure that things like compliance and security policies are uniform.
D
A central, scalable container registry is in place as well, to provide clusters with access to software that has already been scanned for vulnerabilities before it actually hits the cluster. The end result of this is a standardized experience among all developers and admins across your organization.
D
Next slide, please. With multi-cluster networking, we really want to enable applications to span OpenShift deployments, but without the need to re-architect or touch the application in any way, and we plan to achieve this by productizing the CNCF Submariner project and using ACM to synchronize the network configuration across all clusters. What this will do is create encrypted tunnels between the OpenShift SDNs, which gives customers routed connectivity essentially between all the nodes of all their clusters, which is far more efficient than trying to stretch a single cluster over widely distant locations.
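On top of that connectivity, upstream Submariner also lets you publish individual services to the other connected clusters through the multi-cluster Services API; a hedged sketch, assuming the productized version keeps the upstream API, with placeholder names:

  apiVersion: multicluster.x-k8s.io/v1alpha1
  kind: ServiceExport
  metadata:
    name: frontend        # an existing Service with this name in the demo namespace
    namespace: demo

Consumers in the other clusters of the cluster set can then resolve the exported service through Submariner's cluster-set DNS domain (for example frontend.demo.svc.clusterset.local) instead of hard-coding per-cluster endpoints.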
D
D
So for our single-cluster observability story, we will continue to mainly drive features that help administrators and developers in their journey to understand their systems, right: find issues quickly, optimize, and so on. For monitoring, we're planning to make the stack more resilient in situations of increased demand and, in general, to provide better alerting quality. We're working towards supporting multi-cluster metrics aggregation scenarios by pushing metrics to a third-party metrics solution, and we also plan a multi-tenant API for configuring where exactly notifications are sent.
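The push model mentioned here already has a natural home in the cluster monitoring configuration via Prometheus remote write; a sketch, with a placeholder endpoint, and with the caveat that the exact options available depend on the OpenShift release:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: cluster-monitoring-config
    namespace: openshift-monitoring
  data:
    config.yaml: |
      prometheusK8s:
        remoteWrite:
        - url: https://metrics-gateway.example.com/api/v1/receive   # central or third-party metrics store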
D
Of course, we are also going to continue to provide a top-notch Kubernetes-native monitoring experience inside the cluster, in the OpenShift console. For logging, we will be focusing on allowing customers to search application logs in Kibana by preserving the JSON structure that makes up these log messages.
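The likely vehicle for that is the ClusterLogForwarder API that OpenShift Logging already exposes; a hedged sketch, assuming the structured-JSON option lands roughly as it did upstream (field names may differ by logging release):

  apiVersion: logging.openshift.io/v1
  kind: ClusterLogForwarder
  metadata:
    name: instance
    namespace: openshift-logging
  spec:
    pipelines:
    - name: application-logs
      inputRefs:
      - application
      outputRefs:
      - default
      parse: json     # keep each message's JSON fields instead of flattening them into a single string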
D
You'll just see this in the OpenShift console where your application runs. Next slide. When going multi-cluster, observability becomes really critical, so to that extent our Advanced Cluster Management solution, ACM, will aggregate all the data that you would normally see in the connected customer experience inside OpenShift, for the entire fleet, in turn easing administration by providing this Insights data. This telemetry will be made available in the ACM hub, so you don't really have to go to cloud.redhat.com separately.
D
This aggregated view provides benefits for many IT personas: think of operations and cluster admins, who will be able to resolve problems more quickly and avoid downtime in the first place, or SRE and DevOps personnel, who get better visibility into how applications are impacted and the crucial areas to actually focus on. Last but not least, cost management will also be integrated to provide cluster cost visibility at the ACM level. Next slide.
D
So when working with more than one cluster at a time, customers quickly realize the need for automation. Increasingly, the method here is a GitOps approach paired with continuous integration, and we are addressing this need by productizing OpenShift Pipelines and the OpenShift GitOps add-on. Those will enable customers to centrally manage cluster configuration and application definitions across a multi-cluster landscape.
D
One of the big focus areas in 2021 here is going to be making our Advanced Cluster Security offering, as well as other security tools, directly available in those pipelines, and customers who are using Pipelines to also build container images can count on that process becoming more secure as well, as we heard earlier, with the proliferation of rootless builds and containers.
D
OpenShift GitOps is the central pillar to actually make all of this work at scale across clusters, and here we are productizing the upstream Argo CD project to give customers the ability to use Git as a single source of truth for essentially everything that you would otherwise apply with oc or kubectl. Because of our operator-first approach, this includes virtually everything from cluster configuration to pipeline definitions as well as application deployments.
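To illustrate the Git-as-source-of-truth model, here is a minimal Argo CD Application of the kind OpenShift GitOps manages; the repository URL, path and namespaces are placeholders.

  apiVersion: argoproj.io/v1alpha1
  kind: Application
  metadata:
    name: cluster-config
    namespace: openshift-gitops
  spec:
    project: default
    source:
      repoURL: https://github.com/example/cluster-config.git   # the Git repo acting as the source of truth
      targetRevision: main
      path: environments/prod
    destination:
      server: https://kubernetes.default.svc                    # apply to the local cluster
      namespace: openshift-config
    syncPolicy:
      automated:
        prune: true      # remove resources deleted from Git
        selfHeal: true   # revert drift introduced outside of Git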
D
And, of course, we don't expect our customers to integrate all of this manually on their own either. We are actually working on a GitOps application manager CLI called kam, and what it will do is bootstrap a typical Git repository layout for common workflows, which already includes pipelines and application definitions right out of the box. Sometimes these can actually contain sensitive data, such as passwords or credentials for your applications and databases.
D
To protect those, we are looking to integrate secret management right into that pipeline tooling, and we're going to choose technologies here that are commonly used in GitOps scenarios. Next slide.
D
Let's look at an example of how this could come together. Customers will leverage ACM to add OpenShift GitOps on all managed clusters, and that includes setting up all those clusters as targets in Argo CD and also creating the required credentials for things like cluster access, Git repositories, and so on. From this point on, apps can be deployed and configured declaratively, triggered by a simple pull request against the repository, and as new clusters are added, bootstrapping them in this way for GitOps is entirely automated via the ACM policy engine.
D
Similarly, those clusters would also be configured for ACS, and that would allow us to shift security analysis as far left in the process of deploying and managing applications as possible. Next slide. Similar to cluster and application configuration, multiple clusters generally also need a central source of truth for the software binaries they are eventually going to run, and Red Hat Quay is that central, globally distributed registry for OpenShift customers today. And as soon as you have a central registry, it becomes a really critical and vital service, because in Kubernetes really nothing works without access to a registry.
D
After all, Red Hat Quay is very well suited and battle-tested for that scale, but some more controls are going to come. In such multi-tenant environments, customers are frequently leveraging quotas to avoid the noisy-neighbor syndrome and guarantee their service levels, and to enable that at the registry level, Quay will start tracking usage of certain aspects of the registry, for instance how many…
D
This kind of creates a catch-22 situation, right, because a registry needs to be up first before you can even install the cluster in that disconnected environment, and we plan to solve this with an automated way to stand up a small, single-node, purpose-built Quay registry and to provide the required tooling to easily mirror all of the OpenShift-related content, which is the release payload, the OperatorHub content, the sample images and so on, into this registry in a single step.
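Part of what that single step has to produce is the mirror configuration the cluster consumes; today that is an ImageContentSourcePolicy such as the sketch below, where the mirror registry hostname is a placeholder.

  apiVersion: operator.openshift.io/v1alpha1
  kind: ImageContentSourcePolicy
  metadata:
    name: mirror-ocp-release
  spec:
    repositoryDigestMirrors:
    - source: quay.io/openshift-release-dev/ocp-release
      mirrors:
      - mirror-registry.example.com:8443/ocp-release        # pull release images from the local mirror by digest
    - source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
      mirrors:
      - mirror-registry.example.com:8443/ocp-release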
D
Customers in heavily regulated environments frequently also have an air gap in their network layout, and in these cases a second copy of this all-in-one Quay mirror instance can be stood up behind the air gap, and the mirrored content can be transferred via offline media. So this really aims to streamline the process of initially creating a disconnected mirror for an air-gapped deployment, as well as keeping it in sync over time as you are consuming it.
D
So, in Red Hat cost management, we will get a Cost Explorer view, which is our first installment to provide a full, time-based view of costs on a per-project basis. You can also group this by clusters, nodes, tags on infrastructure resources, or labels of OpenShift objects, and beyond this we are also planning to add additional features to serve the business, providing actual forecasting and budgeting at all levels of the infrastructure.
D
D
So, to recap: for the move to a multi-cluster architecture, we are going to provide all the tools and features out of OpenShift. This is built from the bottom up, starting with multi-cluster networking, through single-cluster and multi-cluster observability via OpenShift logging and ACM, all the way to a scalable approach to application and cluster configuration management via ACM, OpenShift GitOps and OpenShift Pipelines.
D
And thanks to Red Hat cost management, you get cost visibility across all of that as well. We'll now hand over to Maria for Kubernetes-native infrastructure.
E
Thank you. I'm Maria Ratcheon. Let's switch gears and focus on infrastructure. KNI, Kubernetes-native infrastructure, provides the simplicity and agility of the public cloud in on-prem environments, keeping a consistent OpenShift experience across both footprints. Among other things, KNI addresses container adoption growth while still running virtual machines along with containers, by running OpenShift clusters on bare-metal nodes.
E
Let's continue talking about KNI on the next slide. Coming up later this year, centralized host management will allow you to have your infrastructure inventory centralized and to request and deploy clusters from there, while keeping the bare-metal hosts also centrally managed.
E
Then there's hardware-based pod scheduling: for that, we'll be able to expose hardware telemetry such as internal temperature, fan speed, power supply failure and so on, so that users, or in some cases our partners, can create their own integrations to have that data affect workload scheduling; that's coming up later this year. And then later, in OpenShift 4.9, there's also the Assisted Installer and Metal3 integration, basically providing bare-metal nodes for clusters deployed with the Assisted Installer, and then advanced host networking configuration, to provide a declarative configuration for setting VLANs, bonds and static IP addresses at installation time and on day two, leveraging technology in Kubernetes NMState.
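Declarative here means a kubernetes-nmstate policy along the lines of the sketch below; the interface name, VLAN ID and address are placeholders, and the exact API version depends on the release that ships it.

  apiVersion: nmstate.io/v1beta1
  kind: NodeNetworkConfigurationPolicy
  metadata:
    name: vlan100-static-ip
  spec:
    nodeSelector:
      node-role.kubernetes.io/worker: ""
    desiredState:
      interfaces:
      - name: ens3f0.100          # VLAN 100 on an existing NIC
        type: vlan
        state: up
        vlan:
          base-iface: ens3f0
          id: 100
        ipv4:
          enabled: true
          dhcp: false
          address:
          - ip: 192.0.2.10
            prefix-length: 24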
E
Moving on, let's continue with virtualization on the next slide. Since OpenShift Virtualization became GA in OpenShift 4.5 last year, we have continued to deliver important features to make it easier for admins and developers to use VMs in a natural way in Kubernetes. For administrators configuring VM storage and networking, there are good default settings that make it easier to create standard Windows and RHEL VMs. For developers, it should not matter that some data or service is running in a VM.
E
Even though AI and machine learning workloads lean heavily towards containers, there are still training workloads that have not yet been migrated from their virtual legacy form. You can accelerate these workloads with direct GPU access, and in our future releases the admin will also be able to slice up valuable GPU resources with vGPU capabilities.
E
Next, we'll talk about what's in it for developers, coming up on the next slide. For developers working on modernization of legacy applications and workloads, VMs can now utilize all the power of the OpenShift platform to be more flexible while different parts of the application are still being transformed.
E
Next, we'll talk about MTV, or the Migration Toolkit for Virtualization. We're introducing the first release of the Migration Toolkit for Virtualization as a beta. This is an easy-to-use tool to mass-migrate VMs from VMware vSphere 6.5 and later to OpenShift Virtualization 2.6 and later. The latest addition is warm migration, which will provide reduced VM downtime by copying data while the VM is still running and then copying the delta once the VM is powered down.
E
Obviously, this reduces the VM downtime. There's also a set of pre-migration checks that will search for possible issues that could make the migration process difficult or non-viable, even before launching the migration. Coming up at the end of the year, we'll add Red Hat Virtualization (RHV) and OpenStack as sources, with OpenShift Virtualization as the destination.
E
E
That means installing the Kata binaries, configuring CRI-O with the runtime class handlers, installing QEMU as lightweight virtualization with only the necessary components as an extension, and then creating the required runtime class. All of that you will get with the operator. This operator exposes CRDs that initially allow for selecting the nodes where the Kata runtime will be available, and potentially other Kata parameters in the cluster.
E
Then, using Kata as a runtime is just a matter of changing the runtime class name on the pod spec, as shown in the example here. Or you can choose not to use Kata; this just makes Kata available for you to use.
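The slide example isn't reproduced in the transcript, but the change it describes is a one-line addition to the pod spec, roughly like this, assuming the RuntimeClass installed by the operator is named kata; the image name is a placeholder.

  apiVersion: v1
  kind: Pod
  metadata:
    name: sandboxed-example
  spec:
    runtimeClassName: kata        # run this pod with the Kata runtime instead of the default runc
    containers:
    - name: app
      image: registry.example.com/myapp:latest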
Next, let's talk about single-node OpenShift. It looks like OpenShift, it quacks like OpenShift, everything on a single node. We will provide an OpenShift that does not have a dependency on a central control plane, so this is not a remote worker node; this is OpenShift on a single node.
E
It provides a consistent app platform from the data center to the edge. It fits within the constrained physical footprint of a single but beefy server, and we're planning to release a dev preview in 4.8 with the ability to deploy a single-replica control plane topology. It will have bootstrap-in-place installation and deployment of the edge server over L3 networking, with no need for an additional bootstrap node. In future releases, we plan to include in-place upgrades as well as OLM operator compatibility and integration.
E
E
E
Red Hat will deliver a set of capabilities, and then customers will use those capabilities, along with their specifications, to achieve that zero-touch workflow. Our telco 5G team is also building a reference GitOps-based zero-touch provisioning workflow as an example, or something that you can take on.
E
Now, 5G: with OpenShift we've already delivered a solid 5G core platform, starting in the OpenShift 4.5 and 4.6 timeframe. While we continue to improve that core platform, we are now also expanding those requirements to include RAN centralized unit and distributed unit footprints, starting in the first half of this year.
E
Looking at core capabilities: IPv4, IPv6 and dual stack throughout the cluster networking, and continued optimization of the scheduler in Kubernetes to provide optimal workload placement. Looking at it from the RAN use case, we talked about zero-touch provisioning before this slide; that continues to be an area of focus, as well as continuous optimization of latency and of the single-node configuration. And then next, on to OpenShift and OpenStack.
E
E
As highly requested, we will look to add support for NFV fast datapath options like SR-IOV, OVS-DPDK and OVS hardware offload, with both IPI and UPI. OpenShift CNF pods will have performance equivalent to OpenStack VNFs by mapping the fast datapath interfaces, which includes OVS-DPDK, SR-IOV and OVS hardware offload interfaces, as PCI devices into the VM, and by using the SR-IOV operator to connect these directly into the VMs and pods on secondary Multus interfaces.
E
E
E
It is recommended when there's significant external or north-south traffic. Because you're using this, you don't need Kuryr; since there is no double encapsulation, it avoids the floating IP and NAT for external connectivity.
E
Some limitations are that the current Terraform installer requires admin privileges and metadata services. For the example shown here, the compute node and VM worker are on the same network, and the top-of-rack switch is the gateway or router for this network. All traffic is sent to the top of rack for routing to its destination, and east-west traffic between pod A and pod C is sent only on the overlay, managed by the OpenShift SDN and routed by the top-of-rack gateway.
F
Thank you, Maria. All right, so we'll now move up a level to our developer and platform services. But before diving in, I just want to talk a little bit about our customers.
F
They're all unique businesses and companies, and so we, as OpenShift, the platform of platforms, need to enable our customers to easily configure, customize and extend the platform to meet their needs. So developer and platform services focuses on removing friction from developing, configuring and deploying applications on OpenShift.
F
F
Serverless and service mesh provide easy ways for developers to deploy and run applications in Kubernetes with a pre-baked day-2 operations story. These are also popular examples of the operator extensions that we provide. DevOps and GitOps facilitate delivery and management of our applications.
F
F
So, what's new for the console? The OCP console is an extensible and customizable Kubernetes UI designed to empower users at all levels. Our major areas of focus here remain consistent for each release.
F
We'll let you make the decision of what you want to allow your developers to have access to. We always try to put developers first, so we want to get them productive more quickly. We like to meet them where they're at and tailor experiences to them, whether they're novices or experts.
F
Finally, we put an emphasis on making Kubernetes easy. We've enhanced our extensible quick starts to offer the ability to easily copy, paste and execute commands in the web terminal. This onboarding pattern guides customers with an interactive experience and helps to reduce the time it takes to get developers up and running. Next slide, please.
F
Let's dive a little deeper into our frictionless, cohesive and pluggable platform. This platform allows customization, and it's an extension of the OCP console: as the platform's built-in capabilities grow, so does your UI. So we're looking to enhance the features in the following areas: our quick starts, our upcoming metrics dashboard, custom resource definitions to handle enabling custom dashboards, and the continued evolution of our dynamic plug-in framework.
F
F
F
F
Devs can now drag and drop their fat JARs from their desktop directly onto the topology page and get them quickly deployed on OpenShift. In the future, you'll even be able to drag and drop Helm chart archives, and eventually you'll even have the ability to connect a debugger to your deployed application, all from our web console. Next slide, please.
F
Last but not least, we plan to help developers write mature operators more easily by providing higher-abstraction-level APIs in the Operator SDK and by supporting more languages. This will leave developers with more time to focus on the management logic of the operator, rather than on the low-level API concerns or having to learn a new programming language, even.
F
So, going a little deeper into this new Operator API: it'll be a fairly noticeable change, in that it will eventually replace three separate APIs that we have today to install and update operators. Those were initially designed with human intervention in mind, for instance to explicitly approve an update, but we've listened to feedback from our customers who are automating every aspect of cluster configuration and are looking for better support for GitOps when dealing with operators.
F
F
F
We don't want to block our developers' velocity, exploration and initiative when we can avoid it, so OLM will allow customers to give this freedom to cluster tenants as part of an auto-approval plug-in, which can be provided on a per-cluster or per-catalog basis.
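For context, today's operator installs are driven by a Subscription like the sketch below, and the approval knob on it is what those automation-minded customers end up scripting around; the operator name and channel are illustrative.

  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: example-operator
    namespace: openshift-operators
  spec:
    channel: stable
    name: example-operator
    source: redhat-operators
    sourceNamespace: openshift-marketplace
    installPlanApproval: Manual   # Automatic lets OLM apply updates without a human approving each InstallPlan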
F
So, Helm. Helm is one of the most popular package managers for Kubernetes, and it's now generally available as part of OpenShift, and we're continuing to integrate its capabilities across the platform. The goal is to provide a self-service application development experience that makes the OpenShift tagline, innovation without limitation, a reality.
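One of those integrations already in place is the ability to add your own chart repositories to the developer catalog through a HelmChartRepository resource; a sketch, with a placeholder URL:

  apiVersion: helm.openshift.io/v1beta1
  kind: HelmChartRepository
  metadata:
    name: example-charts
  spec:
    name: example-charts
    connectionConfig:
      url: https://charts.example.com    # charts from this repo show up in the console's developer catalog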
F
We do this by enabling developers to use the tools they desire to deploy their applications with minimal intervention, greatly reducing application time to market. This is particularly useful for developer-centric tooling and stateless applications that don't necessarily require the advanced management and automated day-2 operations that operators give us.
F
The OpenShift workload ecosystem will get stronger with a new Helm chart certification program. This will enable our partners to provide their tested and supported Helm charts for OpenShift. We will also continue to bring greater integrations with various developer tools and services, including odo, the Service Binding Operator, devfiles and more. Next slide, please.
F
So, what's next for services? Two big and fast-growing areas are serverless and service mesh. Serverless and service mesh provide easier ways to develop and run applications on OpenShift, with a secure, out-of-the-box day-2 story. Serverless allows you to build and run applications without requiring a deep understanding of the underlying infrastructure, while service mesh addresses common challenges with microservices such as security, observability, resilience, deployments and more, which takes a big load off of your developers.
F
As we move forward with these two areas, there are four themes that are guiding us. Better together: as serverless and service mesh are based on the two open source projects Knative and Istio respectively, we're working to provide an even more seamless experience between these projects and OpenShift. Today we provide seamless installation and upgrades via OperatorHub; as we move forward, we are aiming to improve our integrations with different eventing sources, GitOps workflows, API management, observability infrastructure, cluster management and much more.
F
On the user experience side, both Knative and Istio are young and fast-moving open source projects, and as such we aim to smooth the admin and developer experiences around both projects. For serverless, this includes enhancements to observability and eventing, and new ways of writing applications with serverless functions. For service mesh, this means a greater focus on operations and troubleshooting of the service mesh control plane and proxies. For scaling services, there are two ends to the spectrum.
F
On the serverless side, scaling to zero and startup performance are paramount. On the service mesh side, we're recognizing that we have customers now that are pushing the boundaries of both performance and scalability when it comes to using service mesh. We're working to establish benchmarks, and we're also looking to see how we can help customers more effectively use service mesh at scale. We're also focusing on extending service mesh over multiple tenants and multiple clusters, which we'll go into a little more later.
F
On the security side: security is always a major focus of ours at Red Hat. Our teams perform a thorough review of all upstream features before bringing them into our products; in the case of service mesh, many of the differences between OpenShift Service Mesh and upstream Istio are a result of such reviews, and this continues to be the case as we move across clusters with federation.
F
F
So OpenShift Serverless Functions is going to be coming into tech preview. These are a collection of tools that enable developers to create and run functions as native services on Kubernetes, which provides a simple, focused and opinionated way to deploy applications. Functions are all about making the programming model simple, to reduce complexity.
F
Users no longer have to worry about platform specifics like networking, resource consumption, sizing and many other considerations, which can be extremely time-consuming. It makes deploying applications like AI and ML much simpler, since data scientists can easily create a web server that will listen for requests, and they no longer have to learn different application configurations or frameworks.
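Under the covers, a deployed function ends up as a Knative Service, so the artifact the functions tooling is meant to generate and apply for you is roughly the sketch below; the names and image are placeholders.

  apiVersion: serving.knative.dev/v1
  kind: Service
  metadata:
    name: hello-fn
  spec:
    template:
      spec:
        containers:
        - image: registry.example.com/hello-fn:latest   # built from the function source
          env:
          - name: TARGET
            value: world

Knative handles routing, revisioning and scale-to-zero for that service, which is what keeps the programming model down to just the function code.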
F
F
Finally, last but certainly not least, by popular demand the upcoming release of Service Mesh will introduce the ability to federate multiple service meshes. This will allow service mesh administrators to easily manage connectivity between services located in different meshes, and indeed in different clusters.
F
This will be our first step towards supporting multi-cluster topologies for service mesh, which will build upon our existing multi-tenant deployment model. Our multi-tenant deployment model provides facilities for securely deploying and managing multiple service meshes within the same OpenShift cluster; each of these meshes may be managed by a different team or administrator. We'll be expanding this model across multiple clusters.
F
Users may wish to connect services in different meshes for several reasons, most notably to provide direct access between services that might be managed by different teams, by different administrators, in different clusters, different regions or availability zones. This is becoming increasingly common as our customers look to scale service mesh across their large organizations.
F
F
So, that concludes it. Thanks from myself and all my colleagues; I want to thank everyone for spending time with us. Please keep an eye out for future presentations on OpenShift.tv, and if you haven't already, please do sign up for the virtual Red Hat Summit in April. Thanks, everyone, for your time, and we'll see you next time.