From YouTube: OpenShift Release Update and Road Map Daniel Messer, Ramon Acedo Rodriguez, Daniel Oh (Red Hat) OSCG
Description
OpenShift Release Update and Road Map
Speakers: Daniel Messer, Ramon Acedo Rodriguez, Daniel Oh (Red Hat)
OpenShift Commons Gathering Kubecon EU
May 17, 2022 Live from Kubecon EU in Valencia, Spain
Full Agenda here: https://commons.openshift.org/gatherings/OpenShift_Commons_Gathering_at_Kubecon_Europe_2022.html
Learn more at: https://commons.openshift.org
A: So let's talk about the hybrid cloud to start with, which is very related to everything we do with Kubernetes and OpenShift. For the past 10 years or so at Red Hat we've been investing a lot in hybrid cloud, and our customers and users have also been investing in innovating with their apps across the different types of applications you may have, from traditional n-tier applications to more cloud-native applications, and all of these can run across all our footprints.
A: Here you can see the five footprints that we have: from the physical bare-metal nodes where you can install Kubernetes, to virtual machines on the traditional virtualization platforms, but then also your private cloud (think about OpenStack or others), along with the public clouds that we all know, and the edge. We will talk a little bit about edge as well today. Thinking about OpenShift right now: Kubernetes is delivered with OpenShift.
A: You have two ways to consume Kubernetes with OpenShift. It is a self-managed platform, and then on top of that there are the cloud services: on AWS, Microsoft Azure, IBM Cloud, and then Red Hat's own cloud service, backed by AWS and Google Cloud. Those are co-engineered between Red Hat and the cloud provider.
C: This is where we are, and I'm going to showcase a couple of things. Here's the architecture; this one is a little bit complicated, but don't worry, I'm going to go through it really step by step. There are a bunch of capabilities in OpenShift 4.10, for example Advanced Cluster Management, container security, and the GitOps pipeline. This is not only about developers, but also what SREs, DevOps engineers, and application architects on your team need. I'm going to showcase a bunch of stuff in the next 15 minutes.
C: So I'm going to stop my presentation. Here's my OpenShift cluster, 4.10, and as you can see (hopefully you guys can see), I already installed a bunch of operators. You can see Advanced Cluster Security for Kubernetes (ACS); it's all managed and gives you container security, like the Docker CIS benchmark or any CVE violation.
C: You can keep monitoring and securing your cluster. There is also GitOps, based on the Tekton CI/CD pipeline as well as Argo CD; based on that, you're going to have your GitOps pipeline working from your Git application repository. And the last thing is ACM. Say your company needs to manage multiple clusters, not just a single on-prem Kubernetes: you also have Kubernetes on Amazon, Google, Microsoft, or DigitalOcean. How do you manage multiple clusters, along with your multi-cloud or hybrid cloud strategy?
C: I already installed the operators; you can do that as long as you have cluster-admin permission. I'm going to show how to get started developing and creating the GitOps pipeline, and actually I have already created my CI/CD pipeline. This GitOps is based on Argo CD; you can create this Argo CD using the OpenShift GitOps and Pipelines operators. Here's a bunch of stuff, and I have the Developer perspective at the bottom.
C: It is also managed via the kam CLI: kam bootstrap generates much of the YAML for me, including the secret files. When you run this CLI, as you can see, I already set it up with the GitOps pipeline Git repository (I'm going to put it up there in a minute), and also here's the actual application Git repository, and here is the access token, which allows me to access my external container registry.
C: I'm going to use quay.io; you can also use Docker Hub or Google Container Registry. When I change my application, the Tekton pipeline automatically builds it and pushes it into the container registry, and then, when my container image gets re-tagged out of my Git repository, it automatically triggers Argo CD to deploy the application into the target environment — for example the dev cluster or the production cluster, et cetera.
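An Argo CD Application for one of those target environments typically looks something like the following; this is a minimal sketch, where the repo URL, path, names, and namespaces are hypothetical placeholders rather than the ones from the demo:

```yaml
# Hypothetical Argo CD Application: point a cluster at one environment
# folder of a GitOps config repo and keep it in sync automatically.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: microsweeper-dev
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-config.git
    targetRevision: main
    path: environments/dev
  destination:
    server: https://kubernetes.default.svc
    namespace: microsweeper-dev
  syncPolicy:
    automated:
      prune: true      # remove resources that were deleted from Git
      selfHeal: true   # revert manual drift on the cluster
```

With `automated` sync enabled, a new image tag committed to the config repo is enough to roll the application out to the target cluster.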
C: So once you run this command line, you get a bunch of secret files; as you can see, here is a secret file. I already opened my terminal in my IDE, and if you go to the configuration you can see the Argo CD YAML files, and also the Tekton-based CI/CD: how to deploy your application to the dev cluster or to staging and production, et cetera. Also, here are the environments for dev and production.
C: When I go to the application service here in the configuration, there are a bunch of YAML files describing how to deploy the application onto the Kubernetes cluster. Today I'm going to use a Java application. Everybody says, oh, Java is too old — it's maybe 25-year-old technology — but it's still evolving. So I'm going to use an application that was a wonderfully popular game on the Windows operating system back in 1995: Minesweeper. I'm going to tweak that application based on Quarkus, which turns it into a bit of an evolved, cloud-native application.
C: In the end it's a microservices application running on Kubernetes. So, back to the web browser: this is my GitHub repository. I'm going to make it bigger here. Here's the GitHub repository for Microsweeper; it's publicly available. When you go back to my slides, here is a QR code you can scan — I'm going to share these slides later today — and through it you can get to the entire demo environment and a YouTube channel where you can find everything step by step.
C
I
don't
have
enough
time
to
step
by
step
all
kinds
of
stuff
today,
so
back
to
the
repository
here
to
get
offs,
you
can
see
compilation,
environment
and
then
go
to
application
itself.
Quarkx
and
here's
the
application.
You
can
go
to
source.
It's
somewhat
stub
if
you
are
familiar
with
the
java
application.
This
is
your
maven
project.
I
got
to
just
develop
the
application
and
one
of
the
interesting
part
when
you
go
to
setting
on
the
github
repository
and
then
there
are
developer
setting-
and
I
just
create
two
personal
access.
C: Here is the kam CLI; I'm going to input my super-secret token here. This is how the Tekton pipeline triggers: it automatically detects changes in the GitHub repository. This is a common practice for how developers actually work — once they're done, they just push the code into the GitHub repository, and after that the nice, smart CI/CD pipeline detects the change and automatically starts your pipeline, with the build and deployment, et cetera. And here's quay.io; it's an external container registry.
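The clone-build-push flow described above can be expressed as a Tekton Pipeline. This is a minimal sketch using the standard `git-clone` and `buildah` cluster tasks; the parameter values (repo and image references) are illustrative placeholders, not the demo's actual names:

```yaml
# Hypothetical Tekton Pipeline: fetch source, build an image, push to quay.io.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-push
spec:
  params:
    - name: git-url
      type: string        # e.g. https://github.com/example/microsweeper.git
    - name: image
      type: string        # e.g. quay.io/example/microsweeper:latest
  workspaces:
    - name: shared-workspace
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone
        kind: ClusterTask
      params:
        - name: url
          value: $(params.git-url)
      workspaces:
        - name: output
          workspace: shared-workspace
    - name: build-image
      runAfter: [fetch-source]
      taskRef:
        name: buildah      # builds the Containerfile and pushes the image
        kind: ClusterTask
      params:
        - name: IMAGE
          value: $(params.image)
      workspaces:
        - name: source
          workspace: shared-workspace
```

A Tekton trigger wired to the GitHub webhook creates a PipelineRun from this Pipeline on every push.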
C: In these settings I created the credential for container registry access — authorization and authentication based on a robot account — which means that whenever I re-tag a new image into the container registry, Argo CD automatically detects the change and deploys to the target environment. I had to set up a bunch of stuff, and I'll have to skip some of the necessary parts because I don't have enough time, but hopefully it makes sense. Now, back from the application: let's go to the ACM cluster view.
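The registry credential mentioned above is stored on the cluster as a standard docker-config secret that the pipeline can reference. A sketch, with a hypothetical name, namespace, and token:

```yaml
# Hypothetical credential for a quay.io robot account, stored as the
# standard dockerconfigjson secret type that build and pull steps consume.
apiVersion: v1
kind: Secret
metadata:
  name: quay-robot-secret
  namespace: microsweeper-cicd
type: kubernetes.io/dockerconfigjson
data:
  # base64 of {"auths":{"quay.io":{"auth":"<robot-account-token>"}}}
  .dockerconfigjson: eyJhdXRocyI6e319
```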
C: When you install ACM, you can find the ACM management console; when you click on it, a new dashboard opens, and it automatically does single sign-on with your OpenShift authentication. I log in with my username, daniel, and it shows two clusters at this moment: the dev cluster and the production cluster. You can use this same capability on your OpenShift 4.10 cluster right now. Once you go into ACM, we have a bunch of applications.
C: It takes some time. Go back to All Clusters in ACM, and you can see the two clusters. When you go to Clusters, we have the two clusters I already showed you in Argo CD: one is the dev cluster, the other is the production cluster. When you go into the dev cluster, you can find all the relations between the Kubernetes resources. For example, here's the actual application and PostgreSQL, and then a bunch of other resources — Services, Endpoints, Routes, et cetera.
C: You can see the whole topology of which resources communicate, along with your application and the Kubernetes manifests. You can see here — I'm going to make it bigger — the two clusters, local and production, and then go to the application view: it shows a topology similar to the one in the OpenShift console. Here's your OpenShift console; you can see the PostgreSQL and the simple Quarkus application, and clicking on Open URL automatically opens the actual application endpoint.
C: So OpenShift actually provides the Route URL, just like a Kubernetes Ingress.
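The Route that OpenShift generates for a service is itself a small manifest. A minimal sketch, with the service name, port, and TLS choice as illustrative placeholders:

```yaml
# Illustrative OpenShift Route exposing a service outside the cluster,
# playing the role that an Ingress plays in plain Kubernetes.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: microsweeper
spec:
  to:
    kind: Service
    name: microsweeper
  port:
    targetPort: 8080
  tls:
    termination: edge   # TLS ends at the router; plain HTTP to the pod
```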
C: Once you open it up, you can see the Minesweeper application, like back in 1995 when Minesweeper came out — I've been there. Let's give the application a moment. I tweaked it a little bit, and here is Microsweeper: I changed the title, and here is my scoreboard; I actually changed the backend application.
C
This
is
all
score
stored
in
the
process.
Sql
from
the
quarkx
application,
which
is
the
cloud
name
of
microservices
java
framework,
invented
the
red
hat.
So
when
you
go
to
acm,
you
could
have
the
application
side
and
then
you
could.
You
could
have
a
similar
topology,
just
like
you
have
open
to
the
console.
But
when
you
go
to
open
the
console,
for
example,
you
have
a
two
cluster,
which
means
you
have
two
different
cluster
and
two
different
console,
but
the
acm
you
can
just
find
all
kind
of
stuff
in
a
single
pane
of
your
console.
C
So
go
to
application
and
dev
console.
You
can
find
the
similar,
the
topology
application
and
route
deployment
etc
and
go
to
route
and
click
on
the
route
url,
and
you
can
find
the
same
application
here
and
then,
when
you,
I
already
play
the
game
to
time
and
you
can
find
the
two
score
in
my
post-esque
database
and
then,
when
you
go
to
production
application
here
in
production,.
C: Now I'm going to show another interesting part: ACS, Advanced Cluster Security for Kubernetes. Say you've got a bunch of applications deployed and running on Kubernetes; there are multiple personas who need to secure those applications, as well as the infrastructure. Luckily OpenShift provides the ACS operator, and I already installed it. So go back to ACS (StackRox); I already deployed everything — excuse me.
C
And
then
it
will
show
up
the
acs
and
I
already
deployed
the
central
services
which
he
arrested
me
have
the
acs
console
here.
So
let
me
try
to
access
acs
console,
go
to
networking
in
the
route
and
it
will
automatically
create
the
route
url
to
access
acs.
Here
we
go
so
I'm
going
to
go
to.
C
Access
the
acs
route
url
and
then
it
shows
me
the
all
the
vibration
stuff
in
the
cluster.
But
in
this
case
I
didn't
edit
it
secure
because
at
this
moment
just
the
empty
dashboard.
So
you
don't
have
any
violation
at
this
moment.
So
I'm
going
to
try
to
showcase
how
to
edit
secure
cluster
specifically
in
this
cluster.
C
So
you
can
also
actually
edit
the
secure
cluster
for
acs
itself,
as
you
can
see,
there's
no
biracial
scanning
at
this
moment,
so
I'm
going
to
try
to
edit
a
new,
secure
cursor
before
that
so
acs.
Another
beauty
of
the
acs
actually
provides
some
integration:
how
to
create
a
secret
file
when
you
edit,
a
new
cluster
for
security,
for
example.
Here's
a
hem
chart,
so
I'm
going
to
create
a
new
my
secret
based
on
any
bundle,
and
I
just
the
download
all
secret
file
and
then
just
try
to.
C: Then I just apply the secret files in the same namespace as ACS. Once I've created these secret files, my ACS can actually access that cluster to scan for all kinds of violations. I just created the three secrets; you can go to the OpenShift console, go to Secrets in the same namespace, and find the three secrets created just now, just now, and just now. Okay, pretty cool. Go back to the operator.
C: ACM — okay, this one is ACS. I'm going to copy the Central endpoint and paste it here; this is the HTTPS protocol with TLS termination. I'll just create the new secured cluster, and then go to Pods: it automatically deploys a bunch of pods to access and secure the cluster itself, and it takes some time to finish everything. I actually pulled all the container images in advance, so it's almost done. I think this is done.
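What that "add a secured cluster" step creates is essentially a SecuredCluster resource pointing the local ACS services at Central. A rough sketch — the cluster name, namespace, and endpoint below are placeholders, and the exact fields depend on the operator version:

```yaml
# Hypothetical ACS SecuredCluster resource: the operator deploys the sensor,
# collector, and admission-control pods and connects them to Central.
apiVersion: platform.stackrox.io/v1alpha1
kind: SecuredCluster
metadata:
  name: stackrox-secured-cluster-services
  namespace: rhacs-operator
spec:
  clusterName: production
  centralEndpoint: central-rhacs-operator.apps.example.com:443
```

The init-bundle secrets applied in the same namespace are what let these pods authenticate to Central.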
C
Okay,
go
back
to
my
acs
and
go
to
dashboard
and
go
to
compliance,
and
I
try
to
scan
in
bottom
once
again
and
then
it
takes
some
time
depends
on
the
how
much
your
application
actually
deploy
and
you
can
see
the
bunch
of
the
violation
you
can
find
that,
and
here
is
the
cis,
the
cyber
security
for
based
on
the
aqua
benchmark
and
this
the
virtual
dos,
the
heat
power
and
a
lot
of
the
the
cbe
standard.
C
You
actually
take
care
about
whenever
you
deploy
the
container
application
onto
kubernetes
in
your
production
environment
and
I'm
going
to
interesting
stuff
here
and
let's
just
reload.
It
takes
some
time
to
replace
your
dashboard
because
you
all
aggregate
all
your
application
and
then
in
the
end
it
show
up
in
the
dashboard.
In
the
meantime,
so
I
got
a
one
interesting,
yellow
bar.
This
is
my
last
thing.
C
So
look
for
jay!
This
is
your
one
emo
file,
so
I
just
you
probably
know
the
couple
months
ago
we
have
a
very
critical
cbe
around
the
logo
project
shell.
So
this
is
the
same
example:
export
logo4j
cbe
with
the
java
application.
C: Okay, I'm going to skip that; it was the last part I wanted to show. So, just a quick summary: today you already have ACS, ACM, and the GitOps pipeline capability on OpenShift 4.10. Now I'm going to hand it over to Daniel to talk about what's next in OpenShift and a lot of the upcoming capabilities.
B: I'm Daniel Messer, a product manager in the OpenShift group at Red Hat, and what you just saw are all current capabilities. All of the things that Dan showed you — multi-cluster visibility with ACM, multi-cluster security analysis and reporting with ACS, GitOps-driven deployments and pipelines with Tekton and Argo — all of this is possible already today. What I, and later also Ramón, are going to talk about now is the future.
B: Multi-cluster is the main direction we are taking the whole platform in: moving away from a model where we have very few but extremely large clusters, with hundreds or thousands of namespaces shared by hundreds and hundreds of tenants, to a model where we are essentially looking at an architecture that looks like this. We want you to be able to bring tenants into their own clusters, and to bring in clusters.
B
For
specific
purposes,
with
specific
hardware
from
specific
cloud
providers
or
infrastructure
providers
and
manage
them
at
the
fleet
level,
so
we
are
not
just
telling
you
hey,
run
multiple
clusters,
because
now
it's
so
easy
with
openshift.
To
do
that,
we
want
you
to
do
this
in
a
way
where
the
scales-
and
you
aren't
drowning
yourself
in
work,
because
you
need
to
keep
all
of
this
operational.
B
So
when
we
talk
about
multi-cluster,
you
need
to
have
a
couple
of
things
in
check.
The
first
is
the
storage
layer
that,
from
the
central
pool
of
storage,
allows
your
applications
to
get
persistent
storage
from
a
central
source
for
efficiency
to
be
also
to
reclaim
it
later,
when
it's
no
longer
needed,
but
also
in
a
way
that
you
can
have
data
move
from
one
cluster
to
another
cluster
right.
You
don't
want
to
be
stuck
in
one
cluster,
just
because
your
data
sits
in
that
particular
region
or
data
center.
B
You
want
to
have
the
ability
to
move
the
data
over
to
another
cluster
for
fade
over
purposes,
so
a
multi-cluster
storage
layer
is
required
to
actually
do
that
and
in
those
clusters
you
still
have
their
own
individual
ingress
point
for
network
traffic,
that's
where
the
applications
sit,
but
the
applications
can't
be
aware
and
dependent
on
the
fact
that
they
are
running
across
multiple
clusters.
It
needs
to
be
transparent
to
them
right.
We
don't
want
to
rewrite
our
application
just
for
it
to
work
on
the
multi-cluster.
B
This
is
what
you
need
at
the
infrastructure
level
and
in
order
to
effectively
do
that
at
scale,
without
doubling
or
tripling
your
team
size,
you
need
to
have
tuning
right.
You
need
to
have
the
ability
to
have
insight
of
what
is
running
in
all
your
clusters,
but
also
from
a
central
point,
deploy
applications
across
those
and
force
policies
across
those
clusters,
figure
out
security
violations
in
various
clusters
and
remediate
them
and
get
alerted
about
it.
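Policy enforcement across a fleet is expressed declaratively in ACM / Open Cluster Management. A minimal sketch of a Policy that ensures every targeted cluster has a given namespace — the names here are illustrative:

```yaml
# Sketch of an Open Cluster Management Policy: enforce that a namespace
# exists on every managed cluster the policy is placed on.
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: require-team-namespace
  namespace: policies
spec:
  remediationAction: enforce   # create the object if it is missing
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: team-namespace-present
        spec:
          remediationAction: enforce
          severity: medium
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  name: team-a
```

A separate placement resource selects which clusters in the fleet the policy binds to.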
B
And
whenever
you
talk
about
having
multiple
clusters,
you
also
need
a
central
source
of
truth
of
all
the
containerized
software
that
you
are
running
in
these
clusters.
So
this
is
coming
together
with
a
container
registry
that
sits
in
sort
of
a
hub
position,
so
these
three
main
pillars:
multi-cluster
security,
multi-cluster
management
and
container
registry.
B
These
are
the
core
pillars
of
our
multi-cluster
approach,
where
we
basically
standardize
all
this
tuning
irregardless
of
how
many
clusters
you
end
up
running
and
where
so.
The
first
is
cluster
management,
and
we
are
true
to
our
work.
We
open
source
everything,
so
we
open
sourced
what
we
call
in
the
product:
space,
advanced
cluster
management
with
the
open
cluster
management
project,
and
we
didn't
just
open
source
it
and
put
it
on
github.
We
actually
donated
it
to
the
cncf
last
year,
so
the
core
parts
of
what
it
takes
to
do.
B
Multi-Cluster
life
cycle
application
deployment
across
multiple
clusters,
as
well
as
policy
enforcement,
is
in
the
open
cluster
management
project,
which
is
a
cncf
project.
Now
these
are
the
base
building
blocks
in
the
form
of
the
apis,
and
the
controllers
that
make
this
up
and,
as
you
can
see,
it
spans
the
various
six
and
working
groups
and
already
gets
contributions
and
community
momentum
also
from
our
partners.
So
open
cluster
management
is
one
part,
because
we
donated
this
to
cncf.
B
This
is
also
where
we
do
integration
with
hive
for
cluster
provisioning,
submariner
for
east
west
network
traffic
isolation,
as
well
as
volume
syncing
for
actually
replicating
storage
across
clusters,
using
a
shared
storage
system,
so
just
to
not
be
confused.
These
are
two
open
source
projects
that
kind
of
flow
into
the
acm
product
and
that's
where
all
the
innovation
happens
upstream.
B
So
what
we
are
focusing
on
in
multi-cluster,
specifically
the
networking
part,
because
this
is
crucial
to
get
right.
If
you
don't
have
that
your
multi-cluster
deployment
is
going
to
be
very
complicated,
it's
going
to
be
very
manual,
so
we
are
investing
heavily
in
this
multicluster
networking
layer
based
on
the
submariner
technology
to
essentially
allow
parts
running
in
different
clusters,
communicate
through
what
they
perceive
as
a
very
flat
network
namespace.
So
they
don't
perceive
any
boundaries
of
their
own
cluster.
B
They
don't
even
know
that
the
part
they
are
talking
to
may
actually
sit
in
another
cluster
very
far
away.
It's
completely
the
same
as
talking
to
parts
and
services
in
the
same
cluster,
and
this
is
the
level
of
transparency
and
abstraction
you
need
in
order
to
essentially
carry
out
multi-cluster
networking
where
the
clusters
themselves
are
connected,
east-west
wise
with
ipsec
tunnels,
but
from
the
port
perspective,
it's
all
one
flat,
networking
namespace.
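With Submariner, making a Service reachable from other clusters in the cluster set is a one-object operation. A sketch, with service and namespace names as placeholders:

```yaml
# Sketch: exporting a Service via Submariner's multi-cluster services API
# makes it resolvable from the other clusters in the cluster set.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: microsweeper   # must match the name of an existing Service
  namespace: demo
```

Consumers in other clusters can then reach it through a cluster-set DNS name such as `microsweeper.demo.svc.clusterset.local`, without knowing which cluster actually hosts the pods.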
B: What we're also looking to do in ACM — and I say ACM synonymously with Open Cluster Management — is the ability to import and manage OpenShift and OKD clusters on other compute architectures. x86 is what we do these days, also Power and System z, but we're also going to support Arm this year; OpenShift and OKD already support Arm since version 4.10.
B
So
cluster
management
layer
will
also
start
to
support
that
and
learn
how
to
deploy
clusters
on
these
infrastructures
and
for
the
storage
part,
we
are
betting
on
volsync,
the
formerly
known
as
scribe.
This
project
is
used
to
essentially
asynchronously
replicate
data
from
persistent
volumes
across
clusters.
In
the
background
for
the
purposes
of
being
able
to
make
a
disaster
recovery
possible,
so
you
would
be
able
to
take
data
out
of
one
cluster
and
move
into
another
cluster.
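On the source cluster, that replication is configured with a VolSync ReplicationSource. A rough sketch — the PVC name, namespace, and schedule are illustrative, and a matching ReplicationDestination on the target cluster completes the pair:

```yaml
# Sketch of a VolSync ReplicationSource: snapshot a PVC on a schedule
# and replicate the data toward a destination in another cluster.
apiVersion: volsync.backube/v1alpha1
kind: ReplicationSource
metadata:
  name: app-data-replication
  namespace: demo
spec:
  sourcePVC: app-data
  trigger:
    schedule: "*/10 * * * *"   # replicate every 10 minutes
  rsync:
    copyMethod: Snapshot       # point-in-time snapshot before each sync
```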
B
But
when
we
talk
about
this
cluster
management
architecture,
you
will
see
that
the
cluster
management
stack
itself.
The
acm
on
open
cluster
management
technology
runs
on
openshift
itself
right,
so
we
call
this
a
hub
cluster,
and
this
is
an
infrastructure
only
cluster
that
doesn't
run
any
workloads.
It
really
just
runs
the
infrastructure
to
do
multi-cluster
management,
and
this
is
something
you
definitely
want
to
be
able
to
backup
and
restore
in
case
it
completely
fails.
B
What
it
will
also
support
is
deploying
openshift
in
a
slightly
different
way,
so
we
have
master
worker
node
model
today,
where
you
have
a
control
plane
separate
from
the
masters,
and
you
use
specific
nodes
for
the
control
plane
in
the
hypershift
project
that
we
are
working
at
upstream.
You
will
be
able
to
containerize
the
control
plane
and
run
it
on
openshift
itself.
Saving
you
from
procuring
and
providing
separate
machines
just
for
a
control
plane
for
a
cluster.
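A hosted control plane in HyperShift is requested declaratively with a HostedCluster resource. This is a very rough sketch — the API group versions and fields were still evolving upstream at the time, and every name below is an illustrative placeholder:

```yaml
# Very rough HyperShift sketch: the control plane for this cluster runs
# as pods on the management cluster instead of on dedicated master nodes.
apiVersion: hypershift.openshift.io/v1alpha1
kind: HostedCluster
metadata:
  name: dev-team-a
  namespace: clusters
spec:
  release:
    image: quay.io/openshift-release-dev/ocp-release:4.10.0-x86_64
  pullSecret:
    name: dev-team-a-pull-secret
  platform:
    type: AWS
```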
B
So
this
is
the
hypershift
project
and
acm
will
learn
how
to
provision
clusters
in
that
specific
way
in
the
future
as
well.
Another
important
aspect-
and
you
will
hear
about
this
later
today-
is
security
and
forced
by
content,
integrity
and
verification.
B: That is the basic cluster management layer — Open Cluster Management and friends — but what you also want is a console, a graphical UI experience, around it. We have an awesome console in each cluster with the OpenShift console, and what we are doing is elevating this experience up to the fleet level, so you have a console that's actually multi-cluster aware and will, as you've seen previously in the demo, start to integrate many of these multi-cluster management aspects into one common console framework.
B
So
you
have
the
openshift
admin
console
next
to
the
developer,
console
next
to
the
acm
console
next
to
the
acs
console
as
well.
In
one
view
screen
basically-
and
you
will
be
able
to
zoom
into
particular
clusters
in
your
fleet,
but
also
zoom
out
at
the
fleet
level
to
basically
have
a
fleet
wide
overview
of
policy
enforcement
applications
running
as
well
as
security
profiles
being
enforced,
and
this
is
done
with
this
unified
cluster
engine.
B
There
is
the
new
operator
that
we
are
introducing
called
multi-cluster
engine
which
is
actually
taking
out
some
of
the
basic
cluster
lifecycle
that
acm
does
into
its
own
operator,
that's
available
to
every
openshift
and
okd
cluster,
and
this
is
driving
this
unified
console
and
what
it
does.
It
allows
you
to
essentially
use
things
like
fleet
wide
authentication,
so
you
don't
have
to
log
into
each
and
every
cluster
individually.
B
This
is
done
by
an
extremely
interesting
project
that
we
are
conducting
in
the
console
that
we
call
dynamic
plugins.
So
all
these
new
ui
experiences
that
we
are
sort
of
working
into
the
openshift
console
are
carried
out
with
a
plug-in
framework,
and
this
is
not
just
something
we
use
in
order
to
bring
acs
and
acm
to
the
console.
It's
actually
something
that
our
virtuous
users
as
diane
used
to
call
them,
and
also
our
partners
can
use
to
build
their
own
ui
experiences
right
inside
openshift,
it's
very
straightforward.
B
I
had
a
colleague
of
mine
in
the
pm
group
actually
do
a
dynamic
plug-in
within
two
days,
with
nothing
but
a
little
bit
of
javascript
and
a
little
bit
of
yaml
that
you
throw
the
cluster
and
it
makes
your
console
appear,
and
this
is
so
easy
that
I
think
a
lot
of
you
can
do
it
as
well
and
can
use
it
to
model
unique
workloads
and
specific
things.
You
want
to
have
specific,
ui
support
for
in
your
own
clusters.
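The "little bit of YAML" registering such a plugin is a ConsolePlugin resource. A sketch — the plugin name and backing Service are placeholders, and the API was `v1alpha1` around OpenShift 4.10:

```yaml
# Sketch of a ConsolePlugin resource: registers a dynamic plugin whose
# JavaScript bundle is served by a Service inside the cluster.
apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  name: my-team-plugin
spec:
  displayName: "My Team Plugin"
  service:
    name: my-team-plugin      # Service serving the plugin's static assets
    namespace: my-team-plugin
    port: 9443
    basePath: /
```

Once enabled in the console operator's configuration, the console loads the plugin's UI contributions at runtime.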
B
I
talked
before
about
the
importance
of
storage
and
the
base
layer
to
actually
store
persistent
data.
So
this
is
an
area
where
we
are
heavily
investing
in.
We
are
starting
from
the
ground
up
at
the
container
storage
interface
level,
where
we
will
teach
openshift
how
to
use
csi
for
resizing
of
volume,
provisioning
of
a
thermal
volumes
sa
linux
mounts
as
well
as
bring
in
all
the
cloud
provider
plugins
again
through
the
csi
framework.
B
So
this
is
how
we
enable
the
individual
clusters
to
work
effectively
with
the
infrastructure
through
the
standard
interface
and
then
a
layer
higher.
We
are
starting
to
bring
in
multi-cluster
capabilities.
We
have
a
multi-cluster
object
gateway
that
is
used
with
the
saf
and
group
project
to
create
an
s-3
compatible
storage
in
your
storage
landscape.
So
it's
an
object,
storage
by
a
by
nature,
but
we
are
going
to
add
a
file
system
persona
to
that.
B
So
we
are
going
to
be
able
to
pretend
that
what's
actually
underneath
an
object,
storage
bucket
is
actually
a
file
system
and
there
are
a
lot
of
apps
out
there
that
rely
on
shared
file
systems
and
file
system
style.
Object.
Storage
is
a
way
to
make
that
work
across
clusters
in
the
read
write
a
active,
active
fashion,
but
also
on
the
other
end
of
the
extreme.
We
have
smaller
clusters
very
small
classes,
indeed
in
the
form
of
seeing
a
node,
openshift
sno,
where
openshift
and
all
of
its
technology
is
running
on
a
single
server.
B
These
systems
are
usually
not
connected
to
a
larger,
shared
storage
network
or
complicated
storage
system,
and
they
need
to,
you
know,
work
with
what
they
have
in
the
local
server
and
we
are
exposing
that,
with
the
same
management
capabilities
through
the
logical
volume
management
operator,
which
makes
use
of
the
lvm
stack
and
the
rail
kernel
to
essentially
give
you
a
little
bit
of
light
storage,
provisioning
on
a
single
node,
but
through
the
standard
interfaces
of
odf
and
then
one
layer
higher.
We
are
introducing
these
multi-cluster
shared
networking
and
shared
storage
capabilities.
B
What
we
will
be
able
to
do
with
acm
and
openshift
storage
working
together
is
facilitate
and
orchestrate
a
disaster
recovery
fail
over
from
one
cluster
to
another.
So
your
application
is
managed
via
acm
deployed
via
acm,
but
the
storage
is
managed
and
provided
by
openshift
data
foundation
and
these
two
technologies
integrate
in
a
way
that
they
use
this
wall
string
technology
that
I
mentioned
before
to
replicate
data
continuously.
B
In
the
background,
and
if
a
cluster
fails,
you
will
be
able
to
initiate
a
disaster
recovery
step
that
will
move
the
entire
application
definition
to
the
surviving
cluster,
where
all
the
data
is
already
present,
because
it
was
continuously
migrated.
So
this
is
something
that's
orchestrated
and
available
to
you,
basically
as
a
result
of
an
action
in
acm,
rather
than
you
going
to
the
systems
and
redirecting
storage
and
redeploying
applications
reactivating
storage
all
manually.
So
this
gets
you
out
of
a
disaster
really
really
quickly.
B
On
the
other
hand,
we
hear
you
about
requirements
in
openshift's
data
protection
space
with
the
ability
to
do
backup
and
restore-
and
we
get
this
so
often-
and
we
have
so
many
partners
that
want
to
integrate
with
openshift
and
backup
and
recovery
in
that
we
have
provided
additional
apis
for
them
to
integrate
into
the
platform.
So
the
openshift
data
protection
apis
will
become
version.
1.0
this
year-
and
these
are
the
integration
points
that
backup
vendors
will
use
to
integrate
with
with
openshift
speaking
of
storage.
I
mentioned
before
that.
B
Whenever
you
have
more
than
a
couple
of
clusters
running,
you
definitely
want
to
have
some
sort
of
central
truth
for
all
the
images
that
you're
running
in
these
clusters.
This
is
what
a
central
registry
does,
and
once
you
serve
more
than
one
cluster,
this
registry
needs
to
be
really
really
highly
available
and
really
really
performant,
because
if
the
registry
is
down,
you
will
notice
in
your
clusters
within
five
minutes.
I
guarantee
you.
This
is
the
idea
that
the
project
query
registry
has
been
designed
with
from
day
one.
B
Security
scanning
remains
an
important
aspect
of
an
image
registry
because
it
allows
you
to
scan
the
images
before
they
actually
hit.
The
cluster
quay
already
does
that,
since
a
long
time
with
the
clear
security
scanner
and
will
be
enhanced
to
scan
even
more
content
inside
a
container,
we
have
already
introduced
support
for
programming
languages
like
python,
where
quay
will
be
able
to
report
python
package
vulnerabilities
it
finds
in
the
container.
We
extend
that
to
java,
which
is
actually
in
tech
preview
right
now,
but
also
golang
and
other
scripting
languages
like
node.js
and
ruby.
B: And then, finally, our security pillar — you will have a session about it later today, but the StackRox project is the community version of Red Hat Advanced Cluster Security. This is the last component that Dan showed in his demo, and it's a central piece of actually having peace of mind and faith in your multi-cluster vision: you're not opening yourself up to a lot of rogue workloads and tenants doing all kinds of insecure things in your clusters, and you can still centrally control what security policies are in place.
B
What
kind
of
workloads
can
run
and
what
is
your
tolerance
towards
security
vulnerabilities,
so
acs
and
multicultural
security?
Does
it
as
runtime
so
versus
the
registry,
which
does
it
at
rest?
Acs
will
tell
you
what's
going
on
in
the
cluster
in
the
context
of
the
running
workload,
so
what
it
will
do
here
is
integrate
with
the
six
door
project
I
mentioned
earlier,
so
it
will
be
able
to
define
policies
around
you
only
accepting
containers
that
are
signed
for
execution
and
prevent
unsigned
containers
or
containers
that
fail
signature,
verification
from
being
executed.
B
This
is
how
you
ensure
only
trusted
workloads
run
in
your
cluster.
There
will
also
be
an
ability
to
define
network
policies,
so
a
lot
of
users
want
to
essentially
compartmentalize
and
isolate
the
applications
at
the
network
level,
which
is
an
extremely
good
idea
in
the
security
space.
But
it's
also
very
complicated
right.
B
So
acs
will
provide
a
graphical
editor
for
that,
and
it
already
has
insight
into
the
cluster
traffic
already
based
on
its
eppf
level
packet
filtering
capability,
and
it
will
make
use
of
this
insight
to
recommend
you
known
traffic
patterns
to
trans,
to
basically
express
them
as
network
policies
and
say
this
known
pattern
is
now
allowed
and
everything
outside
is
not
allowed
anymore.
So
this
yields
network
policies,
they
need
to
be
applied
to
the
cluster
and
we
already
have
a
technology
that
applies
policies
to
a
cluster.
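The policies that such an editor generates are standard Kubernetes NetworkPolicy objects. An illustrative example, with placeholder labels and namespace, that would allow only the frontend pods to reach the database on its PostgreSQL port:

```yaml
# Illustrative NetworkPolicy: only pods labeled app=microsweeper may
# reach the PostgreSQL pods on TCP 5432; all other ingress is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: postgresql
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: microsweeper
      ports:
        - protocol: TCP
          port: 5432
```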
B
Compliance
is
an
important
aspect
for
corporate
security
and,
if
you
haven't
heard
about
the
compliance
operator,
you
will
hear
about
it
today
from
kirsten
how
it
helps
you
basically
prove
to
auditors
that
you
are
certain
that
you
apply
to
certain
compliance
levels.
The
compliance
operator
will
also
get
a
graphical
interface,
and
this
interface
will
sit
in
acs
in
the
next
topic.
A: So you've seen a lot of detail on what's going on with OpenShift. Sometimes we get the question of what's the difference between OpenShift and Kubernetes, and with what Daniel showed us before, and what the other Daniel has just explained, you can see it's many pieces that we package together and make easy to consume; this is the value that OpenShift is providing.
A: I promise I'm going to try to be quick and just go through the fun stuff, but you're going to see a lot of items listed in there. Let me start with installing OpenShift, updating OpenShift, and integrating OpenShift with more providers. So what are we doing here in terms of adding new platforms?
A
We
have
a
few
new
platforms,
some
of
them
already
there,
some
of
them
in
the
roadmap,
alibaba
cloud
ibm
cloud
and
nutanix
as
well.
This
actually
is
not
only.
You
know
what
we
are
doing
with
adding
more
platforms.
We
are
also
adding
more
regions
to
the
existing
platforms,
so
this
is
a
continuous,
especially
with
the
large
main
ones,
public
clouds.
It's
a
continuous
effort
that
we
are
making
in
the
middle
installation.
You
need
to
install
openshift
right
if
you
are
not
doing
it,
self-managed
and
well.
A
Installing OpenShift needs to be easy as well, but at the same time we need to cover all the use cases that I'm sure each of you has seen. We need to make it easy, and we are working on an agent-based installer; I'll give a brief update later, and we'll see what that is in a second. Hosted control planes: have you heard of that?
A
So now imagine you want many clusters, different types of clusters, but you would like to have one shared control plane somewhere. Imagine three nodes, for example, or six, it doesn't matter, serving thousands of worker nodes, Kubernetes nodes in different clusters, all just sharing the control plane. That's an awesome idea, and very practical indeed, so we're working on that; so far we're calling it HyperShift.
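For a sense of the model, HyperShift represents each hosted cluster as a custom resource on a management cluster. A rough sketch based on the project's HostedCluster API at the time (v1alpha1); every name and the release image here are purely illustrative, not from the talk:

```yaml
# A hosted cluster whose control plane runs as pods on the management
# cluster, serving worker nodes elsewhere.
apiVersion: hypershift.openshift.io/v1alpha1
kind: HostedCluster
metadata:
  name: tenant-a               # illustrative
  namespace: clusters
spec:
  release:
    image: quay.io/openshift-release-dev/ocp-release:4.10.15-x86_64  # illustrative
  pullSecret:
    name: tenant-a-pull-secret # Secret holding the pull secret
  sshKey:
    name: tenant-a-ssh-key
  platform:
    type: AWS
```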
What else are we doing? Then, upgrades. Upgrades are always a challenge.
A
Isn't it? Many times, customers say: well, I will only upgrade between extended update support (EUS) versions, like between 4.6 and 4.10, just to save the hassle of managing upgrades. We're working on this continuously; it's always in our roadmap and in our pipeline to make these improvements to upgrades. Bare metal: by the way, I'm the product manager for everything bare metal.
A
So this is a topic very close to me, and I could talk a lot about it, but I'm not going to. Only this: do you know the Metal³ project? It's written "Metal3", but in the community we call it "metal kubed". Essentially, with Metal³ you can manage physical servers as if they were virtual machines, or just instances in a public cloud. That's incredible. We've been doing this for years; it's a pretty mature project.
A
In fact, it leverages technology that existed before: Ironic. Maybe you've heard of OpenStack Ironic; that's what manages the servers, in fact that's the engine underneath Metal³. Anyway, Metal³ is managing loads of servers today, and it will manage many more with all the improvements we're making to it.
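In OpenShift, Metal³ surfaces each physical machine as a BareMetalHost resource; a minimal sketch (the BMC address, MAC address, and names are illustrative):

```yaml
# Register a physical server with Metal³ so it can be provisioned
# like any other machine.
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: worker-0
  namespace: openshift-machine-api
spec:
  online: true                               # power the host on
  bootMACAddress: "00:11:22:33:44:55"        # illustrative
  bmc:
    address: redfish://10.0.0.5/redfish/v1/Systems/1   # illustrative BMC endpoint
    credentialsName: worker-0-bmc-secret     # Secret with BMC username/password
```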
Moving on from bare metal and installation: how else can you deploy OpenShift? Well, we have a cloud installer.
A
We have what we call the Assisted Installer. Essentially, you go to console.redhat.com, log in with your Red Hat credentials, and you get direct access to an installer on the web. You don't need to download a client or anything; you just say, this is how I want my cluster to look, you get an ISO, and then you just boot that ISO on the nodes of the cluster you want to build. Super cool.
A
And in fact, we're working on an on-premise version of the Assisted Installer. That will give you even more flexibility: disconnected environments, the things you need when you are on-prem that you perhaps cannot do when you work from the SaaS. That's what we call the agent-based installer for now; that's how we're calling it internally. What else are we working on? OpenShift Virtualization.
A
Have you heard of KubeVirt? Yeah, KubeVirt is a pretty impressive project as well. We've been working on it for years at Red Hat, and in fact it's now pretty mature. It's been years of ramping up and adding features, making it so that you can do everything you would do with your traditional virtualization platforms.
A
Only now from OpenShift, and that's pretty cool. That's actually incredible, having achieved all of this, and in fact the recognition of this maturity level is that it's now an incubating project, a level up in maturity in the CNCF. KubeVirt: pretty cool project.
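With KubeVirt, a virtual machine is just another Kubernetes resource; a minimal sketch (the VM name and disk image are illustrative):

```yaml
# A small KubeVirt VirtualMachine booted from a container disk image.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-vm              # illustrative
spec:
  running: true                # start the VM as soon as it's created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest   # illustrative image
```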
A
So anybody who needs to install OpenShift in a remote location will benefit from all the improvements we're adding for this kind of topology. For example, these places may not even have space for three servers, so we may need to install OpenShift on just one server. When you're at the edge, they may have connectivity, but perhaps not all the time.
A
So we need a way to manage these clusters under those circumstances, many times from a central point of management, which is ACM, the Advanced Cluster Management that Daniel Oh showed us before. Did you see how sophisticated we can get with management? The Minesweeper, right? Those who are old enough have probably played Minesweeper from Microsoft in the past. So now you can explore all of that.
A
All while learning about ACS for security and Advanced Cluster Management for managing many, many distributed clusters. Well, this applies to the edge. And at the edge we also have very specific needs, for example around real-time workloads: we need to fine-tune our servers so they can deliver the performance these applications need, applications that process traffic in real time, things like that, which require complete dedication of the machines you run your workloads on. And related to this: Single Node OpenShift.
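The kind of node tuning just described is expressed in OpenShift through a PerformanceProfile; a sketch assuming the documented CRD (CPU ranges and hugepage counts are illustrative):

```yaml
# Dedicate CPUs 2-7 to latency-sensitive workloads, reserve 0-1 for
# cluster housekeeping, and enable the real-time kernel.
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: edge-realtime          # illustrative
spec:
  cpu:
    isolated: "2-7"            # CPUs isolated for real-time workloads
    reserved: "0-1"            # CPUs kept for system and kubelet
  realTimeKernel:
    enabled: true
  hugepages:
    defaultHugepagesSize: 1G
    pages:
      - size: 1G
        count: 4
  nodeSelector:
    node-role.kubernetes.io/worker: ""
```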
A
As we were saying, you can install OpenShift on just one node. We've been working a lot on SNO, as we call it internally, to make it as good as any other cluster, but also within the constraints of one node. That's not an easy task: imagine you have the master node and the worker node all together, all competing for resources. The workloads compete for resources with the management of the cluster itself, the ingress operator, everything you would normally have in there.
A
So we made it happen. Not an easy project, but we made it happen, and it's fully supported since last year. And now look at this: we can run virtualized workloads in a single-node cluster. That's pretty impressive, if you ask me, because this takes a lot of resources.
A
OpenShift in general, and then all the workloads on top, all in one node: we really need to be very careful about how we split the resources in this node. So that was mainly bare metal, even though internally we try it on every platform. Now we're adding support for vSphere as well, so you can have a single-node cluster on vSphere; pretty cool as well. Another thing the team behind this is working on: you may sometimes need more capacity.
A
Maybe you start with one SNO, and at some point you need to scale because you have more workloads. So you're going to be able to add more workers, or simply add workers, since you started with just one node, to increase your capacity. Another one, just to mention it: OVN-Kubernetes. Do you know OVN? You know that you can use OpenShift SDN for the network, or OVN. Well, OVN is kind of the thing.
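Picking the CNI happens at install time; as a fragment of install-config.yaml (other required fields omitted, CIDRs are the usual defaults):

```yaml
# install-config.yaml fragment: choose OVN-Kubernetes as the cluster CNI.
networking:
  networkType: OVNKubernetes   # the alternative is OpenShiftSDN
  clusterNetwork:
    - cidr: 10.128.0.0/14
      hostPrefix: 23
  serviceNetwork:
    - 172.30.0.0/16
```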
A
It's the one gaining more adoption for managing the network, the CNI in Kubernetes, and SNO is not going to be an exception. And lastly, this thing, here we go: ARM. This architecture, you know, how many of you have an M1 laptop right now, the Apple one? Yeah, it's pretty impressive: my fans almost never spin up, or I think never so far, now that summer is coming. And this is just one example.
A
That's laptops, but ARM is really present in many, many data centers, including Amazon's data centers with AWS. So OpenShift, again, is no exception: we've been supporting it already. And what's coming is pretty cool as well: you're going to have a cluster that will allow you to spread, to distribute, your workloads within the same cluster between ARM and x86.
A
You will actually tell the workload, when you define it, which architecture it is, and then it will be placed on the right servers, as this little graphic tries to show you. And with this, I'll pass it back to Daniel to finish with standardization.
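Telling a workload its architecture uses the standard kubernetes.io/arch node label; a minimal sketch (names and image are illustrative):

```yaml
# Pin a Deployment's pods to arm64 nodes in a mixed-architecture cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: arm-workload           # illustrative
spec:
  replicas: 2
  selector:
    matchLabels:
      app: arm-workload
  template:
    metadata:
      labels:
        app: arm-workload
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64
      containers:
        - name: app
          image: quay.io/example/app:latest   # must be built for arm64 (or multi-arch)
```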
B
So let's finish with standardization. I'm going to be respectful of time, and also of the break we have scheduled, so I'll move a little bit quicker on this one. The first thing we are going to standardize is the way you install OpenShift behind the firewall, without connectivity to redhat.com or registry.redhat.io.
B
In order to install OpenShift without an internet connection, you need to mirror all the images into your own registry, and you need to have a registry in the first place as well. Depending on what type of images you were mirroring, the tooling was different and the steps were different, and we didn't really give you any recommendation or guidance on how to do this over time to keep updates coming in. So we are going to standardize this process now, and we have a new utility.
B
It's a utility that works off a single declarative configuration file. We call it oc-mirror; it's part of the oc client, but it's actually a plugin for the oc client, in the very same way kubectl has plugins (oc has plugins because it's basically the kubectl client). oc-mirror is a binary tool that allows you to create a mirror for many managed clusters that are running different versions, and to keep this mirror up to date.
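The declarative input is an ImageSetConfiguration; a sketch assuming the tool's documented schema (channel, catalog, and package names are illustrative):

```yaml
# What oc-mirror should keep mirrored: a release channel, one operator
# from a catalog, and an extra image.
apiVersion: mirror.openshift.io/v1alpha2
kind: ImageSetConfiguration
storageConfig:
  local:
    path: ./mirror-metadata    # where oc-mirror tracks what it already mirrored
mirror:
  platform:
    channels:
      - name: stable-4.10      # illustrative release channel
  operators:
    - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
      packages:
        - name: advanced-cluster-management   # illustrative operator
  additionalImages:
    - name: registry.redhat.io/ubi8/ubi:latest
```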
B
Custom images, Helm charts, all the stuff you need behind the firewall: it will download all this data from the various places, and it will either generate what's called an image set, which is something you can throw into a tarball and move behind an air gap with a USB stick, or, if you have direct connectivity from the oc-mirror host to the registry, stream it into your own registry as well. From there you can run all the clusters. And this tool has a lot of intelligence built in.
B
It will understand when new OpenShift releases have been published. It will understand how the update graphs for the operators work and whether new versions are available, and, if you ask it to, it will automatically mirror those as well; you only need to execute the tool again. So really, the way to automate this and keep as up to date as if you were connected is to just run oc-mirror in something as simple as a cron job on a system every night, and then it will continuously mirror all your content into a registry or into an image set.
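That nightly run could be as simple as a crontab entry; a sketch where the paths, schedule, user, and registry are all illustrative:

```shell
# /etc/cron.d/oc-mirror: sync the disconnected mirror every night at 02:00.
0 2 * * * mirror-user /usr/local/bin/oc mirror \
  --config /etc/oc-mirror/imageset-config.yaml \
  docker://registry.example.internal:5000/mirror
```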
B
So you can keep your disconnected clusters as up to date as if you were connected; it's a very seamless experience now, and we're going to move it to GA later this year. At the compute level, where you execute your workloads, we're also driving standardization. We are going to standardize on cert-manager as the de facto way to provision TLS certs, rotate them, provision them in the cluster, and attach them to workloads. This will be GA later this year; it's already in tech preview.
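Attaching a cert to a workload with cert-manager looks like this; a minimal sketch where the issuer, hostname, and namespace are illustrative:

```yaml
# Request a TLS certificate from a ClusterIssuer and store it in a
# Secret that workloads can mount or that a Route/Ingress can reference.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: app-tls
  namespace: my-app            # illustrative
spec:
  secretName: app-tls          # Secret where the key pair lands
  dnsNames:
    - app.example.internal     # illustrative hostname
  issuerRef:
    kind: ClusterIssuer
    name: internal-ca          # illustrative issuer
```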
B
We are also going to pick up pod security admission. This is a feature that's in Kube 1.24, but it's not enabled by default. We are going to enable it by default in OpenShift, and we'll work on making it compatible with our security context constraints, so that the restricted policy is enforced automatically.
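Pod security admission is driven by standard namespace labels; a minimal sketch (the namespace name is illustrative):

```yaml
# Opt a namespace into the "restricted" Pod Security Standard:
# enforce it, and also audit and warn at the same level.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                 # illustrative
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```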
B
We are also going to improve logging, so you will be able to see login attempts and login processes in the audit logs. And we are also going to fine-tune the auditing and logging level of the API server itself, to reduce a little bit of the noise that usually comes with the API server being very busy, but also to cover scenarios we aren't covering today. For instance, when you have webhooks in your cluster and some of these webhooks are broken, this can actually destabilize your entire cluster.
B
So the API server will recognize these situations, warn you about them, and tell you exactly where the breakage is. One thing that I'm super excited about is OpenShift CoreOS layering, and this is huge, because we previously took the stance that the worker node image is immutable and always the same: it comes from us and you are not supposed to touch it. Anything you need in addition to that has to be executed in a container, as a workload on top of OpenShift.
B
So any kind of agent or additional software that you need running there for regulatory and compliance requirements has to come from you, as a part on top of OpenShift. And you all told us that this is not working, that it's too complicated, and that you need the ability to customize the worker node. With CoreOS layering, we're giving you that ability. Karina will talk a little bit more in detail about this later today.
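The idea is that the node image becomes a container base image you can layer onto with an ordinary Containerfile; a sketch only, where the FROM reference is a placeholder (the real base image comes from your release payload) and the package is illustrative:

```dockerfile
# Layer extra RPM content onto the CoreOS node image.
# The FROM line is a placeholder, not a real published image.
FROM quay.io/example/rhel-coreos-base:4.11
RUN rpm-ostree install usbguard && \
    rpm-ostree cleanup -m && \
    ostree container commit
```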
B
This is already coming as a template from ACM, but, as you know, we are heavily invested in GitOps with the Argo CD project. We are working to enable additional tenancy models: going from the very central model that you see on the left-hand side here, where there is a single Argo CD instance pushing into all managed clusters, to an Argo CD instance per cluster that pulls workload and application definitions from a central Git repository and pushes them into different namespaces, or even more extreme models.
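In any of these models, the unit of work is an Argo CD Application; a minimal sketch (the repo URL, path, and namespaces are illustrative):

```yaml
# Sync manifests from a Git repo path into a target namespace, with
# automated pruning and self-healing.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: shop-frontend          # illustrative
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://git.example.internal/platform/apps.git   # illustrative
    targetRevision: main
    path: shop/frontend
  destination:
    server: https://kubernetes.default.svc   # the cluster Argo CD runs in
    namespace: shop
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```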
B
GitOps and Argo CD will also be enhanced to support single sign-on with Keycloak and Red Hat SSO, they'll get refinements in Helm processing, and they'll also integrate with HashiCorp Vault later this year. And before we wrap up, I want to give you a little bit of an outlook on a super interesting project we are working on that takes a completely different approach to multi-clustering. So, if you think about it, the way Kubernetes is built...
B
This is actually very useful even outside of containers, and we already see this, because we have operators in the cluster that don't do anything on the cluster: what they do is communicate with an external service, like Microsoft Azure or AWS, provision resources there, and just report status back. But they don't do anything with the cluster.
B
It can run on your laptop, you know, a single binary, and we call that kcp, the Kubernetes control plane, or the smart control plane. And that control plane isn't attached to a single cluster running on your laptop; it's actually attached to multiple clusters that you bring into the fold, by selecting your own clusters or getting clusters from ARO, ROSA, or OSD. And the beauty of this is that it's something you can give directly to your tenants: each tenant gets their own little control plane, and this control plane is aware of multi-clustering.
B
It can be extended, like we install operators today, with controllers that know how to spread a deployment evenly across three or five clusters. There are controllers that understand that ConfigMaps and Secrets have to be replicated in all those physical clusters underneath. And from the user's perspective, this looks and feels like a single cluster; only, the components that you see as pods, deployments, and services are actually replicated and distributed in the background, completely independently of any changes in your application, to make multi-cluster a reality.
B
So all your Helm charts, all your Kustomize templates, all your GitOps processes will likely work completely unchanged; but instead of throwing them at a cluster, you throw them at kcp, and kcp will manage multiple physical clusters in the background, transparently. And we are planning to roll out this service later this year, where every tenant gets their own kcp. Remember, it's extremely small.
B
You know, there's not a lot of overhead in executing it, so we're giving one per tenant, and they can connect their existing clusters, or request clusters from ROSA, OSD, and ARO to actually provide compute capacity in the background. And these clusters could sit anywhere in the world, could be geographically distributed, and kcp is the common front end for all of that. So this is an exciting project. It's not part of OpenShift yet, or the CNCF; it's still very early. Follow it on GitHub to learn more about it, and look at the demo.
B
It's really nice, and keep tabs on it, because this is an interesting space to watch. With that, we are at the end of our outlook and insight into how we, as product managers, are thinking about OpenShift's future: what we plan with multi-clustering, deployment flexibility, and standardization, and even, you know, very exotic things like kcp.
B
We hope we gave you an insight into what's coming, and we invite you to join us in the afternoon for the Ask Me Anything session, where we'll be able to answer your questions and give you a little bit of additional insight into topics we haven't covered today. It's a large platform; you'll see, there's so much going on.