From YouTube: OpenShift Release Update and Road Map - Marcos Entenza, Rhys Oxenham and Karena Angell (Red Hat)
Description
This OpenShift Commons Gathering was held on July 6th, 2022, live in London, England.
https://commons.openshift.org
A
My name is Marcos Entenza, or Mak. I work as a product manager for OpenShift, mainly working with the different cloud providers, basically trying to enable OpenShift to run everywhere. With me, I have the pleasure today of presenting with Rhys and with Karena. Later I will let Rhys introduce himself.
B
I'm the director of the customer field engagement team here at Red Hat. We work very closely with some of Red Hat's largest customers and partners worldwide, helping to drive success, but also trying to be a conduit back into the product teams. We want to learn: what's missing, what are the gaps, how can we prioritize the development efforts? We feed that directly into our product teams. It's a pleasure to be here today. Thank you.
A
Thanks, Rhys. So we are going to split this deck into a couple of different sections, and we are going to talk about the main themes and the directions that we are taking with the product, with OpenShift. I'm going to start with a couple of familiar slides that Karena has presented already.

All of this is running on Red Hat OpenShift, and it runs on top of hardened and secure enterprise software like Red Hat Enterprise Linux, as I previously mentioned. There is one factor here which is very important, which is consistency, and I would say it's key here. We know that consistency is very important for customers, because it gives them the ability to deploy on the platform they choose.

For a very long time, Red Hat has offered OpenShift as what we now refer to as a self-managed offering. Basically, you as an organization could install the product and manage the product yourself, and we have different offerings for that, starting with OpenShift Platform Plus, OpenShift Container Platform, OpenShift Kubernetes Engine and OpenShift Online.

On the managed side, we started with OpenShift Dedicated, or OSD, where Red Hat manages the solution end to end. Then we started working with some cloud providers, like IBM Cloud, Microsoft Azure or AWS, on our co-engineered and co-managed platforms, where the customer is given the tools to access the service. They can easily go, for example, to an integrated console on their cloud of choice, click a button and get the service ready for them, all integrated from a billing and support perspective.

Do you need a registry where you can store your container images and artifacts, so they can be used from every single cluster? You also need a different approach to security when you are managing multiple clusters: an approach where you can define and apply policies across your fleet.
A
I'm sorry, we are having some issues with the slides. When you've got your services spread out and replicated across your clusters, you need to bring your traffic management and routing into that layer as well. This happens with global ingress and egress load balancing and, in some cases, with service mesh across multiple clusters as well.

You are going to need secure ways for those clusters to speak to each other, for example using encrypted tunnels, where we connect different clusters in different locations and encrypt the communication between them. Then, within those clusters, we go all the way down to the node level, where you are interfacing with the hardware to do things like hardware offloading, running low-latency workloads like telco use cases, or getting the most from your GPUs to do things like machine learning, and so on.

Whatever it is you are running on your nodes, you will need to underpin it with storage. Almost every single workload that you are going to run on your different clusters is going to consume storage. Once you get there, you will need to be ready to do things like backups, and you need to be prepared for disaster recovery,

and make sure that all your storage needs are met across all your clusters. This is basically the OpenShift ecosystem that we are talking about for these customers that are running multiple clusters. When you have more than one cluster, as we mentioned, there are things that you need to think about and reflect on once you've got your multi-cluster fleet. Sorry, I think we're having issues with the slides again.
A
Okay, sorry. So, basically, this is the ecosystem that OpenShift represents, and it is basically going to help you go from your first cluster to your hundredth cluster. Let's go to the next slide.

Okay, so let's see what we have in the multi-cluster space upstream. There are a couple of projects that are interesting in this space. The first one is Open Cluster Management. This is a Cloud Native Computing Foundation sandbox project, which is basically focused on simplifying your cluster and fleet management.

There is a framework for doing governance and compliance in your fleet, and when you are doing deployments, those applications are dynamically placed as well. We have a single place to see all your apps and the state of your apps running in your fleet, and it also integrates with other open source projects like Argo CD, Open Policy Agent and other great stuff. When Open Cluster Management got accepted into the Cloud Native Computing Foundation, we kept an internal project around it.

That project is called Stolostron, and this is the extra stuff that we bundle: it includes things like our UI and search capabilities, and it also supports extra pieces like Submariner and load balancing. So where are we going next with all these projects? On the networking side, we've got the Submariner add-on I just mentioned. Basically, Submariner is going to enable direct network communication and traffic between pods that are running on different clusters.
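As a rough sketch of the cross-cluster service discovery this enables (assuming Submariner is deployed on the participating clusters; the service and namespace names here are hypothetical), exporting an existing Service makes it resolvable from the rest of the fleet:

```yaml
# Export the "payments" Service from one cluster to the whole cluster set.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: payments        # must match the name of the Service being exported
  namespace: shop
```

Pods in any connected cluster can then reach it at `payments.shop.svc.clusterset.local`, with the traffic carried over Submariner's encrypted inter-cluster tunnels.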
A
ACM has a lot under way at the moment, and we are adding things like support for Arm architectures, which is in tech preview at the moment, and VolSync integration, which is in tech preview as well. This one is particularly cool, because it gives you persistent volume replicated storage across your clusters, and it will do that asynchronously.
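A minimal sketch of what that asynchronous replication looks like on the source side, assuming VolSync is installed; the PVC and namespace names are placeholders, and the destination address and SSH secret that a real setup needs are omitted for brevity:

```yaml
# Replicate the "app-data" PVC asynchronously on a schedule.
apiVersion: volsync.backube/v1alpha1
kind: ReplicationSource
metadata:
  name: app-data-source
  namespace: myapp
spec:
  sourcePVC: app-data          # the PersistentVolumeClaim to replicate
  trigger:
    schedule: "*/10 * * * *"   # snapshot and ship the data every 10 minutes
  rsync:
    copyMethod: Snapshot       # snapshot the PVC rather than copying it live
    # (destination address and SSH key secret would be configured here)
```

A matching ReplicationDestination on the target cluster receives the data; the ACM integration mentioned above is what wires the two ends together across the fleet.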
A
So now that you've got the high-level view, let's dig a bit into the components, starting with one we are going to mention a lot today, which is storage. There is a lot going on in multi-cluster storage, as I mentioned, not only upstream but also with other partners, as you are going to hear later today in another presentation.

This is a very important topic for Kubernetes users. There have been a lot of changes in this space in the last two years, and Red Hat and the community have been working really hard across all the levels of this stack. Starting from the bottom, from the lowest level of all of these, there have been a lot of improvements in the Container Storage Interface.

We have seen CSI drivers moving out of tree from the Kubernetes project, and there has been a lot of integration with many different cloud providers.

This is part of the OpenShift Platform Plus subscription, and it's going to enable other form factors of OpenShift, like single-node OpenShift, which is becoming very popular in the telco world, as an example. And then, if you go up the stack, this is where we have OpenShift Data Foundation Advanced, and this is very focused on the orchestration side, so ACM and VolSync.

We are also adding a new RBAC model for Quay, which is going to be much more aligned with what we do in this space for OpenShift. So far, almost every user using Quay has the rights to create content in the registry, and in order to control access you have to manually pull the secrets and share them with the users.

So when you've got a multi-cluster fleet, what you need, or what you are aiming for, is an easy and repeatable way of applying these policies to multiple clusters, and a way to provide and track exceptions to policy violations as well.
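As a rough illustration of that repeatable, fleet-wide policy model, here is a sketch in ACM's policy framework; the policy name and the namespace it checks for are hypothetical:

```yaml
# An ACM-style policy: every targeted cluster must have a "monitoring" namespace.
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: require-monitoring-namespace
  namespace: policies
spec:
  disabled: false
  remediationAction: inform            # report violations; "enforce" would create it
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: require-monitoring-namespace
        spec:
          severity: medium
          object-templates:
            - complianceType: musthave # the object below must exist on the cluster
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  name: monitoring
```

A PlacementBinding then selects which managed clusters the policy applies to, which is what makes "apply across the fleet" a single, repeatable operation rather than a per-cluster task.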
B
Perfect, okay. So I'm going to go through a number of different topics, primarily related to some of the core infrastructure components of OpenShift and some of the direction that we're going in there, starting off with installation, upgrades and provider integration. You heard it from Mak and Karena: the big thing for us is giving choice to our customers and offering a wide variety of platforms to install OpenShift on.

On top of that, we have our self-managed offerings, you know, build it yourself, run it yourself, and we of course have the managed services offerings, where we will allow you to essentially consume your infrastructure as a utility, as a service. We'll help run and manage that infrastructure for you. But for customers that want to run it themselves, from a self-managed perspective, we are continually adding new platforms to our arsenal, so you can now deploy on top of things like Alibaba, IBM Cloud and Nutanix. We're also enabling organizations to make use of newer instance types.

A good example here would be something like bare-metal instance types in various platforms: being able to leverage very specific instances because they provide additional capabilities, perhaps for hardware acceleration, or because they have very specific implementations inside of them. So we really want to give lots of choice around that. When it comes to installation, we're of course trying to make the installation experience a lot better.
B
Another thing we're looking at doing is something called hosted control planes, also known as HyperShift. Now, the basic premise behind HyperShift is the ability to divorce the control plane from the data plane in a given cluster. Rather than dedicating a whole number of nodes to your control plane,

the idea is that we bring the control plane up as a workload on a given cluster, so you can rapidly spin up new worker pools across a wide variety of platforms and not have to, quote unquote, waste capacity, or waste an entire node, just for a master. This is something we're looking at doing later down the line. Composable installation: for those of you that have been working on OpenShift for a long time, you will realize that when you deploy OpenShift, you basically get a full-fat version.

We want to give you the ability to turn certain features on and off right out of the gate, rather than having everything deployed by default, so we're really trying to streamline, or thin down, the amount of resources that are being consumed. From an upgrades perspective, we recognize that we need to provide a little bit more insight into what's going on during an upgrade and why things might fail,

how to troubleshoot them, and improve the documentation around that, but also try and do a little bit more gating around how we do upgrades. If you have, for whatever reason, chosen a particular configuration, or you're leveraging a particular API, and you attempt to do an upgrade, we want to be able to block that upgrade, because we know it might prevent you from having a successful upgrade. We can do that because we're constantly tracking upgrade successes through our telemetry service. Many of our customers are subscribed to the telemetry service, and we capture data around, okay:
B
this configuration with these versions either passed an upgrade or failed, and why did it fail? So we can proactively block some of those upgrade paths for our customers, so they end up in a much more stable type of environment. OpenShift on bare metal: this is an area that's very close to my heart, and I'm really glad to see, I won't point him out, but we have someone that leads a lot of the engineering work for OpenShift on bare metal in this room.

So if you do have any questions on this, I'd be happy to point you in his direction. For a long time, many organizations have deployed OpenShift on top of, say, virtualization platforms, or they've deployed it on top of a public cloud platform. All of those provide you with APIs, right? Our installer can communicate directly with those; it can provision infrastructure; we have a lot of automation.

The Metal3 project is something that we worked on, and it basically gives us a standardized way of provisioning OpenShift on top of bare-metal infrastructure. We were able to reuse some of the components and expertise that we had around OpenStack bare-metal infrastructure and pull that into OpenShift. So now we have a programmable way of adding new nodes into an OpenShift cluster, deploying on top of them, and having a little bit more control around things like power management. So what are we doing in this space?
B
Well, adding more hardware support. We are leveraging standards such as Redfish, which maximizes our ability to provision a wide variety of platforms, and we're continually adding new OEMs and new support through Redfish. Also, from a bare-metal installation perspective, many of our customers are now using a component which we call the OpenShift Assisted Installer, which recently went generally available.

Now, this is a really, really simple way of deploying OpenShift on bare metal very, very quickly. You simply go to console.redhat.com, click the Assisted Installer, and it asks you a number of questions and generates a discovery ISO. You attach that ISO to your machines, and from there on it's pretty much a zero-touch provisioning model: those machines come up, and away it goes. So really, really quick, and that just went GA. I also highlighted this on the previous slide: the agent-based installer.

Now, we recognize that we have customers that are working more in a sort of disconnected fashion, or that want to rapidly provision bare-metal clusters on premise, and we're bringing out a next-generation installer known as the agent-based installer. This is something that's going to come later in the year,

we hope, and it essentially provides you with the tools to do that kind of ISO-based deployment, but without having to go out to a SaaS-based service and have a connected configuration: rapidly generate that ISO, attach it to your nodes, and the cluster will come up.
B
OpenShift Virtualization. So again, Karena mentioned this: the KubeVirt project. This is a really exciting project that's bringing virtualization capabilities to OpenShift. You might be thinking, well, hang on a minute, isn't OpenShift, or Kubernetes, a container-based automation platform? Well, yes, absolutely; however, many organizations have a lot of reliance on virtual machines.

OpenShift Virtualization allows organizations to bring those virtual machines as they are and treat them just like any other workload inside of OpenShift. They can still have the same policies, for example network policies, assigned to them. They can still communicate with the same persistent storage and the same networking infrastructure. So it gives you consistency, that single platform to manage regardless of the workload, and, using similar tools, it gives you the ability to either leave them as virtual machines, because perhaps you have no choice, or modernize them at your leisure.
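As a sketch of what "treat a VM like any other workload" means in practice with KubeVirt (the VM name, labels and disk image here are only examples), a virtual machine is declared as an ordinary Kubernetes resource:

```yaml
# A small VM defined as a Kubernetes object, so the usual tooling
# (labels, network policies, oc/kubectl) applies to it like any pod.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
  labels:
    app: demo            # ordinary labels; selectors and policies work as usual
spec:
  running: true          # reconcile the VM to a running state
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 2Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest   # example boot image
```

Because the VM is just another API object, the same GitOps pipelines, quotas and network policies used for container workloads apply to it unchanged.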
B
So it removes that barrier to entry for Kubernetes for customers that are really reliant on virtual machines today. There's a lot on this slide.

What we're really trying to do is make OpenShift Virtualization a much more comprehensive platform, really trying to make sure that it can maximize its reach for a wide variety of configurations and workload types, including workloads that have relied on traditional enterprise virtualization features. A good example here would be trying to support RHEL HA. If you have virtual machines in your existing legacy infrastructure that rely on high availability, you know, we lose a hypervisor, we lose a virtual machine, it needs to be kicked so that we can restore that workload.

We want to provide those capabilities in OpenShift as well, because it's no good saying we have another virtualization platform for you to move your workloads over to if it simply doesn't support the features that you're expecting. So we're doing a lot of UX improvements, new wizards, new ways of configuring things. We're also continuing to build out capabilities in OpenShift Virtualization: we're doing a lot of work around integrating with OVN and providing network filtering and micro-segmentation of networking, and a lot of work around live migration and isolation.

The HyperShift features that I was talking about before, where you can have multiple tenants each having their own OpenShift clusters: well, if you've got bare-metal infrastructure and install OpenShift Virtualization, you can use HyperShift to spin up multiple OpenShift clusters on top, so you have true multi-tenant isolation for multiple different customers as well. And, of course, we're trying to make sure that we support a wide variety of guest operating systems: RHEL 9 support, Windows 11 support, and things like that. Edge and 5G.
B
Of course, these are huge paradigm shifts for the industry and for Red Hat, and we're constantly trying to make sure that our OpenShift tooling, and pretty much all of our tooling, is ready to enable customers to go all in on some of those configurations. So there are a few different key topics here. First of all, variability: we know that there's no one-size-fits-all edge configuration, right?

You can take a standard OpenShift installation, and there's no guarantee that it's going to fit the hardware profile you have, or you may have a very minimal amount of hardware at that particular site. So we offer a number of different configurations for OpenShift. We have everything from, say, a compact cluster, so three nodes, everything hyper-converged (you can converge storage in there as well), so almost the minimum footprint, all the way down to what we call a single node, so everything inside of just one node.

You can have a centralized control plane with remote workers, so it's just the worker nodes that are running close to where the customer essentially is. And we're trying to do a lot of different implementations that help organizations not only deploy to edge sites but also manage them. Think about updates and upgrades: how can we do things like pre-caching images at the edge site and managing them at scale? So we're doing a lot of work around deployment.

One of the key things that we're investing in here is zero-touch provisioning: the idea that I can simply generate an ISO, attach it to the remote machine or machines, and they just come up. They bootstrap in place, with no reliance on the outside world. We're trying to make sure that happens. When it comes to scale, customers are distributing their infrastructure across hundreds or thousands of sites.
B
We need to make sure that our management tools scale as well. We're doing a lot of work around ACM and ACS, the tools that Mak explained well, so that they can really scale. We're doing tests of thousands of single-node OpenShift clusters within an ACM hub, and we're also working on a feature known as hub-of-hubs, so you can really have a huge amount of scale for customers that are pushing the boundaries of what the infrastructure could look like.

Also, from a 5G radio access network perspective, we're really working on features, and enabling features inside of OpenShift, that provide the capabilities some of these workloads really require. Some good examples here would be things like Precision Time Protocol, SyncE and various other things: building out those capabilities inside of OpenShift so that customers can rapidly onboard those types of applications. Single-node OpenShift:

I briefly highlighted this, the smallest possible footprint of OpenShift, just a single machine. This has been supported for a little while; we introduced it at the end of 2021 for bare metal. It's also now possible to run single-node OpenShift on a few other platforms, Red Hat Virtualization and VMware, for example. We also recognize that many of our customers are trying to deploy OpenShift onto the smallest possible servers, not just in footprint but also in specifications. So single-node OpenShift

you can run right the way down to 16 gigabytes of memory on a single machine. My colleague Henry is going to talk about MicroShift later on, so we're really going down much, much further than that, but I'll leave that to him to explain. And then, for scenarios where you start with a single node but quickly realize, well, actually, I need to add additional capabilities into that site, we're enabling you to scale that infrastructure by adding additional worker nodes to that single-node OpenShift infrastructure as well.
B
So we're doing that as we go along, and, of course, defaulting to OVN-Kubernetes across the board. OpenShift on Arm: I'm sure that you've all heard of Arm and where it's going. It's possible to provision Arm instances in the cloud now, and the vast majority of, you know, new Apple MacBooks, for example, have Arm chips.

This is the new architecture that many people are talking about, and we recognize that even in the data center, Arm-based instances are starting to really take off for a wide variety of reasons. We have brought out OpenShift for Arm today: you can use OpenShift on Arm on AWS instances and on supported or certified bare-metal infrastructure. What we're trying to do, and this should be coming later down the line, is

give you full flexibility, where today it's homogeneous. And don't worry, I won't go through all of the features that we have, in the interest of time, but it is a really comprehensive implementation on Arm: you're not going to be limited because you've chosen the Arm architecture where OpenShift is concerned. OpenShift disconnected: again, we recognize that many of our customers are working within environments that are expected to operate disconnected from the internet.
B
Now, when OpenShift 4 first came out, it had a lot of reliance on Red Hat registries: some of the operators that we implemented needed to be pulled from, and had reliance on, connected registries. We've done a lot of work to make that experience a lot better for customers that want to deploy fully isolated, fully disconnected clusters, and over the years we've provided tools that enable you to provision disconnected registries.

So you stand up a node, or a pool of nodes, that will host your container images, and the machines can essentially pull from those repositories. However, we're happy to admit some of that tooling needed a little bit of work. The documentation was a little difficult to understand, and some of the tooling wasn't properly set up to say: well, I don't want all of OpenShift and all of the versions,

I just want this version of OpenShift and these operators. So some of the tooling we now have, known as the oc-mirror tool, will populate that disconnected registry and give you a lot more granular control over what you're pulling and how. Okay, in my registry I have 4.9, but I want to move to 4.10: just pull 4.10, just pull these operators. So you have much more granular control over what that looks like, and this is currently in tech preview in 4.10.
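A sketch of the granular selection oc-mirror makes possible; the registry hostname is a placeholder, and the exact `apiVersion` varies between oc-mirror releases:

```yaml
# imageset-config.yaml: mirror only one release version and one operator.
apiVersion: mirror.openshift.io/v1alpha2
kind: ImageSetConfiguration
storageConfig:
  registry:
    imageURL: registry.example.com/mirror/oc-mirror-metadata  # where mirror state is kept
mirror:
  platform:
    channels:
      - name: stable-4.10        # just this release channel...
        minVersion: 4.10.0
        maxVersion: 4.10.0       # ...pinned to a single version
  operators:
    - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
      packages:
        - name: advanced-cluster-management   # just this operator
```

Running `oc mirror --config=imageset-config.yaml docker://registry.example.com/mirror` then pushes exactly that subset into the disconnected registry, and changing `maxVersion` later pulls only the delta for an upgrade.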
B
So, from a standardization perspective, some of the additional things we're trying to do (I realize we've only got a couple of minutes left): we're really working on giving customers a lot more flexibility around integrations with third-party providers. Look at things like key management, or DNS and load balancers: we're really trying to make sure that you can integrate right out of the box with a lot more of the platforms that are out there. We're looking to bring cert-manager to general availability,

so you have that API for provisioning and managing certificates inside of the environment. We're looking at bringing in pod security admission by default. You can think of this as a defined set of policies that dictate the behavior of what a pod can do, with an API to manage some of those, and we're bringing that in by default. We're also working on a lot of alerting.
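Upstream pod security admission, which the speaker is referring to, is driven by namespace labels; a minimal sketch (the namespace name is hypothetical):

```yaml
# Opt a namespace into the "restricted" pod security profile.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  labels:
    pod-security.kubernetes.io/enforce: restricted   # reject non-conforming pods
    pod-security.kubernetes.io/warn: restricted      # warn clients on apply
    pod-security.kubernetes.io/audit: restricted     # record violations in audit logs
```

The three modes can be set to different profiles (`privileged`, `baseline`, `restricted`), which allows warning on a stricter profile before enforcing it.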
B
A good example there: alerting when etcd container memory consumption exceeds a threshold. We're really working on making sure that we can capture some of these issues before they cause more trouble and cause you instability inside of the cluster, and we're doing that pretty much across the board, with lots more audit logging, especially around logins, attempted logins, login failures and things like that, and a lot of alerting around that. I mentioned mixed-architecture support within the cluster, and also CoreOS layering, which I think takes me perfectly onto the next slide.
B
So I actually think this is my last slide, so I'll go through it really quickly. CoreOS layering: what exactly is this? Well, when we introduced OpenShift 4, one of the fundamental changes was the introduction of CoreOS. Now, CoreOS, as I'm sure you're aware, is an immutable operating system that is designed to simply run containers. It was a really good idea to divorce the underlying infrastructure from the containers and the platform running on top.

However, the way we were able to enable that was to essentially make all of the maintenance, and any customization and modification of your infrastructure and of the platform, driven through OpenShift. If you wanted to make a customization change, you'd typically do it through MachineConfig. The problem with that is it typically prevented, or made it a little bit more difficult for, you as customers to add in additional components. You may want to install third-party RPMs, or apply a hotfix, or something like that.

So we listened to that feedback, and we're happy to announce that we're working on something called CoreOS layering. The idea here is that you still have this concept of a golden build, a golden CoreOS image, but we're enabling you to define what that image looks like from a Containerfile, or Dockerfile-type, perspective. We'll ship a base image for CoreOS, and you can define all of the things that you want to add to that CoreOS image: it could be install these packages,

make these configuration file changes. You then run through the same, essentially, podman build, and it'll spit out a new image that you can push to your nodes. And so this gives you a much more, I guess, straightforward, standardized way of defining exactly what you want on your CoreOS images, without having to go through some of the more troublesome or more challenging ways of doing it like before.
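A minimal sketch of the Containerfile-driven workflow described above; the base image reference, the package and the config file are all placeholders:

```dockerfile
# Derive a custom CoreOS image from the shipped base image.
# "registry.example.com/rhcos-base:latest" stands in for the real base image
# matching your OpenShift release.
FROM registry.example.com/rhcos-base:latest

# Layer an extra RPM (rpm-ostree is the package tool on CoreOS).
RUN rpm-ostree install usbguard && \
    ostree container commit

# Drop in a configuration change.
COPY 99-custom.conf /etc/sysctl.d/99-custom.conf
```

A normal `podman build` of this file, followed by a push to a registry the cluster can reach, produces the customized image that the nodes then boot from.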
C
So GitOps is about declaratively defining your application, or your cluster config, right, as we're moving to automation and having that automation automatically applied to your environments. Usually DevSecOps is worrying about this too, right? We want to make sure that you're deploying the same thing across your fleet, same application, same configs, and getting you out to market even faster.
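The declarative model above can be sketched as an Argo CD Application; the repository URL, paths and namespaces are placeholders:

```yaml
# Declare an application once; Argo CD keeps the cluster in sync with Git.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: shop
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://git.example.com/org/shop-config.git
    targetRevision: main
    path: overlays/prod        # the declarative config for this environment
  destination:
    server: https://kubernetes.default.svc
    namespace: shop
  syncPolicy:
    automated:
      prune: true              # delete resources removed from Git
      selfHeal: true           # revert manual drift on the cluster
```

Applying the same Application (with a different `destination`) on each cluster of a fleet is what gives "same application, same configs" everywhere, with Git as the single source of truth.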
C
Let's see, we have a bunch of upstream features landing, such as more notifications and more plug-ins, so get involved in the communities if you want to see more of those as well. And then we're starting to explore progressive delivery, like Argo Rollouts, so if you're interested, go look at those as well.
C
I do, it's on the slide. All right, take a picture. All right: kcp. It adds an abstraction layer between you and your clusters. You no longer need to specifically target a cluster for deployment; instead, you use predefined business rules to direct your apps to the clusters, and the appropriate resources, to run them.
C
So this way we can disconnect your app and the cluster. If we disconnect that relationship, it means administrative and maintenance activities aren't disruptive, and you can swap out clusters as you need; kcp will redirect things if it needs to. We're looking to launch a managed service to host the kcp control plane for you, so you can bring your own compute, or you can use Red Hat cloud services such as ROSA, which stands for Red Hat OpenShift Service on AWS, as well as ARO, Azure Red Hat OpenShift.