From YouTube: [What's Next] OpenShift Roadmap Update [Dec-2020]
Description
On December 10, 2020, the OpenShift PM team will broadcast the [What's Next] OpenShift Roadmap Update [Dec-2020] briefing to internal Red Hatters, as well as directly to customers and partners on OpenShift.tv.
Slides available here: https://speakerdeck.com/redhatopenshift/whats-next-openshift-roadmap-update-dec-2020
A: You are now on this journey with us, and let's see where we're going. Next slide. Like I said, this is a roadmap presentation, which means this is where we stand today, in quarter four. This is what the men and women of Red Hat are coming to work and working on. This is our best estimate of where we're going. Now, things will evolve, and in fact, in these roadmap presentations, things evolve and change a lot.

So it's good to check back in with us. We're going to be doing this once a quarter, so you get to see the change and the evolution of the features. You know the lawyers want me to mention that you should not buy a product based on its futures; those futures will change. Buy the product because it's awesome right now, because it's going to help you with a problem, and that is what we're going to dive into right now. Next slide.
A: When you look at OpenShift, it is a platform. It is a hybrid cloud platform. It's going to help you get the most out of your investments, whatever investment you may have chosen, and we're going to shine a light on what hopefully made you buy that investment. We're going to make it easier to use those things. We're going to make it easier to push to production, we're going to make it easier for you to capitalize on this technology stack, and we do it in the core Kubernetes layer.
B: Thanks, Mike. Next slide, please. In the following slides we're going to kick off the core platform roadmap with a look at the overarching themes which form the basis of the roadmap and how we guide it, based on what we've prioritized.

This is influenced by industry and market trends, technology and upstream directions and, most importantly, customer feedback and the insights given to us through conversations with customers, RFEs, customer cases, bugs and other methods like Voice of the Customer sessions.
B: OpenShift is the foundation of Red Hat's open hybrid cloud strategy, enabling users to have a consistent operational and application experience from the data center to the cloud to the edge, across different infrastructure types, providers, hardware architectures and accelerators: basically, OpenShift everywhere.

Finally, our customers scale not only in terms of size and number of clusters, but also in terms of maturity and sophistication of their deployments. We see observability, management and automation as the third pillar. You will see declarative, policy-driven management and automation of multiple self-healing clusters.
B: So let's kick it off with OpenShift install and update. We're going to be focused on a number of key areas there. For the hybrid cloud, we're enabling OpenShift to be deployed on even more platforms, including Alibaba, AWS Outposts, Azure Stack Hub, Equinix Metal, IBM public cloud and Microsoft Hyper-V. We're also extending our existing provider support to cover more regions, as well as additional cloud instances like AWS C2S, AWS and Azure China, and planning to make reliability and scalability improvements, such as adding support for vSphere multi-cluster deployments.
B: Finally, I would like to improve the OpenShift deployment experience with better documenting of cloud credential permissions for both day one and day two, support for customer-managed disk encryption keys in Azure and GCP, and moving the control plane to be machine-set managed for automated node recovery. Next slide.
B: That brings us to the core themes: compute, networking and storage. Let's start off with workloads. In the past year we have added a number of features to compute, networking and storage, such as NUMA alignment with the node topology manager, remote worker nodes, and Multus for high-performance networking, to name a few.

This was in response to the needs of new workloads and markets that include AI/ML, telco 5G and edge. We're doubling down on those markets, as well as the workloads that run on them, and we'll continue to add performance, scale and scheduler extensions to serve those markets, but we also plan to start addressing the HPC market.
B: One is stability and scale. As we look at these new markets, we continue to pay attention to the needs of our existing enterprise market and bring you advanced features that will help you better manage application certificates, introduce networking support for multiple clusters in an autonomous system with BGP, high-performance networking features and more.

We're increasing coverage for CI/CD testing of OpenShift, while adding telemetry and diagnosability to assist SREs in the field and decrease mean time to repair. And the last one here is partner and technology integrations.
B: We continue to support a broad ecosystem of partners through more certifications of providers for networking via CNI and storage via CSI, and to provide more capabilities with extensions to the core scheduler that could be node-, topology-, load- or specialty-workload aware, such as gang scheduling in HPC and grids. Next slide. So we're excited to introduce upcoming support for Windows containers, enabling customers to run Windows workloads on their clusters.
B: A prerequisite for this, and probably the most important note here, is that the cluster must be configured with hybrid OVN-Kubernetes networking. Next slide.
B: So, continuing with that thought, we're going to jump into the roadmap for Windows containers. The general availability of the Windows container operator will be on 12/14, so it's fairly soon, four days away. The Windows Machine Config Operator will be available from the in-cluster OperatorHub. Once installed, you'll be able to deploy Windows nodes right on your cluster, and the operator will be available for AWS and Azure at the time of GA, with support for other platforms such as vSphere and bare metal coming later.
B: Finally, logging, monitoring and storage solutions for Windows container workloads will also be available after GA. Shifting to compute: ARM has been in the news, with the recent acquisition by NVIDIA as well as the adoption of the ARM architecture by Apple in their new Macs. ARM also forms the basis of a number of technologies, including DPUs and SmartNICs.
B: Therefore, OpenShift for ARM, and enablement of the next generation of bare metal as a service with OpenShift, is an important focus area for us going forward. These changes will enable a new cloud-like way to do software-defined hardware, security and isolation, networking and storage in the data center.
B: We're also planning to provide more capabilities with extensions to the core scheduler, targeted again at running AI/ML as well as HPC workloads. As a first step, we're introducing scheduling profiles. These are defined as several scheduler plug-in configurations that represent the most common use cases and can be enabled by the kube-scheduler operator.
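As a sketch of what enabling a scheduling profile could look like (the profile name and API shape follow the OpenShift scheduler operator configuration being introduced around 4.7; treat this as illustrative, not a final API):

```yaml
# Illustrative: select a scheduling profile via the cluster Scheduler config.
# "HighNodeUtilization" is one of the proposed profile names mapping to a
# predefined set of scheduler plug-ins; check your release's documentation.
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  profile: HighNodeUtilization
```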
And to round it out on this slide, shifting to the control plane: several new enhancements are planned there. The first one is productizing cert-manager. Jetstack's cert-manager helps automate certificate management. It builds on top of Kubernetes, introducing certificate authorities and certificates as first-class resource types in the Kubernetes API. We're working on providing a Red Hat-supported operator for cert-manager that will be available to all workloads running in OpenShift, except the bootstrap components that need certificates before operators exist. So that includes out-of-the-box operators that support it as a day-two configuration, like OLM operators, middleware software from Red Hat, as well as applications deployed by the customer.
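For reference, this is the kind of first-class certificate resource cert-manager introduces; the names, namespace and issuer here are hypothetical placeholders:

```yaml
# Hypothetical example of a cert-manager Certificate resource.
# cert-manager watches this and keeps the named Secret populated with a
# signed key pair, renewing it before expiry.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: myapp-tls
  namespace: myapp
spec:
  secretName: myapp-tls        # Secret that will hold the key pair
  dnsNames:
    - myapp.apps.example.com   # placeholder hostname
  issuerRef:
    name: my-issuer            # hypothetical Issuer in the same namespace
    kind: Issuer
```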
B: The next one on the list here is custom route names and certificates for OpenShift cluster components, and letting users of identity providers that use a URI scheme in their sub claims be able to log into OpenShift. Other significant highlights include providing automatic certificate generation and rotation for direct pod-to-pod communication, similar to the service serving certificates operator.
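The service serving certificates mechanism mentioned here is annotation-driven; a minimal sketch (service name and port are placeholders):

```yaml
# The annotation asks OpenShift to generate a serving cert/key pair for this
# service's internal DNS name and store it in the named Secret.
apiVersion: v1
kind: Service
metadata:
  name: myapp
  annotations:
    service.beta.openshift.io/serving-cert-secret-name: myapp-serving-cert
spec:
  selector:
    app: myapp
  ports:
    - port: 8443
      targetPort: 8443
```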
B: Finally, customers will be able to easily integrate with external KMS solutions by using the existing Kubernetes KMS provider capability and existing KMS plugins, including those provided by major cloud providers.
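The upstream Kubernetes KMS provider capability referenced here is configured on the API server through an EncryptionConfiguration; a sketch with a hypothetical plugin socket (on OpenShift this is managed through the cluster's own encryption APIs rather than edited by hand):

```yaml
# Upstream KMS envelope-encryption configuration for the kube-apiserver.
# The endpoint points at a KMS plugin (e.g. a cloud provider's) over a Unix
# socket; "identity" is the fallback for reading not-yet-encrypted data.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          name: my-kms-plugin            # hypothetical plugin name
          endpoint: unix:///var/run/kms-plugin.sock
          cachesize: 1000
          timeout: 3s
      - identity: {}
```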
Next slide. So we're going to move to networking. Customers are asking for more advanced features than ever for our networking, and we've been delivering upon those asks in our next-generation networking platform, OVN.
B: Examples include east-west encryption with IPsec for customers with regulatory compliance requirements, full IPv6 support for telco, performance improvements via hardware offload to SmartNICs, BGP support with externally advertised Kubernetes services, and eBPF as a high-performance replacement for iptables to remove scalability limitations.
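East-west IPsec encryption in OVN-Kubernetes is surfaced as a cluster network setting; a sketch of the operator configuration shape (verify against your release, since this landed as an install-time option):

```yaml
# Illustrative: enabling IPsec for pod-to-pod (east-west) traffic in
# OVN-Kubernetes via the cluster Network operator configuration.
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      ipsecConfig: {}   # presence of the block turns IPsec on
```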
B: Network observability is being enhanced as customers grow to multiple clusters, by adding new metrics and telemetry, but also by presenting the information in a way that is more easily digestible by networking and security admins. Shifting to the network edge: OpenShift continues to align to upstream Kubernetes, and Red Hat's leadership contributes to the Ingress v2 / Service APIs effort, which defines our next-generation ingress solution for OpenShift and builds on the promise to close the gaps between Red Hat routes and Kubernetes Ingress v1. The egress router, a popular customer method for associating a source IP address, is completed in OVN as we close the gap between OVN and our current networking solution.
B: We've also added day-two configurable custom domains for customers that want application naming that aligns with their corporate DNS. In addition to performance and scalability enhancements in HAProxy 2.2, we delivered on several customer requests for configurable settings, such as a variable number of threads, custom error pages and more. OpenShift will also provide support for the IP failover image, a highly anticipated customer feature that adds a means of providing simple HA for cluster service external IPs.
B: Shifting to storage. In the nearer term we are going to be supporting snapshots with CSI drivers that do have support now, along with OCS and CNV, which already support snapshots. These will be what is known as crash-consistent snapshots: what you get if you remove power from a system with a journaled file system. Our work to transition to CSI drivers is bearing fruit, with some tech previews graduating to GA, including AWS EBS, GCE PD and Cinder. There are also some more tech previews as we continue to convert our core in-tree drivers. The other thing here is the CSI migration piece, which allows seamless transition from in-tree to CSI drivers.
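CSI snapshots are requested declaratively; a minimal sketch (PVC and snapshot class names are placeholders, and older clusters use the v1beta1 API group version):

```yaml
# Hypothetical crash-consistent snapshot request against a CSI driver.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mypvc-snap
spec:
  volumeSnapshotClassName: csi-snapclass   # placeholder snapshot class
  source:
    persistentVolumeClaimName: mypvc       # PVC to snapshot
```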
B: That means they will be removed once CSI migration has transitioned to GA for them. In the long term we're going to look at other operators, like ephemeral and object storage, and at some point we would like to remove FlexVolume, since CSI support will effectively replace it. Next slide, please.
Next we'll talk a little bit about Red Hat's observability story. OpenShift provides several different components that help users understand and troubleshoot problems quickly.
B: The key focus areas here are hybrid cloud observability, extending our reach and exposing more multi-cluster capabilities to support our hybrid cloud observability story, and cluster observability, enriching our in-cluster observability experience for better and easier troubleshooting at a higher resolution.
B: Next slide, please. In the near term, for both monitoring and logging, we'll continue to double down on the stability and quality of our solution, specifically doing a non-feature release in 4.7 as we work through the backlog and test infrastructure.
B: We're also going to continue to provide a top-notch, connected, native monitoring experience inside the OpenShift console. For logging, we'll be focusing on introducing some critical, high-interest new features, such as JSON support and more multi-tenancy capabilities around log forwarding; architectural changes to provide a more lightweight, easier-to-operate logging experience by introducing an alternative log storage solution; and, finally, better native exploration capabilities for logs inside OpenShift. For multi-cluster, we're going to be building upon ACM, adding additional customer value in ACM cluster health monitoring with Thanos and Grafana, and providing enhancements to metrics collection, enabling customers to allow their own custom metrics to be gathered and sent to the Thanos hub.
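The log forwarding mentioned above is driven by the ClusterLogForwarder API; a sketch with a hypothetical external Elasticsearch endpoint:

```yaml
# Illustrative: forward application logs to an external store.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    - name: external-es
      type: elasticsearch
      url: https://elasticsearch.example.com:9200   # placeholder endpoint
  pipelines:
    - name: forward-app-logs
      inputRefs:
        - application      # application, infrastructure or audit
      outputRefs:
        - external-es
```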
B: Customers can define custom Grafana dashboards from a Git source, ensuring they not only have a read-only, multi-cluster health and optimization dashboard out of the box, but consistency in their customized Grafana dashboards as well. And as we get to the last area of focus on this slide, the cluster infrastructure roadmap, we're continuing to move forward in our focus of bringing value and additional features and exposing them to our customers.
B: There's a lot planned in terms of new key features here. The first thing to mention is proxy support. Historically, the machine API did not follow the global proxy settings; it will from now on. We are bringing this change as part of a z-stream release, and from that point machines will obey the httpProxy and httpsProxy settings.
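The global proxy settings the machine API will start honoring live in the cluster-wide Proxy resource; a sketch with placeholder proxy endpoints:

```yaml
# Cluster-wide proxy configuration that components (and, per the roadmap,
# the machine API) consume. Endpoints are placeholders.
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://proxy.example.com:3128
  httpsProxy: http://proxy.example.com:3128
  noProxy: .cluster.local,.example.com
```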
B: Next is running out-of-tree cloud providers. Just like CSI on the storage side, upstream is moving to cloud providers not being part of core Kubernetes. This means eventually removing all existing in-tree cloud providers in favor of their out-of-tree equivalents, with minimal impact to users. The first one we will do is OpenStack, which should be followed by work on the other key cloud providers (AWS, GCP, Azure) as part of our midterm goals.
B: This is actually tied to a lot of dependencies on other teams; the out-of-tree cloud providers will require CSI drivers for the storage, so pretty much all of this work is tied together before it will be delivered. And then in the midterm we're also looking to be able to treat the control plane compute as machine sets, as I mentioned earlier, and also to work with availability zones.
B: The long-term goal is aligning with upstream and bringing a fully realized vision of a multi-architecture, multi-cloud cluster to OpenShift. Shifting to multi-arch, we're continuing to move forward aggressively on the multi-arch work, focusing on IBM systems, but we're also planning to add support for an additional architecture.
B
The
near
term
we're
bringing
bring
in
some
of
the
storage
options
that
didn't
make.
The
last
release
we're
also
responding
to
customers,
ask
for
supporting
ocp
and
on
kvm
on
z.
This
is
because
they
don't
want
to
run
them
sometimes
because
it
costs
sometimes
because
of
in-house
keys,
either
way
we're
going
to
be
looking
at
support
there.
Midterm, we'll also be looking at Multus and catching up on NVIDIA GPU support, though the latter has a dependency on NVIDIA.
B: In addition, we're hoping to introduce ARM support for bare metal installs. This will probably be a pilot initially, but moving to a tech preview as fast as we can. Initial support will be only for homogeneous clusters, meaning no mixed architectures as of yet. Longer term, we're looking to expand support for our own stack, with RHV IPI and OpenStack on the list. Also on the roadmap is IBM Cloud infrastructure control integration.
B: We've enhanced our existing investment in OPA integration by bringing the development of the OPA operator, via policy, into the ACM governance, risk and compliance pillar. ACM also offers GitOps with Argo CD, and does it at scale: ACM allows configurations to grow with your clusters automatically as you deploy and import new clusters. ACM configures and deploys applications and configurations.
B: This is GitOps at scale. Submariner, excuse me, will be included in ACM as a tech preview, opening up the possibilities of multi-cluster networking, especially in load-balancing scenarios. And then finally, we continue to invest in supporting our customers wherever they run OpenShift.
B: We're adding to the existing support matrix, ensuring ACM hub and managed clusters are working for both Azure Red Hat OpenShift, so ARO, as well as the Dedicated service. Right, thank you, and next slide.
D: Okay, so, focused on on-prem environments: KNI provides the simplicity and agility of the public cloud in on-prem environments, right? And to do this we are keeping a consistent OpenShift experience across both footprints, on public and private clouds. And, among other things, KNI is addressing container adoption growth while still running virtual machines (we'll talk about this later) along with containers, by running OpenShift clusters on bare metal. Next slide, and we're going to start with bare metal.
D: If you remember, in OpenShift 4.6 we introduced bare metal IPI to our family of installers. In this model, the KNI model, the OpenShift installer is aware of the infrastructure provided by the bare metal nodes, treating bare metal nodes and interacting with them as you would with machines in any other cloud provider. This is where the magic is in this part of KNI: the ability to treat the bare metal nodes as if they were machines in AWS, in Google Cloud or in any other platform.
D: The resulting cluster is an OpenShift cluster on bare metal, but an OpenShift cluster that's aware of the bare metal nodes it's running on and managing as well. Let's see what's next with this on the next slide. Okay, as part of the installation, I think one of our biggest additions in KNI is going to be the fully supported assisted installation from cloud.redhat.com.
D: So this is an online assisted installer, an online experience to deploy your cluster on bare metal, making it very easy, with a very low entry bar and very few pre-requirements. You get clusters up and running with a UI installer that's online and guides you through the installation process to get your cluster installed.
D: Again, in OpenShift 4.6 we introduced bare metal IPI. 4.6 is out, and we have had feedback from users in both telco and enterprise environments telling us, in these first few weeks, a couple of months, what we should learn from, and we are introducing things like improved validations. One of the things we learned is: look, this fails, but it takes a while.
D
I
want
it
to
fail
fast
and
tell
me
exactly
what
am
I
missing
so
based
on
that
we
are
improving
the
user
experience,
we're
also
adding
features
and
features
that
initially
they
are
coming
mostly
from
telco
customers.
But
let's
not
forget
our
about
our
enterprise
type
of
customers,
which
we
have
loads
and
the
interest
is
really
high,
as
well
with
you
know,
automating
and
the
ability
to
deploy
fully
automated
clusters
on
bay.
D
America
right,
but
you
know
the
highlights
here,
you
will
see
in
that,
for
example,
uefi
sector
boot
is
an
addition
that
we
are
adding
that
anyone
can
benefit
from
that.
But
we
learned
that
telco
customers
require
this.
You
are
passing
your
security
audit
and,
if
you
don't
have
ufi
secure
boots,
well,
somebody's
gonna
flag,
it
right
during
the
audits.
Similarly,
we
are
adding
fips
mode.
D
Support
is
supported
in
openshift,
but
we
are
adding
the
logic
to
the
installer
in
api,
so
that
you
can
say
hey
I
want
to
have
it
enabled,
and
you
can
do
that
at
installation
time
then,
on
the
management
side
of
the
clusters
deployed
with
a
I.
D: Well again, we are learning about how people are using these clusters on bare metal and what gaps we have. One of the things you have with bare metal is that when you reboot the nodes, it's not like rebooting a virtual machine: it's going to take some time, and we're working on faster recovery times after a bare metal node failure.
D: Similarly, we're also automating the recovery, in this case without BMC management. The BMC is what we use to manage the nodes in general, but not everyone is going to have it, and we want the nodes to be able to be recovered after failure, so we are doing this through a process we call the poison pill. More things that we are doing:
We are learning that, especially in the telco market, customers want to have specific sets of BIOS settings configured, and those are needed by workloads that need to be deployed only on nodes that are configured in the specific way that telco workloads expect, okay?
D: So, in order for us to do this, we are adding support to get and set BIOS settings, and to also schedule workload placement based on those. And then, on the networking side, still on bare metal, what we're doing is host network configuration: this is for the nodes where you install the OpenShift cluster.
D: We want to be able to give you a way to configure the host networking in any way you want, with VLANs, with bonds, with the standard configurations that you can do manually in your operating system, but now automating them for you on day one, during the installation.
D: Similarly, with the hosts that you are going to use for your cluster, we are learning that not everyone wants, or can have, DHCP, so the ability to configure static addresses for your nodes is what we are working on. And as part of deploying an OpenShift cluster on bare metal with IPI, something that we do is add a load balancer (our load balancer right now is based on HAProxy) and DNS (based on mDNS), incorporated as well.
D: Customers are telling us, well, I may want to use my own load balancer and my own DNS, not the ones provided by you. Okay, so we are adding the ability to enable and disable those. Simple things about KNI. On the next slide, now, I would like to talk about OpenShift Virtualization, again still in Kubernetes-native infrastructure. And well, I would like to suggest you read the blog post that is linked in this slide.
D: What's new in OpenShift Virtualization in OpenShift 4.6? Remember that we went GA in 4.5, and with that release we then focused on making VMs easier to create and consume in Kubernetes. We've also demonstrated performance parity for virtual machine workloads in OpenShift for even the most demanding enterprise databases; more on this in the blog post linked in this slide. Now let's see a few more things about OpenShift Virtualization on our next slide.
D: Okay, so let's start with the core. As part of OpenShift Virtualization core improvements for virtual machines in OpenShift, I will highlight a few of them, for example running compute-intensive workloads like AI/ML.

F: The audio is a little spotty. Can you try turning off your video and see if that clears up? It's been spotty just internally. Thank you.
D: Okay, I'm going to keep going. So, as part of the core: running compute-intensive workloads on the bare metal layer, GPU passthrough and also vGPU are things we are adding to OpenShift Virtualization, as is deployment on public clouds. As I mentioned, OpenShift Virtualization eventually requires running on bare metal nodes, and public clouds like AWS and IBM Cloud are where we are adding this capability as well. Beyond this, we're also working on tooling for mass workload migration from vSphere and RHV platforms into OpenShift Virtualization. Then, on the network side (and I hope I'm not breaking up anymore; hopefully you can follow me, Raymond).
D: Okay, so we move to the developer platform services, and I'll start from here.
G: So first I want to talk about our customers. Each of our customers has different needs and requirements, as they should, since they are all unique businesses. We, as OpenShift, a.k.a. the platform of platforms, need to enable our customers to easily configure, customize and extend the OpenShift platform in order to meet their business needs.
G: The developer and platform services group is specifically engaged in removing all friction from adding and configuring services on top of the platform, and from developing and deploying applications to the platform. Here is a highlight of how we approach that. You have Operators and OLM: these are key mechanisms for packaging and managing add-on services.
G: And management. And finally, we have developer tools, which empower developers to take full advantage of the OpenShift platform. All in all, our goal is to provide our customers with a Kubernetes platform that will help them succeed. Next slide, please. The OCP console has four high-level themes which we focus on with every release. Our first theme is about making developing, building and deploying applications on OpenShift as easy as possible, with the goal to provide developers everything they need in a fashion that is consumable by them. Below are a few of the highlights you can expect to see in the future: you can look for things like new experiences for functions and integrations with managed services, and much, much more to come. Our next theme: we focus on teaching our users about Kubernetes and its ever-fast-changing ecosystem. Look for improvements to quick starts, and then features like CLI shortcuts that will help our users automate what they just learned in the UI.
G: Finally, the last theme we have is enabling the extension of the platform. So look for the new dynamic plug-in framework that will enable others to easily add and integrate new UI with the OCP console. Also look for general improvements to interacting with operators on the platform as well. Next slide, please.
G: This just adds another level of flexibility for our additional add-on operator teams. In the first segment we're building the foundation of the framework, and then in following releases we will migrate the existing internal teams. Then, further down the line, once we work out all the kinks, we will open it up to certain partners, then to the general public. The dynamic plug-in framework will be the ultimate enabler that will allow internal and external add-on operators to create some very cool solutions on top of the OpenShift platform. Next slide.
G: In 4.6 we introduced quick starts as a new onboarding pattern that has the major benefit of guiding customers with an interactive experience, reducing the time it takes to get customers up and running. Now, for 4.7, quick starts are extensible: operators and customers can utilize the ConsoleQuickStart CRD to easily provide their own quick starts.
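A sketch of what providing your own quick start through the CRD might look like (field names follow the console's ConsoleQuickStart shape; the name and text are made up):

```yaml
# Hypothetical custom quick start surfaced in the console's catalog.
apiVersion: console.openshift.io/v1
kind: ConsoleQuickStart
metadata:
  name: deploy-sample-app
spec:
  displayName: Deploy a sample application
  durationMinutes: 5
  description: Walks through deploying a sample app from the developer view.
  introduction: This quick start shows the basic deployment flow.
  tasks:
    - title: Create the application
      description: In the Developer perspective, use **+Add** to deploy an image.
```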
G: Additionally, we will add hint interactions that allow users to highlight components of the UI. Also in 4.7, our quick start catalog has been enhanced to be more scalable, now supporting filtering by keyword and status. Post-4.7 we will provide additional enhancements, starting with embedded CLI commands. Additional items in our backlog are easy access to quick starts from topology for the developer, a resizable quick start panel, a grouping mechanism for related quick starts, and much, much more, so keep an eye out on this area. Next slide, please. In this segment, developers now have an enhanced developer catalog.
G: Cluster admins will have the ability to customize the available categories in the developer catalog. The categorization component works off a default set of categories, and these categories are only displayed if they have associated items in the catalog. Subcatalogs will include items like builder images, event sources, Helm charts, managed services, operator-backed services, quick starts and more, and a customized experience is available when drilling into a specific subcatalog.
G: We're exposing additional catalog features: for example, in the future our Helm chart subcatalog will have the ability to filter by available chart repositories. And finally, cluster admins with RBAC can limit what developers are able to see and access. Next slide, please. All right, so, building on a solid foundation, we have a one-stop shop for application monitoring. In 4.6, users have the ability to view and silence alerts as needed, with alerts exposed on workloads in the topology view for easy discoverability.
G: Next, we will provide a dedicated area to view targets and their associated status. This will allow users to know at a glance which workloads have custom metrics enabled. We're also working on pulling in performance analysis of Java applications, as well as integrated logs and tracing information. In 4.7 we've added the ability to see base image vulnerabilities in your project; in the future, we will enhance the experience to also include application vulnerabilities identified through our partnership with Snyk, when the appropriate operators are installed.
G: Over 300 operators are now shipping out of the box with every cluster, so there's a renewed focus on a smooth update experience, with more controls for the cluster administrator to specify which kind of updates happen at which time. We also want to enable better insight into dependency resolution between operators, so that cluster admins can see up front which operators will get installed as a dependency for developers.
G: We want to allow cluster administrators to define a policy for operator updates in which patch releases are applied automatically, but changes in minor and major versions wait for explicit approval by the admin. This balances our desire to regularly ship operator updates to fix CVEs in z-streams with the predictability of change sets due to an operator update in production. Next slide.
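Today, the closest control OLM offers is the approval strategy on a Subscription; a sketch (operator and catalog names are placeholders) showing updates held for explicit admin approval:

```yaml
# With Manual approval, OLM creates an InstallPlan for each update and waits
# for an admin to approve it instead of applying it automatically.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator              # placeholder operator name
  namespace: openshift-operators
spec:
  channel: stable
  name: my-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual    # hold updates for explicit approval
```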
G: Likewise, another way to make operator updates more robust is for a service to actively communicate that it's about to be updated. In the next release of OLM, an operator can communicate a non-upgradeable state. This makes sense for critical operations that should not be interrupted, like an application configuration change or, for example, in the case of OpenShift Virtualization, the live migration of a VM: OLM will delay any update until the operator finishes.
G: The goal is to provide a self-service application development experience that makes OpenShift's tagline, "innovation without limitation", a reality, by enabling developers to use the tools they desire and deploy their applications with minimal intervention, greatly reducing application time to market. We will continue to bring greater integration with the various developer tooling and services, including odo, the Service Binding Operator, devfiles and more. All right, next slide, please. All right, so, moving on to Quay: Quay is becoming an OpenShift-native registry.
G: Many of our OpenShift clusters are running in a disconnected environment, where Quay aims to help streamline the process of creating and maintaining a mirror for OpenShift deployments. And then finally, in a multi-cluster world, Quay is an essential registry for all community artifacts: not only for container images, but also for Helm charts and operators. Naturally, Quay will be used by many different clusters and tenants, and we aim to provide better support for the ops teams running Quay by adding more controls for the super user, and quotas to prevent the noisy neighbor syndrome.
G: Another important area is support for larger multi-tenant deployments. To support the multi-cluster landscape, our customers are steering towards Quay, and we will introduce quota management in two steps. In the first step, in Quay 3.5, quota reporting will be introduced, where various metrics, like storage consumption, network egress and registry operations like pulls or builds, are counted. This will enable reporting and showback. A soft enforcement mechanism will follow that allows creating notifications for tenants passing their quota.
G
In the next step, quota enforcement will be enabled, which can be configured to trigger throttling of network traffic or builds, or initiate pruning, starting with older container images. There will also be options for administrators to temporarily exempt users and organizations from their exceeded quota. Next slide, please.
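The soft-then-hard enforcement described above can be sketched in a few lines of illustrative Python. This is not Quay source code; the function name, threshold and return values are assumptions chosen to show the two-phase idea:

```python
# Illustrative sketch of soft vs. hard quota enforcement (not Quay code).

def quota_action(used_bytes: int, limit_bytes: int, soft_ratio: float = 0.8) -> str:
    """Return the enforcement action for a tenant's storage consumption.

    - below soft_ratio * limit: no action ("ok")
    - between the soft threshold and the limit: notify the tenant (soft enforcement)
    - at or above the limit: reject pushes / start pruning (hard enforcement)
    """
    if limit_bytes <= 0:
        return "ok"  # no quota configured for this tenant
    if used_bytes >= limit_bytes:
        return "enforce"
    if used_bytes >= soft_ratio * limit_bytes:
        return "notify"
    return "ok"

print(quota_action(50, 100))   # ok
print(quota_action(85, 100))   # notify
print(quota_action(120, 100))  # enforce
```

The soft phase maps to the notification/showback step in Quay 3.5, the hard phase to the later throttling and pruning step.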
G
For public sector customers in North America, the FIPS certification is very important. Quay so far was not supported to run on FIPS-enabled OpenShift clusters, and Quay is not FIPS certified by itself either. With the rebase to Python 3 in Quay 3.4, there's an opportunity to unblock Quay on FIPS-enabled OpenShift clusters. In Quay 3.5 we will fully support Red Hat Quay running on top of OpenShift in FIPS mode, which seems to be the largest blocker for government agencies as of today, and we will look at FIPS certification itself later in 2021.
E
Thanks Ali. So, moving on to what's coming throughout next year around the Serverless and Service Mesh add-ons. Around Serverless, one of the major focus areas is day-two operations. We have had Serverless on OpenShift, it's already GA, and customers are adopting it through the user experience available through the CLI and console for deploying serverless applications. But as these applications move to production, there are more capabilities needed around monitoring these services, the serverless workloads, to be able to make decisions on their performance and get that feedback back into the development cycle.
E
So that's one of the areas: to bring more insight into the services deployed, and also to focus a little more on the delivery cycle and the CI/CD and GitOps flows for delivering these workloads. These are applications like any other that need to fit into the existing CI/CD or delivery flows that customers already have.
E
Another area of focus is integration and ecosystem, especially around event sources. What Serverless enables for most is an event-driven architecture, and it's very important to have a very rich ecosystem of event sources available, through Red Hat products or partner products and ISVs, that customers can integrate into their applications, consume these events and build workloads around them. And last but not least, there is the developer
E
experience: to incrementally improve the experience we have across developer tooling, not just the console but also the CLI and the VS Code, IntelliJ and other IDE plugins that we will be working on throughout next year. On the Service Mesh side, scaling Service Mesh is a big area. As customers have more and more OpenShift clusters, as provisioning OpenShift clusters becomes simpler and they deploy more of them, they end up having more Service Mesh add-ons in each cluster as well.
E
So there are capabilities needed for these service meshes to talk to each other in a federated approach, or to have a shared control plane, for example. Having multi-cluster meshes, across multiple use cases, is an area of focus.
E
Navigating Service Mesh, like getting started with understanding the use cases, with documentation and quick starts and other approaches to get customers started, is an area where we want to help customers understand the value of Service Mesh and integrate it into their applications. And also better integration with the rest of the Red Hat portfolio on OpenShift, for example with API management and with our delivery pipelines, Tekton and the like, is the next major area that we will work on throughout next year. Next slide, please!
E
More recently, we are working on these two areas. On the Serverless platform, functions are an exciting piece that will be released as tech preview, really expanding the serverless capabilities: not just applications running on Serverless, but you just bring your code as a function and you don't have to worry about the deployment mechanism and its configuration. Initially it's based on Go, Node.js and Quarkus, expanding to Vert.x, Python and other types of runtimes, as well as support for running Serverless on OpenShift Dedicated and on Amazon ROSA.
E
That's the next part that is coming: first as an unmanaged operator, to make it available on Dedicated clusters for customers, and we're in discussions to make that a managed service going forward as well, so that customers can have the same OSD type of experience, with the responsibility of management on Red Hat, around Serverless too. Eventing has moved to GA.
E
It was released as GA last month as part of Serverless, and there are more capabilities coming in the admin console for the admin persona for dealing with events. Monitoring is one piece of that, but also being able to bind event sources into serverless workloads; one of those event sources is Kafka, and that integration with Kafka is also becoming generally available in Serverless. Around Service Mesh:
E
Multi-cluster federation is another part that comes with Service Mesh 2.1, as well as expanding support to workloads that are not running on OpenShift, like on virtual machines or bare metal, recognizing the fact that many of the applications customers deploy are really hybrid combinations of containers and non-containers, so they can use the same service mesh across all these workloads. Next slide.
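As a sketch of what federating two meshes might look like, each mesh could declare the other as a peer reachable through dedicated federation gateways. The CRD name, apiVersion and fields below are assumptions based on the federation design direction, not a confirmed 2.1 API:

```yaml
# Hypothetical sketch: the "west" mesh declaring the "east" mesh as a
# federation peer. Names and field layout are illustrative assumptions.
apiVersion: federation.maistra.io/v1
kind: ServiceMeshPeer
metadata:
  name: east-mesh
  namespace: west-mesh-system
spec:
  remote:
    addresses:
      - ingress.east.example.com   # hypothetical remote ingress endpoint
  gateways:
    ingress:
      name: federation-ingress
    egress:
      name: federation-egress
```

A matching peer resource on the east side, plus explicit export/import of selected services, would keep each mesh's control plane independent while sharing traffic.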
E
I briefly touched on serverless functions. It's a very exciting area, especially because it offloads a lot more of the responsibilities that application developers had to bear around serverless and puts them on the platform. You write a function, essentially, in your favorite programming language, Go or Node or Java, and based on buildpacks you can deploy the function and run it on the platform, with the configuration and operations managed by the platform.
E
There's a local experience available through a plugin for the kn CLI, the func CLI, that helps you build an image based on the function code that you have, deploy it on the platform from templates, and bind it into different types of cloud events. Going forward, we are expanding the runtime support for what types of functions can run on this platform: Spring Boot
E
and Python are next. Python especially is important because there are a lot of AI/ML workloads that fit the function and serverless use case really well, and we want to tap into that community and make that available on OpenShift as well. Next slide. On Service Mesh, I briefly talked about the federated approach, on the left-hand side of the slide.
E
What you see in Service Mesh 2.1 is more capabilities for multiple clusters that have Service Mesh installed, so those service meshes can talk to each other and coordinate their capabilities; it will gain incrementally more features around that scenario. Moving later throughout the year, we are heavily involved in the Istio community, alongside IBM, in discussions around having a central control plane when you have multiple clusters with Service Mesh enabled.
E
You don't want one instance of the control plane in each of these clusters, similar to how the federated mode looks, but rather a central control plane where each cluster has its own data plane, coordinating the mesh across these clusters. That's more of a long-term item, throughout next year and toward the end of next year. Next slide.
E
Around DevOps and GitOps, there are three main areas we're working on. OpenShift Builds focuses on building images on the clusters, the evolution of the existing BuildConfigs, or classic builds, that you might be familiar with. OpenShift Pipelines focuses on Tekton pipelines for CI, and OpenShift GitOps focuses on Argo CD for enabling GitOps workflows, git-based workflows, for delivering applications and configuring clusters. In OpenShift Builds v2, a buildpack strategy for Java and Node.js will be added to the platform alongside the source-to-image and Dockerfile
E
builds that are already available. We are also working to build the community around this, so that there are more community-based strategies for other build tools, like Kaniko or Jib or others that customers are using, to have the same experience but as a community build strategy. Separation of build tools and runtime images is another area we're working on: builder images that contain the build tools, like Maven and the JDK, for example, and runtime images that are really lean.
E
So customers can create images that are very small and deploy them on the platform. Volume support and dependency caching is another area we are exploring with OpenShift Builds, to make sure that different types of volumes can be consumed and also that builds are fast, by caching dependencies for Maven, npm and similar build systems.
E
On OpenShift Pipelines, metrics and trends is an area that will appear in the platform closer to the beginning of next year, so teams can get insight into how the delivery pipeline is performing and also identify issues when there are anomalies. Pipeline-as-code is a major area, to align the way people build delivery pipelines more with GitOps practices and treat git as the single source of truth for the pipeline definition as well, plus more assistance around migrating to Tekton.
E
I can see on the slide that it's actually reversed: a Jenkins-to-Tekton migration guide, to help customers. A lot of customers ask us how they should proceed; they have made investments in Jenkins, but they would like to move some of those workloads to Tekton, so we want to provide them some guidance and assistance on how that can be achieved. We are also working on getting OpenShift Pipelines onto OpenShift Dedicated and flavors of that, initially as unmanaged and going forward as a managed service. Tekton Hub integration is
E
also another area that we are pushing forward across our developer tooling. Tekton Hub is a place where people can find reusable Tekton tasks to build pipelines from. This is already launched in the community, but we are bringing more integration into the dev console, the CLI, and the VS Code and IntelliJ extensions, so that you don't have to switch context and go to a web browser to find these tasks: you can, right from where you are, install tasks and use them in your pipeline.
E
We will be having the first release within the next couple of weeks, in fact, with a productized version of Argo CD, and we are also enhancing the UX around the GitOps application manager CLI, which makes use of the Argo CD and Tekton CLIs and other technologies that we have to bootstrap a GitOps process. There are views that map to that in the dev console; we're enhancing those to give you insight into what deployment environments your application has, which version of your application is deployed in each environment, which status they have, and the history of those, for a higher-level view of how your delivery is looking.
E
It's the same story about a managed service: we're working to get it on Dedicated as unmanaged, and later throughout the year as managed. Alignment with RHACM is also a question that comes up a lot, so we're working on Argo CD being the single, core GitOps engine both in OpenShift and RHACM, and expanding capabilities to cluster provisioning, and policy and governance through RHACM, based on Argo CD. Next slide,
E
please. Looking a little deeper into pipeline-as-code: this is a new mode we are working on, and it would essentially let you treat pipelines as declarative syntax that lives only in the git repository. What you see on the slide is a .tekton folder in your git repo that contains your pipeline. There is no pipeline on the cluster that the user would manage; based on git events coming from that repository, which might be a git commit or a pull request,
E
the definition of that pipeline is pulled from the git repo and executed in the platform, and the results are published to that PR, for example, and are accessible through the CLI or the dev console under the repository itself. What you see in the dev console is really just the execution of a pipeline from a git repository.
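The flow above can be sketched as a repository layout: a PipelineRun definition lives under `.tekton/` and is only materialized on the cluster when a matching git event arrives. The file name and task contents below are illustrative assumptions:

```yaml
# .tekton/pull-request.yaml -- illustrative sketch; resolved and run on the
# cluster only when a matching git event (e.g. a pull request) fires.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: pull-request-checks
spec:
  pipelineSpec:
    tasks:
      - name: unit-tests
        taskSpec:
          steps:
            - name: test
              image: golang:1.16
              script: go test ./...
```

Nothing in this file needs to be applied by hand; the cluster-side component pulls it on each event, so the repo stays the single source of truth.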
E
You don't have a pipeline on the cluster that you have to keep in sync with what you have in the git repo, so we are really doubling down on the GitOps concepts here, making sure everything is declarative and comes from git rather than live objects on the cluster. Next slide, please. One request we get a lot from our customers when we talk about our DevOps portfolio is that they like and use the technologies we're productizing and bringing, but they ask us how best to combine them.
E
How should my delivery look? What should I do with Tekton? What should I do with Argo CD? Our response to that is really the GitOps application manager CLI that we are building: an embodiment of our opinionated way to do continuous delivery using these technologies. A customer runs a bootstrap command, and that generates the resources, generates a pipeline for the customer, and generates a number of sample environments.
E
The application would get deployed; it configures webhooks, configures Argo CD, and really brings up the entire environment, the entire workflow that you see on the right-hand side of the slide, with that one command, or one or two commands, against the cluster, and from that point they can go modify it. So it gives an opinionated and integrated way of doing continuous delivery through GitOps practices with the offerings we have available on OpenShift, and they can customize it after the tool generates this structure for them.
E
Initially, we are using Kustomize as the customization and packaging mechanism, overlaying the config for different environments. We're looking at Helm as an alternative packaging option, and also at different types of secret management integration; right now, sealed secrets is what is used throughout this tool. Next slide, please.
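The per-environment overlay approach mentioned above can be sketched as a Kustomize overlay; the directory and file names are illustrative, not the exact output of the bootstrap tool:

```yaml
# environments/stage/kustomization.yaml -- illustrative overlay that
# reuses the shared base and patches stage-specific settings.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base                # shared deployment, service, route
patchesStrategicMerge:
  - replica-count.yaml     # hypothetical stage-only replica override
```

Each environment gets its own overlay directory like this one, while the base manifests stay in a single place, which is what makes the generated structure easy to customize afterwards.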
E
So, getting developers started with OpenShift. We currently have a local experience with OpenShift as CodeReady Containers, which is a little larger than you would like, and we are actively working on reducing the resources it needs, so that you can run it with much fewer resources on your laptop, below 6 gigabytes, really. And there are Katacoda scenarios: if you have ever been to learn.openshift.com, there are Katacoda scenarios that give you an instance of OpenShift for one hour and a tutorial to run through.
E
So we are expanding that concept with the Developer Sandbox, a shared multi-tenant cluster that lives for 14 days. The user gets a number of projects and resources to deploy applications and try out different technologies, and the different types of developer and add-on services that we have are pre-installed in this environment. It's really an environment that is ready for developers to get started, start coding and experiment with the various capabilities we have on the platform and in our developer tooling. And there is also a workshop version of that.
E
That is really for running workshops for the public, at conferences. A lot of our developer advocates used to use Katacoda for running workshops, which doesn't allow the user to play with the environment after the workshop finishes two hours later. So there will be a version of that on IBM Cloud available for running workshops that stays up for 40 hours: the first two hours, maybe, the workshop executes, and people have two days after that to play around with the environment after the conference, for example.
E
So, not only activating access to OpenShift: there are also areas where we're working to help developers get started with various technologies across the different tooling that we have. odo is an example of that, taking advantage of devfiles to provide multiple types of quick starts that generate an application and set up the environment for the developer
E
to start with. This is in some ways similar to the quick start experience in the developer console, but brought to odo in a more expanded way of getting developers started. We're bringing similar capabilities to VS Code and the other IDE plugins that we create, so that instead of developers starting from scratch and trying to figure out how to build and deploy code on OpenShift through different runtimes, Serverless and other capabilities, we give them examples, start them off from much higher ground, and guide them through the flows that we have designed for them.
E
We see the trend that customers incrementally add more and more clusters to their infrastructure, so that they have applications deployed across multiple clusters, and through the same interfaces and the same experience they have access to those multi-cluster deployments as well.
E
Please. And with that, thank you for this section; I'm actually not sure who will be talking next.
A
D
Oh, here, there you go, perfect. Thank you. Okay, so we are in the Kubernetes-native infrastructure section; let's go over OpenShift Virtualization. Starting at the core of OpenShift Virtualization, something we're working on is GPU passthrough and vGPU support, addressing intensive workloads like AI/ML by dedicating GPUs to the workloads that require them.
D
I'm highlighting a few important things about OpenShift Virtualization here. Another one is the work we are doing with public clouds, particularly with AWS and IBM Cloud: using physical instances in these public clouds, where OpenShift Virtualization needs to run. Something else we are working on at the core is tooling for mass workload migration, specifically from vSphere and RHV platforms into OpenShift Virtualization. Then, on the networking side:
D
one of the things we're working on as well is live migration of virtual machines with SR-IOV, which is not an easy task. Think about it: we are presenting SR-IOV virtual functions, which are slices of a physical device, into a virtual machine, and we are then moving this virtual machine onto a separate host as well.
D
So this is coming; we're working on that. Similarly, we're also working on NIC hot plug, to hot plug and unplug the NICs on your virtual machines, and we are continuously enhancing the certified ecosystem for network partners; we're working with Tigera Calico. And then on the storage side of things, what are we doing there?
D
One of the things we are doing is dynamically attaching storage with hot plug to your virtual machines. Also, with the intention of having less downtime while migrating workloads running on vSphere, we are adding support for warm import from vSphere; this will still require a reboot, but it is faster than a cold migration. And then improvements in snapshots and cloning with storage solutions like OCS through the CSI interface, which results in better protection of the data using native Kubernetes capabilities.
D
So this is OpenShift Virtualization. Let's move to the next slide and cover Kata Containers. All right: OpenShift sandboxed containers, Kata Containers. Let me highlight a few things on this slide, starting with the nature of Kata Containers as driven by an operator.
D
So sandboxed containers are coming on bare metal, bringing Kata Containers to OpenShift via an operator, which will be using operating system extensions with a downsized QEMU build. The operator will be providing OLM-based installs, updates and upgrades. At the core, Kata Containers includes OpenShift CI and CRI-O CI integrations to gate changes, and we'll use Kata 2.0, which brings metrics along for observability, as well as footprint optimizations to the Kata agent in this case. And then on the networking side,
D
in this case, let's go to the next slide now and talk about edge. It is a natural extension of the open hybrid cloud strategy, which I'm sure you are all familiar with: enabling any workload on any footprint in any location. This is important because organizations are looking at the best technologies to help them reach more customers, deliver differentiated experiences and drive innovation.
D
We are talking about architecture here, and your architecture options can't be limited to just centralized architectures. Edge is required; it's driven perhaps mostly by the telco industry, but it's something you can use with many types of customers, and this is what we are learning. Let's move to the next slide and talk about a few things about edge, starting with topologies.
D
Talking about architectures and topologies, one that we are working on a lot is single-node OpenShift. You heard it right: single-node OpenShift, for a specific edge use case where you need an entire OpenShift cluster on the remote site but you only have space for one node. This is pretty much the use case; we are learning what these types of customers need, and we are working really hard to make this happen. More important things here:
D
well, you know the assisted installer I was talking about before, one of the biggest additions to Kubernetes-native infrastructure for the provisioning of bare metal nodes. This also needs to be able to provide compact clusters, that is, three-node clusters, with the bare metal API. In 4.6 (there was a question about this before in the chat) we already support deploying a three-node cluster, but you still need a machine to do the provisioning.
D
Well now, with the workflow coming from cloud.redhat.com, completely online, we can also do this and provide compact clusters. What else? ACM. ACM is important in these types of use cases, and ACM cluster lifecycle now has integration with Ansible. When deploying clusters with ACM we're using IPI, we are using Hive, but this is not enough.
D
There are so many types of topologies and use cases that bringing Ansible hooks to the table will allow us to cover a variety of them, starting with ACM and going on to deploy multiple clusters. And then zero-touch provisioning. The concept of zero-touch provisioning is: I want to provision my nodes with barely any interaction; that is, going to the data center, plugging the node in, powering it on and having the node added to the cluster, with virtually zero interaction.
D
Anyone could do that, and this involves many things: automating that this node needs to go to a specific cluster, needs to be accepted, needs to be ready to schedule the types of workloads required by each remote site, in this case. All of this falls into the zero-touch provisioning category, and we're doing a lot of work to make these use cases possible with OpenShift at the edge. And then timing and acceleration:
D
again, these are use cases related to telco customers, specifically workloads that work with real time, with precision timing, with hardware accelerators, etc. This is the line of work at the edge for OpenShift that we're working on. With that, let's move to the next slide. This next one is not strictly KNI: now we're going to talk about OpenShift and OpenStack, OpenShift on OpenStack.
D
If you think about this, we've been making progress since OpenShift 4.2, in fact since before OpenShift 4, but in OpenShift 4, essentially, adding support to the installer started in 4.2, and I want to say that by now OpenShift on OpenStack is a pretty mature combination and integration of all the services.
D
So we are working on the deployment user experience in every release, with UPI and IPI, making it easier and easier to use and covering more use cases as we go, but I have to say it's pretty mature at this point. We are now also focusing on some telco and edge use cases. OpenStack, if you think about it, is one of the most popular platforms in the telco industry; in the telco market there are lots of OpenStack deployments, and obviously they want OpenShift in these deployments as well, and this brings a lot of flexibility, having VNFs along with CNFs together on the same platform. Another point I'm covering in this slide is the integration with bare metal.
D
Another thing OpenStack allows you to do is manage bare metal nodes as if you were managing virtual instances, and we take advantage of this with OpenShift: the installer on an OpenStack platform lets you provision the OpenShift nodes on bare metal, or in a combination of bare metal nodes and virtual machines. This is something that's available, and we are about to release full support for it.
D
But you know, the technology is there for you to test. Let's go to the next slide and see a few more things about this integration and what we are working on. At the core, as I said, we're working on bare metal workers. I also said this is pretty mature, but thanks to the feedback of our customers (we have very close relationships with a number of customers using OpenShift on OpenStack) there are some things they are asking for. More than one of them was saying:
D
when I'm using autoscaling with OpenShift on OpenStack, I still need to have at least one node in there, and that costs money to my customers. So we are now working on autoscaling to and from zero nodes.
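Scale-from-zero would naturally be expressed through the existing MachineAutoscaler resource with a minimum of zero replicas. A sketch, assuming the roadmap item lands as described; the names and the minReplicas: 0 behavior are the feature under discussion, not guaranteed current behavior:

```yaml
# Sketch: autoscaling an OpenStack worker MachineSet down to zero nodes.
# minReplicas: 0 is the roadmap item discussed above; metadata names
# are illustrative.
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: workers-az1
  namespace: openshift-machine-api
spec:
  minReplicas: 0
  maxReplicas: 6
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: cluster-workers-az1
```

With no pending work, the MachineSet drops to zero instances, so no OpenStack resources are consumed until pods need to be scheduled again.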
D
So when you don't have any work to process, you don't need any instances and you're not paying anything. As Katherine was saying at the beginning of this presentation, we're also working on introducing the external cloud provider, as opposed to the in-tree Kubernetes cloud provider that we have right now. More things on the network side: as I said, part of our focus now is covering telco use cases, and we're starting by adding support for a fast data path with SR-IOV, OVS-DPDK and hardware offload.
D
How does that work? Well, essentially, we are allowing pods running on virtual machines that run on OpenStack to use SR-IOV, with the SR-IOV operator, and the same for OVS-DPDK. On the network side of things, we're also working on bring-your-own load balancer and DNS.
D
Similar to how it works today: in OpenStack, when you provision an OpenShift cluster, we also provision a load balancer and a DNS infrastructure (actually the same one we use on bare metal), and we are working on allowing you, in a very easy manner, to bring your own load balancer if you wish, and the same for DNS. We are also working on full support for provider networks.
D
This is common with many customers, enterprise and telco types of customers. And then Kuryr, which is our CNI: a CNI that understands it's working with OpenShift and with OpenStack, optimizing the traffic for this integration; we are adding support for IPv6 dual stack there, which is a common pattern across many providers in OpenShift. And then, moving to the storage section: again, as Katherine was introducing earlier, Cinder CSI is something we are also working on.
D
We believe that very soon you're going to have Cinder CSI by default, and as part of Cinder CSI you have CSI topology, in general in Kubernetes, which we want to support as part of the storage integration in OpenStack, with Cinder availability zones in particular. So you will be aware of the availability zones when you create PVs for your pods, and you will be able to influence where these PVs are scheduled.
A
Right, well, thanks for the reminder: Kata Containers, right after Kubernetes; nothing slows you guys down. So I'd like to thank everybody for joining. It was a great, great session, and these slides are available; the recording is available. Two quick things: remember to come back once a quarter. This roadmap will change, and we'll update it once a quarter for you. Also, anything that enticed you, that you want to hear more about:
A
there are deeper dives available, so grab the closest Red Hatter to you and get them connected to us, and we can dig in deeper on any of these topics. I also would like to thank everybody out there in the world for the call for papers for Summit that closed last week.
A
We had so many awesome public customers who want to talk about their usage of OpenShift that we will be having those sessions on OpenShift.tv, in OpenShift Commons and at Summit, so we can really give a lot of exposure to the amazing things people have done with the platform. And from all of us at Red Hat and the OpenShift community, we'd like to wish all of you happy holidays. Stay safe out there.