From YouTube: [What's Next] OpenShift Roadmap Update [Jul-2021]
Description
On July 22 2021, the OpenShift PM team will broadcast the internal [What’s Next] OpenShift Roadmap Update [Jul-2021] briefing to internal Red Hatters on Primetime, as well as directly to customers and partners on OpenShift.tv.
What: Technical Product Manager Overview of the OpenShift Roadmap with Public Q+A that anyone can see. *All questions and comments are entirely public*
Who should come: Customers: developers, admins, IT professionals
Where: OpenShift.TV, https://red.ht/twitch
Topics: See details below
Why: See FAQ below
Next Steps: Email your customers with the suggested language below.
B
Good morning, good afternoon, good evening, wherever you are: a very warm welcome to the Q3 2021 update of What's Next in OpenShift. My name is Tushar Katarki and I'll be facilitating this presentation; I'm from the OpenShift product management team.
B
As a reminder, this presentation offers an overview of the direction, initiatives, and exciting new use cases that we are solving with new features over a six- to 18-month-plus time horizon. We provide this update quarterly.
B
Also note that some of the specific details we won't go into here can be found in the appendix, which you can refer to offline as a reference. One final note: this is a roadmap update, so things that are coming in the future are subject to change. The material discussed here can change, so just be aware of that.
B
In addition, you'll notice that we have the rest of the team present here who can help answer questions, so please ask away on the Q&A forum that is available to you.
B
Our customers have, over the years, innovated, competed, and successfully created value for their own customers through applications built for hybrid cloud, be it n-tier applications or more modern cloud-native microservices that encode traditional business logic and rules, or more modern data analytics and artificial intelligence. These can be applications developed in-house, custom applications, or packaged applications from ISVs.
B
This is a better look at that journey, and this journey, called the open hybrid cloud journey, has evolved over time. We started back in 2015, as some of you might recall, with OCP v3, OpenShift Container Platform v3, with Docker containers and Kubernetes: Docker providing the standard packaging format, and Kubernetes providing scheduling and cluster management for portable applications and consistency.
B
Since then, with OpenShift v4 in 2019, we brought a comprehensive and integrated approach to the cloud, with a full-stack install experience, Kubernetes Operators, monitoring, distributed tracing, service mesh, serverless, CI/CD, GitOps, a number of developer tools, and more, all building on OpenShift v4. With customers wanting more cloud-based services and consumption, we have added OpenShift as a fully managed service that can be consumed on your favorite public cloud, with offerings like Red Hat OpenShift Service on AWS (ROSA), Azure Red Hat OpenShift (ARO), OpenShift Dedicated, and much more.
B
In addition, we have introduced ACM and ACS for hybrid cloud management and security. Building on our success with managed OpenShift, earlier this year we doubled down and unveiled a suite of fully managed, SRE-backed application and data services that we call Red Hat Cloud Services.
B
You will see us continuing to add more there through this year and next. Now, where are we headed next? In some ways we're already on this journey.
B
But our point is to bring you a comprehensive hybrid cloud service experience that brings all of these together, such that there is a uniform experience for you no matter where you are, on the cloud, on premises, or at the edge, and whether you are a developer, an application architect, a DevOps person, a security officer, a system administrator, or anybody in charge of operations in general. So, just as a quick recap: when we say OpenShift, we are talking about the different ways in which it is consumable by you as customers and users. It is available as a fully managed cloud service, or it can be consumed as a self-managed platform.

B
Managed Red Hat OpenShift is jointly engineered, offered, and managed by Red Hat and the cloud provider, so that you can get started with a Kubernetes service quickly. This includes OpenShift Dedicated, Azure Red Hat OpenShift, Red Hat OpenShift Service on AWS, and Red Hat OpenShift on IBM Cloud, all fully managed for you by Red Hat and our partners.
B
OpenShift Platform Plus, OpenShift Container Platform, and OpenShift Kubernetes Engine are self-managed software offerings that you can deploy in your data center, public cloud, or edge locations. You can choose the model that best suits your needs, or a combination of both, and many customers have done both. So OpenShift anywhere, anytime, and any way is where we're going with this.
B
You are probably very familiar with this rendition of what makes up Red Hat OpenShift. As a quick recap, OpenShift Container Platform is the foundational piece here: our Kubernetes distribution built on top of Red Hat Enterprise Linux and CoreOS, which also includes many platform, developer, and data tools and services.
B
The new thing, which Nina is going to touch upon later, as well as Tony, is OpenShift Platform Plus, which, in addition to OpenShift Container Platform, also includes things such as ACM, ACS, and Red Hat Quay, integrated and tested together to address your management, security, governance, compliance, and registry needs. More on this a little later.
B
Next, let's look at the themes for OpenShift. These themes are based on customer input, such as yours, as well as Red Hat's vision and strategy and the broader market and technology trends and changes. The first theme here, starting at the left, is the core platform and developer tools pillar, if you will. This includes our investments in Kubernetes, Linux, and platform and developer tools.
B
While we have added a lot of innovation here over the years, there is much more to come: innovations such as support for new hardware accelerators, which now go beyond GPUs to things like DPUs, or data processing units, and FPGAs; innovation in use-case-specific scheduling; innovations in networking, including network observability; innovations in GitOps and DevOps; and exciting new things for developers with regard to serverless and service mesh.
B
This theme is also foundational for the rest of our themes, such as managed services and telco and edge. The telco and edge theme is in service of rapid innovation and the needs of the telco industry in 5G core and 5G RAN. These needs include the desire to run and develop container-native functions, AI/ML applications, and so on, developed at the core and deployed at the edge with containers. Or it could be that you are collecting vast amounts of data at the edge: how do you enable that?
B
How do you clean it and bring it in for consumption into the core for analytics and things like that? The managed services theme is all about bringing OpenShift, as I said, as a fully managed, SRE-backed service on the cloud of your choice, in addition to the current API management, the streams service with Kafka, Red Hat OpenShift Data Science, and management services such as subscription management, cost management, and Insights, which you can already see.
B
Obviously we will be adding more features and capabilities to those, but we'll also be adding more services over this year and next. Finally, we are very excited about the hybrid cloud experience, as alluded to earlier. This is a comprehensive end-to-end experience for your applications across the hybrid cloud. It includes hybrid cloud governance, compliance, security, management, and observability, all tied together with a rich cloud.redhat.com experience.
B
All this is going to come to you through our releases. Right now, in Q3, OpenShift 4.8 is next on deck. After that we get into Q4, and then we have divided next year into first half and second half, and you will see that we are continuing to innovate across all the different parts that I described earlier.
B
I won't go into the details of each one of these, but this is there as a kind of cheat sheet whenever you need to refer to it, and more details can be found, as I said, in the appendix. With that, I'll hand it over to Nana to take us through the hybrid cloud experience and OpenShift Platform Plus. Take it away, please.
C
Now, thinking about the hybrid cloud for application architects and developers, the questions are: how do I deploy applications across a multi-cluster hybrid cloud? How do my applications in different clusters communicate with each other and exchange data? And how do I do this in a secure, repeatable, and automated fashion?
C
Red Hat OpenShift is the industry's leading hybrid multi-cloud platform. OpenShift brings Kubernetes to the enterprise, with over 3,000 customers across all industry verticals. It is built on a foundation of Red Hat Enterprise Linux, and OpenShift provides a comprehensive container platform.
C
Submariner is integrated with Red Hat Advanced Cluster Management as a technology preview at the moment, and it provides cross-cluster network infrastructure for OpenShift by extending the well-known Kubernetes networking objects. The main features in this tech preview include pod-to-pod and pod-to-service L3 routing with native performance for your connectivity needs.
C
All traffic flowing between clusters is encrypted by default with IPsec, to give you security. There is compatibility with different infrastructure providers, such as AWS, GCP, Azure, IBM, and VMware, and with network plugins such as OVN and Calico. Service discovery is another aspect: Submariner provides cross-cluster service discovery DNS, with service failover and load balancing across clusters.
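For readers who want to see what that cross-cluster service discovery looks like in practice, here is a minimal sketch (my own illustration, not shown in the talk) of a ServiceExport object from the Kubernetes Multi-Cluster Services API that Submariner implements; the service name and namespace are made up:

```python
# Illustrative only: build a ServiceExport manifest as a plain dict.
# Creating this object alongside an existing Service makes that Service
# resolvable from other clusters in the cluster set (for example under a
# clusterset.local DNS name). The name/namespace below are hypothetical.

def service_export(name: str, namespace: str) -> dict:
    return {
        "apiVersion": "multicluster.x-k8s.io/v1alpha1",
        "kind": "ServiceExport",
        "metadata": {"name": name, "namespace": namespace},
    }

manifest = service_export("nginx", "demo")
print(manifest["kind"])  # ServiceExport
```

Applying a manifest like this on the exporting cluster is what makes the DNS-based discovery described above possible.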
C
Continuing on networking: the OpenShift roadmap for networking has expanded to include multi-cluster hybrid clusters, and so we need unified networking. This slide represents the roadmap goal of getting traffic into and out of the cluster in a unified way, so that ingress and egress are the same regardless of protocol, and of aligning with how layered products such as OpenShift Service Mesh and OpenShift Virtualization operate.
There
are
additional
benefits
that
are
realized
from
this
model:
single
and
multi-cluster
scoped,
port
replication
for
auditing
canary
deployments,
operational
simplicity
through
in-gas
unification,
cloud
provider
and
community
contributions
extensible
to
current
and
future
networking
protocol
next
slide.
Please.
C
OpenShift Service Mesh 2.1, which will be released in late Q3 2021, will introduce the federation of service meshes across different OpenShift clusters. This will include new custom resources for configuring interconnectivity between federated meshes, as well as for importing and exporting services between different meshes. This will enable secure sharing of services between meshes, including load balancing and high availability of services across different meshes and clusters.
C
Each mesh in the federation will retain its own control plane, and importing and exporting of services is done in an explicit manner. This allows users to limit the scope of access between meshes where desired. Future releases of Service Mesh will include support for a single service mesh and control plane that spans multiple OpenShift clusters. Next slide, please.
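As a rough sketch of the explicit export model described above, a mesh might declare exactly which services a federated peer may consume. The resource and field names below are my assumptions about the shape of such an API, not the confirmed 2.1 federation API:

```python
# Hypothetical sketch of explicit service export between federated
# meshes: each mesh lists which of its services a peer may import.
# The "ExportedServiceSet" kind and its fields are illustrative
# assumptions, not the product API.

def exported_service_set(mesh_ns: str, peer: str, services: list) -> dict:
    return {
        "kind": "ExportedServiceSet",  # assumed resource name
        "metadata": {"name": peer, "namespace": mesh_ns},
        "spec": {
            "exportRules": [
                {"type": "NameSelector",
                 "nameSelector": {"namespace": ns, "name": name}}
                for ns, name in services
            ]
        },
    }

doc = exported_service_set("mesh-east", "mesh-west", [("shop", "cart")])
print(len(doc["spec"]["exportRules"]))  # 1
```

The design point this illustrates is the one from the talk: nothing is shared between meshes unless it is explicitly listed.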
C
Multi-cluster storage is required for a number of use cases, including data federation, wherein an application wants to access data from multiple sources across multiple clusters and clouds. To that end, the OpenShift Multicloud Object Gateway is a lightweight object storage service for OpenShift, allowing users to start small and then scale as needed, on-prem and in multiple clusters, with cloud-native storage. The Multicloud Object Gateway addresses that data federation need, specifically the ability to have a local endpoint to read data from multiple locations, with an option to get local caching and mirroring capabilities.
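The "local endpoint with caching" idea can be sketched in a few lines. This is a toy model of my own, not product code: the gateway reads each object from the remote store once and serves repeat reads locally:

```python
# Toy sketch (not Multicloud Object Gateway code): a read-through cache
# in front of a remote object store, illustrating the "local endpoint
# with local caching" idea described above.

class CachingGateway:
    def __init__(self, remote: dict):
        self.remote = remote   # stands in for a remote object store
        self.cache = {}        # local copies of previously read objects
        self.remote_reads = 0  # how many reads actually hit the remote

    def get(self, key):
        if key not in self.cache:
            self.remote_reads += 1
            self.cache[key] = self.remote[key]  # fetch once
        return self.cache[key]                  # then serve locally

gw = CachingGateway({"report.csv": b"a,b\n1,2\n"})
gw.get("report.csv")
gw.get("report.csv")
print(gw.remote_reads)  # 1
```

The second read never leaves the local endpoint, which is the latency and egress win the speaker is pointing at.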
C
The other important need for multi-cluster storage is high availability and disaster recovery. Red Hat OpenShift Data Foundation, which is based on Ceph technology, amongst other things, has a number of current and future capabilities, including synchronous and asynchronous replication, that allow you to meet your high-availability needs.
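The trade-off between synchronous and asynchronous replication mentioned here can be sketched with a toy model of my own (not ODF code): synchronous writes acknowledge only after every replica has the data, while asynchronous writes acknowledge immediately and let replicas catch up in the background:

```python
# Toy model: sync replication fans out to all replicas before the write
# is acknowledged (zero data loss, higher write latency); async
# acknowledges first and copies later (lower latency, a loss window).

class Volume:
    def __init__(self, replicas: int, sync: bool):
        self.replicas = [dict() for _ in range(replicas)]
        self.sync = sync
        self.pending = []  # async writes not yet on all replicas

    def write(self, key, value):
        self.replicas[0][key] = value          # primary always has it
        if self.sync:
            for r in self.replicas[1:]:
                r[key] = value                 # ack only after fan-out
        else:
            self.pending.append((key, value))  # ack now, copy later

    def flush(self):                           # async catch-up
        for key, value in self.pending:
            for r in self.replicas[1:]:
                r[key] = value
        self.pending.clear()

sync_vol = Volume(replicas=2, sync=True)
sync_vol.write("a", 1)
async_vol = Volume(replicas=2, sync=False)
async_vol.write("a", 1)
print(sync_vol.replicas[1], async_vol.replicas[1])  # {'a': 1} {}
```

Synchronous mode is what you want for strict recovery-point objectives; asynchronous mode is what makes replication practical across higher-latency links between clusters.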
C
In addition, we are following some interesting upstream projects, like Scribe and Ramen, for replication of persistent volumes within or across clusters, and for failover and failback capabilities. Rest assured that all of these will be integrated with and managed from Red Hat Advanced Cluster Management to provide a comprehensive hybrid cloud experience. Next slide, please.
C
Integrating with Insights here brings the OpenShift fleet-wide analytics and remediations from the traditional single-cluster experience into the Advanced Cluster Management hub console. You can further enhance multi-cluster health by ensuring your managed fleet benefits from the telemetry of the entire OpenShift fleet. Enhancements to the user experience will reduce the amount of time required for on-cluster remediations to be reported back into the Insights data hub, and a number of look-and-feel improvements will create a more seamless experience across ACM and OCM. Next slide, please.
C
OpenShift Builds v2 will be in tech preview. This feature, which is based on the upstream project Shipwright, will be a common interface for all application container builds, so users can keep using their favorite and well-known methods, such as S2I, buildpacks, kaniko, or other popular container build strategies, all from a common interface. Further, integration with Argo CD and Advanced Cluster Management will give users the ability to use both with their GitOps workflows.
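To make the "common interface" idea concrete, here is a sketch of what a Shipwright-style Build object looks like upstream; the repository URL, image name, and exact field set are my illustrative assumptions, and the shipped Builds v2 API may differ. The point is that swapping the strategy changes how the image is built without changing the interface:

```python
# Illustrative: an upstream Shipwright-style Build as a dict. Changing
# spec.strategy.name (e.g. "buildah" vs "buildpacks") switches the build
# tool behind the same interface. URLs and names are hypothetical.

def build(name: str, git_url: str, strategy: str, image: str) -> dict:
    return {
        "apiVersion": "shipwright.io/v1alpha1",
        "kind": "Build",
        "metadata": {"name": name},
        "spec": {
            "source": {"url": git_url},
            "strategy": {"name": strategy, "kind": "ClusterBuildStrategy"},
            "output": {"image": image},
        },
    }

b = build("app", "https://example.com/repo.git", "buildah",
          "registry.example.com/team/app:latest")
print(b["spec"]["strategy"]["name"])  # buildah
```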
C
We're also integrating the popular secret management tools, like Vault or Sealed Secrets, for a seamless experience. Users will still be able to use their secret manager of choice in their GitOps workflow and will be able to easily integrate it with OpenShift.
C
The application environment view inside the dev console will provide a feature-rich UI to visualize application deployments per environment, like dev and stage, and there will be a CLI, the kam CLI, with which developers can go from zero to GitOps in a few commands. This CLI provides an opinionated, best-practices bootstrapping experience that gives the developer complete control over how their applications are deployed using GitOps practices.
C
Many customers rely on an internal registry as a trusted source of software. At the same time, however, developers may want to rely on public upstream registries for experiments and rapid prototyping, and this usually conflicts with the security boundaries and the networking and infrastructure rules that the customer has set in place. Quay will offer a new middle ground in the future: transparent pull-through caching. This feature works together with OpenShift's ImageContentSourcePolicy, which allows transparently redirecting image pulls from a public external registry to an internal registry.
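The redirect described here is expressed as an ImageContentSourcePolicy; a minimal sketch of one, built as a dict, might look like the following. The registry hostnames are made up, and treat the exact field layout as an assumption rather than a definitive reference:

```python
# Illustrative ImageContentSourcePolicy: pulls against `source`
# repositories are transparently retried against the internal `mirrors`.
# Registry hostnames below are hypothetical.

def icsp(name: str, source: str, mirrors: list) -> dict:
    return {
        "apiVersion": "operator.openshift.io/v1alpha1",
        "kind": "ImageContentSourcePolicy",
        "metadata": {"name": name},
        "spec": {
            "repositoryDigestMirrors": [
                {"source": source, "mirrors": mirrors}
            ]
        },
    }

policy = icsp("cache-dockerhub", "docker.io/library",
              ["quay.internal.example.com/cache/library"])
print(policy["spec"]["repositoryDigestMirrors"][0]["mirrors"][0])
```

Combined with a caching Quay instance as the mirror, this is what lets developers keep their upstream image URLs while the pulls are actually served internally.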
C
If this internal registry is a Quay instance, it will be possible in the future to configure it to cache a certain upstream image repository. If an image hasn't been pulled yet, it will be pulled by Quay on the client's behalf and then served from cache, much faster, on subsequent pulls.
C
Quay will autonomously maintain the cache and determine when it needs to update it. This gives the administrator the ability to selectively allow pulling from certain repositories, and an entire namespace can be configured as a cache of an upstream registry. At the same time, developers have the impression that their image pulls are served from the upstream registry: they don't even have to update their image URLs, and they will only notice faster pull speeds and will be able to avoid the rate limiting that some public registries are applying nowadays. Next slide, please.
C
This approach secures both the platform and the applications deployed to it, and OpenShift Platform Plus enables DevSecOps for both layers. As you know, OpenShift Platform Plus includes OpenShift Container Platform, Advanced Cluster Management, Advanced Cluster Security, and Red Hat Quay.
C
The capabilities delivered across these components combine to provide policy-based cluster lifecycle management and policy-based risk and security management across your fleet of clusters, and you can see the flow at the bottom of this graphic. The hub cluster is where lifecycle, deployment, security, compliance, and risk management policies are defined, and it is the central management point across clusters.
C
We have many of these capabilities in place today; however, they each have their own UI. So, over the next few releases, we will be working to provide an integrated multi-cluster user experience for the admin, security, and developer personas in the console. When it comes to implementing DevSecOps for applications, OpenShift Pipelines is the key integration point.
D
Thank you. So, to recap: so far we've talked about standard tools for managing a large OpenShift fleet, and OpenShift Platform Plus is where that experience comes together. It's a set of tools to help you run your OpenShift fleet, for use cases ranging from networking to security policy across clusters.
D
So let's start with Red Hat ACM. We continue to drive cluster management capability across four high-level pillars, as you can see on this slide, and we'll be introducing a new pillar focusing on multi-cluster networking in the coming months. As mentioned in the earlier slide, you can expect to see continued work with Submariner and the multi-cluster service mesh use case coming together. For the first pillar here, multi-cluster lifecycle management, operations teams can leverage Red Hat ACM's Hive and assisted installer to drive cluster deployment on-prem as well as in the cloud today.
D
ACM will also look to drive the same capability into SRE-managed OpenShift from the same ACM central hub. Customers have been asking for more flexibility in their deployments, and central infrastructure management will bring a hybrid, agnostic approach, more aligned to UPI. Moving on to our application lifecycle pillar, on the right-hand side: we are continuing to support our customers' investments in OpenShift apps resources, Argo CD, and Helm charts, and ensuring a GitOps-driven approach with everything we do.
D
ACM will add a new policy to seamlessly deploy ACS Central and sensors, and we will further enhance the GitOps experience by sending policy compliance alerts to your preferred notification systems. Lastly, for the observability pillar, we are looking to bring cluster health metrics support for managing your entire Kubernetes fleet, including both your OpenShift and non-OpenShift clusters, from a single usage dashboard interface. We are also exporting the ACM hub metrics, so that third-party tools can be used for a deeper level of fleet analytics. Next slide, please.
D
On the ACS front, we have six focus areas. The first is reducing security program cost: shift-left is a concept that allows teams to address security risk early in the software development process, reducing the cost of retesting and possible rearchitecting by designing securely by default and catching security issues early. Moving down to the next focus: prioritizing the issues that are most important.
D
By prioritizing, teams can spend their limited time and resources on the issues that will have an outsized impact on their security posture. Next, best-in-class OpenShift support: this includes support for OpenShift Dedicated, Red Hat OpenShift Service on AWS, and Azure Red Hat OpenShift, as well as continued development of security features with OpenShift as a first-class citizen. Moving toward the top right, commitment to open source: StackRox was acquired as a proprietary solution.
D
So we will continue to invest in this workflow to meet the needs of customers. And lastly, security teams want to track the return on investment in their security program using KPIs, so we are working on gathering KPIs for programs to excel, and on ways to make recommendations to programs around actions they can take to get a better return on investment. Moving on to Red Hat Quay: next slide, please.
D
That means image builds can run as Kubernetes Jobs, using a containerized build process natively on the clusters. This allows taking advantage of the OpenShift scheduler and alleviates the need for a bare-metal cluster: builds can then run on OpenShift clusters that are hosted either in a hypervisor environment or on cloud providers. OpenShift Pipelines builds are currently running with reduced privileges, and the plan is to allow builds to execute fully rootless, providing a good balance between security needs and infrastructure requirements by running on virtualized infrastructure.
E
Hi everyone, I'm Adel Zaalouk, and I am the product manager for the HyperShift project. HyperShift basically aims to bring in a new architectural pattern. In addition to the standalone mode that we already have on the left, where the control plane and workers are co-located and where the control plane is required to run on dedicated nodes or dedicated VMs, we're introducing HyperShift, which allows us to decouple the control plane and the workers, to run the control plane on an existing cluster, and to run more than one cluster's control plane on the same node.
E
This makes a lot of sense when you are running thousands of clusters, or when you, for example, want to run different architectures, Arm or x86: the control plane and the workers could have different architectures. It also makes the bootstrap time of clusters a bit faster, because we're sharing the control plane already on the management cluster. HyperShift is still under development, and we're expecting to roll it out for use in late 2022, based on your feedback.
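A back-of-the-envelope sketch (my own, with made-up packing densities, not HyperShift numbers) shows why decoupled, co-hosted control planes pay off at fleet scale: standalone clusters each need dedicated control-plane nodes, while hosted control planes can be packed onto a shared management cluster:

```python
import math

# Toy comparison of control-plane node counts. The densities (3 dedicated
# nodes per standalone cluster, 7 hosted control planes per management
# node, spread across 3 management nodes for HA) are hypothetical.

def control_plane_nodes(clusters: int, dedicated_per_cluster: int = 3,
                        hosted_per_mgmt_node: int = 7) -> tuple:
    standalone = clusters * dedicated_per_cluster
    hosted = math.ceil(clusters / hosted_per_mgmt_node) * 3
    return standalone, hosted

print(control_plane_nodes(100))  # (300, 45)
```

Even with conservative made-up densities, sharing nodes across control planes cuts the node count by an order of magnitude at a hundred clusters, which is the economics behind the pattern described above.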
E
I want to start thinking about edge and telco from the application perspective. We divided applications into four different patterns: operations edge application patterns, enterprise edge application patterns, provider edge application patterns, and consumer edge application patterns. Each pattern has a different usage style. For example, with the operations edge pattern we're more interested in doing analytics. An example of this is manufacturing visual inspection, where you really want to automate the process of looking at goods in a production pipeline and figuring out the defects.
E
Then the analytics data is sent back to the cloud. On the other hand, things like routing and switching become more important for the provider application pattern, where we're more interested in mobile broadband or telco 5G use cases. Here we are really optimizing for network performance, bandwidth, and throughput, and optimizing latency.
E
This reports back to the cloud. If we think about these deployment patterns, there are different stacks at the different layers of the architecture you see on the left: data center, cloud, near edge, and far edge. The closer we move to the device or the user, the more the requirements increase and change, and that also mandates us to forge architectures and deployment models that help cater to these different patterns and application requirements.
E
Alright, so we're going to dig more into the provider edge; that's the telco use case, more or less. There's a lot to take in in this diagram, but starting from the left, we have the radio heads and the endpoints. This is where your connectivity comes in and gets divided, and then there's the baseband unit.
E
Initially, for 4G architectures and so on, the baseband units were coupled together and co-located at what we call the cell site, which is also very close to the radio head.
E
In 5G, however, the architecture changed a bit, and the virtual centralized unit and the virtual distributed unit were split apart.
E
The virtual distributed unit is more concerned with functions on the physical layer, like orthogonal frequency-division multiple access and multiple-input multiple-output, or with medium access control and radio link control, like automatic repeat requests. This is more about encoding and decoding, to make sure that you're not losing data at the transport layer of the radio access network.
E
There's also the aspect of D-RAN versus C-RAN, which is more or less about how close we are to the cloud. The closer we get to the cloud, the more flexible it becomes and the lower the requirements on resources and latency. There are two distinctions here between these requirements, which we call the higher-layer split and the lower-layer split, but this is more about which functions we want to host where.
E
Depending on the functions, like the PHY-layer functions or the medium access control functions, the requirements increase or decrease, and it becomes a matter of the use case: based on the use case, do we move closer to the cloud, or closer to the far edge? We are trying to cater to these requirements by offering different deployment models. The vDU, for example, requires more latency-sensitive deployments and more network throughput, while the virtual centralized unit does more calculations and more quality of service, and requires more GPUs and more resources on the nodes. It also gets more flexibility, because it's closer to the cloud: it can fetch nodes from a pool and address use cases based on that.
E
So, next slide, please.
E
Alright. As I said, we have different application patterns, different adoption patterns, and different tiers, such as data collection, data aggregation, and data analytics, and the closer we get to the user, the harder the requirements become. For example, if we move closer to the user, the latency requirements become very sensitive; we're talking about microseconds and milliseconds.
E
Additionally, the closer we move to the user, the fewer resources we have, and so we also have to think about how many resources we offer on the nodes and what deployment models we think about when we do that. In OpenShift, for example, we offer three or four main deployment models for standalone OpenShift.
E
As I said, that becomes possible when we get closer to the core, closer to the data center; this is where that deployment pattern can be applied. When we move to the left, resource scarcity increases and we require fewer resources, so we provide single-node OpenShift, which basically reduces the footprint and the minimum requirements needed to run an OpenShift cluster.
E
Finally, at the edge we have, for example, Red Hat Enterprise Linux for Edge, which is closer to the user and the endpoint devices. Alright, can you go to the next slide, please?
E
Alright. If we zoom in on the requirements of the edge, as you saw, there are lots of devices and lots of analytics, so zero touch provisioning becomes crucial and critical for automating the installation and lifecycle of these devices. Zero touch provisioning is a technology preview in Advanced Cluster Management version 2.4.
E
It is aimed at regional, distributed, on-prem deployments. It gives us a lot of benefits, because it already integrates with and leverages existing technology stacks: for example, it integrates with Red Hat ACM, Hive, metal3, and the assisted installer. It also has minimal prerequisites to install, which, as the name implies, is what zero touch provisioning is about: it's perfect for automating multiple devices at scale, it is self-configuring, and it enables an untrained technician to go through the installation flow very easily; think about just scanning a barcode to trigger an install. You also get highly customized deployments: it can fit in any of the modes that we offer with OpenShift, in either connected or disconnected mode, and it supports IPv6 dual stack, DHCP or static addressing, UPI or IPI deployment technology, and more. Moreover, it is edge-focused, there is self-bootstrapping in place, and, finally, it's integrated with ACM's GitOps feature, which allows you to manage your zero touch provisioning installation in a Kube-native way and account for the actions being taken to install the cluster.
E
Finally, it also removes the need for compute management and cluster provisioning for just a single dedicated node: it allows a pool of nodes to self-discover and allocate themselves to join a cluster. Next slide.
E
Finally, this is how it all fits together. It all starts from the application layer. As I said, there are different application patterns with different focuses.
E
Some applications focus on analytics and require more CPU and more resources; some applications focus on data collection and data management; some applications focus on networking, like mobile broadband and telco IoT. All these applications have different dependencies that force, or add, constraints, and these constraints can come in different shapes and forms. A constraint can be the space or footprint available: as I said, when we move closer to the edge, the space and footprint decrease, and we have to find solutions for that. We also need to think about scale.
E
How do I increase high availability when I'm close to the core and hosting critical applications that all the other layers rely on? How do I automate all of this? How do I give my users the ability to delegate the installation, and handle the lifecycle of these devices at scale? Finally, we have to support different infrastructure types, as always: virtual machines, bare metal, and the different flavors of installation.
E
So how do we cater to these needs in OpenShift? Coming next, especially in the second half of this year and the first half of 2022: as I said, zero touch provisioning integration with ACM, which allows us to manage and coordinate deployment and installation at scale for single-node and remote worker deployment models.
E
There are three-node clusters for HA, which ties back to the high-availability requirement. Then there is DU profile optimization, distributed unit profile optimization: because we deploy a single node as the DU representative, that is, single-node OpenShift is the DU in the telco edge model, we need to be able to optimize the profile to further decrease the space and footprint required to run the cluster, or a DU deployment.
E
Then we have network optimizations coming up: SR-IOV dual stack, SmartNICs for network offloading, and load balancers on bare metal. We also have NUMA for performance optimization, CPU pinning, real-time kernels, forward error correction (also for the DU, the distributed unit, at the edge), and hyperthreading-aware scheduling. All this ties back together; it becomes an interconnection model that we map to from the things we do.
E
Yeah, so I'll hand over to John.
F
Thank you very much. Hopefully everyone can hear and see me okay. If we could move on to the next slide, please. Hey everybody, my name is Sean Pertell, from the managed OpenShift cloud services product team. Our managed offerings build on all the great features inherited from OpenShift Container Platform and the integrations we have with other Red Hat products and services.
F
So first I just want to start quickly with a compliance readout, if you will. PCI is now available for both OpenShift Dedicated and our ROSA offering. Our next focus will be FedRAMP certification, currently targeted for the second quarter of 2022. HIPAA-ready certification is also next on the agenda.
F
Moving on a little bit to the second box there, on security, I wanted to specifically call out a few things, starting with Amazon STS. With ROSA and OpenShift Dedicated we can now leverage policies within Amazon's Secure Token Service to gain access to the AWS resources needed to install and operate the cluster. This allows us to do things in a more standardized and secure way on AWS, and to more directly enforce least-privilege policies.
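For a sense of what the STS model above rests on: each role carries a trust policy allowing web-identity federation from the cluster's OIDC provider. The sketch below builds such a policy in Python; the ARN, OIDC host, and service-account subject are placeholders, not anything ROSA actually generates.

```python
import json

# Illustrative IAM trust policy for an operator role using AWS STS
# web-identity federation. All identifiers here are made up; a real
# ROSA/OSD installation generates its own provider and subjects.
def sts_trust_policy(oidc_provider_arn: str, oidc_host: str, service_account: str) -> str:
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            # Trust only tokens issued by this cluster's OIDC provider.
            "Principal": {"Federated": oidc_provider_arn},
            "Action": "sts:AssumeRoleWithWebIdentity",
            # Restrict the role to one in-cluster service account.
            "Condition": {"StringEquals": {f"{oidc_host}:sub": service_account}},
        }],
    }
    return json.dumps(policy, indent=2)

doc = sts_trust_policy(
    "arn:aws:iam::123456789012:oidc-provider/example-oidc.s3.amazonaws.com",
    "example-oidc.s3.amazonaws.com",
    "system:serviceaccount:openshift-image-registry:registry",
)
```

Scoping each role's condition to a single service account is what makes the least-privilege enforcement direct rather than relying on one broad shared credential.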
F
In addition to that, we've also introduced PrivateLink, which removes the need for direct public internet access for a cluster and allows more VPC network customization. It enables, for example, our Red Hat SRE teams to access the cluster for maintenance or upgrade procedures through a private connection, with no need for public internet. On the ARO side this is already available, but there's an analog for egress, which we call egress lockdown, that allows much more control over the egress traffic from an ARO cluster.
F
Bring-your-own-key encryption is something that's being worked on for both AWS and Azure storage options for ROSA and OSD. We are also working on an additional layer of etcd encryption. It's worth noting that etcd storage is already encrypted, but this will provide an additional layer of security around etcd. And on the Azure side, we're working on an Azure Active Directory group sync mechanism. Next slide, please.
F
That does need some work: spot instance support, AMD instances, and even dedicated instances on the ROSA and OSD side. We continue to try to maintain parity between OpenShift Dedicated, ROSA, and OCP, the self-managed option. On the ARO side, we're trying to support the Azure Government region, and again working to expand support for instance types that are specific to the Azure cloud.
F
On the infrastructure side, a really interesting feature in the works right now is cluster hibernation. For on-demand managed clusters this is extremely important in terms of managing costs: being able to essentially pause and unpause a cluster is definitely a step in the right direction for those clusters that you may not need running 24/7.
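Managed clusters are represented by Hive ClusterDeployment resources, and hibernation is driven through a power-state field. A minimal sketch of the merge patch a tool could apply, assuming Hive's `spec.powerState` values `Hibernating` and `Running` (verify against the `hive.openshift.io` CRD on your cluster):

```python
# Build the merge patch that pauses or resumes a Hive-managed cluster.
# The field name and values mirror Hive's powerState API but should be
# treated as illustrative, not authoritative.
def power_state_patch(hibernate: bool) -> dict:
    return {"spec": {"powerState": "Hibernating" if hibernate else "Running"}}

pause = power_state_patch(True)    # stop paying for idle compute
resume = power_state_patch(False)  # bring the cluster back
```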
F
There are ongoing integrations with several infrastructure tools to provide a lot more flexibility in how you might manage a VPC, including CloudFormation, Terraform, and Ansible on the ROSA and OpenShift Dedicated side. For networking, we're working to transition to OVN-Kubernetes as the default network provider, over from the current Open vSwitch-based default.
F
We're adding additional in-cluster support for network load balancing and for pre-existing Route 53 configurations when installing into an existing VPC. Again on the ARO side, we're working on integrations with the Azure portal UI to provide a cluster creation GUI, allowing a little more configurability when installing a cluster, including being able to choose specific versions, and then native integrations with other Azure tools such as App Lens, which is the next thing we have here. So, next slide.
F
Please. All right, we'll close this out by talking about some overall platform-specific features that are coming. Again on the ROSA and OSD side, we're incorporating a more tightly integrated user workload monitoring feature, which is already available for self-managed solutions.
F
On the managed side, the work being done is primarily around leveraging custom alerts, because alerts are used by the Red Hat SRE team, so there are some permissions that need to be put in place to make sure things work properly for both the Red Hat team and the customer team. For ROSA specifically, we're working on AWS console integration; that work is always ongoing to provide a more native experience, including things like supporting annual agreements directly from the AWS console.
F
We're working on integrations with OpenShift Cluster Manager to provide the same level of experience across all three of these offerings. That includes the ability to provision both ROSA and ARO clusters directly from the OpenShift Cluster Manager interface, and then, for ARO, being able to adopt and provision clusters, manage add-ons, and schedule upgrades directly through the UI.
F
Yeah, I think that covers what I've got here, so the next slide, please. The last thing I wanted to touch on really quickly is our ongoing effort to provide more visibility and avenues for direct feedback to our product teams; it's very, very important to us. To that effect, we've established public roadmaps for each of our managed services.
F
You can see an example here of the ROSA roadmap, but there are quick red.ht links to the OSD roadmap, the ROSA roadmap, and the ARO roadmap, where you can see which specific features are in progress and get a little more information about those features. Next slide, please. And then finally, as part of those public roadmaps, we've enabled RFE tracking and issue tracking through the standard GitHub issue tracker. So again, any feedback is always welcome.
G
Thank you. Good morning, good afternoon, good evening, everyone. Welcome to the roadmap update on the core platform and developer tools. My name is Anand, product manager at OpenShift, and I'll be taking you through the course of this presentation.
G
So what's next for the OpenShift console? The console, as you know, is the face of the product for cluster administrators. First we have OpenShift Platform Plus, which enables us to do so much more: we will be combining the capabilities of ACM, ACS, and Quay so we can offer our customers the ability to manage, secure, and deploy across clusters, and this is becoming our foundation for the OpenShift hybrid story.
G
Next, dynamic plugins. The OpenShift web console framework will now work with dynamic plugins, which will enable our teams to create beautiful layered UIs with minimal effort. With these dynamic plugins, operators can deliver and manage their UI experiences on their own release cycle, giving operator creators much more control and flexibility. The result of dynamic plugins plus OpenShift Platform Plus is a hybrid console that brings everything together into a single UI with a single URL, where users can see their entire fleet at a glance and drill down as needed.
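For a sense of shape, a dynamic plugin is registered through a ConsolePlugin-style resource that points the console at a service hosting the plugin's assets. The sketch below uses made-up names, and the exact API version and fields should be checked against the shipped CRD:

```python
# Hypothetical dynamic-plugin registration following the
# console.openshift.io API shape: the console fetches the plugin's
# JavaScript bundle from the referenced in-cluster service.
console_plugin = {
    "apiVersion": "console.openshift.io/v1alpha1",
    "kind": "ConsolePlugin",
    "metadata": {"name": "demo-plugin"},
    "spec": {
        "displayName": "Demo Plugin",
        "service": {
            "name": "demo-plugin",       # service serving the assets
            "namespace": "demo-plugin-ns",
            "port": 9001,
            "basePath": "/",
        },
    },
}
```

Because the plugin ships as a service owned by the operator, it can be updated on the operator's release cycle without rebuilding the console itself, which is the flexibility described above.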
G
In this slide I want to talk about our investments in serverless and our journey to strengthen our portfolio in the serverless space. There are two pillars. One is the serverless deployment platform, and here we want to lead by example and become the community leader and thought leader in Knative and serverless. One aspect of that, obviously, is to add more event
G
sources, like Kafka, strengthen our security story, and scale our performance on the serverless side to drive increasing adoption. The next pillar is user experience: we want to attract enterprise developers as well as non-developers, people who cannot code, people who cannot write YAML.
G
The next piece is the serverless platform itself, and we're doing this in two steps. The first is to make serverless the default way of deploying workloads, such as customer workloads and other managed cloud offerings. The second step is to make OpenShift Serverless a fundamental and integral part of OpenShift itself, so that regardless of where you deploy OpenShift, on bare metal, Red Hat Virtualization, or another managed cloud service
G
provider, serverless will be available for you to use. All of this leads us to create a foundation that is very application-centric and very focused on the centralized hybrid cloud and the developer. The developer experience is not about deploying clusters or nodes; the developer experience is about deploying the application itself, and cluster creation is less important than making sure the developer is productive. Next slide, please.
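Concretely, deploying a workload "the serverless way" means a single Knative Service rather than a Deployment plus Service plus autoscaler. A minimal sketch with a placeholder image; `serving.knative.dev/v1` is the upstream Knative API that OpenShift Serverless ships:

```python
# Minimal Knative Service: the platform builds a routable endpoint and
# scales the revision with traffic, including down to zero when idle.
# The image and names are placeholders.
knative_service = {
    "apiVersion": "serving.knative.dev/v1",
    "kind": "Service",
    "metadata": {"name": "hello", "namespace": "demo"},
    "spec": {
        "template": {
            "spec": {
                "containers": [{
                    "image": "quay.io/example/hello:latest",
                    "env": [{"name": "TARGET", "value": "OpenShift Serverless"}],
                }]
            }
        }
    },
}
```

The developer describes only the application container; routing, revisioning, and scaling are the platform's job, which is exactly the application-centric experience described above.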
G
In this slide I want to talk about our investments in operators. First of all, you can now write operators in Java. Java is still a very popular enterprise programming language: if you look at any programming language survey, such as the TIOBE index, Java still rates somewhere in the top five. Red Hat has invested in Quarkus, which is cloud-native Java, and we want to extend that to writing operators in the Java programming language. We also want to enable granular permissions.
G
As OLM shifts its lifecycle model towards global operators, we want to make sure additional controls are available to introduce fine-grained RBAC, selectively controlling who can see the operator, who can use the operator, and what a particular operator can do in a particular namespace.
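The kind of fine-grained control described here ultimately lands as ordinary Kubernetes RBAC. As a sketch, with a hypothetical `widgets.example.com` operator API group, a Role that lets one team merely view that operator's custom resources in one namespace might look like:

```python
# Namespace-scoped Role granting read-only access to one operator's
# custom resources. The API group and resource name are hypothetical;
# a RoleBinding (not shown) would attach this to a team's group.
role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "widget-operator-viewer", "namespace": "team-a"},
    "rules": [{
        "apiGroups": ["widgets.example.com"],
        "resources": ["widgets"],
        "verbs": ["get", "list", "watch"],  # view only, no create/delete
    }],
}
```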
G
Next on the operator investments: you might have heard of Cloud Native Application Bundles, which is basically a cloud-native way of packaging distributed applications.
G
We've heard from a lot of our customers that, on the Cloud Native Application Bundle side, they would like to combine the goodness of both operators and Helm charts. So in the future, OLM will have a generic API to install, distribute, and unpack cloud-native content, such as operators and Helm charts, in OpenShift clusters. And last but not least, file-based catalogs: today's catalog management in OLM is based on images that are incrementally added to a database.
G
So, what's next for Helm on OpenShift? Helm is one of the most popular package managers in Kubernetes, and we continue to make investments in that direction. The goal is to provide a self-service application development experience that enables developers to use the tools they desire and deploy their applications with minimal intervention.
G
Along with operators, Helm is a very popular way to deploy applications on Kubernetes, and we're continuously working to enrich the developer catalog to make Helm charts available out of the box. We've recently introduced a new certification program for Helm charts, along with partnerships with HashiCorp, IBM, and GitLab, with more partners getting onboarded as we speak, and you will also see more products and applications from the Red Hat portfolio available on OpenShift via Helm charts. Next, on Helm charts:
G
users often deal with potential security issues and misconfigurations when they pull Helm charts, and this can get quite challenging. The dependency graph can get very large, and each layer can potentially introduce misconfigurations and security risks. So we want to help developers ensure that Helm charts follow best practices and avoid any kind of misconfiguration.
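As a toy illustration of what such chart-checking tooling does (a hypothetical helper, not any real linter), here is a walk over a chart's values that flags two classic risky settings:

```python
# Recursively scan a chart's values for risky security settings.
# Real chart checkers are far more thorough; this only shows the idea
# of static analysis over configuration before anything is deployed.
def find_risky_settings(values: dict, path: str = "") -> list:
    findings = []
    for key, val in values.items():
        here = f"{path}.{key}" if path else key
        if key == "privileged" and val is True:
            findings.append(f"{here}: privileged container requested")
        elif key == "runAsUser" and val == 0:
            findings.append(f"{here}: container runs as root")
        elif isinstance(val, dict):
            findings.extend(find_risky_settings(val, here))
    return findings

report = find_risky_settings(
    {"securityContext": {"privileged": True, "runAsUser": 0}}
)
```

Because every chart in a large dependency graph contributes its own values, automated scanning like this is the only way the checks scale past a handful of charts.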
G
We'll be providing best-practice documentation and tooling to make sure that Helm charts are secure and work properly. And last but not least on Helm charts, we will keep building deeper integrations with various developer tools and services: the developer console in the OCP console will let you easily test your charts, we will provide the ability to install a Helm chart directly from an archive, and on the IDE tooling side, the OpenShift plugin for VS Code
G
will provide the ability to install charts that are available out of the box in OpenShift, as well as the ones you've configured in your Helm chart registry. Next slide, please: OpenShift Virtualization. OpenShift Virtualization is a very popular way of running virtual machines and containers together, and the investments we're making fall under four broad umbrellas. The first umbrella is hybrid cloud and edge.
G
We want to optimize for smaller deployments, like single-node or OpenShift compact clusters, and we also want to support bare metal instances on public cloud. Next, on the workload side, we want to enhance support for workload acceleration technologies, for sharing GPUs across technical workstations, video rendering, and AI/ML workloads. On the third pillar, enterprise scale-up, we continue to see production rollouts from customers who are modernizing existing applications at enterprise scale.
G
One example is a major e-commerce company relying on OpenShift while modernizing private cloud services that involve millions of active users. As part of supporting enterprise scale, we continue to enhance our partner ecosystem around data protection, backup, restore, and disaster recovery, and in this regard we're validating SAP HANA as a key workload to scale to an enterprise use case. And last but not least, the fourth pillar is migration at scale: simplifying bringing virtualized workloads to OpenShift.
G
While most of the applications and services running on OpenShift can be served by strong Linux features like SELinux or seccomp profiles, sandboxed containers provide an additional layer of isolation that's needed for highly sensitive tasks, such as privileged workloads or running untrusted code. Think of it as a good combination of containers and virtual machines: you get the lightweight nature and speed of containers, and at the same time you get the secure-isolation goodness that virtual machines bring.
G
So it's really combining the best of breed of containers and virtual machines. What OpenShift sandboxed containers provide in the latest set of releases is FIPS compliance: you will be able to deploy OpenShift sandboxed containers on FIPS-enabled clusters, and it will be safe to deploy the operator on FIPS-enabled clusters.
G
Next, the operator will delegate the upgrade of the Kata container runtime to the Machine Config Operator. We are also introducing must-gather support to collect cluster information and information about sandboxed containers, so that it's easy for cluster admins to debug them. Sandboxed containers will also support disconnected environments, and we will be using a metrics endpoint called kata-monitor to fetch metrics for the different Kata components, such as the agent, the hypervisor, and the shim.
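From the user's side, opting a workload into the sandboxed runtime is the standard Kubernetes RuntimeClass mechanism: a `runtimeClassName` on the pod spec. A sketch follows; `kata` is the handler name the sandboxed-containers operator typically installs, and the image is a placeholder:

```python
# Pod requesting the Kata-based sandboxed runtime instead of the
# default runc. Only runtimeClassName changes; the workload itself
# stays an ordinary container image.
sandboxed_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "sandboxed-demo"},
    "spec": {
        "runtimeClassName": "kata",  # lightweight-VM isolation boundary
        "containers": [{"name": "app", "image": "registry.example/app:1.0"}],
    },
}
```

This is what makes the container/VM combination transparent to developers: the same pod spec, with one extra field, runs inside a lightweight virtual machine.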
G
On this slide I want to talk about our investments for OpenShift on bare metal. First is advanced host networking configuration, which will provide a declarative way to configure VLANs, bonds, and static IP addresses at install time and on day two, leveraging the Kubernetes NMState technology.
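"Declarative" here means the desired network state is expressed as a resource rather than as imperative commands on each host. A sketch in the kubernetes-nmstate style, creating a VLAN interface on top of a NIC; the interface names are placeholders, and the exact schema should be verified against the `nmstate.io` CRD:

```python
# NodeNetworkConfigurationPolicy-style resource declaring a VLAN on a
# base NIC. The NMState operator reconciles hosts toward desiredState,
# so the same manifest works at install time and on day two.
vlan_policy = {
    "apiVersion": "nmstate.io/v1",
    "kind": "NodeNetworkConfigurationPolicy",
    "metadata": {"name": "vlan100-ens3"},
    "spec": {
        "desiredState": {
            "interfaces": [{
                "name": "ens3.100",           # VLAN sub-interface to create
                "type": "vlan",
                "state": "up",
                "vlan": {"base-iface": "ens3", "id": 100},
            }]
        }
    },
}
```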
G
Next, you will be able to run IPI bare metal OpenShift anywhere, which means you could run it on physical hardware in your data center, or on virtual machines, say on OpenStack or Red Hat Virtualization. Next: node health checks and remediations.
G
On the next slide I want to talk about our investments in a specialized workload scheduling framework, and this is really a multi-layered cake, so I'm going to read it top to bottom. At the top are your specialized workloads, like big data workloads or the self-driving application for your cars. Right below that is the multi-cluster application dispatcher, which helps prioritize workloads and set quota limits based on customer business requirements, and below.
G
Exactly, thank you. On the next slide I want to talk about installation and updates. We're working to enable OpenShift to be deployed on even more platforms, including Alibaba, Nutanix, Azure Stack Hub, Equinix Metal, IBM public cloud, and Microsoft Hyper-V, and we want to keep growing this list going forward. We're also expanding our existing provider support to include more regions, more cloud instance types, and so on. And there is also support for RHEL 8: you will see that RHEL CoreOS is obviously still the default
G
choice of operating system for the control plane, but for the compute and infrastructure nodes we will provide the option to use RHEL 8 for your application workloads.
G
Next, a unified installation experience. Today we have multiple methods of installing OpenShift: user-provisioned infrastructure (UPI), installer-provisioned infrastructure (IPI), and the Assisted Installer, and each method addresses a different deployment scenario. If you want full-stack automation, you go for IPI; if you want an a-la-carte installation on your custom hardware, you use UPI; if you want some help with bare metal installs, you use the Assisted Installer. So there are a lot of different ways to install OpenShift based on the deployment scenario, and we're seeing that
G
sometimes these options are too many, and it becomes difficult for users to choose, because in some cases there is more than one option for a particular deployment scenario; for instance, on vSphere you could do IPI, UPI, and so on. The other consideration is that we want to be more agile in supporting a greater number of cloud platforms, providers, and availability zones.
G
For instance, if there's a new cloud provider like DigitalOcean or Equinix, or if Amazon introduces a new availability zone, we want to be more agile in supporting those newer providers and regions using IPI, the full-stack automation approach. As we try to onboard new providers and regions today, it usually takes multiple releases.
G
It takes a couple of cycles to get it right, so we're looking at a more scalable and more agile way of integrating these new providers without compromising the installation experience. At the core of it is the installer core, which is basically what installs OpenShift, and we want to improve that.
G
Layering on top of that is a cluster lifecycle API, which we also want to improve, and we're also looking at centralized host management, which is to manage hosts across the multiple ways of deploying OpenShift. At a high level,
G
this effort will involve introducing the OpenShift Hive operator, which will provide a cluster provisioning API upon which we can build a new central host management service, along with improving the cluster provisioning experience with OpenShift OCM and ACM. And last but not least, on EUS-to-EUS upgrades, we are working to improve the experience for customers while minimizing workload disruption.
G
While some intermediary versions cannot be skipped for control plane upgrades, we're looking at skipping those for the compute nodes. This means that control plane upgrades will be done sequentially between EUS releases, but for some intermediary versions we may be able to skip the upgrades on the compute nodes when progressing to the next EUS release.
G
For instance, the process will require pausing the compute machine config pool at specific times during the update, so that the upgrade can be skipped while moving through intermediate releases. As you can see in this diagram, if, say, you're on 4.6 EUS, all control plane and data plane nodes are running 4.6; if you then upgrade to 4.N+1, whatever that is, you'll upgrade only the control plane nodes and not the data plane nodes.
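The pause step described here amounts to a small patch on the worker MachineConfigPool (applied, for example, with `oc patch machineconfigpool worker --type merge`): while `spec.paused` is true, the Machine Config Operator will not roll the compute nodes through the intermediate release. A minimal sketch:

```python
# Build the merge patch that pauses or resumes a MachineConfigPool.
# Pausing holds compute nodes on their current version while the
# control plane steps through intermediate releases; unpausing lets
# the MCO roll the nodes forward in one pass.
def mcp_pause_patch(paused: bool) -> dict:
    return {"spec": {"paused": paused}}

hold_workers = mcp_pause_patch(True)
release_workers = mcp_pause_patch(False)
```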
G
The next slide is about bringing your own Windows Server hosts. We're seeing that in the Windows world, customers treat their instances more as pets than cattle, and there is a desire from customers to be able to reuse these pet Windows Server instances as OpenShift worker nodes for Windows workloads, gaining benefits similar to what their Linux workloads get when managed by OpenShift. Today we support Windows Server container deployments on OpenShift on three platforms, AWS, Azure, and vSphere, using the installer-provisioned infrastructure (IPI) method.
G
We want to extend this to bring-your-own-host (BYOH), so that if you have Windows Server instances as pets, say a bunch of Windows Server x86 servers running in your private data center, and you want to treat them as pets and not cattle, this feature will let you onboard those Windows Server instances
G
as worker nodes, or compute nodes, on your OpenShift cluster. There are only two caveats. The first is that the Windows Server instance has to be on the same network as the Linux worker nodes in the cluster it joins, and it also has to be in the same cloud provider that the cluster is brought up on. The second caveat is that the prerequisite for Windows containers is OVN hybrid networking, so the cluster has to be installed with OVN hybrid networking before you can set up your Windows Server nodes.
G
On the next slide I'll wrap up. We want to provide a preview of cert-manager in OpenShift. The goal is to have a cluster-wide operator for application certificate lifecycle management that supports integration with external CAs.
G
This has been a very popular ask for a while now, and this work covers provisioning, renewal, and retirement of certificates. We're writing a new operator, the cert-manager operator, which is a simple wrapper around the upstream cert-manager project, and I want to emphasize that this will be available for all workloads running on OpenShift, except bootstrap components that need certificates before the operators exist.
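To make the application-level scope concrete, here is what a certificate request looks like with the upstream cert-manager v1 API the operator wraps: a Certificate resource asking an Issuer to maintain a TLS secret. The names, namespace, and issuer are placeholders:

```python
# cert-manager.io/v1 Certificate: cert-manager provisions the key pair
# from the referenced Issuer, stores it in secretName, and renews it
# before expiry. This handles application certs only, not control
# plane or bootstrap certificates.
certificate = {
    "apiVersion": "cert-manager.io/v1",
    "kind": "Certificate",
    "metadata": {"name": "app-tls", "namespace": "demo"},
    "spec": {
        "secretName": "app-tls",                    # TLS secret to maintain
        "dnsNames": ["app.apps.example.com"],
        "issuerRef": {"name": "demo-issuer", "kind": "Issuer"},
    },
}
```

An application then mounts or references the `app-tls` secret, and renewal happens without any redeployment of the workload.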
G
So, for instance, you'll be able to use this as a day-2 operator, or as an OLM-installed operator, for any application installed on OpenShift, any middleware component installed on OpenShift, and obviously any applications you may use. It's not meant to replace any of the day-one
G
certificate management for etcd, the API server, or the control plane infrastructure components; it's only for application-level certificate management. Right now the latest upstream release is 1.3, and that is what we will be including in the operator. With that I would like to wrap up this presentation. Thank you for attending; the slides and the recording will be posted on Twitch. Once again, have a wonderful day, and thank you for attending the roadmap update on OpenShift. Thank you.