From YouTube: OpenShift 4 Roadmap Update - William Markito Oliveira (Red Hat), Chris Blum (Red Hat)
Description
OpenShift Commons Gathering, Milan, Italy 2019
Title: OpenShift 4 Roadmap Update
Speakers: William Markito Oliveira (Red Hat), Chris Blum (Red Hat)
That it wasn't that great, I don't know, but anyway, I'd like to introduce myself. My name is William, I'm a product manager at OpenShift, and I have here with me Chris. We're going to talk a little bit about what's coming in 4.2, and a little bit about all the different features that we have throughout the platform as well. This is going to build very nicely on what Brian already talked about, because some of these slides were already covered by him. That's great, because I have a lot to talk about here. With 4.2 we have some specific themes that we were tackling as part of this release.
One of the things that we got as feedback from customers, for example, was around air-gapped installs, so we are providing that now as one of the features in the platform. We also did a lot to expand the workloads that you can run on the platform, more specifically enabling GPUs to run a little more easily on Kubernetes. And then, of course, there are a lot of features that we now have as part of our developer experience and developer tools that also run on top of the platform.
So, Brian already mentioned this: one of the things that you get with every OpenShift 4 cluster is telemetry. It's an optional thing; you can disable it if you want. But by enabling it you allow this back-end system that we run at Red Hat to read this kind of metadata about your cluster, anonymized and so on.
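As a rough sketch of how opting out works (this assumes the documented approach of removing the `cloud.openshift.com` entry from the global pull secret; check the OpenShift docs for the exact procedure on your release):

```shell
# Download the current global pull secret (assumes a logged-in 'oc' session)
oc extract secret/pull-secret -n openshift-config --to=. --confirm

# Remove the cloud.openshift.com auth entry (jq is used here for illustration),
# then upload the edited secret back to the cluster
jq 'del(.auths."cloud.openshift.com")' .dockerconfigjson > pull-secret-edited.json
oc set data secret/pull-secret -n openshift-config \
  --from-file=.dockerconfigjson=pull-secret-edited.json
```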
We are adding some new platforms and new providers that we can now support through the two different ways that you can install OpenShift 4. If you're not familiar with that: we have pretty much two ways to provision infrastructure for OpenShift, one is called IPI and the other one is called UPI. IPI is what we call installer-provisioned infrastructure, and that is pretty much, I would say, the easiest way to get an OpenShift 4 cluster up and running.
With that installation we pretty much manage the whole stack, from the operating system to all the different network settings and the creation of those resources on the different cloud providers; everything is provisioned and configured for you by the installer. As an alternative, you can also use user-provisioned infrastructure, and with UPI you are then on the hook to provide the infrastructure, the networking, the operating system and so on.
But then you just use the installer to install the OpenShift bits on top of that infrastructure. So, to summarize all the different ways you can install OpenShift on the OCP side, as we call it: we have the fully automated install, the pre-existing infrastructure install, and of course we also have the hosted offerings, which come in two flavors as well. One of the flavors is what we call ARO, or Azure Red Hat OpenShift.
With ARO you are running OpenShift, but it's managed jointly by Red Hat engineers and Microsoft engineers; you can create an OpenShift cluster directly from the Azure console, which is very nice if you are an Azure customer. We also offer another flavor of that, which is OpenShift Dedicated, and with OpenShift Dedicated the management of that cluster is done by the Red Hat engineering team and our SREs.
If we put the UPI and IPI installation modes side by side, this is what the installer provides for you. With IPI it's pretty much automated all the way, but then there are also the things that you, as a user, have to provide when you're going the UPI route. The installer can still do some of that for you, for example generating the Ignition configuration, and I'll get into more details about that later.
Throughout the talk we'll look at the specifics of how that materializes for you as a user. You see here, for example, the installation procedure on Azure and the installation procedure on GCP, and it's pretty much the same thing. If you're not familiar with it: you download this installer, a binary called openshift-install, and when you run that binary you can pretty much say "create cluster". It's going to pop up and ask you a few questions, like which cloud provider you want to use, your credentials, SSH keys and so on, and from there you pretty much hit enter and let the installer run for 30 or 40 minutes, and you've got a cluster up and running that is pretty much ready to rock as a production-grade installation. It's very straightforward, and it's a very similar experience across all these different providers.
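That interactive flow looks roughly like this (the prompts and timings are as described in the talk; the exact wizard questions vary by provider and release):

```shell
# Download the installer for your platform, then:
./openshift-install create cluster --dir=./mycluster
# The wizard prompts for platform (aws/azure/gcp/...), region,
# base domain, pull secret and SSH key, then provisions everything.

# Roughly 30-40 minutes later, credentials for the new cluster are here:
export KUBECONFIG=./mycluster/auth/kubeconfig
oc get nodes
```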
Specifically with 4.2, we are adding GCP support, and Azure as well. Now, expanding a little bit on those disconnected installs:
Even though a lot of the solution and the procedure that we have been describing for OpenShift 4 relies, of course, on connectivity to the internet and to those back-end systems at Red Hat, we also heard feedback from multiple customers, especially the ones running in industries like banking and healthcare, that they want to run completely air-gapped installation procedures. So we are now offering that as well.
With 4.2, because of course you still want some of that experience that you get, for example running upgrades for the multiple clusters that you have on your infrastructure, you can still do that, but you'd have to have a local copy of those containers, patches and updates available behind your firewall, and that's what we allow you to do now, as of 4.2.
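The rough shape of that local copy is to mirror the release content into a registry behind your firewall (a sketch; the registry name and release version here are illustrative placeholders, and the full procedure is in the docs):

```shell
# Mirror the OpenShift release images into a local registry
# (LOCAL_REGISTRY and the release tag are illustrative values)
LOCAL_REGISTRY=registry.example.com:5000
oc adm release mirror \
  --from=quay.io/openshift-release-dev/ocp-release:4.2.0 \
  --to=${LOCAL_REGISTRY}/ocp4/openshift4 \
  --to-release-image=${LOCAL_REGISTRY}/ocp4/openshift4:4.2.0
# The command prints an imageContentSources snippet to paste into
# install-config.yaml so the installer pulls from the local registry.
```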
Another piece of feedback that we heard from our enterprise customers was around the need for an egress proxy.
So again, sometimes you have to go through a corporate proxy in order to get any kind of connectivity to the internet, and this is a nice way to configure the entire OpenShift cluster, the entire underlying Kubernetes cluster and all those services, to go through this proxy. You can configure that now from one centralized location in the cluster.
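In OpenShift 4 that centralized location is a cluster-scoped `Proxy` resource; a minimal sketch (the proxy URLs and noProxy entries are placeholder values):

```yaml
# Cluster-wide egress proxy configuration (illustrative values)
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster            # there is a single cluster-scoped object
spec:
  httpProxy: http://proxy.example.com:3128
  httpsProxy: http://proxy.example.com:3128
  noProxy: .example.com,10.0.0.0/16
```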
Specifically talking about OpenShift 4.2 and what version of Kubernetes is available in 4.2: that's 1.14. One thing to notice here is that as we transition from 4.1 to 4.2 to 4.3, we are skipping one version of Kubernetes, and that is something that we can do, and will do, when we think it makes sense. So we looked at the features available in Kubernetes 1.15, we looked at the versions in 1.16, and given the timelines that we have to ship the 4.3 release, we decided we can skip that version and have the upgrade and the update handled by the platform for you.
So again, this is good to know if you're looking for a specific feature in Kubernetes; but if you are not as concerned about that, and more concerned about the whole upgrade process: even though we're skipping a version there, that is all handled by us, by OpenShift.
I mentioned this capability of enabling GPUs on Kubernetes. We are automating and simplifying that using an operator we call NFD, and through this operator you pretty much have a one-click install experience to enable the GPU on every single node where you have a GPU available. It's very, I'd say, powerful and easy to use, and if you're doing any kind of AI/ML workloads this is something you'd be very interested in leveraging. With that I'll transition over to Chris.
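Once the nodes advertise their GPUs, a workload requests one through the standard Kubernetes extended-resource mechanism; a minimal sketch (the image name is illustrative):

```yaml
# Pod requesting one NVIDIA GPU via the extended resource
# exposed by the device plugin (illustrative image)
apiVersion: v1
kind: Pod
metadata:
  name: cuda-test
spec:
  containers:
  - name: cuda
    image: nvidia/cuda:10.1-base
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1
```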
Chris: I'm Chris from the storage team, and one of the cool things that we got, or are about to get, in 4.2 is CSI, the Container Storage Interface. That will enable us to add storage plugins to Kubernetes that are not in the Kubernetes tree, so we don't have to commit code to the Kubernetes project and wait for every Kubernetes release to get updates there. We can develop them much quicker and then just use the CSI interface, and the OpenShift Container Storage plugin will leverage that.
But we also have a couple of third-party developers that provide plugins. Looking at the storage devices: the storage operator will automatically set up a default storage class depending on where you deploy your OpenShift cluster. So if you go on AWS, you already have a default EBS storage class and you can go ahead and directly use that storage class, or if you're on VMware, you have that available as well.
The additional cool thing is that we got local volumes and raw block as well, so you can use whatever is locally available on your OpenShift nodes, and you can also forward raw block devices into your containers to use for whatever you need. That's especially helpful if you want to deploy something I/O-intensive, like databases or anything like that.
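A raw block volume is requested with `volumeMode: Block` on the claim and consumed via `volumeDevices` in the pod; a minimal sketch (names and the image are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Block            # raw device instead of a filesystem
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
  - name: db
    image: registry.example.com/mydb:latest   # placeholder image
    volumeDevices:
    - name: data
      devicePath: /dev/xvda    # exposed as a device node, not a mount
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: raw-pvc
```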
Looking at OpenShift Container Storage, we do get a completely new back end. Previously, in OpenShift 3, it was backed by GlusterFS, and now we're switching that over to Ceph and NooBaa. For the Ceph part we're going to use Rook, and the NooBaa part is what we're going to call the multi-cloud gateway. As everything in OpenShift 4 is based on operators, this will also have an operator to cover the install and the lifecycle, including updates, migrations and everything, and out of the box it is designed to work at scale.
So if you have a lot of PVs, this is designed for you already. We also correctly identify availability zones in your clusters, so we will try to replicate between those availability zones so you never lose your storage. Another thing is that we're very closely integrated with OpenShift, so you get your storage monitoring out of the box as well. Just to give you a quick look at that, there's this dashboard: everything is good, it's all green.
You see your capacity, you see your consumers, everything is happy. And then, when a node fails, this all goes into red and you see that obviously something has failed. You probably can't read it, but a node failed in this case, and you do see what you're supposed to do. In the back end this also hooks into Prometheus, so you get the alerts wherever you've configured them to go.
And if you already use OpenShift 3 and you're wondering, hey, how do I get over to OpenShift 4, we do have a migration tool for that. The migration tool will get you from OpenShift 3 over to OpenShift 4, and that also includes the persistent storage, so you will be able to migrate from your OpenShift Container Storage that's GlusterFS-based over to the OpenShift Container Storage that's Ceph- and NooBaa-based.
William: That's a very streamlined experience for this kind of software, especially if you consider how complicated it can be to update a Kubernetes cluster; this is amazingly easy. Another thing that we are adding to this view is the ability to do cluster monitoring as well. We don't have that in this screenshot, but there is a new tab at the top called Monitoring that gives access to high-level monitoring data for that particular cluster. Let's see what else. Oh, and of course, from this interface you can also create a new cluster using OSD: if you want to create a new managed cluster, you can view the status of that cluster and create a new cluster from here as well.
Specifically talking about metering: metering in 4.2 is now considered a GA feature, and with metering you can pretty much see the consumption of resources in your cluster and break that down per namespace, per pod, or of course cluster-wide as well. This comes in very handy especially if you're running across multiple providers. Sometimes it's very hard to understand where your consumption is actually going, which pod or which application is actually taking most of the resources you are running in your cluster, and this is a way to do that.
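Metering is driven by `Report` custom resources; a rough sketch of a per-namespace CPU report (the query name, namespace and timestamps here are illustrative, and the exact schema is in the metering docs):

```yaml
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: namespace-cpu-request-oct
  namespace: openshift-metering
spec:
  query: namespace-cpu-request   # built-in report query
  reportingStart: "2019-10-01T00:00:00Z"
  reportingEnd: "2019-11-01T00:00:00Z"
  runImmediately: true
```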
Again, it doesn't matter where you're running: you get this consistent view and report for that particular data set. With cluster logging, specifically if you're comparing against 3.11: starting with 4.1 we already made a lot of progress on optimization and performance. We are now able to store pretty much three times the amount of logs in the same cluster, and we are also reducing the general resource consumption by 50%.
And there are alerts for when that infrastructure is not working properly. So this is all good: now you have your, say, base platform infrastructure running, but you also want to bring workloads to your platform. Brian touched a little bit on how operators work, and also on the fact that we have OperatorHub. I would highly encourage you to browse OperatorHub and see if there is anything there that might be interesting for you, or to create your own operators and submit them there as well.
Those are, of course, our Red Hat products, and also certified operators that went through a very rigorous process at Red Hat to certify that they respect security concerns and all sorts of other validations that we do. That's another, I'd say, peace of mind for you as an administrator: when you're using those operators, the experience is consistent. With 4.2 we are also adding a new capability for operators in OLM, which is automated dependency resolution.
So if you're building an operator that, for example, depends on another operator: until 4.1 you could do this declaration of dependency, but it was static. You could let the cluster know, but it would still take some work to install the dependency for that operator. With 4.2, that dependency resolution is automated.
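In OLM, one way an operator declares such a dependency is by listing the APIs it requires in its ClusterServiceVersion; a trimmed, illustrative fragment (names are placeholders):

```yaml
# Fragment of a ClusterServiceVersion (illustrative names)
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: my-operator.v1.0.0
spec:
  customresourcedefinitions:
    required:                    # APIs this operator depends on;
    - name: etcdclusters.etcd.database.coreos.com
      version: v1beta2
      kind: EtcdCluster          # OLM resolves and installs the provider
```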
So again, as you provision your operator on the cluster, you now have the ability to extend the console, configure a little bit how that experience is going to look, and, let's say, expose your own console or your CLI inside the OpenShift console itself. This is something very powerful if you want to customize how your customers see your operator, your software, inside OpenShift. We are also adding a new dashboard: as you log into OpenShift 4, you get this nice summary view of what's happening with your cluster, the overall health status and whatnot, and we are adding a couple more things here in 4.2 around top consumers: you can filter that now by resource, by CPU, memory, network and so on. And this is one of the really cool features that we are adding to 4.2:
it's something that we're calling the developer console, or the developer perspective. There is now a toggle at the top of the console that lets you go back and forth between the admin view and the developer view. The nice thing here is that, as a developer, maybe you don't care as much about what's happening at the SDN or network level, or all the specifics that are happening at the cluster level. You can really just focus on building code, deploying your application, and seeing what's happening with all your applications.
It looks really nice, and we have more demos and more slides on that too. For example, using the developer console, if you want to create a new application you have a couple of guided flows that you can use. Maybe you want to start from Git: you pretty much input the GitHub repo there, say a couple of things about what kind of application it is, say a Java application or a Python application, and hit create. Behind the scenes we're going to clone that GitHub repo, trigger a build, wait for the build to complete, push the image to the internal container registry, and make your application deployment available for you. That same flow now works for a couple of different kinds of workloads, such as serverless applications or traditional Kubernetes deployments as well.
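The same import-from-Git flow exists on the CLI, if that's a helpful mental model (the repo URL and name here are placeholders):

```shell
# Build and deploy straight from a Git repo; OpenShift detects the
# language, runs a source-to-image build, pushes the result to the
# internal registry and creates the deployment and service
oc new-app https://github.com/example/my-java-app --name=my-java-app

# Optionally expose it with a route
oc expose svc/my-java-app
```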
And once that application is deployed, this is pretty much what you get.
You have a topology view that lets you see which applications are deployed in your namespace and what the relationships between those applications are. Maybe you want to access the logs for a particular pod; it's all very intuitive and integrated now. Another thing you can do is group those applications: maybe you want to group them in some way that makes sense, so you can now create this artificial grouping, saying that this application and this other application are part of a bigger component, and kind of orchestrate that. You can, of course, manually scale an application here using the console, or, if you are deploying it as a serverless workload, it is going to auto-scale, and you can see that here as well. We've talked about the developer console; now let's talk a little bit about the developer tools. One of the things that Brian also touched on was CodeReady Workspaces.
With 4.2, or actually not with 4.2 but right after it, because CodeReady Workspaces doesn't follow the exact same schedule as OpenShift (it's shipped as an operator that you can install, so it can be released on a different cadence), we will be releasing CodeReady Workspaces 2.0, which is based on Eclipse Che 7.
There are a lot of new capabilities in this release, and it's the release where, I'd say, Che was really re-architected to run on top of Kubernetes, so I highly encourage taking a look at that as well. One of the things that we announced recently, last week or two weeks ago, I'm not sure, is CodeReady Containers. CodeReady Containers is a way for you to have pretty much everything I'm talking about here, an OpenShift 4 cluster, running on your laptop.
It comes with all the operators that you want, and you can install more operators if you like. That's also something, I'd say, that's very nice, and it's delivered on multiple platforms as well: Linux, Windows and Mac.
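Getting that local cluster up is a short sequence (assuming you've downloaded the `crc` binary and a pull secret from Red Hat):

```shell
# One-time host setup (virtualization, networking)
crc setup

# Start a single-node OpenShift 4 cluster on the laptop;
# prompts for the image pull secret on first run
crc start

# Point oc at the local cluster
eval $(crc oc-env)
oc get nodes
```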
One of the things that we are releasing with 4.2 as a developer preview is OpenShift Pipelines, the new version of OpenShift Pipelines based on Tekton. If you're not familiar with Tekton:
it's a new project that started under Knative and has now moved out to be a standalone project. It lives now under the CD Foundation, the Continuous Delivery Foundation, which is another foundation under the Linux Foundation. It's a new, modern CI/CD platform that was designed to run on top of Kubernetes and designed to work with containers.
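A Tekton pipeline is expressed as Kubernetes custom resources; a minimal illustrative Task (the names and image are placeholders, and the API group has evolved across releases):

```yaml
# A minimal Tekton Task: each step runs in its own container
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: echo-hello
spec:
  steps:
  - name: say-hello
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["echo"]
    args: ["Hello from Tekton"]
```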
So again, some of the assumptions that you see in more traditional CI/CD systems are pretty much revisited, and you can re-architect that to be, I would say, designed for modern applications and modern workloads. It runs as an operator, of course; it's one of those flagship operators that we ship onto the platform. Pipelines is one; another one is OpenShift Serverless, and I will have a whole session talking about this in the afternoon.
If you've been following that, you know a little bit about the history, but we're happy to say that it reached the maturity level where we're very comfortable saying that it's a GA technology in the platform. And with that I'll pretty much end with the high-level roadmap, but I have covered all, if not most, of the items here already throughout the slides. I may have gone too quickly through some of them, but as you might imagine, this whole session is usually delivered by 10 or 12 p.m.