OpenShift 4.1 Release Update and Road Map
Ali Mobrem (Red Hat)
Buenos Aires 2019
OpenShift Commons Gathering
A: Hello, everyone, how are you doing? We're here today to talk about OpenShift 4. As you know, this is a really special year for Red Hat: in the same year we were able to release both RHEL 8 and OpenShift 4, with many new features and many new and interesting capabilities. What we're going to do today is go under the hood.
A: To start, just to give you a glimpse of the key themes around OpenShift 4. First, we want to keep doing what we've done before, which is to provide an enterprise Kubernetes distribution. We're talking about security, reliability, being production grade; many of these characteristics came from our experience with RHEL, our enterprise operating system, and we bring them to Kubernetes. On top of that, we are now bringing many new capabilities: obviously all the features that are important for developers to keep using and creating new applications, and many more capabilities on those terms. And we keep delivering OpenShift as a whole solution that is ready for small companies and enterprise companies alike: global companies can deploy it on premises, deploy it in the cloud, anywhere, in a hybrid mode.
A: The first thing to say about OpenShift 4 is that we're bringing a new paradigm. When I started at Red Hat in 2013, we had OpenShift 2, and at that time container orchestration technology was completely different. Then we started to develop OpenShift 3, based on Kubernetes, and when we released it in 2015 it was incredible, astonishing, because nobody at that time was using Kubernetes. Now it's easy.
A: Everyone knows Kubernetes; everyone who thinks about containers knows this technology. But at that time nobody knew it. The community version had not even reached 1.0; everything was very new. So we made a big bet, and it was a good decision. And now you start to think: okay, the team is winning.
A
Should
we
keep
it
and
we're
now
making
a
big
backup,
big
bet
again,
so
we're
keep
continuously
moving
and
improving
and
changing
things
so
not
only
we're
bringing
continue,
bringing
kubernetes,
but
now
this
concept
of
operators.
What
it
do
is
that
with
we
are
now
unable
to
manage
not
only
the
container
platform
but
also
the
infrastructure
that
is
below
what
I
mean.
This
is
the
operating
system
and
also
optionally
on
the
hardware
other
infrastructure
components.
A: With OpenShift 3, you would have two maintenance windows, one for OpenShift and the other for RHEL: you had to, for instance, patch RHEL and then patch or upgrade OpenShift at different moments. Now we do everything as one cohesive unit. Everything is managed by OpenShift, including the hardware itself if necessary, and we can do this because we model all the components on this concept.
A: This is what we call operators, and we're going to explain them in much more detail. All the components that make up OpenShift are based on operators. We wrote 42 operators that expose APIs, so we can control all the infrastructure, the machines, the operating system, and obviously the container orchestration components on top of it. To go into a bit more detail, especially about operators and new features, I'm passing it to Ali, who's going to give you a glimpse of the future with OpenShift 4. Thank you.
B: Thank you, Diego. Hello, everyone, my name is Ali Mobrem. I'm a product manager on the OpenShift team at Red Hat, and I'm very excited to be here to speak with you all today, so thank you for coming out. Like Diego said, OpenShift 4 is a brand new paradigm for us. We rebuilt it from the ground up using this new concept of operators. For people that don't know what an operator is: an operator is a runtime that manages Kubernetes applications and services.
B: What we do is take the knowledge from the support engineers and put that smarts into the operator: when you need to install, when you need to upgrade, when you need to do some troubleshooting, you build that logic in. As Diego said as well, we're taking a holistic view. We're looking at OpenShift from the bottom up, all the way from the OS to all the applications and everything running on there.
B: Here's what happens now. In Kubernetes, when you're managing an application, you tell Kubernetes the desired state you want for your application, and if it ever deviates, Kubernetes' job is to get that application back to the desired state. But because we're doing this holistically and using operators, we can now use Kubernetes to manage Kubernetes. We're really pushing the boundaries here with Kubernetes, and it's absolutely awesome; I don't know anyone else that's doing this yet.
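That desired-state loop is the heart of every operator. As a rough sketch of the pattern (the kind, API group, and fields below are invented purely for illustration, not a real OpenShift API):

```yaml
# Hypothetical custom resource, shown only to illustrate the pattern:
# the user declares desired state, and the operator's reconcile loop
# compares it with the actual cluster state and closes the gap.
apiVersion: example.com/v1
kind: DatabaseCluster
metadata:
  name: demo-db
spec:
  replicas: 3        # desired state: three database members
  version: "5.2.1"   # the operator handles upgrades between versions
  backupSchedule: "0 3 * * *"
```

If a member dies, or the version field changes, the operator reacts the same way Kubernetes reacts to a Deployment whose pods drift from `spec.replicas`.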
B: That's why OpenShift 4 was such a big jump for us, from OpenShift 3 to 4. Another piece I want to talk about is feedback. We talk to our customers a lot, and we really appreciate the feedback we get from you. Some of the feedback we got for OpenShift 3 was that it's difficult to install, it can sometimes be hard to upgrade, and it can be hard to scale out the cluster.
B
So
we
heard
that
loud
and
clear
and
when
we
designed
open
shift
4,
we
wanted
to
address
all
those
issues.
That
was
a
primary
concern
for
us
another
another
thing
that
when
we
talk
to
customers,
we've
got
a
lot
of
feedback,
for
was
the
hybrid
cloud.
Hybrid
cloud
is
very
important
for
our
customers.
They
want
to
be
able
to
run
their
workloads
on
Prem
in
various
different
ways
on
cloud
providers.
B
You
know
they
may
want
to
move
from
one
cloud
provider,
the
other,
so
our
customers
get
showed
us
a
concern
that
they're
worried
about
vendor
lock-in,
especially
the
cloud
providers.
So
we,
when
we
created
open,
show
for
one
of
our
primary
goals
there
as
well,
was
to
enable
our
customers
to
be
able
to
run
their
workloads
wherever
they
want
to
and
to
have
the
same
type
of
interface.
B
So
now,
what's
kind
of
dig
in
to
the
details
here
so
here
I
have
the
different
types
of
installations,
so
the
first
one
you
see
is
the
full
stack
automation.
This
is
what
we
call
our
one-click
installer
and
you
can
go
ahead.
Just
give
us
your
credentials
to
AWS
or
GCP
or
whatever
you
click
the
button.
At
that
point,
under
a
little
over
20
minutes,
you
get
a
brand-new
H,
a
cluster
right
out
of
the
box
with
all
the
best
practices
you
don't
know,
you
know
all
the
hard
work
is
kind
of
done
for
you.
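As a rough sketch of that flow (the exact prompts vary by release and platform), the installer is a single binary:

```shell
# Full stack automation: the installer provisions the infrastructure itself.
# Interactive prompts ask for platform (e.g. aws), region, base domain,
# and a pull secret.
openshift-install create cluster --dir=my-cluster

# When it finishes, credentials for the new cluster are written locally:
export KUBECONFIG=my-cluster/auth/kubeconfig
oc get nodes
```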
B
There
again,
we
listen
to
our
customers
and
our
customers
gave
us
feedback
and
said:
look.
We
have
all
this
existing
infrastructure.
We
already
have
a
DNS
as
VPC
set
up.
We
have
all
the
security
stuff.
We
need
a
little
bit
more
flexibility
and
we
said
okay.
So
now
we've
created
a
pre-existing
restructure.
That
is
much
more
flexible.
That's
a
little
bit
more
complex
because
we're
giving
you
that
flexibility,
but
it's
it's
so
you
can
put
it
open
shift
into
your
environment
and
so
open
shift
will
meet
your
needs.
That
was
our
primary
goal.
B
There
again
listening
to
customer
feedback
and
I'd
love
to
talk
to
everybody
today,
if
you
get
a
chance
to
come,
find
me
after
this-
and
you
could
tell
me
your
opinions
on
things-
I
would
love
to
hear
it.
We
also
have
a
couple
other
offerings.
We
have
two
offerings
for
hosted.
We
have
open,
shipped
dedicated
and
then
open
shipped
on
Azure
they're,
pretty
much
a
very
identical
except
the
azure
offering
we
actually
do
support
with
Microsoft.
B
So
for
the
open,
dedicated
you
only
get
the
Red
Hat
engineer
support,
but
for
Azure
you
get
both
now
here.
I
wanted
to
put
this
chart
up
for
everybody
to
show
you
a
comparison
slide
right.
It
shows
you
the
difference
between
the
full
stack
automation
and
the
pre-existing
infrastructure.
You
see
whether
users
responsibility
is,
and
you
see,
with
installers
responsibilities
and
the
full
stack
automation.
The
one
big
thing
I
want
people
to
take
away
from
this
is
in
the
pre-existing
infrastructure.
You
have
the
ability
to
use
rel,
seven
for
your
worker
nodes.
B: This is our provider roadmap, showing you all the platforms we support. Currently, for OpenShift 4.1, which is out, we support Amazon Web Services for full stack and pre-existing as well, and then VMware and bare metal for pre-existing. For 4.2 we're adding the ability to support Microsoft Azure, GCP, OpenStack platform, and bare metal for the full stack automation, and we're going to be adding GCP for pre-existing infrastructure as well. In 4.3 you're going to get Alibaba Cloud and IBM Cloud, and maybe virtualization support as well.
B: Okay, so this next slide I put up is important, because there's a new thing called CRI-O; that's the container runtime we use in OpenShift 4. We do not use the Docker runtime; we're actually using CRI-O, built around the container runtime interface, because it's a lightweight, native way to run containers. What this means, though, is that you can still run any Docker container: they both use the OCI format, so they're interoperable. You don't have to worry about modifying any existing programs or applications you have; they'll work right out of the box.
B
You
get
a
couple
of
cool
command
lines
that
come
with
it
as
well.
There's
something
called
pod
man
and
build.
If
you
get
a
chance,
take
a
look
at
those
alright.
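As a quick illustrative sketch (Podman's CLI deliberately mirrors Docker's, so the commands should look familiar; the image path is one example):

```shell
# Podman runs containers without a central daemon; commands mirror Docker's.
podman pull registry.access.redhat.com/ubi8/ubi
podman run --rm registry.access.redhat.com/ubi8/ubi cat /etc/os-release

# Buildah builds OCI images, e.g. straight from an existing Dockerfile:
buildah bud -t myapp:latest .
```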
B: All right. So for OpenShift 4, one of our biggest things, like I said before, is hybrid cloud. We need to be able to support our customers here, on any platform, on prem or in the cloud, and to allow them to move their workloads as needed.
B: In order to support this, we actually created cloud.redhat.com. It's a nice portal that will allow you to manage all your clusters, and it will have all your subscription items there. Expect a lot of stuff here, because we're planning to really increase the functionality, and I'll tell you why: now that it's so easy to go ahead and create clusters, upgrade clusters, and scale out clusters, our customers are finding...
B: ...that they can create many more clusters than the few they had before, so we want to provide you the tooling to manage the growing number of clusters you're going to be creating. On cloud.redhat.com we actually have the OpenShift Cluster Manager. Every time you create a cluster, it registers back to the Cluster Manager; this will be your single source of truth for all your clusters. We send back some data, a little bit of telemetry, including the status of the cluster.
B
We're
gonna
send
back
how
many
CPUs,
how
much
memory
so
the
the
utilization
of
the
cluster
is
gonna,
be
back
and,
like
I
said
in
this
area,
we're
planning
to
add
a
lot
more
functionality,
we're
probably
going
to
add
the
ability
to
use
coop
fed,
which
will,
when
you
want
to
install
an
application
or
an
operator
to
many
clusters
you
should
be
able
to
from
here.
So
look
look
just
stuff
like
that
in
the
future.
It's
a
very
exciting
area,
alright.
So
this
is
a
big
one
operators
all
the
way
down.
B
We
talked
about
how
we
rebuilt
OpenShift
4
completely
with
operators,
so
everything
is
built
on
operators
and
for
a
good
reason.
It
lets
us
automate
a
lot
of
stuff
and
the
nice
thing
in
forw
is
we
we
actually
surface
that
information
to
you.
If
you
go
to
the
admin
cluster
admin
console
and
go
to
cluster
settings,
you
get
the
list
of
all
your
operators.
You
can
see
what
their
current
health
is.
B: Now, in OpenShift 4 we also have a global configuration area where you can configure clusters. In OpenShift 3 there were probably a bazillion different flags and configurations you could set, which allowed people to shoot themselves in the foot. With OpenShift 4, the operators are very precise about which flags and configurations we expose, and all those configurations are exposed here on the global configuration page.
B
This
is
actually
somewhere
something
I
want
to
get
feedback
from
people
today.
At
some
point,
we're
actually
kind
of
being
rigid
about
this,
because
we
want
to
be
very
particular
on
what
we
allow
people
to
modify,
but
we
want
to
make
sure
we're
not
too
rigid.
So
when
people
start
using
open
shift
for
here,
I'd
love
to
hear
back
like
if
we
need
to
give
you
guys
more
flexibility
and
expose
more
configurations
to
the
cluster.
B
Ok,
so
the
next
thing
we
have
is
over-the-air
updates,
because
now
we're
using
operators
and
we're
using
the
rel
core
OS,
which
is
immutable.
We
know
the
state
of
the
cluster
at
any
given
time
and
because
we
know
the
state
of
the
cluster,
we
could
say
hey.
We
need
to
take
plus
the
the
cluster
from
state
to
state
at
any
point,
and
we
can
now
do
that
very
easily.
B
We
actually
have
something
called
the
cluster
version
operator,
which
is
a
master
operator
that
manages
all
the
underlying
infrastructure
and
core
operators
that
make
up
kubernetes
and
overshift
and
maintains
their
versioning.
So
if
we
ever
want
to
go
from
say
for
one
to
for
two
or
four
one,
two
four
one,
three
whatever
we
can
now
do
that
very
easily
for
you
guys,
okay,
so
the
next
thing
I
want
to
talk
about
is
we
kind
of
talked
about
installation
and
how
we
solve
that?
We
talked
about
how
we
do
over
there
updates.
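A sketch of the same flow from the command line (the demo later uses the console; `oc adm upgrade` is the CLI equivalent):

```shell
# Show the current version and the updates available in your channel
oc adm upgrade

# Ask the cluster version operator to move the cluster to a newer release
oc adm upgrade --to=4.1.8

# Watch the cluster converge on the new version
oc get clusterversion
```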
B: The next piece is machine sets and autoscaling. For example, if you have a workload that needs a lot of GPU, you can define a machine set that says: hey, use this type of machine, with this OS, with lots of GPUs on it. And if the cluster doesn't have the existing capacity, it can now autoscale that up, run your workloads, and then autoscale the extra machines back down to the desired state you have set for your cluster. So really cool, very powerful, and that's now available for everybody to use. All right.
B: So I talked a little bit about machine sets, and I want to go a little further with them. You can do machine sets for infrastructure too: you could have your Elasticsearch, you could have Prometheus for monitoring, your router, your registry. You could define these different machine sets and say: hey, I just want to run my infrastructure items on these. Even for metering and chargeback you could define a specific type of machine set, and then with node selectors you could...
B
Thank
you
so
with
no
sorry
about
that,
so
no
selectors,
you
could
drive
the
correct
workloads
to
your
infra
machine
sets
that
you've
created
as
well
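The mechanism is plain Kubernetes scheduling. A minimal sketch, with an illustrative node label and a placeholder image:

```yaml
# Sketch: a pod (or a pod template in a Deployment) pinned to infra nodes
# via a node label that the infra machine set applies to its machines.
apiVersion: v1
kind: Pod
metadata:
  name: registry                          # illustrative workload
spec:
  nodeSelector:
    node-role.kubernetes.io/infra: ""     # label carried by the infra nodes
  containers:
  - name: registry
    image: registry.example.com/registry:latest   # placeholder image
```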
B: I quickly wanted to show you a possible architecture diagram. This one is for AWS. As you notice, we have a control plane with three masters, and then for logging and monitoring we have some r5.2xlarge instances.
B
Those
are
some
high
performance
machines
and
we
specifically
put
those
there,
because
the
throughput
and
memory
and
the
CPU
usage
that
those
types
of
applications
are
gonna
need
then
you're
going
to
notice
that
there's
routing
and
workers
they're
both
m5
large
but
they're,
separated
into
different
machine
sets
and
the
reason
we
do.
That
is
because
maybe
the
routing
machines
have
to
have
higher
security,
because
they're
exposed
directly
to
the
internet
and
they
have
different
configurations.
B
Ok
cool,
so
something
I
wanted
to
talk
about
today
as
well
is
cluster
monitoring.
Cluster
monitoring
now
is
a
core
component
of
openshift.
You
have
to
have
it
there.
You
cannot
turn
it
off.
It's
based
off
of
Prometheus
and
the
reason
that
it's
a
core
piece
is
because,
right
now
out
of
the
box,
we've
added
a
ton
of
metrics.
In
there
we
talked
to
again
like
the
support
engineers
and
everything,
and
we
know
what
the
good
boundaries
are
for.
B
Again,
it's
all
about
automating
and
simplifying
your
guyses
lives
and
giving
you
the
most
bang
for
the
buck,
so
in
openshift,
for
in
order
for
us
to
be
able
to
give
you
guys
the
best
service
and
the
best
support
we've
added
some
telemetry
in
there
we're
sending
back
data
like
how
many,
how
many
nodes
in
your
cluster,
how
much
utilization
they
have,
maybe
the
cluster
the
operator
status
and
also
like
upgrade
status.
How
do
your
upgrade
go?
B
We
want
to
know
if
there's
an
issue
with
your
clusters,
so
we
could
come
and
proactively
reach
out
to
you
and
help.
You
resolve
your
issues
if
there's
anything
there,
something
also
new
for
option
shift
for
ou
is
a
metering
and
chargeback.
We
now
have
the
ability
to
plug
into
cloud
providers
api's
and
we
could
go
ahead
and
give
you
reports
on
how
much
you're,
spending
and
and
so
forth.
We've
already
have
a
lot
of
reports
out
of
the
box
for
you
and
you
have
the
ability
to
create
custom
reports
as
you
want
as
well.
B
If
you
look
at
the
bottom,
there's
a
matrix
there,
CPU
memory,
storage,
request,
usage,
pod
name,
space
node,
so
you
could
do
pod
usage
on
a
on
a
node
or
pod
usage
on
a
cluster
or
storage
request
on
for
a
namespace.
So
these
types
of
reports
are
going
to
be
there
for
you
and
they
should
handle
about
80%
of
the
use
cases.
Again.
We've
talked
a
lot
of
customers
and
got
a
feedback.
What
kind
of
reports
they
wanted
and
we
have
those
out
of
the
box
for
you.
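A compressed sketch of how one of those reports is requested through the metering operator; the query name here is illustrative, and the available built-in queries depend on the installed version:

```yaml
# Sketch: a metering report that runs daily against a built-in query.
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: namespace-cpu-daily
  namespace: openshift-metering
spec:
  query: namespace-cpu-request   # one of the out-of-the-box queries
  schedule:
    period: daily
```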
B
Alright
next
thing,
I'd
like
to
talk
about
is
extending
the
platform
so
far,
we've
kind
of
talked
about
the
infrastructure
and
running
it
and
and
and
the
core
pieces.
Now
one
of
the
pieces
of
feedback
we
got
from
customers
as
well
as
they
felt
like
open
ship
was
bloated.
So
what
we
did
is
we
really
slimmed
in
down
to
a
base
install
and
then
we
enable
people
to
add
functionality
as
they
need
right.
B
So
in
the
base,
install
you're
gonna
get
the
console
and
off
you're
gonna
get
monitoring
I
get
to
spoke
about
you're
gonna
get
over
the
air
updates
and
you're
gonna
get
machine
management,
essentially,
is
you
know,
be
able
to
scale
all
your
cluster
optional
items?
Are
the
service
broker
optional,
OCP
components,
for
example,
logging
is
the
optional
OCP
component
metering
is
an
optional
OCP
component
and
the
way
you
do
that
is
you
actually
go
to
the
operator
hub.
All
these
add-ons
are
in
your
operator
hub
catalog.
B
Here's
the
image
of
the
operator
hub
and
what
I
want
you
to
know
is
there's
two
version
of
our
hub:
you're
gonna
have
a
local
operator
hub
on
your
cluster
and
as
an
admin,
you
can
see
that
your
users
will
not
be
able
to
see
that
only
an
admin
admin
can
see
that
so
once
an
admin
goes
and
decides,
you
know
what
I
want
to
offer
to
my
entire
cluster.
You
could
do
that.
If
you
want
to
say,
hey
I
want
to
offer
Couchbase
just
to
these
namespaces
or
these
projects.
B
You
could
do
that
as
well
right.
So
a
lot
of
operators
allow
you
to
decide
if
it's
a
cluster
wide
service
or
if
it's
a
specific
service
you
offer
for
a
project.
There's
another
operator
hub
as
well,
which
is
operator
hub
IO,
that
is
a
community
project.
Anybody
could
upload
their
operators
there.
We
encourage
people
to
create
operators
for
their
services,
that
they
want
to
run
on
kubernetes
and
to
share
it
with
the
world
and
that's
a
great
spot
to
do
it.
B
So,
like
I
talked
about
the
operator
hub,
once
you
enable
a
service,
how
do
you
get
users
consume
it?
So
users
are
able
to
consume
it
via
the
developer,
catalog
right
so
before
the
developer,
catalog
has
Service
Catalog
items
in
there
had
templates
have
the
source
to
image
build
stuff
in
there,
but
now
we're
adding
the
the
operator
services
in
there
as
well.
So
your
your
debt
developer
catalogs
are
a
one-stop
shop
to
grab
everything
that
you
can
consume
all
right.
So
this
is
pretty
cool
because
of
this
operator
framework.
B
We
now
have
the
ability
to
try
to
make
make
a
cool
console.
Like
I
talked
about
on
the
left
side,
you
have
the
dev
catalog.
When
you
enable
an
operator
those
services
show
up
automatically
for
certain
operators.
You
could
go
ahead
and
add
a
link
into
the
external
application
launcher,
for
example,
someone
install
service
mesh.
They
can
now
put
a
link
directly
to
the
Kali
UI
from
there
or,
if
you
guys,
are
interested
in
container
virtualization.
B
If
you
enable
cnv
all
the
sudden
you're
gonna
get
the
ability
to
manage
VMs
with
an
open
ship
right
next
to
your
containers.
Right
so
you're
gonna
be
able
to
see
all
that
kind
of
good
stuff
all
right,
so
I'm
gonna
get
this
back
to
theöto
and
he's
gonna
talk
to
you
about
the
broad
ecosystem
of
workloads
available
to
you.
A: Okay, great; is this live? All right. So the idea here is that we showed the platform itself, but of course a platform is nothing without an ecosystem of applications, technologies, and software running on it. The first thing is that, with the operators we talked about, there are the ones we built that manage the platform, but there are also the ones that third parties can build. You yourselves, your companies, can build operators to manage the applications that will be running on the cluster and take care of them.
A: You can take, for instance, an existing Helm chart and convert it to a Helm operator, but you can also use other technologies such as Ansible playbooks, and even programming languages like Go, to create operators that take care of your applications. We defined a maturity model: very basic operators just do installation and can be built from Helm charts, which is very simple, but then you can go into much more detail, such as day-2 operations like backup/restore, metrics, analytics, and logging, all the way up to what we call autopilot, which means the operator takes care of everything and you don't need to do anything. If you think of a support engineer, an operator is the knowledge they have about managing an application, built into software. So you can do many, many interesting things with operators, and operators are what we call first-class citizens, because they do all this. In fact, an operator is software running as a pod, in a container, a long-running process taking care of your applications.
A
That's
what
a
an
operator
is,
but
if
you
think
about
it,
okay
operators
take
care
of
my
application
who
takes
care
of
the
operators.
That's
the
role
of
the
operator
lifecycle
manager,
it's
yeah
years,
that
it
gives
you
operators
all
the
requirements,
all
the
features
that
the
cluster
has
for
the
operator
to
do
his
job
like
deployments,
rolls
permissions,
etc.
That's
what's
the
operator
lifecycle
manager
does,
and
the
best
thing
here
is
that
the
operator
lifecycle
management
you.
The
idea
is
that
you
bring
a
catalog
think
of
it
as
a
App
Store.
A
So
you
have
your
operators
in
App
Store,
you
download
it
as
such
as
the
catalog,
and
then
you
can
attach
what
we
call
subscription,
which
means
you
can
create
interesting
rules
like
I
want
this
operator
to
be
as
updated
as
possible.
So,
yes,
to
make
sure
the
applications
on
your
new
cluster
are
easily
updated
and
are
always
on
the
latest
version
being
automatically
or
you
can
configured
doing
with
with
the
authorization
of
a
cluster
administrator.
So
that's
what
the
application,
the
operator
lifecycle
management
does.
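A compressed sketch of the subscription object OLM uses; the operator, channel, and catalog names here are illustrative:

```yaml
# Sketch: subscribe a cluster to an operator from a catalog source.
# installPlanApproval controls whether updates apply automatically or
# wait for an administrator, as described above.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: couchbase-enterprise     # illustrative operator
  namespace: openshift-operators
spec:
  channel: stable                # update channel to follow
  name: couchbase-enterprise
  source: certified-operators    # catalog (the "app store") to pull from
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic # or Manual for admin approval
```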
A: Finally, another way to extend the platform is obviously creating container-based applications, and we are now offering a new possibility there. We just launched the Red Hat Universal Base Image, or UBI, which is a very small, lightweight, RHEL-based image for containers. It comes in different flavors, like for .NET, for PHP, for Node.js and others, and it obviously has all the security features and all the performance features of RHEL, but it's very small, and that's the most important idea here.
A: We also have a new CLI. I think most of you know the kubectl and oc command lines. They are great, but they are modeled around Kubernetes objects, so when a developer is doing their job, they always have to translate what a command means for their application. So we created this odo CLI, which is really focused on the operations the developer needs to do, and it's modeled in a way similar to git.
A
So
if
there
is
a
audio,
create
audio
push
audio
watch
similar
for
those
who
are
who
understands
and
works
as
a
developer
using
get
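An illustrative session; the component names are placeholders, and odo's subcommands have evolved across releases, so treat the exact invocations as a sketch:

```shell
# Sketch: the odo workflow mirrors an edit/push loop rather than
# raw Kubernetes objects.
odo create nodejs myapp     # create a Node.js component from local source
odo push                    # build and deploy the current directory
odo watch                   # re-push automatically as files change
```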
A: Another thing: although we still deliver Jenkins, and we will continue to do it, I want to pause here and ask you a question. How many of you are using Jenkins in your organizations? Raise your hand. Okay, so a lot of you. And how many of you really love working with Jenkins?
A
So
I
think
I
figured
understand
what
we've
we
come
to
this
in,
create
this
new
tact
on
pipelines.
So
again
we're
still
doing
Jenkins
it's
it's
important
that
should
keep
to
keep
evolving
Jenkins.
However,
this
new
model
called
Tecton.
It
is
based
on
cloud
native
principles,
which
means
is
that
Jenkins
was
very
centralized.
You
have
to
configure
everything
sent
in
a
central
model,
plug
plugins
and
configurations
tacked
on
is
distributed,
so
each
team
owns
its
pipeline
own.
Its
configuration
can
do
multiple
types
of
pipelines,
different
pipelines
for
each
team.
A
So
that's
the
idea,
with
we've
tacked
on
pipelines
that
we're
gonna
ship
with
webpop
and
ship
now.
A: Another really interesting thing here is Knative, an open source project. The idea is to bring serverless capabilities to OpenShift, and what I mean is that you can scale your containers down to zero. Imagine you have a container-based application and it's not running on OpenShift yet; the number of replicas is down to zero. Then comes a request.
A
It
starts
automatically
the
container
it
does.
Its
things
is
scheduled
to
many
more
replicas
if
needed,
and
then,
when
there's
nothing
else
to
do,
there's
no
more
requests.
It
goes
down
to
zero
again,
and
this
is
important
for
resource
consumption.
So
you
you're,
gonna,
use
banner
your
your
compute
resources,
your
memory,
resources,
storage,
resources
on
your
on
your
cluster,
so
this
is
something
really
interesting
that
we're
we're
developing
now
and
also
every
time
I
talk
to
developers.
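A minimal sketch of the Knative object behind that behavior; the image URL is a placeholder, and the API version shown is the one Knative Serving converged on, so older releases may use an alpha or beta group:

```yaml
# Sketch: a Knative Service. Knative creates routes and revisions for it
# and scales the pods between zero and the observed request load.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"    # allow scale to zero
    spec:
      containers:
      - image: registry.example.com/hello:latest # placeholder image
```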
A: ...this is the main thing that everyone wants to know about and talk about: the Istio project. In three to four weeks we're going to deliver OpenShift Service Mesh, which comes at no additional cost on top of OpenShift and is based on the Istio project. The idea is to create a network that connects and controls the traffic between different microservices, and the idea here is that you have control: you can define which microservices can talk to which other microservices and how this traffic will flow, plus visibility.
A
So
you
can
see
which
requests
going
to
a
different
micro
services
and
obviously
making
more
advanced
at
deployment
techniques
such
as
cannery
NAB.
So
you
can
have
multiple
versions
of
microservices
running
and
managing
the
traffic
flow
between
this.
A: These kinds of things were possible before, but they required libraries that are specific to a language, or you had to modify your application to do it. Now we're delivering this on top of OpenShift, so it's available for any kind of application.
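A sketch of the kind of Istio traffic rule involved in a canary rollout; the service and subset names are illustrative:

```yaml
# Sketch: route 90% of traffic to v1 of a service and 10% to a v2 canary,
# without touching the application code.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```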
A: One developer can even see what another is doing live in the IDE, all obviously based on containers. So, well, I think we've talked a lot about OpenShift 4, but I know that many of you are using OpenShift 3.
A: Obviously you can start installing OpenShift 4 right now; 4.1 is available. But for those who are thinking, "I want to migrate, I want to start; I have my production cluster, or my test cluster first, and I want to migrate it to 4: how do I do this?"
A
Well,
we
are
working
on
a
new
tool,
a
migration
tool.
That's
based
on
open
source
project
called
Valero,
and
the
idea
is
that
automatically
you
can
select
namespaces,
persistent
volumes
and
other
other
components
like
stateful,
SADS
deployments,
etc.
Select
then,
and
define
oh
I'm
from
version
3.7,
3.9,
3.8,
n/a
click,
and
then
it
moves
to
open
shift
for
that
you
it
does
the
migration
automatically
you
can
even
stage
it
tested
before
doing
this
is
migration,
and
it's
it's
important
to
explain.
This
is
not
an
in
cluster
upgrade
you're,
not
upgrading
an
existing
cluster.
A: We settled on this methodology, this strategy, because we believe it's much better for you to migrate between clusters. Sometimes you can even run both versions if you want, maybe production on 3, while you start developing on version 4, and when you're comfortable with all the new concepts and features, then start using version 4 and migrate the applications across. So, to close, here's a roadmap. As you can see, there's a lot going on for the next six months.
A
We
already
deliver
version
4.1
we
are
about
in
immoral,
as
some
moms
were
gonna
deliver
version
4.2
and
the
beginning
of
next
year
version
4.3.
We
are
going
to
send
this
slides
back
to
you,
so
you
can
get
into
more
details
of
each
feature,
for
there
is
plenty
for
each
version,
and
now
you
can
everything
that
we
told
you
you
can
try
now
going
to
try
that
open
ship
con.
You
can
in
a
couple
of
minutes,
instantiate
a
new
cluster
and
start
playing
with
these
new
features.
B: Awesome, thank you, Diego. All right, so on the screen you see an OpenShift 4.1 cluster. One of the things I wanted to show you is the new console. It's based on the Tectonic CoreOS console; we took that as a base and then enhanced it. One of the cool things I want to show you is cluster settings. This is where you come to do upgrades. As you see here, we have channels, and on the channels you could say: hey, I want a nightly build, pre-release, or stable.
B
Depending
on
what
type
of
cluster
you
have.
You
could
set
that
and
then
and
the
update
status.
You
have
all
the
you,
you
could
click
here
to
select
what
version
do
you
want
to
go
ahead
and
upgrade
to
you
right?
So,
let's
just
pick
one
of
these
and
click
update
and
the
update
will
start
proceeding
over
here.
You
could
also
see
below
as
the
version
history
right,
so
you
can
see.
I
started
with
a
4-1
release
candidate
and
then
went
to
official
four
one
and
four
one.
Two
now
we're
we're
upgrading
to
four
one.
Eight.
B
You
can
see
it's
downloading
the
updates
here,
so
that's
it!
That's
how
you
upgrade
an
OPA
shift
or
cluster.
The
next
thing
I
want
to
show
you
guys.
Is
the
operator
hub
to
kind
of
give
you
actual
feel
of
the
marketplace
here?
So
let's
say
we
want
to
install
all
right.
Install
the
let's
just
saw
something
else:
okay,
this
is
all
Couchbase.
B
You
come
in
here
and
sometimes
I'll
say:
there's
pre
required
stuff
here.
So
you
get
information
about
this
operator
and
all
the
supportive
features
you
go
ahead
and
click
install
now.
This
operator
gives
me
options,
it
says:
hey
do
you
want
to
install
to
all
namespaces
in
the
cluster
or
do
you
want
to
specify
a
specific
namespace
right?
I
also
have
the
ability,
if
they
have
more
than
one
channel
preview
nightly
whatever
this
was
show
up
here
as
well.
B
So
you
could
we're
giving
third-party
is
he's
an
opportunity
to
do
upgrades
just
like
we're
doing
for
the
cluster
for
their
applications
running
on
open
shift,
and
then
you
get
the
approval
strategy.
Do
you
want
this
to
automatically
upgrade
or
do
you
want
the
admin
to
come
and
manually
update
this
right?
So
let's
just
subscribe
to
this
and,
as
you
see
like
it'll
start
the
upgrade
process,
as
is
going.
Let's
quickly
go
back
to
the
cluster
update
and,
as
you
see
it
says,
working
towards
401
813
percent
complete.
B
If
we
go
to
the
cluster
operator
page.
These
are
all
the
operators
that
we
have
here
and
slowly
once
the
the
images
are
all
downloaded.
You'll
see
this
getting
updated
and
in
live
and
allied
method
as
well.
Okay,
so
we're
going
back
to
operator
hub
I
won't
actually
show
you
guys
now
installed
operators.
This
is
the
list
of
installed
operators
you
have
and
you
could
go
ahead
and
space
us
set
it
for
namespace.
So,
if
I
go
to
all
projects,
we're
gonna
see
something
interesting
here.
B
Thing
I
wanted
to
show
you
guys
is
the
brand
new
for
to
cluster.
We
have
a
brand
new
dashboard
here.
We
have
a
lot
of
new
great
functionality,
I
kind
of
wanted
to
show
this
real
quick.
We
have
this
new
thing
called
API
Explorer,
so
for
every
API
or
kubernetes
resource
available
you
can
now
select
it.
You
could
see
some
details
about
it.
B
You
could
see
the
schema
and
you
could
actually
drill
down
on
the
schema
to
find
out
what
the
different
values
are,
and
you
can
see
all
the
instances
of
that
type
of
object
in
your
cluster.
So
one
of
our
goals
for
the
open
ship
plus
a
console,
is
to
educate
admins
and
to
give
them
as
much
visibility
into
the
into
the
cluster
and
every
release
where
we're
adding
functionalities
and
futures
to
to
make
it
easier
and
to
make
it
more
accessible
for
everybody
awesome.
Well,
thank
you.
So
much
yeah.