From YouTube: Red Hat OpenShift 4 Release Update, Clayton Coleman, Derek Carr, Mike Barrett, OpenShift Commons 2019
Description
Red Hat OpenShift 4 Release Update with Clayton Coleman, Derek Carr, and Mike Barrett at the OpenShift Commons Gathering at Red Hat Summit 2019
A
So next up we have three of my favorite people, Clayton, Derek, and Mike, who are going to come and tell us everything we need to know about OpenShift 4.0. So come on up and unplug my laptop; that'll take a minute or two and we'll do this. And while they're doing all the shuffling here: the way that today works is there won't be a lot of time for Q&A.
C
That's true. It's actually funny, because Jim just stabbed us in the back up there, because what we've been trying to build is something that is easy to use and works well together. But Jim told us that you guys want a box of Legos, and so we're going to take this away. We're canceling OpenShift 4; we're gonna go back to OpenShift 3, we're just gonna do it for a while. So this is a product manager's dream, right? We have Kubernetes steering committee members up here, there are SIG chairs, there are lead architects; I could just go home and these guys could take care of the rest of the session. But we sat down, and in the next slide, we thought about what we could tell you that you're not going to hear this week. And this is probably the best Summit for OpenShift that we've ever had, and that's hard to say, because every year it gets bigger and bigger. You're gonna hear unbelievable success stories from just about every industry out there, so what could we top that with? What could we say? And what we came up with was the inside scoop. We're gonna tell some stories about some of the things that were hard to solve. We cut it down to four things that were hard to solve in OpenShift 4, and we're gonna pull back the curtain and tell you some of the miseries.
C
So let's start from the beginning: let's start with install and upgrade. Now, when we looked at installation and upgrades, you would think that all the problems were in the install and the upgrade, but 70% of them came from other parts of the product, right? Because when you're upgrading, you notice some weird thing in the kubelet went wrong, or some weird thing in the storage went wrong, or some weird thing in the operating system went wrong. So how did you guys attack that? How did you approach solving that problem?
B
A reference architecture is a great example of something where we take somebody who really knows the problem, and they know what customers are doing. They know, "I need to secure things this way, and I need to do this in order to meet this requirement." They know the platform that they're installing on top of. And we said, well, instead of giving you a reference architecture, why don't we just bake that in and ship the reference architecture? Makes sense.
B
Well, while Derek's doing this, I have a question: how many people were at KubeCon this year in Seattle, and how many people saw the demo that we did then? Okay, there's a good number. Not much has changed, because again, the experience is about trying to take most of those choices and boil them down to what matters. So in this opinionated reference architecture, which is something we'll get into a little bit more, we're just trying to focus on the choices that actually matter for standing up a cluster, and on what happens once you've installed it.
D
So you saw I answered about five questions, and the wizard is gonna give me a default cluster within a particular cloud. It set up everything spread across zones as expected, and you have a default set of workers. In the interest of time, I guess we'll move on to once your cluster is up. Our next problem, Mike, was trying to figure out how to maintain that cluster reliably, so let's talk about what we hit with that.
B
And reliability is really important, so you'll hear us talk this week a lot about operators. When we acquired CoreOS last year, a lot of smart folks on the CoreOS team had been taking the lessons that Kubernetes had learned, the patterns and the possibilities that are available in Kubernetes. You know, it's infrastructure for running applications. I think that's the unique thing about what we're trying to do in the Kubernetes community: it's the first opportunity we really have to have an application-focused infrastructure. It's incredibly flexible: it can talk to infrastructure, can abstract these things, can give you HA out of the box, lets you manage rolling deployments. And so the next question is, well, if we've been building this platform that's so great at running software, why don't we actually use that platform to run the software that runs the higher-level services?
D
So what you see here is our console in the 4.x experience. Traditionally, people would install a Kubernetes cluster, punch some monitoring onto it, and then hope it stays up and hope your monitoring solution tells you when something went wrong. And so we wanted to level that up a little bit. Kubernetes is a very big surface area, especially when you want to add the necessary components to actually make it a useful product in production.
B
You want authentication and wildcard ingress, and actually, what we've seen over the last five years since we've been involved in Kubernetes is that Mike never stops saying, "just one more thing." We just get one more thing, and every time we had one of those one more things, it was an upstream project, it's innovation, it's people coming up with ideas, but we have to productize it. We have to think about security, we have to make sure that it's reliable, we have to test it, and each one of those steps is about: OK, well, let's take this bit of technology, run it on Kubernetes, make sure that we can keep it up to date and keep it secure, and over time, maybe it's replaced by something new and shiny, so we have to be able to pull that out. And so the operator pattern, in a lot of respects, is just a fancy way of saying we want to boil all of that information about how to run the software into the platform we ship, just like the reference architecture.
D
Just to touch on this a little bit more: each of our operators is running on the platform, managing the platform. As an admin, you can come in and, aside from the monitoring view, get a view of everything that's in the cluster: whether things are going well, what versions are installed, whether there are any problems. And this is really only useful if it does something for you in response to a bad thing happening. Like, does it rotate your certs?
D
Shoot, that was the DNS; that's not good! So let's see if our operators fix this. Typically an admin is working late at night trying to figure out what's going wrong with the cluster, and now they would have been getting paged. So the DNS service is reporting it's degraded, and I'm getting woken up in the middle of the night. But what happened? It looks like our operator kept our cluster alive. My web UI is still running; no problem.
B
We didn't try that, like, 50 or 60 times to make sure it actually works, I promise. But this is the idea, right: all of this is just config. It's true Kubernetes applications. And that kind of leads into the question of, well, you have an operator and you need to make sure it's there. Who guards the guards, right? Who watches the Watchmen?
D
You take their knowledge and say, you know, here is how best to operate your software. But as Clayton said, you need something to roll that all together, to bring the cluster along as an atomic whole, and to know that when we're shipping all these operators in one package, we at Red Hat have certified and tested them and made sure that you can trust them as a unit. So here's what we have new in 4.0, if I go to the next view. Let's...
B
It's recorded, I promise. So the story here is that Kubernetes is still new, right? We've been doing operations and IT management for a very long time, and the automation that we use keeps improving. A long time ago we didn't have configuration management, and then we added these tools that said: well, we can do this thing over and over for you, as long as you write it down once, and we can come back in. And there's all sorts of wrinkles in that, right? You drill down, and you say, well, maybe somebody got onto the server and changed something, and so this time the configuration-management script doesn't actually work. So you fix that bug and move on, and we've built better and better and better abstractions. We know what works and what doesn't. A lot of what is applied here is taking those lessons from operating at scale over the last 30 or 40 years and boiling them down to their essence. And it's still hard, right? You still have to write some code.
B
So if you mess it up, it happens right away, and that makes it easier to test, makes it easier to validate. Those configs are not much more complex than the config that we've been doing for 10 or 15 years, but we're doing it all the time, and so problems show up fast. It's like backups. This is one of my favorite sayings: if you've never tested your restore, you don't have a backup system. You just have a very expensive tape array. And that is what we've tried to do with operators, getting teams to be able to have that code. It was a mind shift, but it was also something that they were ready to do, and I think a lot of people in the community are ready to do that as well.
D
So, as you saw, there are about 25 out-of-the-box operators managing key aspects of the platform here, and we needed a top-level operator to ensure all those 25 children in the product are working as intended. And so we have the concept of a cluster version operator. It's the watcher of all the watchmen, and it's running on your cluster as the root thing, and as Clayton said, it's just doing a constant config-loop check to ask: is the desired state of the cluster as I need it? Are all my operators at the version they're expected to be? If I had gone and deleted my DNS operator, that operator would have been restored within five minutes or so. It's a pretty quick, constant iteration loop to make sure your cluster is as intended and an admin didn't make a late-night error trying to restore this thing.
B
And to add to that: this was an idea that the CoreOS teams on Tectonic had been working with for a while. It's kind of the "how do we build the simplest possible thing that could actually work?" And the config loop, the dumb config loop of "go in, update, and make sure that something's there," works really well with operators when you have a top-level operator talking to second-level operators, because you have a really simple problem: make sure every operator is up to date, and each of those operators says, okay, make sure that my application is up to date.
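The two-level loop Clayton describes can be sketched in a few lines of Python. This is a toy illustration, not OpenShift's actual cluster version operator code; the component names and versions below are made up:

```python
# Illustrative sketch of the two-level "dumb config loop" described
# above: a top-level operator repeatedly checks that every child
# operator is at the desired version, and restores any that have
# drifted or been deleted. Names and versions are invented.

DESIRED = {"dns": "4.0.1", "ingress": "4.0.1", "monitoring": "4.0.1"}

def reconcile(actual: dict) -> list:
    """One pass of the loop: converge actual state toward DESIRED.
    Returns the corrective actions taken as (name, had, want) tuples."""
    actions = []
    for name, want in DESIRED.items():
        if actual.get(name) != want:
            # A real operator would apply manifests here; we just
            # record that the child was brought back to spec.
            actions.append((name, actual.get(name), want))
            actual[name] = want
    return actions

# Simulate the demo: the DNS operator was deleted, and the next
# loop iteration restores it.
state = {"ingress": "4.0.1", "monitoring": "4.0.1"}
print(reconcile(state))  # [('dns', None, '4.0.1')]
```

Running the loop again on the converged state takes no actions, which is the point: the loop is level-triggered and safe to repeat forever.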
B
We'll have more info about this later: we wanted to make pre-release and nightly releases easier to test, because this is a really important discussion that we've had. It's not enough for the software to work when we test it; it needs to work when people install it. And Kubernetes has always had kind of an evolving and quick-paced nature. You know what's coming, but you may not be ready to update, and Red Hat is really good at delivering production updates to systems, and has been for a very long time. That sort of flexibility is something that we want to offer to customers, so that you can be part of our testing process as well, and we can take feedback as the software becomes available.
D
It's a little anxiety-inducing to upgrade a Kubernetes cluster live in front of a thousand people, so hopefully this works, but the experience is pretty simple. An admin can come in, and there's a command line to do the same thing. I can see that there are new versions of the software available; I will check and pull our May 5th nightly. And Clayton, you forgot the signing.
B
You know, the software supply chain: supply chain attacks are on the rise, and at Red Hat, a key part of our DNA is how we release software to the world and keep it trusted. So what you're seeing Derek doing here is something that you should never do, ever, anyone. There's a built-in check that says, "is this content signed by the appropriate party?", and Derek is overriding that and completely voiding his warranty, and we won't hold him accountable for that.
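The built-in check being overridden here can be illustrated with a toy sketch: accept an update only if its payload digest is on a trusted list, unless the admin explicitly forces past the check. This is a simplified stand-in; the real product verifies cryptographic signatures against Red Hat's keys, and the function and payloads below are hypothetical:

```python
# Toy sketch of the update-verification check described above. Real
# OpenShift verifies release-image signatures; this simplified version
# only compares SHA-256 digests against a trusted set.

import hashlib

def verify_release(payload: bytes, trusted: set, force: bool = False) -> bool:
    """Allow the update only if the payload digest is trusted,
    or the admin forces past the check (voiding the warranty)."""
    digest = hashlib.sha256(payload).hexdigest()
    if digest in trusted:
        return True
    if force:
        print("WARNING: applying unverified release; warranty void")
        return True
    raise ValueError("release is not signed by the appropriate party")

release = b"release-image-bytes"
trusted = {hashlib.sha256(release).hexdigest()}
verify_release(release, trusted)                         # signed: accepted
verify_release(b"may-5th-nightly", trusted, force=True)  # forced past the check
```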
D
The operator is going to pull down that image from the registry, and then from that image another set of manifests to apply to the cluster to drive the upgrade. In the interest of time we'll let this upgrade proceed, but if you were to watch it happen, once it finishes downloading the release image you'd see each of these individual operators start to report that they're upgrading, and their version will change to the new version. And so we're hoping that this provides an easy, one-click, trusted update experience. Nice.
C
A couple times; maybe just the first couple times. So, skipping ahead, we should probably move on from operators. Let's talk about something that probably not a lot of people knew was even being engineered or touched. When you all out there used the OpenShift beta, part of that beta was a lot of code work that these engineering teams did on getting their CI/CD processes down, sort of revamping and re-energizing them, but also building a back-end service that would allow this upgrade technology to really work.
B
You've scared Derek; he doesn't like going off script. So a big part of the evolution of software, and this is something I think everybody here knows, is that as we deploy more and more complex systems, we test those in the real world, do that repeatedly, and tie those tests into our software delivery processes through code automation and test automation. The OpenShift team has been very serious about this: if we want to be able to build and deploy software, we should do it the same way that anybody else might build and deploy software, because OpenShift is just a bunch of container images, and allegedly, I've heard, OpenShift is a pretty decent place to run container images and to build them. And so we spent a lot of time building out infrastructure that ensured we launch, build, and deliver. I think we built something like four or five thousand images a day on top of an OpenShift cluster.
B
That was actually a great example of: well, okay, you're hitting a rate limit; we're not going to test less, we're going to force you to go make the software more parsimonious about how it uses the cloud. And so that's sort of using what we build, and using the software that we expect you to build on. It helped us find and encounter those scenarios earlier.
D
Mike's going a little off script and he's throwing me off, but to drive this point home, let's talk about what the OpenShift development organization has been doing. So this cluster's up, and as Clayton said, we run thousands and thousands of tests and simulated upgrades every day, all the time, on every PR.
D
Basically, in a pristine testing environment you might get a false signal about the actual health of those clusters in customer environments. So one of the things we've been trying to do is get some telemetry data, optionally, sent back to Red Hat, so we can understand the information that we need to better debug a customer environment without having to go and directly ask: how big is your etcd? How big is your cluster?
D
What are the problems that you're having? And so one of the things, if you opt in to that telemetry setup, is you can take your cluster ID, and I will copy that. And here (this is cloud.redhat.com, and hopefully it loads), when I log in with my Red Hat account, I have a new thing here that lets me see my Red Hat OpenShift Cluster Manager, and I'm currently associated with the OpenShift development organization.
D
So, as Clayton said, we spin up a lot of clusters, and based on the state of those clusters being up and down, there are about nine hundred right now. Let's find the one I created right here; this is our live Commons presentation cluster. And so I can get an aggregate view of all the clusters associated with my organization, whether they're healthy or unhealthy, what their sizes are, and perform some actions on them remotely.
B
We talked about pre-release and nightly channels: getting an early signal from customers and users, people who are evaluating OpenShift, people who are hobbyists. That's actually really important for us, because Kubernetes has been a very big and open community, and there's a lot of great CI work that goes into testing the core; but for us it was about, well, how do we test everything as a unit? And when we apply an upgrade, you know, Derek talked about it: well, are you sure that you really want to push that Update button? We want to be really sure, and so we have a number of patterns and tools and services that we're trying to build, so that customers and partners and developers can make use of them and become part of that early testing fleet. That gives us data that allows us to say: okay, well, before we start suggesting updates to production servers, maybe we should make sure that all the development clusters updated successfully. And this kind of relationship between Red Hat and our customers is very important to us. Obviously, we want to earn and deserve your trust, and so it's very important that we go through this in a very deliberate way. But building on the ideas of Red Hat Insights, and looking at how we can make sure that the software we give to you is working as we intended, informs our support relationship with you.
C
There's
there's
two
things
happening
there
right,
there's
a
SAS,
almost
relationship
that
helps
the
upgrade
become
more
intelligent,
but
there's
also
a
portal
that
you
just
snuck
in
here.
It's
so
beautiful
by
the
way-
and
it's
showing
me
all
my
open
ship
clusters
all
over
the
world,
like
the
ones
I
have
on
dedicated
the
ones
I
have
on
as
the
ones
I
have
anywhere
yep
they're
all
right
here
for.
B
Yeah, and again, I think this really gets to the heart of software. Software is changing; we want to make sure that we change with software. And Jim alluded to this: the idea that software is this packaged bundle that you take and run in this very disconnected environment. That model isn't completely broken, right; there are still reasons that you run disconnected, that you run in air-gapped environments. But I also think that the rate of change in software is increasing to the point where the software becomes more complex and security becomes ever more important. It's not enough anymore to just rely on static network defenses, or to ensure that a system isn't changed. We need to be able to change with the software. When critical security updates to fundamental aspects of every bit of software we run are coming out multiple times a year, those systems need to be updated, and that's something Red Hat has long experience with. If you trust us to run the update, you need to be able to...
B
This is a very simple system. We call it Cincinnati because Mike's from Cincinnati, and we like to say that Mike is kind of the one who drags us in a particular direction. And so Cincinnati offers suggestions about what versions to upgrade to, and those suggestions are based on the best knowledge that we have. But it is a very simple service that could be run in an offline environment.
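The shape of that service can be sketched as a small update graph: releases are nodes, supported upgrades are directed edges, and the service answers "what may I upgrade to from here?" The versions and edges below are invented for illustration, not Red Hat's actual graph data:

```python
# Minimal sketch of an update-graph service in the spirit of the
# Cincinnati system described above. Releases are nodes; a directed
# edge means "this upgrade path is supported." Versions are made up.

UPDATE_GRAPH = {
    "4.0.0": ["4.0.1", "4.0.2"],
    "4.0.1": ["4.0.2"],
    "4.0.2": [],
}

def suggested_updates(current: str) -> list:
    """Versions the graph recommends moving to from `current`."""
    return UPDATE_GRAPH.get(current, [])

print(suggested_updates("4.0.0"))  # ['4.0.1', '4.0.2']
print(suggested_updates("4.0.2"))  # []
```

Because the data is just a static graph, the same service can be served from a file in an offline or air-gapped environment, which is the property Clayton highlights.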
D
So that doesn't include DNS, that doesn't include networking, that doesn't include monitoring. And so we thought about what we could do to make the 4.x experience better. We kind of had this tension within the development organization between product configuration versus project configuration, and so one of the things that you'll see new in 4.0...
B
To drill into this a little bit more: this is the operator pattern, right? The idea that we have these things on the cluster that can react to configuration; you want it to be driven by a config loop. These API objects are just like any other Kubernetes API object: if you want to store them in a git repo and then periodically apply them, they'll work just as well. That idea of being able to change the configuration of the cluster after the fact was actually surprisingly difficult for a lot of the engineers on the team. They were like, "What? I can't just add a new flag to Ansible at the beginning of the install? We only have 2,400 flags, what's another one?" And we actually had a very long discussion, and I wrote a number of manifestos which were along the lines of: make the minimum number of choices on day one, because we want to ensure that day two is always there, always able to be flexible. If you want to turn off a component, if you want to turn on a component, if you want to install a new operator, if you want to install a custom network SDN. Now, there will always be some changes that are just gonna be hard, right? Going from one cloud provider to another on an existing cluster gets a little crazy, although Mike will probably ask us to do that at some point in the future.
D
So as an example here: a lot of people, immediately after they install an OpenShift cluster, set up authentication, so we wanted that experience to be really simple. What this is driving is just Kube resources in the backend, so while you see me doing it in a UI, which is really nice in a demo, you could just kubectl apply this config on your cluster.
D
You have the traditional idea of Kubernetes nodes, and so this is an HA setup, and if I go and look at one of these nodes, what is a little different here now, as you can see, is that it's running RHEL CoreOS. And so this is the integration from the CoreOS team and our RHEL team, on trying to make sure that OpenShift 4 can run really awesome on an immutable operating system.
B
And the immutability is actually a really key part. There's always been this tension around systems management, which is: computers are big, complex things, and we've gotten used to thinking of them as this big complex thing that you put a bunch of changes onto and walk away. A lot of the advantages, operationally, that people have seen over the last five to ten years have come from taking a step back and saying: I want to treat these like a black box. And RHEL CoreOS is actually a culmination of that. It takes the RHEL kernel, takes the RHEL user space; our upcoming RHEL 8 version is what RHEL CoreOS is based on, so it's really just as much RHEL as the version that you would install directly from an ISO. And the idea is that it's built to be driven by the cluster. It's part of the cluster. And we got thirty minutes into this talk without even talking about a host once, and that's how we want everyone in the audience to be able to think about managing machines: managing machines should be easy. The combination of these two things, immutability being driven from the cluster and being a very simple touch point, is the heart of OpenShift 4.
D
And then what's gonna happen here is that the machine resource, which is very scoped to just describing the computer you want to create in the cloud, is going to boot with RHEL CoreOS. That RHEL CoreOS server is going to call home to an Ignition server in our cluster that knows how to configure that piece of compute, and so we have master and worker machine config pools; that machine will join as a worker, and then once that machine boots and gets the software applied, you'll suddenly see it.
B
And unfortunately, since the infrastructure is still kind of slow, it's gonna take five or six minutes. I mean, that's why everybody uses containers, right? But this idea of these machines being managed by the cluster: we wanted that to work on bare metal as well, so stay tuned for some exciting announcements this week. We also wanted it to be easy to upgrade any server, and RHEL CoreOS actually gives us that nice ability, which is that we can do in-place updates very easily. So you don't have to add new machines and take the old ones out of rotation; we can do updates in place. And in both of those cases, whether it's in the cloud, in a private data center, or just a couple of boxes sitting underneath someone's desk, it updates the same way.
C
We
don't
want
to
delay
mass,
we
don't
want
to
stop
NASA
come
on
so
real
quickly.
There.
You
talked
a
lot
about
install
operating
system,
automation.
You
talked
about
operators
and
baking
in
best
practices
into
the
software,
codifying
it
what
about
the
users?
What
about
the
people
who
use
the
platform?
What
did
we
do
for
them?
Yeah.
D
So
Diane
talked
about
operator
hub
in
the
brief
intro
and
operator
hub
is
now
exposed
inside
the
cluster
under
our
catalog,
and
so
as
a
as
an
admin
to
the
cluster.
I
can
expose
operators
that
I
want
to
make
available
to
my
users,
and
so
this
is
the
idea
of
what
we
like
a
lot
about.
The
cloud
is
that
a
resource
is
just
an
API
away.
B
There's
gonna
be
a
lot
of
talks
this
week
about
operators
and
partners
and
ISPs
you've
integrated
we've
operator
hub
for
us.
Well,
not
everybody
might
want
to
write
an
operator.
You
know
we're
not
taking
away
any
of
the
flexibility
or
power
kubernetes.
We
think
that
the
operator
pattern
you
know,
however,
you
implement.
It
is
a
great
way
to
boil
down
the
complexities
of
managing
and
running
software
into.
What
do
you
want?
You
know
saying
what
you
wanted,
then
letting
the
system
take
care
of
that,
for
you,
that's
a
key
part
of
you
know.
B
D
It's
hard
to
look
at
that
wall
of
icons
on
a
really
big
display
right
now,
but
just
as
an
example,
if
I
wanted
to
offer
Kafka
to
my
users
on
the
cluster
as
an
admin,
I
can
install
this
operator
and
similar
to
how
the
actual
cluster
itself
can
be
subscribed
to
an
update,
channel
and
optionally
choose
when
and
how
you
want
to
update
it.
The
same
thing
goes
for
the
end:
user
content
operators,
and
so
here,
as
an
admin,
I'm
just
gonna
be
bold
and
automatically.
D
And I will switch over to my default cluster. Imagine that I am working on an application, as an example. In our normal end-user developer catalog you now get a mix of templates, as you traditionally had in OpenShift in the past, as well as new operators. And so as that operator I just installed on the cluster became available, you'll see that I now have new options associated with the existing operators my cluster has been allowed to install.
D
Okay, I want a Kafka cluster in my namespace; I will create it. And what we've done is taken the intelligence of the teams that know how to best administer Kafka, in the same way that we feel we can do a decent job at the underlying Kube platform, so a user can just have a Kafka cluster an API call away. And so I will create this; it's a little big. And what we'll see here is that I now have a Kafka resource in my cluster, just like you could have pods and deployments and daemon sets, and here I have my Kafka cluster. Without needing to have a PhD in Kubernetes, I can see that the operator is creating a ton of services, stateful sets, pods, and secrets to manage it, and once this rolls out, the operators can ensure that this thing is always running.
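The pattern in this demo, one high-level custom resource fanning out into the low-level objects an operator manages, can be sketched like this. The child-object names below are hypothetical, not what the actual Kafka operator creates:

```python
# Toy sketch of what an operator like the Kafka one in the demo does:
# expand a single high-level custom resource into the lower-level
# Kubernetes objects (stateful sets, services, secrets) it manages.
# Object names here are invented for illustration.

def expand_kafka_cr(cr: dict) -> list:
    """Turn a high-level Kafka custom resource into child objects."""
    name = cr["name"]
    return [
        {"kind": "StatefulSet", "name": f"{name}-kafka",
         "replicas": cr["replicas"]},
        {"kind": "Service", "name": f"{name}-bootstrap"},
        {"kind": "Secret", "name": f"{name}-cluster-ca"},
    ]

children = expand_kafka_cr({"name": "my-cluster", "replicas": 3})
print([c["kind"] for c in children])  # ['StatefulSet', 'Service', 'Secret']
```

The user only touches the one high-level resource; the operator owns the fan-out and keeps the children reconciled, which is why "a Kafka cluster is an API call away."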
C
Awesome. So with that, I think we're out of time. On the 4.0 platform, you gentlemen made some really bold design choices, and I haven't seen that sort of courage since the 3.0 release went out. This community built some amazing things off of 3.0, and we're all looking forward to what you're gonna build off of the 4.0. Thanks, everybody.