From YouTube: OpenShift 4.0 - Features, Functions, Future at OpenShift Commons Gathering Seattle 2018
Description
OpenShift 4.0 - Features, Functions, Future, Clayton Coleman, Mike Barrett and Derek Carr at OpenShift Commons Gathering Seattle 2018
https://commons.openshift.org/gatherings/Seattle_2018.html
B: So we had a talk scheduled that was features, functions, future — kind of the same one we always do. We decided to blow that up and do something different. So what we're here today to do is — well, Chris stole all of my architect thunder, so I had all these great slides and I was like, Chris did them all, I'm gonna do something different. And so what we did is we're gonna do a demo, live on stage, on terrible conference Wi-Fi. Although you have a hard line — yep — okay, hope it works.
A: There's a personal memory around this, so definitely bring it back to yourself.

B: As I said, we're gonna show it to you, and we're gonna get through these slides as quickly as we can. But what built it — why did we make the decisions we made? Was it the over a thousand customers that we now have on the Kubernetes platform that we know as OpenShift? The next slide, or the next — or was it the 32,000 support issues that we took from the customer base?
B: So again, we're gonna go through the best of these slides — I have a whole bunch of talking points. Honestly, it doesn't matter: if you're here and you don't believe in Kubernetes, I don't know why you're here, other than to just be a hater or something like that. So, Kubernetes was a big multiplier for application operations. You know, we talked about serverless, we talked about DevOps — we talked about all of these things.
B: The end goal is to change how we do software — to make it easier, to be faster, to be more responsive. As Chris pointed out, it's to make things automated. So Kubernetes has evolved, and OpenShift has evolved with it. It's been about building applications and running applications. It's not about computers, right? Computers are just a thing.
B: It's like the electricity that runs through the wires. It's about software, and software is what we care about and what we're trying to help build, because software runs everything, right? I'm pretty sure that none of us would have made it here without software. We probably wouldn't be able to get in the door — the doors are probably tied up to some crappy industrial IoT thing that would have just failed and locked us out of the conference center.
B: So if Kubernetes was a multiplier for applications, our goal with OpenShift 4 is to be a multiplier for Kubernetes. And so, continuing — you know, we were involved very early in the community. It was about stabilizing Kubernetes, making it enterprise, which various people will tell you is the worst thing in the world — to make something enterprise — which means it actually works, and it's probably secure.
B: Although, last week's CVE — you know, even then, this is the reality of the world we live in, right? There's so much software. Maybe this little tidbit: I'm gonna say we may never be safe again unless we ourselves internalize this — we have to get a lot better at patching and rolling out software updates, and that's part of that CoreOS DNA.
A: You know, we brought CoreOS in, and we released Container Linux at Summit as Red Hat CoreOS. It really is a foundational component of this new stack, in that it allows us to have an immutable Linux, an immutable infrastructure, that's extremely lightweight. It means no more snowflakes. It means that we can leverage a more impactful orchestration that we were sort of blind to before, and we're gonna see that today in the demo.
B: So Chris really hit on automated operations, but I don't know that I would have picked such a dystopic picture to put up there — refineries are often like the bad things we talk about. But when we talk about operators, automated operations: anybody in this room who believes that we won't need to operate software, please raise your hands. Yeah, didn't think so. So, going forward, software has to be run and managed.
B
What's
going
to
do
it
well,
we
can
do
it
by
hand,
which
is
what
we've
been
doing
pretty
much
since
the
beginning,
or
we
can
get
better
at
automating
it
which
we've
been
doing
since
the
beginning.
So
this
is
just
a
natural
evolution
of
where
we're
going
core
OS
could
give
it
a
really
good
name.
Thank
you
to
Reza
and
Brandon,
and
and
Alex
like
the
the
name
of
an
operator.
Is
it's
about
people
and
it's
about
computers
working
together,
it's
not
just
one
or
the
other,
so
that
was
all
the
slides.
B: So now, drumroll please! For the first time ever — yeah, yeah — we're going to do a live demo on stage of OpenShift 4 and all of the things we talked about over the last year at Summit and in the open source communities. We're going to show you how we're going to take these pieces and put them together, and it's probably going to break along the way. But that's why we do live demos, because it's exciting.
C: So the new install is designed to help users from a wide range of skill levels, from the novice user all the way up to the advanced user. What I'm gonna walk through here is the very novice user, and then we'll talk about the advanced user afterwards. So we have a new tool, openshift-install, and it's got a few commands: you can create a cluster, you can destroy clusters, you can find information — dependency graphs — about how this installer works. Dependency graphs, really — that's because some of our friends at CoreOS love graphs.
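The subcommands mentioned can be sketched roughly like this; these are cluster-provisioning CLI invocations (they need cloud credentials to actually run), and exact flags and output vary by installer version, so treat this as an illustration rather than a reference:

```shell
# Create a cluster interactively (the wizard asks for platform, region, base domain, ...)
openshift-install create cluster

# Tear down everything the installer provisioned
openshift-install destroy cluster

# Emit the installer's internal dependency graph, for the graph lovers
openshift-install graph
```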
C: This is actually installing in our developer accounts, and hopefully I don't hit quota issues. So — this is going to be awesome and this is going to go great — and like that, that's all it did, right? So now the cluster, in a few minutes, is going to start provisioning. Basically, it's going to pick some appropriate defaults and start provisioning infrastructure and create the cluster on my behalf.
B: A little color: you're gonna see it used Terraform to create this cluster. That message is gonna go away. It's not that we don't actually care about Terraform; it's just a detail, right? We're trying to focus this experience, what we're showing, on ensuring that you succeed in installing a cluster every time, and some of the choices that you may be used to historically are maybe things that you shouldn't do right up front — you should do them later. So we'll get to that as we go, right?
C: So it has a series of targets that get executed in a particular graph flow, and depending on where the operator wants to come in and tweak their install, they can do just that. So, rather than just go direct to create a cluster, let's look at the other things that we can create. That wizard you saw, where it asked me a few basic questions to figure out, you know, where I wanted to install the cluster — that's creating what we call our install config.
C: Okay, so rather than what you saw previously — where I answered my questions in the wizard and out came a cluster being cooked — this didn't go and cook a cluster yet. What it instead did was go and create an install-config file, and I'm gonna abridge what I show here so you don't see some data. This is going on the internet; it's a secret.
C: Yes — so this install-config file is just a YAML file, right? The information that was collected in that wizard you'll see here, as well as some initial defaults that were chosen. So, behind the scenes, because I chose to install in AWS in a particular region, it's going to create me three masters, and by default the installer is going to provision those masters in the set of availability zones exposed in that region, and then the default cluster size for your worker node pool is three replicas.
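An install-config along the lines described might look roughly like this. This is a sketch, not the demo's actual file: the field names follow the shape the installer settled on in later 4.x releases, and every value here is a placeholder.

```yaml
apiVersion: v1
baseDomain: example.com          # placeholder base domain
metadata:
  name: demo-cluster             # placeholder cluster name
controlPlane:
  name: master
  replicas: 3                    # three masters, spread across the region's availability zones
compute:
- name: worker
  replicas: 3                    # the default worker pool size mentioned in the talk
platform:
  aws:
    region: us-east-1            # the region chosen in the wizard (placeholder)
pullSecret: '...'                # redacted, as in the demo
```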
C
You
know
types
for
some
other
set
of
machine
platforms
instead
of
just
master
and
worker
I
could
set
some
default
sizes
for
them,
so
I,
don't
just
have
three
workers:
I
could
have
ten
twenty
to
thirty
and
then,
depending
on
the
platform
that
I
install
in
this
platform
section
could
let
me
choose
more
particular
metadata
about
that
host
environment
uninstalling,
for
example
the
instance
types
that
might
be
used
so
once
I
tweak
this
install
config
and
get
it
as
I
want.
The
next
step
in
the
install
flow
is
to
create
some
kubernetes
manifests.
I
would.
B
Also
say
at
this
point,
like
a
there's,
a
lot
of
people
who
look
at
this
and
they
say
wow,
there's
like
seven
thousand
other
things
that
I
also
configure
part
of
what
we're
what
our
goal
is,
is
many
of
those
seven
thousand
other
things
actually
aren't
things
that
you
should
have
to
deal
with.
Sometimes
they
are
sometimes
they're
things
we
want
to
tweak,
and
you
know
we
we
work
with
a
lot
of
sophisticated
customers
who
need
tuning,
parameter,
X
and
tuning
parameter
Y,
and
they
need
to
drill
down
into
these
things.
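The staged flow being described — stop at each artifact, tweak it, then continue — can be sketched as a sequence of installer targets (subcommand names as in the 4.x installer; this is a CLI sketch, not runnable without cloud credentials):

```shell
openshift-install create install-config    # writes install-config.yaml; edit it here
openshift-install create manifests         # renders plain Kubernetes manifests; tweak if needed
openshift-install create ignition-configs  # produces the bootstrap/master/worker Ignition files
openshift-install create cluster           # consumes whatever already exists and finishes the job
```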
C: And so in part two we'll talk through some of that day-two operational stuff. So a lot of people might be familiar with how the Tectonic and CoreOS distribution worked, past and present: they basically self-host a Kubernetes control plane. In OpenShift 4 we're doing the same basic actions, but we self-host a little differently. The way the installer works is it takes that install config and then creates some Kubernetes manifests...
C: ...that say how the cluster should be applied. So it read that install-config file and has now created some manifests that are literally just regular old Kubernetes manifests, in directories. So I have some manifests for some default namespaces that will appear in the cluster, default manifests for some config maps, and various information like that, and I could come in after the fact and tweak some of these things if I wanted to, to change how the default Kube manifests work for the corresponding install-config plan.
C: Now, the final piece of how the install works: once you have that install config — which has now been translated into Kubernetes artifacts, which are just deployments and secrets and namespaces, etc. — the last step is to create a set of Ignition configurations. So, if you are familiar with Ignition, can you raise your hand in the room? Okay, so not everyone is familiar with Ignition.
C: Remember, the goal here at the end of the day is to get a cluster that's running an immutable operating system. So, for Red Hat CoreOS, you need to be able to tell it how to configure that operating system on first boot, and it collects that configuration from Ignition. So basically, in that previous cooking show, when I created a cluster, that wizard went right through the defaults and created some initial Ignition configurations that tell the machines how to bootstrap the cluster.
C: So the installer here, at the end of the day, to bootstrap the install process, is basically taking these three Ignition files that tell the various nodes in the cluster how to boot and how to configure themselves automatically. Because we run a self-hosted control plane, you kind of have a chicken-and-egg scenario, right? We need a Kubernetes cluster in order to run a Kubernetes cluster.
B: Another thing is, you know, the point of all of this is that once you have this running, you don't have to go back and reinstall it. And so a lot of what we're trying to do is to build in that idea of: you can change it live. This was, I think, something that Tectonic did really well. If you can change it live, you can recover from issues.
B: You can fix problems, you have feedback, you don't have to go reinstall those clusters. And while there are a lot of people out there who would love to have immutable clusters, I think, based on the evidence, a lot of us are gonna have clusters that we're gonna want to run for 5 or 10 or 15 years. So some of this is building in, early, the foundation of: if you can't manage it on the cluster, if you can't change it after the fact, it's not worth doing. Yep.
C: So, just to demystify Ignition: it's just serving a JSON payload that's read by the machine on early boot, and if you look through any Ignition file, it's basically saying: these are the default units I want to apply; these are the list of files, and where I want them located, and on what filesystem. So this install process that we walked through was able to create enough of a baseline initial Ignition configuration to boot up a cluster.
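To make the "units and files" description concrete, a minimal Ignition payload looks roughly like this. It is an illustrative config fragment: the spec version shown is from a later Ignition release than the one in this 2018 demo, and the file path and hostname are made up.

```json
{
  "ignition": { "version": "3.0.0" },
  "storage": {
    "files": [
      {
        "path": "/etc/hostname",
        "mode": 420,
        "contents": { "source": "data:,master-0" }
      }
    ]
  },
  "systemd": {
    "units": [
      { "name": "kubelet.service", "enabled": true }
    ]
  }
}
```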
C: So I asked for three masters; it's going to go and provision me three masters. That bootstrap machine boots, reads the concrete Ignition configuration that we created earlier, and then knows how to serve up Ignition configurations to the other nodes in the cluster. So that bootstrapping machine will serve the Ignition config for the masters, as well as for every worker that comes up. From that bootstrapping machine, we know how to go and create an etcd cluster and a temporary control plane, which then knows how to install the real control plane on the masters.
C: That leads me into part two. The installer — once you run it, you never use it again to do an upgrade. Basically, the entire cluster now is self-aware, self-managing, and entirely automated. So if you're an existing v3 OpenShift user and you were using openshift-ansible to both provision and upgrade a cluster, the v4 experience now is basically: you use the installer to install a cluster, and then you depend on Kubernetes-native applications like operators to automatically upgrade and deploy that cluster.
A: Just in summary there: you've given us a single command-line install experience for Kubernetes — yes — you've given me a template that I can get pretty sophisticated with myself, if I want to enhance it — yep — and the thing comes up by itself; I don't have to do anything, it'll be done. And you've separated the install toolset from the upgrade toolset. Now it's install, and then upgrade.
C: The cluster is baking; it'll take about, I don't know, 10 to 13 minutes — has anyone ever counted? So hopefully, later in the cooking show, we'll see that cluster. So let's get to part 2, which is: I have a cluster — how do I operate this thing? We talked earlier about operators. Operators are really just a fancy term for automating things.
B: I want to take a second here. You know, Kubernetes 1.0 was a very simple thing. We'll talk about this in some of the keynotes this week, about how Kubernetes has changed over the last four and a half years. It took a long time to get to the point where we believe that all the pieces are in place in the community and the ecosystem to where we can do these kinds of demos, and that work is work...
B: ...that the Red Hatters and Googlers and CoreOS people and people from VMware and Amazon and a thousand other companies have contributed to. So this is a Commons, meaning I wanted to pause and say: that statement that Derek made — it took us a while to get here, because big, important things take a while to happen. I know that some of you have gone through those thirty-two thousand support issues, and I'm—
C: In the interest of time, Clayton's gonna need to talk a little less and let me do some demo. So this is the admin console. I'm logged in as a temporary admin user, and this is the console you'll see by default after the install finishes — the console is deployed, and you can log in with your credentials and be ready to go. The critical operator I want to talk about today — the one that drives how OpenShift's distribution of Kubernetes can self-automate itself — is what we call the cluster version operator.
C: That is the top-level operator that knows how to manage everything else. And so if we click through — and I'll show a little command-lining to make it a little more visual here — I can look at my cluster, and this is reading what we have, which is a ClusterVersion resource, and it's telling me what the current version of my cluster is. This is literally running master, in production, in a demo. So my current version is this build that was deployed, and so hopefully, like I said earlier—
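Reading the version resource from the command line looks like this; these are real `oc` commands but they need a running cluster, and the output shape is only approximate, so take it as a sketch of what the demo showed:

```shell
# The ClusterVersion resource reports the running version and whether updates are available
oc get clusterversion

# More detail: the desired state it is converging to, plus status conditions
oc describe clusterversion version
```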
C: —my previous creation of a cluster just finished. And from that I can see an update source, and it's telling me, hey, potentially updates are available; tell me which one you want to go to and apply, and I could roll out a new upgrade of my entire cluster just by applying a new version to this cluster version operator, which will then know how to coordinate the upgrade of everything else.
B: This idea of delivering updates to the cluster — this is the big trick. It's kind of the magic trick: the cluster manages itself. It knows how to pull updates; it knows how to safely apply them. That's just reconciliation. So if I went in and deleted half the things on Derek's cluster, the operators would be like, "I can handle this, I got it," and it'll bring you back to where you were — just like Kubernetes does.
C: If I describe the ClusterVersion resource, it's telling me how it's trying to converge to my desired state — I want to run this level of OpenShift — and it's making a series of coordinated changes to the cluster, trying to converge to that desired state. At the end of the day, it's managing a set of second-level operators...
C: ...as we call them, which are basically managing the kube-apiserver, the controller manager, the OpenShift API server, etc., etc. And if any one of those operators is having trouble converging to its desired state, that gets bubbled up very clearly to the admin and says, hey, there's a problem. Right now I have a transient problem in one of my masters, so please ignore that for the moment. The way the cluster version operator works is it takes a release payload. A release payload in OpenShift v4 is just a container image.
C: And you can see there's a payload that says: this is the image I'm running, and that's defining what this cluster is. And we have a new command, called `oc adm release info`, which I can feed that payload, and I can find out how that cluster was installed — and hopefully I'm okay on conference wireless, yeah. So you can see here, this is telling me: okay, you're running a 4.0 payload of a particular version, and that release payload says, hey, these are all the operators and the images I'm going to install, and their corresponding container hashes.
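Inspecting a release payload as described would look something like this; the command is the one named in the talk, but the image pullspec here is a placeholder, not the one used on stage:

```shell
# List the operators and component images (with digests) that make up a release payload
oc adm release info quay.io/openshift-release-dev/ocp-release:4.0.0
```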
C: Excuse me. So, like I said earlier, the top-level operator we run is the cluster version operator. It's just a deployment that's running on the Kubernetes control plane itself, and all it's doing is acting like a ReplicaSet, Deployment, or StatefulSet controller: it's just constantly trying to converge to a desired state. So it's not doing anything too crazy. We can actually go look at it, and I'll rsh into that container.
C: When I say we use Kubernetes to manage Kubernetes, this is the set of YAML artifacts that describe the resources we deploy — that say how you make this version of Kubernetes. So, for every operator we ship — which includes all the regular suspects you'd expect: an operator that can manage the network, an operator that can manage DNS, an operator that manages certs, the API server, the scheduler, yada yada yada — basically, this release payload that's inside that image is telling the cluster version operator what set of artifacts to apply.
C: I'll try to go quick. So, the point being, we have a top-level operator that just applies kube artifacts. So in the same way that you roll out new versions of your applications on the platform, that's literally how we roll out new versions of Kubernetes — and Kubernetes is really good at that type of thing. So if I go and look at the logs for the cluster version operator, it's not like you're doing an upgrade only sometimes — we're always doing an upgrade, right?
C: We're always trying to ensure that if the cluster has drifted out of your desired state, it gets put back into line, right? Just like the man who was getting bit by that dog, the cluster version operator is stopping me from going and changing one of these things. Like, I can't go set a flag on the kube-apiserver — it will come back and say, hey, don't do that, I'm gonna go set it back as it should be — unless I go through a formal interface for it.
A: Everything on the platform is literally an operator, from the CNI plug-in to the storage plugins to the Kubernetes API server — literally everything. And for the last year, an entire year, 30 scrum teams have been porting the Kubernetes framework into operators and baking in their knowledge of what those framework components should be.
B: I'm not gonna lie, some of these are really stupid simple, because in production environments and distributed systems, stupid simple works. And so some of these are doing the same things you might do in a control loop as an operator, right? You run a control loop, you say, here's what I want to have happen, and I'm just gonna do that every 30 minutes until the end of time, with a cron job.
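That "stupid simple" reconcile pattern is, at its core, no more than this sketch (the resource file name is hypothetical, and it needs a cluster to run against):

```shell
# Declare the desired state once, then keep re-asserting it forever.
# Re-applying is idempotent, so any drift gets corrected on the next pass.
while true; do
  kubectl apply -f desired-state.yaml
  sleep 1800    # every 30 minutes, until the end of time
done
```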
C: When OpenShift v3 first released — right when Kubernetes 1.0 came out — the state of Kubernetes at that time, and the state where OpenShift v3 needed to be, required us to package and deploy and run everything slightly differently. So you had one uber start-master command, or one uber start-controllers command, that combined your kube processes with your OpenShift processes. But that then let us go and do things like RBAC and some other things. Kubernetes has evolved.
C: We spent a lot of time at Red Hat trying to change the upstream so that we can get clear layers between our software and the upstream. So what you'll see in v4 here is that we actually deploy a kube-apiserver like any other pod on the cluster. We deploy the scheduler like any other pod on the cluster, and the same with the controller manager.
C: So the base control plane itself for Kubernetes is just running as pods — you can see them here — and then there's an operator that's actually managing that, to ensure it's where I want it to be. What's cool about these operators, relative to existing Tectonic users, is we've kind of relooked at how that worked and made it so that we don't need something like a kube-recover tool in a disaster scenario. Basically, this thing is always going to run fine, disaster or not, which was a really cool innovation from the team.
C: Then the traditional OpenShift API server stuff, that gives you all those nice development pieces, is just running as a separate operator, as a daemon set on every one of my masters. At the end of the day, an operator is just managing a set of custom resource definitions that describe how the admin wants their cluster to be configured.
B: We've made configuring the cluster just another Kubernetes API operation, so you can say what you want the state to be and you go and kubectl apply it. You can put it in Ansible, you can put it into Helm — doesn't matter. Configuration is managed as a declarative API, just like everything else. Okay.
C: So right now I'm running a six-node cluster — three masters, three workers — and what I'm gonna do here is gonna pain some of the Red Hat CoreOS developers: I'm going to SSH into a master. I promise, in the future they say they're going to taint the node when I do that, but right now that's not there. And you can see right now that the OS being reported is that I'm running Red Hat CoreOS 4.
C: So if I go and SSH into one of my masters, let's take a look at what is actually on this host. You can see I'm in, and I'm running Red Hat CoreOS — awesome. So, to clarify any confusion: Red Hat CoreOS is a RHEL kernel with RHEL content. So if I go and look at what I'm running, I'm running a RHEL kernel — hopefully I can say it again: I'm running a RHEL kernel.
C: I can't type either way, I promise. If I had this journal command, you would see that on first boot, Ignition launched before the whole OS was booted and said: hey, tell me the files I need to lay down on this immutable OS host. It configured that host, and it would have been good to go — just like Atomic or Container Linux.
C: I am running an immutable OS. So, I'm a nefarious user and I'm gonna go and try to touch "bad", and this should tell me I can't do that. But I do have a writable layer, and I will go and try to touch "good" — but I have no permission. That's also good to calm folks: SELinux is on and enforcing by default on Red Hat CoreOS.
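The checks from this part of the demo come down to something like the following; the exact paths used on stage aren't shown in the transcript, so these are illustrative:

```shell
touch /usr/bad    # fails: /usr belongs to the immutable, read-only OS image
touch /etc/good   # /etc is on the writable layer, though it can still be denied without privileges
getenforce        # on Red Hat CoreOS this reports "Enforcing": SELinux is on by default
```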
C: There's a set of files that I can configure, essentially, on the cluster, that says how you get there. No — that's the operators Clayton didn't talk so much about; this would have gone great, sorry. So that was the set of second-level operators. Also in v4, we've been working on making sure that it's not just us who can exploit the operator pattern — the applications that you run on the cluster can also be delivered as a set of operators.
C: So, in the administration section here, you'll see some new interfaces. You'll see that there's a machine interface. What this machine interface is — this is actually a Kubernetes resource called Machine, and it is the base atom for describing a Kubernetes machine. By default, the installer used the Machine CRD definitions and this other component...
C: ...that knows how to then instantiate those machines in your target platform, to stripe out my cluster by default. Because I was deploying in AWS, in a particular region, you'll notice that everything is spread across the right number of availability zones. This machine interface — as I said, Machine is the base primitive, but just like pods, you have a MachineSet, and so I'm able to say, okay, in u.s.—
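A MachineSet resource like the ones in the console looks roughly like this; the API group and fields follow the machine-api project as it later shipped, and all the names here are placeholders:

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: demo-worker-us-east-1a       # placeholder: typically one set per availability zone
  namespace: openshift-machine-api
spec:
  replicas: 3                        # desired number of machines in this set
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: demo-worker-us-east-1a
  template:
    spec:
      providerSpec: {}               # cloud-specific details (instance type, image, ...) go here
```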
C: Yeah — as I said earlier, a Machine resource is just a YAML definition. This is being derived out of the upstream, and we're adopting this portion of the project, and we're really excited to work with the upstream community to see this to fruition. So, what I want to do is — you know, I don't want to spend a lot of money unless I have to run something that needs it.
B: That's the last thing you should ever have to think about machines for, right? They're up to date, they're secure, they're running workloads — that's the goal. And programming the infrastructure to do that for you is, again — you know, that dog — the man doesn't have to go and say, I want ten, or I want fifteen, or I want twenty. The machine says: I know how to talk to the cloud API, I know how to ask for more machines — make it happen. Okay, so.
C: I went and deployed a worker job that's gonna run, I think, a hundred parallel pods that make absurdly high requests for the actual work they're doing, and require me to spend money to spin up new machines. So there's not enough capacity on this cluster to satisfy this compute need — but because I went and told the cluster autoscaler and the MachineSet API that we're allowed to dynamically size these things in the background...
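Granting that permission is itself declarative: a cluster-wide autoscaler resource plus one per MachineSet that is allowed to grow. The shapes below follow the `autoscaling.openshift.io` API as it later shipped, and the names and limits are placeholders, so treat this as a sketch:

```yaml
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  resourceLimits:
    maxNodesTotal: 20                # never grow past 20 nodes, regardless of demand
---
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: demo-worker-us-east-1a
  namespace: openshift-machine-api
spec:
  minReplicas: 1
  maxReplicas: 10                    # this MachineSet may scale between 1 and 10 machines
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: demo-worker-us-east-1a     # placeholder MachineSet name
```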
B: Yeah — and this demo is on AWS, but we don't really think AWS is special. I'm sorry — it's recorded — we don't think AWS is any more special than the other clouds, or OpenStack, or on-premise clouds, or, honestly, bare metal. So Chris talked about bare metal, but we don't think that this experience is unique to clouds. We think this is just something that should just work everywhere, and that's part of what it's about.
B: There's a ton more in this demo, and you're not gonna be able to sit here for all of it — if you find Derek, you know, make him do this demo for you. But then we were like, well, you know, this is really easy for us to go do as a development team, so what we said was: well, why don't we just make it available for everybody?
B: And this man — I want to say thank you to this man for not having a heart attack when we sprung this on him — you can go to try.openshift.com. You can do everything that we just did in this demo: get pull secrets, get access to OpenShift 4, and we'll walk you through the process. Derek's demo is up in a Google group for them. And you know, this is just the beginning, right? We said we want to be that 10x multiplier. We want your feedback — Commons contributors and companies.