From YouTube: TGI Kubernetes 108: Cluster-API Docker
Description
Come hang out with Duffie Cooley as he does a bit of hands on hacking of Kubernetes and related topics. Some of this will be Duffie talking about the things he knows. Some of this will be Duffie exploring something new with the audience. Come join the fun, ask questions, comment, and participate in the live chat!
This week we will be exploring cluster-api for docker!
And welcome to TGIK, episode number 108. It's very good to see you all. Let's see who you all are. Remember, I always have my screen off to my left here, so if you see me casting my eyes about the room, that's basically what's happening: my chat, the way I can actually see your interaction, is always to my left. That way we can save the desktop screen for the fun stuff, like going through the articles and that sort of thing. So, we've got Martin from the Netherlands.
We've got Rory from Scotland, we've got Waleed (good to see you, Waleed), and we have Marky Jackson, totally awesome human. If you ever get a chance to hang out with Marky, I highly recommend it. Actually, I think that's true of all the people in the list so far; these are all amazing people that I have met and whose company I enjoy. Riko, good to see you, and we have Joe tuning in. We've got Amane from Stroudsburg, we've got Miguel from Mexico, and we've got Peter from Seville, Spain.
We've got Tim Reece from Frankfurt, and Tim Downey. We've got Morteza from Tehran and Eric from DFW, and we've got Maddie; thanks for coming back in back-to-back weeks. Oh yeah, I'm digging it. They do take some preparation, but it's fun. You know, it's fun to kind of explore this stuff. We've got Amin from Amsterdam. Is it Amin or Amen?
All right, so let's dig into the news and see what we've got here. We've got Alejandro signing in from Lima, Peru. Let's flip over to the screen-sharing view. All right, here we go. This is this week's news, and again, as always, these things are available online at tgik.io/notes, and it's actually a link at the very top of the chat.
So if you want to help keep track of notes, or if you see me discussing a link, or if you have some context or reference link that you would like to share, feel free to throw it in there. This week in review: 1.18 beta 1 is happening, so that means we're getting pretty darn close to the 1.18 release. And there's the sidecars KEP, which actually raised a little bit of news this week.
Let's talk about that one; that's a fun one. So, 1.18 is almost there. The sidecar KEP: this is a really interesting enhancement, and I'm afraid the author of this enhancement lost a little faith this week, because it has taken a while to get this done, but it is a really interesting one. So this KEP, it's basically the idea... let's see if it's actually, yeah, it is the next one.
So this one's up front. This is the release sign-off; this is another KEP, which is probably linked from in here... there we go, boom. So it's a relatively new idea, and it describes a different way of thinking about sidecars, and I like that. There's actually one line in here
that really succinctly lays it out, and this is basically what the change in behavior would look like if this KEP were to be implemented. It's currently in an implementable phase and there's an author working on it; he's done a lot of amazing work and I hope it does get in, but it looks like it'll probably land in 1.19. The thing that it would do is allow you to indicate the lifecycle of one of the other containers inside of your pod. So you might be able to indicate that a given container inside your pod is lifecycle type sidecar, and when that happens, it changes the normal startup and teardown a little bit.
The init containers piece is not related to this; init containers still start before everything else can start. But then we'd see sidecars starting before anything not labeled sidecar: sidecars become ready, and then the containers start. In this way we could do things like, you know, maybe have a sidecar-lifecycle container that is responsible for fetching secrets from Vault. You would kind of want that to actually be in a successful state before you started your application, and this allows for that sort of mechanism.
The interesting thing is also that it works on the way out as well: when the pod is getting ready to be terminated, it'll send a SIGTERM to the containers, and then, once all of the containers have exited, the sidecars get a SIGTERM. So if you're running that lifecycle model, where you have another container inside of your pod that is marked with lifecycle sidecar, you'd be able to see that it would start first and it would become ready.
Then your regular containers would start. Then, on termination, the regular containers would get the SIGTERM; they would exit, presumably, and then the sidecars would actually get the signal to exit. And that might be useful for things like log forwarders and those sorts of things, things that need to actually flush a buffer or flush to disk before they can sign off.
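To make that ordering concrete, here is a rough sketch of what a pod spec could have looked like under the proposal. The `lifecycle.type: Sidecar` field is the shape the KEP was discussing at the time, not a released API, and the container and image names are just illustrative.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: secrets-fetcher        # proposed: starts first, gets SIGTERM last
      image: vault-agent:example   # illustrative image
      lifecycle:
        type: Sidecar              # field proposed by the sidecar KEP
    - name: app                    # ordinary container: starts only after
      image: nginx                 #   sidecars are ready, gets SIGTERM first
```

With that marking, startup would be init containers, then sidecars, then the app containers; teardown would run in the reverse order.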
And what that does is... what that will mean is basically that... give me one second here.
Okay, so what that will mean is this: right now there are actually a number of different ways to build Kubernetes, and we tend to use a couple of them. There are at least two different ways in heavy use for managing the build of the Kubernetes binaries and all of the testing and everything. In some cases we use Bazel for everything; in some cases we use helper scripts and those sorts of things to do it. But that
B
That
is
like
twice
as
much
work
to
actually
to
to
to
bring
together
a
given
release,
and
so
in
this
case
the
argument
is
well
if
we're
going
to
nuke
one,
which
one
should
we
nuke
and
which
one
are
we
getting
the
big
value
out
of,
and
what
are
we
not
seeing-
and
this
is
actually
one
of
the
things
I
really
dig
about
the
community
is
that
this
issue
is
raised
and
everybody's
coming
to
it
with
an
open
mind.
You
know
like
there's,
no,
like
immediate
naysayers,
they're
really
like
coming
in
and
saying
like.
"if we do this, here's the fallout that I could see," and people are providing support, basically making sure that we go into a decision like this, which is pretty critical to the build infrastructure, with our eyes open. We really want to make sure that we don't miss anything. So, let's go back to our chat and see how everybody's doing.
We have Tolgahan from Turkey. We have Shahar from Atlanta; good to see you, Shahar. We have Joy (how are you, Joy?), and we have Balazs asking which Linux I'm using: I'm using Ubuntu, and I'm using a version of Ubuntu that still gives me wobbly windows. It's kind of fun, because I'm just that kind of Linux geek, you know.
What else do we have here in our news? Let's see, the next one up is a full checklist for k8s.io infra, moving the official images into a community-controlled resource, which is on track for April Fools' Day. I think that's kind of a funny time for that to happen, but funny as in weird, not funny ha-ha. So, where is it at? Oh, I guess I didn't link it. All right, so:
B
Most
of
the
images
that
you
would,
if
you're
going
to
pull
container
images
for
kubernetes
components
like
the
controller
manager,
the
scheduler,
the
api
server,
all
of
those
things,
you
would
actually
get
them
from
g
khs.gcr,
dot
io,
which
is
a
vanity
url
in
front
of
a
gcr
repo,
and
that
has
up
up
until
until
I
guess
april.
1St
has
always
been
oh
kind
of
own.
You know, again, what we're seeing is that Kubernetes is kind of coming out into the community as far as all the resources and components that it needs. Google has been incredibly generous in offering up quite a lot of that infrastructure at no cost, to further the project, which has been tremendous, but this actually pushes the ownership of some of those resources into the community's hands.
B
What
else
I
think
we
have.
The
next
part
is
the
sad
news
all
right.
So
the
sad
news
is
you
you've
probably
already
been
made
aware
of
this,
but
if
you
have
not
is
actually
not
just
kubecon
either
so
cubena
cubecon
cloud
nativecon
in
eu
has
been
postponed,
so
the
one
that
was
that
was
to
hemp
happen
in
amsterdam.
The final dates have not yet been picked. The North America event, which was scheduled for November 17th to November 20th, will continue to happen as it was scheduled, and unfortunately the Cloud Native Open Source Summit in China has been cancelled due to not enough interest in that particular event, given the current circumstances. So, big changes, you know.
B
Obviously
that
means
that
a
ton
of
us
are
not
going
to
be
in
amsterdam
at
the
end
of
march,
but
hopefully
we
will
be
there
later
in
the
year.
So
the
goal
I
think
is
to
be-
you
know
right
now:
they're
bantering,
they're,
going
back
and
forth
between
july
and
august,
or
maybe
some
crossover
between
the
two,
but
I
think
that's
actually
the
goal
there's
been
a
survey
gone
that
has
gone
on
around
it.
I imagine that many of the devrel folks, and my heart goes out to them, a lot of the folks that actually spend a lot of their time out there talking about things, working on conferences and building talks and those sorts of things... I can imagine this would actually affect them incredibly adversely. A lot of the conferences have been shut down, or maybe turned virtual, or cancelled or postponed.
B
So
that's
what
I
know
about
that,
and
obviously
more
of
that
will
I'm
sure,
probably
surface
by
next
week
and
we'll
have
a
little
bit
more
of
an
update
around
what's
happening
that
there
contributor
summit
for
amsterdam
has
been
postponed.
Obviously,
you
know
a
lot
of
the
zero
day.
Stuff
will
actually
also
be
postponed
or
turned
to
a
virtual
event.
So in some cases you can actually pass an argument, something like `kubectl get pods` with a UI flag, and it will pop open a little window with your output. Now, I haven't tried this out to see if it's cross-platform, but it looks like it's built on Electron, so it might be kind of fun to try that out, and maybe we'll try it out in a little while. But that's a new one that's come out. Jeff Geerling writes, "Everyone might be a cluster admin in your Kubernetes cluster." Gosh,
I hope not. "Quite often when I dive into someone's Kubernetes cluster to debug a problem, I realize that whatever pod I'm running has way too much permission. Often the pod has the cluster-admin role applied to it through the default service account." Yeah, that's a problem; that should not be true. Okay, so what
B
Mean
is
that
somehow,
like
somebody
has
actually
created
a
new
cluster
role,
binding
of
cluster
admin
to
the
default
service
account
and,
as
you
already
all
know,
every
pod
in
a
given
namespace
will
be
associated
with
the
default
service
account
inside
of
that
namespace.
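For reference, the kind of over-permissive binding being described would look roughly like this (the binding name is illustrative); this is the pattern to look for and avoid:

```yaml
# DO NOT apply this: it grants cluster-admin to every pod in the
# default namespace that runs under the default service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-sa-cluster-admin   # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
```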
It looks like he's got some examples for how to check it and see if it's actually a problem, like how you can actually break out and do things, the way you might be able to evaluate whether you have cluster-admin or not. One of the other ways that you can do that, and I think I've talked about this before, but let's actually pull this up:
you can also do this command, `kubectl auth can-i --list`. If you can impersonate an entity, in this case, if I do `system:serviceaccount:default:default`, if I run that command, then I'll actually be able to see what permissions that service account has against the system.
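Written out, the check being demonstrated looks like this (it assumes a running cluster and a kubeconfig with impersonation rights):

```shell
# List everything the default service account in the default
# namespace is allowed to do, by impersonating it:
kubectl auth can-i --list \
  --as=system:serviceaccount:default:default

# A quick yes/no probe for effectively-admin permissions:
kubectl auth can-i '*' '*' \
  --as=system:serviceaccount:default:default
```

If that second command answers yes, something has bound far too much power to the default service account.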
That was that part. Kubeflow has hit 1.0, which is an interesting one. It's a cloud-native ML platform for Kubernetes, and, I mean, if you're into machine learning and Jupyter notebooks and TensorFlow and that sort of stuff, it's actually pretty cool. They've got a pretty reasonable way of deploying this stuff on top of Kubernetes.
So if that's the thing that floats your boat, definitely go check that out. They've hit 1.0, which is a pretty reasonable milestone for them, and that's pretty awesome.
Spotify has released their Terraform GKE Kubeflow cluster, so if you're on GKE, or you can manage a GKE cluster, you can actually spin up Kubeflow the way that Spotify does and play with those things; that's pretty fun. Kind of related, I think... yeah, Kubeflow, speaking of Spotify. So that's actually, I think, a Spotify team working on that, and they've open-sourced their stuff.
Then we've got Vault replication across multiple data centers on Kubernetes, which is a mouthful, the idea being that you could replicate Vaults. I see, so this is actually from Banzai Cloud, who have done quite a lot of work in this space. They have a Vault operator and a multi-data-center setup, and they presumably have a way of syncing from one Vault to another across different clusters.
So they've got multiple clusters in different regions. After all the clusters are there, you need a kubeconfig from each. The Bank-Vaults repository holds a useful script (oh, nice) which probably does all the heavy lifting of copying Vault content over between the different Vault installations on the different clusters.
So I think I should try this one out. But Kubernetes namespaces are a thing that is constantly under discussion: what would you put inside of a namespace, what should be inside of a namespace? Namespaces are where we define constraint, and because of that, when we think about how we use that primitive, things get a little complex, like if you're going to actually use a cluster to grant individual users access to a namespace, namespace-as-a-service.
Oh, I'm using Linux. So this is happening because I have Compiz going, and so I can see the gears running in there and all that fun stuff.
Okay, "CRDs killed the free Kubernetes control plane." This is some conjecture, but I thought it was interesting. This one is talking about how Google is actually now charging for control plane nodes, and they're making the statement that maybe the reason for that is that there are a lot of people who are using the control plane nodes to hold data.
The first thing I want to do is bring up the idea of Cluster API and what it does. We're going to do a little background (I am totally using Compiz), we're going to do a little background on Cluster API, we're going to talk about what it means, then we're going to play with it, and then we're going to play with it in a way that is local to my machine, which will be kind of interesting, and we'll also talk about why that is interesting to me.
So let's get started here. The first thing I want you to know is that Cluster API is part of the sigs.k8s.io stuff, and you all might notice that that's actually pretty similar to the kind documentation; it's a very similar thing. We're using that GitHub web-hosting mechanism to actually host the book for Cluster API, same thing as for kind. So if you go to kind.sigs.k8s.io, that's the documentation for kind.
If you go to cluster-api.sigs.k8s.io, this is the documentation for Cluster API. So, introducing Cluster API. First, let's talk a little bit about what... and actually, I'm curious, let's do a poll. I would like you to say yes if you're familiar with Cluster API to some level, and no if you're not familiar with Cluster API at all, coming to this with a completely fresh mind.
We're leaning a little bit more toward people who understand it a little bit, or maybe have some deep understanding of it, plus some folks that don't have anything. All right. So, if you're going to dig into Cluster API, this book is definitely the place to start, right there at cluster-api.sigs.k8s.io. But let's talk a little bit about it, and take a look at how this works. So what I'm gonna...
If I cat this, this is my configuration for the kind cluster that I just created, and you can see that really all I'm doing is mounting the Docker socket into that control plane node. Now, it's probably not a great idea in general to do that, and we've talked about why; if you're interested in understanding more about why, feel free to follow me on Twitter.
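The exact file isn't reproduced here, but a kind configuration that mounts the host's Docker socket into the control plane node looks roughly like this (the apiVersion depends on your kind release):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4   # older kind releases used v1alpha3
nodes:
  - role: control-plane
    extraMounts:
      - hostPath: /var/run/docker.sock      # host Docker socket...
        containerPath: /var/run/docker.sock # ...exposed inside the node
```

Anything that can reach that socket effectively has root on the host, which is why this only makes sense for a throwaway lab cluster like this one.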
I spend a lot of time yammering on about it. But what this will allow us to do is make it so that this control plane node can be used to effectively play with and test Cluster API. Now, before we get into that... what I want to play with is one more thing before we get to that part. Okay, so, to understand a little bit about what Cluster API is, let's talk about...
three... actually one, because, well, let's just leave it blank. Okay. So what this is going to do is create a deployment using the nginx image from Docker Hub, and it will submit that deployment straight up to my API server, and that's where we see the output `deployment.apps/test created`. And so now, if I do `kubectl get pods`, I'll be able to see that my nginx instance is running, and if I wanted to scale it, I can scale the deployment to three.
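The commands being run on screen are, roughly (assuming a running cluster and a deployment named `test`):

```shell
# Create a deployment from the nginx image on Docker Hub and
# submit it to the API server; prints "deployment.apps/test created"
kubectl create deployment test --image=nginx

# Watch the nginx pod come up
kubectl get pods

# Scale the deployment out to three replicas
kubectl scale deployment test --replicas=3
```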
B
If
I
wanted
to
reconfigure
it,
if
I
wanted
to
change
the
image
that
was
underlying
it,
if
I
wanted
to
do
any
of
those
things,
I
could
actually
interact
with
that
deployment
object
and
and
see
that
change
happen
over
time
right.
Well,
what
if
we
took
this
idea
of
deployments
and
opera
and
extended
it
to
understand
sort
of
the
idea
of
machines
right,
maybe
I
want
a
deployment
of
machines
which
are
going
to
be
the
workers
for
a
kubernetes
cluster.
Maybe I want to be able to ensure that those machines have... you know, I have some ability to understand when they're running, when they're healthy, and when they're not healthy; I have some ability to understand how to join these nodes into a given cluster. You can see how there are individual pieces and parts, just like there might be as part of an application that is deployed inside of Kubernetes: there are different configuration pieces and different functions that I would want machines to handle, just like
there are different functions and configurations of the pods that might make up my application, the ability to scale it up or scale it down. All of those things are pretty interesting to me when I think about the way applications work. So Cluster API is the idea that we might be able to actually take all of what we've learned from declaratively managing infrastructure at the application layer and apply it at the infrastructure layer.
Right, so just like we can do this here, where I just spun up a new deployment and I've scaled it up and I've scaled it down, and I can delete it and I can reconfigure it, and I can do all of those things just by modifying the actual deployment object: what if I could do that stuff with machines? Virtual machines running in some IaaS somewhere, or, in our case, containers running on my Docker daemon?
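That analogy lands as an actual API object: a MachineDeployment is to machines what a Deployment is to pods. A trimmed sketch, where the names and versions are illustrative and the apiVersion reflects the v1alpha3 era of this stream:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: my-cluster-md-0          # illustrative name
spec:
  clusterName: my-cluster
  replicas: 3                    # scale workers like a Deployment scales pods
  template:
    spec:
      clusterName: my-cluster
      version: v1.17.0           # Kubernetes version for these machines
      bootstrap:
        configRef:               # how each machine becomes a node
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
          kind: KubeadmConfigTemplate
          name: my-cluster-md-0
      infrastructureRef:         # which provider realizes each machine
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: DockerMachineTemplate
        name: my-cluster-md-0
```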
There we go. All right, I'm using, what is it, it's like a Dell or something, kind of a big powerful Dell. They just recently gave me a boat anchor; it's like 16 inches full of Dell.
Because if that's true, then you're right. But if you think about it, really, the fact that it is still /var/lib, the fact that it's still /var/run/docker, is actually somewhat orthogonal. So kind does use containerd to implement the nodes, but kind actually creates one containerd instance per container,
if you will; so every node has its own containerd. If they were in conflict, what we could do is just move the Docker implementation off to another directory, so that we were actually using the underlying Docker implementation.
If you have a set of defaults... like, obviously, I would want all of my nodes to have webhook authentication set to true, which means that any call that's coming into the kubelet API would have to be authenticated with the API server. I want some mechanism by which to define those things, and fortunately we know that we already have sort of a tool for that, which is called kubeadm.
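The default being described, expressed as a kubelet configuration fragment of the kind kubeadm can carry to every node (a sketch, not a complete config):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  webhook:
    enabled: true    # callers to the kubelet API must be authenticated
                     # via TokenReview against the API server
```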
B
So
this
is
actually
extending
the
work
of
cubadm
and
we'll
talk
a
little
bit
more
about
that
so
to
reuse
or
integrate
with
existing
ecosystem
concepts.
Right
so
again,
cubadm
already
exists.
We
probably
should
extend
that.
We
already
have
the
idea
of
you
know.
We
already
have
some
kind
of
repeatable
ideas
in
different
cloud
infrastructures
of
a
machine
that
is
true
everywhere,
so
maybe
we
should
just
take
that
one
and
run
with
it
can
provide
a
transition
path
from
for
kubernetes
lifecycles
to
adopt
cluster
api.
This
is
getting
there.
Here are some pictures that kind of describe what's happening here. We have our management cluster, and that's this cluster that I just spun up, and inside of that cluster we're going to deploy three things: we're going to deploy a Cluster API controller; a bootstrap provider, the thing that will turn a machine into a Kubernetes node; and then we're going to deploy an infrastructure provider, which is actually the thing that Cluster API uses to go and talk to your infrastructure.
So, like, what is the infrastructure that you're going to go talk to? Are we targeting AWS? Are we targeting bare metal? Are we targeting Azure, VMware, any of these individual places? That infrastructure provider is going to be unique to whatever our target is. So, in my case, I'm actually going to be deploying CAPD, Cluster API Provider for Docker, because that is actually the infrastructure that I'm going to use to create these machines and networks.
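With a current clusterctl, standing up those pieces on the management cluster is one command; the flag value selects the infrastructure provider (here, the Docker provider used in this session):

```shell
# Install the core Cluster API controller, the kubeadm bootstrap
# provider, and the Docker infrastructure provider (CAPD) into the
# cluster that the current kubeconfig points at:
clusterctl init --infrastructure docker
```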
B
And
again,
you
can
kind
of
see
from
this
from
this
diagram.
There
is
this
overarching
idea
of
a
cluster
which
makes
sense
right.
The
cluster
is
represented
by
all
of
the
nodes.
All
of
the
nodes
need
to
have
some
idea
of
what
represents
the
entire
cluster,
so
so
they
can
join
them
right.
So
qradium
join
that
we
need
to
know
what
the
ip
address
of
the
api
server
will
be,
or
what
the
load
balancer
in
front
will
be.
We need to have a token that allows us to join a given node to that cluster, and perhaps there are some other consistent configurations that we want to actually make within the nodes, within a particular node group even, or within a machine deployment, and we're going to play with these things in person. "Will the provider functionality provision new raw VMs as well?" Yeah, that's in fact what it does.
B
Yes,
that's
exactly
what
it
does
so
the
cloud
provider.
So
what
happens
so?
There's
three
responsibilities
here
right
so
there's
the
responsibility
of
cluster
api
and
cluster
api
is
responsible
for
for
defining
those
generic
terms.
What
is
a
cluster?
What
is
a
machine?
What
is
a?
What
is
a
machine
deployment
to
those
sorts
of
things
right?
Then?
We
have
a
bootstrap
provider
which
is
like
once
we
have
a
machine.
How
do
we
turn
that
machine
into
a
cluster?
B
What
what
is
the
glue
that
brings
it
from
just
a
machine
running
in
aws
into
a
member
of
a
cluster,
and
then
we
have
this
infrastructure
provider,
which
is
how
do
we
get
the
machine?
These
are
kind
of
out
of
order
to
be
honest
right
so
in
cluster
api,
when
I
create
a
machine
that,
when
I
say,
go
ahead
and
create
this
machine,
there's
a
series
of
calls
that
there's
a
there's,
a
relationship
with
the
infrastructure
provider
in
which
it
will
go
out
to
the
infrastructure
endpoint
that
you've
described
in
my
case.
B
Docker
could
be
in
your
case
aws
or
vsphere,
and
it
will
say,
give
me
a
new
machine
and
I
will
and
and
I'll
take
the
metadata
associated
with
that
machine
and
bring
it
back
and
allow
my
bootstrap
provider
to
understand
it
so
that
it
can
go
ahead
and
configure
or
or
manage
that
thing
same
thing
with
cluster
api
right.
B
So
if
I,
if
I
scale
this
thing
up
or
scale
it
down,
if
I
wanted
to
create
more
machines
or
or
remove
some
machines,
then
that
infrastructure
provider
will
be
responsible
for
understanding
how
to
implement
that
call
right.
If
I
scale
it,
I'm
scaling
the
generic
idea
of
a
machine
deployment,
but
the
call
to
actually
but
the
calls
to
aws
to
make
new
machines
will
be
coming
from
the
infrastructure
provider
and
when
they
show
up
inside
of
my
cluster.
I
know
that
that
worked
because
of
the
bootstrap
provider
right.
Now let's get into, I think, the fun stuff. So we have different controllers, and actually, if you wanted to dig into a little bit more about what the actual controllers are doing: these are the bootstrap controller, the cluster controller, the machine controller, the machine set controller, and the node controller. These are all the actual controllers that describe, or rather implement, that API.
So when you say "I need a new machine"... let's back it up for a second. If we look at this machine idea, this machine could be, effectively, a pod.
When I deploy a new pod into Kubernetes, some stuff happens, right? For example, if it's not already scheduled to a node, then the scheduler will see that that pod isn't scheduled to a node and will do its work: it will go ahead and associate that pod object with a node object, and then, as soon as a node sees it, it will actually instantiate that pod. Well, the machine controller is responsible for a very similar set of work.
The machine controller says: "oh, I see that there's a machine that's been requested, but it doesn't look like it's actually associated with a, what do you call it, infrastructure provider yet, so we should probably get on that", and so on, et cetera. So this is actually how the relationship between these things works.
So here's the infrastructure reference, telling us which infrastructure provider to use to implement this particular node, and some information about it: what size is that machine, what's the provider ID, and any other kind of metadata that might be interesting about that particular piece. And then we also have some status objects, like bootstrap data. Where do we get bootstrap data from? We get bootstrap data from the bootstrap controller: the kubeadm configuration associated with this particular node is actually generated by the bootstrap controller.
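Pulling those fields together, an individual Machine ties the two providers to one node. A trimmed, illustrative sketch in the v1alpha3 shape being shown:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Machine
metadata:
  name: my-cluster-worker-0      # illustrative name
spec:
  clusterName: my-cluster
  version: v1.17.0
  bootstrap:
    configRef:                   # the bootstrap controller renders this
      apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
      kind: KubeadmConfig        #   into kubeadm bootstrap data
      name: my-cluster-worker-0
  infrastructureRef:             # the infrastructure provider realizes
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: DockerMachine          #   this as a container (CAPD), VM, etc.
    name: my-cluster-worker-0
```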
B
It
does
not
create
eks
clusters,
aks
clusters
or
gke
clusters.
It
creates
machines
and
deploys
kubernetes
on
top
of
them.
That's
correct.
Now
there
is
actually
some
work
on
the
idea
that
yeah
so
right
now
it
doesn't.
It
doesn't
actually
create
eks
clusters
because.
C
it wouldn't really provide a lot of value on top of that abstraction. The goal of this is to provide an abstraction where you're actually getting straight-up upstream Kubernetes, in a very configurable and flexible way, deployed on your infrastructure, wherever your infrastructure is. You can do this in AWS, in Azure, in vSphere, on bare metal; it really wouldn't matter. But that is correct; you're correct, Joe. All right.
Let's go ahead and get started. I think this will be a fun part. So, inside of the quick start we actually have some ways to go ahead and get things deployed. Now, in my case, I'm going to use Docker, and we're going to talk a little bit about why. In fact, let's just talk about that now, and then we'll go ahead and get started, get things deployed, and start playing with them and doing that sort of stuff.
So, the reason I'm using Docker to deploy this stuff is the very same reason that I use kind to create my Kubernetes clusters: for me as an engineer, the most valuable way that I can get my hands on a project is by being able to play with it. I learn more about Kubernetes by playing with it,
B
By
tearing
it
apart
and
configuring
it
in
different
ways
and
understanding
how
different
pieces
work,
then
I
will
ever
be
able
to
learn
if
I
had
to
actually
pay
for
that
in
aws
or
in
gke
or
in
in
these
other
environments.
Now
I
do
have
an
aws
account
and
I
have
a
gke
account
and
my
company's
fine
with
like
paying
for
me
to
put
deploy
those
things
there,
but
really.
it means that I'd have to jump into those environments, I'd have to make sure they're secure, I'd have to go through all this headache, and in reality, with kind and with tools like it, I can just run this stuff on my local machine. I can tear it apart. Especially with kind, I can actually go ahead and build my own node image; I can put in logging statements.
I can tear it apart as much as I want, and you've all seen that kind of thing happen over time inside of TGIK: every time I'm on TGIK, I'm almost always using a kind cluster, and that's the reason. It allows me to interact with it locally, to play with it, and to troubleshoot and figure out how Kubernetes is working or not working
B
In
my
own
local
machine,
it
really
lets
me
learn
a
lot
about
that
application.
Now,
when
I
want
to
go
debug
inside
of
a
production
environment
or
inside
of
one
of
my
one
of
my
customizable
environments,
obviously
I
can
take
the
knowledge
that
I've
learned
locally
and
apply
it
to
those
to
those
environments
as
well.
So, yeah, speed is another big one, but it really lets me, you know, tear all the parts apart and put all the Legos on the table and figure out: can I put them back together on my own? Do I understand how the system works, and what are my questions in digging through it? And this actually applies directly to why I'm using Docker in the Cluster API model. If you think about
B
Even
more
kind
of
like
a
convo,
a
very
complex
system
right,
we're
using
it
to
interact
with
and
manage
infrastructure
provided
by
some
ios
provider.
So
how
can
I
learn
more
about
the
api?
How
can
I
learn
more
about
the
way
to
configure
or
to
extend
cluster
api
right?
You
know
in
a
consistent
way
that
it's
just
you
know
local,
that
I
can
actually
use
to
troubleshoot
it
now,
ironically,
I
guess
unironically.
this is actually the same reason why the Cluster API provider for Docker exists. I was talking to Chuck Ha, who's one of the major contributors to Cluster API, and he says this is actually what they use the Cluster API provider for Docker for, pretty consistently: basically, running the tests against Cluster API when you're making changes to controllers.
You know, these controllers that we talked about over here, the bootstrap controller, cluster controller, machine, machine set, and machine deployment controllers: as you're modifying the code inside of these controllers, how do you ensure that you haven't broken the world? This is very similar to the reason I like kind. If I want to modify the code in kube-proxy, I want to be the only one looking at that code in kube-proxy inside of my environment first, to prove that it works the way
B
I
expect
right
make
sure
that
I
can
build
confidence
in
the
way
that
this
works
and
with
cluster
api
provider
for
docker.
I
can
do
exactly
that
right
I
can
make
I
can
I
can.
I
can
go
ahead
and
ensure
that
the
change
I
made
to
my
bootstrap
or
cluster
or
machine
or
machine
set
controller
is
actually
doing
what
it's
supposed
to
do.
B
We can see we're talking to the capi cluster, and we can see in the output here what's being defined, right? We can see that there is a new namespace called capi-system created, and then we're defining custom resource definitions for clusters, for machine deployments, for machines, for machine sets. We're actually creating a capi leader-election role, which is a role that allows us to determine which pod is the leader.
B
We
have
a
cluster
role
for
clapping
manager
role.
We
have
a
role
binding
associated
with
that
leader
role
that
we
just
created,
and
then
we
create
a
deployment
which
is
the
capi
controller
manager
right.
So
now,
if
I
do
cube
kit
I'll
get
an
s
I
can
see,
cappy
system
is
now
created
and
cubekit
will
get
pods
dash
and
cappy
system.
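As a rough sketch, a leader-election Role like the one being described usually just grants access to the objects the controller manager uses to elect a leader. The names and exact rules here are illustrative, not copied from the real manifest that ships with cluster-api:

```yaml
# Illustrative sketch of a leader-election Role for the capi
# controller manager; the real manifest may differ.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: capi-leader-election-role
  namespace: capi-system
rules:
  # controller-runtime of this era elects a leader by acquiring
  # and renewing an annotated ConfigMap in the namespace.
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create"]
```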
B
We have our manager role, we have a leader-election role just like before — all that stuff is still there, and you're still there too, which is awesome; that was really weird. Anyway, so kubectl get ns, and we can see the capd-system being created. If this were AWS or vSphere or Project Pacific, or any of the other infrastructures that we could use, then we would see —
B
which is kind of the one, capo, yeah. OpenStack could actually use a little more love, to be honest; I don't think it's actually getting a ton of love. But the other ones — AWS, I think, is probably the most mature, and then of the other ones that follow after, vSphere is probably next, and there's a few others. Azure is in there as well.
B
So
now
what
we
can
see
is
that
we've
got
our
our
administrative
cluster
up,
let's
go
ahead
and
see
if
we
can
create
a
a
cluster
on
docker.
So
the
way
we're
gonna
do
that
first
is
we're
gonna
go
ahead
and
we're
going
to
go
and
we're
going
to
deploy
these
things.
So
first,
one
we're
going
to
deploy
is
the
thing
that
defines
what
the
cluster
is
right.
So
let's
go
ahead
and
grab
that.
B
And
inside
here,
what
I'm
actually
doing
is
I'm
defining
the
tr,
the
cluster
in
two
different
providers
right,
I'm
defining
what
the
cluster
is
in
the
cluster
implementation
and
I'm
providing
in
the
cluster
api
implementation.
That's
what
this
one
is
right
and
then
down
below
here,
I'm
also
defining
what
that
I'm
defining
this
specific
cluster
with
the
infrastructure
provider
right.
I
got
to
define
it
in
both,
but
you
know
it's
a
lot
of
moving
parts.
So
let's
go
ahead
and
do
that.
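The shape being described — the same cluster declared once against the cluster-api types and once against the infrastructure provider, with a reference tying them together — looks roughly like this. This is a sketch following the v1alpha2-era quickstart; field names may differ from the exact manifest on screen:

```yaml
# Sketch: one logical cluster, defined in both the cluster-api
# layer and the infrastructure (docker) layer, cross-referenced.
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: c1
  namespace: c1
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]   # the pod CIDR for the cluster
  infrastructureRef:                    # ties this to the docker side
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: DockerCluster
    name: c1
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: DockerCluster
metadata:
  name: c1
  namespace: c1
```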
B
So
I'm
going
to
go
ahead
and
create
a
namespace
called
c1
for
where
this
cluster
will
be.
I
don't
want
to
put
it
in
my
default.
Namespace,
so
cube
could
apply,
dash,
create
the
namespace
and
then
here's
my
cluster
yaml
and
again
you
can
see
hey.
I've
got
a
a
cluster
defined
and
inside
of
here
I'm
just
really
providing
the
most
simple
outcome.
Right
like
this
is
the
the
the
pod
cider
that
I
wanted
to
use,
I'm
giving
it
a
name.
B
kubectl get clusters -A: we can see that there is now one provisioned — it's there, we see it. But do we have any machines? kubectl get machines.
B
This is the configuration of that, right? So we've defined the actual name of the machine, cp0; we're going to put it in the c1 namespace; we're calling it part of the same cluster that we created, the c1 cluster. Here our specs get a little more interesting: we're giving it the version of Kubernetes that we want to deploy, and we're giving it a bootstrap configuration, because obviously we're going to need to bootstrap this node.
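A sketch of what that cp0 control plane Machine looks like, again following the v1alpha2-era shapes — names like cp0-config are illustrative, and the bootstrap and infrastructure refs point at resources shown separately:

```yaml
# Sketch of the cp0 control plane Machine described here.
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
  name: cp0
  namespace: c1
  labels:
    cluster.x-k8s.io/cluster-name: c1      # part of the c1 cluster
    cluster.x-k8s.io/control-plane: "true"
spec:
  version: v1.15.3                          # kubernetes version to deploy
  bootstrap:
    configRef:                              # the kubeadm bootstrap config
      apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
      kind: KubeadmConfig
      name: cp0-config
  infrastructureRef:                        # the docker "instance"
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: DockerMachine
    name: cp0
```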
B
So here we're actually defining the kubeadm configuration that will be used for this particular node. Now, this is where things get slightly interesting, because we are going to be using the Docker implementation, so we're going to do a couple of things in here. We're going to set, in the init configuration's node registration, kubelet extra args — these are the extra arguments that will be passed to the kubelet associated with this kubeadm configuration — and in there I'm going to say: I want eviction to be tuned in this particular way.
B
I
want
the
cluster
configuration
controller
manager
to
enable
hostpath
provisioner,
true
right,
so
I'm
passing
extra
configuration
to
the
controller
manager
and
I'm
actually
handling
the
init
configuration
for
this
specific
cubelet.
Also
right
here
in
this
cube
adm,
and
this
is
sort
of
the
flexibility
of
cube
adm.
I
can
actually
configure
any
number
of
things
inside
of
cubedm
in
this
case.
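Putting those two pieces together, the KubeadmConfig being walked through looks roughly like this — the eviction thresholds shown are the ones the docker provider's examples typically use, so treat the exact values as illustrative:

```yaml
# Sketch of the KubeadmConfig: kubelet extra args on the init
# configuration, plus a controller-manager flag in the cluster
# configuration.
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfig
metadata:
  name: cp0-config
  namespace: c1
spec:
  initConfiguration:
    nodeRegistration:
      kubeletExtraArgs:
        # relax disk eviction so kubelet runs happily in a container
        eviction-hard: "nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%"
  clusterConfiguration:
    controllerManager:
      extraArgs:
        enable-hostpath-provisioner: "true"
```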
B
Let's
go
ahead
and
apply
that
cp0
and
we
can
see
those
three
resources
were
created:
the
kubaydem
config,
which
is
part
of
bootstrap,
the
docker
machine,
which
is
part
of
the
infrastructure,
so
that
docker
machine
that
we
create
this
would
be
like
an
ec2
instance.
The
kubernetes
configuration
would
continue
to
be
on
the
cluster.
Just
like
you
see
it
here,
but
if
we
were
pointed
at
aws
instead
of
a
docker
machine,
you
would
see
an
ec2
instance
right
and
then
we
have
that
generic
machine
object
that
has
been
created.
B
There's also another instance that got created, the c1-lb, and this is because in many cases people want an HA control plane — they want multiple control plane nodes — and so the cluster-api provider for docker knows how to implement that, and it implements that load balancer under the covers for us. In the cluster-api provider for AWS, it uses an ELB for this, right? So this implementation is actually a part of how this all works.
B
Mo
not
quite
yet
so.
This
is
actually
mostly
really
focused
on
just
trying
to
solve
the
kubernetes
problem.
It's
not
really
trying
to
solve
the
problem
of
how
do
you
consume
those
other
resources
that
may
be
available
to
you
inside
of
kubernetes
or
outside
of
aws?
Sorry,
it's
just
really
trying
to
provide
that
that
that
way
of
implementing
the
kubernetes
cluster
itself
inside
of
those
environments.
B
So what this is doing is: I'm using kubectl inside of the namespace c1, where I created my cluster, to grab a secret that was generated there, right? When I created the cluster, as part of creating that cluster it actually generates a kubeconfig in the administrative cluster that I can download and then use to interact with that lower cluster, right?
B
So,
if
I
do
cube
kettle
get
secrets
dash
n
c
one,
I
can
see
the
secrets
that
have
been
created,
there's
a
secret
for
the
ca,
there's
a
secret
for
the
ncd
component
for
the
cube
config
that
has
been
generated
for
the
proxy
and
for
the
service
account
to
be
generated,
and
these
secrets
are
important
because,
if
you
think
about
standing
up
any
kubernetes
cluster,
most
of
these
things
are
used
or
copied
to
different
api
servers
as
they're
generated
right.
So
these
are
these
represent
sort.
B
Those
those
those
secrets
that
get
copied
around
the
cube,
config
itself
is
actually
more
of
a
convenience
than
a
requirement.
We
don't
necessarily
need
that
cube
config
to
be
inside
this
cluster,
but
because
of
just
mere
convenience.
It
makes
a
lot
of
sense
for
it
to
be
there
right
for
us
to
actually
be
able
to
understand
that
things
are
working.
B
But
the
other
ones,
the
scd
ca-
is
super
necessary.
The
cluster
ca.
We
want
to
make
sure
that
all
of
the
ap
all
of
the
control
plane
nodes,
agree
on
what
that
ca.
Key
and
public
key
are
same
for
sed
and
same
for
the
front
proxy
and
the
same
for
the
service
account
token
key
right,
like
these
key.
These
certificates
super
important
that
all
of
the
control
plane
nodes
agree
on
what
they
are,
or
things
are
not
going
to
go
well
for
the
cluster.
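For reference, cluster-api stores these per-cluster certificates as Secrets in the cluster's namespace, named after the cluster. This is a sketch of the convention, not literal output from the demo cluster:

```yaml
# Sketch: the kinds of Secrets generated alongside a cluster named c1.
# Each certificate pair lives in its own Secret in the c1 namespace:
#
#   c1-ca          cluster CA                  (tls.crt / tls.key)
#   c1-etcd        etcd CA
#   c1-proxy       front-proxy CA
#   c1-sa          service account signing key
#   c1-kubeconfig  admin kubeconfig (the convenience one)
#
apiVersion: v1
kind: Secret
metadata:
  name: c1-ca
  namespace: c1
type: Opaque
data:
  tls.crt: "<base64-encoded certificate>"
  tls.key: "<base64-encoded private key>"
```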
B
"With the management cluster having so much power — centralized admin creds and infra creds — do you think there are any different security practices that are warranted around this?" I mean, in my opinion, you would keep this one pretty locked down. You wouldn't run workload clusters on your management cluster; you would be using it only to instantiate and create Kubernetes clusters, right? This isn't a Kubernetes cluster like any other cluster — I would not, like, put my wordpress deployment on top of the cluster
A
B
with the cluster-api provider configuration inside of it. That would be a learning lesson, I would say. Is that your question, Joe?
B
Is
actually
defined
by
the
atom
by
the
cluster
eight
by
the
cluster
api
provider
for
docker
and
the
one
it
used
to
stand
up
that
control,
plane,
node
is
actually
kindest,
node,
v1153
and
if
you've
ever
played
with
kind,
then
you'll
know
that
you'll
notice
this
pattern
right.
This
is
actually
the
pattern
that
is
used
to
stand
up
the
kind
cluster
as
well.
So
these
images
are
maintained
by
the
kind
project
and
they
have
all
of
the
bits
integrated
with
them
to
be
able
to
stand
up
and
manage
a
kubernetes
cluster.
B
So
inside
of
them
already
they'll
have
the
cube,
lit
they'll
have
cube
kettle
and
they'll
have
cube
adm
all
already
installed
on
that
node
and
they'll
already
have
kind
of
the
necessary
bits
to
to
make
sure
that
that
image
can
be
a
working,
cubelet,
node.
B
What
are
the
options
around
this
in
bare
metal
environments?
So
that's
a
fascinating
question
and
I'm
going
to
come
back
to
it
josh,
but
we're
going
to
come
back
to
it
all
right.
So
here
we
are
so
let's
go
ahead
and
do
the
next
piece
because
I
totally
want
to
put
that's
the
thing.
I
really
want
to
talk
about.
Oh
okay,
so
we
have
our
cube
config.
So
let's
try
this
export
cube.
Config.
B
And
we
have
this,
is
our
cube?
Could
all
get
nodes
right?
So
we
have
our
c1
cp0
node.
We
see
it's
not
ready.
This
is
probably
because
there's
no
network
implementation
already
deployed.
We
are
see
it's.
We
see
it's
running
in
v153,
so
we
actually
see
that
this
is
the.
This
is
the
cluster
that
I
created
using
cluster
api.
B
We're going to create a machine deployment. Click over to the docker one, and again, in here, we're going to break this down into what's actually happening, right? We're going to create a couple of different resources. We're going to create a resource that defines the machine deployment using the cluster-api interface; inside, we're going to name this c1, and in our case, we're going to give it a node pool.
B
We're
going
to
associate
with
a
node
pool,
I
think
that
actually
might
be
extraneous,
then
we're
going
to
give
it
some
labels
we're
going
to
give
it
a
bootstrap
configuration.
This
is
going
to
be
called
worker,
we're
using
the
bootstrap
interface
to
describe
that
relationship
just
like
we
did
with
this
control
plane
node
only
in
this
case
right
we're
actually
doing
this
across
a
bunch
of
different
machines.
This
is
a
template
spec,
because
this
is
a
deployment.
B
So
we're
saying
we
want
to
templatize
that
cappy
quick
start
worker
configuration
across
the
set
right
same
thing
for
the
infrastructure
piece:
we're
going
to
use
a
docker
machine
template
to
describe
the
configuration
of
each
of
the
docker
machines
that
will
be
part
of
the
worker
configuration
right.
So
we
have
to
define
a
docker
machine
template
which
is
blank,
there's
nothing
special
about
it.
B
It's
going
to
use
all
the
the
kind
of
built-ins
and
we're
also
going
to
have
to
define
the
cube,
am
configuration
in
which
again
we're
using
we're
passing
some
extra
arguments
to
the
the
node
registration
for
this
guy,
and
then
we
also
oh,
this
is,
oh,
I'm
sure
extraneous.
B
So this is actually also configuring the controller manager on the worker nodes, which clearly won't do anything for us, but it's defined the same way here.
B
So let's go ahead and get this deployed. Let's take a look at the nodes yaml. This is actually where we're defining that same thing we just talked about, right? We've got our machine deployment, we're putting it in the c1 namespace, we're going to call this machine deployment worker — probably could call it workers — and we're associating it with cluster name c1. We're giving it one machine, and we'll scale it up and scale it down and play with that sort of thing. We're going to go ahead and define the bootstrap configuration that will be associated with it, and we'll create the infrastructure configuration that will be associated with it down below here. This is the docker machine template — nothing special in here — and then here's our kubeadm configuration template, just like we saw before, with our join configuration this time, because this is a worker node, so it will be kubeadm join, and our cluster configuration being passed down.
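The three resources just described hang together roughly like this — a sketch in the v1alpha2-era shapes, with illustrative names and eviction values:

```yaml
# Sketch of the worker MachineDeployment and its two templates.
apiVersion: cluster.x-k8s.io/v1alpha2
kind: MachineDeployment
metadata:
  name: worker
  namespace: c1
  labels:
    cluster.x-k8s.io/cluster-name: c1
spec:
  replicas: 1                      # scaled up and down later
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: c1
  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: c1
    spec:
      version: v1.15.3
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
          kind: KubeadmConfigTemplate
          name: worker
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
        kind: DockerMachineTemplate
        name: worker
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: DockerMachineTemplate     # blank: nothing special about it
metadata:
  name: worker
  namespace: c1
spec:
  template:
    spec: {}
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfigTemplate
metadata:
  name: worker
  namespace: c1
spec:
  template:
    spec:
      joinConfiguration:          # workers join, they don't init
        nodeRegistration:
          kubeletExtraArgs:
            eviction-hard: "nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%"
```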
B
Replicas: three. Then, just like our deployment, that scaling function takes effect, right? So now, if we do kubectl get machines -A, we can see we have other machines kicking up — that's our first c1 worker, and now we're actually provisioning two more, just like we did with deployments. So in this case we're not interacting with pods — well, kind of — we're interacting with infrastructure. We're saying, I want two more machines, and the infrastructure provider says: okay, I'm gonna go create two more machines.
B
One of the things I want to talk about is the fact that this child cluster is completely autonomous from the parent cluster. I could destroy the administrative cluster — the capi cluster — I could make this one go away, and this cluster down here would continue to work and be just fine; nothing is going to be managed with it, yeah. In fact, it would actually just be cluster autoscaling — and there is a ticket open to actually make cluster autoscaler work with cluster-api.
B
It's
not
quite
done
yet,
but
at
the
moment
this
is
completely
it's
completely.
It's
got
a
lot
of
autonomy
now.
The
problem,
of
course,
is
if
the
administrative
cluster
were
to
go
away.
It
would
mean
that
nothing
would
be
taking
care
of
the
life
cycle
of
these
machines.
If
a
machine
died,
you
would
not,
there
would
be
nothing
to
reinstantiate
that
machine
right.
B
So
as
long
as
the
administrative
cluster
is
healthy
and
working,
then
there
is
a
reconciliation
loop
running
on
the
on
the
control
plane
that
will
take
care
of
that
for
us,
in
fact,
let's
go
ahead
and
play
with
that
right,
so
we
have
our
docker
ps.
Actually,
let's
do
this:
let's
do
it.
Our
cube
kit
will
get
nodes
right,
and
so
that
is
the
name
of
one
of
the
containers
running
inside
of
my
docker
instance.
B
So yes, the answer to that is yes — that is precisely how it works. So we've just detected that the node is not ready.
B
So what we just tested here was a different kind of failure. This failure was kind of interesting: I went ahead and deleted the machine from docker, and I don't think anything has been implemented in such a way that the reconciliation loop inside of my administrative cluster is polling docker to see if those machines are in a healthy state or not. So it never detected that it's not in a healthy state anymore, and never tried to delete it — and that's probably just a function of it not having been implemented yet.
B
So the logic inside of the controller, for, you know, handling the reconciliation of "I have requested that there be three workers" — the reconciliation is saying: well, if you delete it this way, then I'll definitely make sure that I get you three workers. So that's one way of actually handling this problem.
B
We have kubectl version; we can see our cluster. First thing — you know what, if we're going to upgrade some stuff, we should probably add a couple more control plane nodes. So let's go ahead and do that. What this is doing — let's just take a look at these manifests real quick — is, inside here, I'm actually going to go ahead and create a couple of other control plane nodes. I'm associating them with the same cluster, I'm using the same version, all that stuff, right?
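The interesting difference for an additional control plane machine is in its bootstrap config: it joins rather than inits, and the join is marked as a control plane join. A sketch, with an illustrative name, following the v1alpha2-era kubeadm bootstrap provider:

```yaml
# Sketch: bootstrap config for a second/third control plane node.
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfig
metadata:
  name: cp1-config
  namespace: c1
spec:
  joinConfiguration:
    controlPlane: {}        # marks this join as a control plane member
    nodeRegistration:
      kubeletExtraArgs:
        eviction-hard: "nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%"
```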
B
But so, in that case, when you hit delete, it deletes right away; whereas if I tried to delete one that was already running, then you would see that it would go not ready, it would become cordoned, things would reschedule to other nodes, and then we would delete them.
B
So what's happening here is it's going to go ahead and create a new cp0 node and replace that one, right? And so this is actually going to make all the cool kubeadm calls to delete the old etcd member from the cluster and to bring on the new etcd member of the cluster. So we're going to see that happen over time: cp0-test is up and working, and then we're going to see the old cp0 get deleted, right? So in this output —
B
Let
me
actually
just
make
this
a
little
smaller,
so
I
think
it
makes
it
a
little
more
readable.
I
hope
that
that's
at
least
legible,
so
we
can
see
what's
happening.
Is
that
scd
was
in
a
healthy
state?
We
see
all
three
members
now
we're
creating
our
new
member,
the
cp0
test,
we're
doing
a
get
pod
over
and
over
and
over
again
until
we
were
able
to
actually
see
that
that
new
node
show
up
and
once
it
shows
up
we're
going
to
go
ahead
and
do
some
other
cool
stuff
to
it.
B
Sorry about that — that's frustrating! I wonder if it's actually my wireless. This is a relatively new laptop, and linux being linux, you know, it could be the wireless; I'll have to try and figure that out at some point, but probably not today. So we got our new master up, waiting for the old one to go cordoned.
B
Oh,
my
gosh
yeah,
you
should
go
to
sleep
paulie.
Thank
you
so
much
for
your
time.
You
enjoy
your
weekend
as
well.
Thank
you
very
much
for
tuning
in
and
like
catch
it.
I'm
sure
that
you'll
be
able
to
follow
along
for
the
rest
of
it.
As
long
as
my
wireless
hangs
out
for
just
a
little
bit
longer
after
this
is
done,
we're
going
to
just
upgrade
the
nodes
and
then
I
think
we're
probably
going
to
wrap
so
have
a
kicking
weekend.
B
I might, but I'm not really pivoting anything, which is kind of the biggest killer value of clusterctl at the moment. So yeah — you know what, let's talk about it, because it's a good point.
B
clusterctl is a thing that has, like, come and gone from the cluster-api project. Let's talk about what it is. It's coming back in this next version, but for a brief version clusterctl had kind of gone away, and now it's returned. So clusterctl actually provides some pretty interesting functionality around the way that we think about that administrative cluster, right?
B
So
if
we
were
going
to
spin
up
an
administrative
cluster,
we
might
do
that
initially
inside
of
like
a
kind
cluster
right,
we
stand
up
our
kind
cluster.
We
deploy
all
the
api
objects,
we
configure
them.
We
deploy
the
first
cluster
inside
of
aws,
and
then
we
might
want
to
make
that
new.
First
cluster
in
aws,
the
new
administrative
cluster
and
so
cluster
api
has
this
construct
construct
concept
of
like
cluster
cluster
kettle
init.
B
So these different day-two operations are pretty neat. What I was using there to do the upgrade — I was actually calling the cluster-api upgrade piece myself, kind of from the command line — but clusterctl is allowing us to wrap that, right? If we wanted to do an upgrade of the cluster-api providers, and possibly also of the cluster, this might be something we could wrap this in, right?
B
Cluster
kettle
delete
gives
us
the
ability
to
delete
those
api
providers
so
that
infrastructure
provider,
those
bootstrap
providers,
the
all
the
bits
of
the
actual
cluster
api
that
we've
actually
got
running
and
then
to
actually
configure
any
of
these
components
they
have
that
wrapper
as
well.
So
that's
like
all
the
stuff
that's
coming
with
cluster
kettle,
but
that's
going
to
be
released
in
the
next
version-
it's
not
in
the
upstream
docs
today,
but
it
will
be
in
the
next
version
of
the
docs.
B
If
I
prepend
that
with
master
dot,
I
could
see
what
the
docs
look
like
in
master
and
they
look
different
than
the
ones
that
are
that
I'm
using
to
do
this
test
so
stuff,
that's
coming!
We're
definitely
going
to
be
able
to
see
in
master.cluster.api,
whereas,
like
the
current
upstream
ones,
is
kind
of
a
cool
little
url
hack
to
make
that
work.
B
So,
let's
see
if
we
have
proceeded
here
all
right,
so
we
got
our
new
clusters
up.
We
got
our
new
things
working
and
our
upgrade
was
successful,
but
there's
still
something
missing
right,
dang
it.
Why
do
people
say
they're
losing
it
all
right?
So
there's
still
something
missing.
We
can
still
see
that
we
have
three
machines
here
that
are
running
one
fifteen
three.
B
And let's watch what happens — we should actually see the nodes come and go here.
B
So
this
is
like
immutable
infrastructure
level
stuff
right,
we're
not
actually
trying
to
we're
not
trying
to
modify
the
we're
not
trying
to
upgrade
them
in
place,
we're
trying
to
replace
them
with
new
version,
with
a
new
instance
running
at
that
new
version.
B
Cubekit
I'll
get
machine
a
you
can
see
a
new
worker
being
provisioned
and
those
of
you
who
have
played
with
deployments
before
know
what
we're
looking
at
here
right.
We
can
see
that
the
hash
of
that
configuration
has
changed
right
and
so
then
the
the
the
name
is
not
the
same,
and
so
because
it's
provisioning.
B
Ready — it is going to go one at a time, exactly. I believe that the plan is to implement that, so there would be a rolling update, so that you could actually set the surging. I don't think it's currently implemented, though, right? So if I do kubectl edit machinedeployment -n c1 worker — oh yeah, actually, strategy is defined.
B
Nothing was running on any node — so when I actually went to go kill the old node, it was just like: anything here? Nope, nope — then deleted, right? Like, it was waiting for the cordon and drain, and it drained instantly because there was nothing to drain, and then the controller deleted it. So I just kicked up a little deployment of a couple of nginx instances, and this meant that it would actually have to drain it, and that's why we were able to see that drain happen.
B
Better. All right, so one of the questions that came up earlier — and this is an important one to me, because I'm a bare metal guy, right — one of the questions that came up was: how could we do this with bare metal? And in my opinion, yes, I think it is absolutely possible, and I think it's actually kind of an intriguing question, because here's how this would shape up in my mind, right.
B
We
could,
for
example,
use
the
control
use
the
the
tooling
here
to
stand
up.
Kubernetes
clusters
leveraging
sorry,
we
could
use
the
cluster
api
provider
to
stand
up
and
manage
the
control
plane
and
we
can
use
it
to
upgrade
and
manage
the
life
cycle
of
that
control.
Plane
and
everything
would
be
great
and
we
could
even
use
it
to
stand
up
another
worker,
another
pool
of
nodes
that
are
the
worker
nodes
and
that's
what
we're
seeing
here
right,
c1,
worker
or
siemen
worker,
and
we
can
manage
lives
like
all
of
those.
B
I'm doing this really just to simplify my life, right? So what I'm doing is creating another cluster with a single node, and then I'm going to jump into that node, I'm going to wipe it out, and then I'm going to join it to the other cluster. So check this out — this is really the fun part. We'll pretend like this is the bare metal node, right? I have gotten to the point where the node has been configured; I'm going to jump in here.
B
So I've already got my bootstrap conf, my PXE environment, stood up; I've got all that figured out, and I've got, like, a way to bring nodes up to a point where they could be made part of a Kubernetes cluster. That means they've got the kubelet already running on them, or they've got the kubelet already set up on them; they've got, you know, the other prerequisites — docker's already configured, containerd.
B
Okay, so now we do kubectl get nodes with kubeconfig set to the c1 kubeconfig — there's our stolen control plane node.
B
I
could
bring
up
the
node
instantiate
it
with
cube
adm
and
join
it
to
one
of
my
existing
control
planes
right
so
pretty
cool.
B
Let
me
read
this
question
by
dimitri.
Give
me
one
second
here,
so
I
met
run
all
child
api
servers.
Oh
you
know
what
that's
actually
there's
another
project
that
does
this
and
it's
called.
What
is
it
called
gardner
called
gardner
gardner.
Does
that
I
don't
really
think
that
running
control
plane
instances
as
pods
is
really
the
way
to
go.
I
think
it's
better
to
run
them
as
machines
and
I'm
also
and
coming
from
car
os.
This
is
kind
of
you
know.
B
This
is
a
painful
thing,
but,
like
I'm,
not
a
big
believer
in
self-hosted
kubernetes
there,
it
is,
I
said
it.
I
am
a
believer
in
using
static,
manifests
on
cubelets
to
actually
manage
the
control
plane,
because
I
think
it
handles
the
reboot
case
better.
But
that's
my
own
opinion
and
it's
you
know
born
of
experience
from
having
done
this
a
couple
of
times,
but
that's
definitely
my
impression.
B
I
think
that
I
mean-
and
it's
actually
what's
ironic,
because
it's
not
true
for
everything
right
like
if
I
look
at
like.
I
think
that
I'm
I'm
open
to
the
idea
of
the
controller
manager
and
the
scheduler
being
a
deployment
running
inside
of
a
cluster,
but
the
api
server
itself
is
such
an
intrinsic
piece
that
it
needs
to
be
brought.
B
It
needs
to
be
bootstrapped
consistently
and
reliably
every
time
a
machine
reboots
and
I
can
get
by
with
a
static
pod,
manifest
to
actually
achieve
that
and,
in
fact,
if
you
look
at
some
of
the
different
methods
that
are
out
there
for
handling
that
problem,
that's
kind
of
exactly
what
they've
done
anyway.
That's
what
I
wanted
to
share
with
you
all.
I
just
wanted
to
play
with
this.
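For the static-pod approach being described: a minimal sketch of a static pod manifest for the api server — the file the kubelet watches (typically under /etc/kubernetes/manifests) and re-creates on every reboot. The flags here are illustrative, not a complete kubeadm-generated manifest:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (sketch)
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true          # serve on the node's own address
  containers:
    - name: kube-apiserver
      image: k8s.gcr.io/kube-apiserver:v1.15.3
      command:
        - kube-apiserver
        - --etcd-servers=https://127.0.0.1:2379
        - --service-cluster-ip-range=10.96.0.0/12
      volumeMounts:
        - name: pki
          mountPath: /etc/kubernetes/pki
          readOnly: true
  volumes:
    - name: pki              # the shared CA material discussed earlier
      hostPath:
        path: /etc/kubernetes/pki
```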
A
B
Thanks a bunch, everybody — have a kicking weekend. I know that I'm planning on having a kicking weekend: I'm gonna go watch Harry Potter and the Cursed Child with my very own cursed child, so I'm looking forward to that. It's like six hours or something; I figured by then she'll at least have climbed everything that's even possibly climbable from where she's at. So see y'all next time.
B
One more thing before I go — I forgot — I'm so proud of myself, I remembered. So, next week:
B
I'm actually going to be interviewed on Kube Cuddle, which is a new podcast that's being hosted by a good friend of mine named Rich Burroughs, and if you're interested in asking any question of me — everything's on the table; like, you want to ask me anything, that's the time to do it, in tech or personal, whatever — go ahead and just jump into this tweet and, you know, ask the question, and we'll make sure that we cover it in the next session. So you can see that some of the questions were pretty good.
B
I already have, like — I just have one about malicious pods, and then the other one I really like is: "I have a lot of questions. Number one: how dare you." My reply, which I'm particularly proud of: "I woke up like this." You know, that's who I am. Anyway, so I look forward to seeing you all soon. If you're interested in that, that's on the tgik.io notes — throw some questions in there. I'll see you all next time; have a kicking weekend, and see y'all.