From YouTube: TGI Kubernetes 110: Cluster-api v3
Description
Come hang out with Duffie Cooley as he does a bit of hands on hacking of Kubernetes and related topics. Some of this will be Duffie talking about the things he knows. Some of this will be Duffie exploring something new with the audience. Come join the fun, ask questions, comment, and participate in the live chat!
cluster-api version 3!
And who do we have with us today? We've got Augustine signing in, looking forward to the show — I'm really glad to hear it. Someone points out that what we're seeing is a little bit far from v3; it's the alpha version of v1. Fair, fair — I probably should have been a little more clear about that. In my mind it's version three, but you're right: it's v1alpha3. I should definitely update that.
Muzaffar is saying hello to everybody, and Augustine is asking folks where they're from. Hello from Atlanta, says Shahar. Alex says hello, and hello from Elena as well. Rory from Scotland — good to see you, Rory. Reiko from Germany saying hello. We've got Marcin from Krakow, and we have Alex. Oh cool.
More friends joined — I meant to say hi earlier, but I was a little busy trying to get things set up at my house for this whole thing, so you know how it is. We got hello from Las Vegas and from Belgium and from England and from Germany — that's Mohamed, Bradley, and Thorsten. Good to see you all, and I'm glad you're all back on TGIK. We got a welcome back, and a hello from Tel Aviv, and hellos from Bulgaria. Keith — hey, my friend Keith from Ireland, good to see you, Keith. Also a hello from Seattle — good to see you too. And what have we got — we've got a celebrity in the house: we've got Pop sitting in on the line. That's pretty awesome, good to see you, Pop. We've got Jimmy Shu saying hello. We've got Geoffrey from Mountain View, and Kris — Mr. Magoo — saying hello from South Carolina, plus hellos from India and from the UK. Alright, so hello, everybody! It's good to see you all.
I know that it's a rough time right now. I'm really kind of on house arrest — well, not quite house arrest, but you know, what do they call it? It's not even social distancing at this point. It's like: everybody stay inside your house and be away from everything else, and that's been tough on everybody — I mean, just really everybody. I'm one of those people that has family, so I have an eight-year-old — a nine-year-old — here in the house with me, and she's hanging out and learning a bunch of stuff, and we're just dealing with that. And that's really great, but I'm sure she super misses her friends and all of those things.
But you know, I think that's probably true just globally right now — there's a lot of that happening. So I figured, let's keep the TGIK thing going. As for the background: you're gonna see my house. This is basically my work area — I have a little table here where I'm working, and that's my situation. So let's look into the news this week.
And Jose signing in as well. So, in the news this week: all supported versions of Kubernetes took an upgrade this week. That means that if you're running one of the supported versions, you're gonna see this upgrade roll through. This version of Kubernetes basically updated the version of Go to 1.13.8. I linked this ticket because it actually asks a couple of questions that I've certainly been wondering about — and I'm guessing you all probably have been wondering too: when does the Kubernetes project decide to go to the next Go version? Why and when do we move forward? And BenTheElder is right there with the answer. He says: we try to track master with the latest Go so as to keep aligned with security fixes. Right now it's on the build owners to make the calls, but we're hoping to document these policies more clearly and turn this over to the release engineering sub-project of SIG Release — Mr. Stephen Augustus and his whole squad of amazing, amazing people. So yeah, that's the reason for this, I think. That's probably the primary reason for the upgrade to the supported versions of Kubernetes, and we can also go look at the release notes and see if that's the case. So — wait, let's do that real quick. Let's see what else kind of snuck in there for the 1.17 release. Obviously this is a great way of tracking what's happening there; we just saw 1.17.4 cut.
Next up: there's EndpointSlices, a relatively new thing in Kubernetes that I think landed originally in 1.16 or so. An EndpointSlice, which is like a set of endpoints, behaved differently for a while. So if you were going to pull a list of endpoint slices — uh, endpoints — through kubectl, you would have had to filter out the ones that were terminating yourself, and now they actually behave the same. So this is a change by Mr. Andrew Sy Kim that makes that happen.
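As a rough illustration of the behavior being described (the Service name below is a placeholder, not one from the episode):

```shell
# EndpointSlices for a Service carry this well-known label, so against a
# live cluster you would compare something like:
#   kubectl get endpointslices -l "${SELECTOR}"
#   kubectl get endpoints my-svc
# and no longer need to filter terminating endpoints by hand.
SELECTOR="kubernetes.io/service-name=my-svc"
echo "kubectl get endpointslices -l ${SELECTOR}"
```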
One of the others is from the SIG Scalability folks, kind of inviting you to come learn Kubernetes the hard way — and what they mean by "the hard way," I think, is that there are definitely some challenges here. Contributing to scalability is a great way to learn Kubernetes in its depth and breadth. The team would love to have you join as a contributor; they're looking to pull more people into SIG Scalability. Take a look at that, and at the value of learning the hard way — there is a belief in the software development community that pushes for the most challenging and rigorous possible method of learning any language. You know what? I just realized you're not actually seeing my screen. You probably have all told me that at least five or six times. Thank you, Andy — I'll move it over.
We've got Marco from Italy signing in, Vince telling me that things are looking good, and Stephen saying things are looking good — glad to hear both of those things. Sorry about not showing you those other items, but you can find the links to them in the top part of the notes here. This is actually what I just went through — the upgrade this week and the better behavior — so if you're interested in digging into that ticket, you can find those links there.
Next one up is a blog post on the Kubernetes blog site talking about joining SIG Scalability — they're looking for more people to come contribute to that, which would be tremendous. There's actually another one just recently that I thought was also good, coming from Kong — the Kong ingress controller and service mesh. If you're interested in exploring the other options that are out there for this sort of stuff, they've got a pretty decent blog post.
It has examples of how to set that stuff up. Again on the blog site, we have the Sysdig folks hearing the call, which is awesome: we've got a new article talking about what's new in Kubernetes 1.18. So if you haven't had a chance to review what's changing or what's happening upstream — I love articles like this, because they actually do give a view into what's happening — this is a bunch of the changes that are coming in 1.18. They talk about Kubernetes 1.18 core: they dig through the certificate signing request API moving to beta, some scheduler stuff, taint-based evictions, extending the huge pages feature and moving it to stable (which is awesome), pod overhead, the topology manager — a relatively new thing, but moving to beta, which is awesome — the pod startup liveness probe, and a whole lot more. This is actually a pretty comprehensive article. I haven't read all the way through it myself, but there's a good amount of detail here. So if you're interested in understanding a little bit more about what's happening in the 1.18 release timeframe, this is a very good article — they really get into the weeds and go through all the changes. It's a pretty major release, so definitely check that out. Very, very cool — shout out to Sysdig, nice job.
Next one is an article saying your team might not need Kubernetes. This is from my friend Alex Ellis — he's doing work on OpenFaaS and a bunch of other stuff. If you're not aware of Alex, you should be: he's a pretty smart guy, and he's always working on interesting ways to try to improve things. It has one of my favorite cartoons — you know, you can't solve problems just by saying things.
A couple of ecosystem things. I thought this was pretty interesting: Calico has traditionally been kind of a low-level networking option among the different CNI options, right? They use BGP for handling the routing, they use the Linux kernel for handling the termination of interfaces, all of that stuff. I mean, it's a very clever implementation, but typically it's always been making use of technologies that we've had for some time. So it's really exciting to see Calico starting to embrace eBPF — which is not too surprising, because obviously everybody in the eBPF space is like: we need to figure out how to improve on that. So this is a webinar, hosted by the CNCF, about how Calico networking is working with eBPF. If you're interested in that space — networking and eBPF stuff — definitely check that out.
So, next one: Ansible for Kubernetes. I think — is this LeanPub? This might be LeanPub, yeah. So if you're interested in exploring Ansible for Kubernetes, Jeff Geerling has a book out, and you can pay what you want — you can pay nothing, or you can pay a good amount of money. I always kind of like these sorts of setups, because they describe how much money the author is going to get from the work, and that's really cool — I really appreciate that part. You know, I have a couple of friends who are authors, and I always wonder: if I wanted to go buy a book and ensure that you, the author, got the most reward from that purchase, how would I go about that? This is one of the ways to go about it, and that's pretty cool. Next: Kublr bringing rolling updates to Kubernetes — the newly released Kublr 1.16, a tool that facilitates enterprise-grade Kubernetes. And how does the feature work?
Yes, okay — so it turns out, with OBS, when you change the size of your desktop and then move on from that, I actually had to pick a different desktop from the list to make it take effect. Good to know — learn something new every day. This is a wild TGIK today. Alright — you should all still be able to see me; I'm keeping an eye on the live broadcast to my left, just to make sure that it continues to work.
Let's get down into it here, because, you know, it's been 17 minutes. I'm gonna keep checking chat, and I think it'd be kind of fun to play with some stuff. Alright — yeah, OBS is actually pretty solid for me; it's just that I guess I forgot to check my desktop size before starting my broadcast.
So this is the doc — this is the current doc — and what's funny about this is that I was kind of chuckling a little bit about the week before my last recording, I guess. Of the last three TGIKs: the last one I did was on developer tool setup, kind of a how-to-contribute-to-Kubernetes thing, and the one before that I did cluster API v1alpha2. And then of course, as soon as that released — I think it was either that day, or maybe the following Monday or something — they released v1alpha3, and so any of the directions that you would have tried to follow on the website would have changed. And that's awesome, right — good to see forward progress — but it was like, dang it, right in the middle of that. It is what it is, though. So there have been a lot of changes.
A big part of the big shift is actually the introduction of a thing called clusterctl, and I understand now why I was asked in the previous episode whether I was going to talk about clusterctl — because it's a pretty big deal, and we're gonna explore it a bit today in this session. At the time I was like: why would I explore clusterctl? Come on. Oh wow.
This is something you can work through as well. Because cluster API v1alpha3 is a relatively complex thing, I decided to make my life a little easier and save some commands into a Makefile. So, first things first: I've created an .envrc inside of this directory, which points my kubeconfig at a file inside of my resources directory. And if I look inside that resources directory, there's not much in there at the moment, and that's fine.
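A minimal sketch of the kind of .envrc being described, assuming direnv and a kubeconfig under resources/ (the filename is an assumption, not his actual file):

```shell
# Hypothetical .envrc contents: point KUBECONFIG at a file in ./resources so
# every shell opened in this directory talks to the management cluster.
KUBECONFIG="${PWD}/resources/kubeconfig"
export KUBECONFIG
echo "KUBECONFIG=${KUBECONFIG}"
```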
Give me one second here — what I need to do is go get kind real quick, because, I just remembered, the last time I used this particular computer we had deleted it and rebuilt it from scratch, and I don't want to go through that again. So I'm just going to download it real quick from the releases.
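The download step can be sketched like this; the pinned version is an assumption about what was current at recording time, so adjust as needed:

```shell
# Build the release-asset URL for a pinned kind version.
KIND_VERSION="v0.7.0"
URL="https://github.com/kubernetes-sigs/kind/releases/download/${KIND_VERSION}/kind-linux-amd64"
echo "${URL}"
# On the workstation you would then run:
#   curl -Lo ./kind "${URL}"
#   chmod +x ./kind && sudo mv ./kind /usr/local/bin/kind
```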
So the next thing I'm gonna do is download a bunch of images that are going to be necessary inside of the cluster that we're creating. These are basically all of the different components that are going to be deployed as part of cluster API — I'm just pre-caching them so that we can make them available to the kind cluster without having to pull them from inside the kind cluster, which saves us a pretty significant amount of time. And if we look at the image list, that's basically what is represented here. We have an infrastructure provider controller manager, which is actually the cluster API for docker controller manager. We've got a couple of different versions of kube-rbac-proxy, which is used to secure connectivity down to the actual processes that are running inside. And we have our three cluster API components.
We have the cluster API controller, which handles the creation and management of resources that are defined inside of cluster API; the bootstrap controller, the thing that's actually going to be generating kubeadm configurations; and then we have the kubeadm control plane controller — and this is where things are going to get interesting, because this is a very different change in cluster API that didn't exist before, and it's one of the things I really want to spend time talking about. So that's pretty neat.
Not happening here... okay, right — aha, there we go, okay. So now we can see that I'm attached to the kind cluster that we just brought up. And again, when you actually get this stuff downloaded, you can just do the same thing I was doing — the Makefile's load target for capv3 — which will bring up the kind cluster and load and pre-cache those images inside of it. And if you want to download all the images that are going to be necessary, you can use the cache target, which will download all the images and host them locally in your repository. This is what I mean by that: if I do `docker images`, I can see a bunch of images that are here — including, well, I should find the time to clean this out, I think — but including the images that we've downloaded as part of that image list.
Okay — so once they're in my local docker image list, I can actually tell kind to load them by doing `kind load docker-image` and then just the name and tag from within here. If I wanted to pull in, say, that webpack image, for example — I don't even know what that is. Okay, so I've got version latest. Then I can go ahead and preload that into my kind cluster — whatever the current kind cluster is, by name. In this case I'm gonna be using capv3, so I do `--name capv3`, and it will check to see if the image is already there, and if it isn't, it will pipe it into the containerd implementation inside of those containers, for each of the hosts.
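The pre-cache flow just described looks roughly like this; the image here is a stand-in for any image, and capv3 is the cluster name used in this session:

```shell
# Pull an image into the local docker image list, then side-load it into the
# kind cluster's containerd (these need docker and kind, so they are shown
# rather than executed here).
IMAGE="nginx:latest"
CLUSTER="capv3"
echo "docker pull ${IMAGE}"
echo "kind load docker-image ${IMAGE} --name ${CLUSTER}"
```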
So this is a great way, if you're trying to do development or something while you're offline — you can actually cache the things that you need locally, and that's pretty neat. Alright, so the next thing we're gonna do is deploy clusterctl. Before we do that, let's go back to our docs and dig into how that's gonna work. What they want you to do is create — or what you can do is use — an existing management cluster, which is what we did by creating that kind cluster, and then we set our kubeconfig to something that would allow us to discover it. So if things like kubectl work, then we can move past this. The next thing we're gonna do is grab the latest version of clusterctl, so let's go ahead and do that.
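Fetching clusterctl from the release page can be sketched as follows; v0.3.2 matches the version used later in the episode:

```shell
# Build the release-asset URL for a pinned clusterctl version.
VERSION="v0.3.2"
URL="https://github.com/kubernetes-sigs/cluster-api/releases/download/${VERSION}/clusterctl-linux-amd64"
echo "${URL}"
# Then, on the workstation:
#   curl -L "${URL}" -o clusterctl
#   chmod +x clusterctl && ./clusterctl version
```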
So that's kind of surprising. `chmod +x`... see, `clusterctl version` — we see that it's up to date. Vince says he's on top of the clusterctl "dirty" version thing. Now, the next thing we're gonna do is a little challenging, because before, there were actually instructions here to handle the docker infrastructure provider. I've had to get a little creative about how to do this, but let's go ahead and click through to the docker provider and see what's happening here. And we'll go through the build process — I'm always appreciative of builds that happen in docker containers, because then I don't have to worry about whether the toolchain version I have is up to date with whatever's happening inside the docker container. So we can watch the build happen, pulling in a bunch of dependencies inside of the docker container.
Alright, so: build successful. That's awesome. Yeah — so I'm mauilion because I grew up on Maui and I like lions, and that's why "mauilion." The other reason is that when I originally got it, mauilion was a domain, so you can send me email at mauilion.com and I'll always get it — it doesn't matter what address you send it to. The reason for that, originally, was that nobody could spell Duffie — D-u-f-f-i-e, not D-y — and so I just got a catch-all mail account.
There we go — that's better. Okay, you can see the provider here is infrastructure-docker. You see it's actually gonna be in that capd-system namespace, and this namespace will be labeled as being owned by a provider — it's part of that contract that we were talking about a little bit earlier. And then here are the custom resource definitions for infrastructure-docker, which is what we wanted. There's some leader election stuff happening with cluster API for docker: that way, if you have a couple of them, only one of them will be active at a time, but it affords a faster failover — if you had a couple of them running, only one would be active and the other could take over. Interesting stuff, really interesting stuff. Then we're binding a service account named default in the namespace capd-system. Oh — well, it's kind of weird, though, because they're actually granting the permission to the default service account in that namespace, rather than generating a service account for that particular entity. So.
Alright, so that's one thing. The next thing worth pointing out is this piece here: we're in the deployment of the cluster API controller manager for docker — the infrastructure-docker provider — and we're actually going to be doing a little bit of docker-in-docker stuff. This is gonna be a little complex to understand — it's gonna be layers and layers — but we're gonna talk about it anyway; I think it's worth understanding what's happening in this deployment. And the reason we're doing that is that the cluster API provider for docker is actually responsible for creating infrastructure leveraging docker as your infrastructure provider. That means that when you create a new machine: in AWS, for example, you get a new EC2 instance; in vSphere, you might get a new virtual machine; and with this one here — the cluster API infrastructure provider for docker — it will actually create a new docker container. That's the important part to think about.
Now, there is no controller running yet, but once we actually do our `clusterctl init`, we're gonna see the capd controller running on this particular node — and for the capd controller to have access to docker, I need to expose that into that node. So we're gonna play with that a little bit more and explore it in such a way that it makes perfect sense. "It creates a new docker container from an image" — correct, you are correct, and we'll talk about what the image part here is as well.
In fact, we could actually just do that now. So if I do `docker ps`, this is the running Kubernetes cluster I have on my laptop right now. It's a kind cluster with a single node and no worker nodes — it's just a single control plane node. And this is the image, right: kindest/node:v1.17.0 is kind of like the AMI in this case, or the thing from which you make linked clones — however you want to think about it. That image hosts the operating system, which hosts any of the container images that you want to cache locally, all that stuff. And so with this particular image — because it's a kind cluster, because it's version 1.17.0 — if I look inside it with `docker exec -it <container> bash`, this is just like SSH-ing into a VM, in a way.
If I export KUBECONFIG — that's /etc/kubernetes/admin.conf — I can do `kubectl get pods`, and this administrative credential that's left on the disk can actually show me all of the things running inside the cluster. So, for our intents and purposes, one way to think about it would be that it's like a VM, even though it's really just running as a container with its own pid 1.
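The two steps just walked through — exec into the node container, then use the on-disk admin kubeconfig — look roughly like this, using the cluster name from this session (kind names node containers `<cluster>-control-plane`):

```shell
# Shell into the kind node container, as if SSH-ing into a VM.
NODE="capv3-control-plane"
echo "docker exec -it ${NODE} bash"
# Inside the node, the admin credential kubeadm leaves on disk works as-is:
#   export KUBECONFIG=/etc/kubernetes/admin.conf
#   kubectl get pods --all-namespaces
```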
So we generated our manifest, but I don't think we actually saved it — we just looked at it. Yeah, okay. So let's go ahead and output this to infrastructure-docker; I've already got that directory set up. What that means is that if I go into resources, this is actually all of the resources that I'm going to use for this cluster. Then we move into there and look at the metadata — that should still be okay. So this is the infrastructure-components file that we just created, right? It's got the docker socket mount, it's got all the configuration and everything for our infrastructure-docker piece. That's good. I've also created a cluster template, and we're going to come back to that in just a bit, but right now infrastructure-components is the part that's necessary to understand, and we've seen how we got there.
It requires an absolute path, so I had to go from /home/dcooley through the tgik capd-v3 folder to infrastructure-docker/v0.3.2 — because I just renamed it — and then infrastructure-components.yaml. I need to tell it it's of type InfrastructureProvider, and then we're gonna get into what's happening down here with the Kubernetes version.
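The local-repository override being described lives in clusterctl's config file; a hypothetical sketch, where the path mirrors the renamed folder above and is an assumption, not his exact layout:

```yaml
# ~/.cluster-api/clusterctl.yaml -- hypothetical local provider repository.
# The url must be an absolute path to the provider's components file.
providers:
  - name: "docker"
    url: "/home/dcooley/tgik/capd-v3/infrastructure-docker/v0.3.2/infrastructure-components.yaml"
    type: "InfrastructureProvider"
```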
Right now we're gonna let it deploy whatever the latest version is, but if we wanted to deploy a specific older version of this stuff, we could do that. The infrastructure string — this is telling it which infrastructure provider we're going to use. Because our clusterctl config named this one docker, we're going to use `-i docker` to tell it to use that. We're gonna pick up the kubeconfig from the environment — we're not going to specify one. "List container images required for initializing management cluster" — that's pretty cool, let's explore that. Okay — and this is actually kind of an interesting output, because we didn't tell it what infrastructure provider to use. You can see that all that was being pulled in here was the core cluster API controller and the bootstrap and control plane controllers — but there's no capd in this output, right? So, if I do `-i docker`...
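The comparison being made is between these two invocations (the `--list-images` flag is from clusterctl v0.3; shown rather than run, since they need clusterctl and network access):

```shell
# Without an infrastructure provider: only core, bootstrap, and
# control-plane images. With one: the provider's images are added.
CORE="clusterctl init --list-images"
WITH_DOCKER="clusterctl init --list-images --infrastructure docker"
echo "${CORE}"
echo "${WITH_DOCKER}"
```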
I've told it explicitly what infrastructure provider I want to use, and we can see that the list output is different. First, we can see that docker is using an older version of kube-rbac-proxy, so we're picking up another image there. We can also see that the mauilion capd-manager-amd64 image is in the list. So: we tell it what infrastructure to use; `--target-namespace` — by default, the provider components' default namespace is used; and the watching namespace — the namespace the providers should watch when reconciling objects; if not specified, all namespaces are watched. That's actually kind of a cool feature, because I imagine that at that point you could have the providers pointed at specific namespaces, if you wanted to have multiple providers in the same cluster.
So we're going to grab version 0.3 of cluster API, basically from the internet — from the release page of the cluster API components — and we can actually go and see where that is. When I was looking into how this was working, I was curious: where can I find the core components file? Where can I find the bootstrap components? Obviously we know where this particular piece is — we just put it on disk ourselves — but what about the other ones? There we go. So, at the moment, these are the things that are kind of built in, that it knows how to go find. It knows how to go find the core provider for cluster API — this is where you can find the core components file. It knows how to go find the bootstrap provider — again, that's its location. And on the infrastructure side, it only knows how to find AWS, Azure, and Metal3, which is kind of an interesting one.
So we read the cluster YAML and figured out what we're gonna do: we're gonna install infrastructure-docker; we're gonna install control-plane-kubeadm — good job — bootstrap-kubeadm, and cluster-api core; we're gonna do version 0.3.2 of all of those things. We also install cert-manager, and this is so that we can handle the certificate stuff — you can get more specific about it if you want to; I think I'm just using a self-signed cert here, but we'll take a look in just a minute. Then we wait for cert-manager to become available before anything else happens, and once it does become available, it starts installing those providers. The first one that it tries to install is cluster API core. We can see it — basically, it is almost as though we're doing a `kubectl apply` of that specific file, the ones in the output of the config repositories. Looking at the output of that log file, it's as though I had done something like that myself. That's what `clusterctl init` is gonna do for us: it's gonna figure out all the pieces that we've told it to pull, pull those versions down and install them, and presumably it's going to make sure that they happen in the right order and get all that stuff right. So we can see that happening now. And that is the end of that provider.
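The whole init step just described boils down to a single command (it requires the kind management cluster to be in the current KUBECONFIG):

```shell
# clusterctl init installs cluster-api core, bootstrap-kubeadm,
# control-plane-kubeadm, cert-manager, and the named infrastructure
# provider, in dependency order.
CMD="clusterctl init --infrastructure docker"
echo "${CMD}"
```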
We can now see our capd controller manager and our kubeadm bootstrap and control plane controllers. We're going to talk about this one — this one is actually really cool; I was really surprised by this change in version 0.3.0, and we're gonna explore what it does for us. We also see our capi controller manager running in capi-system, and then you see these webhooks now. I believe the webhooks are there to ensure that we have a way of handling upgrades — so if you had v1alpha2 resources and you needed to be able to move them to a v1alpha3 controller, this is a way of actually handling that upgrade, I think. Correct me if I'm wrong, any of you awesome cluster API people — Vince, Andy, Jason DeTiberus, we've got a bunch of them here. Conversion, defaulting, and validation — yeah, okay, good.
So that way, if you actually had an older version of something, it would be able to convert that for you. And that's necessary because, inside of Kubernetes, for the stuff that's first-class in the API — like Deployments and Pods and things like that — there's already a built-in mechanism within Kubernetes that handles the defaulting, validation, and conversion of those things. But if you're going to define things as a custom resource, then the entity that's creating the controller has to take care of the validation, defaulting, and conversion itself, and one way to do that is through a webhook: when they see a resource come in at a particular version, they can modify it. So — pretty cool stuff. Okay, let's go look at the pods in capd.
Each of these other ones actually has content that it's gonna handle. So if you're gonna use the AWS provider, the clusterctl configuration is where you can satisfy these things: you can set these environment variables in your environment, or you can set them in your clusterctl configuration — either way works with clusterctl. There are some Azure arguments too; they have docker, GCP, vSphere, OpenStack, Metal3. But once we've got that down, now we're gonna create our first workload cluster, and the command that they're calling out here is `clusterctl config cluster`. So let's give that a try — and before we do that, we're gonna need a YAML template for the workload cluster.
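Generating the workload-cluster manifest can be sketched like this; the cluster name and machine counts are illustrative, not the episode's exact values:

```shell
# clusterctl renders the provider's cluster-template.yaml with these values
# and writes a full set of cluster API resources to stdout.
CMD="clusterctl config cluster test --infrastructure docker --kubernetes-version v1.17.0 --control-plane-machine-count 1 --worker-machine-count 1"
echo "${CMD} > cluster.yaml"
```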
What's happening behind the scenes here is that it's gonna go looking for this cluster-template.yaml file under the infrastructure provider that you're currently using — in my case, infrastructure-docker — and it's going to templatize it based on the configuration it's given and the arguments that you pass in, and then give you a full-on, usable configuration of cluster API resources. So this will have our namespace defined, if it's not already defined. This will have the core cluster API Cluster object defined, and here we can pass in some information, like what the cluster CIDR and the pod CIDRs and service CIDRs are going to be. And then we have to create a relationship between that Cluster and the underlying implementation: we need an infrastructure relationship, and we also need a control plane relationship.
This KubeadmControlPlane is going to be necessary for us to be able to bring up those control plane instances inside of that cluster. And then the infrastructureRef is going to tell us: okay, is this cluster going to be in AWS? Is it gonna be in vSphere? What is the infrastructure provider going to use as the object of a cluster? In our case, we're using a DockerCluster. Same thing here: here's where we actually define the DockerCluster, and here's where we define a DockerMachineTemplate that will be used.
We
get
down
into
the
control
plane.
This
is
a
cue,
medium
control
plane.
This
is
one
of
the
biggest
things
in
version.
2
o3,
oh
I,
think
even
bigger
I
think
than
cluster
Ketel,
because
it
represents
a
new
object
or
a
new
behavior
that
cluster
Ketel
can
handle,
which
is
pretty
awesome
and
it
may
have
been
there
in
version.
2
and
I
just
didn't
notice
it,
but
what
this
one
does
is
actually
really
cool.
What
it
will
do
is
it
will
actually
handle
the
lifecycle
of
control,
plane
nodes
for
you
right
so
down
here.
we have replicas: 1, and we have an entire spec that tells it how to stand up and configure, or passes enough information into kubeadm, in such a way that when that particular control-plane node comes up, this is the configuration that will be used to apply it. What version of Kubernetes will we use? We're going to go ahead and register containerd, because again, this is going to be using kind images, and kind uses containerd in its underlying implementation.
So we need to make sure that when we do node registration, we tell it that this is going to use containerd. We're going to pass some kubelet extra args, basically telling it not to worry about disk space, because this is a container image, which is crazytown, but it is what it is. And then we have this replicas field that tells us how many control-plane nodes to create, and because there's only one here, that means the KubeadmControlPlane controller will only create one control-plane node.
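A minimal sketch of what such a KubeadmControlPlane can look like for the Docker provider in v1alpha3; the names, version, and the exact kubelet eviction setting are assumptions for illustration:

```yaml
# Sketch of a KubeadmControlPlane (v1alpha3).
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: test-control-plane
  namespace: default
spec:
  replicas: 1                # the controller reconciles control-plane nodes to this count
  version: v1.17.0
  infrastructureTemplate:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: DockerMachineTemplate
    name: test-control-plane
  kubeadmConfigSpec:
    initConfiguration:
      nodeRegistration:
        # kind images run containerd as the container runtime
        criSocket: /var/run/containerd/containerd.sock
        kubeletExtraArgs:
          # don't evict for disk pressure; the "node" is itself a container
          eviction-hard: nodefs.available<0%,imagefs.available<0%
    joinConfiguration:
      nodeRegistration:
        criSocket: /var/run/containerd/containerd.sock
```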
Then we have the bootstrap. This is where we're actually going to generate the kubeadm configuration template for the worker nodes. That's what test-md is; the md part stands for MachineDeployment. I'm actually using a MachineDeployment to define worker nodes, because they're all the same, unlike control-plane nodes, right? This is one of the key differences. I couldn't use a MachineDeployment to handle the control-plane nodes, because each of those control-plane nodes has etcd. It's very stateful, and it really matters that we handle that lifecycle correctly.
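For the stateless workers, the manifest can be sketched roughly like this; the test-md-0 naming mirrors the episode, while the version and label details are assumptions:

```yaml
# Sketch of the worker-node MachineDeployment (v1alpha3).
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: test-md-0
  namespace: default
spec:
  clusterName: test
  replicas: 3                 # identical, interchangeable worker machines
  selector:
    matchLabels: {}
  template:
    spec:
      clusterName: test
      version: v1.17.0
      bootstrap:
        configRef:            # kubeadm join config stamped out per machine
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
          kind: KubeadmConfigTemplate
          name: test-md-0
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: DockerMachineTemplate
        name: test-md-0
```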
Oh cool, it is brand-new. Nice. So that's a pretty huge change, and that's very exciting, and we're going to play with it here, play with how it works. So, like I said, now we have the object of a MachineDeployment that's going to handle workers, and we have the object of a KubeadmControlPlane that's going to handle the control-plane nodes; the KubeadmControlPlane controller is going to handle the stateful part of the cluster.
We're leveraging kubeadm appropriately here. If we stand up the first control-plane node and we want to bring up a second control-plane node, we have to be careful with our join command, so that we can ensure that the secrets, like the CA secret and all of the other pieces that need to be exactly the same on each node, are copied down or made available on the new control-plane node, and that we then stand up etcd on the new control-plane node and join it to the etcd cluster on control-plane node one. Otherwise it wouldn't really work terribly well as a cluster, right? So that's pretty neat. We have two different ways of managing instances: we have KubeadmControlPlane and we have MachineDeployment, both defined in this manifest.
What is happening next? Here's that document; here's that reference to the infrastructure, which is going to be a DockerMachineTemplate, and then the reference to the kubeadm configuration that's going to be used for the worker nodes, pointing at that KubeadmConfigTemplate. And then this is something I was trying out, but it didn't work, so let's go play with that.
We can actually concatenate the variable with another part, right? So: cluster name plus md-0. That's pretty cool, and any of these variables could be used. We aren't held to just what the system knows about, because we can define whatever we want inside of this space.
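The concatenation trick can be sketched as a template fragment; clusterctl substitutes ${VAR}-style placeholders from its configuration and environment, so a variable can sit right next to literal text:

```yaml
# Template fragment: variable concatenated with a literal suffix.
metadata:
  name: "${CLUSTER_NAME}-md-0"   # e.g. CLUSTER_NAME=test yields "test-md-0"
```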
And this time, when I went through it, we were able to create it, and if we look up here at the top, underneath the namespace, we see that the namespace was converted, that the content of the config map made it in, and that the name of the config map is right. All right, how are we doing on the chat?
Hey, good to see you. All right, okay, so let's go ahead and deploy our first workload cluster; let's get this done. So since we have our configuration here, let's go ahead and pipe that to kubectl apply -f. And, you know, it doesn't like our name. That's all right, though; we don't need the config map, but yeah, I apparently set that name to all caps, and
the templating doesn't catch that; there's a bug where the templating just complains about it. So that's okay. We got our namespace created (in this case it was the default namespace), we have a Cluster generated, a DockerCluster generated, a DockerMachineTemplate generated; all those pieces are done here. So let's go ahead and do kubectl get dockercluster.
If we do docker ps, we can see the nodes being generated, right? So if I do docker ps and I just grep for... actually, if I do kind get nodes with the name of the test cluster, these are all the nodes that have been generated for the cluster. We have our control-plane node, we have our three worker nodes, and we have a load-balancer node, and these are all generated as part of the cluster API provider for Docker, the infrastructure provider for Docker.
A
So
that's
all
happening
now
and
and
unlike
the
v2
version,
where
I
had
to
actually
explicitly
configure
the
control
plane
nodes
as
specific
machines.
Now
they
have
this
cube.
Atm
control,
plane
controller,
where,
instead
of
actually
having
to
configure
each
individual
control
plane,
node
individually
I
can
instead
apply
them.
I
can
instead
just
use
the
control
the
cue
medium
control
plane
to
define
how
many
of
them
I
want
so
pretty
cool.
We could pass, as part of the config arguments, how many control-plane nodes and how many worker nodes we want. We can also tell it that the template is in a config map, which is pretty neat, and we can also pull a template from a GitHub URL: clusterctl config cluster my-cluster with a URL pointing at where the cluster template is, or we can point it at the built-in one. And we had the idea of flavors: workload cluster template variants.
Let's go do... oh, there's one more thing I wanted to do here. So if we do docker ps, we can see we've got the machines up, and if we do kind get clusters, we can see that two clusters have been built. This is the one that we built originally when we started the episode, capi-v3, and then the test cluster has also been created, right? The difference is that the test cluster isn't going to have access to that underlying Docker socket, whereas the capi-v3 one does, so we can keep using capi-v3 to stand up new kind clusters.
A
Wanna
see
EP
control
playing
machine
countess
wordy
so
we're
going
to
create
a
three
node.
Three
worker
cluster,
we're
gonna,
put
the
resources
that
are
holding
this
configuration
into
the
prod
namespace
we're
gonna,
leave
it
running
117
zero,
because
all
that
stuff
is
cached,
let's
go
ahead
and
apply
it.
It complained about our config map again, but we don't really care. Okay, kubectl get machines: I should still, inside the default namespace, only see the ones that were part of the test cluster, but if I do -n prod, I should see the other part of it, right? So I see three worker machines that are pending. This is actually part of the dependency ordering of cluster API.
The kubelets aren't going to really get too far. While that's happening, let's go ahead and grab the kubeconfig; let's go ahead and grab the configuration from our test cluster. So if we do kubectl get secrets inside the default namespace, you can see all of the secrets that were generated as part of initiating this configuration, and most of these secrets are related to resources that are going to be shared across individual controllers, right?
So now we have a kubeconfig for the admin, the administrative user, and what I meant by cert-based is that we can see that it has a client certificate. This client certificate is going to be good for the life of the certificate, up until its expiry. So, like, if we wanted to understand how long this would be good for: this client certificate is encoded in base64, so we can do echo.
So now we're back on the kind capi cluster. This is actually kind of a cool environment-variable trick: you separate kubeconfigs by a path separator and they will all be discoverable. So if I do kubectl config get-contexts, I now see that I have the credential for test-admin and also the credential for my kind capi cluster, and so now I'm going to go ahead and grab the secret, the kubeconfig, for the prod cluster and interact with that one.
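The environment-variable trick just described can be sketched like this; the two file paths are assumptions for illustration:

```shell
# KUBECONFIG accepts a colon-separated list of kubeconfig files;
# kubectl merges them, so the contexts from every file become discoverable.
export KUBECONFIG="$HOME/.kube/config:$PWD/test.kubeconfig"

# With the merged view in place, this would list the management cluster's
# context alongside the workload cluster's test-admin context:
#   kubectl config get-contexts
```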
Also know that the cluster-api folks are working on a way to handle cluster autoscaling. I don't think it's completely done yet, but I know that there's been some pretty good push on getting that in there, so that's another way of actually handling that same problem. So let's go ahead and jump into this node here: I'm going to do docker exec -ti into the prod node container.
Let's go ahead and delete our machine again. So what was happening there is that, I noticed, I did a complete reset, and when I did that, it blew away the CA certificate and a lot of those certificates necessary to do a control-plane join, right? And so what we have to do next is basically bring up a new node. So if you do kubectl get machines -n prod, you don't see it there anymore; it's still provisioning. So: kubectl delete machine.
Yep, so it's continuing to try to do the same thing over and over again, and it's continuing to fail, and so I'm going to let that fail, I'm going to let it drop, and then I'm going to come back and look at it on my own time and not drag y'all along with me. But yeah, that was some pretty wild debugging. The next thing I would do here to try and debug:
this is totally a test provider, yes, but it's fun to use it to actually understand how cluster API works, and tear it apart and chase it around and see what happens and play with the pieces and break it and fix it, and just really build up your understanding of how all of this works, right there locally on your machine, without having to pay the cost of a cloud-provider configuration.
A
Well,
that's
interesting.
This
is
control.
Plane
join
config!
Oh
that's
already
in
there
yeah
they're
both
doing
that.
This
is
the
initial
in
it
and
then
this
is
they
join
well.
What's
interesting
is
it
looks
like
the
docker
provider
is
actually
just
trying
to
do
work
around
some
of
the
configuration
that
cube,
ATM
itself
provides,
and
so
I
might
have
to
look
into
why
that
is.
Sending out how much love to all y'all. I hope you all have a great week. I understand everybody's having a tough time; you know, everybody's doing great trying to work from home and do all that good stuff. So just know that you're not alone, and that, you know, it's hard for everybody, and we've just got to keep on keepin' on until we can get through this. So thank you all so, so much for your time, and I will see you next time. Ciao.