From YouTube: TGI Kubernetes 143: Cluster API Update
Description
Show notes available at https://github.com/vmware-tanzu/tgik/blob/master/episodes/143/README.md
- 00:00:00 - Welcome to TGIK!
- 00:03:20 - Week in Review
- 00:11:20 - ClusterResourceSet
- 00:12:05 - `ktx` sidebar
- 00:12:35 - Back to ClusterResourceSet
- 00:33:05 - KubeadmControlPlane
- 00:40:34 - `clusterctl get kubeconfig` sidebar
- 00:42:16 - MachineHealthCheck
Come hang out with Scott Lowe as he takes a look at the latest iteration of Cluster API. In this episode we'll explore new features found in the most recent versions of Cluster API and see how they work!
All right, hello, everyone, let's go ahead and get started. Wow, my very first TGIK! Man, I am like... I have been thinking about this for a really long time. Right, like I joined Heptio in 2018, and from like the first time they started talking about TGIK I was like, I want to be on TGIK. So it's finally happening, and I'm super thrilled to have all of you here. Welcome, everyone, from all around the world. I see folks from, let's see, Tehran. Wow, that's a long ways away.
I wonder what time it is there. Hey, Jed from Boulder, good to see you, man. Eric Smalling, what's up, buddy. Andy Goldstein, thanks for being on. I saw Vince earlier, so super, super thrilled to have everyone here. Today we're going to be talking about Cluster API, right, and I know that there's been a couple episodes on Cluster API beforehand, but we're going to be doing an update on sort of where Cluster API is now, and then we're gonna show off some new features. Or, well...
Newish, anyway. Features that haven't been shown here on TGIK, as far as we know. Now, before we do that, though, let's talk a little bit about, like, what's been happening in the Kubernetes world, right. So let me pull up some links here real quick. All right, here we go, and we'll open our window. All right, all right. So before we get started, I have a few, you know, sort of like news items I was gonna share with everyone, pertinent to the episode here on Cluster API.
We have new releases of Cluster API and the Cluster API Provider for AWS that just happened earlier today. So let me pull that up. Cluster API itself got released at v0.3.13, mostly bug fixes and a few new features.
One really, really cool feature that I'm going to show you later here is this clusterctl describe cluster command, which is super, super cool. Love that new command; I told some of the other developers in one of the Slack channels that I love it. So I think you guys are really gonna like it if you've worked with Cluster API at all. This is a great, great way to sort of get the status of the cluster, right.
So we're gonna look at that here in just a moment. But so, new release here; you can use clusterctl upgrade to upgrade the Cluster API components on your management clusters, which I will not do on this show, just for the sake of keeping things, you know, relatively sane, so that the stuff that we're gonna demo later actually works. But in addition to this release, we also had a new release of the Cluster API Provider for AWS, now bumped to v0.6.4.
So, a few other things of note. By the way, I'll be periodically looking over here on the side of my screen, because I have live chat running over here, making sure I'm not missing anything. Hello to more folks; thanks for joining, late in the UK there, Nadir. All right, so a few other things. If you aren't familiar with KubeAcademy...
This is an awesome, awesome website. It is sponsored by VMware, but I'm not just mentioning it because it's sponsored by VMware; there's a ton of free training here on KubeAcademy, and so I want to highlight that we recently released a course on Helm. This is 10 lessons on Helm, including, you know, an introduction to Helm's packaging, hands-on with the helm tool, creating Helm charts, using templating, wrapping everything up.
So if you are into Helm, or want to learn more about Helm, this would be a great resource. This, by the way, is a completely free resource. All you got to do is go to kube.academy.
Most of the courses don't even require a login. There are a few courses that are listed as, like, pro courses, that have longer, more in-depth lessons; they require a login, but still no charge. It's all free. So have a look at that also.
I was super thrilled to see this news from fellow TGIK host Josh Rosso. Josh and a few other folks have been working on a book named Production Kubernetes, and they just announced a couple days ago that everything is done and the book is headed off to print.
So that's awesome. I know what it feels like to have worked on a book, and just, like, been slaving over it and working on content, reviewing it, and finally that day comes when the editor tells you, like: hey, you're all done, everything's good, we're shipping it off to be printed. It's like, awesome. The only thing that beats that is when you get the print copy in your hand. So I want to hear from Josh and Alex and... wow, man, I can't remember who else is on there. Okay: Josh, Rich, Alex, and John.
I wanna hear from you guys when you get the physical copies of the books in your hand. Let me know if I'm right, if that's like one of the most awesome feelings in the world, right. Let me move my browser a little bit here, slightly off screen. Okay, cool! So that's awesome. And if you are signed up for... what is that O'Reilly service? I don't remember the name of it. Anyway, that gets you access to the books.
You probably can get access to a pre-release version of the book there as well, as, you know, well as get a copy of it later on. So have a look at that. And then this blog post just went live not too long ago, also by fellow TGIK host Paul, and I won't try to mangle Paul's last name, so you're welcome, Paul.
So, using this as, like, a pull-through proxy sort of thing is pretty awesome. I haven't had a chance to read this yet, but I really appreciate Paul sharing the link with me. I'm going to add it to my list of links, and don't be too surprised if you see it show up in the next Technology Short Take on my site. So, cool. And then another one that is definitely going to make it onto my next Technology Short Take...
...is this post from Michael Gasch, which is sort of a deep dive on etcd, but initially focusing on sort of the list-watch pattern in Kubernetes. So, Kubernetes objects, applying a watch against an object to see changes and events related to the object, so on and so forth. Lots and lots of great detail here.
I think it'll be awesome. Definitely take a look at that. And then one other thing I wanted to bring to people's attention, before we jump into looking at Cluster API and some of the new features, is this CRDs website. CRDs: custom resource definitions. These are documents generated from the CRD code.
This gives you an example of one particular object. In this case it is a Cluster API object; it's for a VSphereMachine, which is the, you know, vSphere-specific object that underlies the Cluster API Machine object. And this gives you a breakdown of sort of all of the fields. It's the schema, right, for the VSphereMachine object. So it gives you all the fields, the expected values; you can, you know, drill into various parts of this, so on and so forth. Super handy, and a lot prettier than kubectl explain, right. So, cool.
Now, with that in mind, then, why don't we go ahead and jump into the content that I have ready for you guys. So we are talking about Cluster API today, and we're gonna be looking at a few different new features in Cluster API. And I say new features in that they aren't super new, as in like they just came out last week or whatever, but as far as we're aware, and we've done a ton of TGIK episodes, they haven't been shown here. Some of those changes just add some new features that really make it easier to perform certain types of operations around cluster lifecycle management with Cluster API, and so we're going to take a look at a few of these things today. I'm going to do it in a little bit of an odd order, I think, but hopefully it'll make sense. So we'll start out.
The first thing I want to start out with is this idea called a ClusterResourceSet, or a CRS. The idea here is, when you apply a new, or when you create a new, workload cluster in Kubernetes, Cluster API just handles sort of the basics, right: bootstrapping the nodes and getting everything set up. And so I'm going to pull some of this up. So first I'm gonna connect to my management cluster.
My management cluster is behind an SSH bastion host, right. And then I'm going to switch over to that kubeconfig. By the way, that ktx tool: I am a huge fan of ktx. It is a fairly simple command-line tool; I'll have a link in the show notes after everything's done here. It's from Heptio Labs, so it's still in the heptiolabs GitHub organization, but it just sets the environment variable for your kubeconfig in whatever terminal you're in, which means you could have different kubeconfigs in different terminals.
Right, I like it, but you know, different strokes for different folks. But anyway, I have a cluster that I have defined in Cluster API...
...using this management cluster. This is on AWS, as you might gather, and it's sitting in us-west-2. And when we look at this cluster, we see that the cluster says Provisioned, right, and if I look at the Machines in the cluster, then I see that they show Running, and so from this perspective it looks like everything's good. And even if I run this new clusterctl... and y'all, don't laugh too much at my typing, okay.
Here we go. Even if I run this... this is the new thing that I was just talking about earlier, clusterctl describe cluster, which shows you sort of the state of a workload cluster, right, one or more workload clusters. And so here I'm pointing it at our workload cluster, which I have very imaginatively named tgik143a, because this is TGIK, and this is episode 143, and this is the first one, so it's "a". And it shows you the status. Now, by all indications...
...this cluster is, like, ready to roll. get clusters shows Provisioned, get machines shows Running, describe cluster shows true, and everything's great, right? But we're missing something here. So if I go and I get the kubeconfig for this cluster...
...I pull out the contents of the Secret, and then, because it's base64 encoded, we have to decode it. And because I'm running on macOS, I have to do a capital D. And then I'm going to put this into my .kube directory as tgik143a. All right, and now I can do this. All right, so now I've switched my kubectl context, my kubeconfig, over to this workload cluster, and if I do a get nodes, what I will see is that they are going to report NotReady. See, like that.
Let's learn something new. And so what we're going to learn, and I haven't used this feature yet, so I'm going to figure it out along with all of you at the same time, is this idea of a ClusterResourceSet, or a CRS, which would, among other things, allow us to install additional things onto a cluster or a group of clusters, a workload cluster or a group of workload clusters, using Cluster API. Right now this is an experimental feature, as you can see on the screen.
This is the Cluster API website, by the way: cluster-api.sigs.k8s.io. Great website. This is an experimental feature, and so, in order for it to show up in your cluster, you have to enable it first, and enabling it is described here in the experimental features page. There's a couple different ways to do this.
If you want to enable the feature before you set up a management cluster, like before you initialize it, right, then you can just set the environment variable here and then run your clusterctl init. You can also set it in the clusterctl.yaml configuration file.
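For reference, the feature gate is just a variable read by clusterctl; a minimal sketch of the configuration-file approach, using the variable name from the Cluster API experimental features docs and the default file location:

```yaml
# ~/.cluster-api/clusterctl.yaml
# Read by clusterctl at init time; enables the ClusterResourceSet
# experimental feature in the management cluster it initializes.
EXP_CLUSTER_RESOURCE_SET: "true"
```

The same name also works as a plain environment variable exported before running clusterctl init.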
Wait a minute, Andy's telling me about clusterctl get kubeconfig. Ooh, I haven't used that one yet. That must be cool. That must be relatively new. Okay, either that or I just haven't been spelunking around enough in the clusterctl command.
The Cluster API developers are, like, running away with awesome stuff here. But for users: if you want to enable it on an existing cluster, which was my case, I already had existing clusters, so I needed to enable it, you have to go in and edit the Deployment for the capi-controller-manager. So I'm gonna show you what that looks like real quick; I'll show you that I've already done that. So let me clear this, and switch back over to my management cluster, and then kubectl...
Okay, so I'm getting the Deployment here, and if you'll, like, notice here, I have already enabled the experimental features, right, and it's already restarted and updated the pods as well. So this is enabled on this cluster. If you haven't enabled it, then you need to go and enable it, either before you initialize the management cluster or by modifying the Deployment afterwards. But once it's enabled, then we can go in and we can use a ClusterResourceSet to install additional things onto the cluster.
Just checking the chat real quick. I had somebody ask how the workload cluster was provisioned by Cluster API. So this cluster in particular was provisioned by... I used clusterctl config cluster to generate the YAML, and that generated this file...
...which is the complete cluster manifest for a Cluster API workload cluster, with a single control plane node and a single worker node, and then just a kubectl apply of that YAML against the management cluster, and then it went off and created it. So, pre-provisioned ahead of time. It's, you know, kind of like that whole cooking show thing where we show you all the pieces...
...and then I say: now, here's one that I prepared earlier for you. Right, so here's a cluster that I prepared earlier for you all. Right, so we want to install CNI. We want to install Calico, in this case; I'm going to use Calico, into this workload cluster. We don't have to do it directly; we want to do it in some sort of declarative way. So to do that, we're going to do a ClusterResourceSet, a CRS, right.
So now that I've set the context of what we're going to work on... like I said, I haven't done this before, so let's try and figure it out here. All right. So I, ahead of time, in advance, reviewing directions and all that, have some files prepared for us. So the first thing I have prepared is this ConfigMap. A ClusterResourceSet is going to use ConfigMaps and, I think, I believe, Secrets.
It will use those to apply stuff to the workload clusters. And I know I have some maintainers in the chat, so feel free to correct me if I'm not presenting this correctly. But this is the Calico YAML, which I won't scroll all the way through, because I doubt that you're interested in that, and for that I should probably just do this instead. But what we're doing is we're taking the content of the normal Calico manifest, the one that we would use in, like, a kubectl apply, right, and we're putting that into a ConfigMap, and we have this ConfigMap called calico-crs-configmap. All right, whoops, all right.
And then I also have an actual CRS definition that defines the ClusterResourceSet. So it says: here's the name of the ClusterResourceSet; here's the namespace, which has to be the same as the clusters that it's going to match against; and then we're matching against a set of labels. So my clusters that have the label cni: calico will then get this resource applied to them.
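Roughly, the pair of objects looks like this; a minimal sketch, with the ConfigMap data truncated and the CRS object name being my guess at what's on screen (the ConfigMap name and the cni: calico label match what's described above):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: calico-crs-configmap
  namespace: default
data:
  calico.yaml: |
    # full contents of the upstream Calico manifest go here
---
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: calico-crs            # hypothetical name
  namespace: default          # must match the namespace of the target clusters
spec:
  clusterSelector:
    matchLabels:
      cni: calico             # clusters carrying this label get the resources
  resources:
    - name: calico-crs-configmap
      kind: ConfigMap
```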
Now, here is where all the documentation that I have found begins to gloss over things a little bit.
So don't be too surprised if you see a blog post come out of this. But what I'm going to do is, I'm going to add these to the management cluster, and let's see if that does what I expect it to do. So first, my thought is that if I apply this to the management cluster, then I'll put the ConfigMap on the management cluster, and then I put the ClusterResourceSet on the management cluster, and then the controller specified by the ClusterResourceSet will go and get that ConfigMap and apply it to matching workload clusters.
All right... oh, I got an error: the server could not find the requested resource. Okay, well, that's interesting! Okay, okay. Well, let's see, do I have an error in my apiVersion, perhaps? Let's see... no, that looks right. Okay, so let's do some troubleshooting here. Resources... mode I don't have to specify, because I think the default mode is ApplyOnce, and we have the metadata. All right.
So, let's see, this resource... it could not find it. So, let's see... oopsie, I'll get CRDs... crap... let's see if the CRD is there. I would think that it would be, but... all right, let's see. Okay: clusterresourcesets.addons.cluster.x-k8s.io. Okay, all right. And if I look at my... I must have a typo here: addons.cluster.x-k8s.io, clusterresourceset, yeah, okay.
Yeah, a few folks in the chat are asking: did I miss the CRD? Well, that's what I was thinking too, right, but the CRD is there. We can see the CRD right here, clusterresourcesets, so it's there. Failed calling webhook default.clusterresourceset.addons... could not find the requested resource, right. Okay, let's get back in here again. Seems Vince is saying the defaulting webhook wasn't installed. Okay, so I have no idea how to fix that.
I did, Fabricio, I did enable the feature flag in... oh, there's a webhook Deployment too! Okay, so kubectl... and is that going to be in the, let's see, it's probably in capi-webhook-system? So, let's see: kubectl... all right, so capi... yep, capi Deployments... so capi-controller-manager. Okay, so let's do that. Let's edit this: in capi-webhook-system, edit the Deployment capi-controller-manager. All right. Now, this is good, because I didn't see this in the docs anywhere. So there's the flag. Here we go. There you go. Okay, so you have to edit that Deployment too.
We'll give it a minute. Let's see, make sure that this has updated recently: capi-webhook, capi-controller-manager, 11 seconds. That's probably the latest ReplicaSet that the Deployment is managing there. Okay, so that looks right. Then let's try this again. Okay, so kubectl apply... that's much better, okay! So now we have both the ConfigMap that contains the CNI manifest that we're going to have applied to the workload clusters, and then we have the ClusterResourceSet defined that will leverage that ConfigMap. So now what we should see is... oh.
All right, Nadir is just pointing out in chat that generally you want to set the environment variable to enable this feature before you do the clusterctl init, which then enables this functionality in the management cluster right off the bat. But for us, again, I had a pre-existing management cluster, so I had to go do this. So you probably don't want to normally go in and edit the Deployment; probably not the best approach. But that's okay, we're learning, we'll work through this together. All right, so: kubectl get crs.
Let's see if that short name works... nope. Okay, clusterresourceset. All right, and then let's describe this.
That double r right there is gonna throw a lot of people off. All right, here we go, so this is a describe. There we go, all right, and it shows what's happening here. Now, let's take a look at... we could probably look at some logs here, but let's just take a look at the workload cluster that I had deployed and see if it's already been applied. Oh, bam! Check that out. Look, the nodes are already Ready, so that means Calico is already running. So, kubectl get pods -A, and we should see the calico-node pods... sweet. Okay, so there you go: using a ClusterResourceSet, I was able to define this. Now, the cool thing is that this was a pre-existing workload cluster, right, but because the label set matched, the ClusterResourceSet found the pre-existing cluster and then went and did an apply. And you probably saw on the screen a moment ago, we were talking about this spec mode, ApplyOnce, right.
Now, if you read a little further down in the text: initially, the only supported mode is ApplyOnce, right. So it's just like a one-shot thing; it'll apply this, and then it's not gonna go back and remediate it. Although, if I recall correctly, I saw some conversations about them exploring an ongoing reconciliation of the ClusterResourceSet against workload clusters. Right, so, cool. It worked. That's nice. Now, the other thing, now that this is present, the other thing that will happen is: any new workload clusters that I create that match...
...the selector for the ClusterResourceSet will then automatically get this. Now, it just so happens, conveniently enough, whoops, that I have the manifest for another workload cluster ready to be applied. So let me just make sure I haven't applied it yet. Don't think I have.
We only have the "a" cluster here. All right, great. So we're going to apply tgik143b, and we're going to create a new workload cluster. Boom, off it goes, and that'll sit and run and cook in the background for a while, while we go play with some other stuff, and then we'll come back and check on it. And what we should find is that it should go all the way through to Ready state without any further intervention on our part; like, we're not going to need to go install CNI.
Somebody in chat noticed something... maybe I should do this in vi instead, so we get line numbers. Okay: on lines six and seven, we applied the labels to this cluster that will cause it to match the label selector that we defined in the ClusterResourceSet. So it's going to create the new cluster, and then the ClusterResourceSet is going to see it, and then bam, off it's going to go, which would be super cool. And Andy's like: magic. I need that GIF of, you know, I think it's Shia LaBeouf, right.
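Those two lines are just labels in the Cluster object's metadata; a sketch of the relevant fragment, with the label pair assumed to be what the CRS clusterSelector matches on:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: tgik143b
  namespace: default
  labels:
    cni: calico   # matches the ClusterResourceSet's clusterSelector
spec:
  # clusterNetwork, controlPlaneRef, and infrastructureRef as generated
  # by clusterctl config cluster (omitted here)
```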
So, ktx. The link is here. It was a simple little project that some folks from Heptio created, and the idea here is that you keep each of your kubeconfigs in a separate file, and then you just use ktx to select between the files, and it sets the environment variable for whatever terminal you're in, which is super handy, because then you could have, like, two different terminals and two different kubeconfigs set, right. Super neat. I love it.
I use it all the time. All right, so we played around with ClusterResourceSets, we made it work. Cool. Now, while that second workload cluster is creating...
...let's talk about one of the other new cool features. Oh, and by the way, while it's creating, I should run clusterctl describe again, so you can see... the last time I used this, everything was done, but now everything is not going to be done, so we'll get a different output. Oh no, that's not going to work!
This is like... it's not Ready, right; we're still waiting on stuff, and then it gives you why it's waiting. In this case there's just an info message that we're waiting on things to happen, right. We're waiting on... in this case, it is in the process of creating the NAT gateways that are used by Cluster API when we instantiate a new workload cluster on AWS.
So again, a quick look at clusterctl describe cluster there. Super, super handy, and again, that's part of the v0.3.13 release that just landed today. But while that is happening, I want to talk about one of the other cool things in v1alpha3, which is the KubeadmControlPlane.
We could just scale the KubeadmControlPlane object, and then Cluster API will handle the process of taking us from one to three, and so that's super, super handy. Now, we could also scale back, but I think there are some concerns around scaling back. I don't remember for sure; I thought I remembered seeing something.
So, if one of the maintainers can keep me honest here and put it in chat as to whether they don't recommend scaling back... if you're going back down, I wouldn't recommend it, but, you know, for testing and other stuff it might work. So here we have the KubeadmControlPlane object for our first workload cluster. We see a control plane object: it's Initialized true, API Server Ready true; it shows the version, shows the number of replicas, so on and so forth. So let's say I want to take this cluster...
...this first cluster, which, in my manifest, was specified as one replica, right. And let's look at that real quick. So, tgik143a...
It's got a kubeadm configuration to bootstrap the cluster, and then here are the number of replicas, right. So I am going to take the declarative approach, and I'm going to change the number of replicas here, which will then trigger Cluster API to reconcile the desired state, which I have stated as three replicas, with the actual state. Right. So... whoops.
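The edit itself is a single field; a minimal sketch of the KubeadmControlPlane fragment, where the object name follows the usual clusterctl naming convention and is my assumption:

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: tgik143a-control-plane   # assumed name
  namespace: default
spec:
  replicas: 3   # changed from 1; Cluster API reconciles toward three nodes
  # version, infrastructureTemplate, and kubeadmConfigSpec left unchanged
```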
I am not a vim expert, so don't laugh at me too much. All right... actually, okay, then I need to do this.
And for the most part, you're going to see unchanged, unchanged, unchanged, and changed. That's because the actual state of the cluster matches what's defined, but you will note right here that this shows configured, right. And if I go and I run my get kcp, which is faster to type than kubeadmcontrolplane, we'll see that it is now in the process of scaling up from one to three, based on what I specified in the YAML, which is extraordinarily cool. All right, Vince is just adding in chat, thank you, Vince...
You can scale down, although features like automatic remediation are not going to be available on a single control plane node. Which is fine; like, automatic remediation, meaning, you know, automatic replacement of a failed control plane machine. If there's only a single control plane node, then it can't, you know, automatically remediate that, right. So that's fine, that's expected, no big deal there. Another reason to run HA configurations. And we'll give this... see, this will probably take a couple of minutes to run... yep, okay, so it's still in the process.
So, while that is playing around, then, we're gonna give that some time, and we're gonna look at the other cool thing that I wanted to play around with today. By the way, I guess before we move on, we should dig into this a little bit more.
So let's do a describe and get the objects, get the full text out of this and see what it says, and then maybe we'll flip over and do some log viewing. All right, here we go, so we're just going to describe on that control plane object. Okay, and right here, you know, right at the top, it's like: it's unhealthy. Okay, cool! I think that's probably... let's see, preflight checks...
All right, server-side fields, okay, so, yeah, okay, looks like that's just... yeah, okay, see the replicas that we specified in the YAML file here, okay, and then last events and status conditions: ScalingUp. Okay. And let's see if that latest event has changed at all... okay, still in the process. Okay, so we'll give that a few minutes. Let's see... that, if I'm not mistaken, is running out of the kubeadm control plane controller.
So let's look at some logs and see if there's anything we need to be worried about. I don't think there is. The kubeadm control plane system... okay, so: kubectl -n capi-kubeadm-control-plane-system get pods, and that'll show us the controller manager. Okay, cool, logs from that, and from the manager container. All right, so this is gonna show some logs, and let's see...
I don't see anything that looks terribly concerning here; a lot of informational error messages, just as it's checking, as it's waiting on things to go live. So, okay, we'll let it sit there and bake. We don't want to rush things too much. Let's check the status of our other cluster. Let's use clusterctl describe to check the status of our other workload cluster.
So, Andy's telling me we shouldn't be looking at logs. Okay, fine, Andy! I will not look at logs. All right. That's why I'm gonna go back and do clusterctl describe cluster.
Let's see what it says about the one that I'm scaling up, right. Oh, that's right, I forgot: I specified the version here, because I didn't want to replace my main version in case I needed it. Okay, so... okay, it shows the control plane objects, all right, all right, that's cool! All right, let's see where "b" stands. This is the one that we created. All right, it looks like the control plane is up.
So, let's see: kubectl get machines, and we'll check the machines for tgik143b, which shows Running, and we'll have to pull down the kubeconfig. Hey, this is where I can use clusterctl get kubeconfig, as pointed out earlier. How does this work? All right: clusterctl get kubeconfig, giving it the name of the workload cluster, all right, tgik143b. And does it output to stdout? Yep, it does. Okay, so let's put that into my .kube directory as tgik143b.
Yeah, that's a lot easier than me using JSONPath. Thanks for that. Okay, and then ktx tgik143b, all right, and let's see where things stand here. All right, well, we do see that the control plane node is up and showing Ready, which means that it is probably already running the calico pods from the ClusterResourceSet, and sure enough... let's see... waiting for it... here we go, yep, there...
...they are. All right, so, just to close out the ClusterResourceSet section, right: showing you that we can use ClusterResourceSets to apply stuff to existing workload clusters, and also to catch new workload clusters as they are provisioned, which is super, super cool. So I like that. All right, now let's switch back to the management cluster and check on our control plane objects and see how things are working there. Okay, great.
So now we have two that are Ready, which means that second one finally came online, and now it's spinning up the third one. So that's great. Cool, okay, there we go. All right. So, while that is continuing to bake, I want to take a look at MachineHealthChecks. This is another feature that I haven't played with. You can see more information about MachineHealthChecks here.
This is on the Cluster API website, and I have a link for this in the show notes, so that you don't have to, like, you know, hurriedly scramble and write it down real quick, or take a screenshot or whatever. And the idea here is that this is a way for Cluster API to perform, like, a health check, right, similar to what it would do for a pod, to make sure that a pod is running and responding, and so on and so forth.
We're doing this for a Machine, and then, if a Machine is considered unhealthy, then it will automatically sort of remediate that, which will trigger a new Machine to be created and then remove the failed one. Here's an example, and we're going to use this example as, like, just the, you know, the starting point for us. Although I think I should probably scale up my number of worker nodes before I do that, because with only one worker node, then it's probably not going to do it...
It's probably not going to work the way we expect it to, and that's because of this maxUnhealthy field, right. Although the default is 100 percent, so as long as I leave it at 100 percent, I don't have to do that. Well, nevertheless, let's go ahead. Let's see: kubectl get machinedeployments. I should have two, one for tgik... yup. Okay, I'm just going to scale the number of nodes in that first cluster real quick.
MachineDeployment tgik143a-md-0, okay. So, this is the imperative way to scale the number of nodes. The declarative way would be to go and modify the YAML, like I did for the KCP, and then, that way, we have it recorded.
The reason, you know, there's a difference there is that if I'm not careful, if I'm not paying attention to what I'm doing, and I run this command, and then I go back later on and I reapply the YAML again, then I'll end up undoing what I just did, and possibly causing an outage.
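For comparison, the declarative form of the same change is just the replicas field on the MachineDeployment; a sketch, with the target count purely illustrative:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: tgik143a-md-0
  namespace: default
spec:
  replicas: 3   # illustrative count; scale workers by editing and re-applying
  # selector and template left unchanged
```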
A
So generally I recommend you stick to the declarative way by modifying the YAML, but for the purposes of this we'll just, we'll fly by the seat of our pants. How's that sound? Okay, so, and it's in the process of scaling up, so it's going to do that.
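As a sketch of the two approaches just described: the cluster and machine deployment names here match the demo, but the API version and field layout are assumptions based on the Cluster API release of that era, so treat this as illustrative rather than copy-paste ready.

```yaml
# Imperative (quick, but a later re-apply of the original YAML undoes it):
#   kubectl scale machinedeployment tgik-143a-md0 --replicas=3
#
# Declarative: change replicas in the manifest and re-apply it.
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: tgik-143a-md0
spec:
  clusterName: tgik-143a
  replicas: 3   # scale the worker nodes here, then kubectl apply this file
```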
Now, let's look at machine health checks. All right, so I copied out the machine health check from the documentation. All right, so here we go: kind, metadata. Let's change this name and we'll call it md0-mhc. Machine health check, very original here. Cluster name: we're going to apply this to tgik-143a. All right, and maxUnhealthy, I believe we can actually... let's just comment this out for now, which means it's 100%, which means remediation is always going to occur. Did I read that right? Machines will be remediated no matter the state of the cluster, right? Okay, all right, very good.
A
I saw somebody asking: any way to lock the configuration according to declarative manifests? Yes: put your manifests in a Git repo and force code reviews or some other process before people edit them. That would be one way of doing that. I'm not sure if there is a technological way of doing that other than restricting permissions using RBAC on the clusters, which would be one way to do it. I guess the other way to do it would be to put some process around it, which you probably should be looking at anyway if you're going to be moving Kubernetes into production. All right, so then nodeStartupTimeout is 10 minutes; this is how long it waits for a new node to join the cluster. Which machines should be health checked? So here's where we need to define the matchLabels.
A
If I recall correctly, then what happens is these machines should automatically have these labels put on them, so I should be able to use those. Let's just copy this one, and let's take a look at `kubectl get machines`. Okay, so the machine deployment's still provisioning, so it's still scaling up. Let's look at the one machine that is running: `kubectl describe machine` on this one.
A
Now, I didn't mention this specifically, but it's in the documentation: machine health checks only work when a machine is being managed as part of a MachineSet, which would also include machines being controlled by a MachineDeployment. So, similar to the way that a Deployment manages ReplicaSets, which then manage the number of Pods, a MachineDeployment manages a MachineSet, which then manages a group of machines. And so, as long as you have machines being managed by a MachineDeployment, or a MachineSet directly, but a MachineDeployment will be better,
A
then you can use a machine health check against that. So I've got the match label here, and then now here are the unhealthyConditions: if Ready is showing as Unknown, or if Ready is showing as False, then it's going to say those machines are unhealthy. Okay, so let's save this, and then let's see, I need to apply this: `kubectl apply`.
A
Whoops, okay, all right. So now I have created the machine health check, and thinking about it now, I probably should have changed the name in case I wanted different machine health checks for different clusters, because md0 is a generic name that multiple clusters are using. Oh well, okay, that's fine. And so let's do a `kubectl get mhc`, and it shows there, and it shows currently healthy and maxUnhealthy. Okay, awesome!
A
Let's see, well, yeah, I guess I could go into my AWS console, which I will not be showing you here on screen, and just shut that machine down. Let's see: instances.
A
All right, so, all right. Well, first let me look at my machine deployment. Did it finish scaling up? It says there are three and that they're all healthy, updated three. Okay, and then, if I do a get machines...
A
Okay, all right, so: cut the timeouts to a smaller value for the demo. Yeah, that's probably a good idea. Andy's just pointing out in the chat that, because I'm doing this kind of live right here, we don't want to wait five minutes for it to detect that it is unhealthy. We would want something shorter. So let's update our health check.
A
So let's change this to something like, I don't know, 30 seconds, just for the purposes of the demo, and then we'll apply this again and that'll change it.
A
Oh, that's interesting. Oh, see, this is why it's useful to have your kubeconfig context displayed on your prompt, and that's because I have my kubeconfig pointing to my workload cluster. So we go here and then apply it, and then it's better. Okay, so now we have that showing as shorter. I wonder if it's worth setting both of those, because I don't know which of these conditions will come up. Let's set this one too. Okay, and then we're going to pick a machine.
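For the demo, that means shortening both unhealthyConditions timeouts, along these lines. The field names come from the MachineHealthCheck spec; 30s is just the value picked on stream for a faster demo, not a recommendation for production.

```yaml
unhealthyConditions:
  - type: Ready
    status: Unknown
    timeout: 30s
  - type: Ready
    status: "False"
    timeout: 30s
```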
A
Okay, so that is now stopped, and it'll take a little bit for the system to update. But what we should see here soon is that it'll detect that the machine is unhealthy, and then we'll see the machine health check kick in. We'll give that a moment. All right, while that is happening, let's check in on our other workload cluster that we created. Okay, so tgik-143b shows provisioned, all right. We saw the machines right up above that, and they show that they're running.
A
So let's run our `clusterctl describe` again and show what it says: `clusterctl describe cluster tgik-143b`. All right, and it shows that everything is up and running and everybody's happy. Awesome. And we already saw that the CNI got installed properly when it applied the ClusterResourceSet. Just refreshing my... nope, okay. Okay, here we go, all right. The node that I shut down is still in the process of shutting down.
A
Okay, oh cool, awesome, check that out. Okay, so bmq4q was the one that we shut down, and notice that Cluster API is showing that as deleting, and it is automatically provisioning another one. So that is cool. That means, as far as I'm aware, the machine health check worked as expected, and therefore detected the machine was unhealthy and automatically started provisioning a new machine, which is super cool.
A
No, it doesn't, okay. But if I were to get the machine deployment, it might show us that, or I could look at logs, but I'm not supposed to look at logs, so we won't do that. Okay, so we've seen three new features in Cluster API so far. We saw the ClusterResourceSet, which allows us to automatically install new and cool stuff, automatically install things like a CNI or a CSI driver.
A
If you need to install a CSI driver, or other things, maybe you need to define storage classes or something like that, you could do that with a ClusterResourceSet. So we saw that; that's very, very handy. I'm going to start using that quite a bit, since I do a fair amount of workload cluster provisioning for testing and stuff, and it's always a pain to have to go back and install the CNI afterwards. So that's cool.
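As a minimal sketch of a ClusterResourceSet that installs a CNI: the names, the label, and the ConfigMap are hypothetical, and the API version reflects the experimental API group the feature shipped under at the time.

```yaml
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: calico-crs
spec:
  clusterSelector:
    matchLabels:
      cni: calico            # applied to every workload cluster with this label
  resources:
    - name: calico-manifests # a ConfigMap containing the CNI YAML
      kind: ConfigMap
```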
A
We also saw the KubeadmControlPlane, or the KCP, which allows us to treat the control plane as a single object, and then we can just scale that, right, rather than having to say, okay, I want to provision two more machines that would then, you know, be spun up and bootstrapped as control plane nodes. Instead, we just treat that whole control plane as one object, which allows us, again, to easily scale the control plane up or down.
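A rough sketch of what that single object looks like; the template name, Kubernetes version, and API versions here are assumptions for illustration, not taken from the demo's actual manifests.

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: tgik-143a-control-plane
spec:
  replicas: 3        # bump this and re-apply to scale the control plane
  version: v1.18.2   # hypothetical Kubernetes version
  infrastructureTemplate:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSMachineTemplate
    name: tgik-143a-control-plane
  kubeadmConfigSpec: {}   # kubeadm init/join settings elided
```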
A
Although, scaling down, again, be aware of sort of the caveats of running a single control plane node. And then we also saw machine health checks, which perform health checks against machines that are being managed by a MachineSet. So if you define a machine individually, which you can do, just define a Machine in the Cluster API YAML, that will not be subject to checks by a machine health check. Only machines that are part of a MachineSet will, and then it will automatically remediate those.
A
So if it shows up as unhealthy, or it's detected as being unhealthy, then it'll automatically remediate it. So let's do this and... okay, I need to do, oh yeah, I need to get machines. Notice here that it shows two ready, one unavailable. Oh, Fabrizio's just pointing out that machine health checks also work now with the control plane machines being managed by KCP. So that's very cool, which means you could define machine health checks to check your control plane nodes and remediate
A
those if you wanted to. Let me get my list of machines here. Okay, so now we show all of the machines running, and if I flip over to my AWS console, let's see: I show the one that I stopped is now terminated, not just stopped, and then it spun up a new one. So that corresponds to what we saw on the screen just a moment ago, which was Cluster API deleting the machine and then spinning up a new one.
A
So, all right, and then let's see, let me check the live chat here. Marcos is asking: any way to assign a single machine health check to multiple clusters?
A
As far as I am aware, yeah, machine health checks are going to be per cluster, because when we looked at the YAML for that, we had to specify a cluster name, which then associates this machine health check with a particular cluster. Now, conceivably, I guess you could...
A
Okay, so, all right. Well, thanks, Andy's just taking off, thanks for being on, Andy, I appreciate that, super helpful. Okay, that is everything that I had to show all of you. Let's see: we saw that the machines were up and running, we saw the KCP, we saw the ClusterResourceSet. I think that about does it, so I will make sure that the show notes have links to all the resources that I showed you.
A
So we saw this one on working with machine health checks, this is off the Cluster API website, and that's already in the show notes, and then working with ClusterResourceSets is already in the show notes as well. I did note that the update to the documentation for `clusterctl describe` has apparently not hit the website yet, so we don't see `clusterctl describe` here. There is `get kubeconfig`, which I just completely missed.
A
I don't know how I totally missed that, right, but this greatly simplifies the process of getting the kubeconfig for a workload cluster, so that is, that's going to make my life, like, so much easier now. I can't believe I totally just didn't catch on to that. Anyway, okay, so that's, I think, that's it! That's all I have.
A
I learned some stuff today; hope you learned some stuff. And then, you know, this will be posted on the TGI Kubernetes YouTube channel for on-demand viewing afterwards, so feel free to, you know, pick it up later on in the event that you didn't get a chance to watch it live. And then, again, the show notes are going to be updated with links to everything and all of that. So all right, all right, I think that's it then. I will see you all next time.