Description
Come hang out with Duffie Cooley as he does a bit of hands-on hacking of Kubernetes and related topics. Some of this will be Duffie talking about the things he knows. Some of this will be Duffie exploring something new with the audience. Come join the fun, ask questions, comment, and participate in the live chat!
This week we will be continuing the Grokking series with: exploring the Controller Manager.
Everybody, happy Friday, and welcome to TGIK episode 93. Today we're going to be doing another grokking episode, and today's subject is the kube-controller-manager, which is gonna be a fun one. There's a lot and a lot of fun stuff to talk about with regard to the controller manager, so welcome. We've made it through the week.

I picked a suitably October-ish color scheme. It was actually kind of a shout-out to the idea of reconciliation loops; you know, like the idea of a thermostat being, you know, something that you set to your desired state, and then a bunch of machinery and temperature checking and all that stuff happens on the back end to make sure that you're still comfortable throughout time. So I thought this would be a great image for this week's TGIK. That was really fun; found it on Unsplash, which is probably where we get all of our images.
Let's see who is with us today. I'm gonna be looking off to my left a little as I go through this episode, because I have the two-screen setup, and this one's actually kind of off to my left a little. So we got Mr. Lu Maddy with us; happy Friday, Lu Maddy. We have Martin Portman from the Netherlands, and Josh from Colorado; he's actually a customer, working with us on customer stuff this week. I got Rory in Scotland; I appreciate that you wrote Scotland and that other word that I cannot pronounce, good to see you. Hello, Joy, and Amin from Strasbourg, and Marko from Milan, Italy, and my friend Steve Sloka, who actually just did a webcast of his own on some of the changes in Contour and stuff. So if you're interested in that, definitely go check that out; I think that was earlier this week. Shahar from Atlanta, and we have Katene from Ashburn, Virginia, Keith from Ireland. And thank you; I like doing them.
The grokking sessions are super fun; I've really been enjoying them. We got Christine from Germany, and Olaf from Copenhagen, Denmark, and Sean from Birmingham, England, Shakeel from Raleigh this time, or in Georgia I should say this time, and North Carolina, that whole area. Hello from New York City, and, from Texas, someone who works with me here at VMware as part of our Customer Reliability Engineering team; it's really a pretty incredible team.

New Delhi, India, and Simone from Gothenburg, Sweden, and Yash from Holland and from India, Bradley from England, and Nick Perry. It's such a great thing to see so many people from all over the place. I'm always just... I will always be surprised by that, like, every time. Such a great audience we have for this series; it's really awesome.
Big news, very important news: the TGIK repo has moved. It is no longer in its normal spot, which was under heptio/tgik; it's been moved to the vmware-tanzu organization. And if you're curious about that, if you want to hear a little more about it, Joe actually did a blog post on what the vmware-tanzu organization is going to be used for. That's actually where we're housing a lot of the existing Heptio open-source projects, like Contour, Octant, and some of the others, so definitely check that out.
The place to be is, if you go to... let's just go there; it even redirects, because it should. github.com, heptio, tgik: okay, this is where you would have normally gone, right? And as soon as I click on that, the good part is, because we just moved the organization over (oh nice, a security alert), because we moved the organization over, it just redirects you automatically, so it should be pretty painless for most folks. But what I wanted to point out here was this Issues tab here.
Basically, you can learn more about hosting your own Kubernetes Community Day, the idea being that we want to get people out there and enabled to kind of spread the word about what is interesting about Kubernetes, what you're able to do with it, what makes people successful with it. So check that out. There's definitely more information about how to learn about both what it is and how it works and how you get started; it's all in here in the event details. Pretty cool stuff.

So if that's the thing that you're interested in doing, it's a great opportunity to kind of get involved. The schedules for all events in San Diego are up, as far as I know; I think that all the schedules are up, and this means that it's time to start putting together your very hectic schedule to determine when
and where you will be in San Diego, if you're going to KubeCon or if you plan on attending any of these events. I've got some other people checking in: we got Naman from Toronto and Amin from Ireland, good to see you both. I've actually linked them under the events right through here. So there's a set of events for the Contributor Summit, which happens right before KubeCon. We have KubeCon; obviously that schedule has been up for a little bit.
Cloud Native Rejekts provides you another opportunity to submit that talk and get it presented, you know, in a day-zero event, right? So it's gonna be right there; that's the same event. I believe it's happening Saturday and Sunday this year, the preceding weekend, so definitely check that out if that's something that's interesting to you. And then we have Cloud Native Security Day, which is a co-hosted event where we're gonna do a lot of talking about the security stuff.

Lane, who used to work with me here at VMware, will be at Cloud Native Rejekts, and then at Cloud Native Security Day I'm gonna be super excited to co-present with Ian Coldwater on abusing Kubernetes defaults. And so all three of those are gonna be great; I'm sure they're gonna be awesome. And I'm just one of many, many, many, many speakers there, so definitely check those things out if you're going to be coming. And if you have not ever been interested
in presenting at a KubeCon: the CFP for KubeCon EU 2020 is open, and it will be open... it actually was just extended. Before, it was supposed to shut right about now, but that was a very short deadline, so right now it's open until Wednesday, December 4th. So if you would like to get a talk into KubeCon EU 2020, definitely check that out. I think it's in Amster... dam, is that right? Yeah, it's in Amsterdam this year, March 30th to April 2nd, so definitely worth checking out. I mean, I think that's gonna be just a really super fun event. And if you get a talk into it: a lot of companies that I've worked with and worked for, if you get a talk accepted into an event like that, they'll offset the cost of going. So maybe that's available to you.
I hope that it is, because it's an incredible opportunity. One of our very own, like, I think he's kind of one of the hearts of the community, Jorge, you know, Castro, was actually just interviewed on the Kubernetes Podcast, so definitely check that out if you're interested in hearing what he has to say. They did quite a lot of really great conversations, and they kind of dug into SIG ContribEx and how that works.
Salesforce put out a project called Sloop, which is a really interesting title, and the tagline is "Kubernetes history visualization": the idea that you might be able to see events that have happened, in a recording, and the resource state changes that have happened over time, so that you can go back and kind of, you know, understand what changed within your cluster. Obviously, as we understand as consumers of Kubernetes, many of the things within the cluster are ephemeral, right?

So you might see a pod get killed because it ran out of resources, or you might see somebody change the version of the underlying container image that's going to be used, maybe a rolling update for all of those changes, or perhaps you see a pod in a loop, failing a liveness check and getting shut down or restarted so that it can become live again, right? So the idea of this project is that you can actually see that timeline.

You can see how that affected the resources that you're able to select, over time. So I thought this was a pretty interesting one, definitely worth checking out. I haven't played with it myself yet, but I think it's an interesting project. It looks like they've got it written down pretty well, and it's pretty active; the last commit was about 21 hours ago.
They're doing it with the multistage Docker build stuff, and so, yeah, check that out; I thought that was pretty neat. Oh, and then there's this scalability tuning on a Tess.IO cluster, which is a pretty interesting one; most of the time we don't really see a lot of people do this. Yeah, so Robert's actually providing a little more information on the podcast: that it aims to come behind developers and users as well, which is true, and specifically in handling both veterans that are deep in knowledge and those just getting started who don't really know anything. And that is a huge divide; I mean, just this week I've seen a lot of tweets kind of around the idea of people saying, oh, how do I even start, like, where do I begin?
Some of the stuff they were getting into is the caching that happens at the API server: you know, making sure that that's actually in a ready state before forwarding information to that API server, so that it could actually take the hit. So you see quite a lot of really good, in-depth information here around how that works and what they did to kind of make that work a little more efficiently.

So if this is interesting to you, definitely check it out; I thought it was a fascinating read and something worth looking at. And then the last one, which is actually from a weekly report that Josh Berkus puts out. And it's not just him; I think it's actually a few people, but Josh started this one. It's called Last Week in Kubernetes Development, lwkd.info. Let's just go over there real quick, lwkd.info. So this is a summary of some of the big changes that are happening, right?
So here's a featured PR from last week. You may have heard about this one; Rory was the one that actually reported it. This was the billion laughs ticket, and this is the merge that actually addresses it. It addresses it by basically limiting the decode size of YAML and JSON documents.
But if you haven't read about that CVE, there is an open CVE against it; it's actually linked down here below. There we go: so this was the report that Rory put out, and what is actually happening here is pretty fun. The idea is that the API server accepts YAML payloads. You have to be authenticated, and you have to have permission to basically submit information to the API server.
But if you do have that permission, then you can actually submit something like this, which will expand in memory and cause the CPU load on the API server to climb, and if it climbs high enough, it starts to do things that you maybe don't want it to do, like shut down or get restarted and those sorts of things. So it's a pretty interesting attack.
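For context, a billion-laughs-style YAML payload leans on anchors and aliases so that a tiny document expands enormously when decoded. This is a minimal illustrative sketch, not the exact payload from Rory's report:

    # Each level aliases the previous one ~9 times, so expansion grows
    # geometrically; a few more levels and decoding takes gigabytes of RAM.
    a: &a ["lol","lol","lol","lol","lol","lol","lol","lol","lol"]
    b: &b [*a,*a,*a,*a,*a,*a,*a,*a,*a]
    c: &c [*b,*b,*b,*b,*b,*b,*b,*b,*b]
    d: &d [*c,*c,*c,*c,*c,*c,*c,*c,*c]
    e: &e [*d,*d,*d,*d,*d,*d,*d,*d,*d]

That is why the fix lives in the YAML/JSON decode path, capping expansion, rather than in any single API handler.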
It's not a new one; that's why the quotes are here. "Billion laughs" has been around since the XML days and stuff, but it is a take on that same attack that was made available then. The cool thing about this is that the commit basically just limits the expansion and makes it so that when this happens you can actually limit the effect. This has been merged as a cherry-pick into...

And that's the thing I was talking about: LWKD actually does a pretty decent job of just kind of highlighting some of the more interesting changes that are happening, and so if you want to be a little closer to the developer news, this is a great way to go about it. They also talk about the next deadlines and where we are in the development process of a particular release of Kubernetes, which I thought is pretty cool. In there I saw a link to this PR, which I have
over in the show notes; I think I'm actually even just using the same HackMD over and over again. And this is our plan for the day: we're going to talk about the kube-controller-manager here. We're going to talk about the init code, the controller code; we're going to give an overview of some of the controllers and where to find more information about the rest of them.

Obviously there are quite a few controllers that are all managed by the controller manager, so we're probably not going to get to each one of them in any great detail, but we'll give kind of a high-level overview of some of them, and we'll also teach you to fish, right: we're gonna go through the process of showing you where you can find that code and where you can learn about what those things are. We're going to talk about this really interesting aspect of how Kubernetes
itself is a loosely coupled system, and that means that the controller manager may not be involved in every call that happens through the system, right? The API server definitely is, but the controller manager may not even affect the lifecycle of a pod, interestingly enough, and we'll show an example of how that works.

We're going to talk about the theory of operation, we're going to talk about leader election (that's where we're going to go back to that PR I talked about), and we'll talk about metrics. So I hope all that's interesting to you, and then let's go ahead and get started here. So actually, let's check in on our chats and see if I forgot anybody else checking in here. We got Mohamed from Paris, Bogdan from Bucharest, we have Ricardo, hello, from the San Francisco Bay Area, and then we've got Robert's comment about the podcast, which is really wonderful. Thank you.
We also add, you know, latency, things like that. We end up, I think, putting too many... having too many fish in one bucket here, if you know what I mean, and that means that the project just doesn't scale generally anyway. We spend way too much time optimizing code to support the type of scale that large clusters would require, and fundamentally I don't know that there's a big benefit to it, right? Like, the idea of having a good control plane that supports, within its own constraints, its own ability to support a number of nodes or a size of cluster, a good efficiency for that code: I think that's actually the right target for designing Kubernetes clusters, right? Figure out what the efficiency for that code is, and then aim for clusters of that size, and then understand whether that constraint is limiting you in your consumption of a project like Kubernetes.
I think it's this one, all right. This question was: can you talk about these controllers, like, what they do, where do they go, how does it work, like, what is one of these things, or what are they for?
Happy to. So this is actually kind of broken out, that same list, right, and I've highlighted or bolded those that are not automatic, because they are disabled by default, and they're disabled by default for a number of reasons that make sense, right? So EndpointSlice is a relatively new API and isn't actually turned on by default. Or, is my audio level low?

Can everybody give me... okay, I can probably turn my volume up a little, I hope; it's alright, because I'm looking right at the mic anyway. EndpointSlice is a relatively new take, and it's actually a really interesting controller, because the idea of an EndpointSlice is to kind of redo the endpoints controller, which is already defined within this list, and achieve kind of a better efficiency for those things.
There's documentation for this; now, it's in alpha. This is where the documentation is, and there's actually some motivation, some of the good stuff that goes into the actual enhancement proposal, on why this was being done and what the goal of it is. But suffice it to say that part of it is that it's a better efficiency for how to actually gather information about those endpoints that are behind services that we define within Kubernetes, and another part of it is really to be able to support better addressing schemes.
Like, you know, now we actually are in a place where we're supporting IPv4 and IPv6; in 1.16, I believe, we delivered on dual-stack IPv6 and IPv4. And so the EndpointSlice is a part of that magic, basically making it so that everything can actually determine both of the addresses that might be related to a specific endpoint, and so that's where some of that information comes in.
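For reference, here's a sketch of what an EndpointSlice object looks like; the field names follow the alpha API of this era (discovery.k8s.io/v1alpha1), and the names and addresses are made up:

    apiVersion: discovery.k8s.io/v1alpha1
    kind: EndpointSlice
    metadata:
      name: example-abc            # slices are normally generated, not hand-written
      labels:
        kubernetes.io/service-name: example
    addressType: IP
    ports:
    - name: http
      port: 80
      protocol: TCP
    endpoints:
    - addresses: ["10.1.2.3"]      # dual-stack pods can be represented too
      conditions:
        ready: true
      topology:
        kubernetes.io/hostname: kind-worker

Because a service's endpoints are split across many slices, a change to one pod only rewrites one small object instead of one giant Endpoints blob.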
Oh, is that any better? Turned my mic up a little bit; check, check, check. It's probably a little better. Well, it looks better anyway on the graph, but let me know if it needs a little more improvement. Anyway, hopefully I didn't blow anybody out just then. So that's kind of the EndpointSlice piece, and that's where the EndpointSlice controller comes in.
So let's go and look at a couple of different things. First, I want to talk about where you can find the code that digs into what these things actually mean, and second, I want to talk about the init code. So let's take those in reverse order; actually, let's do this one first. So we just talked about the EndpointSlice controller, and that is located in the package repository underneath kubernetes/kubernetes.
It is package controller, pkg/controller, and there you'll find the directory for all of the code, right? And so you can typically find the controller piece that's going to do it; you can find endpoint; you can find all of the code that's actually going to be responsible for this particular control loop right here inside of this repository.
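If you want to browse the same spot yourself, something like this gets you there (paths as they exist in this era of the tree):

    git clone https://github.com/kubernetes/kubernetes
    ls kubernetes/pkg/controller
    # daemon/  deployment/  endpoint/  endpointslice/  garbagecollector/ ...
    ls kubernetes/pkg/controller/endpointslice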
And Go, well, it may not be the most readable language in the whole world, but it actually doesn't do too terribly bad, right? It does a pretty decent job of providing you a way of understanding, of trying to be readable, trying to be legible, so when you're looking through trying to determine what a thing is going to do, you can kind of see, for example, functions for adding a pod endpoint, and updating pods, and deleting pods. And so these are the functions that are going to either remove pods from a service, or from a slice associated with the service, or add them in. And so as we see pods come and go, we're going to adjust the number of endpoints associated with a given service, or adjust the endpoint API with those pods. But this part is actually probably only going to implement the endpoint bit; it's not really particularly concerned about the services piece. That's the service's job, all right.
So you can see a lot of those names are pretty consistent across the set, right? Obviously, in this case, there's one that's for certificates, and here you see the approver, the cleaner, the publisher, the signer: these are each control loops that are responsible for some subset of the amount of work. "Is the EndpointSlice controller similar in design to the endpoints controller?" It is; it just limits... well, it doesn't limit the number of pods, each node still needs to know about all the pods.

The difference is that it breaks that data structure up. Instead of providing all of the pods associated with an endpoint, or instead of defining all of the pods associated with a service, in a single queue or in a single table, we break those things up so that, as we add more endpoints, we can change that data a little bit at a time. There's actually a KEP that talks about that, and I'm sure it does a better job of explaining it than I do.
Let's see if we can find that real quick. So if we go back to EndpointSlices: "The Endpoints API has provided a simple and straightforward way of tracking network endpoints in Kubernetes. Unfortunately, as clusters and services got larger, limitations of that API became more visible, most notably including challenges with scaling to larger numbers of network endpoints." So the idea being that we approach this from a data model perspective, right?

EndpointSlices help you mitigate that by changing the data model and providing kind of a topological routing piece, which is actually pretty cool. I mean, I'm actually pretty excited about this technology; I think it's gonna really help for a number of things. It's interesting, especially if we talk about kube-proxy, which is our previous episode, when we talked about how one of the first things that will be affected at scale is iptables, because it was really never meant to scale for that.
That was a lot of things to talk about for EndpointSlices. So: this one is disabled by default (this is why we're not going to go through all of them, because it would be crazy if we tried); it's disabled by default because it's in an alpha state. BootstrapSigner has been around for some time, and I would like to talk about that one as well, and then TokenCleaner.

We can go look at the code for that. So each of these things has a controller defined within that code structure, and if you would like to jump in there and look at how it works and what it's doing, that's where you would go and dig into it, if you want to look at the code. But many of them kind of make sense, right? Like, many of them, just by the title, it makes sense.
So let's back it up and do a quick bring-everybody-onto-the-same-page around the controller manager. So the controller manager: its job, in an abstract form, is to take those higher-level abstractions within Kubernetes and make them lower-level abstractions, right? And so when you're interacting with Kubernetes and you create a deployment, that deployment will create a ReplicaSet associated with the configuration that you provided within that deployment, right? So let me jump into my... so, I have my cluster here.
So I've just created a deployment. But as most of the people who have been playing with Kubernetes for a little while realize, that's not the only thing that just happened right now. What happened here is actually that a number of those control loops that we saw in that list were just affected by this change, by this representation of a deployment. If I do kubectl describe deployment test, I can see the deployment that I created.
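A quick sketch of that demo; the image here is an assumption, since it isn't stated on stream:

    # Create the simplest possible deployment
    kubectl create deployment test --image=nginx

    # The deployment controller makes a ReplicaSet; the ReplicaSet controller
    # makes the pod. The ownership chain is visible directly:
    kubectl describe deployment test
    kubectl get replicaset -l app=test
    kubectl get pods -l app=test \
      -o jsonpath='{.items[0].metadata.ownerReferences[0].kind}'
    # -> ReplicaSet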
I can see, you know, basically what it is, and really this is, like, the simplest possible deployment, right? I've only asked for a single replica, I've given an image, it's made up some labels (app equals test); I'm not defining any ports or host ports or environments or volumes or any of that stuff. This is the simplest possible deployment. But what's interesting is that that deployment isn't responsible for creating pods; it's just responsible for creating ReplicaSets, right? And that's why we see, in the events down below,

"scaled replica set up to 1". It just defines that replica set as well, right? So the new ReplicaSet, test-<hash>, was created by this deployment controller. So as soon as this resource gets persisted to etcd, right: we make a call to the API server saying, here's a deployment object,
I want you to go ahead and make all of the pods associated with it real, right? That gets persisted to etcd. The controller manager, which has a watch against pretty much every resource in the world, creates a shared informer. The deployment controller within that controller manager subscribes to that shared informer and sees that there is a new deployment object that has been created, and that there are no ReplicaSets associated with that deployment object, and it does its work.

It persists that ReplicaSet object down to etcd, right? So that is a different control loop: the first control loop was the deployment controller, and its job was to define that ReplicaSet and scale it to whatever the value of replicas that you defined was, leveraging the configuration, the pod spec or the template, that you provided within that deployment, right? So now we have a ReplicaSet; we can scale it horizontally, we can create more of them or fewer of them.
Whatever it may be, that ReplicaSet was defined within etcd, right? And we saw, via our watch on the controller manager, that now there is a ReplicaSet that's been created. A different controller, the ReplicaSet controller, is now going to see: hey, there's a ReplicaSet that was created, but there are no pods associated with it; I better get to work, right?

So it did it; there's now one of them, right? But this pod isn't really real yet; this pod is just an object that's been persisted to etcd. I'm actually curious how many people know what happens next, right? So we've created the pod, and we've persisted that pod back to etcd. What is the next component within Kubernetes to see this pod?
Now, there we go: scheduled, right? I can see that the next thing to happen, effectively, was that the scheduler saw this pod that was created, and it did its work, right? It successfully assigned this pod to one of my nodes, called kind-worker, and then we see the rest of the lifecycle for this pod, and that's all happening via the kubelet, right?

The kubelet gets to work, all right, and it's going to do the work of pulling the image, it's going to report back whether that was successful or not, it's gonna create the container, and then report back on whether it was successful in starting it, right? And that whole process, from kubectl run or from kubectl create deployment, involved a number of different control loops across a number of our... it's a laptop, give me a break, it's all running locally on my machine.
Pods on kind-worker, right. So again, kind of the high-level view; the first thing that happened when that deployment happened, right: the deployment controller saw that. Well, let me go a little bigger: I first interacted with the API server, and I said, API server, I would like you to create this deployment for me, and here's the spec. The API server then persisted that value to etcd. I have a picture for this.

Ooh, all right, yeah. This isn't exactly the one, but it'll be good enough, right? So here we are, and this is me: I'm a client, I'm interacting with the API server. I say, create me a deployment; that deployment gets persisted back to etcd; etcd returns a success that it's persisted; and the API server returns a success to the client, saying, job done. The interesting thing is that now, from the client's perspective, I've created a deployment. There may be other work happening in the background, but it's asynchronous to me, right? What happens next?
We see the controller manager; it has a watch against the API server, right? It sees that a deployment has been created, and it populates that informer, that shared cache, the shared informer cache. Our deployment controller sees that new object, pulls it off the queue, determines what action needs to be taken: it creates a ReplicaSet. It makes the call to create a ReplicaSet, and it persists that ReplicaSet object back to the API server and then back to etcd. Again, success, right?

So now we see a new ReplicaSet being created. The API server sees that ReplicaSet; the controller manager, having a watch against the API server, detects that new ReplicaSet; the controller manager's ReplicaSet controller does its work, creates a pod, persists that pod object back to the API server, back to etcd, right? And the scheduler says: oh, there's a pod, gotta do work. It populates the nodeName field inside of that pod spec (that's what scheduling does), and it uses predicates, some of the other information that you might define within that spec, like anti-affinity, affinity, nodeSelector, all those things, and then persists that scheduled object back to etcd via the API server. The kubelet sees that there's a scheduled pod, and that's actually how that whole process works. So there are a number of pieces that are really involved here.
But what if we could skip some steps, right? What if I didn't use a higher-level abstraction; maybe I just create a pod. And what if, instead of waiting for the scheduler to allocate what node is going to be associated with that pod, I allocated that pod directly myself? What would happen? What would the behavior of the system be, right? Could we?
What I'm doing here is leveraging kubectl run to create just a pod manifest, and the way that I do that is I set replicas to one and I set restart to Never, and then I do... oh yeah, I wanted to add dry-run. And then that will create a pod manifest. So let's take a look at this pod manifest. So this is our pod manifest.
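The command being described, roughly (flag spellings per kubectl of this era; the image is an assumption):

    # --restart=Never makes kubectl run emit a Pod instead of a Deployment,
    # and --dry-run prints the object without creating it.
    kubectl run test --image=nginx --replicas=1 --restart=Never \
      --dry-run -o yaml > pod.yaml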
I've got an image that I've specified; don't ever do this, because it will just use latest, but, whatever, you know, it's fine for now. I've got a restart policy of Never, and I've got a dnsPolicy, a lot of these things that have just been defaulted. So I'm gonna specify this field: this is me scheduling this pod, right? I'm gonna put it on kind-worker.
Yeah, I think it does, yeah. That's what makes the difference between a pod and a deployment when leveraging kubectl run: if you do kubectl run with restart Never, it'll make a pod; if you leave that argument off, it'll make a deployment. Which is just kind of fun. So there's our pod YAML; it looks fully defined. It's already the lowest possible deployment object.
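With the nodeName field filled in by hand, the manifest looks roughly like this (a sketch; the image and names are assumptions):

    apiVersion: v1
    kind: Pod
    metadata:
      name: test
      labels:
        run: test
    spec:
      nodeName: kind-worker   # doing the scheduler's one job ourselves
      restartPolicy: Never
      containers:
      - name: test
        image: nginx          # untagged, so it implicitly means :latest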
So there's no higher-level construct being used here. I'm not using any crazy scheduling predicates or anything else like that; I've just specified the nodeName, which is what the scheduler would do for this pod. Let's go ahead and create it: kubectl apply -f... pod/test created; get pods, and it's running. And the controller manager is not involved. It's a loosely coupled system; this is my point. The neat thing about the controller manager is that it is a loosely coupled system, or rather, that it represents that Kubernetes is a loosely coupled system.
The same thing could be said for the scheduler. If I wanted to do the work of turning off all the schedulers, I would also be able to see that they are not a part of the creation of this pod. In this context, when I have created a pod object, what happens is a very different flow from what would happen when I created the deployment object, right? Exactly, yeah. When I create the pod in this case, right... I could actually... that's a good point. Let's do kubectl delete pod test.

It put it on kind-worker, and it's running, right? So in that case I actually still used the scheduler, but the controller manager never saw it, and it couldn't have seen it, because it was not running. So, interesting stuff, right? This is a loosely coupled system. This is one of the things I really love about Kubernetes: it's very durable, it's loosely coupled, and each piece is really focused on just its piece of work. It's one of the really neat things about the project. All right.
What else do I want to talk about here? So we talked about how this is working; we talked about turning off the controller manager; we've talked about static scheduling, which is really interesting. It's actually even really interesting from the perspective... I've used it in the past, with different projects, to figure out how to bootstrap things.
Let's talk about controllers. I mean, how do I want to phrase it: we are making use of an incredible piece of efficiency when we put all of these controllers in the same context, right? When we go about that, what we do is we create a shared informer that is responsible for presenting the watch to the API server for pretty much any related resource. So, in theory, the controller manager is watching everything, right, like pretty much everything, because it needs to know when to GC pods.

It needs to know about volumes; it needs to know about ReplicaSets; it needs to know about routes and services. The controller manager has fingers in everything, but it's not in the blocking path for every resource, as we just witnessed, right? Like, I turned the controller manager off, and so, even though it
needs to know about that piece of everything that is related to some of the controller work that it's doing, right, the way that we go about that is we create a shared informer, and that means that we're doing the watch against the API server in one place, and then we're multiplexing the result of that watch out to all of the controllers that are interacting with it, right?

I'm pretty sure that it would be every resource type in this case, right? So it's watching for everything, and then it multiplexes that watch out across all of those controllers. So it's incredibly efficient when you think about it, right? Because if I were to break these controllers out, like, one controller and informer per object type, that would be a significant load on the API server.
And then it does some business logic, like: is it expired? Is it pending? Is it past its deadline? You know, the business logic around what it will do. And then it takes its action, like, if it is expired: what should I do, should I remove it, should I leave it in place? "So do all the controllers live in the same pod as the controller manager?"
What we saw from the logs was that it was unable to start because there was already a lease, and we're going to talk about what that is and what's happening there in just a second. But first, pop up here to the top of the log and watch what's happening for the cluster. I have increased the log level, so we're getting a bunch of information that we probably don't need. What I'm gonna do is just go make it 1.
There we go; a little better, a little more reasonable in the amount of data. So here's our log, and when you see it pick up, it tells us all about the flags that it was able to determine from its configuration, right? Like, how is the controller manager configured? I always really appreciate when things do this, because it helps me determine if I have them configured correctly. A bunch of information: this basically describes all of the flags that I would be looking for, and it'll tell me what it determined was configured
around the metrics endpoint. Then it goes about trying to acquire a leader lease, and originally this failed, because it was able to determine that some other controller manager was the actual leader. And that's probably because, as we turned them off, we were actually playing with the leader election across that set; we were basically causing it to fail over, until it completely failed, because there were no more controller managers left. Then we see the success, right?
We are now the leader, so we better get to work, right? And this is what "get to work" means, right? So now we're going to start kicking up the endpoint controller; we're going to kick up the reflectors that are starting up that in-memory cache around some things. We started the pod GC; we started the resource quota; we're starting the GC for the quota; again, kicking up the caches, the reflectors for things; we've got the monitors in place.

Starting the disruption controller and the service controller: all of these things are getting kicked up, and we can look at the log and watch them all starting up, right? So these are all just processes here. So now, what I wanted to show you before, right: so crictl, yes. If I do a crictl and exec into this... yeah, the controller manager.
Yeah, but standbys, standbys don't run anything at all; they sit there watching this. In fact, why don't we show that real quick? You guys, like, leading you through this is so great. Someone, jump into control plane two, cd /etc/kubernetes, and then we're gonna move the kube-controller-manager manifest.
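Since the control plane components on this cluster run as static pods, stopping one is just a matter of moving its manifest out of the directory the kubelet watches (paths here are the kubeadm defaults):

    # The kubelet runs whatever is in this directory as a static pod,
    # so moving the manifest out stops the component...
    mv /etc/kubernetes/manifests/kube-controller-manager.yaml /etc/kubernetes/

    # ...and moving it back starts it again.
    mv /etc/kubernetes/kube-controller-manager.yaml /etc/kubernetes/manifests/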
We're not seeing much more come in from the logs. But why not? Why can't it acquire that leader lease, right? It is literally not starting anything; it's not continuing any further down the code path than the leader lease code, which happens really early in the process for the controller manager, right? And since it's not the leader, it doesn't do any work; it doesn't start any controllers; it just sits there in hibernation. But let's go back to control plane one and move /etc/kubernetes/manifests/kube-controller-manager...
What I'm going to do is stop the controller manager on that first node and see what happens, right? So if I move /etc/kubernetes/manifests/kube-controller-manager.yaml to /etc/kubernetes, that pod will get killed. The leader election should then open up, because it'll be 15 seconds, I think it is, and then this controller manager, which is running, watching that lease election, is going to see the opportunity, and it's gonna take a shot at becoming the leader. Oh, there we go, and away we go.
"Why does the controller manager start an election and elect itself as the leader in that case?" Because there has to be a leader, right? So there has to be some logic in which... yeah, exactly: because there's no other controller manager around, but it still honors that lease, right? So let's take a look at what that lease means.
There's the acquire time, and then the renew time, and then the leader transitions; quite a lot of leader transitions, as we played around kicking these controller managers back and forth to each other, right? So at the moment, the API is leveraging the endpoints API, just like we talked about for the services and those sorts of things. It's just leveraging that same object model, annotating it, and using that annotation to represent the leader election process back and forth between controller managers.
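You can look at that lease record yourself on a cluster like this one; the annotation key below is the one the controller manager used in this era, and the values shown are placeholders:

    kubectl -n kube-system get endpoints kube-controller-manager -o yaml
    # metadata.annotations:
    #   control-plane.alpha.kubernetes.io/leader:
    #     '{"holderIdentity":"control-plane-1_<id>","leaseDurationSeconds":15,
    #       "acquireTime":"...","renewTime":"...","leaderTransitions":3}'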
"Controllers run inside the controller manager as goroutines." That's correct. "They perform the leader election across multiple replicas to make sure multiple instances aren't all hitting the API." Yeah, that's right; I said channels, but I meant goroutines, you are correct. So, what else have we got: "leader election is more for ensuring that only one manager does the mutating?"
Well, that's part of it, but it's actually that there's no reason to increase the load on the API server if you're not doing any work, so we actually don't run any controllers if you're not the main one. "I would have expected standbys to have an informer cache," which they do not, because it doesn't really take very long to build, and if you think about it, we're still in that, you know, we're still in that
level-triggered design, right? Even if there is no controller manager for some time, as soon as there is one, it will take action and converge on the desired state. So we're not beholden to the idea that the controller manager has to be running the whole time. That's kind of the benefit of the model: if we kick up that controller manager at any point, it will take care of the work. And then, in theory, we could actually shut it down again. Kind of wow.
It looks at the lease to determine if that lease is expired, and if it is expired, then it puts its bid in, right? And this is actually how we're doing the leader election: the next guy comes along and does a check for that leader election value, and if it's already in place, he doesn't proceed; the code won't proceed past that.
A malicious controller manager could totally get pretty hinky. Yeah, I mean, if you even just started another controller manager and disabled the leader election code with the command-line flag, things could get fun, you know, because your controller manager would be doing its thing every once in a while.
Transitions, how many of them there are; what's the lease duration time; holder identity; the renew time. The benefit of this is that it's a real API; we're not just overloading the endpoints object to determine this. Yeah, it's actually used in 1.15, I think; in 1.16... that was actually what I was referring to here, in 1.16.
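That first-class object is the Lease, in the coordination.k8s.io API group. On a cluster configured to use it for the lock, you can inspect it directly:

    kubectl -n kube-system get leases
    kubectl -n kube-system get lease kube-controller-manager -o yaml
    # spec carries holderIdentity, leaseDurationSeconds, acquireTime,
    # renewTime, and leaseTransitions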
So, back to the list of controllers. Attach-detach: relating to volumes being attached on kubelets. BootstrapSigner: this is the one I want to talk about; I'm gonna come back to it in just a second. Cloud node lifecycle: you ever get that experience where you have a bunch of nodes running inside of, like, an AWS or what have you, and one of the nodes gets deleted, and you notice that it's no longer in your kubectl get nodes output? That's because of this; pretty cool. ClusterRole aggregation:
this is an RBAC trick. Within RBAC, you now have the ability to define labels that describe the aggregation of roles (which is a thing I want to do a session on, because it's really cool), but ClusterRole aggregation has the ability to help you build an aggregate rule across a number of different ClusterRoles that you've defined. I think it works with Roles too, but I haven't played with it too much.
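A sketch of what that looks like; the names here are made up for illustration:

    # An aggregated ClusterRole: the aggregation controller fills in .rules
    # by unioning every ClusterRole whose labels match the selector.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: monitoring                      # hypothetical
    aggregationRule:
      clusterRoleSelectors:
      - matchLabels:
          rbac.example.com/aggregate-to-monitoring: "true"
    rules: []                               # managed by the controller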
CronJob: super obvious. If you have cron jobs running, that CronJob resource creates that pod for you, what does it do and when, that sort of stuff; that's all the cron job controller. Certificate signing request approving, certificate signing request cleaner, certificate signing request signing: these are the actual controllers that are responsible for doing this work, approving...
I think that's right, yeah. I think it's still using the Job object; it's just that the CronJob part has, like, a slightly different... you know what the best way to see this is, in my opinion, always? kubectl explain; I'm, like, such a crazy junkie of kubectl explain. kubectl explain cronjob: so here's the API object for a cron job. Let's see, it's in v1beta1. We can see the spec, and inside of the spec for CronJob we have the concurrencyPolicy field, failedJobs
HistoryLimit, a jobTemplate, schedule, startingDeadlineSeconds, successfulJobsHistoryLimit. Many of these things are common across this and the Job spec, but the differences are also very important, right? So many of these things actually kind of carry across, but the difference is that the CronJob is kind of a higher-order one; it's the part that shows the timing and that sort of stuff, like whether to start them all at the same time, what the schedule is, that sort of stuff. It's a higher-level scheduled job.
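The tool being leaned on there:

    # Drill into the CronJob schema straight from the API server:
    kubectl explain cronjob
    kubectl explain cronjob.spec
    # concurrencyPolicy, failedJobsHistoryLimit, jobTemplate, schedule,
    # startingDeadlineSeconds, successfulJobsHistoryLimit, suspend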
The DaemonSet controller handles the creation of DaemonSets. Deployment handles the deployments and the dependent calls, all those things, right? Disruption: this is your pod disruption budget, your PDB; it's actually gonna handle that sort of stuff. Endpoint: we just looked at endpoints, for many good reasons; lots of obvious things here. Garbage collector: there are a number of garbage collectors in here, a number of garbage collector jobs, and they are each different. This one is, if I remember correctly, the one that's going around trying to gather things up and delete them over time. So it's basically like a real garbage collector; it's the big top-level garbage collector that's responsible for determining whether things need to be deleted or not. Like, if there are pods or orphaned dependents, resources that have been created and left behind, this garbage collector is trying to take care of them.
Pretty cool, yeah. So again, if you want to go look at the code and see how it's working and what it's trying to do, this is definitely the path for that, right: kubernetes/kubernetes, pkg/controller. You're going to find the code for each of the controllers that we're looking at inside of here. So this is kind of like the global garbage collector. We have horizontal pod autoscaling; that's where you might create, you know, an HPA policy that's triggering on a queue depth, and if the depth gets to a certain point, then it does a scaling operation on the deployment associated with that worker, right, that sort of stuff. Job: a subset of cron job, if you will. The namespace controller, which is responsible for, sometimes,
the automatic creation of things, I think. Yeah, that one probably is. "Most of them are documented in workloads": good, yeah, I think that's right, and a lot of these things are going to be documented in the docs; I'm just looking at the code. That's a very good point, though: if you are curious about it, if you go to docs.k8s.io, it'll provide you a nice search interface.
Right, so we talked about these things. Node lifecycle. Persistent volume binder and expander: relatively new code, the expander, which is the ability to actually interact with whatever the provider is and, like, make a volume larger. Pod GC: deleting those pods that are terminated over time; just pods, so it's actually just garbage-collecting pods. Then we have PVC protection and PV protection: this means if you mark something for non-deletion, this is the thing that's gonna intercept that and make sure it doesn't happen.

The ReplicaSet controller, the ReplicationController controller, ResourceQuota: all these things are kind of, like, you know, in line with what you would expect. ServiceAccount, ServiceAccount token. TTL and TTL-after-finished: they're both kind of related to jobs again. And then StatefulSet: responsible for breaking those larger objects, StatefulSets, down to smaller objects, as we talked about. Now we're back to BootstrapSigner, so let's see.
Cloud provider routes, yeah. So in some cases you had the ability to interact with your cloud provider to create routes for subnets that were associated with nodes, inside of the cloud provider's VPC routing mechanism. So, within the route table of the VPC, you could actually enter a route saying that if you're going to a subnet that is associated with node one, then you would just create a route in the VPC saying, send that traffic to that one.

Red Hat did something similar with it too, actually. But, you know, it's an older pattern; I'm not entirely sure we even still use this code. I don't work for Google, so I can't say for sure, but I'm not under the impression this is all still used. A lot of this stuff has been moved, obviously, to the cloud provider interface.
So this BootstrapSigner is part of this action, and this is an action that kubeadm leverages, and I think it greatly increases the security posture of Kubernetes in general, because it basically provides for a unique identity per node, so that you can actually do things like have better control when doing things like node restriction.

It's all documented, and it basically talks about bootstrap tokens and the token authentication file, and how those things are created, and all of that sort of stuff. So enable-bootstrap-token-auth true: that actually probably has to do with that bootstrap signer. And then the auto-approval, the CSR auto-approval, is the ability to allow those CSRs that are issued by nodes to be automatically approved, so that the client certificate that the node uses when interacting with the API server is unique per node.
cat /etc/kubernetes/kubelet.conf: this is a kubeconfig that has a copy of the CA certificate associated with our cluster, and it's pointing at the client certificate that we just looked at, the client cert and the key. And so now I'm going to be able to authenticate as this user, right, or as this particular node. So if I do kubectl auth can-i --list, I
can see what capabilities this particular node has, right? So I can see I have the ability to create selfsubjectaccessreviews, I can do get against /api and /apis, I can do create against the selfnodeclient subresource, which is just fun, right? So I can actually create a new certificate for myself, a new CSR, a new certificate signing request for myself, but I can't, like, approve it.
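The check being run there, using the node's own kubeconfig:

    kubectl --kubeconfig /etc/kubernetes/kubelet.conf auth can-i --list
    # Among other things, a node can create CSRs for its own client cert:
    #   certificatesigningrequests.certificates.k8s.io/selfnodeclient  [create]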
So it says: pods "kube-proxy-75z96" is forbidden: node "kind-worker" can only delete pods with spec.nodeName set to itself, right? This is where that admission controller, NodeRestriction, comes in. Because of the way that I'm authenticating, as part of my authentication mechanism, it's able to determine that I am kind-worker, and so I will only be allowed to delete things that are known about by kind-worker. Neat stuff.
Right, so what's happening there... "which controller manager wasn't running?" Ah, I missed that; sorry about that. So once the controller manager was able to actually see it, then we were able to see things like the call to interact with the cluster; you're able to see it go ahead and get the kubeadm configuration, and then we submitted a CSR, and then we saw that CSR approved. So if I do kubectl get csr, I can see the approval.
The controller manager: the reference docs for the kube-controller-manager, and specifically, there's a list of controllers to enable. So if I never wanted them to get cleaned up, if I wanted to handle that process on my own, right, I could actually not approve... I could get rid of the CSR cleaner controller, right? These are all the controllers, and I can explicitly say to enable a particular set, or specifically disable a specific controller.
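That's the --controllers flag; "foo" enables a controller and "-foo" disables it. For example (the controller name matches the registered list on this era's binaries):

    # Run everything except the CSR cleaner:
    kube-controller-manager --controllers=*,-csrcleaner ...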
B
A
B
B
A
A
B
If you're interested in exploring more about the CSR controller, there is a really cool project by Julien... I'm going to butcher the name... that I have used on a number of occasions just to interact with the in-cluster CA. You can actually leverage this tool to interact with the in-cluster CA, provided the right permissions, to allow it to approve the certificate.
These are the metrics that are exposed by the controller manager, and obviously the number of metrics that we see from the controller manager is always gonna be pretty intense, metrics-wise, because it's doing so much work, right? It has so much to talk about as far as the things that we're actually watching. I'm going through this trying to get to the top, but there's a ton of metrics that the controller manager exposes that are valuable, metrics that we want to understand, right?
So here is the top of that list. We are listening on port 10252 and exposing the metrics for all this information, and this information could include things like how many ClusterRole aggregator adds there were, the queue depth (for each controller there's going to be some specific information, right), what the API server client certificate expiration looks like, what the...
We have the work duration, how long it took to do the work; we've got claims; there's just a ton of metrics. Look at the daemon set queue: how long is the DaemonSet queue, like, you issue a new daemon set, how long does it take for that daemon set to actually progress? Same thing for deployments, same thing for ReplicaSets, the disruption check, right? All of the things that you would probably want to instrument in that code, many of them are instrumented here.
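Pulling those yourself from the control plane node (10252 was the controller manager's insecure metrics port in this era):

    curl -s http://127.0.0.1:10252/metrics | head
    # e.g. workqueue depth and latency per controller, client cert expiry, ...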
If there are things that you would want to instrument differently, or in addition: you know, it's an open source project, get involved, it'll be awesome. So, tons of things in the metric output that actually really have a lot of value as it relates to the controller manager, right? It really does provide a lot of information about how the actual process itself is operating. So, tons of metrics. "All custom resource controllers sit outside the controller manager?" That is correct, yeah. "I think I'm understanding my client..." I think you are too, Johar. It is true that, as you add more and more controllers,
you definitely have to think about that resource constraint. Okay: "if a bunch of those controllers are owned by you, consider a shared informer." Not all of them can make use of that shared informer; there is some caching in the API server, so it's not a complete wash, but yeah, if you were very inefficient about, you know, controlling... like, if you had a model where, where were we...

I would say they're, like, slightly less scalable. "The controller-runtime package has access to all the manager leader election stuff too, so if you want to make your own custom controller, you can be more efficient." That's true. "Kubebuilder and Operator SDK both load controller-runtime; they pretty much use the same pattern as this controller manager." Which is true, but for those shared informers to really be useful, you'd have to have multiple controllers or reconciliation loops associated with the same resource, right? And so, typically, in the operator pattern,
you have, like, one pod running that operator, per operator, rather than a single pod that is able to manage the shared informer across multiple of those patterns. Generally speaking, people aren't going to put the etcd operator and your Postgres operator in the same pod, right? So, alright, cool. "So, on the secure port: it's kind of on my mind that it's tied to 127.0.0.1 and that the metrics endpoint is bound to all zeros, so I think it's a bug."
I thought that it had changed in 1.13, so I'll have to go back and look, but thank you all so much for your time. I'm sure that was alright... that's what I thought, but maybe it's a 1.15 thing; kind of blows my mind. Anyway, I'm gonna dig into it, I am, but right now, right now, I want to go and enjoy my awesome weekend, and I think that you should do the same, because it is going to be a beautiful weekend.
So, thank you, thank you, thank you. I learned stuff in this episode; you learned stuff in this episode. "Why don't we have a controller for ConfigMaps and Secrets?" What would we need to modify in those ConfigMaps and Secrets that we would need a controller for? "Thanks for the awesome talk." Anytime; it's been super fun.

Yeah, oh yeah, you know, that's a good point. Well, the kubectl get raw thing is kind of interesting. kubectl get...
It's not wired up that way anyway. Yeah, sorry, you're right; I'm talking about it because I was gonna sign off, and then I got all distracted. Yes, kubectl get --raw is really interesting: if you have the self-link to the API, then you can actually interact with resources. Here's an example of kubectl get --raw real quick, just so we can leave it on a good note. Back to the beauty face; all right, clear.
The server: it'll be one of the API servers that you're interacting with; it may be different because it's a load-balanced endpoint, but typically they will all agree on this output, right? Because this is actually the number of objects that are known inside of etcd, listed by object type, which is really cool. This is a really neat table; I really like this one, because you can see things like how many certificate signing requests there are, and what the ClusterRoles for that are.
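A sketch of that query; the metric name below is the one the API server exposed in this era:

    # Object counts by resource type, as stored in etcd:
    kubectl get --raw /metrics | grep etcd_object_counts
    # etcd_object_counts{resource="configmaps"} 13
    # etcd_object_counts{resource="pods"} 9 ...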
You know, how many ConfigMaps are there; and this is cluster-wide. This is actually super useful information to kind of understand how the API server understands what's happening inside of Kubernetes, or how those resources are stored by etcd. And as you can tell, it actually does include CRDs; it's not just those native resources. Any resource that's been defined against the API is exposed, and it's, like, super, super, super cool. So, alright, thank you very much, and again, enjoy your awesome weekend.
I'm going to go enjoy mine. I have some great plans: I just recently celebrated my 14th wedding anniversary, and so, finally, my wife and I are gonna have some time this week to dig out and enjoy that and celebrate each other. So thank you very much, and I'll see you next time, and if you're coming to KubeCon, I'll see you there. So have a great weekend; thanks again.