Description
Come hang out with Duffie Cooley as he does a bit of hands on hacking of Kubernetes and related topics. Some of this will be Duffie talking about the things he knows. Some of this will be Duffie exploring something new with the audience. Come join the fun, ask questions, comment, and participate in the live chat!
This week we are back on the Grokking series to talk about how workload scheduling works and what it can do for us.
Hey everybody, good to see you all. Let me flip to a little easier-to-read screen there; sorry about that, I got my scenes mixed up. Good to see you. Welcome to episode number 114. We are going to be exploring, or trying to simplify, our understanding of the way that pod affinity and anti-affinity and those things work. We're also going to be introducing some newer, interesting ideas like the topology spread constraint stuff, and we're going to be playing with those things live like we do. Good to see you. Wait, no sound for you?
Can you all hear me okay, or is it still no sound? Okay, good. Oh, someone was freaking out, and I was like, dang. Yeah, it's a really good live background. You have to time it just right, though, because that way when you reach back and touch a flower it looks like it's actually real, even though it's just a live background playing behind you. Anyway, that's what we're going to do this week: we're going to play with different scheduling things and try to keep it easy. It's a little windy out here, but it should be okay; I hope my microphone is sounding pretty good and not picking up crazy wind everywhere. Let me know how it goes. This is my backyard, and let's get started.
Who do we have signing in with us today? We've got Lee Maddy saying hello, happy Friday to everyone, good to see you, Maddie. We've got choco saying hello, and there's a nice technical hello. We've got Lee Gaines from New York. We've got Willie saying happy Friday from Saudi Arabia, Rory from Scotland, and Brandon telling me that he likes my backyard. You know, when we're all able to get back to a place where we can visit each other, I'm totally down to visit with a bunch of y'all. I'm here in the San Francisco Bay Area; if you're ever in town, which could be a little while, feel free to reach out. We'll chat, we'll talk about some things, it'll be fun. We've got Oscar saying hello from Richmond, Virginia. From the podcast world, Mr. Dan Pop is saying hello from Long Island. We've got Ciamis saying it's a very warm Friday night in Tunisia; oh, that would be awesome, I've never been there. Brandon says yeah, that's nice, Rory says he's got sound, which is good to hear, Ansou is here from Paris, and Tim asks: is that really outside? It's totally real, yeah, even my hair.
Right behind me is actually not a fruit tree; I can't remember what it is, but it has these little purple flowers. Off to my right-hand side, over there, is a cherry tree, and behind it a huge black walnut, and then to my left, in the other direction, I have two types of apples, including a Granny Smith. You know what, let's just show that off.
That's what we've got. Hello from Frank; we had Mehrdad signing in from Tehran; we've got Ramesh saying hello from San Francisco. It's good to see you all again. It's great that we're able to get together every Friday, take our minds off of all the craziness, and just focus on the tech and play with some stuff live, so I'm really grateful y'all are here. We've got Carrie saying hello from Russia and Bradley saying hello from the UK. All right.
Well, let's get into the news here and see what we've got; bear with me as I flip to the screen and back over there. All right, so, excuse me. This week, as always, you can find the notes at tgik.io/notes; it's also the very first message in the chat log, so if you scroll up in the log you'll be able to find that link: tgik.io/notes.
It's a really good idea to wait for a couple of patch versions of Kubernetes to go by before adopting a new version. What I mean by that is, I wouldn't pick up 1.18.0, because there are still a lot of things we're learning: we're looking at signal, we're making sure things work, and we're going to catch a few bugs. That's really highlighted if you go and hit up the release notes for Kubernetes 1.18.1, which are these here: you can see the kind of stuff that makes it in, bugs and regressions that we picked up in 1.18.0 and addressed in 1.18.1. The same thing will happen with the rest of the 1.18 series; there will be things that we pick up and fix in later patches as well. So as those things come along, it's definitely worth staying on top of what's happening in that ecosystem before adopting it.
But it's also kind of neat that we've already got a patch version out at this point. We've got new point releases for the older supported versions too: 1.17.6 and 1.16.9. One of the other interesting pieces of news in the community this week is that SIG PM is retiring. Stephen Augustus, who was really hands-on in SIG PM, spent this last while basically driving it.
It also had a number of other amazing contributors, folks like Jaice DuMars, folks like Ihor, Bob Killen, Jeremy Rickard; a bunch of different folks have been a part of SIG PM over time, and they've done an amazing job of trying to bring some project management to the way that we manage Kubernetes. But what has ended up happening is that a lot of that structure has moved into the SIGs, the special interest groups, themselves, so SIG PM is being shut down. The community is still here and everybody is still working on that stuff, so there will still be work going on in that direction. See what you all think of that. Mr. George is saying hello.
Oh, that's right, thanks. All right, cool. And then in the ecosystem, a co-worker of mine, Michael Gasch, went down the rabbit hole; we pulled it up here. Michael actually has a couple of different posts that I think are interesting. One of them is pinned, and it's related to this one here, where the question becomes: have you ever asked yourself whether cgroups, containers, pods, whatever, can impact the Go runtime?
This was a fascinating read, and if you're interested in it, I would definitely recommend checking out that tweet, or the series that he posted. The other one that I thought was really interesting was the one he posted just recently, which was about chasing down a particular issue, and I wanted to bring it up because I felt like there were a couple of really great things about it. It's just a really good, informative ticket; I learned a lot from it, and I imagine you probably would as well. So, Michael Russell, one of my co-hosts on the Kubelets podcast (thekubelets.io): if you're interested in hearing us ramble about distributed systems and all things, definitely check us out there. He's a great, great guy.
The next one up, hold on one second. The next one is KubeWeekly. I was just going to call out KubeWeekly again, because there is a lot of really good Kubernetes content on KubeWeekly every week. It's curated by the CNCF ambassadors: Bob Killen, Chris Short, Craig Box, Kim McMahon (I'm probably butchering their names), and Michael Hausenblas, and I'm also sure I'm messing his name up.
If you're interested in submitting an article, you can email them at kubeweekly@cncf.io. I'm always looking for articles to share with you all every week, and this is a great website for handling that sort of stuff. They break it up into different sections: they have things like headlines, things that have risen up, including this more recent article from Karen Chu at Microsoft about joining the Kubernetes release team and learning from, and giving back to, the community.
The Kubernetes community is amazing, and it's amazing because we have so many perspectives that we're able to bring to bear on the project, so your perspective is very important to us. If you're interested in participating, there are lots of areas where you can help: you can write docs, you can interact with the release team, you can help with any number of things. So if you're interested, just join the kubernetes-dev mailing list and you'll see the invites for those things come through. If that's something that's interesting to you, jump in there. We also have some project news.
Let's dig in here. One of the more recent Kubernetes blog posts that I thought was interesting was about monitoring Kubernetes workloads with the sidecar pattern. I spend a lot of my time telling people to really consider whether the sidecar pattern is important, but I do like that they dug into the reality of what's happening there.
So if you're interested in monitoring as it relates to applications, this is probably worth checking out. I know that a lot of folks use the sidecar pattern for things like Datadog and some of the other third-party integrations. The thing I'm hesitant about with regard to the sidecar pattern is that when you think about aggregating logs into a sidecar, or scraping metrics in a sidecar, what you're doing is basically starting another process on that particular node that is responsible for the interaction with metrics or logs for that particular application. So now you have the process that is your application, and the process that is handling your logs, and you're doing all of that at the pod level, every time, and that can become incredibly inefficient as you grow the number of pods per node. That's one of the reasons I'm hesitant about it: I think the sidecar has to earn its place. There has to be real value it shows over just another container that's handling logs.
Now, there are obviously exceptions to these rules. Some logs need to be secure in transit; they need to be isolated. If I wanted to grab the log from the API server, or the controller manager, or the audit log itself, I might be more careful about the way that I position the logging piece for that, because I want to make sure that nothing can interrupt that log stream. So yeah, that bit is actually really cool.
Can you share the GitHub link? All of my links are going to be either in the notes here or linked off of KubeWeekly; I didn't quite prepare links to drop inside of my chat, but all of those links can be found either at tgik.io/notes or directly from KubeWeekly.
This next article, again from KubeWeekly, was just a great workshop on understanding Kubernetes objects. In this case, what they're describing is leveraging Katacoda to start up a Kubernetes playground and then play with the different objects against the Kubernetes API. I thought this was a great little introduction; it's a great piece of work and it really tries to present things in a simpler place. So if you know folks who are looking to understand Kubernetes at a high level, or just get their hands on it, or who are looking for an entry point to start playing with it, this is a good one, so definitely check it out.
Kube Academy also has stuff like this, but across the whole space, across the entire Kubernetes space, and it's all open source, upstream stuff, so that's another good entry point; kube.academy is pretty good, shout-out to that. This next one I think you all heard me mention last week: the Kubernetes node-local DNS cache is a great feature and it has gone stable now; it's available in 1.18 as stable. I think this article does a pretty decent job of summarizing it.
It explains why it's interesting and why it's helpful as it relates to the way that kube-proxy handles traffic, whether it's DNS traffic or UDP traffic and those sorts of things. But there's also another piece to this, which I didn't see mentioned in the article (maybe I just missed it), but I brought it up last week in the chat, which is that some cloud providers limit the number of DNS packets that can come from any given interface. Think about the standard CoreDNS deployment, and say, for whatever reason, both of those CoreDNS instances landed on the same node. Now any upstream query is going to go against the DNS servers of that particular underlying node where you've deployed CoreDNS, and the number of queries that we see is going to vary wildly.
Some things are not well behaved when it comes to DNS; some things don't cache entries, and there are a lot of really interesting behaviors out there as it relates to DNS. This is partially why you hear some of us complaining about... oh wow, the sun is coming out, that's awesome. Anyway, you hear some of us saying "it's always DNS"; it's those behaviors that we're referring to. It's always something with DNS.
This article does a great job of describing how it works, and it's linked right off of KubeWeekly. You know what, I'm just going to copy it into the notes so I don't forget to do that later. There we go. All right, so, cool: node-local DNS cache, very cool stuff, worth checking out, especially if you run a lot of pods in production.
Human-friendly domains with Knative: Knative is making some changes, trying to make it easier for people to figure out how to make it work with domain names. So if you're interested in Knative, check that out; I'm not going to spend a lot of time there. Some of the other stuff that changed this week: Helm has a new release out, a patch release.
Containers used to be a child process of the Docker daemon itself, and that meant that if you had to restart the Docker daemon, or it was raining and the Docker daemon shut down, all of your containers would also shut down, and that created quite a bit of churn. Now, and this is configurable in Docker and in containerd as well, you can actually restart the containerd daemon or the Docker daemon without affecting the lifecycle of the containers, which is great if you're interested in trying it out.
What are we talking about? Kube Academy, yeah: it's kube.academy, super easy, check it out. I actually have some Kubernetes networking stuff that's going to be going up there, probably in the next couple of weeks, plus some other stuff, basically me yammering about Services; you're all used to that.
A
Then
there's
another
one,
article
introducing
this
thing
called
Pluto,
a
CLI
tool
to
discover
deprecated,
API
versions
in
kubernetes
and
you've.
All
heard
me
rant
about
this
in
the
past
as
well.
I
think
this
is
super
important.
The
tool
that
I've
been
using
most
of
all,
that
the
tool
that
I
talk
about
most
frequently
is
a
tool
called
deprecates,
which
light
which
relies
on
comp
tests
to
do
its
work.
And
so
deprecates
is
a
thing
that
Nicolas
Bernard,
wrote
and
I
thought
it
was
actually
a
really
great
little
set
of
policy
that
describes.
It basically enables you to filter your manifests through conftest and have conftest complain to you about things that are going to be deprecated in the next release, or that are currently deprecated. In this way you could have, say, a pre-commit hook, or, when somebody makes a pull request with a set of manifests, a check in that pull request that tests against conftest, and if API versions are out of date it can complain about it in the PR and your contributor can update things.
I know that Steve Wade has implemented something similar to this, and there are a few other folks; I've also looked at this, and Pluto seems to be a play on exactly that. They now have a Homebrew tap for Pluto, they've got a quick start, and Pluto detects files, test data, Helm releases, and even in-cluster resources. Oh, cool, so it'll actually check things in-cluster; that is cool, right on, Pluto is neat. You can also use Pluto with Helm, so pluto detect-helm will tell you the things that have been deployed that are out of date, which is very cool, and it checks local files. They've also got a model for CI pipelines.
Yeah, that's cool; it looks like somebody basically productized the thing that we were playing with outside of this. This is really neat, so definitely check it out; it looks pretty solid. I might play with it in a future episode; it actually looks really cool. Fairwinds also has a number of other things that are pretty interesting.
One of the other ones they have is RBAC Manager, which is a way of declaratively handling things like RBAC, so you can use git as a source of truth for the configuration of your RBAC in general, and it also handles things like namespace creation. That's pretty neat stuff. There are a few other things that are pretty cool, so if you're interested in that sort of stuff, check those out. We already talked about the 1.18 release.
So, cool: Argo looks like it's doing a pretty decent job of taking care of these CVEs and staying abreast of what could relate to Argo itself, whether the issues are Argo-specific or git-specific; pretty cool stuff. They've got an overview of past issues and known workarounds, neat stuff, and if you're interested in reporting vulnerabilities, they've got a method for that as well, although it looks like it's just straight-up emailing those people rather than, well...
CRI-O: I haven't actually played with CRI-O yet. It might be interesting to do another session on container runtimes; most of my container runtime experience has been Docker or containerd, and I've not really explored CRI-O, but yeah, that would be kind of an interesting episode.
Here's the pop quiz; I'm actually talking to the iPad rather than the laptop. Last week, in the notes, or actually in the chat, we were talking about stuff while Joe was doing his thing, and we came up with an idea of something to cover, and I do not remember what it is, and I haven't gone back to the chat to see what it was. That's why this week we're doing scheduling: I didn't have time to go back into the chat and comb through it for whatever it was.
Okay, so this is my kind cluster. This week we're going to have a multiple-node cluster, because we're going to be playing with things like affinity and anti-affinity, so I'm going to bring up multiple nodes, we'll label them, we'll play with them. Was it the CSI driver? That's what it was: the Secrets Store CSI driver, okay, yeah, great. We will definitely play with that one.
This is my weekly tribute to the amazingness that is kind; it's a great project. It lets us do things like we're doing here, where we're going to basically explore a multi-node cluster. It really is a pretty great project. If you go to kind.sigs.k8s.io, you can find out a lot more about it.
There are a lot of configuration options out there. Kind also lets you build Kubernetes locally on your laptop, even if you're using a Mac with the Docker Desktop thing; there are some challenges in the way networking works on a Mac, but at the end of the day kind is still a pretty great way of managing and playing with multi-node clusters, and it seems to have become kind of a favorite here on TGIK for exactly that.
Kind is a tool that allows you to really get down into the detail of the way your cluster is configured, but also, this is a multi-node cluster. So if I do kubectl get nodes, and I spell it right, yeah, I can see that I have four nodes here, and I can play with things like pod affinity, pod anti-affinity, and those sorts of things, and I believe that on Docker Desktop you only get the one node.
You need to come to the Kubernetes Slack, Prabhakar, because kind is in fact currently the gate for IPv6 work inside of upstream Kubernetes; kind has actually had IPv6 for some time, it's really great, and you can change the node config precisely. So, let's play with this. Let's go ahead and do a kubectl explain; you've seen me talk about this before: pod.spec.
We're going to be working through the examples that are in the Kubernetes docs, and if we find issues with those docs, I'm going to commit those changes as part of this episode, but I'm hoping that things just work appropriately, and so we're going to play with those sorts of things now. Because we're using a kind cluster: kubectl get nodes -o wide... kubectl get nodes -o wide; I'm like, why doesn't that work?
Okay, so in our configuration right now, by default, because we're running all of this locally using a kind cluster, there aren't a lot of labels applied to my nodes, and that's one of the first topics I want to describe: the way that your nodes are labeled, the way that they get these labels around the zone and failure domain and those sorts of things, is typically related to the integration with your cloud provider.
Since I'm not using a cloud provider, I'm using what could effectively be considered a bare-metal cluster here, there aren't going to be a ton of labels on my nodes. I'm not going to have an AWS zone; I'm not going to have a lot of those things, because there's no cloud provider integration or cloud controller manager to apply them to my nodes. But that is a mechanism of the way that the provider integration works, and to highlight what I'm talking about there...
These are the labels I'm talking about: you have labels like failure-domain.beta.kubernetes.io/zone and region, and also instance-type, and those are applied by the cloud controller implementation. So when you bring up a cluster in AWS with the AWS cloud integration provider turned on, you're going to see those labels show up on your nodes, and if you're curious about seeing them, you can jump into one of your existing clusters.
All right, and you can do exactly this command, kubectl get nodes --show-labels, and you'll be able to see those labels as well. Yeah, we're going to get into that, AJ, stay with me, but effectively those labels are being deprecated because they're no longer beta; they're moving into a stable configuration, which also means the label format changed as they moved to stable. We'll talk a little bit more about that here in a second.
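As a point of reference, here is a rough sketch of what that looks like on a cloud-backed cluster; the node name and label values below are made up for illustration:

```
kubectl get nodes --show-labels

NAME     STATUS   ROLES    AGE   VERSION   LABELS
node-1   Ready    <none>   4d    v1.18.2   beta.kubernetes.io/instance-type=m5.large,failure-domain.beta.kubernetes.io/region=us-east-1,failure-domain.beta.kubernetes.io/zone=us-east-1a,kubernetes.io/hostname=node-1,...

# The stable replacements for those beta keys are:
#   topology.kubernetes.io/region, topology.kubernetes.io/zone, node.kubernetes.io/instance-type
```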
The other piece of this that I would really like to highlight, which I think is actually pretty neat, is that, as we can see from this output, there are no hidden APIs. "If a label has no value, the equals is confusing": fair, fair, yeah, it is in this case. And then we do kubectl get nodes; you can see that's effectively what that is, wow, that works, and so you can see that there are no hidden APIs: everything that we're going to rely on in relation to scheduling should be viewable.
The value for the master is already set: you can see that the role for the master, kind-control-plane, is already set to master, and the way that that's done, this is actually what I'm calling out, is that we look at the label on just our master. Let's do kubectl get node.
Yeah, I might add a minus at the end; exactly, so let's do that, except that we have to include the label key for that. There we go, and then if we do kubectl get nodes kind-worker, we can see that the role has been removed: the label that we applied has now just been removed, and if we do kubectl get nodes --show-labels, or just kubectl get nodes, my worker is back to none, because there is no role label configured on that particular node.
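A quick sketch of the label round trip I just did; the worker name assumes kind's default naming:

```
# The ROLES column in `kubectl get nodes` comes from node-role.kubernetes.io/<role> labels.
kubectl label node kind-worker node-role.kubernetes.io/worker=""
kubectl get nodes kind-worker                                      # ROLES now shows "worker"
kubectl label node kind-worker node-role.kubernetes.io/worker-     # trailing "-" removes the label
kubectl get nodes kind-worker                                      # ROLES is back to <none>
```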
In this way you could use that role piece, if you wanted, to configure different subsets of machines, but typically what we end up doing is specifying labels that make more sense. You could actually apply multiple roles, which is kind of wild, but true; we're not going to explore that right now.
So
but
my
point
is
that,
as
we
start
exploring
these
things
like
pod
affinity
and
anti
affinity
and
those
sorts
of
things
what's
interesting
is
that
we
can
it's
that
the
thing
that
we're
anchoring
on
right
when
we're
describing
node
selector
and
when
we
were
describing
topology
key
those
sorts
of
things.
Those
are
gonna,
be
related
to
the
labels
that
are
associated
with
your
nose
right,
and
so
that's
actually
I.
A
Think
one
of
the
first
pieces
to
understand
is
that
when
you
start
playing
with
scheduling
predicates-
and
you
have
to
pick
some
selector
and
we're
gonna-
do
that
here
in
just
a
minute.
It's
important
to
understand
that
the
way
that
selector
works,
it's
going
to
select
nodes,
some
subset
of
nodes
by
label
and
that's
and
these
labels
are
not
invisible
by
at
least
labels
are
like
actually
applied
to
your
nodes.
A
And so if your nodes don't have that label, then that's where it's going to fall apart for you. "How does that label give a role to the node, what's the logic behind it?" Effectively, what's happening there is that when kubectl does a get nodes, it parses whether that role label exists among the labels associated with that particular node as part of the printable output. So the fact that you've labeled it accordingly is how that role gets shown.
Thanks, there we go. So this is the taint on this particular master, saying the effect is NoSchedule and the key is node-role.kubernetes.io/master, and so if you wanted to tolerate this, you would have to be able to tolerate NoSchedule on node-role.kubernetes.io/master, or you could be really wild and just tolerate everything, and we'll talk about that as well as we dig into this whole scheduling piece.
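For reference, a minimal sketch of what tolerating that taint looks like in a pod spec; this is illustrative rather than something we apply in this episode:

```yaml
tolerations:
- key: node-role.kubernetes.io/master   # tolerate the control-plane taint specifically
  operator: Exists
  effect: NoSchedule
# The "really wild" version tolerates every taint:
# tolerations:
# - operator: Exists
```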
But first things first: we're going to play with some crazy magic here.
So here's my manifest for a single nginx pod. If I do a head on it and then apply it, kubectl apply -f, it applies this nginx instance, and then I do kubectl get pods -o wide and I can see that nginx was scheduled to kind-worker, it got an IP address, and it's up and running, all the happy stuff, right?
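The manifest in play is roughly this minimal sketch; the real file is in the episode notes:

```yaml
# nginx-pod.yaml: the simplest possible pod, with no scheduling hints at all.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
```

Then `kubectl apply -f nginx-pod.yaml` followed by `kubectl get pods -o wide` shows where it landed.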
And if I also do that --show-labels thing on the node, I can see the labels that were configured on that node, and I can label the pod just like I did the node; all of those things continue to work. But you can see that this pod has been configured: if I edit this pod, kubectl edit pod nginx, we can obviously see that a bunch of stuff has been populated in the configuration of this pod, including...
...this field right here. Now, for those of you who don't already know, what just happened in that situation was that I submitted a pod manifest to the API server via kubectl apply. The API server did some validation: it made sure the fields that need to be filled in are filled in, it made sure that I have permission to do things, all that good stuff, and when it was happy with all of that preamble, it persisted that pod object to etcd.
You might have heard me talk about this before. Once that pod object is persisted to etcd, the next thing that happens is the controller manager... well, actually, in the pod's case the controller manager doesn't even see it; what happens next is the scheduler. The scheduler sees, hey, there's a new pod object that isn't assigned to a node. This is kind of like a filtered watch: it's saying, for any pod not assigned to a node...
...hit me up; I'm the scheduler, I'm going to figure out how to do that thing. And we even know which scheduler it's going to be: the default scheduler. It's going to figure out what node to associate this pod with, and it picked one that wasn't the master, because I'm not tolerating anything special. I haven't set a node selector, which means it can land on any node that isn't tainted, and it picked kind-worker-1, and we know that it picked that node because it populated that nodeName field. So the scheduler basically said: look, I'm going to persist this nodeName field back into that same object in etcd, and then my scheduling work is done.
You can see that the pod got deleted, right; everybody sees it. We have some questions here. "I'm impressed you're able to focus while hearing yourself slightly delayed": I can't actually hear myself; I'm not doing that, because the delay would kill me. Instead I'm just talking to myself here, and also to you, and that's about it. Good point, Tiffany. And then Harish was saying: let's say, for example, kind-worker is not picking up the pod.
Will the scheduler come into play again and change the node name? The way that functions is a little different. It wouldn't be that the kubelet isn't picking it up (others are saying there's a delay on that side, interesting). What ends up happening is that the controller manager detects that the node is not in a healthy state, for instance if it failed its health check a number of times, and it marks it unschedulable or not ready. The kubelet can do this itself too: the kubelet can mark itself not ready. If the node is marked not ready, the scheduler takes it out of the running for scheduling, but it won't immediately evict the pods that have landed on that node, because it doesn't know whether the node is coming back or not.
It has to wait; I think by default it's five minutes: the node has to be not ready for five minutes before anything will happen as far as rescheduling. Good to know anyway. So let's play with this. I've got my pod there, and I've got my new manifest, nginx-pod.yaml, and I'm configuring it with nodeName: kind-worker.
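That manifest is roughly the same pod as before with the scheduler's decision filled in by hand; nodeName and restartPolicy are the only interesting parts:

```yaml
# nginx-pod.yaml, with nodeName pre-populated so the scheduler has nothing to do.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeName: kind-worker
  restartPolicy: Never
  containers:
  - name: nginx
    image: nginx
```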
Let's go ahead and apply it... oh, but before we do, I want to show you this, because I think this is the coolest part, and it kind of feels like magic. So let's do it: I docker exec into kind-control-plane with bash, into the directory with the Kubernetes static pod manifests, and what I'm going to do is shut off the controller manager and the scheduler. Look out: I move kube-controller-manager back one directory, and I move kube-scheduler back one directory too.
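Roughly what that looks like from the shell, assuming kind's standard kubeadm layout for static pod manifests:

```
docker exec -it kind-control-plane bash
cd /etc/kubernetes/manifests
# The kubelet stops a static pod when its manifest leaves this directory.
mv kube-controller-manager.yaml kube-scheduler.yaml ..
# ...and moving them back later restores both components:
#   mv ../kube-controller-manager.yaml ../kube-scheduler.yaml .
```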
So we can see that my API server is still there, but I don't have a scheduler and I don't have a controller manager. I just have etcd and kindnet and, whatever this thing is, kube-proxy, and my local-path-provisioner. If I jump out, I can do kubectl get pods and see that there are no pods deployed. I can do kubectl get pods -A and see that I don't have a scheduler and I don't have a controller manager, but I can still apply this.
Yeah, don't turn that stuff off in production; thank you, Bogdan. But you just noticed that I deployed a pod with no controller manager running and no scheduler running. Kubernetes is a distributed system, right? We can think of the controller manager and the scheduler as applications in a larger stack that do particular work, and in this case, because I'm only creating a pod...
...there's no need for the controller manager to break a deployment down into pods; that's already been done, I'm already creating a pod. And because I've already populated the nodeName, the scheduler doesn't have any work to do either, and so the next step happens: the kubelet sees that something has been applied, it picks it up, and it runs that pod. "What if I kill the pod?" Aha, again, distributed system. What do you think will happen if I kill the pod?
kubectl get pods: well, it errored, and because restartPolicy is set to Never it won't restart it, but it will tell me that it's gone. If I had set the restart policy to Always, it would have restarted it, and if it were part of a deployment and I did this, it would still work. We're going to come back to that, but stay with me for now. So: delete pod nginx, because I want to show you one more thing in nginx-pod.yaml, which is really important.
I've set my nodeName to kind-control-plane, but that shouldn't work, right? Because we know that the master is tainted. But we also know that I'm not even running a scheduler. So what do you think is going to happen when I run this pod? "It'll create a new pod." Exactly: kubectl apply -f nginx-pod.yaml.
Boom, it lands, it starts running, it's working like a champ. So repeat after me, amazing, faithful audience: scheduling is not a security boundary. By default, it is not. By default, nothing keeps us from being able to defeat scheduling inside of Kubernetes, because we can manually create a pod and pre-populate that nodeName. Nothing keeps us from being able to land any pod we want...
...on any node. Okay, but there are some things that we can do to improve that, and we're going to get to them later in the episode. Let's keep going; I don't want to get too sidetracked here. So what I'm going to do now, after delete pod nginx, is go ahead and do kubectl...
So there we've got our deployment; now we're going to create a deployment and play with this some more. What do you think will happen if I deploy it right now? "There is actually a native way..." There is, yeah; but even so: nope, no scheduling, because I'm not hitting the scheduler at all; the scheduler is turned off right now.
kubectl get deploy: I can see that the deployment is sitting there, but nothing's happening; it's just sitting there. What do I do about that? What do y'all think? Hey, exactly. It's not even "no pods": we're not going to progress at all. There's not going to be a replica set, there's not going to be anything, because the controller manager is turned off, and the controller manager is actually going to take two actions on this deployment.
And in kube-system the kube-controller-manager comes back up, and then, there we go: there's our replica set, and it shows desired and current. Now if we do kubectl get pods -A, or just kubectl get pods, let's keep it simple, we can see that they're Pending. But what are they waiting for?
"Pod updates may not change fields other than spec.containers[*].image... only additions to existing tolerations." But why is that? Anybody know? It's because there's an owner: the owner of this pod is not me as a user, the owner of this pod is the replica set, so I can't modify the fields here; and that's actually just true of pods in general. So I can't edit that, but what I can do is enable the scheduler to do it.
The scheduler is going to come online, and it's going to see those pods just sitting there waiting for things to change. Let's do it with a watch... there we go, and the scheduler has associated them with nodes, and away we go. Now, one more thing on this before we move on. If we do kubectl get pods -o wide, we can see that they were distributed across a couple of different nodes. What if we wanted them all to go to kind-worker-3? Let's do this: kubectl delete deployments nginx...
"Use watch": I know, I know, good point, or k9s. Here we go, all right. And then, how about editing the deployment object for nodeName? You can do that, and that's actually what I just did here: I went ahead and edited the nginx deployment and, underneath the pod template specification, I populated that nodeName field, then applied that manifest, and we can see that they're all now on kind-worker-3.
"Setting nodeSelector or node affinity would go through the scheduler, correct; setting nodeName wouldn't." Also true, yeah; that's a pretty big difference. There's another one as well, which is that nodeSelector can use those labels that I was talking about, whereas nodeName is the literal name of the node. Exactly, Vivian, rocking it, good job. All right, cool: delete -f nginx-deploy. So let's play with the node affinity stuff; let's edit the nginx manifest.
So I'm going to add a nodeSelector... I think I've applied that correctly, yeah, nodeSelector, okay. I've gone ahead and modified my manifest: kubectl apply -f nginx-deployment, kubectl get pods -o wide, and we can see that they all picked, yeah exactly, pickme equals empty quotes, they all picked kind-worker, because it was the only one with that label at the time. Now here is a troubleshooting step that I wish I had known earlier on: kubectl get nodes -l pickme.
Everybody see that? In this way I can determine which nodes are going to be in the running by filtering nodes on labels, and so if I do kubectl get nodes -l pickme, then I only have one node that has that label.
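A sketch of that label-and-select round trip, using the hypothetical pickme label from this demo:

```
kubectl label node kind-worker pickme=""     # label exactly one worker
kubectl get nodes -l pickme                  # shows only the nodes a matching selector could pick

# In the deployment's pod template, the corresponding selector is simply:
#   nodeSelector:
#     pickme: ""
```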
And if no node had a matching label? It won't work, correct: you'd just get a pod stuck Pending. You've got to describe the pod, nginx-9kf-something, yep, and you get this sometimes hard-to-understand error in the describe output: failed scheduling, 0/4 nodes are available: 1 node had the taint node-role.kubernetes.io/master that the pod did not tolerate, 3 nodes did not match the node selector. I mean, if you read it out loud like that, it kind of makes a little more sense.
Yeah, tolerations can be kind of wild. You definitely don't want to make every manifest tolerate everything, because then things are going to get ugly quick; you'd have kind of overridden the scheduling piece. You asked me about it, but there were a couple of other people who also asked here in the chat: basically those labels are being deprecated in favor of the newer ones.
A
The
node
restriction
admission
prevents
cubelets
okay,
so
this
is
just
a
little
shout-out
to
node
restriction.
Isolation.
We've
talked
about
this
before
this
is
a
node
admission
controller
and
if
you
go
back
to
the
admission
controller
episode
and
the
grokking
stuff,
you'll
find
me
talking
about
node
isolation
and
node
restriction,
it
is
usually
important
and
what
node
restriction
does
as
it
relates
to
scheduling.
Is
it
basically
enforces
what
configurable
things
the
node
can
configure
about
its
own
record
right?
We
don't
want
the
pod.
We
don't
want
the
cubelet
to
be
able
to
real
able
itself.
A
We
want
the
cubelet
to
only
be
able
to
like
modify
things
that
the
cubelets
should
be
labeling
right
and
so
the
admission
this
particular
admission
plugin
prevents
cubelets
from
setting
or
modifying
labels
with
a
node
restriction,
kubernetes
IO
fix,
and
that
means
that
those
things
that
you
said
to
my
with
that
prefix
are
not
modifiable
by
the
cubelet.
So
it's
a
better
thing
to
rely
on
in
some
cases.
So
if
it's
true
PC
is
DSS,
true,
that
sort
of
thing
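A quick sketch of the distinction; the node and label here are hypothetical:

```
# With the NodeRestriction admission plugin enabled, kubelets cannot set or modify labels
# under node-restriction.kubernetes.io/ on their own Node objects, but a cluster admin can:
kubectl label node kind-worker node-restriction.kubernetes.io/pci-dss=true

# A selector anchored on that label is therefore harder for a compromised node to satisfy
# about itself than one anchored on labels the kubelet manages.
```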
Affinity and anti-affinity: let's play with it. We talked about nodeSelector. The affinity and anti-affinity feature greatly expands the types of constraints you can express; the key enhancements are that the language is more expressive and that there are both soft and hard preferences, plus a number of other things, and we're going to work through some of these examples. For node affinity, the docs talk about how we can ensure that pods are spread across nodes, and then we also have inter-pod affinity and anti-affinity, which are expressed via a topology key. I think the best way to understand this is to play with it, so let's go ahead: I'm going to grab this manifest here, and let's read it in such a way that it makes sense to us. What we have is a pod defined.
Let's just read through this together; I think that's actually the one I wanted to start with. All right, node affinity: here's an example of a pod that uses node affinity. Inside of the affinity section in the spec it says nodeAffinity, requiredDuringSchedulingIgnoredDuringExecution, meaning we're going to be informing the scheduler, and "ignored during execution" meaning that it's a one-time decision; once the scheduler makes that decision, that's where the pod is going to live. Then nodeSelectorTerms, matchExpressions: key kubernetes.io/e2e-az-name (which we don't currently have set), operator In, and it's looking at the values of that field, e2e-az1 or e2e-az2. And then we have a preferredDuringSchedulingIgnoredDuringExecution with weight 1, and we're saying: if you can satisfy the first and you can satisfy the second, go with the combined result.
If you can only satisfy the first, then satisfy the first. So the node affinity rule says the pod can be placed on a node with a label whose key is kubernetes.io/e2e-az-name and whose value is e2e-az1 or e2e-az2. In addition, among nodes that meet that criteria, nodes with a label whose key is another-node-label-key and whose value is another-node-label-value should be preferred. Should we try this out? What do you all think?
Okay, so at the moment, if I were to just schedule this as it is, it would not schedule, because it's required during scheduling that I find some nodes that match. So let me clean this up a little bit, because I want to be a little simpler about the name: instead of the long kubernetes.io key, we'll just call it zone.
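The cleaned-up manifest is roughly the following sketch; zone is the simplified key and pickme is the preference label from earlier, so the exact values here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard rule: only nodes labeled zone=one or zone=two
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: In
            values: ["one", "two"]
      preferredDuringSchedulingIgnoredDuringExecution:  # soft rule: among those, prefer nodes carrying pickme
      - weight: 1
        preference:
          matchExpressions:
          - key: pickme
            operator: Exists
  containers:
  - name: with-node-affinity
    image: k8s.gcr.io/pause:2.0
```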
kubectl apply -f pod-affinity.yaml, kubectl get pod, and we see Pending, and if we look at the error for the pod, kubectl describe pod with-node-affinity, we see that of all four nodes, none of them matched our requirements. Y'all see that? You might need... yeah, exactly: kubectl label node kind-worker...
Did you see that? It's scheduled. But here's the pod goody: there's no nodeName pre-populated in the manifest, so it actually has to go through the scheduler to get scheduled, and now we actually had one node that matched. And because the second part is preferred, and we weren't able to find a node carrying the preference label, the second part simply didn't apply. So let's do that: kubectl label node... actually, one more thing I want to do before we go there.
Right, we can see that, oh, I already have pickme on there, so I was going to get rid of that; yeah, we'll leave it there for now. So we can see...
...that the two nodes that come back with a label key of zone are kind-worker and kind-worker2, and this way we can actually validate which nodes are going to be considered for scheduling. So if you're trying to figure out in your head, okay, what nodes could this match, I want to make sure I get it right: this is one way of helping work that out.
Right, and because of this, it's only going to take effect at scheduling time; it's not going to take effect after scheduling time. That's actually why the affinity piece is labeled "required during scheduling", not "during execution".
So it's going to stay tied to that host for the lifecycle of that pod. But if I were to delete this pod (and there isn't a deployment here, it's just a pod), it would be rescheduled onto kind-worker-1. This is a part that I wanted to show you all, so I'm going to go ahead and put that label back, because it's part of our exercise.
What I've done is I've made it so that kind-worker and kind-worker2 both have a zone label, zone two and zone one, and I've made kind-worker3 carry the pickme label. Let's look at the pod manifest again: in here it says required during scheduling, the zone key has to have "one" or "two" in it; preferred during scheduling, the pickme key has to be there. But what I have is a bit of a conflict, right?
Does that make sense? And I can actually also set a priority, a weight, so let's look at that manifest one more time. This is the part right here, Tiffany, hold on, that Chums is asking about: where I can set a weight. If I wanted to set multiple preferred terms, one could have a higher weight than the other, like if I wanted to weight...
...the preference: I want to prefer zone one if it exists, but if it doesn't exist, I'll happily take zone two; or if zone one can't be scheduled for some other reason, because there's no availability, then I'll prefer zone two. In other words, I would prefer to schedule it in zone one, but if zone one is full, then schedule it in zone two.
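A sketch of expressing that preference with weights; the numbers are illustrative:

```yaml
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 80                    # strongly prefer zone one...
  preference:
    matchExpressions:
    - key: zone
      operator: In
      values: ["one"]
- weight: 20                    # ...but accept zone two when zone one is full or missing
  preference:
    matchExpressions:
    - key: zone
      operator: In
      values: ["two"]
```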
Node affinity: kubectl apply -f node-affinity.yaml.
But yeah, you're otherwise absolutely correct. All right, cool: that is exploring all the node affinity stuff that I typically end up spending time with, which is how we actually do node affinity. Let's play with pod affinity real quick, then we'll play with topology spread constraints, and then that'll be it. What time is it, 2:36? Great, we're doing good. Pay no attention to that man behind the curtain.
Alright, so I said to myself that I wanted to get through this before 3, so I think we're doing okay; let's keep at it. Inter-pod affinity and anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on labels on pods that are already running on the node, rather than based on labels on the node. What this means, the use case, is...
...I have my application and I want to be darn sure that I don't schedule my application code right next to my database; I don't want them on the same node, I want them on separate nodes. This is a mechanism by which we can do that. It's kind of like an HA tooling thing, where we can say: look, each of these represents its own fault domain; I consider each of these to be their own availability domain. I'm going to text my wife real quick and tell her that my kid can totally use the trampoline, that's cool with me; one second, let me just do that.
All right, we're in the backyard; I want to make sure my wife knows that it's cool for my kid to come out and play on the trampoline. I really appreciate that they're trying not to interrupt, but at the same time, kids have got to have fun, exactly. Here we are, working from home. Okay, anyway.
So what this means is it's sort of like the way you might reason about availability models. When we want to constrain a particular set of applications to a zone, we would use node affinity, and that allows us to describe: for this particular application, I want to make sure it's deployed into this particular availability zone, or a particular node pool, or what have you.
Maybe it's using a high-speed disk rather than a low-speed disk, or some other feature of that node. But for my other application, I want to make sure that I don't schedule next to one of those other pods. To be totally frank, I don't really see a lot of people using inter-pod affinity; it's not a thing that I think a lot of people make use of, but it is available to you, conceptually, as a topology domain. And because I don't really see a lot of people using it...
...I'm not going to explore it deeply here in the episode; I'd rather move on to some of the other stuff I wanted to talk about. But you can see that the mechanism is effectively the same; the difference is in the way that it describes the topology key. So here the affinity is requiredDuringSchedulingIgnoredDuringExecution, and the label selector is on the pod: security In S1.
We don't want to schedule this pod next to a pod that is labeled S2. We can also use pod anti-affinity to force pods generated by deployments to be scheduled to different nodes, which is a good use; yeah, that's true, and we're going to talk about that one next. Actually, the next feature I wanted to get into was topology spread constraints, and the docs get into more practical use cases, like always co-locating pods on the same node.
We're saying requiredDuringScheduling pod anti-affinity... okay, all right, so that's good. Let's go ahead and grab this example. We're going to show this one because, if there's any one place where I see affinity being used, it's in this particular example.
Then we've got our web server. We're going to give the web server labels of app: web-store, and we're saying: if a pod's app label is In web-store, and the topology key is kubernetes.io/hostname, use that key to make sure we have good segmentation. We're going to have three replicas, and we're going to get rid of the pod affinity part for now.
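The deployment we are reading is essentially the example from the docs; a trimmed sketch:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-store
  template:
    metadata:
      labels:
        app: web-store
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values: ["web-store"]
            topologyKey: kubernetes.io/hostname   # no two web-store pods on the same node
      containers:
      - name: web-app
        image: nginx:1.16-alpine
```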
All right, so, having applied that, we can see that the pods are dispersed, and they're hard dispersed, because of that required keyword. "Is that a toleration?" Well, sort of; technically yes, the taint's effect is NoSchedule, so technically you are correct. But anyway, because of that required keyword we have a hard anti-affinity rule, which means that we are not going to schedule if it can't be satisfied.
Yeah, that's a good point: the descheduler, that's a really good point. Because scheduling is a point-in-time decision, it can easily lead to an unbalanced state. If we think about the way scheduling works: if I cordoned some nodes and then made a deployment, only one of those nodes could end up with all three pods, unless I had a mechanism like this in which I enforce some anti-affinity; otherwise there's nothing stopping that.
Oops, sure. Pod topology spread constraints: now, this is beta in 1.18, it's relatively new; I believe it's got a feature gate, EvenPodsSpread, if you want to turn it on. Let's talk a little bit about it, and maybe, if we have time, we might be able to play with that feature gate, but I don't know if we're going to have time for all of that today. This is a relatively new way of thinking about...
...handling pod affinity and pod anti-affinity. You can use topology spread constraints to control how pods are spread across your cluster among failure domains such as regions, zones, nodes, and other user-defined topology domains. In this context, a topology domain is basically described by a node label that groups nodes together.
So in this case they're saying, for example, a node might have labels node=node1, zone=us-east-1a, and region=us-east-1, and we could use those keys and values to act as topology constraints. One of the questions that Steve asked me this week was, well, which labels are you going to use for this, and I'm like: literally anything, it can be any label.
You
know
try
to
stay
the
labels
that
maybe
you
don't
after
yourself,
maintain
like
use
the
ones
that
the
cloud
provider
configures
for
you.
That
makes
it
easier
to
reason
about
when
you're
thinking
about
clusters,
but
if
you're
fine,
with
managing
the
lifecycle
of
those
labels
yourself,
either
through
automation
or
some
other
mechanism,
then
you
could
totally
use
those
as
well
all
right,
spread
constraints
for
pods.
We
can
see
that
we're
talking
about
a
pod
spec
here
it
was
introduced
in
116
below.
So there is our example; let's go ahead and grab it, and then we'll play with what happens. Before we jump into the example, I want to talk about what it actually does. maxSkew describes the degree to which pods may be unevenly distributed: it is the maximum permitted difference between the number of matching pods in any two topology domains of a given topology type, and it must be greater than 0. And then topologyKey is the key of node labels.
whenUnsatisfiable: DoNotSchedule is the default, and ScheduleAnyway means that we can tell the scheduler to still schedule while prioritizing nodes that minimize the skew. And then we have our labelSelector, which is used to find matching pods; pods that match this label selector are counted to determine the number of pods in the corresponding topology domain.
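The example manifest being read here is roughly the one from the docs:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1                         # pod counts across zones may differ by at most one
    topologyKey: zone                  # spread over the node label "zone"
    whenUnsatisfiable: DoNotSchedule   # hard constraint; ScheduleAnyway would make it a soft preference
    labelSelector:
      matchLabels:
        foo: bar                       # which pods are counted per topology domain
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```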
So what we've got here is a pod, it's labeled foo: bar, the topology key is zone (which we do have, one step forward), and then we have a label selector matching any pod with the label foo: bar. So if I had another pod that I created through some other deployment with that label, it would still count: this topology mechanism would still apply, even if those labels are associated with pods outside of this particular pod spec. And the container here is just basically pause. All right, so let's go ahead and apply this and see; whenUnsatisfiable is DoNotSchedule, and we're counting pods under the topology key zone.
Right, and also because in this case there's no "required" piece, the scheduler isn't interested, because the scheduler sees it's already been scheduled; and because this node could still match, it's like a preferred state, which means it'll allow it. But let's do this.
So at the top I'm saying requiredDuringScheduling: the node has got to have the zone label. But down here below I'm saying pick kind-worker-3, pom pom pom, battle of the Titans. Let's see what happens; let's go to the tape: kubectl apply -f pod-affinity.yaml, then kubectl get pods -o wide. What do you think?
Right: because I have specified a predicate in node affinity, and it couldn't satisfy that along with the other constraints, it's telling me, hey, what can I do, I can't land that thing on that node. So it fails the scheduling, and this pod won't run, even though it's directly assigned to kind-worker-3: the kubelet will not allow it to run, and it fails it with status NodeAffinity.
All right, cool, that's what I wanted to show you. That was actually the last example of node affinity that I wanted to show, and I forgot until just now, so thank goodness I remembered. Then we do kubectl delete pod with-node-affinity, and we can dig into the spread stuff a little bit more: cat pod-spread.yaml, go ahead and grab all that.
spec... nope, tab... maxSkew, topologyKey zone, whenUnsatisfiable, with me battling whitespace like a king, matchLabels foo: bar, containers, pause. All right.
All right, awesome: that is everything that I wanted to show you today, and I'm really glad that we were able to hang out. I hope that was helpful and that it made some sense. I know that scheduling predicates can be really confusing, but I hope that understanding how labels work, and how that part fits in, makes it a little easier to wrap your head around. That's all I'm trying to do around here: just trying to make things a little easier to understand. So I hope that was good, and I hope that y'all have a kicking weekend. It's so good to see you all; thank you for tuning in, and I'll see you next time. Thanks again, y'all, have a great time.