From YouTube: TGI Kubernetes 096: Grokking Kubernetes: kube-scheduler
Description
Notes: https://github.com/vmware-tanzu/tgik/blob/master/episodes/096/README.md
Come hang out with Duffie Cooley as he does a bit of hands on hacking of Kubernetes and related topics. Some of this will be Duffie talking about the things he knows. Some of this will be Duffie exploring something new with the audience. Come join the fun, ask questions, comment, and participate in the live chat!
This week we will be continuing the Grokking series with: exploring the Scheduler
Good afternoon everybody, and welcome to TGIK number 96. This week we're going to be talking about the Kubernetes scheduler and some of the things that you can do with it. So that should be a fun one. I'm still wearing some good Halloween stuff; I've got my Goonies shirt on. If you haven't seen the 80s movie Goonies, it's totally worth watching: super campy, but super fun. So today I'm wearing my skull and crossbones for Halloween. Welcome, everybody. Who do we have tuning in this week from all over the world?
We're always just amazed by everybody joining from literally everywhere. We've got Olaf from Copenhagen, giving us a hard time about time zones. It's true, we actually haven't shifted yet; I think it's this Sunday. Our notes are up at tgik.io/notes. And Matty has joined us; he reached out to me on Twitter asking some questions about what I plan on covering. I probably will be talking about multiple schedulers and things like that.
That'll be fun to talk about. Happy Friday from Melo in the Netherlands, and hello from São Paulo, Brazil, from Andre. Hello, Ramesh, from San Francisco, right here in San Francisco where I am, and Amin from Strasbourg. And Scotty Ray, good to see you, sir. I'm looking forward to hanging out; I'm hoping we'll get the chance to hang out with Mr. Scotty in Barcelona next week, but we'll talk about that a little bit more here in just a minute.
We got Mr. Magoo hiding out at the defense labs; Mr. Magoo, I know, it's a great movie. And we got Joy (how's it going, Joy?) and Rath from India, and Mike Morrell from New Jersey, and Bogdan Luca from Bucharest. One way or the other, yeah: I plan on actually doing quite a bit more on the grokking series. I'm having a lot of fun with it.
I think I'll probably do as many of these as I can think of. Right now I still have a bit more left, and we'll see what the roadmap for this series is here in just a minute when we get back into the notes. But thank you, thank you, thank you all for joining us. I did want to give a quick heads-up to folks, and I'm going to mention this a couple of times during this episode: we are probably going to take a break from TGIK, a brief one.
That's so we can prepare our talks and everything else that's happening with KubeCon and all the other events in November. It's going to be kind of crazy, so it's very likely that coming up soon we'll be taking a couple of weeks off to be at KubeCon, and perhaps the week before. Next week, I think we have Mr. Josh Rosso; I'm not sure what he's going to talk about yet.
But you know, it's always awesome to see other members of our team jump up and do a TGIK. So next week I believe you have Josh Rosso, all things being equal, and then after that we'll probably take a little time off, do KubeCon, and then come back. And then we're going to really try and figure out some kind of event or something for the 100th episode of TGIK, which should be maybe near the new year; we're kind of figuring that out. So, super exciting.
It's amazing to me that it's almost 100 episodes. You know, when I started watching it was Kris and Joe, and it was already pretty incredible, and now I get to be a part of it. It's just been super great. So, 100 episodes of that. Who else do we have joining us? (That was the long "e".) I got you, Scotty; it's not going to be Barcelona, but he will see me in San Diego, so that's good. And we got Alexander from Brazil, we got Shah from Atlanta, and we have Andre.
Tell me, yeah: Goonies is pretty good. It is kind of good, as far as, like, Back to the Future kind of good. It's true, super campy. I don't know if it's aged particularly well, but it's pretty fun. We got Suresh from Hamburg, and we've got Michael Rice, who is also part of my team, from sunny Los Angeles. I think it's bright in other ways in Los Angeles right now too, so hopefully you're staying safe and out of the smoke, Mike.
This thing, it's true, there's good stuff. All right, let's see, let's jump into our notes here. Again, our notes are at tgik.io/notes, so if you want to help keep track of notes or do anything else like that, feel free to jump in here. So, this week in review: there's some pretty exciting stuff, some of it kind of internal.
This week we actually announced (I think it was just either today or yesterday; it might have been today) that Contour, which is an ingress controller that acts as a control plane for Envoy proxy, is 1.0, which is a big thing. It's been a while in coming, and I think everybody has done such an amazing job. Dave Cheney, Steve Sloka, Nick Young, and James Peach have been working on this for years, and 1.0 is a big release.
So if you're interested in understanding more about what's happening, what the journey was like, what they're trying to achieve and what they have achieved in getting to 1.0, feel free to check out this article; the link is in the notes. It's just a lot. I look back on the kind of stuff that we've introduced with Contour, like things like IngressRoute, and being able to have a better model for handling security, or certificates: certificates for users versus certificates for operators.
It's like Rory giving me the name of his Scottish town, and I'm like, nah. We were just talking about North Carolina. Okay, North Carolina, got it. All right. So you've heard us mention it before: Bryan Liles did an episode on Octant just a few weeks ago. Octant is one of our open source projects at VMware, where we're working toward a better model for interacting with your clusters, because it all becomes client-side. It now has a website, so feel free to share that with your friends and talk about it.
This one is neat, and it actually highlights a very interesting thing within Kubernetes which, security-wise, to my security self, I find really interesting. I'm not sure how interesting it is to my audience, but let's give it a try.
So, did you know that if you have the ability to update the status object of a thing, the permission inside of status is perhaps more than you would think? With nodes, for example, you could specify annotations and labels with RBAC access constrained only to the status object of the node, and this was true for pods as well.
So if you had the ability to post to the status object for a pod, you would effectively inherit the capability of being able to write to an object outside of that particular section, into the pod's labels. You used to be able to do that: you'd be able to update the label of a pod from the status object. So, from a security perspective:
I'm like, why do we have a subresource called status that has the ability to modify things that are above that status resource? And the answer is backwards compatibility. As we've grown the API and changed the permission model over time, some of these things have not completely come through and been locked down. So this is one of the tickets that's actually going to go ahead and correct some of that capability.
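For context, here's a minimal sketch of the kind of RBAC grant being discussed: a Role scoped only to the pods/status subresource. The name is hypothetical, and the point of the issue above is that, historically, even this narrow grant could still leak into pod metadata such as labels.

```sh
# Hypothetical example: RBAC scoped only to the status subresource of pods.
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-status-writer   # hypothetical name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods/status"]   # subresource-only grant
  verbs: ["get", "patch", "update"]
EOF
```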
Next up, there was a survey, a documentation survey, put up by the Kubernetes community team. It asked a few specific questions about how people were interacting with the documentation inside of the Kubernetes repository, and it's got some really interesting key takeaways. 74 percent of respondents said they would like to see the tutorials section contain more advanced content. Seventy percent said the Kubernetes documentation is the first place they look for information about Kubernetes.
We've got administrators and developers, and then some quick responses around what people are using it for: API references, concepts, which seems like it's nailed down pretty well for that sort of stuff. And then they get into taking this information: what are they planning on doing with it? Where are the improvements going to happen? That kind of stuff. So if you're interested in that, definitely check that out.
It covers what the problem was, what they were able to see, and how they went about actually resolving it. So this is a great article if you're just interested in seeing patterns of success for people troubleshooting interesting problems on top of Kubernetes. I thought this was a great one. I'm not going to spend too much time on it, because we have a lot to get to in the scheduler, but definitely check out this article.
I thought it was really great, and it's a fun story; happy to know a little bit about it. This one I think is also pretty great: Jonas puts out a writeup on how to write a kubectl plugin, a krew kind of plugin. For those who don't know, these give you the ability to extend the UX of your kubectl interactions to have other capabilities. So, for example, if you go to, it's called krew... krew.dev? That's not where I want to go. Probably not. Nope.
That's better. So krew here is a registry; krew.dev, that's where it is. So krew is a registry, or a package manager, for kubectl plugins, and you can kind of see some of the things that are out there by taking a look inside of here. These are the kubectl plugins that are available. There are a ton of them: there's the ability to print the CA cert, there's the ability to describe multiple actions.
There are some tools in here that give you the ability to SSH into things, or to exec; there are just different ways of implementing or improving the user experience of kubectl in some specific way. kubesec-scan gives you the ability to audit your manifests against a reasonable set of defaults.
There are just a ton of these things in here, and they're always kind of growing. I think we talked in the past about rbac-lookup and rbac-view, two tools that give you the ability to enumerate permissions for a user, or query things like what user has what permissions, and those sorts of things. Pretty cool stuff. So what does this article do?
This article talks about how to write one of those things, how to get it published, and what resources you have at your disposal to do so. So if you're curious about that, go ahead and check that out. It's a great tool. It's like anything, right: if you're using kubectl to do a bunch of work, but then there's a bunch of work that you have to do externally to it to get back to the kubectl part,
it might be worth thinking about how to automate that extra bit, so you could wrap it all up into one workflow for yourself and improve your overall user experience. This week I was actually just looking at the Kubernetes Podcast. I don't know if y'all listen to this one, but it's definitely worth checking out. They do a great job
recording a number of really great sessions with a number of people; I think even Bryan Liles was on this. But I was just really impressed, because when I opened this up I was like, wow, what a lineup. Episode 75 was James Munnelly; I'll be co-presenting with him at KubeCon this year.
He is just an incredible engineer and is actually the person behind cert-manager, and before that (I can't remember what it was called before cert-manager), but he's been in that game for some time. Really great stuff, and it was really great to have him do a podcast on that topic. And then we have Pulumi, presented by Mr. Joe Duffy.
He does a great job and is a great speaker, talking about some of the problems that Pulumi solves and those sorts of things. And then engineering productivity and testing with Katharine Berry. These are three podcasts that I would totally listen to. You know when you tune into some podcast and you're like, maybe I'll skip that episode or that episode? But this time I'm like, I just gotta say:
what a list of amazing speakers and amazing topics. So, great stuff; if you're not checking out the Kubernetes Podcast, definitely go check that out. The next one I was actually pretty impressed to find out about: on October 21st, NVIDIA announced it's putting time into developing a GPU operator to enable, or to simplify, GPU management in Kubernetes. So if you're running those ML workloads, or you're working on things that require GPUs... yeah, it was kube-lego or something, anyway.
So if you're running these sorts of capabilities: it was really interesting to see that NVIDIA is actually working on trying to improve the user experience, and perhaps the consumption model, for GPUs on top of Kubernetes, and so I thought that's pretty great. Definitely check this out if that's the kind of stuff you're working on. There are some really interesting challenges when giving containers access to NVIDIA cards, or to GPUs generally, and in how those things are solved.
It's pretty unique, almost on a per-cluster basis right now, and so I thought it was actually pretty interesting that they're trying to resolve some of that. If you're interested in GPUs, or you're in that space at all, it might be worth checking out. I haven't dug too deeply yet, but I'm always interested in people improving the user experience of things like this, so it could be really interesting.
Let's go back to our notes, and we can see what we've done so far. What I've been doing is putting a link to the episode that covered that particular topic of the grokking series right next to it. So the last time we met, we talked about kube-controller-manager in the grokking series, and this week we're going to be talking about kube-scheduler.
In the most basic sense, its job is to attribute, or to assign, node names to those pod specifications that you create. So let's just talk through that really quickly and make sure that we understand what's happening there. When you create whatever abstraction you use to create pods, whether that abstraction is a Job or a Deployment or a ReplicaSet or any of those other things, all of those things are going to be broken down by the controller manager (which we talked about last time) into specific pods.
The pod is the deployable unit. Everything above that is just an abstraction to make lifecycle better or solve other problems. But once you have those pods, and those pods are created within the datastore via the API server, we need something to associate those pods with the nodes where they will be instantiated, and that's where kube-scheduler fits in.
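As a quick aside (these commands are an illustration I'm assuming, not from the episode): you can watch this hand-off from the outside, because pods the scheduler has not yet bound have an empty spec.nodeName.

```sh
# Pods still waiting for the scheduler have an empty spec.nodeName:
kubectl get pods --all-namespaces --field-selector spec.nodeName=

# After binding, the assigned node appears in the pod spec
# ("mypod" is a placeholder name):
kubectl get pod mypod -o jsonpath='{.spec.nodeName}'
```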
If I do kubectl edit on the bash pod down here, you can see a bunch of fields have been added beyond the manifest that I applied. A bunch of different fields, including what the pod IP was; the pod IP was annotated by the Calico project. I have a record of what the applied configuration looks like; I have some labels that are associated with it (and I think those were already in there); the namespace has been populated; and the selfLink has been populated.
A service account has been associated with this container; all of that stuff has been done, kind of conforming, right? The DNS policy has been defined. But this field right here is the one that I really want to talk about. This is what the scheduler does; its job is to do this. And we can even determine which scheduler, if there are multiple of them, will be used to populate that nodeName field, by specifying schedulerName.
So these are two fields in the pod spec that are specific to scheduling, and there are a bunch more, and we'll talk about them as well, but those two open up the conversation. When the scheduler has done its work, the result of that work is to populate this nodeName field.
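A minimal sketch of where those two fields sit in a pod spec (the name and image are placeholders, not from the episode):

```sh
# schedulerName selects which scheduler is responsible for this pod;
# nodeName is what the scheduler writes when it binds the pod to a node.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: bash   # placeholder
spec:
  schedulerName: default-scheduler   # the implicit default if unspecified
  # nodeName: kind-worker            # populated by the scheduler after binding
  containers:
  - name: bash
    image: docker.io/library/bash:5
    command: ["sleep", "infinity"]
EOF
```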
If you populate that nodeName field manually yourself, then the scheduler doesn't have any work to do. I talked about this a little bit last time, but I want to show it again just so that it really sinks in. So I'm going to edit my bash.yaml and populate nodeName.
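A sketch of that direct-scheduling experiment, assuming a kind cluster with a node called kind-worker:

```sh
# With spec.nodeName pre-populated, the kubelet on that node starts the pod
# and the scheduler never gets involved.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: bash-direct   # placeholder
spec:
  nodeName: kind-worker   # assumed node name
  containers:
  - name: bash
    image: docker.io/library/bash:5
    command: ["sleep", "infinity"]
EOF

# Note the absence of a "Scheduled" event for this pod:
kubectl describe pod bash-direct
```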
Before, I saw a line from here saying the reason was Scheduled: "successfully assigned" this to kind-worker. But this time I did not see that. There was no actual event describing the scheduling of this pod, because that pod was directly scheduled. Now, here is why this is important as it relates to scheduling. This is a thing that I think people frequently conflate, and I want to make sure that everybody listening to this podcast, or this session, really gets in touch with this idea.
Frequently, when we're thinking about security and those sorts of things within Kubernetes, we think that being able to leverage scheduling predicates, like affinity and anti-affinity and nodeSelector and those sorts of things, taints and tolerations, is going to give us better control over what pods can be scheduled where. And it's easy for us to think about that as a security boundary.
If I say this pod, or this deployment, is going to be scheduled within this particular failure domain, and I put in a nodeSelector and say the nodes that are a part of this are associated with this availability-zone label, then what will happen is that the scheduler will happily schedule those pods only onto the nodes that match that label query. And that's great, but it's easy to conflate that with a security thing.
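For reference, a sketch of that kind of nodeSelector constraint (the label key and zone value here are assumptions; clusters vary in which zone label they carry):

```sh
# Constrain a pod to nodes carrying a given zone label.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: zonal-pod   # placeholder
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-west-1a   # assumed label and zone
  containers:
  - name: app
    image: docker.io/library/nginx:1.25
EOF
```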
Nothing keeps me from being able to populate that nodeName myself and put it on any node that I wanted to, and because scheduling is not in the critical path for every pod, that means that we can't think of it as a security boundary. We could work around the scheduler. We could turn the scheduler off. We could create another scheduler that has different predicates that only match our particular targets. So we can't think about scheduling as a security boundary.
That's what I wanted to make clear: by default, you can't think of it as a security boundary. You can think of it as a tool that enables you to better apportion the resources within your cluster as they relate to the workloads that you're deploying on top of them. So you can say: I need GPUs, so only schedule these pods to nodes that have GPUs. Or: I need to schedule these particular pods in an anti-affinity way so that they don't reside on the same node.
That way, if I lose a node, enough of the capacity of my application will still be present to satisfy my requirements. All of those predicates are great, and they serve a very, very good purpose, but they are not security primitives; they are just scheduling primitives. I hope that makes sense. All right, that's me ranting about security again. You know how it is; that's how I roll around here. All right, so that was one thing.
Now, I'm actually using a kind cluster again, and depending on how you do this, some folks actually deploy the scheduler and the controller manager as a Deployment inside of their cluster, rather than as a static pod. In our case, it is a static pod. So, if you're using kubeadm, if I jump into a control plane node:
these are the manifests for the scheduler and for the controller manager, the API server, and etcd. So if I take a look at this scheduler manifest, I can see that there is a kubeconfig hosted at /etc/kubernetes/scheduler.conf that is made available to the scheduler itself, and we're going to play with that a little bit. That's actually how it authenticates to the cluster: using a kubeconfig. So let's play with that a little more. Let's do export KUBECONFIG=/etc/kubernetes/scheduler.conf.
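The paths involved, for reference (standard kubeadm layout):

```sh
# On a kubeadm control-plane node, the static pod manifests live here:
ls /etc/kubernetes/manifests/
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml

# The scheduler's own credentials:
export KUBECONFIG=/etc/kubernetes/scheduler.conf
```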
Can I list what the scheduler can do? The cool part is that the scheduler has actually got some very specific things that it can do. It's not like a blanket administrative permission; it's actually constrained to only those things that the scheduler requires to be able to get its job done. So, obviously, it needs to be able to populate events. That's actually why we saw that event earlier.
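One way to enumerate that yourself, using the scheduler's kubeconfig we just exported (kubectl's built-in permission listing):

```sh
# List everything the current identity (the scheduler) is allowed to do:
kubectl auth can-i --list

# Or spot-check a single permission:
kubectl auth can-i create events
```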
It has to be able to associate bindings, or bind a particular pod. "Does the scheduler implement the logic to handle preemption and pod priorities?" It is a part of it, yeah, but remember that preemption and pod priorities are also kind of implemented at the kubelet, because especially in a priority implementation, when the decision to evict happens, that eviction will have to happen on the kubelet. But the scheduler doesn't have the ability to implement that for the kubelet.
It can just help reason about what would be evicted, and where. So: we got endpoints; we've got pods/binding, which again is setting that nodeName field; we have the ability to review tokens; that's kind of a standard permissions thing here. Endpoints, which is interesting, and this is the endpoints of a specific resource name: it has the ability to define the endpoints for kube-scheduler, and we're going to
look at that here in a second. That's part of the leader election process for the scheduler, and we'll talk about that in just a minute. We also have the ability to get, list, watch, and patch persistent volume claims and persistent volumes. We have the ability to watch for nodes, obviously, because we need to discover all of them to be able to consider them for scheduling; same thing for these volumes and those sorts of things; replica sets; all those bits that you would kind of expect; and then kind of the standard grant of permissions.
You can do a get of /api, so you can explore the API server in a read-only state, and then you also have the ability to patch and update the pod status object. So it'll be really interesting. I actually didn't think about this before, but I wonder if they're actually doing anything that would be blocked if they limit the ability to update through the pod status object.
Well, I'm glad we're having this conversation. What I'm reading here is that it might be a problem that the scheduler uses the status object, or subresource, to populate nodeName. If they lock that down, will the scheduler still be able to do that, or will it need a different permission, the ability to actually patch or update the pod rather than the subresource status? So that's a very interesting question.
I'll have to take a look at that. I'm sure that somebody will catch it; we have a lot of really good e2e tests for this sort of stuff. Fun stuff, nonetheless. So those are the permissions that kube-scheduler has. You can see that it's all tied in pretty well, and that it has a specific kubeconfig only for that particular piece. So: kubectl config view --flatten,
then base64 -d, then openssl x509 -text. This is the certificate that is being used to authenticate to the cluster. We can see that we've been put into a group called system:kube-scheduler, we're identifying as system:kube-scheduler, and this certificate was issued by the Kubernetes CA, and it will expire in 2020. So by default in kubeadm, we leverage certificate authentication for kube-scheduler.
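The pipeline being described, spelled out (the jsonpath expression is an assumption; it pulls the first user entry out of the flattened kubeconfig):

```sh
# Extract and decode the scheduler's client certificate:
kubectl config view --flatten -o jsonpath='{.users[0].user.client-certificate-data}' \
  | base64 -d | openssl x509 -noout -text
# The Subject shows the identity, e.g.: CN = system:kube-scheduler
```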
Let's take a look at... so there's one more thing to actually cover from a high level about schedulers: you could have multiple of them. If you have multiple of them, only one of them will be active at a time, and that's because, if you think about it contextually, there's a lot that the scheduler needs to know.
Is the image already on the node? Is there disk pressure on the node? The scheduler's job, literally, is only to populate the nodeName in a pod object, and so a lot of the work that's happening above that, from the scheduler's perspective, is somewhat limited in its capability. And we'll talk about... you know what, this actually might be a good time to come back to this page real quick.
So here we can see a pretty good overview of what kube-scheduler is doing, what it's specifically for, and one of the problems it solves. This gets into basically exactly what it can do and what it can't do, and how it all lays out: what it's considering as part of that scheduling capability. (And yes, a scheduler is absolutely a type of controller.)
This pod needs this much CPU. This pod needs this much disk. This pod has this affinity requirement or this anti-affinity requirement. This pod has this nodeSelector. There's a ton of things that we can actually use to feed predicate information to the scheduler so that it can quickly filter down to only those nodes that might be feasible.
Then we have things like PodFitsHostPorts and PodFitsHost, and these things are actually all pretty interesting to read if you're interested in digging more into this. This is the document (I put it in the notes already) that talks about some of the filters that we have, which totally make sense, right? Like: does the pod fit the resources of the node?
Some of the other stuff that's happening: obviously NoDiskConflict, and if you read through, these are all just the filters. The filters are how we break down the number of all of the nodes in the cluster: how do we reduce the number of nodes to only those that are viable for this particular pod?
Now, if you're getting into the scheduling predicates that you can specify in a pod, then you have the ability to kind of control the scoring. And the reason it's called scoring versus filtering, if you think about it at a high level: think about hard affinity versus soft affinity. Soft affinity might be broken down like this. I have five pods in my deployment, and at the moment I have two nodes in my cluster.
A
I
might
want
to
make
sure
that
those
pods
are
spread
so
I
put
in
a
soft
weight
to
basically
ensure
that
they're
spread
and
then
I'll
end
up
with
two
nodes
on
one
two
pods
on
one
cluster
and
three
pods
I'm.
Sorry,
two
pods
on
one
node
and
three
pods
on
another
node,
because
there's
no
better
way
for
them
to
spread
it
now,
I've
added
two
new
nodes:
there's
nothing!
That's
going
to
actually
force
me.
Nothing
is
going
to
reschedule
that
automatically.
For
me,
I'll
stay
in
the
same
arrangement
that
I've
already
got.
A
If
a
pod
dies,
then
it
will
actually
be
rescheduled
and
probably
spread
across
those
other
two.
The
two
new
nodes,
hard
anti
affinity,
is
different
and
hard
at
an
affinity.
I
have
a
deployment
of
four
pods
and
I
have
two
nodes
in
that
hard-on
infinity.
Basically,
what
means
what
that
means
is
that
if
there
are
only
two
nodes,
I
will
only
create
two
pods,
because
I
can't
satisfy
the
hard
anti
affinity
between
those
things
and
that's
kind
of
the
difference.
Between
scoring
and
filtering
with
filtering
I
would
say.
these nodes are not going to work; I only have two nodes to which I can associate these pods. If you have a soft affinity, I can just spread them across those two. But if you have a hard anti-affinity, then I cannot satisfy four pods. So what do I do? I just don't start them.
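A sketch of the two flavors side by side (labels and image are placeholders). The required form is the hard anti-affinity described above; the preferred form is the soft one that only influences scoring:

```sh
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: anti-affinity-demo   # placeholder
  labels:
    app: demo
spec:
  affinity:
    podAntiAffinity:
      # Hard: pods that cannot satisfy this stay Pending.
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: demo
        topologyKey: kubernetes.io/hostname
      # The soft form would instead use
      # preferredDuringSchedulingIgnoredDuringExecution with a weight.
  containers:
  - name: app
    image: docker.io/library/nginx:1.25
EOF
```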
There's a bit more work that's happening recently around pod topology spread constraints and topology manager policies. These things are relatively new, especially the pod topology spread constraints, which is another way of actually solving this problem. It's alpha in 1.16. I'm not going to spend a lot of time on it, but if you're interested it might be worth talking about, because then you could actually think about this differently. And actually, let's just cruise down here to where they compare what is happening with pod topology spread versus pod affinity and anti-affinity.
So these two have been around for some time within Kubernetes, but the way that they differ is that pod affinity and anti-affinity give you the ability to cluster pods, or separate them. With topology awareness, you can get a lot more granular about what happens: you can break these things out into different zones or different failure domains, and then control what particular failure domains are considered for scheduling, which is a whole different thing. So, pretty cool stuff.
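For reference, a sketch of a topologySpreadConstraints stanza (the field names follow the upstream API; the zone label key and values are assumptions, and this was alpha in 1.16 at the time of this episode):

```sh
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: spread-demo   # placeholder
  labels:
    app: demo
spec:
  topologySpreadConstraints:
  - maxSkew: 1                                # max allowed imbalance across zones
    topologyKey: topology.kubernetes.io/zone  # assumed zone label key
    whenUnsatisfiable: DoNotSchedule          # hard; ScheduleAnyway is the soft form
    labelSelector:
      matchLabels:
        app: demo
  containers:
  - name: app
    image: docker.io/library/nginx:1.25
EOF
```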
So Chandra asks about ranking. I believe there is... I know that I have seen a document that describes the ranking algorithm, but I think it's just tuned by the predicates that you specify. "I had a node with only a few pods on it and, in theory, more than enough CPU resources; however, a faulty kernel process was actually consuming 100%." That would happen, because the kubelet will actually report on that capability. "I was surprised that the scheduler kept sending pods to it."
Well, I'm actually very curious about that too. So there are a couple of different ways that you can configure the kubelet to make sure that it is reporting the available resources on the node accurately, and this kind of comes down to things like cgroups and how you configure them, and those sorts of things.
So definitely check this out if you're interested in looking at the priorities and how they kind of work out, and there are a number of other documents that dig into the real detail about how this works. But understand that, to some degree, it's a serial process: for every pod that matches, it's actually going to go through and determine what the resolution for that pod is, and then it will pick up the next one and do it.
It's not going to necessarily bulk-schedule; this scheduler is not a bulk scheduler. It will actually just schedule those pods as described: you pop them off a queue, figure out the answer, go get the next one off the queue, figure out the answer, etc., on down the line. And because it happens so quickly, even in large clusters it hasn't really caused too much of a problem. But this is definitely one of the concerns for large clusters:
that work is obviously going to grow. And so there are some things that we can do to combat that. This is a feature that's in beta state as of 1.14, and I've actually seen it used in a couple of very large clusters to help tune this. What this does is it basically gives you the ability to specify how many nodes can be considered feasible for a particular pod before we stop trying to filter, because in that filtering process we basically go through all of the nodes.
A
You've
got
20,000
nodes
in
your
cluster.
It's
gonna
determine
from
those
20,000
nodes.
What
the
what
the
reduced
set
is
and
then
from
that
reduced,
set
its
going
to
give
priorites
going
to
determine
the
ranking
for
that
pod
and
then
make
a
scheduling
decision
right
in
this
case.
In
this
configuration
you
have
the
ability
to
say
look,
you
don't
have
to
look
at
every
node
in
the
cluster.
Just
find
me
50
that
are
gonna
work
and
then
do
the
ranking
from
there
right.
Don't
don't
do
all
of
them,
like
that's
crazy
town.
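The knob being described is percentageOfNodesToScore in the scheduler's component config. A sketch, using the config-file shape of that era (the apiVersion and file path are assumptions):

```sh
cat <<EOF > /etc/kubernetes/scheduler-config.yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha1   # assumed for this era
kind: KubeSchedulerConfiguration
# Stop filtering once this percentage of nodes has been found feasible;
# 0 means "use the built-in default".
percentageOfNodesToScore: 50
EOF
# Then point the scheduler at it:
#   kube-scheduler --config=/etc/kubernetes/scheduler-config.yaml
```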
But yeah. So, the other questions I have here, let's see: "Do we have any ranking algorithm?" We already talked about that one. "Can we configure the time interval at which the kubelet sends a report about the node, or is everything real-time?" Yes, we can absolutely tune that. That's in the kubelet; it can absolutely be tuned.
So this is an interesting one, and it gives us the ability to do things like set the percentage of nodes to score, which I thought was actually pretty neat; it totally makes sense. I think there's a typo here; I think it's supposed to read "percentage of nodes to score", and zero is to actually disable it. But it is what it is. I'm going to probably fix that, but yeah, pretty cool.
So for now, if you're in a cluster that's 1.15 or below: if you do kubectl get endpoints -n kube-system kube-scheduler -o yaml, you can see from here who the leader is. So the kind control plane node is the leader. It's actually also the only one in this cluster; I only have this one running. Its lease expires in 15 seconds, and it acquired it at this time.
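What that looks like on the wire: in clusters of this era, the leader-election record is held in an annotation on the Endpoints object (the annotation key is the upstream one; the holder identity below is an invented example):

```sh
kubectl get endpoints -n kube-system kube-scheduler -o yaml
# metadata:
#   annotations:
#     control-plane.alpha.kubernetes.io/leader: |
#       {"holderIdentity":"kind-control-plane_...","leaseDurationSeconds":15,
#        "acquireTime":"...","renewTime":"..."}
```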
So this is actually how kube-scheduler determines which one of the schedulers, if you have multiple of them, is going to actually do the work. And the reason that's important, I think, is because contextually we can make those decisions more efficiently if we have a single cache where all of that is being held.
A
If
the
scheduler
fails
to
delete
a
pod
and
I
forced
deletion,
the
scheduler
doesn't
delete
pods,
that's
not
the
schedulers
job.
Scheduler
is
never
going
to
delete
a
pod
and
I
forced
deletion,
and
it
stopped
telling
me
about
it.
Is
there
a
way
to
know
if
it's
deleted
deleted
or
just
stops,
reporting
about
it
so
deleted
deleted
means,
like
I,
think
if
you
have
cue
kettle
pause.
A
So
cute
cuddle
get
paws
is
going
to
show
me
those
pods
that
have
been
created
regardless
or
those
positive
known
about
within
SED,
and
if
that,
if
there
is
no
record
there,
then
it's
no
longer
known
about
now.
What's
the
challenge
that
you've
highlighted
Shahar
is
different
than
this
right.
The
challenge
here
is
like
say:
I
have
you
know
a
bug
in
my
story
in
my
CSI
driver
and
what
that
bug
is
doing.
A
Is
it's
forcing
me
it's
it's
basically
keeping
me
from
being
able
to
disassociate
the
storage
from
a
particular
pod,
and
so
when
I
try
to
delete
that
pod,
it
fails
because
the
finalizar
we
can't
complete
right.
The
finalizar
can't
disassociate
the
storage
from
the
pod,
and
so
I'm
stuck
right
and
it
could
be
disassociating
storage.
It
could
be
any
number
of
things.
That's
basically
restrict
me
from
being
able
to
remove
that
pod
from
the
cluster,
because
the
finalizer
can't
validate
that
there's
nothing
else
associated
with
this
pod
record
before
deleting
it.
If I force deletion, either by modifying the object and removing the finalizer, or through some other means, then what that means is that I'm going to remove the pod object, but any dependents of that pod object might still exist. So anything I was using that was forcing the pod object to remain (because the finalizer could not be completed) will still be there, which means that when a new pod comes along, it may not be able to reattach to the existing stuff. So that's kind of the challenge.
The question becomes: is the current default scheduler a good solution for that? And in some large clusters and high-churn environments, it may not be. There is another scheduler out there called Poseidon. It's not super active, but it represents a pretty significant improvement in scheduling across large clusters, and this is a good example of why you might want another scheduler, or a different scheduler, inside the cluster to solve a particular case.
So Sylabs has kind of a performance-focused way of managing Kubernetes that is specifically aimed at performance-sensitive workloads, which I thought was pretty interesting. They've got a number of things that you actually run on the kubelet side, like their own container runtime, those sorts of things, and because of that they have a number of things that they want to implement, including (and this is actually how I came across this) the ability to extend the scheduler.
It was a really interesting project, and if you're interested in it, this is actually a way that you can extend the Kubernetes scheduler to use a different algorithm for filtering. So instead of filtering on all the built-in filters, maybe what we want to do in this case is filter out those nodes that don't match a particular interface type, which is actually pretty interesting.
So in this case you would be able to extend the existing scheduler rather than having to create a new one, but these are both different reasons why you would want to do different things. And so that might be an example of why you might want multiple schedulers: because maybe you want the Sylabs one to run against another implementation of the same default scheduler, but just use the new filter on that new scheduler. And that way you can have multiple types of workloads.
A
You
could
have
workloads
that
are
going
to
be
satisfied
by
the
Scilab
scheduler
and
you're.
Gonna
have
workloads
like
we
satisfied
by
the
default
scheduler.
So
Willie
is
talking
about
silos.
Okay,
that
makes
sense,
are
more
into
high
performance
computing
and
they
are
famous
for
their
famous
in
the
high-performance
computing
data
centers
for
their
container
runtimes
singularity,
which
is
now
CRI
compliant
yeah.
That's
pretty
neat
I've
only.
B
A
discovered them in this conversation, and I thought that was actually really cool. I might dig more into that a little bit, because I'm actually kind of curious how that would work. But thank you; I found myself a new rabbit hole to go chase down.
Well, it also says that Univa NavOps did something similar. Yeah, that makes sense. "What's the difference between Docker and containers?" I don't think I understand the question. Are you asking what's the difference between container runtimes, like containerd and Docker and this Singularity one that was mentioned, or are you asking what's the difference between a Docker container and just a regular container? Maybe with some clarity, I'm happy to answer that question.
All right, so next, I thought, you know what we would do? I want to go back to the checklist real quick, but then after that I thought it might be kind of fun to explore this idea of creating a second scheduler and seeing that scheduler schedule our pod, rather than using the default one. So it might be a fun kind of hacking project to do that. We talked about leader election, and we talked about direct scheduling. Yeah, I think we're up to multiple schedulers. So let's play with that.
A
I'm
just
going
to
use
the
manifest
and
point
it
at
the
existing
scheduler
configuration
and
then
use
this
part
of
it
right
here
to
specify
a
different,
a
different
scheduler
name
right
and
that
way.
I'll
have
multiple
schedulers
in
my
cluster
and
that
can
even
I
can
even
use
those
other
schedulers
to
test
this
capability.
A
Oh
I
see
so
Cooper
D.
This
is
a
container
orchestration
system,
I'm
not
going
to
spend
a
lot
of
time
on
this
right
now,
but
effectively,
Cooper
D.
This
is
a
container
orchestration
system
right.
So
within
kubernetes
we
have
this
idea
of
a
pod.
A
pod
can
be
represented
by
multiple
containers
that
happen
that
could
share
some
dependencies.
A
Those
containers
are
docker
images.
They
are
when
you
even
populate
the
pod
you're,
actually
even
specifying
that
docker
image
name
and
where
to
go,
pull
it.
What
register
you
pull
it
from
what
the
version
is
the
tag
all
that
stuff,
it's
just
a
docker
container.
It's
just
that.
We
orchestrate
that
in
such
a
way
that
we
can
provide
a
better
environment
for
which
you
know
in
which
that
container
is
going
to
operate,
that
better
environment
might
include
the
ability
to
define
environment
variables
or
the
storage
or
shared
key
or
shared
storage
between
multiple
containers.
Right: you might have a logging forwarder and you might have your application. And then, as we move up the stack, we can also say that because Kubernetes is an orchestration layer, we have the ability to effectively provide primitives that are able to help us build distributed systems. So if my application was a microservice, with multiple different services interacting on some level: how do I provide service discovery, so that service A can find service B and communicate with it?
These are all primitives that are exposed within Kubernetes as an orchestration model. So the difference between Docker and Kubernetes is basically that the ecosystem where these containers operate is much more developed. It has a lot more capability that enables you to take that really great packaging trick that Docker represents, give it a home, and help it operate and manage software over time. That's a quick, crazy, five-minute talk about what all that is about. All right. Oh yeah, that's when you accidentally delete a namespace with thousands of pods. Well...
So I'm just tearing the whole thing down at this point, and then we'll have the ability to kind of play with the scheduling, kind of similar to what we did. But it's going to work very similarly to the way that the controller manager's HA works, so it's not really very different. I'll go over it again here in just a second, as we talk about how that process will work.
So Dmitri says: try out k14s kapp and its resource protection. Yeah, that can be used kind of like a guard-rail kind of thing that can help. "One of our admins deleting a namespace caused us to enforce RBAC for everyone." Yeah, I am definitely a fan of RBAC. In fact, as we're waiting for this cluster to come up, I'll definitely point out one of the other pieces of RBAC which I think is a really good idea. So, John Harris:
my good friend John Harris. He put up his blog... there we go. So he works with me at Heptio, or, you know, at VMware now, and he recently put up this blog post, which I think is definitely worth talking about: least privilege in Kubernetes, using impersonation. And I think, Mike, you'll find this interesting, and a lot of people should actually be aware of this. What this does is it operates under the assumption that every user within the cluster is a read-only user by default.
So if you can authenticate to the Kubernetes cluster, you're going to be read-only, and if you're going to actually do anything that requires more access than read-only, then you have to impersonate a user with that new permission. Think of it like a sudo model: if you're going to have the ability to deploy or delete or create or modify resources deployed within the cluster, you would effectively be using kubectl, you know, the command, and then adding the ability to impersonate some new group or user.
A
So
it
gives
us
a
kind
of
a
better
stacked
permission
model
very
similar
in
some
ways
to
the
way
that
sudo
works
so
definitely
check
out
this
article.
If
that's
something
that
is
interesting
to
you,
there
is
actually
a
crew
pool
again
to
kind
of
get
back
to
the
career
stuff.
That
actually
does
this
I'm
gonna
put
this
down
in
the
reference
links.
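The shape of that impersonation pattern, sketched with kubectl's built-in flags (the user and group names are invented for illustration):

```sh
kubectl get pods                    # allowed: everyone can read
kubectl delete pod mypod            # denied under least privilege

# Escalate explicitly, sudo-style, via impersonation:
kubectl delete pod mypod --as=admin-user --as-group=cluster-admins

# RBAC-wise, the read-only user needs the "impersonate" verb
# on the target user/group for this to be permitted.
```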
kube-system... so now we only see the two schedulers. We have a scheduler running on the first control plane node and on the third one. So now, if I go back to my logs, I can only see those two logs, and I see in those logs that one of them is actually active and the other one is not active. One of them has successfully acquired the lease for kube-scheduler, and the other one is attempting to acquire it.
A
This
guy
is
actually
the
one
that
is
tempting
to
create
attempting
to
acquire
the
lease,
and
then
this
one
is
the
one
that
has
the
lease.
So
my
active
scheduler
right
now
is
on
cue,
scheduler,
zero.
Three
and
the
other
one
is
just
sitting
there
waiting
to
do
it
and
the
reason
and
the
way
that
it's
doing
that
right.
If
again,
we
look
at
QP
it'll
get
endpoints
an
coop
system
for
the
cube
scheduler
and
point.
A
B
A
A
What else have we got in the chat here? Fernand is asking: "I created a custom scheduler and it fails to rank out nodes. Do I know how one can fall back to the default?" It can't really fall back. Yep, when you pick another scheduler, that scheduler is responsible for scheduling that pod; it's not like a fallback.
A
You
can't
prioritize
the
schedulers
that
would
be
used
to
do
the
scheduling
you
actually
have
to
work
through
the
problem
and
make
sure
that
that
scheduler
ken's
satisfy
the
requirement
and
then
they
and
then
add
it
back
in-
and
this
is
a
definitely
another
case
where
you
might
want
multiple
schedulers
right,
because
that
way
you
can
actually
be
very
clear
about
which
one
is
responsible
for
doing
that.
Work.
A
A
A
A
What time is it? I'll make sure we're doing okay. So it's 2:14, so we have a little bit more time. I want to do this other thing: I want to spin up another scheduler, and we can play with that real quick. So, my-scheduler... oh, one more thing I wanted to show you about the scheduler, which I thought was really interesting. So, kube-scheduler: this is the binary. I just grabbed the binary and I'm looking at it.
Yes, here it is. So you have this --write-config-to flag, which I thought was pretty cool, and I'm always impressed when software does this. The scheduler can take a configuration argument, and when it does (ConfigMaps? I don't think there's one in here, but let's just take a look... yeah), when it does, you can actually pass that configuration option as a file. So if you actually wanted to configure the kube-scheduler, to, say, maybe use that reduction thing: then pass that argument and provide a file, and it will actually give you what the defaults look like. So if you're looking at understanding, okay, how can I configure this, what flags are available to me: these are the default flags for the configuration of the kube-scheduler. And if I jump into the scheduler running inside of my kube-system kube-scheduler shell, and I do kube-scheduler --write-config-to and then cat that config, I can actually even see what the configuration of this specific running instance is.
That was pretty cool. So the next thing I wanted to show you was a different scheduler. So let's do this. This is the example scheduler, the example configuration, from the doc inside of the multiple-schedulers documentation. This is their example here, and you can see what they're doing is they're having you build the scheduler, then deploy it as an image, and then grab it. But for my purposes, what I really care about is that.
Now, unlike our current scheduler, it's not going to bind to the master nodes, so there should be no conflict in port, but we'll see. So then I'm going to do an apply -f of my-scheduler.yaml, and while that's coming up, I want to look at that configuration one more time... that would have been okay.
my-scheduler: what's happening here at the top in this manifest is that I'm creating a ServiceAccount called my-scheduler, and that's because my scheduler is going to need the same RBAC model that the existing scheduler does to be able to do its work, and I'm giving it a ClusterRoleBinding that grants it the same ClusterRole that the existing scheduler has.
So: basically adding a new ServiceAccount called my-scheduler, which we've already created, and associating it with the same ClusterRole binding that the existing scheduler uses, just kind of reusing that work. I'm creating a Deployment, and I'm calling the component "scheduler"; I'm going to name it my-scheduler. I have gone ahead and done some matching labels; obviously, again, I've told it what service account to use; it's calling the actual binary and passing some configuration flags in; and I have a healthz port of 10251.
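A condensed sketch of what that manifest boils down to (it mirrors the upstream multiple-schedulers example; the image tag and flags are assumptions from that era):

```sh
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-scheduler
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-scheduler-as-kube-scheduler
subjects:
- kind: ServiceAccount
  name: my-scheduler
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: system:kube-scheduler    # reuse the built-in scheduler role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-scheduler
  namespace: kube-system
  labels:
    component: scheduler
spec:
  replicas: 1
  selector:
    matchLabels:
      component: scheduler
      tier: my-scheduler
  template:
    metadata:
      labels:
        component: scheduler
        tier: my-scheduler
    spec:
      serviceAccountName: my-scheduler
      containers:
      - name: kube-scheduler
        image: k8s.gcr.io/kube-scheduler:v1.16.0   # assumed image/tag
        command:
        - kube-scheduler
        - --scheduler-name=my-scheduler   # flag of that era
        - --leader-elect=false
        - --port=10251                    # the healthz port mentioned above
EOF
```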
And away we go. So now, if I do kubectl get pods -n kube-system, I should see my-scheduler running, which is very cool. So now we actually have two schedulers in the system: we have the default one that everything is using, and we have the new one, my-scheduler, which is using the same code base as the default one. It's just something I can now tune uniquely; I can tune this one independently of the other one.
So if I wanted to play with, like, a different filter, if I wanted to explore extending the existing scheduler without affecting the default one, then I can do that by basically creating a new scheduler and modifying that configuration, rather than messing with the default one. It gives me the ability to easily test and validate my assumptions before knocking out the really important one.
So: specifying nodeName is different from specifying nodeSelector. nodeSelector gives you the ability to specify a label query that would match a subset of nodes; nodeName is this specific node. That's the key difference between the two. Yeah, Peter got it. Sorry about that; I just realized I was reading the same answer that I just gave. Go Peter!
Now it's got a service account, so it's not really about whether it's using that, and if we didn't see that actually mounting, we would see a different log output. So this is kind of interesting, because we see a log output of the user system:serviceaccount:kube-system:my-scheduler. It means that we're actually seeing it use the right service account, but that service account doesn't have access to list storage classes, which the default configuration must be satisfying, or this would fail there too.
And there's our scheduling event. So our scheduler came up. I don't know why the permissions were messed up, but it's probably old documentation. We saw the scheduling event happen; we were able to actually see the successful scheduling of this pod to kind-worker, and so that worked. I'm not quite sure why the permissions were messed up, and we'll have to dig more to figure that out, but that was kind of an interesting, kind of crazy permissions thing that we'll have to figure out at some point in the future. We haven't exposed it. Okay.
So there's our metrics endpoint for the scheduler, and this is going to be true for any scheduler that's deployed, including the previous one. We can see, in this metrics output, the behavior of this particular scheduler, which, for the most part, is going to be not busy at all, because it's only had to schedule one pod, and it got it done pretty quickly.
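One way to poke at that endpoint yourself (a sketch; port 10251 was the scheduler's insecure serving port in this era, and the metric below is one example of what shows up):

```sh
# Reach the second scheduler's metrics from your workstation:
kubectl -n kube-system port-forward deploy/my-scheduler 10251 &
curl -s http://localhost:10251/metrics | grep '^scheduler_'
# e.g. scheduler_e2e_scheduling_duration_seconds_bucket{...}
```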
Hitting the Prometheus endpoint gives a bunch of other things: Go memstats for the actual application that's running, what the certificate expiry looks like for this particular API server client certificate, all that stuff. So those are the metrics. And then, if you were curious about that config file that we talked about before, you can actually see the result of that configuration file in the component config for the scheduler: we can see that the name for the scheduler is my-scheduler.
Just by reading through what's there, basically most of these things are pretty legible as they are. They basically say what they're actually for right there in the output. So it's like: scheduling_algorithm_preemption_evaluation_seconds, how much time was spent in preempting in the scheduling algorithm.
A
How
much
time
the
priority
ranking
was
taking
and
you
can
kind
of
go
through
the
different
case,
different
characteristics
from
that
perspective,
if
you
want
to
have
more
detail
in
it,
I'm,
fortunately,
gonna
have
to
throw
toward
the
code,
because
that's
kind
of
the
the
place
I
would
expect
it
to
have
to
go,
but
here's
some
other
interesting
metrics
like
from
end
to
end
perspective.
How
long
does
what
was
the
latency
on
scheduling
a
particular
prod
or
for
pods
in
general?
Those
sorts
of
things.
A
It
does
not
metric
server
really
only
focuses
on
by
default.
It
really
only
focuses
on
things
like
nodes
and
pod
CPU
memory
and
those
sorts
of
utilizations.
It
doesn't
really
expose
everything,
but
you
can
express
two
metrics
server
other
metrics
to
watch
for
what
does
catch
it.
All
the
time
by
default
would
be
the
would
be
queue
Prometheus
right,
and
so,
if
you're,
using
cube,
Prometheus.
A
This
is
the
center
manifest
for
human
Prometheus.
This
guy
will
listen.
This
camp
will
be
deployed
in
such
a
way
that
it
will
pull
metrics
from
the
scheduler
and
the
other
interesting
thing
that
I,
don't
think
everybody
really
realizes.
Yet.
Is
that
when
you
go
about
that,
the
the
metric
server
is
no
longer
required
to
be
deployed
as
a
as
a
separate
entity,
because
Prometheus
Q
Prometheus
deploys
what's
called
an
adapter,
and
that
gives
you
the
ability
to
represent
those
metrics
that
are
known
by
Prometheus
to
the
metrics
API
within
kubernetes.
A
A
It can be focused on a specific metric; you can use the Prometheus adapter to satisfy that requirement. Pretty neat stuff. "How does the default scheduler know that another custom scheduler exists side by side?" It does not, and it doesn't care, because really it comes down to that pod specification. If I jump back in here, into the specification of the pod, right here where I've specified schedulerName: it's only the scheduler matching that name that will act on my particular pod.
If I created another scheduler, or if I compare this to the default scheduler: the default scheduler is not going to see this pod, because it will actually be watching for pods that are not scheduled to nodes, and if it sees that the scheduler name is not "default-scheduler" explicitly, then it will not do anything with this pod. It will ignore it out of hand, presuming that there will be some other scheduler, called the right thing, that will handle the scheduling of that pod.
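A sketch of a pod claimed by the second scheduler (the names are placeholders):

```sh
# The default scheduler ignores this pod because
# spec.schedulerName != "default-scheduler".
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: second-scheduler-demo   # placeholder
spec:
  schedulerName: my-scheduler
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
# If nothing named "my-scheduler" is running, this pod simply stays Pending.
```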
This is the information that the scheduler can work with to determine fit. Inside of here we can see the information that the scheduler has to work with. Up here at the top, from the perspective of the node, we have the ability to understand capacity and allocatable, for all capacities.
A
How
much
memory,
storage
and
those
things
that
are
available
to
the
to
the
system
are
made
available
by
that
cubelet
and
allocatable
are
are
what
is
remaining
after
the
scheduling
product
after
people
have
made
requests
in
limits,
and
so
scheduler
can
only
look
at
that
information.
It's
not
actually
tied
to
the
metrics
API,
it's
not
referencing,
metrics,
server
or
Prometheus.
In
any
way,
it's
actually
only
evaluating
the
about
the
characteristics
of
the
node
directly.
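You can see exactly the numbers it reasons over on the node object itself (the node name is assumed, and the figures below are invented examples of the output shape):

```sh
kubectl describe node kind-worker
# Capacity:
#   cpu:     8
#   memory:  16Gi
# Allocatable:
#   cpu:     7910m
#   memory:  15Gi
# Allocated resources:  (sums of requests/limits of pods on the node)
```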
A
So
what
the
cubelet
reports
the
scheduler
can
operate
on,
but
there's
no
person
in
the
middle
there's
no
entity
in
the
middle.
That
is
actually
like
aggregating.
That
information
does
that
make
sense.
So
you
could
actually
have
a
cluster
without
a
metric
server
or
without
Prometheus,
and
the
scheduler
would
still
be
able
to
determine
resource
constraint
by
looking
at
the
value
that
the
cubelet
reports.
"Does the scheduler use real-time CPU? Can you talk about the metadata rule in the manifest; what does that mean? As far as I know, the allocatable doesn't actually..." Yeah, it wouldn't change in real time. It's actually going to be tied to the amount of time it takes for the kubelet itself to report up its own calculated value, but that value is calculated at the kubelet. The kubelet reports that value; it's nothing else
that does it. So, when the kubelet reports its particular value into etcd, that allocatable value is something that the scheduler can pull from etcd, or watch for, and if it sees a change, it can determine the pod fit, based on CPU or memory, for particular nodes for this particular pod, as known at that time. I hope that makes sense. Ramesh, I didn't understand your question.
So this is what metadata does and what you can do inside of it. If you have access to kubectl, you can do kubectl explain pod.metadata, and you can see exactly what fits into that particular category: all the things that you can do with it. So this is what annotations are used for: they're basically used as a key-value store for information that can be consumed or represented by other entities that are watching for that kind of value.
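The introspection commands in question:

```sh
kubectl explain pod.metadata
kubectl explain pod.metadata.annotations
# --recursive dumps every nested field under metadata:
kubectl explain pod.metadata --recursive
```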
A
The
cluster
name
is
something
that
you
can
populate
generally,
it
doesn't
get
populated
creation
and
deletion.
Timestamp
perks
are
are
described
here
same
with
finalized
errs,
generate
name,
there's
just
a
ton
of
other
stuff
that
fits
into
that
metadata
field
that
actually
describes
like
how
they
can
be
used
and
what
they're
good.
For
so
you
know.
it's pretty great. --recursive just basically gives you the ability to see, recursively for metadata, what that looks like. So if you're not looking for the information for a specific thing, you can just see what all of the fields are and what their dependencies are, like initializers: pending, the name, the result. These are all sub-resources of a particular field. So, yeah.
Thank you very much, Lomani. All right, my friends, I will see you, probably, in a few weeks. So let me get back here to full-face. Boom: you see my shining face. Thank you all for your time today, today with the scheduler. I think we covered it pretty well; I hope it was helpful. And if you want to follow along, I'll be putting the manifests and everything up into the episode notes. So thank you, thank you, thank you very, very much. And I completely agree about any of the certifications that you can take for a career these days:
if you can master the use of kubectl explain, you will be unstoppable. It's an incredible thing. So thank you all again, and I will see you in a couple of weeks. Next week, tune in to Mr. Josh Rosso talking about other interesting things inside this space. I hope you all have a wonderful time. I really hope I get to see many of you, or some of you at least, in either Europe or at KubeCon. I'll be traveling a lot.