From YouTube: Kubernetes SIG-Scheduling Weekly Meeting for 20200702
A
The first one is the descheduler label and namespace filtering; not sure who would like to discuss this. Oh.
B
Yeah, sorry, hey, this is Sean. Can you hear me? Yeah? Oh yeah, I had added this to the agenda. So about two-ish weeks ago I had shared out a proposal document, a Google Doc, with the details on adding, I guess, really two separate features to the descheduler: one is basically filtering pods that will be candidates for eviction based on namespace, and the other one based on label.
C
I could go, so: this is a PR that I sent out upstream, the in-place pod vertical scaling feature that now allows the pod's resources to be mutable, and the reason to do this is that we want to resize pods without restarting them. The code, as it currently is, is mainly a contract: the changes are a contract between the API server and the kubelet, the node agent, and the user expresses the new desired pod resources.
C
In this case the user would typically be the VPA, which monitors what the right sizing should be and then does a patch to the pod spec. The kubelet watch sees that and then it implements the new resources: it applies the new requests if there is capacity available, or waits until capacity is available.
C
Now, this introduces a new field to the pod spec called resourcesAllocated, and the change here, and in many places, is essentially to have the scheduler and the other places that were looking at requests start using resourcesAllocated, which is the true allocation on a node. So it's a one-file change as far as the scheduler is concerned, and I wanted to see if you had a chance to review this code. I can pull it up.
A
C
Right, it's both of them. So the limitation: previously resources was not mutable, and now resources becomes mutable and the user can change it, that's the idea. resourcesAllocated is a new field that got introduced into the pod spec that is mutable only by the kubelet. Yes, so there is an admission control plugin which ensures that only nodes can change this; users cannot.
C
A
This, let's focus on the scheduler part, yeah. And so resourcesAllocated, this is actually the amount of resources reserved on the node for that pod, right? Correct. And this is what we have to take into account when we calculate how much resource has been allocated on the node in general, like how much is allocated and how much is free, right? Right.
A
C
Okay, so yeah. When I changed files in the scheduler, it did come across as, I found it a little strange that this is the only place in the scheduler that I have to change. There is another place where requests is being looked at: that's when the scheduler determines the fit for the pod's sizing, and in that case it's OK, for a new pod, a pod that's not bound, for the scheduler to look at requests and then use that to make the determination of where the pod fits, right.
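The accounting rule being described, counting resourcesAllocated for pods already bound to a node and plain requests for a pod that is not bound yet, could be sketched roughly like this (a minimal illustration; the type and function names are made up, not the actual scheduler code):

```go
package main

import "fmt"

// PodResources is a minimal, illustrative stand-in for the fields discussed.
type PodResources struct {
	Requests  int64 // what the user asked for, in CPU millicores
	Allocated int64 // resourcesAllocated: what the kubelet has reserved
	Bound     bool  // whether the pod is already bound to a node
}

// effective returns the value the scheduler should count for a pod:
// the kubelet-owned allocation for bound pods (the true reservation on
// the node), and plain requests for a pod that is not bound yet.
func effective(p PodResources) int64 {
	if p.Bound {
		return p.Allocated
	}
	return p.Requests
}

// nodeAllocated sums the effective usage of a set of pods on a node.
func nodeAllocated(pods []PodResources) int64 {
	var total int64
	for _, p := range pods {
		total += effective(p)
	}
	return total
}

func main() {
	pods := []PodResources{
		{Requests: 1000, Allocated: 500, Bound: true}, // resized; kubelet still converging
		{Requests: 250, Bound: false},                 // new pod being scheduled
	}
	fmt.Println(nodeAllocated(pods)) // prints 750
}
```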
C
A
Yes, because resourcesAllocated is what's actually being allocated by the kubelet, and it's updated by the kubelet, not what the user is actually requesting, because those could differ, right? And there's a point where requests could be higher or lower than resourcesAllocated, and the kubelet is trying to drive resourcesAllocated to what requests is, right? Right, okay. Yeah, I think that makes sense. I don't know if others, Aldo, or Wei if Wei is on the line, or Mike, have any...
G
C
I don't know if you looked at this change; there was a comment that there wasn't anything to verify that from the server side. When we update it, we have the new field and we set it if it's empty. If the old pod spec is missing it, it's dropped; essentially, if an older client is writing the pod spec back, then we set this, right.
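The server-side round-trip concern raised here, an older client writing the pod back without the new field, is commonly handled by copying the old value forward on update. A minimal sketch, with illustrative types rather than the real apiserver strategy code:

```go
package main

import "fmt"

// Resources is a minimal stand-in for a container's resource fields.
// ResourcesAllocated plays the role of the kubelet-owned field discussed.
type Resources struct {
	Requests           map[string]string
	ResourcesAllocated map[string]string
}

// preserveAllocated sketches the server-side update logic described above:
// if the incoming update omits the new field but the stored object has it,
// copy the stored value forward instead of silently dropping it.
func preserveAllocated(newSpec, oldSpec *Resources) {
	if newSpec.ResourcesAllocated == nil && oldSpec.ResourcesAllocated != nil {
		newSpec.ResourcesAllocated = oldSpec.ResourcesAllocated
	}
}

func main() {
	stored := &Resources{
		Requests:           map[string]string{"cpu": "500m"},
		ResourcesAllocated: map[string]string{"cpu": "750m"},
	}
	// An old client round-trips the object without the new field.
	updated := &Resources{Requests: map[string]string{"cpu": "500m"}}
	preserveAllocated(updated, stored)
	fmt.Println(updated.ResourcesAllocated["cpu"]) // prints 750m
}
```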
C
We did the initial point of not touching init containers; they work exactly the same way they work today, only the regular containers are affected. But it's increasingly looking like it makes sense to have this for init containers as well, so that the code is uniform everywhere, with an added validation check that if you try to mutate the resources of init containers, validation will fail.
C
F
So what we have in the doc is a couple of options on how to implement it. Each of these strategies, since they're so similar, has the same options, and those are basically: a CLI option that gets passed to the descheduler; a config option in the strategy itself; or, for namespaces, there's actually a third one we could consider, namespace as a field selector against the pod itself.
F
Instead of just looking at pods in the namespace, we could set this up as a field selector now, and then that might open up the road if people want to filter pods by other field selectors down the road. I don't know if we should be considering that; that was my suggestion originally, but looking back on it, I don't think we should consider it. So it really comes down to this:
F
Do we want to implement these as a CLI option, to take effect over the whole descheduler and catch edge cases in strategies where we need to, or make them per-strategy parameters that can be set directly on the specific strategies that you want to run for a namespace or pod selector? So that's what we're looking for feedback on. Sean, does that sound right? Did that cover it pretty well?
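As a rough illustration of what the per-strategy filtering being discussed might look like, here is a minimal sketch; the function and parameter names are made up, not the descheduler's actual API:

```go
package main

import "fmt"

// Pod is a minimal stand-in for the descheduler's view of a pod.
type Pod struct {
	Name      string
	Namespace string
	Labels    map[string]string
}

// filterEvictionCandidates keeps only pods whose namespace is in
// includeNamespaces (when the list is non-empty) and whose labels match
// every key/value pair in selector, mirroring the two proposed features:
// filtering eviction candidates by namespace and by label.
func filterEvictionCandidates(pods []Pod, includeNamespaces []string, selector map[string]string) []Pod {
	nsAllowed := func(ns string) bool {
		if len(includeNamespaces) == 0 {
			return true // no namespace filter configured
		}
		for _, allowed := range includeNamespaces {
			if ns == allowed {
				return true
			}
		}
		return false
	}
	labelsMatch := func(p Pod) bool {
		for k, v := range selector {
			if p.Labels[k] != v {
				return false
			}
		}
		return true
	}
	var out []Pod
	for _, p := range pods {
		if nsAllowed(p.Namespace) && labelsMatch(p) {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	pods := []Pod{
		{Name: "a", Namespace: "dev", Labels: map[string]string{"tier": "batch"}},
		{Name: "b", Namespace: "prod", Labels: map[string]string{"tier": "batch"}},
	}
	got := filterEvictionCandidates(pods, []string{"dev"}, map[string]string{"tier": "batch"})
	fmt.Println(len(got), got[0].Name) // prints 1 a
}
```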
F
G
F
Wow, so I think that was all we wanted to go over for it. I don't know if there's anything else; people can read the doc and add any more comments if they have them. But I think that, you know, if we can just come to a decision on how to implement them: both of these options seem like reasonable features, and it seems like people are kind of leaning towards adding it as a strategy parameter in our API.
A
F
D
Hi, my name is Swati, and we want to bring up the discussion about topology-aware scheduling that we've actually been having in a few other SIGs: we spoke about this in SIG Node, and we've had conversations about this in SIG Node Resource Management. So our intention was to bring this discussion here to SIG Scheduling at a high level, to kind of explain the problem statement, what we're trying to solve, the discussions we've had so far, and the way we are approaching this problem.
D
Yep, so this is our agenda. I'm just going to go through Topology Manager, provide a quick overview of what the current state of topology awareness in the scheduler is, or you could call it topology unawareness in the scheduler: the scheduler is NUMA-unaware at this point in time. Then the discussion that has happened in the community so far, a few questions and things that we'd like to chat about, and a few proposals. We have a couple of KEPs to support this proposal, and we've also had discussions about those.
D
So the policies that Topology Manager supports are none, best-effort, restricted, and single-numa-node, and essentially Topology Manager is like an admission controller. The biggest challenge that we have currently is that, in the case of the single-numa-node policy, it is very restrictive, and the scheduler can allow a pod to be placed on a certain node, but Topology Manager could reject it. So that's essentially the challenge that we have.
D
That's what we have right now. So the challenge again, at the risk of repeating myself, is that the scheduler is topology-unaware. People who want to dig deeper into the issue itself, I have a link on the slides; they can go dig deeper into it. But the main problem is that if you're running Topology Manager with the single-numa-node policy, with guaranteed pods...
D
If they request more resources than a single NUMA node, but less than a whole node, it causes runaway pod creation, and the reason is that the scheduler has a different view of resources. On my next slide, the diagram will help me explain this; just take this as given for the time being: the scheduler will basically send the pod to the node, and Topology Manager will kind of reject it, and that leads to a topology affinity error. If the pod is part of, say, a deployment or a replica set...
D
...that would continue to happen, because it will continue to suffer the same fate. So this is an example that I want to explain. In this case we have a two-node cluster. From the scheduler's point of view, we see that both nodes have three instances of device A and three instances of device B. It doesn't have visibility into which NUMA node they are on, and that's the reason this problem happens.
D
So if a pod like this comes up, requesting an instance of device A and no instances of device B, the scheduler could think that, okay, this node is suitable to place that pod. When it does that, everything works fine if Topology Manager understands that, okay, with the single-numa-node policy enabled, we can place it on NUMA node zero. But the problem happens when the scheduler decides to place it on, say, this other node, where the topology alignment is not possible, and a topology affinity error occurs.
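The mismatch in this example, the scheduler's aggregate view versus the Topology Manager's single-numa-node check, can be sketched as follows (all names here are illustrative, not actual scheduler or kubelet code):

```go
package main

import "fmt"

// NUMAZone holds free device counts on one NUMA node; Node aggregates zones.
type NUMAZone map[string]int

type Node struct {
	Zones []NUMAZone
}

// aggregateFits mimics today's scheduler view: sum free devices across all
// NUMA zones and check the request against the node-wide total.
func aggregateFits(n Node, req map[string]int) bool {
	total := map[string]int{}
	for _, z := range n.Zones {
		for dev, c := range z {
			total[dev] += c
		}
	}
	for dev, want := range req {
		if total[dev] < want {
			return false
		}
	}
	return true
}

// singleNUMAFits mimics the kubelet Topology Manager's single-numa-node
// policy: the whole request must fit inside one NUMA zone.
func singleNUMAFits(n Node, req map[string]int) bool {
	for _, z := range n.Zones {
		ok := true
		for dev, want := range req {
			if z[dev] < want {
				ok = false
				break
			}
		}
		if ok {
			return true
		}
	}
	return false
}

func main() {
	// Two devices of type A split across two NUMA zones: the aggregate view
	// says a request for 2xA fits, but no single zone can satisfy it, which
	// is exactly the disagreement that leads to a topology affinity error.
	node := Node{Zones: []NUMAZone{{"A": 1}, {"A": 1}}}
	req := map[string]int{"A": 2}
	fmt.Println(aggregateFits(node, req), singleNUMAFits(node, req)) // prints true false
}
```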
D
H
Sorry, I was muted. Yes, there were a lot of discussions in the SIG Node meeting and in the SIG Node Resource Management meeting. The first one was much earlier, in April; here I mentioned the April discussion about Topology Manager and NUMA-aware scheduling, and then in May there was a discussion about NUMA-aware scheduling during which we narrowed the possible approaches, reduced the number of approaches. One of the approaches that came out was to call the kubelet endpoint from a scheduler plugin; that approach was also rejected.
H
So then, my part of this approach is about gathering the resource topology information. We prepared an enhancement proposal, you can find it here, and we published it. So currently we are exposing the resources in CRD format; we describe it in the document, a custom resource definition standard for node resource topology.
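As a rough sketch of what such a custom resource might carry, per-NUMA-zone capacities plus the node's Topology Manager policy; the field names here are illustrative, and the actual schema is defined in the enhancement proposal:

```go
package main

import "fmt"

// ResourceInfo describes one resource within a NUMA zone.
type ResourceInfo struct {
	Name        string
	Capacity    int64
	Allocatable int64
}

// Zone describes one topology zone (for this discussion, a NUMA node).
type Zone struct {
	Name      string // e.g. "node-0" for NUMA node 0
	Type      string // e.g. "NUMA"
	Resources []ResourceInfo
}

// NodeResourceTopology sketches the per-node custom resource discussed
// above: zone-level resource availability plus the kubelet's policy, so a
// scheduler plugin can predict Topology Manager admission decisions.
type NodeResourceTopology struct {
	NodeName string
	Policy   string // Topology Manager policy, e.g. "single-numa-node"
	Zones    []Zone
}

func main() {
	nrt := NodeResourceTopology{
		NodeName: "worker-0",
		Policy:   "single-numa-node",
		Zones: []Zone{
			{Name: "node-0", Type: "NUMA", Resources: []ResourceInfo{
				{Name: "example.com/deviceA", Capacity: 2, Allocatable: 1},
			}},
			{Name: "node-1", Type: "NUMA", Resources: []ResourceInfo{
				{Name: "example.com/deviceA", Capacity: 2, Allocatable: 2},
			}},
		},
	}
	fmt.Println(len(nrt.Zones), nrt.Policy) // prints 2 single-numa-node
}
```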
H
D
A
So, when you're talking about the CRD and CR, they will basically be describing the NUMA architecture of the node, right, and how much is there on each NUMA node? You mentioned that you discussed having this information in the node object itself; what was the outcome of that discussion? Why was it rejected? Do we need a new object to represent the data for properties of the node while we have a canonical node object?
D
I think the objection currently to that is that we don't want to introduce too much topological information into mainstream Kubernetes objects. That's the main reason we are going ahead with the CRD-based implementation, so yeah, that's the feedback that we got on introducing it into the main node object.
A
D
So actually that is part of one of our asks as well. The way we see it is that it should be in a neutral sort of position, and we wanted to ask if maybe a Kubernetes SIG is a reasonable place to put that; that's something that we wanted to chart out as well, what would be a reasonable place where we can host it, I think.
F
D
A
So, do you think... there is the plugin itself, which I think is fine to have in the scheduler repo, the same way we did with the CSI, again as long as there's buy-in from multiple SIGs in Kubernetes. But the CRD type itself, and who's going to maintain that CRD, you know, like the type; I don't know if there are any functionalities around it, like utility functions, etc. I feel that it belongs to the node more than the scheduler. Again, like, I'm...
A
D
F
I
Yeah, I think so. Yeah, the approval isn't a big thing. You can just use the kubernetes/org repo to create an issue, and there's the issue template; you can say you want to create a subproject, and before that you had better negotiate with SIG Node. I think I agree with Abdullah that SIG Node seems like a better place to own the CRD, so SIG Node's kind of owners, the chairs or tech leads, approve on creating that; you can create that issue. I also...
D
So, the way device plugins currently work: especially if you consider the SR-IOV device plugin, the device ID usually is used to populate the PCI address, and if that is used, you could evaluate the NUMA node on which the device exists, so that information could be gathered from there, and so that...
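On Linux, the NUMA node of a PCI device is exposed in sysfs at /sys/bus/pci/devices/&lt;address&gt;/numa_node, with -1 meaning unknown. A small sketch of parsing that value (the SR-IOV device-ID-to-PCI-address mapping itself is assumed from the discussion above):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseNUMANode parses the contents of a sysfs numa_node file. It returns
// the NUMA node index and true, or false when the affinity is unknown
// (the file contains -1) or the contents are unparsable. Reading the file
// from the PCI address derived from the device ID is left to the caller.
func parseNUMANode(contents string) (int, bool) {
	n, err := strconv.Atoi(strings.TrimSpace(contents))
	if err != nil || n < 0 {
		return 0, false
	}
	return n, true
}

func main() {
	// Example contents as they might appear in sysfs.
	if node, ok := parseNUMANode("1\n"); ok {
		fmt.Println("device is on NUMA node", node) // prints: device is on NUMA node 1
	}
	if _, ok := parseNUMANode("-1\n"); !ok {
		fmt.Println("NUMA affinity unknown")
	}
}
```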
D
Before we end, I just want to bring this up as well, which we had brought up in one of the PR discussions. Like I mentioned, there's probably a more generic problem to solve here: the scheduler proposes and indicates that a node is suitable, but the kubelet on that node rejects it. So someone had provided, it was a really nice idea, about how the scheduler could actually propose more than one node to be nominated for a pod to be scheduled on.
D
J
It's not a question; it's more a comment regarding what Swati just mentioned, this issue. So it's not just about admitting pods; it's about any kind of error situation which happens between the pod arriving at the kubelet and actually starting. Any error condition can happen: some admission hook fails, it might be some storage attachment that fails, it can be container runtime creation errors. So, practically, to get to a state where we are sure we can run the pod on the node it fits on.
J
D
Yeah, and the other thing I actually want to add here is that if we go ahead with this proposal as well, it would improve the chance of a pod being successfully placed. But there could be a scenario where a topology affinity error would still occur, because no node is able to fulfill that pod, and again, having that visibility at the scheduler level to decide that the pod is not schedulable, so you probably just need to keep it in a pending state, or mark that no suitable node exists.
D
A
Okay, yeah, and I like the approach of having the topology described in the CRD; it's a Kubernetes-native way, I guess. Hopefully we can make progress in that direction. So, as a SIG, I would say, like, I don't know if anyone has any objections; it seems like something that makes sense. But the KEP is there; if anyone has any concerns, please bring them up on the thread.