From YouTube: Kubernetes SIG Scheduling Weekly Meeting for 20211104
A: All right, hello everyone. Today is November 4th, 2021, and welcome to this week's SIG Scheduling meeting. This meeting is being recorded, so be aware of what you are saying: it will be public on the internet.
A: So it's originally a GSoC project, and we agreed to extend and maintain it to be more widely usable for other purposes: you can use it as an SDK, not only as a GUI to simulate the scheduling process. But there's a lack of people who understand the code base and can do some of the reviewing work, so he is calling for help on this.
A: So, basically, do you want to give more information on this? I think you are online.
C: Yeah, all I want to say is that we need more reviewers. We use the Go language on this project, and you need basic knowledge of the scheduler for development, but I'm sure most of the people at this meeting meet these requirements.
C: So if someone is interested, I can help you, so please feel free to ask me any questions on GitHub or Slack.
A: Yeah, I also recall that you mentioned you'd like to give a sort of overall introduction and code walkthrough so that people can get familiar with it. Do you want to schedule that, maybe for the next SIG meeting, for example, so that we can get people started with the code base?
A: All right, and yeah, I think this is clear for us: we need more reviewers. Please check the reviewer responsibilities; the other maintainers, I think, are occupied and have their own things on their plates.
D: Yeah, I'm here. Sorry; I think we slipped this from 1.22 because we were running short on time for review, and after the loss of momentum there, my new role required me to do management work, so I've been finding it hard to find focused time to work on this. But the last couple of weeks I've been looking into fixing all the node issues that the reviewers brought up in SIG Node.
D: I think most of them are addressed. I've also addressed the issues that you raised last time on this PR. I think there were four items you had; I just commented on them this morning. One was to verify the code comments, and I think I've fixed those, and it looks like the scheduler cache does not need to account for init container resources.
D: We don't have to worry about it, because validation blocks it: I tried updating the init container resources and it's not allowed, so the scheduler doesn't need to worry about accounting for that in its state. As for the pod cache's update pod function, it does completely remove and re-add the new pod, so whenever calculate resources is called it's always looking at the latest updated pod spec, and the pod status, where the allocated resources are located, is taken into consideration.
D: My question here is about the outstanding tasks. I was trying to see if I could get some help, but the person who offered to help is not quite available, so I'm going to try to do it over this weekend. The main thing that needs to be added is unit tests, which I'll do. One of the questions you mentioned last time, I remember, was that there are some places in the scheduler where you have assumptions about the immutability of pod resources.
D: Yeah, so I'm almost asking for spoon-feeding help on this one, because my time has been very slammed. Some pointers on where to look would help: is the events-to-register the only place? I'll try to take a look this weekend. Is there any other place that I need to look at? Besides the unit tests, that seems to be the only other thing.
A: So basically, we have the assumption that a pod update won't update the requested resources, but now that's not the case, so we need to change some places to loosen that restriction. Basically, I can help with that if you don't have time, given the code freeze is in less than two weeks; it's November.
D: Yeah, I think I broke it down. When we started, the SIG Node folks said they wanted to have everything in one PR, and that's why the API change was there; I had it initially as one commit for the API change, and this is what I did last night.
D: After Tim Hockin approved the API side of things, and a lot of review was done by Lantao in SIG Node, we kind of reached a tentative agreement. We were at that point looking to get it in in 1.22, so we decided to squash it so it would be easier to see any follow-up changes we make, instead of having all those commits. So yesterday I undid that squashing and essentially broke it down again, because the API is not changing anymore and the CRI is not changing anymore.
D: Those pieces have been very well reviewed. On the node implementation there are one or two items left, in terms of how to manage the updating of the container resources, how to call the CRI, and how to manage polling the status from the CRI.
D: That is going to be reviewed next week, and the last big piece that remains is the scheduler, which is not in these commits.
D: Yeah, right now the scheduler change is not there in this PR; it's in my local repo. I have to fix it up. I've been testing it, and I'm looking to add unit tests this weekend. I'll do that, and then I'll take a stab at the events to register as well. If I can't, then I'll just message you over Slack that I'm not able to and ask you to please help. So I'll try to take a stab at it first.
D: If it's easy for me to grasp and understand; this is all new stuff that I'm not familiar with. The last time I looked at the scheduler closely while working on this was 2019, and back then the scheduler was quite different, and our design was also different. It had the scheduler approve the change first, because the scheduler does the initial placement, and then we decided to drop that.
D: The scheduler may assist at some point; it could say it can admit some pod, or something like that, but that's not in the scope for now. That was decided as one of the key changes to the design after the 2019 initial proposal. So now we have settled on this, and it's pretty close to getting in; the API is done.
D: The node part hopefully will get done next week, well in time for the code freeze, and I just hope we have enough time for the scheduler so that we can, you know, make a clean code change here. I could make it a separate PR; I just have to see. If it's a separate PR, this PR has to go in first, right? So having it as a commit on the same PR makes it easy to, you know, appreciate the whole change.
D: Okay, so for now I think I'll create a commit on this, and if we want to make it a separate PR that follows on from this, I can always force-update it and move it out.
D: That's why the SIG Node folks want it in the same PR.
D: I mean, it's alpha, so it's disabled, so it won't immediately look like a broken node. But the fact is that when the scheduler sends a pod towards another node, it does not take into consideration this particular state where, if the resize is infeasible, it needs to account for it differently; otherwise it should use the max of the requested resources and what's currently running, and that will facilitate the resizing.
D: Otherwise, what will happen is that the scheduler will execute based on its current behavior, where it doesn't know that a resize is happening, and it will send a pod to a node that may not have the room to accommodate it; the pod will get rejected and will have to be created again by the controller that initially created it.
D: Yeah, I think that makes sense.
G: Does that make sense, Wei?
D: Thankfully, the first two are completely done; Tim Hockin has already signed off on the API changes and, of course, the generated files. The second one, the commit number 8cb, is the node implementation, and most parts of it have been reviewed. The CRI has been completely reviewed and there are no changes coming. As far as rebasing goes, every time a new commit comes in, it does take me a little while to catch up and redo it.
D: So hopefully; there's a lot of motivation to get this in, and more and more companies have been asking me about it.
D: They need help, so it looks like a lot of people want this in now. As for the scheduler changes, thankfully, I mean, they come on top of all of this, but so far it looks like they're not sprawling all over the place. It's a few places, plus the unit tests that need to be added, and I'll do that this weekend, okay.
D: It's big, so I think the reason it's taking so much time is also because it touches major components and, you know, it does have the potential to destabilize Kubernetes.
I: So for the unit tests and the integration tests for the scheduling part, if you have questions you can also come to me; I would like to help.
D: That's great, then. What I'll do is update the commit; I'll update it today, in fact by this afternoon I'll push another commit on this, which is for the scheduler changes, applying the review feedback that was given earlier. So if you scroll below this, yeah.
D: It'll just be on the main PR. I'll shoot you a Slack message after I update the PR with this one commit for the scheduler changes, and then you can sync up to it and add unit tests to it.
D: Yeah, I think she sends a PR to my repo and then I just merge it.
D: I think I need to have some basic e2e tests in there; I just haven't figured out how to do it yet, to verify that the scheduler is behaving correctly given the states of the pod. It's difficult to catch this one, in the sense that maybe one or two test cases can cover where an infeasible resize is handled.
D: I just have to make sure that when the e2e test runs, the node has limited capacity. Or let me figure this out, Wangchen, if you can look at that.
I: I can take a look at the unit tests and the integration tests. Yeah, I'm not sure about e2e either.
J: But no, you can still run alpha features; we have a job for that.
J: And for this case, an integration test is not what I would be interested in; we need an e2e test, because we want to understand whether the interaction between the scheduler and the node works. In an integration test you will not have nodes, and here you want to have the kubelet running, for example.
D: Correct, so yes, we do have a very comprehensive end-to-end test. It's just that the focus on that is mainly to ensure that the node handles the different kinds of pods correctly: you know, Guaranteed and Best-Effort (Best-Effort, of course, should not even be allowed), but Guaranteed and Burstable pods, different variations of Burstable pod configurations.
D: When you update them, the resources are updated correctly, the cgroup reflects the correct values, and the pod status reflects the correct values. That's what the e2e covers; it's a slow-running e2e test that we have.
D: Currently, in that suite there isn't a focus on the scheduler, on figuring out how to see if the scheduler is doing the right thing. So I think we can add to that, if it's easy to do. Essentially, this was planned for beta: the negative test cases. For the alpha e2e we had all the, you know, happy golden-path test cases, which are already covered, and there is a test file under test/e2e; I think it's under node.
D: Just that, okay, I have to say.
D: I think if there is some visibility into it, we can probably add a test case which, you know, focuses on this particular scheduler behavior, where it is picking a node. If we create a pod with a resource change that's infeasible, and then we try to schedule another pod; it's a little bit of work, but I think that can exercise, at an end-to-end level, that this scheduler change we have is doing the right thing.
I: You mean, like, we have a pod resizing to take up the capacity, and then the scheduler will just filter that node out?
D: Right: we schedule pod one, which has requested two CPUs, and then we resize that pod to say we want five CPUs. That is an infeasible change. So what happens at that point is that the scheduler, when it does calculate resources, sees that the requested change is not feasible.
D: Now, if we schedule a new pod which requests one CPU, it should get scheduled. That's how we test this particular part of it. That's an end-to-end; it's a special e2e test, more like a negative test, so I hadn't considered it for alpha. I think in the KEP I mentioned that all the negative tests are for the beta release, but we can possibly look into adding that one case to cover the scheduler from an end-to-end perspective. The unit tests are important.
I: Okay, let's prioritize the unit tests. So if we want to add some e2e tests for this one, where should we put them? Because all our e2e tests right now are under the test/e2e/node folder; will these be under the test/e2e/scheduling folder?
A: I think the e2e tests are separated into folders like the control plane as well as the data plane. So if we come up with e2e tests for scheduling...
D: So if the resize is feasible and it's being applied, then the scheduler would take the max of what is requested and what is currently allocated, and at some point those two will become the same; that's when the resizing has converged. If the request is such that, okay, there's a 16-CPU system and there's a pod that has requested one CPU and it's been scheduled...
H: Basically, if the resize is not feasible, okay, there are not enough resources to support that resize, then that pod will be... I mean, will the node logic kill that pod, right, delete that pod? Is that right? Or just keep that pod unchanged? It will...
I: Okay, yeah, you said you want to say something about this. So he prepared the whole KEP, and we want someone to review it.
E: ...a commit, and not all the commits that I have worked on in the last few weeks. So essentially we started from the feedback that we got last month when we presented the work; we have adapted our design, and essentially we want to see if the community agrees with the design, so that we can bring in some plugins that will address latency and bandwidth in the scheduling process.
A: Sounds great, sounds great. Yeah.
A: And by the way, I'm going to bump the dependency of the scheduler package. Right now it's pinned to 1.21, so I'm going to bump the dependency to 1.22, just for your information. All right, yeah. So before I end this meeting, do you have any other items to discuss?
H: I just want to; I saw something, so maybe, as you say, we can discuss it offline through the Slack channel.
A: Oh yes.