From YouTube: Kubernetes SIG Scheduling - 2019-05-16
A: Hello everyone, and welcome to the SIG Scheduling meeting. As you know, this meeting is recorded and will be uploaded to the public internet. We have quite a few items on the agenda today; hopefully we will get to all of them. I'm going to start by giving you some updates on the projects that we're working on. As you know, we're working on adding the scheduling framework, and some of the key PRs are already merged.
There is a PR for adding configuration of the framework, which basically adds options to the scheduler's component config to define which plugins we want to enable and what set of arguments we want to pass to those plugins. This is an important PR for the scheduling framework, because it touches the scheduler API; it is the main way to configure all the plugins and the framework, so we are hoping that we can merge it soon.
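For readers following along, the configuration shape under discussion looks roughly like the following Go types. This is a sketch based on the PR as described in the meeting; the field names and exact types may differ in what finally merges.

```go
package config

import "k8s.io/apimachinery/pkg/runtime"

// Plugins lists, per extension point, which plugins to enable or
// disable. A sketch of the configuration shape under review; the
// merged API may differ.
type Plugins struct {
	QueueSort *PluginSet
	Permit    *PluginSet
	// ... one PluginSet per extension point (Reserve, PreBind, PostBind, ...)
}

// PluginSet enables or disables plugins at one extension point.
type PluginSet struct {
	Enabled  []Plugin // run these plugins, in this order
	Disabled []Plugin // turn off these otherwise-default plugins
}

// Plugin names a plugin and, for score plugins, its weight.
type Plugin struct {
	Name   string
	Weight int32
}

// PluginConfig carries optional arguments handed to a plugin when it
// is initialized.
type PluginConfig struct {
	Name string
	Args runtime.Unknown
}
```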
One round of API review by the API reviewers is already done. Jordan Liggitt has helped us a lot on this; I really appreciate his efforts. There are a couple of additional extension points, I should say, that are already merged: the queue sort and permit points. Thank you to the contributors for these. And then there is a post-bind point which is also ready.
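For context, the queue sort point orders pods in the scheduling queue, and the permit point can hold a pod just before binding. Below is a simplified sketch of what such plugin interfaces look like; the real framework interfaces carry more context, so treat the names and signatures here as approximate.

```go
package framework

import (
	"time"

	v1 "k8s.io/api/core/v1"
)

// Minimal stand-ins for the framework's richer types.
type PodInfo struct{ Pod *v1.Pod }
type CycleState struct{}
type Status struct {
	Code    int
	Message string
}

// QueueSortPlugin orders pods in the scheduling queue; exactly one
// queue sort plugin can be enabled at a time.
type QueueSortPlugin interface {
	// Less reports whether p1 should be scheduled before p2.
	Less(p1, p2 *PodInfo) bool
}

// PermitPlugin runs just before bind; it can approve or reject the
// pod, or ask the framework to wait up to a timeout before binding.
type PermitPlugin interface {
	Permit(state *CycleState, pod *v1.Pod, nodeName string) (*Status, time.Duration)
}
```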
The post-bind PR is ready to merge, but I decided to hold it, because every time we add a new extension point, the PR for adding configuration of the framework needs to be rebased, and in order to avoid numerous rebases I decided to hold it a little bit until that one is merged. Hopefully we will have that merged soon, and then we can have the post-bind point as well.
I have actually raised this issue with SIG Architecture and also with Contributor Experience; we're trying to brainstorm with those folks to see if we can find any solution for this. Unfortunately, there is a high risk that this feature will not make it into the release, because there are not enough reviewers for the API side of it. I really appreciate Ray's work on this.
We really hope that we can get this into 1.15, but if it doesn't happen, we will definitely have it in 1.16. This is something we should keep in mind: whenever we work on a feature that requires API review, we should probably send the PRs right at the beginning of a cycle. Basically, as soon as the code freeze for the previous version is lifted, we should have these PRs ready for review; otherwise there is a chance that they may not make it. This is a little unfortunate.
We are trying to resolve this issue with the community and somehow raise the number of API reviewers, but at this point we don't have a solution yet. There is another PR for supporting less-than and greater-than operators for inter-pod affinity; that one is also pending API review, and code review as well.
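To illustrate what that PR would enable: node affinity already has Gt and Lt operators, and the idea is to allow something similar in inter-pod affinity terms. The snippet below is purely hypothetical usage, since the PR is still pending; "Gt" is not a valid label selector operator today, and "replica-index" is just an illustrative label.

```go
package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildTerm sketches an inter-pod affinity term using a greater-than
// operator, the kind of expression the pending PR would allow. "Gt"
// is NOT a valid LabelSelectorOperator in the current API.
func buildTerm() v1.PodAffinityTerm {
	return v1.PodAffinityTerm{
		LabelSelector: &metav1.LabelSelector{
			MatchExpressions: []metav1.LabelSelectorRequirement{{
				Key:      "replica-index",
				Operator: metav1.LabelSelectorOperator("Gt"),
				Values:   []string{"3"},
			}},
		},
		TopologyKey: "kubernetes.io/hostname",
	}
}
```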
There was this other problem in the scheduler: one of our preemption tests was sometimes flaky. This had been going on for a long time and we kind of ignored it, because it felt like it's not that important, but of course it was causing some trouble for some of you folks, especially when it was failing in presubmits.
B: I failed to see the same, so yeah, we should come up with a better solution on that. I just raised it as kind of a draft, so we can brainstorm on better solutions. Yeah, the rationale for why I proposed that there is based on the real symptoms I observed and on the reproducing steps. So we should follow the same execution path, which is that a kind of dummy node update, sort of a dummy pod update, was done right after we reserved the information for that pod.
A: Okay, I assume there are no questions, but feel free to ask questions whenever you want. One other quick thing that I would like to add is that next week is KubeCon. I'm going to be there. We're not going to have any SIG meetings next week, and we're not going to have any SIG meetings the week after. So basically this is our last meeting for the next two or three weeks, however you want to count it.
Basically, we are not going to have any more meetings in the next two weeks, so I will see some of you folks at KubeCon, and if you're not attending KubeCon, I will see you at the next SIG meeting, which is going to happen three weeks from now. So, Vinay, you're here; please go ahead and tell us about in-place vertical auto-scaling.

C: Hi.
The main thing: I just wanted to itemize the main changes that were there, run them by the SIG, and then see if there are any concerns. So, the last time we looked at it, we were making the resource requirements in the pod spec container the desired resources; that still stands. And the container status had resources allocated, which would communicate to the observer, an external agent like VPA or any other component, what the currently held resources are. We went back and forth with SIG Node on this.
There was a case where we were persisting state in one of the pod conditions, and they did not want state to be persisted and relied upon by any of the components. So the key change we made was to introduce another field in the pod spec, under containers, called resource binding, similar to how node name is scheduler-set: just as it sets the node name, the scheduler would set the resource binding. This keeps the scheduler in the loop, as per our previous design.
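Pulling those pieces together, here is a rough sketch of the three resource views being described. The field names follow the in-flight KEP discussion and are not final API.

```go
package sketch

import v1 "k8s.io/api/core/v1"

// Three views of a container's resources, as described above. Names
// follow the in-flight KEP discussion and are not final API.
type Container struct {
	Name string
	// Resources stays the *desired* resources, exactly as today.
	Resources v1.ResourceRequirements
	// ResourceBinding is written by the scheduler, analogous to how
	// nodeName is scheduler-set, recording the resources it approved.
	ResourceBinding v1.ResourceList
}

type ContainerStatus struct {
	Name string
	// ResourcesAllocated is the kubelet's view: what the node
	// currently holds for this container. Observers such as VPA read
	// this to compare desired versus actual.
	ResourcesAllocated v1.ResourceList
}
```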
There was a problem with the original version where, if the scheduler and the kubelet act on it in parallel, we would end up in a potential race condition: the kubelet might accept the update while the scheduler, at the same time, is working on scheduling another pod but hasn't seen this update, and it can even assign that pod to the same node, where it then gets rejected. To address that problem, I made an update to the workflow to have the scheduler first look at the resource update request and agree that it is valid.
It looks at its node cache and says, okay, the node can handle this, and accepts it. When it does that, it takes the resource requirements, which now stand for the desired resources, and puts them into the resource binding section. The current proposal is to do this by extending the bind endpoint: today we use the bind endpoint to assign a pod to a node, and this would be an extension to that, saying, okay, we bind the pod to the node.
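As a hypothetical illustration of that bind-endpoint extension: today's Binding type only carries the target object reference, and the extra field below is the proposed addition as described, with an illustrative name.

```go
package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Binding today only names the node a pod is bound to. The extension
// described above would let the scheduler also record the resources
// it approved; ApprovedResources is a hypothetical field name.
type Binding struct {
	metav1.ObjectMeta
	// Target is the node the pod is bound to (existing behavior).
	Target v1.ObjectReference
	// ApprovedResources maps container name to the resources the
	// scheduler agreed to when approving a resize (proposed).
	ApprovedResources map[string]v1.ResourceList
}
```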
The kubelet runs through its own gating checks, because for the case where multiple schedulers are involved there's another race condition, so it does another pod-fits-resources predicate check. Then, if all goes well in the happy, golden-path case, the kubelet would take that, invoke the update-container-resources API on the containers, and once that is done, it would update the resources allocated. So that is the key change in terms of the flow.
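A minimal sketch of the kind of gating fit check described here, assuming simple quantities rather than the kubelet's real bookkeeping:

```go
package sketch

import "k8s.io/apimachinery/pkg/api/resource"

// fits reports whether a resize still fits on the node: allocatable
// minus what is already allocated to other pods must cover the new
// request. A sketch of the gating check, not actual kubelet code.
func fits(allocatable, allocatedToOthers, newRequest resource.Quantity) bool {
	free := allocatable.DeepCopy()
	free.Sub(allocatedToOthers)
	return free.Cmp(newRequest) >= 0
}
```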
In terms of where the scheduler fits in, it hasn't changed since March, except that this new field has come in, and the one pod condition that we had we split up into two different pod conditions: one is called pod resources bound and the other one is pod resize success. The scheduler sets the pod resources bound condition to indicate to the other components whether or not the resize has been approved, and pod resize success is set by the kubelet and may then be cleared by a controller to signal back to the scheduler.
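A small sketch of those two conditions as described; the condition type names come from the KEP discussion and are not part of the core API today.

```go
package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Hypothetical condition types per the KEP discussion; these are not
// part of the core API today.
const (
	PodResourcesBound v1.PodConditionType = "PodResourcesBound" // set by the scheduler
	PodResizeSuccess  v1.PodConditionType = "PodResizeSuccess"  // set by the kubelet
)

// markResourcesBound sketches the scheduler recording its approval so
// the other components can see the resize has been bound.
func markResourcesBound(pod *v1.Pod) {
	pod.Status.Conditions = append(pod.Status.Conditions, v1.PodCondition{
		Type:               PodResourcesBound,
		Status:             v1.ConditionTrue,
		LastTransitionTime: metav1.Now(),
	})
}
```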
Okay, I was racing through this, so I don't know if I missed any points or if I've not been clear, but that's kind of a short summary of the changes that have occurred since March. The idea is essentially to communicate this and have SIG Scheduling think over this change and see if there are any concerns with it, but do review the latest changes in the KEP at some point, yeah.
A: Definitely happy to; that makes sense. I reviewed your PR a while ago, it was a few months ago, so I don't remember all the details now, but what I remember is exactly similar to what you said: we basically, I mean the scheduler, looks at the desired resources, verifies them, basically approves them, and then those are sent in an update. But one thing that is not so clear to me is the reason that a race condition exists.
C: The race condition, from what we looked at, is, let's say, let's take the case where there is one node which has four gig of memory, and it's been assigned one single pod which has asked for three gig. The scheduler sees that and assigns it. Okay, now that pod is up and running on that node, and then there is another pod in the queue which asks for one gig. So now this pod can fit on that node.

There is one gig of capacity on that node, where the previously assigned three-gig pod is running, so the scheduler sees that and is going to assign that pod to this node against that capacity. While the scheduler is running through its predicates, let's say we get a patch for that first pod, which was asking for three gig; now it says, I want 3.5. And the kubelet at that point hasn't seen the pod that the scheduler is about to assign to that node.
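To make the race concrete with the numbers from this example, here is a small runnable walkthrough (a sketch, not scheduler code):

```go
package main

import "fmt"

// A concrete walkthrough of the race described above: a 4Gi node, a
// running pod holding 3Gi, a pending 1Gi pod, and a resize of the
// running pod to 3.5Gi arriving mid scheduling cycle.
func main() {
	const nodeGi = 4.0
	runningGi := 3.0
	pendingGi := 1.0

	// Scheduler's view before the patch: 4 - 3 = 1Gi free, so the
	// pending 1Gi pod fits and gets assigned to this node.
	fmt.Println("scheduler sees free:", nodeGi-runningGi) // 1

	// Meanwhile the kubelet accepts the resize to 3.5Gi without
	// having seen the incoming pod: 4 - 3.5 = 0.5Gi free.
	runningGi = 3.5
	fmt.Println("kubelet sees free:", nodeGi-runningGi) // 0.5

	// When the 1Gi pod arrives it no longer fits (1 > 0.5) and is
	// rejected; keeping the scheduler in the approval loop avoids this.
	fmt.Println("pending pod fits:", nodeGi-runningGi >= pendingGi) // false
}
```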
A: Yeah, I'm talking about the one before this very recent change, the PRs you worked on. That one had the scheduler in the loop, right? So the kubelet couldn't approve a resource change directly; it had to go through the scheduler. And if the scheduler is approving, then where is the race condition there?
C: There is no race condition in that case. The only changes that have happened since March: what you looked at in March continues to remain the same; the changes are more around where the information is stored and how it is stored. Previously, we had the resource requirements, which is the desired resources, and in the pod status, the container status, we had resources allocated, which is the kubelet's view. And we introduced a third struct, which is the pod resource binding, to which the scheduler says:
Okay, I approve. Previously, what we were doing was setting a pod condition, which is part of the status field, and the reliance on setting and persisting that was against the API conventions. The API conventions say that if the pod status is lost for some reason, then the components should be able to arrive at the pod status by looking at observations of the current state, without having to rely on it being persisted. That became kind of the pain point with the previous design.
A: Actually, you know, we're going to have this contributor summit, and SIG Scheduling has a three-hour time slot there. It would be great to use some part of that for maybe peer-to-peer PR reviews. Doing these sorts of face-to-face PR reviews together will basically speed things up and remove some of the, you know, back-and-forth communication, so we can use this opportunity for doing that, and not only for this particular KEP.
I encourage everybody who has a PR that is not getting enough attention to be there and communicate directly with any of us to review these PRs over there. We're going to have this three-hour time slot in the contributor summit, and we're going to use part of it for demoing some of the work that is going on as well. Hopefully it'll be more like a social gathering for all the contributors, too. I look forward to seeing some of you folks there, definitely.
A: I don't know the details, really, to be honest with you. I volunteered myself, we volunteered ourselves, for API review, at least for the pieces concerning scheduling. I have actually gone through one of those pairing sessions, you know, sort of a face-to-face review, with Jordan, and it was very helpful. Actually, it's going to take a while for someone to become a competent API reviewer, because there are so many little things that you need to keep in mind when it comes to the API.
A: I don't think there is any formal process to volunteer someone. It's just, you know, if you communicate with him directly; or, if you don't want to do that, I can actually tell him that you are interested.

D: Why don't you recommend me for it?

A: Sure, yeah, I can reply to that email thread and also add you.
You know, honestly, I have gone through this process and read the documents and everything, but I still don't feel like I am there, because this is still new to me, and there are a lot of things I think I need to learn. It's going to take a while. I'm sure it's not going to be like you sign up and within three days you become a reviewer.
A: Yeah, both of these are great things to do. I mean, marking it clearly as experimental would be great, because, you know, Kubernetes is super popular: we sometimes go and add a new API somewhere, and despite the fact that we've marked it as alpha or experimental, people still take it, use it, and rely on it, and then when we need to change it their systems may break. So it's important to communicate the changes clearly and make sure that it's marked as experimental. And also, yeah.