From YouTube: Kubernetes SIG-Scheduling Weekly Meeting for 20200730
A
Okay, hi everyone. This meeting is recorded and will be uploaded to YouTube.
A
So last time I mentioned that I'm going to start populating the spreadsheet with the planned features. The reason I'm procrastinating is that 1.19 is still in code freeze, and it might stay in code freeze for a little while until all, or most, of the flakes are addressed. So we don't have any date as of now on when 1.20 will be open, basically when master will be open again for future enhancements.
A
So I will try to list some stuff anyway; for now we can discuss things during the meeting. In any case, we have two items on the agenda, and both will probably end up as feature enhancements that we can list in the spreadsheet.
A
So the first one is efficient requeuing. Who would like to take the lead on this, Aldo or Ding?
B
Yeah, I wanted to say that, because I was discussing with him, that the main concern was the complexity.
A
Right,
I
completely
agree,
I
think,
if
we
look
at
the
events
that
we
have
right
now,
that
the
schedule
reacts
to
and
and
to
to
move
unscheduled
pods
back
into
the
active
or
back
of
queue.
A
There
are
some
events
that
we
can
actually
right
now,
like
this
card
or
or
basically
disable
the
ones
that
are
related
to
services.
Those
are
necessary
only
for
the
custom
plugins,
the
old
custom
plugins
for
services
like
service
affinity.
I
think-
and
this
is
not
even
able
by
default,
so
I
think
we
can
simply
disable
those
events
when
the
post
plugins
are
not
enabled.
A
But, more importantly, I don't think we should graduate those two plugins to GA.
A
I
think
we
should
only
make
them
available
through
the
policy
api
and
they
should
get
duplicated
with
the
policy
api,
and
the
reason
is
that
we
already
have
a
ga
feature
for
those
two
plugins
which
is
for
affinity
and
not
affinity.
So
we
have
two
two
custom,
plugins
called
service
affinity.
A
Another
thing-
and
I
think
node
label
called
something
like
that,
and
I
think
we
can
claim
that
there
are
alternative
ga
features
which
is
interport
affinity
and
unknown
affinity
that
that
users
can
use,
and
so
we
shouldn't
really
graduate
them
to
ga
through
component
config
and
the
plugins
api,
but
we
can't
we
can
discuss
that
on
a
separate
issue.
So
this
is
a
bit
tangent.
A
The
other
two
types
of
events
that
are
causing
parts
to
be
moved
is
no
difference
in
part
events
for
node
events.
I
think
what
we
can
do
is
simply
invoke
the
logic
that
cubelet
invokes
on
nodes
for
admission
and
make
a
decision.
Based
on
that,
I
think
that
logic
is
quite
simple:
it
doesn't
look
at
the
whole
cluster.
A
It
just
looks
at
the
incoming
node
status,
basically
evaluates
it,
and
it
includes
checks
on
resources,
affinity,
no
dfinity
and
a
couple-
others,
maybe
maybe
preemption,
and-
and
so
this
this
check
can
be
simply
done
there
hardcoded
and
the
other
one
is
based
on
volumes
in
general
storage
class,
pvcs,
etc,
and
for
all
of
those
we
can.
We
can
insert
a
check
on
whether
the
pod
actually
has
a
port
volume
claim,
and
so
so
yeah.
I
think
this
approach
is
much
more
simplified.
B
So what you're proposing is that we react to these events and run these checks regardless of the profile configuration?
A
Yeah,
I
think
the
cube
doesn't
even
invoke
the
plug-ins.
They
invoke
the
fit
functions
there,
which
are.
A
You
know,
their
interfaces
are
pretty
simple
like
just
pass
the
node
and
the
pod,
and
it
will
tell
you
whether
the
part
would
have
been
admitted
to
the
node
or
not
and
so
yeah
I
mean,
I
think
we
should
do
it
regardless
of
the
profile,
because
it's
something
that
the
cubelet
will
eventually
reject
anyways.
If,
if
even
if
we
don't
check
it
in
the
schedule.
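(A minimal sketch of the admission-style check being described, assuming a simplified fit function that looks only at the pod and the incoming node; the function name is illustrative, and the real kubelet admission logic covers more, such as taints, ports and preemption.)

```go
package requeue

import v1 "k8s.io/api/core/v1"

// podFitsNode is an illustrative stand-in for the kubelet's admission-style
// fit functions: given only the pod and the node, report whether the pod
// could have been admitted. Here it checks the node selector and CPU only.
func podFitsNode(pod *v1.Pod, node *v1.Node) bool {
	// Node selector check: every required label must match.
	for k, v := range pod.Spec.NodeSelector {
		if node.Labels[k] != v {
			return false
		}
	}
	// Resource check: the sum of container CPU requests must fit
	// within the node's allocatable CPU.
	requested := int64(0)
	for _, c := range pod.Spec.Containers {
		requested += c.Resources.Requests.Cpu().MilliValue()
	}
	return requested <= node.Status.Allocatable.Cpu().MilliValue()
}
```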
A
Yeah,
so
so
that's
one
and
the
other
thing
is
the
for
this.
All
the
storage
related
things
like
all
of
them,
I
think,
can
be
summarized
with
just
simply
checking
whether
the
pod
has
a
port
volume
claim
or
not.
If
it
does
then
yeah,
it
makes
sense
to
put
it
back.
If
it
doesn't,
then
any
storage
related
event
shouldn't
move
the
pod
back
out
of
its
unscheduled
status.
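(As a sketch, the PVC check described here could be as simple as scanning the pod's volumes; a minimal illustration, not the actual implementation.)

```go
package requeue

import v1 "k8s.io/api/core/v1"

// podHasPVC reports whether the pod references any PersistentVolumeClaim.
// If it does not, a storage-related event cannot make it schedulable, so
// such events need not move it out of the unschedulable queue.
func podHasPVC(pod *v1.Pod) bool {
	for _, vol := range pod.Spec.Volumes {
		if vol.PersistentVolumeClaim != nil {
			return true
		}
	}
	return false
}
```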
C
But there are some corner cases which might make the nominated node not the best candidate. So it would be okay to have an option, so we can opt out and disable the feature if we want to really find the best candidate, not just any node the pod can get scheduled to. Also, I noticed you left a comment a couple of days ago saying we can just use the feature gate.
C
So
I
think
we
can,
when
you
just
need
to
reach
a
consensus
on
this,
so
so
we
can
like.
I
can.
I
can
give
a
final
change
and
make
it
okay
to
to
move
on
yeah.
A
Right, let me try to summarize. The issue here, which Dave is proposing, is that when we evaluate a pod, before we go through all the nodes, we should check its nominated node name status. If that status is set, perhaps before going through all the nodes we should just check that specific node first, because it was the node that was nominated in a previous cycle.
A
There's a very high chance that this is the only node that will fit the pod, because this is what caused the pod to not be scheduled in the first place and to go through preemption. But yeah.
A
The concern here is that this could be sub-optimal, because there could possibly be a case where another node has been added, or a pod was deleted, and another node that better fits the incoming pod could become available. And so there's this debate on how should we...
A
...how should we enable this feature: whether we have a flag in the component config or a feature flag. Aldo made a comment there, and I think I agree with him that having it as a component config flag doesn't make a lot of sense, because people would eventually either enable this or not. It's similar to preemption: it didn't make sense to have a preemption enable/disable flag in component config.
A
I think it should be a feature flag.
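(A hedged sketch of what that would look like: the gate name "PreferNominatedNode" below is hypothetical, since it had not been decided at the time of this discussion, and the gate map stands in for the real feature-gate plumbing.)

```go
package nominated

// featureGates stands in for the scheduler's feature-gate lookup; the
// gate name is a placeholder, not a settled API.
var featureGates = map[string]bool{"PreferNominatedNode": true}

// candidateNodes returns only the nominated node when the gate is enabled
// and the pod carries a nominated node name from a previous scheduling
// cycle; callers fall back to the full list if that node no longer fits.
func candidateNodes(nominatedNodeName string, allNodes []string) []string {
	if featureGates["PreferNominatedNode"] && nominatedNodeName != "" {
		for _, n := range allNodes {
			if n == nominatedNodeName {
				return []string{n}
			}
		}
	}
	return allNodes
}
```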
D
Is Wei here? Yeah, I'm here. So even if it goes to beta, we have the chance to remove that feature gate and convert it to a component config option, right? Yeah, okay, that should be good. So the discussion here is just whether the choice is binary or not, yes or no, right?
D
Yeah, a little bit like the choice to enable preemption or not, but not exactly like preemption, because with preemption you can disable all the post-filter plugins, right?
A
I mean, there's no harm in having a KEP, even a small one; it might bring more feedback from others, and maybe we'll reach a different decision through the KEP.
A
All
right,
so
we
have
a
profile
level
scheduling
configuration
parameters.
E
Yeah,
I'm
here
hear
me:
yes,
can
I
share
my
screen
sure.
Can
you
see
my
screen?
E
Yeah,
okay,
yeah,
so
we
open
easier
and
you
should
be
able
to
see
it,
and
so
this
is
from
and
yeah
yeah
and
myself
and
also
we
started
this
discussion
in
the
scheduled
plug-in
and
the
repo
we
are
luna
and
eldon
already
provide
a
lot
of
good
feedback.
Then
suggest
us
just
submit
a
proposal.
E
So
I'd
like
to
thanks
everyone
and
yeah
yeah
he's
online
as
well.
So
the
idea
and
is
quite
and
simple
so
we'd
like
to
yeah.
We
are
proposing
and
profile
levels
and
parameter
so
particularly,
and
especially
this
parameter
and
the
percentage
of
nodes
to
score,
because
this
is
a
very
an
important
parameter
and
we
know-
and
it
could
have
a
great
impact
on
the
performance
so
in
particular
in
march
skill
and
clusters,
by
adjusting
the
threshold
of
this
and
the
parameters
and
yeah.
E
Definitely
it
would
affect
the
scheduling
performance
so
seems,
I
think,
everyone
and
yeah.
We
do
not
agree
and
yeah
nets
make
sense
and
to
introduce
to
this
parameter
and
move
it
to
inside
the
profile
level.
E
Right
since
as
our
scheduling
framework
and
support
a
multiple
profile,
and
if
we
have
this
at
an
profile
level
and
we
believe
it
can
better
and
have
better
customization
and
creator,
yeah
easy
to
yeah,
adjust
or
balance
the
scheduling,
quality
and
the
performance
now
the
question
I
I
know
there
are
some
discussion
board
between
the
elder
and
abdullah
is
yeah.
E
How
we're
going
to
do
it,
and
so
basically
we
have
two
options
and
of
course,
why
is
we
just
move
this
and
the
parameter
and
yeah
the
global
one
and
inside
the
profile
and
finally,
and
maybe
in
the
beta,
and
we
have
both
and
then
find
it
in
the
ga
and
remove
the
global
one
just
keep
the
profile
one.
E
The
second
option
is,
I
think,
maybe
and
yeah
more
reasonable.
You
just
still
keep
the
global
one
right
and
I
have
a
pro
local
profile
one.
I
personally,
I
think-
and
this
of
course,
can
probably
and
have
better
compatibility
and
also
more
flexible,
is
yeah.
If
I
don't
want
to
customize
any
profile,
never
right,
I
can
just
use
the
global
one
and
if
I
define
the
profile
levels
parameters
and
then
it
will
override
the
global
values.
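(A minimal sketch of the override semantics of option two; the field names are hypothetical, since the actual KubeSchedulerConfiguration shape was still under discussion.)

```go
package profileconfig

// Hypothetical shapes: a global default plus an optional per-profile
// override, mirroring option two as described in the meeting.
type globalConfig struct {
	PercentageOfNodesToScore int32
}

type profile struct {
	Name                     string
	PercentageOfNodesToScore *int32 // nil means "inherit the global value"
}

// effectivePercentage returns the profile's value when it is set, and the
// global one otherwise, which is the proposed override behavior.
func effectivePercentage(g globalConfig, p profile) int32 {
	if p.PercentageOfNodesToScore != nil {
		return *p.PercentageOfNodesToScore
	}
	return g.PercentageOfNodesToScore
}
```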
E
So
I
don't
know
if
this
is
fair
in
the
summary
of
the
discussion,
erdo
and
aberdeen,
and
we
and
or
what
is,
do
we
have
any
and
yeah
conclusion
about
what
we're
going
to
do
with
this
or
it's
to
be
determined.
B
So the question is: when a user has more than one profile and they want to change the percentage globally, now they have to change it for all their profiles. Which might be okay, because if someone is looking into this they're probably already a power user, so it might be fine to just remove the global one; but we just need to comply with the API deprecation policy.
E
Yeah,
so
so
so,
firstly,
I
think
you
are
right,
I
think,
that's
probably
in
the
the
target
users
will
be
advanced
and
yeah
the
people
who
want
to
really
customize
the
scheduling.
E
So
the
second
option
you
keep
both
and
like
you
yeah
mentioned,
and
so,
if
we
just
to
keep
the
the
local
one
finally
and
yeah
and
then
the
user,
we
could
still
have
a
default
one.
But,
for
example,
I
want
to
customize
it
applied
to
all
the
profiles
right
and
the
default
so
far,
for
example
50
and
if
I
don't
specify
any
value,
if
I
want
to
change
to
whatever
anyone,
yeah
100
and
twenty
percent.
Now
I
have
to
I
have
multi
profiles.
E
Yeah, it makes sense: even if we finally want to keep only the local one, as we discussed here, at least for the beta we will keep both. So that's probably it.
B
But we can decide that later anyway, along with what we have to do for 1.20. Okay, yeah, makes sense. I have some questions, because you also mentioned the possibility of changing the minimum number of nodes, which I think is 100 now. Is that what you're going to talk about next? Yeah.
E
So far it's a constant, not even a configurable one. So my question, related to the percentage of nodes to score, is: do we want to introduce another related parameter? The minimum feasible nodes to find is probably less important than the second one. I probably didn't explicitly mention it in the discussion on the issue, but I also want to bring up that parameter, the feasible nodes to score.
E
To
me,
this
is
definitely
for
for
for
advanced
development
who
want
to
develop
the
plugin
and
for
advanced
scheduling
features.
So
one
particularly.
I
think
this
one
is
interesting
and
we
noticed
two
used
cases
and
I
don't
know
the
plan
for
the
community
to
them.
I
just
want
to
give
the
example,
so
why
is
yeah
we
and
afford
me
and
their
discussion
around
choose
the
better
one
from
two
random
choices.
E
I
believe
a
regional
idea
from
yeah
academic
paper
called
sparrow
from
berkeley,
so
I
just
said
if
I
just
want
to
have
a
very
high
throughput
scheduling
and
for
large
amount
media
experience
of
the
tiny
jobs.
So
I
I
probably
just
randomly
choose
one
is
too
bad
right.
I
just
randomly
choose
two
and
then
I
choose
the
pattern
between
these
two
and
so
for
this
kind
of
thing.
Another
example
is,
for
example,
if
I
specify
the
node
name
right.
E
...I just want to schedule on this particular node. Internally we have some use cases like this, because the data is already stored on a certain node; we have to use a local persistent volume, and so we have to schedule on that node. So for these kinds of cases, I just wonder: what if we have a profile-level parameter that specifies, say, that I just want to find two feasible nodes to score, or even just one?
E
If
I
find
this
one,
this
one
is
the
one
that
I
really
want
now,
of
course,
this
we
can
shortcut
right,
show
the
secret
and
all
other
filter
on
the
stage
and,
but
I
think,
auto
and
makes
some
good
and
yeah
and
the
comments
also
concerned
and
said:
oh,
do
we
really
want
to
introduce
the
complexity,
make
it
hard
for
users
to
configure,
so
I
just
want
to
throw
out
yeah
why
we
think
it
might
be
helpful
and
yeah
in
particular,
I
think,
is
specified
in
terms
of
the
nodes
and
yeah
for
some
use
cases.
E
Yeah, I thought of this. I was just thinking because, like in our case, we have multiple clusters and the numbers of nodes differ, so it's confusing how I should specify this percentage of nodes to get exactly, say, two. For example, say the minimum is two and I do the calculation; maybe I get some numbers wrong, but definitely, I think it's possible to use the percentage of nodes to score to control it, together with this minimum feasible nodes to score.
E
Oh okay, so compare these two: the original minimum feasible nodes to find specifies a lower bound. So if, for example, you find 200 feasible nodes, that's fine, you are still going to use 200, right? Only if the number you calculate based on the percentage of nodes drops to, say, 99, or to 50, will the scheduler at the next stage still return 100 feasible nodes. But this feasible nodes to score would mean an exact number.
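(For reference, a simplified sketch of the lower-bound behavior being described, paraphrasing the scheduler's numFeasibleNodesToFind logic of this era; constants match the upstream code, the rest is condensed.)

```go
package nodestoscore

const (
	minFeasibleNodesToFind           = 100 // the hardcoded lower bound discussed above
	minFeasibleNodesPercentageToFind = 5
)

// numFeasibleNodesToFind returns how many feasible nodes the scheduler
// looks for before it stops filtering and moves on to scoring.
func numFeasibleNodesToFind(numAllNodes, percentageOfNodesToScore int32) int32 {
	if numAllNodes < minFeasibleNodesToFind || percentageOfNodesToScore >= 100 {
		return numAllNodes
	}
	adaptivePercentage := percentageOfNodesToScore
	if adaptivePercentage <= 0 {
		// Adaptive default: start at 50% and shrink as the cluster grows.
		adaptivePercentage = 50 - numAllNodes/125
		if adaptivePercentage < minFeasibleNodesPercentageToFind {
			adaptivePercentage = minFeasibleNodesPercentageToFind
		}
	}
	numNodes := numAllNodes * adaptivePercentage / 100
	if numNodes < minFeasibleNodesToFind {
		return minFeasibleNodesToFind // percentage result is clamped to the lower bound
	}
	return numNodes
}
```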
A
Yeah, this has been discussed before, and the exact same proposal was made: basically have an absolute value for how many nodes you want. Yeah, yeah, yeah.
E
Yeah. To me, I think the second one is probably more useful in this case. As I said, if you use a percentage to specify it, it's a little bit challenging to control. Also, these two use cases, yeah, I don't know how general they are, but at least for us internally we have some cases like this, where I want to specify exactly how many...
D
That is a percentage number, right? But if we think we just need a number, no matter whether it's a percentage or an absolute number, we can use that parameter, maybe renamed a bit to represent both. Semantically, internally, we could use a type like int-or-string, so that it can be translated internally to represent the specific number of nodes to score.

D
So if you specify an absolute number, of course that is the number we will use in general. If you use a percentage, then it will be translated a bit. That way it can simplify the configuration, so we don't need too many parameters, just one. That's another idea, if we go with the second option: make it work with the existing percentage of nodes to score.
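(A hedged sketch of that int-or-string idea: one parameter accepting either an absolute count such as "200" or a percentage such as "10%". The function name and string parsing are illustrative; in the real API this would more likely be apimachinery's intstr.IntOrString type.)

```go
package nodestoscore

import (
	"strconv"
	"strings"
)

// nodesToScore interprets a single int-or-string style value as either an
// absolute node count ("200") or a percentage of all nodes ("10%").
func nodesToScore(value string, numAllNodes int) (int, error) {
	if strings.HasSuffix(value, "%") {
		pct, err := strconv.Atoi(strings.TrimSuffix(value, "%"))
		if err != nil {
			return 0, err
		}
		return numAllNodes * pct / 100, nil
	}
	return strconv.Atoi(value)
}
```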
E
If you specify an absolute number, yeah, as long as it's a valid number, we'd still need a default one, right, but we wouldn't need this minimum one. Yeah, I think that's a good point; it's equivalent. And this adaptive threshold, if we find it useful, we could definitely convert it into the feasible nodes to score as well. So far the default value of the percentage of nodes to score is 50 percent.
A
All right, so that's all we have for... oh, we have release blockers and integration tests. Do you want to quickly touch on that, Wei?
D
Yeah, I just mentioned that if you spot any release blockers, just put them in the channel or open an issue to us. Right now, Jordan Liggitt has opened flake issues covering all the recently flaky integration tests. Yeah, it happened to a lot of integration tests, so I spent some time fixing them, and I hope they are clear right now. This morning, I think, the last PR went in and the test grid looks green so far, but I will monitor it for the next couple of days.
A
Yeah,
thank
you
for
for
that
effort.
It's
pretty
important.
I
think
we
as
a
community.
We
should.
We
should
keep
an
eye
on
on
the
test
grid,
and
I
don't
know
I
need
to
find
out
if
we
can
set
up
some
sort
of
like
an
automation
where
an
issue
could
get
created
like
automatically
if,
if
any
of
our
tests
is
flaky,
because
I
really
didn't
know
that
we
have
a
flicky
test
like
until
jordan,
raised
an
issue,
and
I'm
wondering
why
this
was
not
placed
before.
A
Yeah,
so
so
we
need
to
figure
out
a
way
to
automate
all
of
that
basically
reporting.
We
shouldn't
wait
until
you
know
when
they
try
to
cut
the
release
and
then
and
then
everything
is
frozen
because
well
we
have
a
bunch
of
flakiness,
at
least
for
us
as
six
scheduling.
A
We
need
to
make
sure
that
all
our
all
our
tests
are
are
not
flaky
and
tracked.
Basically,.
D
So if you want to improve your test quality, this can act as an example to check against: for instance, make the subtests in your test case stateless by not sharing state across the different tests, and avoid some other common mistakes as well. If you want to check them out, you can take a look at the individual issues.
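(A small illustration of the stateless-subtest advice, with a hypothetical fakeQueue helper: fresh state is built inside each t.Run instead of being shared across cases, so subtests cannot leak state into each other.)

```go
package queue_test

import "testing"

// fakeQueue is a hypothetical stand-in for whatever component a test exercises.
type fakeQueue struct{ items []string }

func (q *fakeQueue) push(s string) { q.items = append(q.items, s) }
func (q *fakeQueue) len() int      { return len(q.items) }

func TestQueueOrdering(t *testing.T) {
	cases := []struct {
		name string
		pods []string
	}{
		{name: "single pod", pods: []string{"a"}},
		{name: "two pods", pods: []string{"a", "b"}},
	}
	for _, tc := range cases {
		tc := tc
		t.Run(tc.name, func(t *testing.T) {
			q := &fakeQueue{} // built fresh per subtest: no shared state between cases
			for _, p := range tc.pods {
				q.push(p)
			}
			if got := q.len(); got != len(tc.pods) {
				t.Errorf("got %d items, want %d", got, len(tc.pods))
			}
		})
	}
}
```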
A
All right, I will continue to clean up the feature sheet, and hopefully in the next two weeks 1.20 will be open for features. That's, I guess, all I have; this is all we have on the agenda. Does anyone have any final comments or feedback?