From YouTube: Kubernetes SIG-Scheduling Weekly Meeting for 20200827
A
One thing I thought we could do this week is look at what we could do in 1.20. Master has been opened again, based on the plan that I think was shared on the mailing list, which is basically: starting with PRs that are labeled cleanup, then test flakes, then bug fixes, and only after that will it be open for feature development.
A
Before I get into that, I was wondering if anyone has any questions or would like to discuss other topics, so that we can properly time-box the discussion related to the tasks that we're going to do in 1.20.
A
Okay, so we're going to do only this; hopefully it's not going to take a long time. So let me share this spreadsheet; I added a few things. Please feel free to add to or comment on the spreadsheet.
A
I think I need to fix some permissions, to give some people access; I'll do it later. Anyway, if you have the spreadsheet open, the first item is graduating default pod topology spreading to beta.
A
This is exciting because, first, it's more accurate than the old one: it looks at all the pods in the cluster rather than only the filtered ones. The other thing is its customizability: you can customize which topology domains you want the default spreading to act on, and also the max skew, basically how aggressive you want it to be, etc.
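The customization described here maps to the `PodTopologySpread` plugin's `defaultConstraints` arguments in the scheduler component config. A minimal sketch, assuming the `v1beta1` config API from around this release; all values are illustrative:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    pluginConfig:
      - name: PodTopologySpread
        args:
          # Applied to pods that define no topologySpreadConstraints of their own.
          defaultConstraints:
            - maxSkew: 3                                # how aggressive the spreading is
              topologyKey: topology.kubernetes.io/zone  # which topology domain to spread over
              whenUnsatisfiable: ScheduleAnyway         # score-only; never blocks scheduling
```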
A
So this effort is led by Aldo. Aldo, do you want to... do you have anything else to add related to this?
A
What is it like? Would just changing the constraint in the component config to a pointer, to a reference, allow you to do that?
C
So, the feature today: there are two separate plugins. One of them implements the default spreading, and the other one implements the pod topology spreading that comes from the pod spec. This new feature is basically deprecating the default spreading and just making it a configuration on the new one. And with this default spreading being just configuration on the new plugin, we cannot disable it without disabling the whole plugin, and that disables both features. That's a major blocker at this point.
D
So we are saying that earlier the default spreading was always there, right, and it's also a plug-in in the new framework, and we are saying that both will act independently of each other and then their weights will apply, right? So we don't know how the behavior of one will affect the other; is that the main problem?
D
Okay, I mean, I'm waiting for that feature; that's why I wanted to know if there are any blockers. So this is already available as beta, and so it's enabled by default in all clusters, right? We're not talking...
F
Let me just try to explain a little bit of the background. So we have pod topology spread, right? That feature has been stable since 1.19; we are not talking about that. But underneath there are some related features. Before, we called it selector spreading; that means we can implicitly associate the relationship between, for example, a ReplicaSet and its backing pods, or a service and its backing pods.
F
So it's kind of associated by the label selector; that's the feature, selector spreading: we try to spread these pods, which are associated by their label selector, evenly across the cluster. So that's selector spreading, and that feature was implemented in 1.18 as a plugin, which is called...
F
It's called DefaultPodTopologySpread or something, but in 1.19 it was renamed to SelectorSpread. So that is the plugin story. But because it is implemented as a standalone plugin, if you want to disable that feature you have to disable that plugin; that's its current state. But a lot of the logic implemented there overlaps with some common utility, which is the current stable...
F
The
project
spread
feature
so
that
we
decided
to
sort
of
dedicate
that
plugin
and
instead
put
some
configuration
parameters
to
the
or
implicit
parameters
or
constraints
to
the
stable
autopilot
spread
plugin,
so
that
we
only
have
one
plugin
and
under
that
plugin
you
can
have
explicit
constraints,
implicit
constraints
and
they
will
take
some
priority
one
over
each
other,
so
that
in
the
end
we
all
have
one
plugin.
So
that's
the
whole
story.
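For reference, an explicit constraint is the one a user writes in the pod spec; a sketch with illustrative labels and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-0
  labels:
    app: web
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule  # hard constraint: filters out violating nodes
      labelSelector:                    # which pods count toward the skew
        matchLabels:
          app: web
  containers:
    - name: app
      image: k8s.gcr.io/pause:3.2
```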
D
Okay, I can discuss that offline if it's not part of the schedule. My concern was: I know about the selector spreading .go file that was there in the older scheduler; it seems to have moved to the plug-in, right? And if, by saying "implicit", we are saying that the pod topology spread constraints spec will not be there and spreading will still be supported, I think that is what we should have. Otherwise it will be a breaking change for others, right? It will be, if we just suddenly remove it and...
A
We're already doing that. The idea here is that we're going to replace that standalone plug-in with a default configuration that applies pod topology spread constraints, by default, on all pods that don't have them. So...
A
Let me just finish there. The question here is: how can we allow administrators to disable that default behavior? In the past, you were able to do that by basically disabling the default spread plugin, or what you used to call the score function, right?
A
You could just remove it from the configuration, and so you were able to say: okay, I don't want my collections to be spread by default. With this new feature, where we do that using a default constraint that we configure in component config, there's no way to do it, because we don't distinguish between nil and empty and whatnot. There is more discussion on the issue that Aldo listed. So it's a little nuanced; I don't think it's a breaking change.
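The old opt-out being discussed, disabling the standalone score plugin, looks roughly like this in component config (plugin name per the 1.19 rename mentioned earlier; sketch only):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
  - plugins:
      score:
        disabled:
          - name: SelectorSpread  # formerly DefaultPodTopologySpread
```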
A
Sorry, I forgot his name, but his GitHub is in-something. They're actually working on a feature where we're going to do this across the whole k8s package: we're creating a new repo in staging that will basically factor out all these utility functions across the whole k8s code.
A
And so now we'll be able to cut all the dependencies that the scheduler has on the k8s main repo, the k8s main package. This is quite useful because of what it will allow...
A
The third item is deprecating custom plugins. I discussed this previously: NodeLabel and ServiceAffinity. Those are pretty old custom priorities and predicates that were moved into plugins because of the refactoring, but they already have GA equivalents, which are node affinity and pod affinity.
A
We should phase them out before they make it, as plugins, into a GA component config.
A
That's good. And then we have the issue of continuing to refactor the core scheduler code into plugins. In the past cycle, we led the effort to refactor the preemption logic into a PostFilter plugin.
A
So there's an issue where we're discussing the motivations and feasibility of this, and how we can do it safely as well, because such logic should never be disabled. But from my perspective, I think we should move as much as we can of what we have into plugins, and have the scheduler be simply a bunch of plugins; the scheduler code is then basically running the framework and those extension points, and that's it.
A
Okay, the next item is benchmarking. Adi is working on two items related to benchmarking. One is adding affinity and spreading benchmarks, kubemark benchmarks, into the scalability test suite. This is available on...
A
Let me share the link: perf-dash. I don't know if you are aware of this.
A
So those are basically tests that run daily as kubemark benchmarks, and they show up on the dashboard. This will allow us to better understand how far we are in terms of performance for complex features. Right now we only have that for vanilla workloads, but he will be adding ones for preferred anti-affinity, preferred affinity, and pod spreading, so we will be able to either inform our optimization efforts in the future or catch regressions when they happen.
A
And the other item is integration benchmarks, the ones that we have in the scheduler repo under test/integration. Those are going to allow us to do simulations related to the impact of unschedulable pods and the overhead of the preemption path in general.
G
Nothing much; both of them are in progress. You can follow the PRs, but yeah, sounds good.
A
And then there's the item related to efficient requeuing. We also have an issue opened related to that: basically trying to make smarter, quote-unquote, decisions about when we move pods out of the unschedulable queue. We had a proposal last cycle from Romaldo to make it extremely generic, and there is a path, basically a ramp-up to that, that we can take that is a little bit more hard-coded, more targeted, but simpler, I think, to start with, to evaluate the impact of actually doing this.
A
This is going to depend on the previous item as well, the integration test benchmarks, since they will allow us to evaluate the impact of having efficient requeuing. That's why Adi has taken this one as well, because they're really tightly coupled.
A
Okay, so the last item in the list is extension points for pod enqueue, dequeue, or update. We had a request to add an enqueue function to the queue sort plug-in. I think it's reasonable; it's open for discussion on the issue as well.
A
So it's not like a done deal, but one thing we can do is basically add callbacks when we receive add, delete, or update events for a pod that is not scheduled yet, so basically for the queue. We can do that, I think, and that will allow more advanced...
A
...plug-ins to be developed. As the issue discusses, you could have virtual queues implemented by those plugins; they maintain that state, and then based on it they sort the pods when we sort them in the global active queue.
A
Okay, so those are the things that I've listed here.
A
Yeah, I mean, Harsh, I think, offered to take on this one, but if you want to work on it you can also collaborate with him on this effort. I think it will take some time; it will be multiple PRs. There are, I think, two plugins that we want to remove.
B
Yes, I am interested, okay, but I will talk with him about it. Sounds good.
F
Sorry, sorry, we can't hear you clearly.
H
I'm saying that this is a challenge as well, and I'm new to this community, so I'm looking to you.
A
Sounds good. Hopefully, once master is open, all these issues will likely be broken into smaller tasks, and whoever is leading a specific issue can create smaller issues and tag them with help-wanted. So just watch out for the help-wanted label.
E
So, hi everyone, this is Anuj here. I am new to this group; I joined it for the first time. Basically, from my team I got to know that there's a problem in the descheduling part of the scheduler: when you de-schedule something, the spread is not maintained. That was one of the reasons I got interested in joining this group.
A
So there is a KEP open; I'll share it. I have it in my backlog to read; it's related to the descheduler.
A
Yeah, it's Jan, or John, from Red Hat. I think he's leading that effort.
I
Hey, this is Sean, yeah. I have an open work-in-progress pull request. It doesn't really work in the descheduler yet; it's kind of, sort of, close to doing the even pod spreading, the topology spread constraints stuff, in the descheduler. So that's a work in progress.
D
Okay, I mean, are you actively working on it? Because I'm also interested in that, not in doing the work but actually in consuming it. So I just wanted to check, yeah.
I
So I think what I'm going to do is this: I realistically don't have time to finish this pull request, so I'm going to clean up the commit history a little bit and offer it up for other people to finish off at this point in time. That's kind of where I'm at with it. I'm also interested in using it, but I think I realistically don't have time to finish the code. Okay.
A
Sean, can you please add that to this spreadsheet? We can track it there, and we can find volunteers to continue the work.
I
Sure, I'll add the link to the issue and the pull request to the spreadsheet, sure.
I
It is related to the pod... the new, what I'll call the newer, feature: the pod topology spread constraints feature.
A
Well, for preferred you would need to set some thresholds and whatnot; I don't think we have anything implemented in the descheduler related to preferred, whether that's preferred anti-affinity or preferred affinity or preferred anything. It's all about filters.
I
I'm gonna, yeah, I'm gonna say I don't know enough about scheduling to answer your question. I think that's my answer.
A
The descheduler, right. What I'm trying to say is that the cost-based scaling down of pods, I think, is trying to bring preferred semantics into the descheduler; that's my rough understanding from a first skim. Maybe you can comment on this, because you already reviewed the KEP, but it's going to expand...
A
...what the descheduler can do: not only for required, but also for preferred placement, prioritization, basically, of pods on nodes.
C
Go ahead. Maybe it's worth having 10 minutes or so next week to discuss the plans for the descheduler.
A
I'll propose to them to have something prepared, maybe one slide or just a list of tasks that they plan to do in the descheduler.
D
Yeah, I think that would be helpful. Is it going to be discussed in the same meeting next week, you're saying?
A
I think it's fine, yeah.
D
Okay, I'll just ask the question, and yeah, we can discuss it further in an issue or something. My first question was: I saw in one of the issues yesterday that the current plug-in, or the current scheduler, can run multiple profiles.
A
Right, but the thing is, the provider needs to provide multiple profiles; by default, the default component config has only the default scheduler.
A
Right, they need to provide a component config file that defines multiple schedulers.
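Such a file, passed to kube-scheduler via its --config flag, might define two profiles roughly like this (profile names are illustrative):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler       # unchanged defaults
  - schedulerName: no-spreading-scheduler  # pods opt in via spec.schedulerName
    plugins:
      score:
        disabled:
          - name: PodTopologySpread
          - name: SelectorSpread
```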
A
No, it's a file; it's a path you pass to the scheduler. I think... Aldo, can we do it through a ConfigMap as well?
A
There are no plans to do that in the scheduler itself, unless we get to a point where we build the descheduler as another process, or a goroutine that runs from the scheduler, something like that. But that's a major change at the scale of the framework, so it's a significant change.
C
So the idea is also, and that's why the descheduler is a separate project, that the learning can happen there faster, and once that's ready we could think of merging the two things into a single one.
F
Yeah, I think the cycles for these two things are different. Scheduling happens as a specific cycle to schedule a pod once it comes into the scheduler, right, but rebalancing and runtime descheduling can happen at any time, periodically or at intervals you define. So fundamentally they are sort of different; that's why they were kept as two separate projects.
A
Okay, thank you all; I'll see you next time. I'll bring Mike, and we will also add the other tasks related to the descheduler, like the external plugins. This is not only to track things, but also an opportunity for us to find owners that we can have as points of contact, so that we can also direct new contributors to these owners or points of contact if they need help.