From YouTube: Kubernetes SIG-Scheduling Weekly Meeting for 20200716
A
Okay, hi everyone. So, as you all know, this meeting will be recorded and uploaded to YouTube. We're just looking at the agenda we have. Sean would like to talk about a way to ignore pods for eviction in the descheduler.
B
Hey, yeah, this is Sean. Can you hear me? Cool. Yeah, so I opened this GitHub issue as a feature request for the descheduler. At a high level, I just wanted to run the concept by people. There are two options for having the descheduler ignore pods for eviction: one is by annotation, the other is by priority class. There's been some discussion on the GitHub issue by several people who are on this call already.
B
So that's the high-level overview. If you want to go into more detail we can, if you have questions, or I can explain it further.
A
Right, so I guess I can try to summarize it from the perspective of someone who just read the doc, to make sure we understand the problem. What you would like is a way to configure the descheduler to ignore a specific group of pods so they aren't considered for eviction, and the question here is: how do we select, or basically identify, those pods? And you're suggesting two ways on that issue.
A
One is by labels, I guess, and the other is by setting a priority class as a threshold, where any pods with a priority higher than that will be ignored. And I guess the first approach, at another level, is actually an annotation, which is similar to how we used to define critical pods in the past, before pod priority was introduced. Is that a fair summary?
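For concreteness, the priority-threshold option being proposed might look roughly like this in a descheduler policy file. This is only a sketch of the proposal: the `thresholdPriority` field shown is hypothetical (it is what the GitHub issue asks for, not a shipped field at the time), while the surrounding `DeschedulerPolicy` shape is the existing v1alpha1 one.

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      # Hypothetical field from the GitHub issue: pods with a priority
      # greater than or equal to this value would never be evicted.
      thresholdPriority: 10000
      nodeResourceUtilizationThresholds:
        thresholds:
          "cpu": 20
          "memory": 20
```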
A
Basically, the descheduler already has label and namespace filtering. Have you also considered doing something similar, like saying pods in this namespace, for example kube-system, are excluded? Then you could configure the descheduler with a specific namespace.
A
It's just another option that you might want to add.
B
Yep, yeah. For my first take I'd say yeah, we want that. All right, so it would be interesting to have a feature to be able to, you know, include and exclude by namespace, and also by label, which is what we talked about at the previous meeting. And then this is an additional option, a third option we'll call it, for excluding pods. Okay, yeah.
A
I think, from my perspective, the priority threshold does make sense.
A
Any other suggestions or comments right now?
C
I took a quick look, and the second option, which goes with descheduler-policy-specific fields, makes more sense to me, because, as someone mentioned, if you go with global command-line arguments, that is not versioned. Also, I guess this has grown much bigger than its initial state, so having an organized and versioned spec will make management much easier, unless there is a particular requirement for it to be global.
C
I think having it global can be the next thing the descheduler goes with, something the whole component can fit, rather than putting them into...
A
Individual command-line arguments, right. But there are two questions here. The first is: how do we select the pods? The second: how do we configure the descheduler with this selection? The first question is whether we select them using an annotation, a namespace, a label specification like a selector, or by setting a threshold. And the second question, which is what you're discussing right now, is whether it's going to be at the policy level or a global flag. So those are the two things.
A
Right, so we have two problems, two things that we want to make a decision on. One is: how do we select the pods? Is it using a priority threshold, using an annotation, or using a label selector? And then, how do we feed that into the descheduler: as a command-line flag, or as a policy, or, what do you call them, eviction policies?
B
Yeah, like everything else, there's some wonderful YAML configuration file they have to update, yeah.
A
So we agree that those are the two things we're discussing, that there are two different problems, right? And we were discussing the second one, not the first one. I'm not sure we've made a decision on the first one.
D
I think it's tied to the highest user priority class, and there's also the fact that we have an annotation that you can add to specify that pods are evictable. So if they don't meet all the criteria to be evicted, you can add the annotation. So this is sort of extrapolating both of those into other ways that users can configure the descheduler. Those are the two ways we currently have built in, which basically just try to exclude system pods.
A
Just as an example, the cluster autoscaler does have something similar: when it downscales, you can configure pods to say "don't ever evict me", and they use an annotation. Just as an example of what another component does.
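For reference, the cluster-autoscaler annotation being referred to is `cluster-autoscaler.kubernetes.io/safe-to-evict`; set to `"false"` on a pod, it tells the autoscaler not to evict that pod when scaling a node down (pod name and image below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: do-not-evict-me
  annotations:
    # The cluster autoscaler will not evict this pod on scale-down.
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
spec:
  containers:
    - name: app
      image: nginx
```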
A
Yeah, I agree. I just thought it was interesting to see what other components are doing, but for the descheduler this probably makes more sense.
E
Also, compared to the label and namespace filtering, which can be used only for some strategies, the priority class threshold is more suitable for, say, the low node utilization strategy, where, for example, namespace filtering is not allowed to be used.
E
So, just to say, in some cases the priority class threshold is the only way to keep some pods from being deleted.
A
Okay, sounds good. There is a clear preference towards using a priority threshold. Is there anything else, Sean, that you want to discuss related to this item?
B
No, I think I'm good. Thank you.
D
I know, I just added that, like you said, as a shameless plug. I don't know if anyone else had. I did a quick look through the schedule to see if there was anyone else from the SIG doing stuff; I didn't see any, but if anyone has any other presentations that they want to link to, add them to the list there and we'll all check them out. Just a reminder about KubeCon: like you said, we're going to be recording the slides tomorrow, in case there's anything critical that you want us to mention.
A
Okay, yeah. It might be useful, close to that deadline, to send an email to the list and post on the Slack channel. Just for people who want to ask questions: is it open to everybody, or do they have to be registered for the conference?
C
Nice. First, I want to go through some changes on the extension-point background. Basically, before, we didn't list the specific preemption phase here, which happens after Filter and is only triggered when there is no single feasible node. Right now we make that phase explicit as a PostFilter extension point, and, as before, it's also only triggered when there's no feasible node.
C
The change here is that we made the hard-coded logic extendable, and it was a bit more difficult than we thought, because preemption is a phase where you need to do some internal work reusing some Filter logic or PreFilter logic, so it took a little more effort than we expected, but I'm glad we did it, and it's available in the 1.19 release.
C
So if anyone wants to try this extension point to customize the preemption logic, you can do that, and today I'm going to show you how, by exercising cross-node preemption. Basically, right now, due to some performance considerations, we only allow the preemption logic to happen on a single node, preempting pods from a single node, but that is a limitation made for performance.
C
This is the scheduler-plugins repo; I'm not sure if you're aware of it. We have a few proposals, like coscheduling and elastic quota, in progress, so you can fork this repo and build all your plugins in this folder. As an example, right now I just created a new folder called cross-node-preemption, and I'll show you the logic later. There's one file where you just implement the PostFilter extension point and do all the necessary logic, and then there's how it's built.
C
How it's built is nothing special; it's just the same as how the main upstream scheduler is built. There's a cmd folder with a main.go: you vendor the upstream cmd/kube-scheduler app and create a new scheduler command, and there's a hook there called WithPlugin, so you can register your out-of-tree plugin into the registry. After you register it, you can enable it or not enable it; that just depends on you. I'm using a local script, for example.
C
What I did is, of course, replace the kube-scheduler binary; I'm using the scheduler-plugins bin/kube-scheduler, which I just built from the main.go I showed you, and I'm using the multi-profile feature, which was introduced in the 1.18 release. Basically, I kept the default scheduler in place and named another profile cross-node-preemption, with the specific plugin enabled, so that's all. Okay, I can show you a little bit. Right now I have three nodes, and the topology is a bit like this.
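A multi-profile setup like the one described might be configured roughly like this. This is a sketch: `CrossNodePreemption` is assumed to be the registered name of the demo's out-of-tree plugin, and v1beta1 is the scheduler config API version in 1.19.

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
  # The stock scheduler stays available under its usual name.
  - schedulerName: default-scheduler
  # A second profile that swaps in the custom preemption logic.
  - schedulerName: cross-node-preemption
    plugins:
      postFilter:
        enabled:
          - name: CrossNodePreemption
        disabled:
          - name: DefaultPreemption
```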
C
The topology is like this: node 1 and node 2 belong to zone 1, and node 3 belongs to zone 2. To demonstrate that node 3 is infeasible, whether due to being out of resources or something else, I just tainted this node; it has the same effect as if the node were out of resources.
C
Yeah, there are pods landed on node 1 and node 2, and next I'm going to deploy a preemptor pod. Of course, the preemptor has to have a higher priority, so I put the priority class p1 here, and let me just use the default scheduler first to give you an idea of how it works. Basically, this preemptor pod is trying to schedule into this cluster with a spread constraint on zone and a maxSkew of one, so in this case, if we put the pod in zone 1...
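The preemptor pod in the demo would look roughly like this (the name, labels, and image are illustrative; the essential parts are the priority class and the zone spread constraint with `maxSkew: 1`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: preemptor
  labels:
    app: demo
spec:
  # p1 must have a higher value than the priority of the running pods.
  priorityClassName: p1
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: demo
  containers:
    - name: app
      image: nginx
```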
C
Yes, let's give it another couple of seconds to see how it changes. Of course it doesn't: it still stays in the pending state, and the two running pods are not impacted. So what can we do? Nothing, because the default preemption strategy only tries to preempt pods from a single node. It doesn't work to preempt this pod from node 1 alone, or this pod from node 2 alone, but it could be possible to preempt these two pods together and make the incoming pod schedulable.
C
Yeah, the preemptor is landing on node 2, and after the preemptor lands on node 2, the original two pods start to be rescheduled, and, of course, there is at least one original pod that cannot fit, because the original two pods want to be scheduled onto the nodes evenly.
C
So that basically demos how it works. You might imagine that the logic is complex, but that's not the case; let me show you a bit of the logic here. Because we already have a good default implementation in the upstream, what you need to do is just implement the customized parts. The logic starts with PostFilter, and you can define a private preempt function, and "dp" there is short for default preemption.
A
At least, I'm not seeing your code. Oh really? Yeah, let me...
C
So I will continue. Because in the upstream there is already a good default preemption implementation, you don't need to reimplement all of it. What you need to do is just implement what you want to customize. "dp" is short for default preemption right here, so that means this is the upstream default implementation.
C
So I can leverage that: I leveraged the first step, the third step, the fourth, and also the fifth. Of all five steps, I implemented only the second step, which finds the candidates, and the candidate search is just using a brute-force DFS algorithm to try all the pod combinations.
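The choose-or-not-choose DFS over victim candidates can be sketched in plain Go. This is a toy model, not the demo plugin: pods and nodes are reduced to simple structs, and "would the incoming pod fit" becomes a bare CPU check, whereas the real plugin re-runs the actual filter logic and handles constraints like topology spread.

```go
package main

import "fmt"

// Pod is a toy stand-in for a Kubernetes pod: a name, the node it runs
// on, and an abstract amount of CPU it consumes.
type Pod struct {
	Name string
	Node string
	CPU  int
}

// fits reports whether the incoming pod would fit somewhere in the
// cluster if the pods named in victims were evicted. A real plugin
// would re-run the Filter plugins; here it is a simple CPU check.
func fits(incoming Pod, running []Pod, capacity map[string]int, victims map[string]bool) bool {
	used := map[string]int{}
	for _, p := range running {
		if !victims[p.Name] {
			used[p.Node] += p.CPU
		}
	}
	for node, c := range capacity {
		if c-used[node] >= incoming.CPU {
			return true
		}
	}
	return false
}

// findVictims runs a brute-force DFS over all subsets of running pods
// (each pod is either chosen as a victim or not), returning the names
// of a smallest subset whose eviction lets the incoming pod schedule,
// or nil if none exists. Exponential in len(running): a demo, not a
// production algorithm.
func findVictims(incoming Pod, running []Pod, capacity map[string]int) []string {
	var best []string
	chosen := map[string]bool{}
	var dfs func(i int)
	dfs = func(i int) {
		if i == len(running) {
			if fits(incoming, running, capacity, chosen) && (best == nil || len(chosen) < len(best)) {
				best = best[:0]
				for name := range chosen {
					best = append(best, name)
				}
			}
			return
		}
		dfs(i + 1) // branch 1: keep running[i]
		chosen[running[i].Name] = true
		dfs(i + 1) // branch 2: evict running[i]
		delete(chosen, running[i].Name)
	}
	dfs(0)
	return best
}

func main() {
	capacity := map[string]int{"node1": 2, "node2": 2}
	running := []Pod{
		{Name: "pod-a", Node: "node1", CPU: 2},
		{Name: "pod-b", Node: "node2", CPU: 2},
	}
	preemptor := Pod{Name: "preemptor", CPU: 2}
	fmt.Println(len(findVictims(preemptor, running, capacity)))
}
```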
C
So for each pod you can choose it or not choose it, so in total there will be a lot of choices, of course, but it's just a demo to show you how it works. The code logic you see here is less than 100 lines, and with that you can implement cross-node preemption. Right here, using DFS, the recursion calls itself just like this. We also have quite a good library for you to build your unit tests; for example, "st" is the alias we usually use for the scheduler testing package.
C
You can build the API objects easily, and there's also a good utility like st.NewFramework; you just register...
C
Yeah, which is the test helper that registers your plugin into a unit-test framework, something like that. So that's pretty much it. Any questions?
F
Anyway, this is Yen. Great work, and I'm very excited to see this. I have two questions. The first one: have you thought about the performance implications of this cross-node preemption? As you mentioned, you use the DFS algorithm. Do you think that...
C
No, it's just a sample; I just used a couple of hours to build it. It's more of a starting point to inspire end users to explore more ideas, more innovative ideas and more efficient algorithms. Basically, the idea is that we give you a sample implementation; we just don't want to be limited to the scheduler's default implementation, so we want more ideas.
C
So, basically, right now 1.19 is not cut yet, right, so I just used 1.19.0-rc.1. If you want to give it a try, you can refer to the Go modules settings here; you have to use the rc.1 and then everything is in place as you want.
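Depending on k8s.io/kubernetes as a Go module is the fiddly part: it pins its staging repos to v0.0.0, so each one has to be replaced by hand with the matching tag. A sketch of what the go.mod might look like (the module path is illustrative, and the full list of replaced staging modules is longer in practice):

```
module example.com/cross-node-preemption

go 1.14

require k8s.io/kubernetes v1.19.0-rc.1

replace (
	k8s.io/api => k8s.io/api v0.19.0-rc.1
	k8s.io/apimachinery => k8s.io/apimachinery v0.19.0-rc.1
	k8s.io/client-go => k8s.io/client-go v0.19.0-rc.1
	k8s.io/component-base => k8s.io/component-base v0.19.0-rc.1
	k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.19.0-rc.1
	// ...plus the remaining k8s.io/* staging modules, all at v0.19.0-rc.1.
)
```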
G
Yeah, my thing is: it was...
C
And beyond that, I'm trying to create an issue in scheduler-plugins to come up with a series of sample plugins, to give users samples that exercise each extension point. I'll build a sort plugin, and of course you can experiment more: for example, a score plugin could take into account the terminating pods as well as the nominated pods, so you can give a score considering those factors, and that logic is not in the default implementation.
C
But you can implement that. Again, this repo is there to inspire more practical ideas to resolve real-world problems.
A
Okay, so the plan for the next meeting, which is going to be in two weeks, is to review what we're planning to do for 1.20.
A
All right, thank you so much. That's the agenda for today. Right, thank you.