From YouTube: Kubernetes SIG Scheduling Weekly Meeting for 20201105
A: Hi everyone, welcome to this week's SIG Scheduling weekly meeting. This meeting is being recorded.
A: One thing about the next release, the 1.20 release: I think the code freeze will be in the middle of this month. So if you have some critical code check-ins, make sure you meet the deadline; after that, in theory we only accept bug fixes.
A: On to the significant changes. The first one I want to mention is that, due to bandwidth constraints, we decided to postpone the scheduler config v1beta2 API. That also means some associated features have to be postponed as well, like preferring the nominated node during preemption, as well as some others. I think there is also some refactoring work going on there, so be aware of that.
B: Yeah, I think adding new fields is not a problem; it's mostly all the deprecation ideas we had that we can no longer do.
B: Does that make sense? Sorry, the new fields are not a problem. They can still be added to v1beta1 if they make it in before the code freeze; it's just all the deprecations we planned that we cannot do.
A: Is there an ongoing plan for that, and for which field? I think there is one, but I can't tell the exact issue.
A: If that feature gate is disabled, the scheduling behavior doesn't honor it, so that bug was also fixed and backported. Those are two small fixes to be aware of. The next one is a proposal raised by Aldo, which is to add a default affinity. Would you go over it in more detail? The default affinity thing.
B: Right, so we have an existing plugin called NodeLabel, which basically allows you to restrict scheduling to certain nodes, or to avoid certain nodes.
B: This feature we kind of wanted to get rid of, because you could do the same thing with node affinity. But we came to the realization that it's actually useful in the context of multiple scheduling profiles: you can basically tie a profile to a specific set of nodes, and that way your pods don't have to specify that node affinity. They only need to specify the scheduler profile, and that will be linked to an affinity.
B: So that's what this feature is: adding some default affinity that the plugin applies. This means a node needs to satisfy both the default affinity and the pod's own node affinity, if the pod has one. So it's an AND semantic. The PR is out there.
B: The only question now is whether we want to name it default node affinity, or default affinity, or something else. The problem is that we already have default topology spreading, and there "default" means: if the pod doesn't specify anything, we apply the default spreading. In the case of node affinity the semantics are different: we want to apply both. So yeah, that's the question.
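To make the AND semantics concrete, here is a minimal Go sketch using the core/v1 node affinity types. The profile-level default field name had not been settled at the time of this meeting, and the label keys and values below are purely illustrative, not anything from the actual PR.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Hypothetical profile-level default affinity (the actual field name was
	// still under discussion). It ties every pod scheduled through this
	// profile to a particular set of nodes.
	defaultAffinity := v1.NodeAffinity{
		RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
			NodeSelectorTerms: []v1.NodeSelectorTerm{{
				MatchExpressions: []v1.NodeSelectorRequirement{{
					Key:      "example.com/node-pool", // illustrative label key
					Operator: v1.NodeSelectorOpIn,
					Values:   []string{"gpu-pool"},
				}},
			}},
		},
	}

	// A pod scheduled through that profile may still carry its own affinity.
	podAffinity := v1.NodeAffinity{
		RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
			NodeSelectorTerms: []v1.NodeSelectorTerm{{
				MatchExpressions: []v1.NodeSelectorRequirement{{
					Key:      "topology.kubernetes.io/zone",
					Operator: v1.NodeSelectorOpIn,
					Values:   []string{"zone-a"},
				}},
			}},
		},
	}

	// AND semantics, as described above: a node is feasible only if it
	// satisfies BOTH the profile's default affinity and the pod's affinity.
	fmt.Printf("profile default affinity: %+v\n", defaultAffinity)
	fmt.Printf("pod affinity:             %+v\n", podAffinity)
}
```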
B: There are a couple of suggested names, so go ahead and put your thoughts down, or if you are really against this change, also let us know. There is only one caveat, which is DaemonSet pods: since the profiles are not shared with the controller manager, if you use this feature, this default affinity, and you create a DaemonSet, then the DaemonSet controller will create pods that will not be schedulable.
A: Yeah, so it's still being discussed and hasn't been merged, so if you have any opinions, just leave them there. The next item is from Mike: a descheduler strategy to rebalance pods that violate their topology spread constraints. So, Mike?
C: Yeah, I just wanted to quickly mention that this has gone through a lot of reviews so far. I talked to Wei about it, and I think we decided that, at least for an initial release of this, we're going to keep it to regular rebalancing, and keep some of the edge cases, like conflicting topologies and so on, for later iterations.
B: One more minor question: is the support only for filters, meaning hard spreading, or also for soft spreading?
D: Yeah, I'll just make this quick. For the past month and a half or so, I've been working on API server Priority and Fairness, which would give us significant improvements in our scheduling throughput; that is the end goal, basically. But of course, since this is a more central change, since it happens in the API server, there needs to be a lot more testing on it.
D: So the TL;DR is: we are going to be graduating API Priority and Fairness to beta in 1.20, and with that it will be turned on by default. So we can start looking into removing the API server QPS and burst limits in the scheduler, so that we can move entirely to server-side rate limiting through API Priority and Fairness.
D: In case you're not familiar with API Priority and Fairness, it's just a new way for the API server to handle requests from multiple clients in a more fair way, so that no particular client can overload the API server, for example. That's what I probably should have said first, but yeah, thanks.
D: Right, so right now on the scheduler side we only have one knob when talking to the API server: the QPS and burst limits for all communication with the API server. Right now we limit that to, I think, 50 or something like that, but from my initial benchmarks it looks like we can reach all the way up to a thousand pods per second of scheduling throughput with API Priority and Fairness.
D: That's on a not-very-small cluster; for larger clusters the performance is lower, but we can work on that in the future once API Priority and Fairness is in.
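For reference, the client-side limits D mentions are the same kind of knobs that client-go exposes on its rest.Config. A rough Go sketch is below; the host address and the specific QPS/burst values are illustrative, not the scheduler's actual defaults, and with API Priority and Fairness doing server-side limiting these caps could be raised or eventually dropped.

```go
package main

import (
	"fmt"

	"k8s.io/client-go/rest"
)

func main() {
	// Client-side rate limiting of the kind the scheduler configures today via
	// its clientConnection settings. Values and host are illustrative only.
	cfg := &rest.Config{
		Host:  "https://127.0.0.1:6443",
		QPS:   50,  // steady-state requests per second allowed by the client
		Burst: 100, // short-term burst above the QPS limit
	}

	// With API Priority and Fairness enabled server-side, these client caps
	// stop being the bottleneck and could eventually be removed, letting the
	// API server itself arbitrate fairly between clients.
	fmt.Printf("client-side limits: QPS=%v Burst=%v\n", cfg.QPS, cfg.Burst)
}
```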
A: All right, it seems we're good. All the items are ongoing, so if you have any questions or review requests, just ping us on Slack, and you get 30 minutes back. Thanks for joining today's meeting, see you next time. Bye, thanks.