From YouTube: Kubernetes SIG Scheduling Weekly Meeting for 20210325
A
The first one is that I found a misleading feature name: there is a conflict between the KEP and the code implementation for the random ReplicaSet downscale. In the KEP, the feature gate was named for the random replica downscale, but in the code implementation (I'm not supposing this is the only implementation, but there is a mismatch between the feature names) we have a LogarithmicScaleDown feature gate, so this is kind of misleading. So I suppose we need to… I guess the code implementation has been there and we cannot change that, so one solution is to update the KEP to make the names consistent. Also, Mike mentioned to me that there is not a kubernetes/website PR yet to introduce this new feature gate, so I suppose we should do that. So, Mike, does that make sense to you?
B
I think you explained it pretty well. We originally called the KEP "random replica set downscale", and then, you know, that was my fault: I named the actual feature gate LogarithmicScaleDown. And, like you said, being past code freeze at this point, I don't think we can change the feature gate name, so bringing the KEP to consistency with that would be right. And then the other thing you said: the PR to actually document the feature gate.
B
We had a PR to update the descriptions, which I think everyone saw and reviewed, but I wasn't aware whether we need to actually make a document somewhere, like updating the list of feature gates, or if that's the kind of thing that gets done automatically.
B
Okay, yeah. What's the page? You can send it to me offline, but what's the page that needs to be updated for that? Is there a central list of all the feature gates, or do we just update the description that we had there to say, like, as of 1.21 this alpha feature gate enables you to…
A
Okay, thank you, Mike. In addition to this, I suppose there are no other items that need to be discussed in this meeting. If you have any, just bring them up, because next I will give a very short 1.21 release retrospective. Yeah, if you have any items, just bring them up.
A
Otherwise, I'm going to move on to the 1.21 retrospective; let's go over the items one by one. The first one is that in the very beginning we wanted to take some spreading constraints into consideration when scaling down, but after the investigation we found that the cost is somewhat non-negligible.
A
So we moved to another solution, provided by Mike, which is to use a logarithmic algorithm to sort the available pods into buckets, instead of using the original algorithm to choose the pods to scale down. I would say that, although it's not perfect, compared with the effort it would take to fully solve the scale-down case, it is a good trade-off.
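The bucketing idea discussed here can be sketched in a few lines. This is a simplified Python illustration, not the actual ReplicaSet controller code: pods are ranked by the floor of the base-2 logarithm of their running time, so pods of similar age fall into the same bucket and become equally likely downscale victims, with a random choice inside the bucket. The function names and the "prefer the youngest bucket" tie-break are assumptions made for this sketch.

```python
import math
import random

def logarithmic_bucket(running_seconds: float) -> int:
    """Rank a pod by floor(log2(running time)).

    Pods whose ages fall inside the same power-of-two interval land in
    the same bucket and are treated as equally good downscale victims.
    """
    if running_seconds < 1:
        return 0
    return int(math.floor(math.log2(running_seconds)))

def pick_victim(pods, rng: random.Random) -> str:
    """Pick one (name, running_seconds) pod to delete on downscale.

    Prefer the youngest bucket (smallest rank), choosing randomly
    inside it so deletions are not biased toward one node or zone.
    """
    youngest = min(logarithmic_bucket(age) for _, age in pods)
    candidates = [name for name, age in pods
                  if logarithmic_bucket(age) == youngest]
    return rng.choice(candidates)
```

With, say, pods that started 30, 40, and 5000 seconds ago, the 30-second and 40-second pods land in buckets 4 and 5 respectively, so the youngest bucket holds a single candidate and the random choice degenerates to it.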
C
…of controller-aware scaling algorithms, yeah. So basically they said: if we start checking this API, then we have to add checks for other APIs, such as…
A
Exactly, yeah. So the current solution is a mitigation: it cannot perfectly solve the case, but it can mitigate the worst case. And also, as a good addition, we provide a hook called the pod deletion cost annotation, so that you can write your own implementation to apply this kind of annotation, so that the controller manager is aware of the extra factors you apply to those pods while doing the pod scale-down. So this is kind of…
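The hook mentioned here is the `controller.kubernetes.io/pod-deletion-cost` annotation. As a rough sketch (the pod name and cost value below are made up for illustration), an external component can set it on a pod so the ReplicaSet controller prefers removing lower-cost pods first on downscale:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-7f9c-abcde   # hypothetical pod name
  annotations:
    # Pods with lower deletion cost are preferred for removal
    # when the owning ReplicaSet scales down.
    controller.kubernetes.io/pod-deletion-cost: "-100"
```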
A
We are continuing to, sort of, refactor our scheduler code base so that, for example, each plugin implementation can be extracted out into staging. That way, building in-tree and other out-of-tree plugins can be easier, and our code base gets cleaner.
A
Yeah, that's the second one; I guess in the next release we will continue working on it.
C
Is this feature enabled by default? What's the status there? Is it alpha?
D
Yeah, that's good. The plan is to graduate it to beta in the next release. The main thing that I need to add is a benchmark. I have done some local benchmarking, but I want to merge a new benchmark showing that if you have a massive number of namespaces being selected, it's still going to be okay.
A
Yeah, sounds good. And the namespace selector, I suppose, is also applied to the, what's that called, the pod affinity quota scope or something?
D
Yeah, right, it's a quota scope. There is a new scope that allows administrators and operators to control who can use this cross-namespace…
D
…you know, feature in pod affinity. Because if you can't control that, then there's a chance, for example, if you have a multi-tenant cluster, that one tenant could basically impact the scheduling of other tenants, right, like by creating pods with anti-affinity to every single namespace, and so…
A
So before, we moved pods from the unschedulable queue to the active queue or the backoff queue unconditionally upon some system-level events, but that is too aggressive. Sometimes, if you have a lot of pending high-priority pods, that kind of unconditional move can block a low-priority pod from getting attempted, so that is not ideal. So in this release we introduced the extension, well, not strictly an extension…
A
…a plugin extension interface called enqueue extensions, and there is one function you can implement, called "events to register". It basically works like this: once a pod fails a scheduling attempt, the pod will carry the plugins it failed by, and on the scheduler side, if those match an incoming event, it will say, okay, this pod can be moved upon this kind of system event. But unfortunately we only completed, I think, half of our items: the interface in the scheduling framework has been merged, and in terms of the plugin implementations only the node resources one has been merged, so the others will be merged in 1.22.
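The behavior described above can be modeled in a few lines. This is a toy Python model, not the scheduler's actual Go interface: each plugin declares the cluster events it cares about (a stand-in for the "events to register" hook mentioned here), a failed pod records which plugins rejected it, and on a cluster event the queue moves the pod only if at least one rejecting plugin registered for that event. All names below are illustrative.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ClusterEvent:
    resource: str   # e.g. "Node", "Pod"
    action: str     # e.g. "Add", "Update"

@dataclass
class FailedPod:
    name: str
    # Plugins that rejected the pod in its last scheduling attempt.
    unschedulable_plugins: set = field(default_factory=set)

# Each plugin registers the events that could make a pod it rejected
# schedulable again (simplified stand-in for the real registration).
events_to_register = {
    "NodeResources": {ClusterEvent("Node", "Add"),
                      ClusterEvent("Node", "Update")},
    "NodeAffinity":  {ClusterEvent("Node", "Add")},
}

def should_requeue(pod: FailedPod, event: ClusterEvent) -> bool:
    """Move the pod out of the unschedulable queue only if some plugin
    that failed it registered for this event; otherwise keep it queued
    instead of moving it unconditionally."""
    return any(event in events_to_register.get(plugin, set())
               for plugin in pod.unschedulable_plugins)
```

This captures the point made in the turn above: a pod rejected only by a node-resources check is no longer re-queued by, say, an unrelated Pod event, which avoids the unconditional moves that starved low-priority pods.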
A
And also, okay, I will mention the other item later. The next one is to deprecate some plugins, as well as raising the scheduler ComponentConfig version to v1beta2. We could not attain that in the current release, so the API change PR, which I think is almost ready, will be handled in 1.22.
A
We failed to attain that, and also the extension point for a more explicit way for out-of-tree plugins to do pod enqueueing, so that is not finished yet. The next one is to prefer the nominated node when scheduling a pod. The background is that when we do preemption, we update pod.Status.NominatedNodeName for the preemptor pod, and in the next scheduling cycle we know…
A
…okay, this kind of pod has already experienced a scheduling attempt; it has basically been pre-attempted and already has a candidate. So in the next scheduling cycle, instead of re-evaluating all the possible nodes, we take preference on the nominated node name it carries, which saves us a lot of processing cycles. So this is the one feature gate we introduced in 1.21.
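The preference described above can be sketched as: try the node recorded in the pod's status first, and only fall back to evaluating every candidate node when the nominated node no longer fits. A toy Python model follows; the function signature and feasibility callback are placeholders for illustration, not the scheduler's real API.

```python
from typing import Callable, List, Optional, Tuple

def schedule(nominated_node: Optional[str],
             nodes: List[str],
             fits: Callable[[str], bool]) -> Tuple[Optional[str], int]:
    """Return (chosen node, number of nodes evaluated).

    If the preemptor pod carries a nominated node name and that node
    still fits, use it directly instead of re-evaluating every node.
    """
    evaluated = 0
    if nominated_node is not None and nominated_node in nodes:
        evaluated += 1
        if fits(nominated_node):
            return nominated_node, evaluated
    # Fall back to the full pass over all candidate nodes.
    for node in nodes:
        evaluated += 1
        if fits(node):
            return node, evaluated
    return None, evaluated
```

The saving mentioned in the turn above shows up in the evaluation count: with a feasible nominated node, one node is checked instead of the whole list.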
A
Okay, this one we have completed. The next item is some internal options needed by external users, like the unschedulable queue time interval. It makes sense, because we also did that for the backoff queue interval. We tried to attain this in this release but could not catch the release date, so the API is ready and will be merged in the next release.
A
The final one is about preferring the node with the best-fitting storage. The background is that when a pod requests some volumes, right now we don't take preference on which node: if a couple of nodes have available capacity for the volumes, we don't take preference among them. But in this release we introduced a feature gate to prioritize the node with the minimum…
A
…capacity that satisfies the specification; volume capacity priority can leverage this if you want to optimize volume-fitting issues. Okay, so this is basically a very short retrospective for this release, and I will go to the next release. Sorry. In terms of the alpha features, right now we have three; if I missed some, call me out. I think, as already mentioned, we would like to go to beta in 1.22 for this feature, right?
A
Yeah, okay. And the owner of that change is not here, so I will talk to him to see whether he wants to, or needs to, promote it to beta in the next release. And the other owner is not here yet either; I will also talk to him about the v1beta2 API, as well as some other features which depend on or are associated with that API.
A
So maybe in the next release it can be an alpha candidate. Another thing…
A
Yeah, sure, sure; I just missed the question mark here, okay. And Jan and Mike will continue to work on refactoring the core dependencies out of the framework. I talked to Alex on this item; he will very likely attain this in the 1.22 release. And I will continue to work on the fine-grained pod enqueueing in the scheduler. I also left some other requirements here, like…
A
Like, we may want to come up with a mechanism to register the event handlers for core APIs dynamically. Right now we just unconditionally register them, like PVC, PV, Pod, Node, right? But for Services, for example, we may choose to opt out, and how we choose is why we might want a mechanism for dynamic registering. And also, I thought CSI had some CSI-specific events, so I want those to be opted in or opted out dynamically, instead of just hard-coding them or having if-else logic.
A
So this is another thing I want to explain to you. And also, as I mentioned, we may want to come up with some metrics reflecting the status of pods having been moved, or not moved, by cluster events, so that we know how it behaves and how it performs compared to the previous implementation; these are the observability metrics. So these are the things I want to include in the next release; hope we can do that. And also this kind of thing has been…
A
So one item I want to include is that, for now, in the framework handle we just expose the clientset to the in-tree and out-of-tree plugins, but in some cases we need the kubeconfig to build a typed client, instead of the core clientset, to manipulate non-core API objects; we may also want to build some external client to operate on a particular CRD. So one thing I want to leverage is that we may come up with a client builder.
A
Oh, it's not shared. So basically I want to come up with some client builder in the interface, so that out-of-tree plugins can use that builder to build their external client to operate with the CRDs. Right now the workaround is very ugly: they have to provide the kubeconfig path again as a plugin argument if they want to use a CRD, so that's very ugly and inconvenient.
B
Yeah, I like having the subproject requirements here. I think there's a couple of things from the descheduler that we could add in. I think one thing would be, you know, we're almost out of time, but the event-based descheduling, similar to what you guys were working on with the unschedulable queue being triggered on specific events, is one thing that I'd like to do more with in the descheduler.
B
I think Sean's on the call; there are probably a couple of different features that we could be targeting for the release, but we can just add those to the spreadsheet instead of keeping everyone on for more time to talk about this.