From YouTube: Kubernetes SIG Scheduling Weekly Meeting for 20220728
A
Code of conduct and standards apply in these meetings. We've got a few items on the agenda. First, we've got Aldo with an announcement on encouraging new reviewers and approvers.

B
Yeah, just wanted to give a quick announcement.

Some time ago we asked for people to do more reviews, so they could eventually get reviewer status, and that happened over the last year. So we reached eight approvers, sorry, eight reviewers, and at that point we decided that we're going to separate approvers from reviewers, so approvers are no longer reviewers.

This means that in general we want KEPs to first be reviewed by reviewers before reaching out to approvers, and that should help get a certain level of quality before reaching out to approvers, who are more limited. But that said, we still would like to have more reviewers long term.

So if any of you are interested in becoming reviewers, you can continue reviewing PRs, and if you would like to become a reviewer, let us know so we can tag you on certain PRs for you to review first, and eventually you would qualify for the status.

D
Yeah, so this is a project I cooperate on with the DaoCloud company in China, and the author is called Shiming. Basically it's a fake kubelet manager that is similar to Kubemark but implemented in a different way. So basically you just have one controller in which you watch all the API objects, like nodes and pods, and maintain their liveness to let them behave like real nodes and pods. On the other hand, Kubemark spins up a hollow node to represent each fake kubelet.

D
So in terms of performance, this tool does much better, because it's O(1) in terms of complexity, and it will be pretty useful for testing the behavior and scalability of the scheduler, like if you want to introduce a new scheduler plugin or change some scheduling plugin weights. So basically you just have one bare-bones control plane and you can spin up as many thousands of nodes as you want, and it works well on my local laptop.

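To make the single-controller idea concrete: rather than running a hollow kubelet per node as Kubemark does, one process can keep every fake Node object reporting Ready. Below is a minimal client-go sketch of that idea; the `type=fake` label, the update interval, and the kubeconfig location are illustrative assumptions, not KWOK's actual implementation.

```go
// Minimal sketch: one controller keeps every fake node's Ready condition
// fresh, so the API server and scheduler treat them as healthy nodes.
// Assumptions: fake nodes carry the label "type=fake"; kubeconfig is at
// the default location. Illustration only, not KWOK's real code.
package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	for {
		nodes, err := client.CoreV1().Nodes().List(context.TODO(),
			metav1.ListOptions{LabelSelector: "type=fake"})
		if err != nil {
			log.Printf("list nodes: %v", err)
		} else {
			for i := range nodes.Items {
				node := &nodes.Items[i]
				setReady(node)
				// One status update per fake node keeps its heartbeat
				// alive; no kubelet process is needed per node.
				if _, err := client.CoreV1().Nodes().UpdateStatus(
					context.TODO(), node, metav1.UpdateOptions{}); err != nil {
					log.Printf("update %s: %v", node.Name, err)
				}
			}
		}
		time.Sleep(10 * time.Second)
	}
}

// setReady stamps a fresh Ready=True condition on the node status.
func setReady(node *corev1.Node) {
	now := metav1.Now()
	cond := corev1.NodeCondition{
		Type:               corev1.NodeReady,
		Status:             corev1.ConditionTrue,
		Reason:             "FakeNodeReady",
		Message:            "maintained by a fake-node controller",
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
	}
	for i, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			node.Status.Conditions[i] = cond
			return
		}
	}
	node.Status.Conditions = append(node.Status.Conditions, cond)
}
```
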
D
So you don't need to spend money to spin up real nodes. Finally, the project was created as a Kubernetes sub-project. We had the naming discussion back and forth for a while, and finally we decided on KWOK, the name Kubernetes WithOut Kubelet. So that's it, and yes.

D
Stay tuned, there will be a lot of interesting features contributed to this repo, like the support for CA.

So basically you have a bare-bones control plane, right, and you start with zero nodes, and once unschedulable pods come in, and we are going to provide a CA provider for KWOK, then the CA can spin up nodes as well. So you can use it to simulate a lot of scaling behavior, like giving it a bunch of workloads to check how many nodes you need to accommodate the incoming workloads, and on the other hand, to just test the functionality of the integration with CA, yeah.

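As a toy version of the "how many nodes do I need for this workload" question mentioned above, the sketch below greedily first-fit packs a batch of pod requests onto identically sized nodes and counts how many nodes get opened. The pod and node sizes are made up; a real simulation would replay the actual scheduler (for example against KWOK fake nodes) rather than this simplification.

```go
// Toy first-fit estimate of how many identically sized nodes are needed
// to fit a batch of pod requests. Purely illustrative.
package main

import "fmt"

type resources struct {
	milliCPU int64
	memoryMi int64
}

// nodesNeeded bin-packs pods onto nodes of capacity nodeCap using first-fit
// and returns how many nodes were opened.
func nodesNeeded(pods []resources, nodeCap resources) int {
	var free []resources // remaining capacity per opened node
	for _, p := range pods {
		placed := false
		for i := range free {
			if free[i].milliCPU >= p.milliCPU && free[i].memoryMi >= p.memoryMi {
				free[i].milliCPU -= p.milliCPU
				free[i].memoryMi -= p.memoryMi
				placed = true
				break
			}
		}
		if !placed {
			free = append(free, resources{
				milliCPU: nodeCap.milliCPU - p.milliCPU,
				memoryMi: nodeCap.memoryMi - p.memoryMi,
			})
		}
	}
	return len(free)
}

func main() {
	// Hypothetical workload: 1000 pods, each requesting 500m CPU / 512Mi.
	pods := make([]resources, 1000)
	for i := range pods {
		pods[i] = resources{milliCPU: 500, memoryMi: 512}
	}
	// Hypothetical node shape: 16 cores / 32Gi.
	node := resources{milliCPU: 16000, memoryMi: 32 * 1024}
	fmt.Printf("estimated nodes needed: %d\n", nodesNeeded(pods, node))
}
```
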
A
So you will have something integrated with CA that is basically making... yeah, yeah.

So where do they get spun up, these nodes? And it's...

Yeah, it depends on what you're trying to test, though, right? There are many cases where you actually want the kubelet running; otherwise it's not going to be very useful, yeah, but...

D
This is Shiming's; I... yeah. So basically, okay, we can bring up another discussion on the reviewers for other projects. So for now I think I do need some help from... okay, we lost Abdullah's screen.

D
I know, and basically I would really like to get some help from other experienced Kubernetes contributors, so that I can shift some load off of me. So, yeah, I apologize for that; this PR has been there for a while, and I noticed Jose and Chen have raised, I think, a question in the KEP, yeah. I will prioritize my time to review that, but I do appreciate any help if anyone is interested in jumping in to take another one to review.

C
Yeah, I'm also wondering if Mike and Young can help here, if you have some time.

E
It's not for this 1.25, is it, that network-aware framework?

D
It's at the scheduler plugins level; it's not in the kubernetes/kubernetes repo.

E
So, the update on this from the scheduling side: nothing has changed, except we've made some progress. Wang Chen, who's here, has been working on the e2e tests, and I think we have one of the tests up and running. The other one I haven't had a chance to look at very closely yet, but it was failing, and I believe we cannot help that one.

E
That's the one where we expect that, when there's a pending pod waiting because there's not enough node capacity and we resize an existing running pod down, the pending pod should get picked up by the scheduler and then get run. But we're looking at the end-to-end status, and that status is going to be reported by containerd; that's a change that needs to happen in containerd.

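For reference, the scenario just described could look roughly like the following test-style flow: patch a running pod's resources down, then wait for the previously pending pod to be bound. The namespace, pod names, and patch shape are hypothetical, and the in-place resize patch only works with the InPlacePodVerticalScaling feature enabled on a runtime that supports it.

```go
// Sketch of the e2e scenario: a Pending pod is blocked on CPU, a running
// pod is resized down in place, and we wait for the scheduler to place the
// Pending pod. Names ("busy-pod", "pending-pod", namespace "resize-test")
// are illustrative assumptions only.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ns := "resize-test"
	ctx := context.TODO()

	// Shrink the running pod's CPU request/limit so capacity frees up.
	patch := []byte(`{"spec":{"containers":[{"name":"app",` +
		`"resources":{"requests":{"cpu":"500m"},"limits":{"cpu":"500m"}}}]}}`)
	if _, err := client.CoreV1().Pods(ns).Patch(ctx, "busy-pod",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		log.Fatalf("resize busy-pod: %v", err)
	}

	// Wait for the previously Pending pod to be bound to a node.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods(ns).Get(ctx, "pending-pod", metav1.GetOptions{})
		if err == nil && pod.Spec.NodeName != "" {
			fmt.Printf("pending-pod scheduled to %s\n", pod.Spec.NodeName)
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("pending-pod was not scheduled after the resize")
}
```
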
E
That's my theory on why that's failing. I really haven't had a chance to... we found this issue, I think Wang Chen found this issue two days ago, and I've been busy with a few other things and have yet to get to this, but based on some of the code I saw, this is what I feel is the issue. So we can merge it with the test commented out. I don't like that, but probably we just merge the one test that is working.

E
That, I think, is the recommendation, and this is not really due for next Tuesday, the code freeze; this is the e2e test, and I think there's one more week of time for that. So we have a little bit of a window to review that. The main thing is, okay, we have this code freeze date coming up, and I think Tim Hockin and Derek are on vacation this week, so I'm waiting for the LGTM from them.

I was just wondering if one of you approvers from the scheduling side is going to be available to, you know... I can quickly ping on Slack, and it's the existing code, nothing new. Would that work? Alternately, I could get the scheduling changes out into a separate PR. I wish GitHub provided us something like linked PRs, where you can have multiple PRs, but...

A
Yeah, I mean, that's okay. If it's someone like Tim or Derek, you don't need our LGTM if they are approving, right? It's as good as an LGTM, because they have a higher level of approval.

You wouldn't need one of us to do anything. Okay, agreed that this is fine as far as the LGTM and the logic, but I'm concerned about the end-to-end test. I'm wondering, what was the discussion before? Was it that we will have both of them?

E
I think the previous agreement we had was that we can merge the current code. That was a few weeks ago, and I was hoping that it would get merged early, like a few weeks back, and then we would work on the end-to-end tests in the interim. The end-to-end tests are to target different aspects of scheduling, but as we developed them, I think there is a problem where we cannot really fully enable one of them until, I wouldn't say as late as beta, but maybe by beta.

Of course it should be enabled by then, but I would say we merge this, and then give some amount of time for containerd to come up and merge their PR, and then we should be able to enable the test that's failing right now. It's failing because what it does is it looks at the status, and the status is not updated. I've done that for other tests as well: where a test is dependent on status, I have a flag in the test that says...

A
Who's supporting the feature, like on the runtime side? Maybe it's a simple question, but if containerd doesn't have support, then why would this feature be useful at all?

E
The containerd support has to come in after the CRI changes are merged. They cannot merge in their code until the CRI has these new fields. What containerd needs to do is populate the resources field in the ContainerStatus structure in the CRI, the response to the container status API, right, and that is a new field that's going in with this main PR.

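To illustrate what "populate the resources field in ContainerStatus" would mean for a runtime, here is a hedged sketch using the ContainerResources/LinuxContainerResources shapes from the CRI API as I understand them; treat the exact field names and placement as assumptions rather than the final API in the PR being discussed.

```go
// Hedged sketch: alongside the usual state and timestamps, the runtime
// reports the CPU/memory settings the container is actually running with,
// so the kubelet (and tests) can observe a resize taking effect.
// Field names are assumptions based on the cri-api types, not the KEP text.
package main

import (
	"fmt"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	status := &runtimeapi.ContainerStatus{
		Id:    "example-container-id",
		State: runtimeapi.ContainerState_CONTAINER_RUNNING,
		// New in the in-place resize work: the applied resource configuration.
		Resources: &runtimeapi.ContainerResources{
			Linux: &runtimeapi.LinuxContainerResources{
				CpuPeriod:          100000, // 100ms scheduling period
				CpuQuota:           50000,  // 500m CPU after the resize
				CpuShares:          512,
				MemoryLimitInBytes: 512 * 1024 * 1024,
			},
		},
	}
	fmt.Printf("reported CPU quota: %d, memory limit: %d bytes\n",
		status.Resources.Linux.CpuQuota,
		status.Resources.Linux.MemoryLimitInBytes)
}
```
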
E
Yeah, chicken-and-egg problem. So I initially did have this separated out into two separate KEPs for this very reason, but at some point during the development I felt, you know, okay, all of this can go in as one. I think that was kind of my flawed thinking. Although others disagree, I felt that we should have brought it... there's...

I'll take a look at that. I haven't had a chance to look at it closely; hopefully this weekend I'll take a look at it and see if there are other ways, maybe looking at cgroups or something, to see that the pod has been running.

I think the signal... I don't know if there is any signal from the scheduler which we can read to see that, you know, okay, the scheduler has picked it up. I haven't followed up with Wang Chen, but did the logs show anything, that the cache was being updated? Have you had a chance to verify that function?

E
So I think we're going to manually, of course, we're going to manually verify that. I believe the last time I looked at this was a couple of years ago, and any update that we make goes to the scheduler cache. I don't believe that logic would have changed, so we were going to verify that manually.

B
That is also useful, but I mean, yeah, we need some confidence without the kubelet as well. The other thing I wanted to ask is: is the behavior well defined when, you know, it will happen that certain customers will be running an older node that doesn't have the support, right? So is that already covered, and if so, is there any end-to-end test for this?

E
There's not going to be an end-to-end test for this. The way we are addressing this is by not going GA until N+2, particularly in the API supporting down-level versions of kube clients. We discussed this and we felt that the cleanest way to do this is to have GA happen after the support for down-level versions: for example, if we merge this in 1.25, then we won't be going GA until 1.28, and at 1.28 we don't support anything that's below 1.25 anyway.

So whatever components are there, they should be able to understand it.

E
Yeah, I think I got Peter from Red Hat, who I believe is a CRI-O owner; they're waiting for this PR to merge, and then they'll immediately work on providing support for it. Containerd is already on board, of course; we already have a draft PR ready for it, and the main reason we did that is that we use containerd in the CI. Currently all alpha tests use containerd, and this e2e test is gonna...

E
Once that support is in, and the latest containerd which has the support has been picked up, I would send a PR that removes this check. Of course, I need to see if other runtimes in the GKE cluster tests are affected, but yeah, once we hit all the checkboxes there, we would be removing this check that disables some tests for alpha. So yeah, this just means that we'll have a long alpha.

B
Okay, because, yeah, there could be a CRI out there that doesn't support it. You talked about the two most popular ones, CRI-O, yeah, and containerd, but there could be others, and yeah, so it makes sense to have at least an integration test that exercises the case when the kubelet doesn't support the feature, to make sure... yes.

D
The main challenge is how complicated it is to simulate the kubelet behavior in this scenario, right? I know maybe there's a lot of logic plumbed into the kubelet, and I'm not sure how complex it can be in terms of implementing the integration test, because it might be the most complicated case, involving the most components we have so far. Before, we did have some features that involve more than two components, like container-based eviction, or taint node by condition, which involves both the node lifecycle manager in the controller manager along with the scheduler.

E
I'll take a look at it this weekend and see if I can... I'll file an exception anyway to give us a bit more time for this. It looks like we still need Tim's and Derek's confirmation, which we haven't had. The code is pretty much mostly reviewed, quite a few times already, for the kubelet side, and Derek has to just do one more pass for the scheduler.

E
I think the blocker is this integration test. The e2e test that we have that's working, that Wang Chen has worked on, we'll bring that in in a separate PR, the one test that's working, and then complement whatever is missing. I think there were three test cases we were looking at.

So one thing: if the number two test case is failing, can you please go to number three and implement that? The number two test case I'll take a look at in parallel, to see if I can get that into the integration test, and maybe even add the other two as well if it's easy; I just don't know.

C
Definitely. I just noticed that the number two test we talked about before has been covered in the existing node test already.

Yeah, so we actually covered all the cases for the pending pods, the scheduler behaviors. The other one is not related to scheduler behaviors; it's node behavior, and the existing node tests already cover it.

E
Okay, so let's get this in. I think let's share the results with the SIG on Slack and then probably see if that's good enough, then.

E
I'll look at the integration test this weekend, mainly because of this down-level-in-beta question; I want to get a bead on that, to see what it really means. At least we should have coverage there; I agree with that point.

A
Yeah, I mean, we're going to get that question from the production readiness reviewer as well. When you get to beta, they'll ask you about the behavior when the feature is disabled, enabled, and disabled back, and so, if there are no tests around this, I don't think your beta graduation would be approved.

E
Yes, I think we already answered that question. The PRR reviewer, Elana Hashman, did that, and we already looked at this scenario and answered it, and this was essentially what we came up with, that we will do N+2. This solution came out as part of the discussion with them.

Okay, yeah, this PR is sprawling, and yeah, it is probably the most challenging PR I've dealt with, for sure, so it's...

E
Yeah, part of it was that we implemented it one way, one different version, and then Derek, he approved the earlier design, then later on he had a change of heart, so we had to reset a little bit on where the state of the resize lives. When the kubelet accepts, admits, a certain resize, you need to keep that state somewhere, and we're storing it in the API. He wasn't really sure about that after the code was done, and then they asked me, hey...

A
Oh, okay, we've got one last item on the agenda: the component config graduation to GA.

B
Yes, I don't know if Kante is here, but basically we are graduating the component config from v1beta3 to v1. We've wanted to do this for a long time already, and we did a lot of iterations on the component config.

We felt like this was ready, but the last question that came up is whether our defaults are good. This prompted me to double-check the weights we have on plugins, and yeah, basically the question is whether these weights are good enough for most users. The last time we changed them was in v1beta2... no, sorry, actually in v1beta3 we changed them for the last time, and the question is whether this is enough.

B
These plugins have double the weight, and in particular taints and tolerations has triple the weight, whereas the rest, like, you know, most-allocated or least-allocated, which are now a single plugin, have a default weight of one. So yeah, I just wanted to bring this question out there for people to comment if they have a strong feeling about the weights. I'm more inclined to actually let this soak for one more release.

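For context on what these weights do: each scoring plugin returns a normalized score per feasible node, and the node's final score is the weight-multiplied sum, so a weight-3 plugin counts three times as much as a weight-1 plugin. Below is a minimal sketch of that aggregation; the per-node scores are made up, and the weights simply mirror the triple/double/single relationship mentioned above.

```go
// Minimal sketch of how the scheduler combines plugin scores: each scoring
// plugin returns a normalized score (0-100) per node, and the node's final
// score is the sum of score * plugin weight. Scores below are illustrative.
package main

import "fmt"

type weightedPlugin struct {
	name   string
	weight int64
}

// finalScore sums weight * score over all plugins for one node.
func finalScore(plugins []weightedPlugin, scores map[string]int64) int64 {
	var total int64
	for _, p := range plugins {
		total += p.weight * scores[p.name]
	}
	return total
}

func main() {
	plugins := []weightedPlugin{
		{name: "TaintToleration", weight: 3},
		{name: "PodTopologySpread", weight: 2},
		{name: "NodeResourcesFit", weight: 1},
	}
	// Hypothetical normalized scores (0-100) for two candidate nodes.
	nodeA := map[string]int64{"TaintToleration": 100, "PodTopologySpread": 40, "NodeResourcesFit": 90}
	nodeB := map[string]int64{"TaintToleration": 100, "PodTopologySpread": 80, "NodeResourcesFit": 50}
	fmt.Println("node-a:", finalScore(plugins, nodeA)) // 3*100 + 2*40 + 1*90 = 470
	fmt.Println("node-b:", finalScore(plugins, nodeB)) // 3*100 + 2*80 + 1*50 = 510
}
```
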
D
Soft and hard affinity terms: they're using those, but in the older version the weight is one, and even at two it doesn't satisfy their workloads, so I had to use their workloads to simulate how much weight is good for them, and finally the weight got tuned to 10. So in this case we cannot say, okay, by default we want to use this weight to obtain a vanilla scheduling offering.

A
I mean, don't we think that anyone who's setting, you know, a preferred toleration or preferred affinity or preferred spread is basically telling us:

This is my number one priority for placement. So I would even think that maybe the weight is something that is not configurable, but you can think about it in a way that, when we score, if any of these constraints are set, we should actually just take them into account first and foremost. So not a ten: you put it at 100,000, I would say, right, and it's basically saying this is the most important thing that we should weight on. If this constraint is not being used at all, that weight is not going to make any difference, right?

A
So you got up to 10, but I think: just put it at, like, a hundred thousand and be done with it, right? I don't understand why we keep tweaking these things and trying to balance them with some difficult scoring strategies, when the users are explicitly telling us "I want this". So I don't know.

F
Sorry, I think that's kind of a big assumption, that just because they're setting these things it's going to be the number one thing, when there's other stuff, like resource allocation, that users are probably taking as just, you know, a given that they can't set, which is usually probably their number one. I think if it's the top thing that they care about, then they're probably setting a required constraint or something, right?

F
Not necessarily. I mean, you could set higher requests on your resources for the pod, but you're also probably just assuming that... I mean, especially if you have most-allocated, you're probably wanting it to be bin-packed first and foremost, and then maybe secondarily you have: I want bin-packing, but I also want to prefer to have these.

A
It feels to me that when you explicitly mention it, it's basically: this is what you want. Yeah, and tuning the weights down is not going to address the case that you mentioned.

F
I think that that's the intent here, right, but I just don't think that you can... you know, I kind of agree with Wei a bit more, that it's tough to make one broad assumption about how everyone is going to be operating.

F
Yeah, personally, I don't think that setting these weights to two or three really has as much of an effect as what we were maybe looking for with that, and I think that's kind of just what the config API is for: if you're really trying to prefer something, the setting is in there to do it. I think making the jump to totally preferring these scheduling constraints is kind of redundant with setting hard constraints.

D
So basically, the combination of the weights can, to some extent, impact the eventual scheduling if you give it a bunch of workloads, right? So I think another angle to look at the weights is that we can set up a test to say, okay, given a bunch of workloads, whether N nodes can accommodate all the requests. That is the targeted goal, like we just want to have 50 nodes satisfy the incoming workloads, but by changing the weights it might become impossible to accommodate the workloads anymore.

D
Yeah, this is the point I want to raise. Instead of tuning the weights... I mean, right now the way we look at the weights is, okay, we create an integration test or unit test to verify that it functions well, and yes, it does function well, but the eventual goal, from the user's angle, is whether, by combining all the priorities, you can or cannot fit a bunch of workloads using this combination, right?

D
So that is, I think, the point of having a default weight. We just say, okay, this is the bar and the criteria for why we set this kind of default weights: we want to have 50 nodes accommodate, like, 1000 workloads, the workloads are fixed, and version after version we just ensure this test can pass. But if you have other workloads, you can tune your weights yourself, version over version, so maybe...

A
My only concern here is one thing, which is pod topology spread. We're using it as the default; we have a default for it, right? And if you want to increase the weight for the plugin, I don't want it to be increased for the default constraint, because the default constraint is just, you know, not a constraint the user explicitly set, right? For that one I want it to be similar to the others, like, you know, the plugins based on resource allocation, etc.

B
So basically, what you're trying to say is that you want to be able to make every user-defined scheduling directive have a stronger weight, while the default topology spreading is not affected, or stays the same.

So that's somewhat possible; it's just not... you know, if I'm increasing the weight to two, it's not obvious what rule I should apply to preserve the same behavior.

B
But yeah, at the same time, it's kind of tricky; it's kind of confusing to have this extra weight just for the default spreading rule, because then, in your head, you have to multiply them as well. So it's also not obvious.

A
I don't know, like: if the rule is the default, then we should normalize the weight. I guess it could have some logic to do that.

B
But I think from this discussion we're getting to the conclusion that we cannot have a one-size-fits-all default, so we can just go to GA with the current defaults and leave it up to people to configure them differently if they want to.

F
If we decided to change the default pod topology spreading weight, could that be more linked to the feature itself, instead of being bound to the API versioning constraints, like you said, like...

B
We have two alternatives, right? Given that we are only one week away from the code freeze, we either go with these defaults, or we wait until 1.26 for the GA graduation and try to come up with some kind of test case.

Some scenario where we can prove that our weights are...

A
I don't think we can prove that they are good for most people. I think I would be okay if we went ahead and graduated this, as long as it doesn't block us from solving the pod topology spread default-constraints-versus-user-constraints case.

D
And I think we're missing this kind of test too, because the default pod topology spread is taking over the older selector spread priority function, right? We don't have a test that can ensure it works for the selector spread plugin and also works for the current default weight setting for default pod topology spread. So we ended up with user complaints like, okay, this worked before, but now it doesn't work. I saw one user complaining about that, but they did...

A
Okay, we've got five minutes. I guess we can maybe discuss this in the next one.