From YouTube: Kubernetes SIG Scheduling Weekly Meeting for 20230518
Description
Kubernetes SIG Scheduling Weekly Meeting 2023-05-18T16:58:56Z
A
All right, hi everyone. So this is our SIG Scheduling meeting for May 18th. As you all know, this meeting is recorded and will be uploaded to YouTube, so please adhere to the Kubernetes code of conduct during the meeting. We have three items on the agenda for today.
B
Yes, so, first of all, thank you to Kante and Kensei, who worked on this document. Basically, they have collected a set of guidelines for contributing to this SIG. Of course, this builds on top of the guidelines for the Kubernetes project, but we have a few details here about how we are operating, or how we have been operating for the past few years.
B
And yeah, things like squashing the commits when ready to merge, but not before; first add new commits while you're in review, things like that. So I encourage everybody that wants to contribute to SIG Scheduling to read through these documents. We might be adding an extra document a little more focused on common practices during code reviews.
B
So you can expect that in the next few weeks. And that's it, that's all I wanted to say about this. Thank you.
A
Thanks, Aldo. And we have...

C
Okay, let me share my screen now. Let me know if you can see the document.
C
Okay, so about automating the SIG Scheduling dashboard: I reached out to the SIG Scheduling leads and asked if there is any way that I can help. One of the few tasks was that we have the SIG Scheduling board, but we don't have any way to automate it, so it's all a manual effort for now. So I went ahead and...
C
I tried to find what other SIGs are doing, and basically SIG Docs already has a solution for how to automate project boards without having GitHub Actions in the kubernetes organization, because that's not permitted yet. We actually had this discussion with the GitHub admin group, but for security reasons they prefer not to have GitHub Actions enabled for the kubernetes organization for now. So because of this, SIG Docs went ahead and created a separate repo in kubernetes-sigs, and wrote a Go program to do the exact same thing that a GitHub Action can do: get the list of all the issues with a specific label and update the project dashboard.
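For anyone curious, the general shape of such a sync program is roughly the following. This is a minimal sketch assuming the google/go-github client library; it is not the actual SIG Docs tool, and the owner, repo, and label values are illustrative placeholders.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/google/go-github/v53/github"
	"golang.org/x/oauth2"
)

func main() {
	ctx := context.Background()

	// Authenticate with a token, e.g. one handed to a scheduled workflow.
	ts := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: os.Getenv("GITHUB_TOKEN")})
	client := github.NewClient(oauth2.NewClient(ctx, ts))

	// Fetch open issues carrying the SIG's label.
	opts := &github.IssueListByRepoOptions{
		Labels: []string{"sig/scheduling"},
		State:  "open",
	}
	issues, _, err := client.Issues.ListByRepo(ctx, "kubernetes", "kubernetes", opts)
	if err != nil {
		log.Fatal(err)
	}
	for _, issue := range issues {
		// A real tool would add each issue to the project board here;
		// GitHub Projects (v2) updates go through the GraphQL API.
		fmt.Printf("#%d %s\n", issue.GetNumber(), issue.GetTitle())
	}
}
```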
C
So basically I did the same: I just forked the repo, and instead of having this GitHub workflow working for SIG Docs specifically, I had it working for SIG Scheduling. I already tested it, and it's working, as you can see here. This GitHub action can be run every couple of hours, and once it runs, everything related to SIG Scheduling, with the sig/scheduling label, gets added to triage, and we will just need to go ahead and find whatever we want to add to our backlog. And of course, I'm trying to add more automation: for example, if an issue gets triage/accepted, we should move it to In Progress, or something like that. That's still not done yet.
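Extending the program sketched above, that follow-up rule could look like this; the column names are illustrative placeholders for the real board's columns, and github.Issue is the go-github type already used above.

```go
package main

import "github.com/google/go-github/v53/github"

// targetColumn sketches the follow-up rule described here: an issue carrying
// the triage/accepted label belongs in "In Progress"; anything else newly
// labeled for the SIG lands in "Triage".
func targetColumn(issue *github.Issue) string {
	for _, l := range issue.Labels {
		if l.GetName() == "triage/accepted" {
			return "In Progress"
		}
	}
	return "Triage"
}
```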
C
But at least we have this dashboard automated somehow. I'm working with the kubernetes org now to either add this repo as a new repo, so we can maintain it in SIG Scheduling, or, and I had another proposal there, thank you Aldo for that, to cooperate with SIG Docs and have just one repo, with a couple of different GitHub actions under the tooling repo. So instead of having another repo to maintain, we would have one across all SIGs. But we still have this discussion open; we haven't finalized it yet. But again, I went ahead and created some views just to help people here. So the enhancements will have a specific view, so you can go ahead and triage them. Once an enhancement is triaged, it will show in another section, just to be more focused on the enhancements for this specific milestone. And this is the milestone view so far. And this is the issues view, and this is the PRs view.
A
Is this released now? Like, is it functioning, or is this a prototype?
C
It's functioning from my fork, but I wanted to make it available for the SIG Scheduling leads, so they can run this GitHub action whenever they want.
C
But this is my fork right now, because we don't have a final home for this yet. Again, this is a cron job; it will run every day at 6 a.m. PST, and this dashboard will get updated. So for now, yes, this is working, but from my fork.
D
Thanks. I think what's being proposed is moving this fork to a subproject, just like SIG Docs did, before we reach an agreement to have a unified repo to manage all the SIGs, with a general or per-SIG GitHub action but in the same repo. So before we reach that, maybe we can have a subproject to manage SIG Scheduling specifically. I think there is precedent for that as well.
C
Yeah, for now it is working, but from my fork. What I'm trying to get done now is moving this work to be either a separate repo for SIG Scheduling, or just another GitHub action in a unified repo for all the SIGs that want to do the same.
C
Not yet, but I can reach out. I know the main contributor to the SIG Docs tool, so I can reach out to him and see how we can cooperate, and just rename this repo, add our GitHub action there, and we would be done. But I'm waiting for people to contribute to this issue, in case there are any other opinions.
C
Yeah, this is what I just mentioned. This is what I have already; this is what I proposed. So instead of having another repo to take care of... But again, I will wait, you know, for today at least, and then I will reach out and ask if they are okay with renaming the repo, so we can contribute our GitHub action back directly instead of having a separate one.
B
Yeah, you might also want to reach out to Contributor Experience, because if all SIGs are supposed to be using this same repo, then SIG Contributor Experience should have ownership of the repo rather than SIG Docs.
A
Can we put a link to that dashboard in our docs, in our, like, you know, community page?
C
Yeah, I'm working on it right now. Oh, you wanted me to add it to GitHub? Okay, sure.
C
I will add it to the agenda, the SIG Scheduling agenda, above, you know, the intro paragraph, and I will work on adding it to the GitHub pages.
A
All right. Aldo, do you want to bring up the requeue extension point?
B
Yes, mostly as an FYI, or for anybody interested.
B
One second, okay. So, no, actually the document was not updated, so everything is in code at the moment. But, just to give the gist to everybody:
B
So, for example, if a pod failed because of pod anti-affinity, then a pod deletion could help the pod schedule. That's one kind of event that could get the pod unblocked. And then we have a few hard-coded...
B
...a few hard-coded checks that filter these kinds of events to decide whether a pod should go back into the scheduling queue. This proposal is doing two things. One is optimizing, or adding more control over, these checks through a requeue extension point. So, for this extension point, let me just show a little bit of the code.
B
So here is the extension point. It accepts the pod that is being considered for retry. It accepts an event, which describes what happened; for example, a pod changed, a pod was created, or a ConfigMap changed; there are a few static events that could happen. And then we have the old object of the event and the new object. So if it's a pod update, this old object will be a pod.
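In rough Go terms, the shape being described looks something like the following. This is a hypothetical sketch of the proposal as summarized in the meeting, not the merged API; the interface and method names are illustrative.

```go
package sketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/kubernetes/pkg/scheduler/framework"
)

// RequeuePlugin is a hypothetical rendering of the requeue extension point
// under discussion; the name and signature are illustrative only.
type RequeuePlugin interface {
	framework.Plugin
	// IsSchedulableAfterEvent decides whether the given cluster event could
	// make the pending pod schedulable again. For an update event, oldObj and
	// newObj carry the before/after versions of the changed object.
	IsSchedulableAfterEvent(pod *v1.Pod, event framework.ClusterEvent, oldObj, newObj interface{}) bool
}
```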
B
The new object would be the new version of the pod, and with that, the plugin can decide whether this event could potentially make the pod schedulable. So, going back to affinity: if there was a pod affinity failure, then whenever a new pod appears that satisfies this affinity, the original pod will be requeued.
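Continuing the sketch above, an inter-pod-affinity style plugin might implement that hook roughly as follows. This is again hypothetical; the matching helper is deliberately simplified (it ignores topology keys and namespace selectors) and only illustrates the idea.

```go
package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/kubernetes/pkg/scheduler/framework"
)

// InterPodAffinitySketch is a stand-in plugin type for illustration.
type InterPodAffinitySketch struct{}

// IsSchedulableAfterEvent reports whether a cluster event could make the
// pending pod schedulable: here, a newly created or updated pod that
// satisfies the pending pod's affinity terms is a reason to requeue it.
func (pl *InterPodAffinitySketch) IsSchedulableAfterEvent(pod *v1.Pod, event framework.ClusterEvent, oldObj, newObj interface{}) bool {
	newPod, ok := newObj.(*v1.Pod)
	if !ok {
		// Not a pod event, so it cannot resolve a pod-affinity failure.
		return false
	}
	return podMatchesAffinityTerms(pod, newPod)
}

// podMatchesAffinityTerms is a simplified stand-in for the real matching:
// it only checks label selectors of required pod-affinity terms.
func podMatchesAffinityTerms(pending, candidate *v1.Pod) bool {
	aff := pending.Spec.Affinity
	if aff == nil || aff.PodAffinity == nil {
		return false
	}
	for _, term := range aff.PodAffinity.RequiredDuringSchedulingIgnoredDuringExecution {
		sel, err := metav1.LabelSelectorAsSelector(term.LabelSelector)
		if err != nil {
			continue
		}
		if sel.Matches(labels.Set(candidate.Labels)) {
			return true
		}
	}
	return false
}
```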
So that's the gist of the solution, and this can be applied to existing plugins. We have many plugins that would benefit from this, and the effect is that fewer requeues means fewer retries, so better throughput and better latency for every pod.
B
So that's one effect, and the other effect is that now any custom scheduling plugin, or out-of-tree scheduling plugin, can implement these mechanics on their own. So that's the idea.
B
Now, the contention here is that there are two proposals that kind of do the same thing. One is this requeue extension proposed by Kensei, and there is another proposal, which is actually explained currently in two PRs. I only had the chance to review the first one, but these two PRs, here they are, are called "filter callbacks". I didn't have a chance to review this one yet, but at this point in time the two proposals are very similar. So yeah, I just wanted to bring it up to the attention of anybody interested in the topic, to review and comment.
B
I don't know if Wei or Abdullah, you had any chance to look at them? Yeah.
D
So, if you haven't looked into the two solutions, the tl;dr may give you a very high-level feeling for how the two solutions differ. But after that, Kensei made some evolutionary changes, especially to extract from the original proposal the part that was mixed into the PreEnqueue extension point and move it to a separate requeue extension, which I think makes the semantics more clear. So yeah, I also need to catch up with his latest design, the requeue one.
D
So were there any benchmark results, or any strong opinions from either Patrick or Kensei, on the comparison of these two solutions?
B
I think Patrick is okay with the requeue proposal. I'm not sure if there was a chance to run benchmarks.
B
Personally, I had a better time reading through the requeue implementation. So, based on that subjective feeling, I'm inclined to accept the requeue proposal. Yeah.
B
Another thing I wanted to discuss, or highlight, is the fact that this proposal is exposing the requeue as an extension point that is visible in the KubeSchedulerConfiguration, and I don't think that's necessary. I think this could be a hidden extension point, like an implementation detail of the other plugins.
D
Can it be merged with the, what's that, yeah, the PreEnqueue extension or something, so that there's one interface, an implicit contract we have?
B
It's a separate extension point, but at the same time it's not independent, right? You wouldn't have a PreEnqueue... sorry, you wouldn't have a plugin that only implements requeue.

B
Okay.
A
So I looked at the original proposal, and I had the same pushback: it really confuses things with the PreEnqueue semantics, especially with the return types, which I didn't like at all; it was quite confusing. I like the new direction of having a requeue. I do agree that ideally we don't want this to be a new plugin, I mean, a new explicit extension point, and I do see it as, like, an extension to Filter. That should be fine, because that's where we would decide whether it makes sense to re-enqueue or not, like the schedulability of the pod. I wonder, though, because we would be implementing this for all filters anyway, right?
D
I think so, almost, yeah. That's the same case for the enqueue extension, which almost all the built-in plugins implement, the events to register. So maybe we can use this chance to combine them together, so one contract, at least one interface, maybe with different functions, but one interface would be much more desirable.
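For context, the existing contract being referred to is the framework's EnqueueExtensions interface: a plugin declares, via EventsToRegister, which cluster events may make pods it rejected schedulable again. Continuing the sketch type from above, that looks roughly like this; the event list here is illustrative.

```go
package sketch

import "k8s.io/kubernetes/pkg/scheduler/framework"

// EventsToRegister follows the framework's existing EnqueueExtensions
// contract: the returned cluster events are the ones that may make a pod
// rejected by this plugin schedulable again.
func (pl *InterPodAffinitySketch) EventsToRegister() []framework.ClusterEvent {
	return []framework.ClusterEvent{
		{Resource: framework.Pod, ActionType: framework.All},
		{Resource: framework.Node, ActionType: framework.Add | framework.UpdateNodeLabel},
	}
}
```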
A
Yeah, I know that there are subtle differences between this and the event handling that we have right now, and that's my concern, my other concern here.
B
Sorry, I missed that. Which two things?
B
Yes, I agree. They should be put back somewhere. In the PR, that's what it has to do: move all the pre-checks into...
A
The way it did it, right? Like the whole event handling thing, right?
A
But yeah, is this being proposed for this cycle, or...?
B
Ideally, yes, at least some form of it, some reduced scope. And that's another reason why we might not want an API, so that we can iterate more internally. If later we see the need for an API, we can expose it, but we can start without an API.
B
My comment here was that it should probably be part of the framework instead of being implemented here. But yeah, I guess what I'm hearing is that we are leaning towards accepting these proposals, or, like, one of the two, but they need reviews.
B
I think that's it from my side. I don't have any other questions.
D
So, the reason is: when I looked into the plugin called VolumeZone, its source code, I came across these lines. Basically, this plugin's Filter works like this: if a pod's associated PVC has been bound to a PV, and that PV carries some topology labels, like kubernetes.io/hostname or the zone label, etc., then we check whether the node also has those required labels and whether their values match. So that is the intention of this plugin. But when I look at this line, it means we only check whether the node has any topology label at all, like topology.kubernetes.io/zone, etc. So in some cases, if a node is not properly labeled, the filter will return success.
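The fast path being questioned sits at the top of the VolumeZone Filter and looks roughly like this; this is a paraphrased sketch, not a verbatim quote of the source.

```go
package sketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/sets"
	"k8s.io/kubernetes/pkg/scheduler/framework"
)

// volumeZoneLabels mirrors the topology label keys the plugin recognizes.
var volumeZoneLabels = sets.NewString(
	v1.LabelFailureDomainBetaZone,
	v1.LabelFailureDomainBetaRegion,
	v1.LabelTopologyZone,
	v1.LabelTopologyRegion,
)

// fastPathSketch paraphrases the start of the VolumeZone Filter under
// discussion: a node with no recognized topology labels passes immediately,
// which is the behavior being questioned for improperly labeled nodes.
func fastPathSketch(node *v1.Node) *framework.Status {
	nodeConstraints := make(map[string]string)
	for k, v := range node.ObjectMeta.Labels {
		if !volumeZoneLabels.Has(k) {
			continue
		}
		nodeConstraints[k] = v
	}
	if len(nodeConstraints) == 0 {
		// No topology labels on the node: the pod is allowed to schedule
		// here without comparing the bound PV's topology at all.
		return nil
	}
	// ... the real Filter goes on to compare each bound PV's topology
	// labels against nodeConstraints and rejects the node on mismatch.
	return nil
}
```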
So I'm wondering if this is a place we could improve, because when the filter is hit, the prerequisite is that the pod is really requesting a PVC and the PVC has a bound PV; otherwise it would have returned earlier, here, or at this statement here.
D
So I'm not sure if we should change this, so that if a node is not carrying the required PV topology labels, we would return that the pod cannot be scheduled to this node. Because, you know, in a production environment some nodes somehow are not labeled properly, and then such a node is also considered as a scheduling candidate, and somehow it wins out in the scoring phase, and then at runtime, well, that pod cannot run. So that's the whole background. I'm tagging Michelle and Jetson as well, to see whether they see it as an issue, for your information.