From YouTube: Kubernetes SIG Scheduling Weekly Meeting for 20230504
Kubernetes SIG Scheduling Weekly Meeting 2023-05-04T16:56:01Z
A: He wants to do some experiments using Wasm, to see whether we can provide another kind of pluggable extension system for the scheduler. His idea is to use scheduler plugins as a starting point, but the overall approach might be general to all Go projects that support some sort of plugin model.
A: So it's a long thread, but to be honest I'm not a Wasm expert. Right now I think the focus is: do we support this kind of sub-project being sponsored by SIG Scheduling? Or are there other opinions here?
B: The other alternative would be to do this in the scheduler-plugins repo, yeah.
A: That is another option, but the long-term vision is that this is a kind of general machinery, a general mechanism that could apply to other Go projects. With that being said, I think maybe a standalone project makes more sense. Or you could say that, for now, we create a feature branch, and once the work has matured we move it to a standalone sub-project. That can also be another option.
A: So what's your concern with supporting another sub-project? Just a feeling that we have way too many sub-projects right now? Yes.
B: That's one concern. My other concern is that we already struggle with communicating the level of supportability of the scheduler-plugins repo, and having yet another custom scheduler, because at the end of the day it is a custom scheduler, having yet another repo for another custom scheduler within SIG Scheduling adds to the confusion about what's recommended.
A: I don't read it as another repo for plugins. I think it's a repo for general plugin experiments. My point is that this project is for general experiments instead of focusing only on scheduler plugins; it just uses scheduler plugins as an example to try out the idea.
B: The thing is that we don't have enough visibility on the existing repos for extension projects, about what their maturity state is. Just adding one more, yeah, it complicates this communication.
D: Do we actually declare a level of supportability for sub-projects? I mean, we sponsor them; it's the sub-project maintainers.
D: You know, it's on them to decide, to declare what it is. It's not the SIG that's supporting it; the SIG is sponsoring it, I guess. That's different, right?
A: It's supposed to be a marketplace for people to contribute ideas, implementations, and designs, but it's not operated as an out-of-the-box product. Basically you can vendor them; you can just use them as they are. So I don't think, it's just too much for us to say what level of maturity we can support for each sub-project.
A: Yeah, I feel that, yes, under SIG Scheduling we are sponsoring quite a lot of sub-projects right now, but that is maybe only one of the concerns. Other than that, overall I'm more inclined to say okay: he has been a pretty experienced contributor in terms of scheduling, as well as the Wasm extension mechanism, so I pretty much trust he can host this project. So I'm basically plus one to give a green light to set up a sub-project, but yeah, we can just...
B: I guess then the question becomes: what does it mean for us to sponsor it as a project if we have zero visibility into what's happening in the project?
A: I don't see this clearly documented in an architecture chart or anywhere, so maybe, because of the time limit, we can just comment below these issues with our preference, and yeah, that's it.
A: Yeah, is anyone here in the meeting for this item?
C: Hello, hi, can you hear me? Yeah, yeah.
C: Yeah, so let me go over that. First of all, I'd like to introduce myself and a couple of my colleagues on the call. My name is Prashant, and I have Alessandro and Jeremy with me on the call. This is our first time on the call, so hopefully we follow the rules and all of that.
C: So, to give you some context: we are working on OpenShift, specifically on the multi-arch compute effort. We are working to have OpenShift support clusters with compute nodes of different architectures. We are seeing a lot of traction with cloud providers like AWS and Azure, who have now come out with arm64 nodes, and we are getting a lot of users who are interested in them. With that comes the question every user would ask: why is my, like...
C: They would have this cluster up and they would say: why is my workload not working on this node? What is going on? My workload is crashing. Then we would look at it and tell them: oh, your workload is an x86 workload, but it's running on this arm system; the pod has landed on the arm node, so you have to set the node affinity.
C: Can anything be done in Kubernetes to make it more aware, more intelligent, regarding the architecture? I know that this issue was raised before, and we did have a look at it, and we have actually been working for a few weeks now on some proposed design solutions that we could potentially have in the scheduler, or related to scheduling.
C: So we were wondering what the best way is. We want this to be kind of an introductory call where we present ourselves, and we ask what the best way is to go about doing this. We have some ideas, we have some proposals that we would like to bring forth to all of you, and see which ones make sense, which ones don't, and which path forward is best. How should we proceed on this?
C: So right now we don't have, as you know, the only way we are asking users to handle it is to set the affinities in the pod spec to get around this issue. But we have been floating around a couple of ideas. One is a scheduler plugin, an out-of-tree plugin, which could run in the filter phase, and it could...
C: It could have a pre-filter phase where the architecture is detected: the image is inspected and the architecture of the image is detected. Then in the filter phase the nodes are filtered based on the architecture, and the filtered list is sent on to the next step. So that was one potential solution that we were discussing: a multi-arch-aware scheduler plugin. There are other proposals that we are also looking into.
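The filter idea above can be sketched in plain Go. This is a minimal, illustrative sketch, not the actual prototype: the manifest type mirrors an OCI image index, `supportedArchs` and `filterNodes` are hypothetical helper names, and a real plugin would implement the scheduler framework's Filter interface against Node objects rather than a string map.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// manifestEntry mirrors one entry of an OCI image index ("fat manifest").
type manifestEntry struct {
	Platform struct {
		Architecture string `json:"architecture"`
		OS           string `json:"os"`
	} `json:"platform"`
}

type imageIndex struct {
	Manifests []manifestEntry `json:"manifests"`
}

// supportedArchs extracts the set of linux architectures an image index supports.
func supportedArchs(rawIndex []byte) (map[string]bool, error) {
	var idx imageIndex
	if err := json.Unmarshal(rawIndex, &idx); err != nil {
		return nil, err
	}
	archs := map[string]bool{}
	for _, m := range idx.Manifests {
		if m.Platform.OS == "linux" {
			archs[m.Platform.Architecture] = true
		}
	}
	return archs, nil
}

// filterNodes keeps only nodes whose kubernetes.io/arch label value is
// supported by the image; this is the core of the filter-phase idea.
func filterNodes(nodeArch map[string]string, archs map[string]bool) []string {
	var fit []string
	for node, arch := range nodeArch {
		if archs[arch] {
			fit = append(fit, node)
		}
	}
	return fit
}

func main() {
	raw := []byte(`{"manifests":[
		{"platform":{"architecture":"amd64","os":"linux"}},
		{"platform":{"architecture":"arm64","os":"linux"}}]}`)
	archs, _ := supportedArchs(raw)
	nodes := map[string]string{"node-a": "amd64", "node-b": "arm64", "node-c": "s390x"}
	fmt.Println(len(filterNodes(nodes, archs))) // prints 2: node-c is filtered out
}
```

The pre-filter step would fetch and parse the index once per pod; the filter step then reduces to the label comparison shown here.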
C: For example, we could have a scheduler extender, where the scheduler would query a controller. It would hand off the work to the controller, which would again inspect the image, filter the nodes, and give a subset of nodes back to the scheduler. And I know that there was some work done around pod scheduling readiness.
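For the extender variant, the scheduler POSTs the pod and candidate nodes to an HTTP endpoint and expects a filtered subset back. The sketch below uses simplified local stand-ins for the extender wire types (the real ones are ExtenderArgs/ExtenderFilterResult in k8s.io/kube-scheduler), and `nodeArchOf` is a hypothetical lookup standing in for reading node labels; it assumes the image architecture was already detected.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Simplified stand-ins for the scheduler extender's wire types.
type extenderArgs struct {
	Pod       struct{ ImageArch string } `json:"pod"` // assume arch already detected
	NodeNames []string                   `json:"nodeNames"`
}

type extenderFilterResult struct {
	NodeNames   []string          `json:"nodeNames"`
	FailedNodes map[string]string `json:"failedNodes"`
}

// nodeArchOf is a hypothetical lookup; a real controller would read the
// kubernetes.io/arch label from the Node objects.
var nodeArchOf = map[string]string{"worker-amd64": "amd64", "worker-arm64": "arm64"}

// filterByArch returns the nodes whose architecture matches the image,
// recording the rest as failed with a reason.
func filterByArch(args extenderArgs) extenderFilterResult {
	res := extenderFilterResult{FailedNodes: map[string]string{}}
	for _, n := range args.NodeNames {
		if nodeArchOf[n] == args.Pod.ImageArch {
			res.NodeNames = append(res.NodeNames, n)
		} else {
			res.FailedNodes[n] = "image does not support node architecture"
		}
	}
	return res
}

// filterHandler is the endpoint the scheduler's extender config would point at.
func filterHandler(w http.ResponseWriter, r *http.Request) {
	var args extenderArgs
	if err := json.NewDecoder(r.Body).Decode(&args); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	json.NewEncoder(w).Encode(filterByArch(args))
}

func main() {
	// A real extender would serve the handler, e.g.:
	//   http.HandleFunc("/filter", filterHandler); http.ListenAndServe(":8888", nil)
	args := extenderArgs{NodeNames: []string{"worker-amd64", "worker-arm64"}}
	args.Pod.ImageArch = "arm64"
	out, _ := json.Marshal(filterByArch(args))
	fmt.Println(string(out))
}
```

The trade-off raised later in the discussion applies here: an extender is an extra webhook call on the scheduling path, and the cluster autoscaler cannot simulate it.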
C: So we could also leverage that as a solution, where we set the scheduling gate for a pod and then have a controller come in, intercept, and do the work that we wanted to do, based on looking at the image, filtering the list of nodes, and then removing the scheduling gate. So these are some of the potential solutions that we have discussed, and we want to...
A: So it seems there are basically two camps. One is that you just use the currently existing scheduling primitives, like node affinity or pod scheduling readiness, to work around this issue. The other is smoother: you implement a plugin that does this for you. So basically two kinds of ways. I would say it depends on how you weigh these two solutions.
C: I mean, I'll let Alessandro talk to it in a minute, but we have invested some work in the scheduler plugin method, and we have, I think, a very, very basic skeleton prototype of the scheduler plugin, where it inspects the image and then filters the nodes based on the arch. We were gravitating towards using a scheduler plugin just because...
C: In our mind it becomes very simple; it just organically fits into the scheduler. We don't have to write a separate controller just for this purpose. All we need is already there in the scheduler, and the plugin pipeline already gives us all the information that we need and passes on all the relevant information. So it's just a matter of intercepting and inspecting the image to get the architecture. We were leaning towards that. I'd like to understand why you think it's not, well...
C: We are not saying it's trivial, but we are saying that it is, in our mind, the method that requires the least amount of effort, rather than implementing a separate controller to tie in with the scheduler, which would then need to be shipped, etc. And ideally we'd like something that's organically part of the core scheduler itself, given that multi-architecture is becoming more of a thing now. We have seen more adoption, with many clouds coming up with arm nodes, and this seems to be the future, where you will have nodes of different architectures or different types. That's what we are thinking.
B: Back to your question about how to proceed: you could reopen this issue or create a new one, and present your ideas in a document or in the issue itself. But you have to make sure that you answer all the questions already raised there, like how reliably you can positively get the correct architecture, and the authentication problems, the way it's highlighted right now.
B: All of these details need to be explained before we can say yes, this can be in the scheduler. And again, yes, you can solve this out of tree. You can create a webhook; that might work for you. But a feature that has to go in the scheduler has to work everywhere. We cannot add it; the scheduler doesn't have access to the internet today, right? And this would introduce a dependency on internet access.
A: And how can we know that it's, you know, a very lightweight cost? There's some...
E: Our idea, and the reason we are here, is mostly to understand whether it's possible. By the way, we have been working together, and our idea, the reason why we came to this meeting, is to understand whether it is possible to have this as an out-of-tree or in-tree plugin, whatever; obviously by solving all the issues that were discussed in the issue itself, of which the biggest blockers seem to be registry authentication and registry reachability, right?
E: For authentication and reachability, we would have to agree on having this component access the secrets in the cluster, right? Because the image pull secrets can be stored at different levels: namespace level, pod level, or even on the machines, as we will discuss. The scheduler should be able to mount or access those secrets, the ones at the cluster level, namespace level, or pod level.
E: The scheduler's role would need additional permissions at that point. Is this a possibility that we can consider for an out-of-tree plugin? Probably yes, because it could be up to the users to configure that. But what about an in-tree plugin? And another thing: can we consider this plugin disabled by default, and let users who enable it add some secrets to configure the node-level credentials?
E: And finally, are there any test cases that we can already access, for which we can consider the different scenarios where the scheduler should work? In the sense that one can also think of a cluster where some nodes are able to access the registry's network and other nodes cannot. Is this something that one should consider or not?
B: Having the node affinity somehow done pre-scheduling, with a webhook, or, I mean, it could even be an admission controller, by having that information written down in the pod spec, so that the autoscaler can also use the same information without having to deduce it again or query the registry again, things like that. So the more information you can put in the pod spec, the better.
E: Okay, and this is why we were wondering about the in-tree plugin, because we know that the autoscaler can simulate the scheduler and its in-tree plugins, or filtering plugins, but it cannot execute out-of-tree ones. So should the discussion instead go toward a controller that would be able to provide information for both sides, and eventually, maybe, I don't know, the autoscaler gets extended...
E: Maybe, unless, I don't know, just another thing: is this enough for the autoscaler, or would it need to be an in-tree plugin that the autoscaler can use to simulate the scaling of nodes with...
A: If you trigger the cluster autoscaler, and you rely on some internet access, you have to assume that the autoscaler has the same internet access to the registry to get the same kind of information. Otherwise, although it will reuse the plugin's logic, the network topology may be different.
B: Yes, or it will be a webhook, or it could be an admission controller in the API server. I think we are not admitting new ones, but that's something that can be discussed with API Machinery, whether we can include more admission controllers to do this kind of behavior.
B: Yes, I think the main point in the thread is that, well, it's not the scheduler's responsibility to know what the architecture of an image is, or, yeah, it's not really able to obtain that information. So yeah, please share your proposal, but it needs to answer all these questions that are already there.
E: No questions, no comments. Thanks for the feedback; good to meet you all.
B: Basically, the quick idea is that a pod could have access to the topology labels, so that it knows where it is. It knows which pods that belong to the same job are close, or in the same zone, the same topology domain. So that's the overall desire. Now wait, wait, wait.
B: Yeah, so that, yes, this is the desired outcome. Now, coming back to implementation or solutions: one potential solution is that at pod binding time, when we assign a node name to a pod, we can copy the topology labels, the well-known topology labels, onto the pod.
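The copy-at-binding idea is small enough to sketch directly. This is an illustrative sketch, not the actual proposal's code: the label keys are the real well-known topology labels, but `copyTopologyLabels` is a hypothetical helper, and the proposal would do this inside the binding path on real Node/Pod objects.

```go
package main

import "fmt"

// Well-known node labels whose values would be copied onto the pod at
// binding time, per the idea discussed above.
var topologyKeys = []string{
	"topology.kubernetes.io/region",
	"topology.kubernetes.io/zone",
	"kubernetes.io/hostname",
}

// copyTopologyLabels copies the well-known topology labels from the assigned
// node onto the pod; anything already set on the pod is left untouched.
func copyTopologyLabels(nodeLabels, podLabels map[string]string) {
	for _, k := range topologyKeys {
		if v, ok := nodeLabels[k]; ok {
			if _, exists := podLabels[k]; !exists {
				podLabels[k] = v
			}
		}
	}
}

func main() {
	node := map[string]string{
		"topology.kubernetes.io/zone":   "us-east-1a",
		"topology.kubernetes.io/region": "us-east-1",
		"kubernetes.io/hostname":        "node-1",
	}
	pod := map[string]string{}
	copyTopologyLabels(node, pod)
	fmt.Println(pod["topology.kubernetes.io/zone"]) // us-east-1a
}
```

Once the labels are on the pod, they can be surfaced into containers with the downward API, which is the next point in the discussion.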
B: And then the pods can use the downward API to access those labels. So yeah, that's one of the solutions proposed by Tim Hockin; it's in one of the last comments.
A: Subscribe to it, that's good, okay. The last thing is: we are doing some planning for the next release. So, Aldo and Abdullah, given we only have 10 minutes left, do you think we can maybe leave it to the next meeting, because there are still a lot of things to discuss, or do you just want to do a quick walkthrough of this?
B: Yeah, we can do a quick one, okay, okay.
A: We will see. So, for the first one I will give a very high-level introduction. The first one is: we already implement a similar directive in pod topology spread, which is meant for some scenarios like rolling upgrades, so that mechanism can be made generic to other scheduling directives.
A: This issue is for providing the same for pod affinity and anti-affinity. It didn't catch the last train of 1.27, so we're going to target 1.28, and I'm going to review and approve it. As for this one: like I said, pod topology spread has two fields that can get more benefits during a rolling upgrade.
A: So those two are already beta and we don't have plans to promote them to GA yet. And for pod scheduling readiness, since it went beta in the last release, we don't consider getting it to GA too soon. So those three features will stay in beta for 1.28. And this one is dynamic resource allocation, so it cuts across multiple SIGs, and scheduling is definitely one of the more important areas. Patrick is owning it, and I...
A: ...don't know his plan yet, maybe not to promote it in 1.28, but he has done a lot of good things, many things, in the scheduling improvements as well as the scheduling performance test framework. And, oh, by the way, this feature overall is about generalizing the PV/PVC pattern to any generic resource, so it's called dynamic resource allocation. After that is another one for preemption optimization. I think last release we didn't make it because we didn't reach consensus on some API design. Is that correct, Aldo?
A: Okay, so yeah, we are trying to get that resolved in this release. So this is for the PDB, whether it's a required guarantee or best effort. And the last one is a new one, not particularly scheduling-specific, but some of the code I think we can own. So the history is that, before, how you add taints to a node and how you honor the effect of the taints...
A: ...were just bundled in one component called the node lifecycle controller. Here I just mean the NoExecute taint. But there are some scenarios where users want to separate them, to provide more customization, etc. So this KEP is basically for that, and I'll be volunteering to review it. So basically this is still a raw list of the KEPs we're going to have for 1.28.
A: But if you have some other new ones, or I missed something here, just raise it in the Slack channel and we can add it to our radar. Other than that, yeah, any questions on anything else? Just feel free to ask.
D: Regarding matchLabelKeys, I was talking to Aldo yesterday.
D: Yeah, I understand that, but if I want to do anti-affinity so that I still keep my group: so I have a job, I want to implement exclusivity, I want to repel all other jobs.
D: I want to be the only one in that topology domain, right? So how would you do that? You say: okay, I will add a pod affinity to myself, and anti-affinity against all others. And the way that you do anti-affinity against all others is by saying that anyone that has, for example, the job-name key set should be excluded, shouldn't collocate. But then I myself have that key set.
D: So how do I exclude myself? Not myself, like my job, not the pod itself; the job itself, you see what I'm saying? So you say: I have anti-affinity against every pod that has the job-name key set, unless the job name is equal to mine.
D: So if you were to do it in a generic fashion, without explicitly setting my job name, you would want to do something like NotIn with matchLabelKeys on the job-name key: basically, you fetch the label from the pod that is being placed.
D: It's a bit complicated, not too straightforward to think about, but I was looking at the implementation for matchLabelKeys, and basically it only supports the In operator. That's what it supports, right? If you have an anti-affinity or affinity that sets matchLabelKeys, we add a constraint that says: this label equals my own value, right? That's it.
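What D describes can be sketched as follows. This is a simplified illustration of the behavior, not the actual matchLabelKeys code: `requirement` is a cut-down stand-in for a label selector requirement, and `mergeMatchLabelKeys` is a hypothetical name for the merge step.

```go
package main

import "fmt"

// requirement is a simplified stand-in for a label selector requirement
// (key, operator, values) as used by pod (anti-)affinity terms.
type requirement struct {
	Key      string
	Operator string // "In", "NotIn", ...
	Values   []string
}

// mergeMatchLabelKeys mirrors what the matchLabelKeys implementation does
// today: for each listed key, take the incoming pod's own label value and
// append an "In" requirement to the term's selector. The operator is always
// "In"; the "repel everyone except my own group" pattern discussed above
// would need a NotIn variant, which, per this discussion, the API does not
// currently expose.
func mergeMatchLabelKeys(podLabels map[string]string, matchLabelKeys []string) []requirement {
	var reqs []requirement
	for _, k := range matchLabelKeys {
		if v, ok := podLabels[k]; ok {
			reqs = append(reqs, requirement{Key: k, Operator: "In", Values: []string{v}})
		}
	}
	return reqs
}

func main() {
	pod := map[string]string{"job-name": "training-job-42"}
	// Produces one requirement: job-name In [training-job-42].
	fmt.Println(mergeMatchLabelKeys(pod, []string{"job-name"}))
}
```

The exclusivity use case would instead need the generated requirement to carry NotIn for the anti-affinity term, which is exactly the gap D raises next.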
D: But there's no way to say NotIn, like the inverted one, anyway. Yeah, I don't know if we...
D: I guess this can be, yeah, I don't know. I need to look closely at the API to see if it's possible to actually specify such a thing: you set matchLabelKeys to the label key and then, at the same time, set the operator, and the operator would decide. But I don't think we have that level of control, right? Because imagine we...