From YouTube: Kubernetes SIG Scheduling Weekly Meeting for 20221103
A
Hi everyone, welcome to this week's SIG Scheduling meeting. This meeting is being recorded, so please be respectful to each other, and the recording will be public. Firstly, let me share my screen, and yeah, today the first topic is a KEP about Pod Scheduling Readiness. I think I'll give some high-level introduction on it first.
A
So the background is this: if you know how the scheduler, especially the scheduling queue, works internally, once a pod is created it immediately enters the scheduling queue and then pops out for scheduling unconditionally, no matter whether the pod is still waiting for some external condition, like a quota check or something else. You have no control over this. The pod gets retried endlessly; there is an internal backoff timer, but it will pop out again at the end of each scheduling cycle.
A
So in some cases the pod is actually not ready yet, because some other controller controls its readiness, but it still goes through the scheduling cycle, the predicate checks, etc. So, yeah, we waste a lot of cycles here, and especially if you consider a multi-tenant environment, the cycles wasted on this kind of pod postpone the scheduling of other pods, right.
A
Indirectly, that impacts the latency and the overall throughput of scheduling. So one thing we thought is that maybe we can add a knob here, so that we give the user some control, to say: OK, pods meeting this kind of criteria should proceed; otherwise they should just stay inside the unschedulable queue of the scheduler. That is what we proposed, and the first version of this proposal was in a Google doc.
A
If everything had to be done with a scheduler plugin, it would probably get less adoption, because not a lot of people have the expertise to write and compile scheduler plugins. So the shape the proposal finally converged on is: you can purely use the extension point, sorry, use the PreEnqueue plugin, and implement whatever you want; but otherwise, if you don't want to recompile the scheduler with out-of-tree plugins, there's something we provide.
A
We provide a new field called schedulingGates, so a typical workflow is something like this: the user creates, sorry, I would say an external controller or administrator creates a pod carrying some scheduling gates, and each scheduling gate maps to one criterion or condition that the pod should be pre-qualified on. Until then, the pod will just stay there in the unschedulable queue, not wasting scheduling cycles.
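For reference, a minimal sketch of such a gated pod, using the schedulingGates field from the KEP; the gate names are made up for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gated-pod
spec:
  # the pod stays in the scheduler's unschedulable queue until
  # every gate below has been removed by its owning controller
  schedulingGates:
  - name: example.com/foo
  - name: example.com/bar
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.9
```

While any gate is present, the pod never enters a scheduling cycle at all.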
A
That is the pod creation part. After that, your controller is responsible for removing the scheduling gates, one by one, or not necessarily one by one, just removing the corresponding gates that you are responsible for, like removing foo and then removing bar, or removing all of them in one go. And from the scheduler's perspective, there is a default-enabled enqueue plugin that checks whether schedulingGates is empty.
A
If it's not empty, it will not let the pod go; it just keeps it gated there, in the internal unschedulable queue. So I will give you some demos on it, but if you have any questions, just interrupt. So I'm going to compile the, oh, my gosh.
B
Yeah, mostly because I'm out of date with the KEP: initially we were suggesting to add conditions to the pod, right? So that's no longer the case?
A
No, it's not exactly the same as the SIG Network readiness gates, because we don't want to use a condition to act as the mechanism, and we also don't want to enforce a one-way transition on the condition itself. If you look at the design of the SIG Network readiness-gate case, the condition is best effort: if the condition is true, consumers are supposed to honor it, but in practice there's no logic enforcing that.
A
That means you can still just flip it between true and false, back and forth. So that is not the thing we want; a condition is also a weaker enforcement if we want to maintain a state machine, right. That is why we want the spec itself to enforce the state transition. So it won't use a condition, but there is a condition involved, which I will show later.
A
That will give you a better user experience, and also integration with custom autoscalers, to help you better understand which pods are gated and which ones are pending. Pending indicates that the pod has had scheduling attempted but just cannot be scheduled given its constraints.
A
So for now, you can see that I just compiled Kubernetes locally, because it's not available upstream yet; it's just three PRs so far, and I don't have real nodes; my build is version 1.26. So I'm leveraging a tool called KWOK to simulate a couple of nodes. On the right-hand side I just spun up the control plane, and I'm going to, yeah, raise two nodes, each with four CPUs.
A
And I think I can show you.
A
OK, I can show all the function points that are implemented already. The first one is the validation checks, like: you cannot create a pod with a node name and non-empty schedulingGates; we will reject that. So, for example, in this case, suppose I'm using an external controller, but somehow I misuse the scheduling gates, because I specify a non-empty gate along with a node name.
A
So in this case, there's some logic on the API side to reject you, because it doesn't make sense: this pod is not schedulable yet, because it carries scheduling gates, but you are assigning a node to it. So it doesn't make sense. And if you want to do some trick to work around it, like...
A
But if you want to do some trick, let's say, create a Binding for the gated pod to node one, let's take a look at whether we can reject that. So this binding is also rejected, because you are trying to move the pod into a nonsensical state, carrying a node name while still carrying the scheduling gates. So in this case we reject that nonsensical situation.
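A sketch of the kind of pod the API server rejects here, assuming the validation just described (gate name invented):

```yaml
# rejected at admission: a pod cannot claim a node via nodeName
# while it still carries non-empty schedulingGates
apiVersion: v1
kind: Pod
metadata:
  name: invalid-gated-pod
spec:
  nodeName: node-1          # claims a placement...
  schedulingGates:          # ...while still gated: contradictory
  - name: example.com/foo
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.9
```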
Also, by default, the pod gets a condition with type PodScheduled; with reason Unschedulable, it indicates the pod has gone through the scheduling cycle, but if the pod carries the reason SchedulingGated, then consumers should be aware that the pod is right now just staying there, and the condition indicates it hasn't been attempted.
A
So this is the default condition. And also, once the pod has transitioned to the next state, which means it's no longer gated, the reason changes back to Unschedulable.
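While gated, the pod's status looks roughly like this (the exact message wording may differ by version):

```yaml
status:
  conditions:
  - type: PodScheduled
    status: "False"
    reason: SchedulingGated
    # once the gates are lifted and a scheduling attempt fails,
    # the reason flips to Unschedulable instead
    message: Scheduling is blocked until the pod's scheduling gates are satisfied
```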
A
OK, let me demo the happy path of pod scheduling using the scheduling gates. So I've deleted the pod, and let's say, OK, I will just use the default scheduler to schedule it, and in the beginning the pod just carries two scheduling gates. Okay.
A
An interesting thing: if I'm going to append a new scheduling gate, because, as I mentioned, we enforce the one-way transition, the scheduling gates can only be applied at pod creation time, and after that a scheduling gate can only be removed, never added. So in this case, if I want to add a new scheduling gate, it will be rejected, because after creation time gates can only be deleted.
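One way a controller might lift a gate, sketched here rather than taken from the demo: a JSON merge patch replaces spec.schedulingGates wholesale, so sending the list minus the lifted gate removes it, which the validation permits since gates may only shrink:

```yaml
# merge-patch body (apply with: kubectl patch pod gated-pod --type=merge -p "$(cat patch.yaml)")
# spec.schedulingGates is replaced as a whole; only example.com/bar remains
spec:
  schedulingGates:
  - name: example.com/bar
```

A strategic merge patch would instead merge the list by gate name and leave the old gates in place, which is why --type=merge (or a JSON patch remove) is the simpler fit here.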
A
Second thing: yeah, there are some metrics, but I haven't prepared a demo for the metrics. I think the rest of the demo I have is: say, OK, I have two nodes, and each node has four CPUs, and let's say I specify a pod requesting 10 CPUs, carrying two scheduling gates. Let's enable the cluster autoscaler to see whether this breaks cluster autoscaling or not.
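The demo pod would look roughly like this: gated, and deliberately too large for either existing 4-CPU node, so only a scale-up could ever host it (names invented):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ca-demo-pod
spec:
  schedulingGates:
  - name: example.com/foo
  - name: example.com/bar
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.9
    resources:
      requests:
        cpu: "10"   # no existing node can fit this request
```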
A
So now the pod carries two scheduling gates. Let's say we apply this, and, as I mentioned, it will be in the SchedulingGated state. In this case we don't expect the cluster autoscaler to provision a new instance, because the gates haven't been lifted yet. So let's take a look at the cluster autoscaler; we don't want CA to spin up new nodes. So far it hasn't.
A
Let's give it another couple of seconds. I suppose there's an interval, like 30 seconds or something, for it to check the cluster situation and then act on the unscheduled pods. We have consulted with them and said: OK, we have a new reason for this kind of pod, so we don't expect you to provision new instances for it. Okay, okay, yeah. So that means CA is really running, but it's not spinning up a new instance.
A
That's because the pod requires 10 CPUs, right. The next thing we expect is this: once the scheduling gates have been lifted, the pod should go straight into the regular scheduling cycle; but in that cycle, because we don't have a node with 10 CPUs to fit it, it will be pending, and pending will trigger the cluster autoscaler. That is what we expect, so let's take a look.
C
But it doesn't matter; what matters is that the scheduler, yeah, tried to schedule the pod and it failed. We can look at this. Where is the status, the conditions? If you can show the conditions as well, it should show Unschedulable, yeah, yeah. And then...
A
Yeah
I
think
I
just
I
implemented
the
ca
ID
question
why
it
was
one
you
know:
124
release,
but
I'm,
not
sure.
If
the
latest
default
policy
changes
Because
by
the
first
like
you,
you
don't
have
CPUs,
but
the
new
instance
can't
have
enough
CPU.
It
should
trigger
a
scale
up
but
yeah
anyway,
but
it
doesn't.
It
doesn't
impact
the
current
feature
of
the
past.
Getting
Gates
yep
now
I
think
that's
pretty
much
I
want
to
demo.
Today.
D
A silly question from me: you ran the cluster autoscaler on your laptop just for the demo? Is that part of KWOK?
A
No, no, it's not, yeah. So basically the KWOK project is aimed at, if you are familiar with how integration tests work: it's a bare-bones control plane, and then you just create the node objects. But the node objects need some kubelet-like component to maintain heartbeats from the node objects to the API server, so that they look like real nodes. The core component of the KWOK project is supposed to be doing that.
A
It maintains heartbeats from the fake node objects to the API server, as well as the pod conditions, because if you think of integration tests, we don't have a kubelet, so a pod would stay in the Pending state: even if you assign a node name to the pod, there's no party responsible for bringing it up, right. So KWOK is doing that too. And then, as additional components, I internally implemented a kind of cluster autoscaler shim, and it will just spin up new instances, etc., yeah.
A
Yeah, I hope this KEP can get some more feedback, because it goes Alpha in the next release. We do hope to hear more about how you adapt your use case to this KEP. One use case I heard is that at Cloudera they're trying to use it in their YuniKorn project, so that they have a quota check, and before the quota check has passed they don't want to waste scheduling cycles. And I guess different companies and different projects have different requirements, and this feature can be leveraged.
A
Yeah, we have almost 20 minutes left; I think I'm done. You have this in today's agenda, right? You want to discuss it?
C
Yeah, I just... oh, what.
C
In my mind, I want to make the concept even more global, like introducing something called reservations, and we discussed this before. The proposal I had in the document that I shared was basically mostly about reservations at the pod level, like we create a reservation object for each pod.
C
That doesn't quite solve the all-or-nothing problem completely, because even if it is eventually consistent, like you keep scheduling these reservations, and then, once all the reservations of a group are scheduled, you start the job on them, that's fine; the job itself is going to be all-or-nothing. But the reservations themselves might basically end up in a deadlock, because, you know, two sets of reservations are trying to get scheduled and they could basically starve each other.
C
There are some solutions around this, right, like you can do preemption, etc., but again, that seems to me like it might be too expensive as well, yeah. There is no central point that says: OK, this is a group that I want to basically place together, right. So I want to introduce this new idea in the document, but I wanted just to float it here first.
C
It's something similar to a pod group, but it's not just identifying pods that need to schedule together; it's actually something like "I want to reserve 10 instances of this pod template", and this reservation is going to be scheduled as one unit by the scheduler. We can talk about how we implement this later, but I wanted to describe the API. The object itself is going to basically act, imagine, like Endpoints: if you remember, with Endpoints you track all the pod IPs that are related to a specific Service.
C
It's the same thing here: you have a reservation object, and you can say, in the simplest case, "I want 10 instances of this pod template", and the scheduler is going to find, for example, 10 placements, and we will track them in this object. So in this object you're going to have a list of nodes where all these reservations have been made. Pods can then schedule into these reservations, similar to my earlier proposal: a pod will have an affinity to the reservation.
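To make the idea concrete, a purely hypothetical sketch of such an API; nothing below exists in Kubernetes, and every name is invented to illustrate what is being floated here:

```yaml
# hypothetical: a reservation scheduled as one unit, tracked like Endpoints
apiVersion: scheduling.example.com/v1alpha1
kind: Reservation
metadata:
  name: training-job-slots
spec:
  replicas: 10              # "I want 10 instances of this pod template"
  mode: AllOrNothing        # or TrickleDown: accumulate placements over time
  template:                 # the pod shape whose resources are reserved
    spec:
      containers:
      - name: worker
        image: registry.k8s.io/pause:3.9
        resources:
          requests:
            cpu: "2"
status:
  nodes:                    # placements found so far; pods with affinity
  - node-1                  # to this reservation schedule onto these nodes
  - node-2
```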
C
Now this basically, you know, makes the idea of a reservation as a group more explicit, for both the scheduler and the autoscaler, and it is more generic than the pod group concept, which is basically scheduler-only and doesn't really solve the problem of reservations. I can see this being implemented in two modes. One is trickle-down: basically, "I want 10 instances of this pod", and then the scheduler will continuously try to find 10, and, you know, it can add them over time.
C
You
know
over
time
if
it
doesn't
find
them
in
the
first
like
in
one
go,
and
so
the
reservation
object
is
going
to
continue
to
basically
you're
going
to
continue
to
see
nodes
being
added
there.
As
you
know
where
these
spots
could
schedule
or
another
mode
of
operation
is
All
or
nothing
like
you,
either
going
to
find
me
all
these
parts
which
could
be
implemented
similar
to
what
how
we're
implementing
one
group
right
now
or
not,
and
then,
when
you
find
them
you're
gonna.
C
...actually, you know, record these nodes on the object itself, and then the pods will basically schedule in a second cycle: when you create the pods later, they have an affinity to this reservation object, and the scheduler will schedule them only where there is a reservation object that these pods have an affinity to. So I want to add this proposal here as well; we have, I don't know, three or four proposals around these concepts, and the reason I'm pushing forward, like I'm...
C
...trying to find a solution in this context again, is because of batch workloads in general; the concept of reservations, I think, is useful. Also, similar to how we were thinking about scheduling gates, it allows higher-level controllers to manage resources without knowing what the actual workload is going to look like.
C
You know, like the container itself that's going to run, right. So the higher-level component, call it a job scheduler, can create a reservation without actually being a job controller itself, because it just requests the resources, and then it tells the job controller: OK, now I got the resources for you, you go ahead and start your pods. That separation, that semantic, I think can only be implemented by having this split.
C
I have some thoughts on how this can be implemented in the scheduler itself, but I'll leave that to the document. So that's just the idea I wanted to add to this document, and we'll see how far it can go. My only challenge here is again related to the autoscaler: how is the autoscaler going to do all-or-nothing allocation in general? But that's, yeah, I mean, I need to circle back with the cluster autoscaler folks and see if there are any solutions there.
A
Yeah, I do agree with Abdullah: we should have an abstract API to describe the behavior and then guide the implementation. Right now the community has diverged a little bit, and each scheduling project implements its own reservation logic. Volcano has its own reservation logic, and Alibaba has a project called Koordinator, which also has a resource reservation concept, but it sticks to the scheduling framework.
A
So maybe we can look into that one, because it's not that hacky; it's still using the scheduling framework. The interesting thing is that I think it's using some fake pod objects or something, but I haven't looked into the details, yeah. But this is one option; if you want, I can post the link here.
C
The reason this could be an option is because introducing a new concept for allocating resources is complicated; there are so many controllers that depend on that. Imagine kubelet and kubectl and everywhere else: they all assume that resources are allocated based on the pods assigned to a specific node, and so everywhere that assumption is being made you would want to introduce the concept of a reservation as well, because resources can be unavailable on a node not...
A
...because of a pod being scheduled, but because of a reservation being scheduled, right. Yeah, so basically introduce the resource reservation, and maybe in the future we can introduce and implement the backfill logic as well, because that's a typical concept in traditional batch-processing scheduling, yeah.