From YouTube: Kubernetes SIG Scheduling meeting 2018-01-18
B
I mean, realistically, we are not going to be able to work on this on the old framework and implement it in this quarter. Hopefully we can finalize the design and the way to implement it this quarter, and then, when it comes to implementation — all those repos that have a different structure — hopefully that will be in place next quarter and we can start on that.
C
I mean, when we are thinking about the framework, if we think about use cases like cluster capacity — the cluster capacity tool — it's not a... I don't know what you would call it. Maybe a virtual scheduler, because it does not really schedule. Basically it wants to be as close as possible to the default scheduler with respect to features and behavior, but it still wants to override the core binding function, because it does not bind, and in previous releases, even in the...
A
There have to be some policies around this. One thing that Kubernetes as a whole sucks at — and if we're going to create something like an SDK, we need to address it — is supporting version compatibility and, you know, having promotions and deprecations. That should almost be defined at the beginning, so the ambiguity isn't there.
C
Yeah, let me back up a little bit beyond the default scheduler. Let's say we have several custom schedulers based on the default scheduler. If, let's say, I run cluster capacity, how can I make sure that the functionality being used by cluster capacity is exactly the same as the default scheduler, or whatever custom SDK-based scheduler is being run in that cluster?
B
I'd be happy to, but I feel like — let's say that we have a very simple skeleton for scheduling, and then we have a bunch of plugins, and some of these custom schedulers have pretty fancy plugins. How can the cluster capacity tool, as a separate process which is completely independent, know about all these plugins, if those custom schedulers are something that, for example, was built by a user in a separate environment that we don't even know about?
C
No, I'm not talking about a custom scheduler that is a completely different implementation. When I say custom scheduler — and I think that's been one of the reasons why we are going toward a framework — it's the default scheduler, but maybe one scheduler might... let's say we have ten features in our scheduler.
C
Maybe in one cluster we are using 8 out of those 10 features, and maybe in another cluster we are using 7 out of those 10 features. When we run cluster capacity against those clusters, we need to make sure that when we run it against the cluster that has 8 features enabled, cluster capacity also has those 8 features enabled in the same fashion. Yeah.
B
Yeah, that would definitely be an option. Actually, in fact, it is even an option today: we have this scheduler policy config that can be read to add or remove some of the extenders, and also add or remove some of the predicate functions, or change priorities and their weights. So you could definitely have a very similar config for adding and removing some of these.
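The scheduler policy config mentioned here is expressed as a JSON Policy file passed to kube-scheduler; in principle, feeding the same file to both the scheduler and the cluster-capacity tool would keep their enabled predicates and priorities aligned. A minimal sketch — the exact set of predicates and priorities enabled in a given cluster will differ:

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "PodFitsResources"},
    {"name": "MatchNodeSelector"},
    {"name": "PodToleratesNodeTaints"}
  ],
  "priorities": [
    {"name": "LeastRequestedPriority", "weight": 1},
    {"name": "BalancedResourceAllocation", "weight": 1}
  ],
  "extenders": []
}
```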
C
So, as long as we have something based on, let's say, a config file — because right now we also have so many features: some are alpha, some are enabled conditionally. I mean, it's really difficult to make sure we are running cluster capacity with all the features that the scheduler is being run with. How do you make sure that the scheduler has really enabled those features, while maybe cluster capacity has not enabled them?

B
I see.
C
Like, when you look at that config or whatever, we can say: okay, yes, here is what functionality we are really using in the cluster, and what functionality the scheduler is using. And if we have to run two similar schedulers in two different clusters, we can be sure they are really running with the same features.
B
The next topic that we have is how to add preemption to the DaemonSet controller, or what to do with the DaemonSet controller's scheduling logic. This is a more contentious topic. There have been a lot of comments on the design doc that I wrote, and I have had at least a couple of hours of discussions with Brian Grant yesterday and this morning about it. So in that document I proposed, I guess, four different solutions.
B
That's solution number one. Solution number two, I think, was proposed in the document by someone who is not in this meeting. I believe he was saying that the DaemonSet controller can do preemption and then taint the nodes that it has preempted on, so that no other pod can be scheduled on those nodes until the daemon set pods are scheduled there. So we can...
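A sketch of that solution-two idea — the taint key here is hypothetical, not an implemented convention: after preempting on a node, the DaemonSet controller would taint the node so nothing else can schedule there, while the daemon pod carries a matching toleration:

```yaml
# Node, after the DaemonSet controller has preempted on it (hypothetical key):
spec:
  taints:
  - key: example.io/daemonset-preemption-pending
    effect: NoSchedule
---
# Daemon pod template, tolerating that taint so only it can land there:
spec:
  tolerations:
  - key: example.io/daemonset-preemption-pending
    operator: Exists
    effect: NoSchedule
```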
A
Can we pause for a second? The preface of all of this is based upon a supposition which I think we need to vet first. There's a bit here that I think is an edge case, and that's what I was trying to allude to in the beginning: for critical pods, you want to have priority on DaemonSet-based critical pods — kube-proxy is a great example of that.
A
It's very rare to actually have a preemption scenario that you would need for daemon sets, unless the nodes are packed to the hilt, right? And in those cases you could also do things like have a reservation. If something is so critical for your infrastructure to be supported, I would probably state that you need a certain level of slack and reservation space set aside on those nodes for this operation of your control plane. Sure.
B
I understand that we may want to have a reservation for something super critical for the cluster, but I think one of the things that is missing here is that daemon pods are not necessarily super critical. So, for example, I run a log rotation service or pod in my cluster on every node, but this log rotation mechanism does not have to run all the time.
B
I mean, if push comes to shove and I have to really remove something from my cluster to run my production service, I just remove the log rotation, and it's completely fine if I don't have any log rotation for a few hours, for example. So there are examples where daemon set pods are not necessarily the highest-priority pods in the cluster, and there are other reasons that daemon set pods may be rescheduled.
A
The preemption could occur on the kubelet side, right? Because the DaemonSet controller is pretty stupid: it just binds. There's not really scheduling — it just binds to all the things that match, and the matching is super dumb. So if it binds to the node, the kubelet can determine: hey, I'm bound to this, I'll try to run this thing; either I have resources or I don't have resources — evaluate priority, right?
A
It will still bind — the binding will still be there — but the kubelet will see and detect that the binding occurred on its watch, and on its list-watcher it can do a priority evaluation, right? It says: I have no more resources, somebody tried to bind to me, it's the highest-priority thing, I need to bump something else, and then I'll do a weighted score there.
B
Yes, actually, I guess I also mentioned something like that, which is basically leaving preemption to happen by the kubelets. That's a possibility, I should say. But are you saying this only for the case of daemon set pods, or for everything else in the cluster — basically all other pods in the cluster?
A
I'd have to think about whether it generically applies or not, but I'm specifically thinking about daemon sets, because if you keep daemon sets the way they are, as stupid as they are, this would be one exit. With regards to everything else, I don't know if you want regular scheduling to work that way. You could, but it'd be inefficient, right? Yeah, it's already inefficient — great to be more inefficient.
B
So that's a possibility. I don't think it's going to resolve the problem, though. So basically the proposal in this case, if I understand it correctly, is that the DaemonSet controller doesn't do preemption, and instead it just doesn't care about availability of resources and sort of blindly assigns pods to the nodes, and it's the responsibility of the kubelet to remove some of the existing pods to make room for the daemon pod.
B
The DaemonSet controller still needs to have all the logic to determine whether a particular node can ever run that pod. Otherwise, we will get into the same or a similar situation that was just described: basically, the DaemonSet controller assigns a pod, the kubelet may or may not remove some of the pods to create room for it, and it eventually realizes that the node is not going to be appropriate for this pod.
C
I think moving preemption from the scheduler to the kubelet — the kubelet does not have a view of the whole cluster, which the scheduler does have when it is doing preemption. So is your design document just about the preemption logic, so that it does not live in the DaemonSet controller, or is it in general about moving the scheduling of DaemonSet pods to the default scheduler?
A
Some of the logic has to move — you have to have it on the kubelet side — because resource requests and limits are optional, right? And you could potentially have some mishmash of different QoS tiers, as exists today. I don't know if we're going to get rid of that, which I hope we do, but you could have a mishmash of best-effort pods running which are abusing resources, and you could have other ones that have, you know, high-level priority, and you need some level of evaluation besides the eviction logic that's there.
A
You have the matching question and then you have the eviction question. If you split this preemption into those two parts, the eviction problem will always occur on the kubelet side; but for the matching question — whether or not to put the code in there to match at all — I'd have to think of the edge cases that fail.
B
We know for a fact that leaving preemption to the kubelet is not an option for generic pods, though. There have been problems with that, including the fact that after you assign a pod — because, you know, there is always a gap between the time that you start preemption and the time the resources become available, and this time is significant; it could be minutes. So in the meantime, maybe a higher-priority pod arrives and you want to override your previous decision, so binding a pod right away at the time of preemption is a problem.
B
A third problem is that, while that pod is assigned to the node and while waiting for the node to get the resources, another pod may actually terminate on a different node, or a new node may be added by the autoscaler — so more capacity becomes available in general, and you want to override that decision and not wait for the first node to get the resources; you want to assign the pod to a different node so that the startup time goes down.

A
Oh, good lord.
B
So these are some of the decisions that we initially thought about, and we decided to not go with preemption on the kubelet side. Some of those kind of apply to the DaemonSet controller, but not quite, because in the DaemonSet controller we have more of a one-to-one mapping between a node and a pod, or an instance of a daemon set. It's not necessarily one-to-one, but for, let's say, the set of appropriate nodes that the DaemonSet controller chooses, we know that we want to...
B
We want to run one pod per node. Basically, you won't have the option of not running it on one of these selected nodes and running it somewhere else, which makes some of these problems for generic scheduling or generic preemption of pods go away for daemon sets. But still, some logic on the kubelet side to take care of this assignment and removal of the pods remains; I'm not so sure.
B
So one other thing that we were discussing with Brian is the fact that we want to support preemption among multiple schedulers. If multiple schedulers do preemption, we still want to support that and not have any starvation issues or race conditions between the preempting schedulers, and in order to support that, we need to change the annotation that we use today for the nominated node name into a field, so that other schedulers become aware of this node selection.
B
The changes seem to be reasonably small for the default scheduler; I don't expect it to be huge. But it requires us to look at pods which are not scheduled by the default scheduler — for example, pods that have a particular scheduler name. We currently ignore all of those if it's not the default scheduler, but we need to actually look at them and, if their nominated node name is set, take them into account and assume that they are running on the node that is nominated to run them. So that change we will make.
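The proposed change — promoting the nominated node name from an annotation to a status field — would roughly look like this on a preemptor pod. This is a sketch of the proposal being discussed, not the API as it existed at the time; the pod and node names are illustrative:

```yaml
# Sketch: a pod whose scheduler has preempted on node-7 and recorded the
# choice in status, so other schedulers can take the nomination into account.
apiVersion: v1
kind: Pod
metadata:
  name: high-priority-pod
spec:
  schedulerName: default-scheduler
status:
  nominatedNodeName: node-7   # proposed field replacing today's annotation
```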
B
Our eventual discussion with Brian was that he didn't have any strong opinion about how to move forward with supporting preemption in the DaemonSet controller. Based on our conversation and on some of the design ideas in the document, it looks like we have three or four options: one, leave it up to the kubelet to decide; two, leave it to the default scheduler to schedule the daemon set pods and do the preemption for them; three, the DaemonSet controller does preemption and taints nodes to avoid competition with the default or other schedulers.
B
So these are the four ideas that I have in mind. Well, there is, of course, one other one, which is to actually make the DaemonSet controller more like a full-fledged scheduler, with proper queues, running all the predicates, having its own cache and everything. That's another option.
A
So it just leverages pieces of the scheduler directly. I don't see why that isn't a way to go forward, and you could also have ordering of evaluation then, too. You can always order evaluation so daemon sets go first, or whatever you cared about, right? You could be much more explicit on how to evaluate things.
A
Like the way the controller manager works: the controller manager spawns a bunch of routines, and each one of those routines is its own control loop, right? You could still have a piece of what the DaemonSet controller does, but then coordinate that with how the caches are working, as well as the order in which you want to evaluate things.
A
It's performance as well as what you're doing, because right now the DaemonSet controller doesn't have a lot of caches for pod history — it does for its separate thing, right? But there are special optimizations that occur inside the scheduler for how the caches work, because of assuming the bindings that are going to occur there, as well as other data. There's been a lot of history in how the cache has evolved. So I'm just thinking, for optimization purposes as well as efficiency, you could...
A
You could just literally take that code, start a separate goroutine inside of the scheduler, trim down pieces of it, and then have a handoff between the scheduler and the DaemonSet controller for when they need to do certain work, because you'd be leveraging the same caches. You'd have to have a pipeline for how you would evaluate things, but other than that, the pipeline already exists inside the controllers, right?
B
In terms of separation of concerns and responsibilities, I don't think that bringing the DaemonSet controller into the scheduler is a good idea, really, and we'd probably see some resistance in other SIGs, right? But what you're saying is kind of very similar to solution number one, apart from the fact that you are saying you bring the code to the scheduler, or sort of run it as a part of the scheduler process.
B
I don't feel really great about that architecture, because now we are bringing one of the controllers, which doesn't really belong to the scheduler, inside the scheduler. And we have an alternative solution for that, as Jonathan said, which is that the DaemonSet controller sets the node affinity for the pods and then leaves it up to the scheduler to schedule them, which essentially achieves the same thing, I believe.
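The alternative described here — the DaemonSet controller picking a node and expressing the choice as node affinity, leaving the actual scheduling and binding to the scheduler — can be sketched as a pod carrying a per-node affinity term (the node name is illustrative):

```yaml
# Sketch: a DaemonSet-created pod pinned to one chosen node via node affinity;
# the scheduler still performs the feasibility checks and the binding.
apiVersion: v1
kind: Pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchFields:
          - key: metadata.name
            operator: In
            values: ["node-3"]
```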
E
It has awareness of some of the constraints, and it short-circuits a lot of that logic just to optimize that one case, for a totally different reason: a weird nested scheduling model where it needed to do that kind of optimization routinely. But that could also be done, so I don't think that specific thing is that compelling one way or another as a reason.

B
But generally, when we talk about performance, we should quantify how much it is for cluster bring-up. I would not expect running a hundred daemon set pods, or scheduling a hundred daemon set pods on a hundred or several hundred nodes, to be really that slow at cluster bring-up. We don't need to do a lot of checks for finding appropriate nodes; the scheduler can probably easily find nodes to run these daemon sets, especially since we have node affinity for them set by the DaemonSet controller. The search is pretty limited, so I don't expect really a whole lot of performance issues in cluster bootstrapping.
E
No, I said that in the DaemonSet controller today, the most complex parts of it have to do with rolling updates, and the reason it's owned by the workloads team — who's awesome, and who's also building Deployments and StatefulSets — is because it's very, very similar to StatefulSet in terms of the semantics and the implementation. In terms of how users use it, there's also some overlap. I mean, yes, the original intent was to use it to manage node agents, effectively control-plane components or extensions, and actually there's a lot of unexploited opportunity in that area.
E
To make that work better for cluster lifecycle, by more tightly coordinating the node lifecycle and the daemon lifecycle. So I do want to make sure we don't lose that, but I don't think it makes sense to bring in the whole DaemonSet controller. As for the set of people who are actually building that controller, I think it makes sense for it to be the same set of people who are building StatefulSet, for example. You know, in terms of the scheduling logic part...
E
It makes sense for scheduling to be involved with that, and I made this comment on the doc, and you discussed it earlier: the DaemonSet controller at least needs to evaluate the attributes for selecting the nodes where the daemon should be run, in order to decide what pods to create. That's true regardless, and it is broken today and needs to be fixed regardless of everything else.
B
Right. So I told them the plan: we need to keep ordering for now, at least, I think, and we need to support multiple schedulers doing preemption, and I think the field awareness for other schedulers is something that we are going to do. The main debate right now is how we should proceed with the DaemonSet controller's scheduling logic, and whether we should — since we're doing preemption — also do the scheduling of the daemon set pods.
A
It won't break people, at least not today, as long as it comes online. You're going to have this weird ordering on startup — it'll be different, right? Because you're going to have the controller manager, and the controller manager for the daemons basically enqueues for the scheduler, with some extra selector information that it has, and then the scheduler does the scheduling, right?
E
So we need to address that if we are going to continue to support the use cases that we wanted to support by supporting multiple schedulers, and then, you know, write a document about what we expect the division of responsibilities to be and how all these control loops should interact, right? We have the scheduler and the DaemonSet controller and other controllers and kubelets, and all the, what, four flavors of autoscaling, and the descheduler, right?
E
So it's become very confusing. Borg had at least six different controllers as well, and you end up with pathological interactions, and it's not clear to everyone. You end up with things being done in an unfortunate place where they should not be, or things being done in the scheduler that should not be there.
A
This is why, in the original document, even going back a long time ago, I wanted the library-fication or SDK-ification of the scheduler, such that it's really easy for people to build their own just by making well-defined pluggable layers that would automatically include these other logic pieces that we create.
E
I think that proposal — the scheduler as a framework or platform, or whatever — is an important component of the strategy going forward. Also, we had talked about creating a way to automatically delegate — automatically change the default scheduler for some set of pods, either at a namespace level or some other way; that was never implemented. I think that could potentially go a long way, you know.
E
The user doesn't have the option of disabling the default scheduler, and they also cannot add extensions, right? They don't have access to the configuration mechanism or the execution mechanism that would allow them to add the extensions they need. So if our goal is to allow these extensions to be implemented, then that has to be taken into account.
E
There has to be a way to achieve that for this kind of a platform, and one way, with the scheduler-as-a-framework proposal, would be that they reuse all the scheduler code, maybe, but run another instance and then delegate all their pods to use that scheduler. So in practice it wouldn't be competing schedulers. I think the dominant case where people want to run their own scheduler is that they have a particular set of extensions for all the nodes, or they have a very specific thing.
E
Yes, it should be able to preempt the others, but it doesn't really compete with them. Or best-effort pods, which aren't directly affected by the normal request-based scheduling at all, right? If they just use usage, for example, to do best-effort scheduling, that's using a totally different set of criteria and information for deciding if they fit. Those are really the cases where we knew we weren't going to spend effort for a long time, but we didn't want to prevent other people from implementing them.
B
There are a lot of cases we need to address, and hopefully with that proposal we make it easier for others, and keep supporting multiple schedulers. At least today, we don't see any reason why we shouldn't, but we may want to convey this message clearly: running the default scheduler is necessary for bootstrapping, or you have to replace it with another scheduler that can take care of that scheduling.
C
I mean, it's kind of a very small issue. Right now, in the balancing algorithm we have, we just consider CPU and memory, and we don't consider PVCs, because PVCs are not initially bound to nodes. Also, volumes have limits — say, if you are on GCE or AWS, I think there is a limit of around 40 volumes per node. So when we are doing this balancing, it might be skewed with respect to PVCs.
C
We might have some nodes where we have capacity for CPU and memory but we don't have capacity for PVCs, and the other way around: we are out of capacity for PVCs but we do have capacity for CPU and memory. So we should just consider that, in addition to CPU and memory, when balancing.
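The skew C describes can be illustrated with a toy balanced-allocation score that treats attached-volume count as a third dimension next to CPU and memory. This is a hypothetical sketch, not the scheduler's actual priority function; the 40-volume figure is the per-node attach limit mentioned above:

```python
# Hypothetical sketch: a balanced-resource score over CPU, memory, and
# attached volumes, so a node near its volume-attach limit (~40 on some
# cloud node types) scores worse even when CPU and memory are free.

def fraction_used(requested, capacity):
    """Fraction of a resource that would be consumed on the node."""
    if capacity == 0:
        return 1.0
    return min(requested / capacity, 1.0)

def balanced_score(cpu_frac, mem_frac, vol_frac):
    """0-10 score; higher means the three usage fractions are more balanced."""
    fractions = [cpu_frac, mem_frac, vol_frac]
    mean = sum(fractions) / len(fractions)
    # Mean absolute deviation of the usage fractions from their mean.
    deviation = sum(abs(f - mean) for f in fractions) / len(fractions)
    return round(10 * (1 - deviation), 2)

# A node with idle CPU/memory but 39 of 40 volumes attached scores lower
# than one where all three dimensions sit at 50% utilization.
tight_volumes = balanced_score(0.2, 0.2, fraction_used(39, 40))
well_balanced = balanced_score(0.5, 0.5, fraction_used(20, 40))
```

With only CPU and memory in the score, both nodes would look equally well balanced; adding the volume fraction surfaces the skew.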