From YouTube: Kubernetes SIG Scheduling Meeting - 2019-08-01
A: Hi everybody. As you know, this meeting is recorded and will be uploaded to the public Internet, so chances are whatever you say in this meeting will remain for a very, very long time. With that, let's start the meeting. I have a few items that I would like to speak about, and then you can ask questions or make comments in between or after them. So the first item is that we planned to graduate component config for the scheduler to beta in 1.16, but unfortunately I don't think we can do that.
A: The main reason is that component config does not allow us to have optional fields for several of its fields. There are some fields which are currently pointers, I believe, but there are quite a few, actually, that are non-pointers. So for those it is almost impossible, not almost, it is actually impossible, to tell whether they were not provided or they were given a value which is the same as the default, the nil (zero) value in Go. For example, take a bool: if you don't specify a bool, by default it will be false in Go.
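The zero-value problem described above can be sketched in a few lines of Go. This is a minimal illustration, not the real scheduler component config; the struct and field names are invented:

```go
package main

import "fmt"

// SchedulerConfig illustrates the problem described above. With a plain
// bool, "unset" and "explicitly false" are indistinguishable, because the
// Go zero value for bool is false. A *bool makes absence observable (nil),
// so a non-false default can be applied. These names are hypothetical,
// not the real scheduler component config.
type SchedulerConfig struct {
	EnableProfiling    bool  // zero value is false: unset and false look identical
	EnableProfilingPtr *bool // nil means "not provided", so a default can apply
}

// applyDefaults fills in defaults only for fields that were left unset.
func applyDefaults(c *SchedulerConfig) {
	if c.EnableProfilingPtr == nil {
		d := true // the intended default
		c.EnableProfilingPtr = &d
	}
	// No equivalent check is possible for the plain bool field.
}

func main() {
	var c SchedulerConfig // nothing provided by the user
	applyDefaults(&c)
	fmt.Println(c.EnableProfiling, *c.EnableProfilingPtr)
	// prints "false true": the plain bool silently stays false,
	// while the pointer field received its intended default of true.
}
```

This is why graduating the config API is blocked on making such fields optional: once the API is beta, changing a plain bool to a pointer is a breaking change.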
A: So we need to wait for that to happen, and then we can promote the component config of the scheduler to beta. This will probably happen in 1.17. This is a little unfortunate, because we kind of already have deprecated the other kinds of config, like the flags of the scheduler, and yet the alternative is still not even beta; it is still an alpha version. We need to make sure that this is graduated to beta as soon as possible.
B: Yeah, one comment here. Speaking of the deprecated flags: basically, right now we have the old-fashioned deprecated flags, right, and also the component config option, which just reads in one component config file. So basically, if people use the old-fashioned kubeconfig flag and also use some options in the component config, for example the predicates and priorities, then our scheduler will report an error saying that internally we don't respect the deprecated flags if we use the component config.
B: The special case to consider is the kubeconfig. I saw some people open an issue on that, so I just recommended that they move the kubeconfig into the component config. But maybe we could internally overwrite the kubeconfig value inside the component config if people give the deprecated flag. I'm not sure that is the correct decision; or maybe in the future we just remove all the deprecated flags.
A: Honestly, in some of these situations, since they are making a change and then they see that this problem happens, it's not like they are rolling out a version and suddenly their clusters break. It's more like they make a change, they are probably monitoring and supervising this change, and they realize that there is a breakage or the scheduler does not come up.
A: Okay, so the next item is a question I actually want to bring up; I want to see what you guys think. Initially, the plan was to graduate our scheduling framework to beta in 1.16. In fact, what I mentioned in the KEP for the scheduling framework was to have the extension points implemented in order to make it to beta. Now, we can definitely do that; I think we're almost there already.
A: Maybe one of these extension points is left to be done, but I feel like our API may not be as stable as we want. I have a feeling that once we start changing our predicates and priority functions and try to build more plugins, and users also write a few more plugins, we will get more feedback about making certain changes to our interfaces for extension points, and if we promote things to beta now, we have to stick with these interfaces.
A: We would have to keep the existing ones for at least a couple of releases, and then we would all be subject to all the backward-compatibility guarantees that come with a beta-version API of Kubernetes. For that reason, I feel we should keep the scheduling framework as an alpha version in 1.16 and maybe promote it to beta in 1.17, once we have a little more clarity, or our confidence about our interfaces is higher. What do you guys think?
D: I completely agree. I think we need to really think for some time, try a few plugins, and perhaps making progress on the roadmap as well will give us more clarity on whether the current interface is sufficient or needs some tweaks. I don't think we have any pressing need to graduate it to beta. It's not necessarily a customer-facing feature that someone outside is waiting for; it's mostly an internal restructuring of the whole scheduler.
A: I know that there are a couple of companies who are impatiently waiting for it, actually, and we respect and understand that. But in fact, if we graduate things to beta and then decide to change it, that would cause even more trouble for those folks as well. So I think we should be more careful and patient here; we should carefully consider all of our options and be a little bit more patient until we have more confidence about our API.
A: One more thing. Very recently, like a couple of days ago, or anyway earlier this week, we realized that there is no way for filter plugins to tell us whether a node is infeasible and, once it is infeasible, whether preemption can help or not. Basically, today we have one single code for infeasibility of nodes, and that is basically just saying that a node is unschedulable. Sometimes this unschedulability can be resolved by preemption, and sometimes it cannot.
A: Currently, our preemption algorithm relies on the very fixed set of predicates that we have, and it has a sort of hard-coded config for these predicates to indicate, when one of these predicates fails, whether preemption can help or not. For example, if a node is tainted, removing other pods from the node cannot really change anything. In another example, if a node is out of resources, removing some pods from the node can free up some resources.
A: Removing some pods can free up some resources and potentially make a pod schedulable. So you can see that if the taint predicate fails, there is no point in trying preemption; if the resource-check predicate fails, we should definitely try preemption. For that reason, we also need the filter plugins to somehow tell us whether preemption can help them or not. Right now we have a fixed set of predicates, but that changes in the future.
A: After the framework comes, we don't know what kinds of filters are enabled in customer clusters; also, various users of Kubernetes may have implemented their own filters. So there are a couple of proposals to solve this problem. One is that a filter plugin, instead of returning just a single "unschedulable" code, can return more than one code: for example, it can return an "unschedulable" code and something like a "hopelessly unschedulable" code.
A: "Hopelessly unschedulable" means that you cannot do anything about this: the pod is unschedulable on this node, removing pods from the node is not going to help, and you have to wait for some other things to happen. Basically, that means we should retry scheduling the pod, but we should not try preemption. The other code would be just "unschedulable", and "unschedulable" means that you can try preemption. With that, the scheduler can decide whether to try preemption or not.
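The two-code proposal and the preemption rule just described can be sketched as follows. The code names here are assumptions based on this discussion, not a quote of the real scheduler framework API:

```go
package main

import "fmt"

// Code sketches the two filter result codes proposed above.
type Code int

const (
	Success Code = iota
	Unschedulable                // preemption may make the node feasible
	UnschedulableAndUnresolvable // "hopelessly unschedulable": retry later, but do not preempt
)

// shouldTryPreemption implements the rule described above: preempt on a
// node only if at least one filter failed and every failing filter
// returned plain Unschedulable.
func shouldTryPreemption(results []Code) bool {
	failed := false
	for _, c := range results {
		switch c {
		case UnschedulableAndUnresolvable:
			return false // e.g. a taint: evicting pods cannot help
		case Unschedulable:
			failed = true // e.g. insufficient resources: evicting pods can help
		}
	}
	return failed
}

func main() {
	fmt.Println(shouldTryPreemption([]Code{Success, Unschedulable}))                      // true
	fmt.Println(shouldTryPreemption([]Code{Unschedulable, UnschedulableAndUnresolvable})) // false
}
```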
A: Basically, if all the filters that failed for a particular node returned only "unschedulable", then the scheduler will try to preempt pods to see if it can schedule the incoming pod. Another idea is that we can have a config, basically in the same place as the filter plugins and the other plugins, and that config can tell us which filter plugins can be helped by preemption and which filter plugins cannot be helped by preemption.
A: The nice thing about the second approach is that we don't need to have multiple codes for each plugin, but at the same time it can limit us a little bit, because all these plugins must be statically configurable. You can come up with all kinds of examples, but if a plugin makes some decisions dynamically, and whether preemption can help depends on those dynamic decisions, we cannot really reflect that in the config. So, yeah, we are actually creating an issue for this.
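The second proposal, a static per-plugin declaration, might look like this. The type and plugin names are hypothetical, invented for illustration:

```go
package main

import "fmt"

// FilterPluginConfig sketches the second proposal discussed above: each
// filter plugin statically declares, in config, whether a failure of that
// filter can be resolved by preemption. The names are hypothetical.
type FilterPluginConfig struct {
	Name              string
	PreemptionCanHelp bool
}

// canPreemptionHelp reports whether at least one filter failed on a node
// and every failing filter is declared resolvable by preemption.
func canPreemptionHelp(cfg []FilterPluginConfig, failed []string) bool {
	resolvable := make(map[string]bool, len(cfg))
	for _, p := range cfg {
		resolvable[p.Name] = p.PreemptionCanHelp
	}
	for _, name := range failed {
		if !resolvable[name] {
			return false
		}
	}
	return len(failed) > 0
}

func main() {
	cfg := []FilterPluginConfig{
		{Name: "NodeResourcesFit", PreemptionCanHelp: true}, // freeing resources can help
		{Name: "TaintToleration", PreemptionCanHelp: false}, // evicting pods cannot remove a taint
	}
	fmt.Println(canPreemptionHelp(cfg, []string{"NodeResourcesFit"})) // true
	fmt.Println(canPreemptionHelp(cfg, []string{"TaintToleration"})) // false
}
```

The limitation discussed above shows up directly here: the bool is fixed at config time, so a plugin whose resolvability depends on runtime state cannot express that.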
A: There were a couple of comments there. I believe most people are leaning towards the first option, basically having another code: "unschedulable" plus a "hopelessly unschedulable" kind of code that third-party plugins can return. What do you guys think? I don't know if I have been clear enough and have described the problem clearly, but if you have understood it, please let me know which is your preference.
A: Well, we don't need to do preemption for that. I mean, the scheduler is going to retry scheduling a pod which is unschedulable indefinitely, until it finds a fit. So preemption is not needed for the case of checking again; it's just a rescheduling of that pod, which is going to happen anyway.
A: Yes, for the information of others, to explain why this is just an optimization: yes, this is just an optimization. All plugins can be conservative and say "always try preemption". Even if, for example, they fail for a taint, they can say "yeah, try preemption"; of course preemption is going to try and it's not going to solve the problem, so you are going to do some wasted work, but it is not going to cause a logical error if we try preemption. So basically, even if all the filters are considered resolvable by preemption, we are not going to make any incorrect decision. This is just an optimization to avoid preemption when it's not going to help. So yes, for that reason, I agree we could have a default like that.
D: On the contrary, I think you are leaking the flag, trying to move the optimization into the plugin implementation, and that's bad for separation of concerns: the optimization is outside the plugin, but the plugin is making a decision about the optimization, and that's wrong.
A: One way or the other, the plugin is kind of making that decision. I mean, whether we put it in a config file or in the plugin, it's still the plugin, or the plugin writer, who makes that decision. One nice thing, of course, about this approach is that when you create a plugin, let's say you have received a plugin from somewhere on the Internet and you are installing this plugin on your system.
A: You don't need to know what to put in a config file; the plugin takes care of this for you. That is, if we go with option one, you don't need to put anything in a config file; the plugin is going to take care of it for you. In the second case, whoever installs the plugin has to provide the right configuration, or at least has to refer to the documentation of the plugin to find out what the right config for the plugin is.
A: Yeah, but I don't think it's really necessary to change it. I mean, if a plugin knows what to do and it's written correctly, then someone who does not know about the internal logic of the plugin does not need to configure it. And as for deciding whether preemption is going to help or not, the plugin writer knows best.
A: Okay, so I have linked that issue to our meeting notes; feel free to go and take a look and leave more comments. Please think about it a little bit more; this is an important thing that we should address. We have a couple more minutes if there are questions or comments from other folks in the meeting.
A: We deprioritized it because we didn't get enough feedback from users. There were a couple of cases where some folks came back to us and said, yeah, this is going to be useful, but they were not super enthusiastic about it, so we felt like maybe there was not that much interest, and we deprioritized it. But definitely we can revisit that if there is interest. Okay.
E: Yeah, so one of the things that, on the OpenShift side, we are trying to look at is how we can use RBAC, especially for tolerations. As of now, anyone can submit a toleration for the master taint, so that pod could land on a master. Previously, we used to have a plugin in OpenShift which ensured that most of the pods would land on the compute nodes, not on the master nodes.
E
No,
we
are
making
master
scheduler
bill,
but
one
of
the
problems
that
we
are
noticing
is
anyone
can
make
the
master
or
can
run
the
dead
parts
on
master
nodes
if
they
make
them
scheduled,
so
the
problem
is
kind
of,
and
we
wanted
to
solve
it
using
our
back.
So
we
kind
of
created
a
virtual
API
group
for
toleration
x',
and
then
we
told
if
a
service
account
has
got
roll
and
a
roll
binding
which
allows
the
Toleration
x'
as
a
resource
to
be
accessible.
E: ...then only that particular service account, only the pod which has that service account, will have that toleration and will be able to land onto the master node. I just want to know: is this policy-based approach something that we have agreed upon and are going to move forward with, or are other alternatives fine?
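The scheme described, a virtual (non-served) resource gated by a role binding, could look roughly like the fragment below. Every name here, including the virtual API group, resource, and service account, is invented for illustration; this is not an upstream Kubernetes or OpenShift API:

```yaml
# Hypothetical sketch: a role granting "use" of a virtual tolerations
# resource, bound to one service account. An admission check would then
# allow the master toleration only for pods running as that account.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tolerate-master
  namespace: infra
rules:
- apiGroups: ["scheduling.virtual.example.com"]   # virtual API group, never served
  resources: ["tolerations"]
  resourceNames: ["node-role.kubernetes.io/master"]
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tolerate-master-binding
  namespace: infra
subjects:
- kind: ServiceAccount
  name: privileged-workload
  namespace: infra
roleRef:
  kind: Role
  name: tolerate-master
  apiGroup: rbac.authorization.k8s.io
```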
E: Yeah, so I believe you are talking about the PodTolerationRestriction admission plugin. The main problem there, I think... I wrote the initial version, and at that point in time the thinking was quite different. We wanted to ensure that for a particular namespace we would have certain tolerations available, and if there is a conflict, we are going to deny the pod admission; if not, we are going to merge the tolerations that are available for the namespace with the tolerations the pod has on its pod spec.
E: But even there, we do not have some sort of role-based access control. We are anyway whitelisting certain namespaces using that admission plugin, saying that only these namespaces are allowed to have certain tolerations. It's not a generic enough solution that can solve these problems.
B: There is some minor stuff we need to consider; I will maybe raise some issues for brainstorming. One thing I'm thinking about is: what if the node is explicitly marked unschedulable, for example with kubectl cordon, not drain? If a user cordons a node, the node will be marked as unschedulable, I mean the node's spec.unschedulable becomes true. In that case, should or shouldn't pending pods be considered for that node?
A: That's a good one.
B: That's also my preference, so in that case the user would need a way to tolerate this. And compared to this case, there is another case: the node becomes tainted. In that case, would we not consider the tainted node, or would we still consider it? I thought we said the taint would possibly exclude that node, yeah.
A: Yeah, that's a good point. At the very, very least, we should definitely document this in our documentation, and of course this feature is still alpha, so we can get more feedback from our users and see whether we should change this. We are at time, and next up is Rana. Thanks, everyone.