From YouTube: Kubernetes SIG Scheduling Meeting - 2019-08-29
A: It's been a very long time. With that, let's start the meeting and take a quick look at the agenda. I actually wanted to give a quick update on code freeze for 1.16, which is coming in only a couple of days or so, and I believe, at least according to our plan and the spreadsheet, that we have pretty much all the major items done and their PRs already merged. Among those is the scheduling framework: we've built all the extension points, and the PRs are merged.
A: There is even pod spreading, which was being worked on, and all those PRs are merged, so we're all ready for code freeze. There were some other items, including resource quota; that one is not there as much as we wanted and is pending some other reviews, as we discussed last week. The resource bin packing is done, as was shown in the meeting.
A: What else? So yeah, there were a few items that we went over last week; I don't want to go through those again. We know that some of them are not going to make it, for various reasons. For example, we decided not to move component config to v1beta1, because there is other major work to make all the fields optional. That's going to happen outside of our SIG, and we're waiting for it before moving our component config to beta. And other things, which I think we don't need to repeat.
C: So basically, I have one more question about changes to our internal scheduling API. Right now we are at v1alpha1, so does that mean we can arbitrarily change the fields to support a new feature? For example, I think the feature needs maybe an API change, right, as well as some ongoing items like the configurable max pod backoff duration.
A
I
agree
with
you:
I
mean
if
we
want
to
go,
why
only
the
guidelines
provided
by
the
whole
kubernetes
repository.
We
can
change
our
versions
any
time
you
want
in
any
ways
of
uhand.
The
problem
is
that
for
some
time
at
least
in
the
past
couple
of
releases
component
config,
which
is
in
our
form,
has
been
the
only
recommended
way
of
providing
convicted
scheduler.
The
other
flags
were
marked
as
deprecated.
In
these
cases,
I
don't
think
we
can
really
go
by
those
recommendations
that
we
can
change.
A
Alpha
cos
alpha
features
in
any
ways
we
want,
because,
since
this
has
been
the
only
way
that
users
could
provide
a
comfort
chances
are
a
lot
of
users
are
using
them
as
a
result,
we
should
be
more
careful
and
we
should
be
considerate.
If
you
want
to
change
these
things,
we
should
announce
them
and
let
users
know
at
the
same
time.
If
we
just
want
to
stick
with
the
guidelines,
you're
right
we
can,
we
can
change
these
alpha
version
IPIN
in
any
way.
We
want
yeah.
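For reference, a scheduler config file of that era looked roughly like the following (a minimal sketch; the `apiVersion` is the alpha component-config API under discussion, and the field values are illustrative):

```yaml
# Passed to kube-scheduler via --config; this alpha schema is the one
# the conversation says users may already depend on.
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  # Path is illustrative.
  kubeconfig: /etc/kubernetes/scheduler.conf
leaderElection:
  leaderElect: true
```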
D: In a way, we say filters are run, but we're not running the filters right now: they're not being run in the preemption path. So that's already being reviewed; we're waiting for the reviewer to actually respond. I don't know if we will get it in before code freeze, but I don't think it's a major issue, because we don't have any filter plugins yet.
A: Maybe we can reach out to them on Slack, if they are available on Slack, and ask them to fix things more quickly. No, I wouldn't call it a necessary item for 1.16, but it would be much better if we can add it to 1.16, in case some other users try to add filters and want preemption to work correctly.
D: The first one is: I noticed that we have a flag that indicates whether or not we run all the predicates when a predicate fails. So basically, the flag says whether to continue running the predicates if a predicate fails, or returns that the node should be filtered. We don't have that in the framework for the filters, so I was wondering: was this by design or not? Should we actually add that feature to the framework, and if not, should we deprecate this?
A: No. Actually, since the scheduler used to run all the predicates no matter whether the previous one had failed or not, when we started not executing any more predicates after one fails, we decided to add a flag. But the value of the flag by default was false, basically not running any more predicates after one fails, and I haven't heard a single person complaining about the fact that this flag is false and it does not run the other predicates. I highly doubt it.
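The behavior being debated (presumably the `alwaysCheckAllPredicates` option in the scheduler's component config) can be sketched as follows; the types and filter names are simplified stand-ins, not the real framework interfaces:

```go
package main

import "fmt"

// filterFunc reports whether a node passes one scheduling check.
// A simplified stand-in for the scheduler's predicate/Filter signature.
type filterFunc func(node string) bool

// runFilters evaluates the named filters, in order, against a node.
// When checkAll is false it short-circuits on the first failure (the
// default discussed above); when true it keeps going and collects
// every failure reason, which is what the flag enables.
func runFilters(node string, filters map[string]filterFunc, order []string, checkAll bool) []string {
	var failed []string
	for _, name := range order {
		if !filters[name](node) {
			failed = append(failed, name)
			if !checkAll {
				break // short-circuit: skip the remaining filters
			}
		}
	}
	return failed
}

func main() {
	filters := map[string]filterFunc{
		"PodFitsResources": func(string) bool { return false },
		"PodFitsHostPorts": func(string) bool { return false },
	}
	order := []string{"PodFitsResources", "PodFitsHostPorts"}
	fmt.Println(runFilters("node-1", filters, order, false)) // stops at the first failure
	fmt.Println(runFilters("node-1", filters, order, true))  // reports both failures
}
```

Dropping the flag, as proposed, would mean the framework only ever implements the `checkAll == false` path.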
A: Not necessarily the flag; I mean the behavior. How many of these predicates are executed, and in what order, would have to be specified; otherwise, if it's not specified, people cannot really rely on the order of these predicates. This is considered internal logic of the scheduler. So for filters, and for the scheduling framework, I would honestly say: let's drop it, yeah.
D: I agree. I'm just wondering about the process of deprecating this behavior, because externally, for users, even filters are an internal matter, like the way we implement things. But as we're moving predicates into filters inside the scheduler, it's still basically a bunch of predicates being run, so we'd basically be breaking that. And I know that this is an internal feature, but we're exposing it as a flag today. So my recommendation is to actually completely deprecate the flag before we get into inconsistent behavior when we start moving predicates into filters.
A: Yeah, I agree with you. I think now that we haven't hit the code freeze yet, it's actually a good time to mark that flag as deprecated, so that we can drop it sooner. And when we are adding the first set of default filters for the scheduling framework in the next cycle, we can basically forget about this particular behavior.
B: Yeah, I mean, I don't know of many users who are actually using that. I think we kind of approved it because we wanted to have that short-circuit evaluation: instead of running all the predicates, short-circuit if one of the predicates fails. But I think it would be better to send out an email to the sig-scheduling mailing list to get some feedback before we get started on the deprecation.
D: That's fine; it was just some discussion. So the other one is: I don't really understand the scheduler API package and why we version things there. It's along the same lines as what was commented a few minutes ago. We have this problem where the types.go that's under api, for some of its fields, has a different type, like unsigned integer 32, compared to the one that exists under v1, like, you know, api/v1/types.go.
A: I don't think we have ever reverted a version anywhere in the Kubernetes codebase yet, but promotion from alpha to beta to GA is what has happened very commonly for these APIs, and I think you may already be familiar with that. So once the API becomes more stable, we usually graduate it, from v1alpha1 to v1beta1, and we can have other criteria for graduation; these are usually specified in the KEPs for the various features of Kubernetes. And then, once all the criteria are met, and we have all the features and all the bells and whistles for some feature, and we have confidence, we can move it, yeah.
Of course, each one of them has different guarantees for backward compatibility as well. For alpha, we don't provide any backward compatibility; for beta, we provide, like, six months; for GA, I think a year after the deprecation.
A: Actually, I don't know exactly how all these files are used. Some of them are there for code-generation purposes; I know that we have some scripts that generate code, but I don't know exactly. And some of them are there for historical reasons. Anyway, I don't actually know why we have both.
C: I know a bit about this, but maybe I'm wrong; I'll just throw in my two cents. So basically, in the old days of Kubernetes, we had a non-versioned types.go, which was intended to be used internally, aligned with our Go source code, and also some versioned types.go files. So internally you have your own version, and basically we have some conversion methods to translate between each other.
C: So basically, we have the non-versioned object to convert to v1alpha1, or from v1alpha1 to some other versions. But in 1.14, I think, this kind of mechanism was deprecated, because of the extra effort to manage the conversions and so on. So there is a pattern discussed in the community that you can refer to, yeah. I just don't know a whole lot about this, yeah.
D: So there is an old issue where they advocated the use of exact-size types: instead of using int, you have to use int32 in APIs. But we have not been consistent with that, so some fields use int and some fields use int32, etc., in our own types.go. So I think this is right.
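The mismatch being described looks roughly like this (a sketch; the struct and field names are made up for illustration, not the real scheduler types):

```go
package main

import "fmt"

// internalConfig mimics the non-versioned types.go: the field uses
// the platform-sized int.
type internalConfig struct {
	HardPodAffinityWeight int
}

// v1alpha1Config mimics the versioned api/v1alpha1 types.go, which
// pins an exact-size type per the convention mentioned above.
type v1alpha1Config struct {
	HardPodAffinityWeight int32
}

// convertToInternal is the kind of hand-written conversion the
// mismatch forces; the widening int32 -> int cast is always safe.
func convertToInternal(in v1alpha1Config) internalConfig {
	return internalConfig{HardPodAffinityWeight: int(in.HardPodAffinityWeight)}
}

func main() {
	v := v1alpha1Config{HardPodAffinityWeight: 100}
	fmt.Println(convertToInternal(v).HardPodAffinityWeight)
}
```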
D: It is, because we were discussing moving to a 0 to 100 range for priorities and scores, rather than 0 to 10, and I thought that we could just basically change max priority from 10 to 100 and be done with it. But I'm concerned about who is using that field. I mean, at the end of the day, the contract between us and whoever is using it is just the variable name, not the value, I think.
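The proposed change amounts to rescaling every score against a new ceiling; a sketch (constant names are illustrative, not the real scheduler identifiers):

```go
package main

import "fmt"

// The old 0-10 score ceiling and the proposed 0-100 one, as discussed.
const (
	oldMaxScore = 10
	newMaxScore = 100
)

// rescale maps a score produced against the old ceiling onto the new
// range, so code written for 0-10 still ranks nodes identically.
func rescale(score int64) int64 {
	return score * newMaxScore / oldMaxScore
}

func main() {
	fmt.Println(rescale(7)) // a 7-out-of-10 node becomes 70 out of 100
}
```

This is what makes relying on the variable name rather than the value matter: callers that hard-coded 10 break, while callers reading the constant keep working.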
D: Yeah, which is, like, you know, kind of hacky, and I feel it's not the right approach, but it's the right approach for the short term, so that we make a transition. But I don't want to keep code inside our scheduler where it's inconsistent what each one is providing and we're trying to convert, yeah.
B: One thing I would like to raise for discussion is: do we want to have any delay between subsequent scheduling attempts? Like, say, if there is a pod that has failed: as of now, I think we have something for preemption, but do we want to have a retry mechanism to ensure that a pod that is failing continuously is not being retried, like a timeout kind of thing? Yeah.
A: Because, you know, as I said, there could be some changes in the cluster that come after a large number of hours. For example, I don't know, you're running a huge job in a cluster, a job which has thousands of pods, and the job takes several hours to finish, but after it finishes it suddenly releases a lot of resources in your cluster, and a lot of other new pods that have been waiting for a long time can now be scheduled.
A: So things of this sort happen a lot in clusters, you know, so I don't think it makes sense for the scheduler to stop. We could potentially think about some other mechanisms, like: if a pod has resource requirements beyond a certain configurable limit for the cluster, then never try to schedule it. For example, if I go and say my pod requires 20 terabytes of RAM, the scheduler keeps retrying and it will never succeed.
A: So one potential possibility is to say, okay, whenever you see a memory request beyond, I don't know, 500 gigabytes, drop this pod. I mean, this is just a vague idea. In Borg we had something similar to this: basically finding the maximum size of a node, and if a task in a Borg job is larger than that max node size, it would never be tried.
A: But that does not quite apply to Kubernetes, because in Kubernetes we have a more flexible cluster: the autoscaler could potentially add some other nodes to the cluster, for example, or some other nodes may even be added manually to the cluster that have a different spec. In Borg we were assuming that we are running in an on-prem cluster where the sizes of nodes are fixed and nodes are not added, so the new platform is slightly different. So that's why I think, if we want to have a similar feature, we should probably make it configurable, yeah.
B: Making it configurable is fine, but I see your point, yeah. Let me think more about it, and then I'll see if there is a use case where we can send some sort of notification, saying that if a new node gets added whose size can actually satisfy the pod, we can try retrying it then, something like that, yeah.
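The notification idea at the end could be sketched like this: when a node joins, wake only those pending pods whose requests the new node could actually satisfy. The types are simplified stand-ins, not the real Kubernetes API objects:

```go
package main

import "fmt"

// resources is a toy allocatable/request pair.
type resources struct {
	MilliCPU int64
	MemoryMB int64
}

type pendingPod struct {
	Name    string
	Request resources
}

// fits reports whether the node's allocatable resources cover the
// pod's request.
func fits(node resources, pod pendingPod) bool {
	return pod.Request.MilliCPU <= node.MilliCPU && pod.Request.MemoryMB <= node.MemoryMB
}

// podsToRetry filters the pending queue down to the pods worth
// re-queueing after the given node joined the cluster.
func podsToRetry(newNode resources, pending []pendingPod) []string {
	var out []string
	for _, p := range pending {
		if fits(newNode, p) {
			out = append(out, p.Name)
		}
	}
	return out
}

func main() {
	node := resources{MilliCPU: 4000, MemoryMB: 16384}
	pending := []pendingPod{
		{"small", resources{500, 1024}},
		{"huge", resources{64000, 20 * 1024 * 1024}}, // the 20-terabyte pod from earlier
	}
	fmt.Println(podsToRetry(node, pending))
}
```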