From YouTube: Kubernetes SIG Scheduling Meeting 2018-05-07
A
Hello, everyone, and welcome to the SIG Scheduling meeting. Today I have a couple of items that I would like to talk about, and then we will open it up to questions or comments from you. One of the items is an update on our progress with priority and preemption. As you may know already, we've been able to move this to beta in 1.11. The feature is enabled by default, and I just received an update from Ravi, who has been able to verify that the test grid is green. I'm very happy about that, and I would like to thank everybody who put a lot of effort into moving this forward. Klaus and Ravi helped a lot; I would especially like to thank Ravi very much for his efforts.
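For context, a minimal sketch of how the beta feature is consumed, assuming the `scheduling.k8s.io/v1beta1` API group that pod priority moved to in 1.11 (names and values are illustrative):

```yaml
# A PriorityClass gives pods that reference it a higher scheduling
# priority; lower-priority pods may be preempted to make room.
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "For workloads that may preempt lower-priority pods."
---
apiVersion: v1
kind: Pod
metadata:
  name: important-app
spec:
  priorityClassName: high-priority   # ties the pod to the class above
  containers:
  - name: app
    image: nginx
```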
A
We recently ran into an issue with DaemonSets: DaemonSet pods were still being scheduled by the DaemonSet controller in 1.11, which kind of conflicted with what we wanted to do. Basically, we wanted to remove the rescheduler completely from our code base, because we believe the rescheduler would conflict with preemption, particularly because preemption may remove some of the pods from a cluster and the rescheduler can do the same thing on another node. So basically, we could face double preemption in certain scenarios.
A
As a result, we needed the rescheduler to be there to guarantee scheduling of critical DaemonSet pods. Ravi did all the work of changing the rescheduler, so the rescheduler is now aware of the fact that some of these critical pods are created by the DaemonSet controller, and it helps schedule those by removing some other pods from the cluster when there is not enough capacity or room for those critical pods. So the rescheduler remains in 1.11, but its responsibilities are reduced to creating room for critical DaemonSet pods. Thanks a lot, Ravi. Ravi has done a lot of other things: he has set up our tests, and he has done phenomenal work moving this feature forward, for which I would like to especially thank him. We have another item on our agenda, but by the way, if you have any questions about priority and preemption, please go ahead and ask.
A
Actually, in our previous meetings I have said several times that I have kind of given up on keeping up with GitHub notifications. If you have any PRs that need my attention, please don't hesitate to ping me on Slack. I still sometimes try to go over my notifications and catch up, but honestly, it's almost impossible. I receive something like 15 PR notifications every day, and it's almost impossible to keep up.
C
I mean, there is a flag for it, because it was disabled by default previously; the local-cluster-up script has a pod priority flag so that you can enable it. Oh, I see, okay. But anyway, now it's enabled by default, so we get it by default anyway. Because that flag is still there, though, I think it's not required anymore.
D
So, should we be bringing in the descheduler? Since Avesh is here, we can discuss that, because we are anyway going to retire the rescheduler. In OpenShift we want to release it in tech preview. So should it be made official? What do you think — should we wait for some more time?
A
So, the descheduler? I don't think so. Avesh, correct me if I am wrong, but I think the rescheduler can help with making room in a full cluster, right? The descheduler probably doesn't help with that; the descheduler rearranges things to maybe create a more efficient balance in the cluster. All right, yeah.
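To illustrate the rebalancing role described here, a hypothetical descheduler policy in the style of the kubernetes-incubator descheduler of this era; treat the exact API group, strategy name, and thresholds as assumptions:

```yaml
# Hypothetical DeschedulerPolicy: evict pods from overutilized nodes
# when other nodes are underutilized, so the scheduler can rebalance.
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        thresholds:        # nodes below all of these are "underutilized"
          cpu: 20
          memory: 20
          pods: 20
        targetThresholds:  # nodes above these are eviction candidates
          cpu: 50
          memory: 50
          pods: 50
```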
C
Exactly. I mean, the descheduler doesn't do what the rescheduler is doing anyway; the rescheduler is just a temporary solution right now. In fact, as you said previously, we only kept the rescheduler because we could not move that feature to beta. So I think in the next release, in 1.12, once we move that feature, where we schedule DaemonSet pods with the default scheduler, then we don't need the rescheduler anymore.
C
The only thing we could do, if we want to, is suggest to whoever is using the rescheduler that, since the rescheduler is already going to be deprecated, once we move the existing feature to beta we should make all those priority and preemption related changes in the descheduler, and we should encourage them to use the descheduler. That's the idea; that's what I meant. I see.
C
My question is: while we are making changes to the rescheduler at all, even if we want to make those changes, shouldn't we make those changes in the descheduler instead, and then encourage, wherever the rescheduler add-on is being used, that the descheduler add-on be used instead? Because that's going to be our supported thing in the future. Yeah, I agree.
A
Sure, that should certainly be our longer-term plan and approach, I believe, but the particular change we made was a small change only to get things going in 1.11. Basically, our plan is to deprecate the rescheduler right away after 1.11 is released and then completely remove it. But in order to get the ball rolling for 1.11, we wanted to make only small changes. Yeah.
A
The reality is that in vanilla Kubernetes we don't have multiple schedulers, so in a way it's not super high priority for us, or for most of our users, to support multiple schedulers in preemption. But since our promise to the community is that we do support multiple schedulers, I think it's still important to support multiple preempting schedulers as well. I don't expect a lot of people to have implemented preemption in their own custom schedulers, but even for the small number of people who might be interested in this feature, I think we should still do it.
The scheduling framework is something we are going to start work on soon, but it's definitely going to take some time. I have written a document and we are probably going to send it out soon. That document explicitly says that this is going to be scheduler version 2, and it is not going to be fully compatible with scheduler version 1. Basically, all of those extensions that we used to have in scheduler version 1 are not going to be compatible with the scheduling framework, so some people may want to stick with scheduler version 1, the scheduler we currently have, for some time, and we need to support that, at least as long as v1 is not deprecated. So we would like to add multiple-scheduler support to preemption. Okay.
A
We know for a fact that in Borg, inside Google, we use this image locality scoring function, and it helps a lot with saving storage and also saving some amount of bandwidth, basically across the board. That's an important optimization in my opinion, and we wanted to move it forward and make it a default priority function in 1.11, but we couldn't get everything in place before code freeze, so it's pushed to 1.12.
A
One thing, though, that Tim St. Clair brought up today was the fact that this scoring function could possibly cause a lumpy distribution of pods among nodes. The concern here is that if there are a couple of nodes in your cluster that have the images for a pod, let's say of a ReplicaSet, those nodes would be preferred over other nodes in the cluster, and as a result a lot of pods from the same ReplicaSet may end up being scheduled on those nodes.
A
That is not necessarily always desired, because we don't want to put all the pods of a ReplicaSet on one node; we would like to distribute them. So we want to study the effects of this scoring function a little bit. I don't believe this particular scenario is always the case, because we have other scoring functions that would normalize the effects of this one. In particular, we have a scoring function that prefers nodes that have lower resource utilization. So if some nodes in a cluster have a lot of their resources utilized and some nodes are empty, then that priority function prefers the empty nodes. So we believe that the combination of various scoring functions can balance one another. We also have weights for scoring functions, so if we give a scoring function a higher weight, that scoring function basically prevails. If we give least-requested, which is the scoring function I just talked about, the one that prefers nodes with lower resource utilization over nodes with higher resource utilization, a higher weight, then that scoring function can prevail and we may prefer those nodes over other nodes. We need to study this; we need to see what the effects of this scoring function are, and if we enable it by default, we should be careful about keeping a balance.
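The weighting mechanism described here can be sketched with a kube-scheduler policy file. The priority-function names below existed in the scheduler of this era, but the specific weights are illustrative assumptions, not recommended values:

```yaml
# Sketch of a kube-scheduler --policy-config-file: combine scoring
# functions with explicit weights; a higher weight lets that score
# dominate the final node ranking.
kind: Policy
apiVersion: v1
priorities:
- name: LeastRequestedPriority     # prefers nodes with lower utilization
  weight: 2                        # higher weight: spreading prevails
- name: BalancedResourceAllocation
  weight: 1
- name: ImageLocalityPriority      # prefers nodes that already have the images
  weight: 1
```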
C
I have some comments about that. In fact, I kind of agree with what Tim said, and I think in the beginning you said that in Borg, when there are a large number of pods using the same image, this is going to be useful; that's what you said. My comment is that it may actually work the other way around when a large number of pods use the same image, and the cluster size will also matter.
C
Let's say there are 20 nodes in the cluster and there are a large number of pods; obviously the default scheduler is going to spread those pods across all the nodes, so that means the image is going to be present on all the nodes anyway, and in that case we don't really need any function like this.
C
Those are the two things that were coming to my mind that we really need to take into account, even when redesigning this. I don't think that in its current form it's really going to be useful, because the first thing I would like to say is that there needs to be something like rate limiting, in the sense that this priority function really needs to check how many pods have already landed on a node.
A
I hear your concerns. At the same time, I believe that if we set the weights correctly, then a node with lower resource utilization, you know, whose resources are more available, is generally preferred over other nodes in the cluster; and if there are, let's say, multiple nodes with similar resource utilization, then among those we can still choose the one which gives us better efficiency, and that could possibly be a node that already has the images that the pod requires.
C
Yeah, I agree. I mean, generally I have seen what some admins have been doing to get better startup times: they pre-pull images, like when an image is going to be used very frequently. So I'm not saying we have to do that, but I'm saying that sometimes that solution might be better there.
A
Maybe we can actually start experimenting with a cluster of a certain size, and then start creating some pods and playing with the weights of various priority functions to find out how things are distributed in practice as the weights change. I believe we can come up with a scenario, or some proper weights, that gives us a good distribution as well as efficiency. And when distribution is not so much of a concern, or distribution is already ideal, we can also think about how to optimize for better pod startup time and lower storage usage.
A
That's a very good question. Actually, we already have several items which we would like to work on. One is starting our work on the scheduling framework; I'm going to share a document soon. That's one of the things we can put in, though it's not something that is necessarily going to be available in 1.12; it is of course a longer-term effort. I would expect an early version of it to be available maybe as early as 1.13. Other than that, we would like to move the equivalence cache to beta.
A
That's another item we would like to work on, and I still need to study the effects of the equivalence cache. We would also like to move forward with gang scheduling; that's something Klaus has been working on, and we would like to see some progress there, probably finalizing the API before 1.12 and making some plans for implementing it. This is another item.
We would also like to see some more performance improvement, if possible, on affinity and anti-affinity, and to move that feature to GA, because currently it's in beta and we keep hearing that people want better performance for affinity and anti-affinity. I have already sent one PR that improved the performance of affinity and anti-affinity significantly.
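For readers unfamiliar with the feature being discussed here, a minimal pod anti-affinity sketch; the label values and topology key are illustrative:

```yaml
# Spread pods of the same app across nodes: avoid scheduling onto a
# node that already runs a pod labeled app=web.
apiVersion: v1
kind: Pod
metadata:
  name: web-1
  labels:
    app: web
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["web"]
        topologyKey: kubernetes.io/hostname
  containers:
  - name: web
    image: nginx
```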
E
So basically, we've been working with Erika on the proposal. It's ready for a final review, and the plan is to implement it and have it ready for 1.12, so feel free to review it so we can do a final pass and get it merged. I know that there's the code freeze now, so I think we can focus on reviewing the proposal so we can get it out there as soon as possible. Sounds good.
B
I don't know if this is true for everyone in the SIG, but I've been approached by, and talked to, maybe five people in the last month or so who are all interested in contributing to SIG Scheduling. I can give them an outline of the code and how we work and operate, but when it comes to actually helping them get started on some tasks, I don't usually have anything to give them.
A
The intention from the beginning was that we split the meeting into two smaller meetings so that people can join at least one of the two. I know that the other one is not convenient at all for people on the East Coast, but we didn't really find many other options, given that we want people from China to be able to attend the meetings as well. Feel free to suggest other times; it's very hard to find one single time that works for everyone across the globe.
C
In fact, I don't know if anything like this is possible, and I'm not saying we have to do it, but I was wondering whether we could move that meeting to, let's say, 11 a.m. It might be very late for some people, but that might be easier than attending the meeting at its current time.
B
I know. Maybe another aspect of the solution would be taking better meeting notes. If we did a good job summarizing the minutes in the doc, and then spent the first three minutes of each meeting recapping the other half, people who can only attend one could kind of stay up to date. Yeah.
A
Also, we have recordings of all the meetings, and I try to post them immediately after the meetings, and I put the links in the document as well, so that if you don't attend in person you can still follow along. Of course you cannot ask questions, but the intention is that you can ask your questions in the next meeting, which is probably at a better time for your timezone, and you can be aware of the things that went on in that meeting by looking at the video recording and also the meeting notes.