From YouTube: Kubernetes SIG Scheduling meeting - 2019-01-17
A: Okay, as you know, our meeting is recorded and will be uploaded to the public Internet, so chances are whatever you say will remain there forever. With that, let's start our SIG Scheduling meeting. I have a couple of updates for you. Some of you have seen a change that I sent a PR for recently, basically for not snapshotting the cache before preemption. In the PR, which I just posted the link to in the chat, I tried to explain what the problem was and why we are not doing the snapshotting, but I don't know if that was clear, particularly since you were away at the time. If you have more questions, feel free to ask, and then I will give you some reasons why we did this.
So this was causing the autoscaler to become very slow, because the autoscaler waits for all the nodes in a cluster to be filled and only then adds more nodes to the cluster. Basically, as long as there are pending pods that can be scheduled, the autoscaler will not attempt to add more nodes to the cluster, and we made this change in order to avoid putting nominated node names on pods for new nodes which are added to the cluster.
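To make that interaction concrete, here is a minimal sketch of why nominating pods onto nodes can suppress scale-up. The types and helper below are hypothetical, a simplified model rather than the real autoscaler code:

```go
package main

import "fmt"

// Pod is a simplified stand-in for a pending pod. The scheduler sets
// NominatedNodeName after a successful preemption.
type Pod struct {
	Name              string
	NominatedNodeName string
}

// needsScaleUp mimics the autoscaler's core question: is there a pending
// pod that no existing or nominated node can take? A pod with a nominated
// node looks "about to be scheduled", so it does not trigger scale-up.
func needsScaleUp(pending []Pod) bool {
	for _, p := range pending {
		if p.NominatedNodeName == "" {
			return true
		}
	}
	return false
}

func main() {
	pending := []Pod{{Name: "p1", NominatedNodeName: "new-node-1"}}
	fmt.Println(needsScaleUp(pending)) // false: scale-up stays suppressed
}
```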
So this is one example of an area where our new scheduling logic does not work well with the autoscaler, but there are more areas. Once we add the scheduling framework, the autoscaler may not know about all the plugins that the scheduler uses, so we definitely need to make sure that the autoscaler works with the scheduling framework. One possible solution is for the autoscaler to be aware of all the plugins and consume the same config file that the scheduler uses.
In that way, it can also use the same set of plugins, maybe for running its simulation, and basically use the same logic that the scheduler uses.
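One way to picture that shared-config idea is the following minimal sketch, assuming a hypothetical plugin registry and a config that is just a list of plugin names: the scheduler and the autoscaler's simulator build their filter pipeline from the same source, so their feasibility decisions stay in sync. None of these names are the real scheduling framework API.

```go
package main

import "fmt"

// FilterPlugin is a simplified stand-in for a framework filter plugin.
type FilterPlugin interface {
	Name() string
	Filter(pod, node string) bool
}

type nodeNamePlugin struct{}

func (nodeNamePlugin) Name() string                 { return "NodeName" }
func (nodeNamePlugin) Filter(pod, node string) bool { return node != "" }

// registry maps plugin names (as they would appear in the config file)
// to implementations.
var registry = map[string]FilterPlugin{
	"NodeName": nodeNamePlugin{},
}

// buildPipeline is called by both the scheduler and the simulator with
// the same parsed config, so both run identical filters.
func buildPipeline(enabled []string) []FilterPlugin {
	var out []FilterPlugin
	for _, name := range enabled {
		if p, ok := registry[name]; ok {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	cfg := []string{"NodeName"} // pretend this came from the shared file
	for _, p := range buildPipeline(cfg) {
		fmt.Println("enabled plugin:", p.Name())
	}
}
```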
Another area that the autoscaler team actually brought up was gang scheduling. They were saying that gang scheduling does not work well with the autoscaler, and I agree with that. The reason is that the autoscaler looks at a pending pod, evaluates all the predicates that the scheduler uses, and figures out whether adding a new node to the cluster could make that pod schedulable.
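A sketch of that simulation loop, under simplified assumptions (a single hypothetical template node and one stand-in predicate, not the real autoscaler internals):

```go
package main

import "fmt"

// pod carries only a CPU request, standing in for a full pod spec.
type pod struct{ cpu int }

// fits is a stand-in for running the full set of scheduler predicates.
func fits(p pod, freeCPU int) bool { return p.cpu <= freeCPU }

// wouldNewNodeHelp mirrors the autoscaler's check: run the predicates for
// the pending pod against a template of the node that a scale-up would add.
func wouldNewNodeHelp(pending pod, templateFreeCPU int) bool {
	return fits(pending, templateFreeCPU)
}

func main() {
	p := pod{cpu: 2}
	fmt.Println(wouldNewNodeHelp(p, 4)) // true: one new node unblocks it
}
```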
With gang scheduling, a single pod of a gang may not be schedulable at all by itself unless we add, say, ten more nodes to the cluster, so that all the members of that group (a pod group, as we call them) can be scheduled. So a single node may not necessarily make one of these pods schedulable when they are gang scheduled, and this causes some issues for autoscaling.
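The arithmetic behind that mismatch is simple. A sketch, with illustrative numbers only: no single new node makes any one gang member schedulable, because the gang only starts once capacity exists for all of its members, so the one-node-at-a-time simulation above never sees a reason to scale up.

```go
package main

import "fmt"

// nodesNeeded returns how many new nodes a gang of gangSize pods needs
// when each new node can host podsPerNode of them (ceiling division).
func nodesNeeded(gangSize, podsPerNode int) int {
	return (gangSize + podsPerNode - 1) / podsPerNode
}

func main() {
	// A gang of 20 pods with 2 fitting per node: adding 1 node helps no
	// individual pod, yet adding 10 nodes lets the whole gang run.
	fmt.Println(nodesNeeded(20, 2)) // 10
}
```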
So, going forward, I think we should definitely make sure that any proposal we write for making changes to the scheduler also gets the attention of the autoscaler team. I've actually talked to them, and we are going to add them to our proposals going forward, so that they can comment on and review our KEPs and give us their opinion about how a change works with the autoscaler. That's something that I wanted to talk about. Yeah, go ahead.
B: [question, inaudible]
A: You're asking about gang scheduling? Yes, that is correct. So far the gang scheduling proposal is merged, but gang scheduling is still far from a solved problem, really, so there are other things that we need to address, and autoscaling is just one of them. Another important area is admission control and quota management. Those are not solved problems yet, and the initial proposal didn't cover them, so right now the proposal basically only says that we are going to build a sort of prototype in an incubator project.
B: [question, inaudible]

A: Correct, correct. So, about the first prototype: I have actually addressed all of the comments on that too. It says that this is good as a starting point for designing the API, and this is good basically as a prototype in an incubator project that is not part of the standard components. That's basically the scope that it currently has, and it's not necessarily meant to be the final design. Sometimes a proposal is an evolving document that we keep adding to and improving. Okay.
C: Thanks. Hi everyone, my name is Claire. I'm the enhancement lead for the 1.14 release, and I wanted to hop on to introduce myself to the SIG today and check in on existing Kubernetes enhancements that y'all are tracking that might not already be in our sheet. I went through yesterday and pinged a bunch of issues, so if I've done that and you've responded saying it's tracked for 1.14, it's probably been added to the sheet, but I just wanted to double-check that we've got everything. Okay.
A: Okay, I saw some of your comments. A bunch of our features are still not closed, because each one usually goes through the process of, you know, being marked as alpha and beta and then GA. Some of them take a year, or maybe longer than a year, before getting to GA, and we leave those feature requests open for that time. Some of them are already built; at least all of the ones that I looked at, and that you had left a comment on, are already built and implemented.
C: [exchange with A, inaudible]

A: You know, a lot of these issues are older issues, not necessarily KEPs. The difference, I guess, is that KEPs follow a particular format, and some of those older proposals do not follow that format. So, for graduation, do you expect those older proposals to be converted to KEPs and reopened?
C: [response, inaudible]

B: To give everyone some background, we have predicates which limit the number of volumes that can be attached to a node, for every cloud provider. We have this for all the public clouds when a particular cloud provider is in use, but we are not supporting it at this point for OpenStack and other private clouds.
One of the reasons we added it was a customer request; that's one thing. The second thing is to bring parity in the limits: we wanted to cover both the public cloud providers and OpenStack, which is by default one of the biggest private clouds in use. So we want to add that, and in this release we want to deduplicate all of the cloud-provider-specific predicates.
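A rough sketch of a volume-count predicate of the kind described here: a node is infeasible for a pod if attaching the pod's volumes would exceed the per-node attach limit for that cloud provider. The limits and provider names below are illustrative, not the scheduler's actual tables.

```go
package main

import "fmt"

// attachLimits holds an example per-provider cap on volumes attached to
// one node; real limits vary by provider and instance type.
var attachLimits = map[string]int{
	"example-public-cloud": 39,
	"openstack-cinder":     26, // the private-cloud case being added
}

// volumeCountFits reports whether a node that already has `attached`
// volumes can take a pod requesting `requested` more under the limit.
func volumeCountFits(provider string, attached, requested int) bool {
	limit, ok := attachLimits[provider]
	if !ok {
		return true // unknown provider: no limit enforced
	}
	return attached+requested <= limit
}

func main() {
	fmt.Println(volumeCountFits("openstack-cinder", 25, 2)) // false
}
```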
[exchange between A and B, inaudible]
A: Okay. There was also one other item that we were tracking, and that's the removal of, or limiting of, inter-pod affinity and anti-affinity. This was actually proposed initially by Wojtek from the scalability team, on the grounds that inter-pod affinity and anti-affinity have a very high overhead for the whole scheduler, including pods that don't use these features. In particular, anti-affinity has to be checked for everything: the scheduler has to make sure that every pod it schedules does not violate the existing hard anti-affinity of any pod, and this has a pretty high overhead. So Wojtek started this particular thread (let me see if I can find it), and he was saying that we should probably change affinity and anti-affinity.
And yeah, so the latest idea was to add a new feature that basically spreads pods evenly in a particular, or in an arbitrary, topology domain. That topology domain could be a zone, could be a region, or any arbitrary label. So that was one part of it, and then, after adding this, we would limit anti-affinity to the node topology only. That way we don't need to check the whole cluster every time; we only need to check the node that we are considering for scheduling the pod.
Of course, this definitely reduces the overhead of affinity and anti-affinity enormously, but the concern is that affinity and anti-affinity is a beta API and many users already use it, so deprecating it could be a big project, honestly. Even if we follow all the standard deprecation processes, which is something like six months before we can deprecate it, many users could become very unhappy.
[questions from B and C, inaudible]
A: Well, it's not... I mean, nothing is set, nothing is finalized at the moment. We may not go with this plan at all; it's basically just an idea at the moment, and no decision has been made yet. So no, we don't know yet, but if we decide to deprecate it, this is going to be the process.
D: Bobby, hi. Yeah, I have a small question related to this issue, actually. I think one of the most effective approaches to improving the performance of affinity and anti-affinity is to precompute and cache calculations before we go into scheduling. I actually know that you have done a lot of work on that in the past, so I'm wondering if that's all we can do, or whether there is still some precomputation we could do, like setting up some index or something like that. Do you have any more ideas on that?
A: I think, you know, there is one thing that we definitely can do today, in one area that is definitely not ideal. We go over the nodes in the cluster, and for each node we store the pods that have affinity rules, or anti-affinity rules, sorry. Basically, anti-affinity is the one which is symmetric. Whenever we schedule a pod, even if the pod does not have any anti-affinity rules, we must make sure that the anti-affinity of any existing pods in the cluster is not violated by this new pod. So even if the new pod is not using anti-affinity, we still need to check. So today we have a map for each node that tells whether the node has any pods that have anti-affinity.
That map helps, but it's not enough. Basically, if you have only one pod in the whole cluster that has anti-affinity, we still go over all the nodes and check whether each node has any pod with anti-affinity or not. We can definitely optimize that: we could have, for example, another list that has all the nodes with anti-affinity pods, and that list is going to be a much, much smaller portion of the nodes of the cluster.
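A sketch of that optimization, with a hypothetical structure rather than the scheduler's actual cache: keep an index of only the nodes that host anti-affinity pods, so the symmetry check visits that small set instead of every node.

```go
package main

import "fmt"

// antiAffinityIndex tracks, per node, how many pods with anti-affinity
// rules it hosts. Only nodes with a nonzero count need checking.
type antiAffinityIndex struct {
	nodes map[string]int
}

func (idx *antiAffinityIndex) addPod(node string) { idx.nodes[node]++ }

func (idx *antiAffinityIndex) removePod(node string) {
	if idx.nodes[node]--; idx.nodes[node] <= 0 {
		delete(idx.nodes, node)
	}
}

// nodesToCheck returns the only nodes the symmetry check must visit.
func (idx *antiAffinityIndex) nodesToCheck() []string {
	out := make([]string, 0, len(idx.nodes))
	for n := range idx.nodes {
		out = append(out, n)
	}
	return out
}

func main() {
	idx := &antiAffinityIndex{nodes: map[string]int{}}
	idx.addPod("node-7") // one anti-affinity pod in a 5000-node cluster
	fmt.Println(idx.nodesToCheck()) // [node-7]: scan one node, not 5000
}
```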
This is going to help a lot in larger clusters with thousands of nodes and very few anti-affinity or affinity pods. So that area can be improved for sure. Whether it solves all the problems is a big question; it probably doesn't, and we would still need to pay some overhead. Also, if the cluster has a lot of anti-affinity pods, you may still need to go and do a lot of processing for each pod which is being scheduled in the cluster.
D: [follow-up, inaudible]

B: [comment, inaudible]

A: We have already implemented, you know, that optimization; that is basically the precomputation for the checks, and we have already done that. Anti-affinity and affinity were both the slowest predicates; they were like a thousand times slower than our other predicates. We made a lot of improvements to those, with something like a 120x performance improvement, so they are no longer a thousand times slower. They are maybe on the order of eight or ten times slower, something like that.
They are no longer super slow, but they are still significantly slower than other predicates. But that's not the only problem. The bigger problem with inter-pod anti-affinity is the fact that it's symmetric: its symmetry makes the scheduler check all the pods in the cluster, and that's one of the other areas that is causing the whole scheduler to be slower. It affects the scheduler for almost everything.
B: [comment, inaudible]

A: [response, inaudible]

D: Actually, it does. Many people claim that they are using affinity and anti-affinity, and there are people on the issue talking about that. I think we should find a way to include those users; it would be great to talk with them about how they use affinity and anti-affinity. I think that's going to be much better. Yeah.