From YouTube: Kubernetes SIG Scheduling - 2019-06-13
A: Hello and welcome to the SIG Scheduling meeting. As you know, this meeting is recorded and will be uploaded to the public internet. Thank you to all of you who joined. We didn't have any meetings in the past three weeks; I apologize, first of all, that I didn't communicate clearly that last week we didn't have a meeting. I was under the impression that I had announced it, but apparently I hadn't. Anyway, today I would like to talk a little bit about the projects that we had for 1.15 and start planning for 1.16.
A: Of course, we will also open it up to questions and comments that you may have. Among the important projects that we managed to get into 1.15 was the scheduling framework. The alpha version of the scheduling framework is going out in 1.15. In the alpha version we have been able to build a number of extension points; not all of the extension points that are proposed in the design proposal, but the extension points that we targeted for alpha are already done.
A: In fact, we have been able to build more than just those extension points that we initially targeted for alpha. For beta, which we are targeting for 1.16, we are building all of the extension points, and we're hoping that we will be able to convert some of the existing predicate and priority functions into plugins for the scheduling framework. Other than the scheduling framework, there were other projects that we managed to land in 1.15. One was support for non-preempting priority.
A: Non-preempting priority is a priority for pods that allows them to be high priority while not preempting other pods in the cluster. This is oftentimes needed for batch workloads, where you want batch workloads to be scheduled sooner than other pending workloads in the scheduling queue, but at the same time you don't want these higher-priority batch workloads to preempt existing ones. This is often needed because in clusters that run mostly batch workloads, if a workload is preempted, then all the work that has been done by that workload is going to go away.
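As a rough sketch, a non-preempting priority class in the 1.15 alpha API (behind the `NonPreemptingPriority` feature gate) could look like the following; the class name, value, and description are illustrative:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: batch-high-nonpreempting   # illustrative name
value: 1000000                     # high priority: jumps the scheduling queue
preemptionPolicy: Never            # but never evicts already-running pods
globalDefault: false
description: "High priority for batch pods that should not preempt running workloads."
```

Pods referencing this class via `priorityClassName` are ordered ahead of lower-priority pending pods but do not trigger preemption when they cannot be scheduled.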
A: We also added new scheduling metrics, particularly for counting the number of pending pods in the cluster. This is used for monitoring clusters and basically checking scheduler health. We had planned to land some features that we were not able to; one in particular was spreading pods among physical hosts.
A
There
is
some
discussion
about
how
to
know
how
to
define
these
physical
hosts
some
work
on
the
labels
we
should
use
and
there
that
discussion
is
still
ongoing,
you're,
hoping
that
we
can
deliver
this
feature
in
1/16.
We
would
like
to
support
less-than
and
greater-than
operators
and
entire
in,
inter
pod
affinity
that
one
didn't
pass.
The
API
reviews
we're
hoping
that
we
can
do
that
in
116,
even
par
spreading
was
another
feature
that
we
were
targeting
for
115,
and
also
that
one
also
got
stuck
towards
the
end
of
the
release
cycle
on
API
reviews.
A: We had put a lot of effort into creating those PRs and pretty much implemented everything, but unfortunately we were not able to land it in 1.15 due to review bandwidth. We are hoping that we can do this in 1.16, so I've created a new sheet in our spreadsheet; I can send you the link in chat again. The spreadsheet is linked from our meeting notes, which in turn are linked from our invite. I've just shared the link to the spreadsheet in the chat window.
A: Well, there are already some examples in our code base for some of these extension points; a couple of plugin examples already exist in our code base. Of course, after we convert some of these priority and predicate functions, we will have more examples, and we are also going to write more detailed documentation about how to create them.
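To illustrate the plugin model the framework is built around, here is a minimal, self-contained sketch in Go. It is not the real scheduler framework API; the types and the `Filter` extension point below are simplified stand-ins for the concepts discussed above (a predicate converted into a plugin that can reject nodes for a pod).

```go
package main

import "fmt"

// Status reports why a plugin rejected a node; a nil Status means success.
type Status struct{ Reason string }

func (s *Status) IsSuccess() bool { return s == nil }

// Pod and Node are minimal stand-ins for the Kubernetes objects.
type Pod struct {
	Name         string
	RequestedCPU int
}

type Node struct {
	Name    string
	FreeCPU int
}

// FilterPlugin is one extension point: it can reject a node for a pod,
// analogous to today's predicate functions.
type FilterPlugin interface {
	Name() string
	Filter(pod *Pod, node *Node) *Status
}

// cpuFit is a toy "predicate converted to a plugin": the node must have
// enough free CPU for the pod's request.
type cpuFit struct{}

func (cpuFit) Name() string { return "CPUFit" }

func (cpuFit) Filter(pod *Pod, node *Node) *Status {
	if node.FreeCPU < pod.RequestedCPU {
		return &Status{Reason: "insufficient CPU"}
	}
	return nil
}

// feasibleNodes runs every registered filter plugin against every node,
// mirroring how a framework would invoke the Filter extension point.
func feasibleNodes(pod *Pod, nodes []*Node, plugins []FilterPlugin) []*Node {
	var out []*Node
	for _, n := range nodes {
		ok := true
		for _, pl := range plugins {
			if st := pl.Filter(pod, n); !st.IsSuccess() {
				ok = false
				break
			}
		}
		if ok {
			out = append(out, n)
		}
	}
	return out
}

func main() {
	pod := &Pod{Name: "web", RequestedCPU: 2}
	nodes := []*Node{{Name: "a", FreeCPU: 1}, {Name: "b", FreeCPU: 4}}
	for _, n := range feasibleNodes(pod, nodes, []FilterPlugin{cpuFit{}}) {
		fmt.Println(n.Name) // only node "b" passes the CPUFit filter
	}
}
```

The real framework exposes more extension points than just filtering (for example scoring and binding), but they follow the same pattern: small interfaces that plugins implement and the scheduler invokes at fixed points in the cycle.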
B: Yeah, so a couple of things happened that missed the 1.15 release: the resource quota scope selectors that I have been working on, and the other thing that came in at the last minute was related to deduplication of cloud-provider-specific code from kube-scheduler. The storage folks have come up with CSI and added informers to the scheduler code base to count the number of volumes based on the node type in a specific provider using CSI. I have reviewed it, but I think, Bobby,
B: I believe you were going to review it at the same time? I wasn't sure about the performance benchmarks that they provided in the PR, and it didn't have enough soak time to allow it to be merged in 1.15. So I kind of stopped the PR from getting merged, but I think that needs your review in the 1.16 cycle.
C: That basically implements the even pod spreading, so I wanted some feedback on the API, what it should look like. It basically uses what the proposal has for the API, and I wanted that to be used so that we can make use of the spreading now; as soon as that becomes readily available, I'm happy to change it. So I wanted to make sure somebody reviews it and provides some comments, so that we are on the right path.
C: I mean, basically I think the effort is awesome and the proposal looks super good, and I think it's heading in the right direction. The only thing that I was thinking is that right now the API wants you to provide a label selector, right? So, for example, if I have a workload controller like a Deployment or a StatefulSet, I still need to specify all these labels, and I can't really just say "all pods of a specific Deployment or StatefulSet."
C: So I think that makes sense for those cases where you have multiple Deployments or StatefulSets and you want to spread across them; those, I believe, and that's my understanding, would be power-user use cases. But is there a way we can modify this API so that it works by default, without specifying this, for just one type of Deployment or StatefulSet, and then for those power users we can just let them specify the label selectors?
E: And by the way, the reason we introduced the label selector is because in the scheduler, almost all the features are imposed on the pod, right? The benefit is that we don't care about what kind of controller the pod belongs to. Maybe it belongs to a core API workload like a Deployment or StatefulSet, but it could also belong to a CRD controller that the user created on their own. That's why we care only about the pod spec rather than the workload API when scheduling; that is the basic design.
C: Yeah, I think that completely makes sense. I think a lot of applications deploy multiple of those, but I thought for 90% of the use cases, if they don't have to specify it, I think we'd all agree that would be awesome. I have two more questions. One is: when we are using this even pod spreading, do we disable pod anti-affinity, because the two of them cannot work with each other, right? No?
A: Yeah, yeah, absolutely. First, when we introduce this, since it's alpha, of course it's going to be disabled by default; users must enable it manually in their clusters, and hopefully users who do this understand all the details here. Once we go to beta, or probably at the same time we go to alpha, we should deprecate anti-affinity for basically any topology other than nodes, and once we go to beta, we could probably disable it entirely.
A: We should check what the requirements are for removing features, but I guess for a beta feature, which anti-affinity is one of, we need to wait two cycles after the deprecation. So we have to weigh our options and see when we can remove it. At the very least, we have to document this well. At the same time, I feel like they don't necessarily contradict one another.
A: They may conflict with one another, though. Depending on what kind of rules a user specifies on their pods, sometimes the rules could basically support each other as well; in other words, the outcome of anti-affinity could be the same as the outcome of even pod spreading. But that's not necessarily always the case, so it could cause confusion. We should definitely document this well and tell users not to specify both of them at the same time.
C: One other thing that I was trying to bring up on Slack is: when even pod spreading has multiple topology domains specified, how do we handle that? Maybe my understanding was wrong, but it seems that when there is a bigger topology domain, and there are buckets within that topology domain, then we are still looking at the second topology globally. So let me give you a concrete example: zone and rack, right?
C
So
when
we
are
evenly
spreading
across
zones,
we
basically
look
and
we
try
to
evenly
distribute
across
zones
and
then
there
might
be
rags
within
a
zone
right.
So
it
seems
from
my
understanding
of
the
code
and
what
he
was
describing
on
slag.
We
would
look
at
for
that
shredding.
We
would
look
at
all
the
racks
across
all
zones
which,
according
to
me
doesn't
make
sense.
E: I think I have explained this on Slack, and as I said there, we respect each topology spread constraint independently. So, for example, you gave the example of rack and zone, but actually we don't have enough information to tell whether one topology belongs to another, right? They are just two separate semantic concepts to the scheduler. We can only calculate them independently and then intersect their results to give you the final result.
C: But think about the use case, right? If somebody is trying to spread pods, and he definitely wants to eliminate one failure domain, which is the zone, then he wants to spread across zones, and then within a zone he wants to spread across racks. So if one rack within a zone goes down, then his other pods are still running, right? He doesn't care about racks in other zones.
A: But what provides any guarantees for us that nodes within a zone have rack labels? In Kubernetes we don't have any guarantees about these kinds of topologies, right? We don't know. Actually, a bunch of racks might have the same zone label, or it could be the other way around: nodes within a rack could have different zone labels. I mean, people are really free to put any kinds of labels on their pods and on their nodes.
C: So what I'm trying to say is that, I understand, whoever is managing the Kubernetes cluster knows what the zones and racks are for their cluster, so they would basically label the nodes appropriately, right? And what I'm trying to say is that within a zone, if he has three racks, then he wants the pods to spread across those three racks; but with the current algorithm, what will happen is that it will actually spread across racks in different zones, which is not what we might want, right?
A: One problem here is, let's think about two scenarios, even if what you described is true. Let's say you have two different zones, and these two zones have a very unbalanced number of racks; so, for example, zone 1 has ten racks and zone 2 has two racks, right?
C: I mean, I would put five and five in each, right? And then within those five, if there are ten racks, I will try to distribute as evenly as possible within that. What I'm concerned about is failure domains; that is what I want as the behavior, right? If we think that the current API doesn't solve that, then we should think about what kind of API would solve it, I think.
C: Okay, let me start a doc with some more concrete examples of how I think about it. Basically, we have those use cases in a lot of stateful applications, like Ceph and similar operators, where they want to spread the replicas as uniformly as possible, and their topologies are multiple. My use case has at least two topology domains, zones and racks, and in my use case the number of racks would basically be uniform.
A: But if that's the case, then the effect that you have in mind will happen. Even in the first example that I gave, you can specify two rules: you can say spread evenly among zones with a very low max skew, for example a max skew of one, and then you can also specify spreading among racks with a large max skew, for example a max skew of five; and then, again, I guess the effect that you have in mind will happen, right?
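As a sketch, those two rules might look like the following in the proposed even pod spreading API (field names as in the proposal, targeting alpha; the pod name, app label, and rack label key here are illustrative, since Kubernetes does not define a standard rack label):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # illustrative
  labels:
    app: web               # illustrative
spec:
  topologySpreadConstraints:
  - maxSkew: 1                            # spread tightly across zones
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: web
  - maxSkew: 5                            # spread loosely across racks,
    topologyKey: example.com/rack         # evaluated independently of zones
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: web
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```

Each constraint is evaluated independently and the results are intersected, which is exactly the behavior being debated here: the rack constraint counts racks across all zones, not per zone.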
C: No, oh, sorry for the last comment. I mean, it doesn't matter whether that information is there or not, right? If somebody specifies spreading constraints that have two topology domains, one with some max skew and the other with another one... yes, yeah, I think you're saying that there is no way to know whether the second topology domain is within the first one. Exactly.
E: I just want to mention that at the KubeCon in Shanghai next Monday, Costin, I, and other folks will hold a scheduling face-to-face meeting. In the meeting we will discuss the scheduler, and I will also give an even pod spreading demo, maybe ten minutes, so you're welcome to join. Yeah.