From YouTube: Kubernetes SIG Scheduling - 2018-03-01
B
Alright, let's just start the meeting. Hi everyone, I'm Bobby. We are going to start this meeting by going over quite a few agenda items. The first thing that I would like to talk about is a quick update on the 1.10 items that we wanted to release. Unfortunately, most of them, I guess, didn't make it into 1.10, for various reasons.
B
The first item that I would like to talk about is priority and preemption. We decided not to move it to beta in 1.10, particularly because we didn't find enough customers to try it. The second item was the fact that we saw some scalability tests failing, and we would like to address the scalability issues in priority and preemption. The third thing is the fact that we don't have proper support yet in the DaemonSet controller to create pods and let the default scheduler schedule them; basically, we needed this feature in order to enable preemption for DaemonSet pods.
B
The second item is equivalence cache to beta. The equivalence cache is the cache that lets us cache predicate results for pods which are equivalent. We wanted to move it to beta, but as a part of moving to beta we wanted to add better hashing for equivalent pods. Jonathan (GitHub ID misterikkit) was working on this, but after changing the algorithm for calculating the hash for these pods, we realized that we saw some scheduler throughput drop. We wanted to address this; Harry looked at it, but I guess he was also busy, so he didn't.
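A minimal Go sketch of the equivalence-hash idea under discussion; the pod fields and types here are simplified assumptions for illustration, not the actual kube-scheduler implementation:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// PodSpecKey holds scheduling-relevant fields that make two pods
// "equivalent" for predicate caching. The field choice is illustrative;
// the real scheduler hashes a richer set of spec fields.
type PodSpecKey struct {
	OwnerUID     string // pods from the same controller are candidates
	NodeSelector string // canonicalized node selector
	Resources    string // canonicalized resource requests/limits
}

// equivalenceHash derives a single cache key from the equivalence fields,
// so predicate results computed for one pod can be reused for another.
func equivalenceHash(k PodSpecKey) uint64 {
	h := fnv.New64a()
	fmt.Fprintf(h, "%s/%s/%s", k.OwnerUID, k.NodeSelector, k.Resources)
	return h.Sum64()
}

func main() {
	a := PodSpecKey{"rs-1234", "disk=ssd", "cpu=100m"}
	b := PodSpecKey{"rs-1234", "disk=ssd", "cpu=100m"}
	fmt.Println(equivalenceHash(a) == equivalenceHash(b)) // true: shared cache entry
}
```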
B
After much discussion, we decided, at least for now, to put this item on hold, because we already see some issues in the scalability of affinity and anti-affinity, and we see lots of slowdown if you enable anti-affinity in a cluster and have some pods in the cluster that have anti-affinity. We would like to address the performance issues before adding any more features, and in fact we are not so sure if this particular feature, basically having a maximum of N pods per topology for pod affinity and anti-affinity, has much use.
C
It requires a fundamental rethink of how you would even do it. That's kind of where some of the aspects of Firmament come in, but I don't think that's anywhere near ready for, you know, primetime as the default, which will lead into other topics, I think, because we need to talk about that today. But I don't see how you get around it with the way it's currently being done. You could refactor it in some ways, but I would never do any topology other than hosts right now. Yes.
B
That's right. So, actually, in order to make affinity and anti-affinity more practical, I think we need to limit parts of its design in a way. We probably need to limit or restrict the topology key to certain values. That is probably going to be the most useful and, I would say, easiest thing to fix. When it goes to a larger scope, performance degrades to such an extent that we may never be able to run it on a very large cluster.
C
One of the things I was thinking about for a long time: right now the cache is a linear sort of caching mechanism, and it doesn't necessarily store things in a data structure which allows for O(log n) access or better. If you were to, you know, have a red-black tree keyed on labels, you might be able to do faster lookups. It would be much more effective for doing matching, right? But that means having, like, an index over your cache, right?
C
You're asking what I'm suggesting: I'm suggesting not organizing the pods themselves, but organizing an index. So you would have an index where the key would be the label, right, and then it would eventually point to the pod itself. So basically, very similar to how databases create indices: you can create an index on multiple different tables, and it would basically be a set of pointers, a way of setting up fast access to get to the individual data elements behind it. So that would just be an index optimized for accessing the labels.
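A minimal Go sketch of this inverted-index idea, with hypothetical types; as the next speaker notes, real label selectors are more complex than exact key=value matches, so this only covers the simplest case:

```go
package main

import "fmt"

// labelIndex maps a "key=value" label pair to the names of pods carrying
// that label -- an inverted index over the cache, as suggested above.
type labelIndex map[string]map[string]struct{}

func (idx labelIndex) add(pod string, labels map[string]string) {
	for k, v := range labels {
		key := k + "=" + v
		if idx[key] == nil {
			idx[key] = map[string]struct{}{}
		}
		idx[key][pod] = struct{}{}
	}
}

// podsMatching returns pods with an exact key=value label via a map
// lookup instead of a linear scan over every cached pod. Set-based and
// negative selector operators would need additional index structures.
func (idx labelIndex) podsMatching(key, value string) []string {
	var out []string
	for pod := range idx[key+"="+value] {
		out = append(out, pod)
	}
	return out
}

func main() {
	idx := labelIndex{}
	idx.add("web-1", map[string]string{"app": "web", "tier": "frontend"})
	idx.add("db-1", map[string]string{"app": "db"})
	fmt.Println(idx.podsMatching("app", "web")) // [web-1]
}
```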
B
This is an idea worth exploring. There are still some unknowns in this topic, because, you know, label selectors can be complex. They are not necessarily matching the exact same label; they can match other things as well. They can be "does not match this label", or "the label is in this array or list", and stuff like that, which may complicate things.
B
Right, absolutely, I totally understand. In fact, we actually build some metadata, as we call it, predicate metadata, that has some of this information for pods that match certain terms, but it is not necessarily enough. We definitely should explore some of these ideas more. Hopefully we can improve the performance of affinity and anti-affinity to much better levels.
B
So, yeah. There is another item: designing a policy library for the scheduler, so that others can reuse it. I guess Klaus has put up this document. I guess his idea was to convert portions of the scheduler into a library, so that other components can use these libraries and functions more easily in other pieces of the code. I believe we already have some of that. I don't know if Klaus himself is here to explain his ideas a little better, but I don't think he is.
B
Anyway, he has also put this item in our 1.11 list, so he will probably explain it a little bit more. He had sent an email as well, but I would like to hear him telling us a little bit more about what he has in mind. Then there was taints and tolerations to GA, which is done already. Actually, it was not much work; we only had to update the documents.
B
We actually had some discussion about some other usages of taints and tolerations which are not going to go to GA anytime soon, I guess. For example, some of them are still in alpha, not even in beta, so those are going to take some time before we can move them to GA. One of them, which is probably very useful for ourselves and many others, is tainting nodes by condition.
B
So basically, when a node is not ready, you can taint the node appropriately, with NoSchedule for example, or other taints that we can add, and when the node becomes ready, we remove the taint, and some pods can have the corresponding toleration. This is particularly useful for us, for example, when we want to schedule DaemonSet pods, because those pods are usually resilient against the not-ready or unschedulable bits of nodes that we have today.
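A sketch of the taint-by-condition pattern using the k8s.io/api/core/v1 types; the not-ready taint key follows the TaintNodesByCondition feature, and the check in tolerates is a simplification of what the scheduler actually does:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// notReadyTaint is the taint a controller would place on a node whose
// Ready condition is false; pods without a matching toleration are not
// scheduled there.
var notReadyTaint = v1.Taint{
	Key:    "node.kubernetes.io/not-ready",
	Effect: v1.TaintEffectNoSchedule,
}

// daemonTolerations is what a DaemonSet-style pod would carry so the
// scheduler still places it on not-ready nodes.
var daemonTolerations = []v1.Toleration{{
	Key:      "node.kubernetes.io/not-ready",
	Operator: v1.TolerationOpExists,
	Effect:   v1.TaintEffectNoSchedule,
}}

// tolerates reports whether any toleration matches the taint -- the core
// of the check the scheduler performs, simplified here.
func tolerates(ts []v1.Toleration, taint v1.Taint) bool {
	for _, t := range ts {
		if t.ToleratesTaint(&taint) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(tolerates(daemonTolerations, notReadyTaint)) // true
}
```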
B
And let's look at the other items. Of course, a bunch of the items that slipped in 1.10 we still would like to do in 1.11: priority and preemption to beta, we would like to do in 1.11; equivalence cache to beta, again, we would like to do in 1.11; Klaus wants to do the library in 1.11; and we would like to add gang scheduling support in 1.11. Now, we should discuss that last one a little bit more. I don't have a very clear understanding, or I haven't seen, I guess, a detailed design about how all this will be done, or what kind of API changes we need to make. But anyway, hopefully we get more information about this before going into 1.11. And, as I said, we would like to graduate taint nodes by condition, and also remove the scheduling logic from the DaemonSet controller, in 1.11.
F
We should discuss how we would like to go about it, because I think Ravi was telling me he had some offline discussion with you, and I think he said that you are more interested in having that in the ResourceQuota admission plugin. I mean, I am not saying that might be a bad idea or something, but I think, from my point of view, ResourceQuota should be scoped to worry about just resource quota.
B
Yes, so in that case, I guess you could say, for example, the default is deny for that namespace: if there is no quota for that namespace, then users cannot create any pods, and then you only create quota at particular priority classes, for example those two priority classes that you want to whitelist. You just create infinite quota for them in that project, and then you can achieve what you wanted to achieve; it whitelists those two priorities, I think.
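A sketch of this whitelist-via-quota pattern, written against the API shape the design eventually took in later Kubernetes releases (a ResourceQuota scopeSelector with a PriorityClass scope); at the time of this meeting it was still a proposal, so treat the details as illustrative:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// quotaForPriority builds a ResourceQuota that only applies to pods of the
// given priority class. Creating such quotas only for whitelisted classes,
// while the namespace otherwise defaults to deny, effectively whitelists
// those classes, as discussed above.
func quotaForPriority(ns, class, pods string) *v1.ResourceQuota {
	return &v1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "quota-" + class, Namespace: ns},
		Spec: v1.ResourceQuotaSpec{
			Hard: v1.ResourceList{
				v1.ResourcePods: resource.MustParse(pods),
			},
			ScopeSelector: &v1.ScopeSelector{
				MatchExpressions: []v1.ScopedResourceSelectorRequirement{{
					ScopeName: v1.ResourceQuotaScopePriorityClass,
					Operator:  v1.ScopeSelectorOpIn,
					Values:    []string{class},
				}},
			},
		},
	}
}

func main() {
	q := quotaForPriority("team-a", "high-priority", "1000")
	hard := q.Spec.Hard[v1.ResourcePods]
	fmt.Println(q.Name, hard.String())
}
```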
C
You would have to define the spec for how you want it to handle it, but that basically says Kubernetes proper is punting on the idea and saying: you define policy, because this is a policy question. So, according to Brian's layer diagram, which I'll see if I can grab over here... oh no, I'm going to take everything else down.
C
If you define policy, you're way up the stack; you're not in Kubernetes proper anymore. So, let's see here... you're in governance. Right, you're in the governance layer of the stack. In the governance layer of the stack, you know, we provide the tooling to allow you to do the governance, but we don't enforce policy. That's up to you to decide, right, because there are many ways to slice it. If you conflate resource quota to do this, resource quota becomes confusing. It's already confusing, right?
C
It totally is, but if you conflate it, it gets even more confusing. Now, if you were to say, like, we're also going to whitelist with that, then it becomes even more confusing. You could do it; there are many ways to hijack the system. What I'm saying is: if you want to apply policy, why don't we strictly adhere to the layer diagram and say, policy, we're going to punt that idea off to whoever wants to implement that policy.
B
From the very beginning when we designed this, and I'm not saying that this is a good reason necessarily, but from the very beginning when we designed the priority class, we had this idea that, in order to limit someone from creating pods, for example, or using a certain amount of resources at a particular priority, we were going to introduce resource quota, and we were going to add priority classes to resource quota. So, for example, you can say that at this particular priority user X does not have any quota, or at this particular other priority...
F
Actually, for both pods and resources. So if ResourceQuota is controlling that, I think that makes sense. I mean, for example, at least in the design document I have seen, there are some examples, like how many pods can be created in a particular priority class. A pod is also a resource, so whether a pod can be created or not can be expressed the same way; it can be either one or zero.
F
No, limiting pods in a particular priority class should definitely happen the way it's being done in ResourceQuota. The only open question is what priority class a pod is allowed to be assigned, and that could happen elsewhere: we already have the priority admission plugin, so it could happen there. I mean, I'm not saying we should create another new admission plugin, but we already have the priority admission plugin, right? I think that would be a better place for that.
F
Actually, I see it a bit differently. You might think this might not be a good idea, but historically, the way some of these new features were started, they were started annotation-based. So maybe this could start that way as well. For example, we have been doing a similar thing in the PodNodeSelector admission plugin, and for the PodTolerationRestriction plugin we are doing the same thing.
F
So that's why I think what I'm saying, starting with annotations to have a whitelist, wouldn't be a bad idea, because that's what we have been doing in the PodNodeSelector admission plugin and PodTolerationRestriction, and we have been using them in our production clusters. As for moving them from annotations to namespace fields, I'm not sure whether we would want to go that path or not.
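A minimal sketch of the annotation-based whitelist being suggested; the annotation key and helper are hypothetical, mirroring the PodNodeSelector / PodTolerationRestriction pattern rather than any real plugin:

```go
package main

import (
	"fmt"
	"strings"
)

// allowedClassesAnnotation is a hypothetical namespace annotation listing
// the priority classes whitelisted in that namespace, in the spirit of the
// PodNodeSelector / PodTolerationRestriction annotations discussed above.
const allowedClassesAnnotation = "scheduling.example.com/allowed-priority-classes"

// admitPriorityClass is the check an admission plugin might perform: if
// the namespace carries the annotation, the pod's priority class must be
// in the whitelist; otherwise the pod is admitted unchanged.
func admitPriorityClass(nsAnnotations map[string]string, podClass string) error {
	raw, ok := nsAnnotations[allowedClassesAnnotation]
	if !ok {
		return nil // no whitelist configured for this namespace
	}
	for _, c := range strings.Split(raw, ",") {
		if strings.TrimSpace(c) == podClass {
			return nil
		}
	}
	return fmt.Errorf("priority class %q not allowed in this namespace", podClass)
}

func main() {
	ns := map[string]string{allowedClassesAnnotation: "high,critical"}
	fmt.Println(admitPriorityClass(ns, "high"))        // <nil>
	fmt.Println(admitPriorityClass(ns, "best-effort")) // error
}
```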
B
We have done that in our Kubernetes code sometimes. But recently, you know, I was using annotations, for example, for preemption, and Brian told me that annotations are not meant to communicate anything among components, and they are not meant to configure any cluster or set any policies. So, I guess, for these reasons we should avoid creating more exceptions that allow using annotations for configuring clusters.
F
Yeah, I mean, things like the PodNodeSelector plugin and these, we have been using in multi-tenant environments, so definitely they are useful. So, I mean, the reasoning is valid, but I don't know: with an annotation, it's really easier to start to have that feature than to go and add a new field in the namespace object, because annotations live in the metadata object anyway.
F
Maybe, okay, I mean, if he starts with the design doc, that's better; in fact, I even prefer that. So maybe, since I think Ravi is planning to work on that, he can definitely start with the priority admission plugin, have this feature implemented, add new fields to the namespace object, and then see what feedback we get.
D
Let me know once you are able to see the screen. Okay, so this is the basic architecture that we are planning to use with the Kubernetes scheduler. Before that, I just want to give some background: in a SIG meeting we discussed usage-based scheduling, and we decided to use the scheduler extender to implement that. So this is the architecture of how we want to use the scheduler extender for usage-based scheduling: the Kubernetes scheduler talks to the extender, and the extender queries the information store.
D
The information store could hold historical information or the current usage information of nodes. In this case, we are going to use only the nodes. So, as of now, we are using metrics server to get the information related to nodes, and the metrics server stores the data in a one-minute window for all the nodes.
D
And we are going to use CPU as the metric based on which we make the decisions. So here we have the extender, which is an HTTP server already running, and we have passed the kubeconfig to it so that it talks to the corresponding cluster; it also uses the metrics client which is available in the metrics server, the same thing that is being used by the horizontal pod autoscaler. So, as of now, we have two nodes in the cluster.
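A minimal sketch of such a usage-based extender, assuming hand-rolled request/response types that mirror the scheduler extender's prioritize call, and a stubbed metrics lookup in place of the real metrics-server client:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// ExtenderArgs and HostPriority mirror the shape of the scheduler
// extender's prioritize call (simplified, hand-rolled for this sketch).
type ExtenderArgs struct {
	NodeNames []string `json:"nodenames"`
}
type HostPriority struct {
	Host  string `json:"host"`
	Score int    `json:"score"`
}

// usageMillicores stands in for the metrics-server client: it would return
// the current CPU usage of each node (static, hypothetical data here).
func usageMillicores(node string) int64 {
	return map[string]int64{"node-a": 830, "node-b": 41}[node]
}

// prioritize scores nodes so that lower current CPU usage yields a higher
// score, which is the usage-based policy described in the meeting.
func prioritize(w http.ResponseWriter, r *http.Request) {
	var args ExtenderArgs
	if err := json.NewDecoder(r.Body).Decode(&args); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	const capacity = 2000 // assumed per-node CPU capacity in millicores
	var out []HostPriority
	for _, n := range args.NodeNames {
		score := int((capacity - usageMillicores(n)) * 10 / capacity)
		out = append(out, HostPriority{Host: n, Score: score})
	}
	json.NewEncoder(w).Encode(out)
}

func main() {
	http.HandleFunc("/prioritize", prioritize)
	log.Fatal(http.ListenAndServe(":8888", nil))
}
```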
D
The pod template is just a pod which runs infinitely and doesn't exit. We want to make sure that once the pod lands on this node, it actually increases the CPU utilization; on that node, it is going to push the usage past 41 millicores. Now, let's look at the usage of the nodes.
D
So we see that this other node is at around 830 millicores now; previously its actual usage was around 24. The node that we have chosen, the one ending in 9-3-5-9-8, its usage is around 830 millicores now. Now, let's go back and look at the CPU usage, because we look at only the current usage.
D
So we would rather choose the other node, which previously was at around 41 millicores, and we are going to choose that node, because this node sits at almost 830 millicores of usage. If we didn't use the extender, the scheduler would have actually landed those pods onto the same node, the one at 830 millicores; because we are using this usage-based mechanism, they are landing on the other node, where the usage is actually less.
B
Interesting, although one problem that almost always exists is the fact that once someone creates a collection, like a Deployment or ReplicaSet or whatever, there is going to be a burst of pods created at the same time, and I guess this usage-based scheduling won't be super useful for scheduling multiple instances of the same collection. But this is nevertheless useful, you know, a little bit: over longer scheduling periods, this is going to be useful for pods created with enough time distance from one another.
D
The other thing is, as of now, we don't have an official tool which stores the information for a long period of time, based on which we can make some intelligent decisions, as Tim was suggesting last time. So I was talking to the people who are involved in instrumentation; they told me that if there is a strong requirement for it, they would get started working on it.
F
Alright, I want to bring one thing up: what is the next step for this usage-based scheduling? Do you have any feedback or anything? One thing I want to raise is that, right now, whatever code Ravi has is in his own repo; in the future, when it is in a better shape...
B
I think it will be useful. Actually, I think this is one of the features that we would like to add to the scheduler; at least I would like to see it in the scheduler, because it's going to be useful. Today we don't have any idea of the usage, and we have a bunch of pods that don't specify limits, so they could be using a lot more than what they request, and, you know, when we assign those to nodes, nodes sometimes have to delete other pods.
C
This kind of lends itself into a segue into some of the other issues, the broader topics of conversation that we need to discuss. I didn't put it on the agenda, but I'm going to hijack this as a reason to do it. One of them was, in order for us to get repos... like, one of the things we're talking about in the steering committee.
A
Yeah, so essentially, you saw that email which I sent to you last night. So, based on the discussion with Marty, this is what we decided: we are going to keep the Poseidon repo as part of the incubation, so they are going to transfer it over. Did you get a notification from Marty? Yeah, I got it.
A
Because, I guess, this is how it works: once he does a transfer, you get a notification and, yeah, you have to accept it, I guess. So that was one thing, and then the other thing was that the Firmament repo is going to stay where it is right now; it's in camsas, actually. The reason being that the Firmament scheduler can run on top of Kubernetes, Mesos, and other cluster managers as well.
A
So there would be a dependency, from our perspective, from the Poseidon perspective. Currently Firmament does not really have a release process, so we are going to build that release process as well. For example, right now we are making a lot of changes to Firmament: we are adding all these things, currently we're working on the node-level affinity, and then we're going to start working on the pod-level affinity. As we keep making these changes to Firmament, we're going to have different versions of Firmament.
A
So we need to kind of coordinate that as part of the release cycle for our incubation project as well. That's the plan: essentially, we'll have dependency releases in Firmament, and then we'll have releases in our incubation project, and the packaging, the deployment packaging using charts or whatever, will control that, basically.
A
No, no, no, that's what we wanted to avoid in the first place. That's the reason we are letting it stay there in camsas: there are going to be other contributors outside of Kubernetes as well, and they may want to add some features which possibly we can leverage. Oh, I see, so by having one centralized place... yeah, yeah, makes sense.
A
I just wanted to add a comment. I think you mentioned that Klaus mentioned that you folks are going to start, or at least, I guess, maybe he's going to start, looking at the policy-based... the scheduling policy thing. So there's an initiative going on called OPA; I think you might be familiar with it. They're trying to see if that framework can be used for all kinds of policy needs, you know, like the networking policies, so I'm thinking it can possibly be leveraged for scheduling policies as well. Yeah.
B
Okay, yeah, he sent me the information. Alright, and I would like to add one thing: you know, given that we had a bunch of things on our plate, they were a little ambitious, I would say; we didn't make many of them. Basically, we couldn't prepare many of them for the release.