From YouTube: Kubernetes SIG Scheduling Meeting - 2019-03-28
A: Let's start the meeting. As you know, this meeting is recorded and will be uploaded to the public internet, so chances are whatever you say remains somewhere forever. Let's start the meeting. We have quite a few items on the agenda.
People have already added a few, so instead of giving an update myself, I will let people start. I don't know if Connor's dialed in; I don't see him right now. Connor Doyle from Intel has a proof of concept for simulating logically large clusters to test scheduler scenarios.
C: It's a simulated kubelet cluster, basically for the purposes of testing scheduler policy. The background is that in our group at Intel we're working on some scheduler plugins, and also working to test some gang scheduling on top of the new permit plugin. We'll talk about that in the next topic, I guess. But we had the need for a simulated environment for a scheduler, and around the same time we started working on it,
somebody started asking around in the Slack channel about the same thing. So we figured we would try to present it, see if there's broader interest, and just talk about what we did. Yeah, so I can share my screen and do a tiny demo. Oh hey, Marian.
C: So this thing, we call it [unclear]; it's a terrible pun. But basically the idea is that you set up the fake cluster, and then you can run a scenario that asserts basic facts about what kind of nodes are in the cluster and what kind of pods are running,
what phases they're in. You can move pods along in the pod lifecycle, and then you can do more assertions based on that. In broad strokes, we're relying on docker-compose to provide the control plane. I've already launched that, so we've got etcd, the Kubernetes API server, and the stock scheduler running, and then the other part is...
C: Right. So the two binaries that we've built take a kubeconfig as input. Technically you could run it against a real cluster, but we just provide a make target that builds a kubeconfig that points at the local one.
C: With this, we've tried up to 256 nodes. We had some issues with API rate limiting; I think we can get around that just with some config. But you should be able to easily do thousands of nodes on a single machine, for free.
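As an aside for readers: one plausible reading of the "just some config" workaround is raising client-go's default client-side rate limits. A minimal sketch, assuming client-go; the flag name and the QPS numbers are illustrative, not the project's actual options:

```go
// Sketch: load a kubeconfig the way such a tool might, then raise
// client-go's default rate limits (QPS=5, Burst=10), which a large
// simulated cluster would quickly exhaust.
package main

import (
	"flag"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := flag.String("kubeconfig", "./kubeconfig",
		"path to a kubeconfig; may point at a real or a simulated cluster")
	flag.Parse()

	cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cfg.QPS = 500 // illustrative values, not the project's defaults
	cfg.Burst = 1000

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	_ = client // hand the clientset to the node simulator / scenario runner
}
```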
C: These pod specs can have any content whatsoever. The container doesn't really matter, because it never ends up executing; it just needs to parse as a pod spec.
The main thing is the resources and the name, as in the sketch below.
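A minimal sketch of such a fake pod in Go, assuming standard client-go types; the image is deliberately meaningless since nothing ever runs, and the name and sizes are made up:

```go
package sim

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fakePod builds the smallest pod the simulator cares about: a name plus
// resource requests. The container never executes, so the image is a dummy.
func fakePod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "noop",
				Image: "does-not-matter", // never pulled or run
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("500m"),
						corev1.ResourceMemory: resource.MustParse("128Mi"),
					},
				},
			}},
		},
	}
}
```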
C: And then this is, I think, the coolest part: the scheduling scenario. We started out writing a big, complicated YAML spec, and then, while I was commenting the YAML, I thought: these comments look parsable. So we decided to try that out, and it actually worked pretty well.
There's a small grammar. You can do four kinds of steps (you can assert things, create, change, or delete), and right now you can talk about pods and nodes, basically. So you can create one large node, create two small nodes, assert that they're running, make a pod. We've got timeouts included in the assertions. Yeah, that's pretty much how it works.
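The project wasn't public at the time of this meeting, so the real syntax isn't shown here; the sketch below only mirrors the grammar as described (assert, create, change, delete steps over pods and nodes, with timeouts on assertions), with invented keywords:

```go
package sim

// exampleScenario is an invented rendering of the described step grammar.
// The actual file format was not shown in the meeting.
const exampleScenario = `
create node large-1        # one large node
create node small-1        # two small nodes
create node small-2
assert nodes ready within 30s
create pod from pod.yaml
assert pod running within 10s
delete pod
assert pod gone within 10s
`
```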
C: I think probably the most amazing thing at this point would just be to run it. So basically we've got two binaries, and this one is the one that takes the pod config and the scenario config, along with the node configuration.
So you can see we've got a namespace flag; you can run this inside a namespace if you want to. Whether it passes or fails, it cleans up after itself, which is kind of nice. And then you can see the flag to configure which cluster to talk to. By default it just looks at a kubeconfig in the local directory, and you can see that's just pointing at my docker-compose machine.
C: So this is what it looks like: we're just pointing at those three files that I already showed you, and pointing at the default cluster. It runs kind of fast, it's kind of anticlimactic, but I think it's kind of fun just to see what happens. So yeah, it completed all the steps and exited zero, so ta-da.
A: This is actually great. I really like the idea, and also the way that you implemented it with expectations; the expectations and all of this seem pretty easy to use. It would be great if you could add... did you have a GitHub repository for this? I assume you do, right?

C: We do, but it's not public
right now. We started the process to do that, but it takes a little bit; we have to run through a little bit of legal compliance. But it's something we've done before.
F: Just wanted to pop in, this is Jonathan. There are a couple of comments on sort of that topic. From, I think, the sig-testing team: they came up with a really interesting way to run their conformance tests with the profiler turned on. That could probably be used here, and we could get CPU profiles out of the scheduler from these scenarios, which would be pretty neat
for, like, core scheduler development. And then the other thing, in terms of your using docker-compose: I don't know if you've seen Kubernetes in Docker, kind, and I'm not sure how suited that is to your application, but it might be something to look at. It's called kind: K-I-N-D, Kubernetes in Docker. Yes.
C: All right, can you see this? Yes? All right. So basically docker-compose is running those four components, and then npsim and nptest are running on the host; they're consulting those config files and just talking to the API server. And if you're just running npsim, which is just the fake node pool, all it does
is register however many nodes you configured, and then, when the scheduler binds a pod to one of those nodes (excuse me), it reports back that the pod is running. Since the kubelet only ever knows about pods, in the end it should work for testing other stuff too, like the controller manager and job runners and things like that.
C: One other thing that we implemented: you can basically drive the runtime duration and the final phase of the pod just through labels. It's like np.runtime or something like that; you just give it a duration, and then the fake kubelet will wait that long before reporting the final status.
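A rough sketch of what such a fake-kubelet loop could look like, assuming client-go; the label key is hypothetical (the speaker only said "np.runtime or something like that"), and error handling is kept minimal:

```go
package sim

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// runFakeKubelet polls for pods the scheduler has bound to this fake node,
// reports them Running, and, if an np.runtime label is set, reports a final
// phase after that duration. A real implementation would use a watch.
func runFakeKubelet(ctx context.Context, client kubernetes.Interface, nodeName string) {
	handled := map[types.UID]bool{}
	for ctx.Err() == nil {
		pods, err := client.CoreV1().Pods("").List(ctx, metav1.ListOptions{
			FieldSelector: "spec.nodeName=" + nodeName,
		})
		if err == nil {
			for i := range pods.Items {
				pod := pods.Items[i]
				if handled[pod.UID] {
					continue
				}
				handled[pod.UID] = true
				go func(p corev1.Pod) {
					// The pod "starts" as soon as it is bound.
					p.Status.Phase = corev1.PodRunning
					client.CoreV1().Pods(p.Namespace).UpdateStatus(ctx, &p, metav1.UpdateOptions{})

					// Hypothetical label driving the simulated runtime.
					if d, err := time.ParseDuration(p.Labels["np.runtime"]); err == nil {
						time.Sleep(d)
						p.Status.Phase = corev1.PodSucceeded // a second label could pick the phase
						client.CoreV1().Pods(p.Namespace).UpdateStatus(ctx, &p, metav1.UpdateOptions{})
					}
				}(pod)
			}
		}
		time.Sleep(time.Second)
	}
}
```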
B: Okay, so... no, actually there are no requests. So I was wondering, what does the scheduler actually schedule based on?

C: Yes...
H: We also discussed maybe using this higher-level abstraction outside of the simulator: I create a job outside of it, and the simulator only handles the pods, and that is something we can do as well. Or use other abstractions; at least for gang scheduling, there is the pod group abstraction, where you can talk about a group of pods. So those are the possible options, but yeah, we're still exploring it.
C: But actually, in the demo which is coming up next, he used this project to test the permit plugin for gang scheduling. So he can talk about how he turned off the default scheduler and used the patched scheduler that he built.
I: So I'm going to talk about how we implemented the permit extension point that was proposed in the KEP on extending the scheduler. Before that, let me just talk about the concept.
Currently we have a single scheduling loop, which looks at the pods and considers one pod at a time: it decides where the pod is to be placed and binds it to the node. The main motivation for us was that we wanted gang scheduling, and there's currently no way to do it.
I: We can only handle those pods which are in a gang like this: for example, if you have pods in a gang, you don't want to bind any of the pods until all the pods in the gang can be assigned. So maybe I can keep it short and get to the design.
In the design, the permit plugin can allow a pod, reject it, or wait, with a specific timeout or so. To that effect, we just have three interfaces: accept, reject, or wait on the pod.
I: So, for example, this is basically the flow. There's the single, current scheduling loop with filters, then pod scoring, and so on; and then we have the asynchronous loop which binds pods. There we go through all the permit plugins and see whether they accept that particular pod, wait for it, or just reject it. Let me zoom in a little bit.
I: Take an example for gang scheduling. What it would mean is: let's say we have four pods in a gang, and one pod has been rejected. As soon as we see that one pod has been rejected, we should just reject the rest of the gang too and not let them bind. And let's say we have four pods in the gang, and two pods have been seen by the scheduler while two pods have not yet been seen in the gang; then...
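To make that decision logic concrete, here is a minimal sketch of gang handling at the permit point. The interface is abbreviated (the permit extension point proposed in the scheduling-framework KEP has a different signature), and the gang label and size bookkeeping are invented for illustration:

```go
package sim

import (
	"sync"
	"time"

	corev1 "k8s.io/api/core/v1"
)

// Decision is an abbreviated stand-in for the permit plugin's verdict.
type Decision int

const (
	Allow  Decision = iota // let the pod proceed to bind
	Reject                 // fail the pod, and here its whole gang
	Wait                   // hold the pod at permit, up to a timeout
)

// GangPermit holds per-gang bookkeeping for the permit decision.
type GangPermit struct {
	mu       sync.Mutex
	seen     map[string]int  // gang name -> members that reached permit
	rejected map[string]bool // gang name -> some member was rejected
	size     map[string]int  // gang name -> declared gang size
}

func NewGangPermit(sizes map[string]int) *GangPermit {
	return &GangPermit{seen: map[string]int{}, rejected: map[string]bool{}, size: sizes}
}

// Permit implements the accept / reject / wait choice described above.
func (g *GangPermit) Permit(pod *corev1.Pod) (Decision, time.Duration) {
	gang, ok := pod.Labels["scheduling.example.com/gang"] // hypothetical label
	if !ok {
		return Allow, 0 // not in a gang: normal one-pod-at-a-time flow
	}
	g.mu.Lock()
	defer g.mu.Unlock()
	if g.rejected[gang] {
		return Reject, 0 // one member failed, so the whole gang fails
	}
	g.seen[gang]++
	if g.seen[gang] < g.size[gang] {
		// Some members haven't been seen by the scheduler yet; hold this
		// pod instead of binding it. A real plugin would also release the
		// waiting members once the last one arrives.
		return Wait, 30 * time.Second
	}
	return Allow, 0
}

// RejectGang records that a member failed, so the rest are rejected
// rather than left waiting for the timeout.
func (g *GangPermit) RejectGang(gang string) {
	g.mu.Lock()
	defer g.mu.Unlock()
	g.rejected[gang] = true
}
```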
J: Can I just make sure I understand here? What you're trying to outline is a scheme by which the scheduler still schedules one pod at a time, but with this permit extension a gang gets held up until the scheduler has scheduled all the individuals in the gang, right? Thank you.
J: Just to make sure it's clear, a follow-up question: the scheduler doesn't understand the gangs, and it can work on these pods in any order, right? If we've got 14 pods here, it might happen to work on 7 pods from the first gang and 5 pods from the second gang before it finishes either gang, right? Yes. And...
H: But I mean, the guarantee that this permit plugin gives is that the pods in a gang will be scheduled together; however, the ordering of which gang is scheduled first is not deterministic. That's the whole point here: the gangs will be scheduled in some order, right.
I: So this should also be pretty quick, but I can try to highlight it when it's there. So basically it creates the three nodes, and then it should make the first four pods running. Well, the six are working fine, even though two should have a reservation... or four should have a reservation; then, once those are bound, exactly the remaining ones get scheduled.
A: That was encouraging, at least to verify that some of our functionality and design works. One quick update regarding the framework: as you know, or maybe you don't know, we had a revision of the KEP, which was merged a few weeks ago; I forget exactly when it was. Anyway, I have a new change that I'm going to send out soon based on these new changes and ideas. It kind of changes some of the interfaces that we have for the plugins.
K: Basically, it's around adding scheduling constraints to the RuntimeClass API, so I just wanted to highlight some of the latest improvements. Basically, we were asking if we should build native predicates in the scheduler to filter the nodes out based on whether they are able to handle the specific runtime class. But since the runtime class scheduling is based on, basically, tolerations and node selector terms...
K: Basically, the user would have no idea why their pod is not getting bound to a node. So we do have to balance between the user-experience problem and the composition problem, knowing that, if we start to compose predicates, how do we handle opt-in and opt-out of the composed predicates, and things like that? I don't know if you folks have any thoughts about this.
A: I haven't thought very carefully about the implementation, but I feel that if we don't want to have the user-experience issue that I pointed out, we should probably go with a separate predicate for runtime class.
One option that we have here is that we can build a library for the node-selector predicate and also for the toleration one, so that both the node-selector and toleration predicates, as well as the runtime class predicate, can use it.
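As a sketch of that library idea (an illustration, not the predicate that eventually shipped): one shared helper that checks a node-selector match plus toleration of the node's taints, which both the existing predicates and a RuntimeClass predicate could call:

```go
package sim

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// fits reports whether the node matches the given nodeSelector and whether
// the given tolerations cover all of the node's hard taints. A RuntimeClass
// predicate would call this with the constraints from the runtime class; the
// plain predicates would call it with the pod's own selector and tolerations.
func fits(nodeSelector map[string]string, tolerations []corev1.Toleration, node *corev1.Node) bool {
	if !labels.SelectorFromSet(labels.Set(nodeSelector)).Matches(labels.Set(node.Labels)) {
		return false
	}
	for i := range node.Spec.Taints {
		taint := &node.Spec.Taints[i]
		if taint.Effect == corev1.TaintEffectPreferNoSchedule {
			continue // soft preference, does not block placement
		}
		tolerated := false
		for j := range tolerations {
			if tolerations[j].ToleratesTaint(taint) {
				tolerated = true
				break
			}
		}
		if !tolerated {
			return false
		}
	}
	return true
}
```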
A: We are going to do some amount of, basically, reprocessing of certain node selector rules in the case that we have both a runtime class as well as a node selector in the pod spec. In those cases, I think it should generally be fine.
I don't expect to see much of a performance difference between combining everything into a single predicate versus running them as separate predicates, so I still feel that with a library we can implement this cleanly, without much of a performance penalty. It also gives us a better user experience: if we have a separate predicate and that predicate fails, we can show the user what happened, for example, "this node didn't match the runtime class that you requested for your pod." Yes.
A: The way that the new process works is like this: we start with a KEP. The KEP must be merged, and usually the deadline for merging these KEPs is a lot earlier than our actual code freeze, so it makes more sense to first merge the KEP, then work on the PRs, and then merge the PRs. So that's the preferred workflow. Okay.
G: Just a quick update on the KEP for even pod distribution. I think on the general direction we are all fine: we are using a separate predicate and a separate API spec to describe that kind of information, and some corner cases have also been discussed, especially around the max skew; I've had a specific section describing that. So the next step is to have some API machinery experts review the API spec, and yeah, hopefully we can merge it. So yeah.
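For context on "max skew": the API under review here later shipped as pod topologySpreadConstraints. A rough sketch of one such constraint, with field names per the API as it eventually landed (values illustrative):

```go
package sim

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// spreadAcrossZones spreads pods labeled app=web evenly across zones,
// tolerating a difference of at most one pod between any two zones.
var spreadAcrossZones = corev1.TopologySpreadConstraint{
	MaxSkew:           1,
	TopologyKey:       "topology.kubernetes.io/zone",
	WhenUnsatisfiable: corev1.DoNotSchedule,
	LabelSelector: &metav1.LabelSelector{
		MatchLabels: map[string]string{"app": "web"},
	},
}
```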