From YouTube: Kubernetes SIG Scheduling meeting 2018-04-12
A: Hello, everyone. As you have noticed, we have a different schedule these days. We decided to split the one-hour biweekly meeting of SIG Scheduling into two meetings, a 30-minute meeting every week. I hope this schedule works better for everyone around the world. Since we have so many contributors located in various countries, we had a hard time finding a single time that would work well for everybody, so we had to do this. One of these two meetings is at 10:00 a.m. Pacific time, and one is at 5 p.m. Pacific time, so I hope at least one of the two works for everyone. If neither of them works for some of you and you would still like to attend these meetings, please let us know; we will try to accommodate as much as possible. For today's meeting I didn't send an agenda beforehand, but I have a few items that I would like to talk about, and then, if you have any items, I will let you speak.
A: So, as usual, we decided to go over the 1.11 items that we have. Some of them are handled by Klaus, who I guess isn't present today, because it's a horrible time for him in China. Since he's not here, I'm going to go over some of the other items, and then, if you have anything to talk about, please go ahead. One of the most important items that we would like to do in 1.11 is to promote priority and preemption to beta. So far, this is on track.
A: We haven't seen a whole lot of issues so far. I have been working on some benchmarking, and along the way I found ways to improve the performance of the affinity and anti-affinity predicates. I have sent a PR that significantly improves the performance of affinity and anti-affinity; we've seen over a 20x performance improvement in larger clusters. Larger clusters were the ones that we were most concerned about, and I'm glad that we have found a solution for that. So hopefully this helps with the scalability of priority and preemption as well.
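The transcript doesn't describe how the PR achieves the speedup, so as a rough, hypothetical sketch only (names and data shapes invented for illustration, not the Kubernetes code): the general way such predicate checks get faster is to do the expensive "which existing pods conflict with this incoming pod?" matching once per scheduling cycle, so that each per-node check becomes a cheap lookup instead of a scan over all pods.

```python
# Illustrative sketch only: precompute anti-affinity conflicts per topology
# domain once, then answer each node's predicate with an O(1) lookup.
from collections import defaultdict

def precompute_conflicts(existing_pods, incoming_labels, topology_key):
    """Map topology-domain value -> True if some existing pod's
    anti-affinity term matches the incoming pod's labels there."""
    conflicts = defaultdict(bool)
    for pod in existing_pods:
        for term in pod.get("anti_affinity", []):
            # term is a toy {label_key: label_value} selector here
            if all(incoming_labels.get(k) == v for k, v in term.items()):
                conflicts[pod["node_topology"][topology_key]] = True
    return conflicts

def node_fits(node, conflicts, topology_key):
    # Per-node cost is now a dict lookup, not a scan over all pods/terms.
    return not conflicts[node["topology"][topology_key]]

pods = [
    {"anti_affinity": [{"app": "db"}], "node_topology": {"zone": "a"}},
    {"anti_affinity": [], "node_topology": {"zone": "b"}},
]
conflicts = precompute_conflicts(pods, {"app": "db"}, "zone")
print(node_fits({"topology": {"zone": "a"}}, conflicts, "zone"))  # False
print(node_fits({"topology": {"zone": "b"}}, conflicts, "zone"))  # True
```

The payoff grows with cluster size, which matches the observation above that the largest gains were seen in larger clusters.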
B: Regarding priority and preemption: I think I said this before as well, we are also very interested in trying it in our clusters, so we agree with the things you said. But, in addition to that, one of the other biggest blockers for using priority and preemption is not having some sort of policy controller and admission controller. What we have been looking for is something like whitelisting, an allowed list of priority classes for namespaces, and then controlling who can apply them, right?
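As a minimal sketch of the kind of check B is describing, assuming a made-up per-namespace allow-list (this is illustrative pseudologic, not the actual Kubernetes admission plugin API):

```python
# Hypothetical admission-style check: each namespace has an allowed set of
# priority class names; a pod is rejected if it requests one outside it.
ALLOWED = {
    "kube-system": {"system-cluster-critical", "system-node-critical"},
    "team-a": {"high", "default"},
}

def admit(namespace, priority_class):
    # Namespaces without an explicit entry fall back to "default" only.
    allowed = ALLOWED.get(namespace, {"default"})
    if priority_class not in allowed:
        return False, f"priorityClass {priority_class!r} not allowed in {namespace!r}"
    return True, ""

print(admit("team-a", "high"))                        # (True, '')
print(admit("team-a", "system-cluster-critical")[0])  # False
```

In a real cluster this decision would live in an admission controller, which is exactly the gap B says is blocking adoption.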
A: Yassine has been working on that, and actually he's present today, so he can talk a little bit more about it and give us an update on that effort. There is an effort that is trying to create a sort of holistic policy system for restricting properties of pods that affect their scheduling. One of these properties is priority, and this policy tries to address defaulting of the priority for pods in a particular namespace. That's, at a high level, what it is trying to do.
B: I think yesterday, or maybe earlier, I also gave some feedback that we might want a more simplistic policy, maybe not having so many fields and so many other things, or maybe trimming it down so that management of that policy is easier. I'm not sure if there has been anything after that, another PR or anything. I saw that we have been trying to do it via a KEP, right, but I'm not sure where to look for that.
C: We were trying to do it via a KEP, but from talking with Tim and others from the SIG, we figured out that we might need to just go with the scheduling policy design proposal and, after that, extract all the standards that we've put there, and all the design choices that we've made, into a KEP. So we'll go with just the design proposal, and then extract what we've done and generalize it for the whole policy API group. And one thing we're dropping is binding policies using RBAC.
C: I think there are some security concerns when you use role bindings or cluster role bindings to bind policies: it opens up the usage of the policy to all pods using a certain service account. So it's not really super safe. Instead, maybe we just use namespaces and namespace selectors, and maybe even pod selectors.
A: You know, I think a namespace selector, or generally making the granularity of the policy a namespace, is more aligned with the other policies that we have; not all of the policies in Kubernetes, but some of the other policies that we have in Kubernetes. For example, quota: we have it per namespace. So we should possibly align these two together. My feeling is that, in most scenarios, this works better for users and covers most of the use cases.
C: The thing is that it's a bit tricky. We'll basically be using two ways to compose policies. There is the binding mode which is called "any": this binding mode basically takes any policy that validates, or that satisfies the constraints. And there is the binding mode which is called "all": with this binding mode we must satisfy each policy, and there can be conflicts. Basically, there are two types of conflicts. The first is structural conflicts.
C: These we can detect at validation time, when you create the scheduling policy: basically, putting something in "required" but also putting the exact same thing in "denied". That we can detect at validation time. And then there are semantic conflicts. These can only be detected at compute time, and basically we can't make the choice for the cluster admin, since he created the conflict; so we just surface an event to say: hey, there's a conflict here in your policy.
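The two conflict classes and two binding modes C describes can be sketched with toy policies (the "required"/"denied" field names and the shape of a policy are invented for illustration; the real proposal's schema is not in this transcript):

```python
# Illustrative sketch of C's taxonomy: structural conflicts are detectable
# statically at validation time; composition modes "any"/"all" decide how
# multiple policies combine at compute time.

def structural_conflicts(policy):
    # Same item both required and denied: reject at validation time.
    return set(policy["required"]) & set(policy["denied"])

def compose(policies, mode, request):
    """mode="any": request passes if at least one policy admits it.
       mode="all": request must satisfy every policy."""
    def admits(p):
        return request in p["required"] or request not in p["denied"]
    results = [admits(p) for p in policies]
    return any(results) if mode == "any" else all(results)

p1 = {"required": {"high"}, "denied": {"low"}}
p2 = {"required": set(), "denied": {"high"}}

print(structural_conflicts({"required": {"x"}, "denied": {"x"}}))  # {'x'}
print(compose([p1, p2], "any", "high"))  # True  (p1 admits it)
print(compose([p1, p2], "all", "high"))  # False (p2 denies it)
```

The "all" case where p1 admits and p2 denies the same request is the semantic-conflict situation: neither policy is malformed on its own, so the system can only surface the disagreement at compute time, as C says.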
C: Actually, I've been working on the design doc the whole week. We have a generic part that says: this is how we structure it, this is how we handle conflicts; and there's a second part that specifies everything for the scheduling policy. I think that will be ready to review by tomorrow. But there's also another stream, which is basically how we implement policies. Erica, who is from Working Group Policy, pointed me to an effort that has been done on their side, which is the container policy interface.
C: The idea is basically to surface a true API, not just fall back on an admission controller to implement policies; to have a real API, a real contract, that says: hey, this is how we get policies, this is how we enforce them, this is how we update them. So I think that the implementation of the scheduling policy will be tied to how fast we can get the container policy interface out.
C: I think that surfacing the API might not take so long, since we have Working Group Policy on board. We can have a default implementation on the Kubernetes side for policies: basically, it would just be extracting the implementation of network policies or security policies, and the implementation of the scheduling policy, and this would be an implementation in the style of CNI.
C: I don't think that we can have it for the next release; I'm not sure. I think that we can go with what we have in the admission controller as alpha, but I'm not in favor of promoting it to beta or something like that, since we would really just be conveying an implementation using annotations and things like that.
C: The thing is that when you use an admission controller, you introduce an implicit API. Even if we do not have a strict API, people will be using these things, so it creates a dependency and a migration problem for clusters if we change from one release to another, or if we pull back, for example, a temporary implementation.
A: That makes a lot of sense. Of course, if we want to think long term, we should have something that works better over a longer horizon, as opposed to having some temporary fix that we later have to get rid of. So, definitely, this is going to be the more proper approach, no doubt about it; I'm just trying to see what we can do to address the problems that people have raised concerns about now.
B: I mean, I agree with the long-term approach; the only thing is, maybe let me also discuss with other folks internally. Let's see: once we have some update about this design proposal, we can see if there is any accommodation we can make either way. Once we see the update, we might have a better picture of how to go about that. Yeah.
A: That sounds good. Okay, let's move on. Among the other items that I would like to give an update on, one is about the equivalence cache. As you might know, or have seen, there have been a few attempts at enabling the equivalence cache, or making it more usable, and the most recent one failed again due to a race condition that it introduces. I don't want to give all the details here, but if you're interested, I can send you a link to this race condition.
A: So, anyway, this introduces a race condition, and we are having a hard time finding a proper fix for it, because if we want to introduce synchronization to address the race condition, it looks like we're going to need a very big lock around each scheduling cycle, which seems like a performance problem to me. We haven't really tried it, but my experience is that it would most probably cause a significant performance drop. So, anyway, if you're interested, or if you have expertise or interest in solving this problem, please reach out to us and take a look at the problem.
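The transcript doesn't spell out the race, but the general shape of such equivalence-cache races can be illustrated as a check-then-act problem (everything below is a toy model, not the scheduler's actual code or cache keys): a predicate result computed against an old node state can be written into the cache after the invalidation triggered by a node update, leaving a stale entry behind.

```python
# Toy, deterministic replay of a check-then-act race around a result cache.
cache = {}

def compute_predicate(node_state):
    # Toy "pod fits" predicate: needs 2 free CPUs.
    return node_state["free_cpu"] >= 2

# 1) A scheduling cycle reads node state and computes, but has not cached yet.
node = {"free_cpu": 4}
result = compute_predicate(node)   # True, based on the old state

# 2) Concurrently, the node changes and the cache entry is invalidated.
node["free_cpu"] = 1
cache.pop("node1/fits", None)      # invalidation happens here...

# 3) ...and only then does the scheduling cycle write its stale result.
cache["node1/fits"] = result

print(cache["node1/fits"])         # True, although the node no longer fits
print(compute_predicate(node))     # False
```

Serializing steps 1 and 3 against step 2 closes the window, but that is exactly the big per-cycle lock A worries about above.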
A: Another couple of items, like graduating taints (tainting nodes by condition, evicting pods based on taints, and all of that), are handled by Klaus. Next week, when he's present, he's going to give an update on those. He's also working on gang scheduling, and he's going to give an update on that as well. Then there is one item that I should be working on and have delayed.
A: I have actually started working on it, and I have written quite a few pages of the document, and that is the scheduling framework. I haven't finished the document yet, and I take full responsibility for not finishing it. I really wanted to finish it by the end of March, which didn't happen; I'm really hoping that I can at least do it by the end of April. So hopefully I will finish it and send it out.
A: The reason that it's delayed is partly because I happened to be busy, and the other reason is that I am trying to cover a bunch of things, including extensibility of the scheduler, as well as trying to at least verify that this new architecture is flexible enough to address the projects that we have on our roadmap.
A: One is gang scheduling. Gang scheduling does not quite fit well into our current scheduler architecture, and I am trying to see if it can really fit into the new architecture that we are designing. There are a number of other items that we would like to address there. One is dynamic binding of resources: these are resources which are cluster-level, as opposed to node-level resources.
A: This model, actually the new model that I have worked on, works pretty well for dynamic binding, but, as I said, there are other items that I'm working on, and hopefully I can finish it and send it out soon. So these are most of the items on the 1.11 list that I wanted to talk about. I don't see any major issues causing major problems in the scheduler, and I don't see any major blockers, but if you have any items that you would like to talk about, please go ahead.
D: [inaudible]

A: Actually, there is no issue for that; I've been working on a document that I will send out to the community soon. I don't think there is an issue filed. The particular problems with the scheduler extender are two things. One is the fact that the scheduler extender supports very few extension points; basically, our architecture exposes very few extension points. These are, for example, the filter extension point, the prioritize extension point, and, more recently, bind and preemption.
A: So what we would like to achieve in the new architecture is to expose many more extension points: for example, before filtering, after filtering, during filtering, and so on. The goal is that we can hopefully convert all of our priority and predicate functions into plugins for this new architecture.
A: That's one of the goals. The second goal is that our current extension mechanism is only via webhook. A webhook is good if you want to run a separate process alongside the scheduler, but it's not that great in terms of performance. So, for example, if we converted all of today's predicates and priority functions into webhooks, the scheduler's performance would drop dramatically. What we are trying to achieve in the new architecture is to introduce in-process plugins.
A: Basically, the idea is to write plugins as libraries that are built with the scheduler but talk with it via a very well-defined interface. Hopefully, in the future, users can have custom plugins (they could be in-process or out-of-process plugins) that they can easily maintain alongside the main scheduler, or the scheduling framework that we're building, instead of maintaining a fork. This will reduce the burden of maintaining customized schedulers, so hopefully users who need customization of the scheduler can achieve it with reasonable performance.
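As a minimal sketch of what "plugins behind a well-defined interface" could look like (the interface and plugin names here are hypothetical, not the scheduling framework's actual API, which was still being designed at the time of this meeting):

```python
# Illustrative in-process plugin interface: plugins implement filter/score
# hooks and are called directly, with no HTTP/JSON round trip.
from abc import ABC, abstractmethod

class SchedulerPlugin(ABC):
    @abstractmethod
    def filter(self, pod, node) -> bool: ...   # feasibility check
    @abstractmethod
    def score(self, pod, node) -> int: ...     # preference ranking

class SpreadPlugin(SchedulerPlugin):
    def filter(self, pod, node):
        return node["free_cpu"] >= pod["cpu"]
    def score(self, pod, node):
        return node["free_cpu"]               # prefer emptier nodes

def schedule(pod, nodes, plugins):
    feasible = [n for n in nodes if all(p.filter(pod, n) for p in plugins)]
    if not feasible:
        return None
    return max(feasible, key=lambda n: sum(p.score(pod, n) for p in plugins))

nodes = [{"name": "n1", "free_cpu": 1}, {"name": "n2", "free_cpu": 8}]
best = schedule({"cpu": 2}, nodes, [SpreadPlugin()])
print(best["name"])  # n2
```

Every plugin call here is a plain in-process function call, which is the performance contrast with the webhook path described next.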
A: Of course, performance is going to be lower for an out-of-process plugin. Even if you have all the performance optimizations in place, the communication overhead over the network, or through various layers of software (for example, going over HTTP, sending a request out, marshaling everything to JSON, then getting the result back again over the network and the HTTP interface, and unmarshaling everything back into usable data structures), is going to take a lot of time. So we are trying to avoid all these layers by introducing in-process plugins.
D: [inaudible]

A: We want to schedule, let's say, a set, a number of pods, all together, right? And if we cannot schedule, let's say, all 20 pods, we don't want to schedule any of them. Today's scheduler architecture schedules one pod at a time, so a model like gang scheduling does not fit into today's architecture. In the scheduling framework I followed pretty much the same architecture, but I exposed a lot more extension points in my document; the basic scheduling flow is still the same.
A: We need to change that a little bit in order to make gang scheduling possible, and this is not the only requirement of gang scheduling; if you combine it with, for example, preemption, things become a bit more complex. Let's say that a number of pods in your gang require preemption, say five of them, but even assuming that you preempted for those pods, you still find that you cannot schedule all the 20 pods that you need to schedule for gang scheduling.
A: In that case, you should not preempt anything. So, basically, the preemption should only be assumed during the process; nothing should actually be done until you are sure that you can schedule all those 20 pods. So there are some of these complications when gang scheduling is considered. I would like to cover those in the new architecture, if possible. I'm not a hundred percent sure that we can cover it nicely, but I haven't given up on doing it, so I will see.
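The all-or-nothing behavior described above can be sketched as a trial placement against a copy of the cluster state that is committed only if the entire gang fits (this is a conceptual model with made-up data structures, not the scheduler's real implementation):

```python
# Illustrative gang scheduling: placements and preemptions are only
# *assumed* on a working copy, and committed only if every pod fits.
import copy

def place(pod, nodes, allow_preempt):
    for n in nodes:
        if n["free"] >= pod["cpu"]:
            n["free"] -= pod["cpu"]
            return True
        if allow_preempt and n["free"] + n["preemptible"] >= pod["cpu"]:
            n["free"] += n["preemptible"]   # assume the victims are evicted
            n["preemptible"] = 0
            n["free"] -= pod["cpu"]
            return True
    return False

def schedule_gang(gang, nodes):
    trial = copy.deepcopy(nodes)            # work on assumed state only
    if all(place(p, trial, allow_preempt=True) for p in gang):
        return trial                        # commit: the whole gang fits
    return nodes                            # roll back: schedule (and preempt) nothing

nodes = [{"free": 2, "preemptible": 2}]
ok = schedule_gang([{"cpu": 2}, {"cpu": 2}], nodes)
print(ok[0]["free"])    # 0: committed, assumed preemption applied
same = schedule_gang([{"cpu": 2}, {"cpu": 2}, {"cpu": 2}], nodes)
print(same is nodes)    # True: gang did not fit, nothing was changed
```

The second call shows the key point from the discussion: even though preemption could make room for some of the pods, no victim is actually evicted, because the full gang of three can never fit.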
D: It's also important to have a working prototype first, so that we can verify whether it supports our use cases. It doesn't have to be that everyone agrees on the same thing before we move forward; we can have a majority agree, or something like that, then we start prototyping, and then we finalize the use cases before finalizing the design.
B: At least from my understanding, having this framework means some larger effort. If it would be so easy to have a proof of concept or a working prototype, then maybe I am wrong and it's not a large effort, or something like that. That's why I was wondering whether to first have consensus on the design doc.
A
Yeah
you're
right
I,
as
I
said
I
generally
tend
to
underestimate
the
amount
of
time
that
it
takes
to
implement
stuff,
and
it's
probably
the
case
here
as
well
when
I
say
112,
it's
really
wishful
thinking,
I
guess,
and
it's
too
optimistic,
but
but
I
really
really
hope
that,
with
after
we
finalized
the
top
design,
I
really
hope
that
we
can
have
a.
B: Go ahead. I mean, just one thing I wanted to say: I haven't seen many items where you are looking for help or anything, but if there are some specific items where people need help, then obviously folks like [inaudible] and myself would be happy to help. And one thing, I think, is that we have been looking into [inaudible] to try it; if that's the case, we can definitely help.
A: More recently, the number of PRs and the number of items that need review have dropped a little bit, so I've been able to manage some of them fairly quickly. I think we are going to see a rise again in the number of items that we are going to work on; this is the calm after code freeze, I guess.
B
Thank
you
for,
and
just
one
thing
I
would
like
to
point
out.
Specifically.
There
is
one
very
large
refactoring
pyaara
by
steam,
l
Szymanski,
where
he
is
adding
secure,
authentication,
authorize
your
options
to
a
scheduler
and
I've
been
reviewing
that
so
only
just
one
part
is
there
that
I
need
to
review
once
I
am
done.
That,
like
you
are
like,
maybe
you
you
can
have
a
local.