From YouTube: Kubernetes SIG Scheduling meeting - 2018-10-25
A
So, this meeting is recorded and is going to be uploaded to the public internet, so watch what you're saying. All right. So let's go over some of the items that we have. As you know, we were targeting the scheduling framework, some parts of the scheduling framework implementation, for this quarter. I sent out a PR last night; I tried to CC most of the people that I thought might be interested in taking a look. This is an early version of what I thought would be an alpha version of the feature.

I went back and forth between various designs, including something similar to other plugins that we have in other modules: for example, the event handlers for the API server, where there is a struct that has many members which are functions. That's the alternative approach that I'm talking about, which is similar to the API server. So in that alternative, API-server-like design, a plugin is a struct that has multiple members. These members are functions, so this struct could be a plugin that exposes a bunch of these functions for different extension points.
Let's say, for example, it has a member called Reserve; that's a function. This function is supposed to be called by the scheduler at the Reserve point, and so on. One of the problems with that approach is that if you register these plugins in a certain order, all of these extension points are going to be called in the same order: because there is one struct that exposes all these extension point functions, when you register that struct, it is going to be called in the same order at all of the extension points. I felt like we may want to have different ordering for different extension points. As a result, I changed it to a different design, which is actually currently in the PR, where any struct can implement one or many of the extension points and can be registered at each extension point in a certain order. So whatever order you decide, you can put that struct in that order at that extension point. So, as a result, various extension points can have structs, so you can have plugins which are not necessarily called in the exact same order as at other extension points. So we can mix and match any way we like. That's one of the big decisions that I had to make.
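The design just described, one small interface per extension point with an independently ordered registration list for each, can be sketched in Go. All names here (`ReservePlugin`, `Registry`, `demoPlugin`) are hypothetical illustrations, not the PR's actual identifiers:

```go
package main

import (
	"fmt"
	"strings"
)

// Each extension point is a separate interface, so one struct can
// implement any subset of them.
type ReservePlugin interface {
	Name() string
	Reserve(pod string) error
}

type PrebindPlugin interface {
	Name() string
	Prebind(pod string) error
}

// The registry keeps an independently ordered slice per extension
// point, so Reserve order need not match Prebind order.
type Registry struct {
	Reserve []ReservePlugin
	Prebind []PrebindPlugin
}

// demoPlugin implements both extension points.
type demoPlugin struct{ name string }

func (p *demoPlugin) Name() string             { return p.name }
func (p *demoPlugin) Reserve(pod string) error { return nil }
func (p *demoPlugin) Prebind(pod string) error { return nil }

// ReserveOrder runs the Reserve extension point and reports the order
// in which the registered plugins were called.
func ReserveOrder(r Registry, pod string) []string {
	var order []string
	for _, p := range r.Reserve {
		p.Reserve(pod)
		order = append(order, p.Name())
	}
	return order
}

func main() {
	a, b := &demoPlugin{"a"}, &demoPlugin{"b"}
	r := Registry{
		Reserve: []ReservePlugin{a, b}, // a before b at Reserve
		Prebind: []PrebindPlugin{b, a}, // b before a at Prebind
	}
	fmt.Println(strings.Join(ReserveOrder(r, "nginx"), ","))
}
```

The key point is that the same structs appear in a different order in the two lists, which the single-struct design could not express.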
There are other important decisions in the current implementation. One decision is, for example, how to access certain structures such as the scheduler cache. That's also something that I actually decided to register at the beginning of the scheduler initialization: we register the scheduler cache in our plugins, or in our "plugin set" as we call it in that implementation, and then the plugin set provides the scheduler cache to all the plugins as it calls them. Similarly for the context, which is basically the data structure that I talked about in our design proposal for the scheduling framework. This is a structure that allows plugins to communicate with one another. The data that a plugin keeps in this context survives only for one scheduling cycle, and at the end of the cycle it goes away.
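A minimal sketch of that per-cycle context idea, again with hypothetical names rather than the PR's actual types: a scratch map created fresh for each scheduling cycle, so anything a plugin writes disappears when the cycle ends:

```go
package main

import "fmt"

// CycleContext is an illustrative stand-in for the context described
// above: plugins read and write it during one scheduling cycle.
type CycleContext struct {
	data map[string]interface{}
}

func NewCycleContext() *CycleContext {
	return &CycleContext{data: make(map[string]interface{})}
}

func (c *CycleContext) Write(key string, val interface{}) { c.data[key] = val }

func (c *CycleContext) Read(key string) (interface{}, bool) {
	v, ok := c.data[key]
	return v, ok
}

// scheduleOne simulates one scheduling cycle: a fresh context is
// created per cycle, so nothing written here survives to the next pod.
func scheduleOne(pod string) *CycleContext {
	ctx := NewCycleContext()
	// An early plugin leaves a note for a later extension point.
	ctx.Write("filtered-nodes", []string{"node1", "node2"})
	return ctx
}

func main() {
	ctx := scheduleOne("nginx")
	nodes, _ := ctx.Read("filtered-nodes")
	fmt.Println(nodes)
}
```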
This is another design decision; please take a look. The last one that I want to talk about now is how to keep state for plugins that want to keep state. A plugin might want to be stateful and keep certain records for itself that survive all the scheduling cycles; it doesn't want them to go away. So I have actually created examples in the PR that show how stateful plugins can be implemented. Essentially, a plugin can be a struct that exposes some functions, and these functions implement the plugin interface.

So I have created a couple of examples, actually three examples, that show these plugins: plugins that are stateless, plugins that are stateful, and plugins that are communicating in multiple stages of scheduling, as well as having a state that is kept in some of these plugins as their own state, as opposed to something that is communicated with other plugins. So anyway, the PR is out; please take a look if you're interested.
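The stateful-plugin idea could look roughly like this (illustrative Go, not the PR's actual example): because the same struct instance is registered once and reused, fields on the struct persist across scheduling cycles, unlike data written to the per-cycle context:

```go
package main

import "fmt"

// ReservePlugin mirrors one hypothetical extension point interface.
type ReservePlugin interface {
	Reserve(pod string) error
}

// countingPlugin is a stateful plugin: its counter lives on the struct
// itself, so it survives across scheduling cycles.
type countingPlugin struct {
	reserved int
}

func (p *countingPlugin) Reserve(pod string) error {
	p.reserved++
	return nil
}

func main() {
	p := &countingPlugin{}
	var rp ReservePlugin = p
	// Simulate three scheduling cycles with the same registered instance.
	for _, pod := range []string{"a", "b", "c"} {
		rp.Reserve(pod)
	}
	fmt.Println(p.reserved) // state persisted across cycles
}
```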
C
We have kind of moved away from the design and started with a dynamic admission controller, because plug-in development kind of became difficult. Do you think we'll face the same issues? So what was the issue there? So, plug-in development: it has to be compiled within the code, and the other problem is, say, if someone wants to develop the plug-in in a separate tree, it becomes difficult. So.
A
For that one predicate, our goal was to basically change the architecture so that, yes, you may need to bring everything from the whole codebase to your own repo to add one plugin, maybe, or maybe not, actually, we'll talk about that. But let's say that you need to do so: all that it takes is just copying everything; you don't need to worry about changing the logic or anything. You just bring it.
D
I think, of course, you have to rely on the existing code base, and then, if the plugin can be supported that way, you don't need to modify it, just add some others; then the recompiling will be easier. You don't need to handle those kinds of conflicts if our API got changed. Of course, you need to put your files into your specific folders, exactly.
A
So our promise is that we are going to keep this API, the plug-in interface, basically backward compatible, and we provide some of the same guarantees, basically, that the Kubernetes APIs provide. So, for example, if you look at this PR that I sent out, the interface is specifically placed under a folder called v1alpha1, calling it an alpha, and then at some point we are going to promote it to beta, and after that we are going to provide backward compatibility as long as we keep the version, yeah.
A
So once you have a single process and all plugins are in that same process, you can share all the data, all the context, with those plugins; but once the plug-in is outside, all that sharing goes away. And one of the reasons that you actually get better performance for in-process plugins is the fact that you can actually share context, or share memory and everything, inside the process with those other plugins.
A
So I don't know if Cena's here today with us. Anyway, I had another meeting with Tomas, where we decided that, in order to have meaningful and really useful scheduling policies, we need to have both static checks, which means admission-time checks, as well as dynamic checks, basically scheduling-time checks. So, you know, imagine that an admin wants to prevent certain pods in a particular namespace from getting scheduled in zone one of the cluster. This is a little harder to specify statically, because when a user creates pods, they may not put any particular policies or constraints on their pod spec saying which zones their pods should not go to, zone one here. But at scheduling time we can enforce it: basically, the scheduler can look at these scheduling policies and say zone one is not feasible for pods of this particular namespace. So we need some of these dynamic checks as well. So the latest development in that area is that we are going to have two parts.
A
One is the admission-time checks. These checks are for those kinds of constraints that are very obvious and easy to catch. For example, an admin wants to say that no pods in this namespace can have a toleration with this particular key, and if you put that key in your pod spec, your pod may get rejected at admission time. But, as I said, there are more advanced or more complicated conditions that cannot be checked at admission time just by looking at pod specs.
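The toleration rule above is the kind of check that can be decided purely from the pod spec. A hedged sketch, with hypothetical types standing in for the real pod spec and admission plumbing:

```go
package main

import "fmt"

// Minimal stand-ins for the relevant parts of a pod spec.
type Toleration struct{ Key string }

type Pod struct {
	Namespace   string
	Tolerations []Toleration
}

// admitPod implements a static, admission-time check: reject any pod in
// the restricted namespace that carries a forbidden toleration key.
// A zone-placement rule, by contrast, cannot be decided from the pod
// spec alone and must wait for scheduling time.
func admitPod(pod Pod, restrictedNS, forbiddenKey string) bool {
	if pod.Namespace != restrictedNS {
		return true
	}
	for _, t := range pod.Tolerations {
		if t.Key == forbiddenKey {
			return false // rejected at admission time
		}
	}
	return true
}

func main() {
	bad := Pod{Namespace: "restricted", Tolerations: []Toleration{{Key: "special"}}}
	fmt.Println(admitPod(bad, "restricted", "special")) // false
}
```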
A
So those will be done at scheduling time. As a result, a pod that violates, let's say, some of the scheduling policies may get admitted to the cluster but may stay pending. So this is a slight downside: the user may not notice at admission time that the pods are rejected; they have to go back and check later and look at the pod status or events to figure that out. That's why those pods stay pending.
A
Okay, so next item: taint-based eviction. I know you had sent some updates, some additional end-to-end checks and tests for taint-based eviction. I think that's great; thank you very much for sending those out. And Robbie has been helping that effort as well; thank you, Ollie, as well. So if you guys have any further updates, please share them with us. Yeah.
D
Well, my personal opinion is that the name "taint-based eviction", and I've talked about this with Robbie before, is somewhat misleading, because it caused me to think that it is this feature that evicts the pods based on the taint, while actually the taint behavior is not decided by whether this feature is enabled or not. So maybe we should mention that a little bit in the document. Yeah.
C
The problem seems to be with the defaults: we have an algorithm provider, as well as the defaults, in the code. There is no way to... As of now, there is no way to register priority functions. There is a way to register predicate functions with the default provider, but we do not have it for priorities, so I have added that function as well.
A
It's fine. I mean, this release is a special release: first of all, it's very short, and, I mean, we prefer not to add a lot of features in this release, so it's okay; we can bring it into a different release. And, I mean, we probably wouldn't enable the scheduler by default anyway, but there is always a chance to do it next cycle. So, there was one more item: moving balanced number of attachments to beta.
E
Hi, so yeah, we are in the process of cutting a release that includes gang scheduling, the initial version of gang scheduling which we developed, and we had that mixed up with the max-pods feature, actually, and we ran into some problems with max pods, because of the way informers work and all the threading issues. So there was some sequencing issue and all that, actually, so initially we thought that we would take it out, you know, just do the max pods within Firmament. But even that doesn't work, because there are some kube-system pods on the same node, so the count goes up, so the test failed because of that. So we didn't have any choice other than getting all the pods from Kubernetes. So we had the code, but I think there are some corner cases.
E
It is not now, yeah. I think there's some, not a major issue; essentially we define informers and then we get them. There was some sequencing issue, and we're done. So we just wanted to have... So the reason I wanted to keep max pods separate, since we were not able to resolve those issues, is that internally, actually, our product group guys are very interested in this gang scheduling and all the bulk capabilities, so they wanted to look into that, you know.
E
Make sure, yes, yes, we're definitely going to do that, and we are working on the Helm charts as well, actually, so hopefully we'll have that done too. So we might make the installation simpler; we will let you know. We're just making sure everything is in place, and then we will, that's...