From YouTube: Kubernetes SIG Scheduling meeting - 2018-09-13
Description
No description was provided for this meeting.
A
B
A
C
A: With the taint-by-condition change, I doubt that one actually... I don't know exactly what is going on. I guess last night, when I looked at that, Wojtek was saying that the regression you're seeing with respect to scale could be related to that change we made to the node lifecycle controller, but he didn't know for sure. So we are not so sure, and the tests are passing one day and not passing the other day.
A: He was saying some of the tests are actually taken out so that the rest can run. I don't have a great amount of knowledge about what is going on there at the moment, but I would say that might actually be one of the things we need to revisit.
C
A
A: Hope you do so. In this spreadsheet we are tracking the items that we want to address in 1.13. One of the things we are trying to work on is, again, the equivalence cache. But this time we are probably going to try a different approach: as we discussed before, maybe we should actually change the design of the equivalence cache instead of targeting particular predicates and nodes.
A: Yeah, so we're trying to change our approach to the equivalence cache. Basically, you group a bunch of pods under the same group, in a way that they all have the same equivalence hash and belong to the same equivalence class. So if one of these pods cannot be scheduled, there is no point in trying to schedule any other pod in that group as long as the cluster condition has not changed. For example, if a node...
D
A
A: So there are actually one or two particular problems, I would say. One was the fact that, you know, a lot of our predicates today are very fast, pretty much. The only predicate that is not super fast is affinity, meaning inter-pod affinity; when I say that, I mean inter-pod affinity and anti-affinity. Node affinity and anti-affinity are not slow, so only inter-pod affinity and anti-affinity are slow. For that predicate, it makes sense to have an equivalence cache. The equivalence cache today is like a two-level cache.
A
A: We initially had a big global lock, replaced it with finer-grained locks per node and so on, and still we are not seeing a performance improvement. Actually, if you try to schedule simple pods, pods that don't use any of these more sophisticated predicates, we see a slowdown, and the reason is that our predicates are very fast. It does not make a lot of sense to go and acquire a bunch of locks instead of just running the predicate itself if you want, for example, to check the amount of resources on a node.
A: It's like several simple mathematical operations, which are quite fast, or if you want to check something like the node name itself, it's just a simple string comparison. So there are some of these predicates that it doesn't make sense to have an equivalence cache for, and then there are the more complex predicates; for those we have actually tried to improve performance quite a bit.
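To show just how cheap the predicates being discussed are, here is a sketch of two of them. The type and function names are illustrative stand-ins, not the scheduler's real ones; the point is that each check reduces to a string comparison or a couple of integer comparisons, so caching their results (with the locking and hashing that implies) costs more than re-running them.

```go
package main

import "fmt"

// nodeInfo is a minimal stand-in for the scheduler's view of a node.
type nodeInfo struct {
	Name          string
	FreeCPUMilli  int64
	FreeMemoryMiB int64
}

// podRequest is a minimal stand-in for a pod's scheduling request.
type podRequest struct {
	NodeName  string // non-empty if the pod pins itself to a node
	CPUMilli  int64
	MemoryMiB int64
}

// fitsNodeName: the whole predicate is one string comparison.
func fitsNodeName(p podRequest, n nodeInfo) bool {
	return p.NodeName == "" || p.NodeName == n.Name
}

// fitsResources: a couple of integer comparisons, nothing more.
func fitsResources(p podRequest, n nodeInfo) bool {
	return p.CPUMilli <= n.FreeCPUMilli && p.MemoryMiB <= n.FreeMemoryMiB
}

func main() {
	n := nodeInfo{Name: "node-1", FreeCPUMilli: 1000, FreeMemoryMiB: 512}
	p := podRequest{CPUMilli: 250, MemoryMiB: 128}
	fmt.Println(fitsNodeName(p, n), fitsResources(p, n)) // true true
}
```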
A: Now, I don't know if you have been involved, but there have been at least a couple of major race conditions so far, and there have been several instances where a new feature or new addition to the scheduler was made without updating the invalidation logic properly. And the reason for this is that it's all rather complex, and it's very easy to overlook those places.
A
A: Alright, so gang scheduling is another item. Klaus is going to work on that, so we are hoping to have an early version of gang scheduling available in 1.13. This is not going to be integrated into the default scheduler; it is going to be an external component, basically kube-arbitrator, or whatever its name is.
A
A
A: So yes, I was saying that we are going to focus on the scheduling part of it. Gang scheduling has other aspects. One is, for example, admission, or I should say atomic admission. Let's say you are admitting a gang of pods, say there are 10 of these pods, and you want all of them to be scheduled together. If some of them cannot be scheduled, then you don't want any of them to be scheduled.
A: Similarly, if you want to admit some of those and you have quota, you want all of them to be admitted or none of them. Otherwise, if you, say, admit three of them and your quota is exhausted and the rest cannot be admitted, then there is no point in pinning those three, because now you are using up your quota for something that is completely useless to you. So we're trying to address the atomic admission at some point, but not in the next version. So: gang scheduling.
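The all-or-nothing quota behavior described above can be captured in a few lines. This is a sketch of the idea only, with invented names (`quota`, `admitGang`) and a single CPU dimension; it is not the admission mechanism being proposed.

```go
package main

import "fmt"

// quota tracks how much CPU a tenant may still consume.
type quota struct{ RemainingCPUMilli int64 }

// admitGang admits a gang of pod CPU requests all-or-nothing: either the
// quota covers every member and all are charged, or nothing changes.
// Partially admitting a gang would burn quota on pods that are useless
// without their peers.
func admitGang(q *quota, requests []int64) bool {
	var total int64
	for _, r := range requests {
		total += r
	}
	if total > q.RemainingCPUMilli {
		return false // admit none; quota is left untouched
	}
	q.RemainingCPUMilli -= total
	return true
}

func main() {
	q := &quota{RemainingCPUMilli: 1000}
	fmt.Println(admitGang(q, []int64{400, 400, 400})) // false
	fmt.Println(q.RemainingCPUMilli)                  // 1000
	fmt.Println(admitGang(q, []int64{300, 300, 300})) // true
	fmt.Println(q.RemainingCPUMilli)                  // 100
}
```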
A: Mack, you can go ahead and ask a question if you want. You're asking whether gang scheduling is going to be in core, or in an extender or external scheduler. It's actually going to be part of an external scheduler at this point, but we are going to revisit this in the next versions. We are going to see if it makes sense to bring this feature into the scheduler, but for that we also need to basically study and analyze the amount of complexity that it adds and the benefits that we are getting from it.
A: All right, with respect to the scheduling framework: as you may know, we've decided that we are going to bring some select features of the scheduling framework into the scheduler, and we're going to start with the next release. We are going to bring some of those features, particularly extension points around the "assume" phase of scheduling, which is basically the part where the scheduler finds a node and updates its cache to reflect that decision.
A: We are also going to bring some extension points around the bind phase, so that dynamic resource scheduling, particularly volume scheduling, or dynamic volume binding as it's called, can be implemented in the scheduler as a plugin, as opposed to a hard-coded piece of the scheduler. Today it's part of the scheduler, and we would like to move it to a plugin, and then also think about adding many of the other extension points and moving some of our predicates and priority functions over.
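The plugin idea above can be sketched as a pair of small interfaces. Note this is a guess at the shape, not the framework's actual API: the interface names, method signatures, and the `volumeBinder` example are all invented for illustration.

```go
package main

import "fmt"

// pod and node are minimal stand-ins for the real API objects.
type pod struct{ Name string }
type node struct{ Name string }

// ReservePlugin runs around the "assume" phase, after the scheduler has
// picked a node and updated its cache; BindPlugin replaces the default
// bind step. Both names are illustrative, not the framework's final API.
type ReservePlugin interface {
	Reserve(p pod, n node) error
}
type BindPlugin interface {
	Bind(p pod, n node) error
}

// volumeBinder sketches how dynamic volume binding could become a
// plugin instead of a hard-coded scheduler step.
type volumeBinder struct{ reserved []string }

func (v *volumeBinder) Reserve(p pod, n node) error {
	// Assume the pod's volumes on the chosen node (sketch only).
	v.reserved = append(v.reserved, p.Name+" on "+n.Name)
	return nil
}

func (v *volumeBinder) Bind(p pod, n node) error {
	fmt.Printf("bound %s to %s\n", p.Name, n.Name)
	return nil
}

func main() {
	vb := &volumeBinder{}
	var r ReservePlugin = vb
	var b BindPlugin = vb
	p, n := pod{Name: "web-0"}, node{Name: "node-1"}
	if err := r.Reserve(p, n); err == nil {
		b.Bind(p, n) // prints "bound web-0 to node-1"
	}
}
```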
A
A
D
A
A: No, actually, it's going to be kind of hard, I would say. I mean, in the current state, what we have today is basically, like many other pieces of Kubernetes, just describing or declaring, in a way, some of the features of Kubernetes that we would like to have, or some of the state of the cluster that we would like to get. It's not like building logic into those policy config files, right?
A: So, as a result, we don't think that we are going to support that, but having an external scheduler is something that... So, external plugins are something that the scheduler supports today: you can build your own logic and run it as a webhook for the scheduler. That's something that Kubernetes supports today. Those are not usually very fast, because you have to go over the network to call those webhooks, but that option exists as well. We're...
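A webhook extender of the kind mentioned above is just an HTTP endpoint the scheduler calls during filtering. The sketch below uses simplified, made-up request/response payloads (`filterArgs`/`filterResult`) rather than the extender's real API types, and an invented "keep SSD nodes" policy, just to show the shape; the network round trip per call is what makes extenders comparatively slow.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
)

// filterArgs and filterResult are simplified stand-ins for the
// extender's real request and response payloads.
type filterArgs struct {
	PodName string   `json:"podName"`
	Nodes   []string `json:"nodes"`
}
type filterResult struct {
	Nodes []string `json:"nodes"` // nodes that pass this extender
}

// filter keeps only nodes with an "ssd-" prefix; the policy is made up,
// the point is that arbitrary logic can live behind the webhook.
func filter(args filterArgs) filterResult {
	var keep []string
	for _, n := range args.Nodes {
		if strings.HasPrefix(n, "ssd-") {
			keep = append(keep, n)
		}
	}
	return filterResult{Nodes: keep}
}

func handleFilter(w http.ResponseWriter, r *http.Request) {
	var args filterArgs
	if err := json.NewDecoder(r.Body).Decode(&args); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	json.NewEncoder(w).Encode(filter(args))
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/filter", handleFilter)
	// http.ListenAndServe(":8888", mux) // uncomment to serve; the
	// scheduler would then be configured to POST filter calls here.
	res := filter(filterArgs{PodName: "web-0", Nodes: []string{"ssd-1", "hdd-1"}})
	fmt.Println(res.Nodes) // [ssd-1]
}
```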
E
A: We are going to... so basically, our current plan is to bring the ideas of the framework to the default scheduler, and the current scheduler supports extenders, like those external plugins. We are not going to remove the support for those extenders; at this point at least, we don't have any plans to. Yeah.
E
A: Okay, so another item that we have been working on in the past couple of cycles was finalizing the design of pod scheduling policies. Pod scheduling policies are policies that are mostly targeted at multi-tenant clusters: they allow or prevent some of the users from specifying certain scheduling policies.
A: So, for example, I could go and say: I have a pod which has anti-affinity to all other pods in this particular zone. So if I get lucky and my pod is scheduled, then no other pod will be scheduled in that zone. There are some similar problems like this that exist today, which are more relevant in a multi-tenant cluster, so we are going to work on those scheduling policies to allow an admin to put restrictions on what kind of scheduling rules can be specified on pods.
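For reference, the abusive pod described above can be written with the existing pod anti-affinity API, roughly like this (the pod name and image are made up; the empty `labelSelector` matches every pod, and by default the rule only considers pods in the same namespace):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: greedy        # hypothetical pod name
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - topologyKey: failure-domain.beta.kubernetes.io/zone
        labelSelector: {}   # empty selector: matches all pods (same namespace)
  containers:
  - name: app
    image: nginx
```

Once this pod lands in a zone, no other pod in its namespace can be scheduled there, which is exactly the multi-tenant abuse the proposed policies would let an admin forbid.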
A
F
A
A: Sure, yes. So, going back to the descheduler: Avesh and Ravi have been working on the descheduler for a while, and it's at a point where we think we should actually bring it in as a standard component. So we are going to have a meeting, I believe tomorrow; I'm going to have a meeting with Avesh and Ravi. Is the meeting tomorrow open to other folks as well, or is it... yeah.
A
A: Okay. There is another item with respect to taints, and that's basically taint-based eviction. We would like to move this feature to beta in 1.13; that's something we're going to work on. If you guys are interested, please let us know; we'll be happy to let you work on it.
A
C
A
C
A: Removing that feature, is that your question? Yeah, I think so. I don't feel that's something we need to rush into the next release, to be honest, since we just basically eliminated the rescheduler in 1.12 and we still don't know how that's going to work out. You know, it usually takes several months until a release goes out and is really tried and exercised, right? So for that reason, I don't want to rush to remove that alpha feature in 1.13. Maybe we can target it for 1.14, yeah.
A: Actually, it makes sense to deprecate it, for sure. I mean, it's an alpha feature; nobody should use it, because at any moment, according to Kubernetes policies, we could remove that feature without providing any backward compatibility. So it's not a wise idea to use the critical pod annotation for a production workload. But yes, we can actually go ahead and deprecate it, so we can do that for sure. I just didn't put it here.
A: All right, is there any other feature? Okay, these are most of the features that we wanted to target. There are a couple of other things that are not necessarily features that we still need to work on. One is to build an integration-test library for building benchmarks for the scheduler. Today we have a couple of benchmarks which are of the integration-test type, but these benchmarks are written in a rather hard-coded way.
A: We don't have any particular test libraries, and they are hard to expand and extend. So we would like to build a new library for expanding these benchmarks. This library should also allow us to add new benchmarks for the features that we will gradually be adding, and maybe we should add benchmarks for the existing features too. So that's something we should also target. Another thing, which you may have already seen in email, is the kind of cleanups or refactoring that Jonathan has worked on; he sent out a document about it.
A: We would like to start working on those as well in 1.13. Hopefully Jonathan will find some time to file issues for those soon, and once those issues are filed, we will put help-wanted labels on them, and we will be very happy to work with any of the folks in the community to get those implemented or make those changes.
C: One thing related to the integration-test library: last year, like in 2017 I believe, I started working on something called scheduler_perf. The tool existed even before I started on it, but the main idea is that it is a way to add integration tests so that we can have some benchmarks for the new features that are coming into the scheduler.
A: Alright, so we'd like to basically build a library that allows us to, for example, add new types of pods or change the number of nodes. Everything is currently very hard-coded, so anything that basically allows us to expand those tests and those benchmarks would be great, right?
C: So when I started, the idea was to have some sort of YAML manifest where we could take as inputs the node count, the type of pods, and the type of scheduler policies, meaning the predicates and priorities we could have while running those integration tests. That was the idea when I started, but I could not complete it.
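The manifest idea described above might look something like the fragment below. This is purely hypothetical: no such schema was implemented, and every field name here is invented to illustrate the inputs mentioned (node count, pod types, and the predicates/priorities of the scheduler policy).

```yaml
# Hypothetical input manifest for a scheduler_perf-style benchmark.
# Field names are illustrative only, not an implemented schema.
nodes:
  count: 1000
pods:
  - count: 3000
    template: pod-with-affinity.yaml   # hypothetical pod template file
schedulerPolicy:
  predicates: [PodFitsResources, MatchInterPodAffinity]
  priorities: [LeastRequestedPriority]
```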
C
A: Sounds good. There is a question on chat. Someone is asking why the annotation is not appropriate; I believe this is about the critical pod annotation, so somebody's asking why we shouldn't be using the critical pod annotation. Is that what the question is? Yeah. So Ray has already provided the answer to this question: the critical pod annotation is basically replaced by pod priority. You already have pod priority, which you can specify for system components. There are a couple of already-existing priority classes for those critical components, and you can use those priority classes there.
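Concretely, instead of the critical pod annotation, a system component can reference one of the built-in priority classes via `priorityClassName` (the pod name and image below are made up for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: critical-addon              # hypothetical pod name
  namespace: kube-system
spec:
  # Built-in class that replaces the critical pod annotation.
  priorityClassName: system-cluster-critical
  containers:
  - name: addon
    image: registry.example.com/addon:v1   # hypothetical image
```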
C
D
A: Yeah, so we actually don't have anyone... I don't know if the author himself wants to work on the implementation of pod scheduling policies, and even if he does, there might be more work than what he can do by himself. So that's one of the areas where we could use someone else's help, and also with respect to, you know, those refactorings and doing a lot of cleanups on our code; that's something else where we could use help.
A: Also the integration-test library: Ravi, I guess, was saying that he would be happy to work with the community on those integration-test libraries. If you're interested, you can help us with those as well. Or if you only have an interest in pod scheduling policies, we should probably arrange it with him so that you can work on those. So...
E
B
A
B
A: Yeah, so yeah, sure, I guess. Also, you know, you may want to work with Ravi as well on taint-based evictions. I'm pretty sure there is more than just one PR for moving taint-based eviction to beta, and there might be a need for adding more tests and such. So maybe you guys can work together; what do you think, Ravi? Yeah.