From YouTube: Kubernetes SIG Scheduling meeting - 2018-10-04
A
Alright, we've started recording. Just be aware that this meeting is recorded and will be published on the internet, so everything that you say may stay on the public internet forever. Let's now start today's meeting. I don't have a whole lot to talk about today, so we may actually go into a little bit more detail about a couple of features that we're working on, if that's okay with you folks. I was expecting Aish to join and give us some updates about the release process. I believe...
B
Yeah, not a problem. So I'm Kendrick; I'm the enhancements lead for 1.13, so I'm just covering this for Aish today. In case anyone's unfamiliar, Aish is running the release for 1.13, but the goal is that we're just trying to get around to all of the SIGs, to introduce ourselves, but also to give an update on 1.13 and sort of what the timeline is. As most of you are probably well aware, this is gonna be more of an aggressive schedule, because I believe it's two or three weeks shorter. It's only ten weeks long for this particular release, because we've got KubeCon plus the holidays that are approaching. So one of the things going into it, and really the theme that Aish is trying to drive, is stability. So there's really a request to say: let's not try to take any massive alpha features and drive them into 1.13, because there's probably not gonna be enough time.

I believe code slush begins early November, and code freeze is, I think, November 11th or November 15th, so it's really not too far away if you start thinking about what's going into it. The goal is to start thinking about which features were actually deferred from 1.12 that you could potentially get into 1.13. These could be things that just didn't make it in, or anything like that.

Looking at the features repo right now, there are 14 open issues; I think four or five of them are tagged with milestone 1.12. I'll probably start bugging people early next week to figure out if those are actually going to graduate to 1.13; if not, I'll probably remove them from the milestone. For things that are on 1.13, I will probably just leave a comment to figure out: what is your level of confidence that you could actually make it by code slush / code freeze time, given the time constraint that is there? That's really the major theme that Aish wanted me to relay to everybody here.

The other thing that she also wanted me to talk about was that there were some DaemonSet failures happening on the upgrade dashboard, and she wanted to know if there are any pending PRs that will fix this. I guess I should, before I get there...
A
I am more concerned about the features that we're moving to beta, because those are the features that become enabled by default, and those could potentially be more problematic than others. In particular, we were targeting taint-based eviction to move to beta; that's one of the features that allows pods to be evicted if there are issues with nodes, for example, or if we're draining nodes for upgrading, etc. I am a little bit more concerned about that feature, because it could potentially cause outages in clusters and issues of that sort.
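For context, here is a minimal sketch of what taint-based eviction looks like from the API side; the taint key and the 300-second grace period are assumptions for illustration, not details from the meeting. When the feature is enabled, unhealthy nodes are tainted NoExecute, and a pod stays on such a node only as long as a toleration like this allows:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// With taint-based eviction enabled, the node lifecycle controller
	// places NoExecute taints on unhealthy nodes; pods without a matching
	// toleration are evicted. TolerationSeconds bounds how long a pod may
	// stay on a not-ready node before it is evicted.
	grace := int64(300) // assumed grace period for the example
	tol := corev1.Toleration{
		Key:               "node.kubernetes.io/not-ready", // assumed taint key
		Operator:          corev1.TolerationOpExists,
		Effect:            corev1.TaintEffectNoExecute,
		TolerationSeconds: &grace,
	}
	fmt.Printf("tolerate %s for %ds before eviction\n", tol.Key, *tol.TolerationSeconds)
}
```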
A
So basically, instead of just our usual code freeze, which is, I believe, in mid-November, we can finish this two weeks before the code freeze, so that we give it a little bit longer soak time, and hopefully our upgrade tests and other tests will run for this feature for a longer period of time after the feature is enabled by default. And if we see that there are issues, we can hold it back in the following weeks.
B
Absolutely, I think that sounds like a good plan. I was actually just pulling it up; it looks like this is one that's been sitting around for a while, issue number 166, so it seems like it's something that has been open and heavily worked on for quite some time. So yeah, I don't see a problem: if you can wrap something up relatively quickly, see if we can get it in; if not, then pull it out, if things just aren't passing on the dashboard.
A
Yeah, then Wei and Ravi, I think you two can arrange with one another, and we should start working on this almost right away, because we are basically almost three or four weeks away from our deadline for this feature. So we should basically get it done pretty quickly, and then we can decide about the other features. I am much less worried about those; we have a couple of other features, like, for example, moving one of our priority functions to beta, which is not very risky. I'm not very worried about that, but this particular one is a little bit more of an issue. The other features that you folks are working on, I'm not very concerned about them; some of them are just finishing the designs and stuff like that. So thank you very much for giving us the heads up. And you wanted to actually talk about one issue, apparently, yeah?
B
Yeah, I will put the link that she had sent me in the chat. I'm sure you guys are even more familiar with this one than I am, but she said the main thing to ask about is the DaemonSet failures on the upgrade dashboard. She wanted to know if there are any pending PRs that are gonna fix this. She said the tests themselves are held by SIG Apps, but that Klaus was working heavily on upgrade fixes in 1.12.
A
These feel to me like more of a lifecycle event of DaemonSets, as opposed to scheduling them. So if that's the case, then it's the DaemonSet controller. But anyway, we're gonna take a look at this anyhow, and we are gonna update the bug. Well, actually, there isn't one; this isn't filed as a bug. Actually, I don't think there is any issue filed for it. Or do you know, Kenrick, if there is any issue filed for this?
A
All right. Actually, let me give you an update about the equivalence cache. Harry has written a document about how to change the equivalence cache, and I've linked the document to our meeting notes. Feel free to take a look at it, and if you have any questions or comments, you can leave them there. I feel that if we go down that path, we basically need to make major changes to various data structures of the scheduler, and for that reason I asked Harry to prototype this new design. If you remember from our previous discussions, the new design is based on trying to avoid scheduling pods which are unschedulable and have the same spec.
A
So if you have, for example, a thousand pods which have the same spec, and we find that the first one is unschedulable, there is no reason to check the other 999 pods, because all the other ones with the same spec are going to be unschedulable as well, as long as the state of the cluster has not changed. So that's the idea of this new design of the equivalence cache, and it might require some changes to the scheduling queue.
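To make the idea concrete, here is a minimal sketch of that skip-by-equivalence-class logic; the specHash helper and the generation counter are hypothetical stand-ins for illustration, not the actual scheduler code:

```go
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
)

// PodSpec stands in for the scheduling-relevant fields of a pod's spec.
type PodSpec struct {
	CPUMillis int
	MemMB     int
}

// specHash is a hypothetical helper: pods with equal hashes belong to the
// same equivalence class for scheduling purposes.
func specHash(s PodSpec) [32]byte {
	b, _ := json.Marshal(s)
	return sha256.Sum256(b)
}

// unschedulable records the cluster generation at which an equivalence
// class was last found unschedulable.
var unschedulable = map[[32]byte]uint64{}

// clusterGen is bumped on any node/pod add, update, or delete, which
// invalidates earlier "unschedulable" verdicts.
var clusterGen uint64

func trySchedule(spec PodSpec, runPredicates func(PodSpec) bool) bool {
	h := specHash(spec)
	if gen, seen := unschedulable[h]; seen && gen == clusterGen {
		return false // an identical spec already failed at this cluster state: skip
	}
	if !runPredicates(spec) {
		unschedulable[h] = clusterGen
		return false
	}
	return true
}

func main() {
	spec := PodSpec{CPUMillis: 64000, MemMB: 1 << 20} // too big for any node
	noFit := func(PodSpec) bool { return false }
	fmt.Println(trySchedule(spec, noFit)) // predicates run: false
	fmt.Println(trySchedule(spec, noFit)) // skipped via the cache: false
}
```

With a thousand identical pods, only the first one pays the full predicate cost per cluster state; the other 999 are rejected from the cache until the cluster changes.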
A
That's one update. I also would like to thank our contributors: thank you so much, folks, for contributing to the issues that Jonathan filed for refactoring parts of the scheduler. Some of those PRs are already merged, or almost ready to be merged. I really appreciate your efforts. And the last one, which I would actually like to talk about with Wei, is going to be in a little bit more detail.
A
This turned out to be a little bit hard, given that we would like to support a single pod having affinity toward itself. What I mean by that is that when a pod has affinity to self, and no other pod in the cluster matches that affinity, we should mark that pod as schedulable. The reason for this is that when you are scheduling, let's say, a ReplicaSet, or any other collection whose pods have affinity to one another, none of them would ever be scheduled if we didn't support this: the first one basically needs to be scheduled so that the other ones that have affinity to it can be scheduled as well. If we cannot schedule the first one, then none of them will get scheduled. So affinity to self is there to support this particular use case, and our scheduler has been supporting it for a long time now; it's actually a valuable feature.
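As an illustration, here is a sketch of the kind of spec involved; the app=web labels and the zone topology key are assumptions for the example. Every replica carries the label and also requires affinity to pods with that label, so the first replica placed can only match itself; treating that self-match as satisfying the term is what lets any replica land at all:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Each replica both carries app=web and requires affinity to app=web,
	// so the very first replica scheduled matches only itself.
	affinity := corev1.Affinity{
		PodAffinity: &corev1.PodAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "web"}, // the pod's own labels
				},
				// Assumed zone topology key for this era of Kubernetes.
				TopologyKey: "failure-domain.beta.kubernetes.io/zone",
			}},
		},
	}
	fmt.Printf("replicas co-locate within one %s\n",
		affinity.PodAffinity.RequiredDuringSchedulingIgnoredDuringExecution[0].TopologyKey)
}
```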
A
Sometimes you want to, you know, schedule all the replicas of a ReplicaSet in a single zone, for example; you should be able to do that. But with support for matching multiple pods against affinity, we ran into an issue: to figure out whether there is any pod anywhere in the whole cluster that matches all the affinity terms, we need to look at each node of the cluster and check the labels of that node. And what this means is that we cannot know the answer to the question "is there any pod, or group of pods, in the cluster that matches the affinity rules of an incoming pod" without waiting for the actual predicate to run; only after that can we answer it.
A
Do you think that's the desired behavior? The reason that we decided to go with this is that you would like the set of nodes to be shrunk during the process of running the predicates, for better performance, obviously, right? So instead of checking all the predicates for nodes that have already failed some earlier predicate, we just focus on the nodes that have passed the previous predicates. That's right, yeah. So the issue now is that, by the time we are checking affinity, the nodes are already filtered. Yes, so there may be some pods in the cluster that match the affinity, but they are not checked, because the nodes they run on were filtered out; for example, those nodes don't have enough memory for the incoming pod. Those nodes are filtered out, so by the time we reach the affinity predicate, they are already gone.
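Here is a toy sketch of that ordering problem; the node names and predicates are made up. Predicates run in sequence and each one sees only the nodes that survived the earlier ones, so an affinity check that scans pods on the surviving nodes never sees pods on nodes that were filtered out first:

```go
package main

import "fmt"

type node struct {
	name   string
	freeMB int
	pods   []string // pods currently running on the node
}

type predicate func(node) bool

// filter applies predicates in order; later predicates never see nodes
// that an earlier predicate rejected.
func filter(nodes []node, preds []predicate) []node {
	for _, p := range preds {
		var survivors []node
		for _, n := range nodes {
			if p(n) {
				survivors = append(survivors, n)
			}
		}
		nodes = survivors
	}
	return nodes
}

func main() {
	nodes := []node{
		{name: "n1", freeMB: 128, pods: []string{"web-1"}}, // hosts the matching pod
		{name: "n2", freeMB: 4096},
	}
	enoughMemory := func(n node) bool { return n.freeMB >= 1024 }
	// After the memory predicate, only n2 survives; an affinity check that
	// only scans surviving nodes can no longer see web-1 on n1.
	fmt.Println(filter(nodes, []predicate{enoughMemory}))
}
```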
A
So yeah, exactly. When we were not doing multiple-pod matching for affinity, we were sure that a single pod would satisfy all the affinity terms, right? So it was very easy, at the metadata check, to basically say whether there is any matching pod or not. All right, yeah; now that we have multiple-pod matching, we can't. Got it.
A
Exactly. Well, when I filed this issue to support multiple-pod matching, I of course wasn't aware of this problem, and after we worked together on this PR (and thank you for your efforts on this PR), we realized that it's a little bit more complex than we initially thought. So if we want to support multiple-pod matching, then it could actually impact the scheduler's performance greatly, because we would need to initially, maybe, check...
A
Yeah, that sounds great. And I would like to emphasize again, toward the end of our meeting today, that, you know, taint-based eviction is of high priority for us. To you, and both you and Ravi, actually: that's a high priority for us, so please focus most of your attention on that feature at the moment, because we want to get it out the door as soon as possible, so that we can get it soaking for a longer period of time before we get to the code freeze.
C
We actually had a couple of questions for you, probably on Slack or some other place. So, what I understood from Wei's implementation, what he wanted to do, like just now: he wants to have kind of a trie structure, with, like, the labels as the roots and the individual nodes or pods as the leaves. Sorry.
D
Basically, my proposal is a trie-like structure. So basically, like I mentioned, the keys are label key-value pairs and the values are node names. It should be added to the scheduler cache, and once a node or pod is updated or deleted, the cache should be updated too, so that by exposing this cache to our predicate metadata, for a single incoming pod, by checking its pod affinity terms one by one, I can know which pod affinity term matches which node names, right?
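Here is a rough sketch of that structure as described; the type and function names are illustrative, and the update/delete bookkeeping is simplified. It is an inverted index from a label key=value pair to the set of nodes hosting pods with that label, which predicate metadata can then intersect per affinity term:

```go
package main

import "fmt"

// affinityIndex maps a "key=value" label pair to the set of nodes running
// at least one pod that carries that label.
type affinityIndex map[string]map[string]bool

// addPod indexes a pod's labels under the node it runs on. A real cache
// would also handle pod updates and deletions (e.g., with refcounts).
func (ix affinityIndex) addPod(nodeName string, labels map[string]string) {
	for k, v := range labels {
		key := k + "=" + v
		if ix[key] == nil {
			ix[key] = map[string]bool{}
		}
		ix[key][nodeName] = true
	}
}

// nodesMatchingAll intersects the node sets of several affinity terms,
// each reduced here to a single required "key=value" label.
func (ix affinityIndex) nodesMatchingAll(terms []string) map[string]bool {
	result := map[string]bool{}
	for n := range ix[terms[0]] {
		result[n] = true
	}
	for _, t := range terms[1:] {
		for n := range result {
			if !ix[t][n] {
				delete(result, n)
			}
		}
	}
	return result
}

func main() {
	ix := affinityIndex{}
	ix.addPod("n1", map[string]string{"app": "web", "tier": "frontend"})
	ix.addPod("n2", map[string]string{"app": "web"})
	fmt.Println(ix.nodesMatchingAll([]string{"app=web", "tier=frontend"})) // map[n1:true]
}
```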
D
So suppose we have N node names matching one affinity rule, sorry, affinity term; then the total time complexity would be, yeah, because you can do the intersection piece by piece, roughly the number of affinity terms multiplied by the length of the node-name list in each match, right? Yeah.
A
Yeah, but anyway, you know, our API has always been vague about whether we support multiple pods matching affinity, and in our previous implementation we decided to not match multiple pods, and it's fine. If this is going to be very time-consuming and cause performance degradation, it's fine to just keep supporting what we have been supporting so far, and we haven't actually seen a lot of demand or requests from our users asking for affinity to be matched against multiple pods. So maybe we should not put a whole lot of effort into it at the moment. But if you find a great solution, of course I will support having it, yeah. And by the way, even if we decide not to go with your PR, I think it's still valuable to add some of the tests you have added in this PR to the affinity tests. Maybe the tests need to be slightly modified, but they will still be useful.
C
Thanks a lot, Bobby. By the way, I have one more question for Bobby; I'm sorry I'm taking so much of your time. You have asked a question on the PR for promotion of the resource limits priority function. So as of now, the problem, at least in the OpenShift Online clusters that we host, is that unless the feature is beta, we will not allow it to be used by customers. So we do not have real-usage clusters, but our QE team has tested it.
A
...access to the master now as well. We do, but on GKE, if you want to really try Kubernetes, like open-source Kubernetes, we bring up a cluster of our own on GCE, and other cloud providers can do the same thing. So I don't know if you have access to such environments, so that you can bring up real open-source Kubernetes, like a raw environment, and test it. It would be great if we could actually test it before we...
A
You can check to make sure that there are some e2es, I mean, yeah. Well, if you just check that there are e2es that have, like, limits on pods, that means this feature is going to get exercised. But it would also be great if you add specific tests for this feature, so that we know that some nodes are preferred over the others by this particular one, yeah.
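For reference, here is a toy sketch of the behavior such a test would exercise; this is a paraphrase of the idea behind the resource limits priority, not the actual implementation. A node whose allocatable resources can cover the pod's limits scores higher than one that cannot, so with the feature enabled some nodes are visibly preferred:

```go
package main

import "fmt"

// resourceLimitsScore paraphrases the idea behind the resource limits
// priority: score 1 if the node's allocatable resources cover the pod's
// limits, 0 otherwise (a hypothetical simplification).
func resourceLimitsScore(allocCPUMillis, allocMemMB, limitCPUMillis, limitMemMB int) int {
	if allocCPUMillis >= limitCPUMillis && allocMemMB >= limitMemMB {
		return 1
	}
	return 0
}

func main() {
	// Pod with limits of 2 CPU / 2 GiB: the larger node is preferred.
	fmt.Println(resourceLimitsScore(4000, 8192, 2000, 2048)) // 1
	fmt.Println(resourceLimitsScore(1000, 1024, 2000, 2048)) // 0
}
```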