From YouTube: Kubernetes SIG Scheduling Meetings 20170710
Description: No description was provided for this meeting.
A: I haven't put the recordings on YouTube yet for the last couple of meetings (I promise I will do it very soon), but all the meetings now are recorded and go up on the YouTube page for Kubernetes/CNCF, or something; I can't remember exactly what it's called.
So the agenda for today's meeting was going to be to talk about 1.8; at least, that was the item that I had. Was there anything else people wanted to talk about?
A: The reason we're meeting this week is that we cancelled last week, because it was a day that a lot of people needed to take off; even though it's not an official holiday, it's the day before the 4th of July. Anyway, so we may or may not meet next week. We could discuss that also, but is there anything besides 1.8 that folks wanted to discuss?
B:
A: Okay, that's fine! If you tell us what it is, we may start talking about it, so it's probably smart not to say what it is; I don't know what it is, by the way. Okay, anything else people want to talk about? I guess not, so... Oh, I know, one of the things I wanted to talk about related to 1.8, and I mentioned this in email: I am interested in getting feedback from people about designating official approvers and reviewers for the items that we identify for each release.
A: My understanding is that at least some other SIGs do some variant of this, and the main reason is, well, there are a couple of reasons. One is so that people who are interested in a particular feature get cc'd as reviewers on it, and also to make sure that we're clear on who has to approve an issue, or the PRs corresponding to a feature, for it to get merged, and who doesn't need to approve. It's not actually intended to make it harder to get features merged; it's actually the opposite.
A: It's to make sure that people who implement stuff aren't left waiting around wondering, should I wait another week to see if someone else has more feedback or an objection or something like this? The idea is that by making it clear ahead of time who has to approve, the people who are working on features will know when they're done, essentially. I mean, technically we do have that already, because we have, you know, approvers in OWNERS files.
A: There's stuff like that, but the people who might be responsible for approving an issue may not all technically be approvers, and it's important to make sure they're OK with something. So that was kind of the idea there. Approvers would be people who must approve, and once all of those people approve, then it can be merged; reviewers would be people who are interested in participating in the discussion and the reviews for a PR, and who get cc'd on it, but who don't officially have to approve. Is there any kind of...?
B:
A: My understanding from the folks who have done it elsewhere is that it's worked out pretty well. My only concern, and it has nothing really directly to do with this idea, is: when there are issues that people outside of SIG Scheduling might have strong opinions on, how do we ensure they get looped in? Like I said, this is almost orthogonal to this idea, because, you know, it's hard for everyone to keep up with everything.
A: We should make sure that those people are at least reviewers and are notified about what's going on with an issue. That's actually one of the reasons why we have this dependencies column in the spreadsheet: to try to identify other SIGs that might be interested in an issue, or cases where there's actually a hard dependency from our feature, or to our feature, from or to another SIG. So that's just something to keep in mind.
A: Something to keep in mind when we're designating approvers and reviewers for stuff. All right, well, it doesn't sound like anybody has an objection to this idea. I'll give it one more try: if somebody doesn't want to speak up here but has some opinion on it, feel free to send me an email directly, or send it to the mailing list, and if I don't hear anything else in the next day or so, we can assume that this is something we want to do.
A: Maybe folks can just bring the spreadsheet up on their computers, and we can go through the items real fast; if people have something to say, just speak up. First, we have priority and preemption, planned to be implemented in alpha for 1.8.
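For reference, a rough sketch of what the 1.8-era alpha priority API looked like: a cluster-level PriorityClass that pods reference by name. Field names here follow the alpha schema as I recall it and should be treated as illustrative:

```yaml
# Hedged sketch of the 1.8 alpha priority API; values are examples.
apiVersion: scheduling.k8s.io/v1alpha1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000          # higher value schedules first and can preempt lower pods
globalDefault: false
description: "Example class for critical workloads"
---
apiVersion: v1
kind: Pod
metadata:
  name: important-app
spec:
  priorityClassName: high-priority   # resolved to the numeric value above
  containers:
  - name: app
    image: nginx
```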
A: Bobby is sitting next to me (he's not on camera), but he is working on that. By the way, feel free to add yourself as a reviewer on the spreadsheet for things that you're interested in, and we'll try to figure out some scheme.
A: Actually, you can add yourself as an approver or a reviewer, and if the list of approvers gets too long or something, then we can have a separate discussion to try to filter it down, because we don't want everyone in the SIG to be an approver for every issue; that would kind of be a disaster. But the spreadsheet is editable by anyone who's on the sig-scheduling or kubernetes-dev mailing list, so feel free to add your name as an approver or reviewer for any of these in the spreadsheet. So, yeah, Bobby's working on priority and preemption; the goal is to get that into 1.8. He has written a couple of design docs already that he has sent out; if people haven't seen those, they should take a look. Bobby, are there more design docs related to priority and preemption that you're planning to send out?
D:

A: Yeah, that's right. Okay, well, all right. Next we had the rescheduler; I know Avesh sent out a design doc for this. I didn't look at it yet, and I don't know if he's committing to implementing something in 1.8 for it, but it's not going to be inside core Kubernetes anyway.
A: The idea was that network policy... I forget the details, but in network policy, if you refer to a pod in another namespace, I think they were using a label selector on the namespace, and we were using a list of namespace names, or something like that, and we were trying to unify the two approaches. I can't remember which approach they were actually using, but I think the idea was that we would change ours to match theirs.
A: You can take a look at the issue that's linked if you want to see the details. I need to check with the person who is implementing that, but my recollection is that it was just blocked on getting enough review bandwidth; it was not a huge change. It was so that the way you select pods that are in another namespace when you're doing pod affinity matches the way you select pods in a namespace when you're doing network policy.
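For context, this is the shape of the two mechanisms being compared; the sketch below only illustrates the difference, it is not the proposal itself. In pod affinity a term lists namespaces by name, while NetworkPolicy selects namespaces with a label selector:

```yaml
# Pod affinity today: namespaces are listed by name (fragment of a pod spec).
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: cache
      namespaces: ["team-a", "team-b"]     # explicit namespace names
      topologyKey: kubernetes.io/hostname
---
# NetworkPolicy: peers in other namespaces are chosen by label selector
# (fragment; team=a is an example label).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          team: a
```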
A: So that's actually a great point. I mentioned earlier that one of the reasons for approvers is so that it's clear when you can get the PR merged, when you're done, but it's also a good way to make sure we don't take on too much work for the SIG. I'm not saying this is what happened in this case, but it just reminded me that by making sure there are approvers up front, we know whether there's enough review bandwidth for all the features that we're planning for the release.
A: So when you sign up as an approver, you're kind of committing to be available to review something in a timely fashion for the release. So, yeah, if you guys have already reviewed it and such, I can look into that and see; maybe it's done and we can just approve it and be done with it. So thanks for mentioning that, and thanks for reviewing it.
A: Then our UX for advanced scheduling features: I'm way behind on my email, but I think Derek sent email to the mailing list saying that he was not going to have time to work on it for 1.8. I need to check the email, but I think that's what he said, and I'll delete the item once I verify that. Component config: this is something Tim had started working on in the last release and has indicated he's going to continue working on. I think he said he wasn't able to make the meeting today. Tim, are you here?
A: This is the feature that changes the logic in the node controller to do evictions based on NoExecute taints instead of based on node conditions. We'll still have the node conditions, like the node NotReady condition and the Unknown condition, which means the node is not reachable from the master; the conditions will still be there, but the logic for doing the evictions changes.
A: After the five minutes, the eviction logic in the node controller will be based on taints that get placed when those node conditions arise. The main benefit of this is, well, besides making it a little more obvious when those... well, I guess it's not actually more obvious, node conditions are pretty obvious. The main benefit is that you can use a toleration to change the five-minute default timeout to any value you want, on a per-pod basis, for how long you want to wait before you're evicted.
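A minimal sketch of what that per-pod override looks like. The taint keys shown are the ones that stabilized in later releases (they were alpha-prefixed at the time being discussed), so treat them as illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: patient-pod
spec:
  tolerations:
  # Stay bound for 10 minutes, instead of the 5-minute default,
  # when the node goes NotReady or becomes unreachable.
  - key: node.kubernetes.io/not-ready
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 600
  - key: node.kubernetes.io/unreachable
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 600
  containers:
  - name: app
    image: nginx
```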
A: So we have someone who has been working on that, and actually Marek volunteered to do the reviews for it, so I'm optimistic we can get that into 1.8. DaemonSet scheduling needs some discussion; I recommend people take a look at the issue, and if you have opinions on the discussion there, please chime in.
A: Brian Grant brought up a point about how this... by the way, I should say what this DaemonSet scheduling item is; I did not write a thorough explanation of it. The idea is to remove scheduling from the DaemonSet controller, so that the DaemonSet controller works more like a normal controller, where it just creates pods, and then we use the default scheduler to schedule those pods, and we would use node affinity (I guess mostly just node affinity) to ensure that the pods get scheduled on the right nodes.
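The rough idea, sketched below: the controller would stamp each pod it creates with a required node affinity for one specific node, and the default scheduler does the rest. Using the standard hostname label as the match key is just one way to pin to a node; the node name is an example:

```yaml
# Fragment of a DaemonSet-created pod spec, pinned to a single node
# via node affinity so the default scheduler places it there (sketch).
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname   # assumes the standard hostname label
          operator: In
          values: ["node-17"]           # example node name
```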
A: But there was some question about what happens when someone replaces the default scheduler with a custom scheduler: maybe it won't schedule DaemonSet pods the right way, and DaemonSet pods are often critical system pods (we might use a DaemonSet to schedule kube-proxy or something like that), and, you know, this might break the cluster. So maybe the DaemonSet controller should continue to do its own scheduling, and not have that be delegated to the default scheduler, which the user might replace.
A: And so there's some discussion about that, and there's also discussion... Klaus wrote a doc: assuming we do separate out the scheduling, so that it's done by the regular default scheduler, there are some design options for how exactly that would work, and some people have commented on Klaus's doc, but we didn't really get convergence on that yet. So there's still some discussion about what's the right way to do it.
A: Instead of only being able to read the scheduler policy config from a file, the scheduler will also be able to read it from a ConfigMap, which makes it easier for users to change. The policy config is things like which predicates are enabled and which priority functions are enabled.
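The policy itself is a JSON document; the sketch below shows it wrapped in a ConfigMap so the scheduler could read it from the API instead of a file. The ConfigMap name, namespace, and data key are assumptions for illustration; the predicate and priority names are real ones from that era:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: scheduler-policy     # name and key below are illustrative
  namespace: kube-system
data:
  policy.cfg: |
    {
      "kind": "Policy",
      "apiVersion": "v1",
      "predicates": [
        {"name": "PodFitsResources"},
        {"name": "MatchNodeSelector"},
        {"name": "NoDiskConflict"}
      ],
      "priorities": [
        {"name": "LeastRequestedPriority", "weight": 1},
        {"name": "BalancedResourceAllocation", "weight": 1}
      ]
    }
```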
A: It also covers the weights for the priority functions, stuff like that. By the way, if people have questions about these (I'm going kind of fast), feel free to jump in; I just thought I should explain what these things are, so that people who want to sign up to be a reviewer or even an approver for them know what we're talking about, although I guess most of them have links to issues anyway, so you can check those out. Next: represent node conditions that block scheduling using taints.
A: This is kind of complementary to, but not the same thing as, the other taint item we talked about. Today there are two node conditions... well, really there's kind of one node condition that triggers eviction: the Ready node condition. It can be either false or unknown, meaning the kubelet is either having a problem or the node is not reachable from the master, and those trigger eviction.
A: And if we want to use the default scheduler for scheduling DaemonSet pods, it's useful to have the conditions that block scheduling be represented by taints, because then the DaemonSet pods can use tolerations and can schedule onto those nodes. Whereas today, when it's a node condition that the scheduler just obeys for every pod, there isn't really any way to say that for some pods this node condition should prevent scheduling, and for other pods it should not.
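So, once a condition like memory pressure is surfaced as a taint, a DaemonSet pod can opt out of it with a blanket toleration, which a bare node condition cannot express per pod. The taint key below is an assumption for the sketch, since the actual keys were settled later:

```yaml
# Illustrative fragment: a DaemonSet pod tolerating a condition-derived
# NoSchedule taint that would block ordinary pods.
tolerations:
- key: node.kubernetes.io/memory-pressure   # assumed key for this sketch
  operator: Exists
  effect: NoSchedule
```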
A: So this is to some extent a prerequisite for moving DaemonSet scheduling to the default scheduler, and someone from Google (you may not recognize the GitHub handle there; it's one of the folks at Google in Warsaw) has offered to work on that. You can check out the issue if you want to know what that is. And then these last few items were added by Klaus.
A: He had benchmarks for predicates and priority functions, updating the scheduler to use client-go, and this last one, adding a max number of replicas per topology domain. This last one, I believe, although all the work has been done for it (I need to go back and check), may just be blocked on getting a review. The idea for it was that today we have node anti-affinity... sorry, pod anti-affinity, and it means: do not schedule this pod in the same topology domain.
A: Basically, we have this topologyKey thing that represents a topology domain, so it means: don't schedule this pod in the same topology domain as any of these other pods. That's the way it works today. This feature would allow you to give a number, so that instead of saying "don't schedule this pod if there are any other pods in the same topology domain", you could say "don't schedule this pod if there are more than N other pods in this topology domain".
A: So, concretely, you could say: schedule up to three pods per rack from this ReplicaSet, or something like that, whereas today you can only say: schedule at most one per rack from this ReplicaSet.
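Today's exclusive form looks like the fragment below; the proposed numeric limit would add something like a max-count knob to the same term. The extension field name is purely hypothetical, since the API had not been settled, and the rack label is an example:

```yaml
# What you can express today: at most ONE pod with app=web per rack.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: web
      topologyKey: example.com/rack   # assumed rack label
      # Proposed extension (hypothetical field name): allow up to N
      # matching pods per topology domain instead of zero, e.g.:
      # maxCount: 3
```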
A: So most of the work for that, from what I recall, has been done. Is there anything else that folks are planning to work on in 1.8 that they would like to have listed here?
A: All right, either my audio is off or nobody has anything else to say. If anybody thinks of something else, feel free to put it on the spreadsheet. Not everything needs to go on the spreadsheet; small bug fixes and small changes don't need to go on it. But anything of reasonable size we should discuss as a SIG, and we should certainly make sure there's an approver who has bandwidth to review it, so it's good to put it on the spreadsheet. All right.
B: This is just kind of a heads-up, because I think there's a proposal being worked on that will help to motivate it better, but just for background: in the resource management workgroup there's some ongoing work to develop these things called device plugins. The idea is that it's an extension that connects using gRPC and has the ability to advertise new resources, and part of the design was that those resources would fit in a new namespace called extensions.
B: But after that point they would essentially be handled almost identically to an opaque integer resource. And so one idea that came up was that the logic in the scheduler could be relaxed to treat those resource names in the same way as opaque integer resources are currently handled, so that there's not a complete duplication of that logic.
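For background, an opaque integer resource (as it existed around that time) is advertised on a node and then requested like any other countable resource; the scheduler only does integer accounting on it. The resource suffix below is an example name:

```yaml
# A pod requesting an opaque integer resource; "foo" is an example name,
# and requests must equal limits for these resources.
apiVersion: v1
kind: Pod
metadata:
  name: oir-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        pod.alpha.kubernetes.io/opaque-int-resource-foo: "1"
      limits:
        pod.alpha.kubernetes.io/opaque-int-resource-foo: "1"
```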
A: I haven't been following that, but I know there's a lot of interest in having more options for how to add your own resources without the Kubernetes core having to get involved and be modified to support them. So certainly that sounds like a good feature, and it doesn't sound like the change you're suggesting to the scheduler is very large, since, like you said, we're already kind of doing that for opaque integer resources. It sounds like something people might be interested in checking out, by the way.
A: I'm going to ask a dumb question here, and I feel like maybe this was covered before, but what is an arch document? I assume it's going to document the architecture of this feature, but is this something new that the architecture SIG is requiring for new features? I haven't been following what the architecture SIG is doing. You don't know? Okay.
C: I have a question regarding the benchmarks that Klaus has created. One of the things that we were trying to do, like when Jay was here with Red Hat, was scheduler tests for all the predicates and priorities; it's kind of a kitchen sink, where we can randomly choose a predicate and priority and run the tests against those particular predicates and priorities. So is that something that will be owned by SIG Scheduling or SIG Testing? I'm kind of interested in knowing that.
A: That's a good question. I guess... I don't know. I mean, do you know what kinds of tests SIG Testing has been owning? My assumption, without having any information, is that they're probably mostly involved in the testing infrastructure, as opposed to specific tests written for specific features, and that therefore these tests should be owned by SIG Scheduling, but that might be an incorrect assumption. Do you know more about what other tests they've been owning?
C:

A: Yeah, I don't either; we could try to look into that. My default response would be that SIG Scheduling should own these tests, because the people who might be making changes that would worsen or improve the performance of predicates and priority functions are most likely going to be people who are involved in SIG Scheduling, so it seems to me like those should be owned by SIG Scheduling. But, you know, if it turns out SIG Testing is taking ownership of...
C: So, as of now, the OWNERS file includes the people from SIG Scheduling, so I'm not sure what the long-term goal is. If you want to have this done in CI, and then find out, when we add a new predicate or priority, whether it's causing any regression or not; if you want to have something like that being done in CI, shouldn't we have SIG Testing on there too? Yeah.
A: All right, I guess not; I guess it's a short meeting this week. I'll send a question to the mailing list about whether people want to meet next week. If we don't meet next week, then, going back to the regular schedule, it would be three weeks from today, and that might be too long. So maybe we should have at least a short meeting next week; I'll send email to see if there are topics. If there's no agenda, then we can cancel, but for now let's assume we will meet next week. All right, thanks, everyone.