From YouTube: Kubernetes SIG Scheduling Meetings 20171012
E: Yes, so this is just — well, as Abby noticed, a behavior change related to the extended resources PR. So in 1.8 we have a PR that introduces extended resources. Basically, we consider all resources named outside the default kubernetes.io namespace as extended resources, and we set the scheduler predicate to treat extended resources like opaque integer resources, and we also check whether the node has enough resources accounted for each extended resource.
E: So this is a behavior change for any resource named outside the default kubernetes.io namespace, because before this PR the scheduler wouldn't reject such pods even though there was no such resource advertised by the node. But now, with this PR, we would block them: a pod using a generic resource would get blocked before we even send the pod to any scheduler extender.
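The predicate behavior being described can be sketched roughly as follows. This is a simplified illustration, not the actual scheduler code; the function name, resource names, and quantity representation are all hypothetical:

```go
package main

import "fmt"

// podFitsExtendedResources sketches the predicate described above: any
// resource named outside the kubernetes.io namespace is treated as an
// extended resource, and a node is rejected unless it advertises enough
// of each one. Quantities are plain int64 counts for simplicity.
func podFitsExtendedResources(requests, allocatable map[string]int64) (bool, string) {
	for name, qty := range requests {
		avail, advertised := allocatable[name]
		if !advertised {
			// Before the 1.8 PR the scheduler would let such pods
			// through to extenders; with the PR they are rejected here.
			return false, fmt.Sprintf("node does not advertise %s", name)
		}
		if avail < qty {
			return false, fmt.Sprintf("insufficient %s", name)
		}
	}
	return true, ""
}

func main() {
	node := map[string]int64{"example.com/gpu": 2}
	fits, reason := podFitsExtendedResources(map[string]int64{"example.com/gpu": 4}, node)
	fmt.Println(fits, reason)
	fits, reason = podFitsExtendedResources(map[string]int64{"example.com/license": 1}, node)
	fmt.Println(fits, reason)
}
```

The second call shows the behavior change under discussion: a resource the node never advertises now fails the predicate instead of passing through to an extender.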
A: So one thing which was not very clear is this: as far as I can remember, in our predicate we were checking extended resources, and if there were not enough extended resources, in the past we were still rejecting — well, basically, we were not considering that node as feasible, if I am not wrong.

So no, that's not what happened before.
G: Yeah, so the thing that I wanted to say is that there are use cases where people tapped into these resources in the past, specifically around, say, software licenses and so on, which is not really a node resource, and it's not part of the node lifecycle. So maybe it's just me — we don't have a concrete use case for such things yet; it's not yet well defined in the ecosystem.
G: As of today, the only way of figuring out what resources are available in a cluster is by probing node objects, but it doesn't make sense to associate cluster-level resources with node objects. So the discovery aspect — which I guess is what we commonly refer to as advertising — is probably not going to be straightforward for such resources. But consumption can still happen through scheduler extenders, and maybe discovery through third-party objects and so on.
C: Seems a little weird — it sounds to me like there are a couple of different use cases and mechanisms there that could potentially be leveraged in different ways, right? There's nothing to say that you couldn't leverage the ability to use a dedicated resource, reserve a cluster-wide resource using something like quota, and have your own plugins. There's nothing preventing you from doing that, because you could guarantee you don't over-utilize something through admission control itself.
G: I'm not sure how admission control and quota would play a role in the scheduling aspect here, but I do agree that for advertising a cluster-level resource there needs to be some answer — for any resource that's not a first-class resource, there needs to be some solution for quota or limit range, the whole gamut of resource APIs.
G: I think there is one — I mean, personally, my team is trying to tackle introducing a cluster-level resource, which would be, like, Google TPUs, and so we are starting to hit these issues one at a time. So this might be one of those issues, and thanks for actually pointing it out. Yes.
E: I think currently we don't really have a way to support a cluster-level resource in some generic way. I think we already have, like, volume as a cluster-level resource, but volume is supported natively — kind of like a native resource — within Kubernetes, and we don't really have an architecture to support a generic cluster-level resource, yeah.
A: I think, overall, cluster resources should be distinguished from node-level resources, and whatever we have today is not enough to cover them. I mean, the predicates that we have today either want to check everything on a single node, or ignore some of these resources completely and let the extenders handle them. I think that's not enough — whatever we have today is not enough — and maybe extended resources is not either. At least in my opinion, it's not a good solution for handling cluster-level resources.
C: That's what I'm saying — but it would be a whole new construct, a primitive of some kind that has well-defined semantics. In other systems, these types of things — cluster-level resources — are, for lack of a better term, what people used to call concurrency limits: how many things you could run across the cluster at any one moment in time. But that was years ago, for things like software licensing.
A: Things of that sort — but we should probably make them more generic, maybe something similar to what extended resources do. I don't know; this is just a very rough idea based on our discussion over the past several minutes, but maybe we can have some type, like the scalar resources that we have today, and distinguish them by making them cluster-level, so that we know there is just one collection of these resources per cluster.
G: So the question is: if the scheduler used to support this model — where, if extended resources weren't available on any of the nodes, the scheduler would still go ahead and schedule a pod to one of the nodes regardless — and that behavior changed, shouldn't that behavior be fixed, or rather reverted back to its old behavior?
A: There are some other cases that may fail. For example, I guess we have now implemented huge pages as a sort of extended resource — well, they are called extended scalar resources or something like that — but that is a node-level resource, and for that it makes a lot more sense to have the scheduler check those on a particular node, as opposed to ignoring them.
C: In my mind it would help a lot to see an exact example of a spec and its behavior — maybe even spec and behavior before, and spec and behavior after — because we're talking about it, but I'm not seeing the exact semantics of how something changed. Maybe that's me; I think we're kind of beating around the bush a little bit.
C: I'm okay with it: a person bought into an alpha resource and its behavior changes — I think that's totally fine for the time being, and so I don't think it needs to be resolved for 1.8 immediately. I'm in line with Bobby; I think it's a buyer-beware thing. They bought into modifying their spec to use this alpha-grade feature and it doesn't behave the way they expect — that's okay, so long as we have a plan for the broader problem in the future.
A: Sounds good to me. Alright, so yeah, I actually sent a link to the doc in the chat window. We have a few things to work on in 1.9. One of the items, as we discussed in the last meeting, is to work on reducing open bugs. This requires, I guess, some joint effort with the community and all the people involved, and maybe we need to somehow advertise it somewhere.
C: One thing that helps a lot is to help people who want to onboard, or to develop a sort of higher-level access to the project. I know Klaus had helped quite a bit in the beginning, and it was a great way for him to onboard and build up sort of, you know, ownership rights in the community, if they want to do that. So if folks want to build street cred — if it matters to their organization to have commit access to the project — then I think that is a good way, or a good means: a carrot.
C: It does help to advertise. I know a couple of releases back we advertised — I haven't done the advertising in the most recent cycle, but I did do it in 1.6, and that's actually when we found a whole bunch of other folks. So if we want to, I could try to do another sort of PR push — sort of like a call to action across Twitter and other email lists — to try and solicit folks who want to get involved. Yeah.
A: The next item is priority and preemption. There are a bunch of items that we would like to add to move this closer to beta. The plan today is to not move it to beta in 1.9. The reason for that is that fixing the starvation problem — which is discussed in the user docs and also in the design doc I wrote earlier — requires some major changes to the scheduler, and those changes...
C: It's a really bad one, and people plan their deployments around it. So, I work on SIG Cluster Lifecycle, and our plan to have the pods have their priority preset as part of the control-plane stand-up is super-duper important, and this reduces our dependency on what was the rescheduler from contrib. Oh yeah.
B: I mean, I think we should try to deprecate the rescheduler — whatever that means in this context — at the same time as priority and preemption goes to beta, I think, which is kind of a lot of what Bobbi was saying. But I think we do need a concrete plan for doing that. I don't think... yeah, right.
A: Adding priority into resource quota means that users will have a certain amount of quota at a particular priority level, if it's enabled in their clusters, so that users cannot have all their pods scheduled at the highest priority. Harry, who is working on the resource quota idea on GitHub — the design doc for it is almost finalized, and we're close to merging it, so hopefully we can have that soon as another feature as well, hopefully in 1.9. So yeah. There is also this discussion that has been going on.
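The quota-at-a-priority-level idea might look something like this as a minimal sketch. All types, names, and numbers here are invented for illustration; this is not the actual quota design being discussed:

```go
package main

import "fmt"

// quotaByPriority sketches the idea above: each priority level gets its
// own quota, so users cannot run everything at the highest priority.
type quotaByPriority map[int32]int64 // priority -> allowed CPU millicores

// admit reports whether a new pod request fits within the quota granted
// at its priority level, given current usage at that level.
func admit(q quotaByPriority, used map[int32]int64, prio int32, requestMilli int64) bool {
	limit, ok := q[prio]
	if !ok {
		return false // no quota granted at this priority level
	}
	return used[prio]+requestMilli <= limit
}

func main() {
	// Generous quota at the default priority, little at high priority.
	q := quotaByPriority{0: 8000, 1000: 2000}
	used := map[int32]int64{1000: 1500}
	fmt.Println(admit(q, used, 1000, 1000)) // would exceed the high-priority quota
	fmt.Println(admit(q, used, 0, 1000))    // fits at the default priority
}
```

The point of the mechanism is visible in the example: a user with plenty of overall quota still cannot push everything in at the highest priority.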
A: We have been going back and forth between two ideas: letting the scheduler schedule them, or letting the DaemonSet controller — which schedules them today — keep scheduling them. There are some people who are in favor of the latter approach, because they say there are lots of critical system pods which are DaemonSet pods, and when they are scheduled by the DaemonSet controller we are ensured that they can be scheduled even if the default scheduler is replaced by a custom scheduler in somebody's cluster. So that's a valid argument.
A: There are, of course, some drawbacks with this approach: the DaemonSet controller needs to have some scheduling logic, and we need to maintain it in two places when we add, for example, new predicates; and the DaemonSet controller needs to keep more in-memory state of the cluster in order to make scheduling decisions faster, which increases the total memory consumption on a master, and stuff like that. So anyway, I'm gonna send an email — I don't think that we can finalize that decision.
C: I've always been in favor of — how do I put this — making the core set of scheduling pieces more modular and reusable by other components. That was actually an effort that we started while I was still at Red Hat, but it never saw its way all the way through to fruition. It was going to be a multiple-release type of project, where we were working on pieces of it over time.
C: You know, as part of the technical-debt pay-down. But I've moved on from Red Hat, and similarly some of the other people that were on my team have also moved on, so there's no one currently actively doing that. That's another area where, I think, a community person who wants to get their feet wet could break down some of the pieces of the scheduler to be much more modular, so they can easily be linked in by things like a foreign scheduler or the DaemonSet controller.
H: Hi — so, on the descheduler: I think last week or the week before we did one release. There's, like, descheduling based on node utilization, and also evicting pods if there are duplicate pods related to some high-level controller — a ReplicaSet, a ReplicationController, a job, or deployments. And now, basically, what we are working on — what we are targeting — is pods that violate constraints.
H: Consider — I mean — evicting pods that are not honoring affinity or anti-affinity. And next we are targeting node affinity: pods which are not honoring node affinity — pods which have node affinity, and if we have a set of nodes where we want them to be scheduled, but those pods don't satisfy node affinity there, then maybe we should evict those pods. So both kinds of behavior: whether we should keep those pods, or whether we should evict those pods — those kinds of things.
H: We are working on that right now, and we are also working on a rebase, so that we are rebased on 1.7.6 or some later 1.7 release. So that's what we are targeting for our next descheduler release, and we have the rest of the roadmap also in the README of the descheduler. So basically, one by one, we are working on those items, and we will keep releasing — we will keep doing regular releases whenever we have something concrete.
A: There is a PR on changing scheduler configuration — the way that we change the scheduler configuration. It's a relatively large PR; it changes a lot of stuff about how the scheduler configuration is done. I need to work on that and review it, and I'd appreciate it if other people could also take a look — it has a lot of changes to the way that we configure our scheduler. If you have any concerns or comments, please feel free to add them to the PR.
A: This PR has been around for a while, and we were expecting to merge it in 1.8. Well, that didn't happen, so we now would like to add it in 1.9. Klaus has been working on that, so hopefully we can finalize and merge it in 1.9. I don't think Klaus is attending this meeting, so I cannot add more. Another item being worked on is ordering of predicates.
A: The idea there is that we would like to be able to specify the order in which predicates are executed. The main reason, or motivation, behind it is that some of the predicates take longer than others — they are more complex — and, at the same time, some of the simpler ones are more restrictive: for a pod, you may be able to filter out a lot of nodes in much, much quicker time if you run certain predicates before some other ones. So this PR is trying to design the idea. There are two concepts there.
A: One is to have a static ordering, which is sort of like a default ordering for the predicates: we try to run the predicates which are more restrictive and/or faster sooner than the other ones. So, for example, we know that affinity and anti-affinity is one of the slowest ones, so it can be executed last, when a lot of the other nodes are already filtered out. That's the idea. It's still in the design phase; if you have any comments, please feel free to add them to the design document. And also there was this...
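The static-ordering concept could be sketched like this — illustrative only; the predicate names, order values, and fail-fast loop are assumptions about the design, not the real scheduler's implementation:

```go
package main

import (
	"fmt"
	"sort"
)

// predicate pairs a feasibility check with a hand-assigned order, so
// that cheap and restrictive checks run before expensive ones (such as
// inter-pod affinity), once most nodes are already filtered out.
type predicate struct {
	name  string
	order int // lower runs earlier
	check func(node string) bool
}

// feasible runs the predicates in static order and fails fast, so a
// slow predicate is never evaluated for a node a cheap one rejects.
func feasible(node string, preds []predicate) bool {
	sort.Slice(preds, func(i, j int) bool { return preds[i].order < preds[j].order })
	for _, p := range preds {
		if !p.check(node) {
			return false
		}
	}
	return true
}

func main() {
	preds := []predicate{
		// Slow inter-pod affinity check, deliberately ordered last.
		{"MatchInterPodAffinity", 100, func(string) bool { return true }},
		// Cheap resource check, ordered first; here it rejects the node,
		// so the expensive predicate above is never reached.
		{"PodFitsResources", 10, func(string) bool { return false }},
	}
	fmt.Println(feasible("node-1", preds))
}
```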
C: I would vote against it, because it seems like yet another knob for someone to shoot themselves in the foot with, right? Whereas if we do other things — like making it easier to build your own scheduler, which will pay down the technical debt — that gets other people into the game to do the customized things that they want to do anyway. That was always the long extended plan, right?
A: At the same time, I think it's going to be a lot of work for us, and also maintenance work, for a user to just reorder predicates. So if that's a need of a relatively large number of users — or a few users, at least major users — we should probably do it. But I still have a hard time believing that there are a lot of users who would want to customize it, given that the scheduler will do it smartly. I have never heard of one.
C: I think it's just a PSA that ongoing work is there for people who want to run multiple elastic workloads on Kubernetes in a Mesos-esque type of fashion, but with all of the policy constraints in place — which Mesos doesn't have — across workloads. So go look there, because there's ongoing work there. That's good stuff so far. Okay.
H: Just one thing I wanted to discuss, regarding resource-limits consideration in the scheduler. In fact, I sent an email to the SIG Scheduling mailing list — I don't know if you guys got a chance to look at that. It might not be a lot of work, I think, but I was wondering if you would look at something like that, and I gave some example use cases where it might be useful.
H: I can explain here again. So, I mean, users specify their limits, but the scheduler only works based on pods' requests. As in the example I gave in my email: let's say there is a pod that is requesting one core of CPU, but it has a limit of eight cores of CPU, and there are some nodes with four cores and some nodes with 16 cores.
H: I think in the current implementation the scheduler would not see any difference and might choose any one of them, considering other things are the same. But if we have a priority function that also considers limits, then maybe it might choose a node with 16 cores, so that it could satisfy the pod's limits in a better way than the nodes that have only four cores, right?
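A minimal sketch of such a limits-aware priority function, using the four-core / 16-core example above. The scoring scheme is invented purely for illustration and is not a proposal from the discussion:

```go
package main

import "fmt"

// limitScore scores a node by how well its allocatable CPU covers the
// pod's CPU *limit* (not just its request): full credit if the limit
// fits entirely, proportional credit otherwise, and no signal at all
// when the pod sets no limit.
func limitScore(allocMilli, podLimitMilli int64) int64 {
	if podLimitMilli <= 0 {
		return 0 // no limit set: this signal adds nothing
	}
	if allocMilli >= podLimitMilli {
		return 10 // node can satisfy the full limit
	}
	return allocMilli * 10 / podLimitMilli // partial credit
}

func main() {
	// Pod requests 1 core but has an 8-core limit; nodes have 4 and 16 cores.
	fmt.Println(limitScore(4000, 8000))  // partial credit
	fmt.Println(limitScore(16000, 8000)) // full credit: preferred
}
```

This also makes the later objection in the discussion concrete: when pods set no limits, `limitScore` returns 0 everywhere and the priority function adds no value.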
G: That makes sense. I guess what I'm trying to say is: if your cluster happens to let pods run without limits, then the value added by this extra priority function might be very much diluted, because all nodes end up fully occupied. So it's like — how would you prioritize them? Yeah.
H: I mean, I think the idea is: let's say there are two nodes the way you are describing — like we just have two nodes, and maybe they are, as you said, fully occupied or utilized. But even then, if there is a pod that is specifying limits, maybe we might choose, among the two nodes, the one that is still better able to satisfy that pod's limits. I'm not sure why — why would it make any difference?
G: I'm saying that the value added by this extra priority function might not actually be available, because if the cluster happens to let pods run without limits, and if there are enough of them, then it should understand that there really isn't a single node available that gives any guarantee for limits, because all of them have given up their entire capacity.
G: You know, that's fine, I think. From a user standpoint — if I'm trying to consume this feature — then maybe we should just also state that we recommend specifying limits, through a LimitRange or something, for your pods, so that you can actually get this neat distribution model.
I: Thanks. Yeah — I don't know if I'm truly the new guy, but I'm the new guy here. So, just real quick, to give you my background: the company I work for does training and consulting, largely around container-tech stuff. And, you know, like I think Tim was mentioning earlier, part of this is, quite frankly, for me to get street cred, but also just academically I'm interested in scheduling and workflow optimization and things like that. I'm definitely new as far as coding Kubernetes itself goes.
I: You know, I've talked about it for a long time, but we're kind of not SREs, right? So I don't just use it day in and day out. So yeah, any kind of guidance — I'm happy to jump in. I have quite a bit of time, I would say, to actually work on some things, but I still have to ramp up. So anything I can do, I'll try to help.
H: I mean, I have just one thing regarding contributions. I saw — I think this week or the past week — there were some folks who were interested in contributing; in fact, there were some questions on Slack that I answered. So, I don't know whether those folks have joined the SIG Scheduling mailing list, but maybe just by going back through the SIG's Slack I can find those folks and ask — or maybe any one of us can ask — if they are still interested in contributing. Yeah.