From YouTube: Kubernetes SIG Scheduling Meetings 20170928
A
B
The first one I wrote down was that our bug count is way too high. It's been creeping up across releases, and some of the issues that have been cropping up have been reported multiple times across a couple of release cycles. Now, that's not a definitive list; you have to do a couple of different queries. I'll paste the query into the channel as well for folks who are not looking at the doc, but currently there are something like 25 open bugs, and some of them are pretty nasty.
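The query itself isn't captured in the transcript; a GitHub issue search along the lines of is:issue is:open label:sig/scheduling label:kind/bug against the kubernetes/kubernetes repository (the exact labels are an assumption here) is the kind of query that would surface the open scheduling bugs being discussed.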
B
If you actually look into them — I started digging into a couple of them, but I haven't had enough time to dig into them, and it'd be nice. I sent a couple of emails to the list too, but I don't think we've had any takers to start burning down that bug list. We should probably also do a triage to prioritize these at some point. I don't know if we want to do it now; maybe next meeting we could have a walkthrough of which ones are the higher priority.
B
C
B
A
Right, so I saw one person — no, I think Kevin said he was interested in doing some more work on what's now called the descheduler, but since that's in the incubator, I don't know that it's tied to a release, which he also pointed out in his email. I don't think he had much detail on what he wanted, though. Did you see that email from him?
A
Bobby's on the — I think Bobby, I know he was — maybe he wants to say what he was planning. He was planning to do some work on pod preemption, and then we also have the resource quota changes to incorporate priority that are going through the community. Bobby, do you want to say something about those?
D
Sure. So basically, if we preempt some lower priority pods and, at the time those preempted pods go away, there are other lower priority pods at the head of the scheduling queue, then they may get scheduled before the higher priority pod that initiated the preemption. So, in order to solve this problem, we are changing the scheduler pending queue to sort pods by priority, and adding some logic to account for the amount of resources that those preemptors need.
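As a rough sketch of the ordering change being described — this is illustrative Go, not the actual kube-scheduler code, and the resource-accounting part is omitted — the pending queue can pop higher-priority pods first:

    package main

    import (
        "container/heap"
        "fmt"
    )

    // pendingPod stands in for an unscheduled pod; only the fields needed to
    // illustrate priority ordering are included.
    type pendingPod struct {
        name     string
        priority int32
    }

    // priorityQueue orders pending pods so that higher-priority pods are popped
    // (and therefore considered for scheduling) first.
    type priorityQueue []*pendingPod

    func (q priorityQueue) Len() int           { return len(q) }
    func (q priorityQueue) Less(i, j int) bool { return q[i].priority > q[j].priority }
    func (q priorityQueue) Swap(i, j int)      { q[i], q[j] = q[j], q[i] }

    func (q *priorityQueue) Push(x interface{}) { *q = append(*q, x.(*pendingPod)) }
    func (q *priorityQueue) Pop() interface{} {
        old := *q
        n := len(old)
        p := old[n-1]
        *q = old[:n-1]
        return p
    }

    func main() {
        q := &priorityQueue{}
        heap.Init(q)
        heap.Push(q, &pendingPod{name: "batch-job", priority: 0})
        heap.Push(q, &pendingPod{name: "preempting-pod", priority: 2000})
        heap.Push(q, &pendingPod{name: "web-frontend", priority: 100})

        // The pod that initiated preemption comes out first, ahead of any
        // lower-priority pods that are pending at the same time.
        for q.Len() > 0 {
            fmt.Println(heap.Pop(q).(*pendingPod).name)
        }
    }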
D
So, in addition to my first proposal on preemption, I will probably write another design doc around this specifically. We also need to add support for pod disruption budget during preemption. Another item that I'm considering for preemption is inter-pod affinity on lower priority pods. Basically, today, if a pod has affinity on a lower priority pod, we don't support it properly in preemption.
D
E
Now, a question I have — about this preemption you were talking about — there's just one scenario I have in mind. Let's say there is one higher priority pod on a node, and there is also a lower priority pod on that node, but somehow the higher priority pod has stopped honoring its node affinity for whatever reason — maybe the node labels have changed, or the taints have changed.
D
No, we don't. There might be some of these edge cases, but we generally do not — well, it depends on what exactly you mean, but generally we do not preempt any pod with higher priority than the pending pod, if that's the question. It was not very clear from your question, when you say a higher priority pod, whether you mean higher priority compared to the pending pod that you would like to schedule, or higher priority compared to another pod.
E
Actually, I don't know, in this question, whether it matters what the priority of the pending pod is. I'm just saying there are two pods, one high priority and one lower priority, that are running on a node, but the higher priority pod has stopped honoring its node affinities, while the lower priority pod that is running there still does. I'm not saying what the relative priority is with respect to the pending pod — maybe both might be higher than it.
E
D
E
A
E
I'm saying, let's say there are two pods that are running on a node, and one pod has higher priority — say two pods, A and B, and A has higher priority than B — but there is another pod, call it C, that is pending, and both A and B have higher priority than C. My question is, as Bobby explained, whatever way we are calculating which pod to preempt, what happens in that case?
E
D
E
C
E
D
We still don't — we still don't preempt pod A, because A is — among A and B we would still only preempt B, because B has the lower priority. That's our first metric to find what is the most probable candidate for preemption. But if pod C somehow needs pod B — for example, pod C has an inter-pod affinity on pod B — then we are —
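A minimal sketch of that first metric — preferring the lowest-priority pods on the node as preemption candidates, and never touching pods at or above the pending pod's priority — might look like the following. This is illustrative only; the real logic also checks predicates, resources, and eventually things like pod disruption budget and inter-pod affinity:

    package main

    import (
        "fmt"
        "sort"
    )

    // runningPod stands in for a pod already placed on the node.
    type runningPod struct {
        name     string
        priority int32
    }

    // preemptionCandidates returns the pods on a node that could be preempted on
    // behalf of a pending pod, lowest priority first. Pods with priority at or
    // above the pending pod's are never candidates.
    func preemptionCandidates(onNode []runningPod, pendingPriority int32) []runningPod {
        var candidates []runningPod
        for _, p := range onNode {
            if p.priority < pendingPriority {
                candidates = append(candidates, p)
            }
        }
        sort.Slice(candidates, func(i, j int) bool {
            return candidates[i].priority < candidates[j].priority
        })
        return candidates
    }

    func main() {
        node := []runningPod{
            {name: "A", priority: 500},
            {name: "B", priority: 100},
        }
        // A pending pod whose priority sits between B and A: only B is a
        // candidate, and A is never preempted on its behalf.
        fmt.Println(preemptionCandidates(node, 200)) // [{B 100}]
    }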
E
D
A
E
A
It's more likely to be preempted, and the way you would do that, in the mechanism and rules that we have today, would be by effectively treating it as having a lower priority. So I think the vision we had for that approach — and maybe it's kind of arbitrary — was to have the rescheduler, or the descheduler or whatever you want to call it, handle those scenarios.
A
So if there are pods that are running and their node affinity or pod affinity is no longer satisfied, you can have the descheduler handle it. One of the advantages of that is that, for example, preemption does not take pod disruption budget into account — it's a factor, but it can violate pod disruption budget; we decided that when we had the design review. But you could have a rule that the descheduler always respects pod disruption budget, and the idea —
C
A
D
And another — as David said, another important thing for us is that we cannot really spend a whole lot of time in preemption, because preemption is right in the critical path of scheduling. We cannot spend too much time there, and one of the reasons that we haven't added support for affinity on lower priority pods is exactly this problem: the inter-pod affinity predicate is one of the slowest ones.
C
D
A
Yeah, I think it's an interesting question, and we should keep a list of what kinds of policies we think the descheduler should enforce. Ideally people — the cluster admin — should be able to enable and disable whichever policies they want, because some people might want these things and some people might not. But I think it's a good question. Sorry, Bobby, I think you were in the middle of the —
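For illustration, a per-policy switch of the kind being discussed could be shaped roughly like this — a hypothetical configuration, not the descheduler's actual API, with example policy names drawn from the scenarios above:

    package main

    import "fmt"

    // policyConfig is a hypothetical per-policy toggle a cluster admin could set.
    // The names below are examples based on the discussion, not real descheduler
    // policy identifiers.
    type policyConfig struct {
        Name                       string
        Enabled                    bool
        RespectPodDisruptionBudget bool
    }

    func main() {
        policies := []policyConfig{
            {Name: "EvictPodsViolatingNodeAffinity", Enabled: true, RespectPodDisruptionBudget: true},
            {Name: "EvictPodsViolatingInterPodAffinity", Enabled: false, RespectPodDisruptionBudget: true},
        }
        for _, p := range policies {
            fmt.Printf("%s enabled=%v respectPDB=%v\n", p.Name, p.Enabled, p.RespectPodDisruptionBudget)
        }
    }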
B
A
D
C
B
Does it make sense to expand the existing document that's there — it's already merged, right — or does it make sense to have a separate doc and then link to it? Because right now, scanning through topic areas is actually quite difficult for people to reason about unless they've been in the project. So if I wanted to understand priority and preemption, I would have to scan multiple docs to get an understanding of it.
D
A
D
So Harry Zhang is working on it. The proposal is already there, and I think we agreed on it, but Derek had some objections. I tried to ping back after we tried to address his concerns, but he hasn't gotten back to us in the past couple of weeks. Maybe I'll try pinging him again.
A
B
There's a couple of items of component config work. There is a large PR from Dan Mace that came in to refactor how the configuration is done for initialization of the scheduler, and that falls in line with the proxy work and the other grand unified component config work that's going on. So I am on the hook to do a review for that; I think Bobby is on that PR too. It's pretty large, so you might want to take a look as well.
B
A
I think that's kind of by design, right? We wanted some of the flags to be set at startup time and some to be in the policy — I assume that's what you're alluding to. We wanted the policies to be configurable; that was the work Bobby had done several quarters ago, to configure the policies, like the weights and which policies are enabled. Is that what you're referring to?
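To make the split concrete, here is a rough illustration of the two kinds of settings being contrasted — startup flags versus the admin-tunable policy with per-priority weights. The shapes are illustrative only, not the scheduler's real configuration types, though LeastRequestedPriority and BalancedResourceAllocation are real priority function names:

    package main

    import "fmt"

    // startupFlags are fixed when the scheduler process starts.
    type startupFlags struct {
        LeaderElect   bool
        SchedulerName string
    }

    // schedulingPolicy is the part a cluster admin tunes: which priority
    // functions are enabled and what weight each one carries.
    type schedulingPolicy struct {
        Priorities []weightedPriority
    }

    type weightedPriority struct {
        Name   string
        Weight int
    }

    func main() {
        flags := startupFlags{LeaderElect: true, SchedulerName: "default-scheduler"}
        policy := schedulingPolicy{
            Priorities: []weightedPriority{
                {Name: "LeastRequestedPriority", Weight: 1},
                {Name: "BalancedResourceAllocation", Weight: 1},
            },
        }
        fmt.Printf("flags: %+v\npolicy: %+v\n", flags, policy)
    }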
B
A
But then let the cluster admin — the end-user cluster admin — control the scheduling policies and weights and stuff like that. What we have today allows that, and if we combine it into a single object, I don't know how we can easily have the permissions set up in the right way. Do you see what I'm saying?
B
A
Unfortunately, no. I mean, you know, we've had highly variable engagement from the community in the scheduling area. Most recently we had folks like Harry and Klaus, and we can try to contact them if we want to, to see what they're interested in doing, but nobody has been pinging me.
C
How do you recommend collating those, or I guess triaging those, if I were to look for something that I can contribute a fix for? I see that a number of them are already assigned to someone. Should I just avoid anything that's already assigned? Is there, I don't know, a particular area I should avoid?
B
One thing that we haven't done is assign priorities, and you can always just ask whether the person is actively working on something. For example, the one about the scheduler dying because the scheduler cache is corrupted is assigned to me; it has occurred multiple times and it's a big problem, and I'd be happy to hand it over if you want to dig into that, but that's also a very thorny issue.
C
B
A
D
A
I know other SIGs at some point have had, you know, stability-only releases, or at least have claimed to have stability-only releases — or, I guess, without doing that officially. But if we only have like one or two features, this would be a great opportunity for people to work on the bugs.
D
Also, I would like to add, aligned with the same topic that we were talking about — the bugs — that we've seen a lot of test failures. The most recent one was another failure that I just submitted a PR to fix today. We keep seeing a bunch of these e2e tests failing; I converted a couple of them in a previous release to integration tests.
D
Some of them were failing because our test clusters were not set up to be consistent enough, so those were the ones I converted, but we've also seen some other issues, like the tests themselves not being written in a robust way. I changed one of them, which was failing and causing a release blocker. So we should revisit some of our e2e tests again, look at the flaky ones, and try to address those as well.