From YouTube: Kubernetes SIG Scheduling Meeting - 2019-07-18
A: One thing is that most of the extension points are already merged. There have been some issues with some of the extension points, particularly with respect to test flakes. We count the number of times that the extension points or plugins are invoked in our tests, and those counts sometimes caused flakiness, because we didn't expect the scheduler to retry, for example, and things of that sort. Most of those flakes are addressed, and another PR went up yesterday to address one more.
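To illustrate the kind of flake being described, here is a minimal, self-contained Go sketch with hypothetical names (not the actual scheduler test code): a test that asserts an exact invocation count breaks as soon as the scheduler retries a pod, while a lower-bound assertion tolerates retries.

```go
package example

import (
	"sync"
	"testing"
)

// countingPlugin is a hypothetical test plugin that records how many
// times its extension point is invoked by the scheduler.
type countingPlugin struct {
	mu    sync.Mutex
	calls int
}

func (p *countingPlugin) onInvoke() {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.calls++
}

func (p *countingPlugin) count() int {
	p.mu.Lock()
	defer p.mu.Unlock()
	return p.calls
}

func TestPluginInvoked(t *testing.T) {
	p := &countingPlugin{}
	p.onInvoke() // stand-in for one scheduling attempt
	p.onInvoke() // a scheduler retry invokes the plugin again

	// Flaky form: assumes exactly one scheduling attempt.
	//   if got := p.count(); got != 1 { t.Fatalf(...) }
	// Retry-tolerant form: only requires the plugin to have run at all.
	if got := p.count(); got < 1 {
		t.Fatalf("plugin never invoked, calls = %d", got)
	}
}
```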
A: So we are of course still developing these, and since these are all new things, we expect issues of this sort. We are addressing them, and hopefully we will get all of those sorted out in the very near future, hopefully this week. We are still waiting for one more extension point, and that's normalize score, right?
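For context, normalize score is the step that rescales each plugin's raw node scores into a common range before they are weighted and combined. A minimal sketch of the idea in Go, as a simplified stand-in rather than the framework's actual signature:

```go
package example

// normalizeScores rescales raw per-node scores onto a common 0..100
// range, preserving their relative ordering, so that scores coming
// from different plugins can be weighted and summed fairly.
func normalizeScores(raw []int64) []int64 {
	if len(raw) == 0 {
		return raw
	}
	max := raw[0]
	for _, s := range raw {
		if s > max {
			max = s
		}
	}
	out := make([]int64, len(raw))
	if max == 0 {
		return out // all scores zero: nothing to scale
	}
	for i, s := range raw {
		out[i] = s * 100 / max
	}
	return out
}
```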
C: So in total there are six of them. The first one, the API one, has been given a green light, and we have five more to go. Thanks for reviewing the second one; I think we have almost come to a consensus on those two, so there are three more to go. After the reviews of all six PRs are complete, we are good to merge into master.
C: According to our previous conversation with the approvers, I think they won't give the /approve until the whole series of implementation PRs has reached consensus. So my idea is that after we finish reviewing all six PRs, we can go back and ask them for the /approve.
E: I don't think that's necessary. The integration tests are not necessary to submit, but they are necessary for the approval of the first PR. As for the other ones, I looked at all of them and the logic is good. I just think that we might be able to find room to optimize, but I don't think any comments we'll be making are going to impact the API.
A: For the first one, our focus should be on the validity of the implementation, writing enough tests for it, and making sure that it's very stable. This is going to be an alpha feature anyway, so it might not be the most optimized solution yet. Of course, if we could get all of it at the same time, the most optimized as well as a very stable, reliable implementation, that would be even better. But sometimes that becomes a little harder, because these optimizations sometimes require adding complex data structures, and those make verifying the validity of the approach or logic a little harder. So for that reason we sometimes go with simpler data structures that may not give us perfect performance, but at least let us know that the solution is valid, and we can build enough tests to make sure that it is. For the next release we can then optimize it, and with the tests and the logic in place, we can more easily verify that the optimized solutions are valid as well. That's one approach.
E: I'm looking at the predicate right now. Maybe this is a tangent, but I think we need to add another extension point, called pre-score. The reason is that the priority function in the even pod spreading looks at all the nodes, not just the filtered ones, to compute some state, and currently the earliest extension point that allows us to do this is the pre-filter, yes.
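A sketch of why scoring for even pod spreading needs state over all nodes, not only the filtered ones: the spreading skew is defined against pod counts across every topology domain in the cluster. Hypothetical, simplified types, not the actual plugin code:

```go
package example

// node is a simplified stand-in for a cluster node.
type node struct {
	name   string
	labels map[string]string
}

// topologyCounts walks ALL nodes once and counts, per topology domain
// (e.g. per zone), how many existing pods match the incoming pod's
// label selector. Spreading skew is defined over the whole cluster,
// so restricting this scan to filtered nodes would skew the result.
func topologyCounts(allNodes []node, topologyKey string, matchingPods func(node) int) map[string]int {
	counts := make(map[string]int)
	for _, n := range allNodes {
		domain, ok := n.labels[topologyKey]
		if !ok {
			continue // node belongs to no domain for this key
		}
		counts[domain] += matchingPods(n)
	}
	return counts
}
```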
A: We should probably not reject the pod at pre-filter, but one thing that sometimes makes sense to do in pre-filter is pre-processing. It's similar to the metadata that we have for predicate and priority functions: sometimes we create the metadata so that we can iterate over all the cluster nodes once and generate some metadata that helps us run the predicates faster per node, right?
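The pattern being described, sketched with hypothetical local types (reusing the node type and topologyCounts helper from the sketch above) rather than the framework's real plugin interfaces: pre-filter runs once per pod, scans the cluster, and caches state; filter then does a cheap per-node lookup.

```go
package example

// cycleState is a simplified stand-in for the per-scheduling-cycle
// state the framework passes between extension points.
type cycleState map[string]interface{}

// preFilter runs once per pod: it iterates over all cluster nodes,
// precomputes per-domain pod counts, and stashes them in the state.
func preFilter(state cycleState, allNodes []node, topologyKey string, matchingPods func(node) int) {
	state["topologyCounts"] = topologyCounts(allNodes, topologyKey, matchingPods)
}

// filter runs once per node: instead of re-scanning every pod in the
// cluster, it does an O(1) lookup into the precomputed counts. The cap
// here is a simplified stand-in for the real skew check, which compares
// against the minimum count across domains.
func filter(state cycleState, n node, topologyKey string, maxCount int) bool {
	counts, ok := state["topologyCounts"].(map[string]int)
	if !ok {
		return true // no precomputed state; fail open in this sketch
	}
	return counts[n.labels[topologyKey]] <= maxCount
}
```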
E: The data that this priority function needs is going to have to be computed in the pre-filter, because filter and post-filter both operate at the node level, not at the pod level; the plugin gets called per node, while pre-filter gets called once per pod, over all the nodes, correct? So if you want to compute that data, you'll have to do it at pre-filter. And my concern is that the pod could get rejected at the filter phase, or the pre-filter phase, or the post-filter phase. Is that possible?
A: Yeah, right now I cannot really say, since I haven't looked at all the implementation yet. I was mostly focused on the API side, and I was hoping that the API could get approved sooner so that we can focus on the rest. But you have done more reviews; I don't have a very good image of the implementation in my head right now. I'll have to go and look, but for now please send your recommendations or suggestions, and then we can go from there.
A: All right, since we don't have a whole lot of time and we have a few more agenda items, let's move on. There was this other PR, by Draven, that I actually suggested sending, to remove the critical pod annotation. This critical pod annotation existed because at some point in the past Kubernetes didn't have priority, and this particular annotation was used to indicate that a pod is very important and should not get rejected or evicted. With the existence of priority, we no longer need it.
A: This annotation was deprecated in 1.13, and we removed it very recently, in 1.16, but that caused a regression. The problem is that static pods run on a node directly; for example, someone can SSH to a node and create a static pod, and these static pods run on the node, as actual physical pods on the node, before their corresponding pod object is created. The pod objects for these pods are created afterwards.
A: Those objects that we create are called mirror pods. This was kind of a hack; we are trying to get away from this kind of scenario, but static pods still exist today, and someone reported that the removal of the annotation causes issues. And he's right, because what happens is that when a pod object is created, it goes through admission, and admission resolves the priority class name of the pod into the priority value. It's similar to a DNS resolution.
A: We have a name for a priority, and it is resolved to the actual integer value. The problem for static pods is that, since the actual pod is created before the pod object, the priority resolution happens afterwards. So when the pod itself is created, the kubelet sometimes rejects it, because the priority of the pod is not populated yet. This is the problem the annotation was addressing, because we were honoring the annotation in the kubelet; but now, since the priority is not populated yet, the kubelet has no notion of the criticality of this pod.
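A minimal sketch of the "DNS-like" resolution admission performs, with simplified stand-in types rather than the real Kubernetes API objects:

```go
package example

// pod is a simplified stand-in for the pod spec fields involved here.
type pod struct {
	priorityClassName string
	priority          *int32 // populated by admission; nil before that
}

// resolvePriority mimics what priority admission does: look up the
// pod's priority class name and write the resolved integer value back
// into the pod, much like resolving a hostname to an IP address.
func resolvePriority(p *pod, classes map[string]int32) bool {
	v, ok := classes[p.priorityClassName]
	if !ok {
		return false // unknown priority class
	}
	p.priority = &v
	return true
}
```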
A: Because of that, it sometimes rejects the pod when the kubelet is under resource pressure. For that reason we are reverting this change, basically the removal of the annotation, for now. I am working with SIG Node to address this problem; we should probably add a small piece of logic to the kubelet to resolve the priority of static pods before rejecting them, basically.
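A hedged sketch of what that kubelet-side logic could look like, reusing the stand-in types from the sketch above; this is the idea under discussion, not merged code, and the threshold constant is hypothetical:

```go
package example

// systemCriticalPriority is a hypothetical stand-in for the value at or
// above which a pod is considered critical.
const systemCriticalPriority int32 = 2000000000

// isCriticalStaticPod resolves the priority of a static pod on the
// spot, the way admission eventually would, before the kubelet decides
// whether the pod may be rejected under resource pressure.
func isCriticalStaticPod(p *pod, classes map[string]int32) bool {
	if p.priority == nil {
		// Admission has not run for this static pod yet; resolve now.
		if !resolvePriority(p, classes) {
			return false
		}
	}
	return *p.priority >= systemCriticalPriority
}
```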
A: Static pods should go away completely, I agree. We have DaemonSets now, and they are a good replacement for static pods; there is no reason to have them anymore, I think. But since they are still there and since there are users, we have to be respectful. So for that reason we are reverting this; I guess it's already reverted, actually. We're reverting the removal of the annotation, but we are going to remove it again after this problem is addressed.
C: Yeah, when I wrote the integration tests for the even pod spreading, I realized that there is some inconsistency between different things, and I opened an issue to list all the limitations and all the potential issues, for brainstorming. The fourth one is a blocker for my integration tests, so I just sent a PR on that, to make the function that applies feature gates stateless. As for the other ones: the first one is about validation, and I think Ravi and some other folks are looking at that PR; for the other two we need further brainstorming.
A: All right, one more quick thing: the pod overhead PR is ready and it's getting merged. It teaches the scheduler to account for pod overhead when we calculate the amount of resources that pods require. Since we're running out of time, I don't want to spend much more time on that; there is a link in our meeting notes, and you can go and take a look if you need more information. There is one more item, with respect to the scheduler changes for pod spreading; I believe this is something that is already being worked on.
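A sketch of what accounting for pod overhead means for scheduling, with simplified types; the real feature adds an Overhead field to the pod spec, while the helper here is a hypothetical stand-in:

```go
package example

// podResourceRequest returns the CPU (in millicores) a pod needs on a
// node: the sum of its container requests plus the pod-level overhead,
// e.g. the cost of a sandbox runtime. Without the overhead term, the
// scheduler would under-count and could overcommit the node.
func podResourceRequest(containerMilliCPU []int64, overheadMilliCPU int64) int64 {
	var total int64
	for _, c := range containerMilliCPU {
		total += c
	}
	return total + overheadMilliCPU
}
```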
C: Just one thing, sorry Bobby, just one thing. I just didn't want to confuse people, so I just mentioned that the routine is that unless the whole implementation for the even pod spreading has gathered consensus, we can't merge any of it; but from a scheduling perspective, we want to get each part merged as soon as possible.
A: Yeah, that would be helpful, because I think we probably need more voices there to point out the importance of this, and basically to give all the reasons why we want this, why we need this, and why it's important. Actually, VMware themselves have had this problem: this label does not exist, and they are using other, custom labels.
A: We already treat certain labels differently; we care about certain labels, particularly node and zone. These are things that our scheduler already treats specially. So I don't know; if that's not a good idea, we should probably move away from all of these and replace them with something else, but that something else needs to be defined, and right now there is nothing.
A: Also, another problem is that some of these labels are automatically populated by the underlying infrastructure; for example, the node and zone labels are all populated automatically by the underlying infrastructure. These are, I would say, formal labels, and they are part of our API anyway.
A: But I mean, it was created as sort of a hack, because better alternatives didn't exist, and that does not necessarily mean that whatever they're doing is the best solution. I don't like those, because physical hosts can be in particular zones, and you can have data centers in multiple zones in the same region. So you may still need all of those other labels, like zones and regions, while you are also using a physical-host label. So yeah, that's not really a great excuse to not have this extra label anyway.