From YouTube: Kubernetes SIG Scheduling Meeting - 2019-07-25
A: ...will remain for a very long time. With that, let's start our meeting. We have a few items on the agenda; let's go in the same order as we have them in the notes. The first item is an update on the physical-host topology that we were trying to add. Abdullah, I think I should let you speak and give us the updates. You have the full story, so why don't you go ahead?
C: Yeah. That's not ideal, but I think we can adapt to that by changing the scheduler behavior a little bit. The main problem is that those labels are being used in the default spreading in the scheduler. Basically, the scheduler by default distributes pods across zones, and if an on-prem setup wants to distribute pods across physical hosts, they'll have to use the zone label for the physical host and the region label to indicate, for example, different racks, which simulates the setup of an actual cloud. But if they do that, then they will not get the default spreading of pods across zones that the scheduler does.
A: Alternatively, we can actually add a config section to the scheduler so that one can define a hierarchy of arbitrary labels that they want pods to be spread on. So, for example, one could say: my first level is node, my second level is zone, and my third level is region. I mean, we don't need to go more than three levels, I guess, at this point. A possible alternative is to just start with the known labels, as you said, but I think there is no reason, really, to stick with the known labels. We can just go with arbitrary labels and let people define whatever labels they want.
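To make the idea concrete, here is a minimal sketch in Go of what such a hierarchical spreading configuration could look like. The `SpreadConfig` type, the label keys, and the scoring rule are all hypothetical illustrations, not the actual scheduler API:

```go
package main

import "fmt"

// SpreadConfig is a hypothetical config: an ordered list of label keys
// defining the spreading hierarchy, most granular level first.
type SpreadConfig struct {
	Levels []string // e.g. "hostname", "rack", "zone"
}

// scoreNode favors nodes whose topology domains currently hold fewer
// pods, summing a contribution from every level the node participates in.
// podsPerDomain maps "labelKey=labelValue" to a pod count.
func scoreNode(cfg SpreadConfig, nodeLabels map[string]string, podsPerDomain map[string]int, maxPods int) int {
	score := 0
	for _, key := range cfg.Levels {
		val, ok := nodeLabels[key]
		if !ok {
			continue // node does not participate in this level
		}
		// Fewer pods in the domain means a higher contribution.
		score += maxPods - podsPerDomain[key+"="+val]
	}
	return score
}

func main() {
	cfg := SpreadConfig{Levels: []string{"hostname", "rack", "zone"}}
	labels := map[string]string{"hostname": "n1", "rack": "r1", "zone": "z1"}
	counts := map[string]int{"hostname=n1": 0, "rack=r1": 2, "zone=z1": 5}
	fmt.Println(scoreNode(cfg, labels, counts, 10)) // 10 + 8 + 5 = 23
}
```

Because the levels are just arbitrary label keys, an on-prem cluster could list physical-host and rack labels here without repurposing the zone and region labels.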
It's really useful, because I remember there was this discussion where some people were asking whether they could spread pods among racks, and we said we don't have any label for racks. So do you want to do it by default? I guess if we go and add a feature similar to what I just described, then of course there is the possibility, especially with even pod spreading, that one could go and say: I want my pods to be spread using this particular label, among whatever topology I want. But the problem is that then all users must put those labels in their pod specs, and there are users we have heard from at some companies who prefer not to add this to all their pod specs.
A: They basically just want a default behavior for everything, and this is useful because it gives them workload portability. For example, if they have a certain set of labels on the nodes in one cluster and they don't have the same set of labels on the nodes in another cluster, they can still use the same pod spec from one cluster, move it to another cluster, just change the configuration of the scheduler in the other cluster, and everything just works. They don't need to change the configuration of all the pods and all the workloads in every cluster, so this is going to be useful.
I think it's going to be useful, and we can definitely target 1.17 to add something like this to the scheduler configuration: basically defining a set of labels. As I said, it does not necessarily need to be more than three of those. In fact, today we only support two levels in the scheduler: the first level is node and the second level is zone.
A: That's great; I'm glad that at least something good came out of it. All right, is there any question about this feature, or any comments? All right, if there is nothing else, let's move on to the next item. The next item is an update on the scheduling framework. As far as I can tell, and I hope I'm not missing anything, there are only two extension points left to be done.
One is the post-filter plugins, and the other one is normalize score, which is sort of like a post-score plugin. So these two are left to be done, and both of them have PRs. Both of them are close to a state where we can probably merge them soon, but the normalize-score one especially needs a little more thinking, and I know there has been some discussion about changing our approach from the original design that we proposed in the scheduling framework KEP. Abdullah, maybe you can tell us more about that and what your proposal is here.
D: Yes, sure. Right now, the score and normalize-score plugins are enabled by two separate plugin lists, just like any other plugins; they're treated independently in that sense. But the real problem is that there's an expectation that wherever there is a normalize-score plugin, there is always a same-named plugin that has a Score method, so we normalize the results from the Score method. So it's a bit confusing, in the sense that if I have a normalize-score plugin, how should I enable it? Should I put the plugin in both the normalize-score and the score lists, or just in one of them? So the proposal is really to consolidate the two lists into one, just one score-plugin list, and the normalize-score method is treated as an optional interface method that can be implemented by a score plugin, but by default you don't have to. When we read that list, we can check: if this score plugin also implements normalize score, okay, we're going to put it in an internal normalize-score list and then run normalize score after score. Otherwise, we basically conclude that this one doesn't have normalize score and skip it. Yeah, I think this is a much cleaner user interface, to avoid any confusion.
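The "optional interface method" idea maps naturally onto a Go type assertion. Here is a minimal self-contained sketch of that mechanism; the interface and plugin names are illustrative, not the actual scheduling framework API:

```go
package main

import "fmt"

// ScorePlugin is what every entry in the single score-plugin list implements.
type ScorePlugin interface {
	Name() string
	Score(node string) int
}

// NormalizeScorePlugin is the optional extra interface a score plugin
// may also implement; the framework detects it with a type assertion.
type NormalizeScorePlugin interface {
	NormalizeScore(scores map[string]int)
}

// evenSpread is a toy plugin that implements both interfaces.
type evenSpread struct{}

func (evenSpread) Name() string          { return "even-spread" }
func (evenSpread) Score(node string) int { return len(node) * 10 }

// NormalizeScore rescales all raw scores into [0, 100].
func (evenSpread) NormalizeScore(scores map[string]int) {
	max := 0
	for _, s := range scores {
		if s > max {
			max = s
		}
	}
	for n, s := range scores {
		if max > 0 {
			scores[n] = s * 100 / max
		}
	}
}

// runScoring reads one list; normalization runs only for plugins that opt in.
func runScoring(plugins []ScorePlugin, nodes []string) map[string]int {
	scores := map[string]int{}
	for _, p := range plugins {
		for _, n := range nodes {
			scores[n] += p.Score(n)
		}
		if np, ok := p.(NormalizeScorePlugin); ok {
			np.NormalizeScore(scores)
		}
	}
	return scores
}

func main() {
	fmt.Println(runScoring([]ScorePlugin{evenSpread{}}, []string{"a", "node-b"}))
}
```

With this shape there is only one list to configure, and a plugin that doesn't implement the optional interface simply skips the normalization step.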
A: I agree. Right now in the config, every normalize-score plugin must also have an associated score plugin with the same name, so yes, I agree with you; it makes sense to me. The only limitation of this new approach compared to the previous one is that someone cannot directly disable just the normalize score in the config file.
A: I don't think that's a big deal, really, in terms of usability of the scheduler. Disabling normalize score without disabling the whole score plugin is probably not a very commonly needed feature, in my opinion, even down the road, and if that need arises, we can definitely revisit this and maybe enable it as a separate plugin again in the future. But yeah, at this point I agree with you.
A
This
this
new
approach
is
cleaner,
makes
more
sense,
and
if
you
look
at
the
config
files
for
this
schedule,
it
feels
it
feels
more
natural.
The
flow
is
more
natural
compared
to,
if
you
compare
it
with
other
plugins,
and
we
don't
need
to
make
any
sort
of
like
exceptional
rules
for
this
particular
normalizes,
core
plugins,
yeah
I
agree
with
that.
Please
go
ahead
and
make
changes
as
necessary
to
the
PR.
A
To
make
this
happen,
we
don't
have
to
necessarily
come
and
combined
it.
You
know
removal
of
normalizes
score
plug-in
from
the
config
file
in
the
same
PR
that
you've
sent
it's
up
to
you.
We
can
remove
it
in
a
separate
PR,
but
I
guess
that
PR
should
probably
get
much
sooner
down
or
before
before
your
implementation
of
his
core
plugins.
A: No, I mean, the only thing that I knew is that we want to do it; I don't have any particular plan. All right, we should probably think about it more carefully. I don't know for sure, but maybe there is a need for having a particular order in which we're converting these; whether there is such a need or not is something that we should think about, and we should actually...
C: ...take that as an action item. I'll try to write at least a one-pager or something on the plan to migrate to the framework, and maybe look at each predicate and see what each predicate maps to in terms of plugins as well, because some predicates and priority functions are not going to map to just a single plugin, right? They will have... yes.
A: The next item is: where is the work on even pod spreading? It is one of our most important features for 1.16, so I'm glad that it's moving forward. Of course, the API part, which was the bigger blocker, is now merged, and following that there are three other PRs which are merged as well. Thank you very much to all of you folks who contributed code, particularly Wei, and also to the folks who reviewed these PRs. I know there are a lot of complex algorithms involved in these PRs, so thank you very much.
E: Not just me; thanks for your comments, and a shout-out to the reviewers, who spent tremendous effort reviewing these things: Abdullah, Otto, and the other folks whose names I've forgotten. And a recent change on the feature implementation: we now respect each individual constraint in an independent way, so that is a behavior change. I think all things are going well, and there are two PRs to go: the priority one and the integration tests. So we...
A: That's great. We haven't hit the feature freeze deadline, or rather the enhancement freeze deadline, yet, so this is great. We have plenty of time for the feature, and now, I guess, our confidence is pretty high that we can have this feature in 1.16 with very, very little worry about getting it in. So that's great; I'm excited about this. Thank you very much for your help.
A: All right, one more item that I have to talk about is something that Draven has been involved with. If you remember, we were trying to remove the critical-pod annotation. This was an annotation added in, I believe, Kubernetes 1.5, which is a relatively old Kubernetes release from a few years ago. At the time there was no priority and preemption in Kubernetes, so there was no way of telling the control plane that a pod is critical.
A: That is, there was no way to say "please do not remove it" or "always admit it." So, because there was no priority at the time, this particular annotation was added to the system. It was an experimental alpha feature, but given that there was no other alternative, it was enabled by default in many clusters, including in our cluster bring-up scripts, and all of our control plane was respecting it.
A: We recently tried to remove this, and we ran into a problem, because static pods were getting rejected by the kubelet if they didn't have this particular annotation. The problem with static pods is that, despite the fact that they have a priority class name, these pods are created directly on nodes. Someone can, for example, ssh to a node and bring up a static pod, and once this happens, a pod object is created on the API server as a result.
A: So this pod is created on the node first, and then, retroactively, a pod object is created on the API server. What happens is that, even if the actual pod has a priority class name, it does not have the priority integer value populated at the time it's created, because we populate that when the pod object is created on the API server: the API server has admission plugins that are executed when certain objects are created.
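As a rough simulation of that admission step (the class names and values below follow the built-in system priority classes, but this is an illustrative sketch, not the actual apiserver plugin):

```go
package main

import (
	"errors"
	"fmt"
)

// Pod is a stripped-down stand-in for the real pod object.
type Pod struct {
	Name              string
	PriorityClassName string
	Priority          *int32 // nil until admission populates it
}

// The built-in system priority classes and their integer values.
var priorityClasses = map[string]int32{
	"system-cluster-critical": 2000000000,
	"system-node-critical":    2000001000,
}

// admit resolves PriorityClassName into the integer Priority field.
// This is exactly the step a static pod, created directly on a node,
// never goes through, so its Priority stays unpopulated.
func admit(p *Pod) error {
	if p.PriorityClassName == "" {
		zero := int32(0)
		p.Priority = &zero
		return nil
	}
	v, ok := priorityClasses[p.PriorityClassName]
	if !ok {
		return errors.New("unknown priority class " + p.PriorityClassName)
	}
	p.Priority = &v
	return nil
}

func main() {
	p := &Pod{Name: "kube-apiserver", PriorityClassName: "system-node-critical"}
	if err := admit(p); err != nil {
		panic(err)
	}
	fmt.Println(*p.Priority)
}
```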
A: So this caused some issues for some of the users, and we had to revert the removal of the critical-pod annotation, so that we could still consider these static pods critical. We then had a meeting with the SIG Node folks to find a solution for this. The outcome of that meeting was that SIG Node told us it's better to consider all static pods critical, because these are normally pods that are created by the cluster bring-up scripts.
A: Users normally don't use static pods, and it's not recommended to use static pods for anything other than, you know, pods created by cluster bring-up, so it's reasonable to consider all of those pods critical, and that was the decision. So far, we're trying to implement this in the kubelet; there is a PR out, and hopefully, after this PR is merged, we will be able to resubmit the removal of the critical-pod annotation.
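The kubelet-side decision just described could be sketched like this; the types and the helper are hypothetical illustrations, not the actual kubelet code:

```go
package main

import "fmt"

// Pod is a stripped-down stand-in for the real pod object.
type Pod struct {
	Priority *int32 // populated by API-server admission, nil for static pods
	Static   bool   // created from a manifest directly on the node
}

// systemCriticalPriority mirrors the threshold above which a pod's
// priority marks it as system-critical.
const systemCriticalPriority int32 = 2000000000

// isCritical encodes the SIG Node decision: every static pod is treated
// as critical, even though admission never populated its Priority field;
// other pods are critical only if their resolved priority is high enough.
func isCritical(p Pod) bool {
	if p.Static {
		return true
	}
	return p.Priority != nil && *p.Priority >= systemCriticalPriority
}

func main() {
	fmt.Println(isCritical(Pod{Static: true})) // true
	prio := int32(100)
	fmt.Println(isCritical(Pod{Priority: &prio})) // false
}
```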
E: Can the kubelet now recognize the pod as a system-critical pod because of the mirror pod? There are two copies of the pod, right? One is the in-place pod, which is created from the manifest, and the other one is created on the API server, so it goes through admission control and gets the priority value populated. But the copy from the manifest doesn't have the value set, and the kubelet just checks the copy created from the manifest.
A: So if the mirror pod is not created yet, or if we haven't received the updates for the mirror pod yet on the kubelet side, and the actual pod goes through the admission phase of the kubelet, it will get rejected. So your fix is useful, but in this particular case, I guess, it's not going to help as much. Yeah, thank you very much. All right, these are all the items that I have on the agenda. We have four more minutes in this meeting if other folks have questions, comments, or PR reviews they want.
B: The PR is being reviewed, yes. I think Abdullah has reviewed it, and I addressed his comments recently; I think most of the comments are addressed, except for one, which was related to a method that I am not using in my PR, so I just moved it to another file where it was being used. I don't have much to say about that; other than that, most of the comments are addressed.
E: So, about the brainstorming issues on the framework KEP: there was some discussion of the internal state of the predicates and priorities last week. I think there's a list of four issues, and I gave one solution for the first one, so I need someone to take a look. Basically, there is a function called ApplyFeatureGates, and...
E: Basically, when that function runs in a real cluster, you know, when you run the scheduler, that's fine. But sometimes that function is called in an integration test, and you don't actually revert the state at the end of your integration test, and that can bring in some trouble.
E: The reason the issue didn't show up before is that some integration tests enable the gate and some don't, and it happened that the ones with it not enabled ran first and the ones with it enabled ran after that, so the issue didn't show. But recently, when I wrote the integration test for even pod spreading, the latter comes first: it enables the gate first without reverting back. So, yeah.
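The test-hygiene problem being described can be sketched in a few lines of Go; the gate name and helper are illustrative, but the pattern (flip a global gate, restore it with defer) is the standard fix:

```go
package main

import "fmt"

// A global feature-gate map, mutated by one test, leaks into the next
// test unless each test restores the previous value.
var featureGates = map[string]bool{"EvenPodsSpread": false}

// setFeatureGateForTest flips a gate and returns a restore function.
func setFeatureGateForTest(name string, enabled bool) func() {
	prev := featureGates[name]
	featureGates[name] = enabled
	return func() { featureGates[name] = prev }
}

func testWithGate() {
	restore := setFeatureGateForTest("EvenPodsSpread", true)
	defer restore() // without this, later tests see the gate still enabled
	// ... test body runs with the gate on ...
}

func main() {
	testWithGate()
	fmt.Println(featureGates["EvenPodsSpread"]) // false: state was reverted
}
```

Run order then no longer matters, because every test leaves the global state exactly as it found it.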