From YouTube: Kubernetes SIG Scheduling Meeting - 2019-01-31
Description
No description was provided for this meeting.
A
Okay, let's start. As you know, this meeting is recorded and will be uploaded to the public Internet. With that, let's just start with some status updates. I have a couple of updates for you. One is that I decided to move pod priority and preemption to GA in 1.14. There are a couple of reasons for this. One was the fact that our most critical components, which are critical DaemonSets, now rely on this feature to be scheduled; basically, guaranteeing the scheduling of those daemon pods relies in part on pod priority and preemption.
A
So it makes sense for this feature, pod priority and preemption, to be stable, reliable, and backward compatible going forward. That was one of the main reasons. There are other reasons, including the fact that many people have asked us to move this to GA, because they cannot use it unless it has enough backward compatibility guarantees. So those are some of the reasons, and we decided to move it to GA. I also believe that the API is pretty stable.
A
We haven't made any changes to the API, so it makes sense to move it forward. The implementation has been reasonably stable as well; I would say we haven't had any very major bugs. We had, of course, some bugs here and there, but there hasn't been any major one. Alright, so with that, let's go to the next item. If you have questions about that, please go ahead and ask now.
A
No, that's one of the switches; I don't know if it is gated, and I don't know if there is any version associated with it, to be honest with you. Maybe you have forgotten, but anyway, if we don't do that, we would need to run the rescheduler, right, and otherwise some of these critical pods may not get scheduled without having priority. So I think there is more than just one thing to change and, as a result, going back to the old behavior in a new cluster is not easy.
A
All right. Yeah, I don't know if he's here today with us; it looks like he's not. I would like to thank him for his efforts on moving the informers, and the informer handlers as well, out of the factory. That was a big change that touched a lot of things in our code base. He did it very patiently, over a few weeks, and received a lot of feedback from us, and we changed our minds a few times in the middle of the process. I really appreciate his efforts.
A
We also thank Valerie for her efforts in supporting non-preempting priority. So, Valerie, if you're here, why don't you give us an update on that? How is that working? I know that you were working on the PR and you faced a few different kinds of challenges along the way. Thanks a lot for your help; I wouldn't have figured out all those problems that you already solved.
A
That's odd. Alright, yeah, we need to debug it and see what's going on. I'm a little surprised, because the default for a boolean field is false, so if nobody sets it, it should be false. We also explicitly set it as false in our defaulting mechanism. I didn't expect this to be turned on automatically, but yeah, if you can figure it out, that would be great.
A
Non-preempting priority is essentially a new field in PriorityClass; it basically tells the scheduler that a pod with this priority should not preempt other pods. The reason for this is that some batch workloads would like to have priority so that they are scheduled ahead of other pending workloads which have lower priority. But if there is a high priority pending pod, they don't want this high priority pending pod to preempt already running lower priority pods. They want this pending pod to wait for those lower priority running pods to finish, but they want its priority to mean that it is scheduled first, before any other pending one.
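For readers following along, this behavior later surfaced as a preemption policy field on PriorityClass. A minimal sketch in Go, assuming the scheduling/v1 types and the preemptionPolicy field from later Kubernetes releases; the exact API shape was still under discussion at the time of this meeting:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Assumed field: PreemptionPolicy on PriorityClass. "Never" means pods
	// using this class are placed ahead of lower-priority pending pods in the
	// queue, but never trigger preemption of already running pods.
	never := corev1.PreemptNever
	pc := schedulingv1.PriorityClass{
		ObjectMeta:       metav1.ObjectMeta{Name: "batch-high-non-preempting"},
		Value:            100000,
		GlobalDefault:    false,
		Description:      "High scheduling priority for batch jobs, without preemption.",
		PreemptionPolicy: &never,
	}
	fmt.Printf("%s: value=%d, preemptionPolicy=%s\n", pc.Name, pc.Value, *pc.PreemptionPolicy)
}
```

A pod referencing such a class would still be sorted ahead of lower-priority pending pods, which is the batch use case described above.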
A
Removing some of these features is a little bit of a problem, because some users rely on them, and removing them causes issues for those users. So we probably have to announce it ahead of time, maybe a few months before we remove them. So maybe we can start that announcement process now, to our mailing lists and whatnot, and ask people to change their configuration, and then we remove it in 1.15. Okay.
A
Alright, okay, so let's go over some other updates. I don't know if Harry is here today; he's not. I had a chance to take a look at the equivalence class work. The new equivalence class design looks promising, but I have some comments on it. Hopefully we can get that working soon as an alpha feature, I don't know, maybe in 1.14; we probably won't be able to get it into 1.14, but hopefully we will have it soon, and it can improve the situation in clusters with a lot of unschedulable pods.
A
Yeah, so I also filed an issue about a new piece of logic that we've added to the scheduler. Recently we changed the way the scheduler marks a pod as unschedulable. Before, if a pod was being marked unschedulable, the scheduler would only update the pod object if it didn't already have that unschedulable status.
A
So if it had the unschedulable status and nothing in the pod's status was changing, it would not send any updates to the API server. But we recently made a change so that the scheduler always updates the timestamp, the last-attempt timestamp of the pod, every time that it tries a pod and determines that it is unschedulable. As a result, the number of requests sent to the API server increased, especially if you have a large number of pending pods that are unschedulable and the scheduler keeps retrying them.
A
It will send one new update for each one of them to the API server. This can hurt the scheduler's performance, because the updates sent to the API server are rate limited, to 20 per second by default, and the other problem is that, of course, it raises the load on the API server to an extent.
A
This timestamp that we are updating is only needed for the scheduler's internal state. The scheduler uses this timestamp to sort pods in the scheduling queue, and it only uses it for pods which have the same priority. Basically, pods that have the same priority are sorted based on the timestamp of their latest scheduling attempt.
A
So we decided, I mean, I filed an issue to change that; we should change the timestamp to internal state, as opposed to an update to the pod object. This timestamp does not need to be announced to the whole world outside of the scheduler; it is only needed internally in the scheduler's state. So we are working on changing that behavior so that we only update the timestamp in the internal state of the scheduler. Hopefully this will improve the situation, both with respect to scheduling throughput and also the load on the API server.
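As a rough illustration of the ordering described above, here is a minimal sketch (hypothetical types, not the actual kube-scheduler queue implementation) of sorting pending pods by priority and, within the same priority, by the in-memory timestamp of their last scheduling attempt:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// queuedPod mirrors the idea described above: the last-attempt timestamp is
// kept as scheduler-internal state rather than written back to the Pod object.
type queuedPod struct {
	name        string
	priority    int32
	lastAttempt time.Time // updated in memory only, never sent to the API server
}

// less orders the pending queue: higher priority first; within the same
// priority, the pod whose last attempt is older is tried first.
func less(a, b queuedPod) bool {
	if a.priority != b.priority {
		return a.priority > b.priority
	}
	return a.lastAttempt.Before(b.lastAttempt)
}

func main() {
	now := time.Now()
	pods := []queuedPod{
		{"batch-1", 100, now.Add(-10 * time.Second)},
		{"batch-2", 100, now.Add(-30 * time.Second)},
		{"critical-1", 2000, now.Add(-5 * time.Second)},
	}
	sort.Slice(pods, func(i, j int) bool { return less(pods[i], pods[j]) })
	for _, p := range pods {
		fmt.Println(p.name)
	}
	// Prints: critical-1, batch-2, batch-1
}
```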
A
Maybe it already missed last night's build; it was not in it, but maybe it will make today's. We may want to backport that; I will take a look and we can backport it. Okay, so I know that there have been a number of other themes that we have been trying to achieve, like the effort on resource bin packing; somebody was working on that, and I don't know if it has gone any further or if he has been working on it. I would like to hear updates about that.
B
I have an update regarding the duplicate cloud related volume limit predicates. Remember, last week we had this discussion that we should get started on removing that duplication in 1.14. So last week Michelle, Hemant, and I had a meeting, and the things they have concerns with at this point are the volume limits on a node and the allocatable.
B
SIG Node does not want those fields to be part of the Node structure, so because of that they want to use a CRD, and that means we need to have an additional informer which watches those CRDs and then makes changes to allocatable within the scheduler code. That may actually impact performance, so I would like to discuss that with you. I mean, we will have one more meeting tomorrow and we will update the design doc soon, but it seems that this is going to be a concern.
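To make the concern concrete, here is a sketch of what such an additional informer could look like, using client-go's dynamic informer machinery; the group, version, and resource names below are placeholders for illustration, not the CRD that was actually being designed:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig; inside the scheduler this
	// would reuse the existing client configuration instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Hypothetical CRD carrying per-node attachable volume limits.
	gvr := schema.GroupVersionResource{
		Group:    "storage.example.com",
		Version:  "v1alpha1",
		Resource: "nodevolumelimits",
	}

	factory := dynamicinformer.NewDynamicSharedInformerFactory(client, 10*time.Minute)
	informer := factory.ForResource(gvr).Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { fmt.Println("volume limit object added") },
		UpdateFunc: func(oldObj, newObj interface{}) { fmt.Println("volume limit object updated") },
		DeleteFunc: func(obj interface{}) { fmt.Println("volume limit object deleted") },
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // keep watching
}
```

Every event handled here would then have to be reflected into the scheduler's cached allocatable data, which is the extra bookkeeping, and potential performance cost, mentioned above.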
A
At this point I don't recommend using any of the existing extension points in the scheduling framework, because it is in early development and we are still working on it. Even the design is being changed as we work on the implementation, finding the right interfaces, and everything may change. I don't really recommend using those extension points at this point, until we get it to a state where we can comfortably use all of them.
B
One thing that I've been working on is using the scheduler ConfigMap with the kube-scheduler config file. You remember, we used to have a component config, which we have recently changed to the scheduler config. I'm noticing an issue where, once the ConfigMap is loaded, it actually comes back with empty values, meaning it has the predicates as an empty slice and the priorities as an empty slice, and all that. This is based on 1.12, whereas on 1.14, the latest master, it is working fine.
A
There was a big change in the middle; I know that, though I don't remember or fully understand all of it. It is probably because there have been a bunch of changes to component config. I remember that there was a big change to component config, and it was not limited to the scheduler; it went through many other components and they changed a bunch of stuff. So maybe it is because of that, but I don't know the details.
A
No, it does not restart itself; the ConfigMap is not monitored. I remember I wanted to do that early on, like two years ago actually, but there were some concerns about security, saying that maybe the scheduler shouldn't have access to all of these, and about the monitoring, or I don't know exactly what the details were, but that was the decision at that point. Okay.
C
This is the first time I'm attending this meeting, so I would just like to introduce myself. I'm actually new to the Kubernetes community. Talking about my Kubernetes experience, I have been working with Kubernetes for about a year. I have developed several controllers for clients, mostly working on Kubernetes in Golang, and I have submitted several PRs to Kubernetes upstream.
A
Absolutely, welcome to the Kubernetes community. Oh, of course, you are not new to the community, I guess, but welcome to our scheduling SIG as well. You know, we have a number of issues; sometimes we file them with "help wanted" on them. You might already be familiar with those, and you can always look at them and pick the ones that are interesting to you. And from time to time there are some projects that we try to assign to people in our SIG meetings, sometimes at the beginning of each quarter.
A
There are also incubator projects; those are also under the SIG but not in the core of Kubernetes, and those are all valuable projects that may eventually become part of the core. Those are also great areas; they are more contained and don't have a lot of complexity compared to maybe some of the critical or established components of Kubernetes, so they are greener areas in a way, if you want to start there.
A
Well, generally, I don't know if there is any one approach that I can recommend to everybody, but one is to just look for those "help wanted" issues. Honestly, we have a lot of contributors who are looking for projects and we don't have enough projects for everybody. Some of the projects which are simpler are also marked as, I don't remember the exact label, I think it's "good first issue" or something like that.