From YouTube: Kubernetes SIG Scheduling - 2019-02-07
So, let me start. I know that this is Chinese New Year and many of the folks are not online today, so we can make it a short one. Now that we have started this meeting, we can just give an update on some of the stuff that we've been working on.
I got a chance to go and review the new version of the equivalence class proposal PR.
So I want to give you guys a quick update on what it is. Essentially, the idea is that all the pods created by, for example, the same ReplicaSet or the same StatefulSet and so on are collected into the same group. To simplify, I can put it this way: imagine that this group is the unit that sits in our scheduling queue. It's not exactly implemented like this, but conceptually it's like this.
So let's say that this pod, or rather this group, reaches the head of the queue. If it is unschedulable, everything in that group becomes unschedulable, which makes sense, right? And if it is schedulable, say we pick one pod from the group and determine that it is schedulable, then the scheduler keeps taking pods from that group and trying to schedule more of them.
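To make that conceptual picture concrete, here is a minimal Go sketch of a group-based scheduling queue. It only illustrates the idea described above, not the PR's actual code; the PodGroup and GroupQueue types and the owner-key format are assumptions made for this sketch.

```go
package main

import "fmt"

// Pod is a simplified stand-in for a Kubernetes pod. OwnerKey is a
// hypothetical key derived from the pod's controller reference,
// e.g. "ReplicaSet/default/web".
type Pod struct {
	Name     string
	OwnerKey string
}

// PodGroup collects pods created by the same controller; conceptually
// the group occupies one slot in the scheduling queue.
type PodGroup struct {
	Key  string
	Pods []Pod
}

// GroupQueue is a conceptual scheduling queue whose elements are
// groups rather than individual pods.
type GroupQueue struct {
	order  []string             // FIFO order of group keys
	groups map[string]*PodGroup // group key -> group
}

func NewGroupQueue() *GroupQueue {
	return &GroupQueue{groups: map[string]*PodGroup{}}
}

// Add places a pod into the group matching its controller; a new group
// is enqueued the first time a controller's pod is seen.
func (q *GroupQueue) Add(p Pod) {
	g, ok := q.groups[p.OwnerKey]
	if !ok {
		g = &PodGroup{Key: p.OwnerKey}
		q.groups[p.OwnerKey] = g
		q.order = append(q.order, p.OwnerKey)
	}
	g.Pods = append(g.Pods, p)
}

// Head returns the group at the head of the queue. If one of its pods
// is found unschedulable, the caller would mark the whole group
// unschedulable; if it is schedulable, the caller keeps popping pods
// from this same group.
func (q *GroupQueue) Head() *PodGroup {
	if len(q.order) == 0 {
		return nil
	}
	return q.groups[q.order[0]]
}

func main() {
	q := NewGroupQueue()
	q.Add(Pod{Name: "web-1", OwnerKey: "ReplicaSet/default/web"})
	q.Add(Pod{Name: "db-1", OwnerKey: "StatefulSet/default/db"})
	q.Add(Pod{Name: "web-2", OwnerKey: "ReplicaSet/default/web"})
	fmt.Println(q.Head().Key, len(q.Head().Pods)) // ReplicaSet/default/web 2
}
```

The key point is that the queue orders groups, not pods, so a scheduling decision for one pod can be applied to all of its siblings at once.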
At first glance, it made sense to me, but when I thought a little bit more about it, I noticed a problem with this design. The problem is that pods created by the same ReplicaSet controller can be created at various times. For example, say there is a huge ReplicaSet controller which creates about 2,000 replicas, and a different ReplicaSet that creates another 2,000 pods. They start at almost the same time, and they keep creating pods over, let's say, an hour until all these pods are created. If, for some reason, one of the pods of ReplicaSet one gets to the head of the queue, and if the scheduler is slow, that is, the scheduling rate of the scheduler is lower than the rate at which pods are created, then the scheduler will keep scheduling all the pods from that first ReplicaSet and will not get to the second ReplicaSet until all of those replicas are scheduled. This is a fairness issue: basically, the scheduler never gets to the second ReplicaSet.
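The rate argument can be made concrete with a toy simulation. This is not scheduler code; it just models two controllers that each create one pod per tick while a group-draining scheduler places one pod per tick:

```go
package main

import "fmt"

// A toy discrete-time model of the fairness problem: controllers A and
// B each create one pod per tick, and the scheduler places one pod per
// tick, always draining the group that reached the head of the queue
// first (A in this model).
func main() {
	pending := map[string]int{"A": 0, "B": 0}
	head := "A" // A's first pod reached the head of the queue
	scheduledB := 0

	for tick := 0; tick < 2000; tick++ {
		pending["A"]++ // both ReplicaSets keep creating pods
		pending["B"]++
		// The scheduler keeps pulling from the head group while it
		// still has pending pods, so it never switches to B.
		if pending[head] > 0 {
			pending[head]--
			if head == "B" {
				scheduledB++
			}
		}
	}
	// A is never exhausted (creation rate equals scheduling rate),
	// so B starves: pending A=0, pending B=2000, scheduled B=0.
	fmt.Printf("pending A=%d, pending B=%d, scheduled B=%d\n",
		pending["A"], pending["B"], scheduledB)
}
```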
So the alternative is that when the scheduler determines a pod is unschedulable, we record a key for it in an "unschedulable" set; the key could be the pod's controller reference, for simplicity, or at least for the first version. If another pod then comes to the head of the queue and the scheduler picks it up, and it has the same controller reference as one already in the unschedulable set, then we determine that this pod shouldn't be schedulable either, because another pod with the same characteristics was not scheduled, so this one will not be scheduled either. When certain events happen in the cluster, for example a new node is added, or a pod is terminated, and stuff like that, you know we have these events that cause unschedulable pods to be moved to the active queue today; similar events will cause that set to be emptied.
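Here is a minimal Go sketch of that alternative design. The key format and the type and method names are hypothetical, chosen only to illustrate the mechanism described above:

```go
package main

import "fmt"

// UnschedulableSet records the controller references of pods that were
// recently found unschedulable. This is a sketch of the alternative
// design, not the actual implementation; the key is assumed to be a
// "Kind/namespace/name" string.
type UnschedulableSet struct {
	keys map[string]struct{}
}

func NewUnschedulableSet() *UnschedulableSet {
	return &UnschedulableSet{keys: map[string]struct{}{}}
}

// MarkUnschedulable is called when the scheduler fails to place a pod.
func (s *UnschedulableSet) MarkUnschedulable(controllerRef string) {
	s.keys[controllerRef] = struct{}{}
}

// ShouldSkip reports whether a pod arriving at the head of the queue
// can be declared unschedulable without running the predicates,
// because a sibling with the same controller reference already failed.
func (s *UnschedulableSet) ShouldSkip(controllerRef string) bool {
	_, ok := s.keys[controllerRef]
	return ok
}

// Flush empties the set on cluster events (node added, pod terminated,
// and so on), mirroring how such events move unschedulable pods back
// to the active queue today.
func (s *UnschedulableSet) Flush() {
	s.keys = map[string]struct{}{}
}

func main() {
	s := NewUnschedulableSet()
	s.MarkUnschedulable("ReplicaSet/default/web")
	fmt.Println(s.ShouldSkip("ReplicaSet/default/web")) // true
	s.Flush()                                           // e.g. a new node joined
	fmt.Println(s.ShouldSkip("ReplicaSet/default/web")) // false
}
```

Flushing the whole set on any relevant cluster event keeps the design conservative: the worst case is re-running the predicates for pods that would have failed anyway.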
So this approach probably performs a little worse than the approach currently implemented in the PR, because we still consider pods one by one and we keep looking up the unschedulable set that we're building, but at least it doesn't have that fairness issue we were talking about. Another benefit of this approach is that it needs almost no major changes to the scheduling queue or to the actual scheduler; it largely keeps the scheduler logic that exists today. Yeah, so I'm working with the author of the PR on this.
One other thing that some of you guys are already aware of is that we are moving priority and preemption to GA in 1.14. I have sent a bunch of PRs for doing that, including promoting the API and adding a KEP for the promotion. Also, excuse me, I'm still not fully recovered from a flu, but anyhow, those PRs are gradually getting merged.
So I actually added a document, which is under review now, on how to run scheduler benchmarks. That document is useful for people who want to try running the benchmarks, or to find areas where they can help improve the scheduler's performance. But it's also important because, I think, going forward, PRs that touch performance-sensitive code should come with this kind of benchmark information.
A
Of
course,
this
doesn't
need
to
exist
in
all
the
PRS,
for
example,
non-trivial
trigger
PRS
need
to
have
these
this
information,
but
other
PRS
should
probably
have
this
information
to
make
sure
that
our
performance
is
preserved
over
time,
and
hopefully
we
can
improve
performance
gradually
as
well
so
yeah.
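For anyone who wants to experiment, scheduler benchmarks follow the standard Go testing.B pattern; the sketch below only shows that shape, with scheduleOnePod as a hypothetical placeholder rather than the actual upstream benchmark code:

```go
package scheduler_test

import "testing"

// scheduleOnePod is a hypothetical stand-in for one scheduling cycle;
// a real benchmark would run the scheduler's predicate and priority
// pipeline against a prepared cluster state.
func scheduleOnePod(nodes int) {
	for i := 0; i < nodes; i++ {
		_ = i * i // placeholder work for the feasibility checks
	}
}

// BenchmarkScheduleOnePod shows the standard Go benchmark shape that a
// scheduler performance test would use.
func BenchmarkScheduleOnePod(b *testing.B) {
	const nodes = 1000
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		scheduleOnePod(nodes)
	}
}
```

Running `go test -bench=. -benchmem` in the package prints per-iteration time and allocations, which is the kind of number a performance-sensitive PR could report.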
Those are the only updates that I wanted to give today. If you have questions or comments, or you have updates for us, please go ahead and share them with us.