From YouTube: Kubernetes SIG Scheduling Meeting - 2019-02-14
I forgot to record, so: the idea behind the new scheduler snapshotting mechanism is that the snapshot will update only the entries in the cache that have changed since the previous snapshot. Before this, the snapshotting mechanism was going through all the entries in the cache, trying to find out whether any of them had been updated or not. The new mechanism is different. It uses a different data structure, essentially a doubly linked list.
Every time an entry is updated, it goes to the head of the linked list, and as a result the entries closer to the head are the ones that were most recently updated. The scheduler has always had, well, not always, but already before this change it had a global generation number for cache entries, so it's relatively easy to find out which ones were recently updated. We use that global generation number plus this doubly linked list to find the entries which were updated after the last snapshot.
This way, we only look at the entries that were updated after the last snapshot, and only capture those.
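
As a rough illustration of what was just described, here is a minimal Go sketch of the idea: a global generation counter, a doubly linked list where updated entries move to the head, and a snapshot that walks from the head and stops at the first entry it has already captured. All type and function names are invented for illustration; this is not the actual kube-scheduler cache code.

```go
package main

import "fmt"

// entry is a hypothetical cache entry carrying a generation stamp
// and links for the doubly linked list.
type entry struct {
	name       string
	generation int64
	data       string
	prev, next *entry
}

type cache struct {
	generation int64  // global, monotonically increasing
	head       *entry // most recently updated entry
}

// update bumps the global generation, stamps the entry, and moves it
// to the head of the list so recent updates cluster at the front.
func (c *cache) update(e *entry, data string) {
	c.generation++
	e.generation = c.generation
	e.data = data
	c.moveToHead(e)
}

func (c *cache) moveToHead(e *entry) {
	if c.head == e {
		return
	}
	// Unlink the entry from its current position, if any.
	if e.prev != nil {
		e.prev.next = e.next
	}
	if e.next != nil {
		e.next.prev = e.prev
	}
	// Push it to the front.
	e.prev = nil
	e.next = c.head
	if c.head != nil {
		c.head.prev = e
	}
	c.head = e
}

type snapshot struct {
	generation int64
	entries    map[string]string
}

// update walks from the head and stops at the first entry whose
// generation is not newer than the last snapshot: everything behind it
// is unchanged, so the full-cache scan is avoided.
func (s *snapshot) update(c *cache) {
	for e := c.head; e != nil && e.generation > s.generation; e = e.next {
		s.entries[e.name] = e.data
	}
	s.generation = c.generation
}

func main() {
	c := &cache{}
	a := &entry{name: "node-a"}
	b := &entry{name: "node-b"}
	c.update(a, "v1")
	c.update(b, "v1")

	s := &snapshot{entries: map[string]string{}}
	s.update(c) // first snapshot copies both entries

	c.update(a, "v2")
	s.update(c) // second snapshot only revisits node-a
	fmt.Println(s.entries) // map[node-a:v2 node-b:v1]
}
```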
This showed an improvement of about 20%, actually more than 20% in some of our benchmarks, so we will see the actual results in real clusters after the PR is merged. That's pretty much the only update that I have for you this week.
There was this other PR: someone sent a PR to optimize the scheduler by skipping pod status updates. Recently, in order to ensure that pods get their fair share of scheduling cycles, we added a mechanism for unschedulable pods, so that every time a pod is marked unschedulable, the time that the scheduler tried to schedule the pod is recorded, and if pods have the same priority, then they are ordered by this last attempt time.
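
A small Go sketch of that ordering, with hypothetical names (the real scheduler backs its queue with a heap, but the comparison itself is the interesting part): higher priority wins, and among equal priorities the pod with the older last attempt time comes first.

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

type queuedPod struct {
	name        string
	priority    int32
	lastAttempt time.Time // last time the scheduler tried this pod
}

// less orders the queue: higher priority first; among equal priorities,
// the pod whose last attempt is oldest goes first, so a pod that was
// just retried drops behind pods that have been waiting longer.
func less(a, b queuedPod) bool {
	if a.priority != b.priority {
		return a.priority > b.priority
	}
	return a.lastAttempt.Before(b.lastAttempt)
}

func main() {
	now := time.Now()
	queue := []queuedPod{
		{"pod-a", 10, now},                       // just attempted
		{"pod-b", 10, now.Add(-5 * time.Minute)}, // waiting longer
		{"pod-c", 20, now.Add(-time.Minute)},     // higher priority
	}
	sort.Slice(queue, func(i, j int) bool { return less(queue[i], queue[j]) })
	for _, p := range queue {
		fmt.Println(p.name) // pod-c, pod-b, pod-a
	}
}
```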
This gives pods which were more recently attempted a lower rank compared to those which have stayed in the queue for the longest time. But the issue with this approach is that it requires an update of the pod status in the API server for every scheduling attempt. There is a PR that optimizes this.
Basically, it keeps the time in the scheduler's internal state, as opposed to updating the pod on the API server side. This helps us improve scheduler-to-API-server throughput, and actually, since this is all subject to rate limiting, having this optimization can help in clusters that have a lot of unschedulable pods. That PR is under review; hopefully it will be merged soon.
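
For what that optimization might look like in miniature, here is a hedged Go sketch: the last attempt time lives in the scheduler's own memory rather than in the pod status, so recording an attempt costs a map write instead of an API call. The tracker type and its methods are invented for illustration, not taken from the PR.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// attemptTracker keeps last-attempt times in scheduler memory.
type attemptTracker struct {
	mu    sync.Mutex
	times map[string]time.Time // pod UID -> last scheduling attempt
}

func newAttemptTracker() *attemptTracker {
	return &attemptTracker{times: map[string]time.Time{}}
}

// recordAttempt is an in-memory write; it replaces a pod status update
// that would otherwise count against the scheduler's API rate limit.
func (t *attemptTracker) recordAttempt(podUID string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.times[podUID] = time.Now()
}

func (t *attemptTracker) lastAttempt(podUID string) (time.Time, bool) {
	t.mu.Lock()
	defer t.mu.Unlock()
	ts, ok := t.times[podUID]
	return ts, ok
}

func main() {
	tr := newAttemptTracker()
	tr.recordAttempt("pod-123")
	if ts, ok := tr.lastAttempt("pod-123"); ok {
		fmt.Println("last attempt:", ts)
	}
}
```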