From YouTube: Kubernetes SIG-Scheduling Weekly Meeting for 20200604
B
The bind timeout changes — you see, it's already... actually, let me check. It's already merged. Yes, sorry, yeah! So we are still pending the bind timeout changes for the volume binding plugin, and then merging the Reserve and Unreserve plugins into one. But whoever is picking those up, the work needs to be done in v1beta1.
C
The first one is the advice given by Abdullah, which is about what we do in preemption. The old logic is, there will be two API calls. The first API call is to update the preemptor pod's nominated node name to the node — we preempted some pods to make room for the preemptor pod. And then there is what I call the second API call, which is issued as usual, because the preemption only happens once the Filter phase has failed.
C
So we were going to update the pod status: the scheduler goes to the pod — not the pod's spec but the pod's status — and updates the conditions to mark it unschedulable. But it's not necessary to have two calls; we can merge them into one. This is not only about reducing the API call count, but also, I think, about reducing the chance of having an incoherent cache and inconsistency, as well as some other unexpected bugs. That's one possible win. The second one is the following.
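The merged-call idea can be sketched roughly as follows. This is an illustrative sketch only, not the actual kube-scheduler code: the field names mirror the Pod API (`status.nominatedNodeName`, the `PodScheduled` condition), but the patch-building helper and its shape are simplified assumptions.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// mergedStatusPatch folds the two updates discussed above — setting the
// preemptor's nominated node name and recording the Unschedulable
// condition — into a single status patch, so only one API call is needed.
func mergedStatusPatch(nominatedNode string) map[string]interface{} {
	return map[string]interface{}{
		"status": map[string]interface{}{
			"nominatedNodeName": nominatedNode,
			"conditions": []map[string]string{{
				"type":   "PodScheduled",
				"status": "False",
				"reason": "Unschedulable",
			}},
		},
	}
}

func main() {
	// One marshaled payload carries both changes.
	b, err := json.Marshal(mergedStatusPatch("node-1"))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
```

A single patch like this also keeps the two pieces of state consistent with each other, which is the cache-coherence point made above.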
C
We update the pod status to the apiserver every time the pod gets marked unschedulable inside the error function (MakeDefaultErrorFunc). The old logic is spinning up a goroutine to try to get the live version of the pod, using some backoff logic there to update the pod status, but that's sort of unnecessary.
C
So the skip-pod-update logic is necessary because, usually, when a scheduling cycle finishes, we do something called optimistic concurrency, which is: we just assume the pod has been updated with the node name, so that when the real pod update comes in, we can skip it, because our internal cache already knows about it. But some extra fields are handled on the apiserver side, so in the assume phase we cannot know all the fields that have been updated. So in the skip-pod-update logic—
C
We strip those kinds of fields where we can. But, you know, sometimes the apiserver or some other actor updates the pod, with the result that the skip-update check is not up to date. For example, we have server-side apply, which means the managedFields will be updated by the apiserver, and that is inconsistent with our internal assume cache. So we need to strip all those fields as well; this PR is handling that kind of event.
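The skip-update check being described can be sketched like this. The types here are deliberately tiny stand-ins, not the real `v1.Pod`, and the real scheduler strips more fields than shown; only the shape of the comparison is meant to match the discussion:

```go
package main

import (
	"fmt"
	"reflect"
)

// Pod is a minimal stand-in for the real pod object.
type Pod struct {
	Spec            string
	NominatedNode   string
	ResourceVersion string
	ManagedFields   []string
}

// skipPodUpdate reports whether an incoming update can be ignored: after
// stripping fields the apiserver (or the scheduler itself) is expected to
// change, the assumed pod and the updated pod are identical.
func skipPodUpdate(assumed, updated Pod) bool {
	strip := func(p Pod) Pod {
		p.ResourceVersion = ""
		p.ManagedFields = nil // rewritten by server-side apply
		p.NominatedNode = ""  // set by the scheduler during preemption
		return p
	}
	return reflect.DeepEqual(strip(assumed), strip(updated))
}

func main() {
	assumed := Pod{Spec: "cpu=1", ResourceVersion: "10"}
	updated := Pod{Spec: "cpu=1", ResourceVersion: "11", ManagedFields: []string{"kubectl"}}
	// Only server-managed fields differ, so the update can be skipped.
	fmt.Println(skipPodUpdate(assumed, updated))
}
```

If anything outside the stripped set differs, the update must be processed, which is exactly the managedFields bug the PR fixes.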
A
The reason we are skipping a pod update — the side effect of that is that we're not going to update the pod in the queue, right? Yeah. Which means — I'm just trying to follow the code — so if the update does matter, is that a performance penalty, in that we're gonna retry the pod again? Yeah.
C
So basically, there are two things we should sort of discuss and agree on. The first one is the real semantics of PostFilter. In the first version, what I thought of is that PostFilter is a generic interface for preemption-like behavior. That means it should be, and should only be, called after the Filter phase failed.
A
But if you wanted to execute it only on failure, then that would allow us to exit early once one of the plugins says, "oh, I changed the cluster state and I believe the pod is schedulable in the next scheduling cycle," so you don't need to keep executing the other PostFilter plugins. And so you can imagine, as we argued in the issue, you can have multiple preemption plugins with various levels of complexity, and you try them one by one.
A
So you should stop early if one of them succeeded. And the idea is that, ideally, you would have them ordered in terms of their complexity — how complex they are and how optimistic they are. That doesn't prevent having, like, informational plugins, as Aldo also mentioned in the issue, where you want, for example, to record some metrics related to the fact that the pod was unschedulable.
A
Cluster operators need to be aware of how those should be configured in the scheduler component config, such that they are executed first and they always return Unschedulable. Which makes sense: the informational plugin is not going to make the pod schedulable, so it should still return Unschedulable — basically maintain the previous state — and that allows us to have—
A
So we have three statuses. Unschedulable, meaning that I'm going to continue executing the PostFilter plugins. The first plugin to return Success — we're going to stop and say, okay, we're done, I'm gonna return; I'm just gonna assume that that plugin found a solution or changed the cluster state to make the pod schedulable. And Error — Error should always be an internal error: something wrong, like the scheduler state itself being inconsistent, or something unexpected, and we should break there.
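The three-status semantics can be sketched as a simple loop. This is a hypothetical, simplified model — the real framework types live in the kube-scheduler codebase and look different; the names here are just for illustration:

```go
package main

import "fmt"

// Status is a toy stand-in for the framework's status codes.
type Status int

const (
	Success Status = iota
	Unschedulable
	Error
)

// PostFilterPlugin pairs a name with its run function.
type PostFilterPlugin struct {
	Name string
	Run  func() Status
}

// runPostFilterPlugins applies the semantics discussed above:
// Unschedulable means "try the next plugin", Success means "stop, the
// pod should be schedulable next cycle", Error aborts immediately.
func runPostFilterPlugins(plugins []PostFilterPlugin) (string, Status) {
	for _, p := range plugins {
		switch s := p.Run(); s {
		case Success, Error:
			return p.Name, s // stop early on success or internal error
		case Unschedulable:
			continue // informational or failed plugin: keep going
		}
	}
	return "", Unschedulable
}

func main() {
	plugins := []PostFilterPlugin{
		// Informational plugin: records metrics, maintains previous state.
		{"metrics", func() Status { return Unschedulable }},
		// Cheap preemption plugin that finds a solution.
		{"simple-preemption", func() Status { return Success }},
		// More expensive plugin, never reached thanks to the early exit.
		{"complex-preemption", func() Status { panic("never reached") }},
	}
	name, s := runPostFilterPlugins(plugins)
	fmt.Println(name, s == Success)
}
```

Ordering cheap plugins before expensive ones is what makes the early exit pay off, per the discussion above.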
C
The second thing is a more detailed design question: whether we should expose the preemption-related information as a parameter to the PostFilter function, or whether we want to expose it at a higher level, through the framework handle. According to Abdullah's comments, the parameters we define in each extension point function should be sort of transient and only associated with that particular scheduling cycle, like what we did with the cycle state.
C
The pod, as well as the snapshot, is, you know, sort of a transient, consistent state of that scheduling cycle. So with that design rationale, I sort of agree that we should expose the preemption-related information at a higher level, through the framework handle, because that kind of information — like the nominated pod map — is global state that changes along the whole life of the scheduler. So on that point, I also agree.
A
I mean, the document also simplified the function signature. Again, since this is global state that is valid across scheduling cycles, we might as well have it accessible directly through the framework handle. I mean, there's an argument by Aldo saying we need to, you know, limit the scope, or have separation of concerns, and I agree with that.
A
The problem is, this increases the burden on our side of deciding what to expose where, and I don't think we can even enforce it, because anyone can take a pointer and store it in, like, the plugin's data and have it accessible through the different extension points and locations. So I would rather not be opinionated on this — just have the function signature as simple as possible. And yeah, that's, I guess, my point.
A
So, with that, let me open the floor for any other discussions or questions — there's nothing else on the agenda. I'm working on defining some benchmarks to better evaluate preemption and the preemption path — basically any logic that gets executed when we fail to schedule a pod. I already have one benchmark added, and I'm working on a second one as well. The first benchmark was trying to evaluate preemption itself: it creates a bunch of low-priority pods that get scheduled, and once those pods are scheduled—
A
We benchmark how fast — like, the performance of the scheduler scheduling an equivalent number of pods in place of those lower-priority init pods. So basically, each pod that we're testing is gonna evict a bunch of init pods, the lower-priority ones, and so forth. That allows us to evaluate preemption plus the whole path, everything. The second benchmark that I'm working on right now evaluates sort of head-of-line blocking as well.
A
So what the benchmark does: it creates a bunch of unschedulable init pods that stay in the system, and then the benchmark starts to create schedulable pods and sees how fast we can schedule those pods while we have, I don't know, 5,000 unschedulable pods in the queue that keep, you know, coming back and clogging the queue. All the pods have the same priority; it's just that the unschedulable pods are going to consume some cycles and make, you know, the schedulable pods slower to schedule.
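A toy model of that head-of-line blocking scenario: unschedulable pods keep re-entering the queue and burn scheduling attempts that schedulable pods have to wait behind. All numbers and the queue model are made up for illustration; the real benchmark drives the actual scheduler:

```go
package main

import "fmt"

// attemptsToScheduleAll counts how many scheduling attempts it takes to
// place all schedulable pods when unschedulable pods share the queue and
// re-enqueue after every failed attempt.
func attemptsToScheduleAll(schedulable, unschedulable int) int {
	queue := make([]bool, 0, schedulable+unschedulable)
	for i := 0; i < unschedulable; i++ {
		queue = append(queue, false) // false = unschedulable pod
	}
	for i := 0; i < schedulable; i++ {
		queue = append(queue, true) // true = schedulable pod
	}
	attempts, scheduled := 0, 0
	for scheduled < schedulable {
		pod := queue[0]
		queue = queue[1:]
		attempts++
		if pod {
			scheduled++
		} else {
			queue = append(queue, pod) // comes back and clogs the queue
		}
	}
	return attempts
}

func main() {
	// 10 schedulable pods stuck behind 5 unschedulable ones: every
	// unschedulable pod at the head costs one wasted attempt.
	fmt.Println(attemptsToScheduleAll(10, 5)) // 15
	fmt.Println(attemptsToScheduleAll(10, 0)) // 10
}
```

Even this crude model shows the wasted cycles growing with the number of unschedulable pods, which is what the benchmark is meant to measure on the real code.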
A
You know, the entire logic that comes after deciding that the pod is unschedulable — I don't think we had any benchmarks before to evaluate that, so I'm trying to create a couple of canonical ones to use as, you know, a way to always evaluate, to make sure that for anything we do there we have some sort of a reference point: what was the performance, and how does the new logic impact things?
So that's all I have. Oh, I do have an optimization — a funny one — which is basically something we faced.
A
Related to this, we have, like, a formatting line — a log line that is actually at v(3). It's not getting printed, it's just getting formatted, but it's in a very tight loop, and it consumes, like, I don't know, 30 percent of the execution time for my benchmark. So we need to find a generic solution for this type of really annoying problem.
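The pitfall being described is that a log call's arguments get built (formatted) at the call site even when the verbosity level means nothing is printed. A minimal sketch of the problem and the usual guard — this is a toy logger, not klog itself:

```go
package main

import "fmt"

// formatCount tracks how many times the expensive formatting ran.
var formatCount int

// expensiveFormat stands in for a costly fmt.Sprintf at a log call site.
func expensiveFormat(i int) string {
	formatCount++
	return fmt.Sprintf("processing item %d", i)
}

// Logger is a toy verbosity-gated logger.
type Logger struct{ enabled bool }

func (l Logger) Info(msg string) {
	if l.enabled {
		fmt.Println(msg)
	}
}

func main() {
	log := Logger{enabled: false} // e.g. running below v(3)

	// Unguarded: the argument is formatted on every iteration even
	// though nothing is ever printed — the tight-loop cost above.
	for i := 0; i < 1000; i++ {
		log.Info(expensiveFormat(i))
	}
	unguarded := formatCount

	// Guarded: checking the level first skips the formatting entirely.
	formatCount = 0
	for i := 0; i < 1000; i++ {
		if log.enabled {
			log.Info(expensiveFormat(i))
		}
	}
	fmt.Println(unguarded, formatCount)
}
```

A generic fix would move that enabled-check inside the logging path or defer argument construction, so call sites in hot loops don't pay for disabled levels.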