From YouTube: Kubernetes SIG Scheduling Meeting - 2020-01-09
B: So when you start up, you get some pod events — add-pod events, for example — and only later you get the node events. So at that point we have some inconsistencies between these caches. And the same when you're deleting: say you totally lose a node. Then the node deletion could delete all the pods before, or after, the node delete event. So these kinds of issues might cause some inconsistencies, and that could affect scheduling decisions, especially around affinity and even pod spreading.
B: But as I said, this affects affinity, and it might be slightly different from the previous implementation for these kinds of scenarios. In general it should only take a few seconds until everything is in sync, but say the scheduler restarted: something like affinity could be skipped because we didn't get the node creation event yet, so we don't have labels or anything. So we can't really check the pod-to-node affinity, for example. So yeah, that's the problem we can't get around.
A: First, before we start scheduling, we wait for the informers to sync up. From the documentation, this means that at least a list call was done. So we will wait for the list to come, and once that's in the informer's cache, we start scheduling. The problem there is that we don't use that cache directly; we have our own cache, as I understand it.
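The wait-for-sync step described here corresponds, in client-go, to the `cache.WaitForCacheSync` helper combined with each informer's `HasSynced` method. Below is a minimal stdlib-only sketch of that pattern; the `informer` type and `waitForCacheSync` function are simplified stand-ins, not the real client-go types.

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// informer mimics the HasSynced contract of a client-go shared informer:
// it reports true once the initial LIST has been delivered to its cache.
type informer struct{ synced atomic.Bool }

func (i *informer) HasSynced() bool { return i.synced.Load() }

// waitForCacheSync polls every informer until all report synced or the
// stop channel closes, loosely mirroring cache.WaitForCacheSync.
func waitForCacheSync(stop <-chan struct{}, synced ...func() bool) bool {
	for {
		all := true
		for _, s := range synced {
			if !s() {
				all = false
			}
		}
		if all {
			return true
		}
		select {
		case <-stop:
			return false
		case <-time.After(10 * time.Millisecond):
		}
	}
}

func main() {
	pods, nodes := &informer{}, &informer{}
	stop := make(chan struct{})
	// Simulate the initial LISTs completing at different times: the pod
	// list lands before the node list, as described in the discussion.
	go func() { pods.synced.Store(true) }()
	go func() { time.Sleep(50 * time.Millisecond); nodes.synced.Store(true) }()

	// The scheduler only starts scheduling once *both* caches are synced.
	if waitForCacheSync(stop, pods.HasSynced, nodes.HasSynced) {
		fmt.Println("informer caches synced; scheduling can start")
	}
}
```

Note that, as pointed out in the discussion, this only guarantees the initial list landed in the informer cache; it says nothing about the ordering of subsequent events relative to the scheduler's own cache.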
B: Yeah, yeah — unless somehow you would block, waiting for the informer cache to be in sync, something like that, right? Process all the events first, and only then continue receiving the rest of the events. But that needs changes in the shared informers. So maybe we can send an email to API machinery to check if something can be done.
A: We can ask them for best practices around this, because I don't think this is something that is related exclusively to the scheduler. Any controller, at startup, wants to make sure that it is in a consistent state, and I don't think we are any different. The client doesn't make any guarantees around the order of events and whatnot. So yeah, I think, with the change that you're making, I don't agree that we're introducing a new problem.
C: So if you check out the logic, that's actually processed in the pod-add routine if the event is for a new pod. Because if it's a deleted pod, the node name has been set, so we just put that in the cache. But then — not at the same time, but at a later time, after a very slight interval — we get a new pod, and the new pod may be a replacement for the deleted pod.
B: That kind of goes against the — yes, I know, I know it's covered this way, but the serious cases are mostly startup, or when the scheduler restarts, because that's when there are more events that could come through, and more things have a bigger chance of being wrong, of being in the wrong order. So.
B: Yeah, it could be a node deletion, or the ordering of a node creation at startup, when you don't have any information but you already have a node with pods running. So in that case the events could come in a different order: you get the pod creation first, and then you get the node creation for that pod. Yeah.
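A rough sketch of how a scheduler-style cache can absorb that out-of-order case — pod event first, node event later — is to keep a placeholder node entry. The names and structure below are purely illustrative, not kube-scheduler's actual cache code.

```go
package main

import "fmt"

// nodeInfo holds what the cache knows about a node. Labels stay empty
// until the real node event arrives.
type nodeInfo struct {
	labels map[string]string
	pods   []string
}

type miniCache struct{ nodes map[string]*nodeInfo }

func newMiniCache() *miniCache { return &miniCache{nodes: map[string]*nodeInfo{}} }

// AddPod records a pod under its node, creating a placeholder nodeInfo
// if the node's own Add event has not been seen yet.
func (c *miniCache) AddPod(pod, nodeName string) {
	n, ok := c.nodes[nodeName]
	if !ok {
		n = &nodeInfo{labels: map[string]string{}} // placeholder: no labels yet
		c.nodes[nodeName] = n
	}
	n.pods = append(n.pods, pod)
}

// AddNode fills in the real node details; any pods that arrived early
// are already attached to the placeholder entry and are not lost.
func (c *miniCache) AddNode(nodeName string, labels map[string]string) {
	n, ok := c.nodes[nodeName]
	if !ok {
		n = &nodeInfo{}
		c.nodes[nodeName] = n
	}
	n.labels = labels
}

func main() {
	c := newMiniCache()
	c.AddPod("web-1", "node-a") // pod event arrives before the node event
	c.AddNode("node-a", map[string]string{"zone": "us-east-1a"})
	n := c.nodes["node-a"]
	fmt.Println(len(n.pods), n.labels["zone"]) // 1 us-east-1a
}
```

The pod survives the reordering, but until `AddNode` runs the placeholder has no labels — which is exactly the window in which label-based checks such as affinity cannot be evaluated.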
C: I think so, okay. I know that some large Chinese companies have interesting stories there. So basically I talked with the Alibaba guys offline. They internally have some implementation, like co-scheduling and some other batch features. So my plan is that we will set up a sub-repo, and the timeline is that I would finish that up at the end of this month, and the Alibaba gang is putting up a KEP. But don't confuse that: it's just a KEP, but it doesn't mean the implementation should be in-tree.
A: That sounds good. I think we need to make a distinction between gang scheduling and batch scheduling. I don't want them to be conflated; that could complicate the problem. Then we can start with gang scheduling, because it's the simpler problem, which is basically trying to make sure that a group of pods are scheduled together or not at all. And then, if there is demand — and there is a reasonable way of adding more and more batch-scheduling primitives in the scheduler — then you should do it.
C: So there are two things: one is to push the KEP forward. On the other hand, once we set up the repo, I want the two items to move forward in parallel, so that by the end of 1.18 we can have two or three things ready. But before that, I need to think about some strategies, like version compatibility, that sort of thing, to make sure that when the support is set up, it can be at a very good starting point.
A: So that's it for that agenda item. The last one — I don't have it here — is: can I give a quick update on the migration, if no one else has anything to discuss? Okay, so the migration technically is pretty much done. Right now we are plugin-first in the scheduler, not predicate- or priority-first, which is really nice.
A: Now we can claim that everything actually runs through the framework, and even the default algorithm provider is expressed in plugins rather than predicates and priorities. What is left is cleanup behind the scenes. We cut the dependency with the daemon set controller. We have a clear path for the cluster autoscaler to integrate with the framework directly, not with the scheduler, so they are not going to create a scheduler; they will create a framework. So we need to go back and forth with them.
A: They asked us for a slight modification so that they are able to continue to do the things that they are doing right now, and I think we didn't have to do anything out of the ordinary to support that, so it was really minor changes. That will clarify our interface with them and hopefully, in the future, make it easier. Unfortunately, some of the code we have right now in predicates and filters that calculates pre-filter state on the fly will stay.
A: Hopefully we can clean that up in the future, but it's not something that we have to care about in the near future. So what is left is a few cleanups. I think the last major one is: we have four predicates that the kubelet calls indirectly through a general predicates function.
A: But if we don't, that's fine. Then there's creating the bind plugin — basically moving the binding logic into a plugin — and the queue sort as well. And then we can also try to make progress on the proposal related to adding extension points to change how we do preemption. I'm a little bit — I'm not sure — I did read your proposal, I didn't forget about it, and I was just trying to mull it over and think about it a little bit more.
A: My hope is that if preemption becomes, on its own, something like a post-filter plugin — or maybe a step on its own — then if anyone wants to change it, they have to provide a new preemption plugin, not add pre- and post-preemption extension points, because that sounds to me like a little bit too much, and it ties things to the current implementation of the preemption logic, the way it was done.
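The design preference stated here — make preemption itself one replaceable plugin rather than surrounding it with pre/post-preemption extension points — can be sketched roughly as follows. The interface and type names are illustrative, not the actual scheduler framework API.

```go
package main

import "fmt"

// PostFilterPlugin models preemption as a single pluggable step that
// runs after filtering has rejected every node for a pod. Swapping the
// policy means swapping this whole plugin, with no extra hooks around it.
type PostFilterPlugin interface {
	Name() string
	// PostFilter may make room for the pod (e.g. by choosing victims)
	// and returns a nominated node if it succeeded.
	PostFilter(pod string, candidateNodes []string) (nominatedNode string, ok bool)
}

// defaultPreemption is a trivial stand-in policy: it "preempts" on the
// first candidate node. A real policy would rank victims, respect pod
// priorities, and so on.
type defaultPreemption struct{}

func (defaultPreemption) Name() string { return "DefaultPreemption" }

func (defaultPreemption) PostFilter(pod string, candidateNodes []string) (string, bool) {
	if len(candidateNodes) == 0 {
		return "", false // nothing to preempt on
	}
	return candidateNodes[0], true
}

func main() {
	var p PostFilterPlugin = defaultPreemption{}
	node, ok := p.PostFilter("high-priority-pod", []string{"node-a", "node-b"})
	fmt.Println(node, ok) // node-a true
}
```

The point of this shape is that a custom scheduler replaces `defaultPreemption` wholesale, so the framework never has to commit to the internal phases of any particular preemption implementation.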
E: Yeah, I would like to discuss this.
A: Nothing that I know of; there is no active discussion around this. It's something that some people are using, and that's why we'll continue to maintain it as a v1 configuration API. So if we are going to deprecate it, it will have to follow the deprecation policy of one year after announcement, since it's a stable feature. But if you have any suggestions on how to improve it, please feel free to create issues.
A: My problem with these heuristics that people are proposing — really pretty interesting heuristics — is: how do we evaluate them? How do we make sure that they work for people who are already using the scheduler currently, and that we don't break the kind of behavior that they are used to and that we have basically committed to so far? I just want to make sure that we have a way to evaluate those things, not just: oh, this is a nice idea.
A: What has been proposed can overfit on the type of workload that the person is dealing with, and that's what worries me about these types of heuristics: they are not bulletproof, and I would just like to see some sort of methodical way of evaluating them and making sure that they don't have a negative impact on current workloads.