From YouTube: Kubernetes SIG Scheduling meeting - 2018-11-01
Description
No description was provided for this meeting.
A
And, as you know, this meeting is recorded and will be uploaded to the public internet, so chances are whatever you say will stay here forever. I mean, a thousand years or so from now, people will realize how smart, or maybe stupid, you were. So anyway, with that, let's start the meeting. We have a few items that we need to update on. One is that the taint-based eviction tests are merged, and the feature is enabled by default. Please let me know if I'm mistaken, those of you who were working on these features, but as far as I know it's on track and it's done. The hard limit priority test failures that we were seeing last week or so are fixed now. Thanks to Ravi and Wei for their work on taint-based eviction. We are going to have a demo of the equivalence cache, but let's go over some of the items that we have right now, and we will get to the demo as well.
B
This is regarding the promotion of the resource limits priority to beta. The test case that we have there does something like updating one of the nodes to have a capacity larger than it can hold, I mean the physical capacity of the node. So there is an inherent chance of a race where the kubelet is updating its capacity, finding it out from the node, and when we are updating it in the test on the node side, it may actually cause the problem.
B
We can do it, but the problem is that when we're running e2e, it uses the serial mechanism, the [Serial] tag, in the e2e tests. The assumption is that there is a node already up and running, and before each and every test starts, we assume that there is a node and we wait for the node to be available.
A
We didn't have many integration tests at all. Actually, still, if you look at the Kubernetes source code, there are many components that don't have many integration tests. But we started creating a library in the integration tests and started adding more integration tests, and actually we even converted some of the e2e tests to integration tests.
A
Okay, I was saying that I actually sent out the first implementation of a couple of extension points for the scheduling framework. There is a PR out there. I saw that Wei has commented on it, and I guess others have some comments as well, but I didn't see any major objections about the approach that we've taken in the PR, so I assume people generally agree with it.
A
I believe Leon had mentioned, I'm not sure, but I believe Leon had mentioned, that there are PRs out there to add some guidelines or documentation for the scheduler. We're actually trying to document the scheduler behavior. There are PRs out; I haven't looked at them. Thanks for writing them and working on them. I will definitely take a look and would be interested to review them. I guess there is a link; if you guys are interested, please make sure to take a look at them as well. One more thing, quickly: code freeze is about two weeks away. If you have PRs that you want to merge in 1.13, please send them as soon as possible. You know that reviews sometimes take time, and if you want them to merge, it's best to send them this week, pretty much.
E
This is a very simple batch job, and we have a calculation which is calculating pi, and it has the parallelism and completions definitions, which means this job will concurrently create three pods when it is created. What I need to mention is that in this pod template it has a node selector which requires it to run on a node labeled with foo and bar, but such a node does not actually exist in our cluster. So every pod of this job should be pending.
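The manifest itself isn't shown in the transcript. A minimal Job matching the description, with parallelism and completions of 3, a node selector for a label no node in the cluster carries, and a pi calculation, might look like the following sketch; the name, image, and command are illustrative, not taken from the demo:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi            # illustrative name
spec:
  parallelism: 3      # three pods created concurrently
  completions: 3
  template:
    spec:
      nodeSelector:
        foo: bar      # no node in the cluster has this label, so pods stay Pending
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
```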
E
Well, the new design of the equivalence cache, which was actually proposed by Bobby, is that as long as there is one pod that failed to schedule, due to, for example, this node selector not matching, then the scheduler will not try to schedule all of the rest of the pods which are defined in this parallelism.
E
And then we can see the pods' status: they are all pending. And what's more interesting is, if you check the log, sorry, the log file of the scheduler, you'll find that the scheduler actually only handled one pod of this job, this one. There are no other pods being scheduled, because this pod failed to be scheduled, and so the scheduler will not try to schedule all of the rest of the pods. This is the current design of the equivalence cache.
A
If we just try one of them and we realize that it's not schedulable, the scheduler can save another multiple hundreds of iterations by not looking at all of those instances. This is actually great. Thank you so much, Harry, for implementing this. So do you have a PR that you can share, or is that still an early implementation?
E
This is an early implementation, because I know there are still more issues in the implementation that need to be fixed, but I think the general idea has been proved, exactly what you were thinking, I hope, because now the implementation happens, most of it, in the scheduling queue. So we don't need to change any interface of the scheduler or anything like that.
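As a rough illustration of the behavior demoed above, and not the actual Kubernetes code, the skip can be sketched as a scheduling queue that remembers which equivalence classes have already failed; the field names here are made up:

```python
# Illustrative sketch of the equivalence-cache idea from the demo:
# once one pod of an equivalence class fails to schedule, the queue
# skips the remaining equivalent pods instead of re-running predicates.

def equivalence_key(pod):
    # Pods created by the same controller are treated as equivalent
    # (hypothetical dict shape, standing in for the controller ref).
    owner = pod.get("ownerReference")
    return owner["uid"] if owner else pod["name"]

class SchedulingQueue:
    def __init__(self):
        self.unschedulable_classes = set()
        self.attempts = 0  # how many times predicates actually ran

    def schedule(self, pod, predicate):
        key = equivalence_key(pod)
        if key in self.unschedulable_classes:
            return "skipped"  # an equivalent pod already failed: don't retry
        self.attempts += 1
        if predicate(pod):
            return "scheduled"
        self.unschedulable_classes.add(key)
        return "unschedulable"
```

With three pods from the same job and a predicate that never fits (like the missing foo label), only the first pod is actually attempted; the other two are skipped, matching what the scheduler log showed in the demo.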
E
That is what we decided to do, because when we were talking about calculating the equivalence hash, Bobby and Klaus were thinking about, you know, there may be some hash collisions when you calculate a hash from the whole pod specification, and if you want to handle that, and I was actually concerned about it, it would involve a lot of performance loss in the implementation. So for now, at least for the alpha feature, we will try to use the controller ref only.
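The trade-off described here can be sketched as follows. This is an illustrative comparison with hypothetical field shapes, not the scheduler's real code: hashing the whole spec costs a serialization plus a digest per pod and can in principle collide, while the controller ref is a single lookup, and pods sharing an owner UID are equivalent by construction:

```python
import hashlib
import json

# Two ways to decide that pods are "equivalent", as discussed above.

def spec_hash_key(pod):
    # Hash the whole pod spec: a serialization plus a digest per pod,
    # and with a truncated digest, distinct specs can in principle collide.
    blob = json.dumps(pod["spec"], sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:8]

def controller_ref_key(pod):
    # Alpha approach described in the meeting: pods created by the same
    # controller (same owner UID) are equivalent, so no hashing is needed.
    return pod["ownerReference"]["uid"]
```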
A
In this particular case, basically, if there is a hash collision, there is a chance that in this schedule we don't consider some other pods which may actually be schedulable, right? But since they have a hash collision with another pod that is unschedulable, we will not consider them. Does it make sense?
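This failure mode can be shown with a toy sketch, deliberately using a one-bit "hash" so that a collision is guaranteed; real hashes collide far more rarely, but the consequence is the same: a schedulable pod is wrongly skipped once its bucket is marked unschedulable.

```python
# Toy demonstration of the hash-collision risk discussed above.

def tiny_hash(pod_spec):
    # Deliberately tiny 1-bit "hash" to force a collision.
    return len(str(pod_spec)) % 2

unschedulable_buckets = set()

def try_schedule(pod_spec, fits):
    key = tiny_hash(pod_spec)
    if key in unschedulable_buckets:
        return "skipped"  # collision: this pod is never even evaluated
    if fits:
        return "scheduled"
    unschedulable_buckets.add(key)
    return "unschedulable"
```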
A
Alright, Harry, if your demo is over, I have a couple more items that I would like to hear about from the folks who are the owners, especially since Klaus is here today. I would like to hear from Klaus. Are you done with your demo? Don't say, just go ahead. All right, Klaus, since you're here, could you tell us about gang scheduling and what is left to be done?
A
So I guess in that case, since our goal for 1.13 was to have an early version, we might actually even mark it as green. I guess it doesn't really matter, I mean, whether this is green, yellow, red, whatever. What I am just keeping here is for our own reference, you know: what is in progress, what is postponed, or what is not moving forward. So this is...
E
The old behavior is more complicated: it actually cached part of the predicate results. So before running the predicates, we would check if there are results available that the scheduler can reuse. In order to do that, we need to invalidate the predicate results and check that they are in fact still valid, which makes the maintenance very complicated and, you know, very, very hard to extend.
A
So, you know, the difference that was just mentioned is one of the valid differences between the two. The other thing is that in the previous case, you were also saving on running the predicates for all these pods: as long as a pod was equivalent and nothing on a particular node was changed, it would use the previous results of running the predicates for the equivalent pod as well.
A
So let's assume that you have two pods, pod 1 and pod 2, and you evaluate pod 1 on a series of nodes, say a hundred nodes, and you determine that pod 1 is schedulable or unschedulable. This time let's say it's schedulable on all of those hundred nodes. And when you are evaluating pod 2, and pod 2 is equivalent and nothing on those nodes has changed, in that case you also consider pod 2 as schedulable on all of those nodes without actually running the predicates.
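The reuse described above can be sketched roughly like this. It is an illustration of the idea, not the old equivalence-cache code; the node "generation" counter stands in for whatever invalidation mechanism tracked node changes:

```python
# Sketch of a per-node predicate-result cache: results of running a
# predicate for pod 1 on each node are stored under the pod's
# equivalence class, and an equivalent pod 2 reuses them as long as
# the node's generation has not changed.

class EquivalenceCache:
    def __init__(self):
        # (equivalence_class, node) -> (node_generation, fits)
        self.results = {}
        self.predicate_runs = 0

    def pod_fits(self, eq_class, node, node_generation, predicate):
        cached = self.results.get((eq_class, node))
        if cached and cached[0] == node_generation:
            return cached[1]  # reuse: predicate is not re-run
        self.predicate_runs += 1
        fits = predicate(node)
        self.results[(eq_class, node)] = (node_generation, fits)
        return fits
```

Evaluating pod 1 over a hundred nodes runs the predicate a hundred times; evaluating an equivalent pod 2 over the same unchanged nodes runs it zero more times, which is exactly the saving described in the example.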