From YouTube: Kubernetes SIG Scheduling meeting - 2018-10-18
A: Alright, let's get started. I actually have an update about the first draft of the scheduling framework. I promised last week that I would send out a PR soon, but the PR is not ready. I've gone back and forth about various things in the design, of course, and the PR is going to be a very early draft. People can discuss it further once it's sent out, so it's not going to be by any means a final implementation. Hopefully I will send it out in the next few days. I will basically be out of office tomorrow and then there's the weekend, so my hope is that it will go out somewhere in the middle of next week. That's what I have about the scheduling framework. I don't know if you guys have any questions about that.
My hope is that it's going to be ready for 1.13, and despite the fact that we are going for a stability release in 1.13, I think we can merge the scheduling framework changes, because these are not going to be exercised, I don't think, for now. My goal from the very beginning, for the first step of implementing the scheduling framework, was to be able to move all the volume binding stuff into a plugin, and then add more plugin points and extension points and so on.
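To make the plugin idea concrete for readers of the transcript: the sketch below shows roughly what extension points of this kind could look like. It is only an illustration under assumed names (`Plugin`, `FilterPlugin`, `ReservePlugin`, `PreBindPlugin`); it is not the actual framework API being drafted.

```go
// Illustrative sketch only: hypothetical extension-point interfaces in the
// spirit of the framework being discussed, not the real draft API.
package framework

import v1 "k8s.io/api/core/v1"

// Plugin is the base interface every extension point embeds.
type Plugin interface {
	Name() string
}

// FilterPlugin runs while feasible nodes are being selected.
type FilterPlugin interface {
	Plugin
	Filter(pod *v1.Pod, nodeName string) error
}

// ReservePlugin runs after a node is chosen but before binding; a volume
// binding plugin would reserve its claims at this point.
type ReservePlugin interface {
	Plugin
	Reserve(pod *v1.Pod, nodeName string) error
}

// PreBindPlugin runs just before the bind call; volume binding could wait
// here for PV/PVC binding to complete.
type PreBindPlugin interface {
	Plugin
	PreBind(pod *v1.Pod, nodeName string) error
}
```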
E: In all the extension points that we have, the scheduler seems to be delegating responsibility, but the final binding will still be written by the default scheduler itself and not by any extension point, right? So is there any plan, or any benefit, of actually delegating the responsibility of writing the final binding to etcd to an extension point, rather than the default scheduler?
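Since the answer isn't captured well in the recording, here is only a rough illustration of what delegating the final bind could look like as an extension point. The interface name and signature are assumptions for discussion, not an existing API.

```go
// Illustrative sketch only: a hypothetical bind extension point that would
// let a plugin write the pod->node binding itself, instead of the default
// scheduler issuing it. Name and signature are assumptions.
package framework

import v1 "k8s.io/api/core/v1"

type BindPlugin interface {
	Name() string
	// Bind creates the Binding for the pod (for example through the API
	// server, which persists it in etcd). An error would let the scheduler
	// fall back to its built-in bind step.
	Bind(pod *v1.Pod, nodeName string) error
}
```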
F: So, basically, we have the e2e tests for taint-based eviction itself. The tricky issue is that we cannot see whether they pass in the CI job, because the e2e tests have been disabled in the CI config for that PR. That means we have to rely on our reviewers to manually check whether the e2e coverage is correct or missing any functionality before we check it in to master.
A: After PRs are merged — that is one small caveat — but if there is still an issue with any PRs after they are merged, there is a very high chance that we can catch those, and that should be fine. Actually, I think a lot of our features are like that, and since we also have unit tests and integration tests for the feature, hopefully we can catch things — if not before submitting, then on a daily basis or even an hourly basis. Yeah.
A: We were thinking about adding it soon. We were not very aggressively targeting 1.13, but we were still hoping that we could possibly have it in 1.13. It looks like we shouldn't actually add it in 1.13, although we could still get away with adding it, because it's not going to be enabled. Anyhow, it's going to be an alpha feature even if we add it, so it would be nice to consider adding it, but of course we won't be really pushing for it.
G: Very quick — this meeting time is usually difficult for me, but I wanted to drop in and say a big thank you to everyone who has been working on the cleanup and also the documentation that has been started. I'm really excited about that, and I'm really happy with how much we've got done already. Yeah.
A: Thank you for initiating this. I really appreciate all of you guys — the effort that you put in and all the changes that you made. I know that there are still a couple of those left to be done; they are hopefully getting merged very soon, but this was actually a very, very good example of a team effort. I really appreciate everyone's work. Alright, so, yeah — gang scheduling.
C: For the gang scheduling, I think there are still some comments about how to make sure there is no deadlock between the jobs. For the API part, I think we're almost there, and for the discussion about automation and the lifecycle, I think we agreed, because we are going to use a separate proposal, a separate document, to define the details. So I think for that part we will be fine.
A: Let me give some context for everybody else who is not fully aware of what we're talking about. One of the questions on the document is this: let's say that we are scheduling two gangs, and we have allocated, or reserved in a way, enough resources for half of one gang and also reserved resources for half of another gang. But now the cluster is out of resources, so neither of these two gangs can be fully allocated. So what should we do? Basically, we will not see any progress on either of them, and in fact, if we hadn't reserved resources for one of them, there was a chance that we could fully admit or schedule one of the two, instead of just having resources half-reserved for two jobs. So that's one of the major questions on this proposal that we're discussing now.
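To make the trade-off concrete, the sketch below shows the all-or-nothing admission check that avoids two gangs each holding half the cluster. The types and numbers are made up for illustration; this is not kube-batch or the proposal's actual code.

```go
// Illustrative sketch, not the proposal's actual code: reserve resources
// for a gang only if every member fits, so two gangs can never each hold
// half the cluster and starve one another.
package main

import "fmt"

type pod struct {
	name string
	cpu  int // requested milli-CPU, simplified to a single dimension
}

type gang struct {
	name string
	pods []pod
}

// admitGang returns true only when the whole gang fits into freeCPU.
// Partial reservations are never made.
func admitGang(g gang, freeCPU int) bool {
	need := 0
	for _, p := range g.pods {
		need += p.cpu
	}
	return need <= freeCPU
}

func main() {
	free := 4000
	a := gang{name: "job-a", pods: []pod{{"a-0", 3000}, {"a-1", 3000}}}
	b := gang{name: "job-b", pods: []pod{{"b-0", 2000}, {"b-1", 2000}}}

	// job-a does not fit as a whole, so nothing is reserved for it and
	// job-b can still be admitted completely.
	fmt.Println("admit job-a:", admitGang(a, free)) // false
	fmt.Println("admit job-b:", admitGang(b, free)) // true
}
```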
C: Yes, for this question, what I'm trying to do is this: for the resources that are reserved, we have a state for this kind of resource — I call it "allocated". I think this is similar to the assumed pods in the default scheduler. Yes, similar. And we introduced another phase where we will reconsider such allocated resources for each job.
C
If
away
after
allocation
phase,
we
will
consider
this
kind
of
resource
cannot
be
used
because
we
allocated
a
weaver
actually
how,
in
a
free
software
already
just
translates
this
data
to
the
bonding
at
least
we're
training.
The
stairs
of
this
can
of
resource
yeah
in
in
the
nests
exist.
If
this
is
this
resource
is
allocated
doesn't
mean
this
result
cannot
be
used
or
bad
that
at
all,
so
we
will
reconsider
this.
Can
this
part
of
draw
together
for
them,
hopefully
how
to
resource
or
to
job
here
yeah?
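The recording is rough here, but the idea seems to be a state between pending and bound, analogous to assumed pods in the default scheduler's cache: "allocated" resources are tracked by the scheduler but not yet bound, so they can still be taken back and offered to another job. A minimal sketch under that assumption, with all names hypothetical:

```go
// Rough sketch of the "allocated" state described above, analogous to an
// assumed pod in the default scheduler's cache. All names are hypothetical.
package main

import "fmt"

type podState int

const (
	statePending   podState = iota // waiting in the queue
	stateAllocated                 // held only in the scheduler's own bookkeeping
	stateBound                     // binding already written to the API server
)

type trackedPod struct {
	name  string
	state podState
}

// reclaim returns allocated (but not yet bound) pods to pending so their
// resources can be offered to another job.
func reclaim(pods []trackedPod) {
	for i := range pods {
		if pods[i].state == stateAllocated {
			pods[i].state = statePending
		}
	}
}

func main() {
	pods := []trackedPod{
		{name: "job-a-0", state: stateAllocated},
		{name: "job-b-0", state: stateBound},
	}
	reclaim(pods)
	fmt.Printf("%+v\n", pods) // job-a-0 returns to pending; job-b-0 stays bound
}
```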
A: You just provide the name of the group to the various collections. It could consist of multiple collections, and these collections can be created at different times. So, for example, collection one can be created now and then collection two can be created a minute later. So in the meantime, while not all the parts of a job are available, you can start processing some of the pods of the first collection and locking resources for those.
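As a reading aid, here is a sketch of the shape being described: one named group that several separately created collections point to. The type and field names are illustrative assumptions, not the proposed API.

```go
// Illustrative only: several separately created collections refer to one
// group by name. Field names are assumptions, not the proposed API.
package podgroup

import "time"

type Collection struct {
	Name      string
	GroupName string // every collection naming the same group belongs to it
	CreatedAt time.Time
	PodNames  []string
}

type Group struct {
	Name        string
	Collections []Collection
}

// Collections can arrive minutes apart; the scheduler may already start
// processing pods of the first collection and lock resources for them
// while later collections of the same group are still being created.
```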
B: So I just want to add a quick update about the new equivalence cache design. Sure — yes. For anyone who is not familiar with this design: the idea is that we will have a "class" concept in our scheduling queue, which was proposed by Bobby, and I just spent a few weeks implementing a POC based on this design. We also have a volunteer who helped a lot — I don't know if you can hear, but I want to say thank you here.
B: So, according to the POC, the first thing we can make sure of is that the idea from Bobby — changing the scheduling queue — does work; I can see the scheduler is working as normal and there's no blocker here. I think the concern will be that, in this design, we need to calculate the equivalence class whenever we want to, for example, add a pod to the queue or reschedule it, so the performance impact will be a bit of a concern here, and we also discussed this.
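For readers unfamiliar with the "class" idea: pods with identical scheduling requirements share a class, so an unschedulable result for one member can be reused for the rest instead of retrying every node. The sketch below shows one plausible way to derive such a class key; the chosen fields and the hashing are assumptions, not the actual POC.

```go
// Illustrative sketch: derive an equivalence-class key from the fields that
// matter for scheduling, so identical pods in the queue share one class.
// The chosen fields and the hashing are assumptions, not the actual POC.
package main

import (
	"crypto/sha256"
	"fmt"
)

type podSpecSummary struct {
	cpuMilli     int64
	memoryBytes  int64
	nodeSelector string // flattened key=value pairs, sorted
	tolerations  string // flattened, sorted
}

func classKey(s podSpecSummary) string {
	h := sha256.Sum256([]byte(fmt.Sprintf("%d/%d/%s/%s",
		s.cpuMilli, s.memoryBytes, s.nodeSelector, s.tolerations)))
	return fmt.Sprintf("%x", h[:8])
}

func main() {
	a := podSpecSummary{cpuMilli: 1000, memoryBytes: 1 << 30, nodeSelector: "disktype=ssd"}
	b := a // a replica with exactly the same requirements

	// Same key: if pod a was found unschedulable, pod b can be skipped
	// instead of being tried against every node again.
	fmt.Println(classKey(a) == classKey(b)) // true
}
```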
A: I think I've done it — but maybe you haven't seen it — I believe I have commented at that point and said that I think this makes sense, at least, as you said, for the first version. And if you want to try to see the impact of this work: we have been targeting this particularly for the cases where you have a large collection and the collection cannot be scheduled for whatever reason — for example, the cluster is already overloaded, or the collection requires a lot of resources. So it's actually a good idea to try it with something like that. If you have a POC, you can try it with this kind of scenario: you don't have enough resources to schedule all these pods, but you have a very large collection that has a lot of pods, and none of them can be scheduled. The scheduler tries to schedule each one of those one by one, and once it determines that one is unschedulable, it skips all the rest, because it doesn't make sense to try anything else.
D: Jordan found it out a couple of months ago and he provided a fix: when we were updating, we were actually using lots of pointers in the structs that are used as keys in the map, so he provided the fix. I did not test that out, but hopefully that should fix the issue. I see.
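For anyone who missed the bug being referenced: if a struct used as a map key contains pointer fields, Go compares the pointer addresses rather than the pointed-to values, so two logically identical keys produce separate entries and lookups silently miss. A minimal illustration of that failure mode follows; it is not the actual scheduler code.

```go
// Minimal illustration of the pointer-in-map-key problem, not the actual
// scheduler code: equal values behind different pointers are distinct keys.
package main

import "fmt"

type badKey struct {
	name *string // pointer field: compared by address, not by value
}

type goodKey struct {
	name string // value field: compared by content
}

func main() {
	a, b := "gpu", "gpu"

	bad := map[badKey]int{}
	bad[badKey{&a}]++
	bad[badKey{&b}]++ // logically the same key, but a second entry appears
	fmt.Println(len(bad)) // 2

	good := map[goodKey]int{}
	good[goodKey{a}]++
	good[goodKey{b}]++ // de-duplicates as intended
	fmt.Println(len(good)) // 1
}
```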