From YouTube: Kubernetes SIG Scheduling Meeting - 2019-06-27
A: I would like to start today's meeting with a couple of reminders and announcements about the feature freeze and code freeze for 1.16. As I put these dates in our meeting notes, the feature freeze for 1.16 is July 30th, so basically the end of the coming month. What it means is that all the enhancement issues must be filed by that day, and all the KEPs must be merged by that day. Code freeze for 1.16 is close to the end of August: it's August 29th. That's something that has existed for a long time, so hopefully most of you are familiar with code freeze, which means that all the PRs implementing features must be merged by that day. Those are the two important dates. Are there any questions about the feature freeze or code freeze dates?
A: All right, I had a couple of updates regarding some of the new changes in the scheduler. One is that the volume limits code is merged. This is a new change that removes, or at least helps us eventually remove, all the cloud-provider-specific code with respect to the number of volumes that can be attached to a node. Actually, even today in 1.15 we have some hard-coded numbers for some of the more famous cloud providers for the number of volumes that can be attached to a node. We were not so happy with this piece of code because it was not extensible and it was not scalable: as new cloud providers or other users started using Kubernetes, they had to go and make changes. So we are removing that piece of code, and instead we are bringing in the new changes, which is the new CSI drivers for Kubernetes.
B: Is there any other cloud-provider-specific code in the scheduler?

A: Yeah. So, there are a number of PRs for making changes or implementing extension points for the scheduler. One, which is kind of blocking some other PRs, is building the filter extension points. I have had some discussions about how to implement filter extension points with the author of the PR; I will add the links to the meeting notes, but basically the implementation of that PR was problematic in some ways.
A: For this reason, as well as some other stuff or logic that was added to a function that we have called findNodesThatFit, I asked the author to change it. I actually didn't check today (this is the update from yesterday) whether there are changes to the PR, but I believe we need to make that change before merging it, and I know that there are a number of other PRs waiting to be merged.
A: There was a test which was basically sharing data between tests, and Raven has sent a PR that fixes that problem. Hopefully, after that one is merged, if it's not already merged, we can have the bind extension point again. I know that Abdullah, you have made a score extension point, so that needs a review, right?
B: So the initial one was just the interface, but I thought that you were generally okay with the interface, right? Or just, like, okay with the implementation in this PR?
A: So, particularly with respect to score and filter, my idea... all right, this is just the model that I have in my head; it doesn't necessarily have to be backed by any rational or scientific reasons. The model that I have in my head is that the parallelism of how to run a series of scoring or filter plugins should be done in the scheduler, not in the run function for plugins.
A: So basically we have, maybe, a piece of code that starts goroutines per node, and that goroutine calls these framework functions. That's how I feel it should be done, both for filter and score plugins. I don't know if you guys have any other preference, or believe that the parallelism of creating these goroutines should actually be done by the framework, in the run function of the framework. Is there any particular reason that you may prefer the latter?
B
I
think
it's
I
say
that
I
see
it
is
that
what
is
at
this
point
was
the
easiest
integrate
with
what
we
have,
so
that
was
I
think
my
main,
like
idea
with
your
dinner,
that
it
should
be
that
way,
but
I
don't
know
if
we
need
to
do
like
major
refactoring,
the
current
code.
I,
don't
know
if
that
requires
major.
In
fact,
if
the
kind
of
code
to
do
it
like
to
do
it,
like
a
you
know,
firm
ism
in
the
outer
loop
rather
than
say,
I,.
B: For the priority functions, there's a function that iterates over nodes; we already have a couple of loops there anyway: one for, I think, the old approach for implementing these scoring functions, and then the new one, which is based on map/reduce. So, from my first look I didn't feel like it was easy, but I can take another look and see. If there's a way it should be done, we should have enough parallelism there, yeah, correct.
A: So the map/reduce model is essentially changed, in the framework, into scoring and post-scoring plugins, basically, or normalize-scoring, something like that; I don't know exactly what name we had on the extension points, but anyway. There is this phase where we score, and then in the next phase we go and normalize the scores in some manner. This is essentially equivalent to that map/reduce model in the code, and I have a feeling that it should be relatively easy to replace that with the new mechanism, but I have to check.
B: Removing everything from the map/reduce pattern should be easy, even the older one. So I have something on my to-do list, which is basically to make a laundry list of all the pieces that we have in the framework that need to be translated into plugins, and then maybe try to create...
A: ...and that's how the rest of the extension points are: all these other run functions of the framework just get called on a node. If we wanted to build the parallelism inside the framework, inside those run functions, some of these functions, particularly the scoring functions, would have to get a slice of nodes as opposed to just a node.
C: So in the current state, or in the current code base, it is actually expecting a node list. Say someone says: okay, I have written a framework and I'm trying to compare it with the existing Kubernetes scheduler framework that we have currently. It will not be an apples-to-apples comparison, because the parallelism is outside of the plug-in framework that we have currently, right? So how can someone tell: okay, I have written something that is better than the existing one, or something like that?
A: So when it comes to external plugins, similar to those in the extender, we basically can't afford calling these plugins many times for a single pod. So, for example, in that model, if you wanted to call some of these extenders per node for a pod, you would have had to call the extenders, for example, a thousand times per pod...
A: ...if you had a thousand nodes in your cluster. So in that model, it's better to pass all the nodes to the extender in a single pass, as opposed to the framework, which is an in-process plug-in. So in this model, I feel it's going to be a simpler version if plugins don't worry about iterating over nodes and only focus on a single node. They can still share state via the plugin context if they need to share state as they are called for various nodes. So plugins can be stateful: they can have state in their own instance, and they can pass state in the plugin context to other plugins. So I feel that in the in-process model we don't really need to pass a slice of nodes, as we have to do in the extender model, yeah.
A: First of all, even in the model that I described, we never passed a slice of nodes to plugins. We were only passing a slice of nodes to the framework, or the run function of the framework, which is actually an internal function for the scheduler to run these plugins per node. Still, in that model, the plugins are called per node. We never actually discussed passing a slice of nodes to plugins directly.
A: It's just passing a slice of nodes to the framework, and then the framework calls those plugins one at a time per node. That interface is not going to change, and passing a slice of nodes to each plugin is even worse than what I am arguing against. That one is horrible, because then each plugin would have to take care of all the parallelism and how it wants to deal with parallelism, and some may decide not to parallelize at all, and all that stuff, which is going to be really bad. Over time it could create a huge mess, and we would need to replicate a lot of the parallelism logic inside each plug-in, as opposed to just having it once in the scheduler, right? So that model of passing a slice of nodes to plugins is definitely not recommended; what we were talking about was just passing the slice to the framework.
A: We don't, but what I almost always do is keep looking at our perf dashboard, to make sure that we don't see any regression in performance, and so far I haven't seen any. But as we go forward and as we add plugins... because right now the framework, essentially, without plugins, doesn't do much, so it's very unlikely, unless there is a really bad implementation that gets missed, that it would cause any performance degradation.
A
It
is
basically
just
skips
all
the
logic
it
just
goes
and
iterates
over
plugins
and
since
there's
no
plug-in
it
doesn't
not,
it
does
nothing.
So
so
far
we
haven't
seen
any
performance
efficient,
but
you
have
a
good
point.
Once
we
start
building
these
plugins,
you
must
make
sure
that
we
don't
see
any
regression
or
if
we
see
they
are
within
a
reasonable
number.
D: One question about scheduling tasks: we add the respective functions to all the task plugins, but these tasks also share state, so is it necessary to create an individual scheduler for each task, which runs the framework per task? I see, so yeah... okay, thank you.
B: There are testing frameworks where you, you know, set up and tear down for each test, and I feel that this should be the way we do it: we set up and tear down everything for each test, so that we ensure we are starting from a clean slate every time a test is run. Yeah, you can use Ginkgo or something like that there.
A: I have seen that PR, yes. But I see what you're saying: that PR does not solve the problem completely, right? Yeah, okay. I know that your PR is out, and I saw that there was actually a failure. I wasn't so sure; I couldn't actually see the errors, for some reason it was not showing them to me in the PR. But if you have been able to figure out what the problem is, and if the reason is that we are sharing state, then yes, please go ahead.
A: Maybe change that PR to recreate the scheduler and the master for each test. We have other similar integration tests that set up the test environment and tear it down in a different function. Okay, yeah. Thank you for your help, and thank you for the many other PRs that you've sent; I appreciate your help.
A: Okay, the reason that I said later is because sometimes people don't like, like, 8 p.m.; it's lunch slash family... I'm sorry, dinner slash family time for them, and sometimes they prefer later at night, like 10 p.m. or whatever. But, you know, it's already like 8 a.m. in China, and this is meant to be a China-friendly meeting. One other option is to have this earlier in the day. I don't know how it's going to work for folks in China, but, for example, if it is like 7:00 a.m....
B: I mean, I guess... yeah, I don't want to be unfair to the other time zone, because they already have the other meeting at 1:00 p.m., right? Yes. But I agree, having it later is probably also better. Let's try it a few times and see. Maybe.