From YouTube: Kubernetes SIG Scheduling meeting - 2018-08-16
A: All right, good morning, or hello if you are not in a time zone where it's morning for you at the start of the day. So, last week we spoke about various items on our agenda, and I would like to follow up on those. There are a couple of things that we have been working on, and I would like to see the status of them today. Let me just open this.
A: First, taint nodes by condition: that one has graduated to beta, the PR is merged, and I think we are good to go there. Gang scheduling: of course, its owner is not here today. There are a few comments on gang scheduling; it's moving forward. I think we are pretty much there with the API design. I had some idea about whether we want to pass some sort of an ID to the instances of a gang, so that they know their member ID.
A: This is actually a much bigger and longer discussion, because right now our design is a little different. Basically, the gang today is not an object. Well, it's an object, but it doesn't have pods of its own; pods belong to other collections, and the gang just specifies the properties of a gang, like the number of members or the lifecycle policies. So again, we need to think a little differently if we want to add IDs to the instances of a gang.
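For illustration, here is a speculative sketch in Go, with invented names rather than the proposal's actual API, of the design described above: the gang object carries only gang-wide properties, and member pods reference it from their own collections.

```go
// Hypothetical sketch; not the actual gang scheduling API.
type GangSpec struct {
	// MinMembers is how many member pods must be schedulable together.
	MinMembers int32
	// LifecyclePolicy says what happens to the remaining members when a
	// member terminates, e.g. "ContinueRunning" or "TerminateGang".
	LifecyclePolicy string
}
```

Because the gang doesn't own its pods, handing each instance a stable member ID would need some component (a gang controller, say) to allocate IDs and attach them to pods it doesn't manage, which is why this turns into a longer design discussion.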
A: Yeah, so the next item is the scheduling framework. I've updated the scheduling framework proposal with some minor changes here and there; feel free to take another look and comment if you have any comments or questions. Image locality: again, its owner is not here. The image locality priority function is out there and it's enabled by default. He is trying to enable it in the e2e tests, but we are running into permission issues pushing the images; he has tried to reach Jeff just to get these images pushed.
A: I think I have also commented on the PR for the scheduling policy. We are almost there, but there are some different opinions about how to create the matchers and how the policy matches against pods. Most of these are details, like whether the matcher should have a pod selector, and whether we want to AND the policy rules or OR them. So there are some of these details that we are trying to hash out.
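As a toy illustration of the AND-versus-OR question, with every name invented for this sketch:

```go
// Hypothetical matcher; not the actual policy API under discussion.
type PolicyMatcher struct {
	RequireAll bool                                     // true: AND the rules; false: OR them
	Rules      []func(podLabels map[string]string) bool // per-rule checks against a pod
}

func (m *PolicyMatcher) Matches(podLabels map[string]string) bool {
	for _, rule := range m.Rules {
		if rule(podLabels) {
			if !m.RequireAll {
				return true // OR semantics: one matching rule is enough
			}
		} else if m.RequireAll {
			return false // AND semantics: one failing rule is enough
		}
	}
	return m.RequireAll // AND: every rule matched; OR: none did
}
```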
A: I consider this as a replacement for the rescheduler; basically, we don't need the rescheduler anymore, so this is basically a new add-on for Kubernetes. My question is: when we add this new component to Kubernetes, are there any concerns about backward compatibility, or any other concerns? Is there any area that we need to track?
B: Yes, so it uses a particular service account, and that service account needs to have some privileges, especially on the pod resource; setting that up is the work that is needed. The documentation on the descheduler page shows a sample service account and how to use it, along with the privileges that are needed.
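For reference, a rough sketch using client-go's RBAC types, with an assumed role name, of the kind of pod-centric privileges such a service account needs; the project's documentation has the authoritative manifest.

```go
import (
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Illustrative only; consult the add-on's docs for the real ClusterRole.
var exampleRole = &rbacv1.ClusterRole{
	ObjectMeta: metav1.ObjectMeta{Name: "descheduler-example"},
	Rules: []rbacv1.PolicyRule{
		// Deleting (evicting) pods is the core operation, hence the
		// sensitive pod privileges mentioned above.
		{APIGroups: []string{""}, Resources: []string{"pods"},
			Verbs: []string{"get", "list", "watch", "delete"}},
		{APIGroups: []string{""}, Resources: []string{"nodes"},
			Verbs: []string{"get", "list", "watch"}},
	},
}
```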
A: The next item is scoring fewer nodes, or finding fewer feasible nodes in a cluster, to improve the performance of the scheduler. Actually, last week, when I was thinking about how to proceed with this PR, I felt like we don't need to wait an extra release to enable it; we can actually enable it starting with the next release, 1.12. In my PR I actually changed the default value of the CLI flag, or the config option, to 50%. So by default, the scheduler basically stops looking for more feasible nodes as soon as it finds 50% of the nodes of the cluster feasible; the scheduler stops at that point and then starts scoring nodes. I think we are going forward with that in 1.12, but we will know better after we merge this PR. We may still change our plans before 1.12 is released, but that's what we think we should go forward with at this point.
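A minimal sketch in Go, with invented names rather than the scheduler's real code, of the early-stop idea just described: filtering halts once the configured share of the cluster has been found feasible, and only those candidates are scored.

```go
// findFeasible checks nodes in order and stops early once stopAt feasible
// nodes have been collected; the remaining nodes are never evaluated.
func findFeasible(nodes []string, fits func(string) bool, stopAt int) []string {
	feasible := make([]string, 0, stopAt)
	for _, node := range nodes {
		if fits(node) {
			feasible = append(feasible, node)
			if len(feasible) >= stopAt {
				break // enough candidates; skip the rest of the cluster
			}
		}
	}
	return feasible // only these proceed to scoring
}
```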
A: There is, in a way, some randomness, but not much, really. So what we do is this: we have a new structure called nodeTree, and the nodeTree is similar to a tree data structure, or you can even think of it as a hierarchy with a height of two. The root is at the top; the children of the root are the zones, and then under each zone there are a few nodes. The way that we score nodes is that we start from the first zone and score its first node, then we go to the next zone and score its first node, and so on. When the zones are exhausted, we come back to the first zone again and score its second node, and this is how it works. So it's not random; it actually goes through all the nodes, and it continues from the point where it stopped in the previous scheduling cycle.
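A rough sketch, not the scheduler's actual code, of that two-level round-robin iteration; names and types are illustrative.

```go
// nodeTree is a two-level hierarchy: zones at the first level, nodes under
// each zone. Iteration interleaves zones and resumes across calls.
type nodeTree struct {
	zones   []string            // first level: zone names
	nodes   map[string][]string // second level: nodes under each zone
	zoneIdx int                 // next zone to visit
	nodeIdx map[string]int      // per-zone position; persists across cycles
}

func newNodeTree(nodes map[string][]string) *nodeTree {
	t := &nodeTree{nodes: nodes, nodeIdx: make(map[string]int)}
	for zone := range nodes {
		t.zones = append(t.zones, zone)
	}
	return t
}

// next returns the first node of each zone in turn, then the second node of
// each zone, and so on, picking up where the previous call stopped.
func (t *nodeTree) next() string {
	for range t.zones {
		zone := t.zones[t.zoneIdx]
		t.zoneIdx = (t.zoneIdx + 1) % len(t.zones)
		if i := t.nodeIdx[zone]; i < len(t.nodes[zone]) {
			t.nodeIdx[zone] = i + 1
			return t.nodes[zone][i]
		}
	}
	return "" // every zone exhausted in this pass
}
```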
A: I intentionally didn't put it behind a feature gate. I don't know; I am not a huge fan of all the feature gates that we have. Sometimes they just delay the rollout. Some of these features are not fundamentally changing anything; some of them have API implications, and we need to look at backward compatibility, or they change behavior in the scheduler or other components, and I agree with putting those behind feature gates. But for this particular one, the part that is configurable is a percentage, so you can specify what percentage of nodes should be found feasible. Basically, you specify when to stop: once you find that percentage of the nodes feasible. So let's say that in a cluster that is overloaded you put 10%; this 10% means stop once you find 10% of the nodes feasible, and if no node is feasible, every node in the cluster is going to be scanned anyway, right?
A
You
said
it
said:
let's
say,
set
this
configurable
value
to
10%
and
your
cluster
is
200
note.
It
means
stop
once
you
have
to.
We
know
it's
feasible,
but
you're
not
gonna.
Do
that
you're
still
gonna
go
ahead
and
scan
the
whole
cluster
because
we
or
measure
larger
portion
of
the
cluster,
because
you
still
expect
at
least
50%
of
our
nodes
to
be
found
feasible.
So
this
feature
is
not
gonna
impact
small
clusters
at
all.
Okay,.
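To make that arithmetic concrete, here is a hedged sketch, with assumed names, of how the stopping threshold could be derived from the percentage plus the 50-node floor described above.

```go
const minFeasibleNodesToFind = 50 // floor protecting small clusters

func numFeasibleNodesToFind(numAllNodes, percentage int) int {
	if percentage <= 0 || percentage >= 100 {
		return numAllNodes // degenerate settings: check everything
	}
	n := numAllNodes * percentage / 100
	if n < minFeasibleNodesToFind {
		n = minFeasibleNodesToFind
	}
	return n
}

// numFeasibleNodesToFind(200, 10) == 50, not 20: a 200-node cluster at 10%
// keeps scanning until 50 feasible nodes are found (or nodes run out).
```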
A: I guess you're talking about the equivalence cache. Basically, the results that I was telling you about are not related to the cache; we don't enable the equivalence cache in these, but that's another effort that we are trying to pursue. The reason is that the equivalence cache doesn't improve performance in all cases: it helps with more sophisticated predicates, but for simple, fast predicates it can actually hurt performance. But anyway, that's orthogonal to this PR.
E: This is related to my last couple of emails about explaining how the scheduler works today. I'm putting together a proposal for refactoring several packages within the scheduler, and once that gets mailed out we can discuss it. Then my hope is to file a whole bunch of help-wanted issues, breaking it up into smaller pieces; there are a lot of contributors sort of waiting for us to give them something small to do.
E: That's a good way to get some value out of those contributors. And yeah, the plan is not meant to be all-or-nothing. I think we could, you know, maybe have unanimous agreement on some parts and lengthy discussions on other parts, so I'll try to carve it up into independent chunks; the details will go out later on.

Yeah, sounds great. Thank you. Okay.
D: One issue is, as we discussed last week, that the default scheduler doesn't respect taints when it's going to schedule DaemonSet pods. I made a fix, and that one got merged. Another PR was merged yesterday: in some corner cases, if the pod has hostNetwork set to true, the pods will keep increasing. That means the default scheduler doesn't guard the scheduling of the pods properly, so the pods get out of the default scheduler's control and keep increasing in number; it's like a pod explosion. That was a bug in 1.10, when we had a PR affecting the host-port information data structure: it was changed from a map of strings to a map of maps. So when we clone the data structure, we need to be careful, because the inner map should also be deep-copied.
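A small illustrative sketch in Go, with made-up types rather than the scheduler's real ones, of why cloning a map of maps needs a deep copy:

```go
// portSet is an illustrative stand-in for the host-port structure: an
// outer map keyed by IP, an inner set of ports.
type portSet map[string]map[int]struct{}

// shallowClone copies only the outer map: the inner maps are shared, so a
// write through the clone mutates the original. This is the bug class.
func shallowClone(p portSet) portSet {
	c := make(portSet, len(p))
	for ip, ports := range p {
		c[ip] = ports // shared reference!
	}
	return c
}

// deepClone also copies every inner map, so the clone and the original
// are fully independent.
func deepClone(p portSet) portSet {
	c := make(portSet, len(p))
	for ip, ports := range p {
		inner := make(map[int]struct{}, len(ports))
		for port := range ports {
			inner[port] = struct{}{}
		}
		c[ip] = inner
	}
	return c
}
```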
A: That was actually a good point; thank you for finding and fixing that issue. We had actually seen a couple of instances where pods were not assigned correctly, for example whenever a node went away and we tried to reschedule pods and things like that, and I suspect that some of those issues might have been caused by this issue in the code. I'm glad that you found it. Thanks.
F: Hi, all right, just a quick update. I know we have only a few minutes left, but I just wanted to pick your brains. So there are two things we're looking at. We are done with the performance work on the Firmament side and all that, and now what we are trying to do is make sure Firmament can coexist with the default scheduler, you know, as the alternate scheduler. So there are two problems.
F: One problem is that Firmament is breaking: we ran all the Kubernetes e2e test cases, and a lot of the storage test cases and networking test cases are failing, so we need to get to the bottom of that. That's the Firmament side of the equation: we are breaking things, so we need to resolve all those. That's one thing. The other thing is that the Kubernetes default scheduler can break Firmament.
F: For example, you know, priority and preemption: Firmament doesn't know anything about preemption, so it can go crazy. And then the max number of pods on a node as well. We have implemented the same thing in Firmament, that you cannot have more than 110 pods on a node, just like the default scheduler has, but the problem is when we do that...
A: You know, actually it's not completely impossible to replace the default scheduler with something else. We just need to make sure that the other scheduler, whatever it is, Firmament or anything else that is going to replace the default scheduler, schedules DaemonSet pods, for example, which right now are going to be scheduled by default. You need to make sure that those DaemonSet pods are scheduled properly by the replacement scheduler, and that should be pretty much it; otherwise, I guess, another scheduler can be a replacement.
F: That's problematic, though, because if the nodes are shared by both schedulers, then the whole thing gets all screwed up with respect to the number of pods, because Firmament, the second, alternate scheduler, doesn't know what the default scheduler did, and it will do things thinking that it's the only one on this node. Oh, so essentially every scheduler has to keep track of every other pod, even ones which were not handled by that particular scheduler. So that's the problem.
A: The thing is that today, and generally in the Kubernetes API, other components can place pods too. For example, if you take today's Kubernetes, like 1.11, with DaemonSets, the DaemonSet controller actually schedules the pods: it sets the node name of the pods itself. So this is done outside of the default scheduler, but the default scheduler still handles those, by receiving updates for all the pods, not necessarily only the ones it scheduled. Okay.
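A minimal client-go sketch, with assumed callback names and not Firmament's actual code, of that pattern: schedule only pods whose spec.schedulerName matches your own, but account for every bound pod, whoever placed it.

```go
import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// watchAllPods wires a pod informer that feeds both the scheduling queue
// and the node-usage bookkeeping. The caller runs factory.Start(stopCh).
func watchAllPods(client kubernetes.Interface, myName string,
	enqueue, account func(*v1.Pod)) informers.SharedInformerFactory {
	factory := informers.NewSharedInformerFactory(client, 0)
	factory.Core().V1().Pods().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			pod := obj.(*v1.Pod)
			switch {
			case pod.Spec.NodeName != "":
				account(pod) // bound by any scheduler (or a controller)
			case pod.Spec.SchedulerName == myName:
				enqueue(pod) // unbound and ours: schedule it
			}
		},
	})
	return factory
}
```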
F: Okay, so essentially in Firmament we would just subscribe to those messages and then update our knowledge base; I guess we will have to do the same thing. I thought maybe the very short-term solution could be to partition the nodes per scheduler, so that a certain scheduler works on certain nodes, but then you are limiting the whole cluster. Yeah, that is a possibility.
F: So that's the bigger effort for us; we will have to do that. We will have to extend Firmament's knowledge base to include pods which were not processed by Firmament, but it's not just a matter of including them, because Firmament would go haywire and say: I didn't do anything, what is this guy doing? So we need to make sure it doesn't break. Anyway, I just wanted to raise this; looks like we will look into that.
A: Yeah, I wouldn't be too worried about those. I mean, we need to eventually address the other failures, but we should make sure that Firmament passes the scheduling tests first, and then, if all the scheduling tests pass, we can think about some of those storage tests. Those are a little bit more complex; you may need to add support for volume binding, dynamic volume binding, local disks, but...