From YouTube: Kubernetes SIG Scheduling meeting - 2018-07-26
A
Similar to Bobby's, yes. It's mainly a way to enroll the controller to do something; I think that's one major thing they desired. You know, the current controllers, such as the job controller, just try to create pods as soon as possible. But when we consider gang scheduling of pods, I think we may create the pods and use some kind of service for them, or create a pod group, one per job, and then the other one, so they can make sure the job can create enough pods for the gang scheduling. Otherwise, the impact is that only the last job, the job that comes last, cannot create enough pods. This is something in my mind; I think we can discuss it in detail later, in the design document or in the meeting, yeah.
B
Yeah, so then probably your question also had another part, which is: let's say ten of these pods are running and we're happy, but suddenly two of them die, so the number of running pods drops below the number specified in the gang. What should we do? I guess this part can be addressed by the gang policy.
B
So the gang policy specifies, for example, that if the number drops below the specified value, then go ahead and kill all of them, or wait for a certain number of minutes or seconds, or whatever, for the number to go back to ten, for example. There is usually a controller behind these collections, like the job controller, etc., so those controllers may add more pods, and those pods may get scheduled. So if, within the time frame specified in the policy, you make it back to ten, in that case we just continue the execution.
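As a rough Go sketch of what such a gang policy could look like, with hypothetical type and field names since the actual API was still being worked out in the design document:

```go
package gang

import "time"

// GangRecoveryAction says what to do when the number of running members
// drops below the gang's minimum. The names here are illustrative only.
type GangRecoveryAction string

const (
	// KillAll terminates every remaining member as soon as the gang
	// falls below its minimum.
	KillAll GangRecoveryAction = "KillAll"
	// WaitForRecovery gives the owning controller a grace period to
	// bring the gang back up to the minimum before anything is killed.
	WaitForRecovery GangRecoveryAction = "WaitForRecovery"
)

// GangPolicy captures the two behaviors discussed above: kill the whole
// gang immediately, or wait a configured period for recovery.
type GangPolicy struct {
	MinMembers     int                // e.g. ten in the example above
	Action         GangRecoveryAction // KillAll or WaitForRecovery
	RecoveryPeriod time.Duration      // grace period for WaitForRecovery
}
```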
B
The job controller, I see, I see. Well, first of all, this is not limited to jobs. Basically, what we had in mind is probably not to limit gangs to jobs, but I can see that jobs are probably going to be the most common form of collection used in a gang. Sorry, let me just turn on the lights here; I guess the sensors are not working, one second.
B
Okay, yeah, so that is true. So in a job, if a pod dies, the job controller doesn't bring back those instances, and you may want to think about some ways around that. For example, one possible thing is that when some of the job's pods exit with no error, in that case we don't want to recreate those pods; but if they exit with an error, for example if they crash or exit with any other error, then we want to recreate them.
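A minimal sketch of that recreate-on-error rule against the core/v1 pod phases; the helper name is made up for illustration:

```go
package gang

import corev1 "k8s.io/api/core/v1"

// shouldRecreate is a hypothetical helper for the rule above: members
// that finished cleanly stay finished, while members that exited with
// an error are recreated so the gang can get back to full size.
func shouldRecreate(pod *corev1.Pod) bool {
	switch pod.Status.Phase {
	case corev1.PodSucceeded:
		return false // exited with no error: do not recreate
	case corev1.PodFailed:
		return true // crashed or exited with an error: recreate
	default:
		return false // pending or running: nothing to decide yet
	}
}
```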
D
So I did this test last night, and in my test environment I can also see that after some commit, I don't know which commit exactly, the performance of the default scheduler is greatly improved. But I don't see any significant patch there, and I'm still not very clear about why that happened. Something might have enhanced the performance of scheduling: the config never changed, it's fixed, but the performance of the default scheduler changed.
B
It can improve the performance of the default scheduler once it's enabled, of course, but what we are talking about is that we have recently been seeing some improvement in the performance of running predicates. That's something that we didn't expect to see; I'm not so sure if I have seen the same thing myself, but anyway, maybe there is something in the setup, or maybe some new changes were made that changed something in the tests or our algorithms. We are investigating that.
E
For the taint-nodes-by-condition feature, which is supposed to be promoted to beta: one PR was merged yesterday, which adds different kinds of integration tests to cover almost all the scenarios there. For the core code, I don't have a PR yet, and for the website, I do have a PR and it's already being reviewed. Yeah, I think after that the PR should be merged.
E
For that one, the idea is to do some initial research on the GitHub issues, to kind of aggregate all of them; there are just one or two placeholder issues so far. And actually I found one person opening an issue claiming that the DaemonSet doesn't schedule or run its pods properly. That is another issue I was trying to fix, so for that issue, although it's not particularly related to our scheduler, it's still going well, and I think we have made progress thanks to the comments from the crowd and from elsewhere.
C
We are internally debating whether we want the same feature that we have in the rescheduler or not. So the main idea is: let's continue with the rescheduler until 1.11 and keep shipping it as an add-on component, and after, like, 1.12... sorry, when we are releasing 1.12, we will not release the rescheduler but will retire it, and by that time we'll have the DaemonSet scheduling, with the DaemonSet controller using the default scheduler for scheduling pods, okay.
C
Yeah, so I'll get back to you on this by tomorrow, or by Monday at the latest, about which option we are fine with, so we can make a decision. And the second thing that I want to talk about is graduating quota for priority classes. An issue has been created for it and I have started working on writing some tests, so it needs a label; probably one of you can add the milestone label for 1.12, yeah.
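For context, a quota scoped to a priority class looks roughly like the following core/v1 object; the class name and the pod limit are made-up values for illustration:

```go
package quota

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// highPriorityQuota caps how many pods of a hypothetical "high-priority"
// PriorityClass can exist in the namespace where the quota is created.
var highPriorityQuota = &corev1.ResourceQuota{
	ObjectMeta: metav1.ObjectMeta{Name: "pods-high-priority"},
	Spec: corev1.ResourceQuotaSpec{
		Hard: corev1.ResourceList{
			corev1.ResourcePods: resource.MustParse("10"),
		},
		// Scope the quota to pods that use the given priority class.
		ScopeSelector: &corev1.ScopeSelector{
			MatchExpressions: []corev1.ScopedResourceSelectorRequirement{{
				ScopeName: corev1.ResourceQuotaScopePriorityClass,
				Operator:  corev1.ScopeSelectorOpIn,
				Values:    []string{"high-priority"},
			}},
		},
	},
}
```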
B
Use Slack or whatever to reach out to me, and then I'll be happy to help. Yeah, thanks. Alright, one more quick update from myself, one thing that I wanted to actually say, or talk about a little bit. We're running out of time, but just very quickly: one idea that we have for improving the performance of the scheduler is to check feasibility of, and also score, a smaller number of nodes in a cluster, instead of checking feasibility of and scoring all the nodes in the cluster.
B
So the idea here is that once we find a certain number of nodes that are feasible for running a pod, we can stop, and score only the ones that we have found so far, and not go over the rest of the nodes in the cluster. This can help improve the performance of the scheduler, at the cost that the node we choose for running the pod may not have the highest score in the whole cluster. Basically, we are going to have maybe a local optimum, not a global optimum.
B
But we think it shouldn't really impact the scheduler's behavior much, so hopefully this can help. And since this is probably not going to make any difference in smaller clusters, we are going to set a fixed value as the minimum number of nodes that we are going to score. For example, maybe we set it to 100, so any cluster with fewer than 100 nodes will not see any difference, and for clusters larger than 100 nodes, we will probably score the maximum of 20% of the cluster or a hundred nodes. We'll also make this value configurable so that people can change it as they like.
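As a small Go sketch of that cut-off rule, with illustrative names rather than the actual patch:

```go
package scheduler

// minNodesToScore is the floor mentioned above: clusters at or below
// this size are unaffected by the optimization.
const minNodesToScore = 100

// numNodesToScore returns how many feasible nodes the scheduler should
// find before it stops checking predicates and moves on to scoring:
// everything in small clusters, otherwise the larger of 20% of the
// cluster or the fixed minimum. The percentage would be configurable.
func numNodesToScore(totalNodes int) int {
	if totalNodes <= minNodesToScore {
		return totalNodes
	}
	n := totalNodes * 20 / 100
	if n < minNodesToScore {
		n = minNodesToScore
	}
	return n
}
```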
So we are planning to move this feature to alpha in 1.12, and I'm working on it currently. I may need some help to finish it; I'm working on a PR right now, so once that first PR is in, let me ask some folks in the community to help move the rest forward. That's all; we are one minute over time.
A
Personally, my time is more flexible. I can attend the meeting roughly from early morning to 11:00 p.m. local time, so I think I can make almost any time, but I'm not sure about the others, so yeah. So I think we can start off with something like this one, maybe a survey, or maybe something else. Yeah, sounds good. Alright.