From YouTube: Kubernetes SIG Scheduling Meeting 2018-05-17
Description
No description was provided for this meeting.
A: We have been able to find a real user, an internal GKE customer who has been using this feature for a while, and we haven't seen any issues in their clusters, which is great news for us. It looks like the feature was working fine; at least, enabling it without actually using priorities and preemptions does not break your cluster. We were concerned about this particular scenario because we are replacing the scheduler's queue, and the mechanism of the new queue is quite different from the previous one when priority and preemption are enabled. So even if someone doesn't use the feature, it could impact the cluster if there is a bug. So far we haven't seen any.
A: We are trying to get all of our e2e tests and clusters basically running with all the alpha features on. There are some glitches there, but we believe they are not related to priority and preemption. So I guess, right before the meeting, Jonathan was telling me something that could possibly be the key. What was it you saw, Jonathan?
B: [inaudible]

A: Oh yeah, so we will know better soon, but so far we have not identified issues. There are several smaller items remaining. We would like to add some scalability tests with the feature enabled; if we can get that done before moving to beta, that would be great. We should also double-check that our cluster bring-up scripts, or other tools that bring up a cluster, will always bring up the scheduler as part of the whole cluster bring-up.
D: Me too. And I guess the other item is supporting multiple schedulers that have preemption support.

A: We don't have this support yet, although we don't feel like it's a critical feature before moving priority and preemption to beta. Basically, the idea here is that if there are multiple schedulers that support preemption...

E: [inaudible]
A: Yeah, I don't know exactly, but anyway, back to this feature that I was talking about. Basically, if there are multiple schedulers that support preemption, they must be aware of the pods that they have nominated to run on particular nodes. Otherwise they may step on one another's toes, and that awareness is not there yet.
A: So if there are multiple schedulers in a cluster that all support preemption, we may run into issues where one scheduler preempts pods on a node, and then, once the resources become available, the other scheduler uses those resources before the original preempting scheduler can use them. So this problem is going to exist, but hopefully we will address that soon as well.
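The race described here can be sketched as a toy model. All names (`Scheduler`, `Node`, the nomination list) are illustrative only, not the real kube-scheduler types; the point is that a scheduler which counts other schedulers' nominated pods against a node's free capacity avoids stealing the resources a peer just freed by preempting.

```python
# Toy model of the multi-scheduler preemption race (illustrative names only).

class Node:
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.nominated = []  # (pod_name, size) pods nominated here by a preemptor

    def free(self):
        return self.capacity - self.used


class Scheduler:
    def __init__(self, name, respect_nominations):
        self.name = name
        self.respect_nominations = respect_nominations

    def try_place(self, node, pod_size, pod_name):
        # A cooperating scheduler subtracts resources nominated for *other*
        # pods; a naive one ignores nominations entirely.
        if self.respect_nominations:
            reserved = sum(s for n, s in node.nominated if n != pod_name)
        else:
            reserved = 0
        if node.free() - reserved >= pod_size:
            node.used += pod_size
            node.nominated = [(n, s) for n, s in node.nominated if n != pod_name]
            return True
        return False


def race(respect_nominations):
    node = Node(capacity=4)
    node.used = 4  # node starts full
    a = Scheduler("A", respect_nominations)
    b = Scheduler("B", respect_nominations)
    # Scheduler A preempts a victim of size 2 and nominates its own pod.
    node.used -= 2
    node.nominated.append(("a-pod", 2))
    # Scheduler B wakes up first and sees the freed resources.
    b_stole = b.try_place(node, 2, "b-pod")
    a_placed = a.try_place(node, 2, "a-pod")
    return b_stole, a_placed
```

With `race(False)` scheduler B grabs the freed space and A's preemption is wasted; with `race(True)` B backs off and the nominated pod lands where it was supposed to.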
B: Somebody I'm not familiar with came along and filed an issue on GitHub saying that they turned on the equivalence cache and had a significant performance degradation. They sent us a PR to hopefully reduce lock contention, but I think we were also going to see if we can reproduce what they saw and make sure, because the whole point of the equivalence cache is supposed to be a noticeable performance boost, right? So maybe their test involved a lot of heterogeneous pods, or something is wrong with the hashing algorithm that we haven't noticed. Yes.
B: [Has anyone run it] with the equivalence cache turned on and turned off? Okay, I don't think anyone's done that, so we can at least collect that in a more manual way, but it would probably do us well to explicitly have a test that enables it and a test that disables it, so we don't need to go in and touch code each time.
E: The one that was used by the person who reported it is scheduler_perf, which is in test/integration for the scheduler, and I ran the test today. I did not collect any lock contention data, but I will do it again today. The numbers were kind of right, though: with and without the cache there is some difference, with, like, without the equivalence cache it was taking less time, in essence.
A: So, given all these complexities, we would like to see a major improvement in scheduling performance; by major I mean something at least 20 to 30 percent, or something in that order, in the throughput of the scheduler. If we don't see much of a difference, and of course if we see a slowdown, it does not make sense to enable it, and if we determine that this is not going to work, we should think about alternatives. I am pretty sure with a proper implementation...
F: [inaudible] we actually did benchmark it, and we saw at least a twenty percent improvement in scheduler performance, but we had to create some special cases to see the benchmark results, because we cannot just see an improvement by using a default scheduler with some very simple pods; it could be the same. At that time there was no performance degradation, because we only use a controller [inaudible], so we don't need to actually calculate very complicated [inaudible].
A: I mean, there are, of course, also two scenarios. A lot of users may not put any sophisticated rules on their pods, but one thing for sure in a lot of Kubernetes clusters is that many, many users create ReplicaSets, which are basically a lot of pods created from the same template, and they have the same scheduling requirements; yeah, or Deployments, which are all instances of the same template.
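The idea behind the equivalence cache being discussed is exactly this: pods stamped from the same template have identical scheduling requirements, so a predicate result can be keyed by an equivalence class and reused. A minimal sketch, with hypothetical names (this is not the real kube-scheduler code, and the real cache is invalidated on many more events):

```python
# Illustrative equivalence-cache sketch: predicate results are keyed by
# (equivalence class, node) so pods from the same template reuse them.
import hashlib
import json

def equivalence_class(pod_spec):
    # Pods from the same ReplicaSet/Deployment template hash identically.
    blob = json.dumps(pod_spec, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

class EquivalenceCache:
    def __init__(self):
        self.results = {}     # (class, node) -> bool
        self.evaluations = 0  # how many times the (toy) predicate really ran

    def fits(self, pod_spec, node_name, node_free_cpu):
        key = (equivalence_class(pod_spec), node_name)
        if key not in self.results:
            self.evaluations += 1  # cache miss: evaluate the toy predicate
            self.results[key] = pod_spec["cpu"] <= node_free_cpu
        return self.results[key]

    def invalidate_node(self, node_name):
        # Node state changed (e.g. a pod was bound), so cached results that
        # depended on its free capacity are stale; drop them.
        self.results = {k: v for k, v in self.results.items()
                        if k[1] != node_name}
```

Scheduling 100 identical pods against one node evaluates the predicate once instead of 100 times; the cost is the invalidation bookkeeping, which is where lock contention like the reported issue can show up.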
A: So there are lots of these similar scenarios in many Kubernetes clusters, and we should expect to see some performance improvement. Hopefully, with a little bit more investigation, we'll know why we don't see that and can resolve the issues. Anyway, hopefully this is still on track; if we can find out what is going on soon, we can make it to beta. And I don't know what the status of tests and test coverage is there. Harry or Jonathan, one of you guys?
A: Gang scheduling: we're still designing this feature. One more person at Google has shown interest in working on gang scheduling as well, so hopefully they will first arrange with [inaudible] to work on the design. As the first step, we should finalize all the requirements for gang scheduling; we still feel we don't have a very clear understanding of all the requirements of the applications that need gang scheduling.
A: We should hash out the details there, and once those are identified we can have a more concrete design of the feature. Then we can go to the next step of how it should be implemented: whether it makes sense to implement it as a separate scheduler or put it in the same scheduler, and whether we need a different controller for gangs, for things like adding new pods to a gang, or deleting the gang when the number of instances of the gang drops below a certain threshold.
D: Bobby, I'm just wondering: initially, can we just make it simple? Just say that only one gang can be scheduled at a time, and initially let's just make a deadline; if it exceeds that deadline, say one hour or 40 minutes, it just reports, I suppose, in the status that this gang failed, and then picks the next.
D: So, for example, for gang scheduling we need to, like, reserve [resources]; so that's why. But reserving all those resources is just wasting resources and keeping other pods from being scheduled. So let's say I'm trying to schedule a gang of one parameter server and two workers, and now I only have one pod slot; I won't get any additional ones, but now I'm just reserving the one that [is free].
A: No, it works fine; something like this works fine as a proof of concept or for prototyping purposes, but I highly doubt the community would accept adding something like this as a feature to the scheduler. You know, when we think about adding a new feature, we should think about the proper APIs. Instead of just hard-coding one hour or whatever number of minutes you want, we should probably add a way for the user to specify the timeout for the gang.
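The contrast being drawn can be sketched as a toy all-or-nothing loop where the deadline is a user-supplied field on the gang rather than a hard-coded constant. Everything here (the `Gang` type, `timeout_seconds`, the status strings) is hypothetical, not a real Kubernetes API:

```python
# Hypothetical sketch: a gang carries a user-specified timeout, and a toy
# scheduler marks it Failed once the timeout expires instead of holding
# reserved resources forever.
from dataclasses import dataclass

@dataclass
class Gang:
    name: str
    member_cpu: list      # CPU request of each member pod
    timeout_seconds: int  # user-specified, not hard-coded in the scheduler
    status: str = "Pending"

def schedule_gang(gang, free_cpu, now, submitted_at):
    # All-or-nothing: either every member of the gang fits, or nothing
    # is placed and the gang keeps waiting until its own deadline.
    if sum(gang.member_cpu) <= free_cpu:
        gang.status = "Scheduled"
    elif now - submitted_at >= gang.timeout_seconds:
        gang.status = "Failed"  # report in status, then pick the next gang
    return gang.status
```

For example, a gang of one parameter server and two workers stays `Pending` while the cluster is short on capacity, becomes `Scheduled` the moment all three fit at once, and flips to `Failed` only after its own timeout, freeing the queue for the next gang.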
D: Actually, we want to run stateless services as a gang. Well, it's just more of a convenience for [inaudible] a lot of pods, but we have a set of things that we want to schedule together. One of our users had a lot of pods, stateless web apps, which they basically want spread on different nodes, and in order to do so, gang scheduling seems to just work for this case, but this is a different case.
D: Now, this won't work like that, that's the thing, but really we can work around it: we do another layer on top that, say, just keeps trying, because it's one [inaudible]. If we attach some of the things that don't get scheduled, we just might fail, but it's just not native enough. And it seems that our case also fits into gang scheduling; this case is interesting.
A: Yeah, or I guess maybe there are design docs; I don't know if we shared them or not, but if there are shared design docs, you can comment on those docs as well. Basically, we would like to hear all this feedback. One thing that I have heard multiple times in various contexts is that heterogeneous pods are needed for gang scheduling: basically a gang that has different pods with different images. That's something that it seems we need to support from the very beginning.
A: We would like to hear some of this other feedback too, like whether we need to support long-running services or just batch jobs, and how we handle the case where the number of replicas goes below a certain number. Oftentimes we think that we should kill the gang, but some people believe that we should give it some time before killing the gang; all these requirements come from different people.
A: There have been lots of different opinions about how we should design our scheduling policies. Some people believe that we should use OPA; some people believe that we should design the API in one way or the other. So there has been a lot of discussion there. It is still kind of in progress; we are hoping that we can reach consensus soon in that area as well. So these are all the updates that I wanted to give you.
A: And one thing that I would like to say about the design docs; I don't know if it matters for the people who are currently present in the meeting or not, but anyway: I get a lot of questions about, or a lot of requests for, access to various design docs that are created on Google Docs. When we share those in the SIG or with the community, you need to be able to access them if you are a member of the Kubernetes Google Groups.
A: These are usually the tests that the scalability SIG has, their end-to-end tests, and they usually launch a large cluster and run a lot of pods, usually with the same properties, in the cluster. They are not exactly the same as those scheduler performance tests; some of them might be similar, but they are not exactly the same.
G: I think I have a draft design document here about gang scheduling, and I think we have some internal discussion, because there were options here between, you know, doing some scheduling based on a pod group as a single [scheduler], or some enhancement of, or interaction with, the current scheduler in the future. So I think there may be some discussion here and it's ongoing; I will [inaudible].
A: [inaudible] what should be done for gang scheduling. Actually, it's not a bad idea if you share the document with the community, or with the SIG at least, and then people will have a chance to say what they need in gang scheduling. For example, we were just thinking that gang scheduling would probably be a feature suited for running batch jobs, basically those jobs that run to termination, but yeah.
A: Today we heard that it might actually benefit some long-running services as well, and some people may need it for long-running services too. The other day, at KubeCon and some other places, I heard that people are interested in running heterogeneous workloads in a gang: basically a gang that runs pods with different images, not necessarily the same single [template].