From YouTube: 2016-08-04 Kubernetes SIG Scaling - Weekly Meeting
Description
Public meeting recording of the Kubernetes Scalability SIG (Special Interest Group).
B
Okay, so affinity, and anti-affinity, is generally done. It's not perfect, but assuming you don't have thousands of pods with such constraints, it should work relatively well, and all the changes are already merged. Regarding scheduler throughput: in a thousand-node cluster it's currently around 130 to 140 pods per second, something like that. Actually, the QPS limits are still set to 20, because I still need to figure out how big a throughput we can handle in small clusters, assuming we still have small machines like ours.
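The QPS limit being discussed is a client-side rate limit on the scheduler's API calls: with it set to 20, requests are capped regardless of how fast the scheduling loop itself runs. A minimal token-bucket sketch of the idea (hypothetical Python for illustration, not the actual Kubernetes client code; all names are made up):

```python
class TokenBucket:
    """Client-side QPS limiter: at most `qps` requests per second,
    with up to `burst` requests allowed back to back."""

    def __init__(self, qps, burst):
        self.rate = qps        # tokens refilled per second
        self.capacity = burst  # maximum stored tokens
        self.tokens = burst
        self.last = 0.0        # timestamp of the last refill

    def allow(self, now):
        # Refill proportionally to elapsed time, then try to spend one token.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

With `qps=20`, a loop that could otherwise issue thousands of requests per second is throttled to roughly 20, which is why the limit matters when measuring scheduler throughput.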
B
OK, let me start with pod affinity first. Pod affinity is a feature where your pod needs to be scheduled on a machine which is somehow similar to the machines where other pods are running, where those other pods are defined by some label selector that is connected with the affinity definition. By similar nodes I mean either the same node specifically, or a node in the same zone, or rack, or something like that.
B
Anti-affinity is basically the opposite thing: it says where your pod can't be scheduled, namely on a node which is similar to the nodes where the pods defined by this label selector are running. As for node affinity, I think I need to check, because I thought I knew, but I started wondering and I'm not sure what exactly it is.
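The co-location rule described above can be pictured as a feasibility filter over nodes: a pod with affinity may only land in a topology domain (node, zone, rack) that already runs pods matching its label selector, and anti-affinity excludes exactly those domains. A minimal sketch (hypothetical Python for illustration, not the actual scheduler code; every name here is made up):

```python
def matches(selector, labels):
    """True if every key/value in the selector appears in the pod's labels."""
    return all(labels.get(k) == v for k, v in selector.items())


def feasible_nodes(pod, nodes, running_pods, topology_key):
    """Return the node names where `pod` may be scheduled.

    nodes:        {node_name: {topology_key: domain, ...}}
    running_pods: [(node_name, pod_labels), ...]
    pod:          {"affinity": selector or None,
                   "anti_affinity": selector or None}
    """
    # Topology domains that currently contain pods matching a selector.
    def domains(selector):
        return {nodes[n][topology_key]
                for n, labels in running_pods
                if matches(selector, labels)}

    result = []
    for name, node_labels in nodes.items():
        domain = node_labels[topology_key]
        if pod["affinity"] and domain not in domains(pod["affinity"]):
            continue  # affinity: must co-locate with matching pods
        if pod["anti_affinity"] and domain in domains(pod["anti_affinity"]):
            continue  # anti-affinity: must avoid matching pods
        result.append(name)
    return result
```

For example, with `topology_key="zone"`, a pod with affinity to `app=db` can land on any node in a zone that already contains a `db` pod, while an anti-affinity pod is excluded from that whole zone.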
C
But my understanding was that the affinity stuff was performing so poorly, even though pieces of it were included in alpha in 1.3, that what we're talking about is actually making it performant enough to be usable in 1.4. Is that correct?
B
It's already switched on by default. Which means that, yes: if you start thousands of pods with pod affinity constraints, then it will probably degrade, but assuming you have a relatively small number of them (I don't know, 100 or something like that), it should work pretty well.
C
Just a question about Go 1.7. I know we sort of talked about it at the onset of the 1.4 release planning cycle; folks started making noises about it in sig-testing, and I think there is interest in attempting to push forward on Go 1.7 if it's actually released in time. But that's, you know, from the perspective of: hey, we always like to see our builds run faster and whatnot. I was curious.
D
There's a general performance improvement that I've read about, but every time they say that, it's always hand-wavy about very specific areas. I think the general consensus is that you want to give it as much bake time as possible, because when we did the transition to 1.5 or 1.6 (I think; my brain is willfully ignoring it now), there were so many races and other little minutiae that were uncovered over time that, you know, doing it earlier in the cycle gives enough bake time to find all those bugaboos that suddenly show up.
E
So I haven't done any of the big scale tests, but I have been doing some stress testing of garbage collection as part of the general performance conversations, and there's definitely a benefit in 1.7 in garbage-constrained scenarios where there's a lot of trash being created and thrown away. However, I don't think we're bound by that in any of our highest-scale scenarios today. Trash is a big part of our upper bound, but it wasn't enough of a difference, even in the synthetic benchmark, that it was like: oh my gosh, the only thing we could do would be to move to Go 1.7, because it magically makes everything better. It seemed like a five or ten percent improvement in some very stressful garbage collection scenarios.
C
Cool, well, yeah. I got the hand-wavy answer for why we wanted to move forward to 1.7: performance improvements. So I appreciate the clarity there. A question for the audience here, specifically since I see Joe: you know the whole 1.3-gigabyte tarball for 1.3. Everybody agreed that that was stupidly large and that we ought to do something about it, but it's actually unclear to me whether anybody has taken ownership or stewardship of it.
F
Tim is probably the closest guy there, because he was dealing with a lot of the rebuilding stuff. I filed a feature issue (or I don't know where I filed it) on that, essentially just saying this sucks, and here's a whole bunch of things that we can look at to solve it. I haven't seen a lot of action on that, unfortunately, and I was gone, so I didn't get a chance to poke at it. Yeah, it would be nice to... I mean, it's stupid stuff, right? It's like we're shipping the Docker containers and the individual...
D
There are a couple of other issues I'd like to address that are related to scale. One of them is a weird race condition that we sometimes come across when we're changing some things with the shared informer. I don't know if anyone else saw that, but it causes this weird race where you're in a loop trying to sync, and it's not happening all the time; it happens with some code changes, which was unexpected.
E
Which shared informer: any of them, or the pod one? Pod? OK, yeah, because pods are definitely going to be the one that's driving the most changes through the system, so I would expect it in our test cases. David and I were talking about one new race the other day, but I don't think this is the same one.
D
It basically comes down to what exactly is happening; we're trying to narrow it down before we have a PR for it, so we can find the exact location where it is.
D
The other thing is I'm still working on the etcd3 changes. I got stuck on a forced IT laptop refresh, which caused me to tear down my development environment and start up a new one, so I'm working through some of the minor changes right now. I'm actually coming across issues in the tests with the latest version of etcd; the transitive dependency graph between rkt, etcd3, and Kubernetes was funsies, but that's all done now. I'm trying to get things to build, or trying to get the tests to pass.
A
Well, I think this was pretty good. I was paying attention to the discussion here, not updating the notes. I think this was a good set of topics, so if we can try to get the notes updated after the fact here, between now and the community meeting, that'd be great. Yeah. Anybody, any further comments?
E
We have one issue that's part of SIG Scale for this one, and it's just a general catch-all, so maybe there's not time to update it. I put in a list of PRs that are trying to make it for v1.4; three of those have been merged, and there's one that's still outstanding that Derek Wayne is working on. So, as far as I understand, that's the only major thing that this SIG is going to be doing for this release in the features repo, so.
F
And that's not surprising to me. I think, you know, back when we were sort of at the beginning of the 1.4 cycle, it was like, okay, what's the purpose of the SIG? And it was really a place to coordinate, bring information together, and sort of, you know, help advocate for scale stuff across all the other SIGs. So it's, you know... I don't remember us bringing up anything super specific to this SIG early on, yeah.