From YouTube: 2016-10-06 Kubernetes SIG Scaling - Weekly Meeting
[...] soon about that, he can give an overview of some of that stuff. I think Jeremy [unclear] — you can probably save that for the end of it. We put items that are up on a list, you know, like the etcd3 perf issue. I was hoping Hongchao would be on, because I'm kind of curious what their timeline is for 3.1, and whether or not they're going to get that prev-KV in.
So one thread that I wanted to tie together: the component config stuff is becoming something that's important to the cluster lifecycle SIG, because there's, you know, this desire to be able to configure things without having to set them up with command-line flags and files in different places, and to sort of have the one true dynamic way to actually configure things. So there are, you know, reasons to push that stuff forward as the way to do things, across multiple angles.
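The "one config object instead of scattered flags" idea being discussed can be illustrated with a hypothetical component config fragment; the API group, kind, and field names below are invented for illustration and are not an actual Kubernetes API:

```yaml
# Instead of scattering settings across command-line flags, e.g.:
#   kube-component --sync-period=30s --max-workers=5
# a component config approach puts them in one versioned, declarative object
# that can be stored and updated like any other API object:
apiVersion: componentconfig.example/v1alpha1
kind: ExampleComponentConfiguration
syncPeriod: 30s
maxWorkers: 5
```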
The basic 50,000-foot summary, from my understanding — and correct me if I'm wrong — is that the translation layer does an extra get for the previous key. That's the problem: on all the watches we're trying to get the previous key, so there's an extra get in all of the update cycles. Yeah.
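The extra get being described can be modeled with a small illustrative Python sketch — a toy store, not etcd's actual code, and all names here are hypothetical: without prev-KV support, every watch/update cycle pays one additional read to fetch the previous key, while a prev-KV-style event carries the old value along with it.

```python
class KVStore:
    """Toy key-value store that counts read operations."""
    def __init__(self):
        self.data = {}
        self.get_count = 0

    def get(self, key):
        # Stands in for a read RPC against the store.
        self.get_count += 1
        return self.data.get(key)


def apply_updates_with_extra_get(store, updates):
    """Without prev-KV: each update costs one extra get() for the old value."""
    events = []
    for key, value in updates:
        prev = store.get(key)        # the extra get on every update cycle
        store.data[key] = value
        events.append((key, prev, value))
    return events


def apply_updates_with_prev_kv(store, updates):
    """With prev-KV: the old value arrives with the event, so no extra read."""
    events = []
    for key, value in updates:
        prev = store.data.get(key)   # delivered in the event, not a read RPC
        store.data[key] = value
        events.append((key, prev, value))
    return events
```

Running both over the same update stream, the first variant issues one counted read per update, while the second issues none — which is the per-update overhead being discussed.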
Yeah, and as we go forward — the primary target, at least in the pre-1.4 world, was always CPU, right, and we're going to get to a point in the near-term future where we bump up against data-model problems more so than raw CPU performance issues. Right, where the speed is limited by how we pass data around the system and how we behave.
Hi, so I think the first thing we need to figure out is whether, like, prev-KV will solve the problem. So I think we are going to put up a PR today, and if that works, or if I have a fix, the fix for the problem will go in, we'll backport it to all supported releases, and I will do a release next week. Okay.
I created that perf-tests repo — the owners are me, Wojtek, and the team. Currently it has the CLI bots configured and some really simple Travis checks. You can also use it if you want to store your stuff there; if you're looking for a repo or a place to store things related to perf tests, I think that's the repo. We can iterate much faster there than in the main one, and as for using it in the e2e tests, I've got CI for it.
Marek, just to clarify: does this also involve the current e2e tests? They're pretty intertwined with each other — they're all in one package, and it's a bit of a mess — and we have the same problem in other areas. Federation tests, for example, should also move out a bit, but they're kind of stuck in this ball of spaghetti. Are you guys planning to de-spaghettify that for the performance tests? And if so, I would be happy to help you — there are numerous groups that have a similar requirement.
Hi guys. So, assuming all of that stuff you guys just went over ultimately gets sorted out — we went over this mostly with the SIG Testing guys last week, because we're kind of — we're not sure "blocked" is the right word, but basically we need a home for our stuff, and we need to consume it.
We want to consume it too. Our stuff is in Python; Sebastian and the rest of the gang are going to reimplement in Go a couple of our tests that are currently OpenShift-focused, but we want to get out of that situation, to the point where it can work on any distro, or on upstream Kubernetes itself.
So we have the guys from CNCF doing something similar to what we already had, and so basically what we have is kind of it.
I've started to call it a convergence of density with a lot of additional capabilities. In fact, I think we could probably use this tool to replace density, because density is one use case within a larger thing. So I asked Sebastian to put together some diagrams about how all this is laid out and what the tool actually does. We're talking about a tool called cluster loader, and then we have some additional tests as well.
Some network tests, some storage tests; and we've also got a workload generator that actually puts work on the cluster after everything is stood up and all the objects are created. That's a high-level view. Sebastian, do you want to go through the couple of diagrams that you have, just to show what we're trying to get at? — Absolutely.
So, you know, you start the application; it parses everything out and has a config object that has various data inside of it. It creates a namespace based on this config object, and then iterates through — if X exists — the various things that can be created, such as a quota, template, service, user, a pod, an RC, and creates however many of them it needs to. It then iterates through to the next potential namespace, or project, and creates everything it spins up the cluster with.
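The iteration just described — config object in, namespaces and their objects out — can be sketched as a minimal, hypothetical Python loop. The config shape and the `create_object` callback are assumptions for illustration, not the actual cluster loader code:

```python
def run_cluster_loader(config, create_object):
    """Walk a cluster-loader-style config: for each namespace entry, create
    the namespace, then create N copies of each object kind listed for it
    (quota, template, service, pod, rc, ...)."""
    created = []
    for ns in config["namespaces"]:
        # First create the namespace (or project) itself.
        create_object("namespace", ns["name"])
        created.append(("namespace", ns["name"]))
        # Then create however many instances of each listed object kind.
        for obj in ns.get("objects", []):
            for i in range(obj["count"]):
                name = f'{obj["kind"]}-{i}'
                create_object(obj["kind"], name, namespace=ns["name"])
                created.append((obj["kind"], name))
    return created
```

With a config like `{"namespaces": [{"name": "ns1", "objects": [{"kind": "rc", "count": 2}]}]}`, this would create the namespace and two replication controllers, then move on to the next namespace entry.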
It does that for however many applications it can. We've also extended this to not only have sample applications — so, for example, what we call quickstarts, which allow clients to easily get basic applications running, like Redis or whatever it might be, WordPress and a back end — but additionally it extends to having load-generation and stress tools as well, which it can run against a cluster that's already running. So that's essentially what we're trying to do. It's pretty straightforward; we just need to migrate it from where it exists today.
Yeah, so in summary: a tool that can create a beehive of activity within the cluster, and the idea there is to more realistically simulate what the customers are going to see and do. So, as you guys may be aware, OpenShift has a source-to-image workflow that includes builds and shifting images all around, so we have interest in stressing that part of the system as well; but post-deployment we want to have load put on it.
So we've got, as Sebastian mentioned, a jmeter test, a syslog test which stresses the logging plane, and then also some CPU-bound tests. We don't know yet — because this is still just, like, open-heart-surgery time for that set of tests — so I can't say exactly what we'll learn, but I know that the top-end scale numbers are just not achievable when you start putting all of this other stuff on the cluster as well. I think that's the important thing we're trying to tease out of this study: what's a realistic top-end scale? You know, a cluster that's busy, with a lot of work, a lot of users, a web interface to deal with — with all that stuff put together, what do the scale numbers really look like? That's what we're trying to build a test to identify.
Indeed, and it's not necessarily about the raw performance as much as it is, you know, verifying best practices for configuration and being able to provide diagnostics for customers if they have issues in the field. So it's not about building the fastest possible system so much as it is learning about what the capabilities are, to identify whether there are things we need to fix. Yeah.
Right, I feel good — I think this would be actionable, right. So I agree that clusters are going to be so different that the absolute numbers that come out of an effort like this are going to be pretty hard to interpret, but I think the value in terms of catching regressions could be very, very good. So if we have a standard cluster setup in standard environments with realistic workloads — and who cares, kind of, how many pages per second the application can generate.
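That "catch regressions, don't chase absolute numbers" idea could be acted on with a simple baseline comparison, sketched here in hypothetical Python; the 10% tolerance and the metric names are arbitrary illustrative choices:

```python
def find_regressions(baseline, current, tolerance=0.10):
    """Compare a run's metrics against a baseline taken on the same standard
    cluster setup, and flag any metric that got worse by more than
    `tolerance`. Assumes all metrics are 'lower is better' (e.g. latencies
    in milliseconds)."""
    regressions = {}
    for metric, base_value in baseline.items():
        value = current.get(metric)
        if value is not None and value > base_value * (1 + tolerance):
            regressions[metric] = (base_value, value)
    return regressions
```

For example, a baseline of `{"pod_startup_p99_ms": 5000}` against a current value of 6000 would be flagged, while 5200 would fall within the tolerance — the absolute numbers only need to be comparable run-to-run, not meaningful across clusters.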
Here, I think what might also be beneficial — because we're talking a lot about the capabilities in kind of an abstract fashion — what we can maybe do next time, because we're running a little low on time, is a walkthrough of how we specified a specific template that we measure, for what we call the master horizontal scalability test plan. That's a very finite set of work that we do to measure master performance. We have other tooling besides the cluster loader.