From YouTube: SIG - Performance and scale 2021-12-16
Description
Meeting Notes: https://docs.google.com/document/d/1d_b2o05FfBG37VwlC2Z1ZArnT9-_AEJoQTe7iKaQZ6I/edit#heading=h.yg3v8z8nkdcg
A
All right, here are the notes in the chat. Mark yourself as a present attendee, please, which I think is just you, Marcelo. Okay, the only two things I had for today were to review the performance of the CI job results. I did a patch to give us some more time, and it merged this morning; I just don't know if it got there in time for the run.
A
Yeah, okay, same stuff; okay, so nothing there yet. Then I think what I'll do, because I really want to get this at least a little bit closer to 100... yeah, I want to see, after we get some more data. I might just post it on the mailing list.
So I think you guys can see it, and we can discuss this, because I really want to see if this changes. If it doesn't change after this, we'll strategize again, because maybe we need to look deeper at it and just make sure, because I agree with what Dave was saying last time about your test.
A
It literally waits for all of the VMs to be running before it continues, so they should be there. It's just odd that we're missing some of the create requests. All I did for this change was add a sleep buffer between the start and the perf test, and then I increased the time afterwards.
A
So now we have a 30-second buffer at the front and 16 at the end, so we're going to measure at least 90 seconds; that'll hopefully give us a different look. So we'll just see. I guess I'll know when the next one runs. What time is it, 10:41? So the next one runs in about an hour and a half; I can check it then, and we can see what ends up happening.
A
Okay, the second thing: the load generator. I've been looking at this and actually doing some work on it. Here are some of the different goals we had originally for this. You actually did this; this was part of your test here. But I was interested in some different types of tests, actually, which is what I've been working on. Let me add it.
A
I called it a churn test, that's what I called it.
B
Yeah, I actually call this a steady state test.
B
Yeah, because you're going to check, you know, steady state scenarios, and so yeah.
A
What I had in mind is: if you're AWS, for instance, you are constantly looking to maximize the number of VMs that you have, and so you expect that people start and stop workloads all the time, and you also expect that your usage is high. In other words, you expect your data center to be nearly full at all times, and you expect lots of creates and deletes to happen on demand.
A
So that's basically what I wanted to imitate, that idea: what happens when we see this, and particularly the rates. Should we expect to be able to reach a large number, like how high can we go, and does that affect us? And then the rates: how do they affect us, given the size and how strong the rates are? That's kind of what I was interested in.
A
A sudden increase and then... wait, wait. It would be, I did the last two actually, so it would be this one: generate a high number of VMIs, and then suddenly increase or decrease the VMI count, something like that maybe. Okay, sorry, I just wanted to ask what's...
B
The other one is similar, isn't it? So the point is: in the burst, for example, you create 1,000 VMs or 10,000 VMs, and then you leave it and see what happens. Okay, and the steady state is what you're saying: you have the ramp-up, so you create a number of VMs, and then you delete and recreate, and you try to keep the churn, a constant creation and deletion in the system.
B
So you have the maximum number of objects that you want to create, and then you do some math and say: okay, I want to keep this maximum number of objects with this rate of deletion, and you keep creating things, so the system enters a nice, steady state. So it would be, I don't know, 20 VM creations per... I don't know.
B
I mean, that's very high, but let's see what we can do, let's just say. Then you keep that and see how the system behaves, you know, under this constant churn that you were saying, the constant load behavior. And the burst, what I call the shock test, it's just this: you create 1,000 or 10,000, and then you don't create more. You just leave it, see how the system behaves, and then you delete it all at the end.
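The steady-state pattern described above (ramp up to a cap, then delete and recreate a fixed number of objects per cycle so the live count stays constant) can be sketched as follows. This is a minimal illustration; the function name and the numbers are assumptions for the example, not part of any actual tool discussed here.

```python
# Minimal sketch of the steady-state ("churn") pattern discussed above:
# ramp up to a cap, then delete and recreate a fixed number per cycle so
# the live count stays constant. Names and numbers are illustrative only.

def churn_plan(max_objects, churn_rate, cycles):
    """Return a list of (creates, deletes, live_count) tuples per cycle."""
    live = 0
    plan = []
    for _ in range(cycles):
        if live < max_objects:                      # ramp-up: only create
            creates = min(churn_rate, max_objects - live)
            deletes = 0
        else:                                       # steady state: replace what is deleted
            creates = deletes = churn_rate
        live += creates - deletes
        plan.append((creates, deletes, live))
    return plan

# e.g. a cap of 100 VMs with a churn of 20 per cycle: five ramp-up cycles,
# then every later cycle creates 20 and deletes 20 while holding 100 live.
plan = churn_plan(max_objects=100, churn_rate=20, cycles=8)
```

A real load generator would issue the creates and deletes against the cluster and account for deletion being slower than creation, as noted later in the discussion.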
A
So what I want to break down: density, to me, would be a high VM or VMI count. Is that accurate? Is that the only thing that's unique to density, just that we have high counts of VMs, like we've maxed it out in some form? But is that accurate? Do we care about how fast as possible? Is that even relevant, or do we call that something else?
B
Now, this would be like the SLO, the service level objective: how fast it can create. So yeah, let's put it like that. So there are two ways to generate load, before defining the tests: the shock test, or the burst, that we were saying, and the steady state load generation. So, you know, just two kinds of ways to generate load.
A
Okay, so this is... okay, so we called it... so you call it shock. I think shock or spike test, I don't know, whatever, yeah.
A
So shock, spike, I don't know. I think I googled the term spike test; that's how I got it. A spike test is a fast ramp-up, so fast ramp-up, and then we have steady state, which we'll call our slow one.
B
Yeah, but I don't know if it's slow and fast here, because a steady state can also have a fast ramp-up, isn't it? It depends how you configure the test. The steady state is: you keep, you know, the cycle going in the test. And in what I call the burst... so Kubernetes uses "steady state" and "burst", so maybe we can just keep those terms. In the burst you just generate, you know, a huge request; in the steady state you keep cycling the creation, update and deletion.
B
So I think, especially because Kubernetes uses them, we should stick with burst and steady state. And then, if you can go to the file... yes, exactly. If we can go a little bit down: I think I already showed this before. Those are the high-level metrics that I was thinking of, and again this is based on what Kubernetes does for their definitions. Okay, so I'm not reinventing anything, just inspired by it. So the burst, again, the burst!
B
It's this; I was always saying shock test, but it's spike, anyway, it can be both terms. You generate, you try to create 1,000 VMs at once, you define, you know, at once. Okay, so for example, in this test they are showing here: let's assume that you have 1,000 nodes, and then you create, I don't know, with a density of 30 VMs per node, and it means that you try to create 3,000 VMs at once.
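As a sanity check on the sizing arithmetic in the Kubernetes-style example above: in a burst test the whole batch is requested at once, so the batch size is simply node count times the per-node density target. Note the figures as spoken (1,000 nodes at 30 VMs per node giving 3,000 at once) don't multiply out; the sketch below assumes 100 nodes, which does give 3,000. Names are illustrative assumptions.

```python
# Burst ("spike") test sizing sketch: everything is requested at once, so
# the batch size is node count times the per-node density target.
# Illustrative only; not any tool's actual API.

def burst_size(nodes, vms_per_node):
    return nodes * vms_per_node

# e.g. 100 nodes at a density of 30 VMs per node
batch = burst_size(100, 30)
```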
B
I don't know, does it make sense? And then, if you go down a little bit more: for the steady state there are also these kinds of metrics that I'm proposing for our case. I didn't finish this document; that's why I didn't share it before. But since you already started to talk about this, I think this might be a good time to talk about it. And if you go a little bit down, there is also a figure about it.
B
So this is the VM churn, and it has some updates, but we can have only creates and deletes; this would be the cycle. And then we have this: it should be configurable, okay, a configurable churn, probably you're already doing that, but say 20 VMs. And this depends a lot on the deletion time of the VMs and the creation time of the VMs, and how the system behaves with a constant load of creation, deletion and update.
A
I see. What is their analysis on churn rates? Yeah, what is their number of objects per namespace, like limits? Where does their performance start to degrade? They must know this, because we've been hitting this internally, yeah.
B
...to prove, but the point they have here is: with five thousand, they can only have two namespaces, okay? What they are saying with this analysis, and this is very nice, is that the relationship is not linear. If you increase the number of namespaces, you cannot have these 5,000 services.
A
Services, yeah, that's true, okay, and five thousand per namespace. Okay: "after this size, the service link environment gets too big for the namespace, causing pod crashes." Okay, yeah, I see. We've actually seen this, not with services, but with other objects: PVCs particularly, and secrets as well; they might also have noticeable issues. Let me see, I didn't think they did testing with PVCs. I didn't see them; they didn't have much data on it, right, no.
A
Yeah, no, I agree; that's exactly where I want to go with this. And it would be interesting to talk with the Kubernetes scale folks as well, because I bet, you know, if we compile our findings and reach out to them and present to them, they would be very interested to see this from a VM perspective. That would be really interesting, I think.
A
I mean, because, yeah, exactly: take our ideas, because we're very dependent on them and we basically piggyback on a lot of what they're doing, and so we want to reuse the patterns and everything. But it's also interesting because we have all these controllers as well.
B
So they don't have it anymore, but I know that OpenShift has limits documentation of their own anyway. We are targeting Kubernetes here, so we can just check Kubernetes.
A
Okay, yeah, cool. I wasn't aware of this presentation, but this is really useful information. Especially, I did not realize that the number of pods per namespace can affect the number of namespaces you can have like that. I thought this could actually scale quite a bit horizontally and was independent of this variable.
B
It's old, though; just be careful, it's three or four years old, something like that, from 2018. But I wouldn't expect, you know, too much better than what they had three years ago.
A
That's really interesting. Who's the author of this? Oh okay, I've seen these guys in some of the communities. Okay, I'll review this, because I wasn't aware of it. That's really interesting. Okay, all right.
B
What I wanted to say here is that maybe we should stick to the burst and steady state load generation. We are doing the burst now, but of course we should also do the steady state, and then we need to vary the objects and see what we do. Also, the namespace analysis is on the roadmap, you know, an idea that I was playing with.
B
You
know
to
test
on
different
name
space,
so
the
load
generator.
What
I'm
doing
now
the
next
tag.
So
I
received
some
feedbacks
before
it
was
a
long
time
ago
when
I
was
working,
the
load
generator
okay
and
it
was
from
the
the
guys
that
created
the
coop
burn.
I
don't
know
if
you
saw
this
this
tool
before.
B
What I'm thinking is: a lot of people, at least people from Red Hat, I don't know about a lot of people, but people from Red Hat, many performance engineering guys, are using kube-burner, especially for testing Kubernetes. And I think I want to maybe contribute to kube-burner to create VMs. However, kube-burner only does the burst test, you know; but maybe it can also benefit from this steady state test, and that's what I was playing with.
A
Well, hold on. I do want to talk more about our plan for the load generator, but I want to finish this one first. I want to make sure I get this right: density and burst. So density, how are we going to describe this? Density is about creating...
B
...we just wait for it to be created. And in the steady state, the system will achieve a steady state; that's when we configure the churn. The churn term, it's a term for, yeah, it's how we configure it. So, for example, density is how we configure the burst test, and churn is how we configure the steady state test.
A
Okay, so steady state would be its own, yeah.
B
...load generation, and then, oh, "burst test", maybe better: a burst test and a steady state test. And then we have the scenario configuration using VM density.
B
Well, then the density: we were saying, you know, to create a high number, so the spike is configured here. What I'm saying is it's a different scenario: you have more density or lower density, but it's part of this umbrella test, the burst test, you know. And then the steady state again: you put different pressure on the test or not.
A
Here's the question about the burst test. This would be my assumption when I start this test: how is it different from the steady state? For the steady state we'll keep a data center full at all times, that's the assumption, and then the burst test is just that we're going to cause pressure by creating virtual machines one time. Is that the difference?
B
Yes, and then they test different things. So that's the definition: the steady state keeps the data center, as you mentioned, under a constant load, and it tests what they call normal behavior. You don't push the data center too much; of course you can be close to the borders. But the burst test is normally to test the creation of a large number of objects and see how the system behaves. It specifically has many use cases, and one of them is...
B
...also, if there was some crash, you know, some nodes broke and then came back, and then there would be a lot of recreation, a lot of requests for recreating things. This is more related to the burst test, you know: to recover, to suddenly create a lot of objects. And the steady state is to see how the system performs in a normal situation.
B
I'm having internet problems, I think. Basically, the steady state is to test how the system behaves under flow, because we expect, in a regular system, to have a lot of users creating and deleting objects in the system. So again: steady state is for the normal behavior of the system, and burst is for the corner cases that we want to test.
A
Okay, so the things that I'm also wondering about: let's see, in a steady state, in this churn scenario, we have a bunch of VMs being created and deleted. These can be at variable rates. What would we call it when it's really quick, when we're doing a very sudden decrease after we sort of have this zone, and the data center is at capacity, I guess we'll call it?
A
Yeah, I'm wondering, maybe we just don't call it anything. A churn test means that we're going to churn: we're going to create, we're going to delete, and then create. That's all it means. And if we have steady state, it means, right, that's what we're saying, a slow ramp-up.
B
So if we configure a churn of 5, it should, you know, after the ramp-up, reach a state where we have a constant 5 VM creations and deletions per minute, I don't know, something like that. And then it should last for, I don't know, one hour, I don't know the time window, but it should last for that. Then we've achieved the steady state for this churn.
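The pass criterion just described (a constant churn of 5 creations and deletions per minute, sustained over a window such as an hour) could be checked along these lines. A hedged sketch with hypothetical names; a real harness would count events from the cluster's API audit or metrics.

```python
# Sketch of the steady-state pass criterion discussed above: after ramp-up,
# every minute in the window should see the configured churn (give or take
# a tolerance). Hypothetical helper, not an actual test harness.

def sustained_churn(events_per_minute, target, tolerance=0):
    """True if every per-minute count stays within target +/- tolerance."""
    return all(abs(n - target) <= tolerance for n in events_per_minute)

# One hour at a churn of 5 VM creations (and deletions) per minute:
ok = sustained_churn([5] * 60, target=5)
```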
A
This is a scenario, right? This is the pressure we put on it, but we expect the steady state test will pass when it stays at a certain number of VMs, and the test we do is churn.
B
Exactly, and then the number of objects is another variable for the test; it can vary. So we can keep, like, a maximum of 5 VMs, you know, in the whole system, or it can have more, 1,000 or 10,000 VMs, but with a churn of five.
B
You know, because the number of objects in the system affects etcd, affects API calls and a lot of things. So there are many variables to the test. I've done that before for Kubernetes, you know, and I can take again all these variables that we should take into account for the steady state test, and then we can check that.
A
Something like that, there we go. So now we've got our steady state test, this is what we expect, and then our different configuration scenarios: we'll do churn. What other scenarios are there for steady state? Is it just churn? What else could we do? Create-delete, I guess we could just...
A
Okay, but is the expectation that, no matter what scenario you're running, whatever it is, the steady state test is always going to sort of... it has to recreate. So it would have to update, patch and delete VMs at a specific rate, so that it causes chaos and maintains a certain VMI count. So yeah.
A
But this will create the churn, though; the scenario itself wouldn't create any VMs, right? The steady state test is responsible for this, because that's where our goal is, right?
A
I mean, the reason I'm making this distinction is because I think it will affect our implementation.
A
Doesn't "cause chaos" mean create? Well, it could. It's just a question of who is doing the creating. The steady state test is responsible for the creating, is sort of what I'm thinking, and then this is our chaos. So, like, any other scenario here: this one is at a rate, so the difference between scenarios is that maybe we have other ones that don't do it at a specific rate; they just kind of, I don't know. That's the difference I'm seeing.
A
No, maybe you're right. I mean, you're right that it is create, yeah. I'm being picky, because, I mean, we can have it here, it's not a problem. It's mostly, yeah, I mean, that's what we'll measure; I guess we'll call it churn. So our scenario will be that.
A
Okay, so that's what we're... so, but we...
B
We want to have, like, 20 VM creations per second, for example, in the scenario, and if we already have the maximum VM number configured that we can have in the cluster, we need to delete 20. So that's where we have the ramp-up. For example, we want the maximum number of VMs to be 1,000, okay, and then we keep creating until we reach the 1,000.
B
This is the ramp-up. And then, when we reach the limit of 1,000 VMs, we now need to start deleting in order to create new VMs, and then that's when we start to cycle: we delete and create. There is a rate to delete; deletion takes more time, so we cannot create at the same rate. And then the thing will be to achieve a steady state.
B
You know, a behavior where we can keep this amount of VMs created in the system and keep deleting and recreating VMs for new requests. And then we can have the ramp-down, which means, when we finish, we start deleting things, and after the ramp-down the system should come back to its normal behavior, isn't it? All the garbage collection should run and everything should be back.
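The three phases just described (ramp up to the cap, churn at the cap, ramp down to zero so garbage collection brings the cluster back to baseline) can be laid out as a simple plan. A sketch under assumed names and rates, not the actual load generator.

```python
# Sketch of the test phases discussed above: ramp-up, steady state at the
# cap, then ramp-down back to zero. Purely illustrative shape.

def phase_plan(max_vms, step, steady_cycles):
    """Return (phase_name, live_vm_count) tuples across the whole run."""
    phases, live = [], 0
    while live < max_vms:                    # ramp-up: create until the cap
        live = min(max_vms, live + step)
        phases.append(("ramp-up", live))
    for _ in range(steady_cycles):           # steady state: churn at the cap
        phases.append(("steady", live))
    while live > 0:                          # ramp-down: delete everything
        live = max(0, live - step)
        phases.append(("ramp-down", live))
    return phases

# e.g. cap of 1,000 VMs, 250 per step, three steady cycles
plan = phase_plan(1000, 250, 3)
```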
A
Okay, so this would be a ramp-up. I just don't know how to define this broadly; it's just "change objects", you know, yeah, in our case.
A
Ramp-up could be anything; that could be any first step, that's fine. Okay, and then "change repeatedly", that could be our scenarios here, and then ramp-down, yeah, I mean, just the inverse of whatever this is. Okay.
A
Let me... okay, so we've got our goal. Let me do our goal; I'm going to move up here. So our goal is: what do we want to measure? What's the thing we measure? Do we measure the rate? Is it performance? Well, we want to measure the rate, and the number of objects as well: can we keep up?
A
Those are good ones, yeah. Let's start with these latencies; there was the creation latency.
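The creation-latency measurement named above (time from create request to the VM running, summarized for an SLO) might be reduced like this. A nearest-rank percentile sketch; the sample values are made up for illustration.

```python
import math

# Sketch of the creation-latency measurement discussed above: collect
# per-VM latencies (create request -> Running) and summarize with a
# nearest-rank percentile. Sample values are hypothetical.

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (seconds)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

latencies = [2.1, 2.4, 2.2, 9.8, 2.3]   # seconds, hypothetical
p50 = percentile(latencies, 50)          # median creation latency
p99 = percentile(latencies, 99)          # tail, what an SLO would bound
```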
A
That's our goal, and then it doesn't matter if our data center stays at a certain amount; we don't care, because we just want to see this. Yeah, it doesn't matter, it's not the data we want. Okay.
B
For this, I think the scenarios here will be, like, you know, things like that.
B
We can double-check the terminology, the better, the exact terminology for that, so those are, you know, well-defined things for the steady state test. Since Kubernetes used "churn" I was using "churn", but I'm not sure; maybe there is another term. I think there is another one.
A
All right, I like your definition here, so this is good, and that makes sense. We have some things we're going to measure. Okay, that makes a lot of sense to me. Okay, so density, burst. So then we've got stress, soak. Do we need some of this stuff? What about this test, which we didn't cover; should we do the same thing here? What's the goal for this one?
B
We want, for example, to create 1,000 VMs, and creating 1,000 VMs should take, like, I don't know, 20 minutes; and if we keep to that, you know, creating 1,000 VMs in 20 minutes, we are under normal operation.
A
I still... I want to be a little bit more specific, though. "Evaluate the system's ability to function": when do we say, like, SLOs, like that's within our...
A
Yes, I call it soak down here, so this is like we've let the VMs sit.
B
You know, make it a little bit more flexible, the definition of burst, because I think the steady state should have this cycling, you know, creating and deleting. If you create and then you do nothing, I think it will fall under the burst test. Because there are other kinds of tests; there is another thing that they call stair tests, which means you create in batches.
A
So, the system's ability to function during what?
A
So then this has to be churn, is what you're saying; this is like we have to cause chaos. Yes, okay, that's fine. Okay, I think we just have to find a definition that makes sense, like for this: the system's ability to function while objects are being... I mean, that's where: while objects are being created and deleted, while there's change, while there's churn, while there's something.
B
But it's a constant load you have, like, you know, so it could be...
A
So, okay, we want to use that definition: while there's action, while something is taking place, objects being acted on, while there's change, during chaos or something.
A
Okay, well, so we can define it. So then, under "while the control plane...". That's what I want to get really specific about. So then, "evaluate the system", because load, rate, I mean, we could say compute... that's what I want to clear up. So: "while the control plane is under constant pressure", probably, wow.
A
I mean, you could say load; "under constant load" is the same thing, I'm just using different words. "The system's ability to function properly while the control plane is under constant load." Yeah, yeah, let me see, something more... pressure, same thing, synonyms.
A
Okay, I mean, I think that is different now. Moving soak up here makes sense to me, because we're not constantly doing it, we're waiting. This is just one test, one scenario, and then there are others. Maybe we measure performance, or, I don't know, we just do creates and deletes or something; I don't know what that would be called. Yeah, I don't know.
B
Yeah, but yes, I think that's exactly what burst is, what we call spike and burst, well...
B
No, don't worry. So we have the ramp-up phase, you know, and then the ramp-down, and we don't have the middle phase, like the steady state. I think that's good, okay. And the scenarios here are that we can configure, you know, the creation rate, isn't it?
A
Better, there you go, yeah, something like that. And then soak is just something else. Soak just means that... okay, so what would be our middle phase then with soak? What would we say here? Because we're just waiting; I don't know, we could just call it wait time.
A
That can vary; this is just what we expect, and with soak it's just long, okay. And then density, yeah, okay. And I think what we could do is just mix them together: you could do a soak test with high VM density and a fast creation rate; you can do all of the scenarios together, or one of them. Okay, that makes sense. Okay, let me get rid of this. I think we covered all of that; those are all of the ones that we had.
A
We should continue this discussion next time, because we should talk more about this, and because I do want to do some design on it. I was looking at doing the churn, and I basically did a PoC: I took your code, changed it around, and kind of made a PoC out of it. I've been doing some testing, but I want to make it... because this is, to me...
A
Cool, all right, Marcelo. I think I've lost track of time here, already 15 minutes over, okay. So I'll follow up on this with you guys in Slack, and our next meeting isn't going to be till the new year. I think it's going to be the first week of January, January 5th or 6th or something; let's see, January 6th, so that'll be our next call. So all right, have a good new year's and a good holiday, thanks.
A
Yeah, I'll let you know in Slack how this goes, and keep thinking about this, because I think we should do some more design on it, maybe a design doc or something, but you already have it. I think we should take this, and yeah, we should kind of expand and collaborate. I'll put my thoughts in here, maybe, and that's maybe what we can do.
B
Yeah, so, a pointer to an implementation of this steady state load generation: our friend from IBM did one. I think he actually calls it "closed loop". Okay.
A
Well, I like the terminology, to be honest; whatever, I don't really care what we call it. I think the definition is the most important thing that we agree on, and that's what's going to drive everything. So I think we have something on paper here that would be really valuable. Let's get David's opinion, let's get others' opinions, yeah.
A
This will be on the mailing list when I push the notes, but it would actually be good to even expand it, or just post it, like, "hey, here's how we define this", on the mailing list, so people are aware. I think that's kind of where we want to go with this, and then maybe we can work on design after that, once we agree on the language. Cool, all right, Marcelo, thanks.