From YouTube: Discussion with Kinvolk: Incorporating wrk2 into Meshery
Description
Performance benchmarking to compare service meshes using wrk2 with Meshery.
B: Great. So, thanks for getting us together, and we've got Thilo from Kinvolk as well. Guys, I'm pleased that we're recording this particular deep dive. We do a good job of recording our community meetings, and my hope is that, you know, all of the features and functions that we're working on are transparent and open, and that we're encouraging of new collaborators and new contributors, too. So, when we first met with Thilo back at Cloud Native Rejekts, which he's probably representing today...
B: We were learning two things with respect to, one, coordinated omission, and then, two, just kind of the approach that you guys have taken in general, and we wanted to acknowledge that. We originally chose Fortio as the load generator that's built into Meshery, really as, well, one, a matter of convenience, and as a matter of familiarity to Istio, because that's the load generator that project uses. But then, you know, after having been learning from the things that you were describing, it sounded like, hey...
B: And, as such, Fortio has a few controls that we expose to users with respect to the longevity of the load that you're generating: not explicitly the number of requests, but rather the requests per second that you want to generate, and control over the number of threads. So mathematically you can arrive at a point by which you could say, yes, here is the total number of requests that I would expect would be generated in the test. And so, in the way we were presenting back these comparative performance results, or, you know, the way we're showing back the test results, we were asking what's most handy and most informative.
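The arithmetic described here, deriving an expected request total from a fixed rate and a test duration, can be sketched as follows; the 1000 RPS and 5-minute figures are hypothetical examples, not values from the discussion:

```python
# Sketch of the request-count arithmetic described above: with a
# constant requests-per-second target and a known test duration,
# the expected total number of requests is simply rate * duration.

def expected_total_requests(rps: int, duration_seconds: int) -> int:
    """Total requests a constant-rate load test should generate."""
    return rps * duration_seconds

# Hypothetical example: 1000 RPS held for a 5-minute test window.
total = expected_total_requests(rps=1000, duration_seconds=5 * 60)
print(total)  # 300000
```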
B: As you well know from the studies that you had published, it is the comparison between different types of tests that you're running, and the variables that you've changed, that we want to facilitate. So, currently, the comparative performance charts that we show are based on metrics that are gathered on a per-request basis.
C: wrk2 was made as a friendly fork; it came to address coordinated omission, and that is why the requirement to specify a requests-per-second rate was introduced, because of the way that wrk2 works. Now, going back to the kind of scenario set for you: it lets you easily test individual requests, basically measuring alongside a service mesh configuration, or maybe testing different applications.
C: ...it runs into latency. That is, by the way, why we've seen such hair-raising latencies in our service mesh testing; I would not expect those to pop up much in real life. The reason here is to have a predefined scenario and to drive the maximum latency possible out of any given system, in that specific requests-per-second scenario, to the point where it's not bearable anymore. And so it shows you what you normally would not see in benchmarks.
C: Requests per second: it's also dependent on the test window. So you won't see much of this in a 10-second or 1-minute test. But if you run those tests for 5 minutes, 10 minutes, 30 minutes, you're starting to see the outliers, and that helps you to identify those bottlenecks in your applications long before you run into them with any of your users.
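The point about long test windows surfacing rare outliers can be illustrated with a toy simulation; the latency values and outlier probability below are invented for illustration:

```python
import random

# Toy illustration of why longer test windows surface tail latencies:
# here 1 request in 1000 takes 500 ms instead of ~10 ms. A short
# window will often miss the outlier entirely; a long one will not.

def sample_latencies(n: int, rng: random.Random) -> list:
    """Simulate n request latencies (milliseconds)."""
    return [500.0 if rng.random() < 0.001 else 10.0 for _ in range(n)]

rng = random.Random(42)
short_window = sample_latencies(100, rng)      # roughly a 10 s test
long_window = sample_latencies(100_000, rng)   # roughly a 30 min test

print(max(short_window), max(long_window))
```

With 100,000 samples, the chance of the long window missing every outlier is negligible, which is exactly why the longer run exposes the bottleneck.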
C: So, in a way, even if you keep a static RPS, if you increase the test window, at some point in time chances are that you'll push your endpoint to basically expose those kinds of bottlenecks, and taking the trends from a 30-minute run, that can be a real-life value. The way wrk2 does its testing is best explained by the use case of having a web page being assembled, from beginning to end, and having the browser...
C: Basically, in order to assemble that web page, the browser makes a certain number of requests. Wait a second now: no one will look at a web page that needs so many requests that it takes minutes to assemble. 30 minutes is a little bit of an extreme example, and 5 is maybe a better version, but it is definitely pointing to the bottlenecks in your application way before anyone runs into them.
C: So that's maybe, more simply put, the comparison between Fortio and wrk2: Fortio is extremely fast and extremely usable for just basically testing basic things and testing response times on a per-request level, which is something that the wrk2 tool basically discarded; it can't do that. wrk2 tests a scenario where the atomic operation is the building of a website, for instance, in a single client thread, so it's able to distinguish that, and maybe that's the value, the real value, that wrk2 can bring.
C: My shorthand distinction would be: individual, per-request benchmarks for Fortio, and complex scenario tests that you could execute on your KPIs for wrk2. And just to end the short friendly-fork story of wrk2: there are some limitations in its engine, and the original value of wrk2 is that...
C: Even though you can spawn as many threads as you want, there's only a single URL, a single endpoint, that you can benchmark, and we felt that was peculiar, given that wrk2 aims at benchmarking scenarios. We found that unrealistic, and basically the six or seven patches that we have on top of the engine just extend the command-line options and the Lua scripting interface to be able to test multiple endpoints.
B: No, but that seems quite interesting, inasmuch as you would expect that an operator would be able to say, yeah, hey, look: of the ten services that we have, it's the authentication service that gets hammered every single... sorry, it's this one that people use the most.
C: It could be, I mean, the most straightforward implementation would probably be adding five or six lines of code and then calling it done; but, I mean, it needs to be implemented and tested. But, yeah, it's not much work to add that on top of wrk2. What we needed, however... so, the state our fork is currently in is highly specific to the benchmarks. At the front, we've made a number of extensions.
C
For
instance,
if
you
wanna
supply
multiple
endpoints,
it's
it's
time
and
a
little
peculiar
way
right
now,
it's
not
like
the
obvious
thing
on
the
command
line
that
you
would
expect
and
before
committing
or
work
upstream
or
before,
starting
to
basically
extend
on
that,
I
would
rather
have
someone
spend
a
few
days
and
cleaning
cleaning
things
up
and
making
the
the
future
sweet.
I
don't
know
generally
usable,
that's
not
much
change
as
well.
This
is
actually
mostly
designs.
C: So there's a scripting interface to wrk where you can basically write new hooks for certain states that each of the load generator threads is in, and, I mean, they basically provide access to internal data structures: the host they're about to connect to, or, if it's a callback after the request finished or before the request starts, it'll give you the HTTP header, so you can tweak a few things.
C: If you implement a response hook, then you can maybe get results. Now, wrk2 does only the most minimal HTTP parsing, because it tries to be very fast. So it's not even fully HTTP compliant; it does just the most basic things necessary to work as a client and to process server responses, but other than that, there's only the very specific interface that it lets you use.
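The per-request hook flow described here, callbacks invoked around each request feeding a latency recorder, can be sketched in Python; the hook names `on_request` and `on_response` are illustrative stand-ins, not wrk2's actual Lua API:

```python
import time

# Rough Python stand-in for the per-request hook flow described above.
# The hook names (on_request, on_response) are illustrative, not
# wrk2's actual Lua interface; the point is that each worker thread
# invokes callbacks around every request, and a recorder accumulates
# latencies for later reporting.

class LatencyRecorder:
    def __init__(self) -> None:
        self.latencies_ms: list = []
        self._started_at: float = 0.0

    def on_request(self) -> None:
        """Called just before a request is issued."""
        self._started_at = time.monotonic()

    def on_response(self, status: int) -> None:
        """Called when the response has been fully received."""
        elapsed_ms = (time.monotonic() - self._started_at) * 1000.0
        self.latencies_ms.append(elapsed_ms)

recorder = LatencyRecorder()
recorder.on_request()
recorder.on_response(status=200)
print(len(recorder.latencies_ms))  # one latency sample recorded
```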
B: I'm going to make a suggestion, and maybe I'm leaving the answer here and this is not it, but to the extent that... I mean, this is already available, but it seems like, last time we engaged on this in chat, wrk didn't natively emit those statistics in JSON. To the extent that that was desirable within Meshery, to fit into the existing charting framework, mm-hm, what is, in your mind, the most appropriate place to hook in to augment that?
C: The right place would be in the wrk script, and so there are more hooks available than the ones for the worker threads. You also have a setup hook that basically is called from the context of the main application, and then you have the results hook at the very end of the test run, where you basically get access to all of the raw wrk results. And, actually, I believe I've seen an example of this in one of the upstream wrk use cases, so there are examples.
C: ...requests per second, because that's the way the statistics in wrk2 work; plain wrk doesn't have that. I don't think wrk takes the RPS command-line argument; that's something wrk2 added. And the main difference internally is, of course, that Gil Tene's HdrHistogram code, instead of the statistics code, is being used for measuring latency, which is not what plain wrk does.
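The histogram-based recording mentioned here can be illustrated with a much-simplified fixed-bucket recorder; the real HdrHistogram uses logarithmic bucketing with configurable value precision, which this sketch does not attempt, and the latency values are invented:

```python
# Much-simplified illustration of histogram-based latency recording,
# the idea behind HdrHistogram: record values into fixed-width buckets
# and read percentiles back from the bucket counts, instead of keeping
# every raw sample in memory.

class SimpleHistogram:
    def __init__(self, max_ms: int, bucket_ms: int) -> None:
        self.bucket_ms = bucket_ms
        self.counts = [0] * (max_ms // bucket_ms + 1)
        self.total = 0

    def record(self, latency_ms: float) -> None:
        idx = min(int(latency_ms) // self.bucket_ms, len(self.counts) - 1)
        self.counts[idx] += 1
        self.total += 1

    def percentile(self, p: float) -> int:
        """Upper bound of the bucket containing the p-th percentile."""
        target = p / 100.0 * self.total
        seen = 0
        for idx, count in enumerate(self.counts):
            seen += count
            if seen >= target:
                return (idx + 1) * self.bucket_ms
        return len(self.counts) * self.bucket_ms

hist = SimpleHistogram(max_ms=1000, bucket_ms=10)
for latency in [12, 15, 18, 22, 480]:
    hist.record(latency)
print(hist.percentile(50), hist.percentile(99))  # → 20 490
```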
C: I'm not sure, I mean... so it's not in yet, but it should be. It shouldn't be hard to do. Basically, you're right: okay, this needs a little bit of research on the result data structures that you get in the respective cases, and the Lua API needs to be extended a little bit to get more data and more access. So, just to stay on the safe side for now, I would say that our modification lets you measure the overall latency distribution.
C: ...will, eventually. So I have a few ideas. Because the state of things currently aims specifically at the service mesh benchmarking we did, I need to work a little on the command-line interface, make it a little more generic and easier to use, but, yeah, we're definitely planning to get this upstream, because that's where it belongs.
C: ...with wrk2's upstream repo, which currently has the limitation that you can only benchmark a single endpoint, because our work is not included yet. But everything's there from the runtime side, and I think integrating that would be a very good foundation for iterating later on.
C: If you google for it, you only find a few blog posts, most prominently by Gil Tene, who came up with the concept behind that. So, in a traditional benchmark, where you do not specify a requests-per-second rate, you just tell the benchmark: okay, there's your endpoint, give it what you've got, and then give me the rough RPS and how much it can take. The thing that the benchmark will do is issue requests, wait for responses, in a...
C: ...to give you an average and a percentile of the latency that you might expect. The thing that this kind of benchmarking omits is that a user, or a browser, wouldn't wait for a specific request to finish until the next one is issued. Now, this has an impact on the way that we...
C: For traditional benchmarks, imagine you have a theoretical requests-per-second rate of, let's say, ten RPS. So the benchmark, the load generator, whatever, just issues the requests; it needs to basically issue a request every hundred milliseconds. And let's assume one response takes 300 milliseconds. The way this will be reported in the result is: nine requests went very well.
C: But a single request was an outlier and took three times as long, so you're kind of motivated to just remove the outlier, because it's just one out there; but your user will feel differently. So Mr. Tene theorized that the user would issue the next request even though the current one has not terminated, and that does two things. First, the next request is issued late.
C: In the scenario I just described, the next request would be issued 200 milliseconds too late. So a newly issued request is already late, and that means the next request would be late in its response as well, and should be counted as such, even if it's below 100 milliseconds, because it's slow for the user, because the overall thing takes 300. And indeed, wrk2 definitely takes that time into account, the delay into account: at the time the next...
C: ...request should have been issued, and it adds this to that request's latency. And that, by the way, is the reason why some of the service mesh benchmark results look so bad: because, obviously, latency is piling up; it can't escape anymore. And if the server falls asleep for half an hour on a single request, then this half an hour will be added to every single successive request, if you take coordinated omission into account. In traditional benchmarks this does not happen; you just issue one request after another.
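The accounting just walked through, latency measured from when a request *should* have been issued rather than when it actually was, can be sketched as follows; the numbers reproduce the 10 RPS / 300 ms example from above:

```python
# Coordinated-omission-aware latency accounting, as described above:
# requests are scheduled on a fixed cadence (100 ms apart for 10 RPS),
# and each request's latency is measured from its *intended* start
# time, so one stalled request makes the requests queued behind it
# count as late too.

def corrected_latencies(service_times_ms, interval_ms):
    latencies = []
    clock = 0.0  # when the (single-connection) generator is next free
    for i, service_time in enumerate(service_times_ms):
        intended_start = i * interval_ms
        actual_start = max(clock, intended_start)
        finish = actual_start + service_time
        latencies.append(finish - intended_start)
        clock = finish
    return latencies

# Nine fast responses and one 300 ms outlier at 10 RPS (100 ms cadence).
times = [10.0] * 3 + [300.0] + [10.0] * 6
print(corrected_latencies(times, interval_ms=100.0))
# → [10.0, 10.0, 10.0, 300.0, 210.0, 120.0, 30.0, 10.0, 10.0, 10.0]
```

Note how the single 300 ms stall bleeds into the next few requests' reported latencies before the schedule catches up, instead of being a lone, easily discarded outlier.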
B: Cool, cool. So, but yeah, and so, great: wrk is explicitly not what we're interested in working with, but rather wrk2, for the reasons that Thilo expressed, and actually, explicitly, probably Thilo's friendly fork, given the efforts that he's put in, given the fact that you focus on much of the same thing and went through some of those pains. The multiple-endpoint thing is pretty great, actually, and it leads me to maybe twenty other questions. Last time we spoke...
C: So, the unfortunate truth is I didn't have much time yet to put any work into wrk2 to basically polish it; I'm planning to look at this in the next weeks. The reason I joined Kinvolk is that they're looking at some growth, and particularly if you're in a management role, there's, like, a lot of work.
C: I have a bit of a testing and benchmarking background. Before I did anything for wrk2, I worked for Amazon EC2; we had a kernel and hypervisor team here in Germany, and we also did a lot of scenario, load, and system testing, automated. So there's a little bit of practical background on the statistics, and it just felt like a lot of things fell into place, and it clicked, when I read the theory behind coordinated omission. And, yeah, so that's that.
C: It's not much of a change: like, yeah, a few minor items that need to be fixed in wrk2, in the C part, when you want to connect to different hosts; the last commits do those fixes. And then we basically supply the multiple-endpoints option natively in the Lua script, so that's not even a core wrk2 extension. So, yeah, I'm very optimistic about getting those upstream. Yeah.
C: And this is particularly the kind of ugly part: the way it's implemented right now, you supply a number of URL templates that contain a placeholder that you want iterated, or replaced in place, and then, as an argument, you supply a count; and what the implementation does is extend the templates, covering all of the counts. And the simple reason for writing it...
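The placeholder expansion described here might look something like this sketch; the `{}` placeholder syntax, the function name, and the example URLs are illustrative assumptions, not necessarily what the fork actually uses:

```python
# Sketch of expanding URL templates with a counter placeholder into a
# concrete endpoint list, as described above. The "{}" placeholder and
# the URLs are illustrative assumptions, not the fork's actual syntax.

def expand_templates(templates, count):
    """Expand each template once per counter value 0..count-1."""
    endpoints = []
    for template in templates:
        for i in range(count):
            endpoints.append(template.replace("{}", str(i)))
    return endpoints

urls = expand_templates(["http://svc/user/{}", "http://svc/item/{}"], 3)
print(urls)
# → ['http://svc/user/0', 'http://svc/user/1', 'http://svc/user/2',
#    'http://svc/item/0', 'http://svc/item/1', 'http://svc/item/2']
```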
C: It was a single... yes, there's a future extension, alluded to at the end of the blog post, that would take significantly more effort but would provide significantly more value: if you want to go further, to have basically a distributed wrk on multiple nodes, so as to be able to benchmark larger parts of your cluster.
C: We were able to kind of circumvent that for the benchmarks, in part by keeping the cluster relatively small in size, just stuffing the applications in there, and also making sure that all of the machines and all of the networking bandwidth were sufficient. So we didn't need to scale out; we could basically scale vertically instead. A huge scale-out, if implemented, would just use more hosts, okay.
B: Yeah, sure, yeah. But then, yeah, the horizontal, like, distributed worker load generation is one thing, but then, you're also reflecting on the assimilation of all those results and then the crunching of those numbers to produce the report, in part because you ran X number of tests with lots of, you know, results per test, and...
C: Yeah, this is above wrk2, cool; this is outside even the container we built, and that's more scripting based on the raw wrk2 benchmark output. To give a little more detail on what we're doing there: that's something I learned at Amazon, with considerable-size data centers. If you want to...
C: ...get something like what we presented, and that's what we were aiming for, to be professional about it: you shouldn't rely on the data center placement algorithm to pick a good server for you, particularly for network benchmarks, and you need enough data points to make sure you're not looking at data for a Goldilocks cluster, where everything is in the same rack, and also that you don't have any lemon hosts in there, which would give...
C: ...more or less performance. So we tried to get enough statistical distribution in the data points, and we ended up basically running the same line of tests on three different clusters in three different regions, and then executing the same set of tests, I think, three times per cluster. So we had...
C: ...nine runs, and that gave us, I think, enough data points in total. And this is something that I dearly missed in some of the benchmarks: in order to be comparable to other people's setups, other people's data, you need to make sure not only to get the results for your particular installation, but you also need to understand a similar setup in the same or a similar data center.
B: For the results you can report on, you're going to want to have some amount of statistical significance that says: look, we ran this thing a thousand times, and we've pulled out the outliers, or, you know, we've excluded those, and here are the outliers in terms of the test results. Yeah, that's important, what you're saying, but I think you're also saying something slightly more nuanced in there, and that's to say: look, so that's great in terms of confidence in the results that you're producing.
B: But if you'd like to say, look, this other team's or effort's results are here, and here's how we can compare those, you're saying: hey, you'd do well to at least start by identifying your low and high water marks in terms of variability from test to test. And that's, again, a confidence-building thing, right? Like, hey, we have this much variance between tests, so we're pretty confident the environment was well controlled.
C: Particularly the network interface: because of it, a different cluster would generate vastly different results, so you can easily identify that outlier. And something that we didn't do, because overall we didn't have the time for it, was deep dives on specific failure modes that we've only seen in the past on a single host. We actually did have that once, I think, and we couldn't be sure what was causing it, basically.
C: Yeah, that's right. And the other thing I find important... I mean, confidence, of course, is paramount, particularly for this kind of benchmark. The other thing I find really important is usability. That's the other thing we're trying to establish: to give that data out and to have people look at it, and maybe the whole service mesh benchmark automation...
B: Yeah, you've taken the words right out of my mouth; that's beautiful, yeah. I totally agree that, like, in some respects, you know, myself as a user coming to this would say: well, that's fantastic, you've got some Kafka thing that's running, but I'm running something totally different; by the way, also, you're running in AWS on Windows machines, whereas I'm running in GCP, like...
B: That's interesting from afar, that, like, you generally see this much overhead, and it looks like that to you, but I kind of don't care, because what I really care about is within my environment. Moreover, I care about it not just one time, but, like, ongoing: like, you know, am I still doing as well?
B
Am
I
improving
or
my
that's
along
the
lines
so
that
I
wanted
to
ask
which
was
given
that
the
the
at
least
from
my
perspective,
that
the
the
the
work
that
you
guys
have
produced
and
published
was
as
well
received
as
what
were
there
you've
begun
to
articulate
a
couple
of
aha,
a
couple
of
learnings
that
you've
taken
from
the
feedback
you've
gotten?
What
yeah
I
know
you
literally
got
feedback
from
the
ISTE
Oh
from
from
Mandar
and
other
folks,
but
in
terms
of
like
tuning
their
config
and
exciting?
B: Yes, we're, like, so keenly aware of these things, we try to be all-encompassing in terms of our... but any other, like, you just articulated one, or a couple of questions: did you have any other feedback, what did people say, yeah?
C: So, to be honest, we didn't get that much direct feedback. A number of people seem to have picked up both the service mesh test automation and, basically, the wrk2 bits, because it's using a container that's supplied. In general, I think we got the best responses from the Istio community, which we are very thankful for, because, obviously, given the way we've measured, which I've detailed...
C: ...previously in this meeting, it is a very aggressive way, and it really tries to push the boundaries, right? So I don't mind the occasional lapse in communication at first contact, right? If you face those results, and they look funny because you've never seen something like that before, that basically just has to do with the way we're measuring.
C: There was a little bit of feedback to the Istio community, which actually happened in a discussion on the service mesh evaluation project, in that they kind of adjusted the documentation. So if you use that configuration, you're made absolutely aware that it's actually optimized to, for instance, work on a three-node tier on AWS, right? It's that minimal, so that you can run it on small nodes over there pretty easily, just to gather experience in that evaluation: it's meant for developers and operators to have a look at how this is configured and how it operates, yeah.
C: So the verdict, of course, remains, and, of course, because we removed memory limits, it made the resource usage go through the roof, so it's always a trade-off. Overall, I'm actually very happy with the interaction that we had with Istio; most of the interactions were very helpful. We have a documentation update on the Istio website regarding the evaluation, and we got a few hints from someone very...
C: ...deep into Istio performance, in order to generate the perfect config that was then used for improving the results, and that's all basically things that, I think, made the blog post more useful. Other than that, we didn't actually have much interaction, so that was most of it. We had someone theorizing whether the benchmark would be, at some point...
C: It caused a little bit of confusion and a few misunderstandings at first, but after that, we clarified it. For instance, one of the specialties the new folks noticed, regarding the highlight we saw, was the configuration of the ingress nodes for Istio, and you could basically address this concern by saying: oh no, we're testing in-cluster, because we don't want the ingress nodes in the critical path. So this is just...
B: ...so that them potentially picking up Meshery as a utility for their ongoing testing is facilitated. I'm going to use the term, extraordinarily loosely, a standard result spec, if you will, that really, you know, captures three things: it captures literally the output of the results, the results themselves; it captures the config of the environment, to say, yeah...
B: So I'm going to bring this up and explain it in the context of a couple of questions around your ongoing focus, and us, hopefully, or me, or us, hopefully delivering on some of the things where we said, hey, we should address coordinated omission. I had the same questions others have had, which is: what the hell is that, and can you explain it four times on a whiteboard to me so I can understand it? And maybe we should talk about that somewhere; you guys have talked about it, fantastic.
B: Which, then, of the automation bits you guys have, that's there now, yeah; that's about the same place that it was previously. Is there still an intention to go and try to advance that? The primary reason I'm asking is: hey, should we even be encouraging you? And, at any point, are you inclined to collaborate even more within Meshery than we already have been spending time doing, mm-hmm?
C: ...orchestrate things in the cluster but make it feel cloud-native, and, I mean, that's actually already there. So that's definitely the right thing. Something that comes to mind, with regard to having individual test runs on what's technically the same infrastructure but has been deployed in separate data centers: if you think about the power and the degrees of freedom that one would gain by having an in-cluster application that does those kinds of benchmarks, that's maybe a future direction.
C: Uploading, anonymously or not, to a central repository, you could basically look up scenarios; and so, if people are looking for a specific configuration, they would basically see if anyone anywhere has already run a benchmark in that configuration, and could use the benchmark result as a hint and as a comparison, and also, for instance, compare different cloud providers and different Kubernetes distributions across the same scenarios.
B: Yeah, that's something that's moderately frustrating: that, with the effort that folks have put into Meshery, the project, the community is yet to get to a point where we've been able to do the public service that you're describing, in terms of highlighting the fact that it does that today. As you...
B: We've still got a few things, a couple of items, to implement, but, in concept: basically bundle up the test results anonymously, send them back to a free-tier account on AWS, because, you know, it's a community project, and then do exactly what you said, and you actually articulated it well: not only, like, hey, can we begin to say something like, look, 2,000 tests have been run across all of the Consul deployments, and it costs you, on average...
B: ...you know, all variables included, whatever cents per whatever, you know. And there's a couple of things that I think can happen out of that. One is for them to look at their own results: if I'm running at, like, 1.5 percent overhead, I'm feeling good; like, I've got an A+, you know, this is great.
B: That's the worst that's ever going to happen in your session, well, that anyone would ever have running a mesh anywhere; like, okay, well, that stands on its own, that's interesting, you know. But, yeah, and also doing it in the context of something like: hey, actually, yeah, there are standard profiles that come pre-loaded, in this case, in Meshery; right now it's a couple of different sample apps, the same ones that... and also, here are some load generation profiles.
B: Emojivoto, it's, you know, like 10 different micro... it's a horribly designed set of microservices, actually, but it doesn't matter, because it's still kind of valid, because there are lots of people that design their stuff horribly. And so, you know, just so long as you're saying this is what it looks like, like, hey, it still matters.
B: So, I'm sensitive to the fact that we're, like, 15 minutes over the time that we had. So there's maybe two things, rather... you already have the vision, like, for some of the things that we've been saying; some of those things we were able to put effort toward initially, and some of those are in place. So, yeah, hey, today it's sending back anonymous reports of, like...
B: ...a spec, such that, if people opt in and you have your test suite running, those results could be, would be, made available to you guys as well, alongside ones from Meshery, to be able to blend that. In my mind, that would help strengthen the validity of the results that are presented: that it isn't just one load generator, one test suite, that's doing this, but there's another taking... am I right? Like, especially if we aren't successful, and I think we will be.
B: You guys know that... who was it, you know, that was demoing here recently the provisioning of k3OS? I don't know if he did a MicroK8s one, but, anyway: the notion that, where we've been focusing on kind of Minikube and Docker Desktop and some of these, Lokomotive, Flatcar Linux, or other environments, to the extent that I can gather those, would be great things for Meshery to be able to make sure that it's compatible with and can facilitate.
C: My plan currently: Flatcar is the underlying Linux distribution, which is a friendly fork of Container Linux, and we are, of course, continuing that, even if Container Linux should be discontinued at some point, yeah. So, given a good reason or a good opportunity, there's a lot of interest at Kinvolk to, for instance, add Meshery, so it can basically be added, or be made default, for certain zones, so, yeah, absolutely.
C: Regarding that particular roadmap, I'd like to pull in Chris, because he knows a lot more; he's basically the visionary regarding the whole Lokomotive thing, and he can basically interface with a lot of you folks to iron out wrinkles and make everything happen. So, yeah, reach out to Chris if you're interested in Lokomotive and in a little presentation.
B: We're all signed up and approved to go and take, I think it's about twenty nodes, and, going forward, when we're ready, run some tests. And I figured I'd ask how much they facilitated... I guess, in your case, you didn't want them to facilitate the provisioning of Linux or of Kubernetes; you brought your own. And, you know, the tests are agnostic to that, and so what tooling did they give you?
B: ...I hope that, if Container Linux does go by the wayside, Flatcar Linux will be the thing. Oh yeah, all right, fair enough. Righto, great.