Description
Service Mesh Performance Community Meeting - Aug 19th, 2021
Join the community at https://layer5.io/community
Find Layer5 on:
GitHub: https://github.com/layer5io
Twitter: https://twitter.com/layer5
LinkedIn: https://www.linkedin.com/company/layer5
Docker Hub: https://hub.docker.com/u/layer5/
A
Welcome, everyone, to the SMP community meeting. Today is the 19th of August and we have a long list of topics to discuss. Otto will not be able to join us today, but hopefully he'll be able to join next week and discuss adaptive load controllers.
A
Moving on to the first topic of the day: Lee, would you like to talk about the recent IEEE article?
B
Yeah, sure. Actually — so, hey everybody, it's nice to see you all. As a matter of fact, just a quick congrats to Tom on an incubating project. So thank you. Yeah, that's no small feat!
B
Very good. So while I expect that I'll offer up my voice on a number of the bullet points here, this is one where maybe I can shut my yapper, and Sunku might be able to tell us a bit about the first topic.
D
I was just about to say that I just woke up and I'm like: okay, I'm talking now.
D
Okay, I'm trying to frame my mind. Yeah, regarding the article — thanks, Lee, for putting this together and working with the team, I guess the IEEE Bridge team.
D
To put this article together — I published this article, and yeah. The abstract and the article — I think the original thoughts were something else, but given the time and the deadline we finalized on going with the current set of research and data that we have. So for the outline, we primarily went with: why are performance measurements for a service mesh important?
D
You know: how do you consider, how do you look at measuring something? And then we dug into: what is measurement about? How do you measure something?
D
And what are some of the things from a hardware perspective that you need to keep in mind? And then, from a benchmarking perspective, what are the traffic generator items that you need to consider — which is what Otto has written about. So how do you configure a traffic generator? What are the things you need to keep in mind from a load generator perspective? And then the next part is about the specification.
D
It's great that it did touch upon what the SMP spec covers and why it's important. And then it's about automation, which is where Meshery comes in — it talks about how you can ensure this is repeatable and have a consistent set of results.
D
I guess that's the brief outline of the paper. I believe it is open for anyone else, if you want to comment, look into it, or add to it. But I guess the deadline is something we need to look into, to see how soon we need to send this out. I'm not sure if everyone else has had the chance to take a look, but I think overall, with the diagrams and the content as it is today, it's in a pretty good state. We could share it with everybody else.
B
Nice, good. Just a quick test for those who are in the meeting minutes: are you able to access this draft, this doc?
B
If not, we'll get that changed. Okay, good. And by the way, just on the topic of logistics, there is — well, now, sorry, I think I pasted the wrong clipboard, but there's another... Actually, if you go to that link — what is that one? Okay. So if you go to this link, and if it isn't available to everyone, let me double-check to make sure.
B
If it isn't available to you, then please do this, if you would: there is a shared folder for the Service Mesh Performance community, for this project — a shared folder in the Layer5 community drive. The link to that shared folder is in the Zoom chat, so feel free to request access there. Or maybe, if you'd like to get access to the Layer5 community drive...
B
...the community drive — the one that has Meshery, GetNighthawk, SMP, SMI Conformance, the GitHub Actions, the logos, the site content, the five project sites that we have — sort of all of that. There are a couple of things that you could do. Probably the best thing to do — and I apologize for a little bit of housekeeping here, but since the project is relatively new, we don't talk about that much.
B
We haven't, and it's really good to talk about this. Layer5 really focuses on community, and so there's a lot of process around trying to run the community very well. There are a little over 300 people who've contributed to the projects that are here.
B
There is a new community member form that just has a few questions. Probably the point of filling in the form, for some of you, is that it will grant access to the community drive overall, which has quite a bit in there about the way that things work and what's going on with some of the things that we'll end up talking about in this call — GetNighthawk, distributed performance analysis, a number of things. So I encourage you to fill in the form, and you should be granted access pretty promptly to all of these.
B
So, back on track with what Sunku was saying: this draft that's out there — please, please review! Point out all the spelling mistakes, point out things that are not interesting, or where you feel like the meat of what was presented maybe isn't juicy enough, isn't thick enough. Comments are welcome — please jump in.
B
The article itself — Sunku has ensured that it does a few things. It introduces Service Mesh Performance to the reader, and talks a bit about the notion that Meshery has implemented it, but it also talks about the need for such a spec.
B
The need for at least this spec in its current state. And Tom is actually here — we've been talking about the charter of the project itself, so we'll talk about the charter, hopefully, in a little bit. Okay, so this is good. Sunku, I don't know that I got a chance to tell you this, or the rest of the community, but as you wrapped this up on Monday...
B
...it was just shortly thereafter sent to the IEEE for review, and they said that a second article, sometime in the future, is sort of pre-solicited. So that other doc that has the other abstract — the abstract that's a little more...
B
...I think, you know, in this project — there's a forum to potentially publish those results again. Although I think, by the time we get there, you might have other ideas about forums for publication, depending upon — just with your familiarity with the IEEE — you might have suggestions on how... well, yeah.
D
No, absolutely, yeah. Definitely, I think it's great that they're open to a second article as a continuation. I guess this one introduces what performance measurement is — you know, how do you measure or characterize a service mesh — and then it kind of builds on to the adaptive part of the whole SMP. I mean, from the IEEE side: there is a performance measurement group for NFV — the basis for 5G and edge compute.
D
That's an area that we could share this with. And also, within the IEEE Computer Society, there are a couple of chapters — or, how do you call it, a couple of sections — that are working on the service mesh side of things. So that's another area we could reach out to and kind of build on this one — not just performance, but overall.
D
Even the measuring is a complex enough piece that it can go by itself. So I'll find the right link and definitely connect with them and with this group, so we can share more details.
B
Yeah, sure. So, well — for those of you who've been around, most of the names on the call are familiar. By the way, it's a community tradition to break the ice with folks who are new on the call. So, Srinivasa, have we gotten a chance to say hi before?
C
E
B
Nice to meet you — very good. Say a bit about yourself, if you would — you know, your favorite color...
E
Yeah, I'm from India, and I'm working as a technical lead. In total I have around 13 years of developer experience — you know, microservices, cloud, AWS, and Java-related technologies. That's about myself.
B
Oh, good deal. Okay. So, as Service Mesh Performance — as this project — was prepping to go into the CNCF, there was a discussion about all of what it's covering and its focus, its charter, and part of that discussion was with Tom. Tom is a maintainer of KEDA. If you're not familiar with KEDA — Tom, do you want to give people a quick introduction to KEDA?
C
Today we support around 35 systems on which you can scale, and HTTP is a very big story that we're working on now, which is a bit harder. The issue we have there is: you have ingress and the Gateway API, you have service-to-service, you have service meshes — but there is no unified way to get metrics around the traffic there.
C
So SMI has a nice specification for this, but it is purely focused on service meshes, and I'm looking for a way to open up that specification so that it applies to any traffic type. Basically, that would allow us, as an external system, to get the metrics without relying on something like Prometheus — because some customers don't use Prometheus but their cloud provider instead, for example. That's the main idea.
D
No, I was just about to ask — and it's good that you're looking into having a comprehensive way to get the metrics. So now, how is — I mean, I'm not familiar with KEDA; I could see the full form of it as event-driven architecture, or Kubernetes Event-Driven Autoscaling — ah, I just got it. So how is it related to a service mesh, or is it independent, and you're just looking for a way of unifying the world?
C
It's fully independent. You can deploy KEDA in your cluster, and then it is up to you to say: okay, I want to scale this application here.
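[As a concrete illustration of that "deploy KEDA, then declare what to scale" flow, a minimal ScaledObject might look like the sketch below. The Deployment name, Prometheus address, and query are hypothetical examples, not anything from this meeting; KEDA's actual scaler catalog and fields are documented at keda.sh.]

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: payment-api-scaler
spec:
  scaleTargetRef:
    name: payment-api            # hypothetical Deployment to scale
  minReplicaCount: 0             # KEDA supports scale to/from zero
  maxReplicaCount: 20
  triggers:
    - type: prometheus           # one of the many scalers KEDA ships
      metadata:
        serverAddress: http://prometheus.monitoring:9090
        query: sum(rate(http_requests_total{app="payment-api"}[1m]))
        threshold: "500"         # target requests/sec per replica
```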
D
Wow, that's great. So what kind of events do you consider as important events?
C
Well, in HTTP it's not really an event, other than: we have, let's say, 3000 requests per second, and with the current workload we cannot handle that, so we need to scale out, basically. Okay, so, yeah — we could implement it depending on the traffic type and all the systems, but that's a lot of work. And SMI does what we need, but it's just focused on service meshes, and we were hoping to get that in a broader scope.
D
Yeah, this kind of reminds me of the telemetry-aware scheduler. I know it's independent of the overall set of events that you can define, but generally, I think, that Kubernetes extension is based on the custom metrics API, and it would scale your cluster using the Horizontal Pod Autoscaler, based on some of the telemetry events, thresholds, or triggers that you define.
D
C
Yeah, and you can also build your own scaler outside of KEDA that also triggers the scaling, but we extend the HPA — it's not that we reinvent the wheel. So what you said is actually what we do: we serve the metrics to Kubernetes so that it can feed the HPA with the metrics and trigger the scaling. But we also do scale to zero and from zero.
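[The HPA mechanism that KEDA feeds metrics into can be sketched as below — a rough illustration of Kubernetes' proportional scaling rule, not KEDA's own code; the function and variable names are invented for this sketch.]

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric_value: float,
                     target_per_replica: float) -> int:
    # Kubernetes HPA rule: scale the replica count in proportion to how
    # far the observed per-replica metric is from its target, rounding up.
    ratio = current_metric_value / target_per_replica
    return math.ceil(current_replicas * ratio)

# Three replicas targeting 500 req/s each, but each seeing 1000 req/s:
print(desired_replicas(3, 1000, 500))  # -> 6
```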
D
So you have performance metrics for something like this, and it would be not just latency — percentile latencies — but you want to look across the cluster and define what's the source and the destination, and identify based on that, right?
D
Because, I mean, from a KEDA perspective, it is scaling — you know, changing a cluster based on your customized set of metrics. From a performance standpoint, are you looking at establishing how well KEDA does, or is it more that, once the scaling has been established, you're looking at parameters for what's going on afterwards?
C
D
Right, right. I mean, I guess the question is with respect to the establishing-performance side of it. I mean, if you're looking at SMI or SMP — these metrics, for example, some of the metrics SMI defines — how do you measure p99 or p90, these types of latencies, from a KEDA standpoint?
D
What do you want to measure, I guess, is the question.
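[For reference, the p90/p99 figures being discussed are just order statistics over observed latencies. A minimal nearest-rank sketch — illustrative only, not SMI's or KEDA's implementation:]

```python
import math

def percentile(samples, p):
    # Nearest-rank percentile: the smallest sample such that at least
    # p percent of the data is at or below it.
    s = sorted(samples)
    k = max(0, math.ceil(p / 100 * len(s)) - 1)
    return s[k]

latencies_ms = [12, 13, 13, 14, 14, 15, 15, 16, 90, 250]
print(percentile(latencies_ms, 50))  # median: 14
print(percentile(latencies_ms, 90))  # p90: 90
print(percentile(latencies_ms, 99))  # p99: 250 (dominated by the worst request)
```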
C
D
B
Yeah, no, this is good — it's an exploratory conversation. You know, I'm trying to figure that out myself, because I desire for the answer to be yes; I just don't... So, some aspects of what Tom had described in the past are — well, Tom...
B
Let me assert something, and then you tell me if this is a lot off, or if this is too myopic. Of the scalers that KEDA has today, there is not — or are you in search of, or is it on the roadmap — a generic HTTP scaler? Is that part of the... okay.
C
So it's not in our core, but it's an add-on, because KEDA doesn't intercept anything — it just checks, say, what's the queue depth, and based on that metric we scale the application. HTTP, however, is more tricky, because it's synchronous communication, so you need to intercept: if you scale down to zero, or from zero to one, you need to hold the request, spin the application up, and then forward it. That's why it's an add-on, where we have the interceptor.
C
This is purely for ingress, by the way, but the concept would be the same, where the interceptor keeps the requests, and then, based on a metric source, we decide for the application itself whether we need to scale it. So, in theory, since we have an interceptor, we could measure everything ourselves, but we want to rely on the technology that's already there, such as SMI.
B
In an ideal world, the HTTP scaler would be generically applicable — maybe that's the statement, like that.
B
C
D
Yeah, I guess the metrics described by SMI — we had this conversation in the SMI community — they do consider, especially, the case where you can determine one application as the source and one as the destination, and of course you have the service mesh underneath. So it's good that you're defining two applications and two endpoints, and then you're measuring, you know, your transactions per second or HTTP rate, and then your latencies between those two — that's the specification.
D
However, I guess, in my opinion, the place where it limits itself is that it doesn't consider anything underneath — neither the infrastructure from a hardware standpoint, nor the environment it runs in. Nor does it consider what other set of Kubernetes components may map back to the performance impact, right? Okay.
D
I would say yes, because, you know, if you've already decided on your environment — okay, I need so many gateways, or so many hops between these two endpoints — then all I need is my performance, sure. But then, if you're looking to optimize your latencies or your throughput, then all of these — the number of hops in between and the infrastructure underneath — play a very important role, right? And that's...
C
You can see them as separate as well. I mean, if I do a benchmark today and my technology does not change, I'm purely interested in the requests per second — no longer the infrastructure underneath, because we already did the benchmark, so we know what we can handle. Sure, so from...
D
Yeah, I mean, identifying the components in your deployment is, like you said, already done. But in terms of tuning — in terms of configuring these parameters, each of them, in order to have the best and most optimal kind of environment, optimum performance — and, in terms of the specification, identifying these and laying them down in order to fine-tune for your environment: I guess that's the important part. I mean, from a load generator standpoint...
D
It looks like: okay, what's my source, what's my destination, and here's my result. But from a specification and an actual test perspective, we need to consider these and make sure they're either optimized, or unnecessary hops can be removed, or traffic patterns can be identified — that way you don't necessarily have to go through certain hops, right? So I guess that's where the overall specification comes into play.
B
Both truths can be held at the same time. Yeah — Tom, your phrasing of "from an end user's perspective" — and I'll put some words in your mouth, but clarify this...
B
If, from an end user's perspective, the primary question in their mind — the goal that they're focused on — is guaranteeing a certain response time, or, I mean, if the focus is on making sure that infrastructure is scaled appropriately — not wasting money, but also not providing a poor experience to end users — then there are a number of concerns within there, just dealing with that one question of, like, hey, how...
B
How do we respond elastically — intelligently — such that we're maintaining a requirement from the end user that says, "I'd like to receive a response within this time frame, and I don't want it to cost me a million dollars," or something like that? Yep — I think that captures the essence of it. And then part of KEDA's challenge is like: okay, great, but also those HTTP requests, like you were saying, might need to be...
B
If you're going to respond intelligently, (a) you might need to grab those requests, hold on to them for a minute, scale some things, and then pass them along; and (b) those HTTP requests could be ingressing through a number of different types of infrastructure, and so the mechanics by which you analyze them and determine whether or not they're achieving the right latency, the right quartile — that differs. And that's the part...
B
...I think you're trying to solve. If you were to expand the question — or look at it from one of the perspectives of what Sunku was saying — it's around, well: hey, if you peel it open, if you go down a few more layers, then you ask, are we even using the right infrastructure? Is there an optimal configuration that could be used that helps eliminate the need for scaling as much, or for getting...
You
know
more
out
of
the
infra,
that's
being
used
or
hey
for
the
that
style,
workload
or
yeah
that
that
signature
of
requests
could
a
different
type
of
workload?
Sorry,
not
working
different
type
of
infrastructure
be
used,
could
the
service
mesh
be
tuned?
Could
the
infrastructure
be
tuned,
and
so,
if
you're
taking
the
question
that
far,
then
then
maybe
those
secondary
and
tertiary
layers
matter,
I'm
not
saying
that
they
need
to
matter
to
kata,
I'm
just
saying
like
it
depends,
and
and
so
and
hence
as
smp
was
discussing
with
smi.
B
And hence part of the charter discussions — like when Tom was jumping in at the right moment, asking this question about, you know: is part of the value that SMP could potentially provide... well, is it, you know...
B
...a system-agnostic way of characterizing HTTP-centric traffic and its performance generically, irrespective of where it's coming from or what infrastructure it's transiting? Which, I think, if it's done generically...
B
...it could also be the case that it's physical hardware. And therein, part of thinking about that makes the hair on the back of my neck stand up, because it is a larger — a decent-sized — scope, like if you went off to solve just that problem, but did it holistically.
B
Well, hey, if you've captured the right amount of infrastructure-centric data and configuration information, then you could go over and do tuning and analysis and say: hey, the traffic itself is being routed over four more hops than it should be, or you shouldn't be using that chip architecture, or whatever. And actually capturing all of that — you take a deep breath, because there's a lot behind that.
D
Yeah, I agree, and that's where, I think, trying to consolidate all of those into a simple spec or a simple deployment — that's such a tricky part. I think Meshery has, like you said, patterns — ways of how you deploy your environment for performance measurement — so you could really define a lot of these components. I mean, of course, from a Meshery standpoint it's independent of the hardware, but at least from a mesh deployment standpoint, for example, you could really figure out...
D
...you know, what are the components I'm deploying — because that would be a controlled test environment — and then, what are the resources, like you're showing here: what is the amount of resources I'm providing for each of these components?
D
And that way, you know, I'm making sure that one of the components doesn't start using a ton of RAM, and that performance latencies aren't all over the place. And then, once you have this controlled deployment, you can start looking at, okay, east-west: what's my throughput and latency? The importance of these is that the latencies, at least from what I've noticed, are very, very sensitive to what's going on across the cluster.
D
Your p99 latency can vary 10, 20, 30 percent sometimes, and that doesn't make any sense — it's not really representative of anything, although you just run the test and you get some numbers. And that is where this type of controlled environment is necessary, so that you get repeatable results.
D
B
Tom, for you — and I think, Tom, in our conversations we've both been thinking aloud in brainstorming mode — here's a question for you: can you characterize your ideal — if SMP could do the ideal thing that you might like for it to do?
C
I think SMI now gives it in percentiles, or — yeah, we just need one number, basically. Probably people will want to have the 95th percentile or the 99th or whatever; that's fine for us as well. But the most important point for us is to decouple it from service meshes, and I'm not sure if — yeah, that might be a problem for this group, because SMP is Service Mesh Performance.
B
Right, yeah. I certainly think it's within scope for SMP — like right now, the proto, this one in particular, there are like three of them, but it needs to go deeper — and there's an open call for adding a number of other things, one of those being not just the service mesh's ingress, but your Kubernetes ingress, or your Kubernetes ingresses, or accounting for the API gateway that's being defined.
B
Basically, it's still the same result, I think, from KEDA's perspective — it would still be looking for a single number, a single number to key off of. Whether or not SMP — when we open up that conversation and talk about all the interfaces that SMP should be concerned with — maybe it should be concerned with VM interfaces and hardware-based interfaces.
B
Yeah, it's still a bit up in the air with SMI. Part of what SMI and SMP are both still working toward is figuring out which of them has more of an appetite, in some respects — which of them has more of an appetite to move more briskly to really address concerns beyond just east-west traffic.
B
About half of the maintainers on SMI who chimed in had said, "Oh, maybe SMI's traffic metrics should go to SMP"; about half the folks between the two said maybe the two projects should come together. Those are still open discussions. Part of getting SMP into the sandbox was to facilitate figuring those things out. So, to re-characterize this a little bit — you said this a moment ago, so I apologize — of what SMI has for traffic metrics...
D
I guess, yeah, the way to look at this is like peeling the onion, right? So, like Tom said, first, from an application writer's perspective: what are the basic metrics — requests per second, latencies? Great, so you can have that independent of your infrastructure — you have everything deployed, I just need some numbers, great, you can have it. And then you dig deeper and understand hops, understand your configuration. Then you can dig into the resources for each of these — hardware or software resources — and then dig deeper.
D
You know: how is my infrastructure impacting this — whether it's virtual machines or bare-metal hardware, whatnot — right? So you can look at it from a multiple-layers perspective, depending on how deep you want to go. But I guess, from the SMP standpoint, at least, the specification is looking to address almost all three of these levels, and based on whoever wants what, they can pick and choose.
B
And it's kind of what you'd said: hey, the spec can go on ad nauseam, having placeholders for capturing the finest level of detail — the driver version number on the NIC, or, you know, some super-fine detail — all the way from that, all the way up to distilling it to just, like: here's your sole indicator.
B
Here's the one number. Yeah — whether or not a given implementer of the spec... how deeply they go, and how much configuration data and metric data they collect about all of the systems that affect that high-level number...
B
...those details don't have to be there. You're basically starting top-down: you have to have this number, or this set of numbers, and then your mileage varies depending upon how far down you want to go. That type of an approach makes sense to me.
B
It's important — it's what Meshery does today, to some measure. When you run a performance test, it will, first and foremost, focus on getting you these statistics, along with a couple of other things — but not in all cases does it get all of the other metrics that you see described.
B
It's still working toward that. So actually, today's implementation in that regard marries up with what you just said, Sunku, about layers of detail. Tom, what's missing again from the SMI traffic metrics — was it just the myopic focus on east...
C
B
C
B
Okay, so yeah, I think I'm getting a much better picture, which is to say, like, yes. I mean, to give a real example — to use Meshery as an example, as it implements SMP: you can take Meshery and deploy it. You know, the one that I'm running on my desktop...
B
...is running three different types of service meshes, and you can run workloads on them, and you can do performance analysis of those workloads in the context of the service mesh — great, and it's all mesh-specific. Or you can tear down your Kubernetes cluster, have Meshery sitting there, and point it at, you know, google.com, or point it at your payment gateway that's running on who knows what — it is still going to produce...
B
...a set of performance results that minimally have this, plus some additional data about the endpoint that you were testing — additional data about how long you were running the test, at what intensity, with how many concurrent threads, what type of load you were generating. Yeah, it has all that today, irrespective of whether you're testing something on the mesh or off, because that's actually one of the core use cases — part of the genesis of the project.
B
People are saying: well, we're running a workload today, and we know what that overhead looks like — we can characterize its performance. Now, as we go to put that onto the mesh, and it does stuff for us — great, we're looking for that — but what's it going to cost us? One of the first questions people ask is: how do you compare? What's the difference in performance between being on the mesh and off the mesh?
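[That on-mesh versus off-mesh question boils down to running the same load test twice and comparing the latency distributions. A toy sketch of the arithmetic — the sample numbers and function names are invented for illustration:]

```python
import math

def percentile(samples, p):
    # Nearest-rank percentile over a list of latency samples.
    s = sorted(samples)
    return s[max(0, math.ceil(p / 100 * len(s)) - 1)]

def mesh_overhead_pct(off_mesh_ms, on_mesh_ms, p=99):
    # Relative increase in the p-th percentile latency once the
    # same workload is moved onto the mesh.
    base = percentile(off_mesh_ms, p)
    meshed = percentile(on_mesh_ms, p)
    return (meshed - base) / base * 100

off_mesh = [10, 11, 12, 13, 14, 15, 16, 17, 18, 20]
on_mesh  = [12, 13, 14, 15, 16, 17, 18, 19, 20, 25]
print(mesh_overhead_pct(off_mesh, on_mesh))  # -> 25.0 (p99 went 20 ms -> 25 ms)
```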
B
So Meshery is not reliant on a service mesh; it's not reliant on Kubernetes being available, either. If you don't have Kubernetes available, or you don't deploy a service mesh, there are other aspects of what Meshery does where, you know, those dashboards won't be filled in, or that functionality doesn't work. But what I just said about characterizing the performance of a given endpoint or a set of endpoints...
B
C
B
And yeah, I mean, there's a ton of value there. I had proposed it while I was there. Oh, it's — I don't know...
B
There are certain forces in the industry that would potentially not like that, because part of the value offering that they have is doing things that are infrastructure-specific. But in some respects, the four golden signals, if you will, are sort of related to what we're talking about here — which is, irrespective of all of those underpinnings, to your earlier question: if your focus is on the end result, then none of those underpinnings actually matter at all. Even if you had a single-CPU VM — what is it on AWS, a tiny, or a teeny, or a micro — and you're sending hundreds of millions of requests per second to it: yeah, it's totally the wrong infrastructure for this.
B
It couldn't even possibly handle any of that. But in some respects, that doesn't matter: what matters is that if the response time on that is 10 hours to get a request back, then ultimately that end signal mattered. So, yeah, for observability providers, and for consumers — those operating infrastructure — there's a ton of value in just being able to say... In part, what we were looking to achieve — maybe, I don't know if I've described this before; it's probably important.
B
One of the things we're looking to achieve with the project itself — and again, it has the word "mesh" in it — is to provide people with a single performance number, a universal performance index, to gauge... Basically, in some respects, the initial thought is: for a given workload running on a service mesh, how well is it doing?
B
Can you succinctly articulate — and refine it down to — a single number: how are you performing? The concept here is that it is intending to take into account some of those lower-level considerations. Because if you were running a t2.micro and it was handling 100 billion requests per second, versus running a massive cluster of very expensive machines that is handling five requests a second — it's like...
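[To make the t2.micro-versus-expensive-cluster intuition concrete, here is a deliberately toy scoring function. SMP has not defined this algorithm — the speakers say as much just below — so the formula, names, and numbers here are purely illustrative:]

```python
def performance_index(requests_per_sec: float,
                      p99_latency_ms: float,
                      cost_per_hour_usd: float) -> float:
    # Toy "efficiency" score: throughput delivered per unit of latency
    # and per dollar spent. Higher is better. Not SMP's actual index.
    return requests_per_sec / (p99_latency_ms * cost_per_hour_usd)

# A tiny, cheap VM doing real work vs. a costly cluster doing almost none:
tiny_vm = performance_index(100, 20, 0.25)    # 100 / (20 * 0.25) = 20.0
big_cluster = performance_index(5, 20, 50.0)  # 5 / (20 * 50)     = 0.005
print(tiny_vm > big_cluster)  # -> True: the cheap VM is "performing" better
```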
B
So, yeah, I mean — we've yet to start in earnest on defining the algorithm here, or how efficiency is calculated. It's possible that this might account for a financial aspect of that infrastructure.
B
Yeah, I do get it — I think I can re-articulate the need that you have, the thing that you're looking for; I'm just trying to align it. So one thing is: even though this project is called Service Mesh Performance, it doesn't mean that there can't be a section of the specification that's explicitly called something generic — "the golden traffic metric" or whatever.
B
That would be the thing that we try to rally others around and try to get blessed as the canonical unit of measure for HTTP requests, and in this case, as we go on to focus on gRPC and potentially on NATS, for those too.
B
So as we look to characterize performance in the context of gRPC traffic, the same approach is to be taken, where performance can be analyzed outside of a mesh or in the context of a mesh. If it's done in the context of a mesh, there might be a higher fidelity.
B
There'll be a deeper set of details to dig into behind that, because we'll also examine what the proxy is doing and tell you about its queues. But that doesn't mean that if the mesh isn't present you wouldn't still get that higher-level number; you just couldn't ask it more questions, or as many questions, rather.
B
Right, and it becomes more valuable too. Part of the value proposition of what Tom is walking us through is not pigeonholing this particular effort on just service mesh, constraining its potential value to only those interested in that infrastructure, but saying there's something more here.
B
Sunku, I suppose those are things to reflect on, things to simmer on.
D
Yeah, absolutely. I was just about to ask Thomas something that could probably help define what this would look like: what exactly do you need from an application developer standpoint? I guess the way to look at this is once we have something like this defined, and probably implemented.
D
We could make the specification part generic, because this is applicable to any set of microservices, Kubernetes or not. This is performance between two microservices; the way we're automating and configuring these happens to be within a Kubernetes environment, within a service mesh environment, but ideally it's applicable to any microservice benchmarking scenario. So maybe we could look at how to make this generic.
D
Either the tools around it, or the specification around it, or the deployment around it, generic enough that anyone can come in and say: okay, let me pick that up, follow this set of recommendations, and configure it in my environment. Of course we can dig deep into the service mesh part of it, which can be separate, but if you have something generic enough in terms of performance, anybody can pick it up and use it in their test suite.
D
Yeah, that makes sense in terms of defining what is generic enough to leverage. I understand the name Service Mesh Performance can throw off someone who doesn't care about service mesh; they care about microservice performance. It's similar to what you saw with SMI, where people figured, oh hey, this would be helpful from a performance standpoint.
D
So if we were to create something generic enough, with a generic name, perhaps you could provide some of the requirements: what is it that you're looking for, what would be good enough to call it generic, and then we can look at how we can build that.
C
Well, I can already give you an answer. I don't have much in the way of criteria other than just having the traffic metric indication. I saw the protocol earlier, where it had 99th, 90th, and 50th percentiles, an average, and success and failure rates. I think for us that's enough. Ideally there would be a Go client, something similar to how SMI leverages this. I think that's a good start, and then it can grow if there's a need.
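The criteria listed above — 50th, 90th, and 99th percentile latency, an average, and success/failure rates — are concrete enough to sketch. This is not the SMP Go client being asked for, just an illustrative stand-alone computation; the type and field names are assumptions:

```go
package main

import (
	"fmt"
	"sort"
)

// Result is one observed request: how long it took and whether it succeeded.
type Result struct {
	LatencyMs float64
	Success   bool
}

// TrafficMetrics mirrors the fields mentioned above: p50/p90/p99
// latency, the average, and success/failure rates.
type TrafficMetrics struct {
	P50, P90, P99 float64
	AverageMs     float64
	SuccessRate   float64
	FailureRate   float64
}

// percentile uses nearest-rank selection on an already-sorted slice.
func percentile(sorted []float64, p float64) float64 {
	if len(sorted) == 0 {
		return 0
	}
	idx := int(p/100*float64(len(sorted))+0.5) - 1
	if idx < 0 {
		idx = 0
	}
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

// Summarize reduces a batch of results to the golden traffic metrics.
func Summarize(results []Result) TrafficMetrics {
	if len(results) == 0 {
		return TrafficMetrics{}
	}
	latencies := make([]float64, 0, len(results))
	var sum float64
	var ok int
	for _, r := range results {
		latencies = append(latencies, r.LatencyMs)
		sum += r.LatencyMs
		if r.Success {
			ok++
		}
	}
	sort.Float64s(latencies)
	n := float64(len(results))
	return TrafficMetrics{
		P50:         percentile(latencies, 50),
		P90:         percentile(latencies, 90),
		P99:         percentile(latencies, 99),
		AverageMs:   sum / n,
		SuccessRate: float64(ok) / n,
		FailureRate: 1 - float64(ok)/n,
	}
}

func main() {
	// 100 requests with latencies 1..100 ms, the last five failing.
	var results []Result
	for i := 1; i <= 100; i++ {
		results = append(results, Result{LatencyMs: float64(i), Success: i <= 95})
	}
	m := Summarize(results)
	fmt.Printf("p50=%v p90=%v p99=%v avg=%v ok=%v\n",
		m.P50, m.P90, m.P99, m.AverageMs, m.SuccessRate)
}
```

A spec section defined around a struct this small — however it's named — would carry everything the request above asks for, independent of any mesh.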
D
Yeah, perfect. Something we can look into.
B
This is good. I think the meat of what Tom is after is well aligned and actually natural. From my perspective there's no tectonic shift here; the ask doesn't break a design principle of the project or anything. In all respects it doesn't, and in many respects it aligns with what's already happening.
B
If there's a certain level of detail you're interested in, great; you need to be running that infrastructure to get those details, and if you're not interested in it, then fine. I think from Thomas's perspective, as long as that principle is upheld — that a service mesh isn't required and that these statistics are universally applicable, entirely irrespective of whether you're even running anything virtualized — it works.
B
Now, whether or not the Go client would facilitate getting physical network metrics — well, that's not what Tom's after anyway, so he doesn't really care, though technically the metrics apply to physical stuff too. The one stickler, and I totally get it, is "service mesh" in the title, whether here, or in the title of the spec itself, or in the way it's couched and cast to everyone.
C
I think that will influence the adoption tremendously, but I get why that's not an easy thing. Maybe there could be a generic one with a service mesh edition, let's say.
C
It's one thing to ask for things, but helping is another way to make it happen. So if I can help more, just let me know.
B
Thank you for that. Do we have any super urgent items, as we hit the top of the hour?
A
I don't think there are any other urgent items to discuss, so we'll see everyone in two weeks, at the next meeting. Thank you all for joining.