From YouTube: Layer5 Community Meeting (Sept 18th, 2020)
Description
Distributed performance load generation. Welcome @Smille02, Samson, Clement!
Learn more at https://layer5.io
A: Nice, well, hey, we're six after, it's about time to get going. Hey, welcome! My name's Lee. You'll get to hear from me a fair bit today, but hopefully also from others in the community. So what is it, Friday, September 18th? We're about five after the hour. This is the Layer5 community meeting. We record these calls and post them publicly; we're not always timely with our posting of the recorded calls, but we try. So we're using Zoom, actually maybe for the first time on this community call. Boy, we've been using Google Meet forever. In the face of the pandemic, Google Meet had allowed the lesser G Suite plans to record Google Meet meetings, and after a few months of that, I think they're removing that capability, and so we're going to move over to Zoom, so we can continue to record calls and post them. Streety, or anyone else, if you might put one foot into Google Meet just to make sure nobody's over there, that might be nice. The other reason that it's maybe good to talk about Zoom just a little bit is, well, actually, I'd like to ask all of you for an opinion real quickly, if I may.
B: The thing is, they all seem to be different, and I don't even know if you can hear me. I think you can. Yeah, Zoom seems to be really kicking butt, but if you go from Skype to Zoom to WebEx to GoToMeeting, you have to mess around with your sound. And I've got two cameras, and this always picks the integrated one, the laptop shot, you know.
A: Yeah, it used to be the software was bad enough if two things tried to get at the same I/O interface, you know, the same webcam or the same microphone at the same time. But yeah, well, here, let me ask this question, let me propose this. Almost all the things that are done here are done by all of you. Recently, one of you, Tenush Agarwal, had been kind enough to make this little animated intro. For my part, I don't mind giving up ten seconds of my life every time I see it; I really like it, it's nice. I feel like it officiates and sort of uplifts the discussions that we have in the various meetings. If we didn't have it, we could just go direct to something like YouTube Live, like right now, and there would be less administrative overhead: just click the button, and we don't have to go back to edit the video. And so my question to all of you, just out of curiosity, is: would anyone express an opinion on how nice you think it is, or isn't, and how worth it it is, or isn't, to have this?
B: [inaudible]

A: Nice, okay, fair enough, yeah, good! Thank you for that, Stephen, and I'm totally with you. Just a side note: I was cleaning up one of the computers at the house last week, and my lord, there was GoToMeeting, there was WebEx, there were all the ones you prattled off. Very good. Let's see if we can get into some good, healthy topics. One of the topics, actually, let me stop sharing here.
A: Seeing Stephen reminds me of this: we have a tradition on our calls. If it's the first time for you on a call, it's an excellent time for you to get to know the community, and for the community to get to know you. Stephen, I'm eyeballing you right now; I'm also eyeballing Clement, who's on as well, as well as Harshida. Everybody's got their cameras up, yeah. Actually, Stephen, if you don't mind, just a quick intro? Sure, sure.
B: Twenty-nine years into a career. I'm an electrical engineer, but I went straight into IT back when there were 300-baud modems, and I don't think Al Gore ever thought of the internet; it was DARPA. But yeah, I worked for Martin Marietta, and then I moved to New York and worked for a lot of different banks, about half the time consulting. It seems like the pendulum swings, you know: everybody wants consultants, and then everybody wants full-time. But now, I guess my title is senior cloud solutions...
A: Oh, very good, so sort of a plus-one to the earlier time zones. We've been trying to find the sweet spot where all of our time zones overlap, and there isn't one. But Steven, actually, I'll jot this down, maybe as a separate topic in today's call: the notion of defense in depth, and sort of the layers of security that you're talking about. I'll bite my tongue, but I think there are some things to chat through in there. Very nice.
B: I know about layer two, because, you know, that links up devices and routers and things like that; layer three, TCP/IP; and then layer five is more into the packets and what they're actually doing, right? The firewall is probably the strongest line there.
A: That's, oh, nice! Fair enough, very good, very good! We had a couple of other folks on the call, maybe for the first time. Samson, if you're there, do you mind saying hi and doing a quick introduction?
D: Okay, hi, Samson here, and I'm happy to be in this community, yeah, although I'm kind of a newbie to all this DevOps kind of stuff, mesh networks. But I've been looking at the repository, and I also watched the video on YouTube about the introduction of Meshery, and it was really cool, what it wants to bring into the DevOps world. Yeah.
A: Nice, very good, very good. And Samson, you're joining from where? Where are you physically based?
E: Go ahead. Hi everyone, I'm Clement. I've been a full-stack developer for about a year. Like Samson, I don't know a lot about service meshes, but I'm kind of interested in understanding the architecture and getting into open source a bit more. I'm also motivated because at work we're kind of moving from a monolith, trying to break things out into services, microservices, so that sounds fairly related to this as well. So, yeah.
A: Nice, oh, very good. So, Clement, now that I can enunciate it: it's Clement, right? You don't leave off the t; the t is silent? Hello, maybe my microphone's bad! Oh, okay, yeah, the t is there? The t is there, all right, very cool. Okay, I'm glad we clarified; I saved the whole...
D: I need help; my friend wants to join, but I have not yet seen the link for them to join this meeting. I got the link from my calendar, but I would love to have the link, yeah, in the comment section.
A: Hey, that's actually, you know what's nice about this? That's some recent work that was done, Samson: we just set up a system in which the links are hopefully a bit more memorable. Let's give you those. But yeah, Samson, thank you for inviting others, it's great. And then, Clement, where are you dialing in from, the Pacific coast? Oh, nice, okay, very good!
F: Hi, hello, hello! Yes, so I am mostly into front-end development, and I'm still learning about service meshes, and I really, really enjoyed coming here. You were very welcoming. Thank you so much, yeah.
A: Awesome, awesome, very good. Okay, nice to have you, Hashina. Oh, I know there was another... oh well, has anyone else not introduced themselves? Pratik, have you? I don't know if you've had a chance on this call in the past.
G: No, I haven't introduced myself over here, but I have introduced myself in a newcomers call, so yeah, hi everyone. Thank you all for having me here. I'm from India. I am mostly into back-end development and, like, anything in technology.
A: Awesome, okay, very good. Well, Pratik, at this point you're old news, so I guess we'll move on quickly. We've got a couple of topics lined up today, and the first one is, well, this here, and actually maybe related: the notion that there's a blog, a draft blog post, out there.
A: This one comes from Nikhil. Nikhil Lada has been in the community for a year or more; he's been so instrumental in the community that Red Hat has chased him down and is paying him to do things now, which is great. I'm not trying to embarrass him, but I don't think he's on the call at the moment. For a lot of contributors like Nikhil, we encourage people to post their experiences, or things that they've learned here. Sudeep Batra is a great example: a cloud architect at Ericsson who had come to understand Istio and service meshes through our workshop and then through the community here, and as he got deeper and deeper, he wrote up a blog post and posted it here. So that's fantastic. And so two thoughts here. One is: consider that this is an open venue for you, to help you kind of uplift work that you do. The second thing is: this is an upcoming blog post from Nikhil, and we should probably highlight that real quickly. Some of you are familiar with the notion that there's a collection of individuals who regularly dedicate their time to onboarding newcomers, making sure that people are able to align areas of interest or areas of passion with any one of the projects, or basically to be able to find a foothold on a project. MeshMates are the people who have stepped up and perform that role. Nikhil is one of those, and he's writing up his experience about this. But every time a blog post is made, there needs to be a graphic, some sort of image, that goes with it. His post is missing one at the moment, so I'm highlighting this as an opportunity for anyone in the community who is UX or UI or, you know, graphically inclined. Clearly, you don't need to have much talent, if you take a look at some of the examples of the images that we've used in the past. But there's something for anyone who's interested, so the link is there.
A: Very good. Next topic: Kush, is he here with us? He's not? No, okay, all right, we'll ping him and see if he can join a little bit later. Next up is Mark. You are working on a video documentation system?
A: Nice, oh, very nice to have you. Yeah, you know, for the most part we generally like Samson, so we won't hold it against you that you're friends with him. Thank you, yeah. No, awesome, thanks for coming, this is great.
A: Just in time for, well, an introduction to a new video documentation system that Smark and some others are working on. He's going to give an intro to it, and it's an open consideration whether or not some of the Meshery documentation, or some of the other project documentation, might benefit from using this framework or not. So this will be our first time taking a look at it. So thank you, Smark; please take us through. Okay, no, no, I will start.
I: Okay. This system aims to link suitable videos to the official documentation, and the videos will be reviewed by many experienced people, so beginners can use it to find a video explanation for a document and understand the concepts easily. If something is not so good for the page, we can try to remove it. Oh, sorry, we can try to make it better.
A: Yeah, yeah, very good. I do apologize: for my part, we've got virtual learning going on, and my middle boy just cut his foot on a screwdriver somehow. So I apologize for my distraction, but does anyone have questions and feedback?

G: [inaudible]
A: Okay, so, Smark, maybe to re-articulate in part what Pratik is asking about: if I were to quickly characterize vdoc.online, is it appropriate to characterize it as a video-specific review system, so a system for someone who has made a recording? Yes, yes. And in terms of the display of the video, like the embedding of the video into a site or what have you, does vdoc get into that aspect?
I: Yeah, okay. Now, for this project, we try to get it completed for some open source projects' documentation teams first. We will try to invite more people to join, yeah.
A: Got it, got it. Did that answer your question? What other feedback and questions do you guys have for Smark? I guess I'd like to just look at it once before... it's great.
I: If a documentation site wants to use this system, they just need to put this JS and the tag into their page, or into whatever framework they post with.
A: Interesting, nice. Anybody have any other feedback?
Hi
smark-
I
might
have
missed
this,
but
how
does
this
relate
to
layer?
Five.
I
F
F
C
I: No, no, maybe we just put this on the CDN, okay? All the logic will be on GitHub.
I: Sorry, I cannot clearly understand what you want to see.
I: Now, the logic we use is: the key is the page URL, and the value will be the video. Maybe we can let more people try to add the video link, the video title, or some description here. Now we can see the label "needs review"; the reviewer can review the link and decide whether the video is suitable for the document page. Maybe four or five people review it.
A: Okay, nice, yeah, okay, very good. Well, it's probably time to move on, so we can hopefully get to Kush's demonstration as well. So, Mark, thanks for staying up to come present the project. I think I have a sense of the value that you guys are trying to provide around content review, and kind of a system and framework for that, so I appreciate it. So, anyway.
A: Folks, if you have more feedback for Smark, he's in the Slack channel; please hit him up.
A: And he's not on the call, although he might be joining. Okay, well, he's not on the call. Let's take a look at what else to chat about; there's a lot of stuff going on in service mesh land. As a matter of fact, Kush was hopefully going to demonstrate this; I'll give a spoiler alert. So that's the wrong one.
A: Sorry. So, as we transition topics, thank you, Smark, for the demo. Just to be clear, Clement, great question: Smark is a member of the community and is working on a project that could potentially be useful to some of Layer5's documentation, and so we figured it's good for him to present that on the community call. In the last 20 minutes or so that we have left, we've got a number of things to chat through, so: Meshery.
Kush Trivedi is a Google Summer of Code intern who has been working on the project for some time. Part of his project was to add support for a third load generator. There are some reasons why people want to have choice around their load generators. Load generators themselves, well, very briefly, there are a couple of different classes of them: closed-loop load generators and open-loop load generators.
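As a rough illustration of that distinction (a minimal sketch, not how Fortio or Nighthawk are actually implemented; the `request` stand-in and all numbers are hypothetical): a closed-loop generator's offered load shrinks as the target slows down, while an open-loop generator keeps issuing requests on its schedule regardless of responses.

```python
import threading
import time

def request(latency):
    """Stand-in for an HTTP call to the system under test."""
    time.sleep(latency)

def closed_loop(workers, duration, latency):
    """Closed loop: each worker sends its next request only after the
    previous one completes, so offered load adapts to target latency."""
    count = [0]
    lock = threading.Lock()
    deadline = time.time() + duration

    def worker():
        while time.time() < deadline:
            request(latency)
            with lock:
                count[0] += 1

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return count[0]

def open_loop(rate, duration, latency):
    """Open loop: requests are issued on a fixed schedule whether or not
    earlier ones have completed, like independently arriving users."""
    threads = []
    issued = 0
    deadline = time.time() + duration
    while time.time() < deadline:
        t = threading.Thread(target=request, args=(latency,))
        t.start()
        threads.append(t)
        issued += 1
        time.sleep(1.0 / rate)
    for t in threads:
        t.join()
    return issued

if __name__ == "__main__":
    # Four closed-loop workers against a 50 ms target top out near
    # 4 * (1 / 0.05) = 80 requests per second, however hard you push;
    # an open-loop generator at 200 req/s keeps issuing near that rate.
    print("closed:", closed_loop(4, 1.0, 0.05))
    print("open:  ", open_loop(200, 1.0, 0.05))
```

The practical consequence is that closed-loop tools can mask a slow target (they simply send less), which is one reason having a choice of generator matters.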
A: Part of the reason that Meshery is bringing in support for Nighthawk is maybe a couple of things. One: Fortio is the load generator that was sort of born of the Istio project. It's its own open source project, Fortio, but Istio has been using it for quite some time and is still using it. The performance and scalability team of Istio is looking at leveraging Nighthawk, which is also a recently written load generator, over the last few years. Nighthawk is born of the Envoy project.
A: Nighthawk and wrk2 are C++ applications; Fortio is a Go application, which also made it easier for Meshery to pick up Fortio originally and use Fortio as a library, since they're both Go applications. That's sort of the third or fourth reason, or so. Anyway, Istio is looking to move over to Nighthawk, potentially.
A: Meshery will ask you a few other questions, as to whether or not you'd like to deploy multiple instances of a load generator, maybe on different nodes in your cluster, or maybe in different places outside your cluster. Just like in the real world, where you don't necessarily have a single source of requests. Sometimes, a lot of times, those requests, sort of conventionally in a Kubernetes world, or a series of microservices...
A: You might have requests coming in through a gateway, and those requests can get funneled in through a single point and begin to flow through your system. In that sense, having a single-sourced load generator isn't such a horrible design, because you do oftentimes have load being funneled through one gateway, or through a couple of sets of gateways. And so then you might ask: well, why is it important to support distributed load generation, or distributed load generators?
A: Well, there are a couple of things to think about, and I'd be curious for thoughts that you might have on the subject, now or as you contemplate it later. In our new world of distributed systems, microservices running everywhere, we live in a distributed world, but our performance testing, our performance management, isn't necessarily always done in a distributed way. And in the real world...
A: If you've got a microservice, or just a service in general, that has been written, you've deployed it, and it's doing its job really well, maybe it becomes fairly popular. It turns out some of those requests are coming from other internal services; over time, more services were written, and it's being hit from multiple locations.
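One concrete reason distributed generation, and centrally merging the results, matters: per-generator summary statistics don't compose. You can't average each generator's p99 to get the fleet-wide p99; you have to merge the underlying samples (or histograms). A small illustration with made-up latencies:

```python
import random

random.seed(0)

def p99(samples):
    """99th-percentile latency of a list of samples (nearest-rank)."""
    s = sorted(samples)
    return s[int(0.99 * (len(s) - 1))]

# Two hypothetical load generators hitting the same service:
# generator A only sees the fast path; generator B's traffic
# trips a slow dependency 10% of the time.
gen_a = [random.gauss(10, 1) for _ in range(10_000)]
gen_b = [random.gauss(10, 1) for _ in range(9_000)] + \
        [random.gauss(200, 20) for _ in range(1_000)]

naive = (p99(gen_a) + p99(gen_b)) / 2   # averaging per-generator p99s
merged = p99(gen_a + gen_b)             # p99 over all samples together

print(f"average of p99s: {naive:.0f} ms")
print(f"true fleet p99:  {merged:.0f} ms")
```

Averaging the two p99s substantially understates the tail that real users experience, which is why tools in this space ship raw latency histograms back to a coordinator rather than pre-digested percentiles.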
B: Yeah, we had that problem, a big problem, on a project when I was at McGraw-Hill Education. We were doing the government testing, and we had Unix machines, Sun Solaris, and Oracle RAC on ASM, and Coherence; basically that's what it was. I think they were using WinRunner or LoadRunner or something like that, and it would be like zero to a hundred. I mean, that's fine for stressing, for peak tests, but that's not really normal activity. I mean, you know, it could be, but it's probably not.
A: Thank you for that. I agree. I'm trying to bring up some notes to kind of reinforce some of this thinking, which is to say... well, I guess, let me introduce another project here that is built in and around some of the things that Stephen is saying.
A: This other project is another one of this community's projects. If you're familiar with SMI, Service Mesh Interface, that being a specification: SMP is also a specification. It's a bit younger, but in about seven weeks this will probably be in the CNCF; we sort of created it to go in there in the first place. This spec is SMP, Service Mesh Performance.
A: The reason that I brought it up was really to highlight what Steven was saying: hey, it turns out there are different types of testing that you might want to be doing. Some of the differences between these might be a little nuanced, but it doesn't take much thought to recognize that there are different types of load that your system might undergo, and so you should probably be, you know, experimenting with, testing, looking at your system under different loads. There are any number of load generators out there.
A: For my part, I don't know that they're the most easy-to-use, kind of repeatable tooling. Oftentimes, when we see someone publishing results of a benchmark, a performance benchmark or something, there's a lot of setup and scripts and other things that go into what they're doing. And really, just to focus on the service mesh landscape...
A: "This is what this is costing us," or, "I can see this and control it from one place." Meshery exists because that is such a common question for people: understanding the fact that if you're getting logs, or you're getting load balancing, whatever function, if you're getting it from somewhere, it's costing you something somewhere. Does it cost you more or less when you have a dedicated layer, with proxies in this case, that are written specifically to perform those functions?
A: Are they doing that really well? Or is the code that the application developer has written, for doing retries or for doing load balancing internal to the app, more performant and more reliable? I would argue for the former, the thing that's custom-built. Okay, well, great, so we're using this purpose-built new layer, a mesh. Part of what Meshery is trying to accomplish is to help people answer these really, what I think are, really...
A: Really, I don't mean to offend anyone, but kind of frustratingly complex and kind of boring questions around performance, and characterizing performance and overhead. And by that I mean there are so many variables to control: like, what mesh are you running, and under what workload?
A: You repeat those things in your environment, specific to what you're trying to accomplish. Moreover, part of what SMP is trying to do is say: good, well, as you have that repeatable tooling, and as each of the service meshes gets engaged here, you can imagine the maintainers of those meshes are pretty keen on getting well represented.
A: You know, in these types of performance benchmarks and things. And we're a third party, a community that hasn't created a mesh and doesn't intend to; there are 20 of them out there. As a matter of fact, one new service mesh got announced this week; remind me to talk to you about it.
A: If we don't, it's a big one. But so, what I was trying to say is: we're also keen to make sure that each of the service meshes gets well represented, that they maybe get highlighted in the ways in which they work well, in the situations that they work well, and also to honestly acknowledge where maybe the use cases, the things that they were built for, don't hit certain other types of use cases.
A
Cases
like,
for
my
part,
I'm
glad
that
the
world
there's
more
than
one
mesh
in
the
world
like
there's
all
kinds
of
organizations
with
different
use
cases,
but
point
being,
is
that
well,
this
was
an
opportunity
to
kind
of
standardize
the
way
in
which
you
characterize
the
performance
of
a
mesh,
and
this
spec
for
the
most
part
tries
to
address
quantifiable
numbers
and
quantifiable
statistics
that
you
can
use
to
yeah.
You
could
use
that
to
facilitate
a
comparison
between
service
meshes.
If
you
want
to
that's,
not
the
outright
goal.
A: It's really hard to get an apples-to-apples thing going on, but this type of effort helps facilitate, well, one, just baselining for yourself, and maybe comparison from one of the different environments that you run, or to others. Because, in part, if you've noticed with Meshery, I think we've got this statistic here: one of the things that Meshery will do is facilitate the collection of anonymous performance test results, and a little over a thousand have been run to date and sent in to an AWS free-tier t2.micro instance.
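In that spirit, the arithmetic behind an overhead number is simple once two comparable runs exist, one against the bare workload and one with the mesh in place. The figures below are invented for illustration and are not SMP's actual schema:

```python
# Hypothetical results from two otherwise-identical performance runs.
baseline = {"p50_ms": 2.1, "p99_ms": 9.8, "cpu_cores": 0.40}
meshed   = {"p50_ms": 2.6, "p99_ms": 13.1, "cpu_cores": 0.46}

def overhead_pct(metric):
    """Relative overhead the mesh adds for one metric, in percent."""
    return 100.0 * (meshed[metric] - baseline[metric]) / baseline[metric]

for metric in baseline:
    print(f"{metric}: +{overhead_pct(metric):.0f}%")
```

The hard part isn't this division; it's holding everything else (cluster, workload, mesh config, load profile) constant between the two runs, which is exactly what a spec plus repeatable tooling is for.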
A: Yeah, how do you know if you were running a mesh very well in your environment or not? Generally, across the thousand-and-something tests that have been run, what is the common overhead we're looking at? What's the ballpark? Are we looking at less than 10 percent CPU? Are we looking at more than that? And I say ballpark, and that's why we have a spec, because there are so many variables. You need some consistency to say: well, that was on Kubernetes 1.15, Istio...

B: [inaudible]
A: Free, yeah, exactly, yeah. Actually, we gave a talk at KubeCon EU just a couple of weeks ago on kind of that specific thing. Steven, you're queuing up all these interesting topics; you need to come on the calls more often.

B: [inaudible]
A: Something like what Stephen is saying: if you enable the function on a mesh to throttle the amount of requests coming in, rate limiting, or however you're throttling. If you just take that atomic network function, that function of a mesh, and you say, well, hey, we could use the mesh to do that, or we could do it with an external load balancer, or we could do it within the code itself, then one of the factors in that decision-making process should be, or it would be nice to know: hey, what's the overhead? Like, hey, this external load balancer does it really well, it's HAProxy and an ELB out there; or maybe it doesn't, maybe it's not close enough to the workload, or whatever. But part of that decision is: what's the overhead of that, what's the cost of that? And it's both cost financially and cost in hard terms. This spec, and the talk that we gave at KubeCon...
A: ...was trying to identify, for any given specific service mesh function, a way to say, in an atomic way: well, this is what that's going to cost you. Such that, as you're designing your services, you can have that in your back pocket as you go to architect them, and say: here's where we're going to leverage the mesh really well, because it's super cheap, or not. But to do it in a standard way. And the other thing that this spec facilitates is something that we'd like to create; we call it MeshMark at the moment.
A: That's to be able to say: if Clement was talking to Stephen, and you guys are both running Linkerd, personally, and you're trying to compare notes and say, hey, am I doing it right, how is it working for you? What's that, you've got an average of 10 percent CPU overhead? In my environment, it's only two percent. Yeah, the environments are a little bit different and stuff, but the point is to be able to have a common language when you guys have that conversation.
A: What's your MeshMark? And then when Steven says, well, it's, you know, 81, Clement can tout that over him and make fun of him, heckle him about how poorly he's running. Actually, I mean, that's not the point; my point is to be able to use some easier language, some common language. You know, Stephen would say, well, yeah...
A: He'd say: hey, we're running it at 81, and I know that's a bit lesser than the other environment, but that's actually intentional, because, I've got to tell you, given the value that we're deriving, how much we're asking the mesh to do, we're getting a lot more value out of it. There needs to be a formula to say it's not just all hard terms; there's a bit of softer qualification here.
A: So some of this stuff is, you know, in flight, and so if this is of interest, you guys jump in and help with trying to steward this.
A: So good, all right! Oh, we're two minutes over, my apologies. We'd long been in private conversations with NGINX, who, apparently... I didn't think they were going to do it yet; I think it kind of leaked out. But their NGINX...
A: ...new service mesh is very pertinent to Meshery as a project, because Meshery is the tool that runs the official SMI conformance suite, and with NGINX's new service mesh, they have an API, and the API is SMI, and only SMI, SMI being Service Mesh Interface. There are a number of meshes that claim conformance; Meshery is the tool to validate that officially.
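Very roughly, the first thing a conformance run of that sort can do is confirm the SMI APIs are even present in the cluster. The CRD names below come from the SMI spec; the function itself is just an illustrative sketch, not Meshery's implementation:

```python
# CRD names defined by the SMI spec (API versions vary by SMI release).
SMI_CRDS = {
    "trafficsplits.split.smi-spec.io",
    "traffictargets.access.smi-spec.io",
    "httproutegroups.specs.smi-spec.io",
}

def missing_smi_crds(installed_crds):
    """Given the set of CRD names installed in a cluster (e.g. scraped
    from `kubectl get crd`), return the SMI CRDs it is missing."""
    return SMI_CRDS - set(installed_crds)

# A cluster whose mesh only installed TrafficSplit support:
print(sorted(missing_smi_crds({"trafficsplits.split.smi-spec.io"})))
```

Actual conformance goes further than presence checks, exercising the behavior behind each resource (traffic splitting ratios, access policy enforcement, route matching), but the CRD inventory is the cheap first gate.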
B: Dumb question: do these conform to, like, OpenAPI 3.0.1 and Swagger? Is there some combination or reality there, or...?
A: Yeah, let me... I think there's some good news and some bad news. The good news is that, with as many service meshes as there are, there are three standards, or three abstractions, that have come together.
A
None
of
them
address
that
none
of
them
specifically
use
open
api
or
swagger
as
a
or,
and
none
of
them
specifically
address
the
actual
application,
the
actual
service,
whatever
whatever
that
application.
Is
that
that's
running
they
don't
address
it.
There
are
those
as
much
as
they
address
the
service
meshes
themselves,
and
so
these
are
the
I'll
I'll
put
this
link
out
there
since
we're
over
time
to
these
are
the
three
that
are
I'll
put
this
in
the
medium
nginx
service
mesh
and.
A: With that, anyone have anything else? I don't want to get a bad rep for making everyone late for their next call.
A: Same time next week, I think. And, just so you know, it's "Clement," with a t. So, just...
B: It's on the calendar; it's just that there's always something else that blocks it, or something.
A: Oh, nice. Well, thank you all. Thank you, Steven. See you, everyone, same time next week; we'll catch you on the next call.