From YouTube: Service Mesh Performance WG (April 27 2020)
Description
So great to have @mt_lattices and @PrateekSahu22 of @UTAustin on our first SMPS WG call.
A: [inaudible] Hey guys, good morning. Well, I shouldn't have enabled the waiting room, I guess, because that makes people sit and wait. All right, one minute — I'm going to grab the cup of coffee that just brewed.
A: Okay, very good. Kush said he's going to join as well, but he'd just be a couple more minutes. So nice to have you here. If you haven't met Mohit before — Mohit Tiwari, I guess, if you haven't met Mohit before — well, he wears a couple of different hats, and one of those hats... actually, Mohit, do you want to, instead of me? Yeah, yeah.
B: Yeah, I can just tell it. Hey guys — I'm currently working at Symmetry Systems; I used to be at UT Austin as a professor there, and I've just been generally very interested in this space of container orchestration: how do you build better containers, whether using just Linux primitives like SELinux and so on, or hardware, and then how to use containers for interesting things — so, a few different things on top of the baseline service mesh, right. So load balancing is great, canary deployments are great, but I think this is just the tip of the iceberg.
B: There is an organization called SPEC that helps you benchmark AMD versus Intel processors, or even across ARM — across different instruction sets. So you can build your own mesh however you want, but in the end we have to come back and report how it performs on metrics that we all agree on. For benchmarks for a service mesh: what's the underlying setting, how many machines, how big is the cluster, how is that cluster set up — and how sensitive should the overall metrics be to these artefactual things?
B: You should just standardize on it and figure out how best to do it. It's pretty interesting that GitLab, for example — I'm just kind of doing a brain dump on why I'm in this meeting and why I'm keen to push forward — GitLab is a super popular open source project, but even that doesn't seem to have published numbers on performance, right? So it's not like we can just import GitLab, run it on Istio, Envoy, whatever mesh, and say what the performance is going to be.
A: Yeah — Mohit wears a few different hats, and one of those has been as a professor at UT Austin, and so he's been a champion of the line of thinking we've collectively shared. Actually, it's around a multitude of things, and it ends up that there are a couple of different sub-goals.
A: We've presented some of their numbers and some of their research at KubeCon about a year ago — I think it was a year ago — and then we're looking at doing it again at KubeCon EU in August, so we've got a spot to talk about some of this; it's precisely what he just described, so, in a good way. The thing that I just mentioned is that there's a bit of a time bound.
A: Some of our efforts here are time-bound to August 16th — I think that's the date of that talk. So it's good to have that venue as well, to garner additional interest. I'm going to jump in and share a few slides here, a few thoughts. The agenda today was to refresh Mohit — and then Prateek Sahu; some of you guys know him. He's the longest-standing graduate student that's been focused on this.
A: Speaking of the CNCF — Meshery itself and this Service Mesh Performance specification are both intended to head into the CNCF this year, or minimally at least Meshery. So Prateek is with us as well, and Prateek is familiar with it. I think he's not on audio right now, right? No, not yet.
A: Anyway, I was blathering on. The objective — a couple of things that I wanted to do. One, we're starting to create some working groups in the Layer5 community. We need to establish a cadence, and we've got enough interest and enough talent just on this call alone to bring to bear on these — and this really is a fraction of the number of people that I expect will, at some point, weigh in on this spec, weigh in on this benchmark. And actually, as I say 'spec' and 'benchmark', it's like, well —
A: What are you talking about? What's the scope of the things that we're trying to accomplish? I'll articulate that from my perspective, but that's not necessarily a set-in-stone thing. I'll start maybe here, with this Service Mesh Performance specification. It is essentially a YAML file at the moment, which is intended to be a standardized way — and 'standard' is a big word, but anyway — intended to be a standard format in which you would describe the type of performance test you're going to run: how much load you're going to generate, for how long, at what rates, and how many load generators you're going to have, that kind of thing. It would also describe your environment: how big is your cluster, how many nodes, CPUs, and how much memory do you have, etc.; what all do you have deployed; what type of app have you deployed?
A: It would capture the results of the performance test, and it also captures the configuration of the service mesh: its version, how it's configured, what all it's doing. And we intend, from that spec, to hopefully introduce some kind of a concept — I'm not sure if there's a more clever name at the moment, but something like a 'mesh-dex', if you will — a simple way of referring to, a simple scale that lets people say: hey, I'm running at a mesh-dex of ninety, and that's great, because this is how much value I'm getting out of the mesh, because it's doing this many things; or, that's horrible, because there's so much overhead here and I'm really not getting that much value. So it gives all the rest of us humans something easy to understand and an easy scale to measure against. But there needs to be a formula behind how you calculate a mesh-dex.
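To make that concrete, here is a minimal, purely illustrative sketch of the kind of document just described — load profile, environment, mesh configuration, and captured results. The field names and numbers below are hypothetical stand-ins, not the actual SMP schema:

```python
# Illustrative sketch of one such performance document, based only on the
# description above; field names are hypothetical, not the actual SMP schema,
# and the numbers are placeholders.
performance_profile = {
    "test": {"load_generators": 1, "qps": 500, "duration": "5m"},  # the load profile
    "environment": {                                               # the cluster under test
        "nodes": 3,
        "cpus_per_node": 8,
        "memory_per_node_gb": 32,
        "deployed_app": "bookinfo",
    },
    "mesh": {"name": "istio", "version": "1.5", "mtls": True},     # mesh and its config
    "results": {"latency_p50_ms": 3.1, "latency_p99_ms": 12.4},    # captured test output
}
```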
A: It also looks like it's from the 90s — or, not the 60s, but maybe more the 90s, which I think it is — and if you go look, you can kind of read through it; it's a relatively simple formula that it uses, and in some respects it's subjective metrics. It's good to take inspiration from, and to think about how a mesh-dex formula would be created. I don't have these answers.
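Purely as a strawman — the speaker is explicit that the real formula is an open question — a mesh-dex-style score could weigh the value delivered (features exercised) against the overhead paid (added latency), normalized to a 0–100 scale:

```python
# Strawman only: the call explicitly leaves the real formula open. This just
# illustrates the "value delivered vs. overhead paid" shape of a mesh-dex.
def mesh_dex(features_enabled: int, total_features: int,
             baseline_p99_ms: float, meshed_p99_ms: float) -> float:
    value = features_enabled / total_features            # fraction of mesh value used
    overhead = (meshed_p99_ms - baseline_p99_ms) / baseline_p99_ms
    return round(100 * value / (1 + overhead), 1)        # higher is better

# Example: 9 of 10 features on, p99 grows from 10ms to 12ms -> score 75.0
print(mesh_dex(9, 10, baseline_p99_ms=10.0, meshed_p99_ms=12.0))
```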
A: This is — and I'm just going to quickly try to go through — what it is I think that we're going to try to accomplish. Part of accomplishing this: if you go do a Google search, or any search, comparing the performance of one mesh to the next, you'll find a number of articles out there, and hats off to those that have taken the time to do this and publish the results. It's not easy. People typically end up building bespoke tooling or cobbling together a set of tools to do that test.
A: Meshery is intended, in one of its feature areas, to be an easy-to-use tool to perform those tests. We've got an agreement with the CNCF to use — I think the agreement is twenty-something nodes at a time, physical machines — a pristine environment in which to go run a sort of point-in-time, authoritative, if you will, analysis of the mesh itself. And this is part of today's agenda: to do a quick update so that, I think, all of us are kind of on the same page about Meshery's current functionality in terms of performance analysis. One of the things that Meshery does is collect performance results, so you'll notice on either meshery.io or the Layer5 site that this number is real-time.
A: If you go run a performance test now, you'll see this number tick up. So we're trying to collect performance test results so that at some point we've got a statistically significant set of numbers to go do some analysis on. My gut tells me that the useful number of tests that have been run in here is way lower than this number: either people are just running some silly sample tests —
A: — that really aren't of value, or some of these results were collected before we actually pulled in all of the environment information and kept that as well. I think without some of that context, not all these results are useful in an analysis. We'll see. There's a folder that each of you, I think, has access to, that has some materials from a while back: write-ups about what we think we're trying to accomplish — about doing a benchmark, about establishing the spec that I was talking about earlier, about establishing a mesh-dex.
A: I guess I'll come back to this today, and we'll see this hopefully demoed here in a moment. Meshery has support for two different load generators, and we're considering a third. What we're considering with that third would be a load generator that you could have many instances of, to generate load in a distributed fashion. It's kind of interesting to me that we have a bunch of distributed systems...
A: Meshery has just gotten to the point where, I think, we're ready to engage there and ready for some of those people to begin running it. My hunch is HashiCorp might be the first ones to pick it up. I don't know — I say that not because the other ones aren't amenable to it, just because they keep showing up and trying to partner as much as they do. And once one of them does it, the other ones are going to feel a lot of pressure.
A: There's another related initiative that helps build my confidence that Meshery would ultimately be incorporated into those build processes. One is that when Meshery heads into the CNCF, all of that will make everyone feel good, I think — be comfortable — with it recognized as sort of the tool to use. Meshery is positioned to be this.
A: I think in some respects this is sort of that first meeting, if you will. We need to wrap a bunch more structure around it to help it move forward; maybe that working group needs to head into the CNCF — I said that earlier. And so, with that, I'm going to step back and be quiet and we can walk through a demo. But good comments here.
A: Very late last night I had invited a couple of Googlers — the Istio performance team lead — to join; in part, yeah, for him to join, and for one of his employees as well. I sent that to them extremely late; I didn't expect that they were joining. That's fine, but they've participated in the past. Oh —
C: I shared a link to Istio's performance results that they have published on their webpage. They have a couple of benchmarking tools that they used, a couple of apps, and latency, connections, and throughput graphs that they have run on a baseline and with various of their service mesh features. That is one that they have openly shared. I'm not sure if Lee has some insider info on that.
A: So this is one of the things where we need to go follow up with them and ask them for specific test configurations. We've been on any number of calls with both Google and IBM, for Istio specifically, about the tests they run for Istio. Some of those tests run on for miles of spreadsheets, because there are lots of different configurations to do.
A: We need to, yeah — like he said, there are like three that are probably critical to them, in terms of what they would want to make sure Meshery was testing. And I think, between all of us being busy, we're just not following up and making sure that we're getting that described. He had hired an engineer — Carolyn is her first name — and they have since put together a bit of a dashboard. I don't know if it's linked here; I would assume so. One of the things that we need —
A: I think that we're at a point by which — I think we've had a couple of false starts with engaging with the Istio team, because we haven't had enough time — but the formation of a working group, a bit of structure, with a cadence of meetings, a mailing list, a Slack channel, recorded meetings, etc., and maybe bringing that into the CNCF and having those meetings in that context, might encourage some of that participation. And so one of the action items —
B: There are some softer questions that I think these tests don't seem to get at. These tests — I mean, Istio has a bunch of others that I've seen — there's their micro-benchmark of the proxy by itself. So if you scroll all the way down, I think there's maybe some other link I've followed out of here. Basically —
B: Yeah, I mean, I just followed the link and kind of kept clicking — on micro-benchmarking the Istio proxy, stuff like that, right, where you're just looking at, you know, the latency added by just this one thing. But what it's missing is the larger context, which is, you know: what's the effect on an end-to-end workload? Is this even going to show up on end-to-end workloads or not, right? And especially for the control plane: what if I want to do interesting, more aggressive control plane policies, right? So what's the effect?
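A minimal sketch of the micro-benchmark-versus-end-to-end distinction being drawn here: time the same request path once addressed directly and once through the mesh's proxy, and report the per-request "tax". The two URLs are hypothetical placeholders, and this is not Istio's or Meshery's actual harness:

```python
# Toy comparison (illustrative only): sample latencies to the same service
# twice -- once addressed directly, once through the mesh's data plane proxy --
# and report the per-request latency "tax" that micro-benchmarks focus on.
import time
import urllib.request

def sample_latencies(url: str, n: int = 200) -> list[float]:
    """Issue n sequential requests and return sorted per-request latencies (s)."""
    out = []
    for _ in range(n):
        t0 = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        out.append(time.perf_counter() - t0)
    return sorted(out)

# Hypothetical endpoints: the service's pod directly vs. via the sidecar proxy.
direct = sample_latencies("http://localhost:9080/productpage")
meshed = sample_latencies("http://localhost:8080/productpage")
for name, q in (("p50", 0.50), ("p99", 0.99)):
    d, m = direct[int(len(direct) * q)], meshed[int(len(meshed) * q)]
    print(f"{name}: direct={d*1000:.1f}ms meshed={m*1000:.1f}ms tax={(m-d)*1000:.1f}ms")
```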
B: Also, I don't just want to have a static workload and just see what I can push through the data plane proxy. To me, that's one interesting, useful case, but not all of it. So: benchmarking interesting control plane ideas, and even for the data plane — you know, a lot of the interesting use cases for Envoy, for example, are adding more filters for traffic, right; adding, whether it's Layer 4 but also application-layer filters, into the logic.
B: So now, how would the overhead scale? And if you have a system that is dynamic — so it's not just like, okay, I have these three boxes' worth of a few containers set up, right — if you want to scale this up and down, what does it do to the overall system performance and responsiveness? I think just establishing that would be really interesting, and it could be done, but I think the Istio guys do it for, like, a few servers. I posted another link from a different provider of Kubernetes services.
B: They had like a small cluster with two or three machines. So it's just about settling on, like: hey, a median HashiCorp deployment is this big, right, or a median Istio deployment, across whoever they know about, is this big — and maybe small, medium, large kinds of setups, with different ways of putting the control plane to work, right?
B: Not just push traffic through your data plane proxy and report, you know, 'oh, three milliseconds, everything is great.' Obviously, if we do a little work — just pushing things through a vanilla or null proxy would be pretty low overhead, but that's not where the service mesh use cases are. You want to add more data plane functionality, and I don't want to spin up a separate sidecar for every new thing I want to try; we add them as filters, for example, into Envoy, right? So I think that's the kind of thing, I mean. So, in our lab, there's another student who is looking at deploying N-version execution inside a service mesh. So now you have multiple copies; if there's a flaw and an exploit in one, you're sort of covered. It's slightly different in the middle: I mean, you send the traffic out, but you also check the results before sending the result out — it's not like, you know, the copies' outputs are dropped by default.
B: So there's a bunch more extra work being done inside the mesh than with a really simple vanilla data plane proxy. Yes, I think these are the types of use cases. One of the things I was thinking is: would it help if we pull in some existing open source project as an end-to-end test? I mean, GitLab —
A: That's good. For my part, yeah — part of what makes some of these test results interesting, or part of what might make Meshery as a tool interesting to others, is that it's a tool to help facilitate tests on your workloads. So, one, that's interesting; but then, two, your workloads are so much more interesting than Bookinfo or Emojivoto or whatever sample apps. And yeah —
B: I mean, GitLab, for example, sounds like a reasonable medium — at least a really popular workload. Or Elastic — or pick a different one, like the ELK stack or something. I'm not married to any one of these, but it seems like it would be worthwhile, because a lot of people deploy Elastic and GitLab.
A: And ultimately we did — we spent a fair bit of time, and we ultimately did; I think it now is Kubernetes-compatible. It's just that we never really — I don't know if it's its personality or whatever — really got traction there. That doesn't mean that those workloads aren't necessarily meaningful. But something like a GitLab — actually, for most of those looking at the results, that would be: yeah, that's a bit more compelling.
A: I agree with that. What excites me in particular — and I know a couple of other folks on the call do as well, as an example — is in fact just what Mohit is saying, which is that there are application-level filters that you could, through Wasm more or less, put into — I mean, we're getting technology-specific, I guess — but put into Envoy through WebAssembly. We are on the list to present — well, we were on the list to present that at DockerCon this coming month. I've pulled this up, and we're going to —
A: What is most interesting about those filters and what they can do is, in some respects — there's enough prior art out there that talks about the effects of the control plane and the effects on the data plane, but not necessarily the effects of using these Layer 7 filters, or the effects of doing that in a workload that is meaningful, one that everyone kind of understands.
B: Yeah — I mean, I think Prateek knows even better, but I think we have GitLab; we've been using that a lot. I don't think we have deployed Elastic... we have deployed it in the past; Elastic is easy to plug in. I don't think we have experimented with putting actual data into it and scaling it up and down, yeah.
C: We have been experimenting with GitLab and a couple of smaller sample applications from Istio. Definitely there are a couple of applications which we just used for security testing, like the Damn Vulnerable Web App — trying to port those applications into the service mesh and trying to see things like what Mohit was mentioning. One of my colleagues, who was working on the N-version execution, has ported the Damn Vulnerable Web App.
B: And Mattermost is another one, but I guess it's a little less popular than GitLab. And if you want to set up Elastic, that sounds pretty reasonable.
A: We have pending for incorporation — well, potentially — a testing tool that Envoy uses, called Nighthawk, and Kush has been giving some consideration here to either incorporating Nighthawk or Vegeta as load generators that are capable of being distributed, or deployed in a distributed fashion. So, Kush, do you want to — I'm going to try to grab the link to your design spec — but this would be something good to explore here, because I wonder, with that brief description...
A: I think almost any test configuration is valid. I think some test configurations are not very meaningful, while others are extraordinarily meaningful — in part to Mohit's point about what workload you're using. It's like, well, it is valid to use a very simple app; that's such a lightweight workload that, you know, it's a valid test, but what's really meaningful is when you've got a big workload, like GitLab. And I guess what I'm trying to ask is —
A: Because what I generally see is people doing testing by selecting one endpoint — and that's what Meshery does today: one endpoint, one load generator — and trying to hammer on it from there. And it's good to have control; you know, you introduce additional variables and it gets harder. But it seems like it's a valid test case to have multiple load generators hitting either the same endpoint or multiple endpoints, and that's kind of what this spec is about.
C: If we're already pushing the server that is serving the requests to its limits, then multiple load generators might not do much good. But if the bottleneck is on the client side, and the client is not able to generate requests that push the server to its extremes, then it might be helpful to have a distributed load generator, so that we are pushing the server to its limit — because we are trying to benchmark the server which is providing us the service.
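A minimal sketch of that idea, assuming a hypothetical local endpoint: fan load generation out across several worker processes so that a single client cannot become the bottleneck, then merge the latency samples before computing percentiles. This illustrates the concept only; it is not how Meshery, Nighthawk, or Vegeta implement distributed load:

```python
# Minimal sketch (not Meshery's implementation): fan out HTTP load across
# several worker processes so a single client can't become the bottleneck,
# then merge the latency samples to compute overall percentiles.
import statistics
import time
import urllib.request
from multiprocessing import Pool

TARGET = "http://localhost:8080/productpage"  # hypothetical endpoint

def worker(n_requests: int) -> list[float]:
    """Issue n_requests sequentially and return per-request latencies (s)."""
    latencies = []
    for _ in range(n_requests):
        start = time.perf_counter()
        with urllib.request.urlopen(TARGET) as resp:
            resp.read()
        latencies.append(time.perf_counter() - start)
    return latencies

if __name__ == "__main__":
    n_generators = 4           # analogous to multiple load-generator instances
    per_generator = 100
    with Pool(n_generators) as pool:
        results = pool.map(worker, [per_generator] * n_generators)
    merged = sorted(l for r in results for l in r)
    p50 = merged[len(merged) // 2]
    p99 = merged[int(len(merged) * 0.99)]
    print(f"requests={len(merged)} p50={p50*1000:.1f}ms p99={p99*1000:.1f}ms "
          f"mean={statistics.mean(merged)*1000:.1f}ms")
```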
C: So yeah, I mean, it definitely is — in a sense, when you're saying from different geographical locations, there are obviously a lot of network delays that you incur if you are trying to hit a service from a different geographical location — like, say, a European server from the US, or something of that sort.
C: All of those are mostly networking latencies. I don't see a clear picture where the service mesh brings in functionality that is going to help with this. But if it does, then yeah, it is definitely — like if there is a jump proxy that we can configure using the service mesh or something. But I am not sure if that is true.
A: With different systems, sometimes — I think someone was asking this question in the Slack channel here recently — for Istio specifically: it has ingress gateways and egress gateways. You could imagine that having a single ingress gateway could potentially be a bottleneck for traffic getting into the mesh. So, okay, people might deploy multiple ingress gateways, and to the extent that they do —
A: Maybe that makes for an interesting scenario for distributed performance testing, or having multiple load generators, maybe one per ingress — I don't know. I mean, yeah: does that just change the scale, just make all the numbers higher in terms of the test? What effect does it have on how the system behaves?
C: I agree — like, in such a scenario, it would be helpful. That is, again, if we have multiple ingress gateways — instead of configuring just one gateway, if we can configure multiple ingress gateways — and suppose with just one gateway we can serve 700 requests, but with, like, seven gateways we can service 3,500 requests, then that is a valid scenario where we are pushing the service to the limit with multiple ingress gateways, and in that case a distributed load generator definitely makes sense.
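Using the illustrative numbers just given (700 requests through one gateway, 3,500 through seven), a quick sanity check shows that scenario is at about 71% of linear scaling — exactly the kind of derived metric a distributed test could surface:

```python
# Scaling efficiency of the hypothetical ingress-gateway numbers given above:
# efficiency = observed throughput / ideal linear throughput.
def scaling_efficiency(rps_one: float, rps_n: float, n: int) -> float:
    return rps_n / (n * rps_one)

# 3,500 rps across 7 gateways vs. 700 rps through 1 gateway:
print(scaling_efficiency(700, 3500, 7))  # 0.714... -> ~71% of linear scaling
```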
A: Okay, so yeah — I'm sorry, should I — yes, we are recording. Not that I didn't spend some time to catch you up. Alright, so, briefly: I think this may or may not be helpful to those who just joined as well.
A: So, alright — a quick demo of Meshery. I don't have to do this, but — I'm running on my machine here; it's a Mac. The Meshery configuration that I'm running is really just the standard one: it's got the Meshery server and then individual adapters for the individual service meshes. When I do a mesheryctl start, that brings up Meshery in my browser. I'm running Docker Desktop, and inside of Docker Desktop an instance of Kubernetes is running. I've used Meshery to deploy Istio 1.5 with mTLS.
A: Then I've used Meshery to deploy a sample app, Bookinfo. And, Mohit and Prateek, by the way, as we think about sample apps: there are a few different ones supported. This one I'll call out just because it may or may not be robust enough to be of interest — I think it's kind of on the order of Weaveworks' Sock Shop — but this one is Google's Hipster Shop. So these —
A: When you go into Meshery, you connect Grafana or Prometheus. Right now, this Meshery is only connected to Grafana. So it's connected here; we just kind of do a quick test to see if we can talk to Grafana — I know we can. These same boards that you're seeing in Grafana show up here in Meshery, so I just choose the same one. I'm in here — these are the... I want to use the right term — these are the panels that we were just looking at.
A: Actually, there are four of them called vCPU, so I'm going to just choose those four, because I think some of them will have statistics and some of them won't. So the one that we were just looking at, I think, is this one here. There hasn't been any load — yeah, there it is. These are the same four Istio control-plane components, which basically have zero vCPUs being used, because it's not doing anything; it's not serving any load. But once you've chosen to add something like that board, it will now show up.
A: It starts to load up that same set of charts that we had said we want to look at. Sorry — it actually understands that we deployed this here; it has yet to understand that we've deployed other workloads. I think at some point it'll get much better, so that it will just identify all the endpoints that are available and make that testing easier.
A: You wouldn't have to have multiple load generators to do that; we could facilitate it. Like right now, it's asking how many threads, or how many processes, you want to spin up using a single instance of Fortio — how many threads do you want it to have? So maybe you dedicate one thread. I think there are a number more nerd knobs, if you will, that need to be identified — like, hey, on a per-endpoint basis, how many threads do you want to run — and the ability to save these configs and schedule them as well.
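As a sketch of the kind of per-endpoint "nerd knobs" being described — thread counts, rates, and saved, schedulable configs — here is a hypothetical configuration shape. These field names are illustrative only, not Meshery's actual schema:

```python
# Hypothetical per-endpoint load-test knobs, per the discussion above; these
# field names are illustrative and are not Meshery's actual config schema.
test_config = {
    "load_generator": "fortio",   # a single Fortio instance
    "endpoints": [
        {"url": "http://productpage:9080", "threads": 1, "qps": 100},
        {"url": "http://reviews:9080",     "threads": 2, "qps": 50},
    ],
    "duration": "60s",
    "schedule": "0 2 * * *",      # saved configs could be scheduled, e.g. via cron
}
```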
A: Probably helpful. Between whichever one you use, you'll get the same user experience: it'll show you that it's running, you can navigate elsewhere in the UI, and when it's finished, it'll tell you that your test results are ready for viewing. I'll show you what those look like. We need to enhance the way in which the metrics are delivered, because it's a bunch of stats.
A: The last thing, I think, that I'll kind of demo here is just the notion that users can go ahead and download their test results. These test results are shown in that Service Mesh Performance specification format — that spec format — but right now it's only a partial implementation. Naveen, who's on the call, is working with another contributor, Vijay, on trying to get this fully fleshed out, so that all the statistics are there in the same spec format. I know everybody's got to go; we've got a minute left.
A: Some of you are actually working on solving that. I'd like for us to — I'm going to send out a poll to everyone to ask when a convenient, sort of ongoing time to meet is, to have kind of a regular touch point, to outline the goals a bit better and, you know, to basically get us going. I'm assuming we might get more participation from some of the service meshes if — I guess I have a mind to bubble this up inside of the CNCF.
H: So this is way beyond my dreams — way beyond my dreams, to be honest with you. Because the way things are at my university is that most of my seniors working in the industry are just working in support roles, you know; they have no idea — they have no idea of this, and it's like off the roof. I'm trying to get a hold of it, whether it works or not, but I also know my semester is about to get over, and I'd love to be done with this so I can put my mind — oh.
H: It's like, the way things work is that people just come here, take what they can in terms of getting a job, and that says you are done. So it's like: do I want to push my insight to a different level? And most of my seniors, when I told them that I was working on service meshes — they had no idea. No idea. Especially, there was one of the guys who was at Amazon; he was like, okay, that's nice. And my professor even complimented that Kubernetes was going to be the way to go. So, hats off, doctors.