Description
Service Mesh Performance Community Meeting - Aug 05th, 2021
Join the community at https://layer5.io/community
Find Layer5 on:
GitHub: https://github.com/layer5io
Twitter: https://twitter.com/layer5
LinkedIn: https://www.linkedin.com/company/layer5
Docker Hub: https://hub.docker.com/u/layer5/
A
So before we move on to the topics for today: Sakari Poussa, am I pronouncing it correctly? I think it's your first time on this call. Would you like to introduce yourself?
B
All right, can you hear me now? Yeah? All right, good. Yes, hello, everybody! Now what happened, can you still hear me? Yeah? All right. Yes, I'm Sakari from Intel. I think this is my first call on this forum; I participated in the previous forum, the TAG Network, a couple of times. I work for Intel, and my responsibility is enabling Intel features on service mesh, mainly working on Istio now, and performance is obviously close to our agenda. I would like to see what's happening in the SMP area, and hopefully we can contribute something to that area moving forward.
D
Sakari, it's nice to see you again. Good to be back, yes. And by the way, I don't know that even I could articulate it well, but there are about three meetings that are really intertwined, and I think the common point for them is probably the CNCF Service Mesh Working Group.

D
Adjacent to that area there's the working group that's specifically focused on service mesh initiatives. We had been having the same meeting time between those two groups, and as of now we're splitting out. Basically, the project meeting for Service Mesh Performance and those initiatives inside the Service Mesh Working Group are so intertwined that this is one and the same meeting here, with the TAG Network meeting being separate; the things that we do here will still be reported into it and exposed there as well. So I don't know that that distinction is even important anyway.

D
It's good that you're here. There are a number of things for us to go through. The project, SMP: this is basically its second meeting post-adoption into the CNCF. We met a couple of weeks ago, and today I expect we'll probably have a bit of a continuation of that conversation.

D
As the project is now in the CNCF, it's a convenient time to have some discussions around what we want to get done here. There is an established charter; maybe that should get some discussion, some brainstorming. Under that charter for Service Mesh Performance there are about four categories of things that we think we want to accomplish in general, and we were beginning to lay out some specific goals within them.

D
So I think it will clearly benefit all of us to review that, to brainstorm, and potentially change things if they need to be changed. What we accomplish here together is highly dependent upon who shows up, whether on the call or not, but just who participates.

D
So, for my part, I'm glad that you're here, because I hope you get the sense that you're here to influence what we're doing as well; your input is quite sought after.
B
I'm in Finland, in the Eastern European time zone, so it's about 5 p.m. for me now.

B
Yeah, I'm used to these late-evening US calls, so that's fine.
D
Good, good. Okay, so Navendu, I apologize for wrangling the agenda. I do think it's healthy for us to continue the particular topic I was just mentioning, about what we are doing here; it'll provide a nice backdrop to the rest of our conversation.

D
Let me dive into that for a moment to orient everyone. Some of you are quite well oriented already; I think some of you have things to show today about advancements that you're making in the project itself. But some of this has changed since the project joined the CNCF. In terms of where it lands on GitHub, the project was formerly under the Layer5 org, but it's been moved out to its own org.

D
Inside of that org there's one repository, service-mesh-performance. We've got some stars already, which is good. The project initiative has been going on for about a year, so we've had lots of folks come and go. Initially we had a lot of participation from a couple of universities; part of the vision for the project has been to do some research, some studies, to learn what we can and share with the world what we can about how to run this newfangled infrastructure, and run it well.

D
And let's look at that deck. I wonder if you might do us a solid: it's great that we're providing people the ability to walk through the slides here, which is helpful, but it's also slightly inconvenient not to have a link to open up the deck itself.

E
Yeah, I can drop the link, like this.

D
Thank you, sir, that'd be helpful. Okay, so more or less, here's the charter slash roadmap, or really the roadmap captured into four areas. So there's the spec itself. And by the way, Sakari...
D
In part, what I'm saying is really aimed at you: please, please feel free to suggest changes or other things. Let's do something together; please don't let me speak at everyone for the next hour. So there's the spec itself, and the spec itself needs refinement. What is the spec?

D
It's a set of protobufs that do their best to capture relevant information about the infrastructure, the service mesh, and the application or services running on the mesh, and to capture that in a snapshot: to basically define a standard set of signals that are necessary to be able to articulate how well a system, a service mesh deployment, is performing. As that becomes a common reference, value follows from that common reference; we'll see.

D
"Standard" is potentially a big word depending upon how you mean it, but a lot of value is derived from measuring something in the same way, under the same unit of measure. The protos are sourced from Istio.

D
Well, it's no longer a working group, but it was the Istio Performance and Scalability Working Group inside the Istio project. What became the protos used to be just a simple markdown file; we've taken that and refined it into protos. Those protos are now being used inside Meshery, the tool that implements SMP, or, you know, the first tool to implement SMP. And there's ongoing work to be done to review what's being captured. Is it enough?
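As a rough illustration of the categories the spec's protobufs cover, here is the shape a captured test result might take, expressed in plain Python. The field names and values below are invented for this sketch; the real schema lives in the project's protobuf files.

```python
import json

# Illustrative only: the keys approximate the categories discussed (the
# environment, the mesh, the load applied, and the observed latencies);
# they are not the actual SMP field names.
def make_performance_result(mesh, mesh_version, qps, duration_s, latencies_ms):
    latencies = sorted(latencies_ms)
    def pct(p):  # nearest-rank percentile over the sample
        idx = min(len(latencies) - 1, int(p / 100 * len(latencies)))
        return latencies[idx]
    return {
        "environment": {"kubernetes_version": "1.21", "nodes": 3},
        "mesh": {"name": mesh, "version": mesh_version},
        "load": {"qps": qps, "duration_seconds": duration_s},
        "latency_ms": {"p50": pct(50), "p90": pct(90), "p99": pct(99)},
    }

profile = make_performance_result("istio", "1.10.3", 500, 60,
                                  [2.1, 2.3, 2.8, 3.0, 9.7])
print(json.dumps(profile, indent=2))
```

The point of the structure, as in the spec itself, is that two tools capturing these same signals can compare results meaningfully.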
D
Here are the three protobuf files that are capturing it today. It captures information about the specific service mesh that's deployed, the configuration of that mesh, and your infrastructure: your Kubernetes clusters, their size, their resources, et cetera. I won't step through all of what it's supposed to capture, but it's a lot. Some of that is the application itself: what type of workload are you running, and how busy is it? Its characteristics have an impact.

D
They have a bearing on the overhead of the whole system, the performance of the whole system. So how do you characterize just the application itself? Each of these areas can be fairly deep, and for a couple of them, attempts at definition have been made by other projects.

D
For example, there's the Open Application Model (OAM). It helps define the apps that you deploy on Kubernetes, or the configuration of Kubernetes itself. The point here is that SMP is most closely married with SMI, Service Mesh Interface. SMI describes the features of a given mesh and how you might configure those features; that's relevant to SMP, so there needs to be an interchange. SMP doesn't need to redefine or reinvent any of those wheels, but rather reference and use those descriptions.

D
One of those objectives is to have each service mesh project self-report the performance of each particular version of their service mesh, because every time a new version of the service mesh is released, it's possible that they've regressed in performance, or that they've improved, or improved or regressed for certain types of tests. To facilitate self-reporting, Rudraksh is going to talk a little bit later about a GitHub action that's been made, which will invoke and run Meshery, run a number of performance tests, capture those metrics, and publish them. We'll look at that in just a minute, but that's an example of what is meant by participation.
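For a sense of what per-release self-reporting makes possible, here is a minimal sketch of a regression check over published results. The 5% tolerance and the numbers are arbitrary choices for illustration, not anything the project prescribes.

```python
# Flag a regression when a candidate release's p99 latency exceeds the
# baseline release's p99 by more than a tolerance (5% here, chosen
# arbitrarily for the sketch).
def regressed(baseline_p99_ms, candidate_p99_ms, tolerance=0.05):
    return candidate_p99_ms > baseline_p99_ms * (1 + tolerance)

print(regressed(10.0, 11.0))  # 10% slower than baseline: True
print(regressed(10.0, 10.3))  # within tolerance: False
```

A CI job running such a check on every release is exactly the kind of thing the GitHub action discussed below automates.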
D
So there's plenty of opportunity to help educate the world and share the learnings here. KubeCon is coming up; projects like SMP will have a seat at the table, a talk, to tell people what's going on here and what we're doing. A lot of users, a lot of the folks who want to consume value from the project (not participate in it, but leverage it), are most often interested in questions like: hey, what's the fastest mesh, or under what conditions, or what considerations do I need to know? I understand they're curious about this. We've long held off on publishing benchmarks, or defining and publishing them, and it's time now. We've got this ecosystem.

D
There's a couple of things that have happened. One, we're standing up...

D
SMP would be able to publish authoritative insights about the performance of cloud native applications running in the context of a service mesh, in part because it's part of the CNCF now. So we'll use that pulpit, if you will, to publish those, and we'll be as even-handed as possible. We had held off on publishing in the past because it'd be really hard not to piss off someone as you do that; it's a sensitive thing for these projects, and that's not the goal.

D
The goal is to help educate and help users not be scared of the hidden cost of a mesh, but to expose that, talk about it, make sure people understand it and can use service meshes to their benefit, to actually save on cost. So I think that's a goal. People often want to compare between meshes in terms of which ones to run. But here's an area, Sakari, where I'm going to call you out again: it's something that we often get asked.

D
Let me say it like this: we often get asked what workloads are being tested. Is it part of the goal of the project to produce standard benchmarks? Yes, let's do it; it's helpful. Even though a given consumer, a given user of a mesh, may be running a workload that's totally unique to their environment, with certain characteristics that are unique to what they're doing, in general there are a few example services or workloads that are representative of a category or a class of a particular type. Maybe one that's database-intense, or very high-volume, small transactions. Part of what we need to do here is take a stand and say: here's an example one. And we've got a number of sample applications to do just that.
D
Well, I'm just trying to think of where there's a common list of them. Here, let me share more of the screen.

D
There are a number of sample apps that are packaged as part of Meshery, as part of a release, that people can take and run, and then run performance tests against, in compliance with the spec.

D
So there are these four, which, again, I'll withhold judgment on; let's evaluate how good any of these are. There is one thing to say about these apps. Actually, I was looking at applications that you can easily publish onto Istio: these four, Emojivoto among them.

D
This one is very similar to the BookInfo application from Istio; that's their sample app. Emojivoto is the sample app from Linkerd. Something that Meshery has tried to do is help people learn the different meshes and how they behave differently, so it takes sample apps from each of the service mesh projects and will let you run that mesh's sample app on this other mesh over here, or vice versa, so that you can then see a behavioral or performance difference across those assets.

D
But it's not the end game. There is one of the universities that we engaged with early on and haven't had a lot of contact with since. We helped reformat their sample app, or not their sample app but their workload app, and just didn't find the professor, Christina Delimitrou, to be desirous of collaboration. Okay. But we can take these applications here. There are three; this is Cornell University, which Christina Delimitrou was part of. She's had some of her students work on, well, I keep saying the phrase sample app, but representative applications of maybe what a social network might look like.
B
I mean, have you done exactly that: putting that DeathStarBench on top of Kubernetes and including it as a sample app in Meshery, or in SMP? Is that what you've done, or what are you planning on doing?
D
That was a year and a half ago. We helped them make it Kubernetes-compatible, and I assume that it still is. So we included it as a choice in Meshery at the time.

D
No, we have since pulled it out, because it didn't always work, or it needed too many resources to run, and also because they wouldn't accept our PR. I think she was concerned that we were trying to steal her thunder or something, and that was really far from the truth. We were just trying to...
B
Yeah, because internally at Intel we are doing the same thing: we are putting this one on top of Kubernetes and measuring it, and...
D
Yeah, we'd love to. Meshery itself has gotten a lot further; back when that was going on, it was like, well, we don't want to give Meshery users a bad taste in their mouth, a "hey, this stuff doesn't work." Meshery has advanced much further along since. One of the things for Meshery is that, okay, it has example apps, great, but it's also starting to just let you load up your own apps separately.

D
So if you have a Kubernetes manifest, or a set of manifests, today you can just paste in your definitions here and have it apply and run them. And, actually about to be released in a few weeks, you can upload the definitions of your apps and then have them sitting here.

D
So you can then deal with them on an ongoing basis. But yes, Sakari, please express opinions every which way. The apps that you have: do you think those might be candidates to bundle as a common reference in here, or to just point people, maybe in documentation, to the fact that there are such sample apps you might want to use? Or maybe they're not open source; maybe it's just "hey..."
B
Yeah, I think what we have been doing is this as well. There might be more, but the two that I'm aware of are that DeathStarBench benchmark and then the Google microservices demo, the Boutique, doing the same thing for that. They both have their own set of issues, but I think those are the kinds of applications that we have come across.

B
So it would be good to see these included here as well. And I haven't dived into the spec itself, or Meshery, but one thing that is very topical for us right now is, for example, that we have enhanced Istio and Envoy to accelerate the TLS handshakes, and we want to somehow demonstrate the value of this acceleration.

B
You can accelerate the handshake to bring down the latency, free up some CPU cycles, and all that. So would this SMP be able to tell that type of difference? You run two service meshes, the vanilla Istio and then the Intel-accelerated Istio, and you get a number: the upstream was, you know, five, and the Intel one is seven, that kind of thing. Do you understand what I'm asking?
D
Yeah, yeah. Let's see if this answers the question. Let's say that you're running...

D
Yes, Meshery can deploy multiple instances of a given service mesh. It can then help you deploy your sample app, and then help you run performance tests against those environments, against those services. It also facilitates comparison between two different configurations. So let's say you've defined a test.

D
In this case, you've defined a performance test; you run it any number of times, and you change the configuration from one test to the next. So you turn on the highly tuned mTLS handshake: you want one run that has the de facto vanilla configuration, and then one that has the tuned configuration that you all are working through.
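The vanilla-versus-accelerated comparison being described comes down to a simple calculation over two captured result sets. The numbers below are made up purely to show the arithmetic; they are not measurements of any real Istio build.

```python
# Hypothetical results from two runs of the same performance test: one against
# vanilla Istio and one against a build with accelerated TLS handshakes.
runs = {
    "istio-vanilla": {"p50_ms": 3.1, "p99_ms": 12.4, "cpu_millicores": 410},
    "istio-accelerated": {"p50_ms": 2.6, "p99_ms": 9.8, "cpu_millicores": 350},
}

def percent_reduction(metric):
    base = runs["istio-vanilla"][metric]
    cand = runs["istio-accelerated"][metric]
    return round((base - cand) / base * 100, 1)

for metric in ("p50_ms", "p99_ms", "cpu_millicores"):
    print(f"{metric}: {percent_reduction(metric)}% lower with acceleration")
```

Because both runs capture the same SMP signals, the comparison is apples to apples; that is the value of the common measurement the spec defines.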
B
Yes, yes, exactly; this looks very, very familiar.

B
And you mentioned earlier that Meshery is implementing the SMP spec, so the spec is supporting all of this. Is that the correct statement?
D
Yes, right; though I would invert the direction of that last sentence. Let me show you a visual to say that yes is the answer: Meshery implements SMP. Meshery's concerns and its goals, the things that it currently does and the things that it aspires to do, are much broader than SMP's current focus on performance.

D
So, for example, you can load up your WebAssembly filters and it will apply them to the mesh. Part of that is very relevant to SMP, because maybe you want to be testing different configurations, understanding what the overhead of using WebAssembly filters versus something else is. So yes, Meshery implements SMP, but it does more; there's more.

D
It's got a big old vision and it's about halfway there: it's at v0.5, about halfway to a 1.0. The area where it does the best today, or the oldest feature set that it has, is around performance management. As a matter of fact, we were on a call with other folks from Intel.
D
She wasn't on this particular call. It was, I think, someone relatively new at Intel, Robert, who lives in Poland and has a long last name.

D
He asked a question in there, and, actually, it was just random that I mentioned that Meshery does provisioning of WebAssembly filters. As it turns out, in coordination with the University of Texas at Austin, which so happens to be where I live, there was a talk given on doing just that: using Meshery to analyze the difference between running something using a WebAssembly filter versus a native filter, what tends to be the difference; and what if you were doing path-based routing, or round-robin load balancing, or context-based routing? There are so many variables to consider. Anyway, it was a start on some of that. So, good. Navendu, one thing we should probably take a note on is how to proceed with some of those sample apps.
D
Sakari, there is the Hipster Shop, Google's microservices app.

D
So cool. There's so much going on that I don't know why we didn't see it just now, but it was supported in the past; it was included, and I'm not sure why it's not included now. But absolutely it can be put back in. And actually, if you don't mind, if that's of interest, or if you think it might be of interest, would you open a brief issue just to request support for it?
D
Good, okay. So basically we were just going through these different areas. We do desire to take some representative workloads and run through them, now that Meshery is much more capable: it'll help us automate the setup of environments (Kubernetes environments, the different configurations of service meshes, a bunch of different service meshes, and the sample apps), and it'll capture the performance.

D
Now it becomes a lot easier to actually go do those studies and publish some results. So, Sakari, it's desirable, it would be wonderful, to talk about any of the things that we're learning here at the upcoming KubeCon in a few months. One of those could be, if we end up talking more about it, the work that you all have been doing on...
D
...TLS handshaking, if that's something we end up discussing here. Service Mesh Performance as a project: part of what it was intending to do is provide a venue for this info. It's a spec project, not an implementation itself, and as such it intends to provide a forum for discussion and the espousing of best practices, that kind of thing.
B
Yeah, I think we submitted quite a few proposals for KubeCon about the tunings that we have been doing.
F
This is the Meshery SMP action. It can be used for performing SMP tests in CI/CD pipelines. Service mesh projects can use this, people who use service meshes can use this, and other vendors can use this too. One of the use cases that we just heard was comparing vanilla Istio and the accelerated Istio: you can just put this into your CI and it will get the job of comparing them done for you.

F
Basically, this is some of the configuration that we will need to give it. This is a provider token; in Meshery terms, it is something that we use for authenticating with the provider, which is used for persisting your performance profiles, so that you can keep using them again and again over time and also compare previous results with new ones.

F
This is the platform on which you would be deploying Meshery. This is something that, right now... it might not be a required field later, but let's not talk about that now. These are two of the configuration fields that you can use for creating a performance test. One of them works by supplying a performance profile file, one that is compliant with the SMP protos; I'll quickly put up a link.

F
And selecting it: this right now is a short-lived token, but other projects which need a token with a longer expiry can be provided one. Now you go to settings and update the secret to contain this token.
D
Okay, yeah, nice, good. Well then, let's explore this action a little bit more. Rudraksh, with the action that you've written, you're trying to make it really convenient for people to embed a performance test in their CI, and that could be a continuous delivery pipeline or just simple CI. Some of the use cases that revolve around it are: different people are going to want to run different service meshes under different configurations.

D
So right now the action allows the user to specify which service mesh they want to perform the test against. And what does that look like? I guess we're looking at it right there: the configuration of the action that says, you know, run this mesh.
D
Right. Sorry, rather, from the user's perspective: if you're a user of the GitHub action, how do you simply specify which service mesh you want to run?
F
You can just create a performance profile for that and specify the service mesh in it, like I did in this one: I just specified it here. And then in my action I use this profile name, and Meshery automatically deploys Istio for me to perform that test.
B
Yeah, that's cool. But can you say something like: okay, this, I believe, deploys Istio; which version? And can you specify: pull our Istio from our repositories, or, you know, our Docker registries, instead of from upstream? Is that possible somehow?
F
It's not a feature in this one yet, but you can specify the path to your binary for the service mesh that you want to test. You can give this workflow that binary, or specify the path to it, and the Meshery adapter for that service mesh will take care of deploying that version of the service mesh for you.
D
I think the answer is resoundingly yes, Sakari, but there's a couple of ways. There's a powerful construct in Meshery that we haven't talked about on this call, referred to as a pattern. There's a whole book being written, through O'Reilly, on service mesh patterns and how to use service meshes well.

D
Each of the patterns described in there is importable into Meshery, so you can import the YAML description of the thing, which would say: a specific mesh, at a specific version, with certain configurations. You can import that into Meshery and it'll deploy that service mesh under that configuration, with an app that you might want to specify. And in the same breath, in that same pattern, you can specify the type of load test or performance test. So not only is what Rudraksh is saying true...
D
...you could also specify the test that you want to run against that, as part of such a run. Rudraksh, I don't know that you're accounting for this just now, but behind the scenes the action you were just showing is invoking a few different mesheryctl commands, so one of those commands can be to deploy an app that's been loaded into Meshery, deploy a service mesh, and apply a configuration to that mesh.

D
Tricky, but it's not that tricky. So, Sakari, just to qualify your use case: right now, each Meshery service mesh adapter, out of the box, will let you provision any number of different versions of the upstream service mesh. They'll also subsequently allow you to apply your custom configuration, the way that you want to configure the mesh.

D
Is it your use case that you also have a different binary? Is the custom bit that you're referring to configuration, or custom binaries of the mesh?
B
Custom,
binaries
and-
and
we
deploy
those
via
the
the
steel
ctl
operator,
overlays,
where
we
specify
that
okay,
this
you
know
this
image,
this
sdod
image
or
this
you
know
steel,
proxy
image
comes
not
from
the
upstream,
but
it
comes
from
from
here.
You
know
and
point
to
that
registry.
Nice.
D
Yep
yeah,
I
think,
then,
what
the
rude
good
rude
rocks.
That's
a
good
one
to
take
away
and
try
out.
I
think
I
think
that
what
you
were
just
highlighting
is
like
yeah,
hey,
here's
how
you
can
do
like
be
a
good
one
to
take
away
and
maybe
come
back
with
in
a
couple
weeks
on
just
explicit
instructions
on
like
hey
here's,
how
you
would
specify
that.
B
Guys
I
need
to
jump
to
another
call,
but
but
this
was
very
useful,
I'll
I'll
try
to
spend
now
some
more
time
on
the
s,
p
and
and
and
mystery
and
all
of
this
one
and.
D
Okay, good. We're kind of running toward the end of our time. We wanted to make sure, since Obvi is here and Nation is here, that we brainstormed a bit on the way that performance charts are displayed to users in Meshery; I think it's hard to understand. There are two things that can be done to improve it. So yeah, Rudraksh, if you have an example of a performance chart... there's something that can be done here that takes very little imagination.

D
It's very obvious, and it's the legend right up there: it says "Performance Graph." Okay, good one. I would submit to you that if we have to tell the user that they're looking at a performance graph, we're probably in a bad spot. Instead of "Performance Graph," it can say the name of the test, or the test ID.
D
Yeah, quite possibly. By the way, as we walk through this, I don't have a preconceived answer in my mind.

D
Rudraksh, if you go back to the other view. Okay, so we're in this tabular view: the user has some high-level details about the particulars of that performance profile, its queries per second, how long it was run, and then what some of the statistical averages were. Good, that's great; that's good high-level info, along with what mesh they ran it on. Okay, this is good.

D
Yes, and if you click on the graph again, the chart: it would make a whole lot of sense to me that the title of the thing is that high-level detail. It's either the performance profile name and then the actual result ID, potentially above the chart, or below it, simply that tabular detail again: either a bulleted structure with some bolding, or row-based, you know, columns and rows.

D
And even for, I think, the seasoned engineer, it takes a little while to interpret, and I don't know that we're going to have better suggestions other than just aesthetics. We might be able to improve some aesthetics, but the way that the histogram is created and the calculations are done...
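To make the histogram discussion concrete, here is a small sketch of the kind of latency bucketing such a chart is built from. The bucket edges are arbitrary, chosen only to illustrate the calculation; they are not the edges Meshery's charting actually uses.

```python
from collections import Counter

# Count latency samples into buckets: a sample lands in the first bucket whose
# upper edge it does not exceed; anything past the last edge goes to +inf.
def histogram(latencies_ms, edges=(1, 2, 5, 10, 50)):
    counts = Counter()
    for v in latencies_ms:
        bucket = next((e for e in edges if v <= e), float("inf"))
        counts[bucket] += 1
    return dict(counts)

h = histogram([0.8, 1.5, 1.9, 4.2, 7.7, 42.0, 120.0])
print(h)
```

How the bucket edges are chosen (and whether they are linear or logarithmic) is a large part of how readable the resulting chart is, which is the aesthetics question being raised here.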
D
One other thing, and use your imaginations too: I'm not saying we want to go grab a whole new charting library and all that. I don't think it's worth it for us to go do that rework. What's worth it for us to focus on is the use cases that Sakari and others are bringing. One of those things came up when we were sitting on a call with the other Intel engineer earlier this week. Rudraksh, if you go back to the main performance menu.

D
And I think... nothing happens, right? Nothing happens, yeah. So this is the next area to do something: when someone clicks on that link, or on an entry, what should happen?

D
It was never created, yeah. Okay. I think that the answer here is that that individual entry represents an individual performance test.

D
Right, yeah, exactly: the same tabular layout that you see of all the performance test results. This is just a calendar layout of the same thing. And then in the tabular version, when you want to drill into the results themselves, you click on the performance chart and it shows you the graph. So I guess the thought here is: if someone clicks on an individual entry, why not just show them the same modal that says, well, here are the results?
D
Yes, because it's hard to read the calendar, yeah.

D
Well, just beneath that heading with the charcoal-ish, blue-grayish color, there could be a subsequent, secondary, fairly thin toolbar or reference for other things that you can do on this page. It's like an in-page navigator: jump to the metrics, jump to the profiles, jump to the calendar, that type of thing, either there or maybe on the right-hand side or something.

D
Anyway, for now: one thing to be able to do as well is maybe let people grab the cards and drag them; that might be nice. But Obvi, to your suggestion, it probably is too much in the view, and so, yeah... I guess it depends on the width of the screen.
D
Nice, yeah. The ones that are probably closest at hand for you: there's a bunch of things going on with GraphQL and resolvers that are written in Go, and those are adjacent to the UI; the things that you're showing in the UI are coming over either REST or GraphQL. So, stepping in that way: Barak and Dhruv, there are actually open PRs right now, in Golang, on the resolvers, specifically on performance. So yeah.

D
I know these aren't necessarily starter issues, but don't let that stop you.

D
And those are good guys to be in contact with as well; I know you already know them. Okay, so to recap real quickly.
D
We talked about restyling the modal where we're showing the performance chart, and then we talked about adding a modal, so that when someone clicks on a single entry we would show them the performance chart there as well.

D
I was saying, about the results here: if you're sitting here on the UI, you're looking at one test result for this profile; then you go over to mesheryctl and you run a few tests...
F
Yeah, that's the thing I was actually talking about, a smaller thing: when you run a test using mesheryctl, the mesh field here doesn't get reflected. Like, this test used Istio as its service mesh, but you don't see it here; whereas if you do the test using the UI, you will see Istio written here.
D
Yeah, that's a good one; we should raise an issue. The issue is that on the CLI you're specifying the adapter name, which is distinct from this label; this is just a label here.

E
Yeah, I'll look into it; I'll see what the issue is and try to figure it out.

D
And then, for the other performance chart stuff, do we have any takers?
E
Yep, I'll create issues and I'll keep tracking them, and we'll see if anyone else is interested. I'll also keep working on the master.

E
So, hey, yeah, I had just one confusion, maybe from a previous meeting, about the WASM workloads: the question was whether the plan was to implement the WASM workloads inside the filters section, right? We can edit filters with the help of those forms, so actually the filter is not...