From YouTube: Meshery Development Meeting - April 1st, 2020
Description
Welcome @vineet_s29!
D
Hi, I am Vineet Sharma. I'm at present a CSE undergrad at IIIT Sri City, and my areas of interest include cloud-native and distributed computing. My past work has been in full-stack development using Vue.js, React.js, and Django. Recently I took a course in cloud computing, wherein I got to know about Kubernetes and service meshes. So after going through some of the articles and blogs, I got interested in this topic, and then Lee approached me on Slack.
B
Good. You know, that's kind of the topic with those proposals. The nice thing about the GSoC proposals is that they have forced us to get a few things on paper that we didn't have on paper before, and helped us get a little bit organized. I might suggest that we talk about a couple of the topics in those proposals, but before doing that, let me ask if anybody's got something else to chat through.
A
I guess, you know, since I saw we're trying to get in touch with Envoy — if we can get started with at least initial testing of which services we can use, for example Nighthawk, or whichever else we can use. At least for those students who are selected for this, they can have a brief overview before that. I think we can have a small discussion on that.
B
Good, all right. So Anand, actually, just because your topic is a relatively short one, real quick: of the three UX improvements that we talked about, you know, for mesheryctl perf, it would be good if we filed just three quick issues on those. I think those can be done maybe independently or all at the same time.
B
Oh great. If you have used mesheryctl, the CLI for Meshery — there was a new subcommand added recently, perf, and it was added hastily, which was great. Kanishka was the one who added it. He went through it quickly, and then it was acknowledged that there were some loose ends, and so Anand, it would be good if you end up finishing it. Okay, yeah — you've got a microphone problem.
B
Kanishka did file a Google Summer of Code application around Envoy filters and the use of WebAssembly, which is great, so there's that kind of focus on Envoy. But this particular focus on Envoy is the notion that Meshery today supports two types of load generators. So I'm just signing in to Meshery. If I go over to the performance section in Meshery, it'll let me generate a bunch of — today, only HTTP — load and blast that at an endpoint. That endpoint can be something that's on the mesh or something that's off the mesh.
B
We initially supported fortio. fortio is a Google project that was born of the Istio ecosystem, built in and around Istio. wrk and wrk2 have been around a lot longer, and they're written in a different language: fortio is written in Go, while wrk, I recall, is written mostly in C — maybe there's some Lua used in wrk as well. One of the reasons to support wrk2 is that there is this concept of omission — or is it coordinated omission?
B
It's just a difference in the way in which the statistics are calculated for the latency and throughput of the requests — the HTTP requests that are generated by these load generators — and how quickly the service responds back. How those statistics are calculated is different between these tools.
B
They call it a layer 7 HTTP performance characterization tool. Now, there are a couple of things I want to mention before we talk more about Nighthawk, and that's that Meshery supports generating load over HTTP only, really.
B
This is one of the areas where Meshery will go deep — one of the areas of functionality for which people will look to Meshery as a unique tool that does service-mesh-specific things, around one of their prominent areas of concern. It's not just simply trying to understand how much the service mesh is costing me — like, you know, what's this overhead?
B
It's that, but it's more in the context of understanding and characterizing how much value the service mesh is providing me, and then telling me how much overhead that value costs. Because otherwise, if you're just looking at overhead, you're kind of looking at it in a vacuum. Well, there's fifty percent overhead — okay, well, is that good or bad? How much does that mean to me? There are actually these two questions we want to answer.
B
Actually, the answer to that question is a little bit of what I'm describing right now, but it's also, Shiva, the fact that what we want to do is create a design spec and start writing down all the stuff that I'm saying right now. You guys have written down ideas from your own perspectives, but I'll highlight some of this and give some context, because it's kind of interesting. If you think about this analogy for a second: every one of us is a consumer.
B
Every one of us goes to the store and buys something, or we go online and buy something — some of us buy stuff from amazon.com, which is a real shame, since AWS is not the most friendly cloud. As a consumer, there are kind of two things — and, you know, feel free to speak up and interrupt me here — two things that I consider you would ask yourself when you're trying to understand whether or not you should make that purchase.
B
One of those is just your own internal assessment of the value of the thing that you're purchasing and how much it's going to cost you to get it. But you usually have more information than just that: you have information about other similar products and how much they cost, the perceived value that they give you, their name brand, any implicit goodwill towards that name brand, and so on. Is it a luxury item? Is it on the cheaper end? That kind of thing. But that's one sort of measure — just, what value is it giving me?
B
I can report to my buddy over there and say, hey, how much memory footprint does the service mesh you're running have? And that would help, right, if you had a bunch of other people to compare to. Well, with Meshery, every time that someone runs a test, so long as they have their preferences set to send their anonymized performance results, the project collects those. If you go to meshery.io and scroll down —
B
You'll see that, you know, almost 700 people — or 700 tests that have been run — have had their results shared. As soon as we amass enough of those to get some statistically interesting analysis, we'll publish that analysis and tell people: hey, you know, according to your environment, your setup, the service mesh you have, and the configuration that you have, you're getting ripped off — like, you should only have 10% memory footprint overhead, or whatever it is. Just giving people that type of information.
B
It's kind of hard to do, because there are so many variables in the types of environments that you can run. But the more of these tests that are run, the greater the public service we can do for people in indicating to them whether or not they're doing it right — whether they're running their infrastructure well or not.
B
It's those kinds of signals and answers — that's the value, in part, that we're trying to provide to people. There's a bunch of other questions that get answered as part of this performance testing, and as part of distributed performance testing. Those questions are things like: how well does my service do under this load — not just load from one source, but multiple sources? The reality is you're going to be receiving load from various places. Some of that load will be generated internal to the cluster, from other internal services.
B
Some of the load — a lot of that load — will hopefully be coming from outside the cluster. It really just depends on what your applications and your workloads are. But the point is, today, while what we're providing in Meshery is sophisticated in nature, it's still relatively simple compared to distributed load generation and distributed performance testing. There are any number of other interesting questions that can be answered in doing so. So, recently —
B
There are kind of two repositories that are performance-related in the Envoy project. envoy-perf, I think, is a collection of various load generation tools, you know, Python scripts and that kind of thing. And then there's Nighthawk, which, as near as I understand, is a custom performance testing tool for Envoy, and which, I believe, had been given prior thought in its architecture as to how Nighthawk could be run in a distributed fashion. That's great — it can be controlled over gRPC. Great. We love gRPC.
B
It would work very well to coordinate, you know, a fleet of Nighthawks, if you will, over gRPC. There are some very promising things about engaging with Nighthawk. One is that its output can be in the same format as fortio's output. Great — that means there's much less work for me to do in terms of refactoring that output to be well formatted for display on Meshery's front end. The other thing that was encouraging is that someone had mentioned that Nighthawk was —
B
The thing is that we'll want to get clarification as to what Nighthawk's capabilities are right now in a distributed sense — whether instances are cognizant of one another, and whether there's any amount of coordination that they do. Probably the biggest ask that we would have of Nighthawk — the biggest one, in my mind, being that once that load is generated and the results are created by each individual Nighthawk — the biggest asset that would be nice for us is if that project facilitated the coalescing of those results into a single result set. Or, I should say —
B
Maybe a single result set where you can still look at those results individually — one view that's all combined, and then each one individually. That's not the easiest of work; it would be nice if that project does that. So that's kind of an open question. Good — so I went off and talked about this for a long time. One thing that I can say we should do is go and take —
B
This Meshery document template: make a copy of it and create a functional spec to begin to capture some of the thoughts that you guys have already had on the project, and an outline. We can then begin to share that in our community, in the Envoy community, and with the Nighthawk project itself.
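As a sketch of the result-coalescing ask described above — combining each generator instance's results into one set while keeping the individual views — here is a minimal Go illustration. The types and field names are invented for this sketch; real fortio/Nighthawk output is histogram-based rather than raw samples:

```go
package main

import (
	"fmt"
	"sort"
)

// Result is a simplified per-generator result set: raw latency samples in
// milliseconds. (Invented for illustration; real tools emit histograms.)
type Result struct {
	Generator   string
	LatenciesMs []float64
}

// Coalesce merges the samples from every generator into one combined
// Result while leaving the individual results untouched — mirroring the
// "one combined view plus each one individually" ask.
func Coalesce(results []Result) Result {
	combined := Result{Generator: "combined"}
	for _, r := range results {
		combined.LatenciesMs = append(combined.LatenciesMs, r.LatenciesMs...)
	}
	sort.Float64s(combined.LatenciesMs)
	return combined
}

// Percentile returns the p-th percentile (0-100) by the nearest-rank method.
func (r Result) Percentile(p float64) float64 {
	s := append([]float64(nil), r.LatenciesMs...)
	sort.Float64s(s)
	rank := int(p/100*float64(len(s)) + 0.5)
	if rank < 1 {
		rank = 1
	}
	if rank > len(s) {
		rank = len(s)
	}
	return s[rank-1]
}

func main() {
	a := Result{Generator: "nighthawk-1", LatenciesMs: []float64{1, 2, 3, 4, 50}}
	b := Result{Generator: "nighthawk-2", LatenciesMs: []float64{2, 2, 3, 5, 90}}
	all := Coalesce([]Result{a, b})
	fmt.Printf("combined p50=%.0fms p90=%.0fms over %d samples\n",
		all.Percentile(50), all.Percentile(90), len(all.LatenciesMs))
}
```

The design point is that the combined set is computed from the individual sets rather than replacing them, so a UI can still drill down into each generator's view.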
B
All right, I'm going to put the link to that in the chat — now that I did all that work to create the document, you can give it a title. So Shubham asked a great question about gRPC and TCP, and maybe adding that support in, and whether we saw a timeline. I think there have been two answers to this question — just to repeat it:
B
Yeah — so that's on the open roadmap, and Shubham was asking, hey, can we take that on and include it as part of the timeline for GSoC? The GSoC time frame just passed yesterday in terms of student submissions, so, technically, I don't think that we can — but that's certainly something that we can take on and look to accomplish in the same time frame in the community.
B
What other thoughts do you guys have on this? This is certainly something that I'd want to build up momentum with: begin to get a bunch of thoughts down, iterate amongst ourselves about how we think we might approach it, and then begin to socialize that with the Nighthawk team and see what they're thinking.
E
We know that we want to go ahead with Nighthawk and not Vegeta. I've been looking at the Nighthawk code since yesterday, and the build system they are using is Bazel. One more thing: they are completely relying upon bash scripts and Python for the load testing. Also, the Nighthawk maintainers mentioned today that they are looking at distributed load testing, so it might take them one to one and a half months to initiate the multiple-load-testing feature in Nighthawk. And coming to the integration —
B
Right, yeah — I don't know that people have very good experiences with Bazel, necessarily, either. Yeah, Kush, it's a good point. What Jacob, I guess, said here makes it sound like Nighthawk doesn't yet support distributed load generation, and maybe what someone else had said above was slightly misleading, because it sounded like it did. So yeah, it's a good point — it doesn't, then.
B
If memory serves, fortio is included as a library inside the same Meshery container — inside the same Meshery build — which is great. wrk2 is included in the same Meshery container, although I believe it is sitting there as a separate binary — not a separate process, but a separate binary. So I don't think that we're building wrk2; this is something to go look at. Whereas I think we are incorporating fortio as a Golang library, but not wrk2, given that it's written in a different language.
B
That makes it kind of a pain when you go to do development on your local system. So I'm on a Mac right now: if I'm building Meshery and running Meshery on my local host, on my local machine, and I want to go perform a performance test using wrk2, I need to be running wrk2 in a separate container, not on my local host, because, yeah —
B
The binary I think we're using isn't compiled for my architecture. And so the point is, yeah, it's nice when — I think initially we had included fortio as a separate container, but after a while, you know, with Meshery as an application, there's this sort of cognitive overload that people get when they see that they're deploying an app and there are, like, you know, ten to twenty-something containers.
B
That is great. We will, I suspect, need to manipulate it a little bit to make it the same JSON format so that the UI doesn't need to change, and we'll want to, you know, do that data processing properly in the Meshery server. But this one actually might not be — like, to incorporate Vegeta as a non-distributed load generator, in the same way that we support fortio today, might be not such a big piece of work.
B
The reason that I leaned into Nighthawk as kind of the default position is largely the same reason why we leaned into fortio initially as well: fortio is so close to Istio, and Nighthawk is so close to Envoy. These are just, you know, very natural relationships, and it's a good environment and community for us to be in. But that doesn't mean that it's necessarily the right thing to do — or at least not the right thing to do without first doing what we usually do: articulate some of the use
B
cases, or some of the ways in which the distributed load results would be shown. So as you're familiarizing yourself with Vegeta, some of the questions that we'll need to look at are like: okay, well, you know, beyond just the formatting of the JSON and the ability to call it and invoke it — hey, if Vegeta is run in a distributed fashion, does it bring back all of its... actually, I think it does, right? It brings back all of its results into the same report.
B
Maybe we spin out something of a working group. I think there's probably enough interest, and the performance topic at large is a meaty enough topic, to have a working group that meets probably once a week to focus the discussion just on advancing this initiative — to make that a productive discussion, working through the design spec for distributed performance testing, and incorporating thoughts from the things you guys have put into your GSoC applications. And then, also, there's a very related concept here —
B
One this community will make a mark on, and it's this performance specification called the service mesh performance specification, SMPS. It was originated at Google, and we've adapted it and iterated on it a little bit. The performance specification is really just a YAML file. It's one of the repositories under the Layer5 org: service-mesh-performance-specification. It's a common format for describing and capturing performance benchmark tests and results — and it doesn't have to be benchmark performance.
B
It's kind of — it's a few things. It's a YAML file that lets you describe what type of test configuration you're running. So, Kush, we were talking about performance test profiles — the ability to allow a user to create a profile. Maybe they want to run, you know, a soak test for two days under a certain type of configuration, and it would just be convenient for them to save that config.
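As a sketch of what such a saved profile might look like, here is a hypothetical YAML file in the spirit of the spec. The field names are illustrative only — they are not taken from the actual service-mesh-performance-specification repository:

```yaml
# Hypothetical performance test profile -- field names are illustrative.
test:
  name: bookinfo-soak
  duration: 48h            # e.g. a two-day soak test
  load:
    qps: 100
    connections: 16
  endpoint: http://productpage.bookinfo.svc:9080
environment:
  kubernetes_version: "1.15.5"
  nodes: 3
  node_size: 4vCPU-16GB
mesh:
  type: istio
  version: "1.5"
results:
  latencies_ms:
    p50: 3.1
    p90: 7.9
    p99: 22.4
```

The point of a common format like this is that both the test configuration and its results travel together, so shared runs can be compared across environments.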
B
This is an example — this is the spec, and it kind of has example details filled in. The spec needs to be iterated on; it hasn't been a focus of this community for a while. There are the beginnings of an implementation of this spec, done in Meshery. So if you go into Meshery and you've run a performance test, you can go over and choose to download it.
B
So the reason that I say it's just the beginnings of an implementation of the spec is because, right now, that results file, if you download it, will only have so much detail. It has the statistical calculations for the test results of that performance run, but it doesn't have nearly enough other information. It says the environment for this run was, you know, 1.15.5 — but how many nodes were there? How big were they? The Meshery implementation has only gotten so far.
B
Apdex is a bit dated, maybe — or it's been around for a while. It is an application performance index: an open standard for measuring the performance of software applications. Its purpose is to convert performance measurements into insights — in this case, they were saying, about user satisfaction — by specifying a uniform way to analyze and report on the degree to which measured performance meets user expectations. So they have this whole —
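The Apdex formula itself is simple: samples at or below a target threshold T count as satisfied, samples between T and 4T count as tolerating (at half weight), and anything slower counts as frustrated. A small Go sketch, with sample values invented for illustration:

```go
package main

import "fmt"

// Apdex computes the Application Performance Index for a set of response
// times given a target threshold T: samples at or under T are "satisfied",
// samples at or under 4T are "tolerating" (counted at half weight), and
// anything slower is "frustrated". The score is
// (satisfied + tolerating/2) / total, a value in [0, 1].
func Apdex(samplesMs []float64, targetMs float64) float64 {
	if len(samplesMs) == 0 {
		return 0
	}
	var satisfied, tolerating float64
	for _, s := range samplesMs {
		switch {
		case s <= targetMs:
			satisfied++
		case s <= 4*targetMs:
			tolerating++
		}
	}
	return (satisfied + tolerating/2) / float64(len(samplesMs))
}

func main() {
	// With a 100ms target: 6 satisfied, 2 tolerating, 2 frustrated.
	samples := []float64{20, 40, 60, 80, 90, 100, 150, 300, 500, 900}
	fmt.Printf("Apdex = %.2f\n", Apdex(samples, 100)) // (6 + 2/2) / 10 = 0.70
}
```

A mesh-oriented index along these lines would presumably swap in mesh-specific inputs, but the boil-it-down-to-one-number shape is the same.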
B
This whole thing. So our point here is really, once we've gotten a common format described, to then come out with a formula — a MeshDex formula — for calculating the MeshDex of any given service mesh, going back to what I was saying before about weighing the value of what you're getting out of your mesh compared to the overhead of what —
B
But if you boil that down — if you put that into a formula and boil it down to a simple number, like, hey, you're currently running your service mesh at an 87, or a 0.87 — okay, that's a lot easier to comprehend and compare to others. Like, yeah, hey, others that are running a similar environment are generally running at a 92; maybe you should, you know, maybe you should change something. And part of what must be —
B
A WebAssembly filter is sitting there watching traffic — whether it's load that's been generated or just regular user traffic — and, in real time, calculating that index, and then Meshery showing that back to people, saying, yeah, hey, this service right here is running at, you know, a 62 MeshDex. Maybe that's great, or maybe that's not — there's a lot of context that needs to go along with that.
B
But these are the kinds of industry-standard-setting things that this community is well positioned to make happen. So, that service mesh performance index and the specification — I intend, with your help, hopefully, that we would take that to either the CNCF or a different standards body and try to make that a thing.
B
So there are a number of things to come make your name on, so to speak. We're in very early days in terms of service meshes and their ubiquitous adoption. Eventually — within a year, a couple of years — it should be the case that everywhere you see a Kubernetes, you see a service mesh. And if that's the case, we're talking about lots and lots of installations — lots and lots of people coming to learn.
B
Well, that is lots and lots of people coming to use Meshery, because it's positioned to be the industry-standard tool for managing service meshes. So, part of understanding that MeshDex number: if I was running a service mesh and it says, you know, you're running it at a 73 — okay, well, thank you. Like, what am I supposed to do about that? I hear you telling me that's not so great compared to others that are running similar setups. What am I supposed to do?
B
Well, you can use Meshery to give you some of those insights. You can check — validate — your service mesh configuration against best practices. So, right now, I'm not really running any workloads in my environment, so my configuration is just fine. But what we're doing here is running individual linters — individual best-practice checkers, or validators — that check for certain things. This checker checks for conflicting virtual service hosts. Virtual services are an Istio-specific construct.
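A conflicting-hosts check like the one being shown can be sketched in a few lines of Go. This is not Meshery's actual validator — the types below are pared-down stand-ins for the Istio resource:

```go
package main

import "fmt"

// VirtualService is a minimal stand-in for the Istio resource of the same
// name: just its metadata name and the hosts it claims.
type VirtualService struct {
	Name  string
	Hosts []string
}

// ConflictingHosts reports every host that is claimed by more than one
// VirtualService -- a common Istio misconfiguration, since overlapping
// host claims make routing behavior ambiguous.
func ConflictingHosts(vss []VirtualService) map[string][]string {
	claims := map[string][]string{}
	for _, vs := range vss {
		for _, h := range vs.Hosts {
			claims[h] = append(claims[h], vs.Name)
		}
	}
	conflicts := map[string][]string{}
	for host, names := range claims {
		if len(names) > 1 {
			conflicts[host] = names
		}
	}
	return conflicts
}

func main() {
	vss := []VirtualService{
		{Name: "reviews-route", Hosts: []string{"reviews.prod.svc.cluster.local"}},
		{Name: "reviews-canary", Hosts: []string{"reviews.prod.svc.cluster.local"}},
		{Name: "ratings-route", Hosts: []string{"ratings.prod.svc.cluster.local"}},
	}
	for host, names := range ConflictingHosts(vss) {
		fmt.Printf("host %q is claimed by %v\n", host, names)
	}
}
```

Per-mesh validators in each adapter would follow this same shape: inspect the mesh's own resource types and flag configurations that violate that mesh's best practices.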
B
The Istio adapter has, like, nine or ten of those built in. None of the other adapters have those built in, and that's something that we'd like for each of the adapters to have, because different meshes have different best practices. We've gotten kind of started on that with Istio. So, on that — I presented quite a few topics and things that we want to get to, and I see people have been taking meticulous notes here, so that's great.
B
It's not just continual performance benchmarking, or validation against a baseline. Like, yeah, you definitely want to do that, because your environment changes, things change, you're going to deploy a new version of your app, and you'll want to test your performance again and again. But more than that, a cron job — or, eventually, policies inside of Meshery — is going to become important, because, well, hey, I just checked to verify that my Istio config is running against best practices.
B
It looked pretty good so far, but then, you know, earlier today I actually changed something — so am I still running best practices? Well, I don't know. Maybe this should be running and checking and confirming. So there's another example of, like, hey, yeah, something else that would be scheduled. Beyond that, at some point we're going to want to have policies, because at some point, as a user who looks at it enough, after a while I'm going to say: look, actually, I don't really like this checker.
B
It doesn't really make sense to me. I don't, you know, care — it's always red for us. Forget it; I don't want to see the little red notification up here. So I want to be able to configure my own policy that says: here are the things that are important to me. Or, maybe I learned something special or unique to my environment that I want Meshery to check. So this needs to be an area of extensibility in Meshery, for people to be able to build in their own best practices.
B
One of the two of them does bring out some additional use cases about, well, helping me identify where it is that you want to run those distributed load generators. So Meshery will — I think, to the extent that people want to do that load generation in-cluster, like from inside their Kubernetes cluster, that will be something that's relatively easy for Meshery to orchestrate, for Meshery to coordinate, because with Meshery —
B
We can go talk to Kubernetes and say: hey, go deploy a DaemonSet, maybe, or go deploy this new deployment that contains any number of instances of Vegeta — instances of that distributed load generator. And then Meshery can send that newly deployed application commands to have it spin up, generate load, collect the results, and do the things it needs to do. But there's some amount of that —
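As a sketch of the kind of manifest Meshery might hand to Kubernetes here, the following is a hypothetical Deployment of Vegeta-based generators. The image, target URL, and replica count are illustrative only, not anything Meshery actually ships:

```yaml
# Hypothetical in-cluster load-generator Deployment -- illustrative only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: meshery-loadgen
spec:
  replicas: 4                  # four coordinated generator instances
  selector:
    matchLabels:
      app: meshery-loadgen
  template:
    metadata:
      labels:
        app: meshery-loadgen
    spec:
      containers:
      - name: vegeta
        image: peterevans/vegeta   # a community Vegeta image; verify before use
        command: ["/bin/sh", "-c"]
        args:
        - echo "GET http://productpage:9080/" |
          vegeta attack -duration=60s -rate=100 |
          vegeta report -type=json
```

A DaemonSet, as mentioned, would be the alternative shape when you want exactly one generator per node rather than a chosen replica count.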
B
So, given that we've reached the top of the hour on our meeting, we will stop here. Folks have got the links to these docs; these are good things to go through, digest, and comment on. Google is ready to engage on advancing that spec, a few of them are in our Slack now, and it looks like the Envoy folks were receptive.