From YouTube: Meshery, the Service Mesh Manager
Description
Meshery is a CNCF project: https://meshery.io.
Meshery provides:
- Performance Management - for workloads on and off of service meshes, and inside and outside of Kubernetes clusters.
- Configuration Management - with deployment of established usage patterns and analysis against configuration best practices; integration of Open Application Model.
- Lifecycle Management - for service mesh provisioning and workload onboarding.
- Intelligence Management - for dynamic configuration and deployment of WebAssembly filters for Envoy.
- Interoperation and federation - by managing multiple service meshes concurrently.
A
Hello, and welcome to this special edition of the CNCF webinar series. We're going to talk about service meshes today, and specifically how to manage them with confidence. We're going to talk about the service mesh management plane called Meshery. Meshery has just entered into the CNCF.
A
It has actually been accompanied by a sibling project called Service Mesh Performance, and so these two are certainly part of the latest batch of projects to enter into the CNCF, and arguably the hottest batch to enter. So we're going to talk about some hot projects today. My name is Lee Calcote, and I'm joined by my esteemed colleague.
A
Navendu and I spend a lot of time focused on service meshing, and Meshery is sort of the largest open source project that we spend time on in the service mesh community at Layer5. There's a lot of cloud native application networking that goes on there. There are a few projects that satellite Meshery, and we'll touch on a couple of those projects today. They are important to Meshery; they are extensions.
A
Some of them are extensions to Meshery, some of them stand alone, but we're going to get into this fairly heavily. There are a number of CNCF logos, as you're seeing on the screen. Some of these projects are now donated to the CNCF, and some of these are initiatives that are hosted within the CNCF Service Mesh Working Group. We'll talk a little bit more about that group and how you can get involved, not only in that working group but also in Meshery, toward the end of the session.
A
So with that, I'll also say that the community these projects are built within is alive and kicking. Meshery itself has a little over 300 contributors and about 15 maintainers; like Navendu was saying, he is one of those.
A
You
know
fortunate
to
have
well
a
diverse
community,
and
one
that's
you
know
includes
some
well
some
names
that
you
might
recognize
a
couple
of
maintainers
from
red
hat,
rackspace
intel,
oh
hashicorp,
and
a
variety
of
places
so
that
there's
measuries
that
the
project
itself
has
a
you
know
a
large
vision,
we'll
we'll
talk
all
about
that.
Mesherie
also
participates
and
has
for
a
couple
of
years
now
in
the
lfx.
Well,
I
guess
before
it
was
called
halifax,
the
community
bridge,
but
the
lfx
mentorship
program.
A
It's actually ranked the number one mentorship project, so that's certainly a point of pride for some of the MeshMates in the Layer5 community. We're coming up on about a thousand users, a thousand folks who've dug their teeth into Meshery, so that's helpful as well. You'll notice one of the things down here is this statistic about performance tests. Meshery has a number of feature areas, a number of aspects of a service mesh that are managed.
A
One of those initial areas was focused on performance, and it was built out of the desire to analyze the performance of a mesh, some of the characteristics around running service meshes efficiently. There are a number of anonymized performance test results that are collected as users send those in, and they are to be presented for analysis as part of the CNCF Service Mesh Working Group that we were talking about before.
A
So as we unfold what a management plane is, it would probably serve us well for a moment to talk about network planes in this regard. To characterize a service mesh at a high level, architecturally there are generally two planes that a service mesh is comprised of: one of those is the data plane, the other is the control plane. If you're a network engineer, or a dyed-in-the-wool network engineer,
A
these terms are probably really familiar to you, as is the term management plane. If you're not, but you've spent some time around Kubernetes and other systems in the cloud native ecosystem, these terms probably aren't too far removed, or you probably have some familiarity with what they do and what they're responsible for. In the land of service meshes, the data plane is really
A
the collection of intelligent proxies that work in unison and work under the command and control of the control plane. To speak generically, what a control plane does for a service mesh is really configuration management of these intelligent proxies.
A
The proxies are something of the workhorse. They're lifting the packets and inspecting them, sending the packets along their way, rejecting them, denying them, injecting chaos, securing them, doing lots of things to the requests and packets that flow. It's a lot of control that a service mesh brings to you, the user; a lot of insight, telemetry, security, and observability in there; a lot of power in the network.
A
Which
becomes
really
important
when
you're
running
distributed
systems,
the
network
is
fallible,
unfortunately,
it's
mostly
dns,
but
the
network,
the
other
aspects
of
the
network,
are
fallible
as
well.
So
a
management
plane
well
can
do
any
number
of
things,
but
essentially
help
you
integrate
service
meshes
into
your
back-end
systems.
A
Traffic engineering, or advanced canarying of your applications; deeper insights into the performance of your applications and of your infrastructure; maybe bringing change management or workflow to things that transpire in a mesh. There's a long list of things that are possible to do, and part of those possibilities are brought about by intelligent filters that can be loaded into each proxy
A
That's
running
in
a
match,
we'll
talk
about
well,
we'll
talk
about
webassembly
a
little
bit
and
and
we'll
talk
about
some
of
the
the
proxies
that
support
webassembly
and
some
that
don't
but
are
still
extendable,
still
have
a
plug-in
model
in
which
you
can
dynamically
insert
different
traffic
filters
to
well
to
do
a
number
of
things.
So
this
is
the
wheelhouse
of
meshrey.
These
are
the
things
that
meshri
has
the
service.
Mesh
management
plane
is
its
area
of
focus
to
articulate
that
a
little
bit
differently
mescheri.
A
Meshery performs a number of things around lifecycle management of, well, ten different service meshes, actually. There's been a lot of time invested in building out service mesh adapters, so Meshery supports the logos that you see below; it has one adapter for each of these service meshes. There are a couple more on the roadmap, and we'll see how long it takes to get those out. But suffice it to say, Meshery does lifecycle management of those service meshes.
A
If a 1.0 is sort of the proverbial mark by which its architecture is complete, then Meshery is about halfway there. So it's not a small project. Meshery also does configuration management of service meshes themselves, and hopefully encourages you, or enables you, to operate your mesh with confidence. Meshery also has an emerging concept being built into it right now, and that's around service mesh patterns.
A
I
guess
we
should
say
briefly
that
just
kind
of
to
the
note
that's
here,
that
there
are
a
couple
of
service
mesh
well
specifications,
perhaps
maybe
one
or
two
or
three
of
which
there's
there's
really
three
we'll
talk
about
two,
maybe
two
of
which
that
you're
familiar
with
so
smi
service
mesh
interface
meshery,
has
been.
It
was
an
initial
launch
partner
with
smi
and
continues
to
be
very
much
involved
in
smi.
We'll
talk
about
some
of
its
involvement
measuring
also
implements
service
mesh
performance.
A
With
that,
this
might
be
a
good
time
to
well
the
vendor
to
let
people
poke
around
measuring
a
bit.
B
So,
as
we
mentioned,
measuring,
helps
manage,
helps
manage
the
life
cycle
of
size
measures,
so
you
get
permission,
depression,
service
measures,
applications
and
all
those
other
stuff
onto
your
cluster,
and
the
misery
also
connects
to
your
kubernetes
cluster
automatically,
and
it
also
discovers
services
that
are
already
running
inside
your
cluster,
even
if
you
deploy
it
without
measuring
and
we
also
have
adapters
specific
to
each
service
mesh.
So
each
search
mesh
has
its
own
set
of
features.
B
Some service meshes may have fewer features and some may have more advanced features, so to leverage the maximum functionality from each service mesh, Meshery has a separate adapter for each of them. Meshery also lets you integrate with your Prometheus and Grafana add-ons, so you can import your existing Grafana dashboards into Meshery. When you first start Meshery, there is also a configuration wizard, which basically walks you through the entire setup to get Meshery up and running.
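The wizard-driven setup Navendu describes can also be done from the command line. A minimal sketch, assuming the `mesheryctl` CLI is installed and Docker is available; subcommand names reflect the v0.5-era CLI and may differ in later releases:

```shell
# Download and start Meshery and its adapters locally
mesheryctl system start

# Verify that the Meshery server and its adapters are running
mesheryctl system status
```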
B
You can define your performance tests so that they can be run repeatedly and reused. We'll look at some of this in a couple of minutes, but Meshery has all these functionalities. Yep, back to you.
A
So
speaking
of
mesher's
functionalities,
it's
masri
as
a
project
takes
to
heart
this.
The
concept
of
being
extensible,
so
measuring
is
something
of
an
extensible
platform.
So
this
is
this
slide
is
a
an
exploded
view
of
some
of
the
internals
of
meshri
and
and
how
it
works
some
of
the
components
inside
of
it.
A
It highlights the notion that there are any number of extension points: areas where new adapters can be added, where additional load generators can be supported (Meshery supports three types of load generators today), or where additional out-of-the-box patterns, which we'll talk about in a little bit, can be added.
A
We find that some users make a decision one way or the other in part based on how intensely they're using Meshery to analyze the performance of their clusters and the performance of their meshes. Sometimes they want that load to be generated off-cluster or in-cluster, maybe in a separate cluster, and so Meshery affords that choice for users. Each Kubernetes cluster is the recipient.
A
In
this
way,
measures
supports
not
only
greenfield
deployments
like
so
deploying
service
meshes
itself.
It
also
supports
connecting
to
existing
service
mesh
deployments
so
brownfield
deployment.
So
it
will
discover
your
existing
deployments
as
well.
A
There's
a
an
extensible
concept
in
mastery
called
a
provider,
and
so
providers
can
typically
offer
a
layer
of
persistence
so
to
the
extent
you're
that
the
user,
that
users
are
running
performance
tests
intensely
or
to
the
extent
that
users
want
to
have
a
particular
type
of
directory
integration.
So
they
want
to
bring
their
own
identity
to
mesheriy
and
have
a
multi-user
experience.
A
The
other
area
of
extensibility
is
the
notion
that
measurey
has
a
couple
of
apis
both
a
rest,
api
and
graphql
api
actually
comes
with
a
command
line
interface,
as
well
as
a
user
interface
that
what
novendi
was
just
showing
you
part
of
the
those
apis
have
been
used
to
build
out
two
different
github
actions.
So,
to
the
extent
that
you
that
you
want
to
pipeline
performance
management
into
your
ci
systems,
to
the
extent
that
that's
on
github,
you
can
do
that
relatively
easily
with
github
actions,
both
the
two
that
are
there.
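As a sketch of what pipelining performance management looks like, the workflow below wires a Meshery performance test into a GitHub workflow. The action name, version, and inputs are assumptions based on the Layer5 actions of this era; check the Layer5 GitHub organization for the published action and its documented inputs:

```yaml
name: Service Mesh Performance
on: [push]
jobs:
  performance-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Hypothetical invocation of the Meshery performance (SMP) action
      - name: Run Meshery performance benchmark
        uses: layer5io/meshery-smp-action@master
        with:
          platform: docker
          profile_name: demo-profile
          endpoint_url: http://localhost:2323/productpage
```

On each push this would run a load test against the given endpoint and surface the results in the workflow run.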
A
One is for performance management; the other is for SMI conformance, Service Mesh Interface conformance. We'll explain that in a minute, but before that we'll talk about the configuration management that Meshery does. I noted that Meshery will analyze your runtime environment for certain service meshes and tell you whether you're doing things right or not. If you see a lot of green, then you're doing things right.
A
If
you
see
some
red,
it
might
suggest
what
you
should,
what
you
should
potentially
look
at
changing,
so
service
mesh
or
smi,
conformance
that
github
action
that
we
were
just
talking
about.
A
You
can
run
smi
conformance
as
a
github
action
that
will
run
meshri
in
your
in
well
as
a
in
your
github
workflow
or
you
can
just
run
smi
conformance
separately,
and
so
what
is
smi
conformance
that
it's
the
notion
that
I
was
saying
earlier,
that
measures
been
a
partner
to
service
mesh
interface
since
its
inception
and
to
help
you
know
so.
A
Smi
has
had
enjoyed
some
amount
of
adoption
from,
I
think
is
it
seven
or
eight
different
service
meshes
a
number
of
them
and
so
mesherie
will
verify,
will
validate
each
implementation.
A
A
We
mentioned
also
that
mescheri
in
this,
in
the
current
release
that
it
has,
it
has
a
the
nascent
ability
to
manage
webassembly
filters
specifically
for
envoy
based
data
planes,
so
most
notably
initially
for
istio
and
in
mesherie's
next
release
and
it's
v0.6
release.
That'll,
be
a
generic
capability
for
any
envoy
based
data
plane
that
that's
running
an
envoy
that
supports
dynamically
loading
and
unloading,
webassembly
filters.
A
Good. And so maybe we can take a look at what some of that looks like in Meshery.
B
Yes,
yesterday,
so
one
of
the
things
that
we
discussed
is
validating
your
running
service
mesh
to
see
if
it
follows
the
best
practices
or
not.
So
this
is
the
istio
adapter,
so
sd
adapter
offers
a
lot
of
capabilities
like
provisioning
service
meshes
provisioning
sample
applications,
so
measuria
actually
supports
supports
your
own
applications,
which
you
can
bring
in
bring
in
with
you
so
other
than
the
sample
applications.
B
But
these
sample
applications
are
here
to
help
you
test
out
your
service
mesh
configuration,
and
it
also
offers
other
istio-specific
configurations
as
well,
and
one
of
the
things
we
talked
about
was
validating
your
configuration.
So
if
we
try
to
analyze
the
running
configuration
it
actually
wets
your
running
service
mesh
configuration
and
helps
you
ensure
that
you
are
running
things
in
the
best
possible
way
and
another
thing
that
we
talked
about
was
smi
conformance,
so.
B
To
offer
an
analysis
to
check
whether
their
service
mesh
is
smi
conformance
and
validates
it.
Actually,
as
so
here,
you
can
see
measuring
being
used
to
test
the
smi
conformance
of
open
service
mesh,
and
it
shows
how
much
of
the
test
that
it
actually
passes
so
and
the
reasons
the
areas
where
these
fails,
and
actually
we
also
talked
about
a
github
action.
B
So
some
of
these
mesh
projects
are
already
starting
to
adopt
the
smi
conformance,
github
action
and
using
that
in
their
ci
cd
pipelines
to
make
sure
that
they
they
the
service
mesh,
they
release
smi,
confirm
and
service
meshes
every
time
they
make
a
release,
and
we
also
talked
about
filters,
so
mushri
also
supports
wasan
filter.
So
this
is
in
a
really
early
stage
in
this
process,
and
this
will
be
released
publicly
in
the
next
release
of
measuring
and
we
also
looked
into
talked
about
bringing
in
your
own
applications.
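A release pipeline adopting the SMI conformance action, as described above, might look like the sketch below; the action name and inputs are assumptions, so consult the Layer5 GitHub organization for the published action:

```yaml
name: SMI Conformance
on:
  release:
    types: [published]
jobs:
  conformance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Hypothetical invocation of the SMI conformance action
      - name: Validate SMI conformance via Meshery
        uses: layer5io/meshery-smi-conformance-action@master
        with:
          provider_token: ${{ secrets.MESHERY_PROVIDER_TOKEN }}
```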
B
So
these
are
some
sample
applications
so
measuring
what
you
can
do
is
you
can
upload
your
applications
directly
into
measuring
edit
them
in
the
measuring
ui
itself
and
actually
apply
these
applications
or
onboard
these
applications
on
your
service
mesh.
As
well,
we
also
talked
a
bit
of
patterns,
but
I'll
go
back
to
lee
to
explain
a
bit
more
about
patterns
before
we
dive
in.
A
Sure,
yeah
yeah-
this
is
a
little
bit
of
so
this
is
one
of
the
areas
that
that
the
service
mesh
community
has
been
discussing
around
within
the
cncf
service.
Mesh
working
group
has
been
discussing
around
well
really
like
best
best
practices
of
behaviors,
of
a
mesh
of
how
to
take
advantage
of
different
features
and
functionality
of
the
mesh
in
a
way
in
which
yeah
espouses
best
practices.
A
So
those
are
in
the
process
of
being
codified,
there's
about
60
that
have
been
identified
thus
far
and
as
they
are
codified
this
this
is
sort
of
a
well
a
sample.
It's
not
sort
of.
It
literally
is
a
sample
simple
set
of
yaml.
That
describes
a
pattern.
I
guess
in
this
case,
for
an
istio
service
mesh
with
a
few
configurations,
but
suffice
to
say
that
patterns
well
are
hopefully
somewhat
concise.
A
The same YAML, the same pattern, is capable of describing the deployment of any of the ten meshes that Meshery supports, as well as the configuration of the mesh, but also things like ongoing behavior. So if you wanted to run a canary, you can describe that in a pattern as well. Patterns are like a template: they're customizable and ingestible into Meshery itself.
A
Meshery will take action based on what you've described in the pattern.
A
There are a few things that are coming. If you want to deploy a WebAssembly filter, you can describe that in a pattern as well and have Meshery apply it. The patterns are service mesh agnostic and reusable, and the initial set of them is being stored in a public-facing repository.
A
So
I
mentioned
that
the
initial
set
there's
been
like
60
that
have
been
identified,
and
so
there's
there
are
folks
that
are
are
working
through.
What
these
look
like,
measuring,
ultimately,
will
allow
you
to
ingest
these,
and
not
only
have
them
not
only
will
measure
then
orchestrate
and
apply
them
to
your
infrastructure,
but
you
can
also
use
meshrey
to
visually,
represent
them
and
to
visually
design
them
part
of
this.
So
I
mentioned
that.
There's
the
cncf
service
mesh
working
group
is
sort
of
hosting
and
stewarding
and
advancing
these
patterns.
A
They
are
also
being
written
up
into
a
book
service
mesh
patterns
from
o'reilly,
which
is
an
early
release.
So
there's
you
know
a
couple
of
patterns
that
are
in
that
early
release.
Now
I
encourage
you
go
check
it
out
or
actually
I
encourage
you
to
join
the
the
service
mesh
working
group
and
dig
in
to
some
of
those
patterns,
as
they're
being
defined
on
that
there
is
well
early
support
for
patterns
in
measuring.
B
Yes,
so
so
patterns
are,
you
can
basically
define
patterns
in
your
yaml
files,
so
this
is
an
example
of
the
book
info
application.
If
you
are
familiar
with
istio,
you
might
have
heard
about
book
info
application.
It's
the
sample
application
that
is
sdo
shapes
with.
B
So
what
you
can
do
is
define
your
configuration
of
your
service
mesh
as
well.
As
actually
mentioned,
the
ongoing
behavior
of
your
service
mesh
as
well,
but
like
another
capability
of
measuring
that
is
going
to
be
released,
released
in
the
upcoming
version
is
visually
configuring,
your
service
service,
mesh,
so
yeah.
So
this
is
a
mesh
map.
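To give a feel for what a pattern file might look like, here is a hypothetical sketch for the BookInfo scenario. The pattern schema was still being codified at the time of this webinar, so every field name below is illustrative rather than authoritative:

```yaml
# Illustrative service mesh pattern: deploy BookInfo and shift 10% of
# traffic to a new version. All field names here are hypothetical.
name: bookinfo-canary
services:
  bookinfo:
    type: Application
    namespace: default
    settings:
      version: v1
  canary:
    type: TrafficSplit
    namespace: default
    settings:
      new-version: v2
      weight: 10
```

The point of the format is that the same declaration is mesh-agnostic: Meshery's adapters translate it into the configuration of whichever mesh is deployed.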
B
So
what
you
can
see
here
actually
is
the
same
book
info
application
represented
visually,
so
you
can
configure
your
service
mesh
configurations
and
behaviors
visually
through
this
mesh
map,
and
you
can
also
add
filters
applications
as
well
as
make
other
configurations
visually
here
as
well,
and
you
can
export
it
as
patterns
to
to
make
it
reusable
quite
easily
and
it
also
automatically
discovers
so
what
you
are
seeing
is
that
messing
that
we
discussed
earlier.
B
We
it
automatically
figures
out
that
we
have
deployed
the
sample
application
and
it
generates
a
visual
representation
of
that.
Actually,
nice.
A
One
thing
that
is
probably
is
worth
noting,
as
well
like
navindu
had
said
it
is
well,
is
the
ability
for
for
for
users
for
you
to
design
your
service
mesh,
your
service
mesh
configuration
the
applications
that
run
on
it
and
to
do
that
using
any
of
the
meshes
that
meshrey
supports.
So
that's.
A
People find the effort being put into those patterns helpful; there are a number of users, there's a beta of MeshMap going on now, and some fairly excited people. All right, so I think there are two other projects to talk about.
A
So
one
of
those
is
the
service
mesh
performance
project
that
also
entered
into
the
cncf
part
of
the
goal
of
smp
service
mesh
performance
is
to
well
answer
an
age-old
question
age-old
like
from
the
dawn
of
service,
the
genesis
of
service
mesh,
and
that
is
what's
the
overhead
of
these
things.
How
do
I
know
if
I'm
running
them?
Well,
how
do
I
compare
against
myself?
How
do
I
baseline
my
environment
and
track
that
over
time?
A
The project is joined by maintainers from Intel, HashiCorp, Layer5, and Red Hat, so it's another place to come and get involved, and I encourage all of you to. Outside of just being a spec, there are a few other initiatives, some research going on with the university partners that you saw listed, some interesting research. Actually, just yesterday there was an IEEE publication coming out about some of that research, so keep your eyes peeled on this. Navendu?
B
Yeah,
yes,
so
actually
has
this
capability,
where
you
can
define
your
performance
test
into
profiles
as
well
as
you
can
integrate
with
grafana
and
prometheus
to
to
see
your
dashboards
and
see
all
the
metrics
that
are
valuable
to
you,
and
these
metrics
are
customizable,
and
another
cool
feature
is
to
is
that
you
can
actually
schedule
your
performance
test
to
run
automatically
so
yeah,
so
to
run
a
performance
test.
What
we
can
do
is.
A
Looking
at
it
looking
at
this
for
on,
if
you
take
a
look
at
the
the
okay
sure.
B
So
basically,
we
have
multiple
load,
generators,
fortio
and
recently
we
have
been
working
on
my
talk.
So
that
is
something
that
we
will
talk
about
soon.
So
this
offers
like
we
are
looking
into
distributed.
Performance
testing,
distributed
load
generation
as
well
and
oh
so
this
is,
we
can
basically
run
performance
test
on
your
service
meshes
even
outside
of
your
service
meshes
and
you
can
provide
urls
to
test
and
all
the
make
configurations
as
well.
So
let
me
show
you
a
performance
test
running.
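The same performance test can also be kicked off from the CLI instead of the UI. A sketch, assuming the v0.5-era `mesheryctl perf` subcommand; flag names may differ in later releases:

```shell
# Run a load test against a URL using the fortio load generator
mesheryctl perf apply demo-profile \
  --url http://localhost:2323/productpage \
  --concurrent-requests 5 \
  --duration 30s \
  --load-generator fortio
```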
B
What you can see are some metrics that describe your performance. You can see your p99 value, among other things. A powerful feature here is that you can compare multiple performance tests and see which one is performing better and which one is more suitable for you. So you can basically have multiple configurations of your mesh deployed, and compare your deployments by running performance tests against each other.
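The p99 figure Navendu points out is just a latency percentile over the sampled requests. As an illustration of the arithmetic only (this is not Meshery's own code), here is a nearest-rank percentile over a list of per-request latencies:

```python
# Compute latency percentiles (p50, p99, ...) of the kind reported in
# performance test results, from a list of per-request latencies.

def percentile(samples, p):
    """Nearest-rank percentile: smallest value >= p% of the samples."""
    ordered = sorted(samples)
    # ceil(p * n / 100) as a 1-based rank, clamped to at least 1
    rank = max(1, -(-p * len(ordered) // 100))
    return ordered[rank - 1]

latencies_ms = [12, 15, 11, 300, 14, 13, 16, 18, 12, 250]
p50 = percentile(latencies_ms, 50)
p99 = percentile(latencies_ms, 99)
print(p50, p99)  # 14 300
```

Comparing two test runs then reduces to putting their p50/p99 values side by side, which is what the comparison view shown in the demo does.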
B
We
also
have
a
github
action
that
runs
these
tests
on
your
pipeline,
so
you
can
test
your
performance
of
this
mesh
directly
from
your
pipelines,
while,
while
you
make
releases
or
while
you'll
ship
out
your
project,
new
new
release
as
well,
yep
nice.
A
Very
good,
so
service
mesh
performance,
good,
the
smp
as
a
project,
is
well
there's
a
frequent
question
about
what
the
difference
is
between
service
mesh
performance
specification
and
the
smi
service
mesh
interface
specification,
and
this
slide
articulates
a
bit
of
a
venn
diagram
about
their
areas
of
focus
and
how
they
complement.
They
were.
A
Were
designed
in
to
intentionally
overlap
and
interact,
and
so
hopefully
with
all
three
in
the
cncf
now
they
will
integrate
even
further
than
they
already
do
all
right.
Lastly,
if
you're
not
familiar
with
nighthawk
there's
a
there's,
a
project
called
get
nighthawk
that
it
helps
well
helps
advance.
A
The
existing
integration
of
nighthawk
and
mesherie
nighthawk
is
a
load
generator
that
is
of
the
envoy
project,
so
it's
written
in
c
plus
and
is
has
a
couple
of
intriguing
capabilities
that
are
the
ongoing
study
and
ongoing
effort
within
get
nighthawk,
and
that
is
to
well
take
advantage
of
nighthawks
adaptive.
Load
controllers.
A
Add
a
couple
of
those
in
and
expose
them
through
through
mesherie
to
let
people
recursively
evaluate
what
are
ultimately
an
optimal
configuration
in
your
environment.
It's
an
optimal
configuration
of
a
service
mesh.
If
you
consider
that
you've
got
a
certain
slo
or
a
certain,
you
know
minimum
latency
requirement
that
you
need
to
stick
to
that,
but
you're,
but
you
also
want
to
at
the
same
time
maximize
your
the
resiliency
characteristics
of
your
deployment.
A
Well,
that's
a
can
be
a
difficult
thing
to
figure
out,
particularly
if
any
of
your
infrastructure
changes.
If
you
add
another
node
to
your
environ,
your
clusters,
if
you
upgrade
your
service
mesh,
if
you
change
the
configuration
of
your
service
mesh,
if
you
add
another
service
to
your
set
of
workloads,
that
you're
running
those
things
change
and
so
the
ability
to
run
optimization
routines
and
help
you
identify
well
optimal
configuration
of
your
mesh,
but
in
accordance
with
your
own
constraints,
is
again.
A
That's
that's
what
this
project
is
about,
get
nighthawk,
and
so
there's
kind
of
a
mouthful
in
there
mesherie
again
is.
Is
that
a
v0.5?
So,
there's
a
fair
bit
more
to
the
roadmap,
a
number
of
things
that
we
haven't
spoken
about
today.
That's
maybe
the
agenda
of
another
cncf
webinar
with
that
navindi,
and
I
will
encourage
you
all
to
try.
A
Mesherie
go
there's
a
a
warm
and
welcoming
community
of
contributors
a
little
over
300,
as
I
was
mentioning
before
that
have
contributed
to
mastery
that
are
hungry
for
feedback
they
they
love
hearing
about
their
work,
so
go
try
mesherie
and
jump
into
the
service
mesh
community
thanks
for
spending
time
with
us
see
you
all
later.