From YouTube: Lee Calcote, Founder, Layer5 | Service Mesh Day 2019
Description
Service Meshes, but at what cost?
A
One of those was from Larry Peterson earlier. I hadn't seen Larry speak before, but I've had my noodle tickled at least once today, which is nice. Larry talked about the disruptor's dilemma, and, if I can butcher it, it was something like: on one hand, you can do disaggregation as a path to innovation.
A
On the other hand, you can do integration to facilitate adoption. I think I'm butchering it, but it's the adoption bit that I wanted to focus on right now and talk to you guys a little bit about. This last year I gave a few different service mesh workshops, so I ended up speaking to a lot of people who are currently adopting or considering adopting, and there were lots of questions asked. But two were consistently asked, and maybe you've bugged me and some other members in the community enough to go do something about it.
A
Well,
it
sort
of
started
off
with
wow.
This
is
fantastic,
I
see
the
value
of
the
service
mesh.
These
are
awesome
right
because
you
know
what
else
would
be,
but
also
they'd,
be
saying
when
I
was
done
with
the
workshop
right
and
then
hey
and
then
they
said,
you
know
hey.
Where
do
I
get
started,
which
one?
What
do
you
recommend
there's
a
couple
out
there?
You
know
there's
various
ways
to
deploy
them.
You
know
what
what
do
you?
What
do
you
think
I'll
leave
you
I'll
leave
that
cliffhanger
there
I
won't
the
other.
A
One, though, was also asked every time, and you could kind of see the gears turning in the engineers' minds. It was something like: "Oh, wait a second, that's not for free. What's the catch? How much latency are we talking about here? What's the CPU that's going to burn, on top of all this value I'm getting for free with no code change?" There's always that little asterisk by that "no code change" thing; it's mostly true. And so I think that we should collectively help.
A
I want to introduce that tool to you today: Meshery. It's a multi-mesh performance benchmark and playground, an open-source tool that a small collection of folks have spent some of their extra cycles on, to come in and do what's hopefully a public service. Like all open source is, right? It's just... all right.
A
Well,
we're
getting
like
I'm,
not
sure
if
there's
like
sarcasm
floating
around
in
here
with
the
laughs
are
about,
but
speaking
of
I
think
Chris
had
talked
about
what
container
wars
I
have
to
say:
I
was
much
more
focused
and
centered
on
the
container
Orchestrator
Wars.
That
was
another
topic
that
I
went
around
talking
a
lot
about,
did
a
fair
bit
of
analysis
on
really
comparing
and
contrasting.
Okay,
the
four
most
prominent
at
the
time
container,
orchestrators,
baseless
marathon,
kubernetes,
swarm
nomads
and
felt
like
I
was
doing
a
very
structured
kind
of
fair
comparison.
A
One
of
those
categories
that
we
compared
on
was
the
was
scale
I
was
like
high-availability
scale
and
and
the
way
that
the
container
orchestrators
kind
of
measured
themselves
song
they're
they're
scale
was
around
the
scheduler
and
it
was
something
around
you
how
many
containers,
how
many
nodes?
How
quickly
you
know?
How
quickly
can
you
schedule
doesn't
get
them
up?
Each
of
the
container
orchestrators
use
each
of
those
projects.
A
So
those
are
part
of
the
things
we're
hoping
to
address
within
a
tool
like
this,
to
the
extent
that
you
can
get
to
an
apples
to
apples,
comparison,
I,
don't
I,
don't
know
that.
That's
it's
a
hard
thing
to
achieve,
but
hopefully
this
is
about
a
two-month-old
project.
So
so,
no
doubt
there
are
some
bugs
in
there.
It
has
anyone.
Have
we
done
a
live
demo?
Yet
today,
oh
okay,
I!
Oh
all,
right!
There
goes
mine,
okay!
A
Well,
thanks
for
that,
I'm
too
late,
we're
gonna,
I'm
gonna,
see
if
I
can't
demo
this,
but
it's
just
that.
You
know
simple
tool.
You
can
deploy
it
locally.
You
can
deploy
it
in
your
cluster.
You
can
deploy
it
on
the
mesh.
If
you
want
to,
although
to
the
extent
that
you're
trying
to
do
some
performance
analysis,
probably
don't
deploy
it
on
your
mesh,
but
rather
pointed
at
either
your
sample,
app
or
or
your
application
it'll
generate
some
load
receive
that
back
and
present
it
to
you.
A
Let
me
see
if
I
can
show
you
what
this
looks
like
and
it
looks
like
I'm
going
to
drive
from
up
here,
so
so
the
tool
itself
there's
a
couple
of
containers
as
you've
seen.
It's
set
up
to
be
multi
Orchestrator.
So
it's
maybe
not
well
shown
here,
but
you'll
deploy
measuring
the
tool
itself.
It'll
connect
to,
hopefully
any
number
of
adapters.
A
We'll
talk
about
the
state
of
some
of
these
adapters
I'm,
going
to
show
you
the
sto
adapter
right
now,
but
just
a
little
adapter
to
communicate
with
the
service
mesh
that
go
over
spin
it
up
so
we're
just
using
docker
compose
here.
We've
got,
you
know
the
the
to
measure
II
and
then
an
sto
adapter
if
we
go
over
it.
A
And
configuring
the
tool
quickly
so
to
configure
it
we'll,
go
over
and
choose
our
cube,
configure
in
this
case
and
and
then
we'll
point
the
tool
measure
e
to
the
adapter,
that's
in
the
environment.
So
this
is
the
measure
e
sto
adapter,
just
again
running
locally
on
this
laptop
and
we
can
go
over
and
you
know
also
it
to
the
extent
that,
since
it
wants
to
show
you
back
information
about
your
environment
and
performance
statistics,
you
can
connect
it
to
Agra
fauna.
A
If
you've
deployed
grifone
as
part
of
your
sto
deployment,
you
can
go
ahead
and
and-
and
we
are
that
this
one
is
running
a
font,
so
it
connects
to
go
fauna.
You
can
go
ahead
and
maybe
pull
in
some.
If
you've
invested
into
some
dashboards,
there
pull
those
in
maybe
manipulate,
which
specific
metrics
you're
wanting
to
display
here
but
be
nice
to
have
those
metrics
about
your
environment
in
your
nodes
and
the
resources
that
are
being
consumed
there
as
you
go
to
generate
your
load
test.
So
let's
go
over
and.
A
Briefly,
just
stop
within
the
playground,
which
really
is
an
under
form.
The
playground,
but
I
wanted
to
highlight
it.
Just
because
we've
had
a
contribution
here
around
sto,
that's
the
Aspen
mash
folks
have
been
kind
enough
to.
Let
us
incorporate
eesti
of
it
as
a
library
to
help
confirm
whether
or
not
you
were
your
config
is
that
you've
got
a
good
configure
in
your
environment,
and
so
we
went
ahead
and
and
ran
a
sto.
A
Vet
came
back,
but
the
number
of
notifications
saying
looks
like
your
each
of
these
vetters
as
run
well
like
this
one
hasn't
and
it'll.
Tell
you
again
just
kind
of
let
you
play
it
play
around
with
the
message
it
mash
in
and
see
what's
there,
but
but
to
the
extent
that
you
want
to
do
a
performance
test.
In
this
case.
That's
point
to
the
book
info
app
your
product
page
and
on
the
inside
here.
A
We're essentially running one request a second for about a minute here. As that runs, we can entertain ourselves with our Grafana dashboards that we can see here. We should probably see a pickup in some of the overhead that we might expect around the telemetry that Mixer is having to deal with as that load is generated on the sample app. And then, if I tell a joke for 30 more seconds...
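The load test Lee describes (a fixed request rate held for a fixed duration, timing each response) can be sketched as below. This is an illustration, not Meshery's actual implementation; the `request_fn` callable is a stand-in for a real HTTP call against the product page.

```python
import statistics
import time

def run_load_test(request_fn, rps=1, duration_s=60):
    """Fire roughly `rps` requests per second for `duration_s` seconds,
    timing each call, and return the observed latencies in seconds."""
    latencies = []
    interval = 1.0 / rps
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        start = time.monotonic()
        request_fn()  # e.g. an HTTP GET against the sample app
        latencies.append(time.monotonic() - start)
        # Sleep off the remainder of the interval to hold the target rate.
        elapsed = time.monotonic() - start
        if elapsed < interval:
            time.sleep(interval - elapsed)
    return latencies

# A stand-in "request" so the sketch runs without a cluster:
lats = run_load_test(lambda: time.sleep(0.01), rps=10, duration_s=1)
print(len(lats), round(statistics.mean(lats), 3))
```

Pointing this kind of loop at the same app on and off the mesh is what lets you see the proxy's added latency.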
A
That's
great
yeah
it's,
but
our
hope
is
is
that
is
that
while
you
right
now,
this
isn't
the
most
complex
tool,
there's
many
nerd
knobs
to
tweak
and
configure
about
how
you
want
to
run
a
performance
test,
but
that
this
hopefully
helps
people
self
answer
some
of
those
questions
around
performance
and
what
they,
what
the
cost
is
either
having
them,
do
it
on
their
own
against
their
own
app
on
or
off
the
mesh,
and
so
in
this
case
we
just
we
were
displaying
the
results
back.
A
We
can
see
where
the
the
median
the
mean
and
kind
of
the
average
response.
Time
comes
back,
so
this
is
measuring
latency
and
it's
measuring
throughput,
which
is
kind
of
nice
and
so
to
the
extent
that
you
do
that
over
time.
This
will
just
you
know,
persist
your
your
results
and
come
in
and
look
at
them
or
you
can
come
in
and
maybe
compare
them
as
so
or
if
you've
got
it's
fun.
Fun
give
me
a
demo.
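The kind of summary shown here (median, mean, tail latency, plus achieved throughput) is straightforward to compute from the raw latencies. A minimal sketch, with the percentile method and field names chosen purely for illustration:

```python
import statistics

def summarize(latencies_ms, duration_s):
    """Summarize a load-test run the way a results view might:
    central tendency, tail latency, and achieved throughput."""
    ordered = sorted(latencies_ms)
    # Nearest-rank p99; max() keeps the index valid for tiny samples.
    p99_idx = max(0, int(len(ordered) * 0.99) - 1)
    return {
        "min_ms": ordered[0],
        "mean_ms": statistics.mean(ordered),
        "median_ms": statistics.median(ordered),
        "p99_ms": ordered[p99_idx],
        "max_ms": ordered[-1],
        "throughput_rps": len(ordered) / duration_s,
    }

# 60 one-per-second requests, mostly ~5 ms with two slow outliers:
sample = [5.0] * 58 + [20.0, 40.0]
print(summarize(sample, duration_s=60))
```

The median and p99 diverging from the mean is exactly the signal that sidecar overhead tends to live in the tail.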
A
Well
if
you've
got
a
few
of
them,
that's
not
that's,
not
very
pretty,
but
if
you
get
a
few
of
them,
maybe
you
know,
compare
those
results
and
kind
of
compare
them
over
time.
Someone
needs
to
do
something
about
this
label,
but
but
but
more
or
less
we're
just
we're
leveraging
some
open
source
software.
That's
out
there
to
help,
help
empower
people,
and
so.
A
With
that
we're
not
here
today
to
say,
hey,
so
we've
run
this
against
all
the
other
meshes
and
kind
of
here's
how
they
compare
that's
to
the
goal.
This
particular
tool
is
on
on
the
schedule
for
these
next
three
upcoming
conferences,
so
stay
tuned.
There
will
be
some
results.
My
hope
is
that,
generally,
all
the
results
are
pretty
good.
A
We
might
see
that
some
meshes
perform
better
than
others
in
certain
environments
or
under
certain
against
certain
types
of
workloads
or
that,
but
hopefully
this
bolsters
and
in
you
know,
inspires
confidence
and
people
to
go
ahead
and
and
understand
what's
happening
in
their
mash
on
then
the
the
price
that
they're
paying
for
that
convenience
something
to
come
out
of
this
is
then
potentially
something
of
a
service
mesh
benchmark
spec.
So,
to
the
extent
that
you
know
you're
gathering
up
the
result
that
you
that
we
were
just
seeing
visualized
that
certainly
you
can
store
that.
A
But
but
you
need
to
know-
and
this
was
back
to
sort
of
the
it
depends
about
performance
engineering,
so
you
store
the
result,
but
you
also
need
to
store
well
hey
what
was
the
environment
that
we
were
using?
How
many
knows
that
we
have?
How
big
were
those
nodes?
What
was
the
app
we
were
using?
What
version
of
my
app
was
are
hitting
what
version
of
that?
What
service
mesh?
What
version
of
that
mesh?
How
was
it
configured?
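The point about storing the environment alongside the numbers could look something like the record below. Every field name and value here is hypothetical, sketching what a benchmark-spec entry might capture so that runs stay comparable:

```python
import json

# Hypothetical record pairing results with the context that produced them;
# all names and versions below are illustrative, not a published spec.
run_record = {
    "results": {"median_ms": 5.0, "mean_ms": 5.8, "p99_ms": 20.0, "throughput_rps": 1.0},
    "environment": {
        "node_count": 3,
        "node_size": "4 vCPU / 16 GiB",
        "app": {"name": "bookinfo", "version": "1.8"},
        "mesh": {"name": "istio", "version": "1.1", "telemetry_enabled": True},
    },
    "load": {"rps": 1, "duration_s": 60},
}

# Round-trip through JSON, as you would when persisting runs for comparison.
serialized = json.dumps(run_record, sort_keys=True)
restored = json.loads(serialized)
print(restored["environment"]["mesh"]["name"])
```

Without the `environment` half of the record, two latency numbers from different clusters aren't meaningfully comparable.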
A
All of these, except for the one in italics, have committed to contributing, so, no pressure, Tony and Nick, you're out there. No, no pressure. But I do want to thank these contributors. As a matter of fact, the fellow who was just up here has contributed some, so I want to embarrass him. Is he embarrassed? Good, all right, he's embarrassed. Good.
A
Lastly,
I'm
going
to
embarrass
another
fellow
ginger
over
here,
I,
don't
know
if
you
know
this
guy,
but
this
guy
is
a
halfway
through
writing
a
big.
You
know,
one
of
probably
what
are
only
two
potentially
steel
books
out
there,
and
so,
if
you
guys
are
looking
for
a
source
of
authority,
look
for
red
beard,
you
generally
all
right
when
one
of
us
is
more
distinguished
than
the
other,
just
listen,
but
but
go
ahead
and
subscribe
there
and
you'll
you'll
be
sure
to
be
notified.
So
with
that
I
want
to
invite
Zack
back
up.
B
Artie
yeah
we're
good.
Oh
there
we
go
hey.
B
We've
made
it
thank
you,
Thank
You
Lee,
so
I
want
to
thank
all
of
y'all
for
coming
so
first
off
just
how
was
it
good,
bad,
good,
yeah,
all
right,
awesome,
so
I
do
I
want
to
thank
all
of
our
sponsors
that
actually
made
this
possible
right.
Gcp,
juniper,
Capital,
one
pivotal,
AWS,
sorry
I
have
to
sneak
a
glance
at
my
science.
It's
like
down
the
list,
the
CN
CF,
the
o
NF
and
the
OpenStack
foundation
as
well.
Thank
you
all
so
much
for
making
all
this
possible.
It
really
is.
It's.
B
It's pretty special for me to see, you know, being one of the early engineers working on service mesh, it's pretty cool to see something like this. It really is very special. I also want to take some time out. Shriram, Chris, come up here. Come on, come, come. These are our conference organizers; come on up on the stage now.