From YouTube: Hoot [Episode 5] - More Service Meshes
Description
Hoot is a livestream by engineers talking about and trying out new technology.
Get to Know Service Mesh
We kick this off with a series on service mesh - each episode will look into a different service mesh provider.
* Istio
* Linkerd
* Consul
* AWS App Mesh
* More meshes like Kuma and Maesh
* Compare and contrast the different service meshes, explain their unique features and how to choose which one(s) to use for your applications.
Hi everyone, welcome to episode 5 of our Hoot series, where we are learning more about service meshes. My name is Graham. I am an engineer here at Solo, and just a quick background on myself: I have been working on our new Service Mesh Hub offering, which is not released yet but will be coming soon. So I've been doing a lot of thinking recently about, you know, different service mesh implementations and the trade-offs between one versus the other, so I'm really excited to have the opportunity to share this information.
So today we're going to be talking about two service meshes. The first one we're going to talk about is Maesh from Containous, and the second one we're going to get to is Kuma from Kong. So I'm going to real quick get to the docs site for Maesh, and we can kind of walk through that together. So let me start sharing my screen.
And of course, network problems start up right as we start the presentation. But yeah, so to kind of start talking in general terms while we wait for the internet to figure itself out... there we go. So, Maesh fills kind of an interesting niche in the service mesh landscape.
When you think about different implementations of a mesh and the different approaches people may take, you may have some meshes that are more heavyweight to set up, maybe more complex, and they do a ton of different things. Then, on the opposite end of that spectrum, you have meshes that are much more lightweight and a lot easier to operate, and they do that at the trade-off that maybe they do fewer things, maybe they're not quite as feature-rich.
One example that we can point to really quickly is that within your Maesh service mesh there's no mTLS, mutual TLS, supported between one service and another, and eliminating the complexity of cert management and rotation and all those security concerns means that there's less for the mesh operator to worry about. So what else about Maesh? The project was really designed from day one to be Kubernetes-native.
So there's really nice native support for different Kubernetes concepts, and the implementation was done with Kubernetes in mind. The other nice thing that Maesh brings to the table, as contrasted with some other mesh implementations, is that it really allows you to adopt it through kind of an opt-in approach. So while you're running the mesh, you can continue having your service-to-service communication happen over the normal cluster DNS.
So you know, the normal service-dot-namespace cluster-local DNS approach. But you can really easily change your domain names to service.namespace.maesh, and just through that one change you're now talking through the traffic proxies in the mesh and you get the nice features. So if you have some services that are ready to use the mesh and others that are not, you can piecemeal move to a mesh implementation for your infrastructure.
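As a rough sketch of that opt-in behavior: from any pod in the cluster both name forms resolve, but only the `.maesh` one routes through the node proxy. The `whoami` service and namespace names here are borrowed from the demo later in this episode.

```shell
# Plain Kubernetes service discovery: bypasses the mesh entirely.
curl http://whoami.whoami.svc.cluster.local

# Same service, opted in to the mesh via the .maesh suffix: traffic now
# flows through the Maesh proxy running on this pod's node.
curl http://whoami.whoami.maesh
```

These commands assume a running cluster with Maesh installed, so they are illustrative rather than copy-paste ready.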
And this diagram is a really nice look at the fact that, rather than have a proxy per pod, you actually have a proxy per node in your cluster, and all the pods on that one node will share the proxy on that node. And that is nice; you may like that for a couple of reasons.
The configuration of the mesh by default is actually entirely done through annotations on your already existing services. So, for example, to add a service to your mesh, all you need to do is tell Maesh that the service communicates over, in this case, the HTTP protocol. And so, if you don't want to, there are no custom resource definitions that you need to set up to run the mesh.
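As a sketch, opting a Service into the mesh is a single annotation. The annotation key shown follows the Maesh docs from this era, and the service and namespace names are hypothetical, so verify both against your install.

```shell
# Tell Maesh this service speaks HTTP; no custom resources required.
kubectl annotate service whoami -n whoami \
  "maesh.containo.us/traffic-type=http"
```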
A
That
being
said,
if
you,
if
you
would
like
a
little
more
control
over
things
and
you're
fine
with
the
custom
resource
definitions
being
read
during
your
cluster
maisha,
also
recognizes
SMI
resources,
so
service
metal
interface
resources,
switching
it
to
that
mode.
That's
that's
uninstall
levels,
so
I
can't
quite
switch
easily
between
two,
but
if
you're
already
comfortable
with
SMI
or
or
you
know
you,
you
would
have
done
this
research
and
think
that
that
might
be
the
future
of
the
way
that
service
messages
are
configured.
A
You
can
run
Maish
in
SMI
mode,
so
yeah
some
some
reasons
that
you,
you
know
you
as
a
mesh
operator
may
adopt
Maish.
You
know
like
why
would
I
use
mesh
so
some
reasons
I've
kind
of
landed
on
with
it
is?
If
you
know
you
are
a
mesh
operator
who's
doing
a
lot
of
you
know
really
greenfield
work,
you're
doing
cloud
native
work
and
you
know
you're
you're,
starting
and
right
on
kubernetes,
because
mesa
is
implemented
to
work
on
kubernetes.
That
would
be
a
pretty
seamless
addition
to
your
your
infrastructure.
A
You
may
also
prefer
Maish.
If
you
know,
maybe
your
mesh
operator
team
is
really
small.
Maybe
it's
just
one
person
or
yeah,
you
know,
generally,
you
would
have
seen
or
you've
evaluated
some
of
the
heavier
weight
meshes
like
sto
and
you've
decided
that
you
know
I,
maybe
I
really
don't
need
all
those
features
and
the
complexity
of
operating
the
thing
Maish
can
be
a
really
simple
alternative
to
that.
Cool. So what I'm running here is a GKE environment that I have stood up separately, and we can see that we have the Maesh control plane installed to this namespace. Also, just a note: k is aliased to kubectl, so nothing crazy there. But if we take a look in that namespace, we see a few things that are interesting to note. There are a couple of observability features that are installed; I'm not going to talk too much about those.
So what's deployed here is the Maesh controller pod, this being the entity that serves up configuration to the three proxy pods, these maesh-mesh pods, and we'll get back to those in a second. The other thing is that Maesh depends on CoreDNS. This cluster I'm running doesn't have CoreDNS installed, but it was really easy: they provided a deployment of CoreDNS and handled integrating it with the default kube-dns that's part of this cluster. So, just a note about that. But yeah:
we can really examine that these three proxies are actually running on the nodes in the cluster. We see this cluster is running these three nodes, with these three names, and if we take a look at the YAML definition of the proxy pods (so a little bit of a copy-paste here), what this is doing is getting the proxy pods by this label.
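A quick way to confirm the one-proxy-per-node layout is to list the proxy pods alongside their scheduled nodes. The namespace and label selector here are illustrative, so check the actual labels on one of the proxy pods first.

```shell
# Each maesh-mesh proxy pod should land on a distinct node (DaemonSet).
kubectl get pods -n maesh -l component=maesh-mesh -o wide

# Compare against the node list itself.
kubectl get nodes
```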
A
So,
let's
see
so
now,
we're
gonna
run
a
little
bit
of
a
demo.
So
what
we
have
here
is
just
like
a
really
dead,
simple
example
being
created.
So
in
this,
who
am
I
namespace?
Excuse
me?
We
we
have
a
client
pod
that
we're
just
going
to
SSH
into
to
you
know,
do
some
networking
and
then
we
have
a
server
that
that
comes
back
information
about
itself
over
HTTP
and
then
another
server
that
just
echoes
bytes
back
over
TCP.
A
So
so,
if
we
ssh
into
the
client
pod,
we
can
see
that
we
can
do
the
normal,
normal
kubernetes
networking.
You
know
over
the
the
base,
the
the
native
dns,
so
this
is
all
as
we
would
expect
it
to
be,
and
then
we,
similarly
we
can
go
over
the
cluster
network
to
send
bytes
and
get
bytes
back
from
this
other
server
that
we're
running.
So if we come to our HTTP server and go ahead and add this annotation to it, just telling Maesh that the traffic type should be HTTP, and then go ahead and save it: that is all it takes, and now the service should be registered. So jumping back into the client, we can see that networking to the service is the same as before,
A
In
terms
of
you
know,
we
can
still
use
the
the
DNS
name
for
before,
but
if
we
now
change
the
domain
names
Maish
that's
going
to
get
intercepted
and
routed
through
the
corresponding
proxy
on
our
node.
So
if
we
run
that
we
see
that
we
actually
have
these
forwarded
for
these,
these
exported
headers.
Let's
say
that,
yes,
we
have
actually
gone
through
the
mesh
and-
and
we
can
do
the
same
thing
with
the
the
TCP
server.
So right, here we see that you can express a bunch of different things through service annotations, among them retries, as well as rate limiting. Fundamentally this will just get piped down to the traffic proxy, but it's really simple to set rate limiting on a per-service basis with these annotations. So if we do something like that... let me switch back now.
A
Cool,
so
this
is
the
HTTP
service
and
fairly
arbitrarily,
just
to
you
know,
get
the
demo
working
we're
gonna
set
the
the
wait,
a
minute
config
to
something
really
small,
that
we
can
trip
easily.
So
you
know
like
like:
let's,
let's
set
them
both
to
10
and
see
what
happens
so.
This
is
average
of
requests
per
second,
and
this
obviously
is
is
like
bursts
of
requests
over
a
smaller
time
window.
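Concretely, that pair of limits can be set with two more service annotations. The keys follow the Maesh docs from this era and the service name is hypothetical, so verify against your installed version.

```shell
# Average requests per second, and the burst allowance over a shorter
# window; both set to 10 so the limit is easy to trip in a demo.
kubectl annotate service whoami -n whoami \
  "maesh.containo.us/ratelimit-average=10" \
  "maesh.containo.us/ratelimit-burst=10"
```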
Oh yeah, right, so we're going to take a look at the debug API that the Maesh controller pod exposes, as part of the debugging tools that they offer. So we curl port 9000 on localhost, which again has just been port-forwarded to the Maesh controller, and this endpoint is going to give us the configuration that is currently being served to the proxies in the mesh.
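A hedged sketch of that debug flow: port 9000 matches what is shown in the demo, but the controller deployment name and the endpoint path are assumptions here, so check them against the Maesh docs for your version.

```shell
# Forward the controller's debug port locally, then dump the config
# currently being served to the node proxies.
kubectl port-forward -n maesh deploy/maesh-controller 9000:9000 &
curl -s http://localhost:9000/api/configuration/current
```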
A
So
we
come
in
here
there's
a
few
interesting
things
we
can
see.
So
we
have
explicitly
some
hosts
defined
and
this
should
look
familiar.
This
is
what
we
are
curling
before,
but
there's
also
now
this
middleware
section.
That
is
the
rate
limit
config
that
we
have
set
up
and
we
can
see
that
our
HTTP
service
does
reference
that
middleware.
So
this
should
be
applied
to
traffic
going
to
that
service
cool.
So if we get back into the client pod now, what we're going to do is, in a really tight loop, keep curling the service that has been rate-limited. We expect some requests to go through, because, you know, not everything will break the rate limit, but we do expect that the offending traffic should get rate-limited with a 429. So go ahead and run that, and there's a lot of live output there, but there we go: we see that some of it is getting through.
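The tight loop in the demo looks roughly like this, run from inside the client pod; requests under the limit return 200 and the excess traffic comes back 429 Too Many Requests. The service URL is the hypothetical one used earlier.

```shell
# Hammer the rate-limited service and tally the status codes.
for i in $(seq 1 50); do
  curl -s -o /dev/null -w "%{http_code}\n" http://whoami.whoami.maesh
done | sort | uniq -c
```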
So yeah, that's just a demonstration that it took, you know, ten minutes to add the service to the mesh and get rate limiting configured on it, so a pretty low barrier to entry there. So I figured I'd pause for a brief moment, in case there are questions or anything else, before we transition over to talking about the Kuma service mesh, and while we do that, I'm just going to switch over to the other demo cluster that I have set up.
Cool, so yeah, Kuma. We talked earlier about the kind of spectrum of mesh implementations, in terms of being lighter-weight, maybe simpler to operate, on one end, and maybe more full-featured and a little more complicated on the other. As compared to something like Maesh, Kuma falls a little bit more on the heavier-weight, slightly more feature-rich side of things. Some important details here about Kuma: Kuma is built on the Envoy proxy, which, as I think we've talked about in the past, is becoming kind of a standard for mesh implementations.
The other interesting niche that Kuma sits in is that there is support from day one for controlling workloads that are not only running on Kubernetes, but may also be running on a virtual machine or even bare metal. So if you maybe have a very heterogeneous infrastructure, or maybe you're in the middle of porting something over to Kubernetes,
this mesh may be a good approach for you, because you can incorporate lots of things, not just your Kubernetes workloads. The other nice thing about Kuma is that, also from day one, there is support for multi-mesh operations, and the way that Kuma in particular handles that is through tenancy within the same cluster.
We can see in the documentation that there are two modes that it runs in. One is universal mode, where you may be running on Kubernetes, you may be running on virtual machines, you know, it's a mix; and there's also Kubernetes-native mode. In each of the modes you have different options for what storage backend you use, so you can see here that you can opt to use Postgres to store the Kuma state in. So, digging in just a little bit here:
kuma-cp is the Kuma control plane binary, and you can provide it with Postgres credentials for it to use. In Kubernetes mode, though, it's slightly more conventional, in that it uses custom resource definitions on the Kubernetes API server to store its state, rather than an additional external data store. So, because Kuma is, as I mentioned, farther along the feature-rich side of that spectrum, there is really easy support for things like mutual TLS and really fine-grained traffic policies.
So if we fire that up, we can see, just note here in the corner, that if you had many different meshes running, we could see them all here. By default, when it starts up, it places everything in a mesh named "default", so that's why it is named the way it is. And yeah, we can get a really nice overview of everything that is running in this cluster.
So I already have a demo that they have put together running here, and we can see on this page that we can drill down into what tags are associated with a particular service and when the last connect time was. Another terminology explanation: the documentation refers to the Envoy proxies as data planes, because they may not be sidecars if you're running outside of Kubernetes. So you would have an entry here per Envoy proxy instance that's running, and we can drill down even farther.
So we have the networking config on that particular Envoy instance, and then, if we scroll down, we can also see that they publish stats from the Envoy instance itself around whether or not it's in sync with the configuration being served to it. You know, maybe these are all in sync, and so there's not a field here, but we might see, say, responses unacknowledged, and that would indicate a problem.
We can also see on the left that when we eventually set up other resources, they would appear here, and similarly we could drill down into them and get a nice view of the state of the cluster as it's currently running. So, before we jump into actually running the demo, kind of a summary of Kuma as a whole: again, why would I use this service mesh? I could see this being the right choice.
You can run it on different workload types. So, just as a point of comparison, I'm again going to switch my screen share to the terminal now. Cool, we're going to kill that port-forward, and let's take a look at what gets deployed when you actually install Kuma. I'm running it in Kubernetes mode right now for simplicity, so there will be differences if you're running it in universal mode.
I also have a demo running here that the Kuma folks have put together, to kind of walk through piece by piece. The Kuma demo app pod is just an nginx instance forwarding on to the backend. The backend is a Node.js app, and what's running here is a mocked-up online store with things you can put in a cart, etc. The items are stored in Redis, and the item reviews are coming from Elasticsearch.
So we see here a bunch of items that are for sale, with prices and descriptions coming from Redis, and we can click in to read reviews, which are going to be served from Elasticsearch. So yeah, because we haven't set up any policies or anything, this is all happening over plaintext in the app. And if we now go over here, right:
so this is a really comprehensive demo that they've put together, and it's what I've based this portion on. I'm not going to go through all the details, but we saw previously that we had traffic working over plaintext, and now let's say that we want to enable mTLS in the mesh. We can see it's pretty much the same as what we saw before, but we're now just flipping the enabled flag to true. And to get into a little more detail about what that does:
Kuma is going to auto-generate a root CA that it's going to distribute to the proxy instances. You could provide your own cert if you wanted, but this is just using the built-in cert generation functionality. So if we come here and actually apply it... cool, so we have configured our mesh. If we take a look now, just to confirm, mTLS has been flipped to true, and then we restart the port-forward to the demo application.
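The Mesh resource being applied looks roughly like this. The schema follows the early (pre-1.0) Kuma docs, and later versions restructure the `mtls` block, so treat the field layout as an assumption; `builtin` selects the auto-generated root CA mentioned above.

```shell
kubectl apply -f - <<EOF
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabled: true       # flip mTLS on for every dataplane in this mesh
    ca:
      builtin: {}       # Kuma generates and distributes the root CA
EOF
```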
So now we can get back to the behavior we had before, but still talking over mutual TLS, by setting up a traffic permission that basically just says that everything is allowed to talk to everything. And again, these are not Kubernetes labels that we're operating on here; these are tags that live on the data plane instances. So, for example, this is a service tag that we are matching against, not a label. So we come here, and I'm just going to copy this into the cluster real quick.
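An allow-all TrafficPermission along these lines restores full connectivity once mTLS is enforced, since with mTLS on, nothing may talk until a permission matches. This sketch follows the early Kuma schema, with matching on dataplane tags such as `service` rather than Kubernetes labels; both the field layout and the tag names have changed in later Kuma releases, and the `kuma-demo` namespace is taken from their demo app.

```shell
kubectl apply -f - <<EOF
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: allow-all
  namespace: kuma-demo
spec:
  sources:
  - match:
      service: '*'      # any source dataplane
  destinations:
  - match:
      service: '*'      # may reach any destination dataplane
EOF
```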
So now we have the traffic permission defined; we come back to the app, and we should be up. And if I restart the port-forward (it's an important step)... cool. So now everything is back: communication between the backend and the two different storage backends is working, and everything is happening over TLS. And we could keep going with this; they have a nice logging example that I think is a little too involved for right here, but you know, we can do fine-grained control of our networking.
So let's now delete that really permissive traffic permission that we set up, because we see that there are fake reviews coming in, and we want to stop recording those. What we're saying here is that we're setting up these two traffic permissions: one saying that the frontend, the nginx instance, is allowed to talk to the backend, and the other that the backend is allowed to talk to Elasticsearch. But note what we have left out.
So the page is spinning, because that request in the backend is failing. So yeah, just a nice demonstration that we really easily installed the mesh and got our services registered, and we were able to use the Kuma traffic policy API to perform some fine-grained control over what traffic is allowed to happen in our application. So yeah, I think that is a reasonable stopping point, ish, for the Kuma demo, so I think I will pause again and wait for any questions about Kuma, or about Maesh, or anything.