Description
Hoot is a livestream by engineers talking about and trying out new technology.
Get to Know Service Mesh
We kick this off with a series on service mesh - each episode will look into a different service mesh provider.
* Istio
* Linkerd
* Consul
* AWS App Mesh
* More meshes like Kuma and Maesh
* Compare and contrast the different service meshes, explaining their unique features and how to choose which one(s) to use for your applications.
So, I'll be your speaker today. My name is Christian Posta; I'm the Field CTO here at Solo.io. I've been involved in helping organizations move to microservices architectures for the last five or six years now, and I have a lot of experience in this area, including the more recent emergence of the service mesh technology. I wrote the first book on Istio, for example, and I'm writing another book right now.
What we're going to do in this episode, like I said, is wrap up the series that we started at the beginning of 2020. We'll do a quick recap on what a service mesh is and whether or not you need one, then look at how service meshes are generally built and architected, then look at four specific service mesh implementations and some of the pros and cons, or evaluation criteria, that might be interesting to you, and then we'll wrap it up.
Like I said, on the right-hand side you can see a link to our YouTube channel, and there's a playlist there for the various service mesh options that we've covered. We started off this year with Istio, and in that episode we looked at some of the newer features coming in Istio 1.5. In episode two we looked at Linkerd: we did a sort of unboxing of Linkerd, getting started with it, understanding the different components, and working through some of the examples from the docs. We did the same thing for HashiCorp Consul service mesh and AWS App Mesh, and then Kuma from Kong and Maesh from the folks at Containous, the Traefik company. And this is episode six. We're going to continue our Hoot series with a different topic shortly; I think we're going to be looking at gateways, ingress gateways, API gateways, that kind of thing.
So let's get going, and just to set a little context around the problem space that we're looking to address: we're looking to maybe move to a cloud architecture or a microservices architecture, and when we do that we have to build our applications with different assumptions than we did in the past, where maybe we were building three-tier applications. Even in the original SOA days we would build services assuming everything was highly available and things would never go down.
Things were fairly static in that environment. When we talk about cloud deployments, those assumptions change: the services that we deploy can go down and will go down, or appear slow, or degrade, and so forth. So we need to build in an environment where the networking is not as reliable and where the platform makes no guarantees, and we end up trying to solve the networking challenge of how services talk with each other by overcoming those challenges.
If we look at the bottom two metrics here in this table from the State of DevOps report: we want to go faster, but we have to do it safely. We want to understand what's happening in our system; we want to understand steady state and understand when things change, so we need to observe these services in detail. We need to be able to lower the risk of making changes by introducing changes into the system slowly, in a controlled way, doing things like canary releases, automation, and so forth.
So when we talk about service mesh and some of the technologies that enable microservices within the context of this problem: there's more complexity in between our services now, and we want to create an environment where we can understand what's happening between these services, lower the risk of making changes to the system, and ultimately move fast, but do it safely. Some of the challenges of deploying microservices applications onto cloud-native platforms like Kubernetes or a public cloud include things like: how are we going to discover these services? They're potentially dynamic.
In the past, we implemented this in the code, in our application logic itself, or we would bring in off-the-shelf frameworks designed for specific languages to implement these types of things. And as we moved from one language or framework to another, we saw deviations and differences in how these capabilities were implemented in the application code. You might be using the JVM, with Spring or Vert.x; the paradigms between those frameworks are different, and the implementations are different.
You might be using a totally different programming language like Node.js, so now you have to go find the equivalents: what is the circuit-breaking library for Node.js? What about client-side load balancing? How are we going to expose telemetry, and in what format? Maybe you want to use Go, or Python, or whatever programming language. So you have to solve these problems, ideally consistently, across all the languages and frameworks that you might end up using in your microservices environment.
This is where the idea of a service mesh starts to come into play. The service mesh implements these capabilities and solves these challenges for you, independent of language and framework. Now, this all sounds really good, but before you adopt any new technology you have to ask yourself: do I need this? Are my environment and my constraints such that I could get value out of adopting a piece of technology like this?
There is some initial support for things like service discovery, load balancing, resilience, health checking, and so forth that you can get out of Kubernetes itself, so I'd focus on where you are in the journey of adopting this infrastructure: get the most out of what you have today, and then evaluate whether it makes sense to bring on new, potentially complex infrastructure to solve those challenges. Now, if you do, you run into these questions.
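To make "get the most out of what you have today" concrete: a plain Kubernetes Service already gives you DNS-based service discovery and L4 load balancing across a Deployment's pods before any mesh is involved. A minimal sketch (the service name and ports are invented for the example):

```yaml
# Clients simply call http://orders; kube-proxy spreads connections
# across the ready pods matching the selector below. Basic health
# checking comes from readiness probes defined on the pods themselves.
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders
  ports:
    - port: 80
      targetPort: 8080
```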
Do they support containers and Kubernetes as well as existing infrastructure like VMs? What is the operating model: is it highly centralized or highly decentralized? And so forth. You're going to run into some of these questions as you evaluate which service mesh to pick and whether it's appropriate for you. The service mesh architecture in general, like I said, starts with implementing those application-networking features out of process from the application. So we said the app can be written in Java or Go or Perl and still communicate with the network through this service proxy, and the service proxy is where we implement the application-networking capabilities. If traffic is flowing through this proxy, we can control routing, we can enforce timeouts, we can do client-side load balancing and routing and so forth, all through this particular part of the application.
So you can see here I've drawn a box around the app and the proxy, because they're treated atomically; they're one and the same. If the app goes down, the proxy goes down; if the proxy goes down, the app goes down. They're treated exactly the same. Whenever the app tries to talk to any service outside of its boundary, it always goes through the proxy, and vice versa: whenever any service tries to talk to this app, the traffic always goes through the proxy. Now, you can do interesting things with this.
For example, you can observe everything that's happening between the services in this service mesh. A service mesh typically provides service discovery and load balancing to the workloads in the mesh, secure service-to-service communication, and policy- and intention-based access control. We can do traffic control and shaping and shadowing and other fancy things around the request. We collect a lot of telemetry and distributed tracing. And lastly, one of the most important things: the service mesh provides an API through which we can drive automation, doing things like canary releases, service debugging, chaos experimentation, and so forth. The API part of the service mesh becomes very important, because you end up building things on top of it; you end up building automation on top of it.
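As one concrete illustration of driving a canary release through a mesh API, here is a sketch of a weighted traffic split using Istio's VirtualService resource (the "orders" service and the subset names are invented for the example, and the subsets themselves would be defined in a companion DestinationRule):

```yaml
# Hypothetical canary: send 90% of traffic to v1 and 10% to v2.
# Automation tooling can ratchet these weights up as metrics look good.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: orders
spec:
  hosts:
    - orders
  http:
    - route:
        - destination:
            host: orders
            subset: v1
          weight: 90
        - destination:
            host: orders
            subset: v2
          weight: 10
```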
So let's quickly take a look at some of the existing service mesh implementations, some of which we covered in our series up to this point. Linkerd is the first one that we'll take a look at; specifically, Linkerd 2 is an open-source project in the CNCF, originally started by the good folks at Buoyant, and it now has a big, growing community. Linkerd kind of kicked off this whole service mesh movement as we know it today: their original release basically wrapped Twitter's Finagle into a proxy. Finagle is one of those libraries, in the same vein as the Netflix OSS stack, that implemented client-side load balancing, circuit breaking, routing, and so forth, and they put it into a container and started deploying that container next to workloads. Moving to Linkerd 2, they've improved the efficiency and lowered the footprint of deploying Linkerd: the control plane, as we saw in previous diagrams, is in Linkerd's case written in Go, while the data plane, the proxy-level part of the architecture, is written in Rust. They've continually been improving the feature set that they've been offering; the latest release is 2.7. You can see Linkerd in episode two of our Hoot series. You can do things like configure the proxy, pull telemetry back and put it into Prometheus, and tie it into distributed tracing. Out of the box you get always-on mutual TLS and encryption of the transport, and you can configure the root CAs and so forth.
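Getting a workload into Linkerd's mesh is mostly a matter of annotation; a sketch of the injection annotation (the namespace name is invented for the example):

```yaml
# Linkerd's proxy injector adds the sidecar to pods created in any
# namespace (or on any pod) carrying this annotation.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  annotations:
    linkerd.io/inject: enabled
```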
I would say it's a very, very good experience when using it. Any of these service meshes are fairly complicated technology, but the folks at Buoyant and the community around Linkerd have set the bar pretty high in terms of usability and the experience of keeping you on the right path to being successful with the service mesh, and especially getting a lot of value out of it right away.
It is Kubernetes-only, so they're targeting only Kubernetes, and they're in the midst of working on multi-cluster support; it doesn't exist today, but it should be coming fairly soon. I also think there are opportunities to improve certain resilience capabilities, like circuit breaking and fault injection, and to add more load-balancing options, like zone-aware load balancing and so forth, that you might want. But overall, even as recently as six or eight months ago, when I did a presentation similar to this one, their traffic-routing and traffic-splitting capabilities were missing.
Those have since been added, and Linkerd 2's feature set is growing to the point where you can use it as a general-purpose service mesh. So definitely take a look at Linkerd 2.
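Linkerd's traffic splitting is driven by the SMI TrafficSplit API; a minimal sketch, with service names invented and the API version as of roughly the Linkerd 2.7 era (verify against current docs):

```yaml
# Split traffic addressed to the apex "orders" service across two
# backend services. Weights are relative; some SMI versions express
# them as quantities like "900m" rather than plain integers.
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: orders-split
spec:
  service: orders
  backends:
    - service: orders-v1
      weight: 90
    - service: orders-v2
      weight: 10
```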
The next one we'll look at is Consul service mesh, or Consul Connect. Consul comes out of HashiCorp, and the Connect functionality started a couple of years ago.
HashiCorp just wrapped up a series on service mesh for developer workflows using Consul service mesh, so we'll link to that in the notes; I'd definitely recommend taking all of that in. Consul's deployment options are very similar to the generic service mesh architecture that I showed earlier, where you have a control plane and a data plane. In Consul's case, look at the data plane proxies, the service proxies that we see here on the bottom.
Those are Envoy proxies. So you're using Envoy as the service proxy through which the traffic flows, and you take advantage of and get the capabilities of Envoy. The control plane follows a pattern that will be familiar if you know Consul: the Consul servers are a consistent database set, following a Raft-based algorithm to stay consistent, and they push configuration out to the various Consul agents (the clients), which then cache the config and make configuration lookups very fast for locally running proxies.
So, for example, you might have a Consul agent running on a VM with multiple applications and service proxies on that VM, each talking to the local Consul agent directly rather than going over the network for all of its configuration, certificate signing requests, and that kind of thing; everything talks as locally as possible. In a Kubernetes world, this could be a DaemonSet of agents that run locally on each Kubernetes node, and any of the proxies that run on that node talk directly to that DaemonSet pod, and so forth. So Consul service mesh is built on a fairly well-known, fairly stable, and widely deployed piece of software, Consul itself. It brings L7 traffic control to the workloads participating in the mesh, solves for identity using SPIFFE, provides policy- or intention-based access control, and has capabilities for doing multi-cluster deployments; Vault and other HashiCorp products integrate nicely with it. On the other side, though, there are trade-offs.
If you're running in Kubernetes specifically, Consul creates additional things that you have to manage: you have to manage Consul itself if you don't already, and keeping it highly available and consistent, with all the parameters for doing that, is not all that trivial. It does not use Kubernetes resources or custom resources; it uses its own configuration language, HCL. And lastly, while Linkerd and some of the other service meshes attempt to be transparent to the application, Consul service mesh is not.
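On Kubernetes, the usual way to take on that operational burden is HashiCorp's official Helm chart; a minimal values sketch enabling the Connect sidecar injector (option names as of the consul-helm chart from roughly this era, so verify against current docs):

```yaml
# values.yaml for the consul-helm chart: run the Consul servers
# in-cluster and enable automatic Connect sidecar injection.
global:
  name: consul
server:
  replicas: 3
connectInject:
  enabled: true
```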
Istio is another open-source service mesh. It originated in May 2017 and came out of Google, IBM, and Lyft; others like VMware, Red Hat, Cisco, and Pivotal have contributed to it and helped grow the community, and various vendors have commercialized parts of it and offer support for it. As of now, 1.5 is the latest release, with a lot of improvements around security, performance, and architecture in that release. Istio is also based on Envoy, just like we saw Consul service mesh is based on Envoy.
Because Istio is based on Envoy, its feature set is broad, encompassing pretty much all of the different features that you can get out of Envoy. On the other side of that, you have a lot of features and a lot of options, which can bring complexity, or perceived complexity, and Istio has been slowly iterating on and improving its user experience. The community is large, and it's a large open-source project.
If you're running in Kubernetes, you use the Kubernetes declarative approach to specify configuration, and then istiod watches the Kubernetes API and pushes that configuration down to the proxies. Istio does have some support for VMs, extending the mesh to workloads that are not in containers and Kubernetes, but that is still evolving. Some of the strengths: Istio is backed by a large, vibrant, open community; it has a large feature set, out-of-the-box ingress and multi-cluster support, and existing but evolving support for VMs.
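As an example of that declarative, CRD-based configuration: a mesh-wide policy requiring mutual TLS, using Istio's PeerAuthentication resource (introduced around Istio 1.5):

```yaml
# Placing this in the root namespace (istio-system by default)
# makes STRICT mutual TLS the policy for the whole mesh.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```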
Recent architectural changes will do a lot to alleviate some of the concerns around performance and overhead, but there's still work to do. The last one that we'll take a look at is AWS App Mesh. AWS App Mesh is not an open-source service mesh like the previous three that we looked at; it is available only on AWS. What's slightly different about this one compared to the others is the control plane. If we look at the diagram on the right-hand side, we see that the control plane components are all hidden, or managed, for the user. As a user, you worry only about the data plane: getting the proxies into your workloads, and then using the App Mesh APIs to configure those proxies through a control plane that you don't touch, see, or manage, and don't have to worry about in terms of upgrades or any of the operations.
It was announced a little while ago and went GA about a year ago. A lot of the APIs supported by App Mesh are around traffic shifting and traffic routing: getting services virtualized in such a way that we can specify these rules independent of particular implementations. What's interesting about App Mesh is that it's supported across multiple deployment platforms.
A managed control plane simplifies the operations of the mesh as a whole, and App Mesh supports multiple deployment targets like EC2, EKS, ECS, and so forth. The API thus far has been somewhat limited and skewed toward traffic routing and traffic shifting, but that's improving; just like with any of these meshes, there's been a lot of improvement over the last six to eight months in terms of what features get exposed and so forth.
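A sketch of the traffic-shifting style of API that App Mesh emphasizes, written as a CRD for the App Mesh Kubernetes controller (API group and field names recalled from the v1beta2 controller, with all names invented, so treat this purely as illustrative):

```yaml
# Hypothetical weighted route: a VirtualRouter spreading HTTP traffic
# across two virtual nodes, 90/10, for a gradual shift.
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualRouter
metadata:
  name: orders-router
spec:
  listeners:
    - portMapping:
        port: 8080
        protocol: http
  routes:
    - name: orders-route
      httpRoute:
        match:
          prefix: /
        action:
          weightedTargets:
            - virtualNodeRef:
                name: orders-v1
              weight: 90
            - virtualNodeRef:
                name: orders-v2
              weight: 10
```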
You should be looking at App Mesh if you're running on AWS. On the other hand, some of the capabilities that you would expect out of a service mesh, like policy enforcement and mutual TLS, don't exist yet. There is encryption that App Mesh supports, but not mutual TLS.
So, should you adopt one of these meshes? That might be the case, but what we recommend is to start with understanding the proxy. Start by adopting as little of the mesh as you can to get some value, and get a good understanding of how that proxy works, how to operate it, and how to debug it, before you start sprinkling proxies all around your application architecture. A good approach is to take an ingress-first approach: how do you get traffic into the mesh, and what do you need to solve for that?
If you end up using an ingress technology that is built on the same technology as the service mesh, for example Envoy: if you use Envoy as an ingress gateway into your cluster and into your mesh, you can get a lot of service-mesh-like capabilities without going all in and sprinkling proxies all over the place.
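For instance, starting Envoy-first at the edge might look like exposing a single hostname through an Istio Gateway before any sidecars exist (the hostname and names are invented; any Envoy-based gateway follows the same idea):

```yaml
# An edge gateway terminating HTTP for one hostname; routing rules
# would then be bound to it with a VirtualService.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: edge-gateway
spec:
  selector:
    istio: ingressgateway   # Istio's stock Envoy ingress deployment
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "shop.example.com"
```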
You've got to take a look at these meshes and evaluate them for the things that are important to you, which might be usability, the feature set, or some of the underlying components they use, like the Envoy proxy, and evaluate those for performance and usability and so forth in your environment. Now, what we're doing at Solo.io, just a quick plug, is trying to simplify how you go about adopting these different technologies.
So you're not worried about doing HCL-type configuration on Consul, a declarative language specific to Istio on Kubernetes, and then Linkerd, which doesn't have as much of that configuration surface. What we're trying to do with this abstraction is make it so that you can deploy and configure any of the meshes that you want while keeping a consistent configuration set. And on top of that, if you can do that, then we can use something like Service Mesh Hub, which the team at Solo.io is working on. It allows you to connect these different meshes if you need to, or at least connect multiple clusters of a single mesh, which will probably be the more likely case as enterprise realities start to crop up and you end up with more than one implementation of a thing. Service Mesh Hub is a way to abstract and manage multiple different deployments and clusters of a mesh, including different types of mesh.
So, some follow-up and additional reading. I do recommend taking a look at the various mesh implementations: their communities, their performance numbers, their documentation, and the community notes they've put together, including the roadmaps. I highly recommend that, and like I said, we'll leave some of these links in the notes. Reach out to me at any time on Twitter or directly at my Solo.io email; I write about a lot of these topics fairly frequently, and I share all my slides and presentations on SlideShare.
So that's it for me. Thank you for joining. Go check out the Solo.io channel on YouTube and check out the previous iterations of this series. Coming up next, we're going to be talking about API gateways and ingress controllers, how those relate to service mesh, and how they compare to each other. So thanks again, follow Solo.io on Twitter, and stay tuned for the next series.