From YouTube: 002 How to Select a Service Mesh
Description
Which service mesh is right for you? With an increasing number of mesh technologies, it can be daunting to decide which is best for your organization. In this talk, we explore when you may need a mesh, which features to start with, and how to select the right mesh for you.
And the poll will be open during the talk, so you can continue to respond. Thanks for joining my session, How to Choose a Service Mesh. I'm coming to you live from Phoenix, Arizona. My name is Christian; I'm the Field CTO here at Solo. I've been involved in helping organizations build microservice-style architectures and adopt containers, Kubernetes, and other cloud native infrastructure to improve their ability to deliver software.
I've been involved with the Istio community and the Envoy community for about four years now, even before Istio was generally available. I've written some books and am currently writing Istio in Action for Manning, which I hope to be done with very soon, and you can find me on these social media channels. I came to Solo specifically to work on this problem of modern service and application networking.
Some of the service meshes you see here have been around for a while, like Istio, Linkerd, and so on, and some of them are fairly new. You're tasked with, first of all, understanding what a service mesh does and how these might differ from each other; then, if you're going to adopt one, when's the right time and how to adopt it; and also understanding the trade-offs. What are the differences, and how do you pick the one that's right for you?
Now, a service mesh does solve some very challenging problems, especially when you're deploying into a cloud environment: Kubernetes, an ephemeral container environment. Things like service discovery, being able to find services; client-side load balancing, so if things start to fail you can route around them; traffic control and shifting for things like canary releases; policy enforcement and intention-based access control; and then things like service resilience: timeouts, retries, circuit breaking. And it does all of this in an app-agnostic way, for any app that gets deployed.
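To make the resilience features concrete, here is a minimal sketch of what timeouts, retries, and circuit breaking look like in Istio, applied at the mesh layer with no application changes. The `reviews` service name and the specific numbers are hypothetical, purely for illustration.

```yaml
# Hypothetical "reviews" service: a timeout and retries on every request,
# configured in the mesh rather than in application code.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
    timeout: 2s
    retries:
      attempts: 3
      perTryTimeout: 500ms
---
# Circuit breaking via outlier detection: endpoints that return
# consecutive 5xx errors are ejected from the load-balancing pool.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 10s
      baseEjectionTime: 30s
```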
Now you have to decide whether, in your environment, you need a service mesh. Do you need a service mesh? Really, only you can answer that; you have to understand your environment well enough to know. Are you building and deploying microservices? A lot of different services? Are they written in different languages, and do they use different frameworks for a particular language?
Knowing and understanding whether you have this problem is, like I said, unique to you, because not everybody has this problem. Some folks are either not deploying microservices, or they've standardized on a particular language and don't deviate from it, and they're able to solve some of these networking problems that way.
Other groups don't use containers and cloud infrastructure, so that's another bullet point to understand. The networking problem has existed for a long time; it just becomes exacerbated the more small pieces we have being deployed and communicating over these untrusted, unpredictable cloud networks.
So if you're deploying in containers, where things are shifting and ephemeral, coming and going, scaling, highly decentralized, then you might consider that a service mesh will help you in your day-to-day operations and actually simplify those operations. Some architectures are built on messaging, or reactive patterns, or proprietary protocols or sockets, and so on.
Those might not be good candidates for a service mesh; they might not have the problems you see, or those problems are solved differently, compared with more RPC-style, gRPC- or REST-based services, of which we typically see a lot across these different architectures. And lastly, consider all the different languages and frameworks that you might have on ephemeral infrastructure, using RPC-style communication across a lot of different services and a lot of different boundaries.
Managing that, applying security policies to it, keeping those things consistent, understanding when things partially fail, how to debug them, how to figure out what's going on: building the infrastructure yourself to do all that is a tremendous task, and this is an area where a service mesh can help. So before deciding which service mesh, decide whether a service mesh is even applicable for your environment.
Now, another thing we've seen with folks adopting this type of technology is that maybe they don't need a service mesh right away, or maybe they need some of these features but don't need to deploy service proxies with every single service instance, or they want to start small. An approach they take is to use some of the common technology, like Envoy, at strategic gateway points of their architecture: a service mesh lite.
I've heard some people call it that, or I call it waypoints. As traffic comes into a system, you give it some structure, some consistency in terms of security or traffic control. "Waypoints" comes from aeronautics, I guess, where pilots flying a flight plan look for various waypoints along the way to make sure they're going in the right direction. Having these proxies allows you to give your architecture some structure and to abstract away certain details.
Is this a monolith, or is this a microservice, or functions as a service, and so on? It gives you some of the benefits of what you might get from a service mesh, but without going all the way toward a service mesh architecture if you don't need it yet. So maybe you're going down this path and you're trying to decide: is my environment suitable? Do I need a service mesh for this?
Some things to watch out for that might indicate it is a good idea to go down the service mesh path: you're growing your number of services, your microservices, and you're deploying into these cloud native platforms, or adopting them.
Maybe you have VMs as well as physical hardware, and then you're introducing containers and maybe Kubernetes. If you find yourself already maintaining multiple languages and multiple frameworks, and libraries that solve problems like circuit breaking, distributed tracing, telemetry collection, and client-side load balancing, it's time to start exploring how to operate your services with a service mesh.
Instead of continuing to invest in that. If you see yourself building and maintaining custom or bespoke frameworks for handling certificates and shuffling certificates around, or you find that the service communication and the telemetry collection between your services is sort of a black box and you're afraid to make changes because you don't know exactly how they will impact the rest of the system, it might be worth investigating how a service mesh could help with that. And the last piece is: do you have the supporting infrastructure to make adopting a service mesh successful? Things like Git, source control, and CI/CD pipelines, to be able to get changes into a production environment.
Things like the supporting infrastructure for telemetry: time-series databases, Prometheus, Grafana, CloudWatch, Stackdriver, these types of things for collecting logs and maybe distributed traces. Having these as foundational pieces for operating a microservice-style or cloud native infrastructure gives you the strong supporting pieces that you would need to operate and successfully deploy a service mesh.
So if you've thought through whether or not a service mesh makes sense for you, or you're moving past some of the initial stages of adopting this cloud native technology to build and support your microservices, then you get to the point of actually selecting a mesh. Now, this is a big step, because a service mesh, once it's deployed, is in the request path of your application or service traffic.
This is huge. This is a critical piece of infrastructure, and having an outage or some issues with it can significantly impact your ability to deliver on business outcomes and other business goals. So selecting a service mesh is not a trivial thing; it's a very important piece of your application puzzle.
Now, you have options. As we saw on the previous slide, there are lots of different service meshes, and you might think about this in terms of: am I going to take a service mesh and run it and manage it myself, which some folks do with, let's say, Kubernetes? Or will you offload that to a public cloud or some cloud provider? Or something in between? You can take a service mesh off the shelf, install it, and run it yourself.
You can get it with a platform provider, something like you see from VMware or Red Hat or Google, where they have it baked into their Kubernetes distribution. Or you can get it as a managed service on a public cloud, and, as you saw in our keynote, now with Gloo Cloud you can take a service mesh, Istio, to any cloud and get it managed. So there are quite a few different options for how you might consume or adopt a service mesh.
Is it using Envoy proxy or something else? Envoy proxy has become the de facto standard for building this type of L7, dynamic, cloud native application infrastructure, from edge gateways to service meshes to shared gateways, and even beyond the edge. Envoy is where a lot of the innovation is happening: how traffic gets routed, what protocols get supported, how to extend the data plane with things like WebAssembly. A lot of the innovation is happening in Envoy, and that innovation lends itself to the service mesh feature set as well.
Now, if you look at what's happening in the service mesh ecosystem, including what's happening at the proxy level, you don't have to look that far back to see something similar: what happened in the container wars, or the container orchestration wars. There was Docker, and a lot of different systems adopted Docker, but not everybody did. Some folks said no; if I remember correctly, the Cloud Foundry folks.
They said: we're going to focus on developer experience, make it super easy and all this stuff, but we're going to use our own container format, Garden containers or whatever. Why? Because it's purpose-built, and that's what best suits our architecture, and so on. You're seeing the same thing play out right now: Kubernetes won the container orchestration wars, Docker won, and Envoy is winning.
If it hasn't won already. So deciding what technology is actually underpinning your service mesh is actually pretty important, and you can see some of those that have gone off and done their own thing.
Enterprise use cases typically take a technology and push it to uncomfortable extremes beyond what that technology originally envisioned, and you can kind of see on this diagram that things like Istio have been around for three and a half years and have been pushed to those extremes, constantly, up until now.
The innovation is happening there, the bugs are getting solved, and new use cases keep coming up and getting solved. Istio is extremely mature. Some of the newer service meshes that have entered the scene haven't gone through that hardening and vetting process.
Can you rely on someone with expertise who can help you, especially when things go wrong? Can they support it? Can they patch it? Can they deliver hot patches and fixes? Can you call them at two o'clock in the morning when things go down? Something like Istio, and you might notice a common theme here, has been through that process: lots of different vendors, large and small, Solo included, support Istio in production.
Other service meshes are backed by either smaller or individual companies, or cloud providers, and you're at the mercy of these individual companies to support the service mesh; some don't have any support at all, or at least not yet.
Kubernetes is one example of that, and if you're running in Kubernetes, you're already halfway to probably needing a service mesh at some point. So it's not a surprise that some of these service mesh offerings have really good Kubernetes support: being able to inject the sidecar next to your workload in Kubernetes so that it becomes part of the pod; actually running the control plane, running all the components and managing them the same way you would any other Kubernetes workload; being able to run on a public cloud Kubernetes, or self-managed, or a pre-packaged vendor Kubernetes, and so on.
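As a quick illustration of how lightweight that sidecar injection can be in Istio, it is just a namespace label; the namespace name below is hypothetical.

```yaml
# With automatic injection enabled on a namespace, new pods scheduled
# there get an Envoy sidecar added to the pod spec by Istio's webhook.
apiVersion: v1
kind: Namespace
metadata:
  name: my-app            # hypothetical application namespace
  labels:
    istio-injection: enabled
```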
So, like I said, it's not a surprise, but Kubernetes support is where a lot of the service meshes typically do better. Now, what about non-Kubernetes support, or being able to run in hybrid environments?
There are some service meshes that don't do anything outside of the Kubernetes environment, and then there are some whose genesis, whose origin, was in non-Kubernetes environments, and they're moving backwards toward Kubernetes. And then something like Istio started on Kubernetes, it was Kubernetes-first, and then slowly moved toward VMs, but now has first-class support for VMs.
Now, VM-based workloads are probably not where you're going to start deploying your service mesh, but they can't be excluded. There are likely more VM-based services than container-based or Kubernetes-based services, especially in a large organization. But managing a service proxy, managing certificates, and managing the integration with other supporting infrastructure like telemetry and logging on a VM is a lot more tedious, and a lot more automation typically has to be written to enable it. So finding something that runs in your environment is extremely important and will be very specific to the way you run your services in your environment.
When I came to Solo, Solo originally had SuperGloo, and we were focused on simplifying adoption of any service mesh. What we saw through our customer base and the ecosystem in general was that originally people were kicking the tires on everything, looking at everything, but more recently, I would say within the last year, the market has started to coalesce around Istio. Istio is becoming the dominant service mesh that people are adopting and deploying into production, which also manifests itself in some of the observations we saw.
Earlier: that it's extremely mature, that it is supported by multiple vendors, and that it's got strong Kubernetes and VM support. Generally, the observation we have is that Istio is going to be, if it isn't already, the leading service mesh that folks adopt for self-managed use cases, and we believe, with the announcement of Gloo Cloud and other things, it will be the leading service mesh for managed use cases.
And so you'll see throughout the show here at SoloCon that Istio has become a focal point of these solutions. If you're going to seriously evaluate whether a service mesh is right for you, and which one to pick, then our suggestion is to start with Istio, ask "why not Istio?", and figure it out from there.
So let's say you've gotten to this point and understand these points I've made. Where do I start? One of the things I think people point out is experience, developer experience, these types of things. There should be no trade-off between using an Envoy-based service mesh and a service mesh that has a good developer, user, and operator experience.
It typically starts at the edge: getting the control plane deployed into your cluster, understanding how to operate it and how to integrate it with the rest of your infrastructure, and deploying an Envoy proxy at the edge, which is a familiar area, using it to route traffic and secure traffic coming into your clusters. Now, starting at the edge: not every service mesh has this option. Some service meshes don't have an edge ingress gateway.
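For a sense of what starting at the edge looks like in Istio, here is a minimal sketch of an ingress Gateway resource; the hostname, gateway name, and TLS secret name are hypothetical.

```yaml
# An Istio ingress Gateway terminating TLS at the edge of the mesh.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: my-ingress              # hypothetical name
spec:
  selector:
    istio: ingressgateway       # binds to Istio's default ingress deployment
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: my-tls-cert   # hypothetical Kubernetes TLS secret
    hosts:
    - "api.example.com"
```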
Now, for those, or even for those that do, we've built Gloo Edge here at Solo, which you may have seen in the keynote or elsewhere; Gloo Edge is specifically designed to fit this scenario. But if you have a service mesh that has an ingress gateway, that's typically the right place to start. After that, you need to focus on the high-value use case that you can deploy and operationalize and start to see the benefits of initially; don't try to do everything at once.
So that's what I said on the previous slide, but then after that: securing service communication traffic, for compliance or other reasons. So enabling mutual TLS, if not between services then just from the edge to the first hop as a sort of first step, and then starting to explore using authentication and authorization policies and so on. Focusing on security as the first few steps into the service mesh is fairly common. Another one is surfacing telemetry, or the golden signals.
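In Istio, those first security steps can be surprisingly small. Here is a minimal sketch; the namespace, labels, and service account are hypothetical.

```yaml
# Mesh-wide strict mutual TLS: one resource in Istio's root namespace.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
---
# A simple authorization policy: only the "web" service account may
# call workloads labeled app=reviews (names are hypothetical).
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: reviews-allow-web
spec:
  selector:
    matchLabels:
      app: reviews
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/web"]
```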
The communication that's happening between services, the failures, the throughput, the latency: these types of signals about how the system is operating. And then my personal favorite is being able to control traffic and do canary releasing through percentage-based routing or content-based routing, dark launches, these types of things. So I guess, ultimately: pick a use case that you're trying to solve, introduce the service mesh, introduce the control plane, maybe the edge gateway or the ingress gateway, and then pick off some of these high-value use cases.
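Percentage-based canary routing, for example, is a short Istio VirtualService. A minimal sketch, assuming a hypothetical `reviews` service whose `v1` and `v2` subsets are defined in a DestinationRule:

```yaml
# Canary release: send 90% of traffic to v1 and 10% to the new v2.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```

Shifting the weights over time (90/10, then 50/50, then 0/100) is the typical rollout pattern.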
So what we're doing at Solo, not coincidentally, aligns with this sort of adoption journey: start with your gateway, start with the edge. That could be a full-featured API gateway like Gloo Edge, or it could just be an Istio ingress gateway if you just need traffic into your cluster. Then slowly adopt the pieces of the mesh that you need. We at Solo support Istio and can support it for you in production, applying security patches for much longer than the community.
We have FIPS-supported or certified builds; we build for ARM-based architectures, Graviton on AWS; we do architectural reviews, and so on. After that, you're probably going to build user workflows on top of this: how do you get services into production and align that with the control of the traffic?
So you see what we're doing with Gloo Mesh, which is a distribution of Istio that supports and simplifies running across multiple clusters. Getting from "hey, I have Kubernetes and a bunch of microservices" to successfully adopting a service mesh is something that we're very focused on here at Solo, and it aligns a lot with what we've seen.
So at this point I will thank you so much for joining us here at SoloCon, and I'm happy to take a few questions.
Let me see where they are. So for the poll, actually, let me just point out that our users responded overwhelmingly.
Eighty percent of users are either using Istio or considering using a service mesh that's Istio-based. We have about five percent for Consul, five for Linkerd, five for App Mesh, and five for Traefik, give or take, because it's not exact. That response, I guess, aligns with what we've observed in the community as well.
So somebody made a comment: "Istio wins so far, but you need to understand that everyone has their own tasks and their own evaluation criteria; there is no silver bullet." And that is a hundred percent correct. At the API gateway you can do things like service discovery, and you can deploy the control plane on VMs or on Kubernetes; if you are using VMs, something like Consul is helpful.
Oh, that's just me answering a question. All right, well, a question that I often get is about performance, from two perspectives: one, which mesh is the most performant, and two, how much of a performance impact does a service mesh contribute? I think those are fine questions. One thing that must be stated as the backdrop before even answering them concerns the microservices environment itself.
A microservices environment has inherent networking problems: solving things like client-side load balancing, service discovery, timeouts, retries, circuit breaking, telemetry collection, and so on. These are not optional. It's not like you can just say, "well, I'm going to deploy microservices, not solve any of those things, and gain huge performance benefits." Those are not optional things to solve; you have to solve them.
If you don't, then you're walking into a dark room: you make changes to your services and who knows what the hell happens. So those are not optional. Now the question is: how do you end up implementing that, and what are the trade-offs of doing it one way or another? There is some impact with any approach you take to solving this problem.
Generally, what I can say, and what we've seen in the wild, is that Envoy proxy should be able to run with a millisecond or sub-millisecond performance impact. So if you figure an Envoy on each individual workload will add approximately a millisecond of overhead, that's sort of where you should start looking.
Let's see: what are the parameters that we should use to evaluate a service mesh? I think, first of all, identify what your environment looks like, as I covered. Are you using VMs? Is it just Kubernetes? Are you doing messaging? Are you doing Akka and Scala? Are you doing gRPC? That's going to be a big indicator of which service mesh, or whether or not you should use a service mesh at all.
I strongly suggest that you favor a service mesh that uses Envoy as its data plane and then work back from there. If you need super-high optimization, Envoy can be extremely optimized. Like I said, Istio is the most widely production-deployed service mesh out there; I personally would advise people to start with that, ask "why can't I use Istio in this environment?", and go from there.
So somebody asked the difference between a micro-gateway architecture and a service mesh, and I guess that goes a little bit more into: what is a service mesh?
I gave a talk a couple of years ago now, I think at the first ServiceMeshCon at KubeCon, that covered the characteristics of a service mesh and its data plane, and specifically that the data plane can take multiple shapes. The data plane we usually know is a proxy that lives with a particular instance of a service.
But you can also have data planes that actually live in the application code. gRPC and some of its libraries are able to understand the Envoy xDS API, so your application can be natively tied into the mesh, or a gateway that lives at the edge or is shared, and so on. Those pieces can be considered part of the mesh as well, but generally the pattern, when you hear "service mesh", is that of sidecar proxies deployed with an application instance.
Okay, and then we also got questions like: can you go deeper into the deployment of the Gloo Mesh control plane? So someone asked about the deployment of Gloo Mesh, which is a distribution of Istio, and how you can deploy it to manage multiple types of services and multiple clusters. I don't think I have a slide for that, but typically you deploy the control plane separately from the data plane, and you configure it remotely.
The idea is to have these pieces decentralized and to sort of avoid the trap that we saw with the previous generation of technology, which was: funnel everything through this big black box, and then we'll know everything, we can control everything, build a team around it, and so on. Typically the deployment pattern is to have the control plane separate: you write your policies, you define your configuration.
All the enforcement points happen out with the applications, in a highly decentralized pattern. It's the same thing with Gloo Mesh, where you have the control planes and management planes separate from the underlying data planes.
Somebody asked: what's the difference between Istio OSS and the one that Gloo Mesh ships with? The difference is actually very minimal. If you're running upstream open source Istio in your environment, then running Istio from Gloo Mesh is almost identical: you can use the same tools, use the operator, use istioctl; you don't need additional tools to be able to run it.
The main differences come in two areas. One: if you require FIPS compliance, a limitation on certificates and algorithms and all these other things that the Federal Information Processing Standards, the NIST guidelines, suggest, we offer that with Istio, and there are some minor tweaks and changes we have to make to get that to run if you're interested in FIPS. If not, it's about as close to upstream as you can get, with that exception.
Two: there will be security patches and additional fixes to versions in the N minus two and N minus three range that diverge from upstream, because upstream doesn't support those anymore. But otherwise, if you're familiar with upstream open source Istio, it's almost exactly the same.
So I think we're running out of time here. I do appreciate your attendance. I see some questions about eBPF; let's take those to the Slack channel, but I've run out of my time here. Stay tuned for the next sessions: we've got a really good set of tracks here with more service mesh, more gateway, and more WebAssembly content, and of course day two with more announcements.