From YouTube: The Future of Service Mesh
Description
For more great content, visit https://solocon.io
SoloCon 2022:
The Future of Service Mesh
Speaker:
Christian Posta
VP, Global Field CTO, Solo.io
Abstract:
Service mesh implementation and usage continues to gain momentum, but where is the technology headed? With new developments related to Wasm, eBPF, GraphQL, and more playing an increasingly important role in how service mesh works and what it can provide for teams and users, it’s important to understand what evolution in the space means for you. In this session, find out how service mesh is growing, and how it’s helping companies deliver greater value to their end users.
Track: Community and Open Source
I just recently finished writing my third book, Istio in Action. It should be available from Manning in the next few days, and I've also put my contact information on the slide, so feel free to reach out with questions or follow up afterward. At Solo, we've been working on application networking technology for the last four and a half years, and we are now the leader in this space; we've been involved with open source communities like Envoy and Istio since the very beginning.
We recently closed our Series C round of funding at a billion-dollar valuation, and that's really a testament to two big things: the people we have here at Solo and the customers we're working with. You can go to our customers page and see some of the customers we can talk about publicly, but you'll also hear directly from those customers here at SoloCon. Some of their logos are listed on the slide.
So what do we do at Solo? Like I said, we work on application networking technology based on the Envoy proxy and the Istio service mesh, and we focus on abstracting the network so that you can improve developer experience, improve your security posture, and safely bring changes into production by consistently applying traffic routing rules, security policies, and so on.

Now, in the past, or even present day, you can expect a heterogeneous deployment of networking technology: from gateways to SDNs, to firewalls, to public cloud VPCs, and all of the stuff in between, running on VMs, containers, and so on. What we want to do is abstract away the networking pieces, push the control and policy enforcement pieces as close to the applications as possible, and then give you a centralized management and governance workflow on top of that to actually operate this kind of stuff in production.
Now the gateway world and the service mesh world are converging, so you'll see in the rest of this talk, and probably from some of our customers, how the edge use cases, like rate limiting, validating untrusted traffic using various security mechanisms, and request transformation, are shifting. The edge is becoming blurry, and bringing some of those capabilities into the mesh and managing this from a single control plane is operationally probably a better approach. That's what we're working on here at Solo.
Here's an example of a large-scale deployment of the mesh at one of our largest customers, deployed across multiple lines of business. You can see how the mesh and the gateways complement each other to enable very fine-grained security and traffic routing policy, and observability throughout the system, without any centralized bottlenecks. A really important part is that we're decentralizing.
So let's talk about what's coming next: how the service mesh is evolving, and what innovation we can expect in the communities and, more importantly, here at Solo, because we're driving a lot of that. The first point to make is that the future of service mesh, and the innovations we'll be seeing, come from the data plane, or around the data plane.
If you're familiar with the service mesh lingo, the data plane is the proxies through which the requests and the messages travel to get from service to service; the control plane is the piece that configures those data plane components. So we're talking specifically about the data plane, and this is not all that novel an idea. In fact, in 2019 I spoke about this at ServiceMeshCon; the talk was called "The Truth About the Service Mesh Data Plane," because everyone was so focused on the sidecar deployment pattern.
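The data-plane/control-plane split described here can be sketched in a few lines of Python. This is a conceptual illustration only; the class names, service names, and route shapes are hypothetical, not Istio or Envoy APIs:

```python
# Conceptual sketch: the control plane holds desired routing config;
# each data-plane proxy reads that config and applies it to requests
# in the request path. Policy lives centrally, enforcement happens locally.

class ControlPlane:
    """Central store of routing rules, consumed by all proxies."""
    def __init__(self):
        self.routes = {}  # service name -> destination cluster

    def set_route(self, service, destination):
        self.routes[service] = destination

class DataPlaneProxy:
    """Sits in the request path; applies config, holds no policy of its own."""
    def __init__(self, control_plane):
        self.control_plane = control_plane

    def forward(self, service, payload):
        # Look up the latest routing config from the control plane.
        destination = self.control_plane.routes.get(service, service)
        return f"sent {payload!r} to {destination}"

cp = ControlPlane()
cp.set_route("reviews", "reviews-v2")  # operator reconfigures routing centrally
proxy = DataPlaneProxy(cp)
print(proxy.forward("reviews", "GET /reviews/1"))
```

The point of the split is that many proxies can share one source of truth: change the route once in the control plane and every proxy picks it up.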
But in that talk I pointed out that you can optimize the data plane by putting it into the application code itself, or you can use shared gateways, or you can use centralized gateways, so the data plane is a spectrum. We're going to continue to see that in the following slides as we go a little bit deeper. But a lot of this is known, and we've tried a lot of this.
We know where the trade-offs and the issues are in these various different types of deployments, but let's make them explicit and talk about them, because other folks are saying, let's just move to a centralized gateway or a shared-proxy-per-node model. Well, we've been there and we've done that; let's look at some of the trade-offs of doing that. So, like I said, the data plane is where the innovation is happening; that's where the future of service mesh hinges, so it's important to pick the right data plane for your application networking, for your service mesh technology. We believe here at Solo that Envoy is that right data plane: it's extensible, it was built with these dynamic environments in mind, it's open source with an extremely vibrant community, and it's been adopted in a lot of the service mesh distributions.
So what we're going to focus on is the data plane: how can we extend it and make it more powerful going up the stack? That's where the GraphQL pieces come in, and you can take a look at some of the other sessions here at SoloCon to get a little bit more detail about that. But then we'll also look at going down the stack: how can the CNI, how can traditional networking, how can eBPF and these types of things create a safe networking world where we can run the proxies, maybe not as sidecars but as shared gateways and shared infrastructure, and try to get the best of both worlds?

So let's take a look. The first is extending the proxy. What we've done here at Solo is we've taken the base Envoy proxy and built additional filters and capabilities on top of it. Now, Envoy was built specifically to be extensible.
Its filter architecture means that you can build filters, layer them into the proxy, and extend its capabilities, and if you don't want to build things directly into the proxy, you can use WebAssembly to dynamically inject filters. So extensibility is a core capability of Envoy and is very important to how the service mesh will continue to evolve.
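The filter-chain idea can be modeled in a few lines. This is an illustration of the concept only, not Envoy's actual C++ filter API or the Wasm ABI; the filter names and request shape are made up:

```python
# Conceptual model of a layered filter chain: each filter sees the request
# in order and can either mutate it and pass it on, or short-circuit the
# chain with a terminal response (as Envoy HTTP filters can).

def header_transform_filter(request):
    # A request-transformation filter: ensure a request ID header exists.
    request["headers"].setdefault("x-request-id", "generated-id")
    return request

def auth_filter(request):
    # A short-circuiting filter: reject unauthenticated requests
    # before they ever reach the upstream service.
    if "authorization" not in request["headers"]:
        return {"status": 401, "body": "unauthenticated"}
    return request

def run_filter_chain(filters, request):
    """Apply filters in order; a dict carrying 'status' ends the chain."""
    for f in filters:
        result = f(request)
        if "status" in result:
            return result  # a filter produced the response itself
        request = result
    return {"status": 200, "body": "proxied upstream"}

chain = [header_transform_filter, auth_filter]  # layered like Envoy HTTP filters
print(run_filter_chain(chain, {"headers": {"authorization": "Bearer abc"}}))
```

Dynamically injecting a Wasm filter corresponds, in this model, to appending another callable to `chain` at runtime without rebuilding the proxy.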
You can take a look at some of the filters on this screen that we've already built: things like request transformation; things like Lambda invocation, so you can call Lambdas directly from our proxy and replace ALBs, AWS API Gateways, and all that stuff; things like data loss prevention, web application firewalling, and SOAP translation. All of these things we built as filters into the Envoy proxy.
If we take a look at the next slide, we can see that the engine is made up of a resolver engine: you can specify the GraphQL schema as declarative configuration and then also specify how the individual fields get resolved using some of the out-of-the-box resolvers, things like a REST-based field resolver, gRPC, and caching. And if there's a resolver that we don't have out of the box, or one that you need to customize, we can actually inject WebAssembly-based resolvers.
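The declarative-resolver idea, mapping each schema field to a resolver spec and dispatching on its kind, can be sketched like this. The field names, endpoints, and spec layout are hypothetical, not Gloo Mesh's actual configuration schema:

```python
# Sketch of a declarative resolver engine: the config maps GraphQL fields
# to resolver specs, and the engine dispatches each field to the matching
# transport (REST, gRPC, ...) without any hand-written resolver code.

RESOLVERS = {
    "user":   {"kind": "rest", "url": "http://users-svc/users/{id}"},
    "orders": {"kind": "grpc", "service": "orders.OrderService/List"},
}

def resolve_field(field, args, fetchers):
    """Dispatch a field to its configured resolver; fetchers supplies transports."""
    spec = RESOLVERS[field]
    if spec["kind"] == "rest":
        return fetchers["rest"](spec["url"].format(**args))
    if spec["kind"] == "grpc":
        return fetchers["grpc"](spec["service"], args)
    raise ValueError(f"no resolver for kind {spec['kind']!r}")

# Stub transports stand in for real REST/gRPC calls here.
fetchers = {
    "rest": lambda url: {"fetched": url},
    "grpc": lambda svc, args: {"called": svc, "args": args},
}
print(resolve_field("user", {"id": 42}, fetchers))
```

A custom Wasm resolver would correspond to registering another `kind` with its own dispatch branch, supplied by the injected module rather than built in.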
So extending Envoy with GraphQL and moving up the stack, or up the networking layers, is an extremely powerful thing; it brings a lot of power to the service mesh and is definitely the future of Gloo Mesh and the service mesh that we're building here on top of Istio.
The second part of the future of service mesh is how we optimize the data plane, or the placement of the layer 7 proxies. Specifically, why we might want to think about doing that is around optimizing for performance and for resource usage; some of the larger deployments of a service mesh may see some benefit from this. But we also have to keep in mind the extensibility use cases that we mentioned, and the operational use cases around upgrading and minimizing impact.
The data plane of a service mesh, or of application networking infrastructure, is vital; it's critical. The requests and the messages are flowing over these proxies, so upgrading and operating these things is extremely sensitive. And lastly, we also want to keep a tight security boundary.
If there are incursions or exploits, we want to contain them to the smallest blast radius that we can. So when we think about optimizing the data plane and the placement of the service proxies, we have to think in terms of these dimensions: what are the things that we're hoping to achieve, and what are the trade-offs that we're willing to make?
The first pattern that we'll look at is the service proxy, or sidecar, pattern. In this pattern, in the service mesh, the sidecar proxy gets injected right next to the application instance; in Kubernetes you can think of this as another container in the Kubernetes pod, and the lifecycle of the application is atomic with the proxy. When the application talks to another application, it talks first through its local proxy. That proxy then applies security policies and traffic routing rules, collects telemetry, and so on, and then carries on over the network. When the request gets to the other application, it goes through that application's service proxy, which then does rate limiting, request authentication, and authorization.
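The two-hop path just described can be sketched as follows. All names and the policy details are hypothetical illustrations, not Istio's actual enforcement logic:

```python
# Minimal sketch of the sidecar request path: the caller's sidecar records
# telemetry and attaches the workload identity (as mTLS would); the callee's
# sidecar enforces rate limits and caller authorization before the app sees
# the request.

TELEMETRY = []

def outbound_sidecar(src, request):
    # Client-side proxy: collect telemetry, attach identity, send on.
    TELEMETRY.append((src, request["path"]))
    request["source"] = src
    return request

def inbound_sidecar(dst, request, allowed_callers, budget):
    # Server-side proxy: rate limiting first, then request authorization.
    if budget <= 0:
        return {"status": 429}
    if request["source"] not in allowed_callers:
        return {"status": 403}
    return {"status": 200, "handled_by": dst}

req = outbound_sidecar("app-a", {"path": "/orders"})
print(inbound_sidecar("app-b", req, allowed_callers={"app-a"}, budget=10))
```

Neither application contains any of this logic; both sides of the policy live in the proxies, which is the point of the pattern.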
This can be a lot of overhead, and we do see this come up with, I would say, a smaller fraction of our customers who are worried about this level of overhead; but it does come up. In this model, however, a lot of the customers that we have favor the trade-off for things like feature isolation, where applications can be configured independently of others: things like connection pooling, things like timeouts and retries and budgets, and these types of things, or extensibility. Maybe they're injecting WebAssembly filters and that kind of stuff, and they don't want that to affect other applications.
So they can configure the proxy to the needs of a specific application without worrying about impacting the others, because they all have their own sidecar proxies. The other benefit is security granularity. When a request comes into an application, the proxy can say: hey, I represent application A, and this is application B calling me; is that allowed to happen? And we can get that scoped down to the level of the application instance.
Alternatively, sharing a centralized gateway or a host-based gateway, you're not down to the level of the application instance; you're down to the level of the host. Something still has to verify or assert the identity of the actual running applications at the end.
The last is upgrade impact. We can upgrade the sidecar proxies in a very fine-grained way, independent of the rest of the applications, so it gives us better control over the blast radius if things start to go wrong.
Another approach that comes up, and that is also a valid deployment model, is sharing the proxy for the entire node. When the applications communicate with each other, the traffic flows through the node-based proxy, and if it crosses into a different node, it goes from the proxy on its node to the proxy on the other node, and then the traffic continues on from there.
This is where it's very handy to have CNI-level control, maybe using things like eBPF, rerouting the traffic and forcing it through these layer 7 proxies. This enables the applications to not know, or have to care, about the proxies, and it gives you some security properties: the application can't go around the proxy, because you're using the lower layers of the network to force traffic through it. The obvious benefit of this model is better resource usage.
In terms of resource consumption this model does well, but if you're going to extend the proxy, injecting WebAssembly filters or Lua scripts and these kinds of things, you can't really get the level of isolation that a lot of our customers are really interested in. The other point here is around the security granularity you do get.
You can see in this chart that it's pretty optimal in terms of resource usage, but you lose some of the isolation and the security benefits that you might get with a more fine-grained, or sidecar, deployment.
A
Virtualized
away,
you
get
some
of
the
benefits
of
the
of
the
resource
overhead.
You
get
some
of
the
benefits,
you
still
get
the
feature:
isolation
and
the
lower
blast
radius
for
security
and
upgrade
impact,
and
you
sort
of
meet
in
in
the
middle.
Now, the last approach that we'll look at involves taking the sidecar proxies and moving them completely off the node, maybe putting them in a specialized part of the architecture, and using either the CNI plugins directly or a smaller, lightweight micro-proxy that lives with the applications and can still do very basic things like mutual TLS, connecting to a layer 7 Envoy proxy that lives somewhere else.
Do we have the tenancy for our different teams to be able to configure their applications? Once those things are in place, you can start to optimize the actual resource usage, isolation properties, and these kinds of things as you go and refine the architecture. Here at Solo, Gloo Mesh is our enterprise service mesh; it's based on Istio.
It has a multi-cluster-aware API built for global load balancing and failover across clusters, things like multi-tenancy, allowing different teams different parts of the API rather than sharing the entire service mesh API with them, and then lifecycle upgrades and these types of things; Gloo Mesh has all of these. Now, the next things that we are enabling are the GraphQL pieces, like I talked about earlier, and this is being built at the data plane layer, so you'll start to see more of that.
From layer 7 connectivity and security policy, to enabling new protocols to expose data, to improving the security model and the configuration model between layer 7 and layers 3 and 4, and giving you ways to optimize your usage of the proxies: these are going to be core differentiators here in Gloo Mesh, driving the innovation that's happening in this space, and as we continue to work with our customers and get feedback, all of this will continue to improve.
You can see here in this last slide that we're tackling layer 7 down to layers 3 and 4, including the GraphQL multiplexing, which I guess we're calling layer 8, but you can see that we're abstracting the network and focusing on ease of use and large-scale deployments of technologies like this.
That's it. Thank you for joining my session. Please go check out the sessions from our customers, some of the roadmap and update sessions, and of course check out the keynotes that will be kicking off SoloCon each of the days. I'm Christian Posta; thank you so much for joining, and I look forward to chatting offline.