From YouTube: Envoy from the Edge to Service Mesh and Beyond
Description
Speaker: Lawrence Gadban
Kubernetes has changed how we design and deploy applications, and with that, how we network these services together and expose them to external clients and end users. Enter Envoy, a popular proxy that is driving the modern app data plane and is the basis for many service meshes. However, this also causes a bit of confusion between proxies, gateways, and service meshes. In this talk we'll cover the role of Envoy at the edge and in the mesh, the functionality of the proxy, how they interact and differ, control plane interactions, and how to extend it.
A: Okay, so welcome. If this is your first time, welcome to Cloud Native Madison. If everybody could please make sure to have your microphones muted while we're giving the talks today, that'd be much appreciated. My name is Brandon Tim; I'm one of the co-organizers of Cloud Native Madison, along with Kurt Rakely.
A: You can see our Twitter handle there, and if you ever want to reach out about a topic or a talk, or really anything you might like to see in the meetup, go ahead and reach out on meetup.com or on Twitter, or however you can get a hold of us. Real quick, just a little bit of meetup business: we are going to hold our next online meetup next month.
A: On November 5th. We don't have a topic nailed down yet, but we are in talks to hopefully have a talk about chaos engineering in the cloud-native space, which is something we have not yet touched on in our meetup group. So we're really looking forward to that talk shaping up and having some new, interesting content for everybody next month.
A: Our sponsor this month is Solo.io. For those not familiar, Solo.io is really involved in anything in the virtual networking space with modern cloud-based architectures. They do tons of stuff with Envoy, which is obviously what we're here to talk about today. If you'd like to learn more about their company, you can see their website there, or you can just reach out to them on their Slack channel.
A: And if you want to learn more about what Solo.io is doing with that: I would like everybody to give a warm welcome to Lawrence Gadban. He's an engineer at Solo.io, and he's going to be talking about basically all the different applications of Envoy in the world today. So with that, take it away, Lawrence.
B: Okay, so yeah, I'm Lawrence Gadban, and I'll be giving a talk today called "Envoy from the Edge to Service Mesh and Beyond." Really, what we'll talk about is the role that Envoy plays when you're looking at a service mesh, or when you're talking about an edge gateway: the various places Envoy can fit in, and how Envoy helps when you're trying to use these types of patterns. And so, we mentioned Solo.io.
B: That's where I'm a field engineer: at Solo.io. We are the modern API infrastructure company, and basically, like Brandon mentioned, we are heavily based on Envoy proxy, and we'll talk about Envoy a whole lot in this talk. On the left here,
B: You can see a few of our products. We have Gloo, which is a gateway that we'll briefly touch on today; Service Mesh Hub, which is basically a multi-cluster service mesh tool; and then WebAssembly Hub, which, as WebAssembly becomes more and more common (and we'll talk about this as well), is a way to manage your WebAssembly modules and so on.
B: So, I'm Lawrence Gadban, field engineer; I've been at Solo for about seven or eight months now, working in the field directly with customers and users, on how they've been using our products and how we can help. So I've seen a lot of interesting trends and a lot of interesting things. Prior to that, I spent about eight and a half years at a company called USAA; they're a pretty large financial services company.
B
While
I
was
there,
I
spent
about
five
years
being
primarily
a
developer
and
so
working.
You
know
I
was.
I
did
some
java.
I
did
some
javascript
some
golang
and
then
the
last
three
or
three
or
so
years
I
was
there.
I
was
primarily
focused
on
kind.
B
Journey
that
they
were
going
under
so
you
know,
managing
openshift
clusters,
then
also
kind
of
working
on
api
security,
so
things
like
api
gateways
and
service
meshes
were
coming
into
play
as
well,
so
let's
go
ahead
and
get
into
the
actual
into
the
actual
content.
One of the things that we see, and that I think a lot of organizations are going through, is the journey from a monolith to a microservice-based architecture. This is a huge topic; there have been tons of talks about whether microservices are right for you, and so on. But we do see quite a few folks going through this kind of journey, and what that ultimately means is that you have an existing, legacy-style setup where you have a monolith, and you want to break it up into smaller components. There are technical reasons why this makes a lot of sense; the easy one to think about is when you have a monolith.
B: So if you can find a way to break out that functionality, you get a benefit in that you're reducing the risk of making a change; you can hopefully move faster; and there are other valid technical reasons why this kind of migration makes sense. Besides that, there are also organizational benefits to doing something like this, and most large organizations and enterprises will have
B: Whether or not that's a good thing can be debated as well, but it's definitely something that's common, and so microservices are a nice fit when you have groups of services that are owned by specific teams. If the functionality is owned by that team, and they own the deployment unit, now they can iterate as fast as they want: their velocity is under their control. They can also do things like choose the languages they use, and so on. So there are other benefits.
B: It's not just technical; there are organizational reasons why this makes a lot of sense. But when you do this, you introduce new challenges. For example, let's say you have 20 separate components in your monolith. When you break those up into microservices, now you have 20 different services, and: are all of those services expected to be called by everybody else, or are you going to expose some of that functionality as an API?
B: That is, an externally facing API that behind the scenes will call separate microservices. Basically, you have to figure out: how do I manage all of those APIs and all of those services? Then there's the question of security: how do you enforce security? Before, if everything lived in a monolith, you had the benefit of knowing that you were basically making an in-process call to get some other functionality.
B: Now that that's broken up, you have to worry: okay, this is going to be a network hop; this is potentially going to be in the cloud; I cannot trust that I'm running in a safe environment, or that this service is reachable over a secure network. So now I have to introduce security at all of those different places. And then finally, you need to worry about observability.
B: Before, you had a request being processed by your monolith, and that was all happening in the same place; you could go to the logs and see the flow. But when you have all of these discrete services, now you have to worry about troubleshooting that: you have to know which service called what. Do these logs correspond to the same logs in this other service: service A and service B and service C, and so on?
B: So hopefully you have some sort of correlation ID, because it becomes challenging to know what's actually happening in your system. And to throw on top of that: one of the things that's happening is that you may even be working with multiple clouds. These are public clouds being called out here, but they also could be
B: So one of the patterns that's come into play (it's been popular for a while now, and we're starting to see folks getting really serious about it) is service mesh. Service mesh is a pattern that helps with some of these challenges.
B: I'm sure many of you are familiar with service mesh, but just to briefly talk through it: what you're really doing with a service mesh is, you have all of these discrete services, and those services are represented as workloads. We'll talk in Kubernetes terms: you have a Service and you have a Deployment.
B: So you have multiple pods that represent your workload, and what you can do is push a proxy that lives alongside your workload. In the same pod you'll have, let's say, two containers: a container for the workload, and a container that contains the sidecar proxy. It's the sidecar pattern. When you do that, you're offloading all of the networking responsibility of the service communication to the proxy. The application will make a request to service B, but it's ultimately the sidecar proxy that's going to make the request to service B; and on the service B side, the request is being handled by the proxy that lives inside the service B pod. You get a lot of benefit by doing this, because it helps with all of those concerns we were just talking about.
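As a rough sketch of that sidecar pattern, here is what the two-container pod can look like in a Kubernetes manifest. The names and images are illustrative only; in a real mesh like Istio, the proxy container is normally injected for you rather than written by hand:

```yaml
# One pod, two containers: the workload plus an Envoy sidecar proxy.
apiVersion: v1
kind: Pod
metadata:
  name: service-a
spec:
  containers:
  - name: app                        # the actual workload
    image: example/service-a:1.0     # hypothetical application image
    ports:
    - containerPort: 8080
  - name: envoy-sidecar              # the sidecar proxy; inbound/outbound
    image: envoyproxy/envoy:v1.15.0  # traffic is redirected through it
    ports:
    - containerPort: 15001
```

In practice the redirection of traffic into the sidecar is set up by an init container or a CNI plugin rewriting iptables rules, which is also where the startup-ordering issues discussed later in this talk come from.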
B: In order for this to work, though, you need what is called the control plane, because the proxies inherently are going to need some configuration; they're going to need to know things. Okay, so let's try this: say I have service A, and I want to talk to service B. This request is happening,
B: and the control plane will take that information, translate it into what the proxy can understand, and send that information to it dynamically. In the case of this example, again, we're using Envoy as the proxy, and we're going to talk about Envoy a lot; it's kind of the de facto standard for service mesh. There are other proxies, but Envoy is pretty popular right now, and for valid reasons, and we'll be talking through a lot of this.
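To make the control-plane idea concrete: Envoy learns its configuration dynamically over its xDS APIs, so the proxy just needs a bootstrap telling it where the control plane is. A trimmed sketch (the cluster name and the istiod-style address are assumptions; Istio, Gloo, and others generate the equivalent for you):

```yaml
# Envoy bootstrap: fetch listeners and clusters dynamically from a control plane.
dynamic_resources:
  ads_config:
    api_type: GRPC
    grpc_services:
    - envoy_grpc:
        cluster_name: xds-control-plane
  lds_config: {ads: {}}   # listener config arrives over the ADS stream
  cds_config: {ads: {}}   # cluster config arrives over the ADS stream
static_resources:
  clusters:
  - name: xds-control-plane          # where the control plane lives (assumed)
    connect_timeout: 1s
    type: STRICT_DNS
    http2_protocol_options: {}
    load_assignment:
      cluster_name: xds-control-plane
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: {address: istiod.istio-system, port_value: 15010}
```

The point is that routes, endpoints, and certificates can all change at runtime without restarting the proxy, which is what "first-class dynamic configuration" means later in the talk.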
B: That being said, when you take the bigger picture of what this actually looks like: you have a bunch of services that live in your cluster; let's just say this is a Kubernetes cluster.
B: In this case, you have five services, with your sidecar proxies, talking to each other in the mesh. But then you also have a north-south component to it: if we're calling the traffic within the cluster, within the mesh, east-west, there still is a north-south component, because traffic needs to get into this mesh somehow. And so that's where the edge comes into play. And in Istio, for example, you can also have an egress proxy.
B: But the concerns there are at the edge, and that's a different set of concerns than what's happening inside of the mesh. So, a mesh is great. It's becoming very popular; some of it's hype, some of it's reality, but this is a real thing, and there's a lot of value in a service mesh. We think that, for most enterprises and organizations, as you grow in a cloud-native world, as you grow in a microservice world,
B: there are absolute reasons why a service mesh is useful. But it's not something you want to just run into blindly, and so there are things you have to consider when you're talking about adopting a service mesh, about getting to a service mesh. Let's go through some of those. The first question really is: do you even need a service mesh? Because, again, service mesh has definitely become a very popular thing, and folks are trying to use it
B: maybe when they don't need it, or don't need it yet. So you need to ask yourself: do I even need to use a service mesh right now? Some of the questions you can ask yourself as you're going through that decision: do you have a mix of application languages or frameworks? This is an important question, because when you're in a smaller organization or a smaller environment, some of the stuff that we're talking about in the service mesh
B: So, one of the examples of what a service mesh provides is things like fault tolerance. As you're talking distributed architectures, you're making a bunch of network calls; requests are going to fail, that's going to happen. And one of the ways you can solve that (maybe this is kind of a band-aid, but a lot of the time it makes sense) is just to retry the request.
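With a mesh, that retry behavior moves out of application libraries and into proxy configuration. For example, in Istio it is a few lines of declarative config on a VirtualService; the host name and the numbers here are purely illustrative:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-b
spec:
  hosts:
  - service-b                        # hypothetical in-mesh service name
  http:
  - route:
    - destination:
        host: service-b
    retries:
      attempts: 3                    # retry a failed request up to 3 times
      perTryTimeout: 2s              # deadline per attempt
      retryOn: 5xx,connect-failure   # which failures count as retryable
```

The sidecar enforces this for every caller of service-b, regardless of what language the calling application is written in, which is exactly the contrast with per-language libraries discussed next.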
B: And so, if all of the services in your organization are implemented with the same language or framework, that's something you can build into your app; you can build that into the application layer. And if you look at some of the relatively older technologies, in the Java world there was Hystrix, by Netflix, and there's Resilience4j.
B: There are other tools that build this kind of stuff into the app, so you're instrumenting your application with a tool that can perform fault-tolerance-type functionality. That doesn't work as well if you're in an environment that has multiple languages or frameworks; in that case, let's say you have an infrastructure team that owns whatever library you want your apps to use.
B: If you only have 10 services in your entire organization, the complexity of a service mesh will probably outweigh the benefits of using it, because a service mesh is not a light thing: you're putting proxies at every service, and you're changing the network path for every single service call.
B: So it's not something you want to just rush into, especially if you only have 10 or so services and you don't see potential for growing that. Another question is: do you actually have a lot of service-to-service communication, a lot of east-west? Let's say you're building a single-page application and you have 10 APIs that you call, but those 10 APIs are kind of standalone: you call one of them and it's a self-contained unit.
B: And then the response goes back to the single-page app. There's no real benefit from a service mesh there, because there's nothing you need to do when you're making that API call: you just make the call, the API processes it and sends the response back. If there's no east-west communication happening, there's no reason to use a service mesh. Then the next one is: are you struggling to implement application network observability?
B: This is one where, if you have a solution for making 20 different service calls in a single request, if you have a way of understanding that flow, then maybe you don't need to use a service mesh. If that's already something that's comfortable to you and your organization, then maybe you don't need it.
B: One of the big values of the service mesh is that it makes observability kind of table stakes, because you get that for free at the proxy layer. But if you don't need that, maybe you don't need a service mesh at all. And then finally: are you comfortable with your infrastructure stack? The reason this is important (let's talk Kubernetes): a service mesh is a pretty complicated piece of tooling, and Kubernetes itself is complicated.
B: So if you're not really comfortable with Kubernetes, and your SREs, or whatever you're using to ensure the availability of your system, still find Kubernetes challenging, if you're not quite there yet, then throwing a service mesh into that is really only going to exacerbate the problem. You need to be comfortable with the infrastructure you're running on before you start considering a service mesh.
B: Then: which one do you choose? There are quite a few service meshes now. Istio, I would say, is probably by far the most popular, but Istio isn't the only one, isn't the only answer. And when you're talking service mesh, there are also considerations for which environment you're running in. For example, if you're running in AWS, App Mesh is a very viable solution.
B: That's a native thing in your environment. Microsoft just released Open Service Mesh (OSM), and then there's Linkerd, which actually was one of the first service meshes.
B: So choosing the right one can be a challenge. You don't want to pick the wrong one and start building out all this tooling around it; and, like we talked about, a service mesh is a complex thing. Picking a service mesh, putting all this investment in, and then deciding that, oh, maybe this wasn't the right choice:
B: that's not really a situation you want to be in. Then another question is: who's going to support it? This is more along the lines of enterprise support. Some Kubernetes distributions may come with some sort of Istio offering that you can use.
B: OpenShift has one. But if you're going to use something like this in a production environment, more than likely you're going to want support, and that's an important decision, because if you don't have that, and you're not comfortable with the service mesh, then if something goes wrong, you're going to be in a tough spot.
B: Just as a side note, we at Solo are actually offering enterprise Istio support, so something to consider there. Then another question is multi-tenancy within a cluster. If you are running a Kubernetes cluster and you have a multi-tenant environment, that can become a challenge with service mesh, because then you have to decide: do I want a single mesh for all of these tenants, or does each one of these tenants get its own mesh?
B: And if so, how am I going to manage that? That's not something that's trivial to do. Along the same lines is managing multiple clusters. In a lot of circumstances you have multiple clusters, whether that's for breaking up domains (so one line of business has its own cluster, and a different line of business has a different cluster) or you just have multiple replicas of your environment, for various reasons.
B: Istio lets you do things like a shared control plane for two meshes, or you can have a replicated control plane, but that's not an easy thing to do. So you'll need to consider what your cluster deployment looks like as you're building out your service mesh. The next one, fitting with existing services, is actually something that's really improving over time.
B: You just threw another container in your pod, but there was no control over things like the ordering. You can use an init container, but that's only for a container that starts and stops before the rest of your containers run. So that is something that's improving, because now there is a lifecycle being built into Kubernetes that helps with this race condition. A Job is another, similar thing.
B: If you're running a Job and you need to talk to services in the mesh, you need the sidecar proxy, but the sidecar proxy doesn't really care about your lifecycle. That would mean that the proxy needs to start up before your container, your job, and then once your job is finished, the proxy needs to shut down, and you have to manage all of that.
B: The next one: delineation between developers and operations. A lot of the concerns of the service mesh kind of straddle the line between development and ops (or DevOps, or whatever), so you have to figure out who's going to own the various configurations: who's going to own the fault tolerance config, who's going to own the security config, who's going to own the load balancing config, and all of that. Those are things you need to figure out as you're going through the service mesh adoption journey. And then lastly: what about legacy and hybrid environments? This is important because in most places you're not operating completely greenfield. You have a set of legacy infrastructure that lives on-prem, and now you're finally going to the cloud, and you're trying to go greenfield with microservices; how do you bridge those two?
B: If you have 100 services in your service mesh, and every single one of those services can talk to each other, now you're looking at the potential for the call graph exploding and looking like something like this, where it's really hard to reason about which services can talk to which other services; and since every service can talk to every other one, it becomes this giant mess.
B: So how do we take all of those considerations and challenges, and what can we do to help along that path? At Solo, one of the patterns that we've seen be successful, and that we kind of recommend, is starting with a gateway approach: starting very simple, at the edge, with a single proxy. Going back to the example: you have a legacy on-premises environment, and you're moving to a cloud environment for microservice-based stuff.
B: You can use a single proxy that will help you. Okay, we're going to deploy five microservices to start; this is our first iteration. You can use a gateway there that will front those microservices, and then you can start growing from there. So, down here, we call it something that lets you iteratively adopt service mesh; that's ultimately the goal: use a gateway, get some services running, get the interactions right.
B: Right here we call out: understand the necessary infrastructure. When you're talking cloud, and when you're talking this kind of declarative-style configuration, this is where things like your CI/CD pipeline come into play. With Istio, you have a set of declarative configs: a VirtualService, and so on.
B: Those are things where GitOps is becoming more and more popular, for valid reasons, and if you want to use that style of deployment, you need to get your configuration, that infrastructure, in place for it to work correctly. So, for example, you have an application, you have a repo for the application, and then you have a configuration repo that contains the Kubernetes manifests and the Istio manifests, and you need to figure out how you're going to deploy all of that.
B: One of the other things that's really important, we say, is that the data plane matters. If you're going to pick a gateway to help you on this journey, you want to pick the data plane that matches what you're more than likely going to pick for your service mesh. Obviously, what I'm getting at is that most of the service meshes, other than Linkerd (we kind of talked about a few of them),
B: most of the service meshes are built on Envoy. And so, if you're going to take the gateway approach to help bridge that gap, it makes sense to pick something like Envoy for your gateway. Why would you want to do something like that? Basically, it answers all of those questions from before. A gateway gives you flexibility, really.
B: I think the main point that I'm trying to make is that a gateway is a lightweight unit you can use to compose your architecture based on your needs. As you grow, you're going to see things that you didn't expect, and if you have gateways that you can put in place in your deployment, it helps you augment and complement existing infrastructure; it helps you iterate on what you have defined for your new stuff.
B: And ultimately, if you need it, once you're done with your old infrastructure (if that ever happens), you can cut it off with gateways. They're basically units you can use to help control that flow of traffic. And if you use an Envoy-based edge proxy or gateway, you get all of the benefits of Envoy; we'll get more into this later.
B: You can easily do canaries, you can easily do retries, and all the stuff we talked about. And using Envoy at the edge also means that when you start using Envoy in the service mesh, you're already used to these concepts; you already know how to do all this stuff, and it really helps with that progressive iteration. And then there's improving your security posture. That's one of those things where using gateways is kind of like a plug for different locations.
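The canary case mentioned above is a good example of what an Envoy-based gateway or mesh makes easy: a weighted traffic split between two versions of a workload. Sketched again as an Istio VirtualService, with illustrative names and weights:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - my-app                # hypothetical service name
  http:
  - route:
    - destination:
        host: my-app
        subset: v1        # stable version keeps most of the traffic
      weight: 90
    - destination:
        host: my-app
        subset: v2        # canary version gets a small slice
      weight: 10
```

(The v1/v2 subsets would be defined in a corresponding DestinationRule.) Shifting the weights over time, rather than redeploying, is what makes the rollout progressive.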
B: You can put enforcement points where they make sense. So, as you're going through this journey of on-prem, legacy, cloud, microservices, we want to put an enforcement point in one location, because that's where we want to start enforcing OAuth, or that's where we want to start using JWTs for everything. It gives you a composable way of doing that.
B: So, when you're talking edge, and when you're talking service mesh (we talked about this earlier), there's a different set of concerns that you need to care about inside of the mesh, east-west, versus at the edge, north-south. And when we're talking north-south, this is kind of what the topology looks like: you have these services in a cluster.
B: This can be, or maybe it isn't yet, a service mesh, but the service interactions between these have a different set of concerns versus what's at the edge. And the edge, really, is whatever boundary, whatever logical unit, you define as the edge; that's kind of what I was saying about the gateway: it allows you to be flexible in how you draw those boundaries, in whatever context you're defining them. The easiest way to look at it would be a Kubernetes cluster: these are the workloads in the Kubernetes cluster, and then this is your edge, your ingress in the general sense, with north-south traffic ingressing into your boundary. So, in conjunction with the service mesh, let's take a look at which concerns apply in both cases. Traffic control and routing: pretty basic.
B: You need to be able to route correctly, and you want to route based on host and path and headers and all that kind of HTTP stuff, or whatever else it is. You're going to do that in the mesh, and you're going to do that at the edge. TLS and mTLS: they're both important, and they're important in both places.
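The host/path/header routing just mentioned looks roughly the same whether the config ends up at the edge gateway or in the mesh. An illustrative Istio VirtualService match (all names here are made up for the example):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews.example.com        # route on host...
  http:
  - match:
    - uri:
        prefix: /api/v2        # ...on path...
      headers:
        x-beta-user:           # ...and on headers
          exact: "true"
    route:
    - destination:
        host: reviews-v2       # matched traffic goes to v2
  - route:
    - destination:
        host: reviews-v1       # everything else takes the default route
```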
B: This is actually an interesting one, because when you're thinking of a mesh, one of the main value adds that Istio provides is the automatic mTLS support. That works because you have proxies that get injected into your workloads' pods.
B: The concern is still there, for TLS especially, but it's a little bit different. When you're talking edge (if you go back to here: this is a Kubernetes cluster, and this is your application), at this edge you really want to present your real application cert, your example.com cert; you want to serve that up from here. But inside of the mesh, usually you're going to be using some sort of CA; if you use Istio the default way, it uses its own CA.
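In Istio, for instance, the in-mesh side of this is a few lines of policy; the mesh CA then issues and rotates the workload certificates behind the scenes. A sketch, assuming Istio 1.5+:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # mesh-wide when applied in the root namespace
spec:
  mtls:
    mode: STRICT            # sidecars accept only mutual-TLS traffic
```

The edge cert (the example.com one) is configured separately, for example on the ingress gateway's TLS settings, which is exactly the edge-versus-mesh split being described here.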
B: Those certs are fundamentally different. So the mTLS and TLS concerns are there in both places, but they're a little different depending on whether you're talking about the edge or the mesh. Observability: we've talked about that already; it's a very important thing. When you're talking microservices and multiple service interactions, observability becomes increasingly important, because, from personal experience, trying to troubleshoot distributed applications that don't have good observability is a nightmare; you don't have anything to tell you what happened.
B: You're basically looking for a needle in a haystack. That's where, if you have it implemented correctly, your edge gateway and your service mesh can all participate in the same trace. You have some sort of correlation ID (Envoy has a nice way of solving that), and you can actually track the request as it flows through your 20 different service calls. And then there's policy enforcement, going back to security.
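Concretely, Envoy can stamp each request with an `x-request-id` header and report spans to a tracing backend when tracing is enabled on its HTTP connection manager. A trimmed sketch of the relevant Envoy config (the `zipkin` collector cluster is an assumption; it would be defined elsewhere in the config):

```yaml
# Fragment of an Envoy HTTP connection manager configuration:
generate_request_id: true        # attach a UUID x-request-id to each request
tracing:
  provider:
    name: envoy.tracers.zipkin
    typed_config:
      "@type": type.googleapis.com/envoy.config.trace.v3.ZipkinConfig
      collector_cluster: zipkin          # assumed cluster for the collector
      collector_endpoint: /api/v2/spans
      collector_endpoint_version: HTTP_JSON
http_filters:
- name: envoy.filters.http.router
```

One caveat worth knowing: the proxies can start and report spans, but the application itself still has to copy the tracing headers (x-request-id, b3/traceparent) from inbound requests onto its outbound calls, or the trace breaks apart mid-flow.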
B: You're going to need to do security: you're going to need to do it at the edge, and you're going to need to do it inside the mesh. There's no getting around that, especially when you're talking cloud environments, the whole zero-trust network idea and everything; you basically have to do that.
B: Then, if you talk about north-south versus east-west, there are also some things that you don't really care about, or don't care about as much, inside of the mesh versus at the edge of your boundary. A common one is OAuth, or more specifically OIDC, OpenID Connect. That's really the authentication tool for, let's say, a web-browser-based application.
B: You have a user that's logging in with their credentials; you're talking to some auth server; they're logging in in the browser; and you get an ID token that represents the identity of the user. That's more of an end-user concern; it's not really something that happens inside of the mesh. It could, but it's not as common. So that's something you want to enforce and take care of at the edge. Then, web application firewalling:
B: a WAF, that's another one of those things that usually has to do with untrusted user interaction flowing in and out of your system. You could have a WAF at every hop inside of a mesh, but more than likely that's not something you would want to do. Message transformation:
B
This is where a lot of the value at the edge comes from, especially if you're talking about the API gateway model: you're exposing an API that is not necessarily tied one-to-one to the internals of your system. You're publishing an API, a contract that you want people to interact with, but you're not telling them "this is the internal structure of my system." You're saying: use this API, and behind the scenes I'm going to do whatever I want.
B
You may also have edge-specific security like HMAC, and API-to-service decoupling kind of goes along with the message transformation. So, enter Envoy as a gateway, and this is kind of the whole thing. Envoy is a pretty fantastic piece of software, a CNCF graduated project, and it's blown up in popularity for a lot of valid reasons. If you use it at the gateway and in the mesh, you get a lot of benefits. So why Envoy?
B
Here's a laundry list of things that Envoy does, and a lot of this stuff other proxies do too. The one I want to talk about a lot is first-class dynamic configuration. The reason is that when we're talking about this distributed architecture in the cloud, "pods are mortal," I think, is how the Kubernetes documentation puts it. So you're going to have pods that die.
B
You're going to have nodes that scale in and out, pods that move around for maintenance on those nodes, and so on. Basically everything is dynamic in that environment, and you need a proxy that can handle that. When you think of some of the older proxies, they're good tools and they definitely have their place.
B
But a lot of those proxies were built to be statically configured: you have some configuration file that tells the proxy, these are the endpoints, and this is how I want you to do the routing rules. That's great, but if you ever need to change it, you need to change that static config and reload the process. There are cool things you can do, like,
B
okay, we'll do a hot reload, where we transfer the open sockets to the new process that spins up, and all that. But that's all working around the fact that you have to statically configure the proxy. So when you're talking dynamic configuration: HAProxy and NGINX are tools that have been around for a while, and they're adding dynamic configuration, but they're doing it because in the modern cloud world you have to be dynamic; there's no getting around it. And so, Envoy itself:
B
unlike some of the other ones, which I think mainly use HTTP, with Envoy you can do dynamic configuration over gRPC, so it's very efficient, and the API was designed from the ground up to handle that. So Envoy makes a lot of sense for this kind of environment. Then, to briefly look at the logical order of what this could look like:
B
you have Envoy sitting as a proxy between a client, or another service, and whatever you define to be behind your edge or behind your gateway. So maybe you want to route to your monolithic legacy infrastructure, then to service A over gRPC, then a websocket call to service B, but you also want to do a canary to service B-prime, where five percent of traffic goes to the new version.
B
And so this makes sense, but then you can even throw the API decoupling in front of it. Now you have an API that you're exposing that isn't mapping one-to-one to what's happening behind the scenes. For example, this gRPC call: you could expose that as a JSON REST endpoint, but behind the scenes you're actually making the gRPC call to service A. There's a lot of power here.
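Envoy can do that gRPC-to-JSON decoupling with its transcoder filter. A minimal sketch, where the descriptor path and service name are assumptions for illustration:

```yaml
# Hypothetical: expose a gRPC service as a JSON/REST endpoint at the edge.
# The proto descriptor set would be produced with `protoc --descriptor_set_out=...`.
http_filters:
- name: envoy.filters.http.grpc_json_transcoder
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder
    proto_descriptor: /etc/envoy/petstore.pb   # assumed path
    services: ["petstore.PetStore"]            # assumed fully-qualified service name
- name: envoy.filters.http.router
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```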
And so that's kind of what Gloo is. Gloo is an API gateway built on top of Envoy; ultimately, Gloo is the control plane and Envoy is the data plane, and Envoy is built on filters. What Gloo does is provide a way for you to dynamically configure Envoy without having to build your own control plane.
B
You can use a control plane that's already built for you, and it runs anywhere. It can consume a lot of different kinds of config, but Kubernetes is the most common, and like most Kubernetes apps you can configure it with custom resources. Gloo will watch those custom resources in your Kubernetes environment,
B
translate them to the format that Envoy understands, and stream that over the xDS API, which is how you dynamically configure Envoy: okay, these are the filters that we want to have on a given request.
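On the Envoy side, subscribing to a control plane like that is just a few lines of bootstrap config. A sketch, assuming the control plane is reachable through a statically defined cluster named `xds_cluster`:

```yaml
# Hypothetical bootstrap fragment: fetch listeners and clusters dynamically
# over a gRPC ADS stream instead of defining them statically.
dynamic_resources:
  ads_config:
    api_type: GRPC
    transport_api_version: V3
    grpc_services:
    - envoy_grpc: { cluster_name: xds_cluster }   # assumed cluster for the control plane
  lds_config:
    resource_api_version: V3
    ads: {}
  cds_config:
    resource_api_version: V3
    ads: {}
```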
Actually, I'm looking at the time, and I definitely want to have some time for the demo, so I'm going to go a little faster here.
B
Basically, Gloo adds the edge functionality that we were talking about, and it's one of the ways you can accomplish some of this stuff. As for patterns, I'm going to skip through this. Ultimately, what we're trying to show is that if you start off small, having a single edge gateway will help; you don't have to have a service mesh yet.
B
You can have however many microservices and use a gateway to front them, and then, as you grow, you can start using additional gateways or edge proxies, defining your edge at different layers based on how your deployment is growing. And then, when you throw service mesh into it, it may look something like this, where you have multiple service meshes with inter-service,
B
east-west traffic, but then you're jumping out to different API gateways that live at different points. That helps prevent the death-star architecture, because not every single service is going to talk to every single other service; you have these nice decoupling points where a service in mesh one can talk to another service in mesh two. And then, ultimately, looking into the future:
B
as your deployment grows and you have more meshes and more gateways, you're going to want tools that help with that, because it can quickly get out of proportion. That's where tools like Service Mesh Hub or Gloo Federation help, or even some of the other open-source tools like KubeFed, which can help federate amongst clusters. It's basically something to consider. And then, finally, WebAssembly is kind of the future of Envoy proxy.
B
If you want to extend Envoy today, what you need to do is write an Envoy filter in C++ and build it into the Envoy binary. So you need to build Envoy yourself, add your filter into that build process, and then you get a binary that contains your filter with your custom code in it.
B
WebAssembly is going to change that. It's going to give you a way to write an Envoy filter in, soon, any language; today there's a subset of languages that can compile to the WebAssembly that Envoy expects. For example, you could use Rust, and if you use Rust, then you can compile it:
B
you write a filter in Rust, get a WebAssembly module, and then dynamically load that into Envoy to customize its behavior. I'm rushing through this, but this is really exciting and powerful, because it gives you the flexibility to dynamically extend Envoy. If you have Envoy in the mesh and Envoy at the edge, you can deploy those in different places and compose your architecture.
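Loading such a module is itself just Envoy config. A sketch, assuming a module compiled to `/etc/envoy/my_filter.wasm` (the path and names are illustrative):

```yaml
# Hypothetical: load a compiled WebAssembly module as an HTTP filter.
http_filters:
- name: envoy.filters.http.wasm
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
    config:
      name: my_filter                       # assumed filter name
      vm_config:
        runtime: envoy.wasm.runtime.v8      # built-in V8 runtime
        code:
          local:
            filename: /etc/envoy/my_filter.wasm
- name: envoy.filters.http.router
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```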
B
You can now also deploy custom logic at each one of those points, so it becomes very powerful and very flexible for whatever you want to do with your traffic. With all of that being said, let's get into a demo. What I initially was going to demo was Gloo interfacing with a service mesh,
B
with Gloo as the gateway and Istio as the mesh, but I think it's actually more beneficial to talk through what Envoy is actually doing under the covers, and why Envoy is so great for this job. So I'm going to go ahead. Okay, I have a terminal here; I hope that's big enough, and if not, someone please tell me. What we'll look at is:
B
I have a pet store, which is my sample app; it just serves up some JSON. And then I have an Envoy pod. This pod is running a single container that has Envoy in it, plus a single configuration file. Let's take a brief look at that config file; ignore the name, this is based off of one of the examples that comes with Envoy.
B
Okay, so this is my static config file, built into my image, and we'll go through it really quickly to show how it's set up. The first thing we have is an admin section, which defines the admin interface for Envoy; we're going to use this to explore what's happening inside of Envoy. Basically, I'm saying that the admin interface is going to be available at localhost on the admin port, 19000.
B
Then I get into the actual configuration of what I want my proxy to do. Since it's all static, it's under the static resources section. The first thing I do is set up a listener, and a listener essentially translates to a port: Envoy is going to listen on any address at port 10000, and it's going to be expecting a TCP connection.
B
Then we assign a filter chain, and this is where all the processing happens. In this case there's only one real filter, the HTTP connection manager. This is a network filter, which you can see here, and network filters are kind of the lower-level filters for Envoy. We were just talking about TCP connections:
B
you need something that will translate those into HTTP concepts, things like headers, the path, the request body, and all that. That's this network filter's job, the HTTP connection manager. Then we start configuring it, and one of the important pieces of configuration is the route table. In this case I'm defining a single virtual host called local_service.
B
It applies to any domain, because of the star domain; that means whatever the host header of the request is, this virtual host will handle it. So basically anything will be handled by this virtual host. Inside of that we have a single route configured, and I'm setting up a matcher with a prefix.
B
This is a path prefix of slash, which is another way of saying basically any request, because every request path starts at the root, slash. What I want to do is route to a cluster called petstore, and we'll get to that in a second. Then, as another configuration on this connection manager, I'm defining a set of HTTP filters.
B
Again, the HTTP connection manager is a network filter, kind of the lower level, but then there's also the concept of HTTP filters, which operate at the HTTP layer. In this case I only have one, the router, and that's what's actually responsible for taking the HTTP request, inspecting it based off the route table, and forwarding it to the correct upstream destination. And then I mentioned the cluster; this is my cluster definition.
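Pulling the pieces just described together, the static config looks roughly like this. This is reconstructed from the walkthrough rather than copied from the demo file, and the admin port here is an assumption:

```yaml
admin:
  address:
    socket_address: { address: 127.0.0.1, port_value: 19000 }
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 10000 }   # the TCP listener
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager     # network filter: TCP -> HTTP
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            virtual_hosts:
            - name: local_service
              domains: ["*"]                 # match any host header
              routes:
              - match: { prefix: "/" }       # match any path
                route: { cluster: petstore }
          http_filters:
          - name: envoy.filters.http.router  # forwards to the matched cluster
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: petstore
    type: STRICT_DNS                         # resolve the Kubernetes service name
    load_assignment:
      cluster_name: petstore
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: petstore, port_value: 8080 }
```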
B
If I go back to my terminal and get my services, you'll see that I have a petstore service defined that listens on port 8080. What's going to happen is that Envoy is going to do a DNS resolution of petstore, and since it's running inside of the same namespace, it knows that petstore corresponds to the 10.108.223.120 IP. So what I'll do now:
B
okay, so that's there, and then if I curl localhost, port 10000, slash api/pets, I get my response from my pet store application. What happened is that I made a request to the listener, which has HTTP set up, and the HTTP request was inspected. There was no host header set, so the wildcard domain matched, and the slash prefix matched because I sent /api/pets, so Envoy routed that request to my petstore service. So what we'll do now:
B
this is the admin interface that I was talking about. What I can do is go to my clusters here; let me make this a little bigger. These are the cluster definitions for my Envoy instance, and in here you'll see the petstore cluster. Again, we defined it by the petstore DNS name, and we saw that the IP address for the petstore service is 10.108.223.120, so you have that defined here. This petstore cluster has a single endpoint that corresponds to the cluster IP of my service. That's how the routing takes place, but ultimately, behind the scenes, this is a service IP, which means that kube-proxy is really taking over. So if I scale up my deployment, let's scale it up to two, now I have another pod.
B
We can't actually do things like canary releases, because we can't target individual pods, and that's where the control plane comes into play. Not only do you want to be able to update this dynamically, you also need something that understands Kubernetes and can give the correct IP addresses to your environment. So really quickly, let's look at using Gloo for that, or really any control plane:
B
any control plane that's Kubernetes-aware can do this. I have Gloo installed, so we'll look at the deployments in the gloo-system namespace. This is a pretty slimmed-down install, just the open-source version of Gloo: I have gateway-proxy, which is the actual Envoy pod; I have gloo, which is my control plane; and then some other stuff, which I'll skip for now. Ultimately, what I have now is Gloo's proxy listening on port 8080.
B
So if I curl port 8080, slash api/pets, it works as well. The reason this works is that I have config: Gloo looks for Kubernetes-style configuration called a VirtualService, very similar to what Istio uses. If I look at my virtual services, I have a petstore one, and if you look at what it actually looks like, it should look familiar, because this is really the abstraction that we use in Kubernetes for configuring Envoy.
B
We have a virtual host, a domain, a route, matchers, and so on. Ultimately, behind the scenes, we're telling Envoy: I'm configuring you through a Kubernetes CRD, and I want traffic to go to an upstream that corresponds to that petstore service we were just talking about. But really briefly:
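The petstore VirtualService being described looks roughly like this. The field values are reconstructed, and the upstream name follows Gloo's discovered-service naming convention but is an assumption here:

```yaml
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: petstore
  namespace: gloo-system
spec:
  virtualHost:
    domains: ["*"]              # same wildcard-host idea as the raw Envoy config
    routes:
    - matchers:
      - prefix: /
      routeAction:
        single:
          upstream:
            name: default-petstore-8080   # assumed discovered upstream name
            namespace: gloo-system
```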
B
What I'll do now is port-forward, in the gloo-system namespace, to my gateway, the Envoy instance there, which is called gateway-proxy-something, and go to port 19000, which is the admin interface for that one. When I check my clusters there, there's a lot more here, because we have service discovery and such working in Gloo, but what's important is the petstore one. So this time these are my clusters for petstore.
B
Let's see if this is easier to show. Here's my cluster definition for petstore, but you'll notice that, first of all, I have two different IP addresses, 10.244.0.10 and 10.244.0.20, and those are not service IPs. If I get my pods and look at their IP addresses, you'll see that the pet store pods are 10.244.0.10 and 10.244.0.20.
B
Those are what the endpoints are from the Envoy cluster perspective, because now I have something that understands Kubernetes and knows what pod endpoints are. If I run get endpoints, you see that petstore has those two endpoints, because under the covers Kubernetes has a concept of endpoints, and the control plane is watching those and says: okay, for the petstore service,
B
these are the endpoints that you care about. And so now I can do layer-7 stuff: okay, I want to send five percent of all requests to 10.244.0.20 and the rest to 10.244.0.10.
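In raw Envoy terms, that five-percent split is a weighted-clusters route. A sketch, with the two cluster names assumed for illustration:

```yaml
# Hypothetical: send 5% of traffic to the canary, 95% to the stable version.
routes:
- match: { prefix: "/" }
  route:
    weighted_clusters:
      clusters:
      - name: petstore-stable   # assumed cluster name
        weight: 95
      - name: petstore-canary   # assumed cluster name
        weight: 5
```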
It's a little quick, but under the covers this is how Istio works, this is how Gloo works; this is how you make Envoy work for you in the cloud-native microservices world. So I'll stop there; we only have five minutes left.
A
All right, I'm sure everybody being on mute is the only reason you can't hear anybody cheering with applause. That was great, Lawrence, thank you. I really liked how you started with a bare Envoy config, because, coming from a company that's implemented Istio, I think people can get their hands around the configuration surface that Istio exposes, but then, when they think they've got that YAML right and Envoy, or Istio generally, is not doing what they think it is,
A
they'll often go hit that admin panel and look at the config, and it's just like, oh my gosh: with service discovery and hundreds or thousands of pods in a particular cluster, you can end up with ten thousand lines of config in a single pod or sidecar, and it can be really difficult to understand.
A
Do you have any other tips for developers or operators who are getting interested in this space, or interested in deploying these, to get to a higher level of confidence about really knowing what's going on under the covers and being able to troubleshoot below the control plane?
B
Yeah, I think you really hit on it. With Istio, or any kind of Envoy-based tool, if you want to be a power user or operator, really understanding what Envoy is actually doing is pretty important. These tools are meant to abstract Envoy for you, but if you're on the hook for supporting something like this,
B
there's no reason not to go a little bit deeper and understand what's actually happening under the covers. As far as recommendations: understanding how to configure Envoy with the raw config, and you don't have to go super in-depth, is really one of the most useful things that I've done. And then you can use a tool as a jumping-off point:
B
Istio maybe has a lot there, and Gloo is a little less complicated, but whatever you end up using, those tools ultimately generate Envoy config. Kind of reverse-engineering from that Envoy config to understand what's actually happening inside of Envoy really helps; that's one of the things I did a lot. And then there's really getting familiar with the Envoy documentation.
B
At least at first I was not too impressed, but once I actually started understanding how Envoy works, the documentation helps a lot, because ultimately the whole API for Envoy is based on protobufs. What you're doing in this config is setting a bunch of the fields in the messages for the various API types. Let me see, I actually think I still have it up.
B
So, for example, we have static resources, and then in the listener we have a filter chain. Knowing how to tie that back to the Envoy API docs over here, where here's the listener's filter chain, and knowing how to jump back and forth between the config and the API,
B
the protobufs, really helps a lot, because then you can see how they map together and what they actually mean. You can start deconstructing: okay, given this Envoy config, let me go see what all of these fields really do.
B
And then, honestly, I started to go through the Envoy code. Given that it's C++ and a pretty beast of a project, that's not necessarily the easiest way to do it, but I think even just getting that exposure will help, because once you start understanding what, for example, the HTTP connection manager is doing, it makes your job as an end user much easier. Even a slight idea of what's happening under the covers really helps you think things through.
B
Okay, if I need to troubleshoot something: this is what the connection manager is doing, let me go through the logs, and so on. And then one last thing: the debug logs in Envoy are actually really useful. Having your logs set to debug and watching a request as it flows through all of your filters also gives you pointers, like: okay, at this point in the flow I'm in the connection manager, let me go see what's happening there. That's pretty much it.
B
Yeah, Brandon, I think you're on mute.
A
Sorry, very cool, yeah. I was just asking for other questions. I've definitely got other questions about this space, but I want to open it up to the audience as well.
A
Okay, I guess another question, from Solo's perspective: there are a lot of different service meshes out there. Some of them are purely open source; some are open but you can get commercial support; and it seems like some people are also offering an open-source core with premium features that you can license on top of that. Can you talk a little bit about how Solo sees the space and how you're trying to position?
A
Are you bringing extra features into an enterprise product, or is it just a support-based revenue model? What does that look like for Solo?
B
So, to the first part: there are definitely a lot of service meshes, and choosing the right one is going to be difficult. What I think we're seeing is that Istio is definitely the most popular, but the thing about Istio is that if you want to use community Istio, you more than likely are going to want support of some form. Some folks are more willing to spend engineering time and effort running their own Kubernetes, and those are the folks we also see willing to run their own Istio.
B
On the other hand, if you're running everything in AWS, it would make complete sense to just use AWS App Mesh, because you get the benefits of a service mesh, and an Envoy-based one at that, but it's something that's in line with the rest of your environment. And I'm sure Azure and Open Service Mesh will be similar.
B
So I think what we see is kind of that. And then, if you have folks saying, okay, let's do on-prem and AWS, it would make sense, and we're having discussions with folks about whether they want to run Istio on-prem and then run App Mesh in their AWS environment. That's where Service Mesh Hub, which we didn't spend a lot of time talking about, is what we're positioning as the next step: for folks in that kind of hybrid world, having a tool that makes it easier to use both of them at the same time.
B
That's where something like Service Mesh Hub comes into play. Right now Service Mesh Hub is open source; there will be an enterprise offering built on top of it that we'll be releasing sometime soon. Ultimately, that's where we see Service Mesh Hub coming in: we think there will be people, probably a lot of people, that have more than one service mesh in play.
B
I think that's starting to grow, and the main use case we see driving it is the HA reason. If you're going to be running multiple clusters for redundancy, and you're using a service mesh, you're going to need multiple service meshes, because if you're doing a proper replica you want them to be the same.
B
You want cluster A to have the same service mesh setup as cluster B. So when you're talking about failover, and I think I closed the slides, one of the interesting questions there is: how do I perform failover? Because if you have a cluster A and a cluster A-prime, say in east and west,
B
and some service goes down in that mesh, do you really want to fail over all of your traffic to the other cluster just for a single service? That's what we see really driving the use case for multi-cluster. It's not necessarily that you want to combine multiple different meshes, although that is a use case; I think the most pressing real-world problem is: how do I handle failover between different services across clusters?
A
Right. Any other questions for Lawrence? We're a little bit over, but we'll take another question if anybody has one.
A
All right, well, like I showed in the beginning, if you want to learn more about Solo, you can go to solo.io. Otherwise, if you have more questions about what Lawrence talked about here today, they also have a Slack channel, so go ahead and seek that out if you're looking for more tips, tricks, or advice on service mesh. But again, I want to thank Solo, and thank Lawrence for coming in and talking with us today. This was really interesting.