Description
In this webinar, we will explore deployment patterns, configurations, global traffic routing policies, and more for scalable, highly available, and resilient application environments using Envoy Proxy as the edge / API gateway across multiple clusters. We will compare the tradeoffs between the different patterns and offer guidance for implementation and operations.
Download the presentation: https://speakerdeck.com/soloio/multi-cluster-deployment-patterns-for-ingress-and-api-gateways
Learn more: https://www.solo.io/products/gloo
Request a trial: https://lp.solo.io/lp-request-a-trial-general
Questions? https://slack.solo.io
A: So again, my name is Christian, field CTO here at Solo. I've been involved in the Envoy community and the service mesh communities for the last three years, or actually a little bit longer, and I came to Solo because Solo is focused on helping people adopt and be successful with this application networking technology: that includes service mesh, gateways, management tools, and some of the things that we're going to talk about today. And I enjoy blogging and writing about this stuff.
A: How is the organization structured to enable that, and what are the complementary technologies that can help with that? The way we're looking at it is: there's lots of open source, innovative technology (containers and Kubernetes and so forth) that lets you deploy your applications just fine. But once they've been deployed, there are the issues around those services communicating with each other, around discovering and finding each other, and the security requirements that crop up, especially when you go to cloud infrastructure, which is highly dynamic and ephemeral. With smaller, independent services that are changing rapidly, you need to understand what is happening in the system in real time, so that you can reduce your mean time to recovery and the number of failure incidents you have in your system.
A: So those are the challenges that our customers are facing and that we are uniquely positioned to help them with. What does that look like from the Solo perspective? As you start to modernize your application architecture and move to, let's say, containers or Kubernetes or cloud-based infrastructure,
A: you need a way to tie together the target architecture you're working toward with what you have existing, and you need something that is well suited to live in that new cloud environment. You're investing in technology today for the next two, three, five years, so doing that on modern technology is really important. So Gloo we built as an API gateway to be that stepping stone in that direction and to solve some of the initial challenges around moving to microservices, and we'll talk a little bit about that next.
A: So we're building management tools and capabilities to layer on top of a service mesh: to do things like upgrades, like automatic and transparent failover and routing, like service discovery and identity federation, all the things that you need when you run a service mesh in a non-trivial way. But, like I said, our core competency is in Envoy, and that translates to the other service meshes, so we're able to advise on them, and for example with Istio, we're able to support it in production and so forth.
A: So we see service mesh as a big part of this story. And lastly (I just noticed the slide transitioned; the WebAssembly Hub slide was already showing there): the need to customize and do last-mile integrations with your existing organization and existing investments, and to extend the capabilities of the data plane, and maybe even the control plane.
A: So we built WebAssembly Hub to help automate away a lot of the boilerplate, the things that are not nice about building, deploying, and managing WebAssembly modules, so that you can extend things like Gloo or Istio or any Envoy-based technology. So that's the way we're looking at it.
A: Helping folks adopt the next-gen technology to support their API infrastructure. We work with some very large household-name companies, some of which are publicly shared (you can go to solo.io's customers page and see the rest of them), and a lot more that are not publicly shared, and we've caught the attention of analysts and investors and reporters and so forth, and won awards and so on.
A: But I guess the point is: we are focused here at Solo on helping people go down the realistic path of adopting Envoy and adopting service mesh. In those cases there are multiple different clusters, we're deploying on different types of infrastructure, with different types of policies and compliance and regulations that we have to deal with, and getting all of that working nicely, so that it ends up achieving the customer's goals, is not all that straightforward. It's something that we specialize in. So, coming back to the technology:
A: Otherwise things can go crazy and work against you. Some of those challenges you can see here on the right: a single point of entry, establishing a boundary. The things that you would like to do at a boundary include authentication, authorization, and traffic routing. Part of the reason for the boundary is to decouple what's running inside the boundary from what's running outside of it, and this boundary could be a Kubernetes cluster.
A: It could be multiple clusters, it could be a data center, whatever the boundary is. So you might need things like transformations and extensions, custom auth, authentication, rate limiting, these types of things, and you might want to treat some of these services, and the APIs they expose, as first-class reusable company assets.
A: So you need a way of cataloging these, documenting these, and providing self-service capabilities, sort of down the path of a developer portal. That's what we do with Gloo, and Gloo is an API gateway that is built on top of Envoy, as I mentioned. We leverage some of the core technology that is both foundational and proving to become the leader in this space, and build these strong API gateway and edge capabilities on top of it, including support for WebAssembly.
A: Now, WebAssembly is still evolving in the community upstream, but we've started to build the tooling and infrastructure around it to support it, so that when it's ready you can hit the ground running and use WebAssembly to extend and customize the behavior of the proxy. To facilitate that with our commercial customers, we sell Gloo Enterprise, which basically supports Envoy in production, which is no easy feat. I do want to say that Envoy is a building block and is quite complicated and complex.
A: What we've done with Gloo is simplify the API and give you a partner in Solo: to be able to run it in production, to support it if things break, to patch things upstream when necessary. And then for the enterprise customers, from a feature perspective, we add things like the developer portal, and we invest a lot in the security capabilities.
A: That's very important to enterprises, and we'll talk about some of those challenges: the federation capabilities, managing across multiple clusters, and so forth. So Gloo itself, again, is an API gateway built on Envoy. We've extended Envoy, basically layered filters on top of it, and also supported it with a control plane to be able to drive the configuration in highly dynamic environments, including things like an external auth service and a rate-limiting service, where, for example, the external auth service can support functionality
A: like API keys, Open Policy Agent, OAuth and OIDC integrations, and so on, and you can even extend it with your own custom code. So the value in that external auth server is not just that it comes with a lot of out-of-the-box capabilities: you can write your own Go plug-ins and load them dynamically into the external auth service, so you don't have to go write and maintain your own external auth service.
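For a sense of what the out-of-the-box side looks like, here is a minimal sketch of a Gloo Enterprise AuthConfig resource combining API-key auth with an OPA check; the names and the label selector are hypothetical, and field names may vary by version:

    apiVersion: enterprise.gloo.solo.io/v1
    kind: AuthConfig
    metadata:
      name: example-auth          # hypothetical name
      namespace: gloo-system
    spec:
      configs:
      - apiKeyAuth:
          labelSelector:          # match Secrets holding the allowed API keys
            team: example         # hypothetical label
      - opaAuth:
          modules:                # hypothetical ConfigMap containing a Rego policy
          - name: example-policy
            namespace: gloo-system
          query: "data.example.allow == true"

A VirtualService route can then reference this AuthConfig, so the checks run at the edge before traffic ever reaches the service.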
A: You can use what's out of the box and customize it with the capabilities that you need. I won't go too much deeper into the capabilities of Gloo, but I do want to point out that it is intended to be highly extensible, as it is today, so there are a few options there, and going into the WebAssembly world will make it a lot easier to extend. It is multi-platform, so it can run on Kubernetes natively with CRDs, but it's not tied to Kubernetes.
A: It can be used as a replacement for simple ingress controllers, like the NGINX ingress controller, and it can even tie into service mesh technologies like Istio, like Linkerd, like App Mesh and so forth, and provide that integration. Lastly, as I mentioned earlier,
A: the more central platform team can own things like the security and maybe the rate-limiting configs and so forth, so the API is very decentralized for Gloo. Now, that becomes even more important as you start to run your application and microservices across different deployments, across different clusters, across different zones, different regions, maybe different clouds, with some on premises and some in a public cloud. And running
A: a large deployment like that (or even if you're not there yet, but you're aiming for it and it's on your trajectory), to be able to support multiple geographic regions, support stringent compliance requirements, and so forth: managing multiple clusters and getting any kind of consistent view over them is very difficult. So you run into things like: well, now you have security for the boundary, but now you have, let's say, four boundaries. What about the security between the boundaries?
A: You have this question of: if one of those clusters goes down, is the rest of the topology smart enough to react to that and change its routing behavior, change its failover, and so forth?
A: These types of challenges come up when deploying across large, heterogeneous footprints.
A: And, lastly, the things that we're deploying and trying to manage in different autonomous clusters: we're trying to share pieces of configuration, we're trying to build automation on top of it. Those services that are running in there, again, some of them are first-class business assets, APIs, and so what you need is a way to treat all of the APIs in a unified way, maybe through a developer portal or something, regardless of where they're deployed.
A: So you start to go down the path of: well, I'm going to deploy Kubernetes, maybe some on VMs, maybe some in AWS, and now I have these problems; I need something to help manage that, and that's where the federation comes into play. Specifically, some of the problems that we're addressing with this federation layer are around security.
A: They are around traffic going from one zone to the other: is it trusted? Can we provide access control over that traffic? Do the services in one boundary know that, if some of their services start to fail, they can automatically fail over to a different boundary, a different cluster?
A: Can we get the best of this decentralization, which allows autonomy and independence and moving fast and making changes, and balance that out with the operational overhead and burden that's placed on trying to manage this type of deployment? At Solo this is the type of stuff that we're working on, and I'll show you exactly what's going on here, but it all comes back to: what are the capabilities that we're working with?
A: What are the pieces that, if we put them in place, can enable solving these types of problems? Especially for that last bullet, because if we just have one centralized gateway and we force everything through it, then we're going to end up with the same challenges (organizationally, and maybe technologically, but definitely organizationally) that we saw with the previous class of integration technology around enterprise service buses, or even with the way API management tooling had been adopted and implemented in your existing organization, where it's highly centralized and owned by a single team.
A: All the processes run through that team, and so that last bullet is very important: we need to balance out the decentralization of ownership and enforcement with the fact that, to operate it, you need to have some level of centralization. It turns out Envoy plays a very important role in being able to build the infrastructure and automation to solve this type of problem.
A: It's not driven by flat files; you don't have to munge up a bunch of flat files, shuffle them around, bounce the server, and hope that it takes and doesn't drop anything. It's driven by a dynamic API, and that API in fact has taken hold as more of a de facto API across these types of application infrastructure components, to the extent that that API can be used in application libraries and in other technology. So that API is hugely important for Envoy. Envoy is also very extensible.
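That dynamic API is Envoy's xDS. As a rough illustration, a minimal Envoy bootstrap that pulls everything from a control plane looks something like this; a sketch, where the control-plane cluster name and address are hypothetical:

    node:
      id: gateway-proxy               # this proxy's identity with the control plane
      cluster: edge
    dynamic_resources:
      ads_config:                     # one gRPC stream for all discovery services
        api_type: GRPC
        transport_api_version: V3
        grpc_services:
        - envoy_grpc: {cluster_name: xds_cluster}
      lds_config: {ads: {}}           # listeners come from the control plane
      cds_config: {ads: {}}           # clusters come from the control plane
    static_resources:
      clusters:
      - name: xds_cluster             # hypothetical control-plane endpoint
        type: STRICT_DNS
        connect_timeout: 5s
        http2_protocol_options: {}    # xDS is served over gRPC
        load_assignment:
          cluster_name: xds_cluster
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address: {address: gloo, port_value: 9977}

Everything else (routes, clusters, endpoints) can then be updated over that stream without restarting the proxy.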
A: So you can write your own filters. The architecture of the code base itself was built to enable customization and contribution, which then fuels the innovation that happens in this space, and Envoy, as I mentioned earlier, has kind of emerged as the leader here. And the last part: it's in a neutral foundation, it's not owned by any single vendor, and the maintainers have been very, very good about cultivating a good community around Envoy.
A: So this is a foundational piece. I think I tweeted this; you can see there that, just like Kubernetes coalesced into the platform, the container orchestrator that people are going to rally around, the same thing is happening with this networking technology, with Envoy and the Envoy community. So it's very, very important. And the features, the capabilities that Envoy supports specifically for playing the backbone of a multi-site, multi-cluster, multi-deployment application architecture, are a handful of things.
A: So it's reactive in that sense: it can make decisions, ideally informed by the context that it has, about how to route around failures, how to mitigate latencies, and so forth. So let's take a look at some of those. The first capability is request racing, what Envoy calls request hedging: when services and their upstream calls start to slow down or time out (most importantly, when they slow down), Envoy can be triggered to retry at a different host.
A: Maybe that upstream is momentarily slow or overloaded or whatever, but let's try different hosts while that request is in flight, see if we can get a response faster, maybe start to fail over to other backup services, and just try to get a response and return it within a reasonable amount of time, instead of hanging or timing out. Envoy can also dynamically figure out which upstream endpoints to route to.
A: If things start to become unhealthy, then Envoy can decide on its own, based on the heuristics and the observations it has (request and response times, error rates, and so forth), whether or not to start to spill traffic over into different hosts and different services, which might be located in potentially different regions or zones.
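In Envoy's route configuration, the retry and hedging behavior described above is expressed with retry and hedge policies; here is a minimal sketch, where the route and cluster names are hypothetical:

    virtual_hosts:
    - name: echo
      domains: ["*"]
      routes:
      - match: {prefix: "/"}
        route:
          cluster: echo-service             # hypothetical upstream cluster
          timeout: 3s
          retry_policy:
            retry_on: "5xx,reset,connect-failure"
            num_retries: 2
            per_try_timeout: 0.5s           # a slow attempt triggers the next one
          hedge_policy:
            hedge_on_per_try_timeout: true  # race a new request instead of waiting

With hedging enabled, the first response to come back wins and the other in-flight attempts are discarded.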
A: So Envoy can do zone-aware routing, where, like I said, Envoy itself decides. There's a variant of that called locality-aware routing, and this is important: it's not just about the locality we're routing to. What's happening in this model is that the control plane is actually responsible for understanding the rest of the topology: it's watching service discovery registries, and it can leverage health checking provided by some of those registries.
A: It can take configuration from the operators and so forth, and then synthesize all of this down into a set of rules that make the proxies smarter. What I mean by that is: the control plane drives the configuration for the proxy, but the proxies don't communicate with each other; proxies just communicate with the control plane.
A: The control plane has a lot of information about what's happening across the entire fleet, and so with locality-aware routing the control plane can give Envoy information about where each one of the services it is trying to communicate with is located, and when things start to go unhealthy or start to fail, and then let Envoy decide, based on locality, where to route the traffic.
A: But it's the control plane giving that information, and prioritizing that information, for Envoy, which means the control plane is actually very important here in many respects.
A: Specifically because of this, the last thing is: we can do failover and priority-based routing regardless of where those workloads run, maybe in completely different clusters, completely different regions. For those we might have to change the way we do endpoint discovery or endpoint health checking and so forth, so that's another area where the control plane comes into play, to help Envoy become smarter about that.
A: So now we understand a little bit about Envoy being sort of the backbone, providing some fundamental capabilities that we can now layer on top of and start to adopt in terms of multi-cluster, multi-region, and multi-site.
A: We'll start off with the simplest way of deploying Envoy as a gateway at the edge, specifically with Gloo, and that is in a single cluster. We deploy Gloo as the ingress, and it can provide things like rate limiting, authentication, and authorization,
A: OPA (Open Policy Agent) integrations, web application firewalling, and so forth. Traffic comes in, we can route it, and that's all good. Now, when you deploy to multiple clusters, you have a choice: do you run a single cluster for your API gateway and centralize it, or do you run the gateway decentralized, in its own clusters?
A: So that's one route. Some of the benefits: obviously it's highly decentralized, there's no central point of failure, no contention or process forming around that centralization, and so forth. But some organizations that we work with find this to be too decentralized, and they still want a single point. And as you can see in this diagram, on the left-hand side we sort of left out: what is the thing that knows how to route the traffic to those different backend clusters anyway?
A: So you do need something there. That can be simple (it could be a simple L4 router, an F5, or something like that), but again, some folks and some organizations decide that what they want is more like this: two tiers of gateways, enforcing as much of the API management stuff out of band as they can, and then forwarding the last mile to the gateways running in those clusters.
A: Now, in this model, we've seen organizations actually able to get rid of their F5s and some other complicated networking technology, and build a lot of configuration as code for the gateway clusters that we see on the left and on the right-hand side.
A: So now, once we start to go down this path, and you get a lot of the traffic going in through that edge gateway tier, you might get to the point where you realize:
A: well, actually, some of these clusters do have a service mesh in them (they may or may not), and you might even want to replace the ingress at the leaf-node clusters with simpler ingress technology as needed. So in this model you have the edge layer that's still handling API gateway capabilities and forwarding traffic to the leaf clusters, and those may have API gateway installations as well, or they may have a simpler ingress
A: solution; in this case we're showing the Istio ingress gateway as one of those options. Now, operating something like this becomes difficult.
A: Managing the configs, knowing that, for example, if something changes on the leaf (that second tier, the leaf nodes, the leaf clusters), we need to write automation and figure out how to update the edge tier, and vice versa.
A: Because there are things like: you could add new services to a particular cluster, you could add new clusters in the leaf tier, and the routing and the behavior and all of that will need to adjust, and it needs to be dynamic now. So, in general: operating this across multiple clusters, securing these connections, and so forth.
A: That's where we built something called Gloo Fed. Gloo Fed is a federation management plane that basically knows about each one of these clusters and understands what proxy is running in there, what services are running in there, and what the configuration for that proxy is, and it can do things like configuration management. So if we want to treat, let's say, two of these clusters as identical from the federation plane (the Gloo Fed federation plane here),
A: what we can do is specify that config needs to be pushed and treated identically across those clusters. We can do things like say, for cluster one, which has service A, and cluster two, which also has service A: if traffic is destined to either one of those clusters and the service is not available there, then fail over automatically to the location where service A
A: is running. So we can do a lot of things around automatic failover, and we can do things around locality-based routing.
A: We can even do things around traffic splitting and traffic routing in a more controlled way, through having this understanding of the environment, what's running in those environments, and how to configure them. So even in these models, we have multi-tenancy in the edge layer, multiple clusters in that edge layer, and so forth; you end up running multiple gateways.
A: You don't want single points of failure, and so in this model we're heavily leveraging the autonomy of each individual deployment, or each individual cluster, for change management purposes, for compliance purposes, for availability purposes, and so forth, but we're using the management plane to talk directly with those different control planes. If the management plane goes down, those are still individually operated, autonomous deployments, so everything continues to operate; there's no downtime in any one of those clusters.
A: We're going to do this across two different clusters, and we have Gloo installed, let's see, in cluster one. I was running this on kind on my local machine here. So we have Gloo installed on cluster one and on cluster two. If you're not familiar with Gloo, let's just take a quick second: we go into the gloo namespace (this is k9s, sort of a UI for Kubernetes), and in here we see we have the control plane components, and we have the gateway-proxy; the gateway-proxy is the Envoy component.
A: So if we go into this pod here, we can see Envoy is actually running in the gateway-proxy, and the gateway-proxy is talking to the rest of the control plane; the control plane here is basically some other supporting components for automatic service discovery, validation, and so forth.
A: So to do that, we're going to install the federation server, and we'll take a look at that. Let's actually check to make sure that everything got deployed; looks like it did. If we come back here to the namespaces, we see the gloo-fed namespace now, and we have the gloo-fed controller and the gloo-fed console. Let's go ahead and use this.
A: We'll take a look at that; if we come over here, on port 8090:
A: Now, if we come up here, we should see that our different clusters have been discovered, and we see that the Gloo API gateway has been discovered on those clusters as well. We can see the different health states for those different clusters.
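(For reference, clusters get registered with the management plane roughly like this; a sketch, with hypothetical cluster and kubeconfig context names:)

    # register each cluster with Gloo Fed (names and contexts are hypothetical)
    glooctl cluster register --cluster-name cluster-one --remote-context kind-cluster-one
    glooctl cluster register --cluster-name cluster-two --remote-context kind-cluster-two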
A: We use a resource called a VirtualService to define our routing rules, to define our routing table, our security, all kinds of stuff about how traffic should be managed when it comes into Gloo. In Gloo Fed we're going to use a FederatedVirtualService, and in this FederatedVirtualService we're basically taking that config and asking Gloo Fed to place it on the different clusters that require this configuration. Gloo Fed, and all of Gloo itself, everything that we do here at Solo, follows a very declarative configuration model.
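To make that concrete, a minimal Gloo VirtualService looks something like this; a sketch, where the upstream name is hypothetical:

    apiVersion: gateway.solo.io/v1
    kind: VirtualService
    metadata:
      name: echo
      namespace: gloo-system
    spec:
      virtualHost:
        domains: ["*"]                    # which hosts this routing table serves
        routes:
        - matchers:
          - prefix: /
          routeAction:
            single:
              upstream:
                name: default-echo-8080   # hypothetical discovered upstream
                namespace: gloo-system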
A: So what we're going to do is apply this resource to Gloo Fed, and then Gloo Fed will go make that happen. The declarative config says "I want things in this state", and Gloo Fed is going to go do that. What it's actually going to do is push these configurations to the individual API gateway installations.
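The federated wrapper looks roughly like this; a sketch, with hypothetical placement (the cluster names registered above) and the same hypothetical upstream:

    apiVersion: fed.gateway.solo.io/v1
    kind: FederatedVirtualService
    metadata:
      name: echo
      namespace: gloo-fed
    spec:
      placement:                          # where Gloo Fed should create the resource
        clusters: [cluster-one, cluster-two]
        namespaces: [gloo-system]
      template:                           # the VirtualService stamped out per cluster
        metadata:
          name: echo
        spec:
          virtualHost:
            domains: ["*"]
            routes:
            - matchers:
              - prefix: /
              routeAction:
                single:
                  upstream:
                    name: default-echo-8080
                    namespace: gloo-system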
A: Again, it's pushing the config to those installations. They have their own control planes; those control planes get the configuration the same way they would get it if an individual were operating them, and then they apply those configurations to their installations. So now, if we get the federated virtual service, we can see in our status field that it was correctly placed.
A: We can also get our virtual services from the Gloo running in cluster one, and we see that it is in a good state. If we come back here and click on virtual services, we can see that this virtual service has been federated to both the remote and the local Kubernetes clusters. Now, if we call the gateway on cluster one, we see a hello from cluster one, and if we call it on cluster two,
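(That check looks roughly like this from a terminal; the kubeconfig context names are hypothetical:)

    # point at a cluster, then curl its Envoy gateway (contexts are hypothetical)
    kubectl config use-context kind-cluster-one
    curl $(glooctl proxy url)/        # -> hello from cluster one
    kubectl config use-context kind-cluster-two
    curl $(glooctl proxy url)/        # -> hello from cluster two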
A: we see that it is from cluster two, because of that routing config that we set up. Now, if we take down the echo service in cluster one and we try to call it, we're going to get an error: traffic goes to cluster one, gets to the gateway, and the gateway says "this traffic needs to go to the echo service... oh, the echo service doesn't exist, return an error". So that's what happens in a normal situation.
A: Why don't I give some hints, some config, to both of the gateways, so that they know that if the echo service in their cluster is down they should just automatically fail over to the gateway in the other cluster? To do that, we're going to specify a failover scheme: from cluster one, fail over to cluster two if the echo service is not available there.
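A rough sketch of such a FailoverScheme resource follows; the names are illustrative and the exact schema may differ by Gloo Fed version:

    apiVersion: fed.solo.io/v1
    kind: FailoverScheme
    metadata:
      name: echo-failover
      namespace: gloo-fed
    spec:
      primary:                        # the upstream traffic normally goes to
        clusterName: cluster-one
        name: default-echo-8080       # hypothetical upstream name
        namespace: gloo-system
      failoverGroups:
      - priorityGroup:
        - cluster: cluster-two        # used when the primary has no healthy endpoints
          upstreams:
          - name: default-echo-8080
            namespace: gloo-system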
A: So we apply that, and now we'll actually poke around at some of the underlying config, so you can see exactly what happened. Again, Gloo Fed is building the configs and pushing them out to the individual Gloo installations and their own control planes. The Gloo management plane, Gloo Fed, is not directly configuring
A: the Envoy proxies; it's creating the config based on the context and the awareness that it has of the rest of the environment, and giving that to the different control planes for each of the proxies.
A: We see that this has been accepted, so that's good. If we look under the covers, what it actually configured was this: the virtual service that we saw earlier routes to the echo service, but it also configured the way Gloo talks to the echo service, and we can see specifically here that there's a part of the config that specifies failovers. And what is it doing? It's saying: if you can't talk to the echo service running in cluster one, fail over to the echo service in cluster
A: two. We can't talk directly to the echo service in cluster two, so route it to the Gloo gateway that ends up fronting, and providing edge access to, the echo service running in cluster two. And how does it know that IP and that port and everything? Because the management plane already knows all that, so it automatically configured it for this purpose. So let's put the echo service back on cluster one, and we'll do a sanity check here.
A: If we take that service down and we still call the gateway running on cluster one, that gateway now is smart enough to know that in failure scenarios it should fail over to the gateway in cluster two. So now, if we call it, we're calling the gateway on cluster one and we're getting the response from the service running in cluster two. What's happening here is: the request goes to the gateway,
A: the gateway knows "hey, I don't have any healthy endpoints for this in the local cluster, but I do have a failover config, so create a mutual TLS connection to the gateway in cluster two and forward the request", and now the gateway on cluster two gets it and sends it to the echo service running in cluster two. All right, so we have automatic and transparent failover,
A: based on the capabilities provided by the management plane here. So, coming back to our slides: Gloo, the powerful API gateway. We know how to operate and run Envoy at scale, whether that's in a service mesh or at the edge; that is our core competency at Solo, and we've built tooling to allow enterprises to become successful
A: doing this, solving the real problems that enterprises have. So I highly recommend that you come check out Gloo. Gloo is open source; the Gloo Fed stuff that I showed is part of the enterprise product. And for the rest of the journey, we're here to support you, with support for Istio, with multi-cluster management tools for service mesh similar to what I showed here for the edge gateway, and of course with the next innovative stuff that's happening in this ecosystem around WebAssembly, with WebAssembly Hub.
A: So with that, I want to thank you for joining, and invite Betty back in to close us out and see whether there were any questions. I'm happy to take questions about the topic that I presented here, or anything that we're doing here at Solo.
B: Mike was asking if there is a ballpark estimate on when, or if, WebAssembly will be production ready. I think we've talked a lot about it and there's a lot of excitement around the potential for WebAssembly, but could you give a little background on where it is today, its status within the Envoy project, and what that means for downstream solutions?
A: Yeah, definitely, let's talk about that: WebAssembly, and the stuff that we've been doing here at Solo around WebAssembly. We announced WebAssembly Hub back in December last year, and then we announced a major revision with the Istio 1.5 release in, I believe that was, March. Basically, what's happening is that the APIs, and really the ABIs,
A: the binary interfaces between Wasm and Envoy, are still being finalized, and they're evolving, which is good, because it means people are actually trying to use it and giving feedback and so forth. So those are still being finalized upstream.
A: There's basically a fork of Envoy upstream, called, I think, just envoy-wasm, in the Envoy repository, where that work is happening.
A: Now, the question about whether it's production ready and so forth is a good one, because it is actually getting bake time, believe it or not, in Istio: Istio actually builds its proxy off of that envoy-wasm fork, so the Envoy in the Istio project actually has Wasm enabled in it and is using Wasm for some of the telemetry extensions that Istio has built.
A: So the stuff is moving upstream, and if you're running Istio, and a lot of people are running it in production, those APIs and that engine are being exercised, and it's going in the right direction.
A: So I would say, my guess is within the next few months, hopefully by the end of the year; we don't have a definitive time for that yet. But in the meantime it is still worth checking out, and contributing, and giving feedback to us about the experience of using things like the WebAssembly Hub tooling, which allows you to bootstrap WebAssembly projects and push them to the repo. So let's actually go look at WebAssembly Hub.
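(That bootstrap-build-push workflow with the wasme CLI looks roughly like this; a sketch, with hypothetical filter and image names:)

    # scaffold, build, and share a WebAssembly filter (names are hypothetical)
    wasme init ./add-header-filter
    wasme build assemblyscript ./add-header-filter --tag webassemblyhub.io/example/add-header:v0.1
    wasme push webassemblyhub.io/example/add-header:v0.1
    wasme deploy gloo webassemblyhub.io/example/add-header:v0.1 --id add-header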
B: And actually the Hub, and especially wasme, which is the CLI experience for this, is up; it's current up to Istio 1.6.
A: Yeah, there's definitely a lot of good material on doing that, and on the developer experience around it. If you click on Explore, you can see that people are kicking the tires on it: the add-header one is probably just the simple demo one, people are working on extracting JWTs from requests, and there's a whole bunch of others, adding metrics and trying different scenarios.
A: So I would say: go check it out. It is moving along a little slower than we expected, but it is moving along, and hopefully we'll get that in soon. We see it as a very important part of adopting this type of technology.
B: And you make a good point that, by the time it is merged upstream, it will already have had time in the Istio project.
B: The next question here is from Alan: when you showed the failover for failed services, can federation also be used for a failover scenario due to a non-deployment? So, only fail over to cluster two if the service is not deployed to cluster one, regardless of the health of that service.
A: Yeah, so right now, as you saw in the demo, we specified exactly what the primary and secondary services would be. In the next iteration of Gloo Fed we're going to leverage the locality information a little bit more, and that would allow us to route to endpoints that may not exist in that cluster but, locality-wise, are in the same zone, or maybe the same region, and route that way. So that is definitely coming.
B: Alan added a clarifying point: "To clarify, I don't want it to go to cluster two if it is deployed in cluster one but unhealthy."
A: Right, yeah, that's right. So basically we want to leverage locality a little bit more: if a request goes to cluster one and the service lives in cluster one, then route it there. If the request goes to cluster one and that service doesn't live there, but happens to live in the same availability zone on cluster two, then route the request there, and so forth, if it's in a different zone, a different region, and so on. So that is exactly, I think, what he's asking.
B: And what we'll do, after the next version of federation is out, is host another online event like this, so that we can actually demo those scenarios.
B: Yeah, and the last question that I see right now is: is Gloo Fed part of Gloo Enterprise? Yes. And where do you recommend running Gloo Federation? Should it be in a standalone VM, another cluster? Can you deploy it onto one of the existing Gloo clusters? What's the recommendation?
A: The recommendation is to run it separately. In this diagram I don't have it depicted in its own cluster, but it should be in its own Kubernetes cluster. Now, in this demo, the way I ran it was co-located with one of the other clusters, and that is an option, but it's not the most desirable deployment, mostly for failure-toleration reasons.
B: Yes, just in case one of the other clusters goes down, you don't want it to take down the federation control plane.
A: So, typically, in the sizing discussions we're more interested in sizing around the capabilities of the proxy, because that ends up taking a lot of cycles, CPU cycles for example. When it comes to the management plane, the variable that we're most interested in is the number of clusters and the amount of configuration that you have. So the management plane,
A: although all of the config and all that stuff is stored in CRDs on Kubernetes, is doing a lot of calculations about the config: what it should be, what should go where. So the management plane will likely be a little more memory-intensive than, let's say, the data planes that you would see in the actual clusters.
A: So the variables there are going to be, like I said, the number of clusters and how many configurations are deployed in each one of the leaf nodes; those will be the big things.
B: Great, and then the last question here is: does Gloo Federation assist with Istio-service-to-Istio-service federation, specifically for cases where communication between services is not explicitly configured to egress from the cluster?
A: The identity model between clusters is not the same, so between service mesh clusters, and for the traversal of the traffic between them... let me save that as a teaser for one of the next webinars that we'll do.
B: Great, and with that we are at time. Thank you, everyone, for joining and for all your questions. We will get the recording up, and the slides posted to Speaker Deck and sent out to everyone via email, by tomorrow, and we will see you at the next one. We've got some live streams next week, but the next webinar is September 10th, so see you online.