From YouTube: 015 Gloo Edge Futures & Directions
Description
Envoy at the edge continues to evolve as cloud-native architectures include service mesh, multi-cluster, and multi-platform. In this talk, we'll explore the edge and how it converges with east-west topologies.
Scott: …Solo. So in this talk today, we'll be discussing service mesh and API gateways. Specifically, we're going to start by looking at the current landscape: how does it look, and how do these two play together? Then we're going to talk a little bit about the future of this partnership and how they can play nicely together. Finally, we'll talk about how this can be expanded to multi-cluster use cases, advanced security features, and more. So now I'm going to hand it over to Shane, who's going to start with Istio.
Shane: Thanks, Scott. So yeah, let's start talking about Istio. For those of you who aren't familiar, Istio is an incredibly powerful open source service mesh. It has a ton of useful features around traffic control, exposing metrics and logs, and it can even secure communications between your microservices using mutual TLS.
This is phenomenal and really powerful if you're using a single cluster in a single mesh environment. But how do we go bigger? That's where Gloo Mesh comes along. Gloo Mesh is a unified service mesh management plane that we've introduced to help scale beyond one cluster. This lets you go to multi-cluster, multi-mesh environments and get really advanced configuration.
We've seen customers starting to use this technology at the scale of 50 to 100 clusters, and in some cases even more than that. But no matter how big you go, you can always configure your enormous environment from this one centralized management place, which is great. Once you've got multiple clusters, you can do some really cool things and add features like global failover routing, as well as inject things like custom WebAssembly filters across all the proxies in your environment.
This is great for inside your service mesh, but how do you handle the ingress and API gateway aspect of your service mesh? This is where Gloo Edge comes along. Gloo Edge is the API gateway implementation that we have here at solo.io. It's an incredibly powerful way to harness the Envoy proxy under the hood, but we've built a higher-level API on top of Envoy so that it's easier to configure and manage. We've also built some extra features on top of it, including enterprise-grade security and external authentication.
We've listened to a lot of feedback that our customers have given us, and this is just a handful of it. We'd like to talk about what their typical environment looks like today. So here you've got a single cluster, where you're probably using Gloo Edge at the edge to manage your ingress API.
This is your API gateway that handles your north-south traffic, in and out of the cluster. But then you might also be using an Istio control plane to handle your service mesh, which is your east-west traffic inside the cluster. But when we look at the feature sets of both Gloo Edge and Istio, we'll see there's a lot of overlap here. They both discover services and upstreams to send traffic to.
They both configure Envoy proxies. Where they differ is that Gloo Edge provides some edge features and Istio provides some east-west features. But is that really enough to justify having two control planes? We don't think so. So what happens when it's one control plane? Suddenly you can do all of your discovery, all of your Envoy configuration, your edge traffic, and your east-west traffic all in a single control plane. So it's much easier to manage, especially as you start scaling your environment up. Now, this gets us to a single cluster and a single control plane.
But what happens when you add another cluster? We're running into a similar problem here: even though there's a single control plane on each cluster, once you have two clusters, or you go up to 50 clusters, suddenly there are 50 things to manage. And this is where we already have the solution we mentioned earlier: Gloo Mesh. Gloo Mesh can come in and help you manage configuration across all of your clusters in a multi-mesh, multi-cluster environment.
Scott: Thanks, Shane. So let's take a look at how this is going to work under the hood. What's the user experience going to be like? What's the API like? This will give a clearer understanding of what we're actually talking about, in concrete terms. So we have an API that we've actually built off of the "v2 Ingress API", as some people call it, or, as it's now being called, the Kubernetes Gateway API.
This is an official Kubernetes API being developed by Kubernetes SIG Network, designed to be a successor to the original v1beta1 Ingress API, which is very limited in scale and really very basic. What Kubernetes has attempted to do with this newer API is to support extensibility by multiple vendors, because there are a lot of vendors in the space who are talking about ingress and gateways.
So we are now utilizing this for our new gateway API, and this will help us demonstrate what's actually possible to do, and how we've extended this API in order to support our multi-cluster and advanced security use cases.
So the first piece to know is that the Kubernetes Gateway API provides a new CRD called the GatewayClass. The GatewayClass essentially allows gateway providers to ship their own custom settings, which can be applied to a gateway. So in our case, you can see we have a GatewayClass here, and that GatewayClass is going to be responsible for managing a set of gateways. Specifically, what we're providing here is the name of our controller, which is our gateway controller.
That's why you can see you specify the group and kind of the settings object, so that the implementation can provide its own CRD, its own custom configuration, to the user as an extension point. And if we look at what our gateway settings actually look like here, we can provide a set of clusters, which means any gateways which consume this GatewayClass will be applied to this set of clusters. We use that as the first building block of our multi-control-plane, multi-mesh, multi-cluster management system.
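As a rough sketch of the shape being described (the talk doesn't show the exact manifest, so the Gloo-specific group, kind, and field names below are assumptions; the core fields follow the v1alpha1 Gateway API of the time), a GatewayClass pointing at a vendor settings CRD might look like:

```yaml
# Core Gateway API object: binds gateways to a specific controller
apiVersion: networking.x-k8s.io/v1alpha1
kind: GatewayClass
metadata:
  name: multicluster-gateway
spec:
  controller: gateway.gloo.solo.io/gateway-controller  # illustrative controller name
  parametersRef:                # extension point: a vendor-defined settings CRD
    group: gateway.gloo.solo.io
    kind: GatewaySettings
    name: my-gateway-settings
---
# Hypothetical Gloo Mesh settings CRD referenced above; the field names are illustrative
apiVersion: gateway.gloo.solo.io/v1alpha1
kind: GatewaySettings
metadata:
  name: my-gateway-settings
spec:
  clusters:                     # gateways consuming this class are applied to these clusters
    - mgmt-cluster
    - remote-cluster
```

The key design point is that the core API only carries a reference (group, kind, name); everything inside the settings object is owned by the implementation.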
So let's take a deeper dive. Now that we have our GatewayClass, what does it mean to add a gateway to this class? You can see right here we have a Gateway object. This is again part of the core Kubernetes Gateway API. What the Gateway does is define a set of listeners, and these are the listeners where our routes into our clusters will exist.
So you see here, we select the GatewayClass that this gateway will belong to. That means that this mesh gateway will be distributed to both the management and remote clusters. And now, what are the actual contents of the gateway? The gateway here defines a single listener, listening on port 8080 for HTTP requests, and the actual routing rules that are defined for this listener will be located in an HTTPRoute; specifically, in a set of HTTPRoutes that are selected according to a label.
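A minimal sketch of such a Gateway, following the v1alpha1 shape; the class name and route label are illustrative:

```yaml
apiVersion: networking.x-k8s.io/v1alpha1
kind: Gateway
metadata:
  name: mesh-ingress
spec:
  gatewayClassName: multicluster-gateway  # ties this gateway to the class (and its clusters)
  listeners:
    - port: 8080            # the single HTTP listener described in the talk
      protocol: HTTP
      routes:
        kind: HTTPRoute
        selector:
          matchLabels:      # HTTPRoutes attach themselves by carrying this label
            gateway: mesh-ingress
```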
This is one of the modifications we've made, and are currently working on upstreaming, in order to support a self-service model for these gateway resources. You'll see here an HTTPRoute, which again is another core Gateway API object. This is not ours, but it provides extension points, which we'll show in a second.
HTTPRoutes are a high-level object, and we have a delegation feature which allows multiple users to share the configuration of a single HTTPRoute, but I won't get into that here. I'm just going to show a simplified single instance of an HTTPRoute object. So here you can see we provide hostnames, and then we provide a set of routing rules, and the routing rules will match fields inside of the HTTP request
in order to determine where to forward the request. So you'll see here, we very simply match the path prefix of the request, /productpage, and we're selecting the productpage service in the bookinfo namespace to forward the request to. One thing to note here is that we also specify the cluster on the route. This is a modification that we've made to the Gateway API that allows us to define these routing rules outside of any particular cluster.
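Sketching the HTTPRoute just described: the hostnames, prefix match, and forwardTo follow the v1alpha1 API, while the cluster (and cross-namespace) targeting is the Gloo modification being discussed, so those field names are assumptions:

```yaml
apiVersion: networking.x-k8s.io/v1alpha1
kind: HTTPRoute
metadata:
  name: bookinfo-routes
  labels:
    gateway: mesh-ingress       # matched by the Gateway's route selector
spec:
  hostnames:
    - productpage.example.com
  rules:
    - matches:
        - path:
            type: Prefix
            value: /productpage
      forwardTo:
        - serviceName: productpage
          namespace: bookinfo   # extension: route defined outside the target namespace
          port: 9080
          cluster: mgmt-cluster # hypothetical extension: pin the backend to one cluster
```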
Another thing to note here is that there's a field inside of the HTTPRoute object, which again is part of the Kubernetes API, that allows you to define custom filters. Those filters are where you can specify anything from security policy to transformations to observability and monitoring policies.
These are all things that are implemented today in the Gloo 1.0 API, and what we've done is we're now using this filters field in order to support the same set of features. Here you have an example of us configuring rate limiting, but there are a number of advanced features that those who are not familiar with Gloo may not be aware of, and we'll get to those in a following slide.
One more thing to note: we've said that this route can route to both the management cluster and the remote cluster, and we've said that this gateway will live in the management cluster and the remote cluster. That means that this gateway will live in both the management and the remote clusters, and they will both provide these routes, so that they'll know how to route to something that's in a different cluster, no matter which gateway receives the request.
Let's continue here with another example of how we've extended the system. We just covered how we're able to address policy as well as multi-cluster Kubernetes services, but now there's another domain of features that Gloo supports, which is routing to things other than Kubernetes services.
These are things that a control plane like Istio, or Envoy itself, is not aware of, but we are able to orchestrate the underlying piping in order to enable routing requests to another type of destination than a Kubernetes service. So the core Kubernetes API, this new Gateway API, offers a backendRef as an alternative to the service name, which you can see here.
The backendRef, just like the parameters that we provide on the GatewayClass, is an arbitrary CRD, so we can decide what actually goes there. And this is now where the binding comes in between the core Kubernetes API and the Gloo Mesh gateway API, where you can say, for example, "I want to route to something that lives in a cloud provider."
So here we actually have a Lambda function that we're routing to, which means we can invoke this product page, and everything that it provides, via a Lambda function in this example. When requests come in and they match this product page, instead of going to the productpage service, they'll actually go and hit a Lambda. All of this configuration, and the security credentials, would be provided through Gloo Mesh-specific configuration.
So, just to understand, architecturally speaking, how this actually looks: from the user's point of view, the users are going to supply GatewayClass, Gateway, and HTTPRoute objects. Additionally, they may provide destination CRDs and a settings CRD, and there are additional config options that may be external to this routing configuration, like policy objects, which can be shared across gateways as well as within the mesh.
So the user supplies these CRDs to the Gloo Mesh control plane. The Gloo Mesh control plane then distributes the translated configuration down to each Istio control plane running in each cluster. Those Istio control planes, which are supercharged with the Gloo features that are being ported over, will then go and configure the individual Envoy proxies with all the filters and configuration necessary to achieve this multi-cluster gateway setup.
So, for an example, now that this is configured: a user comes and makes an HTTP request to the gateway in cluster one for the product page, and that gets routed to the local product page, just like in today's world. But what happens when the user sends a request to an ingress in a different cluster? What we've done here is we've created a gateway data plane that spans across clusters. It's a logical data plane, a logical ingress into our service mesh, that abstracts away the requirement to think about clusters.
This is particularly useful when we have multiple instances of an application, or a single application that's distributed across clusters. We don't want to have to worry about which ingress, which gateway, a client is talking to, because we know the appropriate destination cluster for that, and we don't want the client to have to know which cluster to talk to in order to reach that service.
Additionally, one of the things that we bring to the table is advanced security features. This is one of the things that distinguishes an API gateway from just a simple ingress: advanced features like CORS, data loss prevention, WAF, external authentication services, JWT validation, rate limiting, etc. This is one of the most heavily utilized and requested feature sets in Gloo Edge today, which production users are making use of, and we've built out the system a lot.
So we have here a set of filters that can be enabled on the ingress, and you'll see here that the external auth and rate-limiting filters talk to an externally implemented server. This is normally on the user to supply their own, but this is one of the things that Gloo provides.
The enterprise version of Gloo provides it out of the box, and the implementation of the external auth server provides basic auth support. OAuth, API keys, and Open Policy Agent can be used to authenticate requests; you can plug in an LDAP backend; and you can also provide your own custom auth plugins, or pass through auth to another server that's implemented by the user.
Shane: So here we've got our two clusters installed. On the left here you can see we've got our management cluster, and on the right we've got a remote cluster. You can see that Istio is installed on both clusters, so we've got two different meshes installed in two different clusters. We've also got Gloo Mesh installed, and the Istio Bookinfo app is split across both clusters: you can see productpage is running here on the left, and details is running here on the right.
Once we apply these CRDs: next, I just want to highlight that we're exposing port 8080 as a listener here, and finally, that the routes we're opening are going to be serving productpage.example.com. They'll be looking specifically for /productpage, which will be forwarded to the productpage service that runs in our management cluster, as well as /details, which is forwarded to the details service running in the remote cluster. Again, it's worth highlighting that these two services run in two different clusters, even though we're configuring them all in the same place.
What we should see here is two different ingress gateways being deployed, one in each cluster. And we do: we see one coming up here, and we see one coming up here. So both of these meshes will be serving the endpoints that we just looked at in the HTTPRoute. Let's do some port forwarding so we can try them out.
And then in the remote cluster, I'm going to forward the gateway that's also running on 8080: I'm just going to port-forward it to 8081 locally, just so that we can try both of them in parallel. And now I'm going to curl these services. So we're doing a curl on 8080 to the details service, so we're going to be hitting this management cluster, and you can see we get a response here from the details service, even though the details service is running in cluster 2 and we hit the ingress in cluster 1.
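The demo commands aren't shown verbatim in the transcript; a sketch of the port-forwarding and curl steps being narrated might look like this (context names, namespaces, and deployment names are assumptions):

```shell
# Forward each cluster's ingress gateway to a local port
kubectl --context mgmt-cluster -n istio-system \
  port-forward deploy/istio-ingressgateway 8080:8080 &
kubectl --context remote-cluster -n istio-system \
  port-forward deploy/istio-ingressgateway 8081:8080 &

# Hit the management cluster's ingress for a service living in the remote cluster
curl -H "Host: productpage.example.com" http://localhost:8080/details/0

# The same routes are served by the remote cluster's ingress in parallel
curl -H "Host: productpage.example.com" http://localhost:8081/details/0
```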
Anyone who's familiar with our Gloo Edge external auth config will recognize this, because it's the exact same spec, the exact same config, the same syntax anyone who's used it before is familiar with. And if anyone's not familiar with it, we'll just walk through it real quick: here we're adding an authentication filter, and we have two different types here.
That's what's interesting. We're adding basic auth, so you can authenticate with the username "user" and the password "password", or you can authenticate with an API key of solocondemop1234. We're using the advanced boolean expression logic that's currently found in the Gloo Edge API gateway today, where you can authenticate with either basic auth or the API key.
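As a sketch of that config (the Gloo Edge AuthConfig CRD supports basic auth, API keys, and a boolean expression over named configs, though the exact values below, including the hashes and selector, are illustrative):

```yaml
apiVersion: enterprise.gloo.solo.io/v1
kind: AuthConfig
metadata:
  name: basic-or-apikey
spec:
  booleanExpr: basic || apikey   # either mechanism is sufficient to authenticate
  configs:
    - name: basic
      basicAuth:
        realm: gloo
        apr:
          users:
            user:                # username "user"
              salt: "..."        # salted hash of the password; values elided here
              hashedPassword: "..."
    - name: apikey
      apiKeyAuth:
        headerName: api-key      # clients present the key in this header
        labelSelector:
          team: demo             # selects the Secrets holding the allowed keys
```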
And now, if we try to curl those requests again, you'll see we're getting a 403, because we didn't provide any authentication. So once we start adding our API key, and here it's as simple as passing the API key in a header called api-key, you'll see that we're getting 200s. Similarly, if I add a bad API key that's not authenticated or authorized, you'll get a 403 again. And remember, we also added basic auth; here's an example of that working, where I run a curl with basic auth.
Now we're getting 200s again: we see the response we expect, we got a 200, everything's great. So this is just a few of the features that we're working on in this new platform, just to give you an idea of what's possible, but there's plenty more to come, and we're really excited to show it to you. So what we've just shown you is that you can have the best of both worlds.
You get all of the incredibly powerful, enterprise-grade features of the Gloo Edge API gateway, running in a multi-edge, multi-cluster environment, thanks to Gloo Mesh. And remember, all of this is sitting on top of this newly supercharged Istio environment, which itself is sitting on top of Envoy. So you're getting a lot of power from all of those low-level tiers, but you only have to deal with the high-level config that goes into the management plane.