From YouTube: 012 Gloo Mesh Roadmap
Description
Running a service mesh across multiple clusters, zones, or clouds increases operator burden for day-2 activities. In this talk, we'll see how Gloo Mesh aims to eliminate this burden and overhead.
A: Awesome. So we've been digging into service mesh and participating in the ecosystem here for the last three years or so, and one thing we noticed from the very beginning is that there wasn't going to be just one service mesh. As you can see from some of the different logos here on the slide, there are going to be different cloud provider service meshes, like Open Service Mesh from Azure or App Mesh from AWS.
A: Istio and Linkerd were the first couple that started off, and then other folks have jumped in. What we've noticed is that it's difficult for folks to figure out which one to use: which one best fits their organization, and where are they going to get support?
A: Is the API the right API for them? Can you introduce it transparently? Does it support VMs and non-container-based workloads, multiple clusters, and so forth? When we initially started attacking these problems, we open sourced a project called SuperGloo that focused on simplifying some of the complexity around adopting, installing, and managing a service mesh, and we started off by saying: any service mesh. So we came up with the idea of an abstraction to simplify how you actually interact with a service mesh.
A: Regardless of what mesh you may choose, we use this abstraction to focus on the core features of a mesh, those that might be common across meshes, and this picked up some interest from the rest of the community and started the movement around the Service Mesh Interface.
A: ...versus, let's say, ten. What we've noticed is that Istio is emerging pretty convincingly as the winner in the self-managed service mesh space, and we saw all of our customers adopting it and going to production with Istio. That said, Istio's API was written in such a way that it enabled automation on top of it; it wasn't really designed for end users, such as developers, to use directly.
A: So we decided to continue our efforts to simplify service mesh adoption, management, and day-2 operations, and to focus on Istio. When you take an Istio service mesh, deploy it into your environment, and operationalize it, there are issues you run into, especially across multiple clusters, when you deploy for high availability, isolation, fault tolerance, and so on: making these clusters aware of each other, federating their identity, sharing enough information to enable service discovery, establishing consistent security and access policies, defining failover, and these types of configurations.
A: These add a level of burden on the operator to make it all successful, and that's the burden we would like to simplify and focus on.
A: What is it the end user is trying to do? So, fast forward: SuperGloo evolved into Gloo Mesh, and that's exactly what we've built: a product that allows you to run Istio in production, with enterprise support, SLAs, and so on, and manage that across multiple clusters. We'll go into a little more detail about what some of those features are.
A: Gloo Mesh works with multiple different versions of Istio, run on-prem, in any cloud, or both in a hybrid scenario. What we've done with Gloo Mesh is simplify the operations of running Istio in an environment like this, which then enables things like global failover routing and locality-based routing. The tooling we've built under the covers helps you unify observability and gives you a single pane of glass for watching the telemetry, tracing, and so on.
A: That's all happening across multiple clusters. Gloo Mesh gives you a unified API for configuring traffic policies and access policies, as well as extending the mesh, that is, extending the underlying data plane (Envoy, in Istio's case) with WebAssembly. And our original idea of supporting multiple meshes still holds true: we have users interested in running Istio and App Mesh together and federating those, so supporting multiple meshes is still part of the charter of Gloo Mesh.
A: Istio's community supports its versions for N minus one, so there really isn't a long-term support option in the community, and this can be very important for organizations that care about security vulnerabilities and so on: they don't get patched in the community after N minus one. We support LTS, long-term support, for N minus three. We offer enterprise SLAs; we'll pick up the phone at two o'clock in the morning.
A: If something's going down, we'll jump in there and help you fix it; we'll patch it and deliver you that fix. And before you even get there, we'll offer architectural guidance and best practices. We're leaders in the Istio community, and we've built up a very strong practice around getting people successfully running Istio in production.
A: Now fast forward: we've got Istio running, we've got multiple groups running Istio, hosting multiple services. We want services to be highly available, and they might communicate across clusters. If you're familiar with Istio and Istio's API, creating VirtualServices, DestinationRules, Sidecars, all those things, doing that across multiple clusters can be tricky and error prone.
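To make that concrete, here is a minimal sketch of the raw Istio configuration being described: a VirtualService plus its matching DestinationRule routing the bookinfo `reviews` service to one subset. Names and namespace are illustrative; the point is that objects like these have to be created and kept in sync on every cluster.

```yaml
# Route all traffic for "reviews" to the v2 subset (illustrative example).
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
  namespace: bookinfo
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
---
# The subset itself is defined separately, in a DestinationRule.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
  namespace: bookinfo
spec:
  host: reviews
  subsets:
  - name: v2
    labels:
      version: v2
```

Multiply this by every service and every cluster, plus the Sidecar and Gateway resources that go with them, and the coordination burden described here becomes clear.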
A: Even with all the automation you've built, all the GitOps flows and all that, you're tracking which VirtualService goes onto which cluster and what happens if a particular service moves, and there's inherent directionality in the traffic flows and the access policies here. Managing all of this and trying to orchestrate it yourself becomes very cumbersome and error prone, and you might know that configuration is one of the leading reasons why large systems like this go down.
A: So that's where the Gloo Mesh management plane comes into the picture. The Gloo Mesh management plane allows you to centralize your policy writing, definitions, and so on, and to orchestrate and automate them out to the individual Istio clusters. Gloo Mesh comes with a simplified API, so again, this stays true to the very beginning, where we focused on simplifying the adoption of a service mesh.
A: We offer a simplified API for defining traffic policies, which is multi-cluster aware, and for defining access policies that restrict traffic to or from certain clusters, whether that's on-prem or in a public cloud. You basically focus on the "what": what is supposed to happen with the traffic between services. Then you let the management plane automate the "where": it translates the config under the covers into Istio config, works out which clusters that configuration needs to go to, when it needs to be updated, and when it needs to be removed, and manages that whole orchestration problem.
A: So let's take a quick look at some of the features that are enabled by this API. The first is the VirtualMesh API. The VirtualMesh is an abstraction over a group of meshes.
A: The management plane runs on its own independent cluster somewhere else, and it connects up to the various leaf clusters. In the enterprise product, that connection actually happens over gRPC, and it happens in reverse: the clusters connect up to the management plane and identify themselves, as in "Hey, I'm cluster one; give me my configs," and the same with cluster two. So you don't end up with a scenario where one cluster has access to everything or, as you might notice in the Istio community documentation for setting up multiple clusters, where each cluster has access to every other cluster, which is also not a very good security posture. With the Gloo Mesh management plane, you have the leaf clusters connecting up and identifying themselves, and then the management plane is able to push out configs. Defining what the configuration boundary is, that is, which clusters should participate in this abstraction, is what the VirtualMesh custom resource does.
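As a rough sketch, a VirtualMesh that federates the Istio installations on two registered clusters might look like the following. The field names follow the Gloo Mesh 1.x documentation and may differ by version, and the mesh names are illustrative:

```yaml
apiVersion: networking.mesh.gloo.solo.io/v1
kind: VirtualMesh
metadata:
  name: virtual-mesh
  namespace: gloo-mesh
spec:
  # Establish a shared root of trust so workload identity
  # federates across the member meshes.
  mtlsConfig:
    autoRestartPods: true
    shared:
      rootCertificateAuthority:
        generated: {}
  # Share enough information across meshes for cross-cluster
  # service discovery.
  federation: {}
  # The discovered Istio installations that participate in
  # this configuration boundary.
  meshes:
  - name: istiod-istio-system-cluster-1
    namespace: gloo-mesh
  - name: istiod-istio-system-cluster-2
    namespace: gloo-mesh
```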
A
It
does
discovery
of
the
various
services
that
are
running
in
each
of
the
clusters,
so
it
can
build
a
global
service
discovery
catalog
and
it
also
automates
and
orchestrates
some
pieces
inside
of
each
of
the
clusters
to
facilitate
telemetry
collection,
log
aggregation
and
some
of
these
other
things
that
we
need
to
get
a
single
pane
of
glass
for
observability
across
the
the
different
meshes.
A: With the traffic policy API, you apply traffic policies, routing policies, fault injection policies, retry policies, timeouts, and so on against the link between a particular service, or a particular group of services, and another service or group of services. So it's a "from" and a "to", a source and a destination, and what is the policy that applies to that link?
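A sketch of such a TrafficPolicy, applying retries and a timeout to the link from the bookinfo workloads to the `reviews` service on one cluster (field names follow the Gloo Mesh 1.x docs and may differ by version; cluster and service names are illustrative):

```yaml
apiVersion: networking.mesh.gloo.solo.io/v1
kind: TrafficPolicy
metadata:
  name: reviews-resilience
  namespace: gloo-mesh
spec:
  # The "from": workloads in the bookinfo namespace, on any cluster.
  sourceSelector:
  - kubeWorkloadMatcher:
      namespaces:
      - bookinfo
  # The "to": the reviews service running on cluster-1.
  destinationSelector:
  - kubeServiceRefs:
      services:
      - clusterName: cluster-1
        name: reviews
        namespace: bookinfo
  # The policy that applies to that link.
  policy:
    requestTimeout: 2s
    retries:
      attempts: 3
      perTryTimeout: 500ms
```

The management plane then translates a resource like this into the appropriate Istio configuration on whichever clusters need it.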
A: An access policy says that this source and this destination, or this group of destinations, can communicate with these characteristics, or that these cannot communicate. Similarly to the traffic policy, the access policy gets translated into the various appropriate Istio resources and then orchestrated onto the individual clusters, and then the individual control planes of those clusters pick up these configs and configure their respective data planes. And like I said, the individual control planes are separate and isolated, and if any one of them were to go down, the rest of the system performs fine.
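For illustration, an AccessPolicy allowing only the `productpage` service account to reach the reviews services might be sketched like this (again, field names approximate the Gloo Mesh 1.x API and may differ by version):

```yaml
apiVersion: networking.mesh.gloo.solo.io/v1
kind: AccessPolicy
metadata:
  name: productpage-to-reviews
  namespace: gloo-mesh
spec:
  # Source: requests authenticated as the productpage service account.
  sourceSelector:
  - kubeServiceAccountRefs:
      serviceAccounts:
      - clusterName: cluster-1
        name: bookinfo-productpage
        namespace: bookinfo
  # Destination: any reviews service in the bookinfo namespace,
  # on any cluster in the virtual mesh.
  destinationSelector:
  - kubeServiceMatcher:
      namespaces:
      - bookinfo
      labels:
        app: reviews
```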
A: If the management plane goes down, these are still individual Istio control planes; they're able to run just fine, so it ends up being a very resilient system. Now let's take a look at this API when you start to use it across multiple teams and different personas in your organization. We'll have Joe Kelly take it from here.
B: Thanks, Christian. Yeah, like you said, typically in an Istio deployment, in the world of service mesh, you're going to have a lot of stakeholders: the administrators of the global environment, local administrators, service owners, individual developers, etc. And when you're dealing with just plain Istio configuration and all of the many interlinked API objects therein, you're going to have issues of contention as to who owns a given resource when you need to use a VirtualService, a DestinationRule, and a Sidecar.
B: With the Gloo Mesh API, we can leverage a persona-based API to make those lines clearer. As you saw, our traffic policy and access policy APIs make it explicit which services and workloads they'll be affecting within a cluster. Gloo Mesh can then offer an intermediate layer in the form of an RBAC webhook, which will restrict certain services, or even individual routing features on the traffic policy, to a given persona defined by the user.
B: For example, you can offer a bookinfo service owner persona who has the ability to define access policies and traffic policies only within the bookinfo namespace, while a different administrator persona might be responsible for dictating which external services can access the services that are exposed through bookinfo.
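As an illustrative sketch only, a persona like that could be captured in a Gloo Mesh Enterprise Role resource. The exact schema varies by version, so treat the field names below as assumptions rather than the definitive API:

```yaml
apiVersion: rbac.enterprise.mesh.gloo.solo.io/v1
kind: Role
metadata:
  name: bookinfo-service-owner
  namespace: gloo-mesh
spec:
  # Scope: this persona may write traffic policies, but only ones
  # whose sources and destinations live in the bookinfo namespace.
  trafficPolicyScopes:
  - trafficPolicyActions:
    - ALL
    destinationSelectors:
    - kubeServiceMatcher:
        namespaces:
        - bookinfo
    workloadSelectors:
    - kubeWorkloadMatcher:
        namespaces:
        - bookinfo
```

A RoleBinding would then attach this role to the users or groups acting as bookinfo service owners.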
B: Also bundled in Gloo Mesh Enterprise is the ability to provision global services through our VirtualDestination API. The VirtualDestination API offers a couple of modes. One is a serial failover scheme, where users can specify an order of services which should receive traffic in the event of an outage or otherwise detected outliers. The other is a locality-based scheme for multi-cluster load balancing: users specify an arbitrary service name, which then gets propagated throughout the mesh using Istio's proxy DNS, and a list of localities in priority order which should receive the traffic.
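A sketch of the serial failover flavor, defining a global hostname that sends traffic to the `reviews` services in cluster order. The apiVersion and field names approximate the Gloo Mesh Enterprise 1.x API and may differ by version; all names are illustrative:

```yaml
apiVersion: networking.enterprise.mesh.gloo.solo.io/v1beta1
kind: VirtualDestination
metadata:
  name: reviews-global
  namespace: gloo-mesh
spec:
  # The arbitrary name propagated through the mesh via Istio's
  # proxy DNS; clients simply call reviews.global:9080.
  hostname: reviews.global
  port:
    number: 9080
    protocol: http
  virtualMesh:
    name: virtual-mesh
    namespace: gloo-mesh
  # Serial failover: traffic goes to these backing services in
  # order, moving down the list when outliers are detected.
  failover:
    backingServices:
    - kubeService:
        clusterName: cluster-1
        name: reviews
        namespace: bookinfo
    - kubeService:
        clusterName: cluster-2
        name: reviews
        namespace: bookinfo
```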
B: This single pane of glass is available not only to users who rely on Gloo Mesh for metrics aggregation, but also to others who might have a more long-standing metrics aggregation system in place, such as federated Prometheus via Thanos. The Gloo Mesh metrics functionality is all pluggable, so you can aggregate your metrics however you would like and still benefit from the different observability features that we have today and will be adding over time.
B: Gloo Mesh Enterprise goes a long way toward making it easier to manage a service mesh. Configuration is a lot simpler: all you have to think about is sources and destinations for the various routing features, security features, and additional functionality brought to your mesh, rather than having to wrangle complex, interlinked APIs. Gloo Mesh also offers long-term support for Istio back to N minus three, with bug fixes and security patches.
B: We're also there to give you assurance in your service mesh choices. We all appreciate Istio as a leader today, but the service mesh ecosystem is rich and ever-growing, so by integrating with Gloo Mesh, you are not bound to any given mesh implementation.
B: Let's take a look at Gloo Cloud in action. This is Gloo Cloud, the first and only Istio service mesh as a service, bringing all the benefits and power of the Istio service mesh to all of your clusters, data centers, and clouds through a single pane of glass, managed by the experts here at Solo.io. Gloo Cloud is a fully managed solution for Istio.
B: Gloo Cloud offers a rich suite of security features, including identity policy, mTLS, authentication (authN), authorization (authZ), and audit tools, all on top of FIPS-compliant builds of both the Istio control plane and the Envoy proxy data plane. Gloo Cloud enables global service load balancing and federated traffic and access control, regardless of where your workloads are running. Gloo Cloud also supports complex rate limiting, usage plans, and billing use cases, no matter where your services reside.
B: Once you're registered, we'll get started provisioning your environment with the Istio service mesh and the Gloo Mesh management plane; it'll take a few minutes to get that environment ready. So let's take a look at today's application cluster. Here you'll see that we're running bookinfo and nothing else: there's no istiod, and the bookinfo pods are not injected with the Istio sidecar.
B: We have a Gloo Cloud kubeconfig so that we can apply API objects to the Gloo Cloud management plane, as well as a certificate and a cluster registration token to enable declarative registration of new clusters to the Gloo Cloud management plane. Back in the terminal, let's switch to the kube context provided on the My Credentials page, and from here...
B: Now let's switch back to that Gloo Cloud cluster. Our permissions on this cluster are limited to those needed to manage our services on remote clusters, and what that means is that we can apply a traffic policy to the Gloo Cloud cluster and then have it reflected on the registered application cluster.
A: Well, thank you, Joe, for that demo of Gloo Cloud, the first and only Istio as a service for any cloud. And that concludes our talk. I want to thank you for joining, and I'd like to point you to some of the links where you can learn more about what we're doing here at Solo.io, including Gloo Cloud and some of the open source communities that we work in. Join our Slack to continue the conversation. We appreciate you joining this session.