From YouTube: Application Networking Day Session #11: Access Applications Anywhere at Anytime with Virtual Mesh
Description
Featuring Nick Nellis. A single cluster mesh deployment just doesn't cut it in most environments today. Enterprises are deploying more Kubernetes clusters than ever before. Applications that used to live in the same cluster may span many and be managed by different teams. This talk will show how Istio and Gloo Mesh have evolved alongside these environments to extend the same service mesh features you expect from a single cluster deployment to many.
Compared to some of the other talks you've heard today, we're going to take kind of a more developer-centric approach and talk about what happens when you start adding more than one Kubernetes cluster, and how you access applications that are not in the same cluster your application is in. I'm Nick Nellis, a field engineer at Solo.io. I've worked a lot on Istio in the last four or five years, mainly from an end-user perspective. I was one of the first end users to get Istio to production, and so now I help a lot of customers like yourself implement Istio in lots of technically hard environments. So if you have any hard problems, bring them over my way; I enjoy those.
What I found interesting, and I found a really unique infographic here, is that, actually, before I even get to it: how many people are running more than one Kubernetes cluster in production? Most of you. So this survey from the CNCF, which is really interesting, shows that running just one cluster in production is actually decreasing, and running multiple clusters in production, from, you know, six to ten to fifty-plus, is rapidly increasing. And so we're going to talk about the evolution of that as it relates to service mesh today.
Some of the reasons why that is: using Kubernetes is so much easier than it has been in the past. Managed services are rapidly on the rise, and because of that, people are more enabled to deploy Kubernetes clusters for any type of use case, whether that's per team or per product, or, you know, for a lot of different reasons. And so, with all these Kubernetes clusters getting deployed, we have to talk about how service mesh fits into this ecosystem.
That ecosystem is changing, and so today we're going to talk about a bit of a service mesh journey, specifically about the operators and the developers who use that service mesh in a multi-cluster environment, and then, at the end, a little bit about how Gloo Mesh makes that easy.
So in a single-cluster service mesh, something like Istio, it's really easy. You get all of the features of Istio out of the box: secure communication, observability, monitoring, all managed by that istiod control plane. There are a lot of really great features just built in. You deploy Istio and you get all these features relatively easily.
You also get to keep your DNS naming, so you can keep using kube-dns. If the front end here needs to talk to the back-end service, it's just making the same call it was making before, but now there's a sidecar attached to it, so it's kind of a seamless replacement, and automatic sidecar injection obviously makes that a lot easier.
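As a minimal sketch of that single-cluster case (the service names here are hypothetical), the front end keeps calling the same in-cluster DNS name whether or not the sidecar is injected:

```yaml
# Hypothetical back-end Service; with Istio sidecar injection enabled
# in the namespace, the front end still reaches it at the same
# kube-dns name, only now the call goes through the sidecar proxy.
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: default
spec:
  selector:
    app: backend
  ports:
    - name: http
      port: 8080
# The front end keeps calling:
#   http://backend.default.svc.cluster.local:8080
```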
But what happens if we add a gateway? Now you want to expose your back end through an ingress gateway as well as internally, and what that does, typically, is that companies stand up two different DNS names. You have two different ways to access your back-end application, and now it's kind of confusing: your application has to be more environment-aware of where it is and how it accesses that application. And this is kind of a logical step.
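As an illustrative sketch of that second path (hostnames and resource names are hypothetical), exposing the back end externally typically adds an Istio Gateway and VirtualService, giving it an external DNS name alongside the internal kube-dns one:

```yaml
# Hypothetical external exposure of the back end: a second DNS name
# (backend.example.com) now exists next to the internal
# backend.default.svc.cluster.local name.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: backend-gateway
  namespace: default
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - backend.example.com
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: backend
  namespace: default
spec:
  hosts:
    - backend.example.com
  gateways:
    - backend-gateway
  http:
    - route:
        - destination:
            host: backend.default.svc.cluster.local
            port:
              number: 8080
```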
So it's pretty much two times the work at this point. But specifically, we're going to focus on some of the routing. How does the front end reach the back end? It can no longer use the DNS hostname it was using before, and so, because one team moved their application, you now have to update your own application. As you grow, this problem becomes a lot harder to deal with at scale. Now you're going to have to change your front-end application to call a load balancer, which then goes through an ingress gateway and then to the back-end application. But this problem actually gets a little bit harder: what if I have two of them? Well, your front-end application can still call that same hostname.
A
But
now,
if
you
had,
let's
say
Team
B
decided
to
make
their
application
highly
available,
so
they
deployed
one
in
Us
East
and
then
in
US
West.
The
front-end
application
actually
has
no
say
on
where
that
request
is
going,
and
so
now
you're
going
to
get
even
more
kind
of
you
get
routing.
That's
just
not
efficient,
essentially.
So without a service mesh, this problem actually exists at Shipt. We're giving a talk tomorrow at ServiceMeshCon about the specific problem they had in their environment, and they came up with a really crazy, unique solution to do this without a service mesh: all of their internal applications have a unique DNS hostname, and all of their applications' calls leave the cluster for, essentially, a global gateway, and then that gateway is told where the next route needs to happen.
But you don't have to settle for this. Something like Gloo Mesh can actually be placed on top to significantly improve the routing between those clusters. What it does is let Istio do what it's best at, and that's managing a single cluster. Each Istio cluster is going to be singly managed, but then, over the top of it, we put Gloo Mesh, and it stitches those together into a thing we call a virtual mesh. What that does is essentially take those service mesh features that I talked about in a single cluster and make them environment-wide, for every cluster.
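As a rough sketch, assuming Gloo Mesh's management-plane CRDs (the exact API group, version, and field names may differ by release), stitching two registered Istio meshes into one virtual mesh looks something like:

```yaml
# Hypothetical VirtualMesh stitching two registered Istio meshes into
# one environment-wide mesh with a shared root of trust, so mTLS works
# across cluster boundaries.
apiVersion: networking.mesh.gloo.solo.io/v1
kind: VirtualMesh
metadata:
  name: virtual-mesh
  namespace: gloo-mesh
spec:
  mtlsConfig:
    autoRestartPods: true
    shared:
      rootCertificateAuthority:
        generated: {}        # let Gloo Mesh generate the shared root CA
  federation:
    selectors:
      - {}                   # federate all services across member meshes
  meshes:
    - name: istiod-istio-system-cluster-east
      namespace: gloo-mesh
    - name: istiod-istio-system-cluster-west
      namespace: gloo-mesh
```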
That means secure communication even when you leave your cluster; a single global control plane where you manage your whole fleet of service meshes as one; and intelligent routing. We're going to talk a little bit about how that intelligent routing can improve the Shipt architecture, so that, even though they still have that kind of setup today, we can be much smarter about how we do our routing. And then observability.
So now we have that single pane of glass, where all those service meshes can feed their observability information, and you can view it from one spot. From an architecture-diagram point of view, this is what it looks like: you have your individual istiod deployments, and today we support both the open-source version and a customized version.
That customized version has some added features that we offer. And then you just attach a management plane on top of that, which we call Gloo Mesh, or Gloo Platform now, excuse me. That tells each Istio exactly what it needs to know to solve the routing problems of that cluster, and so it's essentially environment-aware, and it's contextual. It knows where all your other applications are and how best to route to them, and it can also support best-case routing if, you know, locality is a factor.
One really unique feature that Gloo Mesh offers is the idea of global hostnames, kind of along the same lines as what Shane was talking about with SPIRE. You can give all of your applications a unique identity, but now, with Gloo Mesh, you can also give them a unique hostname. This hostname is essentially available to any cluster running Gloo Mesh, and it makes your application reachable from any other cluster.
We have all kinds of security built in as well, so that if you don't want somebody to access your service, they can't. But essentially, you can now make your service unique to your entire environment, and a developer doesn't necessarily need to know where that application lives, because between Gloo Mesh and Istio we're going to figure out how to route it there securely and efficiently. And what's really unique about this is that it allows for dynamic updates. So let's say the back-end team decides, again, that they want to move that back-end application to a different cluster.
Well, because Gloo Mesh sits on top of all that, it can identify that the application has been moved and update all the routing in real time. So now, regardless of where your application is running, that front end doesn't have to change, and the routing will automatically and dynamically update to a best path. And what's really great about something like the Shipt environment, this multi-cloud, kind of hybrid environment, is that even if you move clouds, this hostname will still be retained within the virtual mesh, and the routing will just be updated accordingly. So it allows for a lot more seamless migrations. We have some customers migrating from on-prem Kubernetes to public cloud, and the same thing applies: they don't even have to update DNS hostnames, because the routing just updates with the move of the application.
The second thing that we can tie to this DNS is intelligent routing. We can essentially take each route, or each different back-end application that's running, and set a custom priority, or even a locality-based priority, on it, to tell that front-end application: where is the best place to route your first request, and then, if that one isn't successful, where's the next best place to route it?
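One way to sketch that priority idea, again assuming the VirtualDestination API (the cluster names and the ordering-as-priority convention here are illustrative assumptions), is to list the backing services in failover order:

```yaml
# Hypothetical priority-ordered routing for the global hostname:
# the order of backing services acts as the failover priority, so
# requests try us-east first and fail over to us-west.
apiVersion: networking.enterprise.mesh.gloo.solo.io/v1beta1
kind: VirtualDestination
metadata:
  name: backend-global
  namespace: gloo-mesh
spec:
  hostname: backend.global
  port:
    number: 8080
    protocol: http
  virtualMesh:
    name: virtual-mesh
    namespace: gloo-mesh
  backingServices:
    - kubeService:
        name: backend
        namespace: default
        clusterName: cluster-us-east   # first priority
    - kubeService:
        name: backend
        namespace: default
        clusterName: cluster-us-west   # next best, on failure
```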
This is also updated dynamically, and you can actually change the rules for how you want to determine priority. Combining that with the DNS, we are making some really great routing-efficiency improvements, and the Shipt use case is a really great example. Tying it back: using this hostname, we can actually override public DNS names. So if we go back to the Shipt use case here, bar.shipt.com is going to be our example hostname.
We can actually hijack that hostname in the mesh and then tell the foo application the best place to route it, and so we can essentially skip all of the gateway clusters and have foo route directly to the bar application. That saves hops: instead of going through four or five hops, we can reduce it down to one or two. And, even better for their environment, we can retain the identity of that foo application.
So now bar will know exactly where that request came from, and it's over mTLS. Tying those two together, DNS and then locality-based, or, sorry, priority-based routing, really enhances the service mesh experience, even beyond what Istio does in a single cluster. And I was going to show you a demo here.
One demo we have with Gloo Mesh shows the front-end application running in a single cluster, while one of its features, which a team decided to implement separately, runs in another cluster. Typically this would be really hard to orchestrate, but with Gloo Mesh you can create one custom resource, with which we've created the global hostname and set up all the priority-based load balancing, and then the front-end application could automatically reach this checkout-service feature without really having to do much work at all. Gloo Mesh is doing the heavy lifting behind the scenes. And so that's why I wanted to talk about moving from a single cluster to multi-cluster Kubernetes and service mesh. Thank you.