From YouTube: Amazon EKS Anywhere to EKS Communication with Gloo Mesh
Description
SoloCon 2022: Amazon EKS Anywhere to EKS Communication with Gloo Mesh
Speakers:
Jonah Jones, Solution Architect, AWS
Lawrence Gadban, Field Engineer, Solo.io
Session Abstract:
This presentation will showcase how you can use Gloo Mesh to establish seamless network communication between an on-premises EKS Anywhere cluster and an Amazon EKS cluster running in the public cloud.
Track:
Service Mesh and Application Networking
A: Hello there, and welcome to today's talk. The title of the talk is "Amazon EKS Anywhere to EKS Communication with Gloo Mesh."
A: I'm Jonah Jones. I work at AWS. I was a container solutions architect for the past two and a half years; currently I'm an SDE on the EKS Anywhere team. I'm also a maintainer of the CNCF project Falco, which is an incubating project. That's my GitHub and that's my Twitter, and I'll be covering some of the slides for today's talk.
A: We also have Lawrence, who will be giving today's demo. Lawrence has been a field engineer at Solo.io for the past two years; that is his GitHub, and it's also where you'll find the link to the demo at the end of the talk. I'm going to quickly go over our agenda so you have an idea of how the talk will be structured.
A: We'll cover what EKS Anywhere is and why it's useful for on-prem to public cloud communication, then we'll see a demo of that working in action with Gloo Mesh, and then we'll have some time for Q&A at the end as well. So let's get started.
The first thing we want to cover is what EKS Anywhere is, for people who aren't familiar. It's our on-premises version of EKS, which we released in September of 2021.
A: Why it's super useful is that we took out any hard requirements on other AWS services, so it can run totally isolated, in scenarios where you don't have a connection back to the public cloud. That means things like IAM roles for service accounts or the load balancers aren't a requirement for EKS Anywhere; you can have those if you want, but you can also run it air-gapped.
A: It's a totally open-source service, and it's built on open-source standards as well, so you can see exactly what Kubernetes we run, what patches we apply, and every piece of it, all on GitHub. It's bundled with components to accelerate your launch and getting started: we put things like Cilium in there as the CNI, cert-manager to manage the CA in the cluster and the communication between your vSphere environment and your cluster, and then things like management of etcd. All of that we cover for you.
Amazon EKS Anywhere provides Kubernetes lifecycle management in your data center, on your hardware, and like I mentioned on the last slide, we're offering opinionated cluster creation and lifecycle management tooling. I'm not going to go over everything we provide, but things like the OIDC connection back to the console, and enterprise support if you're a paid customer, are super useful, and we've had a lot of customers trying EKS Anywhere in their own data centers. It's totally free.
A
If
you
want
to
try
it
and
if
you
want
to
get
the
enterprise
support,
obviously
you
can
there's
like
a
paid
licensed
version,
and
so
when
people
are
trying
eks
anywhere,
there
are
sometimes
scenarios
where
they
want
connections
back
to
their
public
cloud.
For
you
know,
reasons
like
data
or
other
micro
services.
A: Amazon EKS Anywhere is built on open standards and provides a consistent experience across environments. Like we mentioned earlier, it's meant to be an offering of Amazon EKS, but on-prem, so we use the same core primitives under the hood to give you the feeling that you're running EKS, but in your own data center. We use Cluster API to provision the clusters; that isn't what we use for Amazon EKS.
A: But it is what we're using for EKS Anywhere, and it helps give us a consistent experience, because we're able to use the different Cluster API providers to make sure the cluster is created in exactly the way it would be on the public cloud.
A: We use all the same patches, and all the same observability and deployment tools work. The biggest difference is what hardware you're running on: on premises you have your own bare metal or VMware VMs, whereas on the AWS cloud you're most likely using Amazon EC2 or Fargate. But from the operational perspective of managing a cluster, it should feel like the same experience, and that's really what we're going for: when you type kubectl, it feels exactly the same as Amazon EKS.
A: Why this is important: whenever there are things like API deprecations, we always follow upstream standards, so you can look to upstream guidance, in addition to ours, to figure out how to resolve them. We support four versions of Kubernetes backwards for security purposes.
A: We provide a managed Kubernetes experience for performant, reliable, secure Kubernetes; those are our biggest tenets. We do a lot of stress testing, reliability testing, and security testing, and we have members of our engineering team who work on the Kubernetes security board, so we're always trying to keep security in mind as our number one goal when building Kubernetes.
A: We're also now trying to make operations, administration, and management simple and boring. Now that we've really got security, performance, and reliability down, we're trying to make your life easier as a Kubernetes admin and take out some of the tasks that provide no value to your company (we call that toil, right), and that's really what we're working on right now.
A: Now that you have a background on Amazon EKS Anywhere and Amazon EKS (one's on premises and one's in your public cloud), let's talk about a scenario we hear really often: people want a cluster on premises and some clusters in the cloud, and they need them to talk to each other.
A: We had one large gaming company that, for latency reasons, ran on-premises clusters in different colocations across the country, but they ran a lot of their microservices on Kubernetes in us-west-2, and they needed a way for the clusters in their colocations to talk back to the clusters in us-west-2, for things like authorization, so these clusters could seamlessly talk to each other. This is a common scenario.
Things like data, too: if you have all of your databases on premises, or if you have them all in the public cloud and you're running Kubernetes at the edge, you might need the clusters to talk back to the databases, right?
A: This is becoming a super common scenario as companies get more mature in the Kubernetes space and run more Kubernetes clusters across the different areas they work in, for the reasons I mentioned: latency, security, data residency. What we're seeing is gaming companies, health care, financial services.
A: They all have different use cases that require them to run some amount of Kubernetes on-prem, and the majority of them are also AWS customers already running in the cloud. So what we're going to do is use Gloo Mesh, a product by Solo.io, to facilitate that cross-cluster communication seamlessly.
A: Here's a diagram of what it will look like. If you're familiar with Istio, there's the Bookinfo app, and what you're looking at here are the Bookinfo routes; you can see the reviews routes and so on. We have the EKS Anywhere cluster on premises here, and we're using Gloo Mesh to have it talk to another service on the EKS cluster in the cloud.
A: The reviews service needs to talk to the ratings microservice, so this is exactly like our gaming company scenario. The services don't know they're talking to another cluster; they're just trying to talk to service.namespace.svc.cluster.local, right? With Gloo Mesh you can make this really seamless, where your clusters are able to talk across to each other without having to know that's what they're doing.
B: Okay, so let's take a look at how we can use Gloo Mesh to connect your Amazon EKS Anywhere cluster to an Amazon EKS cluster running in the public cloud. For this demo we'll be using the standard Bookinfo example application from Istio, a microservices-based app with a UI front end called the product page, which we can access through a browser and which then calls a handful of microservices behind the scenes.
B: There are three reviews services and a details service that get called, and then v2 and v3 of reviews call the ratings service. We're going to take this application and deploy it across EKS Anywhere and EKS. Our EKS Anywhere cluster, which will be running on my laptop, will contain the majority of the application: the product page, all three reviews services, and the details service. Then, to simulate the cross-cluster, cross-cloud connectivity, we'll have the ratings service running on an EKS cluster in the public cloud. We also have a management cluster where we'll install Gloo Mesh; this will also run in the public cloud, and it will be responsible for configuring and managing the two workload clusters.
B: So let's go ahead and get started. The script on the right is an actual script that can be run to do all of the steps we'll be doing, but for the demo I'll just copy and paste and talk over the steps as they happen. The first thing is setting up some environment variables that will be used throughout; then I would create the actual EKS public cloud clusters.
B: I did that ahead of time just to save some time, but these are the commands I used: basically eksctl create cluster, just standard EKS clusters. To show where we're at right now with those pre-provisioned clusters, we'll take a look at the contexts and the deployments available. I have two kubeconfig contexts, cloud and management, and on both the management and cloud clusters the only deployment running is CoreDNS, so basically they're just bare clusters.
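The pre-provisioning described here can be sketched roughly as follows; the cluster names match the demo's contexts, but the region and exact flags are assumptions, not shown on screen:

```shell
# Create the two EKS clusters in the public cloud (done ahead of time in the demo).
eksctl create cluster --name cloud --region us-west-2
eksctl create cluster --name management --region us-west-2

# Confirm the kubeconfig contexts exist and that only CoreDNS is deployed so far.
kubectl config get-contexts
kubectl get deployments -A --context cloud
kubectl get deployments -A --context management
```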
B: First, I'm going to use the EKS Anywhere plugin, the eksctl anywhere plugin, to generate a config file, and I'll be using the Docker provider, since this is going to run locally on my laptop. Then I'm going to use that config file to actually create the cluster. We'll kick that off; it takes a little bit of time, so we'll fast forward to when the cluster is created and ready to go.
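The generate-then-create flow uses the eksctl anywhere subcommands; the cluster name here is an assumption:

```shell
# Generate an EKS Anywhere cluster spec using the Docker provider (runs locally).
export EKSA_CLUSTER=eksa
eksctl anywhere generate clusterconfig "$EKSA_CLUSTER" --provider docker > "$EKSA_CLUSTER.yaml"

# Create the cluster from that spec; this takes several minutes.
eksctl anywhere create cluster -f "$EKSA_CLUSTER.yaml"
```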
B: Okay, so our EKS Anywhere cluster is now ready to be used. The cluster has been created, and in order to use it, I'm going to take the kubeconfig file that was generated as part of the bootstrap process and add it to my KUBECONFIG environment variable so that I can use it with kubectl commands. Then I'll rename the context to match the variables we set at the beginning. Now let's check that everything is good on it.
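Wiring up the generated kubeconfig might look like this; the kubeconfig path and generated context name follow the usual EKS Anywhere layout but aren't shown in the recording:

```shell
# EKS Anywhere writes a kubeconfig under a folder named after the cluster.
export KUBECONFIG="$KUBECONFIG:$PWD/$EKSA_CLUSTER/$EKSA_CLUSTER-eks-a-cluster.kubeconfig"

# Rename the generated context to match the variables set earlier, then verify.
kubectl config rename-context "$EKSA_CLUSTER-admin@$EKSA_CLUSTER" eksa
kubectl get nodes --context eksa
```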
B: Next we'll install Istio. To do that, I'm going to use the istioctl CLI tool, so first I add it to my path. I have two config files: istio-operator.yaml and istio-operator-eksa.yaml.
B: Those are basically config files for installing Istio. I take them, do things like replacing the cluster name so it's correct, and then use istioctl install, which takes the config file, translates it into a set of manifests, and applies those manifests to the cluster. When that's done, the end result is that Istio is installed. In this case, the install on the cloud cluster looks like it has just finished.
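The substitute-and-install step could be sketched like this; the placeholder token in the operator file is an assumption:

```shell
# Substitute the cluster name into the operator config, then install Istio
# on the cloud cluster from it.
sed "s/CLUSTER_NAME/cloud/" istio-operator.yaml > /tmp/istio-cloud.yaml
istioctl install --context cloud -f /tmp/istio-cloud.yaml -y

# Verify the control-plane pods afterwards.
kubectl get pods -n istio-system --context cloud
```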
B: So I'm going to kick off the install for the EKS Anywhere cluster, and while that's running I'll get the next commands ready. Basically, once the installation is complete, we'll check the pods in the istio-system namespace, which is the default namespace for Istio, and make sure they're up and running.
B: Now let's check the same thing on the EKS Anywhere cluster: same thing, all three pods are ready. So now let's actually deploy our Bookinfo application. If you remember, on the cloud cluster running in the public cloud we're only going to deploy the ratings service, and then we'll deploy the rest of Bookinfo on our EKS Anywhere cluster.
B: Let's go ahead and do that. The first thing I'm going to do is label the default namespace, which is where I'm going to deploy the application. In this case, on the cloud cluster, I'm labeling the default namespace with istio-injection enabled and then creating the various components necessary for the ratings service. That label is there because it's what tells Istio that, as pods are created in this namespace, we want to inject the proxy as a sidecar.
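The label-and-deploy step might look like the following; the Istio release in the URL and the label selector are assumptions based on the standard Bookinfo manifests:

```shell
# Tell Istio to inject sidecars into pods created in the default namespace.
kubectl label namespace default istio-injection=enabled --context cloud

# Deploy only the ratings components of Bookinfo to the cloud cluster.
kubectl apply --context cloud \
  -f https://raw.githubusercontent.com/istio/istio/release-1.13/samples/bookinfo/platform/kube/bookinfo.yaml \
  -l app=ratings
```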
B: The end result is that each pod spins up with the application as well as the proxy. Then we deploy the rest of the application, everything other than ratings, on the EKS Anywhere cluster. That's created, so let's check the pods in cluster one, the cloud cluster. This should only be the ratings service, and you can see that the ratings-v1 pod is up and running.
B: It came up 35 seconds ago and it's already two of two containers ready, so the ratings container and the sidecar proxy are both ready. Now let's double-check the EKS Anywhere cluster, which has all the rest of them: the details service, the product page, and the three versions of reviews. Those are still initializing; if we run it again, we should see, yeah, reviews-v1 is now running two of two.
B: While these spin up, we'll move on to the next step. Next we're actually going to install Gloo Mesh, and we're going to do that on the management cluster that's also running in the public cloud. We'll install it with Helm, and this essentially installs the management plane on the management cluster; again, the management cluster is going to manage the two workload clusters.
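A sketch of the Helm install of the management plane; the repo URL, chart name, and values here are recollections of the Gloo Mesh Enterprise docs of that era, so verify them against the docs for your version:

```shell
# Install the Gloo Mesh management plane on the management cluster.
helm repo add gloo-mesh-enterprise https://storage.googleapis.com/gloo-mesh-enterprise/gloo-mesh-enterprise
helm repo update
helm install gloo-mesh-enterprise gloo-mesh-enterprise/gloo-mesh-enterprise \
  --namespace gloo-mesh --create-namespace \
  --kube-context management \
  --set licenseKey="$GLOO_MESH_LICENSE_KEY"
```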
B: Once we have the management plane installed, we need to register our clusters. When that's completed, we'll be able to use Gloo Mesh and the Gloo Mesh APIs to do things like expose the application and set up the cross-cluster networking we want.
B: Let's check the pods before we move on. The Bookinfo pods on cluster two are up and running, two of two. Now let's check the pods of the Gloo Mesh installation on the management cluster: we can see we have four pods, for example the management server and the UI, and those are ready and running.
B: Now we'll actually register the clusters, cloud and EKS Anywhere. The first thing we'll do is create a custom resource called KubernetesCluster. This essentially tells the management plane that these clusters are registered and can be managed by it. We created one for cloud and one for eksa.
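The registration CRs can be sketched like this; the API group follows the Gloo Mesh 2.x docs, which is an assumption about the version used in the demo:

```shell
# Register both workload clusters with the management plane by creating
# a KubernetesCluster resource for each.
for cluster in cloud eksa; do
kubectl apply --context management -f - <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: KubernetesCluster
metadata:
  name: $cluster
  namespace: gloo-mesh
spec:
  clusterDomain: cluster.local
EOF
done
```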
B: Now that those are there, we can bootstrap the connection. We're going to install an agent, the Gloo Mesh agent, on these clusters, and that's what will actually communicate with the management plane. So we need to get the management server address, which is called the relay server address.
B: This is just a LoadBalancer service on the management cluster, and as you can see, we have a standard AWS load balancer domain name. This is what we'll use to register the agents as we install them: we'll use this address to connect and set up the initial connection. And speaking of that initial connection, it is secured with mutual TLS.
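Looking up the relay address might be done as follows; the service name and relay port are what the Gloo Mesh 2.x charts typically use, not values read off the screen:

```shell
# Capture the relay (management) server's LoadBalancer hostname and port.
export RELAY_ADDRESS=$(kubectl get svc gloo-mesh-mgmt-server -n gloo-mesh --context management \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'):9900
echo "$RELAY_ADDRESS"
```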
B: In order to bootstrap that, though, we need to set up some secrets on the workload cluster that the agent can use to bootstrap the connection; it needs credentials so it can establish the connection correctly. So we copy them over from the management plane to the workload cluster, and we can confirm that those secrets were created successfully.
B: Now we can actually install the agent, in this case on the cloud cluster, and it will use these secrets and the relay address to establish that connection. This is just a simple helm install, and as you can see, we set, for example, the server address to the address of our load balancer. And so now that's installed.
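The agent install could be sketched like this; again, the repo URL, chart name, and value keys are assumptions drawn from the Gloo Mesh 2.x docs:

```shell
# Install the Gloo Mesh agent on the cloud cluster, pointing it at the relay server.
helm repo add gloo-mesh-agent https://storage.googleapis.com/gloo-mesh-enterprise/gloo-mesh-agent
helm install gloo-mesh-agent gloo-mesh-agent/gloo-mesh-agent \
  --namespace gloo-mesh --create-namespace \
  --kube-context cloud \
  --set cluster=cloud \
  --set relay.serverAddress="$RELAY_ADDRESS"
```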
B: We can check the pods in that namespace, and we should see that the agent is running and ready. So now the agent in the cloud cluster is talking to the management plane, and then we'll repeat the same process, creating the secrets and installing the agent, for the EKS Anywhere cluster.
B: Now that that's done, we'll check the pod; this one might be a little quick. Yeah, the container's creating, so we'll give that a second, and while that's finishing I'll copy over the next commands. We'll check one more time: okay, the agent is running on EKS Anywhere. The next step is the last setup step: we need to create what's called a workspace. In Gloo Mesh, the workspace concept defines tenancy rules.
B: Maybe a team shares a workspace and all the services owned by the team are in it, or maybe it's bigger than that and an organization shares a workspace. Either way, for our purposes we just want everything to be part of a single workspace, so the workspace being created is a very generic one: any cluster, and any namespace in that cluster, will be part of the same workspace.
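Such a catch-all workspace can be expressed with wildcards; the resource name and API group are assumptions consistent with the Gloo Mesh 2.x docs:

```shell
# One generic workspace: every cluster and every namespace belongs to it.
kubectl apply --context management -f - <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: Workspace
metadata:
  name: anything
  namespace: gloo-mesh
spec:
  workloadClusters:
  - name: '*'
    namespaces:
    - name: '*'
EOF
```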
B: If we come here, to localhost:8090, you'll see the Gloo Mesh UI; let me refresh it. On the right you can see the clusters: we have the cloud cluster and the EKS Anywhere cluster registered, and you can see some information such as the number of nodes and so on. You can also see that the Istio installations were auto-discovered. Then there are the workspaces on the left.
B: We only have the one workspace we just created, and that workspace has two clusters that are part of it (our two clusters), 14 namespaces between those two clusters, and four gateways. We could also drill down here if we wanted more detail, but this confirms that everything is wired up and we're good to go.
B: Now we can actually start configuring our application: first to get access to the application, and then to configure the network connectivity we want. The first thing we're going to do is establish a shared root of trust across both clusters.
B: This is a simple, self-signed example. What we're telling Gloo Mesh to do here is create a self-signed CA and distribute intermediate CAs to the workload clusters, so that the Istio workloads on each cluster can communicate over mutual TLS regardless of which cluster they're running on. In other words, when we call the ratings service running in the public cloud, we want that to be an east-west, service-to-service call that uses mutual TLS, and for that mutual TLS to work, the pods need to pick up the new certificates.
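In Gloo Mesh this is expressed as a root trust policy; the field names below follow the 2.x docs for the generated self-signed case and are assumptions about the exact manifest used:

```shell
# Ask Gloo Mesh to generate a self-signed root CA, issue intermediates to each
# workload cluster's Istio, and restart pods so they pick up the new certs.
kubectl apply --context management -f - <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: RootTrustPolicy
metadata:
  name: root-trust
  namespace: gloo-mesh
spec:
  config:
    autoRestartPods: true
    mgmtServerCa:
      generated: {}
EOF
```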
B: We'll check: you can see on the cloud cluster, for example, the Istio pods are up and running as of 20 seconds or so ago, and the ratings pod was just coming up from a restart. On the EKS Anywhere cluster, the Istio pods and the workload pods are restarting, so we'll run that one more time, and we can see that everything is now up and running and has restarted recently, so they should all have that new shared root. So now let's actually access our application.
B: To this point we haven't even accessed the application, and to do that we're going to create a virtual gateway. What the virtual gateway does, essentially, is tell Gloo Mesh to set up the Istio config that will let us access the product page application on the EKS Anywhere cluster. The end result is that I can now open up a new tab, get the EKS Anywhere kubeconfig working in this terminal real quick, and then port-forward to the ingress gateway on the EKS Anywhere cluster, and now I can access the product page.
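A sketch of the virtual gateway plus the port-forward; the selector labels, listener shape, and service name are assumptions based on typical Gloo Mesh 2.x and Istio defaults, and a RouteTable routing /productpage to the product page service (omitted here) is also needed:

```shell
# Expose HTTP traffic through the Istio ingress gateway on the eksa cluster.
kubectl apply --context management -f - <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: VirtualGateway
metadata:
  name: north-south-gw
  namespace: gloo-mesh
spec:
  workloads:
  - selector:
      labels:
        istio: ingressgateway
      cluster: eksa
  listeners:
  - port:
      number: 80
    http: {}
    allowedRouteTables:
    - host: '*'
EOF

# Then reach it locally through a port-forward.
kubectl port-forward -n istio-system svc/istio-ingressgateway 8080:80 --context eksa
```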
B: If I go to the browser and hit localhost:8080/productpage, you can see that I now have the Bookinfo application rendered out. You'll also notice that the ratings service is currently unavailable. That means on this request we hit one of the reviews versions that talks to ratings, and since there's no communication between the clusters right now, ratings cannot be accessed. If we refresh, in this case we get reviews v1, which doesn't talk to ratings.
B: But if we refresh again, we're back to the ratings service being unavailable. So basically the application is working; it's trying to talk to ratings, but ratings essentially doesn't exist at this point. Now we can hook up the cross-cluster communication, so that when the reviews services try to access the ratings service, the call happens over the mutual TLS connection and goes to our cloud cluster, where ratings is actually running.
B: To do that, we're going to set up a few resources. The most important one is the virtual destination, which, as the name implies, creates a virtual destination in our mesh that can be called from any cluster that's part of the mesh. When that virtual destination is accessed, Gloo Mesh sets up, under the covers, the routing rules necessary to get the request where it needs to go.
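The virtual destination can be sketched like this; the hostname, labels, and port follow the Bookinfo conventions and the Gloo Mesh 2.x VirtualDestination shape, and are assumptions rather than the exact manifest from the demo:

```shell
# A mesh-wide name for ratings: callable from any cluster in the workspace,
# resolved to whichever cluster actually runs the service.
kubectl apply --context management -f - <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: VirtualDestination
metadata:
  name: ratings
  namespace: gloo-mesh
spec:
  hosts:
  - ratings.global
  services:
  - labels:
      app: ratings
  ports:
  - number: 9080
    protocol: HTTP
EOF
```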
B: Now that the virtual destination is in place, we refresh this a few times and you can see immediately that we now have black stars, meaning the ratings service is working. If we refresh a few more times, we can see we're getting all of the versions of reviews, and all the versions that talk to ratings are successfully getting stars back, because that communication now works. Here's one of the other interesting things we can do.
B: Let me just refresh this a few more times. We can go back to our UI and actually see this traffic happening, for example on our service graph. You can see here that we have the EKS Anywhere cluster with the majority of the applications running on it, and in this case it's showing that reviews v2 is talking to ratings in the cloud cluster. Let me refresh a few more times so we can see all the versions, and then let me go ahead and...
B: Okay, so we now have a service graph that shows basically what we wanted to set up in our architecture diagram. We can inspect the traffic flowing through, and we're getting metrics that help drive the service graph. It shows requests coming through to the product page, with details v1, reviews v1, reviews v2, and reviews v3 running on the EKS Anywhere cluster.
B: Reviews v2 and reviews v3 are calling the ratings service running in the cloud. So with just a few configuration files, we've been able to establish connectivity in a unified EKS environment: EKS Anywhere running on our laptop, communicating with an EKS cluster running in the public cloud. That's it for the demo. Thank you.
A: All right, Lawrence, thanks for that awesome demo. Hopefully it gives everyone a better idea of how powerful Gloo Mesh can be for facilitating that cross-cluster communication, and this doesn't just have to be for public cloud and on-premises like our use case; it could be any Kubernetes clusters anywhere. So now that we've gone over what we wanted to talk about today, if you have any questions, feel free to add them to the chat and we can answer them right now.
A: Well, thank you for attending the talk today. If you want to learn more, we have a GitHub link here for the demo that Lawrence showed: it's under lgadban, which is his GitHub name, eksa-gloo-mesh. Feel free to check it out and try the demo yourself. This will also be included in the slides at the end, so if you want to grab it off the slides, you're more than welcome. Thanks, everyone, for attending and joining the talk today.