From YouTube: Multi-Zone Service Mesh - Kong Builders Livestream
Description
This week: How to create a globally connected #servicemesh
One of the things we focus on a lot in #Kuma/#KongMesh is the ability to effortlessly create #multicloud/datacenter mesh deployments. In this stream, we’ll start exploring the multi-zone service mesh and how we can create a mesh that spans the globe.
#KongBuilders #Kong
Hey everyone, as usual, my name is Cody De Arkland. I work for Kong, and this is Kong Builders, the stream we do every other week where we dive in and build things with the Kong products.
This is a special show, because it's my last Kong Builders before I turn it over to our developer relations team to ride off into the distance with. It's going to be a fun one, though, because we are jam-packed with things on this one (that sounds super markety). We're going to dive in and play with something I've promised for the past couple of sessions. We've never gotten to it, but today we're going to get to it.
Today we're going to play with some Layer 7 routing features, we're going to finish up our observability work from the previous stream, and then we're going to jump into building a multi-zone mesh. It's going to be interesting: we're going to build up the single zone, then blow it away completely, and then stand up multi-zone from the ground up. So let's jump in and have some fun.
So, fair warning! As always, a couple of housekeeping items. If you see me look over here, it's because I'm looking at the chat to see if anybody has asked any questions or if there's anything coming through; I keep an eye out for those. We love interaction, so please ask questions, jump in, have some dialogue. I'd love to go back and forth on that stuff.
A
It's
a
you
see
a
bunch
of
mess
behind
me,
I'm
moving,
so
I'm
buying
a
new
house
and
packing
so
apologize
for
the
mess,
but
as
part
of
that,
I'm
having
a
washer
and
dryer
delivered.
So
at
a
brief
moment,
we're
going
to
take
an
intermission,
so
I
can
open
the
door
for
them.
I'll.
Take
me
30
seconds.
Nobody
back
to
it.
So
sorry
tried
to
avoid
it
and
of
course
they
called
right
in
the
middle
of
the
stream
on
a
12-hour
window
so
funny
how
that
works.
Other than that, that's all the housekeeping I've got. We're going to go ahead and dive in.
All right, we should be going now. Let's go into this window. We'll start off in my on-premises Kubernetes cluster and make sure that everything got cleaned out of it successfully.
A
So
I
try
to
go
back
when
we
do
these
sessions
and
cover
briefly
the
things
we
did
in
the
previous
weeks.
We're
gonna
do
this
one
a
little
bit
different,
because
we
have
such
a
ton
of
stuff
to
cover.
There's
a
few
things
we
haven't
hit
along
the
way
that
I
really
wanted
us
to
go
through
and
those
were
our
layer,
seven
routing
features
and
that's
how
we
allow
service
mesh
to
influence
where
services
route
to
in
a
policy
way.
So I can have API calls point at the service mesh and have it divert traffic and control where it goes. Layer 7 represents HTTP traffic, so the mesh is looking at the context of that request and deciding where it goes based on, say, a path match or something like that. We're going to implement that for our application, and we're going to finish up some observability work. Last week we got our dashboarding all set up and deployed, but we weren't able to see any metrics.
A
I
figured
out
why
I
was
running
on
an
older
version
of
kuma
and
I
didn't
upgrade
it
appropriately.
So
we
it
just
wasn't.
There
was
a
mismatch
there.
So
I
got
that
fixed
and
funny
enough.
I'm
running
an
old
version
now,
because
we
just
had
another
release
come
out,
we've
released
them
so
fast.
It's
sometimes
hard
to
keep
up,
but
I
got
it
working
so
we're
going
to
deploy
that
verify
that
it's
all
working,
we're
then
going
to
tear
it
all
down
and
we'll
get
our
multi
zone
going.
While that's coming up, we're also going to run kumactl install gateway. This lets us deploy Kong's ingress gateway. It's based on Kong Gateway, so we have all those plugins available, and it gives us our front door into the environment. The service mesh, by contrast, acts as the service-to-service connectivity "thing", right. I say "thing" in that funny way because the service mesh does a ton of stuff for service-to-service connectivity. It gives you zero-trust security, gives you traffic control to manage the way services connect to each other, gives you the observability stuff we're going to cover here, and gives you healing policies like rate limiting. So there's a ton we do for service-to-service connectivity when you think of service mesh.
No route matches those values. We've covered this in previous videos: I've created the ingress gateway, but I haven't yet created the ingress resource that tells the gateway how to reach what we want it to reach. I could point that ingress at the Kubernetes service directly, but that would just connect to the Kubernetes service; it wouldn't connect through the mesh. Since we deployed this gateway using Kuma, it's part of the mesh and it has a sidecar. We want to divert traffic through the mesh so we can actually use all that stuff.
This is a Service definition for Kubernetes. We're creating a Service, naming it front, in the kong namespace. It's going to be of type ExternalName, and it's going to point at the .mesh address. That .mesh address tells Kubernetes to send this traffic through the service mesh.
Port 80. So that is created. If we go back in here and refresh, our app is up. (I need to fix that; it's way too big of a logo. We need it at about fifty percent of the size.) But we can see none of our stuff is connecting. There's an easy way to show this: we'll go into the pod and I'll show you the nginx config. This is a React website, and it's hosted inside of nginx.
Now, the app itself is still calling these addresses, as part of the way it was written, so that app is actually making API calls to these addresses. But since they're commented out in the nginx config, it's failing. If we go back into the app itself...
So we've got a TrafficRoute policy, and we're naming it app-route. We do a source-based match that says: look at the source of the traffic. The source is always going to be the Kong proxy, which is the ingress gateway, so we're coming outbound from that ingress gateway, and we're telling it that it can go anywhere it wants: the destination can be any service inside of Kuma.
There's our Layer 7 route. Let's see if we can very quickly switch back into the UI; in a few moments this should start talking. Fantastic, everything's alive now. What's happening is the mesh is handling those connections. It's saying: hey, I'm going to resolve this /api/users path and send it to the user service.
I could start to change the way the app itself resolves. In the app I'm telling it to talk to the user service, but I can use the mesh to change that and have it say: hey, if you reach out to this address, actually send it to this service instead. So we can use this to influence the way we roll out new versions of services, to run debug testing against a new service, or to do canary-style traffic testing.
Grafana is going to give us that dashboard. As I mentioned before, Kuma is part of the CNCF; we donated it to the CNCF, and we like to interface and interact with other CNCF projects. So we have integrations for things like Prometheus and Grafana, very popular projects inside the CNCF space.
So we're going to use that Grafana dashboard to view some of these metrics for the environment. What's really cool about the way we've done Grafana is this: a lot of other platforms give you some sort of Grafana experience integrated into the platform, but what we've done is create a bunch of dashboards for you, useful things that we suspect you're going to want to know more about inside of the environment.
Just checking my time for the old washer and dryer delivery. Here we go: our metrics have nothing in there yet.
Policies. This policy does a few things. I'm going to turn on mTLS, so communication between all services is encrypted from the ground up, using an integrated certificate authority. And then we're going to turn on the metrics backend and tell it to start sending metrics out to Prometheus.
Now, what's really cool about this: I think I kept my port-forward open to the UI, and we can see this is now turned on. We get mTLS turned on, and we have a metrics backend set up. You can see this has started to change a little bit; we do a refresh on this, and now we're getting information out. If I come in here, I can see all of my services in the environment, so I can come down to that kong-proxy and point it at my front end.
If you click on it, we can start to see (very tiny text on the screen, I apologize for that) that the p99 latency is 219 ms, which is kind of bad; p95 is 91; p50 is 4. So we're seeing the actual latency of the service communication, and likewise we can see the volume of requests coming through. I fired a bunch of those off, so it's 0.76 requests per second; that's pretty slow, all things considered.
We see status codes coming through; those aren't coming through at this point, but you can see how fast we were able to get actual, relevant metrics back out of this platform. Go back to our search, hit our data planes, and we should see the dashboard.
Yep, so we're seeing data plane metrics; it's live now. We're seeing statistics around how the data plane is performing and what requests it's servicing. We get a lot of visibility into what's happening in the environment. This is where we failed last time: I was using an old version, so it wasn't grabbing the right data out. But you can see that with a couple of quick commands deploying our application, we're able to start getting metrics out. These are going through that Envoy sidecar.
A
So
I
talk
about
this
idea
of
what
is
a
sidecar
a
lot
on
here,
and
I
want
to
make
sure
to
hammer
that
point
home.
That
sidecar
is
ultimately
hijacking
inbound
and
outbound
connections
to
a
service.
So
in
my
case
a
services
front
end
or
services,
users,
api
or
services.
You
know
posts
api
and
that
envoy
sidecar.
We
use
envoy
for
our
sidecar
process,
has
taken
over
the
inbound
and
outbound
connections
from
the
pod
and
it's
controlling
things
that
happen
with
it.
A
So
if
mtls
is
enabled
it's
setting
up
or
it's
having
that
cert
do
the
encryption
decryption
between
the
death
source
or
destination
service.
It's
handling
all
communication
stuff.
Likewise,
it's
handling
things
like
this.
The
the
metric
side
of
it
and
grabbing
having
prometheus,
go
out
and
scrape
and
sending
measures
to
the
right
place.
You can see the request coming in for the user service. So anyway, point being, we're starting to see metrics across the platform as we interact with it. Now let's blow it all away and start from the ground up again. I'm going to come in here and run a very big command that I use to go and destroy everything in the environment.
While that's finishing up, let's start talking about what it looks like when we do a multi-zone deployment of Kuma. Up until now, all of our demos have been focused on a single-site Kuma instance, which is very akin to: I get a Kubernetes cluster and I install Kuma on it. But we believe that most people operating a service mesh want global connectivity. They want a single place where they interact with policy, and they build this service mesh across the globe.
To that end, we've made the process of getting multi-zone up and running pretty straightforward. We've really worked hard to make sure this process is very adaptable for people. To me, the default operating procedure for Kuma should be global: we should be thinking about how this is going to interact with other Kubernetes clusters, other virtual machine environments, other container workloads, and how we're joining all of this together. Multi-zone.
So we're still cleaning up; looking good. In a multi-zone deployment we have a few components to think through. We have the global control plane, which acts as the place where I, as an administrator, interact with the platform. That global control plane pushes its configuration down to zone control planes, and those zone control planes live in each of the data centers, clusters, or environments that you want to be part of the service mesh. So out of the gate, we need to set up our global control plane.
In my environment, I'm running this on my NAS: I have a Synology NAS that hangs out in my environment and is pretty much the workhorse of my lab. In this case I'm running the control plane there as a standalone Kuma binary, so I'm running in universal mode. Universal mode is the non-Kubernetes mode, so the control plane is going to be a universal control plane, and the Kubernetes environments are going to join it. Because this is a very simple deployment, there's not a lot that goes into the config file.
It's going to be word soup, but the distilled version of what's happening in here is: I'm turning on global mode and I'm telling it to work with Postgres. I have a Postgres database that's going to handle the persistence of data. In order to do that, we need to create a database in my Postgres environment, so I've got a Postgres instance set up and I'm going to run something simple like CREATE DATABASE.
Once that command is run, as part of getting everything up and running we need to do what's called a database migration: we need to set up a schema and get everything prepared that way. We have the command kuma-cp migrate up, and we point it at the config file so it knows what database to use. (I've got a lot of stray data in my config that I need to fix, but that's been migrated now.) So here's what will happen; give me one moment here while I open my garage for the people delivering my stuff.
There we go, the joys of home automation. If you haven't wired up your garage door to your home automation, you're letting the best things in life pass you by. We're going to start that service unit that starts Kuma, so we'll run sudo systemctl start kuma, plug in my super secret password, and that is running.
Kuma Enterprise, running on universal, multi-zone. This zone is not handling user traffic at all; this is all control plane. Data plane traffic is user traffic, so things moving between different environments are data plane connectivity. This is control plane only, so there are no data planes here; the data planes are going to receive their configurations from the zone control planes. This is the global.
There are a couple of things happening inside this configuration file, which is the Helm values file; what we'll do is come in like this instead. There we go. These comments are from a previous configuration where I was playing with other stuff, so ignore them; they don't apply. Control plane mode: zone, so we're telling it: hey, you're going to act as a zone instead of a global. We're going to name it vk8s, because it's on vSphere and it's Kubernetes. Then we're going to give it a global address; that kdsGlobalAddress is the Kuma Discovery Service.
This is how it understands how to connect to the global control plane, and we're going to point it over gRPC at my NAS on the right port. We're also going to create an ingress; this is an important piece of language. This zone ingress is how services communicate with this environment in a global scenario. So when I bring up an EKS environment in a few moments, those services are going to talk to this one over that ingress. I'm giving it a load balancer on port 1001. This is important because I actually NAT the inbound connection for that to port 1001, so outbound traffic comes in there; that's what these annotations are for, though I'm not actually using them right now. The annotations above that tell Kuma: hey, add this to your SAN cert when you do TLS. So we're going to do helm install with the namespace... oops, that would have failed.
Let's make sure we've cleared everything out. We would have had a bad mistake here, with the old app still hanging out.
I see Best Buy is delivering my stuff. We'll run the install... actually, first:
I just saw a question come through in the chat; sorry for the juggling, I just want to check real quick. The question is what we are trying to deploy and the related use case right now. What we are doing is setting up a multi-zone Kuma deployment. Think of the use case where you've got multiple Kubernetes clusters living in different environments, and you want them to operate as a single service mesh.
So when you apply policy, it applies across that whole environment, giving you a single control plane for multiple environments; that's what we're setting up here. If you want to implement encryption across your entire set of workloads, you can do that universally in the service mesh. We're creating that multi-cloud, multi-runtime foundation where you can start deploying workloads across environments and they're all going to connect and talk. So hopefully that explains where we're at and what we're doing.
Let's switch to our other environment and do the same thing for Amazon EKS. We'll look at our remote values; zoomed in, it looks very similar: zone eks, and the global address is the external IP for my lab environment at home. We're setting it up and giving it an ingress, the path inbound. So: helm install. Actually...
I'm going to pause the stream for just one moment while I make sure that my dryer is being installed. Okay, we'll give this a couple of seconds to stabilize; one moment, intermission, sorry about that.
Before, we would have had to go to each of these and build out encryption ourselves: drop certs and configure them on a per-application basis, and there's a lot of hand work in doing that. In a service mesh, I apply a policy once and the communication between those services is automatically encrypted. So the use case there is: hey, I want to encrypt communication between services as a foundational item, and now I've taken away the time it takes to go out individually and do this across every single service.
I'm doing this as a policy across the environment. Maybe you want to control the way you roll out new services: you want to bring up a new service, send five percent of traffic to it, validate that it's working, and then gradually move to 100 percent going to the new service. That's a traffic routing policy, so you can get really granular in the way you roll out new services.
Maybe you care about having good visibility; at the beginning of the stream we set up our observability. You want good visibility into metrics for the connections between services: that's an observability story, and that's a service mesh problem. So really it comes down to the use cases you're trying to solve. I don't really know the end state of the question, service mesh versus API mesh.
We have the concept of an API gateway that acts as the front door to an environment. When you think about how an API gateway interacts with a service mesh, that front door is ultimately going to drop you right into the service mesh, and you're going to be leveraging the service mesh's capabilities.
Once you get past that front door, it's like the difference between walking into your friend's apartment versus your friend's mansion. That mansion has a ton of rooms, a ton of places you can go, and a ton of different things you can do inside of it, whereas in that apartment you're probably landing in the living room, the kitchen's right there, maybe there are a couple of bedrooms. It's a much smaller use case being solved there, whereas those 10 or 15 different rooms all have different things inside of them. So the API gateway acts as that front door taking you into a place, but ultimately the service-to-service connectivity behind it is super important. Hope that helps. Let's check out our mesh and see how things are looking.
I'm going to head back into my NAS here and run kumactl apply -f mtls, so you can see that's turning on our cert and then giving us locality-aware load balancing. A service mesh in general solves a lot of the problems a physical load balancer solves, in software, just in a distributed way: every Envoy sidecar knows about the rest of the environment and can connect directly to the others.
A
We
kind
of
remove
that
need
for
a
load
balancer
in
front
of
it
locality,
aware
load
balancing
allows
us
to
kind
of
dictate
hey.
I
want
service
connectivity
to
land
at
this
location
are
at
this
location,
first
and
other
locations.
If
that
local
one's
not
working.
So
we
get
a
lot
more
flexibility
in
our
ability
to
connect
from
multiple
services
behind
a
load,
balancer
using
locality,
aware
load,
balancing
hit
refresh
enabled
and
our
certs
are
turned
on.
So
now
we
can
start
doing
routing
between
environments.
Now, I'm going to have to make a quick adjustment to our ingress, because my port-forwarding rule is set up backwards, so I'm going to pop into a different window real quick and set that up. There's our app; it's working similar to last time. I need to turn on our...
If I go back into our UI, we can refresh this and we'll see some proxies start to come up in this area over here on the right. Yep, things are coming online, looking great; all of these are online. We can see some of these are in EKS and some are in the on-premises Kubernetes environment, so that's good. If we go and look at our gateways, we can see I've got my gateway running in vSphere, so that's good as well.
A
We're
still
red
here,
because
we
haven't
set
up
those
layer,
7
traffic
policies,
yet
so
I'm
going
to
go
kuma
control,
y,
f
and
oops,
not
mtls
l7
policy.
Now
this
is
the
same
l7
policy
that
I
showed
you
guys
before
just
in
this
environment.
Instead,
if
we
go
back
in
everything
is
coming
up
and
we
can
see
the
post
service
lives
in
eks,
eks
eks
redis
is
in
core.
That's
a
mistake!
Actually,
that's
not
registering
correctly.
That's my fault, not the app's fault. What's happening here, though, is we're coming into our front end. I'm on my LAN at home, so this is a private IP; I'm hitting this, and it's connecting over the service mesh to those remote environments. The service mesh is handling the connectivity between environments.
All right, back in the game; sorry about that again, everyone. These things happen at the worst possible times, but fortunately we're getting pretty far along toward the end of this. Let's see: "Will the Helm charts and code be made publicly available, if anyone wants to give it a try in a similar setup or environment?" Yeah, most of these are available in my repo. So what I'll do real quick is...
A
I
will
send
these
to
into
our
kong
builders
channel
and
I
will
let
my
I
always.
I
can't
believe
I
forgot
to
do
this.
I
have
some
awesome
help
in
the
background.
That's
helping
me
kind
of
manage
the
stream
side
of
this
taran
with
our
developer
marketing
team
she's
amazing.
She
makes
me
look
so
much
better
when
I'm
doing
these
streams
than
I
actually
am
and
then
michael
heap
from
our
developer
relations
team,
he
is
a
phenomenal
devrel
leader.
Yep, all right, that is sent, so let's wrap up. We're cruising down toward the end. We've seen connectivity working in the environment, and we've got our connections going through. I think we were talking a little bit about what's happening across the communication right now.
A
So
our
front
end
is
in
our
data
center
living
inside
my
garage
on
a
on
a
vsphere
server.
That
connection
is
going
out
to
eks
and
it's
actually
hitting
the
zone
ingress
for
eks,
because
we
have
mtls
turn
on
it's
sending
something
called
a
spiffy
identifier
that
spiffy
identifier
is
telling
telling
that
zone
ingress
where
this
traffic
needs
to
go.
So
it's
handing
that
traffic
off
to
the
right
destination.
A
So,
in
a
few
kind
of
just
a
few
commands,
we
were
able
to
get
this
up
and
running
across
an
environment.
We've
got
six
data
planes
up
and
running.
We
can
see
a
lot
of
great
information
here.
If
I
go
into
like
the
traffic
routing
section,
I
can
see
that
route
policy
that
I
applied
see
the
yama
output
of
it.
So
again,
api
posts
is
going
to
the
post
service.
What's really interesting is when you get in and do something like this, we can start to see where stuff lives in the environment. I can see the front end is in vk8s, but if I go down here and look at something like the Redis one (I just saw it; there it is as we scroll through), we can see EKS. Now, because this is my last stream and we like to experiment and have fun...
Let's show off the locality-aware load balancing. Right now, our front end is reaching out to our posts service; it's going through that ingress and landing in EKS. What happens when we deploy that same posts service in our local data center?
What happens when you get crafty with your scroll window? Sometimes things don't play out the way you want them to. There we go, perfect. I'll take this plain-text one, come up and find the users service (there's that, as well as the actual user service definition), grab these, and we'll go into multi-site s1.
In a few moments this should flip... there we go. If you saw that, it very quickly flipped over to vk8s instead, because it's preferring the local connection, falling back only if that were to go down. So, for example, let's get crafty here; we're going to go like this.
There it is; it takes a few moments for Envoy to report back that it's gone and for the mesh to reconcile against the environment, but you can see how we can create some pretty powerful redundancy between environments.
So with that being said, we're at the end of our show. We've deployed our multi-zone environment, we've shown how to control and change routing between environments, we've gone through observability again, and we did our Layer 7 traffic routing. This was definitely the most packed show out of all the ones we've done, and we covered a bunch of stuff. So that is what I've got for you on this great Friday.
I appreciate you all letting me take you through all the cool stuff about Kuma and Kong Mesh. I look forward to watching Michael and Viktor from the DevRel team take this on into the distance and seeing where they go with it. Thanks for hanging out with me. I'm always on Twitter at @CodyDeArkland, and you can always reach out to me there or on LinkedIn if you want to have some good conversation or if you have any questions. Have a great one, have a great weekend, and stay safe out there.