From YouTube: Observability in Service Mesh - Kong Builders Livestream
Description
In this episode, we will be introducing #observability in #Kuma and #KongMesh.
We’ve got our application up from our last Kong Builders session; now let's see how we bring #Grafana, #Prometheus and #Jaeger into the mix!
▬▬▬▬▬▬ CONTACT CODY ▬▬▬▬▬▬
🐦 https://twitter.com/codydearland
▬▬▬▬▬▬ ADDITIONAL RESOURCES ▬▬▬▬▬▬
Blog Post: https://konghq.com/blog/service-mesh-observability
Try Kuma: https://kuma.io/
Try Kong Mesh: https://bit.ly/2TdwYTp
Hello everyone. As usual, my name is Cody De Arkland, I work for Kong, and I focus on service mesh. You're hanging out with us on Kong Builders. This is, I want to say, the third or fourth stream that we're doing for Kong Builders, and it's been a pretty fun series so far: getting into service mesh and showing some of the cool stuff we build in Kuma and Kong Mesh. Just a couple of housekeeping items: I have multiple monitors going on.

So when you see me look over here, I'm looking at the Q&A stuff and trying to keep track of people asking questions along the way. This is streaming on YouTube, it's streaming on LinkedIn, and I think it's also streaming on Twitter as well, so we're trying to make sure we're coming to the mediums that matter to you.

If there are other places you'd like to see the stream going, feel free to drop that in the chat. As usual, I have some people in the background helping me out. Taryn's helping me manage the stream, so as always, thank you to her for helping me land this as well as possible.
So today we're going to jump in pretty quickly. We're going to resolve some problems we had last time with traffic permissions. We were trying to do some traffic permission stuff in the application, and I failed at getting it working right, but I figured out what was wrong. Sometimes it's harder to do this stuff live and keep your thought process straight.

So we're going to jump in there, play with that, and show traffic permissions working correctly, and then we're going to move on to observability. We're going to show how Kuma and Kong Mesh have an entire suite of observability tooling built in, and how we can start to use it to see how traffic is performing inside the environment. It's going to be a fun one. As usual, we like this to be as interactive as possible.
I try to keep Twitter up on the side and answer things as they come through. Taryn's also managing Q&A, so she will pop questions up on my screen if they come up. I missed a few last time, it turns out, and I don't want to do that again. So please ask questions about anything, even if it's not on the topic we're covering; it's good for us to have some back-and-forth dialogue about what's happening in this environment.

As a side note, if anybody ever wants to participate in these, anyone who's been watching and wants to jump in and talk live with me, I'm certainly open to that for future streams. So if anybody has any desire to be a part of this, I'd be super open to it. But let's jump in and start playing, unless there are any immediate questions. I've been keeping an eye on the chat and haven't seen anything come through yet; just double-checking. Yeah.

So if there are any out-of-the-gate questions, things that stand out in your mind, or things you'd really like to see in this, drop those in so I can put a bookmark on them and we can jump into them. With that being said, I am going to go ahead and start sharing my screen.
All right, so a couple of things setting the stage. I'm in the app directory that we've been playing with for various parts of this demo, and I've got an application deployed out inside my environment.

If we do a cluster-info for the Kubernetes cluster we're working against, you'll see that it's a cluster in AWS; we're using Amazon EKS for this environment. Right now we're running in a single-mesh, single-zone configuration. We've talked about this a lot over the past few streams, but just to quickly highlight: in Kuma we have the concept of a standalone mesh, which is just "I have a Kubernetes cluster or a virtual machine environment, and I just want to run service mesh on it." We also have global deployments, which are multi-zone.

A zone is kind of a logical segmentation of workloads. A zone might be a Kubernetes cluster, a zone might be a grouping of virtual machines, a zone might be another Kubernetes cluster, and all of those things together would be three individual zones inside of a mesh.
We are going to cover multi-zone deployments in this series; we're not there yet. I wanted to get through the foundational stuff before we start jumping into spanning across environments, but multi-zone is something we really focus heavily on in Kuma and Kong Mesh, so I want to make sure we hit it eventually. It's just not right now. So let's go take a look at what's running in our environment.

I have a bunch of aliases set up for kubectl, so `kgp` is `kubectl get pods`, and I add `-A` to show me everything in the environment. Right now we're just going to run through and review the stuff we have going inside this environment. Our application's up and running; it's been up for about 13 days, and that tracks pretty close to the last time we did this stream, which I think was about 14 days ago.
If we look, we have our Kuma control plane up and running, so Kuma is here and going. We have our ingress controller, which is how we're getting into our application. It's got three containers running as part of that deployment. By default, the Kong ingress controller has two, the Kong proxy and the actual controller component, and the third one in this case is the sidecar, because we're deploying it as part of the mesh.

So it has a sidecar. Just a quick terminology hit: "sidecar" is analogous to a data plane proxy. In service mesh we have the control plane and the data plane. The control plane is where administrative interaction takes place: as we fire off API calls to change parts of the mesh and apply policy, those hit the control plane. The control plane then pushes that configuration down to the data planes, and the data planes are the sidecars.
Each of these tiers of our application has two containers running as part of it: one is the app, the other is the actual sidecar. So we can do kubectl logs against one as an example. There's nothing broken in here; keyboards are hard.

If I run logs against just that pod, I'm going to get not an error message but a bad request, because ultimately there are two containers: the frontend and the Kuma sidecar. If we were to look at the frontend container, we would end up seeing some nginx-style logs, because that's what's running behind the proxy.

Here we can see the sidecar logs coming through. Kuma is ultimately orchestrating the configuration of the sidecar, and our sidecar is Envoy. So what's happening is: when we make these policy changes, those policies hit the control plane, and the control plane sends them down to the sidecars over xDS, Envoy's configuration protocol. That's what we mean when we talk about service mesh operating in a decentralized way.

It's because those Envoy sidecars are aware of all of those policies: they receive that configuration and apply it to communication inside of their domain. That means if the control plane goes offline for some reason, each sidecar is still aware of the most recent policy push and is still able to function. So it operates, again, decentralized.
So, let's see: we'll do a get services and bring up our actual application. Here's our Kong proxy. If we go in and hit this, I'm going to grab that URL from the other window, swap over, and drop it in. Cool, there's our application. Everything's connected; things are good. Now let's start breaking stuff. I'm going to go back in, open up a new tab, and go ahead and port-forward...

...and bring this up now: localhost:5681 into the GUI. A good thing to highlight: Kuma 1.2.2 was recently released, and I haven't updated the Kong Builders environment to it. It's a minor release; there were definitely changes and new functionality added, but it's not something we need to do immediately for this stream. Let's take a look at what we have in the environment today.

Oh, we're actually not even using that; we're using just the default policy. This traffic routing policy says all traffic can route to all of its destinations. We added a new set... actually, I remember now: we were going to cover traffic routing policy, but we slammed into the brick wall of traffic permissions and that ate up most of the time. So we'll probably touch on traffic routing if we finish up observability pretty quickly.
So we have our default routing policy in place; that's all good. One thing we do have in place, but wrong, is these traffic permissions we had set up when we were having problems last time. For the sake of moving things forward, I reapplied the default allow-all policy, which says "everything can talk to everything." We also have this other policy here that was controlling some specific parts of the communication, and that was actually the fix.
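For reference, a default allow-all TrafficPermission on Kubernetes looks roughly like this. This is a sketch based on Kuma's 1.x policy format; the metadata name is an assumption and may differ in your environment:

```yaml
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: allow-all-default   # assumed name
spec:
  sources:
    - match:
        kuma.io/service: '*'
  destinations:
    - match:
        kuma.io/service: '*'
```

The `'*'` wildcard on the kuma.io/service tag is what lets every service reach every other service.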
Okay, so this is that gateway permission. We're going to delete this one. I can delete using the file I applied with; that's one way to do it.

Now, if I do kubectl get trafficpermissions, you can see all the traffic permissions applied in the environment. When we're using Kubernetes as the control plane, and we're using it as the main control plane for this environment, all of these configurations are stored as CRDs, so we can interact with them in a Kubernetes-native way. We can run kubectl against any of those CRDs and perform interactions.
So in this case I've gone in and used kubectl to delete out that traffic permission. I see you, Taylor. Taylor's an old friend, a great guy, and a really smart technologist in the infrastructure space. Thanks for joining, Taylor; it's good to see you, buddy. So I've got one policy left. If we go back into the application itself...

...now our application should freak out. Life is unhappy, children are crying in the streets, buildings are falling apart. Life is bad. We go in here, hit refresh, and we can see all permissions are gone. So let's start getting this application segmented a little bit. Right out of the gate, we know that we're coming in through a gateway, and the gateway acts as the front door to an environment. This isn't a Kong-specific thing.
A
When
you
look
at
api
gateways,
api
gateways
always
provide
a
front
door
to
an
environment,
and
then
service
mesh
is
about
optimizing
service
to
service
connectivity.
Obviously,
kong
has
a
has
a
stance
in
this
being
that
we're
a
gateway
company.
So
we
have
our
kong
ingress
controller,
acting
as
the
gateway,
so
that
api
gateway
is
a
full
featured
api
gateway
that
has
all
of
kong's
plug-ins
as
as
part
of
it,
so
we're
using
that
in
this
environment,
that's
where
the
traffic
comes
inbound
through.
A
A
Actually, what we'll do is reuse the previous one that we deleted; we should have just kept it in place. We'll go against traffic permissions, and for now we're going to delete out this group demo-app thing here. I'm just going to say the source is the Kong proxy. This matches the actual service, so we use the kuma.io/service tag with that service name to match against. We'll delete the group bit for now, because we're going to do something with it later.
Let's double-check our work to be sure. Going into our internal services, we can see frontend_kong_svc_80, so that looks good. If we go to our data plane proxies, we can see all of the different things we could filter off of; in this case frontend_kong_svc_80 is what we're using, you'll see.

I deleted out that whole demo-app thing because it was something we were trying to make work, and I've since gotten it working, so we'll come back to it in a few. But we can also key off of other metadata; in that case I was keying off of group: demo-app. Put a pin in that; we'll come back to it in a minute.

In frontend_kong_svc_80, "kong" is the namespace inside Kubernetes, "svc" is just a placeholder for service, and then 80 is the port. So, looking good. Our policy name is gateway-frontend, and everything looks good, so we'll apply the traffic permissions file.
We'll go and refresh, and life is good. Now, there's something interesting that happens here that you can't see. I could go in and pull up the Envoy cluster logs and show it to you, but there's a better way for me to show it in a few. What happens is: when we start to apply these traffic permissions, we're adding things to and taking things away from Envoy.

So in this case, when we add that traffic permission, we've added that policy, and what we've said is: "Hey, Envoy, for things that match these, make the sidecars aware of it." What ends up happening is the gateway gets a set of policies that say "you can talk to these clusters," and each cluster gets added to the available communication list.
We did this last time, so I'm not going to spend a ton of time stepping through all of it. In a few seconds this connectivity should go green. Actually, it won't go green, because it needs the Postgres permissions as well. But if I come in here and do a clusters view, we can see that it's starting to work now: the gateway is starting to become aware of other instances and other places it can talk to in the environment.

We're going to do one more thing, and we're going to watch the service start to flap: we'll see things go green and then red, then the other ones go green, bouncing back and forth, and we'll explain why that is in a moment. The user database permission is in, and now everything should come up in a second here and start to flap. So we can see things starting to flap back and forth a little bit.
What we need to do is craft our policies in a way that groups things a little bit better, because what's happening is these policies are starting to overwrite aspects of each other. In order for the policy to be evaluated correctly, we need to add them all into the same, not the same file necessarily, but we do need to be a little bit more logical in how we configure the things that match. So we're going to start going through and deleting some of the things we added.
Wrong one, sorry; you want to add it here. So, the frontend: we know architecturally that our frontend talks to both a posts API and a users API, so we want both of those destinations allowed by that rule. We're going to do a match on kuma.io/service: user-service, actually user-service_kong_svc_5000. The 5000 is totally unrelated to Kong; anybody who's watching and is a fan of Python and Flask will notice I'm using Flask for the API, port 5000.
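The frontend-to-APIs rule discussed above could be sketched like this. The exact kuma.io/service values for the two APIs are assumptions based on the names mentioned on stream:

```yaml
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: frontend-to-apis
spec:
  sources:
    - match:
        kuma.io/service: frontend_kong_svc_80
  destinations:
    # Both destinations are allowed by this one rule.
    - match:
        kuma.io/service: user-service_kong_svc_5000    # assumed tag
    - match:
        kuma.io/service: posts-service_kong_svc_5000   # assumed tag
```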
Port 5000: love me some Flask. So that permission looks good, and we're going to go ahead. Actually, real quick: since we deleted everything, we should be able to show that the user service just isn't getting anything right now. Yeah, we can see the user service is not online at all; it's not reachable; we're getting nothing back from the API call. But when I apply this file...

...finally, our app is up and running, all is good, and we've got a good foundation of connectivity. Thank you, Martin: in the chat you called out the service and I missed it in the comments, so I should have been watching closer. Sorry, but thank you for calling that out.
Everything's working well, and we have a good foundation of zero trust now. If we add another application to the environment, there's no default allow that says everything can talk to everything, so communication is not going to work. That's kind of what we want; we want our foundation of zero trust to be in place. But this is a little bit more of a... I don't know if I want to say it's a complex policy, necessarily; it's just that there are a lot of configuration files for it. There's a simpler way.

All of those workloads have that group: demo-app component on them. The gateway doesn't, though, because it was deployed via a different configuration file, so it doesn't have that label. I could use kubectl and patch a label in, patching it into the deployment and reloading the pods so they'd get that label added, but right now we're going to use a source of the service name, with that group name reaching out to that group name.
So we're going to make kind of a blanket policy that says: things that live in this demo-app configuration can talk to each other. Life is good, and it makes for a policy that's a little simpler to read and simpler to build against, and it gives you a way to logically separate an application.
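Because Kuma traffic permissions can match on any data plane tag, not just kuma.io/service, the blanket group policy could be sketched like this (assuming the workloads carry a group: demo-app tag, as described above):

```yaml
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: demo-app-group
spec:
  sources:
    - match:
        group: demo-app   # arbitrary tag shared by the demo app workloads
  destinations:
    - match:
        group: demo-app
```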
It might not work, because I can't remember whether that is case-sensitive or not... all right, so that's good. The gateway can now reach all of these things, as expected, so we're able to get in, but we have red across the board here. What we need to do is add a second source that says anything coming from the demo-app group can talk to that as well. So we'll make one more quick change.

And just like that, everything comes online and is live again. An interesting thing to call out: if we go in here and take a look at the certs, we can actually see that all of those groups are part of the cert chain now, and they're labeled here as part of the URI subject alternative names on the certificates that are applied.
In order to do traffic permissions, you have to have mTLS turned on, and once you turn on mTLS, all of this stuff starts to generate inside of Envoy. You can see protocol: http and group: demo-app; these are all parts of those traffic permissions we showed before, or part of the metadata that was available to pick from. So all of this is available as part of the traffic permissions API that we can work against.
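The mTLS prerequisite mentioned here is set on the Mesh resource itself. With the builtin CA it looks roughly like this, following the Kuma 1.x Mesh format (the backend name ca-1 is just a label):

```yaml
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabledBackend: ca-1
    backends:
      - name: ca-1
        type: builtin   # Kuma generates and rotates the certificates
```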
Kind of wonky. Let's go ahead and start playing now with getting our observability installed, up and running, and going. When I think about observability in a cloud-native space, the first thing I think of, right out of the gate, is Prometheus and Grafana. I think dashboarding; I think about good visualization. I'm a sucker for a good-looking dashboard, so I enjoy being able to hit that dashboard and see relevant data about my service or my environment.

We make that pretty easy inside of Kuma. I try not to use that word a lot; I think it's a bad thing to say on live streams that things are easy, because what's easy to you might not be easy to me, so I try to be a little bit more inclusive than that. But this one legitimately is pretty easy.
You might want to integrate with an external Prometheus instance if you're doing multi-zone, or if you have something built already; maybe you're using Thanos internally to have more of a global scale. All of that is valid and in play; it's just a matter of how you want to configure the Grafana instance inside. But if you do the out-of-the-box metrics install with kumactl (`kumactl install metrics`), you get a known-good working config.
But what will end up happening is that it won't work. If I do a kubectl get pods, all the Prometheus pods are coming up, so it's going to deploy, but when we connect to it we're not going to get anything out of it. Everything's up. So I'll do a kubectl port-forward against the Grafana service in the kuma-metrics namespace on 9999, because I'm easy like Sunday morning.

We have a bunch of dashboards built in. Again, these aren't going to work, or if they do work, they won't work reliably. If I go in here to service-to-service... there we go, that's what we were hoping to see: general unhappiness; things are not working as expected.
The reason for that is... first things first, we need to check to make sure the metrics pods aren't getting a sidecar. They very well might be, and if they are, for the sake of shortcutting we're going to just re-enable the default allow policy, so we don't have to go through and enable every single bit of permissions step by step. But what's also missing is our mesh configuration.

We haven't told the mesh that metrics are enabled inside this environment yet, so it doesn't know how to get where it needs to go to actually collect metrics.
We know that the data source is here; we're getting a data source out of the box. On the same test, we're getting a bad gateway. I think I have sidecar injection enabled everywhere, so we might have to just do a blanket allow.

We can see all of those systems are in here. You know what, let's actually do this the right way.

This is the fun of these more exploratory live streams, as we get into these interesting "how can we go off the beaten path" things.
Aha, it worked. The data source is working and our bad gateways are gone. Now let's configure the mesh to actually be able to send metrics out to that place. So I have... oops, I didn't mean to do that.

I have this mesh policy that we set up previously, where we turned on mTLS with our backends, and I've got a couple of commented-out sections here just so I don't have to type it all out manually. A metrics backend is pretty easy to enable; we can actually see this if we go to kuma.io.
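The metrics backend being enabled here is also part of the Mesh resource. A sketch following the Kuma 1.x docs (port 5670 and /metrics are Kuma's documented defaults for the sidecar's scraping endpoint):

```yaml
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabledBackend: ca-1
    backends:
      - name: ca-1
        type: builtin
  metrics:
    enabledBackend: prometheus-1
    backends:
      - name: prometheus-1
        type: prometheus
        conf:
          port: 5670       # port the Envoy sidecar exposes metrics on
          path: /metrics
```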
These are all the configurations, and we also have some of the annotations in place that are needed. Well, there are defaults we could override, but we don't need to; we have the default Prometheus scraping settings in place. All right, so life is good; we're going to go apply that. Actually, we need to correct it first, because it's commented out right now, so we'll pop into my favorite, VS Code.

Let's enable that real quick. Just to show (I always use the phrase "like a good magician, I like to show before things happen"): here's the default mesh, with metrics disabled.
Refresh once again, and metrics are turned on now. If we go back into our Grafana instance, we have all of our dashboards here for different things inside the environment. We can go to the control plane and look at the dashboarding for the control plane, so we're pulling out some information here.

xDS is sending a bunch of configs down, so those are all working. We're getting metrics for the environment, which is awesome, and we're seeing message exchanges from xDS, meaning that policies are being applied and synced down. Life is good.
The only service that we're actually getting metrics against is... oh yeah, we might need to temporarily turn on the allow-all traffic permission after all, for the scraping traffic to be handled correctly, now that I'm thinking about it.

It's nice that we can start to see traffic coming through, but we're only getting metrics off of the Prometheus server and not the other ones. Those aren't coming in at this point; they haven't started scraping yet for some reason. Let's jump back in.
So I redeployed it, and it's taking a second to come back online.

It's reinitializing a couple of the services that were cycled out by whatever I changed in there. The frontend got a new pod, the posts service is getting a new pod, and the user service also got a new pod; no changes to Redis, no changes to posts, no changes to the DB. At this point all the old ones are terminating.

This is because we did this for our Destination Automation event, so now I'm getting the other version of this pod. Hey, enjoy the advertisement for Destination Automation, which was in the past.
I think I know what it is; I think it just made this a lot more...

So it's not... oh, I think I know what's wrong, actually, because this is a different pod.

So, as just a little explanation of what happened here: in this demo pod, I used traffic routing policies to control how these services were reached. There we go; things are starting to come back now.

Those traffic routing policies aren't in place in this environment, because I was doing a different demo, and that's why it was failing there. Now we've replaced it with a version that doesn't use that traffic routing policy, and see, everything came back up. Life is much better.
Cool, life is good, fantastic. Hey Steven, thanks for joining, buddy; I see you in the chat. Steven's an old friend and a great colleague, an awesome automation guy. If you need someone to help you automate stuff, that's the guy; he taught me how to code back in the day. So thanks for joining and hanging out, Steve. All right, so life's good: connectivity has been restored to the environment. Let's go double-check and see what sort of dashboarding we have available now that we're getting all of our metrics in place.

That's super odd. I'm going to have to look into this one and see why it's not grabbing all of the correct source services, because it should be automatically grabbing all of those from the environment.
I'm not sure why that is, exactly, so we'll have more things to troubleshoot on the next Kong Builders: to figure out why, or at least come back to you with why that was happening, because that shouldn't be the case. So yeah, this is a good point for us to pause and start to wrap up with any questions that come up.

So next time we visit, we'll get back to playing with traffic routing policies, and we'll show how we can use them to control the flow of traffic. When I think of service mesh, my favorite functionality is that traffic routing capability. From the chat: "Are all these videos meant to be watched in order?" Yeah, they kind of are. I wouldn't say they're planned to be watched in order, but they definitely follow a progression.
We started with no mesh, and we're starting to build on top of it, so we are kind of building up. I do try to capture the main topics and the main concepts that would otherwise be missing along the way. So if something pops in and you're like, "Hey, Cody just said some stuff that's really confusing"...

...please ask the question; I like to go back and explain that stuff again. So yeah, it's meant to be watched in order, but you should feel like you can interact anyway, and there's never a bad question to ask. I think I've explained control plane versus data plane on every one of these streams so far, because it's just worth doing. So if you have questions, feel free to ask them; don't feel like you have to go watch the other ones first.
Thanks, Dave, I appreciate it; appreciate the support there. It's never fun to have a live demo not work the way you want it to live, but it's also a good way to find gaps in the product, or ways it can be better, or ways for you to learn, to your point. So I'm excited to figure out what's going on here, because it's kind of a weird one.

It's odd that it's grabbing all of the destination services, but it's only grabbing the Prometheus server as a source. Let me pop in one more time and hit refresh on the whole thing. Yeah. So, as we start to wrap up: good timing on the message there, Taryn.
For the Kong Builders stream: we do this every two weeks, so in two weeks we'll be back again. The goal for two weeks from now is to do traffic routing, to show how this issue was fixed, and maybe to start exploring that multi-zone configuration. The webpage at konghq.com/kong-builders has all of the streams, as well as the future ones that are planned.

There are times when I step out and I'm not available, so we have somebody else jump in and do, say, a gateway one. We had Victor from our developer relations team step in, and he did a really cool one on the Kong Ingress Controller and Konnect. So definitely keep an eye on these; they're not always mesh-related. Most of them are right now, but they won't always be. I'm not seeing any questions pop up.

So I'm going to assume that we are all good, and we'll target wrapping up. Awesome, cool. Hey everyone, thank you for stopping by; this was fun. We got through traffic permissions, so yay for that. We got the metrics deployed, but we ran into kind of a weird issue that we're going to unpack, so we'll look into it next time and see if we can figure out why we're not getting everything we want out of this.