From YouTube: Kong Demo: Protecting Microservices with Servicemesh
Description
In this interactive demo, we will show how to encrypt and protect all services inside a service mesh using the Kuma Mutual TLS policy. We will then demonstrate how to control traffic permissions among each individual service using the TrafficPermission policy. In addition to security, Kuma provides traffic metrics using Prometheus and Grafana dashboards, as well as traffic tracing (APM) and traffic logging integrated into managed cloud logging and analytics services. In this example, we will be using Stackdriver on GCP.
A: Yeah, so we're going to be talking about service mesh today: how to protect your services using mTLS, and a little bit of observability, using the service mesh pattern with Kong's Kuma product.
A: Let me just go to my next slide here and give a little overview of what we're going to cover: using the mTLS policy to encrypt and secure a service, and controlling traffic using our TrafficPermission policy. Once the policy is in place and your traffic is secure, I want to be able to control, at a granular level, which services inside the mesh are allowed to talk to each other. And then finally, some observability and tracing, some APM (application performance monitoring) inside of GCP, integrating Kuma with Stackdriver. Okay.
A: I have Kubernetes in GCP running Kuma, and I wanted to show you a little bit of the Kuma UI here, just to show you what's cool. There we go, let's get the dashboard. All right, so what we have here is the Kuma dashboard, which shows you the overall configuration of the mesh, and then all of the data planes inside the mesh.
A: So here are all of our different applications. There are some Spring Boot applications in here, there are some Python applications, and they're all talking via Envoy sidecars. The service mesh pattern dictates a sidecar for every application inside the mesh, as indicated here.
A: If I go to my namespace, you'll see that we have all of my applications. We're using Kong as an ingress gateway into Kubernetes, where requests get stamped, using some Kong plugins, with a correlation ID, which allows us to do tracing. And this is the web application here, which talks to a RESTful API, which in turn talks to a database. Okay, so the first thing we want to look at is mTLS.
A: So what's the use case? We have these applications that are talking to a database, and we want to essentially lock that down, secure it, so that anyone within the organization cannot talk to that database directly using, say, SQL commands and things like that. So that's what I'm going to demonstrate here. If I go in here, you'll notice...
A: And if I just type in my password, I can then run SELECT * FROM crew, right? So I have access to this database. So if I'm a developer, I can use that to select all my data.
A: So what I'm going to do with the service mesh is configure mTLS, so you can disable that communication without the applications, or the people using them, even knowing about it. It allows a DevOps person to configure policy outside of the application.
A: So if I enable my mTLS certificate on the mesh, you'll notice, if I go back to my UI here under Meshes, that we now have mTLS turned on. Kuma manages your certificates, so the Envoy sidecars can talk to each other, and only certain ones are allowed.
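For reference, enabling mTLS is done by editing the Mesh resource itself. A minimal sketch, assuming the default mesh and a Kuma-managed (builtin) CA; the backend name is made up for illustration, and the exact schema has varied across Kuma versions:

```yaml
# Sketch: turn on mTLS for the "default" mesh with a builtin CA
# that Kuma generates and manages itself. The backend name "ca-1"
# is an arbitrary label chosen for this example.
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabledBackend: ca-1
    backends:
      - name: ca-1
        type: builtin
```

Applying a manifest like this with kubectl is what flips the mTLS indicator shown in the UI.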
A: So now, when I try to log in to my database, you'll notice I cannot, because this particular container is outside the mesh and does not have an SSL certificate that is allowed to talk, so it essentially just gets locked out.
A: And then you'll notice, if I type the password in here, I am back to being able to query that database. Okay. And I'd like to say: if there are any questions that you guys may have, please feel free to interrupt me as I go through this, and I can pivot and so on. So, does anyone have any questions about what I've just done?
A: Cool, all right. So the next thing we wanted to show is that we have fine-grained control over this too. So if I go to... this is the report. Oops.
A: There we go, all right. So this is the Spring Boot application that is calling many microservices underneath to produce this report, which is a ship's manifest for Star Trek. So we have our crew, our cargo, and our captain's log.
Okay, now say I wanted to restrict that, maybe for a data entitlement use case, right? I want to control each individual microservice, granularly controlling which one can talk to which, so I can...
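For reference, the granular control described here is expressed with a TrafficPermission policy. A minimal sketch; the service tag values are hypothetical, and the exact tag key (`kuma.io/service` versus a plain `service` tag) depends on the Kuma version:

```yaml
# Sketch: allow only the report (web) service to call the crew
# microservice; once mTLS is enabled, any source not matched by a
# permission is denied. Service names here are hypothetical.
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: report-to-crew
spec:
  sources:
    - match:
        kuma.io/service: report_default_svc_8080
  destinations:
    - match:
        kuma.io/service: crew_default_svc_8080
```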
A: Wow, all right, having a little bit of an issue here, so let's move on. What I want to move on to is observability and tracing. With Kuma, or any service mesh, you can integrate into logging and tracing. So we want to do some logging of metrics, and some APM tracing. Okay.
A: So as I click through this interface and make these calls, there is a Fluentd process running in the background that is capturing the logging into Stackdriver, and tracing via Zipkin. And what you'll see, when I go to the Google Cloud Platform Trace view, is that I can click on any of these and see a call. So this is the overarching call to /manifest, and then each individual microservice call underneath: this call took 25 milliseconds.
A: This call to cargo took 33 milliseconds, and this call to crew took 150 milliseconds.
A: So this enables DevOps engineers or application developers to really analyze a distributed trace. And then another piece of the puzzle here is that we're using Kong at the ingress level with a correlation-id plugin, which stamps each individual request coming in with a unique identifier.
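For reference, with the Kong Ingress Controller this is typically declared as a KongPlugin resource using the correlation-id plugin. A sketch, with the header name chosen for illustration:

```yaml
# Sketch: stamp every request passing through Kong with a UUID in
# the Kong-Request-ID header, and echo it back to the client so the
# same ID can be searched for in logs and traces downstream.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: correlation-id
plugin: correlation-id
config:
  header_name: Kong-Request-ID
  generator: uuid
  echo_downstream: true
```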
A: So we can now correlate the trace with the logging. If you have application-level logs, anything an application writes for troubleshooting, I can look at standard out, add that to the query, and then search the text payload for that particular UUID.
A: And if I run that query, I can then filter down: look, I see that that particular call corresponds to four different requests inside, plus whatever application-level JSON you log. It gives developers or DevOps personnel indications when you're debugging problems within an application: latency information, application-level errors, and so on. So logging combined with tracing is an invaluable tool for this type of thing.
A: So here I'd like to pause for any questions on this; this is where a little discussion could happen.
B: Hi Devon, I have a question in regards to... you mentioned that Fluentd would push these tracing logs to Stackdriver using Zipkin, right?

A: Yes.

B: I just wanted to have a look at how that's configured, just a quick overview. Cheers, thanks.
A: Yeah, so the nice thing about GCP is that it comes with Fluentd already loaded. Let me go to my namespaces.
A: If I go into kube-system, what GCP has done is they've given us fluentd-gcp containers. When you launch that container, it's pre-configured to ingest all of the standard out from every container. So as you launch containers, that standard out automatically gets ingested by the Google Fluentd process. And then in Kuma, what you do is you apply a policy.
A: A traffic trace policy. So you'll see here, I configured Kuma in Kubernetes to say: match any service, and then pass the traces to the Zipkin backend, which is configured on the control plane of Kuma.
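For reference, a "match any service" trace policy along those lines might look like this TrafficTrace sketch; the backend name is an assumption and must match a backend declared on the Mesh resource:

```yaml
# Sketch: select every data plane in the default mesh and send its
# spans to the tracing backend named "zipkin-backend" on the Mesh.
apiVersion: kuma.io/v1alpha1
kind: TrafficTrace
mesh: default
metadata:
  name: trace-all-traffic
spec:
  selectors:
    - match:
        kuma.io/service: '*'
  conf:
    backend: zipkin-backend
```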
So if I go to the control plane and view the meshes, you'll see the tracing backend: the default backend is my Zipkin, and here's where that's configured. So whenever we want to do a trace, we post to this URL right here.
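For reference, the backend side of that configuration lives on the Mesh resource; a sketch, with a hypothetical collector URL standing in for the actual endpoint shown in the demo:

```yaml
# Sketch: declare a Zipkin tracing backend on the mesh and make it
# the default, so TrafficTrace policies can reference it by name.
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  tracing:
    defaultBackend: zipkin-backend
    backends:
      - name: zipkin-backend
        type: zipkin
        conf:
          url: http://zipkin-collector:9411/api/v2/spans
```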
B: Yes, I'm just curious: you mentioned the Kuma control plane, right?

A: Yes; as we talked about, Kuma is your service mesh, and the service mesh also comes along with the control plane.
B: How different is Kuma from the Istio service mesh control plane? Obviously the Zipkin and the other critical components, like Galley...

A: Yeah, yeah. Under the hood, both Istio and Kuma are standard Envoy, right?
So I think the difference between the two, in my mind, is more of a simplicity thing: Kuma is turnkey. One of the differences, say, is the ability to support multi-tenancy in a single control plane.
A: Okay, so if I want to have multiple meshes inside of Kuma, to support different applications, I can do that within a single control plane, whereas I know in Istio you have to launch many control planes; it's like one unit each. So Kuma is less resource-intensive and, as a result, less complex.
A: That is one of the major differentiators in my mind between the two. Istio is a good tool, but for teams there's a big learning curve, right? With Kuma, a single person could really manage a mesh within an organization. Does that make sense?
B: Yeah, sure, fair enough. Thanks.

A: Yep. And we now have the ability to do multi-cluster, right? So you'll notice in this UI you see remote CPs.
A: So with Kuma, as of 0.6, we can have a single mesh that spans network boundaries. I can have services in GKE, and I could have a universal-mode mesh running on EC2 instances, and that could all be part of a single mesh. So a flat network topology is no longer necessary, and we accomplish that by having a global control plane and local, or remote, control planes.
A: So you can configure a mesh through the global control plane, and then your remote control planes talk to that global one; it's like a hub-and-spoke kind of architecture. And then, once all of the remote control planes are configured, the services within the mesh only know about their own local remote control plane; that communication is all handled for you, and everything is cached. So if the global control plane goes down, it doesn't really matter.
A: So it's a real nice piece of engineering that our team has put together.

B: Apologies, I may have missed it: is this specific demo available somewhere? The actual code and stuff to set it up?

A: Correct, yeah. I'm more than happy to share that. Currently it's in a GitHub repository, but I'd be more than willing to make it publicly available so you guys could try it out.

B: Excellent. Thank you.