From YouTube: Kong Mesh Overview | Kong Summit 2020
Description
Learn more: https://bit.ly/37fudp3
Kong Mesh is the universal service mesh for enterprise organizations focused on simplicity, security and scalability with Kuma and Envoy.
It's all that access and all those kinds of concerns, but once you're in the building, you have all these different rooms. To keep the analogy going, in this case we're talking about all of our different services: all these microservices and different things that have concerns that really don't get addressed once you're past the gateway.
So what we're going to be talking about is how we can leverage Kong Mesh to address a lot of this: governance for all the connectivity between these different microservices. Again, we're connecting to a GraphQL instance.
The first thing we want to do is level set. We have a basic microblog application that does a little bit of sentiment analysis. We can run it, get all of our blog posts, and see the different sentiment scores between happy and sad. It looks like my cat is sad.
What we do in Kong Mesh is use policies to enforce these things, and so what you're going to see here is a basic policy if I go to this mesh object. Just like we were talking about before with the Kong Ingress Controller, where we get to describe these things in a very native Kubernetes way, the same is true with Kong Mesh: we can describe a basic mesh object, and if you're familiar with Kubernetes, this should feel very much at home.
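For reference, a Kubernetes-native Mesh object that enables mutual TLS looks roughly like this (a minimal sketch in the Kuma/Kong Mesh policy format; the `ca-1` backend name matches the cert that shows up later in the demo):

```yaml
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabledBackend: ca-1
    backends:
      - name: ca-1
        type: builtin   # the mesh generates and manages the CA certificate itself
```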
Now let's go ahead and enforce this, or apply it if you will, and when we do, something's going to happen. We've applied it, and we can come back here, and this service that we saw working before is suddenly not working: we're getting a 503. Well, we've enforced a zero-trust policy, and what that means is that nothing is going to talk to anything unless we give it explicit permission. We can fix that by putting in a traffic permission policy. Here I'm doing a star-to-star rule.
Let everything talk to everything, because it's a demo and we don't have all day, but you can see this is an opportunity to be very fine-grained and explicit about which services can talk to which, and really enforce your security policy that way. So let's deploy this traffic permission policy and see if we can get our services back online.
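The permissive "star-to-star" rule described above would look roughly like this (a sketch in the Kuma/Kong Mesh TrafficPermission format; the policy name `allow-all` is illustrative):

```yaml
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: allow-all
spec:
  sources:
    - match:
        kuma.io/service: '*'   # any source service
  destinations:
    - match:
        kuma.io/service: '*'   # may reach any destination service
```

In a real deployment you would replace the wildcards with explicit service names, one TrafficPermission per allowed connection.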
We can go to this hierarchy, and you can see that we actually have our mutual TLS working with the ca-1 cert. So that's great: we can see that we've implemented a zero-trust mutual TLS policy pretty easily. Now, another thing that may come up is tracing. What if we want to track the flow of this traffic and find out where something breaks, or where latency is happening?
Tracing is going to allow us to do that, and we can implement it pretty easily with Kong Mesh. Kong Mesh comes with a CLI tool, kumactl, and it allows us to do a lot of shortcuts. One of them is installing a tracing service: if we run kumactl install tracing, it's going to install a Jaeger server, which is where we can centralize the collection of all of our traces.
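Assuming a Kubernetes cluster, the install step looks like this (the command prints Kubernetes manifests, which you pipe to kubectl):

```shell
# Generate the tracing (Jaeger) manifests and apply them to the cluster
kumactl install tracing | kubectl apply -f -
```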
So let's go ahead and add a tracing policy. Right under our mTLS policy, we can add a tracing section. You can see we're using a default backend for the Jaeger collector: we name it jaeger-collector, it has a type of zipkin, and then in the conf we point it to the collector that we deployed just a few seconds ago. Now that we've done that, we can tell the mesh what we want to trace with a traffic trace policy.
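Put together, the two pieces described above look roughly like this (a sketch; the backend name and collector URL are illustrative, assuming the Jaeger install landed in a `kuma-tracing` namespace with the standard Zipkin-compatible endpoint):

```yaml
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  tracing:
    defaultBackend: jaeger-collector
    backends:
      - name: jaeger-collector
        type: zipkin     # Jaeger accepts spans over the Zipkin protocol
        conf:
          url: http://jaeger-collector.kuma-tracing:9411/api/v2/spans
---
apiVersion: kuma.io/v1alpha1
kind: TrafficTrace
mesh: default
metadata:
  name: trace-all
spec:
  selectors:
    - match:
        kuma.io/service: '*'   # trace traffic for every service in the mesh
```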
So what that has done: if we come back to our Jaeger UI and I refresh, you can see we actually have five services now. If I go to, say, our apollo service, for instance, and click Find Traces, you can see that we get all these different spans being tracked, and we can see things like latency and the hops.
What we can also see is our system architecture, so you can see the pattern this traffic follows: from the gateway (this is our Kong Ingress Controller, which is a Kong gateway) to the apollo service, to the blog service, and finally to the natural language processing service.
Another easy way to look at this: if we go back to System Architecture and go to the DAG view, you can see it more straightforwardly. We're actually tracing each of these hops, so if anything goes wrong, if we get latency, we can see where that is, and that's really powerful.
So we've actually covered quite a bit in this short talk. We've covered how to get mutual TLS between services, and we've gotten our request IDs and centralized our traces in Jaeger. There's actually a large number of these policies we can deploy, and there are more coming out each and every quarter; we're going to keep adding a lot more of these policies. So thank you so much for your time, and have a great day.