From YouTube: Kuma 1.2 Release 🚀 - Kong Builders Livestream
Description
In this special edition of Kong Builders, Cody De Arkland announces a new major release of Kuma and Kong Mesh built on Kuma!
Kuma 1.2 ships with 20 new features and countless improvements.
▬▬▬▬▬▬ CONTACT CODY ▬▬▬▬▬▬
🐦 https://twitter.com/codydearland
▬▬▬▬▬▬ ADDITIONAL RESOURCES ▬▬▬▬▬▬
Release blog: https://bit.ly/3xSQrHJ
#Kuma #ServiceMesh #Kong #KongMesh
Hello everyone, welcome to another one of these Kong Builders that we're doing. We're doing this Kong Builders series around how we take Kong's products and do useful things with them. How do we... how do we build... oops.
I need to mute my audio in the background. There we go. So Kong Builders is designed for us to take users through how we actually build applications and tools inside of our products at Kong. I focus mostly on Kuma, the service mesh, and Kong Mesh. My name is Cody De Arkland; I work for the Office of the CTO, doing that fun stuff. So today is a pretty exciting day. We're going to jump into our new release of Kuma, and this one is a big release.
There's a little funny story behind it. When I first came aboard, talking with Marco, the CTO, about things inside of service mesh, I brought up the idea — or not the idea, I shouldn't say that — the concept of L7 policies, layer 7 traffic policies. It turns out those were already on the roadmap, but it's cool because this release brings layer 7 traffic routing into Kuma. So I'm pretty excited for that. We're going to jump right in.
We're going to play with this release live, so we're going to install it on a couple of Kubernetes clusters and play with some of the features. There are like 20 features — we're not going to be able to play with all of them, but we're going to talk through them. I'm going to answer questions; I've got the chat up on the other window, so when you see me look like this, you know I'm looking at the chat window to see what everyone's saying. Share this out on Twitter, share this out on LinkedIn.
I'd really love to see more people show up to these; it's a lot of fun to have engagement. Ask any question, as basic as you want to go or as advanced as you want to go. The worst thing I could say is "I don't know the answer," and I think I have some of the engineering team hanging out to help me figure out the things that I'm not smart enough to figure out on my own.
So I'm going to fire up screen sharing, and we'll jump in and have some fun.
All right, here we go. This is the release blog that was just done — I think it was published a few short minutes ago; Marco released this blog post. Again, 20 new features. There are a couple of really big ones that we want to talk through and play with in some live demo action today. This layer 7 traffic routing policy is awesome; in my demo you'll see me go through and use it.
A
Typically,
I
use
like
a
reverse
proxy
inside
of
my
demo,
app
toasted
and
react
so
I'll
set
up
reverse
proxy
to
the
different
backend
services.
I
can
wipe
those
out
completely
live
and
apply
this
policy
and
let
envoy
handle
all
of
that.
So
for
those
who
don't
know
when
we
talk
about
traffic-
and
we
talk
about
the
way
traffic
comes
into
a
service
mesh,
we
often
talk
about
layer,
4
versus
layer
7..
What
we're
talking
about
is
just
raw
connectivity,
so
we
think
of
like
a
tcp
connection,
so
hey
I'm
connected.
A
I
see
your
stuff
cool
envoy.
Has
this
ability
to
understand
layer,
7
traffic
layer?
7
is
http,
as
we
can
see
up
here
and
what
that
boils
down
to
is
understanding
that
this
is
an
actual
web
request,
an
http
request
and
being
able
to
do
things
with
that
request.
Some
of
that
stuff
includes
this
traffic
routing
concept.
It
includes
things
like
being
able
to
do
the
grafana
metrics
that
we
talk
a
lot
about
so
being
able
to
pull
those
metrics
out
about
request.
Request
rates.
TCP would really only be concerned with the amount of data transferred, whereas we can get response times and things like that out of an HTTP policy. So, layer 4 versus layer 7: you hear that come up a lot, and it's a little confusing when you're new to the space, so I want to make sure we dispel that confusion.
A
Layer,
seven
is
standard
connectivity,
I'm
sorry
layer,
four
standard
connectivity,
layer,
seven
is
http
connectivity.
So
again
we're
gonna
jump,
jump
in
and
play
with
this
in
the
demo,
so
I'm
not
gonna
spend
too
much
time,
but
basically
taking
that
layer.
Seven
data
the
data
coming
in
as
part
of
an
http
request
and
doing
an
action
based
on
it.
Sending
traffic
to
a
place
rate
limit
policy
is
great.
We
have
this
thing
that
we're
kicking
around
inside
inside
of
kong
and
the
kuma
team
around
self-healing
policies.
So
we
talk
about
service
mesh.
We always talk about zero trust, observability, and traffic routing, but there's this adjacent thing: the self-healing concept. How do you make applications more resilient? How do you get these things deployed in a service mesh so that the mesh can do intelligent things to protect the environment and protect you from things going down? Rate limiting is an example of that: if you have someone bombing your service with a thousand requests a second, you want to be able to do something intelligent about it.
A
So
we
have
the
ability
to
slow
that
down
fire
back
a
header
response
fire
back
a
an
error
code,
saying
hey
you've
requested
too
many
times.
We
do
this
a
lot
to
back
in
services,
so
front
ends
usually
take
requests,
but
if
someone's
just
hammering
that
submit
button
over
and
over
again
trying
to
ddos
your
environment,
we
can
use
a
rate
limit
policy
to
slow
that
down
we're
going
to
take
this
first
in
live,
though,
and
play
with
it
a
little
bit
more
there's
a
couple
quality
of
life
improvements.
A
Something
like
this
right
here.
We
changed
the
name
of
remote
control,
planes
to
zone
control,
planes
remote
caused
some
confusion.
They
we
typically
have.
When
we
talk
about
a
global
service
mesh,
we
talk
about
local
zones
or
sorry.
We
talked
about
the
global
zone
and
historically,
we
talked
remote
zone,
so
the
global
zone
was
a
construct
where
people
would
interact
with
as
the
main
api
endpoint
for
this
environment.
So
how
do
I
do
things
against
against
the
mesh?
Overall?
A
The
remote
zones
were
where
workloads
lived
so
where
we
actually
have
data
planes
connect
in
and
work
inside
of,
an
environment,
calling
it
remote
was
sometimes
confusing.
It
implied
some
other,
like
definitions,
that
we
didn't
really
want
to
dive
too
much
into
so
what
we're
calling
them
now
is
zone
control
planes.
So
these
are
zones.
A zone could be an AWS VPC — there could be four zones, each for a different AWS VPC. It could be an Azure environment. It could be two vSphere data centers. It could be a number of things, so we've renamed them to zones, and you'll see that terminology come up a lot. We've also renamed "ingress" to "zone ingress." Calling something a generalized term like "ingress" creates a ton of confusion, because ingress is used a lot inside of Kubernetes for a lot of different things.
A
So
we've
made
that
a
little
clearer
to
say:
hey
these
ingresses
are
zone.
Ingresses
you'll
see
that
in
the
ui
this
one
is
super.
Super
cool
traffic
permissions
now
work
on
external
services.
What
that
means
is
if
you've
got
a
larger,
postgres
database
living
inside
your
environment.
You
don't
want
to
run
postgres
inside
inside
of
kubernetes
for
a
number
of
reasons.
There
are
times
where
you
can,
but
generally
let's,
let's
not
run
massively
huge
databases
in
cube
right
now.
A
We
can
actually
bring
in
those
things
as
external
services,
which
means
the
service
mesh
knows
about
them
and
is
able
to
reach
out
and
pass
traffic
over
to
them
before,
because
these
don't
have
a
sidecar.
They
don't
have
that
envoy
communication
set
up
for
them.
A
We
would
have
to
hand
we
would
not
be
able
to
enforce
traffic
permissions.
We've
corrected
that
in
this,
and
we
allow
that
to
be
enforced
as
a
traffic
permission.
So
we
can
restrict
access
really
persist,
that
zero
trust
model
that
we're
trying
to
we're,
trying
to
shoot
for
we've
added
some
fixes
for
dns
to
make
it
a
little
cleaner
in
how
we
get
running
up
an
environment
and
then
just
general
quality
of
life
stuff
inside
a
platform
improvements
around
around
functionality
for
like
gcp
gke,
we
fix
some
stuff
around
ingress.
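As a rough sketch of how those two pieces fit together — an external database registered with the mesh, then locked down with a traffic permission. The address and service names here are hypothetical; the resource shapes follow Kuma's ExternalService and TrafficPermission policies:

```yaml
# Register a database running outside the mesh (address is hypothetical).
apiVersion: kuma.io/v1alpha1
kind: ExternalService
mesh: default
metadata:
  name: postgres-external
spec:
  tags:
    kuma.io/service: postgres
    kuma.io/protocol: tcp
  networking:
    address: postgres.example.internal:5432
---
# Only the backend service may reach it.
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: backend-to-postgres
spec:
  sources:
    - match:
        kuma.io/service: backend_demo_svc_3000
  destinations:
    - match:
        kuma.io/service: postgres
```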
A
One
thing
that's
not
listed
here
that
makes
me
makes
me
sad
to
to
not
highlight
is
that
we
did
a
ton
of
work
around
in
updating
the
kuma
control
command
to
let
you
install
a
gate,
a
functional
gateway
and
install
the
demo
app.
So
we're
going
to
do
both
of
those
things
in
the
demo.
As
well,
which
we're
about
to
jump
into
so
really
excited
for
it,
the
blog's
out
there,
thanks
to
marco
for
getting
that
out,
it's
gonna,
be
it's
gonna,
be
a
fun
release
to
play
with
so
go.
Give
that
a
read!
We should see the tag up here coming up — there we go, we're running on 1.2. So Kuma is running at 1.2 now; Kong Mesh, the enterprise version of Kuma, is running at 1.3. Just a little bit of disparity there between the versions.
A lot of people have asked me, "hey, how do I test this stuff out in the easiest way possible?" I wanted to touch on that very quickly before we jump into the hands-on demo stuff in my environment. A lot of people will use minikube, or the Kubernetes built into Docker on Mac or Docker on Windows, and that's a good enough way to go. I've always been very partial to kind — Kubernetes in Docker — I'm a huge fan; it's a great platform.
There are a lot of blogs out there on how you can create load balancers to attach to kind. So I just wanted to share: if you want to test this out in a quick way and you don't have a Kubernetes cluster — you can't spin something up in EKS, or any of a dozen reasons — you can use minikube or you can use kind; I'm a big fan of both approaches. kind lets you do multiple nodes.
It also gets updated a lot faster, so you can run bleeding-edge Kubernetes stuff, and you can also do port forwards into that environment. It's a really slick platform — the kind team has done a great job with it, and I just wanted to give them a positive nod in the Kubernetes community. Love me some kind. All right, I think I've talked enough; we're going to jump in now. But first I wanted to flash the changelog real quick. We always update these on the Kuma GitHub repo.
A
So
you
can
see
this
here.
This
has
all
of
the
fine-grained
fine-grained
commits
and
changes
in
what
I
love
about
our
approach
to
this
is
that
we
also
include
all
of
the
issues
that
talk
about
these
and
the
and
the
proposals
around
them,
but
you
might
not
realize
is
a
lot
of
our
proposals
are
actually
public
for
kuma
kuma
is
a
cncf
project.
We
donated
that
donated
at
cncf,
but
you
can
actually
go
in
and
see
the
proposals
for
some
of
these
features
and
comment
on
them
and
contribute
participate
in
the
community.
A
I
also
love
that
we
specifically
call
out
our
community
contributors
here
so
austin,
as
always,
thanks
for
thanks
for
all
the
work
you
do
on
kuma
sadeep
same
thing,
really
appreciate
it,
and
thanks
nikita
for
for
bringing
in
some
stuff
as
well
as
well
as
I
don't
know,
jew
or
toe,
but
thank
you,
sir
or
man,
either
way
a
lot
of
features
a
lot,
a
lot
of
detail,
a
lot
of
version
bumps,
so
you
can
go
in
read
this
in
the
change
log
and
you
can
see
a
lot
of
detail
around
this
functionality.
A
I'm
gonna
fix
this
real
quick.
There
we
go
cool
just
as
any
good
magician.
Does
I
like
to
show
environments
before
I
get
started
so
we're
going
to
go
cube
ctx,
you
can
see
all
of
my
kubernetes
contacts.
These
are
different,
cube,
config
files.
I
have
loaded
in
my
environment,
so
a
couple
of
different
environments.
I
can
work
in
here.
Both
of
these
are
in
my
home
lab
environment,
I'm
an
avid
home
labrad.
A
So
I
have
two
kubernetes
clusters
here:
this
one's
designed
to
be
multi-zone,
this
one's
designed
to
be
just
a
standalone
instance.
We're
gonna
play
an
eks
today.
So
we'll
do
this
in
an
elastic
kubernetes
cluster,
I'm
a
big
fan
of
cluster
api,
so
I
run
a
cluster
api
for
vsphere
in
my
lab
at
home
and
that's
how
I
deploy
my
clusters.
So
that's
what
that
one
is
so
we're
going
to
play
here
in
eks,
you'll
notice.
I
do
a
lot
of
these
shorthand
commands.
A
Let
me
make
this
a
little
bit
bigger
for
everyone
just
to
be
stream
friendly.
Unless
I
do
a
lot
of
these
shorthand
commands
kgp.
As
an
example,
I
have
a
bunch
of
aliases
set
up
in
my
terminal.
Environment.
Kgp
is
cube,
control
get
pods,
okay
get
pods
is
another
one.
I'll
use
a
lot.
So
just
a
heads
up
there,
so
q
control
get
pods.
We
see
the
environment
it's
going
to
come
up
in
a
moment
here.
As
long
as
my
internet
doesn't
go,
sads
cool,
we
have
an
empty,
an
empty
kubernetes
cluster.
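The aliases themselves are just shell shorthand; a minimal version of what I mean might look like this in your shell profile (the exact set is personal preference):

```shell
# ~/.zshrc or ~/.bashrc -- shorthand for common kubectl commands
alias k='kubectl'
alias kgp='kubectl get pods'
alias kgs='kubectl get services'

# kubectx (github.com/ahmetb/kubectx) switches between the
# contexts defined in your kubeconfig files
```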
A
Here
we're
gonna
go
ahead
and
install
kuma.
If
I
do
a
kuma
control
version,
you
can
see
I'm
running
on
1.2
already.
If
I
switch
back.
If
we
were
to
go
in
here
and
go
to
the
getting
started
or
go
to
the
install
section,
there's
a
script
in
here
for
installing
the
most
recent
version.
If
you
run
that
go
through
these
steps
here
in
the
kubernetes
section,
it'll
actually
pull
down
that
binary.
A
Look
at
the
docs
a
little
bit
out
of
date
here,
switching
to
fix
that,
but
you
pull
that
down
to
pull
down
the
binary
that
I'm
going
to
use
here
and
that'll.
Let
you
do
the
install
and
get
running
helm
is
another
popular
method.
A
lot
of
people
ask
me
the
difference
between
the
two.
A
If
you
want
the
fastest
experience,
do
it
in
doing
in
in
the
kuma
control
command?
A
It's
all
kind
of
packaged
there,
but
if
you
want
to
be
able
to
make
some
configuration
changes
or
you
want
to
be
able
to
manage
the
state
of
that
application
kuma
as
an
application
use
helm,
I
I
generally
use
a
lot
of
helm
in
this
example,
I'm
going
to
use
kuma
control,
but
a
lot
of
times
I
want
to
choose
to
have
the
control
plane
exposed
as
a
load
balancer,
for
example,
or
I
want
to
add
replicas
or
I
want
to
change
other
configurations
of
the
environment.
A
Helm
is
great
for
that,
so
use
helm
in
that
case.
In
this
case,
though,
we're
going
to
stick
with
the
the
kuma
control
command,
got
some
feedback
on
the
last
stream
about
no
more
window
swiping.
So
I'm
gonna
try
to
be
better
about
that.
There
are
times
like
that
where
it
just
makes
the
most
sense
to
do,
but
I'll
try
to
be
a
little
bit
better
about
about
not
swiping
the
window
too
aggressively
all
right.
A
So
this
folder
has
several
of
the
policies
we're
going
to
work
with
several
demo
things
we're
going
to
work
with,
I'm
also
going
to
show
off
a
fun
load
testing
tool.
That's
called
a
locust
as
part
of
this
demo
as
well,
so
we're
gonna
have
some
fun.
Let's do kumactl install control-plane. What I'm going to do is leave this command by itself so you can see what comes out. When you run this command, it actually outputs the Kubernetes YAML by default. We're not going to decompose it; I just wanted to show that what comes out is really the full configuration of Kuma for a Kubernetes environment.
So we're just going to port-forward into this in a second. Apologies for the small text — I'm going to leave it small because we're not going to be in this window too much. So: kubectl port-forward service kuma-control-plane 5681. 5681 is the API port. Again with the window switch — sorry.
A
Coming
up
so
it
might
not
be
quite
ready
yet
love
the
logo.
One
of
my
favorite
logos
in
tech
awesome
we're
up
and
running
we're
running
on
kuma
1.2
on
kubernetes
kuma's
installed
that
this
is
the
control
plane.
We
talk
a
lot
about
control,
plane
versus
data
plane.
I
have
just
deployed
the
control
plane
for
kuma.
This
is
where
policies
are
applied
and
pushed
down
to
data
planes
that
live
alongside
your
application.
A
It's
cool,
that's
doing
its
thing,
so,
let's
get
started
installing
some
stuff
control.
Install
dns.
Do
that
so
that
the
kubernetes
dns
knows
about
kuma
and
knows
how
to
resolve
dot
mesh
addresses.
We
made
a
change
to
dns
to
have
the
data
planes
automatically
have
the
dns
configured.
So
that's
awesome.
We
still
have
to
install
it
for
configuration
sake,
though
we'll
install
gateway.
This
is
easily
one
of
my
favorite
features
of
kuma
now,
because
the
immediately
after
you
install
service
mesh
and
get
started
using
it.
A
Your
next
question
is:
how
do
I
get
inbound
to
it?
We
now
have
a
packaged
way
to
let
you
install
kong's
ingress
controller.
We
have
we
added
in
the
functionality
to
install
kong
enterprise,
but
we've
set
this
up
to
be
extensible,
so
other
vendors
and
community
members
can
bring
in
their
own
gateway
install.
So
this
isn't
meant
to
be
just
for
kong
kuma's,
a
community
project.
A
After
all,
we
just
added
the
first
ones
in,
and
we
have
a
model
there
now
that
other
people
can
bring
their
own
ingress
controllers
in
if
they
want
to
so
that
that's
available
right
now
we
have
kong
available.
So
excited
for
that,
so
we'll
install
that
that
is
going
to
run.
We
can
see
it's
coming
online
now
calling
us
controller
in
the
kuma
gateway
area.
A
So
this
is
the
demo
app
that
I
use
in
my
environments,
it's
kind
of
tuned
for
the
type
of
demos
I
like
to
do
in
an
environment,
I'm
not
going
to
decompose
this
entire
file.
I
want
to
touch
on
a
couple
of
key
points,
create
a
namespace
that
has
sidecar
injection
enabled
so
that
we're
automatically
getting
those
new
sidecars.
This
is
all
app
configuration
stuff
inside
of
my
environment
and
then
we're
deploying
out
the
service
now
there's
important
thing.
A
I
want
to
call
out
in
here
related
to
this
release
and
the
reason
why
I
wanted
to
bring
up
this
file
you'll
see
I'm
applying
this
annotation
to
my
services
here.
This
annotation
sets
the
service
as
an
http
service,
we're
going
to
be
using
the
layer,
7
traffic
routing
capability,
so
it
needs
to
be
an
http
service
in
order
for
that
to
work.
I've
set
that
here
so
that
service
and
all
of
the
http
services
are
going
to
come
online
as
http
enabled.
A
Those
are
coming
online,
they
take
a
little
bit
of
time.
This
front
end.
One
is
probably
going
to
bomb
out
a
couple
of
times.
Take
it
needs
the
user
surface
and
post
service
to
be
up
before
it
will
actually
come
up
all
the
way
we
also
added
to
common
control
the
ability
to
install
a
demo.
So
this
is
the
demo
that
marco
rcto
wrote
that
he
uses
in
his
in
his
demos.
A
Control
get
pods
all
name
spaces.
You
can
see
the
kong
stuff
up
and
running
two
out
of
two
means.
I
have
the
app
and
I
have
the
sidecar
sidecar
is
a
data
plane.
Sidecar
is
envoy,
so
sidecar
envoy
data
plane
same
thing.
In
this
case
they
are
getting
their
configuration
down
from
the
control
plane,
so
the
control
plane
is
where
we're
going
to
apply
all
of
our
policies.
Those
are
going
to
populate
down
to
those
data
planes,
hopefully
you're,
starting
to
kind
of
click
on
the
the
relationship
between
the
control,
plane
and
data
plane.
Now, taking a quick glance at the chat, things are looking good. Oh, thanks, Jacob — yes, k3d is also an incredibly good option, so good call out on that.
Sorry about that. All right, these are all up now; let's show something break — that's not going to work right out of the gate. kubectl get service: we have a kong-proxy load balancer, so we're going to hit that, and it's going to come up in this window.
So this is an ingress controller: we still need to apply routes and configurations before it does anything useful. It's deployed, and we know it's working because we're able to reach it, but we're not able to actually access our application yet because we have no routes. So first I will go cat...
All right, now we switch back into that window, grab our address again — and I just reopened the window and we missed the fun animation because I played around too much. There we go: all connections are working. If we go in here, we can hit Envoy, we can hit clusters, and we can see all the communication across the environment. This is the heart — really the bridge part of the brain — of the service mesh, coming down to the sidecar.
So, up until now the only thing we've played with from the new release is the install gateway functionality, but we're actually going to jump into the good stuff now. First — actually, one more thing that's not release-related — we're going to turn on mTLS, watch the whole thing break, and touch on some traffic permissions; then we're going to play with layer 7 traffic routing. So let's pop back in here.
This is a mesh configuration that has an mTLS spec and the configuration for it. As soon as I turn this on, zero trust is going to be enabled, meaning things aren't going to talk unless I tell them they can talk, and all traffic between services will be encrypted out of the box. Well — "out of the box" — I do have to apply this first; anything added after that will be encrypted.
There it goes — you can see mTLS is now turned on. What's really cool about this: we can have multiple meshes, creating multi-tenancy. You could have a different CA provider for each one of those meshes, different metrics configs, locality-aware load balancing — which is something we're going to talk a lot about soon, in this concept of removing load balancers from environments. All of this can be enabled per mesh, which is pretty cool. So that's set up.
A
If
we
go
in
and
take
a
look
at
our
application
now,
it's
likely
still
working
because
I
haven't
turned
off
the
traffic
yet
so
we
have
a
default
policy
in
there
that
allows
all
traffic
still
that
comes
out
of
the
box.
But
if
I
go
in
and
I
go,
control
delete
traffic
permissions
allow
all
default.
Let's
watch
what
happens
deleted,
sad,
reds
everywhere,
sad
reds
everywhere,
let's
go
back
in.
A
This
was
a
thing
we
added
in
the
last
release.
I
want
to
show
here
real,
quick,
the
reason
I'm
going
down
here.
We
didn't
get
a
chance
to
demo
this
for
the
last
release
of
kuma
and
kong
mesh.
We
added
the
ability
to
do
traffic
permissions
on
more
than
just
the
service
name.
So
in
my
app
I've
applied
all
of
those
pods
with
a
label
of
group
demo
app
that
allows
them
to
communicate.
So
now
I've
said
group
demo
app
is
a
is
a
valid
communication
path.
A
So
I'm
going
to
go
by
f
to
the
traffic
permissions
that
has
been
created
and
if
we
switch
back
into
the
demo
everything's
alive
again
and
working
well
now,
eventually
we're
gonna
come
back
and
we're
gonna
do
a
started
star
and
allow
all
communication
anyways
because
of
a
future
part
of
this
demo.
But
I
wanted
to
show
that
out
of
the
box,
you
saw
kind
of
what
was
happening
all
right.
We
are
now
ready
to
jump
into
the
the
real
fun
stuff,
but
first
we
need
to
break
things.
This is the beating heart of the application, serving traffic out to the environment. You can see I have the Kuma service names in here as where the traffic is being forwarded to.
In this case I didn't want to waste your time building a new image just to spin it up again, so you can see nginx being reconfigured — I just refresh the config, and that pod now works the way I would do it in a real demo. But everything's broken, which is kind of what we wanted anyway, so life is good.
This is a traffic route policy. It's not a new thing — we had traffic routing policies already — but we've brought in the ability to do layer 7 on them, as I mentioned. Talking through what's happening here: this policy applies when the source is the kong-proxy, so the ingress controller, and the destination is anywhere — so anywhere in the environment, it will apply and evaluate this policy.
We've got a configuration here — scroll down a bit — a configuration stanza for HTTP services. We're doing a lot of path-based routing here. We can also do this via header, which I'll show in a little bit — we will do a header-based one, that's kind of fun — and we can do regex entries, so there are a number of ways to make these traffic decisions. For the simplicity of the demo,
I did path-based routing. If the path is prefixed with /api/posts — you can also use "exact" if you only want a specific path, or "regex," as I mentioned — we send traffic to the posts service. /api/users goes to the user service; my user service has the login endpoint in it, so I send traffic there. My websocket configuration is in my posts service, so I send that there.
That route is created; now let's hop back into the app, and everything comes back to life. What's really cool about this — if you get really deep into how that control plane versus data plane communication happens — is that when those filters are applied, it's telling Envoy: hey, you can actually accept these routes.
A
Now,
if
I
had
come
in
here
and
clicked
on
my
demo
app
and
clicked
on
clusters
and
before
it
would
have
been
blank
because
these
were
not
allowed
to
these
were
not
receiving
any
route
information,
and
now
that
I've
got
that
set
up,
it
actually
is
receiving
that
information
down
as
as
expected.
So
that
is
working.
That's
layer,
7
traffic
traffic,
routing
we're
now
having
envoy
to
understand
where
the
services
need
to
go.
I'm going to make a quick change to our traffic permission policy to just allow all traffic, mostly because I don't want to write a whole new one that allows Marco's demo app to work — I'm just going to blanket-allow traffic for now. Usually you'd want to do this a little more intelligently, so just a heads up there. I'm going to go into that traffic permission and delete out the second entry.
What this is saying is: if the kuma.io/service tag is anything, allow traffic to anywhere. It's just a blanket allow-all.
That's configured; let's make sure I didn't break anything. Everything should still be working — cool. Another thing I thought was a fun angle on this: I'm really big on application delivery. I joke that I'm not a network guy, I'm an app delivery guy. I care a lot about how we get applications in front of people and how they consume them, and I think a lot about how we intelligently get services and capabilities in front of users.
A
So
I,
when
I
look
at
like
things
like
traffic
routing
traffic
splitting,
I
think
a
lot
about
how
we
progressively
roll
out
applications.
There
was
a
guy
in
james
governor
from
redmonk
who
talks
a
lot
about
progressive
delivery
and
this
idea
of
gradually
rolling
out
services
in
an
environment,
so
you
don't
create
a
massive
blast
radius
of
an
app
going
down.
So I think about this in the context of how I can intelligently use this functionality to roll out new versions of an application, or give an opt-in process so somebody can opt in to a new version of a page. What if we used that to give us an opt-in to Marco's demo app? Maybe we don't like Cody's app; maybe we want to be able to bounce between the two — without making massive changes to our environment, or creating a new ingress gateway, or any of that configuration.
Let's do that — use the traffic policies to do it. I'm going to pop back into that layer 7 policy. We've got our matches here based on path, and we're going to create a new match to do this. I'm going to make sure I'm copying off my other screen.
So when you see me look away a little, just know I'm not ignoring you all — I'm copying stuff from another window so I don't mess up. We're going to create a new match, and this one is going to be based on headers. We're going to call it app-mode, with an exact match — oops, that would have broken a lot of stuff.
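The header-based match slots in alongside the path matches in the same http list. A hypothetical sketch — the header name, value, and service identifier are all made up for illustration:

```yaml
conf:
  http:
    - match:
        headers:
          app-mode:          # hypothetical opt-in header
            exact: marco     # route only when the value matches exactly
      destination:
        kuma.io/service: marco-demo-app_demo_svc_5000
```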
You might be wondering how this service name is constructed. It's in the UI, so you can see it there, but just so you know: it's the name of the service, the namespace it's in, "svc," and the port it's on. These are all identifiers that get chained together to create the service identity name. So that's done.
So our app's working; life is good. Now watch it break. I have this mod-header plugin installed that lets me change the headers on requests that go out. We're going to add a new request header, and we're going to set it to — what did I call it? I called it something interesting. "app name" — that's a fail.
Boom — why did that all go red? It went red because React caches locally, so the page is still loaded for me, but none of the routes are working anymore. Why aren't the routes working? Because it's catching that header now. It sees that the header is present, and Envoy says: whoa, you match these conditions, I'm going to do something else with you now.
Oh, I love that — Rory had a comment: layer 7 matching will be great for A/B testing. That's exactly my thought; I love that you brought it up. That's what excites me about this.
A/B testing is totally it, and what I see is being able to have an opt-in header — maybe "dev mode" — that sends you into a split configuration that drops you between other services.
So you can have this dev-mode environment and see if it's working. Think about opt-in for mobile, or opt-in for a different app configuration. This is totally in the space of how we deliver A/B testing and canary stuff, letting people test these things out progressively — instead of testing in prod, where we disable the DNS entry for one version and flip it onto the other and all of a sudden all traffic goes to a broken version of an app, or getting into a complex load balancer and playing with weighting between the two.
That's another way you could go, but it's higher complexity. We can do splits inside of Kuma, and you can self-serve that: we can enable developers to self-service testing out this functionality, applying policies that let you roll out new services and header configs. Yeah, I love this feature — it's one of my favorite things inside of Kuma and Kong Mesh now.
So that's working. We're going to talk about rate limiting now, so let's jump in and play with rate limiting a little bit.
So this is all working. Now, if I bring up Insomnia — this is my Insomnia environment. Insomnia is an API toolkit; we can use it to design APIs, to test APIs — there are a lot of cool things inside Insomnia. I use it for a bunch of my API calls, so I have a set of requests for Kong Mesh for when I want to interact via the API instead of the Kubernetes-native way, but I also have one set up for my demo app as well.
So if I come in here right now and do a GET on users against the environment — oh, it's going to fail, because I didn't update this variable. You'll see I have this base URL variable here, and it's routed back to my on-prem environment. I need to update that, so I come in, manage my environments, and change it.
I change it to the Amazon environment, and I need to strip off those extra parts. You can see I'm doing a chain here where I take the response from this request — the JWT is added into the response — so I'm grabbing that token and adding it as a variable in my environment. Done. Now let's send our request, cancel our previous request — there we go. This is my get-users. If I create a post, we should be able to send a couple.
We're not going to see it here; if I go in and do a login as admin, we will see it. There we go — there's the text we wanted to see. So everything's working; Insomnia is doing what it's supposed to do. If I come in here and do this a bunch of times, we can see it does it a bunch of times — it's scrolling, life is good. Everything's working inside of our app; it's doing what it's supposed to do.
A
We're going to take a brief segue to spin up the load testing tool, because we're going to use that as part of executing calls against this environment. This is not a Kong thing; this is not a Kuma thing. This is out in the community: locust.io is an open source project that's a load testing framework. I love it because it's largely written in Python, and I'm a huge Python guy.
A
There's a bunch of stuff inside of this file that I should talk through a little bit. This deploys out just the main Locust core and sets the configurations, and this is where the tests come in. I have two simple tasks in here that just do GETs against the users endpoint and the posts endpoint. That tasks section is what gets fed into the application to actually run the tests that it's going to run for us.
A
Boom, so we've got two workers online. That's deployed out two additional workers into the environment. We're going to start very small: we'll do five users at a spawn rate of one user per second, and we need to change this to be the ingress gateway for our application.
A
Awesome, charts are firing off. We're doing low requests per second, things are hitting the environment, it's going pretty slow, life is good. We validated it's working; I always like to test these things before we fire them off. Let's kick this up a bit: let's go to 100 users at a spawn rate of, I don't know, five. It's important to note that I'm doing this all against a small Postgres database hosted inside of my environment.
A
It's not the best for load testing, so we might see some chokes in here where failures pop up, because it's trying to do huge queries against a very small Postgres database that I have not appropriately sized, just a heads up. So this is "swarming"; that's the term inside of Locust. We've got some good requests coming through right now, things are hauling, life is good. So let's go ahead and apply that rate limiting policy.
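Rate limiting ships as a new RateLimit resource in Kuma 1.2. A hedged sketch of a broad, catch-all version follows; the numbers and service matches are illustrative placeholders, not the exact values from the demo:

```yaml
apiVersion: kuma.io/v1alpha1
kind: RateLimit
mesh: default
metadata:
  name: rate-limit-all
spec:
  sources:
    - match:
        kuma.io/service: '*'
  destinations:
    - match:
        kuma.io/service: '*'
  conf:
    http:
      # Allow up to 100 requests per 10-second window, then start
      # rejecting with 429s (values are illustrative).
      requests: 100
      interval: 10s
```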
A
But let's see what happens when we go with something like that. So we're iterating on that policy, and that's one of the most powerful things about service meshes: you're not locked into decisions that are long-lived. You open a ticket for the network team to make a change to an environment, and that can take a few days for them to get to; you need to make another change, and it might take a few more days. We're creating a self-service model for making your application connectivity better.
A
That's the simplest way to boil down what a service mesh does: we want to make applications connect better and give you more functionality to make them connect better. So let's go back in and look at our charts. Our failures went way down; they're dropping, life is good. Take a look. We're probably still going to see some, because we're getting 57 requests a second, so it's still going to be hitting that failure domain pretty often.
A
Maybe we want to rate limit only against things that hit the post service, right? So we want to allow traffic to go normally to the user service; as much as people want to hammer that with logins and new user creations and all that, so be it. But the post service we're a little worried about: it's shaky, and it was written by Cody, so the code is bad.
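Scoping the limit down that way just means narrowing the destinations match to the post service. Roughly, with the service tag as an assumed placeholder:

```yaml
apiVersion: kuma.io/v1alpha1
kind: RateLimit
mesh: default
metadata:
  name: rate-limit-posts
spec:
  sources:
    - match:
        kuma.io/service: '*'
  destinations:
    - match:
        # Only requests headed to the post service get limited;
        # the user service keeps flowing normally.
        kuma.io/service: post-service_kuma-demo_svc_3001
  conf:
    http:
      requests: 10
      interval: 10s
```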
A
But if we go back into Locust and watch that traffic, we should now stop seeing that count increment. There's still some leftover requests going through because it's batching them out, so it takes a little while for it to finally stop doing those requests. We should continue to see the failures here climb, but this should hang at 31.34, in theory. Going into Insomnia, we try to do some posts.
A
Oh, we got a post through. Then we got blocked, blocked, blocked, blocked, so it's back and forth. But if we go into getting users, we should be able to spam this all the way through, and we can go and create a new user. So we can see Cody, who's the tech marketing engineer for the OCTO group, and my password is "kongstrong", please don't hack me. I can create that. Maybe I can create Marco, CTO.
A
"codystrong", that's your password forever now, Marco; you don't have a say in the matter. So I'm able to keep creating those; I'm not being rate limited. The idea here being: when we look at self-healing in an environment, we want to be able to create barriers that keep your environment safe and stable. You likely have a rough understanding of the rate of traffic that you want into an environment.
A
If you don't, we have great observability tools to let you see that, when you bring in Grafana and Jaeger tracing and all of those capabilities. But once you get to a place where you understand what your traffic models look like, we can actually set it up to where you can start to rate limit that out. We can do a lot of cool functionality with managing that traffic to keep an environment healthy and running well.
A
So that is firing off, everything's good. I'm going to go ahead and stop this from running. We can see those exceptions; all right, because I stopped it, we won't see them anymore. We can see the errors that are coming back, the last exceptions that happened, plus failures and statistics. So Locust is a really cool way to test this stuff out; I really enjoy it. Let me check my notes real quick and make sure that we hit everything we wanted to. Awesome.
A
We did hit all of the stuff that I wanted to cover, so I am now going to open this up for questions. Everything is running. It's pretty cool to think that I talked for about 25 minutes or so, and in roughly another 25 minutes we were able to get all the way through deploying multiple applications, not just mine, but we also brought in the other demo app. We were able to control policy, we applied mTLS so we got that end-to-end encryption, and we enabled policies to control permission between services.
A
We showed how we can load test this environment and how we can make sure this environment is up and running. The key takeaway for that rant is: look how fast you're able to actually do self-service application connectivity in an environment, right? It's very quick, and this is all just in a single zone. Imagine your scale as an operator or developer when you're doing this across multiple clouds in a global service mesh configuration; that's when the real power shines through. So I want to open it up for questions.
A
I'm not seeing anything come in now, so we might end up being clear to wrap the stream up. As always, I want to make sure to say thanks: I have Taran in the background helping me; she's one of our people on our developer marketing team, awesome help with all of this. I also have Michael, who leads our developer advocacy team, the developer relations team at Kong; awesome guy, and they've got some great stuff planned to build content around a lot of the things inside of the application connectivity space.
A
I also don't think that we can do it off of a matched header. Let me bring up the docs real quick and see if there was any example added for that. I'm not sure, but that is good feedback that I would want to take back to our team anyway, because I know they're super interested in how people want to use this. So I think there's definitely room for us to start to understand a little bit more about how we can make these policies more useful.
A
We can match off the source of a service. Yeah, I'm not seeing that, Rory. I'm going to take this back as a question to our engineering team, so I'm sorry I don't have the answer for you, but I try to own it when I don't know. We can do it based on zone. Also, I feel like there's an opportunity here to put in a few more inbound resources.
A
You could, in theory, if you knew where the traffic was coming from. For IP, I'm assuming you mean client IPs, so it'd be a little more tricky. If you knew the IP address, you could potentially create an external service and filter off of that. But I will dig into this, and I'd encourage you to join the Kuma community Slack if you're not on there already.
A
I can bring that answer back over there and see what the answer is, or bring that question back to the engineering team and find it out there. So I'm sorry I failed you; at least I did a cool A/B demo, though, so I'll take that. Any other
A
questions? All right, hey, this was an awesome time; I really enjoyed showing this to everybody. I will not be here for the next Kong Builders series, which is on next Friday. We are looking to have one of Michael's team members, Vic, host a session. I think he's going to do something outside of the mesh world as a Kong Builders thing, so that'll be really exciting and fun.
A
Vic is a great guy, does great work on stream, so I would encourage you to come hang out next Thursday where we do this again. Or, it's actually Friday, I'm sorry: next Friday at 11 a.m. Pacific time, where we will do Kong Builders again and we will build some connected-world things. Thanks a lot everyone, it's been real. Hit me up on Twitter, @codydearland, or on LinkedIn. I love talking to the community, so let me know how I can help.