From YouTube: [Online Meetup] Service Mesh Demo
Description
Cooper Marcus and James Callahan gave us a great demonstration of how Kong works when deployed as a service mesh. They discussed what "service mesh" means in Kong parlance, and fielded some wonderful community questions.
Resources:
Streams and service mesh documentation: https://docs.konghq.com/1.0.x/streams-and-service-mesh/
Service mesh definition: https://konghq.com/blog/defining-service-mesh/
A high-level overview of service mesh architecture: https://konghq.com/blog/steps-to-deploying-kong-as-a-service-mesh/
Join our next Online Meetup: https://konghq.com/online-meetups/
A
Okay, hi, so hi everybody, welcome to the community call. This is December 4th, 2018, and here at Kong we are all in ramp-up-for-KubeCon mode. So, in getting ready for that, we are going to have Cooper Marcus talk (oh, I put him second, but he will actually talk first) a little bit about service mesh, which is going to come out as part of Kong 1.0's GA. Cooper's been doing a lot of education about service mesh for us, and he's got a couple of blog posts coming out about how Kong thinks of service mesh, what our service mesh looks like, kind of how it's put together, and what it is. And then after that we're going to have James Callahan, who actually designed and built our service mesh, talk to us a little bit about where it is now, and show us what we've got in terms of our RC3 and what's going to come out in the future. So there'll be a little demo of that, and then after the service mesh intro and demo we'll talk a little bit about where you can find us at KubeCon; KubeCon is obviously coming up.
B
Sounds good, and I'm just going to assume everybody can hear me; if you can't, turn on your camera and give me a thumbs down. Otherwise, no news is good news. So hi everybody, my name is Cooper, I'm the director of ecosystem here at Kong, and I work on a variety of things, including our Hub, where we share and find ways to extend and enhance Kong. But today, and recently, I've been doing a lot of work on service mesh.
B
I found it made my head, and the heads of some of my colleagues, sort of explode a little bit at first, and then we really dug into it and we realized: wait a sec, I think it's actually much easier than folks fear. So let's talk a little bit about what it is and how it works. But first, James, how long should I talk for?
B
So I'm going to show some pictures and then get into some words. So in the olden days, and of course in many monolithic applications, what we have are function calls between objects, right? There is no need for a service mesh here, because all of the communication is happening within the application. And even when you start to scale your monolith and deploy multiple instances of it, we still have all of our communication happening within the application. But what happens when we start going to microservices?
B
Those little changes make a huge difference that we need to start to address, and the core reason we need to address it is because the network has latency and unreliability, compared to objects making calls to one another within an application, which is luckily fast and reliable. For the most part, the network is just a Wild West.
B
Latency issues arise easily as you get into modern deployment scenarios like hybrid cloud, or calling external APIs that are kind of just like little services in your microservices arrangement. You can imagine all the unreliability that can happen across these network connections. So service mesh was developed; the idea is sort of a response to the problems we encounter when we refactor our monolith into microservices.
B
It's meant to be a place where we can deal with problems, right? The whole core concept of the internet routing protocol is that it's decentralized and fault-tolerant: things can go wrong and we can recover. But when we're running applications that are talking to one another, we need that recovery; we need those things going wrong to be minimized, lest we get impacted in a way that we weren't when we had such reliability in these function calls within an application.
B
Okay, so the network's the problem, right? We've got latency and routing issues. We've got potential security issues; there's now communication happening across the proverbial wire. And we've got observability issues. Observability refers to the fact that the tooling we have to understand what's going on in these communications between microservices is much less mature than the tooling we have to understand what's going on within any given application, and service mesh is also a response to this sort of lesser maturity on the observability side of things: seeing and understanding what's going on.
B
So let's have a quick look at some of those issues. First of all, we've got latency issues: network latency. Networks are typically very fast, but in our microservices architecture a given request from a client, and its response, may involve many services talking to one another across the network, and all that latency adds up. Security as well: we've got communications happening over the wire, as I mentioned. We want them to be encrypted, and furthermore we want the services to both know that they're talking to a reliable party on the other side.
B
When we're just within an application, no function call is going to have a security issue. But when we communicate across the network with a service on the other side, one that's potentially being built by another team, maybe even being run and managed by a whole other company, we need some assurance that we're communicating with the service we intend to be communicating with. Routing comes up as well.
B
We now have the opportunity to have new versions of our services, and we need to be able to control the routing between these services so we can try things out and roll forward or back between new and old versions. And then, of course, we need error handling and resiliency. The network itself is unreliable; the services may be unreliable. You need to be in a position to check the health of these things.
B
You stop sending traffic to an unhealthy service, and start sending it again when it becomes healthy. And then, of course, observability is related to collecting the information from across this distributed service architecture and bringing it all into one place where we can see it, reason about it, share it, and use it to improve our performance. So these are the problems that service meshes solve, and a service mesh is really a pattern: it's not a feature, and it's not a product.
B
It's a way of doing things. And I'm going to switch for a second here to some draft documentation I've been working on, which I want to share. So what is a service mesh? Well, I think a service mesh is a way of solving problems related to security, reliability, and observability that occur when multiple applications are communicating with one another within a given computing environment, by routing the inter-application communications through local proxies, without requiring changes to the applications.
B
Now, you're going to see some of this in a blog post coming up, but I think it's helpful to sort of level-set here on what it is we're talking about. First of all, we're solving problems, right? Service mesh is a problem-solving approach. We caused ourselves problems by going from the monolith to microservices. We also gave ourselves a lot of benefits, right? We now can have engineering teams moving independently; we have more agility, with services that can be recombined to meet new business needs and new challenges.
B
So, you know, microservices have lots of good aspects, but they also have problems compared to monoliths, so service mesh is a problem-solving approach, and the kinds of problems we're solving are typically security, reliability, and observability problems that occur when applications are communicating across a network. Now, these problems occur when multiple applications are communicating. I point that out because these aren't the sort of security, reliability, and observability problems you have within a monolith, right? We're solving problems that have to do with microservices, or applications in general, communicating across the network.
I've been saying "microservices", but really they don't have to be micro, right? You could have a mesh of monoliths. Over here, where we say service, service, service on the right: they can be big, they could be small, they could be tiny functions; it doesn't really matter. Now, the service mesh is operating within a given computing environment. You know, the internet is a giant computing environment, but really it's a bunch of smaller environments connected, and service mesh doesn't really apply to internet-scale problems.
B
It's
going
to
apply
much
better
to
a
computing
environment
that
you
have
access
to,
that
you
can
control,
running,
deploying
and
running
service
mesh.
It's
gonna
typically
require
a
level
of
access
to
the
hosts
that
are
running
your
applications
that
you
don't
get
when
there's
somebody
else's
focus.
That
is,
if
you
want
to
improve
the
reliability
of
your
communications
with
the
say,
Facebook
API,
you
do
not
have
permission
to
go.
Install
service
mesh
proxies
on
Facebook
servers,
there's
ways
of
addressing
that,
but
we're
typically
talking
about
solving
problems
within
a
computing
environment.
One.
B
Quite true. And maybe this part of the definition is also sort of so obvious that I should just take it out; I mean, of course you can't control other people's servers. So we'll see, this is a work in progress. Now, the way we're solving these problems here is by routing the inter-application communications, the communication between the applications, through proxies that are local to those applications. And, importantly, we're not changing the applications.
B
Are you going to go talk to all of your engineers, across all your teams, and have all of them add libraries to their applications that start to solve service-mesh-type problems? Well, you could, but now you've got a big coordination challenge, and one of the brilliant aspects of service mesh is that you can solve these security, reliability, and observability problems without talking to all of your engineering teams. You can do it sort of in an overlay fashion, and I think that's very powerful.
C
There are some caveats. Really, it's a tool for network administration, or for a deployment team. Often we say that all of our customers have a platform team, and this is a tool for them: they can change how things communicate with each other without having to change the communications themselves, and without jumping into the dev teams that exist within their orgs, because there's usually some sort of organizational boundary between them. You know, it really enables the platform and admin teams to do more.
B
Right, it definitely does. So what really is the mesh? Well, really it's just paired proxies, right? We've got a service talking to a local proxy, which then speaks across the network to another local proxy, which speaks to its local service. These white arrows on the left and right are local calls, and we are presuming that they are fast and reliable, and I think that's a fair assumption, right? Local communications are pretty darn fast and reliable.
B
The
network,
although
in
between
not
so
fast
and
not
so
reliable,
and
that's
part
of
the
problems
that
service
mesh
help
solve
now.
Of
course,
you've
got
more
than
one
service
more
more
than
two
services.
So
what
you
end
up
with
is
something
that
looks
more
like
this,
where
all
of
your
services
have
adjacent
local
proxies
that
are
proxying
the
traffic
between
the
services.
Now
we
got
to
here
or
here
without
making
any
changes
to
the
services
and
deploying
proxies
adjacent
to
them.
B
Typically, service meshes have a control plane and a data plane. These terms just mean that there are some parts that route requests between services, and other parts that configure the routing of those requests. The control plane configures the data plane and collects metrics from the data plane; the data plane routes the requests. Now, in Kong, unlike some other meshes, the control plane and data plane instances are all the same binary: it's just Kong, Kong, Kong.
C
So the main thing for a control plane in Kong is that you disable the proxy ports, and the main thing for a data plane in Kong is that you disable the admin ports. You can have both running, but obviously in the sidecar environment you don't want to be exposing your admin API to all the services in your infrastructure, so you turn that off; and your control plane is not going to be routing any traffic, so you just turn the proxy off. So it's more of a convention than something that you have to do, and it's optional.
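To make that convention concrete, here is a minimal sketch of starting the same Kong binary in each role. The prefixes, addresses, and exact listener values are illustrative assumptions, not commands taken from the demo.

```shell
# One binary, two roles, chosen purely by which listeners are enabled.

# Control plane: admin API on, proxying off.
KONG_PROXY_LISTEN=off \
KONG_ADMIN_LISTEN="127.0.0.1:8001" \
kong start -p /tmp/kong-control

# Data plane (sidecar): proxying on, admin API off.
KONG_ADMIN_LISTEN=off \
KONG_PROXY_LISTEN="0.0.0.0:8000 transparent" \
kong start -p /tmp/kong-data
```

Both nodes would point at the same datastore; which listeners are disabled is the only thing that makes a node "control plane" or "data plane".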
C
The other interesting side of it is that the control plane sort of gets started first. You need a control plane established, and all your database migrations run, before any data plane nodes really start up, because otherwise they don't have any configuration. And how are you going to talk to your data plane nodes when they don't have an admin port? So you usually start up the control plane first, and then talk to that one.
C
And for our enterprise customers, the control plane is also where, say, the admin GUI lives, and where the dev portal products are, and so on. So there are sort of different castles there. I hope that makes sense, everyone. I would love anyone to actually give feedback: jump in and reply in chat, or unmute yourself and have a chat with us.
B
Sure. One thing to call out, I mentioned this earlier, but these little boxes, right? It's never like this. Really, there are big services and small services; there's lots of traffic between some, and little between others. But for the purposes of talking about service mesh and what we're doing here, it's fine for us to represent it like this.
C
Can you see the screen? Great, cool. So I was trying to figure out what exactly to show you. So, what I'm doing here on the left: I would guess that my text is too small for people to read; is that a correct assumption?
C
Right
so
up
here
on
the
top
left,
I'm
running,
just
a
really
simple
HTTP
set
up.
It
just
echoes
what
it
just
says:
hello
world
and
it
prints
off
sort
of
debug
Don
about
the
requests
there
receives.
So
we
can
make
a
it's
listening
on
localhost
for
eight
four
four
three
at
the
moment,
so
we're
just
gonna
make
requests
to
it
and
it's
not
going
to
congregate.
C
So it says hello world, and it prints off information about the request that came in; we can see it tells us what it saw. We could also, if we wanted to, make it an HTTP request instead, and of course we need to pass in... okay, so it ignores that. It's listening for both HTTP and HTTPS on the same port, so it'll print off the cert that it's using, the ALPN, or anything else that it receives about the request.
B
James, can I ask a couple of setup questions here? So we're going to be trying out a mesh, paired proxies, right? We have a service talking to a Kong, talking to a Kong, talking to another service. Yes?
C
Correct. So this is all just running local on my laptop. In reality, when we deploy this, our mesh, there is a service at the top here; this would be running in one container. And this, you might call it a service: in this demo it's just an HTTP client using curl, but really it could be another service, one that could either be initiating requests for whatever reason, or receiving requests itself and sending them off to the next location.
C
So in this case here we've got a service up and running, and we're seeing it just tells us information about the requests that come in. Now, what I've done here is I've got two Kongs started up: I've got one at the top I'm calling Kong A, and one at the bottom I'm calling Kong B, and they've been started up with these commands here.
C
So, you know, we're using a Kong prefix to tell them the folder to run out of, and we're using proxy_listen. Here we can see that it's listening on port 8000, and we've got this new listener directive called "transparent". Now, this is something very special for interoperation with iptables; we'll come back to that later, I think. And additionally we're listening on a second proxy_listen, this time with SSL, on 8443 in addition. So those are the HTTP proxy_listen ports.
C
So "transparent" is a new listener directive, very iptables-related, and stream_listen is the new functionality for proxying plain TCP instead of HTTP. We've got the admin listener on port 7001 here, purely because the default is 8001, and if I tried to use it on both they'd clash and it wouldn't start. And we've got a brand-new directive here called KONG_ORIGINS. This is sort of an override that says: if Kong would connect to the thing on the left of the equals sign, then instead connect to the thing on the right.
C
Similarly, we've got a second item here, which says that if you were going to connect, in plain text or with SSL or TLS, to service B on port 8000, instead go to the local one. And then we're using kong restart instead of kong start, because I like to run this over and over again, in case I do something odd or different, or want to play with things crashing, etc.
B
Just
to
quickly
summarize
origins
and
transparent,
these
are
important
for
the
principle
we
mentioned
in
our
service
mesh
definition,
which
is
that
we
don't
require
changes
to
the
applications.
We
need
to
be
able
to
intercept
incoming
and
outgoing
requests
and
responses
from
the
applications
without
changing
the
applications
right.
B
We
don't
want
the
applications
communicating
out
of
the
mesh,
because
then,
of
course,
we
can't
secure
those
communications
and
we
can
make
them
reliable,
but
we
also
don't
want
to
force
our
engineering
teams
to
change
their
applications
to
speak
only
with
the
local
proxies,
instead
of
just
doing
what
they
normally
do
and
to
accomplish
that
we
need
this
combination
of
transparent
and
origins.
So.
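As a rough sketch, the sidecar next to service B could be started with something like the following. The hostnames and ports are assumptions for illustration; the KONG_ORIGINS value uses the "left=right" override James described.

```shell
# Sidecar Kong for service B (illustrative values, not the demo's exact ones).
# 'transparent' lets iptables redirect the app's traffic here unchanged.
KONG_PROXY_LISTEN="0.0.0.0:8000 transparent, 0.0.0.0:8443 ssl transparent" \
KONG_ADMIN_LISTEN=off \
KONG_ORIGINS="https://service-b:8443=https://localhost:8443" \
kong restart -p /tmp/kong-b
```

When this node would dial service-b:8443 (the address on the left of the equals sign), it dials localhost:8443 instead, i.e. the co-located service, so the request is not sent back out over the network.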
C
Being in Kong prefix "service B" here, I'm mentioning that this is the Kong running as a sidecar in front of service B, which in our case is this demo server up here. So we're making it sidecar-colocated, and that's why that KONG_ORIGINS contains its own address mapped to something local to it. So this is saying that the Kong running on, you know, service B, when it receives a request for service B: really, that is meant to go to localhost.
B
And we have to do that because, if we didn't have that, the Kong that received a request for service B, the one that's running adjacent to service B as a sidecar, instead of sending it to service B on the local interface, would reach out over the network and say: hey, service B, are you out there on the network?
B
Notice that these configurations are not configurations that we're sending to the Kong admin API. These are configurations on the Kong node itself, that specific Kong instance that's adjacent to service B, and we'll have some different configurations on the Kong proxy instance that's local to service A.
C
Yes. So down here we've got what I used to start Kong A. We're running it on the service A side, doing the same as before; the admin_listen is just left at the default of port 8001, and again we're using transparent. I've decided to exclude KONG_ORIGINS here, because service A is not actually listening for anything; our "service A" in this example is just curl running. It's not listening for requests; it's purely originating requests.
B
That's right: service A doesn't listen; it doesn't accept incoming requests, it only does its own thing. Maybe it's like a cron job, right? It triggers and does its own thing, but it doesn't do anything in response to incoming requests. We know that's not terribly realistic, but it definitely simplifies this example here.
C
This here is purely me passing in environment variables. You can also put them in kong.conf, but usually these would be things that you pass to your container orchestrator, so Kubernetes, using this field, or any of the other sort of orchestrators or container runtimes: these would be things that you pass in.
C
Obviously, for data plane nodes that don't need an admin_listen, we could just change this to "admin_listen off"; here I've sort of gone for the other case, where they're both running as data plane nodes and we don't have the control plane and data plane separation.
B
Now we've got a question from the audience: where does the database which Kong uses reside? Great question. It resides wherever it used to reside; there's no change. So when we talk about moving to Kong mesh, some things change, we get some new features, but other things stay the same, and the database part of Kong mesh definitely stays the same.
C
Data plane nodes still need to contact the database. In effect, the control plane and data plane don't communicate directly; they all communicate via the database. So you talk to the control plane, you send your configuration, it then writes that to the database, and then the data plane nodes pick that up from the database.
B
That's right, and they cache it until they need to get fresh information. So for the Kong-node-to-database communication, there may be some at startup, or sort of when a Kong node is encountering new requests that it doesn't know how to deal with; but as each node fills up its local cache with known configuration from the datastore, that communication between the Kong nodes and the datastore will decrease.
C
So we turn off proxy_listen; stream_listen is off by default; admin_listen we can put on some other number, say 6001. We don't need the origins directive, and I should, just for the sake of it, put "control plane" on this tail line. So first let's restart this tail... and I've actually just quit it, well done me. I'm just going to continue tailing that, and we can start up what we are now calling a control plane node.
B
While James is doing this, I'll just speak to Shawn's question there that you wrote in the chat: would a typical use case be that a given app and all its services have their own dedicated control plane and data plane Kong environment? So, yeah, "a given app and all its services": we're mostly talking here about a collection of microservices, or a collection of applications (they could be micro or macro or very small indeed, functions) within a given computing environment that you have control over, right?
B
So if you can control the way that the Kong nodes are deployed, and the way that the hosts are configured within the computing environment, then yeah: you would ideally put a local Kong proxy adjacent to every application service instance; they'd all connect to a datastore to get their config; and then you would also have a control plane Kong, or even more than one potentially, that you as an administrator would connect to, to make admin API calls, to retrieve metrics, to do the sort of typical control functions.
C
The big thing is that a Kong cluster, which is really a collection of control plane and data plane nodes, or a mix, is really a boundary of trust. It's where you trust that, essentially, your administrators, the people who have access to it, are all people within your org, and that really it's within a single org. So a lot of organizations will just have one Kong cluster.
C
They'll have the production Kong cluster, and it'll be just one of them, with one control plane and a data plane node for every service they're running. Other orgs might have, you know, one for production, one for testing, one for staging. Other orgs might have, you know, maybe one per department: each department has their own Kong cluster because they want to upgrade Kong individually, or they have their own organizational structure where they want to independently control their clusters. But generally there are very few Kong clusters within an org, we tend to find.
D
At Heroku we call that blast radius, and the idea is that, well, you can have a huge thing that covers everything, but that means that huge thing can take down everything. So you might want to think about it in terms of what your smallest reasonable blast radius is, so that problems don't result in, like, your whole mesh going down. Since it's one centralized Postgres database, at least in our instance, it's a challenging question how big you really would want to make that.
C
We sort of dealt with that; the failure cases behave very differently. The data plane nodes don't stop processing traffic if Postgres is unreachable; you won't be able to write to Kong to change configuration, but the existing nodes will continue running as long as they want. It doesn't actually result in an outage. So it's really about what the organizational trust is, rather than dealing with sort of failure scenarios of software and networks.
B
The key thing that James said there was "new configuration", and this is what makes speaking generally about this a little tricky: different Kong use cases have vastly different sorts of volumes and frequencies of new configuration. In some Kong use cases, configuration is changing many times a second; in other situations, it may change only once a year. And because of that, it can be tricky to make one blanket statement about how reliable the datastore, and Kong's connection to the datastore, needs to be.
C
And that's also a reason that we do support Cassandra as an alternative to Postgres, for people who really need to deal with sort of multi-region setups, or who really need to still be changing configuration while they have network outages. We do support Cassandra as the database, in addition to Postgres.
B
Mentioning blast radius once more: one of the ways that mesh helps reduce the blast radius is by now enabling you to run local Kong proxies; you have a proxy adjacent to every service instance. In the previous iteration of Kong, running pairs of proxies was not really possible, which meant that service-to-service communications had to go through a Kong cluster, which might have had a slightly larger blast radius.
B
I
mean
really
I
do
agree
with
James.
The
blast
radius
is
really
decided
by
the
data
store,
like
that.
The
data
store
and
the
frequency
with
which
your
Kong
nodes
are
needing
to
connect
to
it,
and
if
they're
you
know,
if
they
need
to
connect
a
lot,
then
you,
your
data,
store,
defines
the
blast
radius
if
they
don't
and
they
can
quickly
sort
of
retrieve
and
cash
their
config,
and
then
their
config
doesn't
change.
The
blast.
C
So
just
make
this
quick
change.
My
demo
here
we've
started
up
a
third
comm.
Let's
just
change
this
to
three
pumps
or
rave
change,
and
so
now
it's
done
up,
control,
plane,
node
running.
We
can
see
that,
can
they
connect
to
it
on
port
6,001,
it's
not
running
a
proxy
port
or
anything,
but
we
could
be
sending
a
configuration
there
so
yeah
we
could
turn
this
off
or
whatever
we
want
to
do
really
this
you
can
configure
Kong.
C
however you want. Cool. So then I've gone in and added a service with the name service-b, where it points to the URL for service B on port 8443, and then I've added a route to that service which says: match on service-b. That is just the same as the Kong admin API today, if anyone is familiar with it.
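Those two steps are ordinary admin API calls; a sketch of them might look like this, using the demo's admin port of 6001 (the exact URL and field values here are illustrative assumptions):

```shell
# Register service B with the control plane's admin API.
curl -s http://localhost:6001/services \
  -d name=service-b \
  -d url=https://service-b:8443

# Add a route on that service matching the service-b host.
curl -s http://localhost:6001/services/service-b/routes \
  -d 'hosts[]=service-b'
```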
C
So this Kong, this is your control plane Kong, which would be running constantly, wherever you need it to be, off to the side with your infrastructure. This Kong is the one that you would deploy at the same time as service B, right next to it as a sidecar, and this is the Kong that you would deploy with service A, right next to it in a sidecar. And notice that the config on the one for B doesn't say anything about A; it's all just service-B-specific context.
C
So what this does is say: if there's a request destined for service B, then connect to service B, which is a little confusing. But now, if we jump right back over here and look at the KONG_ORIGINS option that is running on Kong B, we can see that if there is a request destined for service B on 8443, it is instead directed to localhost on 8443, which is this server running locally here.
B
And the reason that is helpful is because now we're in a position to cause communications coming across the network to service B to be run over TLS, without changing service B to support TLS. And, as we had in our service mesh definition, we want to solve problems related to, in this case, security, without requiring changes to the applications.
C
So here we're going to do a curl request, a secure request, to service B on 8443. But notice how it is going: this -x is the proxy option to curl, which says to send this via the proxy at service A, port 8000, and we can see from the proxy_listen that service A's port 8000 is where Kong A is listening. So when we do this request, it should go through Kong A, through to Kong B, and then through to our service at the end.
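The request James describes is roughly the following, with curl's -x option pointing at the Kong A sidecar as its proxy (the addresses are illustrative):

```shell
# Ask for service B, but send the request via Kong A on port 8000.
curl -sv -x http://service-a:8000 https://service-b:8443/
```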
C
So we see it went through Kong A and Kong B: in Kong A we saw the request going out to the server, and in Kong B we saw just the GET which went through. We can see that we got a "hello world", and the request came through. So that's sort of all of it together; that is sort of a demo of the mesh in the simplest way I could think of. Questions?
C
Otherwise, anything that doesn't know what Kong mutual TLS is, or isn't part of the correct service mesh, will just ignore it; and this goes for any TLS service out there in the world, it will just ignore it. So that gives this sort of gradual configuration of mutual TLS communication between Kong nodes.
B
There's a further element that we haven't shown off here, talking more about the future, which is how plugins run. Plugins exist in Kong mesh, and part of the great value you get from Kong is that these proxy nodes can be extended, enhanced, and improved with plugins. But when we have Kong talking to Kong, we need to have some control and awareness over where the plugins run. You can imagine... James, can you put your diagram back up? There we go.
B
You
can
imagine
that
if
ka
and
KB
both
ran,
say
a
rate
limiting
plug-in
and
incremented
a
rate
limiting
counter
we'd
get
two
increments
for
a
single
request,
and
that's
certainly
not
what
we
want.
So
Kong
has
one
more
new
feature
which
will
show
off
in
a
future
call
called
run
on
and
it
controls
where
the
plugins
run
on.
Do
they
run
on
ka?
B
Do
they
run
on
KP,
or
do
they
run
on
both
the
run
on
feature
is
totally
dependent
on
Kong
certificated
mutual
TOS
being
established
between
the
two
Kong
notes,
so
that
the
nodes
can
be
aware
of
each
other
and
thus
decide
or
correctly
determine
where
to
run
the
plugins
will
cover
that
detail
in
a
future
call.
Oh
yeah.
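As a preview, attaching a plugin with the upcoming run_on control might look something like the sketch below; the field values are assumptions based on the feature as described here, not confirmed syntax.

```shell
# Hypothetical: run rate limiting only on the first Kong a request hits,
# so the paired proxies don't double-count a single request.
curl -s http://localhost:6001/plugins \
  -d name=rate-limiting \
  -d config.minute=100 \
  -d run_on=first
```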
C
If you wanted to try this today, you do have to build from source; the Docker images that got built for RC3 are missing a flag, so if you want to try this, you will need to build from source. I think we're trying to rebuild the Docker images today or tomorrow, because we only caught this when we put together the demo and realized the Docker image was broken. So that's why we have our RCs.
C
So, yeah, what I definitely should mention is all this local config: this is if you go the very manual route, if you want to use Kong mesh in your own homegrown deployment scenario. These are sort of the very raw, low-level options that Kong needs. If your operating environment is something such as Kubernetes, then you can automate nearly all of these config options, because you know where things are deploying. So we're actually going to be distributing this... no, not this one.
B
The Kubernetes ingress manager, right at the top, in that tab we were just looking at. So we're thinking about, and working on, some sort of automation and enhancements here, and we'd like you to keep this in mind as well, right? We don't expect everyone to be doing all of this local proxy config manually like we're doing it here every time, but understanding how it works is, we think, an important part of sort of fully understanding Kong mesh.
C
Yes, this is like an early draft that manages all these settings for you, so that you don't have to put all of these in yourself; in the Kubernetes world it's done all for you. Obviously, if you're not using Kubernetes, you'll need to write something similar or do it manually. And we'd love to see what...
B
Kong can be in Kubernetes, but we're sort of orchestration-tool agnostic, so that your service mesh can span all your infrastructure, no matter where it is or what tools you're using. This was a big influence on the design of our service mesh, because not everyone is using Kubernetes everywhere, but they still want to use service mesh, and I think that's something pretty unique from Kong. Absolutely.
A
Alright, so I put a link to Kong Nation in the chat. If you have other questions that are just burning and you can't wait, or aren't burning, please join Kong Nation, our Discourse forum, and drop your questions there, and we'll get them answered for you. And in our couple of final minutes, I wanted to...
A
We will be at KubeCon; we're super stoked about it. We're going to have an awesome booth right over here in the hall, at booth S33. So you'll come in the main entrance, you'll walk by the Oracle lounge, go down the hall, and you'll find our little booth, and you can come by, win a Sonos in a raffle, spin a big wheel for a t-shirt, and talk to our engineers. We're going to have Harry, Josh, and Cooper, and hopefully Thibault and Marco will hang out at the booth a little bit as well.
A
So we will see you guys there if you're going. Please stop by and say hi; I would love to talk to you. So, with that...