From YouTube: [Online Meetup] Kubernetes Service Mesh and 1.0 Settings
Description
During this call, Colin Hutchinson demonstrated the work in progress on a repo to deploy Kong on Kubernetes, as a service mesh. We heard from Aapo Talvensaari about the settings available for routing raw TCP traffic in Kong 1.0, discussed sidecar injection into pods, and the separation between Kong data and control planes.
Resources:
Service mesh on Kubernetes: https://github.com/Kong/kong-mesh-dist-kubernetes/
Blog on the separation of control and data planes: https://konghq.com/blog/separating-data-control-planes/
Automatic sidecar injection in Kubernetes: https://github.com/Kong/kubernetes-sidecar-injector
Join our next Online Meetup: https://konghq.com/online-meetups/
A: Basically, the demo had services talking to other services, all on someone's computer, and so this time around we're going to have Colin show us what it's like to run a service mesh on Kubernetes. Oh hey, Sean, I put you down as presenting, but I believe it's Aapo who's also here, to field questions about the settings that we added in Kong 1.0 and basically how those settings work, the nitty-gritty of them all.
A: So, Aapo, if you're around to answer questions, that's really lovely; I see you here. Before we kick that off, I just wanted to mention that we're always looking for new speakers. We have a community speaker set up for next time who's going to talk about mock APIs and proxying mock APIs with tools like Kong, and if you're interested in speaking on the call, please let me know; we're always looking for people, and we've heard about service mesh a couple of times now.
B: Yeah, I'm Colin. I've worked at Kong for quite a while, since back when it was Mashape, doing internal tooling, which kind of means that I help developers be happy, help build tools, and try to take care of and keep on top of the various ways to deploy and configure Kong. So today I'm going to demo; let's see, where's the screen share button?
B: Cool. So before anyone asks: this is Eclipse Che, and I cannot increase the font size, and I apologize for that. This repository is public, so people can look through it and walk through it; I have a few changes that I haven't pushed up. For this demo we're just using minikube locally, with the driver set to none.
B: So to start, currently nothing is running in Kubernetes. I'm pretty sure I already built that... no, not finished. Okay... there, I did. So I'll talk us through it as I do it. What I'm doing here is using another public repository, kong-dist-kubernetes. I'm using one of its branches to run Postgres, I run the Kong migrations, and then I'm running the Kong instance itself.
B: So, let me see the containers... cool, so Kong's running, just humming away. So what is service B? In service B we wait for Postgres before we start, we double-check the Kong migrations, and I set up the networking; I'll circle back to that. So the two things I'm running: here is the Kong instance; this Kong is configured for transparent proxying on port 7000, and this ncat is simply listening on port 80. So now that that's running...
B: So service A: again we're waiting for Postgres, we're running ncat, which will attempt to talk to service B on port 8080, and again we're running Kong on port 7000 transparently, so it'll catch it, right. Oh, and we're configuring Kong via config in the container, and in both of these instances what I left out is the istio-proxy-init. This is rewriting the iptables rules on the pod so that all traffic is redirected, with the exception of traffic that runs as user ID 1337, of course.
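The proxy-init trick Colin describes, redirecting everything except the proxy's own UID, can be sketched roughly like this (the port and UID follow the demo; this is an illustration, not the actual script from the repo):

```shell
# Illustrative proxy-init-style bootstrap; run once with NET_ADMIN.
# Redirect all outbound TCP on the pod to the transparent Kong proxy
# on port 7000, EXCEPT traffic from the proxy itself (UID 1337),
# so Kong's own upstream connections don't loop back into Kong.
iptables -t nat -A OUTPUT -p tcp \
  -m owner ! --uid-owner 1337 \
  -j REDIRECT --to-ports 7000
```

The UID exception is what makes the loop safe: everything in the pod is captured except the proxy process, which is the one thing that must be allowed to dial out directly.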
B: So all traffic goes to the Kong container, transparently, via iptables rules. This ncat... yeah, this iptables rule will send it to this Kong, so all this ncat is doing is just echoing things straight through to the service. So if I go back to the logs for Kong... cool, so it's working; the demo gods did not frustrate us. I can tell it's working because Kong is logging that something connected to it and sent traffic through, and I can prove that; let's switch, so I'll tail the logs for service B.
B: Then you can see service A talking transparently through Kong to service B. I guess the only thing that I didn't show is just the Kong config, which is all of this. Because we don't have declarative config, I'm just using HTTP, and this is an entrypoint check that Kong is alive, and then I put in these two rules. For those that were on the last call, this is basically taking an example that I did and just translating it to Kubernetes in minikube, and so I think that covers the meat of transparent proxying with Kong.
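Since Kong 1.0 has no declarative config, a setup script like the one Colin shows boils down to waiting for the Admin API and POSTing a couple of objects over HTTP. A minimal sketch (the service name, URL, and ports here are stand-ins for the demo's values, not the repo's exact rules):

```shell
# Wait until Kong's Admin API answers before configuring anything.
until curl -sf http://localhost:8001/status >/dev/null; do sleep 1; done

# Register service B as an upstream service...
curl -s -X POST http://localhost:8001/services \
  --data name=service-b \
  --data url=http://service-b.default.svc:80

# ...and attach a route so traffic can reach it through the proxy.
curl -s -X POST http://localhost:8001/services/service-b/routes \
  --data 'paths[]=/'
```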
A: Cool. So can you talk a little bit, Colin, about how this would change in production? Because we've seen a lot of local demos of service mesh, but obviously most people won't be running it locally. So, you know, could you point out maybe some places where this would be different, or where you might configure things differently, or security considerations, stuff like that?
B: One way it will change in production is the initial bootstrapping. So right now, if you want to run a transparent Kong mesh proxy, then you would have to, you know, kind of splice in all this stuff. So, let's see: this is your actual microservice, the service that you're running in the pod. But you don't want all of this stuff alongside your service... James, who... where's... I don't think James is on the call.
B: So James is developing a Kong plugin, and the general idea is: you would start the Kong control plane, which I happen to have running here; that's what these three are. The reason I needed that control plane was that the Kong configuration talks to that control plane. So in a production setup, you would start the Kong control plane, and then you would go either via the Kong command line to the Admin API, or in Enterprise probably Kong Manager, and you would enable it [inaudible].
A: So that will be a separate... like a separate tool, but it'll be enabled as a plug-in, basically; it won't be something separate, Kong will configure it. That's a plug-in, yep.
A: We'll check on that. I think we recently added the ability to make some nodes control planes and some nodes data planes. Do you want to talk about what's going on there in a little bit more detail?
C: We can have a cluster of Kong nodes, a set of Kong nodes that only have the Admin API enabled, and those Kong nodes could perhaps be put in front of a private load balancer. Those Kong nodes will only be able to configure the system, but will not be able to execute API requests on the system, and the reason for that is that the proxy service will be disabled. So by separating the planes, we are effectively saying a set of Kong nodes can process...
C: Correct. I mean, the data plane, you know, the Kong nodes that are executing the API requests, the proxying functionality: those Kong nodes should never have the admin functionality open, for the simple fact that the clients consuming APIs through Kong do not need that available in any possible scenario. So we can start a new set of Kong nodes that are admin only, no proxy, and those Kong nodes can be in a private subnet behind a private load balancer, where only the people who really have to have access to them can access the nodes, right?
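In configuration terms, the split described here is just a matter of which listeners each node opens. A sketch using Kong's listen properties (the addresses are placeholders):

```shell
# Control-plane node: Admin API only; the proxy listener is disabled,
# so this node can configure the cluster but never serves API traffic.
KONG_ADMIN_LISTEN="0.0.0.0:8001" KONG_PROXY_LISTEN=off kong start

# Data-plane node: proxy only; the Admin API is disabled, so clients
# consuming APIs can never reach the admin surface.
KONG_ADMIN_LISTEN=off KONG_PROXY_LISTEN="0.0.0.0:8000" kong start
```

Both kinds of node point at the same datastore; only the exposed listeners differ.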
A: Great. So, do you want to unmute? I think we're kind of going to, informally or maybe formally, move into the Q&A portion of... oh, before we move on: did you have anything to add? Are you all set?
A: Thank you so much for this demo; it was really cool. So hey, I see you've unmuted. If anybody else has questions about any of the settings that Colin went through, now would be a great time to ask them, since we have both Aapo and Marco. Oh hey, Sean, we've got everybody here, so feel free to unmute and ask away, or chat.
D: So what we added here in Kong 1.0: we added stream routing, which we didn't have before. Stream routing and stream proxying is layer 4 proxying, so it's raw TCP at the moment. Currently we don't support UDP, but that might be added in the future. So at the moment, if you want to use Kong to proxy, let's say, gRPC traffic, you can use the L4 streams for that. In the future, I guess, we will be adding layer 7 support for gRPC as well, but that's in the future.
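To use the L4 side, a node first needs a stream listener. A sketch, assuming Kong 1.0's `stream_listen` setting (the port is arbitrary):

```shell
# Open a raw-TCP (stream) listener; without this, only the HTTP proxy
# listeners are active and L4 stream routes have nowhere to attach.
KONG_STREAM_LISTEN="0.0.0.0:9000" kong start
```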
D: So I can quickly show you; you asked about how iptables goes with all this. There is a section, I think it was called "configuring transparent proxying rules". This is what Colin's script basically does, or similar duties. So we add iptables routing rules; on Mac, I think, or on BSDs, you can use another one for the rules, which is PF or something like that. I can update this document later on to cover those scenarios, but these are mostly on the Linux side right now.
D: So, for example, this rule here says that if the destination port of the TCP packet... this is service A trying to call service B. If service A tries to call that destination, we actually transparently change the request to go to the sidecar node of service A, on port 9000 in this example, and that's why we do have...
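That rule can be sketched as a single nat-table redirect (the ports follow this example; the repo's transparent-proxying section has the real rule set):

```shell
# If service A dials service B's port, rewrite the connection so it
# lands on service A's own Kong sidecar on 9000 instead; the sidecar
# then routes and load-balances to the real destination.
iptables -t nat -A OUTPUT -p tcp --dport 19000 \
  -j REDIRECT --to-ports 9000
```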
D: Apart from that transparent option, we also have the origins option. Origins you can enable on certain nodes; this is node-level configuration. So, for example, if you want, you can do things like change the port. With origins, when we do the load balancing and Kong tries to proxy, because this node actually knows that it's service A calling service B, it can change the port to a different one. We can also do things like HTTP to HTTPS upgrading, and that also works as TCP to TLS when you're dealing with layer 4.
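A hedged sketch of what an `origins` entry might look like (the pair syntax and the values here are illustrative; check the Kong 1.0 property reference for the exact format):

```shell
# Node-level remap: when the balancer resolves service B's advertised
# origin, dial this other origin instead -- which is also where an
# http -> https (or tcp -> tls) upgrade can be expressed.
KONG_ORIGINS="http://service-b:8080=https://localhost:18080" kong start
```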
A: Great, I think this is great. Were there any other settings that we added recently for service mesh that we wanted to go over, I guess?
D: So here I create routes and services. This is for TCP, and I create this as normal. This is the one: service B is listening on TCP port 19000, and I set that up as a service; I create a new service with an upstream URL of raw TCP straight to service B on 19000, and then I create...
D: Yeah, and there are other rules: you can route by protocol, and then you can add different destination IPs, or you can even use subnet masks here, and then you can add ports. As you see, there's one; you can define multiple rules in one route. Right now these L4 proxy rules, like destination IP and destination port, are only used for stream routing, but we are looking at maybe adding the possibility to use those rules even on, let's say, layer 7 proxying and routing, like HTTP and HTTPS.
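Put together, the TCP service and route look roughly like this against the Admin API (the names and ports follow the example above; the `destinations` matcher is the L4 analogue of `paths`):

```shell
# A service whose upstream is raw TCP straight to service B on 19000.
curl -s -X POST http://localhost:8001/services \
  --data name=service-b-tcp \
  --data url=tcp://service-b:19000

# A TCP route matching on destination port; several sources/destinations
# rules (IPs, CIDR subnets, ports) can live on one route.
curl -s -X POST http://localhost:8001/services/service-b-tcp/routes \
  -H 'Content-Type: application/json' \
  -d '{"protocols":["tcp"],"destinations":[{"port":19000}]}'
```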
A: So we've been having a little bit of a side conversation in chat about automatic sidecar injection in Kubernetes, and so I was wondering if we might want to just talk about how that works; that repo that just got posted in the chat, I'll put it into the notes. But I was wondering if somebody wanted to talk about how that works, specifically.
B: Yeah, it's the proxy-init. So instead of doing that in your own deployment YAML, you do it as a sidecar injection, which means any time you start a pod, those two get injected: the proxy-init as an init container, so it just runs once and mangles the iptables rules on your behalf, and then the Kong mesh node.
B: So the istio/proxy_init Docker container is actually not an Istio container; it's just a container that has an iptables bash script to mangle the iptables rules such that the pod will proxy all traffic, with the exception of one user ID. The goal there is to funnel all traffic within the pod to Kong, or Istio, or whatever else, and so on and so forth. So a lot of other kinds of mesh overlays tend to use the proxy-init too.
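In pod-spec terms, what the injector adds amounts to something like this (the image names and layout are placeholders, not the injector's real templates):

```shell
# Illustrative pod after injection: one init container to mangle
# iptables, plus the Kong mesh sidecar next to the app.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: service-a
spec:
  initContainers:
  - name: proxy-init           # runs once, rewrites iptables, exits
    image: istio/proxy_init    # just an iptables script, not Istio itself
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]
  containers:
  - name: app                  # your actual microservice
    image: example/service-a
  - name: kong                 # transparent proxy, runs as UID 1337
    image: kong:1.0
EOF
```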
A: And then, if you're injecting the sidecar into every pod, does that handle service discovery? That kind of gives your Kong cluster a sense of what's running?
A: So, well, okay! Well, we will maybe have a deeper look at that on a future call; it sounds like there's a lot going on in there. Cool. So does anyone else have any questions or topics of discussion? We are now into our open, open-agenda portion.
A: All right, great! So, as I mentioned, next time we are going to have a community member talking about proxying mock APIs. If anybody else wants to get on the agenda, feel free to just add your name to the future topics section, and I'll reach out to you and see what you want to talk about and get you on the schedule. So thanks so much for joining us, everybody, and have a great month.