Description
Magicians never reveal their secrets . . . but today, we reveal everything! Behold the mysterious Envoy and the magic of mesh in Kong Mesh and its open source sibling, Kuma. Spoiler: the secret is in the sidecar! Join this mesh-by-example talk to learn about how the service mesh manages certificate rotation, cross-zone communication, and service discovery. This talk will explain to service mesh newcomers what application developers can offload to the sidecar proxy — and why it’s a cost-effective way to achieve your reliability and security objectives.
Caitlin: All right, why don't we go ahead and get started? Good morning, good afternoon, or good evening, everyone, depending on where you are joining us from, and thank you for taking the time out of your day to join us. I'm Caitlin Barnard; I am one of the members of our developer marketing team here at Kong, and I would like to welcome you all to Tech Talks by Kong. Tech Talks is a webinar series dedicated to our developer audience: we feature open source products and topics relevant to you, with more extended live demos. At the end of the presentation today, we will open it up for Q&A and discussion, so please feel free to use the chat function throughout, but if you have any questions today for Charlie, please make sure to enter those in the Q&A at the bottom of your screen. That'll help us keep track of them and make sure we get to all of your questions at the end. This is being recorded, so you will receive a copy of the recording and slides via email within 24 hours. So I would like to welcome our speaker today, Charly Molter, who is the engineering manager of our mesh team here at Kong. Charlie today is going to be talking about the magic of service mesh and everything that your sidecar does for you.
B
Hi
everyone
thanks
Caitlin
thanks
everyone
for
joining
so
as
said
I'm
going
to
talk
about
the
magic
of
service
mesh.
So
why
am
I
giving
this
talk?
B
A
lot
of
things
have
been
said
around
sidecars
recently,
and
so
I
was
looking
for
good
content
for
people
to
try
to
get
a
better
understanding
of
what
sidecars
actually
do
and
I
couldn't
really
find
anything
for
that.
So
I've
decided
to
try
to
get
a
stab
at
it
and
maybe
bring
something
some
knowledge
that
is
useful
for
it
to
everyone.
There are three different levels at which you can take this talk. Either you're completely new to service mesh, or you might not even be interested in using a service mesh; in that case, it's just pure engineering culture: just learn how it works, it's an interesting pattern, maybe it's reusable somewhere else. The second level is that you're already a user of service mesh, or you're starting your service mesh journey: this will teach you a little bit about the internals, and it's always good to understand a little bit about how the tech you're using actually works underneath.
So a service mesh architecture, if you take Kuma or Kong Mesh, has all of it: it's multi-zone, there can be external services, there's a bunch of CPs (control planes), etc. That's a lot of information, so for this talk we'll go down to the smallest possible deployment, which is a control plane and a bunch of services. In Kubernetes these services are pods, so you have multiple instances of a service, and they all connect to the control plane. And then each of these services has a sidecar next to it.
So why a sidecar? A sidecar is a very simple security model. It's very similar to what you do with Kubernetes, which is pod isolation: it's imperfect, but if you've decided to go with Kubernetes, that's probably a trade-off you've already considered and evaluated.
B
Similarly,
containers
did
a
lot
of
work
to
achieve.
Multi-Tenancy
multi-tenancy
is
one
of
the
hardest
problem
in
computer
science
and
containers
through
c
groups
have
actually
done
CPU,
CPU
and
memory
isolation
in
a
pretty
okay
way.
B
It also has great failure isolation, in the sense that it's a failure mode you've already considered: when your sidecar crashes, your pod loses connectivity, which is very similar to your pod actually crashing.
So it's probably something you've looked into and are already handling, even without a service mesh; adding a service mesh doesn't introduce another layer of complexity or another layer of failure cases that you need to consider. In a similar manner, your sidecar is going to scale exactly like your app does: if you need more capacity, you'll increase the number of pods you have.
This is different than if you have a proxy that is external to your pod, in which case you have two dimensions, and these things don't just add up; they actually multiply together in complexity. So when you need to scale things up, do you need to add more proxies? Do you need to add more pods? Do you need to add both?
This is a lot harder to figure out and reason about than when you have a one-to-one mapping, like with the sidecar. And then finally, upgrading your sidecar is exactly like upgrading your app: you just restart it. It's pretty fundamental in the construction of things like Kubernetes that the pod is disposable; you just recreate it when you need to. If you cannot restart your application, the problem is not with the service mesh; the problem is with your application.
You should work very hard on being able to restart your application fairly often, so the sidecar only becomes a forcing factor for being able to restart your apps quite easily and quite frequently. These arguments have been discussed at more length in this article by William Morgan that I invite you to check out.
The control plane will verify this token and generate the sidecar configuration that matches the state of the mesh. So what is the state of the mesh? The state of the mesh is the different policies that apply to your instance. What we call policies are the different configurations that you can set in your mesh: things like retries and timeouts, but also security configurations like your certificates and things like that.
But it's also all the different instances of the other services that your sidecar is going to talk to. We'll go into this at more length later, but your sidecar will enable you to specify your load-balancing strategies and things like that. So once the control plane has computed all this configuration, it will send it down to the sidecar.
The sidecar will transparently reload this configuration to stay up to date, to receive and send traffic as it needs. It's very important to understand here that it's transparently reloaded, and that means you don't need to restart your sidecar. You don't really need to do anything on your applications.
B
This
is
quite
useful,
for
example,
in
the
middle
of
an
outage
when
actually,
for
example,
setting
reducing
the
size
of
a
connection
pool
to
free
up
some
some
capacity
or
returning
more
proactively
Eris
would
actually
get
you
out
of
an
outage
if
you're,
using
a
library
in
in
your
application.
This
might
be
something
you
can
do.
You
can
set
the
size
of
your
configuration
of
your
of
your
connection
pool.
However,
this
is
not
something
you're
going
to
be
able
to
transparently
set
and
reload
without
changing
your
app.
B
B
You probably don't want to restart everything when things are burning. So whenever the state of the mesh changes again, the control plane will regenerate this configuration and push it again to the sidecar. This is done in a loop, and you don't really have to do anything on your side; this is all handled by the service mesh.
So now we'll do a little segue, where I'll need to introduce a little tool for the rest of this talk, where we'll walk through a set of examples. Envoy has a pretty complete admin API; it's so complete that you can do things like shutting down your sidecar through it, and obviously we don't want to expose that anywhere too widely. So we provide a way through the control plane to actually access this API.
From there, via the CLI or the GUI, you can access four different types of inspect APIs. The policy API lists all the policies that apply to your data plane. The config dump API gives you the full Envoy config dump; this is very verbose and very complete, but sometimes it's the best way to actually figure out deep and complicated issues.

The stats API is live, and sometimes it's just easier to connect to the sidecar and get the latest, up-to-date information that way. And then finally, the clusters API: a cluster in Envoy is similar to a service in Kubernetes, so behind a cluster in Envoy there's a set of endpoints, and with this API you can get very in-depth statistics, down to every single endpoint, for things like number of requests, errors, connections, and things like that.
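As a rough sketch, here is what querying these inspect APIs looks like with kumactl; the data plane name demo-app is hypothetical, and flag names can differ between Kuma versions:

```shell
# Policies that apply to a given data plane
kumactl inspect dataplane demo-app

# Full Envoy config dump for that sidecar (verbose, but complete)
kumactl inspect dataplane demo-app --type=config-dump

# Live stats and per-endpoint state from the clusters API
kumactl inspect dataplane demo-app --type=stats
kumactl inspect dataplane demo-app --type=clusters
```

These commands assume a running control plane and a configured kumactl context; the same information is available from the GUI.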
So whenever you configure Kuma to use mTLS, what you'll end up doing is this: you enable the backend that's called ca-1, and then you define this backend. This backend has the name ca-1, as I just said, and we're using the builtin backend in that case, so the CA and its key will actually be stored in the data store of Kuma.
It's good to get started with, but we have other backends that might be more secure and more correct; for example, with Kong Mesh you can store all of this in Vault. And then you specify a certificate rotation policy. In that case we say one day, so every certificate will only be valid for 24 hours.
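Put together, the setup described above looks roughly like this as a Kuma Mesh resource (a sketch based on the Kuma policy format; exact field names may vary by version):

```yaml
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabledBackend: ca-1        # the backend currently in use
    backends:
      - name: ca-1
        type: builtin           # CA and key stored in Kuma's data store
        dpCert:
          rotation:
            expiration: 24h     # each data plane cert is valid for one day
```

A provided backend, or in Kong Mesh a Vault-backed one, can be swapped in for builtin when the CA material has to be managed externally.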
So whenever a sidecar connects to a control plane, the control plane will issue a certificate for the sidecar and send all the data down. The control plane doesn't need to save anything about this certificate, because the certificate is only valid for the lifetime of the sidecar.
B
If
the
sidecar
restarts
it
just
gets
a
new
certificate
also
when
it
gets
close
to
expiration
time,
the
control
plane
will
proactively
renew
the
certificate.
So
that
means
certificate
rotation
is
not
something
that
your
operators
need
to
deal
with
anymore.
This
is
all
that
by
the
status
mesh,
this
certificate
will
contain
in
the
sun
some
information
to
identify
precisely
the
sidecar.
This
takes
form
of
speaker.
Ids,
speaker
is
a
protocol.
It's
a
cncf
project
with
a
spec
that
is
very
well
defined.
B
B
Whenever we connect a client to a server, each side will be able to verify that the other side actually has a SPIFFE ID that they want. That way you can easily say "I only want service X to be able to talk to me" or "I don't want service Y to talk to me", and things like that. This is very easy, and it's all relying on these SPIFFE IDs.
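That "only service X may talk to me" rule is what Kuma's TrafficPermission policy expresses; a sketch on Kubernetes, with hypothetical service names:

```yaml
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: allow-x-to-backend
spec:
  sources:
    - match:
        kuma.io/service: service-x   # only this SPIFFE-verified identity...
  destinations:
    - match:
        kuma.io/service: backend     # ...may reach this service
```

The identities being matched are the ones proven by the SPIFFE IDs in the sidecar certificates, not by IP addresses.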
What will happen is that new certificates will be issued from this new CA, but the old CA will still be sent to all sidecars, so sidecars will be able to accept connections from clients that have certificates issued from either ca-1 or ca-2, after a full rotation period or even quicker if we want. (CA, by the way, stands for certificate authority.)
So after the expiration period, one day in our case, all our certificates will have been reissued, so they will all be on ca-2. When this has been done, we can just remove ca-1, and that's it: we've rotated the CA.
So how about metrics? The second most common use case for service mesh is being able to have a unified view of every service in your cluster. Every application will have a set of standard metrics coming from its sidecar. These are purely network metrics, so they're going to be HTTP response codes and things like that, for example. Envoy will expose these stats by default on port 5670.
We have a Prometheus scraper that will enable you to automatically discover and get these metrics everywhere. You can also set up your sidecar so that it actually scrapes metrics from your application as well. The advantage of that is that both your sidecar metrics and your application metrics are going to be exposed with the same labels, so it's very easy to build a common set of dashboards and things like that, combining sidecar metrics with your application metrics.
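As a sketch, enabling these sidecar metrics in Kuma is a mesh-level setting (field names per the Kuma metrics configuration; the backend name is arbitrary):

```yaml
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  metrics:
    enabledBackend: prometheus-1
    backends:
      - name: prometheus-1
        type: prometheus
        conf:
          port: 5670        # the default Envoy stats port mentioned above
          path: /metrics
```

Application scraping, when enabled, is merged into the same endpoint so Prometheus only has to discover one target per pod.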
Also, these metrics will be identified with a common set of labels, regardless of where they're running: whether it's Kubernetes, whether it's Universal on a VM, or whether you're migrating from one to the other. It's a great way to actually provide abstraction in the process of migrating to Kubernetes, for example: you first get on a service mesh, you have a common view of your network, and now, when you're migrating, you can finally understand what happens. Application metrics are, you know, whatever your application is going to expose as metrics.
Okay, one of the great advantages of that is you don't need to expose a port from your application outside of the pod, so everything is contained behind the sidecar and protected by the sidecar in some ways.
So our next example is around endpoint discovery and load balancing. Whenever a pod is added or removed, or even goes unhealthy, the CP is going to recompute the configuration of all the clients of this service.
So that means whenever we add an instance, all the clients that connect to the service in front of this instance will get this new instance, so they will suddenly start sending requests to it. This happens very quickly, usually within a second, and you can configure the load-balancing algorithm that's used. If you're only using raw Kubernetes, the only thing you can do is round robin; Kubernetes also abstracts away the instances behind the service, so you won't be able to do the very specific things that we'll show in the following examples.
So here, if we look at our inspect API and we look at our clusters, we can see there are three different instances, 22, 23, and 24, and in our case they all received 23 requests.
So what this enables us to do is things like outlier detection. Outlier detection is a policy that enables you to set up a threshold of failures above which you will stop sending requests to the instance that breaks this threshold. In our case, we're going to set a threshold of 15, so any instance that has more than 15 percent errors will stop receiving traffic.
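In Kuma, this outlier-detection threshold lives in the CircuitBreaker policy; a sketch with a hypothetical destination service and illustrative volumes:

```yaml
apiVersion: kuma.io/v1alpha1
kind: CircuitBreaker
mesh: default
metadata:
  name: eject-failing-instances
spec:
  sources:
    - match:
        kuma.io/service: '*'
  destinations:
    - match:
        kuma.io/service: backend
  conf:
    interval: 5s              # how often ejection decisions are re-evaluated
    baseEjectionTime: 30s     # how long an outlier stays ejected
    detectors:
      failure:
        requestVolume: 10     # minimum requests before judging an instance
        threshold: 15         # eject instances with more than 15% failures
```

Under the hood this is translated into Envoy's outlier detection configuration on every client sidecar.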
So if we take our example again with our three different instances: our first instance has a hundred percent success rate, so we can see it's healthy and it's going to keep receiving requests. The second one has an 81 percent success rate, so its failure rate is 19 percent, and so it has failed the outlier checks, so we will stop sending requests to it. And our third instance has a 90 percent success rate, and it's still healthy.
So how about we take it down one level? We put this threshold at five percent now, and we can see that two instances are actually kicked out.
This is great, and it can increase your SLO significantly. For example, if you sometimes have Java processes that end up on slow GCs, pausing in stop-the-world collections and stuff like that, you will be able to stop sending requests to these instances while they recover, and then they will come back and be able to get back to a good state.
The one problem with these kinds of things is that usually you implement them yourself, and the edge cases are tricky. So what happens is: you implement it, it works well, and then suddenly an upstream dependency actually starts failing and you have a 40 percent failure rate on every single instance, and so your outlier check actually kicks out all your instances, and you end up with a zero percent success rate instead of a sixty percent one.
Right, so what is great is that, because we're leveraging Envoy, it already comes with a bunch of different ways to avoid shooting yourself in the foot. In this specific case, you can set a max ejection percent, where if more than 50 percent of the instances are considered outliers, it will actually stop obeying the outlier detector.
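That safety valve maps to a maxEjectionPercent field in the circuit-breaker configuration; a fragment in the Kuma CircuitBreaker policy format (a sketch; field availability may vary by version):

```yaml
  conf:
    maxEjectionPercent: 50    # never eject more than half the instances,
                              # even if more of them trip the failure detector
    detectors:
      failure:
        threshold: 15
```

With this cap in place, the upstream-failure scenario above degrades to a partial ejection instead of kicking out the whole fleet.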
This is one of the great benefits of building on top of Envoy: Envoy is a pretty major piece of software with a lot of different configuration options, and so it comes with a bunch of settings that will enable you to specify very specific configurations to achieve the best availability and the highest reliability.
So, in conclusion: sidecars implement very complex algorithms. As I said, Envoy was built by Lyft and has contributions from a lot of places; I think every single public cloud has at least some components that are built on top of Envoy, and these algorithms are very solid. The configuration and the defaults are pretty good, and you can feel safe that this is going to be a lot better than whatever your application team would develop. Moreover, your application team can focus on providing value to your business rather than working on reliability algorithms.
That is neither their specialty nor what will bring profit to your company. And then another point is: whenever you're evaluating the overhead of a service mesh, do remember to compare apples to apples. We sometimes see comparisons where people run their workload without the mesh, then they enable the mesh, they turn observability on, they turn mTLS on, and then they say "well, my latency and my CPU usage are higher", and it's like: yes, of course, you've got encryption and observability on top of that.
Also, think about how, in the event of outages, your time to recovery is going to be improved thanks to the use of a service mesh: the sort of example where I was saying you have a problem with a timeout or an overload and stuff like that, and you can quickly jump in, create a policy, push this policy, and get back to a better state until your application teams work on actually solving the bug, which will then be pushed to production, at which point you'll be able to remove this configuration. And then finally, if you're using Kuma, do use kumactl inspect and the GUI that comes with it to figure out how things work internally; that's what the dev team uses a lot.
That's sometimes what we ask for in issues to actually figure out what's happening, so do get used to it, because it has a lot of value. So that's about it. Thank you very much, and yeah, let's jump into Q&A.
Caitlin: One of the things I was hoping you could talk a little bit about, Charlie, that you kind of covered throughout, but just to lay it all out together: what is the difference between Envoy, Kuma, and then Kong Mesh?
Charlie: Right, so let me get back to my very first slides. Envoy is purely just the sidecar: Envoy is a proxy that is used by Kuma and Kong Mesh, but also by other service meshes, to actually provide the sidecar functionality.
The one great feature of Envoy, in some ways, is the xDS protocol, which enables the proxy to be remotely configured by a server on the other side. So Envoy will be able to connect to the control plane, in our case, and get its configuration from it.
Then Kuma is a CNCF service mesh: it will enable you to run sidecars next to your instances, and to run a control plane that will enable you to create policies and apply these policies to the sidecars. And Kong Mesh is the enterprise offering built on top of Kuma, built by Kong, that provides more features and easier operability compared to Kuma.
I think, yeah: how much of the CPU is used by Envoy versus the app, five percent, ten percent, twenty percent? It does depend; it depends on what your pod does. So if your pod is actually just, you know, an echo service doing ping-pong, it's quite likely that the vast majority of the usage will be Envoy, because Envoy will be doing all the heavy lifting.
So Stephen is asking: Kuma being a data plane, does it route through Envoy for its configuration? Are Envoy and Kuma configurations separate entities from the control plane? So Kuma is the whole service mesh, right: it's both the control plane and the data plane, the data plane being a very small process that runs next to Envoy. And then Kuma has its own configuration, and this configuration gets translated into Envoy configuration.
And yes, Musa, it's typically under five percent.
No, so the CP runs as a set of instances somewhere, right: it's a Kubernetes deployment, and that's the CP. And then your DPs, your data plane proxies, are going to start next to your app, inside the same pod. So what we typically do in the case of Kubernetes is we have a webhook that will intercept the pod configuration and inject the sidecar container into your pod configuration before it gets to Kubernetes.
So, well, there are three questions there, so let's take them in order. Are there situations in the pod lifecycle where we would need to keep in mind how the sidecar behaves and tailor configuration to it? Yeah, so it's almost transparent, but not fully, because the sidecar redirects all the traffic. In some default cases you don't have network connectivity straight away when your app starts, so sometimes it requires the app to actually retry connections; for example, if it connects to a database, it might fail the first connection to the database, but then the second one will succeed. So this is one thing that can happen.
These kinds of things are pretty minor, but there are edge cases where sometimes you do need to understand a little bit more how things actually work inside. If you have a follow-up to that, or if I haven't answered completely, just tell me again. And then: can Kuma coexist with other ingress controllers and do rolling releases service by service? Yeah.
So Kuma really deals with the traffic inside of your cluster, right, so you can still have an ingress controller.
We, for example, support the Kong Ingress Controller and others. You can still use it, and it still works the same way, really, so whatever you were doing before, you can still do.
On top of that, you can also control traffic inside your cluster and through routes, to selectively take a subset of instances that you might want to send traffic to from an east-west point of view, even within your cluster. And then: is multi-cluster mesh communication between multiple zones possible over a private network? Yeah.
So what happens is we have these little things called zone egress and zone ingress, zone egress being optional. When you go across zones, you will actually go through this zone egress and contact the zone ingress, and this zone ingress is typically behind a load balancer, so that's how it's reached; it will then reroute the traffic to be able to reach the service. So that's how multi-zone communication works. Yeah, so in that case you have two private networks and just a tiny bit that actually needs to have a public IP.
Caitlin: So the first is our Slack channel, so if you have questions, please feel free to reach out to us on there, and then I will also drop the link to our community page. This is where we have really all of our community channels for you to take advantage of, but it's also where the list of our upcoming Tech Talks is, so we would love to see you all at a future one of these as well. We talk about service mesh; we talk about gateways and ingress.
All right. And then the recording of this session will be sent out via email; I try to do same day, but within 24 hours at the latest you'll get that, along with a copy of the slides.
Charlie: No, thanks a lot, and yeah, don't hesitate to come on Slack; we'll happily have a chat and help you out on your service mesh journey.