Description
Install Kuma Now: http://bit.ly/2PdDa9j
Kong CTO and Co-Founder Marco Palladino discusses what it takes to run modern infrastructures and how Kuma, a new universal service mesh, can help tackle the challenges that come with decoupling and distributing software. He is joined by Envoy creator Matt Klein and CNCF CTO Chris Aniszczyk to talk about the importance of open source ecosystems.
Marco Palladino: We are seeing it every day around us. Every product is becoming a digital product, and when products become digital, they become cloud native. We talk a lot about platforms and architectures, but what really matters is: is our business scaling and growing? The business is the main driver of any technological transformation, and so to grow the business we focus on our products. We want to make existing users and customers happier.
The application teams rely on us, the architects, to provide them with the best possible infrastructure to run their applications. A modern infrastructure is really like running a city: the architects build the roads and the bridges so that the application teams can focus on building the products, and on the things that matter for the business, the users, and the customers.
But like running a city, we need to connect different places together; we need to enable the flow of traffic from one building to another. We need electricity, we need police departments, we need security, we need routing, we need street lights, we need street signs. And the more buildings, the more people, and the more cars in our city, the more complex the infrastructure we need in order to run it in an organized way.
We are replacing the reliability of the CPU with the unreliability of the network, and some patterns and techniques, like service mesh, can help us with this. Just as we don't want to build our own infrastructure from scratch, we also don't want to build our network management from scratch. We want to leverage technologies that can help us with this, and so, instead of building everything into our applications, we can outsource it to an out-of-process proxy that runs alongside every instance of every service.
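The out-of-process sidecar idea can be sketched with a minimal proxy configuration. This is an illustrative Envoy fragment, not something from the talk; the port numbers and the `local_service` name are assumptions, and in a real mesh the control plane would generate configuration like this for you:

```yaml
# Hypothetical sidecar sketch: a proxy listening on port 9901 forwards
# inbound traffic to the application on 127.0.0.1:8080, so the
# application itself never has to implement network logic.
static_resources:
  listeners:
  - name: inbound
    address:
      socket_address: { address: 0.0.0.0, port_value: 9901 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: inbound
          cluster: local_service
  clusters:
  - name: local_service
    type: STATIC
    connect_timeout: 1s
    load_assignment:
      cluster_name: local_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 8080 }
```

Because the proxy runs out of process, the same pattern applies to every service regardless of language or platform.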
Service mesh, until now, has been a very hard technique to implement in production. We learn a lot from our users and our community, and we hear a lot from you that it's complicated: it's hard to implement, it's hard to deploy, it's hard to maintain. That's why, 20 days ago, we released Kuma, our open source, universal service mesh for every environment in the organization, and today we are releasing 0.2, a new version of Kuma. Kuma is a universal service mesh built on top of a very solid foundation.
Chances are that the majority of the business today runs on one of the old platforms: not the new greenfield platforms, but the brownfield ones, the ones we have to transition to these modern architectures. So Kuma has been built first and foremost to be platform agnostic. Like Kong Gateway, it can run across every environment, and it provides an API abstraction that allows you to deploy a service mesh on virtual machines as easily as deploying one on Kubernetes.
It's multi-tenant since day one. We heard from you that deploying a service mesh today implies very high operational costs. It doesn't have to be that way. That's why we built Kuma with multi-tenancy since day one, so that from one control plane you can provision as many meshes as you want, across all the environments and all the teams in the organization. A service mesh is more valuable the bigger it is.
The more services belong to the mesh, the more valuable it becomes. But, pragmatically, different teams are going to move at different speeds, at different paces, because they have different products and different business requirements. So with Kuma we don't have to force those teams to adopt service mesh all at once; we can create independent service meshes for them.
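As a sketch of what that looks like in practice: in Kuma, a mesh is itself a resource on the control plane, so separate teams can get separate, isolated meshes simply by declaring them. This is an illustrative fragment based on Kuma's universal-mode resource format; the mesh names are made up:

```yaml
# Hypothetical: two isolated meshes managed by one Kuma control plane.
# Services joining "team-a" cannot see services in "team-b".
type: Mesh
name: team-a
---
type: Mesh
name: team-b
```

A fragment like this would typically be applied with `kumactl apply -f`, after which each team's data planes register against their own mesh and can adopt service mesh at their own pace.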
But most importantly, and this is something that we have learned with Kong Gateway, no product can be adopted if it is not simple and easy to use. That's a key requirement. The same simplicity we delivered in the API management space with Kong Gateway back in 2015, we are going to deliver in service mesh with Kuma, including on top of Kubernetes.
Kuma provides, on top of your infrastructure, a few features and benefits that you don't have to worry about, so that you can focus on maintaining and supporting your teams. It brings identity to your workloads. It brings permissions and segmentation for your traffic. It logs and observes everything that's happening within your infrastructure, so that you can push that data anywhere.
You can push it to a remote server, to Elasticsearch, to Logstash, to Splunk, to Datadog, whatever you're using today. And Kuma supports the most adopted open source API gateway in the world, Kong Gateway, so that you can run them both together: Kuma to enable connectivity within your infrastructure, and Kong Gateway for the traffic you want to open up outside of the data center.
One of the things that really makes no sense to me is how service mesh today, without Kuma, is being proposed as the last step of modernization. We move to containers first, then we move to Kubernetes (massive transitions, by the way), and then, after we've done all of that, we transition to service mesh. Service mesh comes at the end, after we have already transitioned everything else, and that makes no sense.
For the same reason you shouldn't aim to reinvent your networking layer, we shouldn't either. For Kuma, we're leveraging an open source project that has a lot of momentum and is a strong foundation for Kuma: Envoy. And who better than Matt to explain why Envoy is great? So I would like to invite Matt on stage here with me.
Matt Klein: Thank you, everyone. Thanks for having me; it's great to be here. When I joined Lyft four and a half years ago, Lyft's architecture probably looked very familiar to all of you. We were entirely cloud based. We had a bunch of mobile clients, and we were using load balancers. At that time we had a monolith written in PHP, and we had some services in Python.
We had this beginning of a microservice architecture, and what was happening to us, which I think is probably happening to many of you, is that there were lots of problems, because what people find when they move to a microservice architecture is that networking is very difficult, observability is very difficult, and developers ultimately don't understand what's going on. What we found at Lyft is that they did not trust the microservice architecture. So we had this fundamental problem: we want people to decompose.
We, the infrastructure teams, are ultimately overhead. We do a bunch of work, but organizations don't care about that work; they want to focus on their business logic. So every minute that a developer in most organizations spends fighting with their infrastructure, whether that's networking or observability or Kubernetes or whatever, they are wasting time and not making money for the business.
So we developed Envoy as a network proxy, and the goal of Envoy as a project was to effectively use it as an API gateway and as a service mesh proxy, to make the network transparent to all of our application developers. If you fast forward to Lyft today, about four and a half years after we started the project, we now use Envoy almost everywhere. We use Envoy as our API gateway at the edge. We now run our services in Go.
We have Python services, and we've actually gotten rid of our monolith. Envoy is a co-located sidecar that runs next to every service, at our edge and for all service-to-service communication. We use Envoy to talk to all of our back-end databases and our third-party providers. The goal, again, is that we can move a bunch of this common logic, around timeouts, retries, observability, and load balancing, into a common substrate, and it has been an incredibly powerful journey. So, Envoy: we started it, again, four and a half years ago.
We open sourced Envoy about two and a half years ago, and it has been the most incredible journey of my life and career. The uptake is more than I would have ever imagined. If you fast forward to today, Envoy is now in use by all major cloud providers. So many different companies have adopted Envoy to build their own internal service meshes; it's used as an API gateway in many locations; and we have tons of startups and other products using Envoy as a base to build vertical products.
So it has been a truly astounding journey, and that begs the question: how has Envoy become so popular in such a short period of time? I think there are a couple of different reasons. Obviously we focused a lot on performance. We understand that, from a network proxy perspective, people are going to run those horse-race benchmarks, and we want to be competitive there. Envoy is also a very reliable piece of software.
I'm proud that, to this day, even with the number of end users running Envoy in very demanding production and critical environments, many of them run master, so we try to keep master at super high quality. It's a modern code base; we are a GitHub-driven organization that uses CI, and we adhere to modern best practices. That's been very popular. But really, it comes down to operability.
I think that because Envoy was created at Lyft, where myself and my small team rolled Envoy out to, ultimately, over a thousand developers, and we had to operate it and were on call for it, there's a very strong focus in Envoy on observability: stats, metrics, logging, and tracing, and generally on making it easy to operate in these dynamic environments.
Envoy is also a very extensible piece of software. From an open source perspective, we have scaled our open source community by allowing for all these different extension points, and today the wide range of situations in which Envoy is used is truly incredible. It's really amazing. And now we come to our config API. This is really one of the killer features of Envoy.
Historically, with most of our competitor proxies, the way people would deploy complex configurations is that they would fan out configuration files to every host, and for those of you in ops out there, I'm sure you've seen the pain of trying to get those config files out there, get them up to date, and then reload your proxies. Envoy was built from the ground up to avoid that.
Envoy was designed to work in a dynamic, eventually consistent, auto-scaling environment, so Envoy talks to a central control plane over what we call our xDS, or discovery service, APIs. We now have many different APIs, but this has allowed people to deploy relatively simple configuration files, so that a fleet of Envoys all talk to a central configuration server, and we now have a distributed system of Envoys that can be a service mesh, an API gateway, or a middle proxy.
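As an illustration of that model, a minimal Envoy bootstrap can point the proxy at a central xDS server and leave everything else dynamic. This is a sketch, not a production configuration; the control plane address (`xds.internal:18000`) and the node identifiers are assumptions:

```yaml
# Hypothetical bootstrap: the proxy fetches clusters (CDS) and
# listeners (LDS) over gRPC from a central control plane, instead of
# reading fanned-out static config files from disk.
node:
  id: sidecar-1
  cluster: example-service
dynamic_resources:
  cds_config:
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services:
      - envoy_grpc: { cluster_name: xds_cluster }
  lds_config:
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services:
      - envoy_grpc: { cluster_name: xds_cluster }
static_resources:
  clusters:
  # The only static piece: how to reach the xDS control plane itself.
  - name: xds_cluster
    type: STRICT_DNS
    connect_timeout: 1s
    http2_protocol_options: {}
    load_assignment:
      cluster_name: xds_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: xds.internal, port_value: 18000 }
```

With a bootstrap like this, updating routing for the whole fleet means changing state on the configuration server, not redeploying files to every host.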
Envoy is a technology-first project. We are not open core; there's no primary business behind Envoy, and if you look at the ecosystem today, especially in the infrastructure software space, it's pretty rare to find something like that. This has been an incredibly powerful advantage for us. It means that we never refuse anyone's PR because it might conflict with some business model, and it allows us to actually build an ecosystem on top. So I talked about extensibility and APIs.
You have an API-driven system, with configuration coming in and data going out, and when you think about it, I think of it as the network operating system for these cloud native environments. If you think about some of the things that can be built on top of that type of framework, where Envoy is seeing all the data and can actually route all of that data, you can start building vertical solutions around security, observability, auditing, and debugging.
You can build different types of management systems, and I think we've already started to see that: other vendors are starting to build Envoy integrations, which is super amazing to see. So again, just to recap: it has been an absolutely amazing journey with Envoy. It's been an incredible four and a half years. I'm very excited to be here, and very excited that Kong chose Envoy to build Kuma on. And again, with Envoy, we take quality and velocity very seriously, and we want the system to be super extensible.
My general philosophy, from an open source perspective, is that I don't like to say no. I like to ask: how can we make the right extension point so that you can do what you need to do? I think as a project we have rarely said no, and yet we haven't had to compromise the core integrity of the project itself. So again, just to recap: Envoy is an eventually consistent, API-driven piece of software.
B
This
makes
it
really
you
know,
built
from
the
ground
up
for
these
dynamic
environments,
no
open
core.
We
are
a
technology
first
project
and
we
love
companies
like
Kong.
You
know
that
want
to
build
these
differentiated
solutions
on
top.
And,
finally
again
you
know
thank
you
all,
as
hopefully,
future
envoy,
envoy,
users
and
I
will
be
around
the
conference.
If
you
have
any
questions,
so
thank
you
for
having
me.
Marco Palladino: Kuma is also open source, so you can go to our GitHub repository, download Kuma, contribute back to Kuma, and join our community forums. To summarize: every product becomes digital, and when products become digital, they require infrastructure that allows them to scale the business. They become cloud native, and when that happens, connectivity becomes the main roadblock to our success.
Kuma joins the family of open source products that Kong is maintaining, Kong and Insomnia among others, and open source really has been the driver of all the transformations we're seeing in the industry today. If becoming digital is the most important thing any organization can do, then we, the developers and the architects who are going to be implementing that transformation, become much more important in every organization.
Open source has always been part of our DNA, since even before Kong came out, and none of these open source projects would have been possible without a great community that participated and helped us improve our products. So we did this together; we built it together. And I would like to thank in particular five Kong champions for their contributions to Kong, to the products, and to the broader community. So, a round of applause, please.
Ecosystems have been a big part of what has made open source so important. Today, when we look at the technologies driving this next era, technologies like Docker and Kubernetes, they are all open source, and I would like to invite on stage Chris from the CNCF to talk more about the importance of open source ecosystems.
Chris Aniszczyk: Cool. Thank you, Marco; it's fantastic to be here. It's great to see companies like Kong invest in open source products and projects and collaborate with their partners, clients, and other people in the wider community. But as someone who's been involved in open source for nearly 20 years, across a variety of projects, I think a lot of us underestimate how pervasive open source has become throughout our lives: from your phone, to the car you drive, to your TV at home,
and maybe even that internet-connected refrigerator you bought. All this stuff ships open source software, and we depend on it across all these different organizations. To me it really speaks to the success of the open source development model; the evidence of how successful open source has been is all around us. My own story is a little bit interesting: a little over four years ago, I had the fun opportunity to help co-found an organization called the Cloud Native Computing Foundation.
If you recall, Google was looking for a home for Kubernetes, which has become a fairly popular project; but, more importantly, they were looking for a way to help educate the industry on cloud native techniques and programming. If you step back and look at where cloud native actually came from, it came from the challenges that internet-scale companies, like Google, Twitter, Netflix, and Facebook, were dealing with as organizations. Given all the crazy user growth and load they were seeing,
they had to come up with interesting techniques to deal with that, and I think as an industry we're extremely fortunate that a lot of these companies decided to share those lessons learned, through research papers, open source projects, and so on. The mission of the CNCF is basically to take these lessons,
C
You
know
whether
they're
projects
or
papers,
curate
them
and
help
educate
the
wider
industry,
and
if
you
kind
of
look
at
the
last
couple
years,
especially
like
kubernetes,
has
been
like
a
crazy
rocket
ship.
That's
basically,
you
know
outside
of
the
you
know,
Linux
kernel,
potentially
the
fastest-growing
open-source
project
out
there,
as
in
an
organization
the
CNC
F
has
grown
to
be
a
lot
more
than
kubernetes.
Of
course,
you
know
we
have
42
projects
now
I
believe
spanning
50,000
people
contributing
to
this
extremely
large
ecosystem.
We take a very data-driven approach to surveying and evaluating the projects and communities that we support, and today I'm happy to announce that we're releasing what we call a project journey report. It tells the story of how Envoy was founded, how it came to the CNCF, and the overall statistics involved in the project. It's quite amazing to see.
Maybe Matt was a little humble on stage, but that project has grown so aggressively that it has basically become the default data plane that people are building service meshes, and other interesting tools, with. There are nearly 2,000 contributors spanning 200 companies for that project, and if you dive into the actual data, it's a beautiful example of what happens when you build an open source project in a sustainable fashion. If you look at the data here, Envoy started from Lyft.
Lyft did all the heavy lifting and work, and over time they built a very good, neutral, sustainable community by involving other companies. It's a good, healthy mix of not only vendors building products but also end users, such as Pinterest and Slack, contributing to the project. Even though Lyft and Matt still heavily contribute to Envoy, and have actually grown their contributions over time, they represent only about a quarter of the total contributions.
They've done a great job of diversifying that community, and I encourage all of you who are starting to experiment in this space to contribute and help sustain this project over time. And to close things out: we're hosting a fairly large Kubernetes and cloud native conference next month in San Diego. We're expecting over 10,000 attendees, which makes it one of the largest open source conferences in the world.