Description
Kong Co-Founders Augusto Marietti and Marco Palladino, joined by other Kong leaders and industry guests, will share their vision for connectivity in this new digital reality and unveil exciting new products that will redefine what connectivity means today and beyond.
Learn more about Kong: https://bit.ly/2I2DypS
#KongSummit20
APIs started to become multi-protocol, and architectures started to become hybrid. To move this information more easily, Kong back then gave you the service control platform. Then in 2019 we envisioned a story, a new era ten years down the road where the whole world was going to be digital. Connected data in motion is the fastest-growing segment, more than data in use or data at rest, and we had to power all those connections, so back then we turned the service control platform into a full lifecycle platform.
It was not just about building, running and automating as standalone entities; we put all of those together, and we acquired Insomnia, the number one open source API testing and design tool. We also open-sourced Kuma, the open source service mesh platform built on top of Envoy, now part of the Cloud Native Computing Foundation.
It was a great year. We never thought, though, that this vision we showed you, ten years down the road, would come true this soon. Here is 2020: what a year. We saw an explosion of everything, literally, but really these are trends that were accelerated by 10 to 15 years, like what work looks like. Digital is here to stay. 2020 for sure brought us a lot of challenges, but there is also a massive amount of opportunity.
I also want to tell you about two customer stories. Number one is a multinational bank that had to provide loans faster than ever. They had thousands of SMBs asking for help, and they had to accelerate loan delivery time from six weeks to just 50 minutes. They were able to do that through APIs, to support all the small businesses so they could stay alive and keep working.
Then there is a global rental car company. When the world shut down in March, they had over a million vehicles spread out across the world and nowhere to be found, and once again APIs came to the rescue, to find all those cars and localize them. At one point they were making 400,000 transactions per day.
COVID-19 has really been a catalyst for three big problems. First, the world moved 10 years forward; because we're 10 times more digitally connected, we need to build our applications 10 times faster. We need to build our digital experiences 10 times faster, and how do we do that? How do we meet the need of building digital experiences today? That's problem number one. Problem number two is: how do we get reliability built in?
Everything is decentralized, spread out across all the parts of your organization, your partners and your customers, and when you do that, the control, the visibility, but most of all the reliability goes down. This is problem number two: how do we get reliability built in? Problem number three is about building a connectivity platform for the millions of developers that are the makers and the architects of the next digital world.
Let me give you a good data point. In the early 90s there was a startup that was really building switches and routers. They were moving traffic, and over time they built up a niche skill set of engineers and architects, and there are about 500,000 people working on that infrastructure.
But as we move to the cloud, there are now 100 times more people than that: there are 50 million developers building, and they don't want to have to care about connectivity. They want to build applications as fast as they can. So we have to build a platform for the end users, for the makers, that is easy to use and that can support all the cloud native applications, the thousands of them that are getting built in the next few years.
We have really arrived at a point where we experience life through digital. Those digital connections are really human connections, and we have to build a platform where connectivity looks a lot like Netflix: it's always on, it's always reliable, you can subscribe and just get connectivity out of the box, on demand, whenever you want, simple to use, running 24/7. It's a great concept.
F
We
have
to
support.
To
recap:
three
major
problems,
number
one:
it's
how
we
build
applications
faster
now
that
the
water
is
ten
times
more
digital
connected
number
two.
It's
really
how
we
build
reliability
that
is
built
in
and
number
three
is
how
we
build
a
platform
for
50
million
developers
that
it's
easy
to
use,
and
it's
not
just
about
supporting
all
kinds
of
connection
like
north
south
east
west,
from
edge
to
internals,
but
it's
also
is
about
making
those
connections
reliable.
Also, it's about making sure that we build a platform for everyone, not just for the niche but for all the millions of developers that build applications today. They want to get there faster, and they don't want to rebuild a connectivity platform or a connectivity grid. We have to deliver every kind of connection, reliably, anytime, for everyone. Today, I'm super excited to introduce it to you.
Think about turning on the light bulbs in your house: it just works. No matter whether it's layer 4 or layer 7, no matter whether the traffic is going external or internal, service to service or team to team, it's one unified experience where you can apply security, where you get reliability built in, and it will be easy to use.
Everybody can touch it. It's a platform really built for the end user: for the architects, for the operators, for the developers. It's going to be full of functionality, tightly integrated, to provide you the best experience on top of many different runtimes, whether they're proxies, gateways or mesh sidecars. It doesn't matter: it's one connectivity solution for many connectivity problems.
Please join Marco on camera. He's my co-founder and CTO, and we've been working on this problem for almost a decade. He will tell you his story and his personal challenges in building connectivity as he was building different digital applications, but he will also unveil a unique product that we've been building for the last year. This product will make you the superhero of connectivity, and it's something that the world has never seen.
I am very excited to be here today. When Kong was first released in 2015, it changed the API world forever. Kong started the new era of cloud native API gateways. For the first time we had an API gateway that was this fast, this portable, this extensible, open source, and that could run on Kubernetes. Kong was very different from the monolithic, SOA-inspired gateways that were popular at the time, and because of this Kong today runs on over 1.5 million instances per month all over the world.
Augusto already mentioned that connectivity is the future, and that's inevitable. Every time a team creates a new application, they create new connectivity, and the more decoupled and distributed our applications become, the more connectivity they create. The thing is, our services talk to each other a lot. If our services were people, they would never be invited to any party: they talk non-stop over HTTP, over gRPC, over Kafka, to databases, and so on.
So let me share a story with you on how I discovered that this was important. When I was CTO at Mashape, we were transitioning to microservices, and we were so focused on moving away from the old monolithic codebase that we didn't think about connectivity as much as we should have. We thought that our cloud vendor would give us, out of the box, connectivity that would be secure and reliable for all of our services. Then, once we ran our new architecture, nothing worked the way it was supposed to work.
It used to be the architects, but today, in most organizations, it is the application teams that do it, in addition to building their own products, which is their primary goal. So let's take a look at what happens when the application teams build connectivity instead of the architects. An application team starts a new application, and this application may be built from different services, so they go ahead and build those services, perhaps in different programming languages, and eventually they have to connect them together.
Once connected, they'll have to secure that connection, so they go ahead and build that. Then they'll have to route these connections, and they build routing inside of the service. Then the other service will also need security, and so they rebuild the same features over and over again. Eventually they will have to log all of those requests, so they go ahead and build that as well. But logging is not an isolated concern, it's a common concern, and so every service we're building is going to have it.
I was talking to one of our largest enterprise customers, a global bank, and they were telling me how in 2018 they started a new project to transition away from TLS 1.2 to the new version, 1.3, because 1.2 has a vulnerability, as we all know. Something that should have been simple for them became a multi-year journey, because they had to go and chase down every single application team to tell them to upgrade their encryption, to stop what they were doing and focus on making sure that encryption was working.
First, at the technology landscape level: did you know that 89% of the respondents to Kong's annual survey told us that they're going through an actual transition to microservices? They're changing some of their applications into a microservices architecture, and what this means is that all of the connectivity challenges that Marco outlined are magnifying, because every single application is becoming multiple services: multiple remote services, with security needs and reliability needs that have to be addressed.
The CNCF's annual survey tells us that 70% of respondents are on Kubernetes or containerized infrastructure today, and that means that more and more of the environments we're dealing with are heterogeneous: they are containerized and non-containerized, and the lifecycle of all services needs to consider that. Last but not least, Gartner predicts that over 500 billion dollars will be spent on multi-cloud environments by 2023, which means that any service lifecycle solution will need to consider multi-cloud environments.
Now, it's not only the technology landscape that is changing; the way that consumers consume connectivity is also changing. I'm going to use a personal story of mine to illustrate this change. Around 10 years ago, when I was at Oracle and moved to MuleSoft, a big change was happening. At Oracle, the decision maker for connectivity software was very much the CIO, and the decision criterion was IT compatibility.
G
The
way
the
software
was
being
installed
was
on
traditional
hardware
as
software
package
and
the
purchase
process
took
months
now
when
I
joined
mulesoft,
what
I
was
noticing
was
a
big
change.
More
and
more.
The
business
unit
executives
were
the
decision
makers,
and
the
decision
criteria
was
whether
the
software
suited
their
business
units
needs
so
closer
to
the
end
user,
and
more
and
more
that
software
was
being
purchased
through
a
cloud
model
or
a
sas
model.
Now, as I was mentioning, the end users have obviously stayed the same, and their needs, surprisingly, have stayed the same too. Developers are very much focused on business logic. Operators are trying to deliver a platform on which those developers can deliver their software, one that is reliable and has a low cost of maintenance. And enterprise architects are trying to decrease the time to value: they sit in between the developers and the operators, trying to introduce automation and other capabilities that enable the organization to move faster and be more agile.
Now, when I say enabled through Konnect, I mean that Konnect enables you to deploy these runtimes on any cloud, on any Kubernetes, in a hundred percent consistent manner, and we can provision the runtimes so that you manage the data plane, or we can host the data plane ourselves. Around Konnect is a set of platform services that enable productivity for the different end users we're talking about. We start with ServiceHub, which provides a single system of record for all of your services so that they can be socialized and reused.
What we're looking at here is ServiceHub. I'm logged in as a user who can see all of the services inventoried by the organization that I have access to, and as you can see, there are three services I have access to; each of these services has different versions with different protocols. Now, before diving into the services, let me talk about the runtime management capabilities of Konnect.
Now, interestingly enough, if you look at the Kong Gateway runtimes, you can see that some are hosted by Kong and some are hosted in a self-service way. These are the data planes: we're hosting some of them, but you can host your own and have them connect to Konnect, and we can see here the status of each of the runtimes as well. Of course, I can also configure new runtimes of any of these types and decide where I want to deploy them, and the deployment model as well.
Now, what's interesting is that I just showed you the UI, but underneath, everything we see here is driven through APIs and accessible through a CLI, because that is one of the important product principles we follow here at Kong. Let me show you a little bit of what the engine under the hood looks like. We have a command line called konnect ctl; let's go through and see the runtimes just like I did before.
So what I can do is revert to a snapshot of a particular service version, so that runtimes revert back. This allows me to correct any errors that might have happened by just reverting to that particular snapshot, which is great because it lets me quickly address problems. Of course, this UI and this CLI can be plugged into your CI/CD system to automate everything that we just saw. All right, so we just had a whirlwind tour of Konnect's capabilities.
Now, it's important to remember that Konnect's capabilities are built on the shoulders of giants. We started this journey with Kong, the number one open source API gateway, in 2015, and then in 2016 we introduced Kong Enterprise, a full lifecycle service connectivity platform built on top of Kong. Then came Kong Ingress, which exposed all of Kong's capabilities in a fully Kubernetes-native way.
Thanks, Reza, and I appreciate the opportunity to come talk to you about what MindSphere is and how we're using Kong. Let me first start by explaining a little bit about what MindSphere is. MindSphere is an industrial IoT platform: we provide industrial IoT as a service. It's a cloud-based software-as-a-service solution, and it starts with connectivity. We first connect devices, and then we provide a number of out-of-the-box services that you can access via API.
Those services get your data and bring it into the system at high volume and high throughput, and then you can access and query that data. We also provide a number of applications that allow you to visualize and analyze the data, to see what's really happening in your business. Beyond that, we provide the APIs so that you can build your own applications and extend the platform in any way that meets your business needs, and we provide a rapid application development environment called Mendix, which allows you to use low code to quickly develop new applications and extend the capabilities of the platform.
Why use MindSphere? MindSphere gives you three things. It gives you the ability to understand what's really happening in your business: by connecting all your devices and all of your products, you can understand what's happening on your shop floor and what's happening to your products in the field. It gives you insight into what's working and what's not, so you can do predictive maintenance, process optimization and product optimization, with true visibility into what's happening in your business.
You also gain a unified user experience across all the applications that are available in the platform and through the MindSphere marketplace. You can bring all these applications together, operate on your data and extend the platform, all within a single unified user experience. It really gives you that level of transparency into the operations of your business and into what's happening with your devices, whether you're using them on the shop floor or in connected products out in the field.
MindSphere has over 475 partners that participate in the MindSphere ecosystem, and we have over 400 applications already in our marketplace. This presents a particular challenge in handling APIs, throughput and volume in the platform, so it's very important that we can handle the scale that we're running at. We run at global scale, processing hundreds of gigabytes per second.
We connect millions of devices, and we have tens of thousands of users on the platform, so we're operating with very large volumes of data, but also at very large scale in connected devices, applications and users.
So how does it all work? We introduced Kong as our API gateway. From the outside world, users connect to the APIs through the API gateway to consume different APIs and build their own applications, but external applications in the outside world can also consume any API within the platform, and devices connect from the outside world as well. So we have multiple types of connectivity to the outside world.
Everything goes through an external-facing Kong API gateway, and this provides a central way for all users, applications and devices to access all the APIs in the platform from a single entry point. Particular aspects of security are implemented here: it checks to see if you're logged in, and we use a lot of plugins to make sure users are authenticated. If they're not, they can get redirected to authentication mechanisms.
We also have quite a bit of internal traffic, with services talking to each other and applications talking to each other, and to facilitate this we've introduced a second tier of API gateway: an internal communication API gateway, also via Kong, inside our virtual private network. It allows us to have internal communication, either service-to-service or application-to-application, without traversing the external-facing API gateway, which makes processing a little more efficient.
We may have different requirements and different plugins on the internal communication than on the external communication, so this really allows us to optimize both the external traffic into the platform and the internal traffic within it. So why do we use Kong? Well, there are a lot of key reasons why Kong is very attractive to us.
We can achieve higher throughput with fewer and smaller nodes when we operate in the cloud, and therefore we can cost-optimize the solution. There's also the ability to make non-blocking calls: our previous implementation made blocking calls, and this allows us to operate with better performance.
We also have access to a lot of plugins. As I mentioned, we use plugins for various purposes: rate-limiting plugins, authentication plugins, and this allows us to customize and extend the capabilities based on our needs. Kong also supports quite a few additional protocols, so not just RESTful APIs: we also need WebSockets for some device connectivity and bidirectional communication with devices.
As you know, the foundation of Konnect is built on our runtimes, and I want to introduce Shane Connolly to tell us a little bit more about some exciting announcements we're about to make about the Kong Gateway and the Kong Ingress Controller, the runtimes for managing connectivity at the edge.
As Reza was going through the demo, you might have noticed that the version column in Konnect was showing Kong Gateway version 2.2, and with that, I'm excited to announce that Kong Gateway 2.2 is here as a beta. Kong Gateway 2.2 is the next iteration of our gateway product, with some really incredible new features. It's available today, like I said, as a beta, and we're planning to launch our enterprise version in about a month. Now, let's get straight into the features.
First, let's talk about Go. In Kong Gateway 2.0 we first introduced Golang plugin support. Golang is great for plugins because it has such a large ecosystem of modules to pull from, which you can use and include in your Kong Gateway plugin. This makes developing integrations in the gateway a breeze. However, when we launched the Golang support in 2.0, we started with an MVP approach, to really see how you and the community were going to use it in plugins.
Kong can now handle the response phase in Go plugins as well: we buffer the upstream response and can run whatever custom logic you have in the response phase. For example, you can now operate directly on the response body. We're super excited to see where you help us drive Golang support next, so please do keep the feedback coming.
Next, many of you will be aware of Kong's support for a variety of protocols, so that no matter what sort of API you're building, Kong is there to help you meet your needs. Well, until now it was almost no matter what protocol. One thing I know I've learned being locked down during COVID is the importance of audio and video streaming, and maybe even a little bit of gaming, and as these services are built up, they're backed more and more by APIs. But those APIs are a little bit different.
There's a variety of use cases for UDP proxying and load balancing, and Kong is now here to help. I'll give a bit of a demo of this in just a moment, but it should open the door to all kinds of new streaming options for you, our users. There's a lot more in the 2.2 gateway; I don't have time to get into all of it here today, but please do go look for the 2.2 beta, read the release notes, try it out and let us know how you get along.
I'm also really excited that it's not just Kong Gateway that has a new release. I'm excited to announce that our Kong Ingress Controller for Kubernetes has just hit a major milestone as well: version 1.0. To roll things back a little bit: here at Kong, we see the development of our offerings based on Kubernetes as highly strategic.
Kubernetes hit a recent milestone of their own with the general availability of the Kubernetes Ingress API, which Kong's ingress controller makes use of, and now Kong is hitting its own milestone with the release of version 1.0 of our ingress controller. This is our indication to you, our community, that it's been vetted for a variety of use cases, it's been battle-tested and it's production-ready. We hope you all get a chance to go and try it today. Now I'd like to do a brief demo of the UDP support that I just mentioned in Kong Gateway 2.2.
Okay, great. I'm going to create a new service here in Konnect. I've recently found out that California has live video streams of various places along the highway, and we're going to build a UDP service that's backed by one of those. I'm just going to call this our highway live stream, our California highway stream.
This will be our new service. We're going to create the service as version one, select the UDP protocol, and enter just "other" as the style. Great, now we've got this all set up, and we're going to go through and add the implementation as well as a plugin. First, adding our implementation: choose Kong Gateway. I've got a local service hosted on a local machine, so we're going to list this as 10.128.0.11 under UDP, call this an example app, and get started. Great.
So now I've done everything except one more bit, which is our plugin, our logging plugin. We'll scroll down and get to our file-log plugin. I'm just going to add this to a temporary path and click create, and we are good to go. Just to show that we do indeed have our Kong Gateway running on this host, 10.128.0.11, on port 8000.
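The same setup I just clicked through in the Konnect UI can be captured declaratively. Below is a minimal sketch in Kong's decK-style declarative format, using the host, port and file-log plugin from the demo; the stream-route matching fields for UDP have varied between Kong versions, so treat the exact route shape as an assumption rather than the demo's literal configuration:

```yaml
_format_version: "1.1"
services:
  - name: highway-live-stream
    protocol: udp             # UDP upstreams are new in Kong Gateway 2.2
    host: 10.128.0.11         # the local backend used in the demo
    port: 8000
    routes:
      - name: highway-udp
        protocols: [udp]
        destinations:         # stream routes match on destination port
          - port: 8000
    plugins:
      - name: file-log        # log proxied traffic, as in the demo
        config:
          path: /tmp/highway-udp.log
```

Syncing a file like this with decK against a self-hosted gateway would reproduce the same service, route and plugin without touching the UI.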
Thank you, Reza, for showing us Kong Konnect. Like Reza mentioned, Konnect is built on top of open source foundations, and that is because modern infrastructure is open source: it is built on top of Kong Gateway, Kuma and Insomnia. So let's talk about Kuma. Service mesh is not a siloed concern; it is an important part of our connectivity journey. All the features that a service mesh provides out of the box, like zero-trust security, observability and routing, we would otherwise have to build ourselves in our applications.
However, both share the same drive: accelerating the path from concept to market while reducing logistical complexity. In order to accomplish this, simplicity has been one of the core principles, to streamline adoption on one side and to ensure, on the other, that governance can be enforced without putting up friction or interfering with delivery velocity.
Kuma is way easier to install and to get started with, and one of the main points is that Kuma and Kong are a match made in heaven, as they work perfectly together: Kong clearly leads as the best-in-class experience for API management and development, and Kuma seamlessly takes care of east-west traffic, delegating to Kong the very hard work of north-south traffic. And while Istio supports just one mesh per deployment, that doesn't fit an organization of any real size, especially at scale, when the journey of microservices transformation is ongoing.
Many of us are still running on virtual machines, and with native, first-class support for both virtual machine and Kubernetes environments, Kuma can progressively transform legacy environments while seamlessly supporting business as usual. And last, with the growth of distributed and federated environments, and consequently also of development teams, the need for a centralized control plane capable of transparently and seamlessly managing and coordinating hybrid workloads in a multi-zone setup, because we are distributing, is becoming increasingly foundational in any modern architecture.
Telemetry, observability, policy control, service coordination and infrastructure federation are then brought to a different level of accessibility and manageability. What was once seen as an inevitable cost for any development and delivery team is now transformed into a commodity, accessible in a simple and transparent way to anybody.
Thank you, Luca. Luca touched on a very important feature that Kuma provides, and that is multi-zone deployment. With Kuma we can join multiple clouds, multiple platforms and multiple clusters together into one large service mesh, and we can also mix and match hybrid Kubernetes and virtual machine workloads.
There are usually two different challenges in running a distributed service mesh: first, how do we propagate all the service mesh policies across each one of these different zones? And second, how do we discover and connect services across a multi-zone deployment? With Kuma, we do both of these operations out of the box, in a seamless way.
I'm very excited to show you this demo. In a multi-zone deployment we have one global control plane of Kuma that is in charge of synchronizing our service mesh policies to every remote zone that we want to support. Like I said, a remote zone can be a cloud, it can be a Kubernetes cluster, it can be anything, and we can also mix and match virtual-machine-based workloads with Kubernetes workloads.
So I have a setup here running simultaneously on two different clouds: one is AWS EC2 on virtual machines, and then I also have three GKE clusters running on GCP. We're going to have one global control plane on GCP, two remotes, kuma-east and kuma-west, on Kubernetes, and another remote on AWS EC2. So we can see here that there is a remote running on virtual machines, and inside each of these clusters we also have a control plane of Kuma running in remote mode.
Now, I've deployed a demo application that demonstrates how we can increment a counter on Redis. There is a demo app and a Redis app deployed on AWS, and we have a demo app and a Redis service deployed on each of these clusters as well. If I connect to the global control plane and look at the Kuma GUI, we can see that we have all the workloads connected to the control plane.
I'm going to load up my browser. Kuma, by the way, also provides a RESTful API; the GUI is just built on top of this RESTful API. By loading the GUI we can see that we're running on three different zones, one on virtual machines on AWS and two GCP zones, and we have the demo app and the Redis app replicated across both AWS and these GKE clusters.
We can see here that we have all the data planes online: there is Redis running on AWS, the demo app running on AWS, Redis running on GKE east, the demo app running on GKE east and west, and so on. We've effectively replicated the same application across each of these different environments.
Now let's go ahead and, for example, load the application from AWS and see what happens. If I load this application on port 5000, I'm bypassing the data plane right now, just so I can show you the demo application, and when I increment the counter we see how the demo app on AWS increments the Redis that also runs on AWS. But let's demonstrate the multi-zone feature that Kuma provides: if we enable mutual TLS and the traffic permission policies, we can activate this multi-zone connectivity out of the box.
Kuma will automatically discover, based on the routing policies that we have set up, whether it should route the request within the same zone or to another zone, regardless of whether we run Kubernetes containers or virtual machines. I do have a default mesh, as you can see here, and this default mesh doesn't have mutual TLS enabled, so we must enable mutual TLS in order to enable this cross-zone connectivity.
To do that, we can explore the policies that Kuma provides, and we can see that among them there is a mutual TLS policy. The mutual TLS policy is very simple: it allows us to provision a certificate authority, either automatically or by providing our own certificates, and on top of this it assigns a certificate to every data plane proxy and automatically rotates those certificates with no user intervention.
B
So in this example, I am going to update my default mesh to enable the ca-1 backend, which is of type builtin, meaning Kuma creates the root certificate and key, and it will rotate the certificates every day, applying zero-trust security. It's this easy with Kuma. I'm going to echo this command so I can pipe it into the kubectl apply command that I need to run in order to enable mutual TLS, and I'm going to be doing this next to my demo.
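The mesh update being applied here corresponds to a policy along these lines (a sketch of the Kuma Mesh resource with a builtin CA backend and daily certificate rotation; the backend name `ca-1` follows the demo, while exact field names may vary by Kuma version):

```yaml
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabledBackend: ca-1
    backends:
      - name: ca-1
        type: builtin        # Kuma generates the root certificate and key
        dpCert:
          rotation:
            expiration: 24h  # rotate data plane certificates every day
```

Piped into `kubectl apply -f -`, this enables mutual TLS on the default mesh.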
B
And I just do this next to our demo application. So basically I'm just copying and pasting the policy so that it applies on top of the global control plane. We're seeing how the traffic resumes automatically, but, most importantly, it goes from one zone to another. What we're seeing right now is something very exciting.
B
We
have
enabled
multi-zone
connectivity
across
different
clouds
in
a
hybrid
vm
and
kubernetes
environments
with
zero
trust,
security
and
traffic
permission.
Acl
we've
done
this
in
less
than
five
minutes.
This
is
how
simple
and
easy
it
is
to
use.
Scuma
now,
let's
say
that
we
want
to
force
our
traffic
to
not
be
load
balanced
across
each
one
of
these
different
zones,
but
to
be
redirected
to
one
zone.
In
particular,
we
can
apply
another
policy
called
traffic
route
and
with
traffic
route
we
can
determine
again
what
sorts
of
traffic
we
want.
What?
B
What
is
the
traffic
path
that
we
want
to
configure
and
how
this
traffic
path
should
be
routed
across
our
cluster?
So
I'm
just
going
to
copy
and
paste
this
real
quick
in
order
to
show
you
how
we
can
route
our
requests
from
one
region
to
another.
So
let's
say
that
we
want
every
traffic
being
generated
by
oh
and
by
the
way
I
forgot
to.
Let
me
expose
let
me
put
forward
our
gui
again,
I
shut
it
down.
B
Let me show you how we can redirect all the traffic that goes from virtual machines to a specific cluster in Kubernetes. We want to apply this route on all traffic that goes from the demo app.
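The routing policy applied in the demo would be something like the sketch below (the zone and service tags such as `gke-east`, `demo-app`, and `redis` are assumptions for illustration, and the `conf` layout differs across Kuma versions):

```yaml
apiVersion: kuma.io/v1alpha1
kind: TrafficRoute
mesh: default
metadata:
  name: route-to-gke-east       # illustrative name
spec:
  sources:
    - match:
        kuma.io/service: demo-app   # traffic originating from the demo app
  destinations:
    - match:
        kuma.io/service: redis
  conf:
    - weight: 100
      destination:
        kuma.io/service: redis
        kuma.io/zone: gke-east      # send all matching traffic to this zone
```

A single weighted destination pins the traffic to one zone; splitting the weight across multiple destinations would load balance it instead.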
B
So let's pull up the demo again. As you can see, it's still load balanced across every zone. What I'm going to do is apply this new policy, and as I do that, we can see that our traffic is redirected and routed all the way to our GKE East region, as you can see here right now. In one policy, in one command, we've done a data center traffic shift from virtual machines to one specific Kubernetes cluster.
B
There was a lot of effort that went into this underlying foundation to make sure that it would be this easy, this simple and this intuitive. This was a very short demo; if you want to explore Kuma more in depth, don't forget to look at our sessions and workshops, which go a little bit deeper into all the policies and all the features that Kuma provides out of the box, so that you can create a service mesh in just a few minutes across every cloud and every environment.
B
We have discovered our new product, Kong Konnect, delivered as a service to provide connectivity for every cloud and every platform, and we have seen the new version, Kuma 1.0, the universal service mesh, which enables connectivity across multiple zones and multiple regions. But we're not done yet. I would like to invite Augusto back on camera to join me.
I
So here I have a Raspberry Pi Zero W. Now, for those that are unfamiliar with this particular bit of hardware, this little device is running a 32-bit ARM CPU at about one gigahertz and has about half a gig of memory on it as well. In fact, it's using so little power that I'm just powering it from a portable USB pack.
I
I think it really goes to highlight some of the major benefits that we often hear customers talk about when using the Kong gateway: because Kong can run on virtually any device, run virtually anywhere, and run with a very low CPU and memory cost, you can process far more requests per second on the same hardware, and really do so on any enterprise hardware of your choosing. So we'll just go ahead and show here that we are running Kong on this device.
I
We can run the gateway straight on devices at the edge like this, which means that we can route and secure things like the local GPIO pins here, for example. I'll just turn on an LED locally, for example, and those embedded devices can easily have access to the rest of, say, an IoT network. So I'm just going to copy and paste a network command here: we're going to read the state of a temperature sensor that's on premise here, or maybe even go so far as to build and secure APIs for something like, I don't know, the garage door, which allows us to really control and embed this high-powered gateway directly on these low-power devices at the edge. We're really excited to see what you think of to use Kong embedded for. So thanks.
F
As you can see, even Shane is able to turn on the lights or open his garage door by using Kong embedded. It's not that hard. The possibilities are endless and, as we've seen today, this journey of connectivity really never ends. We're powering all kinds of connections, giving you a reliability grid, and we're building connectivity for everyone.