From YouTube: Announcing Kong Enterprise 2020 [Keynote Part 6]
Description
Learn more about Kong Enterprise 2020: http://bit.ly/2pQ1ofq
Kong VP of Engineering Geoff Townsend announced the newest edition of Kong Enterprise at Kong Summit 2019 – Kong Enterprise 2020 – which includes multi-protocol support for gRPC, GraphQL and Kafka.
#KongSummit19
A: The way that it works for you... Augusto also started, at the very beginning of the keynote yesterday, talking about the full lifecycle, and that sounds super marketing-y and kind of not an engineering thing, right. But to me, what matters about the full lifecycle is the fact that it's facilitated by that need to transform. The reason why is because you're going to deploy services, you're going to run services, and then you're hopefully going to end-of-life some services, right. But at the same time, as I mentioned yesterday (and I say this sort of flippantly, in terms of deploying things and end-of-life-ing things), one of the things that we hear today in Silicon Valley, but also all over the United States and the world, is: if you just deploy everything on the newest technology, everything will be fine.
A: All your problems will be solved, right. But we all know that that's actually not necessarily true. You're going to be bringing some of these things along with you on the journey you take over the course of your career, as you move the services that you deliver value to your customers with from the older technologies to the newer technologies that make your life easier. And like I mentioned before, I jokingly said some of these things will never go away, but in reality the truth is it's more of a transition.
A: They feel like they'll never go away, but it may be a five-year journey to get off of the thing that you're on now and onto the next thing. And so the thing I hope to leave you with, as you leave the conference, as you go back to your jobs and as you build the things that deliver value for your customers, is that the important part of how you do that is that you use the thing that fits.
A: Now, before I start, I did want to leave one note in your head, which is that a lot of you are using the self-managed, on-premise version of Kong, but I wanted to put a little bug in your ear that we also have a managed cloud version of the product that is full-featured, with all the features of Enterprise. So go ahead and explore it: there's a free trial, and there's a way to get access to it right away.
A: You could probably even get it today on the way home if you wanted to, so, just so you know. So let's talk a little bit about the Dev Portal. We've gotten some feedback from you folks, and we've done some innovations over the course of the year that I hope you're going to be excited about. One of the biggest things that we noticed was that we gave you the power to do whatever you wanted to do in the Dev Portal, but we didn't give you the power to do it easily, right.
A: It was a very powerful platform. You could push files in and out and skin it however you wanted, but you had to kind of do it yourself. So let's go ahead and go into Kong Manager and start looking at what we've given you today. If you go to the Dev Portals tab inside of Kong Manager, you can see all the dev portals that are available inside of the platform.
A: If we go to the default one and click on the overview, you can see it's enabled. Let's go ahead and open the default portal that you get out of the box. This is what you're going to get out of the box today if you just spin up a Kong node and enable the right port. But if we go back into Kong Manager, we've added the Appearance tab.
A
Something
interesting
like
a
light:
blue
yeah,
perfect
and
let's
go
ahead
and
take
a
look
at
the
header
and
let's
change
it
to
something
like
a
light
gray
now,
as
you
can
see
at
the
top
there
background
for
header,
tells
you
quickly
with
annotations
what
each
one
of
these
buttons
are:
gonna
change
inside
of
the
dev
portal.
So
you
can,
you
know
in
five
or
ten
minutes,
figure
out
what
you're
gonna
change.
A: Now, another thing that people asked for when I was on the road: I was in Australia three or four months ago and talked to some really, really great folks at some of the largest banks in the world. They said that they were really interested in having a way for the people inside of their organisations to publish APIs, and to have everybody be able to find those APIs and then use them, without having to reinvent the wheel.
A
So
let's
go
ahead
and
up
to
the
top
and
click
on
the
Service
Catalog,
and
you
can
see
that
we've
added
a
quick
place
for
anybody
who
uses
the
dev
portal
to
log
in
and
find
any
of
the
specs
on
any
of
the
api's
that
are
available
inside
the
company.
Now,
if
you
go
ahead
and
click
in
that
search
box
and
we
type
in
pets,
you
can
see
that
it's
soups.
Well,
that's
a
there.
We
go
all
right
so
type
in
pets.
A: Now, the next thing that people asked for was the ability to have very granular permissions inside of the Dev Portal, so that you're able to slice and dice things based on the roles that you have when you log in. So let's go back to the manager and go down to the Permissions menu, and you can see that here we not only have a Roles tab, but also a Content tab. So let's go ahead and create a role.
A: You can name it something silly, whatever we want, and you even have a chance to put in a comment here so you remember what the heck the "silly" role means. Let's go ahead and create that. Now, if we go to the Content tab, you see all of the files that are available, at a granular, sort of per-page view, and you can decide something like: I only want the "silly" role to be able to access the petstore.yaml file; I don't want other folks to be able to see that spec.
A: So with that, we've finished the first node on the production side. As I mentioned before, we looked at pre-production and post-production yesterday. Today we're focusing a lot on production, and you're going to hear some interesting stuff about open source later from some of the engineers as well. But before we do that, let's go ahead and generate some traffic.
A
So,
like
I
said
before,
with
the
same
platform,
you
can
have
each
one
of
the
protocols
that
your
teams
might
be
using
and
they
don't
have
to
use
the
same
stuff
and
you're
still
going
to
get
that
one
unified
view
of
all
of
the
different
services,
no
matter
what
protocol
they're
running
on
and
what
your
teams
choose
to
use,
and
so
it
goes
a
little
unsung.
But
the
power
of
workspaces
inside
of
Cong
manager
is
something
that
I
really
wanted
to
show
today.
So
I
hope
that
I
hope
you
go
play
with
it
later.
A: Now, if you click on the Roles tab, you can see that there are roles for every single workspace, so you can set completely different roles and completely different RBAC permissions for each one of the teams that you have, and they never need to know about each other's permissions or roles at all. For those of you who use CI/CD, all of this is also available via the Admin API.
A: But if we scroll down, you can see we give you a couple of roles out of the box that are indicative of what we're expecting you to be able to do. But these are not the only things that you can do: this role-based access control can be as granular as you like, or as broad as you like. And so that's Kong Teams inside of Kong Manager.
A: Now, one last thing I'd like to show you is something that we saw sort of in passing yesterday. If you go back and click on Workspaces up at the top and scroll down, you can see that we have a gRPC workspace. So let's go ahead and explore the services inside of that workspace. You can see we have one service, and if you click into that service and poke around, you can see that it is, in fact, of protocol gRPC; you can see that down at the bottom.
A
We
have
a
couple
of
routes
set
up.
We
have
not
only
a
specific
end
point
for
G
RPC
setup,
but
we
also
have
a
general
route.
That'll
pick
up
the
introspection
as
well.
So
not
only
do
we
have
G
RPC
support
in
the
open
source
version,
you
can
also
graphically
administer
it
or
administer
it
on
the
admin
API
inside
of
enterprise.
So
there's
G
RPC.
Instead
of
calling
enterprise.
A: We know Neha, the CTO, who is a friend of ours, and what we ended up hearing was that people wanted to have event-based services: basically, asynchronous services that push hits on and off of Kafka queues, get processed at whatever time they can be processed, and return when they're done. So we have the first piece of that today.
A: Now, the value proposition of this one is that you can set up a plugin on a service or a route inside of Kong Enterprise, and when you make requests to that service, either successfully or unsuccessfully, it will automatically push the logging output of Kong onto a Kafka queue, for coalescing and putting into a third-party tool for slicing and dicing how your services are running. So if you go ahead and make a request, you can see that as we make requests, we push log hits onto a Kafka queue, subscribed to a particular topic, right, a logging topic.
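A plugin like that is typically enabled with a small piece of configuration. The declarative fragment below is a sketch only: the plugin name matches the Kafka log plugin discussed here, but the service name, broker address and exact config fields are assumptions for illustration.

```yaml
plugins:
  - name: kafka-log
    service: orders                      # hypothetical service whose traffic gets logged
    config:
      bootstrap_servers:
        - host: kafka.example.internal   # assumed broker address
          port: 9092
      topic: kong-log                    # the "logging topic" subscribed to in the demo
```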
A: So that's one of the first use cases that we heard people ask for; some people had even built custom scripts to do this, and so we thought, well, we should make that easier for people. Now, the second use case, which I'm very excited about, and which is the first half of the sort of Kafka lifecycle, is the ability to push any arbitrary message onto a Kafka queue from inside of Kong, from a RESTful service.
A: So, as you can see here, we have the Kafka Upstream plugin. The value prop of this one is that you can send any arbitrary message, as you can see here inside of our trusty friend Studio, and we can send that message onto a Kafka queue from a RESTful request from a browser.
A: The next piece that we'll end up releasing is the ability for us to poll that Kafka queue, and when that job is finished, and there are indications in that message that we need to re-emit it to the caller that started it, the original caller will be able to do the full lifecycle of event-based mesh, or event-based services. And so that's Kafka inside of Kong Enterprise.
A: So we talked a lot about Enterprise today; we've talked a lot about sort of the customer-based product. But we also love the community, and we have developed a lot of really interesting stuff, and arguably a demo that will blow any of my demos away, in a minute here. With that, I'd like to welcome to the stage Juan Lund, the manager of Core and Cloud, to talk to you about the open source advancements in Kong.
B: Hello! How's it going, everybody? All right. I really like my job, and one of the biggest reasons is that I get to interact with our community. Our community is so vibrant. It's challenging sometimes, but when I see our code running everywhere and having a real impact on your business, it's so rewarding. As our co-founder Marco Palladino said yesterday, open source is in our DNA. I truly believe that we were born from the community and have grown with the community, from a side project of the company.
B: The engineering and product teams have been working very hard over the year to deliver lots of great features. If you're a big enterprise customer, you have thousands of services and routes; maybe you have half a dozen deployments; you have lots of complexity. Don't worry, we've got you covered. We added declarative configuration: you can integrate with your favorite CI/CD pipeline, and you can choose your favorite version control, to reduce your complexity.
B: As I mentioned, besides all the great features we've built over the year, we also spent a lot of focus on improving our performance and making our system more reliable, because we are the foundation of the whole company. Many of you here may have already used the open-source version before purchasing from us; as you can see, the gateway is the root of everything, and it makes the whole lifecycle complete as a company. Folks, you may have noticed, a couple of slides before, there was a cute gopher over there.
B
Alright,
alright,
so
who
has
coding?
Production
makes
a
noise,
alright,
okay,
six
docks,
so,
according
to
a
survey
done
by
the
go
team
in
March
2019
with
more
than
5,000
participants,
this
more
than
69%
a
promoter
according
to
the
NPS
score
and
there's
more
than
89
percent
of
the
participants
says
they
are
laughs.
Go
in
production.
I
things
to
go,
has
been
getting
much
more
mature
and
has
very
good
library
to
use
for
networking
for
your
date
on
Linux
and
just
get
job
done.
It's
it's
what's
great
and
fun
with
that.
B: Let's do this. So let me show you a real plugin: it's a rewrite of the file-log plugin we already have today. As you can see, it has an access phase and a log phase, so it's very similar to what you're doing in Kong today with Lua, but it's actually all written in Go. So let's try to compile that and get started. Underneath these steps, we built a Go bridge to make our internals talk to the Go plugins, and we provide a Go PDK to allow you to write plugins in Go. Next step.
B
Let's
make
your
request,
as
you
can
see
here,
there's
a
header
returns.
There's
nothing
fancy
here.
What
the
purpose
of
this
request!
We
will
have
this
go
plugging
right
into
your
disk,
so
let's
check
the
disk
file.
As
you
can
see
here,
the
fire
has
been
successfully
injects
the
lock
entry
or
there.
So
let's
make
some
changes
or
go
plug-in,
let's
a
common
access
phase
to
a
door
header
consummate
here,
so
the
message
will
be
hello
from
consummate
all
right:
let's
recompile
those
things
and
restart.
B: So I'm really excited to say that, with the innovation from the community, we are leveraging the mature ecosystem around Go. One customer chatted with me; they're from Asia, one of the biggest e-commerce websites. They told me they have tons of traffic, and because of that they have to run a lot of Kong nodes, and the DB connection actually becomes an issue for them.
B
So
we
release
the
D
bless
mode
for
your
make
your
deployments
much
lighter
and
also
much
easier.
Also,
its
can
leverage
the
good
power
from
critics
which
Harrigan
cover
next,
but
we
do
not
stop
there.
We
keep
pushing
for
innovation
with
that
and
very
proud
it
to
nothing.
We
are
introducing
hibernate
mode.
B
It
has
our
newly
added
data
plan,
contributing
separation.
Nemi
explained
how
it
works
so
from
the
classical
mode
that
you
can
see
here.
Every
single
cone
has
talked
to
DB.
It's
sometimes
causing
a
lot
of
issues
as
I
say
it
before,
but
with
high
burning
modes.
You
can
deploy
your
country
plan
in
anywhere
any
cloud
you
can
enjoy.
The
flexibility
to
access
to
manage
to
maintain
your
cluster
everywhere
can
also
choose
to
deploy
the
day
span,
on-premise,
which
you
can
get
maximum
performance
by
local
practicing
your
traffic.
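In configuration terms, the separation boils down to a role setting on each node. The fragment below is a sketch following Kong's hybrid-mode settings in kong.conf; the hostname and certificate paths are placeholders.

```ini
# Control plane node: talks to the database and serves the cluster port.
role = control_plane
cluster_cert = /etc/kong/cluster.crt
cluster_cert_key = /etc/kong/cluster.key

# Data plane node: no database; pulls its config from the control plane.
role = data_plane
database = off
cluster_control_plane = cp.example.com:8005
cluster_cert = /etc/kong/cluster.crt
cluster_cert_key = /etc/kong/cluster.key
```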
B
So
first
here,
let's
be
honest:
three
instance
want
data
plan
on
ADA
bless.
The
row
is
more
admins
to
control
all
the
flows
here
and
the
two
ones
in
a
dress
and
a
once
did
you
ocean,
which
is
sort
of
like
proxy
or
traffic?
They
are
the
different
set
to
different
clouds,
so
we
are
basically
doing
the
multi
cloud
environments
here,
let's
first
check
the
health
information.