Description
Kong VP of Engineering Geoff Townsend discusses the “three-headed dragon” of digital transformation at Kong Summit 2019 and how organizations must be agnostic in their use of language, deployment methods and cloud providers in order to get to the finish line.
#KongSummit19
Geoff is joined by Apollo Founder and CTO Matt DeBergalis to discuss how organizations can use the new Kong Studio to seamlessly edit, test and publish REST and GraphQL services directly onto the Kong Enterprise runtime and Developer Portal.
Learn more about Kong Studio: http://bit.ly/31F8e4r
B
Alright, who wants to see some products today? Yeah? Alright. I'm very happy to be here to talk to you today about what we've been doing at Kong over the course of the last year.
B
Chances are, more likely than not, you've acquired quite a bit of technology along the way. In fact, you may have at one point said to yourself, "Man, we have all the things." We have many different protocols, lots of different platforms we deploy on, lots of different technologies, and we may even have more than one cloud. And to make things even more interesting, as we just heard up here, new technologies are coming in all the time.
B
If
you
look
at
the
growth
of
the
CN
CF
from
2017
to
2019,
it's
a
massive
explosion
of
technologies
that
could
end
up
coming
into
your
company
in
the
next
couple
of
years,
but
more
importantly
than
that,
you
also
have
legacy
technologies,
which
you
may
or
may
not
have
felt
at
some
point
like
this
about
and
whether
it
be
because
they're
too
expensive
to
migrate.
The
ROI
is
not
good
enough
to
do
it
or
more
likely
than
not.
The
migration
period
is
a
multi-year
journey.
B
It
may
sometimes
feel
like
this,
and
so
you
know
a
good
example
of
this.
Is
we
have
a
very
large
banking
customer
that
was
quoted,
25
million
dollars
to
move
one
of
their
back-end
services,
one
back-end
service
from
the
legacy
technology
to
api's
and
the
CTO
being
a
very
smart.
Smart
person
said
well,
okay.
That
number
doesn't
scare
me,
but
what
do
I
get
at
the
end
of
this
journey?
B
So the thing I'm trying to get you to see is that you're going to end up having both: new technology as you roll it out, and old technology that you're migrating. You're going to be changing the engines on the airplane while the airplane is in flight, and the reason why is that the services you already built and the services you're building next are how your customers get value.
B
In fact, they found that 80-plus percent of those who were successful deployed more than one mobile strategy, more than one web technology, and likely more than one cloud. So this is where I wanted to talk about the three-headed dragon that we talked about before: the path to success in the technical transformation is agnosticism, and there are sort of three axes on which you need to be agnostic if you really want to beat this beast and cut all three heads off. So I'll go through those right now.
B
First
one
is
protocol.
Your
system
should
talk
the
language
they
need
to
talk
to
do
what
you
want
them
to
do.
Rest
is
awesome,
but
there
are
times
when
you're
gonna
want
a
compressed
format
like
to
your
PC.
That's
going
to
save
you
data
on
the
wire,
or
maybe
you
want
to
wield
the
power
of
graph
QL
to
do
much
more
extensive
queries
with
much
less
effort.
B
Your
tools
should
be
able
to
handle
those
protocols
and
give
you
the
visibility
into
them
to
deployment
pattern
you're
likely
going
to
deploy
on
multiple
different
systems
and
multiple
different
methods.
You
could
use
kubernetes,
everyone
is
I
mean
the
world
is,
is
is
proliferated
with
kubernetes
at
the
point,
but
you're
gonna
still
have
some
legacy
systems
on
hardware
and
you're,
probably
going
to
end
up
having
some
VMs
for
a
while
and
three
making
sure
that
you
can
handle
multi
cloud
in
the
next
five
years.
B
The
majority
of
you
will
end
up
having
more
than
one
cloud
provider
at
some
point.
So
what
I
really
hope
to
get
you
to
come
along
with
me
on?
Is
this
concept
of
of
digital
transformations,
succeed
by
bringing
your
own
technology,
and
so
without
further
ado,
I'd
like
to
show
you
the
first
product
suite
that
we're
going
to
that
will
help
you
lop
off
one
of
those
dragons
heads
kong,
studio.
B
So
konk
studio
is
built
on
insomnia,
continuing
our
open
source
DNA,
but
one
thing
I
I
would
like
to
direct
you
to
that
slightly
different.
Is
we
actually
now
have
this
nifty
menu
up
on
the
left
hand,
side
of
the
product,
and
you
can
see
we
have
several
panels.
The
top
is
the
editing
panel.
Then
we
have
the
testing
panel
and
then
we
have
the
results
panel.
B
And
now
you
have
your
spec
in
whatever
format
you'd,
like
so
let's
say
we're
happy
with
our
spec
and
we're
interested
in
testing
it
now.
Well,
we
can
do
that
right
inside
the
application
up
on
the
top
right
hand,
corner
I'll
point
you
to
the
testing
button.
If
we
go
ahead
and
do
that
you'll
see,
we
flip
over
to
the
testing,
tab
and
studio
has
built
queries
for
every
single
one
of
the
endpoints
inside
of
that
file,
as
guanlin
just
showed
you
a
second
ago.
B
We thought it would be pretty nifty if we went ahead and integrated that with Studio and with Kong, so that you could execute the full pre-production lifecycle. I'll illustrate a bit of what we can do today: we can create ourselves a RESTful endpoint, put in any kind of arbitrary data we'd like, and go ahead and push that up to Mockbin.
B
You get this nifty ID, which you can then use to query that same RESTful endpoint and retrieve whatever arbitrary data you've decided to put in there. So that's what you have today inside Mockbin. But what I'd like to talk to you about in v2 is basic GraphQL support inside of Mockbin, as you can see in our trusty friend Studio, which we're using a lot today.
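The push-then-query flow just described can be sketched as a tiny in-memory store: push an arbitrary payload, receive an ID, then fetch the same payload back by that ID. This is a minimal illustration of the workflow only; it is not the real Mockbin API, and the function names are invented.

```python
import uuid

# In-memory sketch of the mock workflow: push a payload, get an ID back,
# then query the same payload by that ID (illustration, not the real API).
_mocks = {}

def push_mock(payload):
    """Store an arbitrary payload and return the ID used to query it."""
    mock_id = str(uuid.uuid4())
    _mocks[mock_id] = payload
    return mock_id

def query_mock(mock_id):
    """Retrieve the payload previously pushed under this ID."""
    return _mocks[mock_id]

mock_id = push_mock({"user": {"id": 20, "name": "Leia"}})
assert query_mock(mock_id) == {"user": {"id": 20, "name": "Leia"}}
```

The ID is what keeps multiple users of a shared mock back end from stepping on each other's data, which is the same reason the real service hands one back.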
B
You
can
pick
a
JSON
blob
of
your
choice
that
has
the
data
that
you'd
eventually
like
to
query
in
whatever
shape
you'd
end
up
like
a
query
it
you
can
go
ahead
and
push
this
up
to
mock
Ben.
Again
you
get
an
ID
so
that
you
can
then
query
the
data
that
you
pushed
up,
because
multiple
users
may
be
using
the
back
end.
And
now
you
can
make
queries
to
that
JSON
blob
and
get
data.
B
That's
auto-generated
back
in
whatever
shape
you'd
like
in
fact,
what's
even
cooler
is
that
studio
is
smart
enough
to
pull
the
schema
down
from
mock
Ben,
and
it
can
give
you
indications
about
what
types
of
data
is
available
for
you
to
query
so
that
you
can
shape
whatever
data
you'd
like
as
you
can
see.
Now
we
get
arbitrary
data
back
in
whatever
shape
we'd
like
to
start
testing
and
mocking
our
graph
QL
endpoints,
and
that's
graph
QL
inside
of
Mokpo.
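The kind of schema hint just described, where field types are inferred from a JSON blob so the editor can suggest what is queryable, can be sketched roughly like this. This is a guess at the general idea, not Kong's or Studio's actual inference logic:

```python
import json

# Infer GraphQL-style scalar types from a JSON blob's top-level fields.
# (Assumption: a simplified stand-in for real schema inference.)
def infer_fields(blob):
    """Map each top-level field to a GraphQL-ish scalar or container type."""
    type_names = {bool: "Boolean", int: "Int", float: "Float", str: "String"}
    fields = {}
    for key, value in blob.items():
        if isinstance(value, dict):
            fields[key] = "Object"
        elif isinstance(value, list):
            fields[key] = "List"
        else:
            fields[key] = type_names.get(type(value), "String")
    return fields

blob = json.loads('{"id": 20, "name": "Luke", "height": 1.72, "friends": []}')
print(infer_fields(blob))
# {'id': 'Int', 'name': 'String', 'height': 'Float', 'friends': 'List'}
```

Once the types are known, an editor can offer completions for each field, which is exactly the "indications about what data is available" behavior described above.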
B
And
with
that,
we've
done
the
first
couple
of
circles
here
on
the
pre-production
part
of
the
lifecycle.
Now
to
show
you
how
serious
we
are
about
graph
QL,
we
have
a
sister
company
here
in
the
Bay
Area
named
Apollo,
and
we
have
a
guest
speaker
here
who
I'm
very
proud
to
announce.
The
co-founder
and
CTO
of
Apollo
welcome
Matt
to
the
stage.
C
Morning,
everyone,
my
name,
is
Matt
tuberculous
I'm,
the
co-founder
and
CTO
of
Apollo
we're
also
an
open
source
business
built
around
graph
QL
graph
QL
has
had
an
astonishing
rise
in
popularity.
A
recent
survey
showed
north
of
80%
of
JavaScript
developers
are
using
or
want
to
learn
graph
QL
to
build
their
applications
and
our
open
source
implementation
of
graph
QL.
Just
one
of
many
has
over
a
million
downloads
a
week
by
developers,
building
apps
like
Kong
we're
in
the
digital
transformation
business,
consider
a
travel
booking
application.
C
Today,
customers
expect
a
rich
experience
connected
to
a
wealth
of
data.
So
on
just
one
screen,
we
have
everything
from
property,
details
to
real-time
pricing,
to
real-time
offers
to
user
reviews
and
so
on
and
so
on
and
so
forth,
and
the
days
of
all
that
data
being
in
one
database
or
one
API
are
long
gone.
C
That
data
is
coming
from
a
wealth
of
different
systems
built
in
different
technologies,
potentially
the
product
of
acquisitions,
there's
nothing
common
between
them,
and
on
top
of
that,
we
have
this
rise
of
platforms,
so
that
kind
of
experience
needs
to
be
brought
to
users,
whether
they're
on
desktop
or
mobile
web
or
native
mobile
or
IOT,
or
voice
or
whatever's
coming
next
month.
And
what
we
end
up
with
is
this
m-by-n
problem
right?
C
So
the
solution
is
what
we
call
a
data
graph,
a
layer
that
sits
between
your
services
and
your
applications
and
it's
a
declarative
layer.
So,
instead
of
writing
code
to
express
what
kind
of
data
we
want
in
each
of
these
products,
we
write
queries
and
we
write
schemas
and
the
layer
in
the
middle
functions
almost
like
a
marketplace:
an
Amazon
Prime
for
your
data.
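The declarative idea can be sketched in a few lines: the client describes only the shape of the data it wants, and a thin layer in the middle resolves each field from whichever back-end service owns it. The services and field routing below are invented for illustration; a real data graph does this with GraphQL schemas and resolvers, not hand-written dictionaries.

```python
# Two pretend back-end "services", each owning different fields of a hotel.
pricing_service = {"hotel-42": {"nightly_rate": 189}}
reviews_service = {"hotel-42": {"stars": 4.5, "count": 1204}}

# The layer in the middle knows which service owns which field,
# so the client never has to.
FIELD_SOURCES = {
    "nightly_rate": pricing_service,
    "stars": reviews_service,
    "count": reviews_service,
}

def resolve(entity_id, requested_fields):
    """Fetch each requested field from the service that owns it."""
    return {f: FIELD_SOURCES[f][entity_id][f] for f in requested_fields}

# The client declares the shape it needs, in one request:
print(resolve("hotel-42", ["nightly_rate", "stars"]))
# {'nightly_rate': 189, 'stars': 4.5}
```

If the pricing data later moves to a different service, only `FIELD_SOURCES` changes; the client's "query" is untouched, which is the decoupling described next.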
C
So
as
a
developer,
I
don't
have
to
know
precisely
which
service
each
piece
of
data
I
might
want
is
coming
from
and
there's
a
decoupling
so
that,
as
the
services
change,
I'm
isolated
from
the
details
of
that
and
I
can
focus
on
just
describing
in
a
very
natural
form,
as
Jeff
showed
what
data
I
need
and
how
I
want
it
organized
to
match
the
shape
of
my
product.
That's
graph
QL
and
the
example
I
showed
you
isn't
a
hypothetical.
C
If
you
book
travel
on
Expedia
today,
you're
using
a
data
graph
to
pull
all
that
data
to
your
browser
to
your
phone
or
whatever
other
device
you
might
use.
Hundreds
of
companies
have
adopted
that
pattern.
Github
Facebook
outie
when
they
launch
their
each
ron-karr
all
building
on
top
of
a
data
graph
instead
of
a
set
of
fixed
api's
and
that
allows
them
phenomenally
faster
software
development
and
a
richer
user
experience.
C
When
I
talked
to
teams
using
graph
QL,
the
message
that
we
often
hear
is
how
transformative
transformative
it
is
not
just
for
their
development
but
the
whole
lifecycle
of
building
and
maintaining
applications.
The
team
structure,
the
way
to
think
about
API,
is,
as
just
one
example
we're
used
to
thinking
about.
Api's
is
these
fixed
ideas
versions
change
very
slowly
and
when
they
do
change,
it's
a
really
big
deal,
but
with
graph
QL
we
push
multiple
versions
of
a
schema
a
day
the
API
becomes
a
product
itself.
C
That
continuously
evolves
the
way
our
products
do,
and
so,
as
teams
go
through
this
process,
what's
come
up
over
and
again
is:
what's
this
mean
for
how
I
run
my
data
graph?
What
does
this
mean
operationally
and
how
do
I
think
about
that
product
and
how
do
I
think
about
how
it
connects
to
everything
else?
C
So
a
an
infrastructure,
centric
view
of
a
data
graph
might
look
something
more
like
this,
where
the
graph
and
its
implementation
is
just
one
of
many
components
in
a
modern
piece
of
infrastructure
and
all
the
questions
that
have
come
up
earlier
around
transport
around
policy
around
discovery.
Those
are
questions
that
are
is
applicable
to
the
graph
as
they
are
to
each
of
the
services.
C
In
fact,
the
graph
elevates
the
value
of
those
services
because
it
it
allows
developers
to
describe
what
they
need
and
it
allows
those
services
to
be
reachable
by
product
teams
all
across
the
organization,
and
so
it
puts
more
and
more
pressure
and
more
and
more
emphasis
on
the
properties
of
those
wires.
The
connectivity
and
the
policy
that
you
can
build
on
top
of
that,
and
so
that's
why
I'm
so
excited
about
the
partnership
that
Apollo
and
Kong
have
now.
C
Teams
are
able
to
develop
this
kind
of
software
dramatically
faster,
integrate
with
the
infrastructure
that
they
already
have
and
take
advantage
of
all
these
developments
that
you're
hearing
about
today,
from
Kong
and
from
the
open
source
ecosystem
that
we're
both
part
of
to
be
able
to
build,
really
valuable
products
and
really
valuable
companies.
Thanks
so
much.
B
All
right
so
to
show
you
that
we're
serious
about
doubling
down
on
graph,
QL
and
and
and
working
with
Apollo
I'd
like
to
talk
about
a
very
exciting
development
that
we're
going
to
release
today,
which
is
the
Kong
Apollo
QuickStart
docker
container.
This
is
a
docker
container
that
has
both
Kong
proxy
and
the
Apollo
graph
QL
server,
already
networked
and
set
up
inside
the
value
prop
of
this
is
to
be
able
to
get
you
up
and
running
with
an
out-of-the-box
graph,
QL
server.
B
So the first thing we're going to show here is the GraphQL cache test. Caching is slightly more difficult, and different, in GraphQL than it is in regular RESTful interfaces, which usually cache based off of URLs. In GraphQL, the same URL with a different query can actually end up returning completely different results, so understanding how to cache this is a little bit more complex, and we've built a plugin to help you out with that.
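One common way to handle this is sketched below, under the assumption that the cache key simply has to incorporate the query text (and its variables) alongside the URL. This is an illustration of the general idea, not the plugin's internals:

```python
import hashlib
import json

# Build a cache key from the URL *plus* the query and variables, since in
# GraphQL one URL serves many different queries. (Illustrative sketch only.)
def cache_key(url, query, variables=None):
    normalized = " ".join(query.split())  # collapse insignificant whitespace
    payload = json.dumps(
        {"url": url, "query": normalized, "variables": variables or {}},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

k1 = cache_key("/graphql", "{ user(id: 20) { name } }")
k2 = cache_key("/graphql", "{ user(id: 10) { name } }")
assert k1 != k2  # same URL, different query -> different cache entry
assert k1 == cache_key("/graphql", "{ user(id: 20)   { name } }")  # whitespace-insensitive
```

A URL-only key would have collapsed `k1` and `k2` into one entry and served the wrong user's data, which is exactly the hazard described above.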
B
All that data has to be coalesced and then returned to the client. To that end, we've created the GraphQL cost plugin. The value prop here is that you can assign weights to the different GraphQL queries that you have in your system, so that you can rate-limit them according to the load on your back-end systems. As you see here, if we make a request for user 20 of the Star Wars data, you can see that we're able to get a response with headers that tell us what kind of rate-limiting situation we're in.
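A cost-based limiter of the kind described can be sketched like this: each query drains a consumer's quota by its weight rather than by request count. The query names, weights and quota below are made up for illustration and are not the plugin's configuration:

```python
# Hypothetical per-query weights: heavier queries drain the quota faster.
QUERY_COSTS = {"userSummary": 1, "userWithFriends": 5, "fullStarWarsGraph": 25}

class CostLimiter:
    """Rate limit by query cost instead of request count (sketch)."""

    def __init__(self, quota_per_window):
        self.remaining = quota_per_window

    def allow(self, query_name):
        cost = QUERY_COSTS.get(query_name, 1)
        if cost > self.remaining:
            return False  # at a proxy this would surface as HTTP 429
        self.remaining -= cost
        return True

limiter = CostLimiter(quota_per_window=30)
assert limiter.allow("fullStarWarsGraph")  # cost 25, 5 left in the window
assert limiter.allow("userWithFriends")    # cost 5, quota now exhausted
assert not limiter.allow("userSummary")    # even a cheap query is refused
```

The remaining-quota number is the kind of value a gateway would echo back in rate-limit response headers, as in the demo.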
B
Finally, the last one I would like to show you. On your journey to either migrate to GraphQL or to use it for testing or mocking, you may end up in a situation where you have clients that still speak REST, but you also want to start rolling out your back-end GraphQL infrastructure so that you can start testing and using it. So, in the spirit of migrations and that idea of being able to do a digital transformation, we've created the REST-to-GraphQL plugin, which allows you to make RESTful requests.
B
As you can see in the URL up there, we're now querying the same GraphQL back end for user 10. However, you can see that not only is the request made in a non-GraphQL format, but the data is returned to us in a more RESTful format, so those clients can consume it the way that they're used to until you're ready to let them transition. And there you go: that's the new GraphQL support inside of Kong.
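The translation being described, a RESTful path mapped onto a GraphQL query with the nested response flattened back into a REST-style body, can be sketched as follows. The route and field names are invented, and this is not the plugin's actual mapping logic:

```python
# Translate a REST-style path into a GraphQL query, and flatten the
# GraphQL response envelope back into a REST-style body. (Sketch only.)
def rest_path_to_query(path):
    """Turn a path like /users/10 into a GraphQL query string."""
    resource, entity_id = path.strip("/").split("/")
    singular = resource.rstrip("s")  # naive pluralization: "users" -> "user"
    return f"{{ {singular}(id: {entity_id}) {{ id name }} }}"

def unwrap(graphql_response):
    """Flatten {"data": {"user": {...}}} into the REST-style body {...}."""
    (body,) = graphql_response["data"].values()
    return body

query = rest_path_to_query("/users/10")
print(query)  # { user(id: 10) { id name } }
response = {"data": {"user": {"id": 10, "name": "Rey"}}}
assert unwrap(response) == {"id": 10, "name": "Rey"}
```

The REST client sees a flat JSON object, exactly what it consumed before, while the back end is already speaking GraphQL underneath.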
B
You
may
have
been
here
last
year
and
saw
us
release
the
original
version
of
brain
which
our
assisted
service
management
platform
for
auto
documentation
and
Auto
visualization
of
your
services.
You
may
have
seen
the
rudimentary
service
map
that
I
showed
you
and
saw
it
update.
Well,
I'd
really
like
to
show
you
today
a
much
more
updated
slick
version
that
we're
gonna
see
some
nifty
tricks
in
in
a
second
we're
now
back
inside
of
cog
manager
and
we're
going
to
click
into
the
collector
workspace.
You
can
see.
B
We've
generated
some
traffic
here
for
you
and
we
can
go
up
to
the
service
map
menu
up
in
the
top
of
the
product,
and
you
can
see
that
we've
now
generated
a
written.
If
T
zoomable
service
map,
you
can
expand
and
contract
nodes
you
can
zoom
in
and
out,
and
you
can
also
roll
over
the
endpoints
to
get
contextual
data
for
the
things
that
are
configured
inside
of
the
service
map.
B
But
what
you
probably
didn't
know
is
that
guanlong
has
been
sending
some
G
RPC
traffic
in
the
background,
and
we
can
also
go
into
the
service
map
and
see
a
G
RPC
service
map
for
a
G
RPC
service.
That's
can
been
configured
on
Kong.
You
can
see.
We
not
only
have
the
endpoint
that
we
actually
queried
this
hello
world
end
point,
but
we
also
automatically
got
the
introspection
and
point
as
well
and
the
demo
gods
are
not
being
kind.
B
So
the
second
thing
I'd
like
to
show
you
in
Kong
brain,
is
how
we've
improved
the
CI
CD
portion
of
how
you
use
brain.
You
may
or
may
not
have
remembered
that
we
can
auto
generate
open,
API
specs
from
any
traffic,
that's
going
over
the
Kong
enterprise
platform
and
we
can
generate
new,
open,
API
specs
as
your
service
changes
and
evolves
in
production.
B
But what we've added this year is the ability for you to customize the data that comes back in those OpenAPI files, so that you can version and source-control your files the way that you would like with an automated CI/CD system. So, as you can see now, you can not only slice and dice the specs by workspace, you can also slice and dice them by host, and you can start sending in metadata like title, description and version, so that you can tie them into your Jenkins or back-end CI/CD in a way that lets you be alerted if a new version comes out that differs from the last version. You can auto-update that version number and decide if you'd like to publish, or if you'd like to have that be audited before it's published live to your systems.
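The CI/CD hook described here can be sketched as a comparison between the freshly generated spec and the last published one, bumping the version only when the paths actually changed. The versioning scheme below is an assumption for illustration, not Kong Brain's behavior:

```python
import json

# Compare a newly generated OpenAPI "paths" object against the published
# spec; bump the version only when something changed. (Illustrative sketch.)
def next_version(old_spec, new_paths):
    """Return a bumped version string if paths differ, else None (no release)."""
    old = json.dumps(old_spec["paths"], sort_keys=True)
    new = json.dumps(new_paths, sort_keys=True)
    if old == new:
        return None
    major, minor = old_spec["info"]["version"].split(".")
    return f"{major}.{int(minor) + 1}"

published = {
    "info": {"title": "users", "version": "1.4"},
    "paths": {"/users/{id}": {"get": {}}},
}
assert next_version(published, published["paths"]) is None
assert next_version(published, {"/users/{id}": {"get": {}},
                                "/users": {"post": {}}}) == "1.5"
```

A pipeline step like this is what would fire the "new version differs from the last version" alert, after which a human can decide to publish directly or send it for audit.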
B
But
what
we
got
back
from
the
field
when
we
started
having
people
use
the
product
was
that
they
liked
the
power
of
it.
They
liked
the
idea
of
it,
but
it
was
really
hard
for
them
to
grasp
it
unless
it
was
graphically
represented.
So
what
I'd
like
to
show
you
today
is
the
graphical
alerts
UI
on
top
of
immunity.
B
If
we
go
back
into
the
Kong
manager,
go
and
drop
into
the
collector
or
workspace,
and
we
go
to
the
alerts
section
of
the
menu.
You
can
see
that
we
have
a
full-fledged
UI
for
the
alerts
that
have
been
created
by
and
generated
by,
immunity,
I'd
like
to
direct
you
to
all
the
filtering
that's
available
up
at
the
top.
You
can
filter
by
severity.
You
can
filter
by
type
of
alert,
and
you
can
also
filter
about
whether
or
not
it's
been
a
resolved
alert
or
an
unresolved
alert.
B
Now, there's one last thing that I want to show you, which I think is going to be the most exciting part: the power of these things actually isn't in each one of the bits, it's in being able to use them all together at the same time. A moment ago we saw the service map and we also saw Immunity, but what I want to show you is the ability to do a full debug cycle through your services in production using Brain and Immunity and Vitals.
B
So
if
we
were
to
go
back
to
the
service
map,
you
can
see
now
that
we
have
some
alerts
on
the
service
map
that
will
allow
us
to
see
values
that
have
showed
up
that
have
oops
we're
gonna
wait.
A
second
here
takes
a
little
bit
for
the
data
to
show
up.
Well
we're
hoping
to
see
here
when
we
end
up
getting.
This
is
we'll
end
up
getting
some
alerts
in
these
endpoints
that
we'll
be
able
to
look
at
give
us
the
second
demo.
Gods
are
not
not
kind.
B
Yeah
well
see
what
we
get
here,
so
what
you
should
be
seeing
is
the
ability
to
see
icons
that
show
up
in
the
service
map
that
will
actually
show
you
some
alerts
that
you
could
click
through
into
the
service
UI
inside
of
in
the
alerting,
UI
inside
of
immunity
and
then
you'd
be
able
to
click
through
and
take
a
look
at
the
well.
We
just
do
it
manually
here.
B
So
we're
supposed
to
be
able
to
see
his
alerts
through
service
map
drop
you
into
the
alert,
UI
and
then
ultimately
drop
you
into
the
route
so
that
you
can
start
debugging
what's
wrong
with
your
service,
so
hopefully
very
soon,
you'll
be
able
to
play
with
the
full
debugging
lifecycle
of
brain
and
immunity
inside
of
Kong
Enterprise.
Thank
you
very
much.
B
I'll
just
call
out
some
of
the
big
ones,
but
we're
gonna
we're
gonna
talk
a
little
bit
about
open
source
and
we're
gonna
talk
a
little
bit
about
enterprise,
but
the
really
exciting
sort
of
highlights
is
we're.
Gonna,
you
I'm
gonna
tease,
go
support
for
plugins
and
open
source.
It's
gonna
be
exciting,
and
we're
also
going
to
show
some
kafka
support
inside
of
enterprise
tomorrow,
so
make
sure
that
you
show
up
and
check
it
out.
B
All
right
so
to
wrap
up
transformations
are
successful
when
you're
agnostic,
it's
okay,
to
have
a
bunch
of
different
tech
in
different
states
of
migration.
You're
gonna
end
up
running
new
things
and
you're
gonna
have
to
maintain
some
of
the
old
things
and
that
three-headed
dragon
that
we
talked
about
was
the
head
of
language
and
protocol
you're
going
to
have
multiple
and
your
solution
should
be
able
to
handle
it
too
you're
going
to
be
deploying
on
multiple
platforms
and
that's
okay.