Description
In this call Optum talked about their journey to overhaul their API technology using Kong. They explained their implementation challenges and priorities considered when choosing Kong. The Optum team also shared how they've automated maintenance tasks and built their own stability test suites.
Join our next Online Meetup: https://konghq.com/online-meetups/
A: So with that, it's about time, let's get started. Let's go for a round of introductions. First, thanks for joining us here on the community call. My name is Hanna, I'm a member of the Kong team, and today we're going to have Jeremy and Ross from Optum. Would you like to introduce yourselves?

Sure.
A: Okay, let's get started with the agenda. We said we'd do a round of introductions, I have an exciting announcement about Kong Summit 2019 for you in a minute, and then Jeremy and Ross are going to present about what they do at Optum and how Kong has facilitated their journey. We'll leave some room for questions and an open agenda at the end.
A: So, very excitingly, our CFP for Kong Summit 2019 will be released next week. Keep your eyes out: we'll be posting widely and contacting you in various ways. If you have something you'd like to share in person at Kong Summit, please keep your eyes out, we'd love to hear from you. Okay, and with that, I'm going to pass the baton off to Jeremy and Ross.
B: So, we are Optum. We service the IT industry around healthcare, and we are under the UnitedHealth Group umbrella. We have over three hundred thousand employees within UnitedHealth Group. We service hundreds of different products, which boils down to thousands of APIs between our production and non-production environments; we have over 10,000 APIs, and for each of those APIs there are thousands of different consumers that access them in a variety of different manners. Our team is quite small.
C: So, we started this effort in the final weeks of 2017, and we had an existing vendor product filling the role of an API gateway solution. Many of the priorities we set out for this modernization effort were reflections of concerns we had had with that existing vendor product. We were really looking for something open source that could run in the cloud, and obviously one of the most performant solutions possible. I don't want to get too far into our selection process; spoiler alert, we ended up choosing Kong as the gateway technology.
B: Certainly. So, within Optum we hadn't had a whole lot of familiarity with leveraging, and contributing back to, a lot of the open source technology applications out there. We had to start understanding how we could contribute to these applications and give back to the community in an enterprise-approved manner. We had to learn how to deploy our applications onto a private cloud and how to maintain and manage them, and we had to figure out how to provide high availability and disaster recovery.
B: With these applications, we had to work through enterprise security approval and gain the trust to allow open source applications to thrive within the enterprise; implement logging, telemetry, reporting, and metrics for all of our application stack; automate the mundane maintenance tasks in our applications; and drive consumer engagement and consumer onboarding for the newer applications that we're building out.
C: So, let's talk about open source integrations in a big enterprise setting. UnitedHealth Group has its roots as an insurance company, and insurance is a business all about mitigating risk; for a long time, open source technology was seen as a risk. Now, due to the support from our senior leadership, Optum is beginning a journey where we're embracing open source technology more, and as the benefits become more obvious, leadership is getting more and more on board. I'm really excited about the direction we're going with this.
C: I want to talk a little bit about what we specifically had to do within our company to leverage open source technology; hopefully our experiences can benefit others in a similar situation. For us, in order to just leverage Kong to start with, in addition to having the support to even consider an open source technology, we had to go through a technical review and publish tracking documents in our internal portal.
Being part of the open source community, though, also means contributing back, and for us, again because of the push we got from our leadership about open source, this ended up being a pretty straightforward process. We just had to submit all of the plugins that we built out to support any custom functionality we needed to a technical review and a legal review, and publish those tracking documents to our own internal repos. And finally, we were very happy to include those contributions in our open source journey.
B: So, one of the most important things to the enterprise is that you need to be able to provide high availability with your applications, as well as recover from disaster. Since we wanted to have the highest availability possible, we leveraged all of our data centers to create a multi-data-center deployment, and we leveraged a load balancer to distribute that traffic evenly across the data centers that were reporting healthy. We have zero-downtime deployments.
B: We can instantly change a config value, and as the application redeploys itself there is no customer downtime or foreseen impact when we make little changes or need to do a rolling bounce of the application on the fly, so that happens very quickly. Our databases are cluster-driven, so if we were to lose a database node it would not cause customer impact or affect any of the API transactions within the company. And in the event that the databases were to go down, or we were to lose a database node and be unable to restore it:
B: We have database backups that occur on a 24-hour basis, as well as restoration scripts that can bring that recorded data back into fresh database nodes. Our application is containerized, so for deployments we can do fast recovery: we just set up the environment variables and redeploy to the cloud in the same manner the application was already stood up, in a matter of minutes. It provides us a very quick way to bring the application back up on our private cloud.
C: Sure. So, when you start talking about making fundamental changes to the security architecture of a massive enterprise, it shouldn't come as too much of a shock that there are some compliance concerns and hurdles to be overcome as well. In order to use Kong from an enterprise security compliance standpoint, the only thing we really had to do was include an external WAF application in our stack. This is to protect against some attack vectors that Kong natively is not able to protect against.
C: It was helpful in getting us approved that Kong supports, right out of the box, the default security patterns that we absolutely had to support in order to run in our company. Another thing we had to run past security was one of our custom plugins for provider-side security. This fills the same niche as was filled by the existing proprietary gateway solution;
C: It just does it in a slightly different way, so again it was fairly easy to approve for us. And then the last thing we had to run by our enterprise security team was an improvement to the routing flows that we can support for certain patterns: Kong enables us to do things that we were not able to achieve with the prior solution. Once we got those new routing flows approved, we were able to shave an entire network hop out of certain types of transactions, and that's just another benefit that Kong brought to our stack.
B: Within our new application stack, we wanted to make sure we had all kinds of telemetry, logging, reporting, and alerting for the various application hosts, and transaction-level data for everything that goes through the API gateway. The application we chose to leverage was Splunk, because it has unparalleled data visualization and analytics around the transactions happening through our applications.
B: We also leverage OpenTracing, and we just recently started doing so. With OpenTracing, we're pointing Kong to our Jaeger deployment and getting those OpenTracing latency metrics out there for other applications to integrate with as well, to show those point-to-point traces, which essentially overlaps with what Dynatrace does. But we wanted to move toward OpenTracing so we can begin the retirement of proprietary vendor products and move toward open source solutions like what OpenTracing and Jaeger provide within our application stack.
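The talk doesn't specify how Kong is pointed at Jaeger. One common approach is Kong's zipkin plugin sending spans to a Zipkin-compatible collector endpoint, which Jaeger can expose (commonly on port 9411 when that receiver is enabled). A minimal sketch along those lines; the collector and admin URLs are placeholders, and the plugin field names should be checked against your Kong version:

```python
import json
import os
import urllib.request

def zipkin_plugin_payload(collector_url, sample_ratio=1.0):
    """Build the body for enabling Kong's zipkin plugin globally."""
    return {
        "name": "zipkin",
        "config": {
            # e.g. http://jaeger.example.internal:9411/api/v2/spans
            "http_endpoint": collector_url,
            "sample_ratio": sample_ratio,  # 1.0 traces every request
        },
    }

def enable_tracing(admin_url, payload):
    """POST the plugin definition to the Kong Admin API."""
    req = urllib.request.Request(
        admin_url.rstrip("/") + "/plugins",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    payload = zipkin_plugin_payload(
        "http://jaeger.example.internal:9411/api/v2/spans")
    print(json.dumps(payload, indent=2))
    # Only touch the Admin API when an endpoint is actually configured.
    if os.environ.get("KONG_ADMIN_URL"):
        enable_tracing(os.environ["KONG_ADMIN_URL"], payload)
```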
C: Let's highlight a few of the routine tasks that we've automated and have seen a lot of value out of, too. Especially at an enterprise scale, this type of automation is almost required in order to run.
So, the first thing that we noticed when we initially began running Kong is that expired OAuth tokens persist in the database, so we just wrote a quick Python script that runs on a 24-hour cron to delete them.
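A minimal sketch of what such a cleanup script might look like. The table and column names (`oauth2_tokens`, `created_at`, `expires_in`) are assumptions based on the Kong oauth2 plugin's schema, and `session` stands in for a real database session such as one from cassandra-driver; verify both against your own deployment:

```python
import datetime as dt

def is_expired(created_at, expires_in, now=None):
    """True once a token's lifetime (expires_in seconds) has elapsed."""
    if now is None:
        now = dt.datetime.now(dt.timezone.utc)
    if created_at.tzinfo is None:  # drivers often return naive UTC timestamps
        created_at = created_at.replace(tzinfo=dt.timezone.utc)
    return now >= created_at + dt.timedelta(seconds=expires_in)

def purge_expired(session):
    """Scan and delete expired tokens through a database session object.

    Cassandra can't filter a DELETE on non-key columns, so the sketch
    reads rows and deletes each expired one by primary key.
    """
    deleted = 0
    rows = session.execute(
        "SELECT id, created_at, expires_in FROM oauth2_tokens")
    for row in rows:
        if is_expired(row.created_at, row.expires_in):
            session.execute(
                "DELETE FROM oauth2_tokens WHERE id = %s", (row.id,))
            deleted += 1
    return deleted
```

Scheduled from cron every 24 hours, this keeps the token table from growing without bound.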
We also wrote a quick Python script that runs on a 24-hour cron to take database backups, and again, this is for our DR. One of the things I'm very proud of is that we also wrote an entire stability test suite, which runs as a Jenkins job, again on a cron schedule, every four hours.
To go into a little bit more detail about this: it doesn't just test the basic service and route functionality of Kong. It also tests every property of every plugin that we commonly use within Optum on any proxy, and this test suite not only allows us to be confident that our environment is stable at any given moment,
C: but it's also very helpful if we make a change to some core part of Kong, or even a more tangential portion that we're not sure is going to have adverse effects on other areas. We can just run the test suite against it and be sure that everything we need to work is going to work, or otherwise, depending on how the test goes. Again, we have a quick little Ansible script to just rotate logs.
C: That one is more useful not necessarily on Kong but on our databases, but valuable nonetheless. And we have automated cert rotation scripts, both for our database and for our gateway. Those only get run once a year, of course, but it's a good idea in terms of stability to automate a process like that; it's too easy to make a mistake.
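A hedged sketch of how a suite like the one described above might enumerate its cases, one per plugin property value. The plugins, properties, and values below are illustrative, not Optum's actual matrix, and the Admin API and proxy calls are left as injected callables so the enumeration logic stands alone:

```python
# Illustrative matrix: plugin name -> property -> values to exercise.
PLUGIN_PROPERTIES = {
    "rate-limiting": {"minute": [5, 100], "policy": ["local", "cluster"]},
    "request-transformer": {"add.headers": [["x-env:test"]]},
}

def build_cases(plugin_properties):
    """Expand every (plugin, property, value) combination into one case,
    so a scheduled run exercises each knob individually."""
    cases = []
    for plugin in sorted(plugin_properties):
        for prop in sorted(plugin_properties[plugin]):
            for value in plugin_properties[plugin][prop]:
                cases.append(
                    {"plugin": plugin, "property": prop, "value": value})
    return cases

def run_suite(cases, apply_config, call_proxy):
    """apply_config pushes one case via the Admin API; call_proxy hits
    the gateway and returns an HTTP status. Returns the failing cases."""
    failures = []
    for case in cases:
        apply_config(case)
        if call_proxy(case) != 200:
            failures.append(case)
    return failures
```

Run as a Jenkins cron job, a non-empty failure list would flag the environment as unstable before customers notice.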
B: So, with the setup of our new application stack, we then had to solve for customer onboarding and customer engagement. Originally, as we were a small, growing application, we started off with one email distribution list: customers would reach out to us through the email distribution list, and we would respond through it as well. This became hard to maintain as we got lots and lots of customers.
B: It became hard to keep your context of who you were responding to with regards to which problems, and we quickly figured out that that wouldn't work at scale. So then we progressed on to ticketed work orders: customers would submit a work order ticket, it would get assigned out to somebody, somebody would do whatever needed to be done, whether it was a consultation or a request to create proxies or consumers within the API gateway, and then respond back to those work orders that the work was complete. Within that, we found problems where the UI wasn't intuitive to the customers. Customers were unsure of the status of their ticket and when it should be completed by, so we decided to iterate one more time and get to where we are now, which is a customer-centric self-service model. This leverages a GitHub, kind of GitOps, model where customers will PR different resources to our repo, whether proxy resources or new consumers to create on the gateway. GitHub then sends a webhook call.
B: That webhook call goes to a custom Java application we've written that we call the Stargate agent. The agent then integrates directly with the Kong Admin API to make the resources that were requested a reality within the gateway itself. So it provides customers with a quick, one-click solution to create resources: we can validate those resources and then create them within Kong, if we consider what they're creating to be correct.
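A rough sketch of the validate-then-create step of such an agent, in Python rather than the actual Java, for brevity. The taxonomy rules and resource fields below are hypothetical stand-ins for Optum's enterprise standards; the create call uses the `/services` endpoint from Kong's services/routes model:

```python
import json
import re
import urllib.request

# Hypothetical naming rule: lowercase letters, digits, and hyphens.
NAME_PATTERN = re.compile(r"^[a-z0-9][a-z0-9-]*$")

def validate_proxy(resource):
    """Return a list of problems; an empty list means the PR can be applied."""
    problems = []
    if not NAME_PATTERN.match(resource.get("name", "")):
        problems.append("name must be lowercase letters, digits, hyphens")
    if not resource.get("upstream_url", "").startswith("https://"):
        problems.append("upstream must use https")
    if not resource.get("consumers"):
        problems.append("at least one authorized consumer is required")
    return problems

def create_service(admin_url, resource):
    """Create the requested service through Kong's Admin API."""
    body = json.dumps({"name": resource["name"],
                       "url": resource["upstream_url"]})
    req = urllib.request.Request(
        admin_url.rstrip("/") + "/services",
        data=body.encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)
```

In the real flow, any non-empty problem list would be reported back on the pull request instead of creating anything.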
C: Finally, let's take a look at some of the benefits that we've seen from this effort. Just as a direct comparison, Kong gives us an 80% reduction in overall gateway time compared to the existing solution. The graph you can see in the lower right is an excerpt from a performance test, the grey bar being the response time for that performance test on the existing solution and the orange bar being Kong; you can just visually see that difference.
C
One
of
the
things
that
didn't
end
up
making
it
into
this
deck
it
looks
like,
but
something
I
want
to
highlight
is
that
the
resource
efficiency
of
Kong
allows
us
to
go
from
more
than
a
hundred
virtual
machines,
which
were
required
to
run
the
existing
solution
to
two
containers
that
will
host
more
or
less
the
same
amount
of
traffic.
That
is
an
enormous
reduction
in
infrastructure
costs,
as
well
as
just
being
generally
more
efficient
to
the
enterprise
as
a
whole.
C: Happy to. I just want to really quickly glance over at my product owner: is it fair to discuss our specific database solutions publicly? Yes, okay. So we use a Cassandra database. We have two separate data centers within Optum, and we deploy one Kong cluster per data center for DR purposes. In each of those data centers we have three Cassandra nodes, so it's a six-node cluster total. Our experience with the product is that Cassandra is an extremely resilient database solution.
C: Right now, with our current setup, we can lose up to a node per DC with zero impact to our customers, and that's just a great position to be in regardless. Our experience with the backup and reinstall scripts has been pretty great as well. In general, I have nothing bad to say about Cassandra. The only thing you want to look out for, of course, is if you have an instance where you need to do lots and lots of inserts very quickly, say, for instance, OAuth token creation.
B: No, I would say that's accurate. We run a rigid Cassandra installation and deployment where we do replication across all our nodes, because we found a couple of little instances where the way Cassandra replication moves data around from node to node, if you don't have a strict replication factor, can cause issues with the Admin API or things like OAuth token creation that do those quick read-writes. So I would say you want a strict replication factor on your Cassandra cluster, where all the nodes contain all the data.
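Concretely, "all the nodes contain all the data" means setting each data center's replication factor equal to its node count. A small helper that builds the corresponding CQL; the keyspace name, data center names, and node counts below are examples to adapt to your own topology (and a full repair, e.g. `nodetool repair`, should follow so existing rows are redistributed):

```python
def full_replication_cql(keyspace, dc_nodes):
    """ALTER the keyspace so each DC's replication factor equals its
    node count, i.e. every node stores every row."""
    opts = ", ".join(
        "'{}': {}".format(dc, n) for dc, n in sorted(dc_nodes.items()))
    return ("ALTER KEYSPACE {} WITH replication = "
            "{{'class': 'NetworkTopologyStrategy', {}}};"
            .format(keyspace, opts))

# Example: the six-node, two-DC layout described above.
statement = full_replication_cql("kong", {"DC1": 3, "DC2": 3})
print(statement)
```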
C: There may have been other applications that used Cassandra, but to my knowledge I believe we were one of the first. I know I didn't have any experience with Cassandra prior to working with Kong, and I don't think Jeremy did either. And I know we were one of the first because we did have to have a policy rule change in order to allow us to connect to a Cassandra database from a specific network zone. So we had to be one of the first, yeah.
E: I would say that was part of our Optum technology culture of trying to embrace open source: Cassandra was on that path, and going with Kong and going with Cassandra were both a calculated risk inside Optum, and one that we embraced wholly. It took a lot of work to get the company comfortable with it, but once we did, and once we'd proven everything out, it has been a really great journey, and more teams within Optum are now embracing and using these technologies.
F: We're doing the same, you know, a three-node Cassandra cluster running on Azure right now, just in one data center, and we're probably going to expand that to a second data center and possibly a third one. Like I think you mentioned, we were definitely hitting some issues with the Admin API, not being able to delete stuff, for example, even though it's shown as deleted in the GUI. So maybe that's a great tip there.
B: Look at a couple of GitHub issues out there; I know I've posted a couple where I talk about some issues with replication factor and the Admin API. If you just search GitHub on Kong, you should run into those and see the comments from a lot of the Kong principal engineers, as well as our dialog and where I went with it, which was just to scrap having every node hold only a portion of the data and do replication across all nodes.
C: Are they physically close? Not really. You're going to eat a little bit, I guess; depending on how your data centers are configured, and depending on whether or not there is a dedicated trunk, you might eat a little bit of latency. But in general you don't usually feel that from the gateway perspective, unless you are doing something specific like an insert, like creating an OAuth token, or creating a route or service and making a change. Those shouldn't be very routine activities.
F: So you think you could have that setup even with, say, the US and Europe in a cluster?
F: Interesting. Sometimes, in my experience with other stuff, not Kong, if the data centers weren't pretty close and I was trying to do any kind of synchronous replication, I had to find other ways to do it. But yes, we're pretty new to Kong. So, about the tokens, the OAuth tokens: we use Ping Federate for our token processing and such, so we store...
B: And we cache that in local memory. We don't have a plugin that writes those tokens to the database from Ping Federate; we just cache them in local memory and let each independent Kong gateway in the various data centers cache the token into local memory, do lookups on it, and validate the token, essentially. Jeremy?
C: We wrote our own custom piece to get it done for us, and that's more or less a result of the fact that Optum has a fairly customized use case for OpenID Connect in some instances. For instance, you might go to more than one identity provider depending on the specific nature of the proxy call. So we had to build out our own from scratch, and that's just how we wrote it.
F: So you have some logic in there to verify the tokens, or something like that?
C: Yeah, depending on the call, we might call a JWKS endpoint and validate the token there. I don't want to spend too much time on this because we're getting kind of deep, but we might either call the JWKS endpoint exposed by Ping Federate and use that key to validate the token, or post the token to the authorization endpoint exposed by Ping Federate, and then cache those validation results for the lifetime of the token, depending on a couple of specifics in the pattern.
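A sketch of the caching half of that flow: remember a token's validation result until the token's own `exp` claim passes. The actual signature check against the JWKS key, or the introspection POST to the authorization endpoint, is abstracted as an injected `verify` callable, since those details depend on the identity provider:

```python
import base64
import json
import time

def _decode_segment(segment):
    """Decode one base64url JWT segment (header or payload) to a dict."""
    padded = segment + "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

class TokenValidationCache:
    """Cache validation results for a token's remaining lifetime.

    `verify` is whatever really checks the token: a JWKS signature
    check or a POST to the IdP's authorization/introspection endpoint.
    """

    def __init__(self, verify):
        self._verify = verify
        self._cache = {}  # token -> (valid, expires_at)

    def is_valid(self, token, now=None):
        now = time.time() if now is None else now
        hit = self._cache.get(token)
        if hit and now < hit[1]:
            return hit[0]  # still within the token's lifetime
        payload = _decode_segment(token.split(".")[1])
        valid = self._verify(token) and payload.get("exp", 0) > now
        self._cache[token] = (valid, payload.get("exp", now))
        return valid
```

Each gateway node keeps its own cache in local memory, which matches the "cache in local memory and do lookups" approach described above.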
F: Yeah. So what we're doing with the OpenID Connect plugin, the enterprise one, is that it kind of steps through that for you, you know, working out the JWKS and verifying the token is good using the certs in that JWKS and all that. So far it works really well, but under the covers, you know, I'm not really clear what's happening with the actual plugin. Does it store those tokens anywhere in the DB, or...?
A: Okay, if that was the question, could we repeat it once more, really clearly?
F: No worries, guys; of course we don't expect you to know the answers to this kind of stuff on the fly, it might take a bit of research and such. So it looks like, at UnitedHealth Group, or, I've forgotten the name, I joined late here, guys, you've got thousands of APIs running through the gateway, it sounds like?
B: We're still scaling out and building out the product and bringing on more APIs and customers as we speak. Right now, we've only got 10% of our APIs brought over onto the newer Kong stack, so there's still a lot of room for growth. I'm sure, as we progress, there are going to be a lot of discussions between us and Kong about running at scale, and as we bring on more APIs and more consumers we'll be watching how it performs and how it behaves under load as we grow.
B: Yeah, we have our homegrown self-service solution leveraging GitHub, essentially, where people do a pull request with gateway resources, whether it's a new consumer or a new proxy that needs to be created. Then we validate the taxonomy of the proxy's URI structure, as well as the consumers that are authorized and any kind of extra plugins that they want to throw on the proxy, to make sure it follows our enterprise standards.
F: So right now we've written some crude scripts to go off and, you know, create our services, create our routes and consumers, add plugins to services and routes, and so on. We've only got a few APIs up there right now, but of course we're hoping this is going to grow, and the demand is definitely there. So we're thinking we need to start automating this stuff, taking, say, for example, the API spec the provider gives us.
C: We had the exact conversation you're describing about a year ago, yeah, with the OpenAPI spec. The only reason we really couldn't make it work in Kong is that there are many different gateways that you could potentially publish to within Optum, deployed to different network zones to support various things, and we just weren't able to get the information out of the OpenAPI spec that would tell us where a service needs to go. But you could, in theory, do exactly that.
F: It's definitely an item. There's the UI, and there's the full Admin API; that's what we use to create all our Kong objects, and it's very, very powerful stuff. But it's very manual: for example, when a new API comes out, right now we have to do a bunch of stuff to get that API onto the gateway and configured.
B: Check the Kong forum. I know there have been some discussions around parsing OpenAPI specs and creating resources on top of that, so I would go check that out. I think it is kind of on the radar; I don't know if it's actually a roadmap item yet, but I know that Kong is aware of it as kind of a highly requested item.
G: So, we are aware of that, and we have recently done some work around importing and exporting, as well as declarative config. Just for the sake of clarity, and we do need to improve this in the docs: declarative config only applies if you're running DB-less. Running DB-less is nice if you're doing, you know, kind of level-four stuff; basically, if the proxy can be stateless, then DB-less is a good choice, and you can then scrap Cassandra and Postgres entirely.
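For a sense of what that looks like, here is a minimal DB-less declarative configuration, sketched as a Python dict. The top-level field names (`_format_version`, `services` with nested `routes`) follow Kong's declarative format as we understand it; the service itself is a made-up example:

```python
import json

# In DB-less mode Kong loads a file like this (declarative_config=kong.yml),
# and the whole gateway state lives in config rather than in a database.
config = {
    "_format_version": "1.1",
    "services": [
        {
            "name": "example-service",
            "url": "https://upstream.example.internal",
            "routes": [
                {"name": "example-route", "paths": ["/example"]},
            ],
        }
    ],
}

print(json.dumps(config, indent=2))
```

The same structure is typically written as YAML on disk; generating it from a dict like this makes it easy to template per environment.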
F: Sounds good. So, do you guys use Kubernetes at all? That's another one that we're looking at and thinking, do we need to use it or do we not need it? I think you said you're using Docker, so we've started to use that, but the next piece was: does Kubernetes make any sense to use for something like an API gateway?
E: Yeah, I can talk about that briefly. We have a lot of different products and platforms and things within our company, and within Optum, Kubernetes is definitely a large piece of that, as well as many other containerization platforms that are out there. So I think anything that can orchestrate the containers is a good fit for scaling out API gateways.
A: Nope? Alright! Well, thank you all for joining us. Our next call will be May 14th, and please add your name to the list of folks on the agenda to receive the calendar update that goes out every month with information about this call. Keep your eyes peeled for our CFP for Kong Summit 2019, and thanks again for presenting, Jeremy and Ross; we really appreciate it. Thanks.