Description
The team at Tyler Technologies has used Kong to help take the company’s applications from on-premise installations to multi-tenant cloud services. In this Kong Summit 2020 session, we’ll explore how Kong can help make this move without clients ever knowing it even happened, how we use Kong Brain to automatically generate OpenAPI documentation for our integrators and the AWS infrastructure choices we made to get a large, robust Kong instance running in AWS.
Learn more about Kong: https://bit.ly/2I2DypS
Hey guys, welcome once again to the Kong Summit here in 2020. Thanks especially to all of you for joining my session, where I'm going to be talking a little bit about how me, my team, and my company are using Kong to help us get from on-prem installations to the cloud. My name is Mark Graves, and I'm an architect for a company called Tyler Technologies.
Tyler Technologies is the largest provider of public sector software in the United States. We have a little more than a dozen divisions across our company, and those divisions specialize in all kinds of different government software: from courts and justice solutions, to enterprise resource planning and financial management, to big data, tax and appraisal, public safety, even bus routing and education software. All those divisions serve state, county, and local government across the United States, from the entire state of Minnesota to Santa Fe and Charlotte and Dallas. You name it, we're pretty much in every market in the United States. And as you probably already know, most governmental agencies in the United States still host their own IT infrastructure. That means they have their own IT staff and their own servers, and essentially that's what we at Tyler Technologies have built for historically.
These are things like child protective services or animal services, or the ability for a court system to report to the FBI. You can even imagine a scenario where we might have an installation with a local government where we're doing the court system but the sheriff's office is not on our system, so there would be an integration between the sheriff's office and the court system. So really what I'm going to be talking to you about today is moving software from an on-prem installation to the cloud, and for us, one piece of that is breaking up our large monolithic applications into smaller containerized applications that we can host in something like an EKS Kubernetes cluster.
I could talk about that for three hours straight, but what I'm really going to talk about today is what we're doing on the API front, and that's what we're looking at on this slide right now: how are we going to move all of our public, secured APIs from on-prem servers to the cloud? Essentially this all started with my boss's boss coming to my team one day and saying: okay, we already own two data centers.
We don't want to open a third; we want to control costs. And with the prevalence of ransomware attacks and other security issues in the United States today, we also want to provide a more secure product to our clients. So what we've done is we've signed a contract with AWS, and we want your team to go out and actually do the work to move all those on-prem installations to the cloud.
Well, one of the first problems we looked at and realized we really needed to solve was that when it came time to move an installation from on-prem to the cloud, we needed an easy way to make that flip. So that's the first thing we really focused on: how are we going to be able, at the time a service moves, to easily repoint that service from an on-prem server to the cloud?
Then we actually started evaluating our APIs and really looking at them, and we realized, number one, the URLs for our APIs were really, really cryptic. They often contained the name of the server that was actually doing the work, and they were just not easy to formulate. In fact, I put some examples up on this slide right now (and just for the record, all these URLs are fake, to protect the innocent), but you can sort of see what I'm talking about. The very top one reads dendvsvprod2.tylertech.com, then some path to some service. And we thought: well, maybe we can use this as an opportunity to make our URLs better going forward.
The other thing we realized is we have a lot of API diversity. We have REST APIs. We have SOAP APIs (yes, we're still using SOAP). We have WebSocket connections that are used for things like a virtual court system, where we're transferring things back and forth across a persistent connection. We're shipping all kinds of different payloads, from JSON to XML to even large PDF documents and other types of media. And keep in mind that a lot of the APIs that have been written for our company were written to a spec that was encoded in law, not necessarily one informed by practical experience in technology. So we have a lot of API diversity to deal with, and we need a flexible solution that knows how to deal with all those things.
The second thing we looked at and realized is that Tyler Technologies has historically, over the decades, brought in new divisions through acquisition. We buy these companies, and these companies often come with strategies and a whole model they've developed around their APIs, and that means we end up with a lot of diversity in terms of security models. We have teams that are using Okta, which is the pinnacle of identity management in 2020; we're shipping JSON Web Tokens; we're shipping SAML tokens; we're using OpenID Connect in some divisions, OAuth2 in others. And we wanted to make sure that we had a solution that would allow us to host a large variety of different security models.
Then, as we were looking at the APIs, another thing we realized (and I'm sure this has happened to you as well) is that we don't have a lot of documentation. We're trying to learn about these APIs, and we realized we don't even have documentation we can use to learn about them. And so we looked up and thought: okay, it would be really great if we had actual documentation that was globally referenceable by our third parties and our partner government agencies, so that they could learn about our APIs. They could go out on the internet, poke around on our APIs, and test them before they implemented their own integration solution.
I'm sure you've all been in this situation as well. Like I said, documentation is hard, and that's why we really want to generate our documentation if at all possible, the way Swagger docs often get generated when you build your project. And so we looked up and said: okay, it seems like the right solution for us is an API gateway. That seems like the right thing for us to do. And we're an AWS company; I love AWS.
Don't get me wrong, I'm not saying that a 30-second timeout is an unreasonable limit, but for our business, unfortunately, it kind of was unreasonable. You can imagine a scenario where you might make an API call that generates, as an example, a fire code violation document with dozens of high-resolution images, similar to what you see on my slide right now. Also, in the courts and justice space, you can envision an API that has to generate what we call an appeal packet.
An appeal packet is essentially a summary of a lower court's decision on a given case. When that case gets appealed, you have to generate a very large document that has all the information about the lower court case so that you can send it to a higher court for appeal.
Those things take time, and we knew that a 30-second timeout just wasn't going to work for us. Now, AWS also supports all the security protocols that we need. We can do everything we need in AWS, but it is going to take some work. For instance, if we want to use Okta, we're going to have to write a Lambda authorizer function; if we want to use IdentityServer, we'll have to write a different Lambda authorizer function.
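To give a feel for that per-provider work, here is a minimal sketch of the shape API Gateway expects a Lambda authorizer to take. The token check is deliberately trivial, and every name here (build_policy, the principal ID) is illustrative rather than anything Tyler actually runs; a real authorizer would verify the JWT signature against the identity provider's keys.

```python
# Hypothetical sketch of an API Gateway Lambda authorizer.
# A production version would validate the JWT against Okta's JWKS keys.

def build_policy(principal_id, effect, method_arn):
    """Return the IAM policy document API Gateway expects back."""
    return {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": method_arn,
            }],
        },
    }

def handler(event, context):
    # event["authorizationToken"] carries the caller's bearer token;
    # this stand-in check only looks at the token's shape.
    token = event.get("authorizationToken", "")
    effect = "Allow" if token.startswith("Bearer ") else "Deny"
    return build_policy("api-consumer", effect, event["methodArn"])
```

Every identity provider you support means another function like this to write, deploy, and maintain, which is exactly the overhead we wanted to avoid.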
If we wanted to use IAM for security on our APIs, number one, that would be crazy. That is not what we want to do; it's the furthest thing from what we want to do. So we just looked up and thought: well, it would be really nice if we could get security, but get it more easily, so it wouldn't be so hard.
The other thing we've learned from standing up Lambda functions out in AWS is that CORS isn't as easy as it seems in AWS API Gateway. It seems really easy, right? You just click the Actions button, and then in that dropdown you click Enable CORS. Yeah, no, it ended up not being quite that easy, and the reason was that automating that CORS configuration was actually kind of tough in CloudFormation.
We ended up selecting Kong, and here are some of the reasons why. The first reason is that there isn't a fixed timeout. I can choose to make my timeout really small, I can choose to make it really large, or I can choose to have no timeout at all. Having no timeout at all is probably not what we're going to do, but this does give us the flexibility to have long-running API calls still succeed.
The next thing is security. We really wanted security to be easier, and with Kong, all we really needed to do was put up a plugin for OIDC, OpenID Connect, and we could essentially create a standardized identity call right there in Kong for all of Tyler's services, regardless of what division was making the call. The way that works is: an API consumer makes the call, Kong receives a JWT from that API consumer, and Kong then calls into Okta to verify the signature on that token. The on-premise apps that are out there can continue to do whatever security they were doing before, but we will always have the ability to interject Tyler's main identity solution on any API call that's already happening.
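As a rough sketch of what that looks like declaratively (plugin and field names follow Kong Enterprise's OpenID Connect plugin; the issuer and audience values are placeholders, not Tyler's real configuration):

```yaml
# Hedged sketch of a decK-style entry for the openid-connect plugin.
plugins:
  - name: openid-connect
    config:
      issuer: https://example.okta.com/oauth2/default   # placeholder IdP
      auth_methods:
        - bearer          # accept a JWT presented by the API consumer
      audience:
        - api://tyler-services   # placeholder audience claim
```

One plugin entry like this, applied globally or per service, replaces a pile of per-provider authorizer code.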
Then CORS setup was really, really easy. I'm showing you a screenshot right now of our CORS configuration for our developer portal on one of our Kong instances. This is it: all we really had to do was add one plugin, put in the origin URL and the methods that were allowed, and then boom, we click save, CORS just works, and it doesn't break. It was just really, really simple.
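What that screenshot boils down to, expressed as a declarative sketch (the origin here is a placeholder, not our real portal URL), is roughly:

```yaml
# Hedged sketch of Kong's cors plugin configuration.
plugins:
  - name: cors
    config:
      origins:
        - https://portal.example.com   # placeholder allowed origin
      methods:
        - GET
        - POST
      credentials: true
      max_age: 3600   # seconds a browser may cache the preflight response
```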
And maybe the best part was that when we chose Kong, we got a lot of icing on the cake. For instance, the request and response transformation plugins that exist inside of Kong allow us to present any URL we want to the outside world, essentially creating a facade, so that we can create a very standardized, consistent URL structure for Tyler as a whole, as a corporation. Then, when you send in a request, we can actually use that request transformer to change the request we receive into something that the upstream service can understand. That was really great for us.
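A hedged sketch of that facade idea, using a Kong service/route pair with made-up names and hosts: the public path is the clean, standardized URL, and strip_path keeps the legacy upstream path hidden from consumers.

```yaml
# Sketch only: names, hosts, and paths are placeholders.
services:
  - name: courts-cases
    url: http://legacy-host.example.com/SomePath/SomeService  # cryptic upstream
    routes:
      - name: courts-cases-route
        paths:
          - /courts/v1/cases   # clean public-facing URL
        strip_path: true       # drop the public prefix before proxying upstream
```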
The second thing was access control lists. This is just yet another option for us in terms of security: it gives us the ability to grant access, even to specific endpoints, to specific consumers of our services.
Another great thing is the AWS Lambda integration, which I think has been done really, really well inside of Kong, because essentially there's just a checkbox that you check that says: hey, I'm going to be calling a Lambda function, make sure to format my call in a way that AWS can understand. And we have found that checkbox to be super awesome.
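That "checkbox" corresponds to a boolean on Kong's aws-lambda plugin; a minimal sketch (region and function name are placeholders) might look like:

```yaml
# Hedged sketch of Kong's aws-lambda plugin configuration.
plugins:
  - name: aws-lambda
    config:
      aws_region: us-east-1
      function_name: my-function     # placeholder Lambda name
      awsgateway_compatible: true    # the "checkbox": wrap the request the
                                     # way AWS API Gateway would
```

With that flag set, a function originally written for API Gateway can be invoked from Kong without changes.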
And finally, and maybe most importantly, there was the Datadog integration. Tyler Technologies is a big consumer of Datadog; we ship all of our cloud logs to Datadog for aggregation, and it was just a really, really great thing to have. And so we looked up, we chose Kong, and we decided: all right, we need to make a plan around this. We need to plan for a future that is essentially an evolution, not a revolution, because we want to end with the revolution, but we can't get there in one step. We need to evolve toward that plan.
That information needs to be encoded in headers to be sent to an upstream service, and we're going to use the request transformer to add those headers by plucking information off the path. Then we'll have a very standardized way for teams to formulate their URLs, and we're going to mandate that now, before we ever start moving to the cloud.
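One way to sketch that "pluck it off the path" step (the route name, capture group, and header are hypothetical; the templating shown requires the Enterprise request-transformer-advanced plugin, and exact regex-path syntax varies by Kong version):

```yaml
# Sketch only: a named capture in the route path becomes an upstream header.
routes:
  - name: tenant-route
    paths:
      - "/api/(?<tenant>[^/]+)/cases"   # named capture group in the path
    plugins:
      - name: request-transformer-advanced
        config:
          add:
            headers:
              - "X-Tenant-Id:$(uri_captures.tenant)"  # hypothetical header
```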
And that means we want to implement Kong before we move to the cloud. So our plan is essentially to take Kong and front our existing on-premise services with it, so that when the time comes for us to make the switch, it will be easy to do and our API consumers won't know any different. And by changing the URL structure today, when it comes time to actually make the move to the cloud, we won't have to change URLs with our API consumers at the same time, which would make that move even harder.
The next thing was Brain; having Kong Brain was a really big thing for us. Kong Brain is basically a tool that analyzes the traffic coming through your Kong instance and generates Swagger documentation based on that traffic. That was a really great addition for us, because some of our teams have really great Swagger documentation that they already want to use, and some of our teams didn't have any documentation at all.
And finally, and maybe most importantly: once we have those URLs in place, once we have Kong fronting all of those on-premise services, then when the time comes for us to move a service from an on-prem installation to the cloud, all we have to do is go into Kong and point a given route from the on-prem service to the cloud service, and our third-party integrators should have no idea that it even happened. And so we looked up and said: all right, now we've chosen Kong as our solution.
What are we actually going to build? Our team at Tyler believes that if you can't automate it, it's not worth doing, and so the first thing we decided was: we need to build a CloudFormation template that we can use to create a Kong instance in any AWS environment we want, at any time. And that's exactly what we did. If you've ever built anything in AWS, you know the first thing you have to build is a VPC, and of course that's exactly what we did.
We wanted this to be a highly available solution, so we made sure that we spanned at least three availability zones. We're spanning three availability zones, we have three public subnets and three private subnets, and then we started layering the actual network infrastructure on top of that. Of course, any VPC needs an internet gateway in order to access the outside world.
We also have an external application load balancer (ALBv2) running in our VPC. That load balancer takes traffic from the outside world and ships it through to our private subnets, where our Kong instances are going to live. The next piece that we thought was really, really important was a NAT gateway. We added a NAT gateway so that all traffic coming from our Kong instances leaves from one, and only one, IP address.
Next, we added a network load balancer in our private subnets, because we knew we were going to have a cluster of Kong nodes and a cluster of Brain nodes, and we wanted to make it easy for those two clusters to communicate with each other. So that's the next set of pieces we added. I think I already mentioned earlier that our cloud strategy is to containerize and move things into an EKS Kubernetes cluster, and so a lot of people might ask at this point: well, Mark,
why didn't you guys just create an ingress controller out of Kong that sat in front of your EKS cluster and routed traffic? Well, as you can see from my presentation, we're starting by having Kong front on-prem services, so we would have had an ingress controller sitting in Kubernetes taking traffic from the outside world and then shipping it right back out to the outside world.
We can create an initialization script that installs Kong, configures Kong exactly the way we want, and creates an instance from scratch, so that all we have to do, should we want to scale up or scale down, is go and change an integer inside of AWS to say: now I want six nodes instead of five, or I want four nodes instead of five.
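That "change one integer" idea maps naturally onto a CloudFormation Auto Scaling group whose desired capacity is a template parameter. This fragment is a sketch with hypothetical resource names, not our actual template:

```yaml
# Sketch only: parameter and resource names are illustrative.
Parameters:
  KongNodeCount:
    Type: Number
    Default: 3                      # the one integer we change to scale

Resources:
  KongAutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      DesiredCapacity: !Ref KongNodeCount
      MinSize: "1"
      MaxSize: "10"
      LaunchConfigurationName: !Ref KongLaunchConfig  # defined elsewhere
      VPCZoneIdentifier:            # the three private subnets
        - !Ref PrivateSubnetA
        - !Ref PrivateSubnetB
        - !Ref PrivateSubnetC
```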
We essentially did the same thing with Brain and Immunity. Immunity is like Brain in that it evaluates your traffic, and it gives you a heads-up about potential problems that might be happening inside your APIs. Brain and Immunity are packaged as one unit; we're primarily using Brain right now. So we created a cluster of Brain nodes as well. Each of those Brain nodes has Docker installed on it, and we're running Brain and Immunity as Docker containers on those particular nodes.
decK's code is open source; it's a Golang CLI, and it's really well written and works great. Essentially, with Kong decK you can create a YAML representation of the configuration that you want to post to Kong, and you can easily post it.
A
Furthermore,
you
can
reverse
that
right
and
you
can
actually
pull
down
or
dump
all
the
configuration
in
your
kong
instance
to
that
down
to
your
machine
and
yaml
files
as
well,
giving
you
sort
of
the
opportunity
to
back
up
your
con
configuration
with
the
ammo
fonts,
though
kong
deck
is
super
awesome
and
we
think
it's
really
well
built.
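That round trip, sketched with a minimal decK state file (the service and route names are placeholders):

```yaml
# kong.yaml: a minimal decK state file.
# Typical round trip against the Kong Admin API:
#   deck dump   # export the running configuration to kong.yaml
#   deck diff   # preview what a sync would change
#   deck sync   # push kong.yaml to Kong
_format_version: "1.1"
services:
  - name: example-service            # placeholder service
    url: http://upstream.example.com
    routes:
      - name: example-route
        paths:
          - /example
```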
There were a couple of things it couldn't quite do for us, though. For instance, at the time we were developing our CloudFormation template, decK didn't really have the ability to create or delete a workspace, and so we created our own Golang CLI. I say "we"; it's really Sammy Khan, who is my technical mentor at Tyler Tech. I hope you have a technical mentor as awesome as Sammy is.
It adds plugins like the OIDC plugin and the CORS plugin that I mentioned earlier, and all the other standard plugins that we think every Kong installation at Tyler Technologies should likely have. And then the third piece for us was the Kong Portal CLI, because we knew that some teams were going to want to post their own documentation, literally their own text as well as Swagger documents, to our Kong developer portal, and the Kong Portal CLI made that possible for us.
We seek to contribute, and we seek your contribution, and so we've actually posted that AWS CloudFormation template on GitHub, so you can go pull it right now and essentially build the infrastructure I've outlined in the previous slides. And that, basically, is the story of Tyler Technologies and our use of Kong to try to move our public, secured APIs from on-prem to the cloud. I hope you enjoyed the presentation, and I definitely hope you enjoy the rest of the conference. Thanks so much.