From YouTube: [Online Meetup] Terraform, Heroku, 1.0 GA update
Description
This month we had demos from two community members. Dennis Kelly showed off the Terraform module that he built to deploy Kong. Mars Hall demonstrated how to deploy Kong on Heroku.
Resources:
Terraform module overview talk - https://konghq.com/blog/kong-terraform-field-dreams/
Terraform module - https://github.com/zillowgroup/kong-terraform
Kong Heroku app - https://github.com/heroku/heroku-kong
Kong Heroku Terraform example - https://github.com/heroku-examples/terraform-heroku-common-kong-microservices
Join our next Online Meetup: https://konghq.com/online-meetups/
A: This is our November 13th meetup, and today we have Dennis Kelly here, who is going to present a Terraform module that he built at Zillow Group to deploy Kong, and Mars Hall, who is going to present his Heroku Kong app. So without any further ado — Dennis, do you want to go first?
B: Building this in AWS, I specify a region — us-west-2, close to Seattle, in Oregon — and then I have the actual Kong module itself. Here I'm calling the module from GitHub; it's hosted by Zillow Group as kong-terraform. As it stands right now I would be running this off of master, but we do version the module so that you can lock in at a version and not take incremental updates.
B: Incremental updates would then change your infrastructure, and there are plenty of examples online on how to avoid that. I think the current release is version 2.1. This example is about the bare minimum that you would need to define in order to deploy a full Kong cluster in AWS. What it is going to provision for you is an Aurora cluster (Postgres) for the Kong database backend and configuration, then EC2 instances for the Kong nodes themselves, and then load balancers.
B: Both internal and external load balancers — excuse me — so that you have access to internal-facing resources such as the GUI or the admin API. You wouldn't want to expose those to the internet, so we only expose the Kong gateway itself out to the internet. Then you can additionally include tags that you would use for, say, auditing and billing purposes in AWS. So that's the very basic piece of it.
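A minimal sketch of what such a root configuration could look like. The variable names here are illustrative assumptions, not the module's exact interface — check the kong-terraform README for the real one:

```hcl
provider "aws" {
  region = "us-west-2" # Oregon, close to Seattle
}

module "kong" {
  # Pin to a release tag rather than tracking master, so incremental
  # module updates cannot change your infrastructure unexpectedly.
  source = "github.com/zillowgroup/kong-terraform?ref=v2.1"

  environment = "prod"

  # Tags propagated to the provisioned resources, e.g. for
  # auditing and billing purposes in AWS.
  tags = {
    Owner      = "platform-team"
    CostCenter = "12345"
  }
}
```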
B: I then want to go over to the Kong module itself on GitHub, and we'll do some exploring over here. Here you can see all of the resources that will be provisioned for you. In addition to what I have already mentioned, there will be auto-scaling groups for the EC2 instances, so that as your load increases you can add additional instances into your cluster, or scale back down. There's also some additional information about which AMI is used and how everything gets provisioned.
B: But then, if we look at this variables file here, these are all the variables that you can pass in to the Kong module to fine-tune your own specific environment. You can see there are public and private subnet tags that we rely on within the VPC, so that you're not statically coding subnet IDs into your Terraform. If you, say, rebuild your environment and those subnet IDs change, you're just relying on the tags to rebuild your cluster. There are also access control blocks that you can tweak for, say, your bastion host.
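The tag-based subnet discovery and access-control variables might be overridden along these lines — again a hedged sketch; the actual variable names live in the module's variables.tf:

```hcl
module "kong" {
  source = "github.com/zillowgroup/kong-terraform?ref=v2.1"

  # Discover subnets by tag instead of hard-coding subnet IDs, so a
  # rebuilt VPC with fresh IDs still resolves via the same tags.
  # (Variable names are assumptions for illustration.)
  public_subnet_tags  = { Type = "public" }
  private_subnet_tags = { Type = "private" }

  # Restrict admin/SSH access to, say, your bastion host's network.
  bastion_cidr_blocks = ["10.0.0.0/16"]
}
```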
B: If you look at the load balancers or the EC2 instances that are provisioned, they're just going to be called zg-kong-2-1, and then these supply the environment — so in my example it would be prod. It supports both the Community and Enterprise editions; here's a flag to enable the Enterprise edition, which is just true or false, and then you have EC2-specific settings again.
B: All of these are tweakable, as is which Kong package is installed. The latest versions that I had tested were Kong Enterprise 0.31 and Kong 0.12.3. Obviously that's a little behind the times now, but for all intents and purposes it should easily upgrade to the other editions just by overriding the package name. Then you can also tweak things like the load balancer settings.
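The edition toggle and package override could look roughly like this — both the variable names and the package name are illustrative assumptions, not the module's verbatim interface:

```hcl
module "kong" {
  source = "github.com/zillowgroup/kong-terraform?ref=v2.1"

  # Community vs. Enterprise edition is just a boolean toggle.
  enable_ee = true

  # Override the package to move past the versions the module was last
  # tested with (0.12.3 CE / 0.31 EE at the time of this talk).
  ee_pkg = "kong-enterprise-edition-0.31.postgres.all.rpm"
}
```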
B: You can go in there and tweak them. We do have CloudWatch alarms, so that when you hit a threshold of 4xx or 5xx errors you can alert on those and use CloudWatch actions for them. A good example would be defining an SNS topic linked to PagerDuty, so that if you hit your thresholds, or if a node goes unhealthy, you could page on those events. And then again, the same thing: you can tweak everything with the database settings as well, and also with Redis.
B: So all of those are just values that you would then pass back into the Kong module you saw earlier on the shell screen. Here I have an example where I defined some additional values; I don't necessarily have to explicitly enable the internal load balancer, but I did, just for example's sake.
B: So once you have things defined the way that you want, all you then have to do is run `terraform init`, `plan`, and then `apply`, and that will provision all the resources for you: the security groups, the load balancers, all the nodes. Your Kong cluster will come up healthy within about 10 minutes, because the Aurora clusters do take some time to provision.
B: While that's provisioning, though, it's important to go into the AWS console, into the Parameter Store. That's where a lot of the secrets are kept encrypted, so that none of them end up in your Terraform files or in your Terraform state, which might then be pushed back to a repository and be available in plaintext. So while the Kong cluster is provisioning, you're going to go into the EC2 Parameter Store, and here we have this generic service path — in the example it would be zg-kong-2-1/prod/db/password.
B: We do use a default master password for Postgres, so it's highly recommended that you log in to an EC2 instance, or use your bastion host, to run these commands. Here you can see the default password is "KongChangeMeNow#1"; you would connect to the database host, run `ALTER USER root WITH PASSWORD` supplying your new password, and then put that new password back into the Parameter Store.
B: We welcome pull requests on this, so please feel free to contribute back and support the project.
B: I think what we have slated next would be RDS support. Aurora is, you know, their enterprise-class database service, and the minimum instance type is pretty hefty in terms of computational power and also cost, which may not be necessary for some environments — especially given how little the Kong nodes actually rely on the database. So we're going to be adding RDS support in a future release.
B: That way you can toggle between Aurora and RDS from a cost-saving and feature perspective. We're also looking at tweaking some of the VPC settings, so that if you happen to tag your subnets differently — say you don't use the Type tag that we use — you could modify that setting as well, just to give you more flexibility when you're provisioning.
B: Yes — I suspect that's the only reason: the password-generation option didn't exist in Terraform when I started with this. Again, that's a great example of a contribution where someone could come in and provide that, and I'm also happy to go and revisit it, because I think that's a great idea. The more you can automate, and the less you have to manually go in and change things, the better — that's the whole DevOps principle — so I think that's an excellent thing to add. Thank you.
B: What I do is: when the instance comes up and I see that the status is running, I actually create a Kong endpoint for health checks that loops back to the admin interface, such that I don't have to expose the admin interface beyond localhost, but I can still just connect to a Kong endpoint.
B: Well, that's exactly it: if Kong is unhealthy, then you really don't want it to serve requests, so a great way to make sure Kong is healthy is to loop back to that admin API to get the status. If the cluster is unhealthy, the check would fail; if Kong can't serve the request, it would also fail. In either case you really don't want that node serving any traffic.
B: No, that's absolutely correct. Right now, as it stands, this really only checks for that 200 status. We could obviously do a deeper ping or a deeper health check in order to create different levels of severity, and I think that would be a great thing to do — just leverage CloudWatch. So if you had an instance where the database is unreachable, you could notify the admins of that while the Kong cluster is still up, healthy, and running with cached values in the interim.
B: Yeah, and while we're in this file, I think it's also good to note that I really built these Kong clusters to be enterprise-level and feature-rich. You can see in here that we're leveraging runit, a process supervisor for Linux that basically always makes sure Kong is running.
B: We keep 14 days' worth of logs on disk, rotating them to make sure you're not filling the disk as you go. I've also written a Splunk log plugin for Kong that enables you to send, basically, the HTTP logs to Splunk. This way you can have a longer-term retention policy and get the rich metrics that come from the HTTP log, versus just the nginx logs.
C: Interesting — that's neatly done. One of our plans, something we've wanted for a long time and ideally get to eventually, would be to give the ability to provide custom serializers for log plugins, so that you could inject the serializer as some chunk of Lua code and then apply a different serializer per transport plugin. So whether you log over TCP or other transport modes, your serializer would just extract whatever data it desires — especially with the PDK, the Plugin Development Kit.
C: That should make it a lot easier: serialize it into JSON, or maybe any binary format — anything — and then send it over a transport again. That would be very, very neat, but I understand the need to fork the plugin currently. And it's very nice that you're using runit and the bundled nginx binary — that's some power usage here, very nice.
B: Yeah, I try to adhere to some best practices there. I definitely want to make it easy on myself; when you start to go off and do your own thing, it's important to make appropriate decisions about how and where you fork off from mainstream. So with the status endpoints for the health checks, you know, I think I could have gone in and done some work on the nginx side to make that work.
B: The logging — I think that's an excellent idea, because if I were just able to provide a custom serialization for the log, I could probably get away with creating a Kong endpoint for logging that adds my Splunk token in the headers and passes it on. So again, using Kong to deliver Kong's logs — and that would eliminate the need for all that custom plugin work.
A: Awesome, thank you very much — this is very cool. So next up we have Mars. Dennis, I think if you stop sharing — perfect — then Mars, you should be able to start sharing.
D: Heroku, if you don't know, is a platform as a service — one of the original platforms as a service — that makes it really simple to go from code in a git repository: you essentially push it to our platform, and we manage the build, the runtime, the operations, patching the stack, all kinds of stuff. So it makes it really simple to build your business and focus on what's unique and important about your apps. So, why Kong on Heroku?
D: When you deploy an app, Heroku automatically routes traffic to the instances of the app, but there's very limited custom configuration: you can attach custom domains and custom certificates, and you can use our automated certificate management, which actually uses Let's Encrypt on the backend.
D: We hit a lot of difficulties, though. Kong back in the 0.5/0.6 time range was made up of a number of different processes and used the network for clustering, and it really didn't work so well on our platform, so we ended up suspending this experiment in the spring of 2016.
D: But — I'd like to think thanks somewhat to our feedback, and a lot of amazing work at Kong — it's evolved, and earlier this year we came back to it, and we now have something that works really gracefully on Heroku. So what is the app itself? It lets you deploy Kong instantly; you don't have to think about code or configuration up front.
D: It even works on our free tier. Now, that can't handle much traffic and you can't scale it while it's free, but it lets you very quickly spin up Kong, start configuring it, and see how it will work for you. One of the features of our Heroku Kong app is that we automatically enable a secure proxy to the Kong admin API. This is documented in the docs themselves, and I'll show you more about that.
D: So here is the GitHub repo, heroku/heroku-kong, and here is our "Deploy to Heroku" button. I just wanted to give you a demo of how this works: I'm going to name my app "community-call-kong" and put it in my personal apps, and there's nothing else to configure here — the admin key will be auto-generated. So I'm going to hit deploy, and we should see it begin to do its work pretty quickly.
D: There are a number of different ways you can deploy apps to Heroku. You can do it through these button deploys; the more conventional way is to have the source code and use `git push heroku master`, which pushes to the Heroku git remote, and then Heroku builds and deploys your app. So — the build is almost finished.
D: As you can see, this is a 19-megabyte slug. That means our application, as it is sent to the different servers, is just under 20 megabytes, which is a nice low-profile app. What it's doing now, as it runs scripts and scales dynos, is actually setting up the database — we have some database seeds to set up that secure admin API.
D: So I'll give that demo here. Okay, this has finished deploying; I'm going to step over to manage the app, and if you're familiar with Kong, you'll know this message means that Kong has been deployed and nothing has been configured yet. So that's all it takes to get Kong running, along with the auto-generated admin API access.
D: If I go look at my settings and reveal the config vars, here's my admin key; I'll copy that and hide them again. Actually, I don't want to take up too much time, because the Terraform example is going to show you this working live. So anyway — that's all it takes to get Kong working on Heroku: a very simple deploy.
D: Now let's go to this GitHub repo, which is in heroku-examples, and here it is. This one, much like what Dennis just demonstrated, is actually a Terraform config, and this is the architecture of what it provisions to Heroku: a Kong gateway app, and then my sample is just a single microservice, though of course there could be more of them. So we're going to step over to my terminal now — I've got this preset.
D: These are the primitives we use to deploy on Heroku: we have releases for each of these apps; we have formations, which represent the number of instances and their sizes; and then we have the slug, which is the code the app is made of — one slug for Kong, and another here for this wasabi app, which is actually a Node.js app. So, if I start this provisioning — as Dennis was alluding, Terraform is pretty amazing; if you're not using it yet, I highly recommend you check it out.
D: This should do a complete provisioning in just a minute or two. As you can see, it's making the apps. As for the actual configuration — if I go to the main.tf file here, you'll see that we're using the Heroku provider, the open-source Kong provider, and then this random provider in order to generate a random admin key.
D: So when Kong is provisioned, we use what's called a local-exec provisioner to run a health check against Kong, so that Terraform won't try to configure Kong until it has actually booted up. Once it's booted, we have these few resources here: this Kong service, which points at that wasabi app URL; the route, which is going to be /wasabi on the Kong proxy; and this Kong plugin, which inserts our internal API key on the backend request.
D: You'll see that there's my fun little wasabi-time backend app. Of course, in this configuration you can have far more than just one backend app; you can configure all your routes, and it will all come to life as a single Terraform config. A huge part of this is thanks to Kong's really pure, API-driven behavior, which is a blessing for the way it works on our platform. And then, of course, part of what's great about Terraform —
D: — is that, as easy as it is to create something, you can destroy the whole thing. So here we can tear it all back down almost instantly, and then those resources aren't used anymore. Basically, we have enabled Kong as an app that integrates programmatically with other apps on our platform. It's a really great story; this has just happened over the summer, so we're very interested in getting more feedback and seeing what you folks think about it.
D: So, with that Terraform demo, of course I'm using the Kong provider, which is made by this British fellow named Kevin Holditch. I don't know if you all are familiar with him, but it seems like he's done an amazing amount of work: he made a whole Go client for Kong and then built the Terraform provider on top of it.
D: So this is calling Kong 0.14.1 right now. As soon as RC1 came out, I tried it, but the lack of database migrations really cramped my style on that first RC, so I didn't do anything with it, and it's in my stack of to-dos right now to try it out. If there are any issues running 1.0 on Heroku, that's where you'll probably hear from me.
C: Please, let us know — and yeah, we will talk about that in a second. Okay, great; maybe that's a topic for a future community call.
C: Yeah, it was awesome. Let's see if I can share my screen.
C: Okay, so we wanted to give everybody a 1.0 update and briefly talk about the development process around Kong 1.0. As you know, and as we've just talked about, Kong 1.0 is meant to reflect the stability of the last three years of Kong; it's running in production in mature organizations around the world, both as the Kong open-source project and its Enterprise edition. The first release candidate was announced —
C: — at our first user summit in San Francisco back in September. It introduced a number of changes, including migrations, as was just discussed. Starting with 1.0 we also consider the Plugin Development Kit the official way of writing plugins, and we've made a number of improvements in the admin API and in the database layer through which Kong talks to the underlying Cassandra or Postgres data stores. After another few weeks of development and helpful feedback from the community, we announced RC2 in October.
C
Our
c2
is
the
first
release
candidate
that
has
migrations
between
an
existing
0:14
deployment
of
God
and
1.0,
so
Mars.
To
answer
your
question,
if
that
is
what
you
were
looking
for,
RC
should
have
this
the
upgrade
path
between
0,
14
and
1.0.
On
top
of
that,
RC
has
a
few
last-minute
features
that
didn't
make
it
to
our
c1,
but
are
very
important
to
be
part
of
1.0.
C: Including also — let's not forget — the bump of OpenSSL to 1.1.1, which will bring us support for TLS 1.3 very soon as well. This was a significant effort under the hood: although the result is just a version bump, like 1.1.0 to 1.1.1, it actually required a lot of work, but it will give Kong the ability to support TLS 1.3, which we are very excited about. I did also want to remind everybody that the upgrade path has changed compared to 0.14.
C: We have reworked the migrations — something that we've had a lot of feedback and complaints about over the last few years. It was very important for us to ship a 1.0 version with very strong and robust migrations, which is hopefully what we've achieved with 1.0, and we are still looking for feedback. The new process is a two-step process in which you first upgrade your database.
C: We have documented this process, currently only in the announcement post for Kong RC2, but it will also be documented on the website, of course, and in the upgrade path on the GitHub repository — we just haven't had time yet to put that in place. There was one other point of confusion from our RC2 users, so I wanted to remind everybody that the new migration path has slightly different commands, but ultimately is a lot more robust. We have tested the upgrade from 0.14 to 1.0.
C: We're still running additional tests to exercise each of our migrations, to show no downtime on the proxy side, and to make sure everybody can upgrade to 1.0 very smoothly; in the future, every upgrade with migrations is going to be thoroughly tested with this new two-step migration process, which we're very excited about. The last part of the migrations is the migration from a fresh data store, for which we introduced a new command, `kong migrations bootstrap`, to really distinguish between the migration path for a fresh data store — one that has not been bootstrapped — and the migration path from an existing data store. By making this distinction and providing very clear error messages, we hope the usability of the migrations command is greatly improved, as well as its stability.
C: Yeah, I think that's all I wanted to say regarding migrations — I hope I haven't forgotten anything. We can have a few questions, but I'm mindful of the time. So let's talk now about what the future release candidates are going to look like. We've planned for two more release candidates, RC3 and RC4. RC3 is the first release candidate that introduces the announced service mesh feature; we are still squashing the bugs away on this one.
C: We are also planning on releasing a version that we call 0.15, which will have the service mesh and all the new features introduced with Kong 1.0, but with the deprecated API entity and the deprecated concepts and utility functions for plugins still in place. This way we are hoping to give our users and the community the most flexibility. We are hoping that the upgrade paths between 0.14 and 1.0, and between 0.14 and 0.15, are very smooth — so that if you're not ready to upgrade to 1.0 and your plugins just yet, there is still a path.
C: You can at least upgrade to 0.15, still enjoy the API entity, and not have to upgrade all of your plugins yet. Then later on, when you're ready for it, you can upgrade from 0.15 to 1.0, and that upgrade would just be removing the APIs entity and a few utilities — assuming you've converted your plugins to use the PDK. So that's where we are at with 1.0 development.
D: Do you mean — so, for example, for the Heroku Kong app we automatically run migrations, and I'm wondering: can `kong migrations bootstrap` just be run automatically without damaging an existing data store, or is it something where we need to probe to figure out whether to run bootstrap or the regular migrations?
C: No — running bootstrap should be harmless. It's completely enclosed within "if not exists" clauses, even within Cassandra, so it should be pretty harmless to run the bootstrap command. Something else I can mention about the new migrations is that there is a new command — I believe it is called status — and it gives you not only error messages but also an exit code that corresponds to what the next steps for your database would be. So say you have a newer version of Kong and you run status:
C: It tells you that you have new migrations to run; or that you have run some migrations and are in the process of migrating but haven't run the finish command yet for those migrations — so it's sort of a way to automate your migration process; or it tells you that everything is up to date and you don't have any pending migrations, so you are as up to date as you could be — both with error messages and with the exit code.
C: So it's also a great little helper to run to figure out what you have to do to upgrade your database, or to start taking action. But bootstrap's surface is harmless, and hopefully you can test this and make sure we didn't make any mistakes. Any feedback and testing is very much welcome, of course.
C: That testing has been really helpful. We've caught a number of issues because we've upgraded all the bundled plugins to use the Plugin Development Kit — we noticed a few things that were left over and a few misbehaviors indirectly introduced; the Plugin Development Kit itself is not buggy, but we've had to change some surrounding code — and we've noticed a 0.14 migration issue in this upgrade for which we haven't written the tests yet. That is something we were working on before RC3 and RC4.
A: So the last thing that I actually had on our agenda was maybe to just briefly mention the CLA. I actually haven't been too involved in that process — I don't know if you have?
C: I haven't either, but it's been something the organization has wanted to put in place for a long time. As you probably know, for open-source projects which grow in size and are backed by companies such as Kong, a Contributor License Agreement sort of becomes — I don't want to say a necessary evil, but a necessity.
C: Ultimately, signing the Contributor License Agreement hopefully isn't too much of a hassle: it should be very, very simple to sign, and then your pull request can be approved. There will be a tick box specifically for the Contributor License Agreement, so there should be minimal friction, hopefully.
C: So far we still receive the usual number of pull requests, we've had a number of contributors already signing it, and I do think we've already merged pull requests that went through the Contributor License Agreement — so it hasn't had any further impact in terms of our contributions. So far we're pretty happy.
C: Sure — we've also had some contributions on Kong itself, Kong core, but contributions are welcome whether they go to the community, the project itself, or the documentation, and it's been really awesome to see the documentation receiving some love. It was very rewarding to see people coming together and contributing on a number of issues that had been cleaned up and labeled as good first issues — that was really helpful for newcomers to come in and decide what they would be hacking on.
A: Sweet, alright. Well, with that — thank you, everybody who stuck around for the last couple of minutes. This recording will be online. Thank you so much to Mars and Dennis for great presentations, and we will see you next month. Next month we're moving the call up a week, because it falls right in the middle of KubeCon, so we'll be moving it up to — I think it's the 6th. Great, have a good week, everybody! Thank you, everybody.