Description
Learn more about Kong: https://bit.ly/2I2DypS
SIGHUP provides enterprise-grade cloud native infrastructure automation for Kubernetes and cloud native stacks. In this session, we’ll dive into how we leverage Kong for Kubernetes including:
1) Our opinionated reference architecture on VMs
2) Configurations (healthchecks, Prometheus integration)
3) Installing and managing a Cassandra cluster
4) IaC with decK in Enterprise (hello RBAC users)
5) Easy developer portal with “portal” (Why didn’t you tell us?)
6) We got you covered with serverless (Lua will always be your friend)
7) Why we love Kong for Kubernetes
A little disclaimer: in this talk we will mostly focus on shortcomings of version 1.x of Kong Enterprise. Some of them have already been addressed in the latest releases, but we'll go through them and point out which are which.
We'll definitely need one or more control plane nodes. Obviously, Kong assumes only one node at a time is interacting with the database; otherwise, inconsistencies might happen. A single node with a reduced mean time to repair is totally acceptable, given that the control plane is not on the data path and the business-critical functionalities are not affected in case of its failure.
As a DB, we usually go with Cassandra. We'll discuss the choice in more detail in the next slides.
If we need some plugins requiring Redis, we usually set up a cluster of three or more Redis nodes with Sentinel. Then we'll need a pair of load balancers in front of everything, and to monitor it all we usually deploy Prometheus and Grafana. Sizing will depend on the expected load. Obviously, as I was saying, version 2.x changed a few things.
Even on version 1.x, enabling the admin GUI and API only on the control plane nodes was mandatory for us. Enabling the developer portal on all nodes is required, but we wanted it to be served only by the data plane nodes. And then we wanted to enable Prometheus metrics and health checks on all nodes, which, having disabled the Admin API, we'll see is not trivial.
Our choice usually follows this simple rule of thumb: if you have the possibility to use a managed DB (for example, you are on AWS and you can use an Aurora cluster), go for it. The DB will be relatively small and low maintenance, so even a few small instances will be more than enough. If that's not the case, but you have some existing skills in the company with one of the two options, choose that one.
If even that is not the case: when Kong is handling production traffic, you will not want the database to go down often, so it's mandatory to have it with a long mean time to failure. Therefore, the easiest solution will be Cassandra, as it's natively distributed, while Postgres will need some more care to achieve the same degree of resiliency.
Lastly, if you can guarantee a short mean time to repair, even a single Postgres instance with a well-organized backup and restore strategy could be sufficient. Keep in mind that Kong will keep working for a while even if the DB is down, and usually the DB size will be around the hundreds of megabytes, totally manageable.
As I was saying, in version 2.x we have more deployment options, among the many other changes. We have classic mode, the same as version 1.x, in which all nodes communicate through the DB, so everything we'll talk about in this session still holds; and hybrid mode, in which the control plane nodes are the only ones communicating with the DB and the data plane nodes pull the configuration from the control plane directly, but it has its limitations.
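As a rough sketch, this is what the two roles look like in kong.conf for hybrid mode (the key names are the documented ones for Kong 2.x; hostnames and paths are placeholders):

```
# control plane node (kong.conf)
role = control_plane
cluster_cert = /etc/kong/cluster.crt
cluster_cert_key = /etc/kong/cluster.key

# data plane node (kong.conf)
role = data_plane
database = off
cluster_control_plane = control-plane.example.internal:8005
cluster_cert = /etc/kong/cluster.crt
cluster_cert_key = /etc/kong/cluster.key
```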
If you want to do the TLS termination on the load balancer in front of Kong, you'll need an L7 load balancer. These usually require some kind of health-check endpoint to be able to route traffic only to healthy instances, but Kong does not provide such an endpoint if you disable the Admin APIs. So what we decided to do was to set up a /status route with the request-termination plugin returning a 200 status code, but we wanted it to be automated and reproducible, so we had to do it with Ansible.
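Stripped of the Ansible wrapping, the idea is just two Admin API calls, something along these lines (addresses and names are placeholders):

```
# create a service-less route matching /status
curl -s -X POST http://localhost:8001/routes \
  --data "name=status" \
  --data "paths[]=/status"

# attach the request-termination plugin to it, always answering 200
curl -s -X POST http://localhost:8001/routes/status/plugins \
  --data "name=request-termination" \
  --data "config.status_code=200" \
  --data "config.message=healthy"
```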
A
We try to always keep security in mind. Therefore, we wanted to enable RBAC and authentication on the Admin APIs. This complicates everything, because we would have needed to pass tokens and credentials in order to configure the route. So what we had to do was to start Kong with RBAC and auth disabled on the first setup, configure the route and the plugin, as you can see in the picture on the side, and then restart Kong with them enabled.
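Schematically, the bootstrap becomes a two-phase dance; a sketch, using the environment-variable form of the Kong Enterprise settings:

```
# phase 1: first start with RBAC and GUI auth off, so the Admin API is freely reachable
KONG_ENFORCE_RBAC=off kong start -c /etc/kong/kong.conf

# ...create the /status route and the request-termination plugin (see above)...

# phase 2: turn RBAC and admin GUI authentication back on and restart
KONG_ENFORCE_RBAC=on KONG_ADMIN_GUI_AUTH=basic-auth kong restart -c /etc/kong/kong.conf
```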
Regarding Prometheus metrics: in Kong everything is a plugin, and Prometheus is not an exception. The problem is that by default that plugin is exposed on the Admin API port, which again we have disabled. So we had to go with the third option proposed by the documentation, given that all the others assumed the Admin API was enabled and tried protecting it in various ways. To do so, we needed to add a custom server block listening on a different port and exposing the Prometheus metrics. Take care: metrics will start appearing in your Grafana dashboards only after some traffic has hit Kong, so do not despair.
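The server block is roughly the one suggested by the Prometheus plugin documentation at the time; a sketch (the port is arbitrary, pick any free one):

```
server {
    server_name kong_prometheus_exporter;
    listen 0.0.0.0:9542;

    location / {
        default_type text/plain;
        content_by_lua_block {
            local prometheus = require "kong.plugins.prometheus.exporter"
            prometheus:collect()
        }
    }
}
```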
In addition to these roles, we also use some customizations on the customer side to make them fully air-gapped. This means downloading all the packages for Cassandra and Medusa and modifying the Ansible role to copy the files to the target server and install from them, instead of from the classic APT repositories.
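The customization itself is small; a hypothetical Ansible task along these lines (package names and paths are purely illustrative):

```
- name: Copy the pre-downloaded packages to the target host
  copy:
    src: "files/{{ item }}"
    dest: "/tmp/{{ item }}"
  loop:
    - cassandra.deb
    - cassandra-medusa.deb

- name: Install from the local files instead of the APT repositories
  apt:
    deb: "/tmp/{{ item }}"
  loop:
    - cassandra.deb
    - cassandra-medusa.deb
```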
Three Cassandra nodes make up our reference architecture. This ensures a fault tolerance of one node. The authentication option we choose is password auth. Our Cassandra cluster is used only by one Kong cluster at a time; we are not sharing a Cassandra cluster with more than one Kong cluster. Using Cassandra adds some more challenges on the configuration side, mainly because you need to tune the Cassandra consistency you want and the timeouts to give to the queries.
Our defaults are cassandra_consistency set to Quorum and a higher cassandra_timeout: the default of five seconds is not enough, because it gives random errors when using the Kong Admin APIs. Remember also to set the Cassandra replication factor in the configuration before running the first migration; this ensures that the Cassandra keyspace has the right replication set. If you forgot, there is no problem: just change it with an ALTER query afterward.
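In kong.conf terms, plus the fix-up query if you forgot, this looks roughly like the following (the timeout and replication values are examples, not recommendations):

```
# kong.conf
cassandra_consistency   = Quorum
cassandra_timeout       = 30000          # milliseconds; the 5000 default was not enough for us
cassandra_repl_strategy = SimpleStrategy
cassandra_repl_factor   = 3              # set before running the first migration

-- cqlsh, if the keyspace already exists with the wrong replication
ALTER KEYSPACE kong
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
```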
What if someone makes an error on Kong Manager and deletes a fundamental role or some other core setting you need, or perhaps a catastrophic event destroys all the nodes in your data center? It would help if you had a robust backup solution. For this, we chose the Medusa backup tool, which can back up the cluster to a remote folder or to an S3-compatible bucket. We run the backup as a crontab entry, and we name it with the date plus hour and minutes.
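A sketch of that crontab entry, to be installed on every Cassandra node (the schedule is just an example, and the % signs must be escaped in cron):

```
# /etc/cron.d/medusa-backup
0 2 * * * cassandra medusa backup --backup-name "kong-$(date +\%Y\%m\%d-\%H\%M)"
```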
This ensures that every node will start the backup with the same name. The same name is needed to give Medusa all the necessary information to check whether the backup is complete, how many nodes are missing, and so on. You can also list the backups, verify them, etc., and obviously restore them in a catastrophic scenario. For this, we run an Ansible playbook that runs the restore simultaneously on all the nodes of the cluster. You can use this approach on an existing Cassandra cluster or on a new one created with the same infrastructure as code.
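The day-two operations map onto the usual Medusa commands; a sketch, assuming a backup named as above:

```
medusa list-backups                                     # see what is available
medusa status  --backup-name kong-20200101-0200        # is the backup complete on all nodes?
medusa verify  --backup-name kong-20200101-0200        # verify its integrity
medusa restore-node --backup-name kong-20200101-0200   # run on every node (our Ansible playbook does this)
```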
So now, let's talk about CI/CD. We all know the tool decK, made by Harry Bagadiya. This tool is the infrastructure-as-code approach for the Kong APIs. When we showed that tool to big enterprise companies, they asked why RBAC users were not working with decK. We started looking into the code and we did a small PR to add compatibility for RBAC users, so that they can use an RBAC token with decK. You will need to add the key skip-workspace-crud in the config for decK, since an RBAC user is a user scoped inside a workspace.
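In practice, a decK run for an RBAC user scoped to a workspace ends up looking roughly like this (token, addresses and workspace names are placeholders, and the exact spelling of the skip-workspace-crud key depends on your decK version):

```
# ~/.deck.yaml
kong-addr: https://kong-admin.example.internal:8444
headers:
  - kong-admin-token:<my-rbac-user-token>
skip-workspace-crud: true
```

```
# day-to-day workflow
deck dump --workspace my-team
deck diff --workspace my-team -s kong.yaml
deck sync --workspace my-team -s kong.yaml
```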
You can then start doing all the things, and more, that you usually do on Kong Manager. You can enable the portal, actually fetch files from it, change them on your local machine, and obviously deploy them. This tool is also handy when it comes to deleting the developer portal from a workspace. This is needed when you want to delete a workspace from Kong, because Kong will complain when there is some data left in the workspace, be it Kong resources or developer portal files.
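Assuming the portal CLI is the tool in play here, the typical loop is a sketch like the following (the workspace name is a placeholder and command names may differ slightly between versions):

```
portal enable my-workspace      # turn the developer portal on for the workspace
portal fetch  my-workspace      # pull the current portal files locally
# ...edit the files on your machine...
portal deploy my-workspace      # push them back
portal wipe   my-workspace      # remove all portal files, e.g. before deleting the workspace
```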
So, as of now, we have seen two tools to manage different parts of Kong with the infrastructure-as-code paradigm, but how do we combine them? With one big infrastructure-as-code folder. This is obviously a starting point: you can split all the projects into different repositories with different access rights, if needed, but having all the resources in the same place makes life easier for the team in charge of managing your APIs. They can change the decK configuration and contextually change the Swagger/OpenAPI definition of your APIs in the developer portal.
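Purely as an illustration, such a folder could be laid out along these lines (a hypothetical layout, not a prescription):

```
kong-iac/
├── ansible/            # VM provisioning, kong.conf, Cassandra and Medusa roles
├── deck/
│   └── my-team/
│       └── kong.yaml   # decK state file for the workspace
└── portal/
    └── my-team/        # developer portal files and OpenAPI specs
```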
All the secrets, RBAC users, tokens and so on can be managed on the CI/CD of choice, without exposing them in the repository. For reference, we set all our RBAC users as super admins on the workspace for simplicity, but you can put finer-grained permissions in place with some testing.
The serverless functions plugin allows you to write simple, stateless functions in Lua, to be executed in any of the usual OpenResty phases of a request lifecycle. Plugin ordering in Kong is a bit of a pain point, as we usually cannot configure it in the plugin configuration, but only at build time; at least here we are provided with two flavors, executing before or after all the other plugins.
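A small sketch of the "before everything" flavor, the pre-function plugin, attached through the Admin API (the route name and the Lua body are illustrative):

```
curl -s -X POST http://localhost:8001/routes/my-route/plugins \
  --data "name=pre-function" \
  --data 'config.functions[]=
    -- runs in the access phase, before all other plugins
    kong.service.request.set_header("X-Hello", "from-lua")'
```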
We are provided with two confusingly named options to deploy Kong Enterprise on Kubernetes: Kong Enterprise on Kubernetes, and Kong for Kubernetes Enterprise. The first option, Kong Enterprise on Kubernetes, allows you to run a classical, full-fledged installation of Kong Enterprise on Kubernetes.
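Both options are typically installed from the official Helm chart; a sketch of the first one (the enterprise image, license and the rest of the configuration live in the values file, omitted here):

```
helm repo add kong https://charts.konghq.com
helm repo update
helm install kong kong/kong -n kong --create-namespace -f values-enterprise.yaml
```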
A
If
you
are
not
familiar
with
ingers
controllers,
these
usually
are
l7
proxies
deployed
inside
a
kubernetes
cluster
implementing
ingress
resources,
but
for
routine
based
on
host
names
and
paths
to
services
deployed
in
kubernetes.
Cong.
Here
is
doing
exactly
this
plus
a
few
things
like
plugins
consumers
and
users
always
configured
as
kubernetes
resources
through
some
convenient
crds
custom
resource
definitions.
Here you can see a short list of the resources it provides to configure Kong through the Kubernetes APIs. We have KongPlugins, which can be associated through annotations to Ingresses or Services (you can see an example here on the side); KongIngresses, which are resources to extend the standard Ingresses with Kong-specific functionalities; TCPIngresses, to allow L4 routing on different ports; and KongConsumers, to define consumers and credentials to access specific APIs, as we are used to on Kong Enterprise.
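A minimal sketch of the plugin-by-annotation pattern (the names and the plugin chosen are illustrative, and the Ingress API version reflects the Kubernetes releases current at the time of the talk):

```
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: add-response-header
plugin: response-transformer
config:
  add:
    headers:
      - "X-Served-By:kong"
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example
  annotations:
    konghq.com/plugins: add-response-header
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: example
              servicePort: 80
```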
And so, this is it. Thank you for joining us.