From YouTube: Kong Gateway Microservice Architecture
Description
Kong's Gateway offering comprises several frontend and backend services which can include an Admin API and GUI, Developer Portal API and GUI, as well as Ingress Controllers, data plane proxies, and various cache and database backend services. Join us in examining and deploying each as its own independent microservice helm release.
Well, hello everyone, and thanks to all of you who are joining us today. My name is Dalia; I'm part of the developer marketing team here at Kong, and I would like to welcome all of you to our June Kong Gateway online user call.
Today we have a very special presentation for you, coming from Kat Morgan. She's a senior developer advocate here at Kong, and the topic for today's call is Kong Gateway microservice architecture. At the end of the presentation we will open up for Q&A and discussion. Kat has also left some Q&A pauses during the presentation, so you can leave your questions in the Q&A or in the chat during the presentation if you'd like, and you'll also be able to unmute yourself and ask questions. So with that, I will hand it over to Kat to start the presentation. Go ahead, Kat.
Thank you. It's good to be here. I haven't attended a user call before, so for anyone watching, if I break from your routine, just bear with me and we'll do better in the future. Yes, I am Kat Morgan. I am a developer advocate with Kong.

Previously, before joining the DevRel team, I was working as a field engineer, consulting with customers on Kong builds and deployments in the real world. So a lot of what you'll see today stems from that experience: lessons learned, and things that we regularly saw working with customers of Kong's different products in the field.
So, getting started, as far as today's agenda goes: we are going to be primarily focusing on what Kong looks like on Kubernetes, and we're going to be taking a microservice approach there. We'll talk a little bit about the different options for deploying Kong, some of the permutations, what each of the different microservice components that make up the Kong Gateway product are, and how they fit in a real-world use case.
There are three different releases, or modes, of using Kong Gateway. So when it comes to sitting down at the drawing board and determining what your outcome should end up looking like and what problems you're aiming to solve, there are a number of different paths for how to reach that end goal of success, and we'll talk about how those permutations can evolve based on your needs.
First off we have, of course, the open source Kong Gateway. That was originally developed as an in-house solution for Mashape, the former company that Kong grew out of. Then the pattern itself proved to be valuable outside of just the Mashape use case, and that's when Kong was formed to productize the gateway, which led to all of the different microservice components that we actually get to look at today.
So what actually comes in the open source product? Of course, in all use cases of Kong you're going to have the core pieces, which are the proxy, the data plane itself, powered by NGINX and Lua; and then your Admin API, which is how you actually interface with your proxy data planes and configure them.
You can also bolt on the ingress controller if you're using Kubernetes; it is optional. You can use Kong in its classic deployment style on Kubernetes without the ingress controller if you have other ingress controllers or other means of publishing your services. And then, of course, you can also use the free plugins that extend Kong's capabilities, or write your own.
A
Now,
once
you
flip
over
to
the
enterprise
distribution
of
kong,
that
is
a
fork
that
is
maintained
based
on
the
upstream
open
source
code
base
and
if
you
run
the
open
source
enterprise
version,
you
get
everything
that's
in
the
open
source
release,
as
well
as
the
admin
gui
with
the
unlicensed
features
that
are
enabled
at
that
point,
once
you
get
a
license,
if
you're
on
a
paid
plan,
we're
not
going
to
over
emphasize
the
the
paid
pathway
of
kong,
but
we
do
want
to
just
kind
of
illuminate
in
context
what
the
open
source
components
provide
all
the
way
through
this
story
and
then
how
that
fits
with
the
licensed
components
of
kong.
So you'll see a little bit of both sides of that throughout this talk. In that enterprise paid version, you get access to all plugins; there are some plugins that are maintained for our licensed customers, and we can look at that in a bit. You also get the option of setting up multiple workspaces: in Kong, workspaces are logical groupings of services and RBAC groups, so that's your role-based access.
If you have different teams and you want to isolate them to different workspaces, with permissions specific to those sets of services or those sets of plugins, that's what workspaces provide. In the open source release, and in the enterprise free mode of operation, you are constrained to one workspace.
The licensed version also enables Vitals, so you'll have your monitoring and alerting based on those vitals. That's also when RBAC is enabled by default.
The licensed version is also when you get access to the developer portal. That is where you can document and publish different API specs, explain their usage, and show different examples of how to interact with those.
Of course, we can also run what we're focusing on today, which is Kong on Kubernetes, and you can also run it outside of Kubernetes in its containerized form on different container-as-a-service platforms: your AWS ECS or Fargate (I did both of those), Azure Container Service, Docker Swarm, and so on. Additionally, we're not going to be getting into the scope of Kong Konnect cloud, but that is the hosted Kong platform, our own homegrown cloud Kong Gateway platform.
All of those choices can be navigated based on a balance of cost, integration (how fully featured it is), and effort: how much effort you have to put in, or how much effort you might save by opting for some of the enhancements that an entitlement provides, or other types of professional services that Kong can offer to help augment the effort that you bring to the table toward that final solution. And, of course, choose wisely.
When it comes down to cost: every single path that you might want to take, whether it's with the enterprise licensed version or the open source version, is a completely valid journey toward an end goal of success with the Kong product.
But there will be a cost associated. In the open source version that might be a labor cost, a time cost, and a risk cost of handling outages, bugs, and upgrades more individually and on your own, which is perfectly valid if your team is capable of and equipped for deploying that and maintaining it over time. And then there are benefits if you opt for some of the paid options, for pre-built integration, whether it's an OIDC plugin, leveraging the built-in RBAC, or taking advantage of the portal and additional customizability.
So in the open source version, you have all the ability to bring your own developer portal for documenting your APIs. You have the opportunity to roll your own RBAC, and of course that is something that I do sometimes, just to understand what the open source capabilities are and how I need to augment those if I am working on my own stuff. And the open source software fits a lot of use cases all by itself.
So if you just want an ingress controller, or you just need to set up some basic plugins for different features that are common to API gateways, the open source version can completely cover that without too much additional effort.
But if you're looking for the enterprise features, where you're baked in with all of the default persistence integrations and controls over users and reporting metrics and things like that, that is when you have to measure how you balance cost and effort over time.
Beyond just those two core pieces, you can attach, like we said: the developer portal; the admin GUI, which is that web UI, the web manager for Kong Gateway, if you're interfacing point-and-click style with a web UI (I do it all the time, especially in pre-prod scenarios where I just want to test different things or debug something); and then the ingress controller, which is a stand-in, of course, for a human operator that is optionally configuring Kong.
Also, there's a CRD for Kong plugins: if you want to declaratively configure plugins and integrations with Kong, the ingress controller will interpret those resources and call the Admin API directly.
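As a sketch of that declarative style (the plugin choice, namespace, and settings here are illustrative, not something from the talk), a KongPlugin resource looks roughly like this, attached to a workload via an annotation:

```yaml
# Hypothetical example: a rate-limiting plugin declared as a KongPlugin CRD.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-example
  namespace: my-app          # assumed namespace
plugin: rate-limiting
config:
  minute: 100
  policy: local
```

The ingress controller picks this up and applies it to whichever Ingress or Service carries the annotation `konghq.com/plugins: rate-limit-example`, translating it into Admin API calls for you.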
So one question that I hear a lot comes around to scaling: say you want to increase the capacity of Kong; maybe you're overloading some of your data plane pods.
The data plane is the component handling the traffic. So if you want to increase your traffic capacity, that is the component that you want to scale horizontally, and you can scale it to tens or hundreds of pods; as long as you have the load balancer capacity to handle that, you're going to see your capacity increase.
A
We
can
get
into
the
helm
values
here
in
a
minute
to
show
you
affinity
and
anti-infinity,
where
you
might
want
to
make
sure
that
you
only
have
one
proxy
service
on
each
node
in
your
kubernetes
cluster.
Obviously
overloading
a
given
node
with
dozens
of
pods
may
not
increase
your
capacity,
so
there
are
measured
ways
of
scaling
like
that
which
are
worth
taking
into
consideration
as
you're
deploying
and
building
your
kong,
the
ingress
controller
component.
Really
you
only
need
one
pod
for
that.
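That one-proxy-per-node scheduling can be expressed with pod anti-affinity in the chart's values; a minimal sketch, assuming the pod label shown (check the labels your release actually applies):

```yaml
# Data plane Helm values fragment: schedule at most one proxy per node.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - topologyKey: kubernetes.io/hostname    # one matching pod per hostname
        labelSelector:
          matchLabels:
            app.kubernetes.io/name: kong       # assumed proxy pod label
```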
So, as you can imagine, if you have a very full cluster and you have a lot of different things hitting your Kubernetes API, operators and so on, I have seen the Kubernetes API exceed its capacity and become a bottleneck for the entire cluster, and once your API goes down, everything starts falling over itself. So the ingress controller is actually something you want to specifically avoid scaling. The Admin API itself, if it's standalone, is a different matter.
That is something that you can scale. If you see a lot of traffic from maybe several different developer teams hitting your Admin API and you need to increase that capacity, scaling it to two or three might make sense; or if you're scheduling them in different availability zones and you want to make sure that there's no downtime between a pod dying in an availability zone outage and traffic being able to reach another zone.
Excuse me. Let's see: I do see your question; we'll get to that in a minute. The other place that you might see Admin API load come from is the developer portal. The developer portal does call the Admin API in some scenarios; it might not always, and in some configurations it never will. But if you have a plugin installed for your developer portal which enables your developers to create authentication tokens against your Admin API, that is one scenario where you would see
A
Maybe
a
small
load
also
generated
on
the
admin
api
by
your
developer
portal
and
then,
finally,
your
admin
gui
is
going
to
be
100
reliant
on
the
admin
api
for
being
able
to
look
up
what
services
you
have
being
able
to
identify
the
the
vitals
traffic
that
you're,
seeing
across
your
data
planes,
if
that's,
enabled
all
functions
of
the
admin
gui
result
in
your
client,
calling
your
client
being
the
browser
calling
the
admin
api
directly,
the
admin
gui
again,
that's
kind
of
like
the
ingress
controller
you're
not
going
to
see
usually
a
significant
need
to
scale
it.
A
A
Generally speaking, much like the availability case: if you are aiming to avoid any downtime when an availability zone outage or something like that occurs, scaling it to two or three might make sense. I honestly have never seen a scenario where it needed to scale beyond two for reasons of load on the admin GUI itself. So that's another thing that the microservice architecture approach to Kong really helps you manage: making sure you're not duplicating unnecessary overhead in your deployment.
So the one thing that I wanted to definitely illustrate today, and I don't have a diagram for this, is that everything you see as far as all of these components go can actually be deployed as its own Helm release: the proxy function, the developer portal function, the admin web UI, the Admin API, the ingress controller.
You can have your hands on all possible components of Kong itself and then decide, perhaps from there, what avenue you want to take for your final architecture. For example, it might be perfectly reasonable to have your developer portal and your Kong admin web UI served from the same pod, just because those are going to scale pretty similarly, to only two or three pods max.
Your ingress controller, though: I like to always deploy my ingress controller as its own independent Helm release, especially if I'm working with a licensed deployment of Kong, because you're going to see that your ingress controller associates on a one-to-one basis with a workspace and becomes its own ingress class.
So you might have your default ingress class attached to your default workspace in Kong, or you might see another ingress class that is constrained to a specific Kubernetes namespace, with RBAC applied to that ingress class, so that one application team might be constrained to one namespace in their Kubernetes cluster, associated with one workspace on the Kong Gateway.
That changes once you jump to the discussion of Konnect cloud, where you might see Konnect giving you one control plane to manage multiple different data planes that are completely independent, possibly in different clusters or in different regions of the world entirely. Wrapping up that point, I just want to make sure that everything we are discussing today is in the scope of only having one data plane, whatever different namespaced ingress controllers or different workspaces are involved.
The last thing that we haven't covered in this piece of the diagram: you'll see I have Redis and Auth0 as example plugins. Those plugins live on the data plane, if you use them. Redis would possibly come in handy if you're looking at doing some types of credential session management, or rate limiting across multiple data planes. Usually I will lean back on rate limiting with the in-memory cache, so each pod of your data plane, each individual proxy, will have its own independent memory.
So if you're rate limiting any given client, what you'll see is that it might hit its rate limit on one pod. Usually that session will be routed to the same pod; if something happens that bounces that traffic to another pod, it will have a new rate limit applied, and it might start from a fresh counter of zero on that other pod.
If that is something you have to design against, then that's when you might use Redis, for example, to maintain a common cache for rate limiting across all of your pods, or a common cache for session handling and tracking authentication sessions across multiple pods. Auth0 is one example of an authentication provider.
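The difference between the per-pod counter and the shared counter described above comes down to the rate-limiting plugin's `policy` setting; a hedged sketch (limits and the Redis hostname are placeholders):

```yaml
# Per-pod counters: each proxy tracks the limit in its own local memory,
# so a client bounced to another pod starts from a fresh counter.
plugin: rate-limiting
config:
  minute: 60
  policy: local

# Shared counters: all proxies consult one Redis, so the count follows
# the client across pods.
# plugin: rate-limiting
# config:
#   minute: 60
#   policy: redis
#   redis_host: redis.example.internal   # placeholder hostname
```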
There are a lot of different authentication providers that you can integrate with, or you can use something like Dex or Keycloak, where you're going to be using those as a broker for a federated authentication strategy. But again, that is a plugin, and those plugins live in the proxy itself.
There are dozens of other plugins, of course. You can check out the Plugin Hub to find out more about the ones that we help publish; there are also community plugins, and you can write your own. Sure, we could get into more of those details in another call. I know last week Viktor Gamov did a session about some of his favorite plugins.
You might go look that up. But all in all, that should give you at least an idea of what pieces we have in our microservices that make up Kong Gateway, as well as where the plugins work and what makes sense to scale. And I didn't get into it significantly, but if you're doing a stateful deployment of Kong that is not exclusively declarative via decK or Kubernetes resources, or if you're utilizing RBAC or any other plugins that require a stateful backing store,
that's when a Postgres database adds itself to the deployment as another dependency for your Kong deployment. Not shown here is cert-manager. cert-manager can, of course, help with issuing individual certs specific to certain ingress resources, or it can be used to issue the certificates that Kong requires, for example between the Admin API and the data plane proxies.
If you do not deploy them all in a single Helm release and a single pod, then you do have to come up with an mTLS scheme, and there are a couple of different ways to do that. I really like shared TLS when possible, where it's just one self-signed certificate and key, issued to a secret mounted into both the gateway proxies and the Admin API, and they just mutually share that certificate to establish trust between those pods.
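With cert-manager, that shared-certificate scheme can be sketched as a self-signed Issuer plus one Certificate whose secret both Helm releases mount (the names and namespace here are illustrative):

```yaml
# Self-signed issuer for in-cluster trust between Kong components.
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned            # illustrative name
  namespace: kong
spec:
  selfSigned: {}
---
# One shared cert: mounted into both the proxy and Admin API releases.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: kong-cluster-cert     # illustrative name
  namespace: kong
spec:
  secretName: kong-cluster-cert
  commonName: kong_clustering
  issuerRef:
    name: selfsigned
```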
That way they can communicate over TLS in the cluster itself. So cert-manager could be another dependency, and there are a few others in odd cases, but those cover most scenarios that I've seen. We're going to pause for questions again before we jump further into an example showing the kind of outcome that you can achieve, and I know we have at least one question here. Let's read that: with the Kong-provided Helm chart, the ingress controller seems to scale with the proxies; is there a way to decouple them?
Yes, that is a fantastic question, actually. At the end of this, we can dive into some of the Helm chart values files that I've saved off, and we'll talk about what it looks like when you are decoupling those things and deploying them as their own individual, separate releases. So hold that thought, and we'll get into it very soon.
I am going to pause for another drink. Feel free to pipe up with anything on your mind.
We've covered the different deployment permutations: deploying as a monolith containing all possible services, or any mixture of those that you have decided to use in your deployment; and that it can live on many different platforms, whether inside or outside of Kubernetes. We've covered all of those things, including how to scale the right components of Kong, how far to scale different components of Kong, and which component the plugins actually live on. So, once you've made a lot of those choices:
What can you actually accomplish with Kong? For that I'm going to take a diagram that I used in a real-world use case with one of my customers, and this was actually really interesting to me. We had a fairly simple deployment of Kong. It was hybrid, meaning we deployed the data plane independent of the control plane, and we did use some pre-existing plugins as well as writing our own small custom plugin to handle a really unique scenario.
Basically, this customer was an upstream service provider. Their service was provided to third parties, who were service providers to end users; the end users themselves never directly interacted with this customer's service. So what would happen is you would have two layers of identity to keep track of: you would have your end user identity, and then you would also have your third party, the end user's service, which was a broker between this customer's service and the end user.
We implemented a Redis cache so that we could track sessions across all of the data planes as they scaled, and then we also wrote a plugin which would cache a second set of scopes, requested from a custom API that the customer maintained.
So the client's request would basically come in, authenticate against Auth0 using the third-party credentials, and then also look up the client's individual ID with the custom plugin. It was, I want to say, about a 300-line Lua plugin, and the lines are not complex or long. I was not familiar with writing Lua when I wrote this plugin, so I'm just going to say that it's actually a fairly easy language to pick up
if you need to do some simple custom plugin writing. Basically, this plugin would call a custom API to provide a second layer of roles and attributes, attach those to the packet as headers, and then we would run those headers through OPA to get a boolean, allow or deny, for the continued handling of those packets; OPA being Open Policy Agent. That is, in addition to Postgres being a possible dependency and cert-manager being a possible dependency, as we talked about.
Sorry about that. Then, as I mentioned, there are different caching strategies. You can use Redis, which definitely makes sense for an authentication session cache.
There is also the memory cache. You can use the memory cache or the Redis cache for things like rate limiting. We used the memory cache for caching those additional headers that we were adding to packets before letting them continue on to OPA; that memory cache, again, is constrained to each individual pod.
The request continues on with the custom plugin to pull the additional scopes and RBAC information from that custom API, caching those as well; continues through the OPA plugin to identify whether the packet has the right permissions to continue on; and then drops back out and sends the response.
All right, so this slide is mis-titled: this is not final questions. We actually can continue on to take a look at some of the Helm values and things like that. Before I jump into Helm values, though, I'll give you an opportunity to speak to any other questions, or just ideas that might be on your mind that you want some feedback on. At this point I've finished the agenda of the program, and we have about 15 minutes still to continue with general discussion, so I'm going to invite you to speak up or chat.
All right, we have something here. If you want to actually unmute and speak to this question, we can engage that way if you like, or I will go ahead and read it out loud.
Awesome. Okay, so we have a lot of good questions coming up. All right, I'm going to go ahead and start with Radius; apologies, I butchered your name. So the first question: based on the organizational setup, every team manages its own Kong instance rather than a centralized repo. This is a two-part question: one, how can the developer portal be helpful here to connect to multiple Kong instances? Two, what is the best practice?
Most of the time I am going to encourage you to only use one Kong deployment, especially per cluster specifically, and then I'm going to encourage you to separate app team A and app team B into different ingress classes, possibly, and different workspaces associated with those different ingress classes.
Because of this, I definitely encourage not just running hundreds of workspaces; that would be excessive, to say the least.
You can document all of your APIs from multiple workspaces in one developer portal, so that is an option if you want to have a consolidated presentation of your API documentation. Or, one thing that I have seen is that we've really benefited from having a public developer portal and an internal developer portal.
As for how I would recommend approaching your workspace organization with a licensed version of Kong: setting up a public and a private workspace, and deploying the two different developer portals, one per workspace, makes a lot of sense to me. You might have additional workspaces to a degree; for example, you might have a workspace for platform administrator services that might not have a developer portal. You don't have to enable a developer portal on every workspace, and I have definitely found a lot of value in having a workspace for the platform administrator team.
For example, I commonly serve the Admin API, the admin GUI, and developer portals through the Kong Gateway proxy itself.
It keeps you from accidentally locking yourself out from your means of recovering from a mistake, which is a mistake I've made many times (though not in production yet, fingers crossed). And then, let's see, moving on to Marcelo: any recommendations about DB-less versus Cassandra deployment?
Yes. So, important to note: as of 2.7, I think (don't quote me on that), Cassandra as a backing store was deprecated. So now that we're on 2.8 and approaching the Kong 3.0 release, keep that in mind.
Postgres is going to be the recommended, and the only non-deprecated, data store that you're going to want backing your cluster. As far as DB-less goes: DB-less, in my opinion, is most applicable to the non-Kubernetes deployment, or to the Kubernetes deployment where you are not using the Kong ingress controller; I'll get into that in a second. It is also valuable for the open source version of Kong.
To contrast it: if you're deploying to Kubernetes, you can do almost all of the Kong configuration that you can do with the DB-less approach. I will describe DB-less as Kong where you are configuring it without a database backend, and you're doing it declaratively through the Admin API, commonly with decK. I have seen some customers write some bespoke Admin API configuration themselves, but decK is our own declarative Admin API syntax that allows us to configure Kong without Kubernetes or Kubernetes custom resource definitions.
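For context, a decK state file is plain YAML synced to the Admin API with the `deck` CLI; a minimal sketch (the service name, URL, and route are placeholders):

```yaml
# kong.yaml: declarative Kong configuration, applied with `deck sync`.
_format_version: "1.1"
services:
  - name: example-service              # placeholder service
    url: http://example.internal:8080  # placeholder upstream
    routes:
      - name: example-route
        paths:
          - /example
```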
With a full Postgres-backed Kong, in the event that you were to define the same service in a decK configuration and apply that, and also have an ingress controller, there is potential that you could end up with a collision or duplication of service resources. And just generally, we want to make sure that we don't design our cluster and Kong management implementation in a way that could possibly lead us down that kind of path.
So say you want to use the Kong ingress controller for some things and have an ingress class for your Kubernetes cluster, but you also have a workflow that utilizes decK and DB-less Kong. In that kind of scenario, which I have not actually seen a valid use case for, or any that we actually ran all the way to production with, but for hypothetical argument's sake:
if you did find yourself in that situation, I would say deploy two totally separate Kongs. Deploy your ingress controller Kong for all of the ingress classes in your Kubernetes cluster that utilize Kong, and then separately deploy a second Kong that you manage DB-less with decK, without an ingress controller and probably without a Kong Manager.
If you're using any of the RBAC features of Kong, you absolutely have to have a database at that point. And at that point, running a decK configuration, in my opinion, especially if you're on Kubernetes, since you have that stateful backend and you have the ingress controller: I would be inclined to suggest that decK is an unnecessary tool to add to your regular maintenance and configuration tool belt, and would steer you instead toward configuring with the ingress controller's custom resources. So that's your plugin configuration with CRDs, along with your Ingress and Service resources; and the Gateway API is now in beta, so give it a try alongside Ingress resources.
I hope I answered that. It looks like we are getting short on time, and I did promise to jump into some Helm values, so let's go ahead and jump in here and we'll cover those real quick. It should be pretty straightforward. We're actually going to start with the data plane, then we can jump to the Admin API, and then take a look at the ingress controller.
If you're curious to see these things for yourself, they are published, in no finished state whatsoever, but for you to follow along you're welcome to check them out at this link.
Let's see, we're going to jump into the data plane. All right, so you can see here: all of these are written for a local testing environment on kind Kubernetes, so I only have a single node, and in this case I can only scale to one because of that; I am telling it to only schedule data planes one per node in my Kubernetes cluster. You'll see a lot of these other features specifically set to enabled: false. That's because I don't need to enable the cluster API that is running in
the control plane Helm release. These I don't need to talk about; I can delete those, they stay at their defaults. This enterprise section we have enabled: true, and you're going to set that to true whether you're running Kong Enterprise in free mode or in licensed mode. If you're running in free mode, you will also have this license secret; that license secret is just going to be an empty JSON string for free mode, and then, of course, your real license if you're running the paid version of Kong Enterprise.
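That section of the values file looks roughly like this (the secret name is whatever you created; in free mode the secret's payload is just an empty JSON object):

```yaml
# Enterprise section of the Kong Helm values; applies in free and licensed mode.
enterprise:
  enabled: true
  license_secret: kong-enterprise-license  # secret holding the license JSON,
                                           # or `{}` for enterprise free mode
image:
  repository: kong/kong-gateway            # enterprise image; plain `kong` for OSS
  tag: "2.8"
```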
So, if you change these env variables, you're going to roll your pods after you apply the Helm release; or, if you're using an IaC tool, it will roll those pods for you.
This is one of the reasons why, if you're running Kong on a virtual machine or bare metal, you'll see that all of these things correspond to what you would configure in your kong.conf, if you are manually writing your configuration for that type of deployment scenario. So it maps one-to-one to your kong.conf.
The image here, okay: so this is a 2.8 release, and if you are deploying the open source version, you're just going to get the kong image, not the gateway image; kong-gateway is the enterprise container image. So you'll see here, that is where I'm actually telling it to use the enterprise container, and if you're using your empty-string JSON license for enterprise free mode, you're also going to be using the enterprise gateway container. The ingress controller: this is part of that decoupling conversation.
When I deploy my data plane, I'm going to actually set my ingress controller to false, because I'm going to enable the ingress controller in its own Helm release somewhere else. Same for Kong Manager, that web GUI: we're not deploying Kong Manager on the data plane, since it doesn't make sense to scale it along with the data plane, so you're going to set that to false. Migrations: we're handling migrations with the control plane. Those are Kubernetes jobs that maintain your database.
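So the heart of the data-plane-only release is just switching the co-located components off; a sketch of those toggles as described (exact keys can vary by chart version, so treat this as a starting point):

```yaml
# Data plane release: proxy only; everything else lives in other Helm releases.
ingressController:
  enabled: false      # deployed as its own Helm release instead
manager:
  enabled: false      # the web GUI belongs with the control plane, not the proxy
portal:
  enabled: false
portalapi:
  enabled: false
migrations:
  preUpgrade: false   # database migration jobs run from the control plane release
  postUpgrade: false
```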
If we update the schema of your database, we will release jobs which will migrate your database to that new schema. Of course, namespace is pretty familiar. Portal and portal API: these are the two components that make up the developer portal, where you can document and publish those API specs that we talked about. And I'm going to come back to proxy here in just a second. Of course, you can see I set replicas to one: because I'm using my host port, port 80, I have to destroy my pod before I create a new one
if I roll my pod. So that's why you see my rolling update strategy set to 100%: to make sure that Kubernetes destroys the pod and frees up the port before it schedules a new pod. And here in secret volumes, that's where I'm setting up my cluster cert, which is that shared mTLS cert; in this scenario it was issued by cert-manager. proxy-tls is the secret where I issued my default proxy certificates, the ones Kong serves services through.
If you're using the ingress controller, Kong can be configured to use certs dynamically created by something like cert-manager that are specific to a given Service or Ingress object. However, if you don't specify an ingress certificate, it will default to the proxy TLS certificate. And then the control plane services:
this has to do with where I provisioned things like my Kong Manager web UI certificates and developer portal certificates. Setting resource limits should be a pretty routine Kubernetes idea. And then the last thing in this Helm values file (we are at time; I'm going to go just a tad bit over, and anyone who needs to can drop) is where I actually set up the Service for the proxy; in my case, because I'm deploying locally on kind, which doesn't have a load balancer concept...
Control plane: we don't have to get into a lot here, because we covered the basics already pretty well. We are enabling the Admin API, we're enabling an ingress for it, and we're enabling our cluster listener, which is where that hybrid mode, where you have a separate Admin API and a separate data plane, is published. Cluster telemetry: that's part of the licensed feature set that enables Vitals.
We covered the enterprise basics, which is enabling enterprise even if you're doing the enterprise free mode, and providing your license. If you have a license, then you can go on to enable the portal (I'm enabling the portal in its own Helm release); you can enable RBAC, which is where you can set up your super admin user; and things like SMTP integration and Vitals come along with that.
Again, I'm using the enterprise container, and I'm enabling Kong Manager; I enabled Manager along with the control plane. This is where your migration jobs occur, and everything else is pretty well covered. We're not enabling the portal here either; that's its own standalone release. So let's go take a look at the ingress controller.
Now you'll notice here I actually have a default controller and a public controller. These are two different ingress classes in my deployment, and...
where is my ingress? No, I'm on the developer portal; I was wondering why that looked funny. There we go, okay. This is really, really simple. The standalone ingress controller piece is a really simple Helm release values file to publish, and it's the last thing I'm going to show today. We're enabling the ingress controller.
We are setting the image to the ingress controller image, which is on its own release schedule, and then we're telling it where to find the Kong Admin API so it can configure Kong. As well, let's see, we have the Kong workspace. This is what actually names my ingress class: if I have the default ingress class, I'm going to create Ingress objects for the default class, and you'll see that's different with the other ingress controller; you can go look at that yourself.
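Pieced together, the standalone controller release reduces to a few lines like these (the Admin API URL, workspace, and class name here are placeholders from a lab setup, so adjust them to your environment):

```yaml
# Ingress-controller-only release: no gateway proxy runs in this release.
deployment:
  kong:
    enabled: false               # skip the Kong gateway container entirely
ingressController:
  enabled: true
  ingressClass: default          # names the ingress class this controller watches
  env:
    kong_admin_url: https://kong-admin.kong.svc:8444  # placeholder Admin API address
    kong_workspace: default      # enterprise: one controller per workspace
```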
Thank you for being part of the user call today. Great feedback and great questions were provided; I hope that leads to more questions for the future, and I hope that you all have a great rest of your day.