From YouTube: [User Call] Kong Kubernetes Ingress Controller 2.0
Description
Kong #Kubernetes Ingress Controller (#OSS) has launched 2.0 with a number of awesome new features and fixes. In this session, join the Kubernetes Engineering team as we walk through the release highlights including:
Overview of 2.0
Major architectural improvements
Kubernetes Testing Framework (KTF) & new testing strategies
KIC 2.0 controller manager
And more
Kong’s User Calls are a place to learn about technologies within the #Kong #opensource ecosystem. This interactive forum will give you the chance to ask our engineers questions and get ramped up on information relevant to your Kong journey.
A
Okay, so let's go ahead and get started. Thank you so much for joining us today. I'm Taryn Jones, and I'm on the developer and community marketing team here at Kong. I'd like to welcome y'all to today's online meetup. We're going to be talking about the upcoming Kong Kubernetes Ingress Controller 2.0 release, and Shane, our senior software engineer on the Kubernetes team, is going to be walking you through the features that are coming up in this latest release.
A
Shane is open to questions throughout the presentation. We just ask that you type them in the chat or raise your hand if you'd like to speak, and I can try to interject so we can get that question answered, but we will also open it up for Q&A discussion at the very end. So with that, I'll go ahead and hand it over to Shane to kick us off. Thank you, Shane.
B
Hey, thanks. Yeah, just a quick note: I'd prefer if people just talk rather than use the chat — put your hand up, but speak up, because I might have a little bit of trouble keeping track of the chat while I'm doing this. So feel free to talk. All right, so welcome — we're going to talk about the Ingress Controller 2.0. Many of you are running on 1.x.
B
As I said, I'm Shane; I'm a senior software engineer on the Kubernetes team at Kong.
B
We're going to go over the what, why, and how of 2.0. My hope — I think this is the first actual public presentation we've had about 2.0. It's been mostly an internal thing, or if you're a contributor in the Git repo you've probably seen bits of it, but anybody who's only just heard of it, or hasn't heard of it yet, is probably wondering what it is.
B
The why and how — we'll go over that, then we'll talk about the architecture. One, or rather two, of the key things coming up in KIC 2.0 are basically related to testing and maintenance costs and things like that. So we'll talk about our use of the Kubebuilder SDK going forward with 2.0, and then we'll talk a little bit about our new Kong Kubernetes Testing Framework, which was basically a side project that was folded into 2.0.
B
So, the what, why, and how of KIC 2.0. When I started at the company, one of the things we had was 1.0 — or rather, we just refer to it as 1.x. So if you hear me say 1.x, for the purposes of this meeting that refers to everything before 2.0, even the 0.x releases.
B
KIC 2.0 adds a few things from the end-user perspective and a few things on the back end. From the end-user perspective: UDP ingress support. This was a feature that came out, I believe, in upstream Kong 2.2, and it is now going to be available in KIC 2.0.
B
We're going to have some features around specifying single, multiple, and custom namespace watches for the controller, and we're going to have some modernizations of the code base.
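To make the namespace-watch idea concrete, here is a rough sketch of how a controller-runtime manager can be limited to one or several namespaces. This is illustrative only — it is not the KIC's actual wiring or flag handling — and `MultiNamespacedCacheBuilder` reflects controller-runtime versions from around this era; later releases moved this configuration under the cache options.

```go
// Illustrative sketch (not the KIC's real code): restricting a
// controller-runtime manager's cache to zero, one, or many namespaces.
package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/cache"
)

func newManager(namespaces []string) (ctrl.Manager, error) {
	opts := ctrl.Options{}
	switch len(namespaces) {
	case 0:
		// no namespaces specified: watch the whole cluster
	case 1:
		// single custom namespace watch
		opts.Namespace = namespaces[0]
	default:
		// multiple namespaces: use a multi-namespaced cache
		opts.NewCache = cache.MultiNamespacedCacheBuilder(namespaces)
	}
	return ctrl.NewManager(ctrl.GetConfigOrDie(), opts)
}
```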
B
Basically, this is for the purposes of our contributors and for people who are on the technical side of the KIC — a fair amount of this is going to be geared at the more technical side, just a heads up — but please feel free to ask pretty much any questions you have relevant to the topic. So, from the back-end perspective — actually, I should probably talk a little bit more about the modernization first.
B
An important piece of historical context is that the Kubernetes Ingress Controller was originally forked from another controller, let's say three years ago, something like that, and it uses client-go. For those of you who aren't super familiar, client-go is kind of the native Kubernetes API client for Golang, and it is one of the ways in which you can build a controller.
B
I wasn't planning on talking too much about Kubernetes fundamentals, but I will gloss over "controller" and say that the actual work done in the KIC — translating Ingress resources, ultimately, to the Kong backend — is done by a controller, and that is a loop that watches for updates, makes the conversions, and sends them off. We'll go over that a little more in the upcoming architecture slides, but that's some context to get started. So modernizing that has been a factor here.
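As a concrete (and deliberately simplified) picture of that loop, here is a self-contained sketch with hypothetical names — it is not the KIC's real code, just the watch-translate-push shape described above.

```go
// Conceptual sketch of a controller loop: watch events, translate them,
// push the result toward the Kong Admin API. All names here are hypothetical.
package main

import (
	"context"
	"log"
	"time"
)

type IngressEvent struct{ Name, Host, Service string }
type KongConfig struct{ Routes []string }

// translate stands in for the Kubernetes-to-Kong conversion step.
func translate(ev IngressEvent) KongConfig {
	return KongConfig{Routes: []string{ev.Host + " -> " + ev.Service}}
}

// pushToAdminAPI stands in for the update sent to Kong's Admin API.
func pushToAdminAPI(cfg KongConfig) error {
	log.Printf("would send config: %v", cfg.Routes)
	return nil
}

func runControllerLoop(ctx context.Context, events <-chan IngressEvent) {
	for {
		select {
		case ev := <-events:
			if err := pushToAdminAPI(translate(ev)); err != nil {
				log.Printf("update failed, will retry: %v", err)
			}
		case <-ctx.Done():
			return
		}
	}
}

func main() {
	events := make(chan IngressEvent, 1)
	events <- IngressEvent{Name: "demo", Host: "example.com", Service: "web:80"}
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	runControllerLoop(ctx, events)
}
```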
B
The Kubebuilder SDK incorporation: the controller used to be just custom built, and now we're using an SDK to generate code for us and do a lot of automatic maintenance — basically saving a lot of time by using things that are maintained upstream. There's also a re-architecture of the controller, which is partial — we'll talk more about that — and then integration testing using our new testing framework.
B
If you're interested, I put this link here and you can find it — it's in next, but I think it's also in main now. There is a work-in-progress 2.0 changelog entry where we're adding things. We just added it, so not everything is in there yet, but you'll see things funneling in over the next couple of weeks.
B
So, why KIC 2.0? Oh — thank you for sending the link. There's some technical debt; that's a normal part of any software development process. There are points where you'll take on technical debt with the intention of paying it back later, so that you can get value up front that would otherwise have taken longer to get to. There was some technical debt that we needed to work through here, and there were some things we're looking forward to for the future which we needed to build some scaffolding for.
B
On the technical debt side of things, we had a bunch of code that we were maintaining with client-go and the like for what we'll call the controller runtime — suffice to say, all the things you need to do to talk to the API, pick up watches of objects, and so on. That was kind of done manually with client-go and a couple of other libraries, and that is now being replaced; there was also heavy maintenance for the Admin API client code.
B
We're not going to talk too much about this unless you have specific questions, but if you know the code base really in depth, you'll know that a lot of what we do is actually the work to translate things from the Kubernetes DSL into the Kong DSL at the lowest of levels. There was a lot of maintenance regarding that, and we're starting to pick some of that back up and make things simpler — reducing things by re-architecting and thinking differently about how we approach the problem.
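To illustrate what that translation is about, here is a deliberately simplified sketch — hypothetical structs, not the real Kong client types or the KIC's parser — showing how one Ingress host/path/backend triple maps onto a Kong service and a route.

```go
// Simplified, hypothetical illustration of the Kubernetes-to-Kong translation.
package main

import "fmt"

type KongService struct {
	Name, Host string
	Port       int
}

type KongRoute struct {
	Name, Host, Path, ServiceName string
}

// translateIngressRule maps one Ingress rule onto a Kong service plus a route.
func translateIngressRule(host, path, backendSvc string, backendPort int) (KongService, KongRoute) {
	svc := KongService{
		Name: backendSvc,
		Host: backendSvc + ".default.svc", // cluster-internal DNS name of the backend
		Port: backendPort,
	}
	route := KongRoute{
		Name:        host + path,
		Host:        host,
		Path:        path,
		ServiceName: svc.Name,
	}
	return svc, route
}

func main() {
	svc, route := translateIngressRule("example.com", "/api", "web", 80)
	fmt.Printf("service=%+v\nroute=%+v\n", svc, route)
}
```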
B
The original controller is more of a monolithic style in terms of its architecture. That makes it a little bit hard to contribute to — I remember my first contributions; it can be a little difficult to get in there and figure out where things are. We haven't made that perfect, but with this release it's going to be a little bit more separated. We'll have a whole slide just about that. And then testing.
B
I think there was a lack of testing for some features, and there are some other things more on the fringe that we wanted more testing for — but also, we just wanted to be able to create tests faster. That's part of why we got into the KTF, which we'll talk about, and so on for future initiatives.
B
Our next big thing that we really want to get on top of — and we already have some work started on this, but it's not very far along yet — is the Gateway API. For anybody who's familiar with the Services API but hasn't heard: they've changed the name to Gateway API, so if you're thinking Services API, it's the same thing. And for anybody who's completely uninitiated, Gateway API is in many ways what you can consider the replacement for Ingress, if you will — or rather, the follow-up. Ingress has been serving us pretty well.
B
It just went into v1 not too long ago — last year or something like that — but Gateway is kind of the logical next step based on the learnings from that API.
B
We also have an operator, which you may have seen. The operator is being maintained right now at a very basic level; it does some very basic things.
B
We are working on kind of taking the operator further — I should say we aren't working on it yet, but one of the things we want to set ourselves up for is adding a lot of features to the Kong operator, so this helps with that. High performance and scaling is also something we're looking at.
B
We are specifically digging into some of the performance characteristics of 1.x and 2.x right now, since they have significantly different code bases, and one of the things we're trying to keep ahead of is making sure that we have not only quantified performance characteristics, but improved — and not decreased — performance characteristics between releases.
B
So that's something that's on our mind. And then the how — this will be a little bit more for people who want to contribute, but we had this strategy going into how we were going to approach KIC 2.0 in the repository, and I have a link right here. You'll see — I'll explain this directory structure in a minute.
B
Actually, don't let me forget this before we get off the slide: under the keps directory, we're now using the KEP process, which, if you're not aware, is the Kubernetes Enhancement Proposal process, and that now exists within Kong. We're not using it precisely the same way as upstream uses it; we're focusing a little bit more on using it as a consensus-building tool, and we're not going too deep into the designs for things yet, because of the approach we're taking with KIC 2.0.
B
We took a bit of a prototyping and emergent design approach. That is, we wanted to get our motivations and goals and so forth figured out first, but because we wanted to be able to quickly pivot and experiment, we took an emergent design approach: we weren't super worried about what the design was up front, but rather about the goals, allowing the design to work its way out of the goals as we prototyped — as we actually went through the kinesthetic process of building and research.
B
Then there's the testing focus, which we've talked about — there's a lot more testing going on in the code base with this release, and there's test tooling that makes that a lot easier — and then code diffusion. So this is the directory thing that I wanted to talk about; you'll see it right now.
B
We made the decision — and we may do this differently in the future — but we made the decision with 2.0 not to branch for the significant changes that we wanted to make. Instead we created a subdirectory, and the idea right now is that all the new features that aren't being released are technically still in the code base, under this directory, while the releases you're getting are just coming from the base root.
B
Nothing's changed there, sort of — the 1.3.1 that just came out the other day and so forth — but the new stuff is in here, and it's slowly coming out; we're diffusing it out into the root, we're just not there yet. So if you see that railgun directory, just know it almost acts like a different repo — it almost looks like a different repository, but it actually links back to the root — and in time it will be completely pushed back up into the root, when we finally get to beta for KIC 2.0.
B
Right now we're only at alpha 1. The next step will be beta 1, and then obviously GA. For alpha 1 and beta 1 we're doing these specific milestones and releases, and in between we're moving to a regular release cadence. So you'll see updates to the CLI soon — or sorry, you'll see updates to the CI soon — where these releases, instead of being released by me or one of my teammates, will start being released by CI on a regular release schedule for KIC 2.0. I think 1.x is not going to change at all in terms of how its release schedule goes. Okay, so there are links to the milestones — I think somebody put them in the chat for me. Thank you.
B
Now would be a good time for questions if anybody has them, just because we're going on to the next section. We did the what, why, and how of KIC 2.0, so if the what, why, or how is not clear to you, this would be a great time to raise your hand, and I can hopefully clear that up.
A
Oh yeah, Shane, this is Jimin. I have a question about the monolithic controller architecture. What are the main factors that made you want to remove it and move to the new controller-runtime controllers? Would you like to talk a little bit more about the caches and the advantages of the new architecture?
B
Yeah, yeah — I do have slides coming up specifically for that, so I will go over that topic shortly.
A
B
So if we have anything else, we can do that; otherwise we can go straight into that question, basically by going to the next section — I think it's the next section, yeah. Going once, going twice. Okay. So, one of the things we really wanted to do for 2.x is make contributing a little bit easier. That isn't to say it was super hard to contribute before, but I think it's the goal of every project to make contributing easier.
B
Maybe this is a little bit of an opinion piece, but you're more likely to be familiar with the Kubebuilder SDK or the Operator SDK from Red Hat. A lot of people are coming into Kubernetes controllers and operators from that perspective, so the older way that we used to do things three or four years ago can be a little bit esoteric, and a lot of work over the last three or four years has made much of this available in nice, succinct libraries.
B
That's one of the big focuses of this change — rather, of this release. So Kubebuilder, if you're not aware — and if you are aware of the other one I just mentioned, Red Hat's Operator SDK, it's a similar concept — is just an SDK where you can run commands to add an API (e.g. `kubebuilder create api`), and you get an API and a controller.
B
Previously, a lot of that maintenance — basically updating things or adding new APIs — was a fairly manual process.
B
Now everything is plugged in, and you'll see at the root of the repository — right now that's under railgun, like we were talking about earlier, but eventually at the root — a PROJECT file, which is a YAML configuration for Kubebuilder. All of the APIs are now managed by Kubebuilder, so if you need to make updates to them, or generate configurations from those updates and so forth, there's some automation around that which makes it a lot easier.
B
Controller-runtime is a Kubernetes SIGs project; it's basically the way to create a controller — the library that makes all of that very easy. Kubebuilder generates and scaffolds a lot of that for you, but since the library itself is available to you, there are also controller-runtime features we now have direct access to.
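For a sense of what Kubebuilder scaffolds and controller-runtime drives, here is a minimal reconciler sketch. It is illustrative only, not the KIC's actual reconciler, and assumes a controller-runtime version with the context-aware Reconcile signature.

```go
// Minimal controller-runtime sketch: reconcile Ingress objects as they change.
package main

import (
	"context"

	netv1 "k8s.io/api/networking/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

type IngressReconciler struct {
	client.Client
}

// Reconcile is called whenever a watched Ingress is created, updated, or deleted.
func (r *IngressReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var ing netv1.Ingress
	if err := r.Get(ctx, req.NamespacedName, &ing); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	// ...translate the Ingress and hand it to the proxy configuration layer...
	return ctrl.Result{}, nil
}

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		panic(err)
	}
	if err := ctrl.NewControllerManagedBy(mgr).
		For(&netv1.Ingress{}).
		Complete(&IngressReconciler{Client: mgr.GetClient()}); err != nil {
		panic(err)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```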
B
I think in 1.x there was some stuff we may have used from controller-runtime or a predecessor of it, but now the most modern controller-runtime library is available in the code base for developers. Yeah, I think I covered everything. It also manages things like our CRDs and generating our RBAC rules, that kind of thing — a bunch of stuff that used to be done by hand is now just generated, if you're interested.
B
I would definitely encourage people to go to the Kubebuilder book — I think it takes about ten minutes.
B
In that time you can basically scaffold yourself a little Kubernetes controller, run it locally, and start playing around with it, and that will ramp you up pretty quickly for contributing to the KIC post-2.0. Now, the Kubernetes Testing Framework: the old integration testing framework wasn't written in Go, and that's one thing we wanted to take on directly among the maintainers — we wanted to switch to all of our tests being Go tests. That was one of the factors, and you'll see it in the KEP for the Kubernetes Testing Framework as one of the motivators. We also wanted more options. The Kubernetes Testing Framework actually gets into the provisioning of — right now — just kind clusters; if you're not familiar, kind is Kubernetes in Docker, similar to minikube: local Kubernetes clusters for testing and CI.
B
We wanted options — basically, to very easily express: as a test, I don't want to have to do any provisioning. I don't want to have to think about how to build a Kubernetes cluster or anything like that; I just want it to be available to me as an object — something I can ask for the Kubernetes client from and start making calls to the API.
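A hedged sketch of that idea is below. The helper and interface names are hypothetical — they are not KTF's real API — but they capture the intent: the test receives a cluster as an object and a ready-to-use client, and never provisions anything by hand.

```go
// Hypothetical sketch of the "cluster as an object" idea behind KTF.
package kictest

import (
	"context"
	"testing"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// TestCluster stands in for whatever cluster abstraction the framework provides.
type TestCluster interface {
	Client() kubernetes.Interface
	Cleanup(ctx context.Context) error
}

func TestIngressRouting(t *testing.T) {
	ctx := context.Background()
	cluster := provisionCluster(ctx, t) // hypothetical: a kind cluster built behind the scenes
	defer cluster.Cleanup(ctx)

	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "kic-tests"}}
	if _, err := cluster.Client().CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{}); err != nil {
		t.Fatal(err)
	}
	// ...deploy Kong and the controller, create an Ingress, assert the proxy routes traffic...
}

func provisionCluster(ctx context.Context, t *testing.T) TestCluster {
	t.Helper()
	t.Skip("illustrative only: plug in a real provisioner (e.g. kind) here")
	return nil
}
```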
B
So we have that now, but we also have optionality. The suite — the testing suite that KTF gives you — allows you to pick specific versions of Kubernetes and specific versions of Kong itself (the proxy version and so on). There's optionality there that wasn't available before, so that we can build a test matrix.
B
In CI before a release — or once we have our release CI set up — we'll automatically test every supported version of Kubernetes with every supported version of the proxy against this version of the KIC, which is something we don't have today but will have tomorrow, so to speak. So yeah, that was one of the motivating factors for the testing framework. Another one was UDP ingress build velocity — kind of the proof of concept for KIC 2.0, which we originally called railgun.
B
I probably should have mentioned that before, but railgun was the code name for it, based on an old video game joke — if you ever played the old Quake video games. So, UDP ingress build velocity: it was fairly hard to contribute — or rather, it was harder to contribute — in the old 1.x architecture and setup.
B
UDP ingress was built pretty quickly — basically, we were able to produce a preview build for it, a prototype, within a couple of hours, given the Kubebuilder SDK setup and all that. So that was part of the proof for what we were trying to do early on, before it got okayed by product and so forth. Some highlights from that: a lot of the code is generated.
B
Now, if you want to, I would recommend people go take a look at what we have so far for UDP ingress. It's under railgun/apis, next to our normal APIs, under v1alpha1 — that one's getting deleted, just a heads up — but take a look and pick through it before it goes to beta. You can see how most of the logic was generated.
B
You'll find that pretty much the only thing we had to do was tie the pieces together for how we convert that into Kong services.
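As a rough idea of what that tying-together looks like, here is a simplified, hypothetical sketch — not the actual UDPIngress API under railgun/apis — of turning a UDP listen port plus a backend Service into a Kong service and a UDP route.

```go
// Hypothetical, simplified sketch of converting a UDP ingress rule into Kong entities.
package main

import "fmt"

type UDPIngressRule struct {
	Port        int    // port Kong listens on for UDP traffic
	ServiceName string // backend Kubernetes Service
	ServicePort int
}

type KongUDPService struct {
	Host     string
	Port     int
	Protocol string
}

type KongUDPRoute struct {
	ListenPort int
	Protocol   string
}

func convertUDPRule(r UDPIngressRule) (KongUDPService, KongUDPRoute) {
	svc := KongUDPService{
		Host:     fmt.Sprintf("%s.default.svc", r.ServiceName),
		Port:     r.ServicePort,
		Protocol: "udp",
	}
	route := KongUDPRoute{ListenPort: r.Port, Protocol: "udp"}
	return svc, route
}

func main() {
	svc, route := convertUDPRule(UDPIngressRule{Port: 9999, ServiceName: "coredns", ServicePort: 53})
	fmt.Printf("service=%+v route=%+v\n", svc, route)
}
```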
Okay, yeah — and I covered the testing facilities. We might talk a little bit more about the KTF later, but I'm worried I'm running short on time, actually, so we'll go right into the architecture that Jimin was asking about. Let's talk a little bit about architecture.
B
This, I hope, will be a good overview of how things are today. It's pretty high level — the purpose of this diagram is to give you an idea of the workflow, more or less, rather than all of the independent details of how things work today in the KIC. You have the Kubernetes API; you have watches for resources like Ingress, KongIngress, native Ingress, and so forth; the event watcher picks those up, and then they get reconciled.
B
During that process, we cache — we basically store these objects in a client-go-based cache, which actually uses upstream client-go's native in-memory store. You could call it a KV store; I don't think they specifically call it that, but that's how it operates for us. On a regular cycle that gets converted, and depending on the configuration — whether it's DB-less or DB-backed with Postgres —
B
it may have an extra step of conversion through decK, and it gets sent off as an update to the Kong Admin API.
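The "cache, then sync on a regular cycle" part of that workflow can be pictured with the following self-contained sketch. All names are hypothetical; the real KIC uses the client-go store and decK, but the shape — store watched objects, then periodically convert a snapshot and push it to the Admin API — is the same idea.

```go
// Hypothetical sketch of the cache-and-periodic-sync pattern described above.
package main

import (
	"context"
	"log"
	"sync"
	"time"
)

type ConfigCache struct {
	mu      sync.Mutex
	objects map[string]interface{} // keyed by namespace/name
}

func (c *ConfigCache) Store(key string, obj interface{}) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.objects[key] = obj
}

func (c *ConfigCache) Snapshot() map[string]interface{} {
	c.mu.Lock()
	defer c.mu.Unlock()
	out := make(map[string]interface{}, len(c.objects))
	for k, v := range c.objects {
		out[k] = v
	}
	return out
}

// syncLoop periodically converts the cached objects and "pushes" them.
func syncLoop(ctx context.Context, cache *ConfigCache, interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			snap := cache.Snapshot()
			log.Printf("would convert %d objects and send them to the Kong Admin API", len(snap))
		case <-ctx.Done():
			return
		}
	}
}

func main() {
	c := &ConfigCache{objects: map[string]interface{}{}}
	c.Store("default/demo-ingress", "ingress-spec")
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	syncLoop(ctx, c, time.Second)
}
```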
B
Well, we can go back to it too, so just keep that workflow in mind. All right — in KIC 2.0 we haven't completely gotten rid of some of the paradigms we have in terms of the workflow and how things are built, and you'll see a lot of these items are still relatively the same. However, we have changed things intentionally.
B
Basically, there are now interfaces and more abstract layers around the different parts of that workflow. The controller component is kind of its own thing, and we have individual watches. We used to watch all the resources in 1.x as — you could consider it — one big watch; now we have actual individual, independent watches, with their own watch rules and so forth, for each resource that we support: Ingress, KongIngress, UDPIngress.
B
The list goes on from there — there's TCPIngress and so forth — but that's just for the purposes of this diagram. So we watch, we reconcile — for the most part that's just validating and figuring out that this is an item that needs to be configured in Kong — and then we pass it off to this new interface: the proxy update, or rather the UpdateObject method on the Proxy interface. The Proxy interface is a wrapper around a cache server, which still uses the client-go cache but has a different timing mechanism.
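The boundary described here can be approximated as follows; the method names follow the talk, but treat the exact signatures as illustrative rather than the real interface definition.

```go
// Approximate sketch of the Proxy boundary: controllers hand reconciled
// objects to it, and it owns caching plus the timing of Admin API pushes.
package proxy

import "sigs.k8s.io/controller-runtime/pkg/client"

// Proxy is what each resource controller calls after reconciling an object.
type Proxy interface {
	UpdateObject(obj client.Object) error
	DeleteObject(obj client.Object) error
}
```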
B
Basically, there are more tunables — you're going to be able to configure how this happens. I know in previous versions there were some tunables around staggering updates to Kong and so forth; some of those have been reworked, and now there are some new fields and flags for how to optimize for your environment if you have a lot of requests coming in from the Kubernetes API.
B
But again, this is pretty much just a cache server at this point; it doesn't really understand or know anything about the last bit, which is updating Kong with these changes. So, instead of having to know any of the internals or use the libraries directly, there is a Kong updater function, and the cache server just sends the updates over to it at a regular interval. That back end remains pretty much the same since the last architecture — the parser, decK, and config.
B
In time we may make significant changes to this too, in a later revisit, but at least for now we've kept most of it intact.
B
This is, at its core, kind of where most of the things we want to approach from a technical-debt perspective later still exist, but we have abstracted it a little bit so that it's easier to reason about, and there are places in the code now where, if you contribute, you won't really have to interface with these pieces, depending on what you're doing. So it's not as in-your-face if you don't need it to be, and when you do need it, it's a little bit more compact and you can jump into it. From there on out, things still work pretty much the same way.
B
I think that's — yeah, we'll talk about some of the gains of that, but again, please do stop me for questions. We're trying to move — and 2.x isn't the end of this — we're trying to move so that everything kind of works as its own little service, a microservice-style architecture, in the future.
B
One of the things that we're looking at potentially doing in the future is not even having this last bit of configuration; instead, when we update Kong, those would be individual updates — basically with the RESTful API. We can talk more about that if people have questions, but that's the high level, so we gain that kind of composability.
B
You can jump in now at a controller with Kubebuilder instead of having to make it yourself, and the API will more or less be wired up. You do have to go and hook things up in the back end a little bit, depending on what you've done, but there's a little bit more separation of concerns between these components, cleaner interfaces, and things like that you don't need to know about.
B
We made it 2.x with the intention that, if we really needed to, we could say, yes, we made some backwards-compatibility changes; but as it stands right now, I think we only have one very minor backwards-compatibility change for the entire upgrade. We are intending for this to be almost seamless.
B
Like I said, we're working on the last little bits of that to see if it's going to be completely doable by the point we get to beta — we're still in the alpha stage — but most of the things we have right now that even look potentially backwards-incompatible have to do with features that were not in use, or with flags that had to be reworked. So the answer is 99%: yeah, everything is just going to upgrade cleanly.
B
It's not — I guess, in the strictest sense, it's not technically a microservice architecture where you have things talking over gRPC and HTTP. We'd like it to be like that in the future, potentially, so that we can have more separation of concerns, but that's a conversation for later and I'd be speculating a little bit. Right now it's all interface methods — interfaces are kind of that boundary, as opposed to actual over-the-wire APIs.
C
B
I couldn't hear that very well, I'm sorry.
B
Oh yeah, let's talk about that, sure. So "replacing" isn't the right word — if I did say "replacing" earlier, I apologize; that would be a misnomer. Gateway API will potentially be a replacement for Ingress for some people — that's maybe what I said — but I should qualify it: we will still support Ingress, and the Gateway APIs will add additional feature sets that we will also support. So we'll support both at the same time; we won't be dropping support for Ingress, if that was your concern.
C
Hey, hi — one question regarding the database: what are all the databases that you would support with 2.0?
B
We're going to continue supporting just Postgres and the DB-less mode in 2.0. Cassandra is currently deprecated in KIC and it will remain that way for 2.0; we're not adding any new database support for 2.0 specifically. However, that doesn't preclude the notion that at some point during the 2.0 lifetime, if there's something that people need us to support, we could add it — it's just not going to be coming out with 2.0 itself.
C
Sorry — what about Redis, using the Redis database for...?
B
D
So, a clarification there — sorry. Redis there is for rate limiting, and that's supported just as it is supported on 1.x. So there are no changes to any configuration that you have on the Kong side; that remains as-is, correct.
B
Okay, thank you. Yeah, things will not change — this is meant to be a pretty much 99.9% backwards-compatible change. For most people there will be no difference; it will just roll over to the new version, and everything effectively works the same, minus a lot of these internal things that have changed. Ideally, you won't notice. If you're an operator, you'll probably notice that there's different and more extensive logging, but that would be about the extent of it from the operator's perspective; as a contributor, you'll notice a lot of changes.
C
I have a question, but it might not be related to this — is it okay if I ask? It's related to multi-region or HA. How do clients generally build for — let's say I have a requirement to build an API gateway across two data centers in active-active mode, with Postgres, which as we know doesn't support active-active, right? So how do clients generally, or how do you suggest, building that architecture?
D
Yeah — so are you running Kong in this controller world, or are you using Kong without the ingress controller?
D
C
I mean, we just started the POCs, so we are getting into the non-prod stage right now.
D
Okay. I mean, if you are using Kubernetes and you are doing multiple deployments, you can do active-active, and the right way to do it is however you do active-active in multiple regions with Kubernetes, right? So here, what you would do is have two different Kubernetes clusters, in region A and region B, and then you would feed the exact same configuration to those two clusters.
D
Rate limiting across these two regions, at a global level for users — you cannot do that at that point unless you share state, so there are some gotchas there, but that's really how you do it. If you're not using Kubernetes, or you're not using the ingress controller, then there is something called hybrid mode in Kong, which allows you to run Kong data planes in multiple regions and host your control plane in one region.
C
But in the case where we run active-active workloads — when I say active-active workloads, having separate Kubernetes clusters running active-active — what about dynamic token generation? That will have an impact on the clients, right, because the tokens generated in one region will not work for the other region. So how would you...
D
Yeah, yeah — so the OAuth2 plugin is a very special case where you can run into these issues. The way to solve this is that you could use something like Redis — and, as Chris mentions in the chat, Redis Enterprise — where you can have active-active configurations of that, to make sure that clients hitting any region can be authenticated.
D
I don't think we have tested that, so it will certainly require some more effort. One of the plugins that is very, very hard to work around with active-active across regions is OAuth2.
D
Other plugins are relatively straightforward, but OAuth2 is really complicated, and the replacement for that is OIDC, which — I mean, it doesn't work for everyone, because it is an enterprise paid feature — helps you do active-active much more easily. Sure.
C
Another question, not specifically regarding this, about Postgres: I understand that right now you support maybe user and basic authentication with respect to Postgres, but is there anything on your roadmap where you support certificate-based authentication with respect to Postgres as well?
D
That probably is correct — I mean, I'm not sure what the latest update is on that specific feature in Kong. What I would recommend — and this is something I have certainly heard asked before — is to please go check out 2.4, the latest release, and see if it has that feature.

If it does not, then please open a GitHub issue to track it. It is something that is supported by the underlying Postgres library and the TLS implementation we have, but it's a matter of exposing that detail to our end users.
A
Yeah, very similar — basically it's just: do we already have any TLS routes implemented, or TLS in the ingress, something like that?
D
Yeah, so that's a good question. We currently support HTTP, we support TCP, we support UDP, right — but routing based on something like SNI in TLS, or essentially TLS byte streams, is something that Kong supports but the ingress controller does not support yet. We are working, as Shane pointed out, on implementing the Gateway API after we release 2.0, and the Gateway API has a specification for TLSRoute, which is what we—
C
D
Of course — and if anybody has questions about anything other than the Kubernetes ingress controller: Kong now has Kuma, we have Insomnia, we have a bunch of products under Kong Inc., so any other questions, we're more than happy to take them.
C
This is Barney again. As I said, we are just exploring — we just completed a couple of POCs and are trying to see if we can enable it. Do you have any best-practices guide or something, from the Kong ingress controller standpoint, that we can take a look at from a production standpoint? Any reference links or something you can share?
D
Yeah, so I can try to share some blogs on production usage. We do have some production guidelines for Kong itself, and the ingress controller really builds on top of Kong, right? So I can share some production usage guidelines — sure, they are going to be a bit rough, and we definitely are lacking documentation in this regard, so that's again something we would like to fix in the future. But for now, I'll post a link to that in the chat.
B
Kind of to piggyback on that — I think I put it in the slides, but I've since brought it down — on Kubernetes Slack (if you're not familiar, just look up the Kubernetes community and you'll find their Slack) we have a Kong channel. You're always welcome to jump in there; we're active in there — I'm in there all day. If you have little questions and things like that, especially pertaining to topics like this, do feel free to hit us up in there.
C
B
The proxy talks to Postgres if you set it up with PostgreSQL. The KIC — the Kubernetes Ingress Controller — specifically talks to the Kong Admin API; that's the only thing it interfaces with. So all the interactions with Postgres will actually be the result of hitting that API — from the proxy to Postgres, not from the KIC. Does that make sense? Was that a good answer?
C
B
So right now, in the current Kubernetes Ingress Controller and Kong — I'll call Kong the upstream, because it's upstream to us — for the work on the Kubernetes side of things, you have to have some awareness of the underlying storage behind the Kong Admin API to make decisions about how you use the API. So when we talk about the different databases and so forth from the KIC perspective, that's what we're talking about; and then there's an operational side that has nothing to do with the KIC.
B
Rather, from the KIC side there are just a few things we need to know when making logical decisions in the KIC, versus from the operator side.
B
C
So is this a monthly meetup which happens locally, or is it all over the US? I'm just trying to understand the cadence of this.
A
So it is a monthly meetup, always at the same time; our next one will be on July 13th — it's the second Tuesday of every month — and we do go over different topics. Be sure to check out our online meetup page; I'm adding it to the chat right now, and you can see our past and upcoming meetups there.
D
Yeah, and just to add to that: our audience has been very varied in the past, since Kong's adoption is global. Sometimes we get a lot of users from the Asia-Pacific region, sometimes Europe, sometimes the US. This is the time that works for most people — not everybody — but if we do have a significant audience in any of these regions, we're happy to cater to other time zones better.
A
Excellent. Well, final call for questions; otherwise we'll close today's session. Going once, going twice — all right. A final reminder: ask any questions that come up on that Kong channel on Kubernetes Slack. The link that I just shared will also feature the recording tomorrow, so please check out that link if you'd like to get a copy of this recording. Thanks so much for joining us. Our next call, as I said, will be on the second Tuesday of July, which is July 13th.