From YouTube: [Online Meetup] Kong Gateway 2.2 Deep Dive
Description
The Kong Gateway 2.2 release is right around the corner! Join our Core Engineering team as they walk you through the features and updates in this release including:
- Upgraded OpenResty and NGINX versions
- Deprecating Cassandra 2.2
- Extended functionality for Go plugins (response handling)
- UDP support (proxying, load balancing, and logging plugins)
- New route object attributes: dynamically configure request and response buffering
- Removed target history
- Set client certificate per upstream for proxying
- Automatically load OS certificates
A
Yeah, so today's theme is Kong Gateway: 2.2 is right around the corner. We are getting really close to the release, and this is essentially part two of the presentation that we had in the previous online meetup, around the time we had just released our alpha. We already had a bunch of the new features out, so we presented a little bit of those, and today we're going to talk about the rest of the new stuff that's coming for Kong 2.2.
A
So let's take a quick recap of how the 2.2 release cycle has been so far. At the same time as we have been maintaining the stable series of Kong 2.1, which was released quite recently, with the 2.1.1, 2.1.2, 2.1.3 and 2.1.4 series of releases, we have also been hard at work building Kong 2.2.
A
In September we released the 2.2 alpha, and this month, in October, a few days before the Summit, we released our 2.2 beta. If you were paying attention on Kong Nation, you caught that, and we also made a bigger splash about it at Summit, announcing it. So this is the version that is currently out, and so far the feedback for the 2.2 beta has been positive.
A
That
means
no
one
has
come
to
us
shouting
fire
that
everything
was
completely
broken,
so
in
generally
in
open
source,
no
news
is
good
news
and
so
far
the
feedback
that
we
have
got
was
positive,
that
people
were
happy
and
looking
forward
for
new
features.
A
So if everything goes according to plan and we don't have any big surprises, next we'll be releasing release candidate 1, which is our idea of what the real final release should be, and if that doesn't get any shouts of fire, then it will be promoted to become the generally available final version of Kong 2.2.
A
Our current plan for this shorter cycle, which, if you have noticed, is a lot shorter than the one we had between 2.0 and 2.1, is a total of something like three months between 2.1 and 2.2. Our goal is to do RC1 later this week, so if you want to try out the beta and give us any feedback, there's no better time than now.
A
So,
let's
start
with
a
quick
recap
of
everything:
that's
come
in
in
kung
2.2
so
far,
including
some
of
the
stuff
that
we
have
discussed
and
presented
at
the
previous
meet
up.
So
we
can
have
in
this
presentation
a
full
view
of
what's
in
kong,
2.2
all
right.
So
first
thing
we
have
bumped
the
open
rest
version,
which
includes
a
bump
of
the
underlying
engine
x
version,
so
that
includes
tons
of
bug,
fixes
features
optimizations
and
we
direct
you
to
the
changelog
for
openresty.
A
We are also deprecating Cassandra 2.2, which will be end-of-life when Cassandra 4.0 is released. Note that the 4.0 beta is already out, so it's just a matter of time, and our telemetry generally shows that people who are running the latest Kong 2.x versions are not using Cassandra 2.2 anyway, so keeping it is just maintenance burden.
A
We have a new phase called response for Go plugins, which buffers the upstream response and then runs the phase. That means it allows you to get the body of the response and act on it, because before, Go plugins could only act on the request. And there's also a buffered response phase for Lua matching this one as well.
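As a rough illustration of what the matching Lua phase enables, a plugin skeleton along these lines (a hypothetical plugin; the handler name `response` and the buffered-body PDK call are the relevant parts) can now read the full upstream body:

```lua
local BodySizePlugin = {
  PRIORITY = 1000,
  VERSION = "0.1.0",
}

-- The response phase only runs when Kong buffers the upstream
-- response, so the complete body is available to the plugin here.
function BodySizePlugin:response(conf)
  local body = kong.service.response.get_raw_body()
  if body then
    -- act on the response; as a made-up example, expose its size
    kong.response.set_header("X-Upstream-Body-Size", #body)
  end
end

return BodySizePlugin
```

This is a sketch, not a complete plugin: it assumes the standard Kong plugin-handler layout and runs only inside the gateway.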
A
What's probably the biggest feature in Kong 2.2 is UDP support. It works similarly to what we currently have for TCP support: you have support for proxying, we have support for load balancing based on the usual balancing criteria, and we also have support for plugins, generally where it makes sense, because most Kong plugins are targeted toward the API gateway use cases, for REST APIs: very HTTP-oriented, very JSON-oriented, those kinds of things.
A
So generally, the plugins that are most useful in the UDP scenario are the ones for logging, and we have tested and evaluated all of those.
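As an illustration, a minimal declarative configuration for proxying UDP with a logging plugin might look like the following sketch; the service name, addresses and ports are all made-up values, and it assumes Kong has a `stream_listen` address configured for the destination port:

```yaml
_format_version: "2.1"
services:
- name: dns-udp
  protocol: udp
  host: 10.0.0.10        # hypothetical UDP backend
  port: 53
  routes:
  - name: dns-route
    protocols: [udp]
    destinations:
    - port: 5353         # matches a stream_listen port on Kong
    plugins:
    - name: udp-log      # one of the logging plugins usable with UDP
      config:
        host: 10.0.0.20  # hypothetical log collector
        port: 9999
```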
A
So let's talk now about what's new in the 2.2 cycle since then: what's the new stuff that has come in. For a couple of those I will hand the presentation over to our engineers from the core team, who are going to show and talk about those features in a little more detail.
A
So the first one of those is that we have now added two new attributes to the route object, so you can dynamically configure the request and response buffering, which used to be something very static, something that could only be adjusted, I believe, via the NGINX template. But Aapo is going to talk a bit more about that, so I'm going to stop sharing and hand it over to Aapo, so he can do a quick demo and explain what this feature is.
C
Let me share my screen. It's probably easiest to show what this request and response buffering is by showing you a demo. Here I'm running the latest beta on our release/2.2 branch. So let me first create a service that we can use for this: I'm adding a new service called upload, like this. Buffering is usually problematic when you are transferring bigger files, as we will see in this demo.
C
Okay, let me change this to something available. I'm not sure if I have that running, so I'm using this instead, just having the status/200 endpoint as the service endpoint.
C
Next, I'm going to create a couple of routes. For this upload service, I will add a route called upload-buffered. This is the default behavior from before we added this new feature, so we will see how it changes when you turn it off. I don't give any buffering options for this route, so everything will be buffered, both request and response, by the proxy module.
C
I will add a path for buffered here, and that's it. Then let's create a new one for unbuffered: the path is slightly different, and I will be adding the new flag for this route. In this route we will be disabling request buffering, so when your client sends, for example, a five-megabyte file, we don't buffer that file before we proxy it to the upstream, and this will have latency benefits with large files, as you will see in this demo. So we can first send a request to this unbuffered one.
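The two routes created in the demo correspond roughly to Admin API calls like the following sketch; the route names and paths are illustrative, and it assumes a Kong Admin API listening on the default localhost:8001:

```shell
# Buffered route: the pre-2.2 default, both directions buffered
curl -s http://localhost:8001/services/upload/routes \
  -d name=upload-buffered \
  -d paths[]=/buffered

# Unbuffered route: the new per-route flags added in 2.2
curl -s http://localhost:8001/services/upload/routes \
  -d name=upload-unbuffered \
  -d paths[]=/unbuffered \
  -d request_buffering=false \
  -d response_buffering=false
```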
C
This is the default proxy port, and now I'm just sending a request there. You don't see much happening. Okay, there was slight latency; if I warm this up a little bit, you can see that Kong adds very little latency to this request, only about one millisecond or even less.
C
Now, let's do something with the unbuffered one. As you see, we still get the same result. So what's going on here? Nothing changed, everything was fast with both, so we need to do an HTTP/1.1 chunked request. Let me use curl for that, and now I'm uploading: I have a 151-megabyte file on my machine here that I'm transferring with chunked encoding, which is a feature of HTTP/1.1.
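To make the chunked-encoding part concrete: HTTP/1.1 frames such an upload as a series of length-prefixed chunks, so the total size isn't known up front, which is exactly why the proxy must either buffer the whole stream or forward chunks as they arrive. A small self-contained sketch of the wire framing (not Kong code, just the format):

```python
def chunked_frames(body: bytes, chunk_size: int = 5):
    """Yield the HTTP/1.1 chunked transfer encoding frames for a body."""
    for i in range(0, len(body), chunk_size):
        chunk = body[i:i + chunk_size]
        # each frame: hex length, CRLF, data, CRLF
        yield b"%x\r\n%s\r\n" % (len(chunk), chunk)
    # a zero-length chunk terminates the stream
    yield b"0\r\n\r\n"

frames = list(chunked_frames(b"hello world"))
print(frames[0])  # the first frame exists before the rest of the body is read
```

An unbuffered proxy can forward each frame as it is produced, while a buffered proxy must consume the generator to the terminating `0\r\n\r\n` frame first.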
C
So this setting that we added, route.request_buffering, affects only HTTP/1.1 when you are using chunked requests like this. So now I'm uploading to the buffered one.
C
As
you
can
see,
I
got
like
a
lot
of
latency
approximately
here
and
when
I
do
it
like
it
doesn't
even
go
like
very.
Even
if
I
warm
up
you
there's
still
like
a
lot
of
latency
there.
That's
be
because,
with
default,
behavior
kong
reads
the
whole
body
of
the
request
before
it
forwards
it
to
upstream.
C
So now let's do the same for the unbuffered one, and you will see that the latencies are around one millisecond or even smaller. Okay, there's always some latency happening here, but this is the main point of turning off the buffering on a certain route: for example, if you want your service to be able to read chunked requests that send larger files. There is also another setting, called response buffering.
C
I'm
not
sure
what
the
use
cases
are
for
that.
But
if
you
turn
that
on
then
we
will,
the
nginx
will
do
not
do
any
buffering
on
its
own.
It
will
synchronously
send
all
the
data
to
your
client
when
it
receives
that
now,
with
the
response
response
buffering,
even
if
it's
turned
on
nginx
doesn't
try
to
buffer
the
whole
response,
it
only
buffers
so
slightly
slightly
so
so
the
change
is
not
that
it
just
means
that
there
are
less
chunk,
probably
a
little
bit
less
chunks
to
be
sent,
I'm
not
sure.
C
As for the use case: perhaps there is an old 90s chat application or something like that, which keeps an ongoing connection, and they don't want that chat application to wait for the NGINX buffer to fill before it flushes to the client. That might be the use case for it, but it's quite uncommon. So this was the new feature we added, thanks.
A
Thank you, Aapo. I'm going to share my screen again, so we can continue discussing the new features in 2.2. Another big thing that is coming, which will not be immediately visible to users but which is paying off a lot of tech debt and will unlock new and exciting features in future releases, is a major internal refactor of the target entities, such that we have removed the so-called target history from the storage of target entities in Kong.
A
As you have seen, we have over time added new algorithms and criteria: we have round-robin, consistent hashing, least connections. The target history was an assumption deep in the database that only made sense, and only existed, for the sake of the consistent-hashing load-balancing algorithm. Now that has been decoupled; it's no longer tied into the database representation, and that will give us flexibility moving forward.
A
Another feature that has been added in Kong 2.2 is that now you can set, per upstream, on the upstream object, the client certificate for use with proxying. In Kong 2.1 we had added the client_certificate attribute to the upstream object, but that was used only for active health checks.
A
So
now,
if
you
don't
have
the
client
certificate,
if
you're
doing
you
a
mutual
tls
kinds
of
things,
and
if
you
don't
have
the
client
certificate
set
to
in
your
service-
and
you
have
multiple
services
that
share
the
same
upstream,
then
the
client
certificate
of
the
upstream
will
be
used
for
the
processing
as
well.
So
generally,
we
feel
that
this
is
basically
like.
A
Things
are
working
as
one
would
guess,
as
one
would
expect
right,
so
we
feel
that
we
are
making
the
the
api
the
usage
of
those
entities
more
consistent,
because
now
you
can
set
client
certificate
in
the
service
if
you
want
to
set
it
on
a
per
service
based
basis,
and
you
can
also
add
it
for
proxy.
In
the
upstream
in
case,
you
have
an
app
stream,
that's
shared
by
multiple
services.
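Sketched against the Admin API, the new behavior looks something like this; the certificate files, upstream name and placeholder ID are all hypothetical, and it assumes the Admin API on the default localhost:8001:

```shell
# Upload the client certificate/key pair Kong should present
curl -s http://localhost:8001/certificates \
  -F cert=@client.crt \
  -F key=@client.key

# Attach it to the upstream; in 2.2 it is used when proxying,
# not only for active health checks as in 2.1
curl -s -X PATCH http://localhost:8001/upstreams/shared-upstream \
  -d client_certificate.id=<certificate-id>
```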
A
This was a feature request coming from folks on the ops side of things, because they mentioned that deploying TLS certificates and configuring Kong to use them was always an additional burden and an extra step, especially when you have your own certificates that you already want to configure in kong.conf, but you also want to use the OS certificates.
A
So
now
this
has
been
seriously
improved
for
con
2.2
and
I'm
going
to
stop
sharing,
because
enrique
is
going
to
talk
a
little
bit
more
about
how
that's
going
to
work
in
kong,
2.2.
D
The way that it's used is explained here. This is the 2.1 version of the explanation, which will be improved in the 2.2 version. The way it works is basically: any time we do this in Lua code, specifically calling sslhandshake with true as the third parameter, which is quite specific, then that option gets used; the certificates pointed to by that file get used on this request.
D
That means this list might not be completely exhaustive; there might be another case that I forgot, but these are the main three cases I detected when working on this: in hybrid mode, the control plane and data planes use these certificates; we use them when we do any plugin work connecting via SSL; and also, if enabled, when we connect to the database used by Kong, Postgres or Cassandra.
D
The changes we have made are two different ones. The first one is that now that option accepts multiple paths instead of one; that way, it's easier to merge certificates from two different sources. Previously, people needing to do that had to concatenate those files themselves; that's not necessary anymore. The second one is that we enabled one possible special value for those paths, called system, and that one will be expanded to the useful defaults that all installations have.
D
So if you are on Debian, it will try the Debian one; if you are on Red Hat, it will try the Red Hat one, and so on. It's nothing fancy: it tries several different pre-hard-coded paths, but that's what most people need, and you can still add as many as you want later on, if that's not sufficient.
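In kong.conf terms, the two changes combine along these lines; the extra path is a hypothetical example:

```
# kong.conf
# "system" expands to the distribution's default CA bundle
# (Debian, Red Hat, etc.); additional paths can be listed alongside it.
lua_ssl_trusted_certificate = system, /etc/kong/extra-ca.crt
```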
D
So with this, we hope that dealing with SSL certificates will be much easier for people doing, as Hisham said, infrastructure work. And with that, I give the screen back to Hisham.
A
All right, thanks! Yes, when we announced internally that this feature was coming, the folks from the customer experience team were cheering, because this has always been a pain point for them when managing customers' SSL certificate configurations. The feedback that we got from them is that it's going to make their lives a lot easier, and we hope...
A
That was early this year, yes. So we expect the same is true for all of our open source users. A lot more things are coming in 2.2, and by now none of it is a surprise, because we have already released the beta, which has already been feature-frozen.
A
So in terms of features, everything that you see in the combined announcements for the alpha and the beta is what's coming in Kong 2.2. Here's just a quick list shouting out some of the things. In schemas, we've always had the shorthand feature for specifying deprecations, renames and things like that; now that has been extended into a fields table where you can have proper type definitions, not assuming strings anymore.
A
So that's useful. In the PDK, kong.response.exit now honors the headers settings that you have in kong.conf: if you configure whether you want to return the Server header or the Via header, those things are honored, and the same goes for the Admin API responses. For hybrid mode, we have improved our graceful exit procedures for control plane and data plane nodes.
A
Also, as we were developing the UDP support, we went through all of our existing plugins with a fine-toothed comb, so we made fixes and improvements to various of the logging plugins, specifically when dealing with TCP and now UDP services. And there's more, which you can check in the changelogs for the alpha and the beta. Again, the feedback for the beta so far has been pretty smooth; we are already doing lots of final touches on it, and we hope to get RC1 out later this week. And again, if no one shouts fire about that, then it will be promoted to GA shortly after.
A
So that's all we have for today on what's coming in Kong 2.2. We have a lot of time left, so we're open for questions this time.
B
I've got a question. You mentioned some refactoring on the targets and stuff like that in upstreams. I thought I remembered reading somewhere that targets are eventually going to become deletable, that we'll be able to remove them versus setting the weight to zero. Is that in this release, or is that another iteration in the future? I can't remember; I thought I saw a PR about it.
D
Yeah, the target now works like any other entity: you can delete it, you can patch it. It's the same thing as any other endpoint; it's not a special one anymore.
B
Cool, yeah. I always wondered what the limitation was, what the reasoning was for not having a delete there and having it just be set to zero, but it sounds like whatever it used to be, y'all have eliminated that to make it easier on everybody to be able to delete it. So that's really cool. Thank you all.
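With the refactor, a target can be managed like any other entity, along these lines; the upstream name and target addresses are made up, and the Admin API is assumed on the default localhost:8001:

```shell
# Before 2.2, "deleting" effectively meant appending a new history
# entry with weight=0; now the entity can be deleted or patched in place.
curl -s -X DELETE \
  http://localhost:8001/upstreams/my-upstream/targets/10.0.0.5:8080

curl -s -X PATCH \
  http://localhost:8001/upstreams/my-upstream/targets/10.0.0.6:8080 \
  -d weight=50
```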
A
Yeah, excellent, thanks for your questions. Do you have any other questions?
A
If not, you can always catch up with us on Kong Nation, so we can stay in touch and continue discussing, and with that I'm going to hand over to Caitlyn.
B
We will have this recording up later on YouTube, so you can take a look at it in the future, and then, if you have any questions, you can join us on Kong Nation like Hisham mentioned. So thanks, everyone, have a great day!