From YouTube: Online Meetup: What's Coming in Kong 2.2
Description
Preview of the features in the upcoming Kong Gateway 2.2 release including UDP support, finer-grained buffering control, extended functionality for Go plugins, and more.
A: Okay, let's get started talking about what's coming in Kong Gateway 2.2. For those of you who follow our GitHub and watch what's going on in our next branch, a lot of this won't be news; that's the nature of open source, so everything that's coming has already been finished and is already there.
A: Still, I hope there will be a few surprises and interesting things for us to discuss here. I will be presenting; I'm the product manager for the Kong open source gateway at Kong. We'll also have a demo by Datong, one of our systems engineers on Kong's core team. That presentation is pre-recorded because Datong is based in China.
A: Let me start with a little bit of a year in review. I started the same way when we were talking about the Kong 2.1 release; it's good to put things in context.
A: Over at the Kong open source project we've been busy, hard at work pushing lots of releases through the years. Everyone must have noticed that we had a very long cycle between Kong 2.0 and Kong 2.1, and now that we are within the 2.1 cycle we have already had a number of patch releases in the past month. So this is how the year has looked in terms of stable releases.
A: If we expand that to include the pre-releases, this is more like how the year has looked. 2.1 has seen a really long cycle, in which, essentially, as we release a stable version, the next branch becomes master, and a new next branch is immediately free to receive the features that will land in the following release.
A: So basically, Kong 2.1, which was released in late July, was the culmination of all the new work we did from January to July. That was pretty long and unusual for our cycles. If you remember, the 1.x series last year ran more like two to three months between releases, and we want to get back to that cycle now.
A: This long cycle for 2.1 had a lot to do with re-synchronizing our release schedules between the open source project and Kong Enterprise, which includes the Kong gateway. The idea behind that was actually to be able to speed up both projects.
A: Now, with 2.1 having been released in late July, our goal is to get 2.2 alpha 1 out later this week, and after that we want to do a beta and then an RC series. We definitely don't want it to take as long as 2.1 took; we want this to be much shorter, much in the style of the 1.x releases.
A: So you should expect the Kong 2.2 alpha coming out now-ish, like this week, the beta less than a month after that, and then the RC less than a month after the beta. That will put us back into the two-to-three-month cycle that we had been seeing before. As I said, a lot of the stuff that's going to be in the 2.2 alpha you can already play with right now.
A: You can see our PRs coming in with the new features, and I want to start by talking a little bit about the features that are already in, and then a little bit about the stuff that's still to come. One of the big changes is that Kong 2.2 will bundle OpenResty 1.17.8.2, which is itself based on Nginx 1.17.8. I'm not going to go into the details here.
A: But that includes tons of bug fixes, features and optimizations. The OpenResty package also always picks up the latest commits from Mike Pall's LuaJIT development branch, so there are optimizations on the LuaJIT side there as well. It's a general improvement overall whenever we get to bump the OpenResty version, which we choose to do when we switch minor versions and their release cycle matches ours. I will share the presentation, and you'll see a link in it.
A: Through that link you can go through the OpenResty changelog and see the full details. Next, I want to give a heads-up that starting with Kong 2.2 we will be deprecating support for Cassandra 2. In practice, in terms of code, nothing changes for now.
A: Kong 2.2 will continue to work with Cassandra 2.2, but this is a heads-up that by the time of Kong 3.0 this support will be outright removed. It's a good time to be deprecating it, because Cassandra 2.2 will be end-of-life by the time Cassandra 4.0 is released, and Cassandra 4.0 beta2 is already out, so the release of Cassandra 4 is imminent. That means Cassandra 2 will be end-of-life, and it doesn't make sense for us to keep supporting it.
A: The good news is that this will probably not affect any of you, because from the metrics info that we get, basically nobody who is running a recent Kong 2.x version is still running Cassandra 2, at least according to those stats. The only people we see running Cassandra 2 are running very old versions of Kong, and, well, they should probably upgrade both of them by now.
A: Now, talking about features: there's one open PR that we hope to get merged, and it might be merged as I speak here for all I know, but we have the code pretty much ready for adding response handling for Go plugins. What does that mean? Kong plugins are built around this concept of phases, which mirror the OpenResty and Nginx phases. Essentially, that means you have callbacks for phases such as access, or, in Lua, the filter phases.
A: In the header filter you can process the headers, and in the body filter you can process the body. This is very much aligned with the way the Lua functions in OpenResty work, and it would not make sense to replicate it in our Go support, because it would take a lot of back and forth between the Go server and the Kong process for that to be usable.
A: So what we're doing instead: in our first version of the Go PDK support, we only had support for the access and log phases, so you could handle the request but not really manipulate the response. Now we're adding support for manipulating the response by buffering the upstream response on the Kong side and then running a new all-in-one phase called response, which allows you to modify and manipulate both the headers and the body at once.
A: It supports new APIs, such as kong.response.get_body, where you can get the body of the response, and you can also read and set the response headers. As a bonus, and in order to get consistent PDK APIs across the board, we're adding this response phase, which automatically does response buffering, to the Lua PDK as well, so you can use the new response phase there too.
A
Instead
of
the
filter
phases
right,
you
can
use
one
or
the
other,
and
if
you
use
the
filter
phases,
then
your
responses
are
not
buffered
and
you
get
like
streaming
and
it's
higher
performance.
And
if
you,
if
you
use
response,
then
the
response
is
buffered
with
all
that
that
entails.
A
But
then
you
can
do
things
that
you
cannot
do
with
the
filter
phases
such
as
modify
a
header
based
on
the
contents
of
the
body
right
because
in
the
filter
phases
that
the
header
filter
has
runs
first
and
the
body
filter
runs
next.
So
here
since
we
buffered
the
entire
response
in
the
response
phase,
you
can
modify
both
at
once.
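As an illustration of why buffering makes this possible, here is a small Python sketch (not Kong code; the function and header names are made up for the example). With the whole body in hand, a header derived from the body, such as a digest, can be set in the same callback:

```python
import hashlib

def response_phase(headers, body_chunks):
    # Buffered model: the whole body is available alongside the headers,
    # so headers that depend on body contents can be set in one place.
    # In the streaming (header_filter/body_filter) model the headers have
    # already been sent by the time the body chunks arrive.
    body = b"".join(body_chunks)
    headers = dict(headers)
    headers["Content-Length"] = str(len(body))
    headers["X-Body-SHA256"] = hashlib.sha256(body).hexdigest()
    return headers, body

headers, body = response_phase({"Content-Type": "text/plain"},
                               [b"hello ", b"world"])
```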
A: Another major thing that was just merged into our next branch, and which will be featured in 2.2 already in the alpha, is UDP support.
A: At this point we'll have HTTP, HTTP/2, gRPC, TCP, and now UDP. In this first stage of development, the feature will have support similar to Kong's TCP support, meaning proxying, as well as the usual load balancing and plugin support. Plugin support at first mostly means logging, because, given the nature of an API gateway, most of the plugins assume, or only make sense in, HTTP environments.
A: But for things like UDP, logging support makes sense, and for that we will proceed with a demo which, as I said, is pre-recorded. We'll hear Datong talk about the incoming UDP support. I think a sneak peek of the next Kong version is sort of a first for us in these online meetups, and another first is that we're going to give you an inside look at how things happen in Kong's development, because this video was not actually prepared for this meetup.
A: This was one of the internal engineering demo sessions that we hold regularly inside the company, where the engineers, as they develop features, present the cool stuff they're doing to the other teams. So this is a presentation that Datong prepared to show the other teams the UDP support internally, and we decided, well, why not share it with the community at large? So without further ado, let's hit play and watch Datong's presentation.
C: Hello folks. In this video it is my honor to demonstrate the UDP proxying feature that we have recently developed for open source Kong, which is scheduled to ship in the upcoming 2.2 release; I believe the alpha is actually coming out next week, so it should be available for testing by everyone pretty soon. I just want to give everyone a taste of the feature at its current stage before we go deep into the demo.
C: We might be adding more features, such as health checkers and so on, in the future, but those are not coming in 2.2 yet. So without further ado, let me jump straight into the demo. I have three demos here to show you. The first one is a pretty basic UDP proxying demo.
C: I have a config here. If you take a look at this file, it is pretty simple: it's just proxying onto the localhost 7000 port using the udp protocol, which we added in the 2.2 version. The route is basically the same as a TCP proxying route, but the protocol is udp. This works as you would expect, so let me demonstrate it for you.
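A declarative config along these lines should reproduce the setup described (a sketch, not the file from the demo; the names are placeholders, and the destination-port match is an assumption, since stream routes match on sources or destinations):

```yaml
# kong.yml (sketch): proxy UDP to localhost:7000.
_format_version: "1.1"
services:
  - name: udp-example            # hypothetical name
    protocol: udp                # new protocol value in 2.2
    host: localhost
    port: 7000
    routes:
      - name: udp-example-route  # hypothetical name
        protocols: [udp]
        destinations:            # stream routes match on source/destination
          - port: 9999
```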
C: In order to run this, you need the stream listen setting, and we added a new udp flag to the listener. This causes Kong to listen on port 9999 on localhost, but the listening protocol will be udp instead. The udp and tcp listeners can coexist on the same port; they don't interfere with each other. So we start Kong like this.
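The startup might look something like this (a config-fragment sketch; the port is from the demo, the rest is assumed):

```shell
# A "udp" flag on a stream listener makes it a UDP listener; a TCP and
# a UDP listener may share the same port number without interfering.
export KONG_STREAM_LISTEN="127.0.0.1:9999 udp"
kong start -c kong.conf
```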
C: So I can tell netcat that it's in UDP mode, give it the localhost IP address, and connect to the stream listen port: that's the 9999 port, not the upstream's 7000 port. If we connect to that and type something, as expected, it just works back and forth. The UDP packets are being shuffled between the backend, which is here, and the downstream client, which is here, and Kong is just acting in the middle.
So
some
of
you
might
be
wondering
how
this
would
actually
work,
because
you
know
we
know
udp
protocol
is,
it
does
not
have
a
concept
of
connection
and
stream.
So
how
do
we
actually
proxy
this
kind
of
protocol?
So
we're
able
to
proxy
udp,
based
on
the
fact
that
the
majority
of
the
udp
protocols
udp
based
protocols,
I
would
say
out
there-
are
designed
on
this
principle.
So
say
you
have
a
client
a
and
it
uses
the
source
board
123
to
send
the
udp
packet
to
a
server
b.
Let's
say
the
destination
code
456..
C: The server then sends its response back to source port 123. This is not strictly required by UDP; technically the server could send the response to any port the client chooses to listen on, but the majority of protocols out there just send the packet back to the originating port. That's how this proxying actually works: there's technically no concept of a stream in UDP, but because of this port consistency between related packets, we are able to guess which packets belong to the same stream and proxy them accordingly.
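The reply-to-the-source-port convention is easy to see with plain sockets. Here is a minimal Python sketch (not Kong code) of a client and a stand-in "upstream" exchanging one packet:

```python
import socket

# Stand-in "upstream" server bound to an OS-assigned port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server.settimeout(5)
server_port = server.getsockname()[1]

# Client: binding to port 0 makes the OS pick an ephemeral source port.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.bind(("127.0.0.1", 0))
client.settimeout(5)
client_port = client.getsockname()[1]

client.sendto(b"ping", ("127.0.0.1", server_port))
data, addr = server.recvfrom(1024)

# The server sees the client's (ip, source_port) pair; replying to that
# pair is the convention a UDP proxy relies on to group packets into
# one "session".
server.sendto(b"pong", addr)
reply, _ = client.recvfrom(1024)
```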
C: I will show you another demo next, this one with logging. Let me look at this file here. It's all the same thing, except that we enabled the file-log plugin on the example service, writing the log to the temp filesystem.
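Enabling the plugin in the declarative config might look like this (a sketch; the service name and log path are placeholders, not from the demo):

```yaml
# Add a file-log plugin to the service; every proxied UDP session
# produces one log entry, whether or not the upstream is reachable.
plugins:
  - name: file-log
    service: udp-example        # hypothetical name
    config:
      path: /tmp/udp-demo.log   # hypothetical path
```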
C: In order to demo this we actually don't even need the upstream, because even if the packet is lost, the logging plugin will still be fired. Especially with UDP, there is no concept of a connection being established, and by the protocol's design it doesn't really care whether the packet arrives or not.
C: So we restart Kong with this declarative config instead, and then I simply send something to that port. The upstream is actually not running, so this goes nowhere. But if we look at the log file that we specified, you will see that Kong indeed wrote a log entry onto the filesystem.
C: We have the session: a session in Kong's layer-4 proxy mode is basically the analogue of an HTTP request, and it also has a status code. This is not a real status code; it just maps the L4 status to an HTTP status so it's easier for people to understand, and this is actually done by Nginx.
C: You will see that, despite the upstream not running, the status is still 200, which means that Kong considered that it did its job and successfully proxied the packet. It indeed did, because, like I said, the UDP protocol has no concept of a connection: even if the upstream is down, we still consider the proxying successful, even if nobody was listening for the packet.
C: I hope that makes some sense. The third demo I want to show is the load balancing support for UDP, which, I guess, is probably what makes this feature actually useful; without load balancing I doubt anyone would really want to try this. This is still the same config, except that the service now points to an upstream, and this upstream uses a round-robin load balancing algorithm and has two targets.
C: One is on port 7000 and one is on 7001. So we spin up two netcats here, one listening on 7000 and the other on 7001, and then let me restart Kong with the LB config.
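The upstream section of that config would look roughly like this (a sketch; the upstream name is a placeholder):

```yaml
# Round-robin load balancing across two local UDP targets.
upstreams:
  - name: udp-upstream          # hypothetical name
    algorithm: round-robin
    targets:
      - target: localhost:7000
      - target: localhost:7001
```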
C: What happens is that the first time you send a packet, it goes to one of the upstreams, and the next time you send one, it is load balanced onto the other. I'll show it to you here: if I type hello, you see that it actually showed up on 7001. This works because of the port number assignment I mentioned: if this upstream responds, we still know where the packet should go back to.
C: If there are more packets from the downstream, they will still end up on the same upstream; it's not going to randomly bounce between the two, because Kong knows that this source port should always go to that upstream for this destination address. That's the session consistency we're talking about when proxying a UDP protocol.
C: But if we kill the client here, rerun it, and send something else, you will see that now it's load balanced onto the 7000 upstream. Why? Because when we respawn the netcat client, the source port that netcat randomly chooses to connect to Kong changes, so Kong considers it a separate UDP transaction and reruns the load balancing algorithm.
C: So I hope this all makes sense. Obviously I won't be at the meeting to take questions, but I think one thing people might be curious about is how long we remember the mapping between the client source port and the destination.
C: That's actually not something we do ourselves; it's done by the Nginx stream proxy module. If you look at the documentation, there's a directive called proxy_timeout, and it's set to 10 minutes by default. That obviously works for TCP mode, but in UDP mode it means we will remember the mapping between the client source IP and port and the upstream for 10 minutes.
C: Within those 10 minutes, anything this client sends to the upstream will end up on the same upstream. And I have to say that this is not a total time limit; it's the maximum time between packets that we will wait. If we don't see anything at all from the client for 10 minutes, then we forget about this connection information.

C: The next time the client comes, it will be a new connection, and it might be load balanced onto a different upstream. I just want to emphasize this: because of the nature of UDP, we don't have a reliable way to tell whether the transaction has finished, so we have to use some kind of time limit for remembering the UDP session information. With TCP, because of the explicit handshake and finalization, we always know whether a session is still active, but for UDP we just have to guess: if it's been too long and we don't see anything, we assume the connection is dead.
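The idle-timeout behavior just described can be sketched as a small session table in Python (an illustration only, not Kong's implementation; all names are made up):

```python
class UdpSessionTable:
    """Maps (client_ip, client_port) to an upstream, forgetting a mapping
    once no packet has been seen for idle_timeout seconds (the behavior
    nginx's proxy_timeout gives the proxy; 600 seconds by default)."""

    def __init__(self, idle_timeout=600):
        self.idle_timeout = idle_timeout
        self._sessions = {}  # key -> (upstream, last_seen)

    def upstream_for(self, key, now, pick):
        entry = self._sessions.get(key)
        if entry is not None and now - entry[1] < self.idle_timeout:
            upstream = entry[0]   # session still fresh: stay sticky
        else:
            upstream = pick()     # new or expired: run the balancer again
        self._sessions[key] = (upstream, now)
        return upstream

# Round-robin stand-in for the load balancer.
targets = iter(["localhost:7000", "localhost:7001"])
table = UdpSessionTable(idle_timeout=600)
key = ("127.0.0.1", 12345)   # a client's (ip, source_port)

first = table.upstream_for(key, now=0, pick=lambda: next(targets))
sticky = table.upstream_for(key, now=300, pick=lambda: next(targets))
rebalanced = table.upstream_for(key, now=1000, pick=lambda: next(targets))
```

Here the second packet (300 s later) stays on the same target, while a packet arriving 700 s after the last one is treated as a new session and re-balanced.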
A: Sorry about that. Yeah, if you have any questions about it, feel free to post them in the chat.
A: We'll get to them later. Let me get back to the presentation here and talk about the rest of the 2.2 stuff. Other things that are also coming in Kong 2.2 alpha 1: rate limiting by path. The rate-limiting plugin's limit-by attribute now supports path as an option; that one was a community contribution.
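Enabling it might look like this (a sketch; the values are placeholders, and the exact config keys should be checked against the 2.2 plugin docs):

```yaml
# Limit requests against a specific path rather than per consumer or IP.
plugins:
  - name: rate-limiting
    config:
      minute: 100                  # hypothetical limit
      limit_by: path
      path: /expensive-endpoint    # hypothetical path being limited
```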
A: Also, Kong now produces, and respects when it comes from a trusted source, the X-Forwarded-Path header; that's also an addition in 2.2. Related to that, there's a new PDK function, kong.request.get_forwarded_prefix, to read that information for the prefixes. The kong.log.serialize function was also extended to support streams, to make sure that all of our logging plugins support TCP and UDP and provide information correctly, and the gRPC-Web plugin was extended to support the same strip_path behavior that our routes support. So that's also already in and coming in Kong 2.2 alpha 1.
A: And more stuff is on the way, as we are hard at work building things that we want to get into 2.2. One of the things in the works right now is support for a configurable setting for request and response buffering, configurable per route. Before, enabling request buffering or response buffering was more of a global setting, which is what you get, for example, from Nginx, but that changes with Kong 2.2.
A
So
on
since
kong,
2.0
and
especially
in
2.1,
we
have
been
pushing
this
new
hybrid
mode
in
which
you
can
have
database,
backed
control,
plane,
nodes
and
db-less
data,
plane,
nodes
that
coordinate
between
each
other,
where
you
can
have
your
admin
api
on
control,
plane,
nodes
that
talk
to
your
database
and
you
have-
and
you
can
have
data
play
nodes
that
do
not
use
a
database
at
all
and
respond
to
the
proxy
requests
and
for
2.2.
A: For 2.2 we are working on some API improvements for the status endpoint, so you can have a better view of your data plane nodes from your control plane nodes, and we have also made some significant performance improvements in the synchronization. That is going into 2.2, not 2.1, because it changes the internal communication between the CP and DP nodes, so it was something that fit 2.2 better.
A: Also, we have a pipeline of work on future improvements for the load balancer: we are working on internal refactors for that and looking at some possible features that we want to get in.
A: I'm being purposely vague on this one, because it's very much work in progress and I don't want to vaporware anything for 2.2; if it doesn't get ready for 2.2, we'll get it into 2.3. But we're looking at things such as integration with service discovery services like Consul, and I would very much like to hear people's impressions.
A: Let us know whether this is something we should further pursue, like having a more direct integration with service discovery tools. Generally, we're laying the groundwork, with internal refactors and the like, to make the upstreams and targets more amenable to new features in the upcoming releases. One of the things we are looking into is removing the target history: the history of target changes that gets stored in the database, which Kong uses in order to perform consistent hashing.
A: We are changing the consistent hashing algorithm so that you can still have consistent hashing without a target history being stored, because that target history was always a bit of a pain: you had to clean it up periodically, and if you make lots of changes to your targets, it was really not an optimal way of doing things. We are working on that now and hope to have some news in future releases.
A: This is not going to be in the alpha, but it is the thread of work that we are looking at and working on right now within the core team, so more load balancer improvements are on the way.

A: Of course, when we talk about stuff that's coming, another source of new features is your contributions to Kong 2.2. We had a bunch of new features in 2.1 that came from the community, for which we reviewed your PRs and got them merged and in shape for release.
A: So if you have any ideas and contributions, things that you want to improve Kong with, make sure you open a PR, and even if we don't respond quickly, we try our best and we'll look into getting that included as well. That's all I have for today, and I'm open for questions: feel free to just unmute yourself and ask, or use the Zoom chat. I also have a question that comes from us, the Kong open source development team.
A: From all that was presented here, what are the features that you're most excited about? Feel free to comment in the chat and give us feedback in any shape or form. We at Kong are very much eager to hear your feedback on this, because we develop features where we feel there is demand for them, so that's really important for us.
A: If there are no other questions, we'll hand over to Caitlin.
B: Awesome. Well, thanks to Sean for the awesome presentation again, and thank you, everybody who joined us today. The only other thing I want to mention is that we have more exciting announcements and product updates coming out at Kong Summit, which is happening on October 7th through the 9th. We also have workshops happening there, if you're interested in diving into Kong Gateway or some of our other projects a little bit more, so I'm going to drop the link in the chat.
B: Tickets are free this year; it's a virtual event, so I'm pretty excited about that, and I hope to see you all there. Our next online meetup will actually be the week after that, in October, so I'm sure we'll have some updates there as well coming out of Kong Summit. So thanks, everyone.