Description
Kong’s User Calls are a place to learn about technologies within the Kong #opensource ecosystem. This interactive forum will give you the chance to ask our engineers questions and get ramped up on information relevant to your #Kong journey.
This month, Wangchong Zhao, Kong Senior Software Engineer, will discuss how he and his team were able to obtain a 12% increase in RPS and a 37% drop in latency.
#KongGateway #Community #usercall #DevOps
A
Thanks to all of you who are joining us today. I'm Taryn Jones, part of the developer marketing team here at Kong, and I'd like to welcome you to our February Kong Gateway user call. Today we have a special presentation for you from Wangchong Zhao, a Senior Engineer at Kong. Wangchong is going to talk about the recent Kong Gateway performance enhancements and tell you how he and his team were able to obtain a 12% increase in RPS and a 37% drop in latency.
A
At the end of the presentation, we will open it up for Q&A and discussion. You'll be able to unmute and turn on your video, but you can also feel free to drop questions in the chat tab at the bottom of your screen, and we'll make sure to get to those. So with that, I'll go ahead and hand it over to Wangchong to start the presentation.
B
Hello, everyone. My name is Wangchong and I'm a Gateway engineer here at Kong. First of all, I want to wish everyone a happy Year of the Tiger, and as you know, the tiger runs very fast, so our topic today is also about speed: the recent performance gains we have in Kong.
B
I'm going to share the journey of how we found the issue and how we solved it, along with the technical background around this topic, and hopefully this will benefit all of us in learning about Kong and OpenResty.
B
Okay, so before we get to the technical part, in case you didn't join our previous meetups: we introduced our performance testing framework a while ago. This framework is currently open source, and it's going to be in Enterprise as well in the next version.
B
It's basically an extension to our current Lua-based integration testing framework. You can run multiple backends, from Docker to the local machine to Terraform-provisioned real bare-metal machines, and it will give you results like RPS as well as flame graphs.
B
Let me give you a brief idea of what it looks like. It's written in Lua, and you can call multiple APIs: for example, start SystemTap to collect samples, then start the load and gather the results.
B
Okay, so we have an integration of this framework with our GitHub CI, so you can view all the results there and also download the artifacts, which include the so-called flame graphs. Let's take a look at what a flame graph looks like. It gets its name because it looks like a flame, and there's a span for each function.
B
Each of these boxes represents the time a function takes while being sampled. SystemTap is a kernel module: it samples the call stack of the current process and records the current function being called, as well as its parent call stack. The longer a span appears in this flame graph, the more time the CPU is spending in that function. And here is a test case.
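The sampling idea described above can be sketched outside of SystemTap: fold the sampled call stacks into counts, so a frame's width in the flame graph is proportional to how often it was on-CPU. Below is a minimal Python illustration of that folding step (the real pipeline uses SystemTap plus the FlameGraph scripts; the sample stacks here are made up):

```python
from collections import Counter

def fold_stacks(samples):
    """Collapse raw stack samples (root -> leaf tuples) into
    'a;b;c count' lines, the folded format flame graph tools consume."""
    counts = Counter(";".join(stack) for stack in samples)
    return [f"{stack} {n}" for stack, n in counts.most_common()]

def self_share(samples, func):
    """Fraction of samples where `func` was on top of the stack,
    i.e. the CPU was executing inside it when the sampler fired."""
    on_top = sum(1 for stack in samples if stack[-1] == func)
    return on_top / len(samples)

# Hypothetical samples: pretend the profiler fired 5 times.
samples = [
    ("main", "access_phase", "var_get"),
    ("main", "access_phase", "var_get"),
    ("main", "access_phase", "balancer"),
    ("main", "log_phase", "var_get"),
    ("main", "log_phase"),
]

folded = fold_stacks(samples)
print(folded[0])                       # widest span: the most-sampled stack
print(self_share(samples, "var_get"))  # 0.6 -> 60% of samples inside var_get
```

The widest boxes in the rendered graph correspond exactly to the highest counts in this folded output.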
B
We have 10 services, 10 routes, and no plugins, so basically it's all Kong itself. As you can see here, it's kind of obvious that this var getter/setter is taking a lot of time, almost 20 percent, and you can actually see it appear in other phases as well.
B
So
yeah.
That's
that's
how
we
get
to
the
idea
on
what?
What
part
should
we
optimize
most,
but,
of
course
the
the
part
it
takes
most
does
not
mean
it's
always
slow.
It's
probably
also
doing
some
business
logics
and
you
cannot
always
optimize,
but
we
decided
to
take
a
look
on.
How
much
can
we
improve
on
this
part?
Okay,
so
so,
in
order
to
to
optimize
it,
we
first
have
to
understand
how
it
works.
B
Okay, so let's first take a look at what the variable is: what the var getter and setter are, and what is being get and set. Calling ngx.var is an OpenResty API for accessing a so-called nginx variable.
B
You can use the same syntax to set its value: for example, if you write ngx.var.kong_proxy_mode = "grpc", you are setting this variable to a new value. Okay, so this API is provided by OpenResty; to see how it's being accessed, let's get to the initial part.
B
So
here
we're
patching
this
in
engines
with
the
dot
bar
and
to
redirect
it's
like
a
getter
and
setter
to
a
new
function,
which
here
is
var
get
okay,
let's
see,
let's
search
for
this
function
start
some
sanity
test
and
prepare
the
buff
and
another
other
stuff,
and
then
it
will
call
this
function.
Okay,
this
is
this
is
called
through.
We
call
it
ffi
function,
so
ffi
is
a
special
technique.
B
Okay, so it calls this C function; let's go up and see what it does. This function equals this one, ngx_http_lua_ffi_var_get. This is actually a C function, but we're calling it from the Lua side, so we need to go to the C source code and see what it does inside. Okay, let's search for this function and take a look.
B
This is the usual sanity test. The function also accepts several parameters: the request context, and the name data, which is the key, the variable you're getting.
B
It
appears
in
two
variables,
which
first
one
is
the
on
the
alternative
string
and
also
the
the
length
and
then
you'll
get
you're
getting
the
the
path
and
to
let
the
process
do
anything
and
also
the
value
to
return
okay.
So
if
name
it
goes
to
zero.
This
is
a
special
case.
Let's
go
back
to
here,
so
this
is.
This
is
about
a
different
chord
path,
which
is
the
regis
capture
groups,
we're
not
touching
that
part.
So
we
will
just
ignore
that
here,
as
the
comment
describes.
B
For a normal variable name, like the one we were looking at before, it will hit this path here: it ultimately calls the nginx hash function to calculate the key's hash for this hash table.
B
So we're passing in a buffer for the calculation and the input name, the key itself. Then we call this function, ngx_http_get_variable, which takes the hash and returns the value. Okay, this is how it works in the current world, before we optimized it. It looks very straightforward, and normally you wouldn't see any sign of why it's slow.
B
But why should we do it a different way? Let's hold that thought for a second and actually take a look at what other modules are doing. Let's take a simple module, for example: the memcached module, which is part of the official nginx release. This module creates a new variable called $memcached_key.
B
Is
it
defining
this
string
and
let's
see
how
it's
being
used
here
in
this
module?
Okay,
so
in
this
function
we
call
it
memcache
path,
which
is
a
directive
handler
if
you
search
above
okay,
so
this
is
a
this.
Is
the
handler
for
this
memcached
pass
directive
in
the
enginex
conf?
B
It
lets
me
automatically
call
this
function
with
appropriate
configuration.
You
have
this
com,
a
general
conf,
then
you
notice
this
line,
seven
to
eight,
because
ngsb
get
variable
index.
Okay!
So-
and
it's
passing
this,
the
the
stream
retro
is
storing
this
index
into
its
module
context.
B
Okay, so then in this function, memcached create request, which, as the name suggests, is called for every request that comes in, when nginx creates the request: you'll notice at line 250 it's using a different function, called ngx_http_get_indexed_variable. So if we go back, there's the difference between indexed and non-indexed. In the OpenResty part we're doing ngx_http_get_variable, where we pass in the hash and actually do a hash table lookup, while here we're just passing the index.
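The contrast between the two lookup paths can be sketched as follows: the non-indexed path hashes the name and probes a table on every request, while the indexed path resolves the name to a slot once at config time and then does a plain array read per request. The names and structures below are illustrative, not nginx's actual ones:

```python
# Config time: nginx knows the full variable list when the conf loads.
VARIABLE_NAMES = ["request_uri", "memcached_key", "kong_proxy_mode"]

def get_variable_index(name):
    """Config-time resolution, in the spirit of ngx_http_get_variable_index:
    done once at startup, so its cost does not matter per request."""
    return VARIABLE_NAMES.index(name)

class Request:
    def __init__(self, values):
        # By-name table (hash lookup) and by-slot array (indexed access).
        self.by_name = values
        self.slots = [values[n] for n in VARIABLE_NAMES]

    def get_variable(self, name):
        """Per-request shape of ngx_http_get_variable: hash the name
        and probe the table on every single call."""
        return self.by_name[name]        # dict access = hash lookup

    def get_indexed_variable(self, index):
        """Per-request shape of ngx_http_get_indexed_variable:
        a direct array read, no hashing at all."""
        return self.slots[index]

idx = get_variable_index("memcached_key")   # resolved once, up front
req = Request({"request_uri": "/a", "memcached_key": "k1",
               "kong_proxy_mode": "http"})
print(req.get_variable("memcached_key"))    # hashed on every call
print(req.get_indexed_variable(idx))        # same value, cheaper path
```

Both paths return the same value; the indexed one simply moves the name resolution out of the per-request hot path.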
B
So now another question comes: why did OpenResty choose the other way, and not this indexed way of doing things? Okay, so let's search for what other modules use this function. Whether it's fast or slow, if it appears in the nginx API, then it has a reason; otherwise it would just get deleted. There's no reason to have two functions with different APIs that do exactly the same thing.
Okay, so if you search for which modules are using this function, apart from the part that defines the variable itself, we find it used in the perl module and also the SSI module.
B
If you read the documentation of the SSI module and the perl module, you will notice that both of them allow you to do dynamic scripting. The perl module, of course, allows you to run Perl scripts; it's similar to OpenResty, but its main programming language is Perl instead of Lua, and it also allows you to get and set variables. And the server side include (SSI) module has its own simple DSL for writing a script that lets you execute certain logic. So let's also search for ngx_http_get_indexed_variable; okay, this one appears in more modules.
B
Okay, so now let's search for how the index is obtained. If you search for the callers, you'll see where ngx_http_get_variable_index is being called.
B
It's
actually
been
calling
the
configuration
phase.
So
now
the
answer
becomes
kind
of
clear.
So
as
long
as
a
variable
can
be
deterministic
during
the
conflict
time,
this
config
time
will
with
referring
to
the
engine
sort
of
conf
time.
So
when
you
load
this
configuration
into
into
internet
jigs,
this
does
not
include
any
dynamic
problem
part.
B
During request time, you can then use the index to access the variable. However, you may not know in advance which variable you will need; this especially happens when you are accepting a script to execute during the request, because then you will not know which variables are needed until the request itself.
B
Now that we understand this part, let's get to the next section: how do we improve the performance of this var get and set API?
B
Okay, so we built a new function into our existing module, the lua-kong-nginx-module. It does some patches and also provides new APIs for use in Kong.
B
Like
we
have
for
the
rsd
core,
we
are
also
having
a
patch
after
engine.
ngx.bar
table,
to
redirect
its
car
to
our
new
function.
So
this
is
a
getter
of
this
meta
table
and
it's
redirected
to
broadcast
by
index
before
we
search
for
this
part
and
jump.
Let's.
B
Okay, so here we now use the new API we found, the indexed variable access function: we just pass in the index to get the variable value back. It's the same for set, but set is more complex because you have to handle non-existence and other special cases, so we'll not cover that part in this meetup video.
B
So
now
you're
going
to
ask
a
question
about
how
do
we
know
the
index
of
a
variable
right,
so
we
are
having
two
different
functions
and
the
first
one
is
to
tell
nginx
to
index
a
variable.
You
you
specify
okay,
so
the
load
for
index
is
a
handler
for
a
new
directive
called
low
var
index,
lower
cone
lower
index.
So
if
you,
if
you
type
this
one
and
also,
for
example,
com
proxy
mode,
then
it
tells
tells
injects
to
index
this
variable.
B
Okay.
So
if
we
search
for
the,
if
you
go
back
to
the
function
itself-
okay,
so
it
it
costs
the
gut
variable
index
as
we
see
in
the
memcache
value,
memcache
module
and
it
returns
this
index,
but
we're
not
populating
this
to
the
color.
B
The
value
into
this
index
and
if
it's
being
already
found,
then
it
will
just
return
this
value.
Let's
return
this
index,
okay,
so
so
we're
actually
doing
this.
So
this
this
function,
just
for
here,
which
I
was
just
using
using
this
logic,
to
tell
engines
to
index
a
variable,
but
we're
not
getting
the
result
back.
So
we
have
a
different
function,
called
loading
indexes.
This
is
a
ffi
interface.
B
We
will
use
in
the
lower
part
and
for
this
function
we
are
returning
all
the
indexes
of
the
names
we
want
for
for
the
for
the
variables,
so
it
actually
just
iterates
over
the
the
index
as
every
index
and
gives
you
back
the
names.
So
it
returns
the
names
in
order,
so
you
don't
need
to
instead
of
returning
the
maps
and
itself.
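That "names in index order" trick can be sketched like this: rather than marshalling a name-to-index map across the FFI boundary, the C side hands back only the names ordered by index, and the Lua side rebuilds the map because a name's position is its index. A Python illustration (the function names and variable list below are invented for the sketch):

```python
def load_var_indexes(indexed_names):
    """Stand-in for the FFI call: the C side walks its index array
    and returns just the names, in index order."""
    return list(indexed_names)

def build_index_map(names_in_order):
    """Caller-side reconstruction: position == index, so a flat list
    is enough to rebuild the full name -> index mapping."""
    return {name: i for i, name in enumerate(names_in_order)}

# Hypothetical variables that were indexed at config time.
names = load_var_indexes(["kong_proxy_mode", "upstream_host", "request_uri"])
index_of = build_index_map(names)
print(index_of["kong_proxy_mode"])  # appears first, so its index is 0
print(index_of["request_uri"])      # third in the array, index 2
```

Returning a flat ordered list keeps the FFI boundary simple: only an array of strings crosses it, never a structured map.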
B
So, for example, if kong_proxy_mode appears first in that array, then its index is zero. It's a small, smart trick we're doing here. Okay, I guess that's all the parts we have, so let's also show the results after all of this was deployed into Kong. We have this optimization in 2.6; let's compare it to 2.5. If you focus on the kong access phase: previously, the var get and set part was this wide, see, I'm pointing at kong access here.
B
If you go back to this part, though, with the indexed access there's still a metatable lookup happening here. So if you look at this flame graph, you will notice
something happening around the index lookup, like this one. So we can further optimize these two APIs by providing a more direct API call: for example, instead of writing ngx.var.something = value, we could call a direct function with the variable name.
B
A
this
up
of
course
changed
the
original
interface
compared
to
the
princeton
one.
So
we're
not
pressing
that
pushing
that
in
the
first
release,
but
this
will
get
rid
of
get.
This
will
get
us
rid
of
the
the
meta
table,
lookup
and
so
just
converts
into
a
direct
function
call
and
what
invokes
the
ffi
interface?
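The remaining overhead being discussed is the attribute-dispatch hop itself: ngx.var.foo goes through a dynamic metatable hook before reaching the FFI lookup, whereas a direct function skips that hop entirely. A Python sketch of the two call shapes (the FFI stub and names are illustrative, not the real API):

```python
_store = {"kong_proxy_mode": "grpc"}

def ffi_var_get_by_index(index):
    """Stand-in for the underlying FFI lookup by slot index."""
    return list(_store.values())[index]

class VarMetatable:
    """Current API shape: ngx.var.<name> pays an extra dynamic-dispatch
    hop (here __getattr__, in Lua the metatable) before the FFI call."""
    _indexes = {"kong_proxy_mode": 0}

    def __getattr__(self, name):
        return ffi_var_get_by_index(self._indexes[name])

def var_get(name):
    """Proposed direct shape: one plain function call straight into
    the FFI lookup, with no metatable/attribute hook in between."""
    return ffi_var_get_by_index(VarMetatable._indexes[name])

var = VarMetatable()
print(var.kong_proxy_mode)         # metatable-style access: two hops
print(var_get("kong_proxy_mode"))  # direct access: same result, one hop
```

Both return the same value; the proposed API only removes the dispatch hop, which is why it helps the hot path without changing semantics.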
B
Okay, I guess that's all of it. Thanks, everyone, for watching.
A
Thank you so much, Wangchong, for that great presentation; I hope you all enjoyed it. Remember that we have these user calls once a month, on the second Tuesday of every month. Next month we will be talking about the new KIC, or Kubernetes Ingress Controller, release. We hope to see you there. Have a great day, and see you soon.