Description
Since the general availability of Kong Gateway (OSS) version 2.5, we have featured a new Performance Testing Framework. In this session, we'll showcase an example test used for Gateway development, detail how a Kong developer can get an environment set up to use the framework, and walk through extending the framework to match your testing needs.
Kong’s User Calls are a place to learn about technologies within the Kong open source ecosystem. This interactive forum will give you the chance to ask our engineers questions and get ramped up on information relevant to your Kong journey.
A
We're going to go ahead and get started. Thank you to everyone for joining us today. I'm Michael Heap, the director of developer relations here at Kong, and I'd like to welcome you to today's online meetup, where we're going to be talking about the new performance testing framework from the Kong Gateway 2.5 release.
A
Now, for those of you that are new here, we take the first part of the meeting to walk through a short presentation and a demo. Paul will be doing the presentation and Wangchong will be doing the demo, and then we're going to open it up for a Q&A and discussion, at which point you'll be able to unmute and turn on your video.
A
But if anything comes up while you're watching, feel free to drop your questions in the chat tab at the bottom of your screen; we'll make sure we get to those when we reach the Q&A section of the call. With that, I'm going to hand it over to Paul to kick us off.
B
Thanks, Michael. Hello, everyone, and welcome. I just wanted to say thanks for taking the time out of your busy schedules today to be with us here on the community call; we really appreciate you being here. My name is Paul Fisher and I serve as the product manager for the Kong Gateway, and joining me today for a technical walkthrough is Wangchong, an engineer on our gateway team. So with that, let's jump right into the performance testing framework.
B
I want to make this a little bit interactive to start, and we'll see how successful it is. Maybe by a show of Zoom hands (I believe there's a button at the bottom of your bar to raise your hand), I want to poll the audience to see how many of you have engaged in some type of capacity planning: maybe to help you understand how many Kong nodes are required for a migration you're doing internally, or to gather some type of infrastructure information around concurrency.
B
So, what happens if I have X amount of users trying to make X amount of calls per second? At what point does the gateway, or your API, start to break down?
B
So click maybe the thumbs-up or the raise-hand button; I'm just looking through the audience here to see if you've run into these types of scenarios. I've got some hearts, I've got some hand raises.
B
Thanks for that little survey. So, in version 2.5 of our gateway, we've been incubating a new performance testing framework, which we're excited to show you today.
B
It provides an efficient way of carrying out performance benchmarks on the Kong Gateway. On our side of the house, in the Kong Gateway GitHub repository, we've integrated this performance testing framework with GitHub Actions to help us, from a maintenance perspective, understand how performance changes all the way down to the PR level, and you'll see here on the slide a sample output from our own internal tests.
B
So with every PR we can trigger this performance testing framework to show the latency or requests per second all the way down to the PR, and this is a really powerful tool for us internally, because maintaining the open source gateway requires a constant trade-off between performance and delivering really rich feature sets.
B
With this performance testing framework in place, our engineering team has had the ability to start plotting these performance trends over time, so that we're ensuring our gateway maintains the high performance benchmark that has come to be expected by our community, and this is maintained with every commit.
B
This performance testing framework has a test that we built out of the box to use internally, and we're continuing to evolve the framework and think about benchmarks and tests that may serve a broad range of our user group. I also want to mention that all the resources we'll show today will be appended to our existing doc article on the performance testing framework, so you can get it set up and running quickly and help provide feedback.
B
So with that, over to the demo.
D
Cool, hi everyone. Let me stop sharing, yeah. My name is Wangchong and I work at Kong, and today I'd like to share a session with all of you to actually get hands-on with the performance testing framework. As we're trying to be as flexible as possible, so you can get going with these tools as fast as you can, we're choosing the Docker approach here. Let me actually share my screen first.
D
So if you have your laptop around, you can just follow my commands and work through this pretty quickly; it's just a few commands. We'll be using the Kong Docker image, and since we are using the Docker approach here, we're also going to mount the Docker socket into the container. I'll paste this command in the Zoom chat, so you can just copy and paste it.
D
We're also running as the root user, since we need to access the Docker socket. You don't want to do that in production, because in this image the default user, nobody, is supposed to be more secure.
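A rough sketch of the kind of command being pasted into the chat here, assuming the official Kong image; the exact image tag and shell are illustrative, not the ones used in the demo:

```shell
# Start a Kong container as root (needed to talk to the Docker socket),
# mounting the host's Docker socket so the tests can spawn sibling containers.
# The image tag is illustrative; use the release you want to test.
docker run -it --user root \
  -v /var/run/docker.sock:/var/run/docker.sock \
  kong:2.5.0 /bin/sh
```

Mounting the socket hands the container full control of the host's Docker daemon, which is exactly why this setup is for local experimentation only.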
D
The next thing we're going to do is install several dependencies for development. I'll be installing the Docker CLI and also some headers and the compilers, to make the tests we're going to run happy. This takes a few seconds.
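The transcript doesn't show the exact package list; assuming an Alpine-based image, the install step might look something like this (the package names are assumptions, with apt-get equivalents on Debian/Ubuntu-based images):

```shell
# Docker CLI so the tests can drive sibling containers, plus a C toolchain
# and kernel headers so test dependencies that compile native code build cleanly.
apk add --no-cache docker-cli build-base linux-headers
```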
D
While this is running, I can describe the next step. In Kong, we are using a tool called busted.
D
It's a Lua testing framework, and we're using it for unit tests and also end-to-end tests inside Kong, and the performance testing framework is also based on busted. So, for the next step, we're going to install the busted framework in our container because, as I just described, it's a development dependency and it's not included in the Kong image we ship.
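Since busted is distributed as a LuaRocks rock, the install is a one-liner, assuming LuaRocks is available inside the container:

```shell
# busted is a development-only dependency, so it isn't in the shipped image;
# install it through LuaRocks.
luarocks install busted
```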
D
All right, that was lighter than the headers we saw earlier. Now that we have all the dependencies ready, we're actually going to run this. I have a template test here, not on the main branch but in a separate branch.
D
Unlike the files you'd see in a released container, this checkout also includes a lot of development things, like the rockspec and related files, and the spec and t directories for the software tests, which aren't included in the shipped image. The command you want to run for any busted test is the busted binary.
D
We have it at bin/busted, and then you give it the file you want to run. For example, at Kong we can run a unit test, say the rockspec spec test, and this is what you'd expect from running a test. That was just a general unit test to start with, but the steps for the performance tests will be very similar: you just change the path to the 04-perf directory, which includes all the performance tests we're showing in this meetup.
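Put together, the two invocations described above might look like the following; both spec paths are illustrative, not the exact files used in the demo:

```shell
# A general unit test first, to confirm busted and the checkout work:
bin/busted spec/01-unit/01-db/01-schema/01-schema_spec.lua

# Then the same binary pointed at the performance suite:
bin/busted spec/04-perf/01-rps/01-simple_spec.lua
```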
D
Every test you run here is actually a Lua file, as the extension name suggests. So we're going to set up this performance testing framework, like the environment; because it's testing, you expect a lot of logs as feedback, and then, of course, we use the Docker driver here.
D
You have this describe block here, and this busted setup is a function in Kong to do the setup call; you get a helper back from the setup to manipulate the Kong entities, and then you set up some services and routes. The test we are running here has two separate routes pointing to the same service, where the service is something the framework already set up for you: it's just an nginx upstream.
D
So the performance of the upstream shouldn't be the bottleneck of your test. Then we're also going to have a plugin called correlation-id; it's just a simple plugin that injects certain headers into the response. It won't be attached to route 1; it will be attached to route 2, so we can compare.
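A sketch of the setup half of such a spec file. The helper names below (`perf.use_driver`, `perf.setup`, the blueprint-style entity helpers) follow what's described in this walkthrough, but treat them as assumptions to be checked against the framework docs:

```lua
-- spec/04-perf/... (illustrative path)
local perf = require("spec.helpers.perf")

perf.use_driver("docker")  -- the driver used in this demo

describe("perf: correlation-id plugin overhead", function()
  lazy_setup(function()
    -- setup() returns a helper for creating Kong entities
    local bp = perf.setup()

    -- One service backed by the framework's built-in nginx upstream,
    -- reachable through two routes.
    local service = bp.services:insert({})
    bp.routes:insert({ service = service, paths = { "/route1" } })
    local route2 = bp.routes:insert({ service = service, paths = { "/route2" } })

    -- Plugin only on route 2, so route 1 stays the plugin-free baseline.
    bp.plugins:insert({ name = "correlation-id", route = route2 })

    perf.start_kong()
  end)

  lazy_teardown(function()
    perf.stop_kong()
    perf.teardown()
  end)
end)
```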
D
That lets us compare the performance difference with and without the plugin. So let's see the results here.
D
For each section of the test you'll see the results coming out; we're using a tool called wrk to show you the RPS, the requests per second, and also the latency.
D
So this is without the plugin, and this is with it, and you can see there's a slight difference. Actually, the difference here may not mean that much, because I'm running everything inside Docker and I'm also running Zoom, so my computer is overloaded. But you get the basic idea: there's a slight performance difference with the correlation-id plugin enabled. So let's go back to this test.
D
This part is just for starting Kong and tearing it down, so for other tests you can mostly copy and paste it. This is the section that actually makes the difference you see in the results: it's just another describe block, and in this block you can start a load with five threads and 1000 connections, lasting for 30 seconds, and it will hit route 1.
D
Route 1 doesn't have the plugin, and then we get the result. In the next section we move on to route 2, which does have the plugin, and it's the same thing: you wait for the result and print it out. That's the basic idea. Although this is a very simplified test, there's much more you can do with it.
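The load sections described here could be sketched as two more blocks inside the same describe; `start_load`, `wait_result`, and `combine_results` follow the framework's helper naming, but the exact option names are assumptions:

```lua
-- Baseline: route 1, no plugin attached.
it("route without plugin", function()
  perf.start_load({ path = "/route1", threads = 5, connections = 1000, duration = 30 })
  local result = perf.wait_result()
  print("without plugin:\n" .. perf.combine_results({ result }))
end)

-- Same load profile against route 2, which has correlation-id enabled.
it("route with correlation-id plugin", function()
  perf.start_load({ path = "/route2", threads = 5, connections = 1000, duration = 30 })
  local result = perf.wait_result()
  print("with plugin:\n" .. perf.combine_results({ result }))
end)
```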
D
For example, you can test between different versions. When you start Kong in the test, you can actually specify a version; it can either be a Git tag or a Docker image, or anything you want to use to distinguish between different versions. So you can test between, say, different custom images or different commits you've made, to compare them.
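One hedged way to structure that comparison is to loop the whole suite over a list of version identifiers and pass each one to `start_kong`; whether a given string is treated as a Git tag or an image tag depends on the driver:

```lua
-- Each entry can be a Git tag/branch or a Docker image tag,
-- depending on the driver in use.
local versions = { "2.4.0", "2.5.0" }

for _, version in ipairs(versions) do
  describe("perf on Kong " .. version, function()
    lazy_setup(function()
      local bp = perf.setup()
      -- ... create the service, routes, and plugin as before ...
      perf.start_kong(version)
    end)
    -- ... the same load sections as before ...
  end)
end
```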
D
Then, for actual capacity planning, you don't want to use the Docker driver, because Docker is all running on your local machine, and results will be very flaky based on the workload you're having on your laptop. So we also provide some other drivers. We provide a local driver, for when you don't want to run inside Docker, so you can get maybe some network performance boost by not using the bridge network; and there's also another driver called terraform.
D
This is the driver you'd choose for capacity planning. It will invoke the Terraform tool from HashiCorp, and it will spin up the infrastructure, like a real EC2 instance, a bare-metal instance, or a Google Cloud instance, and it will actually run Kong inside and test the RPS and latency based on the spec you select. And because of the flexibility here, you can also add as many infrastructure details as you want.
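Switching drivers is a matter of changing the `use_driver` call; the provider and variable names below are illustrative of a bare-metal setup, not an exact recipe:

```lua
-- Provision real machines through Terraform instead of local containers.
perf.use_driver("terraform", {
  provider = "equinix-metal",  -- or an AWS/GCP provider module
  tfvars = {
    -- provider-specific credentials and instance sizing (illustrative names)
    metal_project_id = os.getenv("PERF_TEST_METAL_PROJECT_ID"),
    metal_auth_token = os.getenv("PERF_TEST_METAL_AUTH_TOKEN"),
  },
})
```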
D
If you want to have an ELB, with your ASG and your EC2 instances, that's all supported; you just describe that much. I guess that's it, and hopefully the docs describe the difference between the drivers and how you'd want to select one.
A
Yeah, that was really interesting. I've been copying and pasting all the things from the chat that you shared, and I think my project this evening is going to be building a Docker image with all of that pre-installed.
D
Yeah, definitely; that's actually exactly what we plan to do in the future. We're going to maybe provide a Dockerfile, or just an image based on every release of Kong. It would be just for development purposes, not for running Kong in production; you could use the image to run any performance tests, or even the normal end-to-end tests.
A
So, thanks for showing us. We've actually got some time for questions now, so if anyone's got any they'd like to ask, feel free to unmute and ask, or you can drop them in the chat and I will read them out for you.
A
D
Yeah, that's actually a good question. There are actually two different ways of talking about versions here. As you can see, we were cloning the Kong repository and also specifying a version to run the test against. When you clone the repository, you have to use a Kong branch newer than 2.5, but that doesn't mean you can only test Kong versions newer than 2.5.
B
One thing I'm interested in learning from everybody is: we have this example latency and requests-per-second test as part of our meetup prep, but I'm curious to learn what other kinds of benchmarks are most impactful to the audience here in solving your challenges internally, whether it be costing, or concurrency, or any kind of information around benchmarking that's going to help you most. We could potentially build some out-of-the-box tests for it.
D
Yeah, that's a good point. And from the other side, you'd also want to know how the number of routes and services, and the number of upstreams, affect performance.
A
So, is there anything that anyone on the call would like to test? Whether you'd do it yourself, or whether it's an idea for us to pick up and write the implementation for you: what would you like to measure the performance of? We've got routes, services...
C
Hi guys. Generally, when we run performance tests at Dream11, what we check is the latency that a plugin we have written introduces.
C
That's one thing that we check very closely. What generally happens is, when you start writing your own plugins, as the layers of plugins keep adding on, the latency increases, but we always keep a check on what the trade-off is and how much the latency is increasing. I think this is already there in this performance testing framework.
C
That's something we can already measure, so yeah, that is one thing that we generally check.
A
Thanks for the insight; interesting that it's primarily plugins that you write yourselves. And there's a message from Shivag as well about mostly the plugin overheads. Can you tell us more about that? Are those in-house plugins, or are you evaluating the ones that Kong and the community provide?
D
Yeah, that's a good case, and actually I've been asked this question by our colleagues as well. So the first answer is no for now, but this is something we are planning to do, because with Terraform you have the Kubernetes provider, and you can bring up the cluster and also the workloads inside the Kubernetes cluster, and from the top level you'd just see this as a Kong service.
D
It would be transparent to the framework; it just depends on how you implement it using the Terraform driver. You could also spin up this workload on something like ECS, or a hosted Kubernetes solution, and also maybe on your local minikube.
D
And also, actually, you can spin up Kong itself using the Terraform driver, rather than the Docker driver which we are demoing today; with that, you could run Kong inside Kubernetes and see how it performs with regard to your infrastructure setup. This is not supported right now, but it would be very easy to implement, both on our side and on the community side.
A
All right, in which case we're going to wrap it up. Thank you for joining us today, everyone; thank you, Wangchong, for the demo, and Paul for the presentation. Just want to remind you all that our next call is on September the 14th, and we hope to see you there. Have a great day, morning, afternoon, or evening, everyone. Cheers.