Description
In this video, our Sr. Test Automation Engineer Sanad Liaquat gives a demo of how we use Artillery.io to generate load for performance tests.
A: There is an existing rake task, a task that adds some data to whatever environment you point it to, and that rake task creates a file with some URLs. The task that I have added makes use of those URLs and hits those endpoints using Artillery. Artillery.io is basically a tool written in Node for load testing. So, I'll share my screen here.
A: I believe so. I have not gone into the reporting part of it yet; until now I have implemented it using the CLI, and at the end, on the CLI, it gives you a report such as this. But I heard Ramiro mention that there is also a feature for providing a graphical representation of the report itself. I haven't looked into that yet, but this is what I have right now. If I look at the test, even the test I have is very basic right now.
A: This is what I have. I have just two scenarios: in one of the scenarios I hit a large issue URL, and in the other scenario I hit a large merge request. I have not assigned weights to these scenarios, which means that each will be hit approximately 50% of the time.
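A minimal Artillery scenario definition along these lines might look like the following sketch; the target and URL paths are illustrative placeholders, not the actual endpoints from the demo. With no explicit `weight` set, Artillery distributes arriving virtual users evenly across scenarios, which gives the roughly 50/50 split described above.

```yaml
config:
  target: "http://localhost:3000"  # environment under test (placeholder)

scenarios:
  # No `weight` on either scenario, so each is chosen with equal
  # probability: roughly 50% of virtual users per scenario.
  - name: "Large issue"
    flow:
      - get:
          url: "/root/sample-project/issues/1"  # placeholder path
  - name: "Large merge request"
    flow:
      - get:
          url: "/root/sample-project/merge_requests/1"  # placeholder path
```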
A: Then there are phases, as you can see over here. The first phase runs for 60 seconds, and the arrival rate is just one. This is not final yet; it is good for my local environment, but for the testbed that we are waiting to be set up, this will of course be a higher number. An arrival rate of one is just one user per second, which is hardly practical for load testing, but on my local environment it is just for making sure things are working.
A: I have kept it like that. So the warm-up phase is at one user per second, and then for the next two minutes (120 seconds) it would start from 1 and ramp up to 50 users per second, and then for the next 60 seconds it would stay at an arrival rate of 50 users per second. And these would be the two scenarios. The URLs over here are taken from the output of the task I mentioned earlier.
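The load profile described here would go in the config's `phases` section. This is a sketch assuming Artillery's standard phase options (`duration`, `arrivalRate`, `rampTo`), with the numbers taken from the description above; the target is a placeholder.

```yaml
config:
  target: "http://localhost:3000"  # placeholder
  phases:
    - duration: 60     # warm-up: 60 seconds
      arrivalRate: 1   # one new virtual user per second
    - duration: 120    # next two minutes: ramp up
      arrivalRate: 1
      rampTo: 50       # from 1 up to 50 users per second
    - duration: 60     # sustained load
      arrivalRate: 50  # hold at 50 users per second
```

Running `artillery run load_test.yml` against a config like this prints the summary report on the CLI at the end of the run.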
A: So this is still a work in progress. I still have to look at the reports, and I think what is left here is looking into the graphical user interface of the HTML report, and making sure that the test itself is good for the testbed. Right now it is just one user per second, which is not practical enough for load testing.
A: We will compare the results with the baseline. Basically, the baseline would probably be a previous run, and these tests would be run against a testbed that is currently in the works. We don't have that yet, but that's the idea.
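One way that baseline comparison could be sketched: `artillery run -o report.json ...` writes a JSON report, and a small script can diff a headline metric such as p95 latency between two runs. The report shape assumed below (`aggregate.latency.p95`) matches older Artillery JSON output and should be checked against the version in use; the helper name and the 10% threshold are hypothetical.

```python
# Hypothetical baseline comparison for Artillery JSON reports.
# Assumes the report exposes aggregate.latency.p95 (check your
# Artillery version); load real report files with json.load()
# instead of the inline sample dicts used below.

def regressed(baseline: dict, current: dict, tolerance: float = 0.10) -> bool:
    """True if the current run's p95 latency is more than `tolerance`
    (e.g. 10%) worse than the baseline run's p95 latency."""
    base_p95 = baseline["aggregate"]["latency"]["p95"]
    curr_p95 = current["aggregate"]["latency"]["p95"]
    return curr_p95 > base_p95 * (1 + tolerance)

# Inline samples standing in for two report files:
baseline_report = {"aggregate": {"latency": {"p95": 200.0}}}
current_report = {"aggregate": {"latency": {"p95": 230.0}}}
print(regressed(baseline_report, current_report))  # 230 > 220, so True
```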
We will deploy to it, though I'm not sure with which release, and I'm not exactly sure how often we will be deploying to that testbed.
B: It makes sense. We would have to see how much it takes to generalize the data in order to make the comparisons, because it seems that's part of the job. That's my experience with performance testing: the easiest part is the automation; the hardest part is the analysis, right?
B: I went to a conference, and I actually think this talk is recorded; I'll try to find it. It's from GoDaddy: a library that they open-sourced in 2017 that you can basically integrate into end-to-end tests written with Selenium or something like that, to generate performance tests for the front end. And then it would give you a thumbs up or thumbs down in your pipeline, depending on the performance in the front end.