From YouTube: GitLab Load Testing Tool Overview
Description
Grant Young, Senior Software Engineer in Test, Enablement, presents an overview of the GitLab Performance Tool (GPT).
A: Hello, all you wonderful people, and thank you for joining us for the Customer Success skills exchange. We have the pleasure of having Grant Young with us today. We're going to talk about the GitLab load testing tool: how to prep it, how to run it, all of those good things. I'll put the agenda in the chat and give the stage to Grant.
B: Thanks so much. Yeah, that's pretty much what we're going to do today. I've prepared some slides for us to go through briefly. I'll try to keep it high level and not go into the details, because we do have documentation as well, but we'll try to keep a lot of time for questions at the end, in case people want to really deep dive. But yeah, let's get into it.
B: So what we're going to cover today is, as was said: what is the GPT, which we call the GitLab Performance Tool, and how do you set it up and run it, essentially. As the next slide will show, the first thing we'll see is that the tool is something we've designed over the last year to run performance tests against GitLab. I'll try to make it as clear as possible.
B: The idea is that it is primarily designed to run against a GitLab environment, but what it's really designed to test is GitLab the application, in a lab-like setting. I won't go deep into it here, but what that means is that we set GitLab up in a clean environment, where there are no other people using it and nothing else that could disrupt the results, and then we run the tests against it. If we get any bad results back, we know, or can be confident, that the problem is in the actual code.
B: It's the actual application that needs to be fixed. This is as opposed to GitLab.com, where you get a lot of performance data from real usage; that's called field data, which is also very important, but GPT is the lab side. By proxy, though, it can also be used to test an environment, as it effectively tests that the environment itself is good enough to handle the throughput we expect and is working correctly.
B: So it is a little bit functional in that regard as well; it's kind of a dual-purpose tool, as you can see. But before we get into the tool completely: the vast majority of the effort in performance testing isn't actually running the tests, it's preparing for them. As this very detailed graphic will show you, over half the battle, I'd say, is actually preparing the environment and preparing the test data, and then there's the bit where you actually design and run the tests themselves.
B: Testing GitLab is a very symbiotic situation: we were testing GitLab with the tool, making sure the tool was working, and at the same time tweaking the environment setup to be the right size, with the right specs, to handle the throughput we expect; and out of that came the reference architectures. So it's a very interdependent situation. We need a good environment that can handle the throughput, and we need the tool to run against it, so we built both in tandem, essentially.
B: The other half of the battle, so to speak, is test data. Once you've got the environment set up, you'll obviously have nothing in it, and test data is very difficult to get right, because you can do it in so many different ways. From experience, you really want test data that is as realistic as possible, and the most realistic thing you can get is actual real projects. So we initially started testing with the actual GitLab project itself.
B: It's a sanitized version that we keep as a backup, and through that we're testing realistically, I guess: we're hitting various endpoints that return real data, such as merge requests and issues, that kind of thing. We've also built on top of that and started expanding a bit more, to try to cover different aspects of GitLab that we want to test. Among these are projects and groups, which are a level above projects, so we've built a script recently that helps to automate this whole process.
B: That's the goal: you just run it, I guess with your variables set, and it should import all the projects it needs. It should also set up all the groups and projects it needs, all under one big group, and then GPT will be able to run against that, and it should be grand. The idea is that it's always the same data on every environment that we test against, so we can compare like for like. We know the test data, we know the tests; it's a completely controlled environment, essentially.
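For a rough idea of what that setup script automates, here is a minimal sketch using the python-gitlab library. The URL, token, group name, and export file are hypothetical, and GPT's real data generator (in the gitlab-org/quality/performance project) is considerably more involved:

```python
import gitlab

# Hypothetical values -- substitute your own environment URL and token.
gl = gitlab.Gitlab("https://gitlab.example.com", private_token="<token>")

# Create one top-level group to hold all of the test data.
root_group = gl.groups.create({"name": "gpt", "path": "gpt"})

# Import a previously exported project tarball (e.g. the sanitized
# GitLab project export mentioned above) into that group.
with open("gitlabhq_export.tar.gz", "rb") as export:
    result = gl.projects.import_project(
        export,
        path="gitlabhq",
        namespace=root_group.full_path,
    )
print("Import started:", result)
```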
B: And then the actual tests, and actually running them, come into play. Over the last year we've tried to build tests to cover the various endpoints within GitLab. These include API, web, and Git. To call out the web tests specifically: we're not actually testing browser performance with this tool, we're testing server performance, because even a web page still needs to hit the server in various ways to pull in its data. We have a separate pipeline, using sitespeed, to test browser performance.
B: Our coverage isn't 100%; it's very difficult to build that up, and it's very difficult to test certain aspects of GitLab in terms of performance, but we're continuing to add to it and expand it as much as we can.
B: We provide the tool in both Docker and native forms: you can run it natively on Linux, or you can run it in Docker if you wish. There's a whole bunch of parameters that you can use to run with it, although these are mostly optional and would only need to be used if you want to do something specific. We also provide various scenario options, as we call them, to run the tests.
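As a minimal sketch of what an invocation might look like (wrapped in Python here for illustration; the image name, flags, and config file names are recalled from the GPT docs and should be checked against the project itself):

```python
import subprocess

# Sketch of running the GPT Docker image; all names below are assumptions.
subprocess.run([
    "docker", "run", "-it",
    "-e", "ACCESS_TOKEN=<your-token>",        # API token for the target environment
    "-v", "/home/user/gpt/config:/config",    # environment and options config files
    "-v", "/home/user/gpt/results:/results",  # result summaries are written here
    "gitlab/gitlab-performance-tool",
    "--environment", "10k.json",              # which GitLab environment to target
    "--options", "60s_200rps.json",           # the scenario options file to use
], check=True)
```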
B: We're always looking to improve this, but at the moment you'd be able to pick a file that says, for example, "run every test for 60 seconds at 200 RPS (requests per second)", which would be the throughput that an environment for about 10,000 users should handle.
B: Another thing to call out is that the tool tests different endpoints differently. For the API we hit the full throughput, because the API endpoints are what we've seen get hit the hardest in the real world, whereas web and Git are comparatively quite a lot lower than the API, but they still get hit quite hard as well.
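The exact per-endpoint ratios live in GPT's options files; purely as an illustration, here is a sketch that derives per-endpoint targets from a user count, assuming the rule of thumb of roughly 20 API RPS per 1,000 users behind the GitLab reference architectures, and (hypothetically) a tenth of that rate for web and Git:

```python
def endpoint_targets(users: int) -> dict[str, float]:
    # Illustrative only: ~20 API RPS per 1,000 users is the rule of thumb
    # behind the reference architectures; the 10% web/git split is an
    # assumption for this sketch -- GPT's options files hold the real numbers.
    api_rps = users / 1000 * 20
    return {"api": api_rps, "web": api_rps * 0.1, "git": api_rps * 0.1}

# A 10,000-user environment comes out at the 200 RPS mentioned above.
print(endpoint_targets(10_000))  # {'api': 200.0, 'web': 20.0, 'git': 20.0}
```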
B: Once the tests are finished, you'll get a results summary, and on that page we also try to call out that we evaluate the tests against various thresholds. There are two main ones, really: the RPS rate, the requests per second the test was actually able to achieve against the server, and the time-to-first-byte response time, i.e. how long it took the server to respond, for which we measure the 90th percentile.
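As a rough illustration of that second metric (not GPT's own implementation, which drives k6 under the hood; this sketch approximates time to first byte with the response-header timing that Python's requests library exposes):

```python
import statistics
import requests

def ttfb_p90(url: str, samples: int = 20) -> float:
    """Approximate the 90th-percentile time to first byte, in milliseconds.

    requests' `elapsed` measures from sending the request until the response
    headers arrive, which is close to time to first byte.
    """
    timings = []
    for _ in range(samples):
        response = requests.get(url, stream=True)  # stream=True: skip the body
        timings.append(response.elapsed.total_seconds() * 1000)
        response.close()
    return statistics.quantiles(timings, n=10)[-1]  # the 90th-percentile cut

# Hypothetical endpoint; GPT's typical threshold for this metric is 500 ms.
print(ttfb_p90("https://gitlab.example.com/api/v4/projects"))
```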
B: So typically, and it's not quite that here, but typically we aim for each test to come in under 500 milliseconds on that metric. But if we find endpoints that are slower, or not performing up to standard, we adjust the thresholds in the tests. So you'll see that some thresholds are different, but that's fine; we do that intentionally. We then raise issues against those endpoints, get them fixed, hopefully, and then adjust the thresholds back down. We don't just leave them; we adjust them.
B: And then, yeah, over the last year we've made good progress. We've raised about 47 issues so far against GitLab: 25 have been closed and 22 are still open. Endpoints have been fixed either completely or in part. We've seen some really good instances where, for example, the CPU usage for some endpoints was dramatically bad and has been fixed substantially, so the CPU doesn't spike as much.
B: We've seen various endpoints drop seconds off their response times, and there's a lot of credit due to the teams for the work they've done so far and the effort they've put in, because they know that performance is something we all need to tackle.
B: For some of the issues we've set up a system where, because performance is difficult, you know, if an endpoint is coming in at, say, 10 seconds, it's not very likely it can get down to 500 milliseconds in one go. So instead we have a system of stepped thresholds.
B: We say we would like to get down to this, then we raise a new issue for the next threshold, and that helps us chip away and keep constantly improving the performance of GitLab. And then, at the end: some useful links, and then questions. I appreciate that was a whirlwind; I tried to keep it high level and not go into the details, but I'm more than happy to answer questions, and I'll call this out again:
B: The documentation goes into all these details on our project, for you to go through at your own speed, dive in, and see what we've done.