From YouTube: GitLab Runner - Autoscaling: Performance testing tool
Hello, my name is Steve Azzopardi. I'm a backend engineer at GitLab, and I work on the GitLab Runner project.
I wanted to give a small update on the work we have been doing on the Docker Machine replacement. What we have been doing so far is working on a load testing tool. Before we go into what it's doing, I'd like to explain why we want to have a load testing tool in the first place.
The main reason is to validate the autoscaling solutions that we're going to provide. We want to check that they scale as we expect, ideally better than what we have right now. This is also going to give us a consistent way to test them: every solution runs in the same scenario, so we can say, okay, this one is better for scenario X, and so on and so forth. And we'll take this into consideration from a performance perspective and even from a cost perspective.
A bigger, broader cluster might perform really well, but it can cost a lot of money. We'll also be able to provide some metrics to our customers, by basically saying: hey, if you configure your runner cluster like this with autoscaling, it can run 10,000 concurrent jobs, given that the jobs are running these types of scripts.
So I'd like to introduce a project called GitLab Runner stress. This is a group, so it has multiple projects inside of it, and we'll go through each project and the role it plays one by one. The main project is the trigger project.
The trigger project is just a collection of GitLab CI YAML files that trigger multiple applications. Here we're generating the load on our GitLab Runner cluster: it's going to trigger a Go application, a Linux kernel compilation, and a Ruby application as well, which has some Node.js inside of it. It can also trigger benchmarks. We want to test the cluster both from a load perspective and from a performance perspective, so we'll benchmark CPU, I/O, memory, and even the network level, since the network matters when we want to upload artifacts, do Git cloning, and so on. The application projects are, for now, three.
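The trigger setup described here could look roughly like this. This is a minimal sketch: the job names, project paths, and exact `trigger` usage are my own illustrations, not the actual contents of the repository.

```yaml
# Hypothetical .gitlab-ci.yml for the trigger project: each job kicks off
# a downstream pipeline in one of the application projects.
trigger-gitlab-app:
  trigger:
    project: runner-stress/gitlab          # illustrative path: the Ruby/Node.js monolith
    strategy: depend                       # upstream job mirrors the downstream result

trigger-gitlab-runner:
  trigger:
    project: runner-stress/gitlab-runner   # illustrative path: the Go application
    strategy: depend

trigger-linux-kernel:
  trigger:
    project: runner-stress/linux           # illustrative path: the kernel compilation
    strategy: depend
```

With `strategy: depend`, each trigger job waits for its downstream pipeline, which makes it easier to measure how long the whole load run took.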
We can definitely add more, but I wanted to pick some more realistic applications. So: the GitLab application itself, which is a Ruby application with Node.js, one big monolith application, which will give us a good understanding of how monoliths will perform; and a Go application, which is GitLab Runner. Both GitLab and GitLab Runner are running as a frozen fork, meaning that we forked those projects and we will not keep updating them.
The reason for this is, again, so we have a consistent base, and we don't end up seeing any performance improvements or regressions just because we updated these forks. And then there's the Linux kernel compilation, because it's a good, chunky C application that is compiled, and that will stress just the CPU. These are all using multiple GitLab CI features, like services and so on, so we can also test how well it performs at spinning up containers, for example. And they're fairly long pipelines as well.
So we can also check things like: okay, our autoscaling algorithm bursts really well, but then falls down after 20 minutes of running a lot of jobs.
The last project we have is the GitLab Runner stress tool itself. It's a simple CLI application.
So if we go to my terminal and just trigger the benchmarks — give me a second — here we're just saying: hey, run the benchmarks on the gitlab.com instance. It gave us a link to the project, and if we go to it, we can see the benchmarks pipeline running.
After that, we can also use this tool to run the load. For this one, instead of running the benchmark command, we'll run the trigger command. So if we just go here and type trigger, you can also specify the count: for example, let's say we want to trigger 20 pipelines, 100 pipelines, and so on and so forth. I'm just going to put it to 2 for now, and we can see it will trigger two pipelines for us running the applications.
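The terminal session might look roughly like this. This is only a sketch: the binary name, subcommands, and flags are assumptions based on what the demo shows, not the tool's documented interface.

```
# Run the benchmarks against the gitlab.com instance (hypothetical invocation).
$ runner-stress benchmark
# ... prints a link to the project where the benchmarks pipeline is running

# Run the load instead: trigger N pipelines across the application projects.
$ runner-stress trigger --count 2
# ... creates two pipelines running the Go, Ruby, and kernel applications
```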
Let's wait a while... and there we go, we can see the GitLab pipeline being created as well.
After this, we can keep iterating on the Runner stress project. As you might have seen, we're using the gitlab.com instance to trigger this project, which is not ideal, for multiple reasons. One, we're just creating a bunch of load on the production gitlab.com instance, which is not needed. There are also some application limits, in the sense that if we create a bunch of pipelines, those pipelines are going to be considered a failure, because creating too many of them is treated as abusive behavior.
So we want to create an export/import command. The idea is that you have your own GitLab instance, used just for testing, with no application limits, which doesn't affect gitlab.com production systems, and so on and so forth. It will be an easy way to export the group and import it, and then add tooling around pipeline timing, so we can say: okay, this pipeline took 30 minutes instead of 25 minutes, and so on, instead of just clicking around the UI. This gives us an easy way to get timings.
Basically, that's it for now. Thank you so much for taking the time to watch this. Feel free to ping me on the epic or any issue that you see about the autoscaling.