From YouTube: GitLab Performance Tool Discussion
Description
Grant Young answers questions for the customer success team regarding the GitLab Performance Tool.
B: I think I have the first question. You talked about using test data. One of the reasons why we have you here today is that we're looking at implementing a customer health check that TAMs can do, and we want to run the GPT against a customer instance. So, talking about test data: is it okay to use their real-world data?
A: No. The problem is that with performance testing everything needs to be, I'm trying to figure out a way to put this, everything needs to be in a line, in a row, in terms of the setup from start to finish. Everything I've described has to be set up in exactly the same way, as much as possible, or else the scenario has changed, or the test conditions, you could say, have changed. That means the results will change, or they might.
A: For example, if you test against real data, they may have 10,000 more issues than what our test tool has been set up to take, and that's fine; the test tool will still test against that, but it will return a slower result. That's something we're still figuring out the best way to tackle in the future. We mentioned on one of the slides that we try and design the test data to be holistic but large, but there will always be extreme outliers in the real world, and that in itself is still a problem.
A: No customer should be seeing bad performance, really, so that's just a situation we need to figure out in the future. You can still run it against real data, but what we do say in the docs is that we'd still like a run against our own test data as well, so we can compare, if you get what I mean. Running against their data alone just tells us that testing against their data might be slow, whereas we need our own run to tell us how the environment is actually performing.
B: I see what you're saying, because you want an apples-to-apples comparison, but I think folks are definitely going to want, at least for our health check, to run this against their real data, just to see where there might be problems.
B: But as long as we know that using real-world data might skew the results slightly, I think it could still be useful for customers.
A: ...guidance on how to run the GPT with custom data rather than the standard data. But, as I say, if you wanted to come to us and ask why it's performing so badly, that's fine, we're more than happy to discuss it, but then my first ask would be to run it against our data, so we can have that baseline.
A: Because then, if they're seeing really bad performance, for example, and then you run our test and it shows the same bad performance, extrapolated, then at least we have that kind of baseline of: okay, the environment isn't actually specced well enough, or there's something else happening with the environment to slow it down for some reason.
D: Yeah, I sort of asked that tongue in cheek, but I guess I was curious for the group's reaction to that, because, like, for a company that's serving GitLab on metal, that maybe doesn't have the ability to spin up an analogous, production-esque instance in a staging environment... I think most of my customers do, so it's probably a moot point.
A: I mean, yeah, the general process would be to test against a staging or another environment, or, if customers are just doing it as a one-and-done to validate the environment, they can also run it before they go live. If it's the case that they only have one environment, and they're adamant that's the case, then it would probably be possible to run it against that environment during downtime or a quiet time, maybe when people...
D: Like, I can envision this scenario where, you know, we've exhausted all other options and we suspect some kind of flaky hardware problem or something. Hopefully a problem that's well behind us in the age of cloud native, right? But I could see a scenario where you might tell a customer, like, hey, you know, book a...
A: Yeah, as long as everyone is aware, they can still use the instance. In our experience it probably will still work, but obviously in the real world you never know what different environments are out there, so it's just an awareness thing, but yeah, that'd be fine. The thing I didn't call out here, because it's not directly related to GPT, is the metrics. Something we do a lot as well is measuring the metrics of an environment that has the tests run against it, when debugging an environment.
A: This was a full test run earlier today, and this is just the dashboard we've made with the inbuilt Grafana and Prometheus that come with GitLab. This is obviously showing CPU and memory, and we also have various other stats and everything else. Every customer has different monitoring solutions, but this would certainly be...
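
For context, here is a minimal sketch of what pulling such metrics looks like outside of a dashboard, assuming a Prometheus server at a hypothetical address and the standard node_exporter metric `node_cpu_seconds_total`; the metrics actually available will depend on the customer's own monitoring setup:

```python
import requests

# Hypothetical Prometheus address; substitute the instance's own monitoring endpoint.
PROMETHEUS_URL = "http://prometheus.example.com:9090"

def cpu_usage_during_test(start: float, end: float, step: str = "15s"):
    """Query average per-node CPU utilisation over a test window.

    Uses the standard Prometheus HTTP API (/api/v1/query_range) and the
    node_exporter metric node_cpu_seconds_total.
    """
    query = (
        '100 * (1 - avg by (instance) '
        '(rate(node_cpu_seconds_total{mode="idle"}[1m])))'
    )
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query_range",
        params={"query": query, "start": start, "end": end, "step": step},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]["result"]
```

The idea being: run the GPT test, note the window's start and end timestamps, and evaluate the series this returns alongside the test results.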
E: Yeah, and if I missed this, apologies; I'm trying to understand if we need to set up an environment that is completely separate. You know, I hate to use the phrase destructive testing, right, but there's some contamination or alteration of the environment under test, and so we should keep that separate. I'm still on my first cup of coffee, Jamie, if you want to translate for me, sorry.
F: How about I read my next question, actually, because I think, hold on... I think the question you're looking for is exactly what I've written, which is: is the GPT tool designed to be used against real-world customer instances? Could it potentially cause data loss? Is it going to clean up after itself?
A: So the generation of data is a separate script, and we did that intentionally. The data it generates all gets put into its own group, and it's designed to be separated from the rest. The idea is that a customer can then come in, just delete that top-level group, and walk away. Everything else happens in GPT itself: the vast majority of tests only do safe requests, GETs and the like, they're all targeted against that exact test data, and we don't do any deletes against existing data there.
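
As an illustration of how light that cleanup is, a sketch using the real GitLab REST API (`DELETE /api/v4/groups/:id`); the group path used here is hypothetical, since the actual name is whatever GPT's data generator was configured with:

```python
import requests

GITLAB_URL = "https://gitlab.example.com"      # the instance under test
TOKEN = "<personal-access-token>"              # token with owner rights on the group
GROUP_PATH = "gpt"                             # hypothetical top-level test group

# Look up the top-level group the generator created...
group = requests.get(
    f"{GITLAB_URL}/api/v4/groups/{GROUP_PATH}",
    headers={"PRIVATE-TOKEN": TOKEN},
    timeout=30,
)
group.raise_for_status()

# ...and delete it; GitLab removes the group and everything nested under it.
resp = requests.delete(
    f"{GITLAB_URL}/api/v4/groups/{group.json()['id']}",
    headers={"PRIVATE-TOKEN": TOKEN},
    timeout=30,
)
resp.raise_for_status()
```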
A: We do have some more experimental tests in our suites, which we call scenarios, which are meant to be a little bit more realistic, in that, for example, a user creates a new issue. But that again is in its own bubble: it'll go off and create its own project, create its own situation, create the issue, and then delete that after the fact. It will never delete any existing data, no.
A: Don't test against your production, but for a customer that's yet to go live, it would make sense to run the tool against it. We did it with one customer recently, where they actually ran it many times against the environment...
A: ...while tweaking the environment, getting it to the right place and trying to figure out issues. There turned out to be some weird issue with the cloud provider that was dramatically affecting performance. And then they ran it as a final build check, just to ensure, even if some tests are failing, that it doesn't necessarily mean the environment's not ready; it just means, oh, there's this little blip there, that's fine, don't worry about it, that's just a specific test. The environment itself is clearly...
C: I think it's me then. So the thought behind my question is basically: when you see a test result, one of the things you're always asking yourself is, is this result a good one, a bad one, or somewhere in between? Helpful for making that judgment is, eventually, some data to compare it against. So do we have any plans to provide, I don't know, an anonymous-upload kind of database to compare your results against? Does that make any sense?
A: Yeah, no, I think what you said makes complete sense. This is kind of what I was trying to describe throughout: this is a lab test tool, and the idea is to make it a controlled test environment. So, right from the start, from how the tests are created through to the test data, everything is designed to be controlled, so we can try and get real results; you know, realistic results, accurate results.
A: If you run this against an environment that's busy, then the results are also going to be skewed; they're dirty, for lack of a better term, because other processes are happening and that'll obviously impact performance. So that's, from start to finish, what this tool and the idea are about. This is why I said at the beginning: half the battle is preparation; it's the setup.
A: Running the tests is comparatively easy once everything's in place, but first you need to get all your ducks in a row, so to speak. And for comparison's sake, again, this is why we said we have opinionated test data, because, like I say, the first thing we'll probably ask you to do is run it with our test data, so we can compare like for like: we can compare a customer's 10k target environment to our 10k target environment.
A: We can compare the specs, we can compare the results, and we can start debugging properly to get to the right answer: why something's happened, or why it's not performing as it should.
A: No, GPT is an external tool, so it's quite hard for it to know that, so to speak. It will tell you the version of the environment and a few small pieces of metadata, but that's it.
A: The other aspect of the debugging I was talking about is the monitoring after the fact: looking at the metrics, running the test, looking at the metrics again, and evaluating. We go into some detail about how best to do this in the docs, although I think I still want to make that a little bit more detailed.
A: But the idea is, you run the test and then you look at the results, look at the metrics, and then you debug like you would any other performance issue and go: okay, well clearly, for example, the CPU here is absolutely maxed; why is that the case?
A: It's the search endpoint, global search; that endpoint is quite heavy, and the team's working away improving it all the time. But on a customer environment you might see that much higher, and again, then it's just a matter of going through, doing the work, essentially going through the detail, trying to figure out what's what.
D: I haven't put this in the doc, but it occurs to me while you're showing that and talking about the 10k reference architecture: have we published results from our different-size reference architectures, like a desirable GPT result for, say, a 2k, 5k, or 10k reference architecture?
A: It's on the screen right now, amazing. Our pipelines run GPT and report the results to our wiki, daily or weekly depending on the environment, and yeah, that's essentially your benchmark, at least to compare against initially.
G: This isn't something that you want to rely upon necessarily to move you forward, but I did receive some feedback from customers around some of the RPS metrics that came back as being, like, super low. It kind of shook their confidence in what they can use, mainly around the Git commits and things like that. It was like four RPS, I think; really low, and that was troubling to some.
G: Customers are asking for best practices, or asking for, say, VM sizes, these types of things, and we don't necessarily take an opinionated approach on that. And it's especially hard around troubleshooting, like if a CPU is spiking for a particular service or server; it's hard to go, okay, so I have 80 terabytes of Git data and I'm going to be uploading it. My pipelines are small, but I still have huge amounts of data that I'm committing.
A: It's very difficult, is the answer. Yeah, the problem is that, as I kind of alluded to, the GPT is designed to be a line in the sand, and that is subjective by its nature. At one point we had to decide: okay, here's the line in the sand that we're drawing. There will be outliers on both sides, sometimes extreme outliers on both sides.
A: We test against Linux, which is the largest public repo you can get your hands on today, but that is still tiny compared to some private repos out there. So there will always be outliers, and what we're trying to do is minimize the impact, but sometimes there will be an impact on support, where they have to go through and debug either a large repo or a repo whose data is very heavily skewed.
A: There are so many different factors that can impact performance and, as I say, we are continuing to try and expand where we can, but it will always be a place in the middle, so to speak. We can't always cover the edge cases, but we'll try and get there as best we can.
A: When it comes to the customer with the RPS thing: are you talking about the RPS that was being tested with, or the actual RPS result?
G: So that's a good question. I was mainly referring to the result, so what the Git RPS of four was for. And I asked about that in the channel, within the group, and they had stated: oh, it's really... we don't really have a good, I guess, test load for the Git requests, right?
G: We can only use the test users that we've created in order to perform those actions, but, from my understanding, there's nothing automated where you could just really crank it up, turn a knob or a dial, right, to increase the load on Git commits within the GPT tool, to get better or higher results.
A: There is; we can do some limited things with that. On the screen now is a Git push test I just ran, which is doing pushes of some commits at a rate of 20 requests per second.
A: All of this is done over HTTP, though. Doing it through Git with SSH, we have literally not found a tool we can do it with; it doesn't seem to be possible. So we always test over HTTP.
A: As I said in the slides, we do test Git and web at a lower throughput. Just for people who may be seeing this for the first time: you might think it's quite a substantially lower throughput, but that's actually based on real metrics, the available metrics in the real world, from the data we've got our hands on at least. I'm always happy to look at more data, but we've always found that Git and web throughput is actually quite a bit lower than API. But again, we're still testing.
A: We test all our endpoints with considerable headroom. It's a rule that I follow with performance: take what you see in the real world, double it, and then double it again, and you probably still aren't there yet, but that's probably a good start. So we might sometimes get feedback like: wow, your RPS is way too high compared to anything we see. And then the answer is: well, we need headroom, and it's better to be higher than lower; better safe than sorry.
A: So if anyone has a problem with that, give me a shout.
A: Not that our RPSs are locked, in that regard. You pass through the RPS, which is the top RPS, the one for API, and then the tool will calculate a 10% rate for Git, so you can use that. And if you're looking for a very specific RPS, that's something we can help guide you through with the tool.
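
As a rough illustration of that derivation, a sketch assuming the ratio described above; note the 10% figure is read from this discussion, not from GPT's source, so treat it as an assumption and check GPT's own option files for the exact fractions:

```python
def derived_rates(api_rps: float) -> dict:
    """Derive the per-protocol target rates from the top-level API RPS.

    Assumption: Git is tested at roughly 10% of the API rate, per the
    discussion above; GPT's option files are the source of truth.
    """
    return {
        "api": api_rps,          # the rate you pass in
        "git": api_rps * 0.10,   # assumed derived Git rate
    }

# e.g. an environment targeted at 200 API RPS
print(derived_rates(200))  # {'api': 200, 'git': 20.0}
```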
A: That's a good question: officially, we will support back to 12.5. We can support before that, but that's a bit more of a hand-holding situation. The main reason is that importing data has been quite bumpy with older versions. Specifically, I think on 12.3 and 12.4 we found it just didn't work; I think there were actual product issues at that time on all the versions from 12.0 downwards.
A: It would, but again, we've actually found more import issues in the last month or so. We're beholden, unfortunately, to how good import is, and in the last few releases there have been some issues as well. So if you're having any problems, then again, just reach out and we can try and guide you through.
A: One thing to call out, actually, is the thresholds. We're still not sure of the best way to tackle thresholds for previous versions. The thresholds that you'll see in the test results, like the one on screen, include a 5000ms TTFB threshold that might get reduced in the future as the endpoint improves. So you might run the test today with the latest version of the tool and it might report failures, but those values may actually still be correct for that version. That endpoint was simply slower in 12.8, for example, substantially slower, and the tool reports it as a failure.
H: All right, so I think for the purposes of this call we'll go ahead and skip that question; there's some great discussion in the notes doc. So that brings us to John.
B: Yeah, Grant, can the performance tool be configured to test the limits of a machine, where it keeps scaling up until it, like, breaks the machine, and then we can say: here's the max number of users that this setup can support?
A: We provide out-of-the-box option files. As the slide said, the GPT is based on k6, which is a well-regarded, industry open source performance testing tool. We provide various option files just to help make that experience easier, but GPT, and by its nature k6, can be used to do more complex scenarios.
A: Ours are straightforward. For example, for 10k: spin up to 200 users over the first five seconds, maintain that for the next 55, and then spin down. With a little bit of basic knowledge, which shouldn't take too long to learn, you can easily take one of those files and adjust it to say: actually, spin up to 200 users over the first five seconds and then spin up to a thousand over the next five minutes.
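
For anyone who hasn't seen one, the ramping described maps directly onto k6's `stages` option. Here is a minimal sketch that writes such an options file; the shape matches k6's documented options JSON, though the exact fields GPT expects in its own option files may differ, so verify against one it ships:

```python
import json

# k6 ramps the number of virtual users linearly toward each stage's
# target over that stage's duration.
options = {
    "stages": [
        {"duration": "5s", "target": 200},    # spin up to 200 VUs in 5s
        {"duration": "55s", "target": 200},   # hold for the next 55s
        {"duration": "5m", "target": 1000},   # then ramp to 1000 over 5m
        {"duration": "10s", "target": 0},     # spin back down
    ]
}

with open("custom_ramp.json", "w") as f:
    json.dump(options, f, indent=2)
```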
F: No, I had a slight spin on that. So obviously the option files are there for the validated 1k reference, 2k reference users, etc. Could you set the tool so that what it actually does is start off with the 1k reference, run that for however long it takes, which I think is about an hour, then do the 2k, and basically carry on going through the options until it gets to one that fails? Because I know there's ultimately a pass/fail at the end of it. So I mean we're talking probably an overnight, like eight hours, ten hours plus, run here, probably, yeah.
A: Yeah, and that might need to happen sometimes; that's more of a soak-style test. It's not in the tool itself, but it would be not too hard to script, yeah.
A: That should be quite scriptable to do; the output will be quite heavy, though.
A
Long
as
you're
saving
the
output
and
you
can
go
back
and
view
it
and
you
can
see
how
it's
done
for
each
kind
of
yeah
each
kind
of
thing
you
can
again.
You
can
easily
automate
that
in
galaxy
as
well
or
you
could
just
have
one
stage
for
each
for
each
so
there's
various
ways
to
kind
of
tackle
that
yeah,
but
not
necessarily
the
tool.
F: The reason why I ask that in particular is I've got customers that, you know, have massively over-provisioned their infrastructure for the number of users that they have. So they may have, like, 300 users, but the architecture they've got should probably support 1,000, 2,000 plus. And I guess, you know, they don't want to test for 300 users, because they know full well it's going to succeed for 300 users.
A: A fair question, yeah. So the best thing to do would be to script that, via CI, or shell scripts, bash scripts, or some other script. If you did that inside the tool, which you could do, you could create a scenario where you say: go to 1,000 users, then go to two, then go to three. The underlying tool, k6, will report the summary of that, and that's the only issue there; probably every test will fail, because it...
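
A sketch of the external-script approach suggested above, stepping through option files until a run fails. The GPT invocation shown (`bin/run-k6` and its flags) and the option-file names are placeholders; substitute the actual runner command and files from the GPT documentation:

```python
import subprocess

# Hypothetical option files, smallest to largest; use GPT's real ones.
OPTION_FILES = ["1k.json", "2k.json", "5k.json", "10k.json"]

def run_gpt(options_file: str) -> bool:
    """Run one GPT pass and rely on its exit code for pass/fail.

    The command below is illustrative only; check the GPT docs for
    the real runner name and flags.
    """
    result = subprocess.run(
        ["bin/run-k6", "--environment", "staging.json",
         "--options", options_file],
    )
    return result.returncode == 0

for options in OPTION_FILES:
    print(f"Running GPT with {options} ...")
    if not run_gpt(options):
        print(f"First failing level: {options}")
        break
else:
    print("All levels passed.")
```

Saving each run's output, as mentioned above, lets you review per-level results after the overnight run completes.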
B: Grant, how open are you to helping us interpret the results of a GPT run against customer data? With all the considerations about wanting a clean set of data, using your basic test data, as part of our proposed TAM health check, I think customers are going to want to run this against their existing data. Would you be open to helping us interpret the results, if that was the case?
A: I'm always happy, and Nailya, my co-worker, is always happy, to look at data results. But then, by its nature, performance testing is quite large and quite wide, and so the results you get are quite high-level. So you'll see that listing merge requests is doing direly badly, but then we don't have full transparency into the data to actually see why that is the case. They've got 10,000 more merge requests than our test data...
A
There
is
that
just
the
only
reason
or
is
also
because
the
environment
is
slower
so
again,
there's
two
factors
there
and
that
muddies
the
waters.
So
this
is
why
again,
we
need
our
base
there,
so
if
usually
with
us
for
both,
if
it's
definitely
definitely
not
possible,
we'll
try
and
give
a
look
and
give
our
best
guesses,
but
then,
as
I
say,
we're
impeded
in
terms
of
being
able
to
give
a
full
analysis.
B: And that's fair, and that might be sufficient, because we want the TAM health check to be a much lighter experience than a professional services health check, which would probably run against a clean server with all that test data already implemented. So that might be sufficient for us.
A: Unfortunately, Gitaly Cluster is still not performing as well as we want it to; it's performing pretty badly in terms of Praefect specifically, which uses its database very, very heavily. We've had some improvements on that recently. Before, as best I could tell, the database CPU hit from Praefect against its database was effectively infinite; it was just absolutely obliterating it. The Gitaly Cluster team immediately looked at fixing that, and they did, and now it's maybe about four times, specs-wise, compared to the main GitLab database specs. So that's still bad, but it's something.
A
So
if
you
have
a
customer,
that's
really
really
really
needing
it.
Now,
then
you
can
give
them
the
data,
but
hopefully
in
34
they'll
get
that.
F
Done
for
you,
so
just
briefly,
the
the
database,
part
of
the
cl
of
the
configuration
is
the
bit
that
needs
to
be
over
provisioned
for
x,
roughly
by
comparison.
A
Yeah,
so
the
problem
is
that
we
wanted
clusters
to
get.
We
didn't
want
updating
reference
gestures
is
difficult.
Every
update
impacts
every
customer
potentially
so
we
need
to
always
be
careful
about
it
and
we
didn't
want
to
put
cluster
in
until
it
was
definitely
ready
and
in
the
configuration
we
wanted
as
well.
A
The
big
wind
cluster
that
we
had
didn't
have
before,
which
is
a
massive
pain
point,
is
that
with
the
generator,
if
you
go
off
and
run
it
to
import
data,
you
know
describing
documentation,
you
need
to
import
the
same
git
lab
project
large
project
into
every
gitly
node,
because
they're
separate,
and
there
is
no
concurrency
or
anything
else
there
and
that's
a
big
pain
point
for
us
with
cluster.
That
goes
away.
A
You
have
one
git
lab
copy
gateway
copy,
so
to
speak,
and
that's
translated
across
multiple
cluster
nodes
in
the
cluster
and
then
the
the
real
winner.
There
is
the
idea
of
distributed
reads,
which
allows
obviously
gitlab
to
then
speak
to
the
standbys,
as
well
as
the
main
primary
and
that
improves
performance
greatly.
So
that's
the
thing
we
really
wanted
in
the
reference
architectures,
because
that
will
we
just
cost
it,
which
is
complexity.
A
It
takes
every
box,
it's
a
massive
win,
but
with
that
feature
on
is
when
prefect
really
hits
this
day
base
too
hard,
and
as
I
say,
we
raised
issues
literally
last
week,
the
cluster
team
working
on
13-4,
I'm
very
much
hoping
we'll
have
it
in
soon,
but
we
want
to
do
it
right
because
we
get
into
early
and
then
customers
go
from
buy
hardware,
that's
like
much
massively
bigger
and
then
a
month
later
say.
A
Actually
they
don't
need
the
hardware
anymore
they're
not
going
to
be
happy
so
yeah,
but
it's
not
a
closed
door
in
that
one.
If
a
customer
comes
in
really
wanting
it,
that's
just
a
conversation
to
have
either
also
the
self-managed
environment.
Triage
working
group.
H: All right, well, thank you so much, Grant and Nailya, for joining us today. A great session. Join us next time; we may or may not be having a session next week, so stay tuned, and we will see you all soon.