From YouTube: Verify:Testing Internal Customer Call - August 2021
Description
Today we talked about our new category of Review Apps and Kyle walked us through a use case where the Failed Test counter has been helpful but the workflow is still clunky (then he recorded a video).
Review Apps Direction Page: https://about.gitlab.com/direction/verify/review_apps/
Kyle's Video: https://youtu.be/HvCCOUI8ZGo
B
That's right. This is the internal customer call for August 2021. Right now we don't have any internal customers on the call, although we could all call ourselves internal customers, because we use our tools too. I'm going to start with intros just to let the folks who are new here introduce themselves. Why don't we start with Alana?
A
Thanks. Hey everyone, I'm Scott. I'm doing an internship under James, learning the product management role. Glad to be here.
B
Now that the code quality in-line annotations are shipped, we were just wrapping up a couple of things. Miranda was working on some front-end pieces and making sure they display in the merge request.
B
Appreciate you jumping in and doing that. The topic I had proposed was Review Apps, and Kyle, you got started with the topic there, so why don't you jump in and tell us how we're using review apps internally to start, and then take the conversation wherever you want to.
D
Good to meet you, Alana and Scott, and good to interact with everyone again; good to see everybody. Thanks for the prompt in the engineering productivity channel. That gave me a little nudge to put some things in the doc to spur some conversation.
D
Review apps are really handy for enabling cheap exploratory testing. Then there are more challenging projects like the GitLab single repo, where we have to build the image inside the pipeline and deploy it, which is a complex and tedious process with cluster management and cost management complications.
D
I noted an issue that I have for discussion around maybe changing how we use review apps internally to make them ephemeral by default, so they are used to support testing and then they disappear. And if someone wants to keep them around 24/7 as they are now, make that functionality available on demand.
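(For reference, the ephemeral-by-default behavior Kyle describes maps fairly directly onto GitLab CI's `environment` keywords: `auto_stop_in` gives each review app a TTL, and a manual stop job keeps teardown available on demand. A minimal sketch, not the actual gitlab-org/gitlab configuration; the deploy and teardown scripts, URL pattern, and TTL value are placeholders:)

```yaml
review_app:
  stage: review
  script:
    - ./scripts/deploy-review-app.sh   # placeholder deploy step
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.example.com   # placeholder URL pattern
    on_stop: stop_review_app
    auto_stop_in: 8 hours              # ephemeral by default: auto-stop after a TTL
  rules:
    - if: '$CI_MERGE_REQUEST_IID'

stop_review_app:
  stage: review
  script:
    - ./scripts/stop-review-app.sh     # placeholder teardown step
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  rules:
    - if: '$CI_MERGE_REQUEST_IID'
      when: manual                     # stopping early stays an on-demand action
  allow_failure: true
```

Re-running the deploy job should reset the auto-stop timer for anyone who wants the environment around longer.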
D
So
before
I
go
into
my
question,
those
are
the
two
usages
for
review
apps,
that
that
I'm
aware
of
any
questions
on
those
that
I
I
can
help.
B
I just want to make sure I'm clear: by default we have review apps available for the handbook, for www, for, I guess, the self-contained projects that you described. But the monorepo, gitlab-org/gitlab, doesn't come with a review app out of the box. So if I open an MR and commit, I'm not going to get a review app that would spin up a GitLab instance.
D
Not all changes would. Front-end changes, end-to-end spec changes, CI changes; there's a set of file changes where you will have a review app available along with your change.
D
The review app deploy jobs are still available in the pipeline for the others, like a back-end change, if you want them; you can play those jobs manually, but they don't run by default.
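(That pattern, automatic review app jobs for certain file types and a manual "play" for everything else, can be expressed with `rules:changes`. A minimal sketch with illustrative file patterns and a placeholder deploy script, not the real gitlab-org/gitlab rules:)

```yaml
review_deploy:
  stage: review
  script:
    - ./scripts/deploy-review-app.sh   # placeholder deploy step
  environment:
    name: review/$CI_COMMIT_REF_SLUG
  rules:
    # Run automatically when front-end, end-to-end spec, or CI files change
    - if: '$CI_MERGE_REQUEST_IID'
      changes:
        - "app/assets/**/*"
        - "qa/**/*"
        - ".gitlab-ci.yml"
    # Otherwise stay in the pipeline as a manual job that can be played on demand
    - if: '$CI_MERGE_REQUEST_IID'
      when: manual
      allow_failure: true
```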
D
To be really clear, it's not just three projects; there are lots of other ones. I more wanted to focus in on use cases where there are simple review app deploy processes that are very straightforward, and then more complex ones where things are a little more tedious.
B
I appreciate you reducing that to just two use cases for us.
D
The question that came to mind as you're taking this on: I've been thinking a lot about how we can provide more context in MRs. One example would be performance regressions, where we could use a baseline set of performance data that runs against the review app environment, then run the same performance tests against the MR and look for deviations from that baseline. That got me thinking.
B
We can do the first one today: browser performance testing and accessibility testing can both run against the review app, and the merge request widgets will show that deviation. I think there are huge improvements we can make there, and making the process of saying "I want to run this test, and I want to run it against the review app" easier would be better as well.
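(Both of those checks ship as CI templates; pointing them at the review app is mostly a matter of setting their URL variables. A sketch, where the review app URL is a placeholder and the job and variable names follow GitLab's bundled templates, which may differ by version:)

```yaml
include:
  - template: Verify/Browser-Performance.gitlab-ci.yml
  - template: Verify/Accessibility.gitlab-ci.yml

browser_performance:
  variables:
    URL: https://$CI_COMMIT_REF_SLUG.example.com        # review app URL (placeholder)

a11y:
  variables:
    a11y_urls: https://$CI_COMMIT_REF_SLUG.example.com  # same review app URL (placeholder)
```

The resulting reports are what feed the merge request widgets James mentions, which is where the deviation shows up.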
B
I've long thought there's also an opportunity to run that comparison against what it looks like in production. It might change in my review app, MR to MR, but how does it look compared to what's going on in production from a browser performance standpoint? So, starting to pull that data back in.
B
I also think there's something interesting there: you can request a code review; I want to be able to request a user acceptance review. Somebody pushes a change, it goes up, and, for me, the use case is the release post, because it's release post week. When it's time for a product manager, a tech writer, whoever it is, to review a change, they get a ping.
B
But that ping includes not just the MR but a link directly to the review app, so they can go in and look at the change directly in the review app, leave comments through visual review tools, or make edits or suggestions within the review app itself, and make that process just a little bit easier.
B
I think we're right on the cusp of being able to do that with a lot of the tech and the feature set that we have, to make that experience radically different. The other one, as I'm thinking about my use case (because when it comes to review apps I'm always, very selfishly, thinking about my use case as a product manager), is: how do I go to user acceptance testing, and what do I need to go test?
B
That, for me, is a really compelling use case, but again it's selfish, as a product manager. So those are at least my initial thoughts, but I'd love to hear from the team: what are you thinking as we take on this new category?
E
One obvious one, maybe, to us as Testing, was our sort-of-failed visual review tools that we didn't really get off the ground very well. For those that aren't aware, we created a feature called visual review tools. It's kind of a separate app: the user can add a script to their build, so when the review app comes up and they're viewing their app, there's a little icon at the bottom and they can make comments straight from the review app.
E
Getting these tools onto your review app, rather than having to paste the script straight onto your HTML. So there are some things we can investigate here, now that we have more control over review apps: how can we package this more easily?
D
James, going back to one thing you said about pulling data from production for performance: this may be a minority use case, but our review app environment doesn't align with production from a capacity or performance perspective.
D
So that's where I was really talking about a review app performance baseline, where we're comparing against something that's a similarly scoped app and then looking for deviations against that.
B
But for me, that's probably the blocker, why I wouldn't want to use that feature; like, yeah, of course I know there are problems in staging. Or do we limit it to just the front end? Front end is front end on the browser side: are you seeing the same browser-side problems that you would somewhere else? And can we start to give you capabilities to spin up different browsers? That's where it gets really interesting, so you can start to do more of that kind of testing.
B
Yeah, tell me more, though, about that difference and how it's valuable to you, because I'm making assumptions that there are problems in that.
D
Yeah, so the way I look at it: an indication that a transaction is taking longer compared to a like environment is much more helpful than comparing against a different environment.
D
Saying "always set up your review apps just like your production environment for all of your MRs" would cost a lot of money for the single repo. The intent, and this ties into my discussion issue up above, is that review apps are either available 24/7, so that anyone can click the review app link and go in and access it, or they can spin up very quickly.
D
Spinning up very quickly is not something we've been able to figure out in the monorepo. And then, Scott, what you're talking about with authentication challenges: I see that, again looking at the single repo, as almost a challenge we would have on the app side, of trying to figure out, how do we...
D
How do we integrate our review app implementation so that things can work like the handbook? When you make a handbook change, you can click the review app link and it takes you right to the page, because no authentication is needed to get to that page. Whereas on a GitLab instance you would need data, you'd need a number of things, just to even get to that point.
Where it's, hey, I changed the way the issues page list looks, go look at it. You know, that's an app-side problem.
D
Another thing, so, James, this is something more around end-to-end testing, and again hyper-focused on our use case; we kind of talked about this a bit yesterday: bringing more intelligence to the MRs. This might be where the feature you have, "this test failed X number of times in the master branch in the last 14 days," comes in. We could look at end-to-end test failures against the review app, compare them to that baseline, and say: hey, this test failed, and it hasn't failed in master for the last 14 days.
D
I would imagine the same thing with performance: with deviations from a baseline there's going to be a lot of noise, and how do you tune that signal-to-noise ratio so that you surface the most actionable things, from either performance testing or end-to-end testing, anything that you're doing against the live environment? Joanna, do you have anything to pile on here too?
B
Testing is something that Brandon, or Brendan, opened an issue for like three years ago; there's an issue floating around for that. There are a lot of really great ideas in both the review app backlog and the visual review tools backlog that we just have to have the capacity to jump into and get on.
B
Data seeding seems like an interesting problem to go tackle, if one of the things that blocks you from quickly spinning up an instance is having that data available. And, for the second use case, more for the single or the monorepo, for the GitLab app: better understanding what blocks us from spinning up those review apps more.
B
And is there a TTL that we need to create so that they're automatically shutting themselves back down for cost reasons? Things like that, yeah.
D
I totally agree. I look at those mostly as implementation choices in our usage of review apps, but if there are accelerators in the future that you can build in... I just think about how review apps are hosted. The examples I listed above, like, I think, docs and gitlab.com, shove stuff to a GCS bucket and then set up a route, so it's just static site hosting, whereas we have a Helm-deployed app with something like 15 different pods for every single review app, on a Kubernetes cluster.
B
Cool. Do you want to keep going down the review app path a little bit, or do you want to put a pin in that and talk a little bit more about the plan for the test suite and the spec pass/fail rate functionality?
B
Yeah, I will probably follow up with you later about the browser performance testing and how we can start to incorporate that and look at that drift, see what that's like. Cool, all right. So, Kyle, you had put in a question about what functionality is available or planned for test suite and spec pass/fail rate over time, for instance the past 14 days.
B
We did get the MVC that you talked about out a while ago, so you can always see how often a test has passed, or rather how often it has failed, in the last 14 days. That's definitely an MVC. We haven't heard much feedback either way, positive or negative, but for us the next logical step, and we already have a design done and we've thought through some of the back-end components, is probably showing just the last 12 or 10, I don't remember
the exact number, executions of the test on the default branch, and showing that full report so that you can start to see a better trend over time. I think we've done some of the back-end work; Eric's looking at me a little like I'm crazy, because he probably would have been the one who did that. But for us that's the next logical step going down that path: here's a richer report, so you can start to dig in and understand.
B
When did that test start failing? What might have changed there? What else failed at the same time? So that, as a manager or a lead, you can start to figure out where we need to focus.
D
I realized I didn't add this in the original question, but I did add, let's say, a quality engineering focus. It's an ask to help with our internal reliability efforts, where we can start to better understand our end-to-end test health over time.
D
That has a lot more specific details. So, for feedback on the "test failed in the last 14 days" feature: I love it. I feel like I've said this before, but it's really helpful for master stability in particular. There was a flaky, order-dependent failure earlier today where I wanted to know how often this had failed in master; the test report was the first place I looked, and it was higher than I thought.
D
So it's a helpful thing, but I think it's buried. Knowing that it's there and then navigating to it is always like ten clicks: go to the pipeline, click on the test report, go to the suite view, click on the suite, wait for it to load, view the details of the spec that you're talking about; and none of those have unique links beyond the test report.
B
Your description is great, but I would love to see it. The next time you start down that path, if you do a quick Zoom recording and share it with us, we'd love to actually see how many clicks it takes; I'm sure our new UX designer would love to see that too.
D
Okay, I'll link to the comment that I did earlier today on this. Perfect.
D
And then I'll record a video when I can tomorrow and just be like, hey, this happened; let's walk through how I handle it from a master stability perspective. I might also get some tips where it's like, you don't have to do that, here are two clicks you can use instead.
F
Yeah, I would say it's really helpful for pipeline triage as well, just to get an initial idea of, you know, is this something that has been failing for a while? Then maybe we don't immediately jump into investigation on that, whereas with a new failure there's a higher possibility that it's a bug, so we can focus our efforts there and then come back to the tests that have been failing for a while.
E
That deep URL, what you're saying is you can't really share a direct URL for the failures, right? I think that's a fairly quick win that we could put in place.
D
Yeah, even just to the suite, right? In the comment I mentioned, I link to the overall test report, and how I got there is: build failure, click on the pipeline, go to the test report. If you could go into, say, rspec unit pg12 and just link to that.
A
Kyle, I had a quick question about the recording of tests that have failed. I'm not sure of the implementation, but over time, does re-architecture affect that reference? Like, if someone says, "hey, rename this, because you have a spelling mistake," and then I go in and rename it, is that going to affect the long-term data?
D
James, you mentioned the quality, I think you said quality management, report. Is that in the doc somewhere that maybe I'm just overlooking, or is there an epic for this?
E
I was just making sure that we stayed on the call. All right, do we have more on the test report stuff, or do you want to go to the next point?
D
Yeah. What's on the roadmap for functionality to reduce CI minute usage by testing smarter? That's a better question than what I started with, if anyone was following the Slack threads that James had.
D
So, we were at 184,000 tests every time we run a pipeline, across front end and back end. We are embarking on a project to make that number much smaller, to reduce the number of pipeline minutes per pipeline that the single repo uses. In Ruby, we're using dynamic spec analysis to say these files relate to these specs, let's only run those prior to approval; and I know the workflow and things are a little not optimal.
D
So
I
see
the
challenges
with
all
of
this,
but
we're
trying
and
we're
doing
something
similar
with
jest.
I
was
wondering
if
you
all
are
looking
towards
how
can
how?
How
can
we
enable
customers
to
do
something
similar
for
common
testing
practices?
Just
like
like
we're
doing
with
ruby
and
jest?
I
look
at
it
almost
as
the
potential
of
a
mode
where
you
can
say,
run
a
selective
run.
Selective
tests
on
merge
requests
up
until
this
point
in
the
merge
requests.
E
Yeah, that's a good question. I'm a bit out of my element here; I'm not sure if we have anything on the roadmap currently to solve that, but I think that's a very useful feature to look into.
G
So I think yesterday I talked to someone customer-facing, I can't remember their role, but they were asking about how we support parallelism in test execution versus Actions, no, CircleCI. The question was phrased as: oh, they have this thing built in that just does it; do we have a built-in thing like that? And I looked at the way we implemented it and the way that CircleCI implemented it.
G
It was the same thing; all the concepts under the hood worked exactly the same way. But theirs was packaged up in a way that made it feel automatic, and ours is this very computationally academic explanation of, well, it's just compute minutes, so we'll parallelize the jobs, and then you've got to figure it out; I don't know how your test suite works, you have to take these things and make them parallel. It struck me as something that we could just advertise better and publish more guidance on.
G
Like, here's a good way to do it. And if that means we put some syntactic sugar on it to make it feel like, you know, "GitLab, make it go faster," that'd be great. Running fewer tests, to me, falls kind of in the same category, but it seems like a hard qualitative decision about which test is important.
G
Right, what is the cost and value of a given test? There are expensive tests that you want to run all the time because they're just super duper important, and then there's prioritizing that. I don't think there's a way for us to ever automatically do that for a customer, because it's so preference-based. But we can start documenting the way we do it and declaring that the best practice, because it's the way we do it, and here we are being a company.
G
I think we can get a lot of mileage out of that with customers who get to this open-ended place and they're like, "I want to run fewer tests, and I read this blog post, but I don't know what to do now." I think we just need to write some forceful opinions in our documentation.
G
And I think if we get opinionated about the tools we already have, we can get a lot of good feedback on that, and it'll help customers know what they want to do, because it's not really articulated. We've had a whole team spend an engineering-initiative amount of time figuring out what we care about, so let's assume that everybody else cares exactly the same way and just tell them.
D
We use Crystalball. We essentially generate a mapping every two hours that says these files relate to these specs, and then we consume that mapping to say, run those specs based on the changed files, until first approval on the pipeline. With Jest, which is what we're looking towards next, there's a built-in find-related-tests feature that has a lot of intelligence built into it, looking at load paths and things like that.
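(A rough sketch of what that selective execution looks like in CI terms. The mapping file and the `scripts/related_specs` helper are hypothetical stand-ins for the Crystalball-generated mapping and its consumer; the Jest job uses Jest's real `--findRelatedTests` flag.)

```yaml
detect-changes:
  stage: prepare
  script:
    # Resolve the MR's changed files (assumes the target branch is fetched),
    # then map them to specs using the mapping from a scheduled Crystalball run
    - git diff --name-only "origin/$CI_MERGE_REQUEST_TARGET_BRANCH_NAME...HEAD" > changed_files.txt
    - scripts/related_specs crystalball_mapping.yml changed_files.txt > related_specs.txt
  artifacts:
    paths:
      - changed_files.txt
      - related_specs.txt
  rules:
    - if: '$CI_MERGE_REQUEST_IID'

rspec-selected:
  stage: test
  needs: ["detect-changes"]
  script:
    - bundle exec rspec $(cat related_specs.txt)             # run only the related specs
  rules:
    - if: '$CI_MERGE_REQUEST_IID'

jest-selected:
  stage: test
  needs: ["detect-changes"]
  script:
    - yarn jest --findRelatedTests $(cat changed_files.txt)  # Jest's built-in related-test detection
  rules:
    - if: '$CI_MERGE_REQUEST_IID'
```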
D
I imagine there's something similar elsewhere, but I agree with you, Drew: this is not a case where there's one solution for everything. It's almost a language-by-language challenge, but the general capability is there.
I did have one question for you on CircleCI. Does CircleCI auto-determine the number of parallel builds to execute? Because for us, we have to manually maintain it and monitor it to say, oh, we've added X number of tests, we maybe need to bump our builds.
D
So you can shorten your total time to feedback, which is very important, but at the expense of CI minutes, because if you throw 10 more machines at running your RSpec tests, in the end it's probably the same number of minutes and you're just accelerating feedback. But there is some startup time and things like that, which we see in our own pipelines.
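(The GitLab side of this is the `parallel` keyword, which fans a job out into N copies and sets CI_NODE_INDEX / CI_NODE_TOTAL in each; how the suite splits itself across those nodes is left to the test tooling. A sketch using the knapsack gem, one common approach for RSpec; the parallel count is the number that has to be maintained by hand today:)

```yaml
rspec:
  stage: test
  parallel: 10          # manually tuned today; bump when the suite grows
  script:
    # knapsack reads CI_NODE_INDEX / CI_NODE_TOTAL and runs this node's share of specs
    - bundle exec rake "knapsack:rspec"
```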
G
I don't think we have the level of introspection to know how much of your job was load time and how much was in your script. That would be really cool; we could plot a chart of, here's your CI optimality, where you're getting the shortest duration without spending too much setup time.
G
I don't know how to programmatically tell the difference between a job that runs a hundred thousand short tests and one that runs a single very long test, right? We just know that this thing ran for two hours.
D
Yeah, I was just thinking, even in the test report, and again I'm going to get into implementation, and this might be stupid: you can see how many total tests were run, by suite. Usually those suites correlate to a CI job; our unit test suite is at 79,351 in this example I'm looking at, and we have, I think, 25 concurrent builds that we split that out amongst in the UI.
D
It has to be, because that's 337,914 seconds, that's 600 minutes. So, if they're using the test report, there might be a way to start building that intelligence of, here's where you can optimize your pipeline, just with that data.
D
This maybe comes back to just doing a recording, where words on a page and out of my mouth aren't doing the selective test execution stuff justice. A three-minute video of, here is a pipeline before, here's a pipeline after, here's the savings in duration and minutes that we see, I think would help bring some context to that effort. That would be beneficial well beyond this; anyway, I've been talking with Albert about doing that.
E
Yeah, no, I think that idea is very useful. I'm just not sure of the best way to go about it.
D
I view selective test execution as an end-stage goal toward giving customers better feedback, as in, here's how to optimize your pipelines: a feature where we can auto-optimize, or give you more control over the tests, like running fewer tests. But as we were talking about it, it just made me think more about the nudges that we could provide in the product, and things like that.
E
Cool. Well, is there anything else we want to bring up? We can end this a little bit early.
D
It should be a recording; I should just record it and be like, hey, these are the things that I'm seeing. That's interesting, that's really good feedback: to think about how I can do that more ahead of the next one, and use something like that as a short way to tee up the discussion.