Description
Today we had a great update from Kyle on their use of the Fail Fast Template and the TFF gem (https://gitlab.com/gitlab-org/ci-cd/test_file_finder/). We also dug into the problems we are looking to solve for Code Quality to move the category to Viable (https://gitlab.com/groups/gitlab-org/-/epics/3686).
A
This is the September 24th, 2020 Verify Testing internal customer call. I've shared in the agenda the roadmap deck changes of note. Just some quick highlights I'll verbalize before we jump into other agenda topics: we've shipped the first iteration of the group test coverage data, so it builds up from the project data. You can get data for all of the projects within your group, and we have a feedback issue there.
A
So if anyone internally is using it, or external folks are watching the video later, we'd love to get the feedback within that issue. That first iteration is being able to download the CSV, and then the team will be iterating on that to show the data in a table and a graph of all of the data over time: the average of all of your group's coverage data over time. Then our next big focus, the epic that we are just wrapping up, is around the unit test report and showing that data.
A
There's one more open issue there, displaying the file name as part of the table, and then we'll have a little bit of follow-up cleanup on the UI of that interface. But as we close out that epic, the next big focus for the team is going to be moving into the Code Quality category, primarily the features that will enable us to move our maturity up to Viable. One of the key components of that is the majority of internal customers using the category, and so for me, that's the deep dive I want to get into today: understanding, from anybody on the call who has contributed to code quality issues or has open issues, what those issues are, and making sure that the issues we have identified for the epic close out the problems that are preventing them from using that feature internally.
A
Before we do that, though, I want to make sure that Ricky has a chance to verbalize his point, as we talked about it on an earlier call this morning in their team refinement. So Brady, I'm going to hand it over to you.
B
Sure. Are you talking about my last bullet? Yeah, so I noticed that Albert's doing his fail-fast experiment, with the kind of parallel fail-fast job running that then cancels the pipeline if it detects a failure. I'm just very curious about the results for it. Is there an issue or somewhere you can share with us, Kyle, about any kind of results? Is it good? Is it bad? Is it saving time? Is it spending more time? What's the 4-1-1?
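For context, the experiment being described looks roughly like the sketch below: a job that runs only the specs mapped to the files changed in the MR, in parallel with the full suite, so a failure can surface early and the rest of the pipeline can be cancelled. The job name, stage, and tff invocation are illustrative assumptions, not the actual Fail Fast template.

```yaml
# Illustrative sketch only, not the actual Fail Fast template.
# Assumes a tff mapping file (tests.yml) is checked in; the --mapping-file
# flag follows the test_file_finder README and may have evolved.
rspec-fail-fast:
  stage: test                      # runs in parallel with the full RSpec jobs
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - bundle install
    # Map the files changed in this MR to spec files, then run only those:
    - CHANGED=$(git diff --name-only "$CI_MERGE_REQUEST_DIFF_BASE_SHA")
    - tff --mapping-file tests.yml $CHANGED | xargs -r bundle exec rspec
```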
C
Right now we're still gathering data. I'm trying to pull the issue; I'll add it to the agenda. We have a Sisense dashboard that pulls together the key metrics we're looking at. I thought it was in the issue you linked to, but I don't seem to see it.
C
So the next steps are really to move towards the dynamic test mapping identification that I think we chatted about probably two months back. That's something we'll look to continue to grow in Q4, so that we can really get to the point where we have high confidence that these impact jobs are catching the failures that would be caught in our RSpec jobs, and we can start to move away from running every single test on every single MR. We'll still run them all in master, but in MRs we're trying to move in the direction of moving away from that.
C
So give me just a second and I'll find that dashboard.
B
Awesome, thanks. I'm also curious about your plan to start doing more dynamic analysis of what tests need to run. Are you planning on generating a mapping file and then feeding that into TFF, generating that mapping separately, more dynamically? Or are there thoughts about building that right into the TFF tool?
C
I would have to defer to Albert on that. We've talked about both approaches, and I was thinking we could collaborate with you to talk through our concerns about building it into the tool. I think we'd have a preference towards that, assuming we have the capacity to do it; we're a small team that gets pulled in a lot of directions, just like you.
B
For the time that we have, for sure. It seems super interesting too, because with TFF just really taking a big mapping, you don't have to build that mapping in TFF itself. You could build it using whatever mechanism you want. So you could take the same Shopify approach and just generate a mapping file that's compatible with TFF, and then you're golden.
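For readers unfamiliar with the tool, this is the kind of mapping file being discussed. The shape follows the test_file_finder README (a `mapping` list of source-to-test rules), but the concrete rules below are illustrative; a dynamically generated mapping, the Shopify-style approach mentioned above, just needs to emit a file in this same shape.

```yaml
# tests.yml: an illustrative tff mapping (syntax per the test_file_finder README).
mapping:
  # Map an application file to its spec via a regex capture group:
  - source: 'app/(.+)\.rb'
    test: 'spec/%s_spec.rb'
  # Map a lib file to its spec the same way:
  - source: 'lib/(.+)\.rb'
    test: 'spec/lib/%s_spec.rb'

# Example invocation: `tff --mapping-file tests.yml app/models/user.rb`
# would print `spec/models/user_spec.rb`.
```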
C
Yeah, that was the way I think we see getting to that point fastest: just generating the mapping. The question I think we have for you is whether it makes sense to even build the mapping generation functionality into TFF, because TFF feels more general purpose, like it's supposed to be more than just Ruby, and we're going to be building a Ruby-specific dynamic mapping analysis generation. Or is that a different component that maybe fits in the same ecosystem that you all are working in with the tool?
C
Those are some of the questions, and again, if we could set up a sync with your stakeholders, me, and Albert, I can provide some times in the agenda, or in Slack, for one that can work. Those are the things we want to think through ahead of Q4, so that we can get a KR, potentially, to support a shared direction on that. And when I say "we", I mean...
C
Does that help? Yeah, I can talk through the metrics, but like I said, we're seeing cost per pipeline go down about five cents, I think, because the short-circuit rate is about one to two percent. So we're not necessarily seeing the volume of pipelines cancelled, which is a good thing; that means developers are likely running the unit tests that apply most locally, like we have some good practices. But we're not seeing the pipeline reductions that we were targeting.
C
Oh, we don't measure that yet, actually. When you say the miss rate, are you saying how often the impact job is passing while other RSpec jobs are failing?
C
Yeah, so we're not measuring that right now. That's a critical measurement, to answer your question. Albert and I are pairing this evening, my time, to get that together; we just had some other metric demands pulling at our time and attention, so I haven't picked this one up yet. I'll report back in Slack when we have the chart, so that you can at least see it.
A
Cool. So we had talked earlier today in our meeting about the next steps for us for TFF: taking some of that work that Engineering Productivity has done around the short circuit, failing the pipeline when it finds failures, and working that back into the template, so it is more general purpose and available in the template. Would we be able to pick that up fairly soon, or is it at a point where it can be built back into the template?
C
I think our implementation is very opinionated, so this would be another thing where having a discussion with Albert would be better than having a discussion with just me. Unfortunately, I feel that I'm failing as an internal stakeholder right now, no worries, but that's kind of what I told him in the last meeting, right.
C
Maybe one thing we can do async with Albert, instead of having a synchronous session: I can just set up some time for Albert to talk me through the jobs, we can record it, put it on Unfiltered, share it with you, and then we can collaborate async on doing that. I can create the issue to record a video and mention you all when we get it.
B
Yeah, I think there are probably different components there that we could, I don't know, figure out a way to use. We could spend our own time figuring out how to take what you all have done, generalize it, maybe come up with something that's a little bit more general purpose, and then build that into a template somehow. Because I think the real nifty part is the cancelling of jobs that are in flight when you detect failures.
C
Yeah, and right now we have, I'll say, scripts, bash scripts essentially, that do the API calls to cancel the pipeline. There's nothing built into the CI configuration that does that short-circuiting for us, because we're not blocking the RSpec jobs with this one; they're running in parallel. So we're trying to short-circuit jobs that are actually executing in parallel, until we feel confident that we're able to condense the number of tests that are run.
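Continuing the earlier sketch, the short-circuit mechanic described here can be as small as one API call from inside the job. `POST /projects/:id/pipelines/:pipeline_id/cancel` is a documented GitLab endpoint; the job shape and the `API_TOKEN` variable are assumptions, not the actual script.

```yaml
# Illustrative sketch, not the actual implementation: if the fail-fast specs
# fail, cancel the whole pipeline (including the parallel RSpec jobs).
# Assumes API_TOKEN is a CI/CD variable holding a token with `api` scope;
# $MATCHING_TESTS stands in for the tff output from the earlier sketch.
rspec-fail-fast:
  script:
    - |
      if ! bundle exec rspec $MATCHING_TESTS; then
        curl --request POST --header "PRIVATE-TOKEN: $API_TOKEN" \
          "$CI_API_V4_URL/projects/$CI_PROJECT_ID/pipelines/$CI_PIPELINE_ID/cancel"
        exit 1
      fi
```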
B
I think even that would be valuable to build in for people. I actually like that it's not built into the product, that it's just a bunch of bash scripts and API calls, because if we can package that up, then people can customize it in order to do other things, and that's even better.
C
Yeah, I'm going to make a note and an action in my to-dos here, so it's transparent: create the issue, set up some time with Albert to talk through the pipeline and the different components, record it, and then we can try to collaborate async first. If we need to have a synchronous session with Albert, me, and other stakeholders like you all, we can set that up. But let's try to do async first. Awesome, cool.
A
All right, I'm going to jump back up then. I wanted to do a bit of a deep dive on the upcoming Code Quality epic that the team will be picking up. I'm going to first talk through the problems to solve that we've identified from a combination of customer interviews, the open dogfooding issues that are out there, and discussions with the internal stakeholders who wrote those dogfooding issues. So, the three big ones that we'll be looking to solve.
A
The first problem is: I don't know how important any particular item that was reported is today. The code quality issues just show up flat. You see things that are new and things that are fixed in the, not the test summary widget, I'm so used to talking about test history that that's the thing stuck in my head, in the MR widget, and then within the report it's just "here's all of the violations", but again, no sort of severity. So the team is picking up in 13.6, I believe, the first issue there, which will expose severity. It comes on basically a one-to-five scale, from informational up to critical, and I think that will have the data available, so we think that solves for that problem. Then, as you're looking at the Code Quality MR widget, you'll understand "this is a critical issue", which matches up more with what Security does today, what SAST and DAST do today.
A
The next one is that the code quality reports are noisy and hard to use during a code review, and that people don't care about low-priority issues. I kind of combined these two into one; there are two separate features here, two issues we're talking about. One is pulling code quality data into the MR diff.
A
What I've heard over and over from folks is that, as I'm doing a code review, I see in the MR that there are new violations found from the code quality scan, but I don't have an easy way to see whether they're in the code that I'm reviewing. So by exposing that in a similar fashion to what recently happened with test coverage, we make it much more readily available to the reviewer, so they can see whether this is a violation that's in the new code or near the new code, and there might be some opportunity then to go and fix that violation and improve the overall quality. And then, on not caring about low-priority issues: we have an MVC written up to let you set, within your .gitlab-ci.yml, basically the floor at which you want to see violations.
A
So if you don't care about informational, or low, whatever the lowest one is, you can say "don't care about these" as part of the code quality job, or by extending the template, and those just don't show up in the UI. So you would start to see a lot better signal, and not just a lot of noise about informational, maybe linting errors or things like that, that you just don't care about.
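As a sketch of what that MVC could look like from the user's side, extending the documented Code Quality template; the variable name below is hypothetical, since the actual knob is whatever the MVC issue lands on:

```yaml
include:
  - template: Code-Quality.gitlab-ci.yml

code_quality:
  variables:
    # Hypothetical variable: hide findings below this severity in the report
    # and MR widget. The real name and mechanism will come from the MVC issue.
    CODE_QUALITY_SEVERITY_FLOOR: "minor"
```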
A
And then the last big one, and this is definitely one from the internal customers: my team can't enforce that code quality is important without a big workaround, creating some sort of your own custom scripting that goes and looks at the code quality job or the code quality report, understands whether there are new violations, so it does its own diff against something coming from your default or your target branch, and then fails the job or fails the pipeline if that's the case. We want to build that in, so that there is a quality gate of sorts, similar to SAST and DAST, of "hey, you can't merge this MR because you have new quality problems". So those are the big issues that we've heard about, and after that overview I wanted to open it up and see: are there questions? Are there other things that our internal customers have heard, or the team has heard, that I've missed from customers or internal folks, that we should be paying attention to and considering as part of this epic?
B
I think, going back to the middle one, the reports being noisy: we talked about this in our last call, and Drew and I talked about this quite a bit, but I think a lot of that's due to the comparison that the MR widgets are invoking, and this is a problem with the SAST and DAST reports as well.
B
When merge results pipelines are enabled, the two reports that it compares in order to determine what is new for that merge request are not quite right, and that's why a lot of times in the MR widgets you'll see "437 new violations found" or "new security vulnerabilities found" when it's really just a front-end change or something like that, so the MR widget data almost seems nonsensical. We had narrowed that down to the actual commits attached to the reports being compared as incorrect.
B
We have a plan moving forward to fix that, and it is in the current milestone. So hopefully we can fix that up for Code Quality and then shop that around to the other MR widgets, and see if we can help them out with those noisy comparisons as well.
C
Yep, I would say that was the feedback I've heard from engineers: when I'm looking at the MR widget (when it was enabled, I'll say), the number of issues is just very confusing. What is actionable to me within this MR? What is something I need to create an issue for and resolve outside? Those sorts of workflows weren't really clear.
B
Yeah, and I think a large part of that is because the way the comparison was working when merge results pipelines were turned on just wasn't quite right. Drew, you can correct me if I'm wrong, but basically we were comparing the report as it was generated in the merged results commit, the ephemeral commit, to the report that was generated at the commit where the merge request branched from. So basically anything new that happens in the main branch in between is getting associated.
A
Yeah, and if it works, it's an approach we'll take for all of our own widgets. I remember the accessibility report would have this problem; I don't think browser, no, maybe browser performance does as well. I can't think of the rest off the top of my head. Yeah, definitely.
B
Now that I'm saying this out loud, I'm nearly certain they all have this. There are a few that are implemented in kind of interesting ways that manage to sidestep it, but it's a broad problem; I think it affects the majority. The idea of the base and head pipeline is a little bit out of sync, so it should affect most of those widgets.
C
And the thing I would say I'm most excited for is seeing something in the MR diff on this. I don't know how that would be exposed, but once code coverage was turned back on, there were a lot of thumbs up. I think you all saw a few, but a lot of positive feedback there.
A
I'll link to it; JJ's working on design in the current milestone, and there's some prior design that we're starting from. I will caveat that it's pretty old, from when the issue was originally written up a couple of years ago, so this may evolve. I expect it will evolve to something that more closely matches our current standards and how we understand customers want to interact with the app, but I'll link that issue in as soon as I find it.
E
James, overall, with these three issues I think you really hit the nail on the head, especially the first two, which kind of lead to the third one. The teams that I've worked with haven't even really considered the third one so much, because of the problems in the first two.
A
Yeah, that was what I heard from, I think it was the Gitaly team: they really wanted to enforce code quality, but until you could understand how it was changing in relation to the MR, they didn't want to bother with it because it was just too painful, so they just kind of moved it along. The other interesting thing I've heard this week, from external customers, though I don't know that it impacts us internally or our internal customers, is a request to be able to handle multiple code quality jobs in the same pipeline and grab all those results together, similar to what we do with JUnit reports.
A
So there's a customer I talked to who has multiple linting jobs that run separately, and they want to pull all of that together. In both of these cases, I think, they're customizing, or reformatting, the output so that it matches the Code Climate spec, so that they can see it in GitLab.
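For reference, the pattern being described looks like this: a standalone linting job whose output is converted to the Code Climate format and declared as a code quality report artifact. The `artifacts:reports:codequality` key and the finding fields shown are documented GitLab and Code Climate behavior; the ESLint job and the conversion script are illustrative.

```yaml
# Illustrative: reformat a linter's output to the Code Climate spec so it
# appears in the Code Quality MR widget alongside the default scan.
eslint:
  script:
    # convert-to-codeclimate is a hypothetical script; it must emit a JSON
    # array of findings, each shaped like:
    #   { "description": "...", "fingerprint": "<unique hash>",
    #     "severity": "info|minor|major|critical|blocker",
    #     "location": { "path": "src/app.js", "lines": { "begin": 42 } } }
    - npx eslint --format json src/ | ./convert-to-codeclimate > gl-code-quality-report.json
  artifacts:
    reports:
      codequality: gl-code-quality-report.json
```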
A
The other one is a very unique case of sidestepping having to buy an Ultimate license, by using the open source SAST scanning and then reformatting that into Code Climate JSON so they can see it as part of the code quality report, and that is a required job as part of the pipeline. Right, super creative. But that is stomping on all of the other linting jobs that are happening, because those run before the security job runs, and it's a required bit of the pipeline that runs towards the end. So you see all of the security violations that are part of your change, and you see none of your linting errors, for front end, back end, whatever it might be. They want to be able to combine all of those reports. That problem is not currently one to solve as part of the epic, but those are highly upvoted issues externally and something that customers are asking for, so it might make an appearance in this epic as we move along, just to also solve that problem externally.
C
I had a question; it might be getting way ahead here, but related to just dogfooding. I know there's the MR to bring it back into the GitLab project. Have you all considered starting with a project that might have a simpler workflow? We talked about how merge results pipelines are a problem; I'm sure there are a few satellite projects that don't use them. Maybe that's already happening and I'm just not aware of it. Is that something that's being talked about or considered?
A
Yeah, there are a couple of different dogfooding issues out there, so we're talking to a couple of different groups about starting to use features as we release them.
B
I think we've previously worked with Runner, and James is mentioning that he was talking to Gitaly, so we do kind of shop our ideas around to smaller projects first. I think Runner had the coverage stuff going before we made the MR, or you made the MR, to get it into the main GitLab project. I don't remember how that all played out, but...
A
I can't think of anything off the top of my head. For me, the next two things I'm thinking about dogfooding-wise, and trying to understand why we don't have more widespread adoption internally for categories that are minimal, are usability and browser performance. So I'll start working with product managers on: hey, are you starting to look at this? Are you using Visual Reviews? Are you using Review Apps? What is a better way for you to get feedback? Why aren't you using these tools?
A
I'm kind of laser-focused on that persona for that feature, because it's one that is near and dear to my heart as a product manager: a much easier way to leave feedback on an MR, especially a visual-type component, and how maybe we can partner that up with some design features and really get those two things working together well. So it's a long way of saying no, I don't think we need your help on that one right now, Kyle. And then browser performance...
A
I went through a similar exercise of talking to some of the folks on Quality about browser performance, the new pipeline that was put into place to do that testing, and where there were gaps in our solution for it. I think we've identified those, and they're sitting in the epic.
D
Hey, sorry I'm late; I was in the SaaS call, and I'm jumping in where I can. No worries.
D
Now I'm following along. The next iteration of test file finder is landing, I think, so I'm catching up async, but yeah, looking forward to seeing if that's adding to the cost reduction. We've seen that if cost reduces, it has a ripple effect on duration as well, as long as it's not parallelized.
C
Yeah, and just briefly: I think we're not achieving the short-circuit rate that we anticipated, so we're looking ahead to the end of Q3, start of Q4, to look at the dynamic test mapping portion as well, something that can give us a little bit better mapping, so that we can hopefully detect errors, or at least start to refine the number of jobs that we run within an MR.
C
We've implemented a base solution and we're measuring the results. Albert and I need to come up with a Sisense measure on that hit/miss rate: how often is the short-circuit job passing while other RSpec jobs are failing? If we're doing unit tests, for example, how often are our unit tests failing? And then we'll try to figure out how to refine the mapping, or just move towards that dynamic mapping.
C
Yeah, good question. We're actually hitting that right now: because we essentially added a new job, there's a small increase, and that's almost offsetting the decreases that we're seeing when we short circuit, because every pipeline now probably has about two and a half cents for that job. I think that's our average, so this job is a little bit higher than the average build, I should say.
C
The average build price for a pipeline is two and a half cents, so we're hoping to take a small increase now so that we can see a bigger decrease in the future, in the next two months or so.
D
I think we will need to move to some form where you enable it for a while, gather data, and from then on use that to help make better decisions on what tests to run. I would be surprised if you take that path and it doesn't add on to the benefit once you have it on for a while.
C
I would also expect, once we start testing smarter, just to put a general name to it, that we're going to see more failures in master, because that will immediately surface areas where our mapping isn't good enough.
C
So our pipeline stability is going to decrease as well. We're trying to figure out the right balance between price, stability, and duration; those are three different things that we can tune in different ways, but you usually have to give on one to get one on something else. Not always, but there's a trade-off there.
C
Yep, and that's part of the plan to implement dynamic mapping, where we're doing both in parallel. We're likely going to do it for a month; that's what we've been talking about internally. We'll just see what the results are as we do it, but we want to feel really comfortable, because our team is the first line of defense when there is a master failure, and in the U.S. afternoon hours it's just me.
C
So I especially have, I'll say, a vested interest in making sure it works out. Not that other people don't, because lots of people help out, but yeah, I want to make sure that my afternoons are not spent fixing master failures all day.
D
Sorry, I was typing out a reply. Now, just a slight reminder: I see that we're doing this in the main repo; just make sure that we're adding value broadly and that it can be used in multiple cases.
E
That is something I kind of had in my backlog, Mek, because I was involved in the initial part of it, and I'm actually meeting with Albert tonight. Are you meeting with Albert tonight too, Kyle?
C
I am, and I have a gut feeling that it's not so. A lot of the logic that we have in GitLab is to short-circuit pipelines on jobs that are leveraged very highly, where we have lots of parallel builds, and in the customers portal there's just not that test volume. That's not to say that we can't, but I think the current iteration that we have is likely not going to benefit small projects that don't have 20 different builds running. Where there are 20 builds simultaneously running the unit tests, maybe a project like Gitaly or Runner, it could benefit, but you have better insight into that.
E
No, I think you're right, and that was my initial thought about it, but I felt like I had to do my due diligence and take a better look at Albert's implementation to see if there could be some overlap. My first impression was no, there isn't some overlap here that we could make a lot of use of.
B
Yeah, I feel like the larger the project, the more tests, and the longer it takes to run a pipeline, the more likely you're going to be able to see a benefit from short-circuiting that pipeline, in terms of cost and in terms of saving developers time.
C
Going back to what we talked about earlier, that Mek missed: I was going to record a video with Albert talking through the components and the implementation, and then we were going to regroup with the Testing team to look at what can be brought into the template. Then I think it would be easier to make the evaluation of whether it could be leveraged by other projects, versus just a general gut feel, which is what we're talking about right now.
B
I wonder if, after your experiment is done and we figure out how often on average we get a short circuit, we could plug that probability into a simulation and then use that to determine where the cutoff would be.
A
Mek, I like your suggestion on the slight tweak to the internal customers. I'll get an MR open to tweak that today, to have Kyle's group be the primary, and Joanna's group be just another potential internal customer stakeholder.
D
Yeah, thanks for that. So yeah, Seth, I didn't mean to add additional pressure; just set a due date, and if it needs to move, just move the date. There are a lot of things going on within your team, so I want to make sure that I'm sensitive to that. My read from the handbook is that Kyle's team is doing great work, and from the handbook it sounds like Joanna is doing double duty being a QEM counterpart.
D
So we can divide up that role, and, I hate to use the term divide and conquer, but make it more specific and more focused on those roles, with Joanna as the main counterpart to this group, and it sounds like that's what Kyle is doing.
D
And there's a lot of good feedback coming in there. Thanks for that.