Description
Today was an AMA-style discussion.
We talked about the next epic the team will focus on (Code Quality: https://gitlab.com/groups/gitlab-org/-/epics/3686) and Kyle provided an update on the next steps Engineering Productivity is taking to run fewer tests.
Link to last month's call: https://youtu.be/7Y8CCoeh-pQ
A
This is the Verify:Testing internal customer call for October 2020. I'm going to jump right into the agenda, since we're starting a little late. Today I've called out a couple of very small changes to the roadmap deck. We've wrapped up the Unit Test Report improvements epic and all of the work associated with it, and there were actually a couple of follow-on issues that didn't get added to the epic that are getting wrapped up in this milestone.
A
One of them was taking the traces off of the page and putting them into a modal, so that page is a lot smaller now and easier to navigate, which is great. We got tagged on Twitter on the day of the release by a user who hadn't seen the change in the release post, because it wasn't in the release post; it was merged to GitLab.com that morning. Awesome work there by the team.
A
That means we have opened up and started working on the next epic, Code Quality: resolving the open dogfooding issues and moving the category's maturity to viable. We talked through some of the problems we want to solve with that epic in our last internal customer call, so I will link to that at the bottom of the video.
A
If anybody wants a refresher, they can watch that discussion, and I'll link to the epic as well so that you can quickly get into it and look at those issues.
A
I didn't have anything specific to talk through today, so I was going to do this in more of an AMA style. One of the topics I included that I had questions on was what's next for test file finder, and I see in the agenda that Kyle has already started to provide us an update there, so we'll jump right into that first.
A
We could also talk through the Code Testing vision designs that Wanda and I worked on right before he left, and the future of code coverage reports, which is a Think Big the team is going to be talking about next week. On that note, we'll jump right in. Kyle, can you give us an update on dynamic test mapping?
B
Yes, thanks Joanna, I'm pretty sure you added this, yep. I apologize for all the children in the background; I have my four-year-old and my not-quite-two-year-old home with me today. So that's the background noise.
B
I will point you to what I'm highlighting here, the subtasks; that's the plan for where we're going with dynamic mapping using Crystalball right now. Thank you to Drew for reviewing the JSON mapping MR.
B
That MR is going to allow us to take advantage of the Crystalball JSON output, in addition to the YML map that we use for the RSpec fail-fast right now. So we're already generating the map with Crystalball, which is great, and there are two tasks we're going to be looking at to take advantage of that map.
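For context, a minimal sketch of the standard Crystalball setup that produces such a map during a full RSpec run (the default output is the YML map mentioned here; the JSON output is the newer work):

    # spec/spec_helper.rb -- sketch of documented Crystalball map generation.
    require 'crystalball'

    # While the full suite runs, record which source files each example
    # executes; the resulting map is what later predicts affected tests.
    Crystalball::MapGenerator.start! do |config|
      config.register Crystalball::MapGenerator::CoverageStrategy.new
    end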
B
One of them is to have Crystalball work where we need more than just one build job, so we have to do some groundwork to say: okay, for example, there are 60 tests that are going to run; let's break that into three executors, 20 on each of the three. After we do that, that's when we're going to start looking at the miss rate.
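As an illustration of that groundwork, a hypothetical splitter, assuming a predicted spec list written to tmp/predicted_specs.txt; CI_NODE_INDEX and CI_NODE_TOTAL are the variables GitLab CI provides to parallel jobs:

    #!/usr/bin/env ruby
    # Hypothetical: distribute a predicted spec list across N parallel CI jobs.
    specs = File.readlines('tmp/predicted_specs.txt', chomp: true)

    total = Integer(ENV.fetch('CI_NODE_TOTAL', '3'))
    index = Integer(ENV.fetch('CI_NODE_INDEX', '1')) - 1 # GitLab CI counts from 1

    # Round-robin assignment keeps the groups evenly sized (20/20/20 for 60 tests).
    my_specs = specs.select.with_index { |_, i| i % total == index }

    exec('bundle', 'exec', 'rspec', *my_specs) unless my_specs.empty?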
B
So, thanks for the question down at the bottom: we don't have anything at the moment, until we implement those items, but I'll make sure to mention to Albert to add our measures to the same dashboard. We may rename it, but the link should still work in Sisense, because it's based on the dashboard ID.
A
Just on the dashboard the team is using to measure the success of this: there's cost on there, which is great. I'm just scrolling through; does this also include the false positives, or false negatives, I don't remember which way we're tracking it, of tests that were identified as safe to skip but that then failed in the full run?
B
Yeah, so I thought that it did, but I don't see it in here now, so maybe I'm looking at the wrong dashboard, or maybe it was... oh.
A
Is it the fail-fast miss rate?

B
The miss rate, yeah, yeah, I'm sorry, it's split on a half screen and it was just a little hard to see, yep. So the miss rate is that, and it's split out by, I'll say, test level, because our general approach, our hypothesis, is that the miss rate for unit tests is going to be a little bit lower than for migration or system tests, for example. So we're just going to iteratively tackle each level.

A
That's awesome, and that helps us with that.
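To make the metric concrete, a sketch of how a miss rate like this could be computed; the JSON report format and file names here are purely illustrative:

    require 'json'
    require 'set'

    # Hypothetical report format: a JSON array of failed spec identifiers.
    def failed_specs(path)
      Set.new(JSON.parse(File.read(path)))
    end

    predicted = failed_specs('predicted_subset_failures.json')
    full_run  = failed_specs('full_suite_failures.json')

    # A "miss" is a failure the full suite caught but the predicted subset skipped.
    missed    = full_run - predicted
    miss_rate = full_run.empty? ? 0.0 : missed.size.fdiv(full_run.size)
    puts format('miss rate: %.1f%%', miss_rate * 100)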
B
This is just focused on the fail-fast implementation right now, just for awareness; none of it is the new Crystalball mapping. Okay, since I hopped around in my update, let me go back to Ricky's question here. Ricky, do you want to vocalize it?
C
Yeah, sure. I was just wondering if there were any thoughts on, instead of using Crystalball, which is a Ruby-exclusive gem, kind of using the dynamic process analysis that Shopify was using to determine which tests to run, you know, from that blog post. Any thoughts about doing that instead of Crystalball?
B
Yeah, when we started... I think that was Rotoscope, if I remember right.
B
So Albert experimented with that, and we found that Rotoscope initially created something like a 12-gigabyte file and took a really long time to generate, so we just narrowed our area of focus. That's not to say we'll never expand back; we really want to look at the results first. Crystalball was just the easiest to start with for our use case, and I think Rotoscope is something that will definitely help with the general product here.
A
Yeah, it looks like our use case is similar to Shopify's: you know, 150,000 tests, growing 20 to 30 percent annually (I don't know what our test growth rate is), taking 30 to 40 minutes to run. That all sounds like familiar pain points when it comes to running the full pipeline. So my guess is we would benefit from a similar approach, and so would a large chunk of the user base when it comes to genericizing this and building it into the product.
B
And I just want to reiterate that that's not to say we won't look at Rotoscope. We really are still early in seeing what the results of even Crystalball will be, so if the results aren't good, then we can go back and look at Rotoscope and TracePoint a little more closely. It was just a call, a decision I made: let's not spend time on that right now; let's focus on the things we think will give us results faster.
A
Cool. So we still have... oh, go ahead, Ricky, sorry.

C
I just wanted to say that I think focusing on those things is worth it: even though a lot of our customers won't benefit from it, the ones who would benefit are going to be the big customers, right?
C
It sets us up for success with people who are on Ultimate and who have large repositories and a lot of huge pipelines. So even though it's not the 80% solution, there's still a lot of value in that kind of niche segment, I think, yeah.
A
Yeah, I think our monorepo users, too, are going to love that, right. So, just looking at our active epics, we still have an epic open to dogfood fail-fast testing, and some work that we thought was required there that the testing team would do. At this point, Engineering Productivity has taken on most of the work of continuing to iterate on this and working towards dogfooding it.
A
What else is in this epic? Are there things that you are waiting on us to provide, or where we can add leverage for your team to move faster while at the same time building this into the product? Or should we take a step back, close this epic out at this point, and pick it back up later?
B
Excellent question, and I unfortunately don't have an answer right now. I can't see the epic offhand, but I'll try to find out; I'll take the action to review it and see what things could be prioritized by the Testing group that would help us, and go from there. Thank you for linking it in the chat, yep.
B
Yeah, sounds good. Sorry that I can't give you a better answer.

A
Oh no, sorry to spring that one on you!

B
No, this is the time to do it, so no need to apologize for that. Does that help as far as an update? We're still early. How can I provide updates ahead of the monthly touch point? Would just Slack updates in #g_testing be okay, saying hey, this is what we're seeing, or here's where we're at for progress?
A
A Slack update, or just updating the issue; I think keeping track in that issue you linked to initially, on the steps, would be the best way for us to stay up to date async.

B
Okay, much appreciated. Thank you.
B
Yeah, we will do that, and we'll be sure to mention at least James, you, and Ricky directly. Okay, cool, awesome, that's it on test file finder. I had more of an AMA-style question; it's kind of in the Engineering Productivity vein.
B
You've done some really good work on improving some of the functionality. Where are we not using something that we should, something that maybe has recently been refined a little bit more?
A
I'd say the next dogfooding initiative we have is around Code Quality, and yeah, we talked about that last month. Really, the focus is on enabling developers to utilize our Code Quality template in their day-to-day workflow. We've really focused in on what we heard from GitLab developers: I see this data, it's here, I want to make use of it, but I can't get ready access to it in code review and it's not gating the merge. So we have some items in the epic that will move us towards being able to do that and make this a viable alternative to something like SonarQube when it comes to code quality. On the security front, they're already there, and we can follow a lot of that playbook.
A
So things like incorporating code quality data into the MR diff are on that list, along with being able to set a threshold for the level of findings you want to see and above, and then gating, blocking the merge on code quality decreasing overall. Those are the ones that stick out in my mind.
C
One thing in terms of stuff we've already done that you can take advantage of right now: Drew just enabled a feature flag for all of gitlab.com that changes the base comparison commit for the code quality diff.
C
So now the merge request widget for code quality should actually be accurate for all of the pipelines on GitLab that use merge trains. Before, the problem existed for merge trains where it would show something like 700 code quality violations in every merge request, even if you only touched one file, but now what's in that widget should be completely, 100% true.
C
So if there's anything your team can think of for how we could use that, even in terms of, I don't know, Danger bot or something like that, as a stopgap until we get it built into the application where we can stop things from being merged at the pipeline level: that's completely open now and should be really accurate.
B
I will bring that up in the next team meeting. Maybe we just build something that starts to condition, I'll say, developers: hey, code quality checks used to not be super valuable, but now they are. So let's do at least a warning, or let's do a gate that says you're introducing a new critical code quality finding. That'll allow feedback to come in where someone says, I'm confused by this, whereas right now engineers have probably just tuned it out.
C
Yeah, and that's the unfortunate thing that happened, and it was kind of slow, too. As soon as we turned on merge trains, we started conditioning developers to ignore those merge request widgets, just because the base comparison wasn't quite right when you have an ephemeral commit. But now, with this new change, the comparison should be true, and we can start to undo that conditioning.
C
But like you said, it's going to be a process, because it's a lot harder to do that than it is to condition them to ignore something.
B
Yeah, I'll put that on the agenda for our next team meeting, just to see what ideas everyone has. Even just having Danger do an API check to say there's a new code quality violation that's medium, high, or critical, bringing additional attention to it and then linking to something else, like the epic that shows this being improved, will, I think, help nudge people to take the merge request widget more seriously.
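A minimal Dangerfile sketch of that idea; new_code_quality_findings is a hypothetical helper standing in for whatever API query would back it:

    # Dangerfile (sketch): surface new code quality findings of medium
    # severity or above, and link to the improvement epic for context.
    SEVERITIES = %w[medium high critical].freeze

    # `new_code_quality_findings` is assumed to return the MR's new findings,
    # e.g. from GitLab's code quality report comparison for the merge request.
    findings = new_code_quality_findings.select { |f| SEVERITIES.include?(f['severity']) }

    unless findings.empty?
      warn "This MR introduces #{findings.size} new code quality finding(s) " \
           'of medium severity or above; the widget data is now accurate. ' \
           'Context: https://gitlab.com/groups/gitlab-org/-/epics/3686'
    end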
C
Yeah, we're also hoping that with that change we can help other teams with their widgets and make them a little tighter, too. We want to incorporate the same comparison into the rest of our widgets; I know the Metrics Report widget is basically completely ignored as well, and we should be able to tighten those responses up with the same approach. The feeling is that we should also be able to help with the SAST and DAST widgets.
B
I thought I saw Drew share an example of an MR that shows that the checks are valid. Do you remember where he did that, so I can just link to it in the agenda?

C
Yeah, the EP team agenda; I'll post the epic.

B
Awesome.
A
I would say, continuing to answer your question: there are templates for accessibility that we'd love to see being used by the team, and browser performance, I think, is used in part.
B
Yeah, I know our performance testing does a lot of very specific testing; Grant and Nailia have a really good handle on that. But yeah, I'm not sure we're using the functionality that you all have in the product. It's almost like what we're doing with test file finder: I think we started with some of the same pieces, and I could be way wrong, so apologies if that's the case.
B
Okay, yeah, I guess I don't really have any other questions.
B
I just wanted to recap: I've seen a lot of good improvements, but I wasn't sure if there were things we should be using that you see we're not. Code quality was top of mind, so, yep.
A
Code quality is going to be the next one. And an ask for you: we're working on the category maturity scorecard for Code Testing and Coverage, so I'd love to get a couple more developers lined up to walk through a task so that we can do the scorecard with internal developers, GitLab developers, because we want to move the category from minimal to viable, and that process involves interviewing internal customers.
B
I'll put that in the update in the team meeting. I have a few people in mind, but let's see what the team says. I'll get back to you next week.

A
Perfect.

B
Unless you need it before then?
A
No, that's fine. Going into the U.S. holidays, the North America holidays, we expect things to slow down a little bit, and we already missed the Q3 time frame, so it's our Q4, or whatever the quarterly calendar is. As long as we get it done this year, I think we'll be in a good spot.
B
Cool. The other thing to keep in mind, or the other thing I was going to share, is that the Engineering Productivity team has had some discussions on moving towards real merge trains for the monorepo. Right now we do merge results pipelines, and I don't think merge trains will impact the things we're talking about, but we're talking about our team fixing some of the bugs that are preventing us from using them, so we can move forward with that.
B
Do you know if that's going to cause any impacts to the widget work or anything you have in flight that we use internally, that we should look out for?
C
Well, I think the problem was that it's so finicky, too. If you look at Drew's diagram, it took a diagram to understand what the problem actually was; that's how insidious it was.
B
And there was much iteration on those diagrams. Cool, and yeah, I was just going to say we really appreciate, like you said, doing that and then having it be a pattern for all the other merge request widgets, so we can hopefully start to shift that behavior around the merge request warnings in the widget.
B
Yeah, it's super interesting because, with former co-workers, I always hear that merge results pipelines aren't used, but anyone I work with, when I talk about how we use merge results pipelines, is like, what, GitLab can do that? They're just blown away. It's almost a huge driver for developers, but the people that sales would be interacting with are probably just like, okay, you know. It's really interesting.
C
So we're talking about merge result pipelines, and how really, as soon as a developer understands what it actually does, they're super excited about it and they want it, but the name doesn't convey the meaning. We had the same problem, I think, with the JUnit report results: because it's called JUnit report results, when people read it they'd just be like, oh, it does stuff for Java, great. But when you actually find out what it is, they're like, oh, that's really cool.
B
Yeah, we can work on stuff like that; let's take a note of them for sure. I think it's been colleagues at three different companies that I've talked to about this, because they're always like, well, what's it like working at GitLab? And I'm like, well, one of the things we do is... and I eventually get to merge results pipelines, and it just kind of makes their heads spin a little bit. They're like, that's so awesome.
B
Yeah, same. Cool, okay, I don't have anything else other than just rambling, which would probably take some of your time that you could be spending better somewhere else, so I'll just put that out in the world.
B
Anything I can help with? Things that are in flight from Engineering Productivity that are maybe unclear?
A
A lot of it still actually has to come into existence, yeah. I'm really optimistic; I mean, we have been working to slim down the milestones a little bit over the next couple of months, but I'm really excited about some of the stuff we have coming up to help dogfood Code Quality, and about getting screenshots from failed tests onto that pipeline tab, or onto that pipeline page. That one will be really nice, especially since I think I've been talking about that feature since my third day here. We will get it done, and it is valuable; we know it is.
B
And when that's done, I will say: we capture screenshots from system tests right now and store them as artifacts. So bring me in, and we can see how Engineering Productivity can just default that feature in for all of our pipelines. I use those screenshots all the time in diagnosing master-broken incidents or flaky tests, or something like that. Yep.
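For reference, a minimal sketch of the kind of hook that captures those screenshots, assuming Capybara-driven feature specs; the directory and naming are illustrative, and CI would collect tmp/screenshots as an artifact:

    # spec/support/failure_screenshots.rb (sketch)
    RSpec.configure do |config|
      config.after(:each, type: :feature) do |example|
        next unless example.exception # only capture on failure

        # Write somewhere the CI job can pick up via its artifact paths.
        name = example.full_description.downcase.gsub(/\W+/, '_')
        page.save_screenshot("tmp/screenshots/#{name}.png")
      end
    end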
A
All right, well, we'll give everyone some time back. Thank you so much for the questions and the commentary. Great discussion, as always.