From YouTube: Verify:Testing Group Think Big #8 (Code Coverage)
Description
Today we think big about code coverage reports: who uses them, getting data out of them for developer and team lead workflows, and how we can provide a great experience for users leveraging multiple GitLab stages.
A
This is the think big session for Verify:Testing for November 2020. One week we do a think big, and then the next week we do a think small. So, getting back into these, today we're going to talk about code coverage reports. I have a link to the issue, which I don't think has much more information beyond what is in the agenda that we're looking at.
B
Well, the developer's perspective is to be able to easily see what code I have not written tests for, right? That's helpful in a few different scenarios. One is the overall project: what percentage of our repo is covered by tests?
A
So it sounds like you talked about a couple of different workflows there. You talked about the overall project repository: what's the coverage like? And then seeing it as part of a code review workflow. What other workflows would this report be handy in?
C
Yep. To me, the coverage report itself, the feature, as a developer, is more a visual tool that tells me right away whether my code is tested or not. And as a reviewer it's super helpful, because I can see where the piece of code introduced was not tested, and which parts are tested, and I am now confident that we can ship that piece of code, because we added this coverage. Right? Cool.
A
So it sounds a lot like the existing feature that we have, the test coverage visualization, of seeing it as part of the code review process. I want to circle back to looking at a project holistically. There are a couple of different bits of data that come out of the Cobertura report, or could come out of it. You could get raw line coverage, function coverage, and I don't remember what the other ones are, but there are a couple of different stats that can come out of that.
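For reference, a Cobertura-style XML report carries those stats as attributes on each file's `<class>` element. A minimal sketch of pulling per-file line rates out with Python's standard library (the sample report below is invented for illustration):

```python
import xml.etree.ElementTree as ET

# Invented sample in the Cobertura report shape, just to show the attributes.
SAMPLE = """
<coverage line-rate="0.75" branch-rate="0.5">
  <packages>
    <package name="app">
      <classes>
        <class filename="app/payments.py" line-rate="0.95" branch-rate="0.9"/>
        <class filename="app/newsletter.py" line-rate="0.40" branch-rate="0.25"/>
      </classes>
    </package>
  </packages>
</coverage>
"""

def coverage_by_file(xml_text):
    """Return {filename: line-rate} for every <class> element in the report."""
    root = ET.fromstring(xml_text)
    return {
        cls.get("filename"): float(cls.get("line-rate"))
        for cls in root.iter("class")
    }

print(coverage_by_file(SAMPLE))
# {'app/payments.py': 0.95, 'app/newsletter.py': 0.4}
```

The same walk works for branch rate or any other per-file attribute the report carries.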
D
I was just typing up my thoughts here. The overall health, I think, is the important thing as an engineering lead, so you can focus on areas that you might want to schedule some tasks for. The other thing that I did a lot in the past was: okay, here's my project, I know which workflows and which files are the most risky.
D
Sending a newsletter probably doesn't need as high coverage as the part where I'm taking a payment. And I'm not saying that sending a newsletter doesn't matter; that was a bad example. I was trying to think of something that was really not important, but newsletters can be important to the business as well. Something that would be neat would be if I could highlight certain files or workflows as being high risk, and have different rules that apply to those workflows. That might be kind of nice, so I could say, hey:
D
If anything is committed to this file, this person needs to review it. And, as far as coverage goes, the test coverage cannot dip, but also this person is a special reviewer for it. If we could have that type of annotation, that'd be really neat. We kind of do that with code owners, I think, so what if we combined code owners plus special coverage rules for certain sections?
D
That might be neat. I think that's something I was focusing on a lot at my last job too: hey, there's this part of the workflow where we actually go and create a subscription with somebody's tokenized credit card information.
E
Yeah, business critical, right. I think your example was good; you're being a little hard on yourself. There are things that are business critical versus not. You think about the things that get into production: you want to have maybe some more guardrails, some more stringent gates there, and that's cool. It's interesting, because I was thinking about the individual developer (what do I gain from this?) versus an aggregate level, or a manager or team lead.
A
So the code coverage report, as it stands today, is kind of a snapshot in time: you see what the coverage was the last time the report was generated. Where are there gaps there, potentially? Or how can we improve on that view?
D
I think, much like a lot of our reports, we just host an artifact that people can go and look at, but we don't have a holistic "this is the code coverage dashboard" report. Same with code quality: you can go into a pipeline and look at the code quality for an individual pipeline, but there's no project-level code quality page that has all the code quality information. And the same with testing and coverage.
D
It's more just: well, the pipeline did a thing, and then we smushed that thing into the pipeline, or we smooshed that thing into the merge request. Which are great, but I think the next level to build on top of that would be at the project level, and we're already kind of thinking this way with coverage reporting at the group level. So it would be interesting to have a finer-grained view, like I was talking about.
D
Okay, here are all the files: I want to ignore this file in the UI, I want to pay special attention to this file in the UI, and have that remembered. That's really what SonarQube does too: they just give you the files, and then you have special abilities to say, "no, I don't care about this code quality warning specifically," and it remembers, because that's the kind of application it is.
A
Yeah, so it sounds like a lot of that is based on work that you, as a developer or as a manager, have done to set this up and get this out, and then it's on you to interpret those things. I'm skipping ahead, because we've talked about who has this problem, who's doing this job: we've talked primarily about the developer and the team lead.
A
But say, as the team lead, you just started up a new project, or moved an existing project into GitLab and maybe ran a pipeline just to get it all running. Part of the output of that could be: here are the top 10 files that you should look at from a quality perspective, because of low coverage, low quality, whatever else we have, so that you could then just go start creating issues from that.
A
Based on what GitLab knows about the project. You may already have an inkling about that, because it's your team's code, but it might be that reinforcing bit, so that you then trust GitLab to say: okay, here's where the tech debt is and where I should go.
D
I think, if we're going to do gating, it has to be really, really easy for the engineer to figure out specifically which line in which file is preventing this from happening. I know, in previous tools that I've used, you kind of throw it up there and then it just gives you "oh well, the code coverage went down," and it's like: great.
D
Where? And what test do I need to write in order to get that back up? So: as specific as possible, to let the engineer go and fix the problem. It would be super neat if it even said, "you should probably add a test in this test file for this source file." Even that extra step, so you don't have to go, "okay, well, this isn't covered, but where should I put this test?" That'd be super neat.
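A naive version of that source-to-test suggestion could be a pure naming convention. The sketch below is a hypothetical heuristic, not how any GitLab test file finder actually works; the `src`/`tests` layout and the `test_` prefix are assumptions for illustration:

```python
import os.path

def guess_test_file(source_path: str) -> str:
    """Guess the conventional test file for a source file.
    Assumed convention (hypothetical): src/pkg/mod.py -> tests/pkg/test_mod.py."""
    directory, filename = os.path.split(source_path)
    parts = directory.split("/") if directory else []
    if parts and parts[0] == "src":
        parts[0] = "tests"  # mirror the source tree under tests/
    return "/".join(parts + [f"test_{filename}"])

print(guess_test_file("src/billing/subscription.py"))
# tests/billing/test_subscription.py
```

A real implementation would also check that the guessed file exists in the repository before suggesting it.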
A
But yeah: if you add code to a file and it's already mapped to a test file, you could say, hey, as part of the workflow it would block, it would say that code coverage has decreased, and there would be some sort of tip or hint: here is the file you probably need to add a test to, because of the mapping that you've provided.
B
Portraying that to the user, though: if they write a test that covers that line, but it happens to be in a different file, is it going to fail? I worry about being too prescriptive here, or too opinionated on that spot. I know that's getting a little nitty-gritty; it's just a good question to push on.
A
Yeah, I think we could probably constrain later rather than think small now. Today is really about what the ideal could look like. Ideally, we could just say "generate the test," right?
E
Yeah, I think suggestions always go a long way. Just from research into the persona: developers especially have a lot going on all the time, context switching and all that. So sometimes we look at being prescriptive and opinionated as a very risky thing, and I think they actually welcome it. I think sometimes they even welcome over-prescriptive approaches, because at least they know.
A
Cool, so we have some great thoughts here on the ideal outcome for a developer using a code coverage report: getting some additional information out of that report back into their day-to-day workflow. How about that team lead persona? What does an ideal outcome look like for them, stepping back out to that "here's your project" view?
B
I think the overall coverage number is nice, but it's a little opaque. It'd be great for the lead to be able to drill down a little bit on the sections of the app they care about more, like in Ricky's example: hey, we care about this payment section more than other sections.
A
That matches up with a lot of the conversations I had doing a first round of problem validation around code coverage reports. People want to see not just the overall project coverage; I need that next step down, so that I can go find those files and figure out where we can quickly improve coverage, or where we might have problems with persistent bugs.
D
Yeah, I kind of like that idea of honing in on a certain section of the code, whether it be through setting up a config in the .gitlab-ci.yml, or building a full-on thing into the project's pages where you can go and set up a configuration that says: hey, any file that matches
D
this regex string, like *payment*, needs more tests; that's high priority, or that's medium priority, or that's low priority in terms of code coverage. Then, further, being able to set up what that means in the project would be kind of neat. So: if it's high priority, you cannot decrease the coverage at all. If it's medium priority, you can decrease it by two or three percent; that's probably fine. If it's low priority, I'm not even going to apply a coverage gate to the merge request.
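A tiered gate like this doesn't exist in GitLab today; as a sketch of the decision logic being described, with the regex patterns and thresholds as invented examples:

```python
import re

# Hypothetical priority rules: (pattern, max allowed coverage drop in points).
# None means "no gate at all" for files matching that pattern.
RULES = [
    (re.compile(r".*payment.*"), 0.0),  # high priority: no decrease allowed
    (re.compile(r".*api.*"), 3.0),      # medium: a few points of drop is fine
    (re.compile(r".*"), None),          # everything else: no coverage gate
]

def gate_allows(filename, old_pct, new_pct):
    """Return True if the MR may merge given the coverage change on one file.
    The first matching rule wins, like .gitignore-style matching."""
    for pattern, max_drop in RULES:
        if pattern.fullmatch(filename):
            if max_drop is None:
                return True
            return (old_pct - new_pct) <= max_drop
    return True

print(gate_allows("app/payment_processor.py", 95.0, 94.0))  # False: high priority
print(gate_allows("app/api_client.py", 90.0, 88.0))         # True: within 3 points
print(gate_allows("app/newsletter.py", 80.0, 10.0))         # True: no gate
```

First-match-wins ordering is a design choice here: the most specific, strictest patterns go first, with a catch-all last.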
D
See, that's the tricky part, and it'd probably be pretty language dependent. What I would want to do, in a perfect world that doesn't exist, is be able to identify code paths or workflows: give me a trace and I'll show you all the things that trace runs through, and this trace cannot have any coverage decrease on it.
A
That's an interesting approach, though: here are all the paths it goes through, even potentially across projects, and saying, this is a business-critical workflow and we're going to apply things at the workflow level, or the value stream level, versus just thinking within a project and within files in the code base. That'd be very cool.
A
So, thinking big: that is a pretty big one, I think, and that's what I'm going to take away from this. Let me identify paths through my application, potentially spanning groups, projects, whatever, and apply a code coverage requirement to that path. How could we differentiate what we're thinking about from some existing solutions out in the marketplace? I called out Sonar.
A
I like that idea of the suggestion: hey, here you have decreased coverage; I see the new code, I see where you've got an existing mapping. Let me at least tell you where you're likely to be able to add a test quickly, even if we don't provide you with the full-on test.
D
Even if you had a kind of default: oh, we can tell that they're using Jest, so here's the default Jest test; this is what it looks like.
D
If you want to talk about ML, ML algorithms that classify things: they could look through the test directory and classify it. Well, this is probably Jest, or this is probably Go, so they're probably using Ginkgo or something, and then, okay, here's what a Ginkgo test maps to, in terms of a blank test.
A
What if we took that ML a little bit further and said: hey, we see that you're creating incidents in these projects, and the follow-ups from those are MRs fixing tests or fixing code. Maybe your unit tests weren't testing enough, or something. It's probably the integration testing at that point that is flubbed, and not the unit tests, but it might be something to think about.
D
I wonder if there's a simpler thing: as you report incidents, I wonder if we could tie them to specific files or something, and then you can say, hey, this incident was caused by this line in this file. That way, we can build that into the project-level view of what's important and what's not. If this one file is causing a lot of incidents, then hey, that's probably a really important file.
A
Yeah, I think that goes back to that work stream, or that code path, too: going down this path we've had multiple incidents, so we need to think more about the resiliency of this code and apply that somehow. It needs multiple code reviews, not just one, as things are going out.
D
Yeah, I just started thinking, when you were talking, about leveraging the single DevOps platform: sewing a thread through a bunch of different aspects of the application, or of the feature. If we can do that enough times, it becomes more useful. So think about, like you said, there's an incident: think about tying that back to a file, and think about exposing the coverage of that file in the incident view. Say, hey, there's an outage caused by this line,
D
and the code coverage is 100 percent, or it's 20. The other thing I was thinking about: you're browsing the code and you have a little indicator, "hey, this is x percent covered." Maybe you want to expose, like you said, a view that shows the files by coverage, one that would take into account, maybe, the ignore settings.
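A files-by-coverage view that honors ignore settings could look something like this sketch; the glob patterns and the per-file numbers are invented:

```python
from fnmatch import fnmatch

# Invented per-file coverage percentages and ignore patterns.
COVERAGE = {
    "app/payments.py": 95.0,
    "app/newsletter.py": 40.0,
    "config/settings.py": 0.0,
    "app/api.py": 62.0,
}
IGNORE = ["config/*"]

def files_by_coverage(coverage, ignore):
    """Lowest-covered files first, with ignored paths filtered out entirely."""
    visible = {
        path: pct
        for path, pct in coverage.items()
        if not any(fnmatch(path, pattern) for pattern in ignore)
    }
    return sorted(visible.items(), key=lambda item: item[1])

print(files_by_coverage(COVERAGE, IGNORE))
# [('app/newsletter.py', 40.0), ('app/api.py', 62.0), ('app/payments.py', 95.0)]
```

Filtering before sorting is the point being made in the discussion: an ignored file never shows up in the "worst coverage" list at all, rather than appearing with a zero.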
D
So if we're ignoring files, then we're not going to shove it in your face that there's not very much coverage on that file, because it probably doesn't matter; you've ignored it. Being able to tool the ignores into the application while you're viewing stuff would be really neat, as opposed to a lot of tools which, when they present it to you, don't take into account all the other touch points you've had with the application where you've told it something about code coverage, especially when you have plugins and Jenkins and all your stuff's all over the place.
A
Yeah, "this does not need coverage." Hey, we were talking earlier about that generated report of, you know, here are your top 10 files that need some sort of work on them. Thinking even a little more broadly, it could combine code coverage and code quality, if there were a way to generate issues from all of those that contained all of the necessary information and linked back to some sort of dashboard.
A
I think that kind of ties the Source and Plan bits together as well, back into Verify, to say, especially when you're going back through or when you're running pipelines: hey, code quality issue found here. Oh, there's an issue open for that. If I resolve it with my merge request, I can also resolve that issue.
A
At the same time, clean up some code coverage or code quality problems. Thinking from the manager perspective: it'd be great to get that list of files, but I don't want to have to go create 20 issues to address that tech debt for my team. If GitLab did it for me, oh, that's a happy day. Especially since I don't think there's a CSV upload to create issues in GitLab.
E
You know: how did our code coverage improve? And then, if you're talking about being able to segment or ignore, that kind of listed view you were talking about, Ricky, then you've got: oh, let's not look at the ones that we don't care about. You said this is a config file, right? Let's look at the stuff that's important, and how did that trend? If it's going up, that's a great thing.
D
Cool, yeah. Just to quickly revisit the thing you said about automatically creating issues: if we're thinking big today, what if it automatically opens an MR? What if it opens an MR with the test that fixes it, or an MR with a stub, because it knows what file the test should be in, thanks to the test file finder.
D
So it opens an MR into that file with, say, a skipped boilerplate test, and then you just open that up in the Web IDE in the merge request, add your test, and commit. You don't even have to open your IDE: you fixed it. You fixed the problem in GitLab, and, I guess, GitLab created the issue and then the problem was fixed in GitLab.
A
Love it. Cool. Well, we are at time, and I want to be respectful of calendars. As always, if you have more thoughts, feel free to drop them into the doc, or continue the conversation over in Slack and I'll add notes to the doc; I'm happy to continue it there. And then next week we'll go from the divergent back to the convergent thinking, and think about: great,
A
so here's this world where Ricky, our team lead, gets a report that auto-generates issues, which auto-generate MRs with tests in them, and Max is following up and just closing things out left and right. This is how we're going to increase our MR rate, and how we can build something in an upcoming milestone that takes us one step closer to that brave new world.