From YouTube: Verify:Testing Group Think Big recaps
Description
Today we revisited the various Think Big topics we've covered as a group, to talk about how the very small thing turned out, what might be next, and whether we're still working towards the right big vision.
A
This is the Verify:Testing Think Big update for February 2021. I thought it'd be a good time, since we've gone six, seven, almost eight months of doing Think Big, to look back at some of those topics: look at the outcomes we wanted to get, what the think small resulted in, what the smallest thing was, and what the progress has been since then. Just to calibrate, see where we're at, and see whether these are worthwhile for us and whether we want to continue to do them and take the synchronous time as a team. So I'm going to go ahead and share my screen and the doc.
A
What's the next thing towards the big vision? I have links here back to the issue itself in our board and to the agenda doc that we had. Just to recap, our big vision was providing actionable suggestions about tests that might be slow or flaky based on past performance, and tips in the IDE about how to write those kinds of tests. That's what we think this could eventually become.
A
Our next small thing was sorting tests in our unit test report by failure and then by time, so that someone could easily identify that the slowest of the failed tests were on top; that might be indicative of a test that is flaky. Since then, we've also added our repeat test counter MVC, so that shows up in this view now as well.
A
We wanted to increase views of that test report on GitLab.com by 10 within 30 days of the release. We don't actually have any tracking for that, though, so while that was an ambitious goal, who knows if we actually got there? We have seen that pipelines with JUnit test data uploaded to them are being created and run on GitLab.com at a higher growth rate than overall pipeline growth.
A
So more people are using these features, potentially, or at least uploading data that can let them use the features. Our next step was adding that test rate and the test history MVC into the unit test report, which we have done; and then a project-level report would be coming after that, to help show these flaky and slow tests.
B
Yeah, I tried to physically raise my hand, but I remembered that there's a button for that now, so save the calories. We've had some good conversations in this area with Engineering Productivity around the data that we're collecting for the tests.
B
So we can probably stuff the whole test name in there, which would open up a bunch of other doors for Engineering Productivity to track flaky tests, and I think that gets us really well positioned for the project-level report, because we can start showing the list of tests and how many times they failed in the default pipeline over a certain period of time.
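A hedged sketch of that kind of aggregation, assuming per-pipeline failure lists for the default branch are already at hand (the data shape is hypothetical): counting how often each test name failed over a window surfaces flaky candidates.

```python
from collections import Counter

# Hypothetical results from recent default-branch pipelines.
pipelines = [
    {"id": 101, "failed_tests": ["test_checkout", "test_login"]},
    {"id": 102, "failed_tests": ["test_checkout"]},
    {"id": 103, "failed_tests": []},
    {"id": 104, "failed_tests": ["test_checkout", "test_search"]},
]

failure_counts = Counter(
    name for p in pipelines for name in p["failed_tests"]
)

# A test that fails intermittently across many runs is a flaky candidate.
for name, count in failure_counts.most_common():
    print(f"{name}: failed in {count} of {len(pipelines)} pipelines")
```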
B
Then that's ample data to create a new page and talk about specific tests that are important, or failing, or might be flaky. I think that is a logical progression for this feature, and I think the Think Bigs have been super useful in this regard, because we get all those people in a room. Well, a digital room.
B
We get Kyle and the Engineering Productivity team, and sometimes Mac and Joanna from the Quality team, and we can talk about what they'd like to see. So I think there are a lot of things we can do with this, and I'm excited to keep plugging away.
A
I mean, I've been cheating a little bit and picking Think Bigs that fit into the existing roadmap a little. But I think they are turning into better issues after our discussion, and that's a relatively cheap way for us to iterate quickly: an hour of discussion and an hour of follow-up, versus building something, launching it, and then waiting for a feedback cycle.
A
The next topic we talked about, in July and August, was around the user-defined MR widgets issue; the agenda link is here. Ultimately, the big vision where we thought we could end up was a WYSIWYG-type editor where admins could build their own widgets on their instance, decide which information appears, and decide a threshold for actually failing a pipeline or blocking an MR from that data.
A
Our next smallest thing was just to use a blog post to better socialize how you can integrate with the merge request API and leave comments, and some of the other existing functionality that we have: using expose_as, MR comments, and custom metrics to get more context out of your pipeline into an MR. That hasn't been done, and so the next thing is still that thing, but we haven't made much progress. Ricky, what's up?
B
I was just thinking about this in a kind of hacky frame of mind, and I was wondering about it, because we define our own code quality rules and you can kind of just upload your JSON and it'll show up in the code quality widget. Someone could, theoretically, parse their unit test report results with their own application and then upload them in the Code Quality format, and then they could have a custom widget that's really just a code quality widget surfacing those pieces. That'd be kind of neat; then you could consolidate all your widgets down to just one through the Code Quality report format. But you'd have to do a decent amount of programming, and you'd probably have to figure out, okay...
B
Well, this is a minor issue, this is a crit. It's kind of neat that you get the ability to say: okay, this unit test failing is a critical issue, but this security scan throwing an error is just an info. I don't know, that's kind of an interesting idea.
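A minimal sketch of that hack, assuming a standard JUnit XML report as input. The Code Quality widget reads a JSON array of findings with a description, fingerprint, severity, and location; the severity mapping here (a failed test is critical, an error is just info) is the kind of arbitrary choice being described. Uploading the output as a codequality report artifact would then light up the existing widget.

```python
import hashlib
import json
import xml.etree.ElementTree as ET

# Arbitrary mapping, per the discussion: failures are critical, errors are info.
SEVERITY = {"failure": "critical", "error": "info"}

def junit_to_code_quality(junit_path):
    """Convert failed/errored JUnit test cases into Code Quality findings."""
    findings = []
    for case in ET.parse(junit_path).getroot().iter("testcase"):
        for kind, severity in SEVERITY.items():
            problem = case.find(kind)
            if problem is None:
                continue
            name = f"{case.get('classname', '')}#{case.get('name', '')}"
            findings.append({
                "description": f"Test {name}: {problem.get('message', kind)}",
                "fingerprint": hashlib.sha256(name.encode()).hexdigest(),
                "severity": severity,
                # JUnit carries no source location, so point at the report itself.
                "location": {"path": junit_path, "lines": {"begin": 1}},
            })
    return findings

if __name__ == "__main__":
    print(json.dumps(junit_to_code_quality("junit.xml"), indent=2))
```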
A
Yeah, it's definitely something that we've already started to talk about with that future issue around blocking merge on a code quality degradation, and why that would stay at Ultimate, because there's lots of possibility to work around some of the existing Ultimate functionality there. But that's, I think, a really interesting other use case, or adjacent use case, if that's the right word: mangling that data into something else and displaying it in code quality. As long as you understand the JSON format, you can stick anything in there and upload it.
B
It's similar to the conversations that we've had around the coverage YAML entry: sure, it's probably supposed to be unit test code coverage, but that doesn't mean that everyone is using it that way.
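That flexibility follows from the mechanism: the coverage entry is essentially a regular expression run over the job log, and whatever number it captures becomes the job's coverage value. A sketch of the idea in Python, with a made-up log and pattern; nothing forces the captured number to be unit test coverage.

```python
import re

# A made-up job log; in GitLab the regex from the coverage: entry
# is applied to output like this.
job_log = """
Running test suite...
All tests passed.
Total coverage: 87.3%
"""

# The kind of pattern you might put in the coverage: entry.
pattern = r"Total coverage: (\d+\.\d+)"

match = re.search(pattern, job_log)
if match:
    # Any number a job prints can be captured this way, coverage or not.
    print(f"captured value: {match.group(1)}")
```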
A
Yep, yep. Bending things to your use case is a common use case, I think, right? I will make this tool work for me.
A
Well, I think that our next smallest thing is still the next thing here; we're not making progress on it, and we haven't done anything really with the checks API, haven't done anything down this path. It's not something that we have a ton of users clamoring for today that we don't already have plans for, and we have plenty of use cases and other bits of functionality that we want to go solve.
B
There are a decent amount of people who are interested in the checks API, from what I remember; that issue gets a lot of traction. But it's also kind of a weird feature, because it doesn't neatly fall into our group's slice of GitLab. Not that that's a problem; it just kind of makes it more difficult to prioritize. Yeah.
A
Thumbs up, general nodding. All right, cool, moving on: let's talk about code quality, then. That was the next thing we talked through, so in September we brought up code quality. Our big vision, the ideal outcome, was a totally customizable dashboard for users that they could use to identify risky bits of the code base, and that got reflected into the code quality direction page. So I had an MR that I opened up for that.
A
I think... no, I didn't link it in this one. In the Code Testing and Coverage direction I'd linked to one of our Think Bigs, the YouTube video of it, to discuss it, but it is a bit reflected in the vision design that we have here.
A
Our next smallest thing was ignoring rules by severity, which helps customize that experience for someone. If they don't want any of those info findings, they should be able to just filter them out; it should be configurable for them. By default we just show everything, which is fine, and right now that's scheduled for 14.2.
B
I think the smallest thing makes sense: figuring out how we can just basically ignore certain topics. I think the challenge there is: where are we going to put that config, right?
B
I don't really see another way that we can, you know, just continue to treat things as if they are just files and then still be able to bring the features and functionality that people expect from a tool like this. I'm thinking of SonarQube, for example, where you log into the app and there's a very clear entity that relates to a specific warning it's giving you, and you can comment on it, you can create a Jira issue for it.
B
So I think that's going to be a challenge, just because of the sheer quantity and making it scale for GitLab.com. We're going to have to do, I think, similar types of tests as we've been doing with the unit test report right now for the failed tests; we'll have to see how big it is and then figure out what we can do from there.
A
I wonder... we don't have an epic or issues worked up for that yet, but once we start, we want to think about what's the smallest thing we can scope down to for tracking, while we persist an issue, I would say, on the default branch. Like: here's your default branch, here's all the issues that we found on it. What's the next piece of metadata that you want to track for that, or even the smallest piece?
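One hedged reading of the smallest version of "persist an issue on the default branch": key each finding by its fingerprint and record when it was first and last seen across pipeline runs. The store and field names below are hypothetical, not an actual GitLab schema.

```python
from datetime import date

# Hypothetical store: fingerprint -> metadata for a default-branch finding.
tracked = {}

def record_findings(findings, run_date):
    """Persist findings seen on a default-branch pipeline run."""
    for f in findings:
        entry = tracked.setdefault(f["fingerprint"], {
            "description": f["description"],
            "first_seen": run_date,  # the smallest piece of metadata
        })
        entry["last_seen"] = run_date  # one candidate for the next piece

record_findings([{"fingerprint": "abc123", "description": "Unused variable"}],
                date(2021, 2, 1))
record_findings([{"fingerprint": "abc123", "description": "Unused variable"}],
                date(2021, 2, 8))
print(tracked)
```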
B
We're just like: well, how many code quality infractions are there on every instance? That would give us a good heuristic for how we can approach this, and then we can do the same kind of thing where, like you said, we pick the MVC that's small, we throw it in a table with the intention of not feeling bad if we have to get rid of it, and then move forward from there. Because it turns out that we didn't have to be as worried as we thought we would have to be about the unit test ones.
B
Maybe it'll be the same type of situation. Yeah.
A
I think Secure is going to have some interesting information for us as well, because they already do this with the security dashboard, where they can track... I believe you can track a finding from Secure to a GitLab issue, and that linkage stays there, and then everything else happens on the issue. So we don't have to worry about storing comments or anything like that in some new bespoke thing; it just lives in a GitLab issue and we tie it to that. Yeah.
A
This was our conversation in November. As we've seen the popularity of the test coverage visualization feature increase, we've seen a pretty dramatic bump in the number of reports that are being uploaded. So we wanted to think about what ways there are that we can leverage that data, and what opportunities exist.
A
Our ideal outcome was that we think a developer or a team lead could use the report view. You can generate HTML reports for Cobertura and upload them to Pages, but we'd have that actually visible in the UI, without that extra step, to identify gaps in code coverage, create issues to manage those gaps, even add coverage through a suggestion bot similar to Danger bot.
A
That was our big vision for it. Our next smallest thing was... I forgot... oh yeah, our CI view for Cobertura coverage reports, and I think we even have a smaller MVC version of this as well, of taking the data, generating it, and putting it into the MR. Right now that is unscheduled, but I think it's worth revisiting, seeing if that's really the next thing we should do, and whether there's anything we want to expand on in our big vision, for that matter. Ricky, you knew the first topic.
B
You caught me by surprise. So, I was just reiterating the thing that we've talked about before, which is: we should definitely consider using the Cobertura report as the source of truth for coverage when it's present, so that we can leverage that and push it into the rest of our code coverage features instead of having them as two distinct entities. We want to bring it together, so people don't have to configure things in multiple places to use our features.
B
So that's something I think we should just keep top of mind. There are some scenarios that we'd have to consider, like: what if somebody has both? Which one takes precedence, et cetera, et cetera. But I think that doing that could avoid a lot of edge cases and weird scenarios that would otherwise result in issues or forum posts.
A
We could add something to the interface, and we'll get Anna to look at it, because she does this way better than I do (everything is in a modal when I design things), to add the coverage by line, coverage by whatever the other coverage numbers are. It starts to give someone the data that they want. We've heard a lot of feedback, especially when it gets to group coverage, that they just want total lines covered versus total lines, not the average, so they can get a better sense of what the coverage numbers are.
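The distinction matters because averaging per-project percentages weights a tiny project the same as a huge one. A quick worked example with made-up numbers:

```python
# Two made-up projects in a group.
projects = [
    {"name": "big-app",   "lines_covered": 8000, "lines_total": 10000},  # 80%
    {"name": "tiny-tool", "lines_covered": 20,   "lines_total": 100},    # 20%
]

# Average of percentages treats both projects equally.
average = sum(p["lines_covered"] / p["lines_total"] for p in projects) / len(projects)

# Total lines covered over total lines weights by project size.
total = sum(p["lines_covered"] for p in projects) / sum(p["lines_total"] for p in projects)

print(f"average of percentages:      {average:.1%}")  # 50.0%
print(f"total covered / total lines: {total:.1%}")    # 79.4%
```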
B
Yeah, that seems like a pretty simple back-end change, but as with most back-end changes, there's a good chance that it might not be so simple as well. Yeah.
C
Yeah, so the issue that is under the smallest change is adding, possibly, a CI view on the pipeline page, or another MR widget, probably both. My concern is just continuously adding more tabs on that pipeline page, because we're not the only ones on that page either; the Pipeline Authoring and CI teams are both adding things. We now have the DAG view, we have the test report; we're continuously adding more tabs.
C
There are performance concerns with that, though we can, of course, defer loading components on there until you click on a tab. But it's also a UX thing: are we overwhelming the user with so many different views on this page?
B
Unit tests are maybe a bad example; better examples are code quality, or something like coverage, as opposed to it being associated with a pipeline. That just seems like something where, once you explain it to someone, they'll be like: oh, okay.
B
Now I understand what GitLab's doing. Especially because our customer base is developers, they'll pick it up and then they'll run with it. Like, we were having some UX issues about where to find the group code coverage reports, because it's in the analytics, in Repository analytics; it's not in pipelines, which is where people expect it, because that's what they've come to know our data model as being, even though...
B
Yeah, and that wasn't the point I wrote down, but the other point I wrote down was: I think that's a good call-out, and a lot of our reports on that page, like code quality and test reports, would, I think, make more sense being pulled out from that pipeline page and put into a project-level page, where it's like: okay, this is the default report, and if you want to see a report for a specific merge request, you can kind of filter it down on that page.
A
I think it's interesting to think about this at the project level first, as opposed to the pipeline level. Say that to utilize this you have to run the coverage job, get the coverage data, and upload the report as part of your default pipeline to see it at the project level, and you can do that on a schedule, whatever. That's an easy way to onboard somebody into this. But that's where the report lives, and so we're just showing it to you for whatever is the latest on your default branch.
A
Part of it, then, is that we could make that a Premium feature for a team lead or for a director, because it is that kind of view, but then we could pretty easily enable a developer to get that same data for their pipeline, or for them or their MR, and see the data that they want, very contextual to their change.
A
Cool. Well, I think, for me, on this one our next MVC could actually be that. We'll write up a little issue to say: parse coverage data out of the Cobertura report and utilize it if it isn't already set.
A
It would be interesting to write the proposal very specific about what happens if it is already set, and whether we overwrite that. But I think that'd be a great one for someone who doesn't realize that we have the test coverage graph and the other test coverage stuff that's out there, so it can pop in for them. Yeah.
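A small sketch of that MVC, assuming a standard Cobertura XML report (whose root element carries a line-rate attribute): read the overall line coverage and use it only when nothing has set a coverage value already. Whether the report should instead override an existing value is exactly the open question in the proposal.

```python
import xml.etree.ElementTree as ET

def coverage_from_cobertura(path):
    """Read the overall line coverage percentage from a Cobertura report."""
    line_rate = ET.parse(path).getroot().get("line-rate")
    return round(float(line_rate) * 100, 2) if line_rate else None

def resolve_coverage(existing, cobertura_path):
    # Use the report only when nothing set coverage already; overwriting
    # an existing value is the behavior the proposal still has to decide.
    if existing is not None:
        return existing
    return coverage_from_cobertura(cobertura_path)

# resolve_coverage(None, "coverage.xml") -> value parsed from the report
# resolve_coverage(87.3, "coverage.xml") -> 87.3 (already-set value wins)
```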
A
Yeah, yeah: enabled by default, right? It's one of our values. Cool. Well, I got as far as November; I did not get our January topics covered, though we have done some thinking big since then around pulling data out for a specific group. We do have an issue scheduled for that in the coming milestone, 13.10: just a manual effort of generating a report for ourselves, for our own code that we touch and work on, and we'll go from there.
A
So I know we have a small sampling of the group today, but would you all give this a thumbs up or thumbs down, as far as continuing to do Think Bigs at the group level? I see two thumbs up; I love it. Cool. Is our cadence okay, doing these monthly and splitting it up? Should we try to find a full hour to do it all in one week, as opposed to breaking it up and keeping with the monthly? What are your thoughts on timing?
B
I like having it split, because it gives me some time to digest it and then try and, you know, come up with a solution for the think small. And I like doing the Think Big first, too; I think that makes a lot of sense, especially since, you know, unconscious thinking and processing needs some time to stew.
A
Well, we'll keep it as is. I'd love to see more topics coming out of the group, and more thumbs on these things, so I'm not just picking what we talk about next. I want to make sure that the team is engaged with what we're talking about and then working on, and I'll try to be more mindful and conscientious about getting these things scheduled into a milestone more quickly than we have; 14.2 is pretty darn far away for something we talked about in September.
A
That's over half a year from when we initially discussed the issue to when it'll finally get scheduled. So I'll try, for that code quality specific one... is that right? No, sorry, for the code coverage one: try to get that new MVC issue worked up and get it scheduled in one of our next two milestones, 13.11 or 14.0.
A
Yeah, why don't I do this: I'll record a quick Unfiltered video and post it, showing you where to find the issues in our team project and how to create one. I'll just walk through the template real quick, and add comments, add thumbs; that's basically what I'm doing when I pick one. And it'll try to give us some more time as well, rather than, just a day or a week before, looking at whatever has the most thumbs that we haven't already talked through and posting that.
A
Additional comments and additional thumbs are always helpful, just to get extra context for the group.
A
Awesome, we're past time. Wow, I missed the timing on that. Thank you both, great conversation as always, and I'll post this to Unfiltered for the folks who missed it and want to watch, so they can get a recap. Thanks a lot, James. Thanks.