From YouTube: Verify:Testing Group Think Big #8
Description
Today we think small about Code Quality: what some simple fixes to the experience might be, and the pros and cons of displaying an opinionated score of what "quality" means.
The Think Big session: https://youtu.be/aJPp32Dhazk
A: This is the sixth Think Big for the Verify:Testing team. Actually, this is a Think Small version of that, and we'll be thinking small. Sorry, this is the seventh, my bad, and we'll be thinking small about code quality today; I read the wrong line in the agenda. So, a quick recap from the Think Big, which we'll link to in the video as well.
A: We talked about a data view that would help summarize risk as it relates to the quality of the code, and we think the ideal outcome is a totally customizable dashboard of sorts that they can use to push things around and identify risky things. I intended to link in the agenda to some designs that Juan just did for our long-term vision, which would show that really well. I'll add that into the agenda later. Did I miss anything from that?
A: Great. So then, moving on from this really great idea (here's a great way for you to categorize the risk of your project, and whether you should move forward with the deployment or invest in squashing tech debt), let's think small: what can we do in the next milestone?
B: Could we look at code quality historically, like per deployment, how it changed? So if we identify a deployment as having introduced, you know, some horrible thing and we have to roll it back, is there a simple way to correlate that with code quality?
A: So you would do something like take a snapshot of your code quality report and attach it as release evidence, almost, or as a release artifact, through Progressive Delivery. I may have the wrong group name, but that group is already taking the JUnit report as test evidence for that kind of purpose.
A: So would it be the entire... would it be the entire code quality report? What would that artifact look like to, say, a release manager? Let's talk about the release manager persona coming back. The use case might be: hey, engineering manager, we had to roll back this release. There was a bug, it was identified in this file, which is something you work on. Here's the code quality report.
B: The file might be too specific. Overall, I was thinking that if we had the code quality artifact for a release, a release manager could look at the improvements and degradations from the last release. To say, you know: did we simplify a lot of things? Did we make things a lot more complex?
A: Yeah, so I could see, if you're using feature branches, or even release branches, when you go to do that merge it would generate that artifact at that time. You could look at that summary report and say: hey, between our current stable branch and what we're trying to merge in, here are the degradations and here are the improvements in code quality.
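(A rough sketch of that summary comparison, assuming each branch produces a Code Climate-style code quality artifact: a JSON array of issues with a fingerprint and a severity. The file names and field handling here are illustrative, not the actual GitLab implementation.)

```python
import json

def load_report(path):
    """Load a Code Climate-style code quality report (a JSON array of issues),
    keyed by each issue's fingerprint."""
    with open(path) as f:
        return {issue["fingerprint"]: issue for issue in json.load(f)}

def compare_reports(stable_path, merge_path):
    """Diff two reports by fingerprint: issues only in the merge report are
    degradations, issues that disappeared are improvements."""
    stable = load_report(stable_path)
    merge = load_report(merge_path)
    degradations = [merge[fp] for fp in merge.keys() - stable.keys()]
    improvements = [stable[fp] for fp in stable.keys() - merge.keys()]
    return improvements, degradations

if __name__ == "__main__":
    # Hypothetical file names for the two artifacts being compared.
    improvements, degradations = compare_reports("stable.json", "feature.json")
    print(f"{len(improvements)} improvements, {len(degradations)} degradations")
    for issue in degradations:
        print(f"  [{issue.get('severity', 'unknown')}] {issue.get('description', '')}")
```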
A: Oh wow, okay. So every six hours, is there a comparison that would happen? What would be compared? How can we slice that down?
B: I think it might be more useful almost as an after-the-fact thing, especially when you're talking about continuous delivery at GitLab. If something did go wrong, it could be one piece of evidence among many where we could look for trends, like: where did we go wrong? How did the process fail us?
D: I think part of it, too, is that we're trying to figure out the risk of doing a release, as opposed to just doing follow-up after a release has gone bad, right? So what I wonder is: where does that fit in? Because if we go to, say, Releases, I think that's more of a "this is what has happened in the release"; it's already done. It's not what's leading up to this next release.
B: It's almost like, if the company had embraced a continuous delivery or continuous deployment process, they would use the report differently. If they hadn't done that and they were still doing manual deployments, they would probably use it as: okay, how concerned should I be when I click this button as a release manager? But if they're already continuously releasing, it's more of a: oh, this blew up, let's try and figure out why. I wonder if something cool could be...
B: Maybe it stops an auto-deployment if there's too much risk associated with the deployment, and that forces somebody to intervene. That might be neat, but this is supposed to be Think Small, not Think Big, so I'll hold on to that one.
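(A minimal sketch of what that gate could look like, assuming it runs as a script against the code quality artifact and fails when a severity-weighted score passes a threshold; the weights and threshold are invented for illustration, not a proposed default.)

```python
import json
import sys

# Hypothetical severity weights and threshold; a real gate would make these configurable.
SEVERITY_WEIGHTS = {"info": 0, "minor": 1, "major": 3, "critical": 7, "blocker": 10}
RISK_THRESHOLD = 20

def risk_score(report_path):
    """Sum severity weights over all issues in a code quality report."""
    with open(report_path) as f:
        issues = json.load(f)
    return sum(SEVERITY_WEIGHTS.get(i.get("severity", "minor"), 1) for i in issues)

if __name__ == "__main__":
    score = risk_score(sys.argv[1])
    print(f"deployment risk score: {score}")
    if score > RISK_THRESHOLD:
        print("risk too high: blocking auto-deployment, somebody needs to intervene")
        sys.exit(1)  # a non-zero exit is what would stop the deployment job
```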
B: For this milestone: putting the severity, and ordering by severity, in the MR widget. There's already an issue open for that.
A: Scott, Max, Eric, Zeff, Juan, Parker, any other thoughts from the collection on mute?
E: You'll have to excuse my ignorance a little bit here, right, I'm not as close to the code as y'all. So taking severity into account, would that be something like a red/yellow/green, low/medium/high severity kind of display? How would that be surfaced to them? Because I think there's value in that, right. If you see red, maybe someone goes: oh right, sound the alarm, that's something super severe I need to take care of.
A: If we're talking about now: based on the hypothesis that low-quality code, as identified by the scanner, would create an issue, what if we just block the deploy, or block the merge rather, not the deploy?
B: You could click into it, you know, and if you looked at the full pipeline page you could still see it. But I'm thinking about how we have a lot of widgets that show, like, 800 improvements and 900 degradations, and it's a three-line change or whatever. That's partially, or mostly, because we're not comparing the right commits. Yeah, that problem is a big source of the problem.
B: I was thinking about how, when there's too much there, developers and release managers sort of automatically check out; there's too much noise, which I think is a hurdle for people. But I don't think there would be too much noise in the opposite case. Think about Danger bot and how reliable Danger bot is when it comes up with, say, a RuboCop violation or something, or the stuff it puts in Danger.
C: I think a defect of the design, one that has basically become more defective as GitLab has been growing, is that people are becoming a little blind to the widgets. You know, you see them, and you probably pay a lot of attention to the pipeline one, but the other ones are like, yeah, they're there. I think that's what you're really saying, right: it's part of the background noise. The difference with Danger bot is that that's actually active noise.
C: Yeah, exactly, that's one thing too. You know, code quality doesn't fail your pipeline, right; by design it's not like that. I guess you could set it up so that it fails your pipeline, but that doesn't really make sense. What I'm saying is that we've got to think about these things in terms of: if they really deserve more attention, then we've got to design around the constraints that we have right now, which are that people are likely going to ignore those things.
A: So as we think about the widget, then: if the widget is not helpful in understanding the issue, or even in understanding that there's risk, is there a minimal change we can make to that presentation to better understand "I've introduced risk to the codebase," or even "I've taken risk away from the codebase"?
B: I think that goes back to what we were talking about before with including severity. As soon as we do that, and we kind of implement the same design that the SAST and DAST reports have, after we fix which two reports we're comparing, then that would be very meaningful, because you could see what happened. As far as instant feedback: when Secure implemented that change, they didn't change the number of violations that their widget was reporting.
B: They just made it red, and when they did that they got a ton of feedback from a lot of different people around the organization and from customers about that change. But all they did was make it red; they didn't block the pipeline, they didn't do anything else, they just made it red. So if we make it red, but we also fix the report comparisons, we won't get all the blowback, but we'll get all the eyes on it, all the attention.
A: All right, so let's take another step down the "we've already implemented this" path. It feels like we already identified, six months ago or less, what the problems are in making the code quality comparison useful: the right commits, and exposing severity.
C: Is that shipped, like, is that live? I don't see it: the security affordances, the severity affordances that Rikki was mentioning.
C: I mean, the expectation I would have is that if we do the same, we get attention on it as well. But I think overall there are other things we've got to do at the merge request page level. The interesting thing here is that we own most of those widgets, right?
C: I think it's like 70 percent on us to fix the visibility of that block and make it more readable. I don't have ideas; I mean, I could explore ideas, but I want to hear what people think about that one. One thing I'm going to mention is that, for instance, if you look at the pipeline, the coverage there is very clear, right?
C: I mean, the things that we're saying about code quality on the widget, we could be saying those things on the actual merge request pipeline widget, right? So we could say, like, code quality improved on 75 criteria and degraded on 89 criteria, and then maybe we can compute a number for how positive or negative that is. You know what I'm saying? I'm not saying that's the solution, I'm just kind of suggesting it.
C: Why is it that coverage, at least in my particular, very biased case, works better in visibility than code quality right now? You know, I don't know; I'm just formulating that question.
C: In that three-year vision issue that James and I created, which I'm working on, in some of those designs I'm actually suggesting that we should be more opinionated and say: this is an A, this is a B, this is a C, this is a D, or this is an F.
C: Perhaps we could create ways for people to adjust the sensitivity of those scores. So if they feel they're working on a project that doesn't have very strict constraints, they can loosen that a little bit and the score could be a little more generous. But I definitely think those letters help a lot to create context around your codebase quality. Yeah, I think I'm ready.
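(One way to make the letter grade and the sensitivity adjustment concrete: a hypothetical sketch where severity-weighted violations are normalized per thousand lines of code and a strictness knob shifts the grade boundaries. None of these weights or thresholds come from the discussion.)

```python
# Hypothetical grading sketch: severity-weighted violations per 1,000 lines of code,
# mapped to a letter grade, with a strictness knob that tightens or relaxes the boundaries.
SEVERITY_WEIGHTS = {"info": 0.5, "minor": 1, "major": 3, "critical": 7, "blocker": 10}

def letter_grade(issues, lines_of_code, strictness=1.0):
    """Return a grade from A to F.

    strictness > 1 tightens the boundaries (e.g. airplane software);
    strictness < 1 relaxes them (e.g. a low-stakes website).
    """
    weighted = sum(SEVERITY_WEIGHTS.get(i.get("severity", "minor"), 1) for i in issues)
    density = weighted / max(lines_of_code, 1) * 1000
    for limit, grade in [(2, "A"), (5, "B"), (10, "C"), (20, "D")]:
        if density <= limit / strictness:
            return grade
    return "F"

# Example: 90 minor and 10 major issues across 40,000 lines, default strictness.
issues = [{"severity": "minor"}] * 90 + [{"severity": "major"}] * 10
print(letter_grade(issues, lines_of_code=40_000))  # "B" with these made-up numbers
```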
C: But what I'm saying is that most people are not going to make sense of their improved or degraded quality if there's no context around it, like how bad or good that is. You know what I'm saying? If you tell me that my code quality improved in 75 areas, against what? That doesn't tell me anything right now. It would be better...
C: ...if it tells me, you know, your overall code quality is a C, and you just made it several points better, and you're closer to a B. I think it's more about the idea of gradually improving that grade. It's like trying to score more and more when you're in school and you want to get better at something.
C: The grade is very subjective, but it's a better reference than just mentioning that things are improving or degrading. So I agree with you: it needs to be very configurable, or very well defined. But I think the value for the customer is that they have a reference; they stop looking at something that's vague, in a vacuum, and doesn't mean anything to them right now.
D: I posted a link in chat of just what Code Climate themselves provide on their free version.
A: Coverage: JJ had some great ideas around that, like churn plus unit coverage, or test coverage rather, and then starting to think about, hey, your churn plus your test coverage plus how many of those tests are failing all the time. Well, collectively you've got low-quality code in there, because you're not actually testing it, so we don't know if it's good or not. So some ideas there, and maybe there's a simple version of that that just starts with the Code Climate engine, yeah.
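(A toy version of that churn-plus-coverage idea, just to show how the signals could roll up into a single per-file risk number; the weights, field names, and file paths are made up for illustration.)

```python
from dataclasses import dataclass

@dataclass
class FileStats:
    path: str
    churn: int            # commits touching the file over some recent window
    coverage: float       # test coverage for the file, 0.0 to 1.0
    flaky_failures: int   # recent test failures attributed to the file

def risk(stats: FileStats) -> float:
    """High churn, low coverage, and flaky tests all push risk up.
    The weighting here is illustrative, not a proposed formula."""
    return stats.churn * (1.0 - stats.coverage) + 0.5 * stats.flaky_failures

files = [
    FileStats("app/models/pipeline.rb", churn=18, coverage=0.55, flaky_failures=3),
    FileStats("app/helpers/time_helper.rb", churn=2, coverage=0.95, flaky_failures=0),
]
for f in sorted(files, key=risk, reverse=True):
    print(f"{risk(f):5.1f}  {f.path}")
```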
C: So yeah, we could show, say, Code Climate and SonarQube next to each other and say: oh, Code Climate gave you an A and SonarQube gave you a B, you know.
A: Yeah, kind of like what you get with your credit score if you're in the States, where you can see, here's your Experian score and here's the other company's score, and see how those two things change. You might think that different parts of it should mean different things, but that isn't the case. Maybe we should take an opinionated approach the first time and say: listen, this is what it means to have high-quality code, and if you don't like it, well, just turn this feature off.
A: We've heard from plenty of folks that they want opinionated tools, and if they don't like the opinion, they'll pick a different tool. They just want to see a score and see how it moves. It's more important to understand what goes into it, and then they can weigh for themselves: well, how much do we actually need to pay attention to that?
D: So make our baseline opinion what we expose, and then, if they want to tweak it, they can.
B: Yeah, and when it's configurable, I still think there's got to be some kind of hypothesis that most people don't configure the thing. The default configuration is going to be what most people roll with, because they're like: yeah, looks great, good enough. So even if it is configurable, we should be opinionated about what the default is. I think I agree with that.
C: By the way, the configuration doesn't need to be anything crazy; we can start with a very basic slider, right, that says very strict, not so strict, that type of stuff, that you move around. Because not every project has the same amount of constraints. Maybe you're working on something super secret, maybe you're working on software for an airplane, things that need to be very tight.
C: You want the highest amount of strictness and everything needs to be perfect. But if you're working on a website for, I don't know, teenagers sharing whatever they share nowadays, then... I'm not saying you shouldn't write good code for that, I'm just saying maybe you don't need so much strictness.
A: Yeah, cool. Well, we didn't quite get down to a single issue to write up, but I think we did pretty well validate that we already have most of those issues written in the realm of code quality, in the near term anyway. So as we think about our next Think Big, and we'll talk about this in our group discussion later this week, let's think about some categories outside of what we're working on currently and see...
A: ...if we can have a different kind of brainstorming discussion in a couple of weeks when we do our next Think Big. So open issues, upvote issues that are there; I will link to that in the video. Thank you everyone for your time. This was a fruitful discussion, though, in validating that we are on the right track and in thinking about some of the small iterations we might make, even to those current issues, to get a cleaner view, more signal, less noise in the MR widget. All right, cheers everyone, thanks.