From YouTube: Testing Group Think Big #8
Description
In this session we talked about the 1- and 3-year vision for the group, the problems we want to solve in that time, and some early design ideas that will serve as a north star toward that vision.
We then did a quick overview of our current epics to ensure they are contributing to that vision.
A
This is the Think Big for the Verify:Testing group for September 22nd, 2020. First off, happy release day, everyone; it's the 13.4 release, yay. Second, we're switching up the format just a little bit this month, because we had some light participation and some well-deserved PTO happening in the group. So, instead of a think big, I'll share this:

A
We've been working on some designs for a one- and three-year vision for the category overall. Just where do we want to go? What's our north star? What's going to help us move that paid group monthly active user number forward, based on what we know today and what we think we're going to validate in the not-too-distant future? Juan and I have been going through this process for a couple of months.

A
We first started by identifying what we heard as we looked through issues and, based on the conversations we've had, what the underlying or undercurrent problems were that customers are having. What are those big three to five problems that all of the feature requests kind of lead up to or sum up? Where we settled is really around hot spots in code.

A
Tell me about problems that I'm not seeing yet, usually through static analysis; we see that as an underlying current in a lot of the code quality requests. Which tests are flaky? We have an issue for that, and it's a very popular issue. We've talked about this a lot: tests sometimes fail, sometimes pass, so help me identify those so that I can fix them and get more green pipelines. Which projects are costing the most? There's a little bit of an undercurrent there.

A
That one comes up more in talking with customers and less in the issues: trying to help them understand where they're spending on their testing and whether it's a good return on that investment for them. And then, finally, which projects have tests passing in testing and failing in production? This goes beyond tests, but we think there's an interesting problem to solve around what my software does in my testing, staging, and pre-production environments versus what it's doing in production.

A
And how do I start to map those two things together so that I can see, hey, this test that's always failing in pre-prod is always passing in production, or vice versa, so that you can start to identify issues that are actually going to show up out in your production environment faster? So there are some problems there that we want to dig into. We talked through the workflows of these one day, and I think we have a recording of that user story...

A
...mapping, or just that discussion, that I can link back to, and we settled on a couple of things to try to work through how someone would go about answering these questions or solving those problems within the app. Now let's get to the actually interesting stuff, the designs. Juan put together some forward-looking wireframes. These are low fidelity on purpose; we anticipate that this changes quite a bit as we go through the actual validation of these problems.

A
But a lot of this should look familiar, because it rolls up a lot of what we've already been working on over the last 12 months and puts all of that into a singular data view for a project, or a singular data view for a group. That's where we think there's really a value-add for those customers who have multiple projects across their groups, and you could then in turn easily see this rolling up to an instance view for those self-hosted folks: hey, across all of my groups, where is quality good?

A
Where is quality not so good? So, testing analytics: lots of great stuff in here that plays off of the accessibility score, the code quality scores, how long it's taking to run tests, and what my test coverage looks like. Then it gets interesting down here with the charts, so you can start to see how that data is trending over time, and this is something that we've already started...

A
...looking at. Today is the first release of code coverage for groups, so we're starting to roll that data up already; you can see we're already heading in this direction. But there's some new, fun, interesting stuff here: being able to pick and choose and compare projects by what's trending over time, making these charts a little more interactive.
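As a rough illustration of the group-level roll-up being described, here is a minimal sketch that averages per-project coverage samples into a group trend. The data shape and function name are hypothetical, not GitLab's actual implementation.

# Illustrative sketch only: roll per-project coverage samples up to a
# group-level average per day. The input shape is hypothetical.
from collections import defaultdict
from statistics import mean

def group_coverage_trend(project_samples):
    """project_samples: dicts like {"project": "backend", "date": "2020-09-22", "coverage": 81.4}"""
    by_date = defaultdict(list)
    for sample in project_samples:
        by_date[sample["date"]].append(sample["coverage"])
    # Average coverage across every project that reported on each date.
    return {date: round(mean(values), 1) for date, values in sorted(by_date.items())}

samples = [
    {"project": "backend", "date": "2020-09-21", "coverage": 80.0},
    {"project": "frontend", "date": "2020-09-21", "coverage": 64.0},
    {"project": "backend", "date": "2020-09-22", "coverage": 81.4},
]
print(group_coverage_trend(samples))  # {'2020-09-21': 72.0, '2020-09-22': 81.4}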
A
And then some test coverage hot spots. We have some really fun stuff here, just ideas that Juan is playing around with: what could this look like? What do competitors do? What do we think would solve problems for the customers? So I'm going to keep clicking through, but I want to start answering any questions from the team, things that you spot: hey, that looks really cool; hey, why is that there in the first place; hey, here's a thing you missed!

B
This is the first time I've seen this design. It looks pretty cool. I'll probably just take some time after the meeting to look through them and dive in a little more; there's a lot of information on this, for sure.

A
For sure, for sure. I'm going to jump into the last three, then. As we were talking about this, we wanted to give you data initially, and that's kind of our one-year vision: take a lot of the groundwork that we've laid, a lot of the features that we've built in the last 12 months...

A
...and start to sum those up and give you a data view of them, so they're actionable beyond just within the merge request and you can start to plan against the data. The other bit, as we start thinking forward beyond that, is that within your MR we tell you about stuff, but then we don't really give you a way to do anything with it.

A
So, in the short term, in like the 12-to-18-month time frame, we think a great next step is to start to say: hey, here are code quality problems, start to create issues from those. You can do that from the MR. And then a view to — we don't have a visualization of this yet — start to track those things, so that if you as a developer are working through a merge request and you see a code quality issue, you don't just see a way to create an issue...

A
...you see an already-tracked issue for it, so that we can start to tie those things together and you can say: hey, somebody already started looking at this, they have an open MR. Can I, you know, grab that code and put it into mine as well and resolve this problem, because I'm more familiar with this? Or can I go close out that issue...

A
...if I'm more familiar with this thing? So, starting to incorporate that — and that's something that the security team has started to build on already, so that you can track security issues and see those from the code qual... the security widgets, rather. Then, beyond that 18-month time frame, we think it'd be really, really interesting to start to allow, or start to figure out, what the fix would be.

A
So I think, especially around code quality, we have a lot of — or we'll be able to gather a lot of — data on the same type of failure, the same type of issue, across code bases, especially the open source code bases, or just within your group or your project if it's a private project, and start to look at: well, here are all of the places where this type of problem has shown up, and here are all of the issues that were tied to it.

B
So the idea for picture six is basically that you're viewing a failed test and you click a button saying "create an issue based on this failed test," and then it would automatically fill it out. Okay.

A
Yeah. So once we give it to you, then we can grab the history of it, so you can start to see, hey, this is flaky, this is not flaky, link back into those pipelines, and be able to collect some data. And then you can start to tie those together: in the test history report we can start to say, hey, this flaky test has this open issue to resolve its flakiness.
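As a rough sketch of what a "create an issue from this failed test" button could do behind the scenes, here is an illustrative call to the GitLab Issues REST API (POST /projects/:id/issues). The failed-test fields, label name, and instance URL are assumptions for illustration, not the actual implementation.

# Illustrative sketch: file a GitLab issue pre-filled from a failed test.
# The Issues API endpoint is real; the payload shape here is hypothetical.
import os
import requests

GITLAB_URL = "https://gitlab.example.com"  # assumption: your instance URL
PROJECT_ID = 123                           # assumption: numeric project ID

def create_issue_from_failed_test(test_name, failure_message, pipeline_url):
    title = f"Failing test: {test_name}"
    description = (
        "Automatically created from a failed test.\n\n"
        f"Failure output:\n{failure_message}\n\n"
        f"Seen in pipeline: {pipeline_url}"
    )
    resp = requests.post(
        f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/issues",
        headers={"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]},
        data={"title": title, "description": description, "labels": "flaky-test"},
    )
    resp.raise_for_status()
    return resp.json()["web_url"]  # link to the newly created issue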
A
It helps replace — I think the Quality group has their own project that tracks every test and the history of it, and they use that to create issues and link to them if something goes flaky. It's just this massive project with thousands and thousands of issues in it that a bot is going through, finding, and updating based on test runs.

B

A
Absolutely no idea; it's all, you know, unicorn magic to me.

A
No, it'll probably be a very manual piece to start: I have a test failure, so here is a singular failure for a singular test case, and I want to mark that as flaky, because looking at the history of the test execution I can see it's flapping. And then I think the issue would just be tied to the case. At that point we'll probably have some sort of database for cases, and you can start to tie in: here's an issue that's tied to the case.
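A minimal sketch of the kind of "flapping" check described above, over a hypothetical list of recent results for one test case. The threshold, window size, and data shape are invented for illustration; they are not the team's actual heuristic.

# Illustrative only: flag a test case as flaky if its recent history
# flip-flops between pass and fail. Results are newest-last strings.
def is_flaky(results, min_runs=10, max_flip_ratio=0.3):
    if len(results) < min_runs:
        return False  # not enough history to judge
    flips = sum(1 for prev, cur in zip(results, results[1:]) if prev != cur)
    # A test whose outcome changes on a large share of consecutive runs is "flapping".
    return flips / (len(results) - 1) >= max_flip_ratio

history = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
print(is_flaky(history))  # True: the outcome changes on 6 of 9 transitions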
A
So, yeah, I'll link to this stuff in the agenda. Let's go ahead and look at... I don't know why I stopped my share, because I'm just going to go to the other tab. So that's kind of the think big of where we want to go, and I'm going to start to work screenshots of those into the direction pages so that it all links back together, and I'll share...

A
...I think I already shared with the team the video of Juan and me walking through these and ideating a little bit more on some cool stuff that we think we could do. To get there, though, we have a number of epics that are currently being worked on or are next up for validation — things that we know we need to build to get there, or think we need to build to get there.

A
I don't know that these active epics are actually in the correct order by when they'll start or when they'll finish anymore, because I made this list like three months ago and, it turns out, things have changed since then. But the rough order of stuff, I think, is: wrapping up the code coverage data for groups, which, like I said, first parts of that are shipping today — that is a Premium tier feature.

A
There's only one item left in our unit test report CI view enhancements; there's been tons of great work there. I think it's made that report so much more usable: the sort order being better, that buggy scrolling getting fixed up, all sorts of great stuff there. Then test history for MRs and pipelines — that's our MVC approach to test history.

A
I know Eric's been working really hard on solving the data storage part of that, or helping with the discussion there, trying to figure out: how are we going to store these things? Is it a Redis key? Is it something else, so that it's still performant enough — knowing that we'll probably... or, not probably, knowing that we will throw that away in the not-too-distant future?
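Purely as an illustration of the "Redis key" idea mentioned here (the storage decision was still open at the time), a sketch of keeping a capped list of recent results per test case in Redis. The key layout, cap, and TTL are assumptions, not the chosen design.

# Illustrative only: one Redis list per test case holding recent results,
# capped to the last N runs. Key layout and TTL are assumptions.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
MAX_RUNS = 50

def record_test_result(project_id, test_case_id, status):
    key = f"test-history:{project_id}:{test_case_id}"
    r.lpush(key, status)              # newest result at the head of the list
    r.ltrim(key, 0, MAX_RUNS - 1)     # keep only the most recent runs
    r.expire(key, 60 * 60 * 24 * 30)  # let stale history age out after ~30 days

def recent_results(project_id, test_case_id):
    return r.lrange(f"test-history:{project_id}:{test_case_id}", 0, -1)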
A
So that's some of the stuff that's coming up next. In our internal stakeholder meeting later this week, I'll reference the code quality viability plan — code quality moving to Viable. That's a mix of things we've heard from customers and things we've heard from internal stakeholders, and it starts to build out a lot of the data that you saw in those mocks, better exposing it to the front end and making the code quality feature set a lot more valuable and less noisy...

A
...I would say, than what it is today, because you can see, you know, 35 violations, but you don't know if those are violations you added or violations somebody else added that have been there for three years, or if they're super-high, critical priority or just linting errors that are optional to fix. So solving some of those problems is key in that.

A
Beyond that, I mean, we're talking about six months out, potentially, for working out some of these other things, so I don't want to dig too far into those, but I'm happy to click through and look at any of the epics if these are unfamiliar or not jogging the memory for you all.

C

A
I will take as a takeaway from this discussion today, for all of our think bigs and especially the next time we do this format, to give the team more than four seconds to think about this stuff when I put you on the spot. I know it's a big, meaty topic, so asking you for immediate feedback isn't very fair.

A
While you're pondering that, I'll bring up the topic that I tweeted about yesterday and that I'm going to talk about a little bit in the cross-stage think big on Thursday: the idea of using an error budget, or just error rates, and as those increase, or as your error budget gets smaller...

A
...if that's the model that a team is following, the reaction to that is to slow down deployments, potentially — which is the primary action — but also to increase the stringency of test requirements. So someone could say, as we get closer to our error budget being gone — you know, how much downtime we can have for the rest of the iteration shrinking — the requirements around testing can start to go up. So instead of saying, hey, test coverage for the project has to stay above this threshold...
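As a toy illustration of the idea being floated — tightening test requirements as the error budget shrinks — here is a minimal sketch. The thresholds, the linear scaling, and the error-budget numbers are invented; this is not an existing GitLab feature.

# Toy sketch of error-budget-driven test stringency, not an existing feature.
# As the share of error budget remaining drops, the required coverage rises.
def required_coverage(budget_remaining, base=80.0, strict=95.0):
    """budget_remaining: fraction of the iteration's error budget left (0.0-1.0)."""
    budget_remaining = max(0.0, min(1.0, budget_remaining))
    # Linearly interpolate between the relaxed and strict thresholds.
    return strict - (strict - base) * budget_remaining

print(required_coverage(1.0))   # 80.0  -> full budget left, normal gate
print(required_coverage(0.25))  # 91.25 -> budget nearly gone, tighter gate
print(required_coverage(0.0))   # 95.0  -> budget exhausted, strictest gate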
C

A
Yeah, I don't know that we have a 100% match yet between what's in these and the actual epics that are out there. The next step for me is to start to tie these designs back to the existing epics based on what we know, and then start to map that in and fill in the blank spots. So, like, I know that test history we're starting to build out today; coverage...

A
...we will finish building out in the next milestone or two, so that data will become available to us. There's an ongoing discussion within the entire Verify stage, with the Runner and the two CI groups, about how we start to get some data around job statistics and metrics — how long jobs are taking to run — and once we do that we can more readily identify the testing jobs, things that are at least labeled with the test stage, and be able to pull that data.

A
As we start looking at the code quality epic and moving the maturity of that, we may have to figure out some of our own funky grading systems for taking what we know about those code quality violations and turning them into some of this data. And none of this is meant to be prescriptive.

A
This is just what Juan and I came up with that we thought would be helpful. None of it's been validated, so we shouldn't necessarily draw a straight line from "this will require this," because we haven't validated that this stuff in the report is actually valuable or solves any problems for anybody. Based on competitor reviews and what we're hearing...

A
...we think that this gets us going in the right direction, but we'll definitely validate that, and you can expect a follow-on epic, or at least a couple of issues, to follow what we're going to work on next in code quality to build out some of this data. Accessibility — I think that we have most of this; we may need some things just to be able to grab the data, maybe even store the data and look at it over time. And then endpoint latency...

A
...I don't remember exactly what the use case was for this, but it'll be something that we can talk with Juan about and see if it's even something that we have existing today through the Monitoring group. And some other data excites me:

A
You know, for this release, here's the quality, the coverage, and just the overall churn — like, is there a lot of code being changed in this all the time? And what do those three things, based on the levels they're at, tell you about the riskiness of releasing that MR out into the wild, or the need to have some tech debt paid down in some of these areas?

A
So a high-churn, low-test-coverage area may need some better attention or some more tests in it, versus a low-test-coverage but high-quality and low-churn file where, well, we never touch it, it's got high quality, and it doesn't matter much whether we had tests there. Maybe those are the kinds of signals we can start to pull out in a view.
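A toy illustration of combining those three signals — churn, coverage, and quality — into a single riskiness indicator. The weights, normalization, and 0-100 scale are invented for illustration; this is not a GitLab feature or a validated model.

# Toy heuristic for the churn/coverage/quality signal described above;
# weights and scales are invented for illustration.
def risk_score(churn_changes_per_month, coverage_pct, quality_violations):
    churn = min(churn_changes_per_month / 20.0, 1.0)     # 20+ changes/month = max churn
    coverage_gap = 1.0 - min(coverage_pct / 100.0, 1.0)  # low coverage raises risk
    quality = min(quality_violations / 50.0, 1.0)        # 50+ violations = max penalty
    return round(100 * (0.5 * churn + 0.3 * coverage_gap + 0.2 * quality))

# High churn and low coverage: worth attention before release.
print(risk_score(churn_changes_per_month=30, coverage_pct=20, quality_violations=10))  # 78
# Rarely touched, clean file: lower risk even without tests.
print(risk_score(churn_changes_per_month=1, coverage_pct=0, quality_violations=2))     # 33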
A
Cool, well, things to ponder. Thanks for the feedback so far, and by all means jump into the issue I just linked in the agenda and comment on that. Leave comments in the agenda too — we'd love to get your feedback there, or on any of the epics, as always. We'll be back to our regularly scheduled and formatted think big / think small in, I think, about three or four weeks or so, so please jump in, upvote the issues that are there, and create new issues for things you want to talk about right now.