From YouTube: Testing UX / PM / Research Sync 2020-08-26
Description
Today Juan, Nadia and James talked about:
* Current Design items being done in 13.4 and the priority of them https://gitlab.com/groups/gitlab-org/-/issues?scope=all&utf8=%E2%9C%93&state=opened&label_name[]=group%3A%3Atesting&label_name[]=workflow%3A%3Adesign&milestone_title=13.4
* Which personas to target for the current research of JTBD for Code Testing and Coverage - https://about.gitlab.com/handbook/engineering/development/ci-cd/verify/testing/JTBD/
* Ways to research low or declining usage of features - https://app.periscopedata.com/app/gitlab/633395/Testing-Category-Metrics
A
This is the August 26th Verify Testing UX, research, and PM sync, and I think I have the entire agenda today, at least what's in there so far. I wanted to start by just taking a look at the 13.4 design items. We have four of them in there, and I wanted to talk about de-prioritizing two of them, one, to give you more time to focus on the other two. Okay, so I'm going to go ahead and just share my screen so we can look at them while we talk.
A
So these are the four that are in workflow::design in 13.4. I think that, with the change that we're making with the sorting, showing only the failed tests by default doesn't make a lot of sense anymore. I wanted to get your take on it.
A
One of the other motivations for this was to try to parse the reports faster. We know now that it doesn't matter if we only show the failed ones; we have to parse the entire report anyway, right? So saying "hey, we're just going to show failed tests" doesn't improve our speed. By pulling the failed tests to the top, which we're doing in another issue, we accomplish the same task. Totally.
B
I completely agree. This one has designs, though, right?
B
So yeah, I already added the designs for this one, okay. Basically, I...
B
And then we can at least, together, acknowledge whether we want to prioritize these, basically, because, like, oh...
B
It hides, like, all the failed tests, right? That's an option, but I think you're right about the sorting perhaps solving this already.
B
So yeah, we can park this one for a while and see what happens with the sorting, and then we can figure out what to do.
B
I like the idea of hiding the failed ones... honestly, when I commit something and I open an MR, I care about the failed ones. That's my particular case, you know. So it makes sense to me that I don't really want to see the successful ones, even if they are hidden, even if they are at the bottom of the table, you know. Yeah, you care about the failed ones.
A
Okay, so it was that one. And then the other one, which you also already have designs in here for, was to not iterate any further on this "show the last 10".
A
Not right now; I don't think it's our priority, because we've revamped the MVC to just show the count of the failed tests. Yeah, we're going to do that with that Redis counter, and actually just show that in the tooltip. I think we should maybe look at that design and make sure it's tight, so that we can do it in 13.5.
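For illustration only, the "Redis counter shown in a tooltip" idea mentioned above might look something like the sketch below. The counter class and the key naming are assumptions, not GitLab's implementation, and a plain dict stands in for Redis so the sketch is self-contained (a real version would use a Redis client's INCRBY).

```python
class FailedTestCounter:
    """Per-pipeline failed-test count, e.g. for display in a tooltip.

    A dict stands in for Redis here; the key scheme is hypothetical.
    """

    def __init__(self):
        self._store = {}  # stand-in for a Redis key/value store

    def record_failures(self, pipeline_id, failed_count):
        # Roughly: redis.incrby(f"pipeline:{pipeline_id}:failed", failed_count)
        key = f"pipeline:{pipeline_id}:failed"
        self._store[key] = self._store.get(key, 0) + failed_count

    def tooltip_text(self, pipeline_id):
        count = self._store.get(f"pipeline:{pipeline_id}:failed", 0)
        plural = "" if count == 1 else "s"
        return f"{count} failed test{plural}"
```

The appeal of a counter like this is that rendering the count never requires re-parsing the full test report.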
B
Right, right. Yeah, I don't have designs for that one, but I saw your wireframe, so we can find that issue.
A
And yeah, you flatter me by calling that a wireframe.
B
Yeah, so I can create a mock for this, and I like that solution. That's an MVC; I mean, that's more or less what we had originally discussed before we did all the other crazy stuff. It makes sense, right? It's kind of like a good first step. Could you add the active label, the one that I have been using lately, to this one? Sure can. And just assign it to me.
B
I mean, perfect. All right, and then maybe we can put the last thing in standby and then change the milestones on them.
A
Yeah, for me it doesn't matter where we put it. I think we'll still be able to get quality feedback, and I have no signal about which of those two places gathers more eyeballs, where there are more eyeballs on those. So I can't say, quantitatively, that we'll get more people looking at this here versus here right now. Right, yeah...
B
Well, if you ask me, I will say that the MR view gets more eyes for sure, but passively, you know what I'm saying? Active people looking at something, that's going to be the test view, so we're probably going to get more feedback there, because if you go to that view, it's because you're actively looking at your tests. And in the MR, you're likely going to ignore it. So it gets more eyeballs in the MR, but that doesn't necessarily mean it gets more active people on it, yeah.
A
I think either way we're going to want to do a little bit of promotion through some social channels to try to gather feedback on this. I will try to highlight it in the release post. I can work with Parker to do a little example video and a blog post about it, and highlight both the implementation, that this is now a history of your tests, a limited history, and that this is our minimum viable change.
A
We
expect
this
to
be
a
temporary
feature
on.com,
while
we
gather
feedback,
here's
where
to
give
us
that
feedback
about
what
this,
how
you
feel
like
this
should
iterate
to
be
a
better
experience,
so
we
can
kind
of
highlight
both
the
feature
and
our
process
in
that
blog
post.
That
might
be
interesting,
yeah.
A
Okay, cool. So I think I have this already in our team chat for tomorrow, to talk about where this thing ends up. Okay, I'm going to defer to you a little bit on where you think it's going to be a better experience in the MVC.
A
So if you have an opinion on that, try to firm that up so you can speak to it. From a quantitative perspective, I just don't know.
B
I'll put my comments on the issue, and yeah, I'll also try to be objective. I think, more than anything, more than the location, it's about getting the feedback right. And I think it's going to depend a lot on whether we do what you said we are going to do, right, like telling people, hey...
A
And then, the designs for the vision of the testing categories: we talked about that a little bit last week. That's that lo-fi one. I think we have in there the workflows that we want to get some sketches for.
B
Right, I'm working on those. Actually, I'm wireframing right now. Awesome. I don't want to kind of spoil it, but I haven't put anything up yet, because I'm like, oh...
A
Yeah, no worries. And then I'm ready to talk about that: the cleanup language. I think this one's ready to roll.
B
Yeah, that one's ready to roll, but my last comment is that we should probably break it apart into two issues, or potentially, like, three issues. All right. If you go to the bottom of this issue, I have a proposal there for what we should do, and I said that I was going to do it, so I can do it, actually.
B
Yeah, let me just, to help keep it...
A
There we go. All right, all right, great. Thank you for helping us take notes, Nadia. I totally lost the agenda.
A
Well, the transcript will be there, so we'll upload the transcript too. The next item for me was the jobs-to-be-done research and the maturity research. I tagged Laurie in a question in the agenda. I had some questions about personas and who we're targeting. So, when we think about viable versus minimal, and the jobs that we're researching right now in Code Testing and Coverage, we've added them in the issue that is listing the jobs.
A
The current step that we're on is brainstorming jobs between myself, Juan, and our two engineering managers. We've added personas to those. We think that, for this next maturity state, we will target the developer persona, and advance the category maturity to viable if we meet the jobs for that persona.
A
We know that we are not close to viable for our team lead or director personas, so we just wanted to get some guidance: should we test for one persona, knowing that we have some feature sets for others?
B
Yeah, I wanted to hear Nadia's opinion on this, because this one is very confusing to me regarding the category maturity scorecard. So we have categories, right? I'm just talking about a theoretical category, not this one; just for the sake of the example, I'm going to use another one. So we have a category in GitLab.
B
We have a group that has three categories, right? So each of those categories has a level of maturity, but in our particular case there could be a category that serves two completely different personas, right?
C
Yeah, and hey, this is not... again, I will reply from my knowledge and understanding; I think Laurie's verification will be much more helpful here. But from what I see happening in other stage groups, there is never a case where a category serves only one persona. It's always that we have multiple personas, and our goal for the category maturity scorecard process is to pick the primary one. Well...
C
We
need
to
have
like
a
list
of
jobs
to
be
done,
and
then
we
pick
one
primary
or
like
the
one
that
we
want
to
focus
on.
Usually
that's
the
primary.
If
that's
like
the
second
time,
we
are
running
this.
That
could
be
like
the
second
primary,
but
we
pick
one
job
to
be
done
and
that
for
that
job
to
be
done,
we
can
still
this
job
to
be
done
can
be.
Cover
can
be
well
solving
a
challenge
or
a
problem
for
multiple
personas.
C
If that's the case, if we have selected a primary job to be done that can be serving multiple personas, what the official CMS process is proposing is that we cover more: we try to get a mix of the people when we are validating the experience. And actually, the minimum number of participants is five; that's proposed by the new CMS process.
C
So, for example, if we selected this primary job to be done, and it could possibly cover the team lead and the engineer persona, that means we would need to take a minimum of ten participants: five of one and five of the other. This is my understanding of the process. I also have linked an example here of the Create: Source Code category maturity process.
C
It's pretty wide, it's pretty nicely done. It's still ongoing, but there is evidence there that they are validating one job to be done with multiple personas, well, in the same...
B
Yeah, I don't know what happened there, for sure, and I don't know exactly what the guidance is there, I mean. Anyway, I think Laurie is the one who can probably shine some light on it, but...
C
Yeah, I'd say go ahead; and I'm sure, like what Laurie will probably also say, that the CMS process is not perfect, and I know that we've been looking into replacing it overall with something else. And again, similarly to what you are now describing, we had the same scenario in release management, when we felt that a job to be done was just covering, like, this little bit of a part.
B
That type of reasoning is going to be very common as we keep doing more of this, so it's fine. I mean, I'm just saying: let's ask Laurie, and either way, let's move forward with our category maturity scorecard, because we feel that we want to move that particular category. And then, yeah, in the process we will figure out if we need to do some sort of cutoff or rework of the categories.
A
I
added
next
steps.
Oh
verbalize,
I
think
that
our
next
steps
are
just
keep
moving
forward
with
our
job
to
be
gun
validation,
because
that's
the
process
that
we're
in
for
both
personas
our
team
lead
and
our
developer,
and
that
way
we
can
at
least
validate
the
jobs
to
be
done
as
part
of
the
category
maturity
scorecard
interviews,
we
may
only
pick
one
of
those
personas
like
the
primary
job
for
developers
and
validate
against
them,
and
we
make
that
decision
later.
A
You, Lori, when you're watching this later... all right, and then: researching low usage. This is the other thing, in the last couple of minutes that we have, that I'm starting to think about. We're just getting telemetry around our paid features. Paid group monthly active users is our primary performance indicator for the Verify:Testing team, and so we're looking at ways to research why customers aren't using features. The Code Quality report is the only one that we have data on now, and I have a good idea of why people don't use it.
A
I don't have data about how many people interact with the widget, though, but I have to assume that that is also going down: if not as many jobs are including that data, there's less opportunity to interact with it. So my question is: is there an existing process that we can use to get feedback from customers who were using it and aren't, or who want to use it and can't?
A
We could probably find out. We can work through the TAMs, or we can work through some sort of data set, probably, to figure out who is using this feature, like specifically which customers. Or there may be a more broad-based approach that we could try; maybe we do an in-app notification to try to gather the data.
C
Right. Oh yeah, well, actually that pushes me to another idea that I had. Again, I will try to take the turn here on Laurie's side, but yeah, it's all up to her, of course. I see two ways here. Usually, well, when we are trying to understand the usability of a feature...
C
We
sometimes
can
like
try
to
find
out
the
sas
score,
like
the
that
will
tell
us
about
the
general
usability
and
that
could
be
like
or
inserted
as
a
widget
on
the
page
or
we
can
send
the
follow-up
survey
like
you
know,
but
there
are
the
questions
there
are
more
generic
like
like.
Would
you
recommend
this
feature
or
like
what
was
your
experience
like
you
know,
so,
they're
generic
or
another
idea
for
getting
quick
results?
C
Quick
is
always
surveys
I
think
and
like
we
could
define
a
short
list
of
questions
like
have
you
tried
this
feature
like
what
was
your
experience
like?
What
did
you
miss
or
like
why
you're
not
a
continuing
blah
blah
and
then,
if
you
would
find
a
good
set
of
responses,
maybe
we
can
get
them
more
in
touch
and
have
maybe
a
detailed,
more
detailed
conversation
get
more
data
if
we
would
be
needing.
That
would
be
my
initials
yeah.
I
like
that.
Actually.
B
But if you apply, like, a kind of negative filter to it, you know, you can just go and ask the people who are using it: hey, you're using it, what's good about it, what's bad about it? And you can extrapolate why some people may be just not using it at all, or dropping it completely. Or, if there's a way that we can find someone who used it, set it up, didn't like it, and removed it, that would be more telling, you know.
A
Yeah, I might start with just pinging the account management and seeing if there are customers who have used it in the past and have stopped. The CAB might also be a good way to query for that: hey, who's using this browser performance testing?
B
Right. Personally, I'll try... I'm going to create a project and I'm going to test it myself to see how it feels. Great. I mean, right away I could probably find, like, some hiccups in the UX, and even then, improving the feature is a win, right? Anything that we can do to improve the usability of the feature. And by the way, that's the goal right now as a company, right? We're moving more towards usability and, like, improving the existing set of features.
A
Awesome. All right, anyone else have agenda items to discuss in our last couple of minutes?
B
Anyone? Good. I mean, I was going to cover the existing items, and you already did that for me. So thanks.
C
Star of today's meeting, James. You got it all.