From YouTube: Testing UX / PM / Research Sync 2020-06-24
Description
In today's session we talk about the scorecard process to move "Code Quality" and "Code Testing and Coverage" categories from Minimal to Viable, the design work happening in 13.2 and 13.3 and how the team can help Lorie in interviews for the MR performance research.
A
This is the weekly PM, research, and UX sync for testing for June 24th. I forget what meeting I'm in some days. I think I have the first agenda item: I wanted to start talking through viability maturity for code quality which, as I'm thinking about it and where we're in focus, is that for the testing group.
A
It's all on the code testing and coverage, not on the quality, but we have a dogfooding item for that. And as we think about going from minimal to viable, the key callout there is that it's used by the majority of internal users at GitLab. So the clear criteria for us to actually get to viable, I think, are all laid out in that dogfooding issue, and I created an epic hierarchy for it. But I wanted to talk through what the next steps are beyond validating the solutions.
A
And then, what does the scorecard experience look like? Because it'll be something that we want to mirror pretty soon for the code testing and coverage, and to understand where there are still gaps in our dogfooding approach to that, because it's sitting at minimal and I think that we can move it to viable without too much effort as well.
B
Though we don't have to go through this process; it's not required. You can if you want. It's always a good idea to get those jobs to be done finalized, but they don't have to be done before you go to viable. This is really meant for moving from viable to either complete or lovable, to know where we're going. So, because of that, minimal to viable is very much what you said earlier, James: like dogfooding, like we have to use it internally.
B
When we do go through this process, you will be looking at things like: how long does it take them? What's their perception? Do they feel like that feature set met their requirements? Do they feel like it's easy to use? Things like that, so success/failure. So just keep in mind that that's what this process is. We can always run somebody through it. You don't have to, though, so it's up to you.
A
The code quality stuff, yeah, like 13.7, 13.8, 13.9, like it's a ways out. I was just doing the direction update yesterday and I realized, oh, I should go look at the dogfooding issues, the dogfooding for others, and then started thinking about maturity. Oh well, very clearly we have to have this feature used by the majority of internal users, and we have a dogfooding issue for code quality that's preventing us from doing it.
C
Right, well, I'm thinking, with the call later today, we're likely going to have to talk about... I mean, I was going to talk mainly about Runner. We're talking about strategy, like next steps moving forward, yeah, so it's applicable to testing too. I'll tell you about that later as well.
C
So, basically, what we're trying to do here is to figure out what's next in terms of problem validation and discovery. You know, I think we have a better path ahead in testing when it comes to that horizon of things that we need to build and research, or moving the categories. We don't have that for Runner. So that's why I wanted to focus more on Runner, but anything that we do there is going to be applicable for testing, so I'll take that.
A
Our next research item (and I have an issue to create today for the research task, or the research request) is around the test history. Like we discussed, I think it was last week, really what is next is not the solution validation; it's going back and revisiting the problem and what our customers are doing with the data itself, so that can begin to inform what we build there.
C
It seems that that design from last week on the group code coverage, like, I think one person commented, and they seem to be fine with what you are seeing. I don't know what your thoughts are on that particular design: do you want more iterations, or do you think that's okay, anything like that?
A
I think I commented on the design because you left it as a comment in the issue, and we can update that as a final design in the design tab on those issues, or reference back to the design in all of the issues that have a design component. Let's just reference back to the design tab in one, so there's a single source of truth, yeah.
A
Cool, I think we can move that item then out of solution validation if we're not getting any more feedback. I've reached out to the customers that we had the initial conversations with, and I'm waiting on any feedback there beyond what we already got from the initial thing. I'm hoping to have conversations with those folks, but have you learned anything from the TAMs? I'll bug them again today, or just reach out directly through shared Slack channels.
C
It's
pretty
straightforward
right,
like
there's,
there's
not
a
lot
of
things
that
you
can
do
to
solve
the
problem.
Yeah
I
think
he
solves
the
problem.
If
it
doesn't,
it
gets
you
into
get
us.
It's
getting
us
closer
to
what
the
final
solution
is.
Gonna,
be
I
think
that
the
only
drawback
that
I
see
from
that
is
people
saying
like
I,
still
don't
want
to
see
the
average,
and,
if
that's
the
case
in
like
we
can,
we
think,
what's
the
actual
calculation
that
we
have
to
do,
they're
not.
C
I don't know, those are coming for sure; I was just wondering about the... yeah. So, just for the sake of transparency, I have been trying to prioritize better between, like, Runner things and testing things, and I don't know if I said this before, but I was trying to, you know, communicate my bandwidth better. So I mean, I was just trying to say that it seems fine to me; I can work with the designs, and I have this week to do these and so on.
B
Yeah, that's it. I think my main goal in having those is to test the assumption of poor performance, meaning slow load time for the diffs. I don't think that that's all it is. I know we are aware of a lot of other things, but my hypothesis is that there's something else going on that people refer to as poor performance.
B
So
that's
really
what
my
goal
is
and
half
an
hour
to
understand
what
their
experiences
but
then
try
to
target
people
who
have
either
been
somewhat
dissatisfied
with
their
experience,
either
authoring
or
reviewing
in
Mars
or
neutral,
and
then
I've
got
maybe
one
or
two
who
are
extremely
satisfied,
but
did
a
lot
of
them.
So
I
really
want
to
understand
why
they're
happy,
if
they're
doing
21
to
500
of
them
a
week
like.
Why
are
you
still
happy
with
that?
B
So that's what I'm hoping to get out of it. So I don't think there's anything that you guys have to provide to me or anything, but please do. What could help: P can take notes, and if you have any color commentary, shoot it to me in Slack. That would be great; you know, any kind of context is always helpful.
B
Not necessarily, so it's about the whole experience, also trying to understand holistically what it is. So we think that the poor performance is related directly to diffs; I don't think that that's correct. I think there's something else there; I just don't know what it is, and it may just be an amalgam of all the other things that we know about that are wrong.
B
That's
where
I
was
like
I
think
they
think
it's
just
this
and
that's
where
they've
been
working
very
hard
to
fix
by
what?
If
it's
not
and
Daniel
mentioned
something
about
taking
it
from
like
20
seconds
down
to
five
seconds,
then
people
are
still
complaining.
It's
like
okay,
so
I
feel
like
there's
something
else.
There
can't
just
be
it's
not
instantaneous.
It's
it's
gotta,
be
something!
So
that's
my
goal
just
to
talk
about
them,
talk
about
their
experiences
in
general
and
then
kind
of
dive
into
some
more
specifics.
B
B
It's
also
on
the
UX
yeah.
It's
on
the
UX
research
calendar.
It's
a
shared
calendar.
You
should
be
able
to
add
to
your
Google
list
of
gallery
or
anything
like
me,
and
then
each
of
those
invites
on
that
calendar.
Have
the
discussion
guide
in
the
dovetail
project
linked
as
well
as
the
issue
that
were
discussing
because
that's
our
calendar,
so
the
participants
don't
have
links
to
those.