From YouTube: Testing UX / PM / Research Sync 2020-09-16
Description
Today Juan, Nadia, and James talked about upcoming design issues in 13.5 and 13.6, the current state of research, and why James got yelled at by his daughter's teacher during the call.
A
This is the Verify: Testing UX, research, and product sync for September 16th. As Nadia was saying, I have a lot of things on the agenda, so I'm just going to go ahead and start off by syncing on the three-year vision project that we're working on, Juan. I set up time for us later today to talk through the one-year vision and do a recording of that, as we start to think about the three-year.
A
I think a lot of what you've already done in the design, which is awesome, leads into our three-year vision pretty easily: from "here's where the problem is" to "here's a fix for it." I think that's a good step as we think about the three-year, but we'll talk about that more later when we do the recording, and then we'll post it back.
B
Okay, so yeah, my reply to that is that I'm working on those kinds of "fix"-type solutions. I'm basically on the merge request page and what that would look like, and my last point was that I was talking a little bit about that issue. But we're going to talk about that later, maybe, I don't know. My point was that I may have some of it ready by today, but I should have everything ready by tomorrow.
A
I think we can record it even as a work in progress, for purposes of the three-year vision work that we're doing as a CI group.
B
Right, perfect. So yeah, I'll update the issue with my work in progress along with the other ones sometime today, before that meeting.
A
And then I just wanted to talk through 13.5 and 13.6 design, looking forward to 13.6 and 13.7, making sure that we have design items for the implementation items. It looks like the big one there is the code quality in the diff and the MR; there's a design item for that already. Dimitri had done work on this a couple of years ago, when this first kicked off and the issue was written, but I wanted to make sure that you had a chance to look at that and review it.
A
So that design item is in 13.5, and then we'll be looking to do the implementation, I think, in 13.7. We left some space there for a tech evaluation and iteration on that for the team.
B
I assume there are likely updates that we've got to do, yeah.
A
Yeah, the other thing is that that design doesn't take into account, at least I don't remember it taking into account, the test coverage data being there. So now we have the potential for both test coverage and code quality being there, or just code quality being there. So we want to make sure we account for both of those in that diff/MR view.
B
Yeah, okay. I just had the thought that we're shipping that community contribution, and I'm wondering: that's on that same page, right?
B
Yeah, totally. Nadia is asking if there's a URL for that issue.
A
Yeah, I will hunt that up real quick here and drop it back into the agenda. Thanks.
A
So then, while we're talking about code quality, the other thing I wanted to mention was: we have code quality severity data displaying that there's no design for. I believe we should take a look at creating a design item for that, or using that item for the design, in 13.5. Right now it's tentatively scheduled for 13.6, but obviously we can move that out if we don't have the design or UX wrapped up for it.
C
For sure, yeah. James warned at the beginning of the meeting that he has to take his daughter to class. Oh, and it's an online class; I thought for a second that he was at the school or something like that.
A
Cool, let me find the implementation issue real quick.
A
All right, so that one is in there as well. And then in 13.6 we're going to start looking at the test history results for projects; actually, I expect 13.7 is rather when the implementation would happen. That gives us a couple of months to get feedback from users on how that's going and to think about architecture, so that we can leave that MVC in place on the pipelines and MRs but build the architecture so that we can create the project view of the history.
B
Yeah, I think we're good. I think we should just iterate on top of what we did; I think what we did addresses the baseline, and if we ship that we're going to get a lot of thoughts from people, and we can just start seeing data on how people interact with that in general. Yep.
A
Yep, cool. All right, so that's everything I had for design. Any other design work that's in flight or coming up that we should talk about, Juan?
B
Yeah, I just had that point at the end about the vision mocks, or the vision wireframes. I'm going to be sharing the update; I mean, I'm going to be sharing more things throughout the week, but before the meeting today I'll share my work in progress for the "fix" part of it. And yeah, it's fun; there are a lot of things I have been researching, and I'm learning a lot about the space.
B
It's something that sounds like the perfect experience, but it's not something that's real yet for anyone. So I think we've got to be more creative about what testing is going to look like in three years. I think what we've got to do is try to start bridging the gap between what we believe should be the ideal testing experience and what we could do at that moment.
A
Yeah, I think: identification of flaky tests. I think we could probably even start getting into helping augment code coverage or test coverage. If you have some really basic unit tests, then, based on what we know about all of the open source projects out there that also have test coverage, we can start to look and say: this method looks like these 45 other methods we've seen that have tests, and these tests are good tests, so we're going to build one for you and put it into an MR. That could be a really cool feature.
B
So yeah, and the other thing: yesterday, the idea that I shared with you about the flaky tests. I remember seeing that somewhere, I don't remember exactly where, but it's that idea where you give a threshold to the tool and it retries multiple times until it says, you know, "I think it's broken for good."
A
Yeah, Nadia, that idea was that, as we start to identify flaky tests and have history, a user could go in and mark a test as flaky, and then we would have another stage run as part of the pipeline: you run your pipeline, run your full suite of tests, and anything that failed that's been identified as flaky we re-run, so that you could potentially get a green pipeline even if you had failures, because those tests might just pass next time.
C
Oh, that's cool. So did I understand correctly that a person, a user, could go in and manually mark a test as flaky?
A
It's not a great test, but it shouldn't hold us back. So you can mark it as flaky and then figure out what to do: if it's flaky, we re-run it; if it's flaky, we ignore it. There are a bunch of different options there, I think, so that you could then get to a green pipeline even with failures, because they're failures that you know you're just going to skip anyway.
B
No, I saw that in a completely unrelated developer tool, I don't remember which one it was, and it was this idea of: retry these several times, and once you hit a threshold, stop retrying. And then I researched a little bit more about how people deal with flaky tests, and it seems that what many people are doing is: we know that, say, 1.5% of our tests are flaky, so we're not going to fix them; we're just going to retry them until they pass, you know.
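The retry-until-a-threshold behavior described here resembles GitLab CI's existing per-job `retry` keyword, which re-runs a failed job a limited number of times. A minimal sketch of what that could look like in `.gitlab-ci.yml` (the job name and test command are illustrative assumptions, not anything from this discussion):

```yaml
# Hypothetical job; the name and script are placeholders.
unit-tests:
  script:
    - bundle exec rspec
  # Re-run the job up to 2 more times (GitLab's maximum)
  # on script failure before marking it as failed for good.
  retry:
    max: 2
    when: script_failure
```

Note that this retries a whole job rather than individual flaky tests, so it is closer to the "retry until they pass" approach than to the per-test flaky marking discussed above.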
A
100%, cool. Right, so then I just wanted to touch base on, and this is more so that Lori, when she comes back and looks at the agenda, can see where we're at with things: we have three research items in flight right now, including the Code Testing and Coverage jobs to be done.
A
I will start the recruiting, the research or the recruiting issue, so that we can start recruiting potential CMS interview subjects, hopefully even next week, but anticipate it's going to take some time to get to our survey numbers and jobs to be done, and then we can do those interviews. So it's going to stretch into November, and we're okay with that; I just wanted to set expectations with everybody. I'll do the same on the product side, about how we're not going to move the maturity of this category this quarter.
A
Like
we
thought
so,
that
being
said,
we
are
going
as
fast
as
we
can
on
the
code
quality
jobs
to
be
done
so
that
we
can
in
january,
hopefully
move
that
maturity
up
to
the
next
level,
and
so
we'll
have
a
code
quality,
cms
research
project
coming
up
as
well
too,
but
right
now,
where
the
jobs
to
be
done.
Part
of
that
and
that's
the
team
is
brainstorming
and
then
we'll
move
into
the
same
thing
that
we're
doing
for
co-testing
and
coverage
right
now
of.
A
It was just understanding the process and then getting the survey put together. There was just a little bit of a delay last week, and then with our recruiter being out this week for a conference... I mean, I know they're juggling a lot of balls as well. Nothing we could have prevented, or, I mean, we could have prevented it if we'd been a little more proactive about it, but it's a learning experience; I'll take it as that.
C
Yeah, how has the process been so far? It wasn't smooth, or...?
A
So far it's okay. It does feel like there's a lot of overhead, just because I think I'm creating duplicative issues. I have one issue in our testing project that's tracking the whole thing, and I then have a research issue... hold on, I've got to step out of the room for just a second. Sorry.
B
I was thinking that, because version three-ish of the category maturity scorecard is shipping, I don't know, it's coming soon, should we just wait for that? I mean, that's supposed to happen very soon, right, Nadia?
C
I don't know. Honestly, I don't know how urgent that is; I don't think we are expecting huge changes. Maybe Lori is the best one to speak to that, but honestly, if we feel like we're going well with that, I probably wouldn't delay.
C
If we can wait, sure, but otherwise I think the process, the second version, is still fine as far as I can tell. That's more up to you both.
A
I mean, if you can share that MR with me, that would be great, for the next iteration on the process. I don't know where we're at in the life cycle of this; it was either about to change or had just changed when we did the scorecard for accessibility.
B
Right; when we did that one, it was the first version ever, and now we're working on the second version, and then there's going to be a third version. So I guess the iteration is a response to how people are perceiving it and where we're finding bottlenecks. That's why my thought was: if we wait for three, then we might have a way smaller...
C
So I just attached the doc, but I also didn't have access then, and I still don't have it. So there is no MR yet, even; it's just a document where Adam is collecting some thoughts around this.
C
Three, so yeah. This is why I was a little bit like: okay, if we don't even have an MR, maybe that's pretty far out, so I don't know. Lori, however, said that it's coming in one month or so. Okay.
B
If it's a complete revamp of everything, sure, but I don't think that's going to be the case; I think they're just, you know, fixing small things. I think we're good. I think the next step we should start thinking about is preparing the testing scenarios, yep, just to get ahead of that a little bit. So maybe we should.