From YouTube: Testing UX / PM / Research Sync 2020-06-05
Description
Today's session was a discussion of the MR Beautification project, an ask to create some mocks to "show" the vision for Code Testing and Coverage for an upcoming analyst review, using the Web Performance and Accessibility features for an internal epic (yay dog-fooding!), and an update from Juan on some in-flight design and implementation items.
Agenda: https://docs.google.com/document/d/1lBo6x1hoZigWL7kcS6uFo-ncxcu7SE8_WDUsewk4Kbc/edit#
MR Beautification Issue: https://gitlab.com/gitlab-org/gitlab/-/issues/217568
A
This is the June 5th Testing group UX / PM / Research sync. Nadiya, it looks like you have the first point on the agenda, if you'd like to take that over.
B
This issue was created by Pedro and a couple of other folks, and actually a lot of people in the UX department are focusing right now on addressing the concerns that Marcel raised with the merge request experience in that famous video. James, I think you've seen that one. So there is a lot of effort going on around that, and of course Testing and Verify heavily influence that part of the product.
B
We've been discussing what we could possibly do here to improve all of those pain points that Marcel collected in the video, and thanks for your contribution as well, James. So, in this issue there is a table where we have collected all of the items related to Verify and Release.
B
B
We
have
agreed
that
from
every
stage
group
we
want.
We
would
like
to
propose
to
pick
two
to
five
items
to
be
implemented
in
quarter
two,
which
means
you
know
like
with
the
goal
of
kind
of
like
mining
on
those
I
with
the
engineering
before
before
June
13th.
So
you
know
that
makes
that
will
make
us
too.
Okay,
some
of
the
items
could
probably
going
13.2
and
some
of
them
could
con
13.3.
B
Basically, my proposal here is to go over those ten or so items that we have selected in this issue for the Testing group, and maybe pick some up. Maybe some of them are really low-hanging fruit that we could just say, hey, okay, we can just pick them up. I know Juan was already proposing some, so basically I wanted to see which points we can align and share on, and how we could achieve this while we are here.
C
So yeah, there's at least one issue that I know I can speak to myself, and it's actually quite easy: the language one. I think I need to work with the Technical Writers on that one, just to make sure we're getting some support, but I think we already have an idea. I discussed it with the team a month or so ago, and we actually came up with some ideas on how to improve that language.
A
So then, I think we've identified a couple that are already in flight or that Juan can handle himself. I think triaging the rest of them and prioritizing a couple for 13.2 and beyond are our next steps. I see a couple in the list that we probably just won't address or that are way down the list, and a couple of others that already fold into work we're going to be doing in the next quarter or two, so I think it'll be pretty easy to get through the rest.
B
We added this "selected as a priority item" column. We can rename it to whatever, just so it's not misleading: "selected for implementation", I don't know. And maybe, if we could go through during the next week and add those couple of signs that we have chosen those to work on, and maybe assign a milestone to them, that would be super helpful.
B
Amazing, thanks a lot, I really appreciate that. As I was saying to Juan already, I know that it may look like it's kind of bypassing the regular prioritization process, but this is super important, and I think all together we can really improve that experience here.
A
Sounds good. I think I have the next point then: creating some mocks for a longer-term vision. We're setting up for a review with some analysts to look at the roadmap, kind of the 12-month roadmap, and this is less about features and more about opportunities. But as part of that, we think we want to do some almost pseudo-mocks or wireframes of what is possible, based on some of the opportunities that we see and the things that are just available within GitLab as a single DevOps app.
A
And so I wanted to talk to you about, hey, what can we do in the next couple of weeks to put together some simple wireframes? Is there bandwidth on your side for that? I know that you're still splitting time with Runner and helping out a little bit with CI as well, based on our previous conversations, but is that something that we can look at?
A
This fits in with the use case pages. The idea is that we use these things not as "here's what we're going to go build", or even "here's what we're going to go validate", but as "here's what is possible". As we start talking about twelve to eighteen months down the road: if you're using GitLab, you as an executive team can look at that single source of truth, that single data view of here's what your quality looks like, here's new bugs, here's quality, here's coverage. Then you can start to drill down, and we start to show that flow of here's what an executive sees, and then they click on it and here's what a director sees, and they click on it, all the way down to here is your line-by-line code coverage and code quality.
A
It's a matter of stitching that flow together with some really loose mocks: here's what it could look like. We don't know if this is what it's going to look like, and it should not at all be prescriptive of what our solution is, especially if we find, hey, they'll pay for this, and it doesn't solve a problem. But it's how we're thinking about describing the value of GitLab here, and how we tie together your SCM and your CI and your testing, so that you're really getting value out of those two stages together.
C
Thanks, James, for mentioning the wireframe possibility, because if we were going to work at high fidelity, that fidelity takes more time. If I go into Whimsical, for instance, I know that I can put something together in about two hours, which I can definitely carve out of part of my day.
A
Awesome, cool. So my next point can be read-only. I think there are actually two epics, and I found one of them as I was remembering this and putting it into the agenda this week. There's mention of these UX accessibility epics in some of the old accessibility issues that are out there and on the Direction page, and that's where this one came from. It's currently assigned to someone who no longer works at GitLab, and so I just wanted to understand the status of this and the longer-term thinking about, hey:
A
Are
we
dogfooding
our
accessibility
feature
set,
so
we
can
follow
up
on
this
a
sink
or
in
next
week's
I?
Guess
not
next
weeks,
but
the
week
after
is
synchronous
call
if
we
want
I,
just
it's
referenced
on
the
Direction
page,
and
so
we
could
be
driving
people
to
that
epic
itself,
because
I
don't
think
it's
a
private
epic
I
just
wanted
to
understand.
If
there's
big
traction
being
made
on
that,
since
it's
a
sign
to
someone
who
is
no
longer
a
gitlab,
er
I
wasn't
sure.
A
This seems to predate any of our accessibility features by quite a bit. We could use our accessibility features to measure where we stand against these things and then show that we are improving against these items, right? It calls out "use Lighthouse in CI to validate good performance"; we have both our Web Performance and Browser Performance features, and our Accessibility feature.
C
Oh yeah, I was excited; I shared it this week in our UX meetings, just so people knew that we went through that with the old process. I thought there's a lot of learning there, and I hope I can share some lessons later with people. Cool, all right. So talking about that experience and getting feedback: I got some feedback from the UX folks on Verify, on this table, regarding the hash marks, from Dimitri and others.
C
They gave great feedback on some things that I think were very appropriate to change, and I'm working on changing those. My goal is to test these internally next week, and then once we get an idea that works, we will be good design-wise. I got great feedback, I'm polishing that right now, and I think it's going to look good; I just wanted to give you an update on that. And then the other point that I had was about what Scott is going to take over.
C
So I'm glad that it wasn't that I was shrugging too much; there's actually a big issue that has nothing to do with me, which is that we're migrating. But yeah, I'm glad that she's taking over that so we can get it merged, and that's part of the epic as well, so the whole thing should get completed. I just wanted to flag that I'm paying attention to that one as well. Over to Nadia.
B
It's more related to bringing them authority, I think. This one talks about the scorecard process, so yes, it's a bit of a different one, although it drives to the same point. But yeah, I feel like we could create one for Testing and just link that epic to this epic, if we would want to. I don't know what's best, just let me know.