From YouTube: UX Scorecard, Pipeline Insights FY23: Review Apps
Description
Gina Doyle (Product Designer: Pipeline Insights & Runner) summarizes the UX scorecard evaluation for Review Apps, a GitLab feature within the Pipeline Insights group.
Referenced links/issues:
- Review apps documentation: https://docs.gitlab.com/ee/ci/review_apps/
- UX scorecard process at GitLab: https://about.gitlab.com/handbook/engineering/ux/ux-scorecards/#intro-and-goal
- Review apps to Lovable epic: https://gitlab.com/groups/gitlab-org/-/epics/7313
- Scorecard issue: https://gitlab.com/gitlab-org/gitlab/-/issues/352789
- Recommendations issue: https://gitlab.com/gitlab-org/gitlab/-/issues/352790
Hi everyone, I'm Gina Doyle and I'm the Product Designer for Verify: Pipeline Insights and Runner. Today I'm going to walk through the results of a UX scorecard I recently carried out for Review Apps, which is part of the Pipeline Insights group. To give a little background: the Testing group was recently rebranded to Pipeline Insights, and we also shifted around some of our categories. Instead of the five or six categories we had before, we've consolidated them, so Review Apps is now its own category with usability testing underneath it.
I mention that because the UX scorecard process involves selecting a job to be done to conduct the evaluation around. We had a few to select from, but we ended up selecting: "When reviewing a user interface change before the software is released, I want to reduce unexpected negative impacts to the end user, so we can retain usability while releasing changes."
That was the first step we took in the UX scorecard process. After that, you create an experience map of the scenario and the tasks that you carry out. To do this, we used Mural to document the tasks, emotions, pains, and opportunities throughout the scenario. Finally, you carry out the evaluation based on the scenario and tasks at hand, and score the experience. There are a couple of different evaluation methods you can pick from.
For this one, I used a heuristic evaluation rather than user testing, and then scored it using the grading rubric, which lets you rate the overall experience. There's a lot more documentation in our handbook if you're interested in learning more. Specifically for this scorecard, I was evaluating Review Apps. I attached the issue in the slide, and I'll put it in the comments of this video as well so you can look at it; that issue contains more detailed information than what I'm going over today.
The persona we focused on was Sasha, the software developer, for the job I already stated on the last slide. The reason we're doing this is that Review Apps is a highly used feature in our group, at GitLab, and for the Pipeline Insights group in general, and it has a ton of issues associated with it; Review Apps and Artifacts probably have the most issues of anything in our group. Like I said, we had moved some categories around, and now that Review Apps is a higher-level category rather than sitting under usability testing, we wanted to concentrate on improving the experience there. We know, and we've heard, that users are struggling with some aspects of the feature, so we wanted to evaluate the experience (since we haven't done that so far), identify any gaps, and then create a vision to move towards as we strive to move up in maturity; we're shooting for Lovable.
The scorecard details are also documented in the issue that I linked, but the scenario I was going for was: you are a front-end developer who is responsible for developing and testing new features and successfully pushing them to production. To enable yourself and your teammates to easily review new UI features, you need an environment to see and review the new changes before they go to production. This should help reduce bugs and undesired or unexpected behavior before users interact with them.
The tasks to complete the scenario were: set up a review app for your project by configuring an environment to deploy changes to (there was no specific application you needed to use here, so you can pick whatever you like; I ended up using Surge and Gatsby); create an MR with a simple UI update on the home page, or whatever page triggers the review app; review the update and make any changes necessary to obtain the expected behavior; and then merge the MR.
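The first task, configuring an environment to deploy changes to, boils down to adding a review job to the project's `.gitlab-ci.yml`. Here is a minimal, hypothetical sketch for a Gatsby site deployed to Surge; the domain pattern and the `SURGE_LOGIN`/`SURGE_TOKEN` CI/CD variables are illustrative assumptions, not the exact configuration from the scorecard project.

```yaml
# Hypothetical review-app job: builds the Gatsby site and deploys it to a
# per-branch Surge domain. Assumes SURGE_LOGIN and SURGE_TOKEN are defined
# as CI/CD variables so the surge CLI can authenticate non-interactively.
review:
  stage: deploy
  image: node:18
  script:
    - npm ci
    - npx gatsby build
    - npx surge --project ./public --domain "$CI_COMMIT_REF_SLUG-review.surge.sh"
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG-review.surge.sh
  rules:
    - if: $CI_MERGE_REQUEST_IID
```

Giving the job an `environment` whose name starts with `review/` is what makes each branch's deployment show up as a review app on the Environments page.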
Before this, I had done some setup tasks so that I could set up a project to use on GitLab.com. I ended up setting up a public project, and I used a previous review apps project that our group had created, which shows how to use Surge and Gatsby, so that I could set myself up to complete these tasks.
This is the issue I've been referencing the whole time; it contains more data and details around the scenario and tasks and all that. And here's an overview of the Mural, which is the experience map showing how my emotions changed over time throughout the scenario. As you can see, the part I struggled with the most was the configuration of the review app, and I think that will differ highly depending on what you're using.
I struggled a lot because I was troubleshooting with Surge, which I had never used, but once you're past that step it's pretty easy going, and the experience is actually really good. I would say there are still some things we could take the opportunity to make better, but it was still great. I'm going to show you the project that I used. When I was going through this, I was mainly looking at the Environments page.
It was nice that there was an "Enable review app" button here. There are a couple of bugs in this modal, but it actually walks you through the experience pretty well.
Once you're done with that, there was an attachment that gave me the status of my review app: it told me if it was deploying, if it was ready to go, and when I could view it, and that was awesome.
I also added review app routes to my configuration, which points out every page that you update and gives you a direct URL to go straight in and view those changes. That was also great to see, and it made it really easy to check out my new changes.
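The review app routes I mentioned come from a route map file, `.gitlab/route-map.yml`, which maps source file paths to the public pages they render. A minimal sketch for a Gatsby-style site follows; the exact regex depends on how your pages are laid out, so this pattern is an assumption for illustration.

```yaml
# Hypothetical .gitlab/route-map.yml: maps each changed source file to the
# public page it renders, so the merge request can link straight to the
# updated page on the review app.
- source: /src\/pages\/(.+?)\.js/  # e.g. src/pages/index.js
  public: '\1.html'                # maps to index.html
```

With this in place, the merge request shows a direct link from each changed file to the corresponding page on the deployed review app.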
So I'll hop back and tell you about the experience score that I graded this at. I said it was a B, because it meets expectations but doesn't necessarily exceed user needs; you're able to reach the goal and complete the job.
The highlights, which I kind of just went through, were: the "Enable review app" button is highlighted in Environments, which makes it really easy to get started; you receive constant status on the review app's progress when a change is committed; and review app routes make it super easy to find where changes were made.
The lowlights were: there were a couple of bugs in the "Enable review app" modal, and I don't think the template code is really the best thing to use, because there are so many changing variables depending on what your site is like, and I'm not sure it really gives you a great template to work from.
The other thing is that there's no way of validating that the review app script contains no errors. If there were a way to do that, as pipelines do today with the CI YAML file, it would be really helpful for troubleshooting early on, before you even make the changes.
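For the pipeline definition itself, that kind of early validation does exist: GitLab exposes a CI Lint endpoint that checks a project's `.gitlab-ci.yml` before anything runs. A hedged sketch, where the project ID and token are placeholders; note this validates the YAML and job structure, not the shell commands inside the deploy script.

```shell
# Hypothetical: validate the current .gitlab-ci.yml of a project via the
# CI Lint API. <project-id> and <your-token> are placeholders.
curl --silent \
  --header "PRIVATE-TOKEN: <your-token>" \
  "https://gitlab.com/api/v4/projects/<project-id>/ci/lint"
```

The response reports whether the configuration is valid and lists any errors, which is the kind of check that would be useful for the review app script too.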
To summarize the experience and give next steps: there are some opportunities we can capitalize on. This gives us a sense of where we are today, because we've never scored this before; it identifies gaps that need to be solved to get to Lovable status; and it helps us come up with a vision that we want to work towards.
As I was saying in the beginning, there are so many issues related to Review Apps, and this should really help us prioritize which of those features are needed and when we should get to them. That's it! If you have any feedback, I'm happy to hear it; please leave comments in the issues that I linked. Thank you.