From YouTube: UX Showcase writing better scripts for unmod testing
A
Hey everyone, I'm Michael, I'm a product designer for Secure. Today I wanted to talk to you a little bit about my experience doing unmoderated usability testing here at GitLab, share some of the pitfalls and unintended consequences that I've run into while doing it, and then a few tips that I've looked into that will hopefully help alleviate and avoid those in the future.
A
I think this last word is worth calling out here. As the name of the study implies, there's no one there to moderate the tests or to help users through them, so users are really working through the instructions on their own. Typically, unmoderated usability testing is facilitated using a third-party service that will connect researchers with a panel of participants and also provide a tool that will walk those participants through a set of written instructions provided by the researcher.
A
So really, as researchers, our only way to interact with participants is through that set of written instructions, so it becomes increasingly vital that those instructions can stand on their own. To drive this point home, I just want to call out a quote from an article by the Nielsen Norman Group, which states that every instruction, task, and question needs to be fine-tuned to eliminate the potential for misunderstanding.
A
The next thing I've seen is where participants will misinterpret instructions altogether. It could be that the language directs users to an unintended area of the prototype that you didn't anticipate, or maybe they just understood the instructions a lot differently than you thought. Regardless of what happens, though, there's really no opportunity for us to clarify those tasks with users, so if they misinterpret a task, you're pretty much out of luck.
A
The last thing I've seen is participants just straight up don't complete the full task that you asked them to. There can be a number of factors that come into this: perhaps the task is too long and wordy, or just ambiguous, or the users simply don't understand what you're asking them to do, or perhaps they get distracted or don't care to complete the full task. Regardless, all of these can obviously have a big negative impact on the results of your usability test.
A
So the first tip that I have is to use a template when drafting tasks, and this will really help ensure that your tasks are well focused and to the point. There are four key aspects to a task template. The first is the starting URL: this is where participants would be when starting their task.
A
The next would be the goal: what are we trying to accomplish here? Why are we having users go through and take on this task? What do we want to learn? The next thing would be the actual task script: these are the specific instructions that participants would read and use while trying to complete a task. And the last, obviously, is what success looks like for that task: what needs to happen for the task to be completed as desired?
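To make that concrete, a filled-in template might look something like the following. This is a hypothetical sketch based on the example task later in this talk; the URL and exact wording are illustrative, not from the actual study.

Starting URL: https://gitlab.example.com/my-project/-/security/configuration
Goal: Learn whether users can locate existing authentication details in a security testing profile without help.
Task script: "Find the authorization request header."
Success: The participant locates the authorization request header in the additional request headers section.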
A
The next tip I have is to keep instructions short and simple. What that really means is we want to focus on having only one instruction per line and avoid compound tasks whenever possible. I've seen a couple of times where participants will come in and complete the first part of a task but entirely forget about the second part, so it's usually best practice to break those compound tasks into multiple smaller tasks whenever needed. And last here is to always try to lead with a verb whenever possible.
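As a hypothetical illustration of those last two points (this task is made up, not from the study), a compound instruction like "Create a security testing profile and run a scan against your site" is easy to half-complete; splitting it into "Create a security testing profile" and then "Run a scan against your site" gives each line one instruction that leads with a verb.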
A
The next tip I have is to avoid leading questions. I think we're all aware of this to some degree, but obviously it's important for us not to prime users. We want to guide them through the experience but not really influence their behavior, and if we do find ourselves influencing their behavior, it can obviously skew the test results.
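For example (a made-up rewrite, not one from the study), asking "How easy was it to find the authorization request header?" presumes the task was easy and nudges the participant toward agreeing; a more neutral phrasing would be "How would you describe your experience finding the authorization request header?"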
A
So the task that I had originally was as follows: "You've already navigated to the website security testing profile in GitLab. The existing authentication information is stored within the authorization request header in the additional request headers section. Locate this item and proceed to the next task." As you can tell, this is very wordy, there's a lot of fluff, it's very prescriptive, and there's a lot of room for misinterpretation and ambiguity here.
A
So I spent some time, took this task, and rewrote it based on the tips that I found, and ultimately came up with this revised version: "This is a configuration page for a security testing profile. Find the authorization request header." Obviously it's a lot simpler, it's a lot more succinct, and there's a lot less ambiguity here.
B
So I have the next question, and it's not necessarily for you, it's maybe for Adam or someone else that might know more about UserTesting. I know typically, when you're doing a moderated study, you would do a pilot where you run it by somebody local, or maybe you actually do it with a real participant, but the goal of the pilot is to sort of test out the test itself.
B
You
know
to
find
to
see
if
there
are
any
pitfalls,
there's
a
worrying
problem
or
a
problem
with
your
prototype
or
anything
like
that.
Is
there
any
do.
We
know
if
there's
any
mechanism
or
way
to
do
something
like
this
for
user
testing
or
do
we
have
to
pull
it
out
of
user
testing
and
do
it
in
a
moderated
fashion.
D
So again, I don't know if this is exactly the way that it should work, but I know in my previous role outside of GitLab, the way that we were instructed to do it was: if we had an unmoderated test, just launch it with one participant initially, and then you can always add additional participants later or kind of duplicate that project. And then, once it's all said and done, launch it out to more people all at once.
E
That's exactly what I did in my last study. The good thing about UserTesting is that you get participants really quickly, or at least I did for my last study. So I was able to open it up to one person and identify some of those points of confusion, so they were essentially like a beta tester, and then I went back in and adjusted some things.
E
I
remember
once
the
the
link
to
the
prototype
was
like
funny.
It
worked
weird
for
some
people,
so
I
just
went
into
the
prototype
and
made
sure
that
everything
was
working
properly,
and
so
I
was
actually
able
to
just
add,
like
one
person
at
a
time
in
the
beginning,
as
kind
of
my
beta
testers
and
user
testing
was
really
really
flexible,
there's
actually
a
way
to
say
that
you
want
to
replace
one
of
your
participants,
because
I
think
we
have
a
limit
of
like
10.
E
I don't know if that's right, there's some kind of limit on the number of participants we have, but I was able to easily replace some of the participants where something went wrong or it just wasn't as valuable.
B
Cool, those are good tips, I think they'll help. Michael, when you're adding the tips, maybe something that we could also add is to do sort of a test run with at least one person at a time until you're confident to launch it.
A
Yeah, I think that's actually mentioned in the handbook already, but we could definitely add a clarifying point around that. Cool.
F
Thanks so much. Thank you, Michael, for this. This was a great overview of some of the challenges that come with unmoderated testing. I was curious about how you validated what you originally had in that example. Was it just kind of a hunch that maybe it was not the best approach?
A
That's a great question, Holly, thank you! So really, how I landed on this is that, over a couple of different usability tests, I was seeing sort of repeated behavior from users. I know I have a bad habit of being very wordy in how I write stuff, and that translated into my testing scripts, and I could see a couple of times where users would read some of those prompts and then skip over portions, or maybe just summarize and kind of brush over them, and it happened multiple times.
A
So
I've
seen
that
and
then
I've
also
seen
times
where
I'll
be
asking
questions
to
users
about
the
interface
and
try
to
be
clear
about
having
them
not
interact
with
stuff
and
they'd
still
click
around,
and
that
just
might
be
human
nature
and
things
like
that.
But
I
think,
there's
always
an
opportunity
to
the
language
that
we
provide.
Our
users
can
obviously
have
a
big
impact
on
the
actions
they're
going
to
take
for
usability
tests
so
just
being
able
to
just
work
to
refine.
A
Those,
I
think,
is
always
sorry,
I'm
losing
my
train
of
thought
here,
but
basically
it
comes
down
to
I've.
Seen
the
repeated
behavior
over
and
over
and
just
kind
of
gets
me
wondering
about
how
I
can
potentially
improve
it
so
yeah
the
circle
around
though
I
did
have
some
people,
some
peers
review
some
of
these
tests,
but
I
think
the
two.
So
I've
done
two
on
moderated
tests
so
far
at
gitlab
and
they've
both
been
kind
of
lightweight
to
a
certain
degree.