From YouTube: UX Showcase: (Un)moderated solution validation
Description
Iaian Camacho shares the results from moderated and unmoderated solution validation for Dependency Proxy and Cleanup Policy.
Hi everyone, I'm Iaian Camacho, the product designer on the Package stage. Today I wanted to share with you some of the solution validation that we've been doing on the Package stage recently. Specifically, we have both an unmoderated solution validation and a moderated solution validation, so I'm going to go through the two different tests and talk a little bit about the differences between the two of them and the advantages each one offers.
For a quick review: on the Package stage, we are responsible for taking built pieces of code and other dependencies, storing them in a registry for our users, and then reliably delivering them out to CI pipelines, to engineers' computers, or out to different platforms and things like that. So we have the Container Registry, the Package Registry, and a few other asset management tools in there.
Today I want to talk about the solution validation we did for two different aspects of the Package stage. First, I'd like to talk about the Dependency Proxy, which I've shared in a previous showcase in terms of what it is and how it's useful. We did a moderated solution validation to see if our vision and plan for it is really going to help our users. Afterwards, I'm going to talk about a Cleanup Policy redesign that we did.
Diving into the Dependency Proxy validation: the major question we had from the research perspective was, does the proposed solution enable users at large organizations to effectively manage packages in many different locations? We've heard from some of our customers (the larger they are, the more we tend to hear this) that they're not just hosting their packages in GitLab or in public registries; they'll also have their own third-party registries that they're storing or archiving packages in.
To run this test, we did a moderated test structured as a usability study, but with deep-dive exploration aspects to it, so we could really understand whether the solution we were proposing was going to help our users, how and when they would use it, questions like that. So first, we asked the users to find the Dependency Proxy in the product, navigate to it, and then describe the UI that they saw.
A
What
information
was
relevant
if
there
was
any
information
they
expected
to
see
if
it
didn't
once
we
had
that
conversation,
we
asked
them
to
connect
a
new
remote
registry.
This
again
was
kind
of
the
user
study
side
of
it
were
they
able
to
find
the
right
button
understand
what
all
the
different
fields
in
the
form
were
for
and
able
to
work
through.
It
answer
all
the
questions
they
knew
they
would
need
to
and
be
able
to
move
forward
from
there.
We then asked them to create a virtual registry, which is a combination of many registries behind one single endpoint, so that a DevOps engineer, or a leader or manager in that area, could make one URL that the rest of the engineering team could use, and nobody else would really need to pay attention to it. That streamlines efforts in general, makes sure there aren't duplicate packages in different places, and enables integration with Secure and Defend kinds of features.
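To make that concrete, here's a minimal sketch of the virtual-registry idea, under assumed names (the upstream names and URLs are invented for illustration; this is not GitLab's actual implementation): one endpoint tries an ordered list of hosted and remote registries and returns the first match.

```typescript
// Minimal sketch of a virtual registry: one endpoint, many upstreams.
// Registry names and URLs below are hypothetical, for illustration only.
interface Upstream {
  name: string;
  baseUrl: string; // a GitLab-hosted registry, or a remote like npmjs.org
}

const upstreams: Upstream[] = [
  { name: "team-hosted", baseUrl: "https://gitlab.example.com/api/v4/projects/1/packages/npm" },
  { name: "npmjs", baseUrl: "https://registry.npmjs.org" },
];

// Resolve a package through the single virtual endpoint: try each upstream
// in order and return the first registry that has it.
async function resolvePackage(pkg: string): Promise<Response> {
  for (const upstream of upstreams) {
    try {
      const res = await fetch(`${upstream.baseUrl}/${pkg}`);
      if (res.ok) return res; // first registry that has the package wins
    } catch {
      // unreachable upstream: fall through to the next one
    }
  }
  throw new Error(`${pkg} not found in any configured registry`);
}
```

The point for end users is that only the person maintaining the upstream list needs to think about it; everyone else just points their package manager at the one URL.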
One thing that was pretty cool about this moderated study was that we were able to recruit participants through Sales, as well as through our traditional recruiting efforts. We've heard from some of our customers that the reasons they would struggle to move over to GitLab for their package needs specifically revolve around these kinds of features.
A
So
we
completed
the
testing
and
we
came
up
with
quite
a
few
research
insights,
and
then
we
presented
those
insights
during
our
package
think
big
session.
So
not
only
did
we
do
the
study,
we
were
able
to
change
our
strategy
and
make
some
design
adjustments,
but
everyone
on
the
team,
including
engineers
and
sales,
were
able
to
see
that
feedback
and
the
adjustments
we
were
making
and
have
a
good
conversation
about
it.
The really great news is that the proposed solutions were very positively received. We walked through a lot of these different aspects of the study, and then at the end we asked: if this was available today, would you be able to utilize it? Would it make it easier for your team? Things like that. And universally, we got positive responses. Selfishly, from the design side, participants consistently understood the UI: the actual screens that we put in front of them, the data we were presenting, and how we were organizing it all made sense to our users.
One of the aspects we were surprised about was that the idea of caching packages proxied from outside registries was a lot more important to our users than we had initially thought. We added a little bit to the design that was basically a checkbox that said we'll cache it for 30 days, and it ended up being a focus of conversation with almost everyone. So that was an exciting little learning for us, and it applies a little bit later in how we wanted to build it.
We also learned that there were going to be some limitations, sorry, that those limitations were going to inhibit some of the users from being able to utilize this feature fully: they had more registries involved than we were allowing for. What was cool, and the advantage of doing a moderated test, is that we were able to explore that a little more deeply and understand their needs, and we discovered that the number of remote registries was actually pretty limited, but the number of registries hosted in GitLab was a lot larger.
That information really helped us understand the problem and find a workaround, so that the limitations we have on the technical side would still enable our users. And one of the last big learnings we had was that the terms we're using, specifically hosted, remote, and virtual registries as terms to describe the different parts of the feature, were not immediately clear to our users.
The good news is that by the end of the study, after going through all the different pieces, they understood what the terms meant and how the pieces worked together, and they were able to use the terms in conversation, so the terms are very learnable. What this means in terms of us actually building the solution is that when we think about documentation, or how we talk about the features, we need to be really clear about what those terms mean, how they're related to each other, and how they get used.
One of the advantages of doing the solution validation is that we got a lot of data on what was most important to our users, and that was able to inform an MVC-style iteration. When we first started talking through how to build this large feature, it really felt like we would have to take many milestones to build all of it before we could actually deliver any value to our users. That's not MVC; it's missing the "viable" part, and so the iterations didn't really make sense. So we took a step back.
What we decided to do is break it apart, as you see here. The first step is request forwarding for npm: if a package isn't available in our hosted registries, we go out to npmjs.org, the default registry for npm, and see if we can pull it from there. That functionality already exists; we're going to upgrade it to make it act a little bit more like a proxy.

From there, we're going to implement a 30-day cache, which is a really simple tool: whenever we pull a package from that remote registry, we'll keep a copy of it, and if for whatever reason we aren't able to reach npmjs.org, that remote registry, then we can pull from the cache instead. That raises the stability of pipelines and a lot of other things like that. What's kind of interesting is that this is the piece that kept coming up during the study.
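As a rough illustration of that behavior (a minimal sketch under assumed names, not the actual GitLab implementation), the pull-through cache could look something like this: refresh the cached copy on every successful pull, and fall back to a copy that is at most 30 days old when the remote is unreachable.

```typescript
// Sketch of a 30-day pull-through cache (illustrative only).
const CACHE_TTL_MS = 30 * 24 * 60 * 60 * 1000; // 30 days

interface CacheEntry {
  body: ArrayBuffer;
  fetchedAt: number; // epoch milliseconds of the last successful pull
}

const cache = new Map<string, CacheEntry>();

async function fetchPackage(url: string): Promise<ArrayBuffer> {
  try {
    const res = await fetch(url);
    if (!res.ok) throw new Error(`upstream returned ${res.status}`);
    const body = await res.arrayBuffer();
    cache.set(url, { body, fetchedAt: Date.now() }); // keep a copy of every pull
    return body;
  } catch (err) {
    // Remote registry unreachable: serve the cached copy if it's fresh enough,
    // which is what keeps pipelines running through an npmjs.org outage.
    const entry = cache.get(url);
    if (entry && Date.now() - entry.fetchedAt < CACHE_TTL_MS) {
      return entry.body;
    }
    throw err; // no usable cached copy, surface the original failure
  }
}
```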
After we've built the cache itself, we're going to create a UI to help display what's being cached, as well as allow users to purge the cache entirely or remove specific packages from it, helping them manage it when they're troubleshooting problems or when a package has been updated, things like that.
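Building on the cache sketch above, the management actions that UI would expose reduce to two operations (the function names here are hypothetical):

```typescript
// Hypothetical management operations behind the planned cache UI,
// operating on the same kind of cache map as the sketch above.
function removeFromCache(cache: Map<string, unknown>, url: string): boolean {
  return cache.delete(url); // drop one stale or problematic package
}

function purgeCache(cache: Map<string, unknown>): void {
  cache.clear(); // wipe every cached copy at once
}
```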
After we've built those three stages for npm itself, we'll expand the proxy and caching mechanisms to other package formats. We're going to start with Maven after that; it's our next most popular package manager.
As we start to expand it, we'll also start working on the idea that, instead of having only one remote registry you can make that request from (the specific one we've said we're going to pull from), we'll allow users to set up another remote registry to pull from as well. That slowly gets us to the idea of connecting many registries together. From there, we will enable users to combine the multiple registries and remotes they've set up into one API endpoint.
That's that virtual registry idea, right at the end of the story. And then from there, we'll start integrating with security and compliance, making sure that the dependencies we're pulling in meet standards and that we're able to flag any warnings around them, that kind of information, to make it all feel a lot more stable and secure for our users.
Next, the Cleanup Policies. From users of this tool, we heard that the way the UI presented itself wasn't very clear, and they were getting a little confused by the terms that were being used, so we wanted to redesign it, which we did. This solution validation was really focused on: do users now understand, and feel confident using, the new terms, the new settings layout, and the new UI? This is an automatic and destructive action, so making sure users feel comfortable with it is incredibly important for getting adoption.
We ran an unmoderated test so we could ask a larger variety of users a really simple question: can you describe what each one of these fields means? During that unmoderated test (which means there's nobody there to help them; they're just on their own), we first asked them to find the cleanup policy settings related to a container registry.
We asked them how they would enable or disable the automatic cleanup, and then from there we asked them to explain details related to each of the fields: What impact does this field have on a cleanup policy? What other options do you expect to be there? And lastly, of the options we're providing, which would your organization utilize?
For this test we were able to get 15 participants, which was really cool for testing one settings UI. Eight of them were sourced through the unmoderated testing tool that we used, and we were able to get all eight of those responses within 24 hours, so it's a lot faster than some of our more traditional recruiting methods.
A majority of the participants either strongly agreed or agreed that the interface was easy to understand, and the available options for each of the fields matched their expectations when we asked users to predict what was going to be there. And of the options we're providing, users selected a variety that would work for their organizations, which means we have the right number of options in the right areas, instead of one option always being the one that's chosen.
One thing we did learn is that users expected the cleanup policy to be accessible from the registry UI; right now it is in the package registry settings, and so we know that one of the things we can do to make it a little easier is to directly connect the Container Registry to these settings. And one of the things we learned on the settings page itself is that there was some confusion around the phrase "keep the most recent X tags in the image repository."
A
Some
users
thought
that
we
were
referring
to
when
it
was
being
published,
which
is
what
it
is
using
and
others
refer
to
when
it
was
being
last
pulled.
So
you
could
have
a
tag
that
has
been
there
for
years,
but
it
was
pulled
yesterday,
so
we're
not
going
to
get
rid
of
it,
and
so
we
need
to
make
sure
that
we're
a
little
clearer
on
that
next.
Steps,
for
this
is
we're
going
to
iterate
on
the
design
itself,
based
on
some
of
that
feedback.
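A small sketch makes the ambiguity obvious (the field names here are hypothetical, not the actual settings): "keep the most recent N tags" gives different answers depending on which timestamp "recent" is read against.

```typescript
// "Keep the most recent N tags" read against two different timestamps.
// Field names are hypothetical, for illustration only.
interface Tag {
  name: string;
  publishedAt: Date;  // when the tag was pushed (what the policy actually uses)
  lastPulledAt: Date; // when the tag was last downloaded (what some users assumed)
}

function keepMostRecent(tags: Tag[], n: number, by: "publishedAt" | "lastPulledAt"): Tag[] {
  return [...tags]
    .sort((a, b) => b[by].getTime() - a[by].getTime())
    .slice(0, n); // everything past the first n is a candidate for cleanup
}
```

A years-old tag that was pulled yesterday survives a keepMostRecent call sorted by lastPulledAt but not one sorted by publishedAt, which is exactly the mismatch participants described.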
Once I finish the synthesis and make sure we have all the information we need, we'll lastly prepare the issue for development, get it ready for frontend and tech-writing review, and send it on its way. To wrap up, I wanted to talk a little bit about moderated testing versus unmoderated testing, and reflect on the experience of being able to do both of them.
A
One
of
the
pros
sorry,
the
pros
for
for
moderated
testing,
and
I
think
the
most
important
one
is
that
you
can
ask
participants
why
you
are
there
and
you
can
watch
them,
go
through
something
and
when
they
hit
a
hiccup
or
they
have
an
answer,
you
didn't
expect.
You
can
stop
the
test
and
just
ask
them.
Why
do
you
think
that
what
are
things
that
could
have
been
done
better?
How
would
you
use
it
and
really
follow
up
on
those
questions?
One of the big advantages when we did the moderated testing for the Dependency Proxy was that when a user responded with something or got excited about an idea, we jumped in and were able to explore that idea really in depth. And one selfish bit is that you get live feedback on the actual test you produced. So if you're doing moderated testing, and you have your first participant, and your prototype breaks, you know right then and there that it happened, and you can fix it so that the next session is a lot cleaner.
On the cons side: your data can feel, well, it can be trickier to get consistent or quantitative data from all of your studies, because being able to ask why in some sessions and not in others means you're not necessarily going to be able to say "seven out of seven users said this." And overall, it's just a little bit more expensive. On the unmoderated testing side, one of the pros, and my personal favorite, is that it is very async friendly, which is a GitLab favorite, and the overall overhead is lower: we can just send the test out to a participant.
Some of the cons are that unmoderated testing requires you to have a set of fixed questions, and there's no option for follow-up. You can't ask that "why"; you just get whatever they say at the time. If the user gets stuck, you can't really guide or help them, because nobody's there. So if they get stuck and can't move forward in the test, that just ends the session, which both doesn't feel very good for the person taking the test and doesn't provide a lot of data.
B: Questions? I typed a couple in the chat. Again, you can pick which one you want to answer, if any.
A: I think that it does have less of a bias, because there's no option for a follow-up question, so based on their responses you can't change or expand the test; you're going to get the same result every time. And because it's kind of blind, they're not talking to you, which means they're possibly going to act more naturally, and your feedback is just going to be more straightforward.