From YouTube: Secure Usability Benchmarking Report
Hi, I'm Michael Oliver, and this is the high-level overview of the usability benchmarking study that was completed in October 2022 for the Secure section of GitLab.
So, a little background on the benchmarking study: previous scores, like SUS, didn't really capture the full measure of usability at a system level. They didn't capture all of the individual things that we needed. A benchmarking study, which examines cross-workflow use cases and combines quantitative and qualitative understanding, is able to get us the usability understanding that we need.
The Secure benchmark scored relatively well: it scored Good, with an 84.7 out of 100. To go over the specific workflows: Triaging Vulnerabilities scored Great, with all of its tasks scoring Great. Creating a Scan Result Policy scored Good, with two of its three tasks scoring Great and the configuring-the-policy task having some errors.
We also saw that the Configuring SAST Analyzers workflow scored Good; this one had the most tasks that participants had issues with, and only one task scored Great. Then there was the Monitor Vulnerabilities in a Merge Request workflow, which scored Great, but with one caveat: the users who did this were not able to see a security widget in the MR overview, which means they saw what GitLab would look like if you did not have the Ultimate tier.
In the Ultimate tier, a security widget shows up in the MR overview that shows you the scans that were run; on tiers below Ultimate, that widget is not there. Because of a small oversight during the configuration of the repository for this research project, participants did not see that security scan, so I did not include the results for that task in the overall score. But I will go over the granular results, because they are still generalizable to any user.
So we were able to surface some insights and steer the direction for that theme, as well as the theme of improving the learnability of the SAST configuration tool. Specifically, the desired outcome there is to increase confidence in the tools. We saw that even when participants did have the correct configuration, they were not confident that it was correct; participants really were not able to be confident in any of their choices, and hopefully we're able to use the results to really help with that.
The other theme that we were able to relate to this benchmarking study was increasing security teams' efficiency when triaging vulnerabilities at scale; the "at scale" part is particularly important.
This is something that we know large organizations have trouble with, which is why we tried to emulate it in our repository. We even had one participant specifically say that if the repository were bigger and there were more vulnerabilities, the task would be even harder to complete. So we were able to get some insight into what organizations want to do, and how they frame it, in order to triage vulnerabilities at scale. But to really get those insights, we are going to launch an exploration-needed issue.
We already created an Actionable Insight: Exploration Needed issue, which will be research into moving from severity-based to risk-based security; that will really help us with triaging vulnerabilities at scale.
To give some specifics about each workflow: in the Triaging Vulnerabilities workflow, participants had to go to the Vulnerability Report, filter those vulnerabilities, identify the target one and click it, and then create an issue for it, and people were able to do that really successfully.
There was a 98% completion rate, and a lot of participants were able to do it with very few errors. What errors there were were really just on filtering the results: they were on the Status or Activity filters. People were either a bit confused about the language of the Status filter or confused by the logic of the Activity filter. But when people made a mistake, it was relatively small, and they were able to course-correct pretty quickly.
The other thing that we did see was that participants wanted to filter by more than just the date, and that relates to the research issue I brought up just before, about understanding more of the risk-based approach, because right now we only sort by date and, I think, severity. But in order to really understand how organizations think about vulnerabilities, we have to change the way we're actually organizing and sorting vulnerabilities.
So we got some insights on that, and even a clip from a particular user that you can check out if you'd like. The next workflow was Creating a New Scan Result Policy. For this, people first had to create a scan result policy: they had to go to the policies page and click to create a new scan result policy. Then they had to actually configure the policy correctly.
This is the task that they struggled with the most. Finally, they had to implement those changes with a merge request. Participants were able to find the policies page with ease, but they did have some trouble actually configuring the policy. As we can see here, the trouble was mainly on tool selection: five people had trouble with it. They had to click "Select all" to deselect everything and then click SAST, and some people just had some trouble with that.
Other people had some trouble with the group and user search: they had to find a specific security group for this policy and assign it to that security group. And some people had trouble with the logic of the number threshold. It is set up as "more than x": if you set it to one, it would actually allow one vulnerability through, which is what two people did.
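To illustrate that threshold logic, here is a minimal sketch of what such a policy looks like; the field names follow GitLab's scan result policy YAML schema as I understand it, and the specific values (policy name, group name, severities) are hypothetical:

```yaml
# Illustrative scan result policy (hypothetical values).
# vulnerabilities_allowed is a "more than x" threshold: 0 means any
# new finding triggers the approval requirement. Setting it to 1, as
# two participants did, lets one vulnerability through unchallenged.
scan_result_policy:
  - name: Block new SAST findings   # a policy name is required
    enabled: true
    rules:
      - type: scan_finding
        branches: []                # empty list = all protected branches
        scanners:
          - sast                    # SAST only, not every scanner
        vulnerabilities_allowed: 0  # more than 0 findings triggers the action
        severity_levels:
          - critical
          - high
    actions:
      - type: require_approval
        approvals_required: 1
        group_approvers:
          - security-group          # hypothetical group name
```

The strictness the participants ran into comes from this structure: any mismatch in scanner selection, approver group, or threshold silently changes which merge requests get blocked.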
Because of those pain points, particularly the last two, a number of people unfortunately failed this task: the security policies are very strict, and any error on them would lead to a significant amount of error in the repository later on. The other small issue that some people had came up when creating the policy with a merge request.
Some people had issues with the name field: it doesn't look like a policy name is required, but when you click "Configure with a merge request" a warning shows up saying the policy name is empty. That happened a couple of times, and we actually already have an Actionable Insight: Product Change issue for it, which will correct that bug, make it obvious that the policy name is required, and make the error message pop up in a way that everybody can see, because that was a slight issue for some people.
The next workflow was Configuring SAST Analyzers. For this one, people first had to go to the Security Configuration page. Then they had to enable SpotBugs, review all of the other SAST analyzers and tell me when the product was as secure as it could be, and then configure any changes with a merge request. Overall, participants were able to do this with a relatively high completion percentage, but there was a lot of trouble, particularly with understanding the SAST page, which we can see here.
The second task they had to do, which is one of the ones they had trouble with, was enabling SAST. In this one they had to look over the entire page, see that SpotBugs was enabled, see that ESLint was disabled, and then specifically enable it.
Five participants really struggled to reveal even just the list of scanners (just to click "Expand" so they could see them), and even when participants saw the scanners, a lot of them struggled to read and truly understand everything. One of the biggest pain points we saw was that a lot of the information there wasn't presented in a clear and concise way, so they weren't able to find the salient points very easily, because it's all just information overload.
The number one pain point was that people really wanted in-depth but clear information, especially about the confidence levels, which were particularly confusing: five participants didn't really understand the confidence levels at all, and our current UI does not give any real insight into what they are.
We also saw some confusion about the maximum depth setting on the page. Five participants did not understand it, or whether it was important and why or why not, which is particularly troubling, and all of this confusion led to a very poor completion percentage.
Thankfully, the themes on the future roadmap will make some big changes to this page, so hopefully the usability will increase.
The next workflow was Monitor Vulnerabilities in the Merge Request. For this, people had to first find the specific open merge request, review the vulnerabilities associated with that merge request, and then approve the merge request as needed. This task was pretty easy for pretty much everybody; the only real problem was the one task of reviewing the job for the merge request.
They had trouble with the link, thinking it was too hard to find, even on that pipeline tab or page, so even when they did get there, they weren't confident that they were on the right path. From this we would unfortunately see a 50% completion percentage, but that would obviously change if there were a security widget there.
So those are all of the high-level insights. Before wrapping up, I want to look at the uncontrolled study factors. There were time zone, fatigue, and, the biggest one, broadband connectivity.
There were some issues with some participants' comprehension simply because they couldn't hear correctly, since their internet connection wasn't so great, but I don't think that should be too much of an issue. The biggest thing was that some participants would be doing their sessions at, like, 9:00 PM their time, so they'd be pretty tired, which would obviously affect their ability to do the tasks.
I will share this presentation in the Slack message, so feel free to go to these resources and view any of the resources I have in that epic too. But I want to give a shout-out to Lucas Charles and Andy Volpe, who both helped me significantly with setting up this environment and getting the research project set up in a way that helps my stakeholders and can actually be run on a live test environment. So I want to thank them.