Defend: First-class Vulnerabilities
by Andrew Volpe, Sr. Product Designer - Defend
Okay, so I'm Andy, a senior product designer for Defend, and today we'll be walking quickly through some changes we'd like to make to vulnerability management and how we came to decide on those changes. Vulnerability management exists within the Threat Management group in Defend; it's our only feature. We have it planned for minimal this quarter. We also have responsible disclosure, which is coming soon. So: vulnerability management.
Today, vulnerability management will be minimal, or should be minimal soon, and then it will be based on our first-class vulnerabilities framework, which is currently in review. Right now that's a pretty large overhaul of the database, of how we're storing vulnerabilities, and of how we let users interact with them. Vulnerabilities aren't quite like issues, but they'll have certain metadata and interaction points that make them more robust than they are today.
So now our problem is how to prioritize the next step, so we wrote down a problem statement.
How might we prioritize and plan the next iterations of vulnerability management? Our goal here was to test and validate a point of view on the next maturity level for vulnerability management. Our objectives are to leverage the MVC framework as a starting point; I don't want to make too many large-scale changes, just because it will be released soon. And we want to inform the design decisions with research findings, which we gathered a few months ago.
We believe that users will be able to take appropriate action on vulnerabilities depending on where they are in their lifecycle, that we'll be able to give users clearer and more informed decisions on vulnerabilities with contextually relevant data, and that users will be able to control and visualize the entire vulnerability management lifecycle from one location.
First, our primary persona: "so I can ensure my company's not at risk of a future attack." We have a secondary persona as well that we want to account for in these designs: "When I'm evaluating my application security team, I want to have access to data pertaining to individual and team performance, so I can assess our effectiveness as a team in adhering to our security policies and identify shortcomings that may be present."
So, back to the research: we did some user flow and workflow analysis and some light journey mapping with the application security team, and learned a bunch of things from that, mainly insights into their workflow. We were able to synthesize how they were managing vulnerabilities from inception to resolution and wrap it up into three main stages: triaging, monitoring, and then resolution wrap-up.
So we took the approach of just creating some mid-fidelity wireframes that we wanted to take to testing as a prototype, but we wanted to make more than just one or two changes, because we really wanted to see the needle move. So we started with the foundational components that are going to be translated across the entire experience, and these elements are mainly just things like the header and certain interaction elements. Then we took all the insights we had from testing, as well as our assumptions.
We started mapping changes. I won't go through them all, there isn't a ton of time to do that, but you can see that by doing this mapping during the actual research, we can check our assumptions and say, component-wise or interaction-wise, this was or was not successful, and why. By compartmentalizing it that way, we can make moving to the next steps a little quicker than just regenerating a whole new prototype and retesting over and over.
We wanted to be contextual as well, so each of these lists, since they're separated into tabs, will have different contextual information based on where the vulnerability is in its workflow. That's one assumption we're ready to test, as well as adding better tracking data with the new columns, which we've heard from users is something they want, and changing some terminology. Today we call these "identifiers"; I think that's just our back end creeping into our front end a bit, and the industry calls them something else, so, you know, the assumption is that's changing.
Then we have resolution tracking itself. This is where users will come and monitor how vulnerabilities are being remediated, or whether there are any problems in the process. So again we're adding more contextually relevant data, like when it was triaged, and taking a swing at removing the file and line information. Our assumption here is kind of high risk, or medium risk: that users don't want to see this information after a vulnerability has been assessed.
It's already under remediation, so we don't need to redisplay it. We're also kind of breaking some patterns here, not necessarily to identify whether this is the right visual solution, but whether a solution users want to see is a pipeline view of the remediation status. So there are some new things that we'll be testing, but it's not going to be exactly what shows up in the final UI.
And then the last phase is around the resolution wrap-up. Here we want to identify what features or data are relevant, as well as call out where we're removing data that we had thought was relevant in the past. Some of those things include adding a disclosure column with status (we don't know how often this is going to happen, but we want to assume it still wants to be tracked) and an SLA delta as well, since that's something that we can measure.
So if we can measure it, maybe we should show it. Moving on, the last bit, which is kind of outside of the workflow, is monitoring, auditing, and the single source of truth for vulnerabilities that have been dismissed. Vulnerabilities are dismissed because the organization chooses not to fix them, because it's not a real vulnerability, or because it's a duplicate. One of the things we wanted to investigate is: is it useful to see who a vulnerability was dismissed by? We heard that in testing, but does it need to be here?