From YouTube: UX Showcase: Validating and prioritizing your roadmap
Hey everyone, it's Andy with another UX Showcase from the Secure Vulnerability Management team. Today I want to talk a little bit about validating and prioritizing your roadmap and how we are achieving that. Vulnerability Management just moved to Viable last winter, which we did by focusing primarily on the security workflows and a little bit on the dashboards, really trying to hone in on the needs of the application security engineer or analyst.
But getting to Complete is going to be a different story. I know we'll probably have a 12 to 18 month timeline, but the areas of focus are varied, right? It could be multiple job performers or jobs to be done we haven't even targeted or begun researching yet, as well as things like automation or other categories that are new to us, or new to us in the security space. So, planning for the year, we wanted to start out by generating UX themes.
The themes here are, you know: challenges measuring team and application performance, challenges managing large amounts of vulnerabilities, and challenges with repeating multiple tasks. Then I reached out to my PM and asked: what are your tempos, what are your pillars for this year?
What's being driven through product? Matt had mentioned enterprise readiness, market leadership, growth, and scalability. So we got together and started thinking about how to kind of peanut-butter-and-chocolate those into some UX themes for us on Vulnerability Management, and we came up with these four: vulnerability management at scale, configurability and flexibility, deeper lifecycle capabilities, and vulnerability prevention.
But let's just get out of this presentation and go check those out. There's a link here if you would like to follow along; I already have it open. So, looking at how these themes are laid out:
We have the primary themes here, from vulnerability management at scale all the way down to configurability and flexibility, and so on, and we have the objective of each theme followed by the associated job family. We are using the jobs-to-be-done framework, but we're taking just the core job and putting that in the issue, or in these sections, because it's easier to move around than bringing in the entire jobs-to-be-done statement at our kind of altitude.
These, which you could also call features, are broken out into the sections of the product that they can impact. Each one has a link to an issue if it came from a customer, where the insights may be, the validation type that's required or that it's currently in, and just a SWAG on the UX weight. These were sourced, again, from CSM scorecard customers, internal requests, internal customers on the AppSec side, and from our backlog.
So we didn't really generate a lot of net new ideas here. I mean, everyone knows how deep a backlog can get, so we wanted to make sure we're working with what already exists at the moment. But, as you can see, there's a ton of stuff here, and prioritizing this into the year would be a big challenge.
So we turned to a methodology that I've used successfully in the past, called the Kano method. What it does is let you prioritize your features based on your users' sentiment towards the functionality and performance that each feature gives them. We have gone down this route with seven features; if you are thinking about trying this method, I wouldn't do any more than that to start with. This is a survey, with all the caveats of any survey.
What the method does is create buckets that the features you are analyzing will fall into, and it helps you make more informed decisions on what to bring forward or pull down in your backlog. The process is a little nuanced, so I won't dive too deeply into it, but at a high level, for each feature you just need a description and a mock-up, as well as three questions. You'll be asking a functional question; for example, say we're talking about video export times.
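As background on those buckets, here is a minimal sketch of the standard Kano evaluation table, assuming the usual five-point answer scale; this is the textbook classification, not our team's specific survey or tooling.

```python
# Standard Kano evaluation table: each respondent answers a functional and a
# dysfunctional question on a five-point scale, and the pair of answers maps
# to a bucket. This is the generic scheme, not our production analysis code.
# A = Attractive, O = One-dimensional, M = Must-be,
# I = Indifferent, R = Reverse, Q = Questionable.
KANO_TABLE = {
    # rows = functional answer, columns = dysfunctional answer
    "like":     {"like": "Q", "expect": "A", "neutral": "A", "tolerate": "A", "dislike": "O"},
    "expect":   {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "neutral":  {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "tolerate": {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "dislike":  {"like": "R", "expect": "R", "neutral": "R", "tolerate": "R", "dislike": "Q"},
}

def classify(functional: str, dysfunctional: str) -> str:
    """Return the Kano bucket for one respondent's answer pair."""
    return KANO_TABLE[functional][dysfunctional]

# A respondent who likes having the feature and dislikes its absence puts the
# feature in the One-dimensional (performance) bucket for that response.
print(classify("like", "dislike"))  # -> "O"
```

Tallying each respondent's bucket per feature is what eventually tells you whether a feature reads as must-have, performance, attractive, or indifferent across your sample.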
That example is just from another guide on how to do a Kano analysis. The functional question is: if exporting any video takes under 10 seconds, how do you feel? The second question is the inverse: if exporting videos takes longer than 10 seconds, how do you feel? Getting those two wordings right is really important. Oftentimes we're not talking about the absence of a feature, as in "if you could not export videos, how would you feel?"; we're talking about how the feature currently exists.
So in this example there is an export video feature, but it just takes a long time, as opposed to there being no video export feature at all. When you're setting this up, it's also really important to keep in mind who you are going to be targeting, because, as you can imagine, you're asking about features with a broad audience.
As you can see here, we've got individual contributors as one of our primary cohorts, which will then be broken down by department, as well as another primary cohort of managers and team leads.
These user types will have different responses based on their needs. If we're talking about a feature that benefits managers or team leads, or people who need to measure and track performance, they're going to respond differently than individual contributors, who are more focused on executing their day-to-day tasks of managing vulnerabilities and triaging them with ease. So we want to make sure we're looking at this data not as one set but based on the segments we've created; a sketch of that per-segment split follows below. I see we're getting pretty close to time.
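As a hypothetical sketch of that per-segment view, assuming the responses are exported with a cohort column captured in the screener (the file and column names here are made up):

```python
import pandas as pd

# Hypothetical export of Kano responses; "cohort" comes from the screener
# (e.g. individual contributor vs. manager/team lead).
responses = pd.read_csv("kano_responses.csv")

# Look at each segment separately instead of the whole pool at once.
for cohort, group in responses.groupby("cohort"):
    print(cohort, len(group), "responses")
```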
But let's jump in and I'll show you how we've started this. We have four surveys. One is the link we're sending direct to customers; they are all the same survey, by the way, the direct-to-customer one just uses a different link. That makes it a little easier for us to compartmentalize and distribute that link without worrying about other people taking it. We also have our on-site collector, which we just put up in the Vulnerability Management areas on .com.
That's been up for just a little under a week, and we sent out a survey internally as well. We separated the internal team out because we have an assumption that the internal team may be using our tools and features differently, and this lets us validate that assumption. Doing it this way doesn't really harm the analysis, because you can just download all of the Excel files, combine them into one, and start your analysis based on the group; a rough sketch of that merge follows below. There are also retakes.
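A rough illustration of that merge, with invented file names, tagging each row with the collector it came from so the groups stay separable during analysis:

```python
import pandas as pd

# One export per collector; the paths are placeholders for the real downloads.
exports = {
    "customer_direct": "direct_customer_link.xlsx",
    "onsite_collector": "onsite_collector.xlsx",
    "internal_team": "internal_team.xlsx",
}

frames = []
for source, path in exports.items():
    df = pd.read_excel(path)
    df["source"] = source  # remember which collector each response came from
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)
combined.to_excel("combined_responses.xlsx", index=False)
```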
If you're sending surveys direct to customers and something goes wrong, you can set up a retake survey; there are Qualtrics docs for that. Then, looking at just the formatting, that should be helpful too. You've got your intro and your screener, which are the same for everyone, then instructions, because the survey is weird: as you can imagine, it keeps asking these two inverse questions.
And then you get to your features. When you're describing a feature, talk about the benefit as much as possible; you can be more conceptual as well. And when you're thinking about providing an asset or the mock-ups, these don't have to be production ready. The goal is really just to sell the concept, or provide enough background on the concept that it makes sense visually.
We don't have some of these patterns today, and that's not a big deal. It's really about: does this mock-up explain grouping? Then, going down to the questions, we're talking about these specifically. Some Kano analyses like to phrase them as "if you had this feature, how would you feel?" and "if you did not have this feature, how would you feel?". There are wordings out there like that, but I don't think you get great results with them, primarily because people don't know how to answer those questions with the natural responses we have.
So we have tailored our questions to be more specific to the feature itself. It takes a little bit more time, but you get better results. For grouping, the functional question is: if you could manage groups of vulnerabilities, how would you feel? And the dysfunctional question: if you could only manage vulnerabilities individually, how would you feel?
When you ask a user "if you could not manage vulnerabilities in groups, how would you feel?", I think you see responses that are all over the board, because they don't know how to process that question. You're like, "I would hate it, I would like it, I guess?" That's a really weird answer. So we just state "if you could only do what you can do today", unless it's a very unique, net-new feature where we don't even have a capability that matches it.
I would follow this paradigm. Each pair is then followed up by an importance question; this helps us stack rank the features as well if there's any close relationship within the segments. And, kind of wrapping up here, a pro tip: if you are doing multiple surveys that feed into one analysis, you can name your questions.
I also think naming your questions is just beneficial in general. You get your "grouping functional" and "grouping dysfunctional", and when you go through and create a report, you know what those questions are, versus question 29 or whatever number Qualtrics gives you; a small illustration of that renaming follows below.
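To illustrate why that pays off downstream, here is a hypothetical renaming step; the Qualtrics column IDs and the descriptive names are invented for the example:

```python
import pandas as pd

combined = pd.read_excel("combined_responses.xlsx")  # hypothetical combined export

# Map auto-generated question IDs to the names used in the survey builder so
# the report reads "grouping_functional" instead of "Q29".
combined = combined.rename(columns={
    "Q29": "grouping_functional",
    "Q30": "grouping_dysfunctional",
    "Q31": "grouping_importance",
})
```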
So, wrapping up, we haven't finished fielding yet. We have a total of 30 qualified responses, that is, completes, and I've also set a filter for people who answered in the screener that they do use one of the security features; a sketch of that filter follows below. And we have this report generated from Qualtrics, bringing in those three surveys.
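A hedged sketch of that qualification filter, assuming the combined export has a completion flag and the screener answer as columns (both column names are assumptions):

```python
import pandas as pd

combined = pd.read_excel("combined_responses.xlsx")

# Keep only finished responses from people who said in the screener that they
# use at least one of the security features.
qualified = combined[
    (combined["finished"] == True)
    & (combined["uses_security_features"] == "Yes")
]
print(len(qualified), "qualified completes")
```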
There are zero completes in that fourth one, but if there are any one day, we would probably bring them in here too. This is not how we're going to analyze the data, but I did aggregate this report because I'll be out, so my PM can track and monitor it, as well as share it with the UX researcher coming in. They can use pretty much this page to know that we've hit our cohort limits and can go ahead and download the data, which is a whole different UX Showcase on analyzing Kano results, not to scare you off.
So with that, I'll end there; that's how we have been doing our prioritization, and hopefully when I get back after paternity leave, we'll have a prioritized roadmap. All right, thanks!