And can you all see just the screen, or can you see both monitors?

We see just your slides, perfect, thanks.
Thank you so much, excellent. So hello, everyone. My name is Holly Reynolds and I am the Senior Product Designer for the Project Management group here at GitLab, within the Plan stage, and today I'm going to be talking about validating the category maturity of issue tracking.
It has at least 100 customers using it, and it has a UX Lite score of at least 3.63 for main and related jobs to be done when tested with external users.
This was associated with a key result for the second quarter. It's an OKR for improving the lovability of issue tracking, and that key result specifically was to validate the current Complete maturity rating and understand, basically, if we're still at Complete and how changes to the product might have affected issue tracking over time.
So, in order to obtain this UX Lite score, I needed to create scenarios for users to work through in a live product, based on Plan's jobs to be done.
I needed to first fully understand what the category of issue tracking meant, and then how that definition and our jobs tied together. Our issue tracking page states that its essential intent is to provide a single source of truth for an idea from inception to implementation, and that members of a community should be able to trust and effectively collaborate on it from anywhere within GitLab, without slowing down their current flow. This felt a bit large; parts of it seemed kind of difficult for me to pull tangible scenarios from. So I started by outlining some questions.
So I collaborated with my PM and Research on the statement and then iterated on it, revising it to: issue tracking is the process of documenting, planning, organizing, and tracking work and team progress with issues. Now, issue tracking obviously can and does encompass more objects than just issues. We have epics, we have merge requests, we have other things associated with kind of the concept of tracking an idea, but the majority of our focus for this particular research was centered around issues.
A
I
reviewed
these
with
katherine
and
laurie
to
verify
that
they
were
well
structured,
catherine,
as
awesome
as
she
always
is
created
an
initial
set
of
secondary
stories,
which
I
revised
and
iterated
on,
and
that
was
a
huge
help
to
kind
of
give
me
a
baseline
to
start
with
identified
the
personas
that
would
seek
to
hire,
get
lab
or
issue
tracking,
specifically
for
these
jobs.
And
next
I
looked
for
opportunities
where
there
was
overlap
in
these
responsibilities.
A
So
here's
an
example
of
the
structure
that
we
had
in
mural
with
these
primary
jobs
at
the
top
in
yellow,
followed
by
secondary
and
related
jobs
in
orange,
some
possible
scenarios
for
testing
in
pink
and
then
maybe
the
associated
tasks
with
those
scenarios
in
blue,
and
I
think,
there's
a
link
to
that
as
well.
If
you'd
like
to
actually
see
that
board
in
the
slides,
then
I
took
those
personas
and
created
some
high
level
journey
maps
outlining
a
flow
for
how
our
jobs
tie
into
their
day-to-day.
A
So
again,
we've
narrowed
it
down
now
from
the
personas
to
the
two
personas
of
sasha
and
parker,
and
then
after
evaluating
all
of
the
the
jobs
themselves
and
how
they
related
to
the
other
personas
that
potentially
had
overlap
with
sasha
and
parker.
I
came
up
with
these
two
roles
of
the
issue
tracker
and
the
issue
worker.
The
issue
tracker
is
typically
someone
in
a
leadership
position
for
a
development
team
that
needs
to
be
able
to
track
and
evaluate
work
and
understand
its
progress.
The issue worker is typically the person who actually does that work, such as an engineer or a designer. I then determined core tasks associated with those roles that were, again, considered to be key aspects of the issue tracking category, and from there I took those ideas and all of that information and started to create some scenarios.
Pedro was doing the same process at the time, but for code review, and he was a massive help to me through this process. We bounced ideas off of each other; I looked at how he was structuring his scenarios and shared my scenarios, and just that collaboration process really helped in sort of narrowing down and iterating on these scenarios, until we felt like we had a good place to start actually testing with users.
My PM, Andy (I think I put Andy, but I meant AJ, I'm so sorry), and Laurie were also part of this process, so thank you all again for your help with that. We determined the types of candidates that we needed to speak with as part of our research strategy. This included the issue tracker role, which was dev managers, PMs, and IT managers, and the issue worker role, which included engineers, designers, and testers.
We decided that we would want to have a minimum of seven per role, which would mean we would need 14 total sessions with users. We created a screener survey to start understanding and evaluating who might be good candidates for this, and had a handful of volunteers; Pedro, Martian, and Austin in particular stepped up and helped lead sessions. So thank you all so much again for helping with that and kind of spreading out some of that responsibility.
As far as the scenarios themselves, we had 14 total for the issue tracker and 13 total scenarios for the issue worker, and again there were 14 sessions with each of those scenarios included as part of that process.
So I think probably most of us have encountered the scenario of trying to add an assignee: when you add an assignee, for example, you click the name in the drop-down, but there's not a clear way to actually apply that assignee; you have to click away, and then it sort of populates in that drop-down. This tripped up a handful of users. One of the scenarios where this was a challenge was one where I had asked, I think, both roles to show me how they would commit someone to a particular piece of work. In most cases they found assignees pretty easily, but then they would trip up on that little interaction challenge of how to actually apply the person. Most people figured it out, but because of that they gave us a bit of a lower score in terms of ease of use for this particular feature. People were also confused by the word "participants" and what participants meant in the product; this was a challenge for them as well.
Quick actions were not discoverable, and we primarily use quick actions for a couple of things, such as promoting an issue to an epic. So having that option to make it into a bigger feature was something that just about no one found, even given the opportunity to google things and search for things; it just wasn't discoverable. Something that I don't actually have in my slides (again, this was simply due to time, and I apologize for that) was that, after each scenario that the user successfully completed, we asked them to complete two statements: whether or not the feature met their requirements, and whether or not they found it easy to use. They then rated each statement based on how completely they agreed with it.
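Those two statements (meets requirements, easy to use) rated on an agreement scale are the raw material for the overall score. As a minimal sketch of how such a score could be aggregated, assuming a 1 to 5 agreement scale and a simple mean across all ratings (the talk does not spell out the exact scale or formula):

```python
# Hypothetical sketch: each successfully completed scenario yields two
# agreement ratings (1-5), one per statement ("met my requirements",
# "was easy to use"). The overall score here is the mean of all ratings.
# The scale and the aggregation method are assumptions for illustration.

def ux_lite_score(ratings):
    """ratings: list of (requirements, ease_of_use) pairs, each 1-5."""
    values = [v for pair in ratings for v in pair]
    if not values:
        raise ValueError("no ratings collected")
    return round(sum(values) / len(values), 2)

# Example: three completed scenarios from one session.
session = [(4, 3), (5, 4), (3, 2)]
print(ux_lite_score(session))  # mean of 4, 3, 5, 4, 3, 2 -> 3.5
```

In practice the ratings from all 14 sessions would feed into one number, which is how a category-level figure like 3.44 comes about.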
This meant that we had to be very careful about how we defined success, which is kind of an ongoing discussion still in relation to the process; the number of successes and fails all tied into this larger score. If anyone wants to know more about that scoring process, I'm happy to get with you offline and go into it in a little more detail. To sum up, though, our UX Lite score unfortunately went down just a little bit: it is at 3.44.
I identify that as Complete, I think. Technically it puts us at Viable, but because we do have a minimum of 100 customers using the product, and because we are using it exclusively, dogfooding it internally and widely here at GitLab, to me it falls into that category. Nonetheless, what that basically says is that we still have a lot of work to do, and I really believe in the insights and work that we have done here.
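The reasoning here can be read as a small decision rule: Complete needs at least 100 customers and a UX Lite score of at least 3.63, and a score below that points to Viable, with broad internal dogfooding as a judgment-call tie-breaker. A sketch of that reading (the tie-breaker is my interpretation of the speaker's argument, not a formal rule):

```python
# Sketch of the maturity decision described in the talk: "Complete"
# requires >= 100 customers and a UX Lite score >= 3.63; a lower score
# technically means "Viable". Treating wide internal dogfooding as a
# tie-breaker reflects the speaker's judgment call, not a formal rule.

COMPLETE_MIN_CUSTOMERS = 100
COMPLETE_MIN_SCORE = 3.63

def maturity_rating(customers, ux_lite_score, dogfooded_widely=False):
    if customers >= COMPLETE_MIN_CUSTOMERS and ux_lite_score >= COMPLETE_MIN_SCORE:
        return "complete"
    # Borderline case from the talk: 3.44 is below the bar, but
    # >= 100 customers plus broad dogfooding argues for "complete".
    if customers >= COMPLETE_MIN_CUSTOMERS and dogfooded_widely:
        return "complete (judgment call)"
    return "viable"

print(maturity_rating(customers=100, ux_lite_score=3.44))  # viable
print(maturity_rating(customers=100, ux_lite_score=3.44,
                      dogfooded_widely=True))  # complete (judgment call)
```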
A
So
there
was
a
lot
to
balance
with
this.
As
far
as
challenges
go,
you
know
there
was
still
kind
of
regular
milestone
and
new
feature
work
that
needed
to
happen
in
addition
to
the
work
that
went
into
this.
But
again
I
had
a
lot
of
amazing
people
helping
me,
so
thank
you
so
much
again
to
my
pm,
gabe
to
pedro
for
all
of
his
help
to
mike
long
for
being
there
as
a
mentor
and
offering
advice,
definitely
catherine
and
laurie,
and
the
other
folks
associated
with
research
and
then
for
my
volunteers
as
well.
I feel like I'm giving an Oscar speech, so if I forgot anyone, I'm sorry. The phrasing of the questions was another challenge. I realized it threw people off at times, and I'm still not certain how that might have affected the results, so that's something that I think we could improve on and explore in the future. Defining multiple success criteria, again, was very important, we realized; we couldn't just say there was one way to complete a scenario.
Sometimes people would go into a comment section, state that it is a bug, and then notify the team. So there were multiple ways to kind of accomplish something and see it as a success, but it wasn't necessarily the way that we had defined it. Very insightful, but also a challenge in quantifying how we were doing. And then, finally, there were also setup challenges: for this I had to create 14 different projects.
As far as the future goes, everything is in Dovetail. This was really helpful: the videos are transcribed, and there's an opportunity for tagging and adding insights. This basically gives us an opportunity to go back, look at all of that information, and start to determine what we can do with it and how we might be able to use it. We plan to document and review those key insights and prioritize that information; Gabe created an epic, I think, earlier this week.
B: What great work and a great presentation, Holly, thanks so much. It looks like a ton of work, and it sounds like a great team you had there. Thanks a lot for showing such a detailed process, and also for defining the main jobs to be done; I think that's a great example of, yeah, what process we should follow. A quick shout-out for any questions or any compliments.