From YouTube: Artifacts JTBD Survey review & next steps
Description
Jackie Porter (Director, Product - Verify & Package), Jocelyn Eillis (Product Manager, Pipeline Insights), and Gina Doyle (Product Designer, Runner & Pipeline Insights) review the survey to validate Artifacts JTBDs, discuss the plan, and finalize next steps.
JTBD issue: https://gitlab.com/gitlab-com/Product/-/issues/3954
A
Okay, my refrigerator is going off in the background. Hopefully that wasn't recorded, but we are here today to talk about the proposed plan for the Jobs to be Done Artifacts survey that we want to send out so that we can validate the Artifacts jobs, because we created them based on the little amount of research that we have, and we want to be able to validate them. Then we could run a category maturity scorecard for Artifacts.
A
Since that's a new work category for our team within the next, I don't know, few months. I think we have a plan for the end of the year, so I'll share my screen so we're not just looking at nothing.
A
Okay, so Jackie had proposed that we review the questions. I think that would be a really good idea.
A
Okay, let's see. These first ones, I think, are just general, about your organization and your role, right, Jackie?
A
Yeah, and I'm just trying to get to the ones that are about... okay, here we go. Okay, so is this easily seen, or do you want me to zoom in? I can see it, I think it's good. Okay, so we have... actually, Jackie, do you want to walk through it, since you're the one who created it? I don't want to speak for you.
B
In order for people to start thinking about the most important tasks that they're trying to complete whenever they're interacting with artifacts, so we would then be able to see, per the stack rank, what the most important job to be done is for us to evaluate maturity against.
B
We would then also be able to correlate it to the personas, since they answered what their title was at the beginning of the survey, so that we could start to suss out whether, say, system admins are getting different value, or different tasks completed, with artifacts, whereas developers are getting... what have you. So that's the intent of this question.
A
What I was hoping for... the things I added at the end actually came from the sessions I've been running around artifacts, because a few people have said that they want to compare artifacts, for a single job, across a default branch, so that they can understand what changed over time, as well as getting an overview of inputs and outputs for each job (which I spelt wrong) in the pipeline when they're debugging. So I added those two, but other than that I didn't touch what you had done, Jackie.
B
Okay, that makes sense. So, Jocelyn, for your information and context: Jobs to be Done is a framework that we use here between design and product to think generically about what our solution is trying to accomplish in the market. I'm sure you're familiar with this; I'm just giving you the starter from scratch.
B
There's
lots
of
different
ways
to
think
about
this,
but
you're
trying
to
become
as
generic
as
possible
so
that
you
are
thinking
about
the
problem
from
the
user
perspective
and
not
about
our
solution.
So
a
lot
of
these
are
going
to
be
very
generic,
sounding
very
vague
and
not
necessarily
like
super
specific
to
our
pages
or
our
objects
or
our
the
way
gitlab
works.
So
that
way,
we'll
have
to
have
that
dialogue
between
gina
and
and
you
about
what
do
these
mean
for
our
existing
offering?
B
So here I have now taken the various tasks from above that we'd want to aggregate into a job, and started to ask: how well do we do this? That way we can also understand the satisfaction of that experience. This is kind of a preliminary peek into our hypothesis of what we think people are doing with artifacts today and what they're actually able to do. And then the next one digs again into those micro-tasks that are specific to our product, to gauge the maturity: based on the jobs they ranked as important, or the tasks they're going to support, how are they actually able to complete those tasks inside of GitLab?
B
Yeah, so the ones that we have documented as our current jobs are the ones that I created this for. Sorry, I did that a month ago, so it took me a little bit to remember what I did. No, no, that's okay. Yeah, because I focused again on bridging that gap from the generic expectation to how it actually works in GitLab today. Yeah.
B
And then, at the end, we should have an open question like: hey, do you want to meet with us? If we have a particularly interesting response, something surprising or not going with the flow, that will probably be a good candidate for a follow-up interview. Or if we start to see some interesting trends, we may want to cherry-pick some of those to get more insight. Okay, should we add a new block for that, or add it to this email thingy?
B
Well, that's what it is: we're just saying "if you'd like to be entered to win," because with that we collect their email address, and then we reach out to... yeah.
B
Okay, sure, we can, because I think you can give incentives for survey respondents and you can give incentives for people who follow up with an interview. So I think here this incentive was for them to provide us their email address for follow-up, so we can reframe it to be like: please enter your email address to participate in an interview.
A
Research interview, yeah, that's perfect. Okay, yeah, that sounds good. And then, is there anything specific that we wanted to go through in the questions, or should we... like, I went through this and it all seemed to make sense. Should we all do it async and just make sure?
B
Okay, because I think that we should move fast and try to get this on social and start recruiting for it, and then...
B
So we don't know what we don't know. One, I think that we shouldn't restrict the survey; we shouldn't disqualify anybody based on their title, because Jobs to be Done are abstracted from the persona. What we're going to do after getting the persona data, or the title data, is correlate the tasks to the personas and then update the personas based on the tasks. But we shouldn't restrict anybody. So our target percentage is kind of irrelevant at this point, because we're trying to figure out what people are doing with the tasks.
B
Or
are
you
using
a
ci
cd
solution
where
you
deal
with
outputs
of
builds
and
then
both
of
those
are
going
to
help?
People
know
like
this
is
a
relevant
survey
for
them.
A
Okay, that will help for the recruiting, because what I'll do is open the recruiting request and just add that as criteria for the people that we're looking for generally.
B
I
think
I
do
I
post
a
linkedin
and
to
twitter
and
the
people
that
retweet
my
tweets.
I
have
like
three
strategic
followers
and
that's
it
like,
and
that's
how
I
get
most
of
this
is
that
I
have
very
few
people
that
are
like
retweeting
things:
okay,
like
vitica,
she
has
a
huge
following,
so
I
think
that
that's
okay.
C
Okay, anything... I'll stop sharing. Is something like the SAs and TAMs can also help recruit?
B
So
for
this
particular
exercise
we
can
have
them
send
to
their
users,
but
their
contacts
are
going
to
be
kind
of.
You
know
prohibitive
they're,
going
to
it's
going
to
be
like
a
bias
sample
for
for
the
account
teams,
because
of
who
they
are
contacting.
B
So
I
think
in
we
can
definitely
do
like
follow-up
interviews
to
validate
the
jobs
to
be
done
once
we
get
them
to
find.
I
think
that
would
be
a
good
use
of
our
time,
but
right
now
we
don't
really
know
the
universe
of
our
jobs
to
be
done,
and
what
I
wouldn't
want
to
do
is
to
validate
our
jobs
to
be
done
on
a
biased
sample,
and
I
feel
like
that's
what
would
happen
if
we
target
our
customers.
B
Which we need this for, for that. Yeah, you're absolutely right. Yeah, yeah, okay. And then, once we publish this... if we're good to publish this survey now, I can, yeah, just tweet this out, then ask Social to also retweet it, and then ask Vedica and Michael over in Developer Evangelism to retweet it. So if we're good, then I'll go ahead and do that right now.
C
Perfect, we'll see if I can figure it out.
A
Okay, okay. So once I'm done testing it, I'll tag you, Jackie.
B
Perfect, awesome. Y'all, thank you so much, Gina, for setting this up and for working with me on this. I'm excited to get some concrete perspective on the tasks that people are doing, and then I'm really excited to learn how mature our interactions are. I mean, we have a heuristic evaluation of how things work today, but it'll be really cool to hear how people feel about it, in situ. Yeah, agreed.