From YouTube: Debugging a pipeline
Description
Analysis from a short UX activity at the All Things Open 2022 GitLab booth. A balanced sample of large enterprise and SMB users.
Hey y'all, I'm Beat Godzilla, senior product designer for the Pipeline Execution team. Today I want to give an overview of the findings from some recent UX research that we did at the GitLab booth at All Things Open last week.
It was a very quick game that we built as a survey, on top of a framework that was already used at KubeCon for the secrets workflow research. Some of the questions we asked our users were around what they look at: what's the most important identifier on the pipeline list view, and what's the first piece of information they look for when they land on that page? Then there were some questions about what they look for when they see a failing pipeline or a failed job in their pipeline: what's the next step they take, what information do they look for, and how do they act on that information?
And what can we do from our end to make sure they're able to debug and optimize their pipelines in a better way?
So, taking a look at the insights we received from that game survey, the first one is contextualized performance information. Recently, if you can recall, we ran a research experiment in Pipeline Execution where we presented performance-related information close to the pipeline graph view. We had our own set of learnings from that particular research, and some of what we heard from this effort coincided with those learnings.
Users don't just want to look at what the slowest job is, which one took the longest time, or which one waited the longest; they want to see that information in a larger context. They want to understand the role of the runner infrastructure and the other factors that are slowing a job down, so they can determine the fix in a better way and don't have to figure out a lot on their own even after getting one set of information from us.
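As a minimal sketch of what "performance in context" could mean, the snippet below lists a pipeline's jobs with execution time next to runner queue time, using the standard GitLab REST jobs endpoint. It is an illustration, not a planned feature; the project ID, pipeline ID, and token are placeholders you would supply.

```python
# A minimal sketch (not a planned GitLab feature): list a pipeline's jobs with
# execution time next to runner queue time, so a "slow" job can be judged in
# the context of runner wait. GITLAB_TOKEN and the IDs are placeholders.
import os

import requests

API = "https://gitlab.com/api/v4"
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}

def jobs_with_wait(project_id: int, pipeline_id: int) -> None:
    url = f"{API}/projects/{project_id}/pipelines/{pipeline_id}/jobs"
    resp = requests.get(url, headers=HEADERS)
    resp.raise_for_status()
    for job in resp.json():
        # duration = time spent executing; queued_duration = time spent
        # waiting for a runner to pick the job up.
        ran = job.get("duration") or 0.0
        waited = job.get("queued_duration") or 0.0
        print(f"{job['name']:<30} {job['status']:<8} "
              f"ran={ran:7.1f}s waited={waited:7.1f}s")
```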
Next is to improve the log output to be more suggestive of next steps. Almost everybody responded that the next step they take when they see a failed job in a pipeline is to go to the logs. That's the most common workflow in pipeline execution, and especially in debugging a pipeline, and yet when they have to skim through the errors, they have to literally scroll through the whole job log output. We do have a feature, added by Patent, that allows them to search.
So if you enter the keyword "error", you can step through the errors in the log output, but there could still be a better way, since this process is very manual today. Surfacing errors better, highlighting them, and even suggesting the remedial action could go a really long way in making the pipeline debugging experience better for our users. This could be a cross-stage, or cross-Stage-Group, insight that we're looking at, because presenting some information related to the runner also helps users build context on what's happening with the performance of the job and the pipeline.
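To make the manual "search for error" workflow concrete, here is a hedged sketch (not the in-product search feature) that pulls a job's log through the REST trace endpoint and prints only the lines matching an error pattern, with a little surrounding context. The IDs and token are placeholders.

```python
# A hedged sketch, not the in-product search: fetch a job's log via the REST
# trace endpoint and print only lines matching an error pattern, plus a bit
# of surrounding context. GITLAB_TOKEN and the IDs are placeholders.
import os
import re

import requests

API = "https://gitlab.com/api/v4"
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}

def error_lines(project_id: int, job_id: int,
                pattern: str = r"error", context: int = 2) -> None:
    url = f"{API}/projects/{project_id}/jobs/{job_id}/trace"
    lines = requests.get(url, headers=HEADERS).text.splitlines()
    rx = re.compile(pattern, re.IGNORECASE)
    for i, line in enumerate(lines):
        if rx.search(line):
            # Show the hit plus `context` lines before and after it.
            for snippet in lines[max(0, i - context): i + context + 1]:
                print(snippet)
            print("-" * 40)
```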
Next is to present errors and performance insights at a higher level. We asked users many questions about how they do it today and what we could do to help them do it in a better way, and most of them answered with the suggestion that all this information be packed into a summary and presented at a higher level, preferably a dashboard. A dashboard wasn't presented as the absolute solution, but the proposals kind of hinted at one; it's mostly a summary.
A
If,
if
we
are
able
to
do
that,
then
that
would
avoid
like
that,
would
help
users
not
to
go
through
like
a
hundred
steps
just
to
understand.
What's
the
problem
with
their
pipelines
and
then
fix
those
problems,
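One way to picture that kind of summary, as a rough sketch rather than a design proposal: walk a project's recent failed pipelines and print each one's failed jobs on a single line. The endpoints are the standard REST ones; the project ID and token are placeholders.

```python
# A rough sketch of a "summary at a higher level", not a dashboard design:
# list each recent failed pipeline with its failed jobs on one line.
# GITLAB_TOKEN and the project ID are placeholders.
import os

import requests

API = "https://gitlab.com/api/v4"
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}

def failed_pipeline_summary(project_id: int, limit: int = 10) -> None:
    pipelines = requests.get(
        f"{API}/projects/{project_id}/pipelines",
        headers=HEADERS,
        params={"status": "failed", "per_page": limit},
    ).json()
    for p in pipelines:
        # scope[]=failed asks the jobs endpoint for failed jobs only.
        jobs = requests.get(
            f"{API}/projects/{project_id}/pipelines/{p['id']}/jobs",
            headers=HEADERS,
            params={"scope[]": "failed"},
        ).json()
        names = ", ".join(j["name"] for j in jobs) or "(none reported)"
        print(f"pipeline #{p['id']} on {p['ref']}: {names}")
```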
Another one is that users want to see failed job status. When the majority of users go to the pipeline list view, what they look at is which pipelines failed. They literally scan for the check marks and the red crosses to quickly understand what needs their attention.
That means failed jobs are what they actually investigate. Where I'm able to relate this to changes we could make, both to improve the performance of our page and to let users do this particular job better, is the pipeline mini graph. Today we present information about all the succeeded jobs in the same way as we do for the failed ones.
We could probably re-look at that, give more prominence to the failed ones, and make it easier for users to get to the errors somewhere close to the mini pipeline graph, without having to click through and go to the logs.
There are many different personas and many different team and organization structures that come into the picture, and depending on the overall context, that collective combination determines how users want to form their context, how they go about debugging their pipeline, and how they want to understand what caused a particular problem. So we should provide the raw data next to any insight we're providing, so that our users are still able to form their own context.
Lastly, wait time on the runner. Currently we don't surface this directly; we do it in some other form. I think on the Runner Fleet Management page the team has recently added this information about the wait time, the queue time, and it's a very useful piece of information when determining why a pipeline is behaving the way it is and what role the runner plays in that.
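For illustration only, and assuming the same REST fields as in the earlier sketches, a project-level view of that queue time could be approximated by averaging queued_duration over the jobs of recent pipelines. The project ID and token are placeholders.

```python
# Illustration only, assuming the same REST fields as the sketches above:
# approximate a project's runner queue time by averaging queued_duration
# over the jobs of its recent pipelines. GITLAB_TOKEN and the project ID
# are placeholders.
import os

import requests

API = "https://gitlab.com/api/v4"
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}

def average_queue_time(project_id: int, n_pipelines: int = 20) -> float:
    pipelines = requests.get(
        f"{API}/projects/{project_id}/pipelines",
        headers=HEADERS,
        params={"per_page": n_pipelines},
    ).json()
    waits = []
    for p in pipelines:
        jobs = requests.get(
            f"{API}/projects/{project_id}/pipelines/{p['id']}/jobs",
            headers=HEADERS,
        ).json()
        waits += [j["queued_duration"] for j in jobs if j.get("queued_duration")]
    return sum(waits) / len(waits) if waits else 0.0
```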
So these were the six learnings that I had. Based on them, we'll be creating some issues, making some changes to the existing features that we have, and discussing with the team how else we can act on these insights. Thank you.