From YouTube: CI/CD UX Team Meeting (EMEA/Americas edition)
Description
CI/CD product designers and researchers meet to discuss what is happening within each group.
A: Hello, this is the CI/CD UX team meeting on June 15th, the Americas thread. I'll start by going through some of the manager announcements. Hayana had asked whether there are any OKRs that are blocked or at risk and whether we need help with them, but it doesn't look like anyone added anything, so it looks like we're on track. Does anyone have anything for that?
A: Okay, sweet. We have a Friends and Family Day on June 24th, next week. Hayana also sent out a message about mid-year check-ins and created a tracking issue for them. They are team member-led check-ins to assess how things are going from both the team member's and the manager's point of view, and then to share feedback that helps inform performance and development plans. Just make sure to dedicate an upcoming one-on-one to this before July 22nd. And welcome Emily to the team!
A: And then Katie says she'll be in Europe for around two months, starting late next week. She's taking a few days off to deal with jet lag, but that could be a good time to set up coffee chats if your time zones don't normally overlap — for me, it's the last hour of my day. Anything else in the announcements that anybody wants to add?
A: Okay, so I'll just go. I have updates on my side for Pipeline Insights and Runner, and I'm going to go through them quickly since there are four of us. The artifacts page research that I've been talking about for weeks now is finally complete, so we're moving forward with creating the page. It's a project-level page for artifacts that lists all of the artifacts in that project. We got really great feedback, so I'll be updating the list view based on that feedback.
A: We also sent out a survey — Erica helped so much with this, and so did Caitlyn — to validate artifacts jobs, because we really have no data around the jobs related to artifacts and we wanted to get something quickly. We went the survey route, and we've already gotten 80 responses, which is great, so we're just going to cap it at that and start analyzing.
A: Erica had asked if I saw any differences between self-managed and SaaS responses — Jackie, who used to be the PM for Pipeline Insights, had asked that too. I haven't looked into it yet, but my assumption is that self-managed folks are focused on storage-related and management jobs when it comes to artifacts, whereas SaaS users are probably focused on using artifacts to debug jobs and pipelines, because we have automatic cleanup on SaaS. Vitika, do you want to voice your question?
C: Yes — I was looking into build artifacts as a category, and I know that both job and pipeline artifacts are part of that. But since we've stopped using the word "build" in many different parts of the product, is there a plan to maybe rename this category?
C: Sorry — the name currently is "build artifacts," right? Is there a plan to stop using the word "build"? It's confusing to many users, and even internally it's a very confusing term, so I just wanted to ask about that.
A: There's no plan, but I completely agree — there's been so much confusion about whether "artifacts" means packages or build artifacts, so we need to do some kind of clarification there. At GitLab we call them job and pipeline artifacts, so there's potential to just call them that, rather than "build artifacts."
D: Yeah, going back to Erica's question — you mentioned there were 80 responses. I haven't really dug into the data, but how many of those 80 responses were from self-managed versus SaaS users? Do you know that offhand, or have some sense of it?
A: I'll let you know next time we meet — I'll bring the actual numbers, because I'll export them from Qualtrics.
C: What's recommended for the balance we should keep in the sample between SaaS and self-managed? I know that our direction as an organization is to focus more on SaaS later, but at the same time we also want to understand self-managed users — I mean, what would it take for self-managed users to become SaaS users? Any recommendations there?
A: Okay, the last thing was the Runner list view. I took on three research projects this milestone, which was a mistake, but this one is the last one. I did unmoderated research on it, and what we got was purely quantitative data, so I'm holding sessions with customers to get qualitative data — to make sure that they can complete their jobs with the new view — and I'll have some updates around that. We also added an upgrade icon.
A: If anybody has upgrades within their area — this one's specific to GitLab Runner, where you'd update the version — we now have an icon to represent that, if anybody needs it. Yes?
B: Emily here — I don't have too much to update on. For the next few weeks, in 15.2, my focus will really be on onboarding. I think I've sent coffee chats to everyone on the team, just getting to know their release areas really well, and all that.
B: So that's going to be my focus for the next little while. Hayana has also given me two tasks to help with onboarding. The first one, which I'm going to tackle at the end of this week or the start of next week, is deploying a project and creating a release in GitLab. Gina and I are chatting about it, and I'll be taking a few notes throughout.
B: I could capture some of the screens and what the flow looks like, for those interested, but yeah, the focus will be kind of just getting used to the journey. And I see Will has a question.
D: Yeah, for the first task, I'm definitely interested in whatever you learn, so feel free to share once you have gone through that experience. I'm connected with so many teams that I haven't dug in deep to understand that end-to-end experience, so I'll be interested to hear how your process goes. And then, for the jobs to be done—
B: That one I'm actually going to get into after task one, so I haven't read into it too much yet. It'll probably be done in the second half of 15.2, but I don't actually know — I have to read into it a bit more. I can link the discussion with Hayana, where there's a bit more detail, so you can read into that.
B: There's the conversation around it, and I think the point of this is really — it was one of Gina's first onboarding activities on the team, and it's a good onboarding activity to go through the process. Ultimately, in Growth we didn't do a lot of jobs to be done because we were working cross-stage, so this will be my first jobs-to-be-done study here at GitLab as well — kind of just getting into the process of that.
D: Okay — well, when you're ready to pick up that work, I've done a couple of jobs-to-be-done studies for different teams, so I can help with that. I did include a link to my planning issue, which I'll talk a little bit more about later, but we use it essentially to prioritize how researchers are aligned to different projects and how we prioritize the work.
C: I documented the jobs to be done for continuous integration when I started, which was more than one and a half years ago, and over the course of that time, what I realized was—
C: It was very much based on my very initial understanding of what the stage group was all about and what Verify stands for in the DevOps process. I understand that the core jobs to be done aren't something that evolves, but my understanding of those jobs did evolve.
C: So I thought, why not take up a very basic, foundational research study — one not tied to any specific area in the three categories that we look at, but generic enough that we can extract the core, high-level jobs to be done. We did that, and because I deleted my browser history I couldn't find the issue — I'll find it and add it here. In the meanwhile — Will, you mentioned that you have conducted a few JTBD studies?
C: When I went through the whole process — when I read the description and wanted to do things from scratch once more — what I figured was that in the documentation we mention that a problem validation is a really good way to go. But the recent problem validations we were engaging in within the Pipeline Execution team were too focused.
C: They were very focused on a specific capability or a specific requirement, so I was not able to get much from them. That's why I conducted a whole new research study, and with Erica's help we did it in a very different way, which I've described in the YouTube video that I've added — and this is the outcome from that.
C: I know that's a lot to consume and provide feedback on in this short while, but in case you get time, please look at the video. I'd love to hear your thoughts on the process, because I'm also planning to write about it, document it, and publish it as a blog post, so any feedback is helpful.
C: I'll also add the link to the issue when I find it. Anyway, that was all from me for the day.
D: Yeah, I did have a quick question for Vitika. I opened up your merge request, and it looks like you've added your jobs to be done to a more extensive YAML file.
D: When I've been part of some of the other jobs-to-be-done studies, they typically also add a section to a product direction page or a category page — tables with the jobs to be done that were determined based on the research — so that once you do a category maturity scorecard, you can fill out the scores. Are there any plans to do something like that?
C: Once things are finalized, based on that we'll make a plan for the research — what research we need to do in the upcoming months, and the very close milestones that we need to hit with the maturity. This is how I have been documenting jobs to be done specifically; we don't replicate it to the direction page, we only mention the maturity plan there. So this shows up in the Pipeline Execution jobs-to-be-done page that I have set up.
D: Okay, so just to clarify — the high-level jobs to be done, are those mentioned on this link that you just added?
C: In how they're documented, the high-level ones appear with a bolder title — the shortened one — and following that are the sub-JTBDs related to it.
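As a purely hypothetical illustration of the structure described here — a high-level job with a bold, shortened title, followed by its sub-JTBDs, kept in a YAML data file — an entry might look something like this. All field names and values below are assumptions for illustration, not the actual schema of the file discussed:

```yaml
# Hypothetical sketch only — field names and values are illustrative
# assumptions, not the real jobs-to-be-done YAML schema.
- title: Debug a failed pipeline        # high-level job (the bold, shortened title)
  job_statement: >-
    When a pipeline fails, I want to find the failing job and inspect
    its output and artifacts, so I can fix the problem and rerun it.
  sub_jtbd:                             # sub-jobs that follow the high-level one
    - Locate the failing job in the pipeline view
    - Browse or download the job's artifacts
    - Rerun the job after making a fix
  research: []  # links to supporting research issues would go here
```

Keeping the entries as structured data like this is what would make it possible to later render them into tables on a direction or category page, as discussed above.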
C: And if you can also point me to the research studies that you have conducted, so I can get inspired by the process.
D: Yeah, yeah, I can link to the—
D: I'll set a reminder so I don't have to try to find that in the middle of this meeting. So, yeah — I can talk briefly about some of what I'm doing in Release, within the UX research thread below. Emily, for your information, Erica and I have included our prioritization issues — mine's right here; I've highlighted it within the doc — and it basically has all of the research that's going on in the teams that I cover.
D: I've tried to specify, to the best of my ability, which projects are involved — there's a column roughly in the middle of that particular prioritization issue that says whether certain work is connected to Release — so there are about three or four projects in total listed within that table. Among the things I'm working on currently, I've got a research epic for my usability benchmarking study.
D: I've made a lot of updates to that in just the past couple of days: I've added a timeline checklist to the epic, and I'm going through a set of tasks that I worked on with Hayanna and Chris. I'm also in the process of making a UX Cloud Sandbox — I'm following the handbook documentation, and the project that we'll eventually create will be used for data collection, taking users through very specific tasks for Release.
D: Outside of that study, what's on the backlog is all there within that link. And then finally, Chris, the PM for the Release team, is wrapping up interviews for his project, which covers a couple of different stage groups, not just Release — it's focused on Kubernetes deployments. He's conducted about six or seven user interviews in total and hopes to have insights later this month.
B: I know we put some time aside next week to go over the usability benchmarking for Release, so in preparation for that I'll just make sure to read up on this and bring any specific questions along.
B: I notice this is going into September, which is probably when I'll be fully onboarded, so I'm thinking about what I can do to help in the future and all that. But this is great — thanks for all the links; I'll definitely take a look. I know Chris has talked about the interviews he's been doing as well.
D: Yeah, one thing that might be useful to check out within that epic: I went through an async Mural activity with Chris and Hayanna that's referenced in the comments section of the epic. We spent about two or three weeks going through different parts of the study setup — trying to determine what tasks we were going to do, who we were going to recruit, and what metrics we were going to look at — so that Mural board is linked and available in there.
A: Yeah, and they also took a recording when they had the meeting, if anybody wants to take a look at that — it's in the channel.
A: Katie completed a solution validation for container registry cleanup policies, and she's linked the Dovetail. The feature allows the user to set up rules, or policies, to delete items from the registry to save storage space. Her findings were in line with her assumptions, but she did discover that users want a way to make a template of their policies and apply it to other projects, which sounds cool.
A: Should I read the whole conversation after that, or just move to the next point? I'll move to the next one. Okay — in 15.2 they'll be implementing a feature that will be released to a small subset of users and monitored with Snowplow. The reason is that, despite two rounds of validation, we're still not certain whether this is disruptive to people's workflows, so we're taking a cautious approach. This will be Katie's first time doing a phased rollout at GitLab, and she'll keep us posted. And finally, she's—
A: Oh no, that's not true. She's consistently hearing from customers that the integration between Package and Release could be better. An example of the feedback is captured in that link — Will and Emily, that might be good for you to look at.
A: Yay. Her focus for 15.2 will be mostly research-heavy, and we'll be improving the package detail page. She'll also attempt to implement the missing front-end event tracking, so we have better data on how customers are using the product. I don't know if anybody has had time to look, but I think Katie either recorded a video or shared it in a previous UX meeting: she found all the front-end events through the browser and tracked those, which was really cool — especially if your team is lacking telemetry right now.
A: Okay. Nadia said that this milestone and in 15.3 we should be focusing on the pipeline components MVC, as we're getting ready to start the implementation in 15.4. We've also recently added guidelines for in-product reference information using drawers to Pajamas, and that's a good thing to check out as well.
C: I'm going to share the issue that was created by the product manager of Package for the Snowplow tracking events — I'll be sharing that with my PM as well, because we're also facing some difficulty making decisions regarding filters and similar functionality, where things are not so black and white. I mean, the insights we've received from the validations are not so black and white, and we still have to proceed in some direction, and we have to do it cautiously.
D: Sure. I noticed that we didn't touch on this general update — I think Erica must have added it — but it looks like we're seeing a higher overall no-show rate for sessions, so just a heads up.
D: I think this might just be due to seasonal variance — a lot of people are out, or have holidays or summer vacation or whatever it is, depending on where they are in the world — so I just wanted to provide a heads up about that.
C: It never happens when we recruit through platforms like Respondent, but when we rely on our internal recruiting process, we send out emails and nobody responds to them for a very long time — and even when they do, it's pretty much after the whole analysis is over. I'm not very sure that's due to seasonal variance; that's how I've been trying to reason about it, but I suspect there's some hindsight bias—
C: —going on here: "oh, maybe they didn't turn up because this was going on." But this has been happening throughout the year — when we internally try to recruit users, the rate at which we get a positive reply is always pretty low — so I think we need to figure out what's happening there.
D: Yeah, and I know we talked about it with the research team last week. Some of what came out of that was that Respondent sends a lot of reminders, and also has a panel of users who are more likely to apply and attend the things they sign up for.
D: So there's been some talk within the research team about ways we can try to remind people a little bit more — maybe getting Caitlyn involved and reminding her to send out reminders, or, if you have the list of emails, you could reach out to people before sessions and say, "hey, just a heads up, the session's going to happen in 24 hours."
A: Same — I don't know either. For the artifacts research, I did get a few no-shows, and for some of them, if they missed it, I would email them — "oh, sorry we missed you, do you want to sign up again?" — and then they'd sign up and not show up to the next one either.
D: I can provide some updates once we know a little bit more. I think Caitlyn and our other new research ops coordinator are going to try to work on some of that, to see if they can build something into their current workflow that accounts for people needing to be reminded ahead of time — I think there's a way to set that up through Calendly so it happens automatically, instead of someone having to remember to do it. But I'll provide some updates once I know a little bit more.
D: Let's see — she's asking if anyone can link and add the studies that they would like included in the Verify and Package research registry and synthesis, in issue 1738.
D: She closed out the Ops product direction survey from when she went to KubeCon, so she's included the link to the report there, along with a link to the async discussion issue. I've looked at that report too — it's very detailed, so I highly recommend checking it out. And then she's got a couple of read-only updates.