From YouTube: CI/CD UX Meeting - 2022-02-09 [EMEA Edition]
A
Hello everyone, this is the February 9th UX CI/CD meeting. This is the EMEA/Americas edition, since our co-workers in APAC cannot join in this time zone. I'm here with Gina Doyle, who's the designer on Pipeline Insights and Runner, and I think we'll have Erika joining us soon from the Release side. Cool, I'll start with the announcements. Hayan is out of office, so she asked us to record and upload the call. The standing item that we have is reviewing our progress with the tracking issue for our Q1 OKRs, especially the reminder to prioritize UX-impacting issues between PM and team, so bringing up the severity and the importance of sales-related issues in our team discussions. From my side, I'm happy that this is going well with my PM. We don't have that many impacting issues on our Release side, but the ones that we do have we've been able to schedule, so I'm happy with our progress here. Do you have any thoughts on what you want to share, Gina?
B
Yeah, we recently rebranded the Testing group to Pipeline Insights. I linked in the document the Slack thread of the announcement. All the labels for issues and things like that in GitLab automatically updated to Pipeline Insights, but we're still trying to update some of the handbook references and things like that. So the group name is switched.
A
Yeah, cool. So let's move down then and just try to briefly mention the points from our APAC peers, starting with KD and Package. KD has been wrapping up the second round of solution validation for the package cleanup policies. One interesting component: this was testing linking to the settings from the feature page itself. That's interesting; I saw her asking about that on Slack.
A
I don't have any examples from Release, but I'm interested as well, because in Release we have lots of links to the docs from the UI, and a lot of these link to a doc that describes how to change the setting, right? So I'm interested in how we can adopt this pattern, or if we should at all.
A
All right, now moving down: wrap-up sessions should determine how to consume... yeah. I'm trying not to be overly detailed here, but it's also hard to get the context just from reading here. So yeah, I can kind of get what KD is working on here, but I think we can go into detail async later on. Does that make sense, Gina?
A
Cool, so in Pipeline Authoring, Nadia has been working on the CI catalog, which is a CI marketplace, but for GitLab. That is super interesting. I've seen the vision designs that Nadia has worked on for this before, so yeah, super interested to dive deeper into this and give feedback async as well.
A
There's also the work that Nadia is doing to document objects in Pajamas within CI/CD, like the job object. I'm really excited to see this work as well, because I think Pajamas really needs a stronger set of conceptual definitions, and she's really driving this, so I'm happy to see this happening.
A
This is interesting as well. I think it's better for async and provides visibility for shared runner usage. Does this have any overlap with your work on Runner, Gina?
B
A little bit. I think the main thing that she was trying to figure out is where this should show up, because there was a spot where it would just be part of your user menu, pretty much, when you go look at your profile. They wanted to add analytics for how many minutes you're using for shared runners, right? And so it sort of overlaps just because it is a representation of runners, but that's really the closest that it overlaps, right.
A
All right, so moving down to our own thread on Release. I've been working on different aspects of deployment approvals. The first link is project settings for deployment approvals. Essentially, if you're going to set up deployment approvals, you need to be able to properly set up how many approvals you need for a given environment, and who should be able to approve deployments into that environment. So that's the first part of this. The second part is updating the approvals UI itself.
A
With the last details for development, the design work was wrapped up a little while ago, and then we updated the whole environments page, so it's likely that the approvals UI will be implemented for the new page. So there were some adjustments I was doing, and there are still some adjustments needed for how deployment jobs that need to be approved look in other views, right? So in the pipeline view, in the jobs view, there are many places where today you can just run a manual job that will now need to be approved.
A
In parallel with that, I've been reviewing and working with Andrew, our front-end developer, on lots of MRs for the new environments page, so all the little pieces that make up that page. It's been a concentration of MRs, so that's interesting, just reviewing all the little pieces one by one. And then on the more strategic and planning side, Chris, our PM, and I are conducting ongoing customer conversations, usually one or two per week.
A
Chris
is
summarizing
these
conversations
in
this
first
link
here
on
the
on
the
agenda
and
then
from
there.
We
also
have
our
our
weekly
catch
up,
where
we
kind
of
debrief
these
conversations
and
break
down
the
the
main
points
from
each
one
of
them
into
issues
if
needed
be,
and
those
issues
get
a
customer
label.
A
So once we have our weekly catch-up, we go over all the issues with the customer label. They are also ordered by popularity, right? So it's not only the ones with the label but also the ones that got the most thumbs up, to see, beyond our own scheduling and our own planning, what is bubbling up from raw customer demand that we should perhaps put in our milestone. So this is all very new.
B
I have a question about the customer feedback. I'm trying to find the... okay, here we go. Do you also put those recordings in Dovetail and then also document them in that issue?
A
Yes, so Chris has been responsible for first structuring them after we have the conversation. He's been uploading them to Dovetail, but then putting the summaries here in the issue. The reason is that it's easier for all of the engineering team as well, and not only the engineering team, right? We have technical managers and other people chiming in here. So it's easier for them if it's in an issue; the recordings themselves aren't uploaded there. Yeah.
B
Okay, yeah, I found... oh sorry, go ahead, Erica. I found the same thing, that it's easier for the whole team to access if it's in an issue rather than Dovetail, because it seems like those who are not in the UX department, and Product, I guess, don't feel super comfortable going into Dovetail and pulling out the insights from there.
A
Yeah, I think Dovetail is really good when you have a high volume of research; it's great to break down that research. But these are short interviews, usually 30 or 40 minutes tops, with one customer, usually around a single topic, so it's easier to just break them down and have them here. Dovetail is kind of overkill for this.
C
Yeah, and then, Daniel, one way you can make that work for you is to go through this level of insight and try to frame your learnings and take-homes as an early draft, right? And then from that, make your Dovetail tags, and then at the end, when you're trying to crystallize your learnings, go through and tag everything in Dovetail and kind of bring it together there. That's kind of the way that... did you see the best practices for tags?
C
I think technically we're supposed to put them in Dovetail for sure, but I didn't know I did that. So we could put them in Dovetail, but we can make it work. I think this is a good format, because it's accessible to everyone, but one way to make it work in the end would be to go through and assign those tags.
A
Yeah, I think I'll raise this to him and see if it makes sense for his process, really, because these are supposed to be ongoing interviews, like... no?
C
No, no! No, I would do it in two steps. Let them collect, right, don't do too much, and then, when you're ready to formalize the learnings and write them up, look back through here and see the key themes, and then do the tagging once that's done. But I think you don't want to have too many tags.
A
No, thank you for the feedback. We'll figure it out and see what the best way is, because, as I said, we just started. We just started summarizing these and having this weekly process where we summarize and then go through the customer issues. So how exactly we'll break down the insights and turn them into issues is still not entirely clear; we're just getting started. So we might invest more in the updates and do that using the tags. So thanks for that.
B
Okay, I'm going to just go through what's happening in both the groups. For Pipeline Insights, I'm carrying out a UX scorecard right now for Review Apps, and Daniel, this definitely overlaps with your area. I'm just finishing it up today, but it has been... it is really difficult to configure a Review App, because, like the environments, it depends on what you're deploying. I'm just using a static website, but it's been hard for me to actually get the website up and running.
B
I
think
that's
because
of
the
I
decided
to
go
with
surge
and
gatsby
to
do
that,
and
I
think
that's
part
of
it
and
my
unfamiliarity
with
that,
but
I
am
almost
through
it
and
the
I'll
I'll
be
making
like
a
video
and
summarizing
all
of
what
comes
of
it
as
well.
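For context on the kind of setup Gina describes, a minimal Review App pipeline for a Gatsby site deployed with Surge might look like the sketch below. This is an illustrative assumption, not the actual project configuration from the meeting; the job names, the `review-*.surge.sh` domains, and the `SURGE_LOGIN`/`SURGE_TOKEN` CI/CD variables are all hypothetical.

```yaml
# Illustrative .gitlab-ci.yml sketch (assumed setup, not the real project).
# Assumes SURGE_LOGIN and SURGE_TOKEN are defined as CI/CD variables;
# the surge CLI reads them from the environment for non-interactive auth.
stages:
  - build
  - review

build_site:
  stage: build
  image: node:16
  script:
    - npm ci
    - npx gatsby build        # outputs the static site into public/
  artifacts:
    paths:
      - public

review_app:
  stage: review
  image: node:16
  script:
    - npm install --global surge
    - surge ./public review-$CI_MERGE_REQUEST_IID.surge.sh
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://review-$CI_MERGE_REQUEST_IID.surge.sh
  rules:
    - if: $CI_MERGE_REQUEST_IID   # only run for merge request pipelines
```

The `environment:` keyword is what makes the deployment appear on the Environments page as a Review App, with the URL linked from the merge request.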
B
The other thing that's coming up for Runner is part of our KR, but it also goes along with where we want to go with Runner enterprise management. We've heard from customers that if they have 10,000 runners, they struggle with seeing how many are behind by a certain number of versions.
B
I'm
really
excited
for
that.
I
haven't
seen
anything
in
our
product
thus
far.
That
does
something
like
that
with
updates.
A
I mean, I know there are likely reminders or some system of alerts for self-managed users to update GitLab itself, right? So not pushing, but nudging GitLab admins to update their own version of GitLab. So perhaps that's one area you could ask about.
A
About
communicating
the
versions
of
runners
and
what
needs
updates,
I
think
there
might
be
some
overlap
in
terms
of
design
patterns
with
the
work
we're
doing
on
the
environments
page,
because
a
lot
of
it
is
which
one
is
the
latest
deployment
to
this
environment
right.
So
there's
a
little
bit
of
this
vision
of
you
have
multiple
versions
of
something
that's
running
here,
but
it's
a
deployment
within
the
environment
rather
than
the
version
of
the
runner
software.
B
Okay, yeah, that'd be great. Another area that I was looking at: our runners list right now is just a table, and it's getting kind of out of hand with the amount of data that we have. So I was actually going to try to explore, very much in the future, a different way of displaying the data, like you did with the environments page or deployments. So yeah, I'm also going to use that kind of pattern as a future version for runners, or a possible version.
B
The last thing that I had, that I was looking for feedback on: our Pipeline Insights team wants to gather more feedback on Visual Review Tools, which is a feature that you can add to Review Apps. It allows you to comment on pages basically within the site, and then those comments turn into actual comments in GitLab.
A
So I left a comment here mentioning that maybe making a video sharing the problem space, or perhaps just describing the problem facing the actual case and following up with a survey, could be one way to go for this. I think perhaps the challenge for you on this specific project is that the feature exists, but it's not very much used.
A
So
it's
hard
to
visualize
right
like
exactly
how
it
looks
like
when,
within
the
use
case
being
applied,
especially
because
I
don't
think
we
would
ever
use
this
for
gitlab
the
application
itself.
So
that's
tricky.
One
thing
you
could
do
perhaps
is
like
what
you're
doing
like
setting
up
a
website
is
having
a
website
set
up
as
the
use
case
and
then
use
that
as
the
the
kind
of
like
vision
board
to
help
people
understand.
Oh
right,
so
this
is
the
issue.
A
I
understand
the
context,
and
now
I
can
give
feedback
on
on
the
design
and
direction
of
this.
Maybe
that's
that's
one
way
you
can
go
about
it.
B
Okay,
anyways.
I
think
that
that's
a
great
idea
like
giving
an
example
because
you're
right
there
aren't
many
people
who
use
the
feature,
so
I
might
try
to
do
that
and
just
add.
We
have
some
demo
projects
actually
that
our
team
puts
together
that
we
can
test
out
all
the
pipeline
insight
features,
so
maybe
I'll
just
add
one
of
those
in
there
and
then
allow
people
to
play
with
it
too.
B
That's
it
for
my
side,
if
erica,
do
you
wanna.
C
Yeah,
I
just
put
two
items
in
there
for
feedback
daniel.
The
first
is
kind
of
I'm
doing
a
little
bit.
What
you're
doing
that's
why
I
was
like.
I
have
an
idea
how
to
use
dovetail,
but
basically
we're
gonna
build
out
one
issue
per
enterprise
case
study
and
that's
like
a
prototype
of
the
summary
there,
with
the
idea
that
we
can
go
into
depth
like
over
the
next
two
quarters.
C
And
so
I
have
like
a
comment
for
each
of
the
kind
of
bucket
threads
and
I'm
going
to
start
this
week
or
next
week,
socializing
that
with
the
engineers
and
then
so,
that's
like
the
prototype.
So
it's
like
the
questions
for
everyone
and
then
I'm
working
on
with
james
a
case
study
of
an
athleisure
company.
C
We've
had
like
a
couple
of
meetings
just
internally
and
we're
gonna
write
that
up
and
then
release
that
for
like
more
targeted
feedback
questions
like
or
follow-up
questions,
and
the
idea
is
like,
as
we
meet
with
them
over
time,
we'll
build
more
and
more
of
those
questions
in
so,
if
you
want
to
you,
could
you
could
provide?
You
know
some
questions
there.
That
would
be
interesting
to
you
and
like
make
your
own
thread
or
comment
within
those
threads.
A
This looks awesome, and a bit complicated, beyond what I can instantly understand, so I'll perhaps ask silly questions. So the idea here is that this is the skeleton of what a case study for this company would look like after all the research has been done and compiled in Dovetail, and I could just export this long case study into the UX research repo; is that it?
C
When
I
introduce
this
in
my
meetings,
I'm
like
I'm
a
research
nerd,
you
couldn't
you
can
not
look
at
that
part
but
like
to
make
sure
we
would
know
how
to
operationalize
high
fidelity
or
low
fidelity.
That's
why
that's
there
yeah,
but
any
feedback
regarding,
like
I'm,
hesitant
to
like
post
these
things
in
channels,
because
I
feel
like
they'll
be
confusing,
but
I
needed
them
to
like
explain
to
my
team
what
I
would
be
doing
and
what
the
final
product
would
look
like.
C
So
that's
like
an
example
without
too
much
findings
of
what
that
will
look
like
and
then
there'll
be
like
a
doc.
That's
like
a
very
long
case
study,
but
this
will
just
be
like
higher
level
notes.
I
think
it
I
might
wait
to
socialize
it
more
broadly
when
I'm
not
here
to
talk
about
it
in
person
until
I
have
the
actual
case
study,
so
it's
a
little
less
confusing.
A
The
the
the
other
question
I
have
is
on
these
threads
right
asking
for
follow-up
questions.
I
see
they
they
kind
of
like
target
different
areas.
So
the
idea
is
that,
for
example,
this
first
one
what
follow-up
questions
you
have
for
this
enterprise
company
about
their
infrastructure
resources
and
sas
usage,
like
that?
That
is
one
place
where
I
could
provide
follow-up
questions
around
release
right.
C
Yes, that's great; I can put this into that prototype. We were debating this when I made it, and we think we're going to commit to doing this over a six-month period at least. So the idea is you can put your questions there, either ones that you know you'll want to ask, or when you see findings come up.
C
You
can
post
your
questions
there
and
then
I
can
like
roll
them
into
interviews
and
it's
not
clear
exactly
when,
because
they're
enterprise,
so
we
have
to
like
you
know,
do
a
little
dance,
around
availability
and
stuff
like
that,
but
yeah.
The
idea
is
that
if
we
don't,
if
we
can't
explore
it
in
the
initial
interviews
that
we'll
set
up
another
one
and
be
able
to
like
dig
deeper
into
like
last
time,
we
spoke
about
this.
A
Yeah, and would this all be centered around one specific company, or is this just an aggregate of different enterprise companies with a similar shape or similar focus?
C
Great
question
daniel,
so
I
tried
to
link
the
initial
study
proposal.
I
think,
as
we
get
started
for
sure
it
will
be
we're
going
to
shoot
for
six
companies.
C
Six companies that are working under different security and compliance requirements is the idea; they're hard to recruit, though. And each company will have a separate research issue report, and then, if it seems... this is qualitative research, so it's messy. If it seems like they're all really the same, then we will kind of combine it into one meta-level report, yeah. But it's hard.
C
This
is
the
hard
part
about
it's
like
a
little
ambigu,
it's
ambiguous
until
we
kind
of
get
underway
and
know
yeah,
and
then
I
think
what
we
might
to
there's
like
a
different
ways
to
like
group
it
together
too,
which
would
be
like
by
industry
vertical
right.
C
So
if
we
end
up
in
like
people
in
clothing
and
retail,
they,
for
example,
have
set
requirements
related
to
not
doing
releases
when
there's
like
a
big
sale
like
so
around
black
friday
or
christmas
season,
it's
like
don't
touch
it,
so
they
kind
of
go
in
fits
and
starts
with
their
like
devops
practice.
C
So
that
might
be
one
way
to
like
characterize
and
group,
but
we
just
have
to
kind
of
wait
and
see
so
any
advice
you
have
on
how
to
make
this
less
confusing
and
not.
I
think
it's
just
kind
of
confusing,
because
it's
like
open-ended
a
bit
yeah.
A
No, I appreciate how much effort and structuring has to go into this, because it's a massive volume of information, so this is great. One thing that would maybe help me get my head around this is to think of it as stacked umbrellas. So the big umbrella might be, as you said, maybe an industry vertical, maybe a specific shape of enterprise company, and then within that you don't need to try to force all of these massive companies with different use cases into one shape to fit this one case study, right? Within here you can list, more or less, that some of these companies have these use cases, and others have other ones.
C
Yeah
and
I
think
ultimately,
it
will
be
like
a
design
exercise
where
you
can
read
the
case,
study
and
think
about
how
it
would
impact
this
particular
customer
and
like
a
set
a
second
step.
So
if
we
can
pull
this
off,
I
think
we
can
maybe
with
one,
but
if
the
next
step
would
be
to
layer
in
like
a
quantitative
mapping
of
how
much
of
our
customer
base
does
this
case
study
map
to
right
right?
C
Well,
we
can't
quite
do
that
yeah,
because
we
don't
even
know
what
our
sample
would
be
because
it's
so
hard
to
recruit,
and
I
was
gonna
wait
to
get
this
started,
but
I
think
our
teams
need
it
already,
because
we
have
like
we're
asking
these
same,
like
we're,
asking
different
questions
of
enterprise
customers
right
now,
a
little
bit
in
silos
and
it's
so
hard
to
recruit
them.
But
I'm
like.
C
Oh
maybe
I
can
like
help
to
kind
of
connect
the
dots,
but
it
also
is
important
that
all
of
those
individual
studies
execute
on
their
own.
So
they're
not
like
waiting
for
this
work
to
get
started.
So,
like
that's
confusing
too,
but
I
think
in
the
end
it's
going
to
be
helpful,
but
yes,
we're
going
to
need
a
way
of
layering
in
how
representative
they
are
of
our
customer
base.
On
top
of
thinking
about
the
fidelity
of
them,
yeah.
C
I
really
love
it
too,
because
what
you
shared
with
like
mapping
out
the
issues
that
those
customers
needed,
I
would
have
a
table
for
that
too,
like
in
the
top
of
that
issue
report,
which
is
right
because
I
see
in
our
slack
channels,
I
see
the
pms
doing
that
like
they'll
say
this
customer
has
that
need
what
are
our
issues,
and
so
I
know
that's,
like
you
know,
that's
where
we're
working
in
issues
so
like
that's
how
we
can
kind
of
leverage
it,
but
I
think
in
a
way
this
will
be
helpful
for
leadership
when
we
say
we
want
to
make
this
change,
we
can
kind
of
point
to
these
more
in-depth
case
studies
and
think
about
the
issues.
A
All right, we're a little bit over time, I think, or maybe almost at time. So let's move on to the next one: you have an update on the Ops product direction survey, right?
C
So the Ops product direction survey is set up, and I set up a specific issue for survey development. We'll do interviews where people talk aloud as they take the survey, and use that to get their feedback (they'll say things like "I don't know what that means"), and then we'll ask why, and why, and why, and we'll use that to fine-tune it. So we'll post updates on those interviews as they happen there, and then also:
C
Specifically,
we
have
a
question
around
security
that
I
think
would
be
helpful
for
others
to
look
at
and
comment
on,
but
without
like
changing
our
whole
product
project.
So
there's
an
issue.
C
I
think
it's
the
one
that
I
linked
to
yeah,
so
that
that
that
that
example
is
the
survey,
development
issue
and
study
so
we'll
post
the
updates
from
the
interviews
there
like
this
was
confusing.
We
made
this
change
we'll
be
kind
of
iterative
in
that
way,
and
then
I
set
up
the
comment
threads
to
ask
people
for
feedback
and
the
persona.
One.
Is
there
because
I
think
we're
having
some
debates
right.
We
made
a
change
where
we
posted
ingrid
as
a
separate
one,
but
there
was
debates
about
what
made
them
different.
A
All right, thanks for that, Erica. So the last point is from Will, but actually it's based on a conversation I had with him earlier this week. We had a CM scorecard scheduled for this quarter, but after discussions with Chris and Kevin on our PM team, we realized that there wasn't enough significant progress in feature work in the category of environments. The progress is happening right now, so running a CM scorecard right now doesn't make sense.
A
So
we
defer
that
to
q2,
and
then
this
was
helpful
to
know
in
terms
of
other
requests
have
been
made
with
enablement,
ops
yeah,
so
we
tried
to
like
it,
took
us
a
while
to
to
get
to
this
decision,
so
we
tried
to
inform,
will
as
soon
as
possible,
so
he
could
free
up
his
his
capacity
for
other
stuff.
So
yeah,
that's
that's
about
it.
A
I would say that falls more on design and product within the team. But another aspect that Will told me is that even in Q2, when he won't have as much capacity for this, he still wants to help me, because he hasn't run one before, so he wants to get more knowledgeable about the process. Will is very collaborative, so he's always up for, you know, helping me review scripts or whatever I'm preparing. But yeah.
A
All
right,
I
think
that
was
the
end
of
our
agenda.
I
don't
think
we
have
any
read-only
towards
the
end,
so
we'll
give
us
all
six
minutes
of
time
back.
Thank
you
all
for
being
here.
Enjoy
this
conversation
a
lot
and
thanks
for
watching.