From YouTube: 2022-05-25 Code Review UX Sync
A
Amy, you got the first one. You asked for a reminder about email notifications.
B
Yes, so this has come up a couple of times in various channels: we've looked at the UI text that Code Review has responsibility for, but we're not sure if anybody has ever actually looked at the email notifications we send out. I'm sort of afraid to bring this up, because I know where this leads. I'm just going to chuck this over the wall at Pedro and Annabelle and say: do with this as you will. I am happy to help, but I also recognize this is a huge can of worms.
C
There are interesting and valuable things for us to work on at GitLab that are not necessarily inside our silo walls. So what are we going to do? No one owns email notifications, no one owns to-dos, no one owns the project lists... oh, actually, I think someone owns the profile page, but anyway. There are all of these things where, hey, someone should care about that, right?
B
We're going to talk about it in a meeting, we're going to mention it to my supervisor, and we're going to walk away. In truth, I was only thinking as far as: hey, did we look at the email notifications that we send for merge requests? I wasn't thinking about the whole thing, just the merge request ones. Yeah, but the same approach applies.
C
Yeah, I think that, depending on the answer we get back from those two groups, that might be a good place to start.
C
Because most of it is email templates, I would imagine what we do could apply to all of our emails, and the second part is text, which is easy to change. I think it's a shame if we're only looking at that, but it could be a good start. But then again, we have so many other things to work on.
C
But thanks for asking about it, Amy. I think it's important, and I hope it's not like I'm dismissing what you're saying. I think it's important because, if you think about it, it's another touchpoint of our experience. It's not inside the GitLab UI, but it's so important. Some people live by the email notifications in how they interact with GitLab; some projects even use only email, responding to each email they receive as the way to respond to issues and merge requests.
C
Okay, yeah. My non-native English brain couldn't discern the difference between, you know, "pleased" and "satisfied." Okay, "pleased" is below "satisfied."
C
Okay, so: the future of attention requests. It's a very big wall of text; I tried to highlight the key points in bold, and the rest is more just context. But first of all, I'd like to thank everyone that was involved in this conversation and analysis.
C
This happened while I was away, and to be honest, it wasn't that hard to put everything together afterwards. I appreciate the transparency, the fact that everything was written down in issues and linked, or cross-linked in this case. I saw everyone, regardless of whether they were inside the group or external to the Code Review group, having a very thoughtful, open, no-ego attitude, and I wanted to praise everyone for that.
C
I
think
that
was
amazing
and
really
good
to
see
at
a
certain
point
you
couldn't
tell
who
was
you
know
involved
with
the
feature
or
not?
Everyone
was
just
you
know
at
the
same
level,
caring
about
it
anyway.
C
So
I
got
caught
up
on
everything
and
I
took
some
time
to
reflect
on
what
happened
and
going
through
those
points
quickly,
so
one
I
think
we
have
really
good
things
and
we
designed
and
built
things
that
hit
all
of
the
points
that
we
wanted
and
we
get
got
great
positive
feedback
from
it.
We
also
got
negative
feedback
from
it
and
we
haven't
necessarily
exposed
this
to
the
wider
external
audience,
and
so
the
feedback
issue
is
very
skewed
towards
gitlab
team
members,
but
nonetheless
I
think
analyzing
everything
that
we've
done.
C
It's
clear
that
we
have
not
integrated
this
well
with
our
existing
notifications
and
also
the
user's
usual
flow
again.
There
are
some
pros
and
we
solved
those
things
that
we
wanted
like.
Okay,
it's
a
clear
signal
to
hand
over
it's.
You
can
now
find
exactly
the
merger
best
need
your
attention.
We
hit
all
of
those
points,
but
it's
very.
C
It
was
a
very
narrowly
designed
concept
and
also
very
narrowly
tested,
and
it
like
the
metaphor
that
I
used
there.
It
seems
like
we
skipped.
You
know
iterations
a
b
and
c
and
jump
directly
to
iteration
d,
which
is
like
the
cherry
on
top
of
the
cake
that
will
do
everything
that
we
wanted.
It's
like
magic,
but
it
it
feels
like
we
missed
and
skipped
those
iterations.
C
That
would
support
the
experience
so
that
it
would
be
just
like
very
easy
for
anyone
to
on
board
and
understand
everything,
and
I
think
one
symptom
of
that
is
that
we
had
to
add.
You
know
two
popovers
and
maybe
even
those
two
you
know
popovers
that
explain
the
future.
Even
those
two
might
not
have
been
enough
to
explain
how
you
should
use
it.
So
that's
it
can
be
useful,
but
sometimes
you
know
it
might
be
a
symptom
of
something
that's
wrong.
C
So
anyway,
I
can
go
into
specific
details
if
you'd
like
me
to,
but
my
recommendation
is
not
to
ship
what
we
have
implemented
and
I'll
be
like
if
we
weren't
looking
at
in
this
in
terms
of
like
personally
and
ego,
I
would
be
very
sad
because
I'm
here
from
the
start
discussing
all
of
this
for
months
and
me
suggesting
this-
it's
also
a
big
blow
for
me,
but
I
think
this
I'm
really
actually
I'm
very
happy
and
comfortable
with
this
recommendation,
because
we
have
learned-
and
I
think
iteration
is
learning
at
the
end
of
the
day.
C
C
A
What's the opportunity cost, from your perspective? If we approach this anew, that's not an insignificant effort, so the question would be: what are we not going to be able to work on that we were probably going to work on over the next few months?
C
That's a good question. Let me see if I have something written here.
C
I don't think we were planning any big features or big UX work other than the MR restructure. If we were to say, okay, let's keep working on the MR restructure, but also let's do this right, this handover process and workflow, that would effectively result in, according to my suggestion, Annabelle, for example, having less capacity and less involvement in the MR restructure.
C
Which basically means we would be pushing out fewer concepts, or validating our concepts more slowly. So it affects the velocity of the MR restructure initiative. How much, I can't say for sure, because it's all a bit of an unknown at the moment.
C
The
good
thing
is
that
we
have
two
designers
if,
if
it
was
just
me
or
annabelle
one
designer,
when
we
were
at
that
time,
it
would
be
a
different
conversation
and
it
would
be
a
significant
impact
on
anything
that
we
would
do
and
we
would
have
to
make
trade-off.
I
don't
think
at
this
point.
We
have
to
make
a
trade-off
of
doing
this
or
that
it's
we
can
still
do
both
the
velocity
will
be
impacted
in
the
restructure.
Mr,
but
I'm
not
that
concerned,
to
be
honest,.
A
I think it brings up another question, which is: is it then a worthwhile use of our time? I think we'd all agree there was a significant amount of effort in even kickstarting the restructure effort, because of how important and critical that is to SUS, usability, and the future of what merge requests do and look like.
A
Is slowing the velocity there the right decision? Because the longer it takes to finish design and validation, the longer it takes us to start what is currently an unknown engineering effort. It seems like we would be better served to get to a point where we can expend engineering effort. Given that we have nine engineers on our team, we have more engineers than we do designers, and so the faster we have a backlog of work for them, the more efficient they are.
C
I get that. As a counterpoint, I don't think we have a shortage of work for engineers to pick up, but I understand what you're saying. It's a concern nonetheless.
C
It's almost like we were trying to elevate and provide a first-class way of doing this handover, but people were already doing it in some fashion. Here at GitLab, we were using the method of unassigning yourself as a reviewer and then adding yourself back as a reviewer. We had a comment from one of our former colleagues who works at another company, and he said they had never thought about doing that; they stay assigned as reviewers and use other ways of signaling how to hand over.
C
But I do think it would be very helpful if we designed this vision, which would probably include some things we had already talked about, with you, me, and Annabelle, when she was designing that finish-review moment. Like where you can approve: maybe there's a way we can have something there for the handover, or not. And even before that, something that Ben reiterated, which is a problem.
C
And he reiterated that in the benchmarking: the "start thread" versus "comment" distinction. That is also part of that flow, basically.
C
So it's a hard decision. I think it is perhaps more balanced and healthy if we have each designer leading one initiative, instead of saying, okay, we have the MR restructure and we have all of our resources on that.
C
But I can be convinced otherwise.
D
I don't know that they're necessarily two separate concepts. Fixing this handover is kind of a part of the restructure; it does solve a problem that the restructure is supposed to solve, and it might even fit pretty well into one of the hypotheses.
C
Yeah, I mean, at the end of the day everything is related, but I think we can separate them, in that the focus we have defined for the restructure is, above all, the navigation and the information architecture.
C
Of course the flow is part of it, but I think we can work on both at the same time, although they're related, yes.
C
Well, that said, we still don't know how we're going to proceed right now with the MR restructure; I'm discussing that with Annabelle and Alexis. So I don't think we're at a good point today to decide what we should prioritize. My idea so far is what I shared above: to have Annabelle lead this work, and in parallel I would be leading the MR restructure, although we'd be working on these together nonetheless. But that's just a preliminary idea.
C
Okay, more direct question: how's everyone feeling about my recommendation? I'm saying "my" recommendation because I'm voicing it here, but I think it's not only my opinion; other people in the UX department also share it.
A
You said, on point one, about the release as-is: the not being able to validate in real life, or constraining future iterations. Can you speak more about that? What do you think might happen?
C
Yeah, it's more or less what I wrote. We don't have something that is validated. We have some positive feedback, and we also have negative feedback, people that are confused about certain aspects of the feature.
C
So first, I don't think we can easily iterate on this in the way we're used to iterating on something with usage feedback, where we have the concept and the foundations and it would just be incremental polishing of what is already there. The things that are confusing to people are foundational things.
C
Say we change the way we're counting things in the header and that dropdown, because we realized through feedback that it doesn't work, so now we're going to count differently. Okay, so now people have to learn again what that count means, and so on. At the end of the day, we haven't validated that the concept works. As I said, there are positive things and we have positive feedback, because for some people it made sense.
C
We wouldn't have reached this point if we didn't also think those things made sense, right? We thought those two solution validations were valid, which added to our confidence, but also, as users, me, you, and others, it made sense to us.
C
But at the same time, we have so much context. We're the power users of GitLab; we're much more sophisticated and connect the dots about everything in this platform much more easily than other users. So, looking back again at the solution validation, if we weren't able to validate that concept, or those concepts, in a vacuum, that is troubling, and I don't think it works in real life either.
C
It's a big risk. As with everything, I can't say for sure that it wouldn't be a success and that we wouldn't be able to iterate, but with what we know today, I think it's a big risk to hope that we can iterate on it and find our way once it's out there, because the moment we put it out there, we're already conditioning a lot of behaviors around the feature.
A
We don't actually have a way to validate complicated code review behaviors outside of introducing them to teams at all. User testing will never be sufficient for that.
A
Saying that solution validation couldn't validate it also sort of gives credit to our ability to validate it that way, as if it would legitimately validate something, and I think that's maybe a deeper flaw in how we think about this. I don't think we could validate this without giving it to real users, any solution, this one or another one, because you need actual software engineers working in a team that has their own review flow. You can't just give it to one person in a vacuum and ask, "how does this work for you and your code review flow with yourself, as you click through screens?" That's not possible.
A
I'm a little concerned that there are a few points here where we're saying it's not validated; I think there were some things that worked and some that didn't. I'm also a little concerned because you mentioned there was negative feedback and positive feedback. One of the things that strikes me, and this is not the first time we've seen this:
A
Reviewers was one of the reasons we did extra prep for this, to try and build tolerance internally. Much of the negative feedback was internal, whereas many of the external participants actually provided very positive feedback on the experience, because it was a marked improvement over how they were using GitLab today.
A
This
feels
like
we're,
maybe
like
internal
confirmation,
biasing
ourselves
right
in
a
way,
that's
like
well
internally,
no
one
understood
it
and
internally
we
have
a
process
for
this,
and
the
reason
we
have
a
process
for
this
is
because,
when
we
rolled
out
reviewers,
everyone
was
mad
about
reviewers.
The
reason
people
unassigned
themselves
is
because
we
couldn't
get
the
counts
right,
like
all
of
the
things
that
like,
if
you
think
about
internally,
what
has
happened,
rb
and
like
the
reason.
A
The reason it's not a problem now is because we're 18 months removed from Reviewers and we've just learned to live with it. I'm concerned that there's an assumption that we could validate something like this later, and an assumption that there maybe wasn't a positive impact externally, even though I think externally there was significantly more positive feedback than there was internally. And I think that's okay. As approvals start to turn on, our teams are going to
A
feel pains that they've never felt before, and I think those are good. But I don't know, I'm just worried. Shelving it, to me, feels like not the right decision, but keeping it hasn't felt like the right decision either. The rationale here continues along the same rationale that seems to have been presented before, and that's not a flaw, but what's not being presented well enough for me at this point is the following.
C
Yeah, I think that's your question; I completely understand. So, the solution validation that we did was a very narrow solution validation. We designed some things in Figma: okay, we're going to change this dropdown, we're going to add this new button, maybe show this toast when you click on the button, and this tooltip.
C
A few things here and there, that's it. And we designed the solution validation to test those specific aspects, not the real-life scenario, and nothing can replace real-life usage. I'm not saying we're going to have a solution validation that is better than actually shipping the real thing.
C
A better test would be having them complete a specific job in this tool, in this scenario: we would place them in the most realistic scenario and tell them, go do your thing, you're in this merge request, how would you review it? And see what they do. In that case we would see all of their usual behaviors, whereas in the solution validation we were telling them what to do.
C
In addition, it was just usability testing; we didn't have an opportunity for dialogue. So, to what you were saying about different teams having different workflows: if we were testing with users and letting them do their own thing, we would recognize certain behaviors that would elicit our curiosity, and we could say, oh wait, why did you do that? And they would say, oh, this is how we usually do it in our team.
C
So, I'm a very positive person, but I'm realistic, not delusional. I think we can do this right. And I can't predict the future; I don't know if, once we do all of this, we would say, oh actually, to really answer this and that question, we need to turn it on.
E
Yeah, I think we have an opportunity here, actually, because we've already turned it on for the Gitaly group. There's an opportunity to really talk to them and direct some of the experience they're having right now towards whatever we iterate on. So I had a point up here.
E
I don't know if you've talked to them at all, but maybe we could unleash some research on them a little more formally to get some of their feedback, or at least talk to them in sort of an interview session, something like that, because they are experiencing it right now as it is. We might as well do the best validation we can on this iteration, and then we'll have some very pointed feedback on things we can do.
C
So, I think the fact that we were not able to validate it is concerning, because we were basing a lot of our confidence, not all of it, but a lot, on the solution validation that we thought went great. And the second thing is that we have clearly identified issues with the concept that I don't think we can easily overcome without introducing significant changes. If it's out in the world, that will again influence people's perception of it, and so on.
A
Okay, can we do two things then, maybe before we keep rehashing? One, can we test it with more people? Two, can you specifically come up with the list of significant problems you don't think we can iterate against?
A
I know I asked you for a few examples, but it would be nice to have a list of specific examples.
A
One of the concerns you expressed is that it's hard to iterate against if lots of people have it, which is different from hard to iterate if we keep it contained to, say, three teams that we could easily line up and turn on today if we wanted to, with no guarantee or commitment of anything long-term, but which would give us actual validation with real users in a way that lets us see what the remaining deficiencies are. I agree the feedback from Gitaly is sparse.
A
I think we're going to need to poke harder there somehow and get more out of them. But there are other teams, if we broaden the horizons to external to GitLab and let entire teams use it, where I think we could get, not a large sampling, but potentially a reasonable sampling of people using the feature. That puts us in a position to have a conversation based on actual validation with actual users.
C
Yeah, I think the danger there is that it's a very specific audience. They are power users, and, whether we like it or not, whether it's intentional or not, we're kind of hand-holding them. Any question they have gets answered, and they already have the context of the feedback issue. They already know, for example, that the icon is not a dropdown; it's not something that opens up.
C
They already know, and they've learned to make sense of what would be confusing for someone else. I don't think we can extrapolate that to the universe of GitLab users, or GitLab code review users.
A
Then, if we can't do it even with three teams that have some amount of context... I think this is where I get frustrated. Multiple times it's been, well, how are we going to validate this? And one of the answers was, let's do a six-month diary study.
A
Let's do all of these things that actually take time with real teams, which require having some version of a feature on, which means some version of a feature has to be engineered. So we're going to let Annabelle do something for two or three months, and we might get some sort of Figma validation, but at some point we will have to turn this on for people in a smaller setting, let real users use something, and have them provide us feedback. And it feels like we're sort of
A
throwing this one away, even though we could turn it on today and get real feedback, and we're saying, no, let's not do that, let's go do something else, because we have suspicions here. To me that's the more frustrating piece: that we sort of don't even want to try and test this. It was literally on for like 54 hours; that's the most anyone's seen it.
E
I think the point is that we should pay attention to the things that we can iterate against and fix those before doing anything else. It doesn't make sense to turn it on if we know that the notifications are going to be a big pain for people; let's fix that, then turn it on.
E
Let's do it with an external group, and let me add them: let me talk to them, do some research with them, suss out some things, report back, and continue from there. I think what we should figure out, and this is probably what you're getting at, is: is this an iterable solution? Can we iterate against this to get to something that we like, or do we truly have to go back to the drawing board and come up with something new?
C
There are fundamental things in the concept that we haven't solved yet. And if we want to keep this on for Gitaly, or even a few other teams, I'm okay with that, because, as I said, I think it solves some things, and once you know how it works, it works. But you have to deal with certain duplication, and with learning how the header works, what clicking there does, how the automations work, etc.
C
For example, people understanding that when you post a comment or finish a review, you have to go and click somewhere to request someone's attention. That's one example; once you learn it, you know it, but it's not integrated, and there's a leap of faith, a gap you have to fill, to understand what needs to happen and what is requested from you. That part, as I said, sure, we could try to address.
C
We could try to iterate backwards on this, but I think there is a risk that we would be introducing changes where, once we put everything together, we realize we're actually walking backwards from that thing we did.
C
We knew it would be a bit confusing, but we were hoping we could iterate, and we can't. I actually think it's much easier if we have the design vision for all of it end to end, validate it, and then slowly build the pieces that ease users into the whole flow, release by release. We would be putting out things such that, once you see everything together, there aren't even any questions about how it works.
C
You'd already know how everything works, because you go through the review flow and it's integrated with our new notification system and so on, because we improved all of that. And again, we can keep it enabled and even bring customers in, but I'm a bit skeptical that we are able to overcome the issues otherwise.
C
Well, not this specific concept, right. What we have designed so far is very specific changes, and, as I said, the two fundamental problems are: it's not very well consolidated with our existing notifications, nor with the header counts and the dropdown, etc.; and it's not a natural part of the usual review process, like when you're writing the comments and so on.
C
When you searched for attention requests, Ben, it was the only thing that came up, and yeah, it doesn't feel like it's well integrated.
C
It's not well integrated, and that's why people have to significantly change how they're working today to make it work. What I'm advocating is that we design an end-to-end concept that takes into account to-dos and what changes we need to make to them, if any, and what changes we need to make not only to the MR counter at the top but to other counts.
C
These are just examples; I'm not saying that's exactly what we would do. It takes into account changes to the MRs list, and also, when you're writing comments, how you're then going to connect all of those comments and hand that over, and the approval, and all of that. When you take all of that together, we have a design vision. We can validate that ideal state, and then we would break it down into the pieces we feel should be built first, learn from that usage, and adapt.
C
I worry we're too attached to the fact that we already spent time building what we have.
A
So, we didn't take notes on a lot of this, but I've put in some action items based on what I think makes sense, and we can yes/no them. This is easy for me, because I'm going to assign them to other people, so I'm happy to take more nos than yeses. But here's what I think we need to do coming out of this conversation. I think one is:
A
It
would
be
helpful
to
me
if
there
was
a
list
of
like
foundational
issues
that
make
it
seem
like
we
can't
iterate
on
this,
because
I
think
that
is
still
a
piece
that
feels
like
it's
missing
from
the
discussion,
like
none
of
the
things
that
you
have
just
said,
like
the
notification
between
the
drop
down
and
the
to
do
count
or
feels
like
it's
not
part
of
the
complete
review.
None
of
those
feel
like
things
that
we
can't
continue
to
like
iterate
on
and
that
we're
not
iterating
on.
A
I
mean
I
think
by
that
logic
you
would
say
like
this
is.
This
is
not
intended
to
be
mean
or
like
misconstrue
this,
but,
like
you
could
say
that,
like
the
batch
comment
feature
shouldn't
have
been
released
because,
like
it's
not
a
complete
feature,
and
it's
not
part
of
the
review
cycle
and
like
we
haven't
iterated
on
it
for
two
years
and
so
like
there's
a
risk
that,
like
we
leave
broken
experiences.
But
I
think
that's
like
that's
sort
of
the
nature
of
how
we
work
and
like
iterate
is
like.
A
There
are
pieces
that,
like
it
will
take
time
to
like
plumb
all
of
those
things
together,
and
so
what
I
need
to
help
me
make
it
like.
A
What would help me form a better-informed view of this is: where are the things we're so concerned about that we won't be able to combine, mesh, or put together a year from now, or even 18 months from now?
A
Two: can we put together a research plan? What would it look like if we were to turn this on for two more groups? What would that process look like in terms of what Ben would do, talking to these people? What would the research look like? What do we need to do?
A
How
do
we
need
to
coordinate
that
those
things
and
just
get
an
idea
of
sort
of
like
what's
the
loe
to
to
do
that
on
top
of
sort
of
everything
else,
bin
has
potentially
going
on
right
now
and
then
the
last
one
is,
I
think
the
easy
one
is
like.
Can
we-
and
this
is
also
for
ben,
because
I
have
asked
and
not
gotten
responses
like
what
is
a
better
way
to
necessarily
approach
the
getaly
team
in
terms
of
getting
more
feedback
out
of
them
because
they
have
been
using
it?
A
now for two or three weeks, since May 10th. What's a better way to approach that? Should we send a survey and say, hey, answer this? Should we go talk to Andres and say, hey, can you get the whole team to comment? What do we need to do to just poke and prod there? I'm open to leading that, or whatever.
A
But
I'd
love
some
thoughts
from
that
on
those,
and
so
that's
what
I
think
like,
let's,
if
we
can
go
away
and
sort
of
like
do
those
over
the
next
week
to
whatever
time
it
takes
since
I
don't
necessarily
know
that
we
need
to
rush
one
way
or
the
other.
A
I
think
those
would
be
helpful
to
me
in
terms
of
like
continuing
to
think
about
this,
because
I
don't,
I
don't
think
we
have
any
pressure
one
way
or
the
other,
since
it's
not
on
for
a
ton
of
people
like
we
have
time
to
continue
to
like
think
and
be
thoughtful
about
what
we
want
to
do
before.
We
go
embark
on
necessarily
like
another
concept
path.
E
A
E
A
A
C
Yeah, I actually already have all of that in my notes as action points for me, so I can probably create an issue with those, and we can discuss them one by one if we want. And in general, have an issue for these action items, like "what to do next" or "state of the attention request feature" or something like that. I don't think the epic is appropriate for this.
A
Awesome. Thanks everyone, and thanks for sticking around for quite a while today to discuss this. I think Amy said the last two items can be read-only, so we'll do it that way and then we'll go from there.
C
Awesome, cool. Thanks for bearing with me and for your input. I'm glad; I think it's a fair action list, so let's do it. Thanks everyone, have a good one.