From YouTube: 2021-05-04 Create:Code Review Weekly UX Sync

A
So I have the first agenda item, about handoff, the MR reviewers, and the assignee. As I already shared with you, Kai, yesterday (it was Monday), and with Pedro, we have our continuous meetings just to discuss this issue. So I made a summary note here; I'm not going to dig into the details, I think that would take more than 30 minutes for sure. Long story short, I think there are two modalities. The model that was originally proposed is similar to what Gerrit does: it's turn-based, pointing out whose turn it is at the moment. The new proposal is closer to what GitHub does: it shows whose turn it is more implicitly, and it focuses only on the reviewers, not on the assignee, so reviewers will have the review states and stronger signals, but the assignee will not.

You can also check the Figma UI here. As the next step, as I chatted with Kai on Monday, I'd like to run a quick survey on which channels people mostly use when they're working on an MR, both as the assignee and as reviewers. The step after that would be preparing for the internal interviews to validate the assumption: how important is it for the assignee to receive a signal in GitLab here, and how can we solve this problem in a better way?

B
And here you called it "waiting for", and then for GitHub you call it "request attention". Would it be the other way around? Because that's my impression from looking at the designs.

C
I mean, we spoke yesterday, and I think I'm fine with validating which of these is the most important and then making a decision on how we go forward. I think that's the right next step to me, based on the gaps that I saw in one of the pieces here, and I think we just figure out how much of this we need to cover off and then go from there.

B
Yeah, if you don't mind, I'll quickly just explain, because I think what you showed Kai was this concept, right?

B
Yeah, yeah, just to quickly exemplify visually what you were explaining about the differences. So in this concept, the signal, the action to request someone's attention or to say that you're waiting for them: the handoff process has this button for reviewers, and there are also these icons that not only tell you where this person is in the review process but also whether, at the same time, there's an action waiting for them. So it has both concepts, the handoff and the review status, in one indicator, and it doesn't provide information or actions for the assignees, only reviewers. And this badge here at the top would be, if I'm not...

B
This is one of the concepts. The other concept is slightly different in that it separates the indicator of who needs to take action from the review status. So if someone has approved, that shows up here; but for this person, the indicator, for example if someone has approved and we're waiting for their action for some reason, both indicators are visible, both statuses are visible. And the drop-down... yeah, the drop-down.

B
This number reflects the number of items that need your attention, the ones that have this indicator. You can have, I don't know, 20 assigned to you and 40 reviews for you, but it will only show the two that need your attention, because someone has requested your attention in one of those merge requests, and this works for both assignees and reviewers. So those are the two concepts, and both have their pros and cons. And yeah, the next steps are what Sanjong was explaining. This was my quick explanation.
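
To make the counting rule in this second concept concrete, here is a minimal Ruby sketch of the idea as described above: the header count ignores how many MRs you are assigned to or reviewing and only counts the ones where your attention has been requested. The struct fields and helper names are illustrative, not GitLab's actual implementation.

```ruby
# Minimal sketch of the counting rule described above; field names and the
# helper are illustrative, not GitLab's actual implementation.
MergeRequest = Struct.new(:title, :assignees, :reviewers, :attention_requested_for, keyword_init: true)

# Count only the MRs where this user's attention was explicitly requested,
# regardless of how many MRs they are assigned to or reviewing overall.
def attention_count(merge_requests, user)
  merge_requests.count { |mr| mr.attention_requested_for.include?(user) }
end

mrs = [
  MergeRequest.new(title: "Fix diff rendering", assignees: ["me"],  reviewers: ["kai"],   attention_requested_for: ["me"]),
  MergeRequest.new(title: "Refactor approvals", assignees: ["me"],  reviewers: ["pedro"], attention_requested_for: []),
  MergeRequest.new(title: "Update docs",        assignees: ["kai"], reviewers: ["me"],    attention_requested_for: ["me"])
]

# I appear on all three MRs, but only two currently need my attention.
puts attention_count(mrs, "me") # => 2
```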

C
Yeah, I think that makes sense. When Sanjong and I talked, my concerns were more about using the merge request drop-down as a to-do list, which the first concept you showed doesn't afford us the ability to do, because that count will always be reflective of where you are. It'll always be the things you need to action, plus a number that doesn't go up or down unless you're unassigned from that MR, right? It's less actionable as a count, and to me that's the biggest piece: do people work off of that count and need it to be their burn-down list, or can that count be reflective of a lot of things? The initial feedback we got when we added the drop-down, and then as we tried to roll out reviewers internally, was basically that. We sort of extrapolate that to both: if that count doesn't move in a way that makes sense, then we don't know what you have to do. And I would say my contention is, and maybe the other piece we need to think about is: do people care whether they're an author with something to do, or a reviewer with something to do, or do they just need to know they have something to do? Does it matter?

B
My assumption, and this is from my personal experience using GitLab, is that not seeing this number go to zero matters. For example, okay, let's say this one is zero and this is one; this one would only go away if I unassign myself or if the MR is merged or closed, right? So it would always stay like this. To me, I become a bit anxious, because it means, well, it's the current flow, so I have to check.

B
So this is not as efficient as having this be zero, or not having any number at all, where it would appear only if I need to do something on a merge request. But that's my perspective and my assumption, and I'm glad that Sanjong is talking with engineers to see how that goes, yeah.

C
Yeah, I agree. I'm excited to see where the feedback comes in, but yeah.

B
Awesome, yeah. The second point I think is interesting for us to discuss. Kai brought this up in Slack: we have a success metric for the Code Review group, it's in the handbook on the direction page, and it's the mean time to merge. It's the duration from the first merge request version until it's merged, so from the first push until the merge, and it's a very broad metric. As you can see, and maybe if we went further back it would not change a lot.

B
So this is the number of hours to merge, and interestingly it has risen a little bit from June 2019 until today; the number of hours has increased. But even so, I don't know if this is due to something specific that we shipped (or didn't ship) in the Code Review group, or maybe it's another group that did something that increased it, or maybe it's not something we should worry too much about. But it's a very broad metric and it's not actionable, so yeah.

C
I think those are all pieces of it, and I'd say those are things we probably need to figure out how to measure. I need to look at the mean time to merge. The mean-time-to-merge one is sort of easy to do in the database because we have timestamps, so it's easy: you just need two timestamps. I don't know how we would calculate these other things easily, but they are probably things we could do and should do.
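
As a concrete illustration of the "two timestamps" point, here is a minimal Ruby sketch of the metric as it was described earlier (the duration from the first push of a merge request until it is merged). The field names are illustrative, not GitLab's actual schema.

```ruby
require "time"

# Illustrative records: one timestamp for the first push, one for the merge.
merge_requests = [
  { first_pushed_at: Time.parse("2021-04-01 09:00:00 UTC"), merged_at: Time.parse("2021-04-02 17:00:00 UTC") },
  { first_pushed_at: Time.parse("2021-04-03 10:00:00 UTC"), merged_at: Time.parse("2021-04-03 14:00:00 UTC") }
]

# Mean time to merge in hours: the average of (merged_at - first_pushed_at).
def mean_time_to_merge_hours(mrs)
  hours = mrs.map { |mr| (mr[:merged_at] - mr[:first_pushed_at]) / 3600.0 }
  hours.sum / hours.size
end

puts mean_time_to_merge_hours(merge_requests).round(1) # => 18.0
```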

C
I'd have to think about how to do it. I guess I had sort of... I'll just go to my next comment.

C
I think when Sarah had mentioned this to me, I had thought that maybe we should think about slicing that data differently, because that's all of gitlab.com, which doesn't account for a hundred percent real usage in all cases. There are times where people have sample and demo projects, or people intentionally have long-running MRs, or other things that are factors there, and that's every single merge request on gitlab.com. Someone may abandon an instance and then come back, you know, three months later, pick something up and merge it, and that's not really reflective of an actual dev team and what their mean time to merge is. On the flip side, I don't think we want to change the metric to be the mean time to merge of MRs in the gitlab project itself; I don't think it's helpful to know that our own org is very efficient.

But I wonder if we could do something based on the top number of customers, the largest SaaS customers, and maybe use a similar method to what we did when we were trying to figure out how to define a large MR: we pulled the top X number of customers based on how many MRs they had merged in the last 90 days, used that customer list, and only queried their mean time to merge, because it's a small enough sample of what we would assume to be active customers, and we would potentially detect an anomaly faster without the noise. It'd be the same as when, in data sets, you eliminate the X lowest and the X highest values to make the average look more normalized. I'm wondering if that would help make that metric more reliable.
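
A minimal Ruby sketch of the slicing idea above, purely to illustrate the shape of the calculation: restrict the metric to the top N namespaces by MRs merged in the last 90 days and compute the mean time to merge only for that sample. The field names and helpers are assumptions for illustration, not an actual GitLab query.

```ruby
require "time"

# Pick the N most active namespaces, measured by MRs merged in the last 90 days.
def top_active_namespaces(mrs, n:, now: Time.now, window_days: 90)
  cutoff = now - window_days * 24 * 3600
  mrs.select { |mr| mr[:merged_at] && mr[:merged_at] >= cutoff }
     .group_by { |mr| mr[:namespace] }
     .sort_by  { |_namespace, list| -list.size }
     .first(n)
     .map(&:first)
end

# Mean time to merge (hours), computed only over the selected namespaces.
def sliced_mean_time_to_merge_hours(mrs, namespaces)
  sample = mrs.select { |mr| namespaces.include?(mr[:namespace]) && mr[:merged_at] }
  hours  = sample.map { |mr| (mr[:merged_at] - mr[:first_pushed_at]) / 3600.0 }
  hours.sum / hours.size
end

# Usage (illustrative): active = top_active_namespaces(all_mrs, n: 100)
#                       sliced_mean_time_to_merge_hours(all_mrs, active)
```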

B
Thanks, I think the slicing is interesting. If anything, it would probably allow us to break down this large monster of the mean time to merge and just look at smaller bits to then decide what we want to do, because maybe we'd realize, if we look at just, I don't know, large projects or very old projects, that we have positively or negatively affected them and there's a trend there.

B
Yeah, do you have an answer to my first point? Or I think you already said something like that.

B
Okay, yeah. Below that chart, interestingly, we say that we would create quarterly baselines, for UX I think, to provide quality feedback. We haven't done that, but it's something we should do nonetheless, and we already have some: we did the category maturity scorecards, and those included some part of code review.

B
So that is the category maturity, and we can look into that as well. But something I'm hoping we can do a bit more often is these quarterly baselines, and that's actually something I'm planning to do this milestone. Let me see if I can link to that. It's to understand the performance.

B
We have a grading rubric that basically says exceeds expectations, meets expectations, average, poor, terrible, and that grades the experience; it's heuristic and something we could do. At the same time, I'm looking at how we can measure the perceived performance of merge requests, and that could be another thing. But, I don't know, we already have a lot of things in Periscope, a lot of dashboards with different metrics.

B
Katherine, do you have any thoughts from a UX research perspective? Here we're thinking a lot about quantitative data, but what about qualitative data?

B
That's okay, yeah. I only have one quick question about this. So on the direction page we say that this is the success metric, but then we also have the GMAU.

C
MAU is the metric that the business is run on. I think the group is not run on MAU exclusively, though; the group is run on the merge request experience. And so I would say, as a group, you know, and it's probably reflected in the current quarter's work where we're thinking about performance, right: we care about the merge request experience and making sure that code that's submitted via merge request is taken care of and merged as quickly as possible. That is ultimately what the Code Review group facilitates.

C
You know, more users might tail off, or we might see adoption of fewer features, or we might not see... we rolled out multi-line comments, right; that's a good example. We made a change to multi-line comments thinking that it would improve the code review experience and make it faster to leave better feedback about more lines of code, and what we saw was an initial pop in multi-line comments based on that hitting a release and being in a release post.

C
Maybe people just leave a single-line comment and they're not willing to take that extra effort, right? We don't know; I'm just hypothesizing off the cuff, we don't know one way or the other. But I think it's both, so we have to be mindful of both, and I don't think we as a group have done a good job at looking at mean time to merge, or accounting for it.

C
You asked for an issue on suggestions: applied suggestions and the total number of suggestions. There's a huge discrepancy in that MAU number, a massive difference between the number of people who make a suggestion and the number of people who apply suggestions. So now we're at a point where we're able to go, okay, the MAU numbers don't seem right; what other things do we need to keep asking ourselves to figure that out, and then understand whether it's something that's impacting the speed at which the merge request gets done, or whether this is expected, or those other things. I think that's where we're at in our data maturity journey, so they're both important.

I think as a group we need to start figuring out how we talk about mean time to merge more, but largely, every time I've looked at that chart, it's not been... I think, to Sarah's point and why we're having this conversation, you look at it and you go, neat, that number has been the same for two and a half years. So could we have done nothing for the last two and a half years and our MAU would have grown as much as it has? Or did we add a bunch of stuff that made it better? Did we do anything, or nothing, or whatever? We don't know.

B
Yeah, I think that makes sense. From what you're explaining about the MR experience, it sounds like we should be tracking usability and the three agreed-upon facets of usability: effectiveness, efficiency and satisfaction, because they basically cover everything you just mentioned, right? So satisfaction: are people happy with the experience, and maybe it's not the fastest experience but they're happy, right; perceived performance leads to satisfaction, for example. But then you have pure performance, so efficiency: how long does someone take to comment, for example, or to approve. And also effectiveness: how many attempts do users take to do something, how many errors do they make? So it sounds like we should be tracking usability more than anything else, so yeah, I think it's good food for thought. And on multi-line comments, I think it's due to the very structure of code files, and I don't think it would change, but yeah. We don't have a lot of time left, so we can jump to your next point, Kai.
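
Satisfaction is usually captured with surveys, but the efficiency and effectiveness facets mentioned above can, in principle, be approximated from timestamps and counts. A minimal Ruby sketch with entirely made-up event fields (not anything GitLab records today):

```ruby
require "time"

# Illustrative review sessions: efficiency as time from opening the diff to
# the first comment, effectiveness as the number of failed attempts (errors).
sessions = [
  { opened_at: Time.parse("2021-05-04 10:00:00 UTC"), first_comment_at: Time.parse("2021-05-04 10:06:00 UTC"), failed_attempts: 0 },
  { opened_at: Time.parse("2021-05-04 11:00:00 UTC"), first_comment_at: Time.parse("2021-05-04 11:15:00 UTC"), failed_attempts: 2 }
]

# Efficiency: mean minutes until the reviewer leaves their first comment.
mean_minutes = sessions.sum { |s| (s[:first_comment_at] - s[:opened_at]) / 60.0 } / sessions.size

# Effectiveness proxy: mean number of failed attempts per session.
mean_errors = sessions.sum { |s| s[:failed_attempts] }.to_f / sessions.size

puts format("time to first comment: %.1f min, errors per session: %.1f", mean_minutes, mean_errors)
# => time to first comment: 10.5 min, errors per session: 1.0
```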

C
Yeah, we're up on time, and everyone can go back and read them. Every time I jump back into this issue my head hurts more. There are two pieces here, and I know, Pedro, you responded to some of it. Part of this is that by default, when a project is created, the value for these two settings is nil, and Ruby treats nil as false. But the value that we expect in the API is true when you say "prevent", I think, and then that results in something not happening, which would be the false side of that, so there's this sort of third negative.

We could fix this, like Carrie could fix it, and say nil is true, and that would make the default new-project behavior work like we expect, but then the API continues to be confusing and the language continues to be confusing. Mike suggested making the settings do what they were intended to do, but they never did that to begin with, so people would now have unexpected behavior. So I'd just say take a look and read through this; we're up on time and I don't want to discuss it now, but it is confusing and complicated, and I could use some help trying to figure out what we think the best thing to do is and how we do that.
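
As a rough illustration of the nil-default problem described above, here is a small Ruby sketch showing why a nil column behaves like false and what the "nil is true" style of fix would change. The setting name is made up, since the actual settings aren't named in this discussion.

```ruby
# Hypothetical setting name, purely for illustration; on a newly created
# project the stored value is nil.
settings = { prevent_something: nil }

# nil is falsy in Ruby, so the "prevent" behaviour silently stays off,
# even though the API reads as if it should default to on.
if settings[:prevent_something]
  puts "prevented"
else
  puts "not prevented" # what new projects get today
end

# The "nil is true" fix discussed above: only an explicit false disables the
# behaviour, so an unset (nil) value acts like true.
prevent = settings[:prevent_something] != false
puts prevent ? "prevented" : "not prevented" # => "prevented"
```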

C
And if that means there are two or three steps we need to take, then let's just say it's two or three steps, versus maybe stop-gapping it with something and then adjusting it later. So feel free to take a look and try your best to understand all the language and consequences there.

C
Yeah, and thanks for finding that in the docs, because that helped, and also threw a giant wrench into everything. So it's appreciated that you found that there was supposed to be a default.