From YouTube: 2021-05-25 Create:Code Review Weekly UX Sync
A
Well, it has been a while since we've met. The first one up is the mean time to merge chart, which we spoke about I don't even know how many times ago. Matthew has now filtered that down to the top 200 projects, and it still looks flat, I think, is the word I want to use: relatively unchanged. Oh, interesting, there's a new chart for the top 100 projects in there.

A
Number of MRs: the same measure that we use to find the top 200 projects, to then go do the mean time to merge and find the large MRs. So it's the number of merge requests merged over the last 30 days or something.
B
Yeah, I was reading something some months ago related to the number of reviewers in a project and all of these different aspects, and, to no surprise,
B
the number of reviewers versus the number of merge requests had a big effect on how long it took to merge something. So it might be interesting to know how many people are reviewing things in these projects, because they might be suffering from a lack of reviewers, right?
B
But yeah, I don't know how to interpret this, to be honest. Should we be looking more at these top 200 projects than just the mean time to merge?
A
You would have expected, potentially, that this number declined by some amount, and so I was hoping that maybe we would see that within the top 200 projects. As opposed to that, what we potentially see is that over the past two years that we've been contributing features, the mean time to merge has actually increased ever so slightly for all of these projects. So maybe we're not contributing to a more efficient process, and something else is going on here. I think that was the original reason to look into it.
A
I think, looking at it, if you go up to the one that has all of SaaS, it's basically the same story, right? It starts a little bit lower than it is today, but in general, over the course of two years, it varies by some hours, and I wouldn't consider the difference between 20 and 21 hours to be incredibly significant. So I think the original context of this was: is this the right metric?
B
Yeah, I don't know if it should be the main metric. I think the main metric should be user happiness, right?
B
Because that has everything attached to it. This is something that we need to look at, because if it goes down over time, that's good. If it doesn't change for the better and it becomes worse over time, we need to pay more attention to it. So I think it's good that we have this and we can keep looking at it every now and then. But I do feel that the duration of merge, so here we're looking at speed, how fast we can get something merged...
B
I don't know if there's a relationship, or a similar relationship, with what I've been studying for perceived performance, where a product being fast, or something happening fast or slow, is not as important as people being satisfied. So there's this disconnect: sometimes things that are slow make people happy, but other things that are slow leave people not as happy, because maybe that duration is not as well filled with other things. To say it another way:
B
Let's imagine that the mean time to merge was this high because people were very inefficient and were always facing roadblocks and things to work around when they were using merge requests. That would lead to a very unsatisfactory experience, and so people would be pissed off and would not like the experience.
B
But maybe the current mean time to merge is the way it is and people are still happy, because they're as fast as they can possibly be, they're happy with the experience, and they know that it can't get faster than this. Maybe. So I think, in the end, we need to compare this with other metrics to know
B
whether we're doing well or not. I think by itself it doesn't tell us much unless it has a trend: if there was a trend, we would definitely need to look at it. But if there isn't, it's not necessarily telling us that it's good, but it's also not telling us that it's bad.
A
Yeah, I agree; I think that makes sense. I think we ruled out needing to dig further into it for now, and yeah, we do need to keep looking at what else we should be measuring to sort of validate some of this. So thanks. I think you've got the next one.
B
Yeah, so here's a plan, and I wanted to run this by you to understand if it makes sense or if there's something else that we could be missing here. Today I talked with Anne Lasch, one of our UX researchers, and she is filling in for Catherine on a few things. One of them is this issue about merging, sorry, measuring merge request perceived performance, and there are essentially three things that we're looking at, or that we're thinking about
B
looking at, to build a holistic picture. So the first one is machine-reported, and we can look at GitLab only. We do that by focusing on the LCP, which is the Largest Contentful Paint, in addition to other metrics. This is something I've already been tracking, fortunately, in all of our performance tests, and that we're now tracking with the large merge request test sample. So that's the machine-reported part.
B
But this becomes more difficult if we want to compare it with competitors, and I think a workable solution could be to rely on forgeperf, because it runs weekly or daily; I'm not sure, I think weekly. And although the test samples are not representative of large merge requests, and we can't control the test samples, at least they are consistently applied across tools as much as possible.
B
So, I mean, some tools have certain features that other tools don't, so that might affect things, but in the end it gives us some indication of how well GitLab compares to others given the same conditions, more or less. So I think that's something we can take a look at, not necessarily as an official input, but something more secondary, to say "hey."
B
Okay, and then the other thing, the one I was trying to get away from, is the human-reported; you see that in point C, but we'll get to that in a minute. So I tried to focus on task times, to be as objective as possible, because when you ask humans how long something took to load, you'll probably get a different answer every day, right? People are really bad at estimating durations and how long things took to load.
B
It gets increasingly complex as we add more variables to this, so I thought: okay, let's just look at task times. One of the methods used for this is KLM, keystroke-level modeling. It's very simple: over years and years of research, researchers have come up with specific average durations for specific actions, or operators.
B
So, for example, looking at something on the screen usually takes this many milliseconds; reaching for your mouse usually takes this many milliseconds. You can add all of these small things together, and apparently it gets to within 10 to 20 percent of the actual time when you use it to predict a skilled user's error-free task time. So we could use this to predict how long a power user would take to create a merge request, request a review, or even review a merge request, and use it not only for GitLab but also for competitors. And the thing is, we're not looking at the actual duration.
B
So it doesn't matter whether it takes 10 seconds or 100 seconds in GitLab. What matters is that it takes 10 seconds in GitLab today, and when we release new improvements it now takes 9 seconds, or it took 100 and now it takes 50. It's the comparison between versions of GitLab, and between GitLab and its competitors, that matters. It's that difference, percentage-wise, not so much the actual number in milliseconds. And so that's why,
B
if we always use a consistent method, we are able to predict with a fair amount of certainty how long it would take users, and this can help paint this picture of perceived performance. Because of what I was saying at the beginning, maybe people can also think that a product is slow not because of the response times, but because it's slow for them to do things. So they're thinking that the product is slow, when actually they are performing slowly when carrying out those actions. If all of this makes sense.
A
Thanks. Regarding the link that you provided: is that like moderated testing? How do you do it? I guess my question is: how complicated is it?
B
Right, so it's very simple. There's a source that has all of these operators, how long it usually takes to reach the mouse, to look, to click, and basically you have a script that details every single thing that the user needs to do, and you just sum those durations. And I actually found an awesome tool for this that allows us to repeat it
B
as often as we'd like. It's called Cogulator, and it's an open source tool. Basically you just write a scenario here, or even this one; this one is, I think, easier to understand. You write the goal, "press the new button," and then the words in blue count as durations. So everything you see here, look, think, hands, type, verify, keystroke:
B
all of these have a default duration attached to them, and so the whole duration of doing all of these things would be 11 seconds for someone who's an expert. And if I type something like this here, the task time now changes, because I'm typing more letters. So this is just an example, and you can do it for whatever you'd like. It's much easier to reproduce, it gets within 10 to 20 percent of the actual time, and it's much simpler than running tests with actual users.
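The operator-summing method described above can be sketched in a few lines. The durations used here are commonly cited KLM averages from the HCI literature, not necessarily the defaults the tool uses, and the "create merge request" script at the end is a made-up fragment for illustration:

```python
# Minimal keystroke-level model (KLM) task-time estimator.
# Operator durations (seconds) are commonly cited KLM averages;
# the actual tool may use different defaults.
OPERATORS = {
    "K": 0.28,  # keystroke (average skilled typist)
    "P": 1.10,  # point at a target with the mouse
    "B": 0.10,  # press or release a mouse button
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation / thinking step
}

def task_time(script: str) -> float:
    """Sum operator durations for a space-separated script, e.g. 'M P B'."""
    return sum(OPERATORS[op] for op in script.split())

# Hypothetical fragment of a "create merge request" flow:
# think, point at the button, click, home to keyboard, type three keys.
print(round(task_time("M P B H K K K"), 2))  # → 3.79
```

As noted above, the absolute number matters less than how the total for the same scenario changes between releases, or between tools.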
B
And if we were to try to do this for competitors as well with actual users, it would blow up: it would take a long time to recruit the users, have a stable environment for testing, and try to do this every time we released something.
B
No, and that's why I don't want us to get too hung up on the actual duration. Maybe that's something we could strive for, but the most important thing is the relative duration: say, between GitLab 14.0 and GitLab 14.1, where we release, I don't know, an improvement to creating a merge request in the interface. So now we change that scenario to what it looks like today, and we notice that there's a 10 or 20 percent difference in the task time, right? So it's not...
A
Yeah, that's what I mean, but like, no one actually... this is based on sort of industry standards for those blue keywords. Basically, we define the scenario, and then, if we change something in the way that you would work through that scenario, we update that file and say these are the new steps you would take, and that either increased or decreased the number. Okay. So it's something we would just maintain sort of on our own.
B
Yeah, it's an input. It's not perfect; as it says, the research shows that it has a 10 to 20 percent margin of error, but it's something. And again, I don't think it's really important for us to focus on the actual time, although it's interesting; it's more the comparison over time, as we release improvements, and with competitors. And then the final one is the human-reported. This is something Anne has experience with from when she worked at Google, and she suggested using the HaTS method, which is basically... you've
B
probably, you might have stumbled upon this yourselves when using Google products. Sometimes they have that popover, or that banner, that says: "hey, we'd like to hear your thoughts about," I don't know, Gmail or Google Calendar or whatever. It's basically an in-product survey invite that would be specific to merge requests and would allow us to get responses from users in the context of using the product. This would allow us to mitigate the biases that usually occur when you ask people about experiences after the fact.
B
So when they're in the product and they're using the merge request experience, we would ask them to rate their satisfaction and how they would rate the performance of merge requests. We haven't defined the questions yet, but it would be something like this. And according to a paper that was published on that method, which Google has been maturing for many years now, it seems to be a good input. Well, not perfect, because there are always biases and subjectivity with human reports, but it's yet another input for this holistic picture,
B
if that makes sense. And for competitors, we wouldn't look into that yet, because it's very challenging: it would be a lot of work, and I don't know if it would be that helpful at this point in time.
A
What's next?
B
So the next steps are what I'm going to do: create that issue, promote it to an epic, and create separate issues for these things, for the machine-reported, the task times, and the human-reported, and then also one to create a handbook page where we would collect these inputs and explain how you can read and interpret them to make decisions. And these would run independently from one another. So, for example, for the machine-reported we already have all of the information that we need.
B
The task times are something that I would need to do for specific tasks, creating a merge request, requesting a review, and reviewing a merge request, not only for GitLab but also some competitors. And then the in-product survey invite would be another initiative. I don't want that to block the rest, but it would depend, because we will probably need some engineering capacity to implement it in the product. So yeah, right now we're in this planning phase and we'll see how that goes, basically.
C
So this is the update: I have interviewed several developers. Most of them were maintainers. They liked the "waiting for" concept, and they still like to have the indicator for the reviewer state. So I'd like to combine these two concepts in the design, and, long story short, we are going to have the unmoderated test using usertesting.com with external users, and then it's good to go.
A
Yeah, I left a comment under the... you left some interesting findings, and one of them was that most users rely on email notifications. Is the email notification the first notification that they rely on, or what is their triage process after that notification? To me, that seems like...
A
That
is
the
piece
that
feels
like
what
waiting
for
solves
is
like,
after
your
initial
notification
like
how
do
you
deal
with
that?
Or
is
it
that
every
engineer
which
I
would
find
suspicious
at
best,
based
on
what
I
see
in
daily
standups?
Is
that
like
they
get
an
email
notification
for
a
review
request
and
they
go,
and
they
immediately
take
care
of
it
and
then
unassign
themselves,
and
then
the
next
time
they
get
an
email
notification?
A
It's
because
they've
been
assigned
as
a
reviewer
like
how
do
they
know
like
where
they
are
in
the
flow
after
the
after
that,
first
email
notification.
C
So, firstly, what amazed me about their responses was that they normally remember which MRs they're assigned to, and even which MRs they are assigned to as reviewers. I was really surprised to hear those kinds of responses, because they're great. And I think most of them replied that they're using email as the primary tool because it's mostly the very first signal that you're assigned as a reviewer, and most of the time they will have several different discussions from different reviewers and different maintainers.
C
So that's what I have heard, and I was happy that they like this feature and this concept. The only downside is just a visual problem, so I'm working on it; Pedro and I are working on this review cycle, and then, once it's ready to go, we're good to go with the external users, with the moderated test, which is faster than usual.
B
Yeah, I shared the data about the number of merge requests per day, and GitLab Roulette has the average number of assignments. So not necessarily how many are assigned at the moment, but if you filter, for example, for reviewer and backend, here at the bottom it says how many, on average, are assigned to all backend reviewers at the moment.
B
That's okay. Yeah, just a very quick note to wrap up the merge request widget workflow interviews. Catherine did an amazing job doing the interviews, but she wasn't able to wrap up all of the insights. So I'm going to involve Ian Camacho, the designer leading the whole merge request widget initiative, to try to have him complete the insights. There are a couple of insights already there, but I don't think she was able to finish them, and once that is done we will report back on next steps for the merge request widget.
A
Cool, yeah, I'm looking forward to that one, and now, with Catherine gone, it'll be nice to get this wrapped up and see where we're at with it. So thanks for taking it over and shepherding it through to the end. But yeah, awesome. We're at time, a little bit over even, but thank you, everyone. It's good to see you. And Peter, it looks like you'll be off next week, so maybe we'll move things around again, or we'll figure something out.
A
Yeah, Monday is the US holiday, so we'll figure it out; we'll catch up at some point again in the future. But it's good to see everyone. Everyone enjoy your time off, and thanks for all this.