From YouTube: 2021-03-23 Create:Code Review UX Sync
Description
Weekly UX Sync for the Code Review group
A
So I think the only one on the agenda is the one I put in. I just started looking at the results, based on what we've seen so far. I know, Pedro, you shared it on social, as I shared it on social. I also shared it in the Hacker News thread where people were complaining. You did.
B
A
Almost all of them are within that sort of one-and-a-half range; the mean is within about one and a half, but then they all have a standard deviation of nearly four. Which, if I'm reading that correctly, means they could all move anywhere within that range, and they're all almost equally weighted. So I was just trying to get a sense...
A
...of whether that was true. And then I also anecdotally noted that it's interesting that "finding appropriate reviewers for an MR" is the least popular thing on the list, even though that's the thing that we've been working on. So it's interesting to see that.
C
Yeah, this is a great question. I think I had to read up on, or update, my knowledge of standard deviation. But if we had... where did I have this?
C
So a standard deviation of seven would be the worst; it would be saying it's all over the place. And as close as we can get to one, or even less than that, would be the best.
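A quick numeric sketch may make that interpretation concrete. The rankings below are invented for illustration (they are not the survey's actual responses); on a 1-to-8 ranking scale, a small standard deviation signals consensus and a large one signals that an item could land anywhere:

```python
import statistics

# Hypothetical rankings (1 = most important, 8 = least important)
# from eight respondents, for two survey items.
no_consensus = [1, 2, 3, 4, 5, 6, 7, 8]  # spread over the whole scale
consensus = [1, 1, 2, 1, 2, 1, 1, 2]     # everyone puts it near the top

for name, ranks in [("no consensus", no_consensus), ("consensus", consensus)]:
    mean = statistics.mean(ranks)
    sd = statistics.pstdev(ranks)  # population standard deviation
    print(f"{name}: mean={mean:.2f}, sd={sd:.2f}")
# no consensus: mean=4.50, sd=2.29
# consensus: mean=1.38, sd=0.48
```

With a fully uniform spread, the deviation approaches its maximum for the scale, which matches the reading above that such items are almost equally weighted across positions.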
B
C
What definitely is true is that there's a large cluster. So, "dealing with large MRs"... let me share my screen so we're all looking at the same thing. So for "dealing with large MRs", no one so far has ranked it as the least important problem, and it has the highest mean. So it's looking like there's a consensus about this being the top problem, and everything else... this one is 6.6, almost seven, and then, yeah.
C
Basically, all of these here, from "understanding the broader impact of changes" down to "communicating the intention of comments", share more or less a similar mean, and then this one is a bit off, but still close. I don't know; it's a bit early to say, but...
C
I don't think there's a lot of variation; I think there's a lot of indecision in the middle. Maybe.
C
But yeah, back to your point: we know the performance of large MRs is a real problem. Should we start thinking about some different user experiences here?
B
C
Yeah, also that. So, when I...
B
C
Survey, yeah, exactly. So "dealing with large MRs" was also number one, and then "knowing what had happened since I last visited an MR", and from here down it was more or less similar rankings, almost seven through eight. So from this one to this one: "dealing with large MRs", "knowing what had happened since I last visited an MR", and "understanding the broad impact of changes". And here we're seeing the same thing: the three top problems externally are also the three top problems internally.
C
It looks that way. But back to your point: "dealing with large MRs" was ranked number one internally, and so far it's also ranked number one externally.
C
Yes, I think... yeah.
B
C
A
There's a big problem with that, I think, partially, as we've recognized. In the internal survey, we had an open response field that allowed people to say what they considered a large MR, and a lot of them talked about files and the number of files, and it mostly came down to the performance as well. So when people are responding and saying "hey, this is a big problem for me", I don't know if it's due to the experience, due to just the performance, or both.
C
And looking at the Hacker News post about the 13.10 release, a lot of people were complaining just about the performance, or at least that's their perception. When people are giving their feedback, they might not be accurate about exactly what the problem is, but they use...
B
C
So, "should we start thinking about some different user experiences here?" Definitely, one of the things that would alleviate this the most is the automatic switch to single-file mode, but we haven't made any progress there, and I'm happy to switch this around and start thinking more about that.
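As a rough illustration of what that automatic switch could look like: render the diff in single-file mode by default whenever the MR crosses a size threshold. This is a minimal sketch; the thresholds and function name are assumptions for illustration, not GitLab's actual implementation.

```python
# Hypothetical cut-offs for "large"; real values would need the kind of
# validation discussed in the meeting.
FILES_THRESHOLD = 30
LINES_THRESHOLD = 1000

def should_default_to_single_file(files_changed: int, lines_changed: int) -> bool:
    """Return True when the MR is big enough that rendering every diff
    at once is likely to be slow, so the UI should open in single-file mode."""
    return files_changed > FILES_THRESHOLD or lines_changed > LINES_THRESHOLD

print(should_default_to_single_file(9, 200))    # small MR -> False
print(should_default_to_single_file(42, 180))   # many files -> True
```

The open UX question raised later in the call still applies: a pure size cut-off says nothing about which of those files a given reviewer actually needs to see.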
C
Let me bring up the issue so that we can all see. There, yeah. So I linked to the issue there. A good thing about this issue is that there's almost a proposal in place. I did a lot of research on it, looking at research papers, and I also asked Nick Thomas for some help looking at our own projects to understand how many files, or how many lines, changed.
C
So if you look at the issue, we have this section for files changed and lines changed, and this is a summary across the GCC project, the gitlab-foss project, and the gitlab project. You can see that for files changed we have these as t-shirt sizes.
C
The size categorization technique is described here, and you can already see the accumulation in this medium range for the number of files changed and the number of lines changed. These map fairly well onto the feedback users have given us about what they consider a large MR to be. But the user's perception...
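The kind of t-shirt-size bucketing being described could be sketched like this. The size boundaries below are invented for illustration and are not the ones from the issue:

```python
from collections import Counter

def tshirt_size(files_changed: int) -> str:
    """Bucket an MR into a t-shirt size by number of files changed.
    Boundaries are illustrative assumptions, not the issue's actual values."""
    if files_changed <= 2:
        return "XS"
    if files_changed <= 5:
        return "S"
    if files_changed <= 15:
        return "M"
    if files_changed <= 40:
        return "L"
    return "XL"

# Bucketing a (made-up) sample of MRs shows how a distribution can
# accumulate in the medium range, as described for the real projects:
sample = [1, 3, 4, 7, 9, 12, 14, 18, 25, 60]
print(Counter(tshirt_size(f) for f in sample))
```

The same bucketing could be run over lines changed; comparing the two distributions against user feedback is one way to ground the "what counts as large" question in data rather than recollection.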
C
...I don't think it can be trusted very much, because it's very difficult to know, or to remember, what you would consider a large MR to be in terms of the exact number of files or the exact number of lines changed. So yeah, this is a way that we can... something I had already explored doing. It would alleviate a lot of the pain, but it would probably require some more work and some validation.
C
A
Do you think that's the right issue? I know it gets floated a lot, like "automatically enter single-file mode". I was scrolling through it, and Phil's comment is down there, under that table you were just showing. His comment is: well, what if there are 30 files, but I, as a reviewer, because I'm frontend, only need to review nine of those files? What if we just didn't show you the other 20 files?
A
...large MRs, before we get to this point where we cheat and put you in single-file mode. And I say "cheat" because it feels like a cop-out, almost, right? Like we're sort of saying we can't design an experience, we don't know what the user experience should be, and we don't know how to fix the performance issues that we have.
A
And so our answer is: you get to look at one file, and good luck figuring out all of the files that are in here, working through that, and making sure you've seen all of them. Versus something that could be, you know, infinitely more complex, but maybe is the right experience. And I don't know, if we go down this first easy-win path, whether...
A
...it makes it harder to do the other things, or makes it harder to get back to those other experiences, or whether we won't care as much or won't do something else. I don't know, I'm just, you know... I think this is a thing we don't know a lot about, and I guess I don't know how we would learn a lot about it, other than that we generally know people don't like the performance, but it's going to be different for everyone.
C
Yeah, I agree with you. I think it's a cop-out, because we're not solving the root problem, which is just performance. In an ideal world...
C
...it would just work; it would be almost instantaneous, like a static website where you're just loading a simple HTML page and it has everything in there. And if you compare... there's that website, what's the name, by SourceForge, where they compare the performance of different Git repositories and repository tools, and ours is the worst when it comes to diffs and things like that, and there are others that you just load and it's... it's really...
B
C
You're smiling: are we the worst? Yeah, maybe not the worst worst, but we're up there (or down there) with the worst. And yeah, it would be a cop-out, because we wouldn't be solving the performance problems at their root. We'd just be changing the experience so that it's easier for users to live with the performance problems, or so they don't experience them directly.
C
So I think there's one aspect of large MRs that we're not clear about, and that's also what we wanted from this survey, if I remember correctly: to understand which problems are worth investigating. So maybe we don't have all of the answers, or don't understand exactly what "dealing with large MRs" means, or whether it means the same thing to everyone.
C
Does it mean different things? But one thing, at least to me, is clear: there's a separation between dealing with large MRs because it's slow in the browser (there are a lot of files and things that need to be loaded, so the experience is slow) and, on the other hand, the problem of managing large MRs.
C
So I think an interesting thing we could do is try to find, internally and externally, users that only use single-file mode in GitLab today, and ask about their experience and how much it has changed their perception of GitLab's performance, to understand whether it's worth doubling down on single-file mode or not. Or, as Phil says (I don't know if Phil uses single-file mode, but he says): what if I only care about nine files and there are 30 files?
C
So basically, what I'm trying to say is that there are a lot of open questions; we just need to prioritize which questions we'll go after. And, as I said, I'm comfortable with shifting the focus away from the catching up and onto dealing with large MRs, given...
A
There are a lot of nuances to large MRs and to what the problems actually are here, but I think we universally get fairly clear feedback that performance is at least one of those problems. It's at least the one we know about today. So I like the idea of trying to find some people who exclusively use...
A
...single-file mode, and seeing if we can somehow do some interviews to gauge their perception. Maybe we can find some people internally and just talk to them to start with.
A
I think that's separate from what we might do feature- and experience-wise. I think there are probably opportunities there, and we just need to spend more time on those in a way that's focused on this as a performance issue, versus trying to drive some of these other features. And I also think we're getting to a point with...
A
I think if we can get through "requires my attention" and get the attention-set stuff built... and you mentioned that it was very close to Gerrit; I can't stop using the term "attention set" when I think about it... I think if we get that finished, that sort of caps off reviewers at a point where we don't necessarily need to go do anything else. We've got that, and we've got a way to handle what you need to go look for.
A
We don't need to continue investing in surfacing the people who might be the right reviewer; maybe that's not a thing we need to spend any more time on, like building that algorithm to be smarter about free and busy time, subject-matter expertise, and other things like that. We just leave it alone and let people continue to use the systems they have. And for the "viewed" and progress thing, I think also the goal...
A
...there is that we're getting to a point where we're going to have viewed state persisted for all of your files, no matter which device you pick it up on, or which browser session, or other things like that. And I think if we got that done, that also gets the tracking stuff to a more MVC place, so we can hold off there with both of those efforts.
A
If we do some performance investigations, then we might be in a position, in two or three, to make some bigger user experience changes if we want to. But that gives us time, I think, to investigate this and do this. But I guess, consensus-wise...
C
Yeah, yeah, I agree.
C
Not only for users, but also when we analyze the findings and try to come up with some insights: the performance problem is so big that it kind of blocks everything else. So it's very difficult for us to separate things and say, "okay, this is something that we can do; this is the performance problem." The performance problem is, like, screaming. So anything that we would do, like the automatic single-file mode, would just move the focus away from it without solving the problem.
C
A
I've been hesitant to get into it, because it sort of works, and people are like, "yeah, well, there's this new bug." And this is one of those things where you squash enough bugs and then you don't really need to refactor it anymore, because it's sort of in a fragile working state, and refactoring...
A
...is just going to send you back in time. But I wouldn't expect it, especially given what's going on right now in 13.11: issues that were planned for 13.11 are dropping left and right.
A
We need to figure out with engineering how to reframe it, because we've often looked at performance based on those issues that come from the quality team when they're doing endpoint testing. There's some merit there, but there's also the time the team has spent...
A
...It's maybe not framed in the right way, or not defined well. We think about it on an endpoint basis, and maybe we need to think about it on an experience basis, which might mean we need to know what that experience is going to be. But I don't have a good answer. I just feel like when we've given performance issues this large to engineering, we've gotten back positives, but...
A
...they are huge and hard and complicated, and they take a significant amount of time to figure out and solve, versus if we could find smaller wins.
A
So we might just need to think about that more. But I don't know yet what 14 is going to look like in terms of capacity; it'll depend on how much falls out of 13.11 that we still think we need to get done.
C
Yeah, yeah. In the meantime I'll think about this a bit more, but...
C
...as I said, I'm comfortable shifting the priorities for me, focusing on that and not so much on the catching up, because that's what the data is telling us, and we know that it's a problem. It has always been a problem.
C
Okay, yeah. Before we part ways: Sun Jung, any thoughts or comments?
B
I'd like to dig into the details you mentioned, like maybe interviews, starting with internal users first, and then seeing how we can improve.
C
Cool, okay, thanks. Yeah, we're over time, but we can bring this up tomorrow in the group call if you want, Kai, or we can just let it incubate for next time. But we have the UX department call happening, so we have to drop. Thanks!