From YouTube: 2021-04-07 Create:Code Review Weekly Sync
Description: Weekly sync with the Code Review Product, Engineering and UX
A
Thank you, Kai. Yeah, so I added a bunch of points that are all related to merge request performance. This is something that recently blew up, and it's something that we collectively at GitLab have been aware of.
A
But up until recently, no one had the full picture, or as close as we can get to a full picture, as we now do, and this has raised many concerns. Sid is one of the people who is most concerned about this, and he raised this topic in yesterday's UX group conversation. I posted his comments there, and he is expecting us to share plans and results in the CEO's channel and, overall, to involve him and make him aware of what we're doing. He also proposed some things that he thought we could attempt to do, and I placed them there: a definition of large merge requests, testing those large merge requests and bugs related to whitespace, and perhaps also having a score from the SUS survey.
A
That is specific to merge requests, and that's something that I'm looking into below in point three, and then we can get to that. Yeah, this was initially an FYI point, and then I wanted to discuss it specifically alongside Michelle's suggestion for a shared KR below, but I see some comments there. Of course, yeah, we can talk about this, so I'm going to remove the FYI marker. Michelle, Andrei, if either of you wants to voice your comments.
C
Is very self-explanatory. You gotta...
C
I'll have to go back and watch that conversation, but we know, and honestly it's a good thing, that so many other people across the company are taking notice.
B
Yeah, I agree. We welcome this attention and we are here for it. My question, then, was: I see a lot of complaints broadly about performance and slowness, that this is slow, that you want larger MRs to be faster, and everything.
B
But when we get down to it, and this is especially important to setting a useful OKR, what metrics will we want to affect? Because there's the rendering of the MR that you want to make faster, which we've done extensive work on, but there's also the operating on an MR, like doing operations, comments, removal, resolving it, etc., which calls for a different kind of metric. So the Speed Index, for example, will not track that improvement.
B
We're talking more about memory usage over the performing of several operations: do we have memory leaks, that sort of stuff. So, Kai, you have an answer?
D
Yeah, I would say they're related; it's all of it. I tried to make this point in the group conversation yesterday: there are lots of things here, and we use "large MR" as an umbrella term for everything. We sort of say "large MR" when it's lots of files, when there's lots of comments, when it is slow. It's just like, oh, it's because it's a large MR, right? We generically use it to mean "not optimal for a variety of reasons", and so I think it's all of those things.
D
It does make it hard to set a benchmark of where we're going to get to if we're doing both of these things, because some work might not impact that. But I think at this point we don't get a chance to decide that; based on Sid's response yesterday, I don't think he cares how we feel about breaking this down.
A
Yeah, I agree with Kai, but I think that distinction is important to make, so thank you for bringing that up. We need to do both. If we had to pick one, if we really, really needed to pick one, I would say the loading and rendering front, but that is my personal view on it. I don't have any specific quantitative data, only qualitative data from research, and that points more toward the loading and rendering; not a lot of people say that it's slow when commenting or browsing or doing an operation.
A
And I think we need to prove this not only to leadership but externally. The best way to prove it is with actual... I mean, the experience of course needs to be better, but if we can say to Sid and to the community on Hacker News, every time something like this occurs: hey, this is the amount of time that it takes to load something versus the same thing on GitHub. If we can say that, and we can say that we're improving and we're almost there, or we've even surpassed GitHub...
A
I think that's what everyone is looking for. Yeah, Michelle? No? Sorry.
C
I'll write this in when I'm done talking. I did have this in mind, but the performing-operations part, I think, is a huge complaint, and that's why I added the "for all elements" part to my key result. But I see here that I also added total page load time, so that could be confusing.
C
But the very first thing that I tested was... I created a large MR; again, that wasn't a successful test, but the very first thing that I did was add a comment to one of the files, and doing that took like two or three seconds. So I do think that the operations part goes hand in hand, but to Kai's point, I don't think we care to separate that. I think we're going to see that once we start working with this key result, if that makes sense.
B
Okay, yeah, I agree. I mostly mention it because we can improve a bunch of metrics on the reports that users will not feel, and we've done that; we've seen that in the past. The one we're currently focusing a lot on is Total Blocking Time, because it manifests itself, from the user's perspective, as the browser locking up. And going into my next point: you mentioned that people expect that we'd be able to compare ourselves directly with GitHub and say that we're faster or just as fast.
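[Editor's note: Total Blocking Time is named but not defined in the meeting. As context, it can be approximated in the browser by observing long main-thread tasks; the TypeScript sketch below is illustrative, not GitLab's actual instrumentation.]

// Approximate Total Blocking Time (TBT) by watching "longtask" entries.
// TBT sums the portion of each main-thread task beyond the 50 ms budget,
// which is why it tracks the feeling of the browser locking up better
// than load-time metrics do.
let totalBlockingTime = 0;

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const blocking = entry.duration - 50; // only the overage counts
    if (blocking > 0) {
      totalBlockingTime += blocking;
    }
  }
  console.log(`Approximate TBT so far: ${totalBlockingTime.toFixed(0)} ms`);
});

// 'longtask' entries are emitted for main-thread tasks longer than 50 ms.
observer.observe({ type: 'longtask', buffered: true });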
B
I'm talking about the technology that we use to render them on the page. Just to compare: it's 40 megabytes of memory usage in their browser, compared to a gigabyte on our page, and that's because we use significantly different approaches to rendering the front end. So what I'm saying is that, if that's the goal, then there are two things here. One is the iterative improvements we can do; those will only get us so far, but they will not get us to the goal.
B
If that's the goal, then we need to have another thread started as soon as possible to get to that goal. Does that make sense; is that clear? And just for perspective, from the virtual scrolling investigations that we've done: in a case where memory usage was around 500 megabytes, the lowest we've been able to bring it down to, and this is very preliminary, with some bugs still, is 170 megabytes, which is still way above what GitHub does. Just as a nugget of detail.
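[Editor's note: the virtual scrolling referenced here is the standard windowing technique: keep the whole diff in memory as data, but only mount DOM nodes for the rows near the viewport. A minimal, framework-agnostic TypeScript sketch follows; all names and numbers are hypothetical.]

// Compute which slice of rows should be mounted for the current scroll
// position. Everything outside the window stays as plain data, which is
// where the memory savings described above come from.
interface VirtualWindow {
  start: number;    // index of the first rendered row
  end: number;      // index one past the last rendered row
  offsetPx: number; // top spacer standing in for the unrendered rows above
}

function computeWindow(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  totalRows: number,
  buffer = 5,
): VirtualWindow {
  const first = Math.floor(scrollTop / rowHeight);
  const visible = Math.ceil(viewportHeight / rowHeight);
  const start = Math.max(0, first - buffer);
  const end = Math.min(totalRows, first + visible + buffer);
  return { start, end, offsetPx: start * rowHeight };
}

// Example: a 10,000-row diff with 32 px rows in an 800 px viewport mounts
// only ~35 rows at a time instead of all 10,000.
console.log(computeWindow(64_000, 800, 32, 10_000));
// -> { start: 1995, end: 2030, offsetPx: 63840 }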
C
B
Wait, a 30 to 40 percent reduction of what metric?
C
B
We'll definitely need to hash out the specific things we're targeting: 40 percent faster on what? Because what I'm saying won't be achievable is getting to the level of performance that GitHub has as a direct comparison, but the 40 percent sounds doable. We just have to specify which metric we want to target. Metrics, sorry, more than one, not just one.
D
I just... I know everything says a quarter, because we do OKRs, and we make them. We take big things and make them an OKR when we want to make them people's priorities, and so it says a quarter.
D
I don't get the impression, and I've never gotten the impression, that anyone is expecting us to solve this in a quarter. I don't think anyone thinks we'll come out of this three months later and go, we're as fast as GitHub, okay, we did this, let's move on to the next thing. Now, the purpose of making it an OKR is that, rightly or wrongly,
D
we didn't actively try to go and burn that down and make it a better experience. We sort of acknowledged some of it, masked some things, worked on improvements when we could. And I know great improvements have been made in some of the endpoint controllers, and some improvements have been made in, you know, Largest Contentful Paint, and those are all good; but we were also still building features and adding other things to the product and doing other work. And I think what this OKR, and sort of what the rest of the company, is telling us is: we don't necessarily get to do that for the next quarter.
D
I think the purpose of making this an OKR is: this is what we're doing for the next quarter, and it's all we're thinking about as a group, and I think all of the engineers will only be thinking about this, for a quarter. Then we can see where we are and figure out what we need to keep doing and what the bigger pieces are. But yeah, don't expect this to be "a quarter and we're done"; think about it more as...
D
...that people want us to really pay attention to this, and, in others' eyes, we haven't; we maybe haven't done as well as they would have liked us to. I don't necessarily agree with that, but I would say that we accept it.
A
Yeah, I think it would be great to meet or surpass that metric, but I think there are other things that we can do to affect the user's perception of performance, so that they will still rave about our product, and so that they can achieve their goals faster and with more satisfaction than on GitHub. That's the ultimate goal, and not so much the timings; but the timings, of course, tell a story as well.
A
So, for example, we have to make progress not only on pure product performance, but we could also do other things from the UX side, like reducing the perception of waiting times: making sure everything has loading states and that it doesn't look like the UI is frozen, for example. Also, a very simple but very challenging thing to do is just reducing the number of clicks and the travel time with the cursor for people to do something and get their job done.
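[Editor's note: a minimal TypeScript sketch of the "loading states" idea just described: paint a placeholder immediately so the UI never looks frozen while data is fetched. The helper is hypothetical, not GitLab code.]

// Render a skeleton right away, then swap in the real content when the
// fetch resolves. Perceived performance improves even if total time does not.
async function renderWithLoadingState(
  container: HTMLElement,
  fetchContent: () => Promise<string>,
): Promise<void> {
  container.innerHTML =
    '<div class="skeleton" aria-busy="true">Loading...</div>';
  try {
    container.textContent = await fetchContent();
  } catch {
    container.textContent = 'Failed to load. Retry?';
  }
}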
B
Yeah, thanks, that really works. What I'm talking about is exactly that. For example, Thomas, Phil, and I were talking a lot about: is Vue even the right tool for this job? That's how deep this goes, right; we're talking about other ways of rendering templates on the page that don't depend on a virtual DOM. We're going deep on this in particular, but we haven't really started an effort yet. But that works, so yeah, thanks for that. Can we go to Marcel?
E
A
C
A
B
Yeah, the tests, from what I understand from Grant: we have two major large MRs, and I am with Kai, it's a very big umbrella term, to the point that it's unspecified.
B
They have two large MRs, one for commits and one for discussions, and they run the tests on that 10k reference architecture. And I'm with Michelle: they might not represent exactly a real-world scenario, but they're a good enough stab at it; we should definitely improve on it, though.
B
My links go to the wider report, so what you're linked to is the dashboard of historical numbers, but we have a report that is run repeatedly, not sure if daily, but regularly, on this architecture, and that will provide us with deep insights.
B
It will provide you with a lot of the actual metrics that we're looking into on those reports. For example, when we're doing a feature, we will try to compare the numbers that we get after the feature is deployed, to see the impact that it has, like we've done with the TBT just recently. There's another level of metrics that we're measuring, which is the User Timing API.
B
If that makes sense: any page on the web will have the standard metrics taken, but the ones that we have from the User Timing API are specified by us. We were able to mark specific moments in our application to detect when the first file gets rendered, when the file tree gets rendered, and when the last file gets rendered, and that gives us a better perception of what's happening with the app. Say the page took longer to load; but why was it? Did the file tree take longer to load?

B
Did the files take longer to render? It gives us a more detailed, fine-grained approach. Marcel, that's kind of like the metrics we're looking into regularly. If it helps to have a more summarized version, maybe we can do an update on the handbook, on our team page, to show which projects we're looking into.
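[Editor's note: the custom moments described above correspond to performance.mark and performance.measure from the User Timing API. The TypeScript sketch below is illustrative; the mark names are hypothetical, not GitLab's real instrumentation.]

// Mark app-specific moments as they happen during MR diff rendering.
performance.mark('mr-first-file-rendered'); // first diff file on screen
performance.mark('mr-file-tree-rendered');  // file tree finished mounting
performance.mark('mr-last-file-rendered');  // final diff file on screen

// Measure between marks (or from the time origin when no start mark is
// given), so a slow page can be attributed to the file tree vs. the files.
performance.measure('file-tree-time', undefined, 'mr-file-tree-rendered');
performance.measure(
  'diff-render-time',
  'mr-first-file-rendered',
  'mr-last-file-rendered',
);

for (const m of performance.getEntriesByType('measure')) {
  console.log(`${m.name}: ${m.duration.toFixed(0)} ms`);
}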
E
A
B
A
Okay, so Michelle made this suggestion: improve the performance of large MRs by 40 percent, as measured by the total page load time for all elements. And FYI, this is likely to become a product key result for Q2, again aligned with what Sid is also prioritizing. The point below that is related to what we were just talking about: can we reuse this data from the performance site speed? I already added an action item to talk about that, so I don't think we need to talk about it here. One thing in specific that I wanted to bring up is: would it be, well, not "possible", because anything is possible...
C
B
Yeah, I don't have an opinion, other than: as long as it doesn't take capacity from the team solving the problems, I'm fine. I think this touches on Marcel's point of revealing the numbers in a consumable way.
B
I just would rather we not focus on that in particular with the engineers we have. That's kind of my only thought; it would remove capacity from the fixing.
A
Sure, yeah, I think... but would it be helpful as a way to understand how competitors are doing it? I mean, certain things, so that we could learn from them.
B
So, if we go with the disruptive approach, part of that effort will be an exploratory phase, and in that we'll do the benchmarking of others. I think this question, in this example, falls more on the PR, marketing, high-level communication part of things, not necessarily the part that's beneficial for engineering, which we'll do on our own when we're doing the exploration. I don't want to stop it; I just think that it falls more on the communication part of the effort, which is important.
D
Yeah, I think it would be distracting to have it now. It'll create a comparison and potentially detract from any improvements we make if we're still not there, right? If we end up 40 percent better and we're still not GitHub-fast, people are going to go, well, you're still not GitHub-fast, and so 40 percent worth of improvement is going to look like nobody cares, or like nobody appreciates it, that sort of thing.
D
I think of it in that way, so I don't think it would be helpful to make a comparison today.
D
I think what we need to do, and I put it down in the actions, is find the reference large MR that we're going to use, set up the dashboard for that with the metrics we're going to track, and that's the one that we have. Eventually we can take that reference large MR and put it on github.com or on other places and then see where we're at, but I think right now we should be in a looking-inward phase, not a looking-outward phase.
C
My point here is basically just what I said earlier. You see these results in the form of our performance refinement issues that we burn down every month, and those issues are important, and that is why we work them; but sometimes those issues have been de-prioritized because they're just simply not an example of what happens on gitlab.com. So, yeah, I don't know if those tests are accurate enough to say, this is how users perceive performance.
D
For creating the reference MR: who wants to do that? We probably need to do it sooner rather than later. I know we've got the one that I have, and that's terrible, because it's 1500 single-line changes, which is not useful either. Does backend want to build out a script that we can use to sort of easily do that everywhere, or how should we get one?
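[Editor's note: no such script exists in the meeting itself. Below is one hypothetical shape it could take, as a Node/TypeScript sketch: generate a branch with many multi-line file changes that can then be opened as a merge request against a test project. All names and sizes are illustrative.]

// Generate a reproducible "large MR" fixture branch.
import { execSync } from 'node:child_process';
import { mkdirSync, writeFileSync } from 'node:fs';

const FILES = 200;          // number of changed files
const LINES_PER_FILE = 500; // lines per file, to exercise diff rendering

execSync('git checkout -b reference-large-mr');
mkdirSync('large-mr-fixture', { recursive: true });

for (let f = 0; f < FILES; f++) {
  const lines = Array.from(
    { length: LINES_PER_FILE },
    (_, i) => `file ${f}, line ${i}: generated content for diff testing`,
  );
  writeFileSync(`large-mr-fixture/file-${f}.txt`, lines.join('\n') + '\n');
}

execSync('git add large-mr-fixture');
execSync('git commit -m "Add reference large-MR fixture (generated)"');
// Push and open the MR against the default branch by hand or via the API.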
B
Okay, can I say something? We've had this question in conversations before, over time, and I know that at least since 2018 we've been tracking one particular example of an MR that is considered large, and it's a real-world example. It has 96 commits, and we've been tracking it in our performance dashboards.
B
Instead of going and reinventing the wheel, could we just reuse that one? Let me try to find where we are in the agenda.
D
Yeah, we're up at time, and we should probably figure out if we can use it. The other piece is that we need to define "large MR", and then that will probably tell us whether or not that one meets the benchmark. I asked down below in the action items, in a Google Doc comment, but we need to know, and this is again where "large MR" is the terrible catch-all term: is it X commits? Is a large MR actually one that's bumping up against our diff limits? Because we know we have customers that hit our diff limits with their large MRs, and then the rest of the MR suffers. Or is it something that's under our diff limits, collectively, that we want to approach? So I think we need to define that, and then get an MR that pushes us right to the envelope of that.