From YouTube: 2021-04-28 Create:Code Review Weekly Sync
Description
No description was provided for this meeting.
A
Pictures. Got the first one. We said it would be really quick, which it is. This was the research from the internal/external code review survey. So if you were an engineer at GitLab and you did this, or a person at GitLab and you did this, thank you. And then the external one: Pedro's finished linking up the insights, and there's a video overview that you can watch.
A
So if you're interested in some of the results of those, take a look. Some of what's driving the large MR work is the feedback we got back about large MRs in there, and some sizing things from that. So there are some interesting things there, but there's also other work in there, so feel free to take a look.
A
The next one is the big one that I wanted to put on the board for chatting, and I don't have a good idea on how to do this one. I was thinking we could do it like we do our planning meeting, where everyone gets a few seconds to explain which one, and we sort of quickly yay or nay them for thinking about in 14. But I don't know if that's the best way. Andre, Matt, what are your thoughts?
A
If you have thoughts on whether that works, that's good. Okay. So the reason I put this here is that one of the things we need to do is figure out what we might do to accomplish the OKR, and while there are certain endpoints that we know might be problematic, or other things that we want to look at, in order to get the gains we're looking at...
A
We have to look at other solutions as well, particularly in the memory area. Changing the endpoints is not going to reduce how much memory the pages load; that's sort of not how that works. So these are ideas. The first handful are mine, and then there are some others in here.
A
That
look,
I
don't
know
who
they're
from
but
the
first
one
is
merge,
request
exclusions,
and
we
talked
about
this
in
slack
and
I
can
find
the
slack
link,
but
essentially
the
idea
with
that
there
is
a
subset
of
files,
in
particular
in
large,
mrs,
these
show
up
more
often
where
it's
the
code.
A
That's
like
vendored
from
go
so
every
time
you
have
to
like
rebuild
the
govender
library
like
it
also
submits
that,
in
your
merge
request,
I
think
like
this
would
be
equivalent
to
including,
like
your
vendor,
folder
and
like
npm,
or
your
node
modules
folder,
like
in
a
merge
request,
which
would
create
a
massive
merge
request
of
a
bunch
of
code
that
you
have
no
intent
to
review,
because
that's
not
how
that
works.
Usually,
so,
if
we
proactively
excluded
files,
we
could
sort
of
reduce
the
payload
of
the
merge
request,
that's
generated
at
first.
A
I
think
we'd
still
have
to
provide
a
way
to
like
see
those
files,
but
it's
a
it's
a
possible
way
to
do
this.
It
is
what
other
providers
do
and
the
linguist
gym,
or
the
linguist
yeah.
I
think
it's
a
gem
supports,
has
some
logic
for,
like
the
types
of
things
that
fall
into
these
categories
that
are
not
intended
for
humans
is
what
it's
called.
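The exclusion idea above can be sketched roughly as follows. This is an illustrative sketch only: the real logic would live on the backend and lean on the linguist gem's "generated" heuristics, and the path patterns below are hypothetical stand-ins for that classification.

```typescript
interface DiffFile {
  path: string;
  diffLines: number;
}

// Hypothetical patterns standing in for linguist's generated-file detection.
const GENERATED_PATTERNS: RegExp[] = [
  /(^|\/)vendor\//,        // Go (and Ruby) vendored dependencies
  /(^|\/)node_modules\//,  // npm dependencies committed by mistake
  /\.min\.(js|css)$/,      // minified assets
  /package-lock\.json$/,   // lockfiles are rarely reviewed line by line
];

function isLikelyGenerated(path: string): boolean {
  return GENERATED_PATTERNS.some((re) => re.test(path));
}

// Split a diff into files to render and files to collapse behind a
// "show anyway" affordance, so reviewers can still reach them.
function partitionDiff(files: DiffFile[]): { render: DiffFile[]; collapsed: DiffFile[] } {
  const render: DiffFile[] = [];
  const collapsed: DiffFile[] = [];
  for (const file of files) {
    (isLikelyGenerated(file.path) ? collapsed : render).push(file);
  }
  return { render, collapsed };
}
```

As noted in the discussion, doing the filtering on the backend (so the excluded file contents are never sent at all) is what actually delivers the payload reduction; this sketch only shows the classification step.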
B
Yeah, thanks Kai. I think this one has... anything that brings down the number of files being rendered does improve performance. I would say that the argument here is that this will only affect the few MRs that do apply to that situation, and not have a broad impact across all MRs. I still think it's worth it to consider. There are some details that we would have to figure out: how do we choose which ones are excluded?
B
I think we already talked about the implementation: to really reap the benefits of performance, it should be done on the back end, so that the back end doesn't even send the content of those files to the front end. But yeah, I think this would be a good thing for us to consider. It would help.
A
Just putting words in your mouth: potentially not initially, though, right? Like potentially we should address that, maybe after we get through the first pass. Okay, got it. Correct, yeah.
C
Well, I was just wondering: we might need to create a mechanism to hide whole folders. Right now we can collapse a file, but from what I get from Kai, if you are committing a vendor folder, it's going to be thousands or hundreds of files, and at the moment we don't have any way of collapsing whole folders.
A
Okay, I think that's fine. I think the answer then on this one is... I'm going to do this, and then just after... okay, we'll just put that one down at the bottom, and we won't, at this time... we'll say the decision is we'll just address that one after the OKR work. The next one: I don't have a good answer, I don't have a good link.
A
I don't know if there's an epic or an issue. One of the things that I think is true (and I wish Carrie was here to come in and teach us all about this) is that diff generation goes through multiple layers in order to get to the front end, where it's rendered and then where it gets cached again, and we've talked about how, you know...
A
One of the problems here is that it's Rails code, and that can be a little bit slower, and that we sort of send lots of data back and forth in all these places. And so one of the thoughts is: how much could we offload into Gitaly, to sort of have it be responsible for all of the diff generation and do more of the work there?
B
My limited knowledge of Go goes up to the border of the back end and front end, but I do have some visibility of that. We are addressing some of these problems on the source code side of things, where we're going with a different approach: we're using Monaco on the front end... sorry, we're considering using Monaco on the front end to render the diffs based on the raw content from the server.
B
I think that would make it easier, but it's such an unknown path forward that I wouldn't put it on our roadmap so far, just because Monaco would be a significant departure from the current implementation we have today on the front end. We might get to it, but not right now, and it wouldn't be reaping the benefits in time for the OKR.
B
A specific spike to have both front end and back end, and eventually someone from Gitaly, involved in a discussion: having a spike would allow that time to be well spent, and even if we don't ship the improvement in fourteen zero, then we'll be able to work on something palpable in fourteen one. So that would be my recommendation: to have a spike on this, sorry, to have a spike on this with Gitaly.
A
Matt, can you work with someone on the back end to sort of define what we should look at or investigate in this, or have them validate if this is even worth doing? I don't mean to put you on the spot and ask you.
A
The next one is "reload to change diff style". So this is the premise here: this is linked to an issue that says the cog gets unresponsive in a large MR, and I'm proposing that what you do is reload to change the diff style. I think those two things are wildly unrelated, but I've put them into one issue, and I should probably create a separate one for this.
A
But the idea is, right now in our cog, you can make all of those setting changes (to go from side-by-side to inline, or show or hide whitespace changes, or any of those other things) on the fly. We will return that data back to you sort of instantaneously, the exception being the whitespace changes, because... I don't think it has to go fetch new diffs. But because of that, that means we have enough data in that initial load, or my presumption is:
A
We would cut down the payload and sort of the amount of logic on that page and all of these other things that are handling that for us. I don't know how true this is, and I put this in because we do a unified diff now; I think we've cut some of that.
B
Thanks for the question. There's a lot of stuff to unpack here, but you're right: since we unified the way that we represent the diffs, the benefit wouldn't be on the network side of things. It would be, though, on the processing side of things. We had made that decision at the time (and we considered doing this at the time), and the reasoning was that very small MRs will just switch from inline to parallel very easily, right?
B
So the benefit here would be that the expectation from the user is "okay, I'm reloading the page". So in terms of timing it might be the same actual perceived performance, though it might be more acceptable to see the page reloading than the browser freezing. So that is the distinction here. There's a slight aspect to this, which is that at this point, every time the user clicks an option, it triggers the update, whatever that is.
B
One of the things we have talked about in the past is that by adding a button to apply the changes, we'd be able to do more than one change at a time. So if the user wants to enable the whitespace changes and switch to parallel all in one go, they have to click one, wait for the browser to come back, then click again, and then wait again.
B
So I think if we do look into this, both of these things (having the button and triggering a reload) would improve the perceived performance, not necessarily the timing that it takes, with the mindfulness that it's still probably going to make it worse for small MRs. But for the benefit of large MRs, I think it's worth it.
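The apply-button idea could look roughly like this: collect the pending diff view settings, then trigger one full-page reload that carries all of them as query parameters. The parameter names here are invented for illustration, not the actual GitLab endpoint contract.

```typescript
interface DiffViewSettings {
  view: "inline" | "parallel";
  whitespace: "show" | "ignore";
}

// Build the reload URL from the current location plus the pending settings.
// Kept as a pure function so it can be exercised without a browser.
function buildReloadUrl(currentUrl: string, settings: DiffViewSettings): string {
  const url = new URL(currentUrl);
  url.searchParams.set("view", settings.view);
  url.searchParams.set("w", settings.whitespace === "ignore" ? "1" : "0");
  return url.toString();
}

// In the browser, the apply button's click handler would then do something like:
//   window.location.assign(buildReloadUrl(window.location.href, pendingSettings));
// so both changes land in a single reload instead of two separate re-renders.
```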
E
This is blasphemy at GitLab. We keep talking about how bad the performance is when we tear down a lot of things in the UI, but it's very rare that we talk about why that is. I know, Andre, you and I have talked about this, but it is our framework. It is the framework that is causing that performance problem. Yes, there are a lot of DOM nodes in the browser, but the browser can generally handle a lot of DOM nodes.
B
Yep, thanks Thomas. I think that will apply to a more long-term vision of rewriting diffs in a better way; the discussions we're having are part of that longer term. In the more short term, I feel like this reload thing would be an acceptable way of dealing with it. But yes, in the medium to long term, we're looking into ways to work around this cost of tearing down large sets of data.
B
So, bottom line: okay, if it's that clear, I think we should still consider this for 14.0 and work on it. We do need some UX guidance about how to handle this, particularly because the benefit will be perceived performance, and we will need some UX participation there too.
A
Over
from
all
right
and
then
ux
guidance,
well,
we
can
work
on
the
the
ux
piece
of
this
now
and
get
an
issue
created
and
then
we'll
tag.
You
is
this
front,
and
only
yes,
no
probably.
E
Andre, correct me if I'm wrong: the caching stuff that I was working on as a spike wouldn't have any impact here, because we're not re-preparing the diffs that are already prepared; we're just trying to re-render them in different ways. Is that right?
A
Okay, the last one is improvements to whitespace changes. There's a bunch of things in this epic. I think the biggest one, and one that's probably not up for discussion and something we need to do, is fix the batch diffs. I think the concern is that...
A
This potentially needs to be done on the Gitaly side, and we're waiting on CJ to get back to talk about this one. But I guess the other question is: this is one of those where we do generate multiple diffs, right? These are technically separate diffs, and we generate inline and side-by-side versions, or we generate the one unified diff of these.
A
Are
there
other
ways
to
think
about
this?
That
we
could
do
these
things
to.
A
I
don't
know,
get
some
performance
gain
out
of
out
of
these
there's
a
ton
of
ui
consist
inconsistencies
related
to
this
because,
as
it
turns
out,
when
you
change
the
amount
of
white
space,
you
return,
you've
now
changed
the
size
of
all
the
diffs
and
that
changes
the
limits
that
you
might
hit
in
a
large.
Mr
and
different
things
can
happen
in
that
regard,
and
so
there's
a
bunch
of
those
things
that
we
need
to
get
to,
but
those
are
less
less
performance
related.
So
thoughts
on
this.
B
I would put this as a very high priority, just because what I wrote there is that the improvements we've shipped for the batch diffs have been life-changing for merge requests, and the users who always use this setting are not feeling it: they're still living in the reality that we had a year and a half ago. They're not experiencing that, because of the problem that when you use this setting, it's not batched. So I know that there's the Gitaly dependency, so whatever we need to do to sort this out.
B
Kind of. So one of the problems we found when Phil was working on improvements to the batch diffs was that we could start triggering the batch diffs in parallel. But for us to be able to do that, we needed an improvement so that we could assert order on the batch diffs, and that was the bottleneck: we needed backend implementation for that.
B
If that's clear enough, that would be the one technicality where I think we can improve on batch diffs without going into caching on the back end or caching in Gitaly. There's one crazy theory, Matt, by the way; I'll just sneak-peek this because I want to talk to you about it tomorrow. When we trigger a very large MR, you usually have like five diff batches, but from the moment you get the first...
B
You
know
you're
gonna
get
the
next
four,
so
I
don't
think
we're
doing
any
preemptive
loading
and
caching
for
the
future
requests.
So
I
think
that
could
definitely
boost
the
last
batches,
but
I
don't
think
we've
done
a
lot
of
due
diligence
on
that
feasibility
on
it.
But
that
would
be
something
that
I
would
definitely
suggest
looking
into,
because
it's
one
of
those
cases
that
you
know
you're
going
to
get
those
requests
within
a
second.
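The preemptive-loading theory above can be sketched as: once the first batch response tells us how many pages exist, fire the remaining page requests immediately instead of waiting for each render to finish. The endpoint shape and parameter names here are hypothetical.

```typescript
// Compute the URLs for the batches we already know are coming (pages 2..N).
function batchUrls(base: string, totalPages: number, perPage: number): string[] {
  const urls: string[] = [];
  for (let page = 2; page <= totalPages; page++) {
    urls.push(`${base}?page=${page}&per_page=${perPage}`);
  }
  return urls;
}

// Kick all remaining requests off in parallel; the render loop can consume
// them in order as each one resolves. The fetcher is injectable for testing.
async function prefetchRemainingBatches(
  base: string,
  totalPages: number,
  perPage: number,
  fetcher: (url: string) => Promise<unknown>
): Promise<unknown[]> {
  return Promise.all(batchUrls(base, totalPages, perPage).map(fetcher));
}
```

Whether this pays off depends on the caching questions raised in the discussion (server-side cost of serving five concurrent batch requests), which is exactly the due diligence the speaker says has not been done yet.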
E
Worth talking about here live, but I can say quickly, Andre: the diff metadata that we get is not batched, I believe, and has a metadata blob for every file, in the correct order. Can we not use that to identify the order of files in the batches?
B
I think we've stepped away from leveraging the front-end data to order the way the diffs are rendered, because that is a can of worms that would add a lot of complexity to the way that we render the diffs. We can look into it, but I think that's part of the investigation on the ordering. But I do think, when we looked into it, we realized the back end...
B
If the back end gave us that, it would be far simpler to just put them in place, and we're looking for simpler every time we can here. But good callout, though; I think we'll include that in the issue, just to make sure that we don't do something that is not necessary.
B
Okay, I think, for the sake of time, I'll move to the next one. Anyone have any other topic here on this whitespace one? All right. So, briefly: one of the things that came up when assessing the 10k reference architecture site speed reports is that by loading the diffs file tree list open by default, we were incurring a lot of costly operations by default when rendering the MR. That's because we have to compute the file tree; we have to render it there.
B
We have to compute a bunch of things to render the file tree. If it's not open by default, and if it makes sense to the user that it's not open by default, that itself would be a performance boost. Now, we are very mindful: we don't want to gain the performance metrics at the cost of user experience. So this is mostly a UX issue, and I think Pedro is already involved there to figure out whether it would make sense to the user to open an MR without the file tree.
B
One note there is that we already open it closed if the viewport is narrow: there's a certain magic number, and we just don't open it if the viewport is below that number. The theory here is that a lot of users are already experiencing that, so why not just make that the behavior for all screen sizes? If they want to see the file tree, they have a very easy way to just open it, but by default it will be closed.
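The behavior described above can be sketched as a small predicate: show the tree by default only above a width threshold, and the proposal is simply to default to closed for every width. The breakpoint value is a made-up stand-in for the "magic number" mentioned in the discussion.

```typescript
const TREE_BREAKPOINT_PX = 1200; // hypothetical threshold, not the real value

function fileTreeOpenByDefault(viewportWidth: number, userOverride?: boolean): boolean {
  // An explicit user choice always wins over the responsive default.
  if (userOverride !== undefined) return userOverride;
  // Current behavior: open only on wide viewports.
  // The proposal amounts to replacing this line with `return false;`.
  return viewportWidth >= TREE_BREAKPOINT_PX;
}
```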
A
I'll voice it, yeah. I was just going to say: I think in a larger MR, that's probably when you want the file tree the most, when you actually need to navigate files that have overflowed the screen. That's probably a piece that you want, so we need to be mindful. I quickly clicked into the issue, and I think Pedro seems to think it should be hidden by default.
B
To be clear: we looked a lot at this, and I think having the viewport a certain size misleads us, but the screenshot I took is from the site speed report. The site speed run loads with all default settings; it doesn't change any settings. The screenshot you see in the issue is taken from the site speed report at the size where it triggers that open-by-default behavior.
B
Okay, yeah, right, so that's that one. Can I go to the next one? All right. So this next one is about a theory that the feeling that merge requests are slow goes beyond the loading of a merge request with no options, with no other context. One of the most common cases is for you to follow a link to a discussion in an MR: somebody shared the note, "oh, I just updated this", or something on Slack. It's a very common use case to follow a link to the discussion.
B
What happens today is, if your comment is at the end of the merge request, at the last file, you have to wait for all the batches to load and all the files to render, and only then will the browser jump. You will wait for all this to happen, and only then will you jump to the right discussion. Now, we already have the API changes, because Kerry shipped this recently for another feature, where we can request the batch diff for a specific file; we would just have to load the discussions.
B
Does
that
reddit?
Does
this
and
it
just
it
makes
sense
if
you're,
if
you're,
if
you're
loading
a
link
to
one
particular
item
on
the
page,
why
do
you
need
to
load
all
the
rest
once
he
reads
that
comment?
He
might
want
to
go
there
and
we
can
facilitate
that
path.
So
I
think
this
will
improve
the
the
experience
of
emerging
quests
being
slow
going
from.
B
Oh,
it
just
became
much
snappier
all
of
a
sudden,
and
that
would
be
in
terms
of
complexity,
not
that
complex,
because
we
already
have
things
in
place,
but
the
win
would
be
major
on
this
use
case,
not
for
the
overall
load
of
the
page.
But
for
this
particular
use
case.
B
I do need UX guidance on how to display the call to action to render the rest of the diffs, because right now we don't know how to show that. It would be very easy for us to only show that diff file and only show the discussions on that diff file; that would be easy enough for us to do. But we don't know how to display the option of "hey, you might want to load the rest of the diffs; click here" or something. That's the UX thing.
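The jump-to-discussion flow above can be sketched as: parse the note anchor from the URL, resolve which diff file the discussion belongs to, and fetch just that file's diff instead of every batch. The anchor format follows GitLab's `#note_<id>` fragments; the endpoint and data shapes are invented for illustration.

```typescript
interface Discussion {
  noteId: string;
  filePath: string;
}

// Extract the note id from a fragment like "#note_12345".
function noteIdFromHash(hash: string): string | null {
  const match = /^#note_(\d+)$/.exec(hash);
  return match ? match[1] : null;
}

// Given the discussions index, find which single file needs to be fetched.
function fileForNote(discussions: Discussion[], noteId: string): string | null {
  const hit = discussions.find((d) => d.noteId === noteId);
  return hit ? hit.filePath : null;
}

// The page bootstrap would then do, roughly:
//   const id = noteIdFromHash(window.location.hash);
//   const path = id && fileForNote(discussions, id);
//   if (path) fetch(`/diffs_for_file?path=${encodeURIComponent(path)}`); // hypothetical endpoint
// and render a "load all remaining diffs" call to action for everything else,
// which is the UX question raised above.
```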
D
Okay, so I don't see any downside to improving this, and I think you already proposed the UI here, and it's just a small UX change. Of course, I'm open to hearing more opinions from Pedro, but I think it's a good issue to start.
A
Matt, we're up against time. We can punt the rest; there's a couple of read-onlys. The last two, I think, are read-onlys, and then I will plant number three, or we can talk about number three in the planning issue. So thanks, everyone. That was fun. It was; thanks for promoting it.