From YouTube: 2022-11-24 #5 Code Review Performance Round Table
A
Here in code review, so we have some topics on the agenda. Thank you, everybody that has been discussing them and pushing the topics ahead. Before we go into the topics, I've noticed that, since we moved into the issue board and issues, it seems like we... It's not a problem per se, but just something for us to be aware of: we need to follow the issues and have the discussions evolve in the issues themselves. Just a heads up. So the first topic we have still open to be discussed is the improved caching of the diff files batch API. There's been some movement there. Is anybody aware of updates, or is this purely backend?
D
Basically, someone mentioned last time we had a call that we have HTTP caching. I think it was Patrick.
D
I've checked that, and the caching doesn't really work well for us right now. It's on production, at least, so I will create an improved version of that and see if it works, because right now we have to invalidate our caching hash on the backend side and it seems to not work. So I'll try to push this caching key to the frontend, so it's strictly cached on the frontend side without touching the backend at all, and we'll see what that gives us.
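A rough sketch of the approach just described: include the backend-provided caching key in the request URL itself, so every diff version gets a distinct, immutable URL that the browser's HTTP cache can serve without any backend-side invalidation. The endpoint shape and the `ck` parameter name are assumptions for illustration, not GitLab's actual API.

```typescript
// Build a diffs batch URL that embeds the cache key. A new push produces
// a new key and therefore a new URL, so stale cached responses are simply
// never requested again and age out on their own.
// NOTE: the endpoint and parameter names here are hypothetical.
function diffsBatchUrl(baseUrl: string, page: number, cacheKey: string): string {
  const url = new URL(baseUrl);
  url.searchParams.set("page", String(page));
  url.searchParams.set("ck", cacheKey);
  return url.toString();
}
```

The server could then mark these responses as long-lived (for example `Cache-Control: max-age=31536000, immutable`), since a given URL never changes meaning.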
A
All right. Is this being tracked in any issue in particular, or anything like that?
A
It's happening in a related epic, and I'll put the link in there so that people can follow that discussion, exactly the one you and Patrick were having. Sounds good. It feels like we definitely want to make sure that we can leverage the caching. The easiest way to make sure the HTTP caching works well: if we move the hash, sorry, the cache key, into the URL, then it would definitely be easier to make that happen. So yeah, keep pushing that.
D
There's also a discussion on the backend side: before doing any preemptive caching in Redis or anywhere else, they want to measure what takes the most time for our diff files batch API to get the result. So they want to measure it first and then apply any optimizations or caching strategies. That's what I've heard from them.
A
What we've been working on with Gary is mapping out the whole system, and part of the ask that we had was to get some rough timings for each component, like where are we taking longer. I think we can get a good first hint of delays out of the, I want to say, flame graph that they have. So we might be able to take a couple of good hints from those metrics, but they might want to dig deeper on certain things that are not shown on the flame graph.
A
So, awesome, especially the synchronous work and everything. I don't know, yeah. Let's move forward; they're not here, so we can't get any more updates, but hopefully through the next week we can get some updates on the backend. Yeah, okay, so moving on to the next point then, we don't have to linger too much here: investigate the possibility of paginating diff metadata. There's been some movement here as well.
D
Patrick suggested a really good idea: splitting metadata and file tree, which takes most of the time for a big merge request. And he actually already made a working proof of concept that we can use. So he splits the file tree into a separate API, and it's also paginatable. He told me that it takes about 100 milliseconds to get the first 30 items, which is really good. So we could use that to show the file tree first and then load metadata as soon as it's ready.
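A minimal sketch of the split described above: fetch the paginated file tree and the diff metadata independently and render each as soon as it arrives, so neither request blocks the other. The types and fetcher signatures are hypothetical, for illustration only.

```typescript
// Fetch file tree (first page) and metadata in parallel; each render
// callback fires as soon as its own request resolves.
type Tree = string[];
type Meta = Record<string, unknown>;

async function loadDiffShell(
  fetchTree: (page: number) => Promise<Tree>,
  fetchMeta: () => Promise<Meta>,
  onTree: (tree: Tree) => void,
  onMeta: (meta: Meta) => void,
): Promise<void> {
  // Start both requests at once; neither callback waits for the other.
  await Promise.all([
    fetchTree(1).then(onTree),
    fetchMeta().then(onMeta),
  ]);
}
```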
D
That way we can show stuff independently of each other. Metadata won't be dependent on the file tree, and the file tree won't be dependent on metadata; it's all loaded in parallel. So that should be a nice performance improvement for us, and I'll play around with it to see if it can actually improve the perceived performance.
A
What I'm thinking here is: we have an issue in the pipeline to only load the single file if you're following the link to a discussion, right? So all of these different ways of loading data are just a mess at this point, but it is a mess that is introducing some improvement, so I'll allow it, I guess. But I think at a certain point we need to step back and make it more robust.
A
I don't know if it's moving to GraphQL and letting the frontend kind of dictate how we load this data, or something else, but we might want to have something, whether it's a local frontend service, that is more responsible for getting the data in the appropriate way, so the app is simplified by that. We'll have to think about that later, but for now I think this is a good way to get some gains in there.
A
Oh, that's even better. I'm gonna put that in the comments.
A
Right, thanks, yeah. Definitely. It begs the question whether the metadata is used elsewhere, but I don't think it is, and we can save some time on it. Yeah, great. I don't think we are using it elsewhere, because the diff files on the batch endpoint are the ones we use to render the content, but...
A
Yeah, I remember something like that. I think it's when you expand the discussion on a diff in the overview tab.
A
I don't know if that's the one. Anyway, regardless of what it is, we can definitely make that split into a separate call, and we'll go from there. But yeah, something to keep an eye out for. Right, next one. I think we almost already covered this one: whenever possible, render MR diffs client-side, cached by a name or ID. This has been around since last week.
B
Oh, so we're caching with the ETag stuff. Why don't we just dump it into Redis? And I know what you're going to say, because we use too much Redis, but can we not just dump more into Redis? It would just give us more on the backend side of things. It seems kind of bad that we could make it better for everyone, but we're kind of being held up.
D
I think we cannot put it directly in Redis using this exact ETag, because it is based on the user preferences in some way. I think one of the backend engineers showed the exact parts that this ETag consists of, and it includes the current user cache key, whatever that means.
B
But if we find the right balance, we use it and kind of take the user-specific parts out. So if it's just a simple check of "can the user comment on this line?", well, if they can comment on the whole merge request, then they can comment on the line. So that's a single permission check for a whole merge request.
A
Same thing. This is where we were talking about the ETag caching, which is related.
D
I also think that one of the reasons the backend is quite hesitant to move this into Redis is because we are already caching quite a lot of stuff in Redis. I think they are caching the diff itself in Redis, and on top of that we would add caching of the response that we request on the client side. So it's essentially doubling the size of the cache, at least doubling.
A
Yeah, I've even been thinking about using a different caching storage. If you think about all the static stuff on a merge request: if we had an object store, say an S3 bucket, with all the JSON responses for every MR, which don't change until there's a commit, we could generate those files upon the submission of the commit. Upon the creation of a diff version, you immediately generate the files, put them in static storage, and basically Rails would only have to read the files.
A
It
wouldn't
be:
a
storage
in
cash.
It
wouldn't
be
a
storage
in
in
redis,
but
it
would
be
a
storage
in
like
some
file
systems
somewhere,
and
that
tends
to
be
fast.
It's
not
probably
as
fast
as
radixes
in
memory.
But
you
remove
a
lot
of
the
competition
because
then
then
we
won't
have
the
problem
of
in-memory
caching.
A
We could have terabytes of files rendered pretty cheaply that could last a couple of days of caching. So I don't know, that's what has been on my mind, but I don't know if that's even feasible.
D
I also think that we probably cache much more than we actually need to. I suggested that in the "improve caching of the diff files batch API" issue. I guess we were caching every single merge request, every single response that we have, and it doesn't seem that we need to do that, because we only need to cache the heaviest ones, the ones with a lot of changes, probably. And we also need to cache only the latest version, because that is the most frequently used diff, and the rest can be dropped: if there is a new push to the merge request, just remove all the caches and create the latest one. That should at least reduce the amount of storage.
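A toy sketch of the eviction policy suggested above: keep only the newest diff version per merge request and drop older entries on push. A plain in-memory Map stands in for Redis here, and the key shape is an assumption.

```typescript
// Latest-version-only cache: a new diff version for a merge request
// replaces the old cached entry entirely, so storage stays bounded at
// one payload per MR.
class LatestOnlyCache {
  private store = new Map<string, { version: string; payload: string }>();

  set(mrId: string, version: string, payload: string): void {
    const current = this.store.get(mrId);
    // A push (new version) evicts whatever was cached before.
    if (!current || current.version !== version) {
      this.store.set(mrId, { version, payload });
    }
  }

  get(mrId: string, version: string): string | undefined {
    const entry = this.store.get(mrId);
    // Only the latest version is ever served; older versions miss.
    return entry && entry.version === version ? entry.payload : undefined;
  }
}
```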
D
We need less memory, basically. So that was my idea, but they weren't really sold on it, because they need to investigate this more. They need some data, like analytics: what are the heaviest merge requests? What are the most frequent requests we do? They need some data, and that is reasonable, and at least I don't have the data at hand right now.
A
Yeah, let's keep talking about it, and hopefully we'll get some backend folks to look deeper into that or just explain a little bit more. Cool, on to the next point: the virtual scroller, or basically alternatives to the virtual scroller. Do you have a bit of an update for us?
D
I would say we have a solution that is a bit faster than what we have right now, because our way of doing things is already not optimal, I would say. But at least compared with what we have, it's looking good. I can give you a little demo.
D
Alright, give me a second.
D
So this is, let's just open a merge request in GDK that I could find, and...
D
It loads pretty quick. So it's already loaded; as you can see, the scrolling is quite fast, no delays, and it also finished quite fast. It's still a basic demo, no interactions, no commenting, no collapsing files, whatever, but it shows that...
D
...we can render lots of files client-side if it's just HTML. So it's basically putting HTML into the document, and it renders it, and it's proven to be quite fast. So I think at least one thing we got right is that rendering lots of files on the client is possible, but, I guess, not with the classic client-side rendering in the main thread, which I think virtual scrolling was addressing in the first place, because it took a lot of memory and a lot of time to render the stuff, and it's still in memory after it's rendered.
D
We don't get that limitation, but we still have to wait for the batch API, and what I'm going to push forward is to investigate more into bringing back server-side rendering to diffs and splitting up the dynamic and static stuff, because we have lots of static stuff on that page, and it would be really beneficial to have it server-side rendered, but not the entire thing. We could render, say, five files first and then stream the rest back to the client. That would probably be the fastest way to do it, because we don't have proper streaming support in Rails.
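A small sketch of the "render the first few files on the server, stream the rest" idea: consume a stream of server-rendered HTML chunks and hand each one to an append callback as it arrives, so the earliest files become visible before the rest of the payload has finished downloading. The chunk source and the `append` callback are hypothetical stand-ins for the real transport and DOM insertion.

```typescript
// Append each streamed HTML chunk as soon as it arrives, instead of
// waiting for the whole response; returns how many chunks were handled.
async function appendStreamedDiffs(
  chunks: AsyncIterable<string>,
  append: (html: string) => void,
): Promise<number> {
  let count = 0;
  for await (const html of chunks) {
    // In the real page this would be something like
    // container.insertAdjacentHTML("beforeend", html).
    append(html);
    count += 1;
  }
  return count;
}
```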
D
We'd have to paginate all this kind of stuff and build it up on the client after it's loaded. Regarding the working implementation right now, it seems like a huge overhead on top of what we have, because it includes all the JavaScript.
D
So I will continue to push this forward to see how complex it is to add discussions on top of these statically rendered diffs, because right now we render discussions the same way: we render diffs first and then our discussions on top, but it's all done in the Vue application. I would like to separate these into distinct applications, so discussions would be a separate application from the diffs. I'll see how that affects performance, how hard it is to decouple, actually, and I'll make some measurements.
D
I'll, I guess, compare all our current solutions, like virtual scrolling and the worker stuff, and, if I have a chance, compare to server-side rendered diffs, because we already render diffs on the server on, like, the commits page. When you view a specific commit and all its files, it renders them on the server. We can reuse that to at...
D
Yeah, yeah, we can reuse that stuff to at least compare how slow or how fast it is against what we have right now, and after that we can make some decisions, probably, or conclusions as to what to push. But overall it looks like we should bring back at least partial server-side rendering to the diffs app, because it's just the fastest way to render stuff and it completely skips the API request part, because in the demo I've shown you, the slowest part is requesting the API.
A
Sorry, let me ask this first. So you said you're trying to make a separate application for the discussions, separate from the diffs. How does that relate to the fact that we already have two Vue apps, the notes and the diffs? They both share the notes. Would you extract the notes from those two apps and have a separate app to handle the notes, and then talk to both the diffs and the notes, the overview tab so to speak?
D
Okay, if it's stored in Vuex, it doesn't really matter, because we just reuse the same store. The important thing is how we mount these applications, how we add them to the DOM. This is the important part, because we cannot add the discussion until we fetch the required line: if the line is not in the DOM yet, we cannot add the discussion to that line. We should handle that, and we should also handle adding new discussions to the statically rendered diffs.
A
Remember, there are discussions in diffs and there are discussions that are not in diffs, so theoretically you could render the discussions that are not in diffs right away and wait for the diffs to be loaded, yeah. We can turn that on. Okay, I had another question, yeah. You said that you only had the diff ones; you didn't have a lot of interaction yet in the demo.
D
You mean, would it add more complexity to add new functions to this app?
A
Not
new
functions
well,
I'm,
not
not
to
worry
about
the
new
functions
or
extending
features,
and
something
like
that.
It's
more
like
right
now
we
have
a
dump
of
HTML
on
the
page,
great
yeah,
but
now
we
need
to
make
all
the
interactions
like
leaving
a
comments,
adding
code
quality,
stuff
yeah.
Will
we
be
able
to
continue
to
use
a
view
for
that
for
those
interactions,
yeah.
D
Yeah, we can use Vue for that. I'm not sure how, because I'm not sure how, like, code quality works; it just starts in the mounted hook, and what it does I'm not sure. We certainly will have to extract all the interactive components into separate apps: the line expansion, discussions, the diff header that collapses or marks as viewed.
D
Yeah, these will be different Vue instances, like we do on most of the pages at GitLab; we have lots of Vue instances, and this will be the same case for that page. And it's the fastest way to do it, because we can instantiate these things like...
A
That's exciting. I want to see that; I want to see how we solve that, for sure, but that's a little bit ahead of now. Let's keep pushing on the front that matters most immediately, which is exactly how we server-side render parts of this. I wanted to add: I'm okay with repeating ourselves here. If we have to copy the code that we have in a Vue template or an HTML piece, move it into Haml, and duplicate that code a little bit, I'm...
A
...okay with that, because it's performance work, I guess. So yeah, let's try it out and see if we can do this fast enough to be worth it. And yeah, I'm happy to just leverage the Rails setup that we have right now, and then we can look for other options later, but for now this will be a nice try. Cool, any other thoughts there?
A
Okay, thanks for sharing the updates. The next one is compressing files in diffs to reduce the overall payload size. Does anybody have any thoughts here? I talked to Kai, and he kind of mentioned that it's mostly just backend topics, and a priority topic, I guess.
D
I think, if we go the server-side rendering route, we won't have this problem at all, because we won't need this API for the files at all. If we go SSR, the whole data stays on the server; we just get the rendered HTML and work with that.
A
Do we need to discuss this any further at this point, or is it mostly just on their side, I guess?
A
Yeah, I don't think there's much to discuss at this point. We'll keep bringing it back; it's not closed yet, and I don't see any conclusion. Yeah, let's keep it open. Any more topics or thoughts before we go?
D
I'm not sure if we can, or if we should, create a separate topic for the file tree, like separating metadata and file tree, because it's separate from paginating the metadata; it's about paginating the file tree itself.
A
It might be worth it, because there is a difference there.
A
Pick some wording there, whatever it is, so that it keeps coming back, but it's distinct from that other effort too, I guess, because you're splitting it up but you're also paginating the file tree list, so you're doing both things at the same time. Okay.
A
All
right,
then,
we're
done
here.
Thank
you
so
much
for
making
the
call
very
exciting
conversations.
As
always,
I
have
a
last
point
to
check.
If
this
meeting
was
useful
for
you
or
not,
please
answer
and
I'll
see
you
on
next
week's
issue.
A
Just
a
heads
up,
I
will
be
out
of
office
next
week
and
the
following
week.
So
the
next
two
weeks
I'll
be
out
of
office.
It's
a
holiday
here
in
Portugal,
so
I
will
find
someone
to
make
sure
that
the
meeting
happens
and
the
documents
get
updated
and
that
sort
of
thing
so
but
I
won't
be
here.
So
please
carry
these
on
push
the
efforts
forward
as
if
I
was
here
right
and
with
that.
Thank
you.
So
much
have
a
wonderful
weekend
and
I'll
see
you
next
week.