From YouTube: 2022-11-10 #3 Code Review Performance Round Table
A
Yes, right, we're on. Thanks for joining us. It's our third round table on performance in code review, and we'll get right to it.
A
First topic. Yeah, it's a returning topic. Should we tackle Matt's points at the bottom first? It's probably relevant to the rest of the meeting. So, Matt, do you want to quickly verbalize your point? I'll move it up.
B
Okay, yes. I love all of the ideas and feedback in these issues, but I'm getting a little worried about organization, about how we keep track of all of them in a good way. I want to keep the ideas coming, but I don't want to have to copy them every week from one issue to the next, with the same topics over and over. So I don't know if anyone has ideas on a better way to organize how we collect this information.
A
That would have to be a bot creating those issues. We'd have to set up a bot to do that, something like a retrospective collecting the issues that have been delivered. That's a possibility, if we can find someone to write that script, though.
D
Go ahead. I was just going to say, I suppose a link to some label as a search could easily get the same list, without having to edit the issue template or anything.
A
There's another thing for this conversation, and thanks, Matt, for bringing it up, which is that we still need a process to close them. So I made a bit of an attempt here, where the returning topics have a question: is there anything else that we need to discuss? Even if we haven't finished them, we can archive them, or close the issue so to speak, if we don't have anything else to discuss except to just work on it. So yeah, we have to keep in mind that we need to close them.
A
Cool, topics to be discussed then. We have a returning topic: it's about trimming the database, the records of the database. There's an issue already for it, or sorry, there's an epic for it.
A
Sweet, that's the returning one. The next one is a returning topic from last week. Thomas wasn't on the call because it was not his time zone, so we didn't get too deep into it. So, Tom, if you want to take the opportunity to present it as a new topic, and what we were thinking, so that we can wrap our heads around what you're trying to say.
D
Yes, I need to also make sure that I know what I'm trying to say. I can wing this a bit. So obviously one of the worst things we do on the front end is constantly tear down or re-render DOM, and if we could do that in smaller chunks, delivered incrementally, this kind of dovetails, maybe, with what Stanislav has been doing, and what we've all been doing with real-time stuff.

D
If we can deliver it in smaller pieces, to incrementally build up what we want to show, and then, from that point, once it's all there, only get small incremental pieces to update it as changes come in. Like saying: hey, here's one new comment that has been added to this, or hey, here's a new file that has been added, or whatever. We can do that with what I call deltas.

D
You have probably heard them referred to as operational transforms, where each piece of data is a whole document in itself and describes how to change the previous document into the most up-to-date data. That could really slim down not only how much network we're using, and our need to make requests to the back end, but it will also reduce how much, and how often, we need to update things in the browser.

D
There are a lot of questions here. For example, my second one is: how feasible is this? Is this even possible for git diffs? Because git diffs are themselves complicated, and computing deltas on top of that, or operational transforms on top of that, is even more complicated. Would this even make sense?

D
I haven't slept for like 40 hours, so I hope that makes sense.
A
Yeah, I think that's the usual thing: when you request something from the server and you say "give me everything since a certain date", that would qualify as what Thomas is talking about, right? Because you'd be saying: just give me the new stuff, I already have the old stuff, or I don't care about the old stuff. We do have pagination, but that doesn't really count because it's just on some items. But one of the topics this could apply to is definitely the comments.

A
So if I'm caching the comments in the browser, and I return to that page and I already have, say, a hundred comments, and I know that the last comment has a certain timestamp, I can just say to the back end: hey, can you give me every comment, or update to a comment, that happened after this date? Would this be possible?
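The incremental fetch described above reduces, in its simplest form, to a timestamp filter. This is a sketch for illustration only; the field name `updated_at` and the comment shape are assumptions, not GitLab's actual API.

```javascript
// Sketch of "give me every comment updated after this date".
// The `updated_at` field and comment shape are assumed for illustration.
function commentsSince(comments, lastSeenIso) {
  const cutoff = Date.parse(lastSeenIso);
  return comments.filter((c) => Date.parse(c.updated_at) > cutoff);
}

const all = [
  { id: 1, updated_at: '2022-11-01T10:00:00Z', body: 'old comment' },
  { id: 2, updated_at: '2022-11-09T12:00:00Z', body: 'new comment' },
];

// The client would send its newest cached timestamp; the server would run
// a filter like this and return only what the client is missing.
const fresh = commentsSince(all, '2022-11-05T00:00:00Z');
// fresh contains only the comment updated after the cutoff
```

In practice the filtering would happen server-side (a `WHERE updated_at > ?` query), but the contract between client and server is exactly this function.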
D
So one thing to note, and this isn't super important, because this conversation is useful no matter what, but one thing to note is that setting a time range, like "after a certain date", is useful, and, for example, we just noted that the notes endpoint does this. But a delta is slightly different, in that it says: we know you already have this data; here are the differences between what you have and what now exists. So say someone edits their comment.

D
You just need to get the piece that says: the comment that you already have should be edited by removing these lines and adding these lines. That's an operational transform, where it says there are a bunch of adds and edits and deletes to change the data that you have so it looks correct. Now, both are useful: the time-range filtering is super useful to just ignore data that you already have, and deltas are useful to say we don't need to request any data that you already have.
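The line-based delta just described can be sketched in a few lines. The delta shape here (`start`, `removeCount`, `addLines`) is an assumption for illustration, not a real protocol.

```javascript
// Minimal sketch of applying a line-based delta to a cached comment:
// the server sends only "remove these lines, add these lines".
function applyDelta(lines, { start, removeCount, addLines }) {
  const next = lines.slice(); // copy, so the cached original is untouched
  next.splice(start, removeCount, ...addLines);
  return next;
}

const cachedComment = ['Looks good,', 'but thsi line has a typo.'];
const updated = applyDelta(cachedComment, {
  start: 1,
  removeCount: 1,
  addLines: ['but this line has a typo.', '(edited)'],
});
// `updated` reflects the server-side edit without refetching the whole comment
```

Real operational-transform systems also handle concurrent edits and transform deltas against each other; this only shows the "apply a patch to what you already have" half of the idea.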
A
Okay, thanks, Thomas. On a related note: Kerry, Patrick mentioned something that you and he had discussed in the past, the idea of having a cache of the diff on the client, with a certain ID or fingerprint that could be calculated, and then every time the client comes back to the merge request, it just sends that: hey, is anything newer than this, or is it still the same?

A
Then we just don't fetch the diffs. There's an issue for that already created: whenever possible, render the MR diffs from client-side cache unless the merge SHA is outdated. The ID is the fingerprint. Sorry, does that ring a bell?
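The "render from cache unless the merge SHA is outdated" check boils down to a fingerprint comparison. The field names below are assumptions for illustration; the real issue may use a different fingerprint.

```javascript
// Sketch: decide whether the client-side diff cache can be reused.
// `mergeSha` stands in for whatever fingerprint the issue settles on.
function shouldRefetchDiffs(cachedEntry, serverMergeSha) {
  return !cachedEntry || cachedEntry.mergeSha !== serverMergeSha;
}

const cache = { mergeSha: 'abc123', diffs: ['...cached diff files...'] };

shouldRefetchDiffs(cache, 'abc123'); // false: render straight from cache
shouldRefetchDiffs(cache, 'def456'); // true: target moved, diffs are stale
shouldRefetchDiffs(null, 'abc123');  // true: nothing cached yet
```

The hard part raised later in the discussion is exactly the second case: the target branch HEAD moves constantly, so the fingerprint has to capture only what actually affects the rendered diff.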
E
Yeah, it's a specific application of what we were talking about last year: the idea that any time we, on the back end, detect that there is a change to the data that we would be sending to the front end, we just go ahead and build that data response out, even if we're doing it asynchronously, instead of waiting for a request for it. So that when the front end does get around to requesting the new comments it needs, or whatever, we have it ready to go. It's all pre-warmed.
A
Oh yeah. So what we're considering is then moving it to a structured database on the client, like IndexedDB. I raised the question because some of you might have ideas, so I'm raising it here. So if we're doing this kind of caching of the merge request in the client, then every time they come back they check: hey, is there anything new, now that we're comparing things with HEAD?

A
So this is a concern for this thing, and Patrick raised a couple of questions here, basically about assessing metrics to know how frequent this scenario would be. Does anybody have any thoughts on that? Would it be a problem that HEAD is always moving, or will we be able to distinguish which changes are different, even though master, the target branch, has moved?

A
All right, no worries. If you have any thoughts: whoever picks up this issue will have to research it and try to come up with a solution, if it sounds like an enticing possibility. I'm going to stop sharing the screen. Any more thoughts here?
E
I think it kind of pertains a little bit to my understanding of what you're talking about, Thomas, in that it's moving the ability to actually calculate diffs into the front end, in a way, right? If we're sending out deltas, if I'm sending out the delta of a commit to the front end, the front end needs to be able to basically be a git engine on its own, right? Is that the right understanding?
D
It could, yeah, but it doesn't have to. The back end could still be computing the difference between one commit and the next commit, and telling the front end: this is what we have, converted from the git format.
A
Yeah, I just dropped in a link to an implementation of git using WebAssembly, for no reason at all, just in case we ever want to go down that road. We've talked about this in previous sessions: we're trying to do less on the front end, so that the front end is faster. So it seems like that would be a fairly risky, crazy idea.

A
Sometimes crazy ideas work, so I'm not going to throw that away right away. But I do want to bring up something that Tim brought up in a previous call this week: if there's any opportunity for WebAssembly to optimize work on the front end, we should consider it, it's there. So you might be opening up a couple of options there. A couple of years ago this was impossible, so this could be one of them. I don't know.
E
Oh, I just wanted to underline that I'm in no way opposed to moving the work around, if we think there's a faster way, or to taking a flyer and just saying: well, let's actually build it out and see. I just want to make sure that we're actually saying: here's an identified performance bottleneck, right? So that we have a metric for understanding what we're actually improving, or what we're actually addressing.
D
Yeah, so the only way we would be able to do this is if we already have some sort of front-end caching of the data that we're given, whether that's with IndexedDB or something, where the last time we fetched the MR we kept a bunch of diff files, and now we can just get the delta to change that MR: files that have been removed from what we have, diffs that have changed slightly, and files that are still there.

D
Those kinds of things. So this would necessarily have to be a long way out. We can't use deltas until we have something to compare a delta against, and right now we're not doing that caching. So it's sort of a longer-range thing, which is why Andre is talking about how setting that time range is more feasible now, because we could say: we have an MR at such-and-such a date, what are the changes since then? Which we could do now.
A
Let's table this topic of deltas for later, once we advance a little bit more, and keep thinking about it as we go forward. Sounds good.

A
You had a point here that we didn't cover. Do you want to bring it up?
C
Yeah, my point was that we still didn't solve the client rendering problem. We are using a temporary solution of virtual scrolling, but it's still causing very high total blocking times for us, and I assume this approach also requires that we do a step on the client, so basically we re-render the diffs on the client. I'm mostly afraid of the increasing complexity and the demands for computation power of this. So this is my primary concern.
A
That's fair. We have other topics for solving that rendering stuff on the client, so thanks for the heads up. Does anybody have anything to add there before I move on? Should we close this topic, or keep it coming back next week?

A
Any objections? All right, it sounds like an accord.
A
All right, I'm having too much fun here. Next new topic: use a pure event system on the front end. Thomas, can you give us a brief?
D
Yeah, a glimpse. So this is just a gut feeling, based on all the times I've seen these systems implemented: I think we lose a lot of performance to Vuex, to the data flow going through Vuex, to storing data in memory, and to the re-render cycle when those things change.

D
And I think a lot of the value of a flux system like Vuex is the ability to issue an action and then have a result come out the other side. I think we could get a lot of performance back, and this kind of dovetails with Stanislav's concern about using more memory and having worse performance by adding complexity.

D
But I think we lose a lot of performance there as well. So I'm wondering if we could replicate the event system that we're using to smooth over certain troublesome issues with state, and use that same type of event system to eliminate Vuex's overhead, and maybe even Vue's vdom overhead. This is a pie-in-the-sky kind of thing, but I think we lose a lot of performance.

D
I think there is possibly tooling that is lighter, where you don't have the memory and CPU overhead of those other tools.
A
Yeah, I think the risk here is that we would then be stepping away from everything else that we're working with, so it kind of closes us into a more siloed technological domain. And yeah, it moves us away from Vue, which, again, is the standard. So, do you have any thoughts on this?
C
Yeah. Basically, Vue 3 has a very different mechanism of reactivity, and we're just hitting performance issues with Vue 2's. So if we complete our Vue 3 migration, it should basically cut our memory consumption in half and also significantly speed up the reactivity. So I would suggest, I guess, waiting for the Vue 3 migration to see if it really solves the issues that we have right now.
A
Yeah, that's a very good point. We've been talking about the Vue 3 impact of it all, so there's that. There's also, I mean, we're essentially locked into Vue with Vuex right now, because we're still not leveraging GraphQL. If we ever moved to a GraphQL world, then we'd be in an Apollo world, which would be a different kind of problems and things. But, Thomas, do you have anything else on this?
D
Yeah, I think we should definitely come back to it when we get Vue 3 in, because it will be interesting. And even with Apollo, there's just a different kind of overhead when you switch to Apollo instead of Vuex.
A
Okay, yeah. Again, if we're still leveraging the code we have right now in diffs, it's very much bound to Vuex, so that's a whole different effort, moving from Vuex to Apollo, which is a separate thread. So for now, putting a pin in it, tabling this, closing. Next, thanks, Thomas. The next one is highlighting. I think you proposed this, Jonah; can you give us a quick overview?
F
Yeah, I mean, some people decided that the Ruby gem we use to highlight is slow, which it kind of is, so they're using highlight.js. I don't think we can use highlight.js, but I'm just wondering if there's any different library or something that we could use on the back end to make the syntax highlighting faster in some way.
C
Recently I looked at what the core problem with highlighting on the back end was. I think the core problem is that you basically have to highlight all the files before you can send them, and if you could just highlight them file by file on the back end, and send them file by file, that should be at least a little bit better than waiting for the whole page.
A
Let me rephrase it: do you think that highlighting is not a problem on the back end in terms of speed, or is it a big problem?
E
That's a good question. I don't have any metrics, I don't have any numbers, to say it is or isn't, right? It only happens one time for each commit, or each version, to be compiled. That said, that kind of feeds back to what I was saying earlier: every time a change comes in, we should just generate it so that it's ready to go, rather than even have that first request be potentially slow.

E
The problem that we run into then is: where do you store it? Because that's a lot of data to be storing continuously all over the place. So maybe the complexity of splitting up the requests and streaming out that data as we go is cheaper, has less overhead, than just buying more storage space. But I don't know. I'm totally happy to put together a project where we actually benchmark a new highlighter against the one that we built in-house, but we should.
D
Do you know if, when a new version of an MR comes in, it recomputes the highlighting for every file in the new version, or just the changes in that new version?
E
Oh, that's a really good question. That's a really good question! My gut says that we do the entire thing, because why not, right? But yeah, for an exceptionally large MR that could be quite a bit of work.
C
On the back-end side, it really impacts very large files, files where you have tens of thousands of lines, etc. If that file is not pre-highlighted before we visit it, it can be really, really slow. So I guess that's part of the reason why the front-end team chose to do it on the client.
A
Yeah, but again, the difference there is that they're rendering the whole file, and if we did that in code review, then we would be slicing the file on the front end. We could get the raw file highlighted on the front end and then cut the chunks that we need, but I think that's a lot of overhead. So I'm stepping back a little bit and holding on to what Kerry said: we could have an initiative, an effort, to benchmark other options for highlighting. I think all of that could be useful.

A
But the first conversation is timely: what's the impact? If we made syntax highlighting incredibly fast, what would be the impact on the end result? Is it like 100 milliseconds, or three seconds? Because if it's only a couple of hundred milliseconds, we might have other, bigger fish to fry first. Sorry if you don't eat fish. So we could have an issue to discuss that, and then we'll go from there.
E
It's slow, but, well, what is slow? I don't know what we would actually attack here. So Patrick has an issue for that; we can dig that up. And, I mean, you asked about pre-warming the diffs; that's an issue that we've talked about doing for like a year, and the problem.

E
The only problem is that we are already the primary consumers of the Redis cache, and so, if we start generating more and more things to cache, the Redis team will absolutely blow their top at us. We just have to be aware of how much more data we'll be creating by doing that. But I love the idea, and I really want to do it.
A
That seems to be a pattern, re-warming the caches, that we've surfaced regularly in these sessions. All right, so there's an issue I linked there; it's the one that you mentioned from Patrick. Maybe we could add a comment there to add syntax highlighting to the things to be watched. So for now I think we can move all of this conversation to that issue, and then, if that issue becomes an epic and splits up into different topics, we can do that.

A
For now I would say that we can close this topic, and then we move that discussion to that issue. Sounds good, Joe? Yeah.

A
Awesome. Complexity: you were on this as well. By the way, I wanted to highlight that we're at the end of the previous time slot. I extended it for an hour, since we had more votes in favor than against, but if you do have to drop, that's okay, you can watch the recording later. We want to try a full hour, and then we're going to get feedback on it: whether you liked a whole hour, or whether we go back to 30 minutes. Okay, carrying on. Again, feel free to drop off. Complexity.
B
Okay, yeah.
F
I think the front end is kind of doing too much now, and we kind of need to take a little step back and see what the front end does. It's just naturally grown bigger and bigger, most of it from being able to do more; we've continuously added stuff into it. So I think it would be kind of nice if we just take a little step back, see what it all does, document it somehow, and see if there's any stuff where we should ask: do we actually need it, does it actually get used, can we remove it, maybe?
A
I think there's incredible value in documenting the whole thing, the back end and the front end, so that we can then take a look at it holistically, from high above, and just see where the requests can be optimized, and where the places are that we are not very efficient. And again, on the topic of WebAssembly, for example, we can definitely look for ways to optimize small pieces, small cogs in the whole system, and that could be an option. So for now I think we can put our efforts into documenting, creating that document of the artifacts on the back end and the front end. Matt, you, or someone from your team, could take that.

A
Next: virtual scroller. Phil, and then there is a recent comment from Stanislav. Either of you want to take this? By the way, I'm at 10% battery, so I'm going to try to find an outlet and turn off the camera, but you all go on and discuss the topic. So, virtual scroller: Phil.
F
The virtual scroll is nice, it got us to hit the metrics for a quarter, but there are kind of a lot of bugs that exist because of the virtual scroll, so it would be kind of nice to remove it at some point in the future. Right now I don't think we can; on large merge requests it just kills the browser. But it would be nice to get to the point where we could just get rid of it.
C
So we found some easy gains with that, which we can apply right now to improve the speed of the page, and also some difficult stuff, which is basically choosing a different strategy to render files. I think the core problem here, as I mentioned earlier, is using client-side rendering to render static content, which means rendering it once and doing nothing with it after that, because I don't think we ever change the contents of the files.

C
We only add discussions, and that's it. So I have checked how our unnamed competitor does it. What they do is basically server-side render something like 10 to 50 files, so the page shows instantly, and then they show the rest using infinite scrolling or some kind of lazy loading.

C
I'm not sure how we can achieve that at the moment; probably it's not really possible. Alternatively, I'm looking into a different option, which is rendering in a worker. That should be more efficient, because we won't be storing all this data in memory once we've rendered it.

C
So we request the patches, render them, and forget about them; they're removed from memory. That would remove our memory constraints and also remove a lot of CPU usage. So, in theory, it should reduce our total blocking times and also help us solve the problems with virtual scrolling, which are using search on the page and the anchor-link issues.

C
So hopefully that will produce some results, but I'm not sure yet. I'm not sure if it's feasible, because we already have some workers. We will see; I will keep you updated once I finish the investigation on that.
F
One thing to think about is suggestions, which are tied to the lines.
A
Yeah, but in this case we'll be grabbing that stuff anyway, so it's not a bad idea. All right, so I will put this topic under that investigation that Stanislav has been doing. It also seems like it's probably the best option we have identified so far to sort out a replacement for that particular virtual-scroller software.

A
Right, so we're not closing that topic, so we can keep bringing updates. New topic, then: merge request page SPA architecture, a single-page application architecture. An issue with the first attempt to resolve this is "lazily initialize overview and changes". Again, if you want to add something: is there anything that we need to discuss here?
C
I just wanted to know if this is by design. The problem here is that we actually render both pages, both the overview and the changes page, when we go to either of them. So even if you don't see the diffs, they are actually being rendered in the background; or if you don't see the merge request comments, they are still rendered and hidden with display: none. I was wondering: is it by design, and can we safely make these pages asynchronous?

C
So that we only render them once we actually change the tab. These are my main questions, because I think, if it's not by design, we can easily improve our performance just by using lazy initialization of these pages.
F
It still depends, because if there's a lot of discussions and we're just going to build them all on the page, it's still going to be slow. I mean, we should do it, it should be done, I'm not saying it shouldn't be done. I'm just saying, if there's a lot of discussions that we have to render, a lot of diffs we have to render, we need to figure out how to quickly change the tabs without blocking.
A
The browser, yeah. I'm not saying to do that only when the user clicks. I'm saying: first, render the page that they want to see, or at least start working on that, and only start working on the secondary one, the one that's not being shown, when the browser has calmed down. And if they do click right away, then they have to wait, but at least you give a good shot at rendering the first tab faster.
C
My idea to fix that was just using lazy initialization, so the logic will stay the same: both pages will be rendered in the background, but the invisible page will only be rendered once you go to the tab. So once you're there, it will be in memory and you can switch between them as much as you like. It will still be hidden using display: none, but it won't be re-rendered once you change the tab the second time.
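The lazy-initialization behavior described above can be sketched framework-free: a tab's content is built only the first time the tab becomes active, then kept around hidden rather than destroyed, so later switches don't pay the render cost again. This is an illustration of the idea, not the actual Vue implementation.

```javascript
// Sketch: expensive render happens exactly once, on first activation;
// afterwards, switching tabs is just visibility toggling.
class LazyTab {
  constructor(render) {
    this.render = render;
    this.initialized = false;
    this.visible = false;
  }
  show() {
    if (!this.initialized) {
      this.initialized = true;
      this.render(); // the one-time expensive work
    }
    this.visible = true; // from here on, only display logic changes
  }
  hide() {
    this.visible = false;
  }
}

let renders = 0;
const changesTab = new LazyTab(() => { renders += 1; });
changesTab.show();
changesTab.hide();
changesTab.show();
// renders is still 1: the second activation reuses the initial render
```

In Vue terms this is roughly what wrapping a tab in a `v-if` that flips to true once, instead of an always-rendered `v-show`, would achieve.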
A
Yeah, okay, this sounds like one to try. So, Stanislav, can we move on? Should we close this topic and move it to the issue? This one.
D
Yes. Are we skipping the changes counter? Because that's been fixed by Patrick, yeah. So this kind of goes back to something we talked about earlier in this meeting: how much can we just render and then have Vue forget about it? Because we have a lot that is reactive but where nothing ever changes.

D
For example, I think the diff lines are one of them. Once they're rendered, those things don't change, other than when you click "I want to add a comment" and a comment box gets added into the diff lines; the diffs themselves don't change. And existing comments, I don't think, are updated on the fly, which would be something we could do if we were changing things incrementally, but I think we just refresh the entire list of discussions.

D
And those are re-rendered. So I don't know that we need reactivity on those things, and I'm just kind of wondering how much we are paying in a Vue tax, in a vdom tax, when we just don't need that reactivity.

D
We talked a little bit about this, or, Andre, you and I talked a little bit about this, and I heard tell of us using v-once in the past, and it maybe not working out. I'd be interested to know: why didn't it work? Where did we try it?

D
And how aggressive is the v-once directive? Does Vue actually totally forget about that DOM and never look at it again? Would that save us a bunch of performance? I just don't know anything about what we tried to do. And obviously Stanislav's work could be relevant here.
A
Yeah, I don't remember exactly when and how we used v-once. Stanislav, you'll probably remember this; it was in one of our crazy performance efforts, and I think Justin worked on it, I'm not sure. I'll dig around, I'll dig up the issue and the effort we did, and try to do some archeology there. And to the point: if Stanislav's initiative moves the rendering into a worker, then the v-once question is moot, it's no longer an issue.
C
I'll add that there is a proposal to render Vue in Rails using isolates, which can help with this. We would actually render this on the server and never touch it on the client, so this could be very efficient for static content, as we described, and also we could utilize Rails caching with that, so it's a double benefit. I will link it in the agenda, so please see what is suggested; it might actually be really interesting.
A
Yeah. Particularly for Kerry and Matt on the back end, this is quite a daring proposal. I hadn't thought of this being applicable to this problem, so I'll have to go back and reread it. I suggest that as homework for everybody watching this. It's been a while since I've had homework. All right, thanks for that. So I'll look at the issue; that's a to-do for me. I think we should keep this topic open for at least another week.

A
That way we can get to the bottom of whether we should revisit the v-once for a more immediate gain, or if we just, what do you say, keep it open?
C
Yeah. I was concerned with the really high response times of the diffs JSON endpoint. Basically, locally I get like four or five seconds for this endpoint, and this is one of our main bottlenecks on the changes page, because we cannot render content faster than we get the data. So I was wondering if there is a way to cache the page so it responds much quicker, and what some blockers for that would be. Is it even feasible at all?
D
We may have some applicability here with our talks about using IndexedDB to basically cache everything, and a service worker to intercept requests and grab data from the database. Because if you are visiting an MR that you've visited before, the same version, we shouldn't even need to go to the back end to get that diff; we should already have it stored, and that would most likely improve the performance of it.
A
So I just want to add: this topic kind of came up last week with Patrick, and he did bring up that we recently moved some of the things that we were caching in Redis over to HTTP caching, to leverage ETags. Again, that was to not put stress on Redis, availability-wise and reliability-wise. So I feel like we are doing some caching on the client with ETags. Is that correct, Matt, or at least in the ballpark? Okay.
C
Do we have the HTTP caching already, like in the main product? I think so, yeah, it's live. I ask because I think it's not working in development mode; each time I refresh my page, it's not working. Okay.
A
Maybe inspect the headers of the responses and requests in the production environment, just to see how those headers are being used. Okay. Now, that doesn't mean we can't try all the crazy ideas. So, Kerry, Matt, anyone with back-end inclinations: if you have any dreams about it, jot them down in the notebook next to your bed and bring them to the next session. That goes for all great ideas.
A
That
way,
I
don't
know
I,
don't
know
if
some,
if
anybody
in
this
club
remembered,
but
we
used
to
have
a
section
in
source
code,
when
code
review,
The
Source
Code
about
crazy,
not
so
crazy
ideas,
and
it
was
exactly
that
sort
of
thing,
so
maybe
sometimes
they're,
not
bad
news
new
one.
So
we're
getting
close
to
the
end.
We
have
four
minutes
gifts,
metadata
pagination.
We
have
two
topics
left.
A
Do
you
want
to
go
quickly?
Send
us
up,
so
we
can
have
time
for
the
last
one
yeah.
C
I
was
wondering
if
we
could
split
up
metadata
position
metadata
for
Zoomers
request
into
pages,
and
would
it
give
us
any
performance
benefits,
because
we
can
have
like
a
thousands
of
files
changed
on
the
mesh
request
and
we
still
have
to
wait
for
the
for
the
metadata
for
this
1000
files.
I
was
wondering
if
we
can
get
metadata
like
for
the
first
100
files
to
render
them
faster.
A
That always depends on the size of those pages, the pagination. If you have an MR with 50 files, you might as well just get the whole thing in a bunch. If it's like 50,000 files, or okay, a hundred, no, a thousand files changed, you might as well just paginate for the first page, above the fold so to speak, and then grab the rest. Maybe just splitting it could be an option, so you can start rendering.
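The "small MRs in one bunch, big MRs paginated" trade-off could be sketched like this (an illustrative sketch only; the thresholds and names are made up, not anything decided here):

```python
# Sketch of size-aware metadata pagination: small MRs come down in a
# single response, large ones are split so the first page (roughly what
# is above the fold) can render before the rest arrives.

def plan_requests(files, page_size=100, small_mr_threshold=200):
    """Return the list of pages to request, first page first."""
    if len(files) <= small_mr_threshold:
        # Pagination overhead isn't worth it for a small MR.
        return [files]
    # Large MR: first page renders immediately, the rest streams in after.
    return [files[i:i + page_size] for i in range(0, len(files), page_size)]
```

With a thousand changed files this yields ten requests, but the UI only has to wait for the first hundred files' metadata before it starts rendering.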
A
I'm willing to entertain that. It's worthwhile opening an issue for research, or a spike or something like that, like an issue to investigate splitting metadata, and have further discussion before we schedule this piece, just in case, so we don't start doing something without any potential wins. Sounds good, open an issue. Somebody please take that, volunteer. I'm gonna move to the last one. Kai, is this yours, suppressing?
G
I put it there, one of the thoughts that, you know...

G
We've talked about this on the product side before: suppressing things in the merge request, which then would reduce the overall payload. A lot of big files tend to be things that people don't actually need to review in a merge request all of the time, and so I just linked to an epic and then some existing art, a gem that could be used to maybe facilitate some of that, as a way to think about how we could gain performance benefits with sort of a different approach to this.
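A minimal sketch of what automatically detecting not-so-relevant files might look like (the patterns below are hypothetical examples; a real implementation might lean on linguist-style generated-file heuristics, `.gitattributes`, or the gem linked in the epic):

```python
# Sketch: partition a diff's files into ones rendered up front and ones
# suppressed (collapsed) to shrink the initial payload. The pattern list
# is illustrative, not a real detection policy.
from fnmatch import fnmatch

GENERATED_PATTERNS = [
    "*.min.js",            # minified bundles
    "*.lock",              # dependency lockfiles
    "package-lock.json",
    "dist/*",              # build output
    "*.map",               # source maps
]

def is_probably_generated(path):
    return any(fnmatch(path, pattern) for pattern in GENERATED_PATTERNS)

def split_payload(paths):
    shown = [p for p in paths if not is_probably_generated(p)]
    suppressed = [p for p in paths if is_probably_generated(p)]
    return shown, suppressed
```

How much this wins depends entirely on the project, which matches the point below: a repo that commits lockfiles and build output benefits a lot, one that doesn't sees nothing.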
A
I
know
that
this
will
depend
on
the
project
you're
doing
some
project
you'll,
never
see
any
benefits.
Some
projects
will
always
see
benefits
depending
on
the
work
they're
doing
so
I
feel
like
this
is
a
definite
win.
My
only
concern
here
is
every
time
we
attempt
it
to
cut
files
in
the
past.
The
community
didn't
like
it,
but
those
were
meaningful
files.
These
ones
would
be
not
so
relevant.
Files
automatically
detected
right.
G
Yeah
I
think
generally
and
I
think
the
the
key
is
like
making
sure
the
user
experience
allows
you
to
like
if
you
want
that,
request
it
and
get
it
loaded
right,
like
a
lot
of
our
suppression
historically,
has
been
on
demand
suppression
without
a
way
to
like
actually
get
it
back.
Right
like
it
is
just
gone
from
the
death
and
so
I
think,
like
that's,
not
helpful,
so
being
smarter
about
that,
like
not
loading
that
data
initially
but
allowing
users
to
get
to
it
is
another.
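The "suppress initially, but keep it retrievable" idea could be sketched as follows (hypothetical payload shapes, not GitLab's actual API):

```python
# Sketch: suppressed files are sent as stubs (path plus a flag) rather
# than dropped entirely, so the client can fetch any one back on demand
# instead of the content being "just gone from the diff".

def initial_diff_response(diffs):
    """Build the initial payload, replacing suppressed diffs with stubs."""
    out = []
    for d in diffs:
        if d.get("suppressed"):
            out.append({"path": d["path"], "suppressed": True, "lines": None})
        else:
            out.append({"path": d["path"], "suppressed": False, "lines": d["lines"]})
    return out

def expand_file(diffs, path):
    """On-demand retrieval of a single suppressed file's diff lines."""
    for d in diffs:
        if d["path"] == path:
            return d["lines"]
    raise KeyError(path)
```

The initial payload stays small, but clicking "expand" on a stub can always recover the full content, which addresses the historical complaint about suppression being one-way.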
A
Okay, yeah, I feel like it has potential. So from my understanding, it's basically the backend having a way to check each file, whether it's worth rendering or not, and then skipping it if not. I feel like that could be beneficial; it doesn't seem like a huge effort. Okay, I could be totally wrong on this, of course, but yeah, it's worth taking a look. So there are already issues for this, right? Okay, yeah.
G
There
isn't
there's
an
epic
that
I
linked
to,
and
then
it
has
issues
underneath
one.
A
I
mean
my
internet's
a
bit
chubby.
Sorry,
yeah
I,
don't
have
anything
about
this
I
mean
we
should
consider.
It
I
think
it's
about
priority
honestly.
G
Say
I
put
this
in
here
because
I
think
it's
worth
I
know
there's
a
ton
of
ideas
in
here
and
so
from
my
perspective,
I'm,
not
necessarily
pushing.
This
is
something
we
should
do.
I
think,
given
that
we're
looking
at
timings
and
other
things,
I'm
saying,
let's
be
open-minded
and
sort
of
look
at
look
at
this
as
well
as
part
of
that
like
maybe
this
is
an
option
so
as
we're
looking
at
things
like
keep
it
on
your
radar,
I
think
that's
for
carrying
the
back
and
forth.
A
So,
okay
I'll
keep
it
open
for
another
week,
at
least
to
keep
to
capture
intentional
thoughts
on
this
in
the
next
week's
issue
and
then
we'll
go
from
there.
Thanks
for
bringing
it
up,
people
were
two
minutes
over
time
from
the
extended
period.
I
appreciate
you
all
sorry
for
taking
so
long,
and
please
answer
the
last
point.
Was
this
meeting
useful
and
I'll
see
you
in
the
issue
for
next
week?