From YouTube: 2023-02-02 Code Review Performance Round Table
A
Welcome, everybody, to yet another performance round table for our group, Code Review. I'll go straight to the agenda, if I had it in front of me... sorry, one second. Okay, so first, before we go over to topics, I wanted to give you a heads-up, sort of an FYI, even though there's still some work that we want to add: Carey and Phil have already finished adding the technical documentation for the diff generation to our documentation about the computation base.

A
All right, speechless. Great, let's move on to topics then. The first is an increase of observability of performance across code review. Matt, do you have a little comment?
B
Yep. At the last meeting you talked about how Patrick added some dashboards to our Grafana dashboard, and with those there were three areas to investigate. So he created those issues; they are on the issue board. There's rendering time, unfoldable positions, and write cache time. Of the eight, those three stood out as higher, so I just wanted to call out that those are there to be looked at, discussed, and planned.
A
Okay, so I'm guessing that these will take the shape of spikes in upcoming milestones or something.

A
They're in the agenda; I'll grab them. Thanks.
A
The question then becomes: do we need to keep this in refinement, or should we close it? It's the active topic of discussion at the moment, so I feel like it will move over to those issues.
A
Closing this... oh, sorry, removing this issue from refinement. All right, thanks, Matt. Anyone have any other thoughts on this one, or can we move on to the next one?
A
Okay, moving on to the next one. Thomas, I'm just trying to call you... hey, Thomas. So we're moving on to the micro code review front-end proof of concept. Thomas, do you want to quickly give an intro and share some thoughts on that? We haven't really had it presented by you yet, so do you want to take the floor?
D
Obviously, there's the goal of trying ideas to see what we can improve in code review. So one thing I was thinking was: what if we did the smallest possible thing we could do? What's the least we could do for showing MRs? And so I built a small code review app, or a diff review app, I guess, using the lightest tools I know of.
D
I actually don't have the agenda open, so I assume it's linked in there, but there's an app that allows you to OAuth in, grab any MR that you're allowed to view, and show the files in it. It uses the patterns that I would expect from a good app, I suppose: event-driven, with UI rendering totally separated from application code.
D
It uses IndexedDB to store things that you've seen before, so that if you load them up again it's almost instant, and then lazy fetching of that same thing: it'll load locally and then lazily fetch from the API. It uses really, really light DOM rendering, so no virtual-DOM-type stuff, and it uses web components sort of top to bottom.
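The load-locally-then-lazily-fetch pattern Thomas describes can be sketched roughly like this. This is a minimal illustration, not the POC's actual code: an in-memory `Map` stands in for the IndexedDB object store, and a caller-supplied `fetchFromApi` stands in for the GitLab API call.

```javascript
// Local-first load: resolve instantly from the cache when a copy exists,
// and always kick off a background refresh that updates the cache and
// notifies the UI. All names here are illustrative.
function loadDiffs(mrId, cache, fetchFromApi, onUpdate) {
  const cached = cache.has(mrId) ? cache.get(mrId) : null;

  // Lazily fetch the latest version from the API in the background.
  const refresh = fetchFromApi(mrId).then((fresh) => {
    cache.set(mrId, fresh);
    onUpdate(fresh); // let the UI re-render with the fresh data
    return fresh;
  });

  // Render immediately from the cache if we can; otherwise wait for the API.
  return {
    initial: cached !== null ? Promise.resolve(cached) : refresh,
    refresh,
  };
}
```

With a cached MR, `initial` resolves immediately (which is why the app feels "almost instant"), while `refresh` settles once the API responds.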
D
Does that seem right? That's what I did. So I'm interested in hearing what people think about it. Yeah, it feels really fast. I think for some people it feels like it's broken, because you'll load an MR and it's almost instant, and you're like: this can't be right. So I'm interested to hear your feedback, what you think about it.
A
Yeah, so I'll just quickly demo what you're talking about. So this is what we have so far, and there's a bunch of caveats that Thomas describes in the snippet. Right now I already loaded this MR; I'm just going to load something else, which takes things from the public API to dump into the... I don't even know if this merge request exists, so... it looks like it doesn't. There...
A
It
is
so
there
it
is,
and
if
I
go
back
to
that
one,
it's
just
instant,
that's
what
it
was
talking
about,
because
it's
loading
things
from
local
things,
which
would
eventually
be
the
reaction
of
the
app
if
we
were
to
load
it
from
like
navigated
again
to
something
that
we
just
already
having
cash.
A
Of course, the usual use case would still have the comments being added and a bunch of other things, and I think you're capping it at what the API gives us with these diff files, I think, yeah.
A
Right now there's still a lot of stuff missing, so for us to compare would be apples to oranges, which we should be a bit wary of. But as an experiment, a start of a conversation, I think it's interesting, and we can have a couple of thoughts derived from here. So yeah, anybody have any more thoughts or questions about this at this moment?
E
I guess my question is: what do we want from that POC, from this app? What exactly do we want to extract from this? Like, what are the findings, for example?
D
Is it possible to get really, really fast diff rendering, sort of, period? Like, is that possible? And then from there we can make, I suppose you can call them compromises, or we can decide what we want to take away from it.
D
If you watch the video, my approach was a very hard-line, performance-budget-first one: bundle size, render time, those kinds of things. And then from that performance budget I took a couple of other things, like no virtual DOM, no flux, those kinds of things.
D
"Oh, we're not going to use Vue?" We're obviously going to use Vue. So what we take from this, I hope, is that there are tools that can render these things really fast without totally rethinking the architecture. This still loads files and passes them into components, and then the components render the file; the highlighting is happening on the front end. So we can still use concepts that we have already explored, or are already used to, like individual component rendering, that kind of stuff. And I guess the takeaway is: I would like us to consider what we can...
D
...what we can use from this app that can fit into our current architecture. Andre and I have talked about this before, and like, one...
D
One thing we could do is say: well, what if we pass our diff files into a component like this, where you just give it a file and it highlights it on the fly and instantly renders it? I'm not sure you would see the exact same performance benefit, because a lot of the speed comes from the fact that we're loading from IndexedDB first and that kind of stuff. But that might be a good start to say:
D
We
know
that
we
can
do
this
without
a
lot
of
like
back-end,
highlighting
time,
because
we've
done
it
it's
working
now
we
need
to
like
resolve
the
the
problems
with
highlight.js
or
whatever
so
and
that's
the
whole
other.
That's
a
whole
other
discussion,
which
is
like
how
they're
going
to
front
end
is
complicated
and
May
Fail,
which
I
didn't
even
talk
about
in
the
caveats,
because
it's
like
it's
kind
of
outside
of
the
the
POC,
but.
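The "give a component a file and let it highlight on the fly" idea can be illustrated with a deliberately tiny, pure renderer. This is an assumption-heavy sketch: real highlighting would come from something like highlight.js, and the output would be rendered by a web component or Vue component; here we only classify added/removed lines and escape HTML.

```javascript
// Escape characters that would otherwise be interpreted as HTML.
function escapeHtml(text) {
  return text.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
}

// Turn raw unified-diff lines into an HTML string a component could render.
// Entirely front-end: no server-side highlighting involved.
function renderDiffHtml(diffLines) {
  return diffLines
    .map((line) => {
      const kind =
        line.startsWith('+') ? 'addition' :
        line.startsWith('-') ? 'deletion' :
        'context';
      return `<span class="line ${kind}">${escapeHtml(line)}</span>`;
    })
    .join('\n');
}
```

A component that wraps a function like this can re-render a single file in isolation, which is the "individual component rendering" concept mentioned above.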
A
Thanks, Thomas. I'll add a few thoughts. A couple of things that Thomas has built here are replicable in our current application, like the storage in IndexedDB and loading things from IndexedDB. We can take that approach and try to mimic it in our app as it exists today. So bits of this app are detachable and could be put into our current app.
A
We
wouldn't,
we
wouldn't
solve
the
problem
of
attacking
that.
We
wouldn't
solve
the
problem
of
rendering
times
and
all
that
stuff,
but
we
still
probably
solve
the
the
instant
loading
of
data.
So
that's
one
part
that
we
could
take
away
and
experiment
with.
There
was
another
perspective
on
this.
A
...that I've been talking about for a while, which is that for the longest time the merge request diff app has been detached from all the other diffs on GitLab, right? And one of the approaches that Thomas took here, with, you know, a quicker bootstrap using native custom elements (web components), is interesting from that perspective. It's very quick to bootstrap, but then it gets to a point where, at a certain point, we need to start...
A
...reusing things that we already have in Vue. So the approach that we could consider here is: how can we take the benefit of really quick bootstrapping from the native part, and then evolve to a point where we bring in the Vue interactivity and reactivity when we need it? Something that you've also alluded to.
A
Which is: what if we had a partially statically rendered part of the app, and then we add interaction when we need it? That could lighten the load of the application as a whole. There's even a third scenario here, which is the concept of a diff viewer.
So if we start building something very simple, something that is able to render diffs generically, without building it too specifically for merge requests, we might end up with a solution for the diffs elsewhere.
A
Mainly the repository commits, the compare-branches page, the new MR form, or editing an MR... sorry, the edit-MR page doesn't have it; the new MR form has the changes at the bottom, right? So all of these places are using diffs, but they're not up to date: they're old code bases that are not up to date with what we're doing on the new diffs app.
A
So this experimentation from Thomas touches on a bunch of things, and right now I think it's just about experimenting with it, probably doing some profiling on the performance side of things, to see whether it's worthwhile to consider extracting some parts of it into our app. Right now I don't know; the answer is that it's experimental, so we want to see where this leads. But I have another thought that I'll keep for my later point here in the agenda. Stanislav, you're writing?
E
Something, yeah. Thomas, you mentioned that we are using IndexedDB to store merge requests. I have a concern: if we open a merge request that was recently updated, for example, we could be seeing the previous version, and if it takes an unreasonable amount of time to update, a reviewer might actually lose all of the context while reviewing the MR. So I'm curious if that is a consideration, at least.
D
Yeah, so I wrote a quick answer in the doc. There are a couple of concerns with IndexedDB. One is showing kind of old data. Of course, the goal is that loading an MR is almost instant, and you'll see when you use it that it is almost instant, so we can sort of get around showing stale stuff by showing what we have, and then, in a nicer, more production-ready UI...
D
...just show something that says: hey, we're showing what you've seen before, content that we know exists, and we're fetching updates, looking for new content. And potentially disable commenting while that's happening, so that you can't comment on a file that maybe doesn't exist anymore, or something like that.
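The "show what we have, but limit interaction until it's confirmed fresh" behavior boils down to a tiny state decision. A sketch, with hypothetical state names:

```javascript
// Decide what the review UI should do, given whether a local copy exists
// and whether the background refresh has completed. Names are illustrative.
function reviewUiState({ hasLocalCopy, refreshDone }) {
  if (refreshDone) {
    // We know we're showing the latest content: everything is enabled.
    return { view: 'latest', canComment: true };
  }
  if (hasLocalCopy) {
    // Show the cached copy instantly, but disable commenting so nobody
    // comments on a line that may no longer exist.
    return { view: 'stale-copy', canComment: false };
  }
  // Nothing cached yet: plain loading state.
  return { view: 'loading', canComment: false };
}
```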
D
That's one option: while we're fetching updates (hopefully in less than a second; hopefully we show what you've got in under a second), and later, when we get updates, we can kind of interrupt: hey, this is from local, and you probably shouldn't interact with it until we know it's the latest. There is also, and I haven't ever discussed this with anyone before, but...
D
There is also the opportunity, with IndexedDB holding everything locally, to allow commenting or whatever on local copies and then resolve that with the API later, which is basically how offline support works. As soon as we want to go offline with any of this stuff, you have to do that anyway: you need to be able to resolve things with what happened behind the scenes. But that's a whole other discussion. There's another concern with IndexedDB, too, which is:
D
We are caching essentially everything that you look at. So if you look at a confidential MR or a confidential issue, for example, and then you no longer have access to it, we need a way to get rid of that from your cache, and that's a really hard problem. The easiest option is probably just to never cache those things. But then you have the opposite problem...
D
...where, if you're looking at something public and it goes confidential, it's in your cache. So there's some weirdness there with security. But I think most apps that store data resolve this by just saying: we're going to show you what we have and maybe limit a little bit of functionality while we fetch updates, because it's better for you to see something than to just wait, wait, wait.
D
Of course, if we can keep it really fast, then the benefit there is smaller. If it's always a one-second load, then that's probably not bad, but instant is better, of course.
C
Is it useful to show old content in this case? If I'm reviewing a merge request, send it back to the author to change it, and then go back to re-review it, I don't genuinely care about what it used to be; I would pretty much prefer to care about what it's going to be. Showing me stale content that then all gets updated with the new content seems kind of, a little bit...
A
So the thing is, that's exactly the most usual scenario: as the reviewer, you could come back to an updated MR. But if you're the author, you might come back to the MR many times when there are no updates yet. You might be addressing the comments, just opening it up again, or going back to the discussion without any commits happening. So there are a bunch of scenarios where the MR hasn't changed.
A
What we're doing right now is always loading it from the server. In those scenarios where IndexedDB has a copy, we wouldn't even need to request from the server. And I'll get to that in my comment right down below... actually, I'll go there now, because one of the concepts that the backend has been talking about is having a cache key that we can use.
A
So basically, the page comes with a cache key already built into it, and the front end can use that to verify: is the copy in IndexedDB using the same cache key as the one the server just gave me, telling me that that's the latest cache key available?
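That check is essentially a three-way decision. A sketch of the idea, assuming a hypothetical shape for the local entry (the backend's actual cache-key mechanism isn't built into the POC yet):

```javascript
// Compare the cache key stored next to the IndexedDB copy with the latest
// key the server embedded in the page. The entry shape is hypothetical.
function decideDiffSource(localEntry, serverCacheKey) {
  if (!localEntry) return 'fetch';                  // nothing cached yet
  if (localEntry.cacheKey === serverCacheKey) {
    return 'use-local';                             // local copy is the latest
  }
  return 'render-local-then-refresh';               // stale: show it, then update
}
```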
A
In that case, we can just use the local copy and trust that it's the most up to date, right? So in that scenario, we'll be much faster than whatever we can pull off from the server. Now, in the scenario where it has been updated since, what we can totally do, and that's a choice on our side, is: we can choose not to render the local copy, that's one option; or we can render the local copy, show a sign that we're updating it, and then update the MR in real time. Yes, it will come up...
A
...the topic of layout shift will come up: the layout of the MR changing. If you just did a commit that rewrites everything, it completely changes the MR. But more often than not, the additional commits are additive, tweaking parts of a file, not the entire MR. So in that sense, I think the live update of an MR shouldn't be super shocking or jarring.
A
Given the benefits we'll get, a lot of scenarios will benefit from instant loading of the data. Granted, it's up to us to decide whether we want to show something outdated while we update; we can detect that, not render the local copy, and wait for the update. But the key here... I don't think Thomas has built the cache key concept into this thing, but if we do have that available from the page served by Haml, we can use it as a check to see whether it has changed.
C
No, so I think the cache key makes sense, and that'll make it better. I was just thinking: if we're rendering stale content, and I'm literally in the middle of reading a line on the diff file, and then all of a sudden this line changes, that doesn't seem like the best experience. We pull the rug from under the reviewer.
A
But I do want to show this: when I saw the MR loading instantly, even though I know that it's capped in the number of files it loads, so it's not really a realistic use case, that was a bit of an awakening thing. And this POC kind of allowed us to see it naked, without the fluff of GitLab's bundle and all the other things happening around the MR. And that could probably get us back to trying this out again, because we've tried this in the past. Stanislav, you had a question?
E
Yeah, and that's a perfect segue to my next question, which is figuring out what exactly is stored in the database. Because right now our diffs API has an HTTP cache, like any other API, and quite recently a Cache-Control header was added there. So there is already a cache key that can be passed to Haml and used to make requests. So I was wondering what's behind the decision of using IndexedDB and, for example, not reusing the HTTP cache that we already have.
D
We could get a faster experience from... I think if we could replicate something like this, or the speed of this, with HTTP caching alone, that would be fantastic. But I suspect that the data we store in our HTTP cache is a lot less persistent than this, I guess.
D
I've never seen a speed increase based on an HTTP cache, and I don't know if that's just because maybe we're really eagerly clearing the cache, or... I don't know what the deal is, but I've never seen our pages load like this.
E
I can answer that for you, because there was a recent investigation, and basically there are two things. The first thing is that our ETag cache actually executes a lot of code on the Rails side, so we have to wait for all the Rails part to execute before we can send a 304 response. And the second thing is that Cache-Control is broken in Chrome. We tested it recently, and for some reason Chrome doesn't cache responses with Cache-Control; we couldn't figure out why that is, but in Firefox it works perfectly.
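For reference, the ETag round-trip being described works like this: the client echoes the stored ETag in an `If-None-Match` header, and a 304 Not Modified response means the cached body can be reused. This sketch shows the generic HTTP mechanism, not GitLab's implementation:

```javascript
// Build the headers for a conditional (revalidation) request.
function revalidationHeaders(storedEtag) {
  return storedEtag ? { 'If-None-Match': storedEtag } : {};
}

// A 304 carries no body, so the client must fall back to its cached copy;
// any other success status replaces the cache with the fresh body.
function pickBody(status, cachedBody, freshBody) {
  return status === 304 ? cachedBody : freshBody;
}
```

Even when the server answers 304 quickly, the client still pays a full round trip, which is part of why the IndexedDB-first approach can feel faster.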
A
Coming back to that question: I think both are valid approaches to make the app faster, right? We've debated it. Because if you have the cache key, and it's cached, and the ETags are working and they're fast, sure, the requests will just be instantly returned and we can work with that. One of the reasons behind all this consideration of IndexedDB is also that it's structured data that we can make decisions on, whereas the HTTP cache...
A
...what the HTTP cache will give us is: if we make a request and that request is already cached, it just gives us the response back. We can't really work with it and make decisions based on it. And second, this can open the door to being able to render an MR offline, which we know might not be top of mind, but it would be a great resilience factor: digital nomads out and about would be able to review on the train, on a flight, stuff like that. It would be a neat thing to do.
A
A "neat", trademark, kind of thing, so that we could render an MR offline from the browser's cache. And the other thing is the structured data, which allows us to make some more decisions, like we were talking about now: having the cache key built into it and deciding whether to load from local or not. I think that would be a trickier thing to do with HTTP caching. But I'm just registering that here.
D
Also, just as a side note, the data structure of this application is inspired, or influenced, a lot by the API, and when I say "the API" I mean the public API, the one that you can go off to.
D
We get very, very different data from the public API than we do from our internal application, and while we don't necessarily have to use it in IndexedDB, there are a lot of ways we could improve the efficiency of our data fetching if we were to use maybe a new API endpoint, expand the ones we've got, or just use internal ones. The data that comes back from the public API sort of influences the structure of the data here.
A
Okay, I'll move on to my next point, for the sake of time. One of the things that this demo brought to mind was the difference between what Thomas was talking about now, using the public API, versus us using batch diffs. Batch diffs delivers the lines structured, with metadata per line, where each line has the line hash, the SHAs of the commit, and everything. So it has a bunch of metadata for each line.
A
I
can
remember
if
that's
true,
but
it
has
position
information,
a
lot
of
stuff
and
the
public
API.
Doesn't
the
public
API
delivers
a
diff
kind
of
blob
thing
that
we
just
dump
in
one
go?
And
yes,
if
we
evolve
this
POC
to
attach
comments
from
those
lines,
we'll
probably
reach
a
dead
end
right,
we
probably
need
to
find
a
solution
to
how
do
I
match
a
comment,
a
note
to
a
line,
if
I
only
have
these
like
the
full-on
content
right
instead
of
structured
lines.
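This is the crux of why per-line metadata matters: with a line hash on every rendered line, attaching a note is a plain lookup, whereas a raw diff blob has nothing stable to match against. A sketch with hypothetical payload shapes, not GitLab's actual schema:

```javascript
// Index structured diff lines by their line hash. Each line is assumed to
// carry { lineHash, oldLine, newLine } metadata, as a batch-diffs-style
// endpoint could provide. These shapes are illustrative only.
function indexLinesByHash(lines) {
  const byHash = new Map();
  for (const line of lines) byHash.set(line.lineHash, line);
  return byHash;
}

// Attach a note to its line via the hash; returns null when the line is
// not part of the rendered diff (e.g. an outdated position).
function attachNote(byHash, note) {
  const line = byHash.get(note.lineHash);
  return line ? { ...note, oldLine: line.oldLine, newLine: line.newLine } : null;
}
```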
A
We could avoid having to pass all that down the wire until it's needed, whether to attach a note, reply to a comment, something like that. That was one of the questions. And then, something that we've briefly touched on several times: is this somewhere we could use WebAssembly, to receive the blob, pass it to some application that we run with WebAssembly to annotate it properly, and turn it into a workable diff where we can touch things?
F
Yeah, I think this is sort of tangentially related. I'm curious... I agree with Andre. When we first saw Thomas's stuff, and I had been thinking about it because Andre had sort of planted that seed, it is really interesting that you just get the diffs sort of immediately. And I'm sort of wondering: in your normal workflows, I expect that getting the diffs, understanding the lay of the land, is more important than immediately being able to interact or comment on them. But that's conjecture on my part, so I'm just sort of polling the massive audience of three here to see if that makes sense.
Like, if you went to a merge request and there were diffs on the page, but you couldn't necessarily immediately leave a comment, like commentability came, I don't know, some moments later, does that feel bad? Or, like, how...
D
I wrote a quick comment on that; I can verbalize it. Normally, if the comments weren't immediately available, that would be fine. Like, I load the page, and I'm going to need at least three seconds to start commenting, so if that's how long it takes to load comments, that's fine, or maybe even ten seconds. With one exception: if I click "so-and-so commented on a diff", and it goes to the diff with the comment hash...
D
My experience right now is that that's a really unreliable, or at least slow, experience. The diffs load in, sometimes in pages, and then the discussions load in, and for me, sometimes the discussions never load in; sometimes I just have to refresh the page and then the discussions get assigned. So there seems to be a bug there.
D
If I'm linking to a comment, I want that to load first, essentially, or the diff that the comment is on. So I think there are kind of two modalities, for me at least. One is: I come in and I'm going to be commenting, and then it needs to be available within a second or two or three or whatever. Versus: I'm coming in to review someone else's comments, and that's when it needs to be available almost as soon as possible, probably.
E
Yeah, and I'll add that I share the same experience here: if there's a comment in the MR, you would prefer to see it immediately, as soon as the diffs are shown, because at some point you can scroll through the files and just miss one comment, and you will never be able to see that it actually popped in. So yeah, that's really important, and I think there's a correlation there between merge request size and time to comment.
A
I'll add that in the past we've heard from some people, especially in the context of really large MRs where we're, you know, truncating large files and all that stuff; we've sometimes seen people say: hey, just give me the option to glance at the raw git diff. And sure, I think we have the download-patch option, but not on the UI. And then the second point is:
A
We
might
be
very
used
to
the
UI
that
we've
had
for
the
past
years
of
having
comments
in
between
the
disk
lines.
But
as
Kai
was
talking
about
this,
it
kind
of
reminds
me
of
the
UI
that
we
have
on
the
design
management.
Where
we
have
things
on
one
side,
then
we
have
markings
for
the
discussions
and
then
on
the
side,
pane
sort
of
like
an
email
client.
We
have
like
the
list
of
messages
and
then
we
have
the
actual
message
on
the
second
second
pane
sort
of.
A
My idea is just: what if you have the diff on one side, and on the second half of the screen the discussions, linking to the highlighted line? I think some diff tools do that; I think I've seen this somewhere. We wouldn't have the comment threads getting in between the diff lines, because then you're hiding the code that's around them. There are downsides to that as well, versus the way we do it.
F
I was gonna say, I wasn't necessarily thinking of a different experience. I was thinking, like, if the diff version of...
F
No, no, if it just loaded, like, Thomas's version of the diffs, and then the line numbers and comment icons come later, right? If you get the diff content, does it allow you to start reviewing and then go back and do commenting later? I think if you link to an MR that already had comments, I would agree that the expectation that the comments are immediately visible makes sense. I'm talking more about the, like...
F
Could the interactive pieces come later? The interactive stuff is sort of the front-end-heavy part of us loading the merge request; that's sort of how I took it. Could we get all the really fancy things we do, but just get them later, and give people this immediately, so they can start working on their stuff? I don't know what that looks like in terms of user experience.
F
I think it feels right. Even in the new commenting stuff that's being looked at in the restructuring effort, comments are still very much inline with the diffs; they're chunked slightly differently, but it's still very contained to the diff. I don't necessarily want to change that; I think that's sort of standard and makes sense. But could interactivity come later in a way that was valuable?
F
That, to me, feels like a big learning: it's cool to see this be really fast. I think tons of people would be fascinated if our diffs appeared as fast as they do here, even if they couldn't immediately comment, because they could start looking at and scrolling the page, and then all the other stuff would come later.
C
I guess it depends on how quick that "later" is. If it's a small merge request, it's taking a few seconds, and I just want to write a comment, it's kind of annoying. If it's within a second, I don't really care that much. I think getting it faster and then displaying the rest relatively quickly after that is okay, or making it interactive.
F
We experience small merge requests on a very regular basis, but I talk to customers who... I think Phil and Thomas, you'll remember when we did our large merge request stuff and we had the 95th percentile and 99th percentile data. I talked to customers who are consistently above our 99th percentile data; that's just the way it works for their development process. It's always 200 files and 10,000 lines; there's nothing...
F
There is no... that is small for them, right? A hundred percent of the time, that is their smallest MR. So I think it's important to keep in mind that more of our customers, especially more of our larger customers, are like that than are "hey, somebody changed two files in this merge request and it only touched five lines". That's very uncommon, I think, for the way a lot of these bigger enterprises work.
D
One of the things that we talked about earlier, I mentioned: you know, I never wanted this to replace all of GitLab, but maybe we could just drop the diff viewer in. This was actually exactly one of the things that Andre and I were talking about: maybe we could render just the diffs really fast and still have our regular...
D
I think a big part of the speed benefit you see is the IndexedDB stuff and the lack of the flux store. Those things are fast, and it would be very difficult to use those in our current Vue app unless we kind of tore out the whole architecture.
D
So there may be some speed benefit gained by, say, rendering the diffs as soon as we can, the fastest piece, with a smaller component or something, and then loading in our existing kind of infrastructure on top of it. I'm not sure how much of a speed gain we would see from that. That's...
D
...my main concern. If we replaced just the diff renderer with, say, a web component or something, it would probably be faster, and we would get the benefit of what Andre was talking about earlier, maybe dropping it in on the commits page and various other places.
A
Yeah, okay. You also mentioned here that the editor group is using IndexedDB for the new Web IDE.
F
It's worth a chat with Paul just to see what's going on there, because they may have the same concerns we do about invalidation and security, or maybe they haven't thought of those things yet, and it's worth having a conversation with them on that side to share those thoughts. Either way, it's probably good if someone else is using it. Just as an FYI; I didn't know if anyone had heard or seen that yet.
A
Musing about it: on the Snappy GL effort, Natalia was working on the persistence of GraphQL Apollo queries, but they dropped the usage of IndexedDB at this stage; they're using local storage. But it's in their future, so it's something that, technology-wise, will be part of our toolkit, I guess. Anyway...
D
Yeah, I was going to mention the same thing. When I talked with Tim about IndexedDB in the Snappy GL type epics or issues... I don't want to misrepresent his approach, so I'll just give my own perspective, because I can't speak for Tim. But my perspective was that, from a security side, it didn't feel like Tim was super concerned about it, because there's nothing especially new here.
D
An issue that's internal-only is in your cache somewhere: it's in your browser cache, it's in stuff, and we don't currently have a way to stop you from extracting that from your cache. The only difference is that IndexedDB is much longer-lived, and it's truly a database of data. So from that perspective, at least what I perceived from Tim was that this is a problem, but it doesn't necessarily make the security hole worse.
D
It
just
makes
it
maybe
more
apparent
that
you
can
look
at
data
that
maybe
you
shouldn't
have
access
to
anymore
or
something
like
that.
So
I
don't
know
if
that
makes
anything
better,
but
it's
kind
of
like
we
can
deal
with
the
security
problem.
If
it's
an
actual
problem,
I
guess
maybe
later
I'm,
not
sure.
A
Yeah, I agree. Right now I wouldn't block us from looking into it any further, but yeah. So status is a great question for this: what's next for this effort? I don't really know specifically, but I feel like it's a little bit too early to drop this right now. I think we still need to dissect it a little bit further and try to see what we can extract from it.
A
I wouldn't spend more time right now on this topic, because I still want to talk about blame page streaming from Stanislav, and we have less than 15 minutes. So, unless anybody objects, let's move this over to next week and keep discussing it asynchronously. Any objections to moving forward?
A
Right, thanks everybody, really interesting discussion, and thanks Thomas for the POC. Stanislav, you have the update on the dev findings of blame page streaming?
E
Yep. First of all, let's start with what we are actually solving with streaming here. The blame page has a full-page mode, which basically loads the whole page. I'll try to show you how that works; give me a second.
E
So here I'm loading the whole blame page for the emojis index.json, which is a real file at GitLab. It has 33,000 lines of code, and each line has its own author, grouped by lines. As you can see, I'm limiting the network to 3G here, and it takes quite a while, and during that time the page actually hasn't loaded yet. What you're seeing is still loading; the JavaScript hasn't loaded, and we can't use any JavaScript functions here. We cannot open the sidebar, for example.
E
With streaming, we are trying to solve that problem, so that the page is interactable as soon as possible and doesn't block the main thread as heavily as the full-page load. I'll try to show you how that looks.
E
And, as you can see, the page is already interactable, and it's still loading the full-page version in the background. But it's now much faster, because the response is cached and it can be rendered a lot faster than with the full-page approach.
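A minimal sketch of the streaming idea demoed above: the server emits the blame page in chunks, and the client appends each chunk as it arrives, so the page is usable after the first chunk instead of after the full payload. An async generator stands in for the HTTP response body here; the real feature would consume a fetch() ReadableStream. All names are illustrative.

```javascript
// Produce the blame page as successive HTML fragments, one per run of rows.
async function* blameChunks(lines, chunkSize = 1000) {
  for (let i = 0; i < lines.length; i += chunkSize) {
    yield lines
      .slice(i, i + chunkSize)
      .map((l) => `<div>${l}</div>`)
      .join('');
  }
}

// Append each fragment to a sink (a DOM container in the browser) as it
// arrives; the page is interactable after the first chunk lands.
async function streamInto(sink, lines) {
  let chunks = 0;
  for await (const html of blameChunks(lines)) {
    sink.push(html);
    chunks += 1;
  }
  return chunks;
}
```

On a slow connection this is exactly the win described: the first 1,000 rows render while the remaining megabytes are still in flight.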
E
So first of all, I would say that I wouldn't consider streaming, in its current state, a solution to all the problems with rendering or performance, because it's still very heavy on the main thread when it comes to rendering. For example, most of the time you saw was spent rendering the page: layout, style calculations, and all that stuff. JavaScript took, I don't know, two or three percent of the time, and that's what we need to deal with.
E
It has been reported to the Chrome team, but they decided to close the issue for some reason and not put any effort into investigating it. I'll try to give it a second attempt and provide more evidence of the lags, because they're really severe. The second thing is that content-visibility for some reason breaks streaming: it can cause some tags to be closed sooner, and that completely breaks the layout of the page. This is caused exactly by content-visibility, because without it the page works flawlessly.
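Since layout and style dominate the cost, one common mitigation (not necessarily what ships here) is to render rows in small batches and yield to the main thread between batches, so input handling can interleave with the rendering work. setTimeout(0) is used below as a portable yield; requestIdleCallback or scheduler.yield() could serve the same role. This is a hedged sketch, not the team's implementation.

```javascript
// Resolve on the next macrotask, giving the browser a chance to handle
// pending input and paint between batches.
const yieldToMain = () => new Promise((resolve) => setTimeout(resolve, 0));

async function renderInBatches(rows, renderRow, batchSize = 200) {
  let batches = 0;
  for (let i = 0; i < rows.length; i += batchSize) {
    rows.slice(i, i + batchSize).forEach(renderRow);
    batches += 1;
    await yieldToMain(); // layout/style work for this batch can settle here
  }
  return batches;
}
```

The batch size trades total render time against responsiveness; smaller batches keep the page interactable at the cost of more scheduling overhead.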
E
So our main limitation here is browser rendering. I've also shown you it loading under poor network conditions, and that is where streaming actually shines. If you have a very poor connection, you don't have to wait for the whole two or three megabytes of page to load; you can stream it as soon as it's ready. So it actually gives a huge benefit to users on poor connections.
A
Thanks. I would add one thing between those two realities: the blame page currently has a paginated implementation. What Stanislav is rolling out under a feature flag is that the first page still renders paginated, with the links to all the other pages at the bottom, and a little button to view the entire blame takes you into this streaming mode. So there's a progressive disclosure to getting into this mode right now, and with it we'll try to assess whether it's worth making it the default or not.
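The rollout shape described above, pagination by default with streaming behind both a feature flag and an explicit "view entire blame" action, can be sketched as a tiny gate. The flag and mode names are invented for illustration.

```javascript
// Streaming only activates when the flag is on AND the user explicitly
// opted in; everyone else keeps the existing paginated behavior.
function blameMode(flags, userRequestedFullBlame) {
  if (flags.has('blame_page_streaming') && userRequestedFullBlame) {
    return 'streaming';
  }
  return 'paginated';
}
```

Keeping pagination as the fallback means the flag can be disabled at any point without breaking the page.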
A
That's the rollout plan for this feature. Yeah, I wanted to open it up for questions about it.
D
Thank you, Stanislav. When you mentioned that this is, you know, hugely impactful for people on slow networks or slow devices, it brought to mind the baseline performance budget for 2023, so I'm going to drop a link to that.
D
But I just want to say thank you for working on something that brings up the experience for people on lower-end devices. I think that's maybe a segment of our customer base, I don't know, or at least potential customers, that we probably aren't serving that well with our really, really huge apps that take a long time to load. So it's good to bring that experience up for them.
A
Yeah, I agree. Any more thoughts? As Stanislav was describing this, I feel like this is kind of what we tried to achieve with the batched diffs, I guess a little more clunky there, while this is a little bit more reproducible across other pages and such, with some improvements, of course, for optimization. So there's probably some cross-pollination that can happen between this and the paginated MR diffs.
A
So once we have this concluded, posting a summary would be useful; please share it here as well, Stanislav. Then we can try to take some lessons on what we can apply to the diffs, now or in the future.
A
Right, we're almost at time, five minutes left. Oh, Thomas, is this a reply to Stanislav, or is it a new point?
D
It is a reply. When Stanislav mentioned that for people with a slower connection or slower devices this is going to be especially impactful, it reminded me of that document, which says something like you can have 300K of JS load for the P75 of devices. Interesting, but not directly related; it just brought it to mind.
A
For context: blame pages are usually the ones that, in those comparisons of source code tools, some tools just cannot render. The page bricks completely, or it breaks the browser, or something. These edge cases make blame pages particularly complex to render in full, so bringing that back from the paginated solution would, I think, be a great improvement for the UX overall on the blame page. Keen to see that roll out.
A
All right. It's really exciting to see this improvement. Being the manager of both teams, it has been very rewarding seeing Stanislav playing in the source code pond again, because before you were hired here you also worked on source code and did some performance improvements, so it's great to see you come full circle.
A
Thank you so much for the great discussion; I think we had a great session. Next week will be at a different time, but please don't hold off from participating in the asynchronous discussions on the issues we have linked, or on Slack, whatever; we'll find a way to get on the thread. But thanks everybody for your time, and I'll see you.