From YouTube: 2023-03-16 Code Review Performance Round Table
A
All right, welcome everybody to yet another weekly performance round table for the code review group. This week we have an agenda, a few things to talk through. Oh nice, thanks for writing that up. So I'll take it in order.
A
So we still have the open discussion about the micro code review POC. I wanted to check in: any news, anything that came up, or anything we should pick up from there and convert into action? Anything anybody wants to say about that?
A
Trying to see if there's anything else here. I can't remember either. I know there were a couple of things we wanted to validate, or at least see whether they'd be worth mimicking in the product. One of them was IndexedDB, and then, if I'm not mistaken, we ended up staying with the HTTP caching, with the max-age and the cache key, and that is being rolled out. So we kind of stopped there.
A
So we didn't get to the IndexedDB caching side of things, and that could be one of the things holding this back from being implemented: just getting to learn from that. I don't know if anything else was forked from here; if it was, it should definitely be on the board, and it's not.
C
I suppose we're very interested in doing an IndexedDB test run for something on the front end. I don't know exactly how we would structure that, but it seems pretty well scoped, and it seems like we'd get some pretty good benefits from it. For revisiting versions of MRs you've already visited before, it seems like a win.
A
If I remember correctly, one of the questions that came up was which users would benefit from that: second visits, a second visit to the MR without a change in the data being presented. Yeah, let's open an issue on that, extracting the IndexedDB caching, and discuss it there. I feel like there are a couple of discussions to be had on that topic.
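To make the revisit idea concrete, here is a minimal sketch of a client-side cache for diff payloads, keyed so that either a new diff version or a moving target branch invalidates the entry. All names (`mrId`, `diffsVersion`, `targetBranchSha`) are illustrative assumptions, not real GitLab API fields, and a `Map` stands in for IndexedDB so the sketch stays runnable outside a browser.

```javascript
// In-memory stand-in for an IndexedDB object store.
const diffCache = new Map();

// Derive one composite cache key: a change in the MR's diff version or in
// the target branch head produces a different key, so stale entries are
// simply never hit again.
function diffCacheKey({ mrId, diffsVersion, targetBranchSha }) {
  return `mr:${mrId}:v:${diffsVersion}:target:${targetBranchSha}`;
}

function getCachedDiffs(meta) {
  return diffCache.get(diffCacheKey(meta)) ?? null;
}

function storeDiffs(meta, payload) {
  diffCache.set(diffCacheKey(meta), payload);
}
```

A second visit with identical metadata hits the cache; a visit after the target branch moved misses it, which is the staleness behavior discussed below.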
A
Moving on: the target branch. The diffs themselves might not change, but the target branch might be moving very fast, so it would be very important to see if we can leverage the cache key concept, again using it as the cache key for the next visit as well, because we want to see how quickly the data will go stale. So yeah, let's open a discussion on that particular extraction, extracting the IndexedDB part, and do some due diligence before we schedule it, like scoping these questions out on that issue. Do you want to take that action, Thomas, and create an issue for that?
B
Yeah, the biggest takeaway for me was experiencing GitLab without the GitLab overhead. I guess we should explore it a little bit more, because without all the Rails stuff in front of us I think we can achieve great performance. So we definitely should keep an eye on that.
B
As always: separate static from dynamic stuff. Render what is HTML, serve it as HTML, and progressively enhance it on the client. That might be a way of doing it: separating some of the pages so that, for example, the rest of GitLab lives in Rails and the code review experience specifically lives as a separate app. But it would look exactly like GitLab, and no one should notice.
A
We can't get rid of that, I guess, right? And that's what I mean by standalone: something that would be just the code review, nothing else around it.
B
We can play around with that. We can always make stuff lazy-load; you don't have to see all your to-dos on the merge request page immediately. Usually, when you open the merge request page, what you want to see is the title, description, and discussions, or diffs and their discussions. That's okay.
A
I think that at least warrants an issue, to see if that's even feasible. One of the things I don't want to sacrifice is, like you said, the GitLab feel; those things are still important, seeing merge requests integrated with the whole MR. But there might be something in there that we can extract, like the first immediate load of the thing we're going for.
A
There could be something in there.
A
Thank you. Are you already done? No, I was doing that. Cool, all right, so no changes for now. I think this is it for now on the main issue; I guess it's still being picked apart, with things being extracted from it, so I think it's something we have to keep open.
A
All right, next point of discussion: consider performing diff highlighting on the front end instead of the back end. So, check out this POC, the WebAssembly comparison.
E
Yeah, I think it wasn't clear. The video got dropped and I didn't see anything else on it, so it wasn't really clear to me what the main takeaway was.
A
Yeah,
so
this
is
coming
from
the
discussion
of
potential
scenarios
going
forward
and
we
were
raising
the
question
whether
well
highlight.js
has
its
own
shortcomings
that
we
already
identified
lack
of
support
and
it's
really
falls
on
its
face
when
it's
highlighting
pieces
of
code
not
the
whole
file
at
once,
and
then
the
com,
the
concept
of
web
assembly,
came
into
conversation,
and
that's
exactly
so.
This
is
comparing
with
the
future
possibilities.
A
I'll just speak my mind: I don't think it's worth comparing, mostly because the back end has a very big advantage, which is that it already has the files on its side. It loads, it highlights, it grabs the snippets, and it sends the snippets to the browser. For us to do this on the front end, we would have to fetch the whole file, right? So even if we have those comparisons... But anyway, I'm sort of a naysayer here, and I don't want to block promising work, or at least investigations.
B
Yeah, I think we're discussing a problem which cannot be fundamentally solved in a merge request, because a merge request is always about partial files, not whole files. And sending the whole file just to get a proper highlight seems like huge overkill, so I'm not sure it even makes sense to put any effort into this.
A
The question of whether WebAssembly was worth investigating is kind of settled, then, and my biggest takeaway is: WebAssembly could be useful if we had more of a single-page-application feel, where you have a loading page to load all the assets and all the chunks and all that stuff; then, as you're navigating through your app, you already have those things loaded. That's versus the case of a first page load, like a web approach.
A
That
was
my
biggest
Takeaway
on
that
video
might
still
make
sense
for
certain
things,
but
more
like
long
running
things
on
the
browser,
not
just
like
the
page
rendered.
You
need
to
be
super
Snappy,
so
it
will
be.
B
I
think
I
also
missed
the
original
problem
with
highlighting
of
the
back
end,
because
Gary
mentioned
it
can
be
solved.
But
what
exactly
we
are
like
dealing
with
right
now,
I'm,
not
sure
what
it
is
like.
Is
it
cash
on
the
radio
side?
Is
it
something
else?
Is
this
computation
time
I'm
not
sure.
B
Okay, but given how GitLab feels right now, I don't see this as a really big problem, because right now you don't have to wait ten seconds to get your diffs; it's usually half a second, and they're ready right away. So yeah, from that perspective...
A
No,
but
you
see
so
I
guess
the
the
manifestation
of
this
of
the
cost
of
highlighting
is
exactly
when
you're
serving
the
bad
strips
and
you're
you're,
not
getting
2
000
files
served
immediately,
like
you
still
have
to
go
through
a
period
of
loading
and
this
time
that
each
batch
dips
is
taken
to
respond.
A
I
guess
that's
how
they
manifest,
but
already
we
already
have
several
iterations
on
this
we're
already
batching
the
day
of
floating,
we're
already
like
doing
a
bunch
of
things
to
speed
up
the
loading
of
the
of
the
front
and
app
and
I
think
care
even
said,
or
somebody
said
in
this
call
that
seems
like
highlighting
is
a
solve
problem.
A
It's
just
it's
just.
This
is
more
like
theoretic
theorizing,
whether
we
could
make
it
in
any
way
more
efficient
than
it
is
right.
Now.
B
Just a crazy idea: we could always set up a service that does highlighting only. Pick the most performant solution from the video, the one that was using Wasm... or no, it was Rust, the most performant one. Set up a separate server, do the highlighting there, and you're done.
C
On the thing that I wrote, or on this? Yes, sure. Well, I mean, you answered it; I didn't talk about it, though. The question I had was: we've talked quite a bit on this topic about how, when Source Code uses highlight.js, it identifies when it would fail and falls back to the back-end highlighting, so that it doesn't have those highlight failures.
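The fallback pattern described here can be sketched in a few lines. This is an illustrative simplification, not GitLab's actual implementation (which, as noted just below, keys off the file type): keep a list of languages the client-side highlighter is known to handle well, and route everything else to the server.

```javascript
// Languages the client-side highlighter is assumed to handle acceptably.
// This set is a placeholder, not the real support matrix.
const CLIENT_SUPPORTED = new Set(['javascript', 'ruby', 'go', 'python']);

// Decide where a file should be highlighted. Unknown or unsupported
// languages fall back to the server-rendered highlighting path.
function pickHighlighter(language) {
  return CLIENT_SUPPORTED.has(language) ? 'client' : 'server';
}
```

The limitation raised next in the discussion is exactly this: the check is per file type, so a supported language that still highlights badly in a given file is not caught.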
C
My question, in my head, is: if we can identify those failures, why can't we work around them? But your answer sort of answers that, which is that it just identifies a file type, and it doesn't go further. So that's sort of disappointing, but it's...
A
Reasonable
it's
a
patch
I
mean
eventually
with
time.
The
idea
was
that
we
would
extend
support
for
highlight.js
for
those
that
are
missing,
but
yeah
we're
now
we're
now
looking
into
adding
support
for
spectrality
of
co-learners
file,
which
is
our
own
property.
It's
not
proprietary,
but
it's
our
own
syntax.
So
by
learning
that
we
can
potentially
learn
how
to
Extended
all
the
languages,
and
then
you
can
have
that
effort
to
be
done.
If
it's
worthwhile
again,
the
Vlog
is
different
than
the
diffs.
For
the
reasons
we
already
talked
about,
yeah.
C
So I am content. At least from a WebAssembly perspective, I don't think that's the right place to spend our energy. But I did have one thought when I was thinking about that video: WebAssembly is primarily good at math; it's not great at strings, and this is purely strings. And, like you said, it's also good for really long-running things; it's good for video games.
C
Right,
like
you,
can
port
a
video
game,
you
load
all
the
assets
and
then
it's
doing
a
lot
of
computation
kind
of
behind
the
scenes
in
C,
plus,
plus
or
whatever
it
seems
like
web
assembly
would
probably
fare
a
lot
better
on
a
hot
start.
So
so,
comparing
like
the
web
assembly
stuff
loaded
and
ready
to
run
to
like
highlight.js
loaded
and
ready
to
run
versus
trying
to
like
start
it
on
a
new
page
load
and
then
see
how
it
goes.
F
Left a note here. Stanislav, what was your point?
B
Yeah, I think I mentioned this in a previous call, when we were discussing LHS: it's not for free on the front-end side. You still have to ship a lot of code to support other languages, and it also takes time and consumes memory. You're basically redrawing the whole diffs that you already drew; or, if you didn't draw them, you have to transfer the whole HTML to the Wasm module, or whatever it is, and get it back. There's also the overhead of transferring that data, so it's not for free.
B
It
also
has
its
downside,
so
we
might
be
removing
some
of
the
overheads
from
backend,
but
in
addition
adding
some
more
heads
to
client
side,
yeah.
A
I think the only way for us to pull this off would be if we were okay with having some inaccuracies, by only highlighting the snippets, the code chunks, not the whole file, and that, as we know, already produces varying results in terms of accuracy. So I think that's the question: how tolerant are we of incorrectly highlighted files? I'm not very tolerant of it, but we can move on, I guess. I feel like the points have been made.
A
We
have
some
open
questions
if
somebody
wants
to
pick
them
up,
but
I'll
I'll,
move
on
for
the
sake
of
time.
For
that
service,
that's
okay,.
A
If
I'm
not
mistaken,
the
conversation
started
from
the
POC
of
the
micro
front
end,
then
the
question
of
whether
we
could
grab
it
from
The
Blob,
the
entire
blog
being
rendered
at
once
and
then
being
able
to
attach
common
to
each
one
individual
lines,
and
then
the
conversation
evolved
to
great
what
if
we
could
attach
it
to
anything,
how
would
be
that
system
of
identification?
That's
where
this
issue
comes
from
generalize,
much
request
items
as
it
stands.
It
has
little
to
do
with
performance,
but
it's
still
here
and
I
thought
I'd
bring
it
up
again.
C
That's right. I mean, it's very heavily back-end, obviously, because it suggests a structure that separates the components of the MR from each other more. But I also think it's pretty heavily front-end, because right now the structure of our app is very heavily influenced by the journey we took to get here. So, you know, discussions are children of lines, and then there's also a whole app: the whole overview...
C
Tab
is
the
notes
app
and
it
displays
notes
that
are
not
attached
to
lines,
and
it's
just
it's
it's
it's
very
much
a
product
of
of
the
of
the
needs
that
we
had
at
the
time
and
so
the
the
issue.
C
...the generalizing issue also talks about how the app could be restructured so that the individual pieces are not put together in the way that was a product of how we needed them at the time, but in a way that represents the current, dynamic nature of the code review app. It would necessarily mean a pretty strong restructure, I think, in parallel with the back-end work of restructuring our application to have this concept of a note as its own thing that we can attach anywhere in the MR, whether it's to a line of code or a function in the code or whatever. I think there's a lot of fun involved here, not just tweaking.
A
Yeah,
but
the
question
is
still
so
and
I
agree
in
ux
challenges
as
well.
How
do
we
place
the
call
to
action
to
leave
a
comment
on
objects?
How
do
we
make
it
visible,
so
somebody
Andy
Hope
was
talking
about?
Could
we
do
this
on
artifacts
of
pipelines
like?
Could
you
comment
on
a
merger
request
on
something
that
came
from
the
Bible
like
if
you
can
figure
out
a
way
to
put
that
on
the
UI,
we
can
probably
come
up
with
an
idea,
a
way
of
identifying
them
and
rendering
them
sure.
A
Pipelines
and
have
the
artifacts
there
to
come
and
like
it's
starting
to
be
like
all
right
now,
the
app
is
no
longer
the
changes
tab
only
it
could
be
anywhere
on
the
page.
Yeah
so
introduces
a
bunch
of
like
new
ideas
that
should
definitely
be
discussed
for
sure.
Yeah
I,
just
don't
see
that
in
as
a
shortcom
short-term
win,
no,
definitely
like
more
medium
long
term.
Do
you
have
that
discussion?
It
doesn't
seem
like
it
will
help
more
performance
if
anything
you'll
be
a
goal
for
a
refactor
of
an
app
or
something
yeah.
C
I
mean
I
think
yeah,
it's
not
it's
not
an
easy
win,
but
I
do
think
it
would
help
in
performance
because
we
we
do
like
right
now.
We
do
a
lot
of
stuff
where
we
Loop
over
every
discussion
and
check
if
it
matches
any
line
in
a
file
to
attach
it
to
the
right
line
and
there's
a
lot
of
like
looping.
C
That happens because our relationships are child-parent relationships rather than, you know, identifier-to-identifier, one-to-one relationships, and so we have to do a lot of looping to decide whether to place something somewhere.
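The one-to-one identifier idea can be sketched quickly: instead of looping over every discussion for every line, index the discussions once by a line identifier and then do constant-time lookups per line. The `lineCode` field name here is an illustrative assumption.

```javascript
// Build a Map from line identifier to the discussions attached to it.
// One pass over the discussions, instead of one pass per rendered line.
function indexDiscussionsByLine(discussions) {
  const byLine = new Map();
  for (const discussion of discussions) {
    if (!byLine.has(discussion.lineCode)) {
      byLine.set(discussion.lineCode, []);
    }
    byLine.get(discussion.lineCode).push(discussion);
  }
  return byLine;
}

// O(1) lookup while rendering a line.
function discussionsForLine(byLine, lineCode) {
  return byLine.get(lineCode) ?? [];
}
```

This trades the nested discussion-times-lines loops for a single indexing pass plus per-line lookups.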
C
Andy said we could just switch to the Google Docs pattern, where everything is a canvas and we just paint graphics onto the screen, and we can put little hover buttons everywhere based purely on pixel values. I'm sure it would go really well.
A
Well,
I
I,
don't
know
I'm,
not
too
sure
about
the
canvas,
but
if
we,
if
we
do
change
the
way
that
we,
inter
interspace
content
with
comments
that
can
simplify
a
bunch
of
other
things,
yeah
but
I'll
say
this
bring
this
back
to
Performance
conversation.
The
only
way
that
this
topic
reverts
back
to
Performance
boost
is,
if
the
way
that
we
attach
notes
to
lines
which
could
be
anything
in
the
future
allows
us
to
render
the
code
chunks
as
a
whole
from
the
server
and
not
have
to
have
a
line
by
line
structure.
A
You
reduces
the
payload.
It
also
reduces
probably
the
the
overhead
of
rendering,
but
that's
the
only
way
that
I
see
that
this
can
benefit
performance
and
even
then,
like
leaving
comments
in
the
middle
of
the
lines,
would
still
be
challenging.
C
Yeah, I don't really remember how, but I think your summary of how it relates to performance was correct. It's not really directly performance-related, though, so we can move it somewhere else.
A
Right
yeah
I'll
do
just
tax
so
moving
on
to
the
next
Point
stanislav
diffs,
server
side,
rendering
experiments
show
what
you
have.
B
Yeah, a quick update on the diff server-side rendering, which I'm working on right now. I've tried to replicate what we have in the Vue app using Haml templates.
B
That quickly became too complex, because we've built up a lot of logic there over the years, and trying to understand all of it is quite a difficult task. So I've decided I'll be making a simpler variant of the POC: I'll be using our current legacy diffs in Haml, which are used on the commits page and other pages that look like that.
B
So
we
already
have
this
this,
but
they
are
visually
quite
different
from
what
we
have
on
the
measure
Quest,
but
they
still
work,
so
it
should
work
as
an
example,
and
the
second
thing
I'll
be
testing,
is
attaching
discussions
to
these
divs
and
when
this
POC
is
ready,
I
want
to
verify
to
verify
two
things.
First
of
all
is
that
we
can
actually
do
that
stuff,
so
we
can
attach
discussions
to
already
server
set
rendered
divs.
B
The
second
thing
is
evaluating
the
performance.
I'll
be
comparing
the
new
variant
to
what
we
have
right
now,
with
this
virtual
scrolling.
I'll
see
and
see,
does
it
make
sense
to
continue
with
that
route
and
I'll
keep
you
posted
on
the
results?
Hopefully
next
week,
I'll
give
some
demo
of.
A
Yeah, I have some questions; I was lost in my thousand tabs. First, my immediate question is: what are you thinking about for loading these templates? With the page, or still async, in a separate call?
B
This
effort,
most
definitely
will
be
using
streaming,
we'll
be
using
the
same
approach
as
we
do
on
the
bling
page
right
now,
which
is
to
show,
for
example,
the
first
five
files
server
rendered
then
streams
the
rest
once
you've
loaded
the
page
on
the
client.
So
we
will
need
an
entry
point.
Another
API
entry
point
on
the
rail
side
that
can
serve
this
gives
as
HTML
similar
to
what
the
deep
space
digestion
does,
but
it
will
be
strictly
for
HTML.
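The streaming shape described above can be sketched with a generator: the first chunk carries everything needed for first paint, and the remaining files follow as further chunks. This is an illustrative stand-in for the actual Rails streaming endpoint; `renderFile` is a placeholder for real diff rendering.

```javascript
// Placeholder for the real per-file diff rendering.
function renderFile(file) {
  return `<div class="diff-file">${file.path}</div>`;
}

// Yield one big first-paint chunk, then one chunk per remaining file,
// mimicking "render the first N files, stream the rest".
function* streamDiffs(files, initialCount = 5) {
  yield files.slice(0, initialCount).map(renderFile).join('');
  for (const file of files.slice(initialCount)) {
    yield renderFile(file);
  }
}
```

A consumer (the client, here a simple loop) appends each chunk as it arrives, so the visible part of the page is complete after the first chunk.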
A
Okay. On that, does anybody have questions or thoughts?
A
Okay,
my
next
question
is,
when
you
say
here
about
a
must:
try
on
discussions
the
core
feature
So,
when
you
say
discussions
when
you
attach
a
discussions,
we're
talking
about
like
a
thread,
the
button
to
add
new
comments.
All
the
interaction
will
be
working
in
this
POC,
like
you'll
import,
The,
View
Behavior
in
the
view
code
is
that
it.
B
It
depends
on
the
effort
of
detaching
the
discussions
from
our
current
code
base
because
it's
deeply
integrated
into
pukes,
and
that
could
be
a
challenge
so
well
try
to
do
my
best
to
implement
most
of
it,
but
if
I
fail
to
do
so,
I'll
at
least
make
it
so
that
the
comment
button
works
and
you
can
actually
see
discussions.
A
Okay,
if
you
run
into
Roblox,
probably
Phil
can
help
out
because
he
he
knows
that
bit
right
well,
yeah
any
more
thoughts,
questions.
A
Cool
so
I
added
a
little
note
here
as
a
curiosity,
because,
even
though
we're
using
the
same
term
here
server
side,
rendering
that
can
mean
a
thousand
different
things
and
in
this
concept
that
we
tried
in
2020
like
we're
using
hypernova,
which
was
a
different
which
was
using
JavaScript,
we
were
wondering
the
templates
using
the
same
code
that
we
render
on
the
JavaScript.
A
This
will
inherently
use
a
duplication
which
you
would
have
like
the
markup
duplicated
on
the
front
on
the
packet
or
not
duplicated,
but
moved
to
the
back
end
I
guess
for
entering
the
starting
Parts,
just
the
static
Parts,
not
the
front-end
Parts,
not
the
dynamic
part
that
was
sort
of
like
the
one
of
the
biggest
distinctions,
which
is
a
little
different
that
hopefully
we
can
get
this
out
without
too
much
hassle
and
having
to
bother
infrastructure
with
having
to
run
new
things.
A
Cool. Any more questions or thoughts on this?
A
Cool
so
move
on
to
the
next
Point.
So
one
of
the
things
we
talked
last
week
was
exactly
about
the
okr
to
reduce
the
TBT
and
the
LCP
by
50.
A
Until
the
end
of
this
quarter
and
we're
asking
for
ideas
and
then
the
server
side,
rendering
came
up
as
we're
getting
closer
and
everything,
it
seems
like
we're
not
going
to
have
a
working
like
we're
not
going
to
Target
having
a
production
ready
rolled
out
version
of
the
server
side.
We
don't
want
to
rush
it.
We
want
to
make
sure
that
we
do
it
properly.
So
it
seems
like
we
might
not
have
it
in
time,
for
this
Milestone
and
I
was
wondering.
B
I don't have anything easy to suggest, but I would like to raise a topic from our previous round table: line wrapping. I think we should at least give it another try, and if it fails, it fails. But from my perspective, disabling line wrapping can bring huge performance benefits for those who really need it, like people who load 200 files in a merge request; those are the candidates for this mode. If we can figure out a simple way to implement it and make it not suck, make it feel good, then we can solve our biggest problem, which is loading large merge requests, without breaking stuff for small merge requests, because we can just leave a small merge request as is and forget about it.
B
Yeah, I should have started with that. If we know exactly the height of each line, and we know the height of the header, we can basically skip the pre-rendering step of virtual scrolling. Right now, when you load diffs, you see the first two files already rendered, but the rest is rendered sort of in the background, except it's actually blocking the main thread, and it is the cause of our high TBT times.
B
It does that in a cycle, in a requestIdleCallback task, and it can take five to ten seconds to properly adjust the page scroll size. If we know the heights before we render these files, we can skip this step and programmatically set the scroll height. And it works not just for virtual scrolling; it works for the server-side rendering trick as well, because it's the same concept as content-visibility. It does exactly the same stuff, so we can use this in both solutions.
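The pre-computed height idea boils down to simple arithmetic: with a fixed per-line height and a fixed file-header height, the total scroll height is known before anything renders, so no background pre-render pass is needed to discover it. The constants below are illustrative, not real GitLab values.

```javascript
const LINE_HEIGHT = 20;   // assumed px height of one diff line
const HEADER_HEIGHT = 48; // assumed px height of one file header

// Total scroll height of the diffs list, computed up front from metadata
// (file line counts) instead of measured after rendering.
function predictedDiffsHeight(files) {
  return files.reduce(
    (total, file) => total + HEADER_HEIGHT + file.lineCount * LINE_HEIGHT,
    0
  );
}
```

The result can be assigned to the scroll container (or fed to `contain-intrinsic-size` in the content-visibility variant) before any file body exists in the DOM.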
A
Thanks for that; as you were explaining it, it did seem like, in the simple case, we can forecast the height of the diff file, we can do some calculations and such. But what if we have comments in the middle of them? In the middle of the diff file we have a comment, and that comment's height will differ based on a bunch of variables that are not that easy to predict.
B
First
of
all,
we
have
this
problem
right
now
on
the
this
page.
So
when
you
load
this,
the
comment
just
pop
in
and
the
page
adjusts
accordingly
and
second
is
I've-
created
an
issue
to
consider
making
our
comments
float.
A
Because for me, yeah. Okay, okay, so that's the actual status. Okay; for what it's worth, it's in the back pocket.
C
Best practice for images is to actually include a predefined size when you render the HTML, for exactly one of these reasons: the layout shift is really bad if you don't tell the browser how big the image will be. You can also give it a ratio; in modern HTML you can give it the aspect ratio of the image, so the browser can scale it. Of course, we don't do that, but we probably should. We could say: an image in a comment gets a maximum of, I don't know, 600 by 900, that's the size of the image, and then we'll go from there. I don't know.
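A minimal sketch of reserving image space up front: emit `width` and `height` with the `<img>` markup so the browser can allocate the box before the asset loads, avoiding the layout shift just described. The 600 by 900 cap echoes the number floated above and is purely illustrative.

```javascript
const MAX_WIDTH = 600;  // illustrative cap, not a real product value
const MAX_HEIGHT = 900;

// Produce an <img> tag with explicit dimensions, scaling down
// proportionally if the source image exceeds the cap.
function imageTag(src, width, height) {
  const scale = Math.min(1, MAX_WIDTH / width, MAX_HEIGHT / height);
  const w = Math.round(width * scale);
  const h = Math.round(height * scale);
  return `<img src="${src}" width="${w}" height="${h}">`;
}
```

With explicit `width`/`height` attributes, modern browsers derive the aspect ratio themselves and reserve the correct box, which is the mechanism behind the "give it a ratio" remark.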
A
Yeah
and
again
you'll
be
a
breach
of
expectations.
If
we
immediately
if
we
started
doing
that
on
user
generated
content
images
that
haven't
specified
in
size
because
the
size
of
the
image
itself,
the
asset
can
change
later,
but
yeah
Phil
has
a
question.
Do
you
want
to
read
that
field?
You
want
us
to.
F
I'm just curious whether we're risking degrading the UX just for the benefit of performance. If we render normally for smaller merge requests, with the lines wrapped, and then we say, "hey, well, this merge request is kind of too big, so to render it we're not going to wrap the lines," it just kind of seems like we're doing two different things. We're kind of saying, "hey, we know this is pretty crappy, so we're going to change the UX for you, whereas the rest of you can have the UX that you're very much used to," because we've been wrapping lines for as long as I remember. And then taking the comments out of the actual code and floating them somewhere else... I understand it.
B
I think that, definitely, if it's good for performance, it should mean a better user experience. Right now, if you load a merge request with 500 files, the experience is mush; it's not good, I should say. With this approach, it should be at least manageable: you will be able to work with it.
A
So even if you tell them, "look, we're doing this for the benefit of performance," all they see is that they now have a file without line wrapping when they prefer the line wrapping. You can say it's for performance reasons, but, and that's my point below, they already feel very bad that we collapse files that are too large to render because of performance; we have a lot of corners cut for performance reasons already. Forcing line wrapping off doesn't seem like one that would go down very easily, but I could be wrong, and I think that's a fair question.

More broadly: conceptually, it makes sense to make changes that can potentially affect UX at the cost of better performance, because performance is also UX, right? It's also a feature. I get that. That doesn't mean every change that sacrifices UX is meaningful, though, or at least we should check. So I have a hard time with this, mostly because, if anything, we could do it for a section of our users, like we have file-by-file mode, but that involves such a high level of complexity and reliance on this thing. Maybe I'm not getting it, and that will be clarified for me; maybe that's not what you're saying. But yeah, I have a hard time with it.
B
I should also say that the stuff we're doing right now is also for, like, one percent of the users, because you don't review 200 files in MRs daily; it's usually outliers, the same MRs that are maintained for months. So this is already touching a very small user base, and right now the experience with smaller MRs is actually quite good. So I think we're only left with performance issues on large merge requests.
E
I think large merge requests are more common than we think in our customer profile. I'd love to say that 200 files is uncommon, but based on what I've seen from customers, it is not.
B
Maybe we can get some data from the customers who experience performance issues: would they be okay if we just disabled line wrapping? Would it be a good enough trade-off for them to switch to that mode? It wouldn't be us switching them manually; it would be them clicking a button: "yes, we want the best performance, we won't have line wrapping, and we will deal with that." Maybe we should get that data first and see what they say.
F
Just on the line wrapping: I guess, as Andrew says, the line wrap itself creates a lot of UX questions. How do we scroll left and right? Are the scroll bars visible? How do you make it noticeable that there's actually content off the screen for larger diffs? So I think, before we commit to even asking customers "would you be okay with this?", we kind of need to figure out this UX problem space.
A
Don't
get
a
suggestion
of
using
something
like
the
IntelliJ
solution,
which
is
when
you
scroll
one
of
the
views,
you
suppose
the
other
one
as
well
again
with
all
the
overhead
that
comes
from
JavaScript
synchronizing,
the
inner
Scroll
of
those
containers,
so
yeah
there's
a
couple
of
ideas
that
kind
of
like
mitigate
to
the
ux
I
think
I
added
to
the
Epic
that
we
created
for
it.
A
I
can
try
to
link
it
here,
but
I'll
focus
is
even
more
broader
question,
because
right
now
we're
discussing
things
for
the
1511
and
we've
been
here
a
long
time
and
it
feels
like
we
have
a
big
impact
coming
that
if
it
works
okay
and
we
can
bet
our
efforts
in
Rolling
the
services
rendering
into
Mrs.
It
feels
like
this
tiny
or
a
team
to
me
seems
tiny
in
the
scope
of
a
server-side
rendering
impact
right.
Is
this
even
relevant?
And
then
is
this
something
that
we
can
ship
in
a
milestone?
B
I think this is a fundamental problem of rendering in a browser. Even if we go with server-side rendering, we will still be faced with the same problem, because we still have to render all these lines in the browser, and from my experience working on the blame page, that can take a lot of time. It can even outweigh...
A
Yeah, I don't know; it's hard for me to express, because we have such a heavy page right now, because everything is fully reactive. But if we move to a reality where not everything is fully reactive... I've seen really large pages of basic HTML not slowing the browser down. It feels like plain HTML, with no JavaScript listeners and no overhead of any kind, doesn't have the problem of hard reflows, especially if you're not using table layouts and such. So I think we'll need to iterate on it. And Phil adds a message saying that yeah, they do manage that, and, funny enough, that's how they build their really performant pages with really large data: by getting chunks of server-side rendered things onto the page and then eventually decorating them.
A
Not forever, but I'll definitely table this until we have the server-side rendering knowledge, and then we can talk about it again and see whether it's meaningful or not. That's where I stand.
B
One last point for me: server-side rendering actually offers us a balancing mechanism. We can actually decide what is more important for users: do they want a snappier interface, or...
A
Okay, this has been very useful. But are there any other ideas for things we can ship right now, in a milestone, to address the TBT and LCP, or is this the only one we have?
C
Well,
it's
it's
just
kind
of
like
there.
There
is
going
to
be
a
a
big
chunk
of
things
that
is
rendered
at
some
point.
What
is
so
largest
contentful
pain
right
so
like?
What
is
that
for
our
Mrs?
Is
it
even
possible
to
reduce
that
I?
Don't?
Is
it
the
Mr
description
like
I,
don't
even
know
like
I
guess
it's
it's.
A
Then
yes,
it
would
get
a
speed
up.
The
merge
request,
description,
loading
and
that'll
be
that
in
the
emergency
changes
tab
is
a
little
bit
trickier.
I,
don't
know
by
hard
what
elements
is
being
picked
up
by
LCP.
Does
anybody
know.
A
Time to Interactive? Well, we're not addressing those, but what about just the first file? No... yeah, because the tree would be interactive, the tabs would be interactive, just not that piece of content, eventually. It could increase cumulative layout shift, if anything. That's a crazy, maybe not so crazy, idea. But here's the thing: we'd be affecting a metric, but then we would also be affecting the performance.
C
Is this something we can meaningfully change in the short term? Because, to me, I don't think we have huge chunks of the diffs app that are taking a really long time to render, out of the ordinary compared with other apps. Sure, diffs take a while to load or whatever, but is that something we can meaningfully change? Just trying to address LCP, I don't know if that's something we can do right now. Maybe TBT, total blocking time.
B
What we can do for LCP is reduce the overall overhead from Rails: if we can remove unnecessary CSS and JavaScript from the page, yes, that will improve our LCP. For the blocking time, I don't think we can do anything right now to improve it, because we already have very good performance for smaller MRs, and the FCP and LCP timings are actually quite good.
A
Okay,
we're
at
time
folks
so
I
think
I
think
we
have
a
SK.
We
have
an
issue
like
one
of
those
quality
issues
that
we
could
schedule
as
a
blank
slate
for
things
that
we
could
try,
then
we'll
go
from
there.
A
But
probably
this
last
thought
from
Stan's
life
could
be
worth
digging
around
and
auditing
a
little
bit.
Is
there
anything
that
we
can
do
to
speed
up
the
loading
of
the
page
and
potentially
speed
up
some
requests
and
speed
up
some
rails,
part
of
the
rendering,
which
will
probably
need
some
back-end
support
on
that.
A
Yeah, startup.js might sometimes be working against us, among other things. So, things for us to address; we'll look at that in this milestone. Thomas, you had a note. Do you want to walk us through it before we go?
C
Yeah
yeah
I
mean
it's
sort
of
it's
sort
of
related
to
this,
but
it's
kind
of
also
at
some
point,
the
biggest
problem
that
we
have
is
view
overhead
I
mean
it's
just
that's
it
we're
not
really
addressing
the
elephant
in
the
room,
which
is
the
library,
is
doing
the
most
work.
When
we
talk
about
total
blocking
time.
We're
talking
about
all
these
things.
To
like
try
to
reduce
that
view
is
the
the
worst
blocker
just
loading
view.
Reactivity
I
think
you
mentioned
this
earlier.
C
I don't know if it's server-side rendering; it's more just: do we need a Vue component for every icon? Is the icon reactive? Probably not, and we could just render an icon with HTML. Do we need a Vue component for every row of the repository? Probably not, because you can click on a row, and one single component can catch your click and figure out which file it was. There could just be less Vue running on the page, I guess.
B
Yeah, it's the same as you mentioned: we'll try to deal with that using server-side rendering and see how it works for us.
A
Steve
we
end
up
with
a
hopeful
note,
look
at
that
thanks
everybody,
I'm
gonna
cut
it
here,
for
the
benefit
of
everyone's
time,
sorry
for
going
to
build
overboard,
but
this
is
good
thanks
for
the
discussion
very
useful
and
I'll
see
you
next
week
until
then
have
fun
be
happy.
Bye,.