From YouTube: 2022-11-17 #4 Code Review Performance Round Table
A: And we're live. Hi everyone, welcome to yet another Performance Round Table of the Code Review group. Today it seems like we're going to have a front-end-only round table, and that's totally fine. So, starting off, I'm going to start by reviewing the board. Last week we had a couple of things: we had a suggestion in chat of using an issue board to track the topics that are being discussed. This is what it looks like at the moment; I'm sharing the screen now.

So these are the topics that we have right away. These two are ready for development. The first one, "Create technical documentation describing the several components," is already scheduled as a deliverable, it's already assigned, and Phil has already started working on this, so I'm tempted to assign 157 to him as well. We want to finalize the plan later today, so that's why we still want to have a face there, but that's what I'm thinking about doing.

Then there's "Track duration of specific parts of this request," which is back-end only, so I don't think we can... okay, for some reason this didn't make the cut. That's totally fine; hopefully we'll pick it up soon. So we'll go through the discussion topics, which are these ones, the ones that need refinement. The first one is "Improved caching of the diff files batch API."
A: That's a totally back-end topic, so we're going to skip that for now, and we'll do the same with this one as well, "Investigate the possibility of paginating diff metadata." That's mostly back-end restrictions, so we'll skip that too. Then: "Whenever possible, render the MR diffs from a client-side cache unless the MR is updated." I wanted to see if you have any opinion on this. We had a discussion, and I had a question about the security aspect that was mentioned. Yeah, so this is one of the things that we talked about.
A: Word on the street is that we shouldn't worry too much, because it's still just caching data and we're going to have some cache-invalidation procedure. So even if someone does still have access to it after a couple of days of losing access, it won't be for very long, so I don't think that will be as much of an issue as I thought it was. All the discussions that I've heard have downplayed that part, so I think we're totally fine there. And this is really the question that came up for me.
A: Since we're comparing with HEAD now, and HEAD is always moving, will we be able to distinguish comparisons with HEAD that don't cause any differences in the diff? If that makes sense. And then you said... okay, I think we have an answer for this. If we can have the metrics, they can tell us how many users do this and how often the MR changes. In theory we can check requests; we'd need to log the diff data, and I'm not sure how that can be collected in code. If we can't include it in an existing log entry, we can just have a new log entry that we can filter on.
B: So what exactly do we want to cache?

A: Yeah, so in essence we'll be caching the data that we're getting from the batch diffs endpoint, and I'm guessing this metadata too. So in essence, what we're theorizing is that we can keep a copy of those responses client-side, and this could be IndexedDB, so that we can access it easily.

The reason why we're considering IndexedDB is that we need to be able to quickly check how many MR copies we have older than seven days, for example, and destroy them. That's one use, for cache invalidation, and the other is just being able to grab specific parts of the data. If we want to keep it in local storage instead, we're going to have to, you know, parse the JSON that is kept in local storage, read it, check it, all right. Same thing, well...
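The seven-day sweep described above can be sketched as a small helper. This is only an illustration: the entry shape, the `mrId`/`storedAt` field names, and the in-memory array are assumptions; in the real client the entries would live in IndexedDB keyed by MR id, which is what makes this kind of query cheap.

```javascript
// Sketch of the cache-invalidation check discussed above.
// Each entry stands in for a cached batch-diffs response stored per MR.
const MS_PER_DAY = 24 * 60 * 60 * 1000;

// True when an entry is older than maxAgeDays relative to `now`.
function isExpired(entry, now, maxAgeDays = 7) {
  return now - entry.storedAt > maxAgeDays * MS_PER_DAY;
}

// Partition cached MR copies into ones to keep and ones to destroy.
function sweepCache(entries, now, maxAgeDays = 7) {
  const keep = [];
  const destroy = [];
  for (const entry of entries) {
    (isExpired(entry, now, maxAgeDays) ? destroy : keep).push(entry);
  }
  return { keep, destroy };
}

// Hypothetical data: two cached MRs, one eight days old, one a day old.
const now = Date.parse('2022-11-17T00:00:00Z');
const mrCacheEntries = [
  { mrId: 1, storedAt: now - 8 * MS_PER_DAY },
  { mrId: 2, storedAt: now - 1 * MS_PER_DAY },
];
const { keep, destroy } = sweepCache(mrCacheEntries, now);
// destroy holds MR 1 (older than seven days); keep holds MR 2.
```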
A: So the motivation was mostly just making it much faster to render. Now, you do bring up a good point: is there any advantage?

Okay, I just triggered my Siri. Yeah, I'm trying to think if there's any benefit... if, by calling the endpoints, we're getting warm caches and it just immediately responds. I haven't seen it working, so that's a funny little experiment to try.
B: But that's metadata; I think the heaviest part is this page, yeah. I think maybe we should investigate a little bit why we still send these responses, like why it still takes so long for us.
B: I'm not sure that HTTP caching is really working. Even though we are getting, like, three or four, yeah, we're still waiting for around 500 milliseconds, which is basically the same amount of time it takes to get a fresh request without any caching. So it doesn't seem like it's doing anything, at least for now, because if it was cached, we should see almost no response time; it should just be served, like, from an in-memory cache or a local cache, whatever.
A: Right, try to find a large MR with a lot of files. Oh, I think I can... I don't think this triggers more than one batch diffs call... oh, two, okay, so 200.
B: It has a must-revalidate header: Cache-Control: must-revalidate. I wonder if that's causing it to wait for the backend to respond. Yeah, I think we can... we should investigate this, because it doesn't seem like it's working.
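As a rough illustration of why that header would force the round trip: under simplified RFC 9111 semantics, a cached response can be reused while fresh, but once it is stale, `must-revalidate` forbids serving it without a successful revalidation against the server. A minimal sketch (the header values and the simplified freshness rules are illustrative, not GitLab's actual caching code):

```javascript
// Parse a Cache-Control header into a directive map, e.g.
// "max-age=60, must-revalidate" -> { 'max-age': '60', 'must-revalidate': true }.
function parseCacheControl(header) {
  const directives = {};
  for (const part of header.split(',')) {
    const [name, value] = part.trim().split('=');
    if (name) directives[name.toLowerCase()] = value ?? true;
  }
  return directives;
}

// Can a cached response be served without contacting the server?
// (Simplified: real HTTP caching has more directives and heuristics.)
function canServeFromCache(header, ageSeconds) {
  const d = parseCacheControl(header);
  if (d['no-store'] || d['no-cache']) return false;
  const fresh = ageSeconds < Number(d['max-age'] ?? 0);
  if (fresh) return true;
  // Stale: must-revalidate forbids reuse without a revalidation round
  // trip; without it, a client might still reuse the stale entry.
  return !d['must-revalidate'];
}

canServeFromCache('max-age=60, must-revalidate', 120); // stale -> hit backend
canServeFromCache('max-age=60, must-revalidate', 30);  // fresh -> cache ok
```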
A: Let's move on? Yep, yeah, right, thanks for that. That brings us to the virtual scroller issue that I have somewhere... here we go, right: virtual scroller. We talked a little bit about the static render being rendered client-side, the part that never changes, so we're looking into rendering in a worker, and then I asked... So, do you want to share some status about this at all?
B: Yeah, I had some progress exploring this. Basically, I have a working demo which is rendering this app using Vue 3 compat mode, and also a working demo of rendering our diffs only, in a web worker. I'll try to show you how it works right now. Basically, it's just this: it's not any of the other stuff, it's not discussions, expanding lines or whatever; sure, basically no interactivity. But it's working already, in a very primitive form. So let me share the screen.
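A web worker has no DOM, so a demo like this presumably renders markup off the main thread and posts it back as a string for the page to insert. A stripped-down sketch of that idea; the diff-line shape and the function names are invented for illustration, and there is no Vue or real worker here:

```javascript
// Minimal stand-in for rendering diff lines to markup off the main thread.
// In a real demo this would run inside a Web Worker, and the resulting
// string would be sent back to the page with postMessage.
function escapeHtml(text) {
  return text
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
}

function renderDiffHtml(lines) {
  const rows = lines.map(
    (line) =>
      `<tr class="diff-line ${line.type}">` +
      `<td class="line-number">${line.number}</td>` +
      `<td class="line-content">${escapeHtml(line.content)}</td>` +
      `</tr>`
  );
  return `<table class="diff-table">${rows.join('')}</table>`;
}

// Hypothetical worker wiring:
//   worker.onmessage = ({ data }) => postMessage(renderDiffHtml(data.lines));
const html = renderDiffHtml([
  { number: 1, type: 'added', content: 'const a = 1;' },
  { number: 2, type: 'removed', content: 'if (a < 2) {}' },
]);
// html contains both rows, with the "<" in the source line escaped.
```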
B: Yeah, right now it usually takes some time, because it has lots of errors in the console, and in that case Chrome takes forever to refresh the page.

A: There we go: JS heap size, yeah.
B: Yeah, it's quite large right now. One of the reasons is I've not removed a lot of stuff yet, right, yeah. It's expected that the JS heap size is large.

It's basically instant: once it renders, it's completely... it is really fast. And also, as you can see, the memory is not growing; it's staying fixed at the same position, which is a good thing, right. It might also be related to the stuff from the previous run, so it might just not be cleared yet.

Yeah, I'm still using the relative requests; they are not shown here for some reason.
A: It's still work in progress for sure, but thanks for showing me this. Yeah, I think it's just worth keeping on experimenting there and seeing if we can iron out some of the kinks. What's the plan there? What's next?
B: Yeah, that might be troublesome, I'm not sure. Okay. Also, I would like to test at least putting all the interactivity into the diffs: expanding these lines, changing the list view from side-by-side, and the line navigation in the sidebar. All this stuff I would like to make at least partially working, to see the complexity that we'd have to introduce, whether it's even complex or not. Also, I would like to measure it, once it's working, against our current virtual-scroll solution.
B: I was wondering the same thing, and I was wondering if we can cache the result of the render, because right now, if we decouple static and dynamic... we can just cache all the rendered stuff and reuse it as HTML, so basically print it on...
B: So I plan to investigate decoupling static and dynamic content, to see how it works. Sure, does it make sense? I also found a lot of caveats regarding this approach. For example, we cannot use a style tag in SFCs, in Vue SFCs, if we want to render this, at least for now, because it tries to inject styles into head, which we don't have in a worker, so it basically crashes, right. And there are many, many more other things that I could list forever.
B: But one thing I've noticed is that it is still limited by the way we start our apps: we still have to wait for the whole page to load, and for the bundle to load, before we can do any rendering. That is part of the reason...
B: I think we should look a little bit more into server-side rendering all this stuff. Okay, I suppose it used to be server-side rendered and then moved to client-side, I'm not sure. Yes, so if we had this server-side rendered, we could actually combine these approaches: we could fetch the server-side-rendered stuff.

So if we can get the stuff server-side rendered and fetch it from the server, especially if it's cached already, yeah, that would be much faster than the worker solution, because the worker has to spin up, and it also has a completely different bundle, which also takes time to parse and compile, right.
A: Especially for the first paint, yes, if it's just the diffs, the diff lines. I mean, we always try not to repeat ourselves, right, but I wonder if we could get a prototype built with a Rails template, with a Haml template. We could just port what we have and use Haml to generate those server-side-rendered diffs for now.
A: The downside of that is that we won't be able to update it in one place and then have it be reflected everywhere. But if we can just use that for the diff contents and generate that on the back end... now, we'd still need to batch it for the front end. The reason why we moved away from server-side rendering in the past was exactly that the page was taking, you know, 30 seconds, even minutes, to render, to even render the first byte, because everything was being computed in the back end. So if we could give it a test of how it would look, in a way that we can still work with it in the front end, that could be interesting.
B: We're not necessarily, like... we don't need to batch it; we need to do just one request. So basically, the rendering could be split into two parts. The first part is rendering, like, the first five files on the server, which shouldn't take a lot, and the user immediately gets this rendered, without any spinners, without any additional loading of client-side code.

You can see, like, the first files, but after that... not after that, but as soon as we render the page server-side, as a page, we request the server-side rendering of the rest of the files and start streaming it right after the page is loaded.
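The two-part plan described here, rendering the first few files with the page and streaming the rest afterwards, amounts to a simple partition of the MR's file list. A sketch; the batch size of five comes from the discussion, while the function and field names are invented:

```javascript
// Split the MR's files into the batch rendered server-side with the page
// and the remainder that would be requested and streamed after load.
function splitForStreaming(files, firstBatchSize = 5) {
  return {
    renderWithPage: files.slice(0, firstBatchSize),
    streamAfterLoad: files.slice(firstBatchSize),
  };
}

// Hypothetical MR with eight changed files.
const files = Array.from({ length: 8 }, (_, i) => `file-${i}.js`);
const { renderWithPage, streamAfterLoad } = splitForStreaming(files);
// renderWithPage has 5 entries; streamAfterLoad has the remaining 3.
```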
B: I have to look into what exactly we are embedding inside the JSON, because it's not just, like, highlighted lines; it's also, like, code coverage, yeah, and lots of other stuff. So I guess we'll still have to request this stuff. But the good thing is that now we can decouple this, so, sure, the lines themselves are decoupled from the metadata they have, like the code coverage and discussions. At least we can see all the code and render all the stuff on top of it, but we already...
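The decoupling described here, where pre-rendered lines arrive first and metadata such as coverage or discussions is fetched separately and layered on top, could be sketched as a merge step. The object shapes and names below are invented for illustration:

```javascript
// Merge separately-fetched metadata onto already-rendered diff lines.
// Lines can be shown immediately; coverage/discussions arrive later.
function overlayMetadata(lines, metadataByLine) {
  return lines.map((line) => ({
    ...line,
    ...(metadataByLine[line.number] ?? {}),
  }));
}

// Hypothetical rendered lines plus late-arriving metadata for line 1.
const lines = [
  { number: 1, html: '<span>const a = 1;</span>' },
  { number: 2, html: '<span>return a;</span>' },
];
const metadataByLine = {
  1: { coverage: 'covered', discussions: 1 },
};
const merged = overlayMetadata(lines, metadataByLine);
// merged[0] gains coverage info; merged[1] keeps only its rendered HTML.
```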
A: Got it, right. It would be interesting to see how that could work. So I'm writing a comment here, so you can... I missed some of your next steps, and I added some caveats there: we need to make it faster, that's with HTTP caching; what if we could cache the output of the render worker in the client; and some caveats.
A: It would be much, yeah, it would be much faster than any worker, due to the bootstrapping, the bundle parsing and bootstrapping, the time it takes, especially if cached on the back end.
B: One thing I would add here is that we already have server-side-rendered diffs, but I think they are very basic, because I've recently seen, like, Haml templates that reuse all these, like, lines, to try to render them. I'm not sure where it's used, but it's there.
A: So that's probably... we're still using this; it'll be whatever's still in the templates.

Anything... yeah, so we'd want to take that and update the markup and stuff, but it shouldn't be too different. I think once we get to the line, that's just the highlighted line that we get from, I don't know, Rouge or something. So yeah, it could be interesting to test that out.
A: So, should we call it there? I'll share it here. What were the next steps? Can you just help me fill this in? I was writing these: questions, testing inline... in parallel, there was something...
A: "Server-side rendered, see below." So there we go; I'll link to that. I think it's a good update. Right, anything else on this topic?
A: Cool. And then the other one was suppressing file diffs to reduce the overall payload. Do we have any more thoughts here?
A: I think this is a no-brainer in terms of, like, gains, right. If we have merge requests with these files that are not necessary, great, we can suppress them. But I think this is mostly, like, back-end, because the back end has to detect them, and then they need to not send them in the payload or whatever, and then offer a way to reload them, which I think we already have, given the path. So yeah, there's not that much here for me to think about; it's more, like, back-end-wise: is this feasible?
B: I think the question is: if we cut anything from the API, does it really improve the performance of that API? Because it seems like it's not the amount of data we send; it's the amount of calls we do. And I don't think we can avoid calls to, like, fetching the diff itself from the API; it doesn't remove that call, right. So even if it has the data, I'm not sure it can actually improve the performance of the API, but we'd have to ask back-end here.
A: Yeah... whether the responses would improve significantly, because it doesn't seem like that's the problem. Let me share the screen so you see what I'm writing, to see if I'm reflecting what you said well: it doesn't seem like we're being slow on the transmission so much as that we're being slow in the queries we do; yeah, suppressing would remove them.
B: I also think... if we measure that, like, rendering on the server is actually a whole lot faster than virtual scrolling, yeah...
A: Also, a relevant topic is the possibility of server-side rendering parts of the diff; see there. Okay, all right, I think it's good. So yeah, now the agenda is empty, if we didn't miss anything. So let me just write here what we revised... I know what I'll do: I'll just copy this, because we discussed that, and we discussed that.
A: Topics discussed, comments added in the issues, please check, so that everybody who's watching the recording can take a look at the issues that we commented on. So that's the end of the issues that we have open. Is there anything else on your mind?
A: This was useful. I think we were still able to cover good ground, even though we don't have any other back end or front end here. But thanks for coming, Santa.
A: Kind of our little performance-focused one-on-one, yeah; I appreciate it. Let's keep the ball rolling. I'll open a new issue for next week, and I'm also going to check about the time, to see if we can make it in a way that we have more back-end people available to join the call. I think if it were a bit earlier, we could probably include David, so I want to see... yeah, it's seven already for him, so yeah, I'll get the ball rolling on that. Thank you.
A: My pleasure. If it wasn't worth it, you could say so, no pressure. As always, thank you; you're too kind. All right, see you later, man.