From YouTube: 2022-12-15 Code Review Performance Round Table
A: All right, we are live. Welcome to yet another weekly Performance Round Table for the code review group. It's just us in attendance, I think; Patrick is out of office, so we can just go ahead and get started.
A: So the first topic that we have is improved caching of the diff files batch API. This is still an open issue being refined, but is there anything else left to talk about here? Do you know?
A: Awesome, that's the first one. Well, I wanted to also show that on the release board we already have this in line to be scheduled, which seems to be the direct outcome of that, I think, so this would probably be scheduled. Oh, this isn't the meta issue. Never mind. Yeah.
A: How do I put it... all right, that was embarrassing. So, having said that, let's go over to the next one: split and paginate the file tree from the diffs metadata. We don't have... and this one has no milestone.
B: Yeah, so there is a new issue for the new entry point.
B: Once that one is done, we can actually try and implement the front-end logic for it. But, well, it's not finished.
B: There is still a proof of concept by Patrick, which I'm going to experiment with to see if it actually improves the performance of the page. Okay, so the front end is not blocked, as that is only like a production version of the...
B: Even if there are, like, 1000 files changed, each with only one line changed, we will still have to wait for these 5000 files before we can show all the controls, like version checks, etc.
B: We can create a separate issue for the front-end experimentation: test this POC that Patrick made, implement the front end with it, and see if it produces the results we expect. That then opens the door for the back-end issue to properly implement it.
B: Chrome doesn't properly cache this request. Even if we add a Cache-Control header saying that you must cache it, that there is a max-age for this resource, Chrome still doesn't cache it. It still makes a request to the server, we still do the full processing, we still return a 304 saying the result didn't change, but there is a round trip to the server which shouldn't be there. We've tested it in Firefox, and Firefox doesn't have this issue.
B: So if we implement this using a cache key on the front-end side, it will work only in Firefox at the moment. I don't see how we can fix it for Chrome, because this is almost certainly a Chrome bug.
B: No, there is a different approach to HTTP caching that is working right now, which basically sends the request to the server; the server checks the cache key and returns a 304 saying it didn't change. So we basically still have to go through some of the overhead on the Rails side. It takes about three to four hundred milliseconds, and you get the 304 response, which basically tells the browser to reuse the previous response.
B: So it's working, but it's not ideal, and what we want to do is remove this round trip to the server altogether. If we provide the cache key with the page and make the request, the browser should serve the request from its memory cache immediately, given the cache key, and this memory cache is what's not working in Chrome for us right now, right?
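The two modes B contrasts, the conditional-request flow that works today versus a true no-round-trip cache hit, can be sketched roughly in plain Ruby. The method names and header values here are illustrative assumptions, not GitLab's actual code:

```ruby
require "digest"

# Mode 1 (what works today): a conditional request. The browser still
# makes the round trip; the server compares the client's ETag and
# answers 304 with no body when nothing changed.
def conditional_response(body, if_none_match)
  etag = %("#{Digest::SHA256.hexdigest(body)}")
  if if_none_match == etag
    { status: 304, headers: { "ETag" => etag }, body: nil }
  else
    { status: 200,
      headers: { "ETag" => etag,
                 "Cache-Control" => "private, max-age=0, must-revalidate" },
      body: body }
  end
end

# Mode 2 (the goal): embed a cache key in the URL and mark the response
# immutable, so a compliant browser answers repeat requests straight
# from its memory cache, with no request to the server at all.
def immutable_headers(max_age: 86_400)
  { "Cache-Control" => "private, max-age=#{max_age}, immutable" }
end

first  = conditional_response("diff metadata", nil)
repeat = conditional_response("diff metadata", first[:headers]["ETag"])
# repeat is a 304 with an empty body: bandwidth is saved, but the
# server round trip (the ~300-400 ms mentioned above) still happens.
```

Mode 2 is what the cache-key approach aims for, and it is exactly the path that reportedly works in Firefox but not in Chrome.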
A: Right, right. I'll put it in the notes and ask the same question later.
A
If
we
want
to
Once
step
back
in
this
much
and
close,
oh
yeah,
this
is
a
really
good,
login,
clap,
so
I'm
not
going
to
make
the
bed
that
matter
mistake
again,
yeah
for
sure
we
can
keep
it
open,
we're
probably
going
to
be
making
it
workflow
blocked,
though,
does
it
make
sense
the
product.
B: On this one: blocked by what, exactly?
A: It's the issue that Patrick mentioned. Okay.
A: My goal is just to remove this from the weekly calls, right?
A: All right, next topic. Are we on time? All right, we're doing good. It's like we're in the 101 virtual schooler.
B: Not really, because we are actually blocked by a lot of things. First of all, the migrations that we do are not backwards compatible, so the branch I was working on stopped working after I updated GDK. And after we rebased onto the latest master it also stopped working, because some changes in gitlab-ui for some reason broke it. And the third thing is that gitlab-ui actually reverted the Bootstrap-Vue upgrade, so... yeah, right.
B: If we cannot resolve it, I'll just straight up put it in the closet and we'll focus more on the server-side rendering stuff, because that seems like a more productive approach, with those overheads that the worker has, and it's still a lot of code on the client side. So I think shifting our focus to server-side rendering in Rails makes more sense.
B: We discussed it previously, I guess, and it looks simple on the surface. Like you said, it's just a few components that we convert from Vue templates to Rails. Yeah, it's not that simple, actually, because we do a lot of transformations. I've looked at the code, and it transforms the data required to render these lines into some new structures.
B: Hopefully I can get rid of these transformations and transfer the templates as-is to Haml, but it might take some time. I haven't dug really deep into that, but I have discovered that we do a lot of transformations there.
A: Yeah, the question... I'm not so worried about the server-side rendering itself; I'm more concerned about how we hydrate it later. After the thing is put on the page, how do we bring it to life? Is that fast, is that slow? Are there any hurdles?
B: I will definitely say it will be much faster, but the problem is that the code won't be as declarative as it is right now. Like, we just mutate the state, and discussions appear on these diffs, and it all just works. With the new approach we'll have to do more manual work, but the performance will be much, much faster, that is for sure.
A: Right: suppressing files in diffs to reduce the overall payload. I'm not sure this has been... oh okay, okay, let's go back and inspect this for a bit if we have time. Okay.
B: If it's centered around not sending files that are too large, for example...
A: Yeah, so there are two things. One: the files that are too large, we're already truncating them at a certain point, so that's already done, and if anything we would like to move away from that truncation. But that's another story. This one is more like: you run a project that generates a lot of automated files, so why should those need to be reviewed? I'm of the mind that any file in the merge request needs to be reviewed, otherwise it could be a vehicle for, a vector for, an attack. But that's another story. Yeah.
B: If we render the diffs on the server, we won't need this API at all, so there will be nothing to do there. There's an example here of how it would look: we would basically embed all the data into the elements and their datasets, and we won't have to request it anymore.
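A minimal sketch of what embedding the data into the elements could look like, using plain ERB rather than GitLab's actual Haml templates; the attribute and field names here are hypothetical:

```ruby
require "erb"
require "json"

# Server-render diff lines with the metadata the front-end needs
# embedded as data-* attributes, so no follow-up API call is needed
# to hydrate discussions; the client reads element.dataset instead.
DIFF_LINE_TEMPLATE = ERB.new(<<~HTML, trim_mode: "-")
  <% lines.each do |line| -%>
  <div class="diff-line" data-line-code="<%= line[:code] %>" data-meta='<%= line[:meta].to_json %>'><%= line[:text] %></div>
  <% end -%>
HTML

lines = [
  { code: "a1b2_1_1", text: "+added line",   meta: { type: "new", number: 1 } },
  { code: "a1b2_2_2", text: "-removed line", meta: { type: "old", number: 2 } }
]

html = DIFF_LINE_TEMPLATE.result_with_hash(lines: lines)
# Each rendered div now carries its own metadata, so the page needs no
# separate request to the diffs metadata endpoint for these lines.
```

This is only a shape sketch; a real template would also HTML-escape the line text and attribute values.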
A: So the question I have there is exactly how fast... sorry, how much slower will that make the page? Because, remember, in the first days, what made us move to asynchronous rendering of the page was exactly that the page itself, being rendered in Haml, took a lot of time to render. Which is fair: if it's a large file, if it's a large MR, it takes forever to render all the files, right?
A: But there's always the overhead of fetching the data, and that's the question: is that overhead for the five files acceptable? I think that's worth a spike for the back end to answer that question.
B: I hope you're right, because we are skipping the whole chain: loading JavaScript, executing JavaScript, making a connection to the server, the round trip to the server, getting the big data, parsing it, and then rendering on the client. It all takes a lot of time. We just completely eliminate those steps and render it immediately, so it should be faster. I think there is just no way to outperform server-side rendering for the first render, at least.
A: Can you partially look up the files through Gitaly very quickly, without having to load the whole MR? Those are back-end things that need to be investigated. I'm still going to press, Matt; I don't want to ignore what you just said, I completely hear it, but I'm still going to push for a potential spike to investigate this on the back end at the end of the day. Did he...? No, he didn't, right? So, all right.
A
Sweet
the
next
one.
This
is
a
new
one
that
I
introduced
yesterday,
so.
A: This is something that Natalia and the Plan team are investigating to make the issue list's labels filtered search faster. Okay, are you aware of this? Are you on top of it, do you know what this is? Yeah? Okay, so I can skip a few things for you, but for the recording: this is the MR that they are currently working on, and, um, essentially it requires decorating the queries a little bit with some flags and stuff. This essentially makes the data that Apollo caches persist across pages; it gets refreshed from time to time, and for things that are really very frequently used it makes a lot of sense. I wanted to raise it here just to keep it in our minds.
A: The only question I have is how useful persistence is for stuff like the merge request widget, because that's something you want to be fresh, and it already has real-time updates built into it. So if anything, I feel like we could potentially look into the user lookup, this one here, because the first one is always the same...
A: The first lookup, at least, being super snappy would be a nice first approach, and then we can build from there. Of course, if I start typing a name, then it's super fast; but the first lookup is where we could probably benefit from something like this across pages. I don't think they're looking into it, but eventually I think they will get there, because issues are also affected.
A
Issues
are
also
a
place
where
we
have
the
at
thing,
so
it
doesn't
necessarily
come
into
our
specific
code
review
things,
but
I
wanted
to
flag
this
in
case
somebody
has
any
ideas
to
at
least
take
a
look
and
see
the
demo
that
so
that
she
had
a
demo
on
the
application
performance
session
on
the
15th,
and
maybe
somebody
watching
the
recording
will
find
it
interesting
and
and
watch
and
that's
all
this
demo.
B
I
imagine
missing
something,
but
I
think
we
we
don't
have
any
any
places
where
we
could
utilize
this,
because
for
the
as
you
described
for
the
merge
request
match
widget,
we
always
want
to
have
the
fresh
State,
because
people
are
constantly
confused
with
our
mesh
widget
like
why
it's
not
merging
or
why
it
doesn't.
Allow
you
to
merge
so
introducing
their
outdated
state
will
make
it
even
more
confusing
yeah
Force
the
autocomplete
you
described,
I
think
it
doesn't
use
graphql
API,
which
is
one
of
the
limitations
of
this.
B: I think we can just reuse the data on the page itself, because we have all the usernames from the comments, from assignees, reviewers. We already have all this data; we can reuse it, but it's not wired into the suggestions widget.
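The idea of seeding the suggestions widget from users already on the page can be sketched like this; the method and field names are hypothetical, not GitLab's actual code:

```ruby
# Collect the users already rendered on the merge request page (author,
# assignees, reviewers, commenters) and deduplicate them by username,
# so the first autocomplete lookup can be answered without a request.
def seed_suggestions(author:, assignees: [], reviewers: [], commenters: [])
  [author, *assignees, *reviewers, *commenters]
    .compact
    .uniq { |user| user[:username] }
end

seed = seed_suggestions(
  author:     { username: "alice", name: "Alice" },
  assignees:  [{ username: "bob",   name: "Bob" }],
  reviewers:  [{ username: "alice", name: "Alice" }], # already seen: deduplicated
  commenters: [{ username: "carol", name: "Carol" }]
)
seed.map { |u| u[:username] } # => ["alice", "bob", "carol"]
```

The widget could show this seed list instantly and still fire the normal request in the background for completeness.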
B: I played with it locally a little bit, because I was frustrated by this recently. I scrambled through the code, and it's not straightforward, because it already has a cache, and we would basically have to introduce another level of cache, which is like pre-caching stuff and then properly caching these requests. It would make the logic a lot more complex. So that's why I created... that's why this issue is there: because it involves a lot more time than I expected, actually. Okay.
A: There's one thing that I think I'm gonna do. Well, first I must commend the me of three months ago, because it sounds really smart. But, more seriously, I'm gonna drop a note to Natalia to check, given the work that she's doing on the persistence of the Apollo cache, whether they see this as a candidate for that improvement, requiring refactoring into GraphQL. I think it's worth asking.
A: So let me just share the screen again; we'll do it together, it's more fun that way. It's gonna be in our chats again.
A: Natalia, come on. This is just an example of where...
A: Thank you. Oh, cache persistence: the question came up whether this should also be moved to GraphQL to benefit from that persistence. What do you think, do you have thoughts on this? It doesn't exclude making the optimizations for reusing page-relevant users (author, reviewers, etc.), but if that is to be done, the work can be planned iteratively, taking that into consideration. To be clear, we don't have any immediate plans to schedule this, so if anybody wants to tackle it, feel free to go ahead.
A: And... I don't know who... that one's already in the thread, right. Thank you so much, that was interesting. Anything else on your mind that we want to discuss before we wrap this up?
B
Are
things
of
you
free
migration
is
taking
taking
long
for
us?
Okay,
since
we
are
actually
dependent
on
the
stuff
I
think
we
shouldn't
expect
it
to
land
anytime
soon,.
B: I wouldn't count on it, because with all the issues in gitlab-ui they stopped the Vue migration to the new version, and with the pace at which all the broken stuff is being updated, it might take a while. So we really shouldn't make a big bet on the Vue 3 stuff, okay, at least for now, and should work with what we have.
A
I'm
particularly
interested
in
investigating
the
partial
server-side
rendering
part
and
see
how
we
can
benefit
from
that
realization
that
a
lot
of
our
page,
the
big
bulk
of
data
that
we're
transferring
is
pretty
static.
So,
okay,
it's
been
useful
and
productive
and
that's
that's
it.
Do
you
have
anything
else
to
talk
about?
Can
we
just
edit
here.