From YouTube: 2022-11-03 #2 Code Review Performance Round Table
A: All right, we are live. Welcome to our second Code Review Performance Round Table, and without further ado I'll get straight to the topics that we need to discuss.
A: So we have a returning topic about streaming data to the front end — what data, and what changes could be required or done in the back end. And we have a pending item that we were talking about last week: what data do we store in Redis, exactly, and is it structured or just a plain JSON string? Patrick, you'll have to comment — can you summarize it for the call very quickly?
B: Quickly, yeah. We still cache or store some things in Redis for diffs, but we removed the serialized caching for the diffs endpoint and for discussions, so we're no longer storing those. But there is still some for the list of merge requests and merge requests served by the REST API, so those are still in Redis, still cached as serialized JSON, plus some other stuff like the highlight caching — that's the cached stats cache, so it's related to diffs.
B: That's it, yep. Yeah, I think there was an issue for that before, because we're using a lot of Redis storage — and it's not like you can use that much — so we started to remove things from Redis, and its utilization is okay now, just behaving okay.
A: Yeah, so last week we talked a little bit about some discussions going around the "one app" concept, and I shared the link on the agenda last week.
A: The idea there is to do even more aggressive client-side caching — not just relying on HTTP caching for the requests, but actually caching data on the front end in a structured database like IndexedDB, and then using a service worker to control the way we load the data and cache it, checking the caches before requesting from the server. So that is something in the realm of possibility for us.
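The cache-first idea described above can be sketched roughly like this (a minimal sketch with hypothetical names, not the real GitLab code; in the browser the store would be IndexedDB behind a service worker, here a `Map` stands in for it):

```javascript
// Cache-first loading: look in a local structured store before the network,
// and populate the store on a miss. `store` stands in for IndexedDB.
const store = new Map();

async function loadWithCache(key, fetchFromServer) {
  const cached = store.get(key);
  if (cached !== undefined) {
    // Cache hit: no network request at all.
    return { data: cached, fromCache: true };
  }
  // Cache miss: fetch, then remember the result for next time.
  const data = await fetchFromServer(key);
  store.set(key, data);
  return { data, fromCache: false };
}
```

A service worker would apply the same check inside its `fetch` handler, so even full page loads could be answered from the local store.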
A: So if we do, does that mean we could get a little bit smarter about it? Instead of just storing the request response — for example, instead of storing the batch requests in the HTTP cache — we could store the actual structure of the merge request, the code review data, on the IndexedDB side of things in the browser.
A: Any thoughts there? Any objections to it? Any concerns?
A: I think—
C: I have the most relevant experience with this approach, because yesterday I was fixing a bug which was caused by us having SPA-like pages — the merge request overview page and the diffs view. Basically it simulates an SPA, and it has a lot of problems when you need to go from one page to another page.
C: So maintaining that approach is actually quite hard on the front-end side, because we basically have to code all the things the browser does for us, and it has to be really reliable, because if it doesn't work we are just out of luck. So working on that is actually tedious for the development team. Even if it does make sense from the performance perspective, we should also consider whether it outweighs the development and maintainership costs.
C: Well, I think Tim's proposal is two things at the same time: he wants to cache all the data and do SPA-like behavior on the pages, so when you click on a link it's instantly shown. I don't think these things work separately — he wants to combine them into one single proposal, I would say. And doing the SPA stuff is really hard, so we have to seriously consider: is it worth the effort to put a lot of resources into that? Okay.
A: So again, that effort is a little bit wider than code review, and it's definitely something for us to reach out and talk about. In the meantime we'll take note here and move on to the continuing discussion of this, because it's about the streaming of data. Thanks for that note. Patrick, you had a question on your comment?
C: Yeah, the main problem with the batched approach that we have right now is that you might have five small files in the page, or you might have five very large files in the page, and this will seriously affect how soon we get the response from the back end, and how soon we can show these files to the user.
C: So my idea in the next point was to actually get rid of batching completely for the files and just serve them separately from each other, so we can fetch them in parallel. That could be much more efficient for the users when we use streaming, because we can show files instantly as they are served, instead of waiting for the whole page.
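The "show files as they are served" idea above can be sketched like this (a hypothetical helper, not GitLab's actual code): start every file request in parallel and hand each file to a render callback the moment its own request resolves, rather than awaiting one big batch.

```javascript
// Fetch files independently and render each one as soon as it arrives.
// The returned array records arrival order, not request order.
async function renderAsServed(paths, fetchFile, renderFile) {
  const rendered = [];
  await Promise.all(
    paths.map(async (path) => {
      const file = await fetchFile(path); // each file fetched on its own
      renderFile(file);                   // render immediately, possibly out of order
      rendered.push(path);
    })
  );
  return rendered;
}
```

With this shape, one very large file no longer blocks the smaller ones from appearing.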
C: Right now that's not really a problem, because we don't use any kind of streaming or progressive rendering — it's all just plain "show the files" rendering. But with streaming we have to consider a different approach, because it really depends on the data that we have. If we are slow on the data side, streaming won't really benefit us, because we'll still have to wait three or four seconds for the whole batch to be received.
A: And I'll build on top of that. When I raised this question — one of the things we discussed in the past with the batching already is that when we're requesting the data in different requests, Rails gets each request independently. In essence, I'm not entirely sure how we do it on the back end, so that's why I'm asking: we have this pipeline of fetching data from Gitaly, storing it in the database, and then serving it to the browser, right?
A: We also have that distinct behavior between a HEAD comparison and comparing different merge request versions. Is that normalized already? I don't think it is.
A: Right, that's reasonable. So the point then becomes: today we're requesting with pagination, right? We request the first page of files — I'm already talking about batch diffs here, not the metadata — the first batch, then we request the second, then the third.
A: We can parallelize the requests, but it's always paginated, and my fear — we've discussed this in the past — is that while we're serving the first request we still have to get the data, but we know for really large MRs that the next pagination request is coming right after that.
A: So I always wonder: if we did this differently, if we rewrote this today, could we find a different approach? Like Stanislav is saying, we don't have to wait for the first file, if that's the largest, to start serving the second and the third — we could just start streaming them down the pipeline as soon as we get them. But I also don't know if that's even possible, given the way we get the data from Gitaly. And last week we discussed briefly a suggestion that Kerry had made in the past, which was to pre-compile this data and store it somewhere — if not in Redis, some object store. It could be an option to pre-calculate the data structure that we need for each MR version, store it somewhere, and then we'd just be requesting static data and streaming it as soon as we can. So that was the question.
A: Does that make sense? Okay, so do you have any ideas of how we can benefit, how we can change things, or what we would have done differently if we were starting now?
B: I think Stanislav's idea below, regarding fetching the first file — I think that's possible. I think it's possible right now.
B: I think the diffs batch endpoint can accept a path param. So if we have the list of files that we need to load, then we can have, I don't know, multiple requests at once, and whichever comes first gets streamed. But yeah, I'm not sure about — I think the next… let's call this the—
A: So maybe we can start moving there — sorry, I just invoked Siri by accident — so we can move on to that discussion, and then that can probably fit into this. The second question that was still open for the front end is: what's the ideal transport channel to get the required information streamed from the back end? Just to give you a bit of background here, I'm speaking generically on purpose. There are several options: WebSockets, GraphQL, even gRPC.
A: So what's the ideal way to deliver this data to the front end, given our past experience? This is right up your alley, Stanislav. Thomas did leave a comment wondering if WebSockets would be a good option, but Stanislav, let's hear it.
C: I think it doesn't really matter which protocol we use. It doesn't matter if we use HTTP or WebSockets — we still have to wait for a single batch of files to actually show it. We can't partially receive half of a file and render half of the file; we have to wait for the whole thing. So I don't think WebSockets will bring us any benefits there.
C: But what I'm mostly concerned about is caching. If we can cache at the HTTP level, that would be really nice. That means no GraphQL, because GraphQL cannot be cached with HTTP. If it can be cached both on the Rails side and on the HTTP side, that would also be very nice.
A: That's interesting. But wouldn't it — so doing HTTP requests, we'd have to do basically individual requests per file, right? And wouldn't that leave us in the same position we're in today, where we have to wait for the first to finish — well, where we have to consume them in a sequence, right?
C: No, because that's one of the downsides of WebSockets: they have to stream the files sequentially, while with HTTP we can actually fetch them all in parallel — just launch the requests as soon as we receive the metadata. So WebSockets really don't benefit us in any way, and HTTP is actually more beneficial when we do streaming.
A: Okay. We have an open discussion about that in one of the issues — I can't remember which one. One of my thoughts there is that this is very much front-end controlled, right?
A: We control when we do that request, and I'm not sure that matches when the backend is ready to give us that information. What I mean is: if we could have a way for the back end to control delivering us the information as soon as it has it, that could give us the opportunity of pipelining, in a way, or parallelizing.
A: The responses — where the front end would be handling them as soon as each packet is ready. That is the idea behind the question. Do we have any solution in the market that could give us that? Is the back end even able to do this in a non-sequential way, or do we always have to get it from Gitaly or from the database and always deliver it in a somewhat sequential way?
B: Yeah, that's what I wanted to say, because if you're going to stick with the current order — the order of the—
A: Yeah — maybe one of the discussions we can have in the future is to drill down on pre-calculating those payloads. We talked about this last week: those payloads only really change when there's a change in the history.
A: So the question is whether that will even stay accurate for long. We can have a discussion on that in the future, just to validate it. Should we move on? We have a couple of topics today, so let's let that marinate a little bit. Thank you for that feedback, Patrick. So the same approach we have today could be used for streaming — we just have to optimize the way we get the data and that sort of thing — and let's think about this in the future.
C: I wanted to add one last point to this. One of the problems right now is also related to the metadata of the merge request, because not only can we have large files, we can also have a lot of files changed — it can be 200 files changed — so the metadata grows significantly large, depending on the amount of files you have. My idea was also to enable pagination for the metadata, so we can start fetching the files sooner and not depend on the size of the merge request. So that was another idea.
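The paginated-metadata idea above could look roughly like this (a sketch with a hypothetical API shape; the real endpoint and payloads would differ): pull metadata a page at a time, and start loading each page's files as soon as that page arrives, instead of waiting for one metadata response sized by the whole MR.

```javascript
// Walk paginated diffs metadata, kicking off per-page work immediately.
// `fetchMetadataPage(page, perPage)` is assumed to resolve to
// { files: [...], hasMore: boolean }.
async function loadMetadataPaged(fetchMetadataPage, onFiles, perPage = 100) {
  let page = 1;
  const seen = [];
  for (;;) {
    const { files, hasMore } = await fetchMetadataPage(page, perPage);
    onFiles(files); // file fetching for this page can begin right away
    seen.push(...files);
    if (!hasMore) break;
    page += 1;
  }
  return seen;
}
```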
A: Yeah, the idea is that instead of having one diffs metadata request, the metadata would also be paginated — give us the first 100 files of metadata, with a larger page size than the five files or whatever it is on the batch diffs, so it would load a lot more per page. Is that an issue? Is that possible? Is that beneficial?
A: Yeah, so Stan, one of the homework items for you is to open that thread as a new issue for next week, so we can have people discuss it asynchronously, and you can kick the tires on it and see if it's worth scheduling work on it. Okay, yeah, sure. Thank you. We have seven — six minutes now to go through the next topics, so I'll blaze through a couple of them. So, there was another returning topic.
A: Kerry had talked about trimming the records on the merge request diff tables for outdated merge requests. There was a bit of discussion there, and it culminated with Patrick asking: should we schedule, in the upcoming milestone, confirming the impact of deleting merge request diff commits records? I would say yes, it's worth a shot.
A: Any reason we should not? I mean, the earlier we look into it, the better, right?
A: All right — a new topic. Thomas Randolph presented something. He's not here, so we can touch on it, but we won't delve too deep into it; we can bring it back next week when he's back. His question is: can we deliver some, most, or all of the data we have as deltas or operational transformations, if I'm not mistaken? He goes a bit into that, I shared some thoughts on it, and then Patrick, you had—
A: There was one idea to cache the diff on the client with an ID, and that was a really interesting comment. Patrick, do you want to bring it here in a quick summary?
B: I think it's kind of related to the one-app thing that you mentioned earlier, but it's a bit simpler, I think. The idea is: if there's a key-value store on the client, in the browser — maybe localStorage or IndexedDB — then on the first request for the diffs, the batch diffs…
B: …you can store — cache — the payload, and because the diffs have a specific ID, that can be used as the key. Then on subsequent requests we can ask the backend: is this still the latest diff ID? Check it: if it's not, make a new request, invalidate the cache, and cache it again. If there's no new one, then we just show the cached data.
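The ID-checked cache Patrick describes can be sketched like this (hypothetical shape and names; the real endpoints and payloads will differ): cache the diffs payload per merge request alongside the diff version ID it was produced from, and on the next visit ask the backend only for the latest ID before deciding whether to refetch the full payload.

```javascript
// Validate a cached diffs payload against the server's latest diff ID.
// `getLatestDiffId` is a cheap ID-only request; `fetchDiffs` is the full one.
const diffCache = new Map(); // mrId -> { diffId, payload }

async function loadDiffs(mrId, getLatestDiffId, fetchDiffs) {
  const cached = diffCache.get(mrId);
  const latestId = await getLatestDiffId(mrId);
  if (cached && cached.diffId === latestId) {
    return { payload: cached.payload, fromCache: true }; // still fresh
  }
  const payload = await fetchDiffs(mrId); // stale or absent: refetch
  diffCache.set(mrId, { diffId: latestId, payload });
  return { payload, fromCache: false };
}
```

This is essentially an application-level ETag: the expensive payload only travels when the diff version actually changed.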
A: Patrick, could that ID come straight from the server-side rendered page, so that we don't have to fetch it from the server asynchronously? I'm guessing that ID should be pretty easy to cache in a very quick, meaningful way.
A: In a data property, and we can just grab it — okay, that makes sense. So we'll continue to look at this from that perspective, regardless of whether it's the one-app SPA concept that Stanislav mentioned before, or just us leveraging it our own way with IndexedDB.
A: And Thomas has a proof of concept from a year or so ago where he tested that exact thing — caching things on the front end. We did have some questions about cache invalidation, and privacy- or security-related concerns about people still having data in their browser for projects they no longer have access to.
A: So there are a couple of questions in there that we want to consider, at least around how we restructure the data and store it, but that is a possibility and I think it would be worth creating an issue. I know you already have an issue for the fingerprinting of merge requests—
A: —getting a fingerprint, I think. But it might be worth an issue for that particular idea of caching things on the client, so you can probably open an issue and discuss it there. Okay, any more thoughts on that? Okay, we want to bring it back so that Thomas can discuss it on the next one, probably, so this will come back — especially the deltas thing; there's probably something in there.
A: What I said in my comment there was that comments are a great candidate for this. If we have them cached, we can always — just like you're talking about with the merge request ID — tell the server, "hey, do you have any comments after this?" And Patrick, my question for you: would we be able to send comments that have been updated, using the updated-at or something — like, "give us all the comments with a created-at or updated-at later than this date"? Would that be possible?
A: So what this idea would be is basically: store all the comments, from all the notes in the merge request, locally in IndexedDB. Then, when we're loading the page again, we immediately render those, and we ask the server, "hey, do you have anything beyond this timestamp?" If it has something, it sends that down the pipeline — either new notes, or notes that have been updated since then — and the front end would deal with that conflict, or update it. So that's an—
A: Yeah, I'll open another issue about the comments too. Okay, all right, we're at time. Do we have a hard cutoff, or should we go on for a little longer? A few minutes?
A: Thank you. By the way, I'll open a thread on whether we should make this call an hour, because last week we were also pressed for time and had to rush the last point. So I'll bring that up, so we can have a discussion.
A: So the next topic: Phil was bringing up the topic of syntax highlighting, and then Kerry brought up the question: when was the last time we did a breakdown of median timings for the various parts of a slow request? That was a really interesting comment, and Patrick, you said: I think we can start tracking durations of certain tasks being done in a single request, so we can know which parts are the slowest and get actual metrics from production data. And Kerry asked:
A: Should we open an issue on what we could track, and schedule this — possibly for 15.8 or even earlier, I would say. So, thoughts there, Patrick? Seems like a slam dunk, right? We should definitely take a look at it, right?
B: Yeah, I think so, because — I don't know, I think it's better to have production metrics, so we can talk about the actual behavior.
A: So help me understand this a little bit. Aren't we able to dissect a request and see which method took the longest, in any circumstance? Do we have to lay down something before we can do that, or could we go do a batch diffs request and analyze what's the longest part of the task?
B: Yeah, we can use a flame graph for that, but it's very specific, so it's kind of hard sometimes — like, "hey, what part of the request is this method being called in? I guess it's part of serialization, or part of highlighting, or getting from the database, or from Gitaly, or something." If we have those specific sections, and we have the duration metrics for each part, then we can see, okay—
A: If I understood that — for example, for front-end metrics, if we want to track timings of events and such, we have to add them to the user timing APIs or something. Do we have to do something on the back end for this flame chart method thing before we're able to extract that, or is it just a matter of going in there and using the tools we have already? Because then there would be two issues: one would be to add the preparation steps, and one would be to gather the information from the flame chart.
B: Yeah, I think the flame graph thing — I don't know if it's enabled in production; locally, on your own machine, you can do that for requests. But if we're going to track a certain duration for each part, we need to track that in Prometheus, okay? And yeah, we can get it with Thanos.
A: Right, cool, thank you. And then Phil brought another topic: has the front end gotten too complex? Which is something we discussed a little bit last week — we want the front end to do less, not more, because we discussed this crazy idea of getting the git output from Gitaly and just dumping it on the front end, with the front end just having a worker munching on it. And it's like: could it be done? Yes. Should it be done?
A: No. So this led to something that I've been wanting for a while. We've been very additive in the way we've added to the batch diffs app over the past couple of years, and the question is: should we sit down and diagram the current system — all the parts that go into generating the diff — to help us make sense of it all, and potentially even become aware of some things that we can improve?
A: Does anybody see an objection, or a reason why we shouldn't spend time doing a diagram of the system? I'm not expecting it to take a month or something, but I am expecting it to be a bit of an investment of time to draw it.
C: Okay, I'm not sure what we should include in the diagram, actually. We have a lot of small, minor patches to the page to make it seem very fast, but if we collect all of them, I'm not sure we will see a bigger picture of what the merge request page is.
A: This goes beyond the front-end part. On the front end we have a set of components: we have, you know, the diffs metadata — it's fetched, it traverses the information, creates the tree structure — and then there are the batch diffs, which go into the Vuex state. So that's how we deal with the rendering of the application. But then on the back end we have Gitaly, we have information coming from the database—
A: —we have syntax highlighting, we have Redis. So we have a bunch of components that are playing together, and from the perspective of looking at it from above, we could potentially start to see that, hey, maybe we can move some things to the front end, and some things could be pre-calculated a little bit. And I'm more about—
A: We kind of have it in our heads, right? But having a diagram that we can collaborate on, update, and keep in our minds could probably yield something. Like Patrick was saying, for the back end to understand how the front end works with the data once we get it might give them some ideas of "hey, we can pre-calculate that for you." That's what I'm aiming at, so I'm going to open an issue and then we can move that discussion there.
A: My idea is to assign a front-end and a back-end engineer to work on this together, where one would be doing the diagram of the front end and the other the diagram of the back end. But I'll open an issue and we can start there.
A: We have two new topics. Stanislav, would you mind moving these to the next call, because it's already late, and then we can move that to—
C: Can I say something right now? Since Patrick is here, I just want to discuss the last topic, because he might be able to get us some information on this.
C: I discovered that our overview page is rendered slowly because of the changes counter — it actually takes 30 percent of the whole page time — and I was wondering if we could cache it, so we just make it 30 percent faster.
A: So there's a spike there: investigate that particular delay to understand it, and potentially cache it. What do you say?
C: Yeah. If you have the gitlab-shell repo locally, there is a merge request — number 7 — which has something like 170 files changed. If you open it up on the overview page, you will see that the changes counter is really slow.
A: That was good. And what I was going to mention: since this came out of this call, apply the performance round table label to it, okay? So that we can track all these issues in the same manner — that's useful for scheduling and for planning.
Okay, right. So we'll bring some of the topics back next week — I'll move them to the issue once I create it. But this has been great; thank you so much for the amazing session, both of you.
A: My last thing would be to ask you: if you thought this call was valuable, please leave a note in the agenda, so that we can know whether we should continue the call. I'll make sure the recording is available on YouTube and ping the rest of the people to take a look at it. But thank you so much for the discussions. I look forward to seeing you next week — even if not on the call, at least asynchronously on the issue. Fair? Yep.