From YouTube: 2023-02-16 Code Review Performance Round Table
A: And we're live. Welcome, everybody, to yet another weekly performance roundtable of our code review group. Last week we didn't have a call — it was only Patrick on the call. There are some comments on the previous date in the agenda, so we don't have to address them here, but — actually, no, I'm going to do this.
A: So the first issue we have to discuss is the micro code review front-end proof of concept that Thomas has built. We discussed this in the previous call. We chose not to close it, because there were still some things left unanswered that we wanted to keep drilling into, keep studying and investigating, to see how we can take some of it and use it for our own project. Patrick, please add your notes if you have questions or comments. Patrick said: "I watched the video and tried the proof of concept, and I wonder what's needed from the back end in order for us to make progress on this. I noticed that the proof of concept chart shows a single RPC call — would a GraphQL query be sufficient for that?" Thomas, do you want to address it?
B: Sure. I guess the short answer is yes. It depends how much you want to pack into one GraphQL call, because what this is doing is using the public API. I was going to frame this in terms of our internal APIs for merge requests and say: okay, it's getting the metadata, it's getting the files, it's getting all that stuff — but the public API doesn't have that concept.
B: So it's using the URL that you put in, and getting all the files for that URL, and — I'm actually forgetting what I've done.

B: Oh — the public API has the concept of an MR and of diffs, so you would grab those two things separately.

B: So it's basically getting those two things. You could presumably pack all of that into one GraphQL call; I don't think there's anything blocking us from doing that. It's just a matter of doing it, I guess.
B: Yeah, no, I don't think so. If you've used it, I think you'll agree that there's not a lot of speed to be gained on the network side — it's sub-half-a-second, maybe, loading content.

B: So yes, we could swap out the RPC-ish side of the thing with a GraphQL call and do all that stuff. I don't even know — does the public API have GraphQL? I know we don't have to be using the public API; I'm just wondering if there is GraphQL on it. I don't even know if that's a thing we have.
B: Yeah, so it is possible. If that's the direction — if the back end would like to put everything into GraphQL — then it is totally possible to do that. I'm not sure we would gain a lot from that right now.

B: That said, if we were to take some of these concepts and bring them into our internal app, we probably would gain a lot from putting them in GraphQL, I would guess.
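As a rough illustration of what "packing it all into one GraphQL call" could look like, here is a sketch of a single query fetching MR metadata and diff stats in one round trip. The field names and endpoint shape are assumptions for illustration, not confirmed schema:

```javascript
// Hypothetical single GraphQL document that fetches MR metadata and diff
// stats in one round trip. Field names are illustrative -- the real GitLab
// schema may differ.
const MR_REVIEW_QUERY = `
  query mrReview($projectPath: ID!, $iid: String!) {
    project(fullPath: $projectPath) {
      mergeRequest(iid: $iid) {
        title
        description
        diffStats { path additions deletions }
      }
    }
  }
`;

// Minimal POST wrapper around a GraphQL endpoint.
async function fetchMergeRequest(endpoint, projectPath, iid) {
  const res = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      query: MR_REVIEW_QUERY,
      variables: { projectPath, iid },
    }),
  });
  return (await res.json()).data;
}
```

A single POST like this would replace the several separate REST calls the RPC-ish layer currently waits on.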
B: One of the reasons — not the only reason, but one of the reasons — that it's quick is that the public API is pretty fast: we give it an MR ID and say "give me the diffs for this", and the only diffs that come back are unmarked-up blobs. I think a lot of time is spent in our internal API taking those diffs and syntax-highlighting them — breaking the blob into lines and syntax-highlighting that.

B: So if we took exactly what's available on this public API and put it in GraphQL on our internal stuff, it probably would be faster, just because it's not doing syntax highlighting and it's not breaking things up into lines. We do lose things — I noted in my caveats that not having lines is a problem and will be a problem in the future — but it would be faster to not do the highlighting and that kind of stuff.
B: There's probably some very minor speed improvement we could get, simply because I'm going from one REST call to something like three API calls and waiting for all of them to return before returning the data over the RPC-ish call. So there's probably some speedup there if you combine all of that into one, but that's probably not where we need to spend our engineering time to get improvements.

B: That's not the biggest source of issues. Thanks.
D: Oh no, I just think you're right, Thomas. The most interesting thing to me is to figure out — I think we're all impressed with the proof of concept — what the path is to an alpha, an opt-in alpha product, and we should figure that out at some point outside this meeting, so we can start gathering feedback and understanding where this is bad. Because everything's bad somewhere, right?

D: So let's figure out what's bad about it before we start making micro-optimizations about how many network calls we're making.
F: Yeah, I just wanted to say that we've discussed GraphQL before, I think even on this call. There is an effort by a team which suggests using a cache on the client side exactly to mitigate GraphQL issues, because our GraphQL response times are not ideal, and the way they are dealing with it is a local cache. So I was wondering: if we move to GraphQL, I think there is a certain threshold of performance that we won't be able to get past — it will add a constant overhead to our responses.
B: Yeah, my experience implementing my own GraphQL back end has been similar: it's pretty fast, but there is a low-level, everything-has-to-resolve kind of overhead that comes with it. But I haven't done major GraphQL work like we would have here. To get to Carrie's point about what we can do for an alpha in GitLab —

B: That's where things start to get really sticky, because a lot of the speed of this demo comes from all of the interlocking pieces fitting together, and we're not going to be able to have that in GitLab. It's hard to identify the fastest thing.
B: I think maybe what we could do is bring in the front-end-only highlighting: the highlighter that just drops in the unhighlighted blob of text and then uses JavaScript to highlight it when you intersect with it — when you scroll to the file. We might be able to get that in, and speed up both the back-end response, since it wouldn't need to do highlighting, and do on-demand highlighting on the front end.

B: So it can render almost immediately and then highlight when you get to it. That comes with all sorts of problems, because as we know, front-end highlighting isn't super reliable, and in order to do it we have to load up probably at least a dozen highlighters. In the demo I loaded up basically an XML highlighter and a JavaScript highlighter, so that we could do HTML and JavaScript highlighting, and that was it.

B: We would need a ton more than that in a real demo of ours, because we'd need to highlight at least the top 15 most popular languages or whatever. So I'm not sure where the best speed improvements are. That's where I would probably start, though: just getting a quicker diff on the screen, maybe.
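A minimal sketch of the approach described above: render the plain text immediately and only run a highlighter when a file scrolls into view. `highlight` stands in for whatever highlighter is actually bundled (e.g. highlight.js), and the extension map is an assumption mirroring the two-highlighter demo:

```javascript
// Map a filename to one of the few highlighters we actually ship.
// Returning null means "leave the blob unhighlighted" -- the fallback for
// languages we don't bundle.
const BUNDLED = { js: 'javascript', html: 'xml', htm: 'xml' };

function pickHighlighter(filename) {
  const ext = filename.split('.').pop().toLowerCase();
  return BUNDLED[ext] || null;
}

// Highlight a rendered diff only once it scrolls into view.
// `highlight(text, lang)` is a stand-in for the real highlighter call.
function lazyHighlight(el, filename, highlight) {
  const lang = pickHighlighter(filename);
  if (lang === null || typeof IntersectionObserver === 'undefined') return;
  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      entry.target.innerHTML = highlight(entry.target.textContent, lang);
      observer.unobserve(entry.target); // highlight each file only once
    }
  });
  observer.observe(el);
}
```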
A: Yeah, I wanted to add to the part you touched on — the highlighting aspect. That's definitely one of the biggest challenges here. If we get to adding notes to that diff, we're going to have a problem, because right now we don't have any consistent identifier that we can use across the back end and the front end. In this demo we just have a blob, whereas in our production app—

A: —every line is annotated with metadata that we can use to attach comments to. But talking about the syntax highlighting in particular, just so everybody's aware: the source code team is at the moment working to address performance issues on the highlighting part, and roughly the strategy is going to be to highlight the first lines up front. The problem comes from doing syntax highlighting in bits: if you start highlighting in the middle of a file, you don't start highlighting from the top.
A: Your highlighting might be skewed because of the context you're in — whether you're in a class or in a method, depending on the language — so the highlighting needs to be done on the whole file at once. As you probably know, the back end does that too: it highlights the whole file first, before taking the snippets, the chunks. The same thing happens on the front end. What we're starting to do now on the blob page — it's a different context — is handle the first lines, then use a web worker to highlight the rest.

A: We will never get the whole file on the front end, so we'll always be vulnerable to faulty highlighting there. Plus, there are shortcomings in the library we're using in terms of language support: it doesn't support all the languages that Rouge supports, so on the blob page some languages or some files fall back to back-end rendering, which is not ideal. I was trying to make sure we know that caveat going in. Ty, if he were here, would say—
B: Yeah — he has explicitly said in the past: we've never actually seen this break. It could break, but we've never actually seen it. So maybe it's a good improvement for the 80/20 case: we can do most highlighting on the front end without problems.

B: I agree with him that we probably could. I just don't know what happens over that line — when it breaks, how do we know? How do we know when it breaks, and how do we fall back? Because I don't know how to identify that it's not highlighting properly.
A: And the way we handle this — well, like you said, it's not detectable. One of the issues we had was with Python files: some bits would just look wrong. Unless you write tests for all possible languages, it's very hard to detect. What we started implementing was: the back end detects the type of file based on the extension — there's a library we use for this; I can't remember which — and it tells the front end—

A: —"this is a Python file", and then — I'm not sure if it's the front end or the back end, but a component — decides based on that whether to render and highlight it on the front end, or go to the back end and render it there.
A: So it's very much just programmatically defining which files get rendered on the front end and which on the back end. Not ideal — and the failures would be undetectable if we tried to run them all, and that really is a challenge. It's even harder when we're talking about diffs, because we're taking essentially chunks of files out of the highlighted part. So that's the biggest hurdle for me in terms of taking that part over to GitLab.

A: And that's only one part of the innovations of this POC. The other, as Stanislav was saying, is the cache on the client — aggressively storing things in IndexedDB. We could potentially rebuild that into our current app, if we navigate all the complexity of the data fetching — some sort of middleware, maybe a service worker, that handles all that logic to make it simpler. And the other part is the bootstrapping: it's bootstrapped straight from a static, empty page, and it's not using Vue, which is an overhead again.
A: Those are two realities of our production application, and going from a non-Vue app is a world apart — the effort to achieve feature parity would be huge. One of the ideas we came up with in the past — talking to Carrie's question about how we get to an alpha product here — would be an opt-in way, like you said, to render a lighter version of the diffs on demand for customers. We already have something like that, where we offer the plain diff, which just shows the raw git diff output.

A: One option would be to offer that as an alternative, rendering it in a separate page. I'm just not sure that's worth it if I can't annotate it with comments — that's my first concern. We'd have to find a way to implement comments on it to even begin considering it viable.
F: Yeah — the issue you mentioned with partial highlighting reminds me of the problem with streaming, where a single letter can be split into multiple bytes; if you split those bytes into separate pieces, you cannot reconstruct the letter that was there. That can be handled with a streaming text decoder, and I—
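The split-character problem F mentions can be reproduced in a few lines; a streaming `TextDecoder` buffers the dangling bytes across chunks:

```javascript
// A multi-byte character split across two stream chunks: decoding each
// chunk independently garbles it, while a streaming TextDecoder buffers
// the partial bytes until the character is complete.
const bytes = new TextEncoder().encode('é'); // two bytes in UTF-8
const first = bytes.slice(0, 1);
const second = bytes.slice(1);

// Naive: one decoder call per chunk -> replacement characters.
const naive =
  new TextDecoder().decode(first) + new TextDecoder().decode(second);

// Streaming: { stream: true } carries partial bytes to the next call.
const decoder = new TextDecoder();
const streamed =
  decoder.decode(first, { stream: true }) + decoder.decode(second);
```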
B: I think the issue with the highlighter — I haven't personally run across it, so I'm speaking from assumptions — is that the highlighter literally has a hundred percent of the text, but it doesn't know what that text means. In the Python example:

B: It doesn't realize that it's within a string, I guess — or a comment. What was it? Is three quotes a comment in Python? I'm not sure. I think it is.

B: I think three double quotes is a comment, and it's supposed to comment out everything until the next three quotes, and I guess the highlighter doesn't realize that it's in a comment once it enters it. So whether you have multiple bytes of the string or the whole thing, the highlighter just doesn't know what scope it's in and doesn't highlight properly unless you have more lines around it.
B: I'm not sure why in that example, though, because in that example it's got the function entry point — like `def foo` or whatever it was — and then it enters the function, so I would think it would go: okay, we're in a function, and I can highlight a function. I was assuming highlight.js would fail in weirder places, where the diff starts in the middle of — oh.
A: It does — yeah, this is a problem with the grammar they use in the JS implementation of the Python parser. But yes, it definitely fails in both situations you're talking about.
B: Yeah — so using a text encoder is a good idea, and I have used that before to stack up a buffer of text, but I think in this case highlight.js is actually failing in a much more mundane way: it just doesn't have the scope of the text that it's in.
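A toy version of that failure mode, assuming nothing about highlight.js internals: a classifier that tracks triple-quote state can only track it from the first line it is shown, so a hunk that starts inside a Python docstring is misread as code:

```javascript
// Toy "highlighter": classifies each line as comment or code by tracking
// whether we are inside a triple-quoted Python string. It can only track
// state from the first line it is given.
function classify(lines) {
  let inDocstring = false;
  return lines.map((line) => {
    const toggles = (line.match(/"""/g) || []).length;
    const kind = inDocstring || line.includes('"""') ? 'comment' : 'code';
    if (toggles % 2 === 1) inDocstring = !inDocstring;
    return kind;
  });
}

const file = [
  'def foo():',
  '    """',
  '    this looks like code to a context-free highlighter',
  '    """',
  '    return 1',
];

// Full file: the docstring body is correctly seen as part of the string.
const full = classify(file);

// A diff hunk starting mid-docstring: the opening """ is outside the hunk,
// so the same line is now misclassified.
const hunk = classify(file.slice(2));
```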
F: That raises the question: is highlighting on the client a good approach for this at all? We actually rarely send the whole file — we just send a part of it — so I think in most cases it will just always be broken for us. That makes me wonder whether we should really prefer highlighting on the back end there.
B: Yeah, I think highlighting on the back end is the only way right now — this is the only way to do this.

B: The only solution I've thought of — and it has a lot of downsides — relies on the fact that every diff is always a delta from a base file. So the first time you load an MR, it says: the original file is this — and it could be empty, or it could be—
B: We could highlight from there; we would have line numbers, so we could do comments. But there's a lot more up-front data loading: get the full file and then work our way down to the smaller diff from there. So there are a lot of downsides, but it's kind of our only solution to proper highlighting and being able to use real line numbers — rather than the way we're using line numbers now, which is a diff file hash plus a line number that has to be provided by the back end, because it's not sequential; we could have line numbers that start at 300 or whatever.
B: And to the iframe point, and to what Andre said: web components are essentially just an iframe when you use Shadow DOM — all the benefits of iframe-style encapsulation, JavaScript encapsulation, but none of the downsides. Iframes are really hard to resize based on their internal content; Shadow DOM doesn't have that problem. So it is essentially the encapsulation. However, it is all front end, so to Stanislav's point about getting a performance benefit from it — we don't get that.
F: Well, it was only partially a joke, because it could actually work on our page. The benefit of an iframe is that it doesn't depend on our JavaScript to start — it starts at the bottom of the body, and we can skip that part. We can just start loading the JavaScript for this app, for example, and resize the iframe to the whole screen, because most of the page is fixed: the sidebar is fixed, the header is fixed.
B: Yeah, I think iframes have a bad rap because they can be misused really badly, but they are a technology we could use. The vertical sizing of them based on their internal content — that's the worst part of them; I think it's really hard to do. But other than that, yeah.
A: Wait — like a spacer GIF? Oh, never mind. Gosh, I'm old. I want to say something here. Okay, so syntax highlighting is challenging — we've established that — and it's a very intensive task for the back end to do.

A: But could we optimize some of it if we sent down the highlighted HTML of the diffs in one structure, rather than per line? Would that make it easier for us, in terms of not having to split it up, not having to send down a huge payload with extra overhead per line? Is that worth considering?
A: I don't have the identifiers per line that let me attach notes — that's my challenge. Is this something we can optimize by stripping out all the per-line metadata, grabbing the blob as a whole, grabbing the diff as a whole? Then the gap is: how do we match notes with lines?

A: Can we find some way to do this — whether it's UUIDs per line that get automatically calculated on the front end, which the back end also understands and then converts to a line hash on its side? Crazy ideas, but is this worth investigating? Would it make the back end faster at giving us this?
B: Obviously the bulk of this answer should come from Carrie, but — can we match discussions up to lines? That would be pretty trivial, right? If the back end is providing us a predefined blob of HTML, they could also just drop in the IDs — literally put an ID attribute on each line, with the correct line identifier. Then, when we grab discussions, we say: this discussion is for the piece of HTML that matches up with this ID. If they're already providing us pre-compiled HTML, they can also put IDs in there — or any other identifiers — and then we can use the predefined HTML to match up discussions or suggestions or whatever.
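A sketch of that ID scheme (names are illustrative; the real line-code format may differ): the back end stamps each pre-rendered line with an id derived from the diff file hash and line number, and the front end derives the same id from a discussion to find its anchor:

```javascript
// Back-end side (sketch): wrap each highlighted line with a stable id
// built from the diff file hash and the line number.
function renderLines(fileHash, highlightedLines) {
  return highlightedLines
    .map((html, i) => `<div id="line_${fileHash}_${i + 1}">${html}</div>`)
    .join('\n');
}

// Front-end side (sketch): given a discussion that carries the same
// identifier, compute the id of the element to anchor the note to.
function anchorIdFor(discussion) {
  return `line_${discussion.fileHash}_${discussion.line}`;
}
```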
D: Yeah, I'm glad you mentioned that, Audrey, because I was kind of turning that over in the back of my brain too: well, we already have the highlighting problem solved on the back end, so where is the slowness in that process, and can we get it out? I want to think it's the pre-cached calculation for us, but — whatever format you think works.

D: I think it's worth taking that spike and seeing what we can generate. What format do you need it in? We can do anything, right? So why not.
A: Yeah, I'm with you. I think the syntax highlighting itself is a solved problem; we need to make it faster, and the solution will probably have to live in the back end, because all the solutions we've come up with on the front end have shortcomings — and if we want to replace the production-level application, we should at least have feature parity, which we're not getting. So I'll put a pin in that, but probably address this part: can we work with something that simplifies other parts that are not the syntax highlighting?
D: We're adding in Gitaly the support for patch-ID calculation, which basically can say: is this range of commits the same as that range of commits? This is most useful for us for rebasing, right? Because no new code is coming in during the rebase, but because the commits have changed internally, it looks like a completely new set of diffs, so we have to go through all that calculation and caching and everything again. We could be smarter about that once we have this.
A: That's definitely useful, and it can open up different ideas and different possibilities. So definitely, thanks for that contribution.

A: What I was saying — you're just completing it, so you didn't derail me at all — is that if we're keen on investigating that, it is something we could port back to our app: that particular idea of finding ways to optimize how the diffs are delivered, without all the extra metadata on the lines. That would be a huge diet for the payload and the calculations, and probably also for the front-end state that we have to keep, and it would be an iterative improvement.
B: No, I don't think there's anything. I'll also note that one of the speed improvements — stashing, or caching, the diff files in IndexedDB and loading them from there on the second or later visit — is something we could probably implement in a fairly isolated way. We don't have to do it the way it's done in the demo app, with kind of a global store and an event system; you'd just have a little stash of files. And it would treat a blob of HTML the same as a diff blob, so we could store that as well — it wouldn't change anything. So if that's the approach, we could do that to speed up the front-end visit too.
B: Well — the major pieces of the speed of this demo are: it stores a bunch in IndexedDB; the API doesn't highlight lines; and it only highlights when you get to the file. So if, internally, for an alpha, the back end returned just an HTML blob instead of highlighted diff lines, we could store that blob in IndexedDB on our front end as well.

B: It would be a benefit on the second or later visit for our users. And if the back end is delivering finished HTML, then we wouldn't need as much logic on the front end to go through and render each line, so it would probably be faster at rendering too — once you have it in the store, it would just be "put this into the DOM". So we might get two benefits there.
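A sketch of that isolated stash, with an in-memory `Map` standing in for the IndexedDB object store (the interface is the point; the real store and key scheme are assumptions):

```javascript
// A small stash keyed by MR and diff version: return the cached payload
// when present, otherwise fetch and store it. The in-memory Map stands in
// for an IndexedDB object store; swapping in IndexedDB (e.g. via the `idb`
// wrapper) keeps the same interface.
function createDiffStash(fetchDiff, store = new Map()) {
  return {
    async get(mrId, diffVersion) {
      const key = `${mrId}@${diffVersion}`;
      if (store.has(key)) return store.get(key); // later visit: no network
      const payload = await fetchDiff(mrId, diffVersion);
      store.set(key, payload); // an HTML blob caches the same as diff lines
      return payload;
    },
  };
}
```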
A: Gotcha. The reason I ask is that in past conversations Stanislav has brought up the idea that the diff lines — the diff blobs here — are the most static part of our app, so they don't need the reactivity of Vue. Essentially, what we should avoid is having to store the blob diffs in the Vuex state. We don't need that; just put it in the DOM. Then, if you ever need to generate suggestions — which we will — we probably just need to either get the lines—

A: —or request them from a server endpoint: give me the raw lines for those intervals, which will be easier to calculate because it's just that interval. Okay — definitely something to drill deeper into, to see if there's anything there we can pick up for our main app.
A: Right — one thing I wanted to bring up is the IndexedDB part of this demo, now that we've shipped HTTP caching — or at least HTTP caching is working with the batch diffs requests.

A: How big are the gains between HTTP caching and IndexedDB? A lot of the speed in this demo comes from being a standalone page, without everything else around it, which makes it very hard to compare with the real thing. But I wanted to get a better sense of that particular comparison on the current implementation: if we did all this work to move the requests into IndexedDB, rendered from there, and then fetched the fresh data from the server, would it be worth it in terms of time — if anybody knows.
A: My understanding — and I could be wrong — is that once the back-end part was fixed for HTTP caching, the browser would be the one leveraging the cache: the front end makes the call, the cache is hot, it's not stale, so the browser just returns what it has in cache. Is that not right?
F: The thing is that we were using ETag caching, which basically requires a round trip to the server, and that is the problematic part. I checked just now: that round trip took me 700 milliseconds to reach the server and get the response — and that was a cached response. So we discussed making it client-side: we pass a cache key to the front end and make all requests using that cache key. If we do that, we can skip the round trip and use the local cache.
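A sketch of the client-side cache-key idea (the parameter name and URL shape are illustrative): because the key changes whenever the diff changes, responses can be served with a long `max-age` and the browser cache stays correct with no ETag revalidation round trip:

```javascript
// Build a diffs-batch URL that embeds a cache key. The page would embed a
// key that changes whenever the underlying diff changes, so the response
// can carry a long Cache-Control: max-age and repeat requests are served
// straight from the browser cache.
function diffsBatchUrl(base, mrPath, page, cacheKey) {
  const url = new URL(`${mrPath}/diffs_batch.json`, base);
  url.searchParams.set('page', String(page));
  url.searchParams.set('ck', cacheKey); // illustrative parameter name
  return url.toString();
}
```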
F: The problem was that Chrome doesn't really work well with this kind of approach for some reason — it works well in Firefox, but not in Chrome — and there was a back-end issue; I'm not sure whether it's actually completed. I checked just now, and I don't see a cache key on the request in production, so I guess the front-end work is not there yet.
A: So it's the "Implement HTTP caching with max-age" issue — yeah, correct, that was the one.

A: The changes are now in production — that was January 24th — and we'll continue to roll it out. Okay, sorry: for the benefit of everybody, I'll share the screen. There it is — that's the issue. It was closed three weeks ago, it's in production, and we'll continue to roll out the feature flag. Once we have the front end utilizing the changes — you're correct, we need to have that. Do we have an issue for it?
A: A milestone — sorry for doing meta-complaining while we're live on this call, but it's worthwhile now that we have the back end.
B: Go ahead — so, there's an overlap of speed improvement here, but HTTP caching wouldn't solve every problem that a local database like IndexedDB would solve. For example, if you're visiting, say, a version of the MR you last saw a month ago, or a few weeks ago, I assume the HTTP cache would already have been emptied out — based on age, probably — whereas an IndexedDB database would still have every record you've seen in the past. It would never expire; it would never have to be reloaded. But I think there's a lot of overlap — they just solve slightly different edges of the caching question.
A: Sure. There's one important thing, which is: if you visit a new version of the MR, one of the things this doesn't allow is rendering from the local cache while we're loading the new version. However, we were still undecided on whether we really wanted to do that in the IndexedDB case, because of UX considerations. We've talked about it — Phil raised it in a past call — coming back to an MR that just received pushes from the author:

A: The last thing I want to see is the old version. Even if it's going to update in two seconds — I'm already looking at the file, and it just gets replaced with something else — it's a jarring experience. So we're still on the fence on whether we would even render things from IndexedDB at the start when we know the cache is stale. The benefit is there, but only if we rendered from the local cache right away, even when it's outdated.
A: For now, this is much closer to shipping, so we'll proceed to continue pushing on the HTTP caching and leverage that. Thanks, Stanislav, for the extra context. But we won't exclude the IndexedDB work — there's also other work, like the GraphQL Apollo persistence, which until now was done in local storage; I think they're working now on doing that on IndexedDB, which can also land us more lessons and things we can use in the future.

A: But I don't want to discard it just yet, in case we want to go there in the future. Right — more questions or thoughts here?
F: A last point about code highlighting: I think we can highlight code in a progressive way. As far as I understand, sending the plain diff is the fastest way to show the merge request, and if we send that as fast as possible, we can show the code changes immediately, without highlighting. Then we can use, for example, server-sent events to inform us that the highlighting has been done, and update it on the client.
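A sketch of that progressive flow, with the SSE endpoint and event name as assumptions; the state transition is kept pure, so the same logic would work with polling or websockets:

```javascript
// Pure state transition: start with plain text per file, overwrite with
// highlighted HTML as it arrives. Works with SSE, polling, or websockets.
function applyHighlighted(state, msg) {
  return { ...state, [msg.path]: { html: msg.html, highlighted: true } };
}

// Wiring it to server-sent events (endpoint and event name are assumed).
function subscribeToHighlights(mrId, onUpdate) {
  if (typeof EventSource === 'undefined') return null; // browser-only API
  const source = new EventSource(`/mrs/${mrId}/highlight_events`);
  source.addEventListener('highlighted', (e) => {
    onUpdate(JSON.parse(e.data)); // expected shape: { path, html }
  });
  return source;
}
```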
A: So what I'm hearing is: theoretically, I can read the code and then it just gets highlighted transparently — like magic — and it could be a few milliseconds or a second or so. Essentially, it's not too jarring. But I always go back to this: if we can get pre-calculated, cached things on the back end, we can send them much sooner. It's a matter of having the data available to send down the pipe, not the pipe itself. So again, we can try it out.

A: I would probably invest time in what Carrie was hinting at: getting as much as possible pre-calculated. As we've talked about in the past, we know the moment the code changes — it's on a push; it doesn't change anywhere else. If we pre-calculate the data we're going to need to render that MR right when the push is done, it's ready when it's needed. That's where I would probably go, but that's just me — I don't know what you think. Okay — go ahead.
A: I remember the conversation. At the time we were having struggles with Redis, and we weren't going to put this on Redis because Redis was already overloaded, so we thought maybe we could use another store somewhere. Because this is, again, static content, we can pre-calculate it, keep it somewhere in cold storage, and just wake it up when we need it. I don't think there was a hard exclusion — I don't think it was a hard rejection of that path.
D: We just didn't pursue it at the moment. And there's something else: if we look at it from a heuristics perspective, there are other indications that highlighting is needed — that somebody's going to be interacting with this thing. For example, if they come to the merge request page, they might not have gone to /diffs yet, but they're probably going to. So on the back end, let's kick off: hey, do we have this cached? Are we ready for this?
E: Well, since—

A: Since we shipped some improvements on the front end, Carrie, that window is much smaller. Right now we're using startup.js to trigger the first calls right away, as soon as the page is requested, so the time between the request for the page and the synchronous request that we need is just a couple of milliseconds. So the benefit probably wouldn't be too big. However—
D: To my mind, we should try it and see if there's an improvement, and then we can take that improvement and say: hey, we need more Redis cache — if you give us more Redis cache, we will have this improvement. I think that's the argument to make in that situation. Otherwise we're just theorizing.
A: You can feature-flag it and check the results on one project without affecting the whole Redis fleet, yeah. Exactly — so should we have an issue for that?

A: I'm not kidding, Carrie — you've got yourself a slogan there. All right, thank you so much for that.
E: Cool — so who wants to create that issue? Carrie, I think you'd be the best candidate for that, or for bringing an issue to do it. Thanks, awesome. Any more thoughts here, or can I move on?
B: Should we open a front-end issue to basically proof-of-concept loading pre-computed HTML from the back end as diff files, somehow — like in a—
A: Cool, so we have two issues to open — good stuff. Thank you for the discussion. Again, it's still too early to close off the POC, so I'm going to keep it open, and the topic will be discussed in the future.

A: Any work we should consider scheduling for 15.10? You have a note here — Stan has left, so I'm going to voice it.
A: As you saw earlier, I already set the milestone — the one on the radar. Thanks; I know you wrote this before. Right — any other topics of discussion, anything else you want to talk about before we go?

A: I'd hold off on the awkward silence a little longer — I don't mind. All right, no — I think that's enough awkwardness. Thank you so much, everybody, for a really healthy discussion. I'm really enjoying that we're getting deeper into the meat of it all, and I think we'll definitely get some gains soon.
E: Thank you so much for your time. I'll see you next week, or on another call. Thank you so much — have a great day.