From YouTube: Application Performance Session 2022-10-10 (Snappy GL)
B
Right, no, no, yeah, no! No! No! No! You are — you're way better at technical things than at organizing, so I will keep you on the technical side. Good, good, good. So, we are recording now, check one. We are all in the same room, check two, so I think we can get started. It's a classic Monday.
B
So, we were discussing this in the development staff meeting last week already, where I showed the POC that we have been working on. We looked into a couple of things that I think we have been discussing for a year or so and took some of those ideas. I was especially coming from the SUS point of: hey, GitLab is not snappy, everything that I click takes ages, I have all day. We can definitely do this better.
B
There was also an experiment, I think a year ago, which Thomas Randolph did on the MR side, working with IndexedDB and things like that. So, after sitting in a coffee shop in Chicago together with John and reading through the SUS issues, I thought: okay, let's get going on this. We are definitely looking into taking the steps that have already been started.
B
So I did a service worker MR, which finally has a green pipeline thanks to Desiree from QA, which means that we want to go from there and especially look at Plan and implement it there. Create is also highly interested in that concept, and as soon as we have basically set the cornerstones, I would say a couple of areas come to mind...
B
That
would
definitely
benefit
from
the
whole
thing
and
we
should
make
it
as
as
easy
as
possible
to
extend,
but
let's
see
that
I
try
to
create
a
walkthrough
of
the
concept
so
that
everyone
can
understand
it,
including
myself.
So
if,
if
there
are
any
questions,
remarks
please
jump
in
immediately.
We
want
to
go
and
step
through
the
concept
and
I
would
love
to
hear
everyone's
feedback
on
this
topic,
as
we
are
starting
to
basically
form
this
and
transform
this
hopefully
soonish
into
the
product.
B
Good, I will share my screen. What we have here is the following. The idea is in reality quite simple, and the target is this: right now, GitLab is not thinking — is not architected — in a way where it works in the sense of an application for the end user.
B
Rather, we are always focused on having state per page load: a user comes and loads a page, we load everything for that page; the user goes to the next page, we load the same things for each and every page. That means we load a lot of the same stuff over and over and over again. And the problem is, to some extent, that caching can be really difficult, because it's very person-based — based on access rights, on what you can see, on where you are a member, etc.
B
So caching in global state is overall a very limited topic. The idea is to take a look at what we can do on the frontend side instead, where it's really focused on a specific user, and where we can maybe use some of the technologies that we have around to do stuff in a way that we are not reloading and redoing it over and over again.
B
In reality, if you look especially at some of the analytics data and some of the UX research, what a lot of people have is that — like everyone in here, most probably — they are working on three to five projects on average. That means we can predict with high accuracy (AKA: we count how often you use a project, and based on that we can define) which are your most used projects.
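A minimal sketch of the counting idea described here — track how often each project is visited and rank the top ones as pre-cache candidates. The storage key and function names are assumptions for illustration, not GitLab's actual code:

```js
// Hypothetical sketch: rank projects by visit frequency so the
// pre-caching layer knows what is worth preloading.
const KEY = 'project-visit-counts';

function recordProjectVisit(projectId) {
  const counts = JSON.parse(localStorage.getItem(KEY) || '{}');
  counts[projectId] = (counts[projectId] || 0) + 1;
  localStorage.setItem(KEY, JSON.stringify(counts));
}

function mostUsedProjects(limit = 5) {
  const counts = JSON.parse(localStorage.getItem(KEY) || '{}');
  return Object.entries(counts)
    .sort(([, a], [, b]) => b - a) // highest visit count first
    .slice(0, limit)
    .map(([id]) => id);
}
```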
B
Those are your most used projects, and the idea is to already start pre-caching and pre-loading some of the data, especially for your own use cases: your own issues, your own merge requests, your most used projects — and cache it. The idea here is not to go ahead and say: okay, we cache everything, anything that you load we will write into IndexedDB — because, if you look at GitLab, there are thousands and thousands of issues.
B
Github
work
has
69
000
issues
almost
so,
if
you
click
through
that
stuff,
your
index
DB
would
be
quite
big.
This
is
really
about
stuff
that
where
we
predict
that
there's
a
very
high
chance
that
you
will
work
with
this
stuff
in
your
session,
so
the
idea
is
in
reality
to
pre-cache
something
some
of
that
stuff
in
indexeddp,
so
that
we
can
have
like
I
call
that
stuff
pre-results.
So
we
take
the
last
result
results
if
they
are
still
valid.
B
When you hit the domain where your GitLab is running, the service worker would get started and would say: okay, service worker here I am, I can go ahead and, for example, start fetching your issue list. So I will do the GraphQL query and already pre-cache your own issue list and your own MR list, simply based on the fact that we expect that most probably you will click on your own issues and your own MRs — and we can also go ahead and load more details on them.
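To make that flow concrete, here is a hedged sketch of a service worker that pre-fetches the current user's issue list on activation. The GraphQL query shape, endpoint path, and cache name are assumptions, not the POC's actual implementation:

```js
// sw.js — pre-fetch the user's own issues as soon as the worker activates.
const PRE_RESULT_CACHE = 'pre-results-v1';

const OWN_ISSUES_QUERY = `
  query {
    currentUser {
      issues(first: 20) { nodes { iid title webUrl } }
    }
  }
`;

self.addEventListener('activate', (event) => {
  event.waitUntil(prefetchOwnIssues());
});

async function prefetchOwnIssues() {
  const response = await fetch('/api/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    credentials: 'same-origin', // reuse the logged-in session
    body: JSON.stringify({ query: OWN_ISSUES_QUERY }),
  });
  // Store the raw result so the page can pick it up later as a pre-result.
  const cache = await caches.open(PRE_RESULT_CACHE);
  await cache.put('/pre-results/own-issues', new Response(await response.text()));
}
```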
B
That's basically one part: we would start foreseeing the stuff that you are going to do and preload the data. The other part is, for example, that we define your most used projects and say: okay, look, you're using these ones most of the time. Let's fetch your labels, let's fetch the assignees that are on this project, so that all the quick actions that you're doing are much, much faster.
B
Why? Because currently we load, I think, 3,000 assignees when you hit slash-assign on a GitLab issue. That takes sometimes two or three seconds, because of course it's a very big and heavy query, and you feel it in the experience: you type an assignee name, elevator music starts, you look at the spinner, and then at some point the people come up. Then you type, then it loads again, and so on.
B
So this is definitely something that we can also improve, simply by pre-caching that stuff in IndexedDB.
B
That's the baseline and skeleton of the idea on the data side: the service worker writing to IndexedDB, and the stuff that you look at that is important going into IndexedDB. The other part is what we can also do in the service worker: we can define, from webpack, a list of the JavaScript bundles that we are creating that you most probably will also take a look at, so that we can already pre-cache them.
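A minimal sketch of that asset side, assuming the bundle list is emitted by webpack at build time (the file names below are made up):

```js
// sw.js — pre-cache likely-needed webpack bundles via the Cache Storage API.
const ASSET_CACHE = 'js-bundles-v1';
const BUNDLES = [
  '/assets/webpack/issues_list.bundle.js',
  '/assets/webpack/issue_show.bundle.js',
];

self.addEventListener('install', (event) => {
  event.waitUntil(caches.open(ASSET_CACHE).then((cache) => cache.addAll(BUNDLES)));
});

// Serve cached bundles first, falling back to the network.
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});
```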
B
For example, your issue list view app and your issue detail view app — basically pre-cache those JavaScript files so that they are already downloaded and sitting in your file cache when you click, for example, on the issues button. And how this then works in reality — what I also want to take a look at here — is that our baseline for a page render is around 400 to 500 milliseconds, even if there's basically nothing on the page.
B
So what I want to try is this: if we have a Vue application — and we have this in more and more use cases, for example the issue list; we now have a Vue app that is already used for projects, which I also reused here in the POC — we can fill it with your own issues. And what we can do on top of that is simply clean the body DOM, inject a new DOM element, and preload and mount the Vue app. With the loading of the Vue app...
...we will already take these pre-results from IndexedDB and say: hey Vue, go ahead, render that stuff, and then start refreshing and reloading — which normally takes around one and a half seconds on the GDK machine here — and then the results, if different, basically get updated against it. That would create the experience that, if you go ahead and click here on the issues link, that stuff is here in 100 milliseconds. I think we can even optimize the list further.
B
There is something weird going on, because if you look at the detail — if I go and click, then this is basically just here — that's also using the same kind of IndexedDB pre-result, and it instead fetches more data afterwards, below the fold. But it has the feeling that it's instant. In reality you go back to the list, we go to another issue, and so on, and this can basically give the end user an experience...
...that is much, much faster in perceived performance than what we have today. And if we go ahead and bundle those use cases correctly — list, boards, detail — then you can have this experience of just switching between views, instead of: click, load, wait — oh, that's my issue — type, click, load, blah blah. That's what we want to achieve with this kind of interaction.
B
What it does in the background, too: in the POC I used a library called Dexie.js, which simply makes it much, much nicer and easier to work with IndexedDB, because you can add versions and have schemas, and it automatically creates indexes.
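For reference, a Dexie.js schema close to what the POC describes — versioned stores, one table per object type, and a secondary index on the issue state. Table and field names are assumptions:

```js
import Dexie from 'dexie';

const db = new Dexie('gitlab-pre-results');

// 'id' is the primary key; 'state' becomes a secondary index, so lists
// filtered by opened/closed can be answered without a full scan.
db.version(1).stores({
  issues: 'id, state',
  projects: 'id',
  queries: 'key', // root GraphQL query results, keyed by local cache key
});

// Schema migrations: bump the version number and re-declare the stores.
async function openedIssues() {
  return db.issues.where('state').equals('opened').toArray();
}
```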
B
So what is happening is that I'm now using apollo3-cache-persist, basically an extension to Apollo which simply can persist data. You can plug in a data provider — basically either session storage, local storage, or IndexedDB — that it writes to. The thing with IndexedDB right now is that apollo3-cache-persist will simply take the query and save the result.
B
So it gets a key and saves it, which also works, but doesn't give us the big advantage of deconstructing results. Because what will happen if we deconstruct results is that, if we get an issue list result from a query and deconstruct the details, then I can basically go ahead and already render the detail page with that data too. If I would just save "own issues query results", then I have one huge blob and that's it.
B
So what we do, with this apollo3-cache-persist plugin thingy, is deconstruct the results into objects, which are then saved into specific object stores here in IndexedDB. So you basically have the issues that we have received stored individually. In comparison to what we do nowadays — because we already save a lot of things into local storage —...
B
You
always
need
to
streamify
and
serialize
and
deserialize
all
the
time,
and
but
indexedp
has
like
native
objects
in
there,
and
this
means
we
can
natively
not
only
have
them
and
have
much
faster
parsing
time,
but
we
also
can
create
indexes
based
on
some
of
the
variables,
so
you
can
create
for
exactly
here,
I
created
for
the
issues
and
index
based
on
the
state
of
it,
so
that
if
we
would
have
different
the
different
states
of
lists
and
stuff
like
that-
and
this
can
really
be
extended-
super
easily,
there's
like
very
easy
config
thing
for
it
and
that
helps
us
on
reusing
those
things.
B
That works even if the next query that comes from the next view is not the same as we had in the first query. And the other thing is that we can reuse this especially for something which we also want to implement: the command palette, which gives you, for example, the thing: okay, I want to jump to my issues.
B
I click on an issue and boom — I can basically go from one issue to the other through the command palette. Because we have the single objects, we can feed them to the command palette; the command palette can render them in whatever way they do — good stuff — and off we go. And this is how we can, in the future, hopefully reuse a lot of things.
B
One easy use case is that I saved my last pages here — I save all the pages that I was looking at — so you can easily jump to one of the things that you have looked at last, for example. And "my issues", by the way, is already using the injection thingy here; that's why it's loading and displaying so fast — it's basically rendering right there, and yeah.
B
This is not even using the service worker yet. The service worker would already preload and prefetch the data — I simulate this all the time by just going once to the issue list, having it in the cache, going back, and then clicking on the issue list again; then it's basically already coming from the cache.
A
For example — the thing that you do... I mean, I do it very often, right? Like, I click my issue list five times — not the issue, just because I don't care — but to-dos or whatever; I click the list very often. Or, if I'm on an issue, I navigate back to the issue list I had before. And that alone would already improve my experience.
B
Next: the first thing we would need for the issue list is an official implementation of a GraphQL possibility to fetch your own issues. John was so kind and wrote me the GraphQL in parallel — hackity hack — so that we had this possibility already in the POC. So that would be one of the first steps. There are a couple of things that we can do in parallel, as discussed today.
B
The command palette can be done in the meantime — it can be started and simply work with URLs — but as soon as we have something in IndexedDB, it simply becomes much, much faster. But yeah, you're right: we can definitely go ahead and get started, and hopefully soon also have, with the service worker, the advantage that the data is simply prepared already.
A
Not having worked with IndexedDB: is there this potential downside that we are downloading a lot of state data and saving a lot of data away that might, you know, break smaller devices? Or are browsers smart enough to just delete data if it gets too large?
B
It's a little bit like how we handle the Redis usage: we could always check, at the top level, if we even want to use IndexedDB or not. I would always have everything implemented in a way that it works even without IndexedDB, because it could be empty or it could be unavailable. But that's definitely something where we could say...
...okay, at the beginning we limit the usage of IndexedDB to desktop devices only, for example. But Dexie — this wrapper for IndexedDB, which makes it really nicer to work with, since IndexedDB is a little bit funky — is already used by applications like Flightradar, which works on mobile and handles tons of data there; so does GitHub Desktop, and the same goes for WhatsApp. They all use IndexedDB.
B
It can do a lot of heavy lifting with a lot of data. And, as I said at the beginning, I would definitely not extend this to every issue. I would always start with some sort of sanity check that says: okay, we are only caching this and that which is really relevant to you — for example, only if you have been the author or the assignee do we go and cache stuff for an issue.
A
I mean, to be honest, caching stuff is probably not that bad. I think when I sometimes, for shenanigans, download issues with descriptions and whatnot, I end up with just 130 megabytes — and, you know, you wouldn't cache images and whatnot. Even if you cache a few thousand issues, it's probably not that much, right? Even if you cache all the labels from one org — how much can it be, right? Yeah, exactly that.
B
But you could go even further at some point. This is what a competitor in the Plan area does: they do diff caching, so they already add to the queries, to some extent, what the cache already has. The request would say, for example: hey, by the way, I already have all the labels up to date for this project, so don't bother — just give me the IDs of the labels. And this is definitely something for us...
...because if you look at all the data that we are sending over the pipe, we're sending all of it all the time — for example the colors, the names, the titles, and so on. So this means we could get really efficient there. They even do it in a super advanced way: they really have a diff mechanism on the frontend that then does the parsing — I think it's in service workers.
B
Other competitors are thinking about using WebAssembly to do real diffing and merging of the data that's coming in — that's already super advanced. But yeah, looking at it and getting through the POC topic, what I saw is that the top-level items — own MRs, own issues, own to-dos — should be fairly easy, because what you do is simply kill whatever is in the main content...
...DOM, inject the new one, inject the Vue app, and off you go. Projects definitely become a little bit tougher, because you need to have the sidebars, and we would need to figure out: okay, can we configure, for example, which menu items are around — can we create a new version from the HAML file, so that we don't have two implementations but have it automatic? Okay, we provide two implementations, but they are auto-produced, with stuff like this. So this is definitely something that we can think through and take a look at, and yeah.
B
Yeah, and the other thing — where I want to get us updated — is that we take a look at... there is this API, I always forget what it's called, but it's in the JavaScript API: you can have this message bus between tabs of the same domain. So you can basically have the service worker fetch your counts, and it simply goes ahead and tells all the tabs to update just the counter, without reloading or refetching anything. Whatever tab you go to, you will always see the correct count — for your issues, merge requests, and to-dos. I think that's a definitely helpful thing, and would already be a nice start.
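The API B can't recall here is presumably `BroadcastChannel`; a minimal sketch of the count fan-out under that assumption:

```js
// In the service worker (after fetching fresh counts):
const channel = new BroadcastChannel('gitlab-counts');
channel.postMessage({ type: 'counts', todos: 7, issues: 12 });

// In every open tab on the same origin:
const receiver = new BroadcastChannel('gitlab-counts');
receiver.onmessage = ({ data }) => {
  if (data.type === 'counts') {
    // Update the navbar badge in place — no reload, no refetch.
    document.querySelector('.todos-count').textContent = data.todos;
  }
};
```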
D
Can I ask some questions? Because I think I don't fully understand how it works. My question is: what exactly is stored in IndexedDB? Is it only GraphQL data — GraphQL responses — or is it also the HTML of the page, which somehow gets injected into the page when you click on the link?
B
Only data objects. What I cache currently in this POC version is really just objects: group members, issues, labels, pages — those are the pages that I've visited; this one is created automatically by the frontend to have this kind of history thing in the command palette. Then we have projects, and we have queries, which are the root queries — the GraphQL queries.
B
So there is a query which I gave the key "userIssues" — a standard query which gets you all the issues that are assigned to me — and that is basically deconstructed into data objects, like issues, for example. They are stored here, and then — again, it's recursively going through these...
...these refs. A ref, for example, points to an entity and says: oh, there's a UserCoreConnection — okay, then I go ahead and save this UserCore over there. So that's the users that I currently know about on the frontend. This also means that I don't need to change any sort of code, in reality, because it's just walking through the... let's take a look at the MR.
B
Forget that — the major thing here is that we have the GraphQL query, and what we can do is simply hook in when we get the default client, the Apollo client.
B
The only thing that you need to do on the frontend now is to say: okay, I want to have a local cache key for this specific query — which I called "userIssues" — and then I'm using a library called apollo3-cache-persist, which is the official persistence library for Apollo, and there you can have your own data wrappers.
B
apollo3-cache-persist basically gives you the opportunity to write your own wrapper, and that's what I did here: the Dexie wrapper. apollo3-cache-persist simply sets things up so that the in-memory cache goes through the apollo3-cache-persist plugin, and that's where I set up this new persistor and add this new kind of wrapper that I wrote. And the new wrapper is not just writing the query — it's deconstructing it; that's this part of the code here.
B
That's basically it: the reconstructing is the getItem, and you basically need to implement an interface for apollo3-cache-persist. What we do here is simply: as soon as I get the query results, I go ahead and split them up and save them into individual tables. The query itself goes into the queries table, and then, for each object type that I'm getting from the query...
...they are prepared for you, and they look exactly the same as what the backend gives to the Vue application — the Vue application doesn't see any difference in the results. And Apollo itself will then go ahead and refetch that data from the API; as soon as that comes in, it goes again through apollo3-cache-persist, because it knows: okay, this Apollo client right now always goes ahead and writes this query to the database — and it simply updates those database objects plus the results.
D
Yeah, thanks — thanks for the detailed explanation. One part is unclear to me: you mentioned that you inject Vue apps into the HTML, and I wonder, how does that work?
B
There's this slightly annoying bug where I can't jump to that file. Let's see... I have these launchers: I have simply added an event on top, so that the list can say, "hey system, show an issue," and what I do is go ahead and add it to the project body, which then goes ahead and simply injects and creates a new Vue app based on the layout page, which...
D
So, if I get it right, this approach works only on the pages which are completely Vue-based — where the whole page is a Vue application?
B
Yes, exactly — there it would work. But what I'm also trying to do is figure out how much I would be able to recreate.
B
For example, an issue detail page: there it also works to some extent by not only using Vue. We could also — for example, I took a look at it; it's not in the POC, I have it in another branch —...
...what I tried out is to have a route which renders, inside the project, not only the menu: I simply go ahead, load the XY page, and then check the whole HTML that I got — the classic jQuery-style .html() approach, without using jQuery, of course. That could also be a way where you simply take some parts from HAML, get the rendered output that is quite static, and simply add it to the page again. As I said, I think getting the issue detail...
...also into that frame is way more work than getting the very simple Vue apps there now. That's why we also deferred going that route for quite some time — we only finally got the issue list in Vue a couple of months ago, and things like that. The to-do list, I think, is also a Vue app by now, and so is the MR list.
D
I also have a question. You mentioned at the beginning that loading just the list of projects you have — so basically the home page of GitLab — takes about 500 milliseconds, which is something that I also experience every day. And I wonder if there's been any investigation into why that is. Why do we spend so much time in Rails rendering simple stuff? Maybe we can also work on optimizing the Rails application as well, to get a benefit for every single page in GitLab.
B
I think there is a full team which is mainly working on optimizing exactly that. If you take a look at that page — the project page that you mentioned — you see that we spend around 219 milliseconds in Postgres, 246 in Gitaly, and so on. So most of it definitely goes, to some extent, to Gitaly and what we are doing there.
B
John can tell you a really good story, for example, about what we optimized on the HAML side, where we were injecting some data that was not even used anymore. I also still remember that we removed, in the Create section, the pipeline graph next to each project, which took around 80 percent of the rendering time of the project list.
D
It's still there — it is on the left side... the left two, yeah.
B
The icons, yes — but in the past there was an even more detailed thing; we have also replaced and removed a couple of things there.
B
A good example is also, on the group level, the rounded counts: those are the cached counts, and this has definitely also improved performance. There are a lot of details around why stuff is faster or slower. To be honest, I think there's still quite some room for improvement, but there has been so much improvement over the last 12 months that it gets harder and harder — compared to simply getting the same experience from pre-cached, injected data.
B
Those are some of the things that we can definitely do around optimistic rendering and optimistic submission. That is something we already do for comments: we basically post it as if it had worked, because we believe that 98 percent of the time the comments will, hopefully, work — so it shouldn't make a difference. And that experience is only possible when we stay on the page and do more tricks. Does this make the code easier to maintain? Nah.
B
No, definitely not. Does this make it easier to write stuff? Most probably not. But that's why I also believe this should mainly be done, as a starting point, for the most used workflows that people have — going through your issues, going through your MRs — to improve that experience. It doesn't matter if someone is 200 milliseconds faster on admin settings page XY, because they only use it once a month. So, yeah.
D
I'm on the same page — I also want our pages to be as snappy as what you showed. I'm just wondering if there are much simpler ways of doing that, without writing lots of code and, in general, a lot of work to maintain it. I'm wondering if we should start with the backend: can we cache these GraphQL queries on the backend side? How complex is that?
B
Yeah, I think that's one of the major things that frontend engineers are not that aware of: the backend folks especially have invested so much blood and sweat over the last one and a half to two years, and went down every rabbit hole that they found — John can tell you a lot of stories. So yeah, please give us a little bit of insight into what optimizations we have done. I think what is really hard is to squeeze even more out of it.
E
Even though — when you demonstrated it... the group counts are really interesting, because group counts are in the sidebar on every group-scoped page, right? And even though we cache that — I think every one hour, I think every 60 minutes — for large groups that's still quite a lot, right?
E
If you bust that cache every hour for everybody, then you have this massive spike when that cache is warmed again, and especially for larger customers it's really, really painful. So this is one area where that would be a classic fix: instead of loading that during the page loads... okay, so the first time you go there you wouldn't see it — maybe you would.
E
It would load via a GraphQL query in the background, but after that it would be in IndexedDB, and then we just pull it from there. We would still cache the GraphQL query on the backend anyway — so it's not like one second or ten seconds or whatever it was; there are some crazy numbers for large groups.
E
The tricky thing with GraphQL and caching is just that the whole point of GraphQL is that you can do anything — it's a graph API. So we can cache it, but we need to be able to predict common queries and cache those, and kind of be — I don't know — okay with that. For the things that our own application uses, we should be caching those, I think.
A
Look:
okay,
funny
I
wanted
to
there's
a
new
feature
on
Zoom
that
if
you
raise
your
hand
after
some
time,
it
will
raise
the
hand
and
zoom
anyway,
you
need
to
activate
it
and
download
stuff.
A
No — one thing that's also to consider with the caching, and what I think is really beneficial...
A
Caching
on
the
user
side
is
that
you
have
all
the
permission
stuff
already
resolved
while,
like
caching,
certain
queries
based
on
users
just
doesn't
scale
right
on
the
back
end
like
because
you
just
have
access
to
a
certain
amount
of
certain
kind
of
issues
as
that
user,
and
it's
basically
individual
for
every
user
right
because
they
have
might
have
different
access
on
different
projects,
and
these
kind
of
things
and
just
catching
them
locally
is
probably
like
good
I
mean
I
also
have
some
concerns
that
just
increases
front-end
complexity.
A
But, you know, compensation levels between frontend and backend are the same, so we should be able to handle the same complexity as the backend people.
B
Exactly that. I think a good example — and that's why I don't want this to become the classic binary topic: should we do Vue, should we do all HAML? No — we should use common sense and use whatever is needed and whatever is best in a certain situation. And one of the things that we got, especially if you look through the SUS topics, is that this snappiness-or-performance concern is very much focused on the main user workflows — MRs and issues especially — and anything that we can do there...
B
This
simply
feels
faster,
even
if
we
don't
make
it
super
fast
and
the
I
would
love
to
have
someone
calculate
how
much
the
prediction,
how
much
that
the
the
things
that
someone
will
that
the
data
hasn't
changed
between
visit
one
and
visit
B,
while
they
are
looking
at
it
Etc
and
especially
with
the
service
worker.
We
would
would
use
this
by
a
lot
because
we
already
go
ahead
and
hash
as
soon
as
you
come
back
to
us.
B
A
good
example
that
I
read
about
one
and
a
half
weeks
ago
is
with
the
whole
figma
topic.
Is
that
one
of
the
major
things
that
they
invested
very
early
on
is
that
they
figured
out
hey
building
a
graphics
application
in
the
browser,
is,
is
painful,
slow
and
maybe
rendering
is
not
the
best
Etc
and
we
are
doing
complex
things
they
invested
early
on
and
got
a
lot
of
beating
for
that
they
invested
a
lot
into
going
into
webassembly,
which
I
assume
is
six
seven
years
ago
was
definitely
even
less
fun.
B
They wrote tons of their stuff in C++ compiled to WebAssembly — no, I'm not suggesting going that route right now — but they took a lot of beating and got three to six times faster performance out of it, which made them so successful, because compared to others they were able to do very complex stuff very nicely.
B
That's the big thing about user experience and performance: it should be so fast that you never have the feeling you're waiting on something that breaks your workflow of working and being creative. If we are able to do that, then I think people will feel it and see a huge improvement over what's slow today.
D
Yeah, I've already got some questions — sorry for drifting today — but two last questions from me. The first one: if we focus on the snappiness of switching between pages — going from one page to another — have you considered trying out the preload tag, or the prefetch tag, which actually fetches the page in the background? How does that experience differ from what you're suggesting — is it comparable, slower, faster? What do you think?
B
We already do that — there is even a possibility where you just need to add a special class to your link, and this makes it automatically prefetch that thing.
So, as soon as you hover over it, you will see that we already go ahead: we have this navigation utility sitting here which prefetches the document through a link prefetch. That already cut issue performance in half — the perceived performance — because, I think, the thing is that the brain takes, from moving to clicking, on average 200 to 300 milliseconds, and we basically cut the base loading time of 600 milliseconds in half, because we already started 300 milliseconds before your...
B
Actually,
click
has
happened
and
I
think
we
need.
We
are
waiting
for
200
milliseconds
on
a
link
before
we
go
and
pre-fetch
stuff
on
those
issue
lists
and
also
on
the
Mr
lists
yeah
on
Mr
list.
We
do
already
the
same,
for
example,
so
that's
already
happening
the
same
happens
here.
If
you
go
over
the
issue.
...top menu item, and so on. So yeah — that's exactly it: we're hitting the limitations of making even more improvements there. But this doesn't mean that we are going to stop working on the backend, because this alone will not solve all of our problems. Just as a measurement — and I wasn't able to finish the measurement — I have created a workflow which measures going to the project page, going into the issue list, going into a detail, going back to the issue list...
...going back to the issue detail, and writing a comment. That takes around 11 seconds when done automatically, and it's basically already broken down to six seconds in the first measurements. So just those small improvements have already cut the whole workflow time roughly in half, rendering-wise, because this is so much faster.
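The hover-prefetch behaviour described above boils down to something like this sketch — a short delay before injecting a `<link rel="prefetch">`, so quick mouse passes don't fire requests (the class name and exact delay are approximations):

```js
const PREFETCH_DELAY_MS = 200;

document.querySelectorAll('a.js-prefetch-document').forEach((link) => {
  let timer;
  link.addEventListener('mouseover', () => {
    timer = setTimeout(() => {
      const hint = document.createElement('link');
      hint.rel = 'prefetch';
      hint.href = link.href;
      document.head.appendChild(hint); // browser fetches into its HTTP cache
    }, PREFETCH_DELAY_MS);
  });
  link.addEventListener('mouseout', () => clearTimeout(timer));
});
```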
D
I guess so. So you consider doing this in parallel with the prefetching stuff? So we will still have prefetching for the pages, like with this class, and we'll also have the cache for the GraphQL queries and the injected Vue apps loading in the background — these things will live together?
B
They can definitely live together, because this optimization is really about the most used cases especially. This will definitely also take a long time — let's see how long it takes us to get to the issue detail state, so that we can render the whole project, can render the issue detail. I think we still need to do a lot of cleanups. Currently, work items is in progress in the Plan section, which is way more Vue-based.
B
That would already cut away a lot of things that I needed to fake and do very awkwardly in this POC. What I wanted to see is how far off we would be — and it's not as far as I would have thought, but it's also not so close that we can do it in the next two months. What we could do in the next two months, I would say, is the issue list and the MR list...
...if Code Review also has some time — doing those through IndexedDB and preload via service workers, and then, for example, already adding some of the things on the issue detail page where we can use IndexedDB — assignees, labels, and things like that — so that we are basically always using pre-results in your interactions, which makes it feel much, much faster in reality, while in the background working closely towards this goal of having the detail page as well. But the prefetch can be used today.
D
I wonder how much more load this puts on the backend side — because we are prefetching, I guess, lots of data. Won't it hurt everyone else, because we are fetching a lot of data which we might actually not use at all? Won't it actually hurt performance in general?
B
That's why we have — I think it's 200 milliseconds, we need to look it up, but it's in the navigation utility — a delay before we prefetch. You need to be over that link, and stay on the link, for I think 100 to 200 milliseconds; only then do we go ahead with the prefetching. And on the backend there was zero impact — we measured before and after back then; I think we did this six to eight months ago.
D
Oh, while you're searching for it — 100 milliseconds — I have a question for John.
D
Okay, I have a question for John. As far as I understand, the main problem with GraphQL caching on the backend side is just that it is GraphQL, so anything can be requested from the frontend. If we optimize these very long, very difficult queries into our classic JSON REST API requests — if we revert some of this back to the classic way of fetching data — will it help us cache this data, or doesn't it really matter?
E
It would help in that that API is older, so we have more mature performance tuning and better ways to measure it. But it wouldn't help in the long run, because — if we had a GraphQL query... you're right that the cardinality of GraphQL is what makes it difficult: you can ask for anything. But our own client is going to be asking for the same things...
...all the time: give me an issue at this level of detail. So we would just stuff all of that into a key — or hash it — and then cache against that key, so that for a query that comes in looking exactly the same, we would just look up the cache. If you added an additional thing to that query, then all caches would be invalid. Does that make sense? So if we introduce a new feature or whatever, and you're loading...
...the whole issue detail, and we add this feature and you start asking for it on the frontend, then yeah — all those little keys would be invalid. But that's not really a big deal, I don't think, because it would happen on a progressive basis: say we rolled out something new — as you visited each issue for the first time again, it would be refreshed. So the long answer is: we'd have to do some work on the backend.
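E's keying idea in sketch form. GitLab's backend is Rails, so this JavaScript version only illustrates the mechanism; the cache object and TTL are hypothetical:

```js
const crypto = require('crypto');

// Any change to the query shape (say, a new field) changes the hash,
// which implicitly invalidates all old entries for that query.
function graphqlCacheKey(query, variables, userId) {
  const payload = JSON.stringify({ query, variables, userId });
  return crypto.createHash('sha256').update(payload).digest('hex');
}

async function cachedExecute(cache, query, variables, userId, execute) {
  const key = graphqlCacheKey(query, variables, userId);
  const hit = await cache.get(key);
  if (hit) return hit;
  const result = await execute(query, variables);
  await cache.set(key, result, { ttl: 60 }); // hypothetical TTL'd cache store
  return result;
}
```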
A
Do you actually know whether we utilize any tooling around GraphQL? Because one of the things that we do on the backend, for example, is write an EXPLAIN query, and we might get an answer as to why a certain thing takes a long time. But now we also move the responsibility of writing queries to the frontend, and there could be something like: hey, I'm requesting a count, and the count is the thing that's problematic. Do we actually have some monitoring for that already, or is this tooling that's not available?
E
Two things. We have complexity calculations, but they're not perfect, because we have to define them — it's like: if you ask for these four things, the complexity is multiplied. The other thing is that it's really hard to measure the performance of GraphQL queries at the minute, because — you can do things like: okay, give me all GraphQL queries that are, you know, sub-epics of an epic, but they're so widely different; you might be asking for just the title. So it's kind of meaningless.
E
Well — the thing is, somebody has to build the tooling, either on the frontend or the backend, so why not build it for GraphQL on the backend anyway? Otherwise you would have to go back to the REST API for a period, and then we're going to have to build it anyway, no?
A
I just meant it could be that the ecosystem has the tooling, right? It could be that you, for example, just have a plug-and-play GraphQL whatever-thingy that sends certain things to Sentry, and that — you know, similar to how we get monitoring on the slowest routes — we would get monitoring on the slowest queries, based on how the query is built and whatnot, right?
D
Do we have time for another last question from me? (Of course, of course.) I'm not sure how cache invalidation actually happens in the case you showed. I wonder who is responsible for the invalidation: is it just the backend that invalidates the cache that we have, or do we do that manually on our side?
B
This is not included in the POC — I think I wrote a couple of lines about it in the epic. How I see it: the invalidation itself happens all the time, because the Apollo provider will always fetch the query anyhow. So it basically goes: the Apollo query goes out, the apollo3-cache-persist thingy says, "hey, I already have a result for you," but Apollo itself will go and refetch it anyhow.
B
So at the moment it's always some sort of pre-result, most of the time — apart from, for example, the command palette. But what I would add to this Dexie wrapper around IndexedDB is that for every object that we save, we simply set a timestamp, and we start at the beginning with a very rudimentary rule: okay, anything that is older than five days can't be used. The service worker could also, as soon as it is activated...
...go through all the items and simply delete the stale ones; or, when we hand the results back through the wrapper to Apollo, we can also say: okay, yes, we are filling in that type of data — oh, that one is too old, I'm not giving it to you. That's something we still need to figure out and think about, but we have all of this under control on the frontend.
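A sketch of that rudimentary timestamp rule on top of the Dexie tables — assuming `cachedAt` is declared as an index in the schema; the five-day cutoff is the one B mentions:

```js
const MAX_AGE_MS = 5 * 24 * 60 * 60 * 1000; // five days

async function putWithTimestamp(table, obj) {
  await table.put({ ...obj, cachedAt: Date.now() });
}

async function getFreshOnly(table, id) {
  const row = await table.get(id);
  // Refuse to serve anything older than the cutoff.
  if (!row || Date.now() - row.cachedAt > MAX_AGE_MS) return null;
  return row;
}

// Sweep expired rows when the service worker activates.
async function sweepStale(table) {
  const cutoff = Date.now() - MAX_AGE_MS;
  await table.where('cachedAt').below(cutoff).delete();
}
```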
The other major topic that was brought up around invalidation and security: on one hand, we already store a lot of data in local storage...
...that is quite similar to some extent — this just organizes it a little bit better and makes it more consistent. And the other big point is definitely that we already have all the invalidation on logout: if you press logout, the IndexedDB is killed, local storage is killed, so everything is gone from your browser. But apart from that, it's really up to the implementation.
B
What I would love to do — and I think that's really possible — is that when we are doing this, it can be abstracted away to almost 98 percent, so that in the Vue app you simply say: hey, I want to cache that stuff — give it a key, and that's it. That would be the perfect thing in the end: your issue list application simply says, "hey, that's something I would like to cache"...
...the MR list says, "hey, that's something I'd like to cache," but the team itself doesn't need to work with anything more than before, because it's all hidden behind the scenes. If we go into the territory of command palettes and working with individual objects — and that's the nice thing, that we could still do this without Apollo, for example for the command palette, where we say "my issues" — then we need to do some sort of cache invalidation ourselves, but Dexie is really easy and nice to work with, to be honest.
D
Okay, so if I get it right: we first prefetch the data on some page, then we go to the page, and then we fetch the data again to check whether what we have is the up-to-date version. Even as a frontend engineer, I am really scared about the performance on the backend side, because that means we're actually doubling our request rate in production.
A
I think the prefetch part is where that's wrong. We serve from the cache: Apollo asks for something, we give you the cached version while the normal request is still running, and then we update what we gave you. So it's still just one request — the only difference is that whatever we had in the cache is rendered first. But yeah, one thing that you're probably rightfully concerned about is what happens if something goes wrong with the request: we might show you outdated data — that could be a thing. How would we handle the case where the query has an error or whatever, right? Yeah.
B
In the future we would basically prefetch some of the data, like my issue list or my MR page. But, to be honest, I think we could even save data, because what we also constantly see is that people are pressing reload all the time. If you look at some of the users: oh, I changed something...
...oh, there's my issue — F5. And if we are able to figure out in the future that this is already updated and the issue count is right, etc., then we could even save some of those requests. But this is really all up to the final implementation — that we figure out: hey, we just cached this a second ago, so okay, let's not request it again, or maybe it is requested. So I think there are really still a lot of open questions to be solved to get this production-ready.
B
This was really mainly about figuring out: would this make sense, what kind of experience could we have, what's the gain out of it, how much work, roughly, is needed to get us there, and what benefits do we get out of it. And yeah.
D
Yeah. I think GraphQL has some kind of flag that tells you that the query is still loading — or refreshing, I guess. The whole approach you're describing is really similar to the stale-while-revalidate approach, where you have a cache and it revalidates in real time. So yeah, I guess GraphQL already has that.
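In Apollo terms, the stale-while-revalidate behaviour D mentions corresponds to the `cache-and-network` fetch policy — cached data is emitted immediately, then the network result re-renders. A minimal sketch, where `client` and `render` are assumed to exist:

```js
import { gql } from '@apollo/client';

const OWN_ISSUES = gql`
  query {
    currentUser { issues(first: 20) { nodes { iid title } } }
  }
`;

client
  .watchQuery({
    query: OWN_ISSUES,
    fetchPolicy: 'cache-and-network', // stale-while-revalidate behaviour
  })
  .subscribe(({ data, loading }) => {
    // First emission: possibly stale data from the (persisted) cache.
    // Second emission: fresh network data, loading === false.
    render(data, loading);
  });
```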
E
Just on the previous point as well: I wouldn't be too worried, even if, on average, the number of requests through GraphQL went up by, you know, 1.4 times, 1.5 times, something like that. They're all read requests anyway, so they go to the database replica; you can just spin up extra architecture — you just add more pods, it's really cheap.
E
You know, it's more like: if you would increase the number of very expensive writes — like, you know, writes that cascade and things like that — that would be painful. And I'm definitely not saying we should be flippant about it — obviously, if we can avoid it, don't do it, right? But it wouldn't really worry me too much, especially since, if we're going down the road of the cached issue list, and then an MR list, and then a to-do list, we'll pretty quickly see whether it's having a massive effect on throughput.
B
So there are also a couple of things that we can then improve even further — and even make it more complex, but also get some more gains out of it. I think the thing is, that's a really tough nut to work on, but...
A
Some things — I could really see the benefits, and it not being too hard: diffing the labels, or just getting the updated list of labels if necessary. That kind of stuff should be really easy — objects that are not updated that often would be really good candidates — or milestones, and, you know, yeah.
D
I'd also like to say: once we figure out how we render Vue apps in a worker for the whole GitLab project — once we do that, we can actually mix that approach with what you're suggesting. We can render the whole page in the background and just use innerHTML and paste it in as-is, and it's a fully working app. So yeah, it's quite workable.
B
Yeah, that was the major topic I had in the back of my mind for the POC — that this shouldn't be that far down the road — to simply see how far off we are. And, to be honest, at least the to-do list should be doable — just to-do lists. The command palette is another story; we're already working on it — design has already started working on the designs.
B
There's a person on this call that I would love to have working on the command palette as soon as he has time, and I think those things will already make huge improvements. But yeah, I think it's something really worth...
...diving into, to see what we can do with it. I really want to get us to a level where everyone says GitLab is super fast and super snappy for the main stuff that you're doing. I was on another Twitter thread this weekend where everyone was very happily listing "okay, those are the best planning tools for engineers," and I haven't seen our name there — and that's where I want to get us. I think there is a lot of room with those improvements.
D
One last question, yeah. Since we are, I guess, at some point just imitating an SPA style of navigation: let's say we have a page that's Rails-based — we use classic rendering to get our page — then we go to another page which is using your technique of replacing the main content of the page with a Vue app. What happens if we click the back button? Do we actually store the previous page somewhere in memory?
B
The whole page... so, I'm changing the URL. If you would hit, for example... so if I go on the list and I click on an issue — oh no, this is now actually even loading it normally; that's where you can see how funky fast it is — and if I go now to an issue, I'm always changing the URL. So this is now the self-made Vue issue detail page, and if I hit refresh, then...
...there is something that catches the back and forward navigation and says: okay, don't really navigate, but simply do your thing here. But if you hit refresh — so this is now the Vue-based issue detail page — if you hit refresh here, then it would simply reload the normal HAML page, and the two of course need to be one-to-one the same, to some extent. But that would definitely work, and, as I said, I...
...think overall we might anyhow end up with some sort of router — a Vue router — that handles the own issues and own MRs, etc., or we do this through a global event that doesn't change the URL. But yeah, it shouldn't be too hard to monitor the URL and then say: hey, this is something that we can render ourselves.
B
But
very
good
point
cool.
B
...an issue for that — perfect. And what we have in parallel is the service worker MR; this one was green. Just so that everyone knows what the problem was: here we have a check on an environment variable, and it is set to true in the test utilities, but sometimes it's undefined in the tests — so be prepared. I will... I...
...think QA has already created an issue saying that in all the other places where we are using this, we should also change it to something else. I simply used it as it was used in some other tests, but it is not always true — it's sometimes true and sometimes undefined, which was the overall problem that made everything slower and broke this service worker MR. So now we have a service worker, and currently the service worker does nothing.
B
Likewise — thanks for joining, Lucas, take care. And yeah, that's where we basically will take it in the next steps. Anyone who feels happy to help out right now: I'm more than happy to take help on the service worker. What this MR needs now is some specs, and testing service workers is, I think, quite a complex topic; so far it was mainly about getting everything else green. So that will be the next steps, and then we will work with Plan in parallel on getting us there.