Description
In this session we discuss some approaches and tradeoffs to persisting the Apollo cache on the client side.
MR: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/106004
B
Yeah, we were... I think I wasn't paying attention; you guys were talking about labeling people and I'm joking. Sorry, yes! So thanks for hopping on the frontend pairing. We are going to be talking about the Apollo cache, specifically persisting it across multiple requests, for a much improved perceived, and even real, user experience. Because I noticed sometimes, if I'm just clicking through things: oh, there's my issue, I don't even have to wait for the real request.
B
So that's the idea, and it's a real win: there's a slow query, and so we're going to try to persist the previous response to remediate the real problem for the user. Obviously, and Natalia could probably deep dive into this since she's wrestled with it, the need to do this... we don't want to do this for every slow query, because the problem of our query being slow still needs to be addressed. But this is a nice to have.
A
No, and I'm jumping the gun, but I'm really interested to know what logic you were going to use for the timings to update that cache, if that makes sense.
C
So the point is, updating the cache is very simple: we do it in two ways. Most of our queries are what we call smart queries; that's what you put in the Apollo options in the components. Sometimes you use a smart query directly, but most likely you see them in the Apollo option. Why are they smart? Smart queries are subscribed to the cache.
C
In the case of the smart query, we simply set a cache-and-network fetch policy, again with some hacks. What cache-and-network does is, essentially, it should take the first response from the cache, and the data is there because we persist the cache, and it should send a network request in the background. Why am I saying this?
C
In this case, you can have the first result from the cache and handle the network response afterwards. However, we also have simple queries, and in the case of filtered search we have simple queries. What is a simple query, and how is it different from a smart query? A smart query is essentially an observable: you subscribe to the cache via the Apollo implementation of observables, called zen-observable, but it's still an observable. An observable can have multiple results: whenever the cache is updated, the so-called next callback fires and you see the result. A simple query is a promise.
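The distinction described here can be sketched with a toy model in plain JavaScript. This is illustrative only, not Apollo's actual implementation: a watched ("smart") query keeps receiving values as the cache changes, while a "simple" query is a promise that resolves exactly once.

```javascript
// Toy model: a cache whose watchers are re-notified on every write.
function createWatchableCache(initial) {
  let data = initial;
  const subscribers = [];
  return {
    // Smart-query style: fires `next` immediately and again on every update.
    watch(next) {
      subscribers.push(next);
      next(data);
    },
    // Simple-query style: resolves once with the current snapshot, never again.
    query() {
      return Promise.resolve(data);
    },
    write(newData) {
      data = newData;
      subscribers.forEach((next) => next(data));
    },
  };
}

const cache = createWatchableCache(['bug']);

const seenBySmartQuery = [];
cache.watch((labels) => seenBySmartQuery.push(labels));

const simpleResult = cache.query();  // snapshot of ['bug']
cache.write(['bug', 'feature']);     // the watcher sees this; the promise does not
```

After the write, the watcher has seen both versions, while `simpleResult` still resolves with the original snapshot, which is exactly why a cache update alone cannot refresh a simple query's UI.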
C
It's no different from an axios call: it shows whatever is returned from the API. It doesn't really care about the cache; whatever is returned by the promise is displayed, and this is the case with labels too. You can update the cache manually and it doesn't care; it doesn't react to the cache update. It only reacts to the call of the method. So in this particular case we are doing an even dirtier thing, I would say: we call two methods.
C
One is fetchLabels, which hits the cache and returns whatever is in the cache, and the second one is what I call fetchLatestLabels. The naming is weird, but it sends the same request with a network-only fetch policy. Simple queries cannot do cache-and-network, they don't know about the cache, so we send this straight to the API immediately after we show the first result from the cache. So whenever you open labels, you see the result immediately, but somewhere in the background there is a network request happening.
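A minimal sketch of the two-method pattern just described, with the Apollo client injected so the shape can be exercised. The function names follow the ones mentioned in the session; the query document and variables are placeholders.

```javascript
// Read whatever the cache has right now (fast, possibly stale).
function fetchLabels(client, labelsQuery, variables) {
  return client.query({ query: labelsQuery, variables, fetchPolicy: 'cache-first' });
}

// Fire a background refresh that bypasses the cache, so the next open is fresh.
function fetchLatestLabels(client, labelsQuery, variables) {
  return client.query({ query: labelsQuery, variables, fetchPolicy: 'network-only' });
}

async function openLabelsDropdown(client, labelsQuery, variables) {
  // Show the cached result immediately...
  const cached = await fetchLabels(client, labelsQuery, variables);
  // ...then kick off the network refresh without awaiting it.
  fetchLatestLabels(client, labelsQuery, variables).catch(() => {});
  return cached;
}
```

The point of the split is that a simple (promise-based) query cannot express cache-and-network itself, so the two fetch policies have to be issued as two separate calls.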
C
Yes, until the network request is resolved... For example, if you open the dropdown, you just took the cache yourself; you don't wait for the network. But once you close the dropdown, it will still be updated, so the next time you open the dropdown you will receive the next version that the backend returned. So it should be a very up-to-date cache; it should be up to date with the backend, theoretically. And we have other tools to invalidate the cache. Paul actually asked about this in the MR.
B
So I'll share the MR; I was going through it replying to some things. There's a big, crazy idea that I had, Natalia. We could talk about that first, or...
B
So right now we target what we persist by looking at the query, and we can decorate the query with "here's how we need to persist things". But what apollo-cache-persist actually saves is our state, and so we have to somehow transform these query decorations into persist decorations on the state, which is then living inside the cache. And so this creates the need for a thing called the persist link, which is a...
B
...an Apollo link, which is like a middleware for queries coming in.
B
Right, so looking at this: once I saw the persistence map, I was like, oh, maybe we could just talk to this, and I couldn't get that idea out of my head. So I was like, I've got to see if I could do it. And it's okay if we don't do it; I just wanted to see if we could. So the idea is, instead of the developer adding persist decorators to the query...
B
...we would give some sort of parameters that would get passed to the persistence mapper, and rather than the mapper looking for these persist keys, it would just match based on these patterns. So this is a patch I put together that implements that: instead of adding the @persist directive and the __persist fields throughout the queries, this is all we would need, based on what's being stored.
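The pattern-matching idea could look roughly like this: an async persistence mapper (the serialized-cache-in, serialized-cache-out shape that apollo3-cache-persist's `persistenceMapper` option expects) that keeps only the normalized-cache entries whose keys match a small list of patterns. The patterns below are illustrative, not the ones from the MR.

```javascript
// Illustrative patterns: which normalized-cache keys survive persistence.
const PERSIST_PATTERNS = [
  /^ROOT_QUERY$/,
  /^Label:/,
  /^Project:/,
];

// apollo3-cache-persist hands the mapper the cache serialized as a string
// and persists whatever string the mapper resolves with.
async function persistenceMapper(dataString) {
  const entries = JSON.parse(dataString);
  const persisted = {};
  Object.keys(entries).forEach((key) => {
    if (PERSIST_PATTERNS.some((pattern) => pattern.test(key))) {
      persisted[key] = entries[key];
    }
  });
  return JSON.stringify(persisted);
}
```

The trade-off discussed in the session shows up directly here: the patterns are the whole selection mechanism, so they have to be tight enough to exclude entries that should not be persisted.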
B
This clearly isn't ideal; it's weird. For one, I'm using regex, and lots of people don't even like using regex at all, and this is clearly weird. My main question is: I think it's potentially a better developer experience to just decorate the query I want to cache, but do we save maintainability from the implementation side this way? And are there any other concerns, with this specific problem and the way our priorities are with this? Hopefully we're not doing this too much either.
B
Is this approach more favorable than decorating the queries, and...
C
The problem with this approach is that we are not selective enough. The thing is, we have multiple queries that have project and group as the root. That goes for assignees, milestones, basically for all the options in the dropdown, even within the same application.
C
How do we tell in the cache patterns that we want exactly this query to be cached, like only this little one with labels, or only this little one with issues? We can use cache patterns, but they would need to be much more detailed than what we have right now. So what we can do, potentially, is maybe rely on the query name in this particular case, because in our code base query names should be unique. This is ensured by the linter.
C
Assignees of the issue should also be included as properties: a user, so not only Label and Issue but User, or UserCore, I don't remember which type the assignee belongs to, because we're fetching a list of issues with assignees. And this would require us, every single time we add one more nested property to be cached, to either decorate it with persist, which is far from perfect, or modify the cache patterns, which is far from perfect too. So I don't have a good... I really like the pattern, I just need to...
B
Yeah, and I think you bring up a good point; this runs a risk. Previously, you had to add the persist everywhere, otherwise the caching is probably lost. Here, because we're matching so liberally, we're potentially going to cache more than we need, and finding that balance is going to be the key. But the idea is...
B
You're right, though; it's just an idea. Yeah, I didn't get to share this with you. So when I was working on this, I used what was saved on your branch, and the cache: I saved that out and kept iterating to figure out, okay, what can I do to get the cache looking the same? And with these properties, the cache looked the same on load, when I searched, and when I opened up the labels.
B
I don't know if there's any other way to trigger queries, but this was the one weird, quirky thing about it; this may not be intuitive. We only include the queries; we're not including the count queries, because those show up in the cache but they're not persisted, and we don't want to persist those. And so here, rather than selecting by query, we're selecting by the state, like the issues that actually have nodes, which I love.
C
Really? Yes, and it was possible in Apollo 2; we were discussing it on one of the frontend pairings. Apollo 2 was way more liberal in this: you could decorate the response the way you want, and if it could find only five properties here, fine. But Apollo 3's policy is: you don't have this in the request? Skip everything. So it...
B
So here's an idea, and this is weird, these are crazy ideas: we could get the persist link...
C
That's the whole point, and unfortunately for us, the examples that apollo-cache-persist provides for the link and the mapper don't work with Apollo 3, even though the library is called apollo3-cache-persist; they just kept them from Apollo 2. And the Apollo 2 example of persisting is a bit simpler. Probably Paul would be happier with that one; it's not as big. But it's simpler only because in Apollo 2 persisting didn't need to do so many things: it was only removing the decorator, one @persist directive, like @skip or @include, and...
C
It was only one decorator you put on the query, and Apollo 2 was fine with that, because then you take the directive and recursively go through the leaves of the object, putting __persist on every single entity you find and returning it. It didn't need to be in the query. But in Apollo 3, we don't have this nice thing.
C
We just need to do lots of manual work, and it doesn't make me happy. As I said in the application performance session, when we move to frontend caching, we need to realize that either way we go, the price is high. The price in developer experience is high, like really high: we will be either decorating queries, or just remembering to put things in the persistence mapper, or passing them as parameters. Either way we go, the price is still high.
B
A lot of times, and yeah, it's not ideal. So I'm fine if we just say, hey, we're just going to do it this way for right now; I'm just bringing up the brainstorm for us to talk about it. If not...
B
I think you'll like this, because we're pretty much guaranteed a timing of the link being executed and then our persistence mapper being executed. So if we had a shared reference to some object that just housed the keys that we're going to persist: what if, instead of the persist link writing these directly on the objects as __persist, it wrote to some object...
B
...that's going to be read by the persistence mapper. So then we'll share this same object with the persistence mapper: the keys to persist. This allows us to circumvent... we don't have to return data that's not actually in our query; that metadata lives inside this shared object that the link will write the keys to. So anytime it sees an @persist, whatever the return was, we'll just spread that into all of these keys we've got to persist, and that's then what the persist mapper will read.
C
What a link is doing is an interceptor, right? It has the request and response, and it works with the request and response. That's why we're modifying the response, because that's what we have in hand. So how do we make it a pass-through link?
B
So receiving this intermediary object means I don't need to return __persist: information I want to send to the persistence mapper, I write to here.
B
So instead of the query having to figure out how to update and resolve persists, I would imagine it'd be something like: is there a persist directive on the operation's query, and if there is, write keys from the response into the keys to persist, and then pass through. So this is just a pass-through link: it just writes the keys we need to persist by reading the response.
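The brainstormed pass-through link could look roughly like this. Everything here is hypothetical (the `keysToPersist` registry, the `hasPersistDirective` helper, recording top-level field names rather than normalized cache ids); it is a sketch of the idea, not the MR's implementation. The recording logic is a plain function so it can be exercised without Apollo.

```javascript
// Shared registry: written by the link, later read by the persistence mapper.
const keysToPersist = new Set();

// Hypothetical check: does the operation definition carry a @persist directive?
function hasPersistDirective(operation) {
  return operation.query.definitions.some((definition) =>
    (definition.directives || []).some((directive) => directive.name.value === 'persist'),
  );
}

// Record which keys a @persist-decorated operation produced, then pass the
// response through untouched (no __persist injected into the data itself).
function recordPersistKeys(operation, response, registry) {
  if (hasPersistDirective(operation)) {
    Object.keys(response.data || {}).forEach((key) => registry.add(key));
  }
  return response;
}

// Wiring inside an Apollo link chain would look roughly like:
//   new ApolloLink((operation, forward) =>
//     forward(operation).map((response) =>
//       recordPersistKeys(operation, response, keysToPersist)));
```

The appeal discussed above is that the response stays clean: the metadata lives in the shared object instead of in fields the query never asked for.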
B
It's weird; no, it's definitely weird. This is still in brainstorming land, but is this potentially more desirable than requiring __persist everywhere, and requiring us to update and manage what we're returning as well? This is just the idea.
A
Asking another remedial question: was this only a problem for simple queries?
C
No. I mean, we could skip all the persist part altogether if we were sure that for this application we just want to cache everything: whatever Apollo sends, cache, cache, cache. If we cache everything, we can just drop the persistence mapper and the persist link altogether, because there is no need, right? We just cache everything. What the caching client is doing is opt-out, not opt-in: for now, when it's enabled, it basically caches everything.
B
I'm now thinking... I'm realizing I am naive here. The result data doesn't look like the data we get live in the cache. It looks like the data...
C
No, right. Honestly, I know we are recording, but this whole MR was frustrating from start to finish, even before we started on it, and the library is very frustrating too. If you work with it, it's very much half-baked: they just put the basic usage in place, and if you want any advanced thing, it's "oh, go read these examples we put here for React Native". Like, what? And half of it...
C
We need to see if it's safe, if it's secure, because that was a great question from Paul about security too. What about validating the cache? And are we even sure it works as expected? For this, we need to just put it in place, enable the feature flag, and see how it plays with everything, yeah.
B
Well, I mean, for me the discussion is relevant; these are concerns I would want to address before we start spreading this. And so it's like, okay, we're testing something out, all right: let's comment that we're still discussing this, here's where the discussion is happening, we want to test this in prod, that's why we're doing this. We can totally go that route.
C
That would be the best. Unfortunately, this link is what they recommend as the way with apollo-cache-persist. That's first and foremost the reason I went this route, because that's the recommendation from the library: create a link, create a mapper, work with them. And also because it does nice matching with queries and entities.
B
Yeah, I think that probably makes sense, and to me the alternative would not have been obvious without seeing this implementation. To me this is just the natural thing; it happens when you have multiple eyes on it. And I now think it's got to be possible for us to live with just one persist.
C
What you want to do in the mapper is: whenever we have a query, we create a list of referenced things, right, the refs in the root query, and then for every single nested ref we perform the same kind of traversal down, because refs are not only on the roots, right? A single nested ref can also contain refs. So, for example, if there is an issue ref on the project, it also refs labels and milestones and assignees and everything. So instead of having a __persist key, you want to just basically collect all the refs it contains.
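The traversal just described could be sketched as a recursive walk over the normalized cache that follows every `{ __ref }` reachable from a starting entry. This is a minimal sketch; the real mapper would start from the refs selected off `ROOT_QUERY` and would need the same-ref-from-multiple-queries handling mentioned below.

```javascript
// Collect the ids of every cache entry reachable from `entryKey` by
// following __ref pointers, guarding against cycles with `seen`.
function collectRefs(cache, entryKey, seen = new Set()) {
  if (seen.has(entryKey) || !cache[entryKey]) return seen;
  seen.add(entryKey);

  const walk = (value) => {
    if (Array.isArray(value)) {
      value.forEach(walk);
    } else if (value && typeof value === 'object') {
      if (typeof value.__ref === 'string') {
        collectRefs(cache, value.__ref, seen); // an issue pulls in its labels, milestone, ...
      } else {
        Object.values(value).forEach(walk);
      }
    }
  };

  walk(cache[entryKey]);
  return seen;
}
```

Starting from a project entry, this pulls in its issues, and through them their labels, milestones, and assignees, while anything not reachable from the chosen root stays out of the persisted set.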
B
I think I'm feeling a recursive algorithm coming; I like recursion.
B
I think it would start at the root query; that's where we'd want to start.
B
We would look for those. I'm kind of feeling like maybe we could look for this child, but that's irrelevant. We...
B
Then once we have that, we have a function that says "deeply persist", or "deeply include", whatever it is, this ref. So then we would read that ref, and we would somehow need to... we would save that out, but then from there we'd have to read all the other refs, and on those refs we would say "deeply include this ref", and we would just...
C
One thing that we need to account for is that we will have the same refs coming from multiple different queries, and they will also go through the persistence mapper. We need to make sure that we cache only those that go through this query, this particular one, not all the labels that come with the same ref.
B
Yeah, and I think yes, and that's why we would do it this way: going from the root query first, okay, finding only these ones, and then we just deeply find those ones. Man, that algorithm has got me excited. Do you want to try doing that?
B
I think we could; let's try it, because the persist link is actually going to behave as we expect. So I think what I'm going to try to do... oh, this is for groups, wait, is it for groups? Oh, we do groups.
B
Yes, yes.
C
But that's why we decorated every single property. So, for example, our issues query will bring us issues that have this little __persist on them, but at the same time the issue count query will bring us issues without this little thing, and that was different. The project is still the same, the root is still the same, the reference is still the same.
B
Yeah, why is it so upset? Oh, the webpack... I think the decision is going to come down to you, and we don't have to make the decision on the call, but I'll leave a summary comment of our discussion on it. We can either target this selectively from the query side or very broadly from the state side, and I think that's kind of the...
C
Yep, exactly, and we will need Tim, because Tim is kind of the product owner for this feature. He would need to decide, because he knows what the end goal is here. For me, it's basically narrowed down to: implement the caching and we will see how it works. So yeah, however...
B
Here, where I'm creating a client, this is where I put our local cache patterns. What if I just passed in the persistence mapper here? And I'm not trying to do anything fancy with it; I'm actually just selecting: these are the ones.
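Wiring the mapper in at client-creation time could look roughly like this. The option name `persistenceMapper` follows apollo3-cache-persist's documentation, but treat the exact shape as an assumption to verify against the library version in use; the trivial mapper here is a placeholder for a selective one.

```javascript
import { InMemoryCache } from '@apollo/client/core';
import { CachePersistor, LocalStorageWrapper } from 'apollo3-cache-persist';

const cache = new InMemoryCache();

const persistor = new CachePersistor({
  cache,
  storage: new LocalStorageWrapper(window.localStorage),
  // Only what the mapper returns survives across page loads.
  persistenceMapper: async (data) => data, // replace with a selective mapper
});

// Hydrate the cache from storage before the app renders,
// so the first queries can be served from the persisted state.
await persistor.restore();
```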
C
We just need to make sure that the persistence mapper is selective enough. Okay, that's my main worry here: that we're not caching things that are not supposed to be cached. And we can test it thoroughly, because this particular application contains so many queries with the same root, like project or group.
B
You see how I get nerdy tonight. I get the hesitancy with this in the first place, and, like, okay, this is a necessary evil, but I'm trying to make it as nice as possible. I'm not naturally an efficient person in that regard; I'm the kind of person that my wife would make fun of.
B
Let me not end the sentence there: when we dated in college, when I was in school, I would have, you know, the class in the bag, but there was the final, and I could have failed it and still, you know, gotten good grades. But I'm the kind of person to try to get a perfect score on the final, which means perfect on everything. So, I've grown a lot since then.
B
Yeah, I tell you though, Apollo is pretty fancy. It's fancy.
B
Morning, yeah, yeah. No, it makes sense why we're being selective, cool. Yeah, now I think I'll leave a comment summarizing that there's just a trade-off here: we could be very permissive, and maybe this works on some pages, but then we wouldn't need these things; or we can be selective. I'll leave it up to you, and if you want to leave it up to Tim, that's whatever you want to do there. But thanks for talking this out, that really helps.
B
Yeah, man, that's tough. I feel like there's got to be some sort of key-based approach, but I'm starting to let that feeling die.
C
No, I'm sure that something easier and nicer exists. The point is, it will come with iteration: there will be someone, you or someone else, who will come up with a better solution to this, and I'm quite sure about that. I'm just not doing it myself anymore, because I was frustrated for two months writing this.
B
Yeah, 100%. Okay, so I'll write a summary comment on there. The other big question I had, and I don't know if you already tested this, and then I think we can just take the rest of it offline: this was complete news to me. I had no idea that when you signed out...
B
Yeah, those, and then if I go here, it'll do the Apollo cache one, yeah.
B
The other big question I had here was... where did I put it?
B
You haven't tried it out? I'll try it out, and I'll just write it as a to-do for me.
B
Take it to them, all right, yeah. I was just curious, because I can imagine a worst-case scenario: after using this for a week, all of a sudden, worst case, this breaks Apollo somehow, like we don't actually get the real...
C
That shouldn't happen, because Apollo is smart enough with the cache-and-network policy that if something is broken, it will just fetch. You have probably noticed a few issues, maybe you didn't, but with Apollo there is a huge point: if something is even slightly off with the cache, it will immediately refetch. It happens a lot with work items; I have a lot of examples of this. So, for example, people fetch work items, and then fetch work item notes that have a slightly different response...
C
...the cache is updated, and Apollo is like, "this doesn't match what I have in the cache", and it just refetches everything, like five more queries are fired. So we are safe in these terms. If there is a query asking for, let's say, nested entities with three levels, and we only put two in the cache, Apollo will probably say "not there" and refetch: it will just hit the server every single time something is missing.
C
It still needs to be investigated, because we need to understand what's going on, and we also need to dive into moments where we'd purge the cache manually, maybe with some timeout. Because, what's the point: for now it stays forever, it doesn't have any expiration date, and I believe if we leave it there, even the labels will eventually be different. We need to somehow purge the cache after some kind of timeout, maybe like a month, yeah. But only if it's not... maybe there is a timestamp.
A
Yeah, and I guess you've maybe got some, what would you call it, key problem areas or something that you've got in mind already. But as you start sort of rolling it out further and further, it's, yeah, it's going to be huge.
C
Yeah, I just want to put as many flags on it as possible, right? It's like: please, don't use this everywhere. This is a part of the code designed specifically to solve this problem, and please evaluate it, like, there are ten different questions; go through them, and if the answer is no for at least one of the ten questions, don't use it.
C
In this case, I had a proposal, and now a backend engineer is looking into it, and it's basically a short circuit. Labels are returned by GraphQL in pages, right? We have 100 entities and that's it. If you want to override this page size, it's very specifically designed on the backend, but by default it's 100 labels.
C
Why can't we, while we're doing this huge operation of going through all the labels of the project, just short-circuit? Whenever we have 100 labels, we just stop the search and send 100 labels back to the user. It's still 100 labels; we cannot show more in this particular case. It will maybe be only slightly faster on small instances, but on instances like gitlab.com, accumulating those 100 will definitely not take 30 seconds.
B
Yeah, things take forever. At the same time, in the Web IDE we have one endpoint where we get all the file entries of the main gitlab project, which is a list of like 26,000 strings or something like that, and it happens in like three to four seconds, which makes you think we'd be able to get labels.
B
So yeah, there's something... it'd be nice to address the root problem, for sure, yeah. So this is, thanks for talking about it. I think this was really good, both for understanding the workings of the Apollo cache more deeply and for brainstorming this specific problem.