From YouTube: Caching Workshop - Session 2
Description
Live walkthrough of caching in Rails and at GitLab with Robert May from Create: Source Code. See https://gitlab.com/gitlab-org/create-stage/-/issues/12820
A: Recording, and I will bring up... how do we want to start? I'll tell you what we'll do: to begin with, we'll show the one thing I mentioned on the call.
Quite a lot of it is HTML page caching, and the performance difference from this is ridiculous, to the point that if you're ever building your own piece of software, you should just try and do this, because it's beyond anything Rails itself is going to handle. I think you can get Rails up to a thousand to one and a half thousand requests per second, and you need a really beefy server to get it to that point, but nginx will serve, you know, tens of thousands of requests per second on a Raspberry Pi or something. It's just crazy. It's really, really efficient.
So if you can get HTML page caching working, it works really well. I'm going to use my own site as an example of this, and then we'll look at GitLab for the other stuff, because we might be able to do this with GitLab.
A: There are some tricks to using it, but obviously the disk side of it can be a bit problematic. So let's have a look at... this is very tall.
A: Here's my rubbish review of an iPad. Looking at the performance panel in the browser is very useful, actually; you can ignore most of the other bits.
What we're looking at is the initial request speed, and on this article it's 92 milliseconds, which is just under that 100 milliseconds, and that's the reason I set the 100 millisecond target for my performance work: because I knew I could get under it.
A: That's your best case scenario; you're probably not going to get much faster than that, because essentially you've got the transfer speed of it, and the other parts are tied up in TLS and just connecting to the server in the first place. In that regard this is pretty much your best case scenario, and the reason why it works is because this is just an HTML page that's been rendered to disk on my server. It's served off of disk for every page request, and it will run really consistently.
A: It's going to fall over now, as soon as I do this, but you can sit here refreshing it. You can even shift-refresh. I'm just going to move this; I don't want to move the zoom bar. You can just sit here mashing it and... oh no, it went over 100 milliseconds... and it'll be pretty much consistent, and that's just because it's an HTML file. And the reason why that works is... how do you turn off JavaScript on this?
A: Do you remember how you turn JavaScript off on it? I don't; I can't believe it. They're going through settings. Basically, the menu bar up here is built with JavaScript; you might be able to see it very briefly flicker. That's a trick I use a lot, kind of going back to an older form of web programming that I guess you don't see as much now, which is progressive enhancement: rather than having a single page application, you have a page that works by default without JavaScript, and then JavaScript just makes it better.
That's a really nice goal. In fact, that is kind of how we do a lot of stuff at GitLab. We have a lot of pages that are fetching stuff from the server side, rendering on the server side, and then regenerating and adding stuff to it in the browser, and that's something that makes a lot of sense. It's also better for accessibility. I know screen readers are getting better; I mean, GitLab's difficult in terms of accessibility, because it's quite complicated. It's got a lot going on.
A: It's going to be hard to get accessibility working for screen readers and stuff, but it's important even for non-screen readers, and speed is important on the server side. Getting it responding as fast as possible matters for the front end too: the faster you get that first response, the faster the front end loads. It's just generally beneficial. So I use tricks like that quite a lot. I'll stop sharing that one, and we will look at... is anyone a really good vim user, by the way?
A: Please don't mock me, because I'm still really... I've used vim for years and I don't even know half the key bindings. I use arrow keys to, like, wander around stuff. I have no idea what I'm doing. So here is a terminal, and we will look at... let's look at... I need to find it first, that's a good point.
A: Oh, what was it called? There's something I added recently... what we had recently: we've added a couple of caches that are kind of like keyless caches, where they just rely on the timeout. They don't expire based on anything else; they're just there to cut quite a heavy load, and they are purposefully stale.
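A minimal sketch of such a keyless, TTL-only cache; the service name here is hypothetical and just stands in for the expensive storage-size query described below:

```ruby
# Nothing invalidates this cache explicitly: it simply expires after a
# short TTL, and is allowed to serve slightly stale data in the meantime.
def cached_storage_usage(namespace)
  Rails.cache.fetch(['namespace-storage-usage', namespace.id], expires_in: 1.minute) do
    StorageUsageService.new(namespace).execute # hypothetical expensive call
  end
end
```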
A: Essentially. So I'm just going to try and find that.
A: Now, that was a mistake: that's not it. Something like "storage alert limit", I think.
A: In the presentation this was where it was kind of problematic: we found this payload, where you're checking the storage size service for the namespace on GitLab. So this is checking whether a namespace has exceeded its storage quota, and it does this on every page load inside that namespace, and it was quite expensive. I mean, it's not expensive-expensive, but it was 70 milliseconds, and if you think we're trying to get to a 100 millisecond page load, that's a big part of the budget.
A: There's a delay there anyway before it ever gets to GitLab, because we have to process all the commits and stuff like that, and the chances are that at worst case it'll be a minute late; more likely it would be five seconds late. You know, very, very minor, and you're cutting a really large amount of traffic from the database. And so, in this regard, this payload bit...
A
There
is
what
we
needed
to
to
limit,
and
you
could
add
caching
in
the
helper,
but
where
rails
is
very
useful,
is
where
it
does
that
template
tree
digest
stuff
in
the
template.
If
you
change
stuff
in
the
template,
the
cache
keys
will
rotate,
which
means
that
you
don't
have
to
worry
about
having
style
caches
when
you
push
changes
to
your
application.
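As a rough illustration, a fragment cache in a HAML view; Rails folds a digest of the template and its render dependencies into the key, so editing the template rotates the key on deploy. The helper name is made up:

```haml
-# Effective key is roughly:
-#   views/<template digest>/namespaces/<id>-<updated_at>/storage_alert
-# so changing this template invalidates the fragment automatically.
- cache [namespace, :storage_alert], expires_in: 1.minute do
  .storage-alert-banner
    = render_storage_alert_banner(namespace)
```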
A: As far as I know, it doesn't do that in helpers. So when you put a cache in a helper, you've kind of got to keep in mind that that might not happen. They could tweak it; they may even have done it since they originally implemented it. I haven't actually checked lately, but it's kind of effective. So in this regard we've got a helper method in here, which I mentioned on the slides, which is this one that's used to count the number of storage things, to get around some awkwardness in needing that query data. Let's see if we can actually find it... so here's where the cache is implemented on this.
So you can see it's just got an expiry time of one minute, and then it's scoped to the namespace and the number of those hidden storage alert banners, and that feels like a bit of an odd cache. It feels like it's not caching a huge amount of stuff, but actually, in this regard, the template is not very expensive.
A: None of this takes very long to render; most of it is just helper calls outputting strings, and there are no loops in it. So the template in this regard isn't expensive. The query is what's expensive here, the payload request.
A: I came across something interesting in this, which I never actually knew. The original version of this template had an early return if the payload was empty. So instead of this `unless payload.empty?` here, it had a `return if payload.empty?`, and that just totally breaks the cache: it actually exits out of the block for the cache call and doesn't write a cache value, or writes a nil cache value, and that causes it to just be re-executed every time, so it actually has no effect whatsoever and kind of breaks it.
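A sketch of the two versions being described, with hypothetical names; the first one silently defeats the cache:

```haml
-# Broken: `return` bails out of the template before the cache block
-# completes, so no usable value is written and the block re-runs every time.
- cache(payload_cache_key) do
  - return if payload.empty?
  = render_storage_payload(payload)

-# Working: guard the output instead, so the block always completes and a
-# value (possibly empty markup) is cached.
- cache(payload_cache_key) do
  - unless payload.empty?
    = render_storage_payload(payload)
```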
A: It looks a little bit lamer, but it actually works, which is kind of what we're going for. The actual performance benefit of that isn't necessarily super noticeable, because what it's doing is quite subtle: you're cutting a 70 millisecond query from a good portion of page loads, but that sort of thing you'd probably not expect to show up sharply in the stats, because we have such a high level of traffic.
A: Cool, right. Does anyone have any particular pain points in the GitLab application that they find slow? I'm taking any suggestions here. I have some personal ones; I've got them sort of ready. I've even got merge requests I made earlier, in proper Blue Peter fashion for anyone in the UK, but I'm happy to look at other ones first, if there are any.
A: I'm glad you said that; that's the one I prepared earlier. I have fixed the pipelines tab on MRs... I say fixed: I have cached the pipelines tab on MRs, and glossed over the massive amounts of JSON it renders. It is huge. Let me find the merge request; we'll just look at this straight in the GitLab UI.
A: Okay, so yes, you can actually even see this as an example in its own merge request, which I'm very happy about. The pipelines tab: the more pipelines you have, the slower it gets, and it is quite painful. I mean, that was a good few seconds; I didn't actually time it. And the thing is, because it's not cached, you can just do it again; let's watch this. It does cache it for the polling; it actually uses HTTP caching, so you can see it reloaded it there.
A: Fine, and I think that's because it's cached in my browser; it should be, if you do a force refresh. So that's a good example of a use case for HTTP caching. It was HTTP cached, but the actual response speed of that is just quite... that was five seconds, which is not super great; that's a risk of hitting timeouts.
A: It gets worse. I've had one MR up to about 25, 30 pipelines, and it just does get worse. This is a great use case for that multi-fragment caching that I talked about before, and it's where I thought, when I first came across this: yes, I'll just go and add multi-fetch fragment caching to that, it'll be solved, everyone will think I'm brilliant. And then I went and looked at it, and it's actually a problem: it's Grape entities again.
A: So this is all serialized JSON, and it's done in something in our serializers folder, which is where we're using Grape entities to serialize JSON, but on the UI side rather than on the API. Some of them extend the API entities, but it uses Grape for all of this, the Grape entity stuff, and when I discovered that I was quite unhappy, because it's just so much harder to cache. This merge request actually introduces a way of doing that.
A: So you're looking at this, where before it was just render json with this. That returns basically just a really big hash, and then Rails converts it to JSON, which is not super fast. What we needed to do is cache the JSON representation of each of those individual pipelines, because each pipeline can change its status over time; obviously, as it's running, its status will update.
A: Luckily, the pipeline records do actually get updated; they get touched. An easy way of expiring cache keys on Rails models, which I totally forgot to put in the slides...
A: ...is you call the instance method touch on an ActiveRecord object, and it updates updated_at without running any callbacks. There's even a really nice touch_all which you can call on a collection, and it will just do all of them in one query, very useful. And actually the pipelines do already do that, so their cache keys are going to change as their state changes, as they go through. But the trick is... if this was just HTML...
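For reference, the two ActiveRecord calls being described, as a minimal sketch; the association name is illustrative:

```ruby
# Bumps updated_at on one record without running validations, which rotates
# any timestamp-based cache key derived from the record.
pipeline.touch

# Does the same for a whole relation in a single UPDATE (Rails 6+).
project.pipelines.touch_all
```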
A: ...this would have actually been quite easy; I'd already have merged it. But what we need to do is modify how the serializer stuff works, and so, in that regard, I built a caching serializer, and it's pretty horrible. The problem is that it can't return a hash like Grape normally does: the Grape entity representation returns a Ruby hash, which then gets converted to a string, but we need to...
A: ...we don't want to be serializing a hash into the cache and then back out again. I did try it, but it wouldn't let me, so I had to write a specific way of serializing it. It's much more effective, as I was saying in the previous talk, to cache the view, so cache the finished JSON. JSON rendering is quite slow and it generates huge strings in Ruby memory all the time; you just want to avoid it.
A: So in this regard I added this, and what it does is return a string, just a big string, rather than a hash, so it does function slightly differently. What it uses is the fetch_multi method, but a modified one that I've added in this merge request, because the Rails one by default doesn't work the same as fetch. With Rails fetch, you do fetch with your key and a block, and the block can take a parameter, which is the key that was missing from the cache...
A: ...or you don't need to take it, because it's only a single cache call and you've already got the object in scope, so in that regard you don't bother. But on the fetch_multi one, it just gives you the key, which isn't very useful, because what can you do with your weird combination cache key when you've got a missing record? You're using it for essentially a loop, for multiple records, so you can't just take an instance variable, like a user, that was defined outside the block.
A: You need it passed to the block. So I've added a new method in this, which is our own version of fetch_multi, and it actually provides you the object. My cat has started snoring as well, so I apologise if you can hear that; it's not me, it's actually the cat. So that's quite useful; I might remove that from this and actually merge it in on its own anyway.
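A minimal sketch of the difference; `fetch_multi_objects` and the serializer call are assumptions standing in for the helper added in the merge request:

```ruby
# Rails' built-in fetch_multi yields only the cache *key* on a miss:
Rails.cache.fetch_multi(*pipelines.map(&:cache_key)) do |key|
  # `key` is just a string; there's no easy way back to the pipeline here.
end

# The variant described here builds a key => object map first, so the
# caller's block receives the missing record itself.
def fetch_multi_objects(objects)
  by_key = objects.index_by(&:cache_key)

  # Returns { cache_key => fragment } for every object, hits and misses alike.
  Rails.cache.fetch_multi(*by_key.keys) do |key|
    yield by_key.fetch(key)
  end
end

fragments = fetch_multi_objects(pipelines) do |pipeline|
  PipelineEntity.represent(pipeline).to_json # hypothetical serializer call
end
```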
A: I can talk a little bit about this; Sean McGivern is much better placed to talk about the Redis cluster validator. It's kind of preparing us to run Redis as a cluster, a bit like the memcached cluster setup I mentioned. You don't have to worry too much about it, but it's tied to how certain requests might need to be all on the same cache server.
A: I think there's some documentation on it; I'll try and link it up in the issue. Just for simplicity we'll ignore it at the moment; it's something that'll crop up in reviews if it's needed, and I send all of my caching merge request reviews to Sean McGivern, and he tells me when I need to put that in. So what this does is, basically, for each of the things that are passed to the caching serializer, in this case the pipeline...
A: ...it looks to see if it's got a rendered JSON version of that entity in the cache, and if it doesn't, it then does a JSON dump and caches that. It works for one record, which is fairly boring, but also returns all of them. Let's see if I can load this up locally, because it's kind of interesting; I've never managed to get the pipelines working very effectively on my local machine. There we go.
A: So this merge request is sort of almost there, but because it does some slightly odd things, it just needs a bit more going over, really, to get it in a nice place. No, that's not what I wanted to do.
A: That as well, no. So we use Redis entirely at GitLab, and I did actually ask Andrew why, just because I was curious, and it is basically because it's one less moving part, and that's a very understandable reason to choose it. I don't think there's a problem with Redis. The thing with Redis for me is that if I was running my own service, I wouldn't use it; I wouldn't run it myself.
A: I don't like running my own Postgres database, because I just don't trust myself to set it up right, and Redis is very similar for me in that regard. I don't trust myself to set up a Redis cluster and have it not fall over, but memcached is so easy to set up to be largely infallible that I'd kind of do that. But yeah, in our case...
A
The
reasoning
behind
is
kind
of
we
already
use
redis
and
it's
one
that's
moving
parts
and
it's
simpler
for
customers
on
self
installs
as
well.
They
only
really
have
to
worry
about
redis.
We
actually
we
run
multiple
redis
clusters.
I
think
we
have
three
and
we've
got
one
for
like
background
jobs,
particularly
we've
got
one
for
the
cache
and
then
we've
got
another
one
that
I
always
forget,
the
name
of,
but
for
a
lot
of
installs
they're,
probably
just
using
one,
and
it
does
kind
of
make
it
easier
in
that
regard.
A
It's
kind
of
funny.
I
think,
when
you
use
a
cloud
platform
that
even
if
you
use
a
memcached,
offering
it's
probably
actually
redis,
underneath
it's
quite
and
one
of
the
reasons
for
that
was
originally
memcached.
He
didn't
support
stuff
like
authentication
of
any
sort
really
for
a
while,
but
I
think
it
does.
I
think
it's
called
samuel
now.
A: My own GitLab install, right, so merge request pipeline thingy. I can't really share it... I can share my whole desktop, but it's massive, so it ends up so small that no one can really see it. So what I'll probably do is show you the effects in the browser, and then we'll look at the stuff afterwards. So this one's got five pipelines.
A
Let's
get
the
network
panel
back
up
right
pipelines,
so
first
pipeline
load
on
this
is
3.4
seconds,
so
that
was
totally
uncached.
There
are
five
items
here
to
cash,
so
if
we
force
refresh
now
it
should
be
less.
A
So
that's
under
700
milliseconds
now
so
those
are
being
served
from
the
cache
and
we
can
look
at
that
in
the
in
the
cache
as
well
and
when
you
change
the
statuses,
I
would
click
it,
but
it
doesn't
actually
work
on
my
machine
anyway,
because
I
never
set
them
up
properly,
and
so
it
will
just
return
the
same
status.
I
think
we
can
try
it
and
see
what
happens.
A
I
know
what
that
did
so
so
in
theory,
what
would
happen
now
is
because
I've
it's
a
multi-fetch
cache.
A
If
that
one
had
changed,
that
one
would
have
expired,
but
the
other
four
are
being
served
from
the
cache
still
so
there's
two
ways
of
caching.
It
like
we
use
the
http
caching
on
this
pipeline's
endpoint
as
well.
When
it's
polling,
you
can
see
it
polling
here,
75
milliseconds
and
it's
you're,
seeing
the
I
don't
know.
A
If
it's
showing
the
response
headers
there
you
go
so
you
can
see
it
says
no
cache,
but
it's
also
got
an
e
tag
there,
so
x,
gitlab
from
cache,
I'm
assuming
this
is
actually
being
sir
through
workhorse
or
something,
and
you
can
see
it
polling
there
repeatedly
over
and
over.
So
that's
just
returning
the
browser
cache
as
instructing
it
to
return
the
cache,
but
when.
A: ...something changes, that ETag will change with the latest pipeline, and that whole cache is gone, so all of the pipelines have to be re-rendered. What we've added here, with the multi-fragment caching, is that four of those pipelines now don't ever have to be re-rendered. They're totally done: they failed two months ago, seven months ago; those are probably never going to change. They're probably always going to be... I say always: they could be served from the cache forever, if our cache lasted forever.
A
They'd
always
be
served
from
the
cache
and
for
currently
in
progress,
merge
requests.
This
is
going
to
be
very
effective.
I
mean
it
saves
what
two
and
a
half
three
seconds
of
request,
with
only
five
pipelines
with
20
plus
it's
gonna,
be
pretty
huge.
So
we'll
have
a
look
at
the
logs
because
being
able
to
see
your
the
effect
you've
had
in
the
logs
is
quite
important.
A
Not
that's,
obviously,
not
the
right
way
doing
it,
and
so
this
won't
show
in
the
logs,
because
this
is
a
custom
one.
It
doesn't
use
the
standard
rails,
template
stuff.
I
will
add,
logging
to
it,
because
that
would
be
quite
nice.
Let's
go
back
to
the
actual
code
for
it
briefly.
A
And
see,
if
there's
anything
else
of
interest
inside
it,
so
this
would
be
able
to
be
reused
elsewhere,
a
couple
of
things:
I've
added
when
doing
other
things.
The
the
big
problem
with
grape
entities
in
particular
is
that
you
can't
cut
up
cache
parts
of
them.
Well,
you
can,
but
you
can
end
up
with
an
awful
lot
of
cash
calls,
especially
if
you're,
rendering
multiple
entities
and
every
entity
has
five
caches
in
it.
Whereas
right
now
100
entry
entities,
you've
got
500
cash
calls,
it's
very
frustrating
you
want
to
render
the
cash
the
whole
thing.
A
I've
got
another
merge
quest
that
sort
of
re-looks
at
how
we
can
actually
add
some
extra
cache
context
into
into
the
actual
my
operating
system.
Just
pops
up
saying:
do
you
want
to
upgrade
ubuntu?
Yes,
this
is
the
time
I
want
to
do
it
at
all
right,
let's
see
if
I
can
find
this
so
something
else.
A
I've
been
looking
at
adding
which
I
think
could
be
useful,
is
to
add
some
way
of
defining
additional
caching
context
on
the
grape
entities,
because
they
are
such
a
problem
and
because
what
you
can
do
is
by
default.
The
the
cache
helpers
I've
added
to
do
it
will
take
the
the
object
that
you've,
given
it
that
you're
going
to
render
in
a
great
entity.
A
So
we've
got
a
great
entity
for
a
branch
in
this
case,
so
it
takes
the
branch
and
it
looks
for
branch
cache
key
as
a
method
and
tries
to
use
that
to
find
a
cache
entry,
but
we
probably
need
extra
stuff
in
it,
and
branches
are
a
great
example
because
sometimes
you'd
you
ideally
what
you'd
use
is
the
that
touch
mechanism.
I
was
talking
about
before
for
child
relationships
in
active
record.
A
You
want
them
to
when
they're
updated.
You
say
on
any
of
the
belongs
to
association,
so
say
a
branch
has
like
a
child
branch
model
and
its
parent
is
a
branch
and
on
that
belongs
to
parent
branch.
You'd
have
to
do
touch
true
and
anytime
that
child
branch
model
is
updated.
It
would
update
the
updated
app
of
its
parent,
and
that
would
mean
that
the
parents
caches
would
also
expire
very
nice.
That's
what
rails
wants
you
to
do.
That's
the
the
base
camp
way
of
caching!
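In code, the pattern being described looks like this; the models are hypothetical:

```ruby
class ChildBranch < ApplicationRecord
  # Any save on a ChildBranch also bumps parent_branch.updated_at,
  # so the parent's timestamp-based cache keys expire automatically.
  belongs_to :parent_branch, class_name: 'Branch', touch: true
end
```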
A
That's
what
they
want
you
to
do,
and
it's
very
effective
if
you
did
it
from
the
start,
but
if
you're
coming
in
later-
and
you
need
to
figure
out
how
every
single
model
can
affect
each
other
in
different
ways.
Branches
is
a
great
example
of
this.
Protected
branch
has
no
relationship
to
a
branch,
because
branches
aren't
really
in
the
database
they're
in
italy,
so
they're
not
active
record
objects,
they
don't
belong
to
a
protected
branch
or
anything
like
that.
A
So
protected
branch
is
quite
important
as
well.
You
need
to
know
that
on
the
api,
when
a
protected
branch
changes-
and
in
fact
in
this
in
this
entity,
you
can
see
it's
exposed
protected
and
it's
whether
it's
protected,
whether
the
developers
can
push
whether
you
can
merge
that
sort
of
thing.
This
is
quite
important.
It's
also
got
current
user
in
there,
which
that's
always
a
red
flag.
Anytime.
I
see
the
word
current
user
in
something
I'm
trying
to
cache.
A: If I just take this check and move it up here into this cache context method, and we call this as part of the cache key for rendering the entity... so we'll take the branch cache key and we'll add this extra context from the Grape entity file. We'll give it the branch and we'll say, you know, what other context do you need to enter into this key so that this record expires correctly, and sort of jam them together. In this regard this will be kind of effective. So, instead of worrying...
A: ...what it cares about is whether they can push to the branch. So we can move that check and just stick it up here, and then it will cache based on whether the user can see the branch, and it doesn't matter who the user is; it only matters what they can do. That's really useful when it comes to doing these sorts of weird little caches. And so the pipeline one needs a little bit of tweaking; it's got a cache context, but here you can see... I'll merge...
A: ...the two together. It'll actually be the other one, but you can see it in here: the cache context for pipelines. This is the most boring part of adding caching: you just have to do it. At this point at GitLab in particular, for everything that might appear inside that cached section of code, you've got to go through all of it. You've got to go through the view, look at every helper it calls, go into the helpers, look at every other helper they call, look in the Enterprise Edition.
A
You've
got
to
look
all
the
way
down
through
the
stack
of
what
might
appear,
because
there
are
current
user
calls
everywhere,
just
they
never
and
they're
never
passed
in
as
method
arguments
as
well.
It's
where
there's
a
bit
of
a
flaw
in
how
rails
works
when
you've
got
helpers.
In
particular,
they
have
access
to
the
other
helpers
without
being
passed,
those
values,
and
so
it's
very
hard
to
tell
where
those
things
depended
upon.
A
So
for
this
one
in
this,
in
this
instance,
I
spent
about
a
day
going
through
everything
that
might
appear
in
those
pipeline
things
and
finding
all
of
the
requests
that
are
tied
to
user-specific
stuff,
and
some
of
them
are
kind
of
interesting,
like
whether
the
user
has
permission
to
view
and
rerun
builds
and
stuff
like
that,
and
there
was
actually
a
huge
list
of
those,
but
I
think
you
can
cover
it
just
by
doing
this
permission
check
so
covering
multiple
permission.
Checks
with
one
of
them
might
need
to
check
that,
but
it
seems
to
work.
A: Yeah, it's quite difficult, really, and that is a...
A
Yeah,
I
agree,
and
that's
actually
one
of
the
reasons
why
it's
not
great
in
this
having
it
in
the
pipeline.
Serializer,
I
think,
is
a
little
bit.
It
can
be
a
little
bit
weird
having
in
the
entity
files.
I
think,
made
quite
a
lot
of
sense
on
the
api
side,
because
the
api
entities
don't
typically
call
out
to
lots
of
helpers
but
yeah
in
terms
of
fuse
and
helpers
and
stuff.
A
It's
just
a
bit
of
a
pain.
It
would
be
nice
to
automate
it
a
bit.
I'm
really
rubbish
at
writing,
rubocop
things,
and
I
just
I
think,
I've
written
one
and
it
was
a
copy
and
paste
job
from
somebody
else's
one
and
their
one
was
still
better,
so
I
don't
generally
rather
than
but
it
would
make
a
lot
of
sense
really
to
actually
look
for
cash
calls
in
the
code
base
and
then
helpers
that
might
be
called
inside
of
them.
A
What
I
would
do-
and
I've
done
it
in
a
couple
of
places-
is
to
add
comments
in
related
sections.
We've
got
header
comments
and
some
files
that
say
there's
one
on
the
commit
view,
for
example,
which
is:
if
you
modify
this,
you
have
to
modify
this
vue.js
file
as
well,
because
they
must
remain,
they
must
keep
parity
and
we
can
kind
of
do
something
similar.
A: Equally, I suppose, because the pipeline serializer descends from the caching serializer, we might be able to pick that up in RuboCop and just flag up things that are modified in there, and potentially look for current_user in newly coupled stuff; you might have a chance at that. But yeah, it is something you kind of just have to remember, to a certain degree.
D: Yeah, I mean, statically detecting that seems pretty tricky in a generalized way. But I do wonder if dynamically detecting it might be an option, because we do have a caching block that wraps whatever call we're making. So if we can set a global flag in there, and then hook into current_user and intercept those calls, we could just bail out during development if we detect something like that.
A: Yeah, I think that makes a lot of sense, and one aspect that would play well with that is that I've started adding our own cache wrappers. I found this when I replaced the JSON interpreter that we use in GitLab: for things where we might change dependencies, it's very useful to have our own, you know, made-in-house bit that just wraps these things. So we now have a Gitlab cache class.
A
We
can
take
that
and
kind
of
mirror
all
of
the
rails
methods
just
defer
to
them,
but
then
wrap
them
in
our
own
things.
Add
our
own
metrics
into
if
needed.
We've
got.
We've
got
a
place
to
put
that
now,
which
could
work
quite
well.
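A minimal sketch of what such a wrapper could look like; the class itself comes from the talk, but this body is an assumption:

```ruby
module Gitlab
  class Cache
    # Mirrors Rails.cache.fetch and defers to it, but gives one in-house
    # seam for metrics, logging, or dependency swaps later on.
    def self.fetch(key, **options, &block)
      # e.g. bump a counter or record timings here before deferring
      Rails.cache.fetch(key, **options, &block)
    end

    def self.delete(key)
      Rails.cache.delete(key)
    end
  end
end
```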
A: Testing and caching is one of the things I didn't stick in the slides. I have a bit of a guilty reason for that: I never do it personally, and I don't think that's right. At GitLab I do write tests for caching, you'll all be pleased to know. It's where feature specs in particular come in for me: they're very effective at testing caching, because you really should be hitting things with multiple users, ideally. That's where the big check is for a cache.
A
If
it's
a
cache,
where
you're
worried
about
user
data,
actually
right
showing
you
know
slightly
incorrect
pipeline
status
or
something
is
different
to
showing
someone's
home
address
or
something
like
that
is
obviously
more
of
a
problem
and
for
those
things
you
can
add
essentially
like
a
smoke
test.
That
runs
the
same
thing
as
two
different
users
and
tries
to
see
if
it
returns
different
data
as
it
should
do
and
the
caching
is
involved
is
enabled
in
the
test
suite.
There
is
something
you
can
add
to.
A: I just want this screenshot back here. We've got some helpers for this...
A: ...we've got some of these useful helpers. So when you're doing a cache, there's the Rails memory store caching stuff.
A
I
don't
think
it's
actually
that
useful,
I'm
assuming
it
is
for
certain
things,
but
it
won't
work
across
requests
because
it
just
stores
it
in
memory
for
the
request
and
the
really
useful
one
if
you're
doing
feature
specs,
and
you
want
to
test
to
cache
over
multiple
page
loads.
You'll
need
to
use
this.
A
I
suspect
I'm
not
entirely
certain
feature
feature
specs,
but
you
want
to
add
the
flag
use
clean
rails
where
it
is
caching
and
it
will
clear
the
cache
beforehand
and
then
use
it
for
the
course
of
that
test,
and
you
can
hit
it
multiple
times
and
it
will
actually
cache
between
requests.
That
is
very
useful
and
I
totally
forgot
to
stick
it
in
the
slide
deck,
which
is
a
mistake,
but
any
other
questions
or
points
on.
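As a sketch, a feature spec along those lines; the tag name is my best reading of the flag mentioned here, and the page content and path helper are invented for illustration:

```ruby
# The tag swaps the per-request cache for a real, cleaned store, so cached
# fragments survive across the multiple page loads inside this one spec.
RSpec.describe 'Merge request pipelines tab', :use_clean_rails_redis_caching do
  it 'renders user-specific controls per user, even when cached' do
    sign_in(maintainer)
    visit(pipelines_path) # hypothetical path helper
    expect(page).to have_button('Retry') # first load warms the cache

    sign_in(guest)
    visit(pipelines_path)
    expect(page).not_to have_button('Retry') # must not leak from the cache
  end
end
```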
C: And when you have these multiple levels of caching, is it possible to have race conditions? For example, something in the UI triggers an API call which would update the updated_at, but that isn't propagated in time for another level of caching to catch it, or something along those lines?
A: It's kind of interesting, because the bigger risk that I've found, especially when you're using timestamps, is that you can accidentally introduce infinite loops quite easily, where one thing updates another thing and then that updates the first thing again. The one general good point there is to always expire upwards. So instead of expiring child relationships... say I've updated a namespace...
A
Well
now
I'm
going
to
expire,
the
caches
of
all
sub,
like
groups
and
all
projects
in
that
name
space
you
kind
of
want
to
go
the
other
way.
You
want
the
project
to
be
expiring,
the
cache
as
the
namespace
and
only
going
up,
because
otherwise,
when
you
add
the
touch
commands
lower
down,
so
you
had
to
touch
from
project
back
to
namespace,
and
then
you
expire
all
the
projects.
When
you
update
the
namespace,
it
just
sticks
in
an
infinite
loop.
Thankfully,.
A: ...the test suites catch that, at least. But yeah, with other things, especially tying things into caching, you can get race conditions in particular. It's why the Rails fetch command is particularly useful. We've actually had... I think Nick found a couple of race conditions in one of our things; I've even written them in myself, when you're doing a read and then a write, especially if you're directly using the Redis adapter.
A: It's quite hard to read through this, yeah. So here is a sort of race condition, where you're reading from Redis, then you're checking the values, and then you're writing back the missing ones.
A
That's
got
a
gap
in
it
where,
though,
those
values
can
change
from
elsewhere.
Ideally
you
don't
want
to
do
that.
There
are
redis
ways
around
it.
The
reason
why
it
doesn't
matter
in
this
case
is
just
because
of
the
way
that
cat
that
specific
cache
is
built
very
weirdly.
It's
like
an
additive
cache
where
the
cache
is
updated
with
extra
values,
and
it's
even
if
it's
slightly
delayed
it
doesn't
matter
because
it's
still
mostly
cached.
It's
a
really
weird
integration,
but
the
yeah.
A
Essentially,
what
you
kind
of
want
to
do
is
use
the
built-in
tools
where
possible,
so
that
the
rails
tools,
where
they
fetch
they're
doing
the
read
and
the
right
in
like
a
one
command
after
another
that
I
think
they're
pipelined.
Is
it
pipelined,
or
I
don't
know
if
it's
got
an
example
in
here,
there's
two
slightly
conflicting
things
with
redis,
so
you
got
pipelined
and
you've
got
like
a
transaction
one
and
they're
slightly
different.
A
Where
you
do
one
then
the
next
one,
and
I
can't
remember
what
the
different
term
for
that
one
is.
I
don't
know
if
anyone
else
knows
off
the
top
of
their
head.
A
That's
it
nice
yeah,
and
so
those
two
are
kind
of
different
they're
worth
sort
of
keeping
a
note
of.
I
guess
in
this
regard
we're
using
pipelined
here,
which
is
the
wrong
one.
You
know
that
just
is
more
efficient.
It
doesn't
actually
have
that
that
race
condition
sort
of
thing
involved,
but
the
default
built-in
helpers.
So
if
you're
using
the
rails,
cache
fetch
stuff
does
actually
keep
that
in
mind
and
it's
quite
effective.
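With the redis-rb gem, the two look like this: `pipelined` only batches round trips, while `multi` wraps the commands in a MULTI/EXEC transaction.

```ruby
require 'redis'
redis = Redis.new

# Batched for efficiency, but other clients' commands can interleave:
redis.pipelined do |pipeline|
  pipeline.get('counter')
  pipeline.set('flag', '1')
end

# Atomic: both commands execute together, with nothing in between.
redis.multi do |transaction|
  transaction.incr('counter')
  transaction.set('flag', '1')
end
```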
A: So this uses a Redis hash, specifically because of the way... has anyone used the network controller endpoint, the page, since they've been at GitLab, ever? It's the one that shows you a commit graph. Let's bring it up.
A
It's
really
really
nicely
built,
it's
got
a
really
lovely
sort
of
interface,
and
I
didn't
even
know
it
existed.
Is
this
one
if
you
click
craft,
and
it
renders
this
thing
and
it
takes
a
while,
particularly
on
git,
lab.
A
It
is
not
the
fastest,
I
think
it
might
even
time
out.
It
definitely
timed
out
before,
though,
and
the
reason.
Why
was
that
for
each
item
there
renders
on
this
it's
got
the
avatar
next
to
it,
but
because
of
the
way
it
renders
the
the
page,
the
our
other
avatar
helpers
have
two
ways
of
finding
an
avatar
for
a
person,
because
not
everybody
is
a
git
lab
user.
A
Some
of
them
are
just
email
addresses,
but
we
still
try
and
look
up
their
avatar
and
it
works
by
either
you
pass
it
a
user
and
it
uses
the
user's
avatar
and
we've
probably
already
loaded
the
users
for
passing
in
or
you
give
it
an
email
address,
and
the
first
thing
it
does
is
to
try
and
find
the
user
with
that
email
address,
and
this
page
did
that
for
every
single
commit
which
you
can
see.
It
is
timing
out
trying
to
render
I'll.
A
Project
just
as
an
example
where's
one
of
mine.
A
There
you
go
so
it
renders
this
avatar
and
it
wouldn't
even
cache
it
between
users
so
for
the
same
user
it
would
like
was
there
30
commits
it
would
make
30
queries,
but
those
would
be
cash
by
rails,
but
it
was
always
looking
them
up.
So
I
added
a
very,
very
specific
cash
around
this
endpoint
and
it's
the
biggest
performance
increase
I've
made
at
gitlab
that
no
one
has
noticed,
because
no
one
ever
looks
at
this
page
from
what
I
can
tell
it's
got
like
a
request
per
second
of
one.
A
Be
that
it's
we
can
actually
have
a
look.
It's
quite
a
shame,
because
it's
it's
actually
quite
a
nice
quite
nice
feature,
but
I've
bookmarked,
the
useful
dashboards
or
the
ones
that
I
find
most
useful.
That's
the
wrong
one.
A
I'll,
take
it
I'll
show
you
a
second
in
the
in
the
redis
one.
What
I
look
for
when
I'm
adding
a
new
cache,
and
I
don't
want
to
upset
the
sres
by
killing
redis
on
a
saturday
when
I
was
enabling
feature
flags,
as
eagle
probably
remembers,.
A
Here
we
go
so
yes
under
one
ops,
it's
yeah
under
one
request
per
second.
No
one
really
goes
on
this
end
point,
but
the
difference
I
made
to
it
was
so
massive.
I
felt
very
excited
about
it.
It
actually
is
a
cache
that
gets
used
elsewhere
across
the
site,
but
because
they're
just
speckled
in
amongst
everything
else,
you
don't
really
notice
it
and
yeah.
The
way
it
works
is
it's
basically
like
a
hash
in
redis
and
rena's.
Hashes
are
kind
of
weird
they're.
A
Not
it's
not
like,
not
like
a
rails
hash,
it's
only
one
level
deep.
For
a
start,
I
don't
think
you
can
have
nested
ones
as
far
as
I
know
might
be
up
to,
but
what
it
allows
you
to
do
is
have
a
second
set
of
level
of
keys
and
extra
values
on
them,
and
that's
really
nice.
A
If
you
want
to
expire
a
lot
of
things
in
one
go,
you
should
just
issue
a
single
request
for
one
cache
key
you
know
about,
and
then
it
deletes
all
the
ones
you
don't
know
about,
but
definitely
related.
And
so
in
this
regard
we
know
the
user's
email
address
and
when
that
user
happens
to
sign
up
or
they
update
their
avatar,
we
just
unlink
all
of
the
avatars
by
just
looking
for
their
email.
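The shape of that, as a sketch with made-up key names:

```ruby
require 'redis'
redis = Redis.new

# One Redis hash holds every avatar entry, keyed by email address.
redis.hset('avatar-cache', 'alice@example.com', avatar_url)
redis.hget('avatar-cache', 'alice@example.com')

# When a user signs up or changes their avatar, drop just their field:
redis.hdel('avatar-cache', 'alice@example.com')

# Or expire every related entry, known and unknown, with one command:
redis.del('avatar-cache')
```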
A
We
just
send
the
email
addresses
as
the
keys
and
delete
them
and
in
this
regard
it's
very
effective.
The
way
it
works
and
it
cuts
out
loads
of
these
queries,
but
this
will
never
get
used
for
anything
else.
It's
not
a
like
a
universally
useful
piece
of
work.
Unfortunately,
looking
at
the
grafana
stuff
for
like
what
you
do
when
you
are
breaking
stuff
and
you,
as
I
said
earlier,
you
have
to
test
in
production,
it
just
doesn't
work
locally.
A
I
I
can
show
the
pipeline
one
locally,
that's
very
obvious,
but
when
you're
adding
a
really
tricky
cache
like
something
that's
going
to
be
kind
of
intangible,
like
we're
trying
to
add
one
around
checking
the
binariness
of
a
file
and
we
look
at
it
locally
and
it
totally
works
and
you
stick
it
live
and
you
find
out
that
people
don't
visit
the
page,
often
enough
for
that
cache
to
be
useful.
A
That's
just
something
that
happens.
It's
a
pain
you
just
have
to
revisit
it
later
with
feature
flags.
It's
quite
nice.
You
can
just
turn
them
off
and
come
back
to
it.
A
Although
I
have
quite
a
lot
of
open
feature,
flag
issues
right
now
and
I
feel
quite
bad
about
it,
so
there
is
a
dashboard
for
the
redis
cache.
It's
got
very
exciting
wiggly
lines,
I
largely
don't
use
a
lot
of
the
ones
at
the
top
the
ones
I
look
at
quite
a
lot
further
down.
A
All
right,
so
memory
saturation
is
probably
not
going
to
change.
It
probably
stays
quite
constant
memory
used
rate
of
change
is
quite
interesting,
but
the
really
interesting
ones
are
down
here,
so
expired
keys
if
you're
seeing
a
lot
of
expired
keys.
That
probably
means
that
a
lot
of
new
stuff
has
just
come
in,
or
we've
issued
a
really
big
explicit
delete.
A
It's
quite
spiky,
which
kind
of
to
my
mind,
means
that
it
shows
a
lot
of
explicit
deletes
if
it
was
just
redis,
expiring
them
of
its
own
accord.
I'd
expect
it
to
be
kind
of
smoother.
A
No,
it
probably
ties
quite
related
to
a
ties
in
quite
specifically
to
traffic
the
more
traffic
you
have,
the
more
cash
rotation
you're
going
to
get.
The
key
rate
of
change
is
quite
interesting
in
that
regard.
The
one
that
I've
found
very
useful
is
looking
at
replication
offset
if
you're,
adding
new
cache
data
and
you're,
adding
quite
big
cache
data,
so
it
was
one
I
added
recently
I
was
adding
some
caching
around
commit
partials
and
they
rendered
it
everywhere
and
under
quite
a
lot
of
traffic.
A
Well,
suddenly,
you're,
adding
many
makes
of
data.
Very
quickly,
it
will
actually
you'll
see
it
pop
in
there
as
the
redis
servers
are.
It
lags
slightly
as
they're
trying
to
replicate
an
increase
of
data
coming
in.
A
You
can
kind
of
see
that
it's
kind
of
useful
to
know
it's
just
something
to
keep
an
eye
on,
because
if
you
cause
a
like
a
really
big
spike
like
that,
it
might,
if
it's
just
a
spike,
it's
probably
about
if
it
stays
high,
it
probably
means
that
you're
just
generating
an
awful
lot
of
cash
data
and
maybe
not
just
getting
the
reads.
Maybe
it's
just
churning
a
lot.
Let's
see
if
I
can
find
some
other
examples
of
that.
A
Has
anyone
got
any
other
suggestions
of
places
to
look
by
the
way,
otherwise
I'll
just
keep
going
through
other
things
of
interest.
A
Really
are
people
generally
interested
in
the
api
caching
side
of
it?
Do
you
want
to
look
at
that
or
more
of
the
rails
side
of
it?
I
don't
know,
what's
of
more
relevance
or
interest
to
people
really.
A
You
can
see
where
I
messed
up
a
cash
key
that
happens
quite
easily.
Let's
have
a
look
at.
It
was
kind
of
interesting,
so
I've
got
when
I
put
benchmarks
in
a
caching
merge
request.
A
They
are
the
least
scientific
benchmarks
imaginable
and
I
sit
there
and
I
refresh
the
page
a
few
times
without
the
cash
like
10
times,
and
then
I
sort
of
just
pick
the
most
average
looking
number
and
then
I
do
the
feature
flag
enabled
and
then
pick
the
best
looking
number
like
the
very
lowest
and
I'll
stick,
those
in
an
issue,
but
generally
it's
actually
quite
a
good
approximation.
It's
not
going
to
be
quite
accurate
compared
to
a
young
production,
but
not
necessarily
in
the
way.
A: I've noticed, for example, that our CI servers are slower running the tests than my personal machine at home is, and that does affect your render speed, especially if you've got lots of CPU-heavy stuff. So, in this regard, this is a fairly good example of caching a partial; let's just look at the file. This is a good example of using this cache.
A: You just pass it as the first option, and yeah, with the contents of this... this is just on the project homepage bit, so when you go here.
A
I
tend
to
put
expires
in
on
a
lot
of
caches
and
it's
kind
of
a
guessing
game
as
to
whether
the
time
will
be
effective,
but
then
there's
the
buttons
as
to
whether
a
user
follows
that
project
or
whether
we
call
it
starred
the
project
or
fought
the
project
and
those
have
to
be
cached
kind
of
per
user,
because
they're
very
use
specific,
like
notifications
and
stuff
like
that,
but
it's
a
small
value.
So
it's
not
too
problematic.
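Roughly what that looks like in the view, as a HAML sketch with hypothetical helper names:

```haml
-# Shared fragment with a guessed TTL: the same for every visitor.
- cache [project, :home_panel], expires_in: 2.minutes do
  = render_project_home_panel(project)

-# User-specific buttons: current_user goes into the key, but the cached
-# value is tiny, so the per-user duplication is cheap.
- cache [project, current_user, :star_fork_buttons] do
  = render_star_and_fork_buttons(project, current_user)
```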
A: These stats are kind of complicated in how they're calculated, but there's no need for them to be fresh on every page load. You know, it takes up to about a minute to process an incoming push anyway, just from the various hooks going off and all the authorization and stuff, so showing them slightly delayed doesn't really matter, and it actually does stop a fairly expensive bit of code; it's got a few queries in it. Let's find some other stuff to look at.
A: This was for a performance request that was quite urgent: a regression meant that, for a big potential client, this was going to flag in their performance review, and it's a good example of using caching as a hack. This one, rather unfortunately, doesn't seem to have as big an effect in production as it does in the performance test.
A
It's
like
a
performance
test
deficient
tool
is
it's
very
effective,
and
this
is
that
thing
that
I've
added
called
cache
action
to
the
to
the
api,
which
allows
you
to
cache
an
action
for
a
certain
amount
of
time,
and
basically
it
just
you
give
it
a
short
ttl
like
a
30
seconds
or
whatever,
and
it
at
this
end
point.
A
It
was
the
branches
list
api
and
it's
fairly
problematic
in
that
it
hits
italy
every
single
time
and
returns
every
single
branch
in
a
repository
which
is
very
unfortunate
because
there's
many
tens
of
thousands
on
gearlav
and
they
all
get
loaded
from
disc
for
every
page
load,
which
was
not
super
great.
A
We
actually
have
a
proper
fix
for
this,
but
we
had
to
turn
it
off
because
we
need
to
make
some
changes
to
italy
to
get
it
properly
properly
fixed
and
so
in
the
meantime
this
is
actually
almost
a
security
fix
in
the
that
endpoint
will
only
return
fresh
data
every
30
seconds,
so
30
seconds
of
kittery
requests
will
be
sort
of
ignored
when
someone
pushes
a
branch
and
then
you
immediately
go
and
check
the
api
chances
are
from
looking
at
the
traffic
data.
A
Most
people
aren't
polling
the
branches
api
that
fast
anyway,
they're
polling
in
more
like
once,
every
five
minutes
and
so
the
chance
of
never
even
seeing
that
cache
data
is
very
low,
but
it's
actually
very
effective
in
stopping
that
italy
load
and
the
thing
today
in
production,
I
watched
the
charts,
nothing
made
no
difference
whatsoever.
I
could
see
performance
tests
a
star
totally
solved
the
problem,
no
issues
whatsoever
in
future
sort
of
yeah.
It
just
dropped.
A
I
think
it
shaved
seconds
off
of
the
response
because
it
stopped
all
the
gizzary
loaded,
and
so
it's
actually
more
effective
on
weaker
hardware
as
well.
Actually
because
it's
stopped
so
many
disk
reads.
A
One
interesting.
One
thing
I
mentioned
on
the
first
call
was
about
disk
caching
versus
memory.
Caching,
and
it's
something
we
don't
do
at
github
yet,
but
it'd
be
interesting
to
explore
for
things
like
rendered
blobs,
for
example,
where
we're
rendering
just
big
bits
of
markdown
the
problem
we've
got
there
is
that
there's
a
lot
of
user
specific
stuff
in
that
that
I
never
realized.
A
There's
all
the
redaction
stuff
tied
into
whether
a
user
can
see
specific
links,
whether
we
should
show
markdown
links
to
things
they
can't
see,
and
what
we're
going
to
do
about
that
is
to
actually
move
that
redaction
step
further
down
the
stack.
So
it
happens
just
before
the
view
is
rendered,
and
that
will
mean
that
we
could
potentially
cache
these
big
blobs.
But
sticking
them
in
redis
feels
slightly
wrong.
A
I
mean
to
put
like
two
meg
of
html
for
every
markdown
file
like
every
variation
or
commit
of
it
seems
quite
heavy,
and
the
problem
you
have
is
because
you're
putting
big
things
in
it,
it'll
work,
but
your
cache
will
rotate
faster
and
so
it'll
be
less
effective
in
other
places,
because
you'll
start
losing
other
caches.
That
might
be
important
just
more
often,
and
it
could
be
a
bit
of
a
problem,
but
caching
that
to
disk
makes
tons
of
sense
how
you
do
that
in
a
cloud
environment.
A
That's
where
it
gets
a
bit
more
tricky.
You
could,
and
I
do
it-
I've
cached
directly
to
application
servers
hard
disks.
You
just
keep
in
mind
that
when
users
come
back
to
a
different
server,
they're
not
going
to
have
that
cache.
But
you
can
get
tricks
around
that,
like
sticky
sessions
on
the
load,
balancer
to
always
send
them
back
to
the
same
server.
It
doesn't
work,
doesn't
really
work
or
you
just
assume
that
sometimes
they're
going
to
get
stale
caches
like
just
that's
how
it
happens.
A
You
could
do
something
really
horrifying
and
actually
synchronize
the
cache
between
the
different
disks.
Probably
don't
do
that
the
way
it
could
work
for
us
might
be
an
object
store
like
storing
it
in
s3
or
something
like
that.
That
would
be
quite
an
effective
use.
A
You
could
use
it
like
a
cache,
maybe
even
just
using
it
like
loading
it
and
using
cloudflare
as
a
reverse
proxy
cache
stuff,
like
that,
there's
sort
of
options
that
we
could
have
and
then
actually
redacting
it
afterwards
and
doing
the
user
specific
stuff
on
the
finished
product
could
be
quite
an
effective
use
of
of
that
sort
of
data.
But
it's
that's
where
it
gets
quite
interesting.
It's
worth
having
a
look
at
mix
c,
dot,
jp
that
japanese,
I
say
it's
worth
having
a
low.
A
I
guess,
but
it's
a
it's
a
japanese
social
network
and
it
they
they
solve
problems
slightly
differently
to
a
lot
of
the
other
ones,
but
it
was
around
the
same
sort
of
time
as
things
like
memcached
existed,
but
it
actually
has
the
same
protocol
set
up
but
writes
to
disk,
and
it's
really
weird
how
fast
it
is
considering
that
at
the
time
as
well,
when
it
was
released,
is
reading
off
of
spinning
hard
disks
and
it
was
way
faster
than
it
has
any
right
to
be
it's
it's
kind
of
an
interesting
piece
of
software
and
quite
fun,
but
that's
something
that
we
can
look
at
a
bit
more
as
well
any
questions
or
points
we've
got
about.
A
Let's
see
if
I
can
find
some
other
interesting
messages,
I
am
just
going
through
my
own,
just
because
I
know
what
I
did,
but
other
people
have
been
doing
some
really
nice
stuff.
This
is
that
storage
banner
stuff
I
mentioned
earlier,
so
you
can
see
where
the
changes
came
in
on
this.
If
anyone.
A: ...wants links to these as well, I'm happy to provide links in the issue if there's anything you want to look through. Especially around how the testing is done, they can be kind of useful to look at. What else have we...
A: This one's not specifically tied directly to caching, but looks more at view performance.
A
I've
mentioned
a
lot
in
the
the
first
talk
about
why
my
views
are
slow
but
sort
of
knowing
why
they're
slow
can
be
quite
important
when
you're
coming
to
caching-
and
this
is
one
where
I
refactor
the
view
just
to
make
it
faster.
A
You
can
see
my
choice,
benchmark
selection
as
well,
they're,
not
massively
faster,
but
you
know,
saving
100
milliseconds
whatever
adds
up
over
time,
and
in
this
regard
this
was
one
of
those
ones
where
I
removed
partials,
so
this
partial
would
have
rendered
really
fast
because
it's
just
tags
there's
nothing
happening
in
that,
that's
in
any
of
the
templating
languages.
A
For
for
ruby,
it's
almost
nothing
it's
because
it
does
something
where
it
sort
of
creates,
like
it
tokenizes
all
of
the
tags
and
it
just
they're
all
sort
of
like
symbolized
or
something
they're
very
neat
very
fast.
They
often
all
use
the
same
back
end
there's
something
called
temple
as
a
gem.
Interesting
thing
worth
a
look,
but
in
that
regard,
that
partial
this
isn't
very
big.
A
That
is
ten
percent
slower
than
if
that
is
just
in
the
view
roughly
and
yeah,
it's
not
slow,
but
it's
10.
You
could
save
by
just
moving
12
lines
into
another
file,
so
actually
having
slightly
bigger
templates,
I
mean
yeah,
it
can
make
maintainability
to
be
awkward.
Password
is
very
good
when
you're
sharing
data
between
multiple
places,
because
you
don't
want
to
be
duplicating
your
template,
but
if
you're
only
using
it
in
one
place
totally
makes
sense
to
have
in
it
just
in
that
one
place.
A
So
I
removed
a
couple
here
and
I
just
shoved
them
into
the
view
and
that's
that's
all.
This
murder
quest
is,
and
that's
100
milliseconds,
better
yeah,
roughly
just
from
doing
that,
and
that
ties
into
the
caching
side
of
things,
because
once
you've
got
some
of
this
now
that
these
are
in
one
view
as
well.
A
If
I
put
cache
around
this,
the
template
tree
digest
will
update
correctly,
whereas,
whereas
if
it's
inside
these
partials-
and
I
put
a
cache
around
this
out
here,
it
would-
and
I
updated
these
templates-
it
wouldn't
update
the
cache
because
they
wouldn't
have
the
template
to
tree
digest
bit.
The
I
think,
there's
so
there's
a
bit
of
a
trick
around
this.
A
The
reason
why
it's
so
much
faster
here
is
because
these
weren't
being
rendered
using
the
the
partial
collection
rendering
thing,
so
this
is
being
rendered
multiple
times
there
a
loop
around
it
somewhere.
A: The expensive one here was this render here: doing individual partial render calls inside a loop is really expensive.
A
Doing
the
render
call
where
you
pass
a
collection,
and
then
you
say,
like
the
collection,
that's
much
faster
and
that
then,
is
only
about
like
10
slower
than
not
using
the
partial
but
rendering
it
like
this.
Where
you
pass
where
you
you
can
make
multiple
render
calls
inside
a
loop
is
just
super
slow
good
by
comparison.
I'll
put
a
thing
in
the
slides
I'll
put
in
the
chat
as
well.
It's
where
was
it
this
article
from
scout
is
very
good
I'll.
A
Stick
it
in
the
chat,
it's
very
interesting,
and
they
actually
go
through
the
different
reasons
for
this.
Well,
they
don't
necessarily
talk
about
why
and
no
one
really
talks
about
why.
I
think
why
is
because
of
the
template
path?
Look
up
in
rails.
I
think
that
the
way
it
actually
takes,
the
partial
name
and
finds
the
correct
partial
for
it
is
where
that
overhead
comes
from.
A
I
don't
think
there's
any
real
way
around
it,
but
using
that
collection
render
is
much
faster
in
that
regard,
because
it
only
performs
that
partial
lookup
once
whereas,
when
you're
doing
it
inside
a
loop,
it
does
it
every
time
where
it
gets.
Interesting,
then,
as
well
is
when
you're
using
that
collection
renderer.
You
can
then
use
that
cached
true
option
and
you
can
cache
multiple,
multiple
items.
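Side by side, the three variants discussed here, as a HAML sketch:

```haml
-# Slow: one render call (and one template lookup) per iteration.
- pipelines.each do |pipeline|
  = render 'pipeline', pipeline: pipeline

-# Faster: the partial is looked up once for the whole collection.
= render partial: 'pipeline', collection: pipelines

-# Faster still: fragments are read from the cache in one multi-get,
-# and only the misses are actually rendered.
= render partial: 'pipeline', collection: pipelines, cached: true
```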
A
I
haven't
managed
to
use
that
at
get
lab
yet,
but
I
might
be
able
to
show
an
example
from
my
own
stuff
just
because
it's
kind
of
I've
shown
like
custom
versions
of
it,
but
it's
just
have
a
quick
look
and
see
if
I
find.
A: No, I don't have a good example of that, unfortunately, but yeah, doing that collection stuff is a very, very good performance boost in general. It does make a lot of sense, and it's quite effective.
A: Trying to remember... yeah, it wouldn't surprise me. It's kind of weird; it's always a compromise when you're looking at partials, between the maintainability of, like, a thousand-line view template and the performance overhead. On my own stuff I don't really even question it, but then I also have quite small views anyway.
A
It's
obviously
more
complicated
when
you've
got
such
a
complicated
application
like
gitlab
one
area
where
I
noticed
there
being
a
problem
in
the
past
was
in
the
navigation
rendering,
but
that's
been
take
over
taken
over
by
front
end.
That's
a
some
of
these
areas
where
we
render
lots
of
these
mini
partials
or
like
rendering
the
same
like
if
you
look
at
a
diff
at
the
moment
and
you're
hovering
over
lines,
and
it
renders
the
comment
button
and
stuff
there's
one
of
those
buttons
on
every
single
line.
A
There's
like
100
of
them
rendered
per
page
moving,
that's
javascript
makes
total
sense.
That's
a
very
good
use
of
javascript
by
just
rendering
something
once
and
reusing
it.
That's
effectively
a
form
of
caching
and
it's
the
same
with
these
sort
of
partial
renders
as
well.
Let's
see.
A: Yeah, I mean, I think it makes a lot of sense. The one to look out for, I guess, is where you're rendering a partial inside a loop. If you're not using the collection render and you're rendering in a loop, that's a red flag; that's the biggest performance problem with it. Using one partial somewhere that, you know, takes five milliseconds to render, so it takes six milliseconds...
A
If
it's
partial
or
not
in
line
it's
not
the
end
of
the
world,
but
yeah
the
one
in
the
loops
is
where
it
becomes
very
much
a
problem
quite
quickly
and
it
especially
when
we
have
pages
where
we
render,
like,
I
think,
until
recently
on
commits
and
stuff,
we
were
like
unpaginated
and
rendering
hundreds
of
diffs
on
a
page
and
each
the
the
diffs
is
actually
quite
a
problem
with
it.
A
Diffs
renders
I've
got
multiple
mode
requests
for
this
renders
the
lines
in
a
diff
each
line
is
a
is
a
partial
render,
but
it
does
use
the
collection
api.
A
So
it's
not
as
bad
we'd
gain,
maybe
like
10,
I
do
have
do
you
have
some
stuff
around
that
one
of
the
sort
of
side
effects
of
caching,
in
fact,
is
that
you
can
you
reduce
memory
usage
quite
heavily,
because
what
you're
largely
doing
is
getting
one
big
string
from
somewhere
and
then
you're
sticking
it
in
the
view
and
where
it's
really
effective,
is
it's
cutting
down
on
all
the
little
string
generation
that
you're
doing
until
you
get
to
the
final
generated
string
for
the
view,
so
when
you're,
rendering
a
template
in
rails
and
you're
doing
you've
got
some
stuff
from
like
a
model
wherever
and
you're
outputting
into
the
view,
it
sticks
out
as
a
string
all
this
sort
of
stuff
and
when
you
start
joining
it
together
in
the
views
or
doing
things
like
that,
it
starts
to
get
quite
expensive
and
you're
generating
a
lot
of
strings,
especially
in
loops,
and
the
ultimate
goal
of
the
application
is
to
just
is
to
generate
a
big
string
and
then
stick
it
out
to
the
browser.
A
So
the
fewer
lumps
of
strings
you
stick
together
by
the
end
kind
of
works.
There's
a
really
cool
thing.
Someone
put
me
on
to
about
phoenix
the
the
elixir
framework
which
cuts
out
a
rather
major
part
of
that
and
I've
had
a
look
at
doing
it
in
ruby,
it's
quite
hard,
but
it
doesn't
concatenate
everything
to
a
big
string
at
the
end.
A: This is an example merge request where it's just reducing string duplication. This doesn't tie directly into caching, but it's something that caching does help with: when you're doing less of this processing work, you generate fewer objects, and you'll see this in a lot of the benchmark things... I say benchmarks; I don't even think they qualify, to be honest... that I put in the merge requests. Let's see... was it that one? That's not my one.
A
You'll
actually
see
like
the
allocations
go
down
quite
heavily,
that's
really
useful.
Just
there
you
go
so
uncached,
just
rendering
the
merge
request.
Title
is
56
milliseconds
and
it
allocates
65
000
objects,
but
then
you
cache
it
and
it
yeah.
It's
really
fast.
It's
nice
2.1
milliseconds,
but
it's
only
generating
1454
objects.
A
That's
quite
exciting
because
that's
64,
000
ish
free
object,
space
in
memory
to
not
fill
up
with
stuff,
and
it's
quite
it's
very
it's.
It's
just
like
chipping
away
a
very
big
task.
We
allocate
so
much
memory.
Also
every
page
requests
to
hundreds
of
thousands
of
objects,
probably
it's
just
absolute
millions,
so
much
being
generated
and
then
garbage
collected,
but
the
fewer
that
you
generate.
The
faster
the
ruby
processes
in
general
will
run.
So
caching
has
like
an
additional
effect
other
than
just
being
faster.
A
It
gives
the
ruby
code
more
sort
of
overhead
to
or
less
overhead
gives
it
more
space
to
work
in,
and
it
does
improve
it
in
that
regard,
I,
when
I've
built
my
own
applications
from
scratch
and
built
them
to
be
fast
from
the
start,
even
when
they
get
big.
I've
never
had
memory
issues
on
them.
I
could
always
run
them
inside
the
cheapest
heroku
up
the
free
one,
because
I
don't
want
to
pay
money
and
you
could
always
make
that
work
anytime.
A
Any time you go away from that, like in my previous jobs where suddenly we had to have a load more features because the investors just needed them (I'm sure loads of you know what that's like), you don't get the time to concentrate on it.
A
That's when you notice the performance starts to dip, and you notice the memory usage, particularly in your Rails server, just goes up and never comes down, and you start getting memory leaks. Then you implement the ultimate solution, which is puma_worker_killer, and it's "solved" forever and no one has to worry about it. That happens pretty much all the time. I've never had to install it on my own projects, because they do so much caching that even if they do have a memory leak, I've just never even seen it.
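For reference, the usual puma_worker_killer setup is a few lines in an initializer (the numbers here are illustrative, not a recommendation):

    # config/initializers/puma_worker_killer.rb
    # Restarts the heaviest Puma worker once total memory use crosses the
    # threshold: it papers over the leak rather than fixing it.
    PumaWorkerKiller.config do |config|
      config.ram           = 1024 # MB available on the server or dyno
      config.frequency     = 60   # seconds between checks
      config.percent_usage = 0.98 # act above 98% of config.ram
    end
    PumaWorkerKiller.start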
A
Over time we're sort of improving this in parts, but it is something to keep in mind. It's not something I've missed out of the slides, actually; I can't even remember what was actually in this one. Some naughty strings, which I think should be translated.
A
You can see I'm never going to manage to close all these; everyone's going to get really cross with me, so they're just going to have to be satisfied. What I tend to do when I put something live, and you don't have to do this, but I think it's quite a nice way of doing it, is to stick the charts in the comments of the feature flag rollout issue.
A
I think that's kind of nice, because you can sort of see the effects, and you can see over time as you're rolling them out. I'm just trying to find one where I've actually done it now. Let's see if I can find one.
A
I put something up and it just made the endpoint quite a bit worse, and it was kind of interesting to watch in the graphs. Let me see if I can find the graph, because it just gets progressively worse. The problem I had is that when you first look at it, it looks like it's getting better, but it then starts getting worse. Oh, here you go. Looking at this chart, I enabled the flag around here, and you'd expect this:
A
This is what it looks like when you enable a cache: to start with, your lines will get wigglier. They'll potentially have been quite stable before; then it's going to spike a bit and you're going to see a lot of up-and-down lines, because some requests will be not cached at all and some will be entirely cached.
A
Ideally, what you want is to have everything partially cached, so it's very consistent. But you can see that this one just kept getting progressively worse, which is not right; that's definitely not cached. It got really bad when I turned it up to 100%: these lines are very wiggly and they just keep going up. The problem was that I'd stringified a class.
A
A class, you know, has a different identifier every time it's created, and that was going into the cache key, so every time you loaded the page you just stuck a new record in the cache. Not effective, just lots of nice cache churn, so that one wasn't super great. Let's have a look.
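A contrived sketch of that mistake (TitlePresenter and expensive_render are invented for illustration). A plain Ruby object's default string form embeds its object_id, which is different on every request, so the key never repeats:

    presenter = TitlePresenter.new(merge_request)
    presenter.to_s # => "#<TitlePresenter:0x00007f9c1e0b2a38>", new every time

    # Bad: the key changes on every request, so nothing is ever read back
    # and the cache just fills up with dead entries; pure churn.
    Rails.cache.fetch(["title", presenter]) { expensive_render(presenter) }

    # Better: build the key from stable values the output actually depends on.
    Rails.cache.fetch(["title", merge_request.cache_key_with_version]) do
      expensive_render(presenter)
    end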
A
Yeah, we do. It's quite interesting to look at the different usage patterns of it, but yeah, it is separate, so you don't have to worry too much about killing it. The thing is, if your cache goes down, your application will go down with it, but normally the cache should be the last thing to go down.
A
We do have quite a lot of people who are a little bit worried about taking down the Redis cluster. I'm not worried about it, to be honest. It's got quite a lot of headroom, and as long as you're turning things on with feature flags and you let the SREs know, it's very unlikely it's going to immediately die, like you turn on the feature flag and GitLab is down.
A
I would panic if that happened, because it would mean I'd done something really bad, and I don't even know what that would be. It's very hard to immediately kill something, especially if you're doing a gradual rollout of a feature flag: do 10% of actors, then 25%, then 50%, and you should start to see the problem beforehand if you look at the Redis dashboard.
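The guard pattern being described looks roughly like this; :cache_titles is an invented flag name and render_title is a placeholder, though Feature.enabled? is how flags are checked in GitLab code:

    def cached_title(merge_request)
      # Only actors the flag is enabled for take the new cached path, so the
      # rollout can go 10% -> 25% -> 50% -> 100% while you watch the dashboards.
      if Feature.enabled?(:cache_titles, merge_request.project)
        Rails.cache.fetch([merge_request, "title"], expires_in: 10.minutes) do
          render_title(merge_request)
        end
      else
        render_title(merge_request)
      end
    end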
B
They're isolated from each other, though, right? Like the shared state Redis. So the worst thing that's going to happen in this case, the case you mentioned before where you created something that was just creating tons of different keys with different names, is that you're just going to fill up the key space, right? You're just going to expire the older keys, and maybe you'll see some downstream impact on other parts of the site because their cache is getting busted more often, because the key space is being used up. But I mean, it wouldn't be like...
A
Yeah, that's very much it. I mean, the cluster's got a pretty large amount of RAM. I don't actually know what the servers have, because they all have to have identical amounts of RAM; I think it's 256 gigs, something like that, which you can just buy off the shelf now. We need more RAM as well; that's where the disk cache is coming in very handy. But yeah, that is all that will happen, really.
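The eviction behaviour described above comes from the Redis memory policy; a sketch with illustrative values:

    # redis.conf (values are illustrative)
    maxmemory 64gb
    maxmemory-policy allkeys-lru   # evict least-recently-used keys when full

With an LRU policy, filling the key space pushes out old entries rather than taking the server down; the cost shows up as extra misses elsewhere.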
A
Where it's more problematic is if you had big values that were constantly being written over and over again, like hundreds and hundreds of times a second. That's a risk, because you're probably going to overload the network ports more than anything else. But it can be just as much of a problem with just the sheer amount of traffic going to Redis. Redis is no longer single-threaded; I think it used to be bound to one processor core, and it's not anymore, thankfully, so it should be able to handle higher load. But it's just something to keep in mind.
A
I guess, yeah, in this regard. I finally found one. This is an example of the charts that I see; this is more typical of what I normally see, which is very unexciting. Nothing really looks very dramatic there until it starts going down, and you can see that there are higher peaks and lower troughs in it. That's quite typical. These are only on, like, a 10-minute cache timer.
A
I would expect those to even out if I put that up to, like, an hour, because I think this endpoint is being polled quite a lot, so that would probably be where I would look to improve that cache key. I think there's a reason I've got it set to 10 minutes on this one.
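A minimal sketch of that trade-off (the key and block are placeholders): lengthening expires_in on a heavily polled endpoint means fewer misses, at the cost of staler data.

    Rails.cache.fetch("status/#{record.id}", expires_in: 10.minutes) do
      expensive_status(record) # placeholder for the real work
    end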
A
The most fun one to see is when you're cutting SQL requests. Nearly whenever you get a good cache going and you cut the SQL query requests, that chart just goes straight down. It looks awesome: yeah, look at what I've done, so nice. The one thing to keep in mind, something the ops guys mentioned before, is that Postgres is actually better at dealing with load than Redis.
A
Postgres is easier for them to scale, it's easier for them to fail over, and it handles higher load already, whereas Redis is harder. I've set up a Redis cluster and it didn't work; I've set up Postgres as, like, you know, a primary/secondary setup and it sort of worked, so I'd say it's slightly easier. But memcached in that regard is very easy, which is why I still use it personally. It's something to sort of keep in mind.
A
Sometimes, if we take traffic off Postgres and one-for-one replace it with traffic on Redis, that actually might not be as effective as we think. It might be faster, but we might be putting proportionately more load onto Redis, when it's easier for Postgres to deal with that load. I have got a couple of merge requests like that, and...
B
A
It's kind of a case of gradually rolling them out and seeing what happens. Yeah, anyone got anything else?
B
A
We're on time. If you do have any questions, I'm always happy to help. I'm going to do another one of these workshops, probably at some really horrible time for me, like four a.m., so I can do it for the Australians and New Zealanders and all that, and I'll put up recordings of all of them. But if you have any questions, I'm always happy to answer them. Tag me on merge requests, or, I did make that issue.
A
You can stick questions in there. There are some people in the company who are very experienced in caching as well, though maybe it's not very obvious who they are, so I'm always happy to link to other people. Although the amount of merge requests I give to Sean McGivern is going to make him quite cross with me at some point, I imagine. It's like:
A
Oh, it's a caching merge request, I'll give it to Sean McGivern. But I feel like he probably didn't volunteer for that role. Anyway, I'm always happy to answer any questions, so if there is nothing else, I'll end it, unless there's anything else.
A
Yeah, I'm always happy to do stuff. If you want any extra details on anything from the session, there are bits I've definitely missed out as well; I could talk about it for hours, and have done. But yeah, just give me a shout. I'll stop sharing now.