From YouTube: Caching Workshop - Session 1
Description
Learn all about Caching in Rails and at GitLab with Robert May from Create: Source Code. See https://gitlab.com/gitlab-org/create-stage/-/issues/12820
So: a caching workshop. I'll go through some various different bits. There will be a second session afterwards, where we'll actually look at how to do it properly, and just try and actually do some caching live on the GDK — and it'll go great and nothing will break. The GDK is actually working for me today, so it's looking good.
This originally had a different title before I rewrote it, and I shortened it as well, so it's no longer that long. For session two, I have a separate issue, linked from the original one — in the original issue there's a link, so you can get to it from there.
I guess I'm going to take suggestions of places to try and cache in the second workshop. We're going to do it totally blind: I'll take any suggestion, and if it doesn't work, that's totally fine, because it will actually just be a fairly good example of how it might not work — there are areas of the application that just can't be cached properly.
So that's totally fine. Put suggestions and questions in there, and it'll make it easier in terms of timing for the questions to go in there too. Just add them and we'll follow them up in the hour between and bring them into the next session. Any that I can't answer in the next session, or that we can't look at there, I'll go away and do, and other people can help out — we can look through it together as well.
So, a brief history of why I think I'm kind of qualified to talk about this. It's probably the only thing I'm qualified to talk about, and it's also basically the only reason I do programming: I really love caching. I don't really care about anything else. I just like wiggly lines going down. I like fast page responses.
It's kind of my only thing, but the reason I got that way is because I'm really cheap and I worked for startups. Especially in the UK, startups have no money — not, like, American no-money, where they have a few tens of thousands of dollars. They actually have about 50 pounds, and they expect to scale that infinitely to any level of traffic, and it doesn't really work.
They all tend to use things like Heroku a lot as well. The previous company I worked for is a company called Pirate Studios, and they were still using Heroku three years after launch — still using one dyno most of the time, I think, because it scaled quite linearly after a certain point. It had lots of interesting quirks. But it comes from my personal projects as well: I don't like spending money on tech projects, which seems a bit stupid considering it's my career, but it's a challenge.
It's quite fun to spend as little as humanly possible on running your own websites. I'll look at my only current live one in a bit. I frequently post inflammatory things onto Hacker News to see if they get up onto the front page. I quite like taking that Hacker News traffic — it's quite fun to watch the traffic coming in. It will kill a Rails app quite fast if you're not careful, because you do get a few thousand hits per second.
So the reason why caching is so good is that it kind of works at both extremes. When you're a poor company or a small company, it helps you save money, and when you're a large company, it helps you save money. It's the companies in the middle — where you're scaling really rapidly and can just throw money at the problem —
that find it's not as much of an issue. What you're trying to do is make the experience identical for everybody, and that's quite hard when you're receiving a lot of traffic. There are different ways of doing it — just making everything faster — but caching can really help with that, and it's also one of the easiest ways to do it.
JavaScript is actually really important for making caching work well, because it gets around some of the really awkward problems and allows you to scale better. What you really need is to have your cache data shared between as many people as possible, because then everybody helps serve each other faster web pages.
So using those together works really well. Rails is not actually very slow — I mean, it sort of is, but it also kind of isn't — and the bit that's really slow is the view layer. That's the bit I look at caching most. It's the bit that people traditionally don't look at that much, because it's also kind of the hardest bit: you're taking stuff and trying to present it to a user, and so it's often user-specific.
I'm sorry to any branding people, by the way, for sticking sunglasses on all the tanukis — I imagine I might get in trouble for that. So, what is a cache? They're everywhere. I've got a great beer metaphor in here.
Well — I say great; it's acceptable. But caches are everywhere, and the idea really is just to be a temporary store of data that's closer to where you actually need it. Whether that's on your processor, where it's caching data it's pulling from elsewhere, or on hard disks, where they're storing the most recently or most frequently accessed data — keeping it warm so they don't have to go back to the further-away place and grab all that data again.
The same is kind of true everywhere else as well: you've got caches in your browser to make things faster for yourself. What we're looking at is caching near the Rails application, and it changes a little bit based on the way you're storing it. But why use a cache? To make things faster — fairly obvious, I guess. Avoiding IO is one of the big ones. We hit this quite often in GitLab: we have Gitaly, and reading off of disks — disks are slower than RAM.
So if you can cache that data in RAM, it's faster — in theory. It's not always true; you can actually have fast disk reads. But having lots of random disk reads, or lots of sequential reads where you're processing data and things like that, starts to get quite slow quite fast. That's where caching can help.
There are certain types of caches that will disguise the fact that your application has died. Cloudflare does this as a service — I think they call it Always Online or something like that. It doesn't always work, but, you know, it's kind of nice. The idea is that it serves cached responses when your application dies.
Saving money is a big thing as well, obviously. So — I'm assuming most people have probably heard of memcached.
It was invented at LiveJournal, which is a thing I kind of still remember, and it's the original web cache that still exists now. If you're looking at services that offer a "memcache in the cloud" or something, they're probably not actually running memcached — they're probably running Redis and making it kind of look like memcached — and for most purposes that makes no difference. They are actually quite different, but as an end user you don't have to care; it's really more on the operational side.
I will go over this a little bit. One of the other weird ones that I really like — well worth going away and reading up on — is something called Tokyo Cabinet. It was then replaced by Kyoto Cabinet, and then by another one whose name I don't even know how to pronounce; that's their newest one. I'm fairly certain it's all written by one guy in Japan, who wrote it for mixi.jp, which is a Japanese social network, and it's actually a disk cache.
It's got network drivers that make it look like memcached, but it stores everything on hard disks — and it's not actually much slower. It's really impressive, very interesting, and I'll talk a little bit about why disk caching can actually be useful; that's a really good example of how it works. But you can also just use the file system yourself — just write HTML files to it. I'll cover that; I use it a lot. I don't think we can use that very easily at GitLab.
There might be some areas where that might work, but we can look at that later. So: caching in RAM versus disk. RAM is very fast, but RAM is more expensive. What can you get now — up to about a terabyte of RAM, maybe a couple of terabytes in a big server? But you could put a petabyte of disks in a server and just write everything to them.
I've had pretty bad disks, but disk caching is really, really good for very big things — whole rendered pages, big Markdown blobs, anything like that — because storing those in RAM is going to get, well, not expensive exactly (your cache will still clear), but it will push everything else out of your cache, because you're just storing these big things all the time. There's obviously still a use case for that; it's just something to keep in mind, and writing to disk can be much more cost-effective in that regard.
So, I come across this everywhere — every place I've worked before, programming groups, anywhere: people have a weird fear of caching, mixed with a kind of disgust for it. It's kind of entertaining; it's quite interesting. Part of it is led by a very reasonable fear: cache expiry is kind of hard. I mean, it can be quite easy; it can be quite hard.
A
It's
sort
of
that
aspect,
but
also
the
fact
that
it's
not
really
the
right
solution,
you're
kind
of
papering
over
the
real
problem
and
that's
okay,
sometimes
actually
that's
very
effective.
But
the
real
thing
is
that
you
kind
of
want
to
do
both
no
matter
how
fast
you
make
a
rails
application,
it's
still
an
interpreted
language.
It's
really
not
going
to
be
that
fast.
There are exceptions, but all of the view-rendering side of things in Rails — even just rendering JSON — is quite slow, and especially large string handling in Ruby, which generates a lot of memory objects. It does get quite expensive, and it's what primarily slows things down. All right: GitLab uses Redis, not memcached. I've written this up just because it's kind of specific to our stuff. The reason why we use Redis rather than memcached is because we already used Redis for other stuff — which is fine; it's very useful.
The reason why I don't use Redis as a cache is because memcached is so easy, as fast or faster, and easier to scale. In fact, all of us here could go away and, in under an hour, have a 20-server memcached cluster set up. It's super easy; there's no real configuration. I'm not actually confident I could go away for a month and get a working Redis cluster, because it is just quite confusing.
It's not my expertise, and it is quite hard to do. But memcached was originally written to sit side by side with your application: you didn't actually have a memcached server, you just installed it on your server, and it used your spare CPU cycles and your spare memory to make your application faster. Obviously, things have changed in that regard.
Yeah — this is what it kind of comes down to: scaling Redis is quite hard. It just is. Rails has built-in support for sharding your cache across memcached servers, thanks to some stuff in the Dalli gem, which is the memcached wrapper. That is sort of possible in Redis, and I believe the ops team are looking at doing this, but they need to have multiple clusters and then shard across those multiple clusters. It's just complicated.
You need an ops team to manage it. Anyone can go away and serve a redundant memcached cluster — it's super easy and it's supported natively by Rails — so if you're doing this yourself in your spare time, it's something to keep in mind. At GitLab it's not really our responsibility as software engineers; if there are any ops people on the call, it's kind of more their responsibility. Scaling the cache is just something we should be mindful of — it's not currently at risk of falling over.
I see this quite a lot when I've introduced caching stuff: there's a bit of an inherent fear that we're going to kill the cache servers. It shouldn't really be possible. You should be able to abuse the cache really quite badly — just constantly churn data through it — and it should kind of stay up.
Redis is a little bit difficult because one server is taking that load currently, and we have noticed — there was an issue vaguely recently where Redis responses became slower as it was trying to evict data from its cache very rapidly under heavy load. It comes down to how it expires keys; I'll touch a little bit on that in a bit. But ultimately they predict this.
There's a saturation prediction thing — a forecast — and we're not even up to 80 percent populated yet. And it's less a case of how much data you're storing, and more the rate of querying the cache — the rate of data going in and out, rather than how much is stored there — because it's got a finite amount of RAM and it just automatically clears out.
But as you add more stuff — more templates, more data — it just gets slower. It always does. Larger responses get slower. That's frustrating!
So it's just something you've got to keep in mind, and caching can help with that. Cache reads are very consistent — typically all under one millisecond — so you can take anything that takes any amount of time to generate, 100 milliseconds or whatever, and now it's one millisecond. It's very nice, very convenient. But the biggest challenge you're going to come across is user-specific data. We're going to look at that.
It's not always possible, but there are aspects of it — there are ways to do it that are easier, things you can do to help. This is all I care about: I love seeing wiggly lines on the dashboard go down. It's the most satisfying thing in programming and, yeah, that's all I care about. It's very exciting, and it's also very rewarding: if the wiggly lines go down, you fixed the problem — probably — and it's very measurable.
So there are a few things to keep in mind with caching. The best thing to do — and it's why caches work so well; going back to the beer metaphor, it's having a mini fridge underneath your desk, which I don't think is expensable, unfortunately — is to be close to the user. It's more effective; it's more useful.
You don't have to go to the kitchen. It's the same with caching in an application: if you can cache at the view level, you're caching the most amount of background work. It's the most effective; you're closer to the finishing point; there's less processing that's going to happen on it; it's going to be faster for the user. And try to cache data for as many people as possible — it's very easy to go out right now and cache things per user.
I don't even know how many users we're up to now — millions — so that's going to cause a lot of cache churn. The ideal scenario to get into is where you cache data for as many people as possible at once, and there are loads of tricks for doing that. It does add a little bit of risk: you might show somebody something they shouldn't see.
That's the major thing we'll have to look at: how you avoid doing that. Try to preserve cache data as well, right? You could just put one cache at the end — cache the whole view, great — but what happens when that cache gets expired? It has to re-render the whole view, and maybe parts of it don't need to be re-rendered. So you can have nested caches; it's not always bad to be caching the same thing multiple times. It can be very useful.
Say it takes 100 milliseconds to render — it always takes a hundred milliseconds to render, you know, that's consistent — and you replace it with a one-millisecond cache call. But what happens if you replace it with 50 one-millisecond cache calls? Well, now it's 50 milliseconds. And what if each one gets slower, taking three milliseconds? Well, now it's 150 milliseconds. It's quite variable, and it multiplies up quickly — it's quite important to keep that in mind. So, some quick stuff about when we're adding caches.
I added this in — it's something that's now standard in Rails apps: you should run caching in development all the time, because it's the only way to know what it's actually like for end users. And yes, it can get quite irritating when you add something new and it doesn't actually change, and you get really confused and waste three hours of your life wondering why the change isn't happening — and it's not just because you're editing the wrong file, which also happens. But you can turn it on in development.
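For reference, newer Rails apps ship a toggle for exactly this: `bin/rails dev:cache` flips caching on and off in development by touching a marker file. This is roughly the stock block the Rails generator puts in `config/environments/development.rb` (details vary a bit by Rails version):

```ruby
# config/environments/development.rb — `bin/rails dev:cache` toggles
# this by creating/removing tmp/caching-dev.txt
if Rails.root.join("tmp/caching-dev.txt").exist?
  config.action_controller.perform_caching = true
  config.cache_store = :memory_store
else
  config.action_controller.perform_caching = false
  config.cache_store = :null_store
end
```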
You need to keep it in mind as soon as you start. I cache from the first thing: before I ship anything, it's already got caching in. It might be premature optimization, but it's premature optimization that saves the amount of effort you're going to have to put in later — if you're adding things in and it's already set up, it's much easier. Coming back to it later is a nightmare.
A
So
as
we
experience
the
gitlab
coming
back
and
you've
got
a
10
year
old
application,
trying
to
add
cash
into
it,
much
harder
than
if
it
had
been
built.
That
way
from
the
start.
It's
just
how
it
is.
This
happens.
A
lot
and
you've
got
to
mentally
prepare
yourself.
Caching
is
fun,
I'll
hear
no
conflicting
opinions
about
that.
It
is
fun.
We'll go over that, as I've said before, but you can ship it with feature flags: just turn it back off, reassess, do it again. You have to test it in production; it just doesn't work anywhere else — you will not see your effect elsewhere. I've written things recently that are extremely effective in the performance test suite and have literally no effect on production, and it's down to traffic patterns and stuff like that.
A
It's
just
something
we'll
have
to
look
over
and
it's
something
to
experiment
with
you
have
to
experiment
is
a
bit
of
a
pain
because
our
deployment
flow
is
quite
slow.
That
I
mean
for
my
own
stuff.
I
just
stick
stuff
in
production,
every
15
minutes
and
test
out,
because
that's
the
easiest
way
to
do
it.
A
It's
very
very
hard
to
emulate
live
traffic
unless
you
actually
direct
that
traffic
over
to
a
test
thing
so
some
tips,
the
performance
bar-
is
very
useful
and
heinrich
added
a
very,
very
useful
thing
to
it,
which
allows
you
to
generate
a
flame
graph
of
the
request.
Awesome
saves
you
a
huge
amount
of
time.
I'm looking for cache hits if I'm writing a cache; I'm looking at rendering times for the various partials; and I'm trying to narrow down where these performance issues are — where the cache is going to be most effective. Once you're in the right place: obviously everyone knows about them — well, I'm hoping most people use binding.pry, or binding.irb as it might now be in our codebase — just get in there and start deleting stuff.
A
That's
what
I
did
if
I
need
to
find
a
real
performance
problems,
go
around
and
delete
random
lines
of
code
until
I
narrow
down
what
was
actually
causing
the
issue,
and
you
just
go
down
in
steps
until
you
eventually
find
it.
One
thing
to
keep
in
mind
is
when
you
turn
feature
flags
on
and
off.
I'm
sure
you've
noticed
this.
A
It's not immediate, and that's really something to keep in mind when you're trying out a cache locally, toggling the flag on and off: make sure you're actually looking at the correct data, and know that the feature flag has actually rotated. On view templates: ERB is easier to cache. It just — it feels really weird to say; everyone's used Haml or Slim for a long time, and I use Slim a lot myself, but ERB allows you to partially cache blocks of code. You don't have to cache the ending tag.
You can just put caches around random lines in the templates. Very cool — it will get confusing later, and it can introduce some interesting problems if you change bits of it, but it's very effective. Templating languages are very fast, except when they've got Ruby in them; that's where it starts to slow down. If you've got a template that's just strings or just tags,
A
There's
no
point
in
caching
it
it's
actually
really
fast,
but
the
main
thing,
and
in
fact
something
that
we
come
across
a
lot-
is
that
rendering
partials
in
rails
is
very
expensive.
It
has
an
overhead
it's
something
like
10,
even
if
using
the
collection
renderer
over,
embedding
that
in
line
it's
very
odd,
but
there
it
comes
into
how
you
cache
as
well,
because
if
you
can
cache
stuff
like
that,
like
rendering
all
these
partials
you're
actually
saving
more
than
just
the
rendering
time.
Right — I've sort of made up my own terms for a lot of this. They're probably correct; I think I've noted where they're not. So: HTTP caching. These are the caching commands you send to the user's browser, using ETags and expiry times — you tell the browser, "this hasn't changed since you last saw it." The request actually still gets through to the Rails controller, so you're not skipping the Rails step.
But it is by far and away one of the most effective ways of caching. It's just quite hard, because you don't really have much control once you've told a reverse proxy to cache a page — they do have APIs, and you can sometimes expire them. I write HTML files a lot; I'll give an example later. On my own blog, all my articles are written as HTML files to disk. Super easy, takes an hour to implement, and your web server — nginx, say — can serve 20–30,000 hits per second on a powerful server. Rails can't do that.
But it will just serve an HTML file off disk very happily, and all of that support is kind of built in. It's very nice. Single view and action caching doesn't really exist anymore. Rails had support for it: it would cache the view-rendering stack for a controller action and just serve the whole thing from a cache — kind of like the HTML page caching, but from your cache store. It's not in core anymore; it's not that useful. You can still add it back in.
I've got a few comments on why it might be useful. The main and most useful one — and the most useful at GitLab; it's one we use all the time — is fragment caching, where you cache little sections of your templates. That's very useful, it's very effective, and you can just get rid of the expensive aspects of your templates.
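As a sketch of what that looks like (the `issue` local and its fields here are made up), a fragment cache in ERB keys the block on the record, so it rotates automatically when the record is touched:

```erb
<%# Only this block is cached; the key includes the record's id and
    updated_at, plus a digest of the template itself. %>
<% cache issue do %>
  <li>
    <%= link_to issue.title, issue_path(issue) %>
    <span><%= issue.comments_count %> comments</span>
  </li>
<% end %>
```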
Right: method caching. This is something we do a lot in Ruby — we've got the instance-variable `||=` thing, and we've also got `strong_memoize` at GitLab, which, as far as I recall, does the same thing but actually stores nil values.
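The difference matters when the memoized value can legitimately be nil. A minimal, self-contained sketch (plain Ruby, no GitLab helpers) of the two approaches:

```ruby
class ReportFetcher
  attr_reader :calls

  def initialize
    @calls = 0
  end

  # Plain ||= memoization: fine for truthy results, but if the call
  # legitimately returns nil or false, it re-runs every time.
  def slow_lookup
    @slow_lookup ||= compute
  end

  # defined?-based memoization (roughly what strong_memoize does):
  # the body runs at most once, even when the result is nil.
  def slow_lookup_nil_safe
    return @nil_safe if defined?(@nil_safe)
    @nil_safe = compute
  end

  private

  def compute
    @calls += 1
    nil # stand-in for an expensive call that can return nil
  end
end

a = ReportFetcher.new
2.times { a.slow_lookup }          # recomputes both times
b = ReportFetcher.new
2.times { b.slow_lookup_nil_safe } # computes once, caches the nil
```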
And request caching is kind of the same again: we have a request store, scoped to the request, that acts like an extra cache store, and you can set keys and values in it. It's quite useful, especially if you're going to be hitting the same key across different methods — if different parts of the application need to access the same data through a request, but they don't necessarily have the same model instance to call, or they're class methods, it's very effective.
SQL caching is something Rails does by default: if the same query is executed multiple times in a request, it just returns cached data. But you can go further. Shopify have this awesome gem called IdentityCache — very fun, comes with its own interesting problems, but it's a read-through, write-through cache for your database, in memcached for example. Essentially, any time you update a model, it expires the caches and then repopulates them the next time you read. I use it a lot.
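A hedged sketch of what using the gem looks like (the model and field names here are made up; the IdentityCache README has the real options):

```ruby
class Product < ActiveRecord::Base
  include IdentityCache
  cache_index :handle, unique: true # adds Product.fetch_by_handle
end

# Read-through: hits memcached first, falls back to SQL and populates
# the cache on a miss. Saving or destroying a Product expires its entries.
product = Product.fetch(1)
same    = Product.fetch_by_handle("wiggly-lines")
```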
It can be very effective. Then there's something I've called "novelty caches" — building a really weird cache for one specific use case. I've written more of these than I'd care to admit at GitLab so far. Ideally, you want to use standardized tools wherever possible.
Sometimes, though, you have a weird problem that you just can't solve that way. Now, I'm not going to go through and literally look at all of the methods — there's great documentation in the Rails Guides that will detail all of them. What I will do is point out the useful ones, because some of them aren't useful, some of the settings are really weird, and some of it actually isn't documented. One of the most useful things isn't documented, in fact, which is very weird — it's open source, and I should probably contribute that back.
I guess these are the useful cache methods — and even then, I'd say I don't actually use `Rails.cache` read, write, or delete kind of ever. I mostly just use `fetch`. Read, write, and delete have their places, obviously; where it gets more interesting is things like `fetch_multi`.
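A quick sketch of the shapes of those calls (the key names and the expensive methods are hypothetical):

```ruby
# fetch is read-through: return the cached value, or run the block,
# store the result under the key, and return it.
stats = Rails.cache.fetch(["projects", project.id, "stats"], expires_in: 5.minutes) do
  project.calculate_statistics # stand-in for something expensive
end

# fetch_multi does the same for many keys at once; the block only runs
# for the keys that missed, saving round trips to the cache server.
titles = Rails.cache.fetch_multi(*issue_ids, expires_in: 10.minutes) do |id|
  Issue.find(id).title
end
```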
The cache helper in the views also has a kind of useful pair: `cache_if` and `cache_unless`. They're very useful with feature flags — cache if feature enabled, stuff like that. Really nice, very convenient. `cached: true` is the weird one that's not documented, and I will explain that. Then there's the HTTP caching in controllers — `fresh_when` and expiry — which are very, very useful with polled endpoints.
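A sketch of the controller side (the controller and model names are invented). As noted above, the request still reaches Rails; what you save is the render and the response body:

```ruby
class NotesController < ApplicationController
  def index
    notes = issue.notes
    # Sets ETag/Last-Modified on the response. If the client's
    # If-None-Match / If-Modified-Since headers still match, Rails
    # halts with an empty 304 instead of rendering the view.
    fresh_when etag: notes, last_modified: notes.maximum(:updated_at)
  end
end
```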
We use them in a few places like that, where the clients are just constantly polling for new data. I've also written out some details on the browser cache options.
You can pass these through most of the cache helpers and methods. Not all of them are very useful — in fact, I would say the only one that's actually useful every day is `expires_in`; the rest are, you know, weird edge cases. `race_condition_ttl` is very interesting; I've described what it does there. It's basically useful if a single cache key is getting hammered and there's a risk that multiple processes might hit it stale at the same time and all try to write to the cache at the same time — it just basically saves you multiple simultaneous writes. Kind of useful; I've used it once at GitLab so far, I think. It's not that useful. `expires_in` is the useful one; the other stuff I just wouldn't worry about too much.
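For completeness, the dog-pile case looks like this (the key and the block are hypothetical):

```ruby
# When the entry goes stale, the first process to notice recomputes it;
# everyone else keeps being served the slightly-stale value for up to
# race_condition_ttl, instead of all recomputing and writing at once.
Rails.cache.fetch("front_page/popular", expires_in: 5.minutes,
                                        race_condition_ttl: 10.seconds) do
  expensive_front_page_render
end
```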
With HTTP caching, it's only the user's browser that's going to cache the response — you're telling them to cache it. But that's also its downside: they can just override that (unless it's Google Chrome, where it caches it forever and you never notice, and it's really irritating), and everything else, like API clients, can ignore it. They don't have to pay attention to it. That means it can be effective, but if you're relying on it to stop performance issues, it's a bit problematic in that regard. In theory you should kind of use it everywhere; in practice it makes development quite frustrating, I would say, because your browser locally will be caching a lot as well, and it can be kind of awkward. It doesn't work quite the same as the others.
Reverse proxy caching has very similar use cases, but with a proxy in front of it. The most useful part of this for us — and probably for most people — is that Cloudflare supports it.
You can actually turn on a setting to cache not just static assets but all files that send the conditional GET headers. So it can be quite effective in that regard — but expiration is hard.
Cloudflare don't really make it obvious how long your cache values will survive, stuff like that. Not too much to worry about there. Then, page caching: I use this a lot, and I've got some details, but I'm just going to skip past this one — it's unlikely we're going to be able to use it that much at GitLab, though I use it personally quite a lot. Same with view and action caching — probably not too relevant. I have added something to the API to allow you to do this on the API, though.
Fragment caching, though: use it everywhere, for everything. Very useful, very handy. There are some tricks to doing it. As I mentioned before, because partial rendering is kind of slow, it's actually faster to cache around the render call — but that's also a problem, because of something quite clever Rails does: it wraps a hash of your template files into the cache keys. It doesn't do that if you put the cache around the render call; it only does it if you do it inside the file.
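A sketch of the two placements (the partial and record names are made up):

```erb
<%# Inside _note.html.erb: the cache key automatically mixes in a digest
    of this template and its dependencies, so editing the markup busts
    the cache on deploy. %>
<% cache note do %>
  <div class="note"><%= note.body %></div>
<% end %>

<%# Around the render call in the parent view: skips the per-partial
    overhead on a hit, but no template digest is included, so you have
    to rotate the key yourself (the "v2" here) when the markup changes. %>
<% cache [note, "v2"] do %>
  <%= render "note", note: note %>
<% end %>
```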
I come across these all the time in the test suite, where it kind of just needs to call `reload` on things — it can be kind of irritating, but it's just something that happens, I guess. Request caching is basically the same. I've added a helper that rolls this into cache calls — it's called `fetch_once` — and it's kind of useful: it allows you to do a `Rails.cache` fetch, except that it won't perform that fetch again within the request; it will just serve the value back out.
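The request store itself is built on the request_store gem (GitLab wraps it as `Gitlab::SafeRequestStore`); the underlying idea is just a per-request hash. A sketch (the loader method is hypothetical):

```ruby
# RequestStore.store is a Hash that's cleared between requests, so this
# runs the expensive lookup at most once per request, from anywhere in
# the code. The ||= has the same nil caveat as plain memoization.
def current_license
  RequestStore.store[:current_license] ||= License.load_from_database
end
```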
Basically, it's very useful in front of single-object loads in particular — `Article.find` or whatever, stuff like that, where you're just loading one thing — especially on a high-traffic page. It does cut quite a lot of load to your database and can be very effective. But obviously you can't perform queries on it, so you can't have conditions or sorting or stuff like that.
But it would be very hypocritical of me to say "don't do this," because I've done it all the time. In fact, half of the things I've mentioned here are my own, because I just know them better. There are others — the stuff around the Banzai renderer cache is particularly interesting and complex — but something like the repository hash cache in this example is super weird: it works for this one purpose, and it works very effectively for this one purpose.
But I wrote it generically, to be reused elsewhere, and it might never be used elsewhere, because it's really only useful for this one specific purpose. So novelty caches can cause problems in terms of maintenance later, because they can be a bit weird.
So, cache expiration — this is where everyone panics and gets really worried. I've put some details in about how memcached used to expire keys, because it's kind of interesting, and it gets into your head about how it works. Basically, new entries go on at the head, old ones drift towards the tail, and any time anything is read, it gets bumped back towards the head — so the least-accessed, oldest things are the ones that get deleted first. That worked really well when it ran beside your server. They then changed it to this very interesting thing.
I put the diagram in with literally no notes about it, because it's just exciting — read that blog article; it's actually very interesting how they reorganized it. It basically just made it more effective for modern application use cases, but the general gist is the same as the previous one: the older stuff just gets deleted. It's just got some interesting extra caches.
Redis has loads of settings that kind of let you do both of these things as well — there's something called least-frequently-used mode; it's all very weird. One really interesting thing they've got is something called UNLINK, which we now run by default: any time you call a delete on the Redis cache, it doesn't actually delete the data, it just unlinks the key from the value and then clears the data up later.
That's much faster, it works perfectly well for our use case, and it kind of allows Redis to handle it better. But the most important one is how Rails expires keys, and this is where it's most important to focus: cache key expiry is how it's most effective. I've got another slide in a second about it. It allows the cache to not worry about receiving deletes.
I've sort of written out how Rails' cache key is actually composed here. It's useful to know, because it varies: it depends on where in the stack you use it — it's different in views to how it is in the rest of the application.
A
It's quite effective, though there are some gotchas; I've written those out a bit later. But this is the main comparison between explicit deletes, cache key rotation, and using time-to-live to dictate how things drop out of the cache. I would try and avoid explicit deletes, and we're going to try and move some of the GitLab application away from them, because an explicit delete is basically like a phone call to the cache: it has to respond right now, it has to delete a thing.
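To make the TTL alternative concrete, here is a minimal in-memory store with an `expires_in` option, in the spirit of `Rails.cache.fetch(key, expires_in: 1.minute)`. The `TtlStore` class is a made-up illustration, not the Rails API; entries simply become stale after their TTL, so nobody ever has to phone the cache to delete anything:

```ruby
# A toy cache where entries drop out after their TTL, so the writer
# never has to call back and delete anything explicitly.
class TtlStore
  Entry = Struct.new(:value, :expires_at)

  def initialize
    @store = {}
  end

  # Mirrors the shape of Rails.cache.fetch: returns the cached value if
  # present and fresh, otherwise computes it via the block and stores it.
  def fetch(key, expires_in:, now: Time.now)
    entry = @store[key]
    return entry.value if entry && entry.expires_at > now

    value = yield
    @store[key] = Entry.new(value, now + expires_in)
    value
  end
end

cache = TtlStore.new
t0 = Time.now
cache.fetch(:greeting, expires_in: 60, now: t0) { "hello" }           # computes and caches
cache.fetch(:greeting, expires_in: 60, now: t0 + 30) { "recomputed" } # still fresh: "hello"
cache.fetch(:greeting, expires_in: 60, now: t0 + 90) { "recomputed" } # expired: recomputes
```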
A
So there are some gotchas to caching. I've written out a scale of interesting caching problems, from "no biggie" through to "danger", basically. These are just things that you might come across. I've had stuff recently where the cache has no effect; that means you've probably messed up. One of them was that I didn't actually turn on the feature flag.
A
That's just something that I apparently do frequently now. Tuning TTLs and stuff, you can do later. One thing to keep in mind as well is that not everybody needs to see fresh data all the time. What's a one-minute delay for certain things? We use this in a couple of places.
A
Now, like, storage alert banners are displayed 30 seconds or a minute later in theory, because it caches a very expensive SQL request for entire groups at a time; it can be very effective. Where it starts getting more dangerous is where you're starting to cause problems, like recently when I created a cache key that changes on every page load. That's fun: your graph will do really horrible things. It will go up sort of infinitely as everything gets slower as it keeps writing. Obviously, turn off the feature flag.
A
I don't know if anyone remembers, but Valve did this a few years ago in Steam, where people were logging in and seeing other people's personal information, account details, stuff like that. It was a caching problem; they messed up. It can happen, it's not great, so we've just got to be more careful really, especially with our sort of time to change things around. So when you're in production, the performance bar is very useful for this.
A
There are other options if you're doing something new, and there are a few other gotchas in there that are worth keeping in mind, but the main one is user-specific data. Network latency: I've added some rough numbers here. It's just something to keep in mind; it's kind of not a huge issue. If you're getting network latency inside your cloud provider, like we are in Google Cloud, and you're starting to get network latency to your cache, everything else is probably going to die shortly anyway.
A
So it's not as much of an issue, but it comes down to load as well, because you can send multiple cache requests at once to your cache using the multi-fetch methods, and that just has less overhead. It's got no extra TCP stuff; you don't have to take up extra connections, anything like that. It basically just cuts the overhead of doing multiple cache reads.
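The batching described here is what `Rails.cache.read_multi`/`fetch_multi` do: many keys resolved in one round trip, with the block filling in only the misses. A minimal pure-Ruby sketch of the `fetch_multi` semantics (the `MultiCache` class is illustrative, not the Rails implementation):

```ruby
# Toy fetch_multi: looks up all keys in one pass, then yields only the
# missing keys so the caller computes (and caches) just those.
class MultiCache
  def initialize
    @store = {}
  end

  def fetch_multi(*keys)
    hits = @store.slice(*keys)          # one "round trip" for all keys
    misses = keys - hits.keys
    misses.each { |key| @store[key] = yield(key) }
    keys.to_h { |key| [key, @store[key]] }
  end
end

cache = MultiCache.new
cache.fetch_multi(:a, :b) { |k| "computed-#{k}" }
# => { a: "computed-a", b: "computed-b" }
cache.fetch_multi(:a, :c) { |k| "computed-#{k}" }
# :a comes straight from the cache; only :c is computed
```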
A
They still have to be read on the cache side, though; the cache server still has to read, like, the 100 keys you give it, but it is really fast. There's a really good way of using this in Rails that we'll look at. This is my favorite pun of the entire presentation, and the most important: so, multi-fetching. This is where the `cached: true` thing comes in when you're rendering a collection of partials in Rails.
A
You can pass it a proc and it will give you the item, and you can then specify additional things to add to the cache key, or in this case just replacing it with the user cache key, which would probably totally break it. It doesn't support `expires_in` or any of the other cache options, which is why it's worse than multi-fetch fragments, but it's still one of the single biggest performance improvements you can make, and it's well worth looking at.
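For reference, the collection-rendering forms being described look roughly like this in a Rails view. This is a sketch: the partial path and instance variable are made up, and as noted above, replacing the key entirely with something like the user's cache key is the kind of thing that can break caching:

```erb
<%# Renders one cached fragment per project, fetching all the fragment
    cache entries from the store in a single multi-read. %>
<%= render partial: 'projects/project',
           collection: @projects,
           cached: true %>

<%# The proc form yields each item so you can customise its cache key. %>
<%= render partial: 'projects/project',
           collection: @projects,
           cached: ->(project) { [project, current_user.cache_key] } %>
```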
A
Right, where am I? Right: preventive, preemptive caching, or cache warming, i.e. trying to prepare a cache in advance. If your user is only going to see something once, then there's no point serving them the uncached version, because it'll be slow, and then they spend extra time actually writing a cache value that they never see.
A
We've got some good examples of this on GitLab, I think, like diffs or blobs, potentially, for specific commits for different SHAs. They might only be viewed once, so serving the user the uncached version is kind of pointless. You could pre-generate it in the background on a Sidekiq thread; that can be very effective.
A
It's quite hard to do with views, because with views you'd have to call the actual rendering stack. There are ways around that, though. Adding awkward things to the cache keys: you can use the digest system to do this, like SHA-ing entire things. Don't just use a random class, because it changes every time and your cache key rotates on every page load, and that's quite bad. But it's very useful. It's actually so fast that it's effective even on huge strings, when you wouldn't expect it to be. It is very, very fast.
A
Another aspect to it is going for cache keys that are not quite what you want. There are situations you're going to come to where you can't actually create a cache key for something unless you've already loaded that data. I've come across another one: you can't tell if a `has_one` relation exists unless you make the query. You can with a `belongs_to`, because it's got the ID, but with a `has_one` it's actually having to perform the lookup query.
A
So if you want to avoid that, you're going to have to come up with a different way of caching that data, and sometimes it can be stuff like fudging it so that you're just seeing if something like what you want exists, or mixing together conditions that kind of give you the same thing. I do this an awful lot. There is a maintainability overhead, which is something to keep in mind. Something I do literally all the time: it's my only use for the JavaScript that I write.
A
I use JavaScript to allow me to cache more. The "eight minutes ago" timestamp thing, like, we actually do this in JavaScript at GitLab. I was very happy when I discovered this, because that's one of the things that's just really irritating to cache otherwise. I do it on my own site, on my blog thing: it's got a user navigation system thingy that's added in with JavaScript.
A
It looks for a cookie that just says, like, "user signed in: 1", just to detect which navigation it should serve, because the rest of it is just a flat HTML page, and that's very fast. We can do lots of tricks like that, and using a nice mix of front end and back end together is how it gets fastest. You don't want to just move everything to the front end; you don't want everything in the back end. You need the two working together to be really fast.
A
I forgot that slide exists. That's basically what it's about. Is there anything else in that one?
A
No, right: user-specific freshness sounds like a really insulting deodorant brand, but you can use it to good effect when you've got multiple people looking at something but only one person really needs to always see the latest version, or you've told someone it exists. So, an example of putting caching around a blog post: the author obviously needs to see their updates immediately, because they can get really confused as to why it hasn't changed, but other visitors aren't going to know any different. Same with GitLab.
A
You could potentially do this with notes: you could show the user their note immediately, but only serve the latest version to everyone else after the notification email has gone out, something like that. You can do little tricks like that. It can be very effective on very busy endpoints to prevent the issue where you've got loads of stale cache reads at the same time. Grape Entity is a nightmare; I have so many problems with it. It makes caching very difficult and it's also really slow.
A
The JSON generation in Grape is slow. JSON generation in Rails isn't very fast anyway, or in Ruby in general. We've got some tricks that we've added around this to make it a little bit easier, but they do have some downsides: it requires quite complicated cache keys, and Grape has no built-in caching support in Grape Entity, unfortunately. There are alternatives if you're doing your own project: I use something called Jb, and I think Rabl is another one that has caching support.
A
There are basically alternatives. It's a bit of a problem and we'll look at this in the second session, because it's a personal point of frustration for me and it's worth checking on. So how do you actually make sure that your cache is working? I've put some links to some tools. Grafana is very useful: toggle your feature flag on and off and watch the wiggly lines go up and down.
A
They will go up before they go down when you enable a big cache, because it's writing lots, and then it should go down. If it keeps going up, that's not right. If you can't get it on Grafana, Kibana is very useful for that, but the performance bar works in production as well, and the flame graph thing works in production.
A
The particularly useful thing in the performance bar is the Redis call list. It will show your cache key reads. I used this recently to detect that a cache I put live was actually not doing what I thought it was. It's very good: as soon as your thing's on production, turn the feature flag on for yourself, scope it to yourself or something, go to a page, and look at the cache keys in the Redis list. It's very effective!
A
So there's not many slides left, so I'm almost on time, speaking at quite a lick. Identity cache, which I mentioned before: this is very useful. Don't use it for the current user; I vaguely recall it breaking stuff really weirdly. It provides you read-only objects, so you can't do certain things: you can't fetch something from the cache and then add, like, extra scopes to it to query something from the database. It kind of doesn't work right, but it is very useful.
A
Something I use all the time is a gem called redis-objects. This is a great gem, it's really interesting, and it basically standardizes a lot of what we do in Redis every day as methods to stick on models. You can use it to do, like, a counter cache in Redis instead of in Active Record. It's got loads of built-in methods; it's very, very cool. For me, I actually use Redis and memcached side by side.
A
We can kind of look at this. I sent this out, just bragging really, to some big Rubyists, saying "look at the thing I wrote", and they both replied back and said "that's neat". So it's certified neat; they said exactly the same thing, it's quite weird. Literally no one has a use case for this, but what it actually does is allow you to do loads of little caches and only send one network request in Rails. It has a very specific use case.
A
It might be useful for GitLab, because it's very good when retrofitting caches. If you design something from the start, you don't need it, so it's not very useful for places that have already built their caching in. So, last section: some specific stuff about the tools we've got. We're going to look at these in the second session; I'll do a brief bit of the style where we'll actually just load up the code and look at how some of these work, or places where they're in use.
A
These are the interesting novelty caches. My particular favorite of these is the avatar cache, where I found a controller that we don't even performance-test, because no one remembered it existed, and it was causing one and a half thousand queries for avatars per request. So now we have an extremely specific cache just for avatar lookups by email. That's kind of a fun one: it actually uses explicit deletes, but it uses Redis hash keys so that you can clear multiple avatar size caches per user for each email address you send.
A
So we've got some extra tools. I've added a generic class called Gitlab::Cache; any useful caching helpers that we write can go in there. I've only added one so far, which is fetch_once; it's the one that you wrap in the request store. We've got some tools for when you're doing these custom Redis caches, like one to serialize boolean values in and out of the cache, because Redis serializes them as strings by default: true is just the word "true", and that could potentially cause conflicts. It can be a bit odd.
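A sketch of the boolean round-tripping problem and a helper of the kind being described. The `BoolCache` module and its method names are made up for illustration; GitLab's actual helper may look different:

```ruby
# Redis stores only strings, so `true` comes back as "true", and a bare
# `if cached_value` check would treat the string "false" as truthy.
# Explicit (de)serialization avoids that.
module BoolCache
  def self.dump(value)
    value ? "true" : "false"
  end

  def self.load(raw)
    return nil if raw.nil? # distinguish "not cached" from "cached false"

    raw == "true"
  end
end

store = {} # stand-in for a Redis connection
store["feature:enabled"] = BoolCache.dump(false)

BoolCache.load(store["feature:enabled"]) # => false (not the truthy string "false")
BoolCache.load(store["missing-key"])     # => nil, i.e. a cache miss
```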
A
Reactive caching is super interesting; go and read the documentation on that. It's kind of cool. It's a quite complex feature that was added a while ago, and it basically allows values to be generated in a background thread whilst somebody's showing interest in them, essentially. It's quite cool.
A
I added some bits to the API. These are particularly useful because the API is proving to be kind of a performance bottleneck for a lot of our stuff recently. We'll have a look at how these work. present_cached is the useful one: it has basically the same effect as the partial collection renderer caching in the Rails stack, but for Grape. It's quite effective, but it's got its own sort of potential issues, I guess, and it just requires you to come up with quite complex cache keys.
A
I think we'll look at an example of that, and cache_action, which we added recently. It's like the Rails action caching, and more than anything else it's a denial-of-service prevention tool. The only place we've stuck it around so far is an endpoint that always hits Gitaly with an expensive request, and you can just cache the entire response for a short TTL, just to make sure that it doesn't fall over. That's kind of useful. I am kind of on time.
A
So
that's
the
end
for
this
one,
any
questions
anything
you
want
me
to
cover
more.
Please
stick
in
the
issue
and
any
suggestions
of
bits
that
we
can
look
at
the
next
session.
I'm
going
to
do
less
direct
talking
and
everyone's
welcome
to
contribute
and
discuss
about
stuff.
I
just
wanted
to
keep
the
slides-
quite
I
say
short,
but
it
was
an
hour
of
me
talking
and
we'll
go
through
them.
A
So that's going to start in about an hour, and I'll try and make sure my development environment actually works by that point. It's probably all right. Great.
A
I will see you there. It's a different meeting for the next call, just keep that in mind, and I will hopefully see you on it. I will not hold it against you if everyone's already asleep and decides to go away; it's perfectly reasonable.