A: So we're going to chat about what we're going to do next about the latency spikes we're seeing on Redis, related to the cache evictions when we reach max memory. I don't think we need to repeat all of the context we discussed yesterday, but we've done several investigations on the Redis side and on the application side, and we're going to see where we take it from here. Matt, you were saying something before I interrupted you?
B
No,
no,
it's
all
good!
So
so.
At
this
point,
we
we
have.
We've
established
that
the
that
the
redis
latency
spikes,
which
are
triggered
by
the
eviction
bursts,
are
are
driven
by
the
eviction
mechanism.
B
Reducing
redis
client
throughput
enough
that
we
have
a
backlog,
accumulate
quickly,
which
competes
for
memory
which
drives
the
eviction
cycle,
and
this
only
abates
approximately
when
enough
clients
get
stalled,
that
the
incoming
request
rate
falls
low
enough
that
that
evictions
can
keep
up.
So
that's
how
the
cycle
ends.
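A minimal observation sketch of the cycle just described, assuming a reachable Redis and the redis-py client (the host, port, and one-second cadence are arbitrary choices): sample INFO and print the eviction rate next to memory usage relative to maxmemory. A burst shows up as a spike in evictions per second while used memory sits pinned near maxmemory.

    import time
    import redis

    r = redis.Redis(host="localhost", port=6379)

    prev = r.info("stats")["evicted_keys"]
    for _ in range(60):  # watch for one minute
        time.sleep(1)
        evicted = r.info("stats")["evicted_keys"]
        mem = r.info("memory")
        used, maxmem = mem["used_memory"], mem.get("maxmemory", 0)
        pct = 100.0 * used / maxmem if maxmem else float("nan")
        # An eviction burst: evictions/s spikes while used memory hovers near 100%.
        print(f"evictions/s={evicted - prev:6d}  used={pct:6.2f}% of maxmemory")
        prev = evicted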
B: Yes, exactly. So yesterday we talked about some potential changes. I kind of want to focus mostly on what we can do for the sake of our own system, which, as we discussed yesterday, is chiefly... the presumably simplest thing we can do is reduce the TTLs, so that our demand for, essentially, for... well, "key space" has a different connotation in the context of Redis.
B
So
let
me
come
up
with
a
different
word
for
it,
the
demand
for
how
much
memory
we
need
to
store
our
keys
needs
to
be
less
than
max
memory
and
the
main
control
we
have
over.
That
is
ttl
the
secondary
control
we
have
over.
That
is
what
redis
instance
do
we
store
these
keys
in
which
leads
us
directly
to
the
the
repartitioning
discussion.
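As a back-of-envelope illustration of that point: at steady state, the memory a class of keys demands is roughly its write rate times its average entry size times its TTL, which is why TTL is the primary lever. All the numbers below are hypothetical.

    write_rate = 2_000        # new cache entries per second (hypothetical)
    avg_entry_bytes = 4_096   # average serialized entry size (hypothetical)
    ttl_seconds = 30 * 60     # a 30-minute TTL

    demand = write_rate * avg_entry_bytes * ttl_seconds
    print(f"steady-state demand ~ {demand / 2**30:.1f} GiB")
    # Halving the TTL halves the steady-state demand for this class of keys.
    print(f"with TTL halved     ~ {demand / 2 / 2**30:.1f} GiB")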
A
Yeah,
I
think,
for
reducing
details.
I
think
we
need
to
because
I
was
looking
at
it
briefly.
There's
some
keys
there
that
really
stand
out
that
shouldn't
like
there
was
one
key
that
was
really
low
hanging
fruit
that
took
almost
two
gigs
of
memory
by
saving
something
for
half
an
hour
when
we
only
need
it
for
a
few
minutes
really.
So,
like.
Okay,
that's
like
low
hanging
fruit,
yeah.
I
think
we
should
find
them
and
fix
them.
Yes,.
A: Yes, but not all of them are easy. It's not always easy to track down where a key is coming from.
B: That's exactly the question I think we should explore next. Okay, how should we do... well, all right. So: reducing TTL is probably the easiest thing we can do, and the goal there would be to reduce the demand for storage space. That's what I'll call it.
B: No, this would totally be driven by your analysis of the... Do you have handy the table you were sharing yesterday?

A: Yes.
C: And I guess my second question on that table, and I read a lot about this last night: have you done the analysis for less than a day? Meaning, that table is all in days; do you have it by hours?

A: No, I meant to do that today, but I didn't get to it yet, so I still need to do that. It takes, I think, about 45 minutes to run on my machine, so we're not going to sit and watch it, but I'll do it.
A: It doesn't really tell you much, since everything is within a day, correct? Yes. So here... am I sharing my screen? Yes, so here's the table. And, I mean, again...
A
Make
this
a
little
bit
bigger
yeah,
I
I
haven't
figured
out
how
to
screen
share
a
single
window
so
like
it
just
completely
breaks
zoom
and
then
okay.
So
that's
no
worries
yeah.
So
here's
the
the
the
list
of
big
things.
These
two.
B: Hang on, this is large in terms of... let me just clarify the units here. This is by...
B: One of the two things we're interested in, actually. Maybe this is a good time to talk about it: there are two aspects of this, and I think we can ideally approach it from both directions. We've talked about this before; I just want to bring it to the forefront.
B
Since
this
is
germane
to
the
current
topic,
the
the
pressure
comes
from
how
many
bytes
of
memory
we're
using,
which
is
why
it's
relevant
the
eviction
mechanism
doesn't
care
a
fig
about
the
size
of
the
keys
that
it's
evicting.
B
It's
it's
it's
it's
completely
unaware
of
of
that
when
selecting
keys
and
so
the
the
when
we
classify
keys
the
the
classes
of
keys
with
the
largest
count
are
the
ones
that
are
most
likely
to
be
evicted
so
for
purposes
of
of
of
the
evictor
key
size
doesn't
matter
as
as
much
as
which
which,
however,
we
categorize
the
keys
which
which
which
category
has
the
largest
count,
but
in
terms
of
avoiding
evictions
entirely
by
reducing
our
our
our
demand
for
key
keys.
B
Key
storage
space
size
does
matter,
so
those
are
the
kind
of
two
angles.
I
wanted
to
kind
of
try
to
keep
in
mind
at
the
same
time.
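A sketch of that two-angle analysis, assuming redis-py and ideally pointed at a replica, since a full SCAN plus MEMORY USAGE pass adds load; the first-segment prefix rule is a crude stand-in for whatever classification the real analysis script uses. It reports both the count per class (what the evictor's sampling is effectively weighted by) and the bytes per class (what presses on maxmemory).

    from collections import defaultdict
    import redis

    r = redis.Redis(host="localhost", port=6379)

    counts = defaultdict(int)
    sizes = defaultdict(int)
    for key in r.scan_iter(count=1000):
        prefix = key.split(b":", 1)[0]   # crude class: first segment of the key name
        counts[prefix] += 1
        sizes[prefix] += r.memory_usage(key) or 0

    # Largest classes by bytes; the evictor is blind to this column and is
    # effectively steered by the count column instead.
    for prefix in sorted(sizes, key=sizes.get, reverse=True)[:20]:
        print(f"{prefix!r:40} count={counts[prefix]:>9}  bytes={sizes[prefix]:>12}")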
A: Yeah, it tries to match keys according to the prefix, and then it will try to match them to something it already knows. That's why these two are separate and very big: they basically contain everything that we see below here. And this one is also not very interesting, because that word is the prefix of "project", which we'll see several times later. So, yeah, this one.
A: Right, and then we need to go through all of these. Most of these go through Rails.cache, which means we don't have a lot of influence over how they're accessed and how they're stored; for example, the things that include "view".
B: And just as a refresher: I remember Kung Min was working on compression for Sidekiq jobs, but I don't remember the rules for that one. We have a threshold where the job definition has to... the one...
B
Exactly
do
we
have
such
a
threshold
mechanism
in
place
here
or
yeah.
A
I
don't
know,
I
don't
know
the
threshold
for
for
sidekick,
but
the
default
one
for
rails
are
looked
up
before
this
and
that's
one
kilobyte.
So
everything
bigger
than
one
kilobyte
will
compress.
A
Client-Side
compression-
yes,
okay,
great,
I
don't
know
exactly
which
level
of
compression,
though
I
haven't,
checked
that
what
what
the
default
of
that
is.
Okay,.
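For reference, a minimal Python sketch of the compress-over-a-threshold behavior being described; the 1 kB cutoff mirrors the Rails default mentioned above, while the one-byte marker scheme is purely illustrative, not Rails' actual encoding.

    import zlib

    COMPRESS_THRESHOLD = 1024  # bytes, mirroring the Rails default mentioned above

    def encode(value: bytes) -> bytes:
        """Compress values over the threshold before writing them to the cache."""
        if len(value) > COMPRESS_THRESHOLD:
            return b"z" + zlib.compress(value)   # marker byte + deflate payload
        return b"r" + value                      # small values stored raw

    def decode(blob: bytes) -> bytes:
        return zlib.decompress(blob[1:]) if blob[:1] == b"z" else blob[1:]

    payload = b"x" * 10_000
    stored = encode(payload)
    assert decode(stored) == payload
    print(f"{len(payload)} bytes -> {len(stored)} bytes stored")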
B: Okay, yes. That seemed like it was the simplest thing to do. And then, if it's convenient, we can also perhaps classify them by whether it would be feasible to move them to a separate instance; and, as you were describing a moment ago, going through the Rails default caching mechanism makes that harder.
A
What
I
think
might
be
interesting
is
to
perhaps
go
through
all
of
the
uses
of
multi
like
multiple
keys,
the
multi-key
commands
and
moving
those
to
a
separate
instance.
Because
then
what
we've
got
left
is
something
that
could
fit
on
the
redis
cluster
instance.
A
That's
going
to
be
the
most
part
like
the
biggest
part
of
it,
but
but
I
don't
know
if
that's
worth
doing
space
wise,
so
yeah.
B
Like
it's,
it's
definitely
worth
doing
for
the
long-term
goal
of
getting
to
getting
to
read
this
cluster,
and
I
guess
I
guess
without
without
I
guess
we
have
to
do
that
analysis
to
find
out
how
much
of
the
storage
space
we're
talking
about
to
kind
of
assess,
whether
that's
a
now
thing
or
a
near
future
thing.
B
Oh
sorry,
so
moving
moving
the
so
if,
if,
for
example,
we
find
that
I'm
making
these
numbers
up
totally
if
we
find
that
that
the.
B: I was spacing out on the vocabulary too. Yes: if we end up having, say, 10% of our key storage space being used for...
B: ...for the cross-slot command keys, then moving that 10% of the key storage space out of the existing Redis might reduce it by enough to avoid saturating max memory; and if it doesn't, then we still have more work to do. I guess I'm thinking that it's a relatively small percentage, because, as far as I know, I thought that we have been avoiding the cross-slot design pattern.
A: We have. We have safeguards in place, yeah.
A: Right now you can't... specs will fail if you try to introduce a multi-key transaction, yes.
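A sketch of the kind of safeguard being described: before issuing a multi-key command, check that all the keys hash to the same cluster slot, and fail loudly otherwise. This version asks the server via CLUSTER KEYSLOT, which honors {hash tags}; it assumes redis-py and a server that accepts that command, and the key names are hypothetical.

    import redis

    r = redis.Redis(host="localhost", port=6379)

    def assert_single_slot(keys):
        """Raise if a multi-key operation would span cluster hash slots."""
        slots = {r.execute_command("CLUSTER", "KEYSLOT", k) for k in keys}
        if len(slots) > 1:
            raise ValueError(f"cross-slot access: {keys} span slots {sorted(slots)}")

    assert_single_slot(["{project:1}:a", "{project:1}:b"])  # shared hash tag: same slot
    try:
        assert_single_slot(["project:1:a", "project:2:b"])  # almost surely different slots
    except ValueError as err:
        print(err)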
B
Okay,
okay,
gotcha,
so
so
we've
so
it
sounds
like
then.
We
know
the
areas
that
have
the
tech
debt
of
of
needing
to
be
needing
to
be
tagged
so
that
they
can
be
compatible
with
this
cluster.
B: Thank you. Your third set of eyes is super useful, and I love the questions you're asking too.
C
Speaking
by
the
way,
since
yeah
bob
just
switched
over
to
this
as
far
as
redis
patches
go,
does
it
like
I'm
again,
I
think
that
is
a
really
much
longer
term
potential
solution,
but
it
doesn't
seem
like
redis,
has
any
ability
to
evict
based
on
key
size.
Does
it.
A: Yes, if I understood what Matt explained yesterday correctly, it will look at idle time, and at time-to-live for expiry.
B: Yeah, no, it's a good idea. Let me just briefly add that eviction is kind of a last-ditch saving throw...
B: And from a design perspective, Redis expects to have numerous keys. I guess what I'm working up to is: when it gets to the point of having to do evictions, Redis doesn't even attempt to say "I'm going to evict the key with the shortest remaining time-to-live."
B
It
instead
says
I'm
going
to
draw
a
sample
of
a
random
handful
of
keys
and
among
those
I'm
going
to
choose
the
one
with
the
the
least
remaining
time
to
live,
for
example,
if
your
policy
is
time
to
live
so
yeah,
so
in
the
same
sense,
if
we
were
gonna
do
if
we
were
gonna,
add
a
policy
that
that
that
said,
that
said,
if
it's
the
largest
keys,
then
it
would
presumably
follow
a
similar
pattern
of
select
select.
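A toy simulation of that sampling behavior, not the Redis source: when over maxmemory, Redis draws a small random sample (maxmemory-samples, default 5) and evicts the best candidate within the sample, here scored volatile-ttl style by least remaining TTL. A hypothetical size-based policy would presumably just swap the scoring function.

    import random

    # key -> remaining TTL in seconds (toy data)
    keys = {f"key:{i}": random.randint(1, 3600) for i in range(10_000)}

    def evict_one(keys, samples=5):
        """Pick the sampled key with the least remaining TTL, like volatile-ttl."""
        sample = random.sample(list(keys), samples)
        victim = min(sample, key=keys.get)
        del keys[victim]
        return victim

    print("evicted:", evict_one(keys))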
A: Looking at something here that I noticed: there are all of these "view" things, and that's quite a large chunk. That's also a thing that I don't think is currently covered by our cross-slot validator, because, and I'm not sure, I need to look it up, that's called in a completely different way.
A: I'm going to take a note of that. Matt, do you think we can do this cold turkey, or do we need to migrate the data?
B: Yes, so I think that analysis is... okay, I want to take it back up a level. In terms of next steps, the TTL analysis still feels like the shortest path, and the immediate next step on that was, Bob, you were talking about re-running the analysis to get a finer-grained distribution, at least for the less-than-one-day subset.
B
That
will
be
super
fantastic,
and
I
think
that's
that's
probably
going
to
let
us
that
will
probably
lead
to
some
concrete
recommendations
about
which
keys,
I
guess
what
I'm
thinking
of
is.
If
we
imagine
that
we've
got
that
data
set
right
now,
then
then
we
would
be
looking
at
the.
We
would
essentially
be
evaluating
the
semantics
of
what
what
harm
does
it
do
to
reduce
the
ttl
for
each
of
the
the
high
frequency
frequently
occurring
key
keys
and
since.
B
We
won't
have
numbers
on
this,
I'm
just
kind
of
thinking
out
loud
about
about
how
we
would
do
that
evaluation,
pretending
that
we
already
have
that
distribution
in
front
of
us.
A: If we had the distribution in front of us, we would be able to see which keys can certainly live with a lower TTL, because they have an idle time much larger than what we intend to set.
A: And, on the other hand, we can also look at where the batch of keys lives. If most of the keys don't get accessed... like, if we see that after an hour...
A: ...there's a lot of space there; if we have a lot of memory allocated that hasn't been used for an hour or two or three, then that means we could lower the default TTL there.
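A sketch of the distribution being asked for, assuming redis-py and ideally run against a replica: bucket keys by OBJECT IDLETIME to see how much of the keyspace has sat unused longer than a candidate TTL. Note that OBJECT IDLETIME is only meaningful when maxmemory-policy is not an LFU variant.

    from collections import Counter
    import redis

    r = redis.Redis(host="localhost", port=6379)

    buckets = Counter()
    for key in r.scan_iter(count=1000):
        idle = r.object("idletime", key) or 0   # seconds since last access
        buckets[min(idle // 3600, 24)] += 1     # one-hour buckets, capped at 24h+

    for hours in sorted(buckets):
        label = f"{hours}-{hours + 1}h" if hours < 24 else ">=24h"
        print(f"idle {label}: {buckets[hours]} keys")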
B
Yes,
so
if
we
haven't
had
a
cache
hit
on
that
particular
if,
if
the,
if
the
mean
time
between
cache
hits
so
all
right,
let
me
let
me
think
about
this,
we're
saying
that
if
a
key
is
if,
if,
if
a,
if
a
cache
key
is
relatively
infrequently
hit,
then
the
cost
of
of
reducing
its
ttl
is
is
presumably
lower.
B
I
guess
another
perspective
on
that
is,
and
I
don't
know
that
we
have
any
data
on
this,
so
so
it
would
really
kind
of
be
an
ad
hoc
assessment.
B: The whole point of doing the caching is to avoid extra work, and we don't have a good way to quantify how much work we're avoiding with these cache entries. So, for example (I'm totally making this up), one particular kind of cache key is avoiding spending, say, a hundred milliseconds per key on the primary DB, and another type of cache key is avoiding the same thing on replica DBs.
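One coarse signal that does exist today is the server-wide hit rate from INFO stats. It cannot attribute cost per key class, which is the gap being pointed out here, but it would show whether a TTL change moved the aggregate hit rate. Assumes redis-py.

    import redis

    r = redis.Redis(host="localhost", port=6379)

    stats = r.info("stats")
    hits, misses = stats["keyspace_hits"], stats["keyspace_misses"]
    total = hits + misses
    print(f"hit rate: {100.0 * hits / total:.2f}% over {total:,} lookups since restart")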
A: Yes, also duration, right. But what I was more alluding to is: here, this is idle time in days, but imagine these are hours. When you look at this, I would be inclined to say we can set the default TTL to one day and we wouldn't really feel anything.
A: But I'm imagining that if we look at this bucket closer, in more detail, then we're going to have a wider distribution at some point, which means we can say: if we put the TTL there, we're going to lose 30 percent of the memory right away, but we still have the 70 percent.
C
So
it
was
mentioned
that,
like
these
are
all
caches
right,
so
caches
as
a
whole
are
to
preserve
precious
resources.
Is
there
any
like
if
we
went
and
dropped
the
ttl
to
30
minutes?
Would
we
be
able
to
see
what
the
impact
is
on
the
resources
that
we
are
cashing
from
meaning?
B: A great question, Stephanie. Actually, if you can find a way to concisely frame that, that's a wonderful question for potentially a future project, so that we can...
B: It's not a dumb idea. There are a few challenges with it. The main obstacle is that because we're beyond the saturation point, we don't know how much we would need to grow it to avoid that saturation. All we know is that the demand exceeds the capacity, which is why we're having these bursts.
B
It
could
be
that
if
we
add
five
percent
more
more
more,
if
we
raise
the
max
memory
limit
by
five
percent,
then
we
would
top
out
at
an
additional
three
percent,
I'm
making
these
numbers
up.
I
don't
actually
believe
this
is
the
case.
If
we
added
50
to
max
memory,
I
would
kind
of
call
it
a
coin
toss
about
whether
that's
enough
to
you
know
level.
B: And we're definitely dropping... I have to take this on, because it's so germane to what you were just asking. The inverse of that is: how much do we need to reduce TTL to get underneath max memory? It's exactly the same class of problem; it's just easier to reduce the TTL than to grow the machines, because growing them wipes the cache. So the logistics is the...
B
Much
spare
yes
yeah!
Let
me
let
me
go
back
and
look
at
it.
The
one
of
the
tricky
bits
go
ahead.
Yeah.
A
Or
like
cluster
or
yeah,
whatever
sure
yeah.
C: The kind of side effect of what I'm saying is: I've looked at some of the things, and it's clear that the apdex for the Redis cache is not good, but how big is the impact? Is this the thing where, if we work on this for another month, everyone will be okay, or is it the thing where, if we don't have it fixed by next Friday, you know, Rachel's gonna cry? Because that's the other question.
B
That's
that's
a
wonderful
question,
so
first
bob's
question
just
because
it's
quick,
the
the
host
has
117
gigs
of
total
memory.
Some
of
that
needs
to
be
reserved
for
the
thousand
cash.
B
The
other
piece
is
when
I
I
need
to
double
check
this,
but
I
think
we
have
rdb
backups
disabled
on
this
box,
but
when
we,
but
when
we
lose
a
replica
that
still
kind
of
relies
on
the
rdb
mechanism-
and
I
I
can't
remember
the
details
of
that,
but
essentially
the
of
how
of
whether
that
gets
to
use
a
special
mode.
B
But
sorry,
I'm
the
relevant
piece
here
is
when,
when
we
do
even
with
rdb
backups
disabled,
at
least
when
reprovisioning
a
failed,
a
failed
replica,
it
still
relies
on
the
rdba
mechanism
and
that
mechanism
can
consume
potentially
can
potentially
double
the
the
memory
demand
on
the
host,
because
the
mechanism
for
it
is
to
is
to
fork
the
main
redis
server
process
and
rely
on
the
kernel's
copy
on
right.
B
Behavior
to
preserve
that
point
in
time,
snapshot
of
all
of
the
entire
memory
footprint
for
the
further
by
this
process,
and
so
the
longer
it
takes
to
make
that
rdb,
the
the
more
pages
will
diverge
over
time
and
so
for
planning
for
capacity
planning
purposes.
We
have
to
have
enough:
yes,
not
necessarily
a
full
double,
but
yeah.
It's
it's!
You
know
the
theoretical
upper
bound
is
twice
the
memory
demand.
So
it's
it's.
It's
a
little.
B
It's
a
little
hard
to
kind
of
plan
in
in
without
without
knowing,
for
example,
the
the
lifespan
that
it
would
take
to
to
complete
that
for
the
the
lifespan,
the
memory
domain
from
the
copyright
mechanism
for
the
entire
lifetime
of
the
process,
whose
whose
virtual
memory
is
forked
from
the
main
run,
is
processed
okay
and
then
so
that
was
so.
A
Like
we
could
increase
it,
but
then
we'd
lose
some
resilience
when
we
do
need
to
do
a
failover,
possibly.
C: Oh yeah, I already wrote that down: what are we caching, what's impacted by a lower hit rate, and how do we quantify that with data? Yes.
A
And
then
the
follow-up
question
to
that
was
what
is
the
effect
of
the
problem
that
we're
seeing.
B: Our current mandate is... so I guess the context here (Bob, please chime in if I'm misrepresenting anything): we knew about this problem months ago, and it wasn't high enough priority to displace other planned work.
B
This
is
more
important
now
because,
because,
essentially
because
we've
we've
been
working,
bob
in
particular
has
been
working
with
the
stage
groups,
the
feature
teams
to
to
kind
of
own
their
error
budget
and
work
with
them
to
improve
visibility
and
and
part
of
that
kind
of
led
to
uncovering
the
impact
of
this.
On
the
on
the
feature
categories
that
those
stage
groups
own
and.
A
So
the
actual
client
side
impact
is
every
10
or
so
minutes
a
whole
bunch
of
requests
take
way
longer
than
they
should,
and
this
is
especially
visible
for
groups
like
the
people
in
the
create
stage
who
host
like
to
get
the
get
endpoints,
so
the
info
refs,
and
so
on,
like
endpoints,
that
are
hit
by
git
clients,
the
endpoints
that
are
hit
by
ci
and
so
on,
like
the
super
busy
ones.
Those
have
those
noticed
this
the
most.
B
Yeah
and
in
some
cases
it
could
lead
to
to
client
side
timeouts,
but
mostly
it's
aptx
regressions,
so.
B: Yeah, so it varies with time of day; that is, it varies with workload, which varies with time of day. My anecdotal observation is that it generally varies, averaging about 10 minutes between bursts; I think we've seen it as high as... sorry, my recollection here doesn't matter. There are graphs that show what the actual pacing is.
B
Is
it
getting
worse?
That's
that's
the
question.
B
So
it
it's
a
little
more
so,
as
I
recall
it
was
more
severe
in
in
late
last
year
and
then
we
had
a
low,
and
that
was
probably
because
of
the
the
workload
being
reduced
over
the
over
the
end
of
year
holidays
and
then,
in
january
it
picked
back
up.
There
are
a
few
factors
that
can
drive,
I
mean
essentially
from
redis's
perspective.
B
What
drives
this
is
how
how
quickly
the
the
memory
rises
back
to
max
memory
after
evictions
clear
out
some
some
headroom,
which
is
the
the
undesirable
eviction
burst,
behavior,
that
we're
trying
to
ameliorate
factors
that
influence
that
growth
rates
the
effectively
the
slope
of
the
rise
from
you
know,
back
up
to
max
memory
are
feature
changes
where
we,
where
we
store
more
things
in
cache
user
behavior
that
that
that
happens.
Hidden
points
that
you
know
that
draw
I
mean
changing.
B: Changes to the cache keys can also drive this, so there can definitely be influence from application behavior and from end-user behavior. We don't have a clear handle on that, but anecdotally I've definitely seen the pacing of how often we reach max memory vary on a time scale of months, going both up and down, and I think that was mainly workload-driven. I also recall seeing a discrete change in behavior at least once, where the frequency increased; I think that was probably a feature change, although I didn't track down which specific feature did it.
C
My
thought
on
this
was,
if
there's
like
a
linear
getting
worse,
which
it
doesn't
sound
like
there
is,
which
is
great.
That
does
mean
that
eventually,
there's
a
wall
at
which
point
the
world
falls
over,
and
that
is
a
very
clear
you
have
to
fix
it
before
this
time.
So.
A: We'll attack that by, first of all, seeing which TTLs we can just easily lower without any side effects on the application. It's also going to be hard to track down which keys those are, so that might be a bit of a slow process, but we're going to see if we can lower them.
B
We're
specifically
favoring
reducing
the
ttl
on
keys
that
are
that
have
a
real,
a
propor,
a
relatively
high
idle
time.
So
the.
B
Looking
for
yes
exactly.
A
B
No,
no
I
I
was
just
I
was
thinking
about
since
since
we're
interested
in
two
dimensions
of
the
data.
I
I
find
it
really
helpful
to
frame
what
I
want
the
output
to
look
like
before
doing
the
implementation
work.
So
that's
all
I
was
going
to
say.
B: Yeah, no, totally. I wanted to spend a little more time looking at the data that you've already put together, and maybe sketch what, ideally, I would like to extract from it, in terms of subdividing the existing categories. That's it.
A
So
we
look
for
high
adult
time
keys.
Then
we
see
how
to
reduce
the
ttl
on
those
we're
also
going
to
have
an
overall
like
do
the
investigation
of
that
first
day.
If
there's
yeah
an
overall
thing,
we
could
we
spot
like
if
we
that
we
can
reduce
the
overall
ttl
by
x
days
and
see
how
much
we'd
gain
doing
that,
because
that's
a
simple
solution.
A
But
then
we
need
to
see
what
the
idle
time
like,
what
the
yeah
well
how?
What
impact
would
be
to
the
hit
rate,
the
cash
hit
rate.
B
Yes,
for
for
point
seven,
a
two
where
we
said
look
for
high
idle
time
and
big
keys.
I
I
think
I
think
maybe
I
think
maybe
I'd
refine
that
to
be
rather
than
big
keys.
It's
a
large
yeah,
exactly.
A: For the far future, and maybe this is something for our counterparts in the frameworks team to do: we need to get some consistent naming in cache keys. If all the significant identifying bits were at the beginning of the cache key, to show where the cache key is coming from, and the identifiers like project ID and user ID were at the end, then this analysis would be so much easier.
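To illustrate why that naming convention matters for the analysis: with identifiers at the end, grouping keys into classes reduces to collapsing the variable segments. The normalizer below is hypothetical, and the key names are made up.

    import re

    def key_class(key: str) -> str:
        """Collapse numeric ids and long hex digests so keys group by shape."""
        key = re.sub(r"\b\d+\b", "<id>", key)
        return re.sub(r"\b[0-9a-f]{20,}\b", "<digest>", key)

    for k in ("views/projects/123/readme-9f86d081884c7d659a2feaa0c55ad015a",
              "cache:project:123:user:456:settings"):
        print(key_class(k))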
B: Okay, and then, in addition to that: once we've selected some keys that have a high idle time and collectively represent a large amount of storage, we should spot-check them before reducing their TTL, to assess, to the best of our knowledge, whether any of them are known to the three of us to be particularly expensive, and maybe skip those. I don't have a good idea about how we would empirically test for this.
A: There's rendered commit stuff that we need to render differently for certain users, but most of it is the same; we just copy everything and then do it for each user. So there might be things we could optimize there, but I wonder if doing optimizations like that is going to take a bit too long.
B: No, yeah. I think if we had a point-in-time snapshot, like an RDB backup, then we could... I mean, most of the time we're looking at the names of the keys, but we could also do an assessment looking for duplicate values across the whole database, and then identify what names those keys had.
A
I
thought
it
would
be
handy
to
have
like
an
rdb
dump
of
the.
B: I guess a really dumb way to work around that, sorry, this is a total tangent, would be to basically run a hacked-up variant that just ignored TTLs for...
A
No
they're
crap
that.
A: Okay, I'm going to stop the recording.