From YouTube: GMT20200203 Scalability office hours
Doc for notes https://docs.google.com/document/d/12gElFtU7FcZo0iHtxJOgFMJzFROCOHW7extpvG2aB_Q/edit
A: Cool. Since it's on the hour and I have the first question: you asked me to look into some numbers for the free mirrors that are due to end somewhere in March, I think. It turns out about 84% of the mirrors that we run are free, so those would go away, which is of course a huge difference.
B: So I kind of think that, regardless of whether this is free or not, we should aim to be able to serve all of those. In the long term it doesn't really matter whether this is free or not; in the short term, when we are in a very big rush to actually make an impact, it doesn't matter a lot whether we are going to tackle this immediately or not.
B: True, that's true. But then again, from my perspective, what matters here is how we are going to tackle this priority-wise. If this is not going to go away in March, then our priorities need to shift and we need to focus more on this, because ultimately it's going to affect our whole infrastructure. If it does go away somewhere down the line, we still want to be able to support this many jobs eventually, but that can come later.
D: It'd be interesting to know what percentage of those mirrors that covers, because what I see over and over (and this is obviously not representative of everyone, but it is definitely a portion of this) is somebody who wants to test GitLab. They set up a mirror of, say, Unreal Tournament from GitHub, they play around with GitLab for 20 minutes, they decide whether it's cool, and then they never visit again, and the mirror keeps running for the rest of time.
B: I'm only relying on a notification to let me know when something fails, and it has been passing all this time. So either I'm so good at this task, or our platform is so stable, or my notification failed and I haven't actually checked. But the point is that there are all those repos which are set up and forgotten, and they do serve a purpose. So it's a bit tough to make a decision.
B: Okay, I'll put this here and then ask a question.
D: Yep, so I can just start jotting things down. Again, it's not in any particular order and it's certainly not complete, but I kind of wanted to state the things that I think are important. Ideally I would rather we didn't introduce extra application logic of the form "if we're doing Redis sharding then do this", scattered all over the show, because as soon as you have that, people who don't know about it don't realize how it behaves, things break, and you get leaking abstractions.
D: You know, I'm a big fan of: we have a Redis server, and whatever is behind that, whether it's Redis Cluster, Redis Sentinel or standalone Redis, doesn't really matter; we're just talking to Redis. So that was one of the goals, and then another goal was to avoid extra complexity by avoiding extra elements if we can.
D: There would need to be things to read, tasks, and a lot of documentation around it, because the thing people need to realize as well (and I've never actually operated a Redis Cluster myself) is that it's very different from a normal Redis. It has additional commands; it's not the same thing, and that's really worth being aware of.
D: One thing that's an interesting consideration, just a consideration across all my proposals: if you strictly keep to "your Redis connection will give you Redis stuff", then, like Mac was saying, one of the options is that for the persistent Redis, the case where cluster maybe makes the most sense, you could get an enterprise Redis Labs managed instance in Google Cloud. That takes it all off our plates and they just handle it.
B: The more we grow, the more complex this becomes, and I don't know whether it makes sense for us to run this one ourselves specifically. We can expect that none of our customers are going to reach this scale at all, so it feels like such a waste of engineering time to go through that. I think we are now finally crossing the border of being too big for the setups of the majority of our self-managed customers.
D: Yeah, it's interesting to consider that one, and definitely worth considering. Rather than, as you say, running it ourselves, for those huge customers you could say "hey, get your own Redis Labs instance". I hadn't thought about that before. I'm sitting here with the notes and documents, and I've put some assumptions in here.
D: The first one, because this became clear in that meeting (I think a lot of the assumptions were just in my head during the meeting this morning), was that Redis Cluster cannot run Sidekiq. So for Sidekiq you're not running Redis Cluster, and people have gone and created issues saying, you know, "Sidekiq isn't working very well" on it.
D: But the part of this that I hadn't actually read that closely before was that on a single dedicated Redis instance you can get about 8,000 jobs a second, which is much lower than the 50,000, right? So that's really interesting. It's not that we're hitting that ceiling yet or anything like that, but it's much lower; the ceiling is much lower than I'd originally thought.
D: So that kind of defines that. And then another thing that came up in this call this morning: there was some back and forth about whether Redis Sentinel is HA, and certainly I would say that, for our availability requirements, Redis Sentinel is HA. You don't need to be inventing things here.
B: Let me make it clear for everybody who is here now: there was a mention that Redis is a single point of failure, but I think, and this is what Andrew is addressing from the infrastructure side of things, the Redis that we currently have running is highly available. It is a classic standard setup with Sentinels and so on. From the application side, that might be the case, right: if the cluster goes away, how does the application behave?
A: The question I had when you mentioned Redis Cluster there: you also mentioned we want to keep the diff in the application as small as possible, and that kind of doesn't work, does it? If we decide to use Redis Cluster, then there are going to have to be application changes, right?
D: There are, and there are really two things that are the biggest concern here. One of them is Lua scripting, but from what I've seen, the Lua scripting that we use is fairly basic at the moment; maybe that'll change. So basically the rule is: if you're using Redis Cluster and you have a command, all the keys in that command have to be on the same Redis node, right?
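A minimal sketch of that rule, assuming nothing about GitLab's code; this is just the key-to-slot algorithm from the Redis Cluster specification. Keys map to one of 16384 slots via CRC16, and a non-empty `{hash tag}` makes only the tag get hashed, which is the standard trick for forcing related keys onto the same slot (and therefore the same node) so multi-key commands and Lua scripts remain cluster-safe. The key names here are invented for illustration.

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the variant Redis Cluster uses for slot hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Slot for a key, honoring {hash tag} extraction as per the cluster spec."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # the tag must be non-empty to count
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# Untagged keys will usually land in different slots; sharing a tag
# guarantees the same slot, so a multi-key command over them is legal.
print(hash_slot("{project:42}:lock"), hash_slot("{project:42}:queue"))
```

Without a shared tag, a multi-key `EVAL` or `MGET` over keys in different slots is rejected by the cluster with a CROSSSLOT error.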
D: So if you do a Lua script, all the keys it touches have to be on the same node, and that does impact the application, and I don't really have an answer to that yet. It's certainly something I was starting to think about. The one thing I was thinking of, which is also extra complexity (I had a proposal for it, and it's not great; I'd rather we didn't do this), is that if we find there's stuff that we just can't run in Redis Cluster, then we could...
D: We could keep running the existing persistent Redis, which is a non-cluster Redis, and then start putting things into Redis Cluster alongside it. But that's a bit of a dirty solution, I think; you'd rather just have everything together. If we find it's not possible otherwise, we could have both a "Redis single" and a "Redis cluster" connection: on a self-managed instance those could actually both be the same instance, while on GitLab.com the latter would be an actual cluster holding the stuff that we know is cluster-proof, and then the stuff that's not...
D: The other problem that I think you would have with this is that migrations would become very difficult, because some of the stuff would need to be transferred over to the cluster and other things would need to stay in the persistent Redis, and it all gets a bit messy. I think it would be much better to explore the possibility of just having everything move across to the cluster. The place where I originally encountered this...
D: That was Gitter, because we really wanted to use Redis Cluster, but we had these Lua scripts with dozens and dozens of different keys in a single script. They were really complicated: 50-line-long Lua scripts with a lot of logic and loops, crazy stuff. So we really struggled; we couldn't do Redis Cluster, because there was no guarantee that we could get all the keys onto the same node.
D: But from what I've seen (and Stan could probably comment on this), the only Lua scripts I've seen are super simple, like the locking one, which has a single key, not dozens of keys. There would probably also be a bunch of MULTI/EXEC blocks that we need to look at, but we can look at those on an ad-hoc basis and figure out whether this is going to be a problem.
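The ad-hoc audit described above can be partially mechanized. As a sketch (the Lua script below is illustrative, in the style of a single-key lock, not GitLab's actual script): a script that declares a single `KEYS[1]` is trivially cluster-safe, while scripts referencing several key slots need hash tags or a rewrite.

```python
import re

# Illustrative unlock script in the single-key-lock style discussed above:
# delete the lock only if the caller still holds the token. One key, one arg.
UNLOCK_SCRIPT = """
if redis.call('get', KEYS[1]) == ARGV[1] then
    return redis.call('del', KEYS[1])
else
    return 0
end
"""

def keys_used(script: str) -> set:
    """Which KEYS[n] indices a Lua script references."""
    return {int(n) for n in re.findall(r"KEYS\[(\d+)\]", script)}

def cluster_friendly(script: str) -> bool:
    # A script touching a single declared key can always run on a cluster;
    # multi-key scripts need hash tags to guarantee co-located keys.
    return len(keys_used(script)) <= 1

print(cluster_friendly(UNLOCK_SCRIPT))  # single-key script: fine on a cluster
```

A crude check like this would flag the dozens-of-keys Gitter-style scripts immediately, while letting the simple locks through.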
D: But I started writing down all of the Redis scaling proposals that I've seen and discussed with people. It's very, very early and very light; I'd like to flesh it out a lot. But I think the general proposals are sound, and then it's just a matter of building up what the problems with each one are and what the solutions are. The one thing worth noting, I think, is that the three Redises that we have call for very different solutions.
D: I think that would also be fairly easy on our caching instance. You know the issue that we saw the other day with SMEMBERS: this isn't going to help with that, and earlier today there were a whole bunch of spikes on that instance as well. We've seen hundred-percent CPU, and I don't really know if anyone has got to the bottom of what that is, but...
D: This is GitLab Rails, the monolithic service if you like, and that's always there, so it's that side of it rather than the actual Redis. This diagram is terrible because it makes Redis Cluster look like a single point of failure, but my Mermaid skills are not that hot. And then Redis for Sidekiq: for this one there are two ways I think you could do it. You can do it in the application, and what I was imagining there is that you come up with some sort of identifier to shard on.
D: So they've got this pluggable Sidekiq component, a client abstraction, and that just works really nicely: it will just shard everything for you. The only downside to that approach is that the API is basically per Redis, right, so the API doesn't get fanned out. So when we run commands, like going in to dequeue jobs manually, we would have to run those against both instances.
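The application-side approach could look something like this sketch. The shard names and shard count are invented, and this is not the pluggable component being referred to: it just shows the two halves of the idea, stable routing of each queue to one shard, plus explicit fan-out for administrative operations, since the per-Redis API won't do that for you.

```python
import zlib

# Hypothetical shard list; in a real setup these would be Redis connections.
SHARDS = ["redis-sidekiq-0", "redis-sidekiq-1", "redis-sidekiq-2"]

def shard_for(queue: str) -> str:
    """Stable queue->shard routing: the same queue always hits the same Redis."""
    return SHARDS[zlib.crc32(queue.encode()) % len(SHARDS)]

def fan_out(operation):
    """Admin commands (e.g. manually dequeuing jobs) must run once per shard,
    because each shard only knows about its own queues."""
    return [operation(shard) for shard in SHARDS]

# An enqueue goes to exactly one shard; an audit touches all of them.
print(shard_for("pipeline_processing"))
print(fan_out(lambda shard: f"LLEN queue:default @ {shard}"))
```

The trade-off matches the one in the discussion: sharding is invisible on the enqueue path, but every cross-shard operation has to be fanned out by hand.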
D: But that looks promising. The other thing that I was kind of surprised by, and I mentioned this earlier so I'll raise it with you: I've been quoting this "50,000 Sidekiq jobs" figure quite a lot, but the second sentence of that source is that per Sidekiq Redis instance it's about 8,000. So we've got some room for growth, but certainly not all the way up to 50,000; we're quite a bit down on that. So this is something we should probably start thinking about at some stage.
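A back-of-the-envelope check on those two figures (both are the numbers quoted in the discussion; treating 8,000 jobs/second as a hard per-instance ceiling is an assumption):

```python
import math

TARGET_JOBS_PER_SEC = 50_000   # the overall Sidekiq figure quoted above
PER_INSTANCE_CEILING = 8_000   # rough jobs/sec one dedicated Redis handles

# Number of Redis shards needed to reach the target at that ceiling.
shards_needed = math.ceil(TARGET_JOBS_PER_SEC / PER_INSTANCE_CEILING)
print(shards_needed)  # → 7
```

So reaching the quoted target would take on the order of seven sharded Sidekiq Redis instances, which is why a sharding story matters well before the headline number.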
D: I think the hardest of the three is what to do with the persistent Redis. The choices we have: we can continue with the strategy we've already used, which is just vertically partitioning Redis. That's the way we've done it so far: we obviously started with splitting the cache from everything else, then we split Sidekiq off, and now we could say, well...
D: ...everything to do with pipelines needs to go to a pipelines Redis connection, and so on. But I kind of feel we've reached the end of the road with those divisions, because any more of them and, when an application developer comes along and just wants to put something in Redis, he or she needs to figure out where to stick that thing. That's complicated and error-prone; one person uses one connection and another uses the other.
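The vertical-partitioning status quo amounts to a routing table like the sketch below (the prefixes and instance names are invented for illustration). The complaint in the discussion is exactly this: every new class of key forces a developer to know about, and correctly extend, this table.

```python
# Hypothetical mapping from key namespace to dedicated Redis instance,
# mirroring the cache / Sidekiq / persistent split described above.
PARTITIONS = {
    "cache:":    "redis-cache",
    "sidekiq:":  "redis-sidekiq",
    "pipeline:": "redis-pipelines",  # the proposed next vertical split
}
DEFAULT = "redis-persistent"

def instance_for(key: str) -> str:
    """Pick the Redis instance that owns a key by prefix. Every developer
    adding a key class must get this table right, which is the error-prone
    part being criticized above."""
    for prefix, instance in PARTITIONS.items():
        if key.startswith(prefix):
            return instance
    return DEFAULT

print(instance_for("cache:show:project/1"))  # → redis-cache
print(instance_for("session:abc123"))        # → redis-persistent
```

Each new partition makes this table, and the knowledge needed to use it correctly, a little bigger, which is the "end of the road" argument.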
D: The second proposal, and this is really only for the case I discussed where we find there's stuff we can't move to Redis Cluster, is that we just stand up Redis Cluster and, where it's safe, start putting keys into the cluster: maybe session tokens and stuff like that, everything that we know is cluster-safe. I think there would probably be quite a bit of concern around migration, and obviously if you migrate and then have to migrate back, rollbacks become really dodgy.
D: There's a lot of complexity around that process that we would need to think about really carefully, and I don't really like it. Also, again, application developers don't need to know, and don't want to know, that this key goes to Redis Cluster while some other process uses Lua locking, so there's a whole bunch of cognitive overhead we'd rather avoid. So for the persistent Redis I think that Redis Cluster would probably be the best approach.
D: If I remember correctly, and this was one of the things I was concerned about since I don't think many people have operational experience of running it: I think it's a different binary, but I know it's got all these Ruby scripts that you need to run in order to do anything, and if I remember correctly those don't ship with normal Redis. So there's a whole bunch of commands that you invoke via these cluster-specific scripts.
D: One of the things this is about is that there are a lot of conversations going on around Redis, and the first step is setting out expectations and timeframes, how urgent this is, so that everyone is thinking in the same direction rather than pulling in different directions. So it's not necessarily something you're going to do tomorrow or whatever; it's just orientation, right?
C: No, it's good to have that division. I also think we've got to step back and figure out whether this Redis is really the long-term approach, because it sounds like we're just trying to build on top of this thing, and I'm worried, I'm wondering, whether there's something else we need to consider.
D: I've done a few talks recently where Redis comes up; it's a great thing to talk about, like monitoring and failures in your system. And people keep coming to me and saying "KeyDB, KeyDB is the future", and I've never used it. I think it'd be a very risky move to make, because if it fails it looks really bad. But a lot of people like KeyDB; it's multi-threaded. Anyway, what we were saying before you jumped on the call was maybe...
D: We know that this is something that is not going to hit many of our clients, and maybe we just pay for Redis, like Redis Labs. You know, they've got a partnership with GCP, and we just let them do it and see it as a really good fit. It's definitely worth discussing whether or not that's the approach we take, I mean.
D: Yeah, yeah. So Google, I think, is trying to distinguish themselves from Amazon, which is basically getting a bit of a reputation for what it has done with open source. Google have got this offering: it's actually Redis Labs, and you provision it through the GCP console, but what you get is a Redis Labs cloud-provisioned instance. So they're basically supporting Redis Labs, the whole company being open source.