From YouTube: 2023-03-02 Scalability Team Demo
B
Semester, you've got the only item, so do you want to go ahead?
Yeah, sure, give me a second. Let me pull up the last metrics that we found. Just a second.
B
This is pre. All right, anyway, that metric is missing because it's not rolled out on prod yet. So let me go a couple of weeks back: it was pointed out that there were mysterious errors on cluster rate limiting on pre, but there were no logs, so I tried to join in, and we tried to dig around and find out what's going on. I'll just cut ahead and go straight to the root cause. The root cause is that the errors are being hidden because of MOVED errors in redis-rb.
B
When you encounter a MOVED error in a cluster context, the client should handle it. The client would get a message: MOVED, followed by, I guess, a slot number, and after that the host name.
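The reply format described here can be sketched as follows; this is a hypothetical illustration of parsing a MOVED redirection (`MOVED <slot> <host>:<port>`), not the actual redis-rb implementation:

```ruby
# Hypothetical parser for a MOVED redirection reply from a Redis cluster
# node. The reply format is "MOVED <slot> <host>:<port>"; a redirect-aware
# client would re-issue the command against the returned endpoint.
def parse_moved(message)
  type, slot, endpoint = message.split(' ', 3)
  raise ArgumentError, 'not a MOVED reply' unless type == 'MOVED'

  # rpartition splits on the last ":" so host names containing ":" survive.
  host, _, port = endpoint.rpartition(':')
  { slot: Integer(slot), host: host, port: Integer(port) }
end
```

For example, `parse_moved('MOVED 3999 10.0.0.5:6379')` yields slot 3999 with endpoint 10.0.0.5 on port 6379.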
B
So this is the new code; the old code is over here. When you receive an error, it just counts as an exception, but then the exception gets swallowed, because what we instrumented with the interceptor is actually the client, not the cluster. This node actually refers to a Redis client rather than a Redis cluster object, so the error just gets swallowed, or rather passed up to the gems. There's no error, but because we patched the code...
B
We patched the client, the receiver, and we didn't understand why. So after a couple of attempts to log the actual error, we found the cause. But then the next question was: why are there so many MOVED attempts? Technically, MOVED only happens if you hit the wrong instance, so that should only happen if, one, the client is wrongly configured, or, two...
B
...if the slots are being moved around, which shouldn't happen, because we are not doing any resharding, and the cluster on its own should not move them around. And the problem turned out to be in redis-rb's cluster-slots class.
B
One is cluster nodes and one is cluster slots. Okay, maybe not here; I think it's the slot loader. But Redis 7 has this configuration that allows us to say what our preferred endpoint is, so I think that's the key word.
B
So, as a result, this map here contains key-value pairs of IP to the slot; oh no, IP to the role, rather than host name to the role. So when you look at this map, it's always a cache miss; it's like it doesn't do anything, it's always just new.
B
And I think over here it checks if it's a master: "master" is just "not slave", and "not slave" checks the slave flag in that hash. Because that's always false, everything you check is always a master. So because of that, the client is configured wrongly at the start, and every time it tries to make a request there's a one-in-three chance it will hit the wrong one. No, actually, there are nine nodes, so an eight-out-of-nine chance.
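The failure mode described here, a role map keyed by IP while lookups use the host name, can be sketched minimally (the names are hypothetical, not the redis-rb internals):

```ruby
# Hypothetical reconstruction of the bug: the replica map is keyed by IP,
# but with Redis 7 announcing host names the lookups use the host name,
# so they always miss.
def master?(node_key, replica_flags)
  # A miss returns nil, which is falsy, so any unknown key is treated as
  # "not a replica", i.e. as a master.
  !replica_flags[node_key]
end

replica_flags = { '10.0.0.2' => true, '10.0.0.3' => true }

master?('10.0.0.2', replica_flags)               # correctly a replica
master?('shard01-node02.example', replica_flags) # miss, so wrongly a master
```

Because every host-name lookup misses, every node classifies as a master, which is the misconfiguration at the start.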
B
It will hit the wrong one, because it can only hit the one right master node, and there are only three shards. So three shards, one master each: there's only a one-in-nine chance of getting it right. So, because of that...
B
We now have a consistently low rate of MOVED errors, so it's fixed. The fix can be seen on staging, not on pre, because I think when we roll out to pre there's always a troublesome step that requires intervention to run db:migrate, so we haven't rolled it out on pre yet, but on staging it's there. When we first found out what the problem was, we rolled out on staging and we got a spike in the errors, and then came the node-loader patch; let me see. Basically, we updated this chunk.
B
We patched this chunk of code to handle host names in our case. Let me find that patch... yeah, that patch works by just checking: if the host name is present, it returns the host name as the endpoint; if the host name is blank, it just returns the IP. So this is sort of backward compatible. I haven't tested that yet, but it should be backward compatible with older versions of Redis.
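The patch described, preferring the announced host name and falling back to the IP, might look roughly like this (the method name is hypothetical, not the actual redis-rb code):

```ruby
# Hypothetical sketch of the slot-loader patch: use the host name when the
# server announces one (Redis 7 with a host-name preferred endpoint), and
# fall back to the IP for older servers, keeping it backward compatible.
def preferred_endpoint(ip, hostname)
  (hostname.nil? || hostname.empty?) ? ip : hostname
end

preferred_endpoint('10.0.0.1', 'shard01-node01.example') # host name wins
preferred_endpoint('10.0.0.1', nil)                      # falls back to IP
```

Older Redis versions that only report an IP hit the fallback branch, which is why the change should be safe against them.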
B
But in our case, new Redis clusters will all be version 7; we'll go with version 7, so I don't think that path will be tested by us. But if it were up to me, then we'd have to test it more, I guess. Yeah. So, long story short, when it rolled out, the MOVED errors were gone, and we monitored it for a little while. Then the interesting thing is that Matt found another problem: we have more connections on one of the three master nodes, and that is because... yeah, here is what I found.
B
So all operations actually go through the rate-limiting singleton, which has a multi-store, and we initialize the cluster with rate limiting just so that we have it initialized at the start, and also so that we have the instrumentation class created; because of that, it's just static. And I also found that we don't configure a timeout on our Redis cluster. I've checked the cookbook in the Chef repo and compared it with what we have been doing for Sentinels. For our existing Sentinel setups we have a default configured, which is 60, yeah, the TCP timeout, and in one example we configure it to 60 or 4000. I think for cache we configure 2000, where the default is 60.
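As a sketch of what client-side timeouts could look like with redis-rb (the option names are redis-rb's standard keyword arguments; the host and values here are illustrative, not the team's actual settings):

```ruby
# Illustrative timeout options for a redis-rb client; without these the
# library defaults apply, which is the gap being discussed here.
redis_options = {
  host: 'rate-limiting.redis.internal', # hypothetical host
  connect_timeout: 2.0, # seconds to establish the TCP connection
  read_timeout: 1.0,    # seconds to wait for a reply
  write_timeout: 1.0    # seconds to spend sending a command
}
# Redis.new(**redis_options) would then apply these timeouts.
```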
B
We don't have that for the rate-limiting cluster, so we're going to have some discussion as to whether we want to configure it, or whether there are other ways to handle it. This is on the server side, of course; on the client side, I think redis-rb doesn't give us those options. So yeah, overall this is good news, I guess, because we've unblocked the Redis cluster efforts going forward.
B
Anything else? Otherwise, I think that's about it for the sharing. That's all I have to share.
C
It's still kind of daunting that nobody else has picked this up. You know, there's Ruby, and the version, and the host names and everything, but still, you know, what else are we going to find?
B
Yeah, we were quite surprised when we went into the pre instance and printed it out. I noticed there's a map of slot to host name. Our host names are configured as the shard number followed by the node number, so it's like zero-one, zero-one. So all the master nodes are something zero-one.
B
Yeah, but we patched it. The good news, I guess, is that the patch is simple. But the bad news is that if we're going to patch it, we have to patch redis-rb for the 4.8 version, and we also probably have to patch it on 5, because I checked out 5, and 5 still has the same logic, which is IP based.
C
If you have it in production, it really helps with the pull requests, right? You can say we're running this in production on gitlab.com, and that carries some weight, I think. Hopefully, yeah. But I think it should be done, because otherwise it becomes a maintenance nightmare, a risk that, you know, things break, and it's one more thing that we have to check on, rather than other people as well.
B
That's true. I'm not sure; I checked out the redis-rb GitHub page to see the versions it runs against. I think I saw seven, but maybe version 7 there is not configured to use host names as endpoints.
B
All right then, okay, I'll hand off to whoever wants it, if there was anything else. Thank you.