From YouTube: 2023-08-17 Scalability Team demo
Description
No description was provided for this meeting.
A
Right, so I'll start with the Sidekiq namespace deprecation. First of all, thanks for reviewing this, Eagle Rainbow. So that's yours for staging. Let me share my screen; you can also get something together.
A
Yes, so Jacob, Sean and I have been talking about this since last year, and I think this links to the point that if we want to scale out Redis Cluster, we've got to have a gem that supports redirection error handling, which means the v5 Redis gem, and Sidekiq 7 is the version that uses it. So, hence we have to... sorry, there's another hoop to jump through: the new version of Sidekiq that uses that Redis gem doesn't support namespaces.
A
Hence we need to deprecate namespaces before we upgrade the gem, and it's quite complicated, because I think the newer Sidekiq versions haven't just dropped the support cleanly, and you can't just... so we have this whole plan. Eagle pointed out some considerations. I propose to do a per-zone rollout instead of per-queue, because if we do it per-queue, we've got to go and edit the layer where the namespace is added.
A
Whereas right now we are just polling both the namespaced and the non-namespaced keys, and yeah, like people pointed out, the actual load is not that much; it's not really times two, it's just plus one, because it only affects gitlab.com, which is us, and we do one call per shard, polling using the blocking read. So now the blocking read covers both parts, one and two, whose order we randomize anyway, so each time you run the blocking read, the order of the queues switches up.
A
Over time that would clear off the queues which we are migrating away from. So originally the plan... sorry, yeah.
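A minimal sketch of the polling scheme described above, assuming BRPOP-style blocking reads; the key names and the `process` helper are illustrative, not GitLab's actual implementation:

```ruby
require "redis"

redis = Redis.new

# During the deprecation, each fetcher watches both variants of every
# queue, so the extra load is "+1 key per queue", not a second poll.
QUEUE_KEYS = %w[default mailers].flat_map do |q|
  ["resque:gitlab:queue:#{q}", "queue:#{q}"] # namespaced + plain
end

loop do
  # BRPOP checks its keys in the order given and pops from the first
  # non-empty list. Shuffling the order on every iteration means the
  # namespaced and non-namespaced keys take turns at the front, so the
  # queues being migrated away from drain out over time.
  _key, payload = redis.brpop(*QUEUE_KEYS.shuffle, timeout: 2)
  next if payload.nil?
  process(payload) # stand-in for actual Sidekiq job execution
end
```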
A
Yeah, and the change we made was pretty small, I would say, considering the thing we're doing. I'll link the change later in the document, but yeah. So the original plan was to run a sort of migration script.
A
Sorted sets. Sorted sets cover scheduled jobs: if you do a perform_in and specify the minutes, that job gets put into a sorted set, and there's a poller which checks it regularly. That is different from a normal queue: when you do a perform_async, the job just gets pushed into a queue immediately.
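For context, the distinction being described, sketched in Ruby; `SomeWorker` is a hypothetical Sidekiq worker:

```ruby
# perform_async pushes the job straight onto its queue (a Redis list),
# where a blocking read will pick it up immediately.
SomeWorker.perform_async(42)

# perform_in instead adds the job to the "schedule" sorted set, scored
# by its run-at timestamp. A poller wakes up regularly, pops members
# whose score has passed, and only then pushes them onto the real
# queue. Simplified, the poller loop is roughly:
#
#   due = redis.zrangebyscore("schedule", "-inf", Time.now.to_f)
#   due.each { |job| push_to_queue(job); redis.zrem("schedule", job) }
#
SomeWorker.perform_in(30 * 60, 42) # run in 30 minutes
```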
So the original plan was to do a migration for that, like just this. This is a really naive one, which was just a ZUNIONSTORE.
A
Then delete the old key. That's obviously not good, because it's going to block Redis for a long time, so the other way is to ZSCAN and then move members into the new, non-namespaced key.
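The two approaches, sketched with illustrative key names; a real script would also need to handle members added while the scan is running:

```ruby
require "redis"

redis = Redis.new
OLD_KEY = "resque:gitlab:schedule" # namespaced (illustrative)
NEW_KEY = "schedule"               # non-namespaced

# Naive: one ZUNIONSTORE copies everything in a single command, but
# Redis is single-threaded, so a large set blocks all other clients
# for the duration of the copy.
redis.zunionstore(NEW_KEY, [OLD_KEY, NEW_KEY])
redis.del(OLD_KEY)

# Incremental: ZSCAN walks the set in small batches, so each round
# trip holds Redis for only a handful of members at a time.
cursor = "0"
loop do
  cursor, members = redis.zscan(OLD_KEY, cursor, count: 100)
  members.each do |member, score|
    redis.zadd(NEW_KEY, score, member)
    redis.zrem(OLD_KEY, member)
  end
  break if cursor == "0"
end
```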
But then, after looking at some of the discussions and the concerns that you guys pointed out, I think it might be better to just patch things instead.
A
That is, the sorted-set and queue pollers and the enqueuer, and that patch is being tested a lot, hence waiting for reviews. So the good thing now is that if you pull from both queues and also both sorted sets, there's no need to run the migration script. When you do a rollout, you just let it drain. There are two feature flags, effectively. One is called, sorry...
A
One is called poll non-namespaced. If you enable that, all the workers will pull from both sets of namespaced and non-namespaced queues and sorted sets, and the cost of that, as discussed, is negligible. Then next we can toggle not the reads but rather the writes, right. As you toggle the writes bit by bit, the Sidekiq clients will send jobs to the new, non-namespaced queues and sets.
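Roughly how the two flags could gate things; the flag names, key names, and helpers here are approximations rather than the actual patch:

```ruby
# Stage 1, read side: with the polling flag on, every fetcher watches
# both variants of each queue (and, likewise, both sorted sets).
def keys_to_poll(queue)
  if Feature.enabled?(:sidekiq_poll_non_namespaced) # name approximate
    ["resque:gitlab:queue:#{queue}", "queue:#{queue}"]
  else
    ["resque:gitlab:queue:#{queue}"]
  end
end

# Stage 2, write side, toggled bit by bit: new jobs land on the
# non-namespaced keys while the consumers above keep draining both,
# so the namespaced keys empty out without any migration script.
def push_key(queue)
  if Feature.enabled?(:sidekiq_enqueue_non_namespaced) # name approximate
    "queue:#{queue}"
  else
    "resque:gitlab:queue:#{queue}"
  end
end
```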
A
Then the workers will just consume jobs from both sides, and as the volume of writes shifts to the non-namespaced keys, the supply into the namespaced queues dries up and what remains gets consumed away by our Sidekiq servers, which are consuming from both namespaces.
A
So that is the plan right now. I'm waiting for the patch to get reviewed, and I'll link it. As Bob mentioned, it's good to have some scripts on hand just in case there's a need for us to step in and fix things, in case there's a fire. But if things go well, we don't need any of the scripts.
B
What about jobs scheduled in the future, like scheduled months in the future?
B
I'm asking because I think we might have something scheduled with perform_in that far out; who knows. But we need to do something for self-managed anyway, so couldn't we do that in, like, a post-deployment migration? Yeah.
A
Yeah, we will. So the plan was to roll out the change with the namespace removed and have multiple migrations. The post-deployment migrations have up and down steps: the up step will just move jobs to the non-namespaced data structures, and the down step just reverses that, moving them back in case users need to do a rollback, so the jobs would still be in whatever queues and sets the workers are consuming from. Yeah, so we would want that, and I figured someone would bring it up.
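A hedged sketch of what such a reversible post-deployment migration could look like for self-managed; the class name and the `move_sorted_set` helper are hypothetical, not the actual MR:

```ruby
# All names here are illustrative.
class MoveSidekiqNamespacedData < Gitlab::Database::Migration[2.1]
  NAMESPACED     = "resque:gitlab:schedule"
  NON_NAMESPACED = "schedule"

  def up
    # Move scheduled jobs to where the upgraded, namespace-less
    # Sidekiq workers will look for them.
    move_sorted_set(from: NAMESPACED, to: NON_NAMESPACED)
  end

  def down
    # On rollback, move jobs back so the old workers, which only read
    # the namespaced keys, can still consume them.
    move_sorted_set(from: NON_NAMESPACED, to: NAMESPACED)
  end
end
```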
B
We need to make sure that there's nothing left when we're done with the feature flags and the environment variables, yeah.
A
So three weeks, I think, yeah.
D
Yeah, I wanted to ask about the per-queue versus per-zone kind of thing. I guess one of the worries, or one of the risks, that I see with a per-zone rollout is that the scope is still pretty large, and I'm talking about the enqueuing side, not the polling side.
D
If there is anything broken in the data, like if something bad happens there, then we've now impacted a third of all of Sidekiq, versus a more targeted subset. So I don't know how hard it would be to make just that part kind of...
B
Sylvester mentioned a staged rollout, so for the write side it's just the web service clusters. We can start with canary there and then proceed ten percent at a time.
A
I guess with zones, each deployment is the maximum increment we can go in before... like, that's a huge step up. I think that's a lot of the concern, right: there's a huge step up from doing each zone, which is about ten-ish percent each, to the last stage, because once you are done with all three zones, the only thing left to update is the Sidekiq cluster, which itself, depending on the time of day, could be up to like 60 or 70 percent of the load, yeah.
A
As a last step we do deploy them in shards, yeah. We could go shard by shard, but the moment you deploy on the Sidekiq fleet like this, the moment the new Sidekiq server starts running, it also triggers cron, and then you might have two parallel sets of crons, and that affects all cron jobs; the deduplication is done outside of all this. So technically, if all cron jobs were idempotent...
A
...we'd be fine, and we could run them concurrently, like two sets of crons. But not all crons are idempotent, which is why we have to... I think the mitigation was that the entire Sidekiq cluster rollout, the deployment itself, takes about four minutes, which is a pretty short window, so the chance of a double job being enqueued in that three-to-four-minute window is pretty slim. Yeah, that was the mitigation.
D
I mean, if we patch the Sidekiq enqueue method, then we should have full control over where we push, right?
A
It's a random timer on every single Sidekiq server that polls the crons, and then one of them will schedule a cron job. If two of them happen to schedule the same cron job within a short period of time... I guess the math should check out. I think Jacob did a deep dive quite a while ago, when we were fixing crons; it's worth revisiting that, I think.
A
No problem; this is a fairly big... this is a very big thing, so yeah, it's all background jobs, so yeah, this is the right thing to be worried about. So I think that sums up this whole chapter. The next part is about the Redis cluster, because we already have four of these clusters running, so we've got to think about when we want to start scaling, because we have a few hoops to jump through before we get to scaling, one of which I mentioned.
A
One command within a pipeline might hit a node from which its key slot was just moved away, and then the server tells you to get redirected. The Redis gem 4.8 doesn't handle that redirection: you have to retry the entire pipeline. Whereas Redis gem 5 handles it; it finds the individual redirected commands, retries them for you, and gives you back the entire result set nicely. It's all wrapped inside the Redis Cluster client. So yeah, which is why, sorry, doing the namespace deprecation lets us...
A
It lets us upgrade the Redis gem to v5, which Sidekiq 7 needs.
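A small sketch of the behavior being described, using the redis-clustering gem from the redis-rb v5 family; the node URL is a placeholder:

```ruby
require "redis-clustering"

redis = Redis::Cluster.new(nodes: ["redis://127.0.0.1:7000"])

# If a slot is resharded mid-pipeline, one command can come back with
# a MOVED redirection. With the 4.8 gem the caller had to retry the
# whole pipeline; the v5 cluster client retries just the redirected
# commands and returns the complete result set in order.
results = redis.pipelined do |pipeline|
  # Hash tags keep both keys in the same slot, since commands in one
  # pipeline need care with key distribution on a cluster.
  pipeline.set("{user:1}:name", "alice")
  pipeline.get("{user:1}:name")
end
# results => ["OK", "alice"]
```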
A
So I think that we probably can't use the Tamland standard time forecast as-is, or a lot of us might adjust it to be a little more conservative, because of the extra time to get all these things in. Also, manually scaling up Redis Clusters is something I've never done before, so I need some time to plan for the failure conditions, get our tooling ready, and do some dry runs and practice on staging before we actually have to do it for real saturation prevention, yeah.
A
Okay, I've tried it locally, because... well, it was very difficult. I tried re-sharding locally while sending a stream of commands, and v5 handles it nicely, so yeah. But then again, that's local; actual deployments are always the better test. Yeah, "worked on my machine," yes. Yeah, Redis 5.
A
Also, Redis 5, that is... oh, the Redis v5 gem, the gem, not the Redis server. I've been looking out for landmines like that and avoiding them; I think Eagle is aware. We've been backporting patches that we found to v5, plus v4.8. I think the maintainer is not backporting any, so v5 should be the one to be on; it should be pretty smooth.
A
Yeah, but yeah, I think that's beyond the original scope, and we still have a lot of headroom. But I'll just bring this up to get some numbers, because we were thinking: where should we set it, 50, 40, 60? It's better to just set a number, and then at least we have a number somewhere to look at.
C
Yeah, we also said that there's an element of probably having it lower for the first time we do this, right, because there's more risk involved. But yeah, I guess if we could find that out and then create an MR to make a suggestion and discuss it, we can land on what we're all happy with.
B
We'd get a cue when we predict 85 percent three months in the future. Yes.