From YouTube: Scalability Team Demo 2022-10-27
A
All right, all right. Let me share my screen. So this is part of the POC for the Redis Cluster VMs. It's not exactly the same as what we would do in a real VM setting, because our VMs have Chef scripts and they pull, I guess, the roles from a Chef server. So it's a bit more simplified. I've actually based it off a Redis chart sandbox that we had done before.
A
It's just a bunch of Terraform scripts, some init scripts, and some scripts to set up the cluster. There's rather a lot of hacky scripts to just check the topology of the cluster. It checks for these two conditions, but I think we might not go with the rotation where we shuffle the masters around.
A
For context for Marco: let's say we have three shards. In a Redis Cluster there are sixteen thousand-ish key slots, so in a three-shard setup each of the three master nodes would cover about five thousand-ish key slots. And these master nodes cannot all be in the same zone.
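For reference, a Redis Cluster always has 16384 hash slots; a minimal sketch of checking how they are split across the three masters (the address is a placeholder, and the output is abbreviated):

    # 16384 hash slots / 3 shards ≈ 5461 slots per master
    $ redis-cli -h 10.0.0.11 cluster slots
    1) 1) (integer) 0        # first shard's range starts at slot 0
       2) (integer) 5460     # and ends at slot 5460
       ...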
A
One option is to arrange your masters so that this never happens, but that's not really foolproof. So I think, as Igor and Matt have discussed, the way around this would be to force-promote a replica to become the new master. For that one I'll probably write the script first, so you can just run it. I think Matt probably has a more detailed proposal on how we can do it, because I don't think we can keep running scripts.
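A minimal sketch of the promotion being described; the replica address is a placeholder, and the command is run against the replica that should take over its shard:

    # ask this replica to take over its shard's master role
    $ redis-cli -h 10.0.0.12 -p 6379 cluster failover
    # FORCE / TAKEOVER variants exist for when the old master is unreachable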
A
It probably has to be packaged in a better way, yeah. So, a couple of them: there's a run-node-test script for anyone to pick up and just run, but you do need your own sandbox, so I'll go into how to set that up in a bit. This script is basically copied almost wholesale from a script over here, which is, I think, the one from the benchmark, except I placed a couple of resource requests, because I ran into some context-cancellation issues when I ran it on my machine.
A
Okay, so there's a .env file. It works the same way, so I think Igor will be familiar with this from before, Marco. You update the project to your own sandbox project, and I think everything else you can keep the same. You can probably switch create-default-cluster to true. Oh yeah, I might not have tidied up this part, but the main flag to change is the project.
A
Once your project is set to your sandbox, or whichever project you're authorized to terraform apply on, this should just work. You just source the .env, go to the terraform folder, do an init, then apply. So one difference between this repo and that project is that I did not put in a remote Terraform state, because it's probably going to be your own sandbox.
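A minimal sketch of that workflow, assuming the file and folder names mentioned above:

    $ source .env          # point the project variable at your own sandbox
    $ cd terraform
    $ terraform init       # state stays local; no remote backend is configured
    $ terraform apply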
A
If it's all in your own sandbox, you might as well keep the state locally, yeah. I'm not sure if it's preferred to have the state stored in a project, but I got it to work just locally first, and if someone were to run it they wouldn't have state clashes, because it would be their own sandbox and mine would be mine, my own project, something like that, yeah.
A
I'll check it out. But yeah, so this repo should get your things running: it spins up a GKE cluster with, I think, six nodes, and then you have the load generators. So you could modify the script to increase the number of nodes for the load test; you could just change these.
A
Yes, I can add a config value later on, but if you want to increase the number of nodes you're applying, you could just increase these, and then I'll probably add configs so that you can change how long you're going to run it for. The purpose of this load test was not really to test whether the cluster can handle the load; I think Craig has already run a benchmark before, and I can link it in the description.
A
This was more to just get things started and to see whether there's anything observable. I noticed these, but I'm not sure if I would still observe them under a production setting. I think on the receiver end you will see more RDB saves in the background, which makes sense because it's receiving more key slots, so it might trigger the background save.
A
I think on the sender you see a lot more MIGRATE commands compared to GET, which makes sense. So this is all performed under a minimal, non-realistic load; we have to do a proper load test with more data, that's for sure, because these runs used a lot less data. Yeah, anyway.
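One way to see the MIGRATE-versus-GET split being described is Redis's per-command counters; a minimal sketch, with the host as a placeholder:

    # compare MIGRATE and GET call counts on the sending node during a reshard
    $ redis-cli -h 10.0.0.11 info commandstats | grep -E 'cmdstat_(migrate|get):'
    cmdstat_migrate:calls=...,usec=...,usec_per_call=...
    cmdstat_get:calls=...,usec=...,usec_per_call=...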
A
I think that's it for this POC sandbox. I'll probably brush it up a bit, but I think it's enough for anyone to just pick it up and start running. So I think that's about it for this POC.
A
It's just enough for me to try out some of the failure modes; I can just crash one VM and see how it handles it. I guess right now that's a little bit manual. It might be because my Terraform knowledge is not up to scratch, but I just defined shard one and shard two explicitly instead of using counts, because within each shard there is a count of three, and because of the way things work:
A
If we set the zones using the element function, it sort of wraps around. So it goes, I think, c, d, b, and then just keeps rotating, and the fourth one would be c again. So I did it this way because it's cleaner, so that, say, shard-two node-zero would definitely be in zone b, and then shard-two node...
A
Node one would be in zone d, yeah, and then the next one in zone b, so that when I read it locally I know which is which. But if we were to do it properly, we could modify it so that we can configure which zone we want to bring it up in. It's mainly referenced from, I think, the generic stack, which is the Terraform module that we use to create nodes for Redis in production.
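The wrap-around behaviour of Terraform's element function described above can be checked in terraform console; the zone order here just follows the one mentioned in the demo:

    $ terraform console
    > element(["c", "d", "b"], 0)
    "c"
    > element(["c", "d", "b"], 3)    # the index wraps, so a fourth node lands back in zone c
    "c"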
B
But I guess we could... I mean, honestly, I think this is perfectly fine. If we wanted to make it a little less repetitive, we could do a Terraform module within this repo and then import it from there, but...
B
Two questions. First one: what is the Redis version that you're running this with? Is it the latest?
A
Okay, Omnibus, yeah. I'm installing Omnibus, so the script will just pull Omnibus and then run it. I think it also pulls a certain... well, it checks out this branch, because I've modified that branch to accept cluster configurations. So it will just check that out. It's a little hacky, but yeah: it will do a copy of the cookbooks and then do a reconfigure, yeah.
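A rough sketch of that hacky step, assuming the usual Omnibus cookbook location; the branch and repository names are not given in the recording, so they stay as placeholders:

    # check out the branch with the cluster-configuration change,
    # copy its cookbooks over the installed ones, then reconfigure
    $ git clone -b <cluster-config-branch> <omnibus-gitlab-repo> omnibus-gitlab
    $ sudo cp -r omnibus-gitlab/files/gitlab-cookbooks/. /opt/gitlab/embedded/cookbooks/
    $ sudo gitlab-ctl reconfigure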
B
That's cool. When I was recently reviewing the Redis 7 changelogs, I saw a lot of Redis Cluster related stuff in there, and I initially wanted to go with Omnibus, but I'm starting to lean more towards using Redis 7. Omnibus doesn't have Redis 7, and I don't think we want to couple ourselves to the Omnibus version; I think we want to maybe move a little faster.
B
On this, so in the short term it may be worthwhile getting a Redis 7 running in this.
A
Yeah, sure, but I think we might have to discuss the distribution, because, okay, do the web services still run from Omnibus, or is it from the cloud-native container image? But let's just decide on the version; let me know.
B
I think there's precedent for not using Omnibus on our databases.
B
Yeah, and the second question: I'm kind of curious to see this in action. Do you have it running, and have...
A
Oh no, it's not up right now; it would take 30-plus minutes. I could run the script, though, one script to set it all up. Yeah, that's good. We just keep looping and pinging it; normally I don't do that, but before it tries to join the master nodes together, it makes sure everything is alive. So technically I could just run the setup plus everything else after it.
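A minimal sketch of that keep-looping-and-pinging check, assuming a NODES list of the VM addresses:

    # wait until every node answers PING before trying to join them into a cluster
    for h in "${NODES[@]}"; do
      until [ "$(redis-cli -h "$h" ping 2>/dev/null)" = "PONG" ]; do
        sleep 1
      done
    done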
A
Yeah, this is what I mean: I think GKE takes a while to just start up. Yeah, container clusters.
A
Yeah, okay, a sidetrack for Kubernetes: if we were to run Redis Cluster in Kubernetes, how do we label... I was thinking, how can we label the zone information? Oops.
B
It doesn't matter too much, as long as they're distinct. And if we did want to enforce some policy, that's something that whoever is enforcing the policy needs to discover, and they can do that by inspecting the pod state. Right, like you do a kubectl get, and it should tell you which zone that pod is running in. If...
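As a sketch of that lookup (the pod name is a placeholder): kubectl get pods shows the node a pod runs on, and the zone comes from the node's well-known topology label:

    $ kubectl get pod redis-cluster-0 -o wide        # NODE column shows where it runs
    $ kubectl get node <that-node> \
        -o jsonpath='{.metadata.labels.topology\.kubernetes\.io/zone}'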
A
Yeah, so I'm trying to do it locally, and then in GKE I'll try to find a way, based on IP, to get that information. What is this for? I wanted to spread the masters out. Well, yes, if we were to do the forced failover, the failover takeover, yes, then I guess we don't really need that, then.
A
It doesn't matter, yeah, it doesn't matter, because we have a remedy for it. And I guess if we use Kubernetes, like the Bitnami charts, then it comes with Redis 7; it's just that I think we'd need to go through the whole process of getting them to compile with the three pointers.
B
If they haven't fixed that image yet, because they did do it for the Redis image, so it could be that they already modified it as well.
A
Oh, it's already done. That's really dumb; I'll just comment it all out.
A
Terraform output... destroying. Let me try the setup.
A
Yeah, it starts with slots, so when I joined it, it would already have slots, and then this happens. I think this happened when I started: when I install Omnibus, it starts up everything else, so it's not a clean, empty Redis. So I've got to do a FLUSHDB.
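A minimal sketch of cleaning a node before it can join; a node has to be empty and own no slots, so the reset covers leftover cluster state too (the host is a placeholder):

    $ redis-cli -h 10.0.0.14 flushall        # drop any keys the Omnibus install created
    $ redis-cli -h 10.0.0.14 cluster reset   # drop leftover slot assignments and peers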
A
Yeah, I'll just do a CLUSTER NODES so we can see. Oh, it's really messy. Okay, let me try... I don't know, there's a lot. It's neater when they don't have slot fragments; with slot fragments, not everything fits on one clean line.
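For reference, CLUSTER NODES prints one line per node, and master lines end with their slot ranges; a minimal sketch with placeholder node IDs and addresses:

    $ redis-cli -h 10.0.0.11 cluster nodes
    <node-id-1> 10.0.0.11:6379@16379 myself,master - 0 0 1 connected 0-5460
    <node-id-2> 10.0.0.21:6379@16379 slave <node-id-1> 0 0 1 connected
    ...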
A
So, the first step: it creates a three-master-node cluster, and then it starts adding replicas to the different masters. It would meet and then replicate, meet and replicate. Then once it's done, it would have the topology that we want, except the masters would all be in the same zone. Then you're going to rotate them. But if you don't really care about that and you can just do a fix, then it's as good as set up, because each shard would be spread across different zones anyway.
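A minimal sketch of that sequence as redis-cli commands; the addresses and the node ID are placeholders for the sandbox VMs:

    # step 1: create the three-master cluster
    $ redis-cli --cluster create 10.0.0.11:6379 10.0.0.12:6379 10.0.0.13:6379 --cluster-yes
    # step 2: for each replica, meet the cluster, then attach it to its master
    $ redis-cli -h 10.0.0.21 cluster meet 10.0.0.11 6379
    $ redis-cli -h 10.0.0.21 cluster replicate <master-node-id>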
A
You can see two per master node, so you see 474..., 474..., 1e0..., 1e0.... So it's a one-master, two-replica shard, and you have this behaviour across all of them. So if GKE was up nicely, we could just run a load test on it, and on the console page you would see the traffic. That's the idea.
A
So I think the two masters that are down... is it this one, 141? Yes, this one is down, and 241 is down, this one. So these two are down, and the cluster state is still fail. Well, I guess if we do one... let me just fail over for one.
A
Okay, yeah, so the majority would be... you know, okay, yeah, one node. So you will see five masters; two masters failed, so of the five master entries, three are running, okay. So this one is the one that took over: 197, which is the second node in shard zero. It took over the master role, and then I guess the other one that took over would be 26...
A
Maybe... no, 62 is... oh, the third one took over. So it's random, it's not... yeah. So this one took over, so yeah. So now you have a healthy cluster. So this is how someone on call would do it, but yes, we can just automate this process: when the cluster is down, find a healthy replica, take over, then, I guess, wait, and repeat until it's healthy.
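A minimal sketch of automating the loop just described; the addresses are placeholders, and TAKEOVER is the variant that does not wait for the failed master's agreement:

    # keep promoting healthy replicas until the cluster reports itself healthy again
    while redis-cli -h 10.0.0.11 cluster info | grep -q 'cluster_state:fail'; do
      redis-cli -h <healthy-replica-of-a-failed-shard> cluster failover takeover
      sleep 5
    done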
A
I'm not sure if they do or not; it's how suspend works. Suspend just stops it, right? So the process is actually still there in the background, but it's paused. Is that how it works on the VM? I mean, I didn't reconfigure anything; if I unsuspend it, the process just sort of resumes and starts running again.
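A minimal sketch of the suspend-and-resume being described, using signals on the VM; the process-matching pattern is an assumption:

    # pause the redis-server process (it stays in memory, just stops being scheduled)
    $ sudo pkill -STOP -f redis-server
    # ... later, let it resume as if nothing happened
    $ sudo pkill -CONT -f redis-server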
A
So 141, okay, so 141, which is the first node of shard zero, the one that we suspended: it just joined back and became a slave node. And then 241 is here; it just joins back and becomes a slave. Now, I'm not sure a crash scenario would be this clean, so we've got to actually crash it next time. But I guess this shows that the cluster is fairly robust. Then again, you know, the discussion is that it has a lot of corner cases you need to handle.
C
So basically you wanted to do, like, a three-shard Redis Cluster, right? So the sixteen thousand slots are divided into three shards, and then you want to replicate it as well across three different zones, so that's the nine VMs. That's it, oh okay, yeah. Then, like, three VMs per zone, and then the master's slaves are just the replicas, right? And, yes, okay, yeah. The configuration again, sorry: which one becomes master in each zone?
A
Majority, right. Basically, of the three masters, as long as two masters are around, it will just find an appropriate replica and then promote it to master. So just now it sort of stalled, right, because only one master was around, so no majority. So if you have a three-node setup and two go down, then you're left with one, so yeah, this happens. Yeah.
A
I think realistically we should have more than three shards; I'm not sure, but I think it makes re-sharding a lot cleaner when you migrate slots, because if you originally start off with more nodes, more shards, each shard does less work to move into the new one. Like, this is a 33% split, right? To move to 25%, you move about eight percent per node, whereas if you start with a 25% split and add a fifth one, you move five percent per node. But I guess we've got to factor in the provisioning cost and the VM sizing. So yeah, I'm not sure if, for cache, we are running into a CPU issue or a memory issue first. Like, I know for Redis Sidekiq we kind of over-provision on the memory side, but I guess because we need the CPU, the CPU, yeah.
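A worked version of the re-sharding arithmetic above, expressed as fractions of the 16384 slots:

    # 3 -> 4 shards: each old master hands off 1/3 - 1/4 of all slots (~8.3%)
    # 4 -> 5 shards: each old master hands off 1/4 - 1/5 of all slots (5%)
    $ python3 -c "print(round((1/3 - 1/4) * 100, 1), round((1/4 - 1/5) * 100, 1))"
    8.3 5.0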
B
I think we're running into both, and also the memory growth is driving more CPU utilization because of eviction.
B
But yes, we've certainly put the brakes on adding more stuff into the cache, because we just can't handle it right now. So I think we will likely get more utilization out of it once we have the capacity to do so, for both CPU and memory.
B
Thanks for the demo. I'm going to stop the recording now.