From YouTube: Scalability Team Demo 2022-09-15
A: All right, sorry, yeah, so I'll add some outside context. The context is that I mentioned something about the Redis setup, and he said there was no resolution, so we thought they had an issue, but no, it's not that. The question is more like: how are we going to set up Redis for Sidekiq in the context of the site? Would that be one setup? How are we going to deploy based on the original proposal?
A: I believe the one Andrew wrote, that is for Epic 423, it's linked in Epic 43. The setup is that each zone would be independent of the others. So the Rails service in a zone, let's say us-east1-b, would send the Sidekiq jobs into the Redis that is within that zone, and then there would be a Sidekiq worker cluster deployed within that zone, and it would all be nicely partitioned within that zone, and it would still scale out. So let me share my screen and I can show it.
A: It will look something like this; at least, this was the diagram that was in the proposal. And I think Igor is proposing something like this, which is a logical partitioning, so that we can have another P3, P4, P5, P6 without being constrained to zones. I believe it's difficult to do this: if we're going to spin up one more, we will need to do it in another region like Central or West.
We
will
need
to
do
it
in
another
region
like
Central
or
west.
Correct
me.
If
I
don't
know,
I
we're
not
able
to
do
another.
A
One
like
us
is
one
another
us
is
1D
setup.
Would
that
would
that
be
correct?
Like
let's
say
we
have
two
us
is
1B
setups
and
then
so
we
have
a
duplicate
of
this.
B
C
B
A
A: The question is whether we are able to do that kind of setup. So let's say this is our setup, where my cluster is, and we want to do another one; we want to expand it to six clusters instead of three zones. Are we able to do two us-east1-b, two us-east1-c and two us-east1-d? Would that make sense? They would all be in the east region, without going to Central or West, because I think Igor mentioned that cross-region latency is something we haven't explored.
C: So yeah, this was a pretty vague comment, so sorry for not being more helpful. But I do remember having a discussion with Craig when we were looking at this before, which was a while ago, that the name "Sidekiq zonal clusters" was potentially misleading, because there's nothing about it that requires zones or Kubernetes clusters.
C: You know, you could logically shard Sidekiq on VMs, with Redis on VMs and with Rails on VMs, right? You could say: here's redis-sidekiq-1, here's redis-sidekiq-2, here's redis-sidekiq-3, and we're going to pick some way of telling nodes which redis-sidekiq...
C: ...to use as your primary Sidekiq instance. There's no strict coupling to Kubernetes or GCP in that. So I guess that doesn't really help, but just to say: don't feel constrained by the name of the project.
A: That suggestion actually is pretty neat, because right now we have three zones of web services, and if we were to do a zonal setup, it means you have to scale out the web services accordingly, compared to this setup, where we can just spin up another one. Let's say we have three zones of web services, and then we have three equivalent shards of Sidekiq Redises, like P1, P2 and P3.
B: To me, this diagram pretty much makes explicit what Sean just said, right? Which is that we could make it logical. So I guess my question is: okay, maybe we should explicitly include that in how we frame the project and how we design the actual architecture.
C: I'm not really sure what the disadvantages of doing it the way that proposal does are. Is it just that it's more complicated, in that you have to do some extra work to decide which client is using which redis-sidekiq, because it's not just automatically the one in your zone? Or is it something else? What's the downside of, yeah, not caring about zones?
B: For the Sidekiq consumers per partition, we will need to have separate ones, pretty much. So there is a dedicated Kubernetes deployment for P1, one for P2, and those could be zones, but let's go with this for now. So those are separate.
C: So I think Wayne was looking at this a bit before. Can I just share my screen real quick, Sylvester?
C
There
is
I'm
not
recommending
we
use
this
directly.
There
is
this
sharding
support
sidekick,
which
is
basically
like
you
want
to
use
this
lion.
I'll
tell
you
want
to
use
this
red
is
when
you
push
this
tell
it.
You
want
to
use
this
for
this
when
you
push,
this
does
not
necessarily
exactly
how
we
want
to
do
it,
but
we
could
consider
something
like
that
for
old
clients,
so
all
sidekick
so
just
to
rewind,
just
in
case
anybody's,
not
clear.
C
So
all
web
API
and
git
notes
that
serve
HTTP
traffic
psychic
clients.
They
can
push
psychic
jobs
or
psychic
nodes
containers
pods.
C
Whatever
are
both
clients
and
servers,
so
they
obviously
DQ
and
process
jobs,
but
they
can
also
push
new
jobs
like
a
job,
can
push
other
sidekick
jobs
so
from
a
client
perspective
like
Igor
says:
there's
no
need
to
even
stick
to
one
like
you
can
just
say
you
know
I'm
just
gonna
round
robin
these
I'm,
just
gonna
pick
randomly
like
whatever
approach
you
take,
you
could
say:
I'm
just
gonna
stick
to
one,
but
it
doesn't
really
matter.
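As a sketch of what those client-side selection strategies could look like (stdlib-only Ruby; the shard names are illustrative, not real configuration):

```ruby
# Sketch only: two strategies a client could use to pick a shard at
# push time, as discussed above. Clients need no shard affinity, so
# either strategy is acceptable.
SHARDS = %w[sidekiq-1 sidekiq-2 sidekiq-3].freeze

# Round-robin: cycle through the shards in order.
class RoundRobin
  def initialize(shards)
    @shards = shards
    @i = 0
  end

  def next_shard
    shard = @shards[@i % @shards.size]
    @i += 1
    shard
  end
end

# Random: pick any shard uniformly.
def random_shard(shards)
  shards.sample
end

rr = RoundRobin.new(SHARDS)
picks = 6.times.map { rr.next_shard }
```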
C
It's
only
from
the
dequeuing
perspective
that
a
sidekick
container
needs
to
know
like
this
is
my
this
is
where
I'm
getting
jobs
from
and
another
thing
I
think
about
that
is,
that
might
mean
Sylvester.
Do
you
remember
which
things
need
to
see
all
shards
shouts
all.
A
C
A
A: Mainly the API, like metrics. I believe there's one API that deletes jobs. And then I would think the migration helpers would need to see all of them, like when you do a queue-to-queue migration. So those are the main ones.
C
I'm
just
wondering
if
sidekick
itself
needs
to
know
about
all
shots
or
if
it's
only
potentially
rails
HTTP
traffic.
That
needs
to
know
about
all
shards.
But
that's
not
it's
not
super
important.
If
we
can
configure
it
for
one,
we
can
configure
it
for
both.
It's
just
a
I
do
I
think
I'm,
throwing
out
there.
A
But
there
is
a
point
about
the
client
and
server
right
with
four
fours,
for
the
servers
will
pick
up
a
job
that
needs
to
and
queue
another
job
like
for.
One
is
the
jira
issue
importer
like
so
for
those
servers
would
they
should
like
they,
or
should
they
and
I'm
not
sure
what
what
it
would
be
anyway,
like
what
I
need
to
enqueue
to
the
same
shot,
or
are
they
free
to
do
a
load
balance.
C
B
A
A: But for one of them, okay: the post-deployment background migrations touch the Sidekiq APIs. They touch the delete set, the retry set. So for those, I'm not sure; it might be good to have an option that lets you stick to one shard, so when a migration does that cleanup it stays within that shard.
C
Yes,
yeah
anything
anything
that
needs
to
inspect
the
state
of
sidekick,
so
like
retry,
set
like
well
just
looking
at
a
queue
or
looking
at
one
of
the
Retro
set
scheduled
sets
looking
at
the
process,
this
thingy,
that's
there
anything
that
does.
That
needs
to
be
aware
of
all
shards,
because,
obviously,
if
you
look
at
the
retrace
set
on
one,
you
might
not
capture
everything.
C
Say
there
yeah
so
ego's
point
about
ordering
guarantees.
The
only
thing
we
do
have
is
the
job
teach
duplication
which
I
think
is
already
on
the
shared
State
redis,
not
on
the
sidekick
credits
right,
we
moved
it.
We
were
planning
for
this
some
time
ago.
B
C
A
C
C
C: Sorry, I think what we could say is that all client stuff needs to be able to either load-balance or fan out. You know, if it's something that's inspecting the retry set, it needs to check all of the shards and then compose those results; if it's something that's pushing a job, it just needs to put it into one. So maybe that's the distinction: read versus write operations. But the servers, the job processors, only need to know about one.
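That read/write distinction can be sketched in stdlib-only Ruby. Each shard's retry set is modeled as a plain array; the class and names are illustrative, not the real data structures.

```ruby
# Sketch only: writes touch exactly one shard, while reads that
# inspect state must fan out over every shard and compose results.
class ShardSet
  def initialize(names)
    @retry_sets = names.to_h { |n| [n, []] }
  end

  # Write path: adding to a retry set touches one shard only.
  def add_retry(shard, job)
    @retry_sets.fetch(shard) << job
  end

  # Read path: inspecting the retry set must cover all shards,
  # otherwise jobs sitting on the other shards are silently missed.
  def all_retries
    @retry_sets.values.flatten
  end
end

shards = ShardSet.new(%w[sidekiq-1 sidekiq-2])
shards.add_retry("sidekiq-1", "JobA")
shards.add_retry("sidekiq-2", "JobB")
```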
C: Sorry, I'm feeling guilty here, because like I said there's definitely some stuff that Craig and I discussed that I can't find right now. I don't know if it's just in comments and we didn't update the descriptions, or if it was on Slack. I think it was in comments, I just can't find them. But yeah, sorry about that.
C: It's clear that all clients should know about all shards anyway, and so for the rollout that's a bit more straightforward, I guess, because you just update one place that says "these are all the redis-sidekiqs," rather than saying "these Sidekiq pods talk to this redis-sidekiq, those Sidekiq pods read from that redis-sidekiq," something like that. But maybe that's not right.
A
Think
that
that's
it
for
I,
guess
that's
it
for
this!
This
question:
did
this
topic
at
least
I'll
just
follow
up
with
a
POC
to
see
if
it's
visible
and
if
it's
feasible
then
I
mean
ping
I
don't
draw
again,
then
you
can.
You
can
make
a
call
on
this
or
it
doesn't
or
whether
we
need
any
more
validation.
C
Yeah
I
think
one
of
the
issues
here
is
that
we
have
picked
up
and
dropped
this
project
a
couple
of
times
in
the
past.
So
that's
probably
where
some
of
the
not
that
I'm
taking
the
blame
off
myself,
because
I
think
I,
probably
should
have
done
a
better
ago
or
day
two
stuff,
but
that
probably
is
part
of
it
so
yeah.
If
we
can
keep
this
moving,
that
would
be
good.
A
D
D: I think that's definitely adding to the challenge of getting started here and finding out where all the pieces are, especially since, as Sean said, some of it was in conversations, and there are a lot of issues that make up the project. So yeah, I'm glad that this is getting started again.
A: Yeah, I had a random thought. There's a lot of development in Sidekiq 7, and Sidekiq 7 removes a lot of the global variables, which matters for us: when we are doing Sidekiq Redis operations, any Sidekiq APIs use the global Sidekiq Redis object, and that kind of makes it difficult for us to do things like go across several hosts, because to Sidekiq there's only one host.
A
So
like
your
experience,
how
easy
is
it
to
upgrade
to
a
major
psychic
version,
because
if
it's
easy
and
there's
minimum
breaking
then
doing
that
would
make
lives
quite
easy
in
terms
of
getting
application
site
ready
for
is
the.
D
Next
version
like
General
availability
or
is
it
still
a
beta.
C
So
the
next
major
version
is
psychic
7
and
that's
not
released,
but
we
are
several
of
my
inner
versions
behind
sidekick
six,
because
the
there
are
some
changes
there
that
are.
C
Not
massive,
but
beyond
the
scope
of
like
I'm
just
gonna
go
create
an
MR
to
do
this.
In
my
in
a
gap
in
my
day
type
thing,
I.
C
Psychic
7
I
think
will
be
a
pretty
major
upgrade
I
would
imagine
I
think
actually
Heinrich
Lee.
You
has
been
doing
most
of
the
psychic
upgrades
in
the
past,
so
maybe
check
with
him
how
it's
been
in
the
past.
C
Oh
sorry,
I
just
had
a
quick
look
at
the
sidekick
7mr
and
we
can't
upgrade
to
that
because
it
requires
really
6.2
and
higher
and
we
still
require
redis.
B
D
C
B
B
B: So I've added another item to the agenda, if it's okay to move to the next one. I posted this a week or two ago: Fluentd, our log shipper, which we use to collect log files from both VMs and Kubernetes nodes and send them into Google Cloud Pub/Sub, from which they get picked up by pubsubbeat and shipped into Elasticsearch.
B
This
component
is
kind
of
a
scalability
bottleneck,
slash
Hazard
and
where
we're
sort
of
getting
close
to
the
Tipping,
Point,
I,
think
and
I'm,
not
sure
it's
something
we
have
a
good
handle
on
at
the
moment.
B
So
fluent
is
written
in
Ruby.
This
implicitly
gives
it
a
single
threaded,
bottleneck
and.
B
B
B: Usually the first thing to go is actually the logging, and not HAProxy itself. HAProxy is pretty close behind, but the logging is kind of the first one, and we're starting to see this in Kubernetes as well on some of the nodes. I'm not sure if I posted any graphs there; no, I don't think I did.
B
Maybe
we
can
take
a
look
at
that.
So
that'll
be
a
logging
overview,
I
suppose.
B
Okay,
weird
there's
I
guess
it's
probably
gonna
be
saturation
details
and
then
the
kubernetes
CPU,
which
is
a
very
busy
panel.
This
one
okay,
looks
like
some
metrics
were
broken.
B
This
is
a
very
busy
panel
because
it
includes
all
of
the
pub
sub
beats,
of
which
there
are
many
many
many,
and
it
also
includes
fluentes.
So
if.
B
B
B: Yeah, I don't know. Okay, let's see; maybe I can pick a smaller time range, just so that we don't have only blank lines.
B: So is it this one? I guess it is. Is that going to be enough? We need to add it down here as well.
B: Okay, so yeah, in this case it's going up to around 70% if we look at the last 24 hours.
B
So
I
guess
my
question
is:
there's
something
we
want
to
do
something
about.
C
I
mean
I
think
so
one
question
I
had
was:
oh
actually
let
me
click
through
because
you
already
talked
about
this
somewhere,
so
Vector
seemed
a
popular
one.
C
B
Maybe
I
think
that's
a
separate
question.
Yeah.
B
C
C: As it happens, when I was talking to you yesterday there was a bug in the saturation calculations, or rather a misunderstanding, where for Tamland they used the average instead of the max of the series. So for cases like this, it probably wouldn't catch that we've actually been fully saturated for a while. It will now, except that changing that query effectively invalidated the cache.
C
So
we
haven't
been
able
to
get
Taman
to
run
since,
although
I
have
just
had
a
successful
run,
that's
populated
part
of
the
cache
so
I'm
on
the
way.
Now,
three
days
later,
four
days
later,
whatever
it
is
so
I
can
take
a
look
at
that.
Once
it's
done.
B
D
D: Igor, I think what would be helpful on this one is if you can update the description with what we've talked about here, about producing a POC: what it would take, what steps would be required to produce that POC, and what we would ultimately be seeing at the end of it. Because then we can figure out when we can schedule to get that done.
B
All
right
that
that
sounds
good
to
me.
There's
no
other
questions.
I
think
we
have
no
other
topics.
Do
we
no.
C
C
D
D: Yeah, thanks for the conversation, thanks for joining the demo today. I'll upload the recording so that others can see it later on. Thanks so much.