From YouTube: Scalability Team Demo 2021-11-25
B
Okay, I have just one item today, which is to walk everyone through the epic I'm working on. So, the big picture of this epic: we are investigating the possibility of sharding Redis Sidekiq, and there are some items I'm working on. The first one is to understand which components are interacting with Redis Sidekiq, and that is in this item.
B
And finally, we evaluate the readiness and the availability of the solution. I'm currently at the first item, which is to create a curated list of interactions with Redis Sidekiq. In this issue I simply go through all the components we have in our system, in the Rails codebase and in the application, and this is everything I found. I'm not sure whether this list is complete or not, but I'm quite confident, as I scanned through it quite well.
B
It's everything I've spotted in our Rails codebase, and I'm still not very sure, but we can verify that later. So from the list of components I go through all of them and then analyze how they interact with Redis Sidekiq, including the commands they issue, the key patterns, and any actions they perform. There's a massive list of components, and I had to go through each of them, actually run it in my environment, and then capture the commands in Redis. And finally I composed this map, and this map is massive.
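The transcript doesn't say how the commands were captured; one plausible way is Redis's MONITOR stream. A minimal sketch in Python with redis-py, assuming a locally reachable instance; the host, port, and sample size are placeholders:

```python
# Sketch: observe commands hitting a Redis instance via MONITOR and tally
# them by command name and first key, to build a picture of who does what.
from collections import Counter

import redis

client = redis.Redis(host="localhost", port=6379)  # placeholder target
command_counts = Counter()
seen = 0

with client.monitor() as mon:        # issues MONITOR under the hood
    for event in mon.listen():       # one dict per command observed server-side
        parts = event["command"].split()
        name = parts[0].upper()
        key = parts[1] if len(parts) > 1 else "-"
        command_counts[(name, key)] += 1
        seen += 1
        if seen >= 10_000:           # stop after a fixed sample
            break

for (name, key), n in command_counts.most_common(20):
    print(f"{n:6d}  {name:10s} {key}")
```

MONITOR adds measurable load to the server, which is one reason to capture a short sample in a test environment rather than on a busy production node.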
B
Hopefully it is enough, and you can see I put the Redis Sidekiq server on the map, each component we have, and the interactions between them. From the map we can understand how they interact: basically which keys each component depends on, and which keys we need to care about. For example, schedule and retry are the most popular keys that other components are interacting with.
B
So when we design a model in a later step, we really need to look at schedule and retry. Then some components just work a lot with particular keys, such as the gitlab-exporter; this one works mainly on the keys in Redis Sidekiq, and we really want to cut that down somehow, and maybe we need to rewrite it in a later step. So yeah, there's more, and after we have this we can continue to the next one, where we design a model for scaling Sidekiq. So this is just a verification step.
B
The second one is to functionally shard Redis Sidekiq. We have two types of instances: the first one is the queue instance, and the other is the common Redis instance. This model works quite well, and for each one I had to redraw the map and try to map all the components and the interactions to that model. This model is a little bit complicated, but actually it works quite well, yeah.
B
I won't go into the detail, so I put a link for everyone to read up on later. For each model I will need to find what the solution for it is and what the key things we need to do are, and then I try to go through the component matrix again and apply the solution to each component, to see whether it matches what we have.
B
So after this step we have different models to compare, and then we can continue to benchmarking. For the benchmark we can reuse the cases for benchmarking like what we already did with this one: we need to capture the traffic going into Redis Sidekiq and then try to simulate that traffic against the different setups we are testing. And because we have a map, a really massive map of how the components interact...
B
We
can
just
simply
create
a
set
of
scenarios
that
match
this
model
and
with
the
key
pattern
of
explorer
generated.
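A minimal sketch of the replay side of such a load test, assuming the capture step produced a plain-text log of commands, one per line; the file name, its format, and the target endpoint are hypothetical:

```python
# Sketch: replay a captured command log against a candidate Redis setup
# and time it, as a crude load test. "commands.log" holds one command per
# line, e.g. "LPUSH queue:default payload" (a hypothetical capture format;
# splitting on whitespace is too naive for payloads containing spaces).
import time

import redis

target = redis.Redis(host="localhost", port=6380)  # setup under test

with open("commands.log") as f:
    commands = [line.split() for line in f if line.strip()]

start = time.monotonic()
for parts in commands:
    try:
        target.execute_command(*parts)
    except redis.RedisError as exc:  # e.g. cross-slot errors on a cluster
        print(f"failed: {' '.join(parts)}: {exc}")
elapsed = time.monotonic() - start

print(f"replayed {len(commands)} commands in {elapsed:.2f}s "
      f"({len(commands) / elapsed:.0f} cmd/s)")
```

Replaying a real capture, rather than synthetic traffic, is what makes the scenarios match the observed key patterns.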
B
So I think the load test is really realistic, and it can reflect the traffic we want to achieve in the future, so that after a load test we can understand how much headroom we have in the future. And then for each model we can understand how it can scale and what the limitations of each one are. So, yep. And finally we implement a proof of concept to prove that the solution is workable, and then finally we evaluate the observability and the administration of that model.
B
Yes, okay, there are some of them we need to look at. The first one is the batch queuing. I don't understand the reason behind that, but this should stay in the shared state, not in Redis Sidekiq; we can push it back out. And the second one is the GitLab SidekiqStatus. We can push that into shared state because, whatever we do, we still need it to be global; this is like a site-wide lock, not one that belongs to Redis Sidekiq.
B
We can push that into another instance. And when doing that, one interesting thing I explored is that a lot of things interact with the Sidekiq APIs. So when we move to another model, and when we shard, we can easily patch, well, not patch, but create a federation system for the Sidekiq API, so that the UI still works and all the components still work. We don't need to put a lot of effort into patching all of our components, so that's one good thing.
D
Yeah, looking at this picture, I'm surprised at all the things that are not the Sidekiq client or Sidekiq APIs that talk to Redis, and I wonder if it would be worthwhile to just get rid of all those things: take the global ones, move them to a different Redis, and take the things that do need to talk to Redis and only have them talk to either the Sidekiq client or the Sidekiq APIs.
B
Yeah, about that: we discuss mailroom like once per two months or something, and we actually have an initiative for that. Where is it, okay, this one? So we're trying to migrate from pushing directly into Redis to using the webhook system. But when you look at the proposal, it's not a small effort, and I'm not sure whether we should do this one before this scaling-Sidekiq thing or after. It would take a lot of effort, and it is truly technical debt, but I'm not sure.
B
Yeah, the change itself is quite small, but the configuration is really complicated: we have the Omnibus configuration, and we have a Kubernetes chart for that, and we have to update the configuration and make sure that it is backwards compatible with the previous one. So.
D
But yeah, that's kind of normal if you want to change these things. I'm not sure that is necessarily super...
C
The thing is, we've done this before, so it's more of a known quantity, yeah.
D
It makes the system cleaner, and then whatever we do with scaling Sidekiq will be easier if we don't have to accommodate these things that use Sidekiq in an unofficial or weird way.
C
Well, I think we can compare the work that we would need to do. If there's an idea to do mailroom the other way, like make mailroom talk to a separate Redis instance directly, yeah, that's also going to require configuration and whatnot. So, yeah.
D
What would we rather have: mailroom (a) be a special thing that talks to Sidekiq, or mailroom (b) be something that makes internal API calls?
D
I think that anything that uses internal API calls is going to be simpler, and then in the future we don't have to think about it anymore; it's just a thing that calls the internal API. So yeah, right, we have to spend time dealing with mailroom one way or the other, so we might as well deal with it in a way where we don't have to think about it again in the future.
D
Yeah, and I also wonder about these shared-state things, whether we can already start cleaning those up, moving them off of the Redis Sidekiq server. It would be easier to reason about the workload if there isn't non-Sidekiq random stuff also happening on that Redis server, whatever we do.
G
I had a question, or maybe an idea, about this map, because it looks like a lot of manual effort went into this, and I mean, this is an amazing map. I'm just thinking: as we get new Sidekiq versions and as we move things around (some of this stuff we might start moving to shared state), keeping this map up to date... maybe there's a way we can auto-generate at least some of it by instrumenting the client library and capturing, you know, a stack trace of where Redis is being called. That way we know who is calling, and we just sort of get some of this information without having to manually scan through the code and reason about it.
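GitLab's Redis client code is Ruby, but the idea described here, wrapping the client so every command records its call site, can be sketched in any client. A minimal, hypothetical Python illustration with redis-py:

```python
# Sketch: a Redis client wrapper that records, per command, the call site
# that issued it -- the raw material for auto-generating the map.
import traceback
from collections import Counter

import redis

call_sites = Counter()

class TracingRedis(redis.Redis):
    def execute_command(self, *args, **kwargs):
        # stack[-1] is this method; stack[-2] is whoever called the client.
        # A real version would skip library-internal frames to reach app code.
        caller = traceback.extract_stack()[-2]
        call_sites[(str(args[0]), f"{caller.filename}:{caller.lineno}")] += 1
        return super().execute_command(*args, **kwargs)

client = TracingRedis(host="localhost", port=6379)  # placeholder target
client.set("foo", "bar")
client.get("foo")

for (command, site), n in call_sites.most_common():
    print(f"{n:4d}  {command:8s} from {site}")
```

Walking the stack on every command is not free, which is the performance concern raised below; sampling (say, one call in a hundred) would bound the overhead.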
C
I had the same thing yesterday on that issue that I pinged you on, Igor, where you pointed me to the logs. We have instrumentation on the server side for commands, but at that moment we don't remember, well, we don't know where they come from, and I couldn't find anything that does it from the client side. So that would be really handy to have, because then, when traffic changes, we know where it changed.
B
Well, actually, I don't know. In theory we can put some kind of instrumentation into the Redis client, but I'm not sure about the performance and how it would affect our system, so we can give it a try later.
A
I think what's also challenging about having any map of it is that any map goes out of date really quickly, and even if we have the instrumentation to rebuild it, we have to realize that it's out of date, then rebuild it, and then use the information that we find. I think for now, because we have this, it gives us a path forward as to what we need to do, and if we get to the point towards the end of the project where we need to build it again, we should decide if we really need to build the map again, and if we do, we then make the choice of whether we're going to do that through instrumentation or by hand.
A
But I wouldn't want to be building instrumentation by hand right now, because I think we already have the path. But it is good to know that that is something that is possible to do, if we decide that this is something that we need to keep up to date regularly.
G
I guess one of the risks that I see is, I mean, this is sort of deviating from what Sidekiq does natively, right, and may break some assumptions, because we have some stuff that is sort of quote-unquote shared and kind of global to the Sidekiq system, and then we have some stuff that is sort of per queue or per shard, if you will. And, I mean, when a new Sidekiq version comes out, we don't know if that behavior changed, and we need to kind of revisit some of those assumptions.
B
Yeah, agreed. When I investigated the possibility of sharding Sidekiq functionally, it is feasible right now, but it's really hard to maintain because it depends a lot on Sidekiq's internal implementation. So I wrote that as well; I lean more towards the general Redis Cluster approach, when we know what we're doing. I mean, yep, even if it's feasible, it doesn't mean that we have to do it; it's like an investigation to compare the solutions first.
C
Okay, let me share my screen. It's just something I dropped in real quick, because there was nothing on the agenda and I thought, if I have everybody anyway...
C
So I was looking at, I saw some alerts, and they were from Thanos, and they were mostly from stuff that I did. So I looked into it a little bit, and these are rule executions in Thanos that take, no, these are rule executions, recordings everywhere, and two that stand out very much are the stage group aggregation and the feature category aggregation, in Thanos: not in Prometheus, but in Thanos.
C
Because then we have, like, we're trying to run every minute, but if the evaluation takes longer than a minute, then, I don't know what Thanos will do, start the new one in parallel, or wait, and then we don't have data.
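The rules engine that Prometheus and Thanos Rule share exposes per-rule-group timing metrics, so the "evaluation takes longer than its interval" condition can be queried directly. A minimal sketch against the standard HTTP query API; the endpoint URL is a placeholder, and it assumes Thanos Rule exposes the standard prometheus_rule_group_* metrics:

```python
# Sketch: find rule groups whose last evaluation took longer than their
# configured interval, via the Prometheus-compatible HTTP query API.
import requests

BASE = "http://thanos-rule.example.com:9090"  # hypothetical endpoint
QUERY = (
    "prometheus_rule_group_last_duration_seconds"
    " > prometheus_rule_group_interval_seconds"
)

resp = requests.get(f"{BASE}/api/v1/query", params={"query": QUERY}, timeout=30)
resp.raise_for_status()
for sample in resp.json()["data"]["result"]:
    group = sample["metric"].get("rule_group", "?")
    print(f"{group}: last evaluation took {float(sample['value'][1]):.1f}s")
```

Any series this returns is a rule group that could not complete within its evaluation interval, which is exactly the gap-producing case being discussed.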
C
This one should be static, and I...
C
I added the job label to it because I wanted to see if this was also happening in Prometheus.
C
Useful. Let's add some filters now. What do we have here?
D
But so, going back to what the problem is, while Bob is thinking, if somebody else knows the problem: the problem is that we're worried that Thanos cannot keep up with evaluating the rules, or...
H
And as much Prometheus as well, but yeah, I think there's a, you know, not necessarily Thanos. I mean, these all look like Prometheuses that are actually, well, no, in some cases as well. Yeah, it's more worrying to me that Thanos can't keep up, because Thanos should just be aggregating a handful. Yes.
C
This is what I wanted to bring up: the aggregation for feature categories. It should already be quite small, like, yeah, because that's already an aggregation, but here are all the feature categories.
C
Yeah, but I don't want to, I want to see if the same rules, is the rule name the same for Prometheus? No.
C
The thing that I'm interested in is whether these feature category aggregations take a lot of time in Prometheus as well. So I'm going to look into that, like, right after, but...
H
251 seconds for talking to a handful of Prometheuses and getting back... yes.
D
It could be the way we track these metrics for feature categories that Thanos can't handle.
H
Yeah, I wonder if there's a thing where one of the Prometheus services is, like, not responding and it's kind of timing out, or, you know, not timing out, but one of the backend Prometheus servers is really slow and that's tripping it up, or something like that. It would be interesting to take some of those queries, run them on Thanos, and then kind of break them down by, you know, the routing.
C
That's the thing that I briefly tried yesterday. I do query these feature category things often myself, but I always query the global ones. So...
G
So I think, yeah, figuring out whether the issue is in Thanos Rule itself or in one of the backing Prometheuses is probably one of the first steps to try and figure out, and see if, you know, Thanos Rule is just CPU-bound or something; then we need to address that directly. But I think we recently reintroduced distributed tracing on Thanos and Prometheus, Steve mentioned that, yeah, and I don't know whether Thanos Rule is included in that.

But if it is, then that might give you the answer right away.
H
Is that using Stackdriver tracing? Like, can I go log into Stackdriver tracing and see magic stuff? Yes? Cool. I haven't actually done that yet. Awesome, yeah.
H
I would say that that's fairly, I mean, because there's also that thing, Bob, where I think I was showing it to you the other day, where loads of the, no, I was showing it to Rachel, where loads of the six-hour metrics are just points, and then they're not...
C
I think it's quite serious. It could be. I opened another issue for that, but it's also how you're viewing things: if you view things with an _over_time function and there are no two points in it, then something weird happens.
H
But yeah, so I would definitely do it in Prometheus rather than in Grafana, maybe because of those things that you're talking about, but then the main thing is whether the series is becoming absent,
H
Going away. If it's there the whole time, then it's fine; then it's, you know, not updating that often, which is not great. But if it's just gone, then we've got a big problem, because, yeah.
C
So that's what, with the metrics that I was looking at, we need to dig up the issue, but Sean brought it up a bit when building a dashboard: on the dashboard it looks like dots; when you put that straight into Thanos it looks like tiny short stripes. But if you remove the _over_time function, then everything is just there.
C
But if you increase the interval to something bigger than one minute, then it's there, and that could be because of the slow rule evaluation. Because if we don't have a data point every minute, and it's like, yeah, then one minute...
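The interval effect described here can be checked with PromQL range selectors: an _over_time window narrower than the actual sample spacing returns gaps, while a wider one does not. A minimal sketch against the query_range API; the endpoint URL, metric name, and time range are placeholders:

```python
# Sketch: compare a 1m and a 5m range window for the same series. If
# samples arrive less often than every minute (e.g. slow rule evaluation),
# max_over_time(metric[1m]) has gaps while the [5m] window does not.
import requests

BASE = "http://thanos-query.example.com:9090"  # hypothetical endpoint
METRIC = "some:aggregated:rate_5m"             # hypothetical recorded series

for window in ("1m", "5m"):
    resp = requests.get(
        f"{BASE}/api/v1/query_range",
        params={
            "query": f"max_over_time({METRIC}[{window}])",
            "start": "2021-11-25T00:00:00Z",
            "end": "2021-11-25T01:00:00Z",
            "step": "60s",
        },
        timeout=30,
    )
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    points = sum(len(series["values"]) for series in result)
    print(f"window {window}: {points} points returned")
```

Fewer points for the 1m window than the 5m window would confirm that the series is sparser than the dashboard's one-minute interval assumes.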
H
I'm just looking at the Google Cloud Trace Thanos thing, by the way. It looks... I love this; the daily histogram of latencies is very cool.
H
You want to share your screen? Yeah, sure. I haven't looked at this at all, so I'm like... but there's data, and this looks super interesting. This presumably is the latency of all requests into this gRPC service. I don't know why I picked that, but I love these sorts of histogram-over-time, non-bucketed type things. I always find them really useful, like where the buildups are happening and everything. Like, what's going on there between 60 seconds and 80 seconds?
H
Well, presumably that's many seconds, yeah, many seconds. But this is cool. It's great that we got this.
H
No, but, like, I almost can't use Zoom and share my screen at the moment; I've got a horrible feeling that my computer is not gonna make the time frame that I need for a new one. But anyway, I'll look at this afterwards. Very cool that it's there; thanks for that, Igor.