From YouTube: Scalability Team Demo - 2021-04-15
A
All right, Juan Min, you have the first one for today.
B
So, basically, in the Rapid Action I created an instrumentation to classify whether a query goes to the replica or to the primary. However, someone told me that it is not really correct, because we still have some housekeeping queries, generated by the framework and by ourselves, that don't go through the load balancer.
B
So it's really hard to tell whether a query is directed at the replica, and that creates real confusion in the instrumentation, and eventually we would have to change it. So I dug into that issue, and it's kind of fun: the first case is that we do have some housekeeping queries.
B
For example, we do query the replica for the last replay timestamp, just to check whether the replica has caught up with the primary or not, and that query lives inside the load balancer itself, on a raw connection, not through the proxied connection. That's the first case, and the second case is more fun. It is related to our schema.
B
Internally, Active Record actually creates a dummy pool to query for the schema. So somehow it creates this pool and reads the schema even before initializing the actual connection. Okay, this one will explain it better. For example, those are the two queries triggered by Active Record internally. It raises some dummy pools to trigger this query, so it is hard to tell what the source of the query is, because it's a temporary pool created by Active Record internals.
B
So it is not from our load-balancing connection pools, and it's still impossible to tell what the source of the query is. Right now I'm just marking it as an unknown source, to indicate that we don't really know the origin or the destination of those queries. This is one of the disadvantages of our approach in the load-balancing layer: we implemented our own load-balancing layer.
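The classification being described can be sketched roughly like this. All class and method names here are invented for illustration, not the actual GitLab load balancer: the point is only that a query routed through the proxy has a known destination, while anything issued outside it (such as Active Record's internal schema queries on its own dummy pool) can only be labelled unknown.

```ruby
# Hypothetical sketch: a proxy over the connection objects that records
# each query's destination. Queries issued outside the proxy never pass
# through it, so they cannot be attributed to primary or replica.
class ConnectionProxy
  attr_reader :log

  def initialize(primary:, replica:)
    @primary = primary
    @replica = replica
    @log = []
  end

  # Read-only statements are routed to the replica, so the destination
  # is known exactly.
  def read(sql)
    @log << { sql: sql, destination: :replica }
    @replica.call(sql)
  end

  # Writes always go to the primary.
  def write(sql)
    @log << { sql: sql, destination: :primary }
    @primary.call(sql)
  end
end

# A query with no recorded log entry has an unknown destination.
def destination_of(entry)
  entry ? entry[:destination] : :unknown
end
```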
B
However, the approach we chose is to implement a proxy on top of the connection object, and that's why it is hard to tell a query's destination. In Rails 6, Rails itself only supports load balancing, together with the sharding functionality, and the way it does it is to add a simple `replica` flag inside the database configuration.
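The flag-based approach mentioned here can be illustrated with a small sketch. The host names and structure below are made up; the only idea taken from the discussion is that each database entry carries a simple `replica: true` flag and the pools can be split on that flag alone.

```ruby
# Hypothetical database configuration in the Rails 6 style: replicas
# are marked with a plain boolean flag.
DB_CONFIG = {
  "main"      => { "host" => "db-main", "replica" => false },
  "replica_1" => { "host" => "db-ro-1", "replica" => true },
  "replica_2" => { "host" => "db-ro-2", "replica" => true }
}.freeze

# Partition the configuration into replica and primary pools using only
# the flag.
def split_pools(config)
  replicas, primaries = config.partition { |_name, cfg| cfg["replica"] }
  { replicas: replicas.map(&:first), primaries: primaries.map(&:first) }
end
```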
B
So in some incidents before, we used to have a problem with oversized payloads that would overload our Redis. After we implemented the track and raise modes, to verify whether this is a big issue and to prevent people from pushing big payloads into Sidekiq, we came up with a new approach: offloading the oversized payloads into object storage in case the payload is big enough. Somehow we never finished that rollout, and right now I have implemented a proof of concept in a merge request for that.
B
There are some issues with this approach, but first I will show how it works. Right now we have a feature-flag environment variable, the Sidekiq size-limiter mode, and there are three modes. The first one is the track mode, which just logs whenever a payload is oversized. The second one is the raise mode, which raises an exception, so clients will know that the payload is oversized and are prevented from pushing it into Redis. The final one is the upload mode: when we set that mode and we schedule a job, it will push the payload into object storage, and there is a middleware at the server side to fetch the payload back from object storage and restore the arguments. After that, the job runs just like a normal job.
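The three modes can be sketched as a single client-side check. This is a minimal illustration, not GitLab's actual size-limiter middleware: the constant, the limit, and the `store` argument (a stand-in for an object-storage client; a Hash works for the demo) are all invented.

```ruby
require "json"

PAYLOAD_LIMIT_BYTES = 10 * 1024 # hypothetical threshold

class OversizedPayloadError < StandardError; end

# Apply one of the three modes described above to a job hash before it
# is pushed: :track only logs, :raise rejects the job, :upload replaces
# the arguments with a pointer into object storage.
def limit_payload(job, mode:, store:)
  payload = JSON.generate(job[:args])
  return job if payload.bytesize <= PAYLOAD_LIMIT_BYTES

  case mode
  when :track  # log that the payload is oversized, let it through
    warn "oversized payload: #{job[:class]} (#{payload.bytesize} bytes)"
    job
  when :raise  # reject, so the payload never reaches Redis
    raise OversizedPayloadError, job[:class].to_s
  when :upload # push the arguments to object storage, keep a pointer
    key = "sidekiq-payloads/#{job[:jid]}"
    store[key] = payload
    job.merge(args: [], uploaded: true, upload_path: key)
  end
end
```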
B
Okay, so I will start with this mode. If we set a normal payload, it will just be processed as usual, nothing happens. If we increase the size beyond the threshold, you can see that I put some metadata into the job payload: the first item is the uploaded flag and the second one is the upload path, and when the job is pushed, the payload will be visible in the object storage.
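The server-side half being described, restoring the arguments before the worker runs and deleting the stored payload afterwards, can be sketched like this. The method name and job-hash shape are hypothetical, matching the sketch above rather than any real middleware API.

```ruby
require "json"

# Restore the uploaded arguments from object storage (here a plain
# Hash stands in for the storage client), run the worker's block, and
# delete the stored payload once the work is done.
def with_restored_payload(job, store:)
  if job[:uploaded]
    job = job.merge(args: JSON.parse(store.fetch(job[:upload_path])),
                    uploaded: false)
  end
  yield job
ensure
  # The payload is removed from object storage after processing.
  store.delete(job[:upload_path]) if job[:upload_path]
end
```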
B
Okay, yeah, like this one, and after it processes the work, this payload will be deleted from the storage. When I followed this approach I had some real concerns. The first one is that our object storage helpers are really tightly coupled to Active Record. So it's really hard to do my job, which is just to put a file into object storage and pull it down again: we have to create a model, mount an uploader on it, and store the file through that model.
B
That's the first concern, and the second one is that there is some operational risk: it will increase the load on object storage, and in the worst case it could even create more problems than the oversized payloads themselves. So I'm mainly raising this to see whether we should follow this approach or not. I put a lot of concerns and comments inside the proof of concept, and you can read the merge request and leave some review on it.
C
So, any questions? I haven't actually reviewed it yet. I was just wondering, I mean, we could create a model for the Sidekiq job arguments, right? It would be a bit weird, but we could do that.
D
We could, but I think we shouldn't, because that's one of the things that we saw when uploading stuff with CarrierWave: this model thing with mounts, and the long transactions, and stuff, you know. So if we can avoid the model altogether. That was one of the questions I had on the merge request: are we creating one of those upload records here?
B
No, it doesn't. I mean, it pretends it's an upload, but it's not, I think. It's not, and it's kind of a workaround for me. So I think that's not the real solution.
D
I think that's better for this use case, yeah. But I did ask on the merge request as well, because I couldn't find any occurrences: we have the thing in track mode running in production right now, don't we?
B
Well, eventually that issue somehow left nothing behind, so we haven't had anybody talk more about it, even the track mode. But based on the logs, we still have some big job payloads in production, so...
E
I mean, generally, it's good if you can have an approach that works for everyone, rather than getting every team to kind of implement their own thing. Is that what you're proposing, Bob? Or is it just: leave it to the teams to address?
E
For example, I bet you that if you ask a team to deal with that themselves, more than one team will take it and stick it into the Redis cache, right? These are the kinds of things where it's really good to have infrastructure that works across the board, and that's what I really like about this: it's a general approach.
F
I understand the appeal of a general approach, but it feels to me like we're going into a corner where we're not using these things as they're intended. If you have something that's very large, it probably shouldn't be a JSON payload, because just to access it you need to unwrap the JSON. So you have another problem: you have a very large JSON payload. And one thing that bugs me a little bit is that this happens only some of the time, so it'd be nice...
F
Sorry, I feel I'm being very much on the negative side here, but those are some things that come to mind. The third thing that comes to mind is garbage collection, but that's just an implementation thing: we cannot rely on the jobs running and deleting the things from object storage.
F
That's fine for me, yeah. But I wonder if what people really want is to throw large things around, and we just saw that it's really hard to have an upload, because you had to hack the model and fight CarrierWave. Should we maybe be making it easy?
B
Well, I think, basically, when someone comes up with their workers, they will emit a really small payload at first, and the payload will eventually grow to the limit, and somehow people will be blocked. But why does it get so large? This has only happened recently. Most of them are really normal workers; they don't have any special design, but eventually someone will have a larger payload than other customers.
F
I think maybe where I'm coming from is: if people regularly do this, then they shouldn't be sticking 50-megabyte things into JSON fields to begin with; that 50-megabyte thing should probably be a blob. But we don't know if people regularly do this, and that's also maybe what Bob was getting at. And yeah, it is nice to have a universal fail-safe.
E
What about if we had a hard limit on Sidekiq jobs, and we said no Sidekiq job can be greater than 10 megabytes, but then we also had a mixin or something that did the upload? Because I don't like it being automatic for everything, but I'd like an easy route for people to move their payloads to object storage. So we know what's problematic, you know, if something goes wrong in the application and you just get some Sidekiq job that suddenly has really big things.
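The idea being floated, a hard limit for everyone plus an opt-in mixin, might look something like this sketch. The module name, worker classes, and limit are invented for illustration only.

```ruby
HARD_LIMIT_BYTES = 10 * 1024 * 1024 # "no Sidekiq job greater than 10 MB"

# Workers that include this mixin opt in to the object-storage offload,
# so only they are allowed to exceed the hard limit.
module OffloadsLargePayloads
  def offloads_large_payloads?
    true
  end
end

class NormalWorker; end

class ExportWorker
  include OffloadsLargePayloads
end

# A payload over the hard limit is only schedulable for workers that
# explicitly opted in.
def schedulable?(worker, payload_bytes)
  return true if payload_bytes <= HARD_LIMIT_BYTES

  worker.respond_to?(:offloads_large_payloads?) && worker.offloads_large_payloads?
end
```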
D
But we don't look at it very hard, and the reason is that the thresholds that are defined are quite tight and we meet them quite often, and that's good. But the requests that we're measuring are very diverse. There are some upload requests and so on that are much slower than the GETs and the HEADs for the manifests, which is what I'm extracting here. So the idea is: the registry only has a limited set of routes that it needs to serve.
D
I've gone ahead and built something like that and asked Craig for feedback, and the main things that came out of it were: if we extract the fast routes, we need to be careful that the Apdex that we're extracting them from doesn't start failing, because we've extracted a whole bunch of fast things out and what's left is relatively slow, so that starts failing. I've paid attention to that. We also need to make sure that we don't count requests double and that we don't miss anything.
D
So if we extract a route out, like the manifest route here, we need to make sure that we have both the GETs and the HEADs, which are fast, and the DELETEs and the PUTs, I think, which are slower. That's what I've been working on: a kind of little framework in the runbooks, with a wall of JSON to hide this behind, so we can extract more routes out and we don't miss any methods for the routes, that kind of stuff.
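The no-double-counting, no-missing-anything concern can be sketched as a mapping from (route, method) pairs to SLIs with a catch-all default. The route names and SLI names below are illustrative, not the actual runbooks definitions.

```ruby
# Each SLI claims an explicit set of (route, method) pairs; anything
# unclaimed falls through to the catch-all.
SLI_ROUTES = {
  "manifest_fast" => [%w[manifest GET], %w[manifest HEAD]],
  "manifest_slow" => [%w[manifest PUT], %w[manifest DELETE]]
}.freeze

# Every request maps to exactly one SLI, so nothing is missed.
def sli_for(route, method)
  SLI_ROUTES.each do |name, pairs|
    return name if pairs.include?([route, method])
  end
  "default"
end

# Verify no (route, method) pair is claimed by two SLIs, i.e. no
# request is counted twice.
def no_double_counting?
  all = SLI_ROUTES.values.flatten(1)
  all.uniq.size == all.size
end
```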
E
Yeah, when I saw that issue I was really happy, mostly because it was a development person that had raised it, so they understood how the SLIs were working and they saw an opportunity to mature them. I was so happy when I saw that issue, and then even more happy when I saw you working on it, Bob. So thank you for making my day, or maybe my week.
D
It's been a little bit slow going; it's been one of the things I'm working on on the side. But I thought it was important enough to start looking into, because the registry team is doing this architecture change with the database and so on, so it would be nice if we had a little bit better handle on what the thing is doing.
F
Yeah, so we're taking a risk, like you mentioned, by splitting those, but we'll probably get a better understanding of what's going on. I was looking at the Gitaly Apdex SLIs the other day, and I noticed that there it's only the unary SLI. So that's a little bit like only doing the GETs and the HEADs, and anything that would be a POST, the gRPC equivalent of one, we don't even count, if I read that correctly.
E
The biggest reason is that there was still a lot of change, with the bidi and, you know, the streaming.
E
Basically, you have to review them on a per-endpoint basis, because some of them do lots and lots of stuff, and some are slow only because the server is generating a lot of data that it's sending back. So it's much harder to keep such an SLI up to date. With the unary ones it's simple: you send a request, you get a response, and you can measure it very quickly. But with the streaming ones it's almost like you have to review each one.
F
That's what I figured was the reason for that decision, yeah. That's what I figured when I looked at it, but it seems to work well enough, so that's fine.
F
Okay, then I'll talk for a moment about the pack-objects cache and the troubles we've had there. A few weeks ago we did an experiment where we disabled the pre-clone script for gitlab-org/gitlab for a short period, and everything looked fine, and I concluded: okay, everything is fine, so now we just need to finish building this and we're done.
F
On Tuesday I made that configuration change, and then my day was over and I went home. I came back into work the next day, and it turned out that the SRE on call had decided to turn it off, because the Apdex was degrading. When I had turned it on, I noticed the Apdex got worse, but I explained that away in my head.
F
We lost Rachel. I explained away the Apdex degradation because I thought: well, if these git fetches are part of the Apdex and we're fetching more data, then it's only natural that that takes longer.
F
At that point I could have noticed that something was wrong, and it just got worse through the day, and then the SRE on call made the call to turn it off, which was right, because things were slowing down. So yeah, then I learned a little bit more about how those Apdexes work, and the nice thing is that it looks like they work well.
F
I looked at one example, FindCommit, which has nothing to do with git fetch, and you could see it slow down the moment we made the config change. That's unrelated and it shouldn't slow down. So the fact that that slowed down, and that that triggers alerts, is good. But it does leave the epic in an unfortunate state, because I made it all about...
F
Yeah, the example case I was working on was falcon aerial one, which only survives because of the pre-clone script, and the outcome I was aiming for was to say: okay, if you have a heavy CI workload, then that's a problem, and we used to have a custom solution. That is not part of the product, but you can copy-paste it, glue it together, and hope that it keeps working and that you have a quality team to look after it.
F
Well, you don't, because we have a quality team, but that's just for us. So I wanted to replace that and promote it to a feature that just works transparently for everybody, and it's not that, or it's hard for us to argue that it is. For some people it may be that, but it's not very convincing if we cannot lean on it ourselves. So that's sort of where we are now with the project.
F
Yeah, that's a bit hard. You can look at cache hit rates, and then you see that there are cache hits, and that's great. But users don't care about cache hits.
E
So yeah, I started putting some stuff on that issue, 954, because to me something still doesn't add up. You know, we spoke about it briefly, but the things that are dropping are, which one was it, the upload-pack drops off. But there was a whole bunch of signals on that issue, and to me it feels like there's still something there.
E
Some signals are really good, and then other signals have kind of remained the same even though we have a really high cache hit rate, for example. And then you look at the fundamentals of the machine, and they sort of haven't changed at all, and to me there's a disconnect there: if we're getting this 65% cache hit rate, why aren't we seeing that in, you know, scheduling, or CPU, or some of the other places on the machine?
F
I almost think it's the other way around: I think that there's a bottleneck somewhere that we're not seeing, and we were maybe never saturating the machine; we were hitting some other bottleneck, and maybe we're doing better on that now. Well, I guess we were saturating the machine when it was at 100% CPU, that was pretty clear, so that is not happening anymore.
F
I also just understand too little of what's wrong, and if I had realized the Apdex problem on Tuesday, when the thing was on, I would have gathered more data. I would have taken profiles and tried to understand what the machine was busy with.
F
Yeah, that also means that we drag on the project for longer. Or would you propose we do that outside the epic, Rachel?
A
It feels odd to do this and then not gather the data at the end of it. And, as you said, if you'd been online when that had happened, you would have gathered the data before the script got switched on again. So I think it is definitely worth gathering the data, and I think it is worth some kind of time-boxed exercise for analyzing it.
F
That's great, thanks. Okay, so I'll make that a separate issue, just for collecting and analyzing the data, so that we have an alternative ending. The original ending was: we turned it on and it worked. Now we have an alternative ending: we turned it on and we learned this.
A
Yeah, let's get to the conclusion, and then we can decide if that's what we're going to do about it.
F
No rush. Okay, thanks for the discussion, everyone.
A
Is there anything else that anyone would like to talk about?
E
Oh yeah, I'll just quickly show that I did some work on it yesterday.
E
So, firstly, the timeline: this is obviously the Jupyter notebooks OKR that kind of became its own thing, and because it started off as Jupyter notebooks, it started off as Markdown, so it's got a kind of weird origin story. The code was all in these code cells in a Markdown document that was translated into a Jupyter notebook that gets generated into a report, and so it was always really horrible to use. And then I realized that you can just do imports from a local directory.
E
So you can keep all your code in Python now. Instead of editing code blocks with your code in them, you can actually start moving over to Python, and then, hey, you can actually have unit tests and the other things that sane people use. So that's the first step, which I think has really helped speed up development, because I can actually work in Python and not in Markdown code blocks.
E
So that's pretty cool. Yesterday I did that, and then I spent some time gathering a couple of requests that people have made over the last few weeks. The first one was Christopher, in the infra-dev meeting, who said that he wanted to see the threshold at which we're alerting on the graph. So that's the first thing we've got there: that little dotted line, which is what we call the hard threshold, and that's in there now.
E
The second thing we did, well, I did, was adding the dates on which we expect to violate the threshold. So this over here is the expected value, kind of the middle, the most expected value, and you can see there that's the 13th of July. Hopefully it's not a Friday. And then what I also put in is this 80% confidence interval.
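The forecast being shown can be illustrated with a very rough sketch: fit a straight line through a daily saturation series and extrapolate the day it crosses the hard threshold. The real report uses a proper forecasting model with a confidence interval; this only shows the idea, and the method name is made up.

```ruby
# Extrapolate a daily series linearly and return the number of days
# until it crosses the threshold, nil if it is not growing, or 0 if it
# has already crossed.
def days_until_threshold(daily_values, threshold)
  slope = (daily_values.last - daily_values.first).to_f /
          (daily_values.size - 1)
  return nil if slope <= 0
  return 0 if daily_values.last >= threshold

  ((threshold - daily_values.last) / slope).ceil
end
```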
E
I also include, as a pessimistic value, the date on which that crosses the threshold, and then, just for good measure, the date on which it's going to hit 100% as well. So I've got some new things there, and I also got it so that we finally have permanent links on these things. So if you want to refer someone to something in the report, you can just click on a little anchor and send them a link, and it'll go right there. Very exciting stuff.
E
Oh, this needs to refresh. And then the last thing that I did: I had quite a few people keep asking me what's happening with Patroni and all the saturation on there, and so I figured it would be better to move that into its own thing, and I plan to do the same for Redis. So if you're only interested in Patroni, you can now just come to this page, and all it's got is the Patroni service.
E
So it's got all the metrics that we monitor for that service, all the resource metrics at least, and how they're growing and what's happening to them, and of course it's also got the dates and the thresholds and everything like that. Shaun has already given me some feedback, which is mostly why I'm bringing this up: I'm looking for feedback, and Shaun's feedback was that he would like to have these arranged from the expected value crossing the threshold first, onwards.
E
And then, from a roadmap point of view, what I'd really like to do with this is get these dates, or days-until values, into Prometheus, and then you could actually start tracking them: we can look over time to see whether the forecast is coming in, and you could even start alerting on it and stuff like that. But obviously we don't need that yet.
A
If not, I'll close out this call by saying this will be the last of the demos in this format. I mean, the format will stay the same, but my intention is to create two calls instead of just having the one, so that there's an option for people in different time zones to join at different times.
A
So, if you haven't already indicated your availability on that sheet that I sent around, please do so. Andrew and Marin, I'm not sure if you'd also like to indicate yours there, or if you're just happy to join either of the calls when I do set them up for next week.
A
If you do want the link, just send me a message in Slack and I'll send it to you. But thank you very much for the demos; it's all looking good. Thank you so much, and I hope you have a good rest of your day.