From YouTube: Scalability Team Demo - 2021-07-08
A: Okay, I will share my screen, this top one. Here we go. Can you see my screen? Yes? Okay. So recently I have been working with Jacob on a project. Before I start, I will recap the problem statement a little bit. Basically, while we were working on Gitaly, and Jacob had already put a really great effort into the pack-objects cache, we observed that that approach doesn't give a good result.
A: So that's why Jacob thought it would be really great for us to just work with a plain TCP socket, so that we can fully utilize the network, the CPU, and everything else. But it is not very wise for us to implement a whole new gRPC replacement, because we've already invested a lot of effort into Gitaly, and Gitaly has tons of interceptors and middleware which are very useful for us; nor is it wise for us to implement a whole new server.
A: So that's why we came up with a mechanism that lets us reuse our existing Gitaly server while still utilizing the raw TCP socket when conducting, let's say, a stream. (Question: Is that like a hack for us to steal the connection?)
A: Yes. Basically, we just try to replace gRPC for the data-fetching operations; that's it. And we came up with a thing called Stream RPC. So how does Stream RPC work? Where is it... okay, I brought a flowchart for us to easily understand, in overview, how the flow works.
A: Basically, when our client wants to query something in the repositories, it will make an RPC request to the Gitaly server, and the first step is to establish the TCP handshake with the Gitaly server. When the Gitaly server has established the connection, it delegates to a module called the Gitaly listener, so that it can continue to establish the TLS handshake with the remote end.
A: Basically, when we establish a raw, insecure connection to the Gitaly server, we don't need this step, but we are trying to inject our own multiplexer there, and I will show you how it works. Here is the code for the Gitaly main server. When we create a new server in Gitaly, we pass in a lot of options.
A: The stream interceptors, the unary interceptors, and, importantly, gRPC allows us to inject our own handshaking handler inside, so we implement our own handshaking handler, called the listener one. Instead of doing the normal TLS handshaking, we just make it a multiplexer. So, back to the story: when our client tries to make a new connection, we inject a new, like, magic byte...
A: ...a stream of magic bytes into the TCP connection, something like `stream rpc 00`, and then our Gitaly listener module will classify and multiplex the connection based on that. It can pick either backchannel, or Stream RPC, or just resume the normal flow.
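The classification step described here can be sketched as peeking at the first bytes of a fresh connection; the magic-byte values and route names below are illustrative guesses, not Gitaly's actual constants:

```python
# Sketch of the magic-byte demultiplexing described above. The byte
# strings and route names are illustrative, not the real constants.
MAGIC_LEN = 11

ROUTES = {
    b"stream-rpc0": "stream-rpc",   # hand off to the Stream RPC server
    b"backchannel": "backchannel",  # hand off to the backchannel handler
}

def classify(first_bytes: bytes) -> str:
    """Route a freshly accepted connection by peeking at the first few
    bytes the client wrote; anything unrecognized resumes the normal
    gRPC flow."""
    return ROUTES.get(first_bytes[:MAGIC_LEN], "grpc")
```

A real multiplexer would read exactly `MAGIC_LEN` bytes from the socket before deciding, then replay them (or not) to the chosen handler.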
A: So if the magic bytes say `stream rpc 00`, it delegates to our Stream RPC server to handle the stream. Afterwards, the client composes a handshaking request to send to the server. The request includes three elements.
A: The first one is the method, which is the Stream RPC method we want to call. The second one is the metadata; that will be the authentication, or the context, or anything we want to pass to the server. And finally, the message corresponding to the gRPC method here. The Stream RPC handshake is written onto the wire in a length-prefixed format, so we write the length first and then the whole marshalled payload after it, and then our Stream RPC server will unwrap the handshaking request, look it up, and verify it.
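The length-prefixed handshake framing described above can be sketched like this; JSON stands in for the real marshalling format (which the talk doesn't specify), and the field names are assumptions:

```python
import json
import struct

def write_handshake(method: str, metadata: dict, message: dict) -> bytes:
    """Frame the three-part handshake (method, metadata, message) with a
    length prefix: a fixed-size big-endian length first, then the
    marshalled payload."""
    payload = json.dumps(
        {"method": method, "metadata": metadata, "message": message}
    ).encode()
    return struct.pack(">I", len(payload)) + payload

def read_handshake(buf: bytes) -> dict:
    """Unwrap a handshake frame: read the length, then decode exactly
    that many payload bytes."""
    (length,) = struct.unpack(">I", buf[:4])
    return json.loads(buf[4:4 + length].decode())
```

The length prefix lets the server know exactly how many bytes belong to the handshake before the raw stream begins.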
A: And finally, when it accepts the connection, the server writes a response to notify the client that it has accepted the connection, and afterwards it invokes the real gRPC handler and the whole interceptor chain of the server. The gRPC handler then interacts with the client via the raw TCP connection right here, and that's why the handler can just write directly into the TCP connection, which is not possible with the normal gRPC flow. And when the handler finishes its operation...
A: ...and transfers the whole payload back to the client, it closes the connection, and then our Gitaly listener notifies the Gitaly server that the whole flow is done and it doesn't need to resume the normal flow. So that's it; it's a really simple flow. Basically, our Stream RPC server steals the job from the Gitaly gRPC server and does something behind its back, so that the handler can use the raw TCP connection, and that's how we can make the stream.
A: So, okay, and we should also note this one: when we initialize the gRPC server, we also pass the whole interceptor chain into that server. It's not, like, a duplication; the Stream RPC server is not a gRPC server, but it goes through the whole stack of interceptors and acts as if it were a gRPC server. So we can reuse everything without re-implementing it.
A: Okay, I will continue. This is just the story between the client and the Gitaly server. How about if we put Praefect into the picture, how would it fit? We haven't finalized the solution yet, because Jacob is on holiday, but I moved forward a little bit and implemented a proof-of-concept Stream RPC proxy to plug into Praefect. How it works is nearly the same. Basically, when the client tries to... okay, I'll zoom in a little bit.
A: When the client tries to establish a connection to the Praefect server, it still does the TCP handshake again, and similarly we inject the Gitaly listener into the Praefect server and do the same magic-byte stuff. Then the Stream RPC proxy will intercept the handshake message, and this time the handshake will include the repository, which is required for routing an RPC from Praefect to Gitaly.
A: After that, the Stream RPC proxy reads and unwraps the message to look up the gRPC method corresponding to the one being passed, and then we unmarshal the Stream RPC request here and extract the target repository from the header. So basically it works the same as what we are doing in Praefect with the router: we find the Gitaly server corresponding to the target repository, and after that it can forward to the target node.
A: Yeah, it's a hard question. Basically, when we are trying to inject our Stream RPC into Gitaly, this will not be very easy, because the whole Stream RPC concept is new. After we roll it out, if nothing is transferred, the Gitaly server carries on with its normal operation as before. And then we have implemented a test Stream RPC to test out, in the background, all the features we need from the Stream RPC protocol, and after that we continue with our rollouts.
A: Yeah, we haven't figured that out yet, but I think we can just implement some kind of feature flag to enable or disable the Stream RPC protocol for a particular repository.
C: We actually need to just redeploy slackline, because it doesn't know about the service; it gets deployed with the service catalog. But that's a simple thing.
B: Well, yeah, that's what I thought was funny, because it goes from doing nothing, like really nothing, and then we throw some work at it, and there's the saturation, because this service is already provisioned to take the entire load of the trace chunks. So that means that right now it's, like, really bored, and I think that's funny. At the moment it's only got four projects in it, right? Plus the project that I enabled, that I was working on.
B: So what's the next step for getting more projects? I was going to comment on that on the rollout issue. I think now we should... no, I think we should leave this running at least 24 hours, see that nothing blows up, and then start a percentage-based rollout on actors. I think so, yeah; some projects first, then start moving towards that.
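A percentage-of-actors rollout like the one mentioned can be sketched as stable hashing of the actor into a bucket; the flag name and helper are hypothetical and not GitLab's actual feature-flag implementation:

```python
import hashlib

def actor_enabled(flag: str, actor_id: str, percentage: float) -> bool:
    """Hash flag + actor to a stable bucket in [0, 100); the actor is
    enabled once the rollout percentage passes its bucket, and stays
    enabled as the percentage grows."""
    digest = hashlib.sha256(f"{flag}:{actor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percentage
```

Because the bucket is derived from the hash rather than drawn at random per request, a given project's experience is consistent across the rollout.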
B: I'd prefer it if Craig answered that, because I don't know what this server is going to do when we throw all of the work at it, and I don't think he knows either; that's why we're doing it gradually. And if we see that... yeah, we might have, like... well, if it's too big, it's fine, but maybe it's too small, or, yeah, who knows.
D: If he's able to continue later today, and then hopefully you can pick it up again tomorrow, we can see how far we get by the end of tomorrow.
B: No, I don't think Craig is unsure. I think Craig knows more, and I'm a little bit, like... I'm hazy about this. What?
B: And he also spun up this server; he spun up the shard without me knowing what's in there. I just watched graphs go up after I did something with feature flags, so I'm pretty confident that it's fine, because right now it's not doing anything. So I wouldn't mind starting a percentage-based rollout, but I'd prefer to have an SRE sign off on that, and Craig is the SRE that knows about it.
B: That's basically what we did: Craig rolled this out, tested it on staging, and at the end of his day he said, this is where I got to; somebody else, please roll out for these projects. That's what we did just now, and now I'm going to mention to Craig: you carry it further. And then I think I will pick it up again tomorrow, to pick up where Craig left off tonight.
F: So I'm not suggesting we should go faster or slower; I'm asking where our level of confidence is. Like, do we have all the metrics that we think we need right now? Do we have all the controls we need, like turning things on or turning things off? If we have run the experiment on a smaller subset of projects and we got the results we expected, what other confidence do we need to gain to be able to either change our tactics or, you know, continue with it?
B: Right now, the confirmation that we got from this is that everything is fine. What I don't know, but Craig probably does, is how many projects we should add now.
B: No, it's not that; that part I know. Okay.
F: The general concern when things are being rolled out during APAC hours is low traffic, right? Like, we don't have enough traffic, we need to wait, we need to gather the data. And if we are already confident, and we have all the bells and whistles and all the controls, maybe that data collection can happen now. So that's where Craig comes in, and if we have the details of how we are rolling this out, there might not need to be a wait time in between.
B: Looking at what I've seen now, I wouldn't be opposed to just enabling it for one percent of projects in total. But, yeah, I don't know if there's another reason Craig wanted to wait; in the issue there wasn't a description, and he just left a note in the project channel as a handover of where he got to, and I thought that it would be fun to continue. So that's what we did.
D: Well, if we're not seeing any negative side effects, maybe we should just go for one percent of projects, and then he's got a few more hours of data before he comes online.
D: So where I've started with this is just doing the work for two sections, so that we could provide them back to product and they can give us feedback before I go ahead and do this for all the other sections, because there is an awful lot of copy-paste in producing these things, and I don't want to have to change what I'm copying and pasting across everything. So, when they first come in here...
D: This is the overview showing all sections in development and the availability for all sections, taken as an end-of-month reading for June, plus as far as we currently are into July, and then it is broken down by section. So this is everything that is in every group, in every stage, in the Dev section, and the same for Ops, and I split it into two views showing the availability versus the target and the budget spent versus the target.
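The two views described, availability versus target and budget spent versus target, boil down to arithmetic like the following; the 99.9% target and the helper names are illustrative, not the dashboard's actual definitions:

```python
def availability(successes: int, operations: int) -> float:
    """Fraction of operations that met the SLI."""
    return successes / operations

def budget_spent_ratio(successes: int, operations: int,
                       target: float = 0.999) -> float:
    """Fraction of the error budget consumed: observed errors divided by
    the errors the target allows over the same operations."""
    allowed_errors = (1 - target) * operations
    observed_errors = operations - successes
    return observed_errors / allowed_errors
```

Values above 1.0 on the second view mean a group has overspent its budget for the period, which is the at-a-glance signal the dashboard is after.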
D: So, yeah, it's a really simple view of the error budgets, but it's what they asked to see inside of Sisense. And what's quite nice over here is that this is the one they can use as the performance indicator if they choose to, because that's also what they wanted to see, since it sums up absolutely everything.
D: We have had a request to be able to break this information down even further, like to the controller level, so that an engineering manager might be able to go in and say: well, we did work on controller X and we want to see the change month on month. But we aren't pulling the data in at that level of detail yet, so...
B: That's not going to be an easy one to do; it's not rolled up in the metrics either. The moment we're working with group-level metrics, and no longer feature-category metrics, we don't have the detail of the endpoint anymore.
D: Yeah, so I think what I'm going to do is just describe to them how they can do the same thing using the existing links that they've already got, just adjusting the filters. But in terms of what we were asked to produce, this is a decent first iteration towards that, and they can see how the target is way up there and lots of things aren't quite up there right now. So...
D: The problem was over here: where you've got something that has a really high availability, suddenly it gets really, really close to this line, and you can't see that it's still below. So it felt like this was the best view I could come up with just to show that it's not quite there, unless I make the chart way bigger.
G: Yeah, I was wondering if we could reframe it so it's the other way around. So, what's the opposite of availability?
G: The budget minutes spent is target minus... I was talking about one minus, but as the target is so close to one, it would be pretty similar, yeah.
B: Yeah, Rachel, this is, like, for each group... this is the average rolled up; like, the average over all groups in a stage.
B: Is that how it goes? Do you mean this chart at the top, or which chart are you...? Yes, the chart at the top, that shows...
G: Okay, so we can't weight it and get the correct...
G: No, no, like... yeah. Basically, if you have a stage with two groups, and one group does ten operations and they all fail, and the other group does a thousand operations and they all succeed, this would show 50%, because it's the average of the groups, not the average of the operations; you don't have the operation counts there. I think that's an unlikely example; I'm just giving an example.
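G's example works out like this: averaging per-group availabilities ignores how many operations each group served, while an operation-weighted average does not. A sketch, with the counts taken from the example:

```python
def average_of_groups(groups):
    """Unweighted mean of per-group availability, as the chart computes."""
    return sum(ok / total for ok, total in groups) / len(groups)

def average_of_operations(groups):
    """Operation-weighted availability across the same groups."""
    total_ok = sum(ok for ok, _ in groups)
    total_ops = sum(total for _, total in groups)
    return total_ok / total_ops

# One group fails all 10 of its operations; the other succeeds on all 1000.
groups = [(0, 10), (1000, 1000)]
```

With these counts the unweighted view reports 50% even though over 99% of all operations succeeded.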
G: This dashboard... so, Rachel, sorry, if you go back to the drill-down of that one; that was what I was going to ask. Yeah, so that would show you here if there was a discrepancy. Like, you know, it would show you that, okay, Package is actually, kind of, pretty much... Package and Monitor, actually, you know, reasonably difficult, and Configure and Verify less so.
B: Because right now you've done that manually, like, what's the section for these stages. We have the stages on the metrics that you're using there, but we don't have the section. I think we should start adding the sections way at the beginning, in our mapping; then that would roll up here, and it wouldn't be manual work for you.
G: It also means that if a stage moves between sections, it's the same as if a feature category moves between groups, or a group between stages: the historical data would also be right, because it would be as of the point we collect the data.
D: I think that would be helpful, but also what I want to do is wait till we get feedback on this, because we've spent quite a lot of time just being able to create these things and the view.
D: While we understand what this view does, we might find that this isn't what the product managers were hoping to see. So I want to get the feedback from them and then see how much more work we're going to put into this now. But I definitely agree: if we had that section information available sooner, it would definitely make this easier. I just don't know if we need to add that in right now.
G: ...of the time, so their impact doesn't really show up here directly, but without the v2 SLI proposal, or something similar, we couldn't really tease that out. So, yeah, I'm just calling that out there, because, like, I...
B: Some of these would also be addressed, because, for example, Fulfillment, which has the Purchase group and whatnot, does run services on our infrastructure. Like, we run services...
B: ...that don't really have SLIs, so I don't know. Yeah, Enablement might be...
B: Global Search... I was thinking it would be cool if we could define, the way we define SLIs for services in, like, a fancy JSON schema thing, and have groups define their own SLIs like that. So they add metrics for themselves, and then, for example, the Growth group, who are doing a lot of tracking and experiments and stuff, could define an SLI based on that.
D: Categories... well, that was all I wanted to share on that. Is there anything else anyone would like to share?
C: If people are really bored and at all interested, I can go into a little bit of detail on the pain that I've been experiencing over the last few days. But you probably don't want to know about Python dependency management and Tamland.
C: So these are the things I've learned. Obviously I'm kind of new to all of this, because I'm not really a Python developer, but one of the things that I've noticed is that very often we'll push a change in Tamland, we'll push a change to GitLab CI, and that will kick off a new Docker image build, because the Python equivalent of gem install, of bundle install, takes such a long time.
C: We bake all of that into the image, and every time we bake that image, various different things break. At the moment there's a whole bunch of logging messages that have just started appearing because we happened to build a new container image, and it's super sketchy and not very stable. So I thought: well, this will be a really easy, small fix; let's lock the dependencies that we use, so that whenever we rebuild, we can have some consistency.
C: This is going to be easy, right? Like, this will be a little five-minute task to add a lock file to our conda setup. Just to give you an example of what it looks like: this is what the conda environment file looks like, and actually, before all of these changes, the entries had full version numbers in them. But the problem is that the actual dependency tree is much, much bigger than this, and there isn't a way of generating a lock file.
C: But what you can do, I discovered, is basically take the current set of gems, or pips, whatever... you know, the Python dependencies, and take a snapshot of that. So what I did was: I install from this file, and then I write it out to this file here, and this has been... (You built that?) Yeah, yeah, because I couldn't find anything to do it.
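The snapshot step C describes, install from the hand-written environment file and then write the fully resolved set back out, can be sketched as post-processing `conda env export`-style output. The `name=version=build` spec format is conda's, but the helper itself is hypothetical, not the tool from the talk:

```python
def lock_specs(exported_deps):
    """Turn resolved conda specs (name=version=build) into pinned
    name=version entries for a lock file, dropping the per-platform
    build string so the pins stay readable."""
    locked = []
    for spec in exported_deps:
        parts = spec.split("=")
        locked.append(f"{parts[0]}={parts[1]}")
    return locked
```

Committing the output gives the rebuild-to-rebuild consistency the hand-written environment file alone could not.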
C: So this is the first thing I googled, and googled, and googled, and it just doesn't seem to be a thing. There's a different thing called Poetry; pip is obviously the main Python thing, and then Poetry seems to have a lock file. Right, that's not this one; that's pip, so pip uses requirements.txt. But the problem with pip is that it basically builds everything from source, so the build takes several hours, and so Anaconda...
C: ...has got all of these packages that are pre-compiled for different environments, and there's sort of a community looking after them and making sure that they work, and they're designed for all the data science stuff. That's why we ended up with Anaconda, especially because there are pre-compiled binaries for a lot of the stuff, like PyStan, you know, this massive Monte Carlo simulation library; when you start trying to do a build of that, it takes hours and hours and hours.
C: So you definitely want to use pre-packaged things. So I started building it up. The first thing I discovered was about the packages that are available, because conda understands... you know, it uses these pre-compiled binaries.
C: The first thing I realized was that the binary versions that are available for OS X and Linux are different, right? So you might say: I'm going to use PyStan, and PyStan uses libgfortran, and on OS X certain versions are available, so it'll pin those, and if you then try to install that on Linux, it's like: there's no such libgfortran version.
C: So the next thing I started doing was going around and adding all of these hard-coded dependency constraints to my source file, like, we can't use version five because... you know, blah blah blah, long story. That was my first attempt at doing this. The second problem, well, there have been a lot of problems, but basically, where I'm at now... oh, and then the second problem that I've just hit is that there's a whole bunch of stuff in the dependency tree...
C: ...that you don't even get for other environments. So there's a thing that I just removed from here, called appnope, and basically, when I tried to install that on Linux, it said it couldn't be found. I went to Anaconda and looked it up, and there's only appnope for macOS. So I tried to figure out what it is, and it's something that stops macOS from sleeping.
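One way to keep a macOS-generated snapshot installable on Linux is to filter platform-only packages out of the lock list before writing it, rather than grep -v by hand. The exclusion list here is illustrative, with appnope as the one example from the discussion:

```python
# appnope only exists for macOS (it disables App Nap), so a lock file
# generated on a Mac must drop it before it can install on Linux.
MACOS_ONLY = {"appnope"}

def portable_specs(specs):
    """Remove platform-only packages from a list of name=version specs."""
    return [s for s in specs if s.split("=")[0] not in MACOS_ONLY]
```

Generating the lock file on the same platform that consumes it (as C does next, in CI on Linux) avoids the problem entirely and is usually the safer fix.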
C: So there's never going to be a similar thing for Linux. And so the first thing I started doing was, like: well, should I do grep -v and remove those things? But, like, where I'm at now... well, right now, this was supposed to be a five-minute job and it's been like a three-day job. I'm tearing my hair out.
C: No, no, no, I've looked, I've looked for it. So where I'm at right now is that I'm going to build this in CI; I've just started putting it together now. I'm going to generate the lock file on Linux, and then it's going to go and push a merge request with the new lock file, and then we can go from there. And on macOS, I don't know... we'll, you know, we can do that.
C: I'm less worried about macOS; it's more the stability on Linux that I'm concerned about here. So, yeah, I'll figure that bit out next. But yeah, it's been an eye-opening experience, and I can't help wondering if I've just totally missed something, but it seems to be... You know, here's actually the dependency tree; this is what it pulled out, and the reason it's got appnope...
C: It feels like a tangent, but I do think it's really important, because of the number of times that Tamland has broken because these dependencies keep changing. It's becoming a problem, and I put in the first unit tests because I got sick of trying to test whether things were working or not.
G: Yeah, I just realized I do kind of have something to share, although it's not really done, but Andrew replied to my MR, so I'll just quickly share what I've been doing. This is the follow-up from the incident where Sidekiq didn't alert, because they didn't know that the cron job was failing. Part of the problem was that the counters started at one, because we didn't pre-initialize them.
G: So when we first added a value to the counter, it went from 'this counter doesn't exist' to 'this counter is one', and if you try to take the rate of that in Prometheus, you'll get zero; you need it to happen twice. But if it's a cron job, or just a job that doesn't happen very frequently, it won't ever go above one, so the rate will be lower than it should be. So that's fixed.
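The counter problem G describes can be seen with a toy model of what `rate()`/`increase()` observes: they only sum deltas between scrapes, so a series that first appears already at 1 contributes nothing, while a series pre-initialized to 0 makes the first increment visible. This is a sketch of the failure mode, not Prometheus itself:

```python
def observed_increase(scrapes):
    """Total increase a Prometheus-style rate()/increase() can see: only
    the deltas between successive scrape values, never the jump from
    'series does not exist' to its first sample."""
    return sum(b - a for a, b in zip(scrapes, scrapes[1:]))

# Counter created lazily on first use: every scrape already sees 1.
lazy_counter = [1, 1, 1]
# Counter pre-initialized to 0 at boot: the 0 -> 1 step is observable.
preinitialized_counter = [0, 1, 1]
```

For an infrequent job the lazy counter may never move past its first value, so the computed rate stays at zero and the alert never fires, which is why pre-initializing every label combination at process start matters.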
G: So now I'm splitting out the... so, Andrew already did this for Gitaly.
G: So this now does it for every... sorry, that was Gitaly alerting; now this is the global part of every service, so it's going to be quite a big diff. What it's doing is saying: instead of having one alert that covers the one-hour and six-hour windows combined, the multi-window, multi-burn-rate one that we've got right now, we have two separate alerts, one for each window, each with a label. So the diff is massive, but the basic idea is that for Sidekiq we'll go from an alert that looks like this to one that looks like this and one that looks like this.
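The split G describes, one alert per window carrying a window label instead of a single combined multi-window rule, could be generated along these lines; the rule and metric names are invented for illustration, not the project's actual recording rules:

```python
def burn_rate_alerts(service, threshold, windows=("1h", "6h")):
    """Emit one alert rule per burn-rate window, each carrying a window
    label, instead of one rule that ANDs both windows together."""
    return [
        {
            "alert": f"{service}_error_budget_burn",
            "expr": f"{service}:error_ratio:rate_{w} > {threshold}",
            "labels": {"window": w},
        }
        for w in windows
    ]
```

Separating the windows lets each one later get its own threshold or its own sample-count condition, which is exactly the flexibility the next step needs.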
G: So this is the one-hour alert and this is the six-hour alert. When I was testing, these didn't actually have any data, so I changed it to a threshold an order of magnitude lower, and we can see that we don't actually have any one-hour alerts here, but we do have a six-hour alert that would have fired with this. And there's my one-hour alert, and the six-hour one; the shape of that looks the same. So that's the first step. The next step is to...
G: ...allow, again the same as Andrew has done for Gitaly with node-level alerts, instead of saying that this is a one-hour alert, saying that this is a however-many-samples alert. Do you remember how many samples it is, Andrew, off the top of your head?
G: We can configure the sample rate. So, yeah, once we have that, we can say that this Sidekiq alert should fire over a three-day window, given this number of samples, rather than given this...
G: Oh, it's a minimum sample rate, so it should be fine, actually. So, yeah, I'll have to have them...
G: No, I think, yeah, it'll have to be low for the cron jobs, because, I guess... well, we have, I think, one weekly cron job, but most cron jobs will run once a day at a minimum, so that will probably have to be a very, very, very low requests-per-second to match. Yeah.
C: I mean, I think three... like, if we're saying three days and you've got three samples, that's probably too low. I suspect... can we not make those once-per-day jobs go to at least twice or three times a day? Like, if we started off with, say, 10 samples as a minimum and then reviewed what was still getting missed, and then...
G: Some of them, like, in that domain, will have to be daily, because they might be... I think there's one that sends out emails for issues that are due the next...
B: But I've noticed that we have a lot of these jobs that we don't see, but that actually do something once a day: fetch a bunch of records from the database and then process them in a loop. And one of the suggestions I made is to make those one job each, because it won't scale forever.
C: There's one other thing we could do if it becomes a real problem: we could also go with a 30-day window. We measure over 30 days, and then, you know, your minimum sample of 10, well, 30 samples over 30 days would be something that we can do, right? But, yeah, you'll find out quite late if it's really...
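The trade-off being discussed is just arithmetic on the window length: a minimum sample count implies a minimum average request rate before the alert can ever qualify. The numbers come from the conversation; the helper is hypothetical:

```python
def min_rate_per_second(min_samples, window_days):
    """Average requests-per-second a job needs so that `min_samples`
    operations land inside the alerting window."""
    return min_samples / (window_days * 24 * 3600)

# A once-a-day cron job averages ~0.0000116 req/s. With a 3-day window
# and a 10-sample minimum it can never qualify (only 3 samples fit),
# but over a 30-day window it accumulates 30 samples and can.
```

Which is exactly the downside raised next: the longer window admits slow jobs, but a failure is detected much later.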
C: You would find out, but, yeah, maybe we'll just have to see how noisy it is, I guess, and then that would be one of the things that we could possibly use. Yeah.
G: Cool, I'll take a look at that, and I'll pass it back to you now that I've actually managed to commit all those changes. Without the... do you know what causes the weird thing where the git, api, and web services always have a diff when you run `make generate`? Anybody else see...