From YouTube: 2020-02-07 Background jobs improvements demo
A
Okay, this is the scalability background processing improvements demo. I'm gonna start with what I have been working on: this queue selector syntax, which Andrew actually wrote; I've just been sort of tidying it up. So I'm just gonna share a terminal, because I think that's all I need. So basically, the idea is that at the moment you can do...
A
You need to know what the queue is called, and we have a lot of queues; like, we have too many queues. So what we really want to do is be able to select queues by their attributes, which is what this new syntax does. At the moment it's just on this branch that I'm on. Also, for confirmation: this is sidekiq-cluster.
A
The reason it responds relatively quickly there is because it doesn't need any of the GitLab services to actually run in order to spin up its child processes; the child processes need that, but I'm using --dryrun here. So we have this queue query syntax option, which, I haven't come up with a better name, and it's quite hard to type, but you won't have to type it very often, because you'll configure it in gitlab.rb and then forget about it.
A
So you have this syntax. So we could say, like: let's take all latency-sensitive workers. The basic syntax is: this is a term, so we're querying on this field, latency-sensitive. At the moment you can query on that, on whether it has external dependencies, on its resource boundary, and on its feature category. So we could say...
A
So, yes: this means anything that's in the set represented by lfs and source_code_management, so any worker that's in either of those feature categories. Then, to put two terms together, you (whoops) just do this. So let's say source code management and latency-sensitive; that's a comma, yeah.
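The set semantics described above (comma joins values within a field as a union, and terms combine as an intersection) can be sketched like this. This is a minimal illustration only, not the actual sidekiq-cluster implementation; the worker names and attribute table are made up:

```ruby
# Hypothetical worker attribute table; names are illustrative only.
WORKERS = {
  'project_export'    => { feature_category: 'source_code_management', latency_sensitive: false },
  'post_receive'      => { feature_category: 'source_code_management', latency_sensitive: true  },
  'lfs_object_upload' => { feature_category: 'lfs',                    latency_sensitive: false },
}

# "field=a,b" selects workers whose field is a OR b (set union);
# space-separated terms are ANDed together (set intersection).
def select_queues(query)
  terms = query.split(' ').map do |term|
    field, values = term.split('=')
    allowed = values.split(',')
    WORKERS.select { |_name, attrs| allowed.include?(attrs[field.to_sym].to_s) }.keys
  end
  terms.reduce(:&)
end

select_queues('feature_category=lfs,source_code_management')
# => ["project_export", "post_receive", "lfs_object_upload"]
select_queues('feature_category=source_code_management latency_sensitive=true')
# => ["post_receive"]
```

The comma behaves as OR only within one term's values; intersecting the per-term result sets gives the fixed AND-over-OR precedence discussed later in the demo.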
B
The comma was because I wanted the old syntax that we used, of just queue-name, comma, queue-name, comma, queue-name, to kind of work in the new world. But actually, if we've got to put it under a different flag, you don't need that, and so it'll probably be much more worthwhile to have something that is apparent and obvious to people using it. Yeah.
A
Or latency-sensitive; that's a space. And again, this kind of makes sense, but I don't know if that's just because I've been working on it, because you're sort of describing sets, right? So, like, the first set, the first query there: the first query is, you know, something comma something, something comma something, and then the next one is followed by a space. Now, I think at the moment I've documented it as experimental.
A
Then I'll get two processes, one of which is running project export, which is the only source code management memory-bound queue, and the other one is the only queue that's in LFS objects. So when we configure this on our production environment, these won't be different, because, like, you know, that's not how we configure things.
A
We
have
nodes
dedicated
to
these
things,
but
it
is
possible
you
might
want
to
do
that
if
you've
got
a
more
constrained
environment
and
the
main
benefit
of
this
is
that
it
gives
the
infrastructure
team
a
lot
more
flexibility.
So
we
already
know
that,
like
our
pool
mirror
jobs
are
the
ones
that
run
what
is
it
like
external
dependency
source
code
management
jobs?
Probably,
but
we
can't
express
that
we
have
to
like
every
time
if
we
add
a
job
that
does
that
we
have
to
go
and
update
it.
So
no
here
we
can
just
write.
A
Now that it's all in the YAML file, it's kind of easy to just spin through and look for a worker that you want anyway, so yeah. So basically, Andrew wrote this a while ago; I've just been tidying it up. The documentation is already in the MR as well, which is assigned to you, so you can tell me if the docs are good enough or not. Yeah.
A
So if you run this, it'll check... it basically just generates the file. It's like the task you added for the PO files, right? It just regenerates it and checks if it's the same after regeneration; if it's not, the build fails. So it's exactly the same as that. So yeah, it's not actually done yet; it's not been reviewed yet, but yeah.
A
That's
it
if
you've
got
a
suggestion.
I
don't
want
to
bite
you
too
much,
but
if
you
do
have
suggestions
for
like
you
know
better
characters
for
the
operators
or
a
better
name
for
the
command-line
argument.
Now
it's
probably
the
best
time
to
do
it,
because
now
is
the
easiest
time
for
me
to
rename
a
bunch
of
stuff.
B
I think it would be really important to run it past, I know, Craig, on the original... the original MR, or the original proposal. He kind of said: yeah, it's kind of surprising, but I can understand it once I understand it. But John, I don't know if you have any sort of... because obviously what we want is for, like, operator-type folks to be able to grok this, because they're the ones who ideally should be setting, you know, these selectors. So it'll definitely be good to get that.
A
I found this when I started working on it: like, you know, I'd sort of read the issue, but hadn't really thought about it very much, and, like, honestly, it didn't take me very long; it was pretty obvious to me. Okay, all that means is that... I think it's because it's so minimal; like, you know, there's, like, four or five pieces of syntax.
A
That's covered in the docs as well, about the precedence, because that's the most important thing to me: the precedence is absolutely fixed, and you cannot, yeah, change it. And if you want something that's not there, we should... you should probably make the attributes richer, rather than change the language, yeah. That's what I think. For instance, one thing that you can't query on right now, which I think you probably should be able to, and I've asked you about it in the MR, Andrew, is the name, because you might want to pick an exact one.
A
Oh, the other thing I should have mentioned is, because of the way this is implemented, this works just fine with the negate option as well. So, like, you know, you don't have to try and figure out, like, how do I negate all my selectors. You can't do, like, NOR and stuff, but you can just negate the lot, yeah. So, yes.
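Because a selector just produces a set of queues, negating it falls out naturally as the set complement over all known queues. A toy sketch (the queue names are illustrative, not the real list):

```ruby
# Toy sketch: negating a selector is the complement of its result set
# against the full queue list, so the negate option composes with any
# selector for free.
ALL_QUEUES = ['project_export', 'post_receive', 'lfs_object_upload'].freeze

def negate(selected)
  ALL_QUEUES - selected
end

negate(['post_receive'])
# => ["project_export", "lfs_object_upload"]
```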
B
It's all those horrible names, like: what is real-time versus ASAP? Nobody knows, right? And so, instead of trying to reform those, we'll create one called, like, latency-sensitive CPU (I mean, that's also a horrible name, but maybe something like that, that's a little bit more obvious), and then we'll, you know, basically launch that, and it will kind of compete with real-time and ASAP to kind of pick jobs out of their queue, and then we'll kind of ramp it up and we'll shut those ones down. That's how I imagine we'll do that.
B
...for that sort of workload. So, you know, we already use different ones for the export jobs that have got lots and lots of memory, and, you know, the ones that use lots of disk, just like we've been discussing. Those ones would obviously go on a pod that's not got, like, a big temp disk. I mean, I think the default temp disk is, like, 80 gigs or around that, so I kind of imagine that'll be fine almost anywhere. Thanks.
B
So I still feel like we should be... we should be cleaning up after the application. Like, what I would rather do in that case, because that's...
B
...it's like, we can just say: maximum number of requests, and then we just basically stop the health check after that. So, you know, once you've reached, like, 500 requests, we just stop responding to health checks and make sure that we get shut down in a nice way, so we can kind of finish the jobs that are running and not take any more.
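The max-requests idea above can be sketched as a small counter that flips the health check once the budget is spent; the orchestrator then drains and replaces the process gracefully. This is a hypothetical illustration (class and method names are made up, not any actual GitLab code):

```ruby
# Sketch: a process-wide request budget. Once the limit is reached,
# the readiness/health check starts failing, so the supervisor stops
# routing work here and eventually restarts the process, letting
# in-flight jobs finish instead of being killed.
class RequestBudget
  def initialize(max_requests)
    @max_requests = max_requests
    @count = 0
  end

  def record_request!
    @count += 1
  end

  # Wire this into the health-check endpoint.
  def healthy?
    @count < @max_requests
  end
end

budget = RequestBudget.new(500)
499.times { budget.record_request! }
budget.healthy?  # => true  (499 requests served, still under budget)
budget.record_request!
budget.healthy?  # => false (budget spent: stop accepting work)
```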
D
I'm going to... let's just try to share a single window, if that works. The merge request that I've opened up yesterday is basically an illustration of the work that we have been doing together last week, and I think it was during the demo last week, or two weeks ago, that we were discussing the syntax...
D
...that we were going to use to schedule jobs in bulk, stuff like that. For now, I'm going through all of the cron jobs to see if they schedule jobs and, if necessary, create an issue, or fix it myself if it's small or not. This was an instance where I thought it was simple enough to deal with myself, and so, for all cron jobs...
D
We've
now
have
them
this
kind
of
disabled
cop
and
when
you
add
complex
to
the
job,
we
don't
need
to
disable
the
cop
anymore,
and
then
things
go
well
and
the
main
thing
that
you
need
to
do
is
like
always
provide
a
project
of
some
kind,
since
the
application
context
will
then
fill
up
everything
else
that
it
needs
to
the
plan.
The
namespace
all
that
stuff
and
the
annoying
thing
is
that
we
need
to
make
sure
that
we
don't
cause
extra
queries
from
sidekick
little,
whereby
pre-loading
all
the
resources.
D
The two kinds of things we have are ones that schedule a job for each project, or whatever it is we want to handle, and then we've got a bunch of instance-wide things. And here, these are the jobs that currently don't have context. This is over the past seven days; that's not a good...
D
Do
the
last
24
hours
because,
for
example,
this
one
should
be
fixed
already
and
yeah
it's
gone,
and
these
ones
as
well
this
one
and
this
one
as
well
to
fix
by
adding
them
to
the
runner
API
that
wasn't
using
complex.
Yet
so
then,
there's
yeah
I'm,
looking
here
at
the
ones
that
are
highest
on
the
list
to
get
rid
of
those
sooner
and
how
many
are
there
in
total
Andrew?
Do
you
have
an
idea
of
how
I
can
count
here
you
want
without
you
know
anything
yeah.
B
I had a call with the Kibana PM a few years ago, and I framed it as, like: look, people like me are your company's biggest champion; like, you know, since 2011, every company I've gone to, Elastic is like, you know, you really want to kind of... And I went through, like, literally hundreds of little niggles exactly like that, and the guy was just so happy.
B
He
was
like
you
know,
because
I
think
that
I
think
they've
kind
of
lost
either
user,
like
our
kind
of
user
is,
you
know
so
I
thought
that
was
really
really
interesting
once
more
time.
He
wants
to
do
three
more
stuff.
So
if
you
have
anything
particular
like
that,
you're
not
possible
cool.
D
So the things that were scheduled within 24 hours that don't have context, so from before this change was deployed (it's deployed already), are showing up there. Then we also have the cron jobs themselves, because the moment they start, they don't have context, and we've got, I don't know how many of those. Yeah, so there's...
C
You know, a Go profiler, like Stackdriver: so this just starts the process when we call the Serve method of the monitoring package. So the idea is that we don't really need to change the users of LabKit in order to actually start this profiler; the whole thing will be configured by this environment variable. The idea is passing a bit, like, "I want continuous profiling", with a few parameters, and also we can just conditionally compile the binary.
C
So
there
is
that,
okay,
if
we,
if
we
don't
want
to
use
the
piece
library
with
a
bunch
of
dependencies,
we
can
just
add
a
butte
thing
here
and
we
pass
attack.
If
you
don't
pass
this,
this
tag
continuous
profile,
it's
that
title
we
just
skip
the
whole
file.
That
has
quite
a
few
dependencies
like
the
cloud.
C
...cloud.google.com's Go profiler, and a few other things. And we use that here, and we use that in the tests as well. And since we use this profiler function here, in the Serve function of the monitoring package, we need to define these. This isn't like a new thing; we have the same with the new tracer: a blank version of the function for when the real one gets skipped by this thing that we added here, the build tag. So we had to define a profiler...
C
That
does
nothing.
So,
basically,
if
you
want
to
skip
this
one
to
be
doing
F,
nothing
here
so
yeah,
that's
basically
it
we
have
just
to
like
basic
functions.
Want
you
to
fetch
a
few
parents
from
the
the
actual
my
game
environment
and
the
other
one
should
just
initialize
the
profiler
very
basic.
Now
we
are
just
supporting
the
exact
driver.
So
if
we
decide
like
improving
the
issue,
okay,
let's
use
the
stack
driver
or
other
thing.
We
can
iterate
on
that
and
prove.
C
Ideally,
we
won't
like
start
two
different
profilers
at
once,
because
we
don't
like
a
little
bit
concerned
about
overhead,
since
it's
at
least
the
stack
driver
claims
that
they
pick
like
five
percent
overhead
over
the
the
go
process.
So
we
need
to
to
monitor
this
a
little
bit
to
see.
That's
that's
really
what's
going
to
happen,
but
anyway,
that's
that's
the
whole
idea
of
this.
This
change
here.
B
My first question... it's not really a question, but it's just something I think we should do, and that is, at this point, set expectations about how much bigger the binary will be. So: go get Workhorse, vendor this in, and then build Workhorse with the tag off, and then build Workhorse with the tag on, and see the size difference. And then maybe just, like, see if there's a difference in, like, the size of the binary... like, the executable when it starts running (I don't think there will be), you know, how much memory it takes.
B
Yeah
yeah,
both
of
them
men
we
come
memory.
Consumption
should
be
too
hard
because
the
way
compiles
already
got
one
of
them,
so
you
can
just
use
like
RSS
for
the
memory
consumption.
It
should
be
good
enough
I
as
a
sidenote
orestes
is
generally
a
terrible
way
of
measuring
memory
usage.
But
but
for
this
this
example
is
fine.
B
The
main
thing
I'd
be
interested
in
is
how
much
bigger
that
binary
is
because,
okay,
what
I
think
we
should
do
is
so
what
we
did
with
the
tracing
was
we
just
enabled
Jaeger
by
default
in
workhorse
binary,
so,
like
99.9
percent
of
users
on
your
genome,
you
see
a
go
to
instrument.
There
get
lab
instance,
but
we've
just
stuck
there
anyway,
and
you
know,
let's,
let's
take
a
look
at
this.
B
The
alternative
is
that
we
start
building
our
own
custom
binaries
for
big
lab
comm
that
have
like
the
build
tags
that
we
want
in
it,
but
that's
obviously
godlike
because
then
we
can't
just
use
the
standard
on
the
bus
and
that's
got
like
a
whole
bunch
of
downside
to
it.
But
we
could
do
interesting.
Stuff
like
like
I,
would
very
much
like
to
have
a
Ruby
that
has
distributed
tracing
points
from
pollen
and
that's
a
custom
thing
for
Ruby.
B
You
can
say
you
know
basically
compiled
and
then
you
can
get
a
whole
lot
of
extra
stuff
out
of
maybe
just
using
the
trace
points
that
they
combined
in.
But
you
know
in
general,
like
that's,
gonna,
be
a
whole
or
a
work
like
that
would
be
a
big
piece
of
work
to
get
custom
builds
of
get
lab,
so
I,
don't
think
that's
gonna
be
necessary,
but
knowing
how
much
bigger
we're
making
binary
is
worth
putting
into
that
discussion,
yeah.
B
What we've done, Shawn, what we've done with Jaeger is that when Omnibus builds Workhorse and Gitaly, it has the build tag for Jaeger enabled, you know. So it's kind of one of the default build tags, so you can switch it around and make it so that the default is that it builds it in, yeah.
B
Gili
server
has
had
20
errors
in
the
last
like
1
minutes
or
last
5
minutes,
and
the
thing
is
that,
like
20
errors,
it
could
be
that
the
clients
gonna
wear
it
with
me,
client-side
error
and
so
those
alerts
that
we
get
for
the
giddy
nerds
I
kind
of
like
really
old
school
alert
before
we
started
focusing
on
on
the
SL
eyes
and
everything
like
that
and
they
and
they
were
also
an
absolute
value,
not
a
rate.
So
20
areas
out
of
10,000
messages,
probably
send.
B
Suddenly
you
want
to
wake
a
an
operator,
but
you
know
20
out
of
40
is,
is
really
high
and
so
I
really
wanted
to
get
rid
of
the
alert.
But
I
realized
that
concretely
do
that
until
we
are
monitoring
each
individual
given
note
because
they're
now,
like
50
killing
notes,
and
so
one
of
them
can
be
pretty
much
failing
everything
and
taking
too
long
on
the
uptick
scores.
B
But
in
general
the
the
service,
the
giving
service
would
be
fine,
because
one
out
of
50
years
is
too
small
to
actually
dent
the
headline
figure
and
because
that
1
out
of
50
impact
the
users
who
are
experiencing
a
bad
time
on
that
server
like
we
need
to
have
nerd
little
monitoring.
And
so
what
this
change
is
about
is.
B
So now, by enabling this node-level monitoring, it basically automatically generates a new set of aggregations. So we always aggregate these up to the service level. So, basically, we know that, for the Gitaly service on average, the apdex across all 50 servers is, like, 99.5 percent, and if it goes below 99%, or actually I think it applies below 95%, we will get an alert. And we should actually push this up, because this was pushed down heavily during the troubles in November/December, and this is actually very low for Gitaly.
B
No,
you
should
push
it
back
up,
that's
kind
of
dope,
and
so
what
this
does
is
by
adding
this
node
level
monitoring.
It
adds
a
whole
bunch
of
new
recording
means
that
will
basically
aggregates
to
the
level
of
a
other
server,
and
so
this
is
auto-generated.
Files
of
this
kids
generated
key
metrics
gets
generated
automatically.
B
Shard is actually now one of the top-level things, so most have just got shard equals main. But the idea of shard is that the things that a shard is running, all the different deployments in a shard, all the different instances in a shard, are running the same version of the code, right? Whereas canary is running a forward version, so it's different. A shard is, like, an isolation bucket; so, you know, like, the marquee customers are a shard, and the...
B
You
know
it's
not
it's
not
like
a
different
deployment,
which
is
what
we
get
with
stage
so,
but
but
it
is
a
big
Christian,
but
because
the
way
that
we
do,
that
is,
we've
got
this
thing
that
renders
this
file
and
at
the
moment,
that's
just
hard
coded
in
this,
but
because
it's
a
general
label
that
we
have
on
all
about.
B
So
we
can
just
say
here
we
say
node
level
aggregation
labels.
So
then
all
we
do
is
we
say
if
this
has
got
node
level
metrics
enabled
we
just
add.
You
know
we
just
aggregate
over
this.
The
there
is
a
question
now
like
as
we
transition
over
to
cube,
annuities
and
pods
like
like
at
the
moment.
All
the
stuff
we're
doing
is
very
much
cattle,
and
so
you
know
we're
probably
not
going
to
be
doing
node
level
monitoring
on
it
but
like
in
some
future
work.
B
Maybe
we
have
like
you
know
these
pets
that
are
running
in
kubernetes,
like
maybe
a
given
you
deploy,
and
in
that
case
fqdn
wouldn't
work
because
the
Kuban
it
is,
you
know
the
psychic
instances,
don't
have
a
fkn.
They've
got
a
pod
pod
name,
but
they
don't
have
a
gideon,
and
so
maybe
we
want
to
kind
of
roll
it
up.
But
the
thing
that's
great
is
that
it's
all
just
generated
not
right.
B
Exactly
and
also
when
we
change
something
in
a
service
level
like
to
say
you
know,
if
you
know,
we
can
ignore
all
errors
from
this
thing.
You
know
from
this
method,
for
example,
because
it's
a
really
noisy
method
that
will
apply
the
service
level
and
at
this
level
we
at
the
moment,
there's
like
two
different
rule
sets
that
we're
using
for
alerting
on
the
node
level
and
the
service
level.
So
it's.
B
So if we go check and look at... so, this is one for... what is this? This is Gitaly-Ruby, or particularly the Go server, because generally that's more interesting. So this is an apdex score for the Go server, and you can see it's pretty complex now. But, like, what's nice is that there's a single source of truth for that, rather than trying to maintain, like, 20 different copies, yeah.
B
I mean, it's probably easiest just to take this. So, one of the things that's really interesting about this is that there is this one server that's doing really, really badly, and when this rolls in, when the alert rolls in, at least we'll have to put a silence on this. But I do think it's worth investigating why praefect-02 (not praefect-01, just praefect-02) only gets its... you know, it only makes it, like, fifty percent of the time.
B
There's definitely something there. And also, like, the other thing to keep in mind is that this is pulling down the entire service's metrics, right? So we should understand why it's doing this, yeah. And so, you know, the other way that this is done is you just simply take off these things. The other reason why I'm quite happy with this is that... so, the recording rules that we use on GitLab.com (so, you know, all about dogfooding), the recording rules that we use to monitor GitLab.com...
B
We
can't
take
and
put
into
the
product
because
the
product
doesn't
have
the
same
labels.
So
you
know
if
you,
if
you
give
gitlab
to
a
customer
and
the
customers
running
things,
they
try
aggregate
by
environment
stage
tier
type.
You
know
they've
just
got
like
a
good
lab
instance,
and
so
because
previously
we
used
to
hand
roll
all
of
these
queries
for
the
recording
rules.
B
It
wasn't
possible
for
us
to
just
generate
these
and
like
actually
put
them
into
the
product,
and
this
is
like
the
first
time
that
I've
used
it
to
kind
of
generate
a
whole
different
set
of
recording
rules
with
different
labels.
B
This is the way we should be doing it, rather than, you know... like, I see some of the rules that we ship with the product, and they're not fantastic; like, research how you can use them: they're super noisy. And so it's kind of, like, a bit of... it's proof that the reverse dogfooding effort, of taking what we're doing and pushing it back into the product, is viable, I think. So this is kind of in the middle of that; so, like, here, this is what it would look like.
B
Yeah, so canary... we alert on... so, before, if any service-stage combination goes below its SLO, we'll alert on it. And now we only alert on canary, the bad-canary alert, if canary is firing, that is, canary is breaking the threshold and production isn't. And then we only do that if that stage is receiving more than 1 percent of the traffic of production; so, kind of, because there's been quite a lot of those bad-canary alerts, and this is just chopping those down.
A
Sorry about the break in the recording there; I hit a keyboard shortcut, and the program stops the recording, apparently. So, yes, no, that makes sense; thanks for explaining that. Sorry, that's completely unrelated: we've got to really limit the time on the discussion items, because we've got under ten minutes left.
C
My point is about the idempotency. I didn't put it here for a... it's just a discussion about, like, a summary of what the plan could be to improve the idempotency a little bit, and the tests that we are planning to do on Sidekiq jobs. I think we can discuss it a little bit async on the issue itself; it's just a list of things and ideas that we can do. So, if we want to discuss quickly, this one...
A
That's what I mean: like, I think it's gonna be hard to get to this point, especially because some of the jobs are things like, you know, send an email. So we have to build a whole bunch of extra machinery around, like: well, we've already sent this specific email, so we need to not send this specific email again, but we might need to send an identical email, like, two minutes later. So it's gonna take a while. But yeah, thanks; as well, though, let's maybe add that to the Monday call, and we can hopefully spend some time on it.
D
Just, like, storage came up twice already: storage used on a Sidekiq node. And I was just kind of brainstorming on how important this is, to add this as a resource boundary, if it is one. And I don't really know how storage on a Sidekiq node is currently used, which was a bit surprising, so yeah, I just wanted to bring it up.
A
This
is
this
is
the
thing
that
I
think
is
kind
of
fun.
Like
you
know,
I
certainly
enjoyed
some
was
reading
through
camels.
In
our
and
like
you
know,
then
the
issue
and
I
was
like.
Oh,
we
could
do
this,
like
maybe
there's
other
things
we
could
do,
but
I
also
wonder
if
it's
a
little
bit
of
a
trap,
because
we
don't
have
that
many
workers
that
need
that
much
storage
at
the
moment.