A
It was, although I believe that there's a shortage of Marmite, so it might be one of my last Marmite toasts for a while.

What's Marmite?
A
Rachel, what it's made of is: when they brew beer, it's the sludge, the yeast that they don't use to brew the beer. They send that off, and then they ferment it some more, and it's this black... When I was a kid I used to call it black jam, because that's the best description of it. It's like black, like...
C
Two desktops... can you see my screen? Yeah? Okay, so we're going to be measuring Apdex counters, and that's going to be a total counter and a success counter, and now we want to end up using those as service level indicators for our services.
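To make the counter arithmetic concrete, a minimal sketch; the class and field names are invented for illustration, not taken from any codebase:

```python
# Minimal sketch (hypothetical names): an Apdex-style SLI derived from a
# total counter and a success counter.
from dataclasses import dataclass

@dataclass
class ApdexCounters:
    total: float    # every measured operation in the window
    success: float  # operations that met their latency target

    def ratio(self) -> float:
        # Treat "no traffic" as fully successful rather than dividing by zero.
        return self.success / self.total if self.total else 1.0

print(ApdexCounters(total=1000, success=998).ratio())  # 0.998
```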
C
So we have a way of defining indicators inside our service catalog, and that roughly looks like this, actually. So you can have an Apdex, an error rate and a request rate: bits that you specify. How do we end up plugging that in? Like, how do we want to plug that success rate into the service level indicators? There are two things that I thought of. The first one is: we just keep it as Apdex, the way we have it now, yeah.
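For illustration, a definition of that shape might look roughly like the following; the key and counter names are invented and are not the actual catalog schema:

```python
# Hypothetical service-catalog entry: three indicators for one component,
# each pointing at the counters that feed it (all names are illustrative).
puma_indicators = {
    "apdex": {
        "successCounter": "puma_apdex_success_total",
        "totalCounter": "puma_apdex_total",
    },
    "errorRate": {
        "errorCounter": "puma_errors_total",
        "totalCounter": "puma_requests_total",
    },
    "requestRate": {"totalCounter": "puma_requests_total"},
}
```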
C
It should also... I already think that we use the histograms as counters farther down the line, so we just need to make it look the same, and then it should just work, yeah. But I was thinking: would success rates be useful elsewhere? Because we want to use the method that we added there. The thing that is supposed to end up in LabKit should be a way for groups to define an SLI for anything they want. Yeah, but I can't think of anything that they would want that can't be expressed as an Apdex or an error rate.
A
Yeah, I mean, I was thinking about... and this is just another example of something to think of, but we're talking about this front-end observability working group, and what I've been saying is that they should have SLIs. And so I was... but I guess it's also an Apdex. I was just thinking about how long it takes, that mergeability widget. That's something that bugs me a lot at the moment, because it's flaky, and having that, like...
A
I definitely like the fact that if we went down that route, it would put us more in line with the standard. But the thing is, if we do that, maybe we should stop referring to Apdex and error rates as separate things; they're all just SLIs, right? Yeah.
C
We add a new SLI next to the one we currently have, that's called... yeah, here I've called...
A
Yeah, so what about... and I'm just thinking aloud now: what about if we had it so that they had the same name? You could have two things that had the same name, but they had a separate, second thing which could then be called... So it's called puma, but then we have apdex and error as a second label. I don't know what the name of that label would be: "sli type", I think, something like... I don't know, because the reason is, like, on a lot of dashboards...
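Sketching that thinking-aloud idea with names that are purely illustrative: the two SLIs share one series name, and a second label tells them apart, so a dashboard selects on a label filter rather than on two different metric names.

```python
# Two series, same name; only a second label (called "sli_kind" here, purely
# as a placeholder) distinguishes the Apdex from the error rate.
series = [
    {"name": "sli:rate", "labels": {"component": "puma", "sli_kind": "apdex"}},
    {"name": "sli:rate", "labels": {"component": "puma", "sli_kind": "error"}},
]

# A dashboard can then pick either kind with a label filter.
apdex_series = [s for s in series if s["labels"]["sli_kind"] == "apdex"]
```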
A
People do... and especially, I'm talking about SREs here: they respond differently to 500s and to slowness, right? So if we have a way of separating things that are errors from things that are slowdowns... Whether or not that's a good thing is a separate discussion, but culturally people respond to them differently.
A
Right. And, you know, the conversation that happened in the daily stand-up yesterday... and again, I don't know whether this is a good or bad thing, but people were saying: oh well, the reason our error budget is so bad is all down to Apdex.
C
Like, right now we have the keys: total (well, request count), error rate and Apdex. So for the new SLI, I would add the success-rate kind of key that you could have there, and then you either have success rate or error rate. And for new SLIs, I would argue to separate the things, so you don't have Apdex and error rate next to each other.
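A sketch of that separation, with invented key names: an existing SLI keeps its error rate, while a new SLI declares a success rate instead, the two keys being mutually exclusive.

```python
# Existing SLI definition: request rate plus an error rate.
legacy_sli = {
    "requestRate": "puma_requests_total",
    "errorRate": "puma_errors_total",
}

# Proposed shape for new SLIs: a success rate instead of an error rate.
# A definition would carry one of the two keys, never both.
new_sli = {
    "requestRate": "puma_requests_total",
    "successRate": "puma_successes_total",
}
```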
A
So we can do a general case, and then things might get a lot simpler, and we only have to do one big flip of everything for the SREs, where we say: there is no longer an error rate; everything is a success rate, rather than having "these things are success rates, and these things are error rates". So...
C
You'd rather not have the things live next to each other for a while, is what you're saying.
A
Yeah, I think so. As a first step, we could almost take all the error rates that are error rates in the SLI metrics and flip those over to success rates, so that we only have success rates, and we just do that using arithmetic, not using labels. Because that's like a whole bundle of... And then a whole lot of things become kind of simpler, around, you know, all the things about how we do an SLO.
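The arithmetic version of that flip is a one-liner; a minimal sketch, assuming the error and total counters already exist:

```python
# Flip an error rate into a success rate "using arithmetic, not labels":
# the underlying counters stay exactly as they are.
def success_rate(errors: float, total: float) -> float:
    # No traffic counts as fully successful, to avoid dividing by zero.
    return 1.0 - errors / total if total else 1.0

print(success_rate(errors=4.0, total=2000.0))  # 0.998
```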
A
You know, there are two ways at the moment how we do the SLO alerting: there's the one approach that we use for Apdex and the other for errors, and those would just become one, and a lot of things would become simpler. And then adding new ones that are only success rates might be... I'm also thinking from an explaining-to-people point of view. Like, imagine a new SRE, or a new person, came and joined the team. We'll say: well, we've got Apdexes, and those are measured as success rates.
A
We've got errors, which are the inverse of that, and then we have this bag of other things, which are saturation. You know, it could be Apdexes, and... and those are success rates. And so there's quite a lot of cognitive overhead in understanding it. Whereas if we bring them in line first, and then just have that, that brings us in line with the industry, which is really nice.
A
Yeah, well, they are, but not for the SLO stuff, right? Like, well, you know, the way that we... it's total successes over... you know. So we tend... The other thing you can do is, if you bring them all in line, you can always have the graphs flipped around, right? So you can, if you have the metrics in... Oh, there's one other thing I'm thinking of now: we're going to have to run this in parallel. We can't just flip everything.
A
We need at least 30 days of history. Well, I'm more thinking about the SLA charts: those will all break and lose all their history, so we'll definitely need to run those in parallel.
C
You really think that we should do this before anything else?
A
And also we have all of our code... yeah. No, no, no, we do. It's just, like, if we're adding new things, and those new things are... Okay, so I guess this brings us all the way back to your original question, which is... The other way we could just do this is: we have some SLIs that are error rates.
A
Yeah, so... can I just add one more thing in there? Underneath rails, where it says service level indicators, can we have an attribute which is the service level indicator type, and then it says apdex or error, and...
C
Yeah, that's one thing. The question that comes with that, and it's partly answered, is: should this thing that we add here be used to get what threshold we use for it? Yeah? Yes.
C
...kinds of thresholds, and they all have a key we'll use.
C
But again, the thing that pushes me towards having the success rate here is that this one already is in that direction: we flip it at some point already, yeah.
A
The one is 99.8 percent, like in this exact example (this shows you how), and the others are four nines, right? So, like, as high as you ever want to go with an SLO. And so you can't really put it in the middle, because half of them are going to fire all the time and the other half are never going to fire. So what about... remember I said we have that "sli type"? And then that's the enum with your values.
A
Whatever we call that thing... I mean, "type" is just such a terrible label, but you know what I mean.
C
Yeah, yeah, because we already have a "type" label. "sli kind", yeah.
C
There's one more complication that we are going to run into later, and that's... let me show that in the editor.
C
And that's this: this one here is, yeah... the... yeah, flipped.
C
Those, because if we know that they're not... yeah, because if we do... if the SLI kind that we specify is error, yeah, like this, then we just write...
A
That refactor could take place in a vacuum, and it should be done, because it's just super weird, right? Like... well, it wasn't weird when the decision was made, if I can defend it, but it's super weird now. Because, you know, everyone talks about, like, 99.95, and there we were talking about five percent... well, not five: zero point zero five percent. And so we could do that as a first pass, where we just fix the contractual error threshold and make it a number of... not...
C
We already have the positive way of thinking for the Apdex. In that thing, I would argue that we do it just as part of this project. That's kind of a thing that the metrics catalog gets for free from Scalability that way, because we want to have this thing that we give to the stage groups.
C
We have a success-rate kind of thing, and that means that we can define more things as a success rate in the future. Like, we're going to have a transitional period and it's going to be annoying, and I think we, Scalability, should keep working on this the same way we keep working on the single keeper chart thing, even after we've done catch-all... like, even if, after...
B
The thing is, with error budgets... there is so much focus on error budgets right now, and on using them to provide guidance to the development teams for how to improve the reliability of the system.
C
But we need to be able to maintain that, and reason about it, from the infrastructure side of things as well. Because if we are using different... that's the thing that started this in the first place: we're using different indicators for alerting and monitoring than product is using for error budgets. So, yeah.
B
So bringing them all in line, using the same thing and having the same language throughout, I think, is really valuable. Because, you know, when someone new comes in and joins, and suddenly they're asking all these questions about how this works, we want the answer to be a simple, easy answer, rather than "well, you see, over here we go..."
A
Yeah, okay. And the other thing to keep in mind is, like, if we want to... what we could start doing is flipping those error rates over to success rates, but not relying on the data. Because then, at a later stage, if we want to flip, like, the SLA calculations, and, you know, those other things that are currently built on top of that, we've got data. So if we do that earlier on... you follow where I'm going?
C
Andrew, just a quick code thing: let's go to puma here. We've got Apdex, error rate and request rate here. What I would like to do... we have... yeah, yeah.
A
So that's... that inverts it. That rate metric, yeah: that's got to have the top and the bottom in it, right? Because you can't do the arithmetic without both.
B
No, this is good; I'm sure he'll be back soon. There's another half an hour anyway. But even so, what I was trying to say is: even if we do flip it to be success rates, we need to still communicate it as an error rate for a while, and then introduce the change at a convenient time. Like, if we suddenly change the messaging around it, it's going to be challenging to keep support for it. Well...
C
The thing is... and the same is going to happen while we do this: at a certain point in time, for the error budget, we're going to have two Apdex measurements that are the same thing, because... but...
C
Yeah, I don't think that the effect on the error budget would be huge while we do that flip, though. Because if it counts twice, it also adds an operation.
C
So if you have...
C
Yes, so the thing is that, with the little...
C
That... that I linked to you this morning: the two-on-two would become a four-on-four, or a one... or a zero-on-two instead.
A
So what I was going to say was that success rate... yeah. In order to do the arithmetic... if you've got an error rate, so say you've got the failure... the failure rates, yeah, okay. And without going through every one and carefully inverting that regular expression, for example... And, as we mentioned, you can't do that in every case, because for some of them we've only got failure counters.
C
We end up with the thing that we now have in several places. It would be...
A
Yeah, yeah: total minus error, over total. Or, change your brackets: one minus error over total. But same, same difference.
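Spelled out, the two bracketings are the same number; a quick check, with counts chosen so the floating-point comparison is exact:

```python
# "Change your brackets, same difference":
# (total - errors) / total is the same number as 1 - errors / total.
total, errors = 256.0, 16.0
assert (total - errors) / total == 1.0 - errors / total  # both are 0.9375
```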
A
It's the same number; it's just... it's just harder to use!
A
Yeah! It's weird! It's quite strange, because I find that one easier to reason about, but I can totally see why you find it the other way around. I guess it's just...
A
Is having... so where does that go? How do we distribute things like this? Well, actually, it'll be really nice for significant labels. Because at the moment, the significant labels get applied equally to the Apdexes and the errors, and sometimes the Apdex has one and the errors don't. And so at the moment it's kind of like: well, you just have to suck that up. But now they can have different sets, in the same way.
C
...is a single thing, a total and a success, yeah. And it has a kind, with significant labels, and all of the SLIs have that, and we could have a base per service. Okay, so that allows...
C
So, in the end, we would then go to a place where each component only has, like, a success rate and a total rate kind of thing. Makes sense or not?
A
Well, you'd still need to... like, it's fine as long as I can drill down into logs from an SLI, you know: appropriate, relevant logs. And also, the other thing that I'm really looking forward to is to go to a dashboard, sort of a software view...
A
Scheme... yeah, okay, cool! Let me... let me do that quick, if you...
A
Color scheme? No... no! I actually prefer the light color scheme, but all the colors... I don't know if it's because I'm color blind, but they don't work well at all on a light background for me, so I can hardly see them. But when I originally built it, I built it with a light background, and then, whenever I went to any of the SREs' computers, it looked terrible, because they were all...
A
Yeah, and... and I don't know. I think it's worse for me than for most, but I really can't see anything on the light background with those, especially the yellow.
A
So if we go in here... this is just a small point, so I didn't mean to make it into a big thing, but we have this thing called Apdex attribution, which is kind of like a breakdown of what's costing us our error budget, by some dimension, right? And those dimensions come from the significant labels, right? So, you know, the more you get here, the more that particular thing, the combination of traffic and errors, is leading to those bleeds.
A
Like, you know, here's the one per method, so you can see, like, the total is whatever, probably 0.2 percent, but 0.19 of that is GETs, in this case, where we're going per method. And this thing I just did for Apdex, but it's equally possible to do this for errors.
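As a toy version of that per-method attribution (0.2 and 0.19 are the figures read off the dashboard above; the other rows are invented):

```python
# Toy attribution table: the overall bleed is the sum of per-method bleeds,
# so one dominant row (GET here) points straight at the culprit.
bleed_by_method = {"GET": 0.19, "POST": 0.007, "DELETE": 0.003}  # percentage points
total_bleed = sum(bleed_by_method.values())                      # ~0.2 percent
worst = max(bleed_by_method, key=bleed_by_method.get)
print(f"total {total_bleed:.2f}%, mostly from {worst}")
```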
A
We just... you know, it just never got put in here. But if you look at those Gitaly alerts that people are getting a lot of, right, this would just basically illuminate the problem straight away. So at the moment people are looking, and because they're looking at absolute errors, they sometimes get confused as to what the root cause is.
A
But this, you know, this really breaks down the error budget by what part of the application is causing you to bleed. And so, if we have a single one, we can just move this "Apdex attribution" to just, like, "error attribution"... you know, "non-success", whatever you want to call it, attribution. And then that'll be broken down by significant labels.
A
Yeah, yeah, no, totally. No, it's fine. But those would be really nice, because I think that when everything's got that, then you can really go to... The other thing we should look at doing, sorry, I'm jumping ahead here, but: we can do that kind of thing over a month, and instead of having a chart we can just give a table, so that you can say... like, someone says: oh, you know, we...
A
Are you talking about this? Is it that?
C
I can't see... all right. Well, no, it's not your fault; it's here, stuff is broken. No, I... yes, it's that.
A
Yeah, yeah, I am. Okay, so, if we had this as a percentage... Maybe the one takeaway on this, I'd say, is: if we can have this as a percentage. So, you know, people are saying "well, you know...", because it's a little bit difficult at the moment to convert that into... Obviously it's still ranked; it's still where people need to put the most effort, but we could also do it as a percentage. But yeah, that's great: we do actually have that.
C
Yeah, I was thinking to add two more columns there: a percentage and a total. But I hadn't gotten to it.
A
Yeah, yeah. The other kind of thing about the conversation with the totals is that I think it really helps to explain why some groups have got, like, really bad error budgets: because they just don't have traffic, and it's statistical noise. And so, you know, the database group were complaining about it, and it is just statistical noise. And so in that infradev report that we've got now, we include that. And I was even considering... I mean...
C
...deal with diff... like, they should be able to have a different target, because, well...
B
That is one of the things that I think Anup has proposed: that different teams should have different error budgets. But the stance that I took with them is that it would be better to have that set at the endpoint level rather than at the group level, because at the group level it has the opportunity to just sort of hide things; with endpoints we're looking at specific endpoints.
A
At the endpoint level, Rachel, I think it's really important that we can define the latencies, but not the error budgets, right, at the group level or the stage group level. One thing that we could consider, and maybe this is something where we should start shopping this around: there's something in the stages yaml, or one of those yaml files in www-gitlab-com, which is a product maturity matrix, I think it's called, and it's, like, all the different parts of the product and how mature they are.
A
Then it's in the interest of the product managers to increase their reliability and availability, and we're all working in the same direction, right? Whereas if this is detached from that... Firstly, I think it just makes really good sense from an organizational point of view that it becomes: well, your product is only mature if it's available, you know, frankly.
B
I agree with that, but one thing to consider is that that is also for self-managed, so they're looking at it from the perspective of how this matches up to other competitors and the features that are provided by them. But I agree that having this fit into what they're ordinarily doing makes a lot of sense.
A
Yeah, yeah. Maybe... I mean, it's a starting point. Like, maybe it would be its own thing, I don't know: dot-com maturity. But also, just...
A
Yeah... oh, sorry, no, but this is more like for the engineering allocation meeting, and this is more about getting people thinking about that. Sorry, this is... yeah. You know, we're not going to do anything on that, but it's good to start shopping that around.
B
Yeah, and I mean, the whole point of having this is to provide the teams with guidance on how to make stuff better, and I think all of these are helping us move in that direction.
A
Yeah, but the expectation that we must set with the teams is that they're not going to have control, at least fine-grained control, over their SLOs. But they do get control over what they consider to be satisfactory.
C
Yeah... sorry, go ahead. The one more thing is, like: right now the whole epic discusses, like, we're starting with a one-second default, like, we want your request...
A
Yeah, so I think that one possible way of doing that, Bob, is not giving people any number, but giving categories, right? So: fast, medium, slow. Actually, I'm really happy with how he did this with the Gitaly timeouts. So, yeah.
A
Yeah, and the other thing that's interesting about the Gitaly timeouts is that we have basically fast, medium, slow, but the fast, medium, slow values are different for Sidekiq and the web, right? So we can say fast, medium, slow, and for the web, fast is 500 milliseconds, medium is two seconds and slow is 10 seconds, I mean.
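A sketch of that scheme: the web thresholds are the ones quoted above, while the Sidekiq values here are placeholders, since they aren't given in this discussion.

```python
# Named latency categories instead of raw numbers: teams pick a word,
# the platform owns the thresholds, and the thresholds differ per runtime.
THRESHOLDS_SECONDS = {
    "web":     {"fast": 0.5, "medium": 2.0, "slow": 10.0},    # values quoted above
    "sidekiq": {"fast": 5.0, "medium": 60.0, "slow": 300.0},  # placeholder values
}

def threshold(runtime: str, category: str) -> float:
    return THRESHOLDS_SECONDS[runtime][category]

print(threshold("web", "medium"))  # 2.0
```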
A
And, you know, if we ever brought this into the product, you could find that a particular self-managed instance is saying: well, you know, for whatever reason we're getting too many alerts; we're running on, like, a 486 from 1988 and everything's slow. And you say: well, just, you know, change the threshold for "slow" from 10 seconds to 60.
A
No, no, no, but I think it should be... I think it should, if we can get it to be categories rather than numbers, so the fast, medium, slow rather than... So the question there is: you are speaking to stakeholders the most, and it's really important that we get buy-in, and so you need to figure out whether that is something that you can get buy-in on, because it'll make a lot of stuff easier. But obviously, if people reject it, then we've got nothing.
A
So we've got to kind of see. But I think it worked really well with the Gitaly timeouts: that's the deadline, right? Yeah. It gave people guardrails, and because you can't review every... The other thing that you can then start doing is, you can say...
A
Yeah, yeah. And the other thing, you know... this is a separate story, a separate thing, but we've spoken about, like, the criticality of things in the past. Or, I don't know, like, some things... we could possibly have rules where we say: you know, these requests are real hot spots in the code, and they can never be slow; like, you can never have...
A
If you encounter a slow SLI on this code path, then, you know... So imagine you hit authorized keys, or something like that, and that goes down into code, and somewhere in that code it hits an SLI that's got "glacial".
A
It would have to be dynamic, but it could be, like, a thread-local type, so the first path that you hit, you know, the Grape endpoint or the... you know. If that's... if that's...
B
Last error budget question before we reach time: we currently have an error budget for teams in Enablement as well. Global Search it makes sense for, but there's Database in there. Does Database need to have an error budget? I mean, they're quite different from an ordinary stage group.
A
So that goes back to... I think we shouldn't have... So, because we generate that from the stages yaml file, like, I don't think we should have things in there that are like "if group equals stage group, then leave it out". What I think we should do is just filter out, like what I was planning on doing on the infradev report.
A
You know, we have the table at the top: if you've got no incidents, and your traffic is below a certain number, and you've got no infradev issues assigned to your group... I was actually just thinking of leaving you off, because you can't really earn... okay.
A
Because the database stage group, you know, they only have the most minimal impact on that Patroni SLI. You know, a lot of it is actually on the data stores... well, the former data stores team, right? Like, that team aren't involved in the day-to-day running of the Patroni server. I think it'll be quite a hard sell, and I don't think they would buy into that at all.
A
We do, yeah, yeah. I mean, if we look at them, you'll probably find that, you know, ten thousand or, you know... it's just noise.
B
Well, the thing is, there were actually two questions wrapped up into one here. The first one was specifically about Database, but the second one was: even if a feature category is small and little and not well adopted, they should still have stuff that performs well, even in comparison to the grander scheme of things.
B
But if you have, like... I mean, having two requests and one is slow is quite extreme. But if you've got, say, for example, 500 requests and 100 of them are slow, do you spend time looking at your 100 slow requests?
A
It might be that they deal with customers.gitlab.com, in which case they should instrument that, right? But that's... that's a whole other kettle of fish.
B
Cool, we're nearly at time. Is there anything else that we should be chatting about?