From YouTube: Error Budget Conversation (Rachel/Jackie/Marin)
A
So this is a conversation about the error budgets that we are looking to set up. As I was just saying, at the beginning let's just set a goal for the meeting, so that we know what we're going to be working towards today. From my perspective, I'm very keen to know what we need to do to help the product teams use this for prioritization: how can we make this more useful, and how do you envision this being used in the process? So, Jackie, I'm not sure.
B
Yeah, so at the highest level: each month product goes through, and we have a variety of sensing mechanisms. Sensing mechanisms are different feelers that product managers use to go through and prioritize various issues in our backlog. Those issues are then assigned to engineers to work on, and then they get delivered out of our milestone. I'm going through that very basic description so that we can understand how error budgets will become a part of that sensing mechanism process. So today we use things like technical account management feedback.
B
So if I had an error budget, I feel like I would be able to advise Thao, Darren, James and Dov, the Verify product managers, that there are infradev issues that we need to schedule, or that availability and performance need to be prioritized more frequently. And today, I would have to say, looking at the error budget, we're not doing that, because our error budget is way over the allocated.
B
20 minutes is what we're supposed to be at; I think we're at 13 hours right now, which is significantly over what the budget is supposed to be. And so the guidance is: if we're supposed to be at 20 minutes, we should be prioritizing, potentially doing way more prioritization of features, or bug fixes, or technical debt, or even complete dedication around scaling enhancements.
A
Yeah, I think you're definitely right that it is a lagging indicator, because it's about what has happened over the past 30 days. In terms of being proactive, it's about having, perhaps, a process for looking at the dashboards and seeing if there's anything happening in real time that is concerning: looking over the past one or two days instead of over the past 30 days. But again, that's still lagging to a certain extent.
A
But what we have found: we use a series of indicators in Scalability, and what we have found is that by looking at past trends, you can then start to see that perhaps there are certain problems coming up that you might need to be thinking about. I'm not quite sure if that's going to be in scope for the work on error budgets, given that this is just supposed to be a lagging indicator for now. But I saw Marin unmute there briefly.
C
Yeah, I had two items. One is: do we even know why we are talking about 20-minute budgets? That's one. And the other one I just wanted to mention is that maybe we should walk before we run. I agree, Jackie, fully agree with you: this is a lagging indicator, but we are lagging 13 hours. Let's get into the minutes territory, and then we can talk about that situation. And, you know, this is going to take a couple of months.
C
It is going to be much easier for us to figure out what the leading indicator can be, right? And that could be something that we in Infrastructure figure out. Like Rachel mentioned, Scalability has a couple of things that we use as a prediction, and that is kind of now leading us towards those things. But it took us a year to get to a point where we feel a bit more comfortable about this, and we're only a team of six. Right, Rachel? Seven? Six, seven. Imagine doing that across all of engineering and product.
C
It's just going to take time. So I want to take a step back and talk maybe about why 20 minutes, and what this means to us, because this is being recorded and I don't know whether everyone understands. We have a target of 99.95% availability of gitlab.com over a period of 30 days, and this is something that we mention to our customers. It is not yet a contractual item; it's not there in the contract yet, but we are being asked more and more about our guarantees.
C
It's not about being down, but about the service not being fully available, and this is why we are talking about the 20-minute budget across all of the platform. So this is not a 20-minute budget for every single stage. No! No! This is across all of the platform. And this is also important, because some teams will have zero spend, either because they might not have enough usage or because they're really that good, which is possible.
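For reference, the 20-minute figure discussed here is just the arithmetic of a 99.95% target over a 30-day window. A minimal sketch in Python, using only the numbers quoted in the conversation:

```python
# Error budget implied by a 99.95% availability target over a 30-day window.
MINUTES_PER_30_DAYS = 30 * 24 * 60  # 43,200 minutes in the rolling window

def error_budget_minutes(slo: float, window_minutes: int = MINUTES_PER_30_DAYS) -> float:
    """Minutes of allowed unavailability for a given SLO over the window."""
    return (1 - slo) * window_minutes

print(f"{error_budget_minutes(0.9995):.1f} minutes")  # 21.6, rounded to "20" in conversation
```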
C
Some teams are going to have higher budget consumption, because they are very popular and customers are doing things that we didn't anticipate them to do, and so on, and that also allows us to target those areas a bit more. Overall, what's important is that we are seeing a trend towards going back to being stable: a trend over time of always being around the 20-minute downtime.
C
Do we want to go better than that? Sure, we can do that, but then we are over-optimizing, right? Maybe we can instead focus more on shipping more features and be a bit more relaxed about, you know, the work we do. But at the same time, we definitely shouldn't be around the 13-hour mark that we are at right now.
B
And this is where you can tier service, too, and you can charge more for those highly available, lower downtimes. In my previous companies I've seen tiered service for less downtime, and having those offerings be scaled that way, because it does require more resources. And if people want to pay for that, then you can offer that.
C
Exactly right. For example, we have our shared runners, and they are in the whole bucket of all the other runners that we have. We can be more inventive in that and charge more for runners that are more stable. Again, that becomes a discussion on: are we then going to offer a service that is not stable? Well, no, not really, because we still want to give this baseline to everyone, and then say: if you want more...
C
On top of this, you can pay extra and we can offer you extra, but this is a long-term strategy. So this is why it's important for us to start with this. This is not small; this is a large change. But it's something that we want to practice all together and get better at understanding what we are doing here. Sorry, this was a long intro, but I thought it was important to note.
A
So, as you've said there, product managers are at the moment more used to building for the self-managed instances.
B
There does need to be maybe an AMA on the tactical front, like "what are error budgets", and an education side of how to use this and what an error budget means. Maybe even some remedial education on what availability is, because I think that part we may take for granted: what is uptime, what is downtime, what does that mean to a customer, what does that mean to an end user?
B
What is the difference between 99.95 and five nines? Because those nuances, from a customer's perspective, especially for an enterprise customer in SaaS language, are subtle but very different, and they can be aspirational for us. Our current goal of 99.95 and five nines of availability are a very large gap apart, right? And so right now our 20 minutes of downtime, I think, is very realistic, and what our error budgets are geared toward is something that we can...
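The gap Jackie describes between 99.95 and five nines can be made concrete with the same back-of-envelope arithmetic; the intermediate SLO levels below are illustrative, not quoted from the meeting:

```python
# Allowed downtime per 30-day window at different availability targets.
WINDOW_MIN = 30 * 24 * 60  # 43,200 minutes

for label, slo in [("99.9%  (three nines)", 0.999),
                   ("99.95% (current target)", 0.9995),
                   ("99.99% (four nines)", 0.9999),
                   ("99.999% (five nines)", 0.99999)]:
    downtime = (1 - slo) * WINDOW_MIN
    print(f"{label:<24} {downtime:7.2f} min/month")
```

At five nines the monthly budget shrinks to well under a minute, which is why the current 20-minute budget reads as the realistic near-term target.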
B
We can educate people on this for this quarter, and I think that's a very tangible next step for us to take, and I can help facilitate that with the product team. As far as the next stages, as far as how broad of a rollout we're going to be doing with the product organization...
B
Right now we have all of .com on the product PI page with error budgets. So the product organization is aware that this is a PI that we're tracking, a performance indicator that we're tracking, and they're aware that we are going to start instrumenting stages, and we want to put this inside of our Sisense dashboards for product managers to begin tracking against.
B
Definitely a hurdle as far as being in a different place. It's an extra click that could inhibit adoption, of course.
B
If the top-down mandate is "you're going to use an error budget and you need to consume it", and we put it on the PI page for people to go to, then it's in the handbook, and that's the single source of truth that people are going to go to. So it's not an extra click; it's just a different place they have to go. But the more that we can consolidate into Sisense, the better for usability, for sure.
A
Okay, so the things I heard there were: getting it into Sisense, for sure, and having an AMA of some description to help introduce that to everyone, and have it be quite broad. So I'm just noting that.
B
Yeah, and on the L&D front, the learning and development side, I can help create content for the product management audience. So let me know how I can help you on that front.
A
And then, from our side: we have the first iteration of this available on the dashboards, and we will continue to iterate over that to add more, so that the error budgets cover more aspects of the infrastructure than what is covered currently. That's something we'll continue to iterate on. I don't think we'll ever really be finished there; we'll just continue to make the data richer and richer.
C
Rachel, I had a question about whether we should maybe do something. I know we have only five to ten minutes, but maybe we could do a bit of an exercise with Jackie, where we just put her on the spot here: share a screen and then see what kind of actions you, as a product manager, seeing this overwhelming dashboard, would take. Right? Like, whether you would get completely confused.
C
What kind of questions would you ask, and what kind of questions would you ask your engineering manager? Which is what I'm mostly curious about here. I don't know, Rachel, do you think that's useful?
B
So I would be looking for the budget spent. And, by the way, y'all, I'm red-green colorblind.
B
This is something that I did mention before, and I didn't know that there was color coding on this until, I think, Bob told me. It was funny, because I was like: oh, I had no idea that was there. So I would be looking at the number here for budget spent, and I would read this "budget spent" here and what that would indicate. I'd probably click this link to read a little bit more about what error budgets are.
B
Just as a note.
B
And this availability number: since I know that the stated availability target is 99.95, it might be helpful to have the targets stated somewhere.
B
Because there aren't targets anywhere on here. All I know is what the current state is, but there isn't any evidence of what the target is. So currently, that's part of the problem.
B
Yeah, and I like the percentage of the budget spent, as in: how much of your budget has been consumed.
B
Yeah. Or, I still feel like the percent availability is valuable, because we would need to know if we're at the 99.95. But also: what percent of overage are you at? So if you have 20 minutes, it would be interesting to know how much over my budget I am, like, how much in debt I am; that, I think, is what I want to know. That would be an interesting metric to then report back to my engineering team, to show them the severity of my overage.
A
Now that makes sense. Rather than just saying "well, it's 20 hours", it would be to say "well, as a percentage, 300 percent of our budget has been spent here". So this is, yeah. Okay, yeah.
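For reference, the "how much in debt am I" number is a simple ratio against the monthly budget. A minimal sketch; note that the 13 hours mentioned earlier, against a roughly 21.6-minute budget, actually works out to several thousand percent, so the 300% figure above is illustrative:

```python
# Budget consumed as a percentage of the 30-day error budget.
BUDGET_MIN = (1 - 0.9995) * 30 * 24 * 60  # about 21.6 minutes

def budget_spent_pct(downtime_minutes: float, budget_minutes: float = BUDGET_MIN) -> float:
    """Percentage of the error budget consumed; over 100% means the budget is in debt."""
    return 100 * downtime_minutes / budget_minutes

print(f"{budget_spent_pct(13 * 60):.0f}%")  # the 13 hours of downtime mentioned in the meeting
```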
C
Yeah, it's really hard presenting this on this dashboard, compared to the way you use Sisense, for example, right? In Sisense this is much, much clearer, and this dashboard is not made for that. So that's another data point we have now.
B
Because, after looking at this, the kind of things that I would do: I would check this on a monthly basis, because it's going to be a part of my PI update to the product organization. I then do a Slack update as a product manager (I don't do a Slack update to my engineering team), and it informs my monthly prioritization, or my quarterly prioritization, for my features or bugs. And I would let my team know: hey...
B
This is how many users we've had change and, by the way, we're 300% over our budget for availability.
B
So we're not going to do any new features, and we're also going to be doing a rapid action; you know, I need to let them know all of the things that are changing. That's the kind of update that would come out of this sort of information on a regular cadence. And then, of course, I would link them to this, or do a screen capture of the data that's changed. So that's the sort of update that's typically coming out of product data.
C
An additional question for you here is: do you need trends? Like, do you need historical data for this? I know this is historical data already, but, for example, for you to understand how things have been developing in the past month, to know whether the investment you made, or are making continuously, is actually making things better or worse. I don't know if this is technically feasible to do in one dashboard, but the point is, I'm looking to see whether we should be thinking about this or not.
B
Trends would be very important, because we would want to know if prioritizing performance, infradev and availability issues actually influences these sorts of metrics. Given that they are lagging metrics, it's really important that we're investing in the right things. And what would be really interesting is if, in fact, we do prioritize these fixes and they don't improve things; then that shows that there's some other root cause analysis that needs to be done.
C
So, I'm sorry, Rachel, just one other question before we leave. I'm assuming you would not be looking at anything else on this dashboard; like, on this specific dashboard, you as a product manager are likely not going to go in depth and try to understand what's happening there. Call me a naive engineering manager, but I'm asking the question still.
C
Like API request rates, or any of that, to understand the volumes you're dealing with, or to help you maybe make a different type of decision when you're working on your new features and so on. Right? For example, if you know that you're handling this many requests per minute, would that affect your decision on how a new feature is built?
B
I would say that, if the information is readily available: looking at all of these charts in here, if I had a tutorial on what all these charts meant, and could trace it back to the table inside of GitLab and what that meant in the product...
B
That's a spike for an engineer to do, inside of a "declare a problem". So here, the availability is the declared problem. There's a problem statement that our error budget is over; we need to do some root cause analysis as to why we're over budget; here are some charts that indicate that, hey, our queries are going insane; these might be some areas to look into. Those could be identified either by me, per this chart, or by the engineering manager, because there are some trends here that indicate significant dips or overages.
B
So I can see that just by looking at it. Like, I would just link them and be like: hey, these look different, can you look at them? But I don't know if other PMs would do that, though. I feel like I'm different.
A
Yeah, thank you for this; this has been really helpful. In terms of next steps (I'm aware of the time), so just in terms of next steps: we've got the two actions that we can take there. From the Scalability perspective, we'll continue to make our iterative changes to the dashboards themselves, but I will go ahead and raise these two issues about getting this into Sisense and producing an AMA for this, and then we can continue the conversation on those issues.
B
And then please just ping me: we can co-author the deck, we can do anything around content, and I can help schedule with Christy to find time for the product team. Just please let me know what you need from me. Okay, thank you so much, thanks for being part of this. You bet, any time.