From YouTube: 2022-Oct-11 DORA Metrics
Description
An overview from Cesar on the DORA Metrics and how they are calculated.
A
B
Sounds good! Thank you so much, everyone, and thank you for joining this call. I hope you're seeing my screen; I'm sharing a screen. You should be seeing a slide. This is DORA metrics with GitLab.
B
They are simple demos, but they highlight how we collect the data, and how you can relate the API to what you see on the screen in the UI. So let's quickly go to the next slide.
B
That's me: I'm a technical marketing manager here at GitLab. So, what are DORA metrics? I'm sure most of you probably know what they are, but they were developed by a team called DevOps Research and Assessment, which is now owned by Google, and two of the founders wrote the book about the Phoenix Project.
B
So they now have this organization called DORA. They've identified actually five metrics that measure the effectiveness of an organization's development and delivery practices, and how well they are doing DevOps, basically. The last one, which is not listed here, is availability. Availability usually means measuring SLAs, SLIs, and SLOs, and that metric is not going to be discussed in this call.
B
Some people refer to these four metrics that you see, deployment frequency, lead time for changes, time to restore service, and change failure rate, as the "DORA 4". I'm not sure if you've noticed if you've been checking the DORA website lately, but the 2022 State of DevOps report is out, and I pulled this out of that report. In the past there were four, think of them as categories of metrics, where the highest one, or the best one, was called Elite, I think. Now, based on the new study for 2022, they've decided to remove that.
B
So now they only have three categories: low, medium, and high. High, for example, if we talk about deployment frequency, means organizations or teams that are doing multiple deploys per day. For those that haven't heard of these metrics, here are the definitions; I will read them to you. What we'll do is jump into each one and go over how we define them, how we measure them, how we show them in the dashboards, and we'll also talk about the APIs themselves.
B
However, the metrics themselves provide insight into how well the team or organization is performing, and as you improve those metrics, as you get better at them, there are benefits to the business.
B
These are higher-level, business-related benefits that come as a result of improving your DORA metrics. The first one: as you improve, for example, the delivery of your features with high quality, and as you shorten the recovery times from outages, that leads to better user experience and customer satisfaction. The benefit that the business sees is basically user retention and increased adoption.
B
If we go even higher than that, that means revenues: stability of revenues, and also increasing your revenues. As you measure DORA metrics and improve them, you're going through a process of identifying bottlenecks in your development and delivery practices.
B
That will also help you preempt production problems. As you get better, you're going to be able to stop problems from happening in production, which will also result in preempting outages, and all of that rolls up to lower risk.
B
You end up generating higher quality code. You increase the collaboration among your teams. You get better at fixing bugs as you shorten the recovery times, for example. As you improve your deployment frequency, you're introducing new features faster and delivering that value to customers, putting that value in their hands faster. As a result of all that, you are able to keep your competitive edge, because now your developers are able to dedicate more of their time to developing new features or new business functionality.
B
That brings value to the organizations: they have more time, basically, to innovate. As they increase, for example, the deployment frequency, they're able to deliver innovations faster to the market, and all of that gives them that competitive edge so they can stay ahead of their competitors.
B
Now, just in case: DORA is in Ultimate, the same as value streams. The first metric is deployment frequency. So what is deployment frequency? It's the number of deployments to a production environment in a given period of time. And what does deployment frequency help customers answer? The question is: how often is my team releasing code to end users?
B
On the right side, you see the GitLab UI. In this case we've gotten there through CI/CD Analytics, and the second tab is for deployment frequency. You can get there either by selecting CI/CD or Value Stream.
B
So how is it calculated? It's the frequency, over a period of time, of deployments to production environments, based on the deployment tier. The deployment tier is important in this case, because you can have multiple environments that are production, so the number of deployments to all of these production environments will be aggregated as part of this calculation.
B
The default, if you don't specify start and end dates, is daily, for the last three months of deployments to production. Also, when you are specifying dates via the API, the date ranges must be shorter than 180 days. If you're interested in reading about how this feature was developed, and all the discussion on how it was defined, you can go to the merge request.
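The request shape and the 180-day limit described above can be sketched as follows. This is a minimal sketch, not GitLab's implementation: the host, token handling, and project ID are placeholders for illustration.

```python
from datetime import date

API_BASE = "https://gitlab.example.com/api/v4"  # placeholder host

def dora_metric_request(project_id, metric, start=None, end=None, interval="daily"):
    """Build the URL and query params for GET /projects/:id/dora/metrics.

    Mirrors the constraints described in the talk: the interval defaults
    to daily, and an explicit date range must be shorter than 180 days.
    """
    if start and end and (end - start).days >= 180:
        raise ValueError("date range must be shorter than 180 days")
    params = {"metric": metric, "interval": interval}
    if start:
        params["start_date"] = start.isoformat()
    if end:
        params["end_date"] = end.isoformat()
    return f"{API_BASE}/projects/{project_id}/dora/metrics", params

# Example call (the project ID is illustrative):
url, params = dora_metric_request(278964, "deployment_frequency",
                                  start=date(2022, 7, 5), end=date(2022, 9, 30))
```

In a real script you would pass the URL and params to an HTTP client along with a `PRIVATE-TOKEN` header.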
B
I will share this presentation after this meeting, and those are links that will take you to the MR that discusses everything about deployment frequency from the technical perspective: how engineering, the PMs, and everyone involved in the definition and creation of this deployment frequency metric arrived at the final calculation.
B
That covers how it was going to be measured within GitLab. Now, the one thing to keep in mind is that for the monthly and "all" intervals, the API will calculate the median of the daily median values.
B
Now, another question that has come up in the past: if we think about issues, incidents, and MRs, how do changes in these affect the measurement of deployment frequency? Issues, incidents, and MRs don't affect this specific metric. What affects the measurement of this metric is deployments to production.
B
I recorded this, because I had to blur my personal access token; that's why I recorded it. The project for the GitLab product is a good starting point to show you the first two metrics: deployment frequency and lead time for changes.
B
So here's the deployment frequency. It's a bit repetitive, but it's the frequency of deployments to production, based on the deployment tier. Here you see last week is 25 deploys; I ran this on October 1st, and the frequency is that number divided by the number of days in that week, which happens to be 3.1. Now, if you click on last month, the deploys are 106 for that month, and you divide that by the number of days within that month. Notice that it goes from September 2nd to October 1st.
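The arithmetic here is just the deploy count divided by the days in the window. A tiny sketch; the 8-day window below is an assumption to illustrate how 25 deploys can display as roughly 3.1:

```python
def deployment_frequency(deploy_count, days_in_window):
    """Average deploys per day over a window, as shown in the UI."""
    return deploy_count / days_in_window

# 25 deploys over an (assumed) 8-day inclusive "last week" window:
weekly = deployment_frequency(25, 8)     # 3.125, which displays as ~3.1
# 106 deploys over the Sep 2 - Oct 1 window:
monthly = deployment_frequency(106, 30)
```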
B
Now let's go and run the API. First we're going to run the API for deployment frequency; the default is daily if you don't specify any arguments. This is going to give us the number of deploys per day, for each of the days where there was a deploy, for the last three months, which is the default. So it goes from 2022-07-05 all the way to 2022-09-30.
B
The reason it doesn't show October is because, when I ran this, no deploys had happened on October 1st yet. If you noticed earlier, it was a seven for September 30th, and that matches the graph here for September 30th.
B
And 106 is the number of deploys; that matches the number for the last return value there. The last option is "all": running "all" will show us basically the last three months, and it's 333, which matches the 333 deploys that you see there.
B
All right, so let's move on to the next metric, which is lead time for changes.
B
How is lead time for changes calculated? By the way, on each slide, if you notice where it says "details", there is "introduced": that's basically when the metric was introduced within the product, and you can click on that once you get the presentation. So this is the median time between a merge request being merged and when it's deployed to a production environment. Again, you can have multiple environments under the production deployment tier, like before.
B
The UI shows last week, last month, and last 90 days. Now, for this one the API returns the value as a number of seconds, and again it allows you to specify start and end dates like before. Remember that the return value is a number of seconds: if you want to get to days, you have to divide by 60 to get minutes, by 60 again to get hours, and then by 24.
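The unit conversion just described looks like this in code; the 155,520-second figure is an assumed sample value that works out to 1.8 days:

```python
def seconds_to_days(seconds):
    """Convert the API's return value (seconds) to days:
    divide by 60 for minutes, by 60 for hours, by 24 for days."""
    return seconds / 60 / 60 / 24

one_day = seconds_to_days(86400)    # 1.0 day
sample = seconds_to_days(155520)    # 1.8 days
```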
B
That would be one day. For the API, I already said that the default is daily; it considers the last three months for production, and the date ranges must be shorter than 180 days, like before.
B
If you'd like to read up more about this specific metric, there's a link to the issue that goes to great lengths as to how we arrived at the definition and the calculation of this metric, and also the merge request that basically spells out the code that was added to the product to calculate it. Another note, the last one, is something you're going to see for every single metric:
B
when you use the monthly or "all" intervals in the API, it's going to calculate the median of the daily median values, and this can introduce a slight inaccuracy in the returned data. So how do changes in issues, incidents, and MRs affect this metric? What affects this metric is deploying MRs to production, so if that changes, then this lead time will also be affected.
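The "median of daily medians" caveat can be seen with a small example. The sample values below are made up purely for illustration; they show how the API's approach can differ slightly from a true median over all raw values:

```python
from statistics import median

# Hypothetical lead times (in days), grouped by the day they completed:
daily_values = [[1, 2, 3], [10], [5, 6, 7, 8]]

# What the monthly/"all" intervals do: median of each day's median.
daily_medians = [median(day) for day in daily_values]   # [2, 10, 6.5]
median_of_medians = median(daily_medians)               # 6.5

# A true median over every raw value gives a different answer:
true_median = median([v for day in daily_values for v in day])  # 5.5
```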
B
All right, so we're going to use the GitLab project again for this part of the demo. We're going to go to lead time. There are three options: last week, last month, and last 90 days, and these are the days from merge to deploy. Right there are the values for the last month, and then you have the values for 90 days.
B
All right, so let's now run the API. We're going to run the API just with the default values, which are the daily values.
B
The median for the last seven days is 1.6 days, and 15.8 hours is the value for September 30th. What you need to remember here is that the value is in seconds, so you need to divide it by the appropriate numbers to get to days. The second option that we just ran is the monthly option, and you can pull out your calculator if you want, but those numbers match the numbers in the graph.
B
All right, and just remember that what affects this metric is deploying MRs to production. So when you deploy an MR, it's going to run the pipeline, basically when you do a commit to the main branch; and if you're talking about Auto DevOps, for example, that's when Auto DevOps deploys to production.
B
What does it help customers answer? How long does it take to restore service when a service incident occurs, basically an unplanned outage? On the right side you see a chart in the UI for time to restore service, the way GitLab displays it, and again you can get to it the same way you get to the other metrics.
B
It's based on incidents closed in the given period of time, whether that's last week or monthly, depending on what you're asking; that data is graphed for that chunk of time. Currently the UI shows you only three options: last week, last month, and last 90 days. The API again returns a number of seconds, allows you to specify start and end dates, again defaults to three months for production, and date ranges must be shorter than 180 days.
B
If you want to read more about it, here's a link to the merge request that added the initial functionality for time to restore service, plus the caveat that for the monthly and "all" intervals the API calculates the median of the daily median values, so there could be a slight inaccuracy in the returned data. So how do changes in issues, incidents, and MRs affect this metric? When you close incidents in the time period that you're requesting, that piece of data is included in the calculation.
B
So let's go to an example. Now, for this example, the GitLab project is not using incidents. So if we look at time to restore service and change failure rate, both of them are zero.
B
So there is really not much to show. What I did instead was create a project right here; it's called My Java.
B
In this project there's just a simple "Hello, World" Java program, I think. What I did was go ahead and pre-create five incidents ahead of time. If I click time to restore service and change failure rate, you'll see that time to restore service isn't showing anything, because I haven't closed any incidents yet. The change failure rate is 500%, because I created five incidents and there are no deployments yet; when the deployment count is zero, the assumption is always one.
B
I had created the five incidents the day before I recorded this; that's what you see as "one day ago". I just created some incidents: two S1 criticals, one S2, and two S3s.
B
These incidents are going to be used in the calculation of the metric. So remember, closing incidents will affect the calculation of this metric that we're discussing right now. We're going to go ahead and close two of them. Let's close one right there; let's close the two S1s, where it says the severity is critical.
B
In this case it's the same number for last week, last month, and last 90 days, because it's a brand new project and those are basically the only data points there: the fact that I closed two incidents. Now notice, when I use the API, remember the value that is returned is the number of seconds. So if you take this number and divide it all the way down until you get days, you should get 1.8 days.
B
All right, so the last one is change failure rate. This is the percentage of changes introduced to a production environment that caused a failure scenario, and it helps customers answer the question: what percentage of my changes to production resulted in a degraded service and required remediation? On the right side is an example chart from GitLab, and again you can get to that chart using those two navigation paths.
B
The UI, like before, just offers three options, but the API allows you to specify start and end dates, and these all apply again. If you want to read up more about it, I provided the issue and the merge request, which basically include all the discussion by the different stakeholders and how we came up with the calculation of this change failure rate, and again the caveat for the monthly and "all" intervals. So how do changes in issues, incidents, and MRs affect this metric?
B
What affects this metric is the number of open incidents during a time period. So whether it's last week, last month, or the last 90 days, the open incidents are counted.
B
Also, the number of deployments during that period will affect this metric. Like I mentioned before, if there are no deployments during the time period that you specified, the assumption is that there is one deployment; that way we won't end up with an infinite number. All right, so let's go back to the demo. This is the last demo.
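That divide-by-zero guard can be sketched as follows; this is a simplified illustration of the rule described above, not GitLab's actual implementation:

```python
def change_failure_rate(open_incidents, deployments):
    """Percentage of deployments in the window that caused a failure.

    If there were no deployments in the window, assume one deployment
    so the result stays finite, as described in the talk.
    """
    return open_incidents / max(deployments, 1) * 100

no_deploys = change_failure_rate(5, 0)   # 500.0, matching the demo's five incidents
two_deploys = change_failure_rate(5, 2)  # 250.0
```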
B
For this one, again, I use the project that I created. Remember, I had created five incidents, so the change failure rate was 500%. I had closed two, and that's why you see that drop, but the change failure rate is still 500% for last week, because it includes all the incidents that were open during that week, and if we run the API, it shows five right there.
B
Let's go to the project and run two pipelines to production, so that we're basically mimicking two deployments to production. We're not making any changes to the code or anything; we're just running pipelines, deployments to production. We're going to run it one time, and I accelerated the time here so that we don't have to wait a few minutes; that should be done pretty quickly. Then we push the rollout to production to 100%. I'm using Auto DevOps, as you can see.
B
There is another customer here. This is the only publicly documented DORA customer story that we have so far; there's another one in the works, but this is the one that is public already. The link is right here, and they actually used DORA metrics to be able to boost deployments and increase automation.
B
You can read more about that at that link, and as soon as the other one comes out, I will let you know. So, just to conclude: the DORA metrics dashboards streamline the evaluation of the effectiveness of your team's development and delivery practices.
B
They can also help you better measure and improve what matters in relation to your software delivery and performance processes, as teams evolve from a DORA low level to a high level.
A
C
B
Really? Let me look. Is this for our website, or the product?
C
This one says GitLab.
B
D
Let me look. I've looked at this before, and my sort of assumption, based on looking at the data, was that basically GitLab doesn't use GitLab for incident management.
D
And so it never got closed off, and so the metric got a bit skewed there for gitlab.com.
C
B
Let's see. I wonder if it's just something else; I wonder if we have one for the website. This one, right here at the top left, includes all the projects for the company, whereas this one includes only the product. This is the GitLab product. Oh.
B
As I was saying earlier: we don't use incidents ourselves. We should be dogfooding, but we don't use incidents.
B
We don't use incidents for issues in production, which is the way we're calculating time to restore service. That's why, when I went to the analytics here yesterday, I said, well, there's not much to see here, because we don't use incidents, and that's why I created a new project: just to show you the mechanics of how those two metrics are calculated. Does that make sense? Yep. Okay, good.
C
E
A
D
Yeah, so my question was: a lot of these metrics, especially deployment frequency, would probably be pretty useful to have outside of production environments as well, for example grouping into separate review or pre-production tier environments. Do we have any plans to go down that route as well?
B
Remember, I will share the presentation with you, and if you want to spend time reading the MRs and the issues where all the discussion happened on the definition and the way we're going to calculate these four metrics:
B
that question comes up. This has been our first MVC for all four of these, and from what I read, that is something that may happen in the future. But right now we're just calculating all of these things for production.
B
D
Yeah, that'd be great. Look, I understand that you don't necessarily have the full visibility of the roadmaps, at least from the PO perspective; they're going to have a much fuller view of that. But it's good to hear that we are considering it.
D
I think this is a very valuable sort of first pass: get something out there so that we can iterate on it after that.
E
B
Credit to the teams, yeah. The logic is, it's better to put something small out and get it into the hands of customers, as opposed to waiting six months to a year to get something out.
D
100%, yeah.
D
I've been watching it through all of the iterations that it's already been through, right from when we only had deployment frequency, so it's definitely come a long way, and it's going well.
B
Agreed, and I was going to say something, but I forgot. Let's move on to the next question.
D
Good, I think I have that one as well. Okay, so this one is looking at all production tier environments. Do you know what happens when we have multiple different environments? Do they all count towards the total?
B
Exactly, yeah. We now have the concept in the product of a production tier, and this is basically separating the environment name from what it's being used for, so you can have different names for your production environments.
B
You can say production East, production West, and they can all be production environments. The short answer to your question is yes: the calculations include all of the production environments, not just one.
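In `.gitlab-ci.yml`, the tier is set independently of the environment name via the `environment:deployment_tier` keyword. A minimal sketch; the job names, environment names, and deploy script are assumptions for illustration:

```yaml
deploy-east:
  stage: deploy
  script: ./deploy.sh east        # placeholder deploy command
  environment:
    name: production-east
    deployment_tier: production   # counts toward DORA production metrics

deploy-west:
  stage: deploy
  script: ./deploy.sh west
  environment:
    name: production-west
    deployment_tier: production
```

Both jobs deploy to differently named environments, but both are aggregated as production for the DORA calculations.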
B
D
We have a lot of customers who look at DORA metrics; it's definitely a big value driver for them.
D
B
Lately there's been a lot of buzz and a lot of talk about this in the market, so it's a good topic to talk about with your customers for sure, and to bring up the value.
B
When I talk about DORA to customers, I talk about the single application approach that we have, and how our application uses a single model for all the DevOps stages. The fact that we've designed the product this way allows us to generate these insights, because all the data is in the same data model.
B
So we can surface a lot of insights, like DORA metrics, to customers through the UI. And I always tell them: whereas, if you had different products, best of breed, to do the different stages of DevOps...
B
D
I've also found it to be very useful when customers are looking for useful metrics about how productive developers are. The sort of standard ones, like "I kicked off 100 pipelines this month" or "I pushed 100 commits", don't tell you how productive developers actually are, but having the deployment frequency combined with the change failure rate does.
B
Yeah, really, maybe they're deploying a lot, but if your change failure rate is high... So it's like having all
D
of these together, which can really give a lot of insight around how productive your developers are, in a much more useful way compared to some of the more standard metrics that have been around for a while.
B
For sure, yeah, definitely. Cool, so, any other questions? Anybody?