From YouTube: Discussion: Lead time for Changes on DORA4 Metrics
A
So, thank you for joining the call. This sync call is to make sure that we are on the same page on this issue, because we recently got a couple of conversations about it.
Since we only have 30 minutes, I just put two agenda items: one is the definition of lead time for changes, and the other is the visualization for lead time for changes.
B
Is it from when the merge request was created? Is it from when the merge request was merged? Another option that was raised on a customer success call I had was: don't calculate the time from when a merge request was created, but from when the first commit of the merge request was created. So there's a lot of interpretation here.
B
We need to decide on one. What I liked from what Nathan suggested was to take it in two takes: let's present just the time from when the merge request was merged until it gets to production as one take. But at some point we would still need to also show the time it takes from when a merge request was created until it was merged, and I think we have that data.
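The two "takes" discussed here can be sketched in a few lines of Ruby. Note this is only an illustration: the `MergeRequest` struct and its timestamp fields are hypothetical stand-ins, not GitLab's actual schema.

```ruby
# Illustrative only: field names are assumptions, not GitLab's real model.
MergeRequest = Struct.new(:created_at, :merged_at, :deployed_at, keyword_init: true)

# Take 1: lead time from merge until the change reaches production.
def merge_to_production(mr)
  mr.deployed_at - mr.merged_at # seconds, as a Float
end

# Take 2: time from MR creation until merge (the review span).
def creation_to_merge(mr)
  mr.merged_at - mr.created_at
end

mr = MergeRequest.new(
  created_at:  Time.utc(2021, 1, 4, 9, 0),
  merged_at:   Time.utc(2021, 1, 5, 9, 0),
  deployed_at: Time.utc(2021, 1, 5, 15, 0)
)

merge_to_production(mr) / 3600.0 # => 6.0 hours
creation_to_merge(mr) / 3600.0   # => 24.0 hours
```

Keeping the two spans as separate functions mirrors the "two takes" idea: the first can ship alone, and the second can be layered on later without changing the first.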
A
We can provide three aspects of the data, so yeah, that strategy makes sense. I just wanted to make sure what the scope is in this milestone.
B
As long as we understand that we're going to need to present everything at some point, and "at some point" does not have to be in the MVC. Why am I saying this? Because when I think of the DORA 4 dashboard, it's much larger than what we're doing now. When you look at the DORA 4 metrics, they tell you basically how well your team is performing, but just showing the chart isn't enough. You need to drill down.
B
You need to understand where your bottlenecks are. Maybe the bottleneck is the code review process, maybe it's deployment to production, or maybe it's somewhere else. So I think at some point we're going to need to drill down into all this data: the time until a merge request merges, the time it sits in review, the time it takes from when a merge request is merged until it gets to production, even as granular as how long each job takes.
C
It becomes very similar, yeah, sorry, it also becomes very similar to the VSM if we do all of this drill-down, and of course we'd need to collaborate with the VSM team, yeah.
A
Maybe that would be better, but I'm not really familiar with the VSM code base, so yeah, things could go wrong.
B
Yeah, so that's kind of why I like the suggestion made in this issue, which is just taking the time from merge until it reaches production, because that's more release-oriented, you know, production-environment stuff, and not necessarily everything that has to do with how long the pipeline took. And I think I also mentioned in the issue that we do have a mean time to merge that was delivered by someone else in the previous milestone.
C
I think it's a great approach, I also like it, it's a very interesting metric for the future. What I don't get is why we need to measure from when the merge request was created. It's more interesting to measure from the first commit, or from the moment the branch was created, not from the moment the merge request was created, because maybe there were many commits before the merge request was created that nobody takes into account.
B
Yes, this is exactly what came up on the customer success call, and why they mentioned it's better to calculate this from the first commit: people can be discussing a bunch of things on an MR before there is one line of code actually written, so that's more accurate.
A
So, yeah, well, one edge case: sometimes a rollback happens, and a rollback also, you know, contains a lot of work. If we take the development time into account, things are going to get much more difficult, I think. So yeah, I'd rather keep it straightforward at this moment, for these early iterations.
D
Also, my understanding of the four metrics is that they all seem to be related to how you deploy and how you ship software, not necessarily how you develop it. That's kind of my understanding, so I think it makes sense to exclude development time; you're kind of measuring the machinery that gets stuff into production.
B
I agree, everything in DORA 4 has to do with the production environment only. Yes, at some point we would want to make this more flexible and let our users choose whatever environment they want, but I think that has to go after the MVC. The MVC has to be these four metrics related to the production environment.
A
And I also want to fix one thing in this proposal: we should show the lead time from merge, so I just changed it to merged-MR-to-production. We're currently showing the average time, but this should be, like Nathan pointed out, the median, which is slightly different. Median, right?
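The median-versus-average point is easy to see with a small example: a single long-lived MR skews the mean badly but barely moves the median. This is plain Ruby with made-up numbers, no GitLab internals assumed.

```ruby
# Median of a list of numbers (plain Ruby, illustrative helper).
def median(values)
  sorted = values.sort
  mid = sorted.length / 2
  sorted.length.odd? ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2.0
end

# Lead times in hours for five deployed MRs; one stalled for two weeks.
lead_times = [2, 3, 4, 5, 336]

lead_times.sum / lead_times.length.to_f # mean   => 70.0 (dominated by the outlier)
median(lead_times)                      # median => 4    (typical experience)
```

The mean says "almost three days", which describes no actual MR; the median says "four hours", which matches what most contributors experienced.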
D
The other question I had is what we consider for a particular day. Do we consider...
D
Let's see, what am I trying to say? Let's say on the graph we're looking at a week: are the data points the MRs that were deployed to production that day, or is it when the MR was merged? You know what I'm saying; I think I said it better in my comment.
A
And shouldn't...
D
Just for reference, in the deployment frequency graphs all the date ranges we did were in UTC, so we're not translating to local time zones. So I'd assume we'd want to keep that consistent.
D
Anyway, at the moment we'll be showing it, certainly for, I think, iteration one. But I feel like we could probably hide those days if we had some way to configure it. It'd be a little complex, because we'd have to build some kind of UI to select which days are the working days.
B
Yeah. I wonder if we could use the release feature and then, again, not for the first iteration for sure, but just thinking out loud, at least remove the deploy-freeze days and things like that.
B
Or at least, you know, show them on the graph, right? Even if we're thinking about timestamps, it would be really easy to explain if you saw a drop-off on a Saturday, right? So at least people could understand really easily what they're looking at. Or just mark "this was a deploy freeze" on the graph itself, without removing it from the calculation, just showing it on the graph.
B
I talked to a customer, this was a really long time ago and I can't remember what the subject was, but they told me there are statistics somewhere of your most productive working days, and they thought it was really interesting that their most productive day turned out to be Wednesday. So I wonder if we could show something like that after we have data, not in the first iteration, but like: these are the days on which you deploy most frequently.
A
Yeah, and also on top of that, we could show these levels, like elite, high, medium, low.
B
I just opened the issue again. Not exactly related to the subject we're talking about, but really interesting: I opened an issue for the telemetry team so that they can start collecting this data, so that we can present, at least for SaaS, "here's where you place versus everyone else on GitLab." So we're going to start collecting that data.
A
So yeah, actually that's a good point that I just want to dig into a bit more. In this milestone I'm building a feature that collects daily metrics about DORA. And I was wondering whether it makes sense to have the collection feature, the function that collects the metrics, placed in Core, so that we collect by default on all GitLab instances and can send the usage ping to, you know, our business metrics, and only Premium...
A
...users can see it in their dashboard in their GitLab instance. But the visualization doesn't happen in Core, because it's an EEP-tier feature.
B
Sure, that makes sense to me. And also, after the MVC, one of the first things we want to do is start opening it up for additional tiers. So even though we're doing everything in Ultimate now for the MVC, we do have a plan to open it up for Premium and Core users as well, at one point.
B
It's based on both; it's based on the data that we have. So theoretically maybe no one is going to be elite on GitLab, though I doubt it, but you would have a comparison view versus everyone else.
B
That's actually a really good question. First of all, of course, it's based on the data that we're collecting. The only thing I'm kind of thinking about: today we have really good definitions of what an elite team is. For example, it deploys multiple times per day, lead time for changes is less than a day, time to restore is less than an hour, and change failure rate is between 0 and 15 percent, or something like that, right?
B
What if the definitions change? I think we need to have some kind of enum, at least, that defines that, so that if the DORA 4 metrics definitions change, we can change it easily. Because I think as DevOps matures this is going to change: it's not going to be only multiple deploys per day, it's going to be multiple deploys per hour, right, in like two years. Today's definition is not going to hold for long.
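The "enum" idea could look roughly like this sketch: the lead-time brackets live in one frozen structure, so a definition change touches one place. The constant and method names are hypothetical, and the medium/low brackets below are my own illustrative assumptions; only the elite (under a day) and high (a day to a week) brackets were stated in the discussion.

```ruby
# Hypothetical central definition of lead-time performance brackets, in hours.
# Elite and high follow the brackets mentioned in the discussion; medium and
# low are illustrative placeholders, not an agreed definition.
LEAD_TIME_LEVELS = {
  elite:  0...24,                  # under one day
  high:   24...(24 * 7),           # one day to one week
  medium: (24 * 7)...(24 * 30),    # assumption: one week to ~one month
  low:    (24 * 30)..Float::INFINITY
}.freeze

def lead_time_level(hours)
  LEAD_TIME_LEVELS.find { |_level, range| range.cover?(hours) }.first
end

lead_time_level(6)   # => :elite
lead_time_level(100) # => :high
```

If the industry moves to "multiple deploys per hour", only the ranges in the hash change; every caller of `lead_time_level` stays the same.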
C
Yeah, also we will need to check with marketing. It's fine to say to customers, "you are an elite customer, you have high performance," fine, but what will we do for the low performers? "Oh, you are a bad customer, we think you are not performing well"? So we need to think about how we present it to low-performing customers.
B
I think it actually ties very nicely to our transparency value. We're not telling them they're bad customers; we're telling them they could do better in terms of their SDLC, and I think it's our job to also help them improve. So we can give them different ideas: either point them to features that we have that can help them, or point them to really good examples of what other projects are doing, maybe in their YAML.
B
Maybe we can say: OK, for anyone who has over 30 jobs in their YAML, the pipeline is very slow, and there's a correlation between low deployment frequency and the number of jobs in the pipeline, so maybe you should think about splitting it up, or things like that.
A
So I want to talk about the graph visualization a bit. It's not in this milestone, but it will happen in the next milestone, right?
A
Maybe we're going to show something like this, but the point is that it's a bit different from deployment frequency. In the deployment frequency graph a higher value is better, so here we're seeing something like 300, right, and that's a good score. But in lead time for changes a high value is a bad score: it means it takes more days.
A
So we've got to think about that, you know; how we present the data to users should be different from deployment frequency.
B
I have an idea: what do we think about putting targets on the graph? So we show the different time brackets, just like the brackets shown a minute ago, and then we can put bars on the chart that say: this is elite, this is high, this is medium, this is low, like a target. Because we said lead time for changes for elite is less than one day, and between a day and one week is high, right?
A
Yeah, yeah, that makes sense. And also I want to point out that when a deployment didn't happen, the lead time for changes value will be zero, and that's also not good: it would read as a good score. On this graph, a zero actually means that a deployment didn't happen, so it's, you know, not a good score.
B
Yes. I wonder, though, Nathan, this is more in your realm: I want us to think iteration here. What's the easiest to start from, and how can we build on and improve it over time? Because this graph seems a little bit complex.
D
The immediate question I have, if you want to scroll up a bit, is those two: those are kind of two options for how we could present the data. It's basically: do we plot days that have zero deployments on the graph, and do we plot them as zero?
D
If so, you get the top version; and if you skip the days that don't have any deployments, you get the bottom version.
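The two plotting options can be sketched with made-up data to show why the choice matters: including zero-deployment days as zeros drags the median down, which is exactly the "looks good but is lying" concern raised below. Plain Ruby, illustrative numbers only.

```ruby
# Median of a list of numbers (plain Ruby, illustrative helper).
def median(values)
  sorted = values.sort
  mid = sorted.length / 2
  sorted.length.odd? ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2.0
end

# One week of daily lead times in hours; nil marks days with zero deployments.
week = [6, nil, nil, 9, nil, 12, nil]

plotted_as_zero = week.map { |h| h || 0 } # "top version":    [6, 0, 0, 9, 0, 12, 0]
skipped         = week.compact            # "bottom version": [6, 9, 12]

median(plotted_as_zero) # => 0  (no-deploy days counted as zero lead time)
median(skipped)         # => 9  (only days that actually deployed)
```

With zeros included, a week where the team mostly didn't deploy reports a median of zero hours, the best possible score on a lower-is-better chart; skipping those days reports nine hours, which reflects the deployments that actually happened.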
D
Yeah, it would be the median time, or yeah, I guess. The two are showing the same thing; it's just the difference, for example...
C
...six hours from the merge to the deploy, and then on the ninth it took like nine hours, right?
D
No, so that's where it's a little bit deceptive: on the ninth there were zero deployments, so there's really no lead time data. The top one shows that, because it's zero, but the bottom one kind of makes it look like there was data there, when there's actually no data between the ninth and the sixteenth.
D
Although, OK, there's no data, yeah. So I'm leaning towards the top one, but the problem that was pointed out is that it makes things look good, because lower is better on this graph, when in reality it's actually bad because you had no deployments. So we'd kind of be making a metric that makes it seem like not deploying is a good thing.
B
I think I feel more comfortable with the top one, just because I can understand the graph easily. But maybe, say, next iteration we color zero as red, because that's bad; or again, you know, we can do the bottom bar as green, and then between one day and a week... oh wow, in hours this is going to be horrible.
B
...like yellow, and then we have orange, or...
B
Yeah. Again, at some point you would need to add annotations and drill-down to the graph to understand what goes on: you would know that this was a deploy-freeze day, this is the weekend, or even that there was a hotfix, so all of a sudden you see an explosion of merges because, you know, there was some production incident.
D
I think I know what you're saying. During that deploy freeze the graph should be really high: let's say we have a week-long deploy freeze and I make an MR in the middle of it; logically I should see everything start to spike until we start deploying again, and then it should go down. But I think the difference is whether we show the data points as they were.
B
Can we just start playing with it and see how it looks, and then make a conscious decision? Because I think that, whichever way we display it... I think that if we display it like the median time here, the top one, which kind of looks good but is lying, it's fine as long as we add a note: "please note that days with no deployments are not considered in the median," or something like that.
D
Yeah, if we're solid on that, I'd say the only piece we need to decide is when the data point appears; and if we're fine with using deployment time, then everything else is kind of more visual front end and we can push it off till next milestone. The API won't change depending on how we display it: even for these two graphs here that you see, the same underlying data would be returned from the API. So yeah, there's some flexibility there.