From YouTube: Getting Started with DevOps Metrics - GitLab Webinar
Description
Learn about DevOps metrics in GitLab and why it is useful to track them. We will cover an overview of DORA metrics, Value Stream Analytics, and Insight dashboards, and what they look like in GitLab.
A: Jumping in again, thank you, everyone, for joining us today. We're excited to go through the content of today's webinar with you, which is Getting Started with DevOps Metrics. I'm joined by my colleague, Alex Pham. I'll kick it over to her in just a moment, but before I do, I wanted to go through a couple of housekeeping items. First off, this webinar is being recorded, so you can look for that recording, as well as the deck, to come through to your inbox in the next day or so. If you have any questions that come up throughout this session, please put those in the Q&A portion of your Zoom window, and we'll have the opportunity to answer some of those throughout, and then Alex will answer some towards the end as well. Without further ado, I'll pass it over to Alex, who is one of our customer success managers here at GitLab.
B: This session aims to provide an overview of DevOps metrics, covering DORA metrics and other metrics available in GitLab. We're hoping to identify opportunities for greater efficiency, correlate those opportunities with business value, and connect the entire organization with a common goal and vision. For our agenda today, we'll start by covering DORA metrics, what they are and why they're important. We'll take a look at Value Stream Analytics, we'll also look at Insight dashboards and where you can find these items in GitLab, and then we'll round out with additional metrics available in GitLab, as Taylor mentioned earlier.
B: First, I'd like to take a step back and talk about what DORA metrics are and where they came from. DORA stands for DevOps Research and Assessment. After eight years of data collection and research, DORA's Accelerate State of DevOps research program has developed and validated four metrics of software delivery performance, plus a fifth metric called reliability. In the session today, we'll cover the first four DORA metrics, which are deployment frequency, lead time for changes, mean time to recover, and change failure rate.
B: Why do we care about tracking DORA metrics, or any metrics in general? ROI for DevOps is tricky, since it's hard to put a price tag on a process rather than a commodity. But DevOps requires a huge investment in every organization, and oftentimes executives want to know what they're getting in return. Using metrics can help improve DevOps efficiency and communicate performance to business stakeholders, which in turn can accelerate business results.
B: So here are the four DORA metrics that we'll look at today. Deployment frequency, in GitLab's take on it, is the average deployment frequency to production. Lead time for changes is the median time it takes for a merge request to get merged into production from master. Change failure rate is the percentage of deployments that cause an incident in production. And time to restore service is the median time an incident was open in a production environment in the given time period.
B: So who should be involved when it comes to metrics? The answer is everyone: everyone contributing on the team is involved in generating the metrics. This can include developers, testers, managers, and security folks. You can do this by having everyone on the team continue to contribute in GitLab, simply in the normal way that they do, and these metrics can then help drive business decisions for the team.
B: At the moment, there are several different places in GitLab where you can find DORA metrics. They're available in GitLab CI/CD Analytics, group-level Value Stream Analytics, and Insights dashboards, and APIs are also available for all four DORA metrics. We'll touch on each of these today as we go through the session.
B: GitLab measures deployment frequency as the number of deployments to a production environment in the given time period. The chart shown here is an example of how deployment frequency is visualized. We have the option to view the data for the last week, the last month, and the last 90 days. The red line across the chart indicates our average for that time period.
B: Similarly, the change failure rate can be found under the CI/CD Analytics page, on one of the tabs on top. This is also available at both the group and project level. The change failure rate chart here shows the percentage of deployments that cause an incident in a production environment; for example, this can be a deployment failure, a security incident, or a rollback or remedy. This is measured as the number of incidents divided by the number of deployments to a production environment in the given time period.
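The calculation just described can be sketched in a few lines (the function name and example counts are illustrative, not part of GitLab's implementation):

```python
# Minimal sketch of the change failure rate calculation described above:
# incidents divided by deployments to a production environment in the
# given time period, expressed as a percentage.
def change_failure_rate(incidents: int, deployments: int) -> float:
    """Return the change failure rate as a percentage of deployments."""
    if deployments == 0:
        return 0.0  # no deployments in the period, nothing to fail
    return 100.0 * incidents / deployments

# For example, 3 incidents across 60 production deployments:
print(change_failure_rate(3, 60))  # 5.0
```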
B: And for our fourth DORA metric today, the time to restore service can also be found under the CI/CD Analytics page. This shows how long it takes an organization to recover from a failure in production, so it gives us a better understanding of our software stability and reliability trends over time.
B
So,
in
addition
to
viewing
the
door
metrics
in
the
UI
under
CI
CD
analytics
page,
you
also
get
to
see
project
level
and
group
level
doremetrics.
You
can
get
those
through
the
API,
and
this
is
helpful.
If
you
want
to
see
metrics
beyond
the
90
days,
that's
shown
in
the
UI
through
the
API,
you
can
enter
in
your
date
range
and
see
the
door
metrics
through
there.
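As a rough sketch of what such an API call could look like, the snippet below builds a request URL for the group-level DORA metrics endpoint with a custom date range and averages a sample response. The host, group ID, and sample values are placeholders; the endpoint and parameter names follow GitLab's documented DORA metrics API, but check the docs for your GitLab version.

```python
# Sketch: querying group-level DORA metrics through the API for a
# custom date range (the UI only shows up to 90 days).
from urllib.parse import urlencode

GITLAB_HOST = "https://gitlab.example.com"  # placeholder host
GROUP_ID = 42                               # placeholder group ID

params = urlencode({
    "metric": "deployment_frequency",
    "start_date": "2022-01-01",
    "end_date": "2022-12-31",
})
url = f"{GITLAB_HOST}/api/v4/groups/{GROUP_ID}/dora/metrics?{params}"
# Send with: curl --header "PRIVATE-TOKEN: <your_token>" "<url>"

# The response is a list of daily data points, for example:
sample_response = [
    {"date": "2022-01-01", "value": 3},
    {"date": "2022-01-02", "value": 6},
]
daily_average = sum(p["value"] for p in sample_response) / len(sample_response)
print(daily_average)  # 4.5
```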
B: All right, so we took a look at CI/CD Analytics and at pulling the metrics from the API. Next, we'll take a look at Value Stream Analytics and how the DORA metrics look there. Value Stream Analytics collects and shows data across the entire software lifecycle with no integrations to be managed. This is a no-code Value Stream Analytics dashboard where you can use a single click to drill down and investigate further. A value stream is the entire work process that delivers value to the customer.
B: You can even test and compare different approaches over time, to see which process improvements help with efficiency, and maybe which ones caused additional idle time. To view the Value Stream Analytics for your group, you must have at least the Reporter role, since Value Stream Analytics only shows custom value streams created for your group.
B
You
will
need
to
create
a
custom
value
stream
and
then
we'll
go
over
how
that
looks
on
the
next
slide,
but
the
overview
dashboard
similar
to
the
one
shown
here
has
key
metrics
and
or
metrics
of
group
performance
depending
on
the
filter.
You
select,
the
dashboard
will
automatically
aggregate
door
metrics
and
display
the
current
status
of
the
value
stream.
B: This example shows how you can use Value Stream Analytics to find bottlenecks in the workflow. For each stage, a table lists the workflow items filtered in the context of that stage, and the table provides a deep dive into the stage's performance. Here, we're looking at the staging stage, and below it we have merge request, last event, and duration columns.
B: All right, so we covered CI/CD Analytics and we covered Value Stream Analytics, and now we are going to dive into the Insights dashboard.
B: All right, so Insights gives you the option to configure a custom report for insights into your group's processes, such as the number of issues, bugs, and merge requests per month. You can configure Insights to see data about your group's activity, such as triage hygiene, or issues created or closed in a given period. You can also see the average time for merge requests to be merged, and you can create custom Insights reports that are relevant to your group.
B: The GitLab insights YAML file is where you can define the structure and order of the charts in a report, and you can also define the style of the charts displayed. For example, you have options like bar chart, line chart, and stacked bar chart in the report for your group or project. In this example, we're looking at a single definition in the file that displays one report with one chart.
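As a minimal sketch, a `.gitlab/insights.yml` with a single definition, one report, and one chart could look like the following. The report key, titles, and label name here are illustrative, and the exact query keys can vary by GitLab version, so treat this as a starting point rather than a canonical example:

```yaml
# .gitlab/insights.yml - one report ("issues") containing one bar chart.
issues:
  title: "Issues"
  charts:
    - title: "Monthly bugs created"
      description: "Issues labeled 'bug' opened per month"
      type: bar            # also: line, stacked-bar
      query:
        data_source: issuables
        params:
          issuable_type: issue
          issuable_state: opened
          filter_labels:
            - bug
          group_by: month
          period_limit: 24
```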
B: First, we have the DevOps Adoption report. The DevOps Adoption report shows you how groups in your organization are adopting and using the most essential features of GitLab. For example, on the screen here in red, we can see the Development category, and it has approvals, code owners, issues, and merge requests as the features below it.
B: You can find this information under Code Review Analytics, to see the longest-running reviews in open merge requests and take action on those individual merge requests. For example, a high number of comments or commits can indicate that maybe the code is too complex, or maybe the authors require more training.
B: Next, we have Contribution Analytics, where you can get an overview of the contribution events in your group. This page allows you to analyze your team's contributions over a period of time, and you can identify opportunities for improvement with team members who may benefit from additional support. On Contribution Analytics, there are three bar graphs that illustrate the number of contributions made by each group member: you can see push events, merge requests, and closed issues for each group member.
B
You
can
use
this
information
to
see
when
folks,
who
are
busy
or
less
busy,
and
this
is
useful
for
discovering
team
members
that
might
be
contributing
too
much
and
are
maybe
at
risk
of
burnout.
And,
alternatively,
maybe
you'll
find
folks
that
are
disengaged
and
not
contributing
enough.
So
this
page
helps
highlight
the
information
that
you
can
use
to
help
with
a
balancing
of
a
workload.
B: In addition to seeing the programming languages, you'll also find average commits, and there's also an option for you to download the code coverage statistics as raw data in CSV format. The screenshot shown here is actually of our GitLab repository, and since all of our backend code is in Ruby, it makes sense that the biggest chunk is Ruby.
B
All
right,
so
users
can
create
their
own
individual
operation
dashboards,
where
they
can
monitor
different
projects.
The
operations
dashboard
shows
a
summary
of
each
Project's
operational
health,
including
Pipeline,
and
alert
status.
This
can
be
great
for
leads.
Team
leads
or
even
group
leads
at
a
glance
you
can
see
if
the
pipeline
succeeded
or
if
it's
currently
running
or
failed.
B: So from a single location, you can track progress as changes flow from development to staging and then to production, or maybe through any series of custom event flows that you set up. The Operations and Environments dashboards share the same list of projects, so adding or removing a project in one place will add or remove it from the other.
B
All
right
so
next
up,
we
will
take
a
look
at
a
number
of
different
security
reports
and
inside
sets
also
available.
B: Since the dashboards are updated with the results of completed pipelines run on the default branch, they do not include vulnerabilities discovered in pipelines from other, unmerged branches. When you're on the Security dashboard, you can hover over the chart to get more details about a vulnerability, and you can view the chart and the trends over a 30-day, 60-day, or 90-day time frame. If you want to view aggregated data beyond the 90 days, you can use the API; GitLab retains the data for 365 days.
B: All right, so when it comes to security insights, the vulnerability report provides information about the vulnerabilities found by scans of the default branch. You'll see the results of a successful job regardless of whether the pipeline as a whole was successful or not. The report is available at both the group and project level.
B: You can use the compliance report to get a list of compliance violations from all merged merge requests within the group. You can use the report to get the reason and severity of each compliance violation, along with a link to the merge request that caused it. To view the compliance report for a group, the user must be an admin or have the Owner role for the group.
B: All right, and we have made it to the end of our session today, so I wanted to open up the floor for any questions.
A: Thanks, Alex. Yeah, before we jump in, and we do have a couple of questions that came through, I just wanted to let everyone know that we've opened up a feedback poll. We'd love to have you all take just a minute to answer those couple of questions.
B: Yeah, so DORA metrics are available at our Ultimate tier at the moment. If you are interested in seeing the DORA charts, I would recommend reaching out to a GitLab team member, or letting us know in the chat here, and we can help see if a trial is something that you're interested in.
A: Awesome. What report should I look at if I want to start getting data around where my team is spending more of their time?
B: Yeah, I would say there are several different reports that you can look at. A good starting point might be Productivity Analytics, and then there's Value Stream Analytics. That could be helpful if you're on Premium or Ultimate, which allows you to create your own workflow, and then from there you can investigate what's taking the most time in the different stages.
A: Great. Last one here: for the change failure rate metric, how does it know when there's an incident?
B: So at the moment, the way that GitLab knows when an incident occurs is when an incident is created in GitLab. So you'll have to use Incident Management through GitLab and create incidents through there; this can be done automatically or manually for it to be counted in the change failure rate. We also have integrations as well.
A: Great, thanks, Alex. With that, we will wrap up today's session. Thank you, everyone, for joining us, and, like we said at the beginning, we'll be sending out the recording and the deck that Alex went through here in the next couple of days. Thanks, everybody.