From YouTube: Manage Analytics discussion about metrics

Description
A discussion with Manage Group Product Manager Jeremy Watson regarding DevOps metrics in GitLab
A: Hey Samir, it's great to see you. I'm Jeremy Watson, a group product manager here at GitLab, and I'm responsible for the Manage stage of the DevOps lifecycle, which includes a number of different areas and capabilities in GitLab, one of which is analytics and our analytics suite of features inside the product. That includes Value Stream Management, and it includes Insights. So I'm here today to talk a little bit about an overview of how analytics works in GitLab. You've actually created an agenda here, and you had a few questions that you wanted to touch on. Do you want to start there?
A: Oh, it's no worries at all. I did type in some thoughts there, but we can start there and I'll kick it off. The first question that you posed was: what are some good metrics that can be used to measure DevSecOps? I thought that was a really interesting question. Fundamentally, to me, DevSecOps is all about combining development, operations, and security efforts in order to ultimately get your code to market faster.

So the ultimate way of measuring the effectiveness of your DevSecOps effort is whether you're shipping things faster, with a reduced number of errors, and deploying more often and more frequently.
What I've seen in the marketplace is that the industry is coalescing around those types of metrics, and customers are coming to us more often requesting these things. They're no longer asking what metrics they should be looking at; they actually say, "These are the ones that I've read about and that I want to see in the product." So those are the questions that people want to have answers to:
A: Are we actually seeing faster time to market? Are we seeing fewer vulnerabilities in production? There are these key metrics that were first called out in the book Accelerate and have now come to be called the DORA metrics, and those are the ones that customers are coming to us asking for. There are four key ones that come to mind.

The first is lead time, which harks back to that concept: are we actually getting things to production faster? What is the average time it takes to go from commit to production? A developer pushes a commit to a branch that eventually makes it to production; how long does that process actually take? The goal is for this to be an hourly process, happening so frequently that you're pushing new code to production multiple times a day.
For many organizations, the lead time is so high that if there is an error or a mistake, it costs you weeks and weeks of time to correct it, and therefore your time to get to market slows down to a radical degree, because the cost of a defect becomes so ridiculously high.
The second one is deploy frequency: how often are we actually deploying changes to production? The ideal is that you're doing this multiple times a day, and for organizations that take weeks, if not months, to deploy, we see radical improvement in velocity once they can deploy very frequently.

Time to restore service is the third one that comes to mind. If we ship a defect that leads to some type of unplanned outage or incident, what is the amount of time it takes us to recover from it? Again, if you're committing frequently and deploying changes frequently, then ideally it takes you very little time to react: there's an outage, and you can recover from it in less than a day, because your developer is able to push a rollback or a new merge request to production really quickly.

And then the fourth is really around change failure rate.
That's the number of changes that you ship to production that result in "Oh, actually, we need to roll this back," because it introduced some critical vulnerability that we didn't account for, or led to some unexpected outage or defect that we now need to either remediate or roll back. By shifting testing and security left, you should see those things happen less and less frequently. The goal, when looking at it on a per-commit basis, is
that you see this happening less than 10% of the time.

So those are really the four key metrics that I hear about from customers, and we're working on making all of them available within GitLab. If you see yourself embracing that DevOps philosophy of pushing as many of these tasks left as possible, like security, testing, and compliance, then ultimately you should see your time to market reduce, along with how quickly you're able to recover from failures and how fast you're deploying. And the biggest one is that your change failure rate should reduce big time, because you're able to catch all these errors before they actually make it to production and reduce the number of times you need to roll things back, independently of GitLab's capabilities.
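The four DORA metrics described above reduce to simple durations and ratios once you have deployment records in hand. A minimal sketch of two of them, lead time and change failure rate (the record shape and field names here are illustrative, not GitLab's exact API payload):

```python
from datetime import datetime
from statistics import median

def lead_time_hours(deployments):
    """Median hours from commit to production deploy.
    Each record is assumed to carry ISO timestamps for the commit
    and for when the deployment finished (made-up field names)."""
    deltas = [
        (datetime.fromisoformat(d["deployed_at"]) -
         datetime.fromisoformat(d["committed_at"])).total_seconds() / 3600
        for d in deployments
    ]
    return median(deltas)

def change_failure_rate(deployments):
    """Fraction of production deployments that had to be rolled back
    or remediated; the goal mentioned above is under 10%."""
    failures = sum(1 for d in deployments if d.get("failed"))
    return failures / len(deployments)
```

Deploy frequency and time to restore service fall out of the same kind of records: count deployments per day, and measure the gap between an incident opening and the deploy that restores service.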
B: No, that makes sense to me. Understand that we work in the federal government quite a bit, and so there's a bit of a compliance question that comes up. A change management framework is generally one of the things that's very heavily oriented toward government business, because there used to be a time when they didn't want just anybody sending through any piece of code; it had to go through a proper change management process, approvals, and so on and so forth.
So we're trying to marry the two concepts, right? One is: go as fast as you can, with as few stops as possible, and measure that. And, oh, by the way, make it so that the change management team knows what's going on and everybody else is aware of what's going on. From that perspective, I see that they need to look at the world a little more retrospectively than prospectively. What they've been doing is: you changed one piece of code; okay, why?

What was happening? And then, did the database team know about it? Did management know about it? So it takes them a week just to gather the information, then another week to decide whether it's the right thing to do or not, and then another week to actually deploy it, and we haven't even gotten into the actual steps of deployment. It takes so long.
A: This is going to vary from organization to organization, I'm sure, even within the public sector, where they have different levels of compliance depending on the sensitivity of a particular project, for instance. We want to move all of that into the product.
We want an artifact trail like that, where certain organizations and certain groups approve MRs before they're actually merged into master and make it to production. We're building in that kind of dynamic, so that it takes, say, two people of a certain user type to be able to approve something. Then we roll that up into a compliance dashboard, where we can show you the different changes that are pending approval.
Someone can go down the list and click deny, deny, deny, approve, approve, approve, and audit events are actually generated for those actions. This is something that was developed with PCI compliance and SOC 2 compliance in mind, the needs of, say, a financial organization, not necessarily the public sector, so maybe there's more work we need to do in order to fulfill that.
A
But
to
answer
the
first
part,
which
is,
I
think
that
we
need
to
move
as
much
of
this
into
the
product
as
we
can
so
that
we
can,
you
know,
fulfill
those
needs
of
those
customers
in
the
product,
as
they're
generated
to
prevent
them
from
having
to
do
the
week's
worth
of
work
later
on.
The
other
thing
is,
I
I
think
that,
from
an
analytics
standpoint
like
they're,
there
there's
customization
that
we
need
to
continue
to
build
on
so
the
primary
way.
The primary feature we have for this type of customizable analytics need is called value stream analytics. Out of the box, we have certain stages, like plan, code, and test, that correspond to how people generally use GitLab to build software. But organizations that have specialized needs have to be able to customize this. So, for example, I've created a stage here called "ready for dev idle."
You can actually go into value stream analytics, click Edit, and create your own stage in order to monitor how long a particular issue or merge request spent in a particular state. Here you can see that I'm curious to understand how long an issue sat in "ready for development" before a developer actually picked it up. So I set the start event to the "ready for development" label being added, and the stop event to the "in dev" label being added.
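Under the hood, a custom stage like that is just the elapsed time between two label events on an issue or merge request. A rough sketch of that computation (using made-up event records, not GitLab's exact resource-label-event payload):

```python
from datetime import datetime

def hours_in_stage(events, start_label, stop_label):
    """Hours between the start label being added and the stop label
    being added, given label events sorted oldest first."""
    def added_at(label):
        # First event where this label was applied to the issue.
        return next(
            datetime.fromisoformat(e["created_at"])
            for e in events
            if e["label"] == label and e["action"] == "add"
        )
    return (added_at(stop_label) - added_at(start_label)).total_seconds() / 3600
```

Averaging this over all issues in a group gives the per-stage duration the value stream analytics page reports.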
But overall, that's how I see it. Number one, I'd love to see us move as many of those capabilities as we can into the way that people build software every single day in the product. Number two, you can keep an eye on those metrics by using customizable value stream analytics. Right now you can build the stage and see an itemized list on the right-hand side, but in the future we're planning on doing more, like being able to chart, and to alert on changes.
B: Cool, that's actually a very beneficial vision that you have, because I think that's exactly what customers are looking to get out of this stuff. Up until now it's been a bit of a difficult journey, because of the separated product suites they have to use to get some of those metrics, and then having to run the reports around it.
A: Now, I'm not saying we have everything perfect, but I think that's the big advantage of GitLab: we have all that information, all that data, and all those users in a single application. So we can take advantage of that and do things that other toolchains really can't.
B: So one of the use cases this customer is looking at GitLab for is GitOps, where there isn't a whole lot of development going on. They have a bunch of applications that have been developed to a certain extent; it's just maybe bug fixes, maybe one or two things changing on a very large time scale. We're talking maybe something changes every six months or so.

The GitOps idea they have is about making sure that the application is up and running, the cluster (or wherever it's deployed) is managed properly, and application downtime is near zero, if at all possible. Is there anything you're working on toward that sort of use case to feed into this, or is our world mostly developer-focused?
A: Right now, it's primarily developer-focused. I think that Jackie Porter on the CI/CD side has been thinking about some improvements here, so I'll have to get with her to understand what she's working on. But at this time the analytics group is primarily thinking about the developer experience and optimizing for that. I'd be really interested in learning more about how this particular customer is thinking about things, so we can consider some improvements there.

B: It's one of the many use cases that they have, so it's not the primary one. But certainly I'll keep an ear out for more stuff and get that to you. Yeah.
B: They do the traditional development life cycle. They have federal programs; this is a health agency, so they have health-related programs that come through, and they have to build applications for them. They have applications like vaccine trial applications, and applications for managing a biogenetic library of some sort. In fact, for COVID they've actually got the entire genetic pattern of the virus in their library.
So they do those kinds of things, and their development is kind of an interesting situation. If there's a new disease coming up and there are some trials or vaccine trials going on, like for COVID right now, they're building an application toward that, managing the entire trials program.

On the other hand, they have applications where they've developed processes and procedures for how to do these trials for other kinds of things, and those applications aren't changing on a day-to-day basis; it's just a five-year or ten-year running trial period. Imagine that as the outbound, but then they have shorter ones too.
A: Interesting. Sounds like a very cool customer you're working with.
B: It is. It's one of the most interesting customers. I don't want to name the name right now on a recorded call, but I can certainly share more details about it. Cool, sounds good. I think the next question I had was: what does GitLab offer in terms of dashboards to display these metrics? I think you covered some of that already with the value stream metrics you were showing. Is there anything else that you'd like to cover?
A: Yeah, I'll talk a little bit about our philosophy here. I talked a little bit about how we're concentrating on the development team and making sure that they're working efficiently. The way that we've structured our analytics features in GitLab is that we have value stream analytics at the very top level, which is the 10,000-foot view of how your organization is using GitLab at the group level, and you can drill in from there.
The idea is to be able to identify specific areas of waste, and we'll have very specific, stage-specific analytics features that allow you to take action, geared toward very specific personas and their needs. I'll share an example here: one example is what we've built out already for code review analytics.
We want to have the 10,000-foot view in value stream analytics, but if I'm an engineering manager or a director, I also want to be able to drill in. Say I'm working with the analytics group: I want to see which merge requests are taking a long time, so I can search under the GitLab project and see exactly which merge requests are still open or lagging behind and might need action from me.

I can jump over here to merge request analytics and see, over time, the number of merge requests merged per month. We're actually working on this feature right now, so it's in a partial state: we're adding a filter bar at the top and a data table below, so that as an engineering manager or director I can go into value stream analytics, see the top-level view of where we're taking the most time, and then drill down.
A: We also have this issues analytics feature where, as a project manager or a product manager, I can see the issues that have been opened per month, drill into the specific group that I care about, and then take action from there. I can click in and say, "Oh wow, looks like we have five issues that were opened today." I can filter for bugs, for instance, and see which bugs are unaddressed and which defects have gone untriaged, and say, "Looks like we've assigned these four to milestones; these four here are open, but we actually haven't acted on them."
We need to take some action. So we have the top-level view, and then we want these drill-down, specific analytics features that help a specific persona take action on a particular problem that they want to solve. That's what we offer in terms of dashboarding.
We also have this feature that we call Insights, which I think is probably one of our most flexible dashboarding features. You can see what we have publicly available for the gitlab-org group: we have issues, and we have bugs created per month by severity.
We have a number of different dashboards that you can click into here, around regressions and others. These features and visualizations are all powered by an underlying YAML file that is highly configurable. You can see that this one only covers very specific projects within the GitLab envelope of projects in that subgroup, and we have very specific visualizations that are defined with this feature, down to exactly which labels we want to look at, how we want to title the charts, exactly what time period to look at, how we stack it up, and what time period we're grouping the columns by. So the idea is that we want to have flexible dashboarding at the top level, along with the 10,000-foot view.
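The kind of per-month, per-label rollup those Insights charts describe is straightforward to reproduce from exported issue data. A small sketch (the record shape here is illustrative, not GitLab's exact issue payload):

```python
from collections import Counter

def opened_per_month(issues, label=None):
    """Count issues opened per calendar month, optionally restricted
    to issues carrying a given label (e.g. "bug")."""
    return Counter(
        i["created_at"][:7]  # "YYYY-MM" prefix of an ISO timestamp
        for i in issues
        if label is None or label in i.get("labels", [])
    )
```

Grouping by a severity label instead of "bug" gives the "bugs created per month by severity" chart mentioned above.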
A: We want to be able to offer features like Insights and customizable value stream analytics, so you're able to see what you want to see and not be forced into GitLab's opinions if they don't make sense for your team. So that's where we're going, that's what we have now, and that's our philosophy.
Moving forward, you'll notice that with things like code review analytics, we just have the data table, and the idea is that we want to make sure we're showing useful data to our customers before we go further into alerting, visualizations, and actions that you can take directly from that page. But that's the vision. Does that make sense? What are you hearing from customers, what are they interested in, and where do you think we should be investing?
B: Yeah, that makes sense. Interestingly enough, not too many of my customers actually use the analytics features yet, because they haven't matured enough in their own organizations to realize what they should be measuring and why. They just want something faster than whatever they had. In some cases they have a fragmented view of the world, where they want the "best of breed" of applications.
So then you get the fragmentation of: well, Jira is the best for issue management, and GitLab is the best for code management and possibly CI, but maybe they have some leftover Jenkins areas or whatever. So they just don't have the data in GitLab to be able to make sense of it. Unfortunately that's happening quite a bit at some of my customer sites. But for the ones that actually start adopting GitLab,
I think this is one of the things that I want to highlight to them: look, this is something you get right out of the box. You don't have to build a single report, other than the Insights piece, where you do need to customize it if you want to make it more interesting, but that gives you the variability, right? It gives you the out-of-the-box value stream analytics, and then, if you want any more detail, you can certainly build what you need out of it.
So what I'm seeing from customers, at least in the public sector that I'm looking at, is that they're starting to get to a point where they're ready for those conversations. They haven't been having them; it's all been block and tackle so far: "How do I get my pipeline to run faster? Security runs slower; how do I speed up my security?" "Well, I don't know. What else do you have going on?" "No, everything else is perfect."
A: Yeah, I think the challenge for us on the analytics group is really understanding when we lean into our single-application advantage and are very opinionated, saying, "Hey, here's what you need to be looking at; here are the metrics that you want to work with," versus the organization that is very opinionated itself, knows what it wants to find and see, and wants to lean into tons of customizability and integrations with its existing toolchain.
A: Some organizations have incredibly high switching costs at this point and are happy with Jira. One thing I'll say is that we've definitely built GitLab with a partnership with Atlassian in mind, knowing that we want to play nicely with Jira for those customers. This is just a random slide from an old presentation I gave, but we do have the ability to integrate directly with Jira at the project level, so that you can be looking in Jira and see links and cross-references into GitLab at the commit and merge request level. We also have the ability to import Jira issues in total: we have a really nice Jira importer, so if you want to just pull everything over and migrate away from Jira, we give you the tools to do that.
B: So how do the metrics work in this case? A lot of what you showed is based on issues and MRs. Insofar as GitLab is managing the MR piece, I get it. But what about the issue piece?

A: Yeah, the issue piece is still something where you would need to do most of your reporting in Jira, if you're fully invested in Jira. If you're using GitLab for SCM and Jira for issue management, your issue management and its analytics would still need to live in Jira, because we don't have those issues and those objects in GitLab to reference and analyze.
That is a shortcoming, and it's definitely something that we could consider improving on. But at the moment the objects need to be in GitLab for us to be able to do any analytics on them. So in that case we would probably see the customer build something themselves.
B: Yeah, but that's a conscious decision that they made by using different tools, so that's just something they have to continue to support. Yeah, I'd agree. Cool. And then I think the last question is: how is a project's maturity measured?
A: Yeah, that's a really great question, and something that I think we really need to improve on. At the moment we don't really have this concept of project maturity in GitLab, where you create a project and we have an onboarding experience that provides you with some type of maturity score or report card on what you could be doing better at the project level. I'd love to know if you think that's something we should do, and whether customers are asking for it.
The only analogous concept that we have is really the DevOps Score, which is a feature that we haven't heavily invested in. It gives you a top-level score of how effectively you're using the instance, compared to other instances that are providing us with telemetry data. There are a couple of ideas I'd like to share with you in terms of how we're thinking about maturity, but first I'll ask you: what are you hearing? What should this be?
B: Yeah, what we're hearing, or at least what this one customer has expressed, is: if I onboard a project or program onto GitLab, I want to see its progress over a period of time.

Let's take security as an example, because that was the easiest one for them to describe to me. Say that when I first onboard, there are a thousand vulnerabilities that have been found, and over a period of time, say 60 or 90 days, maybe a year, whatever the time scale is,
I want to see how we're maturing on our security roadmap. Are we burning down stuff? Are we keeping it status quo, meaning things are being added but just as many are being removed? Or are we actually just not removing anything? That's one example, with security being the very key focus here, but it applies from an overall DevOps maturity perspective too: how much faster are we going, right?
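That burn-down question reduces to comparing the rate of newly found vulnerabilities against the rate of resolved ones over a window. A toy sketch of exactly that comparison (the function and its inputs are made up for illustration):

```python
def burn_down(open_at_start, found, resolved):
    """Classify a reporting window: 'burning down' if more items were
    resolved than found, 'status quo' if the two balance out, otherwise
    'growing'. Also returns the open count at the end of the window."""
    end = open_at_start + found - resolved
    if resolved > found:
        trend = "burning down"
    elif resolved == found:
        trend = "status quo"
    else:
        trend = "growing"
    return trend, end
```

Running this per 30/60/90-day window gives the trend line the customer describes: starting from a thousand known vulnerabilities, is the open count actually shrinking?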
So when I first onboard the project, say I take the metrics 30 days in, which probably covered two sprints; say I delivered five issues between those two sprints. Then, as I go to a bigger time scale, how is my maturity looking from that perspective? If I set a goal to go to 10, 15, or 30 by a certain date, am I meeting those goals?
That's the type of maturity they mean. Now, feature maturity is something that I asked them about, but they didn't think that was something they wanted to measure, because that's something that we do, right? We have a maturity page where we talk about how features are being developed. Although, if you have any insights on that, I think that might be an interesting conversation for them as well.
A: Yeah, it's an interesting problem, because the way that I've heard about the concept of project maturity, it's really two problems. One is: as a GitLab champion, I want to be able to tell my boss that we're using this great product and getting a lot of return on our investment, so I want to be able to show, at the project, group, or instance level, that we're really using this product a lot. The second is the real problem that I hear, which is: as a large enterprise,
A: I want to be able to understand the state of the world and provide a report card against certain projects, groups, or teams that says: these are the things that are important to me as an org; how are we doing? Are we making progress against those goals or not? The challenge with that is that every organization is going to care about something slightly different. You just talked about security scanning, and vulnerability management is absolutely something that I hear about frequently, but I also hear about code quality, about defects and incident management, and about engineering productivity: making sure that we're at a certain level and iterating and improving on the productivity of our individual teams.

So one challenge that we're trying to rationalize is: where do we draw the line? How do we start with the things that GitLab thinks are important, like productivity, planning, defects, code quality, security, and so forth, and then eventually layer in customizability? The idea is that I should be able to set goals for each of these and see a green/yellow/red type of report card: are we actually making progress on this? Are we ahead of our goal? Are we lagging behind?
Are we actually seeing the number of new bugs and our defect resolution rate change over time in a way that means we need to take some action? As an executive, I want to be able to see this screen, get a very quick overview of how my organization is performing against those goals, and take action if needed. That's really the vision of where we'd like to go, and we're going to ship an MVC of this in 13.4.
But this is definitely an area that we need to invest in a bit further. That's how we're thinking about it right now. Does that make sense?

B: Yes, it makes a lot of sense. And actually, thank you for sharing that epic information, because I had somehow missed it. I think that will be of very much value to the customer that I'm talking to.
B: Yeah, absolutely. With the prospect, we have weekly cadence calls, so maybe one of those weeks I may have you come in as an invited speaker and just talk about that with them. Yeah.
B: Cool. And then I think we touched on the last question I had, which is what custom reports and dashboards are available to present the collected metrics, and Insights covers that. So I think we're good on that.
A: So you see a spectrum of people; it really depends on their use case and how detailed they want their dashboarding to be. If there are custom reports and dashboards that this prospect is already using, I'd love to be able to take a closer look at them, to see if we can better support them in our product.
It's just a very wide spectrum of needs. Some customers love the opinionated stuff that we have out of the box; customers that have very specific needs want lots of customization and are very particular about what they want to see. So we need to be able to support both.

B: Sure. And the last question, which I'm throwing in as a bonus, I apologize: the metrics that are collected are available to extract out of GitLab using the API, correct?
A: Yes, that's correct, for the most part. It depends: for users on GitLab.com, for instance, there are instance- and application-level logging events that aren't going to be available via the API.
So that's true for user-driven activity in the vast majority of cases, but there definitely are structured events in the log files that wouldn't be available to, say, our SaaS users. I'd imagine that for this particular customer segment that's probably not an issue; they're probably more interested in self-managed.
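One practical note on pulling those metrics out via the API: GitLab's REST API paginates list endpoints, so extraction scripts typically loop over pages until an empty one comes back. A minimal, generic sketch of that loop (the page fetcher is injected, so the pattern is independent of any particular endpoint):

```python
def fetch_all(get_page):
    """Collect every record from a paginated endpoint.
    get_page(page) should return one page of results as a list,
    and an empty list once the pages run out."""
    results, page = [], 1
    while True:
        batch = get_page(page)
        if not batch:
            return results
        results.extend(batch)
        page += 1
```

In practice, get_page would wrap an HTTP GET against an endpoint such as /api/v4/projects/:id/issues with per_page and page query parameters and a private token header.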
B: Yeah, they are interested in self-managed. They usually have a log management tool, like Splunk or something, that they're going to take logs into anyway. This is more, you're right, about the user-driven activities and the kinds of dashboards we're offering. Perfect, cool, excellent. Thank you very much, Jeremy, I appreciate it. I hope and wish you and your family well, and thank you for taking the time to join us today.
A: You too, Samir. Thanks a lot for the conversation. Do you want to flip off recording real quick and we can wrap up? Thank you, sir.