From YouTube: Dev Section North Star Metric Review
Description
Dev Section PMs review their north star plans with the team as we iterate on what they should be.
Agenda doc: https://docs.google.com/document/d/1SYCKJhVzSVhV90P2ze8Vt0YTwFFO1AEmiX7tqaRsb0c/edit#
A: Depending on how expansive the group is, that'll essentially be something we measure and track how well we're doing in that stage, so it should be a metric: something that we believe we can modify and change and grow, and then, by tracking it, we can understand how successful we are. So, very simply, the agenda is essentially going through each of the groups. Obviously, we've got seventeen groups in Dev.
A: If you do the math, I think that's no more than two or three minutes on each group, but obviously this is meant to be collaborative. It's meant to be something where we not only challenge one another on some of our North Star metric thinking, but also where we learn a little bit more about the other metrics that other team members care about.
B: The Manage team? Cool, thanks a lot, Eric. So I'm excited to talk a little bit about the North Star metrics that Manage has been thinking through. Before we dig into specific metrics, there are two general thoughts that I wanted to articulate. One of the goals that we had for this exercise was really to make sure that we were creating North Star metrics that could really be owned by an individual group, that they were largely unilaterally responsible for driving.
B: What I wanted to try to avoid were North Star metrics that were dependent on confounding factors, where, if we had another GitHub-acquisition type of event happen, the rising tide lifts all boats and it becomes a little bit harder to figure out whether the things we're actually doing are contributing to our North Star metrics. We're trying to find things that are resistant to some of these confounding factors. Also, one thing that we tried to do during the discussion was really to think less about...
B: ...what's measurable, what we already have in the data warehouse, and instead think about what total success for the group looks like. If it's completely lovable, everything is amazing, and people are attaching themselves to the things we're shipping, with great KPIs, what does the profile of that world look like from a KPI standpoint? And then kind of work backwards from there.
B: That measures the Create authentication strategy a little bit more directly, so that was kind of the logic around that. We may iterate on this further; right now the status is that we're verifying how measurable this actually is, and then working out where the gap is between what we can measure currently in Periscope and the additional things that need to be added to the usage figures. That's still to be discovered, but that's kind of the current thinking. Any feedback on them?
B: This is really good in terms of being a hard metric to game, but it's also a very hard metric to move, because you have that month of lag time after anything that you ship. So retention is useful, but what we aligned around was: we should be thinking about this in terms of weekly actives, to see just the number of unique users that are using our analytics features.
B: If that goes up and to the right, either we're improving our retention, or we're increasing the scope and the breadth of what we're able to do and attracting more users to the analytics features that we have. So that was kind of the logic for that, and I think this is currently measurable for GitLab.com, at least, using Snowplow. The next step would be to prototype a dashboard for this and set some goals around it. Does that make sense?
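A minimal sketch of what that weekly-actives number could look like, assuming a hypothetical list of events with a user id, a feature name, and a date; the feature names and event shape are illustrative, not GitLab's actual Snowplow schema:

# Weekly unique users of analytics features (illustrative data, not
# the real event schema).
from datetime import date

events = [
    # (user_id, feature, date of the event)
    (1, "cycle_analytics", date(2020, 1, 6)),
    (2, "code_analytics", date(2020, 1, 7)),
    (1, "cycle_analytics", date(2020, 1, 14)),
    (3, "insights", date(2020, 1, 15)),
]

ANALYTICS_FEATURES = {"cycle_analytics", "code_analytics", "insights"}

def weekly_actives(events):
    """Unique users touching any analytics feature, keyed by ISO week."""
    weeks = {}
    for user_id, feature, day in events:
        if feature in ANALYTICS_FEATURES:
            key = day.isocalendar()[:2]  # (year, ISO week number)
            weeks.setdefault(key, set()).add(user_id)
    return {week: len(users) for week, users in sorted(weeks.items())}

print(weekly_actives(events))  # {(2020, 2): 2, (2020, 3): 2}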
C: Mac, do you want to talk a little bit about compliance?

D: Yeah, and sorry I'm late; I was on that interesting call. So for the compliance group, Jeremy and I talked a lot about what would make sense, because something like compliance is, I guess, arguably a bit passive in nature, in that it's a kind of pervasive theme, or set of activities, that occurs in an organization to make sure that things happening within GitLab adhere to company policy, which makes the company compliant with some legal or regulatory framework.
D: So we broke it down into what we've dubbed compliance-related actions. That's things like searching through the audit events table, or interacting with things like merge request approval settings: the number of merge requests that had approval rules; gosh, sorry, I'm just blanking. But these are actions where, when we look at an aggregate activity view, the number of interactions with these different parts of the product indicates that there's value being derived from them, particularly around things like audit events, right?
D: The more that people are searching through audit events, the more that tells us they're getting value from them, because they need the traceability of what's happening within GitLab. The number of merge requests that have merge request approval rules on them is indicative that there's value, or validation that there's a need, to compliance-check or gate these changes to production. So I think that's where we're coalescing right now, with the expectation that we'll refine that, or maybe pivot slightly, as we implement more features and determine what might be more appropriate.
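A sketch of counting compliance-related actions in aggregate, assuming a hypothetical event log; the action names are stand-ins for the examples mentioned (audit event searches, approval rules), not real GitLab event names:

# Aggregate view of compliance-related actions per group (illustrative).
from collections import Counter

COMPLIANCE_ACTIONS = {
    "audit_events_searched",
    "mr_approval_rule_added",
    "mr_merged_with_approval_rules",
}

event_log = [
    {"group": "acme", "action": "audit_events_searched"},
    {"group": "acme", "action": "mr_merged_with_approval_rules"},
    {"group": "globex", "action": "issue_created"},  # not compliance-related
]

per_group = Counter(
    e["group"] for e in event_log if e["action"] in COMPLIANCE_ACTIONS
)
print(per_group)  # Counter({'acme': 2}): interactions per group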
B: Yeah, and I think that might be the thing to clarify, Matt: what do we mean by eligible customers? Because when you and I talked about this, we talked about scoping it down to maybe just the paid users that are even eligible for using this. But are we thinking about this at the instance level, to say, of all of the EE instances out there, are they using compliance-related actions at the instance level? Or were you thinking about it more at the user level?
D: I feel like it would be at the group level for GitLab.com, or as a whole for self-managed, because individual users might be doing the search actions, but there's not necessarily one particular user that needs to see that data, such as with audit events. And if we find that the number of merge requests with these approval rules is in fact a valuable metric, that's not really a user action as much as it's organizational policy. So that would be my rationale there.

B: Yeah, I totally agree. Thanks.
D: Yeah, I generally lump that into the compliance-program context. I think there's a separate camp that would probably argue, well, peer review is good for code quality, period. But if we extrapolate that out, code quality is usually one component of a compliance program, for something like the SDLC or an InfoSec policy, and I would argue that it serves more value for those compliance stakeholders and groups.
A: ...measures, right? Like how many people are using an analytics feature, and you could probably double-click down into what counts as an analytics feature. And here you've kind of said, hey, everyone taking a compliance-related action. So these are now becoming very complicated but aggregated-type North Star metrics, where you have to do a significant amount of logic to get them. How feasible is a North Star metric like this to implement for you, Matt? And I guess that goes back to the analytics one as well, Jeremy.
B: I agree with that. I mean, I think it's hard for us to figure out a way of getting around that. Analytics covers something like eight different features, and right now we'd like to see increased engagement in all of them. At the moment we don't have an outstanding way of measuring customer satisfaction.
G: So: are we picking the right platforms to migrate people from, or are we picking something that's obsolete? I mean, are we picking the right things? Are these importers easily discoverable? Are they easy to engage with in the process of creating your instance and uploading your data? Are they easy to use, and do they work well? You know: are they stable, do they work every time, or have you tried one, it didn't work, and you never came back? So I felt like these...
G: It's probably good to qualify this, to narrow down the new projects we'll look at by looking at just the projects created within X number of days of the instance being created, because that's where we expect people to actually use the importers to move their data into GitLab, and I think the further away you are from that initial creation, the less likely you are to import old work. So, you know, looking at the new projects being created in an instance that's been around for four years...
G: ...it's got a lot of old work; it's unlikely to be an import target. So: looking at the fresh instances, or fresh groups, being created on GitLab.com, looking at how many of those we actually support with the importers, and trying to move that number up. And I feel that if I can answer these four questions with "yes, we're doing the right thing" on each one, this number will move up.
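A sketch of that qualification, assuming illustrative project records and a placeholder 30-day window for the "X number of days":

# Share of "fresh" projects (created soon after their instance) that
# arrived via an importer; the window and records are assumptions.
from datetime import date

WINDOW_DAYS = 30  # the "X number of days" is still to be decided

projects = [
    # (project created, instance created, import source or None)
    (date(2020, 1, 5), date(2020, 1, 1), "github"),
    (date(2020, 1, 20), date(2020, 1, 1), None),
    (date(2020, 6, 1), date(2016, 3, 1), None),  # old instance: excluded
]

fresh = [p for p in projects if (p[0] - p[1]).days <= WINDOW_DAYS]
imported = [p for p in fresh if p[2] is not None]
print(f"{len(imported)}/{len(fresh)} fresh projects used an importer")
# -> 1/2, instead of 1/3 diluted by projects on long-lived instances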
C: Just an observation: in my experience, a great automated migration capability unblocks deals by its mere presence, even when it's not actually used. And many times teams will actually choose not to use them, for good reasons; they want to leave cruft behind, and so on. But being able to say "we've got it" actually allows you to...
G: That's a very good point. I mean, there are some hard-to-measure values that we deliver with importers, this being one of them, but also the whole first impression. Sometimes the importer is one of the first things that people use, and that can have a lasting impact on all the other aspects of our engagement with that customer. So if the import breaks, if it errors out, if it times out, if it takes 24 hours and people just give up, that does have a lasting effect.
J: One quick thought on the importers that I think Gabe and I had talked about: for importers, take the Jira importer as an example, one of the ways we were thinking about this is how many customers have high engagement on the issues they import, after they've been imported, right?
G: Yeah, I guess that's a good way to look at whether people are just importing things for the record. Like, if we were to move somewhere else, we'd bring thousands of issues that have no movement, because they've just been there for four years, no one's touched them, and they're gonna continue to sit there. So how many of those issues are really something that I'll work on in the future, versus just kind of keeping the historical trail of all the things that were done? Yeah.
H
Could
a
comment
that
was
asking
like
time
limited
or
why?
Even
thinking
about
like
limiting
the
time
and
I
was
thinking
sort
of
my
own
usage
of
importing
projects,
and
sometimes
it's
I
mean
obviously
I've
been
at
killing
more
than
ninety
days
having
online
more
than
ninety
days,
and
there
are
times
I'm
like
there's
a
project
on
github
that
I
want
and
I
just
import
it,
because
I
want
to
test
the
functionality
or
something
like
it
and
do
it.
But
I
don't
know
how
common
that
is
for
customers.
H: I know there are lots of customers that mirror projects from GitHub back to another GitLab instance and do things like that, and so I'm wondering whether the 90-day metric is the right way to time-box this, because I think there's value in being able to grab a project whenever I want. Maybe.
G: We'll pull both and compare them and see which one gives us a better indication. What I'm afraid of in this case, if I just look at everything, is that any improvement to the importer will actually get lost in the rounding errors, because the number of imports versus the number of just blank new projects is so small that it's easy for it to get messed up. So I want to look at a population of new projects that has a high chance of containing an import.
F: The difficulty with this entire measuring-metrics thing, and I'm sure you've run into this, is that people are generally gonna import once. So if you make improvements to the product, how are you measuring that your improvements made any difference? The people who are gonna import today are not the same people who would import in 90 days, and with us releasing tools and approaching more enterprise-level companies and more regulated industries, I think we're gonna see that change our potential customers, which is going to have much more of an impact on your metrics than anything else.
F
Is
the
potential
customer
base
because
you're
either
gonna
import
it
or
not?
They
see
they're
gonna
work,
it's
not,
but
once
you're
done,
you're
done
so
like
it's
very
hard
for
me
to
understand
how
you
measure
improvement
to
an
importer
when
it's
a
one
use
thing
in
its
industry,
specific
and
customer
specific,
but.
F: I'm not saying the issue is whether or not it works. I'm saying they import everything, and then they go, "Alas, it didn't do what I expected." It still looks like a successful import from our side, but it doesn't make them any happier; they're not going to use the data, because it wasn't what they expected to happen.
I: Is that something that would fall upon the import group, though, to make sure that they're successful once the import has happened? Because surely the goal of the import group is to successfully import people into the product. If they've decided the product is not fit for what they need, that's a whole separate problem.
F: It is, but you can define a successful import in multiple ways. One is: yes, we transferred the data. Another is: we transferred the data in the way it was meant to be used. Look at the Jira importer as a good example: did they get all their tags across, and did they get all their other Jira items across in a usable fashion?
F: They might come across and evaluate our product and say it doesn't meet their needs, because they didn't get the data the way they needed it. Even though the import was successful, it might not be organized in a usable way, because we didn't link things up together; Jira has a ton more fields than GitLab does right now. The data can be pulled across in a manner that's technically successful, but not successful or usable to them. I'm not saying that's the importer group's fault; I'm just saying, yes...
L: ...that's like a net win, and that's something you own control of. I think Mark makes really good points about total usability, but I think the starting point is that you want a clear metric you can actually aim for and individually control. So that could be an interesting way of approaching it: maybe it's not the primary North Star metric, but having some sort of SLO on import success would be, I think, a very useful supplementary metric.
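A minimal sketch of what that supplementary import-success SLO could look like, assuming a hypothetical record of import attempts with a terminal status; the status values and the target are illustrative:

# Import success rate against an assumed SLO target.
attempts = [
    {"importer": "github", "status": "finished"},
    {"importer": "github", "status": "failed"},
    {"importer": "jira", "status": "finished"},
    {"importer": "jira", "status": "finished"},
]

SLO_TARGET = 0.95  # placeholder target

successes = sum(1 for a in attempts if a["status"] == "finished")
rate = successes / len(attempts)
print(f"import success rate: {rate:.0%} (target {SLO_TARGET:.0%})")
# Note: per Mark's point, "finished" only means the data transferred,
# not that the result was usable to the customer.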
G: I'd definitely consider that a sub-metric, and I think at least two out of the four questions I mentioned here speak to that, you know, "are they easy to use" and "do they work well", and that does roll up. So if more people are able to successfully complete the import, I think our North Stars will catch that. And Mark, I agree with your point; there's also something I've been thinking about.
F: Suppose they simply can't import their data, even though they'd like to, because for regulatory reasons they can't move their data, or they'd have to review everything, and therefore that's out of the running entirely. They might love the tool; they'll import everything over just to have it, but they'll never interact with it. They knew they wouldn't from the get-go, because they're required by law not to move their data. It happens.
I: Cool, thanks. So we went through a few different lines of thinking around what might be a good North Star metric for Spaces. For those of you who are less familiar with the Spaces group, our biggest focus is enterprise readiness. Predominantly we're focusing on GitLab.com right now: providing a platform on which enterprises feel comfortable subscribing to GitLab.com, which in the past has not been something that many larger enterprise companies have wanted to do. So how do we measure that success rate? How do we know that?
I: The two main categories for the Spaces group are both pretty well aligned with our business model. Number of users directly correlates with our billing model, but it's still more related to sales. Number of groups is a more interesting way of looking at it, but that's always going to go up, right? So you can't really track just how many groups we have on GitLab.com and, I suppose, on the self-managed instances that we have data from. So, thinking about this a little bit more granularly:
I: You can think about active groups, and what counts as an active group, like a group that has had at least one active member in it in the last 30 days. And then, if you're digging in more around enterprise readiness, we want to think about what counts as a large group. Jeremy proposed these two numbers: active means at least one active member, and large would be at least ten members.
I: Those definitions are open to other suggestions, but that's where we've come to at the moment: thinking about how we can shape our North Star metric around the groups aspect. And then, because group-managed accounts are something that we're focusing on right now...
I: ...that might be an interesting drill-down metric to look into: how many of these large groups also have group-managed accounts enabled? Because if you have groups being created that are getting bigger and bigger, that's a good thing: groups are having members added, groups are getting larger. So that's where we're at at the moment with the proposed North Star metric. Does anyone have any thoughts, questions, or ways we could improve that?
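A minimal sketch of the proposed definitions, using the thresholds from the discussion (one active member in the last 30 days, ten members for "large") and an assumed data shape:

# Classify groups as active/large per the proposed definitions.
from datetime import date, timedelta

TODAY = date(2020, 2, 1)
ACTIVE_WINDOW = timedelta(days=30)
LARGE_MINIMUM = 10

groups = {
    # group name -> each member's last activity date (illustrative)
    "widgets": [date(2020, 1, 25), date(2019, 6, 1)],
    "legacy": [date(2018, 3, 14)],
}

def is_active(last_active_dates):
    return any(TODAY - d <= ACTIVE_WINDOW for d in last_active_dates)

def is_large(last_active_dates):
    return len(last_active_dates) >= LARGE_MINIMUM

for name, members in groups.items():
    print(name, "active" if is_active(members) else "inactive",
          "large" if is_large(members) else "small")
# widgets is active (a member was active Jan 25); legacy is neither.
# Caveat raised below: last activity isn't attributed to the group itself.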
B: The hard part is in the definition of "at least one active member". We have an attribute on the member, last_activity_on, so you can see when the user was last active, but it doesn't attribute whether that activity was actually in that group. We'd need to work with the data team on this; in the past I've created a way of looking at different events and attributing them, like...
J: This is great, and I'm glad we're having this discussion. We can actually skip over mine; I don't need to voice over it, just some observations I had, or we had as a stage, kind of broadly. So if you have thoughts or feedback, drop them there asynchronously and we can jump on them. But let's go to Gabe to talk about each one.
M: Sure. I think right now this is a better indicator of teams collaborating with one another than somebody who might be the one creating all the issues, or than tracking total issue count. The hypothesis is: if more seats within a given instance or team are participating in issues, they're going to get more value out of it. This is intended to be a temporary North Star metric because, as I talk to users, the customer value is really abstract, and the value is really like...
M: ..."the tool enabled me to maintain trust with my stakeholders", which, you know, how do you quantify that? It's sort of difficult. So this is intended to be a starting point, especially as Plan is becoming more usable in larger organizations. It will also help us understand adoption and expansion, and then, ideally, retention.
E: Yeah, I wrote in... obviously I don't know that much about current usage patterns or anything like that, but my only comment, just based on previous experience, is that for teams that are co-located, comments aren't used as often, at least in other tools, because they just turn around and make the comment to someone in person. They may not do it in the tool, but they still consider the tool the source of truth, so to say, for everything about that issue.
B: I had a question; I don't know if I phrased it correctly in the doc, but do we feel that comment activity is always correlated with business value? Because I totally agree, and I love the idea that commenting on issues is more indicative of collaboration in the issue tracker than just creating issues that then sit there orphaned, with no one actually engaging with them.
B
It's
something
that's
available
on
core,
but
if
the
case
is,
is
that
customers
that
are
highly
engaged
in
paying
for
project
management
features
are
always
like
correlated
with
hi
comment
activity,
then
maybe
within
maybe
that's
that's
something
that
we
can
we
can
use,
but
is
that
is
that
always
you
feel
like
that's
always
true,
or
is
there
like
a
use
case
or
a
pattern
where,
like
customers
like
aren't
using
the
issue
tracker
in
this
way,
but
they're
still
paying
for
like
project
management
and
getting
a
lot
of
value?
I,
don't.
M: I don't think we know that yet, and we can't know that. I think the problem right now is that even in our usage ping, I think we're tracking noteables, but we aren't distinguishing notes across MRs, issues, epics, snippets, and all the places where comments are actually being made, and we're also not correlating that to any sort of business value at this point in time. So by tracking it at first, the idea is that we'll start to be able to at least expose additional dimensions through which we can better understand leading indicators.
M: And if you drill into the issue that I linked, the work-in-progress one down below, I basically extrapolated that out: this is temporary, and it's temporary so that we can start recording all these other things across all instances via usage ping, so that we can eventually figure out how to better determine customer value by things like: what's the lead time for an issue between when it's opened and closed, or what's the throughput, not in MRs but in units of value?
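A sketch of that lead-time idea, assuming illustrative issue records rather than the actual usage-ping fields:

# Lead time for an issue: time between opened and closed.
from datetime import datetime

issues = [
    {"opened": datetime(2020, 1, 1), "closed": datetime(2020, 1, 8)},
    {"opened": datetime(2020, 1, 3), "closed": datetime(2020, 1, 5)},
    {"opened": datetime(2020, 1, 10), "closed": None},  # still open
]

lead_times = [
    (i["closed"] - i["opened"]).days for i in issues if i["closed"]
]
print(f"mean lead time: {sum(lead_times) / len(lead_times):.1f} days")  # 4.5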
F: We'll do this real quick. We proposed one North Star metric for Certify: the number of requirements created. Now, given that requirements management is not yet launched, this is going to encompass the launch of the feature and of the metrics collection, as well as people actually using it, but we do fully understand that this is probably going to change quickly, within three to six months.
F: Really, what "requirements created" is showing is that we need to get this out the door; that's job one. Once we're there and we start getting feedback from our users and understanding better, there obviously needs to be a better metric moving forward, but "nothing" is very difficult to put a metric on, so right now the metric is to have something.
A: In your experience, Mark: you might have a massive project that's continually deriving requirements from... sorry, continually driving value from the requirements management tool, with long-standing requirements that they've had. But, you know, from the conversations that I've had, a few...
A: ...the requirements definition is largely a one-time type of activity that happens at the front end of the project, even though you continue to derive value from the requirements management system. So you've already copied all of this in; it might change over time, but I'm curious as to your thoughts on how, over time, you might measure continued...
F: ...value. Long term, the value is really how many times people are referencing the requirements, which would include how many times they're linking to the requirements and how many times they're going back in and viewing the requirements. That shows that your developers are looking at the requirements to develop their code, and it shows your testers are looking through the requirements to develop their tests. The linking will show that they're utilizing the tests and linking them to the requirements to perform full validation.
F: It shows that they're looking at compliance and certification as an overall package and constantly referencing the requirements. So it would probably be something along the lines of the percentage of requirements viewed within the last, pick a time frame, or the number of views per requirement, something like that. Are requirements being created and then ignored? Because that's not what we want.
F: We want to see people creating them and then constantly looking at them. So I would think: the number of requirements that are viewed multiple times, or the percentage of requirements viewed, you know, once per month. I don't know; we'll have to work out how that works. That's why I need to get it out there and see how people are using it. But yes, definitely, it's something that has more to do with interacting by viewing and less with creating requirements.
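A sketch of that "percentage of requirements viewed within a time frame" idea, assuming a hypothetical view log and a placeholder 30-day window:

# Share of requirements viewed recently; data and window are illustrative.
from datetime import date, timedelta

TODAY = date(2020, 2, 1)
WINDOW = timedelta(days=30)

requirements = ["REQ-1", "REQ-2", "REQ-3"]
views = [
    ("REQ-1", date(2020, 1, 20)),
    ("REQ-1", date(2020, 1, 25)),  # viewed repeatedly: the healthy pattern
    ("REQ-3", date(2019, 11, 2)),  # stale: created and then ignored
]

recently_viewed = {req for req, day in views if TODAY - day <= WINDOW}
pct = len(recently_viewed) / len(requirements)
print(f"{pct:.0%} of requirements viewed in the last 30 days")  # 33%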
D: I was just curious, because I think what you were talking about sounds like a great way to measure engagement with the feature itself, but do you have a target percentage of our customers in mind? Like, what top tier of our larger companies is gonna take advantage of this feature at all? Is it 5%? 20%?
F: I'd be conservative, so we'll have to see how that goes. What it means is we need to make it a compelling feature that's integrated correctly, so that it's actually usable. And I fully understand the MVC is not something that most regulated industries could use, but it is something that many non-regulated industries could get a lot of value out of. So we're sort of starting there, and we're going to iterate toward a regulated solution for those people.
K: Thanks. Okay, so for Source Code, the metric that we were looking at was merge requests. Well, initially we started with discussions on merge requests, because that kind of shows that people are making use of the feature and collaborating. However, as was also pointed out, as Melissa did, that largely depends on whether you're a fully co-located team or a remote team, and things like that. We think that the better metric to measure that collaboration is gonna be merge requests themselves.
K: We want people to not even consider not using them; I don't know if that makes sense. The feature has so much value that we absolutely see that value: they want to use those MRs. So having that be a percentage of monthly active users kind of feels more balanced. Right now the issue that I linked has MRs, but based on this conversation, I think I'll be updating that to the percentage of users that are making use of MRs.
K: Yeah, and that kind of makes sense. I think that has the same... I don't want to say lack of balance, but it feels more balanced: as we get new users, we automatically get new MRs, and I think that aiming for a higher percentage of the users that are making use of it, in this case, feels more balanced. But I will take a look at that. Thank you, yeah.
A: I think what you've written, the percentage of MAU who use MRs, is an interesting one, because I was looking at the data, I think on Tuesday, and even if you say, hey, SMAU for Create is the number of people that have created an MR divided by the total number of monthly active users, it's something like 20%, and that's because only a very small segment of the population will actually create a merge request.
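A sketch of that ratio, with made-up numbers mirroring the roughly 20% figure:

# Unique MR creators as a share of monthly active users (illustrative).
monthly_active_users = {f"user{i}" for i in range(100)}
mr_creators = {f"user{i}" for i in range(20)}  # subset who opened an MR

share = len(mr_creators & monthly_active_users) / len(monthly_active_users)
print(f"{share:.0%} of MAU created a merge request")  # 20%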
B: I was gonna say, a percentage of MAU who use or create MRs maybe works, but I think it also depends on that thought exercise I was doing in some of the other issues, which was just: picture utter success for this group. What does that look like? Where are we investing? What are our goals? If your goal is primarily "we want to increase the number of users that are using merge requests", then I think that makes total sense.
B: The one that I suggested is maybe merge requests created per user, because you can raise that in two ways, right? You could increase the engagement of the existing users, so they're creating more merge requests, or you could increase the number of users that are creating merge requests at all. So you're increasing either the depth of engagement of the people already using it, or the breadth of the people engaging with it at all.
B: So, lovable for Source Code is probably merge requests that are open for a very short period of time, have multiple participants, and get merged very quickly at a very high percentage; very few merge requests that are opened don't get merged, and very few get closed along the way. Thinking about metrics that capture that scenario might be really helpful, if that's aligned with how you're thinking about success in the group.
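A sketch of measuring that profile, assuming illustrative merge request records:

# Merge rate, time open, and participation for merged MRs.
mrs = [
    {"hours_open": 6, "participants": 3, "state": "merged"},
    {"hours_open": 48, "participants": 1, "state": "closed"},
    {"hours_open": 12, "participants": 2, "state": "merged"},
]

merged = [m for m in mrs if m["state"] == "merged"]
merge_rate = len(merged) / len(mrs)
avg_hours = sum(m["hours_open"] for m in merged) / len(merged)
avg_participants = sum(m["participants"] for m in merged) / len(merged)
print(f"merge rate {merge_rate:.0%}, avg {avg_hours:.0f}h open, "
      f"avg {avg_participants:.1f} participants")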
K: Yeah, so I was kind of comfortable with the first one that I had picked, which is just MRs, but based on this I'll ponder it a bit more, because I think you made a good point that the value is really captured once the MR is merged, right? If you just have a bunch of open MRs that never went anywhere, that you never did anything with, there's not a lot of value.
A: Good stuff. All right, in the interest of actually adhering to the meeting times, I'm gonna go ahead and call it now. For the groups that we didn't get to, we'll circle back in the team meeting and use the bulk of the working section of that team meeting to continue the reviews. If you've learned anything from the other groups and want to augment yours between now and then, that would be great. We're looking forward to continuing the discussion.