From YouTube: Development Metrics Working Group 5/1/2019
Description
Development Metrics Working Group Discussion
Agenda: https://docs.google.com/a/gitlab.com/document/d/1Y50uhpRW0zSGWI-TzPxHnwEHyOl7uWiyCzXtpRJd1_E/edit?usp=drive_web
A
Oh, so it's May Day, so I just put that at the top there. Happy May Day, everybody. We're gonna go ahead and get started with looking at our normal persistent links associated with it. Since I got the new webcam, I'm trying to figure out how best to set it up so that I'm actually facing forward most of the time; I'm supposed to turn away from the screens, but I can't always do that as well as I should.
A
What I'll do is just kind of go through real quick. So numbers are in for the month, and I would say we had our largest month ever for total MRs: 1,775. It's a great year; you could argue it should have been the July release for anybody who's US-based, basically, 1775... '76, I guess, so we can go up one more, maybe. Anyway, that was very encouraging from that perspective.
A
On our average MRs per author, excluding community contributions, we did see a rise, not quite as much as maybe we had hoped, but still a rise, from 8.55 to 8.93. That one's interesting because it's a little bit lower from the prediction perspective, but it did fall within the error range that I had originally set out based on that one. So I think that was encouraging from that perspective.
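As a minimal sketch of how a metric like this could be computed, assuming each merged MR record carries an author and a community-contribution flag (the field names and the `avg_mrs_per_author` helper are hypothetical, not the actual pipeline):

```python
from collections import Counter

def avg_mrs_per_author(merged_mrs):
    """Average merged MRs per author, excluding community contributions.

    `merged_mrs` is assumed to be a list of dicts like
    {"author": "alice", "community_contribution": False} (hypothetical schema).
    """
    counts = Counter(
        mr["author"]
        for mr in merged_mrs
        if not mr["community_contribution"]
    )
    if not counts:
        return 0.0
    return sum(counts.values()) / len(counts)

# Example: 3 team MRs by 2 authors; the community MR is ignored -> 1.5
example = [
    {"author": "alice", "community_contribution": False},
    {"author": "alice", "community_contribution": False},
    {"author": "bob", "community_contribution": False},
    {"author": "carol", "community_contribution": True},
]
print(avg_mrs_per_author(example))  # 1.5
```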
A
C
Actually, I'll take it over. This is last week's data; Tonya here could help me polish it up a little bit, because we found out that we should be looking at customer-affecting bugs as a whole, and we added another chart. I don't see your sidebar here, Christopher; there's an area chart added for all customers.
C
Customer only... "all" is three more down. Yes, that's the one. So we had limited data on customer-only; once we expanded it to all, I think it's the same trend here, that the numbers are increasing. We didn't have numbers in for this month yet. My goal is to actually try to automate this in a very boring and MVC way, because it's a lot of work trying to export this and slice and dice it. Tonya has been doing some work; I'm not sure she's ready to show it yet, but essentially it's a template.
C
D
A
D
Okay, do you all see a chart? Perfect. Okay, so I wanted to just go through this really quickly and explain what these lines are. The black line here is merge requests regardless of whether the author is an individual contributor or a manager at the time. The red line is for people who are individual contributors at the time that the merge request was merged, and you can see it really closely follows the total line.
D
The blue line at the bottom there is for managers, specifically people who are managers at the time that the merge request is merged, and it is much lower than individual contributors. But overall, the number of managers is so low that it makes a very small difference in the overall trend. And then I did get data from Brittany about people who were hired as individual contributors and then were promoted to manager. So there are people like Dennis, and they did start out as some of the most productive people in the company, and their numbers do go down over time as they were promoted.
D
But this is only about a dozen people, so again it doesn't make a huge difference in the overall numbers. So I think my takeaways, and please validate these: yes, managers are less productive; however, there are so few managers that it doesn't actually make a material difference in the overall trend.
D
Yeah, promoted managers, there are a dozen of them, out of... I mean, it's less than ten percent of the total population; it's not a huge number. And there is also a tab here for results; this is what the chart pulls from. So here's total, and you can see the numbers here, the counts for individual contributors. I mean, again, the managers, it's about 10 percent, less than 10 percent; for example, this month it's 6.1 percent. A lot of managers aren't contributing at all, so their numbers aren't being included here, but of the managers who do contribute, it's...
A
Did you... you didn't check the histogram on that, you just, I'm assuming, got the average? It would be curious to know if anybody was, like, super prolific. Like, if you see Stan's contribution numbers, they're pretty outrageous, in a good way, but because of that it's significantly impacting the average, and that's the only thing I would say, if I understand.
C
A
A
The reason I asked is because, like, if you had a ten-to-one... if you had two ten-to-ones like that, then the numbers make sense to me from an averages perspective. If you had, say, two ten-to-ones that happened, and then around, like, 2018 they got promoted, that would explain a shift from that perspective.
D
Yeah, and to make sure that I'm clear, the red line and the blue line, so the individual contributor line and the manager line, do include people who were promoted, but at the time they were one or the other and they're sorted accordingly. But the green line is regardless of whether they were an IC or a manager at the time the merge request was merged; it's just all the merge requests.
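A minimal sketch of the grouping described here, assuming we have each person's role history; the data shapes and the `role_at` helper are hypothetical and only illustrate sorting an MR by whether its author was an IC or a manager on the merge date:

```python
from collections import defaultdict
from datetime import date

def role_at(role_history, when):
    """role_history: list of (effective_date, role) sorted ascending.
    Returns the role in effect on `when`."""
    current = None
    for effective, role in role_history:
        if effective <= when:
            current = role
        else:
            break
    return current

def mrs_by_role(mrs, histories):
    """Group merged MRs into 'ic' / 'manager' buckets by the author's
    role on the merge date. Data shapes are assumptions for illustration."""
    buckets = defaultdict(list)
    for mr in mrs:
        role = role_at(histories[mr["author"]], mr["merged_on"])
        buckets[role].append(mr)
    return buckets

histories = {"dennis": [(date(2017, 1, 1), "ic"), (date(2018, 6, 1), "manager")]}
mrs = [
    {"author": "dennis", "merged_on": date(2018, 3, 1)},   # counted as IC
    {"author": "dennis", "merged_on": date(2019, 4, 15)},  # counted as manager
]
print({k: len(v) for k, v in mrs_by_role(mrs, histories).items()})
# {'ic': 1, 'manager': 1}
```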
A
D
A
Well, definitely, we're gonna promote people when it's appropriate to promote them. I was thinking more around the statistical side, you know, how much does it statistically matter, right? And it's less about that and more about being kind of aware of it, from that perspective, I guess, is maybe the right way to put it; trying to find the right level of grading here.
E
E
So I think, to answer your question, tying it back: the way the hypothesis is worded is a little bit underspecified, because we're not talking about the potential impact, if this is true, on our productivity numbers, right? So we're saying we think this is true: we think when people get promoted into management, they are less productive. But are we saying this is the reason why our productivity dipped in the other graphs, or not? I don't...
D
E
So then, what I would add, what I proposed at the beginning of the hypothesis, to state it more clearly: our overall productivity has declined because the longer-tenured IC employee population is less productive, or because the manager population has gotten less productive. And what we're saying, it sounds like, is managers have gotten less productive, but that is not why our overall productivity has declined, because percentage-wise it...
D
E
So now, let's say we actually got the right answer, but I think we actually falsified this hypothesis as I just recently reworded it. So it flips from green to red, but that's still a good result, and then that helps justify why we can't determine what an intervention would be, because if it's green, it tells us there's definitely something that we should do, whereas if it's falsified, there's not, in this case. Does that make sense to everybody? I want to make sure I'm not, like...
A
It makes sense to me. I don't know if I would ever... the veracity of the hypothesis doesn't always necessarily translate, to me. I don't know if I would necessarily... okay, and you're saying that's why you want to change the wording, so that it's phrased in this hypothesis fashion. You know, if it's a hypothesis and we get a result that's validated, but it's not something we can action on, that's actually an acceptable outcome in my book, I think.
D
A
If we ever got to a much more aggressive state of promotion, it would start affecting this line, would be my take. Like, we validated the hypothesis that if you convert somebody from a more experienced IC... it's just that in this case it wasn't statistically relevant, because the population was twelve relative to the group, which is less than 10%. But I think it'd be a lot more of a dominating factor if, say, it was twenty or forty percent. Not that we would necessarily see that in this situation, but that is a possibility for an organization.
A
E
You could make an argument that that change actually has happened, right? Like, in 2018 we did a lot of bottoms-up building and some top-down building, and then we learned in retrospect that it was more effective to build teams top-down, so we've been hiring outside managers earlier and putting people in interim positions earlier. So I think that actually has occurred, but it hasn't been anywhere near twenty or forty percent, right?
A
And this is actually me jumping on the sword again: it's week two, and I haven't had a chance to investigate this yet.
F
So, if I may jump in, Virginia, this is a great hypothesis, one of the hypotheses where cycle time would help us dramatically. If you remember from our conversation, I was talking about determining where in the cycle time we're spending so much effort; having the breakdown in our cycle time algorithm to say this is how long the review cycle was would basically help us make a determination here. So, in any case, while we're looking at it, Christopher will look at the data and we can discuss it in the future.
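A minimal sketch of the kind of breakdown being asked for, assuming each MR record carries timestamps for creation, first review activity, and merge; the field names are assumptions, not the actual cycle-time algorithm:

```python
from datetime import datetime

def review_cycle_days(mr):
    """Days spent in review: first review activity until merge.
    `mr` is a dict with ISO timestamps (a hypothetical schema)."""
    first_review = datetime.fromisoformat(mr["first_review_at"])
    merged = datetime.fromisoformat(mr["merged_at"])
    return (merged - first_review).total_seconds() / 86400

mr = {
    "created_at": "2019-04-01T09:00:00",
    "first_review_at": "2019-04-03T14:00:00",
    "merged_at": "2019-04-10T10:00:00",
}
print(round(review_cycle_days(mr), 1))  # ~6.8 days in review
```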
B
So this one, I tried to evaluate a little bit more what we're trying to achieve. It seemed like we're trying to figure out why longer-tenured engineers are more productive than less-tenured engineers, so I kind of broke it down into a few things that we can test. We can send a survey, but I'm not 100% sure what the purpose behind the hypothesis is, so it's hard for me to figure out whether there's something we really need to dive into deeper.
B
A
F
If we could tailor this one around: can we graph the complexity of the product? That's one of the things that developers mention, that it is more complex today to contribute to the product than it was, you know, with a 25-person engineering team, which is what the longer-tenured folks who've been around remember.
B
We could also potentially gauge the time it takes people to do their first MR. I know we've started to do that for onboarding as a metric, but we could trace it back to older engineers and see how long it took for them to get started. It would potentially be a better representation of onboarding and whether people ramp up faster, yeah.
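A minimal sketch of that onboarding measure, days from an engineer's start date to their first merged MR, assuming hypothetical input shapes for start dates and merged MRs:

```python
from datetime import date

def days_to_first_mr(start_dates, merged_mrs):
    """Days from each engineer's start date to their first merged MR.

    `start_dates`: {username: date}; `merged_mrs`: list of
    {"author": ..., "merged_on": date} (assumed shapes).
    """
    first_merge = {}
    for mr in merged_mrs:
        author, when = mr["author"], mr["merged_on"]
        if author not in first_merge or when < first_merge[author]:
            first_merge[author] = when
    return {
        user: (first_merge[user] - started).days
        for user, started in start_dates.items()
        if user in first_merge
    }

starts = {"new_hire": date(2019, 3, 1), "veteran": date(2016, 1, 4)}
mrs = [
    {"author": "new_hire", "merged_on": date(2019, 3, 20)},
    {"author": "veteran", "merged_on": date(2016, 1, 18)},
]
print(days_to_first_mr(starts, mrs))  # {'new_hire': 19, 'veteran': 14}
```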
E
E
E
A
...comes in, but they're still new to the company. Because one of the things that Clement and Dennis found was that ICs with over two years of experience, I think, were actually more productive, which I was really surprised by. I was like, gosh, it takes two years to get there? That's, you know, surprising. At least I thought it was surprising; two years at GitLab, too.
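A minimal sketch of that comparison, splitting average merged MRs per author by a two-year tenure cutoff; the inputs and the cutoff handling are assumptions for illustration:

```python
from collections import defaultdict
from datetime import date

def avg_mrs_by_tenure(mr_counts, start_dates, as_of, cutoff_years=2):
    """Compare average merged MRs per author for people under vs. over
    `cutoff_years` of tenure. Inputs are hypothetical: `mr_counts` maps
    username -> MRs merged in the month, `start_dates` maps username -> hire date.
    """
    buckets = defaultdict(list)
    for user, count in mr_counts.items():
        tenure_years = (as_of - start_dates[user]).days / 365.25
        key = "2y_plus" if tenure_years >= cutoff_years else "under_2y"
        buckets[key].append(count)
    return {k: sum(v) / len(v) for k, v in buckets.items()}

counts = {"alice": 12, "bob": 5, "carol": 7}
starts = {"alice": date(2015, 6, 1), "bob": date(2018, 11, 1), "carol": date(2016, 2, 1)}
print(avg_mrs_by_tenure(counts, starts, as_of=date(2019, 4, 30)))
# {'2y_plus': 9.5, 'under_2y': 5.0}
```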
E
I think the two things I could speculate on are, like, yeah, you come on and you just don't know the codebase; it's like two to three million lines of code, and it just takes a while to learn that. But that's not really methodology. The thing that was striking about methodology is that we're measuring MRs.
E
We have a culture, particularly a long-standing culture, of really doing small, granular MRs and being very iterative. That is probably the main thing that new developers have to learn, if I were to speculate, and yeah, we've actually been doing that, and we've been seeing a positive increase in this average number. But are there other things that... yeah.
A
As an example, if they're running GitLab on a small box prototype... I'm being somewhat theoretical here, but like, if it turns out that our senior developers are running on a small box prototype versus using a cloud instance, and it gives them quicker iteration, that could be a determining factor, right? Or they open up the MR right away and submit upstream, or, you know, basically get the MR open as quickly as possible, even if it's in draft form, because they want to verify that they haven't broken anything else.
A
E
F
F
I'd like to elaborate on the complexity of the product. What I heard is that, by nature of being a monolith, there's so much more to learn, whereas people who came from a more services-oriented background feel that the expectation there is that they can learn their service really well and get up to speed much faster. So that was one comment from, you know, new engineers, and this was a senior person, so they are capable. But just by factor of how big our application is, it's a lot to consume quickly.
F
E
I think that's actually a separate thing, right? Because again, that's not a methodology, but I do agree that's important. So I think there's almost a separate hypothesis here of, like, people have too much surface area to learn quickly, and that's why they're slow to learn, and what we should do is sort of create synthetic microarchitectures, just in the form of areas of the code that teams and individuals are responsible for, and therefore they don't have to think about anything outside that. And then, separate from that...
E
A
E
Well, I mean, the data... we may already have the data, because if you were to try to confirm this, you would say, like, well, what's the average MRs by tenure? We've already collected that data; we've shown that longer-tenured employees do more MRs on average than shorter-tenured employees, so we may be ready for an intervention on this one.
E
And just say, you know, do something in the onboarding process to really be coaching people about how to break down work and just how iterative we need to be. And I can definitely say, from a managerial perspective, since, as has been said, our iteration value is gonna challenge me, I'm like, yeah, whatever.
E
That's what we would be doing, you know. I think people like Clement probably have more of a ground truth on that, but I would say, if people feel like that's data they could falsify, or we already have that data, then I would say we could jump to that intervention and try to do more onboarding around iteration, let's say, or clearer things.
B
I think there's always gonna be improvements we can do to help people make that shift in thinking about smaller merge requests. I know, for me, a big part of my shift when I joined was when it was like: hey, the salary calculator, it works, it's not perfect, it's not UX approved, but it makes an improvement, so let's merge it. And then he merged it. So that was really big for me in understanding what iteration really is.
E
So this would imply, to me, that we've confirmed it, at least for this one smaller-MRs example; we've already demonstrated it, and we could actually identify an action to add to our onboarding. But do we want to keep this open... when is that a good idea, and then you'd want to keep this open and look for other methodology differences.
E
A
A
So it feels like maybe the next step is to actually go through and see whether or not... you know, you don't necessarily have to do it for everyone; if you do it for all, that's great, but if you want to do it for, like, take two of the folks from the population of two-plus years and two from the population of, say, three or six months of tenure, and do a comparison of their, you know, lines of code as a curve.
A
A
E
E
A
When writing code or designing, like, it's gonna be a little bit hand-wavy, but I'm just wondering... like, generally speaking, when I'm learning something and I'm trying to get more productive, rather than it being at the front of my memory, because I'm still trying to figure it out, you know, I have to spend a lot of time, like, oh, I've gotta go look at this reference.
A
I gotta go check this out, as opposed to just having it built into my head, or there'd be the checklist to sign off on an MR, those aspects. But if other folks have better suggestions on what survey questions to ask, or, you know, whether we should maybe just go interview a couple of folks, that's the other option.
E
I like that, because I think the alternative suggestion, graphing the complexity of the product, actually would mean, like, taking the history of every developer and seeing which areas of the code they touch, running it through k-means, and getting a nonparametric answer to which areas of the product there are, and then we name them, and then we turn that into architecture. So I think that's like a seven-year dissertation project for a PhD candidate, when we can just kind of get people's sense and chart something out; it's just the same answer. Okay.
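For illustration only, a tiny sketch of the idea being dismissed here: cluster developers by which areas of the code they touch (scikit-learn k-means over made-up counts). It is not the team's method, just the shape of what such an analysis would look like:

```python
import numpy as np
from sklearn.cluster import KMeans

# Rows: developers; columns: how often each touched a given top-level
# directory (counts are made up for illustration).
areas = ["app/models", "app/controllers", "ee/", "lib/ci", "spec/"]
touch_counts = np.array([
    [40,  5, 2,  1, 30],   # developer 0: mostly backend models + specs
    [35,  8, 1,  0, 25],   # developer 1: similar profile
    [ 2,  1, 0, 50, 10],   # developer 2: mostly CI
    [ 1,  0, 1, 45, 12],   # developer 3: mostly CI
])

# Normalize each row so clustering compares the *shape* of activity,
# not volume, then cluster into a guessed number of product areas.
profiles = touch_counts / touch_counts.sum(axis=1, keepdims=True)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
print(labels)  # e.g. [0 0 1 1]: two emergent "areas" of the codebase
```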
A
A
Cool, so that's the hypotheses. Half an hour in, we're finally getting to the point where we can welcome folks; it got caught up in the beginning of the discussion, sorry about that. So welcome, Lyle and Virginia; that's the group Mike has listed below. And then Mark, Remy, and Jason are also joining, though I didn't get them added early enough, probably, so I'm guessing that's the reason why they're not here, but they're welcome as well in regards to adding them when we expand the charter.
A
Well, we'll talk about that in a second here. Cool. And then numbers for April are in. I haven't had a chance to do the OKR calculations, just because they came in about 10:30 this morning. MRs per author is up again, 8.93, yeah, and also total MRs has gotten to 1,750, uh, 1,775, so really, really good results from that perspective. And then I was just curious for feedback from the team, you know, at the risk of any more work.
A
D
Should we dogfood any issues we...
A
E
E
A
You like that? Out of curiosity, do you like the two labels that you have to set up? I'm not a big fan of it. Sorry, the two labels, yeah. So if you just wanna have a board, like for multiple projects, and you want to bring in issues, you have to tag them first with a label to get them on the board, and then you have to have a separate one depending on which column you want them in.
A
E
Every issue on this board needs the engineering management label; that's how it appears on the board. So I have an issue open to just say: hide this label from all of these issues, put a title of the board up here, this is an engineering management board, and just have something in parentheses about the scope, like, you know, "has this label", and then these gray ones just kind of disappear. Then you can do the same for each column.
E
Every issue in Christopher's column here has this orange development department label; hide it from this column, but just put it up here, like, leave it right here as it is, and just imply that, yeah, it's all here. Then, if you want to see every issue, you can see it in the detail view or the preview. You can see, like, the completeness of an initiative; that makes sense to me, but product management isn't really... it's not really flying for them.
E
They want to create a button that either hides all labels or shows all of them, and I said, well, you know, rather than a sensible default, you give me a button with two states, neither of which is terribly useful to me. Generally I don't like that, but I wanted to be sensitive, not pulling rank and advocating too strongly for any one thing. So what I'll do... they've got that issue right now.
E
A
A
E
Here's what I did. I was bored one evening with my board... I mean, I had important stuff to do, so that's procrastinating, okay. Let me stop sharing and you can jump in and share. Yeah, I found the issue, yeah, there we go. So I went into the Chrome inspector and I just deleted all the labels and took a screenshot. So this is what the board looked like at the time, with all the redundant gray and colored labels, and then without them, as you can see.
E
A
Cool, all right. And then I had an action item from last week, just to update on that, which is training as a set of criteria. I talked to a couple of our directors and they definitely would like managers to be held accountable to the number, so we need to set up, basically, a push for training associated with that.
A
The other thing is, it doesn't account for people outside the development organization who maybe aren't spending as much time, and as that population grows, it's gonna actually force us to be explaining that aspect of it, which is, you know, hey, the rest of the population is contributing, but they're not necessarily either held to the same metric or looked at in the same way from a productivity perspective. They're helping, we're helping the product grow; that's a good thing.
A
C
E
I was gonna say, so, Christopher, what I understand you're saying is that, ideally, we would have groups who represent these different product areas, and we would look at the average MRs per group, and then we would take on the burden of making sure people have the right group attached to them. I think my instinct is that it's not necessary to be in the right group.
E
What we could do that I would be a fan of is, if we tackled what I was talking about above, about creating the synthetic notion of microservices in the codebase, we would just look at those. That theoretically would never change; it would be a target, and the code would always be coming in there, and I think then we would have 100% of the product covered, and that would be stable and much easier to keep up-to-date.
A
Yeah, I guess I'm less concerned about it on a per-team basis, so that's a good point. Because the one thing I was distinguishing between was the overall organizational aspect versus team, and once you start getting to the team level, it does get unwieldy from that perspective. So that's probably a good aspect to call out.
A
C
It's getting a lot better; we do, for one, enforce singularity on stage labels now, and there's an effort by Remy to switch from group labels to new stage labels instead. So I think we should double down on that, as it gives better accuracy than just team labels. And one thing that I would say is, we are somehow double counting in some areas because we don't enforce one team label per MR; that's not something I'm happy with.
C
E
E
You're sort of advocating for your directors and managers to see their slice of it, so they can be accountable to their number, right, which I understand. But at least you can see your whole purview. I would bet on labels being easier to achieve than groups, because we've already got triage bots there, like, Mek, it seems like you might be able to enforce that one-and-only-one rule a lot... I guess this is a little closer, I mean.
E
E
E
A
But what we're trying to figure out, from a top-level prioritization, is just the fact of, you know, hey, you're supposed to be trying to encourage people to contribute, but again, not everybody's job within the engineering organization is to be, you know, a hundred percent development all the time, all day long, and as those populations grow, you know, what are our expectations around them?
A
If you don't have some key marker, a demarcation, it feels like we're not building a good measuring stick from that. We...
E
A
A
But to your point, it's hard to solve the problem of tracking it accurately, so maybe I'll let it go for now and we'll see how things go. But I just have a feeling that there's at least one customer, most likely government-related, where they want to be very direct about who is working on what and what pieces of it, and, you know, those kinds of things.
E
C
They can use it. Actually, some companies are already using it. It's a public project that anybody can just check out and set up in their GitLab instance, and set up rules, like we heard about, to do all these things. Okay.
E
C
I do have an issue for it, though, because I know we talked about it earlier. The easiest way to get this is to open an integration in the project settings, and it's gonna install this for you, and then you can define rules, and it's gonna run on CI for you. But yeah, there are plans to integrate it to make it easier; we just don't have the time to work on it right now. Let me find the issue first.
C
And before we leave this bullet item, I wanted to say, so we have more confidence: there's not a lot of confidence in the data at the org level. I think that data is accurate, it's not double counting. The day-to-day dashboards, the dashboards for each group, are helping engineering managers execute their day-to-day tasks, which is great, but we are indeed double counting in some areas if multiple labels are added to an MR or issue, so that has to be enforced.
A
A
Interestingly, the metric, to be honest, at the individual team level, or even at the stage level, isn't necessarily... because I think, you know, if you read anything about story points as an example, you expect them to vary heavily between, you know, groups or organizations, so then you have to factor that in as well with MRs.
E
A
C
How about this, as a very boring solution: we could enforce singularity on team labels right now on MRs, you can do that without help, and retroactively apply it, retroactively removing double-counting labels on MRs. That's like an MVC.
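A minimal sketch of the detection half of this, using the GitLab merge requests API to list merged MRs carrying more than one group/team label; the project path, token, and label names are placeholders, not the actual setup:

```python
import requests

GITLAB = "https://gitlab.com/api/v4"
PROJECT = "gitlab-org%2Fgitlab-ce"     # URL-encoded project path (example)
TOKEN = "REDACTED"                      # a personal access token
GROUP_LABELS = {"group::create", "group::plan", "group::manage"}  # placeholder set

def mrs_with_multiple_group_labels(pages=2):
    """List merged MRs carrying more than one group label (double counting)."""
    offenders = []
    for page in range(1, pages + 1):
        resp = requests.get(
            f"{GITLAB}/projects/{PROJECT}/merge_requests",
            params={"state": "merged", "per_page": 100, "page": page},
            headers={"PRIVATE-TOKEN": TOKEN},
        )
        resp.raise_for_status()
        for mr in resp.json():
            groups = GROUP_LABELS.intersection(mr["labels"])
            if len(groups) > 1:
                offenders.append((mr["iid"], sorted(groups), mr["web_url"]))
    return offenders

for iid, groups, url in mrs_with_multiple_group_labels():
    print(f"!{iid} has {groups} -> {url}")
```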
A
C
A
A
C
C
A
Actually, I think as a top-level metric I wouldn't need it; I'd look at throughput numbers for individual teams and then kind of go back and figure it out from there. I wouldn't look at average MRs there; like, the only time you look at average MRs is when you look at the overall picture, whether you think that the group as a whole is at the same level of productivity, or has gotten more or less productive, right.
C
I still think enforcing singularity on the team, now group, labels will help, because it will help with the truth. Right now, if you have an MR with two labels, it shows on both teams' boards, so at least we address that problem, and now we have a discussion in the MR about which team actually needs to apply this label, and whether, if they're only reviewing, they should be adding that label at all.
A
C
This one will be short. So after looking at the data with Tonya, we realized that we are potentially missing some data, but looking at the trend, I think even if we count that missing data, the trend would be the same: it's increasing. But we need to address label hygiene, and I'm proposing some really boring rules here to help with surfacing bugs faster, and it will involve some help from support as well.
C
For example, there are issues that don't have a customer label; we can actually parse the issue description, do some regex magic for Salesforce or Zendesk lingo, and only add a customer label if it's filed by a team member. There are issues with security or a bug, there are issues with performance but no bug label, there are issues in security with no bug label, and there are issues with bugs and a customer but no severity. So I think these are boring solutions that we could do to help improve the hygiene, and yeah, I'm just gonna put it here for transparency and we're gonna jump on this.
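A minimal sketch of the regex idea, flagging team-member-filed issues whose description mentions Salesforce or Zendesk but that lack a customer label; the patterns and field names are assumptions, not the actual triage rules:

```python
import re

# Hypothetical patterns: links to Zendesk tickets or Salesforce records in an
# issue description suggest the issue is customer-related.
CUSTOMER_PATTERNS = [
    re.compile(r"https?://\S*zendesk\.com/\S+", re.IGNORECASE),
    re.compile(r"https?://\S*salesforce\.com/\S+", re.IGNORECASE),
    re.compile(r"\bZD[-#]?\d+\b", re.IGNORECASE),   # e.g. a ZD-12345 ticket id
]

def needs_customer_label(issue):
    """True if a team-member-filed issue without a customer label mentions
    Salesforce/Zendesk lingo in its description. Field names are assumed."""
    if "customer" in issue["labels"] or not issue["author_is_team_member"]:
        return False
    text = issue.get("description") or ""
    return any(p.search(text) for p in CUSTOMER_PATTERNS)

issue = {
    "labels": ["bug"],
    "author_is_team_member": True,
    "description": "Reported via https://gitlab.zendesk.com/agent/tickets/4242",
}
print(needs_customer_label(issue))  # True
```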
A
C
I found one with both bug and feature, and I'm like, is it a new feature or is it a bug? So that also needs to be enforced: like, you know, just pick one, it's either a bug or a feature, and that's the kind of thing we're having as well. My next one: I'm designing a new triage funnel process and potentially defining levels of triage completion. We have rolled out, or agreed on, at least the first iteration of the first level of triage before it reaches engineering.
C
EMs and PMs: it needs to have a team label, needs to have a severity, needs to have a customer label if applicable. So that's the process for that. I was looking to reinvigorate the engineering triage rotation too, and I'd love help. Right now we've enlisted everybody in quality engineering to do this and they're gonna work with their counterpart teams, but any help there is appreciated, and maybe we can take it offline as we decide how to remodel this process.
C
A
A
So the big thing to note here is that we've expanded our exit criteria: a 20% increase in development department throughput, which, I haven't done April's numbers yet, but we'll figure out roughly how we are doing there. We've got to keep you all pretty close on that. We're above, I guess, a level three, and we need to establish the level four aspects of those maturities, and then training, we decided that we definitely need to do that from that perspective. And then we have a whole bunch that got added in the last week that we'll have to start tracking, so it's pretty much rounding out the board.