From YouTube: Development Metrics Working Group 2019-05-30
A
This computer... okay. So, welcome, everybody. This is the Development Metrics Working Group for the last week of May 2019. As I stated before, I'm taking over for Christopher as facilitator, but he will stay on as the executive stakeholder, and the team has grown a bit, adding people who are working on the metrics themselves and also aligning communications with the support department. So why don't we go ahead and go through the persistent links? Let me share my screen here.
A
This is the MR authors per month chart, given what was contributed. We're doing pretty okay, which is pleasantly surprising. I don't have a crystal ball, so I don't have projections — we still need to rely on Christopher for that — but given this, I think we should be on target to at least be on the same pace as the month of February, which is, you know, a recovery period after the previous dip. And then the monthly merged MRs — I checked this last night.
C
I don't know if I have as good a crystal ball either. What I've heard anecdotally is that actually a lot of people left Contribute very energized, and I heard that, like, one of the most productive weeks was the week post-Contribute, because a lot of people were so motivated, so happy coming out of the event, that they just sat down — heads down — to finish their work.
C
You know, some of it is also that we're wrapping up a release right now, so we're heading into feature freeze, and the teams are starting to notice that it's good to get a lot of your MRs up and ready before that last week. So, I mean, like I said, I feel like these are somewhat anecdotal, but they're also contributing data on how the teams are responding to the changes that we're making, which is good. It's really good to see, you know, a positive uptick.
A
Yeah, I'm not sure if this would be, like, tribal knowledge or anything, but before Contribute people staged their MRs, and then, like, when someone gets tired, they go to their hotel room — "I'm going to review something" — and you improve your throughput that way. But yeah, I think it's great. We probably need to iterate on that a little bit before we make it, like, a guideline or anything.
A
This is a great one for when Christopher comes back: we could revisit these charts again and then see the numbers from his projections. And — thank you, Tania, for updating this — we need to automate this as soon as we can. Part of it is us also hiring four more engineers in the productivity team as well.
A
Okay, yeah, I think that might be a minor adjustment that we can do when we export the CSV: we just need to paste in the additional days or weeks as we export, and keep the long one going, until we can automate our way out of this — hopefully soon. That's probably it for this item; by the next meeting I think we'll have enough pasted there. Dahlia — any comments here? Too soon to call these improvements? Should we monitor this for another month? No.
C
If this is a good time to talk about whether we still see value in pursuing some of these hypotheses — I'm more than happy to stay on it and try to make progress. At this time, a lot of my bandwidth is being spent on moving some of this data to Periscope and coming up with dashboards, which I think I would personally find more valuable. But I'll put it out to the rest of you and see what you think.
A
Thank you for that. Yeah, I think we have it in maintenance mode — we don't want to add new features — but I'm concerned about the population of the data, because we have analyzed MRs on whatever we have imported, and on the list of projects that Dahlia has graciously worked on and given to us, right? So I think the analysis on this makes sense if we're doing it on the same population. So we may need to allow some flexibility immediately on, hey —
A
We need to get this up in the quality dashboard, and not in GitLab-native Insights — I think we're still blocked on the exclusion/inclusion mechanism for native Insights. This meeting is not the place to discuss it in that kind of detail; okay, so if you can update us in the next meeting, that'll be where we can align at this level. Thank you, sir.
A
My next agenda item. So this is what Dahlia posted earlier: we are working with Emily, and we're gonna divide and conquer, because there are so many things to do. I'm very pragmatic — I don't want one team to own everything; if we can do something faster and it's pending, let's go ahead and do it. If Emily's team can help us get, like, the merge time done in Periscope, let's go ahead and do it, because we need it anyway as a 50,000-foot view for Sid, Eric, Christopher and whatnot.
C
So I posted a link — Emily created an issue. One of the things we did last week is move the pulse survey to Periscope, so we're gonna use that same dashboard. I'm suggesting we call it, like, the development dashboard, but basically it'll start to look like the collection of the metrics that we want to look at — which is, like, I would love that, because that's what I would like to see. So we're gonna add another chart for this particular one.
C
Let me think about it. At the end of the day, I do want to categorize it as well: like, I'd like to know how long a bug takes — like, a bug MR takes — from the time it gets created to the time it's merged, because that's good to know, as well as for features and so on. So we should start to see these lines actually differentiate between what a backstage, you know, mean-time-to-resolve looks like, versus a feature, versus a bug. Okay.
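The categorized time-to-merge measurement described above — how long a bug MR, a feature MR, or a backstage MR takes from creation to merge — could be sketched roughly like this. The data shapes and label names here are illustrative assumptions, not the team's actual pipeline:

```python
# Sketch: mean time from MR creation to merge, bucketed by category label.
# MR dicts with "created_at"/"merged_at"/"labels" are assumed shapes.

from datetime import datetime
from statistics import mean

def hours_to_merge(mr: dict) -> float:
    """Hours between an MR's creation and its merge."""
    created = datetime.fromisoformat(mr["created_at"])
    merged = datetime.fromisoformat(mr["merged_at"])
    return (merged - created).total_seconds() / 3600

def mean_time_to_merge_by_category(mrs, categories=("bug", "feature", "backstage")):
    """Mean hours to merge per category; categories with no merged MRs are omitted."""
    result = {}
    for cat in categories:
        times = [hours_to_merge(mr) for mr in mrs
                 if cat in mr.get("labels", []) and mr.get("merged_at")]
        if times:
            result[cat] = mean(times)
    return result
```

Plotted per month, each category would become its own line, which is the differentiation C describes.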
A
That sounds good, thank you. I have the next two. So there are two more links, on fixing and addressing time to resolve customer bugs. One has to do with bugs: you click on that, and this is the visualization of mean time to resolve. So this is the one that we touched on earlier, right, Mark? This is the one you're looking at, correct? It's just a repackaging of it here again, because this is essentially what Tania's dashboard is. — Yeah, yeah, that's the one I'm looking at. — Awesome, thank you, sir.
C
No, no, I get it, and we're really trying to kind of minimize the investment in these other areas — so, like, outside of the GitLab product, we don't want to spend a lot of time. I was just curious if maybe we should — yes, we need to divide and conquer, but have everything, you know, geared toward Periscope, if we feel that that's where things should live, at least in the interim, while we implement it in the product. So, sorry — I'd say yeah.
A
I appreciate that; I think it makes sense. We will probably revisit this once Remy and Mark have finished delivering the charts — just to not strain everything — but thank you, I appreciate that. We also have another dashboard working up, Sid. This is what, Dahlia, I think you were asking for, which is a dashboard for each stage, and if you click on that link — we could potentially do this, but it's based off labels. We have automation in place too: if it has a child label, it adds the parent stage label. But that's as far as the data tagging goes.
C
So let me kind of verbalize a summary of what I put here. We have, at least on my side — and I know on Tim's side — we have teams that are growing, and I'm speaking of back-end teams that are starting to split. So, for instance, we just hired a second Monitor manager, so the Monitor back-end team is gonna split into two. Similarly for Secure: we are now splitting into two Secure back-end teams. We're looking to grow and split the front-end team, and so on and so forth.
C
So what we're finding is that, as we split, adding more labels is, one, starting to create a lot of overhead; the data accuracy is going down; it's tribal knowledge to get people to learn to do this; and so on. So my proposal here is: we need something that scales a little bit better, and labels are a manual way of ensuring that we have data integrity. I'm proposing that we do it based on team membership — so, if we can, we start to track where people are based on their team
C
Membership — and that'll be an easier definition, because, basically, what it says is: Secure back-end team one is these individuals, and whatever these individuals contribute to ends up reflected in their chart, and so on and so forth. And we can make adjustments as people move around, or as we grow the team and add new engineers, and so on. So I'm just putting that thought out there for you: move away from using labels for the purposes of development metrics and quality metrics, toward something that relates to what these individuals are contributing to.
A
So the label mechanism solves that problem by its nature, because it's caching — in my view, you cache the state, you save the state at that point in time, and then the throughput count just counts against that. So I think what we should do is keep a roster of which team each person is on at any given time; we then auto-apply the team label for the team that person is on at that given point in time.
A
So it's actually pulling from the person, right, but then we have a mechanism to store that information. And, as much as I don't like saying it, we still need labels, because bugs, S1s, P1s, whatever — those are labels, and if we don't use labels, it's gonna add complexity to the database, and I'm not sure we have the bandwidth to implement all that, because that's almost like creating another development project. Yeah.
C
No,
no!
That's!
Okay!
Mike!
You
came
up
with
a
great
proposal
I
like
that
I
like
if
we
can
automate
based
on
the
author,
putting
the
right
label
and
we're
gonna
start
to
define
a
ton
more
because
we
want
to
get
visibility
into.
You
know
the
specific
team,
whether
it's
back-end
or
front
end
and
so
on,
but
if
all
that
is
automated
and
we're
not
relying
on
humans
to
label
these
correctly
and
I'm
fine
with
that,
okay.
A
Yeah, I think we need to take the middle ground — and thank you for listening. Let's go with that. Let me open an issue to auto-add team labels by author, and then we can also push for — I linked an issue down there where we're gonna use BambooHR as a single source of truth. I'm not sure where that is yet, but given all the effort that Eric and Christopher have been putting into cleaning up team.yml, that should be the single source of truth.
A
We can even add another field, like: what's the team name for that person? And then, when somebody moves around — the handbook is the single source of truth — you update that, and then the triage bot just pulls from there and assigns the correct label from there. Yeah, my approach is, like: small, boring automation scripts.
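The "small, boring automation" discussed here — auto-applying a team label based on the MR author, with a roster as the single source of truth — might look something like this sketch. The roster schema and the `team::` label format are assumptions for illustration, not GitLab's actual team.yml:

```python
# Sketch of a triage-bot step: look up an MR author in a roster (stand-in
# for a team.yml-style single source of truth) and compute the team label
# to auto-apply. Roster entries and label scheme are assumed.

ROSTER = [
    {"name": "alice", "team": "Secure Backend 1"},
    {"name": "bob",   "team": "Monitor Backend"},
]

def team_label_for(author: str, roster=ROSTER):
    """Return the team label for this author, or None if they're not on the roster."""
    for person in roster:
        if person["name"] == author:
            return f"team::{person['team']}"
    return None  # unknown author: leave the MR for manual triage

def labels_to_add(mr: dict) -> list:
    """Labels the bot would add to an MR dict — idempotent, never duplicates."""
    label = team_label_for(mr["author"])
    if label and label not in mr.get("labels", []):
        return [label]
    return []
```

The point of the design is that when somebody moves teams, only the roster changes; every chart built on the labels follows automatically.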
B
Poking in with the support perspective: we're talking here primarily about labeling MRs, right? Okay. So for issues — that's where I feel more concerned. I don't have a solution, I'm just complaining, but as things get more and more complex, it gets more difficult for the support team to, like, keep all that in mind as we're creating bugs and getting stuff out there. Does anybody have any thoughts on a way that there's a both/and solution, or do we just have to suck it up? I know.
A
No,
you
don't
have
to
suck
it
up,
and
this
meeting
is
a
place
for
you
to
complain.
I
want
to
find
a
solution
to
I'm.
Looking
for
ideas
complains
to
me
is
a
blessing,
so
please
don't
play
more
I
linked.
Something
for
you
here
on.
We
need
help
from
support
to
define
what
we
need
from
sun
desk
and
this.
This
came
as
like
an
idea.
Oh
and
James
Ramsey
and
then
Cynthia
and
I
were
drinking
at
the
open
bar
I
contribute.
A
That's
all
the
missing
information,
so
I'm
coming
from
sandesh
tickets
and
we're
asking
supported
like
hey.
Could
you
please
add
this
information
from
Zendesk
in
this
book?
So
we
know
and
assuming
when
you
say
issues
you
mean
votes,
correct,
yep,
great
okay.
So
usually,
yes,
usually,
yes,
so
we
were
thinking
of
creating
a
bot,
I'm,
sorry,
but
overuse
this
this
bot
term,
as
we
have
so
many
of
them
create
a
token
an
access
token
from
instrument
desk
as
we
have
the
bots
won.
A
When
somebody
calls
us
endlessly
call
an
API
call
on
the
URL
of
that,
send
this
link
and
then
pull
the
labels
that
we
need
from
send
us
and
added
as
a
comment
with
scrubbing
any
sensitive
data
too
late.
I
think
we
should
disclose
these
eyes
or
whatever,
but
we
should
know
whether
one
if
it's
a
paid
customer,
what's
the
size
like
we
can
come
without
twelve,
like
small,
medium,
large
and
then
like
hey.
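The bot idea above — pull coarse customer facts from the Zendesk ticket, scrub anything sensitive, and post only the summary as a comment — could be sketched like this. The field names (`plan`, `seats`) and the size thresholds are made-up assumptions, and a real bot would fetch the ticket through the Zendesk API using the access token mentioned:

```python
# Sketch: turn a Zendesk ticket into a scrubbed, coarse-grained issue
# comment. Ticket schema and bucket thresholds are assumptions.

def size_bucket(seats: int) -> str:
    """Collapse an exact seat count into a coarse, non-sensitive bucket."""
    if seats < 100:
        return "small"
    if seats < 1000:
        return "medium"
    return "large"

def ticket_comment(ticket: dict) -> str:
    """Build the issue comment, keeping only what the meeting agreed to expose."""
    paid = ticket.get("plan", "free") != "free"
    return (
        f"Zendesk context: paid customer: {'yes' if paid else 'no'}, "
        f"size: {size_bucket(ticket.get('seats', 0))}"
    )
```

Only the bucket ever leaves the bot, so exact customer identifiers and seat counts stay out of the issue tracker.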
B
A little point: we are currently adding some automations for inferred stage and group labels — the new group labels that are beneath the stage — and we're adding some automations to infer those labels from subject labels. So if you have "merge requests", for example, it'll add a stage label of Create and the relevant group — I can't remember what that is — and we've done that for every single subject label. So we've got an MR for that, and it will run every night.
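The nightly inference described here — deriving stage and group labels from subject labels — is essentially a lookup table plus an idempotent "add what's missing" step. In this sketch, only the merge requests → Create pair comes from the meeting; the other mapping entry and the exact label strings are illustrative assumptions:

```python
# Sketch of the nightly label-inference job: subject label -> (stage, group).
# Only the "merge requests" -> Create example is from the meeting; the
# group name and the second entry are assumed for illustration.

SUBJECT_TO_STAGE_GROUP = {
    "merge requests": ("devops::create", "group::source code"),
    "ci": ("devops::verify", "group::continuous integration"),
}

def inferred_labels(labels: list) -> list:
    """Stage/group labels to add, skipping any that are already present."""
    added = []
    for label in labels:
        stage, group = SUBJECT_TO_STAGE_GROUP.get(label, (None, None))
        for inferred in (stage, group):
            if inferred and inferred not in labels and inferred not in added:
                added.append(inferred)
    return added
```

Because already-present labels are skipped, the job can safely rerun every night without piling up duplicates.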
A
Cool. In the interest of time, I'm moving on to the next agenda item. In line with how we want to enforce SLAs, the Zendesk information helps with part of the picture. There is an effort on us to clean up the definition of what it means when an issue is triaged — we have level one, which is like half completed — and quality engineering is actually taking this on ourselves. So you can click the issue, 6244.
A
The next one is from Christopher — oh, just realized it: we need to get better labeling and definition. By labeling, I mean: I looked at them, and there's nothing documented, and backstage needs to be better defined and potentially broken up. So I actually created this issue like eight months ago, and I remember, Dahlia, we were talking about renaming backstage to impediments and everything that comes with it. That's a proposal to actually break down backstage, because, I think —
A
It
started
from
like
what
is
TechNet.
What
is
backstage?
That's
like
the
hot
topic
item
everywhere
and
every
year
more
people
are
confused.
We
use
backstage
because
it
sounds
like
a
big
umbrella.
It
covers
more
stuff
if
you're
operating
two
rails
fire.
That's
like
that's
industrial
standards.
It's
like
you
need
to
go
to
the
next
that
they
make
the
long-term
support
or
whatever
so
open
it
up
for
discussions.
What
do
people
think
here.
D
It's a good question. So there's a couple of things, I think. One is — and there are probably three things in here that I'm gonna try to fit in — three things I'm thinking about. One: we still have a lot of unidentified MRs, which feels like we have some cleaning up to do, or at least, you know, getting people in the right mindset, and I'm not sure what the right method is to go about doing that. But I think that's something we probably need to start thinking about, which is, like:
D
How do we close that gap? Maybe even going back and relabeling some MRs — you know, say, for the last month — just to see if we can get better at it. The second thing is: when I looked around, I didn't see any place where we define the categories that we currently have, so I think, you know, being more explicit about them in the handbook is probably one thing we need to do around that. And then the last one is: somebody was asking what backstage is, and I said, well, it's tech debt and anything that's related.
D
Some
I
really,
though,
feature
but
not
really
directly
really
to
feature
which
is
me
going
like
this,
which
feels
you
know
very
much
not
like
we've
defined
it
crisply
and
because
it's
a
large
portion
there
was
some
feedback
of
well.
You
know
if
it's
really
a
tech
debt
versus
everything
else.
Maybe
we
break
this
up
and.
A
The good news: this goes back to before we actually started this conversation eight months ago, and the issue is — okay, so where's the issue... you got it, right there, that issue. I was confused as well, when I first joined, about what is tech debt and what is backstage, because they seem like cousins, and I think we ended up treating backstage as the umbrella — so it's, like, non-product-facing changes, right? Let's take that — and there's also stuff like upgrading to, like, Rails 6, or, like, performance.
A
So I think there's a positive way to break up backstage: technical debt falls under backstage, and there's also, like, static analysis, CI config, and all that stuff. So a boring iteration here is: if it's tech debt, it gets labeled tech debt and backstage, and we will dice it further down the road; but if it is not a feature, it goes in backstage, and that should help you.
C
That is the word that we came up with — we just, you know, being that we already had a backstage label, we went with that — but it's what Mike was describing: we needed a category that is not security, because we did want to highlight that as a separate category. We know that bugs are well-defined, we know features are well-defined, and community contributions too, so backstage is basically the "none of the above" for everything else, and a lot of times that is non-feature work — work that the team needs to do.
C
Understandable, and we've talked about it with multiple managers and PMs, and it really depends on that relationship. So, like, at the time, Victor and Shawn basically, you know, mentioned that they trust each other, and Victor will tell Shawn: if you want to prioritize certain things, I'm fine with that, just let me know what you're working on. I'm not sure that necessarily translates exactly the same way for every team, but the practice and the message is that the PM —
C
So, one more thing to add — and I know that this was actually a really hard thing for people to, you know, get through, but I was adamant in saying, let's not — I know we want a definition of what is in backstage, and that's great data, but the further down we break these labels, the messier our throughput chart is gonna start to be. So I really fought to keep these labels, as much as we can, as clear, large categories. So, just something to think about.
C
Of course, we can adjust and iterate from here if this doesn't make sense, but I do want to — like, I still feel that within backstage we can add further bucketing; but if we start to break out these specific categories, it's gonna start to be harder to track and see where our investments are going. Yeah.
D
But I'm more talking about — you know, like, this is less about meeting all the surface information and more about, you know: is the expectation the product manager has defensible — is the product manager in a defendable position? Right — do they have enough comprehension of what their team is doing to have an evaluation of that? That's kind of — I see Dahlia nodding, so I'm assuming you're agreeing with me — that's kind of the thought process associated with that.
C
Absolutely — and if it's not happening, we should have a conversation. I do want to enable our product managers to feel comfortable with what the team is executing on. I mean, that's a big part of why we categorized throughput: because if the team's throughput is in a good place, we should always be executing on the top priorities for the PM, and if that's not happening, then we should talk about how things are getting prioritized. So —
C
We have a page — sorry, Mac — and we can definitely add it to the throughput page. I remember leaving out the definition, because, when you hover over a label, it gives you the definition, and I didn't want to start duplicating — I didn't want them to start to get out of sync when we update the label definition. But if we'd like to pull in the text and add it to that page, I can absolutely get that in.
D
The one aspect I would say about this page is that what we put in there is very passive in its wording — and again, I'm skimming really fast; thanks for providing this information, cuz I couldn't find it. But, you know, we don't say what's required, right? We're just very passive, like: hey, here's some nice labels, most issues will have most of these. That's not saying: hey, these ones are required — or, you know, something to that effect, right?
C
Productivity
team
credit
they've
been
working
on
a
lot
of
automation
to
enforce.
You
know
adding
labels
as
well
as
pre-populating
with
labels.
So
I
mean
it's
it's
valid
feedback.
We
are
working
on
it.
We're
trying
to
make
this
less
of
a
manual
step
then
than
it
is
right
now.
But
yes,
I,
agree
with
you,
it's
it's
exactly
why
we
see
undefined
and
we
intentionally
display
undefined,
because
we
want
to
highlight
that
this
is
an
level
of
effort
that
we
want.
The
teams
to
focus
on
I
mean.
A
So
it's
required
by
automation,
but
let's
make
it
clear
in
the
handbook
and
communicate
it
out.
You
know
what
I
do
this
I'll
take
it
up
and
it's.
This
is
a
little
level
hanging
fruit
for
us
and
I
love
you
and
now
there's
a
review.
I'll
just
add
to
the
handbook
and
also
linked
to
the
issue
88
as
well,
then
we're
still
ironing
out
what,
if.
A
I posted the running doc from 2018 — credit to Dahlia; whatever Dahlia said, I'm integrating. Same thing she pointed out: hey, let's not explode these labels. And what we came up with originally under backstage was two categories. One is architecture — anything that has to do with, like, moving code around, that's architecture — and then industry, which is new industry standards. It's still up for discussion, so I'm gonna put this into issue 88 as well, and we can take it there — how we want to iterate forward.
D
Okay, leaving it as is. I just — like, we need to have a method to basically say: okay, we want to dig into this, you know, area — what's the right way to think about it? You know, is it the engineering manager — or, you know, be more explicit about going to the engineering manager versus going to the product manager for that area? Okay. And if a product manager does feel like there's an area like that, that would be —
A
Well, so here — let me share the screen, and you let us know what your call is on the visualizations. So, let's start with this first: this is the MR authors per month chart.
D
We're within the boundary of error, roughly — we're a little bit below, and we still have one day to go, so it'll be interesting to see if we pick up, you know, any increase, basically, between now and Saturday — essentially, actually, a day and a half to go, from that perspective. So, feeling pretty good about that. How did I get 0.77? That's the drop-off that we had — the average of the previous two Contributes — that I associated with that.
D
Pretty close on that one. And then, if you do the same math for the MRs per month — yes, we actually are having a blowout month: we're at 1,779 versus 1,504. And if you apply the 0.77, you get 1,366, which is what I think I predicted at the beginning of the month, and we're over that by a good amount. That's including community contributions — and when I flip it and do it without them, you saw that... yes. Cool.
A
Okay — one second, let me just review. Okay, I think we should start scoring them, with the fat arrow saying if this is already achieved; you just let me know here, and then I'll make an update to this page. So I think this one is roughly done, correct? — Oh yes, that one's already achieved.
D
Sorry
I
mean
yeah
serve.
Other
20%
is
roughly
10:00
10:00
a.m.
ours
on
average.
So
it's
oh
we're
getting
there
we're
not
quite
there,
we've
definitely
even
increased
throughput
by
20%,
but
when
this
was
originally
draft
or
the
intent
associated
with
it
is
really
get
us
back
to
kind
of
similar
productivity
to
what
we
saw
a
number
of
months
ago.
So
we
either
have
to
come
up
with
a
good
reasoning
behind.
D
— why it's up here. Yeah, my update is that we have them in dashboards. The dashboards are going to change over time, but we still have to define KPIs for the two top runners, and the last thing that we need to do is just establish the baselines, from which we'd actually get into level four. So I need to get that documented.
A
Sounds good, thank you. I have the next one. So we saw the p95 for S1s lower to 188, though Tania pointed out that we had a lower number of bugs resolved here as well. So, while I want to call this an improvement, let's continue to monitor it. This is still using Tania's beefed-up, semi-automated Google spreadsheet — so we haven't automated this yet, but at least we have metrics going in, and it seems to be improving, though we need to monitor it and ensure bugs coming from customers have a severity label.
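For reference, the p95 figure quoted here is just the 95th percentile of the resolution times. A minimal nearest-rank version looks like this (the inputs are resolution times in days; no assumptions about the team's spreadsheet beyond that):

```python
# Sketch: p95 of time-to-resolve via the nearest-rank percentile method.
# Assumes a non-empty list of resolution times (e.g. in days).

import math

def p95(values):
    """95th percentile (nearest-rank): smallest value with >= 95% of data at or below it."""
    ordered = sorted(values)
    rank = math.ceil(0.95 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]
```

Note that, as Tania's point illustrates, p95 can drop simply because fewer (or different) bugs were resolved that period — which is why the count resolved has to be read alongside it.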
A
Mark is currently working on this and will provide more updates in the next week. This is going to be in the quality dashboard, because we want to measure on the existing population — it might be a margin off on the bugs — and then, in the next iteration, we will put it in GitLab native once the exclusion mechanism is in effect. Iteration two: the current stage-group triage package report link is still in flux; we're working on it — no updates here yet. And I also have a training that I need to roll out on using the priority and severity labels.