From YouTube: Development Metrics Working Group 2019-07-25
A: And since Christopher is out, we can also briefly go through this; when he comes back we can go over it again. We usually look at the MRs-per-month charts. I think we're at the 25th, and we're coming close to the previous month at 7.35. Our last record was 8.28, so hopefully we get parity in the coming five days. Total MRs per month is at 1,517; I think it's leveling off from last month. We have five days left, and I'm not sure another [200] MRs are going to make it within five days, given the weekend as well, so I think we're going to come in close. The total time to resolve bugs is still the same; I think we anticipated this until the new triage package goes out. We also want to start phasing the list of bugs using the new group and stage labels, which is being rolled out right now. With that, I'll stop sharing my screen and we can discuss the main items on the agenda.
A: We can skip the hypothesis this time, I guess, since [Christopher] is out, so I have the first one. We've implemented an easy way to track MR [labels] and deliverables. It's not exactly what you had been doing, Josh, with your engineers, but at least we have a review app here that shows how many deliverables we deliver per milestone. Let me just share my screen again so you'll see it. This is a review app at the group level. From this it seems like we were completing more things than we had [planned], like very aggressive. However, I think there's a mislabeling event for [12.1], and I'll wait until the teams have relabeled everything before reading too much into it. This is the first time we're looking at this [metric], actually, but given the trend it seems like we're delivering more aggressively. Josh, what are your thoughts?
B: Now I guess the problem there could be that the MR might not get the [deliverable] label on it unless you create it [from the issue]. Is that the potential failure mode here?
A: Yes, and there are two: that's one, if we don't have those cross-linked, then they weren't captured here; but also there could be deliverable issues that we just closed out that do not have an [MR] at all. So the correct way to do it is what Christopher has added in that issue: we need to go by delivered deliverable issues, with any [MRs] associated with each issue, and count only that number of deliverable issues. Yeah.
A: [People have to] remember to manually set the [deliverable] label, which is probably why I would have thought the top-line counts are lower, although actually they're not: 233 for [12.0] versus 188. So actually, for most, yeah, I'd be curious. We should probably just dig in here and see a little bit more, see whether the label really isn't being applied.
C: My hypothesis is that we're actually closing a lot more issues than we're representing, because we're only marking as [deliverables] things that are related to product. If that's the case, we need to recognize that, because ultimately you own prioritization; but if it turns out that teams are effectively prioritizing such that, you know, those are considered part of the deliverables, then that's great. If the number comes out close to even, then okay, now we've got the real data and we can look at it.
C: We can figure it out, right? It just feels like this is a quick, easy one to figure out, and it gives us a lot more information. Because if it turns out, you know, we're closing a thousand, and seven hundred fifty of those are ones we opened and closed ourselves, that's a problem; we need to go fix that. If it turns out that it's like 10% more, okay, what are those for? Let's go [look].
B: [Treat] it as an exercise to determine whether the [deliverable] label is being improperly applied, like you said. And I think one thing is: we need some guidelines on the planning process. I think that's become clear here, and I think [that was the] basic discussion yesterday as well. But I think for deliverables, that should be everything, including bugs, features, the whole lot: everything that goes through the planning process. If there are [MRs] being done that aren't in there, I would just be interested to see what that is, if it's like the backstage stuff.
B: Yeah, that makes sense. The one concern, again, is [anomalies] in the data, and we need to do enough checking, you know, spot checking, to be comfortable with it. Which is: how many issues just get closed because things are not [actually] an issue, right? So [someone] puts in a problem, we look into it, it's not product related, we close it, we give them some hints or whatever it is, and move on with our lives; and that would [get] caught up in here.
C: Those are, a lot of times, documentation issues that we should be going and updating and fixing based on that. That would be the next question, right: if the same issues are getting opened by customers three times a month, then we clearly don't have good documentation on them. That would be kind of my suspicion on that, yeah.
B: So I'm not sure what the percentages are. I think [our] distribution [model], having had some experience, might make it a higher percentage of that in [our] environment than in others, but there are probably still [issues coming in] that shouldn't.
C: No, that's a good point, which is that this could be short-circuiting the support process. So, to your point, I wasn't making the assumption that support was engaged beforehand, but I think that's an interesting point: if it turns out that customers are short-circuiting [support] by creating issues directly, but then we find out that they're customer related, it almost feels like we should have a level of triage there even before it gets to development.
C: [Part of it], though, is also just us getting some consistency around [this], because if it's taking product's time or development's time when it shouldn't be, you know, then that's something we should be optimizing for, right? That would be my sense of it, or at least having a discussion around it to say, yeah, we're more than comfortable with that level.
B: [That affects] a lot of the calculation here, which I didn't think about that much, which is that some people might not be flagging [it]: depending on how you create the MR, it may not pick up the labels from the issue, and therefore, if you forget to apply [deliverable], which probably happens, you might get undercounted. Although it's weird that that would trend... I guess if people just... I don't know. Anyways.
B: You're comparing total MRs to MRs that [carry] the deliverable label, yeah. Yeah, I think the challenge there is that, as far as I know, we haven't been strongly encouraging [people] to apply the deliverable label, and depending on how you create the MR it may not inherit the labels from the issue. So if it's not inherited, you have to manually set them, and my feeling is that some people are probably not applying the deliverable label. So, okay.
C: So yeah, so you're saying there's no validity in the deliverable label right now. Which, that's fine, we'll figure that out. I'm just telling you that, like, if we go fix that and it comes back and it's still at this number, what's our conclusion, right? That's what I'm trying to drive at.
A: [Hold on], this is a review app, so we need to merge this first; it may be missing some data. I did not make that clear before: this is still a work in progress. If I merge this, then it should be on top of the production charts that we have right now. [Until then] it could be missing some data.
B: Okay, yeah. So basically the query I ran was: the number of issues with the missed:11.8 label, and then the number of issues that were closed with a milestone of 11.8 and a label of deliverable. That's how this was run. Frankly, this should actually be a hundred and seventy-nine, not [181], but whatever, it's two issues, because these two never got completed and they're still assigned to 11.8 with [the deliverable label]. Oh, okay.
A: Correct. So for the deliverable ones you need to query based on the milestone, because [a completed issue] is still in that milestone. For the missed ones you just need to query on that label and not the milestone, because the issue could be closed in any milestone in the future, but it [keeps] the missed:[version] label. So you can't just do, like, a one-plus-one-equals-two on the open and closed numbers and get a good comparison.
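A's two-query rule can be sketched in a few lines. The issue records below are invented for illustration; the `deliverable` and `missed:11.8` label names come from the discussion, and the dict shape is an assumption, not the actual data model:

```python
# Hypothetical issue records, loosely shaped like GitLab issue data.
issues = [
    {"state": "closed", "milestone": "11.8", "labels": ["deliverable"]},
    {"state": "closed", "milestone": "11.9", "labels": ["deliverable", "missed:11.8"]},
    {"state": "opened", "milestone": "11.9", "labels": ["missed:11.8"]},
    {"state": "closed", "milestone": "11.8", "labels": []},  # closed, but not a deliverable
]

# Completed deliverables: filter by milestone, because a completed issue
# stays assigned to the milestone it was planned for.
completed = [
    i for i in issues
    if i["state"] == "closed"
    and i["milestone"] == "11.8"
    and "deliverable" in i["labels"]
]

# Missed deliverables: filter by the missed:<version> label only, NOT the
# milestone, because a missed issue gets reassigned to a later milestone
# and may be closed (or still open) there.
missed = [i for i in issues if "missed:11.8" in i["labels"]]

print(len(completed), len(missed))  # prints: 1 2
```

Note that the second record is counted as missed even though it was eventually closed under 11.9, which is exactly why summing open and closed counts per milestone does not reconstruct what was planned.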
C: All right, I'm going to think about it. It just feels like we made an assumption yesterday, I made an assumption yesterday, that we went from 300 to 400 items being planned. I'm not convinced of the math right now, and that's the best way to describe it, because right now I can't even [construct] a query to see what was planned.
B: I think the only way to see what was planned is to look at what was planned and completed, which would be [issues] closed in that milestone, right. That's the easiest portion, mm-hmm. And things that weren't completed would have stayed open and then been assigned to a later milestone, and the only way you know that something was actually originally planned in a given milestone and missed is the missed labels.
B: And there are edge cases where people push things out because "I'm not going to hit them." The only time that [label] would get applied is (a) manually, which is [not] all the time, and (b) if the quality team's triage comes through and applies it at the end of a milestone. But things that got pushed out later, or really [early in] the cycle, may not have it if the person didn't actually apply the label, which happens for sure. Okay.
B: So that is also an interesting [data] point, like: wow, there are 600 issues. The thing is, this is gitlab-org, so there are other things in here, right? I don't have an easy way to run these numbers. I mean, I guess I could selectively choose CE and EEP, [but] you have the Runner, you have all these other [projects], and I just [lumped them together] for simplicity's sake.
C: [That] agrees [with] my assertion, which would be that we had 600 issues all related to [work] we're spending time on. So, at the end of the day, if you want to increase velocity, which is, you know, a focus area, you've got to figure out how to basically take back that time. Right, like we're carrying half capacity right now from a velocity perspective. I'm not saying we'll ever get to 100% velocity, to be clear, but 50% feels awfully, awfully low. Yeah.
C: That's fair, that's fair. [There's no] rush. I'm not asking you to go do the calculations so much as to just tell us what the right way to view the calculation is, right? Then we can just ask [MEC] for the data around it, because then we have a better sense of how we evaluate this. Yeah.
B: I think there are a couple of questions, right? One is: what's our confidence, what's our predictability? That's one question, which is what [we're] always trying to answer here: for every issue we assign [to a] milestone, what's the likelihood of that thing actually getting delivered in that [milestone]? That's what I was trying to drive out of here, and I think that's important to keep track of, because we're getting feedback from customers that right now it's too low, and it's causing frustration.
B: They say they just don't have confidence in our milestones, that we'll actually ship it. So that's a problem. That's one thing we're trying to turn up to 70%, so we should have some dashboard, in an automated way, that's more scientific and less, you know... I think doing it here is good, because I can filter out other projects, it aligns with other reporting that we have, and it won't require me [to run it by hand].
A: It would be easy for us to do it in the [existing tool]. I know everybody wants to move off [it], but since the data population is there, it's probably like a two- or three-day task for our team to just render it there, and then we can work toward parity with Periscope later. I would run with that, actually, because the urgency of the data seems to be higher than migrating to Periscope. And correct me if I'm wrong.
C: Let's try that and see what the data brings us, and then we can figure it out from there. Right now my biggest concern is our total issues completed: even if we say 100 of those 600 are things that are automated or are part of the delivery team, that still means we've got over 200, or potentially 300, issues that are, you know, things we're doing. Okay.
C: Yeah. I'm guessing some of them are customer escalations, I'm guessing some of them are outright bugs that we need to fix, and I'm sure some of it is people doing the right thing. We just need to look at it from an overall [perspective]: how do we view this? Because if it turns out that, like, half of our capacity is [going there], then we need to go back and say, you know what, actually setting a 70% target...
C: What you're showing over the last several releases is basically static on delivery. What we're seeing on MRs is growth. Some of that could be because of [gaming] the system, we know that, but I think some of it, honestly, is people doing [the work]: we've actually got more volume coming through; it just may not be future-focused. And the question becomes: is that the right decision? You know, and this is a hypothetical, say I go through the planning process [for] the month...
B: Yeah, I think that would be really helpful to help us get our arms around understanding the size [of it], what work is being done, and then we can figure out, you know, whether it's appropriate where it's going, and just sort of do the analysis and dig deeper if the data shows [something]. Okay.
B: Yeah, okay, yeah, sure. I think, like, again, we can [settle] it, so let's just make a decision here. Let's just go with, you know, the same definition I had for the current spreadsheet, which makes sense, right: it's completed [if it's] closed with a deliverable label in the current milestone, and it's missed [if] it has the missed:[version] label for that [milestone]. Right, that's [it].
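B's working definition can be expressed as a small classifier. This is a sketch: the label names come from the discussion, while the dict shape for issues and the `11.8` milestone are assumptions for illustration:

```python
def classify(issue, current_milestone):
    """Classify an issue per the working definition from the meeting:
    completed = closed with the deliverable label in the current milestone;
    missed    = carries the missed:<version> label for that milestone.
    The issue shape (dict with state/milestone/labels) is assumed."""
    if (issue["state"] == "closed"
            and issue["milestone"] == current_milestone
            and "deliverable" in issue["labels"]):
        return "completed"
    if f"missed:{current_milestone}" in issue["labels"]:
        return "missed"
    return "other"


# An issue closed under the planned milestone with the label is completed;
# one carrying missed:11.8 is missed regardless of its current milestone.
print(classify({"state": "closed", "milestone": "11.8",
                "labels": ["deliverable"]}, "11.8"))          # prints: completed
print(classify({"state": "closed", "milestone": "11.9",
                "labels": ["missed:11.8"]}, "11.8"))          # prints: missed
```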
C: I mean, in theory it should be next week. Essentially they're doing backfilling right now. The way they've done it is they turned off the transforms while they're backfilling; once the backfilling [is done], they'll have to re-run the transforms. Right now the ETA is probably tomorrow afternoon.
C: My guess is, you know, it might be sometime Monday, but then we need to go back and look at the graphs to see whether or not we have parity. So there's probably going to be some work here, Christopher, once we get to that point. I was trying to get us another, say, week to figure out if there's missing data or things of that nature that we have to go [chase]. Okay.
A: Remy is busy, and the background work for marking [these] is in this [iteration]. The new triage package with the new labels should come in next week, and [teams] will also see bugs that missed SLOs in their respective group triage packages as well. So we need to close this out this week. Christopher, you have the next one.
B: Good. I think my question was whether to look more intensely at P issues than S issues; that was my only main comment here. A lot of that [work] ran around severity, so my only question would be: should we use Ps instead of Ss? Because if it's a high severity and a high impact, it should get a high [priority] label. So that might be [the better signal].
A: Yep. The metric on the S labels is kind of like an anti-metric: we could be hitting our Ps, but how are we [doing] on average at actually closing out high-severity ones? So it's like an anti-metric, in addition to how we hold ourselves to the [SLOs for S1s]. But in the short term, execution-wise, I think we should focus on P labels; that will give them a gauge on which one to pick up [next], right, and then you execute it that way. Does that make sense? Yeah.
A: [On] time: I'm the facilitator, so we can share that. Okay, my gut feel is that we might [need until the] end of August, or a bit into September, given other passionate discussions around [MRs] and deliverables, but I wonder what folks feel. I don't want to close this without drawing closure on a bunch of important items, especially things that drive discussions for Christopher and the [PMs] and everything, so I...
C: Hey, I'm going to be bold here, and apologies, Tonya: do you think you could run this meeting? I...
C: "...[won't] be nearly as good at it as Mike is." Well, it's time for new experiences, right? Sure. So my proposal is: if we're going to run till the middle of September, that we find a new facilitator for the meeting. Tonya, you're probably the best person, would be my guess. Beck, if you have a different opinion, I'm open to proposals.