From YouTube: S1 mean time to close spike explanation
Description
A 3-minute overview of how to diagnose a spike in the bug mean-time-to-close KPI using the shared dashboard
If we look a little bit closer, we'll see there are a few exceptions that are pulling this up. By clicking on the link, we'll go to our shared dashboard, which is a public dashboard, and a few things jumped out to me really quickly.
First, we have this chart here around percentiles and the mean trend. As you can see, the mean has gone very far above our p80 time to close, which means 80% of all bugs that are closed have a lower time to close than the mean. When the 80th percentile sits below the mean like this, we have a higher likelihood of outliers anchoring the average up.
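As a minimal sketch of that anchoring effect, assuming made-up close times (the video's underlying data isn't shown), a single long-lived bug can pull the mean well above the 80th percentile:

```python
import math

# Hypothetical close times in days; one long-lived outlier, like the
# 469-day issue discussed in the video.
close_times = [2, 3, 4, 5, 6, 7, 8, 10, 12, 469]

mean = sum(close_times) / len(close_times)

def percentile_nearest_rank(values, p):
    # Nearest-rank method: value at position ceil(p/100 * n) in sorted order.
    ordered = sorted(values)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[rank - 1]

p80 = percentile_nearest_rank(close_times, 80)

print(f"mean = {mean:.1f} days, p80 = {p80} days")
# The single 469-day bug anchors the mean far above the 80th percentile.
```

The percentile barely moves when one extreme value is added, while the mean shifts by roughly outlier/n, which is why the dashboard plots both.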
As we look at our two tables down here, which provide some supplemental information, we'll see that there's an issue closed in March that was open for 469 days, around what looks to be an end-to-end test, and also an issue here, an S1 bug that has no milestone or backlog. The issues in this second table are usually indicative of a bug.
This might be something for us to look at in the future, but Ramya has suggested it to be an S3 bug, so it should fall out of the mean-time-to-close calculation in the report during the next refresh. For this one, as we scroll down to the bottom, we'll see that there was a lot of discussion, but the product manager closed it because of its age, the lack of further considerations, and our ability to reopen it. You can see here it was closed without a milestone.
This is usually a pretty good signal that these bugs just get cleaned up by the product managers from time to time.
Once these two, well, once the first bug falls out, you should see our mean time to close drop a little bit, but we'll still see an anchoring effect from the bug that was closed without a milestone, based on the current metrics.
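The expected next-refresh effect can be sketched with hypothetical numbers (only the 469-day figure comes from the video; the other durations are invented for illustration): once the re-severitized issue no longer counts as S1, the mean drops, but the long-open no-milestone bug keeps anchoring it.

```python
# Hypothetical days-open per closed issue; only the 469-day value is from
# the video, the rest are assumptions for illustration.
days_open = {
    "e2e-test-issue": 469,    # re-severitized from S1 to S3 per Ramya
    "no-milestone-bug": 180,  # closed by the PM without a milestone
    "bug-a": 5,
    "bug-b": 9,
    "bug-c": 12,
}

# Before the refresh, the 469-day issue is still labeled S1 and is counted.
before = sum(days_open.values()) / len(days_open)

# After the refresh, the S3 re-label drops it out of the S1 calculation.
remaining = [d for name, d in days_open.items() if name != "e2e-test-issue"]
after = sum(remaining) / len(remaining)

print(f"before refresh: {before:.1f} days, after refresh: {after:.1f} days")
# The mean drops sharply, but the 180-day bug still pulls it upward.
```

This mirrors the narration: removing the first outlier gives a visible drop, while the bug closed without a milestone keeps the mean elevated until it too ages out or is cleaned up.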