From YouTube: Development Metrics working group 2019-06-13
A
Okay, it's the top of the hour, so let's get started. This is the weekly meeting for our metrics working group. Every meeting, we will run through the persistent links, go through the hypotheses, and then go through the agenda afterwards. So with that, I'll kick us off by going through the dashboard.
C
Certainly. We've still got essentially two more days this week and then two business weeks to go, and it'll be interesting to see — what I seem to remember is that in both March and April we were at roughly just below nine, and in both of those months we were at about five at the midpoint. So it feels like we're getting close to it, but we're not quite there — but we've still got a couple more days in the month to see.
A
Let's talk about that towards the end of the agenda. Mark, can you add one item there, please? Because we have seen performance being slow. It could be because of load on our APIs — which is a good thing that this is surfacing. A mitigation is to start the import earlier, so at least it's done in time for this meeting every day, at 9:30 PST.
C
I'm glad to hear that we're still working on importing, because that helps at least me feel a little more comfortable — and maybe there are a few additional MRs to come in, though I think the volume has mostly come in. If you look at the bottom graph, which includes community contributions, we're sitting at 807 today. The first week of June was our largest ever — 16 MRs above what we achieved in the first week of April. So that's a pretty significant milestone.
C
From that perspective, though, we've obviously added a higher number of folks in the past couple of months. So the question is, you know, you would expect to see a slightly bigger rise — but of course, getting people productive is usually a three-to-six-month thing, not a one-to-two-month thing. So consequently, I feel pretty good about where we are. 810 — mine says 807 — feels a little light relative to the halfway point.
A
Okay, with that I'm moving on to the third one. Credit to Mark — this is now automated, really crude. This essentially lists the time to close bugs: the first graph is all bugs, and then we start filtering by S1. It's a really boring, bare-bones implementation. We were going to migrate this to Insights, but this was the fastest path to get metrics for all the stakeholders and execs here. So if you scroll down, it's filtered by S1 and S2, and then we have S1 with customer and S2 with customer. There are some concerns from me.
A
I haven't had time to dig into this yet, but with this graph I think there are more issues with S1 and customer. Potentially we are not adding customer labels when an issue is affecting a customer. The automation we're working on — like detecting Zendesk links and all that — could potentially help this: we could just add a customer label when a Zendesk link is present. Do you want to chime in too, and add anything to that?
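The automation just described could be sketched roughly like this — a hypothetical triage rule, not the actual bot: if an issue's description contains a Zendesk ticket link, propose the "customer" label. The URL pattern and label name are assumptions for illustration.

```python
import re

# Assumed shape of a Zendesk ticket URL; adjust for the real helpdesk domain.
ZENDESK_LINK = re.compile(r"https://\S*zendesk\.com/\S+")

def labels_to_add(description, existing_labels):
    """Return labels a triage bot should add based on the issue description."""
    if ZENDESK_LINK.search(description or "") and "customer" not in existing_labels:
        return {"customer"}
    return set()
```

In practice a rule like this would run as part of the scheduled triage jobs, and a human would still review the label before scheduling work.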
A
Thank you. So this is the key point here: it is automated. Why don't we go up and take a look at customer bugs in general, S1 and S2. The average is 17.7 days for this month — well, the month is still ongoing, so it's still settling. I'm monitoring the 95th percentile and the 85th percentile; they seem to be very unstable.
A
So yeah, I would revisit this again when we have the complete month of June, and there are existing SLO mechanisms that we are going to nudge people on, and that's going to help with it as well. I do want to say that this metric won't get better until we burn down the backlog of issues that are a year and a half, or more than a year, old, because those will affect the time to close. Christopher, Craig, everybody on this team — comments?
B
I'm not making any other suggestion, just noting that there may be some unknown areas that I'm not sure about — like what the potential automation would be. We could pull the people that are in the customer projects and add tags on things that they interact with, but even that wouldn't be perfect. So, I see.
C
The question is, you know, if we're concerned about this metric and it's important to us, then that means we have to prioritize the work. If it's not the imperative now, then so be it; but if it is prioritized and we're still seeing this, then that's a different discussion. That's kind of where one gets — I didn't actually get the breakout based on group, so I can't quite understand it, I noted.
A
We have a bunch of triage processes and triage levels in place — I can send you the links later. But right now everybody in Quality Engineering — call it Engineering — triages new issues, so we can at least guarantee that an issue will get to a team with a severity label within a week of when it was opened, because we look at it every day. We worked it out: it's actually five issues per engineer right now, and it will get better.
C
There were notifications or discussions as an option to get information on — to cross-reference it with the MR that you're looking at — but I was wondering if that data was just not available on .com but is available in the self-hosted product, or not. I haven't had a chance to play with it. Craig, there may be a way — I may have found a way for us to essentially get that time to first comment, because we...
B
Okay, do they ever get downgraded? Like, often an S1 can come in — and I've seen at other companies where you have no workaround, and somehow, excuse me, during the course of time you do figure out an acceptable workaround while you continue to work on the larger bug. Do we ever downgrade severity?
A
I'm pretty sure we have, though there are no written guidelines on what is an acceptable path to downgrade. I think we should bring this up with the product team. Jason — as opposed to me here — you can chime in, but I've seen cases where it gets moved around, and sometimes it gets bumped up as well: it comes in as an S3 when it shouldn't be, right.
B
For sure. It's probably an edge case that severities get changed, but I'm wondering if that would affect the graph at all. You know, if you have an S1 that is truly a blocker, but we find a workaround after a couple of days — do you then bring it off this graph, or does it always stay in this one?
A
Okay, shall we move on to the hypotheses? Oh, before we move on: we added trend lines in the previous version of this graph in Google Sheets, because that's really easy there. If we don't really care about trend lines in this iteration, I'm going to move them out of scope, because I want to move this into the new GitLab feature that is going to be GA this month — we're lifting the feature flag along with the blog post — and we'll move trend lines out further instead. If everybody's okay with that, I'll close this out as done in the first iteration.
A
Thank you, and I will stop sharing my screen. Oh wait — let's go through the hypotheses, skipping to the ones in black. Number eight — I'm not sure we made any progress here: developers with less senior tenure come with a different background, and those who have been here longer are more productive. I know Clement has transitioned off this working group. Dahlia, go ahead.
F
What I was going to say is, I believe we talked about putting a hold on working on these hypotheses in favor of the things that we're focusing on, which is building dashboards in Periscope and some of the work that your team is doing. Do you feel that there's value in continuing to pursue some of these? I'm not saying we should delete them, but in the interest of time — I don't know; I have not been focused on them. So I want to make sure that that's the correct expectation.
C
Well, the only thing I was going to say was: Craig, if you wanted to jump on that issue and take a look at it, and you thought it was something interesting you wanted to investigate, that's an opportunity for essentially getting you started — from that particular angle you kind of understand the ins and outs of it. And if you want to hit me up next week, or even tomorrow, for a little bit of a 101, I'm more than happy to do that. So I'm okay with the general direction.
A
Be aware: if you have an issue, or if it's a long-running thing, just link it inline in the working group page, and then let's put the score in the description so it can be updated in real time every week — otherwise we'd have to make an MR to the handbook every time we score something. Leads and engineers — whether PM or data scientist — update this in real time. Any suggestions or comments?
A
Next one — just reiterating the dashboard that we've shown. Thank you, Mark, again, for quickly whipping up the updated dashboard; now we see the metrics coming in. The next one is the interesting one, because now we have data on issues that miss SLOs — and I also updated the label to be SLO; thank you, Christopher and Jason, from last time. I also made an update to the documentation page and removed any references to SLA, stating the same reason as well. So, thank you to Mark — once, twice now.
A
We now have a list of how many P1 to P3 issues and MRs have exceeded what is a long time, and I removed P4 for now, because that can go on to infinity — it shows 120 days and beyond. I think we should just focus our energy on P1 through P3 for now and try to get improvements in those areas.
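The "exceeded a long time" check could be sketched like this — a minimal illustration, where the day thresholds per priority are assumptions for the example, not the working group's actual SLO targets:

```python
from datetime import date

# Illustrative age thresholds per priority; P4 is intentionally excluded,
# since (as noted above) its age can grow without bound.
AGE_SLO_DAYS = {"P1": 30, "P2": 60, "P3": 120}

def misses_slo(priority, opened_on, today):
    """True if an open issue of this priority has exceeded its age threshold."""
    limit = AGE_SLO_DAYS.get(priority)
    if limit is None:  # e.g. P4: unbounded, so never counted here
        return False
    return (today - opened_on).days > limit
```

Running this over the open bug list per priority gives exactly the P1–P3 breakdown described, with P4 dropped.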
D
I was wondering — because we can break it down by stage — I don't know that the group triage packages or stage triage reports have a lot of information in them right now, but I was wondering whether that's the best place to surface these. We could limit it to P1 and P2, and just maybe have five a week — we can limit it that way as well. But it would be an extra section in there.
F
The one that I'm not so sure about — P1 and P2, I think, are very reasonable and obviously highlight the importance of addressing these. Do we consider P3 to be still in line with that? I mean, this is where it gets into that gray area: if we can live with it, how long can we live with it? Are they really important? And if we live with it for three months, does it need to be prioritized? So maybe, Jason, that's where you weigh in from the product side.
C
The reason I say it is because — as an example, and not that we want to model ourselves after them — that's a great example: their whole business was built around validation and bug fixing. Product expansion is definitely an element, but also, you know, people will say, "Oh well, you're fixing all the bugs — so it's all the more reason why we should get a subscription with you, GitLab." So that's just a thought process. Yeah, okay, all right.
C
Yeah — as far as form and function, I'm not necessarily as concerned; it's more the idea that if we want to really prioritize this, then it seems like that's one way: you know, when is this getting to product managers? But then the other part of this is, hey, we kind of expect that you keep up — you know, your teams keep up — and if that's not happening, then you need to be raising that to figure out what the right thing to do is, from an organizational perspective.
C
I took an action item from last time to kind of start pushing on hygiene, so basically I opened an issue where I asked all the groups labeled to basically go through and clean up, and most have responded. So that's really positive in regard to the fact that we've had pretty quick clean-up.
C
They don't have a team assignment, so because of that it's really hard to figure out, you know, based on the labeling. The one that jumped out at me when I first did an analysis — it's maybe thirty of them — is that the delivery team has a bunch, and I wasn't sure how best to go about that. So that was kind of the question: should we add, you know, should we add delivery? And then there's the other bit of feedback I got on that issue.
F
So I think documentation is mainly part of the cost of a feature, and the recommendation I would make is that they all automatically get the feature label — and if Docs feel like something was a bug, or should be recategorized, we can do it, but I feel like that's going to be more of a rare case. So if we feel like that's an okay and acceptable norm, that should be easy to automate: just automatically apply the feature label on every docs MR.
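The proposed automation could be as simple as the following sketch — a hypothetical triage rule, where the `doc/` path and the "feature" label name are assumptions for illustration:

```python
# If every file an MR touches lives under the documentation tree,
# apply the "feature" label by default; a human can recategorize
# the rare docs MR that is actually a bug fix.
def default_labels_for_mr(changed_paths):
    """Return labels a triage bot would apply to a merge request."""
    if changed_paths and all(p.startswith("doc/") for p in changed_paths):
        return {"feature"}
    return set()
```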
F
The interesting one is the 160 that were not labeled by a team. I mean, there are cases where someone might be working on someone else's stage, but that's also somewhat rare, I would say. Do we feel like every individual developer should know what label they personally are associated with? Like: I am on the static analysis team, so I will use this label — and it shouldn't be a matter of wondering "what label do I put here?"
F
I think there's more thought process that goes into it — like, does this affect, you know, the Verify stage? Even though I'm in Secure, I am doing work that is related to Verify — so should I label it Verify, or should I label it Secure? I just want to see: are we overthinking this, or can we make it an easy formula so that in most cases the label is easily applied and correct?
F
And I apologize — I'm not dismissing this. We've been iterating, and there has been change in labels and change in process, so I recognize it. I just would like to walk away with a simple formula so that we can simplify it, but also make it more accurate than what we have right now.
C
The rule of thumb I would use for applying the label is: if you're on a team that already has a label, then you should be applying that label, regardless of where in the code base you're working; and if you're not on a team that has a label, then put on the one that you think best applies, and go from there. Yeah.
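That rule of thumb can be sketched as a one-liner — a minimal illustration, where the team-to-label mapping is a hypothetical example, not the real org chart:

```python
# Illustrative mapping from team name to the label that team uses.
TEAM_LABELS = {"static analysis": "static analysis", "delivery": "delivery"}

def label_for(author_team, best_guess_label):
    """Prefer the author's own team label; otherwise fall back to their best guess."""
    return TEAM_LABELS.get(author_team, best_guess_label)
```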
E
Taking it the other way: if I'm on Secure and I'm doing something related to, you know, templating, which belongs to Verify — well, I think the correct label is the Verify label, because templating would be published in the blog post about templating. It gets the team label, or group label, which is capital-S Secure — but it should get the DevOps Verify label.
F
Yeah, and I think that's part of what we talked about: for throughput, we want to measure the team's capacity and their execution, regardless of where they're contributing — so you can derive these labels from the team membership. But in the case that you and I are talking about, where a team is contributing to another stage — that's where it matters: do you want that capacity counted toward the team they're contributing to, or toward their own team? It's a question of what lens you want to look through. Do you want to look at it from product investment —
F
— this is where it falls — or do you want to look at it from engineering execution — this is where the capacity is coming from? We're not going to win on both, so we should decide what's more important and use that for this iteration, and if it gets to be misleading, or we're not getting the value we need, we can adjust.
C
I'm not sure which one was which in your statement, Dahlia, but I'll put it slightly differently: it has to be about the team that you're in, not where the work is, because we do have a lot of teams go wherever they need to to get their job done. Otherwise we're going to be constantly looking back and forth and saying, "Oh well, this team's doing really well" — well, if this team's doing really well because everybody's helping them, then that's not really representing the situation, right, I think.
F
Actually, I think what I was trying to say is that it depends on what value you want to get out of the metrics. So, Christopher, you described it perfectly for engineering: we care about the team — how the individuals on the team are executing, and their capacity. What I was trying to highlight is that from a product point of view, that's not really the view they care most about; they want to see it from the point of view of investment in the product.
C
They own the investment of where the headcount is allocated, so ultimately it has to represent that, right? Misrepresenting that by team or organization isn't necessarily going to help. If anything, what you want to lock in is the fact that, hey — if Jason tomorrow said "everybody's working on Create," everybody's going to go work on Create, assuming he gets everybody to agree to that. Like, you know, that's...
F
Let me clarify, because there is a use case that PM drives: when they're doing the release post, they're looking at these labels to determine "this is what we shipped in Verify; this is what we shipped in Secure." When we label based on execution, there is a situation where the Secure team is contributing to something that the Verify team will end up releasing, but it will have the Secure label.
F
So if we go by that, the blog post is potentially going to put a Verify feature under Secure. We can definitely address these things — I'm not saying it's impossible — I'm just highlighting that that's the difference in how we in engineering are using these labels versus how product is using them: for product it's not about capacity, it's about the blog post and where the feature is landing in the product itself. Okay.
A
May I jump in a bit here, to make sure we've thought about hygiene? There's an issue for the rollout: we were planning to deprecate the old team labels, actually, because when we say "team," it's a collection of people reporting to a manager. In the old structure, Secure has both front end and back end, so for you to infer a team, you need the Secure label plus either frontend or backend, and you can scope that to a team. That's the definition in this MR — I'm going to put it in there.
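The inference just described could be sketched like this — a minimal illustration under the stated assumption that a "team" is a stage label combined with either a frontend or a backend label; the label names are examples:

```python
# Infer a team from an issue's labels: exactly one stage label plus
# exactly one discipline label identifies a team; anything else is ambiguous.
STAGES = {"secure", "verify", "create", "release"}
DISCIPLINES = {"frontend", "backend"}

def infer_team(labels):
    """Return e.g. 'secure backend', or None if the pair is incomplete."""
    stage = labels & STAGES
    discipline = labels & DISCIPLINES
    if len(stage) == 1 and len(discipline) == 1:
        return f"{stage.pop()} {discipline.pop()}"
    return None
```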
E
So, for the example that I gave: the Secure team has been doing a bunch of work related to includes — and it's also very common on our DevOps teams that they'll be working on includes, the pipeline features. Those pipeline features are still features related to DevOps Verify, so they get communicated to customers as being about CI and about pipelines. So they have the DevOps Verify label, and then frontend or backend — but that correlates with a Secure team member, not a Verify team member.
E
It's fairly common — there are a ton of issues. I mean, you could look it up, because we have those labels currently: search for issues that have the capital-S Secure team name and the DevOps Verify label, and you'll see how many times that happens. It's relatively common.
A
I think we should take this to the broader product discussion, because we were going to document what this label means. There was an MR — Fabio's — that is closed now; I think we should resurrect that. I don't think we should have two nomenclatures — a release team label and another release label. Because we were planning to infer from an author which dev stage or team they are in, we can derive it that way instead — and that also takes care of the migration when people move teams, because they can be in the Secure stage.
C
It had come into my lap and then somehow or another disappeared on me, so I'm not even sure what happened there — but yeah, there's that MR. And then there's also just the handbook update to the page that basically says, you know, here's how we expect to use the labels, and the rough criteria — which I think we're in rough agreement on — including this one complexity of the PM release structure leveraging it. But, you know, I don't know, Jason — how many?
C
If we're talking about 50 MRs and we've got two thousand running a month, then, you know — this is one or two percent off, which is noise relative to the problems we're trying to analyze.
A
Okay, before we move on, I do want to outline two points here. We are talking about filtering issues and MRs by the absence of a label. We don't have that right now — that's why we were downloading CSVs from Insights. This should really be a product feature, because you'd just be able to search by, hey, my team or dev stage's MRs that don't have a throughput label. You can essentially do that here in the issue.
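In the meantime, the missing feature can be approximated client-side — a minimal sketch, where the items are plain dicts as they might come from an API response or a CSV export:

```python
# Filter issues or MRs that LACK a given label — the "absence of a label"
# search discussed above, done after fetching the full list.
def missing_label(items, label):
    """Return the items whose label list does not contain `label`."""
    return [it for it in items if label not in it.get("labels", [])]
```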
C
Cool. My next one was on how MRs per release are calculated — basically, in the last two releases we've seen an uptick there. Jason kind of called that out, and I was trying to make sure I understood, based on that, whether that information was accurate. So that was kind of the question there: how we're counting that, and whether we had any challenges. Okay.
A
I'll answer here. Before, we were tagging it on the 7th, which was the feature freeze; we no longer have that anymore, so in the future it's going to be tagged together with everything on the 23rd of each month going forward. The 7th is probably no longer applicable with how we're doing things right now, and that may have been causing the uptick in the last month. It shouldn't be the case for the month before last, because we still had a feature freeze in effect then.
A
Okay, we have a one-off triage report for unscheduled customer bugs — this is just a one-off thing for issues with the customer and bug labels that don't have a milestone, i.e. unscheduled. Mark has been going in and tagging the relevant product managers to take a look at it. Is there any better venue for this? Any insights?
D
Yeah — so it was kind of halfway through by the time we started this meeting, so I'm just going to move it a few hours earlier. We have been having problems with the import taking a little bit longer, and I need to investigate why that's happening, but I'll just move it a few hours earlier so we know we have the right metrics at the start of the day.
C
As long as we're consistent on it, that's great — that's the main thing. But yeah, it just feels like — I look at it first thing in the morning, so, you know, I think the only people that may be hampered by that are folks in Australia, or late in the day, but I think they can manage. Yeah.
A
Rama — the comment, I haven't had the chance to reply to it: we're thinking of being flexible here, of guaranteeing that the number doesn't decrease, and I think he gave a rough number — like, if we had 25 right now, then with the delta coming in, maybe staying at 25 is acceptable. I still need to read through that, but again, I don't think we should be freezing metrics from outside; this is something we want across the organization. And a question back to you: is this a good window to import any other projects that we're missing?
A
Is it safe to assume that our original KPI was to automate those three persistent links — correct — and it's not done yet? Yeah, that's the way I'm treating them. Okay, okay, going on to the next one: ensuring all customer-facing bugs have a severity available. We have an OKR to triage the existing ones — existing customer bugs with S1 and S2 first; once that's done, we'll pick this up. So I don't have any progress here.
A
I'll leave it as such until that triage effort is done. Then the next one is monitoring average time to resolve S1 and S2 issues, to get it to go down — it's at 130 days and 300 days. I think we should wait for this month of June's numbers to come in and reevaluate this. And "an effective iteration to the current stage group triage package" — I think we have already made some progress here, so I'm going to score this one. We have the...
F
No — the only thing I would add on training is that I merged the stub (or Tim merged it): there is a stub in our development handbook for onboarding, and potentially we could start to put some of this training material there for managers. So, Mike, if you want to take a look, I'm happy to pick up a small issue and go in small increments on this training, instead of waiting to do one big training and recording it and so on.
A
Going back — okay, I will assign this out for the stakeholders here to review; I thought, let's merge it in. That's it for scoring; I'm just going to stop sharing my screen. We are four minutes ahead of time — anything else you'd like to talk about or discuss? Good, okay, cool. Thank you. I will follow up — Jason, maybe you're the best person to help me with this, and for your buy-in — on scoring, and on the team and stage label merging or whatever; I'll probably come up with an issue and tag you on it.