From YouTube: [REC] Key Meeting - Quality (Public Stream)
A
Hi everyone, this is the Quality department key review for the month of March 2021.
B
Yeah, and I did watch your video a few hours ago, so thanks for that. This one is on what I believe is called the average successful pipeline duration KPI.
A
I agree, I think there's a case to move back to a known good state. I believe it was 49 minutes. This is also a case of limited hiring in EPT: we've been short on engineers for the longest time and we haven't been keeping up. So we could iterate this to 45 minutes, which is still ambitious since it will beat our previous lowest point, and then adjust from there once we have hit 45 minutes.
B
I was going to say, when you say you're not keeping up, it's not that there's no vacancy. You're saying the size of the engineering productivity team has been fixed while the size of the rest of the organization is growing, not that we're unsuccessful at hiring or adding to the team, or something like that.
A
Yes, we have a gearing ratio, you're correct. We do have one or two hires on the list, but they're lower down the list. I'll advocate more with my team on this to make sure that's captured.
B
Okay, okay. So it looks like we'll change the target to 45, below 49 minutes, and make it more ambitious, thanks. So I've got number five. I wrote these notes while I was watching your videos, so I said: oh, we should have Mek add more to his KPIs, because that's really important. I see you did go to my KPIs later in the video to talk about it, so my suggestion would be to just have it in your KPIs, so it's right in that executive summary section and you're tracking it as well.
A
Yep, agreed, thank you for that. And I believe we've made improvements here, kudos to the team. I wanted to highlight that as well. Thank you.
B
And I've got number six, which was a suggestion to demote the handbook update frequency from a key performance indicator to a regular performance indicator. It doesn't feel like it's at the same level as things like MR rate and pipeline success rate and mean time to close, and these other sorts of things.
A
We could revise that as well. I do have a plan to put this in Q2 OKRs to have the management team push this a bit more, so I still think we could be improving here in that regard. And if you think that's consistent with the adjustments you're planning, then I'm happy to move this down as well.
B
Yeah, I kind of want the KPIs that are defined to, one, be representative across all of quality. We've got quality engineering and engineering productivity, we're adding something around driving open source contributions, and we're adding engineering analytics. So that's four sub-departments within the department, and the KPIs need to be kind of a high-level overview of those four areas. This one feels like it's a drill-down into something. And this is the first time we're doing a quality key review.
B
Cool, thanks. And it looks like the next one accidentally got indented, so I'll unindent it. Number seven now was: I saw we added a new RPI for bug SLOs for all severity levels, which felt like it was measuring the same thing in a different way as what you opened with, the KPIs for mean time to close S1s and S2s. So what's the strategy? Is it to measure both, because they measure different things, or to adopt one and not the other, because it tells a more effective story?
A
On the remark about the bug SLO attainment RPI: we were planning to separate them out and have this as a supplemental chart in the shared dashboard, where you drill down to the MTTC of S1s, and that's the current chart as well. You're correct, we believe we should have just one single source of truth for the metric, and then for other stories you can drill down and look at the chart instead. That'll be the next iteration.
B
And number eight, same, just thinking about you driving community contributions. We've talked about the percent of overall contributions that come from the community being a KPI at my level, so I think having it at your level makes sense too, since you're driving a lot of those efforts with the team.
C
That sounds great. Kai, you want to vocalize that as part of your update? Yeah, makes sense to me. I'll add it in with the other feedback you had on changing around MR rate.
B
It'll work kind of like a pass-through: in OKRs we have pass-through key results, and this will be sort of like a pass-through KPI, where it'll be at the engineering level and it'll be at this level. And then number nine: it felt like the Software Engineer in Test gearing ratio might need to be a KPI, since that's your single largest sub-department, and we know where we're behind, and that's important to driving a lot of things successfully, like being SaaS-first and quality in general.
B
So
I
want
to
make
sure
next
time
like
if
finance
is
about
to
release
a
bunch
of
head
count
that
we've
got
a
lot
of
eyes
on
this
kpi.
There's
general
awareness
that
we
don't
have
as
many
as
we
hoped
that
these
folks
are
very
important
to
quality
and
things
like
velocity.
A
Agreed, I think we can move ahead with this. Thank you.
D
Yeah, why do we still have mean time to close, when it goes the wrong way when you do the right thing, and we have the average duration of open bugs, which I proposed and which seems a much better measure?
A
This is a story we found out after looking at the supplemental metrics inside the shared dashboard as well. We could always iterate. Do you think that MTTC, at this point, is no longer an accurate representation of what you're looking for? We could, yeah.
D
As I said before, the whole reason I asked for average open duration is exactly this problem: you're not measuring all the stuff you're not closing out. That is a problem, because what goes wrong at companies is that they forget about old bugs. It goes the wrong way, so let's not have a measure that goes the wrong way. Let's deprecate it, let's deprecate it tomorrow: set a target for the open time and get rid of it. We're measuring something that's, that's...
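A minimal sketch of the effect being described here, using made-up numbers: mean time to close only counts bugs that actually got closed, so closing a very old bug makes MTTC jump even though the backlog got healthier, while the average age of open bugs moves in the right direction.

```python
from statistics import mean

# Made-up bug ages in days, for illustration only.
closed_ages = [10, 12, 15]     # ages at close of recently fixed bugs
open_ages = [400, 380, 5, 8]   # current ages of still-open bugs

print(mean(closed_ages))  # MTTC ~12.3 days: looks healthy
print(mean(open_ages))    # average open age ~198 days: exposes the old backlog

# Do the right thing: close the 400-day-old bug.
closed_ages.append(400)
open_ages.remove(400)

print(mean(closed_ages))  # MTTC jumps to ~109 days: the metric "goes the wrong way"
print(mean(open_ages))    # average open age drops to ~131 days: improvement is visible
```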
A
I am fully aligned, and we could definitely do that, though if we target the average age of open bugs, the age would be astronomically high, at 400 average days, and then the target would be setting the next one to 370 or 350. If you're okay with that, we would just be relying on that. Well...
A
We'll take it up next time to revise and de-prioritize MTTC, and we'll move up average days open.
E
Yeah, it's a similar question to Sid's. I like the addition of the backlog visibility, which shows not only the count of the S1 bugs but also the mix of the average close plus the SLO. I think the SLO actually somewhat gets to what Sid's talking about, which is: how well are we getting to them at a certain speed? I think it tells a good operational story.
E
I guess what surprised me was that there's about 15, a small number of bugs open, but they're very old. So I guess the question is: do you have any insight into why? Are those just really hard ones that have been out there a long time? They're S1 bugs, so to me that doesn't seem intuitive; you'd think they'd be a higher priority. So I'd just like to get intuition on why the age is so high.
A
For that population, one of the better signals we've seen when a bug is not getting closed is that it's not clearly reproducible, or is not predictable in the current state.
A
So I think a push for that would be to have teams look at how to reproduce the bug again and, if not, just have them close it out. That way the signal's clear: if someone runs into it again and it's a true S1, then someone will file another new S1.
A
I agree. We can take an action to work with the team on just cleaning these out, so we shouldn't have really old ones.
A
Yes, thank you for this. Eric has been super supportive. We have the remaining list of hires in the single source of truth hiring sheet. Coincidentally, last week I asked my managers to look at the prioritization in their queue and give me an updated list with that info. I can take an action to update the signal there in that sheet and advocate more in that regard. Thank you for your support here, and I believe moving the gearing ratio into a KPI will let us see the signal going forward as well.
A
I believe that will help along the same line.
B
So, some historical context on this one. Quality was a department that didn't exist at all when I started, and so when we created these gearing ratios there was a question of what we need, and that's up for debate. I mean, maybe we're at 34% of target because we're overdoing the gearing ratio or whatnot. But the other factor that came in was: how fast can we reasonably grow? So, like, in 2019...
B
I
think
the
gearing
ratio
would
have
dictated.
We
grew
the
quality
department
600
that
just
wasn't
possible
from
a
human
hiring
perspective,
and
so
we
said
okay.
Well,
we
have
to
exercise
a
little
bit
of
patience
and
grow
into
the
need
kind
of
at
a
controllably
at
a
controllable
like
a
healthy
rate
of
growth
for
a
department
and
then
hiring
came
to
a
hard
stop
in
early
2020
and
quality
was
just
like.
B
We had a plan to get to our gearing ratio in calendar year 2020, and then headcount across the board just sort of paused, and so quality was left kind of out in the cold when the door was shut. We've been waiting ever since to sort of turn that back on. Now, we have used this spreadsheet Mek mentioned, where we prioritize all of the headcount across all of R&D with Scott, and we have been opportunistic when there has been attrition.
B
We have occasionally taken that backfill away from whatever team it was and added it to quality. So quality has been one of the groups that's been growing much more than average over the last year and a half, but that's been while no groups are really growing, because we haven't had a lot of vacancies across the board.
D
Can we prioritize someone focusing on the front end of our merge request code? Because it's buggy, it's laggy and not performant, and it's really hurting the entire company and our prospects in the market.
A
We're happy to adjust. I was planning to ask for dedicated Software Engineer in Test staffing for the UX and front-end foundational teams. Until we get that funded, I was looking at whether the SET for the editor team can focus on this in the short term. I know this came up in mobile layouts and things of that sort as well. Yeah, it's...
D
Just like my recent post of three days ago: if you look at our Net Promoter Score feedback, that is the one that most people complain about. So it's coming from multiple sources, and it's basically the most used functionality in GitLab. Like the editor stuff: if that doesn't work, that's not nice, but people can live with it. But this thing, merge request reviews, is the most essential thing in GitLab.
D
So
I'm
not
telling
you
to
change
your
order,
but
I'm
surprised,
I
don't
see
anything
reflecting
it.
It
makes
me
wonder
whether
we
have
considered
the
popularity
of
things
when
we
make
our
investment
or
that
we
just
say:
oh
there's
this
many
people
working
on
it
etc.
But
I
think
like
if
we
have
a
part
of
gitlab,
that's
heavily
used.
We
should
the
bar
for
quality
goes
up,
and
I
don't
see
that.
A
Okay, thank you for the feedback. This also lines up with some of the discussion we had, especially in Runner, where the feature is broadly used and the permutation of testing is high: we need more than one SET, and the pattern is we borrow staffing temporarily and then release it. We'll take this feedback, and especially for merge requests I'll look into what we can do with what we have.
D
Yeah, so, Eric: maybe we have a ratio now that's appropriate for the heavily used parts, but we don't need the same ratio for the lesser used parts. So it's now showing a bigger deficit of engineers than there really is, but also, of the engineers we add, we put them on the wrong projects.
B
Yeah, there is this recency bias, or the squeaky-wheel-gets-the-grease sort of thing. So as much as Mek can action this immediately, it should probably be something more holistic where, like, we come up with a way to score each individual area.
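A hypothetical sketch of that scoring idea, not a figure from the meeting: weight each area's Software Engineer in Test shortfall against the gearing-ratio target by how heavily the area is used, so heavily used areas surface first. The area names, headcounts, usage weights, and target ratio below are all illustrative assumptions.

```python
# Illustrative only: area names, headcounts, usage weights, and the
# target gearing ratio are assumptions, not actual GitLab data.
AREAS = [
    # (area, development engineers, SETs, relative usage weight 0..1)
    ("merge request review", 20, 1, 1.0),
    ("runner",               12, 1, 0.7),
    ("editor",               10, 1, 0.4),
]

TARGET_RATIO = 1 / 8  # assumed target: one SET per eight development engineers

def priority(devs: int, sets: int, usage: float) -> float:
    """Usage-weighted SET shortfall versus the gearing-ratio target."""
    deficit = max(devs * TARGET_RATIO - sets, 0.0)
    return deficit * usage

# Highest-priority areas print first.
for name, devs, sets, usage in sorted(AREAS, key=lambda a: -priority(*a[1:])):
    print(f"{name}: priority score {priority(devs, sets, usage):.2f}")
```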
E
Yeah, I think my point in E was similar to this point. Given the history you just described, Eric, it looks like we set something a while ago, and it seems like we have an opportunity to iterate on the gearing ratio approach. Let us know how we can help.
B
Yeah, and the goal prior to this was just kind of: think of people trying to cover the whole application, holding hands and being stretched very thin. Now, especially with more headcount, we can get into being more strategic and double-covering areas that don't, on the surface, appear to need it, but have extremely high usage or a high degree of complexity or these other sorts of factors. Yeah, great.
G
I'll verbalize mine, D7. We have OKRs coming up for MR performance and usability in Q2 and Q3. Could we fold some of this work into the OKRs? Does it have to be QE who's working on it? If we had really actionable issues, are they things that developers could pick up?
B
So yeah, the mandate of the quality department is to make sure development knows what their quality is, so development actually works on a lot of these things. What Mek's people do is work on frameworks, tools, and some of the hardest tests that very few people can write, like two-factor auth for SAML or something like that. But yes, it's very much about directing the work that development would execute. So some of these things need to go to product managers to be prioritized for the development teams that are aligned with their area.
A
Thank you. As an action on this part from me: I'll look at how we can factor the MAU metric into the gearing ratio, and I'll provide a follow-up then.
D
Sure. Unique contributors per month is going down. Any color on that?
A
This is where we'd be transparent: I have no signal at the ground level yet on why it's going down. This has been raised as a concern in our recent meeting as well. We have a sync with the community relations team, and I'm not sure, Kyle, Tanya, have you touched on the signal? Do you know of anything we can provide here? We can follow up later.
C
Nothing quantifiable, just contributor friction getting in the way. So we're trying to isolate those points and get that ground truth a little bit more, based on the feedback from Christos.
C
I believe there's the same trend, but the units that are counted are a little bit different. Sorry, my son's behind there. It seemed like they were counting commits when they look at contributions, and they don't really have an atomic contributor count.
D
Yeah, I'd be especially interested if we're counting commits instead of MRs; MRs seems closer to value. Cool, thanks. Thank you.
A
Thanks, Kyle. And I believe we're at the end of the agenda, and I think that wraps it up for our key review.