From YouTube: Mar 2020 analysis of merge requests merged in more than 30 days - 2020-May-06 discussion
B: My apologies, I can't start my video for some reason right now, but I mentioned this in the channel as well. I was thinking 525 would be the best state to start this, so we don't confuse it ahead of 13.0; I updated the issue on that. We also have a handbook MR in progress to describe the policy, so that we can collect feedback in the feedback issue I linked to in the channel as well. So, good progress there from what jinchen has done. It looks like we're ready for that.
A: We can see how we're doing on this, like the current number of merge requests that I've identified as idle, grouped by team, so we can see the current state in a table and then a graph of the total number monthly; not historically, but starting next month. And I believe with the triage issues we add a label if it's currently idle or something like that, so we could pull this data basically from the label.
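Pulling this data from the label could be sketched against the GitLab REST API roughly as below; the project ID, the "idle" label name, and the token handling are illustrative assumptions, not the team's actual setup.

```python
import json
import urllib.request

GITLAB_API = "https://gitlab.com/api/v4"

def build_mr_query_url(project_id, label):
    """Build the merge_requests list URL filtered by a triage label."""
    return (f"{GITLAB_API}/projects/{project_id}/merge_requests"
            f"?labels={label}&state=opened&per_page=100")

def fetch_labeled_mrs(project_id, label, token):
    """Fetch open MRs carrying the given label (e.g. a hypothetical 'idle' label)."""
    req = urllib.request.Request(
        build_mr_query_url(project_id, label),
        headers={"PRIVATE-TOKEN": token},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

This only shows the shape of the query; the real triage-ops tooling presumably drives the same endpoint with its own pagination and auth.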
B: It wasn't a part of the initial trends report. Just collecting them, collecting the MRs together, we can look to do that. I also think that the calculation of stale, or idle I should say, is just looking at the last-updated date, which should be available in Sisense. So we don't necessarily need to wait for a triage policy to do that.
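The idle calculation described here, just comparing the last-updated date against a 30-day window, might look like this minimal sketch; the 30-day threshold comes from the discussion, while the function shape is an assumption:

```python
from datetime import datetime, timedelta, timezone

# Threshold from the discussion: no activity for over 30 days counts as idle.
IDLE_THRESHOLD = timedelta(days=30)

def is_idle(updated_at: datetime, now: datetime) -> bool:
    """True when the MR's last activity is more than 30 days in the past."""
    return now - updated_at > IDLE_THRESHOLD
```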
A: Okay.
D: So what we found is that one of the top trends we noticed in merge requests merged in greater than 30 days, especially the way-outside outliers, like greater than 90 days, is one or more instances of them going idle for greater than 30 days, so that's the first place we decided to target.
A: So, great. That's awesome, Colin, Lily; good stuff. And the last thing we have in the doc is ideas.
A: And we have some numbers later in the document on average, median, and 90th percentile, so I'd like to, not with the first launch of this but in the future, look at some of these other ones, add them to the same triage, and then get feedback on it. All these numbers I came up with are kind of round numbers, at the tens or at the fives; it's just easier for people to perceive them.
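The average, median, and 90th-percentile figures mentioned here could be computed with a small helper like the one below; the nearest-rank percentile choice and the record shape are illustrative assumptions:

```python
import statistics

def summarize(durations_days):
    """Summary stats over merge durations (in days): mean, median, p90."""
    ordered = sorted(durations_days)
    # Nearest-rank style 90th percentile over the sorted values.
    p90 = ordered[int(0.9 * (len(ordered) - 1))]
    return {
        "average": statistics.mean(ordered),
        "median": statistics.median(ordered),
        "p90": p90,
    }
```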
B: Those all seem like really good things to add to the report, or at least make visible. I'm not sure how feasible it is with how the triage-ops items work; it really leverages what's exposed in the API, and off the top of my head I'm not sure. Maybe threads are exposed in the merge request API. I can collect it all together and ask the team for feedback, just to make sure there's nothing I'm missing. But okay, I like where that's going.
A: If you look in the source data links section earlier, the first link there is to a Periscope dashboard. It doesn't have all the data that we just discussed, but it has a lot of the source data. It doesn't have some of the trends, because we haven't added them there yet, but you can see them below; there's a couple of tables linked in a spreadsheet later in the doc from when we discussed this group previously.
A: Now, that being said, our trends, like when things take longer than we expect, and customers' trends might be different on all these various metrics. If we were going to make this customer-facing, I would likely want customers to have the ability to see their own trends and then make their own decisions on the thresholds.
E: And in that light, do we have any capabilities in the product right now that can render, like, the long tail, the MRs that take longer to merge, in a specific timeframe? Do you have any insights on a feature request for that, that we could maybe align on and use if that lands?
D: We do have productivity insights already in the tool that do have that kind of histogram, and I noticed that we reproduced that in Sisense. I've opened another epic that tries to address the question broadly of why we are using Sisense instead of capabilities we already have, and we know that there are some limitations there. I'm trying to collect those, so we can turn them into issues and get them fixed. I'll link that in the document here.
E: Some of them will be scoped, but the bottom-line direction here is that some of the work happening in backstage is actually a dependency to ship a feature, because the use of feature right now is sometimes about just new features, and existing features are lumped into backstage as a catch-all. So, to fix this, we would like to expand the feature label so it has scope for new features, additions, and maintenance, so this work is categorized and credited accordingly.
E: The other populations of work are scaffolding and tooling, so, boring solution, we're going to add two new scoped labels, tooling pipelines and tooling workflow, to capture the work here in backstage, and I think this is the majority of what's happening there now. Tooling pipelines is everything that we do to configure our CI and the test configurations, and tooling workflow is the triage automation, the delivery team, and the release and all that. So I just want to update the stakeholders here on the progress; happy to open up for questions on this direction going forward.
B: Maybe just one thing to note: one of the behaviors we found is that Danger, I think, has kind of encouraged things that are groundwork for a feature, like starting feature work, being labeled as backstage to avoid the changelog. So that's something we're also going to look to address in Danger, I would say, ensuring that we're not causing a cobra effect with our merge request review rules, where we are trying to create one behavior and it leads to an unexpected behavioral outcome.
A: Do we need this recurring meeting any longer? It got us going; I think we've met four times. We have a Slack channel where we're discussing things. We have one or more issues where we're discussing things. This was intended to run not into perpetuity, but for a limited period of time. I'm asking the question because I think we're close to being at the end of needing this recurring meeting.
A: Next steps: great, so I'll get that scheduled. It's a great idea.