From YouTube: CHAOSS Risk Working Group 10-28-21
A
Welcome to the October 28, 2021 meeting of the Risk working group for the CHAOSS project. The links to the minutes are shared in the chat, and I can also share them.
A
So that we are able to see them together: I thought we might begin with a brief review for those who weren't here last time, starting with the OSS Summit North America.
A
During the risk talk, there was some discussion of libyears versus what some OSPOs call technical debt and how they measure that, and some discussion that the name itself can be confusing. It's not really technical debt, because technical debt has been evaluated as sort of the cumulative age of all the dependencies, and the dates of calculation are somewhat different from what we chose to use with libyears. So in some ways it's easier to implement than libyears.
A
That
was
the
pro
because,
with
libyars
we're
doing
a
calculation
between
the
most
recent
release
and
the
release
that
a
project
is
on.
So
obviously,
we
always
have
to
maintain
a
knowledge
of
the
most
recent
release,
as
well
as
the
delta
between
those
dates
and
the
previous
release
or
the
release
that
a
project
is
using
and
it
also
it
does
also
enable
noting
older
software
that
might
no
might
not
be
maintained
any
longer.
So
it
has,
it
has
an
advantage
of
making.
You
know
if
your
most
current
release
is
five
years
old
it
could.
A
It
could
very
well
be
the
case
that
that
suffers
not
being
maintained
and
that's
a
bad
thing
and
libyars
wouldn't
show
you
that
information
that
that
the
the
dependency
had
stopped
getting
maintained,
and
then
you
can
see
the
cons
are
well
that
would
penalize
like
old,
stable
software.
That
really
didn't
need
an
update
and
the
number.
The
other
thing
is
the
numbers
slowly
changing
regardless
of
releases
so
david.
Would
you
like
to
comment
further?
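A minimal sketch of the libyears calculation just described (the delta between the release a project is using and the most recent release, summed across dependencies) might look like this; the dependency names and dates are made up for illustration:

```python
from datetime import date

def libyears(used_release_date: date, latest_release_date: date) -> float:
    """Libyears for one dependency: years between the release in use
    and the most recent release (0 if already on the latest release)."""
    delta_days = (latest_release_date - used_release_date).days
    return max(delta_days, 0) / 365.25

# Hypothetical dependency data: (date of release in use, date of latest release)
deps = {
    "libfoo": (date(2019, 3, 1), date(2021, 9, 15)),
    "libbar": (date(2021, 6, 1), date(2021, 6, 1)),  # current: 0 libyears
}

total = sum(libyears(used, latest) for used, latest in deps.values())
print(round(total, 2))  # → 2.54
```

As the discussion notes, a dependency sitting on its current release scores zero libyears even if the project has stopped being maintained, so this number alone won't flag abandoned software.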
B
You know, it was the catchphrase of 2020 and it's the catchphrase of 2021. You're muted.
B
Okay, yeah, and in fact we have a recording of this too. I am not excited about this particular measure. I mean, there are pros and cons to everything, there's no perfect measure, but I really don't like the term, and I don't think this is a great measure. As far as the term goes, the term technical debt actually has a meaning: it basically comes from the financial term debt, implying that your debt on technology is just like a debt on a house.
B
So, you know, thinking about housing or credit card debt is the model, so this doesn't seem like the right way to talk about that. Because if I'm using something that was released a while ago, but that's the current version, that's not necessarily a bad thing. On the other hand, if it's relatively recent but it's behind, I should get penalized for that.
D
I tend to agree. I mean, I was thinking about technical debt; I agree with that analogy, David. I feel like I've heard it used a lot in the context of just implied work, or work that you are either putting off or addressing and investing in up front, knowing that taking it on means you're going to have to do it later, and it's going to continue to be a bigger problem as other things change around it. But for me, I usually look at it as an integration problem.
D
Just how much work you need to do to make something work well, versus how many band-aids are you applying that you'll need to refactor later. And age is not actually, maybe, the biggest factor. It's all contextually driven: if you're using an older version because all the things you're using around it are compatible with the older version, then you're incurring less. I don't know; it's not necessarily a time-based or time-related issue.
B
Yeah, I mean, this is the problem here. By the way, I probably ought to just post this in... I'm gonna post this right here: for the phrase "technical debt" there's actually a Wikipedia entry. This is a standard computer terminology term; it has a meaning, and it doesn't mean what this thing measures.
B
There's a long list of things that we don't know how to do right now, but that's a totally different statement. I mean, I did a PhD dissertation on something that everyone agreed couldn't be done, so I'm very hesitant to use "can't". But I think it is fair to say that we don't have a very good idea of how to measure technical debt.
B
That
we
take
an
easy
solution
now
and
have
a
better
solution
that
would
take
longer
and
the
problem
here
is
you
know
if,
if
you
start
with
the
short-term
solution
and
then
slowly
move
towards
it
and
pay
it
off
you're
fine,
if
you
take
the
short-term
solution
and
then
you
base
everything
else
on
this
thing
that
you
know
has
serious
limitations
that
are
going
to
bite
you
later
then
you'll
get
bit
and
hard,
so
you
know
but
but
yeah.
So
I
don't
know
I
I
would
say
I
don't
know
anybody
who
knows
exactly
now.
B
It may be a lack of knowledge, yeah, or it may not be a lack of knowledge; it may be a lack of time. Elegant, short code often takes longer than the quick spit-out, just like elegant writing often takes more work too, and for the same reasons.
B
Right, yeah. I think really the broader issue is "best fit for purpose", or something like that. Yeah, so, I mean, there's a paper published in the Communications of the ACM specifically talking about this term, referencing the Ward Cunningham report from 1992, where he says shipping first-time code is like going into debt.
B
What we then talked about is: hey, is this an okay measure if we just renamed it? And I think renaming it... let's see, right here, we talked about that alternate name. I mean, it's better with a clear name, but it still has problems.
B
Oh, let's see here; it's called "Managing Technical Debt". If you are an ACM member, and I am, you can get it right...
B
Okay, so you can get direct access to that if you are a part of most universities; you can probably also get it if you're a member, because they typically have a way to get these. I mean, CACM is one of the top journals, period, in computing, so if it's there, a lot of people read it.
A
Yeah, it appears to be in there. I don't know, I might be logged into ACM right now, but it seems to not be behind a paywall.
B
Think... oh yes, you did. Let's see here. Yeah, but that's...
A
So I put it on our list of things that perhaps we consider in the future, for the moment.
D
It's more of a related term that gets used in a lot of contexts. I think I might have brought this up before we even had this conversation, because I wasn't there two weeks ago, but we use the term all the time to talk about a broader variety of similar issues: just sort of the amount of debt you're incurring by adopting a solution. It doesn't necessarily have to apply to software development; it's used in the context of, say, infrastructure selection.
D
To
invest
in
refactoring
or
choosing
to
know
that
you'll
have
to
do
it
later
and
you're,
assuming
the
risk
that
something
might
break
and
they'll
be
forced
to
do
it
earlier.
So
it's
sort
of
like
it
can
generally
be
used
in
sort
of
those
broader
discussions,
not
necessarily
specifically
about
software
development
tasks,
which
the
definition
tends
to
focus
a
bit
more
on
software
yeah.
So
I
don't
know.
I
think
it's
because
of
that
variety
interpretation,
even
if
we
did
choose
to
say
we're
talking
about
it
in
this
context
and
how
we're
defining
it.
A
So that's a good question. So, the couple of items on the agenda: one, you know, next metrics. We have our spreadsheet of things that we were talking about working on next: dependency sustainability risk, and something like dependency range, which could be a filter on libyears.
A
There's,
of
course,
the
ultimately
the
downstream
dependencies
are
are
a
thing
one.
One
thought
that
I
had,
though,
was
that
when
I
started
thinking
about
the
met
we
we
discussed
a
metric
called
dependency
sustainability
risk
and
when
we,
when
I
started
thinking
about
it,
I
thought
it
might
look
a
little
bit
like
what
we've
been
starting
to
call
metrics
model,
in
other
words,
dependency.
Sustainability
risk,
isn't
something
that
I
think
would
be
a
single
metric.
A
I think sustainability is certainly a topic that's come up in many different metrics discussions. So what I took a stab at was focusing on dependency sustainability as a particular concern, with the idea that, okay, I have some set of projects, either as a community manager or an OSPO director, where I want to understand which of my dependencies have some level of sustainability risk, or how to rank that sustainability risk.
A
And
if
I
was,
and
so
in
my
head,
where
this
comes
from
is
I
was
thinking
well.
Libyars
would
be
a
great
place
to
start,
and
I
think
upstream
codependencies
would
give
me
that
list
of
projects
that
I
need
to
get
more
information
about,
and
then
I
was
thinking
things
like
bus
factor,
elephant
factor,
contributor
activity,
how
how
long
change-
and
these
are
just
like-
I'm
throwing
darts
at
the
wall
here.
A
So, backing up: within the CHAOSS project, we've started to take a look at how OSPOs are using the metrics that we do have, and almost always they're combining some set of the metrics into a single analysis that gives them a picture of their project.
A
They're
generally
not
looking
at
the
dependencies
when
they're
trying
to
evaluate
the
sustainability
of
the
project
or
its
health
or
the
responsiveness
of
the
community
they're.
Looking
at
other
things,
although
some
of
those
things
are
represented
here,
the
dependency
sustainability
risk
metric
model
or
sort
of
a
way
that
somebody
might
operationalize
trying
to
assess
the
risk
that
they
have
around
dependencies
probably
involves
first,
they
know
what
they
are.
A
Then they know the libyears, but there are other factors that might give them a signal about the long- and short-term likelihood of the project remaining healthy. So, I don't know, the idea I'm throwing out (I won't even call it a suggestion, just an idea that we can discuss and then decide not to act on today) is that we have a metrics model around this idea of dependency sustainability.
A
Right, right, so it's... yeah.
B
Yeah, I don't think so. However, there is a scale problem. You know, if you measure a whole bunch of factors across, say... well, the average application, as reported by Synopsys or Sonatype (my apologies, I can't get it right; one of them), was 528.
B
You know, and you start combining data from all of those. I mean, if you look for the worst case, I'm sure you can find a worst case no matter what you do, right? And if you do an average, I wouldn't be surprised if the average just overwhelmed it and you wouldn't find what was important. You know, if you average the sea level, you don't discover that there's flooding in someone's basement.
A
Right. So unstated here is probably that, you know, whatever we would say is a measure of sustainability for dependencies is something we care about, and if it's 500 on average, that doesn't surprise me as much as I thought it would. It's a way of looking at, perhaps, where you have your top-end dependencies of concern along each of these metrics, and if you have a project that's in the top end for a number of them, then you have a bit of insight into where to focus your attention.
B
Usually close enough, yeah. I mean, it's tailed, so it can't be, strictly speaking, a normal distribution; you can't have negative numbers of contributors, for example, last I checked, although I have met some people who are in the running for that, you know, the "it would be better if you weren't around" sort. But I think if you took the z-scores and then averaged those, then you'd be able to answer: hey, most of the stuff I'm depending on is way worse than normal.
B
But if I average z-scores, where my z-score is how well this project is doing compared to everybody else within that ecosystem?
B
If I found out that most of my dependencies are significantly worse, that on average my dependencies are worse than average, I should worry, right? On the other hand, if my dependencies on average are way better, then I'm less worried; now I might ask which are the outliers, the ones that are worse off.
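The z-score averaging David describes (standardize each project's metric against the rest of its ecosystem, then average the scores of your own dependencies) might be sketched like this; the projects and contributor counts are hypothetical:

```python
import statistics

def z_scores(values):
    """Standardize each value against the population mean and stdev."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    return [(v - mean) / stdev for v in values]

# Hypothetical active-contributor counts for every project in an ecosystem
ecosystem = {"a": 2, "b": 5, "c": 9, "d": 40, "e": 4}
names = list(ecosystem)
scores = dict(zip(names, z_scores(list(ecosystem.values()))))

# My dependencies are a subset of the ecosystem; average their z-scores.
my_deps = ["a", "b", "e"]
avg = statistics.mean(scores[d] for d in my_deps)
if avg < 0:
    print(f"dependencies average {avg:.2f} below the ecosystem mean: worth a look")
```

A negative average says most of what you depend on scores below the ecosystem norm; a positive one says the opposite, and then the question shifts to the individual outliers.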
A
And I think, in my mind, I wasn't thinking about averaging them across all the dependencies as much as I was thinking about using this kind of a strategy, and z-scores would do a nice job of normalizing the comparisons, so that I could have a richer set of metrics that I'm using to identify projects that I should possibly be concerned about, right.
B
So I'm not saying it's trivial, because that means somebody has to do the work to gather the data and find out how the curves work, so you can identify where a particular project is in its scoring. But it's not an impossibility; it's an "I've got to automate gathering the data and calling up the routines to calculate z-scores."
D
In
terms
of
I
guess,
maybe
because
this
issue
is
front
and
center
for
me
today
there
is
a
popular
project
that
is
not
working
and
it's
one
of
those
things
with
a
single
maintainer
who
has
been
kind
of
not
really
progressing
the
project,
but
is
somewhat
responsive,
and
so
this
now
that
there's
a
huge
ecosystem
dependent
on
this
project
for
active
things
that
people
use
refer
to
in
both
corporations,
as
well
as
in
the
community.
D
There's a high level of "it's not working", and that's starting to raise red flags and everything, but there's no SLO around the project. There's a single maintainer who hasn't necessarily set any expectations around how reactive he's going to be, or how available he's going to be to accept other kinds of contributions. It's one of those things where that's just the state of the project, and so I'm...
B
Yeah, right. And the same for issue response time, by the way. For the projects I maintain, we have a policy of not closing an issue unless we have decided not to do it. It's perfectly okay to have an issue sit for five years; yeah, we might get around to it. That's okay!
B
I
I
so
measuring
response
time
when,
if
the,
if
it's
an
issue,
that
is
a
bug,
I
think
bug
response
time
is
reasonable,
but
issue
response
time.
If
that
includes
feature
requests,
goodness
gracious,
that's
an
infinite.
The
the
number
of
possible
ideas
is
infinite,
so
counting
up
this
is
basically
another
way
to
say.
Can
you
count
all
ideas?
D
I don't know. I feel like a metrics model for project sustainability, and then figuring out how to implement that in terms of dependency analysis... I think the hardest part is defining the metrics model for sustainability, and I'm open to working on that. I think the only caveat, again, is what I raised earlier: are there others that are already working on this? Because it seems like a thing that would be more pervasive, beyond just the Risk working group.
F
That's what I feel. So maybe thinking, okay, narrowing it to our focus area, the risk aspect, which can include dependency, libyears, or some other things. But I don't know; I feel sustainability is too broad. We should narrow it a little bit to bring it together. What about...
A
And so is there a metric, is there a way to even know, other than people noticing their software stops building, that it doesn't work? And is that a metric? I think, like, how do I express that in a way... are there precursors to something breaking in that way? I mean, that seems like a fairly significant, like, either mistake on the part of the person responsible, or...
D
I think it was just, for lack of a better word, Nicholas, like the project hasn't evolved, and I think eventually, if something doesn't continue to evolve, it will break down when everything around it has made it obsolete. That's my hypothesis as to what's happened, without actually looking at the code base and understanding what's not running or not working. But we kind of knew this was a possibility, because it wasn't actively being maintained. There was an owner, but the owner is just somewhat responsive and just isn't necessarily progressing it.
A
A fire today, yeah. Well, then, with that fire, you know, people were aware of it through some version of metrics, whether they're CHAOSS metrics or heuristic implementations of CHAOSS metrics. There was an awareness that there was a bus factor of one and that something could potentially go wrong. Nobody was surprised that this eventually happened, but when it happened today, it sucks.
B
But I have to admit, you know, okay: then what do you do? I mean, hopefully, maybe, if the answer is "okay, I'm gonna get involved in some of these projects", well, okay, that would be the awesome answer. I mean, one of the alternatives is "I will stop using React", because it has many dependencies, and almost certainly many of them are single-person projects, but...
B
Yeah, and this actually is kind of one of the challenges with some of our repos, which is that it's very, very hard to say "I used to depend on project blah; I now want to depend on blah-prime instead." You really have to convince each user of blah to switch, which you often can't do.
D
I
think
that's
a
sneaky
related
thing
like
in
terms
of
I
guess:
maybe
it
wouldn't
fit
in
this
metric,
but
sort
of
the
like.
So
we've
come
up
with
all
of
these
indicators.
We'll
say
we
come
up
with
all
these
indicators
around
whether
or
not
you
can
still
continue
to
rely
on
something
or
it's
becoming
increasingly
risky
to
rely
on
something,
then
the
decision
and
process
to
change
it.
A
It didn't feel like a discrete single thing that we could measure with one metric. So, summarizing some of what we've talked about, at the bottom under implementation: maybe we run the upstream code dependencies, and then for that enumerated list we calculate libyears, bus factor, elephant factor, and license coverage, z-score it, and then provide a list of those dependencies that have some z-score below some threshold.
A
Like, maybe that's the dependency sustainability risk model. So at least... it sounds like this one that is causing you pain today, Sophia, was one folks were generally aware of. I suspect there are others that folks are not generally aware of that are holding these problems, and so this consolidation of metrics potentially is helpful, but potentially is not.
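The implementation just outlined (enumerate upstream dependencies, compute a few metrics for each, z-score each metric, and flag dependencies that fall below a threshold) could be wired together roughly as follows; the dependency names, metric values, and threshold are placeholders for illustration, not CHAOSS-defined numbers:

```python
import statistics

# Hypothetical metric values per upstream dependency. Higher is better for
# every column here (libyears is negated so "more behind" scores lower).
metrics = {
    "dep-a": {"neg_libyears": -4.2, "bus_factor": 1, "elephant_factor": 1, "license_coverage": 0.4},
    "dep-b": {"neg_libyears": -0.3, "bus_factor": 6, "elephant_factor": 3, "license_coverage": 0.9},
    "dep-c": {"neg_libyears": -1.1, "bus_factor": 3, "elephant_factor": 2, "license_coverage": 0.8},
}

def z(column):
    """Z-score one metric's column across all dependencies."""
    mean, sd = statistics.mean(column), statistics.pstdev(column)
    return [(v - mean) / sd if sd else 0.0 for v in column]

names = list(metrics)
metric_names = list(metrics[names[0]])
cols = {m: z([metrics[n][m] for n in names]) for m in metric_names}

# Average each dependency's z-scores across all metrics into one score.
score = {n: statistics.mean(cols[m][i] for m in cols) for i, n in enumerate(names)}

THRESHOLD = -0.5  # placeholder cutoff for "worth a closer look"
flagged = sorted(n for n, s in score.items() if s < THRESHOLD)
print(flagged)  # → ['dep-a']
```

In a real pipeline the z-scores would be computed against a whole ecosystem rather than against your own three dependencies, which is the data-gathering work David points out is the non-trivial part.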
D
Well, with the idea of a metrics model, wouldn't it also be sort of a self-selection, in terms of knowing which things are more or less important to you? Like, there might be another consideration that they would want to include in it. So this is just an example of how you might build your own set of related metrics that are important to you in monitoring your dependencies.
A
Yeah, I would... so the way I phrased that, just as I was typing here, is: I look at libyears, bus factor, elephant factor, and license coverage as, I'd say, each sort of blunt instruments, and if you want to understand it more deeply, you're likely to want to employ metrics that are, you know, some of the other ones that were initially listed, like pull request acceptance rate, or speed of response, or number of contributors; metrics that I would say are less blunt.
A
Is
that
and
do
we
want
to
enumerate
those,
or
do
you
want
to
enumerate
the
potential
ones
in
something
like
this?
Are
these?
Do
we
want
to
call
these
core,
or
do
we
want
to
make
them
part
of
a
menu
and
offer
no
further
advice
other
than
here's
the
15
metrics
that
you
might
consider
using
and
offer
no
further
guidance
beyond
that.
A
Well, I mean, I think we can. I don't think every CHAOSS metric is potentially enhancing or increasing the depth or granularity of understanding dependency sustainability risk; I think there's some finite collection.
B
...a diatribe of a thousand things you could do: please don't. If, if I'm... yeah.
D
Look at what David said earlier about bug versus issue response, right? Because I agree with David: I think if we do any sort of issues, that could be untenable, but "this is a P0 bug, it's not working" gives a better idea of response rate. Everyone has their own tolerance for downtime, and if it takes them an average of two weeks to resolve things, that might not be usable for your pace, depending on what you're building.
B
So
here's
the
challenge-
I
actually
do
think
that
bug
response
time
is
a
valid
and
useful
measure,
and
maybe
that's
in
fact.
I
would
accept
that
as
a
useful
metric
to
be
defined
here.
The
challenge,
then,
is
actually
going
into
and
measuring
it
because
most
folks
use
github
get
lab
or-
and
now
I
I
think
at
least,
if
you
use
bugzilla,
I
think
that
that
difference
is
is
there
by
default,
but
at
least
in
github
by
default.
There's
no
distinction.
A
Yeah, for sure. There's also potentially filtering just on what's described: like, if there's a stack trace in the issue, there's a pretty good chance that's a bug.
B
By the way, okay, how's this: next time, unless somebody has something else to propose, why don't we continue the discussion focusing on, basically, the bug response time? That I think I could get behind.
F
But we think it's...
B
If somebody really desperately wants it, they can re-raise it, and, more importantly, they can look at our notes on our discussion of its pros and cons, which I think led to us saying no, we're not going to, because the cons outweigh the pros. But, you know what, if somebody comes back and either says "you're wrong: look at this pro that you didn't consider, that overwhelms them", or "this con you can overcome", or "hey..."
B
If
you
refine
it
doing
x,
then
they're
pro,
then
their
big
problems
go
away,
which
is
great,
but
I
I
think
for
now
we
need
to
move
on
to
some
other
metric.
Somebody
has
a
better
idea,
that's
great,
but
I
I
do
think
that
bug
response
time
is
worth
doing,
because
that
does,
I
think
legitimately
describe.