From YouTube: CHAOSS Risk Working Group 1/27/22
Description
Links to minutes from this meeting are on https://chaoss.community/participate.
A
B
All these metrics are more formal, so I would say call it "defect", because I think there are formal definitions of defect, and I think "bug" is just an informal name for the same thing.
C
B
A
D
Oh, I'm still - no, I'm still here. We probably call them - gosh, we probably call them bugs. When I was checking, for reference, what we have in the metrics tracking sheet, which has "bugs" in at least three fields, and then you called my name. I didn't get a chance to search for "defects", but - and this may be the-
D
While we're at it at the beginning - defect versus bug versus issue.
A
B
Requests - some are defects. Let's see here: IEEE 1044-2009, IEEE Standard Classification for Software Anomalies, and immediately I find the word "defects".
A
So
the
thing
I
want
to
avoid
david
is
like
you
and
I
are
both
software
engineering
nerds
and
we
know
the
quantitative
meaning
of
defect
and
ieee
rules
like
I
teach
software
engineering
and
that's
the
word
I
use,
but
I
think
colloquially
the
word
in
practice.
People
who
are
not
as
big
of
software
engineering
nerds
as
we
are
pro
it
seems
like
they
might
use,
bug
and
recognize
that
term
more,
which
is
why
I
kind
of
asked
dwayne
what
he
thought
and
I
don't
know
I
would
guess,
kate.
You
are
also
yeah
and
use
defect.
A
C
E
B
Right, but be careful: "issue" is not a synonym, but "defect" is a synonym. Well, at least I think of them as synonyms, whether or not other people think of them as synonyms. But I would prefer "defect" just because, as soon as you enter the world of formal stuff, and you start-
F
B
I don't think so, and I think GitHub is absolutely responsible for making those two words different. Most issues aren't defects; most issues are feature requests, and some of them are customer support requests.
F
A
I added that "undocumented feature" is a synonym for "defect", just in a joking manner, here.
B
A
That is - thanks for pointing that out, because that was just a failure on my part. I think the link is probably the same, but just in case it isn't, I will update it.
C
I don't want to distract the writers, but I think Dwayne has raised a good point in the chat: we should be specific about software defects versus other issues that could exhibit a defect-like end case, whether or not it's rooted in the software itself versus how you set it up, or the processes and integrations you set up around it.
A
No, I mean, I don't think so. Like, when I get deployment defect notices regarding Augur, generally they're my fault, but there are many other cases where people just don't actually read the instructions, or they miss a line and don't install a library. So I think that's a useful distinction.
C
A
I don't know; perhaps it's just that, I mean, "correctly", I suppose, felt more weaselly, and "as specified" sounded less weaselly.
A
C
Yeah, we'll leave it for now. Okay, we'll review it to make sure that there still don't seem to be any issues or conflicts - something that we haven't standardized yet in this approach. I guess, for those that are in the weekly calls: we've been trying to come up with a general statement to encourage people to notice when the data collection required to generate the metric could potentially be putting them into conflict with legal or regulatory policies.
C
So we have a general disclaimer now. Given that the data required to collect a lot of these things - sorry, the data required to produce these metrics - could involve generating a lot of PII and additional data about people, in which case you are in sort of GDPR land.
C
So we have a general disclaimer now that we're sticking into metrics, and we're working on a central piece of documentation that will provide a little bit more suggested guidance without being overly prescriptive, because clearly we are not lawyers. We are in varying different types of roles; I don't think we have any lawyers in the CHAOSS community, actually, but-
C
Yeah, and we're actually referencing a couple of LF docs that have come up already, in terms of how to deal with export controls or GDPR within the contract with your community. So we're, again, trying to generate a list of available resources and just encouraging folks to have responsible and ethical metrics practices.
B
Okay, I have proposed in the description just this more specific metric and some notes, and we can argue about it.
A
Is this right up here, David - this description, definition?
B
Right, so I'm proposing that when we say "what's the defect resolution time" - this is basically, I mean, for our new folks, this is kind of where the rubber meets the road; the challenge for this group is narrowing down exactly what you mean by some metric - I'm proposing that it be the time between the formal report of a defect to the project-
B
The report of a defect to the project using the project's defect-reporting mechanism, okay, and the time when the project accepts or merges something that repairs that defect and makes it available to the public. You'll notice I'm carefully not defining the end time as when the software is released.
B
There
are
some
projects
which,
don't
you
know,
you
know
their
releases
may
only
happen
say
every
other
year
and-
and
I
think
that
would
be
way
too
coarse-grained,
so
once
the
emerge
of
a
defect
has
been
accepted
and
merged
in.
I
think
we
should
accept
that
as
a
perfectly
you
know,
as
the
end
time.
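(A minimal sketch of the metric as proposed here, in Python, on hypothetical data; the record layout and field names are assumptions, not CHAOSS tooling. It measures the time from the formal defect report to the merge of the repair, deliberately ignoring release dates.)

```python
from datetime import datetime
from statistics import median

# Hypothetical defect records: (reported_at, fix_merged_at).
# fix_merged_at is None while no repair has been accepted and merged.
defects = [
    (datetime(2022, 1, 3), datetime(2022, 1, 10)),
    (datetime(2022, 1, 5), datetime(2022, 1, 6)),
    (datetime(2022, 1, 7), None),  # still open: no resolution time yet
]

def resolution_times(records):
    """Time from the formal defect report to the merge that repairs it.

    The end point is the merge, not the next release, which for some
    projects may lag the fix by a year or more.
    """
    return [merged - reported for reported, merged in records if merged is not None]

times = resolution_times(defects)
print("median resolution:", median(t.total_seconds() for t in times) / 86400, "days")
```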
A
B
My goodness - maybe "the public" is the wrong term; "its users", if we want to allow for proprietary folks.
D
B
We want it to be the fix time, not the release time, because that, I think, is way too coarse-grained. And really, for most users, once it's fixed and you know it's going to be in the next version - for most defects, it's okay; you can live with it for a little while, and if you're unhappy you can whine at the project to hurry up their release.
B
A
B
I'm familiar, if they have an issue tracker at all. If they don't, that's a different problem; then you really want to close off those things, because otherwise it gets completely overwhelming.
D
I don't, but I know it's necessary. As written, it's to the time that the project merges something that repairs the defect - how do we know that the merge has repaired the defect if the issue hasn't been closed?
A
D
The software, right. And in cases, at least in GitHub, where they've tagged it and it automatically closes when the merge lands - those are the same time. But if they haven't, there's going to be some amount of time that elapses between a merge that fixes the issue and the verification of the issue actually being closed. So I'd say that we actually are pinned to "the issue is closed". That's the only predictable flag that we have that says it's been fixed.
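(A rough sketch in Python of how a tool might distinguish the two cases just described, using the GitHub REST issue-timeline API; OWNER, REPO, ISSUE, and TOKEN are hypothetical placeholders. A "closed" event that carries a commit_id was closed automatically by a merge, e.g. via a "fixes #N" keyword, so close time and fix time coincide; a "closed" event without one was closed by hand, possibly long after the fixing merge.)

```python
import requests

OWNER, REPO, ISSUE = "example-org", "example-repo", 42  # hypothetical
TOKEN = "ghp_..."  # hypothetical personal access token

resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/issues/{ISSUE}/timeline",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {TOKEN}",
    },
)
resp.raise_for_status()

for event in resp.json():
    if event.get("event") == "closed":
        if event.get("commit_id"):
            # Auto-closed by a commit/merged PR: merge time == close time.
            print("closed by commit", event["commit_id"], "at", event["created_at"])
        else:
            # Closed manually: the fixing merge (if any) may predate this close.
            print("closed manually at", event["created_at"])
```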
B
Actually, that's not true, because in some systems - yeah, this is the joy of trying to nail down these definitions - you can close it later, and add later, "hey, this was fixed by...". I've seen that, and in that case a tool can look at that and figure out: oh, it should have been closed last year, and here's the evidence for it.
A
F
A
B
Right. In fact, I think that kind of leprous, nasty approach is becoming more and more common. You know: "hey, you reported it last year; I don't want to hear about it anymore." You know, only new things are true, and something that's a year old is obviously not true.
A
What
so
does
does
so
I
mean
this.
This
raises
a
question,
though,
if
because
we're
going
to
have
to
develop
a
metric
that
we
can
actually
measure
right
if
we're
hard
about
the
heuristic,
where
the
issue
has
to
be
tied
to
some
code
change
to
to
close
the
defect,
we're
going
to
have
defects
that
are
never
closed,
even
if
the
issue's
closed
and
on
the
other
hand,
so
that's
I'm
looking
at
the
rock
curve
here.
That
would
be
that'd,
be
a
false
negative.
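(A toy illustration of that false-negative concern, in Python, on hypothetical records: if the metric only counts defects whose closure is tied to a code change, defects that were fixed but closed by hand never register as resolved.)

```python
# Hypothetical closed defect issues; linked_merge is the merge that closed
# them, or None if the issue was closed without a linked code change.
closed_defects = [
    {"id": 1, "linked_merge": "abc123"},  # counted as resolved by the metric
    {"id": 2, "linked_merge": None},      # fixed, but invisible: false negative
]

with_evidence = [d for d in closed_defects if d["linked_merge"]]
without = [d for d in closed_defects if not d["linked_merge"]]
print(f"{len(with_evidence)} resolved with evidence; {len(without)} potential false negatives")
```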
C
Well,
I
mean
the
the
side
argument
here:
is
this
kind
of
up
to
the
implementer
how
they
designate
these
things
too,
because
we're
also
going
to
put
in
the
description
of
implementation.
That
really
depends
on
what
you're
doing
your
project
and
the
conventions
that
you've
outlined
and
within
that,
then
you
would
design
your
measurement
to
correspond
to
whatever
policies
or
automated
actions
are
being
taken.
C
B
Yeah, you know what, Sean - I agree with your concern. I think it would be too complicated to try to put all that within this metric. I think that suggests the need for a different metric, something like "abandoned bugs" or "abandoned defects", or, you know.
A
B
A
B
A
B
Well, in fact, you know: "hey, your process for compiling the code is a bug; it is wrong."
A
B
Yeah, or "your compiler flags don't work on my CPU" - "oh, but that's a supported CPU; we screwed up." Okay, now, to be fair, I would accept that build scripts are code too, so maybe you could just change that to say it's all code anyway. So-
B
You know, I would say "code, including build instructions".
B
Get the badge metric renamed.
B
E
Work - I can take care of that.
E
Yeah, if the- what is the correct label? Has that been mentioned in the-
E
A
Okay, all right. So thank you for taking care of that, Vinod. I think when we meet again in two weeks, we can finish this metric and incorporate some - you know, if we want to specify example labels or whatever. I do have a couple of large spreadsheets, in addition to the old ones, that apparently Google won't open right now, so I'll make those available.
A
Once
I
figure
out
how
to
get
google
to
open
a
very
large
spreadsheet,
I'm
really
surprised
it
wouldn't
honestly
because
it
opened
the
last
one.
So
it's
probably
a
user
error
all
right.
Well,
thanks.
Everybody.