From YouTube: CHAOSS Risk Working Group 11/11/21
B
The trick was to separate the wheat from the chaff, as it were. I don't have a crisis with saying this: measuring this requires some cooperation from the project in identifying bugs. We could just start with that, and acknowledge that some projects don't identify which ones are bugs and which ones aren't.
C
That might not necessarily be a hard-coded fact. It's just based on a median response rate, so not actually an SLO, and a median response time, which would have been an SLO had you bought this from a vendor. In that case there does need to be tiers of responsiveness, where minor bugs versus critical things that are taking down the entire project have different levels of expectation.
B
Okay, right. Do you want to try to capture that, or just get some insight into bugs as a whole? The challenge, of course, is that different people rank things differently. As soon as you switch to multiple rankings, it's hard to have any kind of consistency.
A
And some projects are explicit about doing that or not, right? Like whether they use the word bug or defect. I'm just adding to what David wrote: labels can be something of a guide, but David's right. If the project isn't following any kind of system consistently, then it won't matter; those labels won't be complete enough.
C
Well, I'm going to go full circle here to my first Risk meeting. Kate Stewart was working on a labeling question, looking at the consistency of the labels and terminology that were being used to classify things. I'm not quite sure where that ended, because I remember she did some work and shared what she had.
D
Yes, so on that, I was thinking this is a good place for it. We have been working on that data for quite a long time, labeling and classifying it. Why not turn it into a research paper? We have the data and we have been working on it; framing a question and producing a paper would be a good output from a research working group like the Risk working group.
C
Yeah, because I think this is now providing us a hard use case for what we might do with that information. Looking at thousands of labels en masse, you can ask: how were people actually labeling things? Were we able to identify what definitively was and wasn't a bug based on how they used labels?
A
And so, if I can bring this up, I found that spreadsheet pretty fast.
A
This is across a very large collection of repositories, so in that sense it's reasonably complete. If we're looking for defect labels, I'll pick a color.
A
Yes, absolutely. Because we were just looking at classification, we weren't looking at combinations of classifications. We have data for every label applied; those are what the labels were at the point of our data collection. But we also have event streams, so we can analyze the addition and removal of labels and add that to this, like instances of.
C
Sorry, now we've diverged a bit, but to me, what this is saying is that labels are potentially being used as a way to identify whether or not something is a bug. There could be more specificity in how the label is implemented, which could give you a sense of the severity of the bug or what the bug is actually like.
B
This is only used on GitHub, right, not GitLab? GitLab uses labels as well. I know, but what is this spreadsheet from?
A
It's just from CNCF projects, which are all on GitHub.
C
Do we have the methodology of the pull documented anywhere? Because if we are going to refer to it, I want to make sure that we understand what it is.
B
Yeah, and notice the following labels, right? Yeah, that's what I want to do. I know bug is there, so we don't have to go further than type: bug.
A
I mean, just scanning it, I guess.
B
Yeah, you know, don't include those.
D
Bugzilla: is it a bug or a Bugzilla?
B
Yeah, Bugzilla is a widely used tool for issue tracking if you're not using GitHub's or GitLab's tracker, or some other tracker that's built into your forge. Bugzilla is probably the most common of the standalone issue trackers.
A
So I would say labels like debug and bugzilla would probably be excluded, because Bugzilla, I believe, it's just the language of what the tool is called, and it of course predates GitHub. The language of the platform has bug in it, but I don't think they meant that everything that went in there was a bug when they made it.
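A minimal sketch of the kind of label filter being discussed: count labels that mention bug or defect, but exclude names like debug and bugzilla. The label names and regex terms here are illustrative assumptions, not the group's actual methodology.

```python
import re

# Terms that contain "bug" but should NOT count as defect labels.
EXCLUDE = re.compile(r"debug|bugzilla", re.IGNORECASE)
# Terms that suggest a label marks a defect.
INCLUDE = re.compile(r"bug|defect", re.IGNORECASE)

def is_defect_label(label: str) -> bool:
    """Heuristic: a label marks a defect if it mentions bug/defect
    and is not a false positive like 'debug' or 'bugzilla'."""
    return bool(INCLUDE.search(label)) and not EXCLUDE.search(label)

# Hypothetical label names for illustration.
labels = ["type: bug", "kind/bug", "defect", "debug", "bugzilla", "enhancement"]
defect_labels = [name for name in labels if is_defect_label(name)]
# defect_labels keeps "type: bug", "kind/bug", "defect"
```

A real pass over the spreadsheet would likely need more exclusion terms; this only shows the shape of the heuristic.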
B
Mozilla continues to use Bugzilla, and I'm pretty sure they use it for everything.
A
Okay, other words that you want to search for?
B
Now, the weird thing is that bug fix doesn't appear by itself.
C
Yeah, but I think the challenge, and we're facing this in some of our other projects, is ingesting the API stream. It doesn't always pull in labels, so depending on how you're pulling the data, you have to actively go see what's coming up and how it's labeled. You can see that the issues are being created.
A
Yeah, no, and I can actually add a link for that. I mean, if you're trying to create it, or if the lab is trying to create it, then the endpoints exist.
C
Well, yeah, that's also why I was interested in seeing the overlap of labels. Not all of these are singular labels; most things that I've seen labeled have had, I don't know, anywhere between two to five labels.
B
Sometimes with difficulty, but in many cases, yes. Not all bugs are the same, and I just wrote some text, so I want to push on it. I totally agree, but I think the metric is still useful without that. So are we good with not trying to make it finely grained? And I'm going to make an argument: if you don't try to do that, then it's a little less vulnerable to gaming.
A
Yeah, well, I can make it. I just haven't got everything set up to pull data for this meeting, and Sophia, you have to go, but for next time I can put down an item where I just have an update of that spreadsheet that shows the co-occurrence matrix for issues and pull requests that use certain labels, so you can actually dig a little deeper.
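A rough sketch of what such a label co-occurrence count could look like, assuming each issue's labels are available as a simple list. The data shape and label names are hypothetical, not the actual spreadsheet format.

```python
from collections import Counter
from itertools import combinations

def label_cooccurrence(issues):
    """Count how often each pair of labels appears together on one issue.
    `issues` is a list of label lists, e.g. [["bug", "critical"], ...].
    Pairs are stored in sorted order so (a, b) and (b, a) merge."""
    pairs = Counter()
    for labels in issues:
        for a, b in combinations(sorted(set(labels)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical issue data for illustration.
issues = [
    ["bug", "critical"],
    ["bug", "docs"],
    ["bug", "critical", "docs"],
]
matrix = label_cooccurrence(issues)
# matrix[("bug", "critical")] counts issues carrying both labels
```

The same counter could be pivoted into a square matrix for display; the flat pair counts are enough to see which labels travel together.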
C
I was also thinking about what you were mentioning, David: as part of this we're basically also starting to encourage or recommend behavior, and I'm kind of curious about that. I don't know, I think that's going a bit beyond the CHAOSS prerogative, but our metrics are getting specific enough that they're implementable if people are following best practices. I think labeling your issues as bugs is probably assumed to be a good practice, but is it something that's actively said?
B
So I'm okay with this group saying here's what we want to measure, and it sure would be easier if we agreed on something, and then pitching it to someone like the OpenSSF Best Practices working group, which does find it very interesting to identify best practices.
B
But the thing is what I think this group should do. I'm good with folks staying in the lanes where they're strongest, that makes sense, but basically, CHAOSS doesn't need to.
B
You know, create a spec or try to promulgate it. It needs to identify the problem, maybe make suggestions and explain why if it can, and then contact somebody else, like the OpenSSF Best Practices working group, I can do that, and say: hey, we've got a problem; if there were a best practice that people started following, it would be solved or at least made better.
B
You know, I was thinking about maybe using median instead of average. Maybe that helps a little bit.
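The median-versus-average point can be illustrated with a toy example: one long-open bug drags the mean way up while barely moving the median. The fix times below are made up.

```python
from statistics import mean, median

# Hypothetical time-to-fix values in days; one long-open bug is an outlier.
fix_times = [1, 2, 2, 3, 4, 365]

avg = mean(fix_times)    # pulled far up by the single outlier
med = median(fix_times)  # middle of the sorted values, barely affected
# avg is over 60 days, while med is 2.5 days
```

This is why the median is the more gaming-resistant choice for a responsiveness metric: a few pathological issues cannot dominate it.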
B
Okay, as long as more than half of the bugs are fixed, then it's okay.
A
We could, yeah. I think we've accomplished enough, so I'll stop the share and I'll stop the recording.