From YouTube: CHAOSS Common Working Group July 20, 2023
Description
Meeting minutes are here: https://docs.google.com/document/d/1xsii5tfmmDwWpuhrFcBJMeYeT3RipJdiCdHrbi0NalQ/edit#heading=h.n3rh3l1y6dv7
Meeting summary is here: https://chaoss.discourse.group/t/common-working-group-meeting-summary-july-20-2023/215
A
There, I hit record. Okay, welcome to the July 20th Common metrics working group meeting. Thank you all for coming. I think we can go ahead and begin our discussion on how metrics and models are selected by the Common working group. This is related to the working group mission, and with that I'll throw it over to Matt.
B
Okay, so just real briefly: the Common working group is, as we've kind of talked about, the working group that's responsible for moving forward metrics, but probably more so metrics models, that maybe don't fit clearly within a working group, or that originate from one of our context working groups, where we're not necessarily asking those context working groups to put things in a template form and author the metric or metric model themselves. That's part of the process, so I say that's all good, but I'm not sure that it is good. Historically, working groups would just develop their own metrics, which was great, and like I was saying before we started the recording here, many working groups still do that, which is completely fine as part of that release.
B
A couple of metrics needed to be developed, and they really came from that working group, so that still does happen. To me, the biggest challenges come from the context working groups. Those context working groups are the OSPO working group, the corporate OSPO working group, the university OSPO group, and the scientific software group.
B
One of the things we ask these context groups is to not worry too much about the inner workings of the CHAOSS project, so that they can just speak freely about things they would like to see within those context working groups. I still like that premise. I think sometimes, if we get down into the details, we'll lose some of the people who attend those calls, and it'll just become less interesting for some people.
B
And so the idea was that, as those context groups talk openly about metrics models, here in the Common working group we would help develop those; they would kind of roll into this group. I'm not terribly concerned about how we identify the ones that should roll into this group; that seems to be okay. We seem to have enough at the moment; a lot of people attend the different context working groups and also attend here.
B
That seems to be pretty good. My concern is that, out of those context working groups, there could be a situation where those context groups just speak openly about metrics models and it becomes like 9 or 12 or 15 metrics models, and they all just roll into here, and we're also doing other things as well. So, anyway.
A
So the concern is that they may include too many metrics in their models, and maybe in those situations it's the case where Common can give them feedback that says: hey, instead of eight metrics in a model, maybe you could pare that down to three or four.
A
I would see the Common working group as being a kind of lever for validation and rigor, a little bit, right? Just to make sure we're keeping it simple and not overcomplicating the models.
B
I agree. This came up just this week, because there's a metric model that is ready to go, and it probably has eight metrics, and I said exactly what you said: we're trying to keep these small, just so they're these incremental first steps. And the response was, and I thought it was a good response for this particular metric model, that a lot of these metrics, say around licensing, are just derivations of one metric, or some, like the activity ones, are derivations of things around issues. So that's the case, at least for that model.
A
Maybe
the
so
maybe
we
have.
Maybe
we
have
that
discussion
around
each
metric
or
each
model
that
we
that
we
look
at
is
just
I'll
get
there
is
there
is
this?
Is
this
trying
to
be
too
much?
Is
this
relevant?
What
are
the
questions
like?
Does
someone
actually
need
this
metric,
or
are
we
or
are
we
kind
of
creating
it
just
because
the
we
have
the
ability
to
measure
it?
Yes,.
A
I think when we initially talked about the liaison idea, that was actually part of why we were talking about it, right? The liaison would be the person who connects from the working group to the context group, the point of contact who brings the metric into the working group and maybe provides feedback back to...
B
To this group, and then to the respective context group that they would be the liaison for. Oh, sorry, go ahead. I was just gonna say, there seem to be a number of people who have an interest in finding different ways to participate and contribute to the project. Maybe if we did an open call for this, we might get some people with an interest.
A
Almost an action-item or task-based role: you could be the liaison for this metric or this model, and then we just assign it to whoever wants to be the point of contact for it. Yeah, okay.
E
Good question. Sorry, I was just wondering: I mentioned the template, and it's clear the metrics are templated, but I'm wondering, is there documentation elsewhere about the... sorry, the models, I meant to say the models, I do that constantly. Just because when I look at them, there are some models where there's a great deal of narrative, like, we investigated this in this way.
E
This is how we validated it. And then there are others where there's a tiny paragraph of, we think this is what this model means, and then, you know, the list of metrics. And I guess, especially if you're talking about trying to get more people involved, it seems like you might need more guidance for people as to what a good write-up looks like.
E
I also wondered a little bit if, when you're not throwing in eight metrics, when you're trying to keep it small, that might also give you more room to talk a little bit more about why those metrics. Or in cases like the one we were talking about, where you're trying to figure out contributor attribution and how hard that is: you might have three metrics because you're saying, this is what you're trying to get to, it's really hard, and here are three possible avenues.
B
That's the end of that model, full stop. And then there are other models that have that, and then they have a whole section below which, to your point, is more about implementation details. For you, Jen, is one of those more useful, one that you look at and think, I can understand this?
E
That's hard to say. I think the question I keep coming back to for these groups in general is the audience: who we're trying to talk to in different places. Something about the models sort of presents itself potentially as, here's a solution-y thing, and so it seems like it might make sense to have more information there.
E
But obviously you can also go read the metrics, so I'm maybe not sure whether one is totally right. I do appreciate the idea of validation and where that comes in, especially if you're just generating lots and lots of metrics. So I guess I don't feel like there's one right answer, just that the inconsistency sort of makes you wonder about the right way to do it.
A
At one point we were also creating the notebooks for metrics as well, which we moved away from, and now we're actually doing that again. However, the process has kind of changed. I think the way we're doing it now is more generalized: we're trying to create a useful tool for people, whereas the previous method was maybe more context-specific.
B
I guess I 100% agree, and based on this conversation, my inclination for the models is to actually remove the implementation and say: here's the model. When you talk about audience, Jen, it's like: you're an OSPO manager at a university, or you're part of a scientific software project, whatever it might be, but this is meant to prompt thinking on how to approach a particular problem.
B
And then the ways that it gets implemented could be found, to your point, in the metrics themselves, because we do talk a little bit about how to do that in the metrics themselves. And gosh, there are so many different tools and approaches that people take. This is the starter; it helps you move out of the gate. That's my inclination, yeah.
A
I would agree with that. I think the model and the tooling should be separate, and maybe we link to the tooling that we created in the model.
B
One last comment on the implementation, at least with the metrics. When we were doing metrics implementations, we found the metric itself has some stable narrative to it: there's a story about the question it was approaching, or why you care about this metric, and the same with metric models. But we would always run into these issues where maintaining the implementation part of the metric was pretty impossible. It just moved a little bit; it would change over the years, and we couldn't really keep up with managing that part of the metric. So that's why we removed it too, I think.
B
Yeah, you dreamed it, you know, you didn't... we did actually say that, but.
A
Are we still having a public comment period for metrics when we release them? Is that part of the release process? Currently not.
D
I think our rationale there was that we would take comments anytime, so we put them out there, and this discussion during creation is also public, so there was enough time for folks to really weigh in. And after the fact, if somebody had a comment on a metric we released three years ago, we would still resurface that; we would entertain their comments. It's not like, oh nope, sorry, you missed the cutoff.
A
I was just contemplating whether, for example, the Common working group could be a mechanism for review of models before they're released, or something along those lines. And then that got me thinking: if we're doing that for models, maybe we should contemplate doing something like that for the metrics too. So what would that look like for metrics?
A
Maybe the review would be to push it back to the context group that proposed it, for them to review it, or, in the case of...
B
Like, we have a couple of really good ideas. One is to discuss...
D
I hate to suggest yet another group, but I feel like implementation is the thing that our users struggle with the most. Yes.
D
So perhaps we need a group focused on education, just having a central place for those kinds of discussions to happen, so that we're not just forgetting it altogether.
D
Rather than being like, sorry, you're on your own, we'd have a group that's focused on keeping up with tools and helping facilitate that piece of it. As an example, I got several questions about how to implement the data and implement the metrics, not so much what metrics we should be looking at; it was, how do I?
C
Yeah, Elizabeth, I think we had this discussion some time ago, and we discussed that it's not really within the scope of CHAOSS to implement the metrics, because each community will need some certain degree of structural organization that we don't capture in our definition of metrics.
C
We have a high-level overview of how they are, which is quite good, but on the underlying data structure, even when we were discussing the communication metrics with Sean, he told us that a while ago, and I think other tools don't have that structure to capture. If you really want an implementation, trust me, we will need to redefine a lot of things and try making kinds of templates and kinds of middlewares, which is really outside our scope; that would really be more software engineering kind of work.
D
I think that's fair. It just feels like a shame to leave such a gap unattended.
D
You know, because I think our ultimate goal is to improve community health, to help people improve community health, and so just leaving that as a gap and not addressing it at all... I totally understand what you're saying, Armstrong, and I do agree. It would be a huge evolutionary step, I think, for CHAOSS to take that piece on to the level it needs to be taken on, but I don't know.
A
I agree that in the past we have taken that stance, but I do think we are starting to transition into a more active role on the implementation side. I think we're moving to a place where we want to maybe start to be able to predict outcomes based on these metrics. We want to see how these metrics are working for organizations, for people, in context.
A
Well, I agree that in the past we have taken a hands-off approach to that. At the place we're at in the project now, I think we are interested in taking it to the next step and really looking and seeing what these metrics look like in the real world.
A
To your point, I think John is probably the first point of contact for this going forward, but I would also say that the education platform we were talking about the other day in DEI is a good place to maybe begin outreach around this as well.
B
I think this is all fair. I do know that one of the things that seems related to this, which Dawn has talked about in her role as Director of Data Science, is first just trying to answer the questions for people as to when Augur is the right tool to use and when GrimoireLab is the right tool to use, and why. To me, that seems like a really good entry into this.
C
Okay, for that reason I think I'm okay with the approach that Kevin also mentioned. I just want to add that in that case we'd need to add a layer of responsibility to Dawn, because data science alone is not addressing this kind of problem; they need data engineering. These are two different concepts.
A
Where we talk about reviewing new metrics and new metrics models, adding that task to Common, I think that would be a good test for Common. I think that's directly related to the comments Jen made about consistency around the models, yeah, and even the metrics, and then, taking it a step further, the rigor and validity around those as well.
A
So
if
we
were,
if
we
were
to
take
on
that
task,
I
think
we
do
have
to.
We
have
to
have
a
conversation
about
what
those
what
that
review
would
look
like
the
and
one
of
the
one
of
the
first
questions
I
would
have
is,
is
whether
we
even
need
this
metric.
A
All
right
is
the
Spectrum
going
to
be
useful
for
someone,
so
I
think
that's
I,
think
that
would
be.
That
might
be
even
fair
to
take
back
to
the
the
working
groups,
Dei
and
risk
working
groups
and
even
the
context
groups
right
are
we
just?
Are
we
just
creating
this
this
metric
because
we
have
the
ability
to
measure
it
or
are
we
defining
this
metric
because
someone
needs
it
and
it
will
tell
them
something
useful
about
their
project
I?
Think
in
the
in
the
past.
A
Sorry, well, the agenda's pretty full, but a couple of these are issues for people who aren't here, and Dawn is not here either and a couple of them are hers. We could look at the change request closure... I'm sorry, self-merge rate. So self-merge rate was the metric that...
A
Okay, so I'm not sure when he'll be back, but I think he has basically finished his initial run-through of it. I don't believe we've looked at this as a group yet.
A
Should we take a...
A
Whoever has the highlighted question: thinking back to when we were discussing this, this metric is two separate questions, yes. So is it looking at them separately, or together, or something?
A
A good point. For a lot of the metrics we define, the number one tool providing the metric is GitHub, or GitLab, yeah, and we don't necessarily include those.
B
A review, like, you know, I'm talking about an assigned review, but that's still only going to be an emoji or a...
A
Text-based comments. It could be emojis... are emojis really...?
A
I'm
telling
like
like
a
comment
that
says,
let's
get
this
merged
is
that
is
that
a
review
is.
A
You could. If the question is how many contributions to the project were being merged by the original contributor, with or without a review, then using this interface, in some projects we would be able to do a count, right? So we would be able to identify the...
D
So, quick question: how do we handle the auto-merge functionality from GitHub? Because people have reached those requirements, it's not a review, but it's not necessarily a self-merge either, because they would have to have reached some requirements. Do you know what I'm talking about?
D
I think it is a...
A
Assigned for a certain type of, a level of...
A
Right, because sometimes you'll get a review that'll say, you know, "let's get this merged," but then they won't merge it, so it's like, okay, well, I guess I'll merge it myself then, since they've told me to, right? So the key point here is about the lack of review.
A
I'm not even sure it needs to be two metrics. Maybe, going back to what we were talking about earlier: do we need a self-merge metric and a merge-without-review metric? Is self-merge a useful metric if we take out the review bit, or is it only useful when review is included? If it's only useful when review is included, then it's probably just one metric we're looking at: merge without review, yeah.
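[The self-merge versus merge-without-review distinction discussed above can be sketched in a few lines. This is a hypothetical illustration added for clarity, not something from the meeting: the data shape and field names are assumptions, and it deliberately counts only formal reviews, not comments or emoji reactions.]

```python
# Hypothetical sketch: classify merged pull requests as "self-merged"
# and/or "merged without review", given simple PR records that already
# contain the author, the merger, and a list of formal reviews.

def classify(pr):
    """Return (self_merged, merged_without_review) for one merged PR.

    `pr` is assumed to look like:
      {"author": "alice", "merged_by": "alice", "reviews": []}
    """
    self_merged = pr["merged_by"] == pr["author"]
    # Only a review by someone other than the author counts as a review.
    reviewed = any(r["reviewer"] != pr["author"] for r in pr["reviews"])
    return self_merged, not reviewed

prs = [
    {"author": "alice", "merged_by": "alice", "reviews": []},
    {"author": "bob", "merged_by": "carol",
     "reviews": [{"reviewer": "carol"}]},
    {"author": "dana", "merged_by": "dana",
     "reviews": [{"reviewer": "erin"}]},  # self-merged, but reviewed
]

self_merges = sum(1 for pr in prs if classify(pr)[0])
no_review_merges = sum(1 for pr in prs if classify(pr)[1])
print(self_merges, no_review_merges)  # 2 self-merges, 1 without review
```

As the example shows, the two counts can diverge (a self-merge can still have a review), which is exactly why the group debates whether one combined "merge without review" metric is enough.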
A
Yeah, would it be a filter for that? Or maybe not a filter, but... what's the other term?
A
Well, thank you all for coming; we'll see you again in two weeks.
A
Will Dawn be back from vacation by then? I believe so, yes. Okay, and I'm assuming Gray, maybe, but I saw the dates for Reyes and I think he may be back as well. I think he had made a comment in Slack, so I will reach out to him and let him know we reviewed it, and hopefully he can join us next meeting to discuss it. All right. Awesome. Thank you.