From YouTube: CHAOSS Metrics Models Working Group October 11-12, 2022
Description
Links to minutes from this meeting are on https://chaoss.community/participate.
A: Recording, and I will turn on the live transcript, so there we go. So welcome to the October 11th, or rather October 12th, metrics model meeting. I have a list of things that I would like to present today, and I also put up any other updates that people would like to add to the agenda.
A: Okay, so just a few things that I was doing prior to the meeting today. I guess I can close that. I've been working just a little bit on the repository. So this was... do you know this document, Yahui? This is the ecosystem document, you know what I mean, where you have productivity, software and social; robustness, software and social.
A: Okay, so I've been updating this a little bit to coincide with the metrics models that we have either ready or in progress. Because I think when you had first put that document together, a lot of the models you put in there were just early models, or models that you were thinking of in your mind. So I just took the time to, for example: Community Activity, Project Engagement, Project Awareness, Funding.
A: Thank you. So June's not here, but I did want to talk just a little bit about this. This is the safety definition, so this metric... these were comments. June had put this forward in July, earlier this summer, and then Yahui, you had a variety of comments, and I think June has been attending to those comments.
A: One of the challenges that we kind of have is that this metric... I'm sorry, this model is a little bit older than how we're defining models now, just in the sense that it has stage one, stage two, stage three, stage four. It's a little bit more robust than...
A: Thoughts on how they want to work on this? Hi, June, we're talking about... how are you, thanks for coming.
A: We're talking about the safety definition. So I was just saying that, you know, I really like the content that's in here and I don't want to lose any of it. But I think a few times, for metrics in the model, like psychological safety, we have a fairly extensive overview of psychological safety, and in a lot of our models we don't have this level of detail. So maybe, I guess, I'll ask people what they think about this level of detail.
B: But if you look at the metric, the metric has the same detail; the stage one, stage two is here. If you open that link, the psychological safety link in the metric... if you open that metric, you will see exact duplication. So that's where I would propose that we should have a brief summary of the metric, and if anyone wants to refer to the metric, they can refer to it over here.
A: Yeah, so I like that, because it keeps the content, because I really like the content that's in there. So I think it's just kind of a rearrangement, and we can take Vanard's point as well with respect to, you know, when it's okay, we could just refer to the metric. I'm not sure that every one of these, like inclusive governance...
B: Maybe we could remove some... some that don't map, and some that don't have, say, an exact mapping... removing those first, yeah. My preference: working in many of the working groups, I have always felt easy and comfortable doing it in a Google Doc, and then anyone can come and merge those, like, in an issue or PR. So maybe we all work together in the Google Doc, and then June pushes a PR once it is finalized by everyone.
B: If you want, I can quickly create the Google Doc.
A: Let's see, okay. So overall I've been... oh, overall I've been cleaning up the spreadsheet as well, just making sure that if we have a released metric, so for example Community Activity or DEI Event Badging, it's the same text that we have in the Google Doc, so that those two are aligned.
A: So I've just been spending a little bit of time on that. Prior action items: I have another pull request here. This was between me and June, so I went through... and this pull request, if you recall, this is the project engagement one, and I think June and Yahui, you had worked on this one quite a bit, if I recall. This is the different committers, contributors, issues closed.
A: Okay, and then we can... or actually it doesn't matter; whenever it's merged, we'll make sure we get it into...
D: What... two more, two more metrics models, about code quality currently, and community service and support, yeah. Supposed to be reviewed by Shane; we are waiting for the response to that comment. Okay.
A: All right, okay, good so far. All right, moving right along. So another thing that I did was I spent time... so for any of the in-progress and ready metrics, I took some time and tried to go through and add, if you recall, the context tags and the keyword tags, because as we roll out the new website, it's going to be based on a search model more than any other kind of model.
A: So we need to have a commitment to terms that enable search. And if I recall what I did, or what the conversation was last time, one of the things that we wanted to do in the keyword tags was to highlight the metrics that are in the model, you know? So if one of the metrics was downloads, that was something that we wanted to put forward as a keyword, or if there was something about pull requests.
A: What I did was I went through, and if there was something about a change request, so let's say it was, like, length of a change request, or number of review cycles in a change request, I just pulled it out and I'd call it "pull request". I just pulled it out as one thing. Or if it was, like, age of an issue, or length of an issue, or comments on an issue, I just put "issue".
A: You know what I mean, kind of the highest-level thing, because I just didn't think we needed something as low-level as "number of comments in an issue"; that seemed very bulky. So.
A: I just went through, and really what you're seeing here is largely a meta look at the metrics that are in the model, and if you want to check me, you can. But then each one of these has a context tag, which Kevin was asking for as part of the search process. And then these are the defined context tags, so that you can browse a little bit based on these context tags. But if you do a search, these are the keyword tags.
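The collapsing described here, where "length of a change request" and "review cycles in a change request" both become the keyword "pull request", amounts to a small normalization table. A minimal sketch of that idea; the rule list and metric names below are illustrative, not the actual CHAOSS tag set:

```python
# Hypothetical sketch: collapse fine-grained metric names into the coarse
# keyword tags used for site search. Rules and names are illustrative only.
KEYWORD_RULES = [
    ("change request", "pull request"),
    ("pull request", "pull request"),
    ("issue", "issue"),
    ("download", "downloads"),
]

def keyword_tags(metric_name: str) -> set:
    """Return the coarse keyword tags for a metric via substring rules."""
    name = metric_name.lower()
    return {tag for needle, tag in KEYWORD_RULES if needle in name}
```

So "number of comments in an issue" and "age of an issue" both surface as the single keyword "issue", which keeps the search index from getting bulky.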
A: Duration, yeah. I mean, I just didn't do that. It was just a decision that I made when I was going through it. So you can kind of see, it's always just "pull request" or "influence" or "issue", you know. Okay, any other comments or questions on context tags and keyword tags?
A: Let's continue on. Look at that, we're moving right along. So I was hanging out in this document quite a while today, kind of working through the red ones, you know what I mean, like which of these that we have under "considering" might be useful, in a couple of ways.
A: One is: what's something that I hear often as needing to be thought about from a metrics model perspective? And then also, what's something that we could probably use trace data for to help get, particularly with the work that you're doing around your SaaS model, Yahui, you know what I mean, and the stuff that Sean is doing with Augur and his San Diego front end. You know what I mean, like, what are the...
A: What are the metrics models that might be consumable? So the one that I kind of gravitated towards, and I thought we could take a look at today, was Project Security Risk. And I'm guessing that Yahui and Sean, you have a lot of thoughts on security, because I pulled a lot of the metrics from the Risk working group as candidates for this. So if you just... it's right here, if you click on that link in the model, or, I'm sorry, in the minutes.
A: So I was hoping we could just take a minute, or a couple of minutes anyway, on why a model around assessing security risks might be important, particularly around this user story component. I wrote down a few that came to mind, but if there are any others... Sean, I don't know if you have a minute to take a look at that. Yeah.
C: Right now, nobody needs to have it explained to them. It is the central concern for most open source projects right now, largely because of the executive orders and the White House Office of Science and Technology policies.
C: ...note that we need to focus on security in open source. So this is well known. I don't know if we want to cite the OSTP report, but, I mean, yeah, it is motivating the entire open source world right now. But I'll find the link.
B: Attending many of the Risk working group meetings, I think this seems to be too broad. Risk has been looked at from very different angles, so what angle are we taking? Are we taking all the angles, or are we focusing on just one aspect? Because there are so many risks: risk of the people, risk of the software, risk of the community.
D: One of my comments: if we mention software security, we have to consider things like defect resolution and test quality; but from the community's perspective, we could consider the bus factor of the contributors. I mean, both matter for security, but they are not the same security aspect. I...
D: If we're talking about security... because we also have a defined metrics model, which we haven't implemented yet. But I could say that we have lots of things in common, and, for example, I would like to add more metrics under this metrics model. Is that okay?
D: I can open up that... wait a minute.
D: And also, do we have a metric about security policies? Which means it's used to check whether a community has set up this mechanism. No?
D: And on that, do we have to... I think we already have something related to fuzz testing, right?
A: And I guess one of the questions I had on this too: this was a residual in the document from before we were doing this, and so this is the OSSF criticality score. So, Sean, I know... I think you have worked with those folks. Yep. Is this something that should be considered when we're looking at metrics in the metrics model, like...
A: Okay, and then I guess the one last one I had for you too is: libyears is good? Yeah, yeah.
B: Libyears is good. I had only one concern, with the programming language part of this, because, I don't know... as you say, Sean, Java is more risky, but there are many good projects in Java, just like in Node and JavaScript, yeah.
A: Yahui, do you know of anything existing for, like, branch protection or fuzz testing?
A: What about signed releases? This is something we can take a look at, isn't it?
D: For this part, I think the whole criticality score per se is a metrics model; it's not just a metric. So I just have a little bit of concern about this.
D: Because in the project engagement and the community activity... maybe, actually, I...
C: I would call it a coarse metric, because the score that you get there gives you a very high-level overview of the project, and that makes it useful, so that you don't have to do a lot of analysis of the 35 parts of the OSSF Scorecard. You can look at that one part and get a high-level clue.
A: Sean, can you get the criticality score from OSSF?
C: Well, OSSF Scorecard is actually something you download and run, and I download and run it in Augur and then store it in an Augur table. But you can run it independent of Augur, of course; it's its own project.
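Sean's setup, downloading Scorecard, running it, and storing the output in a table, can be sketched as a thin wrapper. A minimal sketch, assuming the `scorecard` Go binary is on the PATH and produces JSON with a top-level `checks` list via its `--repo` and `--format=json` flags; `flatten_checks` is a hypothetical helper for turning that output into one row for a metrics table, roughly as Augur does:

```python
import json
import subprocess

def run_scorecard(repo: str) -> dict:
    """Invoke the OSSF Scorecard CLI on a repo URL and parse its JSON output.

    Assumes the `scorecard` binary is installed and on PATH.
    """
    proc = subprocess.run(
        ["scorecard", f"--repo={repo}", "--format=json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(proc.stdout)

def flatten_checks(result: dict) -> dict:
    """Reduce Scorecard's per-check list to a {check name: score} mapping,
    suitable for storing as one row in a metrics table."""
    return {check["name"]: check["score"] for check in result.get("checks", [])}
```

Usage would be something like `flatten_checks(run_scorecard("github.com/chaoss/augur"))`; only the parsing half is pure, the runner needs the binary and network access.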
D: Yeah, I... also, to highlight: actually, the Scorecard and the criticality score, we can treat these two as two independent metrics models, and each single metrics model has a clear definition of the metrics included, like the Scorecard and the criticality score. And our metrics model here, software security risk, would have more links with the Scorecard, because the Scorecard cares more about the whole security thing.
A: Just to put that on the document, maybe: in the CHAOSS project we have created metrics from existing work; we've basically documented other people's work as metrics. So, for example, libyears was not our creation; that was something that existed. Isn't that right, Sean? And we...
A: And then I think another metric is Project Velocity, another metric that we didn't create; that was created by the CNCF, whatever their thing is called, you know what I'm talking about. And so, yeah, we just documented that as well. So it's possible for us to do what you're talking about, yeah.
D: Sorry, wait a minute.
D: So there are two options for me: we can create another new metrics model, just exactly the same as the Scorecard and the criticality score, okay; or we can abstract some useful and meaningful metrics from these two existing metrics models, I mean the criticality score and the Scorecard, into only one metrics model, making a difference.
D: Yeah, sorry, again.
C: The OSSF group is moving very fast; they have a high velocity, and so the outputs of these tools frequently change, and when we consume them we have to be ready to... I mean, the way we do it in Augur is...
C: We have a set of the things that are in the Scorecard that we consume, and we look for them, so that we're, I don't know, not vulnerable to the frequency of changes; so that we can make a planned effort to incorporate new things as we're able to. When they add something new, effectively, if we want to do more than just dump the JSON and show it to people, we have to target specific things that we already know are there, and then ignore the new things until we're ready to not ignore the new things.
C: We could do that, and I think, like, if we look at the grid under the criticality score repo, we just pick the things in there that we want, and maybe it's all the things that are there today, and we enumerate them in our metrics model. Then, when they add four things in two weeks, the things that we build are not going to break. That's all I'm saying; it's more of a technical observation.
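The approach Sean describes, enumerating the checks consumed today so that upstream additions don't break anything, comes down to an allow-list over the Scorecard JSON. A minimal sketch; the check names below are examples from Scorecard's output, and the assumed structure (a top-level `checks` list with `name` and `score` fields) reflects its JSON format, so treat both as assumptions:

```python
# Checks we have modeled today; anything Scorecard adds later is ignored
# until we deliberately extend this set.
KNOWN_CHECKS = {"Branch-Protection", "Fuzzing", "Signed-Releases", "Vulnerabilities"}

def extract_known_checks(scorecard_json: dict) -> dict:
    """Keep only the allow-listed checks from a Scorecard JSON result, so
    new upstream checks change nothing in what we consume."""
    return {
        check["name"]: check["score"]
        for check in scorecard_json.get("checks", [])
        if check["name"] in KNOWN_CHECKS
    }
```

When OSSF adds four new checks in two weeks, this function's output is unchanged; incorporating them is then a deliberate edit to `KNOWN_CHECKS` rather than a surprise breakage.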
C: This could, I'm sure, easily be incorporated in GrimoireLab, yeah. It's a separate repo; we just call it. It's written in Go; all these tools, the criticality score, the Scorecard, they're all written in Go, and we just have a Python wrapper around them in Augur. And, you know, in some respects GrimoireLab actually wouldn't be as susceptible as Augur to the breaking changes, because all GrimoireLab does is consume JSON, so there's not an a priori structure that's necessarily expected, which is good and bad.
C: So the only advantage is that we are defining the things that we're looking at, and we're constraining the likelihood... we're limiting how much our metrics model will break when OSSF introduces new things to the criticality score or the Scorecard. Okay, that's the only reason that I think we would do that. I don't know what Yahui thinks.
D: I think... I think we could produce an independent metrics model which has some close relationship with the Scorecard, for example on security risk, which is needed because we have our own thinking about the whole security thing, and we can give a clear definition. Because the Scorecard and the criticality score only cover data sources in GitHub.
D: Instead of something more broad.
D: Yeah, that's why I mentioned it here: because the Scorecard and the criticality score, the applications provided by OSSF, only support projects hosted on GitHub. So...