From YouTube: CHAOSS Risk Working Group Meeting, June 22, 2023
Description
Meeting summary can be found here: https://chaoss.discourse.group/t/risk-working-group-meeting-summary-june-22-2023/192
B
Welcome, welcome to the risk working group for the CHAOSS project, here on the lovely June 22nd, 2023. We'll get started with the agenda. I'll put it in the notes again... oops, that's the wrong clipboard item.
B
Well, I know there have certainly been parts of the CHAOSS project that relied on Elasticsearch that experienced some difficulty when they closed that, and OpenSearch, of course, has worked to take its place, but that was not a smooth transition for people who had committed to Elasticsearch. And I think there's been a discussion in the course of this group over the years about how those choices usually emerge from a single firm, or a set of interests, having control of a governing board for a project.
A
We were kind of... I didn't really know how to categorize the risk of the conflation between a company and a project, in terms of how you think about this evolution. Elastic was one example of a project and a company with the same name, and then, when there was an issue with the project and the project community, there has been a schism, and now there are multiple projects, both closed and open source, available in the market. I don't know how much that's impacted their brand and their company, but I have to imagine it did something, just in terms of their reputation in the community. And just sort of thinking about that...
A
There are two elements of risk: there's the risk to the consumers or dependents of the project when the license model and the governance model change, but there's also the risk to the company that created it, the risk of changing your model, and the perception, reputation, and usage risk that comes along with it. So I think, Victor...
A
You might already hear how we're talking about this. I think something that we struggle with in this working group is that everything can kind of be a risk, sometimes deterringly so. So we're trying to get a little clearer on risk to whom, and in what scenario, at least within our own definitions and selection of metrics, so we have some orientation there, because it could suddenly become a boil-the-ocean adventure.
D
Yeah, actually, that's a good question. Whoever is the... I guess, actually, I'm not sure: is this working group part of the OSPO working group as well, or is it separate?
A
We're separate, but I think many of us also sit in the OSPO working group, so there's some overlap in interest, and also an effort to be more aligned with metrics that are in support of both, specifically where risk is more general.
E
Yeah, sure. I wanted to contribute on how transparent the project is, and I think part of the risk that you're going to feel, as either a user or the maintainer of a project, is...
E
I think release frequency would be a great metric to watch for the scenario where, as a lot of companies that I've known will do, much of a project is developed internally, and then releases are published, or a bunch of commits are published all at the same time. So I think that if you see a project that only ever has releases and very few commits, it might be a good indicator that the project is being maintained internal to a company; it's not actually being developed in the open.
F
This is Dave Wheeler. I would also want to correlate it with size. You know, having a small bug patch and an immediate release is, well, never ideal, but it happens; it's not necessarily an indicator of a problem. But if you are certain organizations, which I won't name but should shame, it's the one release, congratulations, it's the commit, if you will, of 100,000 lines of code, and there's release two. Yeah, no, you didn't just do that.
F
You know, it might have been sitting outside source control, which is itself a risk anyway. So yeah, I totally agree with your concern.
F
You know, I do accept that, to make things simple, if it's under an open source software license, it's open source. But that doesn't mean that they're behaving transparently, or, you know, applying best practices. So it's helpful to identify: you are, or are not, doing it well. Definitely.
A
So that would be a new metric for us. I kind of like it, and I think the metric itself is more of that ratio, to your point: we're looking at release frequency as it relates to commits, PRs, and issues, maybe coming from the same group of people. That could be an indication that it's mostly being developed internally and being externalized as a release, but not necessarily collaborated on with a broader set of people. What...
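A minimal sketch of that release-to-commit ratio idea, assuming the public GitHub REST API (the threshold and the first-page counting shortcut are illustrative, not a CHAOSS-defined metric):

```python
# Sketch: flag repos that publish releases but show very little public
# commit history. Uses the GitHub REST API; a token raises the rate limit.
import requests

def count_first_page(owner, repo, resource, token=None):
    """Rough count: number of items on the first API page (up to 100)."""
    headers = {"Authorization": f"token {token}"} if token else {}
    url = f"https://api.github.com/repos/{owner}/{repo}/{resource}"
    r = requests.get(url, params={"per_page": 100}, headers=headers, timeout=30)
    r.raise_for_status()
    return len(r.json())

def looks_internally_developed(owner, repo, min_commits_per_release=5):
    """Heuristic: many releases relative to commits suggests the real
    development happens somewhere other than the public repo."""
    releases = count_first_page(owner, repo, "releases")
    commits = count_first_page(owner, repo, "commits")
    return releases > 0 and commits / releases < min_commits_per_release
```

A real implementation would paginate fully and window both counts over the same time period.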
E
Yeah, and I wonder if the API would surface that. Something else that I could picture happening is: they have an internal version and an external version, they push the history to the external version all at the same time, and then suddenly there is all of this activity, and now I'm going to cut a release, because that would be the simpler way to be cutting those releases publicly.
B
I mean, the API certainly gives us data, and GitHub and GitLab both provide data that would let you understand if a large number of lines of code were committed in a small number of commits, or in a short period of time, followed by a release, followed by a long period of silence, right? So that phenomenon is easily revealed.
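A sketch of detecting that "history dump, release, silence" shape from commit metadata; the inputs are assumed to come from the GitHub or GitLab APIs (or `git log --numstat`), and the window sizes and the 80% share are made-up knobs:

```python
# Sketch: flag releases preceded by a burst containing most of the repo's
# code and followed by a long stretch of no commits at all.
from datetime import timedelta

def dump_then_silence(commits, releases,
                      burst_window=timedelta(days=7),
                      silence_window=timedelta(days=90),
                      burst_share=0.8):
    """commits: list of (datetime, lines_changed); releases: list of datetime."""
    total_lines = sum(lines for _, lines in commits) or 1
    flagged = []
    for rel in sorted(releases):
        burst = sum(l for t, l in commits if rel - burst_window <= t <= rel)
        quiet = all(t <= rel or t > rel + silence_window for t, _ in commits)
        if burst / total_lines >= burst_share and quiet:
            flagged.append(rel)
    return flagged
```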
F
Yeah, we actually... I haven't really checked on what the squatter is doing, but if they're not doing much, we may just beg: hey, GitHub, you're a member of our group, can you give us our own name? But that won't work for everybody; the lesson learned still applies.
D
Sorry, the formula used to calculate the score... is that published?
F
The actual formula, yeah. If you click on it, yes, all the details of exactly what's measured. It's a weighted average; I don't remember what the weights are, but there's a low, medium, and high. Lowest is one, I think medium is three, and I forgot what high is, maybe five? It's all posted there. Now, I mean, there are pros and cons to Scorecard, but the idea is, I mean, like all things...
F
No
tool
is
perfect
because
we
live
in
an
imperfect
world,
but
what
it's
doing
is
it's
looking
for
indicators,
10
means,
hey
things
are
looking
pretty
good.
Zero
means
no,
and
it
has
a
whole
bunch
of
things
like
hey.
Do
you
have
Branch
protection
on
which
means,
and
that
forces
you
more
or
less
to
post
to
a
separate.
You
know
here's
my
proposed
change,
giving
people
time
perhaps
to
review
before
you
bring
it
into
the
main
branch.
F
You know, just, yeah. If you look at "What is Scorecard", yeah, keep...
F
Scroll down. Okay, there's... okay, public data, the API, keep going. Oh, let's see, I guess it's moved around; there's actually a separate page, which... okay, yeah, this is all about using it. Let me give you the link. It's on there somewhere, but there's a link to the actual default Scorecard checks; it's near the top, and then there's the detailed check documentation.
F
Yeah, but what happens is each of these scores has a weight, which is verbally rated as high, medium, or low, and then the overall score uses that rating to basically weight it. If you're not familiar with weighted averages: if you just average numbers, you add them up. If you weight a value by three, that means it's as if it was there three times, so you multiply it by three on the top, in what you're adding, and you count it as three instances in the denominator.
F
Okay, it's just a way to give certain values more weight. It's a pretty common, simplistic mechanism for scoring.
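As a worked example of that weighted average, using the approximate weights recalled above (high = 5, medium = 3, low = 1, which should be checked against the Scorecard documentation): three checks scoring 10, 4, and 0 would combine as (5·10 + 3·4 + 1·0) / (5 + 3 + 1) = 62/9 ≈ 6.9 out of 10.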
F
But that's the... the whole point of focusing on the high, medium, and low was to focus on: is this considered a stronger signal or not? The idea is to try to give special weights to signals. I will note that some of these signals have challenges, and we're working on them. In particular, currently it only works on GitHub.
F
We're fully aware that there's a vast number of projects that aren't on GitHub. We're actually working with Lockheed Martin, who is implementing our code additions for GitLab. That hasn't emerged yet, but they've got some stuff running, and it's always a challenge.
F
You know, I think people should be aware of tools' false positives and false negatives. So, for example, tests: if you're using GitHub Actions and certain kinds of tests, we detect it; if you're using CircleCI or Travis or Jenkins, we don't detect you, so you must not be running any tests. Interesting, why? Because it turns out that detecting all possible configurations of software is really hard. So we've got people working on this and improving it, and I'm sure we'll continue on that as long as this project lives.
F
Well, maybe I'll be dead before we're done, but the idea is, basically, no tool is perfect, but this gives us much more information than we had otherwise, and a way to give you a quick score of an arbitrary project is really helpful.
F
My apologies, you know, it's been a while since I've looked at the mechanism, at how the scores get combined, but I'm pretty sure what I just described is right. I guess I've gotta find it; it's documented here somewhere, I just gotta find it. Well...
F
Okay, so, you know, I was correct on the process. I just didn't remember the weight numbers, and of course that's because most of the time you never see those weight numbers; you only see those verbal descriptions. Yeah.
B
And what I will say is that OpenSSF, and the Scorecard team, is actively adding things to this list. In the last 18 months that I've been using it, it's changed; what's in there has grown. Yeah.
B
The growth is slowing; at first it was very high, but they're still adding metrics to it.
F
Yeah, and we've actually got somebody who's trying to work off all the issues, and we're working to improve our internal test coverage to at least 80% statement coverage. I mean, it already has some tests, but, you know, no software, as far as I can tell, is done. I'm sure there are a few exceptions, but for the most part software's never done. So we know there's functionality missing that we'd like it to have; we know that it's got tests, but not enough coverage.
F
We're improving that: working on GitLab, working on detecting more things. So we think it's helpful; we know there's more work that needs to be done, but I think it's really helpful. Oh, and one thing you didn't mention: we are running weekly scans, once a week, of over a million open source projects, and storing the results.
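Those stored weekly results are queryable; a minimal sketch against the public Scorecard API (the endpoint is api.securityscorecards.dev, but the response field names here are from memory, so verify them against the Scorecard docs):

```python
# Sketch: fetch a repo's stored weekly Scorecard result.
import requests

def scorecard(org, repo):
    url = f"https://api.securityscorecards.dev/projects/github.com/{org}/{repo}"
    r = requests.get(url, timeout=30)
    r.raise_for_status()
    data = r.json()
    checks = {c["name"]: c["score"] for c in data.get("checks", [])}
    return data.get("score"), checks

overall, checks = scorecard("ossf", "scorecard")
print(overall, checks.get("Branch-Protection"))
```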
F
No, that's the thing: the reason we only do it weekly, across a million, is because that's a million, whereas if you run it locally, you can do things like every commit, or every whatever, whenever you want to.
F
It's not just limited to open source, let me tell you. Yeah.
A
Well, we have the three areas that we discussed, which we can get back to in a bit, but it might also be helpful to have a reference for other areas of risk: to essentially acknowledge that Scorecard could be part of your assessment as well, as an easy way to incorporate a security scan, while knowing that our metrics model wasn't really security-centric, because of our purpose-built design.
F
And, to be fair, the OpenSSF is happy to steal, I mean, use CHAOSS work, with credit. So, indeed, within the OpenSSF, Scorecard is an effort to just kind of immediately do a quick look, mostly at process things, like: do you have branch protection? Do you have a badge? Is there testing?
F
Is there static analysis? Is there dynamic analysis? But there are other efforts to try to gather more information, like the OpenSSF dashboard, which is going to try to gather other, again still security-focused, information, including Scorecard and other kinds of information, organized quite differently. So stay tuned.
B
So I have two questions. One is, when we talk about this metrics development for a metrics model: are these the metrics for our first metrics model? And one question that arises for me from this discussion is: is the OpenSSF Scorecard kind of an easy one, in a way, to be an opening part of a risk metrics model?
F
I would include it. Basically, the OpenSSF Best Practices badge and the OpenSSF Scorecard are kind of the two main ways that we provide quick evaluation of projects, and they take two different approaches. In fact, Scorecard uses the Best Practices badge as one of its inputs, and, in fact, on my to-do list for this year, the Best Practices badge is going to bring in some of the data from Scorecard.
F
So, although they're two different projects, what I've said is: hey, they're like peanut butter and jelly. They're both trying to accomplish overall the same objective, but they do it differently and can work well together.
F
There's no shame in pulling that out separately. But absolutely, yes, please do include the OpenSSF Scorecard; that's pretty straightforward, and you don't have to re-copy it all, just say: hey, there's this Scorecard, go here if you want to learn more about how to use a tool to automatically measure it. And really, I think the whole point of Scorecard is automated measurement, so you really, sooner or later, want to point to the tool anyway; you don't want to try to redefine it by hand all over again.
A
...a checklist, and then the sort of company one: if you have a large Elephant Factor, then we would suggest this other sort of... I like this combination of not only looking at company participation as an evaluation of company ownership, but also the release-to-commit size, or volume, or ratio, as a way to investigate that further, to look for indicators rather than just participation. I think it's a little bit more nuanced, but I would suggest both for something like transparency.
A
It doesn't have it specifically; governance was listed, and I was looking at older ones. We had SBOM, just for transparency of what's in it, and they had the Best Practices badge here, but I think it fits better under reported vulnerabilities and sort of the security-angle component, so I would leave it out here.
F
I will say that for some of these things, you may not call them security metrics, but they're still indicators. For example, Scorecard absolutely considers failure to test as a security metric, because, seriously, if you aren't testing, the odds of you getting a release out that's correct are not that high. And even more importantly, if you fix a vulnerability, the odds of you screwing up some functionality along the way, when you're in a hurry, are also disturbingly high. So...
F
I have, after literally many decades of experience and study of this topic... I have opinions on this, but this sounds like something we would want to grab a...
F
But I think the quick answer, though, is that what people get taught in schools is often, frankly, unrelated to the real world, for unfortunate reasons. And when you have the whisper chambers saying how you write software, disconnected from how software is actually developed, you get really, really bad advice. Yeah.
A
Kind of like, I think, David, I'm not sure if you were there, but we're trying to relate it back to sort of the criticality score within the ecosystem, and I know the census report attempted to do that with packages across multiple popular package systems, but we didn't have a way of doing that within CHAOSS.
F
Okay, how does it relate to the... okay, I can answer some of this. In the OpenSSF we actually have a criticality working group; that's where the criticality score project lives, and that's where the Harvard report reports into. So they've been trying for a long time to help identify what's critical. It turns out that's hard; let me know if you've seen this movie before. So the Harvard study was great, I mean, it was a serious quantitative result, but it is narrowly scoped: it only covers application...
F
...libraries in certain ecosystems, so npm, PyPI, those kinds of things. So if you're an application, or if you're in C or C++, those are all going to be excluded. Okay, but for that area they had some really helpful data, did some real analysis, and came up with some very interesting identifications.
F
The criticality score is interesting. That was an early attempt to identify what's critical. What you really want to do is find out where the software is used, but in general you can't do that. If there was some global organization that had SBOMs for everybody and knew how often different products were used, this would be easy. There is no SBOM in the sky. So what the criticality score does is measure what it can.
F
There are a few other metrics, but that's primarily what the criticality score measures: how active the development process is. So I don't think it's wrong, in the sense that if you've got a very large number of people making a very large number of commits, and you measure it different ways, it is likely to be important.
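For reference, the published criticality score combines capped, log-scaled activity signals into one number; as defined in the ossf/criticality_score project (reproduced from memory, so verify against that repository):

C = (1 / Σᵢ αᵢ) · Σᵢ αᵢ · log(1 + Sᵢ) / log(1 + max(Sᵢ, Tᵢ))

where each Sᵢ is a raw signal (contributor count, commit frequency, dependent count, and so on), Tᵢ caps that signal, and αᵢ is its weight.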
F
The same thing is true here, where sometimes this measure fails. I will give you two examples, one which isn't obvious and one which is. The not-obvious one is: there are some projects which are incredibly busy and are not important to anyone else. In particular, there is software that's used by CERN, the very, very large, you know, lab...
F
I have an engineering degree which was one semester short of a physics degree, so don't get me started; it's tempting. Okay, so CERN does cool stuff, but they have a software project, I think it's like an internal web management tool, I forgot what it is, I think it's some kind of CRM. Like everything else they do, there are over a thousand authors, but it's only used by a very narrow community. So it's really busy, but it's really not that important.
B
Beyond just the criticality score, there's another indication of criticality of some kind. For example, when Renisha was here, one of the things that Dwayne was doing when he was at Indeed was looking through all 11,000 of the repos that they rely on, to try to understand which dependencies were used in the greatest number of projects. Right.
F
What they did is they started with the SCA vendors. The problem with the dependency tree is: A can depend on B, B on C, C on D, but no one uses A, so just counting up who depends on what doesn't help you. So what they did is they went to the SCA vendors for what has been found in real-world applications, and then they followed the tree down, and that was, I think, the best available data for the subset that they could handle.
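A minimal sketch of that approach (the graph and observed roots are made-up data): start from packages seen in real deployments and count how often each transitive dependency is reached, instead of counting raw dependents.

```python
# Sketch: count transitive reach from observed roots instead of raw
# dependent counts, so unused chains like A -> B don't inflate anything.
from collections import Counter, deque

deps = {            # package -> direct dependencies (illustrative)
    "A": ["B"],
    "B": ["C"],
    "C": ["D"],
    "app1": ["B"],
    "app2": ["C"],
}
observed_roots = ["app1", "app2"]  # from SCA scans of real applications

usage = Counter()
for root in observed_roots:
    seen, queue = set(), deque([root])
    while queue:
        for dep in deps.get(queue.popleft(), []):
            if dep not in seen:
                seen.add(dep)
                usage[dep] += 1
                queue.append(dep)

print(usage.most_common())  # C and D rank highest; A never appears
```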
F
Okay, I mean, really, you could always complain about any research, but that is the best the world has available, and we funded it, so there. Yeah, and I was involved as well, to be fair, but yeah.
F
Yeah, I mean, there's a lot you can do with dependency analysis, and if you like, I can also talk to you about how to optimize some of the chaining, because it turns out order of operations is really important for certain graph operations. What a surprise.
F
A shock, there. So yeah, I've actually got some stuff posted about this. I actually did Census I; what Harvard did was Census II. I did Census I, and some work that basically laid the groundwork for what Harvard eventually did. So I've done a number of specifically dependency analyses, but the problem is you've got to figure out where the tops are; just saying somebody uses it does not really tell you.
B
Yeah, I would love to chat with you about that offline, because I think Harvard's doing Census III, whether you know it or not.
F
Well, they didn't offer me any money either, so yeah. So, basically, there has been analysis, and it's really challenging to do. It is so difficult to get the data that you need to do the real-world analysis that can lead you to reasonable conclusions. So anyway, I'm happy to tell you in depth about some of these things; I'm not sure that that's for this... no.
B
I'm gonna send you an email, because I think there's a couple of problems I've encountered with dependencies. The biggest one is that oftentimes multiple libraries resolve to the same GitHub repository, which complicates assessing the maintenance of a library: there could be a recent release that didn't change much, because they just released all the libraries for all the platforms at the same time. Sure.
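One way to handle that, as a sketch: group packages by the source repository they resolve to, assess maintenance once per repo, and fan the result back out to packages (the mapping here is made up; in practice it might come from package metadata such as a "repository" field):

```python
# Sketch: de-duplicate maintenance assessment across packages that
# resolve to the same repo. Example mapping is illustrative only.
from collections import defaultdict

package_repo = {
    "aws-sdk-s3": "github.com/aws/aws-sdk-ruby",
    "aws-sdk-ec2": "github.com/aws/aws-sdk-ruby",
    "left-pad": "github.com/left-pad/left-pad",
}

by_repo = defaultdict(list)
for pkg, repo in package_repo.items():
    by_repo[repo].append(pkg)

for repo, pkgs in by_repo.items():
    print(repo, "covers", pkgs)  # assess each repo once, not per package
```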
F
If you want to claim that, hey, there are problems... no kidding.
A
Yeah, there's also just one main one, which is: if we're proposing things to put into practice, they also have to be feasible for, say, a company or an individual to run, and something like looking at a multi-project ecosystem dependency tree and overlap, a system-level analysis, is probably a bit out of scope for someone who's doing this project. So I'm...
A
...also thinking about what elements we can learn from these efforts to simplify an approach, or maybe we just point to something else, like this piece of research from Harvard and the Linux Foundation that was looking at a whole-ecosystem level of popularity and criticality. Versus, there's also the contextual piece of criticality in your own system, which we can't see, but which might be more relevant for them. But I...
F
Criticality doesn't necessarily mean it's a risk. Everybody may be using something, but that doesn't mean that your use of it is necessarily a bad thing. In fact, some have argued that sometimes depending on something everyone depends on is good, because that means lots of people are probably looking, and if there's a problem it'll get immediately fixed, because there are some big-stakes high rollers who depend on it too and will throw whatever money is necessary to fix any serious problem.
F
At least, that's the pitch, yeah, and happy to talk about that. Sophie, I know we've talked about this before, but I guess I can't help but mention it again: every time you talk about something like this, immediately the first question is, whose question are they trying to answer? So, usually, for security, I'm trying to answer questions like: I'm thinking about using this package, is this risky or not? Or: I'm already using this package...
F
Is
this
risky
or
not,
and
in
some
cases
like
in
the
open
ssf,
the
critical
projects
working
group
is
trying
to
answer
of
all
the
projects
that
exist
out
there,
which
ones
are
the
most
widely
used
and
therefore
should
receive
extra
resources?
But
you
basically
have
to
figure
out
what
the
question
is,
and
then
you
use
the
metrics
that
best
match
to
help.
You
answer
that
question.
B
Oh, I mean, I guess we could skip a meeting.
B
I've got some papers to do; I'm just confirming my logic, yeah. It would be July 6th, so the next meeting after that would be July... Do I want to meet July 6th, Sophia, or should we just go...
F
I have to admit, right now I am kind of overwhelmed, so I may not be able to come to as many of these meetings as I'd like, but I'm always delighted to answer questions if you've got some specifics. But really, right now I just dumped on you the criticality score and the OpenSSF Scorecard; the Best Practices badge you're already aware of. So at least for those kinds of things you at least have a mind dump.
A
Yeah, no, I think anything like that is helpful to know, because for something as complex as risk, if we're trying to make a metrics model be simple, then pointing to things that are inherently roll-ups can also kind of help you look at something that is more nuanced, without it just being one number. Because I was just thinking about Gary again, and your exercise, and the number of individual metrics that you've brought up.
A
Sorry, I'm just talking about your project now, and just acknowledging that risk is inherently complex, and that only picking three or four things is probably not enough. So some of these roll-up metrics, if we can take advantage of them, then we are creating a simpler approach for people, where they're only running a couple of things, and some of those things are already roll-ups, so they're inherently taking into consideration more factors than you could on your own. Or...