From YouTube: CHAOSS Risk Working Group 3/24/22
Description
Links to minutes from this meeting are on https://chaoss.community/participate.
A
I think I know a lot of development teams really don't like to use Node.js at this point, because the way it weaves together a collection of downstream dependencies for whatever you're building is often changing, and often has security issues that will break your code, so it leads to a lot of high maintenance. But there are other projects, like Augur and GrimoireLab, where I don't know that there are any, or only a handful, of downstream projects that depend on them, right?
A
Those projects are sort of the end of the road. I'm not aware of anyone importing pieces of GrimoireLab or Augur to do other things, like as an import into some other tool. So I think for a lot of projects it doesn't matter. I'm not sure it matters for Kubernetes, but maybe it does.
B
Well, it matters in a different way. I think downstream dependencies are more about contextual awareness: how your thing is being used, understanding your usage. If you are sort of the core of the project and you have a better understanding of what kinds of things are pulling from your central repositories, then when you're thinking about what to prioritize in your next version, taking into account how your product is being used could help inform that decision.
B
We talked about that a lot from the perspective of understanding the usage of a project, which is also a fairly nebulous task. This is a much more explicit test, because these are code references that we potentially could track, but I don't know, because it's not actually usage.
B
Yeah, as a way to provide a relative importance or usage level. They've already found a way to sort of combine it; it's not just downstream dependencies, I think it probably combines those two, but I'm not really sure. So if we do want to propose this, I just wanted to flag that as an area where I want to better understand what they did, so that if folks ask what the difference is here, we can cleanly articulate it. Sorry, that was a lot of ideas all at once. I'll stop rambling.
B
Yeah, so in the links under the discussion, the Linux Foundation sends us to a free and open source software link. It should be the second one. If you just do a Ctrl+F and search for "z score"... is that going to find it? No. "z-score"... here we go: calculate the average z-score.
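The averaged z-score the report describes can be sketched roughly like this. This is a minimal illustration, not the census methodology itself: the package names, the two signals, and the equal weighting are all assumptions for the example.

```python
from statistics import mean, pstdev

def z_scores(values):
    """Standardize raw counts so packages from different
    ecosystems land on one comparable scale."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

def average_z(signals, names):
    """Average each package's z-score across all usage signals."""
    per_signal = [dict(zip(names, z_scores([s[n] for n in names])))
                  for s in signals]
    return {n: mean(zs[n] for zs in per_signal) for n in names}

# Hypothetical dependent counts from two different scanners.
signal_a = {"pkg1": 1200, "pkg2": 40, "pkg3": 310}
signal_b = {"pkg1": 90000, "pkg2": 500, "pkg3": 12000}

ranking = average_z([signal_a, signal_b], ["pkg1", "pkg2", "pkg3"])
```

The point of the standardization is that raw counts from different package managers aren't comparable, but their z-scores are, so averaging them gives one consistent ranking.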
C
In this case, for my master's thesis I developed a v-index, where your project is rated by how many projects are depending on you, which is the first level, and then how many others are depending on those, as a second level. It's similar to the h-index of a scholar: okay, you publish a paper...
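On one plausible reading of that description (the thesis may define it differently), the v-index works like the h-index with papers replaced by direct dependents and citations by their second-level dependents:

```python
def v_index(second_level_counts):
    """second_level_counts[i] = number of projects depending on the
    i-th direct dependent of our project (the second level).
    Returns the largest v such that at least v direct dependents
    each have >= v second-level dependents, h-index style."""
    counts = sorted(second_level_counts, reverse=True)
    v = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            v = i
        else:
            break
    return v

# Five direct dependents, with 10, 8, 5, 2 and 1 dependents of their own.
print(v_index([10, 8, 5, 2, 1]))  # → 3
```

Like the h-index, this rewards projects whose dependents are themselves widely depended on, rather than just counting raw dependents.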
B
I mean, every project is going to have the use-context problem. I think there's potentially value in an objective measure in the way the z-score is being used: to provide a comparison across all these different packages and package managers, and a consistent measurement, even though each user's characteristics could be unique. In this case I just keep coming back to the value of implementing it from the perspective of the project, and in some cases actually thinking about...
B
And so again, that's not really... I'm curious about the nature of who actually finds the metric useful. I feel like this kind of metric is useful not necessarily for the individual maintaining the project; it's more useful as a way to compare projects.
B
Not to the level that I've seen yet, which is why I've already asked Kate if we could interview this team. It's just that I think there are many individuals that have thought about this, and they clearly took an approach. But if they don't share the full details of their approach, then others might just repeat what they did. And I can't imagine this is proprietary, because it's a research institution.
D
It looks like they're looking at explicit usage; this is on pages 15 and 16. Okay, so...
D
I think usage might come a little further up.
C
Yes, in the presentation they said they got it from the companies. I don't know which companies.
B
It seems like they are combining data from multiple sources, so not just Libraries.io; I think that was one of the sources they used. I keep reading this in chunks and not more holistically; I think I need to just read the whole thing. But they do talk about the difficulty of combining their various sources, so I think they are using data from private companies, potentially usage data, because it does say "public and private usage" now that I'm looking at it.
F
They went for all the projects that were scanned, they also followed all the dependency trees, and basically used that as part of the aggregation of the stats.
F
Yeah, it is. Okay, David, Kate.
C
But yep, from this conversation I can see the metric is there: the usage, or in terms of downstream, how many people are depending on your project. It's clearly a metric which is critical for the critically important projects, maybe not for every project, but at least for those projects for which this report exists. So...
B
Yeah, okay, I was just going to give you the two-second background of why we were talking about this. Vinod had raised the question of whether or not we would like to pursue a downstream dependency metric, and in the meantime we were trying to dig through this particular report to see how much overlap there is in the approach, knowing that the z-score is taking the same kind of thing into account, versus proposing a new metric.
F
So one of the things I found really interesting, slash useful, and want to have at my fingertips every day, was the "packages impacted" view that deps.dev has.
F
I kind of think it is. I think there may actually be more than is even here, because this is only able to scan what they have access to. So it's scanning whatever you have access to.
B
Yeah, I believe that's the case, because at the "all systems" dropdown on the top right you can see how many things they've imported into it, but it's not inclusive of everything; there are others, yeah.
C
So when I was doing the Libraries.io analysis, they were scanning basically all the package managers, those 33 managers they have, and I think this is a similar case. They are scanning all those package managers, and if any package manager is using that particular project, it's listed here; if not, it's not listed here. That's my assumption, looking at this.
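The per-ecosystem scanning just described amounts to a union over whatever indexes you can query. A minimal sketch; `indexes` and its fetchers are hypothetical stand-ins for real queries against services like Libraries.io or deps.dev:

```python
def aggregate_dependents(project, indexes):
    """Union the dependents reported by each package-manager index.

    `indexes` maps an ecosystem name to a callable that returns the
    set of packages in that ecosystem depending on `project`.  A
    project absent from every index simply reports no dependents,
    matching the behavior guessed at above.
    """
    dependents = set()
    for ecosystem, fetch in indexes.items():
        dependents |= {(ecosystem, d) for d in fetch(project)}
    return dependents

# Hypothetical, hard-coded fetchers standing in for real API calls.
indexes = {
    "npm":  lambda p: {"app-a", "app-b"} if p == "left-pad" else set(),
    "pypi": lambda p: set(),
}
found = aggregate_dependents("left-pad", indexes)
```

Tagging each dependent with its ecosystem keeps same-named packages from different package managers distinct when the sets are merged.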
F
You're doing it for yourself, because you think it's amusing, or you're just doodling around, but suddenly someone builds a whole bunch of dependencies on top of you. It changes the dynamic, and you might want to be a little bit more responsible in terms of setting expectations with people, whether you're doing things at a supported level or not, so they can make risk assessments.
F
You know, if something is done for a hobby, and then everyone builds a big set of dependencies on it, the person may want to say: hey, don't build on this stuff, go find something else, because I'm not giving you any commitment; you're responsible if you find bugs. People need to be able to signal that too. You need to signal what you're not doing sometimes, and if people don't...
F
If we start surveying the ecosystem, or, you know, deps.dev and others are surveying the ecosystem and making this stuff visible, and these metrics are being captured, then people can go and query them and find them. Like, hey, I know the community around Zephyr is really excited when they see themselves in the top 100 projects on certain lists, or things like that. Two sides of the same issue, right: one of which is preventing the surprises.
F
"Oh, people are using me, I don't know what to do," versus "oh, people are using it, this is really cool, I'm going to do more things."
C
And that's where I brought this up, Patrick, as part of project popularity. I was trying to work on a model of project popularity, which can be measured by downstream dependents, by forks or stars, or by some other measures. But one of the metrics I was looking for was this downstream one: how many are dependent on me, and how popular is my project.
F
Well, like in the first census we used library data, and there was a lot of work trying to even understand this: like, where is OpenSSH, right, in Census I, one project, back five, six years ago? These are things in the amorphous ecosystem that are not always clear, and you can't always understand them, especially if you don't have access to sophisticated tools and infrastructure for doing all the crawling; but other people may have that access.
B
Yeah, I'm inclined to agree, just because I know there are so many other things by which you can gauge popularity, to Vinod's point. You could also look at stars and forks, and you could look at downloads or other types of references, but those are super flaky depending on what you're using. So I feel like if this just focuses on citations and dependencies, then that is something that can be defined across all projects without any sort of asterisk of "we're..."
B
I had a side question: knowing that Matt and Elizabeth are doing the audit, do we know when they're getting to risk, and should we make space in our next meeting for whatever they have for us?
A
Yeah, briefly: we're going through all of the metrics that we've created, and we're looking for consistency of structure, style of writing, and the ways that information is provided.
A
I think they're mostly complete, but they've evolved over the course of five years, and I don't think they're completely consistent. The goal of the activity is to identify things that may need to be adapted to changing technology, or be made consistent with a standard style. I don't think it's terrible, but I think we want to do this review at this stage so that it doesn't become terrible.
F
Basically, the census report that just got published was built on contributions from Synopsys and Snyk, and we actually had three, so we could actually get it aggregated. And it's just libraries, right: the commercial vendors were being asked to scan and review, and then trace the dependencies through what was being scanned from a vulnerability standpoint. So it was the aggregation of three, for that first version of Census II.
F
Well, yeah, they're not package managers; Synopsys and Snyk are SCA, source code analysis, tools, right? Okay.
F
Or composition analysis tools, depending which version of the acronym you wish to use. The software composition analysis vendors basically get asked to look at licensing and security issues as part of their commercial offering, and what the census is doing is taking the aggregated data that the three of them would share neutrally, after much, much, much pulling and planning. This was hard to get out of them, to put it mildly, on a regular, you know, non-consistent basis.
F
Getting the data was like pulling teeth, and, you know, they wouldn't give it to the LF; it had to go to a third party, to the Harvard folks. So, okay, we helped facilitate it all with our contacts, trying to help get them enough data to be used. I think the data sets on the website are actually available too; I think they did put the aggregated data set out there.
F
If you actually go to the website, Sophia might want to pull some data and have some fun with it; there are links to open data. Yes.
A
Yeah, so is there a place for... so this census report, I think, is very helpful, and serves in providing a global view of the most depended-on projects from...
A
What's the relative utility of dependency analysis tools around the metrics that we've developed, the ones that look at things like Libyears and that enumeration of the dependencies from different package managers? Those metrics obviously don't do this same thing, but they could be applied to understand both upstream and downstream dependencies, depending on the pool of repos we were to feed a tool.
A
And I honestly don't have an opinion on the answer, but this is certainly squidgier than the upstream dependencies, so I'm mentally struggling to figure out: okay, what would be the useful metric, or metric model, or tool, or piece of information that we could provide to the open source community? I mean, what the census does is a very specific, distinct report, not an ongoing dashboard.
B
Yeah, I tend to agree. I think it's what you had said earlier on reputational risk, if we're framing it as risk, as well as Kate's comment that maybe it's more of a PSA encouraging maintainers to be more open about their commitment levels: if you're evaluating whether to use a project, is there a maintainer community around that project?
B
It's almost like the defect one is a measure of how responsive that community is to address issues, but I think even a step before that is whether or not the community has stated any kind of interest in continuing the project. I think I can say this because it's public, but as a very specific example: I work at a large company, we release thousands of projects, and there...
B
There are statuses there. Also, some of them are just random hobby things, or random code snippets that somebody wrote that weren't a proprietary differentiator, so we just put them in a GitHub repository. But because they came out of a company, there was definitely an issue with people assuming all these things were going to be maintained the way we're maintaining something like TensorFlow. We knew, as a proprietary entity, that we had to be more explicit about which things we're actively continuing to build on.
B
This was a random side project that we figured would be better if we just released it, because someone else could take advantage of it; but that doesn't necessarily mean we're going to keep maintaining it. So we started being more prescriptive about ensuring that projects state that on their GitHub repository page: "this is currently not being maintained, but we will address any critical issues," or "this one's not taking pull requests because there are no active maintainers."
B
But if there's an issue... basically, it's explicitly defining how you should think about that project, in terms of whether or not it's actively being maintained, or has the potential to be archived and removed. I think that comes back to reputational risk: we had to do it because, since it came out of a company, there was more of an expectation that it was going to be maintained, whereas for individuals creating repos there isn't that same level of expectation.
B
But perhaps we should be encouraging more project creators to provide some kind of statement about how they interact with it, or whether they even care about maintaining it. Maybe they're just like: nah, I made it for fun, I don't care what happens to it, I'm never going to touch it again. And then...
A
I don't know what the world thinks, but my long-standing impression of Google is that, more than any other open source contributor, they're willing to abandon platforms that don't seem to be working, and projects that don't seem to be working, and they do it in a very public way. So I know Google can take something away if they decide it's not useful to them, and I just accept that as a risk; I forget what the chat thing was ten years ago.
A
So might our next step be... I guess I said a lot of stuff, and I don't know if anyone wrote it down, but maybe, Vinod, when you take a look at a downstream dependency metric: we've talked about a lot of these different factors, so see if you can propose something that we can chew on and actually debate the merits of, so that it's not just a free discussion, but we have a straw dog of "here's..."
A
They would have to be package-managed, but it would also be good to know what upstream dependencies I have, and I'd be curious what the z-scores of the things I have upstream dependencies on are. Not all of that might be accomplishable, but I know there's a lot here. This feels like when we first started talking about dependencies: it's a lot.
B
Short story is no, just because the way things are named is more characteristic of the community and the individual that released the repository than of the project community or the project code base itself. At least, that was my top-line thing. But when you're trying to define definitive boundaries around any concept like this, with chaining or just unclear structures, it's always going to keep getting expanded or more complex if we let it, yeah.
B
It's worth talking through all those edges, because I think in order to count anything, we have to define it explicitly, or else it's not a countable thing. So it's worth having these sort of hairy, long-tail, ambiguous conversations, because there are many arms, so we've got to lock it down.
A
What are the things that would actually be worth it, that people could use in different roles? We can see that fairly easily after we've looked at it: we're not trying to build all the metrics, or one uber-metric that doesn't answer any one question well.
A
Yeah, although the characters... I don't know the particular actors they chose for the movie, oh...
A
Yeah, it was an amazingly cast film. I would say we are technically out of time now, but I think we've come to some sense of what we're going to do next with downstream dependencies, working towards a metric that is useful.
B
The tricky part, and I know we're at time, so I'll say this quickly, is that we might want to think through a metrics model, because it might help us find the gaps where we don't have defined metrics yet, before we could publish the model. I have a feeling that we could outline an idea that is comprehensive and succinct, but then we'd only have, like, three out of five of the things that we actually want to talk about.
A
All right, well, thank you everyone. I hope the discussion was as valuable for you as it was for me. Kate, did I miss the software engineering meeting on Monday?