From YouTube: CHAOSS Risk Working Group Call 2-18-21
A
There we go. So this is the Risk Working Group meeting for February 18, 2021, and today on the agenda is a review of some of the things we've talked about over the last several weeks, which essentially boils down to: what is the minimum viable product, and what are the metrics, for the Risk Working Group? There are a couple of detailed summaries listed in the agenda. I won't spend any time walking us through them; I think everyone is either familiar with these or has seen them.
I built this little spreadsheet with two tabs: needs and motivations, and resources and links. The driving question, I think, is: what actionable metrics can we provide that would be useful? A few things came to the surface as possibly useful: enumerating vulnerabilities associated with particular packages,
how do I know a component will continue to be available, and then of course legal and compliance concerns. On the second tab there's a host of available resources at our disposal, including the OSSF working group. So this is still a lot to take in and digest.
So I thought maybe we could review that briefly and then ask the questions of: where do we get involved, what are the tools that we leverage, and what are some of the possible MVPs?
So I've just scanned the agenda here. Some of the possible minimum viable product metrics that I think emerged were things like enumerating the upstream dependencies for a particular project, then maybe using libyears as a way of expressing the risk associated with those dependencies, and possibly enumerating known vulnerabilities within upstream dependencies.

With that introduction of the summary, I'll throw it open to comments, thoughts, and direction from y'all.
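A minimal sketch of the libyear idea mentioned above, expressing dependency risk as how far behind the latest releases a project's upstream dependencies are. The dependency names and release dates below are made up for illustration; real data would come from a package registry.

from datetime import date

# Hypothetical input: for each upstream dependency, the release date of the
# version in use and the release date of the newest available version.
dependencies = {
    "example-lib-a": {"used": date(2019, 3, 1), "latest": date(2021, 1, 15)},
    "example-lib-b": {"used": date(2020, 11, 2), "latest": date(2020, 11, 2)},
}

def total_libyears(deps):
    # Sum, over all dependencies, how many years the used version lags the latest.
    return sum(max((d["latest"] - d["used"]).days, 0) / 365.25 for d in deps.values())

print(f"Total libyears behind: {total_libyears(dependencies):.2f}")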
B
Although you already know about it, I'll just add CII Best Practices badge data, because of course that's got an API. I'm happy to help if you want to extract data from it and you're having problems; just let me know and I will do what I can to help you out.
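Since the badge data comes up repeatedly below, here is a rough sketch of pulling one project's entry from the Best Practices BadgeApp JSON API. The project id is a placeholder and the exact field names should be checked against the live API; this is an illustration, not a documented client.

import json
import urllib.request

BADGE_URL = "https://bestpractices.coreinfrastructure.org/projects/{id}.json"

def fetch_badge_entry(project_id):
    # Fetch one project's badge entry as JSON (assumes a /projects/<id>.json endpoint).
    with urllib.request.urlopen(BADGE_URL.format(id=project_id), timeout=30) as resp:
        return json.load(resp)

entry = fetch_badge_entry(1)  # placeholder id
for key in ("name", "homepage_url", "repo_url", "badge_percentage_0"):
    print(key, "=", entry.get(key))  # .get() because field names may differ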
A
Does the badge include dependencies?
B
No, but it includes data that you can use to help you identify some dependencies.
A
That's fine, I think. Dependencies and vulnerabilities seem to coexist and have a close relationship with each other. The risk of dependencies is a concern because they represent risk from a couple of perspectives: one is the ongoing likelihood of that dependency being sustained, and the other is whether there are known vulnerabilities in a dependency that my project has.
B
Oh okay, this is the CII Best Practices badge. Basically, the CII Best Practices badge project identifies a set of criteria that are believed to be good for security, and if you meet those criteria, congratulations, you get a badge. There are three badge levels: passing, silver, and gold. I will gladly slip in a link to more about this. I lead this particular project, so I know something about it: we've got over 3000 projects participating, the Linux kernel and lots and lots of other folks.
This is so good. I will also slip in a link for the project stats, so you can get an idea of the growth of participation over time.
E
David, are the CII badges self-certified, or are they analyzed and endorsed?
B
Yes, it's much more self-certified. However, where we can, we do have automation that both fills in the answer for you and rejects the human answer if we can show it's false.
So if a human says "I do version control" and there's no repo, you get a "no, show me the repo."
E
All right, so it sounds like it's useful if a person is looking at it, but might not be a reliable indicator if you're trying to do it programmatically.
Let me say that a little bit differently. Given a list of dependencies, if a bunch of them say yes, they have a CII badge, what we know is that they have self-certified, at the very least, that they have a CII badge. But that doesn't necessarily mean they're applying all the best practices; it's not programmatically verifiable, right?
B
That's right, because some of them, for example, one says that your home page has to describe clearly what your project does. I know of no way to do that purely via automation. Now, all their answers must be public, and if they falsify them, we depend on reports; people report back. I usually just tell them, hey, go fix your problem, which is not really adequate, but if it turns out they're generally falsifying, they get kicked off.
A
I can say, from what I did for Augur: I went through the CII Best Practices badge for Augur, and I think the utility it served for our project was that it forced us to think about things that we hadn't done yet, and it became a way of saying, okay, yes, we should do this and we should do that. So it did inject reflection and improvement into the project just by going through the process.
B
Yeah, so the badge tries to be a little bit deterministic, but we focused more on what was important, whether or not it was deterministic. If you want deterministic, there's the OSSF Scorecard, which is already listed here; they focus purely on what we can measure.
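Since the OSSF Scorecard keeps coming up as the "purely measurable" option, here is a sketch of driving it from a script, assuming the scorecard CLI is installed and accepts --repo and --format=json (check scorecard --help for the exact flags); the repository below is only an example.

import json
import subprocess

def run_scorecard(repo):
    # Invoke the Scorecard CLI and parse its JSON report.
    result = subprocess.run(
        ["scorecard", f"--repo={repo}", "--format=json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

report = run_scorecard("github.com/chaoss/augur")  # example repository
# The JSON layout varies by version; print whichever check list is present.
for check in report.get("checks", report.get("Checks", [])):
    print(check)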
C
I don't think we necessarily have to provide the full end-to-end thing, but I think the recommendation is: when you flag all your dependencies, find a way to qualify them or rate them in some measure, and these are mechanisms that you can apply against that list. They can either tell you which ones have or don't have badges, or give you a rating, given these other tools that you've presented.
B
Well, I will say that the good news is, specifically for the badge, three thousand projects is a whole lot of projects. The bad news is that there are literally millions of open source projects, so it very much depends on what your criteria are for trying to do the measurement. For the badges, we have certainly never said "require the badge or never use the dependency."
A
There are a number of tools that already do these things, and I don't think that CHAOSS's software infrastructure needs to go recreate those wheels, but we could define formal risk metrics that use those tools as implementations of the metric. From discussions with many of you, these ones at the bottom are what I would call ideas, like the first idea about getting to actionable, minimum viable product metrics that we can build. I would like to ask that we take a minute to look at these and talk about them. I see a progression from A to B to C: simply enumerating the dependencies is a step; counting the libyears for a project's upstream dependencies is sort of a second step that builds on the first one; and enumerating known vulnerabilities for those dependencies might be a third step. So maybe there are three different metrics, but they build on each other. And then the one I added: the OSSF Scorecard actually looks like a pretty easy piece of software to implement, and we may choose to build a metric that simply reflects what that tool reports.
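For the third step in that progression, enumerating known vulnerabilities for each dependency, one possible implementation is querying a vulnerability database such as OSV. The endpoint, ecosystem, package, and version below are illustrative assumptions, not something specified in the meeting.

import json
import urllib.request

OSV_QUERY = "https://api.osv.dev/v1/query"

def known_vulnerabilities(name, ecosystem, version):
    # Ask OSV for advisories matching one dependency at one version.
    payload = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode()
    req = urllib.request.Request(
        OSV_QUERY, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp).get("vulns", [])

for vuln in known_vulnerabilities("jinja2", "PyPI", "2.10"):  # example package/version
    print(vuln.get("id"), vuln.get("summary", ""))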
B
I just added the security metrics dashboard stuff, where they're starting to build out a tool to show off various kinds of numbers. But of course now the problem is, well, what do we show? What they're currently showing is the Scorecard, the CII Best Practices badge, and some GitHub data, like the number of...
C
On each dependency, it doesn't mention the overall project sustainability, which isn't necessarily an agreed-upon metric, but whether this thing is still being supported, or at least has someone who's an active maintainer, versus it gets wildly out of date and no one will work on it anymore. So some kind of metric that states the longer-term sustainability of it. I know that's a whole can of worms unto itself, so maybe we point to something out of the Common working group.
E
Thank you for putting language around the thing I was trying to get language around, Sofia, because I also have a similar concern here. I'd love to see something in here that points to how active the project is right now. When Tidelift wrote their unmaintained dependency scanner, they focused in on two key things. One of them was: is the code still changing? And if there are new issues, are they closing at or about the same rate as they're coming in, or better than the rate they're coming in? If we see there's no code change and issues are piling up over time, it's a good indicator that that library is probably not being maintained anymore. And if we see that issues are starting to come in faster than they're being closed, once you get past that initial growth or maybe an initial release, and that's the trend over time, that's a good indicator that the maintainer is underwater and that it's at risk. There are three or four different metrics in the CHAOSS group that point to these.

I have them open in another doc, because I had to pull them together for a proposal on how we might meet CIS controls for open source library management.
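A rough sketch of the two signals described here, treating a dependency as likely unmaintained when the code has stopped changing and issues are opened faster than they are closed. The thresholds and the sample data are assumptions for illustration; real inputs would come from the forge's API.

from datetime import datetime, timedelta, timezone

def looks_unmaintained(commit_dates, issues_opened, issues_closed,
                       quiet_days=365, close_ratio_floor=0.5):
    # Signal 1: no commits within the quiet window.
    now = datetime.now(timezone.utc)
    last_commit = max(commit_dates) if commit_dates else None
    code_stalled = last_commit is None or (now - last_commit) > timedelta(days=quiet_days)
    # Signal 2: issues closed at well below the rate they are opened.
    closing_behind = issues_opened > 0 and (issues_closed / issues_opened) < close_ratio_floor
    return code_stalled and closing_behind

example_commits = [datetime(2019, 6, 1, tzinfo=timezone.utc)]
print(looks_unmaintained(example_commits, issues_opened=40, issues_closed=5))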
B
Yeah, I'm actually somewhat skeptical of the issue close count, because the reality is that the number of things that somebody wants is infinite, and I certainly don't know. I will say that we often don't want to kill good ideas, even if we're not going to work on them right now, so I usually leave issues open and just let them hang. I will close them if they're done or if we're not going to do them. But the number of commits being made, I think, is usually a really good indicator.
A
Would you say that the trends by project could be an indicator that takes your concern into account? So if the issues start to close at a significantly lower rate compared with their historical averages, even given that you want to keep some open for long-term concerns?
B
I'm actually still skeptical. What that means is that more people are interested, but the number of things that somebody wants is infinite; human needs are an infinite resource.
That's okay, but they tend to be the small ones too. That's probably not going to change unless they change the Java spec, which has happened, but not very often.
C
Yeah, and so maybe, I'm thinking about, I like the onion analysis, I don't know where that came from, but in terms of, say, the committers that are responsible for the bulk of the activity versus a bunch of people that might just be coming in and out and are more temporary.
So maybe that's a little bit too sophisticated for something like this, but something that could definitely show that the project is still being supported by a group of people who are invested in it, not just people coming in, committing one thing, and leaving, which isn't necessarily a sign that the project is as healthy or being sustained.
A
So we've kind of got a list now of six things, if F is the sixth letter of the alphabet, which I think it is. I'm trying to rearrange and put things in the order that we should do them, or that we would want to do them. Does this order look right? Are there things that we should move around?
G
I would propose we make a title for each as an atomic metric, add it to the list, and maybe start taking them one at a time and discussing them, formalizing them. That way we'll be making progress. I love this discussion and it is giving me a lot of knowledge, but in terms of producing some artifact, a metric, we should pick one, maybe give it a heading or a title, put it in the Excel sheet, and start writing on that.
A
Yeah, I agree, for several of them I would say. The question I was really asking: I'm looking at item four in this agenda as the to-do list we've developed. These are the things we're going to build out, and mostly I was asking if the priority was approximately right, not that they can't be worked on in parallel.
But to give us some focus, does this look like approximately the right order, or is there anything that is dramatically wrong? Because, given this order, we'd start to look at the dependency enumeration, which I believe we started last week, for upstream, the projects that my project depends on, and we could pivot over to continuing to work on that metric.
So do we want to look at the first one? I'm pretty sure we had started this in our last meeting.
So we could continue with essentially this language-level upstream dependency count, which seems like a long name; we might revisit that, but it's basically the repository dependency enumeration task or list, I think. It seems to me like it might be the right thing to just start: maybe spend 10 minutes working on this metric and trying to develop it.
Does everyone think this is a good piece of work to start using this time for?
I don't like the name, but I'll leave it alone for now. Generally what we do is spend maybe 10 minutes working on something and then check back with each other. I'm going to pause the recording, because watching us work for 10 minutes doesn't really make for brilliant entertainment on re-watching.
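Since the group is about to work on the language-level upstream dependency enumeration metric, here is a minimal sketch of what one implementation could look like: reading the direct dependencies a repository declares in its manifest files. Only two manifest formats are handled and the repository path is a placeholder; real tooling covers many more ecosystems.

import json
from pathlib import Path

def enumerate_dependencies(repo_path):
    # Collect the direct dependencies declared in common manifests.
    repo = Path(repo_path)
    deps = set()
    pkg = repo / "package.json"            # JavaScript / npm
    if pkg.exists():
        data = json.loads(pkg.read_text())
        deps.update(data.get("dependencies", {}))
        deps.update(data.get("devDependencies", {}))
    reqs = repo / "requirements.txt"       # Python / pip
    if reqs.exists():
        for line in reqs.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#"):
                # Keep the package name, dropping version specifiers and markers.
                deps.add(line.split(";")[0].split("==")[0].split(">=")[0].strip())
    return sorted(deps)

print(enumerate_dependencies("."))  # placeholder path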
B
If you're trying to answer the question "do I have any vulnerable components?", you definitely need to consider all of them.
C
Yeah, well, it's also the case where, if you have, say, five unique dependencies on the same thing, five different pieces rely on that one, right? If they count it as one, then I would say that's relatively more important to know. Just saying that's one dependency doesn't necessarily capture the fact that if that dependency breaks, now five things break, not just one thing in your application.
B
Most of the tools that I'm aware of, you can walk through and find that, but you have to walk the tree yourself; they're just going to give you a list. And I think the problem here is, I've seen several folks trying to measure that thing you're trying to measure, Sofia, and it is challenging. Just measuring counts actually doesn't do the job, because it may be depending on a very, very tiny piece, or it may be depending on it only because there's an option that might use it and you never use that option.
There has been some work to measure at run time how often it goes into a certain library, to measure importance, and even that's kind of dubious, because that depends on the inputs.
Sometimes the error-handling systems only get hit in incredibly rare circumstances, but the only reason you're using the library is because it can handle those special circumstances. So it's just a mess to handle. I think right now it would be better to be simple here: which ones are you depending on, and go.
B
Yeah, it all depends, of course. From a security point of view, it depends on your threat model. If you're worried about malicious developers, I don't care how many times it's ever used; if it enters my system at all, I'm worried.
E
We could maybe mark this as a thing to come back to, because I think it's worth seeing if we can refine it once we get further into the process. We could start here and then come back to looking at that particular aspect of the question.
C
Yeah, I mean, potentially that could also be one of the sub-characteristics. So if this is just the enumeration of all your dependencies, then the other qualities that we would attach are the vulnerability report, the number of times it's occurring in your system, and things like that. That could be the dimensionality of it. So I'm fine with it being an add-on and not part of this sort of core definition.
What's the best way to call it? Number of occurrences of a single dependency? Yes.
But maybe it would actually go in the other document we were working on, in terms of all of the dimensions you would want to add to dependencies.
A
I don't know that I put it into words, so maybe you should type it in, Sofia. I opened up a spot under C in the notes, and I'm struggling for exactly the wording there, because really you're trying to get at how dependent a project is on this library, and one of the ways of getting at that is how often it is used in different files, and maybe how often it's used at runtime. Maybe those are two different things that we have to define.
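A small sketch of the "how often is it used in different files" dimension discussed here: counting how many Python source files in a repository import a given module. The repository path and module name are placeholders, and a real implementation would need per-language handling.

from pathlib import Path

def files_importing(repo_path, module):
    # Count source files whose import lines mention the module.
    hits = 0
    for path in Path(repo_path).rglob("*.py"):
        text = path.read_text(errors="ignore")
        if any(line.strip().startswith((f"import {module}", f"from {module}"))
               for line in text.splitlines()):
            hits += 1
    return hits

print(files_importing(".", "requests"), "files import it")  # placeholder inputs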
B
I tried to put in a little text in the Google doc under parameters. Feel free to move whatever; I'm just trying to capture something about the discussion we just had.
A
Likewise. We've got about a minute and a half left in our allotted time. I think it was really good work today fleshing out this metric. It's moving very quickly toward something that we can propose, and I would encourage everyone, if you have a chance, to look at the list of other metrics that we laid out, which kind of occur in sequence, before next time, because I think we'll be able to finish a draft of this metric next time and perhaps move on to a second metric. Then maybe some of us can start experimenting with the tools that exist to do these things, and maybe even test the metrics or evaluate them against what's actually available.
B
No, but I think the idea is that next time we finish, we try to wrap that up.