From YouTube: CHAOSS Risk Working Group 5-13-21
A
All right, welcome to the Risk Working Group for May 13, 2021. We're beginning with a brief discussion about a White House executive order related to computer security, because apparently someone hacked the main gas pipeline in the United States, if I'm recalling correctly, and part of that order is the software bill of materials. We do have that as one of our metrics.
A
Yeah, I wasn't even aware of it. I was writing a column for a magazine, so.
A
Appreciate that. And if folks don't mind, if you feel comfortable, you can add your name to the meeting.
A
Okay, Michael, what I like to do is just welcome new people. We like to give people that are new an opportunity to introduce yourself and your interest in the Risk Working Group.
D
Awesome. Hi everybody, I'm Michael Scovetta. I run an open source security team at Microsoft, and I'm also a lead of one of the working groups within the OpenSSF, in particular the Identifying Security Threats working group. We recently released a metrics dashboard around open source, and then everybody, well, not everybody, some folks said, you know, hey:
D
How does this intersect with CHAOSS? Did we keep them separate for a reason, or did we just not talk? So I'm here. I'm super interested in learning about the direction you're going in terms of measuring security risk around open source, and whether there's cross-pollination, or a collapse, or whatever, between what we all do, so that we don't spend the same cycles doing the same work.
A
No. And David, I don't know if you know David Wheeler; David's on my working group as well. David has been one of the people who has made a point of saying, hey, the OpenSSF is doing some work, we should definitely talk to them. And, like, when are your meetings, so that we can get that in the...
D
Within the next couple days we're gonna re-jigger the times. It's been Monday and Wednesday mornings, like every other week, but that's been confusing, so we're trying to find a single slot that works. I will get it in the notes, though.
A
Okay, yeah, that would be great. I know I've had conflicts for some; I've got them on my calendar, but I've had conflicts, so if they're moving, that is good news for me. So, since David's talked a little bit about the OpenSSF and the overlap, maybe you could tell us...
D
We're targeting a couple different types of stakeholders, and it's really all the stakeholders we could think of: the developer themself; an upstream developer who's consuming a dependency; and actual application developers that use lots of open source.
D
We're somewhat enterprise focused there, but I think that's where the bulk of application-level development usually takes place. For the developer themself, they want to understand: how am I doing on my component library, my thing? From an upstream perspective: of the dependencies that I consume, what do I look at? And then from an enterprise perspective: I use a hundred thousand different open source components, which ones should I be worrying about?
D
Do I need to worry about any of these? And then, when it really gets down to what am I actually measuring, we spent a lot of time going through the types of metrics that make sense, which ones are too blurry, which ones are actionable, and which ones are just kind of noisy. That's what we settled on for the initial phase of this dashboard.
D
Collecting existing metrics and trying to tell a story with them. So metrics.openssf.org is the website that has this all on it. It's a Grafana, you know, thing, and we take data from three different OpenSSF projects. David's badge program is one. The Scorecard project, which is fully automated, looks at a GitHub repo and says...
D
And then the third one is the OpenSSF criticality score project, and that one is intended to say how important this project is to the larger ecosystem, to the world, kind of thing. So projects like Node and Kubernetes and things like that are just more important than an open source calendar widget, and therefore, to be able to say: the way that I would have used information like that is, here's a really, really important project.
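The criticality score D mentions is, in its published form, a weighted average of log-scaled signals. A minimal sketch of that shape follows; the signal names, weights, and thresholds here are invented for illustration, not the project's actual defaults.

```python
import math

# Illustrative signals: name -> (weight alpha_i, saturation threshold T_i).
# These names and numbers are assumptions, not criticality_score's defaults.
SIGNALS = {
    "contributor_count": (2.0, 5000),
    "commit_frequency": (1.0, 1000),
    "dependents_count": (2.0, 500000),
}

def criticality(raw: dict) -> float:
    """C = sum(a_i * log(1 + S_i) / log(1 + max(S_i, T_i))) / sum(a_i).

    Each signal saturates at its threshold, so the result lands in [0, 1]."""
    num = den = 0.0
    for name, (alpha, threshold) in SIGNALS.items():
        s = raw.get(name, 0)
        num += alpha * math.log(1 + s) / math.log(1 + max(s, threshold))
        den += alpha
    return num / den
```

The log scaling is what lets a project with half a million dependents and one with five hundred both register without the larger one drowning out every other signal.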
A
Right, right. Do you know Duane O'Brien, at Indeed? I don't actually know... So Duane and I have had a number of conversations. As an OSPO manager, his challenge is that he has 11,000 projects that touch his ecosystem in one way or another, and he wants a little...
E
I'm sorry, Duane. It's all right! I didn't see you show up, because I...
E
I was late; it's okay. It was just neat to hear me mentioned organically there. One of the reasons for that analysis, though, is (and I think you hinted at this a minute ago) what do we do with it once we have that information. My interest is: if I see a project that looks like it's trending in an unhealthy direction, I'd like to be able to mobilize money or developers to that project so that we can turn it around, if it's necessary. So I'm less interested in, you know, hey, this...
D
Absolutely, yeah. So the whole direction that we're going: we really wanted to get something out, because we've been talking about it for about a year, and we're getting really tired of just talking and not, like, doing.
D
So we got, we'll call it an MVP, out. But now we're going back and looking at the actual metrics to see: does this metric really make sense? Is just the fact that you run static analysis the thing that we should be measuring, or is it, are you fixing the things that you found, and how do those weight together? What I would love to get to is an SSL Labs A+ score.
D
You know, A through F. And I totally recognize that they're, like, I...
D
Oh, sorry. If you go to SSL Labs, it's just a letter grade, given to everybody.
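Collapsing a composite score into a single SSL Labs-style letter grade is mechanically trivial; the hard part is choosing and defending the cutoffs. A sketch, with entirely made-up grade bands (SSL Labs publishes its own grading rules):

```python
# Hypothetical cutoffs, invented purely to illustrate collapsing a
# 0-100 composite score into a single letter grade.
GRADE_BANDS = [(97, "A+"), (90, "A"), (80, "B"), (70, "C"), (60, "D")]

def letter_grade(score: float) -> str:
    """Return the first grade whose cutoff the score meets, else F."""
    for cutoff, grade in GRADE_BANDS:
        if score >= cutoff:
            return grade
    return "F"
```

The simplicity is the point D is making: a reader can act on "B" far more easily than on a wall of raw per-metric data.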
D
Because the problem with a lot of dashboards that I've seen about this (I think the CNCF has dashboards on their projects) is that it is so much data. I can look at all the data and not know what to pull out of it; I don't know if it's good or bad, or, you know, it's just data. So I think we need to condense things down to: how is it overall, and then maybe what to do with that.
D
But we're not there yet. And I'm also super conscious of having large enterprises come in and tell open source developers how awful their thing is, and of the messaging there, because that's not what I want to do and that's not what it's about. So yeah, there's challenges all over.
B
By overall, are you talking about, say, all scores across all projects in an ecosystem, or in a category? So you can say: your project is X percent lower or higher than similar kinds of projects. There are difficulties in making those kinds of decisions and categories, but some way to baseline yourself.
D
Yeah, so we've talked a lot about that. It came up in the context of: I want to use a crypto library and I'm on a Python stack; here are my four options, and this one option is significantly better than the other three. That's interesting.
D
I don't know how to do that without manually curating it, or having it be really kind of sloppy, but I think that would provide a lot of value to the community. At the same time, I want to be careful about how objective that is, because I don't want to be picking winners and losers out there.
D
You
know,
unless
I
can,
I,
unless
I
feel
really
good
about
the
the
objectivity
of
the
metrics,
that
we
that
we
show
even
just
having
the
list
and
saying
we're,
not
we're
not
even
this
kind
of
goes
against
the
a
plus
score
but
like
like
here
here
are
your
five
libraries,
and
here
was
the
last
time
each
of
them
were
updated
and
here's
when
you
know
the
the
three
of
them
use
static
analysis,
the
other
one
doesn't.
I
think
that
would
be
interesting
too,
as
a
maybe
as
a
starting
point.
D
Yes, yes. So the content there, the project itself: there's a Django front-end thing, which is like the front page that you see, and then once you go into it, it's Grafana. The dashboard config is still just in Grafana; we need to somehow get that into GitHub.
A
Yeah, one of the things that CHAOSS has sort of held as a value over time, as we generate the metrics: we help provide consistent definitions and tools that make them concrete and useful for people, but we don't score them for people. I think security is a little bit different, but in general we let each organization apply the metrics and rank and prioritize things for their own purposes.
A
The advantage of this for some of the security applications is that you can have people going and doing kind of white-hatty sorts of things, or disclosing things privately or in private groups, without broadcasting. There's this security infrastructure where we don't like to broadcast the big issues until they're closed, and so this is an emerging technology for identity that may have some utility in the identification of bugs and the ability of people to share them. So I'll just throw that out there. Yeah, that's interesting.
A
I can send you a link on that. It's in my rather scribbled-y notes.
D
I mean, different workgroups are approaching things differently. I think dependencies themselves impact the top-level thing just the same as the top-level dependencies do. So, right, yeah, I don't think we've ever consciously made the decision to include or exclude them; we just always assumed that they were there.
A
Right-
and
I
think
I
think
in
in
the
four
or
five
years
of
chaos-
we've
gone
from
the
challenge
of
open
source
itself,
growing
at
a
rapid
pace
and
needing
scaling
metrics
to
I
think,
in
the
last
18
months,
dependency
issues
have
become
incredibly
visible,
yeah,
definitely
in
the
community
and
and
so
hence
our
turn
in
that
direction,
and
maybe
one
of
the
things
sofia.
Where
should
we
go
from
here?
A
Should
we
talk
a
little
bit
about
some
of
our
mvps
or
should
or
maybe
sophia
duane,
I'm
looking
at
some
of
the
people
who
kind
of
guided
these
discussions?
Should
we
talk
about
some
of
these
mvps
a
little
bit
or
because
one
of
the
I
think
one
of
the
things
we
have
here
is
our
our
ospo
con
and
osseu
talk
discussion,
but
I
think
that
kind
of
takes
a
back
seat
at
this
moment,
and
maybe
maybe
we
talk
about
our
mvp
kinds
of
things,
sophia
dwayne.
What
do
you
think
arfan?
A
So I think Sophia stepped away, so I'll just take us to it. We went through a rather long discussion, and let me bounce around a little bit here to these talks. One of the discussions that we've had is that there are so many different types of dependencies, and ways that they're architected. So one of the talks is from Dhruv, who I don't think is on the call. This is from Dhruv's Google Summer of Code project, which I cannot speak of until May 17th.
A
Regarding what's happening with it, I will say that we have three students working on risk areas in the Google Summer of Code for Augur and CHAOSS. There's direct dependencies, transitive dependencies, interdependent dependencies, and circular dependencies. These are the basic four categories that we've identified, and it took us quite a while to get to them. So I'm interested: when we think about things like the work that you're doing at Microsoft, or GitHub, or wherever, at other companies, when you put together the dashboard, how do you think about how to communicate dependencies?
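The four categories A lists can be made concrete on a toy dependency graph. Everything below (package names, graph shape, the exact definitions of "interdependent" and "circular") is an illustrative assumption, not the working group's formal definitions:

```python
# Toy dependency graph keyed by package name; "app" is our project.
GRAPH = {
    "app": ["a", "b"],
    "a": ["b", "c"],   # "b" is both direct and pulled in via "a"
    "b": [],
    "c": ["d"],
    "d": ["c"],        # c <-> d form a circular dependency
}

def reachable(root):
    """Every package reachable from root (excluding root itself)."""
    seen, stack = set(), list(GRAPH.get(root, []))
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(GRAPH.get(node, []))
    return seen

def classify(root="app"):
    direct = set(GRAPH.get(root, []))
    transitive = reachable(root) - direct
    # Interdependent: declared directly AND reachable through another direct dep.
    interdependent = {d for d in direct
                      if any(d in reachable(other) for other in direct - {d})}
    # Circular: a package that can reach itself again.
    circular = {n for n in reachable(root) if n in reachable(n)}
    return {"direct": direct, "transitive": transitive,
            "interdependent": interdependent, "circular": circular}
```

On this toy graph, `b` comes out as interdependent and `c`/`d` as circular, which mirrors the distinctions the Summer of Code images apparently draw.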
D
It's a hard one. I'll speak both as Microsoft and as the OpenSSF. On the Microsoft side, we flattened the transitive list. So we just have a giant list of: here's everything; you have a vulnerability in it; you've got to do something about it. That's hard, because if you have the vulnerability in dependency C, down there in the transitive list, and A is up to date, what...
D
On the OpenSSF side, at least as far as the metrics go, we would just score them all separately, and then, as you evaluate your entire transitive closure, you see that you're using C and it has a vulnerability, or is unmaintained or whatever, and you do whatever you want at that point with that information. The thing that's missing is, and I don't know that there's a lot of rigor here yet, probably not, but...
D
There can't be rigor yet, right. So if you go up to the interdependency one...
A
This one? No.
D
One more up. So a vulnerability in A has some likelihood of impact on the project, right? A vulnerability in C, on average, will have less likelihood of impact on the project, just because of the nature of dependencies: you don't use the entire thing, right?
D
And then it includes everything, so you have these enormous transitive closures where you really only need this little tiny thread of execution through it. And the problem is, when you tell people to go and upgrade this thing, even if that were actionable, they're wasting their time, because it has no impact on the final thing. So how do you reprioritize and risk-score vulnerabilities based off of where they sit in the transitive closure? Yeah.
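One naive way to sketch the reprioritization D is asking about is to discount each vulnerability's score by its depth in the transitive closure. The graph, severity numbers, and decay factor below are invented, and this is a crude proxy for "you probably don't exercise the whole thing," not a rigorous likelihood model:

```python
from collections import deque

# Toy dependency edges and CVSS-like base scores; all values invented.
EDGES = {"app": ["a"], "a": ["b"], "b": ["c"], "c": []}
VULNS = {"a": 7.5, "c": 7.5}

def depth_weighted_risk(root="app", decay=0.5):
    """Discount each vulnerable package's score by how deep it sits
    below the root: direct deps keep full weight, deeper ones decay."""
    depth = {root: 0}
    queue = deque([root])
    while queue:                      # BFS to find each package's depth
        node = queue.popleft()
        for dep in EDGES.get(node, []):
            if dep not in depth:
                depth[dep] = depth[node] + 1
                queue.append(dep)
    return {pkg: score * decay ** (depth[pkg] - 1)
            for pkg, score in VULNS.items() if pkg in depth}
```

With identical base scores, the direct dependency `a` outranks the deep dependency `c`, which matches the intuition voiced above that a vulnerability in C matters less, on average, than one in A.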
D
That would be a terrific project for someone. So...
A
Yeah, I can't speak to risk per se, but I can say, in my own experience with npm, the dependency tree affects me not from a vulnerability perspective as deeply as it does when one thing changes and a lot of other things are dependent on it: it breaks everything.
A
That's the challenge with dependencies that I face more often than security vulnerabilities, honestly, with npm. With these, then, it's more the circular dependencies, and these networks of things that all depend on the same things. That kind of thing.
C
I think, Sean, you asked what GitHub does with its dependency graph? Was that... yeah? So...
C
My understanding is (and this could be wrong, so let's just pretend I'm right for a second): you have the direct dependencies that we see, that you've expressed in, like, a lock file or something, and then the dependency graph does also have your transitive dependencies.
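The direct-versus-transitive split C describes can be read straight off npm's files: the manifest declares direct dependencies, while the lock file records everything that got resolved. A minimal sketch with invented package data (real lockfiles carry much more structure, such as nested node_modules paths and dev/peer dependencies, than this assumes):

```python
import json

# Toy stand-ins for package.json (direct deps) and package-lock.json
# (everything resolved); names and versions are invented.
package_json = json.loads("""
{"dependencies": {"left-pad": "^1.0.0", "express": "^4.0.0"}}
""")
lockfile = json.loads("""
{"packages": {
  "node_modules/left-pad": {"version": "1.3.0"},
  "node_modules/express": {"version": "4.18.0"},
  "node_modules/body-parser": {"version": "1.20.0"},
  "node_modules/qs": {"version": "6.11.0"}
}}
""")

direct = set(package_json["dependencies"])
# Strip the node_modules/ path prefix to recover package names.
resolved = {path.rsplit("node_modules/", 1)[-1]
            for path in lockfile["packages"]}
transitive = resolved - direct
```

The difference set is exactly the transitive layer that tends to surprise people when a vulnerability report lands.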
C
Yeah. And I know that there are folks looking at how we can be sure that you're actually even calling a dependency, because you might have expressed it as a dependency, but is it actually used? I think there's a reasonable number of projects that have a dependency that they're never actually using. And that's the thing that requires sort of knowledge of the library, and how to represent that.
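The "declared but never actually used" check C mentions can be sketched for Python by parsing the source for imports and diffing against the declared requirements. The names below are invented, and one real-world wrinkle is noted in the comments: distribution names don't always match import names (scikit-learn vs. sklearn), which a real tool would have to map.

```python
import ast

# Invented requirement list; assume distribution name == import name here,
# which real tools cannot assume (e.g. scikit-learn imports as sklearn).
declared = {"requests", "flask", "leftover-unused"}

source = """
import requests
from flask import Flask
"""

# Walk the AST and collect top-level module names actually imported.
imported = set()
for node in ast.walk(ast.parse(source)):
    if isinstance(node, ast.Import):
        imported.update(alias.name.split(".")[0] for alias in node.names)
    elif isinstance(node, ast.ImportFrom) and node.module:
        imported.add(node.module.split(".")[0])

unused = declared - imported
```

Static analysis like this still misses dynamic imports and plugin loading, which is part of why the GitHub folks describe the problem as requiring knowledge of the library itself.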
C
So
I
know,
I
actually
think
that
I
I
think
I
might
have
to
bring
somebody
along
from
github
who
actually
works
on
some
of
these
engineering
features
at
some
point,
because
I
think
I
I
know
who
I'm
thinking
of
guy
called
doug
krieger,
who
I
think
would
really
enjoy
like
giving
you
like
an
overview
of
what
github
does
under
the
hood.
If
that
would
be
interesting,
that.
C
Phenomenally interesting, yeah. The reason I think he might turn up is that he reads the literature and stuff, because a lot of what they're doing is fairly cutting edge. You know, some engineers don't want to come and talk to a group of people like this, because academic work is sometimes a little hard to know how to enter into, although this is pretty applied, actually. Yeah, I think so.
A
Like, as you said: I've reduced the dependencies in Augur by 80 percent over the last six months, simply by determining what we're not using anymore and taking them out of our setup.py and our package-lock.json.
D
I wouldn't have thought of that condition; I would have thought that would have been considered a bug, something to really avoid. But I'm seeing it a lot recently as I've started looking for it. So this is: I take a dependency on something at a top level, but I directly reference one of its sub-dependencies without explicitly adding that sub-dependency to my package, you know.
A
I had six Google Summer of Code students working on machine learning stuff, and many of them used TensorFlow and SciPy and scikit-learn, and you can't have multiple different versions of those in a package. So, in order to standardize the version, I would put it in the setup.py at the core of the project, and then all the pieces of the project that use it, I would know, are on the same version.
A
All of the pieces, and all the modules that I plug in. Eventually I go back and I undo that, but to get it going, to make it run: that's why that happened, right? It was just easier for me to do it that way than to go through six different plug-ins and sort them all out individually. Yep, exactly. It also gives me a higher-level view of what all my dependencies are, because otherwise they're scattered about different modules. So that's why people do dumb things, yeah.
F
I think the point Michael pointed out, that our project directly depends on these, is truly captured in the second-to-last image, the interdependent dependency, where our project depends on A and B, and A again depends on B.
F
So in this, if you look at the transitive one over here: our project depends on A, our project depends on C, and A depends on C. But there is a scenario where our project depends on both A and C, which is captured in the next graph. I see.
B
Which now, I think, brings me back to the context discussion, because for individual dependencies it'll be hard to know the full context. But if things are recurring in any sort of overlapping graph, then that should have a greater weight or greater interest, because it's assumed it'll muck up things more, or be less contained.
A
I love David, but he knows too much. I mean, if this were a criminal enterprise he'd be killed, because he simply knows too much. I think this is great. I want to understand a lot more about what is going to be represented in the MVP for the OpenSSF.
A
So these include just enumerating the dependencies. Obviously there are tools that do that; CHAOSS would create a metric that defines what that means. Basically, we follow a goal-question-metric format when we build a metric. So what's the goal? Understanding your dependencies. The objective is to list them, and then the metric is some kind of enumeration. Also, some kind of evaluation of sustainability risk: this could include issue closures, number of committers, core stability.
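The goal-question-metric shape A describes (goal: understand sustainability risk; question: is anyone maintaining this; metrics: issue closures, committer count, release stability) can be sketched as a small evaluation function. The thresholds and field names below are invented, not CHAOSS-defined values:

```python
# Hedged sketch of a sustainability-risk evaluation; every threshold
# here is an assumption for illustration, not a CHAOSS metric definition.
def sustainability_flags(meta: dict) -> list:
    flags = []
    if meta.get("committers", 0) < 3:
        flags.append("low bus factor")          # number of committers
    opened = meta.get("issues_opened", 0)
    closed = meta.get("issues_closed", 0)
    if opened and closed / opened < 0.5:
        flags.append("slow issue closure")      # issue-closure ratio
    if meta.get("months_since_release", 0) > 24:
        flags.append("stale releases")          # core/release stability
    return flags
```

Keeping the output as named flags rather than a single score is deliberate; it matches the CHAOSS stance voiced earlier of providing definitions and letting each organization do its own ranking.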
A
Third,
one
is-
and
I
I
guess,
I'm
just
reading
these
to
you
just
to
give
you
some
context.
The
dependency
range
like
how
many
times
a
single
dependency
is
referenced,
so
dwayne
talks
about
you
know.
I've
got
this
rep.
I've
got
this
dependency
referenced
in
like
14
different
places,
but
is
it
actually
used
in
that
many
different
places
and
how
much
of
it
is
used?
So
that's
part
of
it.
I
don't
know
if
you're
probably
familiar
with
libya's.
If
you
know
david
david,
david.
A
So one of the things Duane and I talked about is that, essentially, on another tab we've identified a number of tools that identify dependencies in a number of different languages. And so one of the things that we're going to try, you know, pilot or give it a shot, is collecting all of these different tools and having some kind of wrapping tool that you can apply to any repository to show dependencies, basically calling these other tools as Python-wrapped modules.
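The wrapping-tool idea A describes could look something like the dispatch sketch below: detect which manifest files a repository contains and hand each to a per-ecosystem parser. The parser functions here are placeholders that return dummy strings; in the plan above, each would wrap one of the existing tools on that tab.

```python
from pathlib import Path

# Placeholder parsers; in a real wrapper each would shell out to or
# import an existing per-ecosystem dependency tool.
def parse_python(path): return [f"py-dep from {path.name}"]
def parse_npm(path): return [f"npm-dep from {path.name}"]
def parse_maven(path): return [f"maven-dep from {path.name}"]

PARSERS = {
    "requirements.txt": parse_python,
    "setup.py": parse_python,
    "package-lock.json": parse_npm,
    "pom.xml": parse_maven,
}

def enumerate_dependencies(repo: str) -> list:
    """Walk the repository, dispatch every known manifest to its parser."""
    deps = []
    for manifest, parser in PARSERS.items():
        for path in Path(repo).rglob(manifest):
            deps.extend(parser(path))
    return deps
```

One design question this surfaces: whether the wrapper normalizes each tool's output into a common schema (which the dashboard discussion later in the meeting would need) or just concatenates raw results.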
D
So I'll point you to it; I'll add this into the notes. We have an open source project called OSS Gadget that we built to kind of scratch our own itch around: I want the content for the npm module foo, or the Python module this, or clone this, but I didn't want to have to remember where to go to actually get the bits.
D
It
does
health,
it
looks
for
like
crypto
there's
like
eight
or
ten
different
modules
in
it.
One
of
them
is
metadata,
so
like
some
of
the
like
libraries
that
I
o
it'll
fetch
the
metadata
and
normalize
it
and
give
it
to
you.
D
Will
do
it?
Okay,
third,
one
under
under
us
all
right.
D
But
yeah,
I
I
mean
in
terms
of
enumerating
the
dependencies
the
other
problem
you're
gonna
have
there
is
the
dependencies
are
in
in
some
cases,
and
I
don't
know
how
trivial
this
is
only
really
known
at
runtime,
like
you
can't
actually
introspect
like
a
manifest
file
and
be
sure
exactly
what
you're
going
to
get
right
and
because
different
versions
can
have
like
wildly
different
chains,
past
them
and.
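One partial answer to the manifest-versus-runtime gap D raises, for Python at least, is to ask the running interpreter's own package metadata what is actually installed, rather than trusting what a manifest declares. A sketch (the `declared` set is an invented example; this still misses dependencies resolved dynamically, such as plugins and optional imports):

```python
from importlib import metadata

def installed_packages() -> dict:
    """Map each distribution actually present in this environment
    to its installed version, via the interpreter's own metadata."""
    return {dist.metadata["Name"]: dist.version
            for dist in metadata.distributions()
            if dist.metadata["Name"]}

# Invented declared list; diffing shows what a manifest promises
# but the runtime environment does not actually have.
declared = {"definitely-not-installed-pkg"}
missing_at_runtime = declared - set(installed_packages())
```

This is the environment-level view; truly knowing what executes, as the Zephyr discussion that follows suggests, requires runtime instrumentation rather than any kind of static listing.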
A
If
you
know
you
know
david,
you
probably
know
kate
stewart,
maybe
not
kate's
kate's
the
program
manager
or
for
the
zephyr
project,
which
is.
A
And
and
so
her,
she
is
very
concerned
about
the
distinction
between
dependency
dependencies
in
development
and
dependencies
at
runtime,
because,
obviously
for
her
and
safety
critical
systems,
runtime
dependencies
are
where
it's
all
at,
and
that
is
a
separate
kind
of
understanding,
so
excellent
point
there.
I
I
think
we
don't
have
runtime
metrics
here,
simply
because
kate
is
the
master
of
that
and
the
rest
of
us
feel
like
we
don't
know
it.
A
At
least
I
feel,
like
I
don't
know
very
much
about
how
to
identify
them
distinctly
and
then
hey,
look
osf
scorecard
there
you
go
is
is
one
it's
basically
reverse
engineering,
that
into
a
metric
or
a
collection
of
metrics,
because
we've
talked
about
it
a
lot,
it's
obviously
a
very
good
tool
and
then
finally,
some
kind
of
matrix
occasionally
comes
up.
Since
we
opened
with
security,
we're
not
close
with
security
that
there
is
this
relationship
between
dependencies
and
vulnerabilities
and
understanding
not
only
the
dependencies
in
a
project.
A
But
what
are
the
known
vulnerabilities
and
the
oss
scorecard?
I'm
sure
addresses
that
in
some
way
we've
talked
about
how
sketchy
and
cobbled
together
the
the
record
of
what
known
dependencies
are
like
it's.
I
guess
this
federated
kind
of
system
that
used
to
run
out
of
gaithersburg
maryland
the
standards
body
there.
I
can't
remember
the
name
of
it:
yes,
nist
thank
you,
yeah,
yeah
and
so
there's
there's
been
varying
degrees
of
you
know:
people
even
knowing
how
to
get
a
vulnerability
number.
D
Oh
yeah
yeah
and
I
think
github
is
helping
a
lot
there
in
in
their
their
role
as
a
cna.
I
I
would
and
I'm
sorry
I
feel
like
I'm
talking
a
lot.
I
do
I
you're.
Well
you
I
just
came
to
listen
but
look
at
look
at
all
the
knowledge
we're
extracting
from
you.
You
know
we're
using
you,
proud
of
it
happy
to
help
the
difference
between
like
cves
and
vulnerabilities.
D
Yep, but that takes a lot of work by, you know...
D
The second-to-last link in that paragraph. Okay, there we go. So this is the Grafana thing, and...
A
We have a tool that uses the FOSSology scanners to identify SBOMs. It kind of stands alone, though you could probably use FOSSology itself to get that. If you're less familiar with license scanning kinds of things, or software-bill-of-materials enumeration kinds of things, let me know, and I've got a couple things that we do with Augur. I'm not saying use Augur; I just think there are pieces that we've learned that are useful in this context.
D
These are all things where I literally consume a giant JSON file, or, yeah, a bunch of JSON files; dump it, flatten it, throw it into a database, and then the dashboard is driven from the database. So running FOSSology or Augur or whatever separately, churning out kind of an export that can be imported, keeps...
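The import pipeline D describes (exported JSON in, flattened rows into a database, dashboard reads the database) can be sketched in a few lines. The field names in the sample export are invented for illustration:

```python
import json
import sqlite3

def flatten(obj, prefix=""):
    """Flatten nested JSON objects into dotted key/value pairs."""
    rows = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            rows.update(flatten(value, name + "."))
        else:
            rows[name] = value
    return rows

# Sample export with invented fields, standing in for a tool's JSON dump.
export = json.loads('{"project": "augur", "scores": {"badge": "passing", "criticality": 0.59}}')
flat = flatten(export)

# Load the flattened rows into a table the dashboard can query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metrics (key TEXT, value TEXT)")
conn.executemany("INSERT INTO metrics VALUES (?, ?)",
                 [(k, str(v)) for k, v in flat.items()])
```

Keeping the scanner and the dashboard decoupled through a flat export like this is exactly what lets FOSSology, Augur, or any other tool feed the same database.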
A
Yeah, we just import the JSON file in Python and dump it. This looks... this is a really, really good start. I like that the date the badge was achieved is there. So, OpenSSF Best Practices: is that different than the CII Best Practices badge that Dave...?
D
No, sorry, it's actually identical, because CII as an organization is, I think, gone entirely; all of the work kind of rolled into the OpenSSF.
D
That's a great question, and that is actually because I don't make a judgment on what passing means. Passing is whatever the badge program told me was passing; there's literally a field that just says passing.
A
Whoa, wow. So, my criticality score: what does criticality mean? Oh, how important it is, importance to the overall universe. Okay, all right, 59, that's not bad! We're passing dynamic analysis; we're failing on some of the same things Kubernetes is. So I think Zephyr, if you go there, is like one of the super-achievers.
A
Yeah, they have over 70 different licenses in that project, and I think that's largely a byproduct of all the different device manufacturers that are contributing. That is a challenge they face, for sure. Well, Michael, Arfon, thank you for joining us. Duane, good to see you again. Everybody, Sophia, Elizabeth: excellent discussion.
A
I really learned a ton today, and we got none of the agenda done, so I feel like a successful meeting facilitator, because I ignored the agenda in favor of what people found more interesting. Any final words?
C
When's the OSPOCon talk deadline?
A
Cool, yeah. We've got two more meetings before then, probably barely one, so we should really spend some time on that at the next meeting, and I do encourage folks to take a look. Vanad did a ton of work putting together some summaries of the ideas, so I will just encourage everyone to take a look at that, if you have a chance, prior to the next meeting.