Description
This is a video of a presentation given for the Composition Analysis group's show and tell on criticality and risk scores of open source dependencies.
A
Hello, and welcome to this Composition Analysis show and tell. Today we are going to be talking about criticality scores and dependencies.
A
This is my screen share, right? Yep, okay. And then just one more thing: I'm going to try to go into presentation mode and see what happens.
A
We did start recording, but anyway, we're good. Okay. So I'm going to talk about the Open Source Security Foundation's Criticality Score project. First I'll get into the OpenSSF a little bit. It's a fairly new thing.
A
It
started
in
2020,
it's
a
collaboration
between
a
bunch
of
companies
like
really
big
tech
companies
like
google,
microsoft
and
others,
and
the
linux
foundation
and
the
idea
there
is
basically
you
know
everyone
understands
the
the
the
critical
nature
of
open
source
to
everyone
else's
project
and
and
and
but
in
the
meantime,
there's
not
really
an
endeavor
to
secure
them
right,
so
so
that
that's
basically
kind
of
the
the
reason
to
be
of
these
of
this
of
this
group
and
inside
of
this
group,
there's
several
working
groups
and
they're
based
kind
of
on
different
things
like
security
threats,
digital
identity,
the
group
we're
gonna.
A
The group whose project we're going to be looking at today is the one working on securing critical projects, and it has this concept called the criticality score, which I was interested in. I thought it could be useful for our team. So this is a famous drawing, I think everyone's seen it. It's the xkcd one.
A
You
know
that
a
lot
of
a
lot
of
what
we
think
are
amazing
awesome
projects
have
so
many
hidden
dependencies
that
you
know
you
have
this
you.
You
only
need
like
this
one
like
piece
to
fall
down
to
make
the
whole
project
fall
down,
as
we've
seen
in
supply,
chain
attacks
and
and
critical
vulnerabilities
of,
like
really
low
levels
like
open,
ssl
and
stuff.
So
this
kind
of
the
the
understanding
for
this
project
is
to
figure
out
what
are
those
critical
projects?
A
What
are
those
critical,
the
projects
so
that
you
know
there
are
limited
resources
that
even
these
companies
have
to
support
them
right
to
to
pay
their
developers
to
to
work
on
it
or
to
pay
the
maintainers
to
work
on
it
or
give
time
to
whoever
so
you've
got
to
identify
the
most
critical
and
work
sort
of
your
way
down?
That's
that's
kind
of
the
concept
I
think
there's
a
recent
example
that
we've
all
kind
of
heard
about
because
it
even
affected
gitlab.
A
It
was
the
ruby,
my
magic
gem
that
maintained
it
released
under
mit
license
and
then
the
maintainer
of
the
gem
that
the
hebrew
that
the
maintainer
relied
on
was
shared,
my
menthol,
which
was
released,
released
v2,
and
so
he
basically
told
the
maintainer.
You
know
you
should
take
it
down
or
you
should
change
your
license
and
I
think
the
the
the
developer
and
the
peak
of
kind
of
you
know
maybe
well
learn
to
judge
why
he
did
it,
but
basically
he
he
just
in
like
you
know
he
was.
A
He
took
down
the
germany
archive
which
affected
you
know
well,
something
like
a
half
a
million
repositories,
but
really
other
packages,
including
real,
including
something
we
relied
on
at
gitlab,
and
so
there
was
like
a
a
there.
Was
that
that
day
was
like.
You
know,
panic
for
everyone
who
relied
on
this
gym
to
try
to
like
get
around
it
or
figure
out
how
they
can
make
it
work.
So
so
that
was
that
was
an
example
of
that.
So
there
is
a
security.
This
is
not
necessarily
a
security
component.
A
It
just
demonstrates
how
a
critical
project
affected,
like
so
many
downstream
repositories
and
projects
and
companies.
A
So
so,
what's
the
criticality
score
intro,
the
criticality
intro
is,
is
used
to
identify
this
key
open
source
project
and
it's
kind
of
the
idea
is
to
create
an
objective,
repeatable
rating.
So
I
think
we
all
know
like
let's
say
even
for
yourself
if
you're
gonna,
if
you're
gonna
install
a
project
on
your
own
machine
like
let's
say
from
github,
you
know,
let's
say:
there's
five
different
ones:
you're
gonna
probably
take
a
look
at.
A
You
know,
stars
forks,
you're,
going
to
look
at
how
old
it
is
you're
going
to
look
at
you
know,
maybe
who
the
maintainers
are,
but
that's
kind
of
a
qualitative
feel
that
you
get
right.
You're
like
okay.
This
project
looks
better
than
this
one,
and
and
this
this
is
meant
to
quantify
that
feeling.
I
guess
quantify
the
the
idea
of
what
it.
What
is
a
like
a
critical
project
that
people
can
rely
on
and
again
the
the
interest
is
dispersed
kind
of
the
maintenance
resources
available
for
these
key
projects.
A
So
you
know,
starting
from
the
top,
I
guess
and
again
it's
a
pretty
straightforward
formula:
it's
basically,
you
have
several
parameters
for
rating,
and
then
you
have
a
weight
for
each
parameter
and
a
threshold
which
basically
just
makes
sure
that
some
of
these
parameters-
don't
don't
run
away
from
us
in
case
like
they.
They
weigh
too
heavily.
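The weight-and-threshold idea can be sketched in a few lines. This is a minimal illustration, assuming the log-scaled form from the project's underlying proposal; the parameter names, values, weights and thresholds here are invented, not the project's actual configuration:

```python
import math

def criticality_score(signals):
    """Weighted score in [0, 1].

    `signals` maps a parameter name to (value, weight, threshold).
    Each value is log-scaled and capped at its threshold, so no single
    parameter can run away and dominate the total.
    """
    total_weight = sum(w for _, w, _ in signals.values())
    score = 0.0
    for value, weight, threshold in signals.values():
        # This ratio reaches 1.0 once `value` hits `threshold`, and
        # stays below 1.0 otherwise.
        score += weight * math.log(1 + value) / math.log(1 + max(value, threshold))
    return score / total_weight

# Hypothetical signals: (observed value, weight, threshold).
example = {
    "contributor_count": (400, 2.0, 5000),
    "commit_frequency":  (25, 1.0, 1000),
    "dependents_count":  (200_000, 2.0, 500_000),
}
print(round(criticality_score(example), 3))
```

A score near 1 would mean every signal is at or past its threshold; the threshold is what keeps one outlier signal, like a huge dependents count, from swamping the rest.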
A
So
so
these
are
the
current
ones.
So
things
like
just
just
kind
of
to
give
you
an
intuitive
understanding.
It's
you
know
created
updated.
This
is
for
the
project.
You
know
how
many
contributors
there
are,
how
many
organizations
support
it
or
use
it.
The
commit
frequency
recent
releases,
closed
issues,
updated
issues,
dependence
count,
so
the
common
frequency
and
the
dependence
account
is
a
very
interesting
one.
A
It's
the
one
that
basically
indicates
how
many
other
projects
depend
on
it
right
and
I
think
that
that's
way
sort
of
heavily
so
so,
there's
like
in
the
project.
This
project
has
released
the
two
top
200
critical
projects
for
like
10
or
so
frameworks,
languages
and
a
bunch
of
you
know
the
modern
ones
in
it.
Here's
a
ruby
top
20
and
you
can
see
here
there.
A
It's
sorted
by
criticality
score,
which
is
at
the
the
at
the
at
the
right
or
the
rightmost
column,
and,
and
it
just
shows
kind
of
the
the
other
parameters
which
I
squished
a
little
bit,
because
just
just
to
show
sorry
how
it
kind
of
rates
and
yeah
you
can
see
at
the
top.
Like
you
know,
rails
is
obviously
the
top
one,
but
then
there's
things
that
are
that,
maybe
you
know
are
not
so
noticeable
that
maybe
you're,
including
a
spree
or
something
or
well.
A
I
guess
these
top
ones
probably
are
understood
to
be
critical.
But
then,
as
you
go
further
down
the
list,
you
know
it
it.
It
gives
you
a
good
understanding
of
of
kind
of
the
ecosystem
for
ruby
and
then
for
go
and
other
things
that
we
rely
on.
A
So
so
the
reason
I
wanted
to
so
I
mean
I
guess
so
that
was
the
presentation
of
the
criticality
score
and
the
usefulness
to
the
working
group
is
just
to
identify
well
currently,
the
current
point
of
it
is
to
identify
open
source
projects
to
support
and
identify
ones
that
to
the
community,
which
ones
are
critical.
A
So
you
understand
kind
of
the
maintenance
burden
that
you
should
kind
of
focus
on
the
reason
I
want
to
present
in
composition,
analysis,
I'm
sorry,
and
if
you
want
to
stop
me
at
any
time,
just
go
ahead.
The
reason
I
want
to
present
this
for
composition
analysis,
because
I
think
that
there's
a
there's
a
component
here
that
could
help
us
also
identify
dependencies.
A
So
so
with
an
addition
of
other
factors,
we
can
actually
kind
of
create
something
like
a
dependency
risk
score
for
the
dependencies
in
the
project
right,
and
that
would
give
us
sort
of
another
signal.
That's
not
that's
kind
of
continuous
that
would
allow
us
to
sort
of
you
know,
judge
between
projects
like,
even
though
some
some
project
has
a
much
more
critical
bug,
but
it's
if
it's
not
in
the
path
of.
If
it's
not
a
a
in
a
path
of
criticality,
we
can,
we
can
focus
on
a
different
project.
A
But
then
you
know
so
what
I
think
is
useful
to
kind
of,
to
maybe
add
the
security
layer
for
us
to
assess
something
like
time
since
maintainer
was
on
project
and
I'd
have
to
thank
nicole,
because
I
had
a
conversation
with
her
about
some
of
these
things
and
it
was
helpful
to
understand
how
she
thinks
about
it.
Nicole
is
our
product
manager
for
composition,
analysis,
and
so
things
like
time
since
maintainer
was
on
project
number
of
maintainers
historical
number
of
vulnerabilities
and
severity
score
of
historical
innovation.
This
is
just
some
thoughts.
A
I
have
right
so
so
time
since
maintainer
was
on
project.
Why
would
that
be
useful?
Well,
if
it's
a
new
maintainer
one
there's
a
maintainer
trust,
but
also
does
would
that
maintainer
be
able
to
fix
a
security
issue
quickly.
Does
that
maintain
or
understand
how
the
security
issue
would
affect
others
right
number
of
maintainers
is
useful
right,
backup
things
like
that.
The
you
know
again.
These
are
my
thoughts
on
it.
I'm
not,
I
haven't
thought
it
fully
through,
but
something
like
historical
number
of
vulnerabilities
would
would
indicate
the
overall.
A
I
guess
reliability
of
a
project.
Maybe
like
is
this
project,
something
that
you
can
trust
if
it's
critical,
if
it's
in
you,
if
it's
in
the
downstream
path
for
your
project,
is
that
something
that
you
know
when
you're,
judging
whether
to
update
this
project
or
even
include
it
in
your
in
your
dependencies,
do
you
want,
should
it
be
there
and
then
the
severity
score?
Historically,
so,
basically,
you
know
how
how
vulnerabilities
or
advisories
their
relative
severity
over
time
and-
and
you
know,
might
indicate
other
things
like?
A
Is
this
project
kind
of
insecure
and
there's
many
many
other
parameters?
I
just
want
to
kind
of
demonstrate
what
would
be
useful
for
us
again
not
to
rate
the
criticality
or
which
projects
to
maintain
but
to
actually
figure
out
in
the
dependency
tree,
what
the
risk
factors
are
of
the
dependencies.
So
these
are.
These
are
the
parameters
I.
A
Sense
because
I
kind
of
moved
between
those
pretty
quickly
again,
ask
me
questions.
So,
if
you
need
to
so
so
here
here
are
some
ideas
for
uses
for
composition,
analysis
and
what
I
would
like
to
call.
This
is
dependency
risk.
That
was
a
criticality
score.
I
think
the
criticality
score
could
be
kind
of
a
base
in
which
to
build
this
other
thing
called
dependency
risk
another
score.
So,
for
example,
it
would
add
an
additional
parameter
for
scoring
a
vulnerability
right.
A
So
I'm
not
just
saying
this
severity
is
high,
and
this
priority
is
this,
but
actually-
and
you
know,
cb
cwe
parameters
are
useful
too,
but
it
would
help
us
help
maintainer
or
the
project
owner
to
in
the
decision
to
address
sort
of
equivalent.
You
know
cwe
parameters
and
the
coolant
severity,
but
figure
out
which
one
has
a
higher
dependency
risk
to,
maybe
either
to
fix
it
first
to
update
it
first
or
to
just
focus
on
triage
right
is
this:
is
this?
Is
this
advisory
affecting
us
right?
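As a toy illustration of that triage idea (nothing like this exists in the product; the field names and the scoring rule are invented), combining a severity score with a per-dependency risk value lets two otherwise-equivalent advisories be ranked:

```python
def prioritize_advisories(advisories):
    """Rank advisories by combining CVSS-style severity (0-10) with the
    dependency risk of the affected package (0-1), highest first."""
    return sorted(
        advisories,
        key=lambda a: (a["severity"] / 10.0) * a["dependency_risk"],
        reverse=True,
    )

# Two hypothetical advisories with equivalent severity: the one sitting
# on the riskier dependency path surfaces first.
advisories = [
    {"id": "ADV-1", "severity": 7.5, "dependency_risk": 0.2},
    {"id": "ADV-2", "severity": 7.5, "dependency_risk": 0.8},
]
print([a["id"] for a in prioritize_advisories(advisories)])  # ADV-2 first
```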
A
If
and
if
it
is
like?
If
there's
two
advisors,
I
want
to
figure
out
which
one
I
should
focus
on
first
right
for
this
because
of
the
dependency
risk
relative
dependency
risk
of
both
or
several
the
other
thing
is
identifying
dependency
risk
overall
of
a
merge
request
right.
So
this
is
this
is
kind
of
something
that
would
allow
the
security
department
to
get
involved.
It's
more
of
a
compliance
thing
right
if
it.
If,
if
there's
dependency,
changes
in
a
merge
request
and
their
dependency
risk
passes,
some
stated
parameter.
A
You
know
this
could
be
delegated
to
you
know
security,
maintainers
or
escalated
or
require
more
reviewers,
and
then
historically,
we
can
identify
deltas
in
dependency
risk
of
a
particular
dependency
or
a
tree
of
dependencies,
so
that
we
can
learn
about
the
relative
danger
or
risk
of
a
particular
part
of
the
tree
or
dependency
in
general
and
then
sort
of
like
over
time
see
how
it
you
know
how
it
affects
the
project
and
whether
it's
worth
maintaining
and
something
like
that
and
then
and
then
the
other
thing
is
sort
of
like
I
think
I'll
get
into
it,
but
but
dependency
risk
changes.
A
I
think
much
more
often
or
sort
of
the
the
dependency
factors
like
maintainers
time
spans
commit
all
these
things
change
much
more
often
than
advisories
appear
for
a
project,
so
this
is
kind
of,
I
would
say,
an
alert
system.
It's
a
smell
of
a
dependency
that
we
can
show
users
early
earlier
than
an
advisory
that
they
can
kind
of
get
this
without
anything
changing
in
their
project.
A
They
can
start
to
see
a
dependency
risk
change
and
assessed
and
then
again
adding
a
layer
of
security,
compliance
right
so
saying,
like
things
over
a
certain
risk
level
require
you
know,
security,
department,
intervention
or
or
or
you
know,
basically
cannot
pass
and
then
increasing
analysis
depth,
allowing
us
statistical
analysis
like
if
we
have
more
historical
data,
more
parameters,
we
can
figure
out
risky
dependencies
or
truly
risky
dependency
by
looking
at
these
secondary
factors
I'll
get.
A
I
think
I'll
talk
about
it
a
little
more,
but
let
me
just
get
into
an
example
here,
so
this
is
an
example
of
dependency
risk
in
an
mr.
So
so
here's
our
project,
it's
fairly
simple.
It's
got
these
first
layer
dependencies.
Then
you've
got
these.
You
know
three
dependencies
that
rely
on
dependency.
Three,
you
can
have.
You
can
see
the
risk
scores
here.
A
So
I'm,
like
you,
know,
I'm
just
playing
around
there's
no
such
risk
score
really,
but
you
can
see
they're
fairly
low
right
but
let's
say
we
add
a
new
dependency
on
which
three
relies
on
right
and
this
one
has
a
very
high
one.
So
so
our
our
score
is
zero
to
one
right,
so
this
is
80
or
0.8.
So
now,
all
of
a
sudden
we've
introduced
a
high
risk
dependency.
A
There's
no
advisories,
there's
no
vulnerabilities,
it's
just
a
high
risk
dependency
for
whatever
reason-
and
you
know
we
can
maybe
I'm
just
kind
of
thinking
of
a
of
a
use
case
for
users
and
get
of
gitlab.
You
know
we
can
even
justify
like
here's,
why
we
think
this
is
a
has
a
high-risk
score,
but
you
can
see
how
it
affects
risk,
because
now
the
dependencies
three
has
become
a
lot
more
risky
and
and
the
dependence
and
the
risk
has
cascaded
up
to
all
the
dependencies
that
are
including
it
and
so
from
here.
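The cascade in this example can be sketched with a toy propagation rule. As noted above, no such risk score actually exists; here a node's effective risk is simply the maximum of its own risk and its transitive dependencies' risks, which is one plausible choice among many:

```python
def effective_risk(dep, graph, own_risk, _memo=None):
    """Effective risk of `dep`: its own risk or the highest effective
    risk among its (transitive) dependencies, whichever is larger."""
    if _memo is None:
        _memo = {}
    if dep in _memo:
        return _memo[dep]
    risk = own_risk[dep]
    for child in graph.get(dep, []):
        risk = max(risk, effective_risk(child, graph, own_risk, _memo))
    _memo[dep] = risk
    return risk

# Hypothetical tree: the app depends on dep1..dep3; dep3 pulls in dep4..dep6.
graph = {"app": ["dep1", "dep2", "dep3"], "dep3": ["dep4", "dep5", "dep6"]}
own_risk = {"app": 0.1, "dep1": 0.2, "dep2": 0.1, "dep3": 0.15,
            "dep4": 0.1, "dep5": 0.2, "dep6": 0.1}

print(effective_risk("app", graph, own_risk))   # low before the change

# A merge request adds a new, high-risk transitive dependency under dep3.
graph["dep3"].append("dep7")
own_risk["dep7"] = 0.8
print(effective_risk("app", graph, own_risk))   # the risk cascades up
```

With max-propagation, the 0.8 leaf lifts dependency three and everything above it, matching the cascade described in the example.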
A
...a lot of things become possible. For example, if that dependency has a competitor, if there's another project that you can consider, you can switch to that project and assess its risk. So a lot more things become possible than just looking at vulnerabilities. To me, a vulnerability is like a yes-or-no signal, one or zero: does an advisory exist or not? And until there's an advisory, we can't really do anything about a project.
A
Oh, it's okay? Okay, just making sure; I don't want to go through it too fast. And yeah, I'm just trying to not go over our time too much. Hopefully, if there's a lot of interest, we can get into more of these use cases, or discuss more about the decisions the working group works with to make some of these changes. So, some other use cases. We talked about identifying deltas in dependency risk.
A
So
basically,
you
know,
as
we
said,
changes
in
parameters
such
as
maintainer
cal
project
stainless.
They
change,
as
I
said
much
more
often
than
advisories
come
up,
so
you
can
think
of
it
as
an
early
warning
system.
You
can
think
of
it
as
as
just
a
total,
absolute
value.
You
can
give
to
a
project
that
if
it
passes
a
certain
threshold,
you
know
it
needs
attention
right
and
it
gives
yeah
like
users
an
early
warning.
We
talked
about
security,
compliance
right.
We
can
create
policies
around
dependency
risk
management.
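A toy version of that early-warning idea, with invented numbers and thresholds: scan successive risk snapshots and flag a dependency whose score jumps sharply or crosses an absolute ceiling, well before any advisory exists:

```python
def risk_alerts(history, jump=0.2, ceiling=0.7):
    """Scan a time series of {dependency: risk} snapshots and flag
    dependencies whose risk jumps sharply or crosses a ceiling."""
    alerts = []
    for prev, curr in zip(history, history[1:]):
        for dep, risk in curr.items():
            delta = risk - prev.get(dep, 0.0)
            if delta >= jump:
                alerts.append((dep, "sudden increase", round(delta, 2)))
            # Chained comparison: risk crossed the ceiling this snapshot.
            if risk >= ceiling > prev.get(dep, 0.0):
                alerts.append((dep, "crossed ceiling", risk))
    return alerts

# Hypothetical weekly snapshots: dep3's risk spikes, say after a
# maintainer change, with no advisory published yet.
history = [
    {"dep1": 0.2, "dep3": 0.15},
    {"dep1": 0.2, "dep3": 0.25},
    {"dep1": 0.2, "dep3": 0.8},
]
print(risk_alerts(history))
```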
A
You
know,
I
think
I
talked
about
that.
This
is
just
a
reiteration
scores.
Above
a
certain
level
or
scores
within
a
certain
range
require,
you
know,
manager,
approval
security
department,
approval
or
they
can't
even
pass
right
like
if
you're,
if
you're
highly
security,
conscious,
if
you're
working
in
a
very
like
locked
down
kind
of
project,
then
you
can't
even
pass
past
the
certain
risk
level
right.
So
that's
that's!
A
So with these secondary factors, we can learn how a maintainer behaves on a project, learn over the history of many, many projects, and then apply that to new projects based on what we've seen: based on something learned, a score that has been predicted for that particular project.
A
So
we
can
even
you
know,
for
things
that
don't
necessarily
have
history
or
don't
have
like
a
new
project
or
something
that
doesn't
have
a
particular
depth
of
parameters,
as
we
saw
before
predictive
capabilities
come
in
and
then
secondarily
other
predictive
capabilities
based
on
those
factors
like
over
time.
Let's
say
this
project
with
these
kind
of
risk
trees.
A
How
often
does
it
come
up
with
advisories
right?
How
often
do
advisories
come
up
for
this
project
with
this
kind
of
tree
versus
another
tree,
so
that
allows
us
predictive
capabilities
to
give
a
lot
more
intelligence
and
a
lot
more
early
warning
to
users,
which
I
think
is
a
very
useful
thing
in
dependency
management-
allows
us
to
be
very
proactive
yeah,
so
so
that's
kind
of
the
three
so
identifying
historical
data
and
deltas
and
warning
users,
compliance
and
statistical
analysis,
so
obviously
there's
other
things
that
are
possible.
A
I
just
want
to
kind
of
give
my
my
thinking
about
why
I
think
this
is
useful
for
the
team
yeah,
so
just
some
references
here
I
linked
the
group.
I
linked
the
criticality
score.
There's
this
the
paper
that
led
to
this
from
rob
pike
and
just
an
article
on
my
magic
so
that
that's
pretty
much
what
I
wanted
to
show
and
I
wanted
to
stay
within
time.
So
that's,
I
think
I
think
we're
well
in
there
and
I'll
now
take
questions
or
actually
I
will
look
at.
B
Again, now you need to change the bottom line. Yeah, a little to the left. Yeah.
A
There is always a crossover between the dependency and vulnerability research teams and what we do. I think the benefit here is automation. So yeah. And Olivia, you're saying you want to voice your question?
B
Yeah,
just
a
quick
feedback
on
one
of
the
criteria
that
we
were
mentioning.
I
know
this
was
just
worth
soaked,
but
this
is
something
else
that
comes
to
my
mind.
When
we
talk
about
the
history
of
vulnerability
that
we
can
see
it
from
a
human
project,
this
could
not
always
be
considered
as
a
bad
thing,
particularly
when
comparing
straight
number
with
other
projects,
because
some
projects
are
more
transparent
with
the
vulnerabilities
they
are
disclosing
versus
others.
So
more
gravity's
might
mean
more
transparency,
not
necessarily
weaker
projects.
A
Yeah,
I
think
it's
a
good
point
and
I
think
when
I,
when
I
talk
about
sort
of
the
statistical
analysis
part,
I
think
what
that
allows
us
to
do
is
actually
back
propagate
the
risk,
scores
and
figure
it
out
like.
So
if
the
weighting
of
that
should
be
lower,
we
can
make
that
lower
right.
The
benefit
of
a
risk
score
is,
it
gives
us
all
of
these
variables
in
which
we
can
kind
of
optimize
in
the
space
in
which
we
can
build
a
predictive
model
around.
C
Sure. And I think that the risk score for dependencies could be a great addition to the dependency scanning list; it kind of feels like a natural part of that page, say. But I guess first we need to work on the dependency graph, because we just have minimal paths for dependencies; we don't have the graph. So we can't just calculate the score for a project's dependencies.
A
Having
a
graph
gives
us
a
lot
more
gives
us
a
lot
more
data
and
a
lot
more
information,
though
I
would
say
that
just
knowing
that
that
not
having
the
graph
but
just
knowing
that
a
project
with
a
high
criticality
score
in
the
let's
say,
gem
file
lot
in
the
locked
file
or
the
you
know,
the
mdm
log
file
is
enough
to
for
us
to
maybe
show
a
user
to
just
say:
here's
this
here's,
this
critical
dependency
path
on
which
you
rely
right
without
even
knowing
about
a
like
how
it
how
it's
affected
by
but
by
the
tree
right.
A
I
think
a
tree
maybe
is
more
advanced
than
than
that,
but
the
capability
is
there
right
even
now,
so
that's
kind
of
my
but
you're
right
absolutely.
The
graph
is
like
a
very
rich
data,
set
much
more
much
more
useful
than
than
just
just
just
like
these
scalar
values
on
each
each
project.
A
Olivia, do you want to verbalize it?
B
The data might be difficult to source: things like the number of maintainers and the history of the project. That's a lot of data, so if we have to source it ourselves, that might take a huge amount of time. So are there any open source initiatives to start building a big database of this information? Like, for instance, we have some known databases for licensing information.
A
Yeah,
so
I
think
that
the
same
group
is
working
on
something
called
the
dependency
feed,
but
I'm
not
sure
I
don't
I'm
not
sure
if
they're
addressing
it,
this
particular
scoring
criteria-
and
you
can
see
here
that
this
is
this
library
that
was
built.
It's
kind
of-
I
don't
know
if
it's
alpha
or
beta,
but
you
can
see
here
that
they're
they
rely
on
the
on
the
your
token.
A
So
you,
you
are
pinging
the
api,
so
it's
kind
of
in
the
early
stages,
but
I
do
imagine
it
as
if
it's
proven
out
the
usefulness
of
this
of
this
library.
Then
I
can
see
like
companies
like
well
like
data
sources
like
github,
getting
onto
the
training
offering
like
a
feed
or
some
kind
of
rolled
up
statistics
on
on
these
on
these
projects
without
having
to
kind
of
yeah.
A
Do
this
data
intensive
querying,
but
right
now,
I'm
not
aware
of
something
that
gives
us
an
api
or
does
a
rolling
up
for
us
that
does
this
calculation
for
us.
B
Okay
thanks
next
one
is,
I
think
this
is
emphasizing
the
needs
to
have
a
more
modular
approach
to
our
dependency
analysis.
We've
already
mentioned
them
multiple
time
in
the
past
to
try
to
extract
the
generation
of
the
graph
from
the
the
variability
analysis
and
then
the
license
analysis,
but
I
think
this
is
one
more
additional
analysis
that
we
can
do
on
top
of
an
extract
dependency
tree
or
graph.
B
So
because
I
assume
this
is
this
is
a
something
that
could
be
done
on
a
different
schedule
or
different
frequency
than
other
type
of
analysis,
because
it
might
be
more
resource
consuming
and
and
and
longer
analysis,
particularly
if
we
have
two
schools
that
just
made
it
out.
I
wasn't
mentioning
so,
and
I
think
this
is
a
great
duration
that
we're
taking
for
those
of
you
additional.
A
Leads
yeah
yeah,
I
mean.
I
think
this
is
a
proposal
like
not
even
a
proposal
as
a
demonstration
of
what
I
I
thought
was
interesting.
This,
like
I
said
this,
this
whole
kind
of
foundation
was
started
in
2020.
I
believe
it
was
like
summer
last
year,
maybe
even
a
little
bit
earlier,
so
I
think
it's
moving
along
and
it's
showing
promising
kind
of
ideas.
A
You
know
see
how
one,
as
you
say,
how
data
intensive
it
is
to
how
useful
it
is
right
and
how
good
it
is
that,
for
example,
predicting
the
relative
insecurity,
let's
say
the
dependency
or
the
relative
risk
of
a
dependency
and
then
sort
of
following
those
predictions
with
sort
of
mapping
those
predictions
to
actual
events
like
bugs
not
being
fixed
or
or
or
security
bugs
not
being
fixed
or
the
number
of
advisories
found
against
the
project,
so
that
that
could
be
kind
of
a
step.
A
Okay,
I
mean,
I
think
it
seems
that
we
are
we're
kind
of
out
of
time
and
out
of
questions,
so
I'm
just
gonna
stop
the
recording.