From YouTube: OSS Security Maturity Model Discussion (March 2, 2022)
A
We're seeing information about the Software Improvement Group, which is an org... a company? Thank you, right, yep.
C
Okay, thanks. So one thing I didn't mention yet in the introduction is that when we are developing these methods, we believe in using data and benchmarking, because with a lot of things, you know, every system has duplication, for example, on the code quality front. You want to minimize duplication; everyone will occasionally discover vulnerabilities and you want to fix them as quickly as possible, but what is the definition of quickly? So it's very hard to set an absolute boundary there.

C
You know, should it be one hour, one minute, one second? But there, benchmarking is quite helpful to tell people what is an acceptable, you know, resolution time. We've been building that benchmark over the past 20 years; these are 7,000 commercial projects. And, of course, we are also in the open source community.
C
What you see in this chart is that volume runs from left to right. We calculate code volume in person-years, partly because it's more intuitive to non-technical people, but also because different technologies differ: 100 lines of code in one technology is not 100 lines in another. Some languages are more verbose, so we normalize in that way. And from top to bottom is how we benchmark code quality.
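A minimal sketch of the normalization idea described above: raw lines of code are divided by a per-language productivity factor to get person-years. The factor values here are invented for illustration, not SIG's actual calibration data.

```python
# Illustrative normalization of raw code volume to person-years.
# The productivity factors are assumptions for the example only;
# a real benchmark calibrates these per technology.
PRODUCTIVITY_LOC_PER_YEAR = {
    "java": 10_000,   # assumed lines one person writes per year
    "python": 12_000,
    "c": 8_000,
}

def person_years(loc_by_language):
    """Convert a {language: lines-of-code} map into normalized person-years."""
    return sum(loc / PRODUCTIVITY_LOC_PER_YEAR[lang]
               for lang, loc in loc_by_language.items())

# 8,000 lines of C and 12,000 lines of Python both normalize to one
# person-year, even though the raw line counts differ:
print(person_years({"c": 8_000}))       # 1.0
print(person_years({"python": 12_000})) # 1.0
```

This is why "100 lines in one technology is not 100 lines in another" stops being a problem once volumes are expressed on the same scale.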
C
So, of course, this is for code quality. We want to build similar pictures for more security- and privacy-oriented metrics; we'll get to that in a second. But this kind of picture is very nice for communicating to people, because quality is an abstract concept, and this shows where they stand in the market. If people are in the lower right quadrant of this picture, you usually see them jumping to the edge of their seat, because they start to get a bit nervous, to some extent.

C
So the thing is that if you look at traditional certifications, they assume that a project is owned by a single entity, as we also do in this quality benchmark that we just showed. But of course, a lot of open source projects are either a pure open source project that is exclusively owned by the community, or a hybrid project where you have volunteers but there's also a corporate sponsor.

C
That includes, I believe, five distributions and sometimes multiple versions from each distribution, because we also want to measure this over time. In the end, security is a dynamic process, so that means you will have a rating for almost every time period.
D
So if we look at the current standards, for example BSIMM or OpenSAMM, they are in most cases, let's say, more process-oriented. What we are looking for is actually not even focused that much on process; we look at the facts. We look at, for example, let's say, the vulnerability patching speed, and we look at some security settings, especially for a Linux distribution: the security configuration, etc.

D
We also look at the current model, the tool that your group is working on. So that's also similar to our approach, but a little bit different, I think; as you'll see later in some slides, we're also looking at a time period. So not only what the security in the code is at this moment, but we also look at the surrounding practices: how the code review is done, how the security patching is done, etc.
B
So is your intent to look at distributions, like Debian, say, Red Hat, Fedora type stuff, or are you looking at projects like the kernel, Kubernetes, glibc, that type of stuff?

C
This particular project is distributions, so it's indeed open source ones like Ubuntu and Debian. But it's not only measuring the code; it's also looking at their processes: how fast do they actually patch the vulnerabilities?
C
You have to start somewhere. To say that it applies to every open source project on earth would be a bit grandiose, so you need to select a domain where to start, and then later on we also get to find out whether the characteristics of different types of projects are comparable. I honestly wouldn't know if the security process for Kubernetes is comparable to the one for Ubuntu; we would have to find out after this current phase.

C
That's a good question also on the dependencies, and this turned out to be a logistical nightmare, because of course there are dependencies in the sense of the upstream projects, but distributions still work with package maintainers. So you have people that draw patches from upstream and put in their own patches.
C
So the amount of data was slightly more ambitious than we had anticipated in the beginning, but we also believe that this is actually a thing, because I wonder how this will evolve in terms of security processes. You get conflicts between upstream security fixes that need to be incorporated in the distribution itself, and the workflow that most of the distributions use takes at least a couple of weeks before the upstream patches are in. If you look at application software development, that's a bit more dynamic nowadays with its security processes.

C
So we wonder how this is going to work. David, this is not news for you, okay? Yeah. Of course, as users, we know a bit about how these open source projects work, but Linux distributions are still a bit special in that regard. We thought the number of packages was going to be a couple of hundred, but we slightly underestimated that. Jacques says that you might want to talk to ActiveState.
D
By the way, probably later, Christopher and David can share this video also with us.
C
Yeah, okay. Then maybe, to make this a bit more concrete, we can move to the actual model. So this looks a bit intimidating; our code quality model has nine metrics, because we want to keep it lightweight, but the thing is, you first need to have something that's very comprehensive before you can narrow it down to the truly most important metrics. So, as you already mentioned, this spans a lot.

C
So if you look at the project organization: on the left you have more process-related KPIs, like, for example, do they do code reviews? And again, this is not only whether they write down that they do code reviews, but whether they actually do it and there's evidence that they have done so. The middle two bars are about the dependency management process. In a Linux distribution it's all about packages, whereas usually for an application...

C
...it would be more about libraries and transitive libraries. So you have this whole unique angle here about packages: when do they patch and when do they fix upstream? That's fairly unique to Linux distributions.

C
You also have the whole vulnerability management process: how quickly do they patch? Do they release timely information to their users that a vulnerability exists? And on the right we have more the distribution contents: do they ship with security features included? So this is very, very wide.
C
Okay, then, of course, it's nice to count these things, but at some point you need to quantify it in a way that's comparable. So maybe, how do you want to take this one?
D
Yeah, so basically what we are doing, from left to right, is the process of how we, let's say, measure. The left part is what was just shown, so those are, let's say, the KPIs that we measure, where we define what is considered good security best practice.

D
After that we define the concrete measures, and ideally all the measures could be taken automatically. But in our current case, I think, about seventy percent are completely automated; the other parts still need some manual checks. And after that, the third one is the aggregation. Basically, it depends on the item: for some items, like, let's say, patch speed, for example, you could have an enormous number of patches within a certain amount of time.

D
In those cases, with all these several thousands of data points, we need to find a way to aggregate them to come up with a rating that is more comparable. And after that we have the benchmark to look at the different communities. The idea there is: if there is one community that would like to see, okay, what are my weak points, they could view these benchmark results to see, okay, how do they compare with the rest of the industry?

D
Let's say comparables: what are their weak points and strong points, so that they can identify the most important things to improve. So basically those are the, let's say, five steps. I believe you also have some concrete examples regarding the benchmark, the KPIs and the aggregation, yeah.
D
So, for example, this is one example from the beginning. This is particular to, let's say, a Linux distribution, the kernel security, and we look at, for example, OpenSSH security and hardening. In this particular case we look at several, let's say, main configuration settings, etc., and based on that we use some tools to figure these numbers out and to decide, okay, what do I consider a good configuration and what do I consider a bad configuration.
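A hypothetical check in the spirit of the OpenSSH hardening example above: compare parsed sshd_config settings against recommended values. The recommendations below are common hardening advice chosen for illustration, not the exact list used in the model.

```python
# Illustrative "good vs. bad configuration" check for sshd_config.
# The recommended values are common hardening advice, assumed here.
RECOMMENDED = {
    "PermitRootLogin": "no",
    "PasswordAuthentication": "no",
    "X11Forwarding": "no",
}

def sshd_deviations(config):
    """Return {setting: actual_value} for settings deviating from the
    recommendation; a missing setting shows up with value None."""
    return {key: config.get(key)
            for key, want in RECOMMENDED.items()
            if config.get(key) != want}

print(sshd_deviations({"PermitRootLogin": "yes",
                       "PasswordAuthentication": "no"}))
# {'PermitRootLogin': 'yes', 'X11Forwarding': None}
```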
D
So this is one example, rather straightforward, let's say. And there are the five projects, as we just mentioned; we didn't give the names there. One reason for that is that for our current results we are still working on all the validation, etc.
B
So how are you accounting for communities that might be putting together a general-use operating system that is not intended for hardening and security, versus a commercial offering that might have some hardening options but still needs to give their customers that flexibility, versus a truly hardened, secure offering?
D
Yes, that's a very valid question. For that purpose we actually also put this out, let's say, basically just in the open: how we measure, what we measure and how we evaluate. But it's completely true; for example, there were also several cases, like different levels, of which one is, let's say, applicable for this community, or which version, etc. But we would like to give a, let's say, generic recommendation for security.

D
For other measurements we have similar cases: across different distributions, some distributions always choose to fix it immediately, and some long-term-support distributions prefer to do this more as a stable release. So those are different kinds of strategies. Our idea is: we define this model and we give the recommendations, and then we also give the community some recommendation, depending on what their purpose and their strategy is, on what we recommend them to follow.
C
If you have a mainframe that's 30 years old, then of course you wouldn't score five stars, but that doesn't mean we need to adapt the scale, because then it becomes very hard to understand what that number means. So the number is always the same, and, on the application level, if you do internet banking you would be expected to score five stars on security-related metrics. But if you make your internal, you know, lunch menu app, then you would not have that same target score.

C
There was a question: could you do the same thing for the BSDs? Yes, but of course, to make these models some community expertise is required, because the eventual number that comes out is context-free, but to design it you definitely do need the context. So we would need to revisit this and check how these processes work in the BSD world, and we would also need some experts that know more about that, in order to validate some assumptions that we have.
C
Okay, so this is a relatively simple example. We also have... maybe I can do this one, shall I? So we have here something where you count this per upstream community. If you include a package in your distribution and that upstream community dies out, it will eventually be a security risk for you. So then you have the choice to either no longer include that package or take over the upstream maintenance, but at least you might need to do something in the long term.

C
Every distribution includes thousands of packages, so you have more than one data point per project; you will always have a couple of outdated packages in there. So what we do, and this is what Hyun talked about earlier, is we have this risk profile: we assign a category to each project. These thresholds are themselves based on the benchmark; we try to use numbers that are slightly rounded, to make them intuitive, and then you can prioritize, okay.
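The risk-profile aggregation described above can be sketched as follows: each package receives a category from per-package checks, and the profile is the share of packages per category. The category names and the example data are illustrative placeholders.

```python
from collections import Counter

# Sketch of aggregating per-package risk categories into a profile.
# Category names are illustrative; the real thresholds behind them
# are themselves derived from the benchmark.
CATEGORIES = ("green", "yellow", "orange", "red")

def risk_profile(package_categories):
    """Return the fraction of packages falling in each risk category."""
    counts = Counter(package_categories)
    total = len(package_categories)
    return {cat: counts[cat] / total for cat in CATEGORIES}

profile = risk_profile(["green"] * 7 + ["yellow", "orange", "red"])
print(profile)  # {'green': 0.7, 'yellow': 0.1, 'orange': 0.1, 'red': 0.1}
```

Expressing the profile as fractions rather than raw counts is what makes distributions with thousands of packages comparable to each other.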
B
Question, sorry. So a recent version of Red Hat Enterprise Linux in the full install included 9,000 packages, and I'm assuming the other distributions are similar. So out of those 9,000 potential packages, what's your criteria for whether somebody gets a naughty score versus a clean green? Because not all 9,000 of those packages are potentially going to adhere to your criteria; they might not be feature-complete, potentially.

B
How many red dependency packages need to be there? What's the qualification for grading the distribution there?
D
So, of course, for our current research project, for example in this example, we only have, like, seven samples, which, generically speaking, is not a sufficient number of data points to do the benchmark. But normally, how we do a benchmark is we look at this kind of risk profile; for example at SIG, when we do a benchmark, we have thousands of products' data in it, and we look at this percentage, so based on, for example, this red, orange, yellow, green.
D
Yeah, so in this case, in this particular example, we consider a package as high risk when there have been zero commits during the last year.
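The high-risk rule just mentioned can be written down directly: an upstream is flagged as dormant when its newest commit is older than one year. How the commit dates would be fetched from a real repository is left out of this sketch.

```python
from datetime import datetime, timedelta, timezone

# Dormancy flag from the example above: zero commits in the last year.
def is_dormant(last_commit, now, window_days=365):
    """True if the newest upstream commit is older than the window."""
    return (now - last_commit) > timedelta(days=window_days)

now = datetime(2022, 3, 2, tzinfo=timezone.utc)
print(is_dormant(datetime(2020, 1, 1, tzinfo=timezone.utc), now))  # True
print(is_dormant(datetime(2022, 1, 1, tzinfo=timezone.utc), now))  # False
```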
C
There's something on the rating level and something on the finding level. On the rating level, this approach is robust against false positives, because it's not forbidden to have red, right? Every project will have some red in this chart; the only thing you want is that it's not out of control, so in that way these ratings are robust against that. The thing is, if people want to improve the ratings, then they indeed get to the finding level, and then they will yell at you if the top 10 is not useful to them.

C
So on the reporting level we have a storyline there, but indeed, for the findings we need some way of weighing that, putting the most useful findings up front. There's also this Java XML library, JDOM, I believe; they haven't had a release in six years, so that showed up, though not here but within an application. And indeed, some libraries are pretty much done, but others, if they are not showing any activity...
A
I'm sorry, I must go. Thank you for the opportunity, though, and please take care.
C
I believe we have our third and final example, and this is the one that I believe most people are interested in, judging from the people from these distributions we showed it to. So, do you want to take this one?
D
Yes, so this one is about vulnerability patch speed. In these cases we look at how vulnerabilities are patched within different numbers of days. Actually, I have to say that this picture is not the latest version we have.

D
Yeah, so basically we look at different CVEs with different CVSS scores, so what are normally considered, let's say, high-severity CVEs or low-severity CVEs, and in those cases we give a risk, let's say, profile, based on: okay, for CVEs with a certain CVSS score, how long did it take for the team to patch? And based on that, it's a similar story to the previous slides: for the different patches we rate them either as high risk or low risk, and high risk is normally the case where the time to patch...
D
So then, if you show the next table, I think it becomes concrete, yeah. For example, the current way we took it is like this: based on the CVSS score, if it's above nine, which is, let's say, a very-high-severity one, it normally needs to be patched, for example, within one day. Otherwise, if it takes longer, we consider it already as moderate or high or very high risk.
D
So this is how we treat those. If a CVE's CVSS score is less than four, then in those cases we do allow the team to take a longer time to patch.
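A sketch of the severity-dependent patch-speed rule described above. Only the "above nine, within one day" and "below four, longer allowed" cutoffs come from the discussion; the mid-severity budget and the two risk labels are illustrative assumptions.

```python
# Severity-dependent patch budget, sketched from the table discussed
# above. The 30- and 90-day budgets and the labels are assumptions.
def patch_risk(cvss_score, days_to_patch):
    """Classify a CVE's patch delay given its CVSS score."""
    if cvss_score >= 9.0:
        allowed = 1       # critical: patch expected within a day
    elif cvss_score >= 4.0:
        allowed = 30      # assumed budget for mid-severity CVEs
    else:
        allowed = 90      # low severity: a longer time is allowed
    return "ok" if days_to_patch <= allowed else "at_risk"

print(patch_risk(9.8, 1))   # ok
print(patch_risk(9.8, 5))   # at_risk
print(patch_risk(3.1, 45))  # ok
```

The point of the graded budget is that a 45-day delay is acceptable for a low-severity CVE but would be flagged for a critical one.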
D
Also, maybe one thing we can mention here: during our research we also noticed, because currently we use the CVE publish dates from the NVD, that a community like Red Hat could already have certain CVE findings and patch first, and only then is it finally officially published to the NVD.
C
Yeah, true. We had some discussions about that, because there were also some security people that got angry at us for even suggesting that the NVD was a reliable and fast source of truth.

C
But it's objective. They do lag behind, but because we allow negative days, we believe that we are somewhat covered by that, and at least it's something that everyone universally considers a respectable source, so it prevents some discussions there. But we noticed, I forgot the exact percentage, but in some distributions it's something like 30 or 40 percent of vulnerabilities that are fixed before they are published to the NVD, so that has a significant impact there.
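The "negative days" convention mentioned above can be shown in a couple of lines: the delay is measured against the NVD publication date and comes out negative when the distribution patched before the entry was published. The dates below are made up for the example.

```python
from datetime import date

# Patch delay relative to NVD publication, negative days allowed,
# as described above.
def patch_delay_days(nvd_published, distro_patched):
    """Days from NVD publication to the distribution's patch."""
    return (distro_patched - nvd_published).days

# Patched a week before the NVD entry appeared:
print(patch_delay_days(date(2022, 1, 10), date(2022, 1, 3)))  # -7
```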
D
From my memory, Ubuntu especially has a stronger case like that. We also actually looked at, for example, when a CVE appears in the database.

D
We can also find out when the CVE ID is reserved, so that could be a very good indicator. But on the other hand, they also clearly mention that when they reserve the CVE, in principle that leaves them time to do the investigation, and it may also turn out not to be a real vulnerability. And because we would like to make this more, let's say, equal, or, let's say, fair to all the different communities: for certain CVEs, for example, Ubuntu or openSUSE...

D
They might already know, okay, this CVE exists, but other communities might not know. So then it's not fair to say, hey, you patched later, because of that reason. So for that reason we chose, at this moment, the NVD publication dates.
C
But if you would make a new one based on January 2022, you would get different values, because of this dynamic process we talked about. We do not scan the images that they provide, or the live systems; we actually did do that in the beginning, but parsing the security advisories is a bit more practical for us. We do make the assumption that people speak the truth in those advisories, but for these communities that is the case, fortunately.

C
Mata asks: do you count the issues the distribution does not fix? Yes, but not in this metric. It becomes a bit difficult to have these composite metrics that look at multiple things. So this is one that's about the vulnerability patching process, but there's another metric, I believe it's called the vulnerability risk profile, that's about the open ones.
C
Regarding the terminology "risk": indeed, risk is the probability of a negative event times the potential negative impact of that event, true, so it's impact and likelihood. We are indeed being creative in calling this a risk profile.

B
You're doing a qualitative assessment, and that is less persuasive than a quantitative assessment, and you can never tell a business what their risk is; it's up to that business to determine their risk appetite.
C
That
is
yeah.
That's
a
valid
comment,
so,
like
risk
is
usually
you
need
more
on
the
business
context
level
and
not
necessarily
on
the
technical
finding
level.
You
could
call
it
severity,
but
it's
also
not
entirely
accurate.
So
we
would
need
to
think
on
how
to
call
this
thing.
Yeah.
F
More of a speech than a reply: I noticed that some of the metrics seem to be about the product and some of the metrics seem to be about the project, and I feel kind of uncertain about mixing those together. I belong to the, sort of, Hubbard slash Jones-and-Freund school of assessing security risk, which is that we should use probabilities and not matrices.

F
I realize that that's hard, and, like, nobody's sat down and done the multiple regression yet to give us the formula, but I still prefer that framing. Anyway, the point I brought up about those two different things is that I feel like, if you were able to build a causal model of security into which you could plug some numbers and get out a prediction, you would probably separate product and project from each other.

F
So for project it would be, like, my prediction that a new vulnerability will be introduced sometime in the future, and how often that will happen; and for product I'd be saying something like: this is deployed; what is the probability that it gets exploited? How many times per year do I expect to have a successful attack against this product? You can see that those are kind of different things, and I'm wary of mixing them.
C
You could see me writing along, so that means it's a valid comment; you're definitely correct. I think that our initial focus with this project was more on the former from your explanation.

C
Basically, how does the project ensure that as few vulnerabilities as possible are introduced, rather than the product security, which is, ironically, where SIG is kind of coming from. But you're right that in the actual model we kind of blurred the lines a bit and included both dimensions. We may or may not separate it into two different sub-ratings, or just have two separate models. So I definitely see where you're coming from, but I don't have an immediate answer or reply, to be honest.
D
Yeah, so another thing is, maybe it's good to also mention what the goal of our research is. The idea is, we hear open source communities saying that they feel they are doing the good things on security, but they are not very certain, so they would like to see it based on the facts. Of course, these facts can come from the product itself or can come from the process.

D
Some of it is process data, and then we look at it to identify whether there are some security best practices that they might not follow well enough. That's, of course, where this slightly mixed set of KPIs comes from, because if we really think of our project, it's not really to predict or consider, okay, whether this product will be exploited, from the security aspect.

D
Of course that could be a very important thing, but on the other hand it's not really our goal to look at it. Our goal is, by looking at those KPIs, to find out if we can see some best practices that might not be followed well enough. So that's a better explanation, let's say, I think. Jacob, yeah.
F
Me again. So I think the angle I'm coming from is working in one of the working groups, Securing Critical Projects, where one of the outputs we're trying to develop is a ranking of projects in order of risk, where risk is, like, the frequency that a bad thing happens times the magnitude of that bad thing. That's why I'm more interested in, sorry, project metrics or process metrics.
D
But I think that's also the reason we are now sitting together to discuss this: we have our angle, you have your angle, and we can explore. What we would like to do is to see if there is any way that we can, let's say, collaborate, and whether there is something that we can, let's say, borrow from each other, or something we can do together to explore more. In that way we are completely open.
C
Yeah, definitely. There's a comment about what we would like to get from you: of course, brutally honest feedback, which is working out pretty well so far. The thing is that people that work on these projects, the way they look at it is that they want to be the best in these graphs, and if they are not the best, they want to know what the things are, what these red things are.

C
You know: give them a list and then we can fix it. Which is true, but it's not exactly fundamental criticism of the model itself, and I think these process-versus-project kinds of questions really touch the roots of what we're doing here: where to put the scope. Because of course you start out with a focus, and at some point people ask, can you add this, can you add that, and then it kind of blurs a bit.

C
So I think this is very helpful, and also what you just mentioned, Jacques, about what you're working on. Maybe other people also want to share: does this relate to anything they're working on, and if so, could you elaborate a bit on how you see this kind of thing? How do you get people to have this information, but in an actionable way?
G
Yeah, I'm still on the call, I am here. Yeah, Scorecard is fully automated, although it's not my team, but yeah, it's fully automated, and that was central...

G
...the goal itself was to make it so that there was nothing manual. The Best Practices Badge is a combination of automated and manual; I think it's mostly manual.
G
Yeah, yeah. So I think, from my perspective, for both Alpha and Omega, having better tooling, like if I could plug in an arbitrary project, you know, Kubernetes or glibc or whatever, and see more interesting facts than, let's say, Scorecard provides, because Scorecard is very repo-based... so having more of the process in an automated way, that would be huge.

G
I don't know... I'm always wary of, and I've been burned by this multiple times: once you assign "this is good and this is not good", then almost nobody is happy, because the people that you said were good probably aren't, and other people point that out, and the people you said were bad are like, well, you just don't know my thing. Just sticking to the facts, like this here, is pretty objective.
B
I will say my one comment is: I still have a boggle about how you are combining the different groups of upstream projects, cohesive communities and then, like, commercial entities. They all have very different capabilities, resources and goals, and you're kind of combining that as one thing. So you should be very prescriptive: "we are grading, for example, commercial distributions", because those are people that get paid to support these things and they can be held accountable, whereas an upstream project doesn't really care about your score, necessarily; that's not part of their goal.

B
Their goal is to deliver delightful software that solves a problem, whereas, you know, a well-formed, mature community might sit somewhere in a middle ground: you know, the kernel does a good job of patrolling their own problems and they are very proactive in things, versus somebody with a very small project. So you have to think about the different criteria, motivations and resources behind these different levels of things. And I yield over to Yotam now.
E
Just a quick comment regarding what you asked earlier, whether there may be additional metrics or things that you can look at: what comes to mind is maybe the binary hardening status of the binaries in the system.

E
So basically things like how they are compiled, with various flags that enable various exploit mitigation mechanisms such as seccomp, ASLR, etc. But that's, I think, a bit more invasive than what you're currently planning, because there is no such data set available. I do know that there is variability between the different operating systems and packages in terms of that, but it would require actually running some code on the image and actually acquiring the data.
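An illustrative way to score the binary-hardening status just suggested: count how many tracked mitigations a binary has enabled. The mitigation names are standard hardening features; actually extracting them from a real binary (with a checksec-style tool) is out of scope for this sketch.

```python
# Illustrative per-binary hardening score from {mitigation: bool} flags.
# The set of tracked mitigations is an assumption for the example.
MITIGATIONS = ("pie", "relro", "stack_canary", "nx", "fortify")

def hardening_score(flags):
    """Fraction of tracked exploit mitigations enabled for one binary."""
    return sum(bool(flags.get(m)) for m in MITIGATIONS) / len(MITIGATIONS)

print(hardening_score({"pie": True, "nx": True}))  # 0.4
```

Per-binary scores like this could then feed the same kind of risk-profile aggregation used for the other metrics.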
C
Right. For some of these metrics we actually do run stuff on the images, but per binary, indeed, that's kind of a challenge to automate. We'll definitely write it down to see what's possible.
D
This is the one page we looked into when we came up with our model; there was actually more than this. So we have this one and also, for example, Debian: different projects also have their own security best practices. So when we summarized those details, we looked at those communities, looked at their practices, and then took those into our model as well, let's say.
D
So maybe a question to, let's say, the OpenSSF team: what is currently, let's say, the important thing for you to look at in open source? Are there any parts where, for example, you are looking to do something but find that, let's say, you need support? Or are there any collaboration opportunities there?
B
So my feedback overall would be: take a look at what working groups are available and, as you are able, come participate; all our meetings are open. All the minutes for each of these groups are available, so you could look at them and see: oh well, Critical Projects is talking about something that's really interesting to us, with our methodology, so go participate in that community and listen and, you know, learn what they're doing and potentially contribute back. You know, patches are always welcome; we're very open-sourcey.

B
So if you need assistance in making connections there, we can help, you know, make introductions with members of those different security teams, say Fedora, Red Hat, SUSE, on to Canonical, AWS, you know, kind of everywhere. We have these links; we can help broker introductions if you want to talk to, say, how does openSUSE really do this?
C
Yeah, that would be amazing. And, in terms of... we didn't mention it, but this R&D that we're doing, we do intend to publish it. We're still in the middle of things, but the purpose is that this itself will be disclosed, of course without giving any people from any of the distributions a heart attack. So we also have to see how we double-validate it with people, so that they are not too shocked, but the intention is that this will be public.
B
Yeah, I can definitely broker introductions to the big three, and if you have other communities you're interested in looking at, I probably also already know those folks; everybody here participates on some level in these different communities. So shoot me an email and I can help focus that for you and introduce you, and then, you know, if there are communities I'm not aware of, I'll reach out to Jacques or Mr. Scovetta or whoever.
F
Will do. Well, I will say one thing: I know I criticized a few points, but I do want to recognize how much work this must represent and how impressive this amount of work is.
C
Well, thank you, and, to be honest, the criticism is the best part, because otherwise we wouldn't know how to improve it, and at some point you also start to be blind to your own failings. Of course, for the teams this is quite useful to get a prioritized list, but this is very helpful for thinking more about the boundaries and limitations of the model.
B
So Michael has a question: is your intention to make this methodology, the scoring, the tools, everything... are you going to open source that, make it publicly available?
C
To some extent. Whether the whole thing, with the thresholds, will be... we probably need to have a talk about that internally, but the structure and how to measure these things, yes, we want to open source that.
B
...the description of the vulnerability. All right, with that, I want to thank the SIG team for your time. If you have questions, don't hesitate to pop by any of our working groups. We would...