From YouTube: Securing Critical Projects WG (July 1, 2021)
B: Cool, awesome. Thank you, everyone, for joining. We've got a couple of things on the agenda today and, time permitting, I'm sure we can get a couple of other things on there. Today we're going to start with George, who's going to present some results of a survey study, and we're really excited to hear all about it. If you want to give us an intro and dive right in, we're excited to hear what you have to present.
A: Sure, thanks so much, Amir. I'm George Sieniawski, a senior technologist at IQT Labs, and today I'm going to summarize the secure code reuse survey that John Speed and I and several colleagues recently conducted. Today's talk has three components: I'll start with a quick background on our approach, then I'll give you a demographic baseline on the highly educated, Python-reliant cohort that took the survey, and then I'll highlight a couple of the key results I was asked to highlight for you.

A: I should note there is a blog post coming out on this in a couple of weeks, so if anything you hear today interests you, we will be elaborating on it there, and of course we'll take questions at the end. So, just a brief item on our motivation and methodology, the why and the how of the survey: we at IQT Labs think a lot about Ken Thompson's 1984 "Reflections on Trusting Trust," which is all about trust-based code.
A: There are annual developer surveys, as everyone on this call is familiar with, but the problem, in our view, is that they ask very few questions related to security, to package reuse, and to software evaluation criteria. So we saw a gap in the data, and we saw an opportunity to survey developer attitudes, going beyond salary data and IDE preferences. That kind of information is certainly helpful; it just doesn't tell us very much about security.
A: So, a couple of months ago, my colleagues and I developed an 18-item online questionnaire on SurveyMonkey. We had some academic partners; I know Frank Nagle of Harvard Business School sometimes joins these calls, and he was one of the people who reviewed the survey.

A: Just one last note before I get into the demographic baseline and the results: we designed the survey to allow for cross-comparison with some of the RedMonk, TIOBE, and Stack Overflow data. Obviously we don't have the 65,000 devs that Stack Overflow has, but this is a start. And then, finally, I want to call out David Wheeler: in addition to our using our personal and professional networks, Dave helped promote the survey, and we're really grateful for that.
A: So, moving right along: who took the survey, and what did we find? Just to level set, these are some descriptive statistics. This was a very polyglot programming population; I think the average was 7.4 programming languages. Python was by far the most widely used, then we had some shell, JavaScript, and Java. We asked related questions about which package managers people use: pip, of course, and PyPI were very significant, as was Anaconda, and then npm, and Maven was, I think, the third or fourth most popular.

A: Then we asked a two-part question, which was sort of an instructional manipulation check, just to see if survey participants were straight-lining and clicking through: how many years have you been coding professionally, which is in blue here, and how many years have you been coding overall, which included school, coding boot camps, and the like. This data too was fairly consistent with the Stack Overflow results, showing that something like 40 percent of developers today have less than a decade of experience under their belts.
A: I'll come back to this experience point in just a moment. Then two more demographic points, and we'll dig into the results. We took an educational snapshot of our survey participants, and something like 90 to 95 percent of our survey takers, that orangish region of the chart here, had a bachelor's degree or more. When you dig into the comparable Stack Overflow data, I think in Stack Overflow's case it was 75 percent with a bachelor's or more, so that roughly 20-point difference in our cohort, our sample frame, I think just comes down to the fact that we used our personal networks.
A: The world that IQT Labs has access to is not the world, and so we see evidence of sophistication in this population. But this is not the only form of sophistication; people learn how to code from all different sources. And then, finally, when we asked about organizational size, you'll notice we drew the line here at schools and workplaces with a headcount of 50 or more; that's an OECD econometric guideline used in a lot of micro- and small-enterprise studies. The bulk of our survey takers were at medium and large enterprises in the private and public sectors. So let me pause there to see if there are any questions; otherwise, I will dig into what these folks told us.
A: So, as I mentioned, we dug into these survey takers' self-reported behaviors and attitudes, so you've got to take that with a grain of salt. But one of the interesting questions was: which of the following support resources, if any, does your organization provide? There were fewer survey takers who agreed with that.

A: As we work through our analysis, one of the things John Speed and I are doing is trying to see which survey takers had more than one of these organizational support resources in place, using the sort of bare-bones standard that, ideally, your organization is giving you the top four of these things: security training, putting policies in place, warning you to make sure you're not YOLO-installing untrusted packages, and potentially making independent open source reviewers available before you go and do something dangerous.

A: I think the headline number is that about 75 percent of our survey takers do not have more than one of these, and we take that as a call to action. These are things that public- and private-sector organizations can put in place, and we think our data are at least suggestive there.
A: This is something to dig into in the future. And I'll say one more thing about these organization-specific resources: you have to read this against the backdrop of the previous slide on organization size. These are fairly large organizations, so more work has to be done in that area.
A: When we asked survey takers what criteria they use to determine whether a software package is safe to install, I'll just highlight the two salmon-orange bars at the top: "package seems popular or trusted" and "maintenance seems recent." Everybody on this call knows those are fairly superficial metrics; they don't necessarily tell you much about code provenance or finer-grained information. "Source code is available for inspection" and "dependencies clearly listed," further down the list, are more informative. Only about a third of this population was actually checking for security advisories and CVEs.

A: Let me skip ahead in the interest of time; I have three more slides for you. These are also sort of attitudinal, ipsative data, digging into how often developers look to information sources, and then the next two slides are on a Likert scale from strongly disagree to strongly agree, with neutral in the middle, so let me spend a little bit of time here.
A: On this slide: how often do data scientists and software engineers use the following information sources? You'll notice the large green area next to README; it seems the README may be the port of first call, and we know that intuitively. A smaller number of survey takers look to the issues tab. Fewer still (that red area is "never" and "seldom") look at the actual source code themselves. And then probably one of our more disheartening results, which maybe we can spend some time talking about, is this last "package scan" item.
A: I know we don't have Stack Overflow's vast cross-section of the developer population, but if this is at all indicative, there are at least two conversations we can have about the low levels of reliance on package scans, at least among the well-educated population we were able to reach. We think this may have something to do with usability; otherwise we might expect this pattern to shift in the opposite direction. So, number one, usability; number two, education, a consistent theme of this survey.

A: So, shifting from how often you use the following information sources, and I apologize for the header here, it should read: "To what extent do you agree or disagree with the following statements?"
A: "I believe package registries are responsible for keeping code safe" also saw high levels of agreement, and we further dug into this in terms of respondents who agree that package registries bear the responsibility along with individual developers. There seems to be some view that this is a shared responsibility with individual developers, and we'll talk about that in the forthcoming blog post.

A: "I do not engage in pre-install code vetting" was another rather alarming result. People, I guess, placed their trust in our survey and were willing to admit where they stood on this statement, and then another sizable population only upgrades their dependencies if they contain critical vulnerabilities. And I think this is the last slide; I apologize again for the heading at the top. It should read: "To what extent do you agree or disagree with the following statements?"
A: "I wish I knew more about the vulnerabilities associated with code reuse": people who took our survey overwhelmingly agreed with that statement, and again, "a cry for help" may be putting it strongly, but we read this as a plea for further assistance, further educational resources, and more learning on what the vulnerabilities are. Then I'll skip over "I always check my code for credentials before commits"; there's some evidence that this population uses GitHub more frequently, and we're going to dig into statistical significance there. And finally, "I'm comfortable evaluating a new library for security risks."
A: Again, a fairly large number, just under half of the people who answered this question, said they do not feel comfortable, so there's more work to be done. I'm going to stop there, see if there are any questions, and stop sharing my screen. Thanks.
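The "I always check my code for credentials before commits" item mentioned in the presentation can be automated. Dedicated scanners such as gitleaks or trufflehog do this properly with large, entropy-aware rule sets; a minimal sketch of the idea, with purely illustrative patterns, might look like:

```python
import re

# Illustrative patterns only; real secret scanners ship far larger rule sets.
CREDENTIAL_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key ID shape
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)(password|secret|api_key)\s*=\s*['\"][^'\"]{8,}['\"]"),
]

def find_credentials(text: str) -> list[str]:
    """Return the substrings in `text` that match any credential pattern."""
    hits = []
    for pattern in CREDENTIAL_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(text))
    return hits

if __name__ == "__main__":
    staged = 'db_password = "hunter2hunter2"\nkey = AKIAABCDEFGHIJKLMNOP\n'
    for hit in find_credentials(staged):
        print("possible credential:", hit)
```

Wired into a pre-commit hook over the staged diff, even a crude check like this catches the most common accidents.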
D: Yeah, I just saw that one of the items in the graph was fixing vulnerable packages, or not even running an audit: you know, auditing the software on your stack and going, "oh, we're running vulnerable packages." From my experience running these npm audits or, you know, Rust cargo audits or whatever, you go, "okay, cool, I have a vulnerable package, let me upgrade it. Oh, but it's in a dependency of mine."
D: Okay, so let me go to github.com, find the dependency over there, and then go through the whole motion of submitting a pull request, etc. And at the end of the day, for many companies, maybe you would rather just say, "let me fork it and just change it, so this dependency uses my new dependency."
D: That is, the fork is fixed, and let me test it out here. Because I remember way back in the day this was an issue with Angular 2: there was some nested, five-levels-deep vulnerability with extracting zip files or tar archives, or something crazy like that, and I asked one of the engineers I was working with, "hey, go fix this, let's see how deep it is," and he was like, "no way, we're just going to use this vulnerable..."
D: ...use the vulnerability. I mean, the vulnerability wasn't even really impactful, which is another thing: it was like, "hey, there is a vulnerability here, but it's not reachable in your current code," and that's another signal that is really important. I recently discovered CodeQL, and I was like, okay, so this path injection, this file-path injection problem with user-input control, I don't really have to worry about it. This is more about static analysis tools, but the same principles apply: you have to verify that these are actual, legitimate concerns for your particular use case, and right now there's just a lot of false positives in the space. And even if there is a real positive, it's really hard to change if it's an indirect dependency. So, I don't know, I think that's just a common problem for all of us here, maybe.
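The pain D describes, a vulnerability several levels down the tree, usually starts with figuring out which of your direct dependencies is dragging the vulnerable package in. A minimal sketch of that walk over an already-resolved dependency graph follows; the graph and all package names here are made up for illustration:

```python
def paths_to(graph: dict[str, list[str]], roots: list[str], target: str) -> list[list[str]]:
    """Return every dependency chain from a direct dependency down to `target`."""
    found = []

    def walk(node: str, path: list[str]) -> None:
        if node == target:
            found.append(path + [node])
            return
        for child in graph.get(node, []):
            if child not in path:          # guard against dependency cycles
                walk(child, path + [node])

    for root in roots:
        walk(root, [])
    return found

# Hypothetical resolved graph: the app directly depends on web-framework and cli-helper.
graph = {
    "web-framework": ["http-lib", "template-lib"],
    "template-lib": ["unzip-lib"],
    "cli-helper": ["unzip-lib"],
}

for chain in paths_to(graph, ["web-framework", "cli-helper"], "unzip-lib"):
    print(" -> ".join(chain))
```

Knowing every chain tells you which direct dependency needs the upgrade, or whether only a fork of an intermediate package would do.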
A: That's a great point, and there were some items we didn't have time to get to, but we asked about the frictions and pain points that devs have once those vulnerabilities have been flagged, once you've gone up your dependency chain.

A: We asked about CLI versus GUI preferences, because there seems to be a generational component. There are developers who grew up entirely in the CLI, sort of CLI purists, and then the younger cohort that is out there installing packages all day long seemed to consume their information through a GUI.

A: Nothing here should shock this group, but it's a consideration when we think about alerting on vulnerabilities, when we think about information sharing, threat sharing, etc. So that's another item I didn't have time to specifically call out.
A: One of the things we plan on doing in this blog post shortly: there seems to be a magic number, which is a decade of experience, or nine-point-something years, where people who took this survey respond a little bit more cynically and a little bit more guardedly. And again, in this theme of education, of knowing your user, of trying to take all the brilliant work that has been done in CodeQL and Semmle and things of that nature: there's a new generation of developers out there that needs to be told this. I know that's a very basic point, but it came through in our data.
F: By the way, I was going to chain off of what John said about how a lot of times you have high false positives, or you say, "well, this is not really relevant to my code." I just went through this exact conversation yesterday, in fact, with a developer team who were looking at (I probably shouldn't say this out loud) open CVEs in their dependencies, and they were like, "oh, this can't be exploited." My observation was that a lot of times developers are not the best ones to make that determination. They don't have the creativity, right? There's sort of this mindset difference between developers and hackers; people can't really break what they make, or figure out how to. So anyway, I was just going to agree with you, but then you have the opposite problem, where it's like, "well, I can't see how this could actually affect me." Well, you don't; you suffer from a lack of imagination.
G: If I could jump in there, I agree 100 percent. The code-flow capabilities of CodeQL make it more reasonable to say that this is a vulnerable dependency that is not in the call graph starting from the app. I mean, it's infeasible to do that by hand; it's somewhat feasible to do it with tools. And what we found is that a significant portion, like double-digit percentages, of dependencies in the transitive graph can be shaken off.
G: That is, if you can do that, because they are really detached from anything; they exist because of these super-libraries, like crypto-browserify, that just include every crypto thing. The second thing is on the forking: we strongly advocate, don't fork unless it's an incident, or unless you're really, really, really sure what you're getting into. Most teams don't want the technical debt of caring for this new thing that they've created.

G: So there was a balancing there, of security versus the debt, or the risk that they will inherit it and then forget about it. And now you have a 14-year-old version of Angular that has been forked, so they can't change it, and they're paying for that over time.
H: On that, I think that for platforms like JavaScript, where something in your call graph may not end up in your target build, your comment is valid. But there are languages where that's not the case, like Python, where a vulnerability in a dependency you might not be calling, combined with an unknown vulnerability in something you are calling, may enable people to reach it.
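The "not in the call graph starting from the app" argument discussed above is, at its core, a reachability query: if no call path leads from your entry points to the vulnerable function, the finding is (statically, at least) unreachable. CodeQL does this with far more precision; this toy sketch assumes the call graph has already been extracted, and all function names are made up:

```python
from collections import deque

def is_reachable(call_graph: dict[str, set[str]], entry_points: set[str], vulnerable: str) -> bool:
    """Breadth-first search: can any entry point reach the vulnerable function?"""
    seen = set(entry_points)
    queue = deque(entry_points)
    while queue:
        fn = queue.popleft()
        if fn == vulnerable:
            return True
        for callee in call_graph.get(fn, set()):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return False

# Hypothetical call graph for an app using a library with a flawed helper.
call_graph = {
    "main": {"parse_args", "lib.fetch"},
    "lib.fetch": {"lib.retry"},
    "lib.unsafe_extract": {"lib.write_file"},   # flawed, but nothing calls it
}

print(is_reachable(call_graph, {"main"}, "lib.unsafe_extract"))  # False
print(is_reachable(call_graph, {"main"}, "lib.retry"))           # True
```

As H points out, this is only as sound as the call graph: dynamic dispatch, reflection, and eval-style loading can create edges that static extraction misses.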
D: I have a question on the presentation; it's more like extending the survey a little. I'm wondering, especially with the CLI versus GUI tools: maybe I missed it because I was a little late, but was there a slide showing the tools that certain people are using, so you can associate it with their demographic, whether it's their age or their background or whatever? Like, they only use deps.dev, or some GUI, Scorecard, something or other, I don't know, "Snicker"...
D
However,
you
pronounce
sneak
or
snicker
whatever
you,
you
know
what
I
mean,
though,
like
like
what
tools
are
people
using
and
does
it
correlate
or
correspond
with
like
how
they
entered
this
space?
You
know
whether
they're,
like
of
a
newer
class
or
of
an
older
class
of
folks
who
older
class
people
just
were
unfortunate
to
have
shitty
monitors
and
not
good
web
browsers.
Duh.
That's
why
they
don't
like
guys
but
yeah.
My
question
was
like
more
like
you
know
what
tools
are
people
using
and
is
there
any
interesting
info
from
that.
A: Yeah, so that was actually the last question on the survey, in terms of preferred interface. The only thing that stuck out like a sore thumb, and we've got to caveat this on statistical power, was that age factor. I think if we were to do a follow-up, we might list Semmle, CodeQL, and these kinds of tools and try to get a sense of relative preference.

A: But this was just an initial foray, just to see, first of all, whether trust-based attitudes prevail, and to what extent. And that kind of floored us: just people's candor in saying, "no, no, I don't check, I don't look, I trust strangers from the internet completely, thank you."
A
With
that
with
us
with
the
survey,
so
you
know
it's
it's
consistent,
but
I
just
want
to
go
to
a
an
earlier
point.
You
know
on
the
the
differences
between
npm
users
and
and
type,
I
you
know,
that's
another
area
that
I
don't
know
if
john
speed
is
still
on.
A
I
don't
see
them
on
one
of
the
tiles,
but
we
are
very
interested
in
these
sort
of
cross
language,
ecosystem
differences,
and
you
know
because
I
think
sometimes,
when
we
talk
about
package
managers
in
the
abstract,
it
really
is
apples,
oranges,
oil,
rigs
and
chainsaws,
and
you
know,
there's
a
commensurability
problem
and
I
I
think
there
are
lessons
the
package
managers
can
the
registry
organizations
themselves
can
teach
each
other.
A
This
is
something
we've
talked
about
with
with
mike
scavetta,
not
not
to
put
you
on
the
spot
mike,
but
you
know
there's
more
to
be
done
in
that
area.
I
think,
is
the
short
of
it.
A: But we're going to be making the data publicly available. I don't think I mentioned it in my spoken remarks, but we deliberately did not collect PII, so the data will be publicly available along with the blog post. I am actually the resident data-visualization developer, so we have a series of interactives that will go along with this; if you're the TL;DR type, the blog post gets a little dry on demographics.
A: The other idea, and this is where I really have to be careful because this is not set in stone, is that, in the spirit of Stack Overflow, it might be interesting to come back to this population and try to get a bigger, more representative sample, and take feedback like the questions this group has posed: "okay, now what scan tools are you using?"
A: There's some obvious follow-up there. Another area, if I'm just being very honest, that we sort of omitted and that might be worth digging into a little bit more is the containerization trend. When we asked what you are developing in, we left a free-text section at the end, and a lot of folks wrote in, "hey, I also use Docker; you didn't ask me about Docker." So, whether that's on par with something like Python...
A
Again,
it's
there's
this
commentability
problem.
Where
we're
talking
about
package
registries,
language
based
ecosystems.
You
know
everybody
here
knows
what
we're
talking
about,
but
we've
got
to
be
careful
what
we
mix
and
what
you
know
what's
in
and
what's
out,
but
I
think
a
future
survey
might
take
a
broader
view
of
these
things.
B: Very cool, very cool. Well, I know the working group would be happy to have you back if you ever want to provide any updates, or if we can help spread the word in any way. That would be awesome; really appreciate it, thanks for sharing. So, do we have any other questions or comments for George or the presentation so far?
C: I was just going to echo what Amir says: thank you, and yeah, definitely let us know how we can help too. I know that in this working group, when we were talking with the folks at Harvard LISH, we brainstormed some ideas for survey questions and things like that. So if we can be helpful as you think about the next version or iteration of this, definitely let us know.
B: Great, thank you. Yes, and as far as tools, I know OpenSSF is working on quite a few that will, I think, help with a lot of those pain points. Michael just shared one, kind of in beta, I think; is that the reproducibility measurement tool, Michael? Yep.
B: Cool, so yeah, I'd say also just be on the lookout for the tools coming out of this work group and the other work groups; I think they can help in a lot of ways. Awesome. So I think... oh wait, sorry, okay! So next I think we have Kim, who's going to give us an update on Security Scorecards.
C: Speaking of tools: the Scorecards project actually lives in the Best Practices working group, but I'm here today and I'm going to talk about it and give you all an update. This actually goes quite well with the survey we just saw; I think it was slide 13 that calls out a lot of the checks that the Scorecards project is doing. I don't have slides, but this is a blog post we published yesterday on the Google blog, so just a little background.
C: Scorecards is an OpenSSF project that runs a number of security checks given, say, a GitHub URL. Let me just... where is it... yeah, so it's at ossf/scorecard. You can see a number of the security checks that it runs, and it returns either a true or false plus a confidence level. So, the update we posted yesterday: we launched this back in late last fall, and since then we've done a bunch of work.
C: One is adding a bunch of new security checks. So now we have checks to see: are there vulnerabilities, based on our OSV tool? Are there binary artifacts in the repo? Are you freezing your dependencies? Do you have Dependabot or Renovate bot turned on for the repo? And, I think, a few more checks. And then there's an awesome thing.
C: We now have data for over 50,000 repos, and we're storing that in BigQuery; it's all publicly available and accessible, and you can query the data. We'd like to scale that even further; we keep bumping into GitHub API limits, so we're all sort of sharing tokens right now, but we would like to keep going. And then here's a really cool graphic looking at those 50k projects and how they're stacking up. You'll see that a lot of the checks we're looking for were actually in George's deck, and there's a lot of red in the graph; we need to work as an industry to get more green on these graphs. And then there are a couple of case studies: the Envoy project was an early one that picked up on the Scorecards project and has been iterating against its own dependency policy.
C
For
how
you
know
how
they're
approaching,
if
developers
want
to
introduce
new
dependencies
into
the
envoy
project
and
how
they're
using
the
scorecards
to
help
mitigate
some
of
the
risk
there,
and
then
we
improved
our
own
scorecard
for
scorecard
so
dog
fooding,
our
own
stuff,
we're
pinning
dependencies
now
and
we
have
a
security
policy
in
the
repo
that
that
tells
people
who
to
email.
If
you
find
vulnerabilities
and
of
course,
we've
expanded,
the
community
azim's
on
the
call
who's
been
driving
a
lot
of
this
work
and
now
being
he
was
here.
C
I
think
he
left
and
we
have
a
standing
meeting
now
for
just
the
scorecards
team.
That's
on
the
open,
ssf,
public,
mailing
or
public
calendar,
and
I
think,
there's
a
slack
there's
a
slack
as
well,
and
so
a
couple
of
the
upcoming
things
that
we
are
looking
into.
Oh,
oh
michael,
you
just
reminded
me:
where
is
it
we're
also
showcasing
the
scorecards
data?
C
I
don't
even
know
where
it
is,
but
on
the
metrics.openssf
dashboard
that
michael's
been
driving
is
pulling
in
that
data
and
also
the
recently
announced
debts.dev
dashboard
that
was
that
was
announced
through
google,
that's
pretty
cool
to
check
out
and
so
upcoming.
C
These
are
some
of
the
bigger
things
that
we
want
to
add
is
have
like
github
badges
for
scorecards,
some
more
integration
with
cicd,
tooling,
and
then
integration
with
this
other
new
project
in
openssf,
called
all-star
and,
and
so
we've
been
working
on
this
all-star
project
to
actually
try
to
help
enforce
some
of
these
security
checks.
C
So
I
think
I
can't
remember
we
did
a
quick
presentation
in
one
of
these
working
groups
about
all-star,
but
it's
like
a
github
app
that
you
have
to
go
install,
so
we
can't
install
it
in
everyone's
repositories.
But
it's
trying
to
do
you
know
some
of
the
enforcement
of
these
security
checks
and
then
like
opening
issues
or
like
trying
to
like
turn
on
the
the
check
where,
where,
where
it's
possible.
So
so
that's
the
update
on
scorecards
we'd
love
to
get
more
feedback,
get
more
folks
involved.
All
the
above.
F: Hey, this is really great work, Kim; thanks for sharing it with us. I think that's really very cool. Can I offer a couple of questions and comments? When I took a look at what the previous scorecard system was recording, it looked a little bit to me like it was somewhat unreliable. As in: we might have started badging at some point, maybe four years ago, but didn't really complete it, yet we got a thumbs-up for starting it. It felt a little bit unreliable in terms of the results. That was just an impression; is it backed up by data? We looked at a few projects that looked good from the old scorecard system, and it was like, well, that didn't match reality, as in, we looked at a specific project, how it scored, and what we knew about the project.

F: So I guess the question is, and all of this is good, don't take this as anything other than a question: did you sample it and go, "oh yeah, this is spot on," where it indicates a problem with a particular package that we know is actually super bad, or the other case, where it gives a good score, but... anyway, that's the question.
C
Yeah,
so
I
think
you
know
I
mentioned
there
is
a
confidence
value
that's
coming
across
to
through
each
of
the
security
checks.
I
would
say
I
mean
there's
always
room
for
improvement,
I'm
not
sure
azeem.
If
you
can
speak
to
some
of
the
things
we've
done
there.
Maybe
you
dropped
off.
I
don't
know,
oh
no,
there
you
are,
but
I
think
what's
helpful.
Like
some
people,
some
projects
have
said:
hey.
The
score
is
a
little
bit
wrong.
C
Like
did
you
like
file
an
issue,
you
know
tell
us
about
it
and
then
we
can.
Then
we
can
make
improvements
that
way
too,
but
it's
hard.
You
know
we're
just
doing
this.
The
thing
about
scorecards
is:
we
want
it
to
be
completely
automated,
so
you
know
so
so.
There's
no
humans
involved
really,
but
yeah
obviously
like
it
could
use
a
lot
of
tweaking
and.
F
I
mean
you
know,
there's
biases
and
any
automated
tools
as
much
as
you'd
like
it
to
be.
You
know,
purely
you
know,
objective
oftentimes
though,
and
it's
oftentimes
not
even
like
the
survey
showed
some
people
trust
you
know
something
and
then
it's
like
you
know
anyway.
Sorry
I
I.
C
Yeah,
I
think
an
active
discussion
going
on
too
is
and
and
maybe
they're
again,
a
z
might
be
able
to
speak
to
someone.
C
This
too,
is
like
there
are
different
things:
different
security
checks
that
organizations
or
people
feel
like
that
are
more
important
than
others,
so
coming
up
with
a
way
to
sort
of
weight,
your
own
checks
against
you
know
the
full
result,
because
you
know-
maybe
you
know,
maybe
being
an
active
project-
is
more
important
to
you
than
something
else,
and
so
giving
folks
the
ability
to
sort
of
tweak
their
own
enforcements
based
on
different
tiers
or
of
what
they
think
is
secure
is
another.
I
think
topic
in
the
for
discussion.
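The tiering and weighting idea Kim describes can be sketched as a simple weighted aggregate over per-check results, where each consumer supplies their own weights. The check names, weights, and 0-10 scale below are illustrative, not Scorecard's actual scoring:

```python
def weighted_score(results: dict[str, bool], weights: dict[str, float]) -> float:
    """Collapse pass/fail check results into a 0-10 score using caller-chosen weights."""
    total = sum(weights.get(check, 1.0) for check in results)   # unlisted checks weigh 1.0
    if total == 0:
        return 0.0
    earned = sum(weights.get(check, 1.0) for check, passed in results.items() if passed)
    return 10.0 * earned / total

results = {"Active": True, "Pinned-Dependencies": False, "Security-Policy": True}

# Two consumers weighting the same raw results differently.
print(weighted_score(results, {"Active": 3.0}))                # activity matters most here
print(weighted_score(results, {"Pinned-Dependencies": 3.0}))   # pinning matters most here
```

The same pass/fail data yields very different scores depending on what the consumer values, which is exactly why a one-size-fits-all aggregate score is contentious.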
E: Yeah, I just want to add that right now, at least for v2, we have been focusing more on the breadth of Scorecards: as Kim mentioned, we're trying to add more checks and trying to add more repos. But yes, of course, we also need to look into how to improve each of these qualities and how to get into the depth aspect of it.

E: Frozen dependencies is a very good example: right now we are looking at frozen dependencies in Dockerfiles and things like that, but there are so many ways you can add these dynamic dependencies, for example inside Python or something of the sort.
E
So
we
are
doing
this
case
by
case
basis,
so
I
I
do
agree
that
it's
it's
not
like
a
hundred
percent
spot-on,
but
like
some
checks
we
know
we
are
doing
it
right
like,
for
example,
we
can
very
easily
check
if
a
repository
is
active
or
not
or
if
people
are
you
know
from
multiple
orgs
are
contributing
to
this
repository
or
not
so
some
of
these
things
we
are
definitely
spot
on,
but
you
know
things
like
frozen
dependencies,
whether
they're
using
dependable,
to
update
their
dependencies.
E
I
think
there
is
always
going
to
be
some
amount
of
confidence
score
there
that
we
are
going
to
rely
on
so
yeah.
So
that's
that's
kind
of
where
I
think
the
community
will
help
where
people
can
say.
Hey,
look,
there's
this
tool
also,
so
maybe
scorecard
should
like
look
into
that
along
with
you
know
the
tools
that
you're
already
checking
so
yeah.
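Azeem's frozen-dependencies example illustrates why these checks carry a confidence value: a pinned-versus-floating check is easy to sketch for a single format like requirements.txt, but the hard part is the long tail of other places dependencies can be declared. This toy check, which only handles the simplest cases, shows the easy half:

```python
def unpinned_requirements(requirements_text: str) -> list[str]:
    """Return requirement lines that are not pinned to an exact version with `==`."""
    unpinned = []
    for line in requirements_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and whitespace
        if not line:
            continue
        if "==" not in line:
            unpinned.append(line)
    return unpinned

reqs = """\
requests==2.25.1
flask>=1.0        # floating lower bound
numpy
"""
print(unpinned_requirements(reqs))   # ['flask>=1.0', 'numpy']
```

Everything this misses (hash pinning, environment markers, dependencies installed from inside setup.py or a Dockerfile RUN line) is why a real check reports a confidence level rather than a certainty.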
C
I mean, they're decoupled projects, but the original way we were looking at this was: hey, there's a whole lot of open source projects in the world, and we need to help fix them. We can't boil the entire ocean, so how do we start reasoning about the most critical projects first? That was sort of the inspiration for the criticality score project. We should probably give an update on that.
C
I'm
not
sure
like
what's
actually
been
happening
on
that
side
of
things,
but
yeah
like
just
just
trying
to
get
some
sort
of
like
narrow
down
the
problem
space.
If
you
will
is
totally
you
know
what,
as
you
know,
yeah
yeah
yeah,
why?
Why
did
you
have
a
like
specific
idea
or
like
questioning,
curious.
B
Not in particular, no. I'm just trying to think of ways things can be integrated together, kind of like how the metrics project could integrate some of the data from Security Scorecard, or, sorry, from criticality score, for example. But nothing in particular came to mind.
D
I feel like it'd be cool to run criticality score on your internal projects and ask: for company X, what are your critical open source projects? I don't think that's quite there yet; I haven't checked criticality score in a while, but I feel like it's just grabbing all projects, because, you know, Google things. But I had comments on the Scorecard.
D
I've
been
using
it
last
couple
of
months
and
then
this
last
week
we
got
like
a
open
source
fix-it
sprint
to
to
do,
and
so
me
and
my
team
have
really
been
using
this,
and
I
noticed
back
in
the
day
with
like
the
dm
repo.
D
It
wasn't
like
catching
static
code
analysis
because
of
course,
they're
doing
static
code
analysis
for
the
rest
project
in
like
kind
of
a
unique
way,
and
it
gave
me
two
thoughts.
One
was
that,
like
man,
I
wish
I
could
file
a
ticket
right
now
and
send
it
straight
to
ossf
from
my
command
line
client,
because
I
still
haven't
gone
to
the
gui
master
race
yet
and
then-
and
so
you
know
just
like
command
line.
Oh,
this
is
a
false
positive,
like
there
actually
is
static
analysis
and
it
sends
an
issue
nicely.
D
I've come across an open source project, and it is doing the static code analysis, or it's doing X or Y or Z, but for some reason OSSF Scorecard isn't actually detecting that. In the interim, while we fix the Scorecard issue, or while we improve the detectors for upstream Scorecard, let's have, like, a cheat.txt file or something that says "passes CI/CD" for OSSF Scorecard; almost like an exclude, or like: hey, it's an issue upstream.
D
It's not an issue with this project; we are doing this, and here's the commit message with the detailed proof. So add a scorecard.exceptions, or scorecard-dot-whatever, and be like: we actually are doing static analysis for this project, or we do have signed releases.
D
I don't know why signed releases wouldn't show up, but you see what I'm saying: have that cheat sheet, or that cheat.md or scorecard.md file or something, just in case Scorecard isn't detecting something for whatever reason, because there are a lot of custom implementations of blah, and we can't detect all of those implementations; it's just not a fun problem to solve for.
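Scorecard has no such exceptions file today; purely as a hypothetical sketch of the idea being proposed, a maintainer-asserted overlay (file name and schema invented here) could be merged over the automated results like this:

```python
import json

def apply_exceptions(scorecard_results, exceptions):
    """Overlay maintainer-asserted results on top of automated check results.

    `exceptions` maps a check name to {"pass": bool, "evidence": str}; the
    evidence field is where a commit link or explanation would go. This file
    format is hypothetical, not part of OSSF Scorecard.
    """
    merged = dict(scorecard_results)
    for check, claim in exceptions.items():
        merged[check] = {"pass": claim["pass"],
                         "source": "maintainer",
                         "evidence": claim["evidence"]}
    return merged

automated = {"SAST": {"pass": False, "source": "scorecard"}}
overrides = {"SAST": {"pass": True,
                      "evidence": "custom static-analysis CI step (placeholder)"}}
print(json.dumps(apply_exceptions(automated, overrides), indent=2))
```

The open question raised later in the discussion, how to keep such self-asserted overrides from being abused, is exactly what this sketch does not solve.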
C
Yeah, I like those ideas. Jeffy pasted something; is that for doing the issues on the fly?
D
Oh, this is the GitHub CLI, yeah, so I can integrate it with the CLI. I mean, the thing is, with OSSF Scorecard I think I already have my GitHub token exposed, so it could just be: do you agree with these results, question mark, with a cute little emoji at the end, and then you say yes or no. You can have an interactive thing, or you can do it just for scripting, where it doesn't do a user-input prompt.
D
But
you
see,
like
you,
see,
kind
of
how
it's
all
integrated
in
the
scorecard.
So
like
yeah,
we
can.
You
can
make
issues
once
you
have
a
github
token
you're
like
assuming
that
privilege
is
there.
You
should
be
able
to
spawn
an
issue
but
loop
yeah,
some
kind.
D
Because people will use Scorecard and they'll be too lazy to create an issue, because then you have to go figure out how that organization crafts their issues and what information you should include, whereas it's like: one click, stop. That's the people we typically deal with in my society. So maybe we provide that, and I'm making the suggestion with the intent of being able to apply engineering time to help with that.
E
Yeah,
I
really
like
the
cli
idea.
I
think
that
they'll
be
very
helpful
to
get
some
feedback
on,
like
you
know,
corner
cases
where
we
are
either
showing
false,
positive
or
false
negatives
and
the
other
they'll
be
useful
to
get
some
data.
And
then
we
can
add
these
in
our
test
case
and
then
try
to
work
around
that.
G
Oh
just
saying
that
that,
as
far
as
like
correcting
or
or
augmenting
or
commenting
on
data,
you
know
we
it
could
be
done
in
the
in
the
in
the
dashboard.
We
talked
about
making
that
more
of
a
read,
write
thing.
Obviously
that
doesn't
benefit
people
that
don't
use
the
dashboard.
So
it's
a
it's
trade-off,
but
it's
another
option.
I
guess.
C
Is
that
through,
like
issues
or
are
you?
How
are
you.
G
We're
not
doing
it
today,
but
what
we're
envisioning
is
kind
of
an
overlay
on
top
of
the
data,
so
that,
if
scorecard
thinks
that
something
is
true,
but
someone
knows
better,
it
can
be
set
to
false
and
and
then
it's
a
well,
how
are
people
going
to
abuse
that
function
and
how
do
we
check
it
and
who
do
we
have
all
that
stuff
which
is
kind
of
why
we
haven't
done
anything
because
those
are
those
are
like
need.
People,
problems,
yeah,.
B
Very cool. Well, thank you so much, Kim, for that update on the Security Scorecard. Does anyone have any other questions for Kim about that, or anything else related to those projects?
J
Yeah, just a few questions regarding Scorecard: are there any plans for a Scorecard badge, to encourage open source projects to take the Scorecard seriously? You know, so that they can have a badge in their readme file or something like that, showing their score is really good.
B
Cool, okay, wonderful. Thank you for the questions and comments. I think, maybe for the last couple of minutes: one thing that worked well last time was, if the representative, or someone who wants to talk about it, is here, getting some updates on some of the projects and efforts and initiatives that we've been working on.
B
So
we
got
a
little
bit
of
it,
but
we
got
the
scorecard
update.
Do
we
have
any
updates
on
the
criticality
score
project
that
anyone
would
want
to
talk
about
or
provide.
B
And then I think, what's it called, Package Feeds as well? I believe it's Jordan; is he here, or does anyone want to give an update on Package Feeds?
B
No
okay!
Well
one
quick
update.
We
have
from
o
stiff
open
source
technology
improvement
fund.
It's
a
pretty
tangentially
related
update
on
some
work.
We
just
did
recently
derek.
Do
you
want
to
just
give
a
quick
update
to
the
work
group
on
that.
I
Sure
I
will
actually
just
link
the
research
in
chat,
since
we
are
very
short
on
time.
We
did
a
project
with
the
linux
foundation
and
the
trailer
bits
on
their
release,
signing
policies
and
procedures.
I
There
were
some
interesting
things
in
there,
such
as
people
who
have
access
to
signing
keys,
not
having
a
smart
card,
and
there
is
a
lengthy
debate
about
whether
we
should
be
using
touch
activated
devices
or
not,
because
that
eliminates
a
class
of
problems
where
you
just
keep
your
smart
card
in
your
computer
at
all
times.
It's
not
really
providing
any
security.
I
In that case, if your computer gets compromised, the attacker always has access to the key, whereas with a touch-activated device you have to actually be physically present to touch it in order to sign something. So that was an interesting result, and we talked pretty extensively about it, because there is no open source touch-activated key that is reproducible and meets all of those requirements.
B
The full research report can be found there, so if you have any questions about that, or want to discuss it at an upcoming meeting, feel free to let us know. It was more of a policy kind of IT-audit review, as opposed to a traditional manual source code review, which yielded slightly different results.
B
But
regardless,
I
think
the
results
were
good
and
we
were
able
to
have
a
body
of
research
that
can
be
built
on
top
of
and
and
referred
to
for,
future
use
and
and
future
development
in
the
in
kernel
release
signing.
So
that
was
awesome
and
then
going
along
with
that.
B
We actually currently have a proposal being reviewed and voted on by OpenSSF's TAC, where we were essentially offered a grant of sixty thousand dollars, and we have proposed doing a thorough audit review of Symfony with that grant specifically. So that is currently being reviewed and voted on.
B
If
you
have
any
questions
about
that
or
want
to
discuss,
it
feel
free
to,
let
us
know
you
can
drop
me
myself
or
derek
an
email
or
if
you
want
to
review
that
proposal,
we
have
that
as
well,
but
hopefully
we'll
know
more
about
that
soon
and
you
know
we'll
actually
be
able
to
through
openssf,
specifically
be
able
to
to
show
what
we
do
in
securing
critical
projects.
And
hopefully
we
can
do
that,
starting
with
symphony.
B
Cool, okay! Well, thank you, everyone, for coming today, and thank you to our presenters for the updates. Yeah, there's the Slack channel as well: if there's anything you'd like to discuss with the work group offline, the Slack channel, I think, is a good avenue for that, as well as getting involved with the email lists, and participating is very easy. So, how do I get into Slack?
B
I'm just learning how Slack works, but I'm fairly certain you just add the OpenSSF workspace, and in that you can join the different subtopics; this work group has a separate link, for example. But yeah, I'm not too good at Slack myself; I'm just learning it. To me it's just glorified AIM from back in the day. But I will send a link to the Slack information for you now, John.
B
Cool, all right. And with that, thanks again, everybody; have a great Fourth of July holiday. Oh, Michael sent it; thank you, Michael. So I think you can go through that link and get involved with Slack. Have a great Fourth of July weekend for everyone celebrating, have a great weekend, and looking forward to the next one.