From YouTube: OpenSSF Webinar: Introduction to Project Alpha-Omega
Description
Hear from Brian Behlendorf (General Manager, OpenSSF) and Alpha-Omega project leaders Michael Scovetta (Microsoft) and Michael Winser (Google) to learn more about near-term goals and opportunities for participation in the Alpha-Omega Project.
Link to the presentation:
https://openssf.org/wp-content/uploads/sites/132/2022/02/Alpha-Omega-Feb-2022-Webinar.pdf
A
Hello, everyone. Thank you for coming to our webinar today. We're really excited to tell you more about this project we've begun at the OpenSSF, called Alpha-Omega. This is the beginning of the project; it is, as we call it, an experiment. It's something we've put a lot of thought into, but it's still very early days, and we'd love to find ways to involve all of you and push this forward, and faster.

A
So we thought we'd try to lay the groundwork and look for some opportunities to collaborate with all of you. Let me give you an overview of what you'll hear about today. We'll give you a bit of the background:
A
What drove some of the thinking and the conversations behind this; an overview of the mission and the vision for the project; and then some details about how we plan to actually deliver on that mission and vision. I will wrap up with some places to go where you all can contribute, and then we really want to leave a good chunk of time, perhaps about half the webinar, for open conversation and Q&A about where we can take this. There's still so much to think about and build here, and involving as many of you as possible in that would be great. Well, just to get started:
A
I'm Brian Behlendorf, general manager of the Open Source Security Foundation, which is part of the Linux Foundation. I've been with the Linux Foundation since 2016 in a couple of other capacities, leading the OpenSSF since October of last year, and I've been in the open source space for about 100 million years. Why don't I pass the baton to Mike Scovetta to introduce himself.
B
Hi, I'm Mike Scovetta. I lead an open source security team at Microsoft, and within the OpenSSF I lead the Identifying Security Threats working group. I've been in software and software security for about 20 years, probably more. I'm super excited to join Brian and Michael in getting this Alpha-Omega project off the ground, and I'm super excited to see where it goes. Over to you, Michael.
C
I'm excited, and terrified, about how we're finally waking up to this and paying attention to it. I've been working with Michael and Brian on Alpha-Omega to try to put our efforts together and see what we can do. So I'm very excited to meet everybody, talk about these things, and I hope you're all going to learn more.
A
Thank you, the two Michaels. So Alpha-Omega is an attempt to really look at how open source software is being written in the modern world. We know that open source software is the foundation of practically all modern technology; we see stats out there suggesting that 90% of the average software stack is actually open source software underneath, and we know that society needs that foundation to be safe, secure, and resilient. There's no better evidence for this than the fact that a bug in a Java logging framework can trigger a series of meetings and public proclamations by the White House and other policymaking organizations, and drive a whole lot of disruption and investment by organizations trying to close up that hole. But this is critical infrastructure now, and I think everybody is recognizing this, even far beyond our own bubble.
A
This
is
bridges
and
highways,
and
these
are
also
digital
public
goods,
and
so
we
really
need
to
start
to
think
about
how
how
do
we
best
support
the
existing
mechanisms
for
building
open
source
code
and
the
existing
maintainers
and
the
existing
foundations
and
organizations
in
a
way
that
isn't
just
about
delivering
a
300
page?
A
You
know
tomb
of
thou
shalt
to
them
and
say
and
penalizing
them
when
they
have
a
defect
found,
but
is
instead
something
much
more
bottoms
up,
much
more
supportive,
much
more
systematic
in
in
not
just
looking
at
a
a
couple
of
projects,
but
trying
to
really
cover
the
breadth
of
the
mate
of
of
all
the
open
source
projects
that
are
used
out
there
in
a
meaningful
way.
A
But let's pull back and be a bit humble about the scope of that work. There are a lot of open source projects and a lot of open source developers, and we'll never close every hole; the last defect is fixed when the last user has passed away, as they say. So we've got some ideas here. This is an experiment, but it's an experiment along a couple of very specific lines.
A
Let
me
start
first
before
I
head
it
off
to
michael
windsor,
to
kind
of
elaborate
a
bit
more
on
what
it
actually
is.
Let
me
just
pause
and
help
say
what
it's
not.
In
particular,
it
is
not
a
fund
to
pay
open
source
project
maintainers
directly.
There
are
plenty
of
other
projects
trying
to
do
that.
A
Those projects are trying to answer the question of what the sustainability model for open source is, in different ways. There are some targeted places where we might apply some funding to help get over the hump in a couple of projects, but we'll go into that. It's not a certification body or process; we're not trying to bless or recognize good versus bad, or have some formal certification-oriented kind of thing. And it's not a replacement for normal security practices.
A
Our hope, in fact, is that this ends up being a capacity-building mechanism that helps lift how other organizations, other open source foundations, and companies build open source code, and the practices they adopt, so we're going to be reusing a lot of things coming from other parts of the OpenSSF to make that work. This is not a process for forking and taking over open source projects; people love conspiracy theories, but that doesn't get any traction here. Nor is this a replacement for any other existing services.
A
There's very little in this that we think is actually being done well by anybody else out there in the open source ecosystem, so we'd really love to partner with anybody who has similar objectives or is complementary in what we do, because, again, this is a big challenge. It's also not a private zero-day trading club.
A
We will be dealing with vulnerabilities, perhaps ones that haven't yet been disclosed, but there's a whole universe of proper thinking and proper care to be applied to how this gets managed, and to how maintainers and others work through coordinated vulnerability disclosure processes.
A
And,
finally,
it's
not
a
fully
automated
scanner
that
will
just
launch
launch
junk
vulnerabilities
at
maintainers
and
leave
it
up
to
them
to
have
to
clean
up
this
is
this
is
a
bit
more
thoughtful
than
just
trying
to
to
scatter
shout
that
out
or
we
think
at
least
so.
Why
don't
we
transition
to
what
it
actually
is,
and
with
that
I'd
like
to
pass
the
baton
to
michael
windsor,.
C
As most of you here are aware, because you've heard of it and are interested, Alpha-Omega is really trying to be, in a focused way, an applied, directed activity. Some of that direction is specific to certain projects, and some of it is meant to allow us to scale and provide scaled solutions.
C
So our mission is really to provide that direct maintainer engagement and bring expert analysis to actually achieve concrete outcomes. Even as the toolchains, the working groups, and all the other machinery we're putting in place in the OpenSSF are starting to develop the future, we're trying to act now toward just improving things, and then scaling it up. Next slide. And so, our aspirational vision:
C
This
is
where
we're
trying
to
get
to
one
is
where
you
know:
critical,
open
source
projects
are
actually
secure,
and
it's
important
to
note
every
word
here:
matters
critical,
not
every
open
source
project
is
critical.
Not
everything
has
to
be
secure.
Now,
just
like
a
startup
is
going
to
prioritize
certain
things
over
others
same
thing.
B
Thanks, Michael. For Alpha, the main point is working with maintainers directly. I'll have more information on the next slide, but basically we're going to target the very most critical open source projects. Even among critical projects: if we think there are 10,000 or 50,000 critical open source projects, the most critical 100 or 200 of that list would be targets for Alpha. Alpha will be essentially expensive, time-consuming, and heavyweight, at least heavyweight on our end.
B
We
hope
it's
not
heavyweight
for
the
maintainers
a
way
for
us
to
to
engage
understand
what
their
security
posture
is
understand,
where
their
gaps
are
and
help
them
to
do
to
to
fill
those
gaps
and
remedial
those
vulnerabilities
and
triage
those
bugs
and
kind
of
whatever
is
needed
if
a
project,
if
the
thing
that
they
need
help
with
most
is
moving
rocks
from
a
to
b,
and
if
that
helps
their
security
posture,
then
we
should
be
there
to
help
them
move
rocks
from
a
to
b
next
slide.
B
So
getting
super
specific
here
and
and
while
while
this
slide
has
lots
of
words
on
it,
it's
it's
all
intended
to
be
examples
of
the
kinds
of
things
that
we
can
do
and
where
our
thinking
is,
as
both
brian
and
michael
said,
this
is
an
experiment,
and
this
is
very
early
days.
So
some
of
this
is
subject
to
change,
but
imagine
imagine
you're
in
a
restaurant
and
you
know
and
you're
you're
you.
You
want
to
know
what
that
restaurant
can
offer
so
on
alpha's
menu.
B
If both we and the project think this is a fit, then we take a step forward and look at the main courses: where could Alpha provide the most value? This could be things like a source code audit. This could be setting up tools. This could just be helping and encouraging them to set up two-factor authentication for publishing, or commit signing, or things like that. There are some projects, like the OpenSSF Scorecard, which can give a rundown of where a project stands on certain metrics; maybe improving those metrics is the way forward. Or maybe it's just that they get lots and lots of security vulnerability reports, some of them low quality, and they just need help triaging them, and maybe they need help actually creating fixes for them.
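The Scorecard rundown mentioned here lends itself to a small illustration. This is a minimal sketch, not anything from the webinar itself, of summarizing a Scorecard-style JSON report by listing its weakest checks; the layout loosely follows `scorecard --format json` output, but the project name and scores below are hypothetical.

```python
def low_scoring_checks(report, threshold=5):
    """Return names of checks scoring below `threshold` on Scorecard's 0-10 scale.
    A score of -1 means the check was inconclusive, so it is ignored here."""
    return sorted(
        c["name"] for c in report.get("checks", [])
        if 0 <= c.get("score", -1) < threshold
    )

# Hypothetical project and scores, shaped like a Scorecard JSON report.
sample_report = {
    "repo": {"name": "github.com/example/project"},
    "checks": [
        {"name": "Branch-Protection", "score": 3},
        {"name": "Code-Review", "score": 8},
        {"name": "Signed-Releases", "score": 0},
        {"name": "Token-Permissions", "score": 10},
    ],
}

# The checks worth talking to the maintainers about first.
weak = low_scoring_checks(sample_report)
```

In this sample, `weak` picks out Branch-Protection and Signed-Releases, which is roughly the "maybe improving those metrics is the way forward" conversation described above.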
B
Whatever's needed, we want it to be on the table, as long as it's generally in the direction of improving the security of this critical open source project. And then, for dessert, we do want to look back and see how we did, because we do want to improve over time; especially for the first five or ten of these engagements, we expect to learn a lot.
B
Omega
is
at
the
opposite
end
of
it
and
and
not
opposite,
meaning
like
the
least
the
least
used
or
impactful
open
source
project,
but
but
still
within
this,
this
critical,
you
know
top,
let's
say
10
000.
Is
the
nice
round
number
that
we've
been
using.
We
want
to
use
a
combination
of
of
automated
tools
and
scoring
and
and
ml.
B
So
so
these
these
are
experts
reviewing
those
results,
validating
them
and
then,
assuming
that
they
are
authentic
and
important,
and
need
to
be
fixed
working
with
the
project
directly
reaching
out
and
doing
the
coordinated
vulnerability
disclosure,
but
also
lending
a
hand
in
creating
those
fixes,
if
if,
if
requested,
if
and
if
appropriate,
so
so,
while
there
is
a
large
degree
of
tooling
here,
it's
not
exclusively
tooling,
it's
not
fully
automated.
There
are
people
involved
and
if
you
go
to
the
next
slide,
sorry,
I
should
have
gone
here
earlier
yeah.
B
The
appetizer
is
using
the
tools
to
collect
lots
of
information
and
lots
of
vulnerabilities
lots
of
facts
about
about
about
these
projects.
We
will
have
engineering
software
engineer
security
engineers,
I
guess,
refining
this
rule
set
and
building
the
system
to
automate
the
triage
as
much
as
possible.
B
We
want
this
thing
to
be
like
magically
efficient,
such
that
you
know
we
can
kind
of
turn
the
crank
on
this
machine
and
get
a
high
quality
vulnerability
out
of
it,
and
then
we
just
keep
keep
turning
that
keep
turning
the
crank
once
we
validate
it
with
experts
we
reach
out,
we
get
the
thing
fixed
and
then
again
we
look
back
to
see
how
things
are
going
and
improve
the
tool
and
process
over
time.
B
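The "turn the crank" loop described here can be sketched as a tiny triage pipeline: raw findings from several scanners are merged by location, agreement across tools raises confidence, and only candidates above a bar reach a human reviewer. All the field names, the scores, and the agreement bonus are illustrative assumptions, not the actual Omega pipeline.

```python
from collections import defaultdict

def triage(findings, min_confidence=0.8):
    """Merge raw tool findings by (file, line, rule) and keep only
    high-confidence candidates for human review."""
    groups = defaultdict(list)
    for f in findings:
        groups[(f["file"], f["line"], f["rule"])].append(f)

    queue = []
    for (path, line, rule), hits in groups.items():
        # A finding reported independently by several tools is more credible,
        # so each extra reporting tool adds a small confidence bonus.
        confidence = max(h["confidence"] for h in hits)
        confidence = min(1.0, confidence + 0.1 * (len(hits) - 1))
        if confidence >= min_confidence:
            queue.append({"file": path, "line": line, "rule": rule,
                          "confidence": round(confidence, 2),
                          "tools": sorted(h["tool"] for h in hits)})
    # Highest-confidence candidates go to the analysts first.
    return sorted(queue, key=lambda c: -c["confidence"])
```

Used on hypothetical findings, two tools agreeing on the same injection flaw clears the bar, while a single low-confidence hit is filtered out before it ever reaches an analyst, let alone a maintainer.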
A
So there are a lot of questions, of course, that all of you may have about how all of this will actually work. We are very much a part of the rest of the Open Source Security Foundation; a lot of these ideas came out of conversations in a number of different working groups, and we plan to stay very close to those working groups. The three that matter most for the activities here: first, the Securing Critical Projects working group. This is a group that's been developing a mix of objective data, from things like the Harvard census report, which gives their view of critical projects based on metrics they're able to obtain, to conversations we have in those working groups about, say, code that sits very critically in the build infrastructure but might not show up in a software composition analysis, that kind of thing.
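The metrics-driven view of criticality mentioned here resembles the approach of the OpenSSF criticality_score project, which combines log-compressed signals with per-signal weights and thresholds. A minimal sketch of that style of scoring follows; the particular weights, thresholds, and signals are made up for illustration.

```python
import math

def criticality(signals, params):
    """Pike-style criticality score in [0, 1].
    signals: {name: raw value}; params: {name: (weight, threshold)}.
    Each signal is log-compressed and normalized against its threshold,
    so huge values saturate instead of dominating the score."""
    total_weight = sum(w for w, _ in params.values())
    score = 0.0
    for name, (weight, threshold) in params.items():
        s = signals.get(name, 0)
        score += weight * math.log(1 + s) / math.log(1 + max(s, threshold))
    return score / total_weight

# Hypothetical weights and saturation thresholds, not the project's real ones.
params = {
    "contributor_count": (2.0, 5000),
    "dependents_count": (4.0, 500000),
    "commit_frequency": (1.0, 1000),
}
```

A project with no recorded signals scores 0, and a project saturating every signal scores 1; everything in between is a weighted blend, which is one way to get the "view of critical projects based on metrics" described above.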
A
So
that's
a
group
that's
chartered
with
creating
and
maintaining
a
list
of
the
top
100
that
was
used
most
recently
in
distributing
mfa
tokens,
for
example,
at
the
end
of
the
year
to
to
developers
on
the
top
100
projects.
So
we
plan
to
work
with
them
to
kind
of
come
up
with
these
additional
lists
and
refine
that
over
time,
there's
a
related
question.
I
see
popping
up
quite
a
bit,
which
is
you
know,
are
we?
How
do
we
select
from
those
top
100,
the
ones
that
we're
working
with?
A
And
that's
that's
still
a
work
in
progress?
To
be
honest,
but
we'll
talk
about
that
in
a
bit.
A
The second working group we work quite closely with is the Best Practices working group. This is going to be a feeder for so much of what's on the menu, especially in Alpha: things to talk about with projects, trying to help them understand how they might adopt the best practices badge, the criticality score, or, sorry, the Scorecards, and other practices. But, again, we do not want to walk in with a "just read all this stuff and use these tools and you'll be okay" attitude.
A
It's got to be more bottom-up and needs-driven than that, but the Best Practices working group is creating a lot of value in what they're doing. And then there's the Vulnerability Disclosures working group. This is obviously going to be a big deal as we work through the results of the scans and see things that are problematic, perhaps not yet clearly a vulnerability, but something worth talking about with the maintainers as that evolves.
A
If those evolve into actual vulnerabilities, then finding a way to work with those maintainers through a graceful disclosure process, such that fixes get rolled out to major stakeholders and everyone can be updated as quickly as possible once it's public, is going to be pretty important.
A
That's
done
a
lot
to
try
to
figure
out
what
are
the
the
standards
and
benchmarks
that
open
that
might
be
appropriate
for
open
source
projects,
because
most
of
what's
out,
there
has
not
been
written,
particularly
with
open
source
in
mind.
So
one
way
for
all
of
you
to
get
engaged
with
us
is
to
find
us
on
those
working
groups,
but
we
know
as
well
that
we
will
be
will
need
to
be
developing
kind
of
a
public
engagement
model
for
each
of
these
two
halves
of
the
project.
A
Right
we've
got
some
ideas.
We've
got
some
things
that
we
think
might
work,
but
we
want
to
evolve
that
kind
of
more
a
little
bit
more
with
with
all
of
you
in
mind,
so
one
of
the
ways
to
stay
on
top
of
that
just
a
little
bit
out
of
order
here
is
to
join
a
slack
channel.
We
use
slack
pretty
extensively
at
open
ssf
all
right.
A
So
I
I,
if
you,
if,
if
you
all
of
you,
could
join
the
if
you're
interested
join
the
alpha
omega
alpha
underbar
omega
slide
channel
at
slack
open
ssf.org,
that's
a
great
conversational
way
to
stay
involved,
but
if
that
is
a
little
bit
more
than
you
want
to
want
right
now,
you
just
want
to
hear
about
updates,
and
that
kind
of
thing,
obviously
we'll
push
updates
to
twitter
and
the
open,
ssf
account
and
the
like.
But
we've
also
created
a
mailing
list
specifically
for
announcements
related
to
alpha
omega.
A
That
link
and
all
these
things
by
the
way
are
being
dropped
in
chat.
So
you
could
follow
that.
We
also
have
just
a
raw
expression
of
interest
form.
If
you
are
interested
in
knowing
more
interested
in
you
have
some
things
you
might
be
able
to
contribute.
That's
the
link
there
will
take
you
to
that
place
on
the
website.
As
well,
where
you
can
fill
that
out,
this
is
the
you
know.
This
is
this
is
really
what
we've
figured
out
so
far.
A
I think now it's actually appropriate for us to pivot and look at the questions that have been submitted. I've been scanning those a little bit, and let me paraphrase what I think is about half the questions here to the two Michaels: what will be the criteria for deciding what's critical, and which of the projects, particularly for Alpha, we decide to work with? I know one part of that is coming up with the list of 100 or 200 critical projects, but we're not going to be able to have that kind of hands-on, helpful intervention with all 100 of those projects. So?
C
As we've said a lot already, we're working with the working groups to get that initial list and build out from there, but our priority up front is really about our ability to get actionable, shovel-ready work that we can start to think about, do, and learn from. As we've said several times, we're still figuring out how this is all going to play out over time, so one of the first criteria for the early projects is not necessarily what is the most critical project.
B
Just
to
add
to
that
there
have
been
multiple
attempts.
You
know
over
the
years
at
coming
up
with
a
list
of
the
most
critical
projects.
Those
lists
are
usually
different,
they're
all
reasonable.
There's
no
standard
to
to
to
judge
you
know
which
one
is
better
than
the
other.
Really
the
critical
projects
working
group
does
have
a
list.
We
like
the
list.
B
That list will inform the projects we choose from, but if we made a mistake and chose the 150th instead of the third most critical project according to the criteria they used, we're still dealing with very important projects. So we don't want to be overly myopic, as in "let's start at number one, and we don't look at number two until number one is done," or anything like that. We do want to optimize, as Michael said: optimize for impact, optimize for speed, optimize for learning.
B
We're also looking at projects in the larger context of how they interact with the rest of the ecosystem. An individual library may be incredibly important; an ecosystem may be, de facto, much more important than any individual project. So we're trying to look at it holistically and come up with good choices that we all feel good about.
A
In terms of volume, we'll probably have some sort of beta-test or pilot phase for this kind of work, where we'll try to evaluate how successful we've been and whether there's a repeatable pattern to this kind of thing, and I think we've been talking about trying to aim for five different projects that we could reach out to in that period, lasting a couple of months.
A
I think our hope would be that over the course of the first year we could try to reach more like 15 to 20 such engagements. A lot of this is going to be based partly on how we scale the staff we would like to recruit for Alpha-Omega, and how we think about engagement of volunteers in that process.
A
This
is
this
is
hard
work
and
it's
it's
harder
to
ask
for
for
folks
to
volunteer
for
this
kind
of
thing
or
to
to
even
vet
that
they
have
the
right
approach
to
this,
but
I
think
it's
on
us
to
think
about.
How
do
we
scale
this
out
to
to
you
know
to
be
able
to
take
advantage
of
volunteers
who
show
up
with
real
skills
and
are
willing
to
to
kind
of
work
on
a
systematic
process
for
this?
So
I
think
those
are
you
know
again.
A
So again, 5 or 15 is just a dent in the 100. Hopefully we find ways to scale up; certainly more resources would help with that as well. But I think it'd be a while before we can claim to cover all 100. My hope as well is that if we can talk about the kind of work we do, we can have a ripple-out impact on the other hundred projects as well.
A
An
interesting
question
that
that
came
up
is
you
know
in
our
in
in
this
work
with
log4j.
If
we
had
started
this
a
year
ago,
would
log4j
have
been
on
that
list
and-
and
I
think
there's
a
couple
ways
to
dance
this,
but
but
I'll
leave
it
first,
the
two
michaels.
B
No,
probably
not
I
I
wish
I
wish
it
were
as
a
in
in
the
description
of
log4j
and
what
we
would
have
seen
from
a
high
level.
It
it
wouldn't
have
it
wouldn't
have
been
up
there
now.
Obviously,
it
would
be,
but
and
logging
frameworks
in
general.
I
think
you
know
people
are
starting
to
think
about
them
a
little
bit
differently
and
in
terms
of
kind
of
you
just
say,
magical
functionality,
yeah.
C
I
I
agree
with
michael:
I
think
that
it's
interesting
to
see
how
we
learn
about
classes
of
problems
right
so
there's
a
team.
I
work
with
here
at
google,
who
are
focused
on
fixing
all
kinds
of
sort
of
problems
in
the
linux
kernel,
there's
a
class
of
problems
they're
trying
to
eliminate
from
the
kernel
not
just
once,
but
in
a
durable
way,
and
we
could
look
now
with
the
you
know,
20
20
vision
of
hindsight
and
all
that
say,
looking
okay
points
of
extensibility
that
can
have.
C
Essentially
you
know,
outbound
calls
to
other
network
services
or
whatever
are
an
interesting
pattern
of
potential
risk
right,
which
is
not
really
earth-shattering
news.
We
just
hadn't
looked
at
it
through
the
same
lens
as
we
had
now
as
we
have
now,
so
we
might
entertain
an
effort
to
go.
Look
at
various
points
of
extensibility.
That's
exactly
the
kind
of
direction.
I
would
like
us
to
sort
of
start
understanding,
and
how
can
we
reliably
detect
those
things
right?
Are
there
patterns
of
coding
or
analysis
that
we
can
apply
to
get
there?
A
The
thing
I'll
add
to
this
is
you
know
that
this
is
a
question
that
kind
of
even
a
little
bit
more
focused
on
the
securing
critical
projects
working
group,
because
they,
you
know
what
one
of
their
their
data
sources
right,
is
the
harvard
census
and
there's
an
updated
version
of
the
harvard
census
coming
soon.
That
does
rectify
this,
but
the
previous
harvard
census
is
there's
one
or
two
I
can't
remember
which,
but
the
previous
one
did
not
list.
A
Log4J
and
part
of
that
was
you
know
it's
very
dependent
upon
the
data
sets
that
that
they
have
access
to
getting
access
to
data
about
which
components
are
are
downloaded
with
what
frequency
is
actually
rather
hard
to
get
to,
and
without
that
you
you
can
get
you
have.
A
You
can
do
software
composition
analysis
on
what's
embedded
inside
what,
but
you
don't
get
a
sense
of
impact
or
or
really
you
know,
does
this
matter
without
also
having
usage
data,
so
the
the
harvard
project's
getting
better
about
that
and
again
we're
going
to
try
to
hang
our
hat
on.
A
You
know
that
that
list
of
100
coming
from
securing
critical
projects,
as
we
whittle
that
that
list
down,
I
think,
engaging
with
not
just
you
know
the
individual
projects
that
we
find
interesting
to
talk
to,
but
also
with
the
foundations
around
them
is
going
to
be
important.
So
it's
not
just
about
talking
to
the
log
for
j
maintainers
right.
We
might
do
that
for
something
critical.
A
You
know
we'd,
look
at
that
or
for
a
you
know,
a
different
javascript
kind
of
component
here
or
there,
but
potentially
talking
with
the
organizations
around
the
apache
software
foundation,
the
open,
js
foundation
and
others
to
go.
Where
might
you
think
you
know?
There's
some
criticalities
here
to
here
are
some
projects
that
perhaps
don't
show
up
in
the
in
the
stats,
but
you
know
from
your
experience
are
perhaps
a
bit
more
need
of
some
of
this
kind
of
thing.
So
how
am
I
this
is?
Perhaps
a
provocative
question?
A
Can
an
open
source
project
request
to
be
included
in
either
alpha
or
omega?
Like?
Do
we
anticipate
having
kind
of
an
application
form
for
that
kind
of
thing?.
C
We would certainly entertain the conversation; early on, we'd love to hear from you and your interest at some level, but there is no "I'm on the list and therefore someone's doing things for me" kind of thing going on here. We're still figuring a lot of the engagement model out over time, but if you are interested in being part of this, first of all I would just say again what we already said earlier: become part of the working groups and have a conversation there. We're really going to listen to the working groups around which projects are critical.
A
Good. The next question I want to take is from Emily Fox. Emily, I think we've enabled the ability for folks attending to participate in the conversation. Can you unmute? Is it possible for you to unmute? Great.
D
So I'm a little curious, because automated security analysis is a large field and there's a lot of potential things that can go into it. Is there a kind of phased implementation approach around the kinds of automated security analysis you intend to do with these projects, or is there one particular kind that you think will be the most bang for the buck in its implementation?
B
So
right
now
we
have
a
kind
of
a
proof
of
concept.
Technically,
it's
a
container
with
a
bunch
of
tools
installed
in
it.
Those
tools
include
code
ql
from
from
from
github,
as
well
as
probably
15
or
so
other
static
analysis
tools,
they're
all
in
that
kind
of
that
style
tool.
B
We
don't
we're
not
constraining
ourselves
to
only
static
analysis,
we're
trying
to
think
of
like
what
what
a
fuzzing
story
would
be
around
omega,
particularly
when
it's
low
touch,
how
to
automate
the
the
fuzzing
harness
stuff
is
a
whole
rabbit
hole
of
challenges,
but
we
want
to
explore
that
as
well.
B
So
those
are
the
kinds
of
tools
that
we
that
we
want
to
do,
but
but
again
like
another
like
similar
a
similar
tool
in
a
slightly
different
category,
sure
we
would
consider
it.
What
we're
really
looking
to
build,
though,
is
something
that
has
a
very,
very
low,
false,
positive
rate
and
as
soon
as
I
say,
oh,
we
just
threw
a
whole
bunch
of
tools
in
a
container.
B
The
alarm
bell
should
ring
and
you
should
say,
wait:
you're
going
to
get
a
whole
bunch
of
noise
out
of
it.
That
is
that's
one
of
the
challenges
that
we
want
to
face
head-on,
with,
particularly
with
the
security
engineering
talent
that
we're
going
to
hire
for
this
is
to
is
to
eliminate
that
either
through
kind
of
constantly
looking
at
the
rules
and
and
whittling
them
down
and
scoring
them
adding
more
context
so
that
the
rules
can
be
more
applied
more
accurately
and
then
just
just
generally.
B
I
guess
with
the
with
the
goal
of,
if
a
if
the
security
analyst
who's
like
reviewing
the
the
output
of
these
tools,
mark
something
that's
false
positive.
We
should
consider
that
a
bug
in
our
tool
chain
and
that-
and
that
would
you
know,
be
on
a
list
that
we
would
fix.
We
know
we're
never
going
to
get
completely
clean
results
out
of
out
of
a
tool
chain,
but
we
we
want
to
the
only
way
that
we're
going
to
scale
is
by
by
reducing
noise.
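The "a false positive is a toolchain bug" policy described here can be captured in a few lines: each analyst verdict both files a bug and becomes a suppression rule, so the same noise is never surfaced twice. The rule granularity used below (rule id plus file) is an illustrative assumption, not the actual Omega design.

```python
class TriageFeedback:
    """Feedback loop for analyst verdicts on scanner findings."""

    def __init__(self):
        self.suppressed = set()   # (rule, file) pairs confirmed as noise
        self.toolchain_bugs = []  # every false positive is logged as a bug to fix

    def mark_false_positive(self, finding):
        """Record an analyst's verdict: suppress the pattern and file a bug."""
        key = (finding["rule"], finding["file"])
        self.suppressed.add(key)
        self.toolchain_bugs.append(
            f"rule {finding['rule']} misfired in {finding['file']}")

    def filter(self, findings):
        """Drop findings that match a previously confirmed false positive."""
        return [f for f in findings
                if (f["rule"], f["file"]) not in self.suppressed]
```

With this shape, a secret-detection rule that misfires on a test fixture is suppressed after one review, while genuinely new findings still flow through to the analysts.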
A
I'm going to drop the slides so we can talk more directly. I think there are a couple of things to add to that as well. One is that this is, I think, a useful place for us to think about engaging the community. Like everything we do, the software will be open source, and figuring out how to plug in additional scanning tools is an area we're happy to engage in and think about.
A
How
do
we
add
it
to
some
common
infrastructure,
but
as
well
as
thinking
about
things
like
what
are
some
of
the
scanning
patterns
for
those
tools,
and
how
might
we
work
together
on
devising
a
rule
set
publicly
collaboratively
with
with
the
public
on
that?
A
But
as
but
this
point
about
trying
to
identify
and
reduce
false
positives,
I
think
is
also
a
real
opportunity
for
us
to
work
with
open
source
developers
on
whether
it's
you
know
advanced
machine
learning
tools
to
to
look
at
these
things
or
flags
that
people
can
put
in
code
to
try
to
highlight.
You
know
no,
this
really
isn't
a
problem.
A
All
right,
if
there's,
if
there's
ways
in
which
tooling,
can
help
fight
the
false
positives
problem,
because
that
will
be
the
most
the
the
major
burden
upon
both
the
staff
we
hire
and
maintainers.
We
work
with
to
try
to
sort
through
that's
a
place.
We
could
really
use
it
use
use
some
help.
The
second
one
is,
you
know,
one
of
the
things
that
we
may
end
up
being
bound
by
is
the
operational
expenses
required
to
do
scanning
right.
A
You
know,
if
you
anybody
who
does
ci
for
a
modern,
open
source
project
that
pulls
in
lots
of
dependencies
and
and
tries
to
to
do
you
know,
testing
and
security
scan
security
scans
with
each
each
pull
request
or
each
commit
knows
what
I'm
talking
about
these
costs
can
quickly
overwhelm
you
know
even
a
mid-sized
project,
so
one
of
the
things
we
have
to
look
at
is:
how
do
we
cost
effectively?
Try
to
really
cover
the
gamut
of
10
000
projects
efficiently?
A
And
where
might
we
try
to
get
other
additional
resources?
Cloud
credits?
I
know
some,
you
know
folks,
do
offer
this
kind
of
thing
as
a
as
both
a
paid
service
and
often
free
for
open
source
projects,
but
corralling
that
into
kind
of
a
uniform
environment
is
going
to
be,
I
think,
perhaps
a
challenge,
but
again
we'd,
love,
folks,
helping
and
trying
to
answer
that
anything
michael
windsor.
You
wanted
to
add
to
that
before.
A
Let's
keep
going
okay,
great
is
irving
wodelski
berger
able
to
unmute
himself,
and
would
you
like
to
ask
his
question
then,
if
not
I'm
happy
to
ask
it,
I'm.
C
A hundred percent, Irving. A lot of the practices being developed, discussed, and evolved in the working groups came from projects that happened in other organizations, or from practices that are starting to emerge, and as we in Alpha-Omega try to become even more applied, really bringing what we have today and refining and improving it, we want to make sure people can benefit from that, and there's really nothing specific to open source about it.
C
Obviously,
access
to
the
source
code
is
a
critical
component
for
some
of
the
analysis
techniques
we
use.
If
it's
within
your
essentially
sort
of
you
know,
ecosystem
of
source
code
and
as
an
organization
you
have
access
to
the
code.
It's
great.
There
are
some
very
interesting
conversations.
I've
been
having
with
other
open,
assistive
members
about
vendor
relationships,
and
how
do
I
ensure
that
the
software
I'm
receiving
from
a
vendor
has
had
similar
analysis
and
using
things
like
scorecard
as
a
way?
C
Imagine a vendor producing a Scorecard report of their repos and other things, and some sort of standardized report about what kind of work I've done to analyze my own software. These are great concepts — we'd love to see that sort of stuff emerging and playing out, and certainly the lessons we learn through Alpha-Omega will be very much shared with the community.
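The vendor-produced Scorecard report imagined here could be consumed programmatically. A minimal sketch, assuming a JSON report shaped like OpenSSF Scorecard's output (a `checks` array with `name` and `score` fields — treat the exact schema as an assumption, and the report data below is invented for illustration):

```python
import json

# Minimal sketch of consuming a Scorecard-style JSON report. The field
# names ("checks" entries with "name" and "score") follow OpenSSF
# Scorecard's JSON output, but treat the exact schema as an assumption.
def low_scoring_checks(report_json, threshold=5):
    report = json.loads(report_json)
    return sorted(c["name"] for c in report.get("checks", [])
                  if c.get("score", -1) < threshold)

# Hypothetical report for illustration only:
sample = json.dumps({"checks": [
    {"name": "Code-Review", "score": 8},
    {"name": "Pinned-Dependencies", "score": 2},
    {"name": "Signed-Releases", "score": -1},  # -1: check could not run
]})
print(low_scoring_checks(sample))  # → ['Pinned-Dependencies', 'Signed-Releases']
```

A consumer on the receiving end of a vendor relationship could gate intake on a report like this, which is the kind of standardized exchange the speakers describe.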
E
You know, made it more available. And you know, one reason this is particularly important is — let's say at MIT, which I'm affiliated with — often people ask, well, where do the kids learn software engineering and things like that? Turns out, especially schools like MIT, and maybe the same at Stanford and so on, don't teach it. It's almost like —
C
I think, first of all, I dropped out of school a very long time ago, so I can't speak to how it happened then or how it happens now. But I share your enthusiasm for ensuring that engineering practices start showing up in all forms of software education — whether it's a computer science degree, where it theoretically is theoretical, through to a more intentional computer engineering degree — and certainly these software practices need to become part of the norm.
A
Well, Irving, I'd also direct your attention to the best practices working group, which developed a set of training materials for secure software development — three different courses that have been put up on edX, and we've had about 6,000 people register.
A
So far. We have very ambitious goals to grow that to something that can reach 100,000, although frankly it's the kind of thing that every software developer should go through and read and understand — just how their own code could be twisted against their intent, right? Just how to red-team their own code, and how that actually matters in open source as well. So, yeah — separately from Alpha-Omega.
A
I think there's more investment we'll see in getting that out there and more widely promulgated, and I think some partnerships with schools that we'd love to explore too. Thanks. Yeah — Tom Jones asks, and I'm just going to quickly paraphrase: will disclosure be different from CVEs? Maybe one of the two Michaels could talk about your view on how disclosure processes will be managed for the things that are discovered during Omega.
B
So the CVE is kind of the tail end of the disclosure process. I don't see any reason why we would invent our own. And, to be fair, there's a lot of conversation going on about the future of CVE and how to make that better, and we would kind of slipstream into that. I'd rather really smart people are thinking about that stuff, and we should —
B
We should leverage that. The disclosure process, though, right — up until the CVE, it is coordinated disclosure, working directly as we described. So for that process we follow kind of just industry best practices of how to reach out. One thing, just to be super clear — there was another question later on: are we going to make vulnerabilities public 90 days after? We haven't — we've talked about that.
B
We haven't really made a decision on precisely what that timeline and workflow would be. The conversation, and the principles that we have in mind, are: we want to do right by the project, in terms of giving them the support and the time that they need to fix things, but we also recognize that we are doing this on behalf of society — which is, by definition, at that point running a vulnerable version of the thing.
B
So we have to balance that need, and we're trying to do that in a thoughtful way. We will be transparent when we know what we want to do, and certainly we want to continue that conversation there.
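The timeline-versus-fix balance being weighed above can be sketched as a small date calculation. Note the 90-day embargo here is only the hypothetical raised in the question, not a decided Alpha-Omega policy:

```python
from datetime import date, timedelta

# Sketch of a coordinated-disclosure date calculation. The 90-day
# embargo is only the hypothetical raised in the question, not a
# decided Alpha-Omega policy.
def disclosure_date(reported, embargo_days=90, fix_released=None):
    """Disclose at the embargo deadline, or earlier once a fix ships."""
    deadline = reported + timedelta(days=embargo_days)
    if fix_released is not None and fix_released < deadline:
        return fix_released
    return deadline

print(disclosure_date(date(2022, 2, 1)))  # → 2022-05-02
print(disclosure_date(date(2022, 2, 1), fix_released=date(2022, 3, 1)))  # → 2022-03-01
```

The open policy questions — how long the embargo is, and whether a shipped fix accelerates publication — are exactly the parameters the speakers say they have not yet fixed.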
F
Yes, yes, I am. So my question was mostly Omega-focused — I wrote Omega in the question, though now I'm thinking it could apply to the alpha side as well. Is there going to be a community focus when it comes to the security researchers who are identifying vulnerabilities, or helping to identify vulnerabilities, in some of these projects?
B
So I think initially the answer is more of a focus on paid staff hired for this project.
B
I think we're all in agreement that, in theory, having a larger community working at that — there are a lot of security researchers out there who would love to, not be directed, but have interesting problems to think about. There's a whole other side of disclosure, and how do we vet — because there are bad actors out there as well. More to come; we're thinking about it actively.
B
You know — doing things directly, and having the community help in terms of what Brian and Michael have said: the Slack channel, the openness of working groups, the core tools that we use, and kind of advancing our thinking in how to do this well.
A
Let me move on to DJ Ware. DJ, would you be willing to ask your question about Linux distributions on the call? Or I can read your question — I'm kind of calling people out of the blue, I might be surprising them, I apologize. Well, DJ asked: what is our vision for how this interacts with, say, Linux distributions and their associated repositories? And we might also cover some of the other sources of repositories out there — Maven, PyPI, those sorts of things.
B
I don't think that we're limiting ourselves to any particular distribution, or package ecosystem, or anything else. We should be thoughtful in going in and looking at places where we think we can have the most impact. So for certain Linux distributions, you know — I don't think... well, for —
H
I wasn't trying to get too much into the weeds. I was just trying to understand — there's so much overlap in different packages on the different features that they have. How would I, as a developer, identify that, to say: I know this one has been vetted and this one hasn't? It's really more of the question. Okay, I think.
C
I think the granularity of effort in Alpha-Omega will be at what you would think of as a package level — whether it's an operating system package, or a language package, or some sort of open source project of some kind. But when you start aggregating and assembling things into an application, then it's your SBOM. Like we were saying the other day: the person who cares about the SBOM is the application operator, who asks, what do I have running live in production?
C
What vulnerabilities have I pushed out there — or do I not want to push out there — and what has emerged since I pushed it out? Those are two different conversations. Our effort in Alpha-Omega is not focused on that aggregated space — what happens when all this stuff comes together, and how do I deal with that? It's a great problem, and there's a lot of work going into it in various working groups and organizations, but we're looking at essentially the raw ingredients of that cake.
C
If you will — and trying to work on a piece-by-piece basis. Now, obviously there's a dependency graph — some Debian package is built up using some Python library or something like that — and so there's obviously a transitive set of problems that kind of works through there.
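The transitive set of problems described here follows the dependency graph: a vulnerability in one package reaches everything that depends on it, directly or indirectly. A minimal sketch — the package names and graph below are made up purely for illustration:

```python
# Sketch: walk a reverse-dependency graph to find everything
# transitively affected by one vulnerable package. The graph below is
# made up purely for illustration.
def transitively_affected(reverse_deps, vulnerable):
    """reverse_deps maps a package to the packages that depend on it."""
    affected, stack = set(), [vulnerable]
    while stack:
        pkg = stack.pop()
        for dependent in reverse_deps.get(pkg, ()):
            if dependent not in affected:
                affected.add(dependent)
                stack.append(dependent)
    return affected

# Hypothetical: a Debian package built on top of a Python library.
graph = {
    "some-python-lib": {"some-debian-pkg", "another-app"},
    "some-debian-pkg": {"desktop-tool"},
}
print(sorted(transitively_affected(graph, "some-python-lib")))
# → ['another-app', 'desktop-tool', 'some-debian-pkg']
```

This is why fixing one low-level "ingredient" package, as the speakers put it, can pay off across many downstream consumers at once.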
C
So again, the granularity of the package becomes the point at which we can start to make a security decision — an evaluation of, is this a risk or not — and I think that's how we're trying to focus it right now. So this is not an "I checked with Alpha-Omega and they said I'm okay." That's not where we're going. This is: can we address the industry-wide debt around security posture across a whole bunch of packages, and what that leads to?
A
And Andrea asked: "I may be naive, but the most critical projects are probably also the ones with the best security posture — is the current security posture part of picking the projects for alpha?"
C
You're not naive, but I think that, industry-wide, everybody is starting to realize that we have a certain amount of security-posture debt across how we do things — whether it's how we build them, or whether we've gone off and looked for vulnerabilities —
C
— you know, on a regular basis, and looking at these new patterns that are emergent, like I mentioned about extensibility before. And then, what are we going to do about it? How are we going to lean into that? So our feeling is: although there are a lot of projects that have a tremendous amount of eyesight on them — a lot of eyes helping make them better, and a lot of investment in security —
C
— every one of those has work to be done, and there are opportunities where we can make a difference, either by scaled approaches or by focused efforts. And again, we'll find out, right? We've certainly had conversations with various projects who would unabashedly tell us that the way they build their software is not how we might have offered to do it, and those are interesting conversations to have, and non-trivial journeys of change — to get from where you are to the right place.
C
So not everything is a sort of traditional vulnerability sitting inside a piece of code that hasn't been looked at. Some of it is, as you said, posture — in terms of code reviews, two-factor auth, etc. Those are some of the more actionable things. But there's also a fair amount of stuff where: has anybody looked? And, now that I've learned about this new pattern, how can I go off and apply it? That's where I think we can lean in as well. Great question — appreciate it.
G
— to ask? I am here, yes, thank you. Can you hear me okay? Yeah. So I've noticed that, even when fixes are available, a lot of the time vendors — technology vendors, software vendors — have very poor practices when it comes to updating their dependencies and getting the upgrades into their products in a timely fashion. I'm just wondering if part of this will address education of vendors, and trying to pull them down the righteous path.
B
So vendors specifically, I would say, probably not — although I would hope that Alpha-Omega itself kind of spurs future conversations that do have an impact there. But if you swap out "vendor" for "another open source project," then I would say that's part of our analysis. If we see, especially as part of alpha, that a larger project has a package-lock file that hasn't been updated in four years —
B
— that's interesting, and that would be part of our analysis: is there a reason why that's so out of date? Testing challenges — there are all sorts of reasons. Package-lock files are good, but they're also bad. I think the world needs another year or two of thought going into what the right trade-off there is.
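The "lockfile untouched for four years" signal mentioned above is mechanical to detect. A minimal sketch, assuming we only look at each pinned dependency's recorded release date — the `(name, release_date)` pairs and package names below are invented stand-ins for whatever a real lockfile parser would produce:

```python
from datetime import date

# Sketch: flag pinned dependencies whose recorded release date is older
# than a cutoff. The (name, release_date) pairs stand in for whatever a
# real lockfile parser would produce; the data below is illustrative.
def stale_pins(pins, today, max_age_days=365):
    return sorted(name for name, released in pins.items()
                  if (today - released).days > max_age_days)

pins = {
    "left-padder": date(2018, 1, 10),   # hypothetical package names
    "fast-parser": date(2021, 11, 3),
}
print(stale_pins(pins, today=date(2022, 2, 1)))  # → ['left-padder']
```

As the speakers note, staleness alone isn't a verdict — the interesting part is asking why a pin is old — but a check like this surfaces where to ask.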
C
This is a really interesting problem, John Mark, and everybody has it in some way or another. You don't want to live at head, but you want to live kind of close to head, and —
C
Right. And I'm starting to see — there are a couple of things to talk about. One is the cost of adding an open source project to your organization's dependency graph. You might have a security review, some sort of business policy or whatever — or, in some cases, no policy whatsoever. The cost can be deceptively low, and the total cost of ownership matters here; the cost of keeping up to date with it is a hidden cost that people often don't pay.
C
And the metaphor I like to use here — it's my favorite one; somebody on my team gave it to me — is that people tend to treat open source as free as in free beer, but it's really free as in free puppies. You're taking on a responsibility within your organization for maintaining that thing. It's not, "there's a bunch of other people making great code for us here, we'll just use it, they're awesome, we're getting free work" — because then you don't pick up those updates, and now you're not living at head.
C
And now you are essentially an unofficial, poorly declared, half-baked, Greenspun's-tenth-rule-like fork of that original project — you just haven't admitted it yet. Right? And that's a big problem. I want to be clear: I don't think that is within our remit within Alpha-Omega to solve. It's part of the OpenSSF's remit, and a lot of people are thinking about how to get there. And one of the things I think is important there is what is stopping people from continuously updating — and I think it's about information.
C
Do you have enough information to make an informed decision about this update? Do you have enough trust in the community that you can pick up changes more reliably and more frequently and live closer to head — versus, "oh man, I've got to do another three-month review of this three-line PR"? That boundary is really interesting. That's where we'd love to have continued conversations, but I don't think that's part of the Alpha-Omega scope.
A
I want to follow up on one other thing from the previous question and answer too, which is: it's not our intent to be seen as an arbiter of whose security postures are strong or weak, or to be publishing stats on that, or that kind of thing. There are other efforts to do that — you're all familiar with the best practices badge and with Scorecards.
A
You probably know that you can go to metrics.openssf.org and see, for — I think — a million different repositories, how well they do against the Scorecards and best practices badges.
A
Some of the CNCF landscape pages now offer the best practices badge as a variable to select on, as do some of the other landscape deployments out there. And we do see, actually, a different part of OpenSSF expanding that type of understanding of risk and security posture across open source projects in other ways too. Inevitably that will be a factor in some of what Alpha and Omega work on, but that's definitely going to be a separate kind of initiative.
D
Sure. As part of your engagements with these projects, are you also looking into IDE extensions, such as those that maintainers are using — not everybody does — to assist them in ensuring that they're writing better, more secure code prior to its commit or merge into these projects? I know this is a newer conversation I've been having with other security professionals across the industry: we often forget that IDEs are used, and that's really where the code actually starts to happen.
B
— that question, and I'm going to be careful. Yes, we absolutely need to do it — but "we" as the royal we. We have the OpenSSF security tooling working group, which I think is the perfect place to have those conversations and advance those things. We would love to feed our learnings into that working group, as well as the broader community, because absolutely we need better IDE-based squiggly underlines, or, you know —
B
— whatever is needed to write more secure code. So: a very strong supporter of that.
A
I know we're getting close to time, so I'm going to try to be super quick here. There's an interesting question from Johann Homburg: "Thanks for the initiative. It is a daunting task, and lots of real work ahead. At the same time, the security experts are not really idling at the moment — how do we attract volunteers to this project?" And I'll add on to that: we're going to have to hire some people for this as well.
A
So — any thoughts the two Michaels might want to share on that?
B
Frankly, I think that was one of the reasons why we went — are going — down the route of "we need to hire people": it is very difficult finding available security talent, especially people you can count on for many hours. We also don't want to treat security researchers as a free resource. I think, fundamentally, you should get paid for your work, and that gets much more complicated in a quasi-volunteer sign-up scenario.
A
So, we will try to tackle the remaining list offline and get back to the folks who have asked those questions — the backlog is 30 questions, so give us a bit of time to get to them. Anything among the ones that are probably worth trying to answer here in the short term? Michael, you want to pick one? Yeah.
B
I think there were a couple of questions I saw about whether we're going to work with commercial vendors, what our relationship there would be, whether we'd license commercial tools. I think everything is on the table. I would prefer not to, frankly, blow a large portion of our budget on licensing a commercial tool.
B
I'd much prefer partnerships there — partnerships meaning free. But I think the most important bit, especially for Omega, is the quality of the tool. Having a free tool that generates lots of false positives could be a net negative for us, so we want to be careful in what we integrate, how we integrate, and how we're able to tune that tool over time.
B
So, while a large portion of the tool chain that we use is intended to be open source — like CodeQL; the engine is not open source, to be clear — we are using CodeQL, and I'm open to using others in the same kind of capacity, or, even better, absorbing data sets of high-quality results. At the end of the day —
C
"Is it expected that an engagement will take one to two months, and do you have an intended cap for the engagement time frame?" That's a great question, Emily, on a long list of great questions already. The answer is: we don't know yet, right? We're still — and we're very excited by all the interest — we still get to hire our first employee.
C
But I think these are things where we will probably start out with an initial impression — like, "let's spend two months on project X" — and then after two months we'll ask: was that enough? Did we learn enough? And we'll figure out what the right engagement is. If you have experience and thoughts that tell us "the average engagement on this takes N months," that would be awesome to know; it will help us plan our thinking there as well.
C
I don't pretend to know the answer to that question, but — exactly — the reason I chose this question is that it sort of embodies the spirit of all the things we don't know about how to do an Alpha-Omega-like effort, and I think it's a great way to close the questions. We will, as Brian promised, answer as many of them as possible offline — but we're here, learning, and we really —
A
Yeah. Well, we are getting close to time — there's time to ask one more question, which is from Ben Rockwood: to what degree will Alpha-Omega forward other security standards, such as SLSA provenance or software bills of materials? Which of you wants to take that?
C
Those standards are about practices; they're about tooling. As I said at the beginning, in terms of our mission and vision, they are essentially starting to shape the future — one that we hope will influence the whole industry, and help the whole industry make writing secure software easier.
C
That's not what we're doing in Alpha-Omega, right? We will look at the signals that they represent. So, for example, if a project has a very high security posture, and a community that has invested continuously in that, that's a signal that's interesting to us. At the end of the day, it doesn't change whether or not we can go and find bugs in there. Somebody else had a question about the kernel, right — there's obviously a tremendous amount of eyes on the kernel from a security point of view.
A
Thank you, Michael. We should probably wrap up here. Michael Scovetta, any last words?
B
No — I really appreciate everybody's time, and taking the hour to listen and engage with us. I'm hoping — well, I'm confident — this is the start of a longer discussion and continued engagement. So please keep the questions coming, and hold us accountable to delivering on the vision that we've articulated.
A
Okay, great. And I just want to thank everyone for showing up as well. We dropped the links in the chat — the link to the presentation deck — and the recording will be on the webinar page and everywhere else we can put it. If you want to continue the conversation, join us over at Slack, on the alpha-omega channel of the OpenSSF Slack. And with that — thank you all for attending, and such great questions. Thanks.