From YouTube: Alpha Omega Project Public Meeting (June 7, 2023)
Description
https://github.com/ossf/alpha-omega
No public meeting notes.
B: Give it a few minutes for some folks to join and get started.
B: Zoom was literally crashing every eight minutes during the last hour's call.
B: Awesome, okay. Welcome, everyone, to the June 7th AO public meeting. I'll be your host, happy to. The purpose of this forum is to give anyone the opportunity to ask questions, chat, raise concerns, whatever, with the AO team: what we're doing, how we're doing it, what we could be doing better, and all that. So I have a couple of updates from the past month.
B: Some of it's interesting; actually, sorry, all of it's interesting, and some of it's new. I'll just go through a couple of these. Stop me if you have questions; I'm hoping to keep most of this time open for discussion. Early in May we announced at OpenSSF Day that we had received an additional transfer of funding from Microsoft and Google. Thank you to Microsoft and Google; this rehydrates us back to the point that we can...
B
We
feel
good
about
this
year
into
2024
from
a
funding
perspective,
so
we're
able
to
make
additional
investments
in
key
critical,
open
source
projects
to
so
we've
we've
had
discussions.
I
believe
last
time,
I
mentioned
that
we
are
engaging
with
openssl
to
complete
a
security
audit.
This
is
done
through
ostiff,
that
is
in
progress.
B
I
am
hoping
to
have
the
results
public
in
the
September
time
frame
after
any
issues
have
been
resolved
and
the
final
report
is
is
made
and
everything
there
We've
also
issued
a
a
grant
to
improve
support
in
Rust
to
improve
support
for
rust
in
the
Linux
kernel.
This
will
be
announced.
I,
don't
think
the
announcement
on
this
went
out
yet
so
this
is
I'm
dropping
the
gun
a
little
bit,
but
we
did
send
the
money
so
memory
safety
is
super
important.
B: We want to support that, so we think this investment will be good there. The money is also going to be used to improve features within Rustls, a cryptographic library written in Rust.
B: We have a couple of others that are nearly signed, in a similar vein of providing money to critical open source projects to do critical security work that has broad and leveraged impact, including one for identifying vulnerabilities across the ten thousand list and getting those fixed. We're super optimistic about being able to talk about that one publicly; I'm hoping next month we'll have some sort of announcement.
B: Each of our Alpha projects provides monthly updates, and Rafael from Node is here. I don't want to put you on the spot, but if you wanted to chat, you always have a welcome spot here to talk about what's going on with Node.
E: All right, yeah, sure.
E: Now it should be better, right? Okay. So, a lot of things are happening on Node.js: a lot of security fixes, and the new permission model arrived with Node.js 20, so we are collecting a lot of feedback. Just to provide some context, the permission model is basically a security feature to deny access to resources during Node.js execution, reducing attacks against the file system, worker threads, and some Node.js internals (it is enabled with the `--experimental-permission` flag, together with allow-list flags such as `--allow-fs-read`).
E: We are re-evaluating the whole feature and currently having a lot of good discussions around security. However, I can't say more, because it's not public yet; if I said more, I would be disclosing a security vulnerability, so I will disclose it appropriately. So that's it.
B: Awesome. I think what we should probably do next time is have a round robin. You're obviously welcome to speak every time, but we'll re-invite the other Alpha projects to come in on occasion, give an update, and talk about what they're up to. All the progress reports for them are in the AO GitHub repository, so that's up there.
B: Let's see, I'm going totally out of order; I have separate notes that are organized differently, so none of this makes sense. But we've launched a website: alpha-omega.dev is our new website. We do need to get it linked from openssf.org; the AO page there goes to the old landing page, but we're going to put more stuff there. The progress reports from each of the Alpha projects are in our GitHub, though, and that just gets updated.
B: I also wanted to pass the baton to Yesenia to talk about the mentorship program, which kicked off at the beginning of the month. Not to put you on the spot, but if you'd like to say anything, you're welcome to.
A: Yeah, happy to be in the spotlight. Our mentorship program kicked off last week, on June 1st; I think the mentees are a day shy of a full week. We've got them onboarded and working towards their two respective areas. One is security engineering with me, on improving our Omega toolchain; I'll share the link to the software requirements document that we're currently drafting and working through this week.
A: So if anybody has any feedback, comments, or suggestions, we welcome them in that document. And then we have the security researchers on Jonathan's end, and I'll pass the baton to him to talk a little further on that.
F: Sure, yeah. The researchers similarly started the same day. We're giving them a crash course in CodeQL, and then we're diving into OpenRewrite and I'm having them do some work there. Just trying to get them familiar with the general code bases we're going to be working on, and working with, for the course of the summer. So yeah. Awesome.
B: Okay, so we released a proof-of-concept tool called disclosure check; I posted a link to it in the doc. Disclosure check is intended to find the best way to privately report a security vulnerability in an open source package. For some projects it's just an email address; for some you have to somehow find the author; some projects have GitHub private vulnerability disclosures, or Tidelift, or a Jira page, or a private mailing list.
B: Doing that once can take maybe a few minutes; doing it repeatedly is both soul-draining and not a great use of time. That's what automation is for, and the tool is the automation for that. You point it at an npm or PyPI project or a GitHub repo or whatever, and it gives you a sorted list.
B: Here are the high-confidence ones, here are some maybes, and here are some fallbacks if nothing else works. That lives under the OpenSSF; we're trying to figure out what the right maintenance model is, whether it'll be the Vulnerability Disclosures working group or us. It looks like it'll probably be us, which is fine. For folks that find this interesting or useful, feedback is very much appreciated.
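The sorted-by-confidence output described here can be sketched as a small ranking routine. Everything below, including the channel names, the confidence scores, and the `rank_channels` helper, is hypothetical illustration in the spirit of the proof of concept, not the actual disclosure check implementation.

```python
# Hypothetical sketch: rank candidate vulnerability-disclosure channels
# from high confidence down to fallbacks. All data below is invented.

def rank_channels(channels):
    """Sort candidate disclosure channels, highest confidence first."""
    return sorted(channels, key=lambda c: c["confidence"], reverse=True)

# Example candidate channels discovered for a fictional package.
found = [
    {"kind": "maintainer_email_guess", "confidence": 0.3},
    {"kind": "github_private_vuln_reporting", "confidence": 0.9},
    {"kind": "security_email", "address": "security@example.org", "confidence": 0.7},
    {"kind": "public_issue_tracker_fallback", "confidence": 0.1},
]

for ch in rank_channels(found):
    print(f'{ch["confidence"]:.1f}  {ch["kind"]}')
```

A real tool would populate the candidate list from package registry metadata, SECURITY.md files, and similar sources before ranking.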
B: That's what that is. And the AO website, I think, should just link off of there. I think the other one is the staging site for the one project. Okay, yeah.
B: I think those are the big recent updates. Am I missing anything? We've got a bunch of things we're working on this month and next month. In general, we're just trying to move forward in all places: have the tooling and the security research move ahead, explore new projects, and have meaningful discussions on where we invest and how we measure the value of that investment.
H: All right, hi everybody. My name is Aaron; I am a security researcher mentee, so I'll be working under Jonathan this summer.
H: Right now I'm a second-year undergraduate studying computer science at UC Irvine, so this will be more of an introduction to open source and security for me. But I'm really hoping to learn a lot, get to know everybody, and become really familiar with this code base, and then hopefully use this opportunity to keep moving forward in my career and contribute to more open source software. Should I just pass it on to somebody?
D: Hi guys, my name is Splenda. I am a recent graduate from NSU, and I am a security engineer mentee of Yesenia's. My focus for this mentorship is going to be on the triage portal. I'm really excited about it, and, like I've said multiple times (but I'll say it again), I'm really excited to get this kind of unique opportunity to gain experience and learn from you guys. I'll pass it on to Andres.
C: Hey, what's up, everybody. I too am a recent graduate, but from NJIT. I spent a little time before on a co-op doing security engineering, so I'm really looking forward to learning more about open source security and everything that comes along with that. I don't see the other mentee here, so I think that'll be it.
A: Yes, he wasn't able to make today's call, from my understanding, but maybe at the next one he'll be able to introduce himself. Thank you, everyone.
I: Can I introduce myself? My name is Sharon, and I'm currently working at Reddit as a security engineer; previously I was working at several Fortune 100 companies. I recently came to this awesome project because it's close to my regular work; I'm also leading a vulnerability management program inside Reddit, so, you know, what is the best way to fix them?
I: Then I got to know about this awesome open source project, so that is the reason I joined this meeting: just to understand how it's going, what the project is, and whether there is a way I can contribute back to the community. Those are the things I'm looking at. Awesome.
G: I just wanted to quickly thank you for the opportunity to collaborate with Project Alpha. We on our side are really excited to be working more closely with everybody, and to be working on making security a reality. Very excited; thanks again.
E: So we are discussing on the TSC, and having some discussions internally, around how we assess reports against experimental features in Node.js. We normally create some features that are experimental, so to use one you need to pass a flag (--experimental-something) and explicitly opt in, and it is not meant for production. The idea of an experimental feature is that you can test it and collect feedback, and then we improve the feature along the way.
E: However, we decided last year that we are accepting vulnerability reports against experimental features, because even though they are experimental, they should be secure; they should be secure by default. The only discussion we are having is basically about the assessment: should the severity of such a report be the same as for a regular feature?
E: For instance, when we use the CVSS calculator to set whether the vulnerability is critical, high, medium, or low, and considering that these features are experimental and the user opts in to use them, we are not entirely sure if they should be assessed in the same way as a regular feature. So I think that would be a great discussion to have here.
E: My personal opinion is that in most cases they should. But what we are trying to prevent is this: let's say we release an experimental feature, and then we receive a bunch of reports against it, all of which, if we use the CVSS calculator, will be tagged as high. If we then perform a security release with all of those tagged as high, it effectively tells users that Node.js 20.1, for instance, is totally insecure.
B: That's a good one. I feel bad that I haven't come across that before and don't already have an opinion formed. I'm looking at the CVSS calculator, like you mentioned. My gut feeling is that there should be something there that asks: is this an out-of-the-box feature, or is it something that has to be opted into?
E: Well, there are two options. One is user interaction: technically, if you need to use this option, it requires user interaction, and the attack complexity normally goes a bit higher. But that doesn't decrease the score a lot, you know.
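The effect being discussed can be made concrete with the published CVSS v3.1 base-score arithmetic. The following is a minimal sketch of the specification's formula for the scope-unchanged case only, just to show how flipping User Interaction from None to Required moves an otherwise identical vulnerability by a single step:

```python
import math

# CVSS v3.1 metric weights (scope unchanged), from the FIRST specification.
W = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},  # scope-unchanged values
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(x):
    # Spec-defined "round up to one decimal place" helper.
    return math.ceil(round(x * 100000) / 10000) / 10

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - W["CIA"][c]) * (1 - W["CIA"][i]) * (1 - W["CIA"][a])
    impact = 6.42 * iss
    exploitability = 8.22 * W["AV"][av] * W["AC"][ac] * W["PR"][pr] * W["UI"][ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# Same vulnerability, with and without required user interaction:
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # 9.8 (critical)
print(base_score("N", "L", "N", "R", "H", "H", "H"))  # 8.8 (high)
```

As E notes, requiring the user to opt in only nudges the score: a network-reachable, full-impact vulnerability drops from 9.8 to 8.8, still "high", because CVSS has no metric for "experimental, off by default."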
I: So generally, a CVSS score doesn't account for whether the feature was experimental or anything like that. It mainly accounts for (sorry, I don't know your name) things like user interaction, how much privilege is required, all those things. And if you look at a lot of companies, people generally don't rely on CVSS alone; for example, if you are running a vulnerability management system, it won't depend purely on the CVSS score, because the CVSS score never accounts for exploitability.
E: Yeah, we do that as well. There are some cases where we do use CVSS, but there are some odd cases, like you said, where it doesn't fit, and we assess those manually. But even assessing it manually, it's difficult to categorize severity as critical, high, or medium; and considering that these are experimental, we have a good debate, at least internally, about whether they should be categorized the same way as a regular feature.
I
Yeah
regarding
that
experimental
I,
never
I
never
used
experimental
ones,
but
but
my
question
is
that
whether
the
experimental
future
is
in
Alpha,
which
is
about
to
release
in,
like
you
know,
we.
E: In Node.js, we don't have beta or alpha; it's either experimental or it's stable.
I: Okay. So, as per my general experience (previously I worked as a dev, then I moved into security): generally, unless some new feature is releasing that contains some important change my organization needs but isn't currently using, only then do people actually use it, because nowadays every software development happens based on Dockerfiles, right?
I
So
it's
very
hard
that
people
actually
create
these,
like
you
know,
either
like
AWS
Lambda,
otherwise,
some
other
Docker
based
upon
some
experimental
like
an
image
but
I
won't
100
rule
out,
but
but
most
of
the
time
like
you
know,
people
actually
use
these
Docker
images
they're
actually
built
upon
the
existing
released
images,
not
based
upon
experimented
milk.
That
is
my
two
cents
to
add
to
that
discussion,
but
but
I
think
this
is
a
good
discussion.
B
Yeah,
so
as
a
user
I
think
after
reading
this,
like
I
I,
expect
it
not
to
be
like
it
it's
going
to
change
it's
not
subject
December,
it
might
actually
not
work.
Well,
on
my
you
know,
system,
but
I
wouldn't
read
this
to
me.
If
you
ask
me
like,
should
these
things
be
just
as
secure
as
everything
else,
I
think
I
would
say?
Yes,
so
that
leads
me
to
say
that
see
it.
It
should
be.
B
It
should
follow
the
same
flow
as
as
stable
releases,
at
least
or
clarify
that
or
mental
has
a
different
security
bar
than
yeah.
Oh
well,.
E
That
works
well
when
the
feature
is
not
a
security
feature,
for
instance,
if
we,
if
we
release
the
test
Runner
or
when
it
when
it
was
experimental,
if
you
find
a
bug
there
or
if
you
find
a
vulnerability
that
can
be
assessing
the
same
flow
as
a
regular
feature
yeah.
However,
when
you
release
a
security
feature,
any
bug
will
be
a
vulnerability
eventually
yeah,
and
that
will
be
way
critical
to
to
assess
all
those
as
a
high.
You
know,
yeah.
E: Just keep that in mind. If you have any thoughts on that, any other thoughts, feel free to ping me on Twitter or LinkedIn.
B: Either way, I like Sharon's suggestion of asking the users: maybe putting it out there and saying, community, what do you think? If we go this route, here are the pros and cons; if we go that route, here are the pros and cons; and see where the community lands. If it's split, then you don't really learn anything, but if it's...
I: What is the Node.js community saying there? Are they willing to add some kind of star mark or disclaimer saying that this experimental feature can have security bugs, be careful, or something like that? What is the thought process from the Node.js community?
E
Okay,
the
security
we
don't
have
a
server
for
that.
We
have
the
working
group
called
the
node.js
security
working
group.
So
we
we
meet
every
two
weeks
and
we
also
have
a
security
working
group
repositor.
So
people
normally
create
a
few
issues.
Ask
questions
and
if
you
want
to
report
a
vulnerability
they
go
to
the
hacker
one
but
yeah
we
don't.
E
We
we
don't
find
I
mean
it's
hard
to
to
ask
those
questions,
because
technically
the
security
that
adoption
shouldn't
be
derived
by
the
community,
it
should
be
a
rule
like
every
project
should
do
it.
It's
not
flexible
I
mean,
at
least
in
the
secret
point
of
view.
Everything
should
be
secured
by
the
fool
so
but
to
receive
feedback
is
just
through
the
the
the
meetings
and
repository
okay.
J: So let me join in. Jonathan, you were saying something? Okay. So we have been discussing, like Michael, you, and I, that there will be several groups or people in the world embarking on separate campaigns doing this kind of vulnerability discovery, mitigating the vulnerabilities, and so on.
J
So
have
a
central
location
like
a
GitHub
location
where,
where
we
can
coordinate
these
efforts
and
keep
some
specifically
like,
so
that
we
do
not
like
step
over
each
other
or
are
at
least
generally
aware
of
each
other,
so
that
we
can
share
knowledge
and
so
on
so
I
have
we
have
been
looking
into
this
and
like
we
have
a
general
idea
or
a
scaffold
that
I'd
like
to
propose.
So
you
were
mentioning
earlier
that
the
share
screen
is
not
working
is
that
it
was.
J
It
was
just
for
me
zooming,
okay,
so
I
can
share
my
screen.
Yeah!
Maybe
okay!
So
here
is
the
basic
idea.
Can
you
see
my
screen,
yeah
yeah,
so
so
keeping
track
of
the
OSS
bug
fixing
campaigns?
The
goal
is
that
we
expect
that
there
will
be
multiple
campaigns
and
we
need
to
ensure
that
the
efforts
are
aware
of
each
other,
also
like
in
future.
It
might
be
interesting
for
some
stakeholders
to
look
at
the
history
of
success,
failure.
What
has
happened
in
these
campaigns
and
so
on.
J: So it's basically data stored in plain text and JSON. At the top level of the GitHub repo we'll have a metadata directory with some metadata, and then we'll have different programming languages (Java, Python, JavaScript), each of them harboring the projects that are written primarily in that language, and the results for those.
J: The metadata just defines items that will be used in other places. Some sample metadata would be information about organizations: the different organizations that are engaging in campaigns.
J: It could be about bug categories, the different kinds of bugs, so we can have a central description of them; that way the categories can be shared between different campaigns rather than being arbitrary, and we can make correlations across them. There's also the reporting process: there are steps there that we're also figuring out in real time now, at the vulnerability autofix working group and so on.
J
So
we
can
take
the
the
different
steps
from
there
and
use
that
as
as
part
of
like
Define
them,
so
that
we
can
reuse
them,
then,
under
the
under
each
of
the
languages
we'll
have
the
projects
that
are
listed
in
in
alphabetical
order.
A
project
in
the
list
means
that
the
project
has
been
scanned
at
least
once
and
some
bugs
may
have
been
reported,
even
if
no
bugs
were
deported,
the
project
will
still
be
there.
J
A
project
not
listed
means,
so
if
someone
it
comes
in
and
looks
up
in
this
directory
and
sees
that
there's
no
project.
That
means
that
nobody
has
done
a
work
on
that
particular
project,
so
they
can
create
their
own
entries
and
and
so
on.
So
that's
kind
of
the
idea
under
the
project
there
is
so
there
is
an
info
where
we
store
the
basic
info
like
where
this
project
is
posted,
like
the
URL
or
the
like.
What
is
the
ranking
of
that
project?
Things
like
that?
J
The
history
would
be
for
that
particular
project.
What
are
the
when
are
the
different
scans
Etc
happen,
and
then
the
reports
would
be
what
kind
of
outcome
were
coming
out
of
that
particular
scan?
What
sort
of
things
were
reported?
What
sort
of
things
have
were
fixed
and
and
so
on?
J
So
that's
over
there
and
the
ongoing
we're
I
was
thinking
like
it
can
be
like
a
simple
like
block
a
DOT
block
file,
or
something
like
that,
where
the
presence
of
on
like,
if
somebody
comes
in
directly
and
sees
that
there's
an
ongoing.
That
means
that
somebody
else
is
working
on
on
this
or
scanning.
This
particular
project,
so
they
can
look
at
look
this
up
and
then
can
code
in
and
it
can
be
done
in
other
ways,
but
this
is
this
is
where
it
stands
right
now.
J
So
with
that
structure
in
mind,
so
whenever
somebody
is
starting
a
new
campaign,
so
this
is
a
proposed
workflow
for
a
campaign
manager,
who's
launching
a
future
campaign.
So
you
can
come
to
this.
He
or
she
can
come
to
this
and
check
if
the
project
is
project
under
review
is
listed
under
the
dominant
language
folder.
J: If the project is already present, they will check whether someone else is working on the project. Then they can create a pull request to create a new ongoing file if nobody is doing anything on it, or, if there is somebody else working on it, they can add to that. And when the scan is done and the data is ready...
J
Then
the
submit
pull
request
to
add
information
to
the
Json
files
under
the
project
directory,
which
is
the
the
history
and
the
and
the
reports
to
so.
Let's
State
the
results
and
when
done
like,
submit
pull
request
to
remove
the
information
from
ongoing
or
removed.
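The check-claim-release workflow just described could be sketched roughly as follows. The directory layout, the `ongoing.lock` file name, and both helper functions are assumptions drawn from the proposal as presented, not a published format:

```python
from pathlib import Path

# Assumed layout from the proposal: <repo>/<language>/<project>/,
# with an "ongoing.lock" file marking an in-flight campaign.

def campaign_status(repo: Path, language: str, project: str) -> str:
    """Report what a campaign manager would find for a given project."""
    project_dir = repo / language / project
    if not project_dir.exists():
        return "unscanned"      # nobody has worked on this project yet
    if (project_dir / "ongoing.lock").exists():
        return "in-progress"    # someone else is scanning it right now
    return "scanned"            # scanned before; results should be on file

def start_campaign(repo: Path, language: str, project: str, org: str) -> bool:
    """Claim a project by creating its ongoing lock (via a PR in practice)."""
    if campaign_status(repo, language, project) == "in-progress":
        return False            # do not step over another campaign
    project_dir = repo / language / project
    project_dir.mkdir(parents=True, exist_ok=True)
    (project_dir / "ongoing.lock").write_text(org + "\n")
    return True
```

In the actual proposal, claiming and releasing the lock happen through pull requests against the shared repository rather than direct writes, which is what gives the coordination its audit trail.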
J
So
at
that
point
they
say
like
okay,
I'm
I'm
done
so
I'm
moving
away
or
doing
something
else,
and
and
so
that
that's
how,
as
in
a
simple
simplistic
manner,
that's
how
like
right
now
we're
thinking
that
the
data
can
be
kept,
and
this
is
obviously
open
for
discussion.
This
is
just
the
first
idea
and
later
we
can
create
like
scripts
or
apis
to
input
the
data
there
to
extract
the
data
track.
J: ...different things, like an organization's campaigns, a particular campaign's track record, or a project's scans: what has been scanned and which bugs have been discovered there. So we have created a sample. It's called OSS bug-fixing campaigns; it's just a sample repository showing this idea. And that's about it. I'll share this link; I'll be happy to hear: is this a good idea, a bad idea? Should we do it a different way? Happy to take feedback. Awesome.
B: NCC Group has been working on it for two weeks, or a month, or looked at it three months ago, and so on: those are all interesting things to me. I guess it probably provides the most value for automation: if I'm going to scan these 500 projects, knowing that 200 of them were already in your scan...
B
And
is
in
in
is
kind
of
within
the
workflow
I
think
makes
sense.
I
wouldn't
want
to
modify
like
a
lot
of
Json
files
by
hand.
So
having
having
you
know,
scripts
to
kind
of
automate
that
I
think
would
be
would
be
super
useful,
I'm,
I
guess
the
the
other
thing
I'm
thinking
is
like
would
a
you
know,
shared
Google
spreadsheet
be
just
the
same
and
a
lot
simpler.
J
Makes
sense
I
also
thought
of
that,
but
then
the
the
thing
is
yeah
I
mean
obviously
yeah
I
mean
that
that
that's
much
simpler
but
then
like
polluting,
although
like
yeah
I
mean
if
it's
a
public
repository,
you
need
to
manage
like
whether
things
can
be
polluted
or
not,
but
at
the
same
time,
like
yeah,
there's
always
ways
to
revert
that
yeah
yeah
I.
B: I guess it depends. If it's a closed universe of known security researchers and firms, then something like Google Docs has history, so if anybody accidentally deletes a bunch of stuff, you don't really lose it. But if it's something where you expect newcomers to show up and say, hey, I'm going to do a thing and here it is, then having global edit access would just inevitably decay into chaos. So maybe this is the right way to do it.
J
Yeah,
this
does
require
some
hand
holding
as
in
the
pull
request
management
you
can
like
if,
if
let's
say,
Jonathan
comes
with
like
10
000
entries,
all
of
them
automatic
and
then
and
then
like
10
000,
pull
requests
have
been
generated.
Then
we
can
fight
fire
with
fire,
as
in
like
also
create
automated
scripts
to
clear
those
10
000,
pull
requests
which
are
coming
from
password
sources
or
something
like
that.
But
yeah
I
mean
definitely
like
that.
J
That's
that's
a
like
the
future
manageability
like,
however,
lower
the
threshold
so
that
this
kind
of
manage
and
yet
be
useful
right,
the
the
the
thing
about,
because
with
the
Google,
Doc
or
or
an
Excel
sheet
in
particular,
if
there's
like
a
one
line,
entry
or
one
line
item
per
like
so
everything
is
like
a
one
row
in
the
table.
J
That
makes
a
lot
of
sense,
but
here,
like
at
the
end
of
the
OR
at
the
bottom
of
the
tree,
where
the
report
is
that's
or
the
scan
is
that
that
just
is
ever
expanding
so
and
the
thousands
of
project
do
we
have
do
we
create
like
a
thousand
tabs,
then
we
becomes
unmanageable
yeah
that
those
were
the
things
or
that
you
just
talk
about
and
then
kind
of
opted
out
of
that.
But.
B: I'm also thinking that maybe, instead of some of the JSON, Markdown tables might be more readable, especially if it's something like: OpenSSF is doing zlib, we'll hold the lock for 30 days or 60 days or something. I like the concept, though. What do the rest of you think?
A: I like the concept. One of the challenges is definitely going to be determining who's running a campaign and where they're at, and we want to reduce duplicate campaigns as much as we can. I personally don't like spreadsheets, so I think this at least moves a little toward better automation.
F: Sorry, I was checked out because I had something else going on. Just a high-level view: what was the... sorry.
B: No, sorry, I didn't mean to put you on the spot. This is a repository of files and metadata about who is doing campaigns against what projects, so that if you are scanning 500 projects and someone else is scanning 500 projects, one of you doesn't waste time rescanning the thing that the other party just scanned. It's essentially a way to do distributed...
F: The thing that's crossing my mind here is that there might be different technology at play that's scanning for different things, or has a higher or lower false positive rate on the same sort of repositories, and so creating an exclusion zone over a scope of repositories doesn't necessarily seem like it's...
J: We're not tracking that; the plan is not to track the false positives, etc. It only starts once something is reported: what we are expecting is that these bugs have been reported in one of the ways, either a private vulnerability report, or publicly, or through a public pull request, or an issue of some sort. So it starts from there. Before that, there is some scan that happens, and that's fine, but whatever happens before triage is something that we leave out.
B
Okay,
I
I,
misunderstood,
so
wait!
Okay,
so
so
you
guys
scan
some
projects.
You
find
some
some
bugs.
You
report
them
as
you
report
them.
You
create
metadata
here
so
that
when
Jonathan
scans
the
same
project
and
finds
similar
bugs,
he
does
not
report
them.
Yes,
so
the
problem
is:
how
do
you,
so
these
are
bugs
that
by
definition
haven't
been
fixed
yet
because
if
they
were
fixed
that
Jonathan
wouldn't
have
found
them?
B
How
do
you
tell
Jonathan
that
you've
reported
some?
You
know,
I,
don't
know
zip
slip
vulnerabilities
in
Project
Foo
without
telling
everybody
else
that
there
are
unfixed,
zip
slip,
vulnerabilities
in
Project
food,
yeah.
J
So
the
plan
is
that
we
don't
so,
let's,
let's
actually
give
an
example.
Maybe
so
here
is
in
in
Java
there's
a
particular
and
here's
the
report.
I
don't
know,
oh
okay,
so
it
does
not
happen.
So
it's
one
of
our
guys
was
doing
this
thing
populated
anyway
yeah.
So
the
bottom
line
is
so.
J
There
is
a
fixed
column
and
we
I
was
anticipating
that
only
when
we
go
to
a
particular
threshold
would
things
be
reported
here.
So
if
it
is
like
a
public
pool
request,
then
there
is
no
problem
reporting
it
here,
right
or
if
it
is
fixed,
then
then
that
information
comes
here
but
before
let's
say
a
pmpvr
has
been
created
at
that
particular
point.
We
are
not
submit
like
we
are
not
supposed
to
put
the
like
or
report
it
here.
J
It
only
becomes
part
of
this
public
information
when
it
has
become
public
information
by
itself
Yeah
by
definition,.
B
Where
you
know
for
Java
baritone,
a
review
would
say
tool.
You
know
this
campaign
was
run
against
things.
We
found
some
vulnerabilities,
reported
them
and
they're
either
public
in
a
PR
or
fixed
okay.
J
I'm
trying
to
think
of
this
I'm
not
aware
of
that
people,
so
it
will
be
good
to
actually
yeah
go
there
and
find
about
that.
Maybe
there's
there's
no
need
to
post
this
thing.
Is
there
some
something
already.
B
Yes,
in
in
review,
I
posted
a
deeper
like.
So
if
you
go
to
Omega
npm
in
there
and
click
any
of
those
and
then
that
markdown
file,
so
so
this.
B: If you actually go back to the markdown and view the source, there's hidden metadata in there. Oh, it's not hidden; I mean, it's the table up top, but that's a YAML-structured thing. So, okay. I feel like once there's something that's reportable to the public...
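The machine-readable header being described, YAML-style metadata at the top of a markdown review file, can be read back with a few lines. The field names in the sample document below are invented for illustration; they are not the actual Omega report schema:

```python
# Minimal front-matter reader: pulls the key/value block between the
# leading "---" fences of a markdown file. Handles flat string values
# only; a real reader would use a full YAML parser.

def read_front_matter(markdown: str) -> dict:
    lines = markdown.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of the front-matter block
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

# Sample review file with hypothetical metadata fields.
report = """---
package: example-pkg
version: 1.2.3
status: reported
---
# Security review notes
"""

print(read_front_matter(report))
```

Keeping the metadata in the file itself, as described, means the human-readable review and the machine-readable record can never drift apart.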
B: ...coordination there, I think, would be interesting. But maybe it's not so big a problem that we need to solve it. I apologize; I thought that's where this had started. Okay.
I: I'm just trying to understand. I read about the Alpha and Omega thing, so Omega, you know, runs several tools, right? And it also has some kind of assertion framework. So I'm trying to understand the end goal of that: are you planning to publish some metrics regarding these packages? Or how can a normal developer, or a security engineer at a company, get a benefit from this particular Omega framework?
B
So
I
would
say
a
couple
different
ways,
so
there's
different
Target
audiences.
For
for
these
things,
overall,
the
purpose
of
Omega
is
to
improve
the
security
quality
of
the
ten
thousand
most.
B
So
so
as
that
security
quality
gets
better
and
better
everybody
benefits
without
having
to
do
anything,
they
will
simply
see
presumably
fewer
cves
or
cve-ish
things
you
know
in
in
the
package
that
they
use
and
they
can
have
higher
confidence
and
all
that
stuff,
the
Omega
tool
chain,
the
the
docker
container,
the
the
the
the
the
analyzer
can,
of
course
be
used
by
by
anyone
to
to
analyze.
B
You
know,
packages
and-
and
you
know,
get
the
results
they
still
need
to
triage
it,
which
is
the
whole
reason
for
the
triage
portal
and
all
the
work
that
that
we're
doing
now
is
to
make
that
more
efficient
for
us
and
then
the
open
question
is
and
for
who
else,
because
these
are
by
definition,
zero
days
or
potential
zero
days
and
I
I
think
it
would
be.
It
would
be.
B
It wouldn't be great for us to just make the whole thing public, but to have some sort of an ability to expand out beyond just the Omega team to be triaging and analyzing and doing work on the results of this stuff. The assertion framework is intended to be used by organizations, and the idea there would be that there are certain types of vulnerabilities or policies or practices or metadata or whatever that an organization may want to just block, or flag on, or look at.
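To make that "block or flag" idea concrete, here is a minimal, hypothetical sketch of an organization evaluating its own policy against assertions about a package. The assertion shape, predicate names, and policy format are all made up for illustration; they are not the actual Alpha-Omega assertion schema.

```python
# Hypothetical sketch: run an organization's policy over assertions about a
# package. Field names ("predicate", "value", etc.) are illustrative only.

def evaluate_policy(assertions, policy):
    """Return ("block" | "flag" | "pass", reasons) for one package."""
    reasons = []
    verdict = "pass"
    for a in assertions:
        rule = policy.get(a["predicate"])
        if rule is None:
            continue  # no opinion on this kind of assertion
        if rule["condition"](a["value"]):
            reasons.append(f'{a["predicate"]}={a["value"]}')
            if rule["action"] == "block":
                verdict = "block"
            elif rule["action"] == "flag" and verdict != "block":
                verdict = "flag"
    return verdict, reasons

# Example policy: block certain copyleft licenses, flag open SQL-injection findings.
policy = {
    "license": {"condition": lambda v: v in {"GPL-3.0-only", "AGPL-3.0-only"},
                "action": "block"},
    "finding.sql_injection": {"condition": lambda v: v > 0, "action": "flag"},
}
assertions = [
    {"predicate": "license", "value": "AGPL-3.0-only"},
    {"predicate": "finding.sql_injection", "value": 2},
]
print(evaluate_policy(assertions, policy))
# -> ('block', ['license=AGPL-3.0-only', 'finding.sql_injection=2'])
```

The point is only the division of labor: Omega emits assertions about packages, and each organization decides locally what to block, flag, or ignore.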
B
So that's kind of the vision for where I think the assertions could play, because obviously, if that's the vertical, there's the horizontal side, which is: what are the most prevalent vulnerabilities out there? Like, "oh my gosh, there's so much SQL injection in this ecosystem; can we, should we, target that through other parts of AO?"
B
I
So thank you for explaining that. So how does this assertion framework run? Like, what is the frequency? Because I know it's still a work in progress. For example, is the community planning to create some kind of API around that? Yeah, I actually saw that site, and when I click on it I can, you know, expand different rules.
I
I saw that licensing is in there too, because that is very important for certain companies, like, you know, what kinds of licenses are there. So are you actually planning to expose an API, so anybody can use that API to run these checks over, you know, transitive dependencies or direct dependencies, and see what the scanning results are plus what the licensing is? That is the assertion framework, right? Okay, yeah.
B
So, I mean, well, technically there's an API to get the data out of the assertion framework front end, so you've probably seen that as a proof of concept. We are not currently, like, you know, re-scanning projects on a rolling basis.
B
The set of tools that we use is the Omega analyzer framework, so everything that's included in there is run, and then I believe we have a couple more. We have reproducibility, which we're now calling something else. David? Semantic equivalency.
B
So, you know: the package and the source, are they at all related? How closely are they related to each other? And I think Scorecard as well; I think we run Scorecard as well, and we can expand that stuff out. So, like, licensing comes through Scorecard, maybe actively-maintained comes through Scorecard, SQL injection comes through CodeQL and Semgrep, and other things like that.
K
Yeah, and if I can chime in real quick, because that term is new. Originally they were using the term "reproducible build", but the problem with that is that "reproducible build" has a very specific meaning: bit-for-bit equal. And if the tool finds that, you know, hooray, and we can report that; but it also checks for minor variations that usually, you know, if you assume the source code's...
K
Okay, you just want to know if the build was subverted; changes like date/time stamps don't matter, they're very, very low risk. So they're looking not just for, you know... bit-for-bit equivalency is the gold standard, but if you can't get bit-for-bit, what can you tell me that estimates risk? And so that's where this "semantic equivalence" comes in. Okay, it's not exactly the same, but, you know, we have reason to believe it will have exactly the same effect when run. Yep.
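The distinction being drawn here can be sketched in a few lines. This is illustrative only, assuming the real Omega check is far more sophisticated: bit-for-bit comparison is the gold standard, and a fallback comparison ignores low-risk variation such as embedded build timestamps.

```python
# Illustrative sketch of bit-for-bit vs. "semantic equivalency" comparison.
# The normalization rule (strip ISO-8601-style timestamps) is a stand-in for
# whatever low-risk variations the real tooling tolerates.
import hashlib
import re

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def normalize(data: bytes) -> bytes:
    # Replace embedded timestamps, a common source of benign build variation.
    return re.sub(rb"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z?", b"<TS>", data)

def compare(built: bytes, rebuilt: bytes) -> str:
    if digest(built) == digest(rebuilt):
        return "bit-for-bit reproducible"   # gold standard
    if digest(normalize(built)) == digest(normalize(rebuilt)):
        return "semantically equivalent"    # differs only in low-risk ways
    return "divergent"                      # needs a closer look

art1 = b'{"name": "pkg", "built": "2023-06-07T12:00:00Z"}'
art2 = b'{"name": "pkg", "built": "2023-06-08T09:30:00Z"}'
print(compare(art1, art2))  # -> semantically equivalent
```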
I
All right, so during that scan, for example, the Omega analyzer finds some issues and the assertion framework is using that. So are we doing any kind of validation around that? Because, as you know, a lot of these tools, Semgrep and all those, for example, the analyzer runs with many rules, right, so obviously it will produce false positives. That is one of my main concerns.
B
Yes, so that is exactly the challenge: just because a tool says it finds something does not mean that it's a real vulnerability or anything like that. So the purpose of the assertion framework is no humans in the loop from end to end, as that's the only way that it can scale and run every week and things like that.
B
But the way that the overall architecture will work is, you know: a scan is made, the results are sent to the assertions, and policies are run against them. The results are also sent to the triage portal; depending on the risk of the project, the type of vulnerability, and just everything else, they'll be triaged there, and we have to...
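The flow just described (scan, then assertions, with high-risk results routed to human triage) can be sketched roughly like this. Every name here is invented for illustration; the severity-based routing is an assumption standing in for the real risk model.

```python
# Hand-wavy sketch of the described pipeline: scan -> assertions, with
# high-severity findings also queued for human triage. Illustrative only.

def run_pipeline(package, scan, triage_queue):
    findings = scan(package)  # e.g. CodeQL/Semgrep-style results
    assertions = [
        {"subject": package, "predicate": f["rule"], "value": f["severity"]}
        for f in findings
    ]
    # Route to human triage only when severity warrants it (stand-in for the
    # real project-risk / vulnerability-type logic mentioned above).
    for f in findings:
        if f["severity"] in {"high", "critical"}:
            triage_queue.append((package, f))
    return assertions

queue = []
fake_scan = lambda pkg: [
    {"rule": "sql-injection", "severity": "high"},
    {"rule": "unused-import", "severity": "info"},
]
emitted = run_pipeline("left-pad", fake_scan, queue)
print(len(emitted), len(queue))  # 2 assertions emitted, 1 finding queued
```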
B
We haven't, I don't think, thought very hard about this, but there needs to be, first of all, a feedback loop within the triage portal to minimize false positives. For things like that, you'd be able to say "this rule is just garbage, so turn off the rule and never show it to me again." But we also need to be able to express the fact that I looked at left-pad in the portal and, as a security expert,
B
I believe that all of the findings are uninteresting, false positives, whatever, and I should be able to stamp an assertion with that, create an assertion with my judgment of that, send it to the assertion site, and then have that kind of override what you see. So, you know, yesterday left-pad had all sorts of issues; today it's totally clean, because you trust me to make manual assertions on it.
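One way to picture that override: a trusted reviewer's "all clear" assertion suppresses automated findings made up to the time of review, while anything found afterward surfaces again. This is a sketch of the idea only; the reviewer model, field names, and trust list are all hypothetical.

```python
# Hypothetical sketch: a trusted reviewer's manual "all-clear" assertion
# overrides automated findings made at or before the review date.

def effective_findings(auto_findings, manual_assertions, trusted_reviewers):
    # Latest all-clear review by someone we trust (ISO dates compare lexically).
    cleared_at = max(
        (a["reviewed_at"] for a in manual_assertions
         if a["verdict"] == "all-clear" and a["reviewer"] in trusted_reviewers),
        default=None,
    )
    if cleared_at is None:
        return auto_findings
    # Keep only findings newer than the review; older ones are overridden.
    return [f for f in auto_findings if f["found_at"] > cleared_at]

findings = [{"rule": "sql-injection", "found_at": "2023-06-06"}]
review = [{"reviewer": "alice", "verdict": "all-clear",
           "reviewed_at": "2023-06-07"}]
print(effective_findings(findings, review, trusted_reviewers={"alice"}))  # -> []
```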
B
So there's a lot of hand-waving here on how this will actually work in practice at scale, but that's, I think, the relationship between the two.
B
Obviously, if the tool itself is just not working well, or there's a rule that, for instance... there's a rule in Semgrep that has a super high false-positive rate; we're just going to turn it off in the toolchain, because it provides almost no value, and just minimizing noise is of value. So we're probably just going to do that. But for rules that are more contextual, and sometimes it's good...
I
I know we've actually reached the time limit, but the follow-up question is: so, when the analyzer finds issues and we send them to some triage thing, obviously it requires some effort to, you know, triage them. Do you have enough community support to triage them?
I
And the second question is: obviously there are some companies who are actually doing that work too, because they're already running Semgrep or Snyk or whatever the tool is. So are you willing to, like, you know, maybe do something open source, or maybe some company wants to help? Because they've already reviewed those particular things, and maybe they want to contribute those findings, so that the burden on the, like, you know, the triaging team who is actually analyzing this... Are you planning to involve the community there?
B
We've had some of those discussions, and I'm totally open to the idea of this having, let's say, trusted security experts, not literally part of the core Omega team, doing some work. We haven't done much thinking about what that actually looks like, but yes, I've had some conversations with a couple of those organizations to see what makes sense.
B
Thank you for the awesome conversation, everyone. I'll see everybody again in a month, but check us out on the Slack community and join the conversation there. Thank you.