From YouTube: Vulnerability Disclosures WG (January 24, 2022)
C
Did Jennifer give you the host link or something like that, or were you just the first to show up?
C
Okay, excellent. Well, I'll start assigning you some tasks right away.
A
You may. I got the most impractical thing ever: I got the Jeep Wrangler 392.
D
Just a small point: David shared a link to the Zoom call on the mailing list today, and that one was invalid for me, so I shared one from the calendar just now. We should probably try to distribute the correct link. I think I'll just triple-check people get it; sure, I'll talk to Jennifer.
A
There was... I know that David and one other person had kind of goofy, like, two different times, so it might be that they just have an old invite, but I'll take a look and see.
A
All right, well, I would like to welcome everybody to the January 24th edition of quite possibly the best working group within the OpenSSF, the Vulnerability Disclosures team. Go team vulnerability! I will post the link one last time to the agenda. If you could, please mark your attendance, and if you have any items you would like to talk about: we have one item so far.
G
Hey everyone, Adam here. Wait, can you hear me? Hello? (Yes, we can hear you.) Perfect. Yeah, so, Adam here from the huntr.dev team, just dropping by this week's call. We're...
H
Hi everyone, my name is Alma and I'm a program manager with NCC Group, and I am here to learn. Hopefully I'll be able to join with my work email, because I could only use my personal email to join, so hopefully I'll figure it out. But anyway, I'm looking forward to learning. (Excellent, welcome.)
F
I'll say hi again as well, since there are some new faces. I've been once before, but I'm Kade. I'm a relatively new product manager at GitHub, working on the security advisory team, and I'm excited to collaborate with you all.
A
Things happen; computers are just the worst. All right, so we had a request that we wanted to talk to Adam from the huntr.dev team. And you were the one that brought this up, right?
B
Perfect. Volleyball was never supported, so I don't really... I just hit it, right? Anyways. So, yes, Adam is from huntr.dev, and I just want to say first off: thank you. Adam is based in the UK, and so I know it is very late, and I really appreciate you swinging by this group. A couple of you know this has come up in various discussions within this group, on some GitHub issues, on Openwall forums, kind of bouncing around Twitter. So, Adam, by all means.
B
Will you please correct me if I get anything wrong about how your platform or your business works. It is a bug bounty program whose scope is all of open source, and I think what is unique about it, as I understand it, is exactly that: its scope is all of open source; it is not restricted to projects that sign up with huntr. I think that's creating some tension and concern in the open source and research communities, and so I invited Adam here.
B
I said, you know: let's have a chat. There might be some conflicting perspectives, but definitely everybody here wants to improve open source security and believes vulnerability disclosure is an important part of it. So let's chat it out. Adam, I figured we'd give you the floor: maybe give us an overview so everybody's on the same page about the program, people can ask some questions, and we'll see where the discussion goes from there, if that sounds good.
G
Sure, thank you very much for the intro. I guess to begin with I'll just walk everybody through huntr, just in case anybody's not familiar with it. So yeah, as Anne mentioned, we run a bug bounty program where we are effectively trying to...
G
...provide a means of generating meaningful amounts of money, incentives, whatever you want to call it, for the security research community and for maintainers at large, by getting the enterprises who depend upon the open source software (the packages, the projects, whatever it might be) to basically fund this research. In alternative bug bounty models, like Bugcrowd, HackerOne, Synack and the like...
G
...it's very easy: you've got an enterprise who's benefiting from a product, and they look for security research against that product, and if their product improves in quality and in security, they're happy, their customers are happy, they earn more money. It's easy. But in the open source world it doesn't necessarily work like that. A lot of maintainers aren't monetizing their code, but that doesn't mean that enterprises aren't depending upon it and using it to build products and systems and whatnot. So, effectively...
G
...what we look to do, as mentioned, is work with enterprises to identify components at risk or otherwise, then aggregate that money across different companies and use it to sponsor security research and, most importantly, to pay for fixes. So I guess that's a big distinction of our program versus other bug bounty platforms, generally speaking: we dedicate a significant portion of funds to actually going to the maintainer.
G
You know, kind of trying to create almost a new revenue stream for open source. In the past, turning open source projects (at least, and I mean this in the case of, like, applications) into commercial open source software has been pretty viable, but for things like packages, not so much. There hasn't really been a way to monetize this, and I guess this is the angle that we're trying to take: these packages are used everywhere.
G
Let's see if the enterprises who are the beneficiaries can start funding the research. We've had a rough history to date, you know; we've been iterating hard and fast as we learn from the community and we learn what works and what doesn't work. So I'm not going to defend any of the previous mistakes we've made, but yeah.
G
I'm just trying to talk about the vision, the dream, what we're trying to achieve, and we're just doing the best we can on the way to trying to make this work, taking feedback and iterating on the back of it. Yeah, I guess that's the crux of it. Just more general information: we're a five-person team in London, and we're very early stage, both commercially and with regards to venture capital.
G
I think that roughly covers everything about us and about what we're trying to do, but let me know how you want to take the conversation forward, if there are any specific questions or feedback or whatever it might be.
E
Sorry, two questions. So you're very similar to... I'm blanking on the name of the company, but they do something similar in this space, where it's basically paying open source developers to do work in general on features and maintaining these projects. So this is more that, but with security. What is the name of the company? Why am I blanking on the name currently?
G
So, just on that, I think there are a couple of companies kind of trying to monetize features or bug fixes: there's IssueHunt, and I think there's another one called Source... maybe SourceHunt. And I guess we're also quite similar to Tidelift, which is another company that tries to do this whole "get enterprises to pay for some kind of value" thing, with, I believe, Tidelift's major focus being around licensing support and all of this kind of stuff, but yeah.
G
You could say we're very close to these kinds of pay-for-issue or pay-for-bug-fix programs.
D
Okay, so this is probably a minor issue, but to me the CVEs, at least from the Vim side, look a bit poor in terms of descriptions and the CVE metadata applied.
D
I see some of them have not been uploaded either, but that could be other issues as well, because they're four days old. So how much quality assurance do you do or provide, in terms of descriptions, upstream to CVE and the CVE databases and so on?
G
So at the moment everything is basically confirmed by maintainers; we don't necessarily add an additional security qualification or assurance above what a maintainer provides. We trust the maintainer to make the best judgment on severity scores and on all forms of metadata. With regards to descriptions, this was recently noted: I think three days ago we saw a comment on Twitter, and we did an active fix to address whether...
G
...we can include version fixes and that kind of thing in the description. We do have plans in the roadmap to make this richer for clarity, but yeah, as mentioned, we're a small team, so once we hear from the community what is wanted, that's when we'll begin to work on it as a function or capability. So I guess we're kind of playing reactively here: we're trying to follow the standards that CVE sets forth with regards to the CVEs, but if we realize that we're doing something subpar, on feedback we do act and try to do better.
B
Oh, I was just going to ask two questions: how do maintainers get access to the vulnerability information, and what are your disclosure policies when there's either an unresponsive maintainer or the maintainer doesn't want to use the platform?
G
That's a great question. With regards to authorization models for vulnerability information, there are, let's say, three scenarios. The first scenario is maintainers already on the platform: we just ping them an email, they re-log in, and they've got exclusive access to the report until they have validated and fixed it, and that's when it becomes public, unless there's some form of embargo situation, which we can also support.
G
Alternatively, if they are not on the platform, we go through one of two routes. We look for a security contact in their security policy, their SECURITY.md. We have this chain of logic: does the maintainer have an email on their profile? We kind of exhaustively search for an email, basically, and if we find one, we'll email them explaining who we are, our security community and whatnot, and then we'll provide them basically a magic link.
G
They can simply click on this link, access full details of the report, and have full normal access to the platform via this magic link. And then, generally speaking, we don't support anybody who doesn't have a SECURITY.md with an email. So, for example, if they're using a HackerOne submission-based process, we explicitly do not support this; we suggest the researchers disclose via the project's stated security policy.
G
And lastly, if there is no security policy, we do open up a GitHub issue, without providing any information about the vulnerability, simply requesting: hey, we'd like to get in touch about a vulnerability, can you create a SECURITY.md so we know how to get in touch with you? That's broadly how we do authorization. If a maintainer doesn't want to use the platform? Absolutely fine. If they don't validate it, we don't make it public at all.
G
There have been some discussions about the future, given there could be people at stake: if, after a responsible disclosure period of, let's say, three months, this time elapses, with fair warning to the maintainers about the vulnerability, including the vulnerability information itself, and we've received no comments back from them, no engagement from their side...
G
...we're considering adjusting our policy to make it public after that point in time. But if, prior to that moment, the maintainer has responded, or if they've said, "We don't want to go by your platform," perfectly fine: we de-list their repository and they won't be contacted any further with regards to the report, and we suggest to the researcher to disclose directly to them. But of course they then forgo the bounty and, you know, our CVE process and all that kind of stuff.
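The disclosure-timing policy described in the last two turns can be summarized in a small state function. This is a sketch under the stated assumption of a roughly three-month window; note the "public after timeout" branch is something huntr said it was only considering, so it is labeled as a candidate rather than an enacted rule, and all names are illustrative.

```python
# Rough sketch of the disclosure-timing policy described above.
# Assumptions: ~90-day window ("let's say three months"); the
# disclose-after-timeout branch is a policy change under discussion,
# not current behavior.
from datetime import date, timedelta

DISCLOSURE_WINDOW = timedelta(days=90)


def report_state(reported: date, today: date,
                 maintainer_validated: bool,
                 maintainer_opted_out: bool) -> str:
    if maintainer_opted_out:
        return "de-listed"            # repo removed, no further contact
    if maintainer_validated:
        return "public-after-fix"     # becomes public once validated and fixed
    if today - reported >= DISCLOSURE_WINDOW:
        # Only under the *proposed* policy would this become public.
        return "candidate-for-public-disclosure"
    return "private"                  # exclusive access, no public report
```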
E
Yeah, so, hi, it's a pleasure; it's been a while. I'm now doing full-time security research as of the beginning of this January. I do this interesting thing where...
E
...I use CodeQL to find widespread security vulnerabilities across open source, and I have a bot that I've written. I work with a company called Moderne; the project that I'm working with is called OpenRewrite. When I find these widespread security vulnerabilities across open source, it's usually in thousands and thousands of repositories, and I'm working with them to create rewrite rules, and then I run the bot I've written to generate thousands of pull requests to fix these security vulnerabilities across open source.
E
So I generally write my CodeQL queries and then try to do private disclosure first, but when it comes to vulnerabilities at that scale, when you have thousands of projects in front of you, you just can't scale security research that far. But your platform sounds really interesting to me as a security researcher, at least given what I'm doing actively.
E
I think that, regardless of this, we should definitely follow up after this meeting, because I think I have a lot of things to pick your brain about and discuss. So, from the disclosure point of view: a researcher can come in and say, "I have a disclosure policy," and that will be part of the disclosure process that your system handles. Is that accurate, or is there more nuance there?
G
So, generally speaking, we operate a standard disclosure policy. We make the presumption that any maintainer who registers with the platform, as an example, and who doesn't have a specifically defined security policy, like in a SECURITY.md file, goes by our templated approach. We do have plans in our roadmap to provide more granularity to maintainers, to allow them to define things like what parts of the repository should be in scope...
G
...and what types of vulnerabilities should be in scope, to do with the research and reports, and to present this information to the researcher so that they don't have to waste their time finding vulnerabilities in files, or against vulnerability classes, that aren't in scope for the project. But just for clarity, this is to come; as of right now it is simply a standard disclosure policy that applies unless it is overridden by the security policy of the specific repository.
G
So yeah, just for clarity here: we're trying to play like a fair intermediary, right? So we don't want to require or force the signing of NDAs, or require non-disclosure...
G
...in eternum, that kind of thing. We are trying to make something that works for both users. But just for clarity, and I want to get this across: at least in the past, with regards to past controversies, shall we say, it's always been a discussion about us versus maintainers, or security researchers versus maintainers, all of these kinds of things. But what we're really trying to...
G
...hone in on here is that the end people who are going to be affected by all this are the users of products being built with open source software, right? And so the way that we see it is that we're trying to do best by them. We're trying to say: listen, if we can find a vulnerability, if we can advise people to apply a patch, or if we can just cultivate a fix against vulnerabilities so that enterprises can decide whether they want to adopt that patch...
G
...that's where we're coming from, and hence why we're trying to be fair between the researcher and the maintainer. We're not trying to be too onerous or prescriptive to either party; we're trying to be fair. We think, let's say, a three-month limit on embargo with regards to vulnerability information is probably about fair, unless this is a massive zero-day, a Log4j, Log4Shell kind of incident. There are exceptions, but generally speaking we try to play a fair game. We don't want to upset anybody, but we are conscious that, in the end, we're trying to protect users, not necessarily protect the reputation of maintainers, but also not necessarily protect the reputation of researchers, and this kind of stuff.
B
Yeah, so, because you kind of brought up the past-controversy bit: I think for the most part the concerns I've seen have been around ideas of project consent, and by default your scope is all open source, so projects don't opt in. You mentioned you've been rethinking some of the ways you're doing things, and I'm curious...
B
...what is your thinking on that today? Because I noticed you mentioned projects can now opt out, but they're still opted in by default. And your thinking around maintainer burden and workload: what's your thinking on that these days?
G
So, before getting into it, I just want to express that the maintainers we have worked with to date have been overwhelmingly positive. I think we've worked with north of 1,000 or 2,000 maintainers on different projects, and all have been extremely pleased with the experience and with our smooth disclosure process, and this is the first time for a lot of people that they are earning money from an open source project that they've developed.
G
I think there was even a tweet, maybe four or five days ago, of a maintainer expressing this. I think he was saying that his package was downloaded something like 97 million times a month, yet our fix was the first time he earned, I think, again, like 150 or 200, something like that. So, overwhelmingly...
G
...we do hear positive feedback. Unfortunately, you know, like on Yelp or something, anybody angry is more willing to write a...
G
...negative review than people are willing to write positive reviews, and so when there is a negative review, it's going to make it out on Twitter or whatever it might be, and it's going to be loud. With regards to opt-in versus opt-out, to a large degree our opinion hasn't changed, which is that we perceive anybody who is writing code in the open as allowing people to think, to tinker, and to contribute to their code in the open.
G
Now, we could do something selfish to a large degree, which would be, instead of reporting vulnerabilities against the original repositories, to fork everything that we find a vulnerability in and simply file the vulnerability against our fork instead of the original repo, and even create a fix against that fork, as an example. But I think what I'm trying to get across here is that...
G
...from our perspective, people, by publishing their code in the open, are already opting in to have their code reviewed, criticized, and contributed to, and that's where we come from. But of course we're not going to fight anybody if they don't want to be part of the process, for whatever reason: they're overwhelmed, or they just don't believe in it, or they don't want to monetize it, who knows. We will absolutely oblige and prevent any future reports.
B
No, I'll have one last question, one that I have asked probably too many others. Do you have a sense of... I think bug bounty programs tend to be different from, like, a disclosure program, because you're incentivizing that research, and I know they can have that high noise-to-signal ratio. Do you have a sense of what yours looks like for a project? Are you creating a lot of, you know...?
G
I'd say, roughly speaking, I think 60% of reports that are validated end up with a positive validation rather than a negative validation. Of course, because the scope is limitless, a lot of our users will disclose against things that are never seen, because maybe the maintainer no longer works on the project, or they're...
G
...AFK, or whatever it might be. But the majority of the stuff we produce is valid rather than invalid with regards to noise. We are conscious of it, though: we don't want to put the burden on maintainers, because we do ask them to collaborate in the process. We hope that we fairly incentivize them to, but we do ask them to collaborate in the process, and we think we're acutely placed to build tooling to help them triage these reports, fingers crossed, in the future.
G
You know, let's say we're as big as Snyk or something; hopefully we can hire triagers and the world is great. But we are looking to invest in controls, as mentioned earlier on Jonathan's question, to allow maintainers to acutely scope the types of vulnerabilities they want, what they find meaningful, these kinds of things, to try to take away that kind of manual triage burden that you get a lot on bug bounty programs.
G
I also think, when we've done internal reflections on this and we look at, let's say, HackerOne as an example with regards to noise versus signal: I think the unfortunate thing with proprietary bug bounty systems, like folks' proprietary solutions, is that the list is so small.
G
So, you know, I'm not sure how many programs are on HackerOne, but let's say there are ten thousand (I'm not sure), and because they've got, you know, 100,000 or a million or 10 million users, that's a very small attack surface for many, many people to enumerate in order to be incentivized to find stuff against. Whereas in our case the surface is so large that, generally speaking, we find less overlap.
G
Some people are going to be motivated by the biggest bounties. So, for example, once we started seeing stuff against Vim, as Morton mentioned, we started getting more and more stuff against it, simply because it's maybe quite old, has a very small number of maintainers, and is written in, I guess, C or something, so it opens up loads of memory management issues, and so we started getting...
G
...many, many reports against it. But this is certainly an anomaly rather than what is normal for us. I think when we looked at the stats, I think it was in Q3 or Q4...
G
...I think quarter on quarter the number of unique repositories we were finding reports against was basically doubling, so we certainly are seeing a spread rather than people just farming against the same repositories over and over again. And to be honest with you, it's because we incentivize it like this: we limit people from opening multiple reports pertaining to the same vulnerability type against the same repository. So if a person finds 10, I don't know, prototype pollutions against the same project, they won't spam 10 reports against the maintainer. We incentivize them to indicate the number of occurrences of a vulnerability and to specify this in a single report, and we provide them a bonus for doing so. So instead, what happens is that a single researcher against, I don't know, let's say Lodash or something, if they find 10 prototype pollutions, it'll all be formatted in a single report, and there will be 10 permalink references to the locations of those prototype pollutions. Yeah.
A
I had a question, Adam. I'm curious how connected you are with existing coordination communities that have existed for a long time, like oss-security and the distros list. And then another follow-up on that would be: are you plugged into the CERT/CC VINCE platform, which is also doing some coordination between reporters and maintainers?
G
Not at all; we have no existing relationships with any of these groups. Yeah, which makes me sad. Sorry, just for clarity, it's not out of intent; we just haven't had the opportunity to work with such brilliant people. So all we do, with regards to trying not to step on anybody's toes, is try to ensure that the researcher knows that this needs to be exclusively disclosed to us.
G
We then exclusively share it with the maintainer, make sure that they are solely hearing about the vulnerability from us, and then we hope to federate the information via CVE. I thought this was the norm, but if there are other means of sharing this information, we're happy to work with whomever wants to work with us.
A
Traditional Linux-based software has a very well-established CVD process, where things are disclosed to the mailing list and that gets out to all the appropriate downstream entities. That would be, I would suggest, something to check out. Whoa, three hands at once? I'm not certain who was first, so I will let the group wrestle over who talks first.
D
Because when Sira was mentioning oss-security, I was actually poking through the Vim CVE assignments from huntr.dev, and I actually found one of the complaints on oss-security regarding huntr.dev. So that was a bit of a fun thing, or, well, not exactly fun.
D
But one of the points that they raised on oss-security regarding huntr.dev was that some of the CVEs that were assigned were not really disclosed well. Processes are hard to follow, and I'm not going to say the process was properly followed, but some people pointed out that there are around 22 CVEs that were assigned to Vim recently with, as I pointed out, sort of poor descriptions.
D
But some of the issues as well were the fact that not all of the CVEs assigned were necessarily issues that cross the security boundaries expected of them, which makes them sort of false assignments in terms of the CVE process, because not all crashes are security...
D
...issues; some of them are just simply the program failing. And some of the highlights, like from Alan Coopersmith from Oracle Solaris engineering, were that they don't really see how these are security issues, and this sort of loops back to some of the quality assurance CNAs should be doing before assigning CVEs and publishing them. Well, maybe it's a lack of QA being done, but it seems to me that there are a lot of CVEs assigned which are not necessarily security issues, and these put more pressure on us, or me, as a downstream distribution.
G
Yeah, no, thank you. We are aware, and we're actively discussing internally whether we need to enact another change to try to disincentivize this...
G
...this kind of mass creation of CVEs once a big project with such a receptive maintainer is identified by our community. But, and I'm not sure what CVE's perspective on all of this is, the way that we see it is that, at the end of the day, Bram, in effect, is the one validating and requesting a CVE, by validating the vulnerability, at whatever the CVSS score is, on our platform. And similarly, if, I don't know, Sony or Mitsubishi or any other one of the CNAs wanted to issue not-proper CVEs for their products, they should have full permission to do so.
G
At least that's my belief, but maybe CVE has a different opinion on the matter. That's at least the position that we've taken on it today, which is that if a maintainer wants to issue a CVE for whatever their project is, they should be able to issue one; we shouldn't impede them by placing our subjective opinion on it.
G
So that's at least how we've treated it today, but we are, as mentioned, constantly thinking about whether we should change tack, or do something via... I'm not sure if GitHub takes a certain position on this via the security advisories functionality, like whether GitHub will take an opinion on whether they will issue a CVE if a maintainer has requested one. But from our standpoint, yeah: if the maintainer asks for one, we will assign and publish one.
I
Nope, we've got a Security Lab team, which was three folks, and we're now doubling that, who go through every CVE request by hand and check and make sure it's not pictures of cats, but also that it's legitimate and not some kind of malware download.
A
That's how a large commercial headwear-based company also performs their triage and evaluation, annually.
G
I will not stop any of you donating money to us to allow us to also build a manual capability to review these CVEs. Yeah.
E
Meanwhile, I over here sent an email to the CVE board yesterday saying: hey, I generate thousands of pull requests to fix security vulnerabilities across open source, and none of them get disclosed because they just get merged. How can I automate CVE disclosure of these things? Because otherwise nobody's getting notified. I have yet to get an email response, but you know, it's this challenge of, like...
E
Yeah, I know that the Distributed Weakness Filing project has been dissolved. They were going to create "pirate" CVEs, which were a number... the CVE number... well, okay: their idea was that a number is a number.
A
I will track it down. They came three times, okay, and I can't recall; I know we got them recorded the last time around, but I'll track that down. But that's really going to be about use of the term: "CVE" is protected, so they could have called it whatever they wanted to, using the same format.
E
It is, it is... it's been... yeah, and they're using different identifiers completely. You know, I've been bouncing back and forth with Kurt and those guys, chatting about it, because I'm curious. Anywho, so I'm currently in a situation where sometimes I don't know what the best way to handle this is. I have a lot of vulnerabilities; I'm going to go fix them, but I can't disclose them, because there are thousands of projects, right? Like, how do you do that?
E
Well, and I don't have any good answers there. The question that I have, and this is totally not with respect to huntr.dev... The question that I have for the huntr.dev team: how do you guys handle disagreements between the security researcher and the maintainer about whether or not a certain thing is a vulnerability? And what is the...?
E
How... I mean, I know that it can change from case to case, but, you know, that's one of the reasons why I've kind of started moving my own disclosures into my own hands and starting to write up my disclosures independently. I've had conflicts with the Apache Foundation where I've said, "This is a vulnerability," and they've been like, "No, it's not." I'm like, "Okay, well, I'm going to get a CVE number for it," and their response is, "Well, we're the CNA for the Apache Foundation..."
E
"You can't get one." And then, of course, I've figured out since then that there's an arbitration... well, not an arbitration, an appeals process you can go through to get a CVE without their permission. I've never exercised it, but it's there. So, you know, I just want a number to say: here's a vulnerability they didn't fix, or here's the number of a vulnerability they did fix. That's what matters to me. How do you handle that need from the research community?
G
So, to date, generally speaking, we do side with the maintainer. Only on explicit validation from the maintainer will we ever issue a CVE for a vulnerability.
G
We do have a policy, which is, as mentioned: if the maintainer invalidates the vulnerability, or if a certain period of time elapses, we will publicly disclose the report so that people become aware of it. Whether we would issue a CVE for such a report, that is an unanswered question.
G
As yet, we've discussed internally peer-review mechanisms, or other ways to try to create consensus about a vulnerability, but yeah, as of today, we would have to say we side with the maintainer. And to answer... well, not your question, but to plug us: we have an API, so you could theoretically use our API to do that.
E
The thing is, one of the issues I've had with HackerOne is: you say "I'm gonna disclose anyway" or "I'm going to get a CVE here," but it usually doesn't come up with CVEs, because most HackerOne and Bugcrowd programs are not, you know, shipped software; they're SaaS software, right. Yeah, but if you're dealing with software that's shipped, and you need a CVE, or you think the maintainer is with HackerOne or Bugcrowd...
E
There's this risk hanging over your head that you'll get ejected from the platform if you challenge the platform's decision on that, and that means the loss of an income stream from your security research. Is that same risk established here? Or, if you say "we're not going to do it," is it: you, as a researcher, are welcome to do what you wish?
E
G
So, no, definitely not. We actually took the decision quite early on that if, for any reason, a maintainer invalidates a report, bar an ongoing duplicate, the report should default to public.
G
So that's the main thing: the researcher can share it or, as you mentioned, try to get a CVE against it. And this is in stark contrast, because you hear these horror stories about VDPs or bounty platforms where reports get marked informational, they get marked as who knows what, and then you're kind of...
G
You know, you're obliged by NDA, or whatever it might be, to never disclose the report again, because maybe it was a private program, or who knows. But in our case, explicitly, unless it's an ongoing duplicate...
G
That would be the only scenario where it's locked, until the embargo lifts on the initial report. That's the only... yeah, those are the conditions that we work by.
B
I had a similar clarification question on your exclusivity and terms and all that stuff, but you answered part of it. I got a little confused when you were talking about a situation where a project would say, "no, we don't want to work with this platform." In that case, is the vulnerability considered owned by huntr? Is it owned by the researcher, so they could go resell it, or sit on it indefinitely? What kind of terms do you have around those situations?
G
It's a great question, actually. If I recall the legals, I think explicitly either they grant us ownership, or we get...
G
...you know, some kind of exclusive license to the vulnerability research. But just for clarity, we would never... Look, following this call I'll actually seek to change the Ts and Cs to basically say: in the event that a maintainer is asked to leave the platform, and so we invalidate the report, the research should return to the ownership of the researcher. It's only whilst the maintainer is on the platform and the vulnerability is still pending...
G
...that we act as kind of a guiding hand in the process. The second it's invalidated, we present that information publicly on our website. If the researcher wanted to then go and try to submit it to an alternative program, or whatever it might be, all good for them. You know, we don't explicitly want to own this information.
G
Yeah, does that answer your question?
B
Oh, now I have it. I think I'm getting a little confused. So the report... because you said the report gets invalidated, or let's say the maintainer says "I don't use this platform," then you're saying it automatically releases publicly?
G
So if they mark it as invalid, then yes, it does become public, right. But if they email us and say, "oh, we don't want to use the platform," or something to that effect, then from a system perspective this means the report is pending.
G
Then we will not publicly disclose the report. We will make the report private, we will bar future disclosures for this repository, or even an entire organization if that's what's requested, and then the researcher can choose to go via MITRE, or whatever it might be.
A
I'm not certain if you participate, Adam, but something you may wish to do research on: on top of the other existing CVD entities, there is a bug bounty community of interest that accounts for researchers, entities like a HackerOne or a Bugcrowd, and vendors who are looking to create a bug bounty or a VDP program. So that might be something you want to participate in, or at least research what's going on.
G
Nice, thank you. Yeah, definitely, we'll take a look at it. I think there are a lot of great resources out there, actually. Even, I think it's Bugcrowd that created disclose.io, and they're kind of an information set on VDPs. So yeah, we're certainly looking for more and factoring that into the whole kind of product...
B
...evolution. If you're looking at disclose.io, I feel like this group would be remiss not to plug our own coordinated disclosure guide, so let me find you the link to that. It is written specifically to be read by open source maintainers who are trying to set up coordinated disclosure for their project.
G
A
I've asked a lot of questions; I want to give the rest of the group an opportunity. Is there anything you're curious about, any comments you want to make? We're getting towards the top of the hour, and almost half of us haven't had a question or thought yet. I just want to give you a chance to talk, if you are so inclined.
F
I should probably mention that HackerOne also has free, essentially free, bug bounty or VDP programs that you can set up if you have an open source project.
A
A
...to take notes, please. I'd like other folks to take a look.
E
In the notes... last time I tried to use that, it had a limitation where you could only have one account accessing the profile for that open source offering; it only allowed one account. Is that still a limitation?
F
It should not be; I don't believe that's a limitation. I think it would be worth maybe sending an email to me about that.
E
And it took a while. I tried setting up one for Gradle, and I think we ran into that limitation. You know, Gradle, we were an open source company; I think we wanted to potentially do that and then offer swag, and as soon as we said we wanted to have multiple accounts and offer swag, I think that's when it kicked into "okay, now we're going to charge you for this stuff": multiple programs, or just no.
C
E
One program. We were trying to set up a program for Gradle (I'm not there anymore), but I remember one of the limitations was that for the account you're creating for the organization, you can only have one account, one HackerOne account, logging into it. Otherwise you have to start paying for the full version of HackerOne.
F
A
G
G
We only started talking to enterprises this quarter about how to tie them into this kind of trinity of people, so we're at early stages on this. We're trying to talk to enterprises to begin to understand their...
G
...you know, their wants and needs when it comes to mitigating risk around using open source software specifically. And so I guess this is an open request to anybody listening who might be interested in collaborating with us on trying to identify a better way to use our resources, this community, and what we've built to date, to find a new way to mitigate risk for enterprises and developers who use open source software.
G
It's so funny, because, you know, I read about it every day on Hacker News: people suggesting ideas for new ways that open source could be used, and how the benefit from the enterprise can flow back to the company. And this is kind of the avenue that we want to go down. We're not trying to do software composition analysis like many of the existing organizations.
G
We want to build something new. So yeah, just an open request to see if anybody wants to collaborate on helping us try to achieve this.
G
And I guess, if there's a request for me to come back and answer more questions or anything like that, if anybody wants that, I'm happy to oblige. But yeah, I think that's basically it for me. I also think, just to add on to Crystal's point about HackerOne and its open source community edition, I feel like we should also plug the IBB, their Internet Bug Bounty program. I caught up with Kayla...
G
...the team leads, I think just last week. We had a great conversation, and it's an awesome program, so I just thought I'd put that there as well.
G
The Slack, am I in it? No, I haven't found a means to join it, so if somebody can send me a link or something, yeah, happy to join.
E
I would love to chat with you more after this is over. The last question that I have for you: when you compare the payouts for the security researcher against open source, the payouts here compared to, like, large enterprise security, the payouts are disparately different, right? And I presume that's mostly to do with, you know... I recognize you're just getting started, right, a small company.
E
Are you able to sell these large organizations on this idea, from the perspective of "this is something that you should be sinking a lot of money into," in order to finance bringing up the payouts for these things? Or are you trying to stay at these current rates? I recognize, you know, it's smart, trying to get going, but by comparison, right, the payouts are small, and if you stick with these payouts... I mean, it's better than nothing.
E
But it's still not like the kind of thing where the big people that want to do security research are getting tons of money from the big corporations to look at this stuff. Is that a long-term roadmap, or is it a "we just need a couple of big investors" to make that possible?
G
So we understand, from a business model perspective, how this can actually work. We haven't cracked an enterprise product yet, but we understand, from an economics perspective, how this can work, which is: generally speaking, you should be earning the highest bounties against the most popular open source projects out there, and these highly popular projects will be depended upon by the most organizations. If we can ask for even a dollar or two from those organizations, you're starting to talk about seriously large sums of capital.
G
You know, if we take, for example, I don't know, node-fetch: how many millions of organizations worldwide rely on node-fetch, as an example? And if we could ask for even 10 cents from them, that is a massive amount of capital that can then go towards paying not only for security research but also for whatever remediation is required in the projects.
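The back-of-envelope math behind this can be sketched as follows. Note that the one-million-dependents figure is a hypothetical assumption for illustration; the call only says "millions of organizations," and the 10-cent fee is the speaker's example number:

```python
# Illustrative funding-pool arithmetic for the model described above.
# Both inputs are assumptions, not figures confirmed on the call.

dependent_orgs = 1_000_000      # hypothetical: orgs relying on one popular package
per_org_fee_cents = 10          # the "even 10 cents" example from the discussion

# Integer cent arithmetic avoids floating-point rounding.
pool_dollars = dependent_orgs * per_org_fee_cents // 100

print(f"Annual pool: ${pool_dollars:,}")  # prints: Annual pool: $100,000
```

Even at a trivially small per-organization contribution, the pool scales linearly with the number of dependents, which is the economic argument being made.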
G
So, from an economic perspective, we understand how the bounties can scale as we grow our commercial capability. But, as you mentioned, because we are a startup and it's still early stages, this is all basically money coming from us, so we have to put in cost controls. But the way that I see it, looking into the future, as we begin to scale and get more customers and whatnot, we should be able to significantly drive up bounty prices.
G
A
Very nice. I want to thank Adam for coming in and staying up so very late to talk with us, really appreciate it, and thanks to the group for the polite dialogue. This was an excellent reminder to key in on this topic of researcher relations. It felt like that was the next project the group wanted to circle back around on and start to work on: identifying how we can help with the pain points between researchers and maintainers.
A
So if anyone has any alternative ideas or projects we want to collaborate on, let me know, but I'm going to propose that that'll be what we start to toil away on next. We think that's an area where we can add some value, trying to help provide some guidance and good practice around it. Thanks to everybody, really appreciate your time. Enjoy your weeks; we'll see you here in two. Again, cheers, Adam.