From YouTube: Vulnerability Disclosures WG (November 30, 2020)
A
Cool, so let me share my screen really quickly. For those of us who are here, welcome, everybody. This is the Vulnerability Disclosures working group meeting at the Open Source Security Foundation. Do we have anyone who's with us here for the first time?
A
Okay, welcome. First, would you care to introduce yourself?
B
Sure, my name is Todd Collum and I work at Red Hat on the product security team for the RHEL product.
A
Awesome, welcome. Alrighty, so the agenda for today is really quite open. Last time around the agenda was pretty packed and we had an awesome discussion, but I would like to get us to a point where we narrow down the topics we want to focus on, because it seems that there is no shortage of topics we would want to discuss, and I think last week was an excellent example of that. But we haven't really...
A
We touched on the problem that we have this process, let's call it the process of disclosing vulnerabilities between different parties, and we haven't really done the exercise of asking: what is it that we really want to improve? Where do we see the existing processes and tools falling short of the needs? If I remember correctly, we identified a couple of different stakeholders.
A
We definitely identified open source maintainers as one of the key stakeholders. We identified the security research community. I think we chatted about tool vendors, and about companies or organizations that actually handle disclosures for products that they don't necessarily own. And last, but definitely not least, we have chatted about consumers of open source vulnerability...
A
...information. Our goal for today would be to start a conversation: what kind of pain points do you all see, as you are either in the position of a particular stakeholder or interacting with them? I think we have a very good group here, with people who are interacting with, or are themselves, the different types of players in that ecosystem, and who are at least able to articulate those pain points.
A
So what I thought we could do is start listing the pain points, grievances, and things that could be better in the world, and group them into those large buckets: maintainers, consumers, the security research community, and, let's call them, coordination organizations and bodies. Does that sound like a plan?
A
So I'm also trying to think of a way to get us focused, and I think at that point...
A
We discussed that we could, for example, think about use cases for big, mature projects, say the Linux kernel, all the way down to a hobby developer who has their open source project, successful or maybe not successful yet: they want to do security, but they have absolutely no idea how. And I think there are plenty of projects and even entire communities that are somewhere in between. They don't have corporate sponsorship, they're all volunteers, and they could perhaps get access to the infrastructure if they knew...
A
...what it is that they needed. Is there a particular group that you would want to focus on when it comes to maintainers? Let's focus on that first group, maintainers. Do we have someone here who's a maintainer of a reasonably popular open source project?
E
I've been an open source maintainer for many, many years, for various smaller projects, and bigger ones too. I used to work for Mozilla many years ago, on Firefox, and then worked on their security team and such, so I'm happy to talk about any of that stuff. I think that in general, especially in the open source community, it's just that...
E
Hey, it's a volunteer effort, and so it's a best-effort kind of thing. So any way to provide the open source community with templates, or easy ways to do things that they can just follow, would make their lives much easier, kind of like what GitHub has done with auto-generating a security policy by default. Some other things in that direction would be: hey, this is how you should direct people to report security vulnerabilities.
E
This is how you should think about handling incoming security vulnerabilities. Maybe we can focus on a few things that are platform-specific, but in general, even just having a template of...
E
...the kind of things you want to think about would probably be helpful. And then partnering with all these different groups to get that used: partnering with GitHub and GitLab and other source control management platforms that can automatically generate these things and put them in front of the open source developers and maintainers as they're creating projects and repositories, and things like that.
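The template idea discussed above could look something like the following minimal SECURITY.md sketch. The contact address, response window, and version table below are illustrative placeholders, not recommendations agreed on by the group:

```markdown
# Security Policy

## Reporting a Vulnerability

Please do NOT open a public issue for security problems.
Instead, email security@example.org (placeholder address) with:

- A description of the issue and its impact
- Steps or a proof of concept to reproduce it
- Affected versions, if known

We will acknowledge your report within 14 days (adjust to your
capacity) and keep you informed while we work on a fix.

## Supported Versions

| Version | Supported |
| ------- | --------- |
| 2.x     | yes       |
| 1.x     | no        |
```

A project would fill in its real contact channel and a response window it can actually meet, which matters for the volunteer, best-effort projects being discussed.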
A
Or the fact that there's an implied security policy that you can report a security vulnerability, maybe not necessarily to the individual maintainer, because those can be hard to find. They can be hard to contact by someone who's just looking at, say, an npm package or a GitHub repository.
A
Maybe it makes more sense to facilitate package repositories, not the package maintainers, acting as sort of the...
E
The package registries. I fully support having the registries be a way to report things in some fashion. npmjs has been doing this for a long time; the Node.js security working group has done something similar, and there have been a couple of different ways of doing this. But in the end, they're not the ones fixing the issue.
E
It still has to go back to the maintainer, right? At a certain point maybe they can disclose the issue, but a lot of these registries are also either volunteer-run themselves or backed by some commercial entity, and then you're saying, okay, this commercial entity should be the one to get the vulnerabilities. It's a...
E
There is an issue there of how you handle those kinds of things. In the end, I think you still need to involve the maintainers somehow, and it needs to be part of the job: if somebody's writing code, they also need to be thinking about how to handle not only incoming bug reports, because that's just a common thing (people use GitHub Issues or whatever for that), but how do you handle security issues as well?
E
Sure, yeah, that's fair. I know somebody from this group had recently asked: hey, you had some open source projects that wanted to learn more about how to handle vulnerabilities. Maybe offering services, a way that the community can reach out and get assistance from security experts when they're not sure how to deal with something...
E
...as part of this could be something useful, I don't know. I think that's just one of the common problems with open source: you have unpaid volunteer maintainers who don't have a lot of the support that they need and are expected to drop everything and fix things ASAP.
E
So yeah, those are unrealistic expectations that we put on the maintainers in a lot of ways. So if there's anything that we or other organizations can do to assist in that process...
F
By the way, about what you said there, accepting all of these reports and not taking it personally: when it's your code and somebody makes a comment on a bug, especially a security one, very often people's initial reaction, because they're taking it personally, is "that can't be possible, I don't want to hear it." And it's hard, it's really hard. Yeah.
A
But I wonder: what is it that we as a working group could do to help in this particular use case, where we have a volunteer effort that doesn't necessarily have security expertise? They are not quite aware of how to handle security vulnerabilities, who to coordinate with, how to disclose them. And I think you're touching on something that is very important, and I would love for us to be able to do something about it, but I think it's just the nature of volunteer open source work that you take your work with a lot of pride and emotion, right? It's my...
F
If the mechanism for reporting a security issue includes something like a very factual form to fill out, it tends to be a little less of a vector for aggression than if it's free text: somebody sending an email where people write emotionally, instead of clicking and selecting, you know, the type of severity: medium, super strong, open door for hackers, whatever. I don't know.
F
So, some way to make the reporting filter out some of the potential aggressiveness. Some people are just really nice about it and even send a bug fix; I've had that happen when I was at Sun Microsystems, and it works well. But, you know, I don't know.
E
I think it comes around to setting those expectations up front. If you do provide a way to report, say a form you fill out, whatever it is, setting the expectations for the researcher or the reporter, so that they know, hey, this is how it's going to be handled, seems to make things much better. Because if there's no process, then I think that frustrates people: okay, I don't know how to report this.
E
If you cut all that out, I think you can hopefully make the process, the interaction, better.
A
So what I'm hearing is that, if we could... oh, David, you want to say something?
H
Yeah. Has anybody reviewed the current version of the GitHub standard vulnerability reporting template? Because there is one: you click a button and you get yourself a SECURITY.md file. I haven't looked at it recently, but there is such a thing. One problem: GitHub and GitLab, I think, are the two most common repo systems, and I agree that you need to report to the actual people...
H
...developing the code, or nothing happens. But at least in the GitHub case, and I haven't checked GitLab, there's no way to privately report. You can privately handle an issue, but you can't privately report it. So maybe this group could work with GitHub, GitLab, anybody else, so that private reporting would work.
A
Yeah. Actually, do we have anyone from GitHub in this group right now? I don't think so. I chatted with them some time ago and I think they had it on the roadmap, but honestly I have absolutely no idea where they ended up with private reporting, because that seems to be a pretty glaring hole in the whole flow, right?
H
They are aware of that. I talked with some of the Microsoft transition team when they were in the process of buying GitHub, and I know Nat was interested in that. However, they ended up with GitHub security advisories, I forget what the system is called, but now they have a system for discussing an issue, not for reporting it. So once you know about it, you can deal with it, but you have no way to find out privately.
I
Yeah, this has been a pain point. I'm Dan from Google, and we have a team that operates a bunch of fuzzers on open source projects, and this is a pain point for them as well, because they need an automated place to report the issues that these fuzzers find. So they've actually stood up their own issue tracker, where maintainers or projects have to opt in and create accounts and so on, so it can all be shared privately before it gets reported upstream.
H
Yeah, just to be clear: you can't have private issues, so once they know about it, they can discuss it privately, but they can't find out. I've been recommending people to... and by the way, if GitHub is not interested in doing that, we could develop proposed alternatives.
C
I think a potentially good next step, because I think Josh brought it up, and then Eva, thank you for pasting the security report, I tossed it in the notes: maybe over the next week we put together use cases. If we just concentrate on maintainers to start, if that's where we want to start, what are the use cases they would need, and maybe mark next to each one at what maturity level...
C
...they would need it, and see if there is existing prior art out there, and note any flaws, like with the prior art of GitHub and GitLab. We have the same problem: you can make sort of a private issue, but the private issue is visible to anybody who can contribute to the project. So if it's an open source project, you can't restrict it; you have to make a separate repo. I can talk to our Plan team to see if there's anything on the backlog for that.
C
But what we could do is just start with: here's a couple of use cases that we feel are most critical; here's the prior art that exists, and its flaws; is it on the roadmap or not. There might be two or three pieces of prior art for each use case, but at least then we can start having a place to direct people: here's what there is, and here are some alternatives, like disclose.io, if you don't want to do a workaround or you don't want to wait for that backlog item.
I
If you need a case study: I think the Envoy project had two security issues reported publicly on GitHub two weeks ago, and they had to rush out an emergency release, because those were essentially zero days that got made public without people realizing what was happening.
A
Yeah. I don't know how representative the OpenJS community is, but it's a set of JavaScript projects of varying levels of maturity, and some of them are pretty mature. Some time ago I did a survey of the security reporting practices in that community, so I can definitely paste the link later on. Maybe that's going to be a good seed for that sort of table: okay, we have our projects, we think they are mature, and this is how they handle confidential disclosure.
A
But we're talking about this in the context of the receiving side, the maintainers; there's also the other side, which is the security researcher trying to report the vulnerability. Is there anything that you all think is broken or in need of improvement on that side?
A
Because I know that, especially for projects that rely on volunteers, it's an endless source of frustration for security researchers that, if there is a reporting mechanism, it's sometimes essentially a black hole. Could we do anything there so as not to drive security researchers away from open source projects towards commercial bug bounty programs, where they can earn some money and where companies are usually much better at responding and keeping in touch with the researchers than open source projects are?
E
There have been ways, again, like the Node.js working group's security ecosystem program and others, that have been a way for people to report issues wholesale to some trusted group of people, and then that trusted group tries to get in touch with the maintainer, or has the ability to go to the registry and say, hey...
E
...can we mark this as having a vulnerability, since we're not able to get in touch with the maintainer, or whatever. So I think there needs to be better coordination there, and I think you have to bring that down as close as possible to the language ecosystem the code is written in. Because, yeah, you could have a larger group, like the Linux Foundation, put something together; this foundation...
E
...the Open Source Security Foundation, could put some kind of thing together, but you still have to provide the resources to triage these incoming issues, to do your best to track down the maintainers, to partner with the registries. It is a lot of work, but maybe that's something that needs to be done better, so that the avenue exists, because there is a lot of open source out there.
A
Yeah, so one thing we learned in the Node.js ecosystem working group (for those of you who are not familiar, it's essentially a group of volunteers triaging findings for a bug bounty program whose scope is essentially all of what's in the npm package registry): we learned that this is really not sustainable.
A
Security researchers come with certain expectations about the timeliness of response, and since it's a multi-party system, where we have researchers, people doing the triage, and maintainers, sometimes that process takes ages, or is not concluded at all. For example, I had cases where I triaged the vulnerability and reached out to the maintainer; they sent me an initial response that they were fixing it, so I said, okay, we're going to wait until they fix it.
A
Then, half a year later, no contact whatsoever, right? And those cases can get pretty tense for the person doing triage, for the security researcher, and maybe for the maintainer as well. This again seems to be a thing that could be fixed by a really, really solid security policy.
A
One that says not only what the maintainer can expect of the researcher, such as not going public before the issue is fixed, but I think it would also have to go the other way around: for example, a policy that when the maintainers don't respond within, I don't know, 30, 60, 90 days, the researcher can... I know there's a big debate about it, but should they go public? Can they go public with the vulnerability? Is this a problem?
C
Having been in that community, we're never going to reconcile the two sides: the responsible-disclosure people versus the "free all the vulns" versus the "never free the vulns" people. There's not going to be a kumbaya moment for those groups. But to your point, we could say: make sure you put in your policy that at least within 30 days you're going to acknowledge it, or whatever. And I think an interesting thing that's been coming up...
C
...in this meeting is: what if there were escalation points? We could literally build into a default policy: if you don't get a response from the maintainer, try the package manager or the repository, or try GitHub or GitLab, because I know GitHub and GitLab are both CNAs at this point. So you could actually say: if you don't hear back from the maintainer in their proposed number of days, here is the recommended next step. That way, instead of them immediately jumping to "my next step is disclosure," or whatever.
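The escalation ladder sketched here could be written down as a default policy fragment. The day counts and contact points below are illustrative placeholders, not timelines the group has agreed on:

```markdown
## Escalation (illustrative defaults)

1. Report privately via the contact in the project's SECURITY.md.
2. No acknowledgement after 30 days: contact the package registry
   (e.g. npm, PyPI) or the hosting platform's security team.
3. No response after 60 days: contact a CNA that covers the project
   (GitHub and GitLab both act as CNAs for projects they host).
4. After 90 days with no engagement: follow your own disclosure
   policy; public disclosure is a last resort, not the first step.
```

The point of writing the ladder into the default template is exactly what the speaker describes: giving a reporter a recommended next step before they jump straight to disclosure.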
K
I wonder, and maybe this is the direction this is going in, but maybe we build out that frame of reference: you have an open source project; what should you be doing when you think about security? What are the elements that you should have as part of your VDP? Should you have a VDP at all? When do you need one?
K
We talk about policies, process guidelines, all of that, for the different kinds of stakeholders, and about how you mature your project as you go along. A small project out of the gate, with one, two, three maintainers, may not need all that overhead, or may not realize they need it when they grow, right?
K
But the responses to the researchers have to be timely, while the disclosures may be complicated, so you have to have more communications. How do you communicate that? I'm just thinking about a framework where we can lay out: you're an open source project; here are the things you need to think about when it comes to vulnerability disclosure, and here are the elements you should put in place.
L
Two points. Unless something is commercially supported, having a policy is going to be challenging for a hobbyist or someone doing this out of the joy of coding, so we need to think about that before we put a policy in place; this is kind of a negotiation, a suggestion. And I think the way we can be successful in articulating that desire or that need is through the developer best practices working group; it's a sister group of this one.
L
So I think, as we get our guidance and suggestions together, we can move over to that group and provide it as guidance, training, and education for open source developers, saying: if you want higher levels of maturity, or more contributions, or potentially future commercial backing, here are some good practices you should exhibit, like having a VDP or some type of policy, and having a secure way to share vulnerabilities, all these things. So we can put these out as good practices and evangelize them through the other working group as well.
A
Yeah, maybe: consider it kind of at your own risk.
L
There are other working groups, like the CII Badges one, or, I can't recall what the metrics group is called, but there's a group that's looking to do dashboards reporting on the security of open source projects. So on the consumer side, we can suggest: if you care about the security of the software you're using, here are some mechanisms by which projects can demonstrate their adherence to good practice.
A
Yeah, but I think, at least in my mind, it raises the question whether we should focus on trying to find a way to help projects that are not necessarily willing to put in the effort, or don't have the capacity to staff some sort of security triage and response.
C
I mean, I could be wrong, but ours is about vulnerability disclosure; that's our objective to cover. So if some group of maintainers doesn't care about security, that's not in our purview. That could be a different working group that wants to work on education or whatever. But since we're about vulnerability disclosure, somebody who is at least curious about it can come to what we've published and be like, okay...
H
Pushing back just a little bit, because I think you make some good points, but I think part of the problem is we're trying to cover so many different kinds of situations. The Linux kernel folks have got a whole team; that's one kind of situation. If you report to Red Hat and it's a core part of their product, they've got a whole team.
H
But if there's a maintainer who works a few hours a week on the weekends, and that's what you've got, and this is their hobby, it's totally unreasonable to expect the same level. So, for example, what I think a lot of open source projects are moving towards is the expectation that a vulnerability report is code: a complaint alone is not really adequate; it's "here's the problem, and here's the proposed solution." And I think for a lot of open source projects...
H
...that's what they're expecting, and a report that just says "something doesn't work properly" is not there; you still haven't given them anything useful yet. Now, I don't know; I think that may be a challenge, because it's a complete disconnect: security researchers are not used to reporting that way, and they may not even know how to write the fix, exactly.
C
Yes, I'm wondering how we... we're not going to solve that problem, because it's a really big leap, but I'm wondering if we could put some recommendations around it. I'm going to switch hats: I run The Diana Initiative, which is an information security conference with a couple thousand people, all women; obviously it would be bigger if we went co-ed, but whatever. There we have different presentations and we try to tell people how to get involved in finding security issues and having experiences they can put on their resumes.
C
So I'm wondering if we could actually put together something that could be given at conferences, or to people who arrange conferences, to explain: hey, you have a bunch of security researchers who want to contribute and get stuff on their resume; here are some recommendations of how not to be a dick, and how to participate and be useful and helpful and pair up with open source projects. If we put together some guidelines, that might actually encourage more researchers to participate in these projects.
A
Yeah, that would be lovely. And I think the first, very basic prerequisite for that to work is that the open source project has to opt in to that sort of collaboration. If we can facilitate, for example, having a security policy, maybe one with a paragraph saying "we're open to working with, and actually encourage, security researchers to submit vulnerability reports to us, because we have the capacity and the will to address them," that would be amazing.
A
So a lot of what I'm hearing is that, setting aside the private reporting part, which requires some product development, working on a really, really solid security policy or VDP template could potentially be a very impactful thing for us to focus on.
L
Yes, I think for all the different stakeholder groups we can provide suggested guidance on their part of the vulnerability disclosure pipeline, what they could or should do, and I think the policy is a great thing, Marcin. I think we can also have some guidance for those maintainers.
C
Art, I think you've raised your hand.

G
Oh yeah, sorry, thanks. So I'm very new here, but I'm going to raise this question anyway. First of all, this line of work we're talking about today, I don't see a problem with proceeding with it by any means. I will comment that my observation is that this is a lot of already-trodden ground, and when I say trodden, I mean stamped on by armies of elephants, really thoroughly trodden. There's a lot of stuff out there; it's not that nothing exists.
G
There may need to be more, and that's perfectly fine. But if we pop up a level from the disclosure-policy discussion, there might be other things in the disclosure coordination and vulnerability space, at scale, that apply to LF constituencies; things this group could target that are less well trodden, bigger gaps than all the disclosure stuff. And again, I don't mean to be the lone cranky voice...
G
...saying don't do this; I'm not really saying that. But there may be places to invest that are bigger gaps than the disclosure policy area.
G
Somewhat, I think, given that collating, framing, and organizing existing stuff sounds to me like a bit less interesting work. So I think that does help. And again, this is my second meeting, so I may be out of context here as well; I guess my comment still somewhat applies, and this is also a matter of personal opinion. It might be that I personally am tired of this discussion and others are not. I will not stand in the way of anything, certainly, but I do see some newer...
A
Yeah, I think the problem we've always had as a working group is that this particular meeting is where we do the work. Ideally the work would happen between those meetings, and then we would meet again every second week and chat about what was done.
C
There are some suggested use cases in each one of the groups, but I think the first thing the groups would have to do is validate that those are sufficient use cases and that they like them. So I'm thinking the first thing is: here are the use cases. Then the next thing is: for this use case, we recommend XYZ that already exists, or we need to research it more.
A
Okay, is there anyone who wants to take the lead on fleshing out those use cases? I don't know, a quick markdown document, maybe in the README, and then we could start breaking out the work there.
A
Okay, that's very cool. I was actually not sure if anyone here identifies very strongly with the consumer community, even though, to an extent, probably every one of us does this from time to time: oh, I'm going to use an open source project to make my code simpler.
L
So, Marcin, when we say consumers, based on what you just said, you're talking about consumers of the project. Is there another stakeholder group for consumers of the vendor software, consumers of the software?
A
Yeah, so far we have not been distinguishing those two groups, but I think you're right that...
A
And I think that for the consumers of open source projects, it's fair to assume that they rely on tool vendors to consume that information; very rarely have I seen someone consuming vulnerability information straight from the projects.
L
So what actually happens in practice is that the consumers will get alerted to the vulnerability in the software, typically before the vendor has notified them, and then they will go to the vendor and say: my vulnerability scan has now detected this new vulnerability in open source component XYZ; what are you doing about it?
L
That's actually the flow that's seen in practice. It doesn't typically come from the vendors, unless you're talking about vendors in terms of operating system vendors, like Red Hat and Ubuntu, SUSE, etc., which might be an exception to that. But in terms of commercial software that's using open source code...
L
...the consumers will often have a view of the vulnerability before the vendor has provided anything, because the vulnerability scanning infrastructure that they use gets zero-day updates, right? So as soon as the vulnerability is reported into the community, it'll show up for all of the enterprise users; they'll be notified of it, even though the vendor might not have any reaction yet.
M
So I think there are sort of two definitions of consumers in this case. There are the consumers of the data stream from the providers, as well as the consumers of their disclosure announcements. Somebody might, for instance, be using security scanners which track Debian CVE announcements or Red Hat CVE announcements, and they're one part of the consumer chain; and we also have Red Hat or openSUSE or Arch Linux, which also consume the NVD data streams.
M
So I guess this discussion is a little bit about the distinction between those two groups: the vendor consumers and the consumer consumers, if that makes sense. I'm not sure.
L
Yeah, I'm representing, well, I actually am in a lot of these camps on various different days of the week, to be honest. But we develop security products that read all those vulnerability reports and provide them to users, and that triage and intermediate these vulnerabilities at the end enterprise. So the enterprise has, you know, a thousand, ten thousand unpatched vulnerabilities.
L
They have to go and prioritize and triage these things: which ones are important, which ones are not. Having the view into the actual source reports that we're all discussing here is incredibly important. It's not the vendor specifically, right? As I say, often the scanners, your Qualys infrastructure, will tell you about that vulnerability that just came up in your npm, or Snyk or WhiteSource will.
L
So you'll have to go to your vendor and say, when do I get this patch? And they'll say, we'll get back to you tomorrow. So figuring all that logistics out is, to me, important: how that information cascades from the open source project, through the security community, back through the security vendors, to the end customer, and then back to the software vendor.
A
Honestly, I have the impression that we haven't explored that part of it much so far. So, Jason, I don't know if this is something that you would like to take the lead on, or maybe, for one of the future working group meetings, you could come and describe, maybe do a little presentation of, how the whole workflow looks from your perspective, because I think it's something that would help us flesh out those use cases.
M
Vulnerability data reports, think CVE streams from NVD or MITRE, which are aggregated and forwarded; and then, separately, whoever provides the advisory stream, such as openSUSE or Red Hat, I think.
L
And the consumer end-user persona: the enterprise, or whoever's using the software, right? The consumer's consumer, or the enterprise.
M
I think part of this is probably a little bit like what we discussed last time, with the embedding of the CPE information in document containers, which would help, yeah.
A
Yeah, so I can reach out to you. I think we had someone from WhiteSource as well, so I can reach out to past working group meeting participants to tell us a bit more about how they are consuming vulnerability information to make those products run. Two weeks ago we focused on the SBOM side of the problem: we have software where we don't know what's inside. But what I think we haven't touched at all is: where is the vulnerability information coming from? How is it discovered? How often do you ingest it, or do you just ingest it once?
C
I can tell you it's terrible right now, because of how we consume it: there are like eight different sources that we have to aggregate and then unify so that they match, and then we put it into our own database. So if the person from WhiteSource can't take that on, I can start looking at that next. But yeah, it's terrible right now.
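As a rough illustration of the aggregation problem just described, where several advisory feeds describe the same facts with different field names and shapes, a consumer ends up writing per-source normalization before anything can be unified. The feed layouts below are invented for illustration, not any real source's schema:

```python
# Sketch: unify advisory records from differently-shaped feeds into one
# common schema before storing them. Both feed formats are hypothetical.

def normalize_advisory(record: dict, source: str) -> dict:
    """Map a source-specific advisory record onto a common shape."""
    if source == "feed_a":  # an NVD-like nested layout (invented)
        return {
            "id": record["cve"]["id"],
            "package": record.get("package", "unknown"),
            "severity": record.get("impact", {}).get("severity", "UNKNOWN"),
        }
    if source == "feed_b":  # an OSV-like layout (invented)
        return {
            "id": record["id"],
            "package": record["affected"][0]["package"]["name"],
            "severity": record.get("severity", "UNKNOWN"),
        }
    raise ValueError(f"unknown feed: {source}")


def aggregate(feeds: dict) -> dict:
    """Merge normalized records from all feeds, keyed by advisory id."""
    merged = {}
    for source, records in feeds.items():
        for record in records:
            adv = normalize_advisory(record, source)
            merged[adv["id"]] = adv  # later feeds overwrite earlier ones
    return merged
```

With eight real feeds the per-source mapping grows accordingly, and conflicting records need real reconciliation rather than the last-one-wins overwrite above, which is roughly why the current state gets called "terrible."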
G
So I believe there's room for progress here, hopefully, yeah. Thank you.
C
Yeah, and it also looks like there's a note in the text that we should also fully flesh out our personas a little bit more, as another item.
A
Okay, cool. I need to jump, but it was good to talk to you. Let's do some work in the next two weeks, get back together in two weeks, and see what we accomplished.