From YouTube: OpenSSF Day at OSS NA - What You Need to Know and Do about Vulnerability Disclosure - Anne Bertucio
Description
OpenSSF Day at Open Source Summit North America - What You Need to Know (and Do) about Vulnerability Disclosure - Anne Bertucio, Google
A: So if you'd like to hear more from folks like Brian and Lydia, we encourage you to check out our new podcast, Untold Stories of Open Source, which will be launching today on Spotify, so check that out. Our next speaker, who I will invite up to the podium so that she may wrestle with connectivity issues, is a friend and frequent co-presenter of mine, Anne Bertucio, and is a senior program manager in Google's open source program office.

She is an active participant in the OpenSSF. She helps lead and execute projects such as the Guide to Coordinated Vulnerability Disclosure for Open Source Projects, a project I'm very happy with. And later this week you can find her at this summit and at the Linux Security Summit, where she'll be co-presenting with a handsome fellow with a nice hat, talking about how to handle researcher disclosures and how to disclose vulnerabilities in a project's code base. So please join me in giving Anne a warm round of applause.
B: Okay, this is... I don't touch anything in web, that's not me, all right. Nope? Oh, okay, well, I'm going to have to go from memory. This will be fun. Okay, good morning, my name is Anne Bertucio. I am acutely aware that I stand between you and the coffee break, so I will try and make this exciting and enjoyable, and hopefully you will learn something. We're going to be talking all about vulnerability disclosure.

We're going to go through it, and I hope we're going to have as much fun as the people across the wall. I think that's the financial conference, and I've never heard a financial conference be so excited. So: we're going to talk about what vulnerability disclosure is, and a little bit about why the OpenSSF decided that they need an entire working group devoted to it.
One option is that they could exploit it. If it allows them to abuse somebody's compute and mine Bitcoin, although that's not as lucrative as it was last week, they can do that; it's an option. The other is that they could sell it. There is actually a market, granted an unregulated market, but there is a market for vulnerabilities, and it does pay quite generously at times, depending on what it is. Wow.

Okay, so we're going to dance it up a little bit. So it does pay quite generously, and it's just something to keep in mind that your researcher has options with their finding, and those options can include a large amount of cash. This is not a bug bounty; we're going to talk about that in a little bit. The other option they have is that they could disclose it, which is really where we come in.
You can kind of put these models on a sliding scale. Everything across the scale... we get a little philosophical in open source here. Everybody's coming at this from "I want to protect the user," but there are disagreements about what the right way to do that is. So, full disclosure: this is where the researcher says, "I'm going to share this as publicly and widely as possible," the thinking being that the sooner users have this information, the sooner they can patch.

If I found it, an attacker has probably already found it; let's get that information out there. That's where you see people putting things on Twitter, for example. On the other side would be private disclosure, the approach here being: if we keep this as quiet and covert as possible, we can maybe do some silent patching and protect the users. From that perspective, this model does not give a lot of credit to attackers. Attackers are very sophisticated; we should keep that in mind.
So that sounds relatively simple, right? Why is there an entire working group for this? It seems straightforward, but it's not at all. It can get really complicated; it can get really nuanced. We're dealing with humans here. So, which of those three models do you follow? What are you supposed to do? What if it's really bad, like extra, extra bad: does it change what I do? What if we can't figure out how to patch it? I just don't know. And what...
All of these are complexities that make this process really complicated, and you have to practice it over and over and over again. The time to get comfortable with disclosure is not the time that you have a severe vulnerability on your hands; it's well before that. And there are open source projects that have really mature, well-practiced vulnerability response processes.
So when the working group put out their guide, the Guide to Coordinated Vulnerability Disclosure for Open Source Projects, why did we pick CVD? As Nithya and Brian were talking about, open source is critical. It's not just hobbyists; it's running critical infrastructure. And the thought is that if we can give projects a little bit of time, working with the researcher to patch, to cut a release that mitigates it, that helps reduce the risk to the user.
We can help protect them that way. But the disclosure part, communicating the information out as widely as possible, is really important, particularly in open source. Unlike proprietary software, where you might have a way to poll all your users, where you might have an email list of every person that's using your software, we don't have that in open source. So we really have to make sure that when we know about a vulnerability, and if we know a mitigation as well, we communicate that as widely as possible.
I want to pause for just a moment and clarify how vulnerability disclosure is different from a bug bounty program. A vulnerability disclosure program is a method for reporting findings. There may or may not be cash involved. If there's cash involved for reporting a finding, it's typically because it's part of a larger vulnerability disclosure program, frequently run by a company.

So, for example, I work at Google. For some of our open source projects, if you submit a report, our VDP will give you some cash as a thank you, but it's not run as a bug bounty. The ethos here is: if you see something, let us know. A bug bounty program, on the other hand, is very actively dangling cash, saying "please go look," and there will be scopes on that, based on the bug bounty sponsor. So it's not "please go look for all vulnerabilities"; it's "please go look for these specific ones."
This was a situation where a vendor on day one had a very flashy, very pretty bug bounty website. A report was sent to them, and they didn't actually know what to do with it. So she kind of made this point: you can think about it as opening a restaurant before you have your wait staff, before you have your line cooks. There's a time and place for them, but you need to do the basics first. All right!
So let's talk about this guide. It's a great guide. We do have about 20 minutes left, so I'd say this is the abbreviated version. If you head to GitHub, there's a lot here: there's a full guide that walks through everything, there's a runbook, and then there's a bunch of templates. My recommendation would be that, if you're a project maintainer, you find some time when you don't have a burning vulnerability to read through the guide.
So it kicks off when someone finds an issue and they say, "I want to contact the team about this." And of course you might recognize our fine OpenSSF goose, or duck, or... I don't know the difference. Goose, duck, hat, all right. You might recognize that thing; they're going to be our security researcher for today. They found something, and they say, "I want to contact the team about this." The first question is: well, who is that team?
So within your project, you probably want to have between three and five people who are essentially your vulnerability management team. If you only have three to five contributors, it is all of you. These folks don't have to be security engineers; they don't have to be experts in everything security. They're really there for triage and coordination.
We also want our duck to know how to contact this team, and we want that information to be really obvious. If it's buried five pages deep, nested in folders that take insider project knowledge to navigate, like governance/community/go-here, they're never going to find it. They're going to give up before they get to you.
The other thing is that we really want to make it easy for them to contact us, and for that reason, email is okay. Some folks feel a little wonky about sending vulnerability information over email, but if we think about it from a risk perspective, it is a bigger risk for that information not to be reported at all, so email's all right. I would also encourage you to not have your reporter sign up for new tools.
If there's a third-party platform, and it involves creating a whole new account and going through a whole new process, they're probably not going to do it. They've already done you a favor by deciding to report the vulnerability; let's make this process as friction-free as possible. So, this is an example of a security policy, frequently known as a SECURITY.md file, and you'll notice there's some specific information in here. It's pretty brief; it's not a huge, gigantic document. It gets right to the point. It says how to reach the VMT.
It says what to include in the report. An important bit here is information to reproduce the issue: as the VMT, you're going to want to be able to reproduce it. Another very important part is managing expectations about communication. You'll see in here it says it will acknowledge receiving your email within three working days. You can change that as needed, but think about it almost like a customer service interaction. If you send something off to say, "hey, I never got my shipment..."
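A minimal security policy along the lines described here might look like the sketch below. The contact address, the list of requested details, and the three-day acknowledgement window are illustrative placeholders, not the guide's exact template:

```markdown
# Security Policy

## Reporting a Vulnerability

Please email reports to security@example-project.org (placeholder address).
Do not open a public issue for suspected vulnerabilities.

Please include:

- A description of the issue and its potential impact
- The affected version(s)
- Steps or a proof of concept to reproduce the issue

We will acknowledge receiving your email within 3 working days, and we
will keep you informed as we triage, patch, and coordinate disclosure.
```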
Not everything is a vulnerability, and as the vulnerability management team, you'll want to take a look and really decide. A comment about design, or just having poor design that isn't great for security, is not a vulnerability. A suggestion for an improvement is also not a vulnerability. Those are things that can be worked out in public. The criteria we're really looking for is something that's not working as intended and gives unintended access or compromises data. Frequently that's in the categories of data integrity, availability, and confidentiality.
Once we confirm that it is a vulnerability, we want to let our reporter know, and we can move on to patching. We're going to play the role of the VMT over there. If anybody from the marketing group is taking notes, I will be first in line for that sweater. So we're going to say: "Thank you for the report. We were able to recreate this; we confirm it's a vulnerability. Would you like to be involved in patching this?" Now, reporters have lots of different reasons that they report issues.
Maybe there's a paper they're working on, some research, but they also might be really invested in what this patch looks like, as part of finding the issue. They might have a couple of ideas: "oh, this might mitigate it." So you should always ask them: do you want to be involved? They might say yes or no, but give them the option.
The other thing we'll do, as we request a CVE for it, is ask them if they'd like to be credited. Your default should be crediting reporters, but for many reasons they might not like that, and you'll also want to respect their preference for how they want to be credited: the name, any affiliations, anything like that.
So now, let's talk about what on Earth this 90-day disclosure timeline bit is. It's really important that it's an agreed-upon timeline. Frequently, and this gets to the "coordinated" part of disclosure, 90 days is kind of the standard-ish amount of time the vulnerability management team has to patch and disclose the issue. It is not 90 days from when you start working on it; it is 90 days from when the researcher sends off that first bit of communication. And it lets the researcher know.
There will be an end date in sight, because remember how we were talking about incentives and needs? They want to be able to show their work. So it gives you a little time to find a mitigation, cut a release, and share that information. But if that doesn't happen, the reporter is free to go about sharing their work themselves, going to that full disclosure model.
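The timeline arithmetic is simple but worth being precise about; a small sketch (the dates and the function are illustrative, and the key point from the talk is that the clock starts at the researcher's first contact, not at the start of the fix):

```python
from datetime import date, timedelta


def disclosure_deadline(first_contact: date, window_days: int = 90) -> date:
    """Agreed disclosure deadline, counted from the researcher's first
    communication -- not from when the team starts working on a patch."""
    return first_contact + timedelta(days=window_days)


# A report first received on 2022-06-21 under a 90-day agreement:
print(disclosure_deadline(date(2022, 6, 21)))  # 2022-09-19

# A high-severity issue where both sides agreed on 7 days instead:
print(disclosure_deadline(date(2022, 6, 21), window_days=7))  # 2022-06-28
```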
I said it's standard-ish; not everybody agrees. And if your reporter says, "no, I'd rather have this out in 15 days," well, that's just how it is: you have to work together. There are situations where 90 days is probably a little long. If it's a high-severity issue, seven might be more appropriate. But it's really about coming to an agreement on what that timeline is, so that everybody has the same set of expectations.
So now we have a CVE assignment in the works, we've got a patch in the works, and we're ready to disclose this. You'll notice I skipped over this bit in gray about embargoes. There is a lot of information in the guide about having an embargo program. Embargoes are really complicated. They add a lot of considerations to your project; they potentially add some legal ramifications and things to consider in that realm. There's a ton more information in the guide, but I would say, for most projects...
It says exactly what's going on. It has things like the versions that are affected and steps to recreate the issue, so that users can recreate it and see if they're impacted. It has remediation and mitigation, and some timelines about when this was reported and when it was disclosed. You want to save things like "how I found this," or a really deep dive on it, for blogs, videos, talks, all of that. This is really about keeping it brief and getting straight to what people need to know.
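An announcement in that spirit might be sketched like this; the project name, versions, CVE ID, issue, and dates are all made up for illustration:

```markdown
# Security Advisory: CVE-YYYY-NNNNN in example-project

**Severity:** High
**Affected versions:** 2.0.0 through 2.3.1
**Fixed in:** 2.3.2

## Description

A crafted request to the import endpoint can expose files outside the
intended directory. (Kept brief; the deep dive belongs in a blog or talk.)

## Steps to reproduce

1. ...

## Remediation and mitigation

Upgrade to 2.3.2. If you cannot upgrade, disable the import endpoint.

## Timeline

- YYYY-MM-DD: Report received
- YYYY-MM-DD: Fix released and advisory published
```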
That was just amazing timing, amazing. So, the TL;DR here: if you're a project maintainer, really make sure you have a plan before you need a plan, so that you feel comfortable with what the process is. Have your three to five folks who take in issues and triage. Have that ready to go before you have the burning issue. Get your security policy in place. If you're a GitHub user, you'll find that they have GitHub security advisories; there are some easy tools in there to help you do that.
Also remember that researchers are human too. By coming to you and trying to make that contact, they want to work with you. They want to help protect users and get this sorted. Don't buy into that stereotype of the hacker in the black hoodie, menacing and mean. They're not; they want to see this have a good resolution. And I'd also say: thank them. This person has come to your project; they're doing you a huge favor.
If you are a project user, it's important that you know where to get disclosure information, and that you are signed up for it. That might be running a feed off GitHub security advisories; the project might have a mailing list. If the project is really critical to you, make sure you know how to get that information. Also know, if you're using the project and you stumble across a security issue, where you should go to report it, where they do that intake. And of course, in the spirit of open source...
If someone on your team is good at organization and communication, is this a role that you can contribute back with? You don't have to be the security engineer solving everything; if you love spreadsheets and organization, this might be for you. So, I know we're probably right on time, so maybe rather than taking questions here, I'm happy to step outside and take questions over coffee. But thank you so much, take a look at the guide, and happy, happy bug solving.