From YouTube: Securing Critical Projects (February 24, 2022)
C: Spring, are you kidding me? We have snow outside, so... what the heck? This is not cool. I mean, it is cool; it's legitimately cold. Yeah, we're going to get up to six Celsius today, it looks like, so look at us. Fun stuff.
D: Actually, while we have a few minutes before everyone else comes in: Julia, did you see Vicky's response to our... to Vicky's comment on your, what was it, your representative project?
D: I made a comment, yeah. I'm sorry, that was very vague. Let me see... so you mentioned to Vicki about your association with the FLOSS Foundations community. Oh.
D: How it might be a good way to reach people in open source, and maybe hit them from a little bit of a higher level, instead of going to them individually?
C: Yeah, this sounds vaguely... yes, this sounds like something I would have said, but I don't recall where or when.
B: I see... no, I did it. There it is, yeah. That is definitely right in line with...
B
Sorry,
my
brain
is
bride
from
this
week.
That's
right
in
line
with
with
the
the
channels
side
of
things
the
one
one
to
many,
so
I
will
actually
put
that
specifically
on
the
line
for
foundation
networks.
B: So if anyone wants to look back over it, yeah, that would be great.
D: Okay, looks like we're a couple minutes past the hour, so we can go ahead and get started. Welcome, everybody; thanks so much for joining today. This is the Securing Critical Projects working group meeting. Jeff is out this week, so I'll be doing the best I can to guide the discussion today.
D: So really, just want to thank you all again for that fantastic work. I'm looking forward to building that out more and really providing this kind of value, both to ourselves as a work group and to different work groups in the OpenSSF and open source as a whole. So with that, I'll quickly mention that our deep dive into Census II is going to be postponed for the time being.
D: Hopefully, when we get David in the room and we get the results out, we can all sit down together and really talk that through. I think that'll be a fun exercise. Oh, and it looks like Hen joined as well. I just want to also give a shout-out to Hen, who's been very active on Slack and helping out a lot as well. So thank you again, and thanks for joining. So with that, we have on the agenda...
D: Currently, Julia's extended version of the proposal that was brought up in the last meeting. So I will hand the mic to Julia.
B: Yeah, so I... I believe it was him that suggested a gradual rollout plan.
B
So
if
you
scroll
down
or
get
to
the
implementation
stage,
I've
kind
of
outlined
some
of
that,
unfortunately,
like
I
think,
there's
a
bit
of
a
dependency
with
selecting
a
voting
system
so
that
you
know
once
we're
actually
reaching
out
to
folks.
We
have
a
way
of
engaging
them,
so
I
did
put
mention
of
that
on
here,
jacques,
if
you
want
to
add
your
name
as
the
driver,
if
you're
signing
up
for
that,
I
didn't
want
to
volunteer
you.
B
Okay,
well,
then,
you
can
put
your
name
there
or
I
will
later,
but
there
were
a
couple
of
really
good
questions
raised.
I
actually
don't
know
who
who
talked
about
analyzing
efficacy,
because
you
didn't
you
didn't
fess
up
to
who
that
was,
but
that
is
a
that's
a
definite,
interesting
area
of
research.
B
You
know
coming
from
my
ml
background,
I
think
of
validation,
algorithms,
but
this
is
really
subjective
data,
so
the
the
proof
is
actually
going
to
be
whether
we
find
the
the
validation
and
the
the
the
resulting.
B
You
know
ranked
lists
useful,
so
it
it
does
need
an
iterative
approach
and
we'll
just
have
to
keep
refining
as
as
time
goes
on.
Unfortunately,
there's
not
a
way
to
like
give
it
a
this.
This
scored
98
inefficacy.
So
and
of
course,
you
know,
we
don't
hope
for
this,
but
the
next
time
that
there's
a
there's
a
log4j
or
openssl.
B
If
the
project
was
high
on
the
list,
then
maybe
we
did
a
decent
job,
so
any
feedback
that
that
you'll
have
would
be
great.
I
would
love
you
know.
This
is
like
my
third
meeting,
so
I'm
not
quite
sure
how
how
y'all
like
to
move
forward
on
these
things.
So
I'd
love
some
some
direction.
There.
D: Generally, I mean, anyone's more than welcome to provide their thoughts as well. Again, I always advocate for open discussion; anyone can participate. And generally, I think that's been our approach too: general work group consensus, and just what feels like it makes the most sense for the work group.
D
I
have
been
I've
been
thinking
about
this
as
well,
and
I
think
I
think
the
the
the
main
pieces
of
the
puzzle
are
going
to
be
the
input
engine
so
how
things
are
nominated,
as,
as
you
said,
the
curation
and
analysis
piece
of
it,
which
I
think
we
definitely
make
sense
to
work
out
a
process
for
that,
as
well
as
the
further
analysis
and
categorization,
because.
G
D
Think
what
what
could
work?
I
would
love
to
hear
the
work
group's
thoughts
on
this
is
if
we,
we
essentially
had
two
outputs,
so
the
one
output
being
that
list
right.
The
list
of
projects
that
we
say
are
critical.
D
That
alone,
I
think,
would
be
very
useful
to
a
lot
of
folks
and
and
is
something
that
a
lot
of
folks
are
looking
for
and
then
the
kind
of
the
second
list,
which
is
that
kind
of
categorized
where
there's
a
little
bit
more
further
analysis
done,
maybe
even
as
as
you
had
mentioned,
julia
a
little
bit
of
analysis
so
far
is
even
maybe
interfacing
with
with
some
of
these
candidate
projects
a
little
bit
to
to
dig
a
little
bit
deeper
and
get
a
little
bit
more
insight
on
them
to
categorize
them,
because
I
think
that
too
will
be
very
useful
for,
for
you
know,
investment,
security
and
and
and
what
we're
looking
to
do
so
I
don't
know
the
best
way
to
do
that.
D
But
you
know
we've
got
a
we've
got
a
lot
of
folks
in
the
room
and
you
all
have
a
lot
of
great
experience.
I'm
I'm
confident
that
as
a
work
group,
we
could
we
can
figure
it
out
and
come
up
with
something
good.
D
I
I
would
also
I
maybe
what
we
can
do
as
well
is
to
try
and
one
focus
on
you
know
what
the
end
product
is
going
to
be
kind
of
beginning
with
the
end
in
mind,
and
what
we're
looking
to
accomplish,
as
well
as
having
maybe
maybe
not
deadlines,
but
just
general,
maybe
dates
to
kind
of
help
us
to
so
that
we're
not
so
we.
D
So
we
have
a
good,
maybe
a
stopping
point
at
some
point
where
we
feel
good
about
something
and
can
actually
move
forward
with
with
putting
it
out
there.
So
I'd
love
to
hear
anyone's
thoughts
on
that.
B: Actually, before that, could I... I think I might have had a bit of an incorrect assumption at some point that I want to clear up from when I was writing the proposal. Was that primary list divided up and evaluated by each member of the ecosystem and then mixed back together? Because I was operating under the second assumption, and I realize that doesn't sound like it's lining up with what you're saying.
F: Yeah, there's a thing I didn't talk about in the presentation two weeks ago, which was weighting experts. So there's a couple of things, a couple of sort of quirks or wrinkles.
F: One is that there have been studies where they looked for signals of expertise to do the weighting on; those were not very successful. A lot of things that you'd assume proved somebody was an expert didn't check out. Number of publications, number of grants awarded, peer assessments of expertise: none of these were actually very predictive.
F: Unfortunately, the one thing that turned out to be successful was doing a calibration test beforehand. Calibration, for those who don't remember, is the idea that you have little tests that ask little sorts of questions, and you ask people to give estimates of them.
F
So
whether
that's
like
a
a
point,
estimate
or
a
range
estimate
whatever-
and
you
know
the
answer
in
advance
like
it's
for
a
known,
unknown
outcome-
and
you
use
that
to
basically
say
how
optimistic
or
pessimistic
is
this
person
and
how
confident,
overconfident
or
underconfident
are
they,
which
you
can
derive
from
their
answers
to
those
questions.
F: If somebody's very overconfident, they'll give very narrow ranges, or they'll give very high percentages of certainty that something is true, and so on. And when they do the weighting according to that, they get much better performance in terms of what comes out of it. The tricky part there is actually coming up with the calibration questions. They have to be enough in the domain that you're not just asking people general-knowledge questions about, you know, when the American Revolution happened, but they also have to be general enough that somebody in that domain could plausibly try to answer the question without knowing it. Does that make sense?
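The calibration-and-weighting idea described here can be sketched roughly as follows. This is a minimal illustration, not the working group's actual method: the 80% confidence target, the weight formula, and all the sample questions and numbers are assumptions made up for the example.

```python
# Minimal sketch of calibration-weighted expert aggregation.
# Assumption (not from the meeting): experts give 80%-confidence
# intervals, and weights come from how close their observed hit
# rate is to that target, so overconfident experts are down-weighted.

def hit_rate(intervals, truths):
    """Fraction of (low, high) intervals that contain the true value."""
    hits = sum(1 for (lo, hi), t in zip(intervals, truths) if lo <= t <= hi)
    return hits / len(truths)

def calibration_weight(rate, target=0.80):
    """Down-weight experts whose hit rate deviates from the target."""
    return max(0.0, 1.0 - abs(rate - target) / target)

def aggregate(estimates, weights):
    """Weighted average of point estimates for the real question."""
    return sum(e * w for e, w in zip(estimates, weights)) / sum(weights)

# Calibration phase: general questions with known answers.
truths = [6650, 1969, 8848, 1440, 299792]
expert_a = [(6000, 7000), (1960, 1975), (8500, 9000),      # well calibrated:
            (1300, 1400), (250000, 350000)]                # hits 4 of 5
expert_b = [(6600, 6610), (1969, 1969), (8848, 8848),      # very narrow ranges:
            (1500, 1505), (300000, 300001)]                # hits only 2 of 5

w_a = calibration_weight(hit_rate(expert_a, truths))
w_b = calibration_weight(hit_rate(expert_b, truths))

# Elicitation phase: point estimates for the actual unknown quantity.
combined = aggregate([70, 40], [w_a, w_b])
print(round(w_a, 2), round(w_b, 2), round(combined, 1))  # -> 1.0 0.5 60.0
```

The narrow-interval expert ends up with half the weight of the well-calibrated one, which is the down-weighting of overconfidence the speaker describes.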
D: That makes sense, yeah. Going to your question, Julia: I guess I'm not entirely sure. I just think, you know, if there's, let's say, a curation process, and for a project we say "yes, this is critical, let's put it on the list," I think that could in itself be a good stopping point, where some people will just want to see...
D
You
know
what
are
what
is
your
output
of
critical
projects
and
then
because
I
think
maybe
some
further
analysis
is
going
to
be
required
to
kind
of
categorize
and
prioritize
that
I
you
know
more
work
will
be
necessary,
but
I
don't
see
why
they
necessarily
have
to
be
different
lists,
so
to
speak.
I
just
if
one
is
kind
of
a
function
of
the
other
one.
Then
I
mean
I
guess
that
it
really
depends
on
where
they're,
housed
or
kind
of
how
that's
displayed
or
communicated.
B: So I don't want to talk too much, because I tend to do that. But one thing to keep in mind is the currency of time. And I would argue, take for example if we are splitting, if we are selecting...
B
Folks
to
provide
feedback
based
on
language
right,
just
programming,
language.
B
Then,
having
someone
provide
having
like
a
a
javascript
expert,
provide
feedback
on
the
an
entire
candidate
list
is,
is
especially
if
we're
asking
them
to
vote
relatively
on
on
some
takes
a
lot
of
time.
F: We should encourage people to estimate outside of their domain, out of their expertise, just to create additional data points. This is partly because expertise also tends to create overconfidence: people tend to give narrower ranges than they strictly ought to, based on base rates. So having an additional check from somebody who's literally not an expert on JavaScript, say, who will give a much wider range, helps to improve the overall accuracy.
H: Yeah, I apologize... I didn't want to be slow in lowering my hand, so I apologize for kind of the dumb question on my part, Julia. So we want to do this outreach, which I think is really important, within these various different ecosystems. It's not like one ecosystem; there are so many different ones that are not...
H
You
know,
necessarily
overlapping,
and-
and
so
I
appreciate
that,
I
guess
one
of
the
questions
I
have
is:
how
is
the
selection
criteria
different
from,
for
example,
what
alpha
omega
are
doing,
because
I
think
alpha
omega
are
trying
to
themselves
identify
the
the
you
know,
kind
of
the
the
projects
that
we
need
to
worry
about.
The
most
is
that
yeah,
maybe
this
is
circular-
are
we
maybe
we're
supplying
to
them
the
ones
that
are.
H: I see, okay. So as far as that goes, it would be circular to say, you know, Alpha-Omega wants to talk to us about what they should focus on, and so we can't ask them. So I get that. All right, so this is really kind of foundational work: beyond just outreach, it really gets to the identification piece. Okay, I got it. I see others with their hands up, so thanks for answering. I had interpreted this work as an intent to provide some initial framework or scaffolding for Alpha-Omega, both for how we, as volunteers and members of OpenSSF, might do outreach, and for how the contractors under the Alpha project might do that outreach. Is that the intent, or did I misunderstand?
D: So we had started. We created an initial draft of about a hundred projects that would fit that Alpha level: very high criticality, you know, used everywhere. And it would kind of help guide that decision-making, was my understanding, but I think...
D: ...it makes sense to look back at when Michael Scovetta originally proposed Alpha to the work group. It might make sense to look back at that and get an idea of exactly what the scope of the ask was. But to my understanding, it was: "we need help figuring out which projects are the most critical; can y'all help us with that?" And that's what we're working on.
D
So
we
did
create
kind
of
that
first
kind
of
rough
draft
list,
but
I
think,
if
we're
going
to
do
something
a
little
bit
more
comprehensive
and
representative
and
and
well
fleshed
out,
it
will
require.
You
know
a
little
bit
more
more
time
and
curation,
but.
I: What I see here is an evolution from "how do we decide which projects are critical?" to "how do we manage a program of outreach to support those communities?", and that appears to be what's in Julia's document, if I'm understanding this and attributing it correctly: that it's more of a "how do we support them" rather than a "which ones are we choosing to support."
B
So
I
I
don't
want
to
jump
in
front
of
in
front
of
emily.
B
I'm
I'm
just
going
back
to
the
the
notes
where
this
this
document
was
requested
and
I
have
to
admit
there
was
there
wasn't
a
ton
of
context
like
I
don't
believe
there
was
discussion
of
alpha
at
the
time
unless
I'm
misremembering
or
misinterpreting
the
notes.
It
was
just
about
how
to
get
more
experts
and
expertise
involved
in
providing
feedback
or
potentially
being
an
additional
input
point
to
criticality
score.
That
was
the
context
under
which
I
drafted
the
proposal.
B
Okay,
I'm
happy
to
evolve
it
in
a
different
direction
or
have
somebody
else
do
that
just
wanted
to
to
level
set
that.
J: So, a couple of things. Julia, first: I asked if you could add that clarification point to the document; I think that will help the future readability of it. The other part of it is: while the document is intended to drive more expertise, bringing people into this working group and the corresponding outcomes and deliverables of the group...
J
I
can
also
see
this
structure
being
applied
to
other
capabilities
within
the
open
ssf,
and
perhaps
this
is
more
of
a
question
for
brian
then
is:
could
this
kind
of
structure
or
this
framework
be
leveraged
in
other
open,
ssf
work,
particularly
where,
in
the
past,
we've
discussed
that
there
are
questions
around
contracted
work?
J
There
are
things
that
we're
going
to
have
to
have
other
people
do
as
a
paid
service
and
there's
certainly
an
expectation
that,
if
we're
curating
experts
from
the
community
or
they're
not
being
nominated
by
others,
we
want
to
ensure
that
whoever
is
doing
the
corresponding
work.
Has
that
input
or
has
a
similar
level
of
expertise.
E
Sure
I'll
answer
that
directly
since
I
was
called
out
first,
I
wanted
to
say
when
it
comes
to
alpha
omega.
I
kind
of
feel
like
I'm
have
to
represent
the
michaels
by
proxy,
since
neither
of
them
are
here,
although
I
think
they
are
pretty
regular
attendees
of
these.
This
calls.
So
I
I'd
kind
of
point
to
them
more
as
the
kind
of
last
word
on
this,
but
one
one
thing
to
perhaps
share-
which
I
think
has
is
that
you
know
with
alpha,
because
we
anticipate
the
engagements
being
meaningful.
E
That
means
we're
not
going
to
be
able
to
reach
100
projects
in
a
meaningful
way
in
the
first
year
with
any
degree
of
scalability.
We
anticipate
now
so
we'll
when
we
get
a
list
of
the
top
100
or
200
or
however
many
there
likely
will
still
be
an
internal
to
alpha
process
to
whittle
that
down
to
who
are
the
ones
that
actually,
we
should
prioritize
even
more
to
go
out
and
engage
right.
E
So
so
the
list
that
this
group
generates
and
the
and
the
scores
from
this,
as
well
as
the
best
practices
badge
and-
and
you
know,
there's
the
score
card
and
all
these
other
bits
of
data
will
be
incorporated
into
that
that
you
know
that
smaller
list.
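Combining several such per-project signals into one prioritized shortlist could look something like the sketch below. This is purely illustrative: the field names, weights, and normalization are assumptions for the example, not Alpha-Omega's actual process.

```python
# Illustrative sketch: merging per-project signals (a criticality
# score, a Best Practices badge tier, an OpenSSF Scorecard result)
# into one ranked shortlist. Weights and fields are hypothetical.

SIGNAL_WEIGHTS = {
    "criticality": 0.5,   # e.g. a criticality score, already 0..1
    "badge": 0.2,         # badge tier mapped to 0..1
    "scorecard": 0.3,     # Scorecard-style result on a 0..10 scale
}

def composite(project):
    """Weighted sum of normalized signals; missing signals count as 0."""
    score = SIGNAL_WEIGHTS["criticality"] * project.get("criticality", 0.0)
    score += SIGNAL_WEIGHTS["badge"] * project.get("badge", 0.0)
    score += SIGNAL_WEIGHTS["scorecard"] * project.get("scorecard", 0.0) / 10.0
    return score

def shortlist(projects, n):
    """Top-n projects by composite score, highest first."""
    return sorted(projects, key=composite, reverse=True)[:n]

projects = [
    {"name": "proj-a", "criticality": 0.90, "badge": 1.0, "scorecard": 7.0},
    {"name": "proj-b", "criticality": 0.95, "badge": 0.0, "scorecard": 4.0},
    {"name": "proj-c", "criticality": 0.50, "badge": 0.5, "scorecard": 9.0},
]
for p in shortlist(projects, 2):
    print(p["name"], round(composite(p), 3))
```

One design point worth noting: keeping the individual signals alongside the composite (rather than collapsing to a single scalar) is what lets downstream consumers re-slice the list, which comes up later in the discussion.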
E: But we know that the list of the top 100 was also useful for the great MFA distribution project, where knowing "okay, here's the top hundred; let's see if we can find 10 people at each to give codes to, to get two-factor auth out to" would be useful. I would wager most projects at OpenSSF are going to have a need, or at least some utility, for a list of the top N, and might have their own processes for whittling that down further, or taking a slice of it, or whatever.
E
So
you
know
the
more
that
those
lists
can
be
not
just
a
single
scalar,
but
you
know
a
scalar,
perhaps
with
some
other
attributes
like
sector.
Language
obviously
is
important.
That
kind
of
thing
we've
had
other
people
ask.
Well,
maybe
we
should
have
a
top
100
for
the
financial
sector
or
for
healthcare,
or
something
like
that.
E
All
this
is
to
say
yes
to
that
main
question,
which
is,
I
expect
this
list
to
be
useful
and
actually
critical
itself
to
many
of
the
other
functions
inside
open,
ssf
and
others
outside
too.
J: So then, my next follow-up is on the expertise curation. In my experience, when you have too many experts in the room (and this was mentioned previously), you tend to have a one-sided view of the ecosystem, with a lack of diversity in viewpoints; and some of that comes from junior experience asking the correct questions from their own background.
J
Is
there
a
mechanism
or
does
anybody
have
any
awareness,
something
that
can
be
introduced
into
this
framework
to
ensure
that
those
kinds
of
things
don't
happen
that
we're
receiving
the
correct
amount
of
diversity
and
viewpoints
and
providing
feedback?
Asking
the
right
kinds
of
questions
and
doing
the
analysis,
because,
quite
frankly,
especially
in
the
security
community,
there's
a
lot
of
bravado
and
confidence
and
your
ex
your
current
workspace
and
a
lot
of
it
needs
to
be
collective
learning.
B: Yeah, that's definitely something that I was struggling with when creating the doc, or creating the proposal.
B
I
do
think
if
you
look
at
the
nomination
section,
I
do
think
it's
critical
to
provide
a
a
way
of
drawing
in
experts
that
aren't
connected
directly
to
the
members
of
the
working
group
and
so
having
that
kind
of
nomination
way,
nomination
side
of
things
and
also
the
ability
to
self-nominate.
B: Without necessarily having to join the working group, because that's a bigger time commitment. And that was one of the reasons this proposal was requested: to get their feedback on what the working group produces.
F: And you're on mute. Yes, I am; the whole space bar stops working as soon as you've said anything in chat. It's a fun feature. Yeah, one thing that Emily brought up is the culture of bravado, or, if you wanted to be nice, overconfidence. I think that's one thing. Let me... oh, my hand.
F
The
trickiest
part,
as
I
sort
of
said
earlier,
is
coming
up
with
with
good
questions
where
it's
it's
some
mix
of
general
knowledge
and
some
mix
of
domain.
F
You
know
things
like
how
long
is
the
nile
and
giving
a
giving
a
range,
and
I'm
also
concerned
with
the
diversity
as
well.
I
think
it's
likely
we'll
have
some
sort
of
snowball
effect
or
snowball
sampling
in
terms
of
how
we
build
build
the
expertise.
Initially,
one
of
the
great
difficulties
is
that
we
need
lots
of
experts,
but
it's
sort
of
first
of
all,
hard
for
the
open
ssf
to
get
its
message
out
and
then
we're
a
smaller
part
of
the
open
ssf
trying
to
get
the
message
out.
C: Well, hi, team. So this actually reminds me of... it's not directly analogous, but it's similar to what has happened in the SPDX legal team with the SPDX license list. For years... so, how the process works...
C
Is
people
send
issues,
and
they
say
please
add
us
to
the
spf
spdx
license
list
and
then
there's
this
great
big
conversation
that
happens
both
on
the
issues
and
on
the
licensing,
the
legal
calls,
and
that
became
pretty
cumbersome
there,
weren't
really
any
guard
rails,
and
so
things
were
they
would
change
from
from
week
to
week,
from
month
to
month,
from
whoever
was
leading
the
meeting
and
then
over
time
they
developed
and
publicly
documented
and
used
as
a
checklist,
something
they
called
the
license.
C: ...inclusion principles. Looking at these principles, rated by priority: should we include these on the list? And they use that in the issues; they walk through it and go, "yes on this; yes on this; no on that; okay, well, that one is pretty far down the list."
C
Does
it
matter
that
it's
a
no
and
then
they
have
they're
able
to
have
conversations
around
openly
documented
principles,
and
that
sounds
like
kind
of
where
we're
trending
here
right
is
to
come
up
with
openly
documented
principles
and
criteria
for
inclusion
as
one
of
these
reviewers
so
to
speak,
and
then
we're
able
to
have
these
these
kind
of
consistent
conversations
about
it,
and
I
think
that's
very,
very
important
for
us
to
be
able
to
do
and
we
can
again
coming
up
with
that
list
is
going
to
be
pretty
difficult.
C
But
it's
going
to
be
incredibly
valuable
and
worthwhile
and
something
that
isn't
going
to
be
right,
the
first
time,
but
we
can
iterate
on
it
and
we
do
in
spdx
as
well
in
the
license
in
the
legal
team.
We
do
iterate
on
the
inclusion
principles
as
necessary.
Nothing
is
carved
in
stone.
Everything
is
carved
in
jello.
Everything
is
open
to
to
being
edited
right.
So
I
think
that's
it's
really
valuable.
C
What
we're
discussing
here
and
how
we're
doing
it,
and
I'm
really
grateful
to
julia
for
starting
this
document
and
it's
going
to
be
great,
going
forward.
Yay
go
team.
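The principles-as-checklist process described above can be sketched as code. This is loosely modeled on the SPDX license-inclusion idea of openly documented, priority-rated principles that get walked through consistently for each candidate; the specific principles, thresholds, and decision rule here are invented placeholders, not the real SPDX criteria.

```python
# Sketch of a principles-based inclusion checklist. The principles,
# their priorities, and the recommendation rule are hypothetical.

PRINCIPLES = [
    # (priority, description, predicate): higher priority listed first
    (1, "widely depended upon", lambda p: p["dependents"] >= 1000),
    (2, "actively maintained", lambda p: p["commits_last_year"] >= 12),
    (3, "accepts external contributions", lambda p: p["external_contributors"]),
]

def evaluate(candidate):
    """Walk the checklist, recording a yes/no per documented principle."""
    return [(prio, desc, bool(pred(candidate))) for prio, desc, pred in PRINCIPLES]

def recommend(results):
    """Include when every priority-1 principle passes; lower-priority
    misses are noted in the results but are not blocking."""
    return all(ok for prio, _, ok in results if prio == 1)

candidate = {"dependents": 5400, "commits_last_year": 3,
             "external_contributors": True}
results = evaluate(candidate)
for prio, desc, ok in results:
    print(f"P{prio} {desc}: {'yes' if ok else 'no'}")
print("include:", recommend(results))
```

Because the principles live in one openly documented table, the conversation stays consistent from meeting to meeting, and iterating on the criteria (the "carved in Jell-O" point) is just editing that table.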
D: Wonderful. Thank you, Vicky; couldn't agree more. And yes, we discussed that a little bit in our first go, making sure we have, you know, well-defined selection criteria, and just trying to be as factual and data-driven as we can possibly be. So yeah, very good, good stuff.
D
I
am,
I
wonder
if
I
think
I
think
what
we
should
potentially
do
is
maybe
have
a
dedicated
session,
so
I
think,
with
the
census.
2
results
coming
out.
This
is
to
go
back
to
the
100.
Most
critical
projects
list,
for
example,
are
the
critical
projects
list?
Yeah
really
have
a
dedicated
working
session
on
that
kind
of
like
a
hands-on
thing,
like
we
did
back
in
in
mid
late
november,
just
dedicated
on
that
and
then
going
along
with
that
the
outreach
of
julius
program.
D
I
think
it
would
make
sense
to
probably
have
some
dedicated
time
just
on
kind
of
working,
that
out
and
building
that
out
and
coming
up
with.
You
know
action
items
things
that
we
can
do
to
to
to
move
it
forward,
because
I,
because
I
absolutely
love
the
discussions
that
we
have
and
I
think
they're
extremely
useful,
and
I
would
I'd
love
to
incorporate
that
into
into
some
of
the
the
things
that
we're
looking
to
to
deliver.
D
So
I'm
gonna
put
on
the
agenda
for
for
next
for
our
next
meeting,
which
I
think
will
be
perfect
timing,
because
that'll
be
shortly
after
the
census.
2
stuff
comes
out,
and
hopefully
david
will
be
back
as
well
and
we
can
deep
dive
into
into
that
output
and
then
ingesting
that
into
our
own
method.
For
for
our
output.
So-
and
I
know
that
still
requires
some
development
as
well,
but
I
think
that
having
the
dedicated
time
for
that
makes
makes
sense
jack.
You
have
your
hand
up.
F: ...when you look to aggregate their opinions. The best example I can think of was a competition that was run by IARPA, the Intelligence Advanced Research Projects Activity. Intelligence agencies obviously have the problem that they have a huge amount of data and a huge bunch of experts, and they need to come up with actionable insights that are not stupid.
F: "Will this gas pipeline be switched off by this date?", and so on and so forth; these sorts of geopolitical questions. They were able to show that some teams performed better than others: they were able to aggregate expertise better than others, and there were procedural differences between them in how they went about it, some of which I discussed previously.
F
So
that's
the
closest
example
I
can
think
of
the
other.
One
would
be
betting,
betting
markets
or
futures
markets.
That's
from
expert
aggregation,
I'm
I'm
not
in
favor
of
a
futures
market
for
what
we
do.
I
just
don't
think
there
will
be
enough
liquidity
to
make
it
effective.
F
It'd
be
too
easy
to
manipulate
plus
it
would
be
we'd
be
arguing
that
we're
giving
it
you
know
it
could
be
like
the
whole
debacle
with
the
terrorist
futures
market
where
it
was
like.
Well,
if
you're
a
terrorist,
you
know
you're
going
to
attack,
you
can
make
a
killing,
figuratively
and
literally
by
betting
on
this
future
happening
and
then
going
out
and
doing
it.
So,
although
it
would
have
revealed
in
the
prices
that
you
were
planning
to
do
that,
it
also
gave
you
a
direct
financial
incentive
to
go
and
do
it.
F
So
you
can
see
something
similar
with
vulnerabilities
if
somebody
has
a
direct
financial
incentive
to
create
a
vulnerability
or
to
discover
vulnerability,
but
not
to
share
it
with
the
community.
That
would
be
a
drawback.
Despite
the
other
advantages
of
futures
markets.
I've
rambled,
remember
julia.
When
you
thought
you
talked
too
much.
D
Oh,
that
was
good.
That's
a
really
good
point
I
mean
incentive.
Incentivization
is
is
important
very
much
so
I'd
love
to
open
the
floor,
to
maybe
someone
we
haven't
heard
from
yet
or
someone
who
has
been
just
aching
to
say
something
and
hasn't
had
the
chance
to
yet.
I: So, hi there, hello, everyone. So, I bet this has been discussed a couple of times already; I'm wondering... I think Brian mentioned it before. So I'm with Ericsson, so we're a telco company, which means... and that's just an example here; I think Brian mentioned finance, automotive, and there are certainly separate industries.
I
What
what
is
your
group's
perspective
currently
in
terms
of
industry,
specific
flavors
of
of
said
list,
because,
honestly,
of
course,
I
know
that
the
the
that
is
a
challenge
to
come
up
with
the
one
list
of
100
projects
right
and
there
are
certainly
industry,
specific
projects
more
or
less
critical
to
one
and
it's
it's
my
my
belief
that
if
it's
industry
critical
it's
in
the
responsibility
of
that
industry
to
also
take
care
of
this,
but
still
it
would
be
nice
if
there
was
a
way
to
to
highlight
and
flag
different
flavors
of
this
list,
I'm
not
going
to
say
that
we
need
to
have
a
different
different
list
right
away
and
I'm
totally
fine
with
let's
say
running
this
as
a
prototype
for
a
generic
list
and
then,
if
it
works
well,
we
kind
of
can
go
into
different
different
areas.
I: But at the end of the day, I would like to... well, I think it would be beneficial to have differently flavored versions, but I would really like to know what the group thinks about this, because it's, like, my first time here on the call, and I don't want to make wrong assumptions about where you're going.
D: I personally am of the camp where I think, when you go high up enough in terms of criticality, how critical some of these projects are, it doesn't matter what industry you're in, who you are, or really what the extent or the context is. I think there are some projects that are just so absolutely critical, because they're tied into basically everything, that you almost couldn't specify...
D
You
know
this
is
just
for
industry,
x
or
y,
and
I
I
personally
I'm
of
the
camp
to
really
try
and
focus
on
those
projects,
the
ones
that
we
can
say
you
know
without
a
doubt.
You
know
if
this
disappeared
tomorrow
or
stopped
working,
you
know
everyone
would
be
affected,
probably
negatively
and
I'll
cause
a
lot
of
disruption.
So
that's
just
my
that's.
My
opinion,
I'm
sure
others
have
different
opinions
as
well.
It
looks
like
we
have
a
couple
hands
up,
so,
let's
hear
from
them
we
could
do
julia.
Then
henry.
B
I
can't
use
the
word
critical
in
this
working
group
anymore,
most
critical
list
of
projects
that
would
basically
take
down
everyone,
but
I
also
think
you
know
once
we
have
that
list.
Segmenting
it
out
by
industry,
isn't
out
of
the
question
you
know.
If,
for
some
reason,
kobo
were
to
go
away,
it
would
take
down
the
government
right
so
unfortunately,
still
very
heavily
relied
on
there.
D: Thank you, Julia. Henry?
K: Yeah, I agree. I think it's actually a really good example of... sorry, I apologize, I forget who made it... the point on expertise focus, right. So if we get focused on expertise because we're looking at Java, as always my good easy example, we can easily miss the Java parser that's really important to that hospital-focused specification.
K
So
we
want
to
I,
I
think,
starting
into
this
initial
direction
of
categories
that
are
sort
of
big
and
broad
and
simple,
and
then
that
diversity
test
one
of
those
diversity
testing
slices,
is
to
be
taken
out
to
expertise,
experts
within
particular
business
domains.
As
an
example
of
these,
you
know
perpendicular
domains
and
flagging.
What's
missing
from
your
perspective
and
they're
like
well,
the
big
java
parser
of
an
xml
format
for
medical
things
would
be
in
a
glaring
thing.
It'd
be
like
okay,
great.
This
needs
its
own
highlight
star
way
of
being
seen.
D
Thank
you
david.
You
have
your
hand
up.
G: Yes, hello. I want to introduce myself to everyone: I work for IBM Research, and I'm also one of the founders of the GCC steering committee, so representing the GNU toolchain. And related to what you're saying about critical pieces of infrastructure: I think everybody understands that the GNU toolchain, including glibc and binutils, affects a lot of the infrastructure that we have in Linux and the cloud. And so, you know, following on...
G
I
haven't
been
part
of
all
these
meetings,
but
you
know
just
as
you're
saying
how
to
carefully
understand
you
know
to
to
to
find
and
recognize
those
projects
that
it's
every
different
industry
has
its
own
critical
issues,
but
also
as
we
try
to
somehow
encompass
all
of
this,
you
know
try
to
to
find
some
a
automated
process
but
find
a
way
of
doing
the
sorting
too.
That
is
just
you
know,
especially
with
these
older
projects
like
gcc,
which
isn't
on
github.
G
For
example,
I
mean,
if
you're,
looking
at
stars,
we're
looking
at
things.
I
mean
that's
not
going
to
show
up
for
gcc
or
bing
util,
so
just
to
you
know
just
be
carefully
to
recognize
that
you
know
open
source
and
especially
linux
has
been
around
20
30
40
50
years.
G
You
know
that
that
there
needs
to
be
a
really
big
net
into
how
to
recognize
different
projects
that
were
started
in
different
generations
and
maybe
in
in
different
using
different
infrastructures,
and
you
know
just
to
make
sure
that
that
nothing
is
is
missed
in
terms
of
how
to
capture
and
represent
those
important
projects.
G
I
mean
when,
when
we're
doing
these,
you
know
cii
or
these
these
other.
You
know
census
type
of
things
to
how
to
how
to
make
sure
that
everything
is
is
recognized.
D
Absolutely
yeah,
thank
you
david
and
that's
a
great
point
and
I
think
it
ties
into
the
importance
of
diversity.
Emily
brought
up
the
point
as
well.
That's
something
that
that
we
tried
incorporating
into
our
first
run
of
developing
a
list
of
critical
projects.
Was
I
really
thought
it
would
make
more
sense,
given
how
complex
of
a
problem
it
is
to
try
and
source
as
many
diverse
viewpoints
as
possible,
so
that
we're
not
just
looking
at
one
ecosystem
or
one.
D
You
know
one
tool
and
what
have
you
and
I
think
that's
going
to
be
a
good
guiding
principle
to
getting
a
more
comprehensive
picture.
D
Instead
of
you
know
what
are
the
most
critical
projects
you
know
in
this
in
in
this
ecosystem,
for
example,
and
I'm
happy
to
say,
gcc
actually
did
make
our
rough
first
draft,
I
just
double
checked
and
it
was
on
there
so,
and
that
was
through
actually
through
through
survey
response,
so
it
was
through
actually
getting
the
opinions
of
the
community
of
openssf,
and
you
know,
I
think
that
is
one
way
to
kind
of
foster.
Diversity
is
to
just
get
as
many
viewpoints
as
you
can.
D
J
So some of the discussion in the chat has been around reporting on projects, especially from a black-hat perspective. This has come up in some of the conversations in the SBOM community as well: if I'm providing a list of projects, can an attacker correlate all of that information, pick one that's a weak point, and go exploit it as a target? We could end up with the same kind of concern around listing critical projects and reporting their security posture.
J
So
do
you
protect
the
ignorant
because
you're
simply
unaware
of
security
practices,
or
do
you
publicly
proclaim
their
current
security
posture
so
part
of
the
problem
in
that
space?
In
my
mind,
is
that
at
what
point
do
you
draw
that
line?
What
is
that
threshold
look
like,
because,
when
we're
doing
security
scanning
on
operational
systems,
the
absence
of
a
security
finding
does
not
always
guarantee
that
it
was
tested.
J
D
Yeah, absolutely. Do we have any thoughts on Emily's question?
K
Just to read out what I was typing as a proposal there: someone mentioned having a sort of principles and criteria, and I think this is a good one. Generally, the list we create will be transparent and public, but we should remain conscious of that. When there's a surprise, you know, if everyone says, "I never thought of this," that's the moment to worry about Brian's direction. But until then, the general principle is: the list is out there, until you discover the thing that you're shocked by.
D
I
think
I
mean
sure,
maybe
some
of
the
analysis
that
analysis
that
we
do
could
theoretically
help
a
bad
actor,
but
I
would
say,
yeah,
I
guess
as
long
as
it's
nothing
that
is,
I
guess,
secret
or
because,
because
at
the
end
of
the
day
I
mean
I
mean
bad
actors
are
always
going
to
be
bad
actors
and
and
and
they're
always
going
to
do
what
they
do,
but
I
think
mitigating
their
their
ability
to
do
to
act
badly
so
to
speak,
is
obviously
important,
so
so
yeah.
D
I
guess
we
just
got
to
walk
that
fine
line
between
transparency
and
and
maybe
not
really
giving
away
anything
that
is
maybe
proprietary
or
you
know
a
secret
that
could
that
could
come
back
to
that
could
be
used
in
a
in
a
negative
way
or
in
a
bad
way.
K
On Dave's Alpha-Omega question from earlier: I think this is where there will come to be a feedback loop with Alpha-Omega. At the moment, the most interesting surprise I can imagine is something that Alpha should dive into immediately to try to help that project, and so the group will reach that point of realizing where Alpha is applying attention and be able to go:
K
We
want
to
give
alpha
a
a
nudge
on
this
to
go,
get
involved
in
this
project,
because
we've
discovered
this
in
a
weird
corner,
and
I
I
agree
with
you,
I
mean
I
think
this
is
it's
the
one
in
100
000
right.
It's
the
the
very
rare
case
where
something
pops
up.
C
Well
hi
hand:
you
just
reminded
me
of
something
else.
I
really
did
not
expect
for
my
sbx
knowledge
to
come
in
so
useful
here,
but
another
thing
they
do
that.
I
think
the
critical
projects
team
might
want
to
consider
is
they
have
releases.
C
They
are
not
constantly
updating
the
list
of
the
license
list.
It's
every
quarter.
They
will
have
released
a
new
one
and
I
think
that's
something
that
we
should
look
into
right.
Not
only
does
it
acknowledge
that
this
is
a
living
public
document
right,
but
it
also
allows
us
to
build
in
this
this
reconsideration
cycle
right.
Even
if
we
don't
have
to
update
the
list
every
quarter,
you
know
we.
Perhaps
we
haven't,
found
a
new
lib
nebraska.
C
We
at
least
have
that
process
built
in
and
we
can
be
very
public
about
the
fact
that
we
have
looked
at
this,
and
this
is
where
we
are,
for
you
know
2022,
q4,
sort
of
thing
right.
I
think
that
would
be
very
good
to
just
from
the
very
start
be
clear
about.
You
know
we're
gonna,
have
a
process
and
we're
going
to
have
here's,
how
we're
going
to
do
it
and
every
quarter
we'll
release
something
new
and
that
will
also
help
everyone
else
who
relies
on
that
right.
C
Dale
yeah,
I
know
siri
thanks
siri,
he
does
not
understand
so.
He'll
have
to
come
to
more
meetings,
but
I
think
you
know
everyone
else
will
be
able
to.
They
won't
be
aiming
at
a
moving
target.
They
know,
will
have
a
release,
schedule
and
they'll
be
able
to
plan
on
that
and
that's
going
to
help
alpha
make
a
great
deal.
I
think,
as
well
as
anyone
else
who
relies
upon
this.
C
So every quarter they're adding new licenses and making additions to other licenses, you know, if they need to shift something. There's the public list that's on the website, and then there are the conversations that happen in the issues, and then every quarter they say, okay, we're going to package up all these changes and new licenses, and now we're going to push them to the live website. So the issues are public.
C
Everyone
can
see
them,
everyone
can
participate,
but
for
the
vast
majority
of
people
who
use
it,
they
just
want
the
published
license
list
right,
and
so
it's
you
have
it
public
in
both
ways,
and
you
also
have
this
set
target
that
everyone
can
rely
upon.
C
So
if
there
is
a
lib
nebraska
and
you
need
to
jump
on
it
immediately,
great,
that's
awesome,
you're
tracking
that
and
alpha
omega
can
be
working
on
it
and
then,
when
it
goes
public
and
is
pushed
live
as
a
release,
then
everyone
can
see.
Oh
yes,
now
we
have
a
new
nebraska
and
it's
in
the
top
100
it
pumps
something
else
down
or
whatever
right.
C
However,
that
works
that's
going
to
be
a
work
in
progress
as
we
are
developing
those
criteria
and
the
entire
process,
but
I
think
that
could
be
very
helpful
for
a
lot
of
people.
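The SPDX-style process described above, where edits accumulate continuously in a public working draft and only a periodic cut becomes the published list, could be sketched roughly as follows. This is a hypothetical illustration of the idea, not an actual OpenSSF or SPDX mechanism; the class and field names are invented for the example.

```python
import datetime
import json

class CriticalList:
    """Working draft plus immutable quarterly releases, license-list style."""

    def __init__(self):
        self.draft = set()    # living document: proposals land here continuously
        self.releases = {}    # tag -> frozen, published snapshot

    def propose(self, project):
        """Add a project to the working draft (the public 'issues' stage)."""
        self.draft.add(project)

    def cut_release(self, date):
        """Freeze the current draft under a tag like '2022Q4' and publish it."""
        quarter = (date.month - 1) // 3 + 1
        tag = f"{date.year}Q{quarter}"
        self.releases[tag] = sorted(self.draft)  # frozen copy, never mutated
        return tag

    def published(self, tag):
        """What downstream consumers see: a fixed, non-moving target."""
        return json.dumps({"release": tag, "projects": self.releases[tag]})

cl = CriticalList()
cl.propose("gcc")
cl.propose("openssl")
tag = cl.cut_release(datetime.date(2022, 11, 15))
cl.propose("lib-nebraska")  # lands in the draft, not in the already-cut release
```

The key property is the one raised in the discussion: a newly discovered "lib nebraska" can be tracked (and worked on) immediately in the draft, while everyone who consumes the published list still plans against a stable quarterly snapshot.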
D
Thank
you
did
someone
else
have
their
hand
up:
oh,
no,
okay,
okay
and
yeah
julia
julia.
I
see
your
opinion
here
as
well
yeah.
I
I
personally
I'm
a
huge
fan
of
mvp's
minimal,
minimal,
viable
product.
I'd
like
to
start
with.
You
know
what
can
we
actually
achieve
and
put
out
there
as
opposed
to
you
know?
What
is
this
perfect
thing,
and
you
know
let's
not
do
anything
until
that's
perfect.
D
So,
yes,
I
would
say
yeah,
starting
with
with
having
kind
of
like
that
north
star
of
what
to
work
towards
and
then
just
kind
of
having
releases
right
now
as
the
as
the
process
gets
finalized.
So
wonderful,
okay,
great!
So
we've
got
a
couple
minutes
left
for
the
hour.
I'd
love
to
hear
I
I
try
and
always
have
at
least
a
couple
minutes
at
the
end
for
any
miscellaneous
thoughts
or
updates,
or
if
someone
had
something
that
they
wanted
to
bring
up
and
didn't
get
a
chance
to
earlier.
D
Now
is
a
perfect
time
for
that.
So
I'll
open
the
floor
for
a
couple
minutes
before
we
kind
of
have
some
closing
comments
and
conclude
for
today.
So
I'd
love
to
hear
from
from
from
you
all
if,
if
there
was
anything
on
your
mind
or
something
we
didn't
get
to
yet.
E
The webinar for the release of Census II is on Tuesday, I believe, so I think that's when the report comes out.
D
Great
great
yeah,
so
hopefully
by
by
our
next
meeting,
we'll
well
that'll,
be
out,
and
I
will
be
able
to
really
kind
of
dive,
deep
on
that
and
and
and
analyze
that
so
yeah.
So
with
that,
I
just
want
to
thank
everybody
great
discussion
today,
as
always
really
appreciate
people
participating
david,
welcome
and
georg
welcome.
Thank
you
for
joining
today
and
providing
your
thoughts
and
opinions
and
asking
good
questions.
D
I'm
looking
forward
to
to
working
through
this,
with
with
you
all
at
the
next
workgroup
meeting
and,
of
course,
just
to
bring
up
a
point
that
was
brought
up
on
slack
earlier,
that
you
know
just
to
be
inclusive
for
everyone
in
case
you
know
the
meeting
times,
don't
always
work.
We
can
always
take
things
to
slack
too.
If,
for
whatever
reason,
the
the
the
time
zone
just
doesn't
work
out
for
you,
so
yeah
feel
free
to
to
continue
the
discussion.
D
It
doesn't
just
have
to
be
here
during
our
work
group
meetings.
You
know
the
discussion
is
always
welcome
and
with
that
yeah
thanks
again
everybody
and
I
look
forward
to
seeing
y'all
in
two
weeks
bye.
Everyone.