From YouTube: Securing Critical Projects WG (January 27, 2022)
Agenda and Notes: https://docs.google.com/document/d/1MIXxadtWsaROpFcJnBtYnQPoyzTCIDhd0IGV8PIV0mQ/edit
C
Hi everyone, thanks for joining. As usual, we'll give people a few minutes to join and start a few minutes after the hour.
A
So if you don't mind, I'd like to propose a last-minute addition to our agenda: I just wanted to give a quick update on the status of Census II from Harvard.
A
All right, you have been added. You will probably need to refresh that particular tab.
C
Okay, it looks like it's about five after the hour, so hopefully everybody who wants to be here is here. Thanks for joining, and again a reminder, as David mentioned: if you'd like, you can add yourself as an attendee in the meeting notes. The meeting notes are pasted in the chat and also attached to the calendar invite; you do have to join the group to get edit access. Okay, let's continue on.
D
Sure, I can dive in. I'm new to this particular working group, or it's been a long time since I've been here. John Meadows from Citibank. I run cloud security engineering in appsec, but also a lot of the software supply chain work. We've been open sourcing a lot of our work through the CNCF, with some best-practices work, and we're working on a secure software factory which we're looking to donate to the OpenSSF.
C
Awesome, welcome, glad to have you. Anyone else?
G
It's been quite a long time since I joined this meeting. Ava Black; I lead open source ecosystem security at Microsoft and am also on the OSI board. Some of you probably know me from the CNCF TAG Security or from the Confidential Computing Consortium.
H
Hey, I'm Tom from Bloomberg. I'm leading some of the software supply chain security efforts, and I'm making my way around each of the working groups to see which we can help contribute to, and which can help us as well move forward with a lot of this work.
C
This is great, a lot of new people; welcome, everyone. If you're not aware, we have a GitHub repo where we state our mission, the kinds of things we tend to do in this meeting, and some of the projects that are part of this group, so check that out. All right, let's move along: status of Census II from Harvard.
A
Okay, so this is me giving a quick status report. Let me give you the punch line: the new Census II is expected to be released mid-February.
A
They have all the data-analyzed results and a draft report, and a whole bunch of people are now beating them up on it, as happens when you write a report: what do you mean, where'd you get this, explain this, that sort of stuff. But everything is looking good for release.
A
They use three data sources, three different SCA suppliers, as the starting point for looking at what software is used.
A
The agreement with those suppliers was that the data-analyzed results, not the report, would be shared with them, and they would have until the first of February to object, in case something super sensitive would be revealed by it. We're not expecting any such issues.
A
You go back and forth and review comments, so it may be a few days later than that, but I'm expecting release to be relatively imminent. Once it's released, I think the expectation by many is that this working group would then look through that list and decide what to add and what to change in its list of critical open source projects, based on that data.
A
I'll tell you right now that they have several different tables. One issue is npm versus not-npm, and another issue is direct versus transitive.
A
Basically, because of the way npm manages its packages (I think the vast majority of you are already aware, but just in case someone isn't): the npm ecosystem works very differently from most other package ecosystems. About half of the packages have either zero or one function at most; there's a real drive towards incredibly tiny packages.
A
As
a
result,
when
you
start
doing
these
transitive
dependencies
and
counting
any
counts
of
an
npm
package
will
just
completely
obliterate
any
other
package
just
because
the
way
the
ecosystem
works,
and
so
instead,
instead
of
trying
to
figure
out
a
way
to
normalize
this,
which
is
just
truly
different,
they're
just
going
to
have
them
as
two
separate
buckets.
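The effect described above, that transitive counts in a micro-package ecosystem swamp everything else, can be illustrated with a toy sketch. The dependency graph below is invented for illustration, not taken from the census data; it just shows how a tiny utility package picks up far more transitive dependents than direct ones once chains are followed.

```python
from collections import defaultdict

# Toy graph: package -> set of its direct dependencies (hypothetical names).
deps = {
    "app-a": {"framework", "tiny-util"},
    "app-b": {"framework"},
    "framework": {"tiny-util"},
    "tiny-util": set(),
}

def direct_dependents(graph):
    """Count how many packages list each package as a direct dependency."""
    counts = defaultdict(int)
    for pkg, ds in graph.items():
        for d in ds:
            counts[d] += 1
    return dict(counts)

def transitive_dependents(graph):
    """Count how many packages reach each package through any chain."""
    counts = defaultdict(int)
    for pkg in graph:
        seen, stack = set(), list(graph[pkg])
        while stack:  # depth-first walk of everything pkg pulls in
            d = stack.pop()
            if d not in seen:
                seen.add(d)
                stack.extend(graph.get(d, ()))
        for d in seen:
            counts[d] += 1
    return dict(counts)
```

Here `tiny-util` has two direct dependents but three transitive ones; scale that up to an ecosystem where half the packages are one-function utilities and the transitive bucket is dominated by them, which is presumably why the report keeps npm separate.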
A
It's hard to figure out another way, and since npm is unique, there's no need to. They are also going to talk about direct versus transitive dependencies: some things are direct, some are not. One big difference between this Census II, this updated version, and the original census that's already public: there are several differences, actually.
A
One is they're going to have three SCA vendors, so a lot more data to work with. Another difference is that they're taking exact versions into account, which is something I had actually done back when I was at IDA, when we did some preliminary prototypes and handed them our code. It turns out it's a lot of work to do that, and you have to do it correctly, because there are many ways to do it.
A
So
the
how
you,
the
algorithms,
you
use
matter
and
they've,
already
found
that
there's
some
a
number
of
disturbing
trends,
where
particular
versions
that
are
very
very
old,
are
actually
at
least
as
popular
or
more
popular
than
current,
not
just
current
versions,
but
current
branches.
A
A lot of people, for example, weren't vulnerable to Log4Shell because they were using the log4j version 1 branch, which has known vulnerabilities, hasn't been maintained in years, and is expressly not maintained. That's the kind of stuff they're pulling out, which I think will surprise zero people in the IT world, but I think it's good to have the actual data to show it.
C
Either way, if they think something is very important and we already passed on it, or vice versa...
A
Okay, let me put my cards on the table, because I actually did something similar, although I didn't use exactly the same approach that Harvard's done. I actually wrote the census, now called the Census I report, and I did some preliminary prototyping for this Census II approach. I'm a big believer in doing data analysis and making decisions based on data, but I also think that data is important information that you then hand off to humans to do a final cut, in this particular case.
A
For example, in Census I we did try to weight not just how popular something was, but how exposed it was. For Census II, frankly, I can sympathize: the amount of data they're trying to deal with is huge. They're taking on a larger scale, and they don't have the kind of data that lets them do some of that kind of analysis, so they intentionally focus very much on the following.
A
Let's
first,
you
know
from
every
sca
supplier
we
can
all
the
data
of
every
per
application,
they've
analyzed
and
then
backtrack
and
identify
what's
what
their
the
sum
of
all
those
applications
most
depend
on.
So
I
say
all
that
as
a
prefix
to
say,
I
think
we
need
to
as
humans
go
through
and
add
human
knowledge
of
context
that
the
numerical
analysis
cannot
provide,
and
this
was
even
done
with
census.
One.
A
So I think the expectation is that this is going to be important new work, but we need a group of people to look at it (all of you, really, everybody we can get) to look at the context. If there's an unintentional vulnerability, how bad would that be? If there's a malicious attack, how bad would that be, and how likely would it be?
A
Those kinds of things are very, very hard for computer analysis today. I realize that's a really long answer to a simple question, but I think I need to give some justification. Does that make sense?
A
There's a little bit of that on the openness on this working group's website, I believe. That doesn't mean it doesn't need to be documented better, because I actually agree with you; I think it's important to write that down. Other questions?
F
Yeah
so
so
david,
I
really,
I
really
love
incorporating
human
expertise.
I
think
we
forget
that
that
counts
as
data
too,
and
not
to
go
into
ml
terminology,
but
that's
a
kind
of
a
semi-supervised
approach
that
that
has
cons
like
in
the
ml
space,
consistently
yields
better
results
than
unsupervised
approaches
alone,
so
it
makes
perfect
sense.
F
I guess one of the things I am wondering about is: what is the plan for recruiting those people that have that domain expertise, and how are you making sure that it's a representative sample?
A
I think the short answer is that we're expecting all of us to do some research, to go back and look, but I don't have a way of bringing in everybody in the world, and I don't know anybody else who does either. If we're expecting expertise in all things, I don't see how we're going to manage that.
F
Not
expertise
in
in
all
things,
necessarily
it's
just
a
matter
of
not
just
that
that
trivializes
it
it.
But
it
is
a
matter
of
ensuring
that
when
we
are
providing.
F
Data
from
our
own
experiences
from
our
own
research
that
we
have
enough
folks
from
varied
backgrounds
to
be
able
to
provide
a
a
well-rounded
sample.
I
think
ava
had
a
had
a
point
that
that
they
were
looking
to
make.
D
I think it might have been Jack as well, but I'd really just add to that: I was thinking of reaching out to red team groups, and also teams within our own companies and organizations, to try and provide that additional human input. We know certain things are more accessible; we know certain things aren't used, or, to David's point, aren't maintained by as many people as others, and we can bring that sort of additional information to bear.
G
My comment might be more applicable to the Alpha part of Alpha-Omega than this, but it might also be applicable here. I would suggest the process we start to create be built to include the sort of human information that Julia was pointing at, from domain experts who are not in this group and won't be in this group, but we know where they are: they're in those projects, they're in those domains.
A
Okay, we'll need to figure out how to do that, then. I think on our first run-around we were in a hurry to support a particular group, the great MFA project, which we'll talk about in a moment.
G
We find a liaison within that project's maintainer community who is a contact point. That person is invited, they're on the mailing list, and that sort of flips it from "we're concerned about this project" to "we're engaging with this project and able to gather information with it," and that becomes formalized as part of the OpenSSF's process for outreach and connection.
A
Yeah, if I can: our next item is going to be talking about Alpha-Omega, so let's talk more about that then. I don't want to lose this thought, though. Last year we were in a hurry to try to support the MFA project; my thanks to everybody. It was a rushed process.
A
On the other hand, we got something, and we got a whole bunch of results. But I do think that a way to get more engagement, with more expertise and more analysis, would be awesome, so let's talk about how to do that. I'm not sure how to do that; I'd love to hear others' ideas on it.
J
I have a suggestion, or related suggestions, I suppose. Like Julia, I'm going to analogize to something I've known from my past; in my case, politics. I used to be involved in Australian politics, please forgive me, and I was very interested in voting theory and voting systems. I think it would be possible to construct a voting method that would allow us to integrate, so to speak, synthetic votes from software agents calculating different parts of the criticality score.
J
It's been a long time since I looked at voting methods, so I would have to go back and look up their properties, because you're probably familiar with Arrow's theorem, that there's no perfect voting method whatsoever, and there's another one, I always forget the name, which is even scarier. But it is, I believe, possible to do that kind of thing. The one asterisk I'll put beside that is that it would probably be infeasible to ask people to rank 200 entries.
A
I'll quickly note that, although I doubt I've gotten nearly as deep into it as you have, I do know what a Condorcet method is; I actually know what you're talking about. We actually use OpaVote for a number of votes, including for the...
A
One challenge I guess I would go back to is this; let me propose something and focus. My opinion is that in the end, although data is very, very valuable, humans end up knowing context that none of the tools are ever going to be able to find. So I would suggest that in the end it not just be humans entering data, but humans making the decision; not one human, but humans, plural. I'll give you an example from Census I, which was a couple of years ago.
A
Our scoring method found that mail transfer agents were by far and away one of the most important things in the universe. Why? Because they were widely installed on Linux distros, they could be configured to be network accessible, and they directly processed data that potentially came from an attacker. They checked off all these boxes, but then you look and say: yes, but on most distros they're not configured that way.
A
Very few people run their own mail transfer agents today, and so the automated system would lead you to one conclusion, where a human would look and say: it's not bad to work on the MTAs, but this ntpd, which manages all the time and is also directly accessible, is way more important.
J
So
to
to
to
riff
on
that
very
quickly,
you
could
have
it
as
a
two
round.
I'm
sorry
go
down
this
rabbit
hole.
You
could
have
a
two
round
vote
first
round.
Is
software
agents
produce
an
order,
so
they
vote
amongst
themselves
to
create
an
initial
order.
That's
the
order
that
entries
get
displayed
to
humans
and
then
they
can
rearrange
it.
So
if
they
say
something
like
mta
shouldn't
be
number
one,
I
think
they
should
be
number
17.
They
move
it,
but
you
have
at
least
an
initial
ordering.
A
The good thing is that... oh, sorry, go ahead. As I was saying, my apologies, I'm trying to type, and I think it was Julia who mentioned it: how can we get more expertise involved? I think the challenge here is very much the following.
A
Most
experts
are
experts
in
one
area,
so
you
know
if
you're
into
ml,
you
can
riff
all
day
about
the
advantages
of
pie,
torch
and
you
know,
and
so
on,
whereas
in
other
areas,
you'll
you'll
riff
on
other
areas-
and
so
you
know
it's
going
to
be
a
challenge
to
get
involved
and
get
everyone
getting
enough.
Experts
involved.
F
Yeah, I definitely agree that it is a challenge, and I think this comes back to the conversation from our last meeting about categorization. I believe it was mentioned already that you can have experts in one subset, one category or another, and if you're able to appropriately segment out the data, then you can get that broad set of experts coming in to provide information and validation.
F
But it's definitely not an easy problem to solve, so I do recognize that.
J
I should have put my hand up, I'm sorry, there we go. You can be very fancy and use something like range voting. This is something that you might be able to get software experts to use; I think it's a terrible idea for general voting, even though its fans are very vociferous. I'm sorry, Ava, I hope we get to your point.
J
It's a slightly abstract way of voting and, as I said, the proponents make fabulous mathematical arguments that it solves Arrow's impossibility theorem. I'm skeptical, but that would be a possibility as well.
C
Yeah, so I think you covered it, David. We've got two problems here: one, how to find the experts and contact them; another, how to take their results and do something with them. Maybe we go ahead and put an item on next week's calendar to review any kind of proposals or ideas people have for these problems.
E
You
know
it's
critical
to
get
a
diverse
set
of
viewpoints
for
it
to
be
an
open
source
to
use
a
term
or
curated
kind
of
experience
where
you
know
we're
getting
all
the
input
from
you
know
the
community
at
whole
as
at
whole
as
a
whole,
and
you
know
the
folks
who
are
passionate
about
this
stuff
who
want
to
you
know,
see
progress
made
on
it
within
the
work
group
and
yeah
anything
that
we
can
do
to
engage
as
many
folks
as
possible
and
to
get
as
much
expertise
as
we
can
incorporate
it
into
it
and,
like
you
said
david,
you
know,
incorporate
both
the
data
side,
the
quantitative
side
with
the
qualitative
data,
and
I
I
think
we
can
definitely
work
something
a
little
bit
more
formalized
out
and
I
think
it'll
take
some
time.
A
Yeah, and I want to quickly add that I think we ought to capture not just what the projects are, but some sort of rationale on why they are important. Maybe that's a key; that's something we can work on together and create as a key input to voting.
K
There
is
one
thing
I
wanted
to
add
on
this
front,
which
is:
we
should
continue
to
do
the
automation
parts
of
the
bit
as
well,
so
the
stuff
we
are
doing
with
criticality
score
and
the
reason
for
this
is
we
can
actually
have
some
good
metrics
on
what's
critical
to
a
package
management
ecosystem.
So
just
to
give
you
a
log
4j
example,
right
in
december,
when
we
looked
at
the
number
of
packages
in
maven,
eight
percent
of
them
were
dependent
on
log
4g.
A
Yeah, I'm assuming that we're going to continue the criticality scoring; we're actually getting updated criticality score information, and we'll have the Harvard data in about a month, or less than a month; let's say a month. I think we need to try to quickly work out what we're going to do with that, and then try to turn that crank within the next few months, and we'll basically come up with answers.
A
All right, what do we think the most critical components are? Which is a perfect segue to the next topic, Alpha-Omega. Can we move on to that topic?
A
All right, I'm going to try to be quick here. For those of you who don't know, there's a project within the OpenSSF called Alpha-Omega. It's actually funded and approved by the governing board; Google and Microsoft have each put in two and a half million dollars. There are two parts to Alpha-Omega, and Alpha is, I think, really the key part here.
A
As far as this group is concerned, the Alpha side of Alpha-Omega's goal is to identify a few really critical open source software projects and fund focused work on them. As we talked about earlier (I'm sorry Ava's gone), once a project is picked, okay, we're going to work with project X: immediately interact with project X, talk to them, ask what they need and what the issues are, and then do a whole number of things to try to make that project as vulnerability-free as we can.
A
Do
security
audit
at
least
one
probably
multiples
once
that's
once
we've
identified
issues
work
to
get
them
all
fixed
work
to
not
just
fix
point
problems
but
help
the
project
update
its
processes,
so
things
like
getting
a
best
practices
badge
getting
a
high
score
card
value,
getting
a
high
salsa
level,
getting
it
synced
getting
a
signature
through
sig
store.
You
know,
generating
s-bombs
all
these
things
that
we
would
like
the
projects
that
we
most
depend
on
to
be
doing.
A
There's
no
expectation
that
the
alpha
side
will
be
able
to
do
this
with
every
project.
That's
not
reasonable,
so
they're
going
to
want
to
they
want.
They
want
a
short
list
that
they
and
and
frankly,
I'm
expecting
that
at
least
for
a
while
they're
not
going
to
do
every
single
project
that
we
identify,
but
they
will
probably
peel
off
a
small
subset
but
they're
going
to
want
the
they
want
that
list
to
appeal
from
and
I'm
sure
they
would
really
like.
E
Yeah, I just wanted to give a quick update, because Dave Stewart asked this in our last meeting, and I think it's important to note when we have wins. That initial list we did put together at the end of the year was in fact used to get, I think it was between two and four hundred, MFA tokens. Is that right, David?
A
At
least
200
I
am,
I
am
late
on
going
back
and
getting
the
numbers.
I
know
we
at
the
start.
We
had
over
200
invitations,
but
we
were
trying
to
do
this
before
the
end
of
the
year
and
after
a
while
it
became
shoot.
Let's
just
get
the
get
the
thing
done
by
deadline,
so
I
need
to
now
go
back
and
after
a
while
it
became
kind
of
of
a
whirlwind.
A
I think it was a fabulous success, every single one. Here's a Titan key, for example. Google gave 500 coupons, GitHub gave 500 coupons for YubiKeys, and we contacted all the projects. Well, we tried to contact them; some didn't respond, I mean, it was near the end of the year. Some responded and said: thank you very much, we're already using MFA tokens.
E
Yeah, and I think they weren't necessarily just handed out willy-nilly, either; they were given to core maintainers, or maintainers with higher-level access and whatnot. So I just want to point that out. I think that's definitely a big win, and we should celebrate the victories.
E
So
some
good
things
came
out
of
that
work
that
we
that
we
did
as
a
work
group,
and
so
I
just
want
to
thank
everybody
again
for
that
and
give
dave
stewart
that
update
that
you
know
we
were
able
to
actually
make
some
progress.
We
were
able
to
thank
you,
get
some
things
out
and
get
some
good
work
done
so
yeah
and
do.
A
Are they actually being used? I don't have that number in front of me. Here's a challenge: we did not want to invade anybody's privacy, so step one was making sure they got the things, and that they got them in time.
A
Are
they
being
used
get
we
we're
asked
in
two
different
ways:
we've
asked
the
projects
to
report
back
we're
going
to
have
to
nag
them
because
nobody's
reported
back
yet
and
github
now
githubs
gave
some
and
google
gave
some
in
the
case
of
github
the
way
that
they
gave
them
out.
They
can
actually
find
out
how
many
are
being
used.
A
My expectation is we're going to need to give them a little time; a number of folks did this at the end of the year. But GitHub knows which tokens are being used now. They're not going to give us the individual names; they're just going to give us raw aggregate numbers.
A
So I don't have the numbers yet, but we planned everything out to get those numbers.
B
Okay, thanks. It's only the 27th of January, so if you're a part-timer maintaining a small project, it could take a while. But anyway, thanks for the update.
A
Yeah
you're
really
welcome,
and
you
know
so
I
I
my
feeling
is
if
some
aren't
used,
but
many
are
it's
okay
I
mean
you
know
I
would
love
for
100
use,
but
trying
to
get
that
is
is
not
reasonable
and
you
know
I
expect
that
whole
bunch
of
them
will
be
be
used
because
there's,
why
bother
to
make
the
request?
A
I
will
also
concede
by
the
way
we
we
definitely
did
that
in
a
rush
I
mean
the
whole
thing
was
a
rush.
We
didn't
realize
that
the
google
coupons
were
going
to
expire
until
the
at
the
end
of
the
year
until
very
late,
so
we
hurried
on
the
process
to
identify
the
critical
projects
and
we
also
hurried
to
get
the
token
the
token
coupon
codes
and
validation
codes
out
it
was.
It
was
frankly
overwhelming.
I
didn't
you
know,
anybody
who
says
customer
service
is
easy
has
never
done
it.
A
So
there
was
incredible
amount
of
time
that
I
and
lots
of
other
folks
spent
interacting
with
the
projects,
because
we
didn't
want
to
just
give
them
out
to
random
people.
It
was
for
the
maintainers
of
either
those
projects
or
the
project
that
they
depended
on,
but
really
it
was
pretty
much
just
those
projects,
so
it
was
a
crazy.
B
I
mean
the
other.
The
other
you
know
factor
is
making
sure
that
anybody
who
has
that
level
of
access
I
mean
is
using
mfa.
In
other
words,
if
you
have
like
not
just
one
maintainer
for
the
small
project,
but
you
have
five
all
five
kind
of
need
to
be
using
it.
Otherwise,
it's
like
you
know
what
I'm
saying
anybody
would.
B
It's
slightly
better,
if
a
smaller,
if,
if
the
less
than
the
majority
you
know
use
it
it,
this
is
just
some
of
the
things
that
we're
playing
in
my
head
that
I
was
sort
of
like
okay.
We
can't
really
talk
about
that
now
because
we're
in
a
rush
to
get
this
list
done
and
get
these
things
out.
So
it's
like.
I
I'm
just
you
know,
I'm
a
simple-minded.
B
You
know
kind
of
person,
I'm
just
asking
simple
questions,
because
that
it
was
the
time
to
ask
the
questions
was
not
then,
but
I
appreciate
the
the
chance
to
revisit
it,
because
this
gives
us
a
sense
of
you
know
I
mean,
and
maybe
one
argument
is
well
any
every
little
bit
helps
a
little.
That's
not
bad!
A
You
know
I'm
going
to
push
back
a
little
bit,
although
you're
right,
you're
right
that
it
would
be
better
if
you
had
100
percent
there's
no
doubt
of
that
it
all.
It
does
depend
a
lot
on
what
the
attacker
is
trying
to
do.
If
an
attacker
is
really
focused
on
specifically
project
x,
then
yes,
the
view
would
be
anybody
who's,
who's.
A
weak
point
is
a
weak
point.
A
I
think
for
a
lot
of
attackers,
though
they're
just
interested
in
attacking
anybody
they're
not
targeting
just
one
project,
and
so
from
that
point
of
view,
every
single
person
we
provide
stronger
defenses
to
means
there's
one
less
in
the
armor
as
it
were,
so
I
mean
you're
you're,
absolutely
right,
100
would
be
best,
but
I'm
going
to
take
every
single
one
as
a
win,
especially
since
for
a
lot
of
these
attackers
they're
just
kind
of
I
won't
say
spray
and
pray
but
they're,
you
know
they're
just
looking
for
the
easy
for
the
easy
attacks,
and
so
you
know
getting
rid
of
of
one
of
those
avenues
is
a
great
thing.
A
I have no commitments right now from Google that they'll have more tokens. GitHub's coupons didn't expire, so we actually have a number of validation codes left that we can still use for existing projects, but it has to be YubiKeys with GitHub. We're hoping to talk Google into more, but I haven't pressed too hard, because, to be honest, by the end of the calendar year I was exhausted.
A
We
haven't,
we
could
I
mean
that's
something
to
be
brought
up
in
the
mfa.
I
was
hoping
to
talk
quietly
talk
google
into
getting
some
more
so
that
we
could
have
you
know
more
to
distribute
also
to
be
fair,
I
think
we
would
love
to
have
the
updated
critical
projects
list,
and
so
I.
A
I
I
understand,
but
but
the
thing
is,
it
might
make
sense
for
us
to
have
a
new
mfa
drive
when
we
have
a
new,
updated
list.
C
Yeah
just
wondering
like
just
trying
to
figure
out
like
what
level
of
a
milestone
should
we
set
for
our
our
group
like
like
how
much
quality
do
we
want
to
add
to
our
our
identified
list
before
we
say
this
is
a
milestone
and
we
can
go
ahead
and
do
a
second
round
of
mfa,
like
what
kind
of
feeling
should
we
be
going
for
when
we,
when
we
work
on
an
updated
list
as
far
as
effort
versus
quality
trade-off,
you
know.
A
I
I
suspect
that,
if
we're
going
to
have
a
formal
voting
system,
we're
not
going
to
want
to
have
re-votes
and
re-votes
and
re-votes
we're
going
to
want
to
have
a
vote
and
done
so
I'm
I'm
expecting
you.
Things
can
always
be
improved,
but
I'm
expecting
that
we're
gonna
have
some
sort
of
release
of
this
is
our
best
estimate
at
this
date
and
then
we
can
re-trigger
the
mfa
process
at
that
after
that.
A
Let's get more expertise. Dave Stewart, and I think CRob, had a very good point about process improvements: making sure we really can get more asynchronous, with more time and less rush.
A
Right, I would call the current version we have out there a 0.5; it's a draft version. I'm grateful for the draft version because it was acted on. It wasn't just "hey, we made a spreadsheet": with that spreadsheet, we turned around and interacted with something like 200 people.
H
...the MFA project, but I found the GitHub page for it. I guess I do have a follow-up question: it sounds like you're looking to secure more tokens. Is there appetite, I guess, to get donations per se from other enterprises?
A
You know, I don't think anybody has any particular limits on who we'll take resources from. I guess if there were some sort of blood money or something; but short of some morally objectionable source, I think we'll be delighted.
H
Yeah-
and
I
was
looking
particularly
in
the
mfa
project
like
it
seems
like
a
very
cool
thing,
a
really
awesome
thing,
that's
also
quite
tangible,
as
well
so
yeah
and.
A
I
think
that's,
I
think,
that's
the
thing
that
really
gripped
a
lot
of
folks
is
that,
yes,
it
was
done
in
a
hurry.
Yes,
I'm
sure
things
could
have
been
done
better,
but
my
gosh,
we
went
from
nothing
to
over
200
people
getting
mfa
tokens.
We
expect
them.
I
don't
know
for
sure
I
expect
the
vast
majority
will
be
in
use,
and
that
means
that
suddenly
attackers
have
found
200
fewer
pla,
a
little
less
than
200
fewer
places
to
easily
get
into
a
project
just
through
a
password.
A
This
is
a
known
attack.
I
mean
projects
are
already
been
attacked
this
way.
So
that's
an
awesome,
awesome
thing.
So
my
thanks
to
everybody.
E
The whole idea of focusing on selection criteria, the why, the support or the evidence of why a project belongs on a list like that, and having multiple data points to support why a project should be on the list versus why it shouldn't; and, of course, having that kind of community curation, where we talk as a group: "well, have you thought about this or that?" Overall, I think it was great.
E
It
was
a
good
experience
and
a
lot
of
good
things
came
out
of
it
and
kind
of
like
what
you
were
saying:
jeff,
we'll
just
keep
iterating
making
it
better,
and
I
definitely
don't
think
it's
gonna
happen
in
a
month
or
in
you
know,
a
couple
meetings,
but
you
know
the
idea
that
we're
constantly
kind
of
iterating
and
and
making
it
better
and
taking
into
account
everyone's
thoughts
and
ideas.
I
think
we'll
just
continue
to
to
improve
and
and
yeah
again.
Hopefully
more
action
will
be
taken
as
a
result.
C
All right, I wanted to move on to the quick update on criticality score. Just FYI, one of my co-workers, Caleb Brown, is going to be working on criticality score and taking a closer look. Caleb's not in a time zone that's suitable for this meeting, so he won't be able to provide updates or take feedback here, but I will relay anything that needs to be done live for him. There are no updates right now.
C
But,
of
course
the
this
working
groups
channel
on
slack
is
a
great
place
to
chat
about
criticality
score
with
caleb
or,
as
usual.
Github
issues
seems
like
we
have
feedback
on.
You
know.
Where
do
I
send
my
criticisms
or
questions
or
feedback
about
the
the
whole
project
and
I'll
make
sure
to
you
know,
of
course,
any
of
those
avenues
are
fine
but
I'll
make
sure
to
let
caleb
know
that
that's
a
like
a
community
request.
C
Maybe
he
wants
to
do
something
more
formal
for
a
way
to
provide
that
specific
kind
of
feedback.
Any
other
questions
here.
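[Editor's note] For readers unfamiliar with the criticality score discussed here, the sketch below shows the aggregation formula published in the ossf/criticality_score project's design: each signal is normalized by a log ratio against a threshold, then combined as a weighted average. The specific signal names, weights, and thresholds used in this example are illustrative assumptions, not the project's actual defaults, and this does not describe the changes Caleb is planning.

```python
import math

def criticality_score(signals):
    """Weighted aggregate of project signals:
    C = (1 / sum(alpha_i)) * sum(alpha_i * log(1 + S_i) / log(1 + max(S_i, T_i)))
    where each signal is a (weight alpha, raw value S, threshold T) tuple.
    Each term is capped at 1, so the final score lands in [0, 1]."""
    total_weight = sum(alpha for alpha, _, _ in signals)
    weighted = sum(
        alpha * math.log(1 + value) / math.log(1 + max(value, threshold))
        for alpha, value, threshold in signals
    )
    return weighted / total_weight

# Hypothetical signals: (weight, raw value, threshold)
example = [
    (1.0, 8, 120),    # e.g. months since project creation
    (2.0, 40, 5000),  # e.g. contributor count
    (0.5, 300, 26),   # e.g. commit frequency; exceeds threshold, so its term is 1
]
print(round(criticality_score(example), 4))  # 0.5229
```

The log ratio makes each signal saturate at its threshold, so no single metric (say, a huge raw commit count) can dominate the score; the weights then encode the relative importance of the signals against each other.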
A
Async communication is good. I'd also like to add that, although it's not a mandate, some projects do shift their meeting time around, so it's not one fixed time but rotates. The governing board, I think, starts meetings at 6 a.m. Pacific time, because that works out for the US east coast and for China, and I think it's not too hideous for Europe either. So if we have more than one person whom that impacts, I think we should consider it. I would love to have more participation, and if I have to show up at an occasional 2 a.m. meeting, I'll hate it, but I'll do it anyway.
C
Yeah. I don't know if I would suggest moving this meeting just for that one project. But if the project is at the point where it needs its own meeting, then of course that would be good, and a different time could be where it needs to be.
A
Yeah, I propose that the update on the criticality score will continue at our next gathering.
K
So I would say this is the exact bit that we're trying to fix. It's a known limitation, and that's why we're trying to gather data from the package manager ecosystems on what's most depended on or used. But obviously that can't catch everything; we'll need some manual case studies too, a mix of both. So I totally agree with you, David, but we're working to fix it.
A
Right, and this is the challenge for the Harvard folks as well: where's your "top"? What they're doing is using SCA vendor analysis of actual applications to get their top, that is, what the applications are using, and then tracing down the dependencies. They're not analyzing activity at all. So from that perspective, right now the two approaches mesh pretty nicely. But yeah, if we can make things better, I'm all for it.
C
I had one last quick question before we end the meeting. Amir, I don't see you adding anything on the agenda for working on the spreadsheet, and I know there's work to be done. Should we be time-boxing time in these meetings to do spreadsheet work, or what do you think?
E
I think that's a good question; I was thinking about that too. One of the first ideas that came to mind was doing it maybe every other week. So the weeks that I'm facilitating could be working sessions, and the alternate sessions that you facilitate could be more like the normal sessions we have: updating and figuring out new things to do and improve. Just a thought; I'd love to get the working group's opinion.
A
Well, I would propose that in general we should, but for the next meeting I think we have a more fundamental question: how should we create the newer, updated list of critical open source software? I had originally thought that we would just take our existing spreadsheet and the Harvard data and basically improve it: have group discussions and improve the spreadsheet based on the Harvard data and updated criticality scores.
E
Okay, so how about we do that at the next meeting. Julia and Josh, what do you think? I don't want to set any deadlines or anything, but do you think maybe next week would be okay to discuss some of your research and talk about what you find? We could dedicate that session to setting up what we're looking to set up.
J
It's Chauncey. In my case, we're about to ship a pretty big RFC to RubyGems about gem signing, so that's probably going to be my focus for the next week or so.
E
So we'll do our best. And yeah, I think maybe for now we could try that, Jeff: just have every other session be a more intensive working session, and if it doesn't seem like it's working, or it turns out it needs dedicated time, we can adjust.
A
Yeah, I think we will need to have working time and to sit down, but I think right now the primary goal is that we're going to need to figure out how we're going to create that next tranche. If you want, I can try to quickly write down what I thought we were going to do, which was basically to continue on what we've done with our draft and make it better. But if that's not right, we can revisit. Thanks so much.
E
I'm always for building on what has already been done. I mean, a lot of work went into that already, and there was a lot of work prior to that which went into the first round. So let's start with that in two weeks' time.
E
Yeah, I propose continuing to use it, because we put a lot of time into that. I don't see why we need to start from scratch, but if there's a better way, I'm all ears.
C
Okay, that works. All right, we're out of time. Let's chat on Slack before the next meeting and make sure we have what we're going to do figured out.