From YouTube: Securing Critical Projects WG (March 11, 2021)
B
Thank you. Hi folks, thanks for having us join your meeting today. I'm here with my colleague Tara, who's on the call as well, and we'll be talking to you a bit about OTF.
B
I think we both got connected originally through David (hi David, I see you on the call; looks nice where you are). Just to touch base a little and talk about what OTF does, for those who are not familiar with OTF: we'll be going over that, the types of projects that we fund, and the types of work that we do. And Tara has prepared a presentation for us, which I'm very excited to share with you all.
B
So sorry, I don't know if you want to share your screen for it, or just kick us off for OTF.
C
Sure, yeah, hello everyone. Let me just figure out how to share.
C
I hope this works. Yeah, is it working for everyone?
C
Great. So, I'm a program manager at the Open Technology Fund. The Open Technology Fund is a private nonprofit organization. We work to advance internet freedom in repressive environments by supporting the research, development, implementation and maintenance of technologies that provide secure and uncensored access to the internet.
C
We do this by providing both direct funding to projects and support services to individuals and organizations around the world who are working on addressing threats to internet freedom, journalism and human rights, usually with technology-backed solutions. As I said earlier, we're a private nonprofit organization funded by the US government through the U.S. Agency for Global Media. We have several funds and labs: funds are how we provide direct support to projects, and labs are where we provide services through some of our partners and vendors.
C
We also have a Rapid Response Fund for digital emergencies, and our new Technology at Scale Fund, which funds taking proven, reliable technologies and implementing them at scale to help people access the internet in places where it's censored.
C
We also have a fellowship, the Information Controls Fellowship, that does research mainly on information controls: finding ways of better understanding how information gets controlled online, and ways of getting around that in terms of circumventing censorship.
C
In terms of labs, we support Localization Lab, an independent organization in its own right that works on localizing open source projects in the internet freedom space. We have the Red Team Lab, which I'll talk about soon. We have the Usability Lab, which provides usability services to open source projects supporting internet freedom, and the Learning Lab, which helps projects with their outreach and with highlighting their work.
C
We also have the Engineering Lab and the Legal Lab, which we can explain further if folks want more information. Some of you who might be familiar with us from before might notice that we didn't mention the Core Infrastructure Fund. The Core Infrastructure Fund was one of OTF's funds, created to support the building blocks of the internet: the technology that most digital security, circumvention and other applications on the internet require to function safely and securely.
C
Because that work goes towards securing internet freedom and improving the overall health of the internet, we launched this fund to help support these projects.
C
However, recently we folded the Core Infrastructure Fund into the larger Internet Freedom Fund, just as a way of simplifying our intake process, because all those different funds on our website were confusing to people who didn't know where exactly to apply. We're doing that sorting internally now and having everyone apply to the Internet Freedom Fund, but core infrastructure still remains a focus area for OTF, and we'll probably soon have a part of our website
C
where we highlight our core infrastructure work and encourage people to apply. I chose a sample of projects that I thought might be relevant to the work of the OpenSSF working group. Essentially, the first project I want to highlight: we supported the implementation of QNAME minimization, helping make DNS more privacy-focused.
C
We also helped support Reproducible Builds, which is an important project for making sure that we can actually verify that open source projects are built reproducibly. And we supported some research on 5G and human rights: an analysis of the human rights implications of 5G technologies.
C
We also support the implementation of ESNI for OpenSSL, encrypting the Server Name Indication, which has great value, at least for stopping censors from using the SNI for censorship. To say a bit more about ESNI and QNAME minimization: I feel that one area where OTF's funding is really well suited is perhaps not the setting of privacy standards in places like the IETF, but what happens once those improvements to privacy standards have been made.
C
Our funding has been really good at taking those improvements and implementing them in the libraries and tools that people rely on the most. We also helped support the FORT RPKI validator, which is the only RPKI validator in Latin America.
C
We also helped support Certbot, a tool by the EFF that helps people install Let's Encrypt certificates and other ACME certificates. Those are the projects I chose in particular to highlight, but I also wanted to mention these groups of projects, because they are the core of our work. We support several privacy tools, including Tor, Signal, WireGuard, Tails and Mailvelope.
C
We also support internet measurement tools; those are really important for helping us understand the scope of censorship and surveillance as it happens. OONI and M-Lab, for example, are some of the projects supported there. And we support censorship circumvention tools: we have several that we're supporting right now, including Geneva, which uses AI to come up with new censorship circumvention techniques, and Lantern and Psiphon as well. But we don't only support the development of these tools.
C
We also support their implementation and deployment, making sure that the tools are developed around the needs of the people who need them most, that they're developed in a diverse and localized way, and strengthening the feedback loop between toolmakers and the people who are actually relying on these technologies and either deploying them, training on them, or localizing them.
C
So we put a lot of emphasis on our community of people who are doing all this different work, and on bringing them together. I'm going to hand it over now to talk a bit more about that.
B
Thank you. Sorry, would you mind going back to the slide where all the different funds are listed? Just to wrap up this section: our portfolio, or areas of work, are pretty broad in what we cover. Sorry, I've listed a lot of different components, but they all fit under this idea of internet freedom.
B
This idea of anti-censorship work, anti-surveillance work, privacy and encryption, working on internet shutdowns and all that. It really goes anywhere from supporting tech development and incubating certain projects, to supporting research, to supporting convenings: everything that falls in the ecosystem. So we're not just injecting money into tech development, but into every component
B
that contributes to internet freedom generally, and to the efforts we're trying to push forward in terms of anti-censorship and anti-surveillance work. I remember, when I started at OTF, or before starting at OTF, any time I looked at the website it felt a bit overwhelming, because there were so many different options in the ecosystem, from a rapid response fund
B
that was very useful, for example, with what was happening recently, or what is currently happening, in Myanmar. The Rapid Response Fund supports situations like that, when there's a crisis in the world around either censorship or internet shutdowns. Then there are things like Localization Lab, things like the Engineering Lab, and more research-oriented projects. So there's a broad range of things, but all still under this umbrella of internet freedom, anti-censorship and anti-surveillance work.
B
That's actually exactly what I wanted to, thank you for reminding me, because that was the thing I was trying to remember to talk about: what's different about OTF versus other funders in this tech and human rights field. If we're thinking about Ford, for example, OTF is pretty much open to anyone and everyone, at almost any stage of development, and for almost any kind of financial request.
B
Usually the bigger funders support bigger projects, so they're not really giving grants under, say, fifty thousand or a hundred thousand dollars. But the way OTF was designed and conceptualized initially was around the idea of being able to fund, and give enough money to, anyone who had any idea, no matter where in the conceptual stage it was, to develop what they wanted to develop and work on what they wanted to work on.
B
So whether someone is applying to us for a thousand dollars or a hundred thousand dollars, it's pretty much accessible and possible for anyone; there's no limit or restriction on how much funding, or what minimum amount of funding, you can apply for. Basically, if you head over to our website and end up on the Internet Freedom Fund page, there's an "apply to this fund" button: you just go there, submit your information and describe your project, and that's essentially the pathway into receiving funding.
B
We receive the proposal on the back end. We have a team of program managers that reviews it initially to figure out whether it falls under our remit, because we do get a lot of applications from, say, people who want to do human rights documentation, and we don't really fund human rights documentation work; that's not the type of thing we fund. So the first initial screening question is: does this fit under the type of work that we fund and support?
B
If yes, then it moves on to our review process. That involves the entire program manager team reviewing the application, which then goes to our advisory council (the advisory council is also listed on our website), so they review it as well, and then, mostly based on consensus, we decide what moves forward and what doesn't in terms of receiving funding.
B
It was designed from the start to be an incredibly straightforward and easy process, one that really anyone can apply to, without a lot of the restrictions that other funders usually have around this type of work.
C
Sure. So, if you look at the link that I... well, maybe I should share a link to it. We do have a guide where we talk about our criteria for each question and why we ask it. We've currently updated the one for the concept note stage. I'm not sure if you're all aware, but our funding was paused for a while recently; luckily we've been able to come back and relaunch, and we actually just relaunched the fund earlier this week.
C
So we're working on updating those guidances, but I would say that in general what we look for is: what are you trying to do? Where are you trying to do it? And how do you understand that problem? We're looking for problems that come from the concerns of the people who are, let's say, on the front line, or part of the communities where the tool is going to be implemented or used. Obviously, for things that are more about infrastructure or the internet itself,
C
we can't always ask how this will help someone in particular in, say, Myanmar or Colombia. But if you can make the general case for the technology, and if you can point, for example, to tools that we also rely on ("if we fix this part of the internet, Tor will run better", just to give a really simple example), that's what we look for.
C
We need to have strong justification to understand that this thing will help. Our application process is a two-stage process. In the first stage, we just validate the idea itself: make sure that it's a good idea, that it's responsive to the problem, and that it's being done by people who know what they're doing and understand the context they're going to deploy it in.
C
In the second stage, once we've read the concept note and think, okay, this is a good idea, we ask for a full proposal, and in that full proposal we look at more details. For example, if it's a technology development project: has usability been considered? Does the project have good usability practices?
C
Do they need the usability support that we can provide through the Usability Lab, for example? We also look at whether this is the right tool for that context: if it's an anti-censorship tool, for example, have they considered how the censor might learn about the tool, make it less effective, or block it? We also look at sustainability.
C
Mainly it's sustainability, and we make sure that it's not a duplicative effort, because our funding in general is not that large. Well, there are cases where you'd say we need more different versions of a tool because that's useful for a healthy ecosystem,
C
but that on its own is not enough for us to say we can dedicate this much money to having different tools. We need the strongest case for why this new tool is needed: what gap does it fill, or how does it do things differently from similar tools out there? I would say that's it.
C
I mean, the most important thing is that it has to serve people in repressive contexts, because we're one of the only funds dedicated to serving what we call the majority world, or the global south.
C
So it's important for us to make sure that the tools we support can be useful everywhere, but are particularly useful in those contexts, just because most of the people that we support do not usually have access to other funders.
B
Yeah, and to that point specifically: we're maybe one of the only funders that fund work at the intersection of tech and human rights specifically, work that has direct application to areas in the world where there are some serious human rights and digital rights concerns.
B
So a lot of people will apply to us with a really great idea, one that genuinely should receive funding, but that has no direct application on the human rights angle. Unfortunately, even though it's probably a great idea and a type of technology that needs to be developed, that's the type of thing we cannot and should not support, because our funding is so limited and there are so few opportunities at the intersection of tech and human rights.
B
Any other questions here? Otherwise, I just have five minutes about the Red Team Lab; we can do questions after that as well, if that's helpful.
B
Okay, let's move on to the Red Team Lab. I'm focusing on this one specifically because it's about security audits. Generally, how it works: as you know, a lot of open source projects, and specifically a lot of open source projects developed and designed for the use cases we're talking about, usually have a lot of difficulty finding funding, and usually cannot afford security audits for their code. And for us at OTF,
B
we definitely have a responsibility to put a stamp of approval on what we fund, and to ensure that once we're funding a technology and putting it into the market, we're also ensuring the security of the application that's been developed.
B
So that's how the Red Team Lab was originally conceptualized and created. We essentially partner with a few different mainstream corporate security companies, such as Cure53, Subgraph, NCC Group, Include Security and Radically Open Security. We've been working with them since 2016, and we're about to enter a new round of partner proposal applications, but essentially for the past five years,
B
those have been the companies providing the services, for technologies either being incubated by OTF or for people applying to OTF for security audits. The way it normally... yeah, oh hi, Jennifer... the way it normally works is (could you move on to the next slide, Tara?): there are a few different ways people can get funding through the Red Team Lab. Either it's directly an OTF project, say an application
B
that comes in that way; so that's the first way it happens. The second way is that someone applies directly to the Red Team Lab through our website.
B
That might be someone who hasn't received OTF funding in the past, but who does need support for a security audit, where the application or technology they have still falls within our remit and what we do; they can apply directly through our website for the services these partners provide. Or sometimes we have what we call an adversarial audit, for lack of a better word. It's essentially someone reaching out to us and saying: hey,
B
I've come across this piece of technology, or this application, and I have some serious concerns about it and how it's being used, essentially to abuse human rights or to harm people, something in that context.
B
Could you look into it? Do you have someone who could look into it, essentially reverse engineer the application and figure out what type of data it's collecting, how that data is being used, and so on? It's not a very common one (number three in this list), but when it happens, it definitely leads to some very interesting things.
B
If you could move on to the next slide, please. One thing we worked on last year, for example, was working with one of the partners, Cure53, to reverse engineer an app that had been used in China and that was marketed as a "learn about the history of China" app.
B
It had been pushed quite aggressively by the Chinese government to everyone; journalists entering the country had to download the app, and there was a lot of social pressure around having the app and using it.
B
It was all about learning the history of the country, the government, the party and all of that. So someone reached out to us with some serious concerns about it, because they wanted to know a bit more about what was happening behind the scenes, and we looked under the hood.
B
So that's what came out of that. We've also partnered with Human Rights Watch on some of the work they've done looking at the Uyghur concentration camps and the apps used by the police in China, essentially to gather information on the Uyghur population, and the type of data they've been receiving.
B
That was a really big Human Rights Watch report that came out, I believe, in 2019. So that's what falls under this umbrella of adversarial audits. It's essentially people coming to us saying: "hey, I came across this thing", "hey, I'm maybe a whistleblower at this company", "hey, this really concerns me".
B
"Could you have someone look into this? Because I think there are some very serious human rights concerns at play here." So that's an area we're exploring and developing a bit more.
B
But apart from that, historically, what the Red Team Lab has been is this: any OTF-funded technology or application will get security audits, or it's an open application for whoever wants to reach out to us and request a security audit. We'll get in touch with folks like Jennifer at NCC Group, or any of the other companies, match them with the applicant, and OTF will essentially pay for the security audit. OTF covers the cost, so that the burden is not on the applicant. And that's an overview of the Red Team Lab.
B
I figured it might be interesting, considering the group that's here today. So that's the big umbrella of OTF and the type of work that we do, specifically the Red Team Lab, and, you know, the Internet Freedom Fund and whatnot. That was the point of our talk: to give you all an overview of the type of work we do and what we look at. We're happy to answer any questions, and happy to hear
B
any thoughts or ideas about collaboration and how we could work together. I said this to David the last time we chatted, but essentially, if you come across any organizations, groups or developers working on something that you think falls under our remit, please send them our way and get in touch with us.
B
I'm happy to share my email, and I believe Tara is as well, but get in touch with us and let us know about it; this is the type of thing we'll essentially take a look at.
B
Yeah, I would love to talk more about that. We've done some work with some Google engineers before, so I'm happy to...
C
I just want to go back to David's original question: if you find any tool or project that needs support and would fall within our area of focus and mission, the best option would be either to give them one of our contacts to talk to, or even just to let them apply directly.
A
Okay, Kaylea, did I say your name right? You're up.
A
Let me pull things up here. Do you want to give a quick intro as well?
H
Can I just pull those up... eh, you know what, I'm just going to pull up a slightly old version of this deck. I want to apologize, because this is coming to you fresh from a research presentation that I did earlier today, so the slides are not totally tailored to this audience, and I apologize for that.
H
Okay, can everybody see that okay? Yep, there we go. Awesome. Okay, so hello, OpenSSF working group. This was, as I was saying, freshly presented at the 2021 IEEE International Conference on Software Analysis, Evolution and Re-engineering, which they call SANER. I'm Kaylea Champion, and the project I'm presenting was conducted together with Benjamin Mako Hill; we also have Aaron Shaw here. We're all part of the Community Data Science Collective. This is a screenshot from our recent retreat, showing some of the cast of characters in the Community Data Science Collective.
H
The Collective brings together researchers from the University of Washington, which is where I am, as well as various other places in academia and industry; we've got people from Northwestern, Carleton, Purdue and NC State. We study the dynamics of online communities, including open source software production communities, but we also study environments like Wikipedia, Reddit, citizen science and Tor, and topics circulating around the production of software, as I mentioned, but also online governance, moderation, cooperation (how people collaborate to build amazing things), privacy, learning, activism, inequality, how newcomers get involved, and so on.
H
All right. I also want to express some appreciation for the Ford/Sloan Digital Infrastructure Initiative, which made the project I'm going to present to you possible. This is actually the third in a series of projects that I've done. The first was connecting some prior work within Wikipedia; the second was a literature review in which I went through thousands of articles about software quality; and then finally this third project, which I'm going to tell you a little bit more about today, kind of puts all of that together.
H
I think that, along with many people on this call, we're all singing from a similar songbook, in terms of having been really inspired by the Heartbleed vulnerability in 2014 and the sort of tragedy of OpenSSL. That event raised a lot of concerns that there may be other vital pieces of software out there acting as infrastructure but similarly neglected.
H
This perspective of software as infrastructure is laid out in Nadia Eghbal's report "Roads and Bridges", and Heartbleed is kind of the poster child for it. OpenSSL is one of many free/libre and open source software, or FLOSS, projects that were developed by volunteers. I know this group is quite familiar with this phenomenon, but I want to point specifically to the fact that in FLOSS, the innovation is often produced by volunteers who are selecting their own tasks and priorities.
H
This can be an advantage, because individuals very efficiently match their interests, their skills and their new ideas. They can work together over networks and use version control tools to combine all those contributions, but that efficiency gain may come with a certain cost as FLOSS grows in popularity and becomes infrastructure.
H
So how do you prioritize among the many crumbling roads and bridges to choose where to focus your attention? This project is part of our answer to that challenge, and our insight is derived from a basic expectation we might have for infrastructure: that the most important components of software infrastructure, those things we rely on the most, are perhaps also the ones that are the highest quality. That's maybe a basic expectation, but how do you apply it in a virtual context? How do you detect a crack in virtual cement?
H
Here we have, yeah, one case: where quality is relatively high and importance is perhaps relatively low, we would call that overproduction.
H
You could use security alerts, best-practices metrics, satisfaction, and also alternate measures of importance from the one I'll present to you today: you might look at where things are used, how they're used, the nature of the dependencies between them, presence in containers, and so on. All right. So this is the case that we applied our method to: the Debian project. This is a valuable site for study in its own right, since Debian is the backbone of Linux web serving worldwide, so it gives a very broad view.
H
We extracted data from Debian's public repositories for this analysis, including the bug tracking and release databases. For our measure of quality, we used speed of problem resolution, which we extracted from bug reports, and I'm happy to talk in more detail about the statistical handling that we applied to this measure of quality.
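That quality proxy can be sketched roughly as follows. This is a toy illustration with hypothetical bug records, not the paper's actual statistical model (which treats resolution time far more carefully): compute per-package time-to-resolution from each bug's opened and closed dates, then summarize with the median.

```python
from datetime import datetime
from statistics import median

# Hypothetical bug records: (package, opened, closed). Real data would come
# from a bug tracker such as Debian's BTS.
bugs = [
    ("openssl", "2020-01-01", "2020-01-10"),
    ("openssl", "2020-02-01", "2020-02-03"),
    ("gnome-power-manager", "2019-05-01", "2020-05-01"),
]

def days_between(opened, closed):
    fmt = "%Y-%m-%d"
    return (datetime.strptime(closed, fmt) - datetime.strptime(opened, fmt)).days

# Group resolution times by package.
resolution = {}
for pkg, opened, closed in bugs:
    resolution.setdefault(pkg, []).append(days_between(opened, closed))

# Lower median resolution time means higher "quality" under this proxy.
quality = {pkg: median(times) for pkg, times in resolution.items()}
```

A real analysis would also have to handle still-open (censored) bugs and differences in bug severity, which is part of the statistical handling mentioned above.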
H
We then identified our measure of importance: in our case, we used Debian's Popularity Contest survey, which gives us a view into how packages are being used, from an opt-in survey. There are about 200,000 systems in the snapshot that we took. This is from the home page of popcon, and what I kind of love about it is that it's like the history of basically the entire internet, 2004 to 2021.
H
We see the rise and the fall of i386, and the rise of amd64. So in this data we have, I think, an interesting snapshot, but it is not everything.
H
So, in step four here, we have our approach to this question of how to relate quality and importance. This is a rank-ordering approach; it's just the intuitive sense that whatever's top in one should be top in the other: number one in quality should be number one in importance, and so on down. Where you see big gaps between quality and importance, that's where you become concerned about the gap.
H
In step five, our last step, we test for deviations: where those rank orderings of quality and importance are diverging substantially, that's where you can characterize a package as misaligned, either underproduced or overproduced. And what we found is that underproduction is significantly widespread in Debian.
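The rank-comparison idea described above can be sketched like this. All package names and scores here are hypothetical, and the real analysis tests the gap statistically with credible intervals rather than using a raw threshold: rank packages by quality and by importance, then flag packages whose importance rank runs well ahead of their quality rank.

```python
# Toy data: higher quality score = better; importance = e.g. install count.
quality = {"pkg-a": 0.9, "pkg-b": 0.4, "pkg-c": 0.7}
importance = {"pkg-a": 120, "pkg-b": 5000, "pkg-c": 300}

def ranks(scores):
    # Rank 1 = best (highest score).
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {pkg: i + 1 for i, pkg in enumerate(ordered)}

q_rank = ranks(quality)
i_rank = ranks(importance)

# Positive gap: the package is more important than its quality rank warrants,
# i.e. a candidate for underproduction. Negative gap suggests overproduction.
gap = {pkg: q_rank[pkg] - i_rank[pkg] for pkg in quality}
underproduced = [pkg for pkg, g in gap.items() if g > 0]
```

Here pkg-b (widely installed but slow to fix bugs) comes out underproduced, while pkg-a (high quality, few installs) leans overproduced.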
H
This color-smear slide gives you one vertical line per package, showing the 95% credible interval for the distribution of the underproduction of each package. On the left here are the aligned packages. Some of these may indeed be somewhat under- or overproduced, but, as you can see, their intervals cross zero, so we're going to be conservative and not worry about those so much.
H
On the right side we see the overproduced packages in green, and then a very large number of underproduced packages in blue: about 4,000 of the 22,000 packages. This is where more attention and more investment are needed. All right, so that was our five-part method as we applied it to Debian, and I'm going to dig into that data just a little bit more. I won't spend too much time here.
H
This is a heat map of the same data you saw in the color-smear slide, and it shows us again where those underproduced packages live: a very high level of installation but low quality. And this one is where we name names just a little bit; these are the packages within Debian that had the most serious underproduction issues. A couple of things stand out here: GNOME Power Manager sticks out at the very top, consistently, as the most troubled package that we detected in Debian.
H
Overall, these results suggest to us that there's a long tail of GUI underproduction in Debian. Those of us who have been using Linux since the beginning are maybe not shocked by this result, but it suggests that it's a persistent issue inside this ecosystem.
H
In the talk I gave at the conference, I also said a little more about the implications for software engineering research. I'm going to skip forward a little bit here, past inviting folks to collaborate, to some implications for practitioners: encouraging people to think this way about the products in their own product line and toolchain, to think about their dependencies on FLOSS infrastructure, and perhaps to make some investments.
H
For me, the next steps with this project are to keep thinking about the social processes that drive underproduction. Where does it come from? How does it happen? How can we mitigate or counter it? Then I need to identify new targets for study, and I need to seek new funding. So to the degree that you can share your feedback with me, that would be wonderful, and please think about your networks with respect to sources of data and environments, or opportunities to seek further support.
E
By the way, this is awesome. I know a lot of this was in the paper, but it really helped to hear it from you. One question I had on the methodology: underproduction or overproduction is correlated with bugs, with how long a bug remains open, right? So, do all the open source projects that we depend on, openssl, I'm trying to remember, actually have bug tracking systems?
E
Maybe that's a stupid question, but I wonder, because it seems like a few of them may not have very well-developed processes there. And maybe that's not a question you can answer, Kaylea; maybe that's just in my head. But if there are others like that, there may be some other underproduction here that the methodology, which may need some refinement, would miss.
H
Yeah, bug tracking is definitely not a practice that all projects have adopted. Our approach here has been to tackle those where we can take it as a measure of quality, but there are other measures of quality that might be used instead of bug tracking.
A
Yeah, this is really interesting. I haven't read your paper yet, but thanks for presenting. We've been looking at a few different ways inside the OpenSSF to figure out how we define a critical open source project. We have one project called the criticality score, which goes through and looks at different heuristics.
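As a rough sketch of how such a heuristic score combines signals, the criticality score project describes a weighted, log-scaled aggregation of project signals, each saturating at a threshold. The signal names, weights, and thresholds below are invented for illustration and are not the project's actual configuration:

```python
import math

def criticality(signals, weights, thresholds):
    """Weighted log-scaled combination of project signals, in the style
    of the OpenSSF criticality score; the result lands in [0, 1]."""
    score = 0.0
    for name, value in signals.items():
        alpha, cap = weights[name], thresholds[name]
        # Each signal contributes log-proportionally until it hits its cap.
        score += alpha * math.log(1 + value) / math.log(1 + max(value, cap))
    return score / sum(weights.values())

# Hypothetical signals, weights, and saturation thresholds.
example_weights = {"contributors": 2.0, "commit_frequency": 1.0, "dependents": 2.0}
example_thresholds = {"contributors": 5000, "commit_frequency": 1000, "dependents": 500000}
```

A project at or above every threshold scores 1.0; a project with no activity on any signal scores 0.0, with everything else falling in between.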
A
I can send you a link or post it in the notes. We've also had conversations with the Harvard folks, and I see Jenny on the call, who do the Census II work. So for me, from my side, it's about doing a better job at identifying which projects we consider critical, and then figuring out the best ways to help with remediation.
A
One thing that needs a little bit of tuning, maybe in the criticality score project, and I haven't looked lately, is what you mentioned: the brittle projects underneath. Those are the ones I think are the scariest. We know large open source projects have lots of people contributing; they're going to be popular and rank high on the list. But what about those projects that people just aren't thinking about?
F
This is not so much a question for her; maybe it's really a question for this group, Kim, kind of a follow-on: we need to start looking at and collecting these different sources to help identify which ones we need to go after and work on improving, and then take steps, working with folks like OTF and Google and others, where funding is needed. I'm not sure exactly how to start that process, but it seems like this is the group to do that: gathering and looking at these different data sources to try to figure that out.
E
Yeah, I think there's a certain cross-product kind of analysis that needs to get done, David. I was looking at a list of things that Intel depends on a lot, and I have my corporate colleagues looking at me saying: Dave, sure, we'll help the community, of course, we generally want to help things that are going the right way, but the question was more: is this actually going to have any sort of positive benefit?
E
You know, for Intel. And I've been trying to answer that question and I'm still struggling to figure it out. I think, generally, from a community standpoint, for all of us and for the community in general, we need to look at these data sources and have that sort of cross-product analysis that prioritizes.
E
If we're going to go work on some of these things and have an impact on them, we've got to have that prioritization effort underway. So there's another step that's needed from somebody who can do some of that analysis. For example, the Harvard work Kim mentioned seems to be really based on things like node.js.
E
You know, packages that are used a lot but maybe not supported well, or Java packages, or PyPI. I suspect the Debian database may be semi-disjoint, so there may not actually be a lot of intersection between the sets. That's going to be another challenge.
F
For us, I think it's fair to say that they're essentially separate, disjoint sets. There was actually an earlier effort, which you can blame me for, I led it, called Census I; that's why the Harvard work is called Census II. There we specifically analyzed Debian packages, with a different way to score things, but the same basic challenge. And I think the various works have shown that it's very, very hard to cross-compare different package ecosystems; it's a lot easier just to stay within one ecosystem.
E
Yeah, and it's also good not to try to boil the ocean. Let's pick off the top N, where N is a small integer, but where the pain really is. I would love to get the top, maybe it's 10, I don't know, 20, something along those lines, as opposed to looking at how vast the ocean of underproduced stuff is.
G
Yeah, we're actually working on putting together some of that data, and I'd love to present to the working group on our progress, hopefully in the coming meetings. I think you're absolutely right, both Davids, in this case.
E
I'm being leaned on by my colleagues to say: okay, Dave, help us understand how we can best help the community. Yeah.
J
Hey, I'm Mako Hill. I'm the other, the lesser collaborator on the project, but a long-time Debian person. One thing I want to emphasize is that I would love to think about applying the basic method to a different population of projects; or, if you said, here's a subset of projects that we care about, great, we can do that.
J
We just need to be able to do relative rankings, right? So if you want to plug in the criticality score for some set of packages, or if you've got some other measure of importance that we can put into a ranking, we can work with it.
J
We can talk about what perfect alignment would be, talk about deviation from that alignment, and identify the things that have deviated most. One of the things we were excited to do with this moving forward was to work with other communities and organizations to help them build these kinds of measures of underproduction in contexts they care about. We can do it in Debian, but similarly we could do it in node.js.
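The plug-in-your-own-rankings idea can be sketched as follows; the function names and toy scores are mine, and any importance measure (e.g. a criticality score) and any quality measure can be substituted, since only relative order matters:

```python
def rank_deviation(importance, quality):
    """For each item, its rank by importance minus its rank by quality
    (0 = lowest). Positive values mean underproduction: the item is
    relied upon more than its quality rank would justify."""
    def ranks(xs):
        order = sorted(range(len(xs)), key=xs.__getitem__)
        out = [0] * len(xs)
        for r, i in enumerate(order):
            out[i] = r
        return out
    ri, rq = ranks(importance), ranks(quality)
    return [a - b for a, b in zip(ri, rq)]

def most_misaligned(names, importance, quality, k=3):
    """The k items that have deviated most from perfect alignment."""
    dev = rank_deviation(importance, quality)
    return sorted(zip(names, dev), key=lambda t: -t[1])[:k]
```

Perfect alignment would make every deviation zero; the largest positive deviations are the candidates for attention.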
J
We could do it somewhere else, right? And as I was saying, what's on me right now is doing a final pass on all of the code and also the results from Debian; I know that David asked for those. The paper was only presented today.
J
I don't think it's even been published on the website yet, but I put a link to the preprint.
J
You should check that out, and then all the code will be out there later today. We're also totally interested in working with people to think about what extensions or applications of this would be: for other measures of importance, for other measures of quality if you've got ones you care about, or within other subsets of packages or pieces of software that you care about as well.
I
Not really a question, but kind of an observation. At the end, Kaylea, you were talking about next steps, and I really look forward to, and I think the working group would probably be interested in, your analysis there, because answering that question of what is causing underproduction matters. There are some assumptions in the way we're going about things with the working group, for instance, that increased funding will help address underproduction.
E
Yeah, and we had some survey results that suggested that was not a good correlation, actually. So that was also a factor; it would be interesting, from the social standpoint, to figure out whether our intuitions or surveys are right there.
F
If I can jump in, having had some experience at the Core Infrastructure Initiative: I think funding can help, but it depends how the funding is used, what a surprise. Money spent does not necessarily correlate with useful work produced.
C
Yeah, I can also share our perspective on this. For us, I think lots of our projects are now facing an issue where they have the resources they need to do the work, but perhaps not as many people who are doing the work. So we understand that it's not only about making money or funding available; it's also about creating structures that enable these projects to sustainably have enough people, as well as resources, to do their work.
A
The recording will be up on YouTube. I think maybe next meeting we should have more of a discussion, but feel free to add topics to the agenda, and we can continue these conversations about getting lists and everything. That's one of the things we learned with the criticality score project: if you put any sort of ranked list on the internet, everyone's going to think it's wrong and tell you why it's wrong. But starting somewhere is a good thing.