From YouTube: Securing Critical Projects WG (February 10, 2022)
B
I used to live in a very snowy area of the country, in western New York, so I understand some of your pain, I believe.
D
Hey, good to see you all here. I figured you can't be just one of four people on a call and have your camera turned off. I usually try to stay in the background in these meetings, just so I'm aware of what's being said and done, so I have no agenda myself to bring. David's the one who is more the bulldog on these calls. So what are you signing me up for, Brian?
G
All right, but I do try to be friendly. I'll be delighted to sign up for that. You know, we've got some challenging problems, but there's no reason we can't be good to other people while we solve hard problems.
G
Well, you're right, that is one of the least challenging problems. So let me first get to it myself.
D
A meeting like this should be public facing. I mean, it shouldn't be required — I mean, they might need edit access. I can, I...
G
Absolutely. So, let's see here — can you post your email address in the chat, and we'll get you added?
G
All right, so I have now changed the settings; anyone in the world can view this. I'm surprised it wasn't set that way already, but thank you for letting us know, because stuff happens. And you now have infinite power — with great power comes great responsibility.
G
Opportunity — there's an opportunity. But I truly hate spam. The phone spammers have basically stolen our phones from us. I remember when phones were useful, and you know, it's not that they're obsolete, it's that we let them get taken over.
E
Well, here's a pro tip: don't give anyone your real phone number.
E
Yeah, the...
G
G
There
you
go,
microsoft
has
called
me
several
times
apparel
just
but
but
this
as
I
said
that
we've
we've
let
that
we've
let
the
scammers
steal
our
phones
from
us.
So.
E
I recently turned on the function that sends all unknown callers directly to voicemail, and I gotta say, it's life-changing.
A
Okay, cool. Well, welcome everybody. This is the — what's the date today? The 10th, it's written on the document — wow, February 10th Securing Critical Projects meeting. Very happy to see you all. I think we're going to have a good discussion today; we've got a lot of updates.
A
Typically, we like to start off with introductions. Anyone who's new, we'd love for you to introduce yourself: how you're involved with open source, what brought you to the working group, and things of that nature. Vicky, since I know you, I'm gonna put you on the spot and call on you first — I'd love for you to introduce yourself.
E
That's fine, I don't mind that. Hi everyone, I'm VM Brasseur, but because we're all friends here you can call me Vicky. I am currently a director and senior strategy advisor at Wipro.
E
Are we doing handoffs to people, or are you going to do them all from here?
A
If you'd like to do a handoff to someone — but you're new too. So I guess anyone for whom it's their first working group meeting, we'd love to hear from you and get introduced to you.
K
I'll go next. Hey, by the way, this is Shubhra Kar — first meeting. I think some folks in the community pinged me that I should be attending this meeting. So, introductions: I'm the CTO of the Linux Foundation, and I've been working on a lot of the tooling we call the LFX platform. That tooling includes a product called LFX Security that we have been building at the LF, and inside there's another piece which looks at all the code contributions and criticality scoring and all that.
A
Wonderful. Thank you, Shubhra, nice to see you. Looks like we have someone who raised their hand — would you like to go next?
L
Sure — I don't want to jump over people; since I'm new, I didn't know who else was new. My name is Melba and I work for IBM. I'm a Senior Technical Staff Member focusing on supply chain security and product security for all of IBM. This is a new role for me — I've been in it for a couple of months — so this is my first meeting, my first time being exposed to the open source community. So yeah, happy to be here.
A
Awesome, hello and welcome, Christopher. Yep — who else do we have?
O
Thank you. Hi, Henri Yandell — I'm involved both at Apache and also in the open source program office at AWS, and I'm just interested in listening and learning here.
A
Hello, welcome Henri. Caleb?
N
G'day, I'm Caleb Brown. I work at Google on the open source security team with Jeff and Abhishek and others. This is actually a really crazy time for me to be here, but I decided to join because people are talking about criticality score, and that's something I'm working on at the moment. So yeah.
P
Yeah, sure, I'll go. Yotam — I'm from Rezilion, a cybersecurity startup based in Israel. I lead the vulnerability research team, and I'm starting to get involved in a few OpenSSF efforts as well.
A
Oh, all right! Well, thank you.
A
Yes, welcome everybody, happy to have you here. Just a little bit of housekeeping: we do have a notes sheet that was just made public, in the sense that anyone can view it. If you'd like edit access, go ahead and request it. It's meant to be kind of a collaborative document where we keep track of the agenda, meeting notes, things like that. So feel free to throw in any notes or things that come up throughout the meeting that maybe don't get written down in the meeting notes.
A
So with that — looks like we have a pretty good agenda today. I think the main thing is going to be talking about some of the Census II stuff that's coming out imminently. So with that, I would love to hand it off to you, David — tell us about that.
G
Okay, so yeah. Basically, the Harvard folks have a final draft of the Census II report. I tried to put some notes in the document, so hopefully you can see some of this information from the notes. It is getting formatted and edited, and we actually have a date now. We were hoping to get it done by the end of February, but there's a little extra work, so the plan now is for it to be released
G
on March 1st. Originally we said first quarter 2022, so we're well within the overall date goal anyway — at least, once we had all the data sets, that was the expected time frame. For those who aren't aware of this: basically, this is some analysis that they already did a preliminary version of before, but now they have three SCA (software composition analysis) suppliers' data sets, and that turns out to be vital.
G
They have to have at least three to make a lot of the data public, because a number of these suppliers view a lot of this detailed information as proprietary — trade secret, their secret sauce. By having three, no one supplier can figure out details about the others. If we only had two, then each one could figure out the details of the other; with three, that doesn't happen — there's enough mixing of the data.
G
It also brings in dependency data from various repositories, at least via libraries.io. I have to admit I don't remember if they used another way to get data as well, but I know they at least use libraries.io, which in turn grabs data from various package registries.
G
One big difference versus the preliminary version is that they're actually tracking specific version numbers. That's a lot more work. Back when I was at my previous employer, we had actually developed some algorithms to do this; it turns out there are many ways to do it, many of which can take months or years.
G
So that's not the right way. We helped them a little bit with some of the algorithms to make sure they could get it done in reasonable time. I'm not sure I want to give away too many grand secrets — I have read the draft — but I will note that you'll be shocked, shocked to know that not everyone uses the latest version of software.
G
Yes, exactly — this is not new information; previous studies have found this. So I think you'll not be shocked to know that, indeed, when you look at things broadly in the same kind of way but from a different direction, the same result appears.
G
So we — the Securing Critical Projects working group — will need to figure out how to either update our current draft or replace our critical projects list using this and other data. This is more of a heads-up: it's going to be several different lists, looked at in several different ways.
G
Probably the big thing is — I think most of you already know — that the JavaScript npm community just works differently than everybody else. Approximately 49%, around half, of all npm packages have either zero or one function.
G
The incredible emphasis on microscopically tiny packages creates really different results — you have these incredibly deep dependency trees, and so on. That makes it very, very difficult to compare with the other packaging ecosystems. So for a lot of the studies they're basically going to have "here's the npm/JavaScript list" and "here's the everybody-else list," because otherwise doing things like counts of dependencies just doesn't mean the same thing.
G
It really isn't the same thing at all, so it makes more sense to separate those two out. So now it's called Census II; the original one was called the preliminary Census II, so they're calling this one Census II.
G
Oh — you know what, I actually haven't tracked down why so many packages are like that, but it can be something as simple as providing a constant.
G
A value for a constant — like, hey, here are the letters a through z, upper and lower case. I don't know exactly, so — you asked for an example; there's an example. Now, what is the common use case for something that has zero functions? There's another one: packages that import many other packages just to bundle them up.
G
But the reason it's challenging to merge things together is — I won't say that never happens in other package ecosystems, I'm sure it does — but the prevalence of it makes a lot of these quantitative figures just not look at all the same. It looks like, for example: oh my gosh, there are so many more packages for JavaScript than for anything else. Yes, but each of them does maybe one hundredth of what a typical package does in the other ecosystems.
A
Wonderful, thank you, David. Yeah, we're all very excited to see the report when it comes out, and I definitely think a pretty intensive session with the working group is in order once it does — to, as you said, merge that data with what we came up with, compare them, and come up with the best way to have a list of what we consider the most critical projects.
A
A lot of it boiled down to things like the impact of subversion — what would happen if a subversion were to occur — and the availability of alternatives: if there's just one open source project and there aren't really any alternatives, that would increase its criticality somewhat. And many other factors too. But that is something we are ideally going to really build out and be able to show and demonstrate.
A
You know, this is kind of what our thought process was and how we came up with it. Based on some really preliminary reviews, I do think we were certainly on the right track as a working group in terms of the project list we came up with, but there's still more work to be done. And with that, I think I'd like to hand it off to Jacques, who had some thoughts about this.
H
Thank you. I'll just leave it at that. Let's see how the screen sharing goes — yeah, okay, okay, it needs me to add it to preferences, which probably means I need to restart the app. That's good times.
H
So you can all just watch somebody...
O
Quick question for you, David, while Jacques is doing that. In the preliminary version, splitting out JavaScript basically made the rest of the list Java. Does the draft get more diverse in the new version?
G
You know, I hadn't looked at it that way. But the report will present information in various ways, and I think the intent is to also release a lot of the data as CSV files and such. Basically, you don't want to just dump on people "hey, we did an analysis and here's a big long list of data," so I think the report is valuable for putting things in context.
G
If this working group basically reads the report, you get the big picture — hey, let's jump in and look at more of the details. For those kinds of concerns, I think the best thing to do is to grab the actual — I don't want to say raw data, because that's actually not true at all — but the more detailed results, in a machine-processable way, so we can deal with them. Yes, it's true: not everybody who doesn't run JavaScript is running Java.
G
We all know that. We originally talked about trying to split it per language, but I think that has its own complications. So, okay — hopefully that helps anyway. I think the goal is to give out data so that we can deal with all those questions much more directly.
A
Floor is yours.

H
The very carefully phrased alternative title. I'm glad there are a lot of new people here today, because some of this will not be surprising to some of you. So, let's get started: what are we trying to achieve?
H
I would say that, basically, what we're looking at is the reduction of risk per dollar spent. That obviously has two elements. This has been one of my bugbears — continuing to hammer, over and over, that we need to distinguish between event frequency and event magnitude, and that reducing the risk exposure means reducing one or both of those. And how much are we talking about? If I were Australian, I'd say a [bleep]-ton — roughly a trillion dollars, based on one estimate, and that's just for cybercrime; that doesn't even include cyber warfare.
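To make the framing concrete, here is a minimal sketch — with entirely invented numbers — of risk exposure as event frequency times event magnitude, and of comparing interventions by risk reduced per dollar spent:

```python
# Invented numbers; illustrates frequency x magnitude and "risk reduced per dollar".
def risk_exposure(events_per_year: float, loss_per_event: float) -> float:
    """Expected annual loss = event frequency x event magnitude."""
    return events_per_year * loss_per_event

def risk_reduced_per_dollar(before: float, after: float, cost: float) -> float:
    """Compare interventions by expected loss removed per dollar spent."""
    return (before - after) / cost

baseline = risk_exposure(0.5, 2_000_000)   # $1,000,000/yr expected loss
fuzzing  = risk_exposure(0.2, 2_000_000)   # fewer events (frequency down)
sandbox  = risk_exposure(0.5, 400_000)     # smaller blast radius (magnitude down)

print(risk_reduced_per_dollar(baseline, fuzzing, 50_000))   # 12.0
print(risk_reduced_per_dollar(baseline, sandbox, 50_000))   # 16.0
```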
H
So what are we doing? We've got Alpha-Omega, but Alpha-Omega needs our help. They need us to slice the world into things that are Alpha and things that are Omega, which means we need to rank them and then apply a cutoff. And how much effort should we spend? Because we could do it quickly and just, like, throw the bones, or we could spend a lot of time. So what is the rational amount? I would like to propose a thought experiment for what I call the lib-nebraska problem.
H
"How much would you pay to know about lib-nebraska?" is a rational question to ask. Now, why is it hard to rank projects? Again, some of this might not be completely novel. First of all, incomplete data. Now we have the criticality score — and hello, everyone who's worked on it, I hope you're not mad — but it does rely on an incomplete data set, and I would argue that it doesn't at the moment distinguish between frequency and magnitude; the signals that are inputs to it kind of do a bit of both. It's not orthogonal.
H
It's also difficult to get various other inputs and outputs — especially outputs. If you want to do any kind of regression, if you want to do anything fancy with machine learning, it's very hard to get the output that's being predicted, to use to train the model.
H
The second one we all know: there are a hell of a lot of projects. In Alpha-Omega, I believe they talk about 10,000 as the long tail, which is just an inhuman amount of ranking that's needed. But somewhere in that 10,000 is lib-nebraska, just waiting silently for us to bring it to the forefront and win American Idol.
H
And of course the question is: who does the ranking? Any group of rankers is going to be biased in favor of some selection of projects. So how do we come up with, first of all, an unbiased group of estimators before we even go to estimation? And there's also a problem of measurement fidelity.
H
We need to be able to distinguish within the long tail. There's a real danger that if we just went with popular projects, we would have distinctions for, say, the first 50 or 100, and then there would just be long, long strings of things rated 20, and long strings of 19s and 18s and so on and so forth, where they're indistinguishable from each other. So we need something that's high enough fidelity that it can distinguish within these long tails.
H
So, there we go. The first thing — we talked last time I was here about voting methods, and this was, you know, a fatal mistake I made, because I volunteered; they tell you never to do that. So I said, oh, I remember voting methods. The first thing I learned when I started to refresh my memory of voting methods is: (a) I probably didn't know as much as I thought I did at the time, and (b)
H
I certainly don't remember very much of it. But after a bit of skimming, I came across what I would say are three major families of voting methods that make sense for us: approval voting, ranked choice, and score voting. Approval voting looks like this: you just say to your experts, "tell me what's critical, what's important," and you just tick a box. That's more or less what we actually did previously when we were selecting a preliminary list for Alpha-Omega.
H
We more or less went through a list of candidates and said yes or no. It's actually quite nice in a number of ways: it's very simple to understand and implement, it doesn't require you to decide which of two projects is more important than the other, and it also doesn't require an exhaustive vote — that's where you have to mark every single entry to get a useful ranking. With 10,000 projects, exhaustive votes are sort of hard to come by. The big problem, of course, is: what the hell do we mean by "critical"?
H
That could be anything. We could be saying, like, "it sounds bad to me," and you use a qualitative scale — in which case I will make fun of you — or you'd be trying to use a quantitative scale, like "it does an aggregate of 100 million dollars of damage," and it's just like, well, how are we measuring that? What are we including? Are we including opportunity costs? Are we including disruptions to work?
H
And how is that estimated? Okay. And finally, it's susceptible to what I call — what others call — the sparsity problem, a term I hope I'm using correctly, which essentially says we don't have enough data points, and then you're going to get those long, long runs of identically ranked projects again.
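A toy illustration of approval voting and the sparsity problem described above; the project names and ballots are invented:

```python
# Invented ballots; each expert just ticks "critical" boxes.
from collections import Counter

ballots = [
    {"openssl", "log4j", "lib-nebraska"},
    {"openssl", "curl"},
    {"log4j", "curl"},
]

tally = Counter()
for ballot in ballots:
    tally.update(ballot)

# With few voters, the tally collapses into runs of identical counts
# that an approval count alone cannot break:
for project, approvals in tally.most_common():
    print(project, approvals)
# openssl 2, log4j 2, curl 2 (a three-way tie), lib-nebraska 1
```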
H
So now we go to ranked-choice voting. It can be more robust, depending on the method — there is a certain amount of academic knife-fighting in the hallways about whose method is best, which is to be expected. So it can be more robust, but it's still susceptible to the sparsity problem, just in a different way.
H
Normally — historically — I have given score-voting people a hard time, as a person interested in voting systems. They usually sell it as a cure for all ills, and in a normal voting system I don't agree. But in this case, range voting — sorry, score voting — is kind of nice. The idea is that, essentially, you give a score to something.
H
You say "this is nine out of ten bad" or "this is seven out of ten bad." It's less vulnerable to the sparsity problem, because you only need one vote to establish a value — a position, relatively speaking. It does have a fidelity problem:
H
you need enough scores that you get high-fidelity differences between different things, and you've got to hope that your forecasters — your estimators — bother to use all of the range, rather than just locking onto 70, 80, 90 and so on and turning it into a 10-point range or a 5-point range. But that's a problem you have to deal with. The other one is what's called incoherent rankings.
H
That basically means: suppose at one time I get shown lib-nebraska, and at another time I get shown lib-food. As you can see, in this case I've said that lib-nebraska is worse than lib-food. But of course it might be that I don't see those two side by side at any one time — I'm surely being shown a subset each time, so that I don't have to rank 10,000 things.
H
At the same time, it might be that my decisions are inconsistent — that if I saw them side by side, I would give them different scores than I would if they were presented separately, because I'd be thinking about them relative to each other. So there's a problem of incoherent rankings within a single expert.
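A small sketch of score voting over subsets, including a naive check for the within-expert incoherence just described; the experts, sessions, and scores are all invented:

```python
# Invented data: (expert, session, project) -> score out of 10.
from statistics import mean

scores = {
    ("alice", 1, "lib-nebraska"): 7,
    ("alice", 2, "lib-nebraska"): 4,   # same expert, different session
    ("bob",   1, "lib-food"):     6,
    ("bob",   1, "lib-nebraska"): 8,
}

# One score already places a project on the scale (less sparsity-sensitive):
by_project: dict = {}
for (_, _, project), s in scores.items():
    by_project.setdefault(project, []).append(s)
for project, vals in by_project.items():
    print(project, round(mean(vals), 2))

# Naive incoherence check: the same (expert, project) pair scored very
# differently across sessions.
by_expert_project: dict = {}
for (expert, _, project), s in scores.items():
    by_expert_project.setdefault((expert, project), []).append(s)
for key, vals in by_expert_project.items():
    if len(vals) > 1 and max(vals) - min(vals) > 2:
        print("incoherent:", key, vals)   # ('alice', 'lib-nebraska') [7, 4]
```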
H
So an alternative that came to me while I was thinking about this is what's known as structured expert judgment. It has a lot of other names — expert elicitation — and again there are academics fighting in the hallways about which name should be used because, curiously, the grant money is attached to it. So what is it, really, and why is it something I'm suggesting? Really, it's because when we ask people to rank projects, we're asking them to create a forecast of risk. We're saying: what is the bad? How much is the bad? When will the bad happen?
H
This is especially true for score voting. Score voting, if you squint at it and look at it sideways, is essentially asking "what is the risk exposure?" on some linear scale — which is not quantitative; it's an ordinal scale of risk, rather than some interval or ratio range. How structured expert judgment works is, essentially: you have your experts, and you do a process called calibration, which essentially forces them to give better estimates than they would if they just walked in off the street.
H
To do it, you break the estimates down into two parts — frequency and magnitude — rather than having them just say "how bad," because those things are orthogonal and it's very easy to be thinking of one or the other but not both at the same time; the human brain doesn't work well with volumes and areas as things it can give a linear estimate for. For each estimate you get a confidence interval, and you use some formulas to roll this all up — there are two very prominent approaches, Cooke's classical model being one.
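A deliberately simplified flavor of that roll-up, assuming each expert gives a 90% interval for frequency and for magnitude, and treating each interval as lognormal. This omits the calibration weighting that Cooke's classical model adds, so it is an illustration of the shape of the idea, not the model itself:

```python
import math, random

random.seed(0)

def sample_from_90ci(lo: float, hi: float) -> float:
    """Sample a lognormal whose 5th/95th percentiles are (lo, hi)."""
    mu = (math.log(lo) + math.log(hi)) / 2
    sigma = (math.log(hi) - math.log(lo)) / (2 * 1.645)  # z for the 95th pct
    return random.lognormvariate(mu, sigma)

# One expert's invented intervals for lib-nebraska:
freq_ci = (0.05, 1.0)    # serious incidents per year
magn_ci = (1e4, 5e6)     # dollars of damage per incident

samples = sorted(
    sample_from_90ci(*freq_ci) * sample_from_90ci(*magn_ci)
    for _ in range(10_000)
)
print(f"median exposure ~${samples[5_000]:,.0f}/yr, "
      f"95th percentile ~${samples[9_500]:,.0f}/yr")
```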
H
A nice thing about expert judgment is that you can slip in a whole bunch of fuzzy stuff. With criticality score we have, correctly, looked for concrete measures, but there's still a whole bunch of, like, beard-stroking — and there was one time when I was, you know, a sysadmin...
H
What's really interesting to me, though, is that there is a lot of research showing you can get a measurable improvement in forecast accuracy if you train your experts in forecasting.
H
If you're doing something like a voting system, you're probably not going to do that, and people are just going to go with what's in front of them. You're going to have the recognition problem I described earlier — the fact that something highly recognizable or well known is subject to an availability bias.
H
"I don't remember seeing that, so I don't know — maybe it got skipped." Things are essentially going to be given more weight simply because people have heard of them. And this is the really nice thing about eliciting expert judgment: a single estimate, a single forecast, gives you meaningful insight into lib-nebraska. It takes only one person giving their interval —
H
their confidence interval — to actually give you meaningful information that can distinguish it at a very fine grain from other projects in the set. Of course, the con is: who has the damn time? This is a time-consuming proposal. You're looking at estimates broken into two parts, and they have to give potentially two or three numbers per part, so you're looking at six numbers per dependency, and it's just going to add up over a lot of time.
H
I think it improves estimation accuracy a lot, but we will have to think carefully about coverage: how you spread people around, what subsets people get, what order things get presented in, what the constituency is. Those problems are all still there. Very quickly:
H
we should keep score. There's an argument to be made that we gamify the sucker, so that people can feel very proud about the quality of their estimates. But also, it's been shown that feedback improves future forecasts: if people are miscalibrated and they continue to receive feedback from a system about how they perform, they get better calibrated and they improve their forecast accuracy. The tricky part is that we will need to come up with unambiguous definitions of "x has happened" so that we can apply scoring methods.
H
I link there to a page about weather-forecast accuracy scores, of which there are very many, because they have to deal with very many phenomena. Saying something like "it rained" is one way — that's a binary outcome — but of course you could then argue: well, the forecast was for heavy rain, and what counts as heavy rain, and what was the range of heavy rain? And it rained heavily in this area but not in that area, but we said this area, but it overlapped...
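One of the standard forecast-accuracy measures from exactly that meteorological tradition, usable once you have an unambiguous binary "x happened" definition, is the Brier score; a minimal sketch:

```python
def brier_score(forecasts: list, outcomes: list) -> float:
    """Mean squared error of probability forecasts; lower is better."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 1]                               # did "the bad" happen?
print(brier_score([0.9, 0.1, 0.8, 0.7], outcomes))    # sharp and right: 0.0375
print(brier_score([0.5, 0.5, 0.5, 0.5], outcomes))    # hedges everything: 0.25
```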
H
So there are all these sorts of complicated scoring systems that meteorologists have come up with to help them estimate their "forecast skill," as they call it. In conclusion, I think we should do one of two things. Either we use score voting and we recruit as many experts as possible — about which Julia is about to speak — or, alternatively, we select a few highly expert individuals.
H
We give them calibration training, we have a system that rolls up their estimates, and we go about it that way. That one would be something like booking them for a week or two weeks; or, alternatively, you could have a system where they do it on a rolling basis. You could have them leave comments — "this is why I estimated this" — so they can change their forecasts at different times.
O
You mentioned weather data as your metaphor for forecasting — what's the data we can record, like...?
H
Yeah, the output data, right — that's a big problem. There are sort of two things, I think... I saw — oh wow, the chat blew up.
H
I'm not going to read all of this right now; I'm just going to go to the last one, which says CVE counts don't really mean anything. This is recorded, so I'll put it politely: yes, CVE counts have the problem that you will be penalized for being honest — if you work hard to report your CVEs, you have a higher CVE count.
H
What we're going to manage is probably going to be something on the order of either expert estimations of damage caused — which means you might need a second set of experts to do that — or, alternatively, some sort of econometric or machine-learning estimate of that kind of damage. What I expect would happen in the long term is that these estimates would eventually be progressively replaced by some sort of regression model or machine-learning model that uses them as training data, right?
H
It uses the expert estimations as training for both the risk exposure and the risk realization, to slowly make it easier to look at an unseen project and say: okay, based on the available data, I am projecting that it's within this interval. I don't know if that answered your question or just hand-waved it away.
Q
Yeah — I'm trying to frame this in terms of a question. Thanks for the presentation, by the way, Jacques, it was great; I feel so much smarter now about voting. At least I know who to go to if I have a question. The other question, though, that I keep coming back to...
Q
...if there ever was one with a 10.0 CVSS, you know. And yeah, it was somewhere on our radar, but I think the general weakness of a human-based expert system is that those are probably the very ones who are overlooking this exact problem already. And what's the basis of their judgment to vote on, right? It's based on whatever they know. So I'm not saying anything — this is sort of a rhetorical question, I guess.
Q
But I keep pondering this. Somewhere in the chat stream it said something about a machine-learning problem or something of that sort, and I think maybe — the problem is where you get data sources to feed into that, right? Unless you have experts. So anyway, there you go — I'll shut up, but that was kind of my comment.
H
Yeah, it's a good comment, and it's true. I would argue that, as time went on, you would give your experts more and more relevant data points or things to investigate. As experts get better at this, they might develop checklists of things to look for, things to consider. Even breaking the estimate down into the two constituent parts of frequency and magnitude would help — like, you could have looked at log4j and plausibly said, well...
Q
Maybe there is, I just don't know — but who has access to all of that data, right? Because when you boil it down, that's what drives the frequency metric, and probably a lot of people just hadn't had visibility into that huge set. I have pretty good visibility on our products' use of open source, but our IT side, I'm finding, is a little harder to nail down. So anyway.
Q
And it's not just my company, it's all the other sorts of companies out there. One of the tricky things about open source, which I learned early on when I was launching the Yocto Project: you never know who's going to be using your stuff. That's something you'll just never, ever know.
H
Right, that's right. And that's an example of where our best available signal, our best available intelligence, is going to be expert opinions. We're going to be relying on people who know, or potentially know, something. So you can imagine two experts, one of whom doesn't know much about log4j. They can reason: well, it's a logging system; it's probably in a lot of places.
H
They should give a very wide interval for their prediction, whereas somebody who is very deep in the Java ecosystem is likely to give a narrow prediction, and you can combine both of these. There are people who will probably burst out of the ground yelling about Bayes, and they're right — there's an amazing combination to be done there. I'm going to flick it now to Brian who, I think, has got his hand up next.
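A miniature of that "yelling about Bayes" combination: treat each expert's interval as a Gaussian (here over log10 of damage) and pool by precision, so the narrow, more confident interval moves the estimate more. All numbers are invented:

```python
import math

def gaussian_from_90ci(lo: float, hi: float) -> tuple:
    """Mean and standard deviation for a Gaussian with 5th/95th pct (lo, hi)."""
    mu = (lo + hi) / 2
    sigma = (hi - lo) / (2 * 1.645)
    return mu, sigma

def pool(estimates: list) -> tuple:
    """Inverse-variance (precision-weighted) combination of Gaussians."""
    precisions = [1 / s ** 2 for _, s in estimates]
    mu = sum(p * m for p, (m, _) in zip(precisions, estimates)) / sum(precisions)
    return mu, math.sqrt(1 / sum(precisions))

generalist  = gaussian_from_90ci(2.0, 9.0)  # wide: log10($ damage), knows little
java_expert = gaussian_from_90ci(6.0, 7.0)  # narrow: knows log4j's reach

mu, sigma = pool([generalist, java_expert])
print(f"pooled: ~10^{mu:.2f} dollars (sigma {sigma:.2f})")  # dominated by the narrow one
```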
J
I wanted to build on the question about what this output data looks like, and take it a step further and ask: how do you envision someone looking at these forecast results and then taking an action based on them? In other terms: if I see the forecast is rain, I pack an umbrella for the day; in the security realm, if I have a CVE, I want that to trigger a patch being created. So what would the forecast trigger?
H
Yeah, I would say two things. First of all, we would be applying a cutoff, so perhaps annually or biannually we're recalibrating what falls into the Alpha bucket and what falls into the Omega bucket. One thing I didn't talk about in this presentation, but thought about talking about — the really interesting problem — is the border region between Alpha and Omega, and how much churn is going to occur there.
H
There is going to be difficulty, but I think things that are well clear are likely to stay well clear in the Alpha bucket, and likewise things that are deeper in Omega are unlikely to burst upward unless new information comes to light. One thing we can do, though, is look for movement, or trends, as experts revise their estimates — you would leave it open for them to do that, or to change their votes.
H
Alternatively, you would look for things that start to trend up — things that stayed at a relative baseline for a long time but started to drift upwards — and you'd be like, okay, so what's changing about that? That would be true for criticality scores as well. A paper I saw while I was going through this actually proposed an alternative criticality score, where the two factors were the "truck factor," as they call it — they had a way of computing the truck factor — and just the raw number of ultimate dependencies, fully realized.
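A rough sketch of a truck-factor computation — a greedy approximation, not necessarily the paper's exact method — with invented ownership data:

```python
def truck_factor(ownership: dict, threshold: float = 0.5) -> int:
    """Greedily remove top authors until > threshold of files are orphaned."""
    files = set(ownership)
    authors = {a for owners in ownership.values() for a in owners}
    removed, count = set(), 0
    while True:
        orphaned = sum(1 for f in files if ownership[f] <= removed)
        if orphaned / len(files) > threshold:
            return count
        # remove the remaining author who owns the most files
        best = max(authors - removed,
                   key=lambda a: sum(a in ownership[f] for f in files))
        removed.add(best)
        count += 1

ownership = {  # file -> authors who know it (invented)
    "core.c": {"alice"}, "net.c": {"alice"}, "io.c": {"alice"},
    "doc.md": {"bob"},
}
print(truck_factor(ownership))  # 1 -- losing alice orphans three of four files
```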
G
Yeah — boy, there are so many different things I want to respond to; my brain is starting to pop off with the things I wanted to say. I think one of the key things to note is — and for those of you who are just coming in, my apologies, because some of us are talking about things we've previously discussed — we already have some data sets that we believe can be useful, and we know there are other data sets coming in, particularly the Harvard study.
G
What that doesn't tell you is, you know, applications as necessarily used as a whole. Tracing through the criticality score: the OpenSSF criticality score looks at various measures that can be easily gotten. Right now it really only looks at GitHub — a lot of projects don't use GitHub — and even for those that do, it looks mostly at things like activity.
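For reference, the OpenSSF criticality score combines its signals using Rob Pike's "Quantifying Criticality" formula, roughly as sketched below; the example signals, weights, and thresholds here are illustrative, not the tool's actual parameter set:

```python
import math

def criticality(signals: list) -> float:
    """signals: (value S_i, weight alpha_i, threshold T_i) per signal."""
    total_weight = sum(alpha for _, alpha, _ in signals)
    return sum(
        alpha * math.log(1 + s) / math.log(1 + max(s, t))
        for s, alpha, t in signals
    ) / total_weight

# Two made-up signals in the style of commit frequency / contributor count:
print(criticality([
    (4_000, 1, 5_000),
    (300,   2, 5_000),
]))  # ~0.77, on the 0..1 scale the tool reports
```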
G
So, for example, the theory here is that something that's really, really active must be important. That isn't necessarily a bad heuristic, but it is clearly imperfect. There are really vital projects that have become moribund — and not in a good way, because they're done, but because their maintenance has been abandoned. Those would not show up in the criticality score, even though they're obviously vital. And there are actually some counterexamples — my apologies, there was one.
G
There was a project used at CERN that has one of the highest criticality scores. I'm not sure if you're familiar with how CERN works, but their papers tend to have over a thousand authors — the list of authors is typically longer than the paper. So when you've got that kind of situation, with a big project, you may have tens of thousands of people adding a little bit of code to something, but it's only used at CERN.
G
So none of these data sources are perfect, which is why this group, at least up to this point, has focused on: let's gather as much data as we can, but we're going to have to depend on human judgment — because of the sparsity, the incompleteness of the data sets, the problems with each of those data sets. I mean, we can revisit that, but that's why Jacques is talking about these voting systems.
K
Yeah — part of it David answered, but my point is: I think the voting is complementary; it does not replace metrics, right? When you talk about metrics — I get it, right, like if you look at activity tracking on systems from GitHub and whatnot, a lot of projects are on Gitea, GitLab, whatever — but if you are able to start instrumenting and start looking at metrics of how many people are actually contributing...
K
...are the commits going up or down? If you start tracking metrics like who's downloading it, what the behind-the-firewall enterprise usage is, how many of these libraries are being pulled in by package managers or by CI systems — even if contribution has ceased to exist, it still might be part of a lot of builds, right? Existing builds of those.
K
I think metrics do add a lot of value. And I think there was another topic in there about dependencies — yeah, dependencies. Again, at least in my team at the LF, what we have been doing is gathering a lot of these data sets.
K
Just from the code perspective: who's writing code, what commits are happening, who is merging — because that is also somewhat indicative, since many people who are building platforms or services on top of open source actually tend to be committers for the new stuff. But for the legacy stuff which has stopped getting maintained, you might not see much code activity. That's another data stream
K
you need to collect from behind the firewall. But the third stream is the dependency data, right? Say, okay, I have package A and it's using 400 packages from upstream. If we are able to get that data — we have been collecting it for at least Linux Foundation projects, not boiling the ocean — and if you are able to combine these data sets, then you have something which is a little bit more objective, and then obviously the voting mechanism is kind of the layer on top.
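A sketch of that combination step: joining the three streams — code activity, behind-the-firewall usage, dependency fan-in — into one naive composite before any voting is layered on top. The field names, numbers, and normalization are placeholders, not the LFX schema:

```python
activity = {"projA": 120, "projB": 3}        # commits per quarter (invented)
usage    = {"projA": 40,  "projB": 900}      # enterprise pulls per week
fan_in   = {"projA": 15,  "projB": 4_000}    # packages depending on it

def normalize(d: dict) -> dict:
    top = max(d.values())
    return {k: v / top for k, v in d.items()}

streams = [normalize(s) for s in (activity, usage, fan_in)]
composite = {p: sum(s[p] for s in streams) / len(streams) for p in activity}
print(composite)
# projB scores high on usage and fan-in despite near-zero code activity --
# exactly the "maintenance stopped but still in every build" case.
```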
I
Yes, wonderful. Do we have...
A
Oh, okay. But yeah, this is definitely a topic that is going to require some deep dives. So, in the interest of time — I think, if we all agree, we could definitely set aside time at a future working group meeting, maybe the next one, to focus on this topic specifically, because it is one of the main objectives of our working group. Does that sound good to everybody? Maybe at the next working group meeting? What do you think about that, Jeff — if we just kind of had a deep dive into this?
G
If it's different than what we did last time — last time we did it rapidly, and I'm glad for that. Let's walk it through, but I don't want to debate it forever.
B
Sure. I linked it in the agenda, and everyone who is part of the Google Group should have edit access, so feel free to edit directly. Basically, I'll just talk through it. We need to figure out how to segment the projects that we're looking for feedback on — the rankings we're looking for feedback on — since we already do pull out certain information about the projects, like language.
B
I think that's the easiest and most logical starting place. As David was talking about earlier, the ecosystems these packages belong to have very different development practices, norms, communication methods, etc. So asking folks to rank — or vote, in some form or fashion, per Jacques' presentation — within their own ecosystem is probably going to give us the most meaningful data. I know nothing about the Rust ecosystem.
B
It would be a mistake to ask my opinion on Rust packages. So my proposal is to identify leads for each ecosystem that we identify, who will act as the hub for that ecosystem — doing the outreach within their community norms and identifying their primary communication mechanisms. And from there we also go to a public nomination process, to avoid similarity bias.
B
We don't want to just reach out to people we know and communities we know; we want to identify the folks we don't, as well, and I think that's a critical part of ensuring we get representative viewpoints. So it's a very rough outline — I didn't want to make it overly prescriptive. I think our primary limiting factor is that we are all volunteers.
B
Well, most of us are volunteers — I guess there are some people who are paid to be here — and we want to make sure that what we do is scalable. So however we decide to ask people to vote or provide input, we need to make sure we're not duplicating work.
B
So I do recommend that we have somebody overseeing the outreach program as a whole. That's kind of the long and short of it. There's space for questions in the doc, or here — I'm not sure who had their hand up from before and who has their hand up now. I saw that Vicky did put her hand up now, so...
E
So, in general, I think this is a very good idea. I mean, your example of Rust is dead on, and as we've pointed out already, there's going to be a difference between JavaScript and, for instance, npm JavaScript. So having experts on their particular ecosystems makes total sense. My question is about that —
E
essentially, your going-out-to-the-public aspect there — going out to the public to look for that lib-nebraska. Essentially — wow, we've got metaphors flying everywhere, love it — to spot that lib-nebraska, that outreach itself, I think, is going to be very important.
We
make
sure
we
don't
have
more
selection
bias
going
on
there
right
and
that's
going
to
be
a
tricky
one
that
I
think
we
need
to
make
sure
we
cover,
because
me,
looking
at
my
network,
is
going
to
be
a
lot
of
overlap
with
like
julia
with
amir
with
david,
and
so
what
does
that
outreach
look
like,
I
think,
will
be
very
critical
to
the
success
of
this
effort.
B
Yes, absolutely. What we're essentially trying to do is identify the unknowns and the gaps in our networks so that we don't hyper-focus. Like you said, Vicky — if you and I did the outreach, we would be contacting all the OSPOs, right? That would be our primary network. So yeah, absolutely.
E
Yeah, I just want to avoid that airplane with the little red dots on it. I think that's going to be pretty important to avoid. Yes.
O
This is more of a suggestion, because it's feeling a bit like an either/or in terms of the way the agenda has played it up, and my guesstimate on the behind-the-scenes feeling is that the working group needs to provide a list to Alpha-Omega sooner rather than later. But I really like this — I love the direction of it, Julia — and what I'm thinking is how you can split this up into bar-raisings that we make to the process over time.
O
So, using the joking in the chat channel about "v1": is there a v1, a v2, a v3, where each one of these steps appears and slowly raises the bar? Because I imagine people will be like, "we need Alpha-something soon; we need something, we need something."
B
Yes, absolutely. I considered adding kind of phases to the doc, but I had already reached over two pages, and I get kind of verbose. So I can absolutely add the stages.
B
I think that future area of consideration is how to split up the candidate list further than just programming languages — I did explore that a little bit in the project classification section. But yes, a rollout plan is definitely needed, and I do encourage you to add any comments or edit the doc directly if you have ideas. I don't feel a particular sense of ownership of this.
R
Yeah, Julia — just to clarify, an ecosystem lead would be one of us in this meeting? Not necessarily an expert in the area, but somebody in charge of trying to reach out to that ecosystem?
B
I think it would be helpful if the ecosystem lead had some knowledge of the ecosystem, but they don't necessarily need to be an expert.
G
Yeah, I don't want to sound terribly like a wet blanket here. If the goal is "hey, I want to reach out to various communities" — all for that. I do want to be careful here, though. The Harvard study only looked at, for example, the language-level communities, but those are by no means the only ones — there are, you know, system-level packages.
G
Nobody's systems work without those; containers kind of matter. Those are other ecosystems we need to include. And plug-in systems — there are plug-in ecosystems for various systems that are all their own.
G
On the other hand, I do worry that — I touch a number of ecosystems, but my experience has been that even if you're involved deeply in an ecosystem, you only know a portion of it. I think JavaScript is probably one of the worst in that respect: it's an incredibly fractured community. You can deeply know a whole set of JavaScript packages — you may know your React and a whole lot of things about React — and you've never seen Vue, never seen anything that uses Vue.
G
The constructs they use are foreign to you. So even if you're involved in an ecosystem, you probably only know part of it. I'm not so sure I would want to go too far in depending on the ecosystem; I think where I'd really go is: do you know something about this, before you make an expert case on it?
G
You may know a whole lot about something even though you're not generally involved with that ecosystem, or you may know nothing, in which case — let's listen to the folks who know more about it.
B
David, I absolutely agree. The languages are definitely just a starting point, and one of the key factors I was considering is what information we already have that we can divide by. I really do love Nadia Eghbal's Roads and Bridges framework, which I link in the document.
B
But the problem is that it requires manual classification, and that just increases the load of what we need to do as pre-work.
B
So it's a balance, and I think that, as an iterative process, we do need to identify ways of further subdividing the ecosystems. You're absolutely right: the container ecosystem, or the orchestration ecosystem, spans languages — it spans the classifications we currently have — and that is a useful set of assessments as well.
A
Wonderful, thank you. Ava, you had your hand up — I want to make sure we get to you before we conclude.
T
Thanks, yeah — I know we're at time. I will try to add my questions as comments on Julia's doc. Love the start, Julia.
A
Well, thank you. Okay, with that — yes, it looks like we are at time. A great discussion today, everybody. Jeff, I'm gonna just message you on Slack about the town hall. But yes, again, great discussion, everyone, and I'm looking forward to some of these more intensive sessions in the near future.
A
So
thanks
again
and
again,
discussions
can
always
be
taken
to
the
slack
channel
too,
just
to
be
mindful
of
people
who
might
not
be
able
to
make
the
meeting
times
and
whatnot
so
again,
thanks
everybody
and
have
a
great
rest
of
your
day
and
weekend
or
the
mailing
list.
Yes,
bye-bye.