A
Well, welcome, everybody, to the first working group meeting of 2021. I hope everybody had a nice break, a little bit of time off, and are recharged and ready to make 2021 awesome. So I've got some demo and a bunch of stuff to talk about today. I posted a link to the meeting notes in the chat. If anybody needs it again, I can post it again, but it's the same notes doc that we've been using, so just add yourself to the top. If there's anything that you'd like to talk about, just add yourself to the agenda. And I'm sorry — I have a camera on one side and the incoming video on the other, so I am looking at you guys.
A
Okay, so, first thing: a couple of weeks ago, Luigi was kind enough to volunteer to help get a — I don't know what we're going to call it — a PoC, or an MVP, or just a v0.9 of a dashboard out. So I tried to absorb a lot of the comments and the different docs and requirements specs and all of this, and one of the realizations that I had was: it would be better to get something out than to get the perfect thing out a very long time from now. I felt like we weren't making a ton of progress — or I wasn't making a ton of progress — on the implementation; it just got too complex. And we had a short meeting — we didn't really have quorum — but a comment came up, I think it was early November, which was basically: "This is way too complicated. Can we just, like, throw Grafana on a database and call it a day?" That got me thinking, and it wasn't a terrible idea. So that's the demo I'm going to show: this is Grafana on a VM, running.
A
Okay, so these cards down here come from — David, your stuff — the best practices badge data. These things over here come from the Scorecard — Dan, your stuff — Scorecard metrics. The project overview, I believe, comes from the best practices, although some of these links down here are generated dynamically. Project releases over time come from the libraries.io API, which gives a list of versions and dates, so you can graph it. Months since last release, and average months between releases, come from GitHub.
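A minimal sketch of how the two release-cadence numbers just described could be computed, assuming you already have the release dates (for example from the libraries.io versions list, or from GitHub releases); the 30.44-day "month" is an approximation I am introducing, not anything from the dashboard itself:

```python
from datetime import datetime

def release_cadence(dates, now=None):
    """Given a project's release datetimes, return the pair
    (months_since_last_release, average_months_between_releases).

    Months are approximated as 30.44-day periods. Returns (None, None)
    when there are no releases, and an average of None with fewer than
    two releases.
    """
    if not dates:
        return None, None
    dates = sorted(dates)
    now = now or datetime.utcnow()
    month = 30.44  # average days per month (assumption)
    since_last = (now - dates[-1]).days / month
    if len(dates) < 2:
        return since_last, None
    gaps = [(b - a).days / month for a, b in zip(dates, dates[1:])]
    return since_last, sum(gaps) / len(gaps)
```

A dashboard job would feed this function the parsed `published_at` dates for one package and write both numbers to the metrics table.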
A
I believe — no, I know — this is libraries.io. Contributors within the past 90 days is broken on this one, but it should say, you know, the number of distinct contributors to the git repo — so it does a clone, and then it parses the log looking for this. And then percentage of issues open: this is, of the last hundred issues that were created, how many of them are currently open, and then what's the average duration between open and closed for those hundred.
A
So these are not intended to be perfect metrics, because no metrics are perfect, but they're intended to give us something. And then what I'm hoping is that we have a fruitful, you know, argument, like: "well, 90 days is a terrible number, and a year is just barely enough, but it really should be one year and three years," or "average issue duration doesn't mean anything — it really should be this other thing." But at least we have something to, like, point at, and laugh at, and say that it should be this other thing instead. These things on the bottom: there's a job that goes out and gathers everything from OpenSSF — from the best practices — and all of the public data that the Scorecard project creates, which is awesome. So thank you to both of you for having those kinds of APIs and data available; it makes this a lot easier.
A
So what I have is, you know — if you want to look at, my favorite, left-pad: left-pad comes in, and more data, you know, comes in. But left-pad, you know, has been deprecated since, I think, 2018, so we expect to see numbers like this. There is no badge stuff.
A
I ran the Scorecard independently on it, and some of them popped up yes and most no — so there's this. But it has this other section, security reviews, which I want to talk about in a minute. Are there any questions about the demo so far? I'm going to get a little bit more into the architecture, then go to the security reviews, and then have more of a holistic discussion on, like: is this the right — are we going down the right road? David, you're on mute.
B
Thank you — the 2024 phase is going to be... 2021 also, I see. Thank you, yeah. I would definitely love to have all the arguments about the metrics — I think there are better ones; I'm not very excited about releases versus, say, some number of commits or something — but I love this overall approach of getting something up, and then, you know, we can have reasonable, interesting arguments about what to put there.
B
So I'm actually very excited — I think this looks really, really promising. I am curious where you got the security reviews text there.
B
Yeah — so I'll just quickly put a peg in, you know, for the health — on the length of time between releases: different projects have very different views on that. But I think if you produce at least, you know, how many commits per month, that at least gives — in particular, "there are no commits this month" tells me something very, very different, even if there hasn't been a release in a year.
A
True. I think maybe we need both, because as a consumer I don't consume commits; I only consume releases. So if I know that you do releases every three years and you just had one three months ago, I should know not to expect another release for another couple of years. But yeah, I agree that the number of commits is important, and what I was hoping is that some of these guys on the bottom are proxies for that kind of information. So I haven't looked into precisely what, like — what would it take to go from an X to a check on contributors — but I would imagine it would be something along those lines, like: has had more than X contributors in the past T.
A
Correct. I tried to make these ones on the bottom all binary, aside from these guys over here. So these are either, like — actually, some of these have "partial," because you have "partial" in some of your data — sorry, "partially met," or something like that.
D
There are two relevant things here. There's one check that tries to determine whether or not the project is active, and that one has a couple of different things it looks at, like whether anything has happened in the last 90 days or something like that. And then there's another one that tries to make sure there's at least more than one contributor — so we don't care about the actual number, as long as it's greater than one.
C
Okay, so there's some overlap in these between the best practices and the Scorecard, I think. Are you doing similar kinds of — does the best practices have, like —
B
Yes — I'll go ahead — yeah. Yes, although, you know, there are some challenges and differences. So, for example, for the multiple contributors: what we want is multiple contributors from different organizations, whereas with the number of contributors it's just, straight, are there more than one. And so —
E
I have a question regarding the OpenSSF badge: this is filled out through a form, right? Since we maybe cannot validate these metrics, does it make sense to show it here rather than just the badge? Because, ideally, we would want to validate the metrics, right? Let's say somebody says two-factor authentication is enabled — do we validate all these metrics?
B
We don't validate all of them. We validate — of course — actually, let's step back: what do you mean by "validate"? Okay: some of them are automatically checked, and they are verified as being correct, or some of them are at least verified as being consistent with other data. Well, no — some data is self-reported, some is not. So: some data is self-reported, some is validated in terms of an automatic check, and some is validated for at least, you know, consistency. In other words, yes, there are reasons to believe that it's true. There is also the challenge — if someone lies, it's all public, so it can be questioned and, if necessary, overwritten. We've done that in several cases where somebody makes false claims and it's reported, and it's either fixed or deleted and the user is removed.
B
If we have to do that — is there a periodic, like, "hey, is this still true?" there — we have talked about that many times. There's a little bit of that, but we'd like to do more in the future.
B
Okay — but, to be honest, right now it's a matter of priority. Right now we've been more focused on just getting people to do things at all, because that's the higher risk. People backsliding a little bit is not ideal, but for a lot of these things, once you set up things like CI pipelines, your pipeline's going to enforce it on you and you're doing it. So we're way more worried about the projects that aren't even trying.
A
Cool. Any other —
C
Yeah, I'll just say I agree with David. You know, it's great to have something here, and we can keep refining, but it's a starting point. So, thanks for your —
A
So this is what these things look like, and it's just — it's vanilla Postgres, nothing special. So, the Core Infrastructure badge thing: the key is, like, you know, where it came from, and the license is MIT, and that's it. Then, back here, when I'm curious as to, like, well, what does this mean — it just means that the query is just: for the given package URL, what's the value of the key? And then there's some mashing to keep Grafana happy, but there's nothing special there. So there are no, like, weird complex back-end whatchamabobs to kind of keep things going. You just have a refresh job, and the refresh job is the thing that is —
A
— here now. And basically that comes down to — oops, not that one — a couple of different scripts. One of them is, like, "check the Scorecard," and the Scorecard one means that either I'm going to grab the latest JSON with everything in it and parse them all, or, if you give me just one, I will go there and I will try to find the GitHub repo and then run the Scorecard thing myself on it.
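A minimal sketch of the "grab the latest JSON and parse them all" path described here. The exact shape of the public Scorecard dumps is an assumption on my part — newline-delimited JSON with `Repo`, `Checks`, `Name`, and `Pass` fields — so treat the field names as illustrative rather than the real schema:

```python
import json

def parse_scorecard_lines(lines):
    """Parse newline-delimited Scorecard result JSON into flat
    (repo, check_name, passed) rows, ready for an INSERT into the
    vanilla Postgres table the dashboard queries.

    Field names ('Repo', 'Checks', 'Name', 'Pass') are assumptions
    about the dump format, not a documented contract.
    """
    rows = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines in the dump
        result = json.loads(line)
        for check in result.get("Checks", []):
            rows.append((result["Repo"], check["Name"], bool(check["Pass"])))
    return rows
```

The refresh job would stream the downloaded dump through this and upsert the rows keyed by package URL and check name.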
G
I have a question: how did you handle the GitHub limit? Because when I was writing the code, I had some trouble with the GitHub API limits — because, of course, to test the code I need to run it several times, and of course, after some attempts, I was blocked by GitHub.
A
Yeah, I don't have a good answer for that, I don't think. Well, I think we should have a talk with GitHub to see what the chances are of getting a higher API limit. I know other orgs, you know — they have a pool of tokens, which is probably not the right thing to do either. We could run it slowly, we could, you know — but, like, the Scorecard itself takes — I mean, I've seen it take over a thousand out of the 5,000 API points, no?
E
Really? I think it just takes around less than a hundred, so you can do, like, around 50 to 100 repos. But here is the thing: we are doing kind of a hack to switch between tokens. So I think that's, like, a hack around the GitHub token limits — like, you can have multiple tokens, and, you know — but it has been a crazy pain; we need to solve that thing. Yeah.
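The token-switching hack mentioned here can be sketched as a small round-robin pool — callers mark a token exhausted when GitHub returns a rate-limit error. This is illustrative only, not the actual Scorecard code:

```python
import itertools

class TokenPool:
    """Round-robin over several GitHub tokens, skipping ones that have
    been marked exhausted. A caller marks a token exhausted when the
    API responds with a rate-limit error, and retries with the next one.
    """

    def __init__(self, tokens):
        self._cycle = itertools.cycle(tokens)
        self._exhausted = set()
        self._size = len(tokens)

    def next_token(self):
        # At most one full pass over the pool; if every token is
        # exhausted, there is nothing to do but wait for the reset.
        for _ in range(self._size):
            tok = next(self._cycle)
            if tok not in self._exhausted:
                return tok
        raise RuntimeError("all tokens rate-limited; wait for reset")

    def mark_exhausted(self, token):
        self._exhausted.add(token)
```

As noted in the meeting, this is a workaround rather than a fix — the cleaner paths discussed are a higher limit from GitHub or cheaper GraphQL queries.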
A
Actually, before that — because, what I know: as I was doing other things with the GitHub token, I realized that if I just switched over and learned GraphQL better, I could get, you know, going down from hundreds to, like, one token point, where it's just significantly less. And I noticed that there was some code in the Scorecard that does GraphQL.
D
Yeah, there are probably some optimizations that can be done. I know, like, the GraphQL API rate limiting changes completely — it's not one token per request; they have some way where it's smart enough to charge you based on how big the query is. So there probably is — okay, just tweaking it could be done. But yeah, when I first looked at it, it was hard to actually predict, just because of, like, the math.
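For illustration: GitHub's GraphQL endpoint exposes its own cost accounting via the `rateLimit` object, so a query can report what it cost and how much budget remains — which is why the cost is "hard to predict" without asking. The repository named in the query is just an example:

```python
import json
import urllib.request

# Asking for rateLimit alongside the real data makes the query
# self-reporting: the response includes its own point cost.
RATE_LIMIT_QUERY = """
query {
  rateLimit { cost remaining resetAt }
  repository(owner: "left-pad", name: "left-pad") {
    defaultBranchRef { name }
  }
}
"""

def graphql_payload(query):
    """Build the JSON body GitHub's GraphQL endpoint expects."""
    return json.dumps({"query": query}).encode()

def run(token, query):
    """POST a query to api.github.com/graphql with a bearer token.
    Requires network access and a valid token; shown for shape only."""
    req = urllib.request.Request(
        "https://api.github.com/graphql",
        data=graphql_payload(query),
        headers={"Authorization": f"bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

A batching job could watch `rateLimit.remaining` in each response and back off before hitting the ceiling, instead of discovering the limit via errors.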
B
Yeah — Michael, there's another approach you can take, which is radically different, hold up, and that is something more like what the best practices badge does. Instead of trying to do a batch — and, to be fair, the badge really couldn't do what you're doing right now anyway — you could make it so that people log in and use their tokens, because they're interested: "I really want to know about project X" — well, project X isn't on the list that you currently do on a normal cycle.
A
I think that totally makes sense, especially if we are — I wouldn't want to just give out the code and say, "hey, here, run this code yourself on your desktop against your thing and submit the data to us." Oh —
B
No, no, no — I'm saying, you know: metrics.openssf.org, you show up. "Oh, you want to look at this other project? Give me the URL of the project." You log in, give me the URL of the project, run the Scorecard, do the CII best practices, whatever else — and then you get the data, you show the user the results, and then you store it with a date.
C
I think it would be very interesting if we could have the results from this come out in in-toto link format and be signed, and also identify, you know, who ran the check. And then we could, you know, use that also for companies who want to validate — they can validate against that. So, yeah — sorry — automate and validate.
G
Yeah, I have some doubts about this strategy, about the GitHub token limits. Because, of course, the user can run, using their token, one, two, three scans — but can they use it with many tools in some hours? Because the limits block you for one hour or more, I don't know. So I don't know if it is the best user experience if you want to compare different tools, and so you want to run the dashboard in real time multiple times.
G
In addition, I think that the Scorecard works only with GitHub, and that is okay. But if a project is on GitLab, we cannot use it. So, I don't know — or maybe I'm wrong — but when I have tested the Scorecard with GitLab, I could not run it. But maybe I can — maybe — yeah.
D
Yeah — you summed it up correctly. It's just a matter of doing the work to query the GitLab APIs.
A
These say "no data" here in the middle because I haven't run the script, because of the API limits — and there's no sense in, you know, burning CPU time on a proof of concept for all of these. Okay. I did want to get to the security reviews part quickly — unless anybody has anything burning that they need to talk about right now, right this second. Okay, just —
C
One more thought, sorry: I wonder if we could get GitHub to provide data — the kinds of data that we're interested in for security — in kind of a cold storage or something, so that we wouldn't have to — and it just seems like something to talk about — so that we wouldn't have to use their API to query. Maybe they've got some other place where we can go, you know, get relatively fresh data across a bunch of projects.
A
That would be — particularly, I think, because, you know, in anything there's this, like, hockey-stick graph of, like, projects that anybody has ever used. So if you just focus on the things that we would probably care about anyway, it'd be a very small portion of the total number of GitHub projects.
C
Yeah, I'm not sure exactly how it would look. I'm just trying to come up with, you know, some way — another way — that we could get around their API limits, and I thought perhaps we might have some data stored on the side, so that we're not querying against their, you know, main databases.
D
Yeah, that's the problem. It seems like Microsoft was sponsoring it at some point — there was only one maintainer of it — but that's basically what it did: people were kind of donating access tokens to it, and they were scraping and sticking things into offline cold storage and then making them available to researchers.
B
Yeah — as far as I know, BigQuery still exists, but — I mean, I don't think that's free at all; as you get bigger, you get charged for it. So I don't think that's really going to be a help for scale and this kind of thing, if you want the general public to be able to query it.
A
No, but if we could query it — and, you know, somehow figure out the billing part of that — that might be more efficient, if it's — yeah.
C
Yeah, it just makes it a little harder for someone to query — there's an extra step, like, you have to provide your token — but yeah, anyway.
E
Also, more checks: so, right now it takes around 15 to 20 seconds, and we keep adding more checks, so it won't be a real-time experience to get that information. To get which information? Like, all the metrics, right. So right now we have, like, 10 checks, and it takes around, let's say, 15 to 20 seconds. So if someone goes to a web page, it will take some time. So it won't look like real time — is that fast enough?
B
You know, I usually don't make decisions to use some piece of software in 15 seconds.
A
Okay. So when, I guess, the OSSC — the GitHub coalition — started, one of the drivers, at least for me, in this was: part of what my team does at Microsoft is security reviews of open source. So we look at a thing and we give it a thumbs up or a thumbs down. We use tools, we do code review, we do all sorts of stuff. And when we give it a thumbs down, that's usually either because it's just an inherently non-secure thing — it's like it —
A
— so we're not going to approve that. In other cases it's a vulnerability, in which case we take it through that process and get it fixed. And in a lot of cases it's fine, and when it's fine, we give it a thumbs up — which is, I think, a little bit different from what other existing processes out there do, because I've never really seen a good, like, thumbs up on security: someone has actually looked at it with their eyeballs and said, "this looks fine." That's part one. Part two is that we've done —
A
— you know, hundreds and hundreds of these over the past couple of years, and we're just kind of sitting on the data. And if any of your organizations are doing anything similar, we're probably looking at the same things, which means that we are collectively wasting, like, 90 percent of our effort. Wouldn't it be great if there were a community resource that we could all publish to and consume from, and get interesting data — not zero-days —
A
— so this is not a zero-day trading club, but interesting security data about projects that isn't coming from the projects themselves. So, like, Apache has a security page, and others have security pages, and those are great — but those are also, you know, self-attested: "I am secure because I say I'm secure." For the big projects, there's trust there, and that's fine; the smaller projects don't have those things anyway. And wouldn't it be great to have, you know, just information that someone has looked at left-pad and said —
A
— "you know what? It looks fine." And so that was kind of one of the things that drew me to the OSSC, and I've been hoping to get this out. So I'd like to, kind of, you know, start moving on this. And basically, what this is — it's meant to be super simple: it is a markdown document that has text.
A
It has a couple of different fields that you have to fill in — otherwise the CI will complain — and it has a little bit of structure at the top, which is: what things does it apply to, when did I do my review, who did it, and what my recommendation was. And this is, like, "safe," "unsafe," or "it depends" — and hopefully there aren't —
A
— hopefully there are more things not in the "it depends" bucket, because that doesn't help too much. And then this data just gets sucked into here and becomes this. So this is a clone and a parse and an upload to the database — so it's quick, but it's all sourced from here.
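As an illustration of the review format described above — a markdown body with a small metadata header that the CI can check — here is a hypothetical example plus a parser sketch. Every field name and the reviewer handle here are assumptions of mine, not the actual schema enforced by the reviews repo:

```python
import re

# Hypothetical review file: metadata header, then free-form markdown.
SAMPLE_REVIEW = """\
---
package-urls:
  - pkg:npm/left-pad@1.3.0
review-date: 2021-01-06
reviewer: jane-reviewer
recommendation: safe
---

# Security review of left-pad

No issues found during manual review.
"""

def parse_review(text):
    """Split a review file into (metadata dict, markdown body).

    List fields use two-space '- ' items under their key; scalar fields
    are 'key: value'. The schema is illustrative only.
    """
    m = re.match(r"---\n(.*?)\n---\n(.*)", text, re.S)
    if not m:
        raise ValueError("missing metadata header")
    header, body = m.groups()
    meta, key = {}, None
    for line in header.splitlines():
        if line.startswith("  - ") and key:
            meta.setdefault(key, []).append(line[4:].strip())
        elif ":" in line:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            meta[key] = value if value else []
    return meta, body.strip()
```

The "clone, parse, upload" refresh job would run something like this over each file and reject pull requests whose required fields are missing.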
A
So what I would like to do — and I don't think the two projects need to go out at exactly the same time — but, you know, as I said, we have, you know, hundreds and hundreds of these that we would like to make public. And going forward, we would like the ones that we do — especially the ones that aren't sensitive, I guess — to just immediately be made public.
A
I think that helps us scale, and that helps everybody else; if you guys are doing it, it will help you scale. There are still some open questions about, you know, what's the bar, and how do you ensure quality, and what happens when people disagree — if you have three different reviews by three different companies, all saying different things, like, who's the arbiter? And I think we can work out all of those issues going forward.
A
But again, I think it's important to get something out, and I think this is a unique offering that, I think, provides value. So —
B
Well, this is a new one. I like the idea, I have to admit. When you're saying "how do you arbitrate" — you might find it easier to just provide them all; I'd rather have three different ones that disagree than the zero, which is what I usually end up with, yeah. But I suspect there are going to be all sorts of complications in terms of, you know, what gets accepted and so on. But I still think this is, I think, a good idea.
A
I would imagine this would be a separate repo on OpenSSF? Okay — so I was imagining — this is just, you know — right now it's in mine, but I'll, you know, just remote it over and push, and we're done. I do have some text here on, like, you know, what I was thinking about, but —
E
Yeah, same question as David, which is: can this, like, really show up, let's say, in the package manager page, where you are actually getting that package? Like, if it can show — and I think Dan can comment here too — we actually talked to different package manager folks, like PyPI, so we could think of maybe integrating it somewhere, because that's where people would be downloading the package, right? So — absolutely.
D
It's the same idea, but they kind of tackled it from the completely different side. Instead of telling people what to review and sharing the reviews, they kind of built this cryptographic attestation mechanism for publishing the reviews and making sure that — like, the kind of security on those markdown files themselves — so that you can actually check, if Microsoft says that they did this review, that it was signed by some key from Microsoft, kind of thing.
D
Yes, and I think their model can apply at, like, the file level and the package level, and even, like, the individual PR level. But yeah, I really think kind of crowdsourcing these reviews is important, and somebody should do it. Yeah.
B
But you mentioned, like, PyPI — or somebody mentioned, like, PyPI or npm and so on. You know, if we can eventually get this, like, metrics.openssf.org, and it pulls in these reviews — and then on PyPI you say, "hey, you want to learn more about security? Click here," and it jumps to the metrics.openssf.org page, which includes those reviews as well as metrics and so on. Yeah.
C
Yeah, I like that approach. I think, ultimately — again, because I like this idea — the scenario that I'm interested in is this gating scenario. So if I'm an organization — any organization, like maybe Microsoft, but any organization — it really would be convenient if there's just one location I can go to, or one API I can use, to get information about projects or packages or whatever.
A
Yeah — so there's no reason why we couldn't be that. So, the same way that — and I was looking for that — one of the package managers goes out to Sonatype looking for license info, there's no reason why the package managers couldn't all be invited to query us for the review content and then say, "oh, there have been two reviews of this; click here to go to the page," or —
B
Yeah — in fact, I think it'll be a lot easier to get people to join with "the more the merrier," where, you know, we pull in data and we provide links off to them; and if they pull in different data, that actually increases the likelihood they'll want to come here — because, "oh, this is where I can find the other information." Yeah.
A
Yep. I think this is also — so, one of the problems that we've had — well, not problems, sure, challenges, whatever — has been: you know, given a pool of money, how do we spend that pool efficiently to get good security work done? So if we had the top 1,000 projects in each ecosystem, it wouldn't be insane to think that we could get those all done in a relatively short amount of time, given some funding and contractors and stuff. So it doesn't need — what I'm afraid of with this is that we start out, and it's the, you know, "come to a party and bring food," but everybody thinks somebody else is bringing food, and there's no food. So —
C
Now we're talking about an overlap, or an intersection, with what's happening in the Securing Critical Projects work — exactly.
J
Michael, that idea does seem very nice — reviewing a package by different companies — but then, when it comes to an open-source platform like OpenSSF, I think there might be an issue in verifying, or trusting, the party who's providing the review. Meaning: how do we trust the reviewer? Yeah — that's where I'd love to get some folks from the in-toto project involved in this.
D
I think that's the aspect the crev project tried to tackle. I think it's kind of like a chicken-and-egg thing, where they tackled the technical aspects of how to figure out who did the review and stuff, but they didn't actually get anybody doing the reviews — where Michael's approach is kind of coming the other way: let's just get people doing reviews first; you know, bringing food to the dinner.
J
Right, and we can get better over time. I think there might be one way — which, at least personally, I think looks very nice — something like what Let's Encrypt does: they do a DNS validation, or that special-path check, right? I mean, they have that DNS validation, and they make an HTTP call to a path, and we should have the records ready.
J
Now, we already trust domains based on the certificates, the certs, et cetera — whatever is provided. So if it is being verified by — I mean, for any company with an online presence, a DNS validation could be a solution, saying: yes, this was provided by us.
A
Oh, yeah, yeah — so when somebody submits a pull request and says, "this is from, you know, microsoft.com," you're like: is it really? So you email Mike at Microsoft and say, "hey, did you submit this review?" and I say "yup," and then it's validated. Correct, yeah. Yes, I agree — something there. We could also do, like, you know — it gets some sort of provisional status until somebody else, like, gives it a thumbs up too. And we could do all sorts of —
C
If we can — yeah — let's note that. And I'm sorry to move this along, but we have just a few minutes left. I'm really excited about this project, and I would like to include it in our upcoming press release — and so that means we need to think about: okay, what needs to happen, you know, ahead of including it in the press release?
C
So — David asked about where do we — I think it would be great if there's something that we're comfortable letting people see — and maybe we, you know, mark it clearly as alpha — but we have an alpha site up, and we announce that in the press release. Michael, we had exchanged email — just a quick email — over the holidays, and I think that's something that you were interested in as well. Thoughts from others, maybe David, on what we need to do?
C
I know we need to think about, you know, what's the URL where we expose it. If we do use metrics.openssf.org, I think it's very easy — I had a conversation with Lindsay earlier about that, and I think that'd be pretty easy to bring up. I also did get another domain name that we could use, which is an awesome domain name — anyway.
C
So there's the question of location. But, you know, maybe we should just kind of jot down what things we need to do ahead of time, to feel like it's ready to be included in a press release. Sorry.
C
Oh — well, good question. I guess I was thinking the dashboard first. Although, actually — Michael, if you were ready to contribute — you know, I might want to wait on the code review, because there might be others. I mean, it's great that it'd be Microsoft providing something, but we might want to do it with a few other companies, if we think there are others that want to participate. So — yep.
B
I think, in some ways, I don't want to use that shot for something that's still pretty early. So, you know, I would say: hold the powder on that one, because that is very much worth its own thing, and I would want it to be a little more baked before putting it in a press release. That said, I am excited about it. As far as the metrics go — I mean, the bar is whatever we think it is for putting something in a press release.
B
Michael has officially demonstrated it, right, to the working group. So as far as, "hey, if you want to put in the press release that we've already demonstrated the metrics" — you know, yeah, we can check that box as of today. It would be awesome to have a website up so that people can actually see them, honestly.
A
So the website itself is public. What it needs, obviously, is a landing page, and something to say, like, "what am I looking at," and all of that. So what I would imagine is a page — so metrics.openssf.org takes you to a home page which says, "hey, this is what the metrics project is."
A
"Click here to view the dashboard" — and then it goes to something like this. And what we can do is also clean this up, so that the only projects we have in there are things that we have, like, good, complete-ish data for. So we kind of make sure that people's first viewing of it is — you know, that they are wowed.
B
Yeah — I'm looking at the date; it's January 6. I hate to over-promise in a press release. I think it will be okay, in a press release, to say that, you know, we've had an early demonstration of the metrics dashboard, to allow people to see various security data, and we're working on turning it into a website and such. I'd hate to try to press that in just a few days. My experience has been —
B
— about not telling people — because you've got a dashboard, right? You've got it; I mean, we can improve it. I think there are some metrics that should be replaced, and I think there are some changes — but, gracious, I think that we could say something about: you've demonstrated, you know, an early version, and we're going to turn it into a website. And then, you know, declare victory as far as a press release goes, because people who are interested in that will then know that it's coming.
A
We could do that, yeah — and then it's just a matter of replacing that with the content and notifying people. So, okay: do you know when the next press release after January would be? In April — so it's every quarter — so then April would be a good time to announce:
A
"The dashboard is now publicly available and we've worked out all the kinks" — and if we could do the security reviews at the same time —
B
Yeah — and I have to admit, I haven't seen many press releases in January that have a lot; most people aren't doing a lot of work at the end of December to announce in January. So I think you've done quite a lot in quite a short time, and I think delaying a lot of the goodies to April is just fine.
C
Yeah, yeah. So we can focus on just announcing the project, and then we can say that it's been demonstrated, we can talk about the goals, and then, you know, "here's how you can find out more, or participate." So — cool.
A
Okay — so let's make sure we don't lose the — we do need to get the domain; that's important.
G
Right. I don't know if it is important, but if we want to add a user login, or collect user data like tokens and similar, maybe we need to be compliant with GDPR in the —
B
Yeah — just, you know, frankly, if you wanted to: creating a page with a couple of links and saying what this is going to be about, so that the search engines, particularly Google, discover it and it starts getting linked. Because my experience has been: you spend a lot of work creating something, and then you spend the next couple of years trying to get people to hear about it.
C
Yeah — Michael, let's talk offline, but we should just loop Lindsay in, yeah, and she can help us get a page up. And then we should get together the content that we want on the page, and then also start reviewing that — we want to make sure that this is visible before we do it: visible to the TAC members and also to the governing board. But Lindsay and I can help shepherd it through. Perfect. Okay.
A
Awesome. Final question, for the security reviews: what would I — what do I need? I think the most important thing — like, the biggest rock that we need to get over — is that I need another large organization to commit to "I am going to do X number of reviews and contribute them," or "I have already done X and I will contribute them." But it has to be, like: who's actually bringing, like, soda to this party?
A
If
you
I
will,
I
will,
I
will
take
it
and
I'm
gonna,
I'm
gonna
push
on
dan
and
kim
to
see
if
we
can
get
google
to
commit
a
couple
or
commit.
You
know
a
lot,
and
anybody
else
is
welcome
as
well.
A
Wonderful. Thank you all very much, happy new year, and looking forward to our next meeting on the — wait, sorry, before I leave: the next meeting is going to be, like, the 18th, I think. If that's super bad for anybody — is that, like, Martin Luther King Day? I don't know if it's the day or the observed day, because my kids have off from school on, like, the Friday and the Monday. So if it's a bad day, just ping me and we'll reschedule; but otherwise, I'll keep it on.