From YouTube: Scorecards Biweekly Sync (June 15, 2023)
A: A few minutes for people to trickle in, and then probably get started in about three minutes.
A: All right, gonna give folks one more minute to trickle in, and then I will get started.
A: All right, welcome everyone. For those that don't know me: I'm Spencer, I'm one of the scorecard maintainers, and I am today's moderator, so I'll just be going through and keeping us on the agenda.
A: Please fill out the attendee list in the working group notes. The link has changed since our last meeting: the Linux Foundation has moved some documents around, trying to get them into shared folders that they control, so that, you know, when people change companies it's not as much of an issue with people not having access to docs. And the benefit now is that people can have access to the doc without needing to join the scorecard-dev Google group. So hopefully that's easier for people to check on the meeting notes.
A: If you haven't already, please fill in your name in the attendees section. And normally we provide a little time at the beginning for anyone that's new to the group to optionally introduce themselves: say what they're, you know, hoping to get out of scorecards, attending the sync, contributing to scorecard, etc. So if there's anyone that wants to introduce themselves now, I'll take a little bit of time to pause.
A: All right, I'm seeing mainly familiar faces in the chat, so I will go ahead and start with the... sorry, just reading some of the agenda. One thing, I guess first in the agenda, is something that I'm proposing. Previously in the meetings (if you scroll back to November or so... maybe not November, I think it was probably January) we made a scoring change to code review that ended up affecting more repositories than we thought.
A: So we ended up having this issue where we'd like to catch scenarios where scoring changes catch us by surprise and then we have to react to them. The solution is based on something a contributor had done in some of their PRs, where they say: this is the change I'm making, and based on some 300 sites the score changes in this way. So I'm proposing a design document. This is a two-page doc trying to say, you know, this is how I'm going to solve this problem.
A: So I won't go through the entire doc, but it's pretty short, it's two pages. I'm proposing a tool that will be stored in the scorecard repo, and it centers around a tool to generate results (this is just running scorecard on, say, the 250-300, I'm open on numbers, repos that we want to check) and a tool to compare two sets of results. I'm just envisioning storing the JSON results that scorecard produces, maybe in a certain combined structure, but this is the JSON that, you know, people are used to seeing when they run scorecard.
A: So compare will do the diff and highlight results. Stats might be a nice way to visualize, saying like: X repos got a score of zero, Y repos got a score of two, etc. And then some sort of command to say: I've looked at the differences, I accept that this is the sort of change that we want to see. And then there are some options so that it could operate on a subset of checks.
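The generate/compare/stats flow being described could be sketched roughly as follows. This is an illustrative sketch, not the proposed tool itself; the JSON field names (`repo.name`, `score`) follow the general shape of scorecard's JSON output, but treat the exact schema as an assumption:

```python
from collections import Counter

def score_distribution(results):
    """Count how many repos received each aggregate score (the 'stats' view)."""
    return Counter(r["score"] for r in results)

def diff_results(old, new):
    """Compare two sets of scorecard results keyed by repo name.

    Returns (changed, dist_old, dist_new), where `changed` maps a repo
    name to its (old_score, new_score) pair whenever the score moved.
    """
    old_by_repo = {r["repo"]["name"]: r["score"] for r in old}
    new_by_repo = {r["repo"]["name"]: r["score"] for r in new}
    changed = {
        name: (old_by_repo[name], score)
        for name, score in new_by_repo.items()
        if name in old_by_repo and old_by_repo[name] != score
    }
    return changed, score_distribution(old), score_distribution(new)

# Example: one repo's score drops after a proposed scoring change.
old = [{"repo": {"name": "a/a"}, "score": 8}, {"repo": {"name": "b/b"}, "score": 5}]
new = [{"repo": {"name": "a/a"}, "score": 3}, {"repo": {"name": "b/b"}, "score": 5}]
changed, _, _ = diff_results(old, new)
print(changed)  # {'a/a': (8, 3)}
```

An "accept" command would then just copy the new result set over the stored goldens.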
A: So the goal is that when a scoring change is proposed, one of the maintainers will, you know, maybe label the PR in a way that kicks off this workflow and says: you're changing code review, so let me run the code review test on these 300 repos and see if the scoring distribution changes in a way that we weren't expecting. Yeah, so this is the document; it's open for comments by all. All right, sorry, just seeing Pedro's thing. Hopefully everyone can see the screen share. If anyone has any high-level thoughts,
A: I'd welcome them now, but otherwise the doc is open to comments by anyone, so I'm always open to coming back to this at a next meeting if we don't reach sort of a consensus or anything. Yeah, Pedro?
C: So, I mean, just to clarify: this here is not so much to detect changes in scores, but to study the impact of changing the scoring? Like, scoring changes will be detected by unit tests, but the idea for this is to see: okay, we thought this was going to be a small change, but it actually decimated everyone's code review scores. Is that the
A: idea? Yeah. So, like, we have unit tests and we have end-to-end tests right now that look at individual repos or individual files, depending on the check, and when we're making these changes, those tests get modified, because, you know, they need to in order for the CI/CD to pass. But, you know, someone says "oh, I wanted to make this change, so yeah, the score is going to change", but they only check it against one repo, whereas by running it against, say, 100 or 200 or something like that...
C: And would these goldens be like live projects? Would we, I don't know, have like numpy and other live popular projects, and then we run the check using the old version of scorecards and we run the check with the new version of scorecards and do a diff? Or would we have like 100 or 200 forks that are static at a given commit, where we know what the scores are meant to be?
A: So I had envisioned doing it on live repositories, because it, you know, gives a better sample of what people are doing, instead of just something static. It does have some downsides, of changes being made outside of scorecard code changes: a project could, for example, say "I'm no longer doing code review", or start doing code review, and then the score would change outside of our control.
A: I think your strategy of running scorecard again on an older version, to sort of eliminate that, would work. But I was envisioning storing the results with the repositories, the set of goldens, to cut down on having to rerun scorecard. It does run into those scenarios where, if a repository changes, you know, outside of scorecard's control, then that could potentially cause a golden test failure, which is one reason I was trying to make it easy to update this.
A: I have something that says this would also be good to run... yeah, yeah, like weekly or something like that, where even if something isn't being proposed, or we're not about to cut a release, we just sort of make sure that our goldens are accurate.
A: I did copy this from an internal doc that I was just noodling in, so I might not have copied that bit over. But I will say that, you know, having a weekly scan, either in GitHub or in the cron, makes sense to me. I think there are some benefits of doing it in the cron, as our token quota there is a bit higher than what our repository might have. But I will add that, yes. Yeah.
D: And I also thought about, given the random subset in the nightly: I wonder if there's something interesting if the nightly was like 5,000, but it's 2,500 random one day plus the 2,500 from yesterday. So it's basically just a moving set, and then you could actually compare it on a nightly basis, which would catch potentially some of the outliers that might not be in the goldens. Think of it as a sliding window, effectively.
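The sliding-window idea just described (each night re-score yesterday's fresh half plus a new random half, so consecutive nights always share 2,500 repos to diff) could look something like this. The repo universe and the window size here are placeholders, not how the cron actually picks repos:

```python
import random

WINDOW = 2500  # half of a hypothetical 5,000-repo nightly budget

def next_nightly(universe, yesterday_fresh, rng):
    """Pick tonight's repos: yesterday's fresh half plus a new random half.

    The overlap (yesterday_fresh) gets scored on both nights, so those
    repos can be diffed night-over-night, while the fresh half rotates
    new repos (and potential outliers) through the comparison.
    """
    exclude = set(yesterday_fresh)
    remaining = [r for r in universe if r not in exclude]
    fresh = rng.sample(remaining, min(WINDOW, len(remaining)))
    # (tonight's full run, tonight's fresh half to carry into tomorrow)
    return yesterday_fresh + fresh, fresh

rng = random.Random(0)
universe = [f"repo-{i}" for i in range(20000)]
day1_run, day1_fresh = next_nightly(universe, [], rng)
day2_run, day2_fresh = next_nightly(universe, day1_fresh, rng)
# Every repo in day 1's fresh half is re-scored on day 2.
assert set(day1_fresh) <= set(day2_run)
```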
A: Yeah, based on how the random repositories are picked, I think that would require some sort of re-architecting... sorry, not re-architecting, but we'd have to change how the magnitude...
C: You know, just on that point of the randoms: it sounds good, but then, how would that work with the idea of having fixed scores for projects? I mean, would we just be relying on the cron job's scores from Monday when they're running the Friday nightly?
A: So I'm not sure. Right now the cron checks a random subset, and what the nightly is doing is, like, image promotion from latest to stable, which the rest of the cron uses. So another problem that I haven't mentioned is that the cron is set up in a way to just say: did the runs succeed without hitting a runtime error? And these golden tests are looking to answer a different question: not just "did the run succeed", but "did the scores change".
A: All right, so: is anyone thinking this is a terrible idea? Is this, you know, misguided? Or are people thinking, you know, this is generally the right idea, "I might read through this later and leave a comment"? I'm just trying to get some ideas before I start actually writing code. But I see some thumbs up.
E: As a maintainer of a project that uses scorecard, I would find this useful if I could check scores over time, which is something that is a lot of picky work right now. I realize that's not the target use case, so I actually want to ask about that: is the target functionality related to maintaining scorecard more than to using scorecard?
A: It's primarily aimed at saying: I have two points in time, and I want to see if there are any differences in score between them. It can be used to... you know, I thought: oh, if I pass in two data sets, I can visualize them both. Are you familiar with the BigQuery data set?
A: Okay, so if you go to the scorecard readme... let me find the right tab.
A: So in the readme there is a section about public data, and one of these queries is for when you may be interested in how a project's score has changed over time. So there is a BigQuery query right here, where I'm looking at the scorecard repo and I'm going to order it by date, because we stick all of the weekly runs into a table. Ordered by date, you can sort of graph that and see how a project's score has changed over time.
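The readme query being described boils down to something like the following sketch, which just builds the query text. The dataset/table name and column names below are placeholders, not the real public data set; the actual query is in the "public data" section of the scorecard readme:

```python
def score_over_time_sql(repo, table="my-project.scorecard_dataset.scorecard_results"):
    """Build a query pulling every weekly run's aggregate score for one
    repo, ordered by date so the results can be graphed over time.

    `table` is a placeholder, not the real scorecard public data set.
    """
    return (
        f"SELECT date, score\n"
        f"FROM `{table}`\n"
        f"WHERE repo.name = '{repo}'\n"
        f"ORDER BY date"
    )

print(score_over_time_sql("github.com/ossf/scorecard"))
```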
A: Then, yeah, I think that's the same as what we're trying to catch here: someone is proposing a change and they want to see how it affects the score for N repos, the change being a change in scorecard code. But I guess it could be a change in, like, repo administration too. So if you wanted to apply some change to X GitHub repos, you could have a run saved from before, then make the change, then do it again and see how it changes.
A: Cool, yeah. I think, in the interest of time, it looks like we have a pretty full agenda. I will probably shoot you a Slack message, if you're in the Slack; otherwise I can... okay, but yeah.
A: All right, cool. So moving on, next is about unpinned dependencies and unprivileged workflows. Jogo?
G: Hey, hey Schultz. So yeah, I wanted to bring some attention to this issue. I am currently assigned to it, but I'm not sure whether we want to have this. The original idea was to have either zero or a lighter decrease of the scorecard score on unpinned dependencies, if the unpinned action doesn't have any dangerous permissions.
G: So if you have, like, read-only, or just contents: read permission, it could be unpinned if the maintainer wants. There are some branches here: like maybe just reduce some very small score, like 0.1 or something like this, or just not reduce at all. But the point that was brought up in this discussion is that there is the action from GitHub, upload-artifact, and this doesn't require any permission.
G: It can be used with read-only permission or, I think, even with no permissions at all. And this action is used in some scenarios for critical processes. I have this example in which some workflow creates the build and uploads it using the upload-artifact action, and another workflow downloads this artifact.
G: So this could still be a very dangerous workflow if they're not pinned. They have all read-only permissions and don't have any secrets, but it's still a critical piece of code, and so they should be pinned. So yeah, I really wanted to know what maintainers think of it. Should we go ahead with this idea?
G: Or have, how can I say, a lower decrease of the score. Like currently, if you don't pin an action that is used with write permission, or if you've not pinned an action with read permission, you have the same decrease of score. Do you want to change that or not? I think that's the question.
A: All right, so going to your first question, about unpinned dependencies and unprivileged workflows: I think it's unfortunate, you know, that the permission model lets this be a problem, but thank you for finding an example of how a repository uses this. I guess it makes it hard, because we can check: do any of the workflows in this repository call upload-artifact or download-artifact, or something like that?
A: Yeah, I think...
C: It just occurred to me, to capture this specific case where the artifact is generated but is then automatically published by another workflow: I don't know if it's technically feasible or plausible, but we could check: is this workflow that uploads artifacts
C: used as a workflow_run of another? Does it trigger another workflow to run that then downloads this artifact? But that would probably be a pain, and it also wouldn't catch a case that I haven't seen, I believe, but that likely exists: where someone has a workflow to build the artifact, but then they just download the artifact and publish it manually from their laptop. Which would be a bit of a weird situation, but it probably exists.
C: Yeah, I mean, just to be clear, just to clarify: the pinning here is just regarding, shall we say, build-time dependencies in workflows. So the thing you're producing, that is just published, can remain just version-pinned, and that's absolutely fine.
C: I really like that that's clarified, but the language... My understanding is that when you do an npm publish, for example, the package.json isn't really interpreted; it's just bundled in with the package and shipped off.
C: So that can be version-pinned, range-pinned, whatever, have fun; scorecard doesn't care. It's just the dependencies that you require to build your package, or in other languages, the actions that you run prior to doing an npm publish, all the things that you do in order to publish: those should be pinned. So everything that's going on inside your GitHub runner should be pinned, but the actual package that is then submitted can be pinned or unpinned or whatever you think best.
H: Yeah, I mean, I think in any case like this, where there's an action or, you know, a state where scorecard can't determine if it's safe or not, honestly I feel like it should lean towards not deducting score for it. I think any time that scorecard produces a low score for somebody's project, and they look at it and they see "oh, what I'm doing is perfectly safe and scorecard is wrong",
H: scorecard loses credibility. And this is another case of that: I have workflows in my GitHub repository, and they are read-only, and I don't want to deal with, you know, pinning my actions by hash. Scorecard telling me that I'm doing things wrong makes me think: oh, scorecard is not a useful tool.
H: So while there are situations where, you know, you could get around this and scorecard won't detect it, that's something we can call out in documentation; that's something we can put into best-practices guides for how to use GitHub Actions safely: these are things that you need to watch out for. But I think that, in general, having scorecard lean towards not punishing people in situations where it's not correct is good. Now, that's counterintuitive to saying that scorecard should be aiming for, like, the most secure configuration possible.
A: Sure. A lot of the GOSST upstream team can tell you that maintainer feedback and false positives are a big problem with adopting scorecard. So I think, you know, there's the expression "don't let perfect be the enemy of good", or something like that, where this helps people start using scorecard more: pinning when it matters and not pinning when it doesn't. I know one person that comments on some of the issues
A: a lot, trying to say: maybe from a security perspective it doesn't matter if you pin, but sometimes there's a stability aspect, where if you're pinning your linter on, like, a major version, and then they bump a feature version, and now there's a new linter... I've had repos break from pinning like that with a certain linter. But yeah, I see what you're saying, about how if we fail only when we're certain, then it builds trust for when we do find something.
I: Noisy tools complain about things that aren't a problem. Someone who's concerned, who says "I am not sure I trust you at all, give me really good confidence that this is okay", is going to want to hear about the things that we can't be certain of. I think it's fine if the preference is "if we're not sure, we'll assume it's okay", but in that case it probably is a good idea to write that down somewhere, so we don't have to keep coming back to have the argument, you know, if...
I: "If we're not sure, we'll assume it's okay", because we do not want to give false reports that there's a problem. Yeah, but throwing a warning doesn't help... oh, okay. Yeah, I mean, you can comment, but you still have to decide, since you have to put out a score: what's your default score? Yeah, you could put out a warning that says "I can't tell, therefore I'll give you that". That's actually not a bad compromise, but I think it would be good to write that down.
A: All right, yeah. So in the chat, Pedro's like: don't punish, but throw a warning message. That's something that, you know, the same... well, maybe not the same check, but token permissions: people liked using certain permissions, and previously we would subtract 10 points if it wasn't on an allowed list of actions.
A: So back in October that changed, and the solution was, you know, "I'm going to warn but not punish" kind of thing. Yeah, I think there's a lot to discuss here, and it's more than just this issue: this notion of what scorecard should do when we're not sure, and the notion that David put forth of different people caring about different things.
A: So some static-analysis tools have, like, a confidence level, where you can say: only show me something that you're 100% sure on; only show me something that you're, like, 50% sure on. So maybe a flag is needed to say: should I err on the side of caution, or should I err permissive? So that might help address the two different audiences.
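The confidence-flag idea could look roughly like this: each finding carries a confidence, and a flag controls the cutoff below which the tool stays silent. The `Finding` fields and the threshold values here are illustrative, not scorecard's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    message: str
    confidence: float  # 0.0 (a guess) .. 1.0 (certain)

def report(findings, min_confidence=1.0):
    """Return only findings at or above the cutoff.

    The default of 1.0 models "err on the side of not punishing":
    anything the tool is unsure about is dropped. A stricter user
    could pass, say, 0.5 to also see speculative findings.
    """
    return [f for f in findings if f.confidence >= min_confidence]

findings = [
    Finding("unpinned action in workflow with write permissions", 1.0),
    Finding("unpinned action in read-only workflow", 0.5),
]
print([f.message for f in report(findings)])       # only the certain finding
print([f.message for f in report(findings, 0.5)])  # both findings
```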
E: If it's useful, then it's worth scoring, and if it's not worth scoring on, then it's not useful. And I lean towards the permissive approach here, which is that I don't think this read-only context, where there are no secrets available, is risky. And I can tell you, in my projects I get a lot of pushback from the pinning, because people believe that it prevents us from picking up...
G: So yeah, I just wanted to point out the idea. I believe we cannot think only in terms of "should it be punished or not"; we can think of less punishment as well. Like, if you have this really rare case of security risk, for example for this upload-artifact, we could decrease a very low amount of score
G: if you use an unpinned action with read-only permission. So even like 0.5 or 0.2, I don't know, just to make scorecard also represent these cases, but it wouldn't get any maintainer mad: to have a 9.9 out of ten, or a 9.95 out of ten. So I'm really looking forward to this idea. What do you guys think?
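A sketch of the tiered deduction being proposed: unpinned dependencies in workflows with write permissions keep a full penalty, while unpinned-but-read-only ones cost only a token amount. The 0.2 and 1.0 penalty values are just the numbers floated in the discussion, not decided behavior:

```python
MAX_SCORE = 10

def pinning_score(unpinned_privileged, unpinned_readonly,
                  privileged_penalty=1.0, readonly_penalty=0.2):
    """Deduct heavily for unpinned deps in privileged workflows and only
    a token amount for unpinned deps in read-only workflows, so a repo
    that is careful where it matters can still score ~9.8 instead of
    being treated the same as a fully unpinned one.

    Penalty values are illustrative placeholders.
    """
    score = MAX_SCORE
    score -= privileged_penalty * unpinned_privileged
    score -= readonly_penalty * unpinned_readonly
    return round(max(score, 0), 2)  # round to avoid float noise

print(pinning_score(0, 1))  # 9.8: one unpinned action, read-only workflow
print(pinning_score(3, 0))  # 7.0: three unpinned actions with write perms
```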
G: Because Spencer said about this possibility of, inside scorecard, you say: oh, do I want to have a strict review of my security, or, like, a broader, more sensitive one. But the thing is that scorecard is also seen by, like, external people, so it should be impartial; everyone should get reasonably impartial results. So I think having these small decreases kind of goes to this.
A: There's support for going forward with this, even if in, I don't know, one-in-10,000 or one-in-a-million scenarios it could negatively affect someone. So I think, you know, since there's that consensus, I'll make a note to make a comment on the issue after this meeting, and any overflow discussion can go there. I'm also reading in the chat that Argov says structured results feel like they could go a long way here.
A: We'd create findings, and a maintainer can choose which probes to ignore or use. And he says that he wishes there were a threat mapping, or some sort of threat model, for the GitHub API permission levels and what a given permission allows someone in the workflow to do. So if something like that existed, it would help if it could be in scorecard.
G: Yeah, I feel that's maybe something that scorecard could bring up, because I haven't found anything like this.
J: ...checks, so that open-source maintainers can get a higher scorecard score. And currently, for two scorecard checks, for pinning of dependencies and for token permissions, the scorecard action points to the Step Security app page, and I wanted to discuss a couple of, you know, potential updates we can make there to simplify the deployment of remediations.
J: Okay, so this is a public repository in my account, and, you know, I have deployed the scorecard action and it has created a bunch of code-scanning findings. So if you look at one of these token-permissions findings right now, it provides a link to this app.stepsecurity.io page, and when you go there, it automatically adds the permission.
J: We realized that many open-source maintainers, when they come here, get confused. What they're supposed to do after this is either click on copy (they should copy this remediated workflow, and then they can go and create a pull request in their repository), or they can click on this other button. But, you know, we realized that open-source maintainers come here and get confused, and since we also know the URL that was visited, oftentimes
J: we see that the remediation is not deployed. So the proposal here is... we already provide a button called "create a pull request". When users go here, you know, we take the repository as an input and we figure out all the ways in which, you know, we can actually improve the scorecard score on behalf of the maintainer. So we analyze the repository and we say, you know, we can fix token permissions in these files.
J: You can also pin actions in these files, and so on. In addition, we also deploy the scorecard action itself, if it is not present, and maintainers can pick and choose. So, for example, if they don't want to pin their dependencies, they can just uncheck this option, or they can, you know, selectively choose what they want to deploy, and they can click on
J: this "create pull request" button. And what this will do is make these changes on behalf of the developer and create a pull request in their repository, which they can then review and, you know, merge if everything looks good to them. So in this case, you know, we are basically simplifying the remediation workflow, where developers don't have to copy-paste; I think the entire experience is quite streamlined.
J: So the proposal that we're thinking of is, you know, when someone clicks on this link, instead of showing this as the default view, we can show them this as the default view, where they will be able to see all the findings. And we'll provide a button here so that, if they just want to fix that particular file, they can go back to this one and copy-paste it. And the idea is that, hopefully, since we are simplifying the way someone can deploy these remediations, this will actually, you know, improve their scorecard score.
J: So yeah, I wanted to open up the floor and, you know, ask and get your feedback, if someone has any concerns, questions, or feedback on this proposed change.
A: Is it possible to only have, I guess, the token-permission checks on by default, and then people can enable more? So, like, instead of everything being enabled: does it make sense that, if someone clicks a link from a token-permission comment, then it's just recommending token permissions by default?
J: Yeah, that's a good point, and that is certainly something we can do. So when someone clicks on this button, by default we'll only show them token permissions for this file, and we can keep everything else unchecked, or we can hide all the other recommendations. Yeah, that is something that we can do.
A: Because I still think it's very convenient for people to be able to fix multiple things at once. So if there were a way to say: this is the page, and we're going to generate a pull request for this, and then allow people to click those other boxes
A: instead of making them unclick certain boxes, I think that works a little better. Yeah, I think the auto-generated PR thing is super nice from a usability perspective, and I wouldn't be opposed to changing the link, like, which remediation page they're landing on.
J: Yeah, that's good feedback, and this will not require any changes in scorecard. So we can update this page itself so that it will automatically take them to the pull-request experience.
I: Yeah, anything that we can do to help people actually use this. In fact, I've been pointing people to the main OpenSSF Scorecard page, but that probably isn't really the right place, so...
B: Yeah, I think I was thinking of, like, in Actions there may be a way of, like, when an action runs, creating an annotation on a pull request; there may be a way of doing, like, a patch suggestion or something.
A: So scorecard has, like, a run-on-pull-request option. I don't think it's overly supported right now; potentially I could see something there. I just don't know how that would integrate with... I mean, does Step Security have, like, a GitHub App that could sort of catch that?
J: So it's actually based on a project called secure-repo; it's an open-source project by Step Security. I can certainly look into it and see if it is feasible or not. Right now there are APIs, you know, that generate this patch, but I'm not sure if that would be the right model, as if you want to generate the patches, then there will be an API dependency, based on the current model.
A: All right, Choko.
J: Yeah, so we certainly want to make sure that, you know, people don't spam this on repos that they don't own. So for someone to use this portal, they need to log in, and we only, you know, need public information, so we only need their GitHub account. And then, for them to be able to create a pull request, they need to be a maintainer, so we look at their repo and make sure that, you know, they are.
J: Yeah, that's a good point. They can certainly use this experience, or this experience, because we are just generating the diff. We also support the preview feature: if you come here and you analyze... for example, if you don't want to create a pull request, one can also generate a preview, and these previews are generated in our bot account.
J: So maybe we can enable this feature wherein someone can generate a preview, not relying on the fact that the user needs to be a maintainer; anyone should be able to do this. Or the other option is we actually take them to this page, you know, wherein they can see the fix and just copy-paste it. So this preview is generated under our bot account, and we are not really spamming that repository.
E: Spencer's earlier point, about the context of the particular problem that's being reported on, makes me realize that it may be confusing to users to see this screen, the analysis screen, that has results related to a bunch of different items, and there may be, you know, usability problems related to that. You might want to actually activate an affordance to fix everything, something like that, or at least indicate in the user interface: this is everything over here, and this is the thing you came to see.
J: Yeah, that's good feedback, so we can certainly make that change, because we know why the user is visiting this page and which file they are interested in fixing. So in the UI we can clearly differentiate: you came here for this file, so you can just fix this file if you want; or, if you wanted other additional changes, here are other fixes that you can apply automatically.
H: Yeah, if you can just click on that... More of a question: is this unintended behavior from scorecards? Kind of embarrassingly, we had been on a really old version of scorecards in Allstar, due to, like, the memory-usage issue, up until recently. So we updated from, you know, like a year ago to now, and if you go to the other link there: we used to get, if we got a failed score, the logs, and then we'd just display all the logs. But now, all of a sudden, we're getting a whole bunch of empty lines.
H: We were getting, let's see, we were getting the check detail back and just displaying it all, and now we're looking at the finding's location to see if that exists or not. Was this an intended change, or is there something that I'm missing here with the logs? Maybe, if you don't have the answer now, we can discuss offline, but I just wanted to bring it up.
A: So my first thought is that it probably has to do with structured results. We've been trying to make it so that nothing is changing if the experimental flag isn't on, but I would have to double-check; that's where my first understanding is going. I know Laurent has already touched, say, the token-permissions check. So if this is just affecting token permissions, then I can look into that and see if something slipped through the cracks, but I don't think it's an intentional change.
H: Okay, okay. So yeah, just FYI then. If you take a look at that token-permissions change, maybe there are some unintentional logs there. But yeah, we're filtering it, so we're okay on our side.
A: Cool, yeah. No, thanks for bringing this up. Feel free to make an issue on the scorecard side of things, but I'll...
K: Hey, sorry, just trying to sort out the audio-feedback issues. Yeah, just a note to make sure that, if there's any extra feedback on the proposed contributor model and the next things to do there, just to raise it up. I'd like for that to make some progress.
A: I know folks have added some comments. I don't know if all of them, I guess, have been addressed, but yeah, it'd probably be helpful if you could throw a link into... I guess, if we scroll down, it's there too, but yeah.
H: I can give a quick summary on that; I saw that from Dustin. So the discussion at the TAC level, or the general consensus, seems to be that people are supportive of requiring, even requiring, a community ladder at the OpenSSF level, and saying you need to have something that lists how you become a maintainer and who the maintainers are.
H: But most people thought that the proposed one, the scorecard-proposed one, is, like, too detailed and just too much to take on for all the projects, or for a small project. So I think the discussion is: either it's going to be a much simpler template, or no template at all, and just kind of link out to other examples, you know, like the scorecard one if it's adopted. So for the scorecard community, I think the TAC doesn't have specific feedback other than that.
H: "This is good, continue": this is something that they're going to want to see in all the projects, and, you know, scorecard should be free to adopt this more detailed one if it makes sense for the community, and that's totally fine. But I think it's up to this community to kind of comment on and go over the proposed one, and then adopt it.
C
So I just added here to the minutes the link to the draft proposal.
C
If you look at it, the way it's written now, it is still very much written in terms of the broader OpenSSF, kind of like "this is the template for everything." I would be happy to change this and kind of remove all the OpenSSF fluff and make it scorecard-specific, but I also don't know if that is something that people want, or if you'd prefer that I hand this off and the maintainers take the lead on this. I'm happy to make the changes, but obviously I'm not the person that should make all of them.
A
Cool, yeah. I left some comments, and I know Jeff left a bunch, as well as a few others. So I think, ultimately, yes, it is up to the maintainers for the final say, I guess, but it is good to have some sort of consensus.
C
Yeah, so just to clarify: would you rather I just hand this off and you all take care of it? I'm perfectly fine either way, just clarifying whether I should work on this or not.
A
Yeah, let me read through that and I will get back to you. I think, in the interest of time, just trying to get through these last two things, the two minutes are better spent on the last two bullet points, but I will get back to you.
A
All right, David.
I
Yeah, so sorry for some procedural stuff, but basically I'm looking at the Google Doc for our notes and thinking, I don't really like that anybody in the world can just show up and edit and delete everything. That seems like a terrible idea. We really haven't had any problems, and I don't have any trouble with people coming by and making suggestions, but I'd rather not have strangers just delete everything. So I'd like to lock it down slightly, so that anyone can make a suggestion. I did make an attempt to add, as one of the editors, the mailing list, the OSSF Scorecard Dev, as a Google Group, and it seems to note that there's a number of members. So maybe that's enough, and at the end I can just switch this over so anyone can be a commenter instead. Is that going to be adequate for the purpose?
A
I think that's good. I chatted with Amanda when she made the switch, and she also echoed that there haven't been problems so far. But other people have, you know, mentioned, like, "hey, maybe having everyone be able to edit isn't the best idea." So.
I
Fine, okay. I mean, it's okay; you know, these are meeting notes, it's not the code itself. So as far as I know, we're fine with you editing the doc; the goal is really more to help counter the trolls.
A
Cool, yeah. And Pedro, you can join the Google Group in question and then you should have edit access. But if you're fine with just viewing, then, you know, I think this is a better configuration.
I
It's the people who make life less fun for everyone else. So that's it for that point, and the other one, oh, we're at that time.
I
The critical projects working group has created a draft list, and I was just hoping to see if we can get that added to our list of all the projects that are getting reviewed weekly. I think most of these are already there, but I'm not sure what the process is to make the request, so I figured I'd ask here. Oh, they're all there? Oh, even better!
I
All right, fabulous. Okay, thank you so much. Well, that answers my question.
A
Cool, the only thing left to do is to pick a facilitator for next week. So if anyone wanted to volunteer; otherwise it will probably fall on to a maintainer. Yeah, thank you everyone for attending. If you want to facilitate, feel free to just mention it. I'll stick around for 30 seconds, but otherwise see people in two weeks.
A
That'd be cool; I will try. There was a PR that got sent to us from someone that has to do with a data license, saying, like, the data that scorecard provides is licensed under some license. So we have you listed as a reviewer, because this is, you know, over our heads and should be decided by someone at the Linux Foundation. Okay.
I
Throw me the link. To be honest, I mean, are they claiming that scorecard is using some data that we don't have rights to use?
I
Okay, that's fair. I mean, have we ever talked about that? We probably ought to make that clear if we haven't, yeah.
I
Okay, all right. Okay, I'm sorry; well, our meeting is over, but maybe, you know, if that's the issue, okay, can you just throw the thing in chat and I'll, yeah.
I
Okay, yeah, I will go take a peek. You're gonna send me an email or something, and I.