From YouTube: Scorecards Biweekly Sync (April 7, 2022)
C: I realized I have the honor and duty of being facilitator today, so I'll share my screen in a bit; just give me a second to set some stuff up.

C: In the meantime, I see that folks are entering their names on the notes, which is awesome. Please continue doing that. I'm going to share the link in chat, if you... I think you beat me to it. Thank you, Veggie.

C: Edit the template? Yeah, I think we should highlight that the template is there, maybe in bigger words.

E: Yeah, you can make it 128-point; it won't help. I think just put it at the bottom and note at the top: hey, there's a template for...

G: I think both spots. We need a spot to put stuff for the next meeting, in between meetings.
G: While we're on the topic of the doc, did we ever decide what the difference between standing agenda updates and open issues is, or where...

C: All right, are any of these demos? This third-party dependency management checks one, Godofredo?
C: Cool. All right, so it is three past four if you're in U.S. Eastern. Hello, hello, everyone. Today is April 7th. This is one of the biweekly meetings of the Scorecards and Allstar projects. This is a meeting that will be recorded and available on the internet later, so please be mindful of what you say and do. Please be sure to adhere to the OpenSSF code of conduct, which, you know, essentially amounts to: be kind, be awesome.

C: People, be excellent to each other. So yeah, let's jump into it. We have a standing agenda, which I see some people are commenting on; we're moving some things around. Jeff, let me pull your update down and... okay, you already did that. Cool. All right, let's jump in. If there are any new folks, I would love it if you took some time to introduce yourself to the group and what you're looking to do here.
C: All right, so this looks like it's an update for later. Do we have any announcements?

E: You'll get there. I threw in some stuff, and I see other people threw in stuff; you're just not quite there at the...

G: Is it okay? These are kind of new topics: announcements, project updates. Do we have, like, an idea of what kind of scope that is?

C: Yeah, I think we still need to massage this a little bit. I would say that project updates really fall into everything that's being filled out under open issues and one-off discussions. In terms of announcements, I think announcements will hit when we're either getting close to a release or releasing, so we can probably skip this section. In terms of roadmap updates, we'll talk about that in Brian's section around mission and vision updates. So with that said... sorry, Jeff, are you about?
G: I mean, I did do a release of Allstar last week or the week before. It's the first release we've made, because it's typically just used by people as an app, and I have a proposal for a release process, for, like, a cadence, that's not been adopted yet. So that's in the issues, if anybody has comments on that. But we'll be making issues on a reg... I mean, sorry, releases on a regular basis at some point. Cool. So there is a v2...

C: Release. Please check that out if you haven't already. Congrats, Jeff. That's at ghcr.io/openssf... er, ossf... /allstar, v2.0.

C: Yeah, I did note in your release process that you mentioned you're not interested in semver, for reasons, and that's fine. I think we should clarify that, you know, you're using semver-compliant versioning, but without the guarantees, right? Yeah, yeah.

C: I'll tag just Scorecard maintainers in case they want to check that out, and anybody else. Do we have a link to this in the meeting notes?
J: Okay, so I have a small announcement. I'm hoping to put out a Scorecard Action release in the next two weeks with a very simple update, which would be support for the GitHub token. I think we talked about this last time, but I think I'm going to start working on this next week.

C: And thank you to the scribes that are already here; but please, anyone, do feel free to take notes and assist us as we go.

C: Yeah, they've had lots of cool announcements recently. So there's one around the Projects beta updates and adding things to Projects beta boards, and then the other one that you mentioned. So there's some new functionality.

C: I think one from the enterprise side around secret scanning, and then I believe they also released a new API for dependencies. Yes.
E: Is this the one, the rest/reference/dependency-graph one?

C: Oh no, no, no, you're good; we're good, we're all good. All right, cool. Any other announcements before we jump to the next thing?
C: All right. So I don't think it's a "here" announcement specifically, but I do want to mention it, which is kind of cool. So on the Kubernetes side, we just landed some improvements in our image promotion process that will start signing all the container images.

C: Hoping, with the next, the Kubernetes 1.24, release coming out, you know, optimistically in two weeks or less, that those images will be signed by our image promoter, and then we'll start going into the fun stuff of trust delegation for separate teams and such. But yeah, not really relevant to here; I did want to mention it, though, since you're all hip about security.

C: So if you want to read the tracking and try to do the breadcrumb stuff, that's here, and we'll likely do a cross-post, kind of one from the Sigstore side and then another one from the Kubernetes side, and I'll peek behind the scenes for the maintainer stuff. Okay, cool. Anything else announcements-wise before we jump into...
E: So they use badges. I don't know about anything else.

C: It's not consistent across the board, since we've got, like, core Kubernetes, which is managed by, like, everyone; the Kubernetes process is kind of under my team's purview on the release engineering side, but release engineering elsewhere, depending on the subproject, may be handled differently, right?
C: So there are some consistencies, but they're not, you know... So we're personally not using Scorecard or Allstar right now, outside of things that I have run on my local machine. We are... we've enabled... actually, no: we have enabled CodeQL and Scorecard for k/release, which is where we do a lot of our release...

C: ...engineering work. So we are using the Scorecard Action there. As you can see from the Security tab, there are tons of fun things to clean up, which I won't click into, but they're mainly around pinned dependencies. So there are some questions about pinned dependencies that I think will come with some of the notes that Jordan left in Slack, which I transposed into a GitHub issue for tracking, but really around...

C: ...when we should and shouldn't pin dependencies. I think, for the Kubernetes examples, what comes up is instances where we've got images that are, say, let's say the Debian base, right? Cluster... no, wrong file.
C: So, instances where it may flag something like a FROM base image that's parameterized, right? So I think there's maybe some work to be done to recognize when images are... because we build multiple variants of all of these images, so I think there's some work to be done there, like OS codename, to build out this, right? So this is never going to be pinned to an exact, you know, an exact version and digest.
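The parameterized-base-image case described above can be sketched in code. This is a hypothetical illustration, not Scorecard's actual Pinned-Dependencies implementation: a tiny classifier that separates digest-pinned, tag-only, and parameterized FROM lines in a Dockerfile, since a parameterized reference can never resolve to a fixed digest at lint time.

```python
import re

# Matches "FROM [--platform=...] <image>" at the start of a Dockerfile line.
FROM_RE = re.compile(r"^\s*FROM\s+(?:--platform=\S+\s+)?(\S+)", re.IGNORECASE)

def classify_base_images(dockerfile_text: str) -> list[tuple[str, str]]:
    """Return (image-ref, kind) for each FROM line, where kind is one of
    'parameterized', 'digest-pinned', or 'tag-only'."""
    results = []
    for line in dockerfile_text.splitlines():
        m = FROM_RE.match(line)
        if not m:
            continue
        image = m.group(1)
        if "$" in image:
            kind = "parameterized"   # e.g. FROM ${BASE_IMAGE}: unpinnable here
        elif "@sha256:" in image:
            kind = "digest-pinned"   # fully pinned to a content digest
        else:
            kind = "tag-only"        # mutable tag; a pin check would flag this
        results.append((image, kind))
    return results
```

A real check would then need a policy decision for the "parameterized" bucket, which is exactly the open question in the discussion.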
C: All right, let's jump into the discussion on third-party dependency management checks. Godofredo, want to introduce yourself and get started? Cool.

I: So my name is Godofredo. I'm working on the Flutter team, basically Dart and Flutter, and we have been enabling Scorecards for many different projects. One of the things that we found, when we were trying to remove all the binaries across all the different repositories...

I: ...the best practice that we should be using is, like, a third_party folder: put everything that is external to the project, that is going to be coming in as a dependency, into that folder, so that we have everything encapsulated in a single place. That would be easier for everyone to check.

C: So I'm going to pause really quick and just mention that that is similar to, like, the hack folder: third_party is a Google-ism, and so it's not necessarily a best practice. It is something that has kind of spread into the ecosystem as a result of usage inside Google. I'm just going to call that out.
I: I see, yeah, and that's great. And actually I wanted to open a discussion about this: is there anything that we can do to try to get all those dependencies in a single place, rather than spread around multiple folders in the different repositories? Or is there any good practice that we can follow here?

C: That's a great question, and I don't think it has an easy answer, because I think it's going to depend on the use case. I think it would be worthwhile to review the... so, Google has a playbook, if you will, for releasing open source code, right, and I think some of it goes into what to do when you're dealing with third-party code.

C: I primarily work on Go projects, so this is roughly a solved problem there: you decide whether to use a vendor folder, or just, you know, mention them as direct dependencies...

C: ...or use, you know, go.mod, separate module caches, right? So there are ways around it, depending on what language you're dealing with. For things that I'm not familiar with, I can't comment, but I think it would be worthwhile to check out the Google guidance there on third_party.
I: I believe one of the... in this case, the product that I'm talking about is the IDE for Flutter, the IntelliJ IDE, and one of those binaries is a licensed binary, so I'm not sure how to deal with that one. But basically, from the Chromium perspective, the way they do it is they have something called "ct"...

I: ...that downloads those binaries at runtime rather than having them checked in. But I'm not sure, from the open source perspective, how we should deal with those.
C: So I just linked in the notes the "preparing for an open source release" third-party component section from the Google open source documents. I'm not sure that everything you need is going to be there, because I know that some of that is also internal-specific, or may have internal-specific links.

I: I would say the follow-up question to this one would be: would it be worthwhile to create, like, a check for this kind of thing, or...
C: So Kubernetes does some stuff like this too. For core Kubernetes there is a vendor folder, and we do track, you know, if there are forbidden dependencies, for licensing reasons maybe; we do track that. I can dig up links to that, but yeah.

C: I would agree that you need some way of tracking whether this is a sane dependency to take on, and whether it is not your code. David, do you have your hand up?
E: Yeah, so regarding this whole, you know, artifacts-and-such thing: I had put in a comment on the Allstar stuff that maybe what we need to do is enable projects to specify, basically, an artifact-ignore, like a .gitignore.

E: Well, yes, I know about that one. For whatever reason, I think that's okay.

G: Well, so I think there's another discussion. I think the first question is: are there best practices for handling binaries and dependency binaries in your repo, right? And I'm hearing some crickets here, or that it's, like, kind of language-specific.
C: Right, yeah. I think the answer is "it depends," right? You know, there are some instances where the dependencies I'm usually looking at can be shipped in containers, basically, right? And usually it's around tooling. And Darcy's saying, "I know of many bad practices."

E: Well, I have another example, where, you know, analysis of executables... and so they're in there as part of your test suite.

C: Right. So that's a situation where I'm probably going to try to stick all of that stuff in a container, so it's not considered a dependency, right, and pull the container and do that check in CI instead. I think it could be worthwhile, and maybe this is not the time to do it, to have a discussion on what the bad practices are, and maybe we can use that to inform what the better practices are, right?
C: All right, cool. Do you have any other comments on that topic before we move on? I think we should continue the discussion first. So if there is not something already filed in the Allstar repo related to this, Godofredo, I would suggest...

G: So the second question is the one that's further down. Should we discuss that now? I'll start: allowing individual binaries to be opted out.

G: Yeah, so the second discussion, I think, is... okay, so I asked what's the best practice, and maybe we don't know the best practice, but the next step is: okay, let's say I do need binaries; let's say I'm following best practices, my own best practice, for those. How do I tell Allstar and/or Scorecards to ignore them? So, for Allstar...

G: We could, right now, you know, in the spirit of launch-and-iterate: if you have binary artifacts and you want to allow them, you just disable this policy.
G: Clearly, you know, as with all software, we can make it better and have a list to opt out: whether you list the files, maybe list the files and their hashes, maybe you put a dot-something file, like is recommended there. I think all of those are potential solutions, and I have this issue here where we can discuss it, so feel free to drop your suggestions there, Godofredo.

C: And I do think that could be a nice one: you know, generating a skeleton of an SBOM, I guess, for the binaries that are in the repo and storing...

G: ...that in a file, yeah. If you have a file with the list, you have that list there to know exactly where all the binary artifacts are in your repo, because if they're not in that list, then you'd be getting an Allstar alert. But secondary to this is that we could do the filtering on the Allstar side; but we are running the Scorecard check underneath, and I created a second issue in the Scorecard repo: should Scorecard...
G: ...be able to also have this option? You know, it wouldn't be on for the public results, because this would be, like, if you're running Scorecard on your own repo and you want to know what to ignore. Another option I'm proposing here is that there are certain well-known artifacts; like, for example, the Gradle wrapper. Here, by their own admission, there are 2.8 million GitHub repos with this binary checked in, and there are recommended security best practices. Like they say on their page, this is a vulnerability, this is a potential vector of attack; you should be doing these checks and validations in your repo if you actually check this in. Should Scorecards be able to, you know, opt this out specifically, like "this is a well-known binary"? And/or, if it does, should it also only give you a passing score if you are also following the best practices for that particular binary?
C: Yeah, I think my concern with all of these things, checks in general, is, like: what is the good stopping point, right? Where does our responsibility stop? So there is an element of it where I feel, hey, don't do bad things to your repo, and if you're maintaining a repo, you should be in charge of that, largely.

C: I think there are things that we can report on, but the second we get into "here's this very specific file for this ecosystem that does foo-bar-baz; make sure it's not checked in," then we sort of assume the responsibility of enumerating those files for multiple ecosystems, and we spend the time trying to do the enumeration instead of improving the checks. So I'm personally minus-one on specific files.
H: Yeah, just to add to this: I think we had a very similar discussion with OSS-Fuzz and ClusterFuzzLite here, where you can't pin those dependencies, so they wanted an exception. I think the outcome of that discussion was very similar: that Scorecard should probably still be complaining about that, but something like Allstar, or some client-level policy, can say that we don't care about this particular Scorecard complaint.
E: And by the way, the same thing happens for, I think, many of these other things, yeah. I'm actually thinking, and I don't think anybody has necessarily agreed to this, but I think in the long term it would be useful to have two scores. One is, you know, "I accept all the claims from the project," and the other is "I'm going to ignore the ignore-files and everything else, and just, here's the raw score." The one is basically more useful to the developers of the project...

E: ..."I know this is okay"; and the other is "I don't trust those crazy developers; tell me what they do, regardless of whether I should believe them."

C: Yeah, and I'm kind of... yeah. I think this comes up for all of our checks. Some of the stuff that Jordan logged in the Slack chat, and the issue, reflects it as well. And, David, I think you had filed something way back about modifying scores based on our tolerance, right?
E: Most of our vulnerabilities are of the unintentional kind, and we're just trying to steer developers who aren't aware of good practices toward good practices. If you've got malicious developers, then yeah; but this scorecard is not going to find malicious code subtly hidden inside there. That's just not what it's good for.
I: Yeah, I just wanted to point out that a second use case for this would be landing incremental improvements. For example, we have, like, 20 different binaries that we need to delete somehow, but it is very complicated.

I: We would like a way to avoid regressing, adding new binaries, as we are cleaning them up. So, for example, we delete one, then we remove it from the allowed list, or whatever we want to call it, and we continue improving over time while avoiding regressions.
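The ratcheting allowlist described here could look roughly like this. The function name and paths are made up for illustration; the point is that any binary found outside the list is a regression, and entries for binaries that have been deleted are reported so the list only ever shrinks as cleanup progresses.

```python
def ratchet(found: set[str], allowed: set[str]) -> tuple[set[str], set[str]]:
    """Compare binaries found in the repo against a checked-in allowlist.

    Returns (regressions, removable):
      regressions - found binaries not on the allowlist (fail the check)
      removable   - allowlist entries no longer present (prune them)
    """
    regressions = found - allowed
    removable = allowed - found
    return regressions, removable

# Hypothetical example: one binary was cleaned up since the list was written.
found = {"third_party/tool.exe", "testdata/sample.bin"}
allowed = {"third_party/tool.exe", "testdata/sample.bin", "old/removed.so"}
regressions, removable = ratchet(found, allowed)
# No regressions; "old/removed.so" can now be dropped from the allowlist.
```

Run in CI, this fails on `regressions` and nags about `removable`, which is the "only improve, never regress" behavior being asked for.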
C: Yeah, for sure. So I think, you know, if you look at the hardened security steps for, like, StepSecurity, where by default the policy will give you "audit," and then it's your responsibility to switch to "block" when you're ready... If you look at it from the Allstar perspective, you've got a few different paths...

C: ...you can go down, right? One being, you know, log an issue, or fix; and fix, for the most part, I think, is not implemented for many of the reporting pieces. But, you know, I think the permissive-versus-full-on-block distinction makes a lot of sense, right? I think that we have enough of the levers already open to say "this is an exception" for several of those checks.
H: Yeah, so just to clarify: Laurent is working on this feature where we'll just dump out the true raw facts from Scorecard. So basically you can apply your own policy and get whatever kind of changed score you want. That way you'll have your opinionated score, but you'll also have a score from Scorecard.
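A minimal sketch of this raw-facts-plus-local-policy idea, assuming an invented findings format (this is not Scorecard's real output schema or scoring formula): the same raw findings yield both a default score and a policy-adjusted one, with specific findings a project has consciously accepted waived out of the second.

```python
def score(findings: list[dict], waived: frozenset = frozenset()) -> int:
    """Toy scoring: start at 10, subtract one point per unwaived finding,
    floor at 0. Finding IDs are made up for illustration."""
    penalties = sum(1 for f in findings if f["id"] not in waived)
    return max(0, 10 - penalties)

findings = [
    {"id": "binary:gradle-wrapper.jar"},   # a binary the project accepts
    {"id": "unpinned:Dockerfile:3"},       # a genuinely open issue
]

raw = score(findings)                                    # default policy
local = score(findings, frozenset({"binary:gradle-wrapper.jar"}))  # waived
```

The raw number stays honest for outside consumers, while the waived one tracks what the maintainers can actually act on.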
C: And so the intention is that this is, I guess, the raw results, with, like, the policy, the SARIF policy, applied to it, or something. All right.

E: Yeah, and by the way, I really like the point about enabling... you know, preventing regressions. There's a lot of stuff where Scorecards doesn't detect, for example, tooling, in a lot of cases, and being able to say, "yeah, we do have it, it's right here"...

E: ...at least, you know, keeps Scorecard still useful for those, even when Scorecards doesn't detect some things. Because... exactly.

C: Cool. So it sounds like we've got some things in flight and some things that still need discussion. Again, I would request that anyone who's interested in working on this, or chatting about this, hit the issues on the respective repos. Anything we need to cover before we move on to the next topic?
B: A point: when Azeem was talking about the user-customized policy, and thus dumping all the scores from the Scorecard, is it that it will show all the deviation from the actual Scorecard? Moving to the next policy, with the user's customized policy, is there any point where we're showing this deviation?
C: So yeah. The idea would be that, one, we produce the raw public score, like, "this is the expectation if you had no exceptions to the default policy," right? And then there's another scenario where you'd produce the "I've customized this Scorecard check and I want to do XYZ; I'm okay with certain things being wrong, or not up to the expected intentions of the default public policy," and have a score that's derived off of that, right? So, like, how to tweak the...

C: We want to enable that, and then, you know, also figure out how the score changes based on those changes, right? If you've done something that is completely... you know, I don't know, you decide that zero maintainers is okay for your maintainers check, or something like that, right? That's something where we should still go "whoa, whoa, whoa"...

C: ...figure this, but "are you really sure that's okay?" Yeah. And I think it's going to be a discussion, and scaling based on each of the checks. I don't think we're going to have a blanket policy for any one of these.

B: Oh, okay. Okay, yeah; something like, we can mark which of that raw evidence, our collection, should not be changed, or at least the basics should not change. You have some limitations on customization.
C: Insofar as, even with, you know, grading on a curve: if you're still coming in under a certain value, we should say, "hey, listen, I know you said this, but are you sure?" And how that works we can discuss. But yeah, I would suggest leaving comments on these issues if you're interested in that too.

E: I think one of the challenges, though, is that right now there are a lot of projects where Scorecards... you know, you may meet all the criteria and Scorecards won't detect it. The most obvious one is the next item, which is: you're not on GitHub. Because...
E: All right. So I was going to propose... but before doing that I wanted to, you know, raise the issue in case somebody's already done this. Earlier today there was an OpenSSF governing board meeting; a new rep there is Jonathan Hunt, who works for GitLab. And right now Scorecards only works on GitHub. I mean, I've been trying to make adjustments to the Scorecards text so that at least it isn't locked to GitHub, but it only actually generates anything useful on GitHub. So I'd like to contact...

C: So let's... based on that, you're using Git... and then I think that, you know, we're going to encounter the same problem that we do with just the idea of... like, I spend enough time working on these different things to know that you can stick a lot of things into the box, and the thing that gets the most support is the thing that people have the most context on, right?
C: So I think that, because we are on GitHub, we are going to have a natural bias toward building things for GitHub. That's not to say that should be the only thing we do, but at the same time, if we are going to build support in certain vectors for GitLab, then we would request that we have support from GitLab; in terms of, like, not just "yes, you should do it," but also, you know, personnel to help out with that.

E: I got that, yeah. So I... but I think, basically, this is before I went anywhere, because I was going to suggest... I mean, I have the ability to send an email, but basically to tell Jonathan Hunt, "hey, you know, Scorecards right now really looks for data on GitHub."
C: Exactly, yeah. So I think, you know, making sure that we're using... and this goes into, for the maintainers who've been having this discussion, what do we do with all of the clients? You know, all of our clients are maybe not consistently written, and there's an opportunity for, like: here's what the interface looks like for this client; go build the GitLab version, go build the X-provider version of it, right?

C: So I think if we can get to the point where we generalize enough of the components, then it makes a lot of sense for someone who has GitLab expertise to jump in and say, "okay, I have enough of the blueprint to do it for GitLab."
C: So yeah, I definitely agree. I just... you know, I know, because I've seen it in Kubernetes; we've had, like, Dex vulnerabilities, for example, because we've got all these plug-ins. You know, we had one around, like, XML validation for the SAML plug-in, and I'm like, "cool, I'll help patch the vulnerability."

C: So as long as we can move forward with, like, you know, the promise that we will try to deliver generic enough interfaces for people to implement on top of, and, you know, married with "can we bring in people who have expertise in this to help maintain it as well?", then I think we've potentially got a good plan.

E: Okay, yeah. Obviously I can't promise what GitLab does; that's their decision. But I wanted to give a heads-up before I did this, in case there was some issue I wasn't aware of.
H: Yeah, I just want to chime in. I think, as Stephen said, basically we have an interface which will let us integrate with GitLab. I just don't think we are focusing on it right now, because we're just short on resources. So yeah, it'll be very, very helpful if you could get us in touch with GitLab, or someone from GitLab who's interested in, you know, contributing here. We can even look into adding them as maintainers if they have contributed enough. So yeah. Right.
C: So yeah, let me... so I think what we should do, and I meant to mention this to the maintainers if I haven't already, is that we set up a group for the project-specific maintainers. I don't know if that needs to be on Google Groups, or we do it on lists or groups.io or something, but you should be able to send a note to, you know, Scorecard maintainers and get a response, especially as it relates to...

E: Okay, how's this: I'm all for, long-term, creating a Scorecards mailing list.
C: So, just to be clear, we do have a mailing list today; I think it's scorecard-dev, right? You can use that; it will hit everyone who's subscribed. But very specifically, if you want to reach maintainers, we don't have a maintainer-specific list. Okay, right; and we can, out of band, give you the maintainer addresses if you want to keep the scope smaller in the meantime.

C: It's probably on the list, and it should be at the bottom: scorecard-dev... ossf-scorecard-dev. Okay, all right, so I'll just drop...
C: So, you know, with the screen share I'm usually not paying attention to chat, so I just want to take some time and make sure that we come back and address some of the questions that were asked in chat. Cool with everyone?

C: Sweet, okay. So we talked about the bad-practices stuff briefly. "Are all source hosts and artifacts treated equally?" At the moment, I don't think they are, and I think it goes back to that whole "we-ish... ish," right? I think it's going to depend on what we have the most experience with and what we've built, right?
C: So one of the issues that's currently open from Jordan, or from me proxying Jordan, is that there's lots of experience on the npm side there, right, and there is a plan to publish best practices for npm package management. So that's one example of: once that guidance is out, that's something that we could integrate into, you know, our systems, right. The... okay, this next one I might not get...

D: Sorry, Stephen, what did you say about the npm best practices? Sorry... there is... so there's an issue here.

J: Just the point on the npm best practices; like, sorry, I missed what you said, I was distracted.
C: Oh right, so I think there's a plan to publish npm best practices, or a set of best practices for various package managers. So we should stay tuned for that.

C: Cool. So, is there a "watch this space" kind of thing for folks on the call?

C: All right, moving down the list. From Alan: is there a thought about storing attestations, certain checks being run out of band, without needing manual overrides to bring the score up? Great question. I don't know that we have an answer. The "who is attesting, and do we trust that attestation?"...
L: Yeah, it's just... so we have a couple of open source operating system projects, and I was running the Scorecard on them. I'm not too happy with the results, so I'm working on some fixes, but there are some where I have to get into how the scores are being created. Without knowing all the details yet, I was just trying to figure out if there's a way in our other pipelines to bring in, like, an attribute file...

L: We have partners we work with, and they are throwing issues at us that we know we've solved already, and so there's a lot of background noise. While the raw score is good to know about, it still causes churn on the teams to research something, or to review a ticket that came in that says, "hey, it says you're not doing any CI checks; we don't like that, so what are you going to do about it?" Like, no, we do CI checks.
C: Yeah, and I think this is also, you know, a failing of... it's not really a failing, it's just what you do and don't have access to, right? So, like, there are certain instances where, if you run a local check, you're not going to get the same responses as you will if you're running against the public repo.

C: So that's one thing to consider: whether they're running local checks versus against your repo. And I think there is, yeah, there is the question of: what is the remediation path for any one check, right? If you have done the right things and your score does not go up, then that's our problem, right? That means that we're not exposing enough information for that remediation to be counted.
C
Okay,
so
so
I
think,
but
but
it's
hard
to
it's
hard
to
generalize
any
one
specific
case
right,
so
it'd
be
helpful
like
when
you
run
into
those
to
file
issues
or
so
look
for
existing
issues,
because
I
think
a
lot
of
these
things
are
are
a
lot
of
these.
Things
are
captured
in
the
repo
and
maybe-
and
maybe
we
haven't
gotten
to
it-
and
maybe
it's
a
great
opportunity
for
you
to
come
in
and
contribute
to
that.
L: Yeah, as I get more into it, I'll open more and better issues, but at the moment, I think...
L: It'll probably take me a week and a half or so to come back and tell you something specific, because I've only run Scorecard once or twice. We have a goal to get to the Best Practices badge and be, you know, a good citizen with all this, so we're just on our journey. I was just bringing it up because, even if we don't get 100% of the way there, I'd still like us to have all the checks in place; maybe they're just not being caught the right way.
C: Yeah. For as much as you're willing to share (I'm sensitive to the sensitivity), providing data for us to improve would be awesome. When I hear this, I think of, you know, container image scanning, for example.
C: If you look at Trivy, there's a flag called --ignore-unfixed. There are instances where there's a report of a CVE and you're like: yes, I know it exists, this is bad, I would like to fix it, but there's no upstream fix for it. So there's a scenario where you'll run the report for your container image scanning and get a dump of all these CVEs that you actually can't fix, because they're not fixed at the operating system level.
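A minimal sketch of that Trivy flag in use (the image name is just an example):

```shell
# Report every detected CVE, including ones with no upstream fix yet.
trivy image alpine:3.15

# Suppress vulnerabilities that have no fixed version available,
# so the report only lists CVEs you can actually act on.
trivy image --ignore-unfixed alpine:3.15
```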
C: So that's why that flag exists, and I'm sure there are scenarios just like that within our code that we need to guard against. So yeah, any data that you can provide from the real world would be super helpful. Not to say we don't live in the real world... you know, but we're on the internet right now. All right, so, folks, we're running up to the top of the hour, and I don't recall if this is one of those calls where you can go free after, or whether we're going straight through to five Eastern. So let's jump into what's left over. And David, it looks like it's you.
M: Yes, I think that is an apt summary of how we got to where we are. And I apologize, I'm using Azim's computer to talk about my topic. Go for it.
M: Not at all. It's not ranked on importance; it was just ranked on who opened the doc first. So, a quick plug, following up on something I talked about in our last meeting: as our group is growing, we have a general need to formalize a mission and vision, just to be clear on where this project is going.
M: I think these apply to both Allstar and Scorecards. So if you're interested in helping craft what those are going to look like, I opened a couple of issues, one on each project, that are linked in this doc. Tag yourself there if you're interested, and we'll work out the right format for this kind of joint writing. I think we'll see how far the GitHub issue can take us, but we may have, you know, a special one-off meeting for anyone that's interested in writing.
M: So I think we'll give it about a week of just being a GitHub issue, and then, if it seems like having a live meeting would be more productive, we'll figure it out from there. Yeah, that's...
C: Cool. All right, so the clock has started: two weeks, everyone. Awesome. And do we want to call it here, or... is your low-risk token permission topic quick, or did we kind of chat about it already? Didn't we?
D: No, I don't think we did. How much time do we have left? We've got a minute, so I don't think we're going to get to it. Okay.
C: All right, so I will pull some of the things that we didn't get to cover up into the upcoming topics. The last thing we should do is a facilitator transfer for the next meeting. So, who is interested in being the bosun of this fine group of folks for the next meeting?
B: One place... yeah, just for a minute about the one-five-three-four issue, the one about updating copyright headers. We...