From YouTube: Scorecards Biweekly Sync (July 13, 2023)
C: And just in case: the new document has people typing suggestions. If you want to fix that, you can join the same scorecards-dev Google group, and it should give you editor access. But if suggestions work for you, then carry on and we'll accept them when we can.
C: It looks like everyone's doing it already, but as you file in, if you can, put your name on the list in the doc. If you don't have it, I can post the link in the chat; it should be at least viewable by everyone these days.
C: All right, it's about three after, so I will go ahead and get started. Hi everyone, I'm Spencer, one of the Scorecard maintainers, and I'm the facilitator for today's meeting.
C: Usually we start our agenda by inviting anyone that's new to the sync to introduce themselves: say a few words, who you are, what you're hoping to get out of the sync, etc. This is optional, but if you want to, go ahead and unmute and introduce yourself.
D: I will. This is about how long I can stay, so I'm just going to introduce myself and then run, since I'm here and you're getting to know the rest of the OpenSSF team. My name is Adrian Markham. I have just joined the LF within the past three weeks as a technical project manager.
D: My first big assignment is to help get the Scorecard project up and running. Some of you have already been poked, and I'll continue to poke around and just try to see what we can do to get the progress happening a little bit faster. If we can, let me know how I can help and what needs to be done.
E: I can go. My name is Jonathan Howard; I work for Lockheed Martin. I'm on the Hopper project, and I did some work on a Hopper plug-in for the OpenSSF Scorecard. I'm here just for awareness and exposure.
C: All right, with that we can jump into things. It doesn't look like we have any announcements; I did miss the last sync. Scorecard v4.11 came out a little bit ago, which had some things in the patch notes, but that leads me into the first agenda item: a change in how two of the checks work, sometime between v4.10 (I think it was v4.10.4) and v4.11.
C: There was a change in a check that made no sense for GitLab: GitLab doesn't support the notion of GitHub workflows, understandably, or the way GitHub token permissions work. So there was this PR that basically changed it to say that if you don't have any GitHub workflow files, then we return a minus-one score, including for repositories that were on GitHub. This meant that the score now dropped from a 10 to a negative one.
C: Does it make sense to have a 10? I thought Drogo made a good point about how it impacts the aggregate score, the overall score once you combine all the checks, when some of the checks are minus one. I think this issue is simple enough that we don't have to come to a conclusion today; I'm just drawing attention to it if people feel like weighing in, and then I'll be pinging some of the maintainers. I'm not sure how many others are on the call right now.
C: Yeah, David in chat says that we say the score is zero to ten, so negative one doesn't make sense. Traditionally we've used it for when we either don't have enough information to score a particular check, or there's some sort of error that prevents us from doing the analysis. So this might be:
C: You know, for whatever reason the token you're using doesn't have permissions (there are different tokens, and sometimes some of them disagree with certain checks), or we're trying to score you on code review but the only PRs we see are from, say, Dependabot. So yeah, negative one is unfortunate, but it's usually reserved for situations where we can't make a judgment, and I think even with workflows gone we can make a judgment. But yeah.
B: What are we trying to measure here? I'm just thinking: one of the projects I work on is Knative, and Knative uses both GitHub Actions and Prow, which is an external system running on Kubernetes. If you're doing a security assessment and you're just looking at the GitHub Actions piece, you're missing a whole bunch of other stuff that's running in order to produce a release, and so you might spuriously give a high score when actually there's other stuff going on that you don't see that's worse. So I'm just trying to figure out: what are we trying to do with this score?
C: Yeah, traditionally Scorecard only supported GitHub repos, and it was common for GitHub repos to have these gotchas where token permissions are a little unintuitive and the permissions are overly broad, or it's possible for basically untrusted injection. So that's what the check was originally interested in.
C: As for why it returns minus one for the GitLab stuff: I just don't think the check has been expanded, or a parallel version introduced, beyond GitHub.
C: Also, historically, when it was just GitHub, it was about how trustworthy your GitHub Actions are, because people still used Travis or CircleCI or something like that, and that was, to my understanding, not part of the check. I'd have to get input from someone that's been on the project a little bit longer, but I'm proceeding on the basis that it was just looking at GitHub Actions.
G: To support you there, Spencer: the point is to check whether you're doing something wrong, not to check whether you're doing everything right. So it's really just: for the things that Scorecard knows about, are you doing anything wrong? So yeah, if somebody's using Travis, if somebody's using whatever CI, it wouldn't give you a bad score, but it also wouldn't give you credit. That's just the extent of Scorecard's knowledge.
B: It does. I just wonder, when we couldn't detect any workflows, whether we should be doing this as a negative-points-from-a-hundred system or a positive-points-from-zero type system.
C: Yeah, and the way Scorecard scores right now, there's not a great answer for what happens, because the aggregate score is based on all of the checks. If we continue to say "you have no GitHub workflows, so everything is a 10," which is sort of vacuously true, that will boost up your score. But, as Jogo mentioned in the issue thread, if you say "you have no workflows, so I can't really give you a score based on this," then that inflates the weight of all the other checks as well.
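The reweighting effect described here can be sketched numerically. This is a toy model with invented check weights and scores, not Scorecard's actual aggregation code:

```python
# Simplified sketch of how excluding a -1 ("inconclusive") check
# inflates the weight of the remaining checks in a weighted average.
# The weights and scores below are illustrative, not Scorecard's real ones.

def aggregate(scores, weights):
    """Weighted average over checks, skipping any check that scored -1."""
    total = sum(w for s, w in zip(scores, weights) if s != -1)
    if total == 0:
        return -1  # nothing scorable at all
    return sum(s * w for s, w in zip(scores, weights) if s != -1) / total

weights = [10, 10, 5]                        # three hypothetical checks
with_ten = aggregate([10, 2, 4], weights)    # first check vacuously scored 10
with_skip = aggregate([-1, 2, 4], weights)   # first check inconclusive

# With a vacuous 10, the first check pulls the average up;
# with -1 it is dropped, and the other checks' weights are inflated.
print(round(with_ten, 2), round(with_skip, 2))
```

Either convention changes the final number, which is why neither a vacuous 10 nor a -1 is obviously "neutral" for the aggregate.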
H: Yeah, I was immediately reacting to the "we tell everybody that we score everything zero through ten" point. At the very least, we should fulfill our promises and score it 0 through 10. As far as what you do when you don't know: that's always been a challenge. And one additional quirk I should add is that some systems may use multiple CI systems. I'll point out that on the Best Practices badge we use GitHub Actions for a couple of things.
H: Most of our checks are on CircleCI, so I'm not even sure what Scorecard does in that case.
H: I think it's defensible, but then we need to say "hey, we didn't detect one; you might have one, we just don't see it," as distinct from "I see a CI pipeline, but you have a problem I can see," with a certain score at that level. More generally, I think we always want the higher-level, more general description in the Scorecard, and then within the scoring, more detail: "you've got a pipeline, it has these kinds of problems, hence the score." Because, you know, we support one pipeline now and we'll support others later. It should be possible for us to figure out approximately where our value is going to end up as we add more and more of these cases. Thanks.
C: Yeah, I may have to find a different room in a second, but this check isn't necessarily concerned with "do you have a CI pipeline?" That's more what the CI-Tests check looks at. This is: if you're using GitHub Actions, is it configured appropriately?
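As a concrete illustration of "configured appropriately": the classic pattern the Dangerous-Workflow check is meant to flag is a workflow that runs on the privileged `pull_request_target` trigger while checking out untrusted PR code. This is a hypothetical example workflow, not one from the discussion:

```yaml
# Hypothetical GitHub Actions workflow showing the dangerous pattern:
# pull_request_target runs with a read/write token and access to secrets,
# so checking out and running the PR author's code is risky.
name: ci
on: pull_request_target
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          # Untrusted PR head checked out into a privileged context:
          # this is the dangerous part.
          ref: ${{ github.event.pull_request.head.sha }}
      - run: make test  # attacker-controlled code runs with secrets available
```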
I: Yeah, and just one comment, David, on the zero-to-ten thing, in case there's a misunderstanding: the minus one isn't actually a minus one.
J: But if an expert in CircleCI wants to come in and say "here's what a dangerous workflow looks like in CircleCI," or some of the GitLab people who have more experience with that say "this is what a dangerous pipeline looks like in GitLab," that's certainly within the scope of Scorecard, and we should do that. And the reason I believe an inconclusive result is suitable in the case where we don't find workflows
J: is that it's almost like Scorecard failed to do something, or Scorecard is just not programmed to detect a particular kind of workflow. So Scorecard itself is inconclusive and just can't say anything one way or another about whether the repo is following good practices. Yep.
C: Yeah, I think that argument is a lot easier (sorry Pedro, real quick) if we had structured results, where if we could say "this is the GitHub-specific part of the check," then returning minus one makes sense. I think you bring up a good point that right now the check is Dangerous-Workflow, and we can't evaluate it in the sense of whether this is a GitLab or a GitHub project, and I think...
I: Yeah, I know, but like the point Raghav mentioned: if we might include dangerous CI/CD on other platforms, the minus one becomes more explainable, more defensible, shall we say, in that regard. Because then it's a minus one now, and then we add information on Travis tomorrow, and the minus one becomes a real score.
I: That is more natural, shall we say, than getting a 10 out of 10, and then we add Travis tomorrow and your 10 out of 10 becomes a zero. That might feel a bit weirder, in the sense of "hey, if you couldn't judge my dangerous workflows, maybe you shouldn't have been giving me the points all along." So I don't think anyone's currently going to be complaining about getting a 10 out of 10.
C: So it sounds like there are some votes both for and against. I'd be curious to get Laurent's opinion, as we work towards structured results, on whether we just maintain the status quo on tens for now. And if structured results are, you know, six months out, does that change the discussion?
C
But
yes,
you
know
it.
There's
some
good
discussion
about.
F
C
Know
a
point
boost
and
do
certain
repos
get
Point
boosts
for
either
choosing
to
or
choosing
not
to,
but
yeah.
Thank
you
for
those
that
left
comments
in
the
issues.
I
I
think
it's
hard
to
make
I
guess
a
conclusive
decision
with
only
two
maintainers,
but
thank
you.
Everyone
for
comments,
so
I'm
happy
to
move
on
to
the
next
issue.
Pedro.
I: Yeah, so this issue is about the Dependency-Update-Tool check. It can detect Dependabot by looking at PRs: it checks the history of PRs in the project looking for Dependabot PRs, and if there are any, it currently just assumes "okay, this project has Dependabot." However, a project might have turned Dependabot off, because many projects get annoyed with Dependabot, and it happens relatively often that a project will just turn it off, and it's hard to detect, when looking at PRs, whether that has happened.
I: One solution is, instead of looking at the PRs (or as well as looking at the PRs), to look at whether the project has the dependabot.yml file, the settings file for Dependabot. But that file isn't strictly required for Dependabot to run: you only need the dependabot.yml file if the project wants the consistent, frequent Dependabot version-bump PRs.
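For reference, the settings file discussed here is `.github/dependabot.yml`, and a minimal version-updates configuration looks roughly like this (the ecosystem and schedule values are illustrative):

```yaml
# Minimal .github/dependabot.yml opting in to version-update PRs.
# Security-only updates are enabled in the repository settings instead,
# which is why a repo can use Dependabot with no such file present.
version: 2
updates:
  - package-ecosystem: "gomod"   # pick the ecosystem your project uses
    directory: "/"
    schedule:
      interval: "weekly"
```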
I: However, a project might not want that and only opt in for what they call Dependabot security updates, where Dependabot will send a PR to fix dependencies with a known vulnerability. And if a project only opts in for security updates, it doesn't need a dependabot.yml file. So the question that ends up happening in this issue, especially further down, is:
I: Should we perhaps change the Dependency-Update-Tool check, when looking for Dependabot, to require the file or not? Basically: should Dependabot only count if the project has opted in for version updates?
H: The goal is that if there is a vulnerability, they update it pronto, right? So if they have that file, that would suggest that maybe they're doing it. One other approach is looking at what the simple indicators are: obviously, if they have Dependabot commits that have been merged in, that's a pretty good indicator. Worst comes to worst, though, that would suddenly be a whole bunch of analysis.
I: Yeah, the current setup for Dependency-Update-Tool is the second one, looking at PRs, and if I'm not mistaken it only looks for whether you're receiving the Dependabot PRs (or Renovate bot, whatever). I'm not sure if it looks at whether they're merged or not; it's just "do you have it?"
I: So we don't have that signal, unfortunately, as far as I can tell. The file would give us a cheap way of figuring that out, of skipping that whole thought process, but it also forces people to accept version updates instead of just security updates, which I think is probably closer to the spirit of Scorecard.
C: I don't think it has to be an either/or kind of thing, where a project can just sign up for Dependabot security updates like they're doing now, and we do attempt to look for it in recent PRs. It's just that if a project isn't happy ("hey, I have Dependabot installed"), they can go out and create one. I'm curious whether even an empty dependabot.yml works, where you don't sign up for anything. But I think in the thread (I can change my tab back to it)
C: I was sort of suggesting that a repository can add a dependabot.yml to say "yes, I am actually using this." So I don't know if too much is required to change on the Scorecard side, other than the issue you mentioned where a project used Dependabot once three years ago and then turned it off.
C: So, at least to me, what makes sense is some sort of look-back window, which will hopefully get rid of those false positives but may introduce false negatives, in which case the solution would potentially be adding an extra file, which kind of sucks. Scorecard has this problem every now and then, where we don't detect everything, and we're working towards user-specified remediations, like saying "no, no, no, this is why that's not accurate," but we're not there yet. So in the meantime, are you just saying...?
I: Yeah, I'm asking: should Scorecard change to require the dependabot.yml? Another thought that crossed my mind is that it could be five points for having the yaml and five points for having at least one PR, or something. I don't know.
I: Yeah, I came here with a question; I didn't come here with an answer of what the Scorecard position should be. I'm just raising the issue.
C: I think that's why I opened the check doc. Assuming the docs are accurate (which they're not, I guess), the purpose of the check is to see if you use a dependency update tool, and part of it says out-of-date dependencies make you vulnerable to flaws. If Dependabot is giving you security updates, then, to me, as written, requiring a file that's not necessary for what the check is trying to verify wouldn't be necessary.
C
If
we
want
to
add
like
a
recommended
like
hey,
if
you're
using
this
but
you're,
not
we're
not
picking
it
up,
here's
something
you
can
do
as
a
hint
I
think
that
works,
but
I'd
be
curious.
If
someone
feels
strongly,
if
saying
you
know
the
point
of
a
dependency
update
tool,
isn't
just
for
security
updates,
but
also
blah
blah
blah.
H: Let me say: I certainly encourage updating in general, because if you don't update, it gets harder later on when there is a security update. But I'm a little hesitant to press the "please keep everything updated at all times" line, because there are cases where you can't update. I'm thinking particularly of a case where some software components are licensed against a non-open-source software component.
C: I don't think any of this is looking to see if the PRs are actually merged, because on Dependabot you can say "ignore this version" or "ignore this dependency." Okay.
H: I'm of two minds on this. I do think in general it's wise to update; I also do think you want to prioritize the important ones, whatever that means. For the bigger projects, Dependabot could be a little overwhelming if you actually enable it for all versions. I'm pretty sure we only enable it for the security vulnerabilities, for example, on Best Practices.
K: Could we detect if that caused you to start taking on vulnerabilities because you weren't updating? That's more where it starts to become a concern for security. Because I think what you just said is very common: you at least don't auto-merge things that are beyond a patch version, and you may choose to delay even manual merges of those minor and major updates for very good reasons. But those also may become unsupported, and then you do have a legitimate security concern at some point.
H: I guess we're on firm ground asking people to make sure they have automated handling of security vulnerabilities enabled, and encouraging them to push the button and update. The question here is: should we broaden that to just "you're out of date, period" as a warning?
K: Yeah, enabling automated dependency updates that actually merge things is, I think, a big security help for a lot of projects.
H: I don't know if there's any such research.
H: ResearchGate has something on the use of Dependabot security pull requests. Interesting. All right, well.
C: All right, I'm just trying to capture some of our discussion in text to summarize.
C: So there are sort of two choices: either we require it, or we keep it as is, where it's a signal but not necessarily required. And then (I don't think this was part of Pedro's question, but) are we trying to get, I guess, a decision on this?
C: A clunky way of doing it would be to not make a decision, and instead say: hey, if you've got a Dependabot PR over the last year, you get a 10 out of 10. If the last one was two or three years ago and you don't have the dependabot.yml file, we don't know if this thing's still running, but maybe it is, so we'll give you a couple of points. If they don't have the file, we give them fewer points as time goes on, as we lose confidence that they actually have Dependabot turned on. It's clunky.
H
It's
actually
it's
clunky,
but
it's
not
as
crazy
as
it
sounds,
giving
them
less
points
for
something
we
have
less
confidence
in
it's
a
little
clunky,
but
it
has.
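The decaying look-back idea could be sketched like this. The point values and cutoffs below are invented to show the shape of the proposal; they are not anything the group agreed on:

```python
# Toy sketch of the "lose confidence over time" scoring discussed above:
# full credit for recent Dependabot PRs or an explicit config file,
# fewer points as the last observed PR gets older. Cutoffs are invented.

def update_tool_score(years_since_last_pr, has_config_file):
    if has_config_file:
        return 10   # config file is direct evidence the tool is enabled
    if years_since_last_pr is None:
        return 0    # no evidence of any update tool at all
    if years_since_last_pr <= 1:
        return 10   # recent PR: high confidence it's still on
    if years_since_last_pr <= 2:
        return 5    # getting stale: maybe it was turned off
    return 2        # very stale: little confidence left

print(update_tool_score(0.5, False))  # 10
print(update_tool_score(3, False))    # 2
print(update_tool_score(3, True))     # 10
```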
I: Yeah, I have no idea, but I wouldn't be surprised if what an empty file means is that it runs the scan, which means nothing. But if you have it active for version updates, which is what you'd do with the dependabot.yml file, you also have it active for security updates. So it actually might be that having a blank file also automatically opts you in for security updates as well. But I haven't tested it, so I can't be sure.
H: So at that point the only real question is: what do we do if not? And hey, we keep saying Dependabot, but obviously any other tool that does dependency analysis and helps them keep up to date is what matters. So yeah: either we see evidence of it in proposed changes (pull requests, merge requests), or we see some configuration that suggests that, in fact, it's going on.
I: Yeah, Dependabot is a specific case, because Renovate and the other ones are config-as-code; with Renovate you can usually identify... actually no, I'm wrong: Renovate's actually activated on the account, not on the repo. That's true, never mind.
C: We do look for the Renovate config files; that's the only evidence we have of a repo using Renovate right now for Scorecard. Okay.
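File-presence detection of this kind is straightforward to sketch. The filename lists below are a best-effort guess at common config locations, not Scorecard's exact list:

```python
# Sketch of detecting update tools by the presence of their config files.
# The filename lists are illustrative guesses, not Scorecard's actual ones.
from pathlib import Path

TOOL_CONFIGS = {
    "dependabot": [".github/dependabot.yml", ".github/dependabot.yaml"],
    "renovate": ["renovate.json", "renovate.json5",
                 ".github/renovate.json", ".renovaterc"],
}

def detect_update_tools(repo_root):
    """Return the tools for which a known config file exists in the repo."""
    root = Path(repo_root)
    return sorted(tool for tool, paths in TOOL_CONFIGS.items()
                  if any((root / p).is_file() for p in paths))
```

As the discussion notes, this only proves a config file exists, not that the tool is actually running, which is exactly the Dependabot ambiguity being debated.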
H: Okay, but you're still looking for it, and again, you're just looking for evidence, right? If you have evidence, hooray, you get some points. This is no different from the other checks: you're looking for evidence. Same for Gemnasium on the GitLab side; I don't know if that's being looked for today, but you could certainly look for it at some point.
B: I mean, I'm just thinking about: can I have a dependabot.yml but actually have gone and slid the slider to off?
H: Well, in general we're not really assuming maliciousness. I mean, there's actually a much simpler way of disabling Dependabot, and that's simply using a language that it doesn't support.
C: Yeah, there are all sorts of corner cases, like if you're writing a library that has no dependencies, what do you do? So before we get too deep into a can of worms, I'm going to dial it back a little bit. It sounds like the biggest question is: how does this behave with an empty dependabot.yml?
C: We've brought that up a few times, so I think a good next step would be for someone, either Pedro or I, to try to find time before the next meeting to just run a little experiment and see what happens. Then, once we have a little more information, it might be easier to make a decision.
H: Coming down to brass tacks, though: the whole point is that if there is a vulnerable dependency, we want there to be automated reporting and, ideally, automated repair. And the only indicators we have are pull requests / merge requests and the yaml file, which may have to be more than empty, depending on the results of your tests. The empty-dependency case is actually easy: put in the yaml file.
I: Spencer, on the previous point about Dangerous-Workflow and token permissions, you said you wanted to get Laurent's opinion. He is in the call now, at least I believe he is; at least someone by that name joined.
C: Yeah, Laurent's currently muted, but if you're listening, Laurent, for context: we were talking earlier in the call about structured results, where there's this notion of how we judge a repo on Dangerous-Workflow and Token-Permissions. There are two scenarios: one where it's on an ecosystem that doesn't support GitHub tokens and workflows, because it's not GitHub; or it is a GitHub repo,
C: but you've done your CI/CD on another system. It would be easy if we had structured results, because we could say "not applicable" or something like that. But in the current framework, does it make a difference where a GitLab repo gets a minus one and a GitHub repo gets a 10, but what if you have your CI in a different system?
L: On the score: yeah, I think even for GitLab I'm not sure that minus one makes sense, because minus one means inconclusive, which means we don't know. I think what we're looking for is a new value that says "not applicable." So I guess that's my first reaction about the minus one. Maybe it's not a problem in practice when we compute the score; I haven't thought enough about it. And on GitHub, yeah.
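The inconclusive vs. not-applicable distinction drawn here could look like this in a structured-results world. The `Outcome` enum and the function below are hypothetical sketches, not Scorecard's API:

```python
# Hypothetical sketch separating "couldn't analyze" (inconclusive)
# from "check doesn't apply to this platform" (not applicable);
# today Scorecard collapses both into the -1 sentinel.
from enum import Enum

class Outcome(Enum):
    SCORED = "scored"                    # normal 0-10 result
    INCONCLUSIVE = "inconclusive"        # analysis failed / not enough data
    NOT_APPLICABLE = "not_applicable"    # e.g. GitHub-workflow check on GitLab

def dangerous_workflow_outcome(platform, workflow_files):
    if platform != "github":
        return Outcome.NOT_APPLICABLE    # no notion of GitHub workflows here
    if workflow_files is None:
        return Outcome.INCONCLUSIVE      # listing failed; can't judge
    return Outcome.SCORED                # proceed to a 0-10 score

print(dangerous_workflow_outcome("gitlab", []).value)  # not_applicable
```

A not-applicable check could then simply be dropped from the aggregate without reading as "Scorecard failed."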
C: I think it's something we should settle before we move structured results to a release, and I think we'll see this as we migrate more checks. I think the only action item for right now is getting a maintainer majority on whether part of the scoring change should be reverted, or whether this is just something that we're going to deal with until we have structured results. I don't think that has to be decided right in this meeting.
L: Yeah, sounds good. Just at a high level, I don't think waiting for structured results is necessary, because all a structured result does is reformat the output; the scoring happens independently of the structured results. So any scoring we would change, I think, wouldn't affect the structured result. It's just that the scoring will happen on the structured result instead of on the raw results.
C: Yeah, I think there are some interesting questions, like major versions being how we change scoring: does it make sense to change how we do scoring significantly in the middle of v4? Right now we have zero to ten, with a minus one if we're not applicable. So, at least in my mind, as long as we're on v4, that's sort of what we're sticking to.
F: Yeah, I thought I'd just give the update that Steven put in Slack. It's also generally my update, building off of the roadmapping presentation I gave in our last meeting about what, at least at Google, we were committing to, and the feedback from the group to build a broader community roadmap. I'm going to be chatting with Steven and Adrian Markham, who, if you haven't met yet, is a new TPM for the OpenSSF.
F: If you remember from the previous meeting, Steven volunteered to walk through with me how some other projects built up community roadmaps. We delayed that a little bit, and it just happens that Adrian is looking at a very similar thing across a lot of different projects. So the three of us are going to have a discussion about what that can look like, and then hopefully by next meeting
F: we can bring back not just a summary of what we discussed, but also really start that process of building up a broader community roadmap for the project.
L: Yes, so I think last time we met we discussed the time zone, and whether we can alternate between an EU-friendly and a Pacific time zone. I think someone said they would share a Doodle or something equivalent. I just wanted to follow up, because I know these things slip through the cracks and then we forget. I was just curious whether anyone has updates on whether something was shared on Slack or anything.
C: I think we can tag Dan; I'm looking at the notes from last time, and we can ask him. I would assume that, as the one that brought the issue, he would at least have said.
C: Okay, I think I'm getting confused, but yeah: let's just ask Dan whether a Doodle was shared in Slack.
C: With that, we are at the end of the agenda. So, barring any last-minute questions, all that's left is to pick a facilitator for the next sync, which is currently July 27th.
C: Right, thanks everyone for coming and discussing. Please take a look at the issues and leave a comment if anything you wanted to say wasn't represented in the meeting or added to the notes. Otherwise, see folks in two weeks, and thanks, everyone.