From YouTube: Scorecards Biweekly Sync (January 12, 2023)
C: We will wait the three minutes for anyone to come back, if they need it for any reason. I know Laurent is supposed to talk about structured results.

C: But before Laurent does: another Scorecard meeting, everyone, and Happy New Year to everybody. So please fill in the attendees in the Google Doc. Most of us already know each other, but if anybody's new, please introduce yourself.
C
Right,
okay,
so
I
don't
see
any
new
faces,
but
if
there
are
any
new
people,
please
include
yourself.
Jeff
is
writing
something
about
projecting
digital
update
Jeff?
Do
you
wanna
talk
about
that
specifically.
D
Oh
just
to
update
yeah
I'm
gonna
be
working
on
docs
the
next
few
next
week
or
so
trying
to
get
some
some
overhauls
there,
but
that's
about
it.
C
Great
next
we're
going
to
jump
on
to
the
agenda
Lauren.
You
want
to
talk
about
structured
results.
F
Yeah
sure
so
a
structured
result
is
something
I
brought
up
in
the
past
maybe
a
month
ago.
So
the
idea
here
is
that
the
current
Json
results
that
we
we
have
are
basically
a
list.
So
you
have
the
checks
and
then
you
have
a
list
of
results
which
are
strings,
and
that
makes
it
difficult
for
people
to
write
automated
tools
to
act
upon
those
results,
because
it's
not
very
structured.
F
So
the
first
PR
that
I
sent
is
an
example.
It's
basically
the
changes
we
have
to
make
to
support
this
new
structured
results.
Navin
I,
don't
know
if
you
want
to
share
your
screen
or
maybe
I
can
try
to
share
my
screen
to
show
what
what
the
new
results
would
look
like
if.
F
Yeah,
so
that's
the
example,
so,
basically
what
it
does.
It
replaces
the
old
detail
which
was
an
array
of
strings,
and
it
replaces
it
with
the
findings,
a
finding
structure,
which
is
a
more
fine-grained
view
of
what
the
results
are.
So
the
first
field
that
you
see
is
called
the
rule.
F
That's
basically
the
so
every
check
is
basically
divided
into
rules
and
each
rule
is
defined
with
a
yaml
file
which,
with
all
the
all
the
information
about
the
rules
such
as
you
know,
its
risk,
its
remediation
and
so
on
and
so
forth.
So
you
can
think
of
a
rule
as
like,
a
more
granular
version
of
a
check
and
we're
hoping
that
those
rule
can
explicitly
tell
people
what
what
the
result
is
about.
F
So
in
this
case
here
the
rule
is
called
GitHub
workflow
permission
top
no
right,
which
means
that
it's,
the
the
the
token
permission
in
the
workflow,
are
not
defined
at
the
top
level
of
the
workshop
or
nothing.
Maybe
after
we
can.
We
can
look
at
the
the
yaml
file.
I'll
continue.
I'll
continue
on
this,
the
the
finding
also
has
so
it
has
a
risk.
It
has
an
outcome
which
is
either
negative
positive
and
maybe
it
could
also
be
like
not
supported
or
yeah,
maybe
not
supported.
F
For
example,
if
you're
on
gitlab
and
the
the
this
rule
doesn't
apply,
the
location
has
the
you
know
the
path
to
the
file,
including
the
line
where
we
found
a.
F: The remediation section is, as you can imagine, the remediation. It's supposed to be granular enough that you have the exact steps to remediate. This is something that we don't have today. For example, if you look at the pinned-dependency check: depending on whether it's a Python result or a Dockerfile result or, I don't know, an npm result, users need different remediation steps, and we don't have that level of accuracy, that level of detail, in the remediation section of the check.yaml.
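As a rough illustration of the finding structure described above (the field names and values here are assumptions based on the discussion, not the final schema from the PR):

```json
{
  "findings": [
    {
      "rule": "GitHubWorkflowPermissionsTopNoWrite",
      "outcome": "Negative",
      "risk": "High",
      "location": {
        "path": ".github/workflows/release.yml",
        "line": 3
      },
      "remediation": {
        "effort": "Low",
        "text": [
          "Add a top-level read-only permissions block to the workflow.",
          "Grant write permissions only to the specific jobs that need them."
        ]
      }
    }
  ]
}
```

Each finding carries the rule it came from, an outcome, a per-rule risk, the file location, and remediation steps as an array, matching the fields walked through above.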
F
So
here
it
will
be,
like
you
know,
way
more
accurate
and
to
try
to
decrease
the
time
that
it
takes
users
to
remediate.
F
The
risk
levels
also
is
something
that
we
think
can
help
visualize
visualization,
so
you
can
imagine,
for
example,
Michael
who
works
on
depths.dev
could
maybe
show
all
the
critical
results.
All
the
I
don't
know
medium
results
and
and
use
that,
in
combination
with
the
effort
to
to
help
users
prioritize.
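The prioritization idea mentioned here could be sketched like this. It's a minimal sketch: the finding fields ("risk", "remediation.effort") and the ordering of levels are assumptions for illustration, not the final schema:

```python
# Sketch: order negative findings by descending risk, then ascending remediation effort,
# so high-risk, low-effort items surface first. Field names are assumed, not final.
RISK_ORDER = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}
EFFORT_ORDER = {"Low": 0, "Medium": 1, "High": 2}

def prioritize(findings):
    """Return negative findings, highest risk first; ties broken by lowest effort."""
    negative = [f for f in findings if f.get("outcome") == "Negative"]
    return sorted(
        negative,
        key=lambda f: (
            RISK_ORDER.get(f.get("risk"), len(RISK_ORDER)),
            EFFORT_ORDER.get(f.get("remediation", {}).get("effort"), len(EFFORT_ORDER)),
        ),
    )

findings = [
    {"rule": "a", "outcome": "Negative", "risk": "Medium", "remediation": {"effort": "Low"}},
    {"rule": "b", "outcome": "Positive", "risk": "High"},
    {"rule": "c", "outcome": "Negative", "risk": "High", "remediation": {"effort": "High"}},
]
print([f["rule"] for f in prioritize(findings)])  # → ['c', 'a']
```

Positive findings drop out, and the high-risk finding sorts ahead of the medium-risk one even though its effort is higher, since risk is the primary key.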
F
The
reason
that
we
have
a
risk
per
rule
is
because,
in
the
past
we
had,
you
know
the
risk
was
at
the
level
of
a
check,
but
often
time
in
the
check.
You
know
some
some
part
of
the
check
of
the
chicks
were
you
know,
maybe
critical
or
like
high
risk
results,
and
other
thoughts
were
like
low
results.
So
we
were
not
able
to
adjust
the
the
level
of
the
risk
when
we
reported
the
results.
F
So
that's
kind
of
what
it
looks
like.
Maybe
I
can
just
quickly
show
the
the
role
that
yaml
Navin
and
then
we
again
take
some
questions.
C
F
So
you
can
go
under
a
rule,
yeah
take
which
one
do
you
sorry
under
rule?
Oh
sorry,
sorry,
it's
under
checks,
evaluation
and
permissions.
F: You can see the name of the file is basically the name of the rule. Then we have a description.

F: I might actually remove the short description field and keep just one, because they're basically the same. I added, like, a motivation field to explain what the check actually does and why it's important, a field for implementation to explain how we implement it, if it's relevant to users, and then under the remediation field you can see the effort and the text.

F: The text is basically an array, and this way we can show it to users as different steps that they have to follow in order to remediate. So that's kind of all there is to it.
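A rough sketch of what such a rule YAML might look like, assuming the keys named in the discussion (description, motivation, implementation, remediation with effort and text); the exact keys and wording are guesses, not the file from the PR:

```yaml
# Hypothetical rule file; the file name doubles as the rule name,
# e.g. GitHubWorkflowPermissionsTopNoWrite.yml
description: Token permissions are not restricted at the top level of the workflow.
motivation: >
  A workflow token with write-all permissions lets a compromised step
  tamper with code or releases.
implementation: >
  Scorecard parses each workflow file and looks for a top-level
  permissions block that grants read-only access.
remediation:
  effort: Low
  text:
    - Add a read-only permissions block at the top level of the workflow.
    - Grant write permissions only on the jobs that need them.
```

Because `text` is an array, a consumer can render each entry as a separate remediation step, as described above.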
F: We also need to decide what a rule should be, so that it's kind of stable over time. For example, here I called the rule, you know, "no top-level permissions defined". I think I could also have called it "default permissions", but I took the view that later on we might want to have another rule that asks about the org-level default permission.
F: You know, to take... I mean, everything they might need to make a judgment call on.
G
The
results
so
I
think
that
that
is
maybe
that's
maybe
like
implementation
detail
for
like
per
Org,
the
the
rule
I
mean
maybe
has
a
name
that
is
some
concatenation
of
like
rule
outcome,
risk
or
whatever
fields.
We
want
to
add
and
right.
So
it's
like
permissions.
You
know
top
level
the
high
blah
blah
blah
right,
but
yeah
we
should.
We
should
decide
on
what
the
schema
looks
like
before
before
sliding
it
in.
C
Sure,
unfortunately,
there's
a
breaking
change.
This
is
a
breaking
change
so
from
a
schema
result.
So
this
has
to
be
next
wheel
version.
We
need
to
just
setting
some
saying
hey.
We
cannot,
because
this
is
even
for
API
and
for
our
we're
doing
V4.
This
has
to
become
V5.
C: I really like the way it is coming out. I am not a fan of strings. At least, I know it's not its implementation, but I'm not a big fan of negative/positive being strings; it's hard for somebody to automate against. But other than that, I like the location where the things are; that is a great thing. And the risk being high/low makes sense for human beings. Usually this is really good, but...
B
Yeah
good
I'm,
sorry
just
trying
to
piggyback
off
of
naveen's
comment
about
strings
I
I
commented
on
the
PR,
but
I
wanted
to
highlight
that
some
people
have
talked
about
having
a
like
an
integer
ID
for
a
rule.
So
there's
some
discussion
on
that
too.
In
the
pr.
C: But if you're writing automation, numbers are easier, because nobody's going to manually look at this YAML; my two cents. Oh, it's JSON. People are going to write some automation: oh, if it's four, I need to go do this. Versus writing strings: hello, do I need to uppercase it? If we change our casing for whatever reason, that breaks somebody else. So numbers are easier. My two cents.
G
Beat
me
to
it,
but
yes,
plus
one
to
the
whole:
it's
a
breaking
change.
It's
a
it's
a
reasonably
large
breaking
chain
for
dropping
we're,
dropping
Fields
out
of
trucks,
we're
adding
new
Packages,
Etc
I.
Think
there's
a
way
to
reason
about
this,
where
we
bring
it
into
I.
Think
we
need
a
larger
discussion
about
I.
Think
we've
brought
it
up
in
the
past,
but
like
what
do
we
respect
semper
like?
Do
you
like
what
happens
when
an
API
changes?
How
do
we
message
that?
G
There's
a
way
to
reason
about
this,
where
we
could
stick
a
lot
of
this
into
a
you
know,
use
a
few
dare
package
a
you
know
like
you
know
something
that
is
an
experimental
and
and
bring
that
in
slowly,
but
surely
as
we're
still
as
we're
testing
it
around,
but
I
I
think
you
know
one
of
the
things
that
you
run
into
when
you're
incorporating
a
large
change
is
that
you
know
we're
we're
saying
right
now
at
a
baseline,
it's
it's
V5,
but
if
it
doesn't
come
in
at
some
point,
it's
it's
gonna.
G
You
know,
you're
gonna
have
the
task
of
trying
to
continually
make
sure
that
the
pr
stays
up
to
date
until
you
can
land
it
for
a
V5.
So
like
that
suggests,
is
it
a?
Is
it
an
experimental
package?
Is
it
a
V5
tracking,
Branch
future
Branch,
whatever
you
want
to
call
it?
How
do
we?
How
do
we
talk
about
that?
Because,
because
I
imagine
that
we'll
have
we'll
have
more
large
PRS
that
we
don't
want
to
that?
We
don't
want
to
become
stale.
F
Yeah
I
can
comment
on
this,
so
regarding
the
API
I
think
I
think
Navin
you,
you
meant
like
the
rest
API
in
the
bigquery.
G: So let's simplify, hold on, let's simplify, because anything that we have exposed is the API, right? So anything that is in this repo that is an exported method, type, etc., is part of our API, however people consume it, whether it's in BigQuery, what have you. If someone is writing a package built on top of Scorecard, like scorecard-action, that's part of their API.
F
All
right,
okay,
so
I'll
start
with
the
rest.
Api
I,
don't
see
this
format.
Replacing
the
old
one
I
think
would
be
a
different
one
and
the
API
would
just
I
haven't
chatted
with
your
azim
or
how
it's
done
now,
but
I
don't
think
we
would
deprecate
the
old
the
old
results
simply
because
I
think
some
users
might
actually
prefer
when
they
run
on
the
CLI,
but
it
gives
them
strings
rather
than
this
Big
Blob
of
data,
because
that's
a
lot
harder
to
read
for
a
human.
F: So that's for the REST API. Regarding the Go API...

C: So that's what you would envision there: we still keep generating the old format, but probably have a slash, /v2 would just be an example, on the FQDN with this new result, so people can at some point stop using the old one. Okay, I agree.
F
That's
how
I
see
it
for
the
API
that
scorecard
has
so
in
terms
of
stale.
You
know
a
different
branch.
What
Stefan
said
I
think
once
this
PR
is
merged,
it
should
be
pretty
easy
to
it's,
not
gonna
break
like
the
rest
of
the
code.
I
think
it
can
live
in
the
same
code
base.
I
know
it's
not
very
explicit
on
our
readme,
but
I.
Don't
think
scorecard
guarantees
anything
similar
like
somewhere
wise
for
the
API
that
we
have
like
the
the
go.
G: So I think every time this conversation comes up, we kind of get to the same place, and I think where we need to get to is: we should be explicit. Yes.
F
Yeah
yeah
I
think
we
should
have
a
section
on
the
readme
and
say
that
December
is,
for
maybe
the
output
format
on
the
CLI,
the
bigquery
table
the
rest
API,
but
not
for
the
go
API
I
think
yeah.
This
is.
We
should
have
done
that
earlier.
F: Okay, yeah, I'll do that. If there's no other question: I also wanted to ask a little bit about whether people think we should keep the score as part of this new structure, or whether people think we should get rid of it.
H: One of the patterns that I've seen is, like, you know, for example, in the GitHub starter workflows there is the whole code scanning section, and I think Scorecard is also there, and I think they probably require tools to upload the results to the GitHub code scanning API, which I think accepts only SARIF; sorry, I don't know if it accepts only SARIF, but that's how Scorecard results actually get shown in the GitHub Action, right?
F: No breaking changes. And in fact, even for JSON there are no breaking changes if we create a new format and we call it something different. Right now I called it extended JSON, but you know, we can come up with a better name.
G
Yeah
so
I
I
think
if
I
can,
if
I
can
try
to
to
interpret
Vernon
when
she
when
she
decided
it's
I,
think
it's
a
why
not
serif
or
why
not
closer
to
serif.
If
we
know
that
that
is
a
standard
that
that
other
tools
are
going
to
be
ingesting
already
and
I,
think
it's
a
great
question.
H: So, you know, I'm just wondering if there's already work done in terms of, you know, whatever problems this is trying to solve. Maybe it's SARIF, if, you know, that group has done work to solve those problems. So I'm just wondering; in fact, I didn't even know whether it's SARIF or not, so I was just trying to understand.
F
No
I
mean
that's
a
good
question.
I
think
serif
is
a
standard.
It's
a
huge
standard.
Virtually
nobody
actually
implements
it
fully.
So
GitHub
only
implements
and
understands
a
subset
of
serif
and
I
think
because
it's
so
big
and
so
complicated
that
no
one
I
mean
there
are
very
few
tools
that
actually
read
it
and
that's
my
understanding.
G: So, I mean, with that said, I do think it's worthwhile to see if we can map what we want. Yeah, and I was just about to link that to you, Spencer, thank you. And in the notes there's a link to the GitHub docs about code scanning and SARIF support for code scanning.
G
You
know
if
if
we
have
Concepts
that
overlap
it
if
it's
a
yeah,
if
it's
a
even
if
it's
a
subset
of
of
the
serif
implementation,
I
think
I
think
it'd
be
worthwhile
to
consider
doing
that
instead
of
the
because
I
I,
like
even
when
we
reference
extended
Json
extended
Json
is
defined
in
other
places
as
well
as
as
not
being
this,
so
we
should
so.
Where
are
there
respects
defined?
Let's
try
to
to
adhere
to
them
and
care
be
being
careful
with
the
nomenclature
because
extended
Json
is
a
thing
also.
H: That was... go ahead. The only other thing I had was about the rule ID: you know, I've seen that the rule ID doesn't necessarily have to be a number, as long as it, you know, sort of stays the same. And, you know, sometimes it's also intuitive for someone to be able to see a rule; like, here's an example from CodeQL, and I think some of the other tools use, like, you know, letters-dash-number. So I just wanted to mention that.
F: One quick... yeah, I'll take a look at the CodeQL ones. The other question I wanted to ask is whether people think we should keep the score in this new format, and whether we should also keep the checks, because I think some people have expressed interest, or at least have discussed, whether we need to have a result with a flat list of findings, without having this nested check structure that I currently reuse from the earlier JSON format.
F: I'm just wondering if people have thoughts on this. If we removed it, I guess we might still need a new field in the finding that says "check is blah blah blah", but I wanted to know whether people think it would simplify the format, or whether it's a bad idea.
C: Right. Thinking a little further down: if, as in the example, we intend to replace our standard JSON, what does extended JSON then bring? That feels better, but having the score should not cause any major storage concern; it's not that it's going to blow up. I'd rather have it: oh, then I can get this JSON, figure out what my Scorecard score is, and then decide what I need to do. So that would be a good addition to have. My two cents on that.
G: Yeah, same. I would prefer more data, and to slice and dice it as necessary.
H: So the question I have is: do you already have, you know, a consumer of this output? You know, who's going to consume this?
F: I think Michael on the call would be the first consumer. Yeah, so Michael works on the deps.dev project, and they've had issues, you know, making use of the structured results; like, they've had a hard time showing users their results beyond just dumping the details of the current results. So this is supposed to help them. We had discussions with them about what they would need, what would help. So yeah, I think Michael would be the first consumer. I don't know if you have anything to add, Michael.
E: I mean, I do think that it's good in general to have a little bit more structure to it, because it does make it easy to reason about. So yes, that's the short version, certainly. In terms of compatibility, I think that that transition, depending if it's a major version bump, is just about handling it appropriately. We haven't looked too much at SARIF yet, which is something I will be looking a bit more at.
G: And... I'm sorry, go for it. Sorry, I'm done. I'm looking at the page that Spencer shared: towards the bottom there are SARIF output examples, and the example with minimum required properties. I'm sure that we get to a format like that, because our results are displayable in GitHub. But looking through that, that seems reasonable, and I think we're pretty close there, if we were to adopt the format.
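For reference, the "minimum required properties" example being discussed looks roughly like this: a SARIF 2.1.0 skeleton with one tool, one rule, and one result. The rule ID and file path here are illustrative, not from the actual Scorecard output:

```json
{
  "version": "2.1.0",
  "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
  "runs": [
    {
      "tool": {
        "driver": {
          "name": "Scorecard",
          "rules": [{ "id": "GitHubWorkflowPermissionsTopNoWrite" }]
        }
      },
      "results": [
        {
          "ruleId": "GitHubWorkflowPermissionsTopNoWrite",
          "message": { "text": "Token permissions are not restricted at the top level." },
          "locations": [
            {
              "physicalLocation": {
                "artifactLocation": { "uri": ".github/workflows/release.yml" },
                "region": { "startLine": 3 }
              }
            }
          ]
        }
      ]
    }
  ]
}
```

The rule/result/location split maps fairly naturally onto the findings structure discussed earlier, which is why the group considers the formats close.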
C: And to add to Laurent's question: I spoke to at least a couple of them; I don't remember the names. And people want it, just not as text; people wanted something they could read out programmatically, or something like this.
C
This
is,
and
they
weren't
interested
in
getting
the
data.
So
I
don't
know
if
they
would
adapt
be
radioed
out,
but
this
was
a
feedback
that
I
got
from
people
speaking
about
scorecard.
C
The
next
topic
is:
if
I
get
okay,
wait
action
to
use
the
API
to
get
comments
and
updates
on
difficult
to
see
changes.
This
is
something
that
I
have
been
working
on.
C: Let me share a demo of what it is. So right now, again, this came up by speaking to a couple of customers at the summit. People are flying blind: if there's a new dependency change, people are like, "but I don't know how good the dependency is." And Laurent had...
C
Somebody
forgot
the
name
in
tone
work
on
this
specifically
for
their
project,
so
I
took
that
took
that
idea,
but
I
couldn't
reuse
any
of
the
code.
The
whole
goal
is
to
if
there's
any
dependency
change
to
your
project,
wouldn't
be
nice
to
get
a
scorecard
score
on
the
dependency
change.
So
it
makes
people
warm
and
fuzzy
to
say:
okay,
now
I'm
bringing
this
new
dependency.
What
happens?
C
Does
it
have
a
binary
file
in
there?
Does
it
have
been
independency?
What
is
the
score
on
any
of
that
and
be
nice
to
know
so
so
I
wrote
some
code
up,
so
here's
an
example.
This
is
scorecard
action.
So
what
what
it
does
is
just
to
give
some
perspective
it
changes
and
dependencies.
C: It makes a dependency change, and, as a specific thing Laurent brought up, a blank import in Go, because this dependency has a vulnerability. And this dependency is not scanned by Scorecard, because it's an older repository and Scorecard does not scan it. So this is a common use case: people want to bring in or update a few dependencies; people want to bring in an existing repo, but it's not scanned by Scorecard. So, ideally, we would want to know. If I was trying to consume Scorecard, I don't want all the checks.
C
I
could
be
picking.
Hey
I
just
want
two
checks
to
know
what
that
what
the
data
is
about
this
and
if
there's
no
scorecard
data,
but
if
there
is
any
volatility,
I
want
to
know
that
information,
and
all
of
this
is
used
only
one
API
call
to
GitHub,
which
gives
dependency
def.
So
by
calling
that
API
providing
between
two
common
shares.
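The single call being described is, as far as I can tell, GitHub's dependency review endpoint, which diffs the dependency graph between two commits. A sketch of the request, with owner, repo, and SHAs as placeholders:

```
GET https://api.github.com/repos/{owner}/{repo}/dependency-graph/compare/{base}...{head}
Accept: application/vnd.github+json
```

The response lists each added or removed dependency with its ecosystem, source repository, and any known vulnerabilities, which is the information the action stitches together with Scorecard results below.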
C: So here's the example. This is how I configured the API check to only two checks, Binary-Artifacts and Pinned-Dependencies, because I don't want everything; just to show this is configurable, so just those two. And here's the actual API call to Scorecard that somebody can click to look at the result. And the changes were adding a couple of dependencies; the go-saml one didn't have any Scorecard API results, because it's not being scanned, and that repository has not been updated in the last four years.
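The Scorecard API call mentioned here is the public REST endpoint; the project path below is a placeholder:

```
GET https://api.securityscorecards.dev/projects/github.com/{owner}/{repo}
```

It returns the latest published scan as JSON, with the aggregate score and per-check results, or a 404 when the repository has never been scanned, which is the "no Scorecard API results" case just described.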
C: This is an example of how somebody can configure which checks they are interested in, to show their score. My thought process: we can obviously configure what we want to show and what we don't want to show, so as not to dump too much. The reason why I just showed this is that I opened a PR for this to one of the dependency update tools. Laurent, do you know which tool that was? It wasn't Dependabot; it was the other one.
G: Does it make sense for us to, if there is equivalent functionality... not to say, I mean, not everyone is on GitHub and necessarily using this stuff, but it's a potential thing that we could leverage, if we don't need to reinvent the wheel.
C
And
so
the
dependency
review
API
just
gives
the
it
gives
two
information.
It
says
what
are
the
new
dependencies
that
have
changed
and
it
also
gives
vulnerability.
So
that's
why
I
was
able
to
use
that
to
pull
that
information
it's
up
to.
Obviously
it
doesn't
not
set
in
stone
I'm,
just
showing
what
can
be
but
dependency
review.
Api
does
not
query
the
scorecard
API
to
get
results
and
we
I'm
just
teaching
both
these
things
together.
To
provide
this
information.
G: I'd be cautious... I'm, yeah, I would have mixed feelings about it, especially if it's only supportable, potentially only supportable, on a platform that everyone might not be on. Because I know there's support being built for GitLab as well. Or is it experimental? Yeah.
C: It's experimental. So that's why I'm keeping this code base only in the action, not in Scorecard. So, essentially, if we decide to add this to GitLab and XYZ, then potentially move that into Scorecard, but keep this simple enough. This is not an extremely large code base; it's probably, I would say, about 500 to 600 lines of code. But my thought process is to keep this only in scorecard-action.
C: It's not... we are not parsing. Like, Scorecard does not parse it; that's what the dependency API provides out of the box. So I give it two commit SHAs for the repository, and it says: hey, are there any dependencies updated? If there are, it tells me what those dependencies are. I take those lists, I parse that out, go to the Scorecard API and say: hey...
C: Yes, it gives the ecosystem, it gives what was added, what was removed; it gives all of that, and also the source repository. We take the source repository, and we just check if it's github.com; if it's GitHub, then we take that and go look it up in the API. If you can't find anything in the API, we're not going to dump anything.
C
Is
very
good
question
I,
don't
know
I,
don't
know
whether,
but.
C: It's part of the GitHub API. So it's part of the GitHub API; I don't know how it would work for private repositories, or what features it provides there. I don't know about that.
G
Depends
is
the
is
the
the
the
shirt
version
so
yeah,
it's
part
of
the
GitHub
API
it
is.
It
depends
on
how
much
yeah
it
depends
on
what
what
functionality
you
have
turned
on.
So
I
believe
that
you
have
to
have
Advanced
security
and.
G
Enabled-
and
it
will
also
be
dependent
on
what
server
version
of
GitHub
Enterprise
server
if
you're
using
Enterprise
server
that
you
have
configured
so.
C
If
we
are
thinking
of
even
code
QRS
for
the
manner
you
need
to
have
paid
Advanced
security
for
you
to
run
code
QR,
which
scorecard
goes
in
dings.
If
you
can
use
that
so
so
we
can
go
down
this
Rabbit
Hole
of
trying
to
say,
which
is
and
which
is
not,
but
I'm
not
saying
that
we
should
not
look
for
that,
but
we're
gonna
make
this
configurable.
We
can
turn
on
and
turn
off
so
that
it
does
not
error
out.
Somebody
can
and
want
this
video.
C: If you see this: somebody has to maintain it; now you're adding more dependency on whether they are maintaining those packages, and those things become a problem. GitHub is an enterprise; they have a revenue model to maintain this, because they wanna add new features, make it stable. Now we'd be depending on someone else, which brings up: oh, do you have support for Python, do you have support for x, for PyPI, do you have support for this? But they have a revenue model; this is what they're building on. Obviously, swapping that out is only one function.
G
Let
me
let
me
interject
a
little
bit
because
Veron
before
you
you
jumped
in
I
was
going
to
kind
of
touch
on
this.
There
is
a
bit
of
an
incentive
towards
open
source
or
public
repositories
right
so
for
code
ql,
you
can
use
codeql
without
the
license.
As
long
as
the
repo
is
is
public
right
you,
if
you
have
a
private
repository,
that's
when
you've
got
you
that's
when
you've
got
to
get
the
advanced
security.
G
You've
got
to
click
the
buttons
you've
got
to
pay
the
money,
so
there
is
some
incentive
towards
having
publicly
available
open
source
code
and
and
getting
some
of
this
additional
functionality
out
of
GitHub.
As
a
result,
I
think
that
that
is
a
reasonable
model
and
I
think
that
you
know
at
least
by
restricting
this
in
in
within
the
context
of
the
scorecard
action,
it
means
you're
likely
running
through
the
same
workflow
right.
G
There
are
things
in
in
scorecard
that
you
won't
necessarily
need
to
be
able
to
do
if
you're
or
you
won't
get
as
robust
results
if
you're,
if
you're
dealing
with
a
private
repository
already
right.
So
as
long
as
we're
setting
the
expectation
of
like
what
needs
to
be
what
what
buttons
need
to
be
turned
on
or
what
things
need
to
be
paid
for,
I
think
it's
fine,
especially
given
that
it's
restricted
into
this
kind
of
scorecard
action,
namespace
as
it
were,
yeah.
C
Adding
to
what
Stephen
mentioned
right
now,
if
you
want
to
run
scorecard
on
the
right
repository,
it's
going
to
write
to
SAR,
which
is
an
advanced
security
feature.
So
you
need
that
so.
H
One
of
the
one
of
the
suggestions
I
had
is,
for
example,
you
know
this
osv
scanner,
you
know
which,
which
recently
from
Google,
which
is
open
so
I,
think
that
also
does
parsing
off
a
lot
of
these
package
manager.
Files
and
I'm.
Just
I
was
just
wondering
if
that
is
actually
being
used.
In
fact,
that
is
sort
of
mentioned
here,
but
I
don't
know
if
it's
a
dependency
or
is
it
a
because
that
you
know
if
you
take
a
dependency
of
that
and
it
it
is
parsing
a
lot
of
these
package
managers
advantage
correct.
C
But
I
it's
a
battle
point
I'm,
not
saying,
but
probably
for
V1
we
can
think
of
using
using
the
GitHub,
but
probably
for
we
too.
We
can
see
like
okay,
let's
how
can
we
replace
this?
That's
my
take
because
scorecard
action
cannot
be
run
on
a
private
people
if
you
don't
have
advanced
security
so
especially
nowadays
being
namespace
too.
That's
my
take
because
now
adding
another
dependency
and
coming
back
at
what
to
rather
giving
this
as
a
feature.
G
So
I
I,
I
I,
feel
I
feel
like
I'm
being
the
contrarian
today,
but
I
I
have
kind
of
the
the
opposite
or
maybe
a
different
take
is,
is
evaluate
options
before
implementation
and
try
to
make
like
understand,
good
fit,
I
think
osc
scanner
is,
is
a
potential
good
fit
and
it,
and
it
will
will
potentially
remove
some
of
the
barriers
for
people
who
may
not
be
on
systems
where
they
can
enable
what
they
need
to
to
to
have
that.
G
So
we
should
look
at
osv
scanner,
especially
because
we
probably
also
know
the
people
who
are
involved
in
and
maintaining
it
before
before
doing
any
implementation.
I
think
it's
a
reasonable
request.
F
I
think
there's
one
limitation
of
OSB
scanner
that
it
understands
packages,
but
it
doesn't
understand
the
mapping
between
a
package
and
a
repo
unless
it's
for
maybe
golang,
because
we
can
infer
the
repository
from
the
like
username
and
I.
Think
that's
what
that
GitHub
API
is
providing
right.
I!
Think
Deb
is.dev
also
has
a
similar,
API
I.
Think.
F
Some
love
behind
where
they
look
at
stars
and
downloads
and
they
say
okay.
This
is
coming.
This
repo
seems
to
be
the
source
of
Truth,
because
I
think
sometimes
people
also
Fork
projects
and
they
don't
change.
The
Source
hints
that
they
have
in
their
npm.
So
you
end
up
with
a
lot
of
packages
that
are
supposedly
from
the
same
repo
I
mean.
This
is
just
something
to
be
aware
of
that.
The
mapping
is,
is
what
I
think
that
API
is
giving
us
that
maybe
OSD
doesn't
support,
but
I
haven't
looked
in
detail.
C
We
have
next
eight
minutes,
so
I'm
gonna
run
with
this
okay.
So
to
the
next
step-
and
this
is
I'm-
gonna-
do
a
PR
to
scorecard
action
so
that
we
can
get
eyes.
C
So
first
of
all,
next
thing
is
I
want
to
keep
it
as
a
comment,
so
that
it's
one
comment,
so
people
can
keep
it
easy
simple:
they
can
Squish
and
they
can
see.
Oh
what
it
is.
Each
one
is,
instead
of
it
being
hundreds
of
comments.
So
if
they
don't
like
it
and
one
of
the
things
Lauren
do
you
want?
Do
you
think
this
should
be
part
of
the
existing
action,
or
should
it
be
because
that
means
you
got
to
run
this
in
vrs
and
scorecard
if
it
runs
on.
F
Yeah
I
think
I.
Let
people
discuss
what
they
think
if
I
understand
correctly.
If
this
runs
on
PRS,
the
only
API
request
is
to
the
that
GitHub
API
about
the
dependencies,
but
everything
else
will
be
pulling
out
from
the
bigquery
or
right
now
from
the
rest,
API,
okay,
so
so
the
only
right
limiting
problem
would
be
from
from
that
additional
apir
you're
also
saying
that,
oh
maybe
the
running
scorecard
to
the
NPR
as
the
limitation
and
people
are
already
running
into
that
problem.
Yes,
yes,.
C
Because
now,
if
scorecard,
is
usually
configured
for
on
a
like
on
a
merge
or
something,
but
it's
now
we're
going
to
enable
with
PRS
it's
gonna,
they're
gonna
head
into
great,
limiting
scorecard
costs
and
leave
rate
limiting
issues,
and
so
that's
why
I
want
to
avoid
that.
Keep
this
simple!
Keep
it
clean
get
in
the
one
downside
is
now.
We
need
to
request
a
lot
more
people
to
go
install
this.
That's
the
only
downside
to
that,
but
in
the
long
run,
it'll
avoid
rate
limiting
issues.
G: Sorry, I was on mute. No, I'm saying I think a blocker to implementation should be evaluating options.
C
Okay,
I
can
I,
can
I,
can
I
can
open
a
PR
and
we
can
have
a
discussion
should
not
be
a
problem
on
that.
Specifically.
H
For
the
GitHub,
actually
one
of
the
other
things
to
consider
is
that,
if
you're
planning
to
create
the
pr
comment
using
the
GitHub
token,
then
I
think
it's
also
going
to
need
the
pr.
The
pull
request
right
permission
for
that.
So
that's
another
thing
to
consider:
okay
with
some,
so
if
someone
wants
to
opt
into
this,
then
the
documentation
will
have
to
say
that
yeah.
G: Yeah, yeah, a discussion would be great, because I think... I agree that a separate action is more work, and this is good documentation; this is just setting the expectation that you will have to change permissions if you want to use this, which I think is reasonable.
C
And
Laurent
I'm
sorry,
we
are
three
minutes
into
it.
If
we
merge
I
know,
these
are
like
one
of
the
things
that
we
I
was
also
talking
about
is
I,
don't
I,
don't
really.
We
should
probably
talk
about
awareness
that
you
want
to
release
like
all
of
these
things
together,
as
we
were
talking
about
in
the
last
meeting,
is
we
probably
won
in
q1
set
a
date
and
try
towards
it,
because
it
makes
it
easier
as
to
what
we
want
to
release
and
put
that
together.
G: We do have a milestone. I don't think we've been planning against the milestones, but there are milestones in GitHub.
F: My ask is, and maybe that's a question for Spencer or people who know more, but are there any ways that we can reduce the problem around API rate limiting for Scorecard running on pull requests, or is this just going to be hard in general?
B: I don't know. I think it would help if we could specify the checks in the action. I don't think we can, so correct me if I'm wrong, but if we could specify the checks that run in the scorecard-action, because some are a lot worse than others. There are some that hit 30 API requests for a certain check, but a lot of them use the GraphQL API. So if it's one GraphQL call and we get, you know, the majority of the checks, is that good enough?
F: So if somehow we are hitting those rate limits because of the API calls, it means that we are basically not being very smart in how we query GraphQL, and we're basically querying everything, even things that we don't need. So maybe we can make it a bit more granular. Because, like, Scorecard should only need, you know, cloning the repo and then basically looking at the code; those are the only checks that we'd turn on on pull requests for the whole problem, at least right now.
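If per-check selection were exposed as an action input (the discussion above notes it currently is not, so the `checks` input here is hypothetical), a PR-time workflow step might look roughly like this:

```yaml
# Hypothetical sketch: scorecard-action has no `checks` input today.
# This illustrates limiting a PR run to the two clone-only checks mentioned.
- uses: ossf/scorecard-action@v2
  with:
    results_file: results.sarif
    results_format: sarif
    checks: Binary-Artifacts,Pinned-Dependencies
```

Restricting a PR run to checks that only need a clone of the repo would avoid most of the REST/GraphQL calls that cause the rate limiting being discussed.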
C: Yep, yes. So, for the next meeting: does anybody else want to volunteer?