From YouTube: Scorecards Biweekly Sync (November 17, 2022)
B: Hi, so yeah, I'll start it now, and if anyone else is joining we won't get too far ahead. Welcome; this is the Scorecards working group meeting.
B: My name is Raghav; I work at Google on Scorecards. We don't have a super packed agenda today, so if there's anything else you feel you'd want to add, please go ahead. But yeah, I'll be facilitating.
D: Okay, I might have the superpower to add you; hold up, let me see. I think I do. Do you mind saying your email address out loud, or is that... yeah.
D: How about this: why don't you stick it in chat like you just did. Awesome, all right, so let me quickly add you. I have now justified my existence.
A: All right, yeah, join group. Okay, that looks like it's a different group from the one that was in the... my email address is in the chat.
D: Okay, yeah, I don't know why, but let's get you added first.
A: Yeah, I saw the presentation last week. I didn't actually realize until you guys presented in Tahoe that there was a community meeting for Scorecard, so I've actually got a couple of commits on Scorecard without ever realizing that there's a community. Awesome.
D: Welcome, and thank you. Since you're here, a quick thank you to Sonatype, who did some really interesting analysis this year showing the utility of some of these metrics and their predictive power for, at least, known vulnerabilities. I just thought it was really awesome to see that kind of analysis, so thank you.
D: Yeah, exactly; a lot of the work happens in issues and pull requests and so on, as you can already see. But okay, who's our lead? I'm not actually leading this meeting; who's leading our meeting today?
B: Okay, so I don't see a bunch of project or individual updates, so I can go on to the agenda; feel free to add things over the duration of the meeting if anything comes up. The issue that I wanted to talk about is handling code review for automated or bot-generated commits on a project.
B: There are a couple of issues that have been opened around how we score code review for commits. Just to give context: the code review check today looks at GitHub PRs and whether, for any commit on the default branch of the project, the PR...
B: ...was closed or merged by someone other than the person who opened it. However, we don't have a great story around how we handle commits generated by bots. Examples of bots are Renovate or Dependabot, but people also use bots for other things, like backports, or bots that will automatically back out a commit if there's a regression or a bug.
B: So there's a wide variety of bots, and the scoring can be inconsistent.
B: We talked a little bit about this offline, and there's a proposal that I want to get people's thoughts on today. Today we look at the last 30 commits, and out of those commits we see how many distinct changesets there are, meaning how many of those commits belong to some reviewable activity like a PR, and we give a proportional score.
B: So depending on whether a change is reviewed or not, we score proportionally: for example, seven out of ten if 21 out of 30 of those commits are reviewed. Our proposal is to move away from this proportional scoring: if there is any presence at all of bot commits that are unreviewed, we would just subtract a flat three points, and if there are any human commits that are unreviewed, we would subtract seven.
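The two schemes Raghav describes can be sketched in Go. The 21-of-30 example and the flat minus-three/minus-seven penalties come from the discussion; the function names and the clamping at zero are illustrative assumptions, not Scorecard's actual implementation:

```go
package main

import "fmt"

// proportionalScore is roughly the current behavior: the score
// scales with the fraction of recent commits that belong to a
// reviewed changeset (e.g. 21 of 30 reviewed -> 7 out of 10).
func proportionalScore(reviewed, total int) int {
	if total == 0 {
		return 0
	}
	return reviewed * 10 / total
}

// flatPenaltyScore sketches the proposal: start at 10, subtract a
// flat 7 if any unreviewed human commit is present, and a flat 3
// if any unreviewed bot commit is present.
func flatPenaltyScore(unreviewedHuman, unreviewedBot bool) int {
	score := 10
	if unreviewedHuman {
		score -= 7
	}
	if unreviewedBot {
		score -= 3
	}
	if score < 0 {
		score = 0
	}
	return score
}

func main() {
	fmt.Println(proportionalScore(21, 30))      // 7
	fmt.Println(flatPenaltyScore(true, false))  // 3
	fmt.Println(flatPenaltyScore(false, true))  // 7
	fmt.Println(flatPenaltyScore(true, true))   // 0
}
```

Under the flat scheme a single unreviewed direct push costs the same whether there is one or twenty, which is exactly the stability property discussed later in the call.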
F: Sorry, I wanted to give a little bit more context, because there's an issue that people from Sonatype actually reported at the Tahoe meeting, and I think that's the broader context for this problem. If you look at a repository, let's say a single-maintainer repository: if Scorecard only catches pull requests that are from either bots or external contributors, all of those are going to be reviewed, because unless you have enabled auto-merge on Dependabot, most of the time people still click.
F: And external contributors are also always going to have an implicit review, whether it's an explicit LGTM or just that you merge and you're a different person. That leads to false positives, because all the single-maintainer repositories will be seen as having code reviews, even though that's not exactly true, and I think that's not in the spirit of this check. This check is more about, I wouldn't say insider attacks, but it's more like:
F: is there a community, and do they review each other's pull requests?
F: Maybe we consider bots less risky and don't score them the same way, whether they are reviewed or not. And can we also differentiate humans? Ideally, among humans, we would differentiate between external contributors and maintainers, although we don't think that's easily possible because there's no GitHub API for it, unless we do some inference where we look at who merges commits, say those are the maintainers, and then query the GitHub APIs again.
F: Some convoluted process like that. I don't think we can tackle everything in one go, but that's the broader problem. I'm sorry, David, I cut you off, but I wanted to give the context.
A: Just to summarize the thing we were talking about in Tahoe, which I think Brian talked to you about: there was a user who came to us asking, "Why does my scorecard score keep going down?" We looked at it and saw that he was being penalized for not having any code reviews. He was just merging directly to main, and occasionally his score would jump up because Dependabot would make a PR that got reviewed. His score never should have gone up, because he's still just a single person committing directly to main, so Dependabot shouldn't buffer his score the way it had been. That was the context.
F: And I think something else Spencer mentioned is that it's basically the fact that the score is going up and down; there's this instability in the check, and we don't know if we can really tackle this all in one go unless we look at more pull requests, or a longer history. But anyway, you're absolutely right; thanks for bringing that up. I'll let other people talk. Sorry.
D: So I have a potentially different view. That doesn't make it better, just potentially different. I think this is actually an interesting conversation, and clearly we need to think this through; I'm expecting there'll be some change here, because I think you're right, what's currently happening does seem a little odd.
D: I'm willing to accept bots as another creature; they just happen to be computers. So I'm happy with having a bot create a proposed change and then a human review it; now you have two things reviewing something. With one exception, which I'm not sure is worth trying to detect:
D: if the person who created the bot is also the reviewer, I think that's the one case that's a little more dubious. But if a bot creates something and then it's reviewed, presumably the bot thought it was the right thing to do based on its code, and then the human reviewed that particular change and also thought it was good, so I think that has some value. I'm actually fine with penalizing bot versus human commits differently, as proposed.
D: What worries me more is well-known bots not being penalized. I guess we're kind of assuming that certain bots never make mistakes. Boy, that's not been my experience, including with Dependabot.
A: Can we segment the two concepts in the notes? Because on one point it's: how do we handle single-contributor projects? And then the other point is the one that you're just addressing there.
D: Okay, all right, that sounds like a good way to tease it apart. So let's talk about commit review.
D: I'm fine with saying that something, be it a human or not a human, has created a proposed change, and now you have a second person reviewing the change to see whether or not it's actually the right thing. So you've got at least two somethings: somebody originally proposed it, and somebody else reviewed it. That doesn't strike me as crazy.
F: So would it be worth, maybe even for the code review check: what if we looked at, say, 100 commits, and we look at how many people have committed or merged those commits?
F: And then if there's a single person, we say it's very highly likely that it's a single-maintainer repository and there's no code review. Can we differentiate based on who commits? For single maintainers, I guess we can actually infer it, because if we look at enough commits we know it's the same person reviewing, or the same person merging, and if we also make sure that person is a human, we know. Can we do something like that?
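The inference described above, counting distinct human committers over a window of recent commits, could look roughly like this. The `Commit` type and the bot flag are hypothetical stand-ins; Scorecard's real client types are richer:

```go
package main

import "fmt"

// Commit is a minimal stand-in for the commit metadata the check
// would have available (committer login plus a bot classification).
type Commit struct {
	Committer string // login of whoever merged or committed
	IsBot     bool
}

// looksSingleMaintainer returns true when at most one distinct
// human appears as committer/merger across the sampled commits,
// which is the "very highly likely single-maintainer" signal
// discussed above.
func looksSingleMaintainer(commits []Commit) bool {
	humans := map[string]bool{}
	for _, c := range commits {
		if !c.IsBot {
			humans[c.Committer] = true
		}
	}
	return len(humans) <= 1
}

func main() {
	solo := []Commit{{"alice", false}, {"alice", false}, {"dependabot[bot]", true}}
	team := []Commit{{"alice", false}, {"bob", false}}
	fmt.Println(looksSingleMaintainer(solo)) // true
	fmt.Println(looksSingleMaintainer(team)) // false
}
```

Note the sample-size caveat raised later in the call: the window has to contain enough distinct changesets, or a busy single PR could dominate it.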
E: Is there some kind of, I haven't looked at exactly how it gets installed with Allstar, but a configuration file, maybe, to say who the maintainers are? Because otherwise new maintainers would also be penalized, and it's confusing for projects that want to increase their score to be able to do that on a short-term basis.
D: I haven't checked, but obviously GitHub, for example, knows who has permission to write directly to the repo, which is what I would call a maintainer. Is that publicly accessible, who has permission to write to the repo? No? No.
D: Well, that's two of them. I think there are actually two different measurements: are commits getting reviewed, and how many maintainers there are. Those are two separate points, so let's focus on the code review one. For code review, we do get that information publicly.
D: Basically, if somebody proposes a change, somebody else has to then merge it, and we can certainly tell that publicly. I'm actually fine with having a lower penalty for a bot, under the assumption that, although a bot could create malicious changes, it's somewhat less likely.
F: I think that's what I've said; I'm also okay with penalizing bots less, just because there is a risk, but it's not as big as a human pushing directly.
D: I think there's a lower risk of unintentional error. Bots can insert proposed malicious code just as well as a human can, but generally you're only going to add a bot if you don't think it's going to be malicious, and typically bots only implement things that are so mechanical that they're unlikely to get them wrong.
A: Is this the normal approach to discussing features and changes for Scorecard? I just realized I'm brand freaking new and I'm just jumping right in. I love how open you guys are with the conversation, but this is also a little on the disorganized side. I'm just curious what the normal process is for changes and features and this type of conversation.
D: I think you're seeing it.
A: And I didn't get introductions from the group here. I know I've seen Lauren and Spencer as maintainers. Raghav, are you one of the code maintainers on Scorecard? And David?
D: I am not; well, I'm not one of the maintainers. On the other hand, I work for the Linux Foundation and I lead the Best Practices Badge project, and the two projects are both trying to figure out how projects are doing things, so we want to try to coordinate.
D: Sometimes they're reviewed, sometimes they're not. If somebody just automatically accepts a bot commit without a review, that seems like a risk, but a much lower risk than if you just merged in arbitrary human commits without review.
F: Yeah, and in particular we don't want to discourage people from using auto-merge in Dependabot, because that's a useful feature. And most of us who LGTM Dependabot or Renovate PRs don't even really review them; we skim through them and probably just say yes.
D: I will say that I use Dependabot on my GitHub projects, but I do review the proposed changes and I don't always accept them. I find that just because Dependabot says you should do something...
D: ...yes, technically it passes, but it doesn't understand the full context. So I do think there should be a penalty.
E: I'm not really clear on this. I understand that you'd want a bot not counted as a contributor for the case where someone's committing directly to main, but otherwise I see bots as, sometimes they have better contributions than humans. So what's the point, other than that one edge case of someone committing to main and not having any other contributors?
A: I think that's exactly what this is for. The first question is: what do we do when commits are made without a review? And then there's the discussion of, okay, if it's a human committing without review, definitely penalize it; now what if it's a bot committing without review?
D: Yeah, I think it should not be the same penalty, simply because typically bots are implemented for things that are mechanical and they're less likely to be wrong. They can be wrong, so I think a penalty is appropriate, but a lesser penalty also seems appropriate. Again: lower risk, not no risk; therefore lower penalty, not no penalty.
F: Yeah, me too; I'm leaning towards lower risk. Also because, if you're a single maintainer, maybe you just want to say "just auto-merge, I don't have the time to review all this," and if they do that, I don't think we want to discourage them from doing it, even though, as a single maintainer, they shouldn't pass code review.
B: I think the code review check would focus less on penalizing them, and would probably take more of an approach of saying there's not enough review data: there isn't human review activity to make a decision about whether this project is subject to human scrutiny with review or not.
B: I think in the default case we would start everyone off at 10 and give these penalties instead of being proportional, but in the single-maintainer case, Scorecard also has this "insufficient data" result, I believe, so we would say something like that: there's not enough data to give a score.
B: But that's one possibility. We could also just say zero; I'm not sure which of those is the better option, actually.
F: But I'm not... you mean whether the person is a maintainer? Yeah, a maintainer.
D: It won't give us that access, yeah.
F: So they have this difference between... yes, but that said, now that they have granular personal access tokens, maybe that's one way people could use it. But still, on the other hand, if it's your own repository, you already know what you do, right? So I think that data is more useful to consumers than it is to maintainers, so maybe that wouldn't actually resolve the issue.
D: So my brain is limited; I can only really handle one thing at a time. If code is just getting accepted from the outside without any review, I like, I guess, proposal one: if human commits aren't getting reviewed by somebody else, big penalty; if bot commits are being accepted without review, or the bot is just editing directly, that's a smaller penalty. Three and seven seem perfectly reasonable to me.
B: Yeah, that's all right with me. Every time a change is made to scoring we'll get some level of feedback on whether it makes sense or not based on people's workflows. So I'm okay with it; maybe we should do proposal one.
C: Just one thing that I like that Scott has been doing in some of his changes with the security policy and license checks: he had a list of, say, 400 projects and was able to test the scoring change he was proposing and get an idea of how it would affect scores on those 400 projects before the change was made. So we preemptively get some quantitative data, without waiting for people to come to us asking why their score went down.
D: Okay, yeah. He's not here on the call, but I know him. So he was running, I guess, draft Scorecard code?
C: Yeah, he was just running his branch against 400 repos and getting regression testing, I guess, out of it.
D: You know what, I like that idea; I think we should ask Scott to do that. Here's a crazy thought: is this something we should maybe put in the Scorecards CI pipeline, so that we can see how scores change based on changes to the code? Is that a little heavy?
B: In the CI pipeline, it sounds like it might affect token usage.
F: So that's a good reason not to. We can at least encourage reviewers, when there's a scoring change in a pull request, to ask for that data when possible. I think Scott is the only one who has done that great work until now.
D: Okay, let me update my proposal, then: maybe we can document how to do that, so that once there's a pull request, somebody can run it.
B: One good thing might be to put a summary on one of these issues of what was decided, or the approach that people seem to like. So I can put a comment on this issue about proposal one, and then for this other thing, having scoring reported whenever you create a PR that affects scoring, I think it would be good to also create a separate issue to track it.
F: For proposal number one, would it also be good to try to get some statistics on how many, out of the 30 commits that we get, are typically from a bot versus not a bot, or at least have a distribution for single maintainers? Because then we could tell what the minimum number of commits is that we should be looking at.
D: I don't know that it's necessarily bad if it goes up and down a little bit. Frankly, that's what happens anyway with Scorecard: you add pinned dependencies and your scorecard changes, and we certainly don't worry about that.
F: But that will happen anyway, right? If, let's say, you have one direct commit to the main branch from a human, and then you have six from Dependabot that are reviewed, you're going to get a certain score; and if at another time you have one external contributor with a commit in a PR that is reviewed by a human, and then the same from Dependabot, then it's going to go up and down.
F: Yeah, that's the problem: we look at 30 commits, but the commits that we look at might all be part of the same pull request, and that's an existing issue we have to fix. So we need to decide how many pull requests we should be looking at. Please correct me, Spencer.
B: Thanks, yeah; particularly in the case where you don't squash your PRs, a PR with a bunch of changes and a lot of back and forth will appear as 30 or 50 commits, and we might not even get all of them, so we might only have one PR to look at over that window of history. Okay.
A: Where is the code for this? I'm looking at the raw codereview.go, but I'm not up to speed on this check yet, so I'm trying to figure out if there are separate checks for...
A: Okay, because it seems like checking pull requests to say: hey, if this is a Dependabot pull request, we just ignore it; we're not going to score you, well or badly, based on your Dependabot merges. Then look at all the other pull requests, and if it's self-reviewed... that would accomplish everything.
B: Yeah, I think what you're saying would all be part of the evaluation part of that check.
F: You are doing code reviews, I mean, I guess sometimes it's using a different review bot, like, what is it, Gerrit. But I wonder whether for Gerrit and these other ones we actually try to count commits, and in that sense I think there's no problem of squash versus non-squash, because it's not pull-request based.
F: Maybe that would allow us to be more precise for GitHub reviews, where we say: we know you are doing reviews, because the branch protection settings say so. I know it can be changed, but it depends whether we consider maintainers adversarial, trying to game the system, or not; because if they just don't change it all the time...
F: ...that gives us a strong signal that we could maybe then validate with the code review check to make sure it's not violated, but put more trust in the branch protection setting. I'm just asking.
B: Yeah. In my mind, and this is a little bit separate from this issue, philosophically branch protection is point-in-time and code review is historical. Branch protection could be turned on...
B: ...30 seconds before running Scorecard, and Scorecard would give you full points, whereas code review is a historical look at all the commits in a project. And on the other part of what you said, whether we can look at other review platforms: we do look at other review platforms, but we apply much less scrutiny, just because we don't have all the code to handle what an approval would...
B: ...look like. We definitely should handle things like Phabricator and Gerrit, but right now we just kind of treat all those commits from other platforms as...
F: Yeah, I'm trying to see how the results are consumed. If I'm just trying to understand how my dependencies behave, I almost feel like branch protection is a signal; it can be gamed, but it gives me an idea of who is doing code reviews and who is not. And then there's the other part where I'm taking the view that there's an insider attack, like a red-team exercise, and I want to catch it through...
F: ...attestations, like SLSA does for the source requirement, and that seems to be maybe something different. It looks almost like we're trying to tackle both: the code review check right now is kind of trying to say "I'm verifying that I can give you an attestation," and that's useful for the maintainers themselves when they want to deploy something as part of SLSA, but I...
D: I think I can make a decent case for what's being done right now. Branch protection is a signal that says "we have an intent, from here on, to do reviews," so it gets some credit. But oftentimes the best way to figure out what's likely to happen in the future is to see what's been happening in the past.
D: If somebody turns branch protection on and off every time they want to do something, that's kind of lame; watching what they actually do has some value.
A: What if we do all commits from a certain date, like all commits since seven days ago, something like that?
C: Yeah, there are sort of two answers to this. I'm sure there's an expression we could use in GraphQL to get that, and 30 commits was picked...
C: ...before I started looking at the code, but it's nice to have one query that we submit ahead of time, because the weekly cron job that runs this on 1.2 million repos is trying to minimize the number of API calls. At the same time, for someone running it on a CLI or something, it's entirely possible; I don't know explicitly whether it supports "give me every commit in the last X days," but I wouldn't be surprised if it does, and it's possible to keep listing that sort of thing. We just have to be careful.
C: Also, now that we have one external contributor working on GitLab support, the functionality we want has to work in a generic way; right now the interface is ListCommits, I think, and it just returns commits. So that's just something to keep in mind.
D: Yes, I do know that in SQL you can say "after this date" and limit up to this number; you can have limit statements. I would imagine that GraphQL has the same. That said, I don't know what GitLab does; they may just have a simple paged interface, in which case you can page backwards until you reach either a date or the limit number.
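For the GitHub side at least, the commit-history connection in the GraphQL API does accept both a `since` timestamp and a `first` count, so a single query along these lines could cover the "all commits since a date, up to a limit" case. The repository name and date below are placeholders, and the exact fields Scorecard would request may differ:

```graphql
query {
  repository(owner: "ossf", name: "scorecard") {
    defaultBranchRef {
      target {
        ... on Commit {
          # commits since the given date, capped at 30
          history(first: 30, since: "2022-11-10T00:00:00Z") {
            nodes {
              oid
              committedDate
              associatedPullRequests(first: 1) {
                nodes { number reviewDecision }
              }
            }
          }
        }
      }
    }
  }
}
```

Whether GitLab's API offers an equivalent filter, or only a paged listing as discussed above, would need to be checked separately.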
D
If
we
have
to
do
a
page,
that's
obviously
less
efficient,
but
maybe
that'll
be
compensated
for
with
the
fewer
number
of
projects
we
have
to
read.
So
that
may
actually
work
out
just
fine.
D: Okay, just on that: I just did a quick lookup on GitLab queries and immediately found their SQL query guidelines, so there may be a SQL interface; I don't know.
D: It is also quite a bit to pull down for some of these projects; if you do a git clone, it's a non-trivial amount of data that you're pulling, and I've done this. And yes, I know you can do that with git directly.
B: Some platforms do something different; for example, Phabricator puts the Differential revision in the commit message itself, so it's a little easier there to correlate. But with GitHub we actually use a lot of calls: we have one API call for the log, and another part of the GraphQL is to get the associated PR.
B: We have a little bit more than five minutes left, so I just wanted to ask if people want to talk about something else or have any other topics.
A: ...to get a PR up with changes to some of the checks to just look at the git log instead. Is that something you guys would welcome reviewing?
B: Are you saying change code review to just look at the git log?
A: Yeah, I'm just thinking of it from the context of the security slam that we did last month; nobody would get to 100 then. That would be a pain.
A
First,
first
task
find
out
if
this
actually
works.
So
yeah
drop
me
off
the
issue.
If,
if
you
find
it
otherwise
I'll
I'm
about
to
have
a
baby,
so
I
mean
this
could
be
all
move.
I
might
I,
might
just
not
do
it,
but
but
I'll
put
up
a
PR
with
a
subset
I'll.
Try
to
keep
it
small.
If,
if
we
can
improve
the
viability.
B
Yeah
congrats,
first
of
all,
yeah.
Congratulations.
B: ...-based versus file-based checks within the code base, so that might be an interesting thing to look at; it classifies checks by the kind of data they need to work.
A: Where is it? You're talking about a switch inside of Scorecard, right? Yeah.
B: Within Scorecard there's a flag per check that says whether it's... oh yeah, I'd have to stop sharing to pull it up, but...
C: Pretty sure that's the one, but I'll have to read through it more to double-check.
A: Well, thanks, guys; it was a pleasure.