From YouTube: Scorecards Biweekly Sync (May 4, 2023)
A
All right, let's get started. Hello, my name is Ian Dunbar-Hall, and I will be your facilitator for this meeting. If you haven't already, please do add your name so we know who attended, and reach out to me if you don't have permissions to the doc.
A
All right, I have a feeling it's gonna be a short meeting. All right, we do have one open issue we want to talk about: the code review check. It looks like the discussion is about using AI to review code and helping projects with single maintainers. Yeah, go ahead and take it from here.
B
Yeah, hey everyone. I added that topic. It's a brand new area that I wanted to discuss with the community and use this forum to approach the topic.
B
As you know, in Scorecard we have a check that evaluates whether code reviews are being performed or not, and the recommendation right now for the code review check, if you get a low score, is to find additional people who can review your code. This has been a bit problematic for projects that have single maintainers. So I was wondering, because of the latest advancements in AI...
B
There are so many dev tools now that rely on AI, whether we should consider the use of AI in the code review process and provide a way for single maintainers to get a decent score on this check without finding another human. I know this is a broad topic, so I was curious what the community thinks about this approach: what are the pros and cons?
B
And whether there is any way we can incorporate that, or improve the Scorecard check based on it.
C
It doesn't look like many of the core maintainers are on the call today, but I think that's a pretty interesting take. I was recently looking at something that Semgrep did, which is sort of similar, where they were looking at using their static code analysis on GitHub PRs and then sending that off to ChatGPT to determine false positives, or to propose patches for issues that were found by Semgrep.
C
I do think that could eventually become a better-than-nothing check. Maybe it's not fully the same as a human review. I think the big concern there is that one of the intents of that check is to prevent someone from putting malicious code into the codebase without a good actor having looked at it and signed off.
B
Yeah, I think that's a good point, and the idea here is that if we can actually figure out all these concerns, and there's a way to address them, I think this would be a great vehicle to help maintainers, especially the ones that manage a project by themselves. So since we don't have core maintainers on the call, I'll also go ahead and create an issue to discuss this offline.
A
I think there are some fundamental concerns about IP, and if LLMs are going to introduce, say, GPL code, that's going to be a problem. I guess it's kind of outside the scope of security and providing security guidance, but I could perceive some programs or projects trying to say we do not want to see code from this if it becomes a license issue.
B
Yeah, and in this case the recommendation is not to use AI to author code; it is to review the code being produced by human users.
A
Like you said, creating an issue for it is probably the best course of action, and we can see from there. Since David just dialed in, he might have some opinions on this too; I'd be interested in his take on the issue.
B
Right, yeah, and the use case here is not to use AI for generating code, but to review code, so that we can provide a way for projects with single maintainers to score decently on this check. Right now, because there is no other human user, they get a score of zero, and the idea is: can we use AI to actually get something better than zero? It may not be a ten, but something in between.
D
Okay, so we're not talking about changing Scorecard itself; we're talking about projects that are trying to implement this and get better scores on Scorecard, right?
B
Yeah, and this is specifically for the code review check, where we look at past changes and make sure that those changes were reviewed by a human.
D
I think that's a losing battle, and I don't even think we should try to go there as far as using AI to review code.
D
We already don't accept code review from other kinds of tools. If you run a static analyzer and a fuzzer, those are obviously highly encouraged behaviors, but I would view running an AI as another kind of static analysis tool. And should you use a static analyzer? Yes.
D
And so maybe we need to clarify that. But so far, the reviews that I've seen say that the AIs are terrible at finding vulnerabilities; they're nowhere near as good as humans. At least for now, and I have to be careful here, because the problem is I'm speaking about AI stuff, and I think we all know: don't blink, there's a change.
D
I think there are arguments to be made that 2023 was the end of the Industrial Revolution and the beginning of the AI Revolution. Well, ask me in 50 years if I'm right, but right now we don't have very good evidence that they're any good at it. The evidence is that they're remarkably better than they used to be at generating code, but they're still terrible at reviewing it and finding vulnerabilities in it. So until that changes, I think we should treat them just like any other static analysis tool.
C
Right, it's like: does this appear to be a false positive or not, based on a large amount of data, or, if there is a true positive, then to propose a fix for that as part of the merge request. I think there's some fairly interesting work there, yeah.
D
That's not new; people were working on that five years ago, I actually saw it. Now, as I said, you have to be careful about what's current. At least five years ago they weren't very good at it. I mean, they were a little better, they weren't great, but hey, this year things have gotten a lot better, so I'm totally willing to believe that things have improved. But again, I don't think we can make a claim that it's anywhere near as good as human review.
E
Yeah, I think I share the same concern, as Keith said. I think the original intention of this check was that people fundamentally didn't want to depend on single-maintainer projects, because they didn't want to be vulnerable to the whim of a single person. Maybe they're vulnerable to a collaboration of two people, but not a single person, and that's fundamentally at odds with single-maintainer projects.
E
I think this kind of highlights that consumers of Scorecard, or of Scorecard scores, should be able to customize their consumption. So if I do trust single-maintainer projects, maybe I could get Scorecard scores via an existing database, with my own filter on it or something like that.
E
But yeah, again, it's not about whether or not the AI is good or bad. I think that is a concern, but maybe not the most fundamental concern of the people that originally thought of this score, which is: don't let somebody who just does something on a whim, without any collaboration, publish a package that then impacts the ecosystem.
D
You know, I think Jeff and I came at this with different approaches, but I think we end up with the same answer.
A
An issue out there sounds like a good first start. Since I think most of the core maintainers are not on the call today, it would be good to get them to weigh in. Jeff, I think that sounds like a great idea, and David, thank you for your thoughts. Any other items before we go to the summary?
D
I do have a request here. Since it sounds like we're more or less agreeing: is there a change we need? Do we need to add some clarifying text to this criterion that says, hey, AI doesn't count? I think that would be perfectly appropriate. A lot of people are asking about AI, and if this is something that we widely agree to, I think mentioning it in some detail text somewhere is a good idea.
D
I could even draft that text, if you'd like.
D
How's this: if we ever go towards evaluating the quality of tools, then yeah, I think we would need to go there, but not until then. But then, if we do that, and that's plausible, then yeah.
A
So for every upcoming meeting we try to figure out who's going to be running it. Okay, so there are no volunteers this time, then we'll...