From YouTube: Scorecards Biweekly Sync (January 26, 2023)
B: I'm not going to wait too much longer. There aren't many people here, and I think we all know each other, so no names and attendees; no new faces. In terms of project and individual updates: first, for Allstar, Jeff, if you want to talk about this PR.
A: Yeah, we don't need a big discussion here, but I wanted to call out that I've been working on a CONTRIBUTING.md. I saw that we didn't have the code of conduct in Allstar, so I copied that from another OpenSSF repo and got it in there as part of the CONTRIBUTING.md, along with links to the Slack channel and things like that. And then the other main thing is the contributor ladder.
A: This had been sitting as a proposal for a while, and we had some discussion on the issue. Actually, I didn't link the issue in the PR; let me do that now. And then, yeah, I just wanted to put that out there for any comments; leave them on the issue. Other than that, I'm working on some quick-start updates and some other documentation updates that we've identified, to make sure things like our changelog and what's-new are up to date.
B: Things like that. So we sort of sat down and checked more of the Scorecard documentation, whether that's the README, the Scorecard action, the Scorecard web app, or the website contents itself. I can make an issue about some of these stale docs we found and our plans to freshen them up, so that new contributors are able to follow the documentation without any of the knowledge that's just gained through time.
A: Yeah, welcome, Jeremy. As you can see, in this meeting we normally have project updates for Scorecard and Allstar, and then the agenda is usually for discussion items, maybe a bug or feature that we want to achieve consensus on, or if people have proposals for new checks or new best practices.
B: Yep, all right. Yeah, it looks like you're done; the agenda is just pretty light.
B: I can throw something on there just to have some sort of discussion, but I don't think we'll end up taking the whole hour, or anywhere close to it.
B: But there was a PR: right now we run all of the light and heavy tests on every commit, and we're running into big rate-limiting issues. If we just look at the Actions tab right now, there comes a point where a lot of these are failing, and these are all API failures and things like that.
B: And yeah, we had to roll it back a little, because certain triggers don't have access to repository secrets. So this is still a CI/CD improvement I'd like to see, and there are a few ways to do it. Just something to play around with; any comments on the issue are great. Yeah, I just sort of wanted to bring it up. I don't really think it's something that would make a great discussion topic, but we always seem to be limited by our CI/CD pipeline.
D: Do you mind popping that issue into the agenda? Just to see, yeah, is there stuff that we need to unblock? I know that we're limited in our maintainer count today, but to see if there's anything that we potentially need to unblock.
B: Yeah, there's been a lot of feedback from people in the past about why these structured results would be beneficial. So I'm just sort of re-inviting feedback. It seems like we've had quite a few comments, but I'm just trying to solidify people's opinions and get this merged in.
D: Yeah, I'll check in as well. I just wanted to let the conversation settle out a little bit. I think I'm still of the mind, around the SARIF formatting, that it would be good to get something that is minimum viable SARIF, especially in the face of potentially introducing yet another format to consider.
B: Yeah, I think I still need to make my comment, but after last time I left some thoughts on supporting SARIF directly. It should be on here somewhere: where SARIF is expressive enough that I think we can... I'm not sure where my comment was; it should have been on this PR.
B: Yeah, but it is a very expressive standard that overlaps a lot with what we're trying to do. So I think, at the very least, even if we do have our next version of the JSON format, whether that's the v3 format or the extended JSON or something like that, having these rule-based results can be used to augment the existing SARIF we have.
B: There's some information about when and what it's being run on, and how; and then these checks are where the individual rules would be in our format, and all of these have a corresponding SARIF output. Whether that's the start time of the invocation, or just a miscellaneous property bag to handle everything that SARIF doesn't necessarily cover when we're talking about Scorecard-specific stuff (so the commit, and describing the Scorecard binary used to do the check), there are objects that talk about the tool doing the analysis in SARIF. Yeah, the checks, with Laurent's example, directly overlap with the rules. A lot of people are still out this meeting, so I'm not sure how much discussion can be had, but I know this week I'm going to try to finalize some of my thoughts on it, get a review out there, and sort of see if we can't reach consensus and merge it in.
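As a rough illustration of the mapping being described, and only a sketch (the check name, versions, and field values below are hypothetical examples, not taken from the PR), each Scorecard check could surface as a SARIF rule, with the invocation start time in SARIF's own slot and Scorecard-specific details such as the commit carried in a miscellaneous property bag:

```python
import json

# Minimal sketch of a SARIF 2.1.0 log where each Scorecard check becomes a
# SARIF rule. Fields SARIF has no first-class slot for (e.g. the commit
# analyzed) go into a property bag. All concrete values are hypothetical.
sarif = {
    "version": "2.1.0",
    "runs": [{
        "tool": {
            "driver": {
                "name": "scorecard",
                "semanticVersion": "4.10.2",        # scorecard binary used
                "rules": [{
                    "id": "BranchProtectionCheck",   # one rule per check
                    "shortDescription": {"text": "Branch-Protection"},
                }],
            }
        },
        "invocations": [{
            "startTimeUtc": "2023-01-26T17:00:00Z",  # when the run started
        }],
        "properties": {                               # misc property bag
            "commit": "deadbeef",                     # commit analyzed
        },
        "results": [{
            "ruleId": "BranchProtectionCheck",
            "message": {"text": "branch protection enabled on default branch"},
        }],
    }],
}

print(json.dumps(sarif, indent=2))
```

The point of the sketch is only the shape: rule-based check results sit next to standard SARIF invocation metadata, so they could also augment the existing SARIF output rather than replace it.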
D: Yeah, for sure. So, I mean, maybe a dumb question: since we have to dump our output as SARIF anyway, as people are picking it up in the Scorecard action, how far are we between the results that we're producing today and what this is proposing?
D: Yeah, a lot of this just looks like package refactoring, I think. And yeah, I think, as long as what we're getting out the other side is still close enough to what we have today, I'm pretty fine with it. But it would be great if what we were outputting, before it gets massaged for GitHub, was just SARIF or something minimal. Yeah, I do need to sit down and dig through that one.
D: Just going back for a second, because my headphones were doing weird stuff while you were talking about it: the doc freshness, is that only happening on the website right now?
B: So some people at Google sat down and just evaluated pretty much every README that we had across Scorecard, whether that was the README about how to contribute, how to install, how to run, how to add a new check, that sort of stuff; the documents in the scorecard-action repo, mainly the installation instructions and the README on that page; and then both the Scorecard web app, in terms of contributing to the web app, how it's set up, and how to run the site, as well as the website contents itself.
D: Because I would say, arguably, in an ideal state we'd just have all the docs in one place, right? Okay, here: go generate the website based on this folder. I know that there are some revisions to the website that, I know, don't quite pick up from the README content. So how can folks help?
B: So I think the best way would be to take a look at the issue whenever it gets posted. I'll work with Kara to get some of the notes from that sync posted, in terms of what we've identified as stale docs, and from that point people can pick up what needs to be freshened.
B: Yeah, that makes sense. So I think, right now being January...
D: I think, yeah, figuring out what freshness means to us and how we can guard against staleness in the first place, and talking about that as a group; and I think at least starting with the outputs of that meeting, which can be shared publicly. And then I think six months sounds like the right target to check in again, to see if we've got kind of actionable things falling out of this issue. But hopefully what we're focusing on in those six months is guarding against staleness, as opposed to planning the next check-in.
D: I feel like... I've been bumping into this in a few other projects too, where it's like: where do we put our docs? Where do we put the contributing docs, where do we put the developing docs, and should we have some script that runs between these repos to generate the site, and yada yada? It would be great if it was... I feel like, in ways, there are.
A: Everything's running pretty smoothly, and there are obviously plenty of feature requests, but those are good first issues; nothing beyond that, yeah.
D: Yeah, okay. Are your help-wanted and good-first-issue labels up to date? Yes? Great. So I guess, for talk announcements: for us, Naveen is going to be speaking at State of Open Con, coming up in London, February 7th and 8th.
D: If anyone is heading out to that, I will be, and that'll be on Scorecard; I mentioned it in the Scorecard meetings. And I'll be speaking at Philly Emerging Tech in April, I believe, also on Scorecard. I'm going to try to break it down from the perspective of someone who might be evaluating a project for outbound open source contribution; say you've got an OSPO or something and you're doing a review.
D: You know, I don't... no, I'm not going to that one, but let me check the schedule real quick. I would imagine... hold on.
D: Yes: "How do you trust open source software?", Naveen, and the time. I'll just drop this in the chat.
B: Yeah, I think there's some discussion with GitHub.
B: A step forward, yeah. I think Azim could talk more to it, but I'd imagine so.
E: I wanted to raise the thing of... I want to talk a little bit about semver, so, versioning. This is a thing that's not super clear in our checks.
E: Today we have checks for a variety of things in Scorecard, and those have a raw component, which has an output format that we change kind of at will; and then there are their scores, which we can also change. Typically we just fix the score; we don't change the way that scores are calculated. But we don't have guarantees around either of these, so I was going to propose that maybe we think about this: how we version this stuff, and maybe just put a statement out about scoring staying consistent, or not, within a particular version. So, does anyone have thoughts on this? Yeah.
D: Yeah, so I opened this issue a while back, and it is worth touching on again. Every now and again I will bang the drum on: are we, or are we not, doing semver? If not, we should post something about what our guarantees are.
A: So, you mean... yeah, just throwing it out there: the code that Allstar calls as a library doesn't need to be; I'm fine with it not being semver. I think the discussion was: do we want the API and the results to be semver, and not worry about the library that's being exposed to Allstar?
E: Maybe the next step is: make a proposal and put it in Slack for everyone to read.
A: On this thing, I'm pretty sure that the Scorecard project wants to be able to announce major versions even if there's no breaking change, for the purpose of drawing attention. So...
D: And I think, even if the purpose of the version bump is marketing, that's fine, right? We can say "no breaking changes." I think the problem is the inverse.
A: Absolutely, it should be signaled in some way; completely agree. But some people will get grumpy if you do it the other way too, yeah, if you don't call it out.
E: If I were to give my quickest pitch for what it should be, it would just be: any change to raw results or scoring that isn't a bug fix, like a scoring change that would require a change of documentation or something; either of those should trigger a minor version bump.
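Only as a sketch of that pitch (the change categories are my paraphrase of the discussion, and the "quality change is a major" rule comes from later in the conversation; none of this is agreed policy), the decision could look something like:

```python
def bump_kind(change: str) -> str:
    """Map a change category to a semver bump, per the pitch above.

    Categories are illustrative, not real Scorecard terminology:
      - "bug_fix": a scoring or raw-results fix with no documented
        behavior change
      - "raw_format" / "scoring": any non-bug-fix change to the raw
        results format or to how scores are calculated (i.e. one that
        would require a documentation change)
      - "check_semantics": changing what a check measures, such as
        turning a presence check into a quality check
    """
    if change == "bug_fix":
        return "patch"
    if change in ("raw_format", "scoring"):
        return "minor"
    if change == "check_semantics":
        return "major"
    raise ValueError(f"unknown change category: {change}")

print(bump_kind("scoring"))          # minor
print(bump_kind("check_semantics"))  # major
```

The value of writing it down this way is only that it forces the question the group keeps circling: which kinds of changes are guaranteed not to happen within a version.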
B: So, one thing that I'm thinking of during this discussion: we had Scott improve the security-policy check, where before it was "is the file there?", and now it's "is the file there, and is there an email, and does it mention more than just the email?"; so some sort of quality check, right?
D: I would do that as a major, honestly, because if you're changing how the quality of the check is determined, ultimately you're changing the output, right? So I would consider, say: across X amount of checks, we've decided that our policy needs to be relaxed along these kinds of axes, right?
D
This
is
a
major
version
bump
all
you
know
well
additional
changes
to
these
policies.
Maybe
you
know,
maybe
tweaking
of
you
know
the
tweaking
of
like
the
the
scale
of
severity,
for
you
know
for
reporting
or
something
but
but
I
I
think
you
know
being
able
to
say
consistently
like
Hey
we're
changing
this
to
to
be
a
bit
more,
be
better,
be
a
bit
more
relaxed
and
the
resulting
and
the
results
reporting
I.
A: Yeah, I think a way, I mean, when you get more mature, a way to counteract that is to have, like, a new check...
B: ...be in a beta or something like that. So yeah, you could have checks where it's okay to change the value, or change the calculation, as they're being developed and still being released.
D
And
yeah
and
I
think
you
know
definitely
check
out
this.
This
issue,
because
I
talk
about
some
of
that,
we
do
have
feature
Flags
sort
of
right,
I,
don't
think
we're
necessarily
consistent
about
when
things
get
into
them
when
we
drop
those
those
environment,
variables
or
or
something
that
when
this
is
like
a
core
you
know
like
this
is
core
functionality.
Now
we
we're
not
really
consistent
about
that,
so
so
yeah
overall
I
think
we
need
to
make
a
decision
here.
D
Calfer
is,
is
completely
fine
by
me
in
in
some
way.
If
we,
if
we
say
like
okay,
you
know
it's,
we
know
that
you
know
maybe
we're
doing
a
quarterly
quote-unquote
major
right
and
and
that's
when,
although
the
fun
stuff
is
coming
in
or
yeah,
we
I
mean.
Ultimately,
we
just
need
a
a
way
of
like
whatever
the
versioning
system
looks
like
as
long
as
we
have
a
lay
of
gardening
against
potential.
Potentially
hazardous
changes.
B
Yeah
I
think
that
discussion
is
easy
for
something
like
the
scorecard
just
CLI
and
gets
really
complicated
for
the
API
that
uses
you
know
either
the
crowdsourced
scorecard
action,
or
you
know
just
the
the
Quran
job
where,
like
it
starts
becoming
unmaintainable
to
run.
You
know,
like
10
different
API
versions
at
the
same
time,
on
a
million
repos
and
things
like
that.
E: When the API produces an output, or when the cron produces an output that's stored for the API to later retrieve, that's the same as the raw result format, right? Like, exactly the raw result format?
B: In terms of how it works today, yes: every week the results are run, it's all using roughly the same Scorecard version. There's a little bit of an asterisk there, in terms of how the stable build is determined and whether there are crashes midway through the week, but yeah.
E
Got
it
so
like
it
sounds
like
one
of
the
challenges
there
is
that,
like
you're,
we're
like
interleaving,
potentially
like
cron
results
that
come
from
latest
version
at
time
or
like
latest
stable
version
at
time
of
at
the
Quran
schedule,
time
with
any
version
at
any
time
that
could
be
coming
from
from
the
crowdsourced
action.
B: Yeah, I think we got a meaningful discussion out of what was a pretty empty agenda beforehand, so thank you, everyone. The only remaining thing is picking a facilitator for the next one; it's going to be in February.
D
Ninth
or
yeah
I'm
also
terrible
but
manifold,
but
we
should
be
going
so.
It
looks
like
the
ninth
I
will
be
away.
Unfortunately,.
B
Yeah,
so
if
there's
any
volunteers
or
I
might
just
get
Laurent
back,
he
was
supposed
to
be
today's
moderator,
but
had
a
conflict
I'm
just
gonna
put
them
down
and
if
it
changes
I'm
sure
one
of
the
yeah.