From YouTube: Scorecards Biweekly Sync (May 18, 2023)
B
Hi, I'll introduce myself, since it's the first time I've been on this call. My name is Lucas Gons. Among other things, I work on Magma Core, which is an open-source telecom stack, and on the security TPM there, and I consult for a number of OSPOs, where I frequently run Scorecard. So I see it quite often. Awesome.
D
And hi, I'm Andrew Lilley Brinker. I'm with MITRE, where I work in the software supply chain group.
A
Great. Project updates: does anyone have anything to shout out?
B
There is some code to look at SBOMs and review their quality, and it has been a topic to integrate that into Scorecard, which, from my perspective as a user, seems very useful. There was recent discussion about this in Slack, with some comments there, and the link in the document is to issue 2605.
G
The answers are, yeah, probably good. The question is, there's some feeling that we might dilute the overall Scorecard by introducing this, and then: where does it fit, if we want it? So I don't know if we have the people on the call today who can answer those questions, or decide things, or how those decisions get made, but we've had this issue open for quite some time now.
C
It's not just the UI, although I think that's a fair question; I think there's a broader question here. I don't have an answer, so I think perhaps the first step is to figure out what the question is, and if that seems like I'm trying to evade, that's probably true. But I think there's a challenge here: in all the other Scorecard measures, it's specifically measuring the source code repo, either the code there or the processes used in modifying it, as detected from looking there. But this is looking at something that's generated from the source code, which is not something that Scorecard has historically measured.
C
Yeah, that's fair! It's not generated from the source code, but they're still signatures of the source code. This is a different beast. It really depends on what kind of SBOM you've got: if it's a source SBOM, you can generate it from the source code. I think a lot of folks are very interested in build-time SBOMs, which aren't generated until you build it.
C
I think, David, this is an implementation issue, where first we want to discuss: do we want, you know, the artifacts of a release to be part of Scorecard? I think Justin's answer is that it already is today: Scorecard is not just a score on the code, it's a score on the community practices. And then, yeah, it's an implementation detail where we find those releases and those SBOMs to score them. If we just support stuff attached to a GitHub release today, that might be what happens, but that's just because the rest of the searching and finding hasn't been implemented yet, and that should be implemented as well.
I
Well, I think the question is: where does the score go? That's not an implementation detail; that's not a code challenge here. It's: where does the score go? Is it part of Releases? Is it part of Binary-Artifacts? Is it its own score? I think if everybody's sitting around thinking "where does this go" and nobody has an answer, the next step, I would think, is to write a proposal. So instead of putting the issue as a question, "do we want this," write a proposal: "I propose we add this; I propose that it goes into this score," or "I propose it's a new score," etc., and let people, you know, argue against it. And then, if there's no pushback, let's put it in.
B
I think the obstacle to having a successful proposal is that there's a broader question, and broader questions tend to be harder to decide. For example, the dependabot.yml configuration is related, in that it relies on source code, as David points out, but it's also about a generated artifact, and it has a limitation as a feature that's very important: probably the most common way of configuring Dependabot is in a UI, not using the dependabot.yml. The Packages item is also related; the SLSA item is also related. It's hard to avoid that, so I wonder about thoughts on whether or not that's tractable, or whether that's something the project wants to avoid. First, let's ask: does the project want to stick with what's in the source code repo, or grow anew from there?
C
Or else, I mean, I would presume the first step would be, you know, find out what this group proposes, and mention it to the Best Practices Working Group: "here's what we plan to do." My guess is that, unless there's something proposed that's sheer craziness, there's going to be quite a bit of flexibility, you know, as long as the end goal is that we're trying to make things secure.
J
I think it'd be useful to summarize all the points from the issue, because I think we're starting to lose track of all the comments, and to have them in one place. Because I think one open question is also: how would we score this? Most projects are actually just libraries that don't have an SBOM, so we don't want to penalize them.
C
Well, it does depend on the programming language here. It sounds like you're assuming JavaScript; that wouldn't be true for a lot of other languages, where, you know, the assumption is that you embed them, and you may even vendor them.
I
Yeah, so it needs to say, like, this... it's me, Jeff. It needs to say, like, you know, how the code's going to work and how the scores can be calculated.
G
That might make more sense, although there's pushback there, because people don't want to dilute the Signed-Artifacts score, and so there's discussion of a new score, but I think that's just normal discussion based on a proposed direction. There's a definitive score that comes out of sbom-scorecard, out of a base of 100, and that's linked in that issue. So I think the thing we're trying to decide is: where does it fit in?
B
I mean, I think the Scorecard algorithms don't necessarily shine in those kinds of edge cases anyway; let's try to get, like, the default case, and I think the default case is: there's an external SBOM generator, it's probably a GitHub Action, and it generates an artifact. By the way, this also affects packages, and maybe some other things, and might be useful. So let me propose, here's a straw man; it's not good, but it's a start. There is a URL for an artifact that can be checked, and it's in a configuration file that sends information into Scorecard, and that configuration file can be extended with new features in the future.
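A minimal sketch of that straw man, with all field names invented for illustration (nothing in Scorecard defines such a config format today): a repo-level configuration object lists artifact URLs, and a checker validates and collects the ones it could fetch and score.

```python
# Hypothetical sketch of the straw-man proposal: a repo-level config that
# points a checker at generated artifacts (e.g. SBOMs) by URL. The schema
# here is invented; it only illustrates the "extensible config file" idea.
from urllib.parse import urlparse

def parse_artifact_config(config: dict) -> list[str]:
    """Return the list of artifact URLs a checker could fetch and score."""
    urls = []
    for entry in config.get("artifacts", []):
        url = entry.get("url", "")
        parsed = urlparse(url)
        # Only accept https URLs so the check can't be pointed at local files.
        if parsed.scheme == "https" and parsed.netloc:
            urls.append(url)
    return urls

config = {
    "version": 1,
    "artifacts": [
        {"type": "sbom", "format": "spdx-json",
         "url": "https://example.com/releases/v1.0/sbom.spdx.json"},
        {"type": "sbom", "format": "cyclonedx",
         "url": "file:///etc/passwd"},  # rejected: not https
    ],
}
print(parse_artifact_config(config))
```

New fields (other artifact types, search hints) could be added later without breaking older consumers, which is the extensibility the straw man asks for.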
J
So I have another question: if I'm an npm package, or, I don't know, a Maven package, where do I put that SBOM? Because I guess as soon as we say "you need to have an SBOM," we are encouraging people to add an SBOM. But when I install a package, it's not clear to me that we have to force developers to have an SBOM in their release assets. That sounds a little bit counter-intuitive, and not even super useful. Yeah, I'm just curious.
C
Think of the end users; for the end users, having the SBOMs within the packages makes sense, because, you know: "hey, I've got this thing, it's right there, off we go." It does take up more space, so if you're in a space-constrained environment, that's a problem at that point. I would kind of hope that you could have included some sort of URL: "go here for the SBOM data for this particular release." But I think there are all sorts of challenges once we open this up, as it were. We are going to have to think this through, because this is something that a lot of organizations do want, a lot of end users do want, and it makes sense that they want it. So how could we do it without overwhelming the developers?
J
There's also another question: unless you have vendored dependencies, the package has all the information for you to compute the SBOM if you want. So if we just ask users to run a tool like, I don't know, Syft, that's not better than just letting the consumer actually do it.
J
That doesn't go into the SBOM of the package. The compiler isn't there. If you have SLSA provenance, the builder could say "I used GCC or Clang, I don't know, 12.3," but otherwise the provenance isn't going to tell you, because otherwise you go all the way down and say "what Linux machine did I use," right? The SBOM is just for the package, not for what built it.
B
To avoid that harder question and go to a slightly more addressable one, to Laura's question about how you score this in contexts where it's not relevant: I think Scorecard is most useful as kind of a to-do list. In fact, it's a little bit less effective, I think, as a way to compare the security of different projects, and most valuable as a thing that helps developers of open source projects know what to do next.
A
Great, okay. Great discussion so far; it looks like there are a lot of open-ended questions. So I guess, continuing on the issues that exist, a proposal would help to at least get the ball rolling, and even if the first iteration isn't perfect, it'll help show what the right direction is with the community. So having this discussion is definitely on the right track. Great, yeah.
J
Also, as a remediation: if we tell people "you don't have an SBOM," how should they actually generate it? I know there are some tools, but if we say "just use this tool," then we could also maybe verify through Scorecard that there's a workflow that you use.
J
I think Scorecard should be able to look at the common places, because asking people to add metadata to their project is yet another burden for adoption and for maintainers.
J
We shouldn't have a config file. I mean, there aren't, like, a billion ways to put an SBOM, right? It's in the release assets; you might have it in the source code, which I think we wouldn't recommend. But yeah, I don't think we need one. I guess if the proposal says "this is the best place to put it," then we can just say "put it there," and if people complain we will adjust and improve Scorecard. But I think right now...
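The "look at the common places" idea can be sketched as a naming heuristic over release-asset filenames. The patterns below are my assumptions based on common SPDX and CycloneDX naming conventions, not anything Scorecard actually implements:

```python
import re

# Sketch: given the filenames attached to a release, guess which ones are
# SBOMs by naming convention alone. These patterns are heuristics chosen
# for illustration (SPDX and CycloneDX conventions), not a real check.
SBOM_PATTERNS = [
    re.compile(r"\.spdx(\.json|\.xml)?$", re.IGNORECASE),    # SPDX documents
    re.compile(r"\.cdx\.(json|xml)$", re.IGNORECASE),        # CycloneDX documents
    re.compile(r"(^|[-._])(sbom|bom)([-._]|\.json$|\.xml$)", re.IGNORECASE),
]

def looks_like_sbom(filename: str) -> bool:
    return any(p.search(filename) for p in SBOM_PATTERNS)

assets = ["myapp-1.0.tar.gz", "myapp-1.0.spdx.json",
          "myapp-1.0.cdx.xml", "checksums.txt", "sbom.json"]
print([a for a in assets if looks_like_sbom(a)])  # → the three SBOM-looking names
```

A real check would pair something like this with parsing the candidate file to confirm it is a valid SPDX or CycloneDX document, since filenames alone can mislead.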
K
Yeah, I think my question is more, and maybe this is outside the scope of Scorecard, but I think, as an end user, it's often hard, programmatically, to find exactly where that SBOM is, and you end up searching all the different locations, right? And if that's something that Scorecard is already doing, is there benefit in exposing that value as part of the API? So I can go to Scorecard, get my score, and get my SBOM as I'm doing my compliance activities.
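If Scorecard ever exposed where it found an SBOM, a compliance tool could pull both the score and the location in one call. The JSON layout below, in particular the `details` strings, is invented for illustration; the real output format is only whatever `scorecard --format json` emits:

```python
import json

# Hypothetical: extract an SBOM location from a Scorecard-style JSON result.
# The "SBOM" check and its detail strings are assumptions for this sketch.
result_json = """
{
  "repo": {"name": "github.com/example/project"},
  "checks": [
    {"name": "SBOM", "score": 8,
     "details": ["found SBOM at https://example.com/sbom.spdx.json"]}
  ]
}
"""

def sbom_locations(result: dict) -> list[str]:
    locs = []
    for check in result.get("checks", []):
        if check.get("name") != "SBOM":
            continue
        for detail in check.get("details") or []:
            # Pull any URL mentioned in the check's detail lines.
            locs += [w for w in detail.split() if w.startswith("https://")]
    return locs

print(sbom_locations(json.loads(result_json)))
```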
C
One thing that complicates things, and I'm just not sure what to do with this, is the issue of libraries versus applications. This was raised specifically in the npm Best Practices Guide, where, you know, with libraries you typically try to allow large ranges of versions, but when you deliver a particular application, you end up fixing a specific set of libraries, and that's where, in a lot of ecosystems, these specific version numbers end up getting set.
C
The most fun ones are always the system-level things, primarily C and C++, where the source code is one set, but for any one source-code collection there's usually a large number of system-level packages: one for every Linux distro, plus the *BSDs, plus other things. And that makes things a lot more complicated, because in many cases the folks who are developing the source code often don't know where their source code is getting deployed.
C
Well, if you're linking against it dynamically, that's absolutely true. However, you're probably linking against the header, so you at least have an idea of which version sequence. But then you also have vendored code, which makes life more interesting.
J
Right, but what about, like, the more, you know, npm, Golang, the programming-language packages: do they all follow the pattern where resolving always happens when you build at the end, or do you have... because that's my impression, but maybe I'm wrong, and I'd like to know. Yeah.
C
Sadly, I don't know all such systems; that would probably be a challenge.
G
This can also be out of scope for Scorecard itself, and instead we can rely on the other groups within the OpenSSF who are pushing forward SBOMs and say: hey, they have a list of things, they've thought long and hard about SBOMs, go do what they say. We're here to score the quality of the result.
B
I think that's a good strategy. I think there's some low-hanging fruit, but it's only in special cases, and the first thing to do is to collaborate with the groups who are developing SBOMs in the first place.
C
That's a great idea. I can think of two groups to chat to. I'm not sure the SBOM Everywhere folks will have an answer, but we can at least ask; there's also the repos working group, and we don't have to wait for their meetings. We just say: hey, you know, Scorecard first tried to score these things; how would you recommend we find them?
C
And I think this is a challenge, because in the end, as painful as it is, in order to measure this stuff we need to know where it is. That's a totally legitimate, reasonable question, and it's the sort of thing we need to eventually figure out a recommended set of answers for. I wouldn't be surprised, by the way, if there's not one answer. That's fine: "here's the list." But I don't have that list, and I don't know that anybody else does either; there isn't really a de facto standard.
H
I just wanted to briefly express support for this entire effort. I think it is something that's important. As Lucas said, one of the values of Scorecard is driving good behaviors, and I really do think that building SBOMs, and quality SBOMs, is part of the next generation of supply chain orchestration. So I really support trying to figure out an answer here, understanding this is kind of a difficult thing to tackle. Yeah.
C
And I will note: I was at the Open Source Summit North America last week, and what was really shocking to me was how different the resolutions were between, say, npm and Yarn, where, you know, you have exactly the same inputs and, I mean, I think there were hundreds of packages different, whether or not they were included, depending on the specific tool you used for resolution. Because, I guess, a lot of them have valued speed as more important than correctness, which is a little concerning.
C
So, you know, I'm not sure of the rationale, but in any case it's worth noting that it's quite a challenge, and there are other ecosystems already dealing with it to some extent. So try to make sure that the answer we get is the right one for that ecosystem, and that they've helped us determine that it seems like the right solution long term.
A
Great, okay. Great to get related groups; we can get their expertise, get their feedback. So that's super helpful. Any other last comments on this SBOM topic?
I
Yeah, I've attended this meeting the last two or three times, and I think, Justin, this is kind of the source of your frustration: we've had a lot of discussions and said, "oh, you know, the core maintainers aren't here, so we can't actually come to a conclusion." I just looked at the repo; I thought there was a list of maintainers, but maybe not, I can't find it at all. Is there a list of maintainers, and who are they?
J
It should be in the CODEOWNERS file on the repo; it might be under the .github directory.
I
And is there some kind of quorum we should have for the meetings? I mean, you know, I know sometimes people can't make meetings; I'm not trying to get anybody in trouble, but what do we do? Should we have some kind of representation always?
I
Should we ask that the maintainers work together to at least have one person at every meeting? Yeah.
E
There was probably a coverage gap with vacation last time; as you said, it just wasn't communicated in the interim.
I
And is there a process to step down? I noticed a couple of maintainers don't work on the project anymore; like, I haven't seen Stephen in a while. Not trying to call anybody out, but just in general.
J
I don't think there is anything like that. I think there is a contributor-ladder.md on the repo somewhere; I think Naveen wrote it, probably last year or something, at the root, I suppose. No, it might be called contributor.md, or something like contributing, maybe.
I
Yeah, the main worry is: if you take that CODEOWNERS list and you take off the people that are inactive, you get like two or three people, and then, like I said, that seems to be contributing to our coverage problem on replying to issues when people have proposals, and, you know, making sure there are people in the next meeting. I assume you would be open to more maintainers, if they're active contributors.
I
So yeah, just, you know, if we as a project get that documented, we could get more people there, and it'd be easier to have the discussions, because there's a lot of people coming to these meetings every two weeks, and we could have those discussions and have some people be responsible for saying, "as a maintainer, I approved this," or "this is my stance on these discussions."
B
That makes sense. I think it's in the interest of the contributors and maintainers, because otherwise they're going to feel like either the meetings are giving them orders, which isn't appropriate in this community, or they're going to ignore what the meetings are, which is, you know, not a healthy community dynamic.
B
So maybe the question could be: what works for the contributors and maintainers to, you know, participate, and have the meetings support their needs?
J
I guess that's why we're a bit slow. Personally, I mean, I'm a maintainer, and I'm fine with having an SBOM check. The only worry I have is that I always try to reduce the amount of work that maintainers have to do, and not penalize them if it's not something they need to do, and I guess...
J
That's where the question of library versus, you know, final application comes into play. Beyond that, I think: plus one on having an SBOM check, or wherever we put it. I think a new check seems to work fine for this case, but...
D
Yeah, so actually I have one. This was something that David and I talked a little bit about previously, and he recommended that I join here, so I wanted to share a tool; I'll put the link here. So, basically, MITRE has been working since 2019 on a sort of similar tool to Scorecard that we call Hipcheck, which basically does, you know, analysis of repositories.
D
So, you know, I figured, since I'm just sharing the repo here with you all right now, rather than trying to immediately jump into the nitty-gritty of things, I'd give everyone a chance to take a look. I'm on Slack, and I'm happy to, you know, talk in more detail about opportunities, but yeah, basically I wanted to raise this as a thing that, at least on the MITRE side, we're interested in doing.
C
Well, first of all, if nothing else, thank you so very, very much. I know I'm at least one of the people who said, "oh, I just found out about Hipcheck; do you know about Scorecard?" So thank you so very much for being willing to come join us.
C
I know that one thing that I would love to see, and of course it's a whole lot easier for me to ask for things than for me to actually do any work, is: can you list, ideally all, or at least some of, the things that Hipcheck looks for that Scorecard currently doesn't? Because the obvious way to integrate these two things is to look at, well...
C
What does Hipcheck do that's different, and then could those be added as new heuristics, new checks, in Scorecard? If they do more or less the same thing, then, you know, okay, we already do that; but it's looking at that difference. So have you had a chance to do that? If not, that's fine; I just figured I could ask. Yeah.
D
I mean, so, there are a couple of analyses that Hipcheck does that I think are interesting and distinct from some of the stuff that Scorecard's doing, and I'll say, in terms of this, it kind of goes to some of the underlying design goals.
D
Of the two tools, Hipcheck was specifically, originally, created to try and identify software supply chain threats, and so there are analyses which are not just about the practices associated with a project, but also, more concretely, about trying to detect things like typosquatting. So, like, there's an analysis that basically tries to look at dependencies and identify dependencies which are possibly typosquatted.
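To make the idea concrete, here is one common way a typosquat detector can work; this is my illustration, not Hipcheck's actual algorithm: flag dependency names within edit distance 1 of a known-popular package name, excluding exact matches.

```python
# Sketch of edit-distance-based typosquat detection. The POPULAR set and
# the distance-1 threshold are arbitrary choices for this example.
POPULAR = {"requests", "numpy", "django", "flask"}

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def possible_typosquats(deps: list[str]) -> list[str]:
    flagged = []
    for dep in deps:
        for pop in POPULAR:
            if dep != pop and edit_distance(dep, pop) == 1:
                flagged.append(dep)
                break
    return flagged

print(possible_typosquats(["requests", "reqests", "numpy", "djang0"]))
# → ['reqests', 'djang0']
```

Real detectors typically also weight by package popularity and handle transpositions (Damerau-Levenshtein), since "reqeusts"-style swaps are distance 2 under plain Levenshtein.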
D
There are also a couple of different analyses which look at individual contributions to try and identify potentially concerning contributions, including those that might contain obfuscated code, packed malware, things like that, and I'd be happy to talk about those. The way that they're structured right now in Hipcheck is as sort of distinct things; there was a previous version which sort of combined them all into, you know, a single signal per commit, so it would say, like, "hey, you have a single commit here that sort of looks weird and, you know, should be further evaluated."
D
I'll also say that, as far as differences between the two tools, the ways in which the output is presented are quite different, and I'm, you know, interested in conversations about kind of user-experience stuff, for, you know, making the output, at least the human-readable side of the output, consumable. I think there are some nice things that Hipcheck does. Normally I would love to give a demo, but I actually did an operating system upgrade a couple of days ago on my device, which broke a bunch of stuff.
D
So, unfortunately, I can't, but yeah, I feel like that's kind of a run-through of some stuff I think is relevant.
C
So, as far as the user interface goes, I think we talked briefly about this earlier, you and I, Andrew, and my current theory is that we might be able to take advantage of those ideas, but at kind of a separate level. In other words, you know, Scorecard right now is very much focused on analyzing and producing results in things like JSON, and then you could have a tool that takes that JSON and does cool things with it.
C
But, you know, building on it: if Hipcheck has a better UI for those kinds of displays, that doesn't mean Scorecard has to throw away all they've done.
D
It does. And I'll say as well, in terms of, you know, where there's opportunity to bring stuff over, I would expect that anything new would be, you know, re-implemented rather than directly using the code from Hipcheck, in part just because Hipcheck is written in Rust and, you know, obviously Scorecard is written in Go, and I don't think that anyone really wants to deal with FFI stuff with Go.
D
So, you know, where stuff needs to be re-implemented, I don't think that's a big deal. And in terms of also the idea of having maybe a thing that consumes the JSON output from Scorecard and, you know, creates a nice UI for it, that is, in some ways, you know, drawing from lessons learned with Hipcheck, I think that's totally possible.
J
Do you have a link to an example output, to see the format or the UX or anything?
D
No, unfortunately, we don't have any example output in the repo. We do have the instructions for installing it yourself and building it yourself, but unfortunately that does require either Docker or that you have your own Rust toolchain setup. Like I said, normally I would do a demonstration, but my local system is kind of busted right now. Gotcha.
C
If you wouldn't mind, sometime in the future, when you've got a computer working, you could just create a short video, by the way, and post it to show off the cool stuff.
C
To hold to our schedule, yeah, let's go back and talk about some of the measurements. Typosquatting is obviously still one of the most common attacks; I mean, I think it's only recently been superseded by dependency confusion, and only because dependency confusion has become so common, not because typosquatting's gone away. So, you know, typosquatting is important; though, as you mentioned, the repos are also countering it, so maybe this is less important to measure at the Scorecard level.
C
I'm not sure, but I mean, I could certainly see it being useful. The looking for concerning contributions certainly seems plausible; obviously the risk there is false positives, and I mean, that's true for all of these. What I have to admit, though, is that since these measures have become public, I imagine a smart attacker could just craft their contributions so that, you know, "hey, does it detect it?" and just tweak it until it's not detected, but...
C
It's at least, you know, useful against the less smart adversaries. Can you summarize your experience with false positives? A low false-positive rate? Kind of: what are you looking for? Yeah.
D
So I will say, validating the usefulness of the sort of commit-level, you know, trying-to-look-for-malicious-commits kinds of analyses was an interesting challenge.
D
What we ended up doing, basically, was, first of all, building a tool that identified any patches made to fix previously disclosed CVEs, and then we created another tool that basically would take that patch file and then work backward to look at all of the contributions which introduced code that turned out to be vulnerable, and then ran Hipcheck against those relevant repositories to see if it flagged commits that were in those chains; basically, to see if it was flagging things that eventually turned out to be vulnerabilities.
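A rough sketch of that evaluation loop, with all names and data invented for illustration (this is not MITRE's actual tooling): given the commits a detector flagged and the sets of commits known to have introduced later-patched vulnerabilities, measure how often at least one commit per vulnerability-introducing chain was flagged.

```python
# Sketch of the validation idea described above: score a detector by the
# fraction of vulnerability-introducing commit chains it hit at least once.
def chain_hit_rate(flagged: set[str], vuln_chains: list[set[str]]) -> float:
    """Fraction of vulnerability-introducing chains with >= 1 flagged commit."""
    if not vuln_chains:
        return 0.0
    hits = sum(1 for chain in vuln_chains if chain & flagged)
    return hits / len(vuln_chains)

flagged_commits = {"a1b2", "c3d4", "e5f6"}   # hypothetical detector output
chains = [
    {"a1b2", "9f9f"},   # hit: a1b2 was flagged
    {"0000", "1111"},   # miss
    {"e5f6"},           # hit
]
print(chain_hit_rate(flagged_commits, chains))  # → 0.666...
```

Chain-level recall like this is the generous metric; a full evaluation would also track the false-positive rate on commits that never touched vulnerable code, which is the concern raised earlier in the discussion.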
D
It did decently well, but we would like it to do better. Basically, we ended up, on a per-project basis, ordering commits based on their score on that "hey, this looks concerning" metric, and it would pretty consistently rank at least one or two of the commits from these chains, the ones touching code that turned out to be vulnerable, near the top.
C
You could talk with me later, but there is a group that actually specifically has a collection of malicious patches. They don't make it public, for fairly obvious reasons, although the data, in fact, has been public at some point; they've just tried to make it harder for the adversaries to get the collection all at once.
C
Yeah, shoot me an email; once I get a little better I will try to shoot you a response, but that's a longer-term thing. I think they were finding pretty blatant stuff: executing something that's a, you know, a zip, or something you're decrypting and then executing, which is probably a bad sign. At least in some of these examples they weren't even trying to be subtle.
C
It'd be good to have a way to create a list of the differences, because, I think, at least, what I would like to see... it's good to have this conversation, always glad to see it, but, you know, to make sure that we don't lose these good ideas, I would love to go back and just start walking through what Hipcheck does, and just try to tease out, you know, the ones that seem most likely.
J
There's also another working group, on package analysis, where, you know, the work that you're doing on figuring out if something looks malicious could also be useful to them.
A
Thank you. All right, any other topics?