From YouTube: Scorecards Biweekly Sync (June 2, 2022)
B
I can share the screen if you want; we can call that.
A
I guess that's fine. The agenda is pretty short, so let's start. Welcome, everyone, to the scorecard biweekly meeting, 2nd of June.
C
Yeah, so hi everyone. This is Sripad Margara. I'm a technical staff member at IBM Research, driving some initiatives around supply chain security. I'm a big fan of scorecard; I've been using it for a while, and I would like to share some work that I've been doing and see how it aligns with scorecard.
E
Hi, I'm Caroline. I've been working with Sripad; I'm also with IBM, as a security engineer. I attended the last meeting, but I was just a lurker, I didn't say anything.
A
Welcome! I guess that's it then. Cool, great to have more people on board.
A
We typically discuss technical problems, project updates, and directions for where we want the scorecard project to move. That's kind of what we do in this meeting.
A
I think let's proceed with the announcements. Jeff, do you want to say a few words on what you wrote? After that, we can let Sripad talk about the work he wants to present.
F
The first one here is that you can put repo-level config in the org's centralized repository if you create a subdirectory with the repo name. The second one is that you can also have a central config: if you have multiple orgs, you can have their org-level configs all point to one central config by using the base config option.
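As a concrete sketch of what was just described (the directory and file names here are illustrative, not necessarily what the tool expects):

```
org/.github/                    # the org's centralized config repository
    config.yaml                 # org-level config; can point at a shared
                                #   central config via the base config option
    some-repo/
        config.yaml             # repo-level config for "some-repo"
```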
A
All right, no questions. So, I added a point on the scorecard action release: we released version 1.1.0 last week.
A
It comes with a caveat, which is that the Branch-Protection check doesn't work with this token. This is something GitHub is aware of, and they are trying to fix it on their side. As soon as they fix it, we can add support back for the branch protection settings.
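For context, a minimal workflow wiring up the new action release might look like the sketch below; the input names follow the scorecard-action README as I recall it, so treat them as assumptions rather than exact documentation:

```yaml
# Hypothetical minimal workflow for ossf/scorecard-action v1.1.0.
name: Scorecard analysis
on:
  push:
    branches: [main]
jobs:
  analysis:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: ossf/scorecard-action@v1.1.0
        with:
          results_file: results.sarif
          results_format: sarif
          # With the default GITHUB_TOKEN, the Branch-Protection check
          # cannot read the settings (the caveat mentioned above).
          repo_token: ${{ secrets.GITHUB_TOKEN }}
```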
A
I think this was just a release with a bunch of fixes, and that's what we use in the action release. Naveen, Azim, are there other important changes?
B
Yeah, there are some breaking changes: we are dropping the old JSON format, and that's in the release notes with a warning. That's one critical thing: people looking at the v1 JSON format output should know it's not being supported going forward. It was already mentioned during the week too, if I'm not wrong. So that's one thing that's there.
A
Cool, thanks. All right, maybe another announcement that I forgot to write down in the notes is that we have someone interning at Google who is going to be working on a way to run scorecard on the dependencies of a project.
A
I think the idea right now is to add support inside the scorecard action: when there is a pull request from Dependabot or renovatebot, or any pull request that changes one of the dependencies, we would be able to show the user the scorecard results for that dependency. That way they can learn, get used to seeing the results, and maybe take decisions based on them.
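As a rough illustration of the idea (not the intern's actual design), a PR check could look up the already-published result for a changed dependency. The endpoint below is the public scorecard REST API as I understand it, so treat the URL and response shape as assumptions:

```python
import json
import urllib.request

# Assumed public endpoint; the real integration may fetch results differently.
API = "https://api.securityscorecards.dev/projects"

def scorecard_url(repo: str) -> str:
    """Build the result URL for a repo path like 'github.com/ossf/scorecard'."""
    return f"{API}/{repo}"

def fetch_score(repo: str) -> float:
    """Fetch the published aggregate score for a dependency's repository."""
    with urllib.request.urlopen(scorecard_url(repo)) as resp:
        return json.load(resp)["score"]
```

A PR bot would call something like `fetch_score("github.com/pallets/flask")` for each dependency touched by the pull request and surface the result as a review comment.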
B
So I have a question on this. I know the design still needs to come in, but as a general overview: is it going to be using BigQuery to fetch the data, or where is the data coming from for that specific repository? And we don't have scorecard results for specific versions, so how is that going to work when pulling a specific version or feature of the dependency?
B
And also, I know I'm jumping ahead: why keep it part of the scorecard action? Because now that ties it to the scorecard release, so if we have to make any changes... I'm just talking about how GitHub does it: they have dependency review as a separate action and the dependency graph feature as a separate one, so they can have independent releases and don't have to step on each other. For example, if there's a segfault or anything like that, then we'd need to release this and cause problems for the other one.
B
So is there a specific reason to keep it within the scorecard action? Can it be like how CodeQL has multiple actions within the same repository? I'm not saying that we need to create another repository.
A
So I guess this is still to be discussed, right? I'm just going to say what I have in mind, and everyone else can chime in.
A
I think even if you had two different actions, if a user wants to have both, either way they'll have to update when you update. So from their point of view, I think having one is simpler. That's kind of my point of view right now.
B
Can I... sorry, I don't mean to interject. Instead of doing a design doc, can we just write down whatever we discuss right now? Can there be just a GitHub issue, so it's easier to discuss there, or at least asynchronously? So that we don't end up with somebody working on a design doc while the rest of us don't even know what features we're going to talk about.
A
Yeah, I think by design doc I meant mostly one page, put in an issue or a discussion. We'll definitely do this. Right now we don't have anything, so as soon as we have some rough ideas about where the project can go and what the first step is, we will create an issue, maybe next week.
A
Okay, so if there are no more questions, let's move on to the open issues. Sripad, I think the floor is yours.
C
Again, every project starts somewhere, right? This one started earlier this year, when we had the malicious npm packages incident, with colors and faker, that broke a lot of applications. That's essentially when we started to ask: if there was a malicious maintainer who was able to push some changes, why did I end up automatically updating my dependency? Did I have any way to check, before updating, whether there were any indications of that?
C
And yes, we had a lot of discussions about dependencies, that we are not pinning to specific versions and so on. But even if the dependencies had been pinned, did I have all the tools and all the data available to make an informed decision? The answer is: there is no controlled way to update dependency versions. When I want to update from version X to version Y, I don't know whether all the changes, all the pull requests that went into this release, follow the best practices the package claims to follow, or what kinds and sizes of changes went into the release. We can go into how kinds and sizes can be determined.
C
But the bottom line is: can we identify and collect some indications of bad behavior, and can we get some change insights? When we see any changes, can we get more insight into what kind of changes they are? That's essentially when I started looking into this. Obviously the first starting point is scorecard, which gives us essentially a point-in-time evaluation of these practices: branch protection is on, peer reviews are enabled.
C
If I run it, I can pass those checks. But if I'm a really malicious maintainer, I can disable those settings, push some commits, then enable the settings back, and when I run it again, all these checks pass. We call this temporal loss, because we are not able to capture these setting changes over a time period. And this is not a limitation of scorecard; GitHub, or any SCM, has the same problem.
C
They do not maintain an immutable record of setting changes. When I make a PR or commit something, I get an immutable record: this is a commit. But that is not available from GitHub when I change a setting; there is no immutable record. They do, however, provide an immutable record of code changes, and of when I cut a release, so that information is available.
C
So can we use that information? And second: when we execute these scans against a repository, we execute against the main or head branch. Let's say this particular head branch has a vulnerability, but I'm using a previous version where that dependency is not present. Then I cannot map that result to my application, or to my use of that particular repository or package.
C
What I basically started looking into is releases. When we have any package release, it has commits and pull requests; every pull request and every commit has labels, and linked issues with their own labels. We have contributors, reviewers, release notes, signatures. All this data is available to us, and these are verifiable, immutable records.
C
We have insights like who the top contributors are, and how many lines have been added or deleted. So can we curate this and provide some actionable insights into what changes are going into these releases? That's essentially the primary objective. So let me quickly show you a demo. Again, the way it can be run is motivated by scorecard, so it follows a similar notation. It has two modes: one is package, where it runs against
C
individual packages, and another is sbom, where it parses the SBOM and does the same thing for all the packages in it. So, for packages: this is the flask package. Actually, first let me run it against scorecard itself. This is the scorecard project, and this is the repository where this particular project is hosted.
C
If we run it: the -t is a tag, the current version that I'm using. What it is essentially doing in the backend is making API queries to this particular repository and collecting all the information I just mentioned. In that, there was one challenge primarily: some of the information that we wanted to collect was not available through the GitHub API Go client, so we had to use a mix of the GraphQL API and the Go client.
C
Okay, so this is the kind of report it produces. It says the current version is 4.2.0 and the latest is a newer one; you are lagging by two releases, about 21 days apart. Some organizations have the requirement that they cannot be more than three major versions, or more than 180 days, behind; these are requirements from clients I had talked to. And, more importantly, it basically recommends: okay, you can go and update to this version.
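The release-lag insight in that report can be sketched as follows (a simplification with made-up data shapes, not the tool's actual code):

```python
from datetime import date

def release_lag(current: str, releases):
    """Given the tag in use and (tag, date) pairs ordered oldest to newest,
    report how many releases and how many days the consumer is behind."""
    tags = [tag for tag, _ in releases]
    idx = tags.index(current)
    behind = releases[idx + 1:]
    days = (releases[-1][1] - releases[idx][1]).days if behind else 0
    return {"releases_behind": len(behind),
            "days_behind": days,
            "recommended": tags[-1]}
```

An org policy like "never more than three major versions or 180 days behind" could then be evaluated directly against these two numbers.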
C
There were unique contributors who contributed across all the PRs that went into this particular release, and they were reviewed by six unique reviewers. In some cases I found that the person making a pull request is the same one reviewing or approving that particular change, so we flag those as non-peer-reviewed changes; a peer-reviewed change is one reviewed by someone who is not the one committing the change and making the pull request. And there are zombie commits.
C
This is, again for lack of a better word, what we call commits that are made directly to the main branch without any pull request. No pull request, nothing: a direct change to the main branch. As I said, this gives us some insight. And then, what kinds of changes? Right now we only look at the labels of pull requests and their associated issues, so that at least gives us some insight into what kind of changes are going in.
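The two checks just described can be sketched like this (the field names are made up for illustration; the real tool pulls them from the GitHub APIs):

```python
def zombie_commits(commits):
    """SHAs of commits that landed on the default branch with no
    associated pull request (the 'zombie commits' described above)."""
    return [c["sha"] for c in commits if c.get("pr") is None]

def non_peer_reviewed(prs):
    """PR numbers where the author approved their own change."""
    return [p["number"] for p in prs if p["author"] == p.get("approved_by")]
```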
C
So, when I run it against one of the popular ones, tensorflow, it tells us that there are a lot of commits that went directly into the main branch, and then what kinds of changes went into it, or what components were affected. You can see it says the core component has been changed.
C
What kinds of changes went into it: large, extra small. And these are the components that changed. So now, if I'm a user of this particular library and I see a new version, I know which components changed and whether I should update; I can make an informed decision before updating.
C
Similarly flask, another very popular package. It shows all the commits going into the main branch without any pull request, but the kinds are like docs, testing, typing, so it seems okay: these are changes to the README, probably some typos. But it still gives us some indications, some flags and warnings, to look into it more closely.
C
So, the first thing is: in addition to scoring the policy, can we also go and score individual releases, add this support, and add some more checks there? And the second is: if you look into the overall OSS ecosystem, it has a lot of granularities. It has source code, registries, catalogs (open source catalogs like Artifactory, marketplaces), container registries with OCI images; and even within source code we have application repos, infrastructure repos, deployment manifests.
C
Can we again extend the scope and add checks for these kinds of artifacts? When we scan a repository, most of the checks are going to be common, like branch protection, reviewers, maintainers, but some are specific: in an app repo we'll look into OSS-Fuzz and vulnerabilities.
C
So if we see there are terraform or ansible files in a repository, we can use checks like tfsec to see if there is any terraform scanning that needs to be done. If we see it is a helm chart, that there are helm templates in the repo, we can automatically run some CIS checks against them. Similarly for OCI images. So that's essentially the second proposal: adding support for these different kinds of OSS artifacts to scorecard. That's pretty much what I had. Any thoughts, questions, feedback?
A
Yeah, thanks a lot for the presentation, that's super interesting. Who would like to start with a question?
B
I would like to. I have one question before I give comments, on your zombie commits. Were you able to check whether your zombie commits match what scorecard is showing? You see there are 15 zombie commits on flask and 122 on tensorflow; usually this information is available within scorecard, at least in the BigQuery data, which clearly shows how many commits don't have this data. Were you able to confirm it?
B
You're building something to get that?
C
Yeah, absolutely, that's one thing that I have, specifically.
A
Who else wants to go next?
G
I can. I just had some questions on your second proposal. Could you reiterate what you mean by different kinds of artifacts? I wasn't sure if it meant different repositories, apart from GitHub, or something like: we check the Dockerfile today, but you also want to check some other types of artifacts in our GitHub repositories?
C
Yeah, exactly. It's not different hosts, like GitHub, GitLab and so on. Even if we look at just GitHub, people are using it to host their application code and packages, but also infrastructure code (ansible, terraform, or crossplane, and all the new infrastructure-as-code), which is also available on GitHub repositories, and also helm charts, if you look into all the operators and so on. That's why I call it granularities: different granularities, or different types of artifacts, are available.
C
So let's say someone gives me a repository: this is the repository, can you rate or score this for me? We can look into some settings, like branch protections and everything, and now we can also go and see what kinds of artifacts there are. If it is application source code, like .py and .go files, we can just run the scorecard that we have today.
C
If we see there are terraform .tf files or ansible scripts, we can run different kinds of checks, like tfsec or others, that look into whether there are any misconfigurations, or missing configurations, in the terraform. If it is a deployment manifest like a helm chart, we can run some CIS checks, like kube-linter or others; we have contributed some checks to kube-linter for the CIS checks it performs.
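A minimal sketch of that dispatch idea: walk a checked-out repository, detect artifact types, and decide which scanners apply. The tfsec and kube-linter entries come from the discussion; the other tool names in the mapping are my own illustrative assumptions:

```python
from pathlib import Path

# Illustrative mapping from artifact type to scanner.
BY_SUFFIX = {".tf": "tfsec",      # Terraform misconfiguration checks
             ".go": "gosec",      # application code (assumed tool)
             ".py": "bandit"}     # application code (assumed tool)
BY_NAME = {"Chart.yaml": "kube-linter",  # Helm chart present
           "Dockerfile": "hadolint"}     # assumed Dockerfile linter

def applicable_scanners(repo_root: str) -> set:
    """Decide which artifact-specific scanners apply to a repository."""
    found = set()
    for path in Path(repo_root).rglob("*"):
        if path.name in BY_NAME:
            found.add(BY_NAME[path.name])
        elif path.suffix in BY_SUFFIX:
            found.add(BY_SUFFIX[path.suffix])
    return found
```

The common repository-setting checks (branch protection, reviewers, maintainers) would run regardless; this dispatch only adds the artifact-specific ones on top.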
G
Right, so I'll try and answer this in two parts. I think the images thing is a bit difficult for scorecard, because scorecard is written with the assumption that we'll be running it on a repository, whether it's GitHub or a GitLab-type repository. So it's usually hard to have that direct mapping of image to code.
G
So that's probably not a problem we are trying to solve right now. However, I do like the idea of the terraform or ansible configs that you mentioned. I'm personally not too familiar with them, but the idea definitely fits very well with what scorecard is trying to do. In fact, this is how we extended our pinned-dependency check over time: it started with the Dockerfile,
G
but now it also looks at other types of files. So overall, at a very high level, it seems like a good fit, but we should definitely discuss more. For that part, from my side, I would suggest we open a GitHub issue on scorecard with a very brief few-line proposal; say we start with terraform files,
G
if that's what we want to start with. If you could just tell us what kind of checks, what kind of security aspects, you want to look for, and how you want to implement it, maybe we can discuss it there. I'll hold my thoughts on the other proposal for now, just in case anyone else wants to share their ideas.
B
I'm going to piggyback on the second proposal, same as what you mentioned.
B
Correct me if I'm wrong: this is almost like running security tools based on the specific kind of code. If it's terraform, I'll run tfsec; if it's Go code, gosec; if it's Python, there's some Python tool. Different linters and security tools to run on different kinds of code. Is that what the proposal is, at a thousand-foot overview?
C
Yeah, basically by types of artifact. We could even go down to gosec and so on, but at a more granular level. We've categorized all those: there are some checks which are common for application code, and some checks which are common for infrastructure formats, because if you look into these categories, they are managed and operated in different ways.
C
If you look into app code, it changes a lot more frequently than ansible or terraform code. So the idea is essentially to have this at a somewhat coarse level. Not even down to: if it is a .py file run this, if it is Go code run gosec; we could go to that level too, but for this one I just think we can have support at a coarse level: infrastructure repos, deployment manifests, helm charts, and so on.
B
So let's assume it's an infrastructure repo; let's keep it simple, an infrastructure-only repo, and suppose we found terraform files. Is your recommendation that we run tfsec on that repository along with the existing scorecard checks, and add that additional score into the scorecard results? So that somebody consuming it as a scorecard result understands what the existing scores are, and along with that, here are the things that tfsec complained about.
C
Exactly. If you look at scorecard today, when it runs you see it does the OSS-Fuzz check, it does vulnerability scanning, it tells you whether dependencies are pinned. These are checks for application repos.
C
They are probably not applicable for infrastructure code like terraform, so for those there can be a different set of checks. Based on the type, we can apply different sets of checks, and some of the checks, the repository-setting checks, are going to be common. So, yeah.
B
I like you mentioning that things like fuzzing do not apply to infrastructure repositories or helm charts, because lots of people are using different stuff. Initially I brought up this thought process, but then we decided some time back that scorecard essentially wasn't in this space. I'm more than happy, it's a good thought process, but we have to be specific about where we draw the line. I do agree with what Azim mentioned.
B
We've got to think about this. And on the first one, I want to say I like the thought process. The release insight would certainly help if it can go into scorecard, but understanding what a release is, and how often that is run, is an implementation detail that we have to think about.
C
Yeah, absolutely. The how part we can maybe get to later, what exactly goes into it. At first I just wanted to get this feedback, to see if it makes sense.
B
Thanks, thanks Laurent.
A
Azim, do you want to... I think you had other questions.
G
And yeah, I think that'll be helpful. I definitely think the release one needs a bit more thought; it might, like you said, work well with Laurent's proposal to have a dependency check, but yeah.
A
Okay, so something I wanted to add: some work that Azim and I are doing around the scorecard badges. We're going to be recording the scorecard results for every commit, so you might be able to use that data for the release information.
G
Yeah. We are hoping for the end of this quarter, so, conservatively, maybe mid-July; we can aim for that. But yeah, we can think about how to do it. Like Laurent says, scorecard also supports commit-SHA-based runs, so we have thought about the problem you mentioned, that scorecard shouldn't only be running at head; you should also be able to look back. But we don't support versions, in the sense that we don't understand a release version.
C
Yeah, so in this one what we are trying to do is aggregate all the commits that go into a release. So if we have scorecard results for individual SHAs, we can basically say: between these two releases, these are all the commits, and these are their scores. Then we can get some insights into it.
C
No, no, we don't want to run it. We just have two releases, and these are the commits; that's what we are doing right now. We are not running scorecard on all the SHAs, but we are saying: between these two commits, these two releases, these are all the PRs that went into it; let me get some insight into whether they were reviewed, what kinds of changes there were, and what their labels were.
B
All right, let me quickly interject; [inaudible].
A
Sorry, so, yeah, thanks Sripad for the presentation. I hope we can work together, starting with the issue.
A
So, I see someone added another point about discussing dependencies and the toolchain. I'm not sure who added this.
B
I added that, sorry. Okay, I want to show my screen again just to make sure. The problem right now with scorecard: I'm specifically bringing these two up, I'll open up these two issues.
B
Scorecard, for people who don't know, has two go.mod files: one for the main repo, another for the tools repo, where we depend on a bunch of different tools that we want to pin by specific SHA, so that we know which versions of the tools we're using. And that is growing really large, especially with bringing cosign in. I'm working on two PRs. One: there's a high-severity vulnerability in go-tuf, which we don't directly consume;
B
it's our tools dependency that consumes it. That's one problem. I'm also working on another PR that is trying to upgrade to Go 1.18. Both of these were blocked by the ko dependency, and Jason was pretty cool, he immediately tried to fix the problem, but it is growing at the moment.
B
Long story short: ko had a dependency on cosign, and cosign does not do semver; they broke their dependency, and that's why ko was broken. We use ko and cosign, and that caused the conflicts. So I asked Jason, who right now works on sigstore and on ko, to help me with this, and he went ahead and helped with that, but still, in spite of that, there are a lot of conflicts in dependencies.
B
The two reasons I'm bringing this up: we are trying to bring the scorecard action repository into scorecard, and I'm still having a hard time trying to solve this problem, the ko dependencies especially. We have the replace option; I'm specifically talking about these replace options. I'm going to share the screen right now.
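The replace option being shown is go.mod's replace directive. A hypothetical fragment of what such a pin looks like (module paths and versions here are placeholders, not the actual scorecard entries):

```
// go.mod for the tools module (illustrative only)
module example.com/scorecard-tools

go 1.17

require github.com/sigstore/cosign v1.9.0

// Pin a conflicting transitive dependency to a single version
// across the whole build.
replace github.com/theupdateframework/go-tuf => github.com/theupdateframework/go-tuf v0.3.0
```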
Two
issues
in
this
specifically,
we
are
treating
that
one
is
I'm
still
working
on
it.
I
don't
know
how
far
we'll
proceed
to
work
on
this
second
option
and
second,
second,
concern
is,
if
we
bring
in
school
card
action
into
this,
like
what
stephen
mentioned
and
stephen
also
ran
into
the
same
problem.
It's
going
to
create
a
lot
more
issues,
and
I
just
want
to
bring
this
up
over
here
so
that
we
think
about
that,
and
I
want
to
open
up
for
others
to
have
any
conversation
on
this.
G
I mean, I'm okay holding off on doing the merge, if that's the concern here. I don't think that's a high priority. Okay.
G
Yeah, when I said merge, I meant the merge of scorecard action and scorecard. If that's the concern, yeah, I'm okay holding off on it; I don't think it is high priority at all, at least from my side, and I believe even Laurent is okay holding off on it. Stephen, I don't think he's on the call right now, but if Stephen is interested in doing it, we can have a conversation on the issue. But yeah.
B
If that's the concern, we can hold off on it; that's my only concern. And probably, to solve this, especially the go-tuf and the Go 1.18 issues, I'm going to try and see if we can drop ko. We only use ko right now within the action, we install ko within the action. We'd probably have to do that, but I'll work through it.
B
Let me answer that question specifically. We want to know what specific versions of tools we use; we want to be sure, and that's why we have these underscore imports, so that we know what versions of the tools we have, and we can have reproducible builds based on those versions. That's why we have them in the tools module, and that's why we have a separate go.mod under tools. To answer your specific question, similar to what you mentioned:
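The underscore imports being referred to are the common Go tools.go idiom. A generic sketch (the module paths are examples, not the actual scorecard tools list); the file is never compiled into a binary, it only forces go.mod and go.sum to record exact tool versions:

```go
//go:build tools
// +build tools

// tools.go: blank imports pin build-time tools in go.mod and go.sum,
// so every contributor builds with the same tool versions.
package tools

import (
	_ "github.com/golangci/golangci-lint/cmd/golangci-lint" // example tool
	_ "github.com/google/ko"                                // example tool
)
```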
B
If I pull ko out and only use the YAML file to install ko, then if somebody clones the scorecard repo, they have to figure out what version of ko they need to install and how all of these things run. Our Makefiles have ko build in them, and we won't know which version; it would all be documented, but there would be no automated way of figuring it out. That's the problem.
A
Okay, I guess no advice for you then. So, I don't think we have any other items, so I guess we can conclude. Is there anyone with a question before we end? We have another five minutes.
A
You're also volunteering? Yes, yeah. So if there's no other question, let's decide who wants to be the facilitator for next time, basically just doing what I did today. So, who's interested?
A
And I think we can end on this, unless there are further questions.
A
So I think we're done. Well, thanks everyone for attending, thanks for the conversation, and see you all next week, or in two weeks. Thanks.
A
It would be in two weeks, which is what, the 16th or 17th? Yeah.
B
The 21st, yeah, right. Okay. We should probably plan to have it, obviously, except I think part of the team won't be able to make it, but we should probably have that in person; we can still do that at the summit on the 16th, yeah. Okay, perfect. Okay, thank you.