From YouTube: Scorecards Biweekly Sync (February 10, 2022)
A: Lauren coming in?

B: Yeah, Lauren should be joining us. Only, I'm just streaming... the team.
C: While we're filtering in, I have a question about the meeting notes. I see that, I guess, the last meeting was Jan 27, two weeks ago. So I guess we should just start a new thing, but I was unclear what this upcoming "tmpl" section was: if that was a different meeting that happened, or if that's "template".
E: The template: you copy that and use it for this meeting, which I would love to help do, except there is no access available to the link that was in the calendar invite. So, if somebody could accept my request.
F: Hey, hey. If you add yourself to the... I will pop the group in in a second, and it'll grant you access to the sheet, although I know you may have some Googly restrictions, right.
B: That we should use GitHub Discussions, so we started with questions. I just... I'm unsure of how we kind of translate between the talks and the discussion. So maybe that's something still going on, because...
F: Yeah, yeah. So, I mean, we're kind of like translating it after the meeting. So we are, we are still using Google Docs, but the things...
E: Yeah, and as PDFs. We all pile into an Etherpad, which is persistent week to week to week, and then somebody moves those notes into GitHub, into a markdown file, at the end. And then, you know, at the next call we're like: yes, we approve the notes, whatever pro forma sort of thing. But having the Etherpad, where we all pile in, has been incredibly helpful, because then, throughout the week, if you come up with something, you can add it to the agenda, et cetera, et cetera. But not having...
B: Sorry, so maybe let's get started. I quickly wanted to start off with introducing Rohan.
I: Yeah, sure. So, hey everyone, my name's Rohan. I'm a second-year CS student going to UC Berkeley right now, and I'm going to be working under Azim as a software engineer intern. So right now I'm doing some stuff with the Scorecard, and currently I have one PR up for adding the Scorecard to a whole organization at scale. So yeah, so far I've been really enjoying the internship, and I've been looking forward to working with you guys, so nice to meet everyone.
J: Yeah, I can go next. Hey all, this is Shubhra, Shubhra Kar; I'm the CTO of the Linux Foundation. Some people, some folks in the community, asked me to join; this call would be really valuable. So I'm taking the lead of Brian and Abhishek, and first time joining. Welcome.
K: Okay, I'll go next. My name is... I am the lead on Flutter range fraud, and we just started using scorecards everywhere. So I'm joining just to know everyone involved in the project, and also how I can contribute to the scorecards as a whole.
G: Thanks. Anyone else? I'm not new; this is David Wheeler, I'm the mysterious person on the phone.
B: Cool. Brian, I think we had a few items to discuss. Maybe should we start with that? Or did you have anything, like, anything on the agenda, or should we just get started with the regular scorecards meeting? Yeah.
J: Yeah, yeah, absolutely. So, you know, at the Linux Foundation we've been working for the past three years building a system called LFX. It has a security module in it, which we are working with the Alpha-Omega group to integrate, because we started much earlier than the OpenSSF was formed. And similarly, we had a metrics project, and that product is called Insights, and this is essentially providing deep analytics to all the Linux Foundation projects; we have onboarded about 800 projects or so in that platform.
J: And, if you folks are interested, I can show you some of it, but net-net, right: we are instrumenting everything from the code base to the ecosystem. So things like comments, contributors, who's joining you in the community, who's, you know, kind of drifting away; you know, like your build processes, your GitHub issues, pull requests, code efficiency, a lot of that, right. And then we also look into the mailing lists, the Slack channels and social media.
J: Some of them are using it for voting purposes and whatnot. And then we had a security module, which we basically integrated with a bunch of SCA vendors on the backend, to do static code analysis that can find code secrets, and then dynamic dependency analysis, and find vulnerabilities around the stack, CVE, CWE and all that. So, in both of these, what we also started doing is, because the criticality scores, that's where we started with: we actually integrated that already into this central platform.
J: So one of the suggestions that came from folks like Abhishek was: why don't we collaborate and merge these efforts, right, and basically be able to show both in the same plane, right. And obviously we don't have all the data we need; like, criticality scoring has a lot of other dependencies, like what's the actual enterprise adoption, and things like that. We are working on adding those additional metrics as well, but I think it would be very helpful across the board if we are able to join hands here, right.
J: So that's the overall goal. If you folks are interested, I'll quickly share my screen and show you a little bit, and then, you know, we can take it from there.
J: Okay, so, you know, these are all public facing. This one is called insights.lfx.dev, and, you know, we are instrumenting everything. These are like global trends, but then we have project-by-project views as well. Now, a project for us is essentially a set of GitHub repos or GitLab repos; it doesn't matter which source control system they're coming from. So, for example, if you look at something like... let's look at Cloud Foundry, or even if you're looking at...
J: Let me take one example here, right, like LF Edge, right. So Edge is essentially like a set of projects, and, you know, we basically create these sub-projects and groups. There are, like, you know, eight or ten sub-projects in there, but we have stuff, like, you know, a lot of these metrics that come in, and things around a variety of metrics, right. And I can show you some of those right in the global trends.
J: So we look at things like contributor strength, growth and retention, commits growth, new contributor growth (so if you did acquire new contributors, and how much the impact was with contributors), lines of code added and deleted, what are those contributor roles, right? How many of them are submitters, reviewers, approvers? What does the code pipeline look like: you know, the engineering cycle time, merge efficiencies.
J: What's the backlog of issues, you know, issue resolution, build statistics, right, like if you're running a number of jobs; then communication channels, you know, organizational involvement, registry download data and whatnot, right. So these are, like, at a super high level. But then, if you go at a specific project level, and if I pick up any one of these (let's look at one project here in a little bit more detail), we start splitting these into different sorts of metrics. So right now you're looking at, like, technical metrics.
J: You know, Docker registry information, GitHub, Git, a variety of things, right. So this is an across-the-board free thing that we have built for all projects, and then the security part of it is basically: we are running automatic scanning, and now, we didn't build all the back-end software.
J: We are collaborating with a lot of SCA vendors, where we are automatically detecting dependencies, and if you look at one of these projects... it has role-based access control, but, like, when we start looking into... I'll just let this data render... you can see here the project criticality scorecards; we already integrated with that. So when we start running the scans, we look at things like vulnerability scoring, secrets and compliance. You know, we also integrated the OpenSSF best practices score badge, right. Then we are also getting detailed into, like, what code secrets might be detected, what kind of non-inclusive language is there, what type of vulnerabilities are there, you know, what release versions happened, what languages are used in the project. And then you go into a little bit more detail, right: you can start debugging all these issues, and with known CVE and CWE information, we have all the dependency information of all the upstream packages. So think, like, you know, what you have on deps.dev; only thing was...
J: This was limited to Linux Foundation projects for now, and then, you know, we have stuff around license scanning, and a lot of detailed information here. Now, the goal was, because we are expanding on this anyway, and we are going to expand this to all projects, much beyond Linux Foundation projects: so when we talk about criticality scoring, can we use some of these metrics, right, and actually start merging some of these dashboards? That can be presentable to the community, right, and used for a variety of analysis.
J: So that's kind of the context; I'll not go beyond technical metrics. But when you really look at how critical a project is, you can't just go with five, six metrics; there are hundreds of parameters, right, from the ecosystem that you're gathering. So today we have things like registry downloads and stuff like that; that's not enough. We are now working with some other partners to get, like... actually, behind-the-firewall enterprise adoption as well, right. So you might have heard of companies like Stack Shared or io and all. So we're trying to create that 360-degree view, and hopefully these metrics can come into how we compute the algorithm for how critical that project really is, right. So that's kind of the very high-level context. Makes sense?
B: Makes sense, yeah. That's great to know. I had a few questions. I think one quick one is: I tried accessing, let's say, Kubernetes on the LFX Insights, and it gives things saying, you know, it's... it's been...
J: It is publicly accessible data, like some of the CNCF projects. Some of them have been disabled for now, because what we did is: we started hitting a problem of scale, because we are computing and hashing through every commit. So, and that's why I'm having an open discussion (these are all devs, right), we were storing these on gigantic Elastic clusters, and then, you know, the cost of maintaining that went through the roof.
J: So we are moving to building a bigger data lake, and that effort will be completed by end of March, early April, and that's where we are planning to open the throttle up, right. So certain projects, you know, where there were other systems... like, Kubernetes has a devstats system, as an example. So if they already had a solution, we said: okay, instead of boiling the ocean, let's solve the scalability thing first, right.
F: Was going to obviate the need for devstats across CNCF projects?

J: Eventually, yes, that's the goal, because, like, we have every metric that devstats has, plus much more, right. But, you know, Kubernetes is a huge project and, you know, we need to really scale out, and if they add another 100 projects, we should not be fighting over how we keep adding more compute and scale at the back end. So the move to the data lake is cost-effective and architecturally highly scalable.
B: Okay, and you mentioned targeting LF Foundation projects. So, with the data lake move, do you plan to go towards the broader...
J
No,
that's.
That
was
one
of
the
compelling
reasons
for
us
to
move
to
the
data
lake
because
we
wanted
to
do
this
for
it,
if
not
a
million
at
least
100
000
projects
right,
so
we
needed
to
swap
out
the
back
end
so
that
you
know
we
don't
get
drowned
with
a
10
million
dollar
cloud
bill
to
run
all
these
analytics
on
a
daily
basis
right.
So
yes,.
F
Sorry,
are
they
I'm
curious,
is
it
purely
proprietary
or
is
there
any
backing
technology
like
stuff,
that's
stuff,
that's
touted
in
the
chaos
foundation.
J
Yeah,
so
we
originally
started
with
some
projects
like
that
we
came
right,
yeah
and
some
of
our
agents
still
use
some
of
the
grimoire
lab
stuff.
But
over
time
we
found
out,
like
you
know,
the
architecture
was
very
single
project,
centric
and
api
rate
limits
we
started
hitting
and
all
that,
so
we
essentially
rewrote
most
of
the
instrumentation
or
probes
as
we
like
to
call
it,
and
basically
we
connected
a
bunch
of
connectors.
J: So we have about 15 connectors today, and then we are also writing a BYOC model. So anybody can come and write their own connector, and if you follow the spec, or, like, the modeling data of the data lake, you know, as long as that conforms to the API construct, you don't have to worry about whether it will integrate seamlessly or not. So we started with Grimoire.
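The "bring your own connector" idea described here can be sketched roughly as follows. Everything in this sketch is an assumption: the record fields, the payload shape and the idea of a single ingest endpoint are made up for illustration, since the actual LFX connector spec was not public at the time of this meeting.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical illustration of a BYOC connector: it normalizes tool-specific
# data into one shared record shape and serializes it for the data lake's
# ingestion API. All field names and shapes here are invented, not LFX's.
@dataclass
class MetricRecord:
    project: str    # project slug in the data lake
    source: str     # which connector produced this record
    metric: str     # metric name, e.g. "commits_per_week"
    value: float
    timestamp: str  # ISO 8601

def to_ingest_payload(records):
    """Serialize records into the JSON body a (hypothetical) ingest API expects."""
    return json.dumps({"records": [asdict(r) for r in records]})

records = [
    MetricRecord("example-project", "github", "commits_per_week", 42.0,
                 "2022-02-10T00:00:00Z"),
]
payload = to_ingest_payload(records)
print(payload)
```

The point of such a spec is the one made in the transcript: as long as a connector emits records conforming to the agreed data model, the lake does not care which tool produced them.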
J: We still follow the specs of CHAOSS, right, in terms of, like, what metrics they recommend, but we went beyond it. But on the software library perspective, we essentially had to rewrite all of it, right. So when I say proprietary, it's not proprietary, because we are not a vendor; it's, like, all LF engineering. And as we roll out this, you know, kind of our version two, we will start opening up all the connector code base and everything.
J
It
will
be
open
source
but
like
we
didn't,
want
to
put
out
like
half
bigfoot
out
there,
so
you
know
it's
easy
to
create,
I'm
not
saying
like
just
from
a
criticality
perspective,
but
it's
also
like
we
ran
at
a
breakneck
speed.
So
you
know
a
lot
of
the
best
practices
of
where
we
are
storing
cloud
keys
and
all
those
need
to
be
like
scrubbed
properly
before
we
just
open
source
it
all
right.
We
you
know
we
need
to
document
our
own
stuff.
J: We need to have something more secure, and because we were running so fast, we didn't really get to do a lot of this cleanup, and what licensing model we'll put this under. So LF legal is looking at, like, what's the appropriate licensing model, but as soon as we are done, we are going to open it up, right. So, yeah.
J: So the next version that we are rolling out with the data lake will have those APIs, because we have gotten this request from member companies of the Linux Foundation, including OpenSSF members as well, like, you know: if we are able to send them the API that they can subscribe to. But we have to be careful, because we are capturing a lot of PII data as well.
F: Yeah, I would definitely be interested in hearing more, or helping test stuff out. Like, for Kubernetes, we just wrote a tool that kind of, like, marries information from GitHub commits as well as devstats, to kind of determine the activity of maintainers or contributors.
F: So that, and also the tool has, like, various functions to allow us to, like, write the annual reports for Kubernetes, too. So I'd be curious to try to start integrating that stuff as soon as y'all are comfortable with it. Yeah.
J: So, yeah, it's not too far out. I think we are talking about two, three months here, till we move, or cut over, to the new data lake. And that's why, like... Kubernetes originally was onboarded already; we still have the data. But then we went and spoke with Chris and Priyanka, and he said, like: look, you know, we are just burning through cash here on the cloud bill. Can we temporarily turn it off, because you already have data, you know, like devstats, today, right?
B: ...on this call actually has written and rolled out the Scorecard GitHub Action, which is what we're hoping will be, like, a primary integration point for most scorecard users, and it will pretty much take over the processing aspect of it that you're talking about. That is, we are hoping that every time there is a commit or, let's say, a branch protection setting change, scorecards can, like, you know, run and publish this data.
B: So my question to you really was: will your data lake architecture be flexible enough to accept some kind of input like that? Basically, if you want to publish this, we could publish it on your architecture and make it available through the web dashboard. Do you think that seems like a possible solution?
J: ...you folks, with a demo. But, like, we have a small admin console of it to onboard projects, and what we do is, like, for every project, we auto-discover: okay, what all repos are there. And then, if a new repo got added into a GitHub, for example (Kubernetes had, like, six or 195 repos), we automatically scan and detect the changes. But then we say: okay, turn on this connector, turn on this connector, this connector. It requires some kind of maintainer permission, though, in some cases; not everywhere.
J: You know, because we sometimes install a bot to collect the data, right; makes it so much easier. Sometimes it doesn't; it just clones the repo and finds the stats on that. But, yeah, we have, we...
J: We are working on that connector architecture, so it's super easy to plug in, or subscribe, and push the data into the data lake. And then, once the data is in the data lake, building a visualization is super easy, because, you know, we are using all kinds of technologies around it, with Trino and Presto and others, and you can create any kind of custom views without having to spend technical debt on hard Angular dashboards or React dashboards, kind of thing.
B: Right, now that sounds great. Actually, that definitely helps out with a lot of efforts. So, if I were to understand this correctly: are we proposing that we turn down the metrics.openssf dashboard and instead prefer the LFX Insights dashboard? Is that what the proposal is?
J: Yeah, eventually, yes, I think, because it's a shared source anyway. And the point is, like, you know, in Insights there will be a dedicated dashboard, because you probably don't need social data and all that; but, like, you know, you essentially have the whole criticality and, like, entire scorecard-based dashboards on there. And then you can search for any project, and you can group it with a set of projects; it's not just a single project, right. It could be, like: okay, let's analyze these 100 together; let's look at CNCF as a whole.
J: If you wanted to, you could do a global grouping and look at aggregates at that level. And again, maybe it's not a project; it's a package somewhere, right. So, yes, that would be kind of the end goal. But again, we really want to collaborate on making that happen, because, you know, it will need a lot of heavy lift, right.
J: Yeah, and that's the... I think, think about it this way: different data collectors, or engines, are collecting the data, but if you pipe it into a data lake, then, you know, the visualization layer and all that is very flexible. Instead of building ten different tools, just consolidate the data into a data lake.
B: Right, yeah, that makes sense to me. Yeah, I think that makes sense to me. I have one...
B: I was discussing the scorecard action pushing data: we wanted to do this so that we can, you know, enable programmatic API access, or even GitHub badges. I just want to open the discussion to ask: is it worth having this separate effort that we have, apart from the LFX Insights? Or, like, I don't know what you think about that.
H: Is the data on LFX available to everyone by an API, or...
J: Or is it just through the dashboard? Yeah, not yet, but you will have that API after, as I said, right; like, we'll be exposing that after our data lake work. So, not today. You know, like, within the engineering team we can give you access, right, but not publicly.
J: Yeah, so our, you know, engineering is a little... we have dedicated engineering on this, but let's put it this way: I think our data lake effort, just the back-end stuff, is completing around March, let's say end of March, and we are publishing the next version of Insights that feeds off this data lake, worst case by early May. So I think the API access should be there hopefully by, like, mid-June, or end of June worst case.
E: Okay. So, to be realistic, and to add some buffer zone in there, because we are in a pandemic: it's probably most reasonable for people to rely on something being available, say, in start of autumn, mid-autumn, something like that, just to be safe, if plans were to be made.
J: Yeah. So that's where, like, obviously, we'll have to put some kind of rate limit, with some authorized access keys, right. And again, you know, that will be a permission-granted key, so that, you know, we can give it to... and we'll have to, say, set some rate limits, because there is a cost of getting the data out of a cloud.
B: Yeah, sounds great, Shubhra. I think, I think we should definitely sync more and figure out how to collaborate here. So let us know what your preferred means of communication is; we can either follow up through email, or we can sync up again in the same scorecards biweekly sync next time.
J: Yeah, I think what I could do is... this was just introductory, to just set this in motion. I think, as we go forward, I would like to invite some of my engineering team members to come and join in: actual programmers who are working on the code base. David happens to be one of our tech leads on that, but I would like to have them join in this working group as well. And also, like, we'll probably have to do some offline engineering sessions, developer to developer.
B: Yeah, makes sense. David, do you think this is something we should also bring up in the other working group? I remember...
G: Yeah, I mean, there, though... well, the identity group has been renamed to the Supply Chain Integrity working group. I'm having a little trouble right now, while I'm driving, remembering which groups have heard what, I mean. Certainly there's... no, it's probably a good idea, if they haven't already, to discuss LFX: you know, what it can... you know, what it can and does and can't provide, short-term, mid-term.
G: I somehow think that they have already heard something about this, but if not, I mean, it wouldn't be a bad thing. Yes, and Shubhra, you can correct me if I'm wrong, right now, today, the focus has been more on the Linux Foundation projects, because there's a non-trivial opex cost. I think adding, like, a few hundred critical projects, I think we've already discussed, is totally plausible as it's currently constructed. I don't think it's, you know...
G: I think there's been concern about trying to, say, add every open source software project to it. But who knows, you know: if we say, hey, maybe some of these metrics work... you know, ones that are operationally expensive we don't do, but we provide some data. I think that's a plausible direction. I don't know if any decision has been made on that; I just know it's been discussed.
J: Even if we were to onboard another 10,000 projects... the second part of it is, like, when you are looking at, like, a long tail of a million projects, you don't need to collect everything from, like, social media, or, like, earned media, or things like that, right. So the set of metrics will also be less. And again, we are generating opex funding for this through other member companies.
G: I think there's still (correct me if I'm wrong), but I think there's still discussions. But it makes, at least to me, it makes sense to do exactly what you just described. You know: hey, there's a million projects; the operational expenses will kill us if we try to do everything. But that doesn't mean that a small number of things can't be done, and, and...
D: Can I... so we should probably also write an issue about this, like, other than the notes. It should also be an issue, because not many of them were within the meeting. So it's good to have it as an issue. Probably Shubhra, or somebody, should start an issue, and this will help others also have the visibility, and provide context or directions or feedback on this.
B: Yeah, I can put something in writing, and so that...
K: I believe I posted one more comment a couple of days ago, and I believe we can do it in a non-disruptive way, but that would require, like, shipping a Python script along with the Golang binary. So I'm not sure if that will be okay, or we want to find a different way of enabling these checks.
D: This is, like... so, okay, I've got, I've got some feedback on this. Our Golang binaries: the best part is, they do not require any runtime, so it's runtime independent; anybody can download this and run this. Now, this adds a dependency: hey, you need to run a Python script along with this. Now I need to do a shell exec. That means that could open up, if somebody puts another script in and replaces that script, and that gets executed.
D: So I would recommend that we start a thread and think through this. You probably have a thread, but we should talk about this and see the pros and cons before jumping the gun. That's all; my opinion is.
B: Let's see: do you know how many folks, like, or, like, who all uses gclient?
K: Yeah, so gclient is, like, a dependency system for Chromium. So basically it is a configuration file where you put all the third-party dependencies and a bunch of different configurations.
K: So, at the end, the DEPS file is something like a Python script. It gets parsed whenever we are running gclient sync, which will check out a bunch of different repositories, and it will run some scripts as hooks after the checkouts, just to download, like, additional binaries that need to be there. So basically, gclient is setting up the whole bunch of different repositories, and the dependencies that are needed for building a complex system.
K: It was created for Chromium, but many other projects are using it now. So, in this case, we are using it for building Dart and also building Flutter.
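For readers unfamiliar with gclient, the DEPS file being discussed can be sketched as below. This is a made-up, minimal example: real DEPS files are evaluated as restricted Python, with gclient injecting helpers such as `Var()`; here a stand-in `Var()` is defined so the sketch is self-contained, and all repo names and hashes are hypothetical.

```python
# Hypothetical, minimal sketch of a gclient DEPS file (all names invented).
# gclient normally injects Var(); a stand-in is defined so this runs as-is.
def Var(name):
    return vars[name]

vars = {
    "chromium_git": "https://chromium.googlesource.com",
}

deps = {
    # Pinned: the revision after "@" is a full 40-character commit hash.
    "src/third_party/example":
        Var("chromium_git") + "/example.git@0123456789abcdef0123456789abcdef01234567",
    # Unpinned: a branch name, so `gclient sync` can fetch different code over time.
    "src/third_party/floating":
        Var("chromium_git") + "/floating.git@main",
}

hooks = [
    # Hooks run scripts after checkout, e.g. to download extra binaries.
    {"name": "fetch_binaries", "pattern": ".",
     "action": ["python3", "src/tools/fetch_binaries.py"]},
]

print(deps["src/third_party/example"])
```

The pinned/unpinned contrast above is exactly what the Scorecard discussion later in the meeting is about.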
H: Did you get any, any feedback from the authors of gclient? Like, I remember you said in the issue that you would propose to add a flag to say, you know, require hashes, a bit like pip or npm, so we wouldn't have to parse the, like, the file, and it could be done by, you know... when you run gclient, it would basically make sure that everything is pinned, so we would be able to look for the command in your, in your CI script, for example.
K: Yeah, I discussed a little bit with them. I felt like they were not very open to adding more functionality to the tool. But also, it will have the same complexity as adding a Python script, because we need to check out the depot_tools repository for having access to gclient, which is also Python.
G: In general, it's very, very hard to tell, in an open source software project, how many users you have and how often they're using it. This is not a new problem. But it does seem that we want to make it relatively easy for both cases, right. I don't, I don't see a killer in enabling running a script, but you need to be darn careful if you call a script, because you're going to be passing data, I presume, that's not totally trusted.
K: So I would say one of the other potential solutions was flattening out that file before using it from scorecards, but that will make it not generic for everyone using gclient. So it will solve our, our use case, but nobody else using gclient will be able to use it without running the parsing in their, in their repos.
K: Yeah, I would say for Dart and Flutter, which is what we are enabling right now, we have... we can enable it for all of them, I see.
K: No, it's the Flutter team. So, actually, I'm... I work in Flutter and Dart.
K: Yes, both are open source software.
E: So, okay, my primary concern with the current, like, first pass is that it's very Google-specific, with Chromium and Flutter and Dart. And as someone who works in a company that is not in the Google ecosystem, and is developing and planning projects that use scorecard...
E: This doesn't help me, and it therefore won't help the rest of the ecosystem. I know that Google has a lot of skin in this game, and I appreciate that. But is it possible to perhaps look at a first pass that isn't Google-specific, and therefore something that can appeal to more people?
B: Yeah, I think this also... if I'm not wrong, the thing to consider here is: gclient, I guess, is a tool that's most likely used by Google-heavy repositories.
B: Like, this is, you know, very Google-tailored, but I, I don't think that's, that's the, like... that's what's happening here.
E: I mean, it may not be intentional, but gclient and Chromium and Flutter and Dart... I mean, that's all Googly stuff. So I'm sure there are very good reasons for that, but I do have to wonder whether we might be getting a bit too laser-focused on a particular use case, which could make it very difficult for the project to expand beyond that.
M: Absolutely, I agree, yeah.
M: We're talking about the dependency pinning check. So currently it checks things like: that you have an npm package-lock file, and that your dependencies are pinned there, for all the different package manager ecosystems. So we're just trying to add a new ecosystem to that check, where, you know, when we see that you're using gclient for your dependencies, we check that those are pinned too.
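The pinning rule being described can be sketched roughly as follows. This is an illustration only, not Scorecard's actual implementation (which is written in Go); the rule assumed here is that a gclient dependency counts as pinned when its revision, the part after "@", is a full 40-character git commit hash rather than a branch or tag name.

```python
import re

# Hypothetical sketch of the pinning rule under discussion: a revision is
# "pinned" only if it is a full 40-hex-character git commit hash.
COMMIT_HASH = re.compile(r"^[0-9a-f]{40}$")

def is_pinned(dep_url: str) -> bool:
    """Return True if the dependency URL ends in @<full commit hash>."""
    if "@" not in dep_url:
        return False  # no revision at all: definitely not pinned
    revision = dep_url.rsplit("@", 1)[1]
    return bool(COMMIT_HASH.match(revision))

deps = {
    "src/a": "https://example.googlesource.com/a.git@0123456789abcdef0123456789abcdef01234567",
    "src/b": "https://example.googlesource.com/b.git@main",
}
unpinned = [path for path, url in deps.items() if not is_pinned(url)]
print(unpinned)  # only the branch-tracking dependency is flagged
```

This mirrors what the existing check already does for lockfile ecosystems: flag the entries whose resolved version can drift between runs.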
E: So this specific feature request and enhancement is to add that part of the ecosystem, and other parts are already added, and you've got... I... a kind of calming, soothing voice, so that's really lovely, Jeff; thank you for that. Okay, I apologize for not having the context of having read, and participated in, the conversation there, but that does clear things up quite a bit. Thank you, Mr. Cullen.
H: Can we tear apart just the piece that we actually need, that does the parsing, and, like, remove any API calls or, like, any things like... Is gclient actually something that interprets code, or is it just parsing? And, like... it's interpreting code, because there are variables and stuff, but can you actually execute whatever you want? Like, if we were to remove the whole gclient and just focus on that part that does the parsing?
K: Oh yeah, so the part that is doing the parsing... so I'm just creating a small script for that. It's probably, like, somewhere in between 20 to 40 lines of code; so that's the size of the Python script, just to parse the DEPS.
K: Yeah, that's what is doing the parsing, and that would be enough for getting the dependencies, with all the different variables that are being replaced.
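A script in that size range could, in principle, avoid executing the DEPS file at all by walking its syntax tree and only evaluating literal assignments, string concatenation, and `Var()` substitution. The sketch below is an assumption, not the actual proof-of-concept discussed here, and it handles only a simplified subset of what real DEPS files contain; its point is that parsing this way never runs arbitrary code from the analyzed repository.

```python
import ast

# Non-executing DEPS parser sketch: parse to an AST and evaluate only a safe
# subset (string literals, "+" concatenation, Var() lookups). Anything else
# is rejected, so a malicious DEPS file cannot run code during analysis.
def parse_deps(source: str) -> dict:
    tree = ast.parse(source)
    variables, deps = {}, {}

    def evaluate(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, str):
            return node.value
        if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
            return evaluate(node.left) + evaluate(node.right)
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
                and node.func.id == "Var"):
            return variables[evaluate(node.args[0])]
        raise ValueError("unsupported DEPS expression")

    for stmt in tree.body:
        if not (isinstance(stmt, ast.Assign) and len(stmt.targets) == 1
                and isinstance(stmt.targets[0], ast.Name)):
            continue
        name = stmt.targets[0].id
        if name in ("vars", "deps") and isinstance(stmt.value, ast.Dict):
            target = variables if name == "vars" else deps
            for key, value in zip(stmt.value.keys, stmt.value.values):
                target[evaluate(key)] = evaluate(value)
    return deps

sample = '''
vars = {"git": "https://example.googlesource.com"}
deps = {"src/a": Var("git") + "/a.git@0123456789abcdef0123456789abcdef01234567"}
'''
print(parse_deps(sample))
```

An AST walk like this sidesteps the shell-exec concern raised in this discussion, at the cost of failing on DEPS constructs the subset does not cover.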
G: You know, one thing that's worrying me here is... as long as this is only called when it's needed, you know: if I analyze something in JavaScript or C or C#, or something, it's not going to get called at all. But the other issue is: let's say that I have a package, some repository I'm analyzing, and it's malicious. Just analyzing a package shouldn't cause running arbitrary code that could cause trouble.
G: Are we pretty confident that that's not going to happen? No, it's...
G: I'm a little, yeah... forgive me, I'm a little less convinced by the number of lines of code; I mean, you can have one line that just runs exec and is a complete disaster. So, yeah.
K: Yeah, so the other option here would be inlining the Python code and just adding support for it in the Golang binary itself. I'm not sure how difficult that would be, because it needed to import the Python headers and a couple more things. So that's...
K: ...option, yeah. So, actually, we could do that, but it will complicate the build process for scorecards itself.
D: ...solve it. I think you want to go ahead and say...
B
Maybe I'll just summarize the three options that we have. So one is, we can ask the users of gclient, that is, repositories who depend on these gclient tools, to pre-flatten the file and put it in the repositories, and then we can analyze these flattened files. But that won't be generic enough, because that's not a thing that folks do today, so it'll be.
B
Repos like chromium, dart and flutter, so that would be something for like a v0. The other option would be to run a python script as a subprocess in go; that's another option which could be generic enough but poses some security risk. I think the third option would be to use a golang parser to parse the python code. Is that correct? I think those are the three options that we have, and the suggestion was we start with the pre-flattened files.
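[Editor's note: for context, a DEPS file is itself Python source, which is why the options above revolve around flattening it ahead of time or parsing it safely. Below is a hypothetical minimal example; gclient normally injects `Var()` when it evaluates the file, so it is stubbed here purely to make the fragment self-contained.]

```python
# Hypothetical minimal DEPS file (gclient DEPS files are Python source).
# Var() is normally provided by gclient; stubbed so this runs standalone.
def Var(name):
    return vars[name]

vars = {
    'chromium_git': 'https://chromium.googlesource.com',
}

# Pre-flattening (the first option above) would inline Var('chromium_git')
# into a plain string, leaving only literal URLs for analysis to read.
deps = {
    'src/third_party/foo': Var('chromium_git') + '/foo.git',
}
```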
K
Sounds good. I do have a proof of concept that I can upload for review, so we can take a look and see how it looks and then take a decision from there. The other thing that we may want to do is start with a flattened file first, and then, if we see that a lot of people are using that specific file type, we can start thinking about adding the correct support for it.
B
Okay, sounds good. So I'll update the issue with these options.
B
All right, thanks. Cool, we have five minutes. Does anyone have anything five-minute-worthy that we should discuss, or should we jump into opens?
K
B
D
That's something which we should probably talk about next meeting; we'll prioritize that. If some of the others are waiting for some of the fixes that we made, we should probably cut a dot-one release. That's something which we can probably do. I know we have breaking api changes, but still, nobody's using our api for now, so we can say we're not restricted by semver.
F
Yeah, I was going to say that we should touch on the v5 question mark with the log stuff at some.
I
F
We know that allstar is using it, but allstar is currently on v1. I have a pr open for v4, but I think there's a client break somewhere; the new github client is not passing auth details for allstar, so I haven't had a chance to poke at that. I don't know if you have, jeff, but.
F
Yeah, but I would want to make sure that that works before considering cutting over.
M
So for allstar, one of the proposals that we've been talking about was moving more towards just enforcing the action and actually kind of moving away from the code we have now. So I wouldn't actually make that a priority to fix before cutting a release.
B
Okay, yeah. I think we can look into a release maybe soon, hopefully before next week's meet, or maybe we can meet in two weeks and cut the release then. But yeah, hopefully in like 15 days we can have another release.
F
B
D
B
Sounds good. I guess, since we have two minutes, I don't know if we'll be able to cover stuff, but maybe one quick thing: navin, I know you're working on this vulnerabilities-in-dependencies issue, and I know we have a bunch of blockers there. Is there something we can help you with there?
D
I'm more than happy if somebody else can pick it up. Okay, I spoke to jason, who happens to own that, and he said it's not at all available to you. So there are two options to it. Option one: we write out saying this is not an issue, specifically a tracking issue, which, that's not the option.
D
Yeah, so that's the problem with all of this mess; that's why we're not able to fix that. I tried to justify it, I tried go 1.16 also to see, oh, can I do that, I tried go mod vendor. I tried all the hacks that I knew, unless somebody else finds another hack.
B
I see. And, I know one of the projects, I think this is the jason project that you're mentioning, so what's the update? Are they unable to move, or is it not a priority for them, or what's the issue? No, they.
D
Oh, I see. And that's what stephen and I, at least, were trying to see, and jason specifically said: hey, there's a few hacks, and there's a github tracking issue for a few hacks. None of the hacks are working. Oh, I see, yes.
F
Yeah, this is also, I'm trying to think, trying to pick up the actual vulnerability. This may be something that's not a problem.
F
Said it's not a problem, yeah. Because this popped up in a few of my repos, and kubernetes too, and yeah, I don't think this is a problem for us. So, like, I mean, one of the options is to dismiss the alert and do a write-up, as, yes, I forgot who you mentioned. But I'm almost inclined to continue to try to do hacky things.
E
D
Like what stephen mentioned, we can just dismiss that and also do a write-up on why it's not a problem, so that people are not like, okay, these people have a vulnerability and they've dismissed it. No, we did that due diligence; that's why we did that.
B
All right folks, I think we are three minutes over time. A productive meeting; thanks all for coming. Take.