From YouTube: Security Tooling WG Meeting (June 15, 2021)
B
There, that should be better now. How many different mute buttons can you possibly have? Thanks, Azim. Do we have anybody else's name?
D
I can give you guys a brief update of the progress we've made; maybe you can take it from that, and if you guys have more questions and follow-ups, I can try to answer those. So what we're doing right now is trying to scale Scorecard so that we can dump out more data on more and more repositories.
D
As of right now we have about 30,000 repositories' worth of data, and I think that's going to increase soon; we're hoping to hit maybe 50,000 repositories in the next few weeks. The idea is that we dump this data out into, say, a BigQuery table, and folks who want to depend on the Scorecard data can read from there.
D
The way we're going about it is that we take the criticality-score projects, the ones with the highest criticality scores, as what we dump out first, so the more critical they are, the sooner they're included in Scorecard. That's one update I had. We're also working on some interesting new checks: we're trying to add a check for whether a repository has binary artifacts present in it, and some new checks on frozen dependencies, whether a repository pins its dependencies or is just doing, you know, the curl kind of thing. Some of these new checks are still being worked on and should be part of our v2, which is coming in two or three weeks.
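The BigQuery consumption model described above can be sketched as an ordinary SQL query over a published results table. The dataset, table, and column names below are hypothetical placeholders rather than the real schema, so treat this as the shape of the idea, not the actual interface:

```python
# Sketch of how a consumer might read Scorecard results out of a public
# BigQuery table. Table and column names are hypothetical placeholders.

def build_scorecard_query(repo_url: str,
                          table: str = "openssf.scorecard.results") -> str:
    """Build a query for the latest check results of one repository."""
    # Simple guard so a repo URL cannot break out of the quoted literal.
    if "'" in repo_url or "\\" in repo_url:
        raise ValueError("unexpected characters in repo URL")
    return (
        "SELECT check_name, score, analyzed_at\n"
        f"FROM `{table}`\n"
        f"WHERE repo = '{repo_url}'\n"
        "ORDER BY analyzed_at DESC"
    )

query = build_scorecard_query("github.com/ossf/scorecard")

# With credentials configured, a consumer would then run the query through
# the BigQuery client library (not executed here):
#   from google.cloud import bigquery
#   rows = bigquery.Client().query(query).result()
```

Keeping the query construction separate from the client call makes the sketch dependency-free; only the commented-out step needs the `google-cloud-bigquery` package and credentials.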
C
So I have a very focused question, but what the heck: do you record in your outputs when you did the analysis? Because when I look at metrics.openssf.org, it doesn't have a "when this was recorded," and I'm wondering if that's because it wasn't in the output from Scorecard.
D
We actually do record when we did the analysis. We used to do this on a daily basis, and now I think we're doing it on a weekly basis because of the number of repositories, but yeah, we do record the date the analysis started, or at least which week the analysis is for; that information should be available.
C
Gotcha. And you are aware that there's a site called metrics.openssf.org that's bringing in that data? So the more you do, the more data they'll have about projects.
D
Yeah, that's true. I think the only other meeting where I did come across the OpenSSF metrics work is now at 7 a.m., so I stopped attending it because it's too early for me, but I will probably sync up with Mike from Microsoft again once we have some more time and v2 is available.
E
Yeah, just to clarify, from the foundation: I read through it and I'm still not very clear on what types of software are in scope for this Scorecard. If you can go back to, foundationally, what the purpose of the project was; I saw the video and I saw all the content, but it'd be good to just refresh why it is there, how it works, and what its value is for all these open source projects.
D
Sure. So I think we're basically targeting almost any repo. But let's take a step back: the idea of Scorecard is, can we provide some basic automated checks? One of the simplest ones we have is: does this repository even have a SECURITY.md file, which tells you, if security vulnerabilities are found, how to report them?
D
There are these basic checks that we go through, things which are very obvious, that we know need to be fixed in a repository, and we call them out and say: hey, this is what your security posture looks like. If you get a score of 100, that doesn't mean the project is 100% secure, but it's trying to say: are you doing anything obviously bad, and can you improve on it? We are not language- or platform-specific; we're language agnostic, and we basically just look at these obvious things, like dependency pinning in, let's say, Dockerfiles, or having that SECURITY.md file, and some checks like that. The other one I was talking about, the binary artifact check, is: if you're checking a binary into your repository, that's probably not a good thing to do, so we call it out and say: hey, look.
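Two of the checks just described (a security policy file, and binary artifacts committed to the tree) are simple enough to sketch against a local clone. Scorecard itself works through the GitHub API with its own heuristics; the candidate locations and extension list below are illustrative assumptions, not Scorecard's actual rules:

```python
# Local-clone sketch of two checks described above: presence of a security
# policy file, and binary artifacts committed to the tree.
from pathlib import Path

SECURITY_POLICY_NAMES = {"security.md", "security.rst", "security.txt"}
BINARY_EXTENSIONS = {".exe", ".dll", ".so", ".dylib", ".jar", ".class", ".o", ".a"}

def has_security_policy(repo: Path) -> bool:
    """Look for a security policy file in the usual locations."""
    for folder in (repo, repo / "docs", repo / ".github"):
        if folder.is_dir():
            for entry in folder.iterdir():
                if entry.is_file() and entry.name.lower() in SECURITY_POLICY_NAMES:
                    return True
    return False

def binary_artifacts(repo: Path) -> list[Path]:
    """List checked-in files whose extension suggests a compiled binary."""
    return sorted(p for p in repo.rglob("*")
                  if p.is_file() and p.suffix.lower() in BINARY_EXTENSIONS)
```

Extension matching is a deliberately crude stand-in; a fuller version would sniff file contents (magic bytes) rather than trust names.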
F
D
That's true. I think we're having some discussions around how deeply involved we want to be, in the sense that, for example, if it's Golang, there might be different types of dependencies and things like that. But right now I think we're planning to start off with just a basic regex-based check, just to see whether they have some kind of SHA pinning, things like that, and maybe in the future, depending on how things evolve, we might go into more language-specific or platform-specific checks.
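A regex-based pinning check of the sort described could look something like the sketch below. The patterns are illustrative guesses at what "unpinned" looks like in a Dockerfile (mutable image tags, unversioned pip installs, curl piped to a shell), not Scorecard's actual rules:

```python
# Rough sketch of a regex-based pinning check for a Dockerfile.
import re

UNPINNED_PATTERNS = [
    # Base image pulled by mutable tag rather than an immutable digest.
    re.compile(r"^FROM\s+(?!\S*@sha256:)\S+:\S+$", re.IGNORECASE),
    # pip install without an exact version pin.
    re.compile(r"pip3?\s+install\s+(?!.*==)\S+"),
    # The classic curl-pipe-shell install.
    re.compile(r"curl\s+[^|]*\|\s*(ba)?sh"),
]

def unpinned_lines(dockerfile_text: str) -> list[str]:
    """Return the lines of a Dockerfile that look unpinned."""
    hits = []
    for line in dockerfile_text.splitlines():
        line = line.strip()
        if any(p.search(line) for p in UNPINNED_PATTERNS):
            hits.append(line)
    return hits
```

As the speakers note, regexes like these only catch the obvious cases; language-aware checks would need to understand each ecosystem's lockfile format.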
E
Thanks. So it's a lot more on the process side than on the code side; a secure way of building it and all that is not in scope at this time?
E
Yeah, it's a lot more on the process side, how the software is being secured, more than on the, I would say, hard question of whether the software itself is secure.
D
Yeah, any obvious checks: anything that we think can be automatically found, where we can say, hey, this is clearly an obvious flaw in the repository, then yes, we will try to integrate it. A good example, which is coming up in a few weeks, is integration with OSV.
D
Sorry, what is the other one you said? The National Vulnerability Database, NVD. I think we are planning to start off with OSV for now, and we'll see how that integration goes, and then we'll probably look into
B
NVD. So I'm curious: for the code review check, do you check to see if there's any time between a pull request being submitted and accepted? I've seen pull requests that are accepted within 30 seconds of being submitted.
D
No, I don't think we look into that. We just see whether the repository has a policy for code review, or whether it's just doing an automated accept, but yeah, we don't look at the time.
B
Okay. And for the fuzzing and the SAST checks, you're just looking for OSS-Fuzz and whether or not they're using CodeQL?
D
I'm not sure about SAST, but for the fuzzing check, yes, we are only looking at OSS-Fuzz. Laurent is on the call; Laurent, do you know how we do the SAST checks?
G
The SAST check is just looking for whether you're using CodeQL, and we might actually remove this check and replace it with some better checks where, for example, we look for some very specific patterns, like deserialization issues or things like that. But we're just starting on this, with the curl-pipe-bash pattern that we're working on.
H
Okay, I can comment on that from the CodeQL side here at GitHub. I think it would probably be interesting to add more SAST tools to that check, rather than try to do your own analysis and find your own vulnerabilities. I think the latter will be a lot of effort, and the former will probably be a lot easier.
G
Okay, so we don't want to reinvent the wheel, then. The problem with supporting more SAST tools... so we've thought about this; we don't yet have a full conclusion on what we're going to do, that's the honest answer. But enabling more tools means users still have to configure the tools, and different teams want different things from different tools, and so that's kind of the challenge. Just saying that someone is using CodeQL, for example, doesn't really mean anything, because you have different templates and it does different things.
H
Yeah, I agree with that. I think when we first talked about Scorecard, the aim there really was, I think, to establish whether a project had given that consideration: whether they had thought about SAST and, if so, whether they had set up one of two or three tools. And I think the project started with CodeQL.
C
So, and I agree that there are many different configurations, but I think there is a qualitative difference between "I'm using or not using tools" versus "I'm not doing anything."
C
Now, if you're running the tool and totally ignoring the results, that's a problem, but I don't know how a tool can detect that.
G
We should integrate more of them, because there are a lot of them and I'm personally not aware of all of them. And I don't know yet; I haven't really looked at how GitHub integrates tools. I know there's a common format that you use, called SARIF or something, an open standard.
H
Yeah, that's a good idea. We could look for whether any tool has basically commented on your repo, and whether that's CodeQL or Coverity or SonarQube doesn't really matter, as we just said. But indeed we can see whether people have been using the code scanning toolset with that SARIF integration, if that makes sense.
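The "has any tool commented on the repo" idea reduces, on the code scanning side, to reading which tools produced results, and SARIF records exactly that: each run carries its tool's driver name. A minimal sketch of pulling those names out of a SARIF 2.1.0 log (the example document is fabricated for illustration):

```python
# Minimal sketch: list which tools have reported on a repo from a SARIF
# 2.1.0 log, using the run.tool.driver.name field each run records.
import json

def sarif_tool_names(sarif_text: str) -> list[str]:
    """Return the driver name of every run recorded in a SARIF log."""
    log = json.loads(sarif_text)
    names = []
    for run in log.get("runs", []):
        driver = run.get("tool", {}).get("driver", {})
        if "name" in driver:
            names.append(driver["name"])
    return names

# Fabricated example log with two runs from different tools.
example_log = json.dumps({
    "version": "2.1.0",
    "runs": [
        {"tool": {"driver": {"name": "CodeQL"}}, "results": []},
        {"tool": {"driver": {"name": "Flawfinder"}}, "results": []},
    ],
})
```

Because the check only needs the class-of-tool signal discussed later in the meeting, the driver name alone is enough; no per-tool parsing is required.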
C
Yeah. Now, not everybody uses that integration; some tools just open pull requests, or at least comments, but that at least would give a hint.
G
Well, we're not planning to remove it; we'd like to have more tools. But if we want to do better, we don't really know what kinds of checks all the tools are making, and so it's a little bit hard to keep track of. So, as I think David mentioned, having it is a good first step, but later, I guess, we would like to have, you know, some ready-made configurations, maybe categorized by the sort of checks that the tools make. But maybe further down the line.
F
Yeah. They provide an open API for open projects: you can check the quality profiles, the rules activated, the open findings, and the trends. So you can get a better understanding of whether they are actually using the static analysis or just scanning for the sake of it. But again, I completely agree with David's point that having something is better than nothing. If you want to go a bit deeper and understand more about a project, though, there are APIs available.
I
So you should be able to add a few SAST tools as well fairly easily, whether you look at issues raised or at particular config files.
G
Yes, correct. So I would love it if some of you could just create issues on the Scorecard repo and tell us which tools you think would be useful that we're missing, because that would help; there are so many tools that I'm honestly not sure which one we're going to start with. Just having an idea of what the community thinks would be useful would be super helpful.
G
Yeah; sorry, I started to interrupt. There's also a question, a concern that was raised by people, saying that if we look for certain tools, we're basically saying that those tools are good and we're doing some free marketing for them, and that we have to be careful about not singling out Coverity because they are the most used today.
C
If you only detected Coverity, I'd completely agree. If you detect Coverity and Fortify and a whole bunch of other tools, then I think that objection goes away. The goal isn't to detect whether or not they use Coverity; the goal is to detect whether or not they're using a certain class of tool.
I
And you can include, you know, some comment to say: if there are any other tools we should be detecting, please raise an issue here, or whatever. As long as there's a straightforward way for anyone, whether it's a company or an open source tool, to get registered, then I don't see it as a problem.
D
Cool. I just wanted to quickly reiterate Laurent's point: if you guys have any ideas about the checks, please do file issues, because right now we're looking more into how to increase the breadth of Scorecard, how to get more data and how to add more checks, but each of these checks probably needs better quality, more thought. So yeah, we would love to hear from the community and see how we can make Scorecard more useful.
B
Makes sense. Thanks, Azim.
B
Thank you very much for sharing what you're up to with that. David, I wanted to ask: there's lots of stuff going on with the executive order these days. Can you talk about what the Linux Foundation and OpenSSF are currently doing? I know there's lots of information flying around rapidly.
C
The cheeky answer would be to just tell you no, but I actually can tell you some things. Let's see here; let me write down "US executive order, David W." All right. So the US sent out an executive order on cybersecurity.
C
NIST created a request for information on five categories. I wrote up five papers, one for each, and I mentioned the OpenSSF in all of them; I'm pretty sure I mentioned this group specifically, and it'd be kind of surprising if I hadn't. They did invite me back to talk about defining critical software.
C
I have some concerns about how they're thinking about defining critical software, but you know, the work is still going on. Lots of stuff going on; a lot of people are asking me questions and trying to chat and learn more, and that's all good, if a little tiring. My boss's boss is Jim Zemlin, president of the Linux Foundation.
C
He asked me to come up with a list of what you would like to have funded if you wanted to ask for a serious amount of money. He wanted me not to release that until he had a chance to review it, and he hasn't had a chance to review it yet, so I can't share the details. But I can tell you that I started with the OpenSSF wishlist, plus input from some other folks, and I made some stabs at what I thought these things might cost.
C
Shockingly, I'm not omniscient, so I could be wrong about predicting the future, but that's one of the things I've been trying to do. Just to clarify: this is basically an effort to try to go out and request larger sums of money by starting at the top. There is no guarantee that any of this stuff will get funded; I want to make that clear.
C
As I said, do not base your life on whether or not that actually works. Think of this as an effort to do big fundraising. Will it work? Don't know; you miss all the shots you don't take.
C
But you know, I think Jim and I are very much in agreement on the basic pitch, which is: hey, yes, it's a lot of money, but not for a government, considering the ecosystems that open source runs. So we'll see, we'll see how that works. Okay.
C
You know, that's not fair; he is just interested in trying to make things work and fix problems as he sees them. Absolutely, but fixing problems is a target-rich environment, and he had a board meeting last week, and there's another big thing he's trying to do. And actually, to be honest, I've been...
C
Okay, so just FYI, since this is the tooling working group: this is actually not funded by the Linux Foundation; this is work that I've been doing in my copious spare time. I wrote a little tool years ago that analyzes C/C++ code. It's a simple, security-focused source code static analyzer for C/C++, actually written in Python. I put out a new release: it now supports OASIS SARIF as an output format, and it has better cross-platform support. It worked across platforms before, but now it should be even more wunderbar. So, you know, if you use it, enjoy.
J
I mean, it sounds like, since you've output SARIF, you've already done all of the work that is the integration. What we'd need to do would be to create an Action, so it's easy for anybody to run it, and put a blurb in the marketplace for it. GitHub hasn't really created any of those for other open source projects itself, so I don't think we can maintain it, but it sounds like you've done 90% of the work.
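The Action being discussed could be little more than a workflow that installs the tool and hands its SARIF output to code scanning. A hypothetical sketch, assuming the tool is flawfinder with its `--sarif` output option, installation from PyPI, and GitHub's standard `upload-sarif` action (version tags are illustrative):

```yaml
# Hypothetical workflow: run flawfinder and upload its SARIF report so
# results appear in the repository's code scanning alerts.
name: flawfinder
on: [push, pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: pip install flawfinder
      - run: flawfinder --sarif . > flawfinder.sarif
      - uses: github/codeql-action/upload-sarif@v1
        with:
          sarif_file: flawfinder.sarif
```

Publishing something like this to the marketplace would mostly be packaging work around the SARIF support that already exists.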
C
I might, but to be honest, right now my cup is full. And to be fair, the SARIF integration was done by others too. So: I am happy to review proposals.
C
I will occasionally make improvements, and I might do that one, but it's going to be a while if you wait for me.
C
Yeah, that was actually some Microsoft work, so it would be unsurprising if GitHub integration is in their view. But I can at least put a request on the proposal saying: hey, we'd love to see a GitHub integration.
J
Cool, sounds good. Yeah, more and more Microsoft development happens on GitHub now, so the Microsoft folks have a load of requirements that any tools they're using as part of the Microsoft security toolchain end up compatible with GitHub. So they almost certainly have that on their list.
C
All right. So: I'd love to have a GitHub integration, and GitHub likes SARIF. Are you planning to create a GitHub... what do you call them? A GitHub Actions integration?
H
Yeah, I've included the link in the notes for today's meeting.
C
All right, okay. That's for SonarCloud specifically, though, isn't it?
C
What in the chat am I supposed to be looking at for the GitHub Actions integration?
C
Okay, all right. And I will create an issue on flawfinder, which others can then do something glorious about. Cool.
A
Ryan, I have one, which is: if there's time available at the next meeting in two weeks, there's a member of the team at Activity Labs, George Sinawski, who is willing and interested to present our survey results on software developers and data scientists: their code reuse habits, and their security-related, pre-installation code reuse beliefs and behaviors. We want this sort of survey to inform tool development.
A
So it's not particularly tied to any one tool, but for people who are interested, especially in pre-installation package-vetting behaviors, my colleague George would be glad to do a presentation.