From YouTube: Securing Critical Projects WG (January 28, 2021)
B
So, welcome. I don't know if there are any newcomers in here today; if you want to introduce yourself, feel free to take a second.
B
No? We're all regulars, cool. All right, so I didn't think we'd have enough on the agenda, but it filled up pretty quickly. We're going to kick today's meeting off with a short discussion from Amir, and then Jordan and Dan have some stuff to talk about. I think Dan's even going to do a demo on some work they've been doing. So, Amir, why don't you kick it off?
C
Wonderful, thank you so much, Kim. Hello, everybody. I'm looking forward to updating the working group today on the results of the security review that I first alluded to back in October. I created a quick document to share with everybody that captures some of the main information; it's linked directly in the agenda as well, if you'd like to follow along. But again, yes, thank you so much. I'm excited to talk to you about this review.
C
So I know those are two pretty simple things that can cause some discussion. My intention for today was more just to update everybody and to go over these findings.
C
I
highly
recommend
reading
the
report
to
see
really
the
nature
of
the
research
that
was
done
and
how
those
recommendations
came
to
be,
and
I'm
hoping
that
you
know
if
that
gets
everybody
excited
enough,
and
we
want
to
have
a
deeper
discussion
on
those
findings.
We
can
definitely
do
that
as
a
as
an
agenda
topic
for
a
future
meeting,
but
going
about
really
quickly
to
go
over
the
impact
of
the
work.
C
So with that, I thank you all again; thank you for giving me your time to update you on that. I definitely invite you to discuss anything security-related, especially when it comes to audits or security reviews, with us. Derek couldn't be here today, unfortunately; he's a little under the weather. But I'm definitely open to any kinds of discussions of that nature, and hopefully we'll have more work and more results for the working group soon. So with that, again, thank you.
C
Any questions, feel free. I don't want to take up the whole meeting, so I was thinking, if we want to do any questions, I can maybe take one or two, if I'm able to answer them, and then, if warranted, we can discuss further at a future meeting.
E
I'll jump in a little on that, if you want. Okay, yeah, so Greg KH and I both saw this, and I think we basically came to the same conclusions in terms of a response. On the first one, certainly Greg KH and I have questions. I don't think the Linux kernel is actually going to move to an entirely public viewpoint; I think both Greg KH and I have significant concerns about that.
E
On
the
other
hand,
we
thought
that
was
one
heck
of
an
interesting
recommendation
and
it's
hard
given
our
both
of
our
pushes
about.
We
want
things
more
open,
saying:
hey,
we
want
a
group
that
says
you
should
be
even
more
open
is
an
interesting
perspective,
and
so
we
thought
well
we're
not
so
sold
on
it.
They
make
some
good
points.
E
Let's
get
that
report
out
and
and
hear
what
other
people
have
to
say,
because
it
is
an
interesting
idea
that
has
not
been
seriously
pitched
before.
I
don't
think.
As
far
as
the
second
one
goes,
I
think,
there's
general
unanimous
is
probably
impossible,
but
I
think
there
is
general
agreement
with
the
in
the
colonel
community
that
it
makes
sense
to
have
the
cve
cves
devolve
on
the
kernel.
Devolve
down.
E
So I think things have been moving in that direction anyway, but this gives much more of a "yes, this makes sense; in fact, push harder in this direction."
C
I'm not entirely sure about Hacker News just yet; this is all relatively new. We did just publish this, so I'm not sure if it's on there yet, but I wouldn't be surprised if it made its way on there.
C
Yeah, but to your point, David, the fact that there is a referenceable, well-founded body of research that can serve as a catalyst to document this stuff and move it along in the right direction, as David mentioned, is, I think, some of the real value in this review.
A
Cool. Actually, in the order here we put Nathan last, but I think for my demo, Jordan and I were just going to chat a little bit about some upcoming work, so maybe it makes sense to move that one to last. Discussions: do you want to talk about the criticality score research you did, Nathan, if you're here?
F
Me? Yeah, okay, awesome. Hello, everyone. Again, thank you for the opportunity. I just...
F
...wanted to revisit something that I did after our meeting last time. The last time we met, we looked at the criticality score; I got introduced to Abhishek and we had an email exchange as well, because I did a project while I was a grad student at RIT which was very similar to what the criticality score is about. I think Chris Horn, my colleague, mentioned it.
F
It's
called
reaper
I've
linked
it
in
the
the
agenda
as
well,
so
so
it
seemed
very
similar
and
we
had
a
bit
of
a
communication
with,
and
he
said
that,
yes,
some
of
the
the
metrics
that
we
collected
as
part
of
the
repo
project
are
being
collected
as
part
of
the
criticality
project.
So
that's
that's
good,
but
but
abhishek
also
mentioned
that
you
know.
Reaper
is
based
off
of
this.
This
project
called
jesus
torrent,
which
has
been
inactive
for
a
while.
F
Now,
since
it
was
funded
by
microsoft,
there
was
a
bit
of
a
snap
over
with
how
it
moved
away
from
the
the
researcher
who
did
the
gstr
into
microsoft.
It
was
supposed
to
become
a
an
internal
research
thingy
that
never
went
anywhere
but
but
yeah.
So
I
was
curious
and
and
so
we
shake
shared
some
of
the
results
from
the
criticality
score
and
I
opened
up
the
csvs
and
I
looked
at
some
of
the
top
rated
projects
in
terms
of
criticality
and
all
of
them
were.
F
You
know,
familiar
projects
they're
all
some
of
the
most
popular
open
source
projects
in
their
respective
programming
languages.
So
we
said,
is
that
you
know
of
these
just
the
criticality
scores
very
similar
to
the
popularity
of
these,
and
I
looked
at
the
hacker
news
discussion
about
the
criticality
score
and
some
of
the
comments
on
that
course
were
also
asking
the
exact
same
question.
So
we
said
why
don't
we
just
run
a
quick
evaluation
of
that.
So
we
did
that.
F
I
posted
the
results
onto
the
google
group
conversation
I've
linked
that
conversation
in
the
agenda
as
well.
So
we
found
that
this
there
is
a
positive
correlation.
Yes,
there
is
criticality
score
is
positively
correlated
with
popularity.
I
use
the
number
of
stargazers
on
github
as
a
measure
of
popularity,
but
the
effect
is
not
as
strong.
I
expected
it
to
be
really
high.
You
know
80.8
spearman's
row
correlation
coefficient,
but
it
wasn't
anywhere
close
to
that.
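The evaluation Nathan describes, ranking criticality scores against stargazer counts, boils down to computing Spearman's rho over the two columns. A minimal sketch (the input pairs are hypothetical, and this small implementation stands in for e.g. scipy's `spearmanr`):

```python
def _ranks(values):
    # 1-based ranks, averaging over ties (standard for Spearman's rho).
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(scores, stars):
    """Spearman correlation: Pearson correlation computed on ranks."""
    rx, ry = _ranks(scores), _ranks(stars)
    n = len(scores)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Feeding it `(criticality_score, stargazer_count)` columns from the published CSVs would reproduce the kind of check described.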
F
So
so,
yes,
criticality
score
is
essential
in
terms
of
it
being
it
giving
us
more
signal
than
the
the
popularity
is
giving
us.
So
in
the
end
of
that
discussion,
I
think
dan
recommended
that
I
write
up
the
methodology
I
have
written
that
up.
I
just
want
to
contribute
that
back
and
do
I
don't
know
which
repository
I
should
be
contributing
that
practice.
I'm
looking
for
some
guidance
from
this
group,
it's
a
markdown!
F
So
if
you
want
to
put
it
as
a
wiki
in
in
a
project,
you
could,
if
you
want
to
put
it
as
a
markdown
file
somewhere,
I
could
just
send
it
to
this
group
so
yeah.
I
just
wanted
to
touch
upon
what
I
did
and
to
share
the
results
and
see
if
anyone
had
any
questions
or
follow
on
work
that
we
think
we
could
do
in
terms
of
this
criticality.
F
Okay, and would that main project be the WG Securing Critical Projects repository?
F
And so, another item on the agenda: again, it's related to what we're doing. The criticality score, as I mentioned, is similar to what Reaper did a few years ago. Obviously that data set is wildly out of date now; we last ran it in April of 2014, I think. I'm also curious to see how correlated the criticality score is with the score that we computed, and the reason I want to do that is because we already have this data set of a few million repositories that we scanned over an entire summer.
F
We
used
all
of
the
computer
at
the
university
to
do
that.
So
if
we
can
get
a
high
enough
correlation,
then
we
automatically
have
a
much
larger
data
set
of
critical
projects.
So
that's
something
that
I
want
to
sort
of
extend
this
analysis
into
in
the
near
future.
A
So the data scale issue has been a problem for us for a while. You mentioned GHTorrent; I didn't quite catch what you said about it. You said the researcher on it has kind of moved away from it, or it's not quite being actively updated anymore. Do you have any more information on that?
F
Yes. From what I know, Microsoft was sponsoring that project, so they were giving the project Azure Data Lake storage. I think they were slowly moving the GHTorrent repository, and the database out of MySQL hosted on-prem, onto Azure, because they gave them two years' worth of credits. That's the last I heard of it. They used to release an update every now and then with every new mining run they did of GitHub.
F
But
I
haven't
been
seeing
any
new
updates
coming
off
of
that
and
I
think
the
way
they
got
around
the
data
scaling
issues
that
I
think
someone
mentioned
in
the
email
chain
was
just
asking
the
community
to
share
their
github
tokens.
That's
pretty
much
what
we
did
with
reaper
as
well.
We
essentially
asked
our
entire
class
of
students
who
were
working
with
to
volunteer
that
github
tokens
for
us
over
the
summer.
So
that's
how
we
got
on
all
the
data
scale
issues.
F
But
again
this
was
much
much
earlier
and
I
think
github
has
gotten
a
little
bit
more
strict
with
the
the
api's
rate.
Limiting
so
yeah,
I
don't
know
how
they're
gonna
fly
now.
F
And in terms of the project I mentioned: Microsoft was working on another similar project called GH Insights, which was very similar to Reaper, and I was supposed to work on that project, but that again got shelved. It was supposed to be exactly that: an internal tool that you could use to collect metrics from repositories and understand how they're doing. But again, I haven't heard anything since about four years ago.
A
I know Abhishek's been asking some people at Microsoft and GitHub for ways to get more tokens and everything. I don't know if you've heard back anywhere yet.
G
There's another thing to mention with respect to the criticality score. Right now we are trying to just look at, let's say, critical projects, but another important point that people bring up is: which are the actual security-critical parts of those? There might be popular projects, but we only care about ones that expose attack surface or can harm user data. So we're still looking for ideas on how to decide that security part of it, if anyone has any ideas on that.
A
I think one example of that that many people probably saw this week is the sudo CVEs. If you look at that repo, it doesn't have too many stars, so it would probably not show up in any of these analyses we've done so far, but sudo is a pretty security-critical project. So yeah, any ideas for how to catch stuff like this in the analysis would definitely be useful.
F
Looking at the bounty programs some of these projects have could give us an indication of which of these are actually security-related and taking it seriously, like HackerOne. There's also, I think from the Linux Foundation or the Open Internet, one of them, this umbrella bounty program that's funded by a bunch of organizations, so they can pay bounties to anyone who finds security problems. We could start with those projects as well, because they're clearly concerned about security.
B
Yeah, maybe we start a doc with a bunch of ideas and just throw them all in there. I'll share it either this week or at the next meeting, and then any ideas, pop them in there.
A
All right: malware analysis. Jordan's been doing a bunch of work on this project. We've got a little repo for it; I'll drop the link in here. I did some hacking on it earlier this week too, so I thought I would do a demo if people are interested. Do you want to get started first, Jordan? I haven't had a chance to catch up with what you pushed, but I just saw you mentioned in Slack that something's ready.
Yeah
yeah
I'll
give
a
super
quick
update,
so
there's
kind
of
two
parts
to
the
malware
analysis
project.
The
first
is
a
project
you
can
imagine
it
could
even
be
its
own
product,
so
to
speak,
which
is
essentially
just
a
a
consolidation
point
that
takes
in
updates
from
all
the
different
package
managers
for
different
languages.
You
know
for
pi
pi
is
is
kind
of
the
first
one,
but
you
can
imagine.
Npm
is
another
one
and
it
exposes
those
on
like
one
queue
to
be
ingested
upstream.
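The consolidation point Jordan describes is essentially one adapter per registry feeding a single queue; a rough Python sketch of the normalization step (the event field names here are invented, real registry feeds differ per ecosystem):

```python
# Sketch: flatten per-registry package-update events into one shared shape
# before they go onto the common queue. Field names are hypothetical.
def normalize_event(ecosystem, raw):
    extractors = {
        "pypi": lambda r: (r["name"], r["version"]),
        "npm": lambda r: (r["id"], r["dist-tags"]["latest"]),
    }
    name, version = extractors[ecosystem](raw)
    return {"ecosystem": ecosystem, "name": name, "version": version}
```

Adding a new package manager then only means registering one more extractor, which matches the "make it easy to add new package manager support" goal mentioned below.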
I
That's the part I've been working on, and what I've merged recently is that we've migrated some of those bits to Cloud Run, to make it a little bit easier to add new package manager support. I've also been working on getting everything bootstrapped with Terraform, so that as new ones are added, the infrastructure is managed for us automatically, setting up all the GitHub Actions and stuff.
I
Once a PR is accepted, all the right infrastructure spins up and things just kind of magically work. So, more to come on that. We merged the Cloud Run stuff last night, except it was so late that when I merged it, I didn't realize the PR merged into a branch, so we still have to get it into master, or main, rather; that can be done tonight. The Terraform stuff I was going to get done last night, but GitHub Actions helpfully had an issue when I was trying to get those tested and merged in, so more to come there. But that's the first part of the process: getting all the packages upstream to some of the work that Dan's been working on.
A
Cool. Yeah, so I'll share my screen and do a little demo. I've been hacking on the second half of this. It's all based on Jordan's original work that he got set up here in this OSS malware repo; he published a blog post a while ago. The basic idea here, the type of malware we're trying to detect, is packages that do bad things during installation.
A
So
a
lot
of
programming
languages
and
package
managers
like
pi,
pi
and
npm.
Let
packages
run
arbitrary
scripts
as
they're
installed,
and
there
have
been
a
couple.
I
think,
even
this
week,
there's
another
report
of
npm
packages,
stealing
like
discord
credentials
from
people's
home
directories
during
installation
and
using
that
to
do
bad
things.
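For context on the mechanism: installing a package executes code the package author wrote, with the installing user's privileges, so install-time credential theft needs no exploit at all. A tiny Python simulation of that idea (the "untrusted" script and the path it reads are invented for illustration):

```python
# Sketch of why install-time hooks are dangerous: pip effectively executes
# the package's setup script, and nothing sandboxes it. We simulate the
# "run the untrusted script" step with exec().
UNTRUSTED_SETUP_PY = """
import os
# A real malicious package would read and exfiltrate this file's contents.
touched.append(os.path.expanduser("~/.config/discord"))
"""

def simulate_install(setup_source):
    """Run an untrusted 'setup script' and record which paths it targeted."""
    touched = []
    exec(setup_source, {"touched": touched})
    return touched
```

The detection approach described in this demo works because this code has to issue real file-open syscalls at install time, which is exactly what the sandbox watches for.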
A
So
jordan's
original
code
here
ran.
You
can't
correct
me
if
I'm
wrong,
but
it
ran
the
installation
inside
of
a
docker
container
and
then
watched
what
happened
with
like
a
sysdig
and
tcp
dump
to
see
if
any
network
connections
got
made
and
what
files
got
opened
and
that
kind
of
thing
was
there
anything
else.
You
were
looking
for
jordan.
I
No, and I think an important point is that it required a dedicated host that had to always be running, so that was kind of the downside of that approach, right?
A
Yeah. So somebody pointed me at the Falco project earlier this week, which I had heard about but hadn't ever tried before. It's kind of exactly what Jordan did, but with sysdig inside of Kubernetes clusters, so you can configure it to watch all containers inside a cluster. So I decided to give it a try on this and wrote up a couple of configs; I'll show what it looks like and then how it works.
A
I set this up to ignore some files that you would expect a package manager to touch: downloading things over SSL, resolving DNS, and actually executing the Python script itself. I wrote these rules, then put together a little Go program to parse the results in real time as they get generated; Falco just exports them as JSON over HTTP. So this thing listens to those, does a little bit of correlation, and then uploads the results to GCS, which is Google Cloud Storage.
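The correlation step described here amounts to filtering Falco's JSON alerts against an allowlist of expected paths. A rough sketch in Python rather than Go (the allowed prefixes are made up; `output_fields` and `fd.name` follow Falco's JSON alert format):

```python
import json

# Directories a package installation is *expected* to touch (hypothetical).
ALLOWED_PREFIXES = ("/usr/lib/python3", "/tmp/pip-", "/root/.cache/pip")

def suspicious_files(falco_events):
    """falco_events: iterable of JSON alert strings as Falco emits them.

    Returns the file paths touched outside the allowlist.
    """
    hits = []
    for line in falco_events:
        event = json.loads(line)
        path = event.get("output_fields", {}).get("fd.name", "")
        if path and not path.startswith(ALLOWED_PREFIXES):
            hits.append(path)
    return hits
```

A package install that opens `~/.ssh/id_rsa` would surface here, while ordinary pip cache traffic would not.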
A
I was trying to see how it would scale, so I grabbed the top couple hundred packages from PyPI and npm, stuck them in a text file, and it seems to work pretty well. There's a little script that just creates a Kubernetes pod for each one of these that does an installation and then gets deleted. It takes a couple of minutes to get through 100 of them in the pretty small cluster I have, so it seems like it'll scale pretty well. We can do Python.
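The pod-per-package script might look something like this sketch (the image name and naming scheme are hypothetical, not the repo's actual code):

```python
# Sketch: build one short-lived Pod manifest per package under analysis.
# Each pod runs the install once and exits; Falco watches from outside.
def pod_manifest(package, ecosystem="pypi"):
    install_cmd = {
        "pypi": ["pip", "install", package],
        "npm": ["npm", "install", package],
    }[ecosystem]
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": f"analysis-{ecosystem}-{package}".lower()},
        "spec": {
            "restartPolicy": "Never",  # run the install once, then stop
            "containers": [{
                "name": "install",
                "image": f"{ecosystem}-analysis:latest",  # hypothetical image
                "command": install_cmd,
            }],
        },
    }
```

Because the scheduler queues pending pods, no extra queuing layer is needed, which matches the "Kubernetes handles the scheduling and queuing" point in the demo.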
A
This
should
create,
can
see
this
yep,
it's
creating
a
pod
for
every
single
package.
Here
I
didn't
build
any
queuing
myself.
Kubernetes
handle
the
scheduling
and
cueing,
so
the
pods
will
just
kind
of
sit
pending
until
there's
room
for
them
to
execute.
You
can
see.
I
think
I
have
the
logs
going
here
for
the
go
program
so
eventually,
as
these
pods
actually
start
up
run,
we'll
start
to
see
a
whole
bunch
of
things
here.
A
This
is
from
an
earlier
run,
but
right
now,
basically
just
logs
every
single
file
outside
of
those
allowed
directories
that
is
touched
by
the
installation
of
each
one
of
these
packages
and
then,
while
we
wait
for
this
to
actually
go,
you
can
see
what
the
results
look
like.
It
downloaded
them
from
a
previous
run.
So
when
you
install
the
aws
sdk
and
I'll
format,.
A
What's
happening,
oh
whoops,
so
you
can
see
that
when
you
saw
the
aws
sdk,
it
just
touches
these
files,
not
really
sure
what
it's
doing
with
local
time
or
anything,
but
that
seems
relatively
safe.
I
didn't
see
anything
crazy
in
here
when
I
was
looking,
but
I
also
didn't
do
any
deep
analysis
on
this.
I
think
the
next
kind
of
cool
step
would
be
to
hook
this
up
into
some
database
like
a
bigquery
or
mysql.
A
And
yeah
this
is
all
published
on
gcs.
So
if
anybody
wants
to
download
it
and
play
around,
they
can
or
add
other
packages
to
these
lists
and
we
can
run
it
more.
I
tried
to
run
it
on
the
three
npm
packages
that
were
published
this
week
to
see
if
it
would
catch
anything,
but
they
were
actually
taken
down
and
I
can't
find
the
code
for
them.
So
I
can't
tell
if
I
would
have
caught
those
I
was
too
slow,
so
we
can
see
the
logs
happening
here.
E
Yeah, I was going to ask if you have tested it out with known-bad samples. Has anyone?
I
There were a couple of people that reached out after my initial analysis who had done some work like this. One of them had a paper that was called something like the Backstabbers Toolkit; I'll have to find it. They maintain these libraries of samples and they've offered them up, so I'll get their contact details and send them over, because maybe that'd be a good test base for you.
E
Awesome
yep
that
that's
actually
the
same
group
I
was
going
to
show
backstabbers
kit
folks
have
a
repo,
it's
not
public,
but
if
you
request
access-
and
you
know-
try
to
show
that
you're
not
trying
to
hurt
other
folks
they'll
give
you
access,
but
that
might
be
a
useful
way
to
test
out
some
things.
A
So
it's
not
actually
something
sitting
in
the
registries.
It's
like
just
some
code.
I
can
test
out
myself.
They.
A
Cool
give
that
a
try.
I
know
you
also
did
network
analysis,
jordan.
I
tried
to
get
that
working,
but
I
was
running
into
some
trouble
with
falco
and
decided
to
drop
that
for
now,
if
something's
gonna
exfiltrate
your
credentials
over
the
network,
it
probably
has
to
read
them
first,
so
I
figured
this
is
a
good
place
to
start.
I
Yeah,
that's
that's!
These
can
be
taken
in
piecemeal
problems
right
because
I
had
to
do
you
know
I
had
to
do
some
magic
to
get
that
to
work.
You
know
pretty
much
hooking
up
the
networking
up,
the
dc
beat
up
and
that's
that's
the
hard
part
right.
It's
like
how
do
we
get
this
part
of
it
working
in
both
a
scalable
way,
but
also
in
a
you
know.
I
Weird
packages
aren't
going
to
slow
down
the
whole
system
kind
of
way,
because
that's
something
that
I
encountered
was
that
you
know,
since
the
internet
is
just
like
the
worst
place
ever.
You
know,
there's
there's
stuff
on
there,
that
just
it
doesn't
do
what
you
expect
it
to
do.
It
doesn't
cleanly
install
it
takes
forever
and
and
if
there's
limits
and
everything
going
through,
one
kind
of
persistent
host,
for
example,
then
that
caused
me
a
number
of
scalability.
A
Yeah
problems
was
hoping.
This
would
just
be
nice
because
I
can
add
more
notes
to
the
kubernetes
cluster
or
whatever
and
run
it
and
it's
pretty
horizontally.
Scalable.
I've
had
some
other
ideas,
too
kind
of
making
the
container
environment.
These
are
running
in
a
little
more
attractive
to
things,
trying
to
steal
credentials
like
putting
fake
credentials
in
locations
and
that
kind
of
thing
just
to
see
if
anything
opens
up
those
almost
like
a
little
honeypot
technique.
Anybody
has
other
ideas
or
links
to
stuff
like
that.
A
We
can
try
sticking
it
in
right
now,
it's
just
a
basic
python
and
a
basic
npm
container,
but
we
could
definitely
try
to
disguise
things
a
little
more
dynamic.
You
know
code
execution,
so
somebody
could
look
and
see
that
it's
part
of
the
system
and
then
not
do
bad
behavior.
If
this
were
to
get
productionized,
so
I'm
gonna
have
to
come
up
with
ways
to
hide
it,
and
what
I
love
is
that.
I
...whenever the blog post was published and people on Hacker News were giving opinions, one of the things I heard was, "well, they would just fingerprint the system and then not detonate their malware." One of the benefits is that by having such a vanilla container that it's running in, one that matches a lot of people's normal production instances...
I
...if they choose not to detonate their malware, then they're actually missing out on a huge footprint of people, because that's otherwise a legitimate production system. There may be other ways they could do some complex fingerprinting, but it certainly raises the bar pretty significantly.
E
Sure. I would say, just in general, in testing you should try to make it look as much as possible like the real thing, but in this case especially so, to counter the anti-detection mechanisms a lot of these folks have. I'm sorry, go ahead.

I
I was just going to say: they're going to have to make syscalls to fingerprint the system.
E
This actually raises a larger question; I don't have a solution. If everything is public, then the smarter attackers will look at your list of the things you're looking for and try to avoid it. The virus writers have very much this issue: the first thing you do after you write a virus is send it to VirusTotal and keep sending it until it doesn't get detected anymore.
I
And
what
I
like
is
you
know
it
goes
back
to
mike's
point,
which
is
let's
say
that
that
we
say
we're
looking
at
all
sys
calls
and
all
network
traffic.
I
will
tell
you
that
up
front
that
that's
what
I'm
looking
for.
So,
if
you're
gonna
make
malware
just
don't
use
syscalls,
don't
use
network
traffic,
you
know
and
that
you
know
with
how
much
that
raises
the
bar
it
it
and
really
you
know
the
way
I
responded
to
folks
the
they
were
offering
up
those
kind
of
perspectives.
B
Do you think we could potentially use this for detecting which projects are security-critical, just tying back the two conversations? You know, see what files are being touched or anything, and this could give us a better indication of which projects might be more vulnerable to larger attacks.
A
I remember when Jordan first presented, somebody had talked about this. This actually just does installation, but people have talked about detecting at run time too: after installation, run the package and have it exercise some basic code paths, because other packages don't do anything at install time but set up hooks, so that when you import them in a real app they'll look around for stuff.
G
One thing I wanted to mention here: I really feel these package managers allow building from anything, not just the exact source code that a package is supposed to be built from. So, a completely different idea to think about: maybe we could have some system which just builds things from source and puts a verified label or something like that onto them, which people can trust more. Right now it's more like you can just build anything and upload anything.
G
Obviously
there
is
a
flaw
like
you
can
inject
the
malware
inside
the
source
itself,
but
right
now
it
just
feels
like
a
wild
wild
west
like
do
completely
anything
and
upload
any
binaries
to
a
package
manager.
A
Well,
chris
actually
had
an
item
that
he
wanted
to
discuss
about:
reproducible
builds,
which
is
kind
of
exactly
that.
Do
you
want
to
chat
about
that
today,
chris,
or
is
that
for
a
future
topic.
C
So
the
reproducible
builds
chat
is
with
chris
lamb.
C
That,
if
I'm
not
mistaken,
is
scheduled
for
either
the
18th
or
the
25th
of
february.
So
hopefully
in
the
next
meeting
or
in
the
next
two
meetings.
E
If
I
can
jump
in
here,
real
quick
because
I've
I've
been
mentioning,
the
value
of
reproducible,
builds
for
solar
winds
and
you're,
absolutely
right,
pi,
pi,
npm
and
so
on.
You
can
push
packages,
but
they
don't
necessarily
have
to
do
anything
with
the
code
that
was
posted
and
that's
a
big
problem.
The
ch.
I
do
agree
that
in
the
long
term
we
should
have
a
way
to
verify
wait
a
minute.
I
only
want
to
install
python
packages
or
npm
packages
if
they've
been
verified.
For
example.
E
One
challenge
is
that
a
lot
of
packages
currently
aren't
reproducible.
So
if
there's
any
date
time
stamps
in
there
there,
you
have
to
deal
with
that.
If
there's
any
ordered
collection,
if
there's
any
collection,
that's
not
has
a
for
that
doesn't
have
a
forced
order.
You
need
to
change
it
to
force
the
order
say
by
doing
a
sword.
It's
not
rocket
science,
but
it
does
typically
in
typical
systems.
It
does
require
changes
because
nobody's
ever
asked
can
the
bill
be
reproducible
and
so
most
a
lot
of
builds
aren't.
E
That's probably true too, but I think it would be totally doable to modify PyPI so that, in order to submit a package, you either have to, or are strongly encouraged to, point off to a repo, and then before it gets installed it verifies that the package can be reproduced, or maybe several folks verify they can reproduce it. You could still have people upload it; just verify it. But again, it would require changes.
J
Yeah, I'm working on something similar right now for PyPI; I'm trying to build a CI pipeline. (Who is speaking? This is Martin Kardon.) So I'm working on something like this for PyPI, where it's essentially a CI pipeline that clones the GitHub repository, tries to build the Python package, and then verifies it against what is uploaded to PyPI, so it can give a check mark that can be included on the GitHub repository, like a badge saying this has been verified independently, which is basically the idea of reproducible builds.
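The verification step at the heart of such a pipeline is "rebuild from source, then compare". A minimal sketch of the comparison (real pipelines compare individual archive members and normalize metadata rather than hashing whole files, and the names here are hypothetical):

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_rebuild(uploaded_artifact: bytes, rebuilt_artifact: bytes) -> bool:
    """True if the artifact rebuilt from the repo matches the registry's copy."""
    return sha256(uploaded_artifact) == sha256(rebuilt_artifact)
```

Anything from an embedded timestamp to an unordered file list breaks this byte-for-byte match, which is exactly the reproducibility problem discussed next.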
J
I agree there's a problem on Python specifically: they include the timestamp of when the source code was compiled, which often breaks it. But they actually introduced a backport, also to Python 2, where you can set a default value specifically so you can verify that it's a reproducible build, and it's been completely fixed in Python 3.8, I think, or 3.7.
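The fix referred to here is PEP 552's hash-based `.pyc` files (Python 3.7+), which replace the embedded source timestamp with a hash of the source, making byte-compilation deterministic. A small standard-library sketch:

```python
import py_compile

# Since Python 3.7 (PEP 552), a .pyc can embed a source *hash* instead of
# a timestamp; compiling the same source then yields identical bytes.
def compile_reproducibly(source_path, out_path):
    py_compile.compile(
        source_path,
        cfile=out_path,
        invalidation_mode=py_compile.PycInvalidationMode.CHECKED_HASH,
    )
    with open(out_path, "rb") as f:
        return f.read()
```

Compiling the same source file twice with this mode produces bit-identical `.pyc` output, regardless of when the compilation ran.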
J
So
it's
actually
already
possible
right
now,
but
nobody
is
using
that
yet
so
I'm
trying
to
create
a
similar
tool
like
that
and
there's
also
maybe
a
related
issue-
is
that
there
are
different
kind
of
formats
in
python.
So
there
is
a
wheel
there
are
sds,
etc
and
in
the
pep
specifications.
J
Wheel
are
also
supposed
to
be
like
this
no
code
execution
during
the
install
time,
but
everybody
is
ignoring
that.
Basically,
so
one
of
the
checks
that
I'm
trying
to
do
is
that
you
can
also
prove-
or
like
also
display,
this
kind
of
verification
or
page
that
when
you
are
installing
some
kind
of
packages,
there
is
no
kind
of
code
execution
being
done
during
the
install
time,
which
is
what
the
wheels
are
supposed
to
do.
But
again
nobody
is
doing
that.
A
Docker Hub has support for this: when you publish a container image on Docker Hub, you can build it yourself and push it, or you can just point Docker Hub at the Dockerfile and they build it for you, publish the logs, and then add a little badge attestation saying that, yes, they actually built it from the source code at this time. It's not the publisher's responsibility; it's kind of the Docker Hub system's.
A
So
it's
not
quite
reproducible,
but
it
is
a
nice
like
if
a
third
party
built
it
and
can
make
that
out
of
station.
E
But SolarWinds is showing that maybe hoping nobody will ever break into you is not a good plan.
J
Yeah. I'm now building the documentation, and let me post a link to the Aura project; I'm trying to document there as much as possible, but it's still very much a work in progress. Part of that is already published, which is called "adif"; that basically tries to get the two sources of data, which is essentially also checking whether the build is reproducible.
J
Now,
I'm
trying
to
build
that
pipeline
that
tries
to
simulate
building
this
package
and
verify
against
the
pipeline,
but
that
might
still
take
a
while
cool.
Thank
you.
K
Yeah, this is Chris Horn. I posted a couple of things to the Google working group meeting notes, just ideas I've been thinking of in terms of identifying critical software. There are two ideas there, and basically what I was thinking about is: what are the signals you should pay attention to in order to identify them?
K
So
one
is
basically
like
a
economics
market
perspective,
so
the
packages
that
have
the
highest
exploit
market
values
are
the
ones
that
people
care
about
right
and
then
the
second
one
is
it's
kind
of
around
the
idea
of
what
martin
was
talking
about
in
terms
of
looking
at
dependency
data-
and
you
know
I
just
I've
known
over
the
last
couple
years
now.
I
guess
department
of
commerce
ntia
is
the
national
transportation,
something
like
that
there
it's
it's
like
this
one
guy
in
in
department
of
commerce,
who's,
pushing
software
bill
materials
formats.
K
So
I
think
last
meeting
I
heard
that
people
had
been
looking
at
software
composition,
analysis
data-
I
don't
know
the
source
of
that
data,
but
I
do
know
that
there
are
standards
initiatives
through
mitre
and
commerce.
Alan
friedman
is
the
guy
at
ntia,
dr
friedman
and
then,
but
that
that
might
I
I
I
know
there
are
file
formats
called
cyclone
and
spdx.
I
don't
know
a
lot
of
the
details
here,
but
that
might
represent
data
sources
that
you
could
do
your
dependency
analysis
on.
H
Very much so; that's the whole purpose of them. I'm Kate, by the way, and I work with Alan; I co-chair the Formats and Tooling working group underneath the NTIA effort. One of the elements of an SBOM is literally having what it depends on and what it contains, in this case, so that information should be there, and it's there so that the analysis can get done, going one hop down and so forth, recursively, as needed.
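The recursive "one hop down" walk over SBOM relationships can be sketched over a toy in-memory dependency graph (real SPDX and CycloneDX documents encode the same depends-on/contains relationships, just in their own serializations):

```python
# Sketch: collect all transitive dependencies reachable from a root
# component, following "depends on" edges recursively.
def transitive_deps(sbom, root):
    """sbom: dict mapping component name -> list of direct dependencies."""
    seen = []
    stack = [root]
    while stack:
        comp = stack.pop()
        for dep in sbom.get(comp, []):
            if dep not in seen:
                seen.append(dep)
                stack.append(dep)
    return sorted(seen)
```

Run over a corpus of SBOMs, a walk like this is one way to surface which low-level components everything ultimately depends on, which is the criticality signal discussed earlier.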
H
One of the activities that group's working on is coming up with some examples and a reference corpus to be used, so I'd say: work in progress, check back in about a month or two's time.
E
As
far
as
broadly
to
identify
criticality
somebody's
already
noted
libraries.io
has
some
the
harvard's
also
gathered
data
from
specific
sca
vendors.
E
No, okay. So libraries.io captures some package data if you're using a language-level package manager. So, for example, if you're using Ruby it'll yank up your Gemfile; if you're using Python, requirements.txt; you know, package.json for JavaScript. So it looks at those files and such.
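[Editor's note: the manifest scanning described here might look something like the naive sketch below. This is illustrative only, not libraries.io's actual implementation, and the parsing is deliberately simplistic.]

```python
import json
from pathlib import Path

# Manifest files a language-level package manager leaves behind.
MANIFESTS = {
    "requirements.txt": "python",
    "package.json": "javascript",
    "Gemfile": "ruby",
}

def scan_repo(repo_dir):
    """Naively pull dependency names out of any known manifest files."""
    found = {}
    for name, lang in MANIFESTS.items():
        path = Path(repo_dir) / name
        if not path.is_file():
            continue
        text = path.read_text()
        if name == "package.json":
            data = json.loads(text)
            deps = sorted(data.get("dependencies", {}))
        elif name == "requirements.txt":
            # Strip version pins and comments; real parsers handle far more.
            deps = [line.split("==")[0].strip()
                    for line in text.splitlines()
                    if line.strip() and not line.startswith("#")]
        else:  # Gemfile: grab the quoted name after each `gem` call
            deps = [line.split('"')[1]
                    for line in text.splitlines()
                    if line.strip().startswith("gem ") and '"' in line]
        found[lang] = deps
    return found
```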
K
Sure, so yeah, right. I think what you're saying is that there are multiple ways to invoke dependencies, and not all the methods are good at chasing all the ways, right? So you could have a direct call, where you just make a call right to the executable: you fork a process and call it, right? Or you can include, or you...
E
Right, right. And that obviously creates a dependency but wouldn't be captured by a language-level package manager, which is why there are these formats like SPDX and CycloneDX, which are able to capture, you know, arbitrary language dependencies. But now you've got to figure out how to... it's great that they can capture that data; now you've got to figure out how to do that.
H
Yeah, that's pretty much about it. The relationships that are defined catch most of the build relationships, and if you see something that's missing that you know you need to reflect, just open an...
E
Issue. I tried to add some notes to our working group meeting notes; hopefully they help.
B
I had one other thing I was gonna bring up today, but... yeah, sure, why not? We've got about 10 minutes left, and I've been brainstorming. So we've been talking about identifying critical projects a bit today, but I've been thinking about the flip side, where we actually inspire projects to make improvements to their security posture. And of course the Security Scorecards project we started in this working group does a bunch of security checks.
B
So I'd be curious what people think about some of these ideas, and if you have some of your own. I think the first one that comes to mind is like one of those badges on GitHub; I'm not even sure how hard it is to set that up.
B
But basically, if you, you know, meet a defined set of those security checks, we'll give your project one of those badges. That's one idea that I had, and I think we'd have to come up with either a tiering system or, you know, figure out which criteria gets you what badge. And then the other idea that I thought of was just a bounty rewards program. So, you know, we've been learning that it's really hard to just throw someone at a project to help out.
B
E
If I can jump in real quick: at least as far as making it easy to implement the tiering system stuff, I mean, the CII Best Practices project, we specifically had to do that. You know, people modify their README to point off to something, and as they get better, or hopefully not worse, on the badging project, it shows their current state. So it's not hard to do.
E
You know, we've been encouraging folks, for example, to earn your best practices badge, and of course that is a tiering system, and it forces folks to, you know, write automated tests and use a static analysis tool and all that goodness, to tell people how to report vulnerabilities, the kinds of things you'd kind of be hoping they'd do anyway, but they may not.
E
I love the idea of a bounty program. I think you can't open it up to everybody, at least not if it's money, but at least for some selected list.
C
Yeah, they did two iterations: it was FOSSA 1 and FOSSA 2. I wasn't able to go really into the nitty-gritty of it, but, as you see a lot with really big efforts, a lot of the value kind of gets lost in the middle.
C
So that's about a hundred thousand dollars per CVE, so not a great hit rate. But just the fact that it was a government-funded initiative to improve open source software, specifically through bug bounties, audits, things of that nature, I think is a great first step. But in terms of execution, you know, I think it could have been done better, though of course that's easy to say after the fact. I mean, it was a great effort overall. Okay.
G
I feel like many of these programs are too vague, or too general. In the case of Scorecards, we can have a very specific set of criteria, like tiers: hey, if you satisfy all these checks, let's say basic CI integration for fuzzing and other things like that, that could be a good reward in itself, and we could come up with a good magic number that is rewardable for the time or effort that the developer puts in. So I just wanted to see what you guys think.
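[Editor's note: a tiering scheme like the one proposed here could be as simple as counting passing checks. The check names below echo real Scorecards checks, but the results and tier cutoffs are invented for illustration, not actual Scorecards output.]

```python
# Hypothetical check results: check name -> passed?
checks = {
    "CI-Tests": True,
    "Fuzzing": True,
    "Branch-Protection": True,
    "Signed-Releases": False,
    "SAST": True,
}

# Invented tier cutoffs: minimum passing checks for each badge level.
TIERS = [(5, "gold"), (4, "silver"), (2, "bronze"), (0, "none")]

def badge_tier(results):
    """Map the number of passing checks to a badge tier."""
    passing = sum(results.values())
    for cutoff, tier in TIERS:
        if passing >= cutoff:
            return tier
    return "none"

print(badge_tier(checks))  # silver (4 of the 5 checks pass)
```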
B
Cool, all right, I think that's a wrap for today. Thank you, everyone, and yeah, feel free to add things to the agenda if you want to discuss anything in the upcoming meetings. So, all right, have a good day. Thanks.