From YouTube: SLSA tooling meeting (August 19, 2022)
Description
No description was provided for this meeting.
A
So what do you guys think is the mission of the special interest groups: the death of the SLSA meeting, no meetings? I'm kidding, but yeah.
B
Yeah, no, I do think these meetings will probably calm down after a few weeks. You know, once we hit 1.0 or whatever, I'm sure it'll mostly go back to where it was, and we'll probably spin it back up every now and again for some big pushes. The big thing is just making sure that we have that aligned vision for 1.0.
B
So that's what Mark is leading up, and then there's the meeting that Melba's been leading up, the positioning one, which is just trying to make sure that, given that even just yesterday or whatever there's yet another supply chain framework that has come out, we ask what we can do to align that work and make sure that it maps well together. Obviously there's always going to be more than one, but we want to make sure that generally we're not saying anything that's just completely out of line with what other people are also saying. So we're doing that, and then also trying to see where we can get in with some folks in the government, to make sure that they understand SLSA and those sorts of things. And by government I mean the US Government, but also world governments in general, as they are also, I believe...
B
The European Union is also starting to pull that up. And then the tooling meeting is just saying, hey, what can we do to start really hitting some of the features that are maybe missing in some of the tools that are out there?
B
So yeah, I put the document in the notes; feel free to add yourself.
A
Yeah, I'll go. This is Eric Tice. You know, Mike, you and I talked about me helping you with this, but I never saw any email, and I missed the Slack message saying when the meeting started, so I apologize for not being more proactive. I work for Wipro Technologies, I am the Director of Technical Consulting, and I run our Center of Excellence for the Oslo team, and hopefully I can get more involved in this here in the near future.
B
Cool. Sebastian?

B
Okay, I believe everybody else has been here before... or actually, I'm...
A
All right. I'm Eric Herget; I'm at Red Hat on the product security supply chain team, and so I'm interested in getting involved in SLSA somehow, and the tools piece of it looked like a good place to go. I come from a technical background, so that's why I'm here.
B
Wow, glad to have you. So before we get to any items on the agenda, does anybody have any sort of updates, or anything else that they wanted to bring up from the tooling perspective?
B
All right, so we can get to the agenda.
B
So there's really only one big item on the agenda today. For folks who are new, because I do see there's a few new folks: over the past few weeks, what we've been doing is splitting stuff out into the different tooling categories. So this is stuff like SLSA builders and producers of the actual SLSA metadata, meaning things that generate the SLSA metadata; distribution and discovery of that metadata; and then stuff like verification and decisioning.
B
So
things
like
that
can
actually
verify
the
right.
Things
are
signed
the
right
ways
and
that
can
also
make
policy
decisions
or
perhaps
things
tools
they
can
also
audit
against.
You
know
a
salsa
claim
like
oh,
let's
go
back
and
check.
Does
that
make
sense
that
that
claim
is
is
being
made
and
then
also
then,
there's
kind
of
like
a
couple
other
categories
that
are
more
just
like
integration
so
like?
How
are
we
integrating
with
other
package
ecosystems
right?
B
So certain things, like the Sigstore world, can very easily integrate with OCI, but then there's stuff like npm. npm recently announced that they're going to be using Sigstore plus SLSA for some of their stuff, so what can we do to start integrating some SLSA stuff into npm, or that sort of thing. Then there's another category, which is just more like:
B
Are there any things that we should be doing to set up the adoption team for a good time, when it comes to writing up documentation or a getting-started-with-SLSA guide, that sort of thing? And then there's just general other tools. And all this stuff... actually, I realize I'm not sharing my screen here.
B
But anyway, the other one is other tooling that's related. So these are things like Scorecard, and generally SBOM: how are we integrating with other SBOM tooling, or do we need to? That kind of thing.
B
Then finally, there was some stuff regarding how we wanted to...
B
That's otherwise not working for me right now. But the last one was also endemic tools that might have gaps. So these are things like Jenkins, for example. I know there's a few different people who are working on Jenkins plugins to generate SLSA provenance, but are there additional things that are perhaps gaps?
B
Are there things that we should also be doing from the perspective of: here's how you run Jenkins in order to make it SLSA compliant? Because a lot of the stuff today is that Jenkins sort of lets you easily shift builds and whatever, which would kind of keep it from some of the higher SLSA levels unless you're restricting that sort of thing. So...
B
That's really kind of where we split some of that stuff up. I noticed Naveen's not on, but he's been working on building out a Google Sheets spreadsheet for that stuff. And let me try this one.
B
Cool, cool. So yeah, that's kind of where we ended up. We've done a little bit of prioritization, and last week we came down to looking at distribution and discovery as one of the big gaps.
B
We do have a bunch of different tools on the build and production side that are already pretty far along. There's a lot of folks in this group and other groups doing stuff: there's the GitHub generator, there's ones for GitLab, there's using GitHub Actions and Google Cloud Build, there's Tekton, there's Trustka, there's a bunch of different things. If you have a tool that also does this, feel free to add it to the list.
B
But when it came to distribution and discovery, there is kind of a gap, because on verification and decisioning we have tools like cosign, we have a bunch of tools that can actually verify, we have a bunch of the Sigstore stuff, we have things like Kyverno which can apply admission control policy, all that good stuff. But when it comes down to it, we actually have two problems. One is distribution of the SLSA provenance for things that are outside of OCI.
B
So these are things like npm packages, Python packages, random files. Some of that information is being stored in Rekor, but it's not the easiest to discover. If you want to go and say, okay, I just downloaded a file, is it in Rekor? The only way to do that today is via the hash, and if somebody hasn't recorded it to Rekor, but they've recorded it somewhere else, it's impossible to discover.
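The hash-based lookup being described can be sketched roughly as follows. This is a minimal sketch that assumes Rekor's search-index endpoint (`/api/v1/index/retrieve`); `REKOR_URL` is just the public instance, and actually sending the request is left out:

```python
import hashlib
import json

# The public Rekor instance; a private deployment would differ.
REKOR_URL = "https://rekor.sigstore.dev"

def artifact_digest(data: bytes) -> str:
    """Return the sha256 digest of an artifact. For a random downloaded
    file, this hash is the only handle you have for a Rekor search."""
    return hashlib.sha256(data).hexdigest()

def rekor_search_request(data: bytes) -> tuple[str, str]:
    """Build the (url, json_body) pair for a search-by-hash query
    against Rekor's index. Sending it, and paging through any matching
    entries, is left to the caller."""
    body = json.dumps({"hash": f"sha256:{artifact_digest(data)}"})
    return f"{REKOR_URL}/api/v1/index/retrieve", body
```

If nobody has recorded the artifact in this Rekor instance, the query simply comes back empty, which is exactly the discovery gap being discussed.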
B
So that's kind of where we landed: how do we figure that sort of thing out, actually?
B
Here, we just renamed it. And so there's a couple of tools: cosign can help out with OCI, because it just looks for the thing alongside the package. There was also some discussion about TUF, because TUF makes it easy to distribute the root of trust, so that you can actually do the distribution and discovery of the keys.
B
And
then
you
know
today
right
a
lot
of
stuff
is
potentially
you
could
just
store
the
file
somewhere
and
folks
know
where
the
file
should
live
or
some
sort
of
convention
for
the
name.
So
you
move
file,
you
can
do
that.
B
Okay,
so
that's
where
we
ended
up
want
to
get
folks
thoughts
on
where
we
were.
What
what
folks
wanted
to
sort
of
focus
on
here,
if
there's
any
sort
of
big
things
in
that
sort
of
distribution,
Discovery
like
if
there's
areas
of
the
specific
tools
that
you
think
need
to
have
a
feature
or
if
there's
a
tool,
that's
not
listed
that
should
be
listed.
A
I
think
so
one
thing,
that's
kind
of
been
smaller.
On
my
mind,
I
think
I
was
talking
about
in
the
last
meeting.
I
was
in
maybe
two
weeks
ago.
Related
to
you
know
what
do
we
do
about
the
metadata
and
I
think
we
have?
You
know
some
of
that
decisioning
below
right
for
the
tooling,
but
I
think
the
the
the
higher
value
of
salsa
is
actually
like
sure
like
the
metadata
exists,
but
you
know
what
action
should
we
take
on
that?
B
Sure, yeah. There's definitely, I think, a nice relationship between the distribution and discovery and the verification and decisioning, because along those same lines, you want folks to be able to say: I trust SLSA metadata that's coming from this location, or coming from these specific keys, and so on.
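That kind of trust statement can be illustrated with a small sketch. The field names and the source/key identifiers here are simplified stand-ins, not the real SLSA provenance schema:

```python
# Sketch of "I trust SLSA metadata from these locations / these keys".
# The attestation dict is a simplified stand-in, not the real schema,
# and both allow-lists below are made-up examples.
TRUSTED_SOURCES = {"https://rekor.mirror.example"}
TRUSTED_KEY_IDS = {"key-ci-prod-2022"}

def accept_attestation(att: dict) -> bool:
    """Accept only attestations fetched from a trusted location and
    signed by a key id we have explicitly allow-listed."""
    return (att.get("source") in TRUSTED_SOURCES
            and att.get("key_id") in TRUSTED_KEY_IDS)
```

A real verifier would check the signature cryptographically rather than trusting a `key_id` field, but the policy shape, location plus key, is the same.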
B
One of those problems is the ingesting of all the public SLSA metadata that's available, and other sorts of... not just SLSA, but other sorts of in-toto-based attestations and that kind of thing, and building a... sort of looking at the graph of things and being able to say: okay, here's the metadata that's associated with these things, from a verification... sorry, not verification, from a distribution and discovery perspective.
A
I'm so sorry, just to add to my point, if you don't mind; I apologize, Isaac. One thing I'm thinking about is: sure, there's a happy path where the SLSA metadata says it came from the repo you expect it to, et cetera, et cetera. But what do we do when it doesn't? When it didn't come from the repo we'd expect, or it wasn't built by...
A
...a runner that we expected it to be run by? That's an interesting question that I have in my mind. This is kind of what I was trying to get at, I suppose.
B
Yep. Isaac?
E
Yeah, I had a quick question about GUAC, and then a comment related to something else you said. So, in terms of GUAC, I'm kind of getting up to speed with the scope and the intent there. My understanding so far is that GUAC was kind of an almost offline intelligence thing. Are you saying that potentially it's a real-time decisioning thing for use in production? Like, hey, it could be part of an admission controller or something?
B
The idea is: hey, we've collected all the metadata documents that folks are producing, and, to be clear, all the public metadata documents, and put them into a database that makes it easy for folks to run queries against. And once again, these should not be real-time sorts of queries, because we anticipate them potentially taking several seconds, or, you know, for larger ones, maybe even longer. But the idea also would be that folks could cache it, and that kind of thing.
E
Got
it
and
then
the
other
thing
was
just
to
kind
of
echo
what
you'd
said
about
you
know
verification
what
that
looks
like,
but
at
the
moment
like,
like
you
say,
it's
the
the
notion
that
we've
had
of
it
has
been
fairly
shallow
in
terms
of
hey
it.
Just
does
a
sign
for
sales
problems
exist,
I
think
we
are
going
to
have
to
at
some
point.
You
know
face
what
does
the
web
of
trust
underlying
that
look
like
and
hey?
E
You
know
if
I'm,
an
Enterprise
deploying
this
artifact
and
the
problem
is
signed
by
GitHub
actions.
That's
great.
Do
I
trust
good,
have
actions
what's
the
basis
for
that
trust.
Are
there
other
Enterprises
that
don't
trust
you
have
actions
and
so
on?
And
then
how
do
we?
How
do
we
provide
anchors
for
that
trust?
Is
that
an
ecosystem
concern
where
you
know
people
can
say
Hey.
E
You
know
I'm
going
to
go
and
verify
these
Builders
independently
and
attest
to
them,
or
you
know,
do
we
want
to
get
into
the
business
of
having
you
know,
salsa
accredited
Builders,
where
someone
actually
goes
in
and
says?
Well,
you
know
the
salsa
organization
has
said:
GitHub
actions
is
qualified
at
level
three
or
whatever,
and
so
at
the
moment
I
feel,
like
you
know,
we've
not
reached
that
that
point
yet,
but
I
feel
like
it's
inevitable,
we're
going
to
have
to
think
through
that
class
of
problems.
A
Great. Sean, yeah, I'll just build a little bit on that. I think Trishank brought some of this up before in an earlier community meeting, but I'm wondering now about how we go about working out who is trusted to do what in the chain, where there are multiple parties in the chain. Does it make sense for GitHub to be signing for package authors? Does it make sense for third-party builders to be signing for things that they didn't do? Who do we trust to sign for what is, I think, another part of the policy decision. So I think there's discovering that web of trust, but then also working out what parts of the supply chain the members of that web of trust are authorized to sign for.
B
...policy controller, there's Kyverno, there's OPA Gatekeeper, and some of these other tools that are actually doing some of that work and figuring out, whether it's through TUF or something else... Mark?
D
Yeah
so
in
terms
of
like
what
actually
happens
at
you
know
like
if,
if
on
the
the
path
where
things
get
rejected,
I
definitely
think
that
is
a
good
thing
to
work
through
because,
like
that's
kind
of
the
case
that
we're
trying
to
do
right
is
like
protect
bad
stuff
from
getting
through
I.
Don't
know
how
you
want
to
track
this
mic
or
like
how
you
want
to
like
break
this
up
into
like
sub
problems.
D
Maybe
it
would
be
good
to
like
have
someone
well
I,
guess:
I,
guess
it's
kind
of
along
the
same
lines
as
the
policy
question
of
which,
which
visa
and
Simon
had
written
up
some.
Some
initial
thoughts
on
because
I
was
about
to
say,
like
I,
have
some
thoughts
on
how
that
might
work,
but
then
that
might
just
be
getting
into
the
Weeds.
Now.
B
It might still be worth it. Oh, Roy, were you saying something?
B
Okay,
you
know
definitely
I
think
it's
a
lot
of
these.
All
these
questions
are
still
open,
but
yeah,
let's
go
through
Sebastian.
C
Thank you. I wanted to bring a perspective on the list of tools that was on your screen there; I'll read it from my screen: cosign, TUF, GUAC, companion file. And for the companion file, I don't think it's practical to expect package managers to support distributing these extra artifacts.
C
So
I
think
the
discovery
of
metadata
has
to
be
a
separate
process
from
downloading
the
package
itself,
in
the
same
way
that
Distributing
gpg
or
s
mime
Keys
those
kinds
of
things
those
are
done
outside
of
the
ecosystem,
which
is
trying
to
use
the
signatures.
The
keys
are
distributed
through
various
different
methods.
D
I
guess
just
to
respond
to
that
specifically
I
have
some
opinions
here.
My
opinion
we
need
I,
think
it'll
be
desirable
to
have
both
models.
D
Doing
it
within
the
ecosystem
is
valuable
for
reliability,
because
when
you
have
to
do
it
through
a
separate
system,
you
have
a
new
failure.
Domain
like
it
used
to
be
that.
D
Well,
if
you
can't
fetch
a
Patchogue
if
the
mirror
is
down,
but
here
it's
like
you,
can't
fetch
a
package
if
the
mirror
is
down
or
the
provenance
thing
is
down
and
if
the
whole,
and
if
you
need
some
extra
system
to
verify
signatures
and
if
that
system's
down
too,
it
also
might
add
additional
latency
in
order
to
fetch
these
attestations
and
that
might
be
undesirable
in
a
lot
of
cases
like
within
Google,
we've
we've
run
into
that
and
and
specifically
have
built
inline
propagation
of
of
attestations.
D
We've
made
that
a
priority
for
reliability
and
latency
reasons,
but
for
all
the
reasons
you
said,
I
think
it's
also
desirable
to
be
able
to
support
other
ecosystems
that
don't
have
that
support
and
so
I
I
I
feel
like
it's
probably
a
mistake
to
only
support
one
to
the
exclusion
of
the
others.
F
I think there's a huge open question here, because package managers haven't adopted SBOMs at this point, even though we've all said they're supposed to be there. The question is: is the SBOM validation of its contents part of the package manager, and are the signature validation and the VEX checking outside the package manager? Those sorts of discussions haven't happened yet, right? Signing packages as they are now is not necessarily our end goal.
F
The validation of that SBOM against VEX, and communicating it up to the network, is still a work in progress: how the heck this is all going to flow through, and how offline validation is going to work. So focusing on just signing packages right now, I think, is a good start, but I think we need to have a broader conversation about how SBOMs end up folding into these. Which gets into the other question: I can rent a factory to go and build my component, but I still own the product at the end. And in theory, the way I envision that is that the SBOM is owned by the product owner, whereas the factory, or GitHub Actions, presents the signed evidence of what they saw during the build in their factory, and that is perfectly fine, right?
F
If you're arguing that these should be able to have the same IDs, that's an interesting question. I think there are parts of an SBOM that are unique to that specific run. On the other hand, I think all the files that are deterministically built, and all the rest, will be 100% exactly the same. I agree with that point, but I don't think the SBOM is 100% reproducible at this point.
C
Yeah
and
like
you
say,
it's
not
necessarily
ideal
for
them
to
be
completely
reproducible.
You
do
want
to
have
certain
information
about
the
specific
builds
to
know
whether
the
S
form
is
what
you
expect.
Basically,.
C
F
All
done
by
the
Bayer
company.
F
That said, the other comment, about saying "hey, here's specific evidence from a specific tool": my understanding is that we were allowing multiple tools to be able to produce evidence that's sufficient for the same claim. Like CodeQL versus Coverity versus PREfast versus other static analysis engines: they all should be able to say, hey, we can attest that we produced static analysis results that were looked at. I'm not looking at a specific SLSA entity being bound to a single tool at this moment. Or are other people thinking about this differently?
A
I think that's... okay, no, yeah, I agree. I think in SLSA the tooling is not specific, right? It's not that you should use this tool in particular; you can use any tool. I mean, for a vulnerability scan there are hundreds of them, and you can use any of them to scan. So I don't think it should be specific to any tool as such, yeah.
B
Yeah, and the SLSA spec has that builder element in there, where the only thing that would be different is just... yeah. You want people to know, as you sort of mentioned, Roy: you want to know what did the building, right, not necessarily just who built it. It's like: company X built it with tool Y, or at least they're claiming that they built it with tool Y, which is also, I think, important.
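That "who built it, with what" split lives in the provenance itself. Here is a minimal sketch of an in-toto statement carrying a SLSA v0.2 provenance predicate; the company, tool, and digest values are made up for illustration:

```python
# Minimal in-toto statement with a SLSA v0.2 provenance predicate.
# "Company X built it with tool Y": the builder.id identifies the build
# service that did the building, while the subject is the artifact the
# claim is about. All concrete values below are made-up examples.
statement = {
    "_type": "https://in-toto.io/Statement/v0.1",
    "subject": [{"name": "app.tar.gz",
                 "digest": {"sha256": "c0ffee" * 10 + "beef"}}],
    "predicateType": "https://slsa.dev/provenance/v0.2",
    "predicate": {
        "builder": {"id": "https://builder.company-x.example/tool-y"},
        "buildType": "https://builder.company-x.example/tool-y/build@v1",
    },
}

def builder_id(stmt: dict) -> str:
    """Extract the claimed builder, i.e. what did the building."""
    return stmt["predicate"]["builder"]["id"]
```

A verifier can then allow-list on `builder_id` rather than on whoever uploaded the artifact.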
F
The
question
of
tools
becomes
really
pragmatic,
specifically
for
bigger
companies
Google
with
their
clang
using
for
for
Chrome.
It
isn't
the
same
one
as
playing
with
the
rest
of
the
industry
uses,
nor
is
the
visual
studio
compiler
that
we
use
for
building
windows,
isn't
necessarily
a
production
version
that
will
release
the
public,
so
we
haven't
even
worked
through
what
that
means
and
then
we're
starting
to
trip
into
the
get
bomb
space
as
well.
F
That
one
I
think
is
worth
a
much
deeper
conversation
and
considering
build
engines,
and
there
is
a
concept
of
some
ability
to
have
IP
in
these
processes.
We
have
to
tread
very
carefully
in
here.
D
So
if
you
can't
hear
me
sorry
about
that,
the
so
so
Roy
I
agree,
I,
think
I,
as
so
far
as
I
understand
what
you're
saying
I
think
I
agree
with
you
on
pretty
much
all
the
points
in
particular
like
the
difference
between
like
the
Builder
is
claiming
It
produced
this
thing
and
that's
about
it,
whereas
the
s-bomb
would
be
getting
into
the
details
of
what
went
into
it
and
and
if
we
think
about
the
the
use
cases
salsa
as
written
now
is
worried
about
tampering,
whereas
s-bomb
is
focused
on
vulnerability,
management
and
Licensing,
and
so
the
threats
are
are
different.
D
There,
like
the
threat
actors,
are
different
and
so
I
think
the
two
are
compatible.
Yeah.
F
Is it the purview of the end person? The SLSA and evidence claims could be part of the factory, in which case they have no relationship to the SBOM owner. I'm saying the SBOM is where you would go to say who owns the product, rather than looking at who signed the SLSA data as being definitive on ownership.
D
Yeah
that
that
makes
sense
to
me
I
mean
we'll
have
to
like
write
that
up,
but
like
at
a
high
level.
That
sounds
good
to
me,
although
again,
like
it
kind
of
comes
down
to
the
details,
yeah.
F
I
understand
the
other
thing
is
this
is
the
distribution
of
all
this
content
is,
is
where
we
start
tripping
into
using
skip
as
a
notary
system
and
a
public
way
to
to
distribute
large
volumes
of
data.
I
think
we've
got
oci
listed
here
and
that's
similar
to
the
thoughts
and
a
stepping
stone
to
a
better
solution
going
forward.
The
interesting
question
for
for
us
for
packages
is
potentially
the
s-bomb
could
be
inside
a
nugget
package
or
an
npm
package.
F
But
as
soon
as
you
start
trying
to
build
complex
products,
you
need
some
endpoint
that
you
can
go
and
retrieve
the
same
s-bomb
from
without
having
to
say,
I
have
to
ship
the
whole
tree
inside
my
product
myself
and
so
I'm
kind
of
viewing
s-bombs
as
being
both
a
an
optimization
that
they're
in
the
package.
So
you
can
do
it
offline
validation
and
potentially
be
available
through
something
like
skit
or
oci
to
handle
the
deeply
nested
tree
that
we
have
will
end
up
building
in
the
end.
D
Yeah, I have to step off the call for a second; I'll be back in a few minutes. One quick thing, coming back to the storage question of whether you store it in the package repo versus having some third party or some alternate means of storing the attestations: I think if we think about the use cases where we're trying to stop tampering, and ideally would prevent package uploads and package use, there we need higher reliability.
B
Yep, okay, that makes sense. Yeah, I mean, I think we all sort of agree there; I think there's two separate... yeah. And I kind of want to make sure, given that this is mostly focused on the SLSA stuff... I do agree that SBOM stuff is going to be an interesting integration point, but maybe we don't have the same exact concerns, at least starting off. That said, I think there's some...
B
There
was
a
lot
of
good
points
there
around,
especially
around
sort
of
the
that
that
distribution
piece,
because
I
do
agree
that
you
know
whether
it's
skid,
whether
it's
oci,
whether
it's
a
rest
endpoint
that
that
you
know
that
that
gives
you
a
sign,
whatever
I
think
that
there's,
where
we're
trying
to
kind
of
figure
out
the
two
big
components
of
it,
which
is
one,
is
if
somebody
just
sort
of
sits
down
and
says
great
I,
you
know
either
I'm
about
to
download
a
package
or
I
just
want
to
do
a
lookup
and
I
want
to
see.
B
Does
this
thing
have
salsa
metadata?
What
mechanisms
can
they
begin
to
do
to
do
that
right?
B
But then there's also already talk of some folks saying: hey, we're going to give access to some sort of database that contains this information, and you can request it through the hash or through some sort of URI.
B
Oh
yeah,
no
I
agree
I
agree.
So
so
would
you
sort
of
argue,
then
that,
like
some
of
the
work,
that's
being
done
in
the
spdx
and
Cyclone
DX
and
similar
sorts
of
spaces
around
distribution
of
s-bombs,
so
that
could
be
leveraged
for
also
distribution
of
salsa
metadata?
Yes,
100.
F
The reason I bring that up is that there is one piece of data that I think will be continuous through the lifetime of the product: if I go off and say that I need to have had an anti-malware scan within the last seven days, that will be a continuous process, and that's new evidence per week, potentially. In which case it has to go out there, and I would love that to be, hey...
B
Yeah
I
totally
agree
with
you.
There
there's
also
and
I,
don't
want
to
get
too
deep
into
this
piece
per
se,
but
I
I
know
that
there's
some
discussion
about,
for
example,
if
I
do
have
an
anti-malware
scan.
Where
does
that
show
up?
B
In
my
salsa
metadata
like
and
I
know
one
of
the
things
that
you
know
we've
been
doing
just
myself
and
my
company
is,
is
you
know,
because
there's
not
a
ton
of
in
Toto
predicate
types
yet
is
we've
just
been
using
the
sort
of
salsa
metadata
as
proof
of
if
I
have
a
salsa?
Sorry,
if
I
have
a
malware
scan,
I
can
point
to
the
salt
and
metadata
for
that
malware
scan
to
show
the
Integrity
of
the
actual
scan.
I
just
ran
that
might
not
be
ideal
long
term.
B
There
might
be
a
better
sort
of
mechanism
of
saying
it
doesn't
make
sense
to
say
something
like
here.
Is
you
know,
here's
claims
about
the
this.
You
know
the
the
security
of
a
thing
and
you
know
have
a
different,
whether
it's
in
Toto
or
some
other
metadata
type,
that's
something
that
oh
God
I.
F
I was using it as a proxy for: after the product is built, we will potentially gain new evidence, and often. The Alpha-Omega, CodeQL-style work we're doing for OpenSSF falls into the same bucket; it happens after the product's been released. And so we kind of need to account for this evidence showing up both when we're producing the final product and dribbling in later, and I would love to understand what that is from a SLSA point of view.
B
Yeah. So from the tooling perspective, the thing that we've been doing, and I want to hear other folks' opinions as well, is just associating this metadata as much as possible, where we say: hey, is it signed with the right sorts of keys? And, for example, if you use TUF, we can go and say: okay, well, these keys are associated with these roles, and so they've been given permission to sign these sorts of things. And then, so...
B
At
the
end,
we
might
expect
from
a
policy
perspective.
I
expect
to
solve
some
metadata
associated
with
the
build
I
expect
a
security
scan,
a
science
security
scan,
or
some
sort
of
you
know,
attestation
regarding
to
the
security
scan
signed
with
this
role
and
and
so
on.
That's
that's
just
what
we've
been
doing
but
mark
foreign.
D
Now
is
what
is
like,
strictly
required
by
salsa
versus
what
are
like
add-on
benefits
of
things
that
we
kind
of,
like
implementations,
ought
to
do
to
get
more
benefit
out
of
it
so
from
the
required
part
of
salsa
in
what
we
currently
have
defined,
not
counting
any
sort
of
future
extensions
is
just
about
tampering
and
specifically
how
the
thing
was
originally
built,
and
so
that
information
is
I,
believe
static
and
like
it
won't
change
over
the
lifetime
of
the
artifact
like.
If
you
say
this
thing
has
to
have
been
built
from
this
GitHub
repo.
D
...you check that at upload time, or use time, or ingestion time into an organization, or in just monitoring, but effectively all that data is static and shouldn't change. Whereas things like vulnerability management, which is outside the scope of SLSA right now but is related, would change. And so I think that's maybe where some things are coming down: certain folks have one use case in mind, like I'm holding the tampering use case in mind and thinking, oh well, this information's static, and other folks are like...
D
Yeah, yeah, I'm sorry, you're right, that's a better way to say it. So here's actually another use case that's a little bit in between: I want to say, at upload time, prevent this upload unless the artifact has been scanned and tested according to some definition.
D
In
order
to
do
that,
you
have
to
create
the
artifact
first
and
then
you
scan
it
and
so
like
it
can't
really
go
in
the
province
because,
like
it
has
to
go
like
it's
an
after
the
fact
thing
and
in
some
cases
like
you
have
to
actually
upload
it
first,
like
let's
say
a
container
image.
D
Let's
pretend
you
have
to
actually
upload
it
to
the
some
sort
of
container
registry
before
you
can
like
scan
it,
because
that's
the
interface
that
things
use
to
do
scanning
or
let's
say
you
want
to
run
it
on
a
cluster
or
something
and
say
like
yes,
I
ran
that
in
my
cluster
and
all
the
my
you
know,
quality
assurance
tests
passed
and
that
would
be
additional
evidence
that
you
generate
after
the
original
build.
D
So
it's
it's
similar
to
what
you're
saying
in
addition
to
like
the
regularly
updated
stuff
I,
we
should
kind
of
come
to
an
agreement
on
what
we
call
that
in
my
mind,
that
is
not
Strictly
Salsa.
D
It
is
related
and
we
we
probably
want
to
create
when
we
created
Solutions
support
both
use
cases
but
I,
don't
think
it's
Strictly
Salsa!
That's.
F
Exactly where I was thinking, Mark, except the terminology I use is: we look at the available evidence and we make an endorsement that it's available for XYZ. So in this case it would be an endorsement that it's available for testing; at the next stage, maybe with the testing data, we make an endorsement saying it's ready for flighting; and after that, another one, once it gets to some stage.
F
You
know,
lightweight
and
consumable,
and
not
look
at
a
broad
swath
of
data
for
the
consumers
to
make
their
decisions,
and
that's
where
the
distinction
between
Salsa
is
about
Evans
and
here's
proof
that
that
that
matches
to
this
this
functionality,
where
an
endorsement
is
saying,
hey,
I,
looked
at
it
and
I'm
a
claim
that
it's
available
for
testing
purposes.
B
What you mentioned there, at some level, sounds similar to the binary authorization white paper and some of the other things that are out there, around saying: hey, based on all the metadata I have today, and all the affirmative metadata, I should say, because everything should be affirmative, you should be able to say: great, no vulnerabilities have been discovered in the past seven days, this, that and the other thing, and it gets a stamp.
B
It's
ready
to
go
into
whatever
environment
or
it's
ready
for
whatever
the
next
step
is
I.
Think
that
kind
of
still
kind
of
leads
to
the
other
question
of
like
okay.
Well,
before
we
even
get
to
the
point
where
we
can
sort
of
make
that
endorsement,
we
need
to
make
sure
we
have
that
ability
to
provide
access
to
all
the
metadata
to
whatever
tool
is
going
to
be
making
that
endorsement
or
whatever
thing
is
going
to
be.
Making
that
endorsement.
I
should
say.
F
Very... you know, this is one thing I stressed before: the endorsement doesn't have to declare what evidence or SLSA items it looked at to make the decision, because declaring what you didn't look at is a security issue, as well as what you did look at. So, to my mind, an auditor or reviewer can make an endorsement and not declare how they got there. Yeah, no, and I'm 100%...
F
You
know,
as
long
as
you
understand
the
subtlety
of
what
I
was
saying
there,
that
building
block
allows
us
to
to
have
a
sliding
scale
and
and
improve
salsa
as
we
go
forward
here.
I
still
think
the
salsa
style
data
is
the
largest
thing
that
reviewers
would
do
and
not
all
reviewers
will
only
base
their
decisions
on
the
salsa
data.
B
Oh, no, no, yeah, so to be clear, I completely agree with you on that one. I think there's another subtle point, which is, as we look at that sort of problem: I agree that certain people, if they make an endorsement, might not be declaring how they made the endorsement, but there still needs to be a process by which they make their endorsement. And, for example, a lack of metadata should not be considered, like, I think you mentioned this, right?
B
A lack of a vulnerability, or rather, not looking for a vulnerability, doesn't mean the vulnerability isn't there, and so, you know, there still needs to be that. That's all I meant.
C
I think one critical aspect is having stable identifiers, and that's something that we've been very, very keen to do. In SPDX, we've had identifiers at the document level and on individual elements within it, and if CSAF documents can have similar identification mechanisms, then we can link the vulnerability reports to each other.
C
Yeah, so I think CSAF doesn't currently have an internal way of expressing IDs, but content-addressable IDs are always a nice, easy way out of that, if you're able to have machine-readable rather than human-readable identifiers, and the tooling around that to calculate the identifiers.
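A content-addressable ID of the kind mentioned here can be sketched by hashing a canonical serialization of the document. The scheme below (SHA-256 over sorted-key JSON) is one common approach to this, not a CSAF or SPDX requirement.

```python
import hashlib
import json

def content_id(document: dict) -> str:
    """Derive a machine-readable identifier from the document's content,
    so the same content always yields the same ID, with no registry needed."""
    # Canonicalize: sorted keys, no insignificant whitespace.
    canonical = json.dumps(document, sort_keys=True, separators=(",", ":"))
    return "sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()

a = content_id({"title": "advisory", "id": 1})
b = content_id({"id": 1, "title": "advisory"})  # key order doesn't matter
assert a == b
```

Because the ID is a function of the content, any tool can recompute and verify it, which is exactly the "tooling around that to calculate the identifiers" point above.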
B
So now we have about ten minutes here, so I think it's probably worthwhile just to reevaluate. From last meeting, we had decided on taking a closer look at the distribution and discovery tools, to see if there are ways we can fill in some gaps where capabilities might not exist.
B
If we could start to look at different approaches for some of that sort of stuff; it sounds like, based on what Roy had said, and I don't know, I've only been to a couple of the SIG meetings, but do we know the current state of things, for where we might want to start looking? Even if it's not a permanent solution, or, I don't want to say permanent solution, but are there certain things we can do today around the distribution and discovery problem, or do folks think a lot still needs to get done?
F
So Steve Glasgow put out a video on how to use OCI for this association, as a stepping stone for us to start playing with, and I'll see if I can find that and broadcast it back out here. So it's perfectly available for us to start hooking these things together and use it as a "here, this is step zero."
B
Yeah, yeah, so just on that note, I believe both the ORAS/Notary stuff, as well as the other OCI and Sigstore stuff, are all sort of using it, because I think even today you can store SLSA metadata in OCI. But I have a question: is that sort of also... oh, sorry, guys.
F
...already have a binding mechanism in OCI that would allow us to associate, just bringing it together, and that is what I was saying to use: not necessarily using the same support as Notary, but just to say, here's the binding mechanism to search. Okay.
D
I think maybe a good next step would be to start a doc where we could kind of start dumping ideas, like requirements, considerations, use cases, possible solutions, pros and cons of those solutions, and then kind of form that into something more coherent.
B
Sure, I can definitely create that doc. Right now, I will just create a new one.
C
In terms of stable identification of software, the software packages themselves, there are still some problems due to the lack of maturity of those stable identification mechanisms, and that applies to both CPE 2.3 and purls. In terms of the stable identification of SBOMs, I think it's going very well; SPDX 3.0 will have RDF-style IRIs. And I think the missing link there, in terms of not having stable identifiers at all, is the CSAF VEX side of things.
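For context, a package URL (purl) is a string of the form `pkg:type/namespace/name@version`. The helper below is a simplified sketch covering only the basic case; the full purl specification adds qualifiers, subpaths, and type-specific normalization rules beyond what is shown here.

```python
from urllib.parse import quote

def make_purl(ptype, name, version, namespace=None):
    """Build a basic package URL: pkg:type/namespace/name@version.
    Namespace, name, and version segments are percent-encoded."""
    parts = ["pkg:", ptype, "/"]
    if namespace:
        parts += [quote(namespace, safe=""), "/"]
    parts += [quote(name, safe=""), "@", quote(version, safe="")]
    return "".join(parts)

print(make_purl("npm", "animation", "12.3.1", namespace="@angular"))
# pkg:npm/%40angular/animation@12.3.1
```

The same package is identified by the same string everywhere, which is the stability property being discussed, though as noted above, maturity issues remain in how consistently tools emit these.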
B
Okay, so I created the doc just now. Obviously there's absolutely nothing in there, so it's probably worthwhile, for anybody who wants to, to just start putting in requirements, use cases, that kind of thing. I think it's probably okay to start off with just brainstorming in there, and then next meeting we can probably get together and refine it a bit.
B
With that said, I know we only have four minutes, but is there anything else anybody wanted to bring up here? I think this was productive. I think there were a lot of interesting points brought up about, you know, making sure, as they say, the devil's in the details; there are a lot of things that can get lost here. I think a lot of this discussion brought up some pretty good points.