From YouTube: SLSA Specifications Meeting (August 25, 2022)
A
Yeah, yeah, especially because I'm pretty short, so my desk... most desks are pretty tall, and so, in order for me to reach the floor with my chair, I actually have to sit lower... much lower than the desk. So it's kind of even more awkward sometimes, and then I put... yeah, anyways, I'm rambling about this, but.
B
One of those things where, you know... I've been working remote for quite some time now, to the point where there are people I'm working with who I've never met in person, and I've been working with them for several years. And yeah, when you do kind of resolve that situation... you don't meet people. You suddenly find out exactly how tall people are, which is a complete surprise, most of the time.
A
Yeah, yeah, I am definitely not tall... not in the least. I'm going to see if Gilbert or Marcelo want to join, because they've been talking about this offline in the thread. You want to join the hybrid discussions?
A
Well, sort of... it's somewhat... I feel like they're still related, not quite the same thing, but right. Okay, anywho! Let me remember. Switching gears here: hybrid means open source software plus proprietary software, and how do we handle that.
B
Yeah, yeah, so that's kind of an open and interesting question. I can tell you what we're trying to accomplish at ActiveState, and that is: we are trying to basically provide builds, with our proprietary build system, of open source components that we're ingesting from outside. And also at some point... well, we're already doing it, actually... being able to allow users to insert their own source code into our build system as well, which is proprietary and protected.
A
Yeah, yeah, and I want to bring that up, because I was trying to convey that in some of my diagrams, but it was going to make it extremely messy, and so I was trying to figure out a better way of showing that. So let me... I'm trying to open it up so that I can refer to it. Nope, that's not it. Okay, here's SLSA! So, okay, let me share my screen. Hi Marcelo, thank you for joining. Hi. Can you hear me fine? Yep? Yes, okay! Thank you!
A
Okay. You know, I was trying to do like a simple... like, hey, you know you have a hosted repo. Maybe it's...
A
But then you have the... well, what if... because NIST SSDF actually talks about this: having an approved software repository of vetted code, or vetted software components. And so we are thinking of going completely internal, like Red Hat, where we will vet the open source packages we use, right? Once we vet them, they're housed inside and they can only be retrieved from inside. You can't retrieve the packages from outside.
B
Yeah, I mean, we have a very similar system, in that all of our builds come from a mirror that we create internally of external packages. So basically those packages are ingested by one pipeline, and then we just have the source there, and then all the builds happen off of that repository, rather than off of the original repository on the internet.
A
Yeah, so how do we... and I've been trying to think about this, right? It's like, if there's something where... it's almost the same as a self-hosted, unknown repo, right? You don't really have full visibility, necessarily. So how do you address SLSA in this scenario when it comes to provenance? Because...
B
We are changing it a little bit. Our approach to this was that we would treat our ingestion process as a part of the build pipeline, and so we would make an attestation about the mirroring process.
B
So basically we would say: we got, on this date... you know, your attestation has like a start time and an end time, and so basically we would say, on this date we downloaded this from here. And so you would have one attestation there that said the input artifact was whatever the source repo is, and the output artifact is our mirrored copy of it. And then we would make the regular build attestation about having built from our mirror.
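The mirroring attestation described here could be sketched as an in-toto-style statement. This is purely illustrative: the `predicateType` URI, package names, and digests are made up, and this is not ActiveState's actual schema, just the general shape of an attestation whose input artifact is the upstream repo and whose output is the internal mirror.

```python
import json

# Hypothetical in-toto-style statement for the mirroring step.
mirror_attestation = {
    "_type": "https://in-toto.io/Statement/v0.1",
    "subject": [
        # The mirrored copy (the output artifact); digest is illustrative.
        {"name": "internal-mirror/requests-2.28.1.tar.gz",
         "digest": {"sha256": "ab12cd34"}},
    ],
    "predicateType": "https://example.com/MirrorAttestation/v1",  # made-up type
    "predicate": {
        "materials": [
            # The input artifact: wherever we downloaded it from.
            {"uri": "https://pypi.org/project/requests/2.28.1/",
             "digest": {"sha256": "ab12cd34"}},
        ],
        "metadata": {
            "startedOn": "2022-08-25T10:00:00Z",   # start time of the download
            "finishedOn": "2022-08-25T10:00:05Z",  # end time
        },
    },
}

print(json.dumps(mirror_attestation, indent=2))
```

A regular build attestation would then name the mirrored copy (not the upstream URI) as its material, chaining the two statements together.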
B
The vetting process, I think you could potentially... because presumably that's... well, that's probably a mix of human and automated.
B
Which is, you know, the same for us, because we'll do SAST and DAST on some of the stuff we bring in anyway, and there are manual processes for checking for malware and stuff like that. But yeah, I think for the automated part you could probably generate a regular attestation. I'm not quite sure what the input and output artifacts would be for that.
B
Which is, you know, the VSA, which is kind of the way of terminating the graph a little bit, in that you can say: well, we applied these policies, all these processes, to these things before generating this output. So basically it's a way of saying: we did this; if you trust us to have done it, then everything's fine. But your trust in the artifact is dependent on trusting what we say, on some level.
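The way a VSA terminates the graph can be sketched as follows. The field names loosely follow the shape of the SLSA Verification Summary Attestation but are abbreviated and simplified for illustration; the verifier URI and policy URI are made up.

```python
# Simplified sketch of a Verification Summary Attestation (VSA):
# a verifier asserts that an artifact passed its policies, so consumers
# don't have to walk the full attestation graph behind it.
vsa = {
    "subject": [{"name": "pkg:pypi/requests@2.28.1",
                 "digest": {"sha256": "ab12cd34"}}],
    "verifier": {"id": "https://example.com/verifier"},  # the entity you must trust
    "verificationResult": "PASSED",
    "policy": {"uri": "https://example.com/policies/ingestion-v1"},
}

def consumer_accepts(vsa: dict, trusted_verifiers: set) -> bool:
    """A consumer's trust in the artifact reduces to trust in the verifier:
    if you trust the entity that made the VSA, you trust its claims."""
    return (vsa["verifier"]["id"] in trusted_verifiers
            and vsa["verificationResult"] == "PASSED")

print(consumer_accepts(vsa, {"https://example.com/verifier"}))  # True
```

The key design point is exactly what the speaker says: the VSA is not a record of what happened, only a claim that a process was passed, so everything hinges on whether the verifier is in your trusted set.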
A
Okay, so that makes sense.
A
And I guess, then, people would have to self-attest... kind of like this picture over here, right, where everything's kind of behind the customer's firewall, and so they would have to attest.
D
So can I jump in for a second? Because I think, at least from what I've understood, we...
D
...have certain projects that we don't self-host, potentially, but that still go through this vetting process. And then we essentially just have a very long list, or kind of a database, of projects and the different vetting statuses that they have, and when these expire. So...
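The vetting database described here, projects tracked where they live, with a vetting status and an expiry, could be modeled as simply as this. All names, statuses, and dates are hypothetical; this is just a sketch of the record shape, not Intel's actual system.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VettingRecord:
    """One row in a database of externally hosted, internally vetted projects.
    The project itself stays on GitHub; only the vetting result is tracked."""
    project_url: str   # where the project lives
    status: str        # e.g. "approved", "restricted", "rejected" (illustrative)
    vetted_by: str     # who performed the vetting
    vetted_on: date
    expires_on: date   # vetting typically expires and must be redone

    def is_valid(self, today: date) -> bool:
        """Usable only while approved and not yet expired."""
        return self.status == "approved" and today < self.expires_on

rec = VettingRecord("https://github.com/psf/requests", "approved",
                    "security-team", date(2022, 8, 25), date(2023, 8, 25))
print(rec.is_valid(date(2022, 9, 1)))  # True
```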
D
I'm wondering if this is almost a third scenario, because a lot of the projects that are vetted are not necessarily pulled into an internal mirror; rather, they remain on GitHub, but it's recorded where they are, when they were vetted, by whom, and the vetting typically expires.
A
Are you saying that... you said some of it's hosted and some of it's not. So is it more like this example, where you have your trusted repo, which could probably be proprietary, and then maybe the open source code... and it goes out to the internet and grabs it? Is this the scenario? I'm trying to visualize what you...
D
Yeah, so I do think this is represented by this public OSS repo, or private/public image repo, but there still is the sort of internal vetting, right, that happens, that we internally need to be compliant with, right?
D
I would say, more... it's not even that the vetted project is self-hosted; it's more that we just track external vetted projects.
B
...them direct from the upstream repo, but the vetting is attached to the external repo, rather than a local, self-hosted mirror.
D
Yeah, that sounds right. Okay.
B
Basically, the internal vetting presumably produces a record that's held internally at Intel, which is attached to the upstream repo.
B
...that Intel have internally, and that kind of describes their level of trust in the upstream repo.
D
Not 100% sure about that aspect, but certainly for the release process... yeah, throughout the development and the release process, we do check this repo, or this list... yeah.
D
I would say during development... maybe. I don't know to what extent different groups might pull this record automatically versus not.
B
I think this is another decent use case for the verification summary attestations as well, because those attestations are for "this has been through some sort of trusted process". All the VSA is, is a document that says this particular thing has passed this particular process.
B
It's not actually a record of anything automated that happened, necessarily; it's just that this is good to go, and if you trust the entity that made the VSA, you can trust the claims made in it.
A
So I'm trying to think about, I guess, for the proprietary code, right? There's some sort of internal repo over here.
A
You know, the reverse. That...
A
Attested, that's the build.
A
This process is snapshotted.
B
Yeah, I mean, basically vetting isn't necessarily a binary thing; it's not "use" or "don't use it". You can use it in certain circumstances. So, for example, we'll have different levels of trust in different snapshots that we have.
A
Okay, so this is like an optional mirror... not optional, but, you know, a secondary use case, I guess.
A
Okay, and then this could be hosted or not hosted. I'm trying to think of... where is it... like these, right, where your stuff is hosted versus not, and this looks like it would be... sometimes it might be this, all right, like there's a public open source repo.
B
Well, I mean, that's part of the problem at the moment: we're not getting the source-level attestations, and that's why we're talking, I guess, in the specification meetings, about not including the source stuff in v1.0 of SLSA. Because at the moment, the level of trust we can put in an upstream repo, in stuff that we download, is pretty much tied to how much we trust the download process and the repo maintainer's ability... you know, the repo host's ability...
B
...what it is that the authors have expressed in their code. So yeah, there's an integrity part there that we have no control over, and that there are no attestations for at the moment, which is why the source code stuff is going to be important as we go on. But we're not at the point yet where we can really say we've got a good, documented chain of custody.
B
All we can say, really, at the moment, is that we got this snapshot of the source code at this date; it matches this checksum; and if there was any authenticating data on it, like a signature or something like that, when we downloaded it, we have a copy of that and we verified it.
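Recording what can actually be said about a download (date, source, checksum, and any upstream signature) amounts to something like the sketch below. The record shape and URL are made up, and the signature handling is left as a comment because upstream signing schemes vary.

```python
import hashlib
from datetime import datetime, timezone

def record_snapshot(data: bytes, source_url: str) -> dict:
    """Record the only claims we can make about a downloaded source snapshot:
    when we got it, where from, and what checksum it matched."""
    return {
        "source": source_url,
        "downloadedOn": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(data).hexdigest(),
        # If the upstream published authenticating data (e.g. a detached
        # signature), a copy of it and the verification result would be
        # stored alongside, e.g. "signature": ..., "signatureVerified": True
    }

snap = record_snapshot(b"example tarball bytes",
                       "https://example.com/pkg-1.0.tar.gz")
print(snap["sha256"])
```

Note that this attests only to the download event; it says nothing about the integrity of what the authors originally wrote, which is exactly the gap discussed above.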
B
Yeah, I think, if they're not making attestations themselves directly, or something that's translatable into an attestation, we can make attestations on their behalf, to a certain extent, based on what we know about the upstream repo and the project.
B
So we could make a verification summary attestation saying that we believe these things upstream of us are secure, because they've got this good Scorecard, or they've done well with, you know, OSS Gadget or something. But beyond that, we can't actually make any attestations unless the author is making them themselves, directly.
C
Sorry, I know I just jumped in here, and I'm trying to understand what we're trying to accomplish. I think I understand, but what if, day one, we just post: hey, we don't validate this repository.
C
You know, day two, we start looking into the repository and, using some tools, we do start making those checksums and validating those. And then day three, we go further on the attestation. I mean...
C
...it's going to be very difficult. I don't know... I look at it like getting a passport, right? I've got to provide like five levels of identification to prove that I'm Gilbert to get a passport. How do we do that with individuals that don't... right? You know, in a Social Security area, we have Social Security that identifies you as a person, and that's your attestation. And foreigners that don't have a Social Security number, or people that don't have Social Security...
A
No, it's this unique identifier, any time somebody makes a commit, and it will tie this artifact ID, if I remember correctly, to, for example, an SBOM, to say: hey, this commit is associated to this, and I can bring it up. I can't tell you exactly; I just remember watching a presentation on it.
B
So that's where you need to get to, and basically working out... you know, if you can integrate with the authentication mechanisms for the users, and any authentication and integrity mechanisms that there are for source code artifacts, whatever...
B
...then you can start to kind of derive attestations for these things. But often those things are opaque, and not necessarily... you know... I couldn't say, for example, that I know everything there is to know about GitHub's OAuth mechanism and how they implement it, so I couldn't say for sure that it's 100% trustworthy. All I can say is: I trust GitHub to make it trustworthy.
B
There are situations where you can't make that verification available. So, for example, we have an issue where... so, we use containerized builds, but some of our containers contain proprietary software; for example, we'll have Intel compilers or Microsoft compilers in there. Because we've accepted the licenses for those things, but third parties haven't, we can't make those containers available for people to check, even against a checksum or a signature that we give them.
B
So in that case, again, we're looking at making verification summary attestations on their behalf, saying: okay, well, we downloaded Visual Studio and installed it via this process; you can trust us to have done what we said. And then, by extension, you also have to trust the process behind creating Visual Studio, the authors of Visual Studio, and all of those things that we cannot make an attestation about specifically, but only to say that we trust that process, so we think you should too.
B
Microsoft are apparently intending to provide attestations for things like Visual Studio and for the operating systems that they run on, so that's something that we can fall back on a little bit further. But again, it's making attestations about proprietary code, which you can verify in terms of... you can verify the integrity of the attestation, but you can't actually verify the integrity of the things it's making attestations about, because it's proprietary code.
A
Fine, that would be expected. I'm just wondering about the open source, because if we can gather all the data about the open source repository... because we have the history of commits, we have the maintainers, we have the contributors...
A
...we can technically build the software, to say: we built this software from source, and so, from that perspective, no one's tampered with it. But we may not... and we've scanned it, etc., so we don't see any vulnerabilities... that doesn't mean there's not a backdoor or something like that, and maybe there is a known unknown; I know there's that concept for SBOMs.
B
So at that point you need to trust GitHub to accurately record those, and also to actually represent them to you. So yeah, there are points at which this information could be tampered with that you have no visibility into, and that's something we're going to have to live with. What we need to work out is how we can make attestations about those processes without exposing the content of the processes themselves.
D
That makes sense to me. There's an interesting question about... essentially, I think what you're getting at is: what ends up being the root of trust? Is it the author of a particular artifact, or is it the hosting environment?
B
Yeah, I mean, that's it, that's the whole thing. It's a chain of trust; you can treat it like a chain of custody. You can see where the bits originated and follow them all the way through, and that's, I think, what SLSA is trying to give us eventually: a complete chain of custody from source code to bits. At the moment we don't have that, and there are always going to be proprietary parts in that chain that you're just going to have to trust somebody's word on.
D
So, I see... and I'm gonna have to drop in five-ish minutes or so, but I see that you added GitBOM in here, so yeah, I was...
A
I was curious if this could help with some of the attestation, because I remember the presentation on how it takes a... I think it's a gitoid... hold on, let me go back.
B
The way that I see the GitBOMs is they're incredibly granular... much more granular than you would necessarily see at, say, the package level. I mean, the way that we look at things is that a released package is, you know, the tarball of the source, and that's our kind of input artifact, whereas the GitBOM will go down to: okay, within this source code there are these makefiles, there are...
B
...executables, which is a level of detail that I'm not sure everyone is particularly interested in, and is also an exhausting walk if what you're actually trying to do is follow everything down to the individual source files. Walking that graph for something as complex as, you know, just even Python or Ruby is exhausting, because they will have dozens of dependencies on other packages, which are built of hundreds or thousands of C source files. So actually...
B
Yeah, I mean, I think the problem that we have is that we're trying to make SLSA as practical as possible, so that people can make decisions on it without having to be, for want of a better word, intentionally navel-gazing, to try and work out what it is that they've actually got, and whether each individual source file is what was delivered. So yeah, you can go that deep, and ultimately, you know, if you really want to check everything right down...
D
Okay, yeah, I was gonna add that I think GitBOM, my impression is, has some very specific use cases. Like: oh, if I pulled in this source file, and this source file is where some vulnerability that was just reported actually appeared, you can sort of trace through this tree and find the very specific point in your dependency graph at which the vulnerability was pulled in. But my question about GitBOM is always: who's...
D
...storing these? And shipping GitBOMs as part of ELF binaries... I'm not sure if that's a scalable solution. So yeah, I think, as Sean was saying, I also view GitBOM as a very use-case-specific tool that not necessarily everyone will be able to adopt, yeah.
B
Ultimately, when an author releases a package, they are releasing that package as a bundle of the source code, and so they are effectively, at that point, attesting to the integrity of the bundle as a whole, rather than to each of the individual source files. But yeah, I mean, it's incredibly specific. It's useful if you are going through an incredibly forensic process, but I think, most of the time, you will say: okay, well, I found the bug.
B
You just say, okay, to the author: this release has this problem, and here's a demonstration; you go fix it.
B
I think it's useful for extreme forensic cases, but I'm not sure that anyone will want to validate a graph of their own software stack down to the individual source files.
B
So when we are vetting... yeah, I mean, we'll examine individual source files, but we won't give each individual source file a score. It'll be applied to the package as a whole, because part of that is that some of the things that you're vetting, like the source code control practices, the regularity of updates, and things like that, are constant across the package.
B
...or somewhat vary across the package. So yeah, would you want to say: well, the repo as a whole is pretty lively and people have updated it on a regular basis, but we're going to give this a bad score because this one source file hasn't changed in two years? Probably not.
B
I think we have to be kind of repo-agnostic; we're just using GitHub as an example here. And some of the stuff, you know... like, yeah, the results of the internal vetting are stored internally, rather than necessarily being publicly available. We can make them publicly available, and we will expose certain parts of it through public interfaces, but yeah, the actual database isn't necessarily... it is held internally, at least in the way that Marcelo was describing it, about the...
B
...process. They do that, and we have something very similar. So yeah, there's that part of it. I think all of this works better if it is distributed but discoverable, and that's one of the things that we're finding with the tooling group at the moment: discoverability is a real problem for attestations, and it's something we kind of need to address.
A
Can you clarify? You just mentioned the comment about "distributed".
B
I just... well, I think the whole data, and the integrity of kind of the system as a whole, is better if it's distributed rather than held in one place.
B
...have to trust everybody whose hands the bits pass through before they get to you, and so that's part of what the attestations do for you: the signatures are made by particular entities. So you need to work out... you know, there's another level to this, which is policy, which is working out: do I trust this particular entity to have performed this particular action? And so, yeah...
B
It's a case of: okay, so GitHub is maintaining the source code, and GitHub has followed this GitHub Actions process and has made an attestation that that's what they did. That's fine; I trust them to do that. Do I trust them to say that these bits on PyPI are the same as the bits that came out of GitHub? No, that's PyPI's job; I trust them to do that.
B
How much do you trust ActiveState to have trusted everyone else in the chain? You can, if you want to, apply your own policy to all of that, but you could also say: okay, I trust ActiveState to have done the due diligence on the bits that they provided me, and in that case you can delegate trust.
C
So would we... I guess, just kind of thinking about this a little bit more, it's difficult to... you know, like you said, right now there's a lot of businesses out there, a lot of packages, a lot of different binaries and executables.
C
Would we recommend or suggest a standard, based on what we learn? Hey, this is a standard set of things that you could do, or maybe... I'll call them integrations, or assets, that you know are a trusted source... that are registered as a trusted source, and nothing outside of there is trusted. So is that how we do it: a decentralized model, or a centralized model? I don't know, I'm just thinking out loud.
A
So, are you referring to recommending, for example, GitLab and GitHub as a trusted source... like actual products, vendors, or...?
C
I'm recommending to companies to build a supply chain factory that has trusted sources that you...
C
...have vetted, right? Where you know that these are vetted sources. And even within your company, some developer might spin up a tool, or an open source tool, and he might be trying to inject something through that open source tool, or build something with that open source tool that's not vetted, and that's a red flag.
B
...other things to say: okay, well, this is something that increases my level of trust in the organization that is providing me with these things. And those are outside of the scope of SLSA, to a certain extent. What SLSA is trying to do is say that, okay, well, this whole process is tamper-evident from source code to bits. There are going to be parts in here which are opaque, and which you won't necessarily have visibility of, but you can trust this organization's...
B
...you know, its internal organization, its processes, procedures, and everything else, to make sure... to guarantee, or at least suggest, that the bits that they produced are trustworthy. And so you can apply levels of trust to individual people, and that's part of what each consumer of these is going to have to have: some sort of policy about what they accept, even if that policy is as simple as "every part of it has a signature; I don't care who it's from". So that's...
B
...you know, the simplest kind of policy is: I validated signatures for all these attestations. And then it gets more complex: okay, well, this individual made this attestation about this process, and we've got an internal database, or list, or whatever, of who is trusted to do what, and we trust them to do that. And then you get into further levels: well, we trust these guys because they've got SOC 2, or we trust these guys because, you know, for whatever reason.
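The escalating policy levels sketched here, from "every attestation carries a valid signature" up to "this signer is trusted for this specific action", could be expressed as follows. The attestation dict shape, signer names, and trust table are entirely illustrative.

```python
# Illustrative policy checks over a chain of attestations, each a dict with
# "signer", "action", and "signature_valid" fields (hypothetical shape;
# in practice signature validity would be computed, not stored).

def simplest_policy(attestations: list) -> bool:
    """Base level: every attestation has a valid signature, signer unchecked."""
    return all(a["signature_valid"] for a in attestations)

def entity_policy(attestations: list, trusted: dict) -> bool:
    """Next level: each signer must also be trusted for the action it attests to."""
    return (simplest_policy(attestations)
            and all(a["action"] in trusted.get(a["signer"], ())
                    for a in attestations))

chain = [
    {"signer": "github", "action": "build", "signature_valid": True},
    {"signer": "pypi", "action": "distribute", "signature_valid": True},
]
trusted = {"github": {"build"}, "pypi": {"distribute"}}
print(simplest_policy(chain), entity_policy(chain, trusted))  # True True
```

Further levels (trusting a signer because of its SOC 2 report, say) would just add more conditions to the trust table lookup; the structure stays the same.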
B
So at some point you're gonna have to have that level of policy, and different organizations will invest different amounts of effort into producing those policies. And all SLSA is saying, really, is: yep, we got this; they said they did it, and there's a signature to say that they did it. That's kind of your base level of trust. Beyond that is: how much do you trust these people to have done what they said they did?
C
Right, it's like VeriSign, right? Everybody goes to an HTTPS site and they see a VeriSign signature and they go: oh, I automatically trust it. But should you really trust it, right? I'm being a little bit devil's advocate here. Like, we as an industry, you know, at least think that VeriSign is secure, and everybody trusts VeriSign. That's kind of what I was getting at. Okay, with the controls of ISO and SOC and FedRAMP, I agree.
C
Those are all processes and controls that verify what you say you're doing, right, to some level. Even FedRAMP... FedRAMP has a more prescriptive process for that, because I've gone through FedRAMP many times. But I guess my observation, and my suggestion, is: how do we get to that verified level with SLSA? So think of it that way.
B
It does, it does, I think. Yeah, not every mom-and-pop shop is going to have the resources to do that verification and kind of generate their own trust levels themselves, but what they should be able to do is trust one entity, at least, to have done that for them.
C
Yeah, and I think what I was trying to get to, Mel, is... I'll call it the authenticity, or attestation, of what the supply chain factory would look like. So what are the tools, and what is everything from source, to commit, to build, to deploy?
C
What does that supply chain factory authenticity look like? Can we make recommendations of those tools that have been vetted, where we say: yes, we trust these tools?
A
I think most companies would... would that be a public or private statement? Because I don't think most companies would vouch for tools or products externally unless they have some sort of public partnership, if that makes any sense. Like, IBM's not going to say... and this is just, you know, obviously hypothetical... IBM's not going to say: hey, I trust Acme over here with these tools... because we don't have any public relationship, so we're not going to endorse that tool.
B
Yeah, and I don't think you have to make a public statement about every tool that you use. I think, in some ways, this starts to feel a little bit like a web of trust: I trust that tool because I trust that vendor who uses it. And we know that there are problems with the kind of web-of-trust model, in that it tends to build, you know, small islands of trust, but yeah...
B
...on the planet is... and we don't want to get to that situation, ever. But I think it's a case of: okay, well, at some point you have to trust people who trust other people, and that's kind of how society is structured. You have to trust some people to do the things that they say they're gonna do, and, you know, they trust other people to do other parts of...
A
Okay, I think we have six minutes left. Is that right?
A
No, no, I do think it's been useful. It takes me a while to process; I'll probably process this conversation over and over in my head for a while, to try to think about how to apply what was said, and then it kind of starts connecting. So I don't usually get it at the beginning; it takes me a little bit.
B
I think... let's take a summary of this to the specification group and see whether it makes any tangible difference to the plan for SLSA v1, and also going forward.
B
With SLSA... you know, that whole proprietary-source part of it, I think, isn't well accounted for. You have one blunt tool, which is the VSA, for pretty much all of those use cases, and I think there's definitely a case for more fine-grained statements about binaries that you're using, that give you a better idea of how to treat them at a policy level.
A
Makes sense. Anything else you can think of, either Sean or Gilbert?
C
Happy to do that, because I know that... what I usually do is, I start writing it down on my board here, in my office room, and then I start trying to understand it better.
A
Thank you. And so I'm trying to do the same with this: how do I visualize this? It's good that we were doing it live, because that helps me at least get a little further. But how do we put this in some similar form where we can show that there's a gap of trust in...
C
There's very many... there's too many touch points, right? And you don't know the author behind those touch points either, right? They could have a malicious email address; there's no way to validate the email's authenticity. It's more like Sean was saying: we just have to trust that part of the process, or... it's hard. It's very hard.
C
The chain of trust, I think, is important. But okay, so just to kind of summarize here: I feel like we're doing what Google does a little bit. They kind of just take a download of the entire internet, index it, and then they crawl through it in a very fast form, and then they're able to check things quickly.
A
Yeah, it'll be interesting to see how many people will want to see this as a service, because they just don't have the... what's it called... they don't have the resources, right? But I mean, this whole vetting...
C
Yeah, I think it's good, right? If you go with that model, Sean, right... let's say a vendor... SolarWinds is one of those vendors, and SolarWinds gets hacked, right? Now, as part of the model, SolarWinds can have almost like a credit check: hey, SolarWinds is no longer at 800, it's now at 600, because it has this thing, right? I like that concept, at least, Sean.
A
Well, thanks, folks, for joining. I think this was quite productive, and at least for me informative and enlightening, trying to see how other people are tackling this problem. So I'll put the raw PDF in the chat so that people have it, and then I'll try to fix it up before Monday, so that we can walk through it with the broader team.