From YouTube: SLSA Meeting (October 6, 2021)
E: Sure, give me just one second. I know some of the folks have seen this already; let me just double-check that I've reset the demo since the last time I gave it, so that I can show off all the different builds.
C: And while you do that, I will briefly stall for you by saying that I was at the Open Source Summit last week and talked about these things, including a shout-out mentioning SLSA. So there are people who've heard about it.
A: Oh yeah, lots of mentions of it, and there are going to be even more on Monday at KubeCon. I was just going to ask if maybe we could start a list here of people who will be at KubeCon, so we can all try to meet up while we're waiting.
E: Eric, cool. One other thing I just want to add regarding what you were mentioning about SLSA, about a lot of people talking about it: I'm also involved in the Nix and NixOS community, and some of the folks there were very interested in seeing how the Nix tooling and the things they're doing over there could be used to prove elements of SLSA.
E: All right, cool, so let me get started here. Can everybody see my screen?
E: Do I need to make anything larger? Everything good? All right, cool. I know some of you have seen this demo if you've seen me give it at the CNCF or some other places, but I'm going to run an example secure software factory here. This is a combination of different tools operating inside of Kubernetes, to show what a secure build system for a secure supply chain might look like, and I'll also show how this relates back to the SLSA work by doing admission control based on attestations at the end.
E: I'll also show how we need to consider all the different pieces in here holistically, the whole big picture, because all it takes is one small gap for a supply chain compromise to make its way in.
E: So just to give the high level: it's doing some basic stuff, cloning the source, using Kaniko to build, generating an SBOM of the source files, doing some other steps, and so on. It's using Cosign behind the scenes, it's producing attestations, and so on. So if I go in and quickly do an ls here, you'll see this is the image itself, and this is an SBOM of the source files.
E: This is a signature for that SBOM, here is a Chains attestation for what actually happened in that build, and here is a signature on the actual image. So if I want to show you what this looks like, I can do...
E: Okay, so this is an attestation, and if I want to fetch the metadata, to actually get the data associated with that attestation, I can.
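The attestation fetched here is, in Cosign's format, a DSSE envelope whose payload is a base64-encoded in-toto statement. A minimal sketch of unwrapping one (the envelope below is constructed sample data, not output captured from the demo):

```python
import base64
import json

# A toy in-toto statement like the ones Tekton Chains produces
# (contents are illustrative, not taken from the demo).
statement = {
    "_type": "https://in-toto.io/Statement/v0.1",
    "predicateType": "https://slsa.dev/provenance/v0.1",
    "predicate": {"builder": {"id": "https://tekton.dev/chains/v2"}},
}

# The DSSE envelope wraps the statement as a base64 payload plus signatures.
envelope = {
    "payloadType": "application/vnd.in-toto+json",
    "payload": base64.b64encode(json.dumps(statement).encode()).decode(),
    "signatures": [{"keyid": "", "sig": "<signature bytes>"}],
}

# Unwrap: decode the payload to recover the statement and inspect it.
decoded = json.loads(base64.b64decode(envelope["payload"]))
print(decoded["predicateType"])
```

In practice the signature would also be verified against the expected key before trusting anything in the payload.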
E: Right, I can look at what the actual build was doing as it goes through all these steps. Something to note here, about what happens inside of the namespace where the build is actually running:
E: I am using Kyverno as an admission controller to validate that the images are signed with the right keys. So I'm making sure that anything I'm pulling down from Tekton is signed with the Tekton key, anything I'm pulling down from Sigstore is signed with the Sigstore key, other things in there are signed with my key, and so on. So: great, I have attestations, I have signatures, I have all this stuff.
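A sketch of what such a Kyverno rule can look like; the registry pattern and key are placeholders, and the exact schema depends on the Kyverno version in use:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-build-image-signatures
spec:
  validationFailureAction: Enforce
  rules:
    - name: require-known-signer
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "registry.example.com/build/*"   # placeholder registry
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      <placeholder: the Tekton / Sigstore / project key>
                      -----END PUBLIC KEY-----
```

One such rule per trusted signer (Tekton key, Sigstore key, project key) gives the per-source key validation described above.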
E: What could possibly go wrong? So let me show you: if I pull down...
H: (inaudible)

E: Yep, so there's a handful of things in here. A lot of this is described in more detail in the CNCF's reference architecture document for the secure software factory, but at a high level I'm using Kyverno as well as OPA Gatekeeper, for two separate admission-controller use cases; most likely those would become one as some of the features coalesce.
E: I'm using a bit of custom code as a wrapper API around Cosign to do some of the querying for the admission controller, because the admission controller, in the case of OPA Gatekeeper, can make HTTP requests, but it doesn't really know how Cosign works itself. So you have to say: hey, make an HTTP request...
E: ...it returns some JSON, and I'll act on that. A lot of that work was based on work that the person who goes by developer-guy on the CNCF and some of the other Slacks, and a few other folks, had done. And then I'm also using Tekton and Tekton Chains as the actual CI/CD system, and I'm trying to think...
E: Oh, that's weird: why does it say "goodbye world"? It should have run "hello world". If I go back here and look at what it cloned, it cloned this code from here, and if I show you what that code looks like, the source says "hello world". So what went wrong? Well, to spoil it for you a little bit: over here I have a Dockerfile.
E: That is supposed to be the parent builder image of this thing. It includes this rust:latest image, but I'm actually replacing cargo, the Rust compiler, with my fake one. And because I wasn't validating the actual parent images, the parentage of the images of what I'm building, it didn't get caught, and that's how the bad code made its way in there. So what sorts of things can we start to do to validate that?
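The shape of the trick described above, and the fix, can be sketched in a few Dockerfile lines (image names and the digest are placeholders, not the demo's actual files):

```dockerfile
# Malicious "builder" image: starts from a floating tag and quietly
# swaps the real Rust compiler for a fake one.
FROM rust:latest
COPY fake-cargo /usr/local/cargo/bin/cargo

# Safer alternative: pin the parent to a specific, signature-verified
# digest instead of a floating tag (placeholder digest shown):
# FROM rust@sha256:<verified-digest>
```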
E: ...build the real one, which I've already signed and pushed out, and rerun it one more time. So this is now running against an image we've already pre-approved: we're using the right digest, we've signed it, and we've validated that whatever this image is...
E: ...we've validated that that parent image is good, so now it should go through and actually build correctly this time. But once again, as everybody here on this call knows, a lot of this stuff has to be validated; it's hard to do it all at once, but we need to think about the problem holistically.
E: That your root of trust hasn't been compromised already is one of the assumptions being made in this demo. I'm using AMD, so I'm assuming AMD hasn't been hacked and their processors are not compromised; I'm assuming Kubernetes hasn't been compromised; and I'm assuming Tekton and Cosign have not been compromised. But assuming those things hold, that's where I'm rooting...
E: ...my trust, and then I can do some of these extra things, so that as stuff comes in I can validate it, and so on. And if I go here, you can see: yes, this parent image over here was validated. Now, in this case I'm still not following best practices: I would most likely always pin to an individual digest, in addition to checking that those digests are signed. But for the sake of the demo, you can see all of this now.
E: What I can do now is rerun it, and it will have all the same sorts of attestations and signatures, and so on.
E: Here, and now it says "hello world". Now, one extra step: now that we've shown how we can do all these things to secure your builds and secure the supply chain, what can we do to say, okay, when I move this thing to production, I only want to run stuff that has gone through these attestations and so on? I want to validate: does this thing...
E: ...have the correct SLSA attestations, and if not, don't let it run. That's where the other admission controller comes into play, the OPA Gatekeeper one, so let me quickly show you what that looks like.
E: Once again, I'm not an expert in OPA Gatekeeper or anything like that, so my Rego might not be the best, but the basic idea is that I'm checking two things: one, is the image signed with my key, and in addition to that, does it have the required attestations? And what this is actually calling...
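A rough sketch of the shape such a Rego rule might take; the wrapper endpoint and response fields here are hypothetical stand-ins, not the demo's real API:

```rego
package prodadmission

# Deny any container whose image fails the signature/attestation check.
# http.send calls a wrapper API around cosign, since the policy engine
# itself does not know how cosign works (endpoint is illustrative).
violation[{"msg": msg}] {
  container := input.review.object.spec.containers[_]
  resp := http.send({
    "method": "POST",
    "url": "http://cosign-wrapper.default.svc/verify",
    "body": {"image": container.image},
  })
  not resp.body.verified  # wrapper reports signature + attestation status
  msg := sprintf("image %v is not signed or lacks required attestations", [container.image])
}
```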
E: ...is a wrapper API around Cosign, once again based on some of the work that other folks in the community have done. So let me show you what happens if I try to run something that isn't signed.
E: So I try to use this random curl image in the, quote-unquote, "prod" namespace. For the sake of the demo, this is the namespace where the admission controller is actually controlling access, and you can see here: it doesn't have a valid signature, and it doesn't have valid attestations. But if I do...
E: ...this one is admitted. Why? Well, it has all those attestations and so on. And then one final thing, just to show you where we're thinking about this for the future: if I use crane ls again here, there's stuff like the SBOM. Now, in general we want to keep most of the information as attestations: yes, you have an SBOM that meets these sorts of requirements, and we're attesting to that.
E: Yes, you have those things, but it's also still useful to be able to go back into that metadata at some point in the future, to say: hey, we just recognized that there's a bad hash that refers to something. Can we go back and actually look at what's in one of these metadata files that lives alongside the image in OCI? And we can...
E: ...still perform the same sort of admission control. We can do things like invalidate a hash and say: sorry, you can't deploy if this hash is found in your SBOM; or we can revoke access, and those sorts of things. And then we can obviously still do other, higher-level things, like revoking the attestations for images that have those SBOMs, and so on.
E: Cool. And just one quick thing: the other thing that's also useful here is that I can now go back at any time and query whatever's in this image, by doing...
E: All right, there we go. Oh yeah, let me pipe into jq here. So, for example, here is a CycloneDX SBOM of the source files. And because all of this stuff lives alongside the image in OCI, and because it's more or less content-addressable, this SBOM is the digest here...
E: Right, it's this digest, with a little extra suffix on it: .sbom. This allows me to know: yes, this SBOM refers to this image, this signature over here refers to this SBOM, and there are some additional things we can do with attestations to provide some additional guarantees.
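The content-addressable naming convention being relied on here can be sketched in a few lines: Cosign stores the signature, attestation, and SBOM next to the image under tags derived from the image digest (the digest below is a placeholder):

```python
# Sketch of cosign's related-artifact tag naming: "sha256:<hex>" becomes
# the tag "sha256-<hex>" plus a suffix identifying the artifact kind.
def related_tag(digest: str, suffix: str) -> str:
    """Turn 'sha256:abc...' into 'sha256-abc....<suffix>'."""
    return digest.replace(":", "-") + "." + suffix

digest = "sha256:0a1b2c3d"  # placeholder digest for illustration
print(related_tag(digest, "sig"))   # sha256-0a1b2c3d.sig
print(related_tag(digest, "att"))   # sha256-0a1b2c3d.att
print(related_tag(digest, "sbom"))  # sha256-0a1b2c3d.sbom
```

Because the tag is derived from the digest, anyone holding the image can find its signature, attestations, and SBOM without a separate lookup service.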
E: Cool, that's the main part of the demo. Any questions?
B: (inaudible)

E: Yeah, so a couple of reasons, mostly just practical reasons for the demo. One of them is that Kyverno makes it very easy to do stuff like this: it has built-in integration with, I believe, Cosign-signed images, so I can just give it a key, use this verifyImages rule, and plug the key in there.
E: Gatekeeper itself does not have that ability, but it does have the ability to make arbitrary HTTP requests, so it can do a lot more, though it's also a lot more complicated. So one of the things in here: I wrote a wrapper API, based on some of the work that other folks in the community have done, and Kyverno can't call that directly.
E: So I had to use something else, and OPA Gatekeeper can do that. But Gatekeeper also doesn't make it super easy to do some of the other stuff. Ideally, as admission controllers get more feature-rich, I would imagine you'd be able to pick whichever one suits your particular use case; but for this particular thing, if I just want to quickly check a key, Kyverno makes it super easy.
E: As far as I know, I can't do that in Kyverno as it stands today. There are some other admission controllers I could probably do it in, but I'm mostly familiar with Gatekeeper in that sense, and it allows me to do this sort of thing, whereas I couldn't in Kyverno. But if I purely want to check a signature based on who owns that image, or who should have signed that image, right now that's very cumbersome in Gatekeeper, I believe. There's also some work from some folks trying to make a plug-in and some other things.
B: Yeah, that answered it. I was using Connaisseur for image verification, and I posted a link into the Kubernetes security SIG group, because they're also looking at this process. I saw you used both, and I just thought it highlights one of the gaps we have in our admission-controller ecosystem right now: we can't do this with one nice utility; we have to use one or the other. And I wanted to side with Gatekeeper, but writing a wrapper script wasn't something I had time to do, so it's cool to see your demo.
C: In fact, I think we really need to scrub it. On the other hand, there are certain things that people are likely going to want to do, and we can say: here's a way to do it, or here are three ways to do it, and reasons why you might use each, or something like that. But make it clear that those are just tips that might help you; they are not required.
C: Yeah, providing easy paths is great; we just need to make it clear that you don't have to do it this way. These are the requirements, and here's an easy way to get there. Because no matter what, people are going to come to the table with other approaches and things they already have.
F: (inaudible)

C: Yeah, I don't think that should be a requirement for SLSA. You should support it, but there are a whole lot of systems that are standalone; there's a vast number, billions, of standalone devices, and we care about those too. If it's a pacemaker, I would still like to have some confidence in its supply chain.
I: Yeah, and actually to that point: I'm on the agenda for later to talk about that. I have the SLSA reference architecture doc that I'd like to discuss. It's in a very early state, but it also gets toward that question of what the general ideas of how this works are, as opposed to exactly how it works: not being specific to any particular build system, and not OCI-specific, but working for OCI and for kind of any package manager. I think that's important.
I: Also, Trishank, you mentioned that OPA and Rego are too limiting. I can mention, since various folks are talking about admission controllers, what we've been thinking on the Google side, on the Binary Authorization side:
I: Any of these policy languages are terrible to write, and most people should not have to write them. So we have been looking at, and are working on, something like the OPA Gatekeeper approach, if you're familiar with it: some expert writes a template in a policy language like Rego that does all the hard work, and then most regular people just instantiate that template with parameters that say, for example, I expect the source repository to be this, or the build system to be that, or the key to be this, and the template does all the joining.
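On the Gatekeeper side, that split might look roughly like the following: an expert publishes a ConstraintTemplate containing all the Rego, and users only instantiate it with parameters. The template kind and field names below are hypothetical:

```yaml
# User-facing instantiation of a hypothetical expert-written template;
# all the Rego lives in the template, users supply only parameters.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: SlsaProvenance        # hypothetical ConstraintTemplate kind
metadata:
  name: require-trusted-builder
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    sourceRepository: "https://github.com/example/app"   # placeholder
    builderId: "https://tekton.dev/chains/v2"            # placeholder
```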
I: Similarly to what Michael has presented, we're working on some sort of wrapper around OPA to make that all happen, because there are a lot of fiddly bits, like checking the hashes and all this other stuff, that are hard to get right, and using any of these things directly is just awful. So that's why I think it's important to have some sort of wrapper.
F: Oh yeah, no, I agree. Without running too far over the agenda: I just wanted to make the comment that we found some limitations, that at least OPA doesn't work for us right now. I'm happy to get my colleague on the call next time; he can talk more about the technical details.
E: Yeah, one quick thing I wanted to add before handing off, regarding some of the stuff around reference implementations: with the CNCF reference architecture, we are definitely trying to show some of that off at both levels. At a high level, these are the requirements you need to hit in order to get these things right (for example, you need to have an admission controller that does these things), and then here is a reference implementation, but you can use whatever you want. I just wanted to throw that out there as well.
J: All right, I've got an extra question. I don't know if you can hear me; my microphone's playing up.
J: (inaudible)

E: I put it in the chat here, and I'll post it again; it's under my public GitHub. To be clear, this is not the CNCF reference implementation, but some of that code might make its way into the CNCF reference implementation. This is just purely some of the stuff that I'm also going to be demoing at KubeCon, plus some other things. People are more than welcome to poke around with it and use it as they see fit; ping me, ask questions.
A: Not me; it could have been Kim. She lost power today (there's some PG&E thing going on), so she's not here, but it could have been her. Ouch. Okay.
C: Okay, we'll defer.
D: Let me see, how do I present to a Chrome tab? You should...
K: It's easy to inadvertently click on the mute button when you're sharing, because it throws the control panel up to the top. Oh...
I: Because there's this audio share button that mutes you. Okay, all right, sorry, I've not used Zoom before. All right. So, the idea: we're trying to put together a demo for showing off the ideas of SLSA at a higher level, more end-to-end, more of a macro level: what is the value of SLSA?
I: The demo shown by Michael is, I think, more of a specific implementation, how you could do this today and how you could realize it. This is more about how we can change the industry and the ecosystem as a whole: how can we get, for example, all packages on PyPI to be validated, or something like that?
I: That's where I'm going. So the main value proposition that I'd like to show, and I propose this and we can discuss it, is that SLSA provides value by tracing back to source.
I: Without that, how do you know an artifact actually comes from its source? You have pretty much zero guarantees; you're just trusting that the authors, or more specifically the people who have access to the credentials that can upload, actually did that, and you trust that they did it faithfully. So the value with SLSA is that you get some independence: you don't have to trust those individual authors; you can trust some other organization that maybe you deem more trustworthy.
I: So, for example, if it's built by GitHub Actions and you trust GitHub Actions, you don't have to trust tens of thousands of different projects that use GitHub Actions; you can just trust that organization as a whole, or similarly CircleCI or whatever build platform. And if we tie this back to a hardware root of trust, which is something that we've been thinking about, then you trust the software and AMD, but you don't have to trust any particular organization.
I: So that's what I'm trying to get across. The overall picture is pretty simple, and the idea, which we don't have documented yet but which I would like to put on the SLSA site somewhere, is that the builder generates some sort of provenance, it gets stored somewhere, and then ultimately an admission controller decides whether a given package is okay: given a combination of a package, a policy, and some evidence in the form of attestations, does it meet some requirements? It's a fairly straightforward idea; I'm just trying to document what the overall model is and give terms to these pieces. Again, this matches what Michael just presented earlier.
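The decision described above can be sketched as a pure function over those three inputs: a package, a policy, and a set of attestations. All field names here are illustrative, not from any real schema:

```python
# Sketch: admission decision = f(package, policy, attestations).
def admit(package, policy, attestations):
    """Admit the package only if some attestation covers its digest
    and was produced by a builder the policy trusts."""
    for att in attestations:
        if (att["subjectDigest"] == package["digest"]
                and att["builderId"] in policy["trustedBuilders"]):
            return True
    return False

package = {"digest": "sha256:0a1b"}
policy = {"trustedBuilders": ["https://tekton.dev/chains/v2"]}
good = [{"subjectDigest": "sha256:0a1b",
         "builderId": "https://tekton.dev/chains/v2"}]
bad = [{"subjectDigest": "sha256:0a1b",
        "builderId": "https://evil.example"}]

print(admit(package, policy, good))  # True
print(admit(package, policy, bad))   # False
```

A real verifier would also check the attestation signatures and richer policy conditions (source repository, build entry point, and so on), but the shape of the decision is the same.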
I: I think we will probably also need source attestations that say a particular git commit lived in a particular repo, for cases where you build from a mirror or something like that, or where the builder doesn't actually know which repo it pulled from. Internally within Google we have that use case, so we're building this internally, and once we have something that actually works, we'll make it public.
I: So what I'd like to do in this almost-MVP, or demo, is to have a working end-to-end system for something other than OCI, because there are already a lot of examples of Docker and OCI working, like Michael's example. I'm trying to figure out how we can have a demo that works for any package manager regardless of technology: PyPI, Maven, npm, anything.
I: The main work streams are: how do we generate provenance, how do we store and propagate provenance, and then how do we perform some sort of policy that determines whether an artifact is okay? Those are the three main work streams, so I'd like to break it up along those lines.
I: The Sigstore model, the Cosign model, is that the attestation store lives within the package repository itself. That's kind of a special case of this. I think it's a particularly good model with a lot of nice properties in terms of reliability, but it doesn't always work. For example, if you wanted to do this in PyPI... I'm making a claim here, so...
I: ...please correct me if I'm wrong: I think if you tried to do this in PyPI, it wouldn't work, because there's nowhere to stick the attestations. Well, I guess maybe you could include one as an extra file, but a lot of package managers have nowhere to put this extra information, and you can't stick it inside the package itself, because it has to contain a hash of the package, so it would be circular.
I: Cosign gets around that by having the special tags, the hash plus .att or .sig, but that doesn't always work. So, in terms of the three main questions (how do you generate provenance, how do you propagate it, and how do you verify it?), the three main options I've been thinking about are, one...
I: ...the build system itself could generate the provenance in some sort of trusted control plane. Using GitHub Actions as an example builder: GitHub itself would generate the provenance and attest to the fact that it is all true. I think that's the best model, but it requires convincing all these builders to make a bunch of changes to their systems, so that's kind of a no-go, at least initially. Another option, and this is what we already have demos for, is that the user-controlled part, the tenant on the build system, generates the provenance. So you could do this as a GitHub Action, or, if you're using Azure Pipelines or Google Cloud Build or CircleCI or whatever, as just one of the things you run within the build that generates the provenance.
I: The main downside to this is that you then have to trust all of the security of the CI system. That thing has to have access to the cryptographic secrets; you have to trust that none of the people who run it have gained access to those secrets; and you have to make sure that nothing that runs on the CI can steal those secrets. It's really hard to get right, and as an external person it's really hard to verify.
I: I have to trust all these different organizations: if I verify curl, I have to trust curl; if I verify PyYAML, I have to trust PyYAML, et cetera. So the total trust base across all the packages that I depend on is huge, and that's not particularly desirable. So something I put together a little demo for, which I'm still working on and which is in a weak state right now (there's a link further in the doc if you're interested), is something that scrapes the existing build logs of the build system. I think it's an appealing model that I want to consider doing. I don't think it's the right long-term solution, but in the short term it shows the idea, and that's one thing we're working on building. In terms of the attestation storage...
I: ...this is where the doc is still rough; I'm just sharing work in progress to let folks know where we're going. The main two models are: you either have some sort of central storage, Rekor could be an example, where you store the attestations on the transparency log, or you store them inline in the package manager. Both of those have advantages and disadvantages.
I: Matt Swozo, who I think is on the call, and I have been trying to come up with just a general metadata store that doesn't depend on anything, kind of just a dumb store, just to get something working, because with Rekor you're kind of buying into the whole Sigstore model. Basically, I just want something that works that we can get done quickly.
C: Yeah, so I had a comment, and my apologies, this is really out of left field, and in some sense it is left field...
C: ...for me too. At the conference last week, Ava Black presented to me a very different, interesting model that I am still trying to think through the pros and cons of. Ava's basic idea was, in the long term, to tweak the compilers to record the hashes of all their source code inputs; and in the interim, it's not hard to create a shim that notices "hey, my build process opened a file," takes a hash of that input, and then generates basically a hash of the list of hashes that were used as input.
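The shim's output described above, a hash of the list of input hashes, can be sketched in a few lines (the inputs here are in-memory stand-ins for the files a build process would open):

```python
import hashlib

# Hash each input the build consumed, then hash the sorted list of
# those hashes to get one combined digest of everything that went in.
def digest_of_inputs(inputs):
    file_hashes = sorted(hashlib.sha256(data).hexdigest() for data in inputs)
    return hashlib.sha256("\n".join(file_hashes).encode()).hexdigest()

a = digest_of_inputs([b"fn main() {}", b'[package]\nname = "app"'])
b = digest_of_inputs([b'[package]\nname = "app"', b"fn main() {}"])
assert a == b  # sorting makes the digest independent of open order
```

As noted below, this records what went into the build, not where it came from, so it is closer to an input manifest than to provenance.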
C: It's not exactly provenance; it's really just hashes, because you don't know where the sources came from. I know what the inputs were; I don't necessarily know where the inputs came from. So it's not exactly provenance, and the result is not exactly an SBOM either, because again it doesn't tell you where anything comes from; it just tells you the hash. On the other hand, you have very strong information on exactly what went into the build process.
C: Here's the hash, and assuming the hashes are cryptographic hashes with good cryptographic properties, you know what the inputs were. And I'm wondering if maybe that could be another approach, with its own pros and cons, and I just...
I: So, in terms of this model, I think that sounds like the case where the user-provided tooling does it. My main question is, from a trust-model perspective, why do I trust that any of that information is accurate? Let's take PyYAML as the example here: say PyYAML did that. I now have this attestation that says here are the inputs and here are the outputs.
I: What makes me believe that it's actually accurate? I probably wouldn't; I have nothing to make me believe that it's accurate. If I want to convince myself, I could look at their code, I could look at how it's configured, I could maybe look at how the keys are protected, and maybe I could do all of that, but it's not scalable and it's not automatic.
I: I think that, from my perspective of protecting integrity, of preventing people from tampering with the process, something at the compiler level is too low-level. I've been more interested in the overall process: consider the build as a black box. Compiler-level stuff seems like useful information, especially for vulnerabilities and licensing, but it almost seems like it isn't about integrity.
C: Yeah, if you don't trust the compiler, for example, then you have a very different kind of use case. I have some work in that area. Well...
C: Yeah, I'm not entirely sure that you're removing that trust. Again, if you take what was generated by the build and are going to run it later, you are trusting it.
G: I'd suggest that part of the difficulty is trying to find the one true trust, rather than accepting that all trust is a matter of subjectivity: literally, who is the subject that we trust, to what degree, and at what time do we trust them? Self-promotion alert: I wrote that article on requirements for universalizing that graph, and I do talk about subjectivity as one thing that should be encoded explicitly in the trust model.
I: Yeah, one thing that I think would be good to spell out is that it's not just "secure": "trustworthy" is a very broad thing that could have lots of definitions, and I think it's helpful to break it down into components. One of them is integrity, which again is hard to define; I think in maybe the blog post or something like that we talked about defining it as protection against tampering, which again raises the question of how you define that.
I: But I kind of feel like the solutions for integrity, meaning preventing people from tampering in a way that the organization that owns the software does not want (like an individual tampering with it, or some third party), are probably different from the solutions around code quality, or freedom from vulnerabilities, or things like that. That's another important aspect, perhaps even more important, but it seems like it's a separate concern.
I: So anyway, coming back to the reference architecture: this was more just sharing work-in-progress thoughts. If you're interested in collaborating on this or have any feedback, either comment on the doc or email me; I'm happy to get any help or feedback on it. I just wanted to share...
C: ...thoughts, got it. So I'm trying to distinguish... I think you're right, they're different kinds, and I'm trying to figure out how to verbally distinguish these different kinds; I'm struggling a little bit. I mean, it sounds like one thing is "did you run this process?", and the other is "can I trust the process that I ran?", or "how trustworthy is the process that I ran, and was this a process that I ran?"
C: That may not be exactly right; I'm struggling a little to capture it, if somebody has a better way of capturing it.
I: Yeah, one way we've been thinking about it at Google is: remove the trust in people. We shouldn't have to trust that people have done the right things.
C: Yeah, and I would add, and I think this is one of the key things, and it comes back to some of the discussions from earlier: if you want to verify it, the best way is to show that you can reproduce it, which brings us back to reproducibility. I know you don't want to add that in, but if I don't trust the people, and I'm not sure I can trust the underlying systems, I'm running out of options.
E: I guess one thing I do wonder: as much as we can remove the need to trust people, and I agree with that sentiment, I do think that at some point you have to trust somebody to have done something, because otherwise no code got written in the first place. And I know I'm mostly talking about this from an end-user perspective; we're not selling...
E: ...in my day job, we're not selling software. The only software that we have is stuff like online banking, that kind of thing. From our perspective, we still have the requirement to understand which vendors we can trust, and in the open source world I think it's a somewhat different sort of problem.
E: It's similar, to be clear, but when it comes to talking about whether I trust vendor X to have done these things, maybe they don't want to reveal how they built all their applications.
C: I see we're running short on time, so I assume that when we come back we'll probably continue this discussion, and also whoever slipped in the other item will admit their guilt.
I: Okay, we're basically at time. Any other issues before we go?