From YouTube: SLSA Bi-Weekly Sync (March 17, 2022)
B
I feel like it's that scene from Mean Girls: "Stop trying to make fetch happen."
C
At
the
google
representative
on
this
call
I'll
just
say,
I
feel
your
pain
there's
nobody
on
the
me.
Oh,
it's
fine!
So
all.
A
And I just took off the Meet link there. I don't think Ghost is in my calendar, but it's possible. Can someone fix the calendar issue? Do we need a foundation?
E
You
know
kim
I'm
just
jumping
in,
but
jorie
is
back.
She
knows
how
to
do
all
sorts
of
cool
calendar
things
perfect.
So
I
don't
know
what
the
problem
is.
But
if
you
I
would
just
tell
her.
A
Right, let's get started. Welcome, everyone. It seems like all you folks found the right meeting and the right time despite our time change in the US.
A
We
also
move
this
to
be
the
other,
the
the
alternating
weeks
and
I
think,
the
alternating
day,
but
you
all
know
that,
because
you're
here
we
have
a
few
things
on
the
agenda
today,
so
I'm
not
sure
the
time
how
to
time
box
some
of
these,
but
I
think
I
think
maybe
they're
they'll
all
stay
within
time,
but
anyway,
let's
kick
it
off
mark's
gonna
talk
about
the
salsa
proposal
process.
A
J
That's fine; I don't think we need to spend a lot of time on it. I just want to mention that there's now a proposal for the proposal process. I've created the first proposal, being the creation of the process itself, mostly as an example to see if it works. Joshua reviewed it so far, but it would be great if there was any other feedback, whether it's just a plus-one or you have anything constructive to add. I just wanted to throw that out there to get more eyeballs on it.
J
Yeah, so if you click the "files changed" thing, you can then view that file. Or if you click the little dot-dot-dot menu and hit "view file" in the upper right. Yeah, it's the addition of the proposing-changes process, so we'll still use GitHub issues. The proposal is that we still use GitHub issues to describe everything.
J
If there is an issue where it's unwieldy to use a GitHub issue, mostly for larger things, then we can create a proposal document in this other repo, which is basically like a design document, and that allows us to track it. If you go to pull requests in another tab, you can see the very first one.
J
What is the problem you're trying to solve? What is the prior art? What are you proposing? What is the rationale for doing that? It's a standard way of getting agreement on thornier, more difficult issues. One particular thing we talked about in the past was possibly expanding SLSA to cover more than just integrity. That would be an example of something that should be a proposal, because there's a lot of pros and cons and things to weigh.
A
Cool, awesome, thank you, Mark. For folks that have feedback, feel free to check out the issue and leave Mark some comments. All right. Next, we have Asra and Laurent to talk about non-falsifiable SLSA provenance, and it looks like you probably want to take over the screen.
K
Hey, yeah, let me try screen sharing.
K
Oops, sorry, I forget how to do this with Zoom all the time. Okay, can you see my screen? Yep, okay. And then, if I do this... okay, cool. I thought there was a way I could see everyone, but maybe there's not.
K
Okay, awesome, sorry about that. I really don't know how to see the chat box or anything, but feel free, if someone does have a question, to just interrupt me and ask it. Okay, so Laurent and I have been working on this proposal, or design, and a proof of concept as well, for particular implementations around achieving a high level of SLSA, in this case SLSA 3 or 4, and how that would happen using native, open-source tooling.
K
We're really just using GitHub reusable workflows, which is a kind of new feature launched, I think, last fall, and GitHub OIDC, which is also a relatively new feature. With both of these features, we're going to walk through how to get to SLSA 3, and then talk a little bit about how to get to SLSA 4 with the same sort of technique. The motivation here, like I said before, is that we want this upgrade path to SLSA 3 for GitHub-native projects, and we don't really want to introduce new tooling to them.
K
So if they're already relying on GitHub Actions for CI/CD, then they can just sort of use these solutions, and use our demo POC and enhancements around it, to get to SLSA 3 pretty easily, with essentially no additional work, and then maybe also to SLSA 4.
K
Once we work through all of that, the other motivation here is that a lot of package managers have GitHub Actions that allow publishing, and integrating those with a SLSA 3 provenance generator like we're going to show would be an awesome way to get integrations with package managers.
K
So yeah, that's the high level. I'm going to just jump right into what reusable workflows give you and how our solution builds off of that. I figured this audience is relatively familiar with the SLSA 3 requirements, although I always have questions about them, and I'm sure I'm not the only one, but the build requirements are here.
K
The provenance requirements are here. Later on I'll go through each one of them and demonstrate why our solution checks all these boxes. But the main thing here is that non-falsifiability for L3 is achieved with this sort of reusable workflow and OIDC token generation. Okay, so, on GitHub reusable workflows.
K
A reusable workflow is similar to an action; it gets called kind of like an action. But it is basically a mechanism that GitHub released where you can call a workflow defined in a separate repository from the caller workflow.
K
So, for example, I can publish a reusable workflow in my own repository, and then each of my individual project repositories can go and call into that reusable workflow. From the caller's perspective, just like the way you might call an action from a user's invocation, you would say "uses:" followed by the path to the reusable workflow at a particular reference.
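As a sketch of that call syntax (the organization, repository, and input names here are made up for illustration, not taken from the POC), a caller job might look like:

```yaml
# Hypothetical caller workflow in the project repository,
# e.g. .github/workflows/release.yml
name: release
on:
  push:
    tags: ['v*']

jobs:
  build:
    # Invoke a reusable workflow in another repository at a pinned ref.
    # Only the inputs declared under `with:` are propagated to it.
    uses: example-org/trusted-builder/.github/workflows/builder.yml@v1
    with:
      go-version: '1.17'
```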
K
What we're going to do in our solution is construct a reusable workflow that acts as our trusted builder. The reason this works really nicely is that, because of GitHub's job isolation, and because of the isolation that reusable workflows give you from the caller repository, we can achieve the build isolation and user isolation that SLSA requires.
K
The first thing is user interference: between the caller and the called workflow, nothing can really be propagated besides defined input parameters. So environment variables, defaults, services, other sets of steps, other procedures going on in the caller workflow, that is, in the user workflow, don't get propagated inside this reusable workflow. And then the second thing is what happens within the reusable workflow.
K
We can define jobs, just how normal GitHub Actions workflows have jobs; a reusable workflow is structured as a typical workflow. But each of these jobs has isolation between them: each job runs in an isolated VM. So, within this reusable workflow, we can define steps like a build process, and we can define steps like a provenance process, and those can be isolated from each other.
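A minimal sketch of what such a trusted-builder reusable workflow could look like, with the build and provenance steps as separate, isolated jobs (all names and commands here are illustrative assumptions, not the actual POC):

```yaml
# Hypothetical trusted-builder reusable workflow,
# e.g. .github/workflows/builder.yml in the builder repository.
name: trusted-builder
on:
  workflow_call:
    inputs:
      go-version:
        type: string
        required: true

jobs:
  build:
    # Runs in its own VM; only its declared outputs reach other jobs.
    runs-on: ubuntu-latest
    outputs:
      hash: ${{ steps.build.outputs.hash }}
    steps:
      - id: build
        run: |
          # ...build the artifact, then export its digest as a job output...
          echo "hash=$(sha256sum artifact | cut -d' ' -f1)" >> "$GITHUB_OUTPUT"

  provenance:
    # Separate VM: the build process never sees the signing material here.
    needs: build
    runs-on: ubuntu-latest
    steps:
      - run: echo "generate and sign provenance for ${{ needs.build.outputs.hash }}"
```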
K
So then we get that build-process isolation from the provenance. That is the main reason why we're using a reusable workflow here rather than an action: our reusable workflow gives us individual jobs that can be isolated from each other. So that is a super cool feature.
E
Quick question: I'm not familiar with reusable workflows, but I am familiar with GitHub Actions. Is it hard to convert a GitHub action to a reusable workflow?
K
That's
a
good
question
so
within
let's
say:
let's
suppose
that
you
are
a
project
maintainer
who
has
a
build
workflow
right,
so
you
might
define
like
a
series
of
jobs
within
that
you
can
define
a
job
that
calls
into
the
reusable
workflow.
So
it's
pretty
much
like
it's
as
hard
as
adopting
an
action
thanks.
Does
that
make
sense?
K
Yes,
awesome!
So
right!
So
in
our
world
here,
like
in
our
salsa
poc
world,
what
we're
doing
is
we
are
going
to
be
defining
these
trusted,
reusable
workflows
that
do
these
salsa
procedures
and
other
people
like,
for
example,
like
other
people,
can
adopt
those
workflows
to
like,
let's
say,
create
provenance
or
run
a
build
and
create
provenance.
K
Likewise,
like
package
managers
could
adopt
this
solution
to
say,
like
okay,
I'm
going
to
define
like
me
as
pi
pi
or
me,
as
whatever
can
decide
to
publish
a
reusable
workflow
that
would
create
provenance
for
their
project
users.
So
right,
I
kind
of
went
over
this
within
each
within
the
reusable
workflow.
Each
job
is
isolated,
like
a
typical
github
workflow,
which
means
that
we
can
isolate
a
job
step
like
providence
generation
from
a
job
step
that
might
run
a
build.
K
K
So here we are trusting GitHub, obviously: we're trusting GitHub to run the code that we define in our reusable workflows, and we're trusting GitHub to do a variety of different things later on as well. But we are also trusting GitHub not to man-in-the-middle these namespaced outputs; that's kind of a further next-step thing over here.
K
If you have any ideas about getting GitHub to sign off on or verify that data, that would be interesting, but for now what we do is hash-check the authenticity between the job processes. Does that make sense?
K
I
will
just
show
a
picture
and
then
you
can
marinate
on
any
other
questions
here
about
reusable
workflows
so
out
here
on
the
blue
calling
workflow.
So
that's
your
project,
maintainer
who's,
you
know,
building
project
foo
and
then
what
they
can
do
in
our
demo.
Poc,
for
example,
which
works
with
golang.
They
can
define
in
their
source
repository
a
salsa
release
or
yaml.
So
this
might
look
exactly
like
a
go
release
or
yaml.
Then
they
will
call
into
the
reusable
workflow
just
like
in
action.
K
So
they'll
invoke
this
reusable
workflow,
given
these
defined
input
parameters.
So
that's
the
only
thing
that
can
really
pass
into
this
trusted
builder,
which
we
define
to
be
like
a
go
version
or
some
like
environment
variables
that
need
to
be
generated
at
build
time,
for
example
like
a
git
version,
so
within
this
trusted
builder,
reusable
workflow.
We
have
isolation
beyond.
K
Besides
this
input
parameters
and
inside
this
trusted
builder,
we
might
have
individual
steps
like
a
build
step
or
a
provenance
generation
step.
Our
poc
does
a
build
and
a
provenance
generation
and
science
step,
but
you
could
also
imagine
if
you're
calling
workflow
were
to
do
the
build
process,
then
you
could
define
a
more
generic
trusted
builder
that
just
generates
provenance
given
in
artifact
hash,
and
this
kind
of
came
up
mark
gave
us
this
idea,
which
would
be
awesome
to
do,
and
I
just
kind
of
have
to
go
and
do
it.
K
But
that
means
that
it
would
be
applicable
to
any
kind
of
build
process
and
then
out
comes
like
your
binary
and
your
signed
providence
from
your
trusted
builder
in
our
poc.
So
if
you
click
on
the
link,
I
shared
this
slide
deck
with
salsa
discussion.
But
if
you
click
on
this
link
to
the
provenance
over
here,
you'll
also
see
where
our
trusted
builder
is
defined.
K
I'm
just
kind
of
setting
the
stage
of
like
what
a
reusable
workflow
is
all
right.
So
now
I'm
gonna
go
into
the
second
piece
of
the
puzzle
here,
which
is
like
how
do
we
do
authentication
and
signing
so,
like?
I
said
we're
using
like
basically,
two
github
features.
The
first
reusable
workflows
in
the
second
is
oidc,
so
recently
github
added
support
for
oidc,
and
so
what
kind
of
happens
is
every
single
time
you
run
a
github
workflow
you
are
given.
K
You
are
provisioned
by
github,
a
unique
bearer
token,
and
so
what
you
could
do
is
that
is.
You
could
make
a
request
to
github's
oitc
provider,
given
that
bearer
token
and
then
receive
back
an
oidc
identity
token
and
that
identity
token
will
attest
to
your
basically
your
workflow
identity,
which
would
contain
the
you
know,
calling
repository
and
the
caller
repository.
So
it
would
identify
this
trusted
builder
plus.
It
would
also
identify
the
workflow
that
called
into
the
trusted
builder
in
the
case
of
a
reusable
workflow,
which
is
awesome
for
us.
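A hedged sketch of that token request (the `sigstore` audience value is an assumption for illustration); the runner exposes the bearer token and the request URL as environment variables:

```yaml
# Hypothetical job sketch: request an OIDC identity token from GitHub.
jobs:
  get-token:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # required for the runner to mint OIDC tokens
      contents: read
    steps:
      - run: |
          # Exchanging the runner-provided bearer token at the request URL
          # returns a signed OIDC identity token (a JWT).
          curl -sS -H "Authorization: bearer $ACTIONS_ID_TOKEN_REQUEST_TOKEN" \
            "$ACTIONS_ID_TOKEN_REQUEST_URL&audience=sigstore"
```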
K
What
this
trusted
builder
is
given
a
signed
provenance,
and
so
then,
what
we
do
with
that
oigc
id
token
is
that
we
go
and
talk
to
fulcio,
which
is
safe
stores,
root,
ca
that
generates
code,
signing
certificates
based
on
oidc
tokens,
so
fulcio
will
go
and
verify
that
oidc
id
token
and
a
challenge
after
a
challenge
that
attests
to
a
key
generated
inside
this
trusted
builder
and
then
spit
back
to
a
signing
certificate.
K
If
that
identity
looks
correct,
so
basically
full
cio
is
giving
you
a
certificate
or
like
a
little
attestation
saying
hey.
Yes,
you
are
your
id
token
was
verified
and
you
truly
own.
Whatever
signing
key
you
wanted
to
request
from.
I
really
should
have
added
a
signing
key
in
this
diagram,
and
I'm
sorry
about
that.
But
what
this
kind
of
gives
us
is
a
mechanism
to
sign
an
artifact
and
tie
it
with
the
builder
identity.
K
So
just
as
an
example,
this
is
what
the
identity
looks
like
essentially.
Well,
it's
not
quite
the
identity
return
in
the
certificate,
but
it's
the
identity,
token
that
github
provides
you
and
that
is
sent
to
full
co.
So
I
just
want
to
point
out
like
a
couple
things
here
and
also
I
will
take
questions
after
this
slide
as
well.
So
I
want
to
point
out
this
jaw
workflow
ref,
which
is
fifth
from
the
bottom.
K
That
is
the
identity
of
the
reusable
workflow,
so
I
think
in
their
example
they're
using
octo
automation
here.
This
would
be
like
your
trusted
builder
here
and
then,
if
you
look
up
to
repository,
so
that
is
the
like
seventh
line
or
so
from
the
top
of
the
second
of
the
claims
over
here.
That
is
the
calling
workflow,
so
that
might
be
the
original
user
workflow
that
called
into
this
trusted
workflow.
K
Okay, so the job_workflow_ref will contain the org, the repository, and the path to the workflow, and then the ref will just be a ref to main or something. So you'd have to go and pull that in.
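For illustration, a trimmed subset of the claims in such an identity token might look like the following (values are placeholders, patterned on the octo-automation example mentioned above):

```json
{
  "iss": "https://token.actions.githubusercontent.com",
  "aud": "sigstore",
  "repository": "octo-org/octo-repo",
  "ref": "refs/heads/main",
  "sha": "<commit sha of the calling workflow>",
  "workflow": "example-workflow",
  "event_name": "push",
  "job_workflow_ref": "octo-org/octo-automation/.github/workflows/oidc.yml@refs/heads/main"
}
```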
K
Yeah, I think the sixth line, this sha over here (sorry, I can't point to it), the sixth claim, above "repository": I think that's the sha for the repository, so I think that might be the sha for the calling workflow.
M
Yeah, can you hear me? Yeah, so the base_ref, head_ref and sha are all from the calling workflow, so from the project. The job_workflow_ref is the only thing currently available to identify the builder. And yeah, so there's no additional information right now. What remains is that we can...
M
As the builder, we can put this information into the provenance, because we know our own hash, but I think we want to get better security, because what you're pointing out, and what we're aware of, is: if you don't trust the maintainers of the project, they could create another branch, or a tag on another branch, and say this is v2.1 or something. So today I think that's the limitation, and I guess we're going to talk to GitHub to see. Yeah, right, yep.
K
Yeah,
I
think
the
concern
here
is
that,
like
potentially
your
trusted
builders
code
could
be
corrupt
right,
yeah.
So
yeah
again,
we
don't
have
like
a
reference
to
the
commit
here,
and
so
we
can't
really
do
an
additional
verification
to
say
like
okay,
are
we
at
a
trusted
ref
of
the
reusable
workflow?
So
right
now
we
currently,
I
guess,
have
to
trust
the
entire
history
and
code
of
the
trusted
builder.
B
It's not even just a matter of trust, but also for debugging. I've used other systems which do keep track of this information, and it has been life-saving on more than one occasion to know exactly what version of the CI pipeline was in play.
K
Yeah,
I
think,
on
that
it
might
be
worth
I
we
had
like
sort
of
a
list
of
questions
to
ask
github,
and
I
I
wanted
to
kind
of
double
check.
This
was
addressed
in
one
of
them
all
right,
any
more
questions
thanks
and
if
you
want
like
it
would
be
helpful
for
me
because
I'm
like
I
forget
everything
like
if
you
wanted
to
put
in
a
comment
about
that.
That
would
like
definitely
remind
me
to
follow
up
on
that.
K
I
can
do
that
all
right
so
again,
so
back
to
this
picture
over
here,
the
fulcio
issued
cert
populates
information
from
the
oidc
identity,
token
into
x509
extensions
and
also
populates,
the
subject
name
uri
with
the
job
workflow
ref
that
I
just
showed
you
so
looking
at
that
we
have
a
bunch
of
extensions
over
here
that
are
defined
in
like
full
cos
code,
but
you'll
see
the
job
workflow
ref
at
the
top
over
here.
The
calling
repository
in
one
of
the
oid
extensions
and
the
calling
repositories
sha
as
well.
K
So
that's
there
and
so
kind
of
just
to
tie
this
all
in
together.
What
we're
doing
here
is
we
are
able
to
sort
of
identify
this
builder
through
the
signing
certificate
and
trace
back
on
the
information
that
was
given
in
that
oidc
token.
K
So
we
do
have
to
trust
the
full
coca
to
verify
the
oigc
token
and,
like
I
guess
transparently,
we
have
that
and
then
once
we
verify
that
we
can,
we
know
to
trust
the
builder
id
and
once
we
trust
the
builder
id,
then
we
know
what
code
was
executed
to
generate
the
provenance,
which
means
that
we
know
our
province
was
service
generated.
K
We
know
the
provenance
was
populated
correctly
correctly,
and
we
know
that
the
signing
key
was
hopefully
ephemeral,
because
that
was
the
code
run
in
the
workflow
and
also
we
kind
of
know,
it's
ephemeral.
We
rely
on
github's
like
job
isolation,
also
for
ephemerality.
So
I
guess
there's
like
two
caveats:
there
is
that
we
we
generally
are
trusting
github
six
store
and
the
trusted
builders
code.
K
Is
that
that's
yeah,
it's
kind
of
like,
like
I
mean
we
kind
of
had
some
discussions
around
like
oh,
given
that
we're
trusting
all
these
pieces
and
little
bits
of
each
of
these,
like
we're,
trusting
the
job,
isolation,
we're
trusting
the
femorality
like?
Are
there
ways
that
we
could
verify
those
types
of
properties?
So
those
are
stuff
that
kind
of
came
up
with
like
oh,
we
should
probably
follow
up
with
github
on
how
to
expose
that.
E
This
actually
leads
to
a
question:
have
you
considered
trying
to
write
this
down
in
terms
of
here's?
Why
we
met
all
the
criteria,
all
the
salsa
criteria
and
here's
the
things
we're,
depending
on.
K
Yeah
at
the
end
of
the
slide
deck
and
also,
I
think
I
linked
in
the
notes,
there's
a
design
document
which,
like
outlines
in
a
lot
more
detail
of
what
the
verification
procedure
is
doing
and
what
it
relies
on
as
well
as
like
the
like.
You
know,
for
example,
we
rely
on
like
the
the
jobs
to
be
passing
data
between
each
other
through
the
namespace
outputs.
So
all
that's
kind
of
detailed
in
the
design
dock,
and
if
you
find
that
something
is
missing
or
not
clear,
please
feel
free
to
comment.
K
Oh, interesting, okay: like manual reproducibility checks. Well...
K
I think you would only really need to run the build process twice. Yeah, fair, yep. And then from there, that could populate a reproducibility boolean with some certainty; we could say "reproducible, with two invocations". That's awesome; that's a good idea!
K
Okay,
sorry,
I
can't
see
anyone
all
right
so
visually
what's
going
on
here,
is
that,
like
the
client
has
a
provenance
and
an
artifact,
so
they
have
assigned
provenance
and
an
artifact
output
by
this
trusted
builder
and
the
first
step
that
they're
going
to
do
is
they're
going
to
go
and
retrieve
the
signing
certificate.
Right
now
we
are
just
like,
in
our
verification,
poc
we're
directly
querying
record
for
the
correct
signing
cert,
but
you
could
imagine
if
you
wanted
to
reduce
the
latency
of
like
querying
record.
K
You
could
also
publish
your
signing
certificate
inside
the
trusted
builder
and
upload
to
your
github
artifacts,
and
then
consumers
could
go
use
that
for
direct
verification.
So
you
could
skip
this
first
step
if
you
wanted,
but
anyway
we're
output
the
signing
certificate.
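As a sketch, publishing the certificate could be a single step inside a trusted-builder job (the artifact and file names here are made up):

```yaml
      # Hypothetical step inside the trusted builder: publish the signing
      # certificate as a workflow artifact so consumers can fetch it
      # directly instead of querying Rekor.
      - uses: actions/upload-artifact@v3
        with:
          name: signing-cert
          path: cert.pem
```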
K
So
the
next
step
here
is
that
we
want
to
go
and
verify
the
signature
on
the
providence,
so
we're
going
to
take
in
the
certificate
and
take
in
the
signed
provenance
verify
the
signature
there.
That
means
that
we
can
at
least
trust
that
the
provenance
wasn't
tampered
with.
So
that
gives
us
intel
like
data
integrity
and
sort
of
gives
us
authenticity.
So
then,
from
there
we're
going
to
verify
the
builder
identity
by
extracting
the
subject
name
of
the
certificate.
K
So
once
we
verify
the
builder
identity
so
verifying
it
means
let's
go
and
check.
This
is
the
a
trusted
reusable
workflow
once
we
have
that
we
can
actually
trust
the
contents
of
the
provenance,
because
we
know
that
that
signing
cert
could
have
only
been
generated
by
an
oidc
token
issued
inside
that
trusted
reusable
workflow,
assuming
of
course,
that
we
trust
github
to
provision
the
correct
oidc
token
and
that
we
trust
sigstor
to
issue
and
verify
the
signing
certificate
so
from
there
we
go
and
verify
the
problem.
K
So
at
that
point
we
trust
that
whatever
our
builder
identity
signed
was
actually
authentic
and
non-falsifiable.
So
at
that
point
we
can
go
and
parse
that
providence
payload
ingest
that
artifact
and
do
whatever
verification
we
need.
So
maybe
we
hash
the
artifact
verify
that
it
matches
the
provenance.
Maybe
we
do
some
other
sort
of
provenance
matching
to
say
like
okay,
the
config
source
was
something
I
trusted
and
so
on.
According
to
whatever
policy
you
have
so
does
the
flow
of
that
make
sense.
K
Cool
so
again,
justin
steps
over
here,
I'm
not
gonna.
It's
basically
literally
what
I
just
said,
but
again
here
we're
verifying
integrity
and
non-falsifiability,
we're
verifying
authenticity,
and
then
we
trust
that
providence
payload.
So
then
we
can
do
our
actual
policy
check
to
do
some
other
again
more
in
more
detail,
so
unpacking
that
salsa
3
achievement.
Some
other
key
points
here
on
that
non-falsifiability
here.
So
in
addition
to
just
testing
the
builder
identity
inside
that
certificate,
which
gives
us
trust
in
the
workflow
content
and
the
providence
generated.
K
We
also
like
note
that
that
ephemeral
key
is
only
accessible
inside
that
trusted
builder.
So
that's
like
pretty
key
point
in
the
salsa
3
requirement
list
is
that
the
user
process
and
build
process
don't
have
any
access
to
that
signing
key,
and
that's
because
that
that
signing
key
is
generated
inside
the
providence
generation
vm.
So
it's
isolated
from
both
the
build
process
and
also
from
the
calling
workflow
so
from
project
maintainers.
L
Repository
if
I
have
to
pass
environment
variables
like
how
do
you
make
it
calamityless
or
any
of
like
any
things
for
my
like
seagull
any
of
those
things,
but
if
I
wanted
for
the
golang,
how
do
you?
How
do
you
achieve
strong
achieve
that.
K
Right,
so
I
think
this
like
this
might
be
like
a
a
my
understanding
of
parameter
list
here.
Is
that
even
dynamic
like
flags
can
as
long
as
they
are
generated
by
some
scripted,
workflow
or
some
sort
of
scripted
code,
then
that
might
suffice.
So,
for
example
like
putting
in
the
the
git
commit
in
like
a
version,
information
or
something
at
build
time
is
still
considered
parameter
list,
given
that
we
are
putting
it
into
the
source
code.
K
Feel
free
to
like
I
I
I
disagree,
but
that's
okay,
yeah
in
in
your
case
here,
though,
like
how
would
you
even
deal
with
like
dynamic
flags.
L
Yeah,
that's!
That's!
That's
why?
So
that's
my
that's.
My
question
asked
if
you're
calling
a
parameter
list,
so
that
means
we
have
crystal
clear,
define
which
parameters
are
acceptable
which
which
are
not
that
becomes
a
kind
of
gray
area
as
to
okay.
For
for
these
things,
it's
acceptable
for
these
things
and
not
so
it's
kind
of
hard.
K
Yeah
for
what
it's
worth,
we
do
only
accept
an
allow
list
of
parameters.
K
Our
code
inside
the
trusted
builder
will
go
and
check
those,
and
it
also
will
go
and
actually
list
out
inside
the
provenance
what
the
build
command
was
and
also
what
the
environment
was.
So
hopefully
it's
at
least
traceable,
and
also
the
calling
workflow
is
referenced
from
the
commit
sha
and
the
source
repository
inside
the
provenance.
So
you
can
go
and
check
the
calling
workflows
code
if
you
wanted
to
sort
of
truly
lock
that
down.
K
M
Yeah, also something I think we discussed: whether the build is parameterless or not depends, I think, on what level of abstraction you look at. If you look at the compiler, then the arguments and the environment variables...
M
You
know
there
are
some
parameters,
but
if
you
look
at
the
builder,
which
is
triggered
by
just
a
you
know
a
github
trigger,
then
the
only
parameters
is
basically
just
like
the
trigger
name
and
the
trigger.
You
know
the
even
payload
and
like
as
like
astra
said
everything
is,
is
is
in
source
code.
M
So,
basically,
from
the
builder's
perspective,
the
only
parameters
are,
you
know
the
github
payload,
the
github
even
payload,
and
I
think
that's
why
we're
arguing
that
this
is
parameter-less,
but
also
something
I
want
to
point
out
is
this
is
actually
you
know,
independent
from
the
implementation
that
we
have.
So
if,
if
people
believe
that
this
is
not
less
parameterless
in
in
essence,
it
means
that
salsa,
like
salsa,
has
a
problem,
because
you
cannot
achieve
it
and
especially
in
golang.
You
cannot
achieve
it
because
you
need
those
environment
variables
and
so
on.
K
Right
and
like
the
the
idea
of
this
solution
also
doesn't
require
you
to
expose
those
if
you
didn't
want
to
so
you
could
just
again
like
reuse
this
sort
of
framework.
I
guess
for
achieving
salsa,
3
and
github
with
reusable
workflows
in
oidc
and
not
expose
any
parameters
or
you
could
limit
what
triggers
you
allow.
E
Just
to
can
we
put
a
pic?
Can
we
put
a
pin
in
this
because
I
think
that
this
issue
about
what
parameter
list
means
is
broader
than
this
specific
implementation?
It's
pointing
out
a
concern,
so
I
I
don't
know
how
to
put
a
pin
in
that,
but
we're
gonna
need
to
come
back
to
that
more
broadly,
david.
A
Check
out
the
chat,
there's
an
issue
that
people
are
referencing
that
might
contain
the
scope
of
the
issue.
Issue
278
looks
like.
N
Do
we
want
to
have
a
set
of
allowed
parameters
because
the
the
the
intent
of
parameter
lists
is
we
don't
want
to
allow
the
scope
to
be
so
broad
that
it
becomes
impossible
to
reason
about
what's
actually
happening
in
your
build
at
the
same
time?
Right
you
know,
so
we
can
limit
that.
That's
a
big
thing,
and
so
that
leads
to
sort
of
two
flows.
One
is:
could
you
have
an
allow
list
of
parameters,
or
do
you
just
literally
define
your
build
such
that
every
single
thing
is
like
here?
N
H
Yeah, I think, to my mind: as long as the parameters do affect the IDs of the outputs, so that any change to a parameter is also a change to the artifact that you produce, and hence to the ID of the artifact that you produce, and you can bake those parameters into either the build script or the details of the invocation...
H
There's
no
way
for
for
anyone
to
kind
of
modify
parameters
and
try
and
produce
a
focus.
Artifact.
K
I
guess
my
claim
is
that,
like
for
some
ecosystems,
that
might
not
be
possible
if
you
have
dynamic
tags
that
are
based
on
versions
or
commits,
but
agreed
that
it
is
like,
for
example,
like
your
the
parameters
for
your
event
trigger
are
not
going
to
change
your
your
build
if
it's
reproducible
so
actually
yeah.
I
guess
I
mean
it
really.
It
really
depends,
but
regardless,
whatever
you
just
decide,
parameter
list
means
it's
like
likely
achievable
with
this
structure.
K
Unless
you
have
like
such
a
stringent
requirement
that
even
something
like
the
the
trigger
would
change
that.
O
Just
just
to
note
when,
with
the
python
rebuilder
work,
a
very
common
thing,
as
as
sir
you
just
mentioned,
was
taking
the
tag
or
like
yeah
tagging
workflow
the
release
workflow,
as
you
know,
the
trigger
to
release,
and
so
I
I
think,
that's
a
very,
very
common
sort
of
maybe
non-parameter,
but
a
parameter
propagation.
That
definitely
needs
to
be
handled
and
considered.
K
Yeah
right,
so
maybe
it's
more
like
yeah
a
lot
list
of
parameters
that
we
are
defining
here
all
right
and
then
verification
in
general,
like
we
have
in
our
demo
poc
at
least
again
we're
given
a
binary
into
provenance,
and
we
even
like
exposed
some
like
expected,
like
source
repositories,
which
kind
of
go
in
the
realm
of
like
policy
here
and
expected
tagging
branches
and,
as
you
can
see
like
the
sort
of
command
line
output.
K
Here
we
go
and
fetch
the
record
transparency
log
entry
that
this
was
uploaded
at
and
print
out,
some
information
based
on
the
signing
certificate
in
providence,
for
that
particular
artifact
and
providence
set.
So
just
as
like
a
quick
appendix
this
is
like
an
example
of
some
of
the
provenance
that
is
generated
from
our
demo
poc
for
go
so
on
the
right
here.
You
can
see
the
the
actual
build
invocation.
K
You
can
see
the
source
repository.
You
can
see
the
the
trigger
on
the
left.
You
can
see
parameters,
you
can
see
the
digest,
sha
et
cetera
the
trusted
workflow
as
well
in
the
builder
id
and
then
just
from
here
on,
like
just
kind
of
closing
it
out.
K
If
you
have
any
more
questions,
feel
free
to
jump
in,
but
there's
lots
of
places
to
go
from
here
we
can
generate
like
more
generic
actions,
like
mark
had
suggested
about
just
generating
providence,
given
artifact
hash,
we
could
define
package
manager
specific
integrations
with
this
type
of
workflow,
more
testing
paths
to
l4
and
so
on.
O
K
Right. So, for example, PyPI exposes actions to publish, so you could also imagine that PyPI would handle a reusable workflow that did the build step for you before you invoked the publish. However, in the case where we just scoped our reusable workflow to only do provenance...
K
That
would
actually
be
a
lot
easier
as
well,
so
users
could
keep
their
own
defined
step
and
then
right
before
they
do
pi,
pi,
publish
or
npm
publish
they
could
call
into
a
reusable
workflow
to
generate
that
provenance
and
send
that
with
them.
Does
that
make
sense?
So
this
is
this
is
the
case
where
someone
already
has
like
a
ci
action
in
github
that
is
publishing
to
their
package
repository.
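A rough sketch of that shape, with a hypothetical provenance-only reusable workflow wedged between build and publish (all names here are illustrative assumptions):

```yaml
# Hypothetical publish pipeline: the user keeps their own build and
# publish steps, and calls a provenance-only reusable workflow between them.
jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      hash: ${{ steps.build.outputs.hash }}
    steps:
      - id: build
        run: echo "hash=$(sha256sum dist/pkg.tar.gz | cut -d' ' -f1)" >> "$GITHUB_OUTPUT"

  provenance:
    # Generates signed provenance for the artifact hash only.
    needs: build
    uses: example-org/trusted-builder/.github/workflows/provenance.yml@v1
    with:
      artifact-hash: ${{ needs.build.outputs.hash }}

  publish:
    needs: provenance
    runs-on: ubuntu-latest
    steps:
      - run: echo "publish the package together with its provenance"
```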
O
Gotcha
I
I
was
actually
envisioning
this
as
sort
of
a
tail
sort
of
action
on
an
existing
workflow,
but
you're
you're
saying
it
would
potentially
be
in
the
middle
between
like
build
and
publish.
K
I
suppose
so
yeah,
I
guess
we
could
also
get
direct
integration
with
the
publish
step
and
that
way
users
don't
even
have
to
change
their
calling
workflows,
cool.
M
M
You know, where is the provenance going to be kept? Because you want, like, pip install to actually check the provenance; at least you want pip install to be aware of provenance information.
A
There's a new group with some of those folks; you could probably figure out where those discussions are happening and maybe see what they're thinking. All right, am I re-sharing? Cool. All right, Asra, this was awesome. I have a couple questions, but since we have a few more topics on the agenda today, maybe we'll take any more questions offline. But this was an awesome demo and I'm excited to see where it goes. Cool. So next we have a discussion question from Sam. Sam?
P
The way I read it, it could potentially be interpreted two ways: one way would be that the actual build service itself would have to generate the provenance, and then potentially another way would be that you could accomplish that through a CI job. I'm just struggling to understand what the intent of that is, or maybe we could get some clarity. I have an example project on GitLab that uses cosign to sign a build output.
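A GitLab job of the kind Sam mentions might look roughly like this sketch. The image tag, artifact name, and the `COSIGN_KEY` / `COSIGN_PASSWORD` CI/CD variables are assumptions; `cosign sign-blob` is the real cosign command.

```yaml
# .gitlab-ci.yml fragment: sign a build artifact with cosign (sketch).
sign:
  stage: deploy
  image:
    name: gcr.io/projectsigstore/cosign:v1.6.0   # hypothetical tag
    entrypoint: [""]
  script:
    # Key material is supplied as a masked CI/CD variable;
    # COSIGN_PASSWORD should also be set for an encrypted key.
    - echo "$COSIGN_KEY" > cosign.key
    - cosign sign-blob --key cosign.key my-artifact.tar.gz > my-artifact.sig
  artifacts:
    paths:
      - my-artifact.sig
```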
N
So, and others correct me if I'm wrong, I believe the intention of "service generated" is pretty much to say that the build itself, like the stuff that is user-definable, should not have access to the key, right? Because you could do all sorts of bad stuff there. And it becomes hard, right, because when you have the builder, which could be running all sorts of arbitrary code, it becomes hard to know what's actually going on with the key inside of that runner.
N
P
J
Yeah, yeah, I think so. So, for example, and we should probably file an issue about this, to kind of capture it and eventually turn it into actual documentation and clarification, because I agree that it's ambiguous.
J
I think, for example, if the signing key is accessible to the actual build script itself, and, like, someone could have just grabbed the key and generated provenance not using the CI/CD system, that would not meet the requirement. Although I think there's some ill-defined difference between two and three on the non-falsifiability; there's, like, a stronger bar for three, but I don't think we've decided exactly what that bar is.
J
J
O
Might we just rename "service generated"? Because I think, Sam, what you were saying was the original intent, right? That the build system actually generate the provenance. Mark?
J
Yeah, the original intent was that the build system generate the provenance, because I think we didn't foresee these other solutions.
O
B
Do we have an issue to discuss that, like, discuss the naming, if we change that? That's a great idea.
P
P
Yeah, I'm happy to open an issue there. It would help us a lot for this to be clarified, because, like, we're trying to understand: do we need the runner to natively be generating all of this provenance, in which case, you know, that's a pretty long roadmap for us to build all of that out natively inside of GitLab, versus something like what was demonstrated today with GitHub, right? Are there other paths to accomplish this? Anyway.
P
E
J
I think really levels two and three are a bit ambiguous. Level four is, like, the top strength, and level one is, like, you basically have anything at all. Levels two and three are meant to be kind of intermediate milestones. At level three we say non-falsifiability because, just based on past experience, you know, sometimes implementations, even though the provenance is generated by the service, use untrusted inputs.
N
Yeah, definitely have some thoughts on that, so once the GitHub issue is up I can definitely put some in. I think there's also a lot of subtlety around, you know, who is actually doing the signing, right? And even saying "hey, I have an OIDC token", great, but depending on the build script, right, if that build script is not good, it could still sign something that's not legitimate.
E
By the way, I am delighted by this overall process: trying to make it happen in a general way, finding the requirements need some refinement, refining the requirements, and then we're more likely to have good stuff in the end. So I don't see this as a problem; this is actually an awesome sign.
A
I agree. That was actually my question for Azra: whether they created any issues along the way. They said there were some things that weren't clear, but I digress.
E
Yeah, actually, I would beg that, you know, if there's anything in the process that wasn't clear, beyond what you've already noted... I mean, this is exactly perfect for that sort of thing.
A
Yeah, okay, cool. Let's try to quickly get through the last couple items on here; looks like we'll get an issue out of that. Tom, you're next: we have a SLSA blog. Now, I'm guessing this probably goes to... oh, okay, it goes to an issue. We're talking about having a SLSA blog, and it looks like you're looking for who might approve posts. My quick input on this, Tom: I don't know if you want to drive this, but I don't want to have a crazy process.
A
E
A
Q
So I don't know that we care either way. I definitely agree that, like, less process is more, right? Like, you know, basically convince someone that this is well written and not spam, and have them double-check your understanding of SLSA, and go; something like that. I don't know.
A
Well, sounds good. Thanks, folks, for taking notes today too; this is great. And then the last topic: Brandon, you brought this up yesterday, about having a concerted effort on just collecting open source compromises.
A
R
I think we kind of mentioned a couple of groups; I think the Backstabber's Knife Collection, q-tail, Checkmarx as well. Maybe, either out-of-band or during this meeting or some other meeting, can we kind of have a discussion on that?
A
Yeah, John Speed, are you on the call? I thought there was an issue somewhere, or maybe you guys started a Slack.
S
Discussion. That's right, there is an issue, and I can drop it in a second; it will take me a second, it's hard for me to do while I'm talking. But there is an issue, and I think Abhishek, now months ago, potentially with Kim, listed a set of open source software compromises from over the past year or two, and then there's been a series of conversations where myself and others have said yeah.
S
This is a good idea, so basically thumbs up from me. And it got so far as there's a private repo created within the OpenSSF called, I'm hoping, like, oss-compromises. Myself and Abhishek and a few others have been talking about it, but of course there are many things going on, so I've kind of been saying thumbs up, thumbs up, and I've been looking for collaborators. So if you're interested in working on such a thing, I'm glad to talk about it in the Slack or with me.
S
R
Okay, cool, yeah. Let me share as well. I don't have access to that repo, but we also have one; maybe we can see how we can consolidate. This one is public; it was started by Santiago.
R
This was kind of, like... it's probably not complete, but I don't have that large an opinion on where it's hosted; as long as it's public, I think that's kind of the main thing. But yeah, let's sync up offline and discuss it. Sounds great.
E
Yeah, and real quick: I suggest proposing this as part of the supply chain working group. I know that's already been mooted, because this crosses many things, not just SLSA, agreed, but, that said, hopefully this can inform SLSA.