From YouTube: SSCS Working Group Meeting - April 10, 2023
A: Welcome, everyone, to our weekly meeting for the software supply chain security working group. I just went ahead and dropped that. One item I just wanted to cover right up front, before we get too deep into everything: I know we've got some folks who are new to the working group. We recently also added some folks external to GitLab who are participating in related features with the working group as well. So I just wanted to preface this with an overview of some of the roles and responsibilities here.
A: We have a number of engineers from the government team specifically who are contributing 80 percent of their time, at a minimum, to the working group, and then we have a number of others who are participating in, you know, various capacities as advisors or otherwise.
A: Also, just to recap some of where we landed last week: we ended up making some DRI assignments for specific features, to do some refinement on some of those. So Brian's taking the lead on the back end and Daniel on the front end on defining the user experience for signed container registry images. I think it's Ollie, I'm not 100 percent sure how to pronounce his name.
A: So my apologies if I'm saying it wrong. He's taking the work to evaluate overall where we're at with doing the signing itself: putting together sort of that end-to-end architectural plan of how we not only do the signing (you know, generate the tokens, do the signing) but then push that all the way through to verification. And then Aaron is doing a deep dive into whether or not we need an additional OIDC provider and figuring out how we can get the tokens that we need in order to do this.
B: Who was on the front end of signature verification? I missed that one; I'm trying to take notes on that.
B: Yeah, before we go on to that one: I know that, I think it was Darren, had started a blueprint a bit ago, but I think that preceded the working group. So is that...
A: Yeah, so that's going to be part of the work that Ollie's doing. He's going to be doing a spike, and then the results of that will feed back into that architectural blueprint. That was one of the outcomes of that spike issue that Ollie is responsible for.
B: Yes, I guess we can move on to two, which is kind of the same thing. Is anybody able to give an update on the status of the OIDC provider endpoint, and whether we intend to pursue that for 16.0?
D: Can I just ask? Well, it's more of a clarification question here. I want to be totally transparent on this too: I come in from the compliance group to help out, and from the discussions I've had with Nate, who's on vacation this week, and...
D: ...on the task that I picked up, doing a spike on signing, which I took up under the impression that it would be a good opportunity to ramp up on some of the terminology and some of the technical details behind what we're working on. And it feels like, since I have done that, we have launched directly into this new OIDC issuer, I guess, problem.
D: That, I think, has a lot more facets to it than I think I am really adequately qualified to undertake, especially given the time sensitivity of this and how much of the future work it's blocking. So I want to get that cleared up completely, because I do want to contribute here, but I don't feel like it's going to help us move forward if the onus is kind of on me to make that decision, because this is well beyond my expertise at this point.
C: Yeah, okay. We're all kind of trying to do our own research on how this stuff works; I think none of us understands it completely in depth yet, we're all still researching it. But if you would like, I can take over as the DRI for that issue.
C: I have been researching how the signature verification works. That's been my primary focus for today, and I'm actually writing that epic, huge comment about how it works so that we can reference it later. So that's probably going to inform the other decision that we make about this, because...
C: I do think that, in order to make an informed decision about the OIDC provider and the other stuff that we're working on, we need to understand how this works first. So I've been doing my research on that; I expect to finish it in maybe a day or two, and then I'll be able to look at the OIDC provider again and see about making a decision on that.
B: Yeah, and like I think Billy has mentioned several times in a few different threads, I think the reason why this issue is a bit convoluted is that it's not technically a blocker, right? Fulcio can sort of work around the...
B: ...those breaking... like, have all those breaking changes occur right now and then not have to worry about anything in the future. And there's been a long-standing request to move the issuer for CI job tokens to a separate domain anyway, because folks that are running private instances behind a firewall or something can't use that feature for what it's intended for, which is authenticating with external services, right?
B: So if the OIDC endpoint is not publicly accessible, folks won't be able to use these new keyless signing features, right? You'll run, say, cosign attest within CI, but then when Fulcio tries to hit the discovery endpoint and validate that the public keys match up, it won't be able to do that. And that's true with AWS and GCP and all the sort of identity federation stuff.
B: So I think it makes sense to launch the keyless signing integration with standing up the new issuer on a separate domain as part of that. But if not, I think we need to think through what it looks like to migrate folks from a new path to a new domain in the future, right? Yeah, that's my thought on it.
D: I'm happy to sit in on those discussions, but I don't think it's helpful to have someone external to them, where it's already been ongoing, have to make a decision. And another thing, when we talk about time sensitivity too: it wasn't clear to me in context what kind of time frames we were looking at, but there was a lot of, you know, kind of requests for updates, and I wasn't really in a position to give those.
C: I also think it's too late to add more breaking changes to 16.0 at this point, because it's going to start in a week or two. I know that this feature is alpha and technically we could break it whenever we want, but I really don't want to do it without giving any notice.
C: Like, that's a good way to lose trust from customers.
B: Yeah, I mean, to be clear: the issuer is embedded in the JWT, right? So unless you're hard-coded against the current issuer endpoint, which I think is just gitlab.com, it's not really a breaking change in that sense; all the tooling would still automatically validate. I'm not sure what all folks are using that JWT for today, but yeah, like you said, the ID tokens keyword in the CI YAML is in alpha, right, and I think most of the tooling is integrated programmatically.
B: So changing the issuer by itself is not a breaking change. I just meant that there are definitely other breaking changes, like removing the old environment variables, right? And potentially, like we're talking about, changing the claims that are embedded into the JWT. So if we remove some old ones or rename anything, those would also be breaking changes. But I don't think that changing the issuer itself is necessarily a breaking change, because all the metadata that you need to validate that token is still embedded in the token. So does that make sense?
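B's point that the issuer travels inside the token can be made concrete with a small sketch. This builds an illustrative (unsigned) JWT and reads back its `iss` claim; the token contents here are made up for the example and are not the actual GitLab claims.

```python
import base64
import json

def b64url_encode(data: bytes) -> str:
    # JWT segments use unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def decode_claims(jwt: str) -> dict:
    # The payload is the second dot-separated segment; re-add padding first.
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))

# Build an illustrative header.payload.signature token; values are made up.
header = b64url_encode(json.dumps({"alg": "RS256", "typ": "JWT"}).encode())
claims = {"iss": "https://gitlab.com", "aud": "sigstore",
          "sub": "project_path:group/project"}
payload = b64url_encode(json.dumps(claims).encode())
token = f"{header}.{payload}.signature"

# The issuer travels with the token itself, not with any side channel.
print(decode_claims(token)["iss"])
```

Anything that only consumes the token programmatically, rather than hard-coding the issuer string, keeps working when the issuer changes, which is the distinction B is drawing.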
E: So I would actually disagree there. I know for a lot of cases, for Sigstore and cosign, we actively encourage people, when they're verifying policies, to say which issuer it's coming from, because we sort of see this problem all the time where, you know, the issuer... what GitHub...
E
What
GitHub
as
an
issuer
says,
is
okay,
maybe
different
than
what
Google
says
as
an
issuer
is
okay,
this
this
pops
up
a
lot
for
like
how
do
you
know
some
someone
has
control
over
an
email
address,
so
we
we
do
tend
to
encourage
people
to
pin
to
specific
issuers.
So
I
I
would
expect
that
to
be
a
breaking
change.
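A minimal sketch of the issuer pinning Billy describes, in the same spirit as cosign's `--certificate-identity` and `--certificate-oidc-issuer` flags. The identity and issuer strings below are hypothetical, and a real verifier checks these values against the signing certificate rather than comparing raw strings.

```python
# Hypothetical pinned values for illustration only.
PINNED_IDENTITY = "https://gitlab.com/group/project//.gitlab-ci.yml@refs/heads/main"
PINNED_ISSUER = "https://gitlab.com"

def policy_allows(identity: str, issuer: str) -> bool:
    # Both the signing identity and the issuer that vouched for it must
    # match the pinned values; a new issuer fails closed.
    return identity == PINNED_IDENTITY and issuer == PINNED_ISSUER

print(policy_allows(PINNED_IDENTITY, "https://gitlab.com"))
print(policy_allows(PINNED_IDENTITY, "https://new-issuer.example.com"))
```

This is why moving the issuer is a breaking change for users with pinned policies, even though the token itself remains self-describing.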
E: I don't think that's a showstopper, though.
E: Yeah, that's why I was saying in that one comment, I can pull up the link: if we do move forward with the keyless signing on the current issuer, I think that's totally fine.
E
We
would
just
basically
treat
that
whatever
the
new
issuer
endpoint
as
a
completely
distinct
entity,
and
then
it
would,
it
would
just
be
on
sort
of
gitlab's
responsibility
for
like
how
do
you
Shepherd
users,
from
the
the
old
endpoint
to
the
new
endpoint,
and
what
does
that
transition
look
like,
and
you
know
how
do
users
know
which
one
to
use
for
their
validation
stuff
like
that.
B
But
there's
nothing
all
the
nothing
in
the
tool
chain
needs
to
go
back
and
consult
the
old
issuer.
The
old
oidc
issuer
endpoint
to
verify
things
that
have
already
been
signed
right
because
their
signatures
just
exist
in
transparency,
log
and
don't
need
to
be
revalidated
against.
A
All
right
well,
in
any
case,
I'm
fine
with
the
plan
to
have
Brian
take
over
as
dri
for
this
I
think
that's
totally
fine-
and
you
know
it's
gonna
sounds
like
it'll,
be
a
couple
days
before
you're
able
to
start
in,
but
I
think
that's,
okay,
I
think
that's
a
smart
move
to
just
make
sure
we
make
a
good
decision
here.
I
know,
there's
time
pressure,
but
it's
better
to
make
the
right
decision
than
yeah.
B
I
think
I,
just
I
just
thought
of
brand,
like
an
option
that
we
have
here
is
in
order
to
limit
the
impact
like
I
assumed
that
I
think
God
has
already
deprecated
like
the
old
CI
job,
Duty
and
CI
job
Duty,
V2
and
those
are
on
the
I.
Don't
think
they're
scheduled
for
removal
yet,
but
like
they're,
already
deprecated
so
I
think
that's
the
intent.
B
But
if
we're
worried
about
breaking
changes
to
that
stuff,
which
could
be
important
because
I
think
our
vault
integration
hinges
on
that,
the
new
issue
could
could
just
be
scoped
to
jwts
that
are
generated
by
the
new
ID
token
keyword,
I.
Think
so
you
could
leave
everything
else
in
place
and
just
stand
up
a
new
issuer
that
is
specifically
only
used
when
you're
generating
tokens
with
the
ID
token
keyword,
and
that
would
be
sufficient
right.
So
that's
just
another
option.
C: ...it's technically his; he typically owns that part of the product, and we shouldn't make a deprecation decision without his input.
A: Yeah, I agree. And then, Brian, I think you probably have enough refined, you know, if you're finishing out that refinement. Aaron, I don't know what you want to move over to, but I would assume, if you wanted to pick up some of those back-end issues on the user experience for signed container registry images, where I think Brian's got those pretty well refined, you might be able to just pick up where he's left off there.
B: Yeah, I was hoping Ollie would be around to discuss this stuff a little bit. I can chat with him about it offline later. I think the biggest open question on aligning with the Fulcio spec is what we're going to do about...
B: ...one of the claims that GitHub supports, its runner_environment, which is intended to be something like platform-hosted versus self-hosted, right? And I think everybody understands the context from the linked PR, but basically...
B
We
need
some
way
to
identify
like
managed
Runners
versus
unmanaged
Runners,
which
isn't
really
something
that
we
indicate
explicitly
anywhere
in
the
interface
today.
As
far
as
I
know,
I
think.
The
only
way
that
you
know
that
you're
running
on,
like
a
managed
Runner,
is,
is
just
based
on
the
runner
description
currently
and
that
Runner
description
is
just
like
a
a
plain
text
value
so
not
really
something
that's
suitable
for
embedding
into
a
JDP
claim
at
the
moment.
B: Yeah, so what I think I was missing from the initial discussion is that, again, Fulcio can sort of remap our tokens into other things, right? So I don't know whether it makes more sense to just include some Boolean value like that in the claims that we generate, or to have something more specific that they could then use to map into that claim.
B
So,
like
I,
think,
Ali
was
looking
at
like
the
runner
descriptions
for
our
shared
Runners
that
have
a
nice
like
URI
associated
with
them.
That
indicates
the
environment
that
they're
running
in.
If
that
were
something
that
was
standardized
across
all
Runners,
that
would
be
something
that
volcio
could
could
hook
into
and
be
like
okay.
B
Well,
this
one
has
the
host
name
of
like
Runner
manager.gitlab.com,
and
so
we
know
that
that's
a
platform
hosted
shared
Runner,
but
yeah
I'm,
just
trying
to
think
of
a
way
that
we
can
answer
the
question
easily
for
now,
but
it's
still
be
forward
compatible
with
changes
that
we
might
want
to
make
in
the
future.
You
know
what
I
mean,
because,
like
I
I
think
I
know
like
Darren
has
talked
to
me
about
us
potentially
supporting
like
managed
Runner
fleets.
On.Com.
E: Yeah, I don't have a definitive answer, because I think what it really comes down to is what you want users to be writing policy against. If the Boolean true/false is sort of all the knobs you want to give them, then great, we can do that. If we do want to have more of the "oh, what type of runner pool? Where is it running? Who has access?", then yeah, we probably want some more details there.
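To make the two options concrete, here are two hypothetical claim shapes. Neither is a committed design, and every name and value below is illustrative only; GitHub's existing claim, for comparison, is the string-valued runner_environment.

```yaml
# Option 1: one coarse claim that policy can match directly.
runner_environment: gitlab-hosted        # or "self-hosted"

# Option 2: richer claims that Fulcio could remap into runner_environment.
runner_manager_host: runner-manager.gitlab.com   # hypothetical
runner_pool: shared                              # hypothetical
```

The trade-off discussed above: option 1 is the single knob users write policy against; option 2 carries more detail at the cost of having to support each claim indefinitely.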
B
We
send
you
a
Boolean
claim
value
for
now.
Well,
I!
Guess,
if
it,
if,
if
we're
sending
it
across
any
claim
value
like
we
have
to
support
that
on
our
end
indefinitely,
okay,
yeah,
we'll
just
have
to
think
about
it.
A
little
bit
more.
A: I could take this one. Billy, that might be yours.

E: Yeah, I'm doing most of this work anyway. Yeah, so I didn't introduce myself at the beginning: hi, I'm Billy; I work over at Chainguard. You might have seen me on a few other open source projects; I contribute a lot to Sigstore, Tekton, gitsign. So happy to be working with you all, and yeah, I'm excited to make this happen. So yeah, we have a bunch of stuff...
E
That's
that's
sort
of
in
flight
already
so
support
for
arbitrary
ID
tokens.
We
landed
in
cosine
and
get
sign
so
once
the
full
Co
changes
are
in,
we
should
be
able
to
start
picking
up
the
gitlab
tokens,
basically
as
soon
as
possible,
which
is
exciting.
E
Pr
is
out
for
six
door
JS
as
well,
which
Brian
from
GitHub
did
approve
but
needs
to
go
through
another
cycle:
that's
fine,
yeah
and
then
beyond
that
I've
started
poking
around
the
npm
CLI
stuff
changes
look
fairly
straightforward,
I.
Think
again,
some
of
this
does
depend
on
the
full
Co
changes.
E
I
know
with
the
GitHub
with
the
gitup
actions
workers.
There's
there's
definitely
a
push
for
only
using
authenticated
data,
so
data
that
was
present
in
in
the
JWT,
so
yeah
I'd
like
to
follow
suit
as
well
for
for
any
gitlab
data
as
well.
So
things
like
commit
Shah
things
like
you
know
where
the
file's
coming
from
I
think
we've
already
covered
most
of
it
in
in
the
other
issues,
but
getting
those
in
place
and
and
have
those
be
Upstream
to
full
Co
and
start
publishing.
E
Those
will
probably
be
a
blocker
there,
but
otherwise
I
think
everything
looks
good
for
for
the
most
part
so
far,
but
I'm
happy
to
answer
any
questions.
B
Give
me
share
my
screen:
real
quick
I
just
wanted
to
clarify
a
couple
things
that
Billy
was
talking
about
for
anybody
that
may
be
watching,
recording,
async
or
doesn't
have
the
context.
B
Can
you
see
my
screen?
Do
you
see
the
save
store,
jsmr.
B: So the way this is going to work in the short term, this is sort of a stopgap until we can have integrated ambient credential detection. I think another issue that Ollie was on was standing up the separate API endpoint where you could request an ID token, rather than having to use the ID tokens keyword at all.
B
But
in
the
meantime,
basically
what
Bailey's
done
is
establish
a
convention
where,
if
you
inject
an
ID
token
that
is
named
Sig
store
ID
token,
all
the
tooling
get
signed
cosine
will
will
respect
that
automatically
right.
So
in
the
future,
like
the
only
thing
that
would
change
will
increase
to
actually
support
ambient
credential
detection.
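A sketch of what a job using that convention could look like, assuming the `id_tokens` CI keyword and the `SIGSTORE_ID_TOKEN` variable name mentioned above; the job name, image, and file name are illustrative.

```yaml
sign:
  image: alpine:latest
  id_tokens:
    # The convention: Sigstore tooling looks for an ID token injected
    # under this exact variable name.
    SIGSTORE_ID_TOKEN:
      aud: sigstore
  script:
    # cosign picks up SIGSTORE_ID_TOKEN without an explicit --identity-token.
    - cosign sign-blob --yes artifact.tar.gz
```

Once ambient credential detection lands, the same job could drop the explicit token injection; the convention is what makes the transition transparent.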
A: I can pass it off to someone else. I think I'll try to stay on if I can, just in case. Okay.
B
Does
that
answer
your
question?
There
I
pointed
to
the
struct
that
we
are
using
to
construct
it
right
now.
Yeah.
E
That
is
mostly
helpful
are.
Are
you
guys
doing
anything
specific
with
the
predicate
I
know
that
that
ends
up
being,
like
the
most
customized
part,
quite
a
lot.
E: So, typically, what we see for other predicates, for things like GitHub Actions or Tekton, is that CI systems will actually embed the config spec into the predicate itself, and how that format looks can vary per provider; SLSA was intentionally written to be flexible in that way.
E
So
if
all
If,
all
we're
doing
is
just
using
the
the
standard
like
salsa
like
in
total
predicate
fields
that
that's
easy
enough,
but
if
we're
doing
something
sort
of
more
involved,
basically,
what
I'm
trying
to
get
at
is
like
I
want
to
try
to
mirror
the
predicate
that
we're
using
that
gitlab
is
using
Elsewhere
for
npm.
E
So
if
there's
like
any
like
live
examples,
or
anything
like
that,
you
can
point
me
to
that'd.
Be
super
helpful
for
me
to
to
base
the
npm
stuff
off
of.
A: Yeah, so I just dropped an example; there's a screenshot. Also, if you want to test this out in a GitLab project, it's extremely easy. I'll drop this in the notes document as well, but all you have to do is create a .gitlab-ci.yml file in a project, and it's a very, very short file, and it'll produce the attestation for you.
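The "very short file" itself isn't shown in the transcript. As a sketch, assuming the runner's artifact-metadata experiment (the `RUNNER_GENERATE_ARTIFACTS_METADATA` variable) is the mechanism being described, it might look like:

```yaml
build:
  script:
    - echo "hello" > artifact.txt
  variables:
    # Ask the runner to emit a provenance attestation for the artifacts.
    RUNNER_GENERATE_ARTIFACTS_METADATA: "true"
  artifacts:
    paths:
      - artifact.txt
```

The attestation is then uploaded alongside the artifact itself.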
A: I couldn't say for certain. I'm pretty sure it happens at the end, though, but I could be wrong on that. We'd have to ask; I think it was Gregory who implemented it.
B: Yeah, I'm wondering, if we were to continue to do provenance generation within the runner, how do we connect that to client tooling like npm? I'm thinking of obvious ways, like it just dumps the JSON file somewhere and sets an environment variable indicating where that file is, and then client tooling can just read it in, produce an attestation, and send it off. But I don't know, there's much...
E: ...about that. Yeah, so that would be nice. We can't do that, though, because there's no guarantee that the file hasn't been tampered with between when it was written and when the attestor is reading it. That's why, for the npm stuff, there's a desire to only use values generated... like materials and config source and stuff like that. Anything that's sort of identifying information should always be authenticated and coming from the JWT directly.
B
Me
look
I,
provided
a
couple
links
Below
on
on
B
for
like
how
npm
is
doing
it
currently.
E
Yes,
yeah
they
are
current
they're,
currently
using
the
environment
variables,
though
there
there
is
an
issue
open
in
the
npm
beta
issue,
tracker
I'm,
not
sure
if
you
have
access
to
that,
I
can
get
you
access.
If
you
want
that
basically
says
like
hey,
we
should
stop
doing
this,
so
we
should
use
authenticated
Providence
only
or
authenticated
data.
Only
okay.
A: And yeah, this would have to be generated at the end, after the job has finished running, because we're looking at the output, the build artifact that's produced as a result of the job, and as part of that attestation we're including a SHA hash of the file. We couldn't get that if we were running it at the beginning of the job, so we have to let the job complete first so that we can get all of the output and certify, for example, the SHA hash of that output.
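The flow just described, hash the finished artifact and embed that digest as the attestation's subject, can be sketched as follows. The statement shape follows the in-toto format; the file name and builder ID are made-up examples.

```python
import hashlib
import json

def provenance_statement(name: str, content: bytes) -> dict:
    # The subject digest can only be computed once the job has produced
    # its output, which is why the attestation is generated at the end.
    digest = hashlib.sha256(content).hexdigest()
    return {
        "_type": "https://in-toto.io/Statement/v0.1",
        "subject": [{"name": name, "digest": {"sha256": digest}}],
        "predicateType": "https://slsa.dev/provenance/v0.2",
        # Illustrative builder identity, not GitLab's real value.
        "predicate": {"builder": {"id": "https://gitlab.com/example-runner"}},
    }

artifact = b"example build output"
statement = provenance_statement("artifact.tar.gz", artifact)
print(json.dumps(statement["subject"], indent=2))
```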
A
Yeah
we're
producing
salsa
to
compliant
attestation
at
the
moment.
A: And that's the same thing that we want to do with the signing. We just want to sign all of these attestations that we're producing by default, because it never hurts to generate an attestation or sign something unnecessarily, other than, I guess, adding a small amount of additional CI minutes. Otherwise it's not doing any harm to generate an attestation and sign something if it doesn't need it, so we just want to do that by default for everything.
B
Let's
say
that
you
build
your
npm
package
right
and
then
I
guess
you
could
output
it
as
a
tarball
and
have
it
get
signed
automatically
but
like
I,
don't
see
any
way
to
like
close,
like
you'd,
have
to
then
have
a
separate
job
that,
like
downloads,
that
tarball
and
publishes
it,
along
with
like
the
the
attention
at
the
station,
which
I
think
would
put
you
in
the
same
position
that
you're
in
initially,
which
is
that
you
can't
really
verify
where
that
artifact
or
signature
came
from
or
that
it
had
well.
A
I
would
expect
that
you
should
be
able
to
verify
the
signature
and
the
signature
assigned
to
the
artifact
or
sorry
the
signature,
science,
the
attestation
and
the
attestation
has
a
checksum
of
the
build
artifact
itself,
and
so
by
doing
that,
you're
able
to
certify
that
neither
the
attestation
nor
the
build
artifact
has
changed.
Like
the
npm
package,
you
know
certified
that
that
has
not
changed
from
the
time
that
it
was
signed.
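The verification chain described here, a signature over the attestation plus a checksum inside it, can be sketched locally. The hash comparison below stands in for the second link of the chain; the first link, verifying the signature over the attestation itself, is elided and in practice done by the signing tooling.

```python
import hashlib

def artifact_matches(artifact: bytes, attested_digest: str) -> bool:
    # The attestation's recorded digest must match a fresh hash of the
    # artifact; if the artifact changed after signing, this fails.
    return hashlib.sha256(artifact).hexdigest() == attested_digest

package = b"npm package tarball bytes"
recorded = hashlib.sha256(package).hexdigest()  # digest stored in the attestation

print(artifact_matches(package, recorded))
print(artifact_matches(b"tampered bytes", recorded))
```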
B: I just don't see how folks are going to connect this to the end state. I get what you're saying, and I think it makes sense; I'm just having trouble visualizing how folks would actually use it.
E: I'll also throw in another anecdote. This is a similar problem that we have on another open source project, Tekton Chains, where we do sort of out-of-band signing for artifacts being produced by Tekton pipelines.
E: A strategy that we employ a lot is basically combining both: signatures made on the worker itself with signatures coming from the trusted control plane. So you can actually use both in tandem, right? You can say, hey, when I npm publish, I'm going to sign this artifact with the identity of the build job itself, and then I'm also going to sign it with some sort of other attestor for the pipeline that's running in the trusted control plane, and it's really the combination of both signatures...
E
That
really
grants
that
trust
and
you
can
actually
split
that
responsibility.
So,
like
you
know,
the
trusted
control
plane
doesn't
necessarily
need
to
know
about
all
of
the
inputs
and
outputs
and
stuff
like
that.
All
it
needs
to
know
is
really
about
what
it
What
was
the
output
artifact
hash,
and
then
you
can
sort
of
have
that
summary
attestation.
B
Yeah
I,
I
and
I
forgot
that
Tick
Tock
change
sort
of
behaves.
Similarly,
in
the
case
of
container
images,
am
I
remembering
correctly
that,
like
you,
rely
on
the
user
to
like
set
some
environment
variable
that
indicates
the
images
that
were
pushed
and
then
you
like,
detect
on
chains
component,
looks
at
it
that
way.
E
Yeah,
we
have
a
few
different
mechanisms
to
do
it.
You
can,
you
can
set
a
results
you
can
set,
there's
also
some
work
going
on
to
to
make
it
more
first
class
in
the
Epi
for
like
what
Pro
like
what
provenance
are
inputs
and
outputs
and
stuff
like
that.
Yeah
chains.
Changes
basically
looks
for
all
of
them
like
what
are.
What
are
the
end
different
ways
you
can
say,
like
hey,
I,
produce
this
image,
and
then
it
tries
to
go
in
and
sign
that.
B: Like, that's another way to do it. Instead of outputting the artifact directly from the job, you can just have some way to tell us that you pushed this artifact somewhere, whether it's npm or a container registry or something, and then we can layer in the automatic signing on top of that. That way, potentially, we don't force users into having to output something like a container image as an artifact, get that uploaded, and then separately download it and push it to a registry.
A: So the end state is that, if you're building a container image, we would want to generate an attestation for that and sign it, and we wouldn't want to force you to output that as a build artifact in order to make that possible. We've started with generating attestations for build artifacts just to get started, but eventually we would also want to generate attestations just, essentially, when we see a docker build command. So I think we just haven't crossed that bridge.