From YouTube: SSCS Working Group Meeting - April 18, 2023
A
Okay, everyone, thanks for joining our weekly meeting for our Software Supply Chain Security working group. Ollie, it looks like you've got the first item here.
B
Sure. So I'm working on adding additional claims to the ID token, based on the PR to map GitLab claims to Fulcio claims. It looks like there are five new claims that we need to add: runner ID, runner environment, SHA, pipeline SHA, and pipeline ref. I'm hoping to wrap that up this week.
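(A quick way to sanity-check claims like these while developing is to decode the token payload. This is only a sketch: the actual claim names and values GitLab ends up emitting may differ from the hypothetical ones used here, and real consumers such as Fulcio must verify the signature rather than just decode.)

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode the payload segment of a JWT WITHOUT verifying its signature.
    Inspection/debugging only -- never trust unverified claims."""
    payload_b64 = token.split(".")[1]
    # Restore base64url padding stripped per the JWT spec before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Hypothetical claim set mirroring the five claims discussed above;
# the exact names in the GitLab ID token may differ.
claims = {
    "runner_id": 42,
    "runner_environment": "gitlab-hosted",
    "sha": "a1b2c3d",
    "pipeline_sha": "d4e5f6a",
    "pipeline_ref": "refs/heads/main",
}
_b64 = lambda obj: base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()
token = f'{_b64({"alg": "none"})}.{_b64(claims)}.'  # unsigned, for illustration
```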
C
Yeah, I just noticed we had this GraphQL endpoint that seems to unravel all of those different mechanisms of include, and it just produces a raw HTTP URL string that you can actually fetch the YAML from. So whatever logic is there might be useful for you.
C
That seems like the lowest common denominator, but I know there may be a bit of a balance between what we produce and what Fulcio actually maps into the certificates. I was wondering, maybe Billy has an opinion on whether there's value in having some consistency in the URI refs between GitHub Actions and GitLab CI, since it's what folks are going to write policy against.
D
Yeah, that's a good question. I would at minimum aim for consistency within GitLab. The main thing we want users to be able to use these refs for is to basically be able to reproduce the build config that was used for a particular run.
D
So if you need to say, oh hey, here are these production artifacts, we want to be able to go back and say, can I redo what was done and get the same artifact? If you can't do that, then that's a problem. That's sort of what we want to get towards for consistency between GitHub and GitLab.
D
I think there are going to be some things that are hard to map across, just because how GitLab handles fetching URLs is probably different from how GitHub does resolving workflow files and things like that. So I'd say do the best you can, but if there are differences, I wouldn't worry about it too much.
D
They are being derived from the token claims, so they may not map one to one. There might be some transforming going on, but we always derive the information from the incoming job.
D
Ideally, yes, we want to be able to pinpoint the exact version of something. We were also talking, I forget in what thread, about when we're fetching some of these included files or the pipeline files: something that would be nice long term is to have a SHA-256 digest of what was fetched. So even if we can't necessarily fetch it by the exact SHA, we should still be able to say:
D
Oh, if I go refetch it, is that value the same as it was before? That would also be valuable for determining whether it's something you can reproduce, whether it was the same value, and workflows like that.
B
I'll move on to the next point. I was wondering when we need an API endpoint to request an OIDC token, as well as the predefined variable that tells you where to exchange your CI token for an ID token. Do we want that sooner versus later? Just to decide what I'm going to be working on next after adding the claims.
D
You said that it should match the lifetime of the job, which should be fine. If it were a five- or ten-minute token, then I'd be a little more concerned, but if we're expecting it to live for as long as the job, then great, that should be fine.
D
Yeah, and that should be fine. The endpoint will let us dramatically scope down the lifetime: we could make it five or ten minutes if we wanted to, and just refresh as needed. So long term, yes, I think we do want to do that, but for now, just relying on the long-lived timeout seems okay.
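(The lifetime constraint being discussed amounts to a simple check against the standard JWT issued-at and expiry claims; this is a sketch, and the one-hour figure is just GitLab's default job timeout mentioned later in the conversation.)

```python
def token_lifetime_ok(iat: int, exp: int, job_timeout_s: int) -> bool:
    """Accept a token whose lifetime does not exceed the job's timeout.
    iat/exp are the standard JWT issued-at / expiry claims (Unix seconds)."""
    return 0 < exp - iat <= job_timeout_s

JOB_TIMEOUT = 3600  # assumed default job timeout of one hour

# A token scoped to the job's lifetime passes; a 30-day token would not.
job_scoped = token_lifetime_ok(iat=0, exp=3600, job_timeout_s=JOB_TIMEOUT)
thirty_day = token_lifetime_ok(iat=0, exp=30 * 24 * 3600, job_timeout_s=JOB_TIMEOUT)
```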
C
I guess it helps a bit, though: the audience claim will be validated. But it's not great having tokens that expire after 30 days sitting around. I guess also, in the short term, people can reduce their job timeout; I think the default is one hour, right? But signing anything is probably not going to take more than five minutes in a single job, so people can mitigate it that way.
C
For providers that do support ambient credential detection, you will generate a new token every time Cosign or any other Sigstore library is called, right? There's no built-in caching or anything; there's no reason to reuse the same JWT multiple times, is there?
D
Not yet; we've been talking about it. We actually have a caching mechanism for Gitsign already, because for rebases people tend to get really annoyed when, you know, ten browser tabs open all at once. For the CI stuff it's less intrusive, so people don't see it as often, and it hasn't really been a concern for Cosign or other client libraries.
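(The caching idea mentioned for Gitsign boils down to reusing a token until shortly before its expiry. This is a minimal illustrative sketch, not Gitsign's actual implementation; the refresh-skew value is an assumption.)

```python
import time

class TokenCache:
    """Minimal in-memory token cache: reuse a token until shortly before
    its expiry, then fetch a fresh one. Illustrative only."""

    def __init__(self, fetch, skew_s: int = 30, now=time.time):
        self._fetch = fetch      # callable returning (token, exp_unix_seconds)
        self._skew = skew_s      # refresh this many seconds before expiry
        self._now = now          # injectable clock, for testing
        self._token, self._exp = None, 0.0

    def get(self) -> str:
        if self._token is None or self._now() >= self._exp - self._skew:
            self._token, self._exp = self._fetch()
        return self._token

# Demo with a fake fetcher and a controllable clock.
calls = []
def fake_fetch():
    calls.append(1)
    return f"token-{len(calls)}", 1000.0  # token expiring at t=1000

clock = [0.0]
cache = TokenCache(fake_fetch, now=lambda: clock[0])
first = cache.get()
second = cache.get()   # still fresh: reused, no second fetch
clock[0] = 990.0       # inside the 30s skew window: refresh on next get
third = cache.get()
```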
D
Yeah, the main concern is we do have rate limiting on the Fulcio side for tokens. So if someone does try to hammer Fulcio too much, we will push back. I forget exactly what those limits are off the top of my head, but the only project I've heard of that's run into those problems is Kubernetes.
D
Oh yeah, this is just the regular check-in. No major update since last week, except we got that sigstore-js change in. We're still working through some of the Fulcio stuff, though we've made some progress here: we've loosened some of the requirements for the build signer digest, and we're adding more guidance around SANs. I don't think this changes any of our plans, so I think we're good to push forward, but just a general FYI.
D
Besides that, I've been starting to work on the npm CLI stuff, basing it off of the existing SLSA 0.2 work that GitLab already has. I don't know what your plans are for SLSA 1.0; last I heard, I think that's dropping tomorrow, so just something to keep in mind. But for now I'm basically going to mark it as something like a v1alpha1, base it off the 0.2 API, and then we can iterate from there.
D
However, we won't actually be able to use this in practice and try it out for npm for real until the Fulcio things land and are pushed to the public-good instance. So that's going to be the long-tail blocker. The sooner we can unblock the Fulcio stuff and start making progress there, the better off we'll be.
A
Just to answer your implied question there about our plans for SLSA 1.0: right now we're really focused on signing. We can keep adding more stuff into the build attestation, but it's not really useful if the attestation can't be verified. So this same working group that we're in now is probably going to end up circling back and adding additional fields into the attestation itself, to adhere to what probably would have been SLSA level three. Now, I know they've kind of changed things with the new SLSA 1.0, but basically, on our roadmap, that's all follow-up work after we finish the signing, because we want to have the signing in place first.
D
Cool, yeah, that sounds good to me. I mentioned it because I know GitHub has been talking about changing their npm provenance to use SLSA 1.0. So I'm not sure if my assumptions hold, but our preference is to keep GitLab provenance roughly the same, maybe with some slight differences, and so targeting the existing 0.2 probably makes more sense here.
C
Another point on B: I don't know, I think it'd be a pretty trivial thing to do, Billy, if this is one you want to take, but we can start layering in the ID token generation and then running npm publish with the... the flag is like "provenance" or something.
C
So we want to make sure this template remains compatible. Let me just type it out. There are a couple of cases. One is that you can be pointing at GitLab's built-in registry, which won't support this stuff currently, so we need to make sure that still works.
C
And then there's when you don't have the ID token specified. I don't know what the behavior of npm publish with the provenance flag is if you don't have a token, but I assume it probably fails, so we should make sure that we detect whether it's present or not, that sort of stuff. There was one other thing I was thinking of, but I've lost it. I think the main thing is making sure that you're pointing at the npm registry, and just not enabling it when you're not, right? Yeah.
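(The two compatibility cases just listed reduce to a small decision: only pass the provenance flag when publishing to the public npm registry and an ID token is actually present. This is a sketch of that template logic; the registry check and the way the token reaches the job are assumptions, not the final template.)

```python
from typing import Optional

def should_pass_provenance(registry_url: str, id_token: Optional[str]) -> bool:
    """Decide whether a CI publish step should add npm's provenance flag.
    Only when targeting the public npm registry AND an ID token is present;
    otherwise skip the flag so plain `npm publish` keeps working."""
    publishing_to_npmjs = registry_url.rstrip("/") == "https://registry.npmjs.org"
    return publishing_to_npmjs and bool(id_token)

# Public registry with a token available: enable provenance.
npmjs_with_token = should_pass_provenance("https://registry.npmjs.org/", "eyJ...")
# GitLab's built-in registry (hypothetical URL): skip the flag.
builtin_registry = should_pass_provenance(
    "https://gitlab.example.com/api/v4/projects/1/packages/npm/", "eyJ...")
# No ID token specified: skip the flag rather than let the publish fail.
npmjs_no_token = should_pass_provenance("https://registry.npmjs.org/", None)
```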
D
My understanding there was that it was: make sure we can actually npm-verify the things that we push from GitLab. The other thing, once we have the Fulcio stuff in as well: there's that little badge that shows up on npm pointing to the provenance; we'd like to get integration there.
D
That is, unfortunately, I think more on the npm side than in our control, but I've already kicked off an issue with some folks at GitHub about what we need to do to start getting that data to populate. I think a lot of this first step, though, is that we need to have the data there, to have some examples to point to.
D
I think so. I'm not 100% confident in that, though. Okay.
C
I know Brian's already working with the registry team on what it's going to look like in the user interface for your signed container images, but Billy, you'd expressed interest in adding Gitsign signature support to git commits as well. Brian, I don't know what your timetable is on the container registry stuff.
C
But given that we already have git commit signature verification, it might be easier to do the Gitsign integration first and then use that as a building block for the container registry, especially if you're planning on doing it all on the Rails side.
E
And the X.509 one has a lot of problems, because it was a community contribution and you can't use it on gitlab.com, because it uses a system trust store. I feel like that feature is very, very underused, and nobody really knows how to use it.
C
The back-channel communication between the registry and Rails, I feel, is going to be the annoying part of all of that. I mean, we already do that; I can't remember what for, but the registry already notifies Rails when manifests are pushed and deleted, and so I guess that's what we're going to piggyback on for the certificate verification, right?
A
We were actually planning on using those for our continuous container scanning. I think that has been in the product for maybe about a year at this point. I don't know if that'll be useful to you or not here, though.
A
That might be something to double-check on, because that's different from the latest that I've heard on that topic.
E
Yeah, right now I was just thinking about doing signature verification on the Rails side, and just doing it every time we touch the images. The thing is, if we have to do caching, then it could get complex pretty quickly.
A
I don't know if it's just a rough prototype or if it's functional code that you can use and rely on; I'm not sure of its state. But I think there's enough interest here in the next six months that you could probably even get help from Olivier's team on composition analysis, or someone else, if we needed extra people to help implement that, because it's something we're going to need if that's the route you decide to go.
C
And, see, I'm looking forward to it. Billy's doing some interesting stuff related to meeting the SLSA four... there's a source requirement around two-party reviews, and I don't think there's anything concrete in terms of how that will be implemented broadly, but it will presumably be some sort of signature that's embedded into the git repository or the commit somehow, right? So I think we would presumably end up using similar mechanisms to reach that SLSA four requirement.
A
Yeah, that's probably a good discussion for another time. I think we've got some good ways we could potentially just add that into the attestation we already have, but that could be a pretty deep discussion.
C
Yeah, it's a really hard one, but I also think that, because it's so hard, it's sort of our responsibility to solve. Ideally, you want it integrated into the MR approval process or something like that, right, where users can... but it also has to occur on the merge commit, which is the extra tricky part. Yeah, we can save that for another time; there are plenty of things we need to do before we can even possibly do that.