From YouTube: SLSA Tooling Meeting (September 16, 2022)
B
Right, okay, so. Before getting into that, does anybody else have any sort of interesting updates they wanted to bring up?

B
So one of the interesting ones was this. There is — and I'll post it; it's been posted in Slack as well — a Samsung Jenkins plug-in for — or sorry, I should say, there is a SLSA Jenkins generator made by Samsung, I believe. The folks who are making it, based on what they mentioned to me, are working out of Korea.

B
So there's a bit of a time shift for them to, let's say, attend these meetings, but it might be worthwhile to have them demo at some time that's more convenient for them.

B
But basically, I took a quick look, and I know there are a few other folks who are also working on Jenkins plugins and Jenkins generators, so I want to get folks' thoughts on this. — Yeah, it's 11 p.m. in Korea now. — Yeah! So that's why I'm saying: hey, if we talk to them about doing something, it might be worthwhile to have something that's a bit more convenient for them.
B
My cursory look at this was — hold on, sorry. My cursory look at this was: it's fairly simple. It's basically looking at what happened in the Jenkins build and putting it into SLSA provenance, so I believe it's really only SLSA 1 at this point, and with a few other changes it could be SLSA 2. But still, I think one of the things we wanted to figure out is, hey —

B
— when we pull in these sorts of additional things, the tooling team can provide guidance on the implementation of a SLSA generator — what that sort of thing should look like, and so on — so that folks understand what the intent is. The ideal here is that the builder should not be doing it; the orchestrator of that build should be doing a lot of the stuff. Yeah, yeah — Aaron.

A
Yeah — I haven't looked at this repo in depth yet.

A
I mean, we know that a lot of people still use Jenkins — maybe unfortunately, I don't know. So, like you mentioned, they do say it's SLSA level 1 provenance in their repo. I wonder, by the definition of Jenkins itself, whether they would be able to design it to create the provenance in a service — in a builder- or service-generated way — because of the way that Jenkins is. I don't know; it's an interesting one.
B
Yeah, it can probably be done at some level. And I know Eric from Wipro had mentioned they've been working on some sort of plug-in as well. The in-toto folks are also working on a Jenkins plug-in, so on that end, I do want to see —

B
— if we can sync up some of those folks. Not to say there's anything wrong, especially at this level, with having multiple Jenkins plug-ins — different things might be better for different use cases — but it's still probably worthwhile as we go through, and I know —

B
— one question just for this group is how much we want to help out, either hands-on-keyboard or at least in providing guidance around what a good SLSA generator looks like — like, for example, the SLSA GitHub generator, which a lot of the Googlers have worked on, and whatnot.

A
I'm just simply excited to see this for Jenkins; it's pretty cool.
A
Yeah, I think I had the same observation. I think these attestations need to happen out of band, right — just like blockchains or others are doing. It cannot be part of your build recipe in the pipeline, because we're telling developers: we do not start the pipeline execution while it is happening. The provenance generation is not part of that; it happens in the background.

B
Yeah, correct. And that's why I think in this case, in the library, they say it's only SLSA 1 — because I think at level 1 we allow that. I don't remember exactly when we stop allowing that sort of thing and making it more part of the orchestration — or sorry, requiring the service to be the one that's actually recording it, as opposed to the end-user workload.
B
There's a couple of things. First off, there are a few folks working on plugins just across the space. As far as CloudBees itself, from my understanding at least — and don't quote me on this, but based on some conversations —

B
— I've had, and Parth might actually know, because I know he's given a couple of demos — I believe they're focused more on the Jenkins X side than the Jenkins core side, Jenkins core being the traditional, legacy Jenkins.

A
So the last time I spoke — or rather, at the meeting for the CD Foundation, they gave a presentation. They were planning on getting to SLSA level 2 by the end of the year, integrating with Tekton Chains, and past that point I think they didn't have anything else. I think they were saying they would start integrating — you know, they were —

A
They have their own Tekton catalog, right, because Jenkins X underneath still uses Tekton. So they were updating their Tekton catalog and basically making it easier for users to start using their pipelines, with those scans and everything else built into it. But at the same time, yeah, their plan was just to get to SLSA 2 by the end of the year.
B
Yeah. With that said, I know Eric, who's been on a lot of the SLSA calls, had mentioned that Wipro is working on a few things on this front as well.

B
There are also a few other folks who have been working on kind of an abstraction on top of things like Jenkins — on top of the CI/CD system — where the idea is that some other tool takes care of looking at what Jenkins is doing. So Jenkins itself — even the orchestration piece — would just be considered an end-user workload, and there would be something else calling out to that Jenkins to make it do things, and then it would —
B
— do the recording and whatever. There's some stuff on that end coming out of, for example, Red Hat — one thing called — I'm blanking on the name of it, but it's made by Bill Bensing over there. Ploigos — was that right?
A
Yeah. That is maybe kind of a one-pipeline implementation of pipelines, right? There's a strict set of controls embedded into it, and you just take it and run it. The input is a git repository, the output is your build artifact; you don't care about what's in between.

B
Yeah, yeah — so yes, they're focused more on what they're calling the governance aspect — the governance-abstraction sort of aspect. There's also, just as an FYI, because I think it's related to what we're looking to do —

B
— the CD Foundation has been pushing this sort of thing as well, which is what they're calling intent-based pipelines. The other things coming out of the CD Foundation, which we should probably be cognizant of, is that they're pushing for what they're calling Cloud — sorry, CDEvents — which is a bit more of a specific version of CloudEvents. So the idea, potentially, is that everything becomes event-driven, and —
B
— we need to think through: if CDEvents becomes more widely adopted, I think that's going to potentially complicate things, because with CloudEvents everything becomes more of an event-driven architecture, all asynchronous — and what would that look like from the perspective of something trying to generate SLSA provenance? Just something to keep in mind; I don't think we have to go over it right this second.

B
So, what else? From the last couple of meetings — for folks who haven't been able to attend as much — the main things we've been focused on are twofold. One is around integrations for attestation distribution and discovery. And so this is — where is this?
B
So as a reminder, from a few weeks ago: we had some of the folks from OCI come in — Mike Brown in particular — who talked through some of the big changes that had just been merged in, for the image and distribution specs, which allow us to be a bit more flexible with how we use SLSA attestations. Then also — because I know we have Frederick on as well — we've been talking a little bit about the plans, or at least the proposals, for how npm might store and distribute SLSA attestations.

B
There was also some discussion here in the chat — sorry, in the document — some comments about how Maven might do the same sort of thing for Java. Then we've also briefly been talking about what attestation discovery might look like, from the perspective of: hey, what happens if somebody wants to run a query against the environment at large? What might that look like? But we sort of said, hey —

B
With that said, one of the things that keeps coming up is that there's a lot of confusion around Rekor. If you read the README for Rekor, it's pretty clear that one of its intentions is to be an API for distribution and discovery of attestations — but pretty much everybody who works on Rekor says that's actually really not the case. So I think we just need to make sure we update the documentation so it's very clear that, for some definition of distribution, that's true, but it should not be, you know —
B
One of the big things that's been discussed is that Rekor should not be used as the primary distribution method for those attestations. So that's something that, I think — and I was actually, before this call, writing up an issue for the Rekor folks — to try to make that all a little clearer.

B
The other thing we were also talking about, which is also a big one, is maybe defining some sort of pattern for out-of-band file distribution — sorry, attestation file distribution. This is stuff like JSON Lines, right: in-toto recommends using JSON Lines as the way — if you were to distribute these just as flat files, you should be distributing those SLSA attestations as JSON Lines files.
B
But the thing that I think has come up a few times is: how do folks distribute it, discover it, fetch it, pull it down, and so on? Is there a pattern for doing that, so that as people build tools, we can point everybody to it? Everybody can implement it any way they want, but generally, this is what it probably should look like. — Hmm, Sean?

C
But yeah — basically, your file format is then one JSON line per DSSE-wrapped attestation. Yeah, yeah. Right, yeah, okay.
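The format C describes is easy to consume in code. A minimal sketch (the function name and file layout are illustrative, not from the meeting): each line of the bundle is a standalone DSSE envelope whose `payload` field is a base64-encoded in-toto statement.

```python
import base64
import json

def read_attestation_bundle(path):
    """Parse a JSON Lines bundle: one DSSE envelope per line.

    Each envelope carries a base64-encoded in-toto statement in its
    `payload` field; signatures are left untouched here, to be verified
    separately before the statements are trusted.
    """
    statements = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # tolerate blank lines between envelopes
            envelope = json.loads(line)
            payload = base64.b64decode(envelope["payload"])
            statements.append(json.loads(payload))
    return statements
```

Because each line stands alone, the bundle can also be processed with plain shell tools, which is part of the appeal discussed next.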
D
Yeah, yeah — that's the kind of concern I have: if we have any envelope format that's not JSON, then that bundle format doesn't work very well. But from what I've heard from most people, they're less worried about that, and they more like the benefit of having it be a super simple file format — you can just cat it; you don't have to have any special library imported or anything like that.
B
Yeah. So, as we go through this — does anybody have any update on some of the OCI stuff? I don't want to put anybody on the spot here, but I know the stuff has been merged; I just don't know what the timeline is for actually releasing it.

E
We've got the release candidates currently preparing to get tagged over there, so that is in the works. They're getting voted on — they've got the PRs ready to go — so it's just waiting on a few people to vote, and then we'll at least have something out there to look at. There are a few more pieces to clean up as we go, but it's at least at a stage where people can try it out.

B
Cool. Now to, I guess, pick on Frederick, or somebody who's been doing some of the stuff on the Node/JavaScript/npm side.
F
So, if I'm not mistaken, I think we should expect to merge the PR with the RFC next week, or the week after, for distribution. It's still, as we were discussing before: the primary way the client — the npm CLI — will get the attestations is by retrieving them from the package registry.

B
Cool. Mark, you have —

D
One — just to make sure I understand the proposal correctly: the idea is that you'd store the attestation itself as a blob, and you'd store a manifest, I guess, that contains the blob as a layer — I'm not sure I'm using the terminology right — and refers to the subject, yeah. So then that would effectively index by the subject.
E
The thing that changed here, versus what we had going before — you can always go out and... Helm charts do it today, and a bunch of other tools do it today, where you can push your external artifact to a container registry, as long as you follow a certain format and as long as the registry is not blocking certain things. There are a few small gotchas in there, but effectively you can already push a blob out there.

E
You can list that in your manifest, you can push all that up to a registry, and you can look at the registry, see your tag, and pull that down. That's how they're doing Helm charts today. But what we added with the new feature is the ability to associate that with an existing image, so I can query an image and say: now, from —

E
— this image, tell me all the other artifacts that point to it — the signatures, the SBOMs that are affiliated with this image. So it gives you a query interface to look all those up without having to know the tag of that artifact, which can be different from the tag of the image, right.

D
Right — and you could, I think, in the proposal, query by type. Is that —
E
Right, yeah. So, what's going to get returned: today you've got the concept of a manifest list for a multi-platform image manifest, where you'll have a whole bunch of individual pointers to the descriptors for each platform-specific manifest. We're using an almost identical data structure —

E
— it is an identical data structure — for the artifacts themselves, where they're also in a list, and they have the individual descriptors to each artifact in that list. You can also put annotations in there, so you can attach annotations for whatever kinds of metadata you need on your object, so you know which one from that list you want to pull. You can say: this is the SBOM in a JSON format; this is the signature coming from cosign — all the different kinds of metadata.

E
You can pick and choose out there. Maybe you're going to have an extra piece of metadata that says who the signer is, or whether it's CycloneDX or SPDX — that kind of stuff can all be put in annotations. So when you're looking through that list of artifacts, you can figure out which one you want to pull.
E
They're adding a filter type in there for the media type of the artifact — it's called an artifact type, but it's like a media type out there — and so you can query on that one. But that's one of a handful of things you could potentially query on: when you get back the full list of everything, you can always query on all the different annotations in there as well. The filter just trims down that list of everything, so you get only certain types of artifacts.

E
Yeah — and in the initial stuff, what we're going to see from registries is that that's the only option: they're going to pull everything back and then query and filter it all on the client side. In the future, when registries upgrade, they'll be able to at least do that initial set of filtering on the server side, saying: just give me the stuff for my media types.
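Until registries support server-side filtering, the client-side trimming E describes is straightforward. A sketch, assuming a referrers response shaped like an OCI image index — a `manifests` array of descriptors, each optionally carrying an `artifactType` (field names per the then-draft spec; the function itself is illustrative):

```python
def filter_referrers(referrers_index, artifact_type):
    """Trim a referrers listing down to one artifact type on the client.

    `referrers_index` is the image-index-shaped JSON a registry returns
    for a referrers query; each entry in `manifests` is a descriptor for
    one referring artifact (signature, SBOM, attestation, ...). The same
    approach works for matching on `annotations` instead.
    """
    return [
        desc
        for desc in referrers_index.get("manifests", [])
        if desc.get("artifactType") == artifact_type
    ]
```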
D
Yeah — and is this the media type of the SBOM, or whatever?

E
It's the artifact type — that's the field name they're using — and it's the media type of the config blob if you're using the image manifest; or, if you use the new artifact manifest, it's called artifactType in there, but it's not a media type per se. And we're assuming that whoever's using that — hopefully they go out and register their name. I held off on saying they must register a name, because even OCI isn't good about registering all of our names.
D
So, on the distribution side — the reason I ask this is — in this form... yeah, okay, so `refers` — okay, let me just present here. Actually, you know, it's — Jerry presented it. Michael, do you want to open it up here? I could just actually link directly to the line.

D
Okay, the —

D
— artifact type. Because — artifact type — I guess I'm wondering: in our case, the artifact types will always just be in-toto, and I was wondering, would it make sense to define — because media types can have optional parameters, like a semicolon and then parameters — we could define parameters that kind of say: point one layer in. That way you could filter by the predicate type, you know — the next layer down.

E
Cool — and Aaron, I saw your comment; I'll drop a link over in the meeting notes here in a minute, but I've got to jump.
B
Yeah — the OCI stuff sounds interesting; I need to dive in a bit more there. I think the other thing — does anybody else have any other questions or comments on that, that somebody else who was on the call can answer?

B
Oh yeah — on the ORAS end: if anybody knows anybody who works on the ORAS side who can maybe join one of these meetings, or otherwise who we can coordinate with, I think that would also be really valuable. Some folks have been commenting out of band about getting folks from the ORAS side together with us to have a conversation, but it's been hard to coordinate.

B
So, one of the things that Seth had — oh.

A
Did I not — I guess.

B
That — I thought I added a comment here. So there's some discussion about writing up a few things to make it clearer what we're trying to do from — oh, sorry — what reasonable patterns look like for distributing JSON Lines files, storing them, etc. — like, what sorts of things could people do when implementing tools, right?
B
— SLSA provenance. But what happens when the release happens out of band — like, hey, this is just a tarball and we store it on GitHub? What should the general flow look like, so that as people build out additional tools, one, they don't have to reinvent the wheel, and maybe there are a couple of common patterns for this sort of out-of-band distribution mechanism? Where it's like: hey, it's something like a tarball, or it's something like, hey —

B
— I am distributing some sort of package, but for the package itself, there's no standard around including stuff like attestations in the package in some way.

B
So, one of the things that was discussed was taking these in-toto JSON Lines bundles and describing at least some sort of pattern for how they're normally stored. Is it just purely something like: you have the package, you have a JSON Lines file, and you have some way of naming that JSON Lines file so that you know it refers to the package you're downloading — something like that?
B
Sure — and I'm sorry, it seems like every single time I share my screen, everything starts to glitch out on my PC, so I'm just going to stop sharing for a second; I'll have to figure out why that is. But the main thing, right, was: we have, like, today —

B
— if you want to share a SLSA attestation, the way we've been recommending is you do it via JSON Lines, because that's what in-toto supports, yada yada. But the thing is, right: the JSON Lines file — should it be named something like foo.tar.gz.jsonl? Should it be the hash of the file?

B
How should these things be stored? In some cases, some folks have said: hey, I want to store all the attestations for the stuff I'm building on a separate endpoint, so I have a REST endpoint where somebody can just say, hey, I'm about to pull down this package — query the attestation. Some people are saying: no, no, I'm —
B
— assuming that every endpoint, let's say, is going to be this package/version-number, and then there's the actual tarball, and right next to the tarball is a JSON Lines file with the attestations. I don't know if folks have given thought to those patterns, but that's kind of the big question people are asking: hey, for things that are not integrated directly into package managers today, could I write a shell script?
C
I can comment on the way that we're currently planning to do it. For every package that we generate a binary for, that artifact has a unique ID, which is a Merkle tree — a Merkle hash leading up to the actual artifact, based on all the dependencies.

C
And so what we're doing is: although we're storing our attestations apart from the artifacts, we're presenting an API whereby, if you ask for the artifact by its ID, you can also ask for the attestation by its ID as well — it's just the same path over HTTPS.

C
You get the same path, and you get the artifact tarball — or the wheel or the gem or whatever it is we're actually serving up as the artifact — and then the same URL with an .intoto.jsonl attestation suffix gets you the attestation for that artifact.

C
In actual fact, because we're now using multiple steps in our build pipeline, it will probably just be called multiple.intoto.jsonl, because that's the recommendation for bundles.
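The scheme C sketches — artifact and attestation bundle served from the same content-addressed path — could look roughly like this. The base URL, ID format, and exact suffix spelling are assumptions for illustration:

```python
def artifact_and_attestation_urls(base_url, artifact_id):
    """Derive the URL pair for the scheme described above: the artifact
    is fetched by its unique (Merkle-hash-derived) ID, and the
    attestation bundle for it lives at the same path with a
    `.multiple.intoto.jsonl` suffix appended.
    """
    artifact_url = f"{base_url}/{artifact_id}"
    return artifact_url, artifact_url + ".multiple.intoto.jsonl"
```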
B
Cool — yeah, that definitely makes sense to me, at least. Does anybody else have any thoughts on doing it sort of content-addressably, via the Merkle hash or some sort of —

A
Yeah, I don't have any comment on the Merkle hash, but I can definitely see value in having that unique identifier, of course, right — for that thing. Not even just a 1.2, like v1.2, but actually having something that's, I guess — it wouldn't be immutable, but a little more specific, right?
B
Well, yeah — I mean, in the case of what Sean described, it definitely should be immutable there. And the thing there that I think is — it's one of the things I know we've been playing around with a little bit: if you build the dependencies, those things can each have various JSON Lines files, those can have all their attestations, and then there's whatever the resultant package is, because you have the full sort of Merkle hash.

B
I'd be curious, actually, to better understand — because, from my understanding, using the Merkle hash you can do the inclusion proof easily, but it's hard to do the reverse lookup, right? Yeah.
C
Yeah, it's — basically, what we do — so the idea is, we're not quite there yet, but what you should be able to do is regenerate the Merkle hash from the material section that's listed in —

C
— because, basically, all the inputs that went in there, that are factored into the Merkle hash, are actually there in the attestation as well. So you should be able to reverse-engineer it that way — but you can't necessarily just take a Merkle hash and work out what the dependencies were. Yeah, yeah — but because we list the URIs to the artifacts for the dependencies —
D
On Sebastian's comment about there being a gap in the recommendations: I think — at least the way I've been thinking about this — we'd have a recommendation for... I do think it makes sense to do it ecosystem by ecosystem. Like, I don't know what npm is planning on doing, but — or actually, I don't —

D
— even know how you fetch packages from npm — I don't even know what the protocol is — but I would imagine, in cases where you fetch it by some sort of file name, it makes sense to have a concrete recommendation.
D
Actually, we do have a concrete recommendation, which is that you append a .intoto.jsonl suffix. That way, all these different ecosystems — if they're doing it by file name, like on the file system or something like that — can use the same suffix. I think the same suffix, or convention, or whatever the convention is, makes it easier, because they all do it. If there are cases where, oh, you use this one endpoint to get this thing and a different endpoint to get something else — then that's —
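Both naming ideas that came up — suffixing the artifact's own file name, or naming the bundle by the artifact's content hash — can be sketched in a few lines. These are conventions under discussion, not a ratified standard, and the helper name is illustrative:

```python
import hashlib

def attestation_file_name(artifact_path, by_hash=False):
    """Name the JSON Lines attestation bundle that sits next to an artifact.

    Two conventions from the discussion: append `.intoto.jsonl` to the
    artifact's file name (foo.tar.gz -> foo.tar.gz.intoto.jsonl), or name
    the bundle after the artifact's content hash so it can be looked up
    content-addressably.
    """
    if by_hash:
        with open(artifact_path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return f"{digest}.intoto.jsonl"
    return f"{artifact_path}.intoto.jsonl"
```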
B
Yeah — and it was about, sort of, like — yeah, I almost feel like we should have a bit of a hierarchy: if it can be integrated directly into the — whatchamacallit — into the package manager itself, or the distribution mechanism of that package itself, then absolutely it should be. But I know a lot of folks are saying, hey — based on, just as an example: hey, there's, for example, the npm stuff.

B
How should I do that, right? And some people might say: okay, well, great — here's maybe a way we can at least get you started until the more official way is actually implemented. And then, in certain cases as well, some of the stuff that's actually come up is: what happens in cases where there is no distribution mechanism, right? There are certain types of things where the output is essentially just a tarball — there's no sort of pip install or npm install for whatever that thing is — and so everybody just sort of downloads that tarball and unpacks it.
B
Is there a way to still, you know, distribute SLSA attestations such that there's at least a common pattern? So that if folks know — oh yeah, here's something that's out of band; it's not inside a package manager or whatever — I can just pull it down, and I know I could go and look at, like Sean described: if the artifact is listed by a hash or something like that, then I can pull hash-dot —
B
— I think also using JSON Lines as the bundle, because there are potentially multiple attestations there, right. Because otherwise, how can you easily have everything refer to that package without some sort of mechanism for saying — you know, there needs to be, I think, some mapping between the attestations and the artifacts, especially if there are multiple artifacts in the same sort of directory or whatever. That's really, I think, kind of what we're trying to figure out, because I believe somebody from the SBOM side had brought up that —

B
— you know, lots of folks are distributing packages in ways that don't use a package manager, and so they're just files. And so a lot of folks are just asking for: hey, when you distribute me this tarball, I also expect an attestation, and I expect an easy way to know that that attestation is supposed to be pointing at that file. That's it.
C
Yeah, I think the worst offenders here are the packages that install themselves with a shell pipe or a curl pipe, sure.

B
Yeah, yeah, exactly — and that's, like, the thing of, you know — something we've been playing around with a little bit is, hey —
B
— you'd even be significantly better off if you had, let's say, a very basic tool that does the curl-pipe-shell, but when it curls, it also verifies that it has a SLSA attestation — from a signature you trust — before piping into sh. Even having a tool that just does that is, you know, much better than a lot of the other things we're seeing.
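A minimal version of that gate could check the downloaded bytes against a subject digest in the attestation before handing them to `sh`. This is a sketch: it deliberately skips DSSE signature verification (a real tool must also verify the envelope's signature against a trusted key), and the names are illustrative.

```python
import base64
import hashlib
import json
import subprocess

def run_if_attested(script_bytes, envelope):
    """Gate a curl-pipe-sh install on a SLSA attestation.

    Decode the in-toto statement from the DSSE envelope and require that
    the sha256 of the downloaded bytes matches one of the statement's
    subject digests before executing. Verifying the envelope's signature
    is omitted from this sketch but required in practice.
    """
    statement = json.loads(base64.b64decode(envelope["payload"]))
    digest = hashlib.sha256(script_bytes).hexdigest()
    subjects = statement.get("subject", [])
    if not any(s.get("digest", {}).get("sha256") == digest for s in subjects):
        raise ValueError("downloaded script matches no attestation subject")
    subprocess.run(["sh"], input=script_bytes, check=True)
```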
D
So it's hard to find, and it would probably be good to have some sort of condensed documentation of: here's the suite of things that are recommended — you don't have to use all of them, and the layers can be swapped out. It is —

D
No, no — yeah, I'm sending it right here, because I think it's very easy to miss. I'm pulling up the convention for naming your files.

D
I'll add it to the meeting notes as well. Great — actually, I should link specifically to the file-naming convention.
C
Yeah, yeah — I have a question about the naming convention there as well. We're producing pipelines that actually produce our artifacts, so there are multiple steps between the source repo and the binary artifact, and what we're doing is generating intermediate artifacts along the way that we feed through the pipeline.

D
That's a good question; I think we should clarify that in the docs. If you read the docs directly, I think it should be called "multiple," because there are different subjects. But if I understand correctly, in practice people don't care about the intermediate artifacts — they really just care about the end artifact — and it really is a bundle about the end artifact, along with supplemental attestations that you kind of want to chain together. And so that would imply that you really ought to be naming it by the artifact file name, and so here —
D
I think — I think you're agreeing with me, or maybe I'm agreeing with you, I don't know. That probably makes sense: it's not really about what the subjects are, but really about what you're trying to verify — what the consumer is trying to verify. So if there's a natural name for the thing people are trying to verify, then you just append the suffix, and that file has all of the attestations necessary to verify that thing.

D
Yeah — let me — I'll file an issue real quick right now, and then we can fix it. It's easier to file the issue than to actually change the text.
F
If I can just mention — as we've talked about a bunch of this — we are, I think, straying away a little bit in the npm use case from what has been discussed so far. So, as an example: we will primarily be fetching the attestations by package name and version, and when the registry returns the attestations, we will not return the raw DSSE envelopes in a JSON Lines format.

F
We will return them in the new Sigstore bundle format that's being proposed as we speak right now — so it's not really done yet, but we're working on getting that bundle format standardized. The reason for that is that we are signing everything with Fulcio and publishing the attestation to Rekor, and we would like a consistent way of delivering not just the attestation but also the key material to the clients, so that they can verify it as well. That's why we're taking a little bit of a different approach.
A
Yeah — I think the key material is really important to think about distributing. Because, you know, if I'm not using Fulcio and just want to give someone my public key to verify — that's another scenario, right, that I think is important. How do we distribute this?
C
A
F
It's not that it's actually bothering me, but I do think there was an issue on DSSE about actually having the certificate as an attribute in the signature field. I'm not sure if that's been accepted as standardized yet, but I do know a lot of folks are putting the certificate in the DSSE envelope.
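The pattern F mentions, carrying the certificate as an extra attribute on each signature inside the DSSE envelope, would look roughly like this. The `cert` key is exactly the kind of field that was proposed but, as F notes, not standardized, so treat it as a sketch:

```python
import base64

def dsse_envelope_with_cert(payload: bytes, payload_type: str,
                            sig: bytes, keyid: str, cert_pem: str) -> dict:
    """Build a DSSE envelope whose signature entry also carries the
    signing certificate (a non-standard extra attribute under discussion)."""
    return {
        "payload": base64.b64encode(payload).decode(),
        "payloadType": payload_type,
        "signatures": [{
            "keyid": keyid,
            "sig": base64.b64encode(sig).decode(),
            "cert": cert_pem,   # hypothetical field; not in the DSSE spec
        }],
    }
```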
F
Okay, but there is another thing that we are very particular about as well, and that is having a signed timestamp, and for us we're getting that from Rekor. So having that entire, let's say, Rekor blob be stuck into the DSSE envelope as fields would maybe be doing a little bit too much violence to the specification. So for that we are sort of sending the Rekor response, or the Rekor entry, in parallel to the attestation, so they can also verify that the signature actually had... sorry.
D
Could I ask someone to update the notes to properly capture that? Because I don't think I followed all those points about, like, the difference between the JSON Lines format. If you could link to the Sigstore bundle format, I'm actually having difficulty finding what that format is and what's missing. I think that would be helpful to record in the notes.
D
Sorry, thanks. You said something about including the timestamp and the certificate, or something like that.
F
Yes, so yeah, to sort of go back a little bit: we are using Sigstore to sign attestations, and because of that we are getting an ephemeral certificate. Or rather, we're actually creating it ourselves and then sending it to Fulcio to be signed by Fulcio, and that certificate is short-lived, 15 minutes I think the default expiration is. So we want to make sure that the client can verify that we actually used this certificate during the time it was valid.
F
We are also sending the attestation to Rekor, and as part of that, Rekor is sort of timestamping it, signing the entire thing with Rekor's public key, and returning a response which can be used for offline verification. So that response is something we would also like to send to the npm CLI, so the npm CLI can actually verify not just the signature, but also that it happened during the time the certificate was valid.
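The check F wants the npm CLI to perform, that the signature was produced while the short-lived certificate was still valid, reduces to comparing the log's signed timestamp against the certificate's validity window. A minimal sketch, assuming the Rekor-style `integratedTime` is a Unix timestamp:

```python
from datetime import datetime, timezone

def signed_within_validity(integrated_time: int,
                           not_before: datetime,
                           not_after: datetime) -> bool:
    """True if the transparency log saw the signature while the
    certificate was valid, i.e. inside [not_before, not_after]."""
    signed_at = datetime.fromtimestamp(integrated_time, tz=timezone.utc)
    return not_before <= signed_at <= not_after
```

With Fulcio's short-lived certificates this window is on the order of minutes, which is why the signed timestamp has to travel with the attestation for offline verification.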
C
That's interesting. It's almost like, if you wanted to do it independently, you'd need something like the way Authenticode does it, embedding the timestamp in the signature.
F
C
Well, with Authenticode you do, because you have to go and get a signed timestamp from an external timestamp server, yeah.
F
D
Would it make sense... so the term bundle here is actually used differently than in in-toto, because in in-toto one envelope is like a signed message with some metadata, and here the bundle is still one message, it's just wrapping the envelope with some additional stuff that doesn't have a field in it. I wonder if we should just extend DSSE to have these fields.
F
Yeah, and speaking of bundle: if you have a better name, we're very open to that, because bundle is extremely overloaded. It's used differently in very different places, and we kind of stuck with bundle right now because actually it's a pretty good word, but it's overloaded, so it's very ambiguous what it means. But in the Sigstore community, bundle is used quite a bit to capture this usage of it.
F
But there is another thing here as well, which is that the signature proposal can also use, let's say, a naked signature and the payload hash. Because, if I understood correctly from the Maven use case, they won't actually put the DSSE envelope in this. They will put the DSSE envelope into the Maven package and sign the Maven package as an opaque blob, and capture that in this bundle format. That's the understanding
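The Maven flow as F understands it, putting the DSSE envelope inside the package and then signing the whole package as an opaque blob, could be sketched as hashing and signing the archive bytes without interpreting them. The signing step is stubbed out here; this only illustrates the "naked signature plus payload hash" shape:

```python
import hashlib

def sign_opaque_blob(package_bytes: bytes, sign) -> dict:
    """Sign a package (which may itself contain a DSSE envelope) as an
    opaque blob: the signer sees only raw bytes, not the envelope inside.
    `sign` is a caller-supplied signing function (stubbed for illustration)."""
    digest = hashlib.sha256(package_bytes).hexdigest()
    return {
        "payloadHash": {"algorithm": "sha256", "value": digest},
        "signature": sign(package_bytes),  # naked signature over raw bytes
    }
```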
F
I have. I'm not sure if that's true anymore, but that sort of takes this in a little bit of a different direction. Also, one of the original, let's say, drivers for this came from the fact that in Sigstore, if you are signing a blob, you don't get, let's say, a single artifact that contains the signature, the payload hash, etc. together, whereas if you sign a container, you do get a similar OCI manifest, as was discussed earlier on
F
this call, one that actually contains everything: the digest and the certificate and possibly the Rekor entry, etc. So even in Sigstore there is a little bit of disconnect between what the different outputs look like, depending on what you're signing. So this is also one of the things that the Sigstore bundle is trying to fix, or at least an opportunity to have a more consistent way of capturing the output of the signing process.
F
D
Without knowing all the implications, my first reaction would be that it seems like it would be desirable to move it into the DSSE spec, so that Sigstore shouldn't have to define that format.
D
Ideally, and if we had it, you know, again, kind of move it to the larger standard, yeah. There was discussion in the DSSE repo about having, like, the timestamp field and the certificate field, etc., and I think it just never was added because no one has done it.
F
Yeah, or let's say you may get a signature over a blob where we don't really know what we're signing. But yeah, at least for me personally, I don't think it's, let's say, important whether the specification lives in Sigstore versus DSSE, because Sigstore already has a lot of support for the DSSE envelope, and so I don't think it's crucial in that sense. I think it's more crucial that we get some alignment between the different ecosystems here.
A
B
I know we're already actually a couple of minutes over here, so I don't want to take up any more of anybody's time, but yeah, I think this is a good discussion.
B
It sounds like there's a couple of things to maybe follow up on in the SLSA tooling Slack, out of band, that we can kind of, yeah, talk through. Because I do think the main thing is: we want to just make sure that we're being relatively consistent. Even if there's, like, two or three different options, we just want to make sure that it's not like everybody comes up with their own option and then none of the tools can interop.
B
That's really the takeaway. Cool, well, if I don't talk to you, I'll see you all next week.