Description
Speaker: Bob Callaway
Sigstore (sigstore.dev) is a collection of young, rapidly growing open source projects in the secure software supply chain space that combine transparency logs, digital identity and attestation technologies, and policy artifacts to enhance the security of software artifacts through the entire development/deployment lifecycle. This talk will include an overview of the projects that make up sigstore, brief demos showing how the different projects interoperate, a survey of current adopters, as well as a review of the project roadmaps for further integration and adoption in the OSS landscape.
Sched: https://sched.co/siFP
Next up, we have Bob Callaway, and he will talk about using sigstore to secure your software supply chain. Bob, you have the floor.
Thanks for listening in today. I wanted to talk to you a little bit about the sigstore project, which is a new and very quickly growing project under the OpenSSF, the Open Source Security Foundation.
If you've done any reading around the topic of the software supply chain over the last year, you've seen numerous attacks, and we've seen governments start to get engaged and ask: what can we do to ultimately provide greater levels of assurance about what software is being downloaded and consumed, and how do we start to fix this problem? We've seen a huge explosion in the number of open source packages that are available for use, as well as in the growth of communities, but the attacks have grown commensurately with that growth.
This looks at just a very small set of the attacks out there: dependency confusion and typosquatting, where maybe you want to download a JSON parser and, instead of an "s", someone submits a package name containing a different Unicode character that looks like an "s" but isn't, tricking you into downloading and importing a package that may have some malicious code in it.
Those sorts of attacks are becoming very, very prevalent, and they are just one example of where we, as a broader community, need to spend more time and focus to provide a higher standard of security for folks. And if you've been through a yearly corporate training where they talk about information security, you've probably seen that same slide we all have, of the USB drive.
The CTO office at Red Hat, along with Dan Lorenc, started the sigstore project a little over a year, a year and a half ago, really with the premise of: what can we do to start to address some of these issues? The thing that jumped out at us right away was that it's not as though we don't have the tools to start to validate some of this, apply policy, and sign and verify content; it's just that nobody is using them. And we started to really dig into that.
We realized that there were some significant challenges in terms of user experience, key management, and identity management that really warranted some real focus and energy for us to go and look at. So what we've done with the sigstore project is build out a set of sub-projects to address different facets of this overall problem.
We then engaged the Linux Foundation, knowing that it's great if one company wants to go and try to solve this space, but for broader adoption to take place, this needs to happen in an open community, an open, neutral consortium. So we engaged the Linux Foundation and, ultimately, the OpenSSF to be the home for this work. In addition to having a set of open source projects, we're actually running free, totally publicly available transparency log and certificate authority services, and they're community operated.
So if you just sat in on the keynote around Operate First, it's that same sort of mentality: we as a community are actually setting these up and running them. The goal in all of these projects, again, is not to try to be the single answer for every single scenario. What we wanted to do was enable a more modular and interoperable approach to bringing together a lot of the existing tooling and standards, to really start to address the problem of how we actually sign content.
Let's Encrypt was launched several years ago to really try to get 100% of web traffic encrypted using TLS and SSL, and so they wanted to remove as many barriers as possible to get people actually generating certificates, putting them onto websites, and getting that traffic encrypted.
We at sigstore want to be for software distribution, signing, and provenance what Let's Encrypt was to the SSL and TLS space: to make these practices as ubiquitous as possible, to remove as much friction as possible, ultimately to facilitate secure download verification of these assets, and then to bubble up some of these decisions around policy and trust in a more meaningful and visible way.
We focus on four different areas. The first, starting at the bottom of a layer-cake architecture diagram (you've always typically got infrastructure at the bottom), is infrastructure: there's been a fair bit of work in the CNCF and a handful of other projects to really start to generate attestations about the identity of a computer and the identity of the workload that's ultimately running on it, and we need to generate those attestations and then cryptographically sign and verify them.
The next area is really around the build system. It's always fun to write code, but we need to have attestations around who actually was the author of that code, and who did the plus-ones and the LGTMs in the code review that said "yep, this is sufficient" and ultimately pushed it through to generate a release.
Well, on that point: what system actually generated that release? Is it a desktop that's sitting on the floor of your room? Is it a cloud-hosted CI service that's out there?
And we want to sign not just the actual binaries that are getting run, but the configs themselves. Obviously, with the huge push around infrastructure as code, we want to be able to sign those artifacts and make sure that everything we're putting into production is verified and has a sufficient level of integrity underneath it. And then finally, we've got to wrap all of these signed documents up and put them into a policy construct, so that we can actually start making explicit decisions about what is in my environment.
And what do I trust? Recognizing that this utopia I'm trying to describe, where everything is signed and trusted and has full provenance, is going to take some time for us to get to: as we go through that journey, we want to make sure we're able to understand the risk in our environment and then, as we improve over time, tighten our controls if we so desire, to make sure we're improving the overall posture of our environment.
So again, this project was really started standing on the shoulders of giants. There was a lot of work done, going back seven or eight years, around the notion of a transparency log, and I'll go into that.
It was published in an ACM paper back in 2014, and Google open sourced an implementation of this in 2016. Ultimately, we started to see some of these raw materials pop up around how to actually get a public log that can be independently verified and is append-only, meaning nobody can go in and make changes to any record that has already been committed. That started to be out there for TLS certificates on the web, and in 2017 the Firefox team started looking at this.
Could we actually pivot this for other binaries? They started to reuse some of that certificate transparency log work that was built for TLS certs on the web, which was an interesting proof of concept, as I would call it: a lot of great work and a lot of great thought, but it was somewhat constrained by the limitations of CT logs. And so in 2019 and 2020 we started asking: what
if we took a step back and actually re-envisioned this more broadly? Brandon Philips did some work on a project called rget.
That served as the inspiration for what the sigstore community has now built. I'll go into the three major projects that are under the sigstore banner: Rekor, a signature and attestation transparency log, launched in the middle of 2020.
We then launched a certificate authority called Fulcio shortly thereafter, and in parallel we also started designing a new tool for signing. It initially started with signing container images, but has since broadened out, not only into generic blobs and artifacts, but also into the broader ecosystem. So cosign is the signing and verification tool that we've ultimately built.
On the community side, it has been amazing to see this go from three individuals to Slack channels with over a thousand people in them. We've had 236 individuals commit over three thousand different patches. You can see the rest of the stats here on the chart, but ultimately we've seen that up-and-to-the-right growth I talked about before.
This is the type of up-and-to-the-right growth you want to see on charts, and it's really been amazing and quite humbling to be part of this overall community that's been jumping in and really putting in the energy to solve some of these fundamental problems we have. Just to give you a sense of that:
this is a very active, very welcoming community, and if anyone here is interested to learn more, I hope you'll join us. Now, we've built some of this technology and we've prototyped it; the next question might be: okay, cool, interesting, but who's using it? On the left side of the chart here there are a couple of big names you might recognize, such as Arch Linux, Kubernetes, and GitHub. We're starting to see some of these big names really take a step back.
They're looking at how they could leverage the projects under the sigstore umbrella, and also whether they should start to leverage some of these public services we're running as well. Kubernetes has recently passed a KEP to look at using the cosign tooling to sign the release images, as well as starting to include SBOMs. GitHub now has some built-in support in their starter workflows to show you how you can actually sign artifacts using cosign.
C
The
rubygems
community
is
looking
at
publishing
an
rfc
literally
today
to
look
at
actually
revamping
how
ruby
gems
are
signed
and
published
into
their
package
management
systems
and
we're
having
very
similar
conversations
with
other
folks
with,
like
the
python
community,
the
node
community
of
various
folks
in
the
java
ecosystem
as
well.
So
we're
seeing
a
ton
of
excitement
and
again
here
we're.
We
literally
want
to
get
to
the
point
where
all
of
those
all
of
the
things
in
the
infrastructure
build
deployment
and
policy
space.
C
We want to make sure that we're doing a really robust job of covering all of our bases from that perspective. So, getting a little technical, I wanted to quickly call out at least a high-level diagram of what the architecture of these three moving parts looks like. As I mentioned, there's cosign, which is the tooling, at the top of this diagram.
If you've used certain systems like PGP, they're very powerful and they offer quite a few options; but for mere mortals who maybe don't have a deep security background, or don't understand the nuances of how to actually create and manage cryptographic keys safely, they're pretty daunting, and people frankly just look at the tooling and go, "yeah...
...I know I should do that, but I'm not doing it." So cosign really tries to make that user experience much more simple and straightforward, and in some cases we can actually totally remove the key management responsibility from the end developer, which is a pretty impressive thing that I'll show off here in a bit, once you've used our cosign tooling to sign an artifact.
We publish the results into Rekor, which is our transparency log. Rekor is a public REST endpoint: you can go and query not only what entries are in the log, but you can actually go back and verify the overall state of the log, because it's built on a Merkle tree, in which the hash values of individual leaves are hashed together and ultimately aggregated up to the root.
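As a rough sketch of that aggregation (illustrative only; Rekor's actual leaf and node hashing format differs), two leaf hashes can be combined into a parent hash with ordinary shell tools:

```shell
# Two example log entries (the leaves of the tree).
printf 'entry-one' > leaf1.txt
printf 'entry-two' > leaf2.txt
h1=$(sha256sum leaf1.txt | cut -d' ' -f1)
h2=$(sha256sum leaf2.txt | cut -d' ' -f1)

# A parent node is the hash of its children's hashes concatenated;
# repeating this level by level yields a single root hash that changes
# if any leaf anywhere in the tree changes.
parent=$(printf '%s%s' "$h1" "$h2" | sha256sum | cut -d' ' -f1)
echo "parent: $parent"
```

Because the root commits to every leaf beneath it, publishing just the root lets anyone detect after-the-fact tampering with an entry.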
You can use the APIs to look at the state of the log; we publish everything in public cloud storage buckets, where folks can go and actually monitor it and keep us honest. Again, we're community operated; there's no profit motive behind any of this, so we want to make sure that we're not just standing up something that's useful, but something that can ultimately be audited and trusted. And then finally, moving clockwise over to the left: Fulcio, as I mentioned before, is our code signing certificate authority.
Traditionally, people would show their identity documents to one another and say: hey, I met you, I've seen your ID, I trust you are who you say you are, so I'm going to import your key into my keyring. That way, as I go and look at artifacts that are out there, I can be assured that if you show up with some sort of signed artifact, you were actually the person you say you were.
Especially in COVID times, when it's really hard for us to get together and meet in person, that's not quite pragmatic in all scenarios. And in other scenarios that we'll talk about: what happens if the identity of the signer isn't a person, but is actually the build system, or the desktop system, that wants to generate an attestation?
So those are the three pieces and how it all ultimately starts to come together. But rather than sit here and drone on much more over charts, I wanted to quickly show you a demo. I'm going to do two different things before I jump to the terminal.
The first is that I'm going to build a very simple hello-world app and ultimately sign it, and I'm going to walk through the browser-based login workflow of a developer who maybe has a Gmail account from somewhere and wants to use that to generate the identity token.
I'm going to commit that code and push it up to GitHub, and while I'm showing you the demo, in the background GitHub Actions is actually going to go and build a containerized version of that application and publish it into the GitHub Container Registry. In doing so, it's actually going to sign the image using its own workload identity, so there won't be any interaction from me.
And so that I don't have to go back and forth copying and pasting, I'm using a little script called doitlive, which allows me to make sure I type my commands totally cleanly. This is live, it's running, so we'll hope the demo gods keep me in their good favor; but if you notice my amazing typing abilities with no errors, I am using a script to help with that. All right, let's start this first demo. The first thing I'm going to do is start off and clean
my environment: I'll run a makefile real quick just to clean everything up. Assuming this goes well, I'm then going to go into a directory and open my simple hello-world Go file.
I've got a simple print statement here, and, just to prove that this is live, I'm going to build it.
What I'm going to do next is commit that; and you'll notice here that I've got a quick shell-out to get the current date and time, so when we go and look at GitHub we'll be able to know that this came in at 9:20 Eastern time today. So I've committed that, I've now pushed it to the repo, and I've got a binary sitting here on my desktop; and in the background, like I said, that commit has already been pushed to GitHub.
What I'm going to do now is use the cosign binary, and you'll notice here that I have this experimental flag turned on. That's because, as I mentioned a couple of slides ago, the Rekor and Fulcio services are still in a beta mode, and we want to make sure that's very apparent to users, so we still have that flag set in our tooling.
So what we're going to do here is use the cosign tool to sign the actual executable, the actual binary itself, and in that signing process we're actually going to take the code signing cert that comes back from the Fulcio service and store it in the local directory as well. When I run this, you're going to see my browser pop up; and if you've ever been to a website where you've tried to log in and it gives you the option of "log in with Facebook" or "log in with Google" or "log in with Twitter", it's the same general concept we're using here.
These are all the OAuth2 and OpenID standards; we're simply using them as published, and we're not modifying any of those standards at all. So we present a screen here that says "log in to the sigstore identity framework", you click, and we don't get access to anything other than your email address; that is the only thing we actually capture in this flow. In this case I'll click on my Google login... and it looks like I took too long to sign that, so you can see this is actually a real demo, because I got an error. So what I'm going to do is quickly rerun this.
I'll make another commit and then we'll run this process one more time; this time I'm not going to take so long. I'll click through and then go back to my terminal here. You'll see this is the actual code signing certificate that was generated; I didn't have to go and understand any crypto calls whatsoever.
This just flowed right through, and I've now not only made an entry in the transparency log, but I also have the code signing cert.
Let's crack that open. Again, this isn't something you necessarily have to do, but for those of you who are familiar with looking at SSL certificates, what I'm going to do here is just print what this ultimately looks like, quickly running through it. sigstore is ultimately the certificate authority, so it's the issuer of this cert. This is a public key that was generated in memory by our cosign tooling, and then down here at the bottom you can actually see we're storing both the identity issuer, which in this case was Google, and my email address.
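You can do the same kind of inspection on any certificate with openssl. Since the Fulcio-issued cert from the demo isn't reproducible here, this sketch generates a throwaway self-signed cert and prints its fields the same way:

```shell
# Throwaway key and self-signed cert (stand-ins for the demo's
# ephemeral key and Fulcio-issued certificate).
openssl ecparam -genkey -name prime256v1 -noout -out demo_key.pem
openssl req -new -x509 -key demo_key.pem -subj "/O=sigstore-demo" \
    -days 1 -out demo_cert.pem

# -text dumps the issuer, validity window, embedded public key,
# and any extensions (Fulcio records the OIDC issuer and email there).
openssl x509 -in demo_cert.pem -text -noout
```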
The way the rest of this works is to put this into the immutable signature transparency log that we have. So what I'm going to do now is query the log and say: show me all the entries you have for the binary that I just compiled on my system. It's a simple Rekor CLI; again, we're trying to keep the user experience here very simple. It says: search for any entries you have in the log for this particular artifact.
In this case, we have the one entry that we just put in by running the cosign tool. So what I'm going to do here is actually ask for that log entry, and what ultimately comes back is some reasonably pretty-printed JSON saying: hey, there's a record in the log, it's at an index around 1,191,756, and it has this particular unique identifier in the transparency log. All we are storing is the SHA sum of the artifact, not the binary itself; we're not trying to redistribute binary content.
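Computing that digest locally takes one command; the log refers to the artifact only by this value:

```shell
# Any file can stand in for the compiled binary here.
printf 'hello, world\n' > myapp
# Rekor lookups key off this hex digest, not the file contents.
sha256sum myapp | cut -d' ' -f1
```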
It's an append-only entry in the log that says: here's the instance of this artifact at this particular point in time. Now, you might ask:
C
Okay,
cool.
You
wrote
down
that
something
happened.
What
does
that
really
give
me?
So
if
you've
ever
heard
of
the
term
kind
of
a
split
brain
attack
or
a
shared
v
or
a
split
view
attack,
we
don't
really
have
a
single
source
of
truth
as
to
who
signed
an
artifact
and
at
what
time
was
it
ultimately
signed?
And
what
happens
if
somebody
comes
along
and
uploads
a
different
binary
with
a
different
signature
and
everything
looks
like
it:
checks.
B
C
You may not be seeing the same thing as what somebody else halfway around the world is ultimately seeing, if you're being particularly targeted with a man-in-the-middle attack or other things. We want to make sure that we have a single source of truth to say what happened when, and so what the transparency log ultimately provides is a way for you to independently query something and to know the history for this binary.
Maybe it's only been signed once, and here's the email address of the identity that signed it; or you may see 700 signatures that are all tied together there. So again, this doesn't necessarily solve the problem by itself, but what it does is level-set the information, so people can actually start to query this and better understand how all of it is pulled together.
Let's go ahead and dig into this CI run here; it's a pretty simple one. It's just checking out the code, downloading the correct version of Go, and using a project called ko, which is a cool open source project that makes the process of compiling Go code, putting it into a container based on distroless images, and then pushing it out to a container registry super simple and straightforward.
I log into the container registry once I compile everything, and then I'm using this ko tool to actually sign the image, push it to the container registry, and upload the signatures.
This last image was actually signed not by Bob, but by GitHub itself. You can see that the issuer of the identity token was actually GitHub; that flowed back and forth between the sigstore services and made an entry in the log showing that a container with this particular digest value was put into the registry, and we have the information here from the log that's actually stored as well. An interesting note here: this signed entry timestamp can actually be verified offline.
C
If
you
have
the
public
key
of
the
log
and
you
choose
to
put
that
into
a
trust
store,
you
can
actually
verify
that
timestamp
and
have
a
cryptographic
assurance
that
it's
in
the
log
without
having
to
query
the
log
in
real
time.
So
you
can
use
this
in
both
online
and
offline
cases
and
then,
finally,
what
we'll
do
is,
let's
pull
the
actual
certificate
out
of
the
entry
in
the
log
and
just
quickly
show
one
more
time
that
this
did
come
from
github
and
we
have
a
little
bit
more
information
here.
C
that says: this is the actual name of the repository, it happened on a push event, it was on the main branch, and this actually was the commit hash from the repo that kicked off the build workflow. So if the commit starts with 9696, let's go back to the repo real quick and check out my commit history... and sure enough, it matches.
So that is the end of the first demo; let's quickly switch back here to the slides. What I'm going to do in the next few slides is dig into a little bit more detail around what is actually happening underneath the magic of some of these tools, starting first with the cosign tool. This is a Go-based binary that runs cross-platform.
It essentially has the responsibility of looking at the artifact, generating the signature for that artifact, and then publishing that signature to a transparency log. Again, it does this for both blobs and containers, which you saw in the demo.
In the demo I used an ephemeral key, and we'll talk a little bit more about what that means; but I could just as easily have grabbed the YubiKey that's here on my desk, plugged it into my system, and used the key on that YubiKey to do the signing. I don't necessarily have to generate a key in memory.
If I'm comfortable with managing my own keys, I can simply do that, and just generate a call out to our services to produce an attestation that, hey, at this particular point in time, I actually possessed the key, and I'm proving it cryptographically by signing something and publishing that up to the log. So the tooling is really there just to make that super simple and easy. For containers, we leverage the OCI specs to generate a new artifact on the image manifest object that actually stores the signature itself in the registry, alongside the container image, so you're not having to go and fetch these from different places; they're all stored under the same object, and if you're familiar with using tools like crane or skopeo to look through a registry, you can see them there.
What you saw in that first demo was the use of what we like to call keyless signing, which is kind of a nod to the notion of a serverless type of workflow: obviously there is a server somewhere that's running that workflow, but we still like to call it serverless. In that same vein, keyless mode is where we actually generate a private and public key pair in memory.
We do the signing and we walk through the whole workflow, but at the end, because we've published this proof of possession and the actual signature itself into an immutable log for all to see, I don't have to keep that private key around at all. I can actually just delete it and move on, which is a really powerful concept, because now I don't have to go find that YubiKey and keep it safe.
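The ephemeral-key idea can be simulated locally with openssl. This is a sketch of the concept only: in the real flow the public key is bound to an OIDC identity by Fulcio and recorded in Rekor, rather than kept as a loose file.

```shell
# 1. Generate a key pair (standing in for cosign's in-memory key).
openssl ecparam -genkey -name prime256v1 -noout -out ephemeral.pem
openssl ec -in ephemeral.pem -pubout -out ephemeral.pub

# 2. Sign an artifact with the private half.
printf 'release-artifact' > artifact.bin
openssl dgst -sha256 -sign ephemeral.pem -out artifact.sig artifact.bin

# 3. Delete the private key. Verification needs only the public half,
#    which in sigstore survives inside the cert and the log entry.
rm ephemeral.pem
openssl dgst -sha256 -verify ephemeral.pub -signature artifact.sig artifact.bin
```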
I don't have to worry about what happens if I've got that little stubby thing plugged into the side of my laptop and my laptop gets stolen; I don't have to worry about that at all. All I need to do is make sure that I maintain the integrity of my OpenID provider's credentials, and assuming that's true, key management is totally taken out of the picture and out of concern.
A couple more things to call out about the Rekor transparency log. There is a requirement that we will only insert things into the log that it can independently verify. I mentioned this before, but it's worth reiterating: artifacts themselves are not stored in the transparency log. We are not trying to be the content store for the entire internet's digital signatures; we only store the digest, the signature itself, and the public key or the code signing certificate. The only exception to that rule is for full provenance attestations or timestamps.
We do record that data in the log, but again, there is no content itself: there are no binaries, nothing that can actually be executed. These are just attestations or metadata about a particular artifact that get put into the log. We have a full OpenAPI document that describes what the REST endpoints all look like.
We act as a compliant timestamp authority and, as I mentioned before, a public-good instance: you can publicly monitor and verify its integrity yourself. But you can also run these services yourself, behind the firewall, if you so choose; there's nothing preventing you from downloading and running them on your own.
I showed you the nice way to use cosign and our tooling to make this super simple and easy. Since we've got time, I'm going to dig in and actually show you more of the hard way of doing this. What I'll do here is walk you through how I would replicate that whole demo I just showed you, using openssl, curl commands, and things that you could ultimately recreate without our tooling whatsoever, and in doing so really show what's going on underneath.
So with that, let's kick off the demo number two script. The first thing I'm going to do is generate that key pair I talked about before. This is just a simple call to openssl to say: give me a private key based on the elliptic curve algorithm, and write that into ec_private.pem; then let's generate the public key that corresponds to it and put it into ec_public.pem.
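Those two openssl calls look roughly like this (the specific curve is my assumption; the demo doesn't name one):

```shell
# Private key on the NIST P-256 curve, written to ec_private.pem.
openssl ecparam -genkey -name prime256v1 -noout -out ec_private.pem
# Corresponding public key, derived from the private key.
openssl ec -in ec_private.pem -pubout -out ec_public.pem
```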
It's bad practice to show your private key, so I won't show you that; but I will show you the public key, just for reference: we've got a PEM-encoded public key that can be used. The next thing I'm going to do is actually call out to that same sigstore identity provider, and again, I could point this at any OpenID-compliant provider.
C
If
I
chose
for
kind
of
having
my
own
private
deployment
of
the
sig
store
services,
I
could
certainly
configure
them
to
point
to
my
internal
sso.
If
it
generates
id
tokens,
that's
totally
a
possibility,
but
for
our
public
service
in
this
beta
period,
we
just
wanted
to
point
to
something
to
get
the
concept
out
and
available
for
folks.
C
What this ultimately does is walk through that same OAuth dance: call out to a provider, get an identity token, and then ultimately write it into the file system.
So you'll see my browser pop up again; I'm going to click the "login with Google" button one more time and click on my address. I get the thumbs up that this was successful, and we pop back over to the shell prompt. The next thing I want to do is actually extract the email address from the identity token that came back.
Ultimately, all I'm doing here is generating something that I can sign to prove possession of the private and public key pair that I created above, so I'm going to actually sign the email address itself, as a string. Now that I've extracted the email address, what I'm going to do is use openssl and say: create a digital signature using this particular hash algorithm.
I use the EC private key to generate that signature over the file named "email", and then store it in email.sig, just to keep it easy and straightforward. The next thing I'll do is say: okay, I've signed the email address; let's locally verify that the signature checks out. Instead of using the -sign option, I use the -verify flag; I pass it the public key in this case, not the private key, and we'll pass it the signature itself as well as the actual input message.
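Those sign and verify steps are roughly as follows (self-contained here, so the key pair is regenerated, and the email value is a placeholder):

```shell
# Key pair, as created earlier in the demo.
openssl ecparam -genkey -name prime256v1 -noout -out ec_private.pem
openssl ec -in ec_private.pem -pubout -out ec_public.pem

# Placeholder for the address extracted from the ID token.
printf 'user@example.com' > email

# Sign the file with the private key...
openssl dgst -sha256 -sign ec_private.pem -out email.sig email
# ...then verify using the public key, the signature, and the message.
openssl dgst -sha256 -verify ec_public.pem -signature email.sig email
```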
I may have to restart this one more time, depending on how long this took, but all I'm doing here, typing very quickly, is generating a REST call to the Fulcio endpoint. I'm passing along the ID token as a bearer token on that API call, and I'm passing it some basic JSON in the body of the request, stating: hey, I'm using ECDSA keys, here's the actual value of the public key, and here's the value of what I've ultimately signed. So let's see if that actually worked.
Let me show you the identity token that we wrote, again just to look at what information is there. I'm going to print it out, but I have notably removed my user ID in the sub field, just for my own personal sanity, since this is being recorded. This is the JSON payload that's inside of an identity token; the token itself is signed by the identity provider.
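For illustration, here's how such a payload sits inside a token and how to pull it back out. The token below is hand-made, not a real one from Google:

```shell
# A two-claim JSON payload, base64url-encoded into the middle
# segment of a fake header.payload.signature token.
payload=$(printf '{"iss":"https://accounts.example.com","email":"user@example.com"}' \
    | base64 | tr -d '=\n' | tr '+/' '-_')
token="fakeheader.${payload}.fakesig"

# Decode the middle segment: undo the URL-safe alphabet and restore
# padding before handing it to base64 -d.
mid=$(printf '%s' "$token" | cut -d'.' -f2 | tr '_-' '/+')
while [ $(( ${#mid} % 4 )) -ne 0 ]; do mid="${mid}="; done
printf '%s' "$mid" | base64 -d
```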
That's the way we cryptographically link this up into a root of trust. And here you can see that there's no information about how to get to my documents within Google, or any other public information that I don't want to disclose; it literally is just my email address and nothing else. So now that I've got that certificate downloaded, let's actually print it out and look at it, and this time we get that certificate, again issued by sigstore.
We'll come back and talk about this validity period here in a second. But again, all that's stored in the certificate is the same thing I showed you before: the identity of the identity provider that signed that token, and my email address. Nothing else is stored.
So
what
I'm
going
to
do
now
is
I'm
again
a
code
signing
certificate
is
really
a
signed
document
that
says
this
person
presented
a
public
key
with
this
value.
At
this
point
in
time
and
so
embedded
inside
of
that
code,
signing
certificate
is
actually
the
public
key
itself
that
I
generated
at
the
beginning
of
the
second
demo.
C
So what I'm going to do is extract that public key value out of the cert, and then I'm going to make sure that I can still verify that same signed email address that I had at the beginning, using what I pulled out of the code-signing cert that came back from Fulcio. openssl tells me it's still good. I'm going to quickly prove to you, by calling diff, that both of these things are still identical, because I don't see the message here that says: wait a minute, this doesn't match. So now we'll move on to the actual signing path.
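That extract-and-diff check can be reproduced locally. This sketch mints its own key and cert so it is self-contained, whereas the demo used the certificate returned by Fulcio; the logic is the same.

```shell
# Extract the public key embedded in a code-signing cert and confirm it is
# byte-for-byte identical to the locally generated public key.
openssl ecparam -genkey -name prime256v1 -noout -out key.pem
openssl ec -in key.pem -pubout -out public.pem
openssl req -x509 -new -key key.pem -days 1 -subj "/CN=user@example.com" -out cert.pem

# pull the public key back out of the certificate
openssl x509 -in cert.pem -noout -pubkey > extracted.pem

# diff prints nothing (and exits 0) when the two files are identical
diff public.pem extracted.pem && echo "keys match"
```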
Now that I've got that code-signing certificate, I'm just going to generate 128 bits of randomness and put it into a file called artifact.
C
All of that is interoperable based on adherence to that RFC. So what we're going to do is use openssl to generate a timestamp request, and in that request we're actually going to provide the SHA sum over the signature itself. What we're doing here is proving possession of the signature at this particular point in time. And so what we've done is we've created that request.
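Building that request looks roughly like this with openssl's `ts` subcommand (the RFC in question is RFC 3161, which `openssl ts` implements). The signature file here is a stand-in rather than a real Fulcio-backed signature.

```shell
# Build an RFC 3161 timestamp query over (a stand-in for) a signature file,
# embedding the SHA-256 digest of the signature in the request.
printf 'stand-in-signature-bytes' > message.sig

openssl ts -query -data message.sig -sha256 -cert -out request.tsq

# human-readable dump of the request we just built
openssl ts -query -in request.tsq -text
```

A real timestamp authority would answer with a signed `.tsr` response, which you can later check with `openssl ts -verify`.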
C
Let's now call out to the Sigstore API, again using curl, with a simple push of that binary content, and we'll also fetch the certificate chain that we will use for verifying that signed timestamp. So what we're going to do now is use openssl to verify that signed timestamp, and what we get back says: all right, that timestamp came back okay. And again, you may ask: Bob, why are you generating timestamps? This doesn't really make any sense.
C
It's so that I have cryptographic proof that this signature existed during the validity period of that code-signing certificate. And at that point I can go put that in the transparency log and throw away the keys, because I won't actually need to keep them anymore: all of this process can be walked back and independently verified against the root of trust with the information from the logs.
A
I'm sorry, Bob, we've reached the time for Q&A. So, if you don't mind, finish the thought and we can go to the Q&A. Yep.
C
I'm right at the last step, so we'll move right to that here in 30 seconds.
So what we'll do here is submit that signature, and we'll break apart the output again. This is similar to what we saw before: we're sending just the digest and just the public key, nothing else, into the log, and we are done. So, last slide, and we'll move to Q&A.
C
The project is growing super quickly. In terms of the areas that we're going to continue to focus on: we're continuing to do a lot of work with the upstream container ecosystem, as well as, as I mentioned, with the various different package managers that are out there. We're trying to get this integrated into a variety of different admission controllers within the Kubernetes community, as well as the broader set of Linux distros and others, and we're again pushing towards a public GA of many of our services.
C
So, long story short, I really appreciate everybody's time today. If you're interested in more information, feel free to look at GitHub, visit our website, sigstore.dev, or the other broader efforts out of the OpenSSF. So with that, thanks, and we'll switch over to Q&A.
C
If you just do a pip install, all you give it as input is a pointer to a shell script, and you tell it: run this. Then for every key I mash on my keyboard, I can type whatever keys I want, but it actually puts into the shell what is in the script itself.
A
There are other questions in the Q&A. All right.
A
Are any container registries, such as Docker Hub or Quay.io, interested in this? Are they being onboarded as well, or showing interest?
C
Yeah, so the answer to that is yes. For the Red Hat products out there, Quay 3.6 actually has all of the OCI media-type support required to implement this. The local product is GA; I'm not sure what the rollout status is for the public quay.io service to get to that 3.6 level, so I'd redirect that question to the PM. But Docker Hub already supports this, and GHCR, as I showed, supports this today.
C
We're getting to the point where this is going to be pretty much ubiquitously supported across all the popular container registries that are out there. And I see Peter asks: is it possible to push signatures to any container registry? Same sort of thing: we're using the OCI standard for recording this information, and we're not doing anything custom. So as long as a registry is OCI compliant, it will just work.
C
And we could record that signature in the log just as well, because, frankly, the commit itself is text that has digest values in it; it's structurally very similar to other things, so there's nothing preventing us from doing that. It was just a matter of trying to scope the demo. But you're absolutely correct: we ultimately want to generate provenance statements that say who was the person that ultimately generated a commit, and where did it originate from, and to be able to walk that back.
C
Git, with its structure, provides some guarantees there that we rely on, and that more broadly the community relies on, for integrity. But for signing the commit itself, the git tooling has some stuff that's built in, and we're looking at some different variants of that as well. So: totally supported, and a great point.
A
So we've hit the total time for this talk. For anyone who hasn't had their question answered, Bob will be available on our virtual venue on the networking platform, and if you want to meet him or ask any more questions about the talk, you can catch up with him there. Thank you for the talk, and thank you everyone for joining us for this talk.