From YouTube: Zero-Trust Supply Chain Security with Sigstore, TektonCD and SPIFFE - Dan Lorenc, Google
If you've heard of supply chain security this year, then you're not alone. It's a pretty hot topic in the news lately. I'm going to be covering some of the history of supply chain security, a bunch of the attacks that have happened lately, and then how to combine some buzzwords. Supply chain security has become a buzzword, zero trust is a buzzword; we're going to put them together and make a double buzzword, and then have some real examples showing how it can be used to actually help in the real world.

So yeah, thanks again for coming in person, to everyone here in person. I know the pandemic has been long. This is one of my first times speaking to real people, and we're sick of the pre-recorded talks and the videos being played at us all the time, so we'll try to make this one a little bit more fun.

We'll start with this one. If you've seen my bio photo or anything on the internet, yeah, this is what I looked like before the pandemic. How it started, how it's going. And even this one's a little old now; my hair kind of collapsed under its own weight, and now it goes down instead of out. This was about the peak of still going up. Yeah, so this has been a long year, not just in pandemics and in hair, but also in supply chain security and supply chain attacks.

So here's what we're going to cover. First couple of topics: why does this matter? Why are supply chain attacks important now? Why do we need to protect ourselves against them? I'll try to explain the zero trust buzzword a little bit too, and then jump into some zero trust supply chain architecture and some demos showing how we can actually use some of this today, using projects like Tekton, Sigstore, SPIFFE, SPIRE and in-toto.
Cool, so supply chain security: why now? We've got a couple memes here. I put together a few different versions of this, but this kind of sums it up: we've spent decades in software building things and kind of neglecting the build environments and the build systems and the people that operate them. We spend all this time securing our production infrastructure, but then build things on Jenkins boxes sitting under somebody's desk that nobody is really looking after. I'm guilty of this. I've done this. This is how I got into this space.

We spend all this time and infrastructure and dollars buying these expensive solutions to protect our runtimes, and then deploy to them using kind of amateur tooling (not to use that word badly; it's just, we all know we're doing it and we haven't fixed it yet). So yeah: the world's software supply chain security, and a bunch of unpatched Jenkins servers in closets holding all of that up. There's another version of this one too that I thought of yesterday, so I added it here. This one, yeah.

The big theme here, and the big advice I give to people when I talk about how to secure their supply chains, is simple: treat your build system like a production environment, because it is one. It sounds kind of silly, but it's like those movies where they build this huge jail cell or a bank vault or something like that, that's super secure, and then somebody sneaks in in the food cart, or sneaks out in the food cart or the delivery van, or something like that.
We've gotten so good at protecting our infrastructure that attackers have looked for the next easiest way in, which happens to be through the supply chain. And just like physical supply chain attacks happen using vendors, or you know, the doorways that we've left open, software supply chain attacks work the same way. Instead of attacking one company, if they've done a good job protecting themselves, you can find a vendor that they use, or an open source dependency, or a library, or another system, and attack that, and then pivot to all of their customers.

This is one of the ones where the what-ifs still worry me. VS Code, which is an open source text editor from Microsoft developed on GitHub: there was a misconfiguration in some GitHub Actions on release branches, where a researcher was able to get push permissions to the release branches, because the branch protection was set up incorrectly. Thankfully, this person got a huge reward and everything and nothing bad happened, but he showed that he had permission to push releases on VS Code, just because of some misconfigurations on GitHub.

The really scary thing here is that this supply chain attack could have pivoted to other supply chains, right? This is an IDE. This is a development environment that's used by developers all over the world. So if that got hacked, if that got tampered with, now we couldn't trust any code that came out of it. So this is one of the reasons this one scares me so much.

Nobody had done this before, and it was another researcher again, thankfully. By confusing people about the name and the namespace and the location packages would be downloaded from, the researcher was able to upload packages with internal names to public repositories. The package managers try so hard to download something for you that they'll fall back (or fall forward) and retrieve things from public repositories first. And so the researcher got a whole bunch of prize money for this.
He got a payout for this one too, which is great. PHP was another near miss, where this was an actual attacker, but the PHP development team caught it pretty quickly. Somebody pushed forged commits directly to the upstream PHP git repository, because it wasn't secured correctly. They quickly corrected that: they moved it to GitHub instead of hosting it themselves, and noticed it fast, before a release got cut. But this is another one with huge downrange implications, because this is a language interpreter, a language runtime, that's then pulled into websites all over the world.

I won't go through all of them, and this is just a summary anyway, so let's jump ahead to some of the more recent ones. Just maybe a month ago, anybody doing work on GitHub probably had to deal with this one. Did anybody have to deal with the Travis CI secrets rotation issue? Anybody have to deal with that one? A couple people here.

If you didn't raise your hand, you should probably check to see if you're using Travis. You might not have seen it, but there was a misconfiguration in the Travis CI infrastructure where all secrets were made available to pull requests, and pull requests are a form of remote code execution. If you have tests, anybody can send you one and your tests get to run, and if you have secrets in those tests, the Travis CI issue made them available.

So, a bunch of these, both open source and closed source. Open source isn't worse at this; it's not better. It's all source code, and all source code is developed, and all source code is underinvested in from the build perspective, like those memes showed before. There have been a bunch of other scary charts here too. So Sonatype (they're at the event; they just put out a report last week, maybe two weeks ago, their 2021 State of the Software Supply Chain report): there was a 650 percent increase in supply chain related attacks in 2021.
So it's a big problem. One of the best reasons I've heard for why this is only finally a thing now is that we've gotten so good at securing the rest of our infrastructure that attackers are pivoting to the next easiest thing. It's not that this is easier than it was before; it's just easier relatively. People are finally turning on things like two-factor authentication and strong password managers, and HTTPS-everywhere efforts like Let's Encrypt. So the rest of our infrastructure has gotten harder to attack, making supply chains the easiest relative target.

Things like SBOMs, software bills of materials, help us understand what's in the software we're using, and then there are security frameworks to help describe that. And this is where a bunch of the other buzzwords like zero trust come in.

If you're relying on a fence like this, or a firewall, or some allow list of repositories that you want to pull your packages in from, and you just assume that everything inside of that boundary is safe and everything outside of that boundary is dangerous: that is not zero trust. That is the opposite. So things like secured networks, using bastion servers to get into a production environment, VPNs. This is all on the network security angle, so I'll explain how this translates to supply chain security in a few minutes.

But these are all the things that represent not-zero-trust, and it's obvious why, right? As soon as you find one hole in one of these barriers and get in, you have full privileges and you can pivot around and attack everything behind that infrastructure. So this is why these older techniques don't work.
So what is zero trust? This is when you start making things a lot finer-grained and base trust on individual entities. So instead of saying everything inside of this room is safe and everything outside is dangerous, we actually know, and base all of our trust decisions on, who everyone is. I know some people in this room; I don't know some people in this room. So instead of just setting up a fence or a perimeter, we're basing it on every single person, and they carry the credentials around with them everywhere they go.

This has been particularly great during the pandemic, when all the work from home started and everybody stopped going to an office every day. If you had a setup before where your office had permissions, where there were things you couldn't do from home, you likely switched to something like this, where you get your permissions everywhere you are. If you have the same permissions inside and outside the firewall, that's what we're going for here, and this is a zero trust network architecture.

So how do we translate that to supply chains? I talked about it a little bit before, but where in networks we're talking about people inside and outside of a firewall, we're pivoting this over to artifacts. In supply chains we're dealing with software: code, build systems, and the artifacts that get built. These are binaries. These are container images.

These are Python or Java packages. And so, if we want to get rid of those firewalls, and get rid of those allow lists and trusted registries, then we have to actually be able to trace artifacts back to where they came from. Artifacts don't have names; they're not people; they can't tell you who they are. You want to base these policies on where the artifacts came from, how they were built, who reviewed the source code.
Those kinds of things. So if you want to combine these two buzzwords and come up with what a zero trust supply chain is: it's one where every artifact can be verifiably traced back to the source code and the hardware it was built on. And these are roots of trust, right? People have roots of trust today: we have two-factor auth, we have identification systems, all these things to vouch for a person's identity. Hardware has the same thing: we have TPMs, we have trusted hardware.

We can prove that hardware is safe and hasn't been tampered with when it boots up, but we don't really have that for source code. We don't really have that for artifacts today. So if we can connect that bridge from artifacts all the way back to those trust roots, the people and the machines that they came from, then we can start to apply these finer-grained policies to our production environments.

And right now it's a mess, right? This is just a little example I have, and we'll kind of play around with some of this in the demo at the end. This is not to pick on this example; it's just the first one that comes up when you go to the CNCF Artifact Hub. If you go to artifacthub.io, this Prometheus chart is one of the top ones that shows up.

These are the artifacts that get deployed, so these are the container images that end up running just from a simple installation here. This is not zero trust, right? We're trusting a whole bunch of different things, a whole bunch of different people, a whole bunch of different systems, and every single one of these could be tampered with, or attacked, or could be malicious.
So if the repository operators get compromised, now they're part of your trust circle. Anybody operating one of these repositories can change something and attack you, and here there are three different ones at least trusted in this one Helm chart. So we've got Docker, we've got Quay. I guess just two, yeah, because the bottom one is also on Docker Hub. So we're trusting both Docker Hub and Quay in this example, and they might not be part of your trust circle.

You might not trust those operators. Then there are the build systems that actually built these things; I'll show some examples here. If the build system gets attacked, it can insert malicious code. That's exactly what happened with SolarWinds. So even if you know the authors of these images, and you know the people operating the systems that host the images, that's still not enough.

So these are some kind of top-level principles on how we can get to zero trust supply chain security: some don'ts and then some do's, if we want to work backwards like we did in the beginning.
This is kind of the worst of the worst: you're not paying attention at all. Allow-listing specific tags is kind of the next step up. If you go through some of the tutorials around best practices with Kubernetes, this is a pretty easy thing to set up. If you push all of your images to a single location, you can at least restrict things to that. That's way better than doing nothing, but it's still not great, because then anything that gets pushed there is now inside of your trust circle.

It's inside of that barbed wire fence from the first picture, and it's not really zero trust. Specific repositories: same thing. And even a whole bunch of the basic signing schemes in all of this are also kind of just papering over the same problem.

So how do we flip this around to things that we should do? This is where we start to get to much more complicated policy decisions, much more complicated policy definitions. In these systems we have to base the policy on exactly where something came from. That's where the term provenance comes in, if you haven't heard that term. It's the same in physical supply chains and software supply chains: it represents where something came from, and we want to base our policy on that, and it should be tamper-proof.
So instead of basing policy on where an image is, or where an artifact is, we want to base our policy on where it came from and how it was produced. And we want to capture this at every step of a build. Builds are recursive; builds have multiple steps. Anybody who's looked at a Makefile knows there are probably hundreds of lines in there just for the simplest things. This applies to dependencies as well, so we have to do it recursively.

You want to capture this provenance at every single step; we can't do it once at the very end. This applies to every single piece here: the source code, the build process, the publication process, everything. When we sum all this up, we get a system where we can trust what the artifact is and how it came to be, not where it happens to be at any given moment in time. That's a big distinguishing characteristic between zero trust and non-zero trust.
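To make this concrete, a provenance record can be pictured as a small signed document attached to each artifact. Here is a rough, hand-written sketch loosely modeled on the in-toto/SLSA provenance format (field values are illustrative placeholders, not output from a real build, and this is not the exact schema):

```json
{
  "subject": [
    { "name": "ghcr.io/example/app", "digest": { "sha256": "b5bb9d8014a0..." } }
  ],
  "predicateType": "https://slsa.dev/provenance/v0.2",
  "predicate": {
    "builder": { "id": "https://example.com/build-system" },
    "materials": [
      { "uri": "git+https://github.com/example/app", "digest": { "sha1": "abc123..." } }
    ]
  }
}
```

The key idea is that the record names the artifact by content digest and names the inputs (source commit, builder) the same way, so policy can be written against where it came from rather than where it sits.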
These are all the different steps that happen just to build one package and get it from source code to one consumer, without any of the recursive elements. And all these letters here, A, B, C, D, E, F, G, H, are attack points, and if you go to the repo there, you can see a whole bunch of sample attacks that have actually happened. So this isn't theoretical.

These are places that actually do get attacked, in a single artifact. So if you're relying on no trust, or not caring at all, then you've got a whole bunch of different attack points here; I mean, you're trusting nothing bad to happen at any one of them. Or, if you flip this around, you get signed provenance captured at every one of these steps, stored somewhere that we can then apply policy on.

So yeah, another meme here. This is kind of how we have to reason about this stuff today, and I'll play around with a couple examples here, particularly with that Prometheus chart that I started on before. These are the type of questions we want to be able to answer so that we can write and enforce policy. If you find a binary on the ground, if you just find something on the internet that you want to curl-pipe to bash for some reason, you probably want to know who built it and who published it.
You might find it at a URL, but you don't know who owns that URL; it could just be some random hosting system. So: who published that binary? The next step is: how was it built? Was it built with a version of the Go compiler that was full of CVEs because it was old? Was it built with a toolchain that we trust? Was it built on a system we trust? If you just have a random executable or a container image, you can't really answer any of these questions today.

Then, all the way back to the more important questions: what source was it built from? If you're using open source it's great, because you can audit the source code. You can review the source code and look at it for malicious code or vulnerabilities, but that's only true if you know the actual version and the actual source code something came from. A lot of the package managers today show you a repository URL and tell you where they think something came from, but in most cases that's just a random text field.

Again, if we don't know who built it, who reviewed it, who authored the code, then we can't make those decisions. And again: was anything tampered with? Even if we think we know all of this, we might not know that. An attacker sees each one of these steps as a potential place where they can jump in and do bad things.
Alright, so this is a little reference architecture that we put together, showing how a bunch of open source projects today can be combined to get something similar to one of these architectures. We can capture these provenance statements inside of a build system. We can operate the build system securely.

What we're trying to do here is cryptographically verify every step in a simple supply chain: from an artifact all the way to a root of trust, whether that's a person touching a two-factor auth token, or hardware with a TPM booting up in a known good state. And since this is open source, we're going to build this inside and outside of organizations. So you might be able to set up something like this in your own company, where you don't build any source code from external dependencies and you build everything from scratch yourself.

A recent survey said 99 percent of companies have open source in their supply chain somewhere, and the one percent was probably lying, or didn't understand the question, or something like that. It's virtually impossible to not use open source, whether you know it or not.

So we want to be able to carry this inside and across organizations. Nobody's living on an island; nobody's living in a bubble. If you build something that only works in your internal system, then that's not solving the full problem. Yeah, so there are a whole bunch of different projects we can combine to get these five pieces.
So the first one is identity, both for people and systems. We want to be able to address machines. We want to be able to address services, whether they're ours or someone else's. Naming is one of the hard problems that every engineer has had to deal with, and this is really a naming problem, a secure naming problem, which makes it even worse.

Thankfully, there are some awesome projects that help deal with this. The first ones here are SPIFFE and SPIRE. These sibling projects help uniquely identify and securely authenticate systems in a federated manner, so across environments, across organizations, those kinds of things. They can use TPM attestations or other types of hardware attestations to attest to machines being in a known good state before they issue credentials. You can take those credentials, share them with other people, and they can validate that those credentials actually came from a healthy system.
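A SPIFFE identity is just a structured URI of the form `spiffe://<trust-domain>/<workload-path>`, which is what makes it meaningful across organizations. As a rough sketch of how a consumer might pull the pieces out of one (a hypothetical helper for illustration, not part of any SPIFFE library):

```python
from urllib.parse import urlparse

def parse_spiffe_id(spiffe_id: str) -> tuple[str, str]:
    """Split a SPIFFE ID into (trust_domain, workload_path).

    SPIFFE IDs look like spiffe://example.org/ns/builds/job/42:
    the host part is the trust domain, the path names the workload.
    """
    parts = urlparse(spiffe_id)
    if parts.scheme != "spiffe" or not parts.netloc:
        raise ValueError(f"not a SPIFFE ID: {spiffe_id!r}")
    return parts.netloc, parts.path

# A build job's credential might carry an ID like this one:
domain, path = parse_spiffe_id("spiffe://example.org/ns/builds/job/42")
print(domain)  # example.org
print(path)    # /ns/builds/job/42
```

The trust domain lets a verifier at another company route the credential back to the issuing organization, even when the workload path only makes sense internally.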
Then we combine those with build systems, right? We want to operate our build system securely, and the build system has to be designed with security in mind. For these demos and examples, I'm going to show some GitHub Actions stuff, as well as Tekton, which is an open source project designed with supply chain security in mind.

Those machines are our VMs in this case. They're monitored when they spin up to get TPM attestations and prove that they're healthy before credentials get issued. This is a multi-tenant system, so we know that we're issuing credentials as finely scoped as we can. If you've got hundreds of builds going on, each build gets its own set of credentials; it's not a global namespace or anything like that. And those are coming from the SPIFFE/SPIRE system.

You have agents installed on the virtual machines, so the agents do a bunch of health checks when they first come up and prove back to the central server that the machine is in a good state, before the server authenticates that node. These nodes run a whole bunch of different workloads, VMs in this case, and a Kubernetes cluster, so they run a whole bunch of different pods. Once we know that the SPIRE agent on each node is healthy, and it's running as root, it can then authenticate and attest to the individual processes running on that machine, and issue credentials for the specific build jobs.

And you can tie these attestations to a whole bunch of different things. If you're running on physical hardware, you can do TPMs. If you're running on cloud providers, a bunch of them provide great technology for this out of the box that's a little bit easier to work with. So we can get workload attestations from each node proving everything was healthy.
So we get these credentials, and they're actually issued as meaningful identifiers that I can use in my internal system and hand to somebody else, and they know what to do with them. They know they're tied back to my internal organization, and then I can look at one and tie it all the way back down to the individual machine. If you're at a different company, the machine name might not make sense to you, but you can tie it back to me, and then I can trace it back to the machine.

This means that there's a lot of YAML, because we have to actually specify what everything is going to do. There are no surprises, but that's great: when we look at it later, we know exactly what happened, and if we look at it before it runs, we know exactly what is going to happen. And then it's got some other cool properties here, like automatic provenance capture and signatures, and other fancier stuff like hermetic builds: you can instruct it to run builds without network access, which prevents a whole other set of attacks.
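For a flavor of what that YAML looks like, here is a minimal sketch of a Tekton Task; the task name, parameter, and builder image are made up for illustration, and real pipelines like the ones in this talk are more involved:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-image              # hypothetical task name
spec:
  params:
    - name: repo-url             # every input the task uses is declared up front
      type: string
  steps:
    - name: clone-and-build
      image: gcr.io/example/builder:v1   # pinned builder image (illustrative)
      script: |
        git clone "$(params.repo-url)" /workspace/src
        # ...build commands go here...
```

Because everything (inputs, images, steps) is declared explicitly, the build system can record it all as provenance with no surprises.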
Then the final one here is the Sigstore project. This issues code signing certificates for free, just like Let's Encrypt does for HTTPS in your browser. The goal is to make it easy to get code signing certificates for free that can be used to sign open source artifacts.

This is stored with some pretty cool transparency log technology, so you don't have to trust the operators of the project. You can audit them and make sure that nothing is deleted and nothing is tampered with. So we're trusting the operators to keep the service up, but we're not trusting them not to tamper with the data, because tampering would be detectable. That's a nice little hack to make sure we have a central place where all this can be found and queried, without having to add another trust element to everything.
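The tamper-evidence comes from the log being append-only and hash-linked, so changing any old entry changes every later hash. Rekor actually uses a Merkle tree, which additionally gives compact inclusion proofs; this toy linear chain just illustrates why tampering is detectable:

```python
import hashlib

def entry_hash(prev_hash: str, payload: bytes) -> str:
    # Each entry's hash covers the previous entry's hash, chaining the log.
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

def build_log(payloads: list[bytes]) -> list[str]:
    hashes, prev = [], ""
    for p in payloads:
        prev = entry_hash(prev, p)
        hashes.append(prev)
    return hashes

original = build_log([b"sig:image-1", b"sig:image-2", b"sig:image-3"])

# An auditor who saved the log's head hash can detect any rewrite of history:
tampered = build_log([b"sig:EVIL", b"sig:image-2", b"sig:image-3"])
print(original[-1] != tampered[-1])  # True: the head hash no longer matches
```

This is why clients only need to remember the latest head hash to hold the operators accountable for the entire history.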
All right, so that was all the pieces broken down, and this is all of it together again. Now we're going to jump into the actual demos. We're going to take some containers, try to look everything up, and prove where it came from.

We're going to first start with the anti-demo. We're going to take this chart and try to trace everything back, and see all the places where we have to do guesswork and where bad things could have happened.

So we're going to tab out of here. The first one I'm going to pick on is this middle one here, the jimmidyson configmap-reload. I know Jimmi, I've worked with him before, but this is a cool little utility he wrote a long time ago to reload a Kubernetes pod when a ConfigMap changes, because it doesn't do that by default.

So if we grab this one here, we've got v0.5. Type that in here, and we find a GitHub repository. There's some source code; it's pretty simple. It looks like it's Go (sorry, not Python), and this is here. It's got some great scores and everything, but we really have no way of knowing that the container image that's running in our cluster actually came from this GitHub repository.
Yeah, so this is all coming straight from the GitHub repository; this is the readme here. But again, the owner of an image on Docker Hub can put anything they want here. Unless you're using Docker Hub's automated verified build feature, where Docker Hub actually pulls the code and does the build for you, this is just a best guess. So this isn't really something safe that we can rely on.

So this isn't even built from scratch here, right? This is from busybox, which is another image (sorry, yes, awesome, thank you!). So if we read this Dockerfile here, it's parameterized a little bit, so it's hard to tell exactly what this would build if we built it, because these are all arguments. But this base image here, FROM base image, comes from right here, so it can be overridden when it's built.

We don't know which one this happened to grab at the moment it was built; this stuff changes constantly. This is part of the actual official Docker Hub images program, but those change constantly too. So we don't know which CVEs were in the version of this that was built, because we don't know which version of it was built. Any one of these steps could have had an accident happen, or an attacker could have tricked them into using something old, or they could have just forgotten to update it.

That image was already six months old when we looked at it on Docker Hub, and that was just one of the, you know, six or seven images here. So basically, from the start, we can't actually figure out where that image came from; we can just kind of do our best guess about it. And even if we trust it, we also can't figure out where the dependencies in that image came from, because the problem just keeps applying down the chain.
So I've got another image, though, that I built using GitHub Actions and Tekton, with signed provenance at each step, stored inside of Sigstore. I'll show what this looks like a little bit generically. You've got two images; this could be three images in a chain, this could be ten images in a chain, but we're starting from scratch. We have a base image that's got some code in it, contributed by one person.

I'll full-screen again real quick before we jump back out. So there's the base image, with some code in it. The commit is signed by one person, and it is built with Tekton. This happens to be a distroless container image, which is built from scratch. The commits were signed, it was built in a cluster with all of those features turned on in Kubernetes, and it was published. And then I have a sample GitHub repo with another image.

It's this hello-world Go app that was built with the tool called ko; we'll look at that code in a minute. It's got some code in it as well, and it is layered on top of this base image. It was signed by another person, and it was built in GitHub Actions, using some of the OIDC features to sign the commit and sign the final image. So we can actually start from the image itself and trace all of this back.
I'll start over here. Here is the repo itself, where all the code came from. It's very simple (I made this too big): a hello-world Go application, about as simple as it can get. If you run it, it just prints one string. It is published in the GitHub Container Registry, and it was built using GitHub Actions. We can see the run here that actually worked (plus a whole bunch of failures), but we're going to start with just the image, and we're going to trace all of that back using the Sigstore transparency logs.

So here's the JSON that references a bunch of tarballs. This is what came out of the GitHub Action, and we normally reference this overall thing by its own SHA-256 digest. So that's what we're going to look up signatures for, using the cosign tool to look for and validate any signatures. We're going to print those out too.
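The digest doing all the work here is just a SHA-256 hash of the artifact's bytes, which is why it can serve as a global, tamper-evident name. A quick illustration (the manifest content below is a stand-in, not a real image manifest):

```python
import hashlib

# Any artifact (an image manifest, a tarball, a binary) can be named by
# its content digest; change one byte and the name changes completely.
manifest = b'{"schemaVersion": 2, "layers": []}'
digest = "sha256:" + hashlib.sha256(manifest).hexdigest()
print(digest)
```

Signatures and transparency log entries are then keyed by this digest, not by a mutable tag like `:latest`, so looking up the digest finds everything ever attached to exactly these bytes.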
So we can jump over here. You can see the actual action run: that's here, that's the name of it, and this is the commit that was here. So now we have cryptographic proof that this image was built at that commit, in this exact action run, and this is all stored in a transparency log. So we verified it; we can see the timestamp when it was entered and all of that cool information.

But now let's look at the commit itself too, because I signed that commit myself and also put that in the transparency log. So we're going to take this commit number, do another verify, do a search in the transparency log, and type that commit into bash. Good thing I have all this saved. So we're going to look.

This command uses the Rekor CLI to look up any signatures, any data in the transparency log, tied to that SHA-256 entry, and we found one entry here. I could have signed it multiple times; many people could have signed this, because it's a global transparency log, but there's only one in here. So we're going to look at that.

We want this one, so we're going to see a bunch of data dumped out here, and we can decode it now.

Here we go, yeah, I've got the whole command now. So we're going to do a whole bunch of jq parsing magic, pipe this through openssl, and we're going to see the actual code signing certificate that was used to sign this. And you can see the subject here is me; that is my email address, because I signed that commit.
So I signed the code, it was built automatically in GitHub Actions, and we found the actual action invocation that was used to build it. So we started backwards from an image and we found quite a bunch of information about it, but we can keep going now. We can look at that image again and find the base image that I mentioned before.

There's a bunch here, because the distroless images happen to employ another cool technique called reproducible builds. This turns out to be pretty hard to do in Docker images, so it doesn't happen often, but these ones are reproducible. So since they're built daily, we get hundreds of entries with the same digest, unless something actually changes. So there are a lot of entries here; we'll just look at the bottom one. We can see the subject here: this is an actual service account that is tied to the machine.

We can find a whole bunch of other information in here in the signatures, but the cool little hack I mentioned around reproducible builds means this was built in a couple of different systems where we get the same digest each time. So we found one valid signature for the service account, but what else is in the transparency log? Let's do a... let me find the digest first. Go back here. Cool, so this is the digest.

Here we go, yeah. So this is a provenance statement explaining exactly how it was built, by what tool versions and all that stuff, so we're going to decode that with a bunch of jq. Awesome. So this is one that was built from Tekton, using the workload credentials from the SPIFFE/SPIRE system, and we have a payload here showing exactly what happened. This is captured by the build system itself, so it's not something that was generated inside of the build script. If the build script was compromised, this wouldn't be affected.
A
This is the build system logging exactly what source was fetched before the build was kicked off and what artifacts came out at the end. There's a bunch of images built at the same time in here, so there's a lot, but we can see the date it was built, four days ago according to the build system, and we can see the source code repository it was fetched from and the exact commit that this came from.
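Once the provenance payload is decoded, the interesting fields fall out with a few jq queries. A sketch against a toy payload shaped loosely like an in-toto/SLSA provenance predicate (the field names are illustrative, not the exact schema Tekton Chains emits):

```shell
# Toy provenance body: who built it, when, and from which source material
prov='{"predicate":{"builder":{"id":"https://tekton.dev/chains/v2"},
  "metadata":{"buildStartedOn":"2021-10-11T00:00:00Z"},
  "materials":[{"uri":"git+https://github.com/example/app",
    "digest":{"sha1":"abc123"}}]}}'

# Who built it
echo "$prov" | jq -r '.predicate.builder.id'
# When the build started
echo "$prov" | jq -r '.predicate.metadata.buildStartedOn'
# The exact repo and commit that went in
echo "$prov" | jq -r '.predicate.materials[0] | "\(.uri)@\(.digest.sha1)"'
```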
A
We can also see some other cool stuff, like the exact tools and versions that were used as part of that build process too. So this was one step here: you can see the exact bash script that was kicked off. This happened to use the golang official container image in one of the build steps, so we have more than just the base images; we have the actual versions of all the tools that are used in here.
A
We built a bunch of different things in the same build, because we built a bunch of containers, so we can see all the other ones that came out at the same time and their digests, and jump around in that graph by searching for all of these things. If we come over here, we can look at that commit back in the GitHub UI.
A
Yeah, so that's the latest one. I'll type that one in so we can see. Yep, there was this commit, some w11 builds got updated, and this is the automated build that kicked off that run. Now, I think I signed that commit too, so we can do that same trick that we did before.
A
We take that commit and look for signatures associated with it. No, okay, I didn't sign that one I guess, but I could have signed that commit and that would have been reflected in here too. So we traced the one image back to the source code it was built from, and then we actually traced it back another level from a completely different build system. So the first one was built in GitHub Actions.
A
The second one was built in a Tekton installation running in a Kubernetes cluster maintained by a completely different team, and then we actually traced that back to the source code and tool versions used here as well. So all this metadata is available; anybody can go and query it if you happen to know the right incantations on the command line.
A
C
A
Yeah, good question, so I'll repeat that back for the people online; I got a reminder, yeah. So the question was: we mentioned we can get hardware attestations, but how low can we actually go? Because there's always a turtle below the one that you're thinking about at any given time. The cool part of the in-toto and SPIFFE/SPIRE projects is that a lot of those mechanisms are pluggable.
A
If you're running in a cloud environment, then you just get whatever the vTPM or workload attestation injected automatically is. But if you want to configure your own TPM 2.0 hardware attestation, with your own certificates and your own remote attestation system, you can do that too. You get the same style of credentials; you can just define and configure your own policy and write your own plugins for as deep as you want to go. I'm sure there are levels to that.
A
That I don't quite understand or grok, though, and some hardware experts can back it up a little bit more. But if you have something like one of the new FIDO2 tokens (we've got some to give away, if you want, for signing your own commits), and then you've got that hardware stuff as deep as your cloud provider or whatever infrastructure you happen to be running has as well, then you can get pretty low.
C
Do you need to make any changes to the toolchains that you used to build, like you mentioned having trusted tools?
A
Yeah, so that question is: do you need to make any changes to the toolchains you used to build? You don't need to. I think the biggest change is actually in build systems themselves, and if the build system that you're using doesn't have a built-in way to capture that provenance out of band from the build script that it's executing, then you can only get so far.
A
C
You do need to change the build system, whatever your scheduler is, out of the steps of the build process itself. How does that compare to something that changed?
A
I think that's one way you could think about it. I like to think about it more in terms of which components you trust and which components are dynamic as part of a build, and you can apply it at either level. If you're working on an open source project and somebody can send you a pull request that runs build.sh at the top level of the root repo, you probably wouldn't want to trust the output of that script, because the person can change it when the pull request gets kicked off.
A
But if you have your own build.sh that's checked in somewhere else, that's audited by 10 people and that can do those things, then you might trust that version of the script. The same would apply to the toolchains: if you're going to run a compiler that has instrumentation built in, as long as that compiler is stored somewhere else and is itself built securely and signed and does all this cool stuff, then you can trust that.
A
So it's really about what part one developer on your team, or one person on the internet, has access to modify as part of the build. You can draw a lot of different circles yourself around how you define those boundaries, but in general, the lower level you can make those integrations, the better. Compilers know which files they fetch and they know what outputs come out; it's way easier to do it at that level.
A
Yeah, so the question was: there were a lot of comparisons I had to do there, a lot of hoops I had to go through and look at and inspect; could you automate a lot of that by having a trust policy? Yeah. It's a whole bunch of JSON and a whole bunch of YAML that nobody wants to look through every time they want to do a deployment.
A
I like to keep those concepts orthogonal. We get as much data as we can, generated in a trustworthy way, and put it in places where you can access it when you want to make those policy decisions. There are a lot of different great policy engines that you can use to configure which things you trust and which repositories.
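The simplest form of such a policy is an allowlist check over fields of a provenance statement. A toy sketch of that gate follows; real deployments would hand this to a proper policy engine, and the JSON shape here is illustrative, not an exact schema:

```shell
# Toy provenance naming the system that performed the build
prov='{"builder":{"id":"https://tekton.dev/chains/v2"}}'
builder=$(echo "$prov" | jq -r '.builder.id')

# Admit the artifact only if the builder matches a trusted prefix
case "$builder" in
  https://tekton.dev/chains/*) echo "policy: builder trusted" ;;
  *) echo "policy: rejected"; exit 1 ;;
esac
```

The point is that the verification data and the trust decision stay separate: the log holds the facts, and each consumer writes their own gate over them.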
A
Yeah, so the question was: what's the amount of overhead to hook all this up for a new project? If I'm using something like GitHub Actions as the build system, not terribly high, right? The cool part of this is that the improvements have to come in from build systems, and if a build system rolls out improvements, then a lot of it can just apply to everyone using that build system without you having to do a whole bunch of manual work on your own.
A
So it is a lot of work if you're using, you know, the meme from before, a whole bunch of unpatched Jenkins servers; then you're going to have to patch them to install different plugins, which is its own nightmare. But if you're using something managed like GitHub Actions, when they roll out new features, theoretically you should be able to take advantage of them without even thinking about it. So, especially if you're starting with something new, it's really easy.