Description
Building a secure software supply chain is no easy feat. SolarWinds showed us that even the experts have a difficult time. This talk gives an overview of what's required, including ingesting external dependencies, attestation of the build infrastructure, signing artifacts, SBoMs, reproducible builds, and admission controllers. We'll also look at some of the key projects in this space being developed within the CNCF and Linux Foundation.
So first, who am I? I'm a solutions architect with BoxBoat (they're now an IBM company), and I help clients transition into the container ecosystem. In my downtime, I answer a lot of questions over on Stack Overflow, and by helping spread knowledge about Docker through those answers and through presentations like this, I got added into the Docker Captains program a while back. So let's dig in. Enough about me.

I want to talk about what supply chain attacks are, but first, a quick recap: attackers have a lot of different methods they can use to attack our environments. They can do physical compromises, and they can do phishing or social engineering attacks; all of those are very common. In addition, they can find security holes that someone hasn't patched yet, something like the Equifax example. More difficult to defend against are attackers using something like a zero-day or a malicious insider. We're not going to cover any of that today. What we're going to look at are supply chain attacks.
So what are they looking for? They're looking for that soft upstream target, rather than compromising the target directly. In software, instead of attacking the production server, they're going to attack the build servers or the upstream dependencies that you're pulling into your project. Consider the XKCD cartoon — I think this came out shortly after the OpenSSL Heartbleed attack — where we're looking at just the one lone developer that is maintaining some project that everybody else depends on.
If you can attack that, the whole house falls down. You've seen a lot of this in the news lately; I'm talking about this along with a whole lot of other people. Recently, we've seen dependency confusion attacks. Researchers were looking at some repositories out there, at code from different organizations that were sharing their code — maybe they'd open sourced something, maybe they were demonstrating it in a presentation — but there was no public repository for one of the dependencies in that code. With dependency confusion, the tooling wasn't sure which dependency it should have pulled in, and it pulled the wrong one in. That was one popular attack that came along a while back.
Another set that we've seen recently have been attacks on tooling like npm or similar package managers, where they're replacing the popular package that people have been using with a vulnerable version: they just release a new version that contains a trojan or some kind of undetected vulnerability in there that the attackers can use.
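Dependency confusion attacks in particular are commonly mitigated by pinning where internal packages may come from. A minimal sketch for npm — the scope name and registry URL here are made-up examples:

```ini
# .npmrc — route the internal scope to the private registry so a
# same-named package published to the public registry can't be substituted
@mycorp:registry=https://registry.internal.example.com/
registry=https://registry.npmjs.org/
```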
And maybe it's a long-play attack: the attacker planned this all along and just waited long enough for enough people to use the product, and then they turn bad. A lot of times what we're seeing is an open source developer that's just overworked; they've got too much to do, and they add a new maintainer. Someone comes along and just volunteers to help out, and after enough time they say, okay, we'll give you permission — and that was the whole goal all along. The person found some overworked developer and just volunteered enough until they got the access, and then they started shipping some bad stuff in there that just went unseen. The other way this can potentially happen is that the developer themselves just gets hacked. It could be a targeted attack, or it could just be that they reused the same password in two places; the attackers found the password, got into the git repo, and started submitting stuff.
Additionally, what we've been seeing lately is the publicity around the SolarWinds breach, and that one went after the build infrastructure itself. The attackers found the build servers and were able to get in there and compromise those. So the build servers themselves signed the result as that trusted vendor, and those builds were then distributed and deployed to a lot of customers' environments.

Can you really say for sure that you aren't already compromised right now by a similar attack? I think a lot of us want to say yes, but realistically the answer is probably no. As a result of that, we've seen the White House issue an executive order saying we need to improve our supply chain security, and that came with a lot of dollar figures behind it. Because of that, you see pretty much everyone in the industry suddenly jumping after this.
And they're saying: yes, this is something we want to help fix — partially going after the money, but partially just because it is a huge issue that we all see and know we need to fix. But this isn't new. This goes all the way back to 1984, when Ken Thompson asked: how can we know that we can trust the compilers themselves? Because the compilers are compiling themselves; they compile the code and they give you a binary.

If that binary saw that it was compiling the compiler, it could inject some malware, and you would never know: the output binary recreates its own vulnerability every single time it gets rebuilt. It could also see that it's compiling something like — well, back in the day it was the login command, but today it might be something like the sshd daemon — and say, I'm compiling that, let me just inject my backdoor into that as well. And the source code looks fine.
What we are trying to solve in the supply chain, though, is how we can best harden our environment. To do that, we start by validating all our inputs. That's going to be something like a two-person rule on every commit: you've got to have someone verifying and approving every one of those pull requests. You might have a scanner or some other vulnerability analysis tool looking at all the different external libraries that you're ingesting, and then you copy those into your local environment in a secure way.
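A minimal sketch of what "copying into your local environment in a secure way" can look like: record a digest when a dependency is first vetted, and reject anything that doesn't match on later downloads. The payloads here are toy stand-ins for real artifacts:

```python
import hashlib

# Digest recorded when the dependency was first reviewed and vetted
# (derived here from a toy payload rather than a real package).
PINNED_SHA256 = hashlib.sha256(b"vetted dependency contents").hexdigest()

def verify_artifact(data: bytes, pinned: str) -> bool:
    """Accept a downloaded artifact only if its digest matches the pin."""
    return hashlib.sha256(data).hexdigest() == pinned

# A re-download of the identical artifact passes; anything altered fails.
assert verify_artifact(b"vetted dependency contents", PINNED_SHA256)
assert not verify_artifact(b"tampered dependency contents", PINNED_SHA256)
```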
You're also going to want to harden your build infrastructure, and people throw that out there as if it's an easy problem, but it's not. It's that lower-turtle problem: you can only go so far down before you get to the point where you just can't solve it.

If you're solving this with a hardened build environment, maybe that's a container image you lock down so it can only compile exactly what it's supposed to. But now you need to make sure your continuous integration system itself is secured. You need to make sure the orchestrator that's running that continuous integration system is secured. You need to make sure the operating system that's running the orchestrator is secured. Every one of these things has a lower level below it.
The next area I want to talk about here is verifying the build process itself: how do we know that you did what you said you did? Then, once you've done that, you need to sign your output, and maybe that is a long-term signature or a transparency log — something there so you can say "I did this," and other people can know that it's verifiable.

You want to distribute your artifacts, and a lot of what we're looking at today is trying to move this into container registries. So instead of shipping the signature, the SBOM, and the other attestations — a lot of the things we're going to talk about today — separately in a whole separate environment, can we merge those together with the containers into the same repository, so they ship together in some way? We're looking at different methods to do that. And lastly, the admission controller.
None of this does any good if, when we go into production and run our code, we don't check any of this stuff, and so the admission controller needs to go through there and verify that all of this has been signed. But it also has to do harder things: preventing rollback and replay attacks, and all those dependency confusion attacks. Maybe you have a malicious mirror out there you need to prevent. And if you handle something like revoking a key or an individual signature, that gets a little difficult, especially if you're trying to connect to some central revocation server.

What happens if you can't reach it? Do you fail open? In that case you're insecure; you're potentially letting vulnerabilities go through. Or do you fail closed? If you do that, then you're going to create some outage downstream. It's one of these lose-lose situations; there's a bad answer on both sides of that.
The first step here is the software bill of materials. Think of this like your ingredients label: this is what goes into your application. It's going to be the name of the app that you're building and a unique version number, so you can track exactly what application was deployed; but, more importantly, it's all the different libraries and other components that went into your application. We've got a couple of competing standards out there. SPDX came from the Linux Foundation.

They were looking at licensing first, adding security later on, because what they were originally concerned about when they created it was: if we have something like GPL code that gets injected into an Apache 2 project, that's going to corrupt the licensing; it's going to create some compatibility issues that someone may not have intended. So that was where they started from. CycloneDX, though, came from OWASP.

They are interested in security first and foremost, and so all the things that are required in theirs were security-based, and then they added licensing later on as an optional field. So what we're seeing is that each of these started with the mandatory things that they were focused on and then started adding optional capabilities later on that people can, but don't necessarily have to, include in their SBOMs.
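For a sense of what these documents look like, here is a minimal CycloneDX-style JSON fragment. The component name, version, and purl are made up for illustration; real SBOMs carry many more fields:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "openssl",
      "version": "1.1.1k",
      "purl": "pkg:generic/openssl@1.1.1k"
    }
  ]
}
```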
Hopefully we're going to see these two specs merge, or maybe one or the other is going to win out; it's unclear right now, so that's still being played out. Importantly, you don't just want the SBOM for what you're creating; you want the SBOM for your build infrastructure itself. Were you building with a potentially vulnerable compiler out there? You need to know about that. You're going to need the SBOM for every one of your dependencies, and then those dependencies need to include the SBOMs for their dependencies. This goes iteratively through all the different chains, so hopefully you can get back to all the different environments that you're pulling in. Additionally, if you're running this inside of a cloud or a software-as-a-service, you're going to want to make sure that you've got this for your runtime infrastructure as well. Are you vulnerable to something like a Spectre or Meltdown?
Then, how can we push the SBOMs up to those container registries? Those SBOMs are going to be pretty much static; they're hopefully reproducible, so every time you run the build in the same environment, you should get the same SBOM out. But the vulnerabilities that we can scan for based on those SBOMs are going to change over time, and so we need a vulnerability scanner to look at the SBOM. To do that, we need to distribute it again, and we need to make sure that we've got a scanner that can read it. That's an ongoing challenge that's still under development right now: we've got a lot of tooling to create SBOMs, but we don't have a lot of tooling to use them. So that's one of the key areas that still needs to be developed. Next, attestation.
One of the questions here is: how can we verify the truth and authenticity of what we do? How do we know we did what we said we did in there? We've got a couple of key tools that I'll look at right now. One of them is in-toto. They've got a syntax for giving an attestation for every step that you perform. So you run your step; then, when you get to the final step, you can take all the various attestations that you've received from the individual steps and double check: did all the outputs from the first step match the inputs to the second step, and so on? That helps you detect if there's a man in the middle. It doesn't help you detect, though, if you have a malicious build node, because the party on that machine with the private key that you use to attest your code could also be the attacker.
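The outputs-match-inputs check can be sketched in a few lines. This is a simplified illustration in the spirit of in-toto's link metadata, not its actual format; the step names and artifacts are made up:

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical step records: each step lists digests of the artifacts
# it consumed ("materials") and produced ("products").
steps = [
    {"name": "fetch",
     "products": {"src.tar": digest(b"source code")}},
    {"name": "build",
     "materials": {"src.tar": digest(b"source code")},
     "products": {"app.bin": digest(b"binary")}},
    {"name": "package",
     "materials": {"app.bin": digest(b"binary")},
     "products": {"app.img": digest(b"image")}},
]

def verify_chain(steps) -> bool:
    """Check every consumed artifact matches what the prior step produced."""
    produced = {}
    for step in steps:
        for name, d in step.get("materials", {}).items():
            if produced.get(name) != d:
                return False  # mismatch => possible tampering between steps
        produced.update(step.get("products", {}))
    return True

assert verify_chain(steps)
```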
The attacker that's on that machine, with your credential to sign, can say you did something good even if you did something bad. So to solve that, we go to the next step here, which is SPIFFE, along with its implementation, SPIRE. What they're providing is short-term keys to attested agents and the workloads running on those agents. And what does that mean?

The agent itself is going to prove to the server, with something like a hardware TPM or some kind of cloud API, that it's a node that should be trusted, and so it receives a bunch of credentials for all the different workloads that can run on that agent, on that node. And the workloads that are running on that node are going to contact their local agent and say: hey, I'm a trusted workload running on you, can you give me my credential? And we check:

if so — yes, okay — give it that certificate, and now it can take that certificate, or a JWT, and use that in a tool like in-toto to say: I am somebody you trust; here are my credentials. And then we know that workload is running on a trusted machine, hopefully a hardened machine, and we can verify the steps using a combination of tools like this. So it's not just one tool; we need to look for ways we can integrate these things together to give you a strong environment. A similar project is Keylime.
With Keylime, which I'll mention here at the end, they're looking at how to integrate a TPM or something like that to give you hardware-rooted cryptographic trust of a remote machine. What that is saying is: how do we know that remote machine is who you really believe it is? Well, we can use the TPM on their machine to give you that kind of hardware-based remote trust. So we've now verified that we're running what we think we're running; we made that claim of what we thought we did. Now
we need to sign it. And to sign it, we need a couple of things. One is we're going to need a key, and to get our signing key — hopefully we don't have the signing key somewhere public where it can be taken — we're going to look at things like Parsec, which gives you APIs to access a hardware security module or a TPM or something like that. The other one I've been looking at lately has been Vault.
Another key tool here in the signing space — it kind of merges together a little bit of getting the key along with signing the data — is TUF, The Update Framework. They have a whole framework for how you can push software updates. They're big in the automotive and a couple of other industries out there, because they can prevent things like rollback, replay, and fast-forward attacks by the way they've designed their key management as kind of a hierarchical set of keys. They've got one root key, but they've also got a targets key; they've got a snapshot key to collect a whole bunch of targets together and say this is the current set; and then they've got a timestamp key that says this is a short-term thing we're constantly iterating on. That helps them handle something like a revocation: you just publish a new snapshot without the thing that you want to revoke in it, and then when the timestamp expires, someone has to get a timestamp on the new snapshot.
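The expiring-timestamp idea can be illustrated with a toy check. This is a simplified sketch of the concept, not TUF's actual metadata handling:

```python
from datetime import datetime, timedelta, timezone

# Simplified version of TUF's timestamp role: clients reject metadata
# whose short-lived timestamp has expired, so a mirror can't replay an
# old (revoked) snapshot indefinitely.
def timestamp_valid(expires: datetime, now: datetime) -> bool:
    return now < expires

now = datetime(2021, 6, 1, tzinfo=timezone.utc)
fresh = now + timedelta(days=1)   # timestamp on the current snapshot
stale = now - timedelta(days=1)   # timestamp on a replayed old snapshot

assert timestamp_valid(fresh, now)
assert not timestamp_valid(stale, now)
```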
So they've got a solution out there that was used heavily in Notary version 1, but Notary v1 wasn't the greatest implementation of it. In addition, it was giving you an alternate view of the tags: as soon as the upstream vendor stops signing something, you would keep running the signed code — but maybe you're running some really old, vulnerable code, because nothing says it's been patched, and so you keep running this whole old version of "latest." There was also trust on first use: we see that with SSH, but when you're talking about containers, you don't have that interactive user there to verify the remote machine they're talking to. And when you get into containers, you also see a lot of ephemeral environments, and those ephemeral environments mean trust on first use is trust on every single use. So it doesn't matter who signed it: as long as someone signed it, we trust it, and that's not good for preventing attackers. So we're going to try to solve a bunch of those problems in version 2.
A
we're
also
trying
to
move
the
signatures
out
of
a
separate
notary
server
and
get
them
into
an
oci
registry.
So
they
can
be
shipped
alongside
the
artifact
that
we're
signing
to
do
that.
There
are
some
challenges
in
there.
We
need
to
work
on
how
to
change
oci
a
little
bit
I'll
talk
on
that
in
just
a
second
here,
but
yeah.
It's
going
to
be
a
challenge,
but
if
we
can
do
that,
that
means
that
we
can
now
copy
these
artifacts
between
registries.
A
That
would
be
really
nice
because
then
you
help
support
the
disconnected
environments
that
have
their
own
local
registry.
This
is
still
in
the
design
prototype
phase.
I
saw
a
recent
alpha
one
release
from
them,
so
they're
they're
trying
to
get
some
more
visibility
on
what
they're
working
on
so
very
much
under
development.
Something
to
look
at
a
little
bit
further
along
in
development
is
the
cosign
project.
That's
under
six
store
originally
started
by.
It was originally started by, I believe, a handful of Googlers, but the people running that one now have, I think, gone off into a company called Chainguard. What they're looking at is pushing the signed data as a separate tag. So instead of requiring the registry to change some of its API calls, they're working within registries as they are today and trying to fit this into the model that's sitting out there. I feel like their 1.0 release was a bit rushed: they say they're stable, but they're still changing things like what the signature contents look like. Maybe they want to change the signature envelope; maybe they don't have the best workflow for multiple signatures. So some things there might still need to be developed. There's a lot of work still happening on this side, but it's probably further along than anything else that I've seen so far.
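As a hedged sketch of that workflow — the image name is a placeholder, and exact flags vary across Cosign releases:

```shell
# Generate a signing key pair (writes cosign.key / cosign.pub)
cosign generate-key-pair

# Sign an image in the registry; the signature is pushed as a separate tag
cosign sign --key cosign.key registry.example.com/myapp:1.0

# Verify the signature against the public key
cosign verify --key cosign.pub registry.example.com/myapp:1.0
```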
The other project coming out of that Sigstore project was Rekor. Cosign can integrate with this to give you a transparency log, and they've got a public instance of it. It's not so stable right now: they've got no SLO at all, really, and they even have a notice on there that says they might periodically reset it. So if you're going to use it, you're probably going to want to self-host it — and if you're self-hosting it, just realize you're self-hosting something that's still very much under development.
Next is the OCI, which defines the way that we build and ship images today. They're a well-known standards body that's trying to maintain the specs for the image and distribution formats. Originally they were defining standards from what was already created, and so now they have the challenge of taking what was already created and trying to define a new creation on top of that. We've looked at the artifact spec: how do you push some of these things that aren't images into a registry?

It's not supported by a lot of registries with what they've got right now; they kind of extended the image spec to do that. So you've got something that isn't really an image that needs to be pushed to a registry in the format of an image, and they're looking at ways that maybe they can change that a little bit, so it doesn't have to have that syntax of the config and a whole bunch of layers in there, but can have a little bit more freeform syntax — and hopefully we can get some more support in registries when we do that. They're also working on reference types; I mentioned that some of the stuff in Notary v2 is depending on this.
What we're trying to figure out is: can we push something to a registry without mutating the image — because the image itself has that immutable digest on it — but push a second object next to the image that has a reference back to your image? Then you need some kind of API where you can query the registry and say: tell me all the things that have this back-pointer to it. That's going to be a little bit of an interesting challenge there.

We're going to have to get registry support to add that functionality, and so that's still TBD, still under development right now. They've got a working group they recently created — or I think they've created it; it's still in the voting stage right now, I think — so it'll be interesting to see what comes out of that. Also on this list here that we're talking about is OPA Gatekeeper.
There are other kinds of admission controllers out there — there's even a specific admission controller just for security, just for verifying signing — but OPA has kind of got the ecosystem behind it. They've got their own little language in there, Rego, that you have to learn, and they've got a bunch of stuff out there that's really stable. I listed it on here not so much because of what they've already got, but because of what they're going to need to start adding for the image signing. There's going to be a lot of functionality that gets added into this as we get all these other projects finally production ready; we're going to need the updates on the OPA side as well.
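To give a flavor of the policy language, here's a hedged Rego sketch that rejects pods pulling images from outside a trusted registry. The package name, input shape, and registry URL are illustrative, and a real Gatekeeper policy would be wrapped in a ConstraintTemplate:

```rego
package k8s.imagepolicy

# Deny any container whose image isn't from our trusted registry,
# where signatures have already been verified on push.
deny[msg] {
    container := input.review.object.spec.containers[_]
    not startswith(container.image, "registry.example.com/")
    msg := sprintf("image %v is not from the trusted registry", [container.image])
}
```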
I want to point out a few other related projects and groups. One of those is the CNCF Security TAG. They've got a working group based on a lot of the work that we're talking about today; originally it was created for a white paper on secure software supply chain best practices, and that white paper was looking at all the different best practices. That's where a lot of the content of today's presentation came from. Another big group playing in this space is the OpenSSF, from the Linux Foundation — the Open Source Security Foundation, that's what the SSF stands for. They've got a whole bunch of groups looking at how they can improve security, and I think some of those were based on when they had the OpenSSL vulnerability out there, the Heartbleed vulnerability; I think that got a lot of momentum going on the Linux Foundation side.
One last area to look at is SLSA, which comes out of Google. Google has been doing a lot of pushing from the security standpoint, and they are looking at how they can document tiers for all the different projects out there — tiers of the provenance and the security of the artifact you're creating. They start with the first tier, where you're just verifying that you've scripted your build and that you define all your dependencies, and you go up to the fourth tier they've got, which says: let's make sure there's a two-person review on every commit, on all the different code changes in there, and that it's done within a hermetic and reproducible build environment. I think there are a lot of people that aren't even close to that tier four.

But we want to document where you are in between, and so that's what we're looking at from that. If we can start documenting all of our attestations and the provenance of what we're creating, then hopefully we can start extending that to all the different dependencies we have, and so we can start to say: okay, this thing we're releasing to you comes with a tier whatever; and now we know we're only going to approve stuff that has a certain tier within our production environment.
So that's what we're looking at getting to. I mentioned reproducible builds there, and that is a very key area. A lot of what we've been talking about is kind of the top line here, where we're saying: take a normal supply chain and harden it as best as possible; make this thing so that no one can go through and change it, if at all possible.

In addition, when you run a build a second time, you're going to have something like a timestamp, or perhaps some kind of externality in there: you do an apt-get update in your package build and you're going to start pulling the latest whatever from outside. That's not reproducible. So how do we make some of these things reproducible? It's non-trivial. We've even got things like host dependencies in there: you might inject a host name or some kind of build path or something like that into your code.
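A hedged sketch of what nudging a standard Dockerfile toward reproducibility can look like. The digest and package version below are placeholders, and pinning alone doesn't solve everything — the package index itself still moves over time:

```dockerfile
# Pin the base image by digest, not by a mutable tag
FROM debian@sha256:0000000000000000000000000000000000000000000000000000000000000000

# Pin exact package versions instead of whatever is latest at build time
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl=7.74.0-1.3+deb11u1 \
 && rm -rf /var/lib/apt/lists/*

# Normalize embedded timestamps for build tools that honor this variable
ENV SOURCE_DATE_EPOCH=1609459200
```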
None of that can be in there if you want to be reproducible, so it's a non-trivial problem to create that. But if you do — and like I was saying, there's always this lower-turtle problem; you're always depending on the cloud here, depending on something else — if you can create a reproducible build in a separate environment, managed by separate people in different organizations, then you have that isolation between them. In a normal supply chain, if someone got into the build infrastructure and changed out your compiler, you might go through and say: yeah, that looks good, that's a green build, give it a signature, deliver it; everything looks great. If we can do it with a reproducible build, though, we can see that these two things are different: something didn't match between the two outputs, and there's something to look at.

We need to figure out why those things didn't match. And in the open source world, that's even better: if we can do reproducible builds there, those builds might come from completely different organizations, and it gives the attacker a really hard problem. They have to attack every single builder out there to keep their thing undetected.

So that's what we want to try to get to. That would be a great solution if we can get there, but like I say, it's non-trivial. There's a whole website dedicated to this: reproducible-builds.org.
There are a handful of projects out there that are working on this: there's Nix and Bazel and buildpacks and all these other things. The challenge I have looking at them is that a lot of them don't use something like the Dockerfile; they require you to change your build to use their tooling, instead of making their tooling work with your build. So some of the stuff I've been looking at lately is: can we take something like a standard Dockerfile and make that reproducible?

That's been the challenge that I've been taking on in my free time — whatever little free time there is — to try to do this stuff. So it's a big problem; if you're interested in helping out, feel free to reach out to me. So, let's wrap up. That was a lot of text, a lot of talking about what's involved here.
If nothing else, maybe we got a few extra contributors that want to help out on that one. Also, unlike some of the orchestration and service mesh wars we've been seeing from other places, there are going to be a lot of winners coming out of this, so have a look and help out as best you can.

So that's what I have to talk about. This presentation is up on my GitHub repo; that QR code will take you there, and I hope that from there you'll find something and you'll come and help out with a project. If you have any questions, feel free to hit me up on Twitter. If you're watching this one live, I have hopefully been answering some questions in the chat alongside you. And if you saw any typos or something else in the repo here, feel free to open up a pull request over on GitHub.