Description
Securing the Supply Chain with Witness - Cole Kennedy, TestifySec
Witness is a new open-source modular framework for supply chain security. Witness works by making collections of attestations that are bound to the CI process. These attestation collections give administrators trusted selectors on which to enforce policy, no matter where the policy enforcement point is. Witness is an implementation of in-toto and integrates with cloud-native security tools such as Rekor, SPIRE, Cosign, and Kubernetes. In this talk we will describe the Witness trust model and offer a demonstration of an implementation in a CI pipeline.
My name is Cole Kennedy. I am the CEO of TestifySec. We were founded about seven months ago, really to solve this problem around supply chain security with respect to software vendors and other compliance concerns.
As we solve this problem, it's not just me: we have a great team of experts, and we have a lot of plans coming up. But I'm here to talk to you about Witness. We started working on Witness after KubeCon last year. Mikhail and I sat down and really looked at the complexity of solving this supply chain problem.
How do we verify that artifacts are what we expect them to be? To answer that, we created this attestation framework. It implements the in-toto spec, including ITE-5, ITE-6, and ITE-7. Mikhail and I worked together with the community on some of these specs and did some of the implementation in the upstream repositories.
We also use Open Policy Agent. We changed the layout specification from the standard in-toto layout to something a little more flexible; that allowed us to use some tools that have come about since that layout was created. We also have extensible support for different back ends and different types of attestors.
Right now we support Rekor as one of our pluggable back ends, and we have several attestors that we'll go over in a little bit. Really, what we wanted to do was provide a framework robust enough to meet the SLSA provenance requirements and eventually meet SLSA Level 4: to be able to automate the guarantees around SLSA Level 4 and to create policy against those types of constraints.
We sign all the provenance within Witness with either SPIFFE/SPIRE for machine identity, or Fulcio for user identity. That coincides with service-generated provenance: the whole point of Witness and in-toto is to automatically record the inputs and the outputs of a compilation process.
We do that by implementing the in-toto spec's non-falsifiable requirement. That comes down to our ability to use short-lived certificates or short-lived keys, as well as a timestamp from Rekor, to ensure not only that these attestations are signed but also that the private key material used to sign them is protected. And then, finally, we want to make sure that the list of dependencies is complete: we have a full bill of materials of what went into that build, and we get that through the tracing ability within Witness.
So let's go over Witness's trust model. You see the turtle there, because zero trust is really all about finding that bottom turtle and getting rid of it. We do that by implementing SPIFFE/SPIRE to authenticate machine identities rather than using a token. So we use remote attestation to verify the identity of the machines that are doing the build process.
Second, we incrementally establish trust with cryptographic documents. If you're running a build on GitLab on AWS infrastructure, you have two cryptographic documents available to you: you have the AWS metadata service, and you also have the JWT that GitLab provided.
We use those documents, as well as other data available within the system and the process, to create these attestations. Then, as I said above, we use ephemeral, short-lived signing keys to sign them. Signing automated workloads with hardware keys is very, very difficult; by introducing SPIFFE/SPIRE, we solve some of these problems, and we're able to automate the process of signing these attestations while retaining trust and protecting that private key material.
Talking about signers: we actually support multiple signers right now, and hopefully in the near future we'll support more. The way the signing works is that we take all these documents, these attestations, and bundle them together into one JSON file. Then we sign that using the DSSE envelope, and we need some keys to actually do that signing. So while we say "keyless," we actually do receive keys from Fulcio.
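The DSSE envelope mentioned here is a public specification; the byte string that actually gets signed is the Pre-Authentication Encoding (PAE) of the payload type and payload, which binds the two together so a signature cannot be replayed under a different type. A minimal sketch following the DSSE v1 spec:

```python
def pae(payload_type: bytes, payload: bytes) -> bytes:
    """DSSE v1 Pre-Authentication Encoding: the exact byte string
    that is signed. Lengths are decimal byte counts, separated by
    single spaces, per the DSSE specification."""
    return b"DSSEv1 %d %s %d %s" % (
        len(payload_type), payload_type, len(payload), payload)

# An in-toto attestation bundle would use a payload type like this:
encoded = pae(b"application/vnd.in-toto+json", b"{}")
```

The signer then signs `encoded` rather than the raw JSON, which is what makes the payload-type binding non-falsifiable.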
We receive the signing certificate from Fulcio when we sign with those user identities. In the CI process that doesn't always work so well, so we implement SPIFFE/SPIRE to verify the workload identity of whatever the builder container or builder agent is, to make sure that yes, this is exactly what I want to build my binary, and I trust it. And then finally, because we're using those short-lived keys, the certificates are only valid for very short periods of time.
I mean on the order of minutes, or maybe hours, so that workload is probably going to get verified after that certificate has expired. We need a different way to make sure that those attestations were signed during the certificate validity period, and we do that by putting another timestamp on top of that signature. Right now this capability is fulfilled by Rekor: when we upload the attestation to Rekor, that attestation receives a timestamp and is stored on the log for non-repudiation.
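Once a trusted timestamp exists, the verifier's check is simple: compare the signing time against the certificate's validity window instead of against the current time. A minimal illustrative sketch (the function and field names here are mine, not Witness's actual API):

```python
from datetime import datetime, timezone

def signed_in_validity_window(signed_at: datetime,
                              not_before: datetime,
                              not_after: datetime) -> bool:
    """True if the trusted timestamp (e.g. from the Rekor log entry)
    falls inside the signing certificate's validity period, even if
    that certificate has long since expired at verification time."""
    return not_before <= signed_at <= not_after

# A cert valid for 10 minutes, with the signature logged 3 minutes in:
nb = datetime(2022, 5, 18, 12, 0, tzinfo=timezone.utc)
na = datetime(2022, 5, 18, 12, 10, tzinfo=timezone.utc)
ts = datetime(2022, 5, 18, 12, 3, tzinfo=timezone.utc)
```

Verification hours or years later still succeeds, because the question is "was the key valid when it signed," not "is the certificate valid now."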
We talked a little bit about cryptographic document support. During the Witness attestation process, we look for these different types of documents. Like I said, if you're running on GitLab, we find that JWT, which has all the information about that CI runner and what generated it. So we can tell who made the last commit; by inspecting that JWT, we can figure out what project it came from.
We can identify all different sorts of permissions and metadata about that running process that we can then apply policy to. The same goes for the AWS metadata service: we take that metadata and put it into a JSON document that we sign. This gives us trusted selectors that we can then establish policy against. We'll go through this a little bit more, but currently we support Google Cloud, AWS, and generic JWT tokens, as well as GitLab.
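To see what selectors a JWT offers, it's enough to base64url-decode its payload segment; signature verification against the issuer's keys is a separate, mandatory step before any claim is trusted. A quick sketch, with an invented claim name for illustration:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the payload segment of a JWT (header.payload.signature).
    This only *reads* the claims; it does not verify the signature,
    so the output must not be trusted until verification happens."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))
```

On a CI provider like GitLab, claims such as the project path or pipeline ID are what become trusted selectors for policy; the exact claim names depend on the identity provider.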
We encourage contributors to add additional attestors to Witness; just reach out if you need help with that. Then finally, the last part of Witness, and this is really the most exciting part of it: we need something to do with all these attestations. How can we make them actionable to improve our security posture and efficiency?
So we have a policy engine embedded within Witness: it's witness verify. Policies define what attestations must be satisfied. Within that policy document, you may say you want a GitLab attestation for this step, you want a GCP attestation for this step, and then you want a command-run attestation that has a trace on it. You define that, and then, within those attestations, you can also attach Rego policies that must pass in order for the policy to be satisfied.
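The Witness policy schema has evolved since this talk, so treat the following only as an illustrative fragment of the idea described above, not the current format: a named step lists the attestation types it requires, and an attestation entry can carry embedded Rego modules that must pass.

```json
{
  "expires": "2023-12-31T23:59:59Z",
  "steps": {
    "build": {
      "name": "build",
      "attestations": [
        { "type": "https://witness.dev/attestations/gitlab/v0.1" },
        { "type": "https://witness.dev/attestations/gcp-iit/v0.1" },
        {
          "type": "https://witness.dev/attestations/command-run/v0.1",
          "regopolicies": [
            { "name": "expected-command", "module": "<base64-encoded Rego module>" }
          ]
        }
      ]
    }
  },
  "roots": {}
}
```

The `roots` section is where the trusted certificate authorities go, which is discussed a little further on.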
So now we have this trusted, signed data that we can evaluate with our policy engine, to understand whether this workload meets our policy or does not, and then we can decide what we want to do with that. And then the last part of this is that we also enforce the cryptographic identities that are allowed to execute each step. What this means is that we inspect the public certificate that was used to sign that attestation, and we can check the certificate constraints on it to see who signed it and whether they were allowed to sign it.
We also embed the certificate authorities that we trust within that policy. So, to backtrack on this kind of blown-out picture with a lot of words on the screen: what we do is take these identity documents, whether it's the cloud instance metadata or some sort of JWT, along with the source files and materials, and we create attestations for all of those. We execute the command that we specify.
While that command is executed, we trace all the materials that go in and out of it, and then we bundle all these together into what we call an attestation collection. This attestation collection is signed by a key provider and then uploaded to a back-end store. Blowing it out a little bit more into a CI approach, this is what it looks like: Rekor is our evidence store. We're normalizing all of our evidence, putting it in Rekor, and then we're able to evaluate policy against that normalized evidence.
Witness verify is a library, and so we're currently working on an admission controller that will enforce these policy documents in a Kubernetes environment. But really, the library can be embedded in just about any piece of software where you need to verify the provenance of whatever you're running.
One of the most important things we want to do is make sure that all of our software is built on the physical machines that we trust. These machines are part of our system security plan; they have been attested to our chief information security officer. We should make sure that our builds are actually coming off of those machines, and that we don't have a rogue developer bypassing the CI process in order to get his feature into production.
So what we want to do here is create a Rego module specific to our GCP project. You can see here that at TestifySec our GCP project number is 3243222. When we apply this policy in our admission controller, any build that doesn't have an attestation proving it was built, at least once, on our infrastructure will not get admitted into the cluster, and that workload will not be executed.
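A Rego module along those lines might look like the following sketch. The input field name is illustrative of what a GCP instance-identity attestor could record, not the attestor's actual output, and the project number is the one from the talk:

```rego
package gcp

deny[msg] {
    # Reject any build whose instance identity does not come from
    # our GCP project (field name illustrative, not Witness's schema).
    input.project_number != "3243222"
    msg := "build did not run on the trusted GCP project"
}
```

An empty `deny` set means the attestation satisfies the constraint; any message in the set fails the policy.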
Next, we want to verify that an artifact actually did pass static analysis testing. A lot of organizations may have dozens of CI systems, and understanding the compliance of each artifact that comes out of each of those CI systems is a really difficult task. So instead, what we're going to do is create some policy that says: every single artifact that goes into our production system must have a Snyk priority score of less than 510.
So now we create this policy and we implement it within our admission controllers in the Kubernetes cluster. This means that any of the workloads that are scheduled must pass our policy for static analysis. We're not allowed to bypass this because the developer is in a hurry, or because there's a misconfiguration or some other situation happening.
Finally, this is something that is usually very, very difficult to mitigate against: if your compiler was compromised, or has a critical vulnerability that transfers that vulnerability into your software, it's going to be really difficult to sift through everything in production to figure out exactly what was compiled by that malicious or vulnerable compiler.
But if you have tracing enabled within Witness while your CI process is running, we actually collect that information. So we have the digest of every single process that ran during this step of the CI process, as well as all the files that went into it, all the intermediate files, and all the outputs. We can take those digests and compare them against a vulnerability database or different threat databases to understand a more granular risk level for that workload.
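Conceptually, the lookup at the end is just set membership over digests: hash every binary and file the trace recorded, then intersect with a feed of known-bad digests. A toy sketch (the traced files and the threat feed here are invented for illustration):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex SHA-256 digest, the form typically compared across feeds."""
    return hashlib.sha256(data).hexdigest()

def flag_known_bad(traced_files: dict, bad_digests: set) -> list:
    """Return names of traced files whose digest appears in a
    known-bad feed (e.g. a malicious-compiler advisory)."""
    return [name for name, data in traced_files.items()
            if sha256_hex(data) in bad_digests]

# Pretend the trace recorded a compiler binary and an object file,
# and an advisory later publishes the compiler's digest:
traced = {"cc1": b"fake compiler bytes", "main.o": b"object file"}
bad = {sha256_hex(b"fake compiler bytes")}
```

Because the attestations are stored and indexed, this query can be run retroactively across every past build, which is the point of the Heartbleed example later in the talk.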
So what you're going to see here is that we have a post-commit hook that runs Witness, and Witness is configured in the commit hook to do an OIDC credential verification with Fulcio, where it's going to get those certificates and sign that attestation.
So we can see that attestation working. Now we should have a CI pipeline kicking off here. Oh, we've got to push it first, right. All right, so we'll go ahead and push it, and then that should be kicking off here pretty soon.
And there you see this step failed, because we used the wrong credentials to make that commit attestation.
For our next demonstration, we want to make sure that all of our SPIRE agents actually get compiled on infrastructure that we trust. We want to make sure it's not being compiled on some developer's computer sitting underneath their desk, or by a malicious actor. We want to make sure that everything that goes into production gets built on our production CI system, where we have our checks and where we're in compliance.
You see, that matches what's in our policy. So let's go ahead and see what it looks like to do this offline verification of our policy; we'll also change a couple of things around and see if our policy still passes.
So we've actually downloaded some of these artifacts ahead of time. The first thing we're going to do is verify the SPIRE agent: we're going to verify that it was built on that GCP project ID and that it passes all the other concerns that we have in our policy.
What Witness is doing right now is looking in Rekor for all those attestations. Then, as part of those attestations, some of them will contain references, we call these back references, to things like the pipeline URL or the commit hash, where we can go and find more attestations that may correspond to that CI pipeline, essentially building out the entire provenance graph required to satisfy that policy.
You can see verification succeeded. If we look at any of these indexes right here, you'll see that when we go there, that's actually a signed attestation for that artifact and that step.
So here we've actually changed the GCP number in the policy, and we're going to go ahead and see if this one verifies.
Right, so this is a command-run attestation. We see the command that we're running: we're building the SPIRE agent. We can see exactly the parameters that were passed into it. But more than that, we're running a trace on this process, and we have full permissions over it because we're actually wrapping it. This allows us to grab all the subprocesses and all the executables, as well as all the open files that went in and out of this process.
So this gives us a lot of information about possible vulnerabilities that may be introduced into our system, and we can create policy against these items just like anything else. So if a malicious compiler is identified, we can now backtrack and find everything in our system that was compiled using that malicious compiler. Another situation where this works out really well is something like Heartbleed: we want to figure out all the different places where the vulnerable version of OpenSSL was compiled into our software.
This is really, really difficult to do unless we have really good accounting of what went into our software, and that's what we're doing here: we're accounting for everything that goes in and keeping track of it, so we can look it up later. With that, that's the end of our demonstration. Let us know if you have any questions; we'll be on the live stream, and I think we have someone there in person too. So if you see Frederick around, make sure you say hi.