From YouTube: CNCF SIG Security 2021-01-13
C: Excited to hear your presentation today.
A: Well, it's really the start, right? It's more of a call to arms than anything else, but it's really trying to gather people together so we can focus on this stuff.
A: I think it raises many more questions than answers, that's for sure, but it's a start.
D: Hello, everyone, good morning. I'll be facilitating the meeting today. As always, maybe we'll just give everyone a couple more minutes to join, and then we can get started.
D: I don't see the link. It's in the chat; do you see it in the chat?
D: Okay, I think we've waited for the customary three minutes, so let's get started. Thank you.
D: Good to see everyone again. Today's agenda: I have a very quick update on the PRs and the tagging of some of the PRs, so I'll talk about that, and it looks like Brandon has an update, so we'll get to that.
D: So why don't I kick it off, and then we have a presentation that will be delivered by Jonathan Meadows on software factories. So let's get started. My update: from last week's meeting, I took up an action item to go ahead and tag related issues based on the cloud native security white paper. There are a lot of efforts in terms of a webinar that we want to deliver, as well as mini and micro blogs that we are trying to plan out.
D: So those issues are there if you're interested in participating in them; please go ahead. Actually, I'd be lying if I said I did it. It was on my plate, but I think Emily has gone ahead and tagged them appropriately; I think the tag is "white paper", in fact. So that's been done. That's my update, and then, moving down the list: Brandon, would you like to talk about the update on the landscape effort?
F: Yeah, so a couple of us got together to kick off the new landscape iteration just an hour ago, so I just want to make people aware of the work that's going on. There is a document that will be made available, but I will post the issue, so if you're interested in getting involved with the new iteration of the security landscape, just make a comment on that.
F: It would also be helpful, if you're new, to do a round of introductions; and also, in the meeting minutes, if you can put down the organization that you're with, or what you do, it's usually helpful for people who may want to reach out to talk about some topics.
D: Thanks for that, Brandon. So maybe we should do that right now, really quickly, with our new members. Would you like to introduce yourself really quickly?
E: Yeah, I'm Cara. Justin invited me to the meeting. I work at Clabbies.
D: All right, I think that rounds it up. So, without further ado: Jonathan, would you like to take the stage and talk about your presentation? Please take it away. Thank you.
A: Yeah, thank you. There are actually a couple of us presenting today, and this is really a presentation that focuses on a number of conversations a small group of us have had over the last year or so. I know Andy Martin from Control Plane is on the line; I wonder if you'd be able to share your screen and go through the slides. Thanks, Andy. So this really comes from a group of conversations between Andrew Martin, Justin Sabrin and myself, and really it's a call to arms.
A: This deck really brings together some of those current thoughts. It's not necessarily the voice of our companies, and it's far from a finished product; there are definitely more questions in here than answers, but it starts to map out our thoughts on the subject, and really what we're making here is a request for collaboration.
A: So let's set the scene a bit and describe what we're looking at from a supply chain perspective. What is a supply chain? Well, any exchange of goods.
A: So yeah, any exchange of goods can be modeled as a supply chain, and supply chains exist for anything that's built from other things, whether that's processed foods, or pharmaceuticals, or software. With the exception of the start and end of the chain, each link in a supply chain is effectively a producer or a consumer of a product.
A: Now, the software equivalent of that is more complex, right? We have software created from many different dependencies and transitive dependencies. That software is then built and distributed over a network to an application repository such as Docker Hub or PyPI before finally being deployed, and in most cases of vendor-supplied software there's really no detail on where that software came from, how it was built, or what's in it. It's unfortunately worse than that: it's like having no list of ingredients on a standard product.
A: So, as you can see, there are producers and consumers at each stage of the supply chain, and, importantly, any stage in that chain that we don't directly control or have trust in is liable to be attacked, and a compromise of any upstream stage is going to impact us as downstream consumers. Okay, so if you look at the next slide, we can look a bit closer at that problem.
A: What we see, and what we realized, is that we really need to look at this issue holistically, from the start to the end of the supply chain, and that's why we're looking to bring others together to try and solve this problem. It's really due to the producer-consumer problem, effectively.
A: So, as a group of four, over the last while we started to look at this problem and split it into four focus areas. The first is the fact that software has multiple dependencies and transitive dependencies, often with limited detail on what's actually in it, and that situation gets a lot worse for closed-source software. So we need to address that.
A: Secondly, we need to realize that securely ingesting that code is difficult, and we need to figure out how we get an understanding of where that code came from and what its transitive dependencies are, because every time we include a dependency we're effectively extending our supply chain to that producer and taking on their security posture, and that becomes really difficult to track. Thirdly, we've got to build and distribute that software, which is a very hard problem, but one we're really focusing on. And finally, we need runtime validation: we need to validate that the software we're running is the software we expect, and we need to be able to validate signatures generated by those producers' pipelines, probably in admission controllers in a cloud-native sense.
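As a minimal sketch of that runtime-validation step: the producer's pipeline signs the artifact digest, and an admission-controller-style check verifies the signature before the workload runs. The shared demo key and names below are hypothetical; a real deployment would use asymmetric signatures carried in attestation metadata rather than an HMAC:

```python
import hashlib
import hmac

# Hypothetical shared key standing in for the producer pipeline's signing key.
PIPELINE_KEY = b"demo-pipeline-key"

def sign_artifact(artifact: bytes) -> str:
    """Producer side: sign the artifact digest at the end of the build pipeline."""
    digest = hashlib.sha256(artifact).hexdigest()
    return hmac.new(PIPELINE_KEY, digest.encode(), hashlib.sha256).hexdigest()

def admit(artifact: bytes, signature: str) -> bool:
    """Consumer side: an admission-controller-style check before running."""
    digest = hashlib.sha256(artifact).hexdigest()
    expected = hmac.new(PIPELINE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

image = b"container-image-bytes"
sig = sign_artifact(image)
assert admit(image, sig)                    # untampered artifact is admitted
assert not admit(image + b"tampered", sig)  # tampered artifact is rejected
```

The point of the sketch is only the shape of the check: the consumer recomputes the digest independently and verifies the producer's signature over it, never trusting the artifact's own claims.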
A: So we have to solve all four of those areas holistically to really address and tackle this problem, but, things being as they are, we're really focusing on the last two, and that's where we get into the software factory. So, Andy, do you want to continue with the next one?
J: The presenter view issue is mine; I'll just have to do it sideways. I apologize for the side view. So, what solutions already exist in this space? Well, multiple different groups are looking at this. Firstly, we have SBOMs: software bills of materials.
J: There is also a group working on DBoMs (distributed bills of materials), which is effectively a mechanism to distribute SBOMs in an attested form, so we can trust them in our build systems.
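A toy illustration of why an SBOM matters to a downstream consumer: with a machine-readable component list, checking a product against an advisory feed becomes a simple lookup. The SBOM shape and the advisory data here are invented for illustration; real SBOMs use formats such as SPDX or CycloneDX:

```python
# A toy SBOM; real ones follow the SPDX or CycloneDX specifications.
sbom = {
    "name": "example-service",
    "components": [
        {"name": "libfoo", "version": "1.2.3"},
        {"name": "libbar", "version": "0.9.0"},
    ],
}

# Hypothetical advisory feed mapping (name, version) to known CVE identifiers.
advisories = {("libbar", "0.9.0"): ["CVE-2021-0000"]}

def vulnerable_components(sbom_doc: dict, feed: dict) -> list:
    """Return (name, version, advisories) for every listed component with a known issue."""
    return [
        (c["name"], c["version"], feed[(c["name"], c["version"])])
        for c in sbom_doc["components"]
        if (c["name"], c["version"]) in feed
    ]

print(vulnerable_components(sbom, advisories))
```

Without the component list, the consumer has no ingredients label at all, which is exactly the gap described earlier in the presentation.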
J: There is existing work underway to securely build software. The Department of Defense in the US is really leading the way in this domain; they've put a lot of work into this space and released a very detailed paper, the Enterprise DevSecOps Reference Design, which is linked later on (not on this slide, unfortunately). This describes a software factory: secure software delivery infrastructure that creates hardened pipelines from pre-secured, hardened components, building something that can then produce other software.
J: So it's very much the Russian doll, turtles-all-the-way-down approach. And thirdly, as this group is already aware, Justin Cappos and Santiago Torres, with the TUF and in-toto projects, are in this space.
J: They're looking at verifying the integrity of software, all achieved with code signing, essentially. One of the paradoxes of open source is that it does not come with a usage guide, and, as such, we tend to see that generalized software (something not specifically intended for one purpose but usable more generally) is more widely consumed, which means that software may be used in ways it was not intended for, whereas more specific software, of course, tends to be used in niche, specific cases.
J: Of course, we need to solve this problem for software in general, not the special case, and, as such, when we want to ingest specific, non-generalized tooling, we potentially have a problem here. So we would like to increase the reusability of well-defined, security-vetted products to get truly secure supply chains, and this means moving outside of our immediate domain and securing upstream. It's not just about securing our own link in the supply chain.
J: So, as with most build chains, software committed to Git is subjected to automated procedures, choke points and quality gates, and at those gates we have the ability to enforce policy, as we see on the diagram here on the left. There is the link to the reference design; a very strong recommendation to read it. It's very approachable and has done a huge amount of the work for us in getting to this point.
J: So, of course, what we're doing here is really just standing on the shoulders of CI/CD giants; these are not revolutionary concepts. It is just a trusted deployment and propagation mechanism for the underlying infrastructure, and, of course, preventative controls mean that we are shifting things left in the pipeline, as we all love to do.
J: Detective controls are run against the deployed systems, to the right of the pipeline, and then we bookend the security and deployment infrastructure with trusted, delegated automation infrastructure, again using standards and known tooling. Notably, the skill needed to achieve this level of automation, which is almost recursive or self-referential, is significant. But, as with DevOps and DevSecOps, the point here is about bringing software engineering rigor to automation, operations and security, so that small teams can manage far more resources than they traditionally would be able to.
J: Some further considerations on what we've been discussing as a group: it really centers around the chain of trust here, and, of course, thanks to everybody who's been doing work on this for years, and presenting to this group for numerous years as well.
J: The question is: how do we attest to the work at each stage of the pipeline, and also attest to the ultimate output? For clarity: each individual build stage must have a cryptographic chain of trust internally, but then, externally, our final artifact must also have some verifiable indication that we built it and that we trust it, so that the people we have a trust relationship with can verify it when they go to consume that artifact. Currently we're looking at a proof of concept leveraging in-toto, and, as we've dug into that, we've also investigated the bootstrap requirements for this system and its trusted components.
J: If a single provider, as Jon mentioned, is compromised, the attacker may target one of n downstream consumers, and we get ourselves into sticky situations, as we have seen. So, inherently, the use of open source, the use of software, the use of anything we exchange that can be modeled as a supply chain requires us, as humans, to accept some inherent level of risk, and making sure that reasonable measures are in place to ingest software, to detect malicious software, and then to securely send it on is imperative.
J: So this is not a replacement for a hardened pipeline; it is a packaging and distribution mechanism for the hardened pipeline itself. And, as ever, we assume that no control is entirely effective, and we run intrusion detection systems all over the place, because, frankly, we expect to be compromised in some shape or form. Compromising the intrusion detection system itself is left as a thought exercise for the reader, perhaps. Okay, so here is our proposed proof-of-concept design: a high-level view of a theoretical software factory.
J: I do apologize; that was a very quick restart. That's the second time Zoom has crashed on me today. Let me just bring everything back up again.
J: I presume... yeah, okay, I've seen that. So no, it's not sharing it. Excuse me. Yeah: so what we're building here is the infra build environment, the second major box on the right. For a theoretical sample project, we're using the software factory to construct this infra build environment for a theoretical, secured project.
J: So we have a laundry list of inputs to the system on the left-hand side here. The one we'll focus on at the moment is the SPIFFE SVID.
J: This is the "bottom turtle" SVID, to use the terminology that Scytale published in their book. It's really an excellent read; again, there are some contributors to that book in the SIG. It's a great piece of work, and even if you just want a simple way to describe cloud native systems, the glossary at the back is really excellent, so a strong recommendation. We're assuming that SPIFFE and SPIRE, as sort of long-term entities in this group, are known, and we won't go into too much depth in this presentation.
J: So what do we have? We have a SPIFFE SVID, a SPIFFE Verifiable Identity Document. This is an identity we can use to identify ourselves. What we do in the top box, with our remote support services, is enable Vault to be SPIFFE-aware; we do that by using a SPIRE authentication plugin for Vault, and this means that we can externalize all our secrets and exist with just a workload identity that is linked to a trust domain. So we know what the identity is, we know where it came from, and we have a domain in which it is trusted. SPIFFE and SPIRE support federation, an extended version of this concept, which becomes far more interesting in a theoretical future version of this presentation, once we're past getting our initial proof of concept together. So that initial secret is input to an infrastructure-as-code deployment, which stands up the first software factory, TCB one, in some trusted environment. So what we're seeing here is that we're taking all of our inputs; we've got a Makefile here; it's an arbitrary task.
J: That trusted compute base will be used to build the next software factory, and is then terminated, so it's an ephemeral build environment, using the same IaC infrastructure, the same workloads and the same images. The same container images are used locally as in production, so we have continuity there, and everything is treated at the same trust level from that perspective, because, of course, we're actually proxying through key material, or, if not key material, identity that can be used to retrieve key material.
J: Okay, so trusted compute base one is a Kubernetes cluster, which in this case is running Tekton. Again, it just needs a build runner, but Tekton is a direction of travel that we're interested in. It also includes the Tekton Chains project, which takes Tekton task runs and converts them into in-toto link format.
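As a rough illustration of what an in-toto link captures, here is a minimal link-shaped record for one build step: hashes of the materials that went in and the products that came out. This is a sketch of the idea only; the real layout and the required signatures are defined by the in-toto specification, and Tekton Chains produces them automatically:

```python
import hashlib
import json

def link_for_step(step_name, materials, products, command):
    """Build a minimal in-toto-link-shaped record for one build step.

    Illustrative only: a real link follows the in-toto spec and is signed
    by the functionary that ran the step."""
    def h(data: bytes) -> dict:
        return {"sha256": hashlib.sha256(data).hexdigest()}
    return {
        "_type": "link",
        "name": step_name,
        "command": command,
        "materials": {name: h(data) for name, data in materials.items()},
        "products": {name: h(data) for name, data in products.items()},
    }

link = link_for_step(
    "build",
    materials={"main.go": b"package main"},
    products={"app": b"\x7fELF-binary-bytes"},
    command=["go", "build", "-o", "app", "."],
)
print(json.dumps(link, indent=2))
```

A verifier can later check that the materials of one step match the products of the previous step, which is how in-toto chains the stages of a pipeline together.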
J: So what have we done there? Well, we have taken all of the dependencies needed to build the infra build environment, stood them up in the trusted compute base, and run a build job that then stands up the infra build environment. This is pulling secrets from our Vault server and minting certificates as appropriate, and, finally, a productionized software factory has been built: infra build environment one.
J: This software factory is not just used to run automated deployments, as the previous software factories were; it is also capable of building software. So it's tooled up with the appropriate pipelines to take a Java repository, or a microservices galaxy, or a cluster, or a monorepo, however we've configured those things in Tekton, and it can build these application artifacts, as well as supporting developers and operations, thereby creating trusted artifacts. And again, with build-stage signing on a trusted build, we finally have an artifact that we can trust.
J: The obvious choice here is Notary; Notary v2 is under active development. As with all of this presentation, effort is required across various domains to get to a proof of concept. We are conscious that we are relying on things that are under development in many of these cases, but they're all things that we have strong confidence in.
J: So how does this address the supply chain problem? By signing and verifying every artifact and action that the system performs. Everything produced should have a verifiable signature in the trust domain, and thus consumers of the software factory are able to assert that the output artifacts were created, tested and verified before each artifact is distributed. And then, of course, trust-but-verify is possible in the producer-consumer relationship, when we're the producer and somebody downstream is consuming the artifacts the software factory has produced.
J: There is a secondary problem of compromised build infrastructure. What happens if somebody doesn't get the signing keys, but is able to tamper with some of the build-step containers, or is able to tamper with source code under some circumstances? in-toto has a pattern for this; it's actively deployed in Debian repos. If you have a Debian install, you'll be using packages that are reproducibly built and signed with in-toto; the same goes for the PyPI registry. There are, actually, tens of thousands, if not more, packages distributed and signed in this way. The way compromised build infrastructure is dealt with by in-toto and those projects is by running builds in parallel in geographically distributed and isolated environments, so the remaining supply chain attack surface is only the source code, because those environments each have their own unique (by some definition) supply chain.
J: If one is compromised, it will create an artifact with a different hash (a different cryptographic proof, if you like), and when the artifacts are compared across multiple build infrastructures, we can determine that either there's a non-deterministic or non-reproducible build, or something is up with the actual build infrastructure itself. That provides a robust, distributed fail-safe for that particular attack. Of course, if the source code is compromised, well, all bets are off. That in itself is a different problem, and there are some interesting discussions going on in the developer identity working group about whether we can ever actually link those things to one human.
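The parallel-build cross-check described above can be sketched as comparing digests from independent build environments and flagging any that disagree with the consensus. This assumes reproducible builds, so honest environments produce byte-identical artifacts; the inputs here are invented:

```python
import hashlib
from collections import Counter

def disagreeing_builders(build_outputs):
    """Given artifact bytes from independent build environments, return the
    indices of environments whose output digest differs from the consensus.

    A non-empty result means either a non-reproducible build or a
    compromised build environment; distinguishing the two needs a human."""
    digests = [hashlib.sha256(b).hexdigest() for b in build_outputs]
    consensus, _count = Counter(digests).most_common(1)[0]
    return [i for i, d in enumerate(digests) if d != consensus]

honest = b"reproducible artifact bytes"
outputs = [honest, honest, b"backdoored artifact bytes"]
print(disagreeing_builders(outputs))  # the third environment stands out
```

Real deployments compare signed in-toto link metadata rather than raw bytes, but the comparison logic is the same: one outlier hash is enough to stop the release and investigate.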
J: But I digress. So, a few more things. The keys for various activities (because we need signing keys all across this infrastructure) would reside in Vault under this model, to sign the individual build attestations, but we do need a longer-lived credential to verify those things at runtime, or later in the build.
J: There's a question of using long-lived credentials versus build attestations that we validate at runtime. There are pros and cons to each approach, and we're looking to undertake a threat modelling exercise to try and get to the bottom of this. We have ideas, we have thoughts, but we will do a lightweight, and then a more formal, threat modelling attack on this at some point in the future.
A: And I know that a couple of other people on this call are also looking at that same problem, so it'd probably be good to get together on that one. We're still trying to figure out which sort of credentials we'd use to sign those build attestations, so it's a little bit complex; hence the threat model that we're going to put together.
D: Is it a good time for a question?
J: Thank you. So, just running through the theoretical types of pipeline: what does a software factory support? Is it just CI/CD? The answer is flexible and unbounded: almost anything. We can build application pipelines as we know and love them; we can build our infrastructure-as-code pipelines, because fundamentally we're just running stuff in containers (nothing changes here from running in your build environment of choice); and, of course, we can also run our security-as-code pipelines.
J: Now, these security pipelines are generally not going to build artifacts; they will generally produce test evidence. We can then at least sign that test evidence, so we have some more confidence. But really, the infrastructure and security pipelines are of such extreme levels of trust to our organization that, if an infrastructure-as-code pipeline can deploy into production, it is a useful backdoor, let's say, and the same goes for the network routes that are likely to emanate from, or link to, the security and testing infrastructure. So these may be co-tenanted on the same software factory instance, or multiple instances may exist, which is straightforward because of the highly replicable nature of the thing.
J: One of the pretexts to this is a thought exercise: what can't be containerized? Taken ad nauseam, we think that potentially almost everything can be containerized. Some things don't make immediate sense; for example, shipping some configuration. Maybe that doesn't quite make sense; maybe our organization would prefer, more traditionally, to use a Git repository for that. But if we can force everything into a container (I use the word "force" purposefully), it gives us consistency, a lack of duplication between different types of thing, and ultimately this is better for security: we know that we can run consistent tests against everything. What do I mean by consistent tests? I think minimum viable cloud native security is just container scanning.
J: CVEs have been published, we've got correlation data, we should do it; it's practically free. That, by extension, should be applied to almost everything. It's kind of the CoreOS, or Container Linux, theory again, taken slightly ad nauseam, yeah.
J: How does that get to the software factory in a trusted way? Well, it is ingested into the organization and scanned with the same set of tools. Internal code is subjected to static analysis (for actual source code, notwithstanding, depending on the compilation state) and built into a container image, and again the benefit of treating everything as a container, for standardization and reuse purposes, becomes clear.
J: However, the complexity of consuming code by ingesting it into an organization is a super difficult problem in this domain; it's beyond the scope of this presentation. So we're modeling the producer-consumer relationship: what happens if we have an anonymous producer with a binary, compiled, obfuscated artifact that we need to get into our organization?
J: We're choosing just to look at it from the producer-consumer relationship at the moment with the software factory, but, of course, by extension, the ultimate goal would be to extend the software factory in both directions from its point in the supply chain, so that it's able to verify its producers back to the source (by source I mean the initial producer) and all the way down to the final consumer.
J: Here is a slightly closer view of the container-app software factory build type, and the point here is that almost everything ends up being subjected to something a little bit like this. Say it's just a microservice, maybe it's a cache, maybe it's an admission controller; for the purposes of what we're looking at, it goes through static security and policy tests.
J: If we want to extend trust internally to our domain, so into our organization and into our Git repository, well, human identity and GPG is more or less the only way we're going to achieve that at this point in time. PKI is difficult, but it's the best we've got at the moment. Again: commit conformance, scanning for secrets in Git; we would probably want to do these things to anything that went into a container.
J: There is, of course, specificity around how individual languages test. At that point we would look to defer to build tooling of some description, and we have had discussions about uniformity of interface: a high-level DSL that is "approachable to developers" (in inverted commas, without laughing at myself). Yes, it's almost the xkcd standards question, but the question of a higher-order DSL, maybe something above Tekton, that provides uniformity and basically means developers don't have to do much thinking under the hood.
J: All of these things are run, without exception, based on the classification of the type of thing that's coming in. This is very much part of the software factory, but, in terms of the bootstrap of trust that we've been talking about for the majority of the presentation, this is just a postscript.
J: And, I guess, finally: attempting to use the same controls for infrastructure and security pipelines as for application pipelines just deduplicates our efforts. These systems are highly privileged: they have at least read access, and sometimes write access, to all source code in the domain, let's say, to data and, of course, to infrastructure. When we're talking about getting these things into production, it would be a prudent use of dedicated build infrastructure to actually build the build steps that go into Tekton. So again, all of this stuff is standing on existing thinking, if you like, and one of those things is: have more than one build server, frankly. And that just about takes it to the end. I will pass back over; thank you for listening.
A: Right. Clearly there's still a huge amount of open-ended questions, and what we're really trying to do is just get people together and collate some of those best practices, so that we can start to work on the next steps together. A number of people are trying to do these sorts of things separately, and I think it would be beneficial to the community to collate some of those efforts, see if people are interested in doing that, and perhaps get to a point where we can define some of the standards or policies that we'd implement along the supply chain.
A: That would make it easier to build upon. And I do need to call out again the work from the Department of Defense, the DoD Platform One team; it really is excellent, and I think it's really about providing a reference implementation and standards to work through that we can then extend. So, as Andy suggested, we're definitely building on the shoulders of giants in that regard.
D: Great presentation, thank you so much. I know that this is a work in progress and it's always evolving. I was just curious: you talked about these artifacts that go from one step to the next, where one is the consumer and the producer of the other, and so on. Do you have a sense of what those verifiable items might be? I know one of the first things that comes to mind is obviously a container image, taken atomically.
J: We're keenly aware that legacy applications will not do that without some kicking and screaming. Is that the kind of thing you were thinking of?
D
Yeah
no,
no,
because
I
think
if
I
maybe
I
I
didn't
understand,
maybe
I
took
it
more
broadly,
but
I
thought
you
also
alluded
to
the
fact
that,
as
as
one
of
these
assets
go
through
the
different
steps
in
a
build
pipeline,
if
you
will
but
you're
also
able
to
verify
the
consistency
and
the
security
or
some
kind
of
verifiable
aspect
of
the
environment
itself,
because
as
you
call
it,
I
forget
what
tcb
stood
for.
D
But
you
know
the
tcb
and
then
you
went
from
the
build
tcb
to
a
running
tcb
to
the
actual
environment.
So
I
thought
that
and
it
would
be
great.
Maybe
if
it's
not
it's
not,
and
I
understand
that
it's
evolving,
but
you
were
also
had
some
kind
of
constructs
to
actually
verify
each
of
those
environments.
In
addition
to
just
these
container
images
and
the
software
artifacts
that
they
wrap.
J
The
the
short
answer
is
not
immediately,
but
it
essentially
by
it,
it
kind
of
comes
down
to
what's
the
root
of
trust,
and
what
form
does
that
take?
So
it's
key
material
of
some
description.
J
If,
if
we
kind
of
standardize
as
kubernetes
does
on
x509s,
then
things
are
signed
back
to
the
to
the
root
of
trust
actually
in
an
x509.
So
we
have
the
spiffy
concept
of
a
trust
domain,
which
at
least
allows
us
to
verify
not
that
things
aren't
compromised,
but
that
the
signing
keys
came
from
somewhere
that
we
trust
a
kind
of
like
meta
s-bomb
of
each
stage
is
a
nifty
idea
that
I
haven't
really
considered
it.
It's
like
maybe.
H
Some
something
that
can
help
there
with
with
that
is
you
want
to
be
careful
that
people
don't
just
take
the
certificates
in
the
trust
chain
and
implicitly
trust
them.
Ssl
libraries
tend
to
be
very
careful
not
to
do
this,
but
when
people
start
to
roll
their
own
code
to
implement
this
stuff,
we
have
to
be
very
sensitive
that
people
will
likely
take
that
approach
and
there's
a
couple
things
we
can
do
to
help
mitigate
that
one
of
them
is.
H
You
can
encrypt
with
the
pub
with
a
private
key
decrypt
without
with
a
public
key,
and
so,
if
we
were
to
put
in
the
fingerprints
or
some
other
thing
that
we
could
use
to
identify
each
parent
and
use
that
to
to
decrypt
the
chain
up,
and
that
would
allow
any
rather
start
from
start
with
the
fingerprint
of
the
of
the
ca
and
work
your
way
down
the
chain.
H
That
way
that
you
can
then
start
to
decrypt
layer
by
layer
until
you
get
to
your
final
certificate
and
validate
it
so,
but
we
do
have
to
be
very
careful
that
we
don't
establish
a
practice
where
people
can
buy
through
lack
of
knowledge
or
through
making
a
mistake
that
they
don't
need
to
advertise
put
themselves
in
a
negative
situation.
I
also
have
some
more
literature
right
about,
obviously,
that
I
can
write
up
and
put
into.
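The fingerprint-pinning idea can be sketched with toy certificate records (plain dicts, not real X.509). The point of the sketch is the order of operations: start from a root fingerprint distributed out of band, and check each link downward, rather than implicitly trusting whatever chain is presented. A production system should rely on a vetted TLS library instead of hand-rolled chain walking:

```python
import hashlib

def fingerprint(cert: dict) -> str:
    """Toy fingerprint: hash of the certificate's public key bytes."""
    return hashlib.sha256(cert["pubkey"]).hexdigest()

# Toy "certificates": subject, fingerprint of the issuer, and key bytes.
root = {"subject": "root-ca", "issuer_fp": None, "pubkey": b"root-key"}
inter = {"subject": "intermediate", "issuer_fp": fingerprint(root), "pubkey": b"int-key"}
leaf = {"subject": "build-signer", "issuer_fp": fingerprint(inter), "pubkey": b"leaf-key"}

# Distributed out of band; never taken from the presented chain itself.
PINNED_ROOT_FP = fingerprint(root)

def verify_chain(chain: list, pinned_root_fp: str) -> bool:
    """Walk from the pinned root downward, rejecting any link that doesn't match."""
    if fingerprint(chain[0]) != pinned_root_fp:
        return False
    for parent, child in zip(chain, chain[1:]):
        if child["issuer_fp"] != fingerprint(parent):
            return False
    return True

assert verify_chain([root, inter, leaf], PINNED_ROOT_FP)
# A chain with an injected intermediate breaks the leaf's issuer link.
rogue = {"subject": "rogue-ca", "issuer_fp": fingerprint(root), "pubkey": b"evil-key"}
assert not verify_chain([root, rogue, leaf], PINNED_ROOT_FP)
```

Real chain validation also checks cryptographic signatures, validity periods, and revocation, which is exactly why reusing an existing SSL library is the safer default.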
K
Yeah,
so
this
is
vessel
just
to
add
one
thing
to
to
this
point
about
digital
signatures
as
well.
I
think
we
need
to
differentiate
between
when
digital
signatures
are
needed
and
when
we
are
using
code
signing
based
certificates
right
because,
with
code
signing
certificates,
time,
stamping
servers
come
as
well
where,
where
you
can
basically
go
back
in
time
and
verify
whether
the
artifact
was
when
it
was
built
like
five
years
ago
or
two
years
ago,
was
it
valid
at
that
time
or
not?
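The timestamping point above can be sketched as a validity-window check: a trusted timestamp proves when signing happened, so a signature can still verify years after the certificate itself has expired. The dates and names below are invented for illustration:

```python
from datetime import datetime

# Hypothetical code-signing certificate validity window.
CERT_NOT_BEFORE = datetime(2016, 1, 1)
CERT_NOT_AFTER = datetime(2019, 1, 1)

def timestamped_signature_ok(signed_at: datetime) -> bool:
    """With a trusted timestamp, a code signature remains verifiable after
    certificate expiry, provided signing happened inside the validity window."""
    return CERT_NOT_BEFORE <= signed_at <= CERT_NOT_AFTER

# An artifact signed in 2017 still verifies today; one signed after expiry does not.
assert timestamped_signature_ok(datetime(2017, 6, 1))
assert not timestamped_signature_ok(datetime(2020, 6, 1))
```

A real verifier would additionally validate the timestamping authority's own signature over the timestamp, which is what makes the claimed signing time trustworthy in the first place.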
K
So
I
think
there
are
few
angles
here,
of
course,
within
the
x
509
domain,
one
is
the
digital
signature
one
and
the
other
one
is
also
the
code
signing
certificates,
so
the
the
signing
with
both
of
them
and
how
it
is
handled
is
different
and
we
need
to
identify
in
this
sort,
software
factory,
that
what
kind
of
signing
we
are
using
and
where
image
signing
when
you
are
doing
is
one
use
case.
K
But people are also looking towards signing the configuration files as well, because now we are seeing a lot of YAML going around, right? I mean, with different things, people are exploring, or are kind of interested in knowing, if you could sign these YAML files as well. So there are a lot of aspects around signing, and yeah, if you guys are interested, I would want to contribute on that end as well.
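Signing a configuration file, as raised above, can be sketched with the standard library. A real pipeline would use an asymmetric signature (GPG or an X.509 key); an HMAC stands in here purely so the sketch stays self-contained, and the canonicalisation step is an assumed convention, not part of any YAML spec:

```python
# Sketch: sign the canonical bytes of a config file so that trivial
# formatting differences (CRLF vs LF, trailing spaces) don't break
# verification, while any real content change does.
import hmac, hashlib

def canonicalize(text: str) -> bytes:
    lines = [line.rstrip() for line in text.replace("\r\n", "\n").split("\n")]
    return "\n".join(lines).encode("utf-8")

def sign_config(key: bytes, text: str) -> str:
    return hmac.new(key, canonicalize(text), hashlib.sha256).hexdigest()

def verify_config(key: bytes, text: str, sig: str) -> bool:
    return hmac.compare_digest(sign_config(key, text), sig)

key = b"demo-signing-key"  # hypothetical; real keys come from an HSM/vault
manifest = "apiVersion: v1\nkind: ConfigMap\n"
sig = sign_config(key, manifest)
```

The same manifest with Windows line endings still verifies, while appending a field does not.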
K
One thing that I do want to emphasize here — and I haven't gone through this whole presentation, honestly — is that during the call there was mention of Vault, right, where all the signing keys will be stored, or something like that, I think, yeah.
K
I think this link has to be kind of generic in nature, because if you are going towards big enterprises, they will already have their existing HSM infrastructures, which will be backed by custom vaults already, or they will have their own implementation of vaults.
A
Completely. It's a vault, not necessarily Vault itself, and you're absolutely spot on, in that we're really looking at how we're signing the different artifacts throughout that chain, where we're getting signatures, where we're storing them, and what we're signing. And frankly, we're just looking at different approaches to try and see which one's actually going to fit.
A
So I think I'd be really interested in continuing that conversation with you, but to your point, yeah, as with other things within this, it needs to be generic, right? So a generic store, possibly some form of HSM; a different build pipeline, possibly not Tekton, possibly anything else, frankly. But we just needed to use one as a reference architecture to start with.
K
Okay, great, yeah. And I would be happy to contribute on this code signing aspect, or digital signature aspect — the signing part is what I'm really interested in — and if you could continue discussions in the future, I would really like to contribute to it. I will go through the DoD document as well, but I do have a background in code signing, so yeah, we will take a look at it. Thank you. Thank you for the presentation; the presentation was great. I think this is where everything is going, and yeah, a great presentation, thanks, guys.
H
On the code signing, just to add to this: it's also good to attach certain types of metadata about it. Like, if I scan it with a certain version of a code scanner for security problems, I want to know what version it was scanned with, and when. That way, if I release a new version, I can actually add into the metadata, and the policy around that information, what is in scope or out of scope of my policy.
H
So that gives me the ability to force older versions that don't protect me to sunset over time, and make them fail policy as the system evolves. Anyways, I just wanted to add that.
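The scan-metadata-plus-policy idea above can be sketched as follows. The tool name, field names, and version thresholds are illustrative assumptions, not any real scanner's schema:

```python
# Sketch: record which scanner version checked an artifact, then let a
# policy sunset scans performed by versions that are now too old.
def scan_passes_policy(metadata: dict, min_scanner_version: tuple) -> bool:
    """Accept the artifact only if it was scanned at or above the
    minimum scanner version the current policy requires."""
    version = tuple(int(p) for p in metadata["scanner_version"].split("."))
    return version >= min_scanner_version

artifact_metadata = {
    "scanner": "example-code-scanner",       # hypothetical tool name
    "scanner_version": "2.4.1",
    "scanned_at": "2021-01-10T00:00:00Z",
}
ok = scan_passes_policy(artifact_metadata, (2, 0))
# Policy evolves: once the floor is raised to 3.0, the old scan fails,
# forcing a rescan with a scanner that actually protects you.
sunset = scan_passes_policy(artifact_metadata, (3, 0))
```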
M
Yeah, so Justin and I work on the v2 stuff — there are so many people here, I'm not sure if anybody else is here too — and one of the things we've been debating is X.509. Is it okay to require X.509, or do we need something else? I heard a lot of X.509 conversation here, so I'm curious: do people feel like that's too limiting, or has it been good? What do people feel?
K
So I want to comment on this thing, right. Yes, if you are talking about open source software, or you are talking about individuals, whenever you mention X.509, people will get a bit on the back foot; they will say, okay, we need a PKI infrastructure. But if you are selling anything, or if you are building something for the enterprises, or big financials, or big pharma companies, or big insurance vendors, they already have pretty established PKI infrastructures.
K
We have kind of figured out all the standards, all the best practices for it, and if your secure factory kind of integrates with that, for them it is a plus point. For individuals it will always be tough, because, for example, if you ask me to establish everything as PKI for my open source project, for me it will be difficult — I will go for GPG keys. But enterprises will prefer PKI. So yes, I do want to emphasize that.
H
Would it be possible to consider having some metadata, so you can specify what you're using — so you can say whether it's GPG, or X.509, or something other? Because the problem that you run into with PKI is, if you're running in a large enterprise, they will almost certainly require you to use PKI if it's available, but if you're an open source project, you may want to use GPG keys that you've published. And the problem that we run into with PKI on the open source path
H
is that PKI assumes you have a root — a shared thing — and everything falls under a specific tree, which is true in major enterprises, and even there it's only available to some degree. But there's no federation, or rather, the federation is not as simple when it comes to the PKI approach. And so, being able to specify, or possibly even sign multiple times — like, I can sign this with the GPG key and the PKI key —
H
that would give me the ability to pick which one I want to use based on my needs, and does not require me to pull in the CA of some third-party organization that I may not trust, when I may only care about whether this thing got produced with the right set of certificates, through GPG. But anyways, just some thoughts. If you're in the planning sessions on those issues, I would love to join them, if they're not fully debated already.
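The multiple-signature suggestion above can be sketched as an envelope that records which scheme produced each signature, letting each consumer verify with the scheme it trusts. The envelope layout and scheme labels are assumptions for illustration, not any real signing format:

```python
# Sketch: one artifact, several signatures, each tagged with its scheme,
# so an enterprise can insist on X.509 while an open-source consumer
# relies on GPG over the very same artifact.
import hashlib

def make_envelope(artifact: bytes, signatures: list[dict]) -> dict:
    return {
        "artifact_sha256": hashlib.sha256(artifact).hexdigest(),
        "signatures": signatures,  # each entry: {"scheme": ..., "sig": ...}
    }

def verifiable_with(envelope: dict, trusted_schemes: set) -> bool:
    """True if at least one signature uses a scheme this consumer trusts."""
    return any(s["scheme"] in trusted_schemes for s in envelope["signatures"])

env = make_envelope(b"release.tar.gz", [
    {"scheme": "gpg",  "sig": "gpg-signature-placeholder"},
    {"scheme": "x509", "sig": "x509-signature-placeholder"},
])
enterprise_ok = verifiable_with(env, {"x509"})
oss_ok = verifiable_with(env, {"gpg"})
```

A consumer trusting neither scheme simply gets no verifiable signature, rather than being forced to import an untrusted third-party CA.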
O
Hey — I left the question in the Zoom chat window. So the question is regarding how we use this SVID. I know that a lot of people generate the SVID during runtime, for example after the node attestation and then the workload attestation, and then mount the certificate into the container at runtime.
O
But looking at the slides today, it seems like, during the build procedure, we'll generate the SPIFFE ID — the SVID — and put it together with the container. For me, this is a different usage: one is, during the build period, we generate the SVID; another one is, we generate it during runtime. So what would be the suggested way?
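The two usages contrasted in this question can be sketched as two differently scoped SPIFFE IDs: one naming the built artifact, one naming the running workload after attestation. The trust domain and path layout below are assumptions for illustration; SPIFFE does not mandate any particular path scheme:

```python
# Sketch: build-time identity (fixed when the build produces the image)
# versus runtime identity (issued only after node and workload
# attestation succeed).
def build_time_id(trust_domain: str, pipeline: str, image_digest: str) -> str:
    # Identity of the artifact itself, minted by the build pipeline.
    return f"spiffe://{trust_domain}/build/{pipeline}/{image_digest}"

def runtime_id(trust_domain: str, node: str, workload: str) -> str:
    # Identity of the running workload, mounted into the container.
    return f"spiffe://{trust_domain}/node/{node}/workload/{workload}"

bid = build_time_id("example.org", "tekton-pipeline", "sha256-abc123")
rid = runtime_id("example.org", "node-7", "payments-api")
```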
J
If anybody with better SPIFFE- or SPIRE-specific experience wants to jump in, please do interrupt me. But ultimately, the SPIFFE community published this book called Solving the Bottom Turtle, which I've just dropped in the chat. The forms of attestation that SPIFFE — and SPIRE, which is an implementation — can do are based upon what it trusts, and we can define our own things.
J
A hypothesis here would be perhaps one of two scenarios — maybe there's a disaster recovery scenario, or maybe it's just building up more infrastructure for a new project — but there would be an attestation based upon the human identity. So I know we've got workload identities here; maybe we would link it to a physical device, but there has to be that initial trust relationship, and we went through a number of different other options.
J
While looking at this, really SPIRE is the closest we can come to something that's well standardized and battle tested. Basically, the end of this book's got a load of interesting use cases that may be reaching the limits of my expertise — which is not expertise; knowledge, let's say.
C
I've done a little bit of work on this, and a lot of thought over the past few months, and you know, I think it's that for each of these steps in a build process, there's an agent that's performing that build step — maybe it's, you know, your Go build, right? So there is a process that we can identify, with the SHA sum of that container
C
that's doing that. And because of that, what SPIFFE does is give you that identity: that agent is then given that NPE — that non-person entity — identity by SPIFFE. So then we can use that within, you know, each of the build steps to say, hey, each of these agents provided these raw materials to this end product, right? And I think what SPIFFE does is it gives you those short-lived certificates.
C
So you can say, hey, these raw materials will spoil if they're not put into the end product within this period of time. And I think when we have the end product, that's where we can do the final signature using whatever we need to, based upon a threat model — maybe that's, you know, your PGP key, or using TUF, or, you know, you have a hardware device that's signing it. But that's going to probably be more organizational policy than anything else.
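The "raw materials spoil" idea can be sketched as an expiry check on each build step's short-lived attestation at final assembly time. The timestamps and field names below are illustrative assumptions:

```python
# Sketch: each build-step attestation carries its short-lived
# certificate's expiry; final assembly rejects any input whose
# attestation has lapsed, i.e. whose raw material has "spoiled".
from datetime import datetime, timedelta

def unspoiled(materials: list[dict], assembled_at: datetime) -> list[str]:
    """Names of inputs whose short-lived attestation was still valid
    when the end product was assembled."""
    return [m["name"] for m in materials if m["expires"] >= assembled_at]

t0 = datetime(2021, 1, 13, 12, 0)
materials = [
    {"name": "go-build-step",    "expires": t0 + timedelta(minutes=30)},
    {"name": "stale-dependency", "expires": t0 - timedelta(hours=2)},
]
fresh = unspoiled(materials, t0)
```

Anything not in the fresh list would have to be rebuilt before the final attestation and signature are produced.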
C
But I think it's that concept of these raw materials, and ensuring that they don't spoil, and understanding what goes into those raw materials. And in that final attestation, we look at all those raw materials and say, hey: do we have a trace on the build to make sure that there's no debugger attached to it? Are our virus definitions within a certain date, right? So then we can look at all that, and then give that final attestation step, and then publish that to our infrastructure.
A
And that's the bit that I perhaps didn't explain particularly well, but that's the bit we're trying to threat model and get our heads around, because at the moment we're using it to effectively bootstrap trust for the pipeline itself, so that we get that understanding of trust. We looked at potentially using that to sign the actual artifacts, and then the conversation, as you're suggesting, is: do we use that, and then a long-lived credential, to sign the ultimate build artifact, and we validate that single credential?
A
Or do we have the ability to look at the intermediate signatures for each one of those build processes? I think there's pros and cons to each one of them. I know you and I have had a conversation about this separately.
A
I think that's definitely something we need to dig into and find out, because, yeah, I kind of want to throw out the model and see what the implications are. Because at the end of the day, if you've still got a single point of exploitation, where you just hack the final signature, do you get additional benefit from using those individual signatures at each individual build step? I think there is still value there, but what's the trade-off? Yeah.
D
Sorry — two things. One: we are almost at the top of our time, and I think that we would not be doing justice to the topics at hand. Thank you all for such a great presentation. I don't know how we could have you all back, maybe on another meeting, to continue; there were so many good topics that I think warrant a double click. And then, somehow, Jonathan —
D
If you could take the lead on figuring out, on your ticket 501, how the different folks could collaborate, and sound out where the landing spot is to, you know, provide some of our inputs, that would be fantastic. Sorry to cut everyone short; I'm just trying to be conscious of everyone's time — we're at the top of it. But if it makes sense, maybe, Justin, we could collaborate and see how we can have the folks back on to continue this discussion.
A
Yeah, absolutely. I mean, the aim is really to have more collaboration, so whether we structure it as a working group or some kind of — well, let's put it on the ticket and we'll carry on from there.
F
Yeah, and we can schedule something in the future as well, if there are enough people that want to participate in the discussion. I just want to make a really quick announcement: we have a presentation next week which may be interesting to some of you on this call, because it's also related to signing. It's a public ledger for supply chain, called Rekor, basically trying to mimic Certificate Transparency, but for supply chain metadata.
D
Yeah, thank you so much, Brandon. Looking forward to seeing everyone — great conversation today. See you next week. Thank you. Cheers.