From YouTube: Supply Chain Integrity WG (August 17, 2022)
C: We got to the big milestone for RubyGems, which is that MFA is now required for the owners of the most downloaded gems.
A: Cool, all right. Small group today, so, I mean, I'd love to hear the updates and the FRSCA demo, but I don't know if you want to wait for a bigger audience, Mike; either way, we can kick it off. So, anyone new on the call that wants to say hi? Trevor, you look like, maybe, have you been here before?
D: Primarily we're getting started in supply chain security as well, our group at NC State. They just won this Frontier award from the NSF that covers the next five years, targeting supply chain security.
A: Yeah, cool, okay. Just a reminder: these meetings are recorded and then uploaded to YouTube too, so folks that can't be here can always watch the recording. All right, so yeah, we kicked it off with a nice announcement with RubyGems; there's a link to the announcement there. They're requiring MFA now, so that's awesome, good work on that one. And then the next thing on the agenda is John Speed, who wants to talk about the compromises data set. So, do you want to kick it off?
B: Yeah, glad to. Hi everyone. I think I've met many of you, but my name is John Speed. I work at Chainguard, where I do research and development, and one of the things that has been slowly brewing is the idea of an open source software supply chain compromises data set. This actually began even before I was at Chainguard, on an issue.
B: If you click on that issue link, it should lead, yeah, exactly, to a long thread. It'd be too much to read now, but Abhishek and Kim and others said: hey, we should maintain a list of these open source software compromises; it would be useful for a variety of reasons, and there happen to be data sets that are related, but none of them quite like this.
B: I'm glad to talk about that. At the time, thinking this would be a relatively simple idea, I said: hey, well, I've actually been maintaining one of those data sets, but I'm leaving that place (I was at a place called IQT Labs), and it would be nice for the OpenSSF to host it, improve it, and make it better.
B: So what happened is, over a series of months this spring and early summer, a number of persons, if you click into this document, collaborated to figure out what such a data set would look like and what it would do. It looks very modest; I swear it took a surprisingly long time to figure out all the stuff and agree, and it's only a page and a half. But the idea is to have a data set hosted by the OpenSSF, potentially under the auspices of this working group.
B: There's actually already a private repo for it, called something like, you know, "open source compromises data set"; I forget the exact name the private repo has. We would propose that it tracks open source software compromises. That's not just malicious compromises, though those would be included, but also vulnerabilities that are known to have been exploited.
B: So this is not just a rehash of CVEs, since there are many CVEs for which there is no evidence of exploitation, and not all CVEs are open source components. And how it would work is:
B: There would be these documents, YAML documents, that are stored, with, I think, the things you would expect: a name, a class of attack, some other basic information, what ecosystem. But this is all subject to change, and my reason for bringing it up today is that, if it's to be hosted under this group, it would be wise to get feedback from anyone who is curious and interested.
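As a sketch of what one of those entries might look like, based only on the fields mentioned above (every field name here is illustrative; the actual schema was still under discussion at the time):

```yaml
# Hypothetical entry in the proposed compromises data set.
# Field names are illustrative, not the agreed schema.
name: example-package-takeover
class-of-attack: account-compromise   # e.g. typosquat, build compromise, ...
ecosystem: npm
date-discovered: 2022-08-01
references:
  - https://example.com/advisory      # placeholder reference
```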
B: I think this data set will be an asset in the long term to the OpenSSF integrity working group and to anyone interested in software supply chain security, specifically open source software supply chain security. So feel free to leave comments and ask questions; I can talk about it more right now, too. The proposal, concretely, is that I'll give it two weeks for anyone to ask questions, make comments, throw tomatoes.
B: You can DM me on Slack if you don't want to do it through here. And then what I would propose to do, with anybody who's interested, is start prospectively: making this repo public and then making entries for attacks that happen in the future. We would do that for a few months, and if it seemed useful and it worked, then we could try to add in past attacks. But that's kind of one of those things that's just hard to do.
B: There's only so much time in the day, so how we make it easier is just collecting data on new attacks. If you have questions, I'm glad to take them. Thanks to Kim for letting me have a slot today.
C: Yeah, I think this is a great idea. I was curious about how you see it, or whether you see existing firms contributing to this; a lot of security firms have their own data sets, including exploit detections, which is really useful.
C: The chain of thought that led me to that question was actually indirect, which was: maybe we could talk to FIRST, the Forum of Incident Response and Security Teams, who are the custodians of CVSS, but also of one called EPSS, the Exploit Prediction Scoring System, which relies on data from, I can't recall which vendor, Synopsys or Qualys or one of them.
B: Yeah, I would be glad to talk to those persons, especially if you knew them and were willing to make an introduction, so these people knew what was going on; I would enjoy that. And I certainly think, as a stretch goal (just because I'm not an expert, I'm unwilling to make it a permanent goal), it would be very nice if this data set could be helpful to threat intelligence feeds.
B: Or, you know, people who want to act, not simply researchers; I myself have been a researcher, but people who want to operationally use this data. I think that would be enhanced by talking to the parties you described. So if you're willing to do an introduction to Jax: yes, please.
D: So I actually just had a conversation with someone from Checkmarx when I was at DEF CON about, I think, this kind of idea. Oftentimes you'll find an open source project might have been compromised.
D: You know, let's say somehow, miraculously, jq 1.7 is released, right, and it's actually a compromise of an account. I mean, everyone pulls that, right? And so what often happens is the actual repo that was compromised is deleted altogether, so there's not actually a fingerprint of that any longer. So I think what you're talking about, John Speed, is addressing that, because, sure, it might be a known thing that could be exploited.
B: I do think that is what I'm trying to address. I mean, I think I've heard at least one registry maintainer concede...
B: You know, concede that some registries right now just blow up bad artifacts that get found, and they disappear. It's a shame on a couple of different levels: both that the artifact no longer exists and that there's no record of the artifact ever existing. This is at least the record of the artifact.
B: The actual collection of the artifacts themselves, I think, is a better fit for other endeavors, especially Marc Ohm's Backstabber's Knife Collection data set, but I hope I can at least partially address that.
C: I had a related note, which is: we've discussed the idea of storing the artifacts, or collecting artifacts, in the Securing Software Repositories group, because it is a common researcher request to get a hold of those. And, you know, we want to have a collection, but we don't want to just hand it out willy-nilly.
B: Yes, yeah, strongly agree. I'm sure you've discussed this; I think the Python Package Index maintainers feel similarly. Not that I have anything you haven't thought about more deeply, but if I can help, or you think there's some sort of way, or there's something I can do to make that a more likely reality, please let me know.
E: Cool. So, yeah, I just wanted to give folks who maybe aren't in the SLSA meetings a quick update on some of the work that's happening on that front. So we've been doing a big push regarding hitting 1.0.
E: So that means it's going to be an official sort of production release, as in: here's a set of requirements that we feel is something we want to put out there and really drive folks to adopt. And with that have come four work streams; I'm not going to overload the term, because depending on where you are in the OpenSSF there are a lot of different work streams; anyway, there's a bunch of working meetings for these things.
E: So first off is the specification meeting. I put the link to these SIGs, or whatever we want to call them, inside the notes for this meeting, so it has all the times and notes for those meetings. But the main thing is: there's a specification meeting.
E: The specification meeting is focused on actually defining the requirements, as well as the JSON specification for the provenance metadata. So that's one of the work streams that's happening, and there's a bunch of related work on that.
E: Next up on that list is the tooling meeting. That's one that I co-lead, and that one is focused on identifying what software is out there that is looking to either, you know, be a SLSA builder, or work to ingest SLSA, those sorts of things; as well as identifying gaps in tools, and potentially gaps in existing tools that you might want to integrate with SLSA, for example Jenkins, or something like that.
E: So that's some of the stuff that we're doing in the tooling meeting, and that's focused a little bit more on hands-on-keyboard...
E: ...like actual development work. Then there is the positioning group, mostly led by Melba, and I believe Bruno is the backup on that one. The positioning group is focused around identifying different areas in the community where similar work is done and where we can collaborate, and where we should make sure that, if folks aren't aware of SLSA, they're made aware of it, because we don't want folks off in another corner of our industry to just...
E: We want to make sure that nobody just reinvents SLSA again, and then we have potentially competing standards and that sort of thing. And actually, I think Melba is on, if you wanted to maybe talk a little bit more about the positioning group.
F: Sure, yeah. We're also trying to do the mappings, like Mike said. We have a spreadsheet to say: okay, these are the SLSA levels, and if you are following, let's say, NIST 800-161, these controls are met by SLSA levels one, two, three; the requirement for provenance, as an example. So we're looking for feedback.
F: That way we can finalize it and bring it to the OpenSSF so that it is an official OpenSSF document. Additionally, we're trying to make a visualization of the end-to-end supply chain, how it maps to SLSA and how it maps to the other frameworks, and that's the doc, or the PDF, that I put in the agenda. I already got notes and feedback from the last meeting, and I'm looking to expand to other members in the community.
E: Cool. And then the other group, which hasn't been spun up quite yet, is the adoption group.
E: So once we have a real good understanding of what the specification looks like for 1.0, and we have a good idea of what needs to get done from a tooling perspective for 1.0, and, as Melba had mentioned, once we start having those conversations with the right groups and start doing the right mappings, then we'll have a good story for how we want folks to start adopting SLSA at a broader level. When that happens, hopefully in the next few weeks, probably in September, we will spin up the adoption work stream. That stream will be focused on driving adoption: getting average folks and projects to start adopting the SLSA specification and the SLSA tooling, as well as helping them out through stuff like the mappings, so that they have something to look at, as Melba had mentioned.
G: Thank you. I was interested to know where this lies in terms of the interaction between open source projects, which are distributed to a wide group of people, and customer-supplier relationships.
E: So it's for both. The idea is, you can imagine that a vendor provides a SLSA attestation to their customers. Or, potentially, one of the other things being discussed in the positioning meeting and the specification meeting: there are some things we're looking at if, let's say, somebody says, hey, we believe elements of the metadata associated with our build are potentially proprietary.
E: There's discussion about, you know, third-party auditors, under NDA and that sort of thing, being able to provide third-party attestations on other people's build systems, and that kind of thing.
A: Yeah, I was just gonna add: the original motivation of SLSA was for open source projects and helping the open source ecosystem, but there's no reason why the framework and the specification can't be applied to closed source vendor software as well. I think it still holds true.
G: Thank you. I was interested in this because part of the provenance specification says that formats are agreed upon between the consumer and supplier, and it occurred to me that it would be very hard to please everyone when you have a potentially unlimited number of customers.
E: Yes, but I think what we're trying to drive is: if, for example, you have to use something that's not the SLSA provenance specification for some reason, we're saying that still following the SLSA requirements is still valuable. But the thing on that front: we highly recommend you use the SLSA provenance specification if you can, because it's slowly but surely becoming a widely adopted spec in the space.
G: Thank you, very interesting, and looking forward to seeing how it develops in the future.
A: Yeah, the last thing: the SLSA meetings are tomorrow, the day after these meetings, at the same time. So if you want to join kind of the main overarching meeting, we welcome all new folks to that one. Cool, all right, moving on. Last topic: looks like we're getting a nice demo. So, Parth, do you want to take it away? Feel free to take over the screen too.
H: Yes, all right. I think you need to stop sharing. Okay, perfect.
H: All right, hope everybody can see this. Is this readable? I can make it a little bit bigger; let me know. Does that look good? All right. So this project is called FRSCA, and it's under this working group, the integrity working group. So I kind of want to show off some of the features. I think a lot of people are aware of what the focus was: this is supposed to be based off the reference architecture.
H: So that's what I have here: the CNCF reference architecture for the Secure Software Factory that got created. So it was kind of an implementation of that, with multiple different pieces all accounted for. I'm sorry.
E: Oh yeah, and just one thing to highlight: it's a combination of both the CNCF Secure Software Factory reference architecture and trying to be the cutting edge of what you can achieve with SLSA, as well as similar sorts of frameworks like the SSDF.
H: Right. So I'm just gonna walk through those major pieces. I think a lot of people are aware, but just for people that are not, basically, right, this is the whole...
H: The build environment that we're using is Tekton. For the identity piece, workload attestation and node attestation, we're using SPIRE. The pipeline observer is going to be the Tekton Chains piece. And then, for runtime visibility, which is going to be part of this demo, it's going to be Tetragon, which is actually in this separate cluster.
H: ...here that's running. And then we also have Vault in here. Vault is integrated with SPIRE using OIDC so that it can fetch the signing keys, so that keys are no longer being stored as a secret in the Kubernetes space; you can have them stored in Vault, and using the transit plug-in, it can do the signing for the images and so forth without the key ever leaving Vault.
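To make the "key never leaves Vault" point concrete, here is a minimal sketch of what a client sends to Vault's transit secrets engine: only a digest of the payload goes over the wire, and Vault signs it with a key it never exports. The address and key name are assumptions for illustration; FRSCA's actual wiring goes through SPIRE-issued identities and cosign's Vault KMS integration rather than hand-rolled requests.

```python
import base64
import hashlib

VAULT_ADDR = "http://vault.vault:8200"   # assumed in-cluster address
TRANSIT_KEY = "cosign"                   # assumed name of the transit key

def sign_request(payload: bytes) -> tuple[str, dict]:
    """Build the URL and JSON body for Vault's POST /v1/transit/sign/<key>.

    Only the (base64-encoded) SHA-256 digest leaves the client; the private
    key stays inside Vault, which returns the signature.
    """
    digest = hashlib.sha256(payload).digest()
    body = {
        "input": base64.b64encode(digest).decode(),  # transit expects base64
        "prehashed": True,                           # we hashed client-side
        "hash_algorithm": "sha2-256",
    }
    return f"{VAULT_ADDR}/v1/transit/sign/{TRANSIT_KEY}", body

url, body = sign_request(b"example image manifest")
print(url)
```

In a real deployment the request would be authenticated (here, via the SPIRE/OIDC integration described above) and the returned signature attached to the image.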
H: So the main piece that I wanted to focus on today is how this attains SLSA level 3. Let me close this out. So this is the main piece that we're going to talk about, and this diagram kind of explains exactly how it achieves this. What happens is that the SPIRE server and SPIRE agent, the SPIRE server, is registered with both the Tekton Pipelines controller and Tekton Chains.
H: You can see that in this diagram. What happens is that there are multiple SVIDs generated, right, multiple short-lived certificates that are generated: one for the TaskRun itself (so, as a TaskRun is running, it's going to generate a specific SVID that's valid for it only), and then there's also another SVID that's for the controller.
H: So both of these two certificates are used in conjunction to sign and verify different aspects as the task itself is running. The first piece that I'll show is basically, I think, a good example of exactly what happens. I ran this quickly beforehand, just so that we didn't have to wait for it to finish, so you can see...
H: Let me finish uploading. Okay, so this is using `crane ls`. Basically, you can see that a signature and an attestation were associated with this, right. So Chains automatically does the signing, creates the SLSA attestation, and pushes that into OCI, so you can see that this image that got created by this task was able to be signed and attested. But the main piece is: how does SPIRE integrate with this?
H: The first thing you see in here is that there are a bunch of keys and signatures now. Let me scroll up a little bit. All right, so what happens is that whenever a task runs, when a task actually instantiates, it's gonna request a certificate from SPIRE, and SPIRE is gonna validate that it has a proper UID, it has a proper...
H: ...you know, service account, namespace, everything that it's running in. And if it validates that, it's going to provide that specific TaskRun an SVID, a certificate that it can use for signing. So you can see that SVID right here; here's our certificate, stored in here as a status annotation. And what is happening, in this specific case, is with the results that are coming out: basically checking to see that the results are not modified during the execution and after the task is actually finished.
H: So in this case there are two results coming out: the actual image ID and the digest. Those are the two things that are coming out. So what happens is that it, and that is up here...
H: I believe, let me see, let's go down. Yep, so here it is: these two things, the image URL and the image digest, are the two results that are coming out. So it signs those two results, to make sure that at the end, once the task is actually finished, or the pipeline is actually finished, they can go back and verify: yes, these are the results that came out; they were not modified either during or after the pipeline finished.
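The result-signing idea described above can be sketched in a few lines. In the real system the TaskRun's SPIRE-issued SVID (an X.509 key pair) does the signing; as a simplified stand-in that keeps the sketch dependency-free, this uses an HMAC key to show the tamper-detection property: sign the results once, and any later modification fails verification.

```python
import hashlib
import hmac
import json

# Stand-in for the per-TaskRun SVID private key (in FRSCA this is an
# ephemeral X.509 key pair from SPIRE, not a shared secret).
TASKRUN_KEY = b"ephemeral-taskrun-key"

def sign_results(results: dict) -> str:
    """Sign the task results over a canonical JSON encoding."""
    canonical = json.dumps(results, sort_keys=True).encode()
    return hmac.new(TASKRUN_KEY, canonical, hashlib.sha256).hexdigest()

def verify_results(results: dict, signature: str) -> bool:
    return hmac.compare_digest(sign_results(results), signature)

results = {"IMAGE_URL": "registry.example/app", "IMAGE_DIGEST": "sha256:e3b0"}
sig = sign_results(results)
ok_before = verify_results(results, sig)        # untouched: verifies

results["IMAGE_DIGEST"] = "sha256:attacker"     # tampered after the run
ok_after = verify_results(results, sig)         # verification now fails
print(ok_before, ok_after)
```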
H: The other piece that it does is tamper-proofing. So, basically, as the pipeline itself was running, did somebody tamper with it? Did somebody change the image that was being used, right, the image ID for one of the steps? Did they change, like, an exit code or something, change something as the actual pipeline was running that invalidated the results?
H: So that's the other piece that comes into play, and that's using another SVID, the controller's SVID, which is this Tekton Pipelines controller. It takes the hash of this whole status, the status object here in the CRD, and it signs that using the SVID that's provided. So at the end, oh, actually, sorry, it's continuously checking to see if anything has modified the task itself, the TaskRun itself, as it's progressing through the pipeline.
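The controller-side check described above boils down to hashing a canonical encoding of the TaskRun status on each pass and comparing it to the hash recorded (and, in the real system, signed with the controller's SVID) on the previous pass. A minimal sketch, with a made-up status object rather than the actual TaskRun CRD schema:

```python
import hashlib
import json

def status_hash(status: dict) -> str:
    """Hash a canonical JSON encoding of the status object."""
    canonical = json.dumps(status, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

status = {"podName": "build-pod", "steps": [{"name": "build", "exitCode": 0}]}
recorded = status_hash(status)   # what the controller signed last reconcile

# No outside interference: hashes match, the pipeline stays valid.
unmodified = status_hash(status) == recorded

# Someone edits the CRD behind the controller's back (flips an exit code):
status["steps"][0]["exitCode"] = 1
tampered = status_hash(status) != recorded   # mismatch invalidates the run
print(unmodified, tampered)
```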
H: If it has been modified by anything except for the pipeline controller, then it invalidates the whole pipeline, and the results will no longer be signed. So at the end, basically, Chains is actually validating that the results are valid. You can see here there's a condition that's set, that all the results are validated by SPIRE, and it also checks...
H: ...this annotation here, to see that the status at the end matches and everything lines up. If it doesn't line up, that automatically invalidates it, which means the image is no longer signed and it's not pushed to OCI. So at the end you might get an image, but it's not signed and there's no attestation associated with it. So it's going to fail the, you know...
H: ...the upstream admission controllers, such as Kyverno, are going to fail those kinds of checks, where they're checking for signatures and attestations.
H: Yeah, so I think what I'll show right now is basically a long-running task, just a quick example. You can see this is the actual task that's getting run; very simple, just a hello world. What I'm gonna simulate in this is that, you know, like Mike was saying, some bad actor, some cluster admin, comes in and says: oh, I wanna modify this image, right, I wanna use...
H: So what this is going to do is that, as it's running, the piece I told you about, which is continuously validating that the actual task object itself hasn't changed, is going to catch that. So as it's running, initially it's going to start out with Ubuntu, and it's going to change to some other image.
H: One that's not Ubuntu anymore. And Tekton Chains, along with SPIRE, is going to catch that that has changed, and it's going to invalidate the whole result. So let's just let this finish running. Right, so it finished running. If I scroll up here, you can see some of this stuff has changed, but basically, as you can see here, it's no longer verified, so it failed. The results didn't come out properly either, so it failed the results check...
H: It also failed the verification check, because, if I scroll down here now, you can see the image has changed to something that's not Ubuntu. As it was running, I modified the image, and then it caught...
H: ...that. So now, if I scroll, the pipeline tries to check it multiple times, and it fails. It tried to sign it, and it didn't, because it was no longer valid; SPIRE is not going to sign it, and there's no annotation associated with it.
H: Yes, any questions?
H: There are a lot of components that work along with it, but that's the main demo I wanted to show: how it is providing, you know, SLSA level 3 non-falsifiable provenance, as well as how it is tamper-evident, right. So if anything ever tries to do anything during the build time, it's going to catch all those kinds of things.
A: Speaking of SolarWinds, Mike, you might actually know the answer to this one too. We had a presentation at one of the conferences, and Trevor, who was at SolarWinds at the time, did a presentation on how they rebuilt their systems after the attack, and I believe it looked very similar to this sort of architecture.
E: Yeah, they started using Tekton. They also started building across multiple installations, right, that were similar but not identical.
E: That allowed them to run the same sort of build and then run a bunch of tests against those things, to make sure they largely built the same sort of thing. Their builds were not truly reproducible, so they couldn't actually sort that out bit for bit, but they could at the very least run it against multiple build clusters, and so you would have to compromise multiple of those clusters in order to pull that off, yeah.
E: So this is similar, and yeah, a lot of the stuff that we brought into things like the Secure Software Factory reference architecture is those sorts of learnings from things like that as well. And I think the thing here that we really want to drive home...
E: ...is that, you know, a lot of this sort of stuff in FRSCA is stuff that you just get for free, right. When you install FRSCA, this sort of stuff is already, by and large, set up for you. You don't have to spend a lot of time configuring all the different pieces to make sure that they wire together correctly, and yeah.
E: So if folks want to understand more about that, we have the FRSCA community meetings every other Wednesday. It's not this week, but it'll be next week, at, was it, 10 a.m. Eastern time, which I know doesn't work for everybody, but it's one of the few times where there's not an overlap with another OpenSSF meeting. And so, you know, we're definitely looking for more folks.
H: Yeah, exactly. I kind of just want to reiterate that it's all behind the scenes, right: all the SPIRE stuff, all the other things it's doing. It's not really adding any kind of extra work for the actual developer; they're just checking in their code. Basically, I had this triggering on a pull request, automatically, so it's connected to my GitHub.
H: It triggers the building of the pipeline on my pull request, and that's it: it's going to create the image at the end of the day, and if all the checks and verifications pass, you're good to go. You can push it into production or put it into a test environment, whatever you want to do, and it'll do all that for you; there's nothing extra to be done. At the same time, Kyverno is running here, which is the admission controller, and again, that's also continuously validating: are you using the proper images, images that are signed, that you trust?
H: Are you using images that have specific attestations associated with them? All that kind of stuff, both as it's built during the build process and, you know, if you're pushing it out to production, right; it's gonna continuously validate all those kinds of things, basically creating a zero-trust environment for your build. And then, moving on, I want to show this real quick also.
H: So this is still a work in progress, but the other piece that we're trying to do, let me get something a little bit bigger here, is runtime attestations: creating runtime attestations that are separate from the build-time attestation.
H: So Tekton Chains will create SLSA attestations, right, for the actual build. Now we also want to go a bit further and capture what is actually happening during runtime for the actual build: as a task is executing, what kind of syscalls, processes, what's actually happening, right. So if I want to know, in the future, that it reached out to, you know, malware.com or something during the build, and I wasn't aware of that, right, then I want to know that piece.
H: So if you can create attestations associated with those as they're building, we can again use, you know, future policy checkers such as Kyverno to check: did this piece reach out to the internet? If it did reach out to the internet, and I didn't expect that, then you can automatically say, okay...
H: ...I'm not going to allow it to run in my production environment, because it reached out to the internet when it was not supposed to. So I had this integrated with Tekton Chains. Basically, again, it's nothing that the developer needs to do; Tetragon runs in the background along with Chains, and it's monitoring all the tasks that are running. So once it runs... let me just kick that off here.
H: You can see that this is running. Basically nothing really changes in terms of how Pipelines or Tekton Chains runs; the only thing that really happens is at the end. So this one's going to run; I'm going to look at the older one that I have here. You can see in here that now, instead of just the build SLSA attestation that got generated by Tekton Chains, there's also a runtime payload.
H: This runtime payload is basically another predicate type that we're actually going forward with now. I'm working with the folks from in-toto and such, so we can create a new predicate type for the actual runtime and capture this.
H: You know, the specific information that we need. So, just to show you what this kind of looks like: this was the runtime payload, and again, this is just a rough draft. Basically, it prints out a lot; I'm simplifying the output here into a string format, because, of course, it puts out a lot of information. So I think the next piece of this is basically: what should this predicate look like, and what kind of information do we want to capture in here?
H: I know we want to capture, you know, TCP connections, all those kinds of things. Any other things that we want to capture, like, okay: is it touching specific files? What is it doing? Is it modifying things? We want to capture those kinds of information. So that's the next step: it's getting a lot of data, but defining what data we care about and what data we want to keep, so that we can use that in the future...
H: ...to create policies that we can use for admission controllers and such. So you can see in here it did a bunch of things, and it automatically generates this for each of the different tasks that get run. And again, you can push it to OCI, and again it will do signatures associated with it, everything kind of similar to how Chains does everything. So, any questions on this piece?

G: [question inaudible]
H: So you would not get the... because Chains expects Tetragon to be enabled and running, right. So Chains itself would start throwing errors, understanding that Tetragon is not there, it's not configured, and so on and so forth. And also, at the same time, this runtime annotation would not be generated, right. So that's another piece that you would see: if you're expecting a runtime attestation to be created and pushed to your OCI registry, let's say, right, it doesn't exist anymore.
H
So, even if that happened, your admission controller in the future should be able to catch that, saying: I expect a runtime attestation to exist for this piece, and it doesn't, so now I know something upstream has gone wrong, and I'm not going to allow this release to be pushed into a production environment.
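The admission gate described here can be sketched as a simple check: deny any image that lacks the expected runtime attestation. The lookup function and predicate type below are hypothetical stand-ins, not the actual FRSCA admission controller.

```python
# Sketch of the admission decision described above: reject any image that
# lacks an expected runtime attestation. fetch_attestations() stands in for
# a real OCI-registry attestation lookup (hypothetical).
REQUIRED_PREDICATE = "https://example.com/runtime-observation/v0.1"  # hypothetical

def fetch_attestations(image: str) -> list[dict]:
    # Stand-in for querying the registry for attestations attached to an image.
    fake_store = {
        "ghcr.io/example/app:good": [{"predicateType": REQUIRED_PREDICATE}],
        "ghcr.io/example/app:tampered": [],  # attestation was never produced
    }
    return fake_store.get(image, [])

def admit(image: str) -> bool:
    """Allow deployment only if a runtime attestation exists for the image."""
    return any(a["predicateType"] == REQUIRED_PREDICATE
               for a in fetch_attestations(image))

print(admit("ghcr.io/example/app:good"))      # True
print(admit("ghcr.io/example/app:tampered"))  # False
```

The point being made above is that a missing attestation is itself a signal: something upstream broke or was tampered with, so the release never reaches production.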
G
D
So I'll just jump in next because I have my hand raised, so I don't need to be called on. The runtime attestation seems really neat. I'm kind of visualizing it in my brain like you're playing a racing game and you have to get to the next checkpoint to continue.
D
So if it's not there, then it doesn't continue. Curious, though, if you have thought about, and I know this is under very active development: for the historic attestations, is there some sort of cleanup? Because I could see that getting a little deep and messy, right?
E
Yeah, I mean, this sort of stuff would get associated with whatever is getting built. So if you clean up old artifacts, this would get cleaned up with those old artifacts. But the idea should be to associate it with every artifact that you are potentially building, especially artifacts where you say: hey, this is intended to be a production artifact.
E
You want to associate all those different attestations with them. And then the idea also would be, and this is something that we're working on in conjunction with Google, the thing that I believe Brandon Lum showed off at either this meeting or at SLSA recently, a project that we're calling GUAC now. It's a graph for associating all this metadata, so that you can look at it historically. And some of the stuff that we would love to do long term is start to detect anomalous behavior: like, if a build mostly does these five things time and again, and all of a sudden it starts doing a completely different set of things.
E
You know, that is not necessarily an indication of malicious behavior, but you might want to highlight: hey, whoa, hold on, this didn't reach out to the internet before, and now all of a sudden it's trying to reach out to the internet. Is that a bad thing?
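The anomaly signal described here, a build suddenly doing things outside its historical baseline, can be sketched as a set comparison. The baseline and the event names below are made up purely for illustration.

```python
# Sketch of baseline-vs-observed behavior comparison: flag anything a build
# does that it has never done before. Event names are illustrative.
baseline = {
    "read:/workspace/source",
    "write:/workspace/output",
    "exec:/usr/bin/go",
}

observed = {
    "read:/workspace/source",
    "write:/workspace/output",
    "exec:/usr/bin/go",
    "connect:203.0.113.7:443",  # new: network egress never seen before
}

anomalies = observed - baseline
for event in sorted(anomalies):
    # Not necessarily malicious, but worth surfacing for human review.
    print("anomalous:", event)
```

As the speaker notes, a new behavior is not proof of compromise; the value is in surfacing the change so someone can decide whether it was a mistake or something worse.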
E
Is it just a mistake, or whatever? And this sort of thing helps us detect that. Because the key pillars behind the Secure Software Factory, and therefore FRSCA, are: you want to make sure that for the pipelines you're running, like the actual pipeline-as-code that you've written up, there's policy behind it. So that, generally, you know: are you running the right steps in the right order?
E
Are you... you know, if your security posture demands that you should have an SCA scan, or a security linting scan, or whatever, if that's demanded, we'll make sure that it's actually running. We don't want to sign off on images that didn't go through the right process, right? So that's one piece of it, and we're controlling that through stuff like the Kyverno policy, as well as our abstraction layers built on CUE. And then we're also looking at other stuff.
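The "right steps in the right order" idea can be sketched as a simple ordered-subsequence check. In the stack described here that logic actually lives in Kyverno policies and CUE, so the Python below is only an illustration; the step names are hypothetical.

```python
# Sketch of the pipeline policy logic described above: every required step
# must have run, in the required relative order, or the build fails the gate.
# Step names are illustrative, not FRSCA's actual task names.
REQUIRED_ORDER = ["fetch-source", "sca-scan", "build", "sign"]

def steps_in_order(executed: list[str]) -> bool:
    """True iff REQUIRED_ORDER appears as an ordered subsequence of executed."""
    it = iter(executed)
    # "step in it" consumes the iterator, so each required step must be
    # found *after* the previous one -- enforcing the relative order.
    return all(step in it for step in REQUIRED_ORDER)

print(steps_in_order(["fetch-source", "sca-scan", "build", "sign"]))  # True
print(steps_in_order(["fetch-source", "build", "sign"]))              # False: scan skipped
print(steps_in_order(["sca-scan", "fetch-source", "build", "sign"]))  # False: wrong order
```

This captures the point about not signing off on images that skipped a mandated step such as an SCA scan.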
E
Like the node and workload attestation stuff that Parth had shown with SPIFFE/SPIRE. That helps us say: okay, great, if somebody messes with a thing, like an admin messes with and exchanges a thing, or if another service inside of the build with adequate permissions tries to mess with the thing, we want to be able to detect it. And then the last piece, which Parth showed as well, is: okay, runtime visibility helps out, because SPIFFE/SPIRE and that component can detect nefarious behavior.
E
Like, you can imagine some of the npm packages that have been compromised in the past, where the package starts to poke around, look at the .ssh folder and all those sorts of things, and start trying to do stuff there. So you can imagine, with this runtime visibility: hey, if you start to detect that sort of behavior, whoa, that's bad, we don't want that.
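The runtime signal mentioned here, a build step suddenly reading something like `~/.ssh`, can be sketched as a path check against known-sensitive locations. In this stack the real detection happens at the kernel level, so this Python is only a toy illustration, and the patterns are assumptions.

```python
import fnmatch

# Toy illustration of the runtime signal described above: flag file accesses
# that a build step has no business making. Patterns are illustrative.
SUSPICIOUS_PATTERNS = [
    "*/.ssh/*",           # credential theft, as in past npm compromises
    "*/.aws/credentials",
    "/etc/shadow",
]

def is_suspicious(path: str) -> bool:
    """True if the accessed path matches any known-sensitive pattern."""
    return any(fnmatch.fnmatch(path, pat) for pat in SUSPICIOUS_PATTERNS)

print(is_suspicious("/home/builder/.ssh/id_rsa"))  # True
print(is_suspicious("/workspace/source/main.go"))  # False
```

A hit like this could block signing, as described next, or at minimum leave an audit trail for later investigation.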
E
You know, you can block that from getting signed eventually, as we build this out, or at the very least audit it, right? You have that record of what actually happened in the build at run time, and you can always go back and say: hey, this thing that we published to production is acting really strange. And then we go back and go: oh no, this is what happened, and we have that information available to us.