Description
For more Continuous Delivery Foundation content, check out our blog: https://cd.foundation/blog/
B: Thanks again for joining. On today's agenda, we'll start with a presentation and demo on binary authorization from David and Laurie from Google, a good and interesting topic to restart our meeting series. They'll talk about what they're doing when it comes to binary authorization, and we'll hopefully see a fun demo; we'll also see whether the demo monsters are with us or not and whether the demo works or not. Then our next topic will be a reminder, or heads-up, around the new initiative.
B: So, a few reminders about the discussions we have had since the formation of the SIG Software Supply Chain, and then how we could, you know, align our efforts with the reference architecture workstream. But before I hand it over to Dave, I wonder if anyone wants to introduce themselves, because I see many new faces, or new names, with us. Should we do that? Perhaps, so we get to know who is with us. David, would you like to go first?
B: You do that, then. I'll pick the next person on the list. Alex Misstorp, would you like to introduce yourself?
D: Hey, yes, my name is Alex. I work on Cloud Build, basically also where David himself works, and I was...
F: Yeah, I guess I did, but I'll just do it real quick. I'm Brett Smith, I work at SAS. I'm primarily an architect on our supply chain pipelines, and I've been talking about nothing but security since last year. So, super fun.
G: Hello, everyone. My name is [inaudible]; I'm from Google as well, same team as David, but my team is particularly working on the open source security side. Thanks.
K: Okay, sorry, it was [inaudible]; I'm still having difficulties. Anyway, I put it in the chat: I'm a project manager at SAS, and one of the teams that I support is the SLSA supply chain next-gen pipeline team. So you've already met Brett and Charles and Jill and Scott, and Eric. So I'm part of that posse.
J: Hey guys, I'm Tim Miller. I'm the CEO of a company called Prusari. We work on supply chain security, work with...
B: Thank you, Tracy. Thanks, Tracy. And does anyone else want to introduce themselves before we move on? I think...
A
Hi
everyone-
I
am
rajet,
I'm
from
jenkins
x,
the
g
stock
mendi
yeah,
it's
first
time
joining
here,
justin's
interesting
to
know
about
supply,
chain
security.
M: Hi, I'm Kara, Kara de la [unclear]. I work at the CDF. I've been in this meeting before, but I haven't been recently, so it's very nice to be here.
N: Yes, I'm Sam Magdy. I'm from Jenkins X; I'm a GSoC contributor on the supply chain security project. We look forward to improving the security of the Jenkins X supply chain.
B: Welcome, Osama. Okay, so if everyone has introduced themselves, maybe, David, it's your turn now.
C: Now, there we go, okay. So, just to expand my introduction a little bit: I have been at Google for a little over seven years. As of Monday I will be focusing full-time, solely, on Tekton; until now, I have been the engineering manager for Google Cloud Build, which is our hosted build-in-the-cloud offering at Google. I'm going to talk today about Binary Authorization for Borg.
C: One caveat I needed to add, sorry: I am a Google engineering manager, but please note that I'm speaking today on my own behalf and not on behalf of Google or Alphabet. The slides that I'm presenting have been approved for external presentation, but they have not been approved for public distribution, so I won't be sharing the slides. Obviously, the recording will continue to exist.
C: Generally speaking, we are focused on a specific risk and a specific enforcement check with Binary Authorization for Borg, and that is insider risk: the idea that I, as a Google engineer, could maliciously inject a payload into a Google product or service that causes that service to do something on my behalf that is not appropriate, in violation of our commitments to our users, possibly in violation of the law.
C: So, as a company, Google has a very strong security focus, and we have many measures in place to limit what we call insider risk; I think that's a common term, and many of you are likely familiar with it. Yes, we're concerned about external hackers, but the insider is really the actor we're concerned about when we talk about Binary Authorization for Borg.
C: So when we interact with user data at Google, there are a couple of different roles we might play, and I'll lay them out here; we're not actually going to end up talking about all of them. The first one is as an end user. I'm a Google employee, but I'm also a Gmail user on a personal basis, so I might authenticate directly against some Google service to see my own user data. I should not be able, as an employee, with my employee credentials, to access even my own personal user data from my personal account.
C: So that's what I'll call the end-user access that Google employees use all the time; we're not going to be talking more about that today. The second kind of access is when I, as a Google employee, access user data as part of my role. You can imagine the Google Cloud Build team gets a bug report from one of our customers. That customer authorizes us to do some debugging for them. We might access customer data as part of that debug process, and you can imagine this with any product.
C: We do have standards around that, including two-party authorization before we can access customer data like that, and access transparency reports that give our customers insight into what we looked at when we were using that access. But that is legitimate access that I'm doing as part of my role.
C: We might aggregate user data to provide aggregate metrics and statistics, to keep the service up and running smoothly, to ensure users are having a positive experience, and so on. But access to individual user data as part of the normal functioning of a service is not logged and tracked. So the white paper on Binary Authorization for Borg focuses on the threat model from the third scenario: I, as a malicious employee, inject a malicious payload into my own service in order to give myself inappropriate access to user data; i.e., I've abused my power in some way.
C: So I want to give an understanding of how we developed a solution for this that limits that kind of programmatic access to user data by the service itself. But first you have to understand how a service runs at Google. The basic process looks like this: a developer authors some changes to the code. At Google we have one large repo; you can read about that in other white papers.
C: Once that code is checked in, it's built. We use a build system internally that's similar to the open source Bazel project, which we are big contributors to and supporters of, and it does a standard compile to create a binary. The build system actually runs in an isolated, locked-down, hermetic environment. Employees can interact with builds in the sense of kicking them off and canceling them, but we are unable to access the build-time environment, and that's going to become very important down the line in ensuring our software integrity.
C: The build needs to be verifiable. That means it produces a build manifest. One piece of that is commonly called a software bill of materials; another piece is commonly called a provenance. Our builds produce signed certificates that describe exactly the source that went in, cryptographic hashes of any binaries or build artifacts that came out, what build parameters were used, what arguments were used, etc.
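In open-source terms, the closest analogue of such a signed build certificate is an in-toto attestation carrying SLSA provenance. A minimal, hypothetical example (all names, URIs, and digests below are placeholders, not Google's internal format):

```json
{
  "_type": "https://in-toto.io/Statement/v0.1",
  "subject": [
    {"name": "registry.example/app", "digest": {"sha256": "e3b0c442..."}}
  ],
  "predicateType": "https://slsa.dev/provenance/v0.2",
  "predicate": {
    "builder": {"id": "https://builder.example/trusted-builder"},
    "buildType": "https://example.com/hermetic-build",
    "invocation": {
      "configSource": {
        "uri": "git+https://repo.example/app",
        "digest": {"sha1": "abc123..."}
      },
      "parameters": {"flags": ["--release"]}
    },
    "materials": [
      {"uri": "git+https://repo.example/app", "digest": {"sha1": "abc123..."}}
    ]
  }
}
```

The subject carries the hashes of the outputs; the predicate records the builder, the source that went in, and the parameters used, mirroring the elements listed above.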
C: Once that artifact is built, it can be deployed to our production infrastructure, which is known as Borg. It runs in something called a Borg job; if you're familiar with Kubernetes, it's a very similar model. The deployment happens in a container format. That Borg job includes configuration that specifies the requirements for the job, the packages to run, runtime parameters, flags, arguments, etc. And that Borg job now runs as an identity independent from the human users, and it can interact with other jobs in production, including accessing user data.
C: The interactions between those Borg jobs are typically RPCs, and the job is provisioned with specific cryptographic credentials that I, as a developer, cannot access under normal circumstances. Those credentials identify the job, providing an identity for the job to use when making these requests to other services. So that's what the production environment looks like. Let's look at the threat model that Binary Authorization for Borg addresses.
M: Hey David, can I ask a question before you get too far? It's Tracy. I'm curious about two things. You said one of the work products you generate from the build is an SBOM and a provenance report. Can you share the data elements that you use in the provenance report?
C: I don't know if I'm allowed to share them; they're certainly internal. I don't know if we've released them. But what I can say is that the SLSA model, if you're familiar with that, is...
M: Exactly, exactly. And I'm curious what that offers above what the SBOM is providing you. And are you using SPDX, or are you using CycloneDX?
M: Okay, so it's probably more of a build audit, with another report that shows the provenance, is what I'm guessing. I'd love to see what those data elements are that you're collecting. One of the reasons why I asked that question is because, as you well know, we are starting to talk about a reference architecture.
C: Understood. If someone is taking notes, please make an action item for me to follow up on that, for Tracy and for everyone; I'll find out exactly what I'm allowed to share. I really don't know offhand, but I do stand by my statement: broadly speaking, SLSA is very, very well aligned with what we're doing internally at Google, albeit with, you know, different formats and perhaps even different naming conventions. We're certainly taking lessons.
C: I'll also add, before I go back to the slides, that I'm working in a split-screen environment here for the presentation. So if you raise a hand and I don't see it, by all means just chime in like Tracy did. Tracy, I'm assuming your hand is up from your... okay, there we go. All right, so, continuing onward: the mission of Binary Authorization for Borg is to ensure that production software and configuration has been properly reviewed and authorized, especially if that code can access user data.
C: So, to achieve that mission, we have a deploy-time enforcement service that looks at those deployment manifests before the Borg job stands up and makes a determination as to whether or not this job is even allowed to run. It verifies that the binary meets certain requirements at deployment time; you can imagine, for example, an enforcement that says SLSA provenance must exist, or must meet the following standards.
C: So, some examples of deployment checks: was the binary built from checked-in code, as opposed to code on a developer workstation?
C: You can imagine that, as part of my developer workflow at Google, I constantly build in-progress code that has not been checked into the repository, and I may even deploy that in what I'll call a developer environment. That, we want to allow. But if I try to deploy that in production, that we want to block; so there are different gateways for different deployments.
C: We would ask: is the binary built verifiably, reproducibly? Is it built from tested code? What was the testing suite? What were the code coverage standards? Have all the tests passed? Is the binary built from code that's intended to be used in the deployment, as opposed to other code having snuck in somehow? Every Borg job at Google has a policy. So, for example, a Borg job like that developer job I talked about, which has no need to access any real user data, has a policy defined but might have no requirements.
M: I'm sorry, it's me again; I'm so sorry. This stuff I can talk about all day, so I apologize ahead of time. So, on the build process itself: do you have a way to enforce, or restrict I should say, the particular directories that the build can pull code from? And do you do that at the time that the compiler reads the file?
C: So the answer is yes, but not quite in the way you're describing. In very simple terms, what happens at build time is that the first step in the build is to walk the graph out to the leaf nodes of all the sources and gather all the sources in what I will simply call an appropriate place. Once those sources are in an appropriate place, the entire build context is sealed hermetically, so that there can be no outreach to a source repository or to an artifact repository or anything like that. So, also...
C: Yes. Are you familiar with Tekton, Tracy?
M: I am.
C: Okay, so you can imagine that in Tekton I have a source fetch step, maybe git-clone, and then I have a build step, maybe Kaniko. And so I'm going to have different policy enforcement depending on which container is running and what it's doing, and my policy enforcement for that git-clone might control where it is allowed to clone from.
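A minimal sketch of the kind of pipeline being described: a fetch step followed by a build step, each running as its own container that policy can treat differently. Task and parameter names follow the public Tekton catalog conventions; the repository URL and image name are placeholders:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: fetch-and-build
spec:
  workspaces:
    - name: source
  tasks:
    - name: fetch                 # runs the git-clone catalog task in its own pod
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: source
      params:
        - name: url
          value: https://github.com/example/app   # policy could restrict allowed clone URLs here
    - name: build                 # runs Kaniko to build and push the image
      runAfter: [fetch]
      taskRef:
        name: kaniko
      workspaces:
        - name: source
          workspace: source
      params:
        - name: IMAGE
          value: us-docker.pkg.dev/example/my-repo/app
```

Because each step is a distinct container with distinct inputs, a policy engine can apply different rules to the fetch container (where it may clone from) and the build container (where it may push to).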
M: Thank you.
C: Sure. So, it's funny, I thought you were going to go here, Tracy, with your question; I was going to thank you for giving me the softball pitch. One of the things to think about here is that the systems and the components that enforce binary authorization itself need to be tightly controlled.
C: All right, so there are several different enforcement modes for binary authorization, and this becomes very important when you're trying to figure out how to set this up and run it in your organization for the first time. The first is deploy-time enforcement. I will call that single-shot enforcement, meaning that at the time I say to deploy an artifact, it is checked against the policy, which makes a determination as to whether or not it can be deployed.
C: The second mode is deploy-time audit, which means: audit all deployments, record what action you would have taken, make your determination and record it, but allow all deployments to go through. You might call this a dark launch; you might call this failing open. That enables us to turn on binary authorization for a newly proposed service and see what it would be doing, allow it, but then develop a policy that will enable us to lock it down.
C: You can imagine that, as part of the bootstrapping problem I described, you probably need to do something like this with your build environment in order to figure out what to allow and what to block; that is an iterative process. Finally, there's a continuous verification mode. You can imagine that I want to make sure that I have not deployed any assets that have the Log4j vulnerability.
C: I discover that late one Friday afternoon, let's say, and so I add a new policy that says an asset may not be deployed to production if it has this Java package installed. Well, continuous verification will go through all the production jobs and provide continuous verification that they are aligned with that policy.
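The three modes just described (deploy-time enforcement, deploy-time audit, and continuous verification) can be sketched in a few lines. This is a hypothetical illustration of the control flow, not Google's internal API; all names are made up:

```python
# Sketch of Binary Authorization enforcement modes: single-shot enforcement,
# audit-only ("dark launch" / fail open), and continuous re-verification.
from dataclasses import dataclass
from enum import Enum


class Mode(Enum):
    ENFORCE = "enforce"   # block deployments that fail the policy
    AUDIT = "audit"       # record the verdict, but allow everything through


@dataclass
class Deployment:
    image: str
    attestations: set     # e.g. {"built-by-tekton", "no-log4shell"}


def check(deployment, required, mode, audit_log):
    """Evaluate one deployment against required attestations; return allow/deny."""
    missing = required - deployment.attestations
    verdict = not missing
    # Both modes record what action would have been taken.
    audit_log.append((deployment.image, verdict, sorted(missing)))
    if mode is Mode.AUDIT:
        return True       # fail open: log only, never block
    return verdict


def continuous_verification(running, required):
    """Re-check already-running jobs against a (possibly new) policy."""
    return [d.image for d in running if required - d.attestations]
```

Adding a new rule (say, a "no-log4shell" attestation) and running `continuous_verification` over the live jobs is exactly the Friday-afternoon scenario above: existing deployments that violate the new policy are surfaced without redeploying anything.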
C: Just to set expectations here: I've got three more slides and then I'll go to a demo. So, there were several new features developed to implement binary authorization, including some emergency response procedures. The audit mode itself was developed as part of the transition.
C: The way we got here was to first make the business case for the benefit and why we needed to do this at Google. The talk was about reducing insider risk, robust code identity, and simplified compliance: when we needed to do audits and provide audit trails later on, we would have them all in place already. With regard to reduced insider risk, this was one of the primary drivers behind our development of binary authorization at Google: to prevent any single individual from obtaining unauthorized access.
C: Binary authorization does not ensure that a group of individuals could not do this, but of course, having a group do it is harder to coordinate than the single rogue engineer who is able to do it on their own. As for robust code identity: the Borg jobs run as an identity and certify that identity before gaining access to user data.
C: There are other services that can't enforce the use of that data and so have to trust that job identity. So binary authorization ties the job's identity to specific code, indicating that only this code can be run in that job with those privileges. That allows a transition from job identity, i.e., trusting the identity and, transitively, any of its privileged human users, to code identity, so that we trust that a piece of code was reviewed and that its specific purpose is approved. And the last one was simplified compliance.
C: Notice in particular that last bullet point: a story for managing third-party code is critical here. If you are taking in open source dependencies, do you know that that code is valid and that it adheres to your standards? And how do you go about verifying that? We of course have a story around that as well at Google. So, binary authorization is an internal system at Google.
C: We have also now provided this as a product on Google Cloud Platform; that's what I'm going to be demoing for you in just a minute. It's tightly integrated with Google Kubernetes Engine, our Kubernetes offering, and Binary Authorization allows you to apply a policy to your Kubernetes cluster that will provide the kinds of deployment checks we've been talking about. It supports signature verification and image whitelisting: you can whitelist images by repository, by path, or by a particular set of digests.
C: You can set policies at the project level, the cluster level, even the namespace level. And finally, there's a break-glass functionality that has an audit trail associated with it; i.e., if you break glass, what you do in that break-glass mode is fully recorded so it can be audited later. That's obviously for emergency production access, which is obviously a real-world production need.
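As an illustration, a Google Cloud Binary Authorization policy expressing "require an attestation, allowlist one repository path, block and log everything else" looks roughly like this. Field names follow the public Binary Authorization policy schema; the project, repository, and attestor names are placeholders:

```yaml
# Hypothetical policy, importable with: gcloud container binauthz policy import policy.yaml
admissionWhitelistPatterns:
  - namePattern: gcr.io/my-project/trusted-base/*   # allowlist images by repository path
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG     # block and write an audit entry
  requireAttestationsBy:
    - projects/my-project/attestors/built-by-tekton
globalPolicyEvaluationMode: ENABLE                  # trust Google-maintained system images
```

Switching `enforcementMode` to `DRYRUN_AUDIT_LOG_ONLY` gives the dark-launch behavior described earlier: verdicts are logged but nothing is blocked.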
C: These are a set of links that you may want to follow up on to learn more; those links are also available in the meeting notes from this meeting. I will call out that the "demo of binary authorization on Google Cloud" link points to a personal repository of mine, and you're going to see that in action in the demo I'm about to show.
C: Okay, so hopefully you are now seeing my smiling face on GitHub, and I'm going to roll this demo; this was pre-recorded, just in case. My name is David Bendory. I'm the engineering manager for the Tekton team at Google, and I'm going to give a demo today of our Binary Authorization offering on Google Cloud Platform, which is our externalization of Binary Authorization for Borg.
C: Everything you're going to see in this demo today is available in open source. If you go over to github.com/bendory, you'll see that I have a repository there called tekton-on-gcp. That repository contains scripts that can be used to set up a GCP project exactly like this one; these are the scripts that I used to set up and even run this demo. They will enable you to set up a similar pair of clusters in a Google Cloud project and play with Tekton Chains and attestations yourself. So here I'm at my command line.
C: I've set up my GCP project already and, as you can see here, I have two Docker images already pushed to my repository. The repository is named my-repo; one image is cleverly named "allow", and the other is cleverly named "deny". I used a Tekton TaskRun to build the image called "allow"; that image was signed by Tekton Chains, which also created an attestation attesting to the provenance of the image. In contrast, I built the "deny" image using a local docker build and pushed that image to the repository.
C: Let's go have a look at that in the Google Cloud console. Here we're in the Google Cloud console for the same project, and if I go and look at my Kubernetes clusters, I have two clusters up: one is called prod, one is called tekton. The tekton cluster is where I built these images using Tekton. Here's the prod cluster, and you can see on the prod cluster, if I scroll down to the security section, I have Binary Authorization enabled. Let's go look at the settings for that binary authorization.
C: If we go look at the workloads on the cluster itself, we see our two deployments here as two workloads. One is the "allow" image, which was allowed to be deployed. The other is the "deny" image, where you can read the error; at the end of the error, starting at the penultimate line: "no attestations found that were valid and signed by a key trusted by the attestor".
C: This "deny" image I tried to deploy was built, in this case, by a nefarious developer who did not use our standard build chain to build it, and so it did not have the attestation in place certifying that the image was deployable in our production environment; in contrast to the "allow" image, which was built using our standard Tekton toolchain and therefore had the needed attestation in place.
C: So that's what I've prepared for today. I do still have that project standing up; if you have questions about it, I can go live and head over there and show you whatever it is that you want to ask about. If you have questions about other parts of the presentation, I'm happy to try to field some answers, and otherwise we'll move on with our agenda.
C: ...so it doesn't break the world? Sure, that's a great question, Justin. What will typically happen internally, and I've actually been through this several times since joining Google, is this: the binary authorization team will send an announcement of some new policy that's coming. They will deploy that policy in a dark-launch, audit-only mode, and, on a per-job basis, it will record violations of that policy and send the job owners details about those violations.
C: The job owners can then decide how that policy is going to be applied and enforced on their jobs. At first, that will probably be an explicit opt-in for early adopters; then, by some announced date, people can opt out of the future enforcement, and then the enforcement will be on by default except for those who have opted out. So there's a sort of gradual rollout process like that. You can imagine doing this in your organization with SLSA as you climb the SLSA ladder.
C: So today you might enforce "only deploy assets with SLSA level two", and tomorrow you're working your way to level three. So you're going to keep a policy in place that enforces level two by blocking deployments, and you're going to start auditing violations of level three, so that you can figure out where those violations exist and how to mitigate them. Makes sense?
C: Yeah, as you can imagine, that's situation-dependent, right? There are situations where a team can decide to opt out, and that's fine with manager approval; or maybe to opt out they need director-level approval; or maybe there's a deadline; or maybe, in an extreme situation, a new policy is just imposed across all of Google and there is no opt-out. It depends on what vulnerability we're trying to mitigate.
F: So I've got a couple. That was a really cool demo, thank you. Who's the attestor in this scenario?
C: You know, it's funny: I did a dry run yesterday and I thought, oh, I never showed who the attestor is and how that works. So give me just a minute here; let me share this screen. I half anticipated your question, but didn't really prepare for it by changing the presentation.
C: So, let's see, which cluster do I want here? That's the key, okay. So here is my setup of a demo project, very similar to the pre-recorded demo, which was using different projects because I recorded that last week. And if we, whoops, didn't mean to go there, sorry about that; if we go over here to Security and Binary Authorization: this was the policy I showed in the demo. What I didn't show was, well, what does that attestor look like? So, the attestor: you can imagine, again...
C: You can see the tail end there: the project name ends in "keys", whereas the project I'm in up here ends in "demo"; otherwise these are similar projects. So here I am in the keys project, and here is the key. The way I set this up for the demo is that I have a keys project separate from the other project. In the real world, you might in fact have, well, I put tekton and prod in different clusters; very likely you'd actually have them in different projects if you're on Google Cloud. In this case, I put the key project separate. You can imagine that only security engineers get access to this keys project, and then what we do is, for these individual keys...
C: If we go back over here to the quote-unquote production project, which is technically also my tekton project, and we look here at all of the principals that exist on this account: this builder is the build-time service account itself, and this tekton-chains-controller account is the one that had access to that key. If you're familiar with Tekton Chains, the way this works is that there are different workspaces as part of my Tekton setup in Kubernetes, and the Tekton Chains workspace is not accessible by people who have access to the default workspace.
C: ...where I do my builds. So the builds take place in the pipelines workspace, where my Tekton pipelines are installed. Tekton Chains watches those builds. It has a separate service account that is not accessible at build time. That service account sees that the container image got pushed, signs it, produces an attestation, and uploads the attestation. So I also have to have security controls on that Tekton Chains namespace as to who can get access there, because if I can get access there, I can get access to the service account.
C: If I can get access to the service account, I can get access to the key; and if I can get access to the key, I can create an inappropriate attestation. You can imagine, I only have the one attestation here that I showed, and this attestation is effectively saying: was this artifact built using the Tekton pipeline that I have set up here? That's the only one I did, but you can imagine that I could have dozens of these attestations. Was it built using a particular pipeline?
C
Are
there
certain
critical
vulnerabilities
that
have
been
mitigated
or
are
there?
Is
there
a
prevention
of
certain
critical
verna?
Have
I
attested
that
these
critical
vulnerabilities
do
not
exist
in
the
binary
artifact?
That's
what
I'm
trying
to
say
I.e.
This
excitation
set
is
here
and
signed
only
if
the
log
for
j
vulnerability
does
not
exist
in
this
in
this
artifact,
and
you
can
have
a
whole
series
of
these.
That
might,
in
fact
be
signed
with
different
keys
by
and
enforced
by,
different
teams
and
the
policy
determines
which
ones
apply.
C: Right here in that deployment project. But I do have control... oh, I'm sorry, the policy: I went to the policy. The policy is stored in this project; the attestors are set up in this project; the attestations... Where do I want to go here to get my config? Secrets and ConfigMaps is where I want to go. Okay, here it is: chains-config. So you can see here, I'll give you the YAML.
C: That's not helpful here. So: I've configured the TaskRun storage, that's the build provenance, to store in two places, tekton and oci. And, there are two signatures involved here, I've configured the OCI storage to go to grafeas and oci. So we can see those here in Artifact Registry; I'll just show you the OCI storage.
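The Chains settings just described might look roughly like this in the chains-config ConfigMap. Key names follow the open-source Tekton Chains documentation; the KMS key reference is a placeholder for a key like the one in the keys project:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: chains-config
  namespace: tekton-chains
data:
  artifacts.taskrun.storage: "tekton,oci"   # store TaskRun provenance in two places
  artifacts.taskrun.format: "in-toto"
  artifacts.oci.storage: "grafeas,oci"      # image signatures/attestations go to Container Analysis and the registry
  signers.kms.kmsref: "gcpkms://projects/my-keys-project/locations/us/keyRings/my-ring/cryptoKeys/my-key/cryptoKeyVersions/1"
```

With the `oci` backend, the signature and attestation land in the same Artifact Registry repository as the image, which is what the next part of the demo shows.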
C
If
you
are
familiar
with
container
analysis
at
google
oops-
sorry
about
that,
if
you're
confi,
if
you're
familiar
with
container
analysis
at
google,
that's
the
the
graphics
storage
is
in
the
container
analysis
api.
This
here
is
artifact
registry.
This
is
where
my
docker
images
are
stored.
This
was
the
image
I
created.
This
is
the
allow
image
you
can
see.
It's
labeled
here,
here's
the
latest,
that's
the
image
that
was
built
and
pushed,
and
these
two
oci
bundles
contain
the
attestations
you
can
see.
C: One is a signature, ending in .sig; the other is the attestation, ending in .att, and the tags on those actually match the tag of the image that was built. So in this case I stored them right alongside the image; I could store them in a different OCI registry. And in addition to that, I've configured it here for storage in Container Analysis.
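The .sig and .att naming shown here follows the open-source cosign "tag triangulation" convention: the companion tags are derived from the image digest, with the `sha256:` separator replaced so the result is a valid tag. A small sketch (the digest below is a made-up example):

```python
# Derive the cosign-style companion tags for an image's signature and
# attestation from the image digest, e.g. "sha256:abc..." -> "sha256-abc....sig".
def related_tags(digest: str) -> dict:
    """Map an image digest to the tags of its signature and attestation bundles."""
    base = digest.replace(":", "-")   # "sha256:deadbeef" -> "sha256-deadbeef"
    return {
        "signature": f"{base}.sig",
        "attestation": f"{base}.att",
    }
```

This is why the bundles appear "right alongside the image" in the registry: they are ordinary OCI artifacts whose tags are computable from the image they refer to.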
F: Thank you. One last quick question, and you don't have to answer it if it's a really, really long answer: what do we do about key management, like when we rotate keys?
C: Key management is a long and complicated story. Actually, as part of setting all this up, I had a long conversation about that with some of our internal folks, and I'm sorry, Brett, I'm simply going to say to that one: it's complicated. The quick answer is that, over here in the security section under Binary Authorization, if you look at my attestor, it's got version one of the key. So suppose I wanted to rotate that key.
C: If I have a security breach and I need to immediately revoke things that the attacker has compromised, that is, things that were signed because of a compromise, I could just delete this attestor, and the attestor will no longer, you know, validate signatures. In the real world, if I'm doing a rotation, what I'd probably do is create a new key version, version two, create an attestor with version two, and then over time sunset and delete that version-one attestor.
C: I might take some other steps along the way, but it's a complicated story today, and it's a manual story in Binary Authorization today, at least for our external offering of this.
C: So, for this, no: there is no reliance on a monorepo whatsoever. This entire demo was built using Google Cloud Platform and open source.
C: So the code I built happened to come off GitHub, here in one of my own personal repositories, meaning the code for the demo is in GitHub, and I built a container; that "allow" container is also set up there. Oh, actually, there's a second container that I built that I didn't talk about in the demo, which is...
M: When you go fetch the source, because I'm still thinking about the question I had earlier: when you go fetch the source, you can specify what repos you want to fetch it from, from a list or something?
C: Yes, yes.
C
In
a
simple
case,
you
could
create
an
allow
list.
These
are
the
allowed
places
where
you
can
go.
Go
fetch
source,
okay,
great
thanks.
Obviously,
in
the
real
world,
there's
a
lot
more
subtlety
than
that.
F: Tracy, you had asked earlier about what the provenance looks like. I shared in-toto, which has a spec for provenance, and I shared a couple of those examples back up in the chat. And then for anybody else, if you're not using Kyverno: we're using Open Policy Agent to do a lot of our policy, so I'll point that out.
C: By the way, in the bigger picture here, Google's very active in SLSA; I don't know how many people in the room are active over in SLSA. We very much want to help uplevel the standards across the industry here; that's part of why I'm giving this presentation. So you should feel free to reach out to me in our Slack.
C: If you have questions or want more details, those kinds of things. I would also encourage you to take my scripts in that repo, stand up this demo yourself, go play with it, and think about how you might adopt something like this in your organization, whether on Google Cloud Platform or using other tooling; binary authorization is going to be a little harder to find in other tooling right now, but...
E
So I have a quick question: in terms of moving to the higher levels, does Google have a plan for how it's going to achieve that? I was thinking about that.
C
So I assume you're asking about that in terms of the public offering of binary authorization. Internally, we are beyond what's currently available in SLSA in terms of our internal binary authorization. Publicly, yes, our plan is to provide a path that enables our customers to gradually opt in to upleveling their security. We'd like to get them to a place where they can be secure by default, which is the experience I have as an internal Google engineer, but you know, that's obviously going to take some time, and I'll be honest.
C
Security causes some friction, right? And so that's why we're doing this opt-in in terms of our Google Cloud Platform offering, and hopefully we're going to see more and more industry adoption of these kinds of standards and enforcement.
C
B
So we can look at that topic and look for ways to contribute to it. Recently there have been some conversations within the community under the name CDF Reference Architecture. This topic was presented to the Technical Oversight Committee on July 19, if I am not mistaken, and then it was brought to the Governing Board as part of its strategy session, and another discussion took place there.
B
As for what this initiative is about: I'm not going to go through all these slides, but this initiative intends to bring together all the work we have been doing under our different groups, such as the Interoperability, Events, Best Practices, and Software Supply Chain SIGs, and come up with a reference architecture that would give organizations some kind of blueprint for how they can implement continuous delivery within their organizations. I included the slides in the HackMD document.
B
So please take a look at the document. Why I am bringing this topic up right now is because we talked about a similar thing in the past within our SIG, which we didn't call a reference architecture; we named it a proof of concept, based on the work done by Michael Lieberman and his colleagues in the CNCF Technical Advisory Group working on software supply chain, who created a reference implementation architecture called the Secure Software Factory, which also goes by the name FRSCA, if I'm pronouncing it right.
B
So we were thinking of basing our proof of concept on that and trying to contribute to those efforts from a continuous delivery perspective. But seeing the reference architecture work kicking off, the idea is that perhaps we should look at what is happening with the reference architecture initiative and align our efforts with that work.
B
So we would contribute to the reference architecture on the software supply chain aspects from within our group, instead of going off and creating yet another proof of concept or set of pipelines. This work actually started this week, at least the conversations started this week, within the Special Interest Group Best Practices, and Terry created a branch on the SIG Best Practices GitHub repository.
B
Let me quickly show that as well, to put the basics in place, if I can find the branch. No, maybe I'm looking at the wrong one. Was it the website? Let's.
P
The best-practices site, a branch other than main, right?
B
P
Yes. So basically, what we're trying to do with this is to work downwards from the best practices work that we've done to date, to try and bridge some conceptual gaps that are out there at the moment. So the first phase of the work really is to help.
P
People understand how continuous delivery fits into the business process and into the structure and culture of the organization. And then, once we've done that piece of work, we can start into some of the more specific areas of a rough reference implementation, and hopefully pull together a number of these pieces of work from across the various special interest groups.
P
F
P
Yeah, in fact, what we're really intending with this work is to do, you know, the classic sort of inverted pyramid on pyramid. So we're working from the top, a broad set of basic methodologies for continuous delivery, and bringing that down to a point which will be the reference architecture, from the business perspective downwards. And what we're looking to do, from a community perspective.
P
Is to say: here is a specific tool or a specific project, and this is how this project implements, you know, best practice against the overarching methodology. So we're very open to having some of that bottom-up work being done in parallel, so that we can connect all these things together efficiently.
B
Thanks, Terry. And the Best Practices SIG meets every second and fourth Monday, I believe. Okay, is that correct? Second and fourth, correct.
B
Yeah, so if you would like to take part in the conversations, please join those meetings. This is still at an early phase. I finally located the right repository, best-practices-site, and the ref-arc-1 branch is the branch this preview is generated from. The other thing to highlight here is, if you have ideas around contributing different perspectives.
D
E
I just did want to mention: I think you brought up FRSCA, which is an implementation of the CNCF architecture and best practices, so we would love to do a demo, maybe next time. I think we can show that off and see how it applies, and, you know, show off how.
E
How exactly it's, you know, meeting all the best practices that the CNCF has set forward, all the different components of it, and how it achieves all the different SLSA levels, up to SLSA level three and beyond.
B
Cool, yeah. I will then reach out to you to schedule a session, a presentation and demo of FRSCA. The next slot is taken, but we can plan for the September one. Thanks for that. Well, that actually brings us to the closing notes. Our next meeting will be on August 25th, and we have another presentation scheduled for that one. Please keep an eye on the Slack channel and also CDF Twitter, which will be announcing the upcoming presentation. And with that, I want to thank everyone for joining today.