From YouTube: CNCF SIG Security 2021-04-26
A: So when was the second one born? "Nine weeks ago, mate." Oh wow, fantastic, that's great! So what's their name? "River." River, oh sweet. Yeah, sometimes, yeah, absolutely, it's good. And the other one's Max, right? "Yeah, that's right!" I remember that. Oh cool, so he'd only be under two; Max is 18 months.
A: Yeah, and how are things going? "Oh good, good, yeah, we've been smashing it actually: a bunch of really good deals, doing a bunch of stuff with... actually, we've had a whole bunch of new products recently."
A: "So we bought a company called MuseDev, which is basically a sort of static analysis product, a bit similar to SonarQube, I guess, just to expand our coverage of that space, because a lot of people want that as well." Yeah, sort of a single throat to choke. "Yeah, no, for sure. And then we..."
A: "We got a strategic partnership with NeuVector, sort of adding the container piece as well." Okay, what are you guys doing?
A: "So, basically, just working with them, because they've got this really clever behavioral analysis stuff to do dynamic sort of port blocking and process blocking, and you can sort of teach it, programmatically, which processes are good and bad, and that's pretty innovative. No one else has got that, so that's sort of the differentiator. You don't have to sit there and whitelist all the processes; it'll actually do it automatically, based on putting it into a learning mode, using eBPF." All right, nice. I might add in my marketing person as well, to be a fly on the wall.
C: All right, well, look, it's 12 o'clock; we might as well get the ball rolling. So thanks, everyone who's here. Hey, Justin.
F: But I'll just finish what I was going to say: thanks, everyone who's here so far, nice to see you all again, for those of you I already know. I've organized with Andreas here, from Red Hat, to give a bit of a presentation on software supply chain security
through the use of software factories. If you're familiar with the DoD DevSecOps framework: as part of Platform One, they introduce this concept of software factories, and it's pretty cool, so he will handle that. I thought I'd copy a note from the previous agenda, on April 21, just to remind everybody that KubeCon is coming up, and as part of that there's Cloud Native Security Day on May the 4th. So if you haven't already, check that out. I think you're going to go to it, aren't you, Brad?
E: Oh yeah, definitely, yeah. I normally go each year. It's not good timing, but I just force myself to stay up.
E: We should do a little Zoom meeting to hang out in between, or, you know... yeah.
E: I normally have an Irish coffee, so I put whiskey in my coffee. Or Baileys, yeah.
F: All right, well, look, that's kind of all I wanted to say. Justin, I noticed you're here; do you want to take over?
H: I think it might be prudent. My system is saying it's going to do a software update in a moment, and I have a feeling that means I'm going to be disconnected. You've been doing a great job; let's have you just...
G: Sure. Mark, do you want to share your screen? And I'll also say hi: we've got other Red Hatters on the call today as well.
G: Andy Block, from the US, is one of our principal, our distinguished, architects there, and we've got Adam Goose and Shane Bolden on there, and Mark Hilton Brand. We all work for Red Hat, but we're all also interested in this topic, and I'm probably the one who's done the least amount of work in there, so the kudos doesn't belong to me.
G: It's more the people I just mentioned, right, especially Mark, who was building this demo over the last few weeks. So yeah, let's kick off. Where did the idea come from? A few months ago now, right, many weeks, I basically watched a recording from one of the CNCF SIG Security sessions in the US. John Meadows was heading this up, and Andy Martin was sort of presenting the concept of a software factory, and I posted some screenshots in there, right.
G: So multiple CI/CD pipelines can be composed into a complex build system, and this is called a software factory; it's basically used to securely build and deploy all components of a system. At the same time, we were writing that white paper about supply chain security, and we learned about the SolarWinds attack, right? And then we looked at this architecture diagram on the right-hand side, and Mark and I basically looked at that and went: what are the components, and how could we make this better?
G: Then we reached out, in good old open source fashion, to the wider Red Hat community across the globe, and we actually found that our teams that work with federal governments, across the globe specifically, had already started an open source project called Ploigos. We've got the links in here, but that is actually the open source way of, or approach to, basically building such a software factory.
G: The only thing I didn't quite like about the software factory's secure bootstrap is that the idea was to have a laptop locked away in a vault, from which you would basically start off, and I thought that in a modern enterprise that probably doesn't work that well, so we need to come up with other things. And even though boot attestation is not part of this demo today, I think it has a lot of the vital building blocks to actually bring those secure software supply chains into enterprises. And then, as you can see, "what's the problem?": that's not from us.
G: That's from the previous session: a large problem space requiring an end-to-end solution, and that's really what Ploigos is trying to do. On the left-hand side we've got the, sort of famous in the SIG Security space, right, DoD Enterprise DevSecOps Reference Design, and I think for all the teams that was sort of a common denominator. We all knew about it, and we all sort of agreed with that approach, and that's also why what you see today is aligned with that reference design. So, Mark, do you want to get to the next page? Again, in good old open source fashion, the UI is probably the last thing that gets updated.
G: So what you'll see is a flurry of text messages. I'm just going to run you through a couple of screens so that you get used to it, so when you see it in the demo you know what it is. This is really just text output; there are usually pass messages in there, but the orange highlighted text is basically a fail, so that's how you know something is wrong. It was interesting:
G: The open source projects had been updated, and then you need to sort of re-check all the parts. When it went this time, Mark was thinking, what changed now? And then he realized that actually nothing had changed, but the software supply chain did its job. As you can see, the title for this step is "ensure software patch is installed"; there was a new vulnerability, and by checking the content it realized that the patches had not all been updated, and that's why it failed. That was a great case to show us that this is really a good approach, and it's working. All right.
G: So, next slide. This is an OpenShift screen, the deployment topology. All those components are being installed by an operator, and that operator basically takes care of all the components you need to run and set up your software supply chain, or your software factory as such, and we'll go through the components in more detail during the demo.
G: As you can see, there's Gitea on there; you know, all those components that actually make it a comprehensive solution. And if you think of an enterprise context, that's actually what you want: you want a single, certified operator installed, and then you know that everything is taken care of; you can trust that, and build your enterprise software based on it. So, next screen: the pipeline view.
G: This is just our Jenkins pipeline, in the Blue Ocean view, and what it shows you is exactly the error message I mentioned earlier: in the middle you see the CI static image scan failing. We'll show you how it looks later, when you encounter that screen. So, the next screen, and then some context: the Ploigos project is not a product, right. What we do:
G: We want to invite everyone who hears about this to contribute and make it a successful open source project. It's a Red Hat-led project at the moment, because our consultants all across the globe wanted to collaborate, and that's how it got created. So it's basically the perfect storm for an open source project: driving faster results for our customers without reinventing the wheel. And yeah, Red Hat Consulting; I mentioned the DoD website; aligning with the white paper and reference architecture.
G: I mentioned that, and then, yeah, the SIG Security meeting that I mentioned earlier was basically the starting point for me to think about it and gather people around. At the moment it's teams in Australia and the US that talk, communicate, and collaborate across it. I mentioned the laptop in the vault, which we didn't like, and everything you see today is in the script, which is on Mark's GitHub account, and the main component...
G: I mean, there are more components on there, as you just saw: the operator, the Ploigos project, and there's the Gitea source code management system in there as well. But just at a high level, right: the OpenShift platform is the user interface that you see on the right-hand side, so it shows you all the operators that we have installed here, and then, yeah:
G: The Ploigos software factory operator is obviously the main one in this case, and then Mark has reused Rekor for artifact attestation. And I think that's it; the next slide is where we move over into the demo. Mark? Yeah, so.
I: You move over to me, yes. Thanks, Andreas. So yes, I'm Mark, standing on the shoulders of giants. I didn't create most of Ploigos, but I have used it.
You know, I'm a longtime listener, not often that I call in, but today I'm going to show you a demo of how Ploigos works. I do have a live cluster with all this stuff going, but because builds take a long time and the demo gods are vengeful, I have it pre-recorded, so we can get through all this in a reasonable amount of time and people can still have lunch. So, the demo: there are a couple of different demos, seven different little snippets I'll show you, but broadly categorized into three chapters.
I: If you will. So the first one is sort of riffing on what Andreas was just talking about: okay, so we have the Department of Defense talking about things like software factories; how does Ploigos make a software factory? We've sort of hinted at that again with this. This is the developer perspective of OpenShift, showing you a myriad of boxes, which is probably slightly confusing; we'll try and make more sense of that in a second.
I: That's where the operator will busily work, and you'll see, in time-lapse fashion, it builds all this stuff out, just because I created one little custom resource. This is all the magic of Kubernetes: it builds these all out, and the OpenShift developer perspective gives me a view that demos really well, so I'll show you what that looks like. Again, if there are questions, I have a live cluster and all that, so we can get into it, but this is a little helpful. So we'll start with the OperatorHub.
I: The important thing here is this notion of the provider type. With OperatorHub you're basically saying: hey, I'd like my cluster to connect to all these other operators that are available in a marketplace. Some of them are community operators; some of them are certified by Red Hat, because this is Red Hat OpenShift you're looking at right now. In our case, the Ploigos operator came from a separate kind of provider, which I've installed on the cluster previously; so, as an admin, I've decided.
I: If we look at the operator in a little more depth, it looks like any operator: you see that the install of the operator succeeded, and it offers two custom resources: a pipeline, which we'll get to, and a platform, which we're going to look at now. There's a form view for the platform; I'm going to use the YAML view. We'll come back to this at the end. This is basically prescribing to Ploigos what we want it to build, right. That's the operator, in the developer perspective.
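As a concrete picture of what "prescribing to Ploigos what we want it to build" might look like in that YAML view, here is a hypothetical platform custom resource. The apiVersion, kind, and field names below are illustrative guesses for this writeup, not the operator's actual schema:

```yaml
# Illustrative sketch only; check the installed CRDs for the real schema.
apiVersion: redhatgov.io/v1alpha1
kind: TsscPlatform
metadata:
  name: software-factory
spec:
  flavor: jenkins        # the demo uses Jenkins; a Tekton flavor also exists
  tools:
    - gitea              # in-cluster source control
    - sonarqube          # static analysis
    - nexus              # artifact repository
```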
I: That operator then just starts building things from its blueprint; it knows what it needs to pull in. Now, it's pulling in things that go a little bit beyond just the pipeline: Gitea, you see, pops in there; it's a little bit chaotic; Selenium is coming in there. It's like a thousand little cobbler's elves running off and creating a software factory for me.
I: This takes, I think, about seven-ish minutes when all is said and done. If we take a look at it, this is a platform that revolves around Jenkins. There is also a Tekton version, because who wouldn't want a Tekton version, but I'm showing the Jenkins version; that's what I started on. And you can see, like I said, there are elements in here that aren't related just to building: there's also an IDE called CodeReady Workspaces, based on Eclipse Che.
I: If we look at the platform in a little more detail, now I'll just compare and contrast the custom resource with what was created. So Gitea: you can see Gitea over there, right. Basically, the things that I called out that I wanted for my continuous integration are there: Jenkins is there, and that'll be important; for static analysis, SonarQube is in there (good on us for including SonarQube); Nexus is in there.
I: So there are a number of things. It's basically, if you will, an implementation: the Department of Defense white paper is prescriptive in terms of what you should do, but not what tools you should use, and this is sort of our consulting team saying, hey, within Red Hat these are the tools that we find are best practice for implementing some of those different stages that every software supply chain should have, per the white paper and per our collective wisdom as a community.
I: That's right, though there is some proviso in there, as you'll see when we get to the end of the demo, as long as you've implemented certain things ahead of time. But yes, it's been set up so that you can plug and play whichever you see as best. So it stands to reason: if not Argo, you could put in what's called a step runner; you could put in something that allows you to use Flux, or something like that.
I: If you wanted. In our case, our platform, again just to show you in the live cluster what this looks like, maybe a little bit easier to see: these are the different options that this platform takes. It's not, well, this is the thing: it's not necessarily meant to be specific to OpenShift. Our Red Hat consultants tend to use OpenShift, because they're Red Hat consultants, but most of the primitives we're using are generic Kubernetes. If somebody else on the call wants to jump in on that, yeah.
J: So yeah, a lot of the primitives, as you mentioned, come out of the box with Kubernetes. The operator pattern isn't designed only for OpenShift; it can be deployed in any Kubernetes environment, because it's just running a control loop, with custom resource definitions that are applicable to any environment. It just comes down to what you want to enable, and how.
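Since the control-loop idea keeps coming up, here is a minimal, self-contained sketch of what "reconcile desired versus observed state" means. Plain Python dicts stand in for what a real operator reads from the Kubernetes API, and the component names are just examples:

```python
# Toy illustration of the operator pattern: a control loop repeatedly
# compares desired state (from a custom resource) against observed state
# and emits the actions needed to converge them.

def reconcile(desired: dict, observed: dict) -> list:
    """Return the actions needed to make `observed` match `desired`."""
    actions = []
    for component, replicas in desired.items():
        have = observed.get(component, 0)
        if have < replicas:
            actions.append(f"create {component} x{replicas - have}")
        elif have > replicas:
            actions.append(f"delete {component} x{have - replicas}")
    for component in observed:
        if component not in desired:
            actions.append(f"delete {component} (not in spec)")
    return actions

desired = {"jenkins": 1, "sonarqube": 1, "nexus": 1}
observed = {"jenkins": 1, "selenium": 1}
print(reconcile(desired, observed))
```

A real operator would run this loop continuously, so drift (like the manual edits mentioned later in the session) gets overwritten on the next pass.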
E: And does it account for, let's say, you not having enough CPU in the cluster to install them? Does it just blow up, or does it sort of nicely tell you?
J: So right now it is kind of prescriptive, so we have a prescriptive stack, but down the road we're going to be looking at different ways to provide more capabilities.
J: As we mentioned, on the CI side we have Jenkins or Tekton, and we'll be exposing more options down the road, especially because I'm field-facing, so I see a lot of customers, and some want to use one product and some want to use a different product. So we're going to provide better options down the road, and provide ways to enable and disable certain features as necessary.
I: And as someone who's experienced a fair amount of failure at the hands of this platform: one way you find out is in typical Kubernetes ways. Like you see here, the operator has installed successfully. You could imagine a world where, if I didn't have enough compute to create everything the Ploigos operator needed, say CodeReady Workspaces or Eclipse Che, it would be in a reconcile loop.
G: And it's not just about what Red Hat is providing, right; this is an invitation. If you see something that you want to have as part of the trusted software supply chain, or the software factory, then the invitation is that you implement it.
I: Yes, yes; like not being able to pull certain images, yes, I've seen it all trying to make this demo. But as you'll see, if we look at the topology view, it's a number of open source projects; it's not meant to be just Red Hat. Some of them will be things that Red Hat supports, again because it's the outworking of our consulting arm at the moment, but, as Andreas says, we're presenting here because we want it to be more community.
H: I'm curious, from a security standpoint. I think I get what you're doing, but is there security in this? Is there cryptographic signing of things? Are you using something like in-toto underneath? What happens if an attacker breaks into this framework here: can they just go and produce whatever they want as part of it?
K: So I'll throw in my two cents on that one. What I'd say is that Ploigos on its own is not a substitute for any other security controls that you need at each step, and the work that Mark is going to show, in terms of the integration with Rekor, is, I think, good stuff in regards to verifying and attesting the output of each stage of the process, and the process overall.
H: Okay, yeah, that helps. And I know in-toto is adding Rekor into the latest, things like this, so I was just curious, because it seems like it would be almost no work for you all to integrate, and it would give you a huge security differential.
H: I mean, this wouldn't protect against SolarWinds, because SolarWinds was bad guys getting into the infrastructure and doing things, but with in-toto plus this you have at least some hope of catching that happening, and for not a lot of development effort at all. In fact, it should be nearly trivial; you could basically be there. So I was just curious about that.
G: I think Keylime would also play a role in this, where you basically start off with boot attestation and make sure that the right operating system libraries boot from trusted sources as well, and that's, I think, where you would start off a 100% secure supply chain.
I: That's right. And yeah, in-toto: it's not the first time I've heard about it, and, being close to this project, there's certainly a lot of talk about integrating with it with the North American team. This is just where it stands right now, and Keylime and boot attestation, all that, can be built on top of this.
H: Right, yeah, that makes sense. And for those who don't know, I'll just say: in-toto is basically designed to take cryptographic information about different steps and then let you apply a policy that gets checked over it. I think if you try to distill what's done here into buzzwords, and in-toto into buzzwords, there's a lot of overlap; but if you actually look at what's happening, there's a lot of difference.
H: So in-toto is completely agnostic to everything happening in the system; it doesn't care. It has nothing like the functionality here, and so I think there's tremendous potential, because this is a really slick, really well done, really usable, high-level system that integrates everything together in a good way, and I think you could get those security properties from in-toto with almost no work, and really have the best of all worlds for people using this. So, yeah.
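To make the "cryptographic information about different steps" concrete, here is a toy sketch of in-toto-style link metadata: each step records SHA-256 hashes of its inputs (materials) and outputs (products), so a verifier can later check that adjacent steps line up. This illustrates the idea only; real in-toto links are signed documents defined by the in-toto specification:

```python
# Toy in-toto-style "link metadata": hash what a step consumed and produced.
import hashlib
import json

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_link(step_name: str, materials: dict, products: dict) -> dict:
    """Record hashed inputs (materials) and outputs (products) of a step."""
    return {
        "step": step_name,
        "materials": {name: digest(data) for name, data in materials.items()},
        "products": {name: digest(data) for name, data in products.items()},
    }

source = b"public class App {}"
artifact = b"compiled bytes"
build_link = make_link("build", {"App.java": source}, {"app.jar": artifact})
scan_link = make_link("scan", {"app.jar": artifact}, {})

# Verification: the scan step must have consumed exactly what build produced.
assert scan_link["materials"]["app.jar"] == build_link["products"]["app.jar"]
print(json.dumps(build_link, indent=2))
```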
H: But this is really cool. Sorry, please go ahead and continue.
I: No problem; you'll see more overlap as we go, right, there are other places where in-toto might overlap. So that's the platform. As Adam said, too, it's not a substitute for typical security controls: principle of least privilege; this is all set up to use SAML and all that other kind of good stuff. But, as you say, it's an implementation of something that anyone could implement, which is sort of the Department of Defense's best practices.
I: Yet when we talk about things like SolarWinds: yes, maybe we could have had a compromised factory that's pumping out something that is itself compromised, but what we're going to talk about next is how you might have other controls where you could start to see if something's been tampered with. So: the pipeline and the platform. Just as we saw with the operator, right, when we were looking at the operator over here, I look at the installed operators.
I: So what does that look like for us? We already have the platform custom resource; the operator is still running in the background. What I'm not showing a ton of, and there are other demos out there that talk about this, is how to make a project that is compatible with Ploigos. The barrier to entry is really low nowadays, though it is still somewhat a demonstration when it comes to just any random project out there.
I: There are kind of two things that you need. One is that your project should be GitOps-ready, so this pipeline assumes that you're going to have a code repo and some sort of GitOps repo, or a Helm repo in this case, right. That's one thing; you'll see that in the custom resource when we build it. The second thing is that it assumes you know enough about your use of Ploigos; you can kind of see it here.
I: You'll see it a little closer in the demo: you have a Jenkinsfile that tells Ploigos what version of the overall Groovy script you want to use, sort of the thing that binds Jenkins to what are called step runners. So I'll show you through this diagram, and you'll see it in the demo.
I: I make a pipeline custom resource. That's basically a way to say: dear Ploigos, set me up.
I: If you will, an assembly line in your software factory for this project. You'll see what's in the custom resource; the main thing in it is what the project is, so what the Git repos are and how I want them to be manifested inside of my cluster. For us, in what we do with our teams, we want to deploy it locally, to an in-cluster Gitea.
I: You know, sort of a little GitHub that runs inside of our mighty fortress, which is Kubernetes, OpenShift. Then, armed with that and the Jenkinsfile in the project, a build is kicked off just like any other build. It's just that Ploigos, as part of the platform install, sets up the main Jenkins server; for us, that main server is told, and it also gets set up on that main server: hey...
I: I have a new assembly line for you, a new conveyor belt, whatever you want to call it, for this reference app code project. And that looks at the Jenkinsfile, like any other Jenkinsfile in any project, which points to a Ploigos library, which binds in this thing called the Ploigos step runner, which is basically a way to decouple the toolchain, if you will, from what happens in every given step. What happens in the steps is in this Python library, and each step is a conglomeration of these different kinds of plugins.
I: So one is for signing, one is for running SonarQube, one is for Maven, right, if that makes any sense. Again, all this is not necessarily telling everyone in the world "this is how you should do it"; this is just so you can make sense of the demo that I'm about to show you, as the different steps of Jenkins are done, and these are steps that you would recognize from best-practice white papers around the world.
I: How I want my asset to be built. Some of that information will come from the platform, like where the heck SonarQube is in this factory, but what tests I want to run might be something that I can configure in the local project, right. So the factory plus the project creates, if you will, an assembly line, which in this case is implemented with Jenkins. We also have a Tekton flavor, for those who are Tekton people.
I: Again, who knows where this Git location is; for the demo it's in GitHub. And then I have a Helm config repo, a GitOps-y thing, in reference-helm, and I want my service name to be reference-app-fruit. It's a highly trusted app that spits out information about fruit, for some reason; it's a demo. So once I create this custom resource (so code, Helm, as we talked about), when I go and click Create, that's going to create that assembly line: it's going to migrate the projects into Gitea.
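As a sketch of the pipeline custom resource being described (a code repo, a GitOps/Helm repo, and a service name), it might look roughly like this; the field names and URLs are invented for illustration, not taken from the actual CRD:

```yaml
# Illustrative only; names guessed from the demo, not the real schema.
apiVersion: redhatgov.io/v1alpha1
kind: TsscPipeline
metadata:
  name: reference-app-fruit
spec:
  appRepo: https://github.com/example-org/reference-app    # application code
  helmRepo: https://github.com/example-org/reference-helm  # GitOps/Helm config
  serviceName: reference-app-fruit
```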
I: It's going to tell Jenkins about those projects; it's going to set up a Jenkins job. If I look at the pipeline, I can see that it's already done everything it needed to do to get to the desired state for that custom resource. I can search for Jenkins in the developer topology view and log in; I'm going to use the OpenShift single sign-on.
I: For me, based on this information. So that's one of the things, the Jenkinsfile, for my project to qualify to be built in the software factory. The second thing is this config.yml, which holds project-specific options that I want to be able to override in building my asset. And there is one thing that I'm going to override for this demo: I've added my own step implementer, which we'll get to at the end, this notion of the Rekor log. We'll come to that in a second; some foreshadowing for you, so stick around.
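A hypothetical shape for that project-level config.yml, showing per-project overrides plus a custom step implementer; the keys and values below are invented to illustrate the idea and do not reproduce the real Ploigos schema:

```yaml
step-runner-config:
  unit-test:
    - implementer: Maven
      config:
        pom-file: pom.xml        # project-specific override
  report:
    - implementer: RekorLog      # custom step implementer added for this demo
      config:
        rekor-server: http://rekor.example.internal   # hypothetical endpoint
```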
I: I've run it locally within my cluster, but it doesn't have to be run here; that's just for the sake of the demo, and we'll come to that plenty before we're done. Just proving that it's up and running: I can run it locally. We'll talk about what Rekor is and why it matters. Now, while it's busy (it ran those unit tests forever and ever), here's a different, typical Blue Ocean view. I can look at all the logs; that'll come into play in a little bit. So there are outputs.
I: Every step has different artifacts that it produces, which we'll get to at the end. Static analysis: I have output that it produces from that, and you kind of get the idea. It's going to go through all these different stages; it's going to push artifacts to Nexus. These are typical things to do for Java projects. And if I just skip through some of this stuff: I create an image, I scan an image.
I: This is the point Andreas made before, where the scan actually broke. And another thing I'll say about having an operator: I had to adjust what tests I wanted Ploigos to run, and I wound up changing a custom resource that the operator managed, and the operator wound up blowing away those changes.
I: So if the operator is running, and running properly, there are many different controls to keep things from being tampered with. What you saw just go by there, and we'll come back to it, was sort of it writing out what happened in the build using Rekor, or rather a demonstration of how one might use Rekor, similar to Grafeas. Again, as we start to get closer to that, you can see I'm skipping dev and moving to test and prod, so it's going to deploy my application to test and prod.
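Rekor itself is a Sigstore-hosted transparency-log service; as a toy illustration of the property being relied on here, an append-only log where rewriting an early entry invalidates everything after it, consider this hash-chained log (all names invented for illustration, not the Rekor API):

```python
# Toy append-only log: each entry's hash covers the previous entry's hash,
# so tampering with history breaks verification of every later entry.
import hashlib

def entry_hash(prev_hash: str, payload: str) -> str:
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

class TransparencyLog:
    def __init__(self):
        self.entries = []   # list of (payload, entry_hash)

    def append(self, payload: str) -> str:
        prev = self.entries[-1][1] if self.entries else "0" * 64
        h = entry_hash(prev, payload)
        self.entries.append((payload, h))
        return h

    def verify(self) -> bool:
        prev = "0" * 64
        for payload, h in self.entries:
            if entry_hash(prev, payload) != h:
                return False
            prev = h
        return True

log = TransparencyLog()
log.append("build: app.jar sha256=...")
log.append("scan: app.jar passed")
assert log.verify()

# Tampering with an earlier record breaks verification.
log.entries[0] = ("build: evil.jar sha256=...", log.entries[0][1])
assert not log.verify()
```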
I: It runs some Selenium tests behind the scenes. There's a fair amount of complexity that our consulting team has already put into this, because these are things that they do all the time. This proves that it actually deployed something out to our production namespace (that's what we're calling production in this demo), and then that's the end of the pipeline. All the way at the end, you can see it's meant to deploy this fruit service again. Thank God.
E: I guess I would like to understand a little bit more about overrides. So in that file there, it looks like these software factories are Docker images. Can you override the arguments? Let's say I have, I don't know, for example, a certain Maven Java edition that I want to use for the build, like 1.8. Can I override that anywhere, or does it pretty much just come as-is?
I: I think, on this call, he provided me this kind of customized version of the workbench, to be able to skip a couple of tests, because I didn't want it to test those things, because they could fail at any time, and that gives an example of sort of light configuration. What I'm doing here is probably ill-advised and requires deeper knowledge of how Ploigos is running.
I: I think most of the time you'd want to only have to change things like this, but that's how the project will evolve, to make stuff like that easier. Anyone who's on the project want to say more about the thoughts behind that? If not, that's fine; just giving space if anybody else wants to chime in.
F: Obviously, Red Hat has a fair bit of experience with supply chain security; I mean, Red Hat got hacked in 2008, the whole Fedora thing. What was the motivation for this project? I'm genuinely interested to know, because, to Justin's point, there's some similar tooling out here that kind of fits into the puzzle. So I'm pretty keen to understand: what was the core motivation for building something like this?
K: So I'll give it a go. I wasn't there at the start of the project, but, as I understand it, the primary driver was essentially an identified gap: here is the US DoD's DevSecOps reference design, here's the "what" but not the "how", and a desire to have something that comes in and fills in that "how".
K: So that folks who go, "hey, I need to be aligned with the DevSecOps reference design", can get off the ground very quickly, with a toolchain that aligns directly back to it and meets all of the requirements, etc. That's my understanding of essentially why it came into existence: they looked and went, there's nothing that really exists. Does that make sense? Have you read through the DevSecOps framework, the reference design?
F: I appreciate that response. I've read through it myself quite a bit, and done a little bit with it myself, to try and implement some of the stuff they have in Platform One, for example, and it's very much this massive resource, the wild west: where do you even start? So I can...
F
I can absolutely appreciate that. Just for the dummies, I guess — and feel free to chime in here, Justin, for in-toto, if it's relevant — let's just take a scenario where you're a security tool like nmap, and you're hosted on, I don't know, whatever, Linode or something, and some big bad cyber gang, you know, roots one of your boxes: they pop a VPS, they've got root, and all of a sudden they've got access to your software, and they re-upload a version that has malware hidden in it. How does this protect users from that particular problem? Because, I mean, that's ultimately the goal of software supply chain security.
F
You know, if a particular piece of software was compromised and an attacker uploaded a malicious version of that software—
K
So, kind of as I was saying before, there is always the need to have additional controls over and on top, but where this can potentially come in — and I might have kind of touched on it — is that, because there is this decoupling between the pipeline and the tools, it becomes possible to write and inject new, I guess, implementations of steps in there.
K
So let's say, for example, that we've got a git step implementer, which is basically "hey, check out the thing." There's no reason that couldn't be extended or enhanced to also include verification of the committer and things like that. So it doesn't have the tools out of the box today to do it, but it could be built and added in as part of the pipeline.
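The extensibility idea above can be sketched in a few lines. This is a hedged illustration, not Ploigos' actual step-implementer interface — the class and method names are assumptions for the sake of the example:

```python
# Hypothetical sketch of the pattern described: because the pipeline is
# decoupled from the tools, a "git checkout" step implementer could be swapped
# for one that additionally verifies the committer. Names are illustrative.
class GitCheckoutStep:
    def run(self, repo_url: str) -> dict:
        # base behaviour: just check out the source (stubbed here)
        return {"checked_out": repo_url}

class VerifiedGitCheckoutStep(GitCheckoutStep):
    def __init__(self, trusted_committers):
        self.trusted_committers = set(trusted_committers)

    def run(self, repo_url: str, committer: str = "") -> dict:
        result = super().run(repo_url)
        # enhancement: flag commits from unknown committers
        result["committer_verified"] = committer in self.trusted_committers
        return result

step = VerifiedGitCheckoutStep(["alice@example.com"])
print(step.run("https://example.com/repo.git", committer="alice@example.com"))
```

The pipeline definition would stay the same; only the step implementation behind "check out the source" changes, which is the decoupling being described.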
I
In an opinionated way, yes — because if you start from the reference design, the what, not the how, as Adam said, we're starting to fill in the how. But then there are questions like: now that we've got a bit of a framework, how do we further harden that framework, and how do we make it customizable? Because we're trying to balance the interests of security — well, we want everything to be repeatable — but toolchains tend to be very snowflakey in the real world. So how can we kind of balance those two interests?
E
And just a quick question in terms of the open source part of this. So, high level, what I've seen today is there's a few layers: we have the container, we possibly have the Kubernetes deployment, and then we have the framework itself. Which ones could I go in today and maybe make a PR against? Can I do it at the container level, the framework, or all three?
K
So everything for this is up — I was gonna say upstairs — upstream, in github.com/ploigos. All the container definitions are there, the workflow definitions are there, the operator definition is there, all written in Ansible. So yeah, it's all there and ready.
G
We let the community decide, yes — you know, as for our customers, obviously we're building on technology we know the source code of, we know it works, and one we're, like, contributing to. Conscious of time, I really would like us to demo the Rekor part of it — that's where the signing and, you know, the attestation basically come in — if that's okay, man.
I
Yeah, I'm okay with that, but let me get to that before we run out of time, and then questions at the end. So, Rekor. Rekor is part of sigstore, so there's a separate group of projects — cosign, Rekor, a bunch of things — and there's a lot going on in the space there; in-toto is kind of friends in this.

In this case, what we're looking to demonstrate is, just as you guys were pointing out: hey, this framework is great, but there are all these things you could add to it. We decided, like, hey, let's try and add something to the framework, and Rekor seemed like a thing to add, for multiple reasons which hopefully I'll get to before we're done.
I
So again, what we're doing is trying to write our build activities to a tamper-evident store, in a way that may have been something that could have helped with software supply chain attacks like SolarWinds. I understand it's not a remedy for it, but being able to see "here were all the steps in the chain that went from this git commit to this container build" — let's say, what if we had something that could immutably attest everything that happened, now that we have our framework? So the reference design is the what; the framework is the how.

Now that I've made some opinionated calls about the what, it lends itself to saying: well, what if each of these steps someday had to write out its output into an immutable database, so that auditors or other things could automatically check that? So what I'm demonstrating here — and this is just a demonstration of how it could be implemented — is: when I go to sign the image, I'm also going to do two things. I'm going to store the signed image in Rekor, to say "I, the build chain" — I'm going to, you know, use my keys to sign it — "I, the build chain, made this image." And then I'm also going to add a node (we'll talk about that in a second): I'm going to record, also in Rekor, the artifacts that went into building this image, again stored in an immutable database, in this case Trillian — so transparency.dev, sort of Google's Trillian open source project. We'll get to build nodes in a second.
I
So let me just introduce Rekor really quickly — if Zoom... thank you, Zoom, all right, here we go; Zoom was just preventing me from showing you this video. So, where we last left our heroes: this is the thing I want to show about signing the container image. What you see here is I've got private keys that represent the toolchain, and it's using those keys to sign images.

So I sign a container image, which is kind of what you see here, using a Ploigos key for the factory. It signed an image — that's cool, that'll be important in a second. It also stores that image in Nexus, so internally it's using Nexus as a container image registry. And then, finally, at the very end — this is what I was showing before, where I put two things in — when I go to sign the container image, I podman sign, I curl push to Nexus, and I use my Rekor log: I log to my local Kubernetes — local kind — Rekor instance. I log two entries; one is sort of the last build node (we'll talk about what that means in a second), and there'll be some command line here. So you see the Rekor URL has a service-local address; I need to turn that into a public address, just to prove that this is my Rekor server — I have it exposed publicly. Again, think of how we're using Rekor right now.
I
Let me just make it a little prettier. There's a body of base64-encoded data, but there's an inclusion proof, as you'd expect from immutable databases, that says, okay, so I've got a database — there's a Merkle tree behind the scenes. The body is interesting; we'll get to that. The thing at the top is the UUID of the leaf node, which Rekor uses to find entries.

Now, if I go to the next — I think I'll just jump to the next one. This is going to talk about build nodes (and again, questions right at the end, just for the sake of time). So Adam and I got to chat about this; this was kind of fun. Here's how we imagined that something like this could work, both for attestation — auditors and all that — and also, eventually, you'll see at the very end, a way we can hook into—
I
If you want to see what happened before, here's the previous entry in Rekor, which is itself a build node — so, sort of a linked list inside an immutable database. And this thing would be all hashed in the same way you'd expect from an immutable database, with its step name, with whatever output is relevant to it — maybe it's the unit tests, or, you know, pick a thing, or maybe it's a static analysis — with an entry pointing to the previous step, and so on.

All the way down to, eventually, the previous UUID would be zero, and thus you'd have a way of creating the whole provenance — everything that happened, from checkout all the way to the creation of an image — in an immutable database that you could verify for yourself. So anyone, anywhere, could look this up, as well as for the sake of auditing, all this kind of stuff, right? So again, let me run through this and then we'll get to questions at the end, just to give you more of a sense of what Rekor can do.
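The linked-list-of-build-nodes idea described above can be sketched as follows — each node carries a step name, a step output, and the UUID of the previous entry, with "0" terminating the chain. This is a minimal stand-in for what the demo actually stores in Rekor/Trillian, with content-addressed UUIDs as a simplifying assumption:

```python
# Minimal sketch of the build-node chain: each entry records a step and points
# at the previous entry ("0" = start of chain), inside an append-only log.
import hashlib
import json

def make_build_node(step_name, step_output, prev_uuid):
    """Build a node and content-address it the way an immutable log might."""
    body = {"step-name": step_name, "step-output": step_output,
            "previous-rekor-id": prev_uuid}
    uuid = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return uuid, body

log = {}
uid1, node1 = make_build_node("run-build", "<testsuite/>", "0")
log[uid1] = node1
uid2, node2 = make_build_node("sign-container-image", "BASE64SIG", uid1)
log[uid2] = node2

# Walk the provenance chain back from the latest entry to the root.
chain, cur = [], uid2
while cur != "0":
    chain.append(log[cur]["step-name"])
    cur = log[cur]["previous-rekor-id"]
print(chain)  # ['sign-container-image', 'run-build']
```

Walking the chain until the previous ID is "0" recovers the full provenance, which is exactly the auditing use case the speaker describes.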
I
I can look up the last entry in Rekor by saying: get me that UUID, get it in JSON, and use jq just to format it so we can make sense of it. This is how Rekor stores things in Trillian, right? So it's got a body — you can change how Rekor stores things, but that's outside the scope of this — and the things we care about in its data: there's this extra data, which we'll get to; the signature is stored in Rekor; the public key is stored in Rekor, as well as the UUID.

I'm going to look at that extra data to pull out my build node. When I pull out my build node, I see the step in question — again, for the demo there's only one step that's signing anything; it's that sign-container-image step, which you see there — and that matches that string exactly. That's what I wrote out: I said my step was "sign container image."
I
My step output was base64-encoded, but we'll see in a second that it is exactly the signature that was stored, and then, finally, the Rekor ID. So what I've done is put into Rekor proof that the image was signed. Now this whole build chain — this chain of what happened, the bill of goods on the assembly line — is all signed in Rekor, so I have another way of verifying that the build chain did what I thought it would do. In terms of verifying, there are a number of different ways.
I
I can verify — I may skip through some of this — you can verify by artifact, you can verify by public key, right? So if you only have the artifact, or you only have the public key... you need the artifact, the public key, and the signature if you don't have the original entry. That's kind of what I'm going to show here. For the sake of time I'll let it go for a second, just to give you a sense: the artifact, in our case, is the last build node.

The last signature entry — I'm going to pull that out of Rekor. I could get that somewhere else if I wanted to, but I'll pull it out of Rekor; so that's the detached signature. And the last bit is a public key, just showing that I could get that from anywhere.
I
So it says what the hash is. It gives me the tree root, in case I want to do my own kind of inclusion proof — and in fact the CLI does its own inclusion proof locally, based on the SHAs from the tree. The current tree size is only two, because I've only put two things in, because this is a demo and this is the first time I ever ran this, as you can see.
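The inclusion proof the CLI runs locally can be sketched like this. It follows the RFC 6962-style hashing that Trillian-backed logs use, but is simplified to a balanced power-of-two tree (real verifiers also account for the tree size), so treat it as an illustration rather than Rekor's actual verifier:

```python
# Simplified Merkle inclusion-proof check (RFC 6962-style hashing; assumes a
# balanced power-of-two tree for brevity — real logs handle partial trees).
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(leaf: bytes) -> bytes:
    return h(b"\x00" + leaf)   # 0x00 prefix distinguishes leaves...

def node_hash(left: bytes, right: bytes) -> bytes:
    return h(b"\x01" + left + right)  # ...from interior nodes (0x01)

def verify_inclusion(leaf, index, proof, root):
    """Recompute the root from a leaf and its audit path; True if it matches."""
    cur = leaf_hash(leaf)
    for sibling in proof:
        cur = node_hash(sibling, cur) if index % 2 else node_hash(cur, sibling)
        index //= 2
    return cur == root

# A two-leaf tree, like the demo's "tree size is only two".
leaves = [b"build-node", b"signature-entry"]
root = node_hash(leaf_hash(leaves[0]), leaf_hash(leaves[1]))
print(verify_inclusion(leaves[1], 1, [leaf_hash(leaves[0])], root))  # True
```

The point is that any party holding the entry and the audit path can recompute the root themselves, without trusting the server that handed them the proof.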
I
You can also search through Rekor — again, just to give you a sense of what Rekor does — I can search by public key or by SHA. So if I want to take my artifact, which is the build node — or, in this case, that build node that wraps the signature — I could search for it that way: search by SHA, which is what you kind of see here, and I get back the UUID.
I
Our linked list is only two entries: the one at the end with the container signature, and then another one which has the output of the build, right? And that's sort of what you see here — just proving that there is a public Rekor server out there; there's no reason it has to be local to my cluster, and in fact we want it to be transparent. But, just proving, I didn't write to that Rekor server, and if the tree root had changed, I could know that this isn't the tree I expected — as could anyone who's verifying this.

And then, finally, this is the last bit I wanted to show you guys today. Let's play with these kinds of build nodes. I told you the last build node is what we said was the signature — in our case it'll be the signature of the output, the container output, right? So let's just see if that matches. Can I take a public key — the Ploigos public key; I have it locally here—
I
Can I basically get that signature file, right — just proving, via the signature file, that this thing is signed by Ploigos — when I then go to decrypt it with GPG? Right: I download it, and then I jq it, and what I should see is, sure enough, it was signed by the service account. Again, this is a demo — we don't have the verification of this key, but we know it was this key. I should be able to use the signature content; that signature content should represent the signature that was in Nexus, blah blah blah. If I output that, it should match this up here — which it does; see, it's the same thing, it's the same build. So these are ways that you can start to do some verification.
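The comparison being demonstrated — the base64-encoded step output stored in the build node should decode to byte-for-byte the detached signature held in the registry — can be sketched in a couple of lines (the signature bytes below are a placeholder, not a real PGP signature):

```python
# Sketch of the check in the demo: decoding the build node's base64 step output
# should yield exactly the detached signature stored in the registry.
import base64

def signatures_match(rekor_step_output_b64: str, registry_signature: bytes) -> bool:
    return base64.b64decode(rekor_step_output_b64) == registry_signature

sig = b"fake-detached-signature-bytes"          # placeholder signature
stored = base64.b64encode(sig).decode()         # what the build node carries
print(signatures_match(stored, sig))  # True
```

If the two ever disagree, either the registry copy or the logged copy was tampered with after the build, which is precisely what the tamper-evident store is there to surface.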
I
I can get that previous build node and then look at its extra data, which will show me the step name, some content, and the previous Rekor ID — and that's what you see here, although the step output is much bigger. Again, same step name, because it's the same step that I ran this from, sign-container-image, but that represents this first thing that I put — sorry, yeah, this first one, right over here.

I highlighted the wrong one — this first one, which is the build output, right. And so if I decode that, which I'm going to quickly do right now, following the same things: the previous Rekor ID is 0, which means there are no further entries in this chain. If I take the step output, you see the step results from this build — this is just an XML file that gets produced by Ploigos.
I
The last thing I'd say before we end our time is a future demo — and I think there are other people trying to do this, but you can imagine (I didn't get time to do this) integrating with things like OPA, or Gatekeeper, which brings OPA to Kubernetes. You can imagine a world where even a simple admission hook that does a Rekor verify on the UUID of an image, in certain participating namespaces, would allow you to follow this pattern that you see on the transparency.dev website — assume this is Kubernetes.
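The admission-hook idea can be sketched as a simple decision function: in participating namespaces, a pod is admitted only if its image digest has a verified entry in the transparency log. This is a hedged stand-in — a real implementation would query the Rekor API (or an OPA/Gatekeeper policy would do the equivalent), not consult an in-memory dict:

```python
# Hypothetical admission-hook logic: deny images in participating namespaces
# unless a verified transparency-log entry exists for their digest.
# (A real hook would query Rekor; the dict here is a stand-in for that lookup.)
verified_log = {"sha256:abc123": "uuid-of-signed-build-node"}
participating = {"prod"}

def admit(namespace: str, image_digest: str) -> bool:
    if namespace not in participating:
        return True  # namespace has not opted in to enforcement
    return image_digest in verified_log

print(admit("prod", "sha256:abc123"))  # True
print(admit("prod", "sha256:evil"))    # False
print(admit("dev", "sha256:evil"))     # True (namespace not participating)
```

Wiring this into a ValidatingWebhookConfiguration (or a Gatekeeper constraint) would close the loop from build-time attestation to deploy-time enforcement.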
G
Such a shame that we're out of time, but I did want to get Justin's view as well, in terms of, you know, SolarWinds and supply chain attacks, and how this would help.
G
So, ideally, you build your Ploigos operator as well through the software factory, but the point is: where do you start, right? And would that ever be possible without actually having full control and understanding of the source code that is used to build whatever it is — whether it's the operator or the software, right? So Matt was saying, what if someone uploads, you know, dodgy sorts of components — and I think the Rekor Merkle tree addressed this a little bit.
H
Right, so yeah — in-toto has different ways of managing and handling this. We actually worked with Git, and they've redone parts of their signing scheme, because we found design flaws in the way that Git signing worked for Git tags and other aspects like that. They actually use Santiago's code — he's the lead of the in-toto project — and a design that he and I and some of our collaborators came up with. So yeah, there's a bunch of stuff that already exists that does this in, like, in-toto's scope, because I think that project's been working at this problem from a different angle, where we've been very security-focused from day one — that's been really the primary thing — and also we've been, I think, very vendor- and technology-agnostic. So, like, I'm really impressed by what you have, and I'm really looking forward to digging deeper into, you know, some of the different pieces. But it's also, in some ways, a little hard for me to understand, because it feels like there's a lot that's very vendor-specific here, and it's hard for me to disentangle some of the security properties from some of the other things going on. But yeah, it was really enlightening, and I really enjoyed learning about it.
L
No, thank you very much — that was awesome. Great, great! Well, look, if you guys want to get in touch—
F
—with Andreas: he's on Slack, so feel free to reach out to him, and you can check out the GitHub repo. And thanks so much, everyone from Red Hat — that was really enlightening; I really appreciated that. Yeah, thanks so much.
G
That was really cool. Awesome — thanks for having us, and see you at the project. And yes, if you have anything in terms of extensions that makes it, you know, just as applicable for any other platform — absolutely, right? That's exactly the point behind an open source project. So thanks a lot for having us.
F
Yes, yeah — I think, look, as I mentioned, I took a quick look at the repo on the call. You know, obviously it's just available for OpenShift right now, but from what I read it doesn't seem too hard to get this running on vanilla Kubernetes, so I'm gonna have a little bit more of a play. And yeah, thanks for sharing. Hey JJ, are you still here, buddy?
M
Yeah, thanks — thanks for putting this together; this is an awesome project. One thing I would suggest is to try and see if we can pull together a demo that doesn't have anything that's OpenShift- or Red Hat-specific. It would serve a few purposes in terms of trying to get wider feedback and adoption, and for a project that's as useful as this, I think it'll also be useful to see how it plays in well with the other open projects that we have — like what Justin was saying about in-toto and stuff. So, I mean, if you're interested and if you're curious about getting more community involvement, I think it'll be a useful thing to do: a demo that's not too vendor-specific. That's what I would do, but otherwise it's awesome — it's a good learning for a lot of folks, and I'm pretty sure it's going to be useful for the community overall.
D
Great. Does anybody else on the call want to say anything before we close? I think we're pretty much running out of time now.
E
Yeah, I guess just what we briefly touched on at the start — I'm going to keep going, yeah. Like, if anyone wants to maybe hang out for 10 minutes or something sometime, just let me know. We could have a discussion on one of the key talks that we go to, or maybe at the end of it we could also do key takeaways and have a chat about it, or something like that. It just makes it nicer, not being in that region, you know, because it's quite an exciting event for me, so it'd be nice to share that with people and talk about it.
F
Cool, all right — well, cool, cool! Well, hey, look, just for those of you in, you know, Australia, or even APAC, I guess it's pretty relevant: just to let you know, Brad and I have been in touch with Bill Mulligan from the Linux Foundation, and we're going to be spinning up the KCDs — the Kubernetes Community Days — here.
F
We're reaching out to different vendors for sponsorship, etc., at the moment, to try and get things organized, and there's a very open invitation to anyone who wants to get involved in something like this. It's, you know, a massive undertaking, for full transparency.
F
The Kubernetes Forum 2019 — to date the largest conference in the cloud native space that's been hosted in Australia — has been indefinitely cancelled, and it's the view of, you know, the CNCF and Linux Foundation that this will kind of replace that now. So yeah, look, if you want any more information, or if you want to get involved, please just ping me, because Brad and I would really appreciate any help or support you could offer.
F
Yeah, other than that, we'll keep you guys posted — and yeah, all the best; we'll chat to you guys soon, I guess. Have a great day.