From YouTube: CNCF SIG Security Supply Chain Security WG 2021-02-12
A: Early, but it's going good. Thank you, good. I think, you know, we've had more contributions to the document, which is good. Cool, we're gonna go through that.
C: There's been some really good discussion, actually, at a number of other meetings, some other references that we can add in here too. Oh, cool.
E: I've got my boat out of the water, so I'm going to give it some love. I gotta put a new radio in it, the other one got water damaged, and then, you know, do some polishing and stuff.
F: So I updated the GitHub issue because it needed some TLC, at least a summary of the issue; I pulled down most of your links from down there. Jonathan, make sure that the calendar invite is discoverable and that we have the links to the notes; that was on me. I think no one else had access to it.
C: Sort of contemplate some of that, certainly around maybe the higher-security end of this, but they have a chap called David Wheeler who's got a lot of interesting work that he's done in the past, including a PhD thesis.
A: The OpenSSF is, I believe— so yeah, the conversation was more just general, an update from the—
C: —chap that runs reproducible-builds.org. It was a really interesting conversation. I think they've got a—
E: Yeah, I know, and in-toto has done some work there with some of the rebuilders and the testers on that, but, you know, they showed something yesterday on their community call that was pretty interesting, so I can find the reference.
D: Oh yeah, actually I was presenting the rebuilder work at the community meeting yesterday. Oh, so yeah! So yes, we are working with the reproducible builds community. We are working on using in-toto attestations for the results of rebuilders, and I administer an Arch Linux rebuilder at NYU.
A: The reason for bringing that up is we haven't— I don't believe we've necessarily discussed that as part of our conversation so far, and I just wanted to see if there's—
C: —interest and support for adding that, in terms of best practice, potentially for some of the higher security. You know, we had the two personas, the low and the high.
E: Yeah, I think, you know, when you get down to it, when you threat model everything out— really, the only way you can make some reasonable assumptions about the security of your software is if you rebuild it on n number of nodes and those hashes match, right? You know that the attacker would have to attack n number of nodes for that to fail, for that to be compromised.
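The n-of-m rebuilder check described above — trust an artifact only if enough independent rebuilders produced the same hash — can be sketched as follows. This is an illustrative stand-in, not any real rebuilder protocol; the hash inputs and threshold are hypothetical.

```python
import hashlib
from collections import Counter

def verify_rebuilds(artifact_hashes: list[str], threshold: int) -> bool:
    """Return True if at least `threshold` independent rebuilders
    reported the same artifact hash."""
    if not artifact_hashes:
        return False
    _, count = Counter(artifact_hashes).most_common(1)[0]
    return count >= threshold

# Example: three rebuilders agree, a fourth was compromised or flaky.
good = hashlib.sha256(b"package-1.0.tar.gz contents").hexdigest()
bad = hashlib.sha256(b"tampered contents").hexdigest()
print(verify_rebuilds([good, good, good, bad], threshold=3))  # True
```

The point of the threshold is exactly the attacker cost mentioned in the discussion: to make a tampered artifact pass, the attacker has to compromise at least `threshold` rebuilders, not one build machine.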
G: Yeah, and on my end— this is from a few years ago— but even not necessarily fully reproducible builds but, like, mostly reproducible builds has helped us out a lot in the past, just being able to get some reasonable—
E: You know, and I'm wondering, you know, like those rebuilders— I've been doing a lot of research into DBoM the past couple of days, and they have this concept of, like, a DBoM node. I'm wondering, if you had a rebuilder and had that as a public DBoM node, then you could almost have it as, like, a community effort: you could open-source some of the security for open source software, to make sure that the builds are actually happening.
E: Well, yeah— I mean, it wouldn't be the DBoM node software that rebuilt it, right? It'd be something like the in-toto rebuilder that they've been working on, right? And then it publishes that to the DBoM node, which distributes it, you know, via the public channel.
F: You said the rebuilder is being worked on, right? Is it something readily available— if we're documenting best practices, can we tell people, hey, here's a complete solution you can reference if you're looking for reproducible builds— or is it something that will eventually become available? I don't track the project too closely, so.
D: Right, so we are working closely with the Debian side of reproducible builds, and we also have an apt transport to perform the verification. We're also working with the Arch Linux rebuilders, which use a project called rebuilderd, and I think in the last couple of weeks or so there's been some interest from a core member of the Qubes project; they've been working on using in-toto attestations within— I think they call it rpm-reproduce— and so on. There were—
D: Yeah, I'm gonna drop those links down now.
C: I mean, it's pretty mature on Debian— they've done some awesome work there as well. I'm just wondering how prevalent that is through the rest of the industry. I mean, some of the stuff that we're talking about is— we've discussed this before, right, Colin— that this is best practices now, and then here we're trying to fill the gaps, and some of the future work around some of the DBoM work that you and I are talking about, and some of the SPIFFE and in-toto work.
E: Well, I think reproducible builds as a concept is different than the actual implementation, right? You know, the implementation is fairly new, and I think they're working through some of the issues, and some of the design of that, you know, looks to be working really well. I have to take a deeper look at some of the source code that they've been doing, but I think we can list it, maybe as, like, hey, this—
C: —the ingestion and security of the dependencies and your source code, right, through how you're building that product, through to how you're distributing it and sending out evidence of what's in that build— effectively SBOMs and such. So it's kind of really looking at it from an end-to-end perspective. But what we're also looking at is, you know, what is the best practice for producing those builds? That's how we've sort of been led through to the reproducible builds concept, and some great conversation that was on the OpenSSF a couple of days ago, and from the Debian team.
H: So I come from the SUSE and openSUSE world— I'm an employee of SUSE— so I can give you some insight on what's going on in that open source community as well from a supply chain perspective, and also Rancher, since SUSE owns Rancher now as well.
H: So we've got the convergence of supply chain coming in here, and two different companies doing it two different ways, and there's going to be some big changes coming soon in the openSUSE community. You will see a convergence of Rancher Labs and openSUSE coming together to create actually some brand-new supply chain methodologies.
H: So it'll be very interesting to see what happens with that. Our current process uses a tool called the Open Build Service— I don't know if you've ever heard of that before; it's actually used by the Linux Foundation to build all of their Linux distribution stuff. It is a complete supply chain tool.
H: You can use it to build Ubuntu, Red Hat, CentOS and many other flavors of Linux, but SUSE uses it— it's our bread and butter. It's what builds SUSE Linux Enterprise Server, it's what builds openSUSE Tumbleweed and Leap. There's a lot of security built into that supply chain; it's very interesting from an authorization standpoint.
H: Whereas you see people building out, you know, their own supply chain to handle things like Maven builds and things of that nature— which are very insecure in nature, because you can inject from, you know, various other repositories across the internet.
H: So it's very insecure. Our new build chain from the openSUSE community will be incorporating these different build environments— from Maven to Python to Ruby— and building something more secure as a supply chain.
H: So we're kicking off an internal meeting about all this next week, in fact, that will be talking about the next generation of Linux Enterprise and what that's going to incorporate from a build and supply chain methodology.
H: Yeah, so from the openSUSE perspective, the build service— we do use Jenkins in there, but it does have its own, you know, build environment, where it scales out across a Kubernetes cluster and it can build for multiple architectures. It uses— what's the tool called?
H: I forget the name of it, but it allows for, you know, multiple-architecture builds no matter the binary type. It does some very limited stuff with Java today.
C: I guess one of the sort of interesting things there, Cameron— from the document, maybe you were able to take a look at it: are there any items or recommendations that we are missing so far, even at a high level, that are already implemented in your build service and that you'd recommend adding?
C: Well, definitely an improvement. I think we'll continue to work on it, and I think we're starting to get some more meat on the bones of that sort of skeleton document, and I know that there's a couple of people offline putting chunks of pages together and reviewing them before they submit, but at least the high-level titles are in there. So if there's anything specifically interesting, please do call that out.
H: Another additive on the Open Build Service is that it can also do container builds, and so once you build your application, the container— the output— can be directly output to a registry, which you can inject into your pipeline to do scanning as well. But the idea behind the Open Build Service is that you do all that scanning before it actually reaches the registry.
H: So you're scanning all your binaries, you're scanning your RPMs to make sure the RPMs don't have any outlandish CVEs that have not been applied, and, you know, there's various checks within that environment to make sure it's completely up to date with the machine.
H: That's part of our business, right— to make sure that our binaries are secure and make sure that they have the latest security patches applied. And so we're doing a lot of the indemnification, if you will, on those binaries, so we have a security team that is constantly checking.
H: So we have a tool that plugs directly into our build service that's constantly checking binaries, and then it spits out a report on a daily basis— it does it in real time— and so you can look at the report, and it will automatically create bug reports for us based on security vulnerabilities that it's scanning against the CVE database. And then we're taking that list, we're doing checks, we're writing new code to fix and patch.
H: You know, CVEs that come out in real time— so it's a very interesting process from our security team's perspective.
H: It's a lot different than, say, a supply chain from a consumer— a supply chain from a consumer might scan a different database and make sure that, you know, they're using, you know, maybe some third-party tool that's actually got their own database, or they're getting it from mitre.org or some other, you know, location. So from a consumer perspective, scanning might be a little bit different.
H: But what we're trying to do as a delivery company: we're delivering the Linux sources, we're delivering all the binaries that are capable of, you know, being inside of a container, and, you know, all the binaries for your programming languages and your buildpacks— all those kinds of things.
H: We're making sure we're doing all the checks up front, to make sure that your buildpacks are completely hardened, to the point that they have all the right security patches and everything available before you actually start injecting your source code into that buildpack. There's a lot of different methodologies behind all this behind the scenes, which I'm sure you've all discovered, right?
H: There's a lot of ways to actually do this, and SUSE is trying to make this better for the consumer side, so that you don't have to inject this outlandish supply chain security method into your software supply chain— where you can actually pull down a trusted source for a buildpack, or, you know, your programming language of choice, and then pair that up with, you know, a build service of sorts that has all those binaries ready to go.
H: And then you take your source code and put those components together, and then you kick out, you know, your container or your RPM or whatever it might be— whatever your distribution point is. So the whole idea around the build service is that it's a built-in process.
H: You only have to worry about the source code that you're writing within your corporation, and you're doing source code scanning at that point— where everything underneath should be all taken care of by a company like SUSE or Red Hat, or, you know, whoever's providing you those binary sources in a secure way. Maybe.
H: That's what we're really trying to do, that's what we're trying to solve, because that's the biggest concern— you know, we see them all the time. You posted one up here the other day and I was like, yeah, I've been telling these people for years that this is a problem.
H: You know, a supply chain attack— either through Ruby or through Python— you know, that can happen so easily.
C: Sure. I guess I'm just trying to think from the security side— I guess the main thing is looking through the different best practices we have, to see if they're actually picked up. Because I'm just wondering how, as you're going through that pipeline: are you rebuilding the open source libraries that people reference within their source code? Are you somehow building all of the binaries and transitive dependencies? Or would you somehow validate them in some other way?
H: Yeah, so there are some validations that go on. So in terms of, you know, where we get those binaries: we have hundreds of developers that are working in many different communities.
H: You know, it's a very heavy feat to be able to accomplish, and I know from a SUSE perspective, and also our openSUSE community, we have quite a few members of the community that do outreach and that are maintainers of source code that gets dropped into our build service— to the point where they maintain the build environment, or the build sources, for those projects.
H: It's just our community members, developers, that are going out and maintaining that relationship and maintaining those build sources. And so they'll typically maintain a subset of, you know, 100 different projects— some of them where they're just maintaining the build on certain projects, making sure the software builds properly, making sure that the CVEs are updated— and so there's a lot of due diligence that goes on there. It's very interesting. And then from a, you know, pure checking—
H: —standpoint: some of those get checked for specific security, where those developers— if it's an enterprise developer and it's a product that is consumed by the enterprise side— those communities will actually be checked for security by our security team.
H: We'll put it through some internal tools at SUSE that do, you know, extra security scanning beyond what the openSUSE community actually does. The way that it sits today, that actually happens in tandem, so you'll see that the majority of the openSUSE software today is actually the same software as the enterprise. Today there are very few pieces that are different.
H: You can install openSUSE Leap 15.2 and it is literally identical to SLES 15 Service Pack 2. There are some very minor differences— the ones that are going to be different are the ones that actually have licensed software in them.
C: Can I ask, Cameron— just with that build setup there, I mean, do you have the ability to attest how the build has been put together, or—
H: We've got some slide decks out there that talk about how our build service is put together. Let me see if I can find some of those.
H: Yeah, let me find these presentations and send them out.
C: Okay, oh well— while you're looking through that: I guess one of the things— I think reproducible builds makes sense. Are there any volunteers to take that one on as a section? Or we can add it to the list of ones we just go through as a group. I don't know if anyone's got any specific, in-depth knowledge of reproducible builds and an interest in the topic.
H: And so your active doc is the one that's in the Slack, right?
C: This is a placeholder, effectively. I also added Mr. Wheeler's PhD pieces to the references. It's actually really detailed— very interesting stuff.
E: Cool. One thing I'm trying to do, Jonathan: I'm trying to go through some, you know, recent publications— you know, that cloud security white paper, right? I think there's a lot of stuff there that we can just grab and maybe summarize, or go into detail where appropriate— a lot of source material there— as well as the SPIFFE book; there's a lot of really good stuff there.
E: I think we can draw from those, so I'm gonna be doing that today, trying to grab all that prior art. And then I wanted to talk about DBoM a little bit more, maybe next week. So I reached out to Chris Blask from Unisys, and he wants to talk about— he's been doing a lot of work in that area.

E: So I'm gonna try to get some notes from him, to see what the current state is on that, and then I'll have that info for the meeting next week— and maybe try to convince him to join us, to join this team too.
E: And I think it needs to be decoupled too, right? Because if we just say, okay, blockchain— that doesn't work for everybody. The channels need to be separate, and I think the DBoM kind of addresses that in a nice decoupled way. But I don't think we can get too far down it, because that work's just not as far along as I'd like it to be to really put in here. But what are your thoughts on that?
C: It's open for discussion, but I think it hasn't necessarily been picked up widely in the community, I'd say, from what I can see. However, there are some pretty significant initial deployments, from what I can see, in different industries that I think are interesting. So I think even if we sort of identify that as a functional issue: where we have the SBOMs, how do we securely transport the SBOMs, and understand that the source code that you received and the SBOM material go together?
C: So I think, on the SBOM work, there's a lot of good work out there from SPDX and CycloneDX and how people are using it— I think that's pretty well understood— and I think there's still conversations about the appropriate file formats and such, and how those tools are going to be used to actually create those SBOMs, and how we can ingest that. But there does seem to be a gap around how to distribute things securely.
C: I mean, it brings up the other one, which is: maybe we extend this document a little bit more. You know, we've got a high security level, we've got a reasonable security level, I guess, yeah— but also we need to demarcate some of this stuff. Maybe you pull it into that second document, because, yeah, this is like a draft— this is a suggestion for how to fill that gap.
E: Yeah, or even an appendix that has some of those use cases and possible solutions for them— get it out of the main document. The "oh, this would be cool if this happened"— but, right, does it really belong in the white paper? Well, we're gonna give high-level guidance to executives on— probably not yet.
C: Right, and that's why I think that for the stuff where we're getting into the, you know, gaps— the areas where we're starting to build new functionality and identifying issues— that's the bit maybe we end up pulling into that second document. And then we can solidify the best practices and publish that, whilst we've highlighted that these are some gaps and we're going to continue on separately, to figure out how to fix that as a community and toy with ideas. Otherwise, we'll never finish.
C: One chunk I think we're pretty light on is kind of the back end of the supply chain and how we distribute the software; we just don't have a lot of content. We've basically got a huge amount on ingesting and validating dependencies and the source code, Git commits and such, and we've got a chunk of material coming in around the software factory and how it involves the root of trust, etc.
C: But bearing in mind that for most of this, you're going to be a producer and a consumer in some way or another.
I: And I think we should also add, like, a difference between a commercial software, or a difference between a standalone software or a library, because they have different concerns— like if a company is building an open source software inside a company, right? So the SBOM and DBoM scenario is a bit different from an SBOM received for a commercial software, or a code solution from a vendor, right? And maybe for a standalone software also— like, even if it is open source software, like a Docker image or something like that.
I: But I think there is a slightly different scenario for an SBOM and DBoM in those cases— like, especially, you know, if it is a software development company consuming open source libraries and building them internally. You know, I don't know how the DBoM can be aligned to the same scenario as a company just buying software from a vendor— they may get the SBOM from the vendor itself, right? But for open source software—
I: Yeah, I mean, that is also related, right— rebuilding, yeah. But even without rebuilding a software, we can still generate an SBOM, right? Like in Java: even if you have just the binary of your application, you can still generate an SBOM of your transitive dependencies without rebuilding them. But ideally we should rebuild, and then we can have more accurate information on all the transitive dependencies.
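The idea above — listing resolved direct and transitive dependencies with their digests, even without rebuilding them — can be sketched as below. The record fields and the dependency names are illustrative only; this is not SPDX or CycloneDX, just the shape of the data an SBOM captures.

```python
import hashlib
import json

def make_component(name: str, version: str, artifact: bytes) -> dict:
    """One SBOM-like component record: identity plus a content digest.
    Field names here are illustrative, not a real SBOM standard."""
    return {
        "name": name,
        "version": version,
        "sha256": hashlib.sha256(artifact).hexdigest(),
    }

# Hypothetical resolved dependency tree (direct + transitive jars).
resolved = [
    ("guava", "30.1-jre", b"...jar bytes..."),
    ("failureaccess", "1.0.1", b"...jar bytes..."),  # transitive
]
sbom = {"components": [make_component(n, v, a) for n, v, a in resolved]}
print(json.dumps(sbom, indent=2))
```

With a rebuild, the digests would come from artifacts you built yourself; without one, they describe the binaries you resolved, which is the accuracy trade-off mentioned in the discussion.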
C: Did you not have that in there— about whether or not you do or don't rebuild?
I: That's what I wanted to ask you: should we categorize the different types of supply chain software? Like, in a supply chain it can be anything from a small library to a full-fledged software which is just running on a server or a container, right? It may have different behavior in how securely you can procure those— it may not have all the same attributes or requirements. Or we just need to say— I don't know— maybe, inside, when we mention DBoM, maybe we can say that DBoM in a corporate scenario—
I: —you know, you can expect the DBoM data to be shared with partners and things like that, whatever channels you have, but from an open source library perspective it might be a different scenario, right? Something like that. I don't know if we need to categorize, in the bigger picture of the supply chain— are we considering only the open source supply chain?
I: Only standalone software, like a Linux distribution or something like that? Or are we also considering a Java Google Guava library, or simple Go libraries, and things like that? They may need different treatment from us, right? And, you know, we need to be practical too— like, we can't expect to have SBOMs for all the open source software libraries, at least in the next five or ten years, I believe. But yeah.
I: Like, yeah, I'm just asking— and I just want to ask everyone else's thoughts on that. What do you think? Like, I mean, there are several common aspects across all these things, but there can be different treatment— like, essentially it may be specific to DBoM, or specific to SBOM generation, or things like that.
C: I think, Finnod, maybe make a couple of suggestions— you know, go into the document, perhaps, and just put a question or a box, as others are doing, and just potentially open that up as a question that we can dig into. Maybe, okay, if you can supply some thoughts around it, we can—
C: You know, open source consumers— if we're providing this guidance to someone who's consuming open source software, and obviously they're using that software and deploying it to other customers— how are we going to actually deploy that? It's kind of the inverse of how we're ingesting it: making sure that we provide SBOMs, provide the signatures of the work we're providing, possibly send the data we're building into some sort of transparency log. That's the area I think we're pretty light on.
C: It's more like, yeah— so if we've— and that's kind of one of the answers to this question, right: we've built something. But if the person building this thing is effectively an open source provider— you know, you've got your library— what are the recommendations for how you can distribute that thing securely?
D: I mean, if we're talking about package repositories where these libraries are uploaded— I don't need to get very pluggy about the work we're doing at NYU, but my mind would automatically go towards, you know, The Update Framework: if you're uploading these libraries into a repository, then that repository should distribute to consumers using something like TUF. So that's the direction I went to right away.
F: So, I think, from a consumer point of view: say I work at an organization and we deploy software in our air-gapped environment. So someone somehow handed me an artifact, or I got it from somewhere and passed it to someone else in my team who's gonna do the deploy. What would be the guidance? And, well, yeah— this has a TUF or Notary signature— like, how does the person go about validating the provenance? So we put a lot of focus on maintainers of software— what are the best practices— but what about the end consumer?
F: How— if this landed up in, like, an Artifactory or some catalog— how can they validate it? What's that process? I think we're doing a great job up front, but how can people really tell— if they're just gonna, like, go pull something— that they did that extra, somewhat hygienic step to validate where this came from? How do they check? Is it as simple as checking the MD5 checksum, or, like, doing something else— but, like, what is the attestation they can perform?
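The minimal consumer-side check raised here — compare a digest of the artifact you pulled against the digest the producer published — can be sketched as follows. The file name and expected digest are hypothetical, and SHA-256 stands in for the MD5 example in the discussion, since the workflow is identical.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file so large artifacts don't need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: Path, expected_hex: str) -> bool:
    """True if the local artifact matches the published digest."""
    return sha256_of(path) == expected_hex.lower()

# Hypothetical usage before deploying something pulled from a catalog:
# ok = verify_artifact(Path("app-1.2.3.tar.gz"), "9f86d081...")
```

Note that this only tells you the artifact wasn't corrupted or swapped after the digest was published; it says nothing about who published it, which is where the signature and attestation schemes discussed elsewhere in the meeting come in.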
C: But I still see that as the front part. So— I mean, it's difficult, right? If we're writing this document and we are the consumer, and this is advice to the consumer: okay, I'm gonna build my product, so I've got to validate my inputs, I've got to validate my dependencies and everything you just said. And then this is: hey, I don't know where this thing came from, and someone just gave me—
C
You
know
some
random
software
in
the
middle
of
artifactory,
and
this
is
advice
on
how
you
validate
that,
etc.
What
I'm
now
getting
to
do
is
literally
at
the
bottom
of
our
document.
C: What are the checks the end user performs, and how can we facilitate that by providing that data? So, you know, it could be: we're going to build our products, we're going to contribute an SBOM, you know, we're going to make sure that we sign our relevant artifacts, and we distribute it securely into a package manager, for example.
F: I hate to put you on the spot, but following up on— well, what's that latter part, in your mind, from your perspective? So, yeah, this thing has TUF— how does someone verify or validate, if you're passing it to, like, a colleague of yours: yes, this was built using in-toto, the artifact has a binary signature?
D: I actually started thinking some more about— I'm not the most experienced person when it comes to TUF; I kind of joined quite a bit later and I'm more involved on the in-toto side of things. But I actually wanted to think about this a bit more— about how to accomplish this for the intermediate steps in the software supply chain, rather than just at the end. Because when we talk about TUF, I think we usually talk about distributing software right at the end of the software supply chain.
D: Right, so— and I guess the question is: how do you verify, if you're an intermediary— how do you verify what was handed to you from the previous step in the software supply chain, right?
C: Jonathan, I'm actually looking at the other one, where you're about to send it to someone else. Okay— what technologies do you ensure that you've implemented when you distribute, so the person next in the chain is able to validate that stuff? And they've already hit a couple of them, right— generate the SBOM, right.
D: I actually kind of think that this is where in-toto comes into the picture, because if you perform something and you're about to hand it off, you'll also be generating an in-toto link for whatever you performed— for whatever you did in that particular step. And while we've focused on verification workflows for once we have all the steps performed— we usually have, like, you know, one root layout for the entire supply chain and all the link metadata corresponding to each step in the supply chain.
D: I'm actually wondering about some kind of— you know, not a full-scale verification, but at the very least: the person who's performing a step, and has just been handed something, right— they can check if the link metadata associated with it was, at the very least, signed by the authorized person for the previous step, and so on and so forth. I also wonder if we could kind of capture these transitions between two steps into their own—
D: —you know, little in-toto layouts, to ensure that the right person handed off software to the next step, and so on and so forth. Yeah.
E: You know— when I'm bringing artifacts into an air-gapped deployment, right: we get the Docker container— the Docker image— at the end of the build process. Then I do a docker export, write down the hash manually, burn the image onto a CD, and then walk into the secure facility and give it to somebody else, and then they verify those hashes match— you know, physically, right. That's the current process that exists right now for secure air-gapped environments.
E: There may be some other, you know, places that have automated that, but I just don't know if there's anything else out there where we can really guarantee the security of verifying those hashes. There's no build transparency server out there on the internet that we can go query yet, right— the DBoM stuff isn't there. So I think that's the best way to do it now. If we're talking about actually moving the artifacts around, right— we have a lot of repositories for that; we have Artifactory.
E: You can set up your own satellite server if you're doing different types of artifacts, so we can talk about that. But as far as distributing, like, the SBOMs and that attestation information— I just don't think there's a way.
G: Oh, me? I'm mostly just listening in, still trying— whoops— trying to get up to speed, because it's been a few years since I've sort of operated on the supply chain side.
C: But I think maybe that's it. Like, that is the— you know, if we take this from the "you're a link in this chain" view: okay, we've clearly got a gap here, but this is the contract, or at least the endpoint. You've got your piece of software, and this is your contract to the next guy in the chain. There's a gap— we don't know how to get it there. It might be, it might be, called taking your software, writing it on a CD and writing the MD5 on the top.
C
E
That makes the whole process I just talked about secure. That's the part that I didn't have two years ago when I was doing that process. Today I would say: okay, let's implement in-toto for this, so at least I know that when we are making that handoff, we can verify the signatures and the artifacts along the way. So if a developer does need to develop on their terminal, they can use that.
E
F
I think there's an added dimension: yes, you're almost making the assumption that the machines that built that software were properly secured, but that's almost a separate concern entirely. There must have been defense in depth and least privilege on this machine; that machine must be hardened, the kernel must have been protected, the machine must have been properly attested, memory should have been encrypted. We often just presume people are doing those things, but we should state: hey,
F
these are all the other things that are not supply chain directly, but you should have strong security on all your nodes and all the software that executes there.
C
And
that
that
to
me
is
like
the
the
software
factory
itself
right
in
that
you
know,
if
I'm
building
that
thing
I'm
gonna
have
all
of
that,
I'm
gonna
have
you
know
strong,
strong
protection
within
that
pipeline
to
the
nth
degree,
to
make
sure
I
know
what's
being
built
and
I've
got
solutions
to
it
to
monitor,
build
stuff
every
way
right,
yeah.
E
C
E
C
G
I was just going to say, to what you had mentioned before: I know in the past, one of the big things for us was everything you mentioned about securing the build servers, which I think is huge. And I think one of the big open questions is: how do you guarantee that? Let's say for an open source build, how do you guarantee that they are following the protocol you've outlined?
I
B
I'm going back even further, from build to source. There can be an npm or PyPI package, like a jQuery package or something, which may even have a backdoor inside it, and even if they are following a secure build, that can still execute. So my point is that, at least in the software factory, for some threshold of security, I think we should rebuild everything from source that we can't trust, and we should generate our own
I
SBOMs and in-toto attestations, whatever we need. So we shouldn't just pass something because it's open source and it has an SBOM. Attackers know all these things too; they know we just need to have an in-toto layout, or we just need to have an SBOM, a software bill of materials, and they can still publish a library with all of that. So that's what I'm thinking: maybe we need some kind of threshold for a higher security requirement, so these libraries get rebuilt in the software factory and re-attested, along with whatever other internal security checks, like static analysis or dynamic analysis, they want to do. They can do all this testing, they can have their own threshold, and they can certify that the library is internally certified and internally signed inside that company, for use as a supply chain ingredient for other software.
F
You know, you make a great point, and we should try to capture that in writing and convey it. Now, for that library vulnerability to be exploited, you must have had your network penetrated, or someone exfiltrated a credential, gained access to one node on the edge, and started performing lateral moves.
F
So
I
think
yeah.
We
shouldn't
like
skimp
on
well
move
away
from
embedded
credentials
and
like
long-lived
keys
and
like
move
on
to
like
identity-based
systems
and
have
short-lived
credentials
have
mtls
end-to-end
like
these
are
imperatives
because,
yes,
like
there's,
there's
going
to
be
day,
zeros
right
and
there's
gonna
be
like
software
stuff
written
by
humans.
But
we
should
like
automate
like
we
should
delegate
to
the
machine
and
forcing
least
privilege
at
every
single
layer
of
the
stack.
I
Yeah, definitely. I mean, there can be this kind of advice. In my opinion, at some point in the future every open source software maintainer should generate an SBOM for their library, should do proper attestation, and should authenticate securely, you know, not use weak authentication and things like that. That's something we can expect in the future, and we can also give advice in the white paper for the open source community. But for the consumers, if they have a higher level of security requirement...
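As a rough illustration of what a maintainer-generated SBOM contains, here is a hand-built document that loosely follows the CycloneDX JSON shape. The field values are invented for the sketch, and in practice an SBOM generator tool would produce this rather than hand-written code.

```python
import json

def minimal_sbom(name: str, version: str, purl: str) -> str:
    """Emit a minimal CycloneDX-style SBOM describing one library.
    Field names loosely follow the CycloneDX JSON format; consult the
    spec before relying on this shape for real interchange."""
    doc = {
        "bomFormat": "CycloneDX",
        "specVersion": "1.4",
        "version": 1,
        "components": [
            {"type": "library", "name": name,
             "version": version, "purl": purl},
        ],
    }
    return json.dumps(doc, indent=2)
```

Even a document this small gives a consumer something machine-checkable to attach attestations and vulnerability data to, which is the argument being made above.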
I
C
I think we covered that part, or at least picked that part out, at the front of the document. And how we're securing the build of that software, I mean, that's the massive chunk on the software factory. We've got a shedload of detail we'd be able to dump in there, right? Hopefully we'll cover that.
F
Before your holiday, since I'm done with the other things, I'm curious: we'd need to get the chairs' sign-off, and the deadline for KubeCon EU maintainer track sessions was February 7th, but I might be able to pull something off if we want a KubeCon session, a 35-minute maintainer track spot where we have the group share the progress on the white paper. We might not have it finished, yeah.
C
C
F
C
Done? No, I agree. I think it's good, because I think it's important to get the content out there, and I just want to make sure it's nice and tight before we publish it. But yeah, makes sense to me. All right.
E
F
E
Some places I go, it's pretty good coverage, but I actually just pre-ordered Starlink, so when that comes, maybe by next fall, I'll be camping out at a nice spot for a week or something and working from the woods. We'll see, we'll see.