From YouTube: Security Tooling Working Group (June 6, 2023)
Description
Notes: https://docs.google.com/document/d/1jzxhzIfkOMTagpeFWYoZpMKwHYeO4Gc7Eq5FcMFEw2c
Our mission is to identify, evaluate, improve, develop, and ease deployment of universally accessible, developer-focused tooling to help the open source community secure their code. This space allows members to collaborate on these goals.
A: Everyone on the call right now is in North America, so we'll see if some Europeans show up, potentially. Okay, so one of the things we do to start things off, by convention, is just drop a little piece of information in the chat about who you are. I need to put this to the side. This is something that Josh likes to see us do, so that everyone knows who they're talking to.

A: Josh, if you want to cue things up, you're off in about two or three minutes, because I know you have to leave at the halfway point. Yep, okay.
A: Yes, yes, okay. So Josh is here, and I've been asking people to put the sign-in in; the meeting notes are in the text, thanks for putting that there. Josh has queued up and is ready to go, so I think we can probably start right now.
C: Okay, so I do a lot of work with SPDX in the Yocto Project. If you're not familiar with what the Yocto Project is, I'm going to run through this really quickly.
C: We do builds that are primarily targeted at embedded systems, but not exclusively; we can build containers, package feeds, all sorts of SDKs, and things like that. The way this works is that we take in a whole bunch of source code, plus metadata about how to build that source code, and then we produce whatever target thing you're trying to produce.
C: So you could think of us as a quote-unquote meta build system, if you wanted to, because we're not replacing autotools or make or Meson or anything like that. We actually invoke whatever the underlying software uses to do its builds, then we take the outputs of that and put them into packages (the traditional package types you would see in a desktop distro, like debs or RPMs or ipks), and then we assemble those into whatever final target we're trying to build. That's a very high-level view of how we do this, running through it really quickly. One of the interesting things that I think sets the Yocto Project apart from perhaps other meta build systems...
C: ...is that a lot of the tools, the quote-unquote "native" tools that we need, we actually build ourselves. What we mean by that is that we don't rely on the host version of GCC that comes on the system you're building with; we actually build our own host version of GCC and then use that to cross-compile the software that goes into our system.
C: So a lot of those tools (we call them native, but you can think of them as host tools) we're actually building ourselves. We have very few actual host dependencies on the system, so it's all very self-contained, and it's very easy to do hermetic builds and things like that.
C: The way we do this is through a complicated system of hashing, which I won't get into in too much detail, but basically each hash is dependent on the hashes upstream from it. So if any hash changes, that's how we know to rebuild everything downstream of it, because all those other hashes suddenly become invalidated.
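The hashing scheme just described (each task hash folding in the hashes of everything upstream of it, so one change invalidates everything downstream) can be sketched in a few lines. This is only an illustration of the idea, not the actual BitBake implementation; the task names and graph here are made up:

```python
import hashlib

def task_hash(task, inputs, deps, cache):
    """Hash of a task = hash of its own inputs combined with the
    hashes of all of its upstream dependencies (recursively)."""
    if task in cache:
        return cache[task]
    h = hashlib.sha256(inputs[task].encode())
    for dep in sorted(deps.get(task, [])):
        h.update(task_hash(dep, inputs, deps, cache).encode())
    cache[task] = h.hexdigest()
    return cache[task]

# A tiny hypothetical build graph: image depends on pkg, pkg on src.
deps = {"image": ["pkg"], "pkg": ["src"], "src": []}
inputs = {"src": "v1", "pkg": "recipe-a", "image": "layout-1"}

before = {t: task_hash(t, inputs, deps, {}) for t in inputs}

# Change only the source input: the src, pkg, and image hashes all
# change, which is how the build knows to rebuild everything downstream.
inputs["src"] = "v2"
after = {t: task_hash(t, inputs, deps, {}) for t in inputs}
changed = {t for t in inputs if before[t] != after[t]}
```

Changing only `image`'s own input, by contrast, would leave the `src` and `pkg` hashes intact, so only the image step would rebuild.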
C: I have a whole presentation on this, from my talk at FOSDEM, that I can link you to, which goes into a lot of this in more detail. So if you want the whole thing, I can give you that; I'm just trying to go really fast here. The way we do this is that, at certain points in our build, we just output an SPDX document that says "this is what we did at this point in the build," and because we're a meta build system...
C: ...that's actually pretty easy, because we know all of the things that are required to successfully build the software. We're basically just recording that in SPDX format and then writing out the document. At the end, we take all the documents and pull them together into a single big tar archive. We're hoping to make that final step better with SPDX 3; with SPDX 2 there isn't a particularly great or easy way to combine multiple documents into a single document, but that will be better with SPDX 3.
C: So we're excited about that, because it will get rid of our big tarball at the end that just has a bunch of documents in it. This slide gives an overview of the relationships that we generate, and I'm going to spend probably the most time on it. Can you see my mouse cursor? Yep? It might be really tiny, okay, yeah. So basically we have what are called recipes, and those are the thing...
C: ...that's the file that says how to build whatever the piece of software is. When we process the recipe, we spit out this recipe SPDX document, and you can think of that as describing the source code itself and how that source code is built. So we put in relationships that say things like...
C: ...this recipe SPDX document contains the source code. We also have to know all the build dependencies in order to correctly build the software, so we put build-dependency relationships between recipe SPDX documents: this recipe depends on that recipe to build, and we put that relationship in the document. This works out really well, because we're building these things in dependency order.
C: So, by the time one recipe needs to link to its build dependency, we've already written out that dependency's recipe document. That works really well, because we're already walking the directed acyclic graph of builds, so we spit out the documents as we go and can link them together like that. The recipe document can also contain a whole bunch of other stuff, like what the source code was and where the source code was downloaded from.
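As a rough sketch of what such a recipe document looks like in SPDX 2 terms: the document CONTAINS its source, and a BUILD_DEPENDENCY_OF relationship points across documents at another recipe. The field names follow the SPDX 2 JSON schema; the package names, IDs, and namespaces are hypothetical, not taken from actual Yocto output:

```python
# Minimal sketch of a recipe-level SPDX 2 document with the two
# relationship kinds described above.
recipe_doc = {
    "spdxVersion": "SPDX-2.2",
    "documentNamespace": "http://example.com/spdx/recipe-busybox",  # hypothetical
    "packages": [{"SPDXID": "SPDXRef-Recipe-busybox", "name": "busybox"}],
    "externalDocumentRefs": [
        {
            "externalDocumentId": "DocumentRef-recipe-gcc",
            "spdxDocument": "http://example.com/spdx/recipe-gcc",  # hypothetical
        }
    ],
    "relationships": [
        # The recipe package contains its source files.
        {
            "spdxElementId": "SPDXRef-Recipe-busybox",
            "relationshipType": "CONTAINS",
            "relatedSpdxElement": "SPDXRef-SourceFile-main-c",
        },
        # gcc (described in another document) is a build dependency
        # of this recipe; the prefix names the external document.
        {
            "spdxElementId": "DocumentRef-recipe-gcc:SPDXRef-Recipe-gcc",
            "relationshipType": "BUILD_DEPENDENCY_OF",
            "relatedSpdxElement": "SPDXRef-Recipe-busybox",
        },
    ],
}

# Which external documents does this recipe build-depend on?
build_deps = [
    r["spdxElementId"].split(":")[0]
    for r in recipe_doc["relationships"]
    if r["relationshipType"] == "BUILD_DEPENDENCY_OF"
]
```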
C: We don't have it in there today, but we're planning on adding it; we could add things like the compiler flags that were used.
C: We know a whole bunch of different information. I think I've got a slide on that over here, because I can't remember it off the top of my head; there's just a lot of it.
C: It's not here; it must be in a different presentation, anyway. Okay, so once we've actually compiled the source code, we split it up into packages, which again are just the normal packages you would expect from a package manager on a distro. When we do that, we write out these package SPDX documents. While the recipe SPDX describes how we built the software, the package SPDX describes what was produced.
C: So we can say this runtime package was produced from this build, more or less. But we can also do a lot of interesting things, where we use the debug information from the generated executables to find things like static library dependencies, and we'll insert these "generated from" relationships to other recipes based on that debug information. That's really useful; it actually makes it possible to track down static library dependencies, which historically are quite difficult to track.
C: The thing we can't do when we generate these packages is figure out the runtime dependencies. That's mostly because, while the recipes are built in build-dependency order (so there's a directed acyclic graph of build dependencies)...
C: ...the runtime dependency graph is not acyclic. It is directed, but it's not acyclic, so we can't write out the runtime dependencies when we generate the package SPDX document; we have to actually generate all the packages first, and then we can resolve all the runtime dependencies. This is not unique to our build system.
C: Lots of distros out there have non-acyclic runtime dependencies, so it's not just something weird we did. At a later point, once we've generated all the packages in the system, we go through and write out this runtime SPDX document, and all it is, basically, is an amendment to the original package document that adds in the runtime dependencies. You'll see that this runtime document says it amends the original one, but then it adds runtime-dependency relationships to the other generated package SPDX documents.
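A minimal sketch of that two-phase resolution: because the runtime graph can contain cycles, the runtime relationships are collected only after every package exists, and written into a separate document that amends the original. AMENDS and RUNTIME_DEPENDENCY_OF are SPDX 2 relationship types; all names and namespaces here are hypothetical:

```python
# Hypothetical runtime dependency graph; note the cycle a <-> b,
# which is why this can't be emitted while packages are being built.
runtime_deps = {"pkg-a": ["pkg-b"], "pkg-b": ["pkg-a"], "pkg-c": []}

def runtime_amendment(pkg):
    """Build the runtime amendment document for one package, after
    all packages have been generated."""
    doc = {
        "documentNamespace": f"http://example.com/spdx/runtime-{pkg}",  # hypothetical
        "relationships": [
            # This document amends the original package document.
            {
                "spdxElementId": "SPDXRef-DOCUMENT",
                "relationshipType": "AMENDS",
                "relatedSpdxElement": f"DocumentRef-package-{pkg}:SPDXRef-DOCUMENT",
            }
        ],
    }
    for dep in runtime_deps[pkg]:
        doc["relationships"].append({
            "spdxElementId": f"DocumentRef-package-{dep}:SPDXRef-Package-{dep}",
            "relationshipType": "RUNTIME_DEPENDENCY_OF",
            "relatedSpdxElement": f"DocumentRef-package-{pkg}:SPDXRef-Package-{pkg}",
        })
    return doc

amendments = {p: runtime_amendment(p) for p in runtime_deps}
```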
C: So that's basically what we do there. Then, when we're all done, we generate whatever the final thing is: the SDK, or the image you can flash to an SD card or boot, whatever.
C: Then we write out a final SPDX document for that thing, which basically just says it contains all of these packages, and then we add in the runtime dependencies as an OTHER relationship, because I don't know how else to do it.
C: Then we generate this image index and a tarball, because what we have here is a whole bunch of documents, and what we really don't want to do is rewrite documents after we've written them, because SPDX documents, at least in SPDX 2.0 and SPDX 2.2, are linked together by their hash.
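The "linked together by their hash" point is concrete in the SPDX 2 JSON schema: an external document reference carries a checksum of the referenced document, so rewriting a document after the fact breaks every reference to it. A sketch, with hypothetical names and namespaces:

```python
import hashlib
import json

def external_doc_ref(ref_id, namespace, doc_bytes):
    """SPDX 2 external document references embed a checksum of the
    referenced document; edit that document and the link goes stale."""
    return {
        "externalDocumentId": ref_id,
        "spdxDocument": namespace,
        "checksum": {
            "algorithm": "SHA1",  # the algorithm SPDX 2 mandates everywhere
            "checksumValue": hashlib.sha1(doc_bytes).hexdigest(),
        },
    }

dep_doc = json.dumps(
    {"documentNamespace": "http://example.com/spdx/recipe-zlib"}  # hypothetical
).encode()
ref = external_doc_ref("DocumentRef-recipe-zlib",
                       "http://example.com/spdx/recipe-zlib", dep_doc)

# Any edit to the referenced document changes its hash, so the
# recorded checksum no longer matches.
edited = dep_doc + b" "
stale = ref["checksum"]["checksumValue"] != hashlib.sha1(edited).hexdigest()
```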
C: So that's why we have all these different documents, but we want to include them all in one thing for the end consumer. So we take all of these documents, starting at the image SPDX.
C: We basically do a recursive tree walk, following all the links to other documents that we find, and we put them all into a big tarball. That's really not a standard SPDX thing; it's just what we do to make it easy for our end consumers to get all of the documents in one go. You get your image file...
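That recursive walk can be sketched as follows: start from the image document and follow external document references until nothing new turns up; everything reached goes into the tarball. The reference map here is a hypothetical stand-in for parsing `externalDocumentRefs` out of each file on disk:

```python
# Hypothetical map of document -> documents it references.
refs = {
    "image.spdx.json": ["pkg-busybox.spdx.json", "runtime-busybox.spdx.json"],
    "pkg-busybox.spdx.json": ["recipe-busybox.spdx.json"],
    "runtime-busybox.spdx.json": ["pkg-busybox.spdx.json"],
    "recipe-busybox.spdx.json": [],
}

def collect(start):
    """Recursive tree walk: everything reachable from the starting
    document is what ends up in the tarball."""
    seen = set()
    stack = [start]
    while stack:
        doc = stack.pop()
        if doc in seen:
            continue
        seen.add(doc)
        stack.extend(refs[doc])
    return seen

tarball_members = collect("image.spdx.json")
```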
C: That's the thing you flash onto a disk, and then beside it you get a tarball of all the SPDX documents that are in that image. Then, to make things a little bit easier on our users, we've also written what we call the image index. This is just a JSON file that tells you, given a document namespace, what the file name in the tarball is for that document, because that can actually be kind of difficult to figure out unless you open up every document, find its namespace, and match it to the file name.
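The image index described here is straightforward to sketch: a JSON map from each document's namespace to its file name inside the tarball, so a consumer can resolve a cross-document link without opening every file. File names and namespaces below are made up for illustration:

```python
import json

# Hypothetical documents as they would sit inside the tarball:
# file name -> parsed document (only the namespace matters here).
docs = {
    "recipe-busybox.spdx.json": {"documentNamespace": "http://example.com/spdx/recipe-busybox"},
    "pkg-busybox.spdx.json": {"documentNamespace": "http://example.com/spdx/pkg-busybox"},
}

# The index inverts that mapping: namespace -> file name in the tarball.
index = {doc["documentNamespace"]: name for name, doc in docs.items()}
index_json = json.dumps(index, indent=2)

# A consumer resolving a link now does a single lookup instead of
# opening every document to find a matching namespace.
target = index["http://example.com/spdx/pkg-busybox"]
```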
C: So we just pre-calculate that for everyone with this image index file, which is also not an SPDX thing. Again, we hope the big tarball and this image index will go away with SPDX 3, because it will be much easier to combine documents; we'll be able to pull them all together into one big document and be done with it.
C: But yeah, a lot of this is just information we already have. I don't know if I have the slide that has that list of things.
C: It's all information that we already have in our recipes. We require users to correctly annotate, to correctly write, a recipe that says how to build their software, and because we already have all that information, it's basically just a matter of recording it in SPDX. We have also, as a project, taken a stance that if it's not something we can authoritatively say, "this is what this is supposed to be," then we don't...
C: ...we don't comment on it in the SPDX. The intention there is that this will make it easier for us to include upstream SPDX documents that describe source code, without conflicting with them. So if we don't know something, we really try not to say anything about it; we want to provide the information that we know and not comment on things that we don't necessarily know about. So, I've got about 10 minutes for questions, if anyone has any.
F: Yep, and Alan put something in the chat too, so take a look when you get a chance. Thank you for this really helpful insight into how you approach leveraging SPDX. When you look at this slide that you have up, what would you say is the SBOM? Is it the tarball at the end? Is it the image SPDX at the top right? What would be the outcome?
C: Right alongside that image file is the tarball, and that tarball tells you all of the packages that are installed on the image that you flash to the Raspberry Pi, plus all of the dependencies required to build all the software that was flashed onto that image. So it's the complete compendium of everything that we know, at least.
A: And I'm also going to jump in: this is the reason we've introduced the SBOM types, because some people would consider the SBOM to be the software itself, while the source, the recipes, and some of that stuff would be considered part of it as well by other people.
C: Yeah, it's kind of a hybrid. If you really wanted to draw a line, the package sits about halfway, because the package says what it was generated from, so the package links back to the build. But basically everything below this line would be your build information, and everything above this line is your runtime information, and they're linked together, right.
F: So then a person who's going to consume what you built would take that, pull out what they need from the build, pull out what they need from the deployed, and then push it along to the next phase, building on what you had. So the SBOM is evolving.
F: So from your experience (obviously, you can only speak to your experience), is there an opportunity to make this process you go through easier? It could be SPDX, or it could be some other part of understanding this to begin with, through education, or through one click where everything's rolled up into something. Do you have any perspective there?
C: I think one of the biggest problems that we ran into, especially early on, was definitely a tooling problem: no one could consume multiple linked SPDX documents the way we were generating them, so it was very hard to validate them, and things like that. We've tried to make this very easy, and honestly, the code for this is less than three thousand lines of Python, so it's really not that complex to write.
C: The thing that would be most helpful for us, the problem that we run into (and I think this isn't as much of a problem with SPDX 2, but it's a little more concerning for SPDX 3), is that what we would like to do eventually is consume SPDX documents from upstream source code repositories. So if a source code repository provides an SPDX document, like with REUSE; I think REUSE will give you that, someone can correct me if that's not true.
C: So what we would like to do is pull those into the thing that we're generating and link to them. We could have the recipe document say, "this is the source code SPDX," and we'll just pull it in without touching it, into our compendium of stuff, so that you can follow it all the way back.
C: Here's the source code SPDX document, if you want. And then the other problem, the thing that goes along with that, that we have a little bit of trouble with, is that because of our reliance on minimal host dependencies, we are basically stuck with stock Python for what we can implement. Don't get me wrong, I'm really glad that SPDX 3 has this really strong data model under it.
C: It has acronyms that I can't remember, like OWL and things like that. I don't understand this data-model stuff; I've never had to deal with it much. But it worries me a little bit that I'm going to have to parse that in stock Python, without any external tools, you know what I mean? That worries me a little bit. But as far as generating it, it's really not too bad: we take what we know and spit it into some JSON, and it's not that big a deal. I don't envision SPDX 3 being that much more difficult than what we have today.
C: Hands, when I'm... oh, there we go, yeah. Okay, sorry, I finally got the user view up. So, okay, yeah, Matt.
D: You made a reference to sort of having the user describe their software. Can you unpack that a little bit? Or, if that's something everyone here knows about, you can just point me to it.
C: Yeah, we call them recipes. We have recipe files that describe how to build the software. They have their own unique syntax, but it's kind of like... I don't know the best way to describe it. It's a little bit like a makefile, but not quite; it's not as bad as make, but it's the same idea: you have variables that you can set or append to or whatever, and then you have tasks. I might be able to show you; let me see if I can.
C: ...and that's just part of Yocto; that's how Yocto builds. So basically, the way Yocto says you have to build stuff is that you write this recipe that says how to build it, and that recipe has all the information that we need to build the software. We're basically just taking that information and putting it into an SPDX document, because we have to know all that information to build the software anyway.
C: So if you don't correctly annotate your build-time dependencies in your recipe, it won't build, right? So we're fairly confident these are correct. We're just taking what the user has written in the recipe that describes the software they want to build, and translating it into the SPDX format, basically.
F: Time check for you, Josh: you've got two minutes, and I know you had to drop at the half hour. Matt, do you want to ask your question really quickly?
G: Well, it was more of a response to your question, Sarah, about how you automate. I put myself on the agenda for May 20th to describe a guide that I'm writing for how we would do this with CycloneDX. My intent is to leverage CI build-system runtimes like Tekton and Jenkins; they have declarative build artifacts that can directly map to the recipes or formulas or tasks. Okay.
G: And that system actually had... actually, it's kind of disappointing that FRSCA has fallen on hard times here at OpenSSF, because they've done a lot of work in terms of integrating with Sigstore, as well as getting different attestations confirmed and bringing in different products for the build process in an ephemeral way. In addition, through monitoring, through Tekton Chains, it does capture and verify all the build steps, all the build tasks.
C: The other thing, the other reason we didn't want to do that: we don't want to guess things. We are explicitly not trying to make an SPDX guessing tool that can scan our Docker container and figure out what's supposed to be in there, because that's not our job as a build system. There are other tools out there that can do that better.
G: The key thing, yes, the hard part, is basically getting clear identities. So when you're creating formulas, or captures of builds, it's forcing people to create instance identifiers that are assigned and attestable through something like Sigstore. The key is that if you run two builds, they're two distinct instances of the resources used to build the software. In terms of Tekton, you have a different container base image you're running on; you have a different runtime.
C: Yeah, we haven't gotten to that part yet, but I'm sure we will.
C: I'll drop a link to the deck this slide is from, and a couple of other talks I've given on this, if you all want to watch them later. But then I do have to go.
H: Yeah, just a real quick comment, because Matt mentioned FRSCA. It's not like, "oh, we're trying to kill it." At least my understanding of the challenge for FRSCA is this: FRSCA has got this wonderful integration of all these different capabilities, but the challenge is that when you have a large integrated workflow, a lot of projects don't want to just replace their entire workflow with some other, completely different workflow. That changeover is a rather big challenge.
G: No one is asking people to change. We're saying that, in terms of openness, we're trying to create a sterling tool chain, reference implementations. We lost the team, basically: of the five or six active developers on FRSCA, we've lost two or three, and in fact the key ones that were actually working with the Tekton Chains project to integrate pull requests there. So those...
A: I guess, and thanks again; yes, and he's dropped. I guess it's Avi's turn now, so Avi, the floor is yours.
B: Well, I won't say I've got anything nearly as impressive or as fancy as that last one from Josh, but I'll do what I can. I'll provide a little bit of background. What's my time frame here? You've got to the top of the hour. All right, I'll try to take a heck of a lot less than that. I'll provide some context verbally, and then I'll bring up a few screen shares. I've been working, as I wrote in both the meeting notes and the chat...
B: I'm independent. I've been working with LF Edge, which is a sub-foundation of the LF, on the EVE project for quite a while, actually, both directly and through the commercial company that originally started it. And, as Tim brought up, working with Tim and Tim's company, we've been trying to bring, shall we say, our SBOM compliance up to a much higher standard; let's be polite about it.
B: We had a number of challenges throughout it, and Tim suggested that we speak about some of those here. So I'll provide some context on both.
B: ...what the EVE project is, briefly, in just a few short minutes; how it's built; how we've gotten SBOMs; and then some of the challenges we've run into. I think I'll focus much more on the challenges and issues that we've hit than on the structure itself. Feel free to jump in and stop me; I don't know if I'll see the hands, so you can go ahead and just shout, that's fine. Here we go; I believe it's this window. Here we go, all right.
B: This is a public piece. I'm going to completely ignore any of the marketing stuff that's in here, because nobody wants to see it. From a structural perspective, EVE is an edge-virtualization, or virtualized-edge, whatever it's called, a virtualized edge OS. Unlike Yocto, it's not a recipe, put-a-build-together type of thing; it's pre-built. If you have EVE version 9.4.0 for x86_64, then that's going to be one pre-built thing that will run on every x86_64 machine.
B: There's one for Arm as well, and I believe there's RISC-V under experimental. Either way, it's pre-built, completely open source, but it is pre-built from the get-go, usually distributed as artifacts on GitHub release assets or on Docker Hub. I'm trying to look for a good architectural view... oh, that was a good one, here. One of the important pieces about it, okay, the second one: an important piece is that it is completely self-contained and controlled, meaning it can't do anything without a controller.
B: ...is coming through fine? Okay, so you can see here that it has to talk to a controller, either open source or commercial. There is an open source controller under LF Edge, and there's a commercial one. I noticed the commercial one's got a much bigger logo than the open source one, but I don't think that was intentional.
B: Either way, it's completely driven from that; it can't do anything locally, and it's not changeable. If you want to deploy anything, you go to the controller; if you want to update it, it does the verification, and so on and so forth.
B: So you're dealing with what's essentially not a recipe for building different distributions; we're dealing with what's essentially a single distribution, a single completely-set-up image for your architecture. You can change it based on the release versions, but that's about as far as it goes. So, to understand how we get into the SBOMs, how we scan them, and the challenges, it's worth looking at how it's built. EVE itself is largely built around Linuxkit.
B: I don't know if anybody here, other than Tim, of course, has experience with Linuxkit. Linuxkit originally came out of Docker; I don't remember if it's still under Docker, but the copyright is all open, either Apache or MIT. It is an OS composition engine; it's not quite as opinionated as some of the edge-specific stuff. I put a sample build file here.
B: This one doesn't do a whole lot, but you run this YAML file through it and it will pull the various containers, like this one and this one, and essentially compose a fully bootable OS. It's very specific, very tailored. So if you say, "I want a bootable OS that does a, b, and c, or just runs nginx," you wouldn't start with, say, an Ubuntu or a RHEL or a SUSE or whatever, add the packages, and trim it down. You're basically composing it just from these bits.
B: This one here actually runs nginx; I picked that out of a hat, but it just runs nginx. The stuff that's in "services" will be long-running containers; the stuff that's in "onboot" and "onshutdown" are one-time runs; and the stuff that's in "init" and "kernel" is laid out specifically on the file system. This is really just built into one big tar stream and then converted into whatever bootable OS image you want.
B: What that means is that it leaves you with an OS image or, if you take it a step earlier, a tar stream that you could then pass to any kind of scanner, whether a security scanner or, for this context, an SBOM scanner. If I look at the actual EVE makefile, for whoever said earlier that they like makefiles: this is an absolutely brutal makefile. So if you like makefiles, do not spend time looking at this one.
B: As part of the build stage, there's a rootfs tar that will eventually be combined and converted into a bootable image and distributed as an asset. Essentially, the rootfs tar is created right before it's combined into an OS image; it's expanded into a temporary root dir (skip this stage for a second), and then we just run Syft on it.
B: Why Syft? It's open source, and they've been very responsive to issues we've found; and we definitely do not want to be building an SBOM generator on our own, no knock on what anybody else is doing, either on the open source side or the closed source side. So it generates the SBOM and then saves it out.
B: Well, you can't see it because it's in the config here, but in the config we're using the SPDX JSON for the output, in case people are curious as to why: as I said, EVE is under LF Edge, a sub-foundation of the LF, and SPDX is officially of the LF. It wasn't much of a question. So, to get into some of the challenges we had; I'm sorry, this is not such a fancy slide, but here you go. We had a bunch of challenges around it, and these are not in any particular order.
B: Kernel and modules in init: when your kernel is distributed as part of some sort of package manager, yum or apt or one of those, it will generally show up in your package databases. If you're actually laying them out directly, directly building a kernel and putting it into a tar, eventually into a disk image...
B: ...scanners have a hard time recognizing exactly what they're dealing with; most scanners don't. We eventually pushed that upstream into Syft. Modules are a little bit easier, in that you can get the module info, and file, with its libmagic, actually does a pretty decent job of recognizing most of these things. It was an interesting experience figuring out how exactly libmagic works, building some of that natively in Go, and then getting it upstream, which is what I spent a week doing.
B: That is one area of challenge. Another was depth of items. I can show you an example here; I believe it's this page. It is. This is just in the open tar; I did a search just for the apk db "installed" files. You'll see: one, two, three, four, five... it's got about 15 to 20 different install databases, because there are multiple contained file systems within here. This is the root, but then there are multiple contained file systems within it, and every one of these is eventually going to be a container.
B: So you have to recognize that. Usually, when you scan an OCI image, or a file system, or a disk image, whatever you're scanning, you expect to see lib/apk/db/installed at the root, your dpkg stuff, etc.; you expect to see them there. When things are embedded, it can get interesting. You also have multiple package databases; in most cases when you're scanning, I think you're dealing with a single package manager.
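The "depth of items" problem can be illustrated in a few lines: walk a tar stream (here a synthetic one, standing in for the composed OS tar) and collect every `lib/apk/db/installed` entry, wherever it sits. A scanner that only checks the root finds one of the three:

```python
import io
import tarfile

def add(tar, name, data=b""):
    """Append one regular file to the tar being built."""
    info = tarfile.TarInfo(name)
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

# Synthetic tar with one root-level apk database and two more embedded
# inside container filesystems, mimicking a composed OS image.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    add(tar, "lib/apk/db/installed", b"P:busybox\n")
    add(tar, "containers/services/nginx/rootfs/lib/apk/db/installed", b"P:nginx\n")
    add(tar, "containers/onboot/mkfs/rootfs/lib/apk/db/installed", b"P:e2fsprogs\n")
buf.seek(0)

# Collect every apk install database, at any depth.
with tarfile.open(fileobj=buf) as tar:
    dbs = [m.name for m in tar.getmembers()
           if m.name.endswith("lib/apk/db/installed")]

root_only = [n for n in dbs if n == "lib/apk/db/installed"]
```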
B: But if you've got containers embedded within containers, or within a file system, you can actually get different ones, and you have to be able to recognize them. Compiled binaries were actually interesting.
B: Most scanners out there will do a pretty good job of seeing, "okay, I recognize that this is a package database; I'm going to read it and cross-reference files," and so on and so forth. Then there are things that don't come out of a package database and are self-contained: things like Go compiled binaries generally carry enough information for you to identify them, but when you compile C, very often, unless you're really adding a lot of metadata, a scanner has no idea what it is.
B: I have yet to see something that actually sanely scans a qcow image or a raw disk image, and even once you do, what if you've got tars inside there? You can have multiple layers of embedding when you're dealing with a whole OS, as opposed to just an application or a single OCI image. Specifically, in some of these areas: Alpine packages. How much experience do people here have dealing with Alpine packaging, the guts of it?
B: That's okay. This slide is the last I have; I'm keeping a close eye on the time. When you install an Alpine package, like with most package managers, it goes out and retrieves whatever package it needs, in this case an APK, which is just, well, it isn't just a tar; that file has a weird structure, but fine. It unpacks the various files and then updates its installed database.
B
That's
your
ability
to
install
Downstream
Upstream,
saying
I've
got
an
install
database,
I'd
like
to
know
where
this
came
from
the
Alpine
database
doesn't
include
everything
necessary
to
go
back
Upstream.
It
doesn't
necessarily
include
which
repository
took
it
from
did.
I
take
it
from
here.
Did
it
take
it
from
there
you
sort
of
have
to
know?
Oh
look.
This
package
was
X.
Therefore,
where
did
it
come
from?
B
There's been discussion on their GitLab about how to add things to be able to go back upstream. I believe there's talk about adding a package URL into the database itself, so you'd really be able to reliably get it, with the various tags available on it. Package mutability is a headache for anybody who's dealt with it: you can have a version of a package, 3.14.0-r2, and that version can be overwritten at any time in the future. They aren't often, but they can be, for various patch reasons.
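The package URL idea mentioned here is the purl spec, which already defines an apk type. A minimal sketch of composing one (the helper name and defaults are mine, not Alpine's):

```python
def apk_purl(name, version, arch=None, namespace="alpine"):
    """Compose a package URL (purl) string for an Alpine package.

    Follows the purl spec's apk type: pkg:apk/<namespace>/<name>@<version>,
    with optional qualifiers such as arch after '?'.
    """
    purl = f"pkg:apk/{namespace}/{name}@{version}"
    if arch:
        purl += f"?arch={arch}"
    return purl
```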
B
You have sources that can disappear: even once you figure out how to trace source, the source can disappear. And not everybody cares, to be honest. For most of Alpine's use cases it doesn't matter, but if you start caring about things like source tracing and bills of materials, it actually matters quite a bit. Were you going to jump in a bit back there? I'm happy to take a detour.
I
Oh good, I was going to ask: when you're scanning containers, especially given some of the issues you've described about not necessarily finding nested objects, or things nested in multiple different ways, at what point during build time do you run? Are you scanning containers just at runtime, once you have the actual container? I'm curious whether you've done any experimentation, like scanning on the file system and comparing the deltas between what you might find just getting a container versus scanning some of the instructions and the source code that goes into generating the container.
B
For most of the images, if you look back at the way that LinuxKit builds one here, it's basically taking all these images and just laying them out in a file system and ending up with a tar stream. So we're just untarring that tarball, which is essentially the equivalent of: I've got an OCI image, I've got a container image, spread it out and scan it, or scan lots of them at once.
B
So we're not scanning at build time or during the build; we're scanning at the end of the build. If you've got five layers and the last one's from scratch and includes three binaries, we're just getting the three binaries. But we are doing it not at runtime, so I couldn't tell you about the comparison; it's at the very end of build time, when they're all composed together, I guess, over here.
B
That's almost true. There are cases, and again I
B
don't have the slides here, and we'll find the link if we have the time, where we've seen there's stuff that gets put on there that won't quite be caught. And so what we've gone down the process of (I'm not going to find the link right now) is enforcing, within the build process, that we don't allow network access during almost any of the builds. The builds are happening through Docker buildx, and we basically disable network access for almost every container image build. If you disable that, it will still allow you network access, but only via the ADD commands. So we've essentially ring-fenced the arbitrary ability to pull things down from the network to only using ADD. And since ADD is well defined, as opposed to RUN, which can run a curl or a wget,
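As a hedged sketch of that ring-fencing, current Docker buildx exposes a `--network` flag for build steps; the project's exact setup may differ, and the image name here is a placeholder:

```shell
# Disable network access for RUN steps during the build. ADD of a URL
# is fetched by the builder itself, so it remains the one sanctioned
# way to bring remote content into the image.
docker buildx build --network=none -t example/image .
```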
B
you've got to run some script that you can't figure out. We're then able to actually scan Dockerfiles, find the ADDs, and figure out exactly what they're doing. Actually, we built a Dockerfile ADD scanner; it's also open source. It uses the same parser: it imports, as a library, the parser that's used by BuildKit, so that it can actually parse the Dockerfile, figure out the ADDs, and pull those down and use them as sources for things.
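The real scanner imports BuildKit's Dockerfile parser as a Go library; as a simplified, hypothetical stand-in, a naive extractor for remote ADD sources might look like this (it handles backslash continuations but none of BuildKit's full grammar: no heredocs, JSON form, or parser directives):

```python
import re

def find_add_sources(dockerfile_text):
    """Return remote source URLs referenced by ADD instructions.

    Joins backslash-continued lines into logical lines, then keeps ADD
    arguments (minus flags and the final destination) that look like URLs.
    """
    logical = re.sub(r"\\\s*\n", " ", dockerfile_text)
    sources = []
    for line in logical.splitlines():
        line = line.strip()
        if not line.upper().startswith("ADD "):
            continue
        args = [a for a in line.split()[1:] if not a.startswith("--")]
        # Last argument is the destination; the rest are sources.
        for src in args[:-1]:
            if src.startswith(("http://", "https://")):
                sources.append(src)
    return sources

# Illustrative Dockerfile, not one of the project's real ones.
SAMPLE_DOCKERFILE = """\
FROM scratch
ADD --chmod=644 https://example.com/kernel.tar.gz /src/
RUN echo no network here
ADD local-file.txt /etc/local-file.txt
"""
```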
B
We use it in a bunch of areas, but we especially use it for the kernel builds, because kernels are so hard to reverse-engineer after the fact. If I can know what went in, I actually probably can, yeah.
I
I appreciate the thorough answer; I don't want to take up the rest of the time here. I'm running a similar experiment on scanning Docker images, or containers, at various points. If you wouldn't mind throwing that open-source Dockerfile scanner you're alluding to into the chat, that would be awesome. Happy to chat with you offline about some of the more creative experiments we're trying out over here. Still early days, but happy to compare notes.
A
B
No, not at all. Daniel, when I'm done I'll also put my email in the link to the notes, next to my name; that way you can always just send me an email if we lose connections from here. I'll skip, with pleasure, a couple of slides for a moment, so I have a chance to get to the other two topics; I want to leave a chance for people to talk. Okay: Go modules, tag mutability, and pseudo-versions. If I have something that's version v2.0.4,
B
theoretically that can change. The proxies are supposed to enforce that it doesn't, but that doesn't always hold. When you actually have things that aren't properly released, you get those pseudo-versions; they include the commit, so life gets a lot easier and you're much more guaranteed. Disappearing sources: same problem. You can refer to a source that somebody can literally pull offline. Again, the Go proxies are supposed to handle it.
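What makes pseudo-versions traceable is that they embed a commit timestamp and a 12-character commit hash prefix. A small sketch recognizing the common shapes (the Go documentation defines a slightly larger grammar than this regex covers):

```python
import re

# Matches pseudo-versions such as v0.0.0-20230606150603-abcdef123456
# and the "after a tagged release" form v1.2.1-0.20230606150603-abcdef123456.
PSEUDO_VERSION = re.compile(
    r"^v\d+\.\d+\.\d+(?:-[0-9A-Za-z.-]*)?[-.](\d{14})-([0-9a-f]{12})$"
)

def pseudo_version_commit(version):
    """Return (timestamp, commit-hash prefix) for a Go pseudo-version,
    or None if the string doesn't look like one."""
    m = PSEUDO_VERSION.match(version)
    return (m.group(1), m.group(2)) if m else None
```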
B
One of our bigger headaches has been main-package source tracing. I'll show you what I mean. This is fscrypt. You notice that it has, here, all the build flags at the bottom and all the dependencies; it's got versions and hashes, and it's great, except for the actual module itself, github.com/google/fscrypt, where it says (devel). Why?
B
Because when you do go build, as opposed to go install off the network, go build will always put in a (devel) version for it, and then you are left with this blank. When you do go version -m, which uses the inherent Go libraries, it gives you, basically, (devel), yeah. And that has been a huge headache. To the great credit of the syft people: when they see this, they look at the build flags, and if they see a main.version, they will override with that.
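That check can be sketched against the output shape of `go version -m`, which prints tab-separated mod, dep, and build lines. The sample output below is illustrative, not a real dump, and the helper is mine, not syft's:

```python
def resolve_main_version(go_version_m_output):
    """Best-effort main-module version from `go version -m` output.

    Prefers a -X main.version=... value in the recorded -ldflags build
    setting (the workaround credited to syft above); otherwise falls
    back to the mod line's version, which `go build` records as (devel).
    """
    mod_version = None
    ldflags_version = None
    for line in go_version_m_output.splitlines():
        fields = line.strip().split("\t")
        if fields[0] == "mod" and len(fields) >= 3:
            mod_version = fields[2]
        elif fields[0] == "build" and len(fields) >= 2 and fields[1].startswith("-ldflags="):
            for token in fields[1].split():
                if "main.version=" in token:
                    ldflags_version = token.split("main.version=", 1)[1].strip('"')
    return ldflags_version or mod_version

# Illustrative, tab-separated output in the shape `go version -m` prints.
SAMPLE_OUTPUT = (
    "\tmod\tgithub.com/google/fscrypt\t(devel)\t\n"
    '\tbuild\t-ldflags="-X main.version=v0.3.4"\n'
)
```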
G
Yeah, I mean, in terms of tag mutability, I'm just curious, because my hope was that we could always force the use of a purl with a commit hash. So even if the tag changes, we always have the commit hash to fall back on. Is that viable?
B
So, is it viable? I think so. When you're dealing with Go itself, because it uses the go.sum, if a tag changes it's supposed to catch it and cause all sorts of errors. But when you're dealing with something like source tracing, which essentially is an SBOM, it's essentially going to give you "here, I depend on a protobuf version 1.2.0"; it's not necessarily going to include the commit hash, but it might. There is space to do it.
B
Well, I like the idea. I don't remember if it's enforced or not, but main-package source tracing has been a big headache. There's a lot of back and forth; there are one or two major GitHub issues on it with the Go people, and it's philosophical. I get it; I disagree; but everybody's entitled to their opinions. And just to finish up: attaching the artifacts. Where do we put them? So, partially,
B
you can see somewhere here, here, the SPDX JSON, so you see it's actually there as well. We considered a separate OCI image; we don't do it as a separate SBOM for the sources. We're looking at things like OCI artifacts, and we're looking at things like cosign and stuff like that.
A
E
So one of the things that we've heard, and I think many of us have heard in many places, is that SBOMs are just something that somebody else wants: if I'm on an open source project, generating one does nothing for me. Is there anything that you'd offer around changes to the open source project, or benefits you saw, or lessons? You talked about the challenges, but lessons learned: did it improve the project in some way?
B
I've had many conversations where we've said: okay, we just had another call with Tim, and Tim's right, but this is a painful thing to do; still, we know this is going to be good in the end. And I will openly admit that there were times when it was very difficult, because it was a lot of work, but we've come out of it much more cleanly. I like the idea that I can look to people on the compliance side and say: okay, we want to use your open source, or your closed source; where's the,
B
what do you have in it? How do you deal with licenses? Every time something comes out, there's no SBOM attached to it, or: how is it attached? Well, it's in the assets, or it's over here. Like I said, it has to be standardized a little bit better, or at least more widely adopted, but it's made things just enormously easier to have those conversations.
E
Hopefully we're making supply chains better; that's the overarching goal. The way I see SBOMs: once we start sharing the documents, they cause us to be more introspective about what we're building, and to aim for better attributes in what we're building, and ultimately to improve. You change what you're doing as a developer, and I guess you could put it that way: better people make better developers, and the developers are making better open source, better products. This is about making a feedback loop.
E
B
I went for a run this morning; it was painful too, and that was a good thing. There's bad pain and there's good pain, and this is definitely good pain. By the way, Daniel, I did put my email on the document there, so you can just pull it out and email me; I'm happy to talk. Matt, I think I see your hand up, and I'm not sure if that's from before. No?
G
It's a new one, in terms of container scanning. I mean, I would love to build upon Tim's point, but I've been running into brick walls trying to get our own Red Hat team to produce SBOMs for our base images.
G
So if you encounter an OCI image or other container image that's been flattened, or, I guess, the new direction of OCI to support compressed container images: how can we, after the fact, extract much from compressed images to get anything meaningful if we don't have the inputs? What's your experience of tools generating and writing SBOMs against container images if they're flattened or compressed? How successful are they?
B
E
Well, I think some of the patterns, like what Avi is talking about, the introspection at the Dockerfile level: if you're able to get back to that and look at the ADDs and things like that, that's one of the ways. It's not just the outputs; I think Avi's approach has been a mix of scanning the outputs but also scanning the inputs, and I know we always get into these kinds of philosophical discussions: is one sufficient, or the other, or are both possibly insufficient, and I,
F
B
One of the interesting areas has been integration with build tools. I myself have contributed to BuildKit; I love it and hate it at the same time. It is so freaking complex to try and contribute to, but it is very powerful, and I've said that to the people there as well. There's work going on in two different places to add SBOM integration there.
B
One is at the build level, and the other is at the end of the build, to be able to have an integration, a scan, which would make things a lot easier in many ways. I do not know how mature that is already. It would make my Dockerfile ADD scanner thing go away; it would make some other integrations we've done go away. But yeah, I think build tools are a better place to do a lot of the stuff we've done. It's just that this stuff is still young.