From YouTube: Supply Chain WG Meeting (Jan 5, 2022)
B
Yeah, so, I don't know, October or November, it's been a little while ago: we started off with this working group as the Digital Identity Attestation working group. We had a number of new projects wanting to come into the OpenSSF, and discussions that related to the supply chain more broadly, not just the identity of individuals or artifacts in the supply chain.

So we agreed to make that change, both in this working group and in discussion with the OpenSSF TAC. We have renamed the working group and started to update the links, the names of the mailing lists, and those things. What we haven't done yet is update the README page, and we also need to have some discussion about what we consider the overall scope and mission for the working group, given the new focus. That's what this topic is about: working on updating the README, which will involve discussions about scope and mission and such. We can talk about process for that, and I'd be happy to volunteer.
B
One option is that I create a draft of a new README page and share it with everyone on the mailing list, and then we could discuss it next week once we've got a draft and some people commenting on it. That's one way to go about it, sharing it on the mailing list. The other way is that I create a draft and make it a pull request, and then we could comment on the pull request or on issues. I'm open to whatever people think is the best way to go about that.
C
Hey Kay, this is Michael; sorry, I'm having problems with networking today. I think a pull request model would be a little premature, because it's not a collaborative editing process. I think a doc of some kind that we can iterate on would be a great place to do it, and I'm happy to help early on in the draft as well.
B
Okay, that sounds good, so we'll do that. Unless anyone has concerns, I'll go ahead and create a draft document; maybe I'll iterate very briefly with a couple of folks first, maybe Michael or Linkin, and then we'll quickly get it out onto the mailing list and let other people add comments and make suggestions.
A
Cool, okay. The next thing on the agenda is a demo of Syft from the folks at Anchore. I think I saw Alex on the call, so Alex, I'm going to hand it over to you.
D
Yeah, howdy, nice to see y'all. Let's see, I'm going to share my screen real quick.
E
Hi. I heard at the last meeting there was a request to get a Syft demo, so I decided to join. Thanks, Josh, for poking me and letting me know what was going on. So, without further ado: what is Syft? Syft is a...
D
I'm going to interrupt you for a second: who are you and what do you do?

E
Yes, thank you, the small details in between. Okay, so my name is Alex Goodman. I work at Anchore, where I'm the tech lead and one of the core developers on Syft and Grype, on the tools team here.
E
We've been working on these tools for the past year and a half to two years now, and we've just been barreling forward, adding more and more features as they've also gotten integrated into our core enterprise products. So, thanks, Josh; sometimes I skip ahead, so without further ado.

Syft is a software bill of materials tool: given a container image or a filesystem, you can generate an SBOM. We can generate it in the wide variety of formats that are available, such as SPDX and CycloneDX, and detect packages across a large set of ecosystems, such as Python packages, Ruby gems, Go binaries and the packages we can find within them, etc. We have a listing on our README, so I wanted to show what that looks like.
E
So if I have a local image in my Docker daemon here, we'll pull that image and start cataloging all the packages we can find, and this is just a summary view of what was found. This particular image was an Alpine image, so we found a lot of APK packages, as well as some Python files. But we also have, as I was mentioning earlier, the ability to say: I want to get this output as, say, CycloneDX, a standard SBOM format. So this is CycloneDX version 2.3, I believe, but I'd have to double-check. Additionally, we also support SPDX, both the tag-value format, which has been around longer, and the JSON format that they also have, which we additionally support.
E
The nice thing is that we can support all these formats but still have one core model that represents what is discovered. That core model is best shown in our JSON output. So if you ask for JSON output, to show you what we find for each individual package, I'll actually spit everything out into a file and jump into what that file looks like.

In this JSON file we have all the packages that we discover, information about the tool and its configuration underneath descriptor, any files that we're discovering, as well as anything about the source; in this case we're looking at a particular container image. If I look specifically at the source section, you'll see everything that we were able to parse about this particular container image: all the layers, all the layer tars and what their digests are, and any manifests that we find, if that's useful.
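The top-level layout described here can be sketched with a toy document. This is an illustrative stand-in, not the exact Syft JSON schema; the section names (artifacts, files, source, descriptor) come from the discussion, while the field contents and package entries below are made up.

```python
import json

# Minimal stand-in for a Syft-style JSON SBOM, using only the top-level
# sections named in the discussion. The field layout is illustrative.
sbom_text = json.dumps({
    "artifacts": [
        {"name": "keyutils", "version": "1.6.3-r0", "type": "apk"},
        {"name": "pip", "version": "21.2.4", "type": "python"},
    ],
    "files": [{"path": "/usr/lib/libkeyutils.so.1"}],
    "source": {"type": "image", "target": {"tags": ["alpine:latest"]}},
    "descriptor": {"name": "syft", "version": "x.y.z"},
})

sbom = json.loads(sbom_text)

# Summarize package counts per ecosystem, similar to the demo's summary view.
counts = {}
for pkg in sbom["artifacts"]:
    counts[pkg["type"]] = counts.get(pkg["type"], 0) + 1

print(counts)  # {'apk': 1, 'python': 1}
```

A consumer walking a real document would iterate the same sections, just with many more entries and richer metadata per package.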
E
If we jump into what one of these packages actually looks like, that's the artifacts section, so I'm going to jump in there. I randomly looked at one particular package, a package called keyutils.

Popping up for a little bit: I was showing off running Syft against an image, but I also wanted to show that if I've got a local directory, where I don't necessarily have an image, I just have other stuff in here, I can run Syft against that local directory. It happens to have, say, Log4j wrapped in a zip, and we can still detect that; we still pick up log4j-core. And the same thing if I pass that specific archive here directly: we can definitely unwrap it and see the contents within it.

I guess that's most of what I wanted to show off today, so I want to stop and pause and see if you all have any questions or comments or curiosities.
F
Hey, Santiago here. This is pretty cool; I had a couple of questions about what I saw. I wonder if we could go back a little bit to the part where you were showing the CPEs.

E
Oh yeah, let's do it.
F
Yes, so this is something that I was very curious to know, because the CPE is listing keyutils and keyutils-libs on a version that's the upstream one, right? And we know the upstream version is a Red Hat package just by looking at the URL, but we're looking at an Alpine build of that library, right?

To be precise, in terms of policy and understanding what this all means: how would you see a policy processing the fact that it's an Alpine build of a Red Hat library, of a particular version, for a particular architecture? Some CVEs won't tell you whether they apply to this purl or to the CPE, but more than likely they will tell you they apply to a CPE rather than a purl.
E
Right, and actually that's one of the main reasons for including, say, CPEs in the output. And just to be really clear about the CPEs in particular: this is synthetic data. This is data that we're generating based off of what we found; it's not something that we directly found.

The vast majority of software in the world does not include, say, a CPE listing for a particular package, so this is us basically taking our best guess, from the metadata we found, at what the CPE might be. The reason that's really useful and important is that the NVD, the National Vulnerability Database, is indexed primarily by CPE. So if you want to search against it, CPE is going to be your best way to do it, and that's usually against the upstream package and not the OS-specific packaging. That kind of speaks a little to what you're talking about: the difference between, say, Alpine's packaging of a thing versus the upstream project itself.
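The NVD-indexing point can be sketched as a toy lookup. Everything here is hypothetical: the in-memory index, the CVE association, and the shortened key format are simplified stand-ins for real CPE 2.3 matching, shown only to illustrate why a generated vendor/product pair gives a consumer something to query with, and why a distro-specific vendor guess can miss.

```python
# Toy, in-memory stand-in for an NVD-style index keyed by a CPE prefix.
# Real NVD lookups use the full CPE 2.3 matching rules.
vuln_index = {
    "cpe:2.3:a:keyutils_project:keyutils": ["CVE-XXXX-0001"],  # hypothetical entry
}

def lookup(vendor: str, product: str) -> list:
    """Build a vendor:product CPE prefix and query the toy index."""
    key = f"cpe:2.3:a:{vendor}:{product}"
    return vuln_index.get(key, [])

print(lookup("keyutils_project", "keyutils"))  # hits the toy upstream entry
print(lookup("alpine", "keyutils"))            # a distro-vendor guess misses
```

This is the practical reason a wrong vendor guess matters: the advisory data is filed under the upstream identity, so only candidates that reproduce it will match.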
F
Right, yeah. I think this is a broader problem, and you're totally right: if I wanted to do some checking, say using the NVD, I would use their JSON schema, and that lets me query by CPEs. But then maybe I'm over-claiming, or maybe under-claiming, right? Because we know that there are some CVEs that apply only to a build with a particular configuration.

That's kind of why I wanted to highlight this, and yes, you're totally right. I think in that sense the best you can afford right now is to just guess: take this regex and make it to the best of your ability.
E
Here, the less guessing we do the better. We could actually start putting in more guesses, more information, but the permutations get pretty wacky and pretty unusual for a consumer. So we've found it's best to keep the guessing to a minimum and to the core fields, which are essentially the product, the vendor, and the version; those are the three core fields that we attempt to support.
F
Cool, awesome. A follow-up on that: I think I need to play with this a little bit more, but the metadata field that you have here, this is essentially just read out of the APK metadata?
E
Yes.

F
Awesome. Another follow-up, just to smoke-test things a little bit: where do you verify that, for example, this claim of UIDs and permissions and the hash of the file actually holds on the host?

E
Oh yeah, good question.
E
So here's what we do. For instance, given the source, notice that for this Alpine package, libkeyutils, we are also looking at the individual files within it: we go to the direct source and start cataloging all of the individual files as well. So if we look at all the files, we have a bunch of stuff that we found; I think earlier I took a gander at this one.

Say, for all of our files, we select the location where the path matches the specific libkeyutils file, which is supposedly owned by this package, and we have a specific digest here and all the specific metadata. Everything that you see here is what we actually found in the source, not a claim from the package manager. So you could take these two fields and cross-correlate them to say: yes, nobody has messed with the package manager's claims, or no, somebody has messed with them.
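That cross-correlation idea can be sketched as follows. The paths, bytes, and digests below are fabricated for illustration; in real Syft output the claimed digest would come from package metadata and the observed one from the top-level files section.

```python
import hashlib

# Fabricated example content standing in for a file found in the source.
observed_content = b"fake library bytes"

# What was actually observed on disk (path -> sha256 of the file contents).
observed = {
    "/usr/lib/libkeyutils.so.1": hashlib.sha256(observed_content).hexdigest(),
}

# What the package manager *claims* about the same file.
claimed = dict(observed)

def verify(claimed: dict, observed: dict) -> list:
    """Return paths whose claimed digest does not match what was observed."""
    return [path for path, digest in claimed.items()
            if observed.get(path) != digest]

print(verify(claimed, observed))  # [] -> claims and observations agree
```

If either side had been tampered with, the mismatching path would show up in the returned list, which is exactly the cross-check described above.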
F
And the last thing, and sorry about all the questions, but this is really cool: I want to play with this, and I think I'll probably bring this into my course this semester, because I've been looking at SBOM tools and I want the students to be able to understand the first-class citizenship of SBOM tools in the software world. Something that I wanted to know is about the two files that you have in the example: one of them is a symlink. Is that also captured?
E
Yes, we do capture symlinks. I don't think I have one handy, but the short of it is: what you would see is type "symlink" for a particular path, if that path is truly a symlink, and we won't have a digest for it per se, because it doesn't make sense to take a digest of a symlink. It makes sense to take a digest of, say, the actual real path.
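A minimal sketch of that symlink rule, assuming a POSIX filesystem: record a type for links without digesting them, and only hash regular files. The file names and temp-directory setup are made up purely for illustration.

```python
import hashlib
import os
import tempfile

# Build a throwaway directory with one real file and one symlink to it.
tmp = tempfile.mkdtemp()
real = os.path.join(tmp, "libkeyutils.so.1.9")
link = os.path.join(tmp, "libkeyutils.so.1")
with open(real, "wb") as f:
    f.write(b"shared object bytes")
os.symlink(real, link)

def catalog(path: str) -> dict:
    """Record a file entry; symlinks get a type and target but no digest."""
    if os.path.islink(path):
        return {"path": path, "type": "symlink",
                "target": os.path.realpath(path)}
    with open(path, "rb") as f:
        return {"path": path, "type": "file",
                "digest": hashlib.sha256(f.read()).hexdigest()}

print(catalog(link)["type"])  # symlink, no digest recorded
print(catalog(real)["type"])  # file, with a sha256 digest
```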
F
Great, awesome, thank you so much, and sorry for just jumping in, Michael.
C
It's totally fine; I think your questions were really interesting. Alex, this is really cool stuff, and thank you for bringing and showing this. Building on some of Santiago's questions, I wonder if there's value in teasing apart the problems that this solves from other problems. This is always the challenge, right?
C
You want something that's useful, but you also want something that is going to get used for its primary value proposition. So here I'm wondering if the matching of things to package identities, and ultimately to CVEs, is a separate process. The primary thing this is doing is essentially spelunking through an existing body of code: somebody produced artifacts, and they didn't produce an SBOM in any kind of signed or unsigned way.

I wonder if there would be value in separating out, from the list of things that you found, the step of matching that to CPEs and CVEs and other things downstream, as a separate part of the process, rather than a single tool. I'm sort of a Unix-tool, do-one-thing-and-do-it-well kind of thinker here.
E
Yeah, great question. We have another tool, a separate tool called Grype, and Grype is a vulnerability scanner where, in a very similar fashion, given some source, it tries to match all the packages against known CVEs out in the world. We could look at the original image per se.

It's an Alpine image, so odds are there aren't going to be a ton of vulnerabilities against it, but we'll see if there are any. You can also use the SBOM itself as the input for vulnerability matching, and if you have a lot of images to go through, using the SBOM is way faster than looking at the images themselves.
C
No, that makes sense. So how are you resolving to the CPE? I guess that's the question I didn't ask clearly enough: you do this analysis, find a bunch of things, and then somewhere along the way you resolve that back to a CPE, and I was curious what you're doing there and how.
E
Yeah, okay. So there are two fields that are the hardest, and I'll mention right up front that every ecosystem has a different answer: we have the vendor and the product. The product field usually correlates pretty strongly to the name of the package that we found, and that's usually the easiest one. Vendor is not always easy and varies across ecosystems: for Java you have completely different answers than for Python, which has completely different answers than Go. So we have different guesses tailored to each ecosystem that we find a package in.

I would say APK is a little more straightforward, because we have metadata down here that more directly ties to it. Actually, on the slide right there, APK is not a great example, because we include the product name itself as a possible vendor. That is a very popular thing for people to do when they're writing CVEs and don't know what the right CPE is: treat it as a product where the vendor has the same name as the product. So we'll try various flavors of that; we'll try dashes and underscores, and we'll cut off suffixes and whatnot.
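Those vendor-guessing heuristics can be sketched roughly like this. The rules below (vendor equals product, dash/underscore swaps, suffix trimming) are a simplified illustration of what was described aloud, not Syft's actual implementation, and the suffix list is invented.

```python
def vendor_candidates(product: str) -> list:
    """Guess plausible CPE vendor strings for a package name.

    Sketch of the described heuristics: use the product name itself as the
    vendor, swap dashes and underscores, and trim common suffixes.
    """
    candidates = [product]
    candidates.append(product.replace("-", "_"))
    candidates.append(product.replace("_", "-"))
    for suffix in ("-libs", "-dev", "_project"):  # illustrative suffix list
        if product.endswith(suffix):
            candidates.append(product[: -len(suffix)])
    # De-duplicate while preserving order.
    seen, out = set(), []
    for c in candidates:
        if c not in seen:
            seen.add(c)
            out.append(c)
    return out

print(vendor_candidates("keyutils-libs"))
# ['keyutils-libs', 'keyutils_libs', 'keyutils']
```

Each candidate would then be tried against the vulnerability index, which is why keeping the permutation count small matters for consumers.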
C
So what I'm wondering is whether separating those two pieces out would make sense, because you have two somewhat heuristically driven things: one is, okay, let me go find all the things, and then the other one says, okay...
E
You're definitely onto something, in the sense that we've had a lot of internal conversations about just that: does CPE generation really belong in SBOM generation, or does it belong downstream of that, in processing? We landed on this: for the meantime, we think it makes sense to have it in SBOM generation, with a caveat for CPE specifically, which is that we're not very authoritative as to whether these CPEs are going to be correct, and that kind of hints that maybe it should be downstream.

One thing that you would really wish you had as input is knowing all of the things that you tried to match against. The upside of having a static SBOM document with the full set of CPEs, for everything you're ever going to try to search for, is that you can at least answer that question: this is the CPE that was tried.

I can see that the vendor is wrong, which is why you would never have found this. That's the advantage that we're essentially leaning on here, while also leaning on the caveat that these are not authoritative; these are still guesses. You're right that it smells like it should stay downstream, but there is a very practical reason to keep it upstream here in SBOM generation.
C
This is super cool. I don't want to monopolize the conversation; I'd love to follow up with you offline and see if we can lean in, help, and start teasing apart these pieces, because I think the use case that is just about component detection is a thing there too. So there's definitely an angle to play on here.
F
I want to chime in on that, sorry to jump in. I think this is something that would definitely be in the purview of a group like this: to maybe help disambiguate between this CPE world and the purl world and all that.
E
Awesome, yeah, I'm definitely for continuing that conversation, for CPE generation and for crafting purls, because CPE generation for certain ecosystems has been a very difficult thing. For Java, for instance: if you open up the code and look at CPE generation for Java, that one goes on for a while; there's some code there for that. So yeah, I'm definitely open to having some conversations about that, specifically the generation of IDs for packages.
C
Well, at some level, what you're doing here with this heuristic is sort of a band-aid. What we really want to do is work our way upstream and start improving how packages formally identify themselves, so that we don't have to have this guessing game and can start to do a better job, and I would be interested in how to think about that too.
E
That's true, and it comes down to a fork in the road, as we've chatted about internally: even if we improve the producer side, where producers of software are generating these IDs and including them in the metadata for their packaging, we still have a consumer long tail. For producers that do not do that, which today is the vast majority of the world, consumers still need some solution in place to act as a bit of shim tooling, I'll say, while the world is bootstrapping toward including these IDs. From a consumer point of view, we still need to define or generate these IDs.
I
I was curious if you've been talking with any of the SBOM format engineers about some of these things, because I know that CycloneDX has something in there regarding what you specify about how you're claiming you identified the package. Are you claiming that you identified the package because it was part of a build, so as part of the compiler or whatever you're making a first-hand claim, or are you claiming that it was identified through a heuristic model, or something like that?
I
Have you had any issues with that sort of approach? Because I know, as a consumer of these things, one of the big things that I want to be able to tell is: if I get an SBOM that's, let's say, generated by something like Syft, what claim is it making? Is it telling me that, yes, I am 99% sure this package is this, or is it telling me, hey, I did a grep and looked through some names and that's how I'm specifying it, or I looked at a database of hashes? I want to have a better understanding of how the package was actually identified.
E
Gotcha. So I will say that most of what I showed off today was just the Syft JSON, and everything that is, say, above metadata has been normalized, where we're saying, yes, we pulled this out of raw information, and everything below metadata is the raw information itself; it's essentially unprocessed. When we take this information, the Syft model, and map it into something like CycloneDX or into SPDX, we are continually trying to find better ways to fit into those formats.

Right now we're in a bit of a push on SPDX to enable both encoding and decoding, so in the near future we'll be adding, say, a convert command, where, given an SPDX document, I can convert it to, say, a CycloneDX document. Part of doing that kind of work is fully understanding what each spec can express, which has definitely been a bit of a challenge.

So, to answer your question more directly: for CycloneDX I'm less aware of such features. SPDX is where we've been working the last several weeks, so I'm a little more aware there; for licensing, for example, they have the distinction between license declared and license concluded, so you can make those kinds of distinctions as well. But I haven't looked into it for CycloneDX.
C
Alex, to build on this theme, I love Michael's question. Understanding what is claim, what is fact, and where it came from, and separating the utility of generating a particular format's output, what the tool can do, from what truth you're able to discover, is going to help us all lean in to figure out where we can make it better and build on it. Really interesting conversation.
I
Yeah, and just one quick tin-foil-hat sort of thing here. I know from previous situations like this that if somebody just names a package the same as some upstream package and copies the metadata over, then, depending on how you're expressing that, it could cause confusion: somebody thinks, oh yeah, this is actually that package, when it turns out some malicious actor had more or less copied the metadata from another package and put whatever they wanted in there. So if you're saying, hey, I'm just looking at this metadata and this is how I'm identifying it, then I know that, for certain cases, I might need a deeper level of introspection. Helping figure that out, I think, is going to be important.
E
Yeah, so, to be very clear, I don't think that's tin-foil-hat at all; I think it's very important. To answer that more directly, about the truth of a thing versus the claim of a thing: everything that we find under artifacts is essentially what we discovered from packaging metadata, or from what the binary tells us, which is all essentially what something has declared. However, everything that we have underneath files is what we're verifying to be there: this is the actual metadata on disk, these are the actual digests, et cetera. So you would hope that, depending on the ecosystem, there is some connective tissue between these two things.

There's a claim from an artifact saying, hey, I own these 13 files, and you hope to find those 13 files in existence and matching what's claimed up in artifacts. That's about as far as we go today; we don't have anything beyond that that does any verification, and that's something we want to start looking at for the future. We've been more focused on the actual generation of the SBOM, surfacing that raw data.
H
Yeah, let's see, I put it in the chat here, but you know the challenges of CPEs: most projects don't have CPEs. For the OpenSSF Best Practices badge, we're using the repo URLs and the homepage URLs to help identify projects, because that actually generalizes across the world.
H
I have been making some attempts, and doing some whining, at the National Vulnerability Database to try to get them to add that kind of information for identification. They have for years kept claiming they're going to switch to SWID, which is completely laughable; we all know that won't work. I know why they keep saying it: SWID is an ISO spec, and the fact that it's not useful for their purpose apparently is not important.

I say that somewhat jokingly, but I understand they want to identify things and they want to use ISO standards; you've just got to use the right ones, not pick one because it's got a label. So I'm hoping that the NVD will add things like homepage URLs and repo URLs; purls might be an option. Right now CPEs are the one true way, and I'd love to see alternatives, but they have to be useful alternatives.
E
Yeah, I'm with you there. I also saw the software ID proposal that went in, and it made my heart skip a beat; sorry, I should say, the SWID one, the one you're talking about.

H
We've got to move beyond SWID being the one true way, because that doesn't work.
E
Yeah. And purl we have found to be the most generically useful. We have found use cases where purl still doesn't quite fit the bill, but it doesn't need to in every situation. For vulnerability matching, say, knowing the identity of a package is good, but knowing that it's based off of an upstream source package is even better, because that link is helpful for knowing whether you may actually have a vulnerability in something that the CVE doesn't specifically name, or, inversely, where the CVE is written against the umbrella project, the upstream thing, but not against the 70 different builds that it just doesn't know about.
H
Well, in particular, and I only have so much time, but I put in a comment: purl would be way more helpful if it could support version ranges, but it doesn't today. That's the main reason why CPEs keep getting used: CPE has a nice, simple version-range mechanism, and without that, a lot of these other systems don't work very well.
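The version-range point can be illustrated with a toy comparator. A purl pins one exact version, while a CPE applicability statement can express a range such as "up to (excluding) X". The numeric-tuple comparison below is a deliberate simplification, not a real CPE or semver matcher, and the advisory range shown is hypothetical.

```python
def parse(version: str) -> tuple:
    """Split a plain dotted-numeric version into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def affected(version: str, end_excluding: str) -> bool:
    """Is `version` inside the half-open range [0, end_excluding)?"""
    return parse(version) < parse(end_excluding)

# Hypothetical advisory: "affected in versions up to (excluding) 2.17.0".
print(affected("2.14.1", "2.17.0"))  # True  -> a pinned purl version matches
print(affected("2.17.0", "2.17.0"))  # False -> the fixed version falls outside
```

Without range support on the identifier side, an advisory would have to enumerate every affected exact version, which is the gap being described here.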
B
I know there have been discussions in the SPDX community about some additional changes and enhancements that are wanted for purl, and there have been discussions about working together with the maintainers of purl to see if we can't make some extensions.
H
This is actually a good reminder, because I'm really excited and want to keep talking about these things, but you're right, we did have one more agenda item, and it's actually yours, which is why you're probably asking. Do you think we could take two more minutes and then give you twenty? Would that be adequate? Yeah?
D
All right, well, I guess: any other questions?
L
Two quick comments, real quick. One is that another thing I would love to talk about, maybe next time, regarding CPEs, is inheritance: you cannot deduce inheritance within CPEs.
L
I know that other identification schemes don't cover inheritance either, but I think in terms of dependencies that's a really big issue. And second, there is one project that I'm following, though it's very latent, from NIST: the SCAP project, which tries to tie all the different schemes together; I think it covers 15 or 16 different schemes.
H
Okay, yeah, so I propose: let's give Kay her twenty minutes.
B
Go ahead. Yeah, sure, thanks, David, and thanks, Alex and Josh: great project and great discussion.
B
So what I wanted to discuss is using this working group, or a subgroup of this working group, to work through a common approach or strategy for supply chain artifact signing. Some context for this: I'll describe what's been happening at Microsoft, and maybe other companies are going through the same thing. We, and some of the industry partners that we've been working with, have been contemplating approaches to signing artifacts, including SBOMs, containers, and digital media metadata; by digital media I'm talking about imaging and photos and that kind of thing, plus IoT artifacts, etc. And so we've been going through a process where we're evaluating a number of different approaches.
B
The goal that we have, spoken from the user perspective, is that supply chain participants across the industry can validate the authenticity and integrity of digital artifacts. By digital artifacts we're thinking broadly: it could be code, packages, containers, or any type of supply chain attestation, including SBOMs. In the executive order there are a number of other attestations and pieces of evidence of conformance that will be required for supply chain artifacts, and then there are also policy statements for supply chain artifacts, or for what artifacts people will accept.
B
That's what I'm talking about. So I said that what we want to do is allow people to validate authenticity and integrity, and let me define those: authenticity asks whether the artifact was provided by the expected entity, whoever that expected entity is; integrity asks whether the artifact was altered between the time it was provided and the time it was received.
B
The issue that we're seeing, and that we would like to try to solve, is what I've called a multiplicity of signing formats: the many existing, and new forthcoming, signing formats.

What I would like for this working group, or a subset of it, to do is develop recommendations for verifying the authenticity and integrity of artifacts. Solutions might include one or more of the following; I'm not trying to predetermine what solutions we'll come up with, but some logical ones might be agreeing on a standard format that we use for signing digital artifacts, and/or creating tooling to assist with the signing and validation of artifacts across a variety of formats.
B
So what would the next steps be? If we decide to establish a subgroup to discuss this, we would want to create that subgroup, task the subgroup with creating a proposal that outlines the requirements and the alternatives and proposes an overall strategy, and then bring that back to this full working group for discussion.
B
So that's the background that I have for this, and then I'd like to open it up and see what folks think. I don't want us to get into discussing the relative merits of different signing approaches today; those are topics that we want to leave to this subgroup.
B
For today, we just want to agree: is this a problem space that we think is relevant for this working group? And how do we want to approach that? Do we want to create a subgroup and go forward from there?
F
Okay, yeah, that's me. So I had a comment, not on the merits. Rather, my understanding is that in the agenda there was something about reviewing the readme and kind of picking up on where we're at with things for the name change and what we're doing here. I wonder if that can't set up the tone of the conversation as to why this is a challenge in the supply chain space.
F
And in that sense, that kind of sets up the "oh, well, we used to do digital identity; now we're going to be doing supply chain integrity; what are the implications?" I actually found myself reading the readme earlier today, and it's funny, because there's a section where I'm like, oh, there are some related efforts that we should look at, and it was talking about things like GPG key servers and the W3C DID and stuff like that.
F
Oh well, that's perfectly relevant to the old group, but I wonder if we can try to find synchrony between that and this sort of effort. I think it'd be easier to walk in and say: hey, we looked at these efforts, there are all of these tools, and they're insufficient, or people are getting lost between the signal and the noise.
F
So either we set forth a recommendation of what the reference usage of supply chain integrity tools and signing mechanisms is, and so on and so forth, or we find out that there are projects already within OpenSSF that may be fitting some of the bill and we need to extend them somehow, right? I don't know; what do you think?
B
My preference would be to do both at the same time. We do need to update the readme, and we do want to have a look at existing projects and think about the roadmap for existing projects and where we want to go. But in addition to that, we have kind of a window of opportunity now to talk about signing formats. The reason I suggest it's a window of opportunity is that very shortly, because of the executive order, a number of organizations will be producing evidence of conformance, including things like software bills of materials, and they'll want to be signing those.
B
So, in our case at Microsoft, we have an internal goal to be signing all of our SBOMs, and we'd like to do that by March. So we really would like to have this discussion about signing formats for supply chain artifacts at the same time as we're working on the readme. Does that make sense, Santiago?
F
They're also being deployed for Debian, and I think that, for good or bad, the open source ecosystem will move a little bit differently from, say, Microsoft taking a stance on whatever format they decided to choose.
B
Yeah. So again, all I'm really doing is suggesting that we create a sub-working group to talk through those issues. So that's a good point, and I'd like to discuss that; I just would like...
F
Sure, yeah. I wanted to bring that forward. I know that it's a fine line to walk between "hey, this is the merits of the actual thing" and "let's discuss whether to have the conversation," and I think the conversation is important. I just wanted to put that forward a little bit.
B
Okay, Michael Lieberman.
I
Yeah, so I'd be interested in better understanding what the current problem is with some of this stuff, because I do know of a couple of different open source tools and formats that are starting to evolve, or even old-school existing things that people have done, like using GPG to sign stuff. I'd just be curious to understand what some of the concern is. Like, is there...
I
Can you hear that? Is it that there are just so many formats that it's going to end up causing... yeah, I just don't know what the real issue is that we're trying to solve.
B
Yeah, so I think the real issue is just to make it easy to exchange artifacts, and part of making it easy is using a common format. So just imagine, and this is just a simple example, that we're getting everyone providing software bills of materials.
B
It makes it easier to exchange and to trust those if we're all using a common signing format. Does that make sense?
B
Okay, Hank.
K
The thing is the time window; I think the time window is the hard requirement here. And to reiterate a mantra from some SDOs: parallelize, don't serialize. So I think, if we're going to do a subgroup here, and that sounds like quite a good start, we should parallelize by not starting with a blank page and aggregating requirements; we should bring requirements to the table.
K
If you're starting with a blank page, it will take longer, and again, the window is, I think, three months. In computing-scientist time, this is two eternities, and two eternities go very, very fast. So I'd say: bring content into the subgroups. Don't start with an open discussion, but with proposals, with problem statements, and then address concerns about those. That is, to me, a very quick, maybe expedited, procedure.
B
Yeah, that makes sense. We've been working through this internally at Microsoft, and we'd be happy to share the problem statements and the alternatives that we've been looking at. But I don't want anyone to come away from this feeling like Microsoft is trying to drive a decision and do it exactly our way. What we'd want to do is just bring it forward.
B
You know, here's the way we've been thinking about it, but we would want to hear from others, see other proposals, and consider changing our thinking. Michael?
C
So, thanks for bringing this, Kay. Lots of conversations, obviously, we've had in the past around this. I think, to some of the macro questions that I've heard coming up in response to this: my goal is actually, kind of, that at the end of the day the government is coming.
C
You know, some standards-track thing that doesn't make sense but gets imposed on all of us anyway, and we all end up scrambling around trying to make it work for real, or ignoring it and ending up in a really bad place. So my macro goal is that this working group actually becomes, in many ways, authoritative, and on that journey I don't think we can get there overnight.
B
Okay, thanks, Michael. So I guess what I'd say, as we're kind of coming up on the end of our time: unless there are strong concerns that we should not create a subgroup, I'd like to go ahead and do that. And I can take the next step, which I believe would be to figure out how to create a subgroup and get a mailing list and things for that.
B
I can talk with the folks at the Linux Foundation about that, then send that information out to this mailing list, and we can pick a date, or a meeting series, where the people who are interested in it can begin participating.
H
You know, I think, consistent with what Michael and Santiago said, making clear what the problem is. And I think we ought to try to make it clear how existing projects, like sigstore, connect. I don't know that there has to be a single format, but something that we can all work with is, I think, the real goal, right? So if it's "everybody has to read these three formats," but they're all common and we all understand them, that's something we can do.
H
Or at least stop spending money on problems that you didn't really need to solve.
B
Okay, I do have one last agenda item, which is: I've heard from a few people that this time frame, every other Wednesday at nine o'clock, does not work for some of the new people who are starting to attend these meetings. So I wonder if we can, again, given the increase in scope and the new attendees, consider moving to a different time. I'd be happy to help with this as well.
B
I can do a quick scan of the OpenSSF calendar to see what other times are available that don't conflict with existing OpenSSF meetings, and then send a poll to the mailing list. Is that okay with folks?
B
Okay, well, let's see. One of the options will certainly be to keep the current time, and then we can look at some others.
H
Would you please coordinate that first with Kim, at least, to make sure that we don't try to pick a time she can't do?
H
Awesome. All right, we're at time, so does anybody have any quick parting shots? Otherwise, we're done. Happy new year!