From YouTube: IETF-SCITT-20230911-1500
Description
SCITT interim meeting session
2023/09/11 1500
A
Hopefully we're all caught up with the fact that we're going to have the DBoM presentation today, so it's mostly just a special guest session. I was in contact with the folks late last week, so Medi should be along to talk to us fairly soon.
B
John, this is Dick Brooks. I'm wondering: are there any plans to have a hackathon at the IETF 118 meeting?
A
So that's the detailed focus — I think we're hoping for another special guest presentation next week, and then the rest of the interims are all focused around exactly that topic. So, right now, without any further inspiration…
A
What I think we need to do — what we agreed a few weeks ago — is that we need to get the interoperable bits of feeds worked out, because we couldn't go beyond what we did last time, I think, without having some kind of agreement on what the locator is actually going to look like.
A
So the feed structure is our sort of technical fodder for what we're trying to work out in this period between 117 and 118, and I'm hopeful — and, dare I say, a little bit confident — that we'll come to a conclusion on that. So the plan is to demonstrate that and get feeds properly integrated into the emulator.
A
A little bit. Yeah, so I think the reason why we landed on the feed structure — those are the words that I think we used — is that we want to work out the correct, minimal interoperability that's required to have portable receipts. If you've got one in your hand, you potentially need to know where it came from, and DIDs go a long way to doing that.
A
But I think we found numerous things, including your use case there of finding or indexing receipts from a transparency service, where actually it's really important to understand what you're looking at in the feed — and, indeed, how a client knows what feed to put into the protected header to refer to their artifact in the first place.
A
So yeah, it's not going to be all indexing and searching. I think we will always have a bit of tension about where we draw the line between the data-and-integrity layer and a sort of semantically cognizant application layer, so we really won't answer absolutely all of those questions — but this is the most fundamental one that I think unlocks that capability. I think we're all agreed that in some way the services need to be able to index and find stuff; otherwise it's a bit tricky to bootstrap.
A
Okay, so that's good, thank you. So, with no further ado — thanks for the intervention, Dick — we can… Amit is asking for screen sharing, so I assume, Amit, you're going to present on behalf of DBoM.
A
Okay, yep, we can hear you. So yeah, just for people who weren't aware or didn't know what's going on: we as a community obviously are looking for interoperability, for sharing stuff, for learning from each other.
A
So a number of us here, I think, are quite familiar with the DBoM project, but a number won't be, and we're really looking forward to learning about the state of where you guys are at — certainly interested to see if there's any area for collaboration where the building blocks we're creating here would be useful directly to DBoM, to customers of DBoM, or to adjacencies to it — and, yeah, just to try and share our mission.
D
…Should be a great deal. So I'll just get started. Today we'll look at how we see the secure software supply chain ecosystem, just so that we have a common understanding of the assumptions that we've made, etc. I will briefly go through the DBoM architecture — not too much of a technical deep dive, just enough to further the conversation.
D
So these are the questions that we see coming up around the software supply chain security scenario. You have the proactive questions — basically, "do I really know what I'm running?": figuring out those key projects that are maintained by one or two maintainers on GitHub somewhere, that are a key dependency for a lot of your infrastructure and could also be a big spot for security vulnerabilities.
D
Do all the dependencies of those components also satisfy the security checks that you have? Do they have the right provenance? Do you have guarantees on how that artifact was created? Do you have guarantees on whether there are any open critical vulnerabilities? All those sorts of questions. And then, finally — last but not least — you have the reactive questions, right: there's a big exploit, something like Log4Shell or Heartbleed, or something as bad as SolarWinds. In the case that that unfortunately happens, what should I be protecting?
D
What things should I be looking at patching? What should I be putting behind the firewall? What should I be taking down if it's not critical? All those questions. Now, let's say you're in that last case. If you find products are at risk, it's currently very much an ad hoc approach. Certain vendors do have very well-thought-out support portals — for example, VMware, right: they have a very good place where you can subscribe to changes or subscribe to various security…
D
…advisories, and be notified based on the products that you bought, etc. But that's not really true for most of the industry, at least from what we've seen, and if there's no security advisory, you have to go ahead and ask the vendor — you either create a support ticket or, in more ad hoc cases…
D
…if you're working with a smaller SMB or startup-type outfit, you're essentially sending over an email and waiting for that email to be followed up on. Simultaneously, your security team, being proactive, would conduct its own investigation to figure out if you're actually affected — which, again, takes a lot of the time that security professionals are putting in. Additionally…
D
…in a lot of cases, unless it's source-available or open source, you don't have access to vendor code, so you can't really pinpoint where the vulnerability is or run a quick check across all your departments. And this could be at multiple levels. If it's the product that you bought…
D
Okay, great — that's somewhat simpler. But imagine if it's in a deep dependency, and you have to figure out which products even have those dependencies in place, which again exponentially makes it more difficult when it comes to figuring out where these defects are. So when we think of the end-to-end secure software supply chain — again, this is the perspective from the Digital Bill of Materials team — there's a part that is hard to get at, but that's why we have things like SLSA, right: to figure out which level of security we're at. While producing these artifacts, we simultaneously produce attestations. The attestations could include software dependency attestations like SBOMs, and you could have VEX and VDR documents that can be filed against those software dependencies.
D
You could have signed checksums of all the artifacts that are produced; possibly identity-related information around the build machine and the build infrastructure, as well as the provider; and attestations for the processes in line as well, similar to what we do with in-toto. And then, finally, at the client side, we have a lot of automation in place, so these attestations are not things that humans have to handle; ideally they're something that is automatically handled by a policy infrastructure that goes in. That's the reactive…
D
…side, right: you have tools that generate these attestations. It could be something like Cosign, right, from the Sigstore ecosystem, which generates OCI-compliant logs on the fact that that container was created. It could be SBOMs — like the SBOM generators from Microsoft, from the SPDX teams themselves, or CycloneDX. You have tools like in-toto that create attestations across the chain, which you can publish and use to verify that every step has been verified and signed. And there are vulnerability reports that various tools allow you to create, sometimes even based on upstream vulnerability information.
D
Let's say you depend on OpenSSL: in your product, you can issue a vulnerability report for your product saying "I have OpenSSL such-and-such, I have this vulnerability, and this is the patch." Then, once you generate all those things, that's a lot of reports; there are a lot of attestations. You need a way to store and organize them, a way to sort of version them, send them out, and figure out the relations between these pieces, right — vulnerability reports don't exist on their own; they're related to specific software. And here there are a lot of issues around namespacing: how do you reference an artifact? How do you reference an attestation? How do you figure out that this attestation was actually created by the right person?
D
…whether it's been swapped, or anything of that sort. I believe we have a hand up — Charles?
G
Just dealing with my echo and muting situation. So, on the attestation versus the artifact: in this case, is the SBOM itself an attestation, or is it an artifact?
G
It is an attestation — okay, yeah. So we're having some discussions about the data and where it should go in SCITT, and it sounds like you are storing that as a matter of course — or you're making a provision for that — within DBoM.
D
Yep — and I'm sort of jumping the gun here, but I mean we've sort of separated out the notary piece and the attestation storage piece, so it may just be that we are using different terms to refer to the same set of things. It may be more clear once we run through the architecture, yeah.
G
Yeah, I'm trying to get my head around some concepts in SCITT — you know, where the data is, where the SBOM is; I guess that's the crux of the matter, and I don't know if we've quite solved that yet — but this is useful info. Thank you very much.
D
Thank you, Charles. So the third block is the policy-based internal and external sharing, and the whole authorization piece, right. If you have open source software, this is not really hard: I mean, you throw it in your GitHub Actions CI, it's released along with every single release, and anyone can download it and use the attestations. If you're publishing it to a container registry, you have things like OCI metadata fields where you can store stuff alongside the artifact.
D
However, if you get into the realm of proprietary software, the distribution story is not that stellar. It's not a single source: you have various vendors across the board, some of them exposing their internal artifact repositories, some of them having various download sites. Essentially, there's no uniform way to share.
D
If the disclosure gets into the wrong hands before it's remediated and critical downstream customers are notified, you could have bad actors misusing it, so you need a way to sort of permission that — make sure that it's only read by the right set of people. And when you look at a supply chain, you have multiple layers, right: you have an upstream and you have several downstreams, and that makes it difficult to spread it out across the downstream. So it's not just your immediate customer that might need to know about that.
D
Maybe they're making a product around whatever you deliver and they are passing it on again downstream, so it's difficult to get that connectivity and to make sure that that information is flowing. And even when the information does flow, you need to be able to consume it from one place: you're not running, like, ten different CMSes, right — you'll have a single place where you're handling all your security remediations, your tracking and all that, and you want to be able to ingest all that data into one place in a uniform format. Finally, all of this data means nothing if it cannot be trusted, and that's where the validated build trust piece comes in. So as we're receiving SBOMs, as we're receiving vulnerability information, it's important that the data is notarized — that the data is signed by the appropriate parties — without having to trust the parties themselves. So: make sure that a record of that attestation being created exists; make sure that it's the right identity creating the attestations, not a malicious actor. So those are the pieces — and there's no one solution that can fix all of this.
D
Storage; dispersal, in the sense of transport; a common framework to perform notarization; and integration — sort of giving you an event bus that you can integrate with, so that other products can consume these attestations.
D
Now, how does this whole architecture look? If I were to describe DBoM in a sentence, it's an open source, decentralized, federation-based solution to bring uniformity, automation, security and auditability to any sort of attestation — in this case, we're just highlighting vulnerability attestations. And this is a Linux Foundation project; it's been handed over, and everything is open source. Right now, this is what the architecture looks like.
D
Essentially, if you've ever used something like Mastodon or any of the federated networks, it's very similar: everyone hosts their own DBoM node. Each DBoM node is a set of microservices. In this case, you can see that there is a service called the gateway that acts as a mediator to everything else — that's the single point of contact.
D
Let's say Unisys is running a DBoM node at attestation1.nodes.unisys.com, right. It's the DBoM node — the repository under the DBoM node — that is storing the SBOM. Simultaneously, as things are stored, updated and version-controlled in the repository, they are notarized to one or more public or private notaries: say Sigstore, for example — the public Rekor instance, as an example — a blockchain like Ethereum, something as simple as a transparency log, or it could even just be a shared database. It doesn't really matter, as long as it's something that multiple parties can trust.
D
DBoM nodes themselves talk to other DBoM nodes via the Federation API. So let's say I want to get attestations from Dell. Dell just says: here's a channel, and this channel contains all the attestations for the products that you're buying. And Unisys's DBoM node can use its identity to go ahead and connect to Dell's server — very similar to how you'd share a file on Google Drive — and get access to that stream of data.
D
It can be notified whenever there's a new SBOM available or when an SBOM changes, and while you're accessing all of this, you can ensure that every single attestation that comes out is verifiable against both the notary and the signatures of the remote node. Dick, you have your hand raised.
B
Yeah, hi — it's Dick Brooks, Reliable Energy Analytics. So my question has to do with the repository concept; it says only one. Does that indicate that each DBoM node has its own repository of the information it's collected, or are DBoM nodes sharing information within their repositories with other DBoM nodes? Thanks.
D
So, two things. By default, every DBoM node has its own repository. Let's say I'm Unisys and I'm putting out an SBOM: I own that SBOM; the data lies with me until it is accessed or streamed out by another DBoM node. However, DBoM nodes can also work in a mirroring sort of mode, where a node can replicate all the data available in a remote channel — essentially, a remote DBoM node — so in that case you have access to that data even when the remote DBoM node is down. Again, that's not part of the V2 implementation yet, but it is a piece of it.
B
Thank you. And does it also mean that there's public access to these repositories? So let's say, for example, there's an attestation on a cybersecurity label that you want to make public — is that also an option here, where you could have public access to the repository?
B
Thank you, Amit — I appreciate you disclosing the information. Thank you.
H
Thanks for the presentation. Going along with what Dick was asking in terms of the repository: is the concept that you're working with that all of the data artifacts that you're securing here are placed in the repository, or do you support external repositories if it's very large data? So let's say we have a terabyte of data that we'd like to, you know, secure — is the concept that we would have to move that all into the local repository, or do you support that external mode? Thank you.
D
In the case that you're interfacing with something that already exists — that's where the repository agent comes in, right, and that's why it's a separate microservice, away from all the other pieces. The agent has a specific surface area — essentially an API contract with the rest of the microservices. So let's say you have a large amount of data in a proprietary database somewhere: if you create a repository agent following the same set of APIs that the rest of the DBoM ecosystem expects, you can use that as a repository — yeah, for bespoke use cases where you have large data, where it's infeasible to move to a repository that we already support.
H
I mean, what if the data is not owned by you, and it is actually maintained by somebody else, and you're using it? So it's a lot of data; you don't want to have to move it around. It exists somewhere else and you want to just refer to it — is that supported?
J
I think — maybe, yeah — I was just going to say: maybe the next slide will help, because repositories can sit outside of the DBoM node. So let's say an entity already has a repository and wants to share information with a set of customers or partners: they will create the agent, if it doesn't exist for that type of repository; then they will have an extension, of course…
J
…to the DBoM node, for which the repository is that database. Then, for the other entities: the source will set up the channels — the channels that others can subscribe to — and the other entities just need to set up a DBoM node, instantiate it, and then subscribe to that channel. So the data still resides in its main place, but the channels provide the rules around access to that data.
D
Yeah, so this whole DBoM node actually creates an abstraction called the channel on top of the repository. Each channel is hosted by one DBoM node. It can store structured JSON data within a signed JSON envelope. It has one or more subscribers, with a well-defined access policy — can this remote DBoM node read? Can it write? Can it read audit entries, which are essentially the version control for a given attestation?
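[Editor's note] As a rough illustration of the signed JSON envelope just described, here is a minimal Python sketch. All field names are hypothetical, and an HMAC stands in for the asymmetric signature and notarization a real DBoM node would use:

```python
import hashlib, hmac, json

# Hypothetical envelope layout -- the real DBoM envelope fields may differ.
def make_envelope(channel_uri: str, schema_url: str, payload: dict, key: bytes) -> dict:
    body = {
        "channel": channel_uri,   # which channel hosts this asset
        "schemaUrl": schema_url,  # schema describing the payload
        "payload": payload,       # the attestation itself (e.g. an SBOM)
    }
    # Canonical serialization so signer and verifier hash the same bytes
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    # HMAC stands in for the asymmetric signature a real node would use
    body["signature"] = hmac.new(key, canonical, hashlib.sha256).hexdigest()
    return body

def verify_envelope(env: dict, key: bytes) -> bool:
    body = {k: v for k, v in env.items() if k != "signature"}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    expected = hmac.new(key, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, env["signature"])
```

Any tampering with the payload after signing makes `verify_envelope` return `False`, which is the property the access-policy and audit machinery relies on.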
D
Each channel is optionally associated with one or more notaries, so essentially everything on the channel will be verifiable. So if you put an attestation on a channel, you'll be able to verify, unambiguously, that the attestation was indeed created at a given point in time by a given identity.
K
Could I ask what signing technology you're using here — sorry, what's the signing format?
D
So we're just signing the JSON payload, storing the signature, and then also notarizing it simultaneously.
D
Yeah, maybe at the end we can dig into that. So it also creates an unambiguous namespace for attestations. Since every DBoM node essentially exists as a service that is reachable under a specific DNS entry — say, node1.attestations.unisys.com — every single channel, and every single attestation on a channel, has a specific identity that can be reached; essentially, it has a URI. So, I guess, when an organization instantiates…
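[Editor's note] To make the DNS-rooted naming concrete, a trivial parser for such a URI might look like this — the path layout shown is an assumption for illustration, not the actual DBoM scheme:

```python
from urllib.parse import urlparse

# Hypothetical layout: https://<node-dns>/channels/<channel>/attestations/<id>
def parse_attestation_uri(uri: str) -> dict:
    u = urlparse(uri)
    parts = [p for p in u.path.split("/") if p]
    if len(parts) != 4 or parts[0] != "channels" or parts[2] != "attestations":
        raise ValueError(f"not an attestation URI: {uri}")
    # DNS gives the node, the path gives channel and attestation identity
    return {"node": u.netloc, "channel": parts[1], "attestation": parts[3]}

print(parse_attestation_uri(
    "https://node1.attestations.example.com/channels/sboms/attestations/abc123"))
```

The point is simply that the node's DNS name anchors the namespace, so every attestation is globally addressable without a central registry.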
D
…a DBoM node, essentially: that repository sets up channels and invites its partners to subscribe to those channels; then you can integrate it with your tooling, and once that's done, you can record, retrieve and audit attestations.
D
So essentially you have, let's say, multiple partners — partner X, Y and Z. Each partner can have their own DBoM node, and they can set up channels between them. So in this case, partner X has set up a channel where they're distributing the SBOMs, hardware BOMs, VEXes, VDRs, etc. to multiple partners downstream — and you can imagine this being scaled up to many suppliers interacting across a supply chain, having access to various channels across the supply chain.
D
So essentially, what it's trying to do is replace what you would traditionally do with email or any other communication medium when you're actually requesting SBOMs or VEXes or VDRs or any attestations. We move that communication over to DBoM nodes by establishing channels that pre-approve, sort of, the level of access that you have to these attestations, and you're able to publish and retrieve these attestations using your DBoM node's identity across the supply chain.
B
Yeah, thanks again, Amit. So could you go back to the previous slide, please? I have a question. Yeah — so I'm looking at the channel concept, and it looks like there's a channel for HBOMs and a channel for SBOMs. Does that mean that a channel is specific to a single type of attestation, or can a single channel be used for multiple types of attestation?
D
It's up to you — it's not opinionated as such. We do have a field that allows you to specify the schema per asset — per attestation, essentially — so you can put all your SBOMs, CVEs and HBOMs into one channel, let's say. You could create channels based on your products, right — let's say you're releasing five products, you could segment your channels by product. You could segment them by any other form of organization that you want.
B
Okay. So if I'm a consumer and I have, let's say, ten suppliers, right, they're going to be sending me different types of attestations: they may send me an SBOM, but they may also send me a vulnerability disclosure report.
D
The envelope around the actual attestation body contains a field to specify the schema URL, which is essentially a web link — or a link to another DBoM attestation — that contains a JSON schema. So let's say it's an SPDX file, right: it would have the well-known schema URI for SPDX 2.2 or SPDX 2.3. Similarly, if you're going for a VEX or a VDR, it would link to the well-known JSON schema for that specific type of attestation, so you're able to filter by that.
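[Editor's note] As a sketch of how a consumer might use that field — routing mixed attestations purely by their declared schema URL, before parsing any payload — the envelope shapes and schema URLs below are illustrative, not the actual DBoM or SPDX ones:

```python
# Illustrative envelopes: an SBOM and a CSAF VEX side by side in one channel.
attestations = [
    {"schemaUrl": "https://example.com/schemas/spdx-2.3.json",
     "payload": {"spdxVersion": "SPDX-2.3"}},
    {"schemaUrl": "https://example.com/schemas/csaf-vex-2.0.json",
     "payload": {"document": {"category": "csaf_vex"}}},
]

def by_schema(items, needle: str):
    # Filter on the declared schema URL alone, without
    # having to understand each payload format first.
    return [a for a in items if needle in a["schemaUrl"]]

print([a["schemaUrl"] for a in by_schema(attestations, "spdx")])
```

This is why a single channel can safely mix attestation types: the envelope, not the payload, tells the consumer what it is looking at.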
J
Yeah, just to add to what Amit is saying: for every record that is recorded on the channel, the subscribers of that channel will get a notification that something was just recorded, so they are notified and they can make a decision about how to treat that information.
H
Yeah — thank you. So the idea is that everybody that's involved in this has to have a DBoM machine running at all times for it to work, and…
D
Yeah, everyone must have a node, but it's not just a streaming architecture: channels are both a store as well as a stream. So you can have a DBoM node go down and come back up; it doesn't have to…
H
So what about lightweight suppliers and lightweight customers? I mean, a lot of suppliers are not these heavyweight companies; they might just be a little software shop somewhere that makes one little thingy, and the customer might be an end user or something — not such a heavyweight…
H
…that would have the DBoM node. And I'm also worried, with this architecture, about fragility: if you require that everything is running — the APIs are all up and running, you've got all your microservices running and all that — I find this very fragile, and it means everything has to be running for it to work, versus some sort of static-data methodology where you don't have to be running for it to work; you can just go out and check some data, check some receipts somewhere, anyway.
D
Oh yeah, it is true that the node that you're retrieving that information from — if you were subscribed ahead of time and you're mirroring that information — must be running for you to retrieve it. But in case you did, like, you know, you…
D
And, going back to the key values: it helps you decouple your attestations from your artifacts and create channels with configurable sharing policies — oh, sorry, I think I missed Neil.
I
Well, I was interested by your analogy to email, and I guess what I put in the chat is a response to Ray. It sounds like everyone who produces software would have to associate themselves with a DBoM node, but you can farm that out to somebody else.
I
In the same way that people don't run — I mean, we used to run our own email servers, but, you know, people farm that out. So I'm assuming that it's going to be easy for people to associate themselves with folks that want to run that infrastructure.
D
Yeah, that is true. Since we don't really provide this as a SaaS, or anything like that around it, we…
D
But again, yeah, it is amenable to being put into a SaaS-like setup where, you know, you have multiple tenants: you just subscribe and you create your own DBoM nodes within a software-as-a-service offering. It's just that there's no piece of the open source that does that.
D
So, yeah: you can create channels with configurable sharing policies. You get encryption in transit and at rest — essentially, DBoM-node-to-DBoM-node federation communications run over HTTPS with mutual certificate authentication…
D
…essentially, and at rest the repository offers encryption. And in terms of trust, you can cryptographically sign your attestations, and you can notarize changes to attestations in transparency logs and distributed ledgers, so that you can verify those out of band — the DBoM node can also do the verification, but you can verify them yourselves.
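[Editor's note] A minimal sketch of that out-of-band check: recompute a canonical digest of the attestation and compare it with what the notary (transparency log or ledger) recorded. The record shape here is hypothetical:

```python
import hashlib, json

def digest(attestation: dict) -> str:
    # Canonical JSON so producer and verifier hash identical bytes
    canonical = json.dumps(attestation, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(canonical).hexdigest()

def verify_out_of_band(attestation: dict, notary_record: dict) -> bool:
    # Compare the recomputed digest with what the transparency log holds
    return digest(attestation) == notary_record["digest"]

record = {"digest": digest({"artifact": "pkg-1.0", "vuln": "none"})}  # notarized earlier
print(verify_out_of_band({"artifact": "pkg-1.0", "vuln": "none"}, record))
```

Because the notary only holds a digest, any party can run this check independently of the node that produced the attestation.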
D
You can have an audit log for every change and retrieval operation, since everything goes through the DBoM infrastructure, and you can independently verify the truthfulness of these attestations — whether they were created at a specific time by a specific identity.
D
DBoM then becomes that one place where you can subscribe to multiple channels on multiple nodes across multiple vendors, and you're getting stuff in a specific format that allows you to streamline the way you integrate the DBoM node into your systems — so that you can have one single place where all your upstream supplier information flows in, and you can query it in the same way, again with standardized APIs. And everything that happens on a DBoM node is put out onto an asynchronous queue, so you can hook up any integration you can imagine. So what…
D
…if you want to automatically connect to every customer that's already within your CRM, right? You could connect it to the CRM, listen for events whenever channel-establishment requests come in, and then automatically onboard your downstream customers, or their customers. You can also integrate this into SIEM systems; you can integrate into systems that absorb this SBOM content, this VEX content, and…
D
…feed it, for example, to Google's GUAC project, which aims to pull in all these attestations and make them queryable and usable for policy decisions. And, yeah, integrations with popular tools: being able to publish stuff off of Dependency-Track; being able to publish in-toto attestations and verify them without needing to share in-toto bundles alongside the artifacts — all those pieces; and unambiguous namespacing of attestations, since every DBoM node has a DNS address that is unambiguous.
D
The V2 implementation is now in progress, with an alpha release available — try out V2 by heading to the alpha branch of the deployment repositories; links at the end of the presentation. Now, use cases and POCs. Again, I'm not going to spend too much time on this, but this is the in-toto POC, where, every step of the way — whether you have development across organizations or within the same organization — you can have discrete DBoM nodes publish in-toto attestations over a specific channel, and we are able to actually run in-toto verification throughout the supply chain and ensure that everything before the current step has indeed happened, has indeed been attested to, and can be retrieved and verified.
D
So again, you could also ship some of these in-toto attestations over to the consumer, so that they also have access to an audit trail — again, it depends on how you have it structured there.
D
There's a link to a demo that we will not go over here. There's also the VEX-plus-SBOM POC, where we've shared VEX documents, as well as software bills of materials generated from continuous-integration infra, and sent them over to various downstream customers — and connected that to off-the-shelf tooling like, for example, Dependency-Track, in order to seamlessly integrate with the rest of the tooling that scans for vulnerabilities, monitors SBOMs, etc.
I
Going back just a bit: you talked about — I think it sounded like you have URIs that are based on, kind of, your federated approach, and I'm just wondering: if somebody changes provider — you know, if I start off publishing things via one particular DBoM node, and then I decide to change, or there's a consolidation or whatever — do you have a way to find out?
D
So, as you do move — let's say you have to re-authenticate nodes, essentially, similar to how you set up a new SSH key, right. So when the identity changes, you do have to re-authorize the node, because there isn't any switch-over sort of process…
D
…not as of yet. And when URIs change, we don't currently have a redirect approach that allows you to remap them — for instance, consolidation, right: what if you consolidate multiple channels into one, or if you consolidate attestations one-to-one? We don't really have a way to map that right now, but…
J
Every subscriber to a channel — to a private channel — is aware of who the other subscribers are.
B
I mean, yes — well, it's been a while since we've chatted about DBoM, so I'm still catching up from the past. One of the things that I know we deal with all the time at REA is that you receive different types of artifacts: in an SBOM's case, it can be SPDX or it can be CycloneDX — it can be different formats. And the same is true with VEX: you can get an OpenVEX format, you can get a CSAF VEX format, you can get a CycloneDX VEX format.
D
Yeah — is the window on top visible, or did I just share…?
D
So — okay, that's not what I intended; never mind. So, within the asset envelope there is a link to the schema itself. For example, this is the link to the well-known schema for SPDX version 2.3.1. So…
D
Since you can link to a JSON schema, whatever client application is consuming these attestations can actually resolve that schema and figure out what exactly this is. Now…
D
In this case, by looking at this schema URL, I can tell you that it is the SPDX v2.3.1 schema, because they do have well-known URLs for those; but in some cases you may be hosting the schema yourself, and the details are…
D
Yep — and that also sort of allows you to put in custom schemas. For example, for hardware bills of materials there's no real standard as such, right — the way we have SPDX and CycloneDX for software — so your organization may be using something that's bespoke. In that case, the schema need not be a web URL; it could also be a DBoM URL…
D
...a URI to a specific channel, right. So you'd have the protocol part, which is basically HTTPS over a specific port; you have the URL, which can be resolved thanks to DNS; and we are able to get...
D
...how they are related: so basically to ask, okay, does this software depend on something that has a high vulnerability at any level? GUAC helps you answer those questions. So with DBOM, we are going to do a PoC where DBOM is essentially going to act as an ingester for GUAC, where you get all your SBOMs and your VEX reports, and GUAC strings them together, and you're able to do policy queries on them. Again, this is something that's in development and not available right now; again, this is super high level.
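The kind of transitive policy question described here (does this software depend, at any level, on something with a high-severity vulnerability?) can be sketched over a toy dependency graph. The data and function below are invented for illustration and are not GUAC's actual API.

```python
# Toy dependency graph and vulnerability map; purely illustrative.
DEPS = {"app": ["libA", "libB"], "libA": ["libC"], "libB": [], "libC": []}
VULNS = {"libC": "HIGH"}

def has_high_vuln(pkg, seen=None):
    """Walk the dependency graph and report whether any transitive
    dependency carries a HIGH-severity vulnerability."""
    seen = seen if seen is not None else set()
    if pkg in seen:
        return False  # already visited (guards against cycles)
    seen.add(pkg)
    if VULNS.get(pkg) == "HIGH":
        return True
    return any(has_high_vuln(dep, seen) for dep in DEPS.get(pkg, []))

print(has_high_vuln("app"))  # True: app -> libA -> libC (HIGH)
```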
D
So again, correct me if I'm wrong here, and these are very high-level suggestions on how we can contribute: again, DBOM channels can store and distribute SCITT objects by serializing them into attestations.
D
These attestations... So, currently I saw that there is a SCITT ledger, like an e-notary, that has been implemented; that could be added in as a DBOM notary as well. So if we have CBOR and JSON interop, where we can serialize those things to CBOR, then you can perform...
D
Signing, essentially, and put that on the, you know, ledger; that's one way we could integrate. Essentially, on the right, if you look at it, DBOM essentially gives you a way to do identity with X.509 certificates, so you could do did:dbom, or you could do did:web as well; we support that as well. Again, that's currently not open source, but that's something that we built out. And claims and evidences, those SCITT objects, can be attestations.
D
...that are stored on DBOM. But, as you say, I get that SCITT is currently using CBOR and we're using JSON as the serialization format, and we would need to sort of look into that and figure that out. I think I saw a hand up, Henry's, but I think I lost it. All right.
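One concrete reason the CBOR-versus-JSON split matters: signatures and receipts cover exact bytes, so the same logical claim serialized two different ways no longer verifies. A stdlib-only sketch of the underlying issue (on the CBOR side, deterministic-encoding rules play the same canonicalizing role):

```python
import hashlib
import json

# The same logical claim, serialized two ways: default (insertion order,
# spaced separators) versus canonical (sorted keys, compact separators).
claim = {"subject": "pkg:github/example/demo", "issuer": "did:web:example.com"}

bytes_default = json.dumps(claim).encode()
bytes_canonical = json.dumps(claim, sort_keys=True, separators=(",", ":")).encode()

# Different bytes mean different digests, so a signature over one
# serialization does not verify against the other.
print(hashlib.sha256(bytes_default).hexdigest()
      == hashlib.sha256(bytes_canonical).hexdigest())  # False
```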
K
This is exactly what we thought SCITT would usually be used for: to be a back end for something like this, like DBOM. I liked your presentation. There are some questions about how to do offline and so forth, and some of the ones that Ray brought up I think can be answered.
K
There's this question of: does all the information have to be in the store, or just a placeholder saying "here's where you go to get the real data", and that can be a secondary permission? I think that would work here as well. The intersection with GUAC and some of the other higher-level abstractions: yes, those are concerns for me as well, but I think if we were going to do this we would potentially want to engage the CVE community into the same pipeline.
D
It may be that, you know, you have several... Again, in a perfect world everyone would be putting out SBOMs and VEXes and whatnot, right. But we're sort of imagining an intermediary layer where, you know, you have these third-party providers that provide SBOMs. But, however, you do have vulnerability...
D
...attestations being put out by organizations, so you could have adapters that do publish those things on DBOM channels, in the interim, while the infrastructure goes around and gets adopted.
K
The complexity that I see is that all the OSS or library packages are going to be put all over... You know: how do you know where the definitive one is for each of them? To link it together, you're going to be making requests. So do you think a DBOM node is more like a DNS client cache, or will it go off and do the appropriate queries to follow the chain? Is that a GUAC function, or is that something you think belongs...
D
Yeah, you could have DBOM act as sort of a proxy there. We had an experiment done with purls, right, where you could have universal purl retrieval of the source repository. Again, you know, purls are not everywhere right now, it doesn't work for everything; but essentially it's purl-based retrieval of the source repository. So essentially GitHub becomes like that one channel: it becomes an accessible channel from any DBOM node, and you're able to retrieve the source code based on the purl resolution.
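A purl (package URL) such as `pkg:github/owner/name@version` identifies a package, and for the GitHub type it maps directly to a repository. A toy resolver for just that one case, illustrating the retrieval idea (illustrative only, not DBOM's actual implementation):

```python
def purl_to_repo_url(purl: str) -> str:
    """Map a pkg:github purl to its repository URL.
    Sketch only: handles just the GitHub purl type."""
    prefix = "pkg:github/"
    if not purl.startswith(prefix):
        raise ValueError("only pkg:github purls handled in this sketch")
    # Strip any version qualifier: pkg:github/owner/name@v1.2.3
    path = purl[len(prefix):].split("@")[0]
    owner, name = path.split("/")[:2]
    return f"https://github.com/{owner}/{name}"

print(purl_to_repo_url("pkg:github/package-url/purl-spec@v1.0"))
# https://github.com/package-url/purl-spec
```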
D
Yeah, and that's sort of one part of it: what happens if a remote DBOM node that you are subscribed to goes offline? Essentially, when subscribed, a DBOM node should be able to mirror all the contents of all the channels that it is connected to at the moment, so that even when those remote DBOM nodes are offline and your entitlements are unavailable, you're still able to retrieve them on the fly. Obviously it'll be marked as being cached and retrieved from your own node.
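The mirroring behaviour described above can be sketched as a tiny local cache that flags content served while the remote is offline. Class and method names here are invented for illustration; this is not DBOM's actual node API.

```python
class ChannelMirror:
    """Toy sketch: a subscribed node mirrors everything published on its
    channels, so content stays retrievable (flagged as cached) when the
    remote node goes offline."""

    def __init__(self):
        self._cache = {}

    def on_publish(self, channel, asset_id, payload):
        # Mirror every asset seen on a subscribed channel.
        self._cache[(channel, asset_id)] = payload

    def retrieve(self, channel, asset_id, remote_online):
        payload = self._cache[(channel, asset_id)]
        # When the remote is offline, serve the local copy and mark it
        # so the caller knows it came from the mirror.
        return {"payload": payload, "cached": not remote_online}

mirror = ChannelMirror()
mirror.on_publish("supplier-channel", "sbom-001", {"format": "spdx"})
print(mirror.retrieve("supplier-channel", "sbom-001", remote_online=False))
```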
D
Taking on, yeah, the risks of having a node go offline: I think the same trade-off comes with any federated infrastructure. For example, even if you take something like Mastodon: if one instance goes down, all the content on that instance, unless mirrored by another instance, is not available.
K
Right, so you know we can build something that allows single retrieval of documents. My concern here is that it's going to fall on its face pretty quickly with all the network traffic; we'd probably have to do some of these higher-level constructs. Now, let's let Steve jam in here.
F
The package-manager distribution problem of yet another service: is this another service, or are you leveraging existing storage and package-manager services?
F
Yeah, I guess I just have all the same questions that we normally have with package managers and distribution services: are they expected to be one global service, are customers expected to host their own for private storage, and, you know, all of the performance, reliability and security boundaries. But with three minutes left I didn't want to crack open that conversation, so maybe I could just ping you offline and talk more about it.
A
Good, that's a good time check for us. I'd like to let Hank in, because he's patiently got his hand up, but I think, unless there's anything burning, this should be the last question, please; and then, Medi, if you want to wrap up, I'll give you all the time. So thank you very much for the presentation, and you can take the remaining two and a half minutes with Hank. Thanks so much.
E
Thank you for all your insights here; that's actually really helpful. We literally made a comment about that at the beginning of the meeting: we're in the middle of a categorization problem, and, as you highlighted, there is this marketing categorization of, you call it, attestations.
E
We call it statements: potato, potato. Charlie asked the right question: is this an artifact, or is it an attestation about an artifact? So it doesn't matter at the core.
E
We have to categorize these statements slash attestations, and we are in the middle of trying to do that in the feed-structure challenge we accepted here, so to speak. And maybe we can cross-pollinate here on this topic, because if this is still an open item for you, and you can improve on the categorization of the statements you want to notarize...
E
We also want to have that, and if that is something we can semantically learn from each other, I think that's already one step towards interoperability. Absolutely agree. Wonderful, because then we have a common problem, and that's always good. Thanks.
D
Yep, I'll add this deck to the agenda as well, so that you can get access to the links; of course you can click on them here, right.
D
There's a community Slack that we monitor; again, feel free to email us, that works too, but if you want a more immediate messaging experience, the Slack is there. The v2 alpha is out; again, it comes with all the warnings of an alpha, but feel free to try it out; we'd love for people to test it. This website details the same things that we just talked about. And a special thanks to the Unisys team for core contributions, documentation and marketing materials around it.
A
Perfect; well, thank you. Yes, thank you again very much; good use of time, I think. So, just enough time to say, for all the regulars: we do have our things set up; finally I've pressed the right buttons to get our weeklies in. So next week might be another guest presentation; if it isn't, we've got plenty of PRs and issues from an editorial review of the architecture last week that we'll go through. So see you all next week, and look forward to whichever of those two things happens.