From YouTube: SLSA Meeting (July 21, 2022)
B: After he's done, because I just want to make sure he can get through what he wants to before he has to leave.
C: Awesome, thanks so much. Okay, so I'd like to just share a little bit about what we've been doing in the SPDX community, related to something called the build profile, which is kind of trying to express some of the things that we...
C: I know SBOM is a very overloaded term, but SPDX is going into this model in SPDX 3.0 where it talks about having different profiles and different use cases: you're using SPDX for build, using SPDX for defects, using SPDX for artificial intelligence use cases. So we are treating all those things somewhat differently, and you kind of want a different SPDX document, whether you call it an SBOM or not. There are some nuances in that, but I think, just to make it clear:
C: You know, the work here is not about saying that this build information should go into SBOMs as we know them; rather, this is a particular use case that can be expressed within SPDX.
C: So I think the main thing I want to do is to share the group where this effort is going on, and share just a little bit of what it looks like. I think what we're looking at is being able to express SLSA-type provenance within SPDX documents, and also to start a conversation to make sure that we are aware of the different work that we're each doing and that it's aligned.
C: But basically it looks something like that, where we want to express some SPDX elements to say, you know, what was part of this build, how was the build done, who was it initiated by, where was it being built, and so on. Essentially, it's being able to express things for reproducible builds, as well as express SLSA-type provenance files or metadata, so that they can be used within the SPDX context for compliance, for certain use cases.
C: So we meet every Monday, and I think we want to at least start a mode of communication where we can share a bit of what we're doing, making sure that we're aligned, and making sure that we are able to express all the things that SLSA wants to express as well.
C: Yeah, so we are not looking to generate the provenance documents. I think it's more SPDX as a whole trying to encapsulate, within this data model, the things that we may need to use. So in the sense of creating SBOMs and things like that, sometimes it's useful to have the build information.
A: Yeah, so one of the real use cases, right: I remember a while back there were certain OpenSSL vulnerabilities that were only applicable if the software was compiled with certain flags, or compiled with certain additional features, and a lot of that information today really isn't captured in many of the SBOMs. So I know that's one of the reasons why this could be... that's one of the use cases for this.
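A hedged sketch of that use case: if the build metadata records the compile flags, a scanner can decide whether a flag-conditional vulnerability even applies. The CVE-style condition and flag name here are invented for illustration:

```python
# Hedged sketch: deciding whether a flag-dependent vulnerability applies,
# given build metadata that recorded the compile flags. The flag name is
# invented for illustration.
def vulnerability_applies(build_metadata, required_flag):
    """A flag-conditional vulnerability only applies if the binary
    was actually compiled with the triggering flag/feature."""
    return required_flag in build_metadata.get("compile_flags", [])

with_flag = {"package": "openssl", "compile_flags": ["enable-weak-ssl-ciphers"]}
without_flag = {"package": "openssl", "compile_flags": []}

print(vulnerability_applies(with_flag, "enable-weak-ssl-ciphers"))     # True
print(vulnerability_applies(without_flag, "enable-weak-ssl-ciphers"))  # False
```

Without the recorded flags, the scanner has to assume the worst case and flag every build.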
E: But isn't that covered by GitBOM? And we actually put it in the debug symbols for Windows compilers, right; we put all that information there already, for this reason. So I'm trying to figure out where the intersections of all the other pre-existing technologies sit with this, and what's the super valuable added value here. I'm not detracting from the discussion of SLSA and the rest of it; I'm just trying to figure it out. You know, huge JSON documents for monolithic builds seem to be problematic.
C: Yeah, so there is definitely going to be overlap. I think the use case here is, within the SPDX ecosystem, what people use for compliance: that needs to be expressed.
C: I think we see GitBOM as a way to augment the information that we're getting, but I think in the end there is an ask for these documents to be in a data model that's interchangeable, and SPDX provides that.
D: Yeah, so: would this replace the SLSA provenance predicate that we have now?
C: No, I don't think it would. Well, it depends on where you want to go with it, right, because I think it could be expressed in that way. But, like...
C: I think there are other use cases that it tries to aim at as well, so I don't know whether it would be a replacement rather than a possible representation of it. So I imagine that maybe there would be more than one document that you could use to produce SLSA compliance: whether it's a SLSA provenance document, or it could be the SPDX document, and hoping that that could work as well.
C: At least as a community we've discussed this, and we are not aiming to replace the provenance spec. In fact, we have conversations...
C: We've had conversations saying that, you know, the SPDX document should reference the SLSA provenance spec and so on. So it could be like: I'm going to take this SLSA provenance document, parse it, and express it as SPDX, and also add a reference to say that this is how I generated this SPDX document.
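A minimal sketch of that "parse it, express it, and keep a reference" idea. Both structures here are simplified stand-ins; real SPDX documents and SLSA provenance have much richer schemas, and the field names are illustrative:

```python
# Simplified stand-ins for both documents; real SPDX and SLSA provenance
# have much richer schemas. Field names are illustrative.
def express_in_spdx(slsa_provenance, provenance_uri):
    """Carry the build facts over into an SPDX-style document while
    keeping an external reference back to the original provenance."""
    return {
        "spdxVersion": "SPDX-3.0-draft",
        "buildType": slsa_provenance["buildType"],
        "builder": slsa_provenance["builder"]["id"],
        # Reference recording how this document was generated.
        "externalRefs": [
            {"type": "slsa-provenance", "locator": provenance_uri}
        ],
    }

prov = {"buildType": "https://example.test/rpmbuild", "builder": {"id": "obs-worker"}}
doc = express_in_spdx(prov, "https://example.test/prov/123.json")
print(doc["builder"])
```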
D: Yeah, because it seems like the information is very similar. I don't know how others feel, so, just speaking for myself, it seems desirable: if SPDX can represent everything we need, and it has the properties we want in terms of being unambiguous and, you know, hard to mess up, then having one specification instead of two seems desirable to me. Like, deprecating the SLSA predicate and just having SPDX sounds nice. And if we think long term: do we actually want to have these two ways to represent this information?
D: To make progress here, is joining the SBOM meeting the best way, or is it to start some sort of thread? What is a good way to continue this conversation?
C: I think joining the build profiles meeting. At least, we are going to get a model down within the next two weeks, so I think thereafter we're probably going to be in a better position to talk about what we're going to do with this. So, just to update: the build profile itself is aiming to be part of the SPDX 3.0 draft.
C: That's going to be published around the August/September time frame.
C: Okay, it doesn't look like there were any more questions, unless I missed one, but I'd be happy to chat about this offline as well, if people are on the chat.
D: So we usually spend a few minutes for anyone who's new to the meetings to just quickly say hi and where they're from. So if anyone is new and would like to quickly say hi, we'd love to hear from you.
B: Hi, I'm Mike Scovetta from Microsoft. I lead an open source security team and co-lead the Alpha-Omega project within the OpenSSF.
B: Hey, I'm Jeremy, I'm also from Microsoft. I'm also a SIG Release tech lead for Kubernetes, so we're both here from that.
B: I'm Jay White from Microsoft. Sorry, I don't have my camera on this time; I'm sitting here with what looks like a gurney, more explanations later. Anyway, I'm on the...
C: ...open source strategy and ecosystem team, and here for all the fun.
B: Hi, I am Alberto Figueroa, also from Microsoft, and I'm also the maintainer for [unclear]. Hi, everybody.
D: All right, anyone else? Thanks so much; great to see so many new faces, it's exciting to have you here. As a reminder, if you're able, we keep track of attendance in the meeting notes, so if you could join there; there's a link at the top of the Google group to join so that people have write access.
D: So thanks, everyone, for joining us; I'm really looking forward to working with you all. I know I've personally been in other meetings with some of you, so it's great to have everyone at the big community meeting.
D: ...work stream meeting on Mondays at noon Eastern. It seems like people are all over, there's the West Coast of the US and Europe, so there's no particularly good time, but noon is an okay-ish time. For tooling, most people could make Fridays at 10 a.m. Eastern, and for positioning, Tuesdays at 2 p.m. Eastern seems to work.
D
So
that's
what
I
suggest
unless
there's
any
objections
that
we
set
up
the
initial
meetings
we
have
the.
I
think
I
suggest
that
the
best
way
to
communicate
this
further
is
on
the
slack
channel
inside
the.
We
will
document
this
on
the
website
too,
but
I
think
for
like
the
initial
meetings
we
could
use
this.
This
dedicated
slack
channels
to
coordinate.
D: ...the public OpenSSF calendar. I just wanted to share that that's happened. Sorry for the delay on that; I had taken Tuesday off and was busy yesterday, so I didn't get a chance to do it until today.
D: ...a brief update in the beginning, and then we can talk about more general issues. There's a question from Corey about joining Slack; there is a way to join Slack. I will also send out... there's an email alias too, per working group, so if you sign up for one or the other, we'll send any sort of announcements about joining on both.
G: Sure. So, as per the spec's requirements, attestations at SLSA level four are supposed to be dependency-complete, and that includes all the dependencies that form part of the build graph, so the transitive dependencies at build time as well. But I'm wondering: the attestation format itself contains, in the materials section, links to the other artifacts that were part of that build, but not to the attestations for them.
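Roughly the shape being described, in the spirit of the SLSA v0.1 provenance predicate's materials section (the URI and digest values here are placeholders): each entry identifies an input artifact, but carries no pointer to that input's own attestation.

```python
# Roughly the shape of a SLSA v0.1-style "materials" section.
# Each entry identifies the input artifact itself, but carries no
# pointer to that input's own attestation. Values are placeholders.
materials = [
    {
        "uri": "pkg:rpm/opensuse/glibc@2.35",
        "digest": {"sha256": "a" * 64},  # placeholder digest
    },
]

# There is no attestation link to follow; you must discover the
# inputs' attestations some other way.
has_attestation_link = any("attestation" in m for m in materials)
print(has_attestation_link)  # False
```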
G: And given that most people's entry point, and the reason that they will want the SLSA attestations, is that they're downloading the artifacts, what they will have primarily is a link to the artifacts. I wonder whether anybody had done any work, and what approaches they'd taken, on, given a reference to an artifact, finding the attestations for it. And even though we're only talking here about the build dependencies being part of the dependency-complete set, that's not the same as the runtime closure of the artifact you're shipping.
G: So it would also be interesting to be able to find the attestations for the other runtime dependencies that your artifact has, and I wonder whether there was any work that people had done on that: basically, given an artifact or a reference to an artifact, being able to find the attestation for it.
G: So, Michael's got his hand up.
A: Yeah, so this is something I've worked on in the past, and I know I showed it here a while back: sort of, what do they call it, the scq tools, that go out and sort of recursively iterate through SLSA attestations. One of the things there is that right now it's making a bit of an assumption that those attestations are being stored in Rekor.
A: Obviously that's not going to work for everybody, but one of the things, and it's on the agenda, the thing I was going to talk about a little later, is that a few of us are starting to poke around with this idea of, sort of...
A: ...an API for discovering these things. The idea there being, right, for policy consumption purposes it's usually valuable to have it associated with the package, but when it comes to more generic discovery, like "hey, how do I discover the attestations or the like that are associated there", whether the attestation is purely SLSA or it could be, you know, a security-scan attestation, things like that, you still want to have something there. And so there is work being done there right now.
G: Oh, okay. I mean... so, when I looked at Rekor, I saw that, yeah, basically you could certainly search the log for the attestations, but there wasn't a way that I could see to easily find them by the artifact, or a reference to the artifact itself. But if you can do that via the hash, that's interesting, yes.
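A hypothetical sketch of the hash-keyed discovery being discussed, in the spirit of searching a transparency log: index attestations by the SHA-256 of the artifact they describe, then look them up from the artifact bytes. The in-memory index is a stand-in for a real log or registry.

```python
import hashlib

# Hypothetical sketch of hash-keyed attestation discovery: the in-memory
# dict stands in for a transparency log or registry index.
index = {}

def store(artifact_bytes, attestation):
    """Index an attestation under the sha256 of the artifact it describes."""
    digest = hashlib.sha256(artifact_bytes).hexdigest()
    index.setdefault(digest, []).append(attestation)

def find(artifact_bytes):
    """Given the artifact itself, recover its attestations via the hash."""
    return index.get(hashlib.sha256(artifact_bytes).hexdigest(), [])

store(b"artifact-contents", {"predicateType": "slsa-provenance"})
print(len(find(b"artifact-contents")))  # 1
print(len(find(b"something-else")))     # 0
```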
G: That's interesting. I think it might be more interesting to have something a little more explicit, but yeah, that's an interesting way forward. Mark, you had your hand up first.
D: Yeah, so to clarify your question: one thing is actually finding the attestation and getting it, but did you also mention that, at build time, if you input attestations for particular artifacts, recording that in the provenance? Is that what you're saying, or not?
D: Yeah, I think this is one of the problems that I'd like the tooling working group to specifically work on: how to do that in the model. One recommendation I have, which I've mentioned previously: looking up by pure hash has a problem, because with build artifacts you might get hot spots. For example, the empty file is probably generated by lots and lots and lots of different builds.
D
So
even
if
it's
not
malicious,
it
might
just
be
the
case
that
you
just
get
many
results.
So
storage
and
retrieval
are
both
hot
spots
for
that,
and
so
you
might
want
to
look
up
so
not
even
just
hot
spots
for
performance,
but
also
for
usability
of
like,
if
you
don't
have
an
attestation
that
satisfies
it.
You
might
there's
no
good
way
to
prevent
to
present
a
good
error
message
to
users,
because
you
say
I
looked
up
four
million
at
the
stations.
None
of
them
passed
the
policy.
D
I
don't
know
what
to
do
versus
like
your
policy
said
you
need
x,
I
found
y,
you
know
what's
the
problem,
one
way
to
help
with
that
might
be
to
include
like
some
sort
of
package,
identifier
or
uri,
or
something
like
that.
D
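The hot-spot concern can be shown concretely: every empty output file hashes to the same single well-known SHA-256, so a pure-hash index piles all of their attestations under one key. Keying additionally by a package identifier, as suggested, separates them again; the identifiers below are illustrative.

```python
import hashlib

# Every empty file shares one well-known sha256, so a pure-hash index
# collects all of their attestations under a single key. Adding a
# package identifier to the key (illustrative mitigation) separates them.
EMPTY_SHA256 = hashlib.sha256(b"").hexdigest()

by_hash = {}
by_pkg_and_hash = {}
for pkg in ["pkg:rpm/a", "pkg:rpm/b", "pkg:rpm/c"]:
    att = {"builder": pkg}
    by_hash.setdefault(EMPTY_SHA256, []).append(att)
    by_pkg_and_hash.setdefault((pkg, EMPTY_SHA256), []).append(att)

print(len(by_hash[EMPTY_SHA256]))                         # 3 results for one lookup
print(len(by_pkg_and_hash[("pkg:rpm/a", EMPTY_SHA256)]))  # 1
```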
G: Okay, Rory, you have your hand up.
G: The operating system is one point where you could do it, but there are other things where, for example with proprietary tooling, you could potentially give a digital signature that you couldn't verify, because you don't have a license to actually ship the artifact that you're signing to whoever's actually validating it. So yeah, that's another point about graph walking: where do you terminate, and how do you terminate that graph in such a way that it makes sense, that you are actually talking about terminal nodes at that point?
E: In our case we have the additional benefit that we produce the compiler, so the compilers the OS uses are not public at the time we use them, and then the changes get rolled into an official release later on. So yes, this is an interesting sort of discussion, and I'd love to have discussions on the right way we want to solve some of these things.
E: The remaining thing, just one last thing: a lot of the OpenSSF and the Linux Foundation, and a lot in Microsoft in particular, have stated that for every package we're producing an SBOM, and the question is, do we just inherit that as part of the expected artifacts we use to build the rest of the system?
A: Oh, I was just going to say, to be clear, this is purely proof-of-concept code, but if folks did want to poke around with it... where is this... I did have...
D: I think we have an entry, I don't know the specific person, but the SUSE team talking about their compliance work.
F: Yeah, yeah. So, is it... slides, perhaps, and then just go through them? Okay, just a second.
F: Yeah, so basically I would like to talk about how we started looking at SLSA at SUSE, started to work with that, and worked on compliance, or working towards compliance, with the v0.1 standard.
F: So, SUSE, you might be aware: as a Linux operating system vendor, we take a lot of community code. We review the code, and we select the packages that we ship in our products.
F: Then we build them from source code to binaries, put them into our testing pipelines, sign them, and put them on various content delivery networks, in either RPM form, FTP trees, ISO images, containers, and so on.
F: We have been doing various certifications on the security side, like Common Criteria, in the past, and we became aware of SLSA and brought the idea of looking at SLSA compliance to our management.
F: Our management liked the idea and said go ahead, so we went and did the gap analysis: what SLSA is composed of, how it fits what we actually have already, and what we need to change. And we found that what we are doing for source code management and building already meets quite a large part of the SLSA framework as it stands now, and we decided to refine the rest, the pieces that were missing.
F: For instance, provenance was missing; more details about that later. So the entire build, from our source code to binaries, including turning it into RPMs, turning RPMs into images and FTP trees and whatnot, is done by the build service.
F: The build service is a SUSE-developed, in-house build service solution that manages source code and builds binaries, up to the point of signing and publishing. We've been using it like forever; when I joined 20 years ago it was already there. It's now in its second or third generation and continuously developed. Source code is stored inside the build service: the full history, with associated users, who did what, the possibility to manage diffs, and so on.
F: Sadly, it's not in Git format, which is today's standard way of doing that, but in some form of SVN-like, handcrafted storage of our own. This is something we're also planning to look at, whether we can replace it with external Git storage, for instance. So it does the source code management. Internally we have a separation between projects and packages: a package is just one source for a package, like glibc, while projects encapsulate things like SLE 15 SP4 or so.
F: Factory and the build service model access management on top of that, based on users and groups, so access protection exists on those resources.
F: Packages come with changelogs and revisions, and we have tooling around it that verifies incoming GPG signatures, for instance. So this is one challenge: how to get source code into the system, and trusting that whoever has provided it is really the maintainer. That is, however, for us a bit outside of the SLSA scope, but it's something that we are also looking at, how this can be improved, like with more GPG signatures or whatever methods. In the build service, source code moves between projects, so we model our actual release process...
F: ...on top of that, between developer home projects, staging projects, and final or release projects. These kinds of moves happen only after either human or automated reviews, or multiple stages of those. So we can mirror all kinds of release processes on top of that. For instance, SUSE's enterprise development is a two-stage thing: we have developer homes.
F: The build project builds the sources; what is submitted is not binaries but sources, and it rebuilds everything. It includes staging projects for QA to do the continuous-integration parts, and in the end these build projects can be frozen, or release binaries into release projects, to either create GA builds, containers, and so on.
F: For Factory, our open source project, we have a three-stage project setup: the community developer has his own project he can work on; he submits to a so-called development project, an intermediate project where more senior community members review the changes and look if they fit the required style; and then they forward it to our main Factory build project, where we again have the building of binaries, staging, and publishing to our users. So the build service is kind of flexible.
F: We can define that in there and mirror the access restrictions in whatever way fits. The user management is done via an external LDAP-based IdP solution; we use a Univention Corporate Server, a Debian-based commercial product that is run on-premise, so that we still have full control over the user management. The build service is also hosted in the SUSE data center. The group management is not in LDAP; it's managed in the build service itself.
F: Excuse me. So, building binaries: the build service has a set of workers. Basically every one of them is a specific machine that spins up worker KVM instances, builds the package, and discards the KVM instance afterwards. So it's isolated, fully scripted builds, with no human interaction: a source check-in triggers the internal scheduling, and the internal scheduler distributes the builds to these workers, including, of course, automatic dependency resolution.
F: Binaries are not leaving the build service and not getting stored outside of it; the intermediate binaries are also stored within the build service. In the end, of course, they leave it as the deliveries, but the normal user has no way to inject binaries. Of course, with weird RPM hacks that would be possible, but usually only sources are imported, except for bootstrapping purposes.
F: So we looked at SLSA: how does it fit the build service, for the four big pillars of source management, binary builds, build provenance, and administration? We went over them and tried to do a gap analysis.
F: With the goal of really achieving, or going towards, level four as it currently stands. The source code management, as far as we see it, meets all the requirements. For authentication of users, we saw that strong authentication was not there; we currently had just password authentication, so we have complemented that now with multi-factor authentication, so that simple password stealing is no longer possible. Otherwise we saw that what we do as a source code management system meets the SLSA standards.
F: So we didn't really look further at the source code management. The binary building we've been working on over the years already: with these standalone KVM workers that automatically pull their scripts, automatically set up chroots, build, and discard them, we also meet things like stored artifacts only coming from the build service itself. The builds are fully scripted and hermetic: there is really no network access, incoming or outgoing, to these workers.
F: The builds can be redone, and they're also to some extent reproducible. As we are building RPMs, with reproducibility we face the challenge of timestamps and signature nonsense occasionally, so we are still struggling to get a fully bit-identical, reproducible RPM build.
F: So this is something where we still have a gap at this time, but at least content-wise we are working on it: we have a high percentage of our builds where the content of the RPMs is binary-reproducible. There's usually no human or outside interaction on these builds, and it's all managed in our data center, not on any kind of development workstations. Provenance: as we went to implement it, we saw that provenance is something...
F: ...we have not been doing yet. We captured build environments internally, but this was in an internal format. So our build ops team implemented this in the in-toto attestation format, a provenance format describing which binary RPMs were used, in which KVM it would have been built, what the configs are, and what the sources are. We store the artifacts on an outside source server and deliver these in-toto JSON attestation files in parallel to the RPMs, with, of course, the goal that the build can be reproduced by third parties.
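A rough sketch of the kind of in-toto statement delivered beside each RPM. The envelope shape (`_type`, `subject`, `predicateType`, `predicate`) follows the in-toto attestation format; the concrete package names, digests, and builder URI are invented for illustration:

```python
import json

# Sketch of an in-toto attestation statement delivered beside an RPM.
# The envelope follows the in-toto attestation format; the concrete
# values (names, digests, builder URI) are invented.
statement = {
    "_type": "https://in-toto.io/Statement/v0.1",
    "subject": [
        {"name": "bash-5.1-1.x86_64.rpm", "digest": {"sha256": "c" * 64}},
    ],
    "predicateType": "https://slsa.dev/provenance/v0.1",
    "predicate": {
        "builder": {"id": "https://build.example.test/worker"},
        "materials": [{"uri": "pkg:rpm/glibc@2.35"}],  # RPMs used in the build
    },
}

# Written out as a JSON file in parallel to the RPM itself.
sidecar = json.dumps(statement, indent=2)
print(sidecar.splitlines()[0])
```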
F: So that's... but I'm not sure what the standardization is for them to reproduce that. This was a big part that we needed to implement, and where we spent quite some effort; it's probably not fully finished. Especially the signing part was a bit tricky, but yeah, we started delivering that around two or three months ago. Administration, then, of the build service: that was the fourth pillar.
F: The whole system is usually deployed with Salt states, on a VM cluster that can be spun up with Salt deployments and/or similar methods. There are still some admins, of course, that can make changes to the system, but we've looked at these requirements, that this should be very limited and two-person reviewed, and tried to do it. I mean, of course, the superuser can do a lot of things.
F: It's kind of hard to enforce these kinds of superuser two-person-review restrictions, but we strove to implement those.
F: The whole thing is deployed and managed DevOps style. So, challenges that we faced, provenance challenges: currently, provenance for RPM builds is kind of easy, we put the JSON file beside the RPM. We're still looking at how to do container provenance, and where to publish the provenance files in the end: beside the image in the OCI registry, or in a Rekor log, or wherever.
F: We also saw, especially if you look at openSUSE Factory, which is an always-rolling release that occasionally rebuilds larger sets of packages: how do we keep the provenance for everything, including all the RPMs that a specific RPM was built with? Because that will easily explode in storage requirements. So this is still a bit of a challenge, and that's why, for instance, for openSUSE Factory we have not enabled it, because we don't have that much storage available; it would just be too big. And occasionally we do some weird bootstrapping things.
F: So how to do the provenance for the weird bootstrapping, that's also an open question that we are facing. And one challenge that I forgot: we still had a lot of high privileges for release management, which we reduced to the superuser group, and we had to change, well, adjust a lot of processes so that only the superuser team in the end has to be finally authorized, and others...
F: That's a brief recap of our implementation towards SLSA; we're still looking at that, and I'm happy to take questions. Rory has his hand up, so please proceed.
F: So the workers have a local caching directory for RPMs: they store, just for building, the RPMs that they fetch for building, but it's on the one hand checksum-verified, and also discarded if the space runs out or if the worker is newly deployed.
F: Yeah, I mean, we started this in the middle of our big code stream, so we basically assumed what we have as existing. But yes, the bootstrap issue is a topic, you know, and it's a challenge, especially if we would like, say, to introduce RISC-V as a new architecture in the near future.
F: Yeah, so the way this works is that our worker client instance fetches all artifacts to the local machine, then prepares a very limited set that it can boot out of these packages, spins up a KVM instance, and boots into this minimal instance. The minimal instance has the RPMs and the sources inside. This KVM instance has no network interface; it builds everything through and, in the end, writes out a CPIO archive at a specific location in the file system and terminates itself, and the worker node extracts the CPIO for the build results, like RPMs, build log file, statistics, and so on. So the actual build doesn't have any network interface available.
F: I mean, of course, the worker nodes and the source server and the binary servers are in the network, you know, but the build itself is not able to access this network. The worker node transfers the finished results to the binary backend; the binary backend takes care of the signatures, so it works with a point-to-point connection with the signing server, signs it, and stores the binaries. But the build itself is not able to access this network.
E: The thing I struggle with, trying to write SLSA evidence and claims out during the builds: you're basically in a model where you're going to have to write them to a scratch directory on your build machine, wait for the instance to stop, and then harvest it. And there's that window where, if something's already in that isolated box, it's going to give you tampered results; you can't get them out through a different path and say, hey, it's untampered.
F: We consider this as one unit, a trusted unit, because otherwise it would be a bit of a challenge; it would make it more complicated to really sign this in the build and not be able to trust the intermediate network or something. So currently we abstract it: we look at our build service cluster as one trusted entity. Okay, thanks.
F: If not, then thank you. If you have questions, send me an email.
F: So the largest build is something like Ceph or Chromium or LibreOffice, probably, if not the kernel. And when I'm talking about a worker: occasionally we have these 64-CPU big fat machines where a worker node is assigned like 32 CPUs or 16 CPUs and so many gigabytes of RAM, so there is no secondary...
F: Ten or fifteen years ago we experimented with Icecream, so offloading compile jobs, but we stopped doing that. What we are doing is compiler caching: ccache compiler caching for some packages like Ceph, where we store the intermediate ccache as an artifact beside the RPMs that were built; that's also interesting for the in-toto attestation. Then, yeah, this ccache is a channel.
F: It's yet another thing to think about, yeah. But usually a very big, beefy build machine will get you there; still, Chromium takes 18 hours, so it's long.
D: Thanks so much, Marcus, that was really great and really helpful. I think it's really nice hearing how people use this specification, and also, as we think about the next version of the specification, it's really useful to look at this as an example of how it was used: places where we did well with the spec, and places where we could do better. In particular, I thought the comments were interesting on the common criteria, or, that's the wrong word...
D: ...the common requirements around two-person review of the administration. That was something I thought actually wasn't very readable, and most people just ignore it, but it's good to hear that it was usable. So I'd like to include that in some form in the next version.
A: It can wait till next time; it's not a big deal. We were just, I think, it's tied to what Sean is doing. There's beginning to be an initiative...
A: Nothing super public right now; it's just a few of us in the community, like Santiago from Purdue, and, you know, Brandon Lum from Google, and a few of us have sort of been poking around with some interesting ideas around how you distribute SLSA provenance and other attestations and metadata artifacts, and how you can start to ask questions of your supply chain based on it. But details can wait till next time.
D: All right, I think there's one more thing: I was asked to review a blog post. It's a pull request, so I think we could discuss it there if you have any feedback on the pull request. Also, it's probably worth highlighting that there's a pull request out now to change the community contributing guidelines for blog posts.
D: That was a topic that we brought up last week, so if you have opinions on that, please review; I'll add a link to the meeting notes. Because that's kind of a governance question, I think it's worth highlighting to the community.
D: Okay, well, thank you, everyone. As always, I appreciate everyone's great contributions. Have a great week, I'll see you next time, and we'll send out announcements about the specific work stream meetings.