Description
Warpforge is a tool for building software and creating data pipelines, founded on content-addressable primitives and aimed at happily operating in a decentralized environment, both in the sense of "on laptops as well as in datacenters" and in the sense of "I share build instructions with friends, and we don't need a monorepo to coordinate". Along the way, we put many IPLD data structures to use to reach our goals, including some data structures which are used to create local solutions to the infamous decentralized naming problem. This talk was given at IPFS Camp 2022 in Lisbon, Portugal.
A: Hi, I'm Eric Evenchick, and this is Eric Myhre. Yes, we are both Erics, and we are both working on the same project, which is Warpforge. Today we're going to be doing really the first introduction to this project; it's the first public talk about it at an IPFS or PL event. So we're looking forward to sharing this with all of you in the compute-on-data track.
A: So, at a high level, what are we building here? Simply put, it's a three-stage thing. At the very input side you have content-addressable things going in. These are some sort of input asset that you'd like to do some compute on; we usually treat these as filesystems, as tarballs that get extracted.
A: Then we have some execution that happens, and this happens hermetically: we're doing execution in an environment that relies minimally on your host. Once that execution is done, you probably want to do something with the results, so we take those outputs and also make them content-addressable and usable in further execution, or in next steps. And in terms of what this is for:
A: These inputs tend to be things like the tools you're going to use to build software. Things like GCC, make, and Bash if you're building classic Linux C binaries; something like Go (and, well, nothing else, because Go is pretty self-sufficient); maybe it's a Rust toolchain; maybe it's Python, something along those lines; as well as any library code that you're going to need to build your tool. From that, we go into the execution.
A: This is also going to depend on your particular build chain, but at a simple level it's something like make and make install, or something like go install: running that toolchain. And what the output is, is, well, whatever you were trying to build. If you're going for libraries, maybe that's ImageMagick; if you're at PL, maybe that's Kubo. There are various things you could put together here, but fundamentally it's all about building software. That's the main approach in general.
A: It's also part of a bigger ecosystem, and you're going to hear more about these items later today, so I'll introduce them up front. Warpforge is the build tool: it takes some inputs, runs something, and spits out some outputs. We also have this concept of Zapps, which is a way of packaging applications such that they're path-agnostic, and we can run them from anywhere on the Linux filesystem.
A: That sounds trivial, but it's actually harder than you might think, and there's a whole talk about it for that reason this afternoon. Lastly, we have this thing called Warpsys, which you'll actually see pop up in this talk. This is a software distribution that we're starting to work on. Think of it this way: Debian's got a bunch of packages as a software distribution; Arch has a big list of packages; Nix has a big list of packages.
A: This is the foundation for starting to build up some of those packages such that they can be used in these types of builds, in a path-agnostic way. To execute hermetically, the bedrock of this is to do all the execution in a containerized environment. We use a container to do every piece of exec, and the real goal here is to eliminate the dependencies on your host system as much as possible.
A: The concept is that we want to be able to have instructions that will allow us to build the bit-by-bit reproducible, exact same executable twice. By putting all of our definitions for inputs and definitions for exec in a single place, in a way that is not going to depend on your host system, this eliminates a lot of the challenges of trying to make things reproducible.
A: But writing these formulas by yourself, without any help from a computer, is quite annoying. So we have an abstraction on top of that called plots, and this allows us to have multiple steps, and also allows us to use some other nice features that make this a lot more sane to deal with as a human. And then, lastly, we have modules, which are just some metadata.
A: We attach these to builds to say: hey, here's a name, and some other data that you might want to keep track of. The formula, again, is this base level of execution. All these inputs are content-addressed; they're hashes, and they become paths inside your container. That's really it: we're just placing these hash values inside a container at the input phase, and then we run some exec.
A: This is really just a JSON object; that's how it's represented. When you see the inputs here, yeah, it's a big hash, and that's not an easy thing to keep track of without doing some copying and pasting or something along those lines; you're probably not going to want to do it by hand. But fundamentally it's just these three things. We have the input saying: I have a root filesystem I'd like to place in the container. That root filesystem happens to be a BusyBox binary that provides your typical utilities, like sh, a basic shell; you've got echo; you've got cd; things like that.
A: The exec step is where we get into actually running what we've brought in. In this case we're just running sh, with a very simple command to put the text "hello" into a file called /out/world. Then, lastly, we might want to take that file and do something with it later, so we set up an output that picks just the /out directory. We'll pack that up as a tarball, and that output is now something we can use later.
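Putting those three parts together, a formula along the lines just described might be sketched like this. This is an illustrative sketch only: the field names and the "ware:" reference syntax are approximations of Warpforge's schema, and the input hash is a placeholder rather than a real BusyBox filesystem hash.

```json
{
  "inputs": {
    "/": "ware:tar:<hash-of-busybox-rootfs>"
  },
  "action": {
    "exec": {
      "command": ["/bin/sh", "-c", "echo hello > /out/world"]
    }
  },
  "outputs": {
    "out": {
      "from": "/out",
      "packtype": "tar"
    }
  }
}
```

The keys of `inputs` are mount paths, the exec action runs inside the assembled container, and each named output picks a path to pack and hash.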
A: Some distinctions here over some other systems: the input is a map, and the keys are actually the mount paths. So you can compose this however you'd like. If you'd like to bring in six different versions of GCC and place them all in different places, for some probably terrible reason, you could do that. But it means that you don't need to build one big tarball, one big image, and load that; you can actually take different pieces and stick them together at runtime. And the outputs? Well, that's also a map.
A: You don't need to bundle up your entire filesystem. You can just say: I only care about this one binary. For example, with Go you often care about maybe a handful of files from your output, but you may have brought in a lot of things to build it. You can pick out just what you want to save, and save that, instead of having to tar up the whole system. So, again: manually writing hashes is not fun, and this brings us to plots.
A: One thing we realized pretty quickly is that we need a naming system for inputs and for outputs, and what that became was catalogs. There's a whole piece that the other Eric will talk about on that, but being able to assign real names to things is quite helpful.
A: Also, multiple execution steps can be very useful. One example: if you are building a large binary, and you have some steps that build it and then some steps that maybe do some tests or move some things around, and that first step runs and then you want to modify one of the later steps, you'd probably rather not have to rerun the large build. What this allows you to do is break things up into steps, which are both logically useful and also cacheable.
A: So we can cache parts of the build. We also need the ability to chain outputs of one step into the next step in order to actually accomplish our goals, so we have the concept of pipes. This is a way of taking certain outputs and sticking them into the inputs of another piece of the execution.
A: Also, notably, plots use these things called protoformulas. They are formulas with some extra features, and they end up getting resolved into those base formulas, but they're easier to write because they allow for some more complex inputs; it just makes them easier to work with. A plot might consist of many steps, but a single step of a plot would look something like this, as you'll see.
A: Now we have a protoformula at the top, and we also have an input that is no longer an ugly hash that would take you forever to type; it is instead a human-readable name. In this case we use "catalog" to note that it is a catalog-type input, and then we provide a triple. The first part of the triple is the name of the input: we've used a BusyBox that comes out of our Warpsys distribution.
A: We've named that as a URL, to make it easy to have good namespaces. We then provide the specific release, which in this case was 1.35.0, and then an item. That item value is free text; you could do whatever you want with it. For our particular distribution of software, we've standardized on using "amd64" for the architecture and then "-static" to show that this is a statically linked binary.
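The catalog-type input with its name/release/item triple might be written something like this. A hedged sketch: the "catalog:" reference shape shown here is an approximation of the syntax described in the talk, not necessarily Warpforge's exact spelling.

```json
{
  "inputs": {
    "/": "catalog:warpsys.org/busybox:v1.35.0:amd64-static"
  }
}
```

The three colon-separated parts after "catalog:" are the URL-style name, the release, and the free-text item.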
A: Then everything else is basically the same. The exec and outputs work the same way, and we get the same result, but now we can use these names instead of having to sling hashes all over our definitions. The other thing we can do is have another step. We could have a step two in this plot; this is also a protoformula, and it looks very similar.
A: The one difference here is that we have now mounted the result of step one at the path /step-one, by using a pipe. There, we specify that it's a pipe, and that we should pipe in from step one the output "hello-world". This allows you to combine a whole graph of execution with many different parts. Another notable thing about Warpforge is that it actually resolves this graph for you.
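The second step's inputs, with the pipe described above, might look roughly like this. Again a sketch: the "pipe:step-name:output-name" shape and the step and output names are illustrative assumptions based on the description in the talk.

```json
{
  "inputs": {
    "/": "catalog:warpsys.org/busybox:v1.35.0:amd64-static",
    "/step-one": "pipe:one:hello-world"
  }
}
```

The pipe reference names a previous step ("one") and one of its declared outputs ("hello-world"), and mounts the result at the key's path.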
A: Notably, it'll fail before you attempt to execute, which might save you quite a bit of time. So, just to show a full plot: there's a bit more to it. It's a JSON object, and one of the real key approaches here is to describe everything as data, so we don't really care about how you're templating these out. The full plot looks pretty similar, with some more features. We can, if we want to, mount things from the local filesystem. This is not a good idea if you're trying to do repeatable builds, because now there is definitely a dependency on your host system; but sometimes you do want to run something on files on your host system, so it can be useful, and it's also useful for testing. Sometimes people want to pull in arbitrary filesystems. And the steps again just sort of go together in this steps map. Notably, it's a map, not a list: it is not ordered.
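A full plot along those lines, with a read-only host mount and an unordered steps map, might be sketched like this. Illustrative only: the "mount:" and "pipe::" reference spellings, the step names, and the exact nesting are assumptions, and the real schema may differ.

```json
{
  "inputs": {
    "rootfs": "catalog:warpsys.org/busybox:v1.35.0:amd64-static",
    "local": "mount:ro:./files"
  },
  "steps": {
    "one": {
      "protoformula": {
        "inputs": {
          "/": "pipe::rootfs",
          "/work": "pipe::local"
        },
        "action": {
          "exec": {
            "command": ["/bin/sh", "-c", "ls /work"]
          }
        },
        "outputs": {}
      }
    }
  },
  "outputs": {}
}
```

Here a pipe with an empty step name refers to the plot's own inputs, and, matching the example in the talk, this plot declares no outputs at all.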
A: We determine the order based on what order things need to run in to produce the outputs you've asked for. This particular one is kind of simple: it has no outputs, which is maybe of questionable usefulness, but you could also do a read-write mount if you wanted to actually work in a local directory, or you could spit outputs out to tarballs. The last concept here, if you're looking at our list of three things, is modules.
A: This is pretty simple: it's just to attach some metadata. Often this is things like who authored this, and what collection of packages it's part of. It also, notably, attaches the name: whenever you're using a catalog, this name is the name that's going to be used. And you can actually put whatever metadata you'd like in here; we have a free-text string-to-string map. So let's say you have license information you want to include, or maybe a website you want to link to, or whatever it is: all of that's available as free text, depending on how you want to build your catalogs. Speaking of catalogs, we've seen them in use a little bit. We've talked about plots, and I kind of nebulously said: here's a magical string that will look up this item.
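A module document along the lines just described might look something like this. A sketch under stated assumptions: only the name field is described in the talk; every value in the metadata map here is invented free-text illustration.

```json
{
  "module": {
    "name": "warpsys.org/busybox",
    "metadata": {
      "author": "example-author",
      "license": "GPL-2.0-only",
      "website": "https://busybox.net"
    }
  }
}
```

The name is the string catalogs will use to refer to this module; the metadata map is deliberately unstructured so you can record whatever your catalog cares about.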
A: We are hoping that we can build a system that means people do not need to centralize and build massive catalogs themselves. Instead, we can coordinate with each other and build things with namespacing, so that you can decentralize these catalogs and have software that's built by multiple parties. And to do this:
B: A data structure like git is good, because it is aggressively content-addressed at the bottom of the design. That's a critical choice, and it works really well. But imagine trying to use git with just the git tree-hash commands, and having no concept of branches or tags. It would not work well; you would not accomplish things with that system. We had the same discovery as we were building Warpforge: content-addressed data structures, and content-addressed instructions for computing things, are a very useful primitive.
B: But you still need something attaching names to this at the top in order to make it usable. So we're going to introduce a whole data structure for that, called catalogs, and they're going to solve this name-to-hash mapping problem. We're also going to dial up the complexity and the challenge for ourselves pretty much to the top, because we have this strong interest, in this system, in reproducibility. You saw the Reproducible Builds logo on the slides earlier; you see it on my shirt as well. It's a major emotional commitment.
B: We want to make sure that any time there are names involved in a system, and you're doing that name-to-hash lookup, the name resolution process is itself deterministic, reproducible, and extremely auditable, for all time. So the giveaway is: there's going to be a Merkle tree, and we're going to insist that the root hash of that Merkle tree is also covered by the root hash over a plot. As long as you've got something encompassing both of those, you're going to have a good time. But we'll dive a bit more into this now.
B: So we built a somewhat hierarchical system here to contain different concepts. We're going to worry about having this structure encompass many different authors, and then many different versions of things; and then at some point in our domain we had to deal with machine architecture, so we needed more metadata to describe that problem, and we're going to represent all of that in this structure. At the very top level of this catalog concept, we're worried mostly about different modules, things that may have been produced by different authors.
B: At that level, it's just going to be a huge map. The map is from a human-readable name to more data: a hash that will link to smaller parts of this document. And it may be a huge map. If you look at other systems that are doing big pieces of architectural work, like a Linux distro such as Debian, it has quite a few packages in it; it might have a quarter million packages in it. So we need this map to be capable of containing a lot of data.
B: So we can't just throw one giant JSON blob at this problem and call it a day. I mean, we could, but let's not; it won't scale well. We're at IPFS Camp here, so I'm going to have to just briefly shout out one of the IP-star technologies: we use IPLD for this data structure, and we use something called an advanced data layout for it.
B: An advanced data layout (and I'll just beg you to type that into the search engine of your choice and find the IPLD docs for it) is something that acts semantically like a simple data structure, like a map, but may have sharding internally. Our preferred data structure for this is a Prolly tree. It's sort of like a combination of a Patricia tree and sharding and B+ trees and hashing, all at once, and it has a bunch of very nice properties.
B: Now, that top-level map points at a data structure specific to Warpforge, called a module. This contains the human-readable name for things, so "foobar.org/frobber" or whatever. You can manage this string however you want; it's just a string. Because we're still in IPLD and Merkle-tree-construction land, this is going to point to deeper parts of our information.
B: The next level down is where we have releases. Again, we have to have a name attached; this is free text, usually it has a "v" in it and then some numbers. It will contain another map of more data: string values as keys again, and also string values or CIDs as the values. In our system, that is going to point to a filesystem hash. This is where you finally get to, like, solid stuff, yay. And you can attach another map of free-text metadata to this, for extensibility purposes.
B: We use this for architecture tuples. It's still a free-text string, but as we tried to, like, build the universe and have it execute in Warpforge, we found that architecture exists and is a thing we have to describe, so here it is. Now, every part of this document, again connected by CIDs, is just one huge, nicely structured, fairly typed Merkle tree, and we have names that will index across this whole thing.
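Collapsing the hierarchy just described (modules, then releases, then items) into a single sketch gives something like the following. In the real structure, each level is a separate node linked by CID rather than one inlined document, and all the names and hashes here are placeholders:

```json
{
  "warpsys.org/busybox": {
    "releases": {
      "v1.35.0": {
        "items": {
          "amd64-static": "tar:<filesystem-hash>"
        },
        "metadata": {
          "replay": "<CID-of-the-plot-that-built-this>"
        }
      }
    }
  }
}
```

The top-level key is the module name; a release maps free-text item names to ware hashes; and the free-text metadata map at the release level is where extras, such as the replay discussed later, can hang off to one side.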
B: You don't, or at least that's not in the picture. We're worried about having integrity in this structure, and we're worried about having reproducible resolve when you start with that root hash. We really focused on that, and then we stop right there, intentionally, because we want to allow the authentication and the update-control system to actually be pluggable. We want to let people solve this in possibly various ways; we want people to have policy control over this on their local machines.
B: You could have a PKI thing here with signatures; that'd be great. You could have some sort of a chain of blocks on the top; fine. Transparency logs, please. Or, as a very degenerate but totally functional thing, you could have this whole data structure serialized on your local disk and say: what's on my local disk is authoritative.
B: It's a convenience thing. By standardizing the heck out of this data structure, we have also gained the ability to make some other standardized tooling around it. For example, in Warpforge we want people to be able to not just build stuff but share it, and so we've built, for example, a website generator, where anything that you have tracked in this catalog structure, by doing a Warpforge build and then saying "catalog add-release" and so on, gets a page on a generated site.
B: It's even navigable on mobile. But now, one thing I mentioned and then sort of skipped past real fast, that I want to come back to: I said there was one piece of metadata in a catalog that is extra interesting. This thing is called a replay. It's attached by CID, linked to the rest of the catalog, and it's kind of off to one side.
B: But if you want somebody else to be able to do that again, really you just have to share that declarative JSON document. So we've automated that. This should not take any human effort at all; it should just be built into the tool, so everyone does it all the time. When you do "warpforge catalog add-release", we're going to save that plot, that declarative JSON of what is being done, in the release metadata.
B: This isn't, honestly, all that magical; it's just really important to do it. So this piece of data gets frozen, attached to the release, and fits into the whole Merkle tree. That's it. And remember, we said it's very important to us that every name resolution process is reproducible, given that you have the same catalog root. We're freezing the plot JSON in the metadata, where it's covered by the hash of, eventually, the catalog root. So you've got one catalog root hash.
B: The semantic implications of this replay data structure: you can truly rebuild anything; you get recursive explanations; you have a total bill of materials for what you needed to build anything. And all of it is ready to execute, with minimal human intervention. There's also one subtle detail about how we put this together that is different than, I think, any other system to date. If you're mentally comparing this to other build tools, perhaps you've heard of Blaze, sorry, Bazel, the open-source one; Nix; various other tools like this that you might at least have heard of, with recursive build descriptions.
B: There are some out there, but the way we've strung together this Merkle tree in Warpforge means you can start anywhere with your rebuilding or your explaining, and you can pause at any point, because every plot has a snapshot of the filesystems and their content as the input, and a snapshot of the filesystems and their content as the output. You can reproduce and audit any one step.
B: This means that if we build really large graphs of systems, we have an easy time: we can view things as having this complete dependency graph, but we have really no difficulty at all bootstrapping off other systems. When you try to build the entire universe at once, you're going to end up with a very snarly thing somewhere down in the bootstrap area. And we also see, in systems like Bazel for example, that these systems seem to end up de facto requiring that you coordinate in a monorepo. I hear this from basically everyone who uses these tools, even people who love these tools. They will say: yeah, we have our company repository, and it contains all the build instructions, and it works great for us in our organization; no, we can't share that with you, sorry. That's a bummer, and with Warpforge I'm hoping we get away from this, because build descriptions in Warpforge always depend on content, and they can attach this information about how to recursively build.
B: We have much, much more freedom. We have this freedom to pause and resume the build process wherever we like. It's much, much easier for us to stitch together graphs where different people have owned the instructions in different parts of the graph. It's relatively easy to freely update your references to what else you've included, or not, and keep them frozen. And a very surprising outcome of the way we've stitched the name and hash lookups together is that we can actually describe cycles in the build graph. This is weird. You can.
B: So, if you've been wondering: are there other package managers that do this, or close-enough things? There are some that come close, but I hope these last couple of slides have suggested some novel innovations as well. I would also say that, in general, package managers tend to involve solving a series of problems: there's transporting the data; there's having some lock files; there's selecting versions; and then there are usually some opinions about the interior of the packages, so, like, npm packages have a different internal file structure than Go packages, etc.
B: Some of these are meaningful problems, and are different per toolchain for a reason, and I would never argue with that. But those first two: transporting data, it would be nice to solve that problem once, if we could. As for lock files: in the dark history of many years, I think package managers have argued about whether or not lock files are a good idea. I think that argument is over now, and we can definitively move on. Yes, lock files are good; I think at this point it should even be obvious. Yes, lock files.
B: So, in order to make Warpforge usable today, we're building stuff and trying to solve these problems as we go. Possibly, in the future, we would like to look into building some "bifrosts", as I would call them, to bridge other package managers into Warpforge: just take the selection algorithms that they've already got, turn the result into lock files, and proceed. That way we can get the reproducibility, the catalogs, the recursion and explanation, et cetera, while still reusing other tooling. I would love that.
A: Yeah, so just kind of closing out here: why are we building this thing, and why are we building this thing now? I think the fundamental point here is that we want computers to do things that are repeatable and reliable and predictable, that happen the same way every time, and that's a hard problem to solve. Among the bugs that we run into, there's somewhere you get things like kernel dependencies; there also seem to be some where you get things like hardware dependencies.
A: So it's not an easy problem to fix, but better tooling means we can better detect these types of issues, patch them, and, you know, sort them out. Other build tools? Well, some of them are just straight-up hard to use, and are not really developer-centric. If you've spent a lot of time with Jenkins, you've probably gone through some beating your head against making it actually build things in a reliable way; I know I have. And very few do actual hermetic operations, or really enforce that.
A: No, you must run this in a container, or in some environment that will not leak, or will minimally leak, things from your system. When we get into content addressing (and obviously we're talking about moving towards being able to do this type of stuff on IPFS), hashing becomes very important. If you want to be able to build something twice, and actually know it was built twice, you need to use hashes all the way through the system. And this morning we were talking about compute over data, at scale, in a distributed way.
A: So it's really a foundational element for being able to do those types of operations. From the security standpoint: we've been a little bit quiet on security applications thus far, just because, while they are interesting and they definitely exist and are relevant, they're not our first target market, I'd say. But there is the whole concept of a software bill of materials; they're becoming something that people care about quite a lot. There was the whole Biden executive order that was mostly about software bills of materials. And, yeah.
A: When you look at things like rootkits, things like "Reflections on Trusting Trust" and whether you can trust your C compiler, it can be quite difficult to do that on a modern system. At least we provide some way of freezing something you trust such that you can do it again. It doesn't fully solve all the problems, but it provides you with some features and some level of trust over the software you're building.
A: So, just to review: you can build stuff with Warpforge. We are building stuff with it today, as we try to build up system software and do some dogfooding with the tool internally, of course, and we're looking to build out that set of packages and also get more people on board.
A: Does it build exactly the same on my system and in my CI environment? That's not always a given. And you can also force other tools to be repeatable. This is one thing we've had with building certain tools: they aren't going to come out the same way every time, and we have to figure out what patching would have to be done to make them repeatable. So, of course, we're looking for folks to help with both.
A: Finally, we have a few URLs for you. The first one is the source code; it's Go; it's the actual tool itself. Below that, you have the Warpsys org, which contains both the code that's used to build the catalog for Warpsys and also the catalog itself. And then, lastly, you have that catalog site. If you haven't checked that out, it works lovely on both mobile and desktop, so you can take a look at what that looks like. One last thing we're missing from here: we do have a chat on Element, on matrix.org. So if you want to talk to some of us and you don't catch us in person, or for the people who can't be here, that's #warpforge on matrix.org. But otherwise, I think we have a little bit of time left, so we can open up to questions. Thanks again for coming out, and we're looking forward to talking to you as the conference goes on. Thanks.