Description
VMs & IPFS - use cases - presented by @jbenet at IPFS Thing 2022 - IPFS and WASM - https://2022.ipfs-thing.io
A
So I'm going to talk about IPVM use cases and system designs, very much inspired by all the conversations here. I want to make sure that we have specific system designs in mind as we go build this. There's a bunch of specific success stories from some distributed systems that we should make sure to include, otherwise we're going to have a bad time. I showed this in the morning, and we've talked about a lot of this stuff throughout the day, but just flashing it again: we had a discussion about the FVM from Stephen.
A
We were already planning to build what could eventually turn into IPVM, and there was a question of what to build first. The decision there was to build the FVM first, even though it's more specific than IPVM, which is more general, because it was needed more urgently and because it would give our teams a lot more knowledge working with Wasm: implement a system, learn a lot from that, and then kind of start again based on those learnings.
A
The story of LLVM and the success of compiler tooling — of moving many other systems into an intermediate representation and then targeting a bunch of chipsets — is phenomenal. We're going to see the same kind of success stories with Wasm. They're already happening, but it'll happen even more, and it'll help that Wasm will be able to leverage LLVM.
A
So you can just stick Wasm in here as a target, but there are a bunch more language front ends that can appear, including all of the zero-knowledge computation stuff, cryptographic computation, and machine learning things as well. Think of it: right now there are two large areas of computing that are yielding new hardware — one is ML, one is blockchains — and we're going to be generating lots of different specialized chipsets and so on. So yeah, I think this story is going to be really successful.
A
There is a single universal runtime, and you just have a part of it — that's a really key piece that impacts how you design the VM. If you build for one runtime connected to other runtimes, you end up building a bunch of different abstractions. Whereas if you design for one single universal runtime, one single universal memory, and you treat everything as if you only ever have a local patch of it — you never have universal visibility — that means you have to deal with distributed concepts from the get-go, right in the VM layer.
A
There's a lot of specific use cases around the compute-over-data tech, and all of them will relate tremendously to IPVM. So everything that everyone wants to do with all the compute-over-data stuff will end up calling out to IPVM.
A
One thing I want to mention from the hierarchical consensus model is that you can use that consensus layer, or parts of it, in the same way people use these other entities. At the end of the day, you need some log that is accumulating shared observations; you can have some computation, and then eventually snapshot some of that and checkpoint it back to some other thread. And so you can squint at this whole model and think of hanging CRDTs from the bottom of these consensus layers.
A
So you can think of having one local, regional consensus thing for when you want to use distributed consensus. If you don't want to do that, you want to use CRDTs. You can hang an entire CRDT tree from one of these things, but then you can use the consensus layer to do your snapshotting, or tombstoning, or, you know, compaction, and so on.
A
So you can make the CRDT model work: you can have local, disconnected computation and move really fast, while using an already really fast consensus in your local region — but you only use it as a signpost. CRDTs tend to need some system — a long-running, long-lived system that everyone's going to deposit things into — and you can use the consensus layers here for that. And so you can now finally bridge the blockchain smart-contracts world and the distributed consensus model with local CRDTs. Everything is an eventual-consistency model.
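A minimal sketch of that checkpointing pattern, assuming a grow-only-set CRDT and treating the consensus layer as nothing more than an ordered log of snapshot CIDs — all names here are illustrative, not a real IPVM interface:

```rust
// Hypothetical sketch: hanging a CRDT off a consensus layer by periodically
// checkpointing its merged head into a shared, ordered log.

use std::collections::BTreeSet;

/// A grow-only set CRDT: merging is just set union, so replicas can
/// diverge freely and reconcile in any order.
#[derive(Clone, Default)]
struct GSet {
    items: BTreeSet<String>,
}

impl GSet {
    fn add(&mut self, item: &str) {
        self.items.insert(item.to_string());
    }
    fn merge(&mut self, other: &GSet) {
        self.items.extend(other.items.iter().cloned());
    }
    /// Stand-in for hashing the serialized state into a CID.
    fn snapshot_cid(&self) -> String {
        format!("cid-of-{}-items", self.items.len())
    }
}

/// Stand-in for the regional consensus layer: an ordered log of checkpoints.
#[derive(Default)]
struct ConsensusLog {
    checkpoints: Vec<String>,
}

impl ConsensusLog {
    /// Only checkpoints go through consensus; day-to-day CRDT traffic does not.
    fn commit(&mut self, cid: String) {
        self.checkpoints.push(cid);
    }
}

fn main() {
    let mut replica_a = GSet::default();
    let mut replica_b = GSet::default();
    let mut log = ConsensusLog::default();

    // Fast, disconnected local writes on each replica.
    replica_a.add("todo: write talk");
    replica_b.add("todo: demo ipvm");

    // Eventual-consistency merge, no consensus involved.
    replica_a.merge(&replica_b);

    // Periodically anchor the merged head as a signpost in the consensus log
    // (a snapshotting / compaction point).
    log.commit(replica_a.snapshot_cid());
    assert_eq!(log.checkpoints.len(), 1);
}
```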
A
One thing that we wanted to get to — and we're dramatically closer today, but it's still a goal — is to be able to upgrade all these systems automatically, as Brooke said earlier, from within the runtime. So you want to be able to ship new versions of the system and have an upgrading system in place, and it is very much application-dependent, because your application will define the upgrade cadence or the trust model.
A
But you ideally want to enable all these groups to do really smooth upgrading of the system, in parts, by deploying new Wasm, or new bundles in general — could be Wasm, could be OS code, or whatever, right. So you could have a network that says: hey, there's this new DHT. If you want to be part of it, you agree to update your locally running DHT code to whatever the DHT tells you to update into, and that's just the contract.
A
But that applies to every single part, every single component. And because you have content addressing and hash linking and so on, you can start using the strong social trust models of the traditional open source world to establish what code is secure. Or you can do proofs and formal verification for areas where you really need it — in some areas where you really need to verify some software, you can go do that, use verifiable claims directly in the hash linking, and make that part of the runtime.
A
So all of this means that whatever we end up with should be a runtime that takes upgradability of the entire system completely into account — either swapping out modules that are running in a long-lived runtime, or swapping out the entire VM itself. So starting the way that Google Chrome got started — start with a very thin thing that just auto-updates, and then build the rest from there — could be an extremely good way to go. It's just that you don't have one channel, or one—
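One way to picture that thin auto-updating core, as an assumption-level sketch: modules are swapped by CID from a release manifest, so any peer can verify the bytes before hot-swapping. The manifest and loader shown here are invented for illustration:

```rust
// Hypothetical sketch: a thin core whose modules are swapped by CID.

use std::collections::HashMap;

/// A release manifest maps module names to the content address (CID) of the
/// Wasm bundle to run. Because CIDs are self-verifying, any peer can fetch
/// and check the bytes before swapping the module in.
struct Manifest {
    modules: HashMap<String, String>, // name -> CID
}

struct Runtime {
    loaded: HashMap<String, String>, // name -> CID currently running
}

impl Runtime {
    /// Swap in any module whose CID changed; untouched modules keep running.
    fn apply(&mut self, manifest: &Manifest) {
        for (name, cid) in &manifest.modules {
            if self.loaded.get(name) != Some(cid) {
                // In a real system: fetch bytes by CID, verify the hash,
                // instantiate the Wasm module, hot-swap atomically.
                println!("upgrading {name} -> {cid}");
                self.loaded.insert(name.clone(), cid.clone());
            }
        }
    }
}

fn main() {
    let mut rt = Runtime { loaded: HashMap::new() };
    let mut modules = HashMap::new();
    modules.insert("dht".to_string(), "bafy...v2".to_string());
    rt.apply(&Manifest { modules });
}
```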
A
Yeah — so we heard about the FVM architecture, which is a really useful model. There are a lot of things that are actor-runtime specific, which is kind of where we are headed. There's, of course, all the EVM- and blockchain-specific stuff that's not needed, but this is there — this is already an IPLD VM that exists. You can take that, rip out all the blockchain-specific stuff, and arrive at IPVM.
A
So this is why, earlier today, we talked about why Filecoin actors are not smart contracts — or rather why they're not called smart contracts: they're called actors. I just want to preface with: there's a long road here. Many of us have been working on all this stuff for a long time.
A
Thankfully, I'm not the only person who's been working on this stuff. The longest-working person I know of, at least — Michael — has been working on these distributed data structures a little bit longer, but on roughly similar timescales. Basically, a lot of us have been working on these things. Here are some snapshots from 2014, for me, of already wanting to get to this kind of thing — like emails that I sent out that would be like:
A
"Oh, I want an Erlang-style peer-to-peer VM that just runs all the things." Meaning: it's okay for these things to take a while, as long as we're making steady, high-momentum progress towards really good outcomes. But if we're getting stuck — things taking a long time is not an excuse for moving slowly. We should be moving fast in terms of arriving at successful things, but we can have really ambitious goals that take a while, right.
A
So you have to take those two in at once. I completely agree with the worse-is-better statement: you should constantly reorient, look at what has momentum, and use the things with high momentum to get through — and, wherever possible, try to patch those systems to become upgradable, so that over time you can whittle away the worse to become better. So: worse-is-better, to become best, right. You want, over time, to hook onto the high-momentum things and improve them.
A
A good example of this is Mark Miller joining the ECMAScript committee to harden JavaScript and get ocaps in there, yielding the entire VM isolate model that V8 ended up using, right. A lot of people worked on that stuff, and worked on it for many, many years, to improve the worse-is-better success case into a "best" type of system.
A
Let's talk about programming models for a moment. This is the C++ runtime model: you have random-access memory, you have a bunch of data structures laid out in some virtual memory, and you have pointers to a vtable. That vtable of pointers points to machine code that's going to execute some particular code — so the compiled output, compiled for the local machine, is linked from that virtual table — and your data structure has some inline data and then this virtual pointer.
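For concreteness, here is that layout written out in Rust, whose trait objects use exactly the data-pointer-plus-vtable-pointer scheme being described — this is just an illustration of the conventional model, not anything IPVM-specific:

```rust
// The C++-style runtime model: a data structure holds inline data, and
// dispatch goes through a vtable pointer into locally compiled machine code.

trait Shape {
    fn area(&self) -> f64;
}

struct Circle {
    radius: f64, // inline data
}

impl Shape for Circle {
    fn area(&self) -> f64 {
        std::f64::consts::PI * self.radius * self.radius
    }
}

fn main() {
    // `dyn Shape` is a fat pointer: (data ptr, vtable ptr). Calling `area`
    // goes through the vtable to locally compiled code — exactly the
    // machine-local link that a content-addressed runtime has to rethink,
    // since a CID can't point into one machine's address space.
    let shape: Box<dyn Shape> = Box::new(Circle { radius: 2.0 });
    println!("area = {}", shape.area());
}
```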
A
Now, that sounds expensive — you don't necessarily want to do a cryptographic hash every time you update a pointer; that sounds wild. So we have to figure out a way of doing it such that those links are implicit or local while you're making them, and at the moment that object gets passed to some other—
A
—to some other context, at that point you go through and serialize and hash. There are many systems that do this already — Clojure is a good example. So yeah, applying Wasm: think of expressing all the functions as IPLD, calling everything by CID, and so on. But the really key thing here is that you're dealing with many different states of data, so you're dealing with many different layouts of the data. So you don't have a universal, canonical way of describing all the things. Or actually, you can—
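A rough sketch of that "hash only at the boundary" idea: links stay as cheap in-memory pointers while you mutate, and are serialized and hashed into CIDs only when the object is exported to another context. All names and the toy hash are assumptions for illustration:

```rust
// Hypothetical sketch: local links collapse into CIDs only at export time.

use std::rc::Rc;

enum Link {
    Local(Rc<Node>), // cheap in-memory pointer while building
    Cid(String),     // content address once exported
}

struct Node {
    value: u64,
    children: Vec<Link>,
}

/// Stand-in for serializing a block and hashing it into a CID.
fn put_block(bytes: &[u8]) -> String {
    format!("cid-{:x}", bytes.iter().map(|b| *b as u64).sum::<u64>())
}

/// Walk the graph, collapsing local pointers into CIDs, bottom-up.
fn export(node: &Node) -> String {
    let mut encoded = node.value.to_le_bytes().to_vec();
    for child in &node.children {
        let cid = match child {
            Link::Local(n) => export(n), // hash lazily, only now
            Link::Cid(c) => c.clone(),   // already externalized
        };
        encoded.extend_from_slice(cid.as_bytes());
    }
    put_block(&encoded)
}

fn main() {
    // Fast local mutation: no hashing happened while building this graph.
    let leaf = Rc::new(Node { value: 1, children: vec![] });
    let root = Node { value: 2, children: vec![Link::Local(leaf)] };

    // Crossing a context boundary: serialize + hash the whole thing once.
    println!("exported root cid: {}", export(&root));
}
```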
A
Let me describe it another way: you don't have the exact same data layout for all functions, right. In C++, in this world, you compile a thing for a target architecture; you have a specific way that integers are laid out, and so on — and so that's why you don't have to worry about it there.
A
But in our case we have to worry about the layout of the memory being different across programming languages and different data structures and so on, independent of the code running over that data also being potentially different — different source languages that compile down to Wasm or something else.
A
Now, the Wasm story could be really good here, because it can unify a lot of stuff, but we would very likely still end up with very different memory layouts — because if someone's coming from Rust, it'll look different than if someone's coming from JS, and so on. So yeah: really approach the building of the VM as building a runtime.
A
One brief note on fat pointers and wrapper objects. This is kind of what we were talking about before, where there are different data structures pointing to different things. We could really go in a bunch of directions, and what we should do is what modern programming languages do, which is let you have both — let you have any of them. Your application will likely dictate one or the other, but build compiler tooling that then makes it really fast.
A
Build compiler tooling that lets you inline entire objects, and decide when a pointer becomes a CID and when that pointer just forces inlining of an object. If we have those facilities in the step of compiling some language into the structures that are going to go into IPLD, then we don't have to worry about all of this, or about our push to agree on one thing: we can just have a nice programming model that lets you do either one — and be fast about it, because "be fast about it" is a really key thing.
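A small sketch of that fat-pointer idea: a link is either an inlined object or a CID, with a simple size heuristic standing in for the compiler's decision. This is purely an assumption-level illustration, not a real IPLD codec:

```rust
// Hypothetical sketch: per-link choice between inlining and CID linking.

enum IpldLink<T> {
    Inline(Box<T>), // small objects get embedded directly in the parent
    Cid(String),    // large objects are hashed and referenced by CID
}

/// Stand-in for encoding a value and hashing it into a CID.
fn store<T: std::fmt::Debug>(value: &T) -> String {
    format!("cid-of({:?})", value)
}

/// Decide at encode time whether to inline or link, by estimated size.
fn link_to<T: std::fmt::Debug>(value: T, estimated_size: usize) -> IpldLink<T> {
    const INLINE_LIMIT: usize = 64; // heuristic threshold, tunable
    if estimated_size <= INLINE_LIMIT {
        IpldLink::Inline(Box::new(value))
    } else {
        IpldLink::Cid(store(&value))
    }
}

fn main() {
    let small = link_to([0u8; 8], 8);     // stays inline
    let large = link_to([0u8; 32], 4096); // becomes a CID
    match (&small, &large) {
        (IpldLink::Inline(_), IpldLink::Cid(c)) => println!("linked as {c}"),
        _ => unreachable!(),
    }
}
```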
A
I would dissuade folks from increasing the length of the pointers, just because we're kind of at the bounds of what many things out there can take — things like domain names and so on. It is possible to do these second-layer things — you have a short pointer leading to a large pointer and so on, and then sometimes you can use a large pointer — so that maybe could work. But I think you can probably still use that as the full object, and then maybe define a CID type that just inlines into it.
A
One important point here, too — this is what I was saying before: because those runtimes don't have codec pointers in their worlds, and we do, we can't just use the exact same memory models. Also, other things—
A
This is different: we have a codec pointer that points to both the codec — in order to read the data — and the functions that are going to operate on the object. Now, "functions" here could be a vtable, in the style of the C++ family of languages, or it could be something more complicated.
A
It really depends on what people want to do, but either way we do have codecs and functions being different. The difference there is that codecs are about the layout and encoding of the data, and the function — the actual program that you're going to run — is about how you process the thing. They will usually be coupled, but you don't want to force the little objects that have code to then also have to carry that codec, or inline the codec, or something like that.
A
I highly recommend that everyone here spend some time looking at Clojure. Clojure is an entire programming language and programming system built on top of persistent data structures. The really key thing here is that the whole stack has been architected from the ground up thinking about persistent data structures and the actor model and distributed computation and distributed runtimes and so on. So the whole thing is very much kind of what we're talking about—
A
It just doesn't have any hash linking and so on. But already the compiler does a ton of tricks, and the runtime does a lot of tricks, to speed up tons of things in the kind of way that I'm describing. So we can learn from how Clojure runtimes work, to then reason about how you could have an IPLD program running with local links — and the moment you take a snapshot of that thing and export it—
A
—then, at that point, you go and serialize, and you hash-link and you collapse, or you sign messages and so on, or you produce proofs, right. So you could have proofs that are computed only when you need them: you actually go and prove the thing only when you're going to externalize it and send it to somebody else.
A
You also want to write databases, right. So what's a database, in reality? It's just a bunch of low-level file system bits — a byte stream, or a number of them — plus a bunch of algorithms and trees. It's a bunch of weird trees and weird algorithms that run over byte streams; that's what a database is. And so if we can do all this stuff, then we can write databases on top of all these data structures, and we can tune for high-performance indices over the existing data.
A
One of the problems with content-addressed data is that once you make something and you hash-link it or you sign it, it tends to stay that way — you don't want to mutate it a lot. And so whenever you want a high-performance database that's laid out differently, you either end up copying the data or you need to link into it. So we can have a whole class of databases that point to the data—
A
—however the data is laid out already. All they do is pull the pieces and arrange them in memory how they need them — these are all copies — then build the indices that they need, all their weird trees, and then they operate on that data exactly as it is. So, right now the Filecoin network is ingesting a huge amount of data — over 100 petabytes right now — and it is laid out in, like, horrible ways.
A
You can just account for that: the need to get the data onto the network required choosing some format today that is not tuned to whatever use is going to come tomorrow. So you can already guarantee that whoever wants to use it tomorrow is going to want a different layout.
A
There are data sets where, you know, one data set got split into a bunch of files, and the files themselves have some other data structures, and that has to deal with limitations from Filecoin, which forced it to be split up along different sectors. So you end up with all the data itself being split and content-addressed in particular ways, and proved to be there. So you have all this information — but that's okay, because you can still access all that data.
A
You can still hash-link it, and you can write a program that says: point to these specific things, then pull them and copy them over here. And so you just grab all the things you need, and then you can have a high-performance database on top of that — and you know it's the exact same data, because you have the exact same links.
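A small sketch of that shape: the published blocks keep their CIDs untouched, and the "database" is just a rebuildable index that resolves queries back to those links. Everything here is an illustrative stand-in:

```rust
// Hypothetical sketch: a secondary index over existing content-addressed
// data. Re-tuning the index never changes the source blocks or their links.

use std::collections::BTreeMap;

/// An immutable, content-addressed record as it already exists on the network.
struct Block {
    cid: String,
    payload: Vec<u8>,
}

/// A secondary index: sorted key -> CID.
struct Index {
    by_key: BTreeMap<u64, String>,
}

impl Index {
    fn build(blocks: &[Block], key_of: impl Fn(&Block) -> u64) -> Self {
        let mut by_key = BTreeMap::new();
        for b in blocks {
            by_key.insert(key_of(b), b.cid.clone());
        }
        Index { by_key }
    }
    /// Range queries resolve to CIDs; fetching by CID then verifies that you
    /// got the exact same data the publisher hashed.
    fn range(&self, lo: u64, hi: u64) -> Vec<&String> {
        self.by_key.range(lo..=hi).map(|(_, cid)| cid).collect()
    }
}

fn main() {
    let blocks = vec![
        Block { cid: "bafy-a".into(), payload: vec![3] },
        Block { cid: "bafy-b".into(), payload: vec![7] },
    ];
    let idx = Index::build(&blocks, |b| b.payload[0] as u64);
    println!("cids with key in 0..=5: {:?}", idx.range(0, 5));
}
```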
A
What should the VM do to make this really easy — to make these kinds of systems really easy? The program probably needs some way of being able to natively pin graphs. So your runtime might want the ability to declare some set of IPLD data that you want to keep around, and pin it to memory pages, or pin it with whatever facility the OS has. And all of this will probably need many different processes running at the same time in the same VM.
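As a sketch of what such a hook might look like — a refcounted pin table shared by the processes in one VM. The trait and names are assumptions for illustration, not a proposed IPVM API:

```rust
// Hypothetical sketch: a "native pinning" host interface. Guests declare the
// IPLD graphs they need kept hot; the runtime refcounts the pins across the
// processes sharing the VM.

use std::collections::{HashMap, HashSet};

trait PinHost {
    /// Keep the graph rooted at `cid` resident (memory pages / local store).
    fn pin(&mut self, process: u32, cid: &str);
    /// Release this process's claim; blocks stay pinned while anyone needs them.
    fn unpin(&mut self, process: u32, cid: &str);
}

#[derive(Default)]
struct Runtime {
    // cid -> set of processes holding a pin (refcounted pinning).
    pins: HashMap<String, HashSet<u32>>,
}

impl PinHost for Runtime {
    fn pin(&mut self, process: u32, cid: &str) {
        self.pins.entry(cid.to_string()).or_default().insert(process);
    }
    fn unpin(&mut self, process: u32, cid: &str) {
        if let Some(holders) = self.pins.get_mut(cid) {
            holders.remove(&process);
            if holders.is_empty() {
                self.pins.remove(cid); // now evictable
            }
        }
    }
}

fn main() {
    let mut rt = Runtime::default();
    rt.pin(1, "bafy-dataset");
    rt.pin(2, "bafy-dataset"); // second process shares the pin
    rt.unpin(1, "bafy-dataset");
    assert!(rt.pins.contains_key("bafy-dataset")); // still held by process 2
}
```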
A
I'd also recommend looking at Datomic — sorry, the stuff on the left is not Datomic — which is a database written by the Clojure folks. They took all their data-structure knowledge and wrote a database with it, and so they've already solved all the database problems of how you build a really fast, good database with history and snapshotting and so on, in exactly our model. It's just that they don't do any hash linking — so we have a superpower they don't have.
A
However, you can probably look at how they solved tons of problems and write a database based on their model. All right, great — actors. So why actors, and why do I keep hammering on this?
A
The Erlang model is extremely good — and really this also applies to the actor model and the pi calculus, and, in general, to the process systems that separate out the execution runtime, define it to be local, and allow for machines to come in and go out. That's the right model for computation.
A
We just accidentally ended up in a different model — and that was very path-dependent. A model where you have a computing environment in which you can bring processes online and shut them down, and where you have addressing that makes sense across the board, is the right one. And you need to be able to send information from one process to another and know that that is an asynchronous operation — synchrony and asynchrony need to be defined at the process boundary.
A
So a process is the right unit of synchronization: when you send information from one process to another, that needs to be an asynchronous call, and you need to be able to deal with it in the programming language at that level. Funny story: blockchains were built decades after all of this and chose the wrong model. Blockchains are just stuck on invocations, because that's much easier to build — but in reality, a blockchain—
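A minimal sketch of that boundary rule — each process owns its state, and the only way in is an asynchronous message. Plain threads and channels stand in for an actor runtime here:

```rust
// "Async at the process boundary": sending never blocks on the receiver
// executing, and there is no shared memory between processes.

use std::sync::mpsc;
use std::thread;

enum Msg {
    Add(u64),
    Get(mpsc::Sender<u64>), // reply channel: even reads are messages
}

fn spawn_counter() -> mpsc::Sender<Msg> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut count = 0u64; // state is private to this process
        for msg in rx {
            match msg {
                Msg::Add(n) => count += n,
                Msg::Get(reply) => { let _ = reply.send(count); }
            }
        }
    });
    tx
}

fn main() {
    let counter = spawn_counter();

    // Asynchronous sends: these return immediately — no cross-process
    // stack invocation.
    counter.send(Msg::Add(2)).unwrap();
    counter.send(Msg::Add(3)).unwrap();

    // A "read" is also just a message plus an awaited reply.
    let (reply_tx, reply_rx) = mpsc::channel();
    counter.send(Msg::Get(reply_tx)).unwrap();
    assert_eq!(reply_rx.recv().unwrap(), 5);
}
```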
A
All a blockchain really is, is a transaction mail router in an Erlang pod, right. Every blockchain is just an Erlang pod that has a bunch of actors, and it's processing messages coming in and routing them to each actor. It's just that, for convenience, everyone said: oh, it'd be really nice to have this actor poke into the other one. And no one was there in the room to say — or maybe people were in the room and were not listened to — no, don't do that; that is not the right model.
A
Yeah, we probably could have averted the— or, yeah, I wish they had listened.
A
But let's do it right this time. In Filecoin, already, we are getting closer and closer. Filecoin is not quite the proper actor model, because all the consensus execution is still one machine, so we're closer to the vat model of Agoric, where a bunch of things are executing locally and you do have stack invocations there — but the moment you want to do any call across machines, you turn that into an asynchronous call.
A
So, let's talk about the vat model — it's basically message passing, plus... the vat model here is just the green layer in the Agoric stack. It's the kind of glue that brings a bunch of machines together into a runtime that allows you to address things across the board, and it gives you the facility to have local execution, know what your local machine is, be able to refer to remote machines, and then have message passing between these. Now, the rest of the Agoric stack—
A
—we should also learn a lot from, but that's a separate story — the ocap model is a separate piece. So I think really here we're talking about building the runtime that's going to go into the lowest layer, to create something like the green layer, and we can just learn a lot from this model. It's just that, in our case, we're doing it with hash linking, which they didn't do, and we're doing it with Wasm instead of SES.
A
It'd be great to just have a version of this with them, to mix ideas. The hierarchical consensus stuff, going back to that, already gives you a version of this, because it's going to have a bunch of different FVMs, and the message passing between the two is a cross-chain invocation, and anyone submitting a transaction has to send a message. It already really gives you this Erlang model.
A
It's just that you have high consistency within each one of those machines, and it does not admit machines outside — you sort of have this blurry boundary with the rest of the world, where messages come in from who knows where and you emit messages out into the world, but it's not routable across the rest of the system, or the rest of the runtime. But it already will give us a bunch of experimentation with these kinds of models. So: ocaps and UCANs.
A
I think it's very likely that we need to put capabilities into the VM itself — you should be able to deal with accessing interfaces in the VM using capabilities. What I mean by that is: imagine that you receive a program from some semi-trusted source and you want to give it access to some amount of resources. With processing power, you can meter it and you can use gas accounting, and we know how to do that.
A
But maybe you want to give it access to the file system. Giving access to the file system, you don't want to pass the full WASI interface — you want to give a very restricted subset of it. And so you want to pass a capability into that process and load in only the resources you want to give it. If you want to give it access to a GPU, for example, you might want to specifically decide which one, and not all of them. And so I think we need the capability model native in the runtime.
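A sketch of that invocation shape, assuming a hypothetical capability list that the runtime checks before exposing any host interface — nothing here is a real WASI or IPVM type:

```rust
// Hypothetical sketch: capability-scoped invocation of a semi-trusted program.

#[derive(Debug)]
enum Capability {
    ReadDir(String), // read-only access under one directory, not the whole FS
    Gpu(u32),        // one specific device, not every GPU
    GasLimit(u64),   // metered compute budget
}

struct Invocation {
    program_cid: String,
    caps: Vec<Capability>,
}

fn run(inv: &Invocation) -> Result<(), String> {
    // The runtime consults the grant list before exposing any host function.
    let fs_ok = inv.caps.iter().any(|c| matches!(c, Capability::ReadDir(_)));
    if !fs_ok {
        return Err("no filesystem capability granted".into());
    }
    println!("running {} with {:?}", inv.program_cid, inv.caps);
    Ok(())
}

fn main() {
    let inv = Invocation {
        program_cid: "bafy-image-resizer".into(),
        caps: vec![
            Capability::ReadDir("/photos/shared".into()),
            Capability::Gpu(0),
            Capability::GasLimit(1_000_000),
        ],
    };
    run(&inv).unwrap();
}
```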
A
There are already massive-scale systems that do this all the time — these are the underlying things that these platforms do, and browsers already do a version of this. You have a browser sandbox that is carefully built with capabilities and the capability model, to give you all of the components so that you can write into the file system.
A
So, let's sink the capabilities into the runtime. We should also be targeting — and have in mind as a test case — being able to model running different kinds of computing: not just lambdas, but all the way to VMs. So think about what it would be like to write a hypervisor on top of this VM, one that then runs a bunch of other OSes and has a full VM in that place.
A
And we could start instrumenting those things. Imagine if we wrote a hypervisor — like a standard Linux hypervisor — on top of IPVM; that would be smart. Then snapshot the file systems using IPFS, so you have CIDs of the entire VM snapshot, and you then deduplicate across all of its OS, and you can do snapshots of the memory model too. So you can actually snapshot the entire virtual machine at one moment in time, and then do the suspend-and-resume and mobile computing that Brooke was talking about.
A
You could write that entirely in the hypervisor and then run full VMs on top of this IPVM thing. So, ideally, we could just take an existing hypervisor and sprinkle it with IPLD magic, and then it could do this — but yeah, we'll see. Same thing for containers; containers are just a tweaked version of this.
A
I think in this particular case a lot of people are interested in sprinkling IPLD magic all the way into the containers and the Dockerfiles and so on. This is another case where we could have changed history, but we didn't: we could have gotten CIDs into Dockerfiles and so on early on — like, you should import by CID.
A
That's what people really like right now in terms of how to run large-scale computing, because it's super flexible, super fast, and so on — and, by the way, it's the easiest for us to work with, because the model is very, very easy for us to IPLD-ify. So potentially target not just Wasm: make sure that things work really well for the JavaScript world, and making sure that IPVM either is built on top of V8, or that there's a version of it on top of V8, is probably a really good target.
A
To learn from, at least — it's amazing how much you learn just from looking at images. Yeah — so remember, there's a bunch of cryptographic computing around the corner, and this cryptographic computing is going to need really nice integration into programming languages. Plug for Lurk, which is already kind of IPLD-ified and so on; we can start working with that one to see what it would look like to be able to write—
A
—those proofs. See if we can take Brooke's model and then write proofs in between the state executions — you execute some stuff and then write a proof of that entire execution. Having that as a demo would be awesome, and that would be a super, super cool environment, because you can now start scheduling a bunch of computation and so on, prove it, and submit the proofs. Now, FHE — fully homomorphic encryption:
A
A lot of people are not plugged into this — "this was 20 years away." Nobody tends to consider FHE seriously right now, because everyone thinks: oh, it's 20 years away. But that message — "FHE is 20 years away" — was written 15 years ago.
A
So guess what: FHE is five years away. And it really is — the benchmarks are getting very, very close. You have computing power improving on one end — the use of ASICs and actual hardware runs — plus the fact that there are breakthroughs in the protocols, and so we're getting into this converging state where, basically, in the next five years we'll have sophisticated FHE applications.
A
So let's get ready for that kind of thing, and start thinking about how these programming languages — there might be some programming languages that need to deal with this other kind of computation and blend it in. There's really good compiler tooling from Zama, which is Concrete — so go check that out and see how that might blend in. They're following the TensorFlow model.
A
So: the TensorFlow architecture of being able to write one Python program — write your machine learning code and leverage programming-language magic to take that section and compile it out to run on TPUs. That's exactly what you want to do: you want to leverage compilers and so on whenever you can. So here's a proposed path to IPVM. Let's build a working system with Wasm. Either we can back into it from the FVM — take the FVM and rip out all the non-blockchain parts—
A
Sorry — keep the non-blockchain parts, rip out the blockchain parts — or start from scratch with FVM ideas. It really depends on what Stephen and Rob think, and Melanie and the other folks working on it. And build for the Erlang or Agoric vat model — so think of distributed invocations and message passing and so on — and do that in the IPLD model.
A
So think of that entire part of doing the message passing as just writing IPLD graphs, and figuring out how to send out the messages and retrieve them and so on — and bring capabilities in at that layer. Don't let the capabilities be defined inside; bring them in here, so that we can reason about them at the program layer, so that when you invoke a program — when you issue a program to be run by the VM — you can express the capabilities that you want to give to that program.
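One way to picture that, as an assumption-level sketch: the invocation itself is just an IPLD-style record — program, inputs, and capability grants all named explicitly — so the whole message can be hashed, signed, and routed like any other data. All field names here are invented:

```rust
// Hypothetical sketch: an invocation as a content-addressable record.

#[derive(Debug)]
struct Invocation {
    program: String,     // CID of the Wasm module to run
    inputs: Vec<String>, // CIDs of the argument data
    caps: Vec<String>,   // capability grants, e.g. serialized UCANs
    nonce: u64,          // make otherwise-identical calls distinct
}

impl Invocation {
    /// Stand-in for encoding to canonical IPLD bytes and hashing to a CID.
    fn to_cid(&self) -> String {
        format!("cid-of({:?})", self)
    }
}

fn main() {
    let call = Invocation {
        program: "bafy-resize-wasm".into(),
        inputs: vec!["bafy-photo-1".into()],
        caps: vec!["ucan:write:/photos/derived".into()],
        nonce: 42,
    };
    // The invocation's CID can then be referenced from results, receipts,
    // logs, and so on.
    println!("invocation cid: {}", call.to_cid());
}
```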
A
If we have that layer there — and that should be, of course, an extensible layer, like UCANs — it'll make every other system that we write on top much easier to write, because the system will just be able to express what it wants the programming language to be able to run. And yeah, I think this time we need to write it with a language runtime from the get-go, and I think, given where Rust is...
A
One other thing here: if we do go and start experimenting with distributed-programming-language things, then really do it with multiple languages — no cheating here and doing one language, because you're bound to miss something. Pick two very different programming languages, have them target this, and have it work. Once you do that, you'll find a bunch of edge cases, like: oh, the memory model of this thing has to convert into this other thing—
A
—when you invoke the function. That's weird — okay, how are we going to do that? And so, thankfully, we can look at wasm-bindgen and all the Rust-and-Wasm stuff, because that's already doing a lot of that, so we basically have to solve the same problem. But in our case — they might actually have all the exact same problems, I'm not sure; it's worth looking into. It could be that we can just use wasm-bindgen and similar, but maybe not.
A
Cool, all right — how are we doing? Fluence — I don't know where the slide is. The other thing I was going to say is: imagine the Fluence diagram here. Go see how their distributed programming language works, because you want something kind of like that.
A
You want to be able to express programs in one file that end up running on multiple machines — but we want content addressing. So maybe the Fluence VM — the Aqua VM — can learn to have IPLD, which would be pretty sweet, and it could become a really good substrate for this. Or we need to — and, by the way, capabilities:
A
I think the Aqua VM also needs capabilities, so think of sinking capabilities into the VM itself — the full ocap model there would be great. And yeah, so I think: have that as a model. And I think once we have some working systems and some test use cases—
A
If we can define some concrete test use cases — specific programs to run — then we can actually build a thing that works for them. So I would propose having a toy database as one use case: think SQLite, or maybe a distributed little PouchDB-style thing, with three little nodes that are going to sync a to-do app or something, and you want to allow those PouchDB things to run on three instances of IPVM.
A
And then you have some other use case — like, I don't know, take Quake: pick up the server for Quake and run it on IPVM. And then maybe, I don't know, a full operating system — like, I don't know, Emacs — and you can try running Emacs on top of IPVM and see how it works. Cool, that's it: IPVM use cases and system designs. Hopefully I gave you some ideas.
B
I have a question about the capability model. In WebAssembly there's an already-merged proposal called reference types — do you know about it, or can I explain a bit? So, reference types is a proposal that allows you to pass values into a WebAssembly module, and these values could represent, for example, file descriptors and stuff like that — sockets, etc. These values are unforgeable from the WebAssembly side. Then imagine that you want to call a WASI import and, for example, write into your file: for the write syscall—
B
—you need to provide the file descriptor, and this is how reference types could be used. So, when calling WASI from a WebAssembly module, you provide this opaque value in the import. And my question is: do you plan to build your capability system based on reference types, or would it be something different?
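To make the property concrete, here is a plain-Rust sketch of what an externref-style handle gives you: an opaque value the guest can hold and pass back, but cannot mint itself. The module boundary below just models the real Wasm host/guest boundary; it is not wasmtime or any real embedding API:

```rust
// Modeling reference-type unforgeability with Rust's privacy rules.

mod host {
    /// Private field: only the host module can create handles, mirroring
    /// "unforgeable from the WebAssembly side".
    #[derive(Clone, Copy)]
    pub struct FileHandle(u32);

    pub fn open(path: &str) -> FileHandle {
        println!("host opened {path}");
        FileHandle(3)
    }

    pub fn write(h: FileHandle, bytes: &[u8]) {
        println!("host wrote {} bytes to fd {}", bytes.len(), h.0);
    }
}

/// "Guest" code: it can use a handle it was given, but `host::FileHandle(..)`
/// cannot be constructed here, because the field is private.
fn guest_save(h: host::FileHandle, data: &[u8]) {
    host::write(h, data);
}

fn main() {
    let h = host::open("/tmp/out.bin");
    guest_save(h, b"hello");
}
```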
A
We totally could — it just sort of depends on how extensible this is. A lot of Wasm is very extensible, which is great. I had seen this before; I haven't dived really deeply into it. My sense is that it's great for us that the Wasm community already has to deal with the problem of moving around functions calling each other from these programs, so they're going to solve a bunch of the hard problems. It's going to be—
A
—it's going to be great. However, where you pass in the IPLD data model it's different, so we have to make sure that works out. And then you can, whenever possible, pass in things with CIDs as references, so that you can leverage the consistency guarantees that gives you, and the deduplication — because a lot of the time you might be passing references to objects, and you don't want to copy the whole thing if you already have access to it somewhere else. And the second thing I would say is—
A
I don't think they have capabilities here — they would probably have mentioned it in this post. It would be great to get them to enhance this with capabilities, or to have some other version of it, and this is a great thing to get the entire community to come help them figure out.
C
Reference types can effectively be runtime capabilities, but we would probably want something we can actually store somewhere. They're great for "I need a file handle," but they're less great for "I now have access to this thing, I need to keep it and maybe transfer it to someone else," or whatever — that's one of the tricky parts there.
D
So we could put capabilities on the outside of the container, or the module. And — I just want to get this in, because this is getting recorded, and to save people some pain — people will read about ocap and then say: oh, this is kind of like having a file descriptor. The major difference is that, with ocap, I should be able to turn that file descriptor off, completely. And so you tend to put essentially a proxy in the middle, where your file descriptor is writing to something—
D
—that's not the actual file — it just bounces the writes through — and then you can shut off the proxy. So yeah, this stuff definitely isn't that rich, but we could make it do that, because all that Wasm knows about is byte streams, and so we could add that on the outside if we really felt like it.
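That proxy-in-the-middle is the classic revocable-forwarder (caretaker) pattern. A minimal sketch, with an in-memory slot standing in for whatever the runtime would actually use:

```rust
// The caretaker pattern: the grantee only ever holds the proxy, and the
// grantor can cut the wire to the real target at any time.

use std::cell::RefCell;
use std::rc::Rc;

struct File; // stands in for the real resource
impl File {
    fn write(&self, data: &str) {
        println!("wrote: {data}");
    }
}

/// The proxy forwards writes while the slot is occupied; emptying the slot
/// instantly disables every holder of the proxy.
struct RevocableFile {
    target: Rc<RefCell<Option<File>>>,
}

impl RevocableFile {
    fn write(&self, data: &str) -> Result<(), &'static str> {
        match &*self.target.borrow() {
            Some(file) => { file.write(data); Ok(()) }
            None => Err("capability revoked"),
        }
    }
}

fn main() {
    let slot = Rc::new(RefCell::new(Some(File)));
    let grantor_switch = Rc::clone(&slot);
    let proxy = RevocableFile { target: slot }; // hand only this to the guest

    proxy.write("hello").unwrap();       // works while granted
    *grantor_switch.borrow_mut() = None; // grantor flips the off switch
    assert!(proxy.write("again").is_err());
}
```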
A
One really important thing with capabilities is that you need the ability to pass them by value. A lot of file descriptors are conditioned on the process — the OS will hand the process a file descriptor that's only callable from within that process — whereas with capabilities, as part of it, you need the ability to embed the thing into an object that you pass to some other party, and that other party can then invoke that capability fully. For this I would recommend looking at Cap'n Proto as well.
A
That's a completely ocap-inspired model that built a gRPC-style framework for service definition. So there's some model here, and a lot of this is done in the language, so all the capabilities are implicit.
A
You don't see it here, because the way the thing is described is meant to enforce the handing of capabilities by defining the language and what you can call — the same way that Hardened JavaScript works in the Agoric stack, where you lean on not being able to just have references to the things you're not supposed to call. You pass the capabilities of the bearer in by handle — sorry, directly into the language — and, ideally, you should be able to have a process that copies that.
C
The problem there is that that relies on cryptographic stuff, but in a lot of systems you're running in, that won't work — like on a blockchain you can't, because you don't have secret material. And even on your local system, you honestly often don't want cryptographic capabilities in this stuff. They can be useful, but not always. In this case you could use reference types, as long as you had some way to say: oh, I see that deep inside your IPLD data structure you wrap—
A
But a concrete example: you have an object in WNFS, and one program — say, a drawing program — wants to write out the output image into a file, to share with some other party, and as part of that it needs to run another program—
A
—that's going to take that file as an input and rescale all the images into a bunch of versions or whatever. And so you want the drawing program to be able to hand the capability of rewriting those objects — or writing derivative objects, and being able to read them — to this other program. And you want that to not happen just as an invocation within the same process: you don't want the drawing program to have to run the other program itself, so that it inherits all its capabilities.
A
It needs to be able to send a message to another program — which may be long-running — that's going to do this, and that program needs to be able to read and write whatever was given to it by that capability. So that's a case where you do want the cryptographic capabilities in the file system layer — "file system" here being, you know, the persistent-data-structure file system, not the underlying host OS.
E
Yeah — so capabilities have more to do, I think, with the actual host capabilities: whether or not the WebAssembly module has access to the network directly, or the file system, and so on. So it's more like the host description, and less about what permissions someone else has. — There's both.
A
Which is object capabilities — which do mean cryptographic capabilities. I think, like, E-rights are different.
A
Sorry — oh, you mean the ocap browser capabilities, I guess.
D
Yeah — there are many ways of doing this: there's locality-based, and then there's cryptographic-based; both work. The ability to shut things off requires locality, otherwise you're in something that looks more like SPKI — simple public key infrastructure — which is what UCAN is, as opposed to full-blown ocap, which requires locality. So it depends on your use case, right — there's a spectrum here that you may want to—
D
If you want something to be really fast and totally distributed, and to not need to go to the other side of the planet because that's where the one true pointer is, then you use something like UCAN. And if you want really fine-grained control, where you can at any time shut things off in a synchronous fashion — so without eventual consistency — then you go up to something like the Agoric model. So I don't think it's one or the other; they both have trade-offs.
C
So, for example, on Filecoin we could implement ocap as an actor — each actor could be an ocap object — yeah — or it would have to be managed by an actor. We've also looked into implementing more basic capabilities, where you can't shut things off, and we can do that through funky things with IPLD, where we have specific types of objects that can't be created by you — that can only be created by the system.
C
But then this can give you a link, and you have to give another actor a link, and that kind of thing — and that allows you to have more basic capabilities. The problem is, I can then copy this capability around as much as I want, and there's no way to turn it off, or say "use once," or anything like that.
C
Sorry — so that means you could shut off a part of the capability; you can limit it. But the difference between that and "shut off"... I assume you mean shut off as in: I call it — I could call it once.
D
So, to have that multiple-chain proxy, right — that's in this liveness-requirement world, where you have to go through these proxies. Yeah, exactly this, where Alice references out to Bob, who references out to Mallet — I can't remember this exact one here.
D
On the line is the actual capability that's getting moved around, with a pointer to the thing that's getting shuffled. Actually, these diagrams are great — they take a little bit of getting used to. So yeah, in this one, Alice has a direct pointer to Carol and is sharing this reference with Bob. Bob never gets a direct pointer to Carol. Alice has this pointer because Alice created Carol — by parenthood, let's say: she literally created this object, Carol — and so we'll shut off this foo capability in between. Or, well—
D
Actually, there was probably something in between that you actually shut off, but that requires locality, right. So this gets us into full-on distributed systems, FLP-impossibility-like stuff, right. So to do that you need liveness, right — yeah — because if I want to shut that off, it has to be synchronous.
D
Yeah — but then it gets into the cryptographic model and eventual consistency.
A
There's a great diagram somewhere here that shows the capability chaining, where what you do is hand out a capability that goes through some intermediate node, and then you can always turn off that capability there. So whenever a message gets sent — you hand out capabilities to send a message to a thing that's going to send a message — and you build the off switch there.
A
But — so, capabilities are really cool. Capabilities are awesome; we should all learn about them, discuss them, and invent new ones, and so on. But let's go back to VMs and the other talks, because I'm already running very late. Any other... okay, good. Yes.
C
Sorry, I got distracted. A couple of problems I want to motivate as well: local data. One of the problems with IPLD is that if you want to save your thing, you have to encode it in a format that's not always efficient, and you also do a lot of hashing.
C
So what I'd really like is some kind of local format that uses, literally, a local oracle to create the CIDs — but they're only valid locally — and then use a local format that you're not really going to share on the network, and you get back some kind of wrapped CID that you don't share. But then, if you actually want this to be used, you can lazily say: okay, make this network-transparent. The data is all the same — the structures are generally the same, or not:
C
The data is not exactly the same, but it's the same, I guess, structurally — and then you basically re-encode and actually materialize it lazily when you need it. This would let us have much better data systems, where you can, for example, have mutable files and stuff like that, where you can constantly write at normal speeds and then only materialize the fully hashed, encoded thing when you need it.
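An assumption-level sketch of that local-oracle idea: while data stays on one machine, links are cheap locally minted IDs (no canonical encoding, no hashing); only when the graph is shared do you re-encode and mint real content-addressed IDs. Names are invented for illustration:

```rust
// Hypothetical sketch: cheap local IDs, lazily materialized into CIDs.

use std::sync::atomic::{AtomicU64, Ordering};

enum Id {
    Local(u64),      // minted by a per-node counter: fast, not shareable
    Network(String), // real CID: canonical encoding + cryptographic hash
}

static ORACLE: AtomicU64 = AtomicU64::new(0);

/// Cheap local mint: just a counter bump, valid only on this node.
fn mint_local() -> Id {
    Id::Local(ORACLE.fetch_add(1, Ordering::Relaxed))
}

/// Expensive path, run lazily at share time: canonicalize + hash.
fn materialize(id: Id, bytes: &[u8]) -> Id {
    match id {
        Id::Network(_) => id, // already network-transparent
        Id::Local(_) => {
            let hash: u64 = bytes.iter().map(|b| *b as u64).sum(); // stand-in hash
            Id::Network(format!("bafy-{hash:x}"))
        }
    }
}

fn main() {
    // Thousands of fast local writes, each minting a throwaway local id...
    let draft = mint_local();

    // ...and one lazy materialization when the object leaves the machine.
    if let Id::Network(cid) = materialize(draft, b"final bytes") {
        println!("shareable cid: {cid}");
    }
}
```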
C
Another thing I want to note: a lot of the stuff people want to do involves lots of small objects that are independently addressable. This is just hard for IPLD; we need to get better at it. A lot of this is just tooling, and better data stores and better transports.
C
One thing we don't support is sub-addressing, where we can point at the full object but often can't point at something inside of it. This can be a bit of a problem: you have a byte slice, and you want to point to a sub-slice. You can handle this at the layers above — and we usually just do — but having systems and abstractions for that would be really helpful. We already talked about capabilities.
C
Oh, sorry — in terms of the FVM: there's a question of whether to use the FVM or not. It has a lot of really good learnings, but I would also start out really focusing on codecs and linking and stuff like that, because in the FVM we've very much taken a Filecoin approach — sorry, a blockchain approach — of: we need this to be very deterministic and very, very fast.
C
We have a lot of good ideas there, but also everything's block-oriented. I would like to see more around the composable codec stuff. I'd also like to see more about linking IPLD modules together — sorry, Wasm modules together — with IPLD links. We do some of that, and it's stuff we want to do, but—
A
Within these VMs, though — you know, that probably will take a while. However, for IPVM, for what we want to do, we really need a model where each one of these gets its own independent tick and gets scheduled separately.
D
Another project that might be interesting to explore for getting parallelism — and for determining where the boundaries of the things that can be parallelized are — is Bloom. Peter Alvaro and this whole group have been working on this for like 15 years. That's at the programming-language and runtime level, but it would be really nice to support that — just ship that as a Wasm blob, all those capabilities as a Wasm blob — and then, yeah, with the top and bottom [of the lattice], right.
A
Yeah, yeah. I guess a question for the Fluence folks: how much have you guys looked at Bloom, and have you learned from it? No? It might be useful — when I saw your language, when I saw Aqua, it reminded me very much of Bloom, and I was like, oh, excellent. Cool, thanks.