From YouTube: IPVM - @expede - IPFS and WASM
Description
No description was provided for this meeting.
A: IPVM: the long-fabled execution layer, or maybe the easiest way to run Wasm everywhere, or maybe the fastest way to ship IPFS features to users, or a step towards an interplanetary OS. Okay, take your pick. I'm going to cover a bunch of stuff; hopefully we'll have time. Actually, I really want to have time at the end for questions and discussion.
A: I think it could get spicy, which should be great. This is super early days; this isn't deeply built or designed yet. It's mainly in the requirements-gathering phase: can we all align on what this thing needs to be?
A: This gets you things like transparent IPFS node upgrades, like on the web, where you just load your software and it gets upgraded for you; you don't have to do anything. That's controversial just to get started, but if we're going to talk about moving really fast, sometimes doing things like this is an advantage. The web is by far the best distribution system that we've ever built as a society.
A: It can support features like Autocodec, which is the next talk, so stick around for that one. You can compute without required consensus: you could still do consensus, but it isn't required. So you don't have to go out to, say, the FVM; you can run things yourself.
A: This gives us a bunch of things. With content addressing, data is becoming ubiquitous. That's awesome; I want compute to be that awesome.
A: You can get full consistency between clients and deduplicate work. And, the one I'm really excited about: we can create the HTTP of compute. No more proprietary lambdas: you push these jobs out, people pick them up, and there may or may not be a payment layer, in the same way that IPFS may or may not have a payment layer on it.
A: So what do we actually need this thing to do? It definitely needs to be portable. It needs to be deterministic, and probably verifiable by default, with the ability to turn that on and off. There are open questions: does this need to be completely pure? Does it do managed effects? If I'm running completely untrusted code, then I probably have it really locked down; if I'm running something from within my team or inside my organization, maybe I can give it a little bit more power.
A: The nice thing is that Wasm, and WASI in particular, have really nice interfaces for doing these kinds of things, and maybe that ends up exposed in a manifest: these are the resources we're going to plug in. Effects are always scary, because if you rerun them, the consequences depend on the kind of effect. Some effects are fully safe to run again: say I'm going to do a network call and pull something in by content address.
A: Or if you have some encrypted content, something like that. There's also moving your compute to data: because code is data, we can package up a Wasm module. Moving compute to data, or data to compute: often you want to take your Wasm module, actually push it across, and have it run over there. We'll talk about that a little bit more in a moment. But even as we're discovering how operators run these things, for data you want to have push and pull, and sometimes you want to be able to push and pull things to a specific machine.
A: So both push and pull are important here, especially if you're doing matchmaking: if they're going to run the compute for me, then you want it to really only go there, and then you need some kind of permissions model. I'm not biased at all: you could use a UCAN for authorization, for example.
A
Adoption,
so
I
know,
arguably
a
major
reason
to
use
wasm
is
that
it's
gaining
huge
amount
of
adoption
tooling.
All
of
these
things.
It
appears
to
be
the
future,
but
it
also
it's
getting
adoption,
because
you
can
bring
your
own
language
and
in
this
case,
so
we
should
learn
from
that
right.
We
should
definitely
support
all
of
the
you
know
various
languages,
because
you
know
it's
wasm,
but
also
support
common
patterns
right
like
build
packs
or
cron
jobs
or
manifest
and
learn
lessons
from
those
and
make
this
familiar
and
easy
to
adopt
right.
A: We could have the coolest system in the world, but if nobody's using it because you have to read a book beforehand, it doesn't matter. It has to be super, super easy to get up and running with and use, and be useful. It has to be substantially better than what people have today. I think having Wasm execution where you can literally just say, "yeah, run this function over that thing" is already amazing, but then this should be able to plug into systemd, cron jobs...
A: All of that stuff. As I pull stuff down, maybe I want to process it; maybe I want to build indices, all of this stuff. So how we actually write that up in a familiar way is important.
A: Deep integration with the system: we want to lean into content addressing. It's not just an S3 bucket; it's special. It does things in a particularly good way: it deduplicates, you can grab content from a bunch of peers, you can write things back, all of this stuff. It needs to be more than the sum of its parts: more than just compute, more than just data.
A: There needs to be some deep integration between these. You'd have remote and local execution, so I should always be able to run everything myself locally or push it somewhere else. And we should reuse this community's experience with Wasm in things like the FVM, Aquamarine, Cloudflare Workers, web3.storage, IPFS, all of this stuff. We've done a bunch of exploration here already; let's just start grabbing some of it, seeing what the common patterns are and how they're used. There are always trade-offs.
A: So this is a trade-off triangle, a pick-two-of-three: you can have performance, verifiability, or privacy, but not all three.
A: Your centralized cloud providers, your EC2 and Lambdas, sit in the bottom right corner. As you go towards verifiability, things get slower, because they have to: you're doing more work. As you go up towards privacy, you're also losing performance, again because you're doing more work. Having both of them is one of those cases where you're really doing a lot of work; maybe we'll get there eventually, but probably not today.
A: We don't want to turn any of these options off, and that's another nice thing about a reasonably low-level VM like Wasm: as things get more support and as the algorithms get faster, you can totally do SNARKs or whatever, but they're not required by the entire system. You can plug those in as modules.
A: And an ask I have for the end of the presentation (don't get me wrong, I'm still going): we should talk about anti-goals, so think about those as we go. So: execution as IPLD, or interplanetary linked invocations, or whatever cute name you want to put on it.
A: This is a description of the job and the results. So not just "I want to run this module with these arguments", but also collecting what came out of it. We need an index or names, something human-readable, possibly for later lookup: what was the result when I asked for this job to be done? And streams of results per machine.
A: I have an entire slide for that later. Very roughly, this looks something like the following. Here's the IPLD: we have the root node. It has arguments, which is usually just a byte array, but sometimes maybe we want to break that up into actual readable arguments and an API.
A: We have the Wasm blob, and then some configuration: scheduling config, something else in there. When this finishes, or when this goes into the queue, we have this output, and then results: at minimum results, and possibly more data such as stats, how long it took to execute, whether there's anything else that needs to get run after this, whether it was suspended, anything like that. All this extra information. And we want to connect them together in a tree like this, for a bunch of reasons.
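The job-plus-receipt tree described above might be sketched as nested maps with hash links. This is a hedged illustration only: the field names, link shape, and toy `cid` function are invented for this sketch, not the actual IPVM format.

```python
import hashlib
import json

def cid(node) -> str:
    # Toy stand-in for a real CID: hash of the canonicalized node.
    return "bafy-" + hashlib.sha256(
        json.dumps(node, sort_keys=True).encode()
    ).hexdigest()[:16]

# Job description: arguments + link to the Wasm blob + scheduling config.
wasm_blob = {"bytes": "<wasm module bytes>"}
job = {
    "args": {"input": "<byte array or structured arguments>"},
    "module": {"/": cid(wasm_blob)},   # IPLD-style link to the module
    "config": {"scheduling": "default", "timeout_ms": 5000},
}

# After execution, a receipt links back to the job and records
# results plus the extra stats mentioned in the talk.
receipt = {
    "job": {"/": cid(job)},
    "results": {"output": "<bytes>"},
    "stats": {"exec_ms": 12, "suspended": False},
}
```

Because every node is addressed by the hash of its contents, the receipt's link pins exactly which job description produced it.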
A: Schedulers, events, job streams in general. This is one way to do it that happens to work nicely when you want to move things around or run them locally; there are other models. But as a general version: you have a base event stream that says "I'm pushing you jobs", and then streams that handle pure functions and pure effects. A pure effect is something like "go into IPNS, read me the latest version of this thing, compute over that". So it's always getting baked down into something that's actually pure.
A: Obviously the pure-function case is very easy. Pure effects have this problem where: okay, I've computed, I've got it baked down, I'm going to hop forward. But actually I didn't; my IPNS was out of date. I need to run this thing again, and I don't want the stale result mapped to the name that I gave the job, so I need to roll back over it. Because it's pure, we can do that; that's actually completely fine.
A: Then there's scale. In an ideal world, as you add more machines in parallel, you get more scale, and it's linear. Unfortunately, that isn't how this works for basically any job, because you have diminishing returns, and beyond that it gets worse.
A
You
have
the
universal
scaling
law
which
says
that
if
you
need
any
coordination
between
these,
you
you're
waiting
for
something
coming
off
of
another
queue,
you're
waiting
for
the
next
step.
You
did
optimize
optimistic
execution
and,
oh,
that
was
actually
the
wrong
argument.
You've
got
to
go
back
right.
All
of
these
things.
You
actually
lose
performance
as
you,
after
a
certain
point
with
your
parallelization,
so
you
actually
need
to
keep
in
this
smaller
range,
if
you're,
keeping
all
of
your
outputs
and
you're
in
a
pure
environment.
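The universal scaling law mentioned here can be written down concretely. With a contention coefficient sigma and a crosstalk/coherency coefficient kappa, throughput for N machines is N / (1 + sigma(N-1) + kappa·N(N-1)); past some N it actually decreases, which is the "lose performance" point above. A quick sketch, with coefficients made up purely for illustration:

```python
def usl_throughput(n: int, sigma: float, kappa: float) -> float:
    # Universal Scaling Law: linear speedup eroded by contention (sigma)
    # and coordination/crosstalk costs (kappa).
    return n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

# Illustrative coefficients: 5% contention, 0.2% crosstalk.
xs = {n: usl_throughput(n, sigma=0.05, kappa=0.002) for n in range(1, 65)}
peak = max(xs, key=xs.get)

# Throughput peaks at a modest machine count, then declines.
assert xs[peak] > xs[64]
```

With these (invented) coefficients the peak lands somewhere in the low twenties of machines; adding the sixty-fourth machine yields less total throughput than stopping near the peak.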
A
You
can
do
adaptive
optimization,
essentially
hotspot
jits,
the
entire
vm
globally
across
the
entire
planet.
Right
now
how
useful
this
ends
up
being
in
practice.
It's
kind
of
an
open
question
right.
How
many
functions
can
we
grab
the
intermediate
output
of
and
feed
that
into
other
systems
right,
and
this
has
been
tried
in
various
languages
like
haskell?
A
Famously
does
some
of
this
like
not
not
to
a
huge
degree,
but
some
of
this
to
to
get
performance
right,
but
it'd
be
really
interesting
to
see
this
at
full
scale,
because
some
things
you
know
there'll
be
a
power
law.
Some
things
will
get
reused
all
the
time
and
then
we
can
just
cache
them
locally
and
never
run
that
compute
again
ever
right.
The
other
thing
you
can
do
is
if
you
notice
something's,
really
like
literally
hotspot,
optimization,
some
wasn't
blob
is
really
popular.
A: This is essentially kind of like a suspend/resume mechanism, which Lurk is playing around with. For those not familiar, Lurk is a Turing-complete SNARK system that externalizes its internal running state, does the proof, and then feeds that into the next step: essentially suspend, prove, and continue.
A: The last one, which is always controversial (I was chatting with some people in the compute-over-data group yesterday), is managed effects, which I don't think a lot of people have actually run into in a distributed-systems context before. It's explained in a few places, but essentially: you do as much of the pure computation as you can, and then you output a description of what you want to have done.
A
You
say,
send
email
to
this,
you
know
here's
the
body,
here's
the
address,
and
then
that
goes
to
somebody
who's
going
to
execute
it.
They
do
the
effect
which
is
completely
off
system
right
like
hopefully,
this
actually
got
run.
We
have
no
way
of
checking
that
you
know
in
the
general
case,
and
then
that
comes
back
down
and
we
write
into
the
stream.
This
is
the
result.
This
is
what
happened
and
then,
if
somebody
is
looking
at
this,
you
know
these
trees
later
of
the
execution
of
what
happened.
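A minimal sketch of that managed-effects loop, with every name invented for illustration: the pure step returns a description of the desired effect instead of performing it, a runner performs it off-system, and both the request and the (unverifiable) outcome get appended to the stream.

```python
def pure_step(address: str) -> dict:
    # Pure computation: builds a *description* of the desired effect.
    # It performs nothing itself, so re-running it is always safe.
    return {"effect": "send_email", "to": address, "body": "job finished"}

def run_effect(desc: dict) -> dict:
    # The runner actually performs the effect, completely off-system.
    # Here we just pretend it succeeded; in general we can't verify it.
    return {"effect": desc["effect"], "status": "dispatched"}

stream = []  # append-only log: what was requested, what reportedly happened
desc = pure_step("alice@example.com")
stream.append({"requested": desc})
stream.append({"result": run_effect(desc)})
```

The split keeps the deterministic, replayable part (building `desc`) separate from the one part that must only run once.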
A: So: content-addressing IPFS itself. Shipping around Wasm modules that do IPFS-y things, which we can integrate deeply into these nodes. Possibly not for everything: if you have something that's really heavily performance-critical, then maybe you don't do this part. But for shipping updates, shipping bug fixes, having new codecs: again, we'll talk about Autocodec in the next session.
A
New
kinds
of
cryptography
bug,
fixes
and
especially
sharing
effort
between
projects
so
that
we're
not
rewriting
the
same
code
end
times
would
be
really
nice
and
that's
not
to
say
that
you
know
there's
anything
wrong
in
having
a
go
and
a
rest
implementation
of
something
and
for
critical
portions
of
your
code.
That
totally
makes
sense.
A
But
if
you're
trying
to
build
a
new
implementation
from
scratch
or
you're
trying
to
implement
a
new
feature,
this
might
be
at
minimum
a
nice
place
to
start
so
that
you
can
ship
the
feature
today
and
then
optimize
it
later.
And
if
you
have
a
wasn't
vm
guaranteed
for
the
user
execution.
That
means
that
you
definitely
have
it
around
to
put
in
the
middle
security
when
you're
pushing
around
things.
That
can
actually
run.
This
is
a
bigger
security
question
than
than
data
right.
A
Data,
at
least
is
static,
maybe
you're
to
get
the
wrong
data,
but
it's
not
like
you're
going
to
spend
a
bunch
of
cpu
cycles
or
you
know
something
right
or
or
even
trust
that
the
computer
is
being
done
correctly
right.
It's
like
you
know.
I
I
want
you
to
apply
a
photo
filter
to
this
image
and
it
just
replaces
the
image
right
like
you.
A: You don't want things like that, so you need either verifiability or some kind of trust model. In general, when you're doing things in a distributed system where you don't want centralization, capabilities-based systems are essentially where it's at. So yep, there's UCAN, and that's great if you have something that's offline; it does SPKI ("spooky"), which is a subset of ocap.
A
Now
downside
for
ocap
is
you
have
to
be
online
in
the
general
case
for
it?
So
there's
a
spectrum
here
of
how
online
or
and
offline?
How
much
am
I
interactively
doing
some
things
with
somebody
else,
and
you
know
how
much
am
I
willing
to
verify
as
I
go
along?
Maybe
I
don't
want
to
do
any
of
this,
because
this
is
running
on
the
you
know
the
company
intranet
and
we
don't
care
or
I'm
working
with
completely
untrusted
peers,
and
I
I
want
to
go
to
full
full
ocap,
I'm
going
to
put
things
in.
A
You
know
all
of
these,
these
concepts
that
we'll
have
to
get
familiar
with
from
from
the
from
that
whole
world.
So
if
you
go
to
eric.org,
there's
tons
of
writing
on
this
right,
like
you
know,
vats
and
objects,
and
all
of
this
stuff
right
and
yeah,
so
there's
that
also
mobile
computing.
So
I
have
some
compute
running
on
my
phone
and
I
say
this
is
taking
too
long.
I'm
gonna
suspend
it
and
move
that
process
over
to
my
computer
or
to
a
cloud
service
provider
and
start
running
it
there.
A
But
now
it's
in
a
different
context,
and
maybe
it
has
a
bunch
of
my
decrypted
data
right
or
maybe
you
know,
I
really
trust
that
I'm
going
to
get
the
result
back
locally
and
now
it's
running
somewhere
completely
else
and
it
there
wasn't
an
api
call
in
the
middle.
I
just
suspended
it
and
moved
it
right.
So
we
need
systems
like
this
to
say:
yes,
you're
actually
allowed
to
grab
this
this.
This
wasn't
blob
this
chunk
of
data
to
run,
and
I
actually
trust
you
to
do
this.
A
The
more
capabilities
you
have
the
more
problems
you
have.
If
you
plug
this
thing
into
your
file
system,
it
can
write
arbitrarily,
often
that's
a
great
thing
right
like
yes,
I
would
absolutely
love
to
have
this.
You
know
all
fuse
enabled
and
just
you
know
running
through
the
entire
system,
but
not
on
completely
untrusted
code.
So
we
need
to
have
some
switches
to
say
yes,
enable
powerful
features
for
this.
Don't
enable
powerful
features
for
that.
I
trust
these
peers
I'll
get
a
ucan
from
this
one.
I
need
live
verification
for
them
to
run
stuff
right.
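Those per-peer switches could look something like a tiny policy gate. Everything here (the trust levels, capability names, and the `allowed` helper) is hypothetical, just to make the shape concrete:

```python
# Hypothetical policy: which capabilities each trust level may use.
POLICY = {
    "untrusted": {"read_by_cid"},                          # locked down
    "team": {"read_by_cid", "network", "fs_write"},        # more power
}

def allowed(trust_level: str, requested: set) -> bool:
    # A job runs only if every capability it requests is enabled
    # for the submitting peer's trust level.
    return requested <= POLICY.get(trust_level, set())
```

So a teammate's job asking for `fs_write` passes, while the same request from an unknown peer is refused before any code executes.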
A: Having remote capabilities into other people's systems is super powerful. Again, we're doing a lot of this with UCAN today, where it's openly interoperable: I create some data myself, I create a UCAN for it, and I send it to Philip; Philip wants to apply that photo filter, and he sends it up into the cloud.
A
We
should
be
able
to
follow
this
chain
around
and
just
have
things
execute
and
when
that
gets
back
to
you
know,
let's
say
that
it's
in
more
of
this
live
model
and
it
gets
back
to
me
hey,
am
I
allowed
to
do
this
thing?
Can
you
send
me
the
link
to
the
next
chunk
of
data?
I
can
just
look
at
the
certificate
chain
and
go
yeah.
That's
the
thing
you
want
here.
A: I don't know, but I assume the Filecoin folks have looked into this a whole bunch: state channels, as opposed to doing things in consensus. Maybe this is possibly a moot point, because I was talking to Juan yesterday or the day before about hierarchical consensus, which is region-based, so you don't have these massive latency concerns, and maybe you get instant finality and all of that. But this is another way to get instant finality in a peer-to-peer, point-to-point system, where I don't need global state.
A
You
pay
for
consensus.
Consensus
always
takes
time
both
in
latency
and
in
agreement,
because
it
has
to
execute
in
synchronous
rounds
right
this
doesn't.
This
is
totally
async
like
to
the
point
that
one
party
in
the
channel
can
be
offline.
Come
online
sign
the
thing
push
it
back
up
and
it's
done
instantly.
A
So
this
is
the
basic
idea
for
those
who
haven't
been
exposed
to
this.
Before
is
that
you
have
two
parties,
they
want
to
do
some
interaction,
they
put
some
state
say
on
chain
or
in
a
public
place,
and
that
is
the
agreed
upon
initial
state
and
then
they
start
computing
and
signing
essentially
an
updated
log
and
counter
signing
it
with
each
other,
and
they
can
do
that
as
soon
as
the
other
one's
signed.
It's
done
now,
if
they
don't
have
a
direct
connection,
you
know
there's
somewhere
else
in
this
graph.
A
Well,
then,
you
can
follow
this
system
through
there
and
say
you
know
this
is
often
done
financially.
If
you
know
person
in
the
top
left
has
some
you
know
hundred
dollars
and
wants
to
get
it
to
person.
In
the
you
know:
middle
rights.
They
go
through
all
these
intermediaries
and
the
balances
update
along
the
way,
and
so
it's
almost
like
pushing
in
in
one
direction
or
the
other,
and
you
can
generalize
that
to
any
kind
of
state.
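The countersigning flow above can be sketched as a toy state channel: an update counts as final the moment the second party's signature lands, with no consensus round. The "signatures" here are fake hash tags, purely illustrative, not real cryptography.

```python
import hashlib

def sign(party_key: str, state: str) -> str:
    # Toy signature: hash of key + state (a stand-in, not real crypto).
    return hashlib.sha256((party_key + "|" + state).encode()).hexdigest()

def countersigned(state: str, sig_a: str, sig_b: str) -> bool:
    # Final as soon as both parties' signatures check out:
    # no synchronous rounds, no global consensus.
    return (sig_a == sign("alice-key", state)
            and sig_b == sign("bob-key", state))

initial = "balances: alice=100, bob=0"   # agreed-upon public initial state
update = "balances: alice=90, bob=10"    # off-chain countersigned update
a_sig = sign("alice-key", update)
b_sig = sign("bob-key", update)          # bob can sign later, even after being offline
```

Note how the asynchrony falls out: Bob can produce `b_sig` whenever he comes online, and the update is done the instant it exists.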
A
So
to
say:
here's
my
you
know:
seed
leech
rate,
all
of
this
stuff
right
or
here's
the
amount
that
I
trust.
This
other
peer
or
not,
and
to
propagate
information
in
the
system.
That
way,
you
can
also
use,
in
fact,
we've
experimented
with
a
little
bit
with.
A
Yet
again,
you
can
for
all
of
this
stuff
too,
because
it's
about
this
counter
signing
and
so
is
that
payments
payments
will
come
up
for
sure
at
some
stage,
because
you're
you're,
running
compute,
so
reusing
the
same
state
channel
idea
for
payment,
reputation,
etc,
means
that
you
can
plug
in
a
payment
system
to
that
as
well.
And
so
I
still
think,
like
you
know,
based
ipfs,
there's
no
payment
by
default.
A
Anybody
can
participate,
but
if
you
want
to
go
up
to
having
a
pinning
service
or
a
provider
of
some
kind,
you
might
have
to
pay,
it
would
be
really
nice
to
have
direct.
You
know,
here's
my
you
know
existing
relationship
with
web3
storage
and
they're
going
to
provide
me
a
terabyte
of
data
as
a
quota
and
I'm
going
to
not
have
to
go
through
the
whole
dance.
A
Every
time
just
say
like
yep
here,
you
know
use
up
a
bunch
of
my
my
quota
as
I'm
going
right
and
make
that
really
fast
so
where
to
start
this
is
a
very
rough
list,
but
right
roughly
in
order
ship,
a
wasm
execution
engine
of
some
kind
into
an
ipfs
implementation.
A
Don't
do
anything
automated
at
first
experiment
with
an
ipli
format
and
outputs
and
manifests,
and
all
of
that
stuff
work
up
to
a
concurrent
job
scheduler,
including
tunable
trust
and
resource
limits,
figure
out
sensible
default
configs
from
this
experience
so
like
up
until
this
point
we're
just
running
stuff
right
out
of
the
box,
there's
probably
going
to
need
to
be
a
lot
more
adjustment
and
so
iteratively
improve
that
experiment
with
deeper
integration,
use,
wasn't
ipld
other
other
packages
and
see
if
we
can
get
this
deeper
into
the
nodes
plug
it
into
cron
have
event
triggers.
A
Have
it
be
a
little
bit
smarter
about?
Okay,
if
I
get,
if
I
make
a
request
about
this
ipld
thing,
sorry
ipns
thing.
I
want
that
to
get
transformed
in
this
way
and
written
over
here
and
then.
Finally,
in
this
first
chunk
of
work
figure
out
how
to
push
jobs
and
associate
the
authorization
with
it
as
well,
so
with
about
five
minutes
left
open
discussion,
which
I
expect
to
get
a
little
bit.
A: Tell me why you love or hate these ideas, and it doesn't just have to be with me. I also want to know what people's requirements are, and I also want to hear why this is a terrible idea.
B: Yeah, really cool. A question I think about a little bit with VMs is: when you try to bind them to the host, the thing that lives outside, you build some interface, you say "here's how I'm doing the binding", and then you realize you got it wrong. The question is how we might want to deal with that.
B: At what level do we want to make a spec PR? Because, nope, it turns out we allocated a large slice when we needed to do incremental things, and we didn't realize.
B: I'm not so worried about security; in this case I'm thinking about performance. Do I have to keep translating all of the old versions into the new versions? I have IPVM v1 code that I'm now running on IPVM v5, and now I have to go through four translation layers before I get anything useful out. And then every time someone wants to make a new change, the old users are like: no, but my code will be slower. Yeah.
E: An impression I'm gathering (sorry, comment first, question at the end) is that there's some set of parameters that go into a Wasm function call, and some results. In practice, people also always seem to be talking about some sort of syscall-like interface, in case the Wasm thing needs to ask for more data from the host and receive it later. And then we also keep seeming to see the interface have a bytes blob fly through somewhere, and that bytes blob is opaque at that first level of interface.
A: Yeah, so I guess there are a few layers of question in there. One is: what should the interface even look like? I think that's where we go and look around and see what everybody else is doing, what the existing stacks are doing, what buildpacks look like, what's been successful, what people really love, all that stuff. In terms of syscalls: fundamentally, Wasm understands streams.
A
I
might
have
a
stream
like
literally
a
bite
stream
import
right,
and
it
knows
nothing
about
the
source
of
that
right
and
maybe
the
programmer
labels
that
source
of
randomness
and
then
you're
going
to
read
off
of
the
stream
and
then
use
that
to
do
cryptography
right
or
you're
gonna
do
networking
and
the
host
is
gonna,
provide
you
networking,
it's
gonna
plug
in
you
know
whatever
tcp
stream
and
you
can
do
read
and
write
over
this
binary
channel
and
that's
all
it
knows
about
just
binary
channels
and
so
anytime,
that
you
need
these
extra
capabilities.
A: If you need extra capabilities and they're not provided, it just fails. So do we need to write into the description, "hey, this is only going to run if you have these capabilities turned on"? Probably. And then the last one, I think, was: how do we actually feed the data into the system?
E
If
we
have
structured
data,
how
do
we
specify
that,
but
I
guess
so
like
just
to
sanity
check
something
I
saw
on
the
screen.
I
think
it
might
have
been
in
the
last
talk,
but
I
know
you
folks
work
together.
So
forgive
me
if
I
ask
you
like.
I
saw
rust
code
that
was
exporting
a
cabi.
A: Yeah. The way we do this is completely unspecified, and we should get a bunch of people in the room to figure that out; maybe this should be in the CoD working group. Wasm itself doesn't care what language you wrote it in. It has low-level instructions for working with data, so yeah, you'll need some interfaces.
A
If
you're
grabbing
json,
you
probably
want
to
put
a
codec
in
between
to
say:
okay
now
dump
this
into
the
way
that
we
do
arguments
for
this
or
my
program
is
going
to
have
to
do
that
translation
directly,
because
they're
modules
you
should
be
able
to
go
and
grab
the
whatever
dad.
Json
codec
wasn't
blob
and
put
that
in
front
right
and
basically
say
as
you
pass
through,
dump
this
into
raw
ipld
data
model,
and
then
my
code
will
know
how
to
handle
that,
for
example,
right
or
use
an
adl.
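That "codec in front of the module" idea is just function composition: decode the raw bytes into a generic data model, then hand the structured result to the module. A sketch with stand-in functions (the real thing would be a separate DAG-JSON codec Wasm blob, not these local helpers):

```python
import json

def dag_json_decode(raw: bytes):
    # Stand-in for a DAG-JSON codec module: bytes -> IPLD-ish data model.
    return json.loads(raw.decode())

def my_module(data) -> int:
    # The user's module sees structured data, never raw bytes.
    return sum(data["values"])

def pipeline(raw: bytes) -> int:
    # The codec is "plugged in front" of the module, as described above.
    return my_module(dag_json_decode(raw))
```

Swapping the front function for a CBOR or custom codec changes the wire format without touching the module itself.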
A
Something
like
that
in
terms
of
the
actual
from
the
outside
doing
the
calls
we'll
need
to
call
in
convention
of
some
kind
right
and
that
we'll
also
need
to
export
like
abi
and
various
things
in
that
manifest
as
well.
G: Yeah, I'm going to go even more basic. It's ironic that I'm sitting literally in between you two, because I know what both of you are doing, and this touches on both. I think there's something both higher-level and more basic, which is not just how you write the one Wasm; you talked about wiring together many Wasms to accomplish something.
G: We've talked about this many times, and there's also a DAG definition you're going to need to establish, which recursively defines the thing we're trying to accomplish and sticks whatever codecs in between, or whatever you might say. I think that's just a yes; nothing other than that.
H: Okay, I love you guys so much. All right. So a lot of this, a definition of WebAssembly that connects to other WebAssembly to provide abstractions over potentially existing or non-existing lower levels of the API, is pretty much exactly what WASI is. They have a spec language that can generate APIs.
H
So
if
we
really
want
to
be
like
serious
about
this,
we
can
take
their
language
and
write
an
interface
for
it
and
then
it'll
do
code
gen
and
we
can
do
magic
stuff
with
them,
then
maybe
have
a
standardized
which
would
also
be
very
cool.
H
It
also
goes
into
the
component
model
of
things
where
they
recently
moved
to.
Where
you
have
web
assembly
that
connects
to
other
web
assembly,
then
you
combine
webassembly,
and
then
you
have
one
giant
web
assembly
that
can
do
different
things
and
you
can
compose
things.
But
for
our
cases
we
have
a
linking
thing.
So
we
can
have
a
dag
description
of
different
webassembly
models
that
you
just
point
to
the
dag,
and
it
says
I
use
these
and
it's
dependencies
and
quotations.
I
guess
so.
H
It's
binary
dependencies,
weird
and
then
yeah
just
compiles
it
all
together
and
it's
a
single
webassembly
thing
and
you
can
just
add
for
like
breaking
previous
things.
If
you're
smart
about
it,
you
can
just
add
a
polyfill
of
a
webassembly
that
just
polyfills
to
the
old
interface
but
yeah.
That's
a
bit
more
work.
C
That's
what
we
like
plan
to
do
pretty
soon
so
would
be
nice
to
to
to
discuss
and
talk
over
and
also
once
you
have
like
a
lot
of
wasms
and
link
them
through
like
a
component
model
or
through
linking
into
some
kind
of
like
functioning
parts
of
a
node
of
a
bigger
system.
You
start
to
have
a
need
to
express
algorithms
up
over
them
like
distributed.
One
once
have
you
thought
about
how
to
do
that
about
language,
maybe
or
something
some
way
to
express
distributedness
over
a
lot
of
like
functioning
parts.
A
Yeah,
so
I
mean
a
bunch
of
stuff
in
there,
so
the
try
to
go
back
to
them.
So
one
thing
is
yeah.
I
love
stuff,
you
guys
do
it's
great.
Thank
you.
Also.
The
filecoin
fdm
community
also
has
some
experience
with
this
right
like
we,
we
have
a
bunch
of
projects
that
are
doing
doing
things
like
this.
I
would
love
to
get
everyone
in
a
room,
maybe
at
the
cod
working
group,
and
to
talk
about
like
what
are
the
right
ways
to
do
these
things.
A: How much do we need to express these things as functions versus declarative specifications, as a DSL? Part of the managed-effects idea is that you should be able to create these from Wasm itself, as opposed to a dedicated language: a "just use the one tool" strategy, where you write it in whatever you like.
A
One
of
your
outputs
is
a
declarative
specification
of
how
this
thing
should
get
run,
and
you
tag
that
as
an
effect
for
how
the
things
should
get
continued
right
or
suspended
or,
however,
wants
to
happen.
That's
one
approach.
You
know,
there's
there's.
Obviously,
a
bunch
of
others
right
so
yeah,
I
think
mainly,
we
just
need
to
get
a
bunch
of
people
in
the
room
and
hash
this
stuff
out.
A
The
other
thing
I
wanted
to
mention
from
I
can't
remember
somebody
else's
comment
earlier
is:
if
we,
as
a
community,
decide
that
we're
really
serious
about
wasm,
we
should
join
the
bytecode
alliance
and
get
involved
in
all
of
that,
so
that
obviously
probably
means
like
a
full-time
spec
person
or
something
but
yeah.
D: So, as I understand it, you have your own interfaces, your own API. Have you considered WebAssembly interface types, and the WIT language especially? For example, it was developed as an extension for WASI in the Wasmtime group, and then they wanted to standardize it as a WebAssembly proposal itself, not as WASI but as WebAssembly, and now there are a lot of extensions to WIT, especially in Wasmtime. So have you considered this?
A: Yeah, I think we should borrow whatever is coming out of the community. I didn't realize it was that far along, that this is running today.
A: Amazing. Yeah, we should really use that. There's some extra stuff we can do on top by taking advantage of content addressing, but yeah, let's not rewrite anything; let's use as much of what already exists as possible.
A
Let's
take
stuff,
so,
let's
just
straight
up
steal
stuff
from
the
webassembly
world,
just
wholesale
and
then,
let's
start
stealing
things
from
other
projects
in
this
ecosystem
right
like
fluence
like
fem
like
all
these
other
projects,
and
if
they
have
what
you
know
if
whit
is
essentially
done.
Yeah,
let's
plug
it
in
amazing,.
D
Yeah,
thank
you,
but
so
yes
read
that's
done,
but
so
there
is
no
good
developer
experience
from,
for
example,
from
ross
site.
D
But
but
actually
I
mean
I
mean
like
it's
now.
It's
difficult
to
compile
like
rust,
file,
rust
code
or
webassembly
is
support
of
bit.
So
is
there
an
also
like?
I
know,
good
developer
experience,
but
but
inside
like
what
sometimes
it's
already
implemented.
A
Right,
so,
are
you
saying
that
it
you
have
to
write
your
code
in
a
adaptive,
adaptive
optimizable
way,
special.
You.
D
Can't
just
you
can't
run
static
analysis,
you
you
need,
you
need
to
have
like
compiler
of
assembly
file
and
like
narrow
dot.
Read
file
that
like
describes
is
the
interfaces,
so
I
mean
that
this
file,
like
doesn't
generate
it
by
themselves
during
the
compilation.
A
Okay,
let's
take
this
offline,
I'd
love
to
know
more
detail
because
yeah.
The
way
I
was
thinking
about
it
was
essentially
closer
to
like
a
hot
spot
where
it
just
it
looks
at
the
thing
and
starts
applying
iteratively
optimizations
over
it
and
then
having
a
pointer
into
it.
Basically
saying
like
I'm
optimizing
this
thing,
if
you
want
to
run
it
faster
run
this
one.
F
Can
you
talk
a
bit
about
how
you
are
thinking
about
distributed
execution
across
many
vms,
so
you
you
have,
of
course,
as
you
bring
up
many
vms,
you
can
connect
them
because
of
all
the
pure
functions.
The
hash.
F: And so on; there's a nice translation layer. I'm going to be talking about some of that later on. But how are you thinking about it? How do you see those streams you were describing coupling? What happens when you have many of them? Do you envision many of them per single local VM? And maybe, practically, the boundaries of execution: when you run some software, do you run an instance?
A
Know
yeah
so
yeah
there's
a
few
things
in
here,
so
one
that
I
I
would
love
to
see
them.
Oh,
maybe
maybe
the
fluence
folks
have.
This
is
deterministic
parallelism,
pretty
much
a
requirement
for
for
a
bunch
of
these
things.
Right
like
we
want
to
run
it
twice.
A
It
has
to
be
deterministic
and
there's
some
trade-offs
in
there
right
in
terms
of
the
actual
scheduling
and
the
the
pipelines.
I
think
it
depends
on
the
kind
of
job
that
you're
running
right.
So,
if
you
want
this
run
by
you
know,
an
end
number
appears
minimum.
Then
you
push
it
into
a
single
queue
and
you
say
this
is
a
work.
Stealing
queue
go
for
it
and
if
there's
duplication,
that's
fine.
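A sketch of that "duplication is fine" mode, with invented names throughout: workers steal from one shared queue, and because jobs are deterministic, duplicate executions collapse to the same result, so re-running a job is only wasteful, never incorrect.

```python
from collections import deque

def run_job(job: str) -> str:
    # Deterministic "compute": the same job always yields the same result.
    return f"result-of-{job}"

queue = deque(["job-a", "job-b", "job-a"])   # job-a queued twice (duplication)
results: dict[str, str] = {}                  # results keyed by job

while queue:
    job = queue.popleft()                     # a worker "steals" the next job
    out = run_job(job)
    # A duplicate run must agree with any result already recorded:
    # wasteful, but harmless, exactly because the job is pure.
    assert results.get(job, out) == out
    results[job] = out
```

The exactly-once mode mentioned next is the opposite regime: there, agreement on who runs the job has to come from consensus or a state channel, not from this idempotence.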
A
If
it's
I
want
this
run
once
and
exactly
once
by
one
person
right
with
this
chunk
of
it
run
once
exactly
once
by
one
person,
then
you
need
consensus
because
it's
distributed
right
or
sorry.
You
either
need
consensus
or
overstate
channel
or
some
other
communication
to
say
yup
you're,
the
one
that's
going
to
actually
do
this
thing.
A
So
if
you're
running
you
know
just
as
a
you
know,
the
common
case
is
like
you're
going
to
run
mapreduce
or
some
some
huge
data
set,
then
yeah
break
it
up
push
it
over
these
streams,
click
the
results
do
the
reduce
and
if
that
gets
run
multiple
times
or
once,
unless
you
have
managed
effects
or
unless
you
have
like
actual
effects
off
platform
effects,
then
the
the
duplication
on
the
compute
is
actually
fine,
because
it's
deterministic,
it's
just
wasteful,
so
but
in
the
same
way
that
you
might
get
duplicate
blocks
in
a
in
a
network
hall
right.
A
So
so
essentially,
I
think
that
needs
to
be
tunable
depending
on
the
use
case.
There's
also
the
case
of
like
I
want
to
run
this
over
my
cluster
of
those
machines.
A
Send
it
to
them
right-
and
this
starts
to
look
a
whole
lot
more,
like
the
stuff
happening
in
the
cloud
native
folks
stuff
right.
So.
A: Maybe to clarify: the presentation is "hey, here's something we've been thinking about". This is actually on our roadmap, or at least a significant subset of it is. I would like to collaborate with other people to build this thing and solve it for everybody. Are there other people interested in this? It sounds like probably yes. Okay, great, so we should probably talk, then.