From YouTube: FVM Deep Dive - @stebalien - IPFS and WASM
So today we're going to talk about, well, first, the FVM deep dive. This is basically going to assume some amount of understanding of both Wasm and IPLD, and then talk about how we use them in the FVM. So the FVM is the Filecoin Virtual Machine; it's the virtual machine that we are now using in the Filecoin network.
One of the fun things about Wasm, and about the FVM in general, is that because it's Wasm, and you can compile a lot of things to Wasm, you can actually just compile other VMs to it and run them on top. For now that's the EVM, but in the future we can add additional ones. Basically, Wasm is a great target if you just want to be able to run anything; that was our main reason for using it.
So this is the architecture. There are a lot of components here, but I'm just going to walk through it. At the very top you can see the executor. Actually, let me back up and talk about Filecoin for a second. Filecoin is a blockchain; as a blockchain, it has blocks, and these blocks have messages (the messages are also called transactions on some chains). To get the state of the next block,
you take the state of the previous block, you execute all the messages, and then you get some new piece of state, and that is the new state of the world. That's the general idea. Filecoin is a bit tricky because it has tipsets, but that's basically the idea. So here you can see this executor.
This executor lives for the lifetime of a single block and handles all the message execution within that single block. Within that we have a machine; the machine keeps track of all the state. We initialize this machine with the current state tree, the blockstore, some externally implemented functions that are implemented by the Filecoin client, and anything else you might need that's common across all messages. Then, for each message,
we have the message execution section you see there. When we start processing a message, the executor creates this thing called a call manager. The call manager is how we manage the call stack: basically, a message comes in, we need to send it to the first actor that receives it, the first actor can send to the next actors, and the next actor sends to the next actor. That's what the call manager deals with. Basically, it receives a message,
sends it to the first actor, which gets run in something called an invocation container that executes the Wasm actor internally. The actor can then call back out, through something we call a syscall, into something we call the kernel. The kernel is the host-side glue that we have for these actors. From the kernel they can access their state, check their actor ID, and do all the fun things they want to do on the blockchain.
They can also call into other actors. So basically, you start in your invocation container, in your Wasm actor. You call through a syscall into the kernel and say: hey kernel, I want to send a message to this other actor. That goes through the call manager; the call manager will then create the next kernel, load the next actor, and repeat that down the stack. As you can see, this diagram is kind of repeated for each message.
We'll do the same thing: create a new call manager within the same process. Then, when we're done with the block, we'll move on to the next block with a new executor, and everything starts all over again. At the very bottom you can see an engine. This is a common way of dealing with Wasm, where you have some caching engine that can cache
already-compiled Wasm modules, because they can be expensive to compile. That's the general architecture. I made this diagram in case you're interested in contributing to the code (please do!), but it also explains how this works in general, because if you just jump into the code, it's very difficult to understand. Yeah, that's the architecture.
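The nesting described above (an executor per block, a call manager per message, a kernel per actor invocation) can be sketched as a toy model. All class names and signatures here are illustrative, not the real FVM crate APIs:

```python
class Kernel:
    """Host-side glue that one actor invocation talks to via syscalls."""
    def __init__(self, call_manager, actor_id):
        self.call_manager = call_manager
        self.actor_id = actor_id

    def send(self, to, params):
        # A "send" syscall routes back through the call manager,
        # which spins up the next kernel and actor.
        return self.call_manager.call(to, params)


class CallManager:
    """Manages one message's call stack: each send loads the next
    actor with a fresh kernel, repeating down the stack."""
    def __init__(self, actors):
        self.actors = actors
        self.depth = 0

    def call(self, actor_id, params):
        self.depth += 1
        kernel = Kernel(self, actor_id)
        return self.actors[actor_id](kernel, params)


class Executor:
    """Lives for one block; runs each message with a new call manager."""
    def __init__(self, actors):
        self.actors = actors

    def execute_message(self, to, params):
        return CallManager(self.actors).call(to, params)


# Two toy actors: "a" forwards to "b" through a syscall-style send.
actors = {
    "a": lambda k, p: k.send("b", p + 1),
    "b": lambda k, p: p * 2,
}
result = Executor(actors).execute_message("a", 10)
```

Each message gets its own call manager, and each hop in the call stack gets its own kernel, mirroring the diagram.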
I'm going to talk about some specifics of how we use IPLD. One is that we use it for sending messages. When you get a message from off-chain, the parameters are an identity block. This block cannot link to other things, because you're off-chain; you can't copy state that way. On chain,
when you send from an actor, you can send an arbitrary IPLD DAG. You can actually say: here is this tree of data that I have, take it. The other actor can then do whatever it wants with the tree of data, and it can send another tree of data back to you. This is really cool because in a lot of blockchains I can just send serialized data and that's it; I can't give access to parts of my state tree with a call. Here, we can do that.
I can literally unhook a part of my state and just give it to you. You can then do whatever you want with it, mutate it or just read it or whatever, and send it back. So, for example, instead of making a bunch of calls into an actor to try to read and understand what it's doing, I can just say: hey, what is the current state of your registry? It just sends me its registry.
I can't modify it in place, because it's copy-on-write; IPLD is immutable. But I can look at it and do whatever I want there, and if I do want to modify it, then I basically edit it locally, send the edited version back to the target method, and that method checks: okay, is your edit sane? Does this make sense? Are you allowed to do these operations? And if it is, then it saves it.
This is just something that's really cool, and that's just not possible to do in any other system. So that's the message side of things for working with IPLD. This is important because crossing the Wasm boundary is hard in general, so we had to think a lot about how to make this work reasonably.
Basically, we have three read functions and three write functions in the FVM. When you start out, you first have to ask the system for your root object, your root node; your root is your state root, and you do this via a syscall. We call it a syscall, but in Wasm terms it's called a host call, where basically you're calling out to the system and saying: hey system, please give me my root CID.
The way this works is that you actually pass a pointer into your memory and ask the system to write the CID back to that address. It's annoying, but whatever, it works. Once you have your root CID, you can go open it. This gives you a handle, like a file handle, to your root block in your state tree. It also tells you the codec and the size so you can decode it. In the FVM, we decode everything in the actors, so basically in user space.
We do this for a couple of reasons. One is actually performance: if we did this in the kernel, or in the system, there would be lots of calling back and forth between the actor and the system. The other performance benefit is that if you know the shape of your data ahead of time, because you've compiled something that knows the shape of the data,
you can optimize your decoding assuming you have that shape, and if the shape doesn't match, you just fail, you abort. That's another reason. Finally, by doing all this stuff in the actor, we get better security: if we did this in the FVM itself, then some actor could put in malicious state that could cause us to overrun memory or something like that. By doing it in the actor, everything's nice and sandboxed. So yeah, we open the block,
and we then read it. We have this as two steps because when we open the block, we don't know the size, and we also may not want to read the entire thing. So this lets us work like a file system: you open, and then you load the data. Then you have a little cycle there: basically we rinse and repeat. Once we've read the root block,
we can start decoding the substate, start loading children, and work with it. Then, when we actually want to save something, we create the block. Basically we say: hey system, here is the codec I want to use, here's the block. The system will then validate that this is a valid block. It also does some other validations, things like: am I only linking to state that I'm allowed to access? You have to be very careful here so that you can't just access state
that's not a part of your state tree; otherwise you get non-determinism and other not-so-good things. You'll get back a handle, and then finally, given the handle, you will create a new CID: you tell the system, hey, please make the CID for me of this state. Then you can finally set your root to the new root CID. So that's kind of the life cycle:
you get the root, you start opening blocks, start reading blocks, and you can start creating blocks and linking in blocks, and then finally you set your root. To give an example of how we actually use all this:
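That read/write lifecycle can be sketched against a toy in-memory system. The syscall names below mirror the talk (root, open, read, create, CID, set root); the real FVM ABI passes pointers and offsets into Wasm memory, so treat every name and signature here as an assumption:

```python
import hashlib
import json


class ToyFVMSystem:
    """Toy model of the FVM's IPLD syscalls: an in-memory blockstore
    plus a per-actor state root. Names are illustrative only."""

    def __init__(self):
        self.blocks = {}       # cid -> (codec, bytes)
        self.handles = {}      # handle -> cid
        self.next_handle = 1
        self.state_root = None

    def _cid(self, codec, data):
        # Stand-in for a real multihash-based CID.
        return f"{codec}:{hashlib.sha256(data).hexdigest()[:16]}"

    def root(self):
        """Read syscall 1: ask the system for your state-root CID."""
        return self.state_root

    def block_open(self, cid):
        """Read syscall 2: open a block; returns (handle, codec, size)."""
        codec, data = self.blocks[cid]
        handle, self.next_handle = self.next_handle, self.next_handle + 1
        self.handles[handle] = cid
        return handle, codec, len(data)

    def block_read(self, handle, offset, length):
        """Read syscall 3: read a slice of an open block."""
        _, data = self.blocks[self.handles[handle]]
        return data[offset:offset + length]

    def block_create(self, codec, data):
        """Write syscall 1: hand the system a new block; get a handle."""
        cid = self._cid(codec, data)
        self.blocks[cid] = (codec, data)
        handle, self.next_handle = self.next_handle, self.next_handle + 1
        self.handles[handle] = cid
        return handle

    def block_cid(self, handle):
        """Write syscall 2: ask the system to compute the block's CID."""
        return self.handles[handle]

    def set_root(self, cid):
        """Write syscall 3: commit the new state root."""
        self.state_root = cid


# Round trip: save state, then load it back through the same lifecycle.
sys_ = ToyFVMSystem()
state = json.dumps({"balance": 42}).encode()
h = sys_.block_create("dag-json", state)
sys_.set_root(sys_.block_cid(h))

root = sys_.root()
h2, codec, size = sys_.block_open(root)
loaded = json.loads(sys_.block_read(h2, 0, size))
```

The two-step open/read split shows up here too: `block_open` reports the codec and size, and only `block_read` actually copies bytes.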
This is the multisig state definition in the actors. It's just a struct; it encodes to IPLD, but really we just specify it as a struct and say: hey, I'm encoded as IPLD. In fact, we want to be really, really efficient, so we don't encode this as a map.
We actually encode it as an array of fields, which makes it fairly efficient. It means it's a bit difficult to understand without an actual schema, because with a schema you can say: hey, here are the fields, it's an object; but without the schema you just get a list of fields. On the other hand, it's actually very efficient and, unlike, for example, the EVM, it's still typed and structured.
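To see why the array (tuple) encoding is smaller, here is a rough comparison. The real actors use DAG-CBOR via tuple-encoded structs; this sketch uses JSON and made-up field names purely to show that dropping the field names from every serialized object saves space:

```python
import json

# A multisig-like state with illustrative fields (not the real actor schema).
state = {
    "signers": ["f01", "f02"],
    "threshold": 2,
    "next_txn_id": 7,
    "pending_txns": "cid-of-hamt",  # link to a sharded map, as in the talk
}

# Map encoding: field names are serialized with every object.
as_map = json.dumps(state, separators=(",", ":")).encode()

# Tuple encoding: only the values, in a fixed schema-defined order.
# Decoders must know the schema to interpret positions 0, 1, 2, 3.
as_tuple = json.dumps(list(state.values()), separators=(",", ":")).encode()
```

The trade-off is exactly the one in the talk: the tuple form is smaller but unreadable without the schema.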
This struct links to a HAMT of transactions, a sharded map, and this is also one of the powers of IPLD: you can decide where your block boundaries are, where your data boundaries are. So basically, reading this state is one big read, whereas in the EVM you have to read a bunch of little things, and basically every one of those reads will carry the full read cost.
In our case, you basically have a base read cost and then a per-byte read cost. If you have grouped state, that's really convenient: you can just read blobs of related data together, and then you can go ahead and pull out a transaction that's encoded in a separate object and referenced by CID. In this case, we have a transaction.
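That read-pricing idea can be written down as a simple formula: cost = reads × base + bytes × per-byte. The numbers below are made up for illustration; the real Filecoin gas schedule is different:

```python
# Illustrative numbers only; not the real Filecoin gas schedule.
BASE_READ_GAS = 1000
PER_BYTE_GAS = 2


def read_cost(num_reads, total_bytes):
    """Gas for reading `total_bytes` spread across `num_reads` blocks."""
    return num_reads * BASE_READ_GAS + total_bytes * PER_BYTE_GAS


# One grouped 4 KiB state read vs. 128 EVM-style 32-byte slot reads.
grouped = read_cost(1, 4096)
slots = read_cost(128, 128 * 32)
```

Even for the same number of bytes, paying the base cost once instead of 128 times is what makes grouped state cheap.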
That really is it for the FVM; that's how it works. It's actually not too complicated. The complicated part comes next: how we get the EVM working on it, how addressing works, how users will use these things, all this kind of stuff. A lot of that is still very much up in the air and a work in progress,
so I'm not going to present it right now. But the basics of the FVM, and specifically how it uses IPLD and Wasm, are actually fairly straightforward. I guess the next step is: any questions?
Yes, ideally. So the way this works is: basically, we can take any virtual machine we want, compile it to WebAssembly, and then just run that as an actor, or as an actor runtime. But one of the fun things about the FVM is this:
in the EVM, when you deploy an actor, a contract, you actually copy the code each time. You say: hey, deploy this code; it will run your init code to produce more code, and it stores that code next to the actor. But this duplicates a bunch of code and means deploying new actors is actually somewhat expensive.
In our case, you can make really big actors, like the EVM, and then just keep on deploying new copies of this VM, and it's basically free, because we just point to the code by CID. In the EVM they have a workaround here where they deal with this by using what are called proxies: basically, in the EVM you would deploy your heavy actor to some common address, and then others would deploy their own actor and have it proxy to this common address.
This has some security concerns, though, because you're basically delegating to this common actor, and if the common actor is not safe, or someone else controls it, then the controller can change it at any time. With content addressing, you can't do that.
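A minimal sketch of why content-addressed code both deduplicates and stays immutable, with hypothetical names (the real FVM keeps actor code in the blockstore keyed by an actual CID):

```python
import hashlib


class CodeStore:
    """Content-addressed actor code: deployments point at a CID, so a
    big runtime (like an EVM interpreter) is stored once, immutably."""

    def __init__(self):
        self.blocks = {}

    def put(self, code: bytes) -> str:
        cid = hashlib.sha256(code).hexdigest()[:16]  # stand-in CID
        self.blocks[cid] = code
        return cid


store = CodeStore()
evm_runtime = b"...big compiled EVM wasm module..."
code_cid = store.put(evm_runtime)

# Deploying 1000 "EVM contracts" just records the same code CID per actor;
# the runtime bytes exist in the store exactly once.
actors = [{"code": code_cid, "state": None} for _ in range(1000)]

# "Changing" the code yields a different CID; existing actors still point
# at the original, unlike a mutable proxy target someone else controls.
new_cid = store.put(b"...patched runtime...")
```

This is the contrast with proxies: a proxy target can be swapped under you, while a CID reference can only ever resolve to the bytes it was created from.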
For the EVM, we're still experimenting. I think we were looking at SputnikVM; I can't remember the latest version we were looking at, but basically it needs to be in Rust, which makes it somewhat tricky. Yes, we're not using Geth or the Go EVM or anything like that; we're using a Rust EVM.
A
Okay,
then
I
guess
I
can
add
some
addendums
here
where
like
in
like
in
future
versions,
because
one
want
to
think
about
the
wasson
target
is
the
most
languages
will
compile
the
wasm,
so
I'd
love
to
actually
be
able
to
like
transpile
evm
by
code
to
webassembly,
and
you
get
much
better
performance
that
way.
The
problem
with
writing.
the EVM inside WebAssembly is that you're going to pay an interpreter penalty. But on the other hand, we get exact compatibility here, where we can literally take... well, this is getting into some extensions here. I guess I was focusing more on the IPLD stuff, but I have a lot of time, so I can dive in a little bit into EVM compatibility.
The plan here, actually, is to support sending messages from off-chain as actual Ethereum messages and then processing them on-chain, using an account abstraction somewhat like the one proposed for the Ethereum blockchain. This means you can use MetaMask or whatever tool you have to basically create messages, send them to the Filecoin blockchain, and then your EVM-based account on chain will process
these messages, validate the Ethereum signature, and then forward them on chain. Then you should be able to use all the addressing and basically all the standard tools on chain as well. The other cool thing is that we're going to be implementing at least a part of the Geth API, so you should be able to use all your same off-chain tools as well.
Basically, you'll just pretend that you have an EVM node, really an Eth node, and follow the exact same flow. You do have to deal with our thirty-second block times rather than Ethereum's roughly ten-second block time, but other than that, everything should just work the same. And then you also have access to all the built-in actors, so all the existing storage stuff.
So that's the general interoperability story there. Yeah, that's it, I guess. Does anyone here want to hear about how we do instrumentation, or should I move on to the next talk? Is there any interest in how we do gas accounting and stack accounting in Wasm? Yep? Okay. I just wanted to check, since this is an IPFS audience, not a Filecoin audience. Okay.
So, one of the fun things about Wasm (I don't know how much people have dug into the details of how it works) is that it's highly structured. If you look at the actual bytecode, it has actual blocks, so you can see loops and blocks and function calls and all this kind of stuff right there in the Wasm.
So you can actually look at it as a sort of computation graph, which means it's very easy to decode, to annotate, and to instrument. We're basically doing the same thing that, what's that group, I think Parity did: we go through your Wasm code and, for every block of code,
before we run the block of code, we charge the gas that block of code will cost. We also check your stack depth: we basically keep a running counter of your stack depth, and since we know how much stack each function will use, we just account for that up front. If you run out of stack, we just cancel your message.
If you run out of gas, we also cancel the message. But the cool thing about this is that we can insert these charges into your WebAssembly code, then compile it and run all the optimizers and such, so the gas accounting is actually very cheap. Not super cheap, but pretty cheap, because it can actually get optimized, and we also don't have to emit something for every single instruction; instead, we can account for entire blocks at a time.
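The per-block metering described above can be modeled on a toy, non-Wasm "bytecode". The real FVM rewrites actual Wasm (in the style of Parity's instrumentation pass); this sketch only shows the idea of charging once per straight-line block rather than per instruction:

```python
class OutOfGas(Exception):
    pass


def instrument(blocks, cost_per_op=1):
    """Prepend one gas charge to each straight-line block, sized to the
    whole block, instead of metering every single instruction."""
    out = []
    for block in blocks:
        out.append(("charge_gas", len(block) * cost_per_op))
        out.extend(block)
    return out


def run(ops, gas_limit):
    gas, acc = gas_limit, 0
    for op in ops:
        if isinstance(op, tuple) and op[0] == "charge_gas":
            gas -= op[1]
            if gas < 0:
                raise OutOfGas()  # cancel the message
        else:
            acc += op  # toy "instruction": add a number
    return acc, gas_limit - gas


module = [[1, 2, 3], [4, 5]]  # two basic blocks of toy instructions
result, used = run(instrument(module), gas_limit=10)
```

Because the charge is a single operation per block, the optimizer can fold it in cheaply, which is why the overhead stays small.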
In the future, we would actually like to get even better at this. Right now, we account for every single block, charged up front. In theory, you can be a little bit looser here: you can account for an entire function, really for multiple paths of code, sort of over-account for those paths, and then refund at the end of the path.
This allows you to avoid a lot of jumps and checks and stuff like that, because instead of having to check whether you've run out of gas at each block, you just see up front whether you could run out of gas, charge for everything you could use, and at the very end add an instruction to rebate what wasn't used. That becomes very cheap.
The other cool thing here is that you can actually charge for the gas charging itself as part of your gas charges, which is somewhat difficult to do in other systems. You can charge for the gas-charging instructions, and you can even charge for the stack-accounting functions, which means we can encourage people to write code that optimizes better for gas accounting.
I would love to do that; we don't currently do it. Yeah, that's one of my biggest beefs with gas right now: it's one dimension, but that's wrong, because the gas dimension is time, and state storage is an entirely different thing.
We actually have one major benefit in Filecoin in that we are a storage network. We can't store live state out on the Filecoin storage network, because we need to be able to access it every single block, but we can probably store dead state (not dead, exactly, but sort of iced state), so that contracts, or actors, get frozen once we finally figure out some kind of rent scheme. In systems like Ethereum, that's somewhat difficult to do, because someone has to have the state and has to be able to resuscitate it. In something like Filecoin,
one thing I really would like to do is just say: fine, every single storage provider has to store the entire historical state. It's not that much data. I mean, it's a lot of data, it's terabytes, but these storage providers store petabytes, usually, or at least hundreds of terabytes. So it's actually not too bad to say: fine, if you are a storage provider, you store the historical state and have to be able to bring it back online, which I think is really cool. But yeah.
So at the moment, we charge an average, which is not good, but we're doing this because we're only running built-in smart contracts, not user-specified ones. You can't actually securely charge an average once you run user code, which means we have to go and do more analysis there, because if you charge an average, then someone will just find the operation that's really expensive. For example, when we were doing this analysis, we found out that random memory reads are really expensive.
Computers are designed for optimized code, or optimizable code; they're not designed for pessimistic code. Basically, in a blockchain you can throw away your caches; those don't help you with security. You can throw away all of your branch prediction, all this kind of stuff, because an attacker will just find the code
that is the slowest possible code, if they want to slow down your chain, and try to get you to execute that. So what we're actually going to have to do is charge differently. One of the nice things about Wasm is that it now has bulk memory instructions, so what we can do here is charge some amount for the random read and then a small amount per byte, because per byte is cheap.
It's just the random read that is expensive; it's the latency of going to memory. For other instructions, it's not too bad, because most instructions are just some kind of math or branching or whatever. That's not terrible, but we still need to do a lot of detailed analysis on this, and we haven't done it yet.
The other thing we've actually noticed is, we think it's the instruction cache, we're not entirely sure: we noticed that jumping back and forth between WebAssembly and the system is not super slow, but it has a fair amount of cost, and we believe that's because it's thrashing instruction caches and stuff like that.