Description
Speaker: Alex Beregszaszi, co-lead at Ethereum Foundation
Topic: Fizzy — A deterministic interpreter
Fizzy https://github.com/wasmx/fizzy
Ethereum https://ethereum.org/en/
Wasm in Web3 workshop https://avive.github.io/wasm_on_the_blockchain_2021/#/
Of course, today I'm not going to talk about the EVM; I'm going to talk about WebAssembly, and my topic of choice is Fizzy, which is an interpreter our team has written — basically the culmination of the work we have done on the topic of WebAssembly.
I would also mention that Pavel, whose talk is after mine, will cover a lot of the really nitty-gritty details regarding the testing and all the different findings we have come across with Fizzy. I will first give an overview of the topic of interpreters and then go into Fizzy in a bit more detail.
Let me see if this works. Okay, yeah. So, as was mentioned, our team is now called Ipsilon, and we don't really have a website apart from this HackMD, but we publish all the different findings and specifications there, and there are quite a few of them. We also publish on Ethresear.ch and Ethereum Magicians, but of course all of these are rather Ethereum-centric, and currently we're mostly working on EVM-related optimizations.
But, as you will see, a lot of the work we have done goes back and forth between the EVM and Wasm, and the learnings we had on either of these influenced the things we have done on the other. Previously our team was just called Ewasm, and the project itself was also called Ewasm, but even in those times we focused in practice on execution in general, not only on WebAssembly, and having a new team name just helps make it a bit clearer that our topics are a bit more varied than WebAssembly itself.
As far as I've seen, everybody here is using Wasmer, which, if I remember correctly, is mostly an ahead-of-time compiler, though it may also have an interpreter and a JIT component; I think it is the AOT version that everybody is using. Compared to that, interpreters are quite a different beast.
So, going back to the actual background — how all of this started, and why we made any of these decisions — I have to go back to the very beginning, which was in 2016, or rather 2015, when we started to work on WebAssembly. I believe this was the first application of WebAssembly in a blockchain context. Our initial requirements were quite simple: we wanted deterministic execution, we wanted instant startup time, and we wanted fast execution time.
So these were the requirements, and here — because I said we want deterministic execution — I want to expand a tiny bit on what that means. Under this we mean two points. The first point is that there must be some kind of a limit, and within that limit execution has to always stop at the same place.
No matter how many times you run it, where you run it, and so on. This limit is mostly accomplished with metering, which many others have already talked about today.
The second point is that under determinism the results of the execution have to be identical no matter which host machine you run it on — whether it's an x86, an ARM, or any other exotic machine, it has to result in the very same outcome. Here I'd mention two important areas which could cause concern. The first is any kind of floating point, because floating point can be implemented differently across different platforms — and, in fact, x86 and ARM do implement some parts differently.
I guess in the past ARM hasn't been such a huge issue, because all the cloud servers really focused on x86, but in recent years of course, with phones and most recently with the Apple M1, ARM is becoming more heavily used in different contexts, and so this point about floating point becomes more important than it may have been in the past.
The second area is that different host computers — and this doesn't only apply to different CPUs, but rather to different operating systems — may have different default stack sizes. One example is macOS versus Linux: macOS has a much smaller default stack size, and what this can cause is that if you recurse, if you make many nested calls, those could result in a failure at different points in time depending on how much stack space you have.
So that's all I wanted to say about determinism. Now, knowing what the initial requirements were when we started with Ewasm, these were our expectations: we expected that we may have some issues with determinism, with metering, and maybe with the interfaces — how to use Wasm in the context of contracts — and we expected there wouldn't be any issues with speed, and that there would be a lot of different VMs we could use. So these were our expectations.
Do you think we were right? Obviously, given that I'm listing these, no, we weren't right. So that's what we expected; what actually happened is that speed was an issue and, more importantly, the lack of choice of VMs, and of different kinds of VMs.
That was definitely a giant issue, and, as we learned, these different VMs may have different problems associated with them — especially the AOTs and JITs I mentioned; more importantly the JITs, of course, in which we have found some issues.
So the points we had hoped there wouldn't be any issues with — speed and the VMs — were actually the main issues, while what we found instead is that determinism and metering aren't that bad: those can be solved, and they can be solved quite easily. It's really just the speed and the VMs which became the problem. Based on this, our initial choice for how to accomplish Ewasm was that we wanted to inject metering into the Wasm code.
We just wanted to use off-the-shelf VMs and, more importantly, we also wanted to use browsers — including browsers on the phone — because we hoped light clients would become more important than they have become so far. We expected that a lot of the execution would actually happen on the phone, where the browser would have a Wasm VM built in, and since we cannot really modify the VM in a browser, the obvious choice was to inject metering into the Wasm code, which means we modify the Wasm code prior to execution.
We insert extra statements which count the execution steps, and we use different costs for different kinds of instructions — for example, a more complex call instruction is likely more expensive than a regular addition. Secondly, we wanted to inject call depth checking into the Wasm code, similar to metering; this is to ensure that the different stack sizes wouldn't cause any difference. And lastly, we just wanted to reject any code which has floating point — in our use case floating point wasn't crucial.
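The injection approach described here can be sketched in a few lines. This is only a toy illustration: the instruction names, the cost table, and the "charge" pseudo-instruction are invented for the example and are not the actual Ewasm cost schedule.

```python
# Toy metering injection: before execution, rewrite the instruction stream
# so every straight-line block first charges its total cost. Real schedules
# assign a cost per Wasm opcode; these names and costs are made up.

COSTS = {"i32.add": 1, "i32.mul": 3, "call": 10, "end": 0, "br_if": 2}

def inject_metering(instrs):
    """Prepend one ("charge", total_cost) pseudo-instruction per block,
    ending a block at control-flow instructions."""
    out, block, cost = [], [], 0

    def flush():
        nonlocal block, cost
        if block:
            out.append(("charge", cost))
            out.extend(block)
            block, cost = [], 0

    for op in instrs:
        block.append(op)
        cost += COSTS.get(op, 1)
        if op in ("call", "br_if", "end"):  # control flow ends the block
            flush()
    flush()
    return out

def execute(instrs, gas_limit):
    """Interpret only the metering side: trap once the limit is exceeded."""
    gas = gas_limit
    for op in instrs:
        if isinstance(op, tuple) and op[0] == "charge":
            gas -= op[1]
            if gas < 0:
                return "out of gas"
    return "ok"
```

Because the charges are ordinary instructions in the rewritten module, an unmodified off-the-shelf VM executes them like any other code, which is exactly why this approach works in a browser.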
Okay, so with all these decisions we started working. We got everything done to an initial prototype level, and these were the results in 2018.
Back then we used Binaryen and V8, which is the JavaScript engine of Chrome and is also used in Node.js. The good results were that everything works, and the metering overhead is not terrible — it's quite okay. 64-bit operations are really fast compared to the EVM, of course, while 256-bit operations are not as fast as we hoped, and definitely not as fast as they are on the EVM.
Of course there were problems. With V8 — when I said it works and it is fast: of course, I mean, it's V8, so it was pretty fast — we have had some issues that come with it being, at the time at least, a JIT; issues with JITs in general.
The problem case we found is that an attacker could craft a piece of Wasm code which excessively increases the compilation time of the Wasm code into native code, and this is basically a case where somebody could DoS-attack the network. At that time there weren't any ahead-of-time compilers — or at least no stable versions of them — so basically we just decided we had to pause the work on this with V8.
We didn't want to write a VM ourselves at all, and Binaryen was pretty slow, so we switched to wabt, which was at the time the next best option. So we're talking about 2018 — in early 2018 we started work on wabt. Obviously I don't want to give a day-to-day timeline here, because over these years a lot of things happened.
This is just an excerpt, but in 2018, with wabt, we mostly did different optimizations on the interpreter itself; I'll just mention a few here. One interesting one is that we combined multiple Wasm instructions into "superinstructions" — one combined instruction which does the work of all of them. This is of course not a novel idea; it is exactly what JITs and ahead-of-time compilers also do.
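A minimal sketch of such a fusion pass might look like this — the fused instruction names are invented for the example, and a real pass would fuse many more patterns and operate on decoded opcodes rather than strings:

```python
# Toy peephole pass fusing common Wasm instruction pairs into single
# "superinstructions", so the interpreter's dispatch loop runs fewer steps.

FUSIONS = {
    ("local.get", "local.get"): "local.get2",
    ("local.get", "i32.const"): "local.get_i32.const",
    ("i32.add", "i32.store"): "i32.add_store",
}

def fuse(instrs):
    """Scan the instruction list once, merging adjacent fusible pairs."""
    out = []
    for op in instrs:
        if out and (out[-1], op) in FUSIONS:
            out[-1] = FUSIONS[(out[-1], op)]  # replace the pair in place
        else:
            out.append(op)
    return out
```

The win comes from dispatch overhead: an interpreter pays roughly one dispatch per instruction, so fusing a hot pair halves the number of dispatches for that pattern.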
As expected, this gave a huge boost, and we also did some other, smaller optimizations regarding different checks. Lastly — and this is a similar improvement in terms of speed to the superinstructions — we translated known host function calls into internal custom instructions. The kind of host functions we are interested in: as I mentioned, 256-bit operations were kind of slow compared to what we expected, so we designed an API where basically a bignum library is exposed as host functions. That worked pretty well, but the speed could still be improved, and the way to improve it is to translate those calls into custom instructions. All of these changes made quite a big difference.
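The host-call translation can be sketched roughly as follows — the host function names and internal opcode names here are hypothetical, not the actual Ewasm bignum API:

```python
# Toy translation pass: calls to recognized imported host functions are
# replaced by dedicated internal opcodes, so the interpreter can run them
# directly instead of crossing the host-call boundary on every operation.

KNOWN_HOST_FUNCS = {"bignum_add256": "op.add256", "bignum_mul256": "op.mul256"}

def translate_host_calls(instrs, imports):
    """instrs: a list of ("call", func_index) tuples and other opcodes;
    imports: a mapping from function index to imported function name."""
    out = []
    for op in instrs:
        if isinstance(op, tuple) and op[0] == "call":
            name = imports.get(op[1])
            if name in KNOWN_HOST_FUNCS:
                out.append(KNOWN_HOST_FUNCS[name])  # internal instruction
                continue
        out.append(op)  # unknown calls and plain opcodes pass through
    return out
```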
I won't really explain all of them, but please go ahead and browse. The results were that wabt is pretty fast: on the right is what stock wabt did, on the left is what native execution would do, and in the middle is our optimized wabt, which is pretty good. The red bar is the startup time and the blue one is the actual execution time.
So, with all of that combined, we are at around 3x to 4x of native, which is insanely faster than not doing any of these optimizations, so we were quite happy. But of course there were some new problems — mainly that wabt, and especially our changes, are not production-ready; this was just a prototype. The second, more problematic requirement we had is that the VM should be small and understandable by the client maintainers.
By client maintainers I mean all of those people who maintain Ethereum clients, because remember, our main goal is to introduce Wasm to Ethereum. Those people, at least today, understand the EVM — it's pretty simple, and they understand every single part of the system — and if you want to introduce WebAssembly to them, you are introducing a black box. So the requirement is: if we can make this black box understandable to them, then there's a much better chance of acceptance.
So here's an overview of the different interpreters we had in mid-2019 — just one sec, yeah. Basically these were the interpreters at the time: Binaryen, wabt, wasmi, Wasmer, and then wagon; at least these were the ones we looked at. Speed-wise, wagon and Binaryen weren't really good, so the choice was between wabt, wasmi, and Wasmer — but, as I said, wabt wasn't really production-oriented.
As we discussed — or at least we questioned it — wabt is more like a toolkit; it's really just an implementation of the specification. It's not really designed to be a production tool, at least not in the kind of context we wanted it for. That just left us with wasmi and Wasmer, and the issue we found with wasmi and Wasmer was the integration part — integrating Rust with different languages.
It wasn't really nice at the time, and Wasmer not only had an interpreter — it also had an AOT and a JIT — so it was just a large code base. All of this meant that we decided to create Fizzy, which is just an interpreter.
So finally we got to Fizzy at this point, and I'm just going to list the different goals we have; we have four categories of goals. The first category is code quality: we really want to have a small code base, because of the reason I mentioned, and we also don't want any kind of external dependencies.
At least if you can avoid them. We want extremely clean and readable code, and it has to be easily embeddable. The second category is simplicity, and this is an interesting one: we only want to support WebAssembly 1.0. Now, there is a question of what WebAssembly 1.0 actually is — we started, maybe wrongly, at least internally, to refer to the MVP as 1.0, and Pavel in the next talk will, I think, explore this topic in more depth.
But basically what we mean is the MVP without any of the extensions. Of course, this doesn't mean that none of the extensions would ever be supported, but the goal is that those extensions would only be supported once they're fully final and deployed everywhere; up until that point they're really just experiments, at least in the eyes of Fizzy.
The next set of goals is conformance. We really want to have high unit test coverage, and we want to be really strict on testing — we want to pass every single possible test. And lastly, first-class support for blockchains; I mentioned a couple of things already: the floating point part, and metering. The important part with metering here: so far I only mentioned injected metering, because we didn't want to modify a VM, but here we have the opportunity to modify the VM and do runtime metering.
Doing all of this in the VM itself is much nicer in the sense that it is much more optimal, and so we implemented metering there, and speed-wise it is much better compared to injected metering. The same applies to the call depth bound.
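Runtime metering, as opposed to injected metering, can be sketched as a gas check folded directly into the interpreter's own dispatch loop. This is a toy model with an invented two-opcode instruction set, not Fizzy's actual implementation:

```python
# Toy interpreter with runtime metering: the VM decrements a gas counter
# once per dispatched instruction instead of executing injected "charge"
# instructions, so metering costs just one extra branch per instruction.

class OutOfGas(Exception):
    pass

def run(instrs, gas, costs):
    """Execute a tiny stack machine; trap when gas is exhausted."""
    stack = []
    for op, *args in instrs:
        gas -= costs.get(op, 1)   # metering folded into dispatch
        if gas < 0:
            raise OutOfGas
        if op == "const":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack, gas
```

The design trade-off is the one mentioned above: this is faster than injection, but it requires owning the VM, which the browser scenario ruled out.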
If you don't need to inject it, but can modify the VM instead, it is obviously cheaper and faster. And regarding what I mentioned about 256-bit numbers: it is a major goal to have an efficient and good API for big numbers — and here I'd extend that to not only 256-bit, but modular arithmetic for arbitrary widths.
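Such a bignum host API could be sketched roughly as follows — the `addmod` signature and the little-endian operand layout are assumptions made for the example, not Fizzy's actual interface:

```python
# Sketch of a bignum host function: the guest passes an operand width and
# offsets into Wasm linear memory; the host does modular arithmetic on
# arbitrary-width little-endian integers in place.

def addmod(memory, width, a_off, b_off, m_off, out_off):
    """out = (a + b) mod m, each operand `width` bytes, little-endian."""
    def load(off):
        return int.from_bytes(memory[off:off + width], "little")

    r = (load(a_off) + load(b_off)) % load(m_off)
    memory[out_off:out_off + width] = r.to_bytes(width, "little")
```

Parameterizing on `width` is what makes the same entry point serve 256-bit contract math and wider moduli alike.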
So here are some numbers on what Fizzy looks like today. The core logic itself — and by this I mean the parser, the instantiation part, and the execution part — is only about 3,500 lines in total; these are the actual lines in the files, I didn't go looking at the number of statements. We have 100% unit test coverage, of course we pass all the upstream tests, and, according to our measurements, we are the second fastest interpreter on the market.
After wasm3, that is. Here's some speed comparison. I only listed these three because these three VMs are integrated into the benchmarking system in Fizzy itself. We did have some PRs to also support SSVM and to support Wasmer, but we had some issues with the APIs of those VMs and found some bugs — or at least things we didn't fully understand. I think we did open some issues upstream, but we couldn't fully integrate those into the benchmarking system.
That's why they're excluded here, but we do have wasm3 and wabt, and I think what we found in general is that Fizzy is about two to five times slower than wasm3, but five to ten times faster than wabt. We did do measurements outside of this benchmarking framework in Fizzy with these other VMs, and there we have seen that at least SSVM and Wasmer were, at the time, of similar speed to wabt.
This may have changed in the past few months, because we haven't run those since early summer, but at the time they were similar to wabt in speed. And this is a particularly heavy benchmark.
It is basically signature recovery for Ethereum, which is one of the frequently used functions. The next chart is just four smaller benchmarks: two hashing ones, a regular memset — which of course would be solved by one of the Wasm extensions — and, lastly, a 256-bit implementation. Here you can also see that Fizzy is unfortunately not beating wasm3, but sits well in the middle. Okay.
So, a few more of the features I wanted to summarize about Fizzy itself. We decided to use C++ for the core implementation, and initially we did use exceptions for everything, but that was more of a temporary measure — we didn't really want to use exceptions indefinitely — and we did remove them from everywhere except parsing, and this provided quite a big boost.
We also looked into adding support for the wasm-c-api and the wasm-c++-api, which are used by some of the other VMs, but so far we haven't done that, because it is quite a complicated API in terms of the number of interfaces which need to be introduced — the wasm-c-api header file itself, I believe, is as long as the entire source code of Fizzy. We do have a Rust binding, and we do have WASI support.
The WASI support is not fully complete, but it's easy to complete each of the functions — we just didn't have time for them. We also have a benchmarking tool, fizzy-bench, which I mentioned; it is easy to extend with new test cases, and it is what I used for the charts before. And we have runtime metering. Then, on the testing side, I just wanted to mention a few things here, but Pavel will do a really deep dive into this.
So, of course, we have unit tests, which we used to find issues — test cases which were left uncovered by upstream — but we also have a runner for the spec tests. As I mentioned, we don't really have support for the WebAssembly text format, and that is not a problem, because we can use wabt's wast2json to translate the test cases, and that's what we do.
The only reason we would ever have wanted the text format is for the spec tests, but since wabt has this tool, that was a really good find. And, lastly, we have a special testing tool based on Berkeley TestFloat, which is a quite large test suite to compare against IEEE 754 floating point conformance — again, Pavel will be talking about this in more detail. Okay.
So I think these are the two main features left to be done on Fizzy. The first one: the exceptions I mentioned, which we want to get rid of, and that would require — I mean, not an insanely big, but quite a big — refactoring of the parser.
But it would be nice to get rid of them, because that could also mean that Fizzy can be compiled to Wasm itself. I'm not sure how useful that is, but of course it's a very fun thing to do. And then there's the bignum API I keep mentioning.
We do have different versions of it, but we don't really have a final, well-designed one; that would be one of the major last remaining items for Fizzy. We could also consider some further restriction of complexity by adding limits to various fields in Wasm itself — initially we thought that this could provide quite a big boost, but now I think we're leaning against it.
So now almost two years have passed since we made this table of the different VMs, and in 2021 there are many more interpreters than in 2019.
The new additions are wasm3 and WasmEdge — WasmEdge is the new name of SSVM — and, of course, Fizzy. I want to highlight that these labels are really subjective: by "production-oriented" we just mean that they're not designed to be — or at least, based on our understanding or opinion, their goal isn't to be — used in production blockchains or such use cases.
They're more like a hobby project or an academic research project, or maybe they're at an earlier stage where they really focus on certain properties and not others. One such property for wasm3 is speed: our experience was that wasm3 is really just focusing on speed currently and, of course, they're the best at that — it's really hard to beat them, if not impossible. But we did find a few issues and bugs in wasm3, which is understandable.
Given that, at least at the time, they were not focusing on being production-ready software — they were focusing on being the fastest — and I think they're going to shift focus once all the optimizations and features are there. But that's how we understand the space.
So, based on this table, I think wagon, wasmi, Binaryen, and wabt really haven't changed that much in the past two years, but of course wasm3, WasmEdge, and Fizzy — and I believe Wasmer as well — have had a lot of development done to them.
Okay — I think I'm probably at the 13-minute mark now, so I just want to summarize what we learned. What we learned is that creating a Wasm interpreter is pretty simple; it's easy. When we sat down for the initial hackathon for getting Fizzy done, we spent less than a week together, and we pretty much got a working interpreter out of it, and that is not bad. So Wasm isn't that complex.
You can crack it in a week and create an interpreter, but we soon learned that making a 100% correct interpreter isn't that easy — that's pretty hard. It took us quite a few months to get all this testing in place, and every single nitty-gritty detail in place, to fully pass the spec tests.
I think it took us probably close to five or six months to get to this point. But making sure that all of this, while being 100% correct, is also really fast — that is an extremely hard task, I would say — at least getting to the current speed.
It took us an additional three to four months, so in total it probably took us about twelve months to get where we are: being 100% correct, with the speed we have. So I guess the takeaway here is that creating a Wasm VM is not that bad, but being correct and fast — that's quite a challenge.
That's really all I wanted to say about Fizzy. Please check out the project at the URL above, and a shout-out to the team — Andrei, Pavel, and myself — and we do have a few contributors as well. Thank you.