From YouTube: Are We Polyglot Yet? - Saúl Cabrera, Shopify
Platform and language independence are considered two of WebAssembly's main advantages, which open the door to new possibilities for secure and polyglot runtime environments and applications. To what extent and how practical is it to achieve polyglot applications on top of WebAssembly today? In this talk, Saúl will showcase – using JS-on-Wasm and Rust – how to leverage WebAssembly's Component Model to create composable, polyglot Wasm applications through safe programming language interoperability.
Hey everyone, my name is Saúl and I'm here to talk about polyglot WebAssembly, or exploring composition through the component model. "Polyglot" is a very overloaded word, like "runtimes" and other words that we use in computer systems, so I'm going to define what I mean by polyglot later on, so that we can have a common base to work from. But first I'm going to introduce myself a bit. I work at Shopify, mainly on the Shopify VM and Wasm Foundations teams.
At Shopify we are building a composable and scalable WebAssembly platform for customizations, which are basically user-defined functions. On the Wasm Foundations side we work mostly on open source, and lately we've been helping the Bytecode Alliance folks move forward some of the tooling around the component model, because that's crucial to Shopify's use case. We have also done some experiments with bringing dynamic languages to WebAssembly, and the main example of that is Javy, which is our JavaScript-to-WebAssembly toolchain. The idea behind it is very basic:
we compile a JavaScript engine, QuickJS, to wasm32-wasi, and then use it to execute your JavaScript code. We've also been experimenting with other runtimes like SpiderMonkey, which is Mozilla's JavaScript engine. Even though SpiderMonkey is heavier and a bit more complex, we think there is an opportunity here to get faster JavaScript on top of WebAssembly. We don't have specific results yet, but we've been working on something really exciting, and I'll probably do a different talk once we have more information. So, now on to the definition of polyglot.
So that's it for the definition of polyglot. Now, in order to understand why this talk even exists, it's important to give some context on how we use WebAssembly at Shopify, and the best summary of that is this slide: we use WebAssembly for synchronous execution of untrusted code in performance-sensitive contexts. There are a couple of things that we can extract from this definition.
The first one is synchronous execution. We don't allow any side effects in our WebAssembly execution, so that means no HTTP, no async, nothing; just pure execution. The second important concept is the performance-sensitive context. You can think of this as a context where delays are not allowed, or should be minimized. A good example of a performance-sensitive context is a checkout process.
If there's a delay there, your platform basically isn't functional for what the user is trying to do. The second important characteristic is that we want the smallest binary size possible. We've been working with sizes of around 256 kilobytes per binary; if someone submits something bigger than that, we won't be able to accept it as a valid program.
Some of you might ask: why is this size constraint so important? The answer is that we want highly available modules, and the bigger they are, the harder it is to store them, cache them, and transfer them over the network. So the smaller they are, the better.
With these ideas in mind, it seems like the traditional approach of statically linking a single program and running that in your runtime might not be as scalable as we might expect.
A
So
it
seems
like
what
we
want
instead
is
submitting
a
single
binary
that
requests
some
functionality
and
that
functionality
is
provided
to
you
at
runtime,
basically
dynamic
linking
that
functionality.
So
you
can
think
of
this
as
having
a
set
of
modules
that
act
as
a
standard
libraries
at
the
webassembly
layer
and
then
any
web
assembly
program
that
requests
this
functionality
is
going
to
have
access
to
it.
So
you
can
think
of
this.
A
The
perfect
mental
model
for
this
would
be
like
when
you
request
something
from
the
operating
system.
You
expect
it
to
be
there
because
the
operating
system
can
be
seen
as
a
platform
for
for
running
programs
too.
So
here
is
where
concepts
from
the
component
model
start
making
sense
to
us
for
for
our
use
case.
If
you
have
questions
about
the
component
model,
don't
ask
me
as
look
so
yeah.
He might be able to answer better. But I want to highlight three main pillars of the component model that are crucial to enabling this composition of programs: module linking, the canonical ABI, and interface types. I'm going to start with module linking. The idea here is that you can link modules at runtime without having to write glue code on your host;
your runtime needs to provide a native API that allows you to do this. Then we have interface types, which are types that describe high-level values.
As some of you might know, WebAssembly right now only supports low-level values, basically ints and floats. With the interface types proposal we can have access to high-level values like strings. And then we have the canonical ABI, which is pretty important to what we're doing, because it describes the relationship between a high-level value and a core WebAssembly value: how do I go from an i32 to a string, or the other way around? So that's important.
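To make that relationship concrete, here is a minimal plain-Rust sketch, assuming a `Vec<u8>` as a stand-in for a module's linear memory; the function names are illustrative, not the real toolchain API:

```rust
// Hypothetical sketch of the canonical ABI's string mapping, using a
// Vec<u8> to simulate a Wasm module's linear memory.

// "Lowering": turn a high-level string into core Wasm values (ptr, len)
// by copying its bytes into linear memory.
fn lower_string(memory: &mut Vec<u8>, s: &str) -> (i32, i32) {
    let ptr = memory.len() as i32; // bump-allocate at the end of memory
    memory.extend_from_slice(s.as_bytes());
    (ptr, s.len() as i32)
}

// "Lifting": rebuild the high-level string from the (ptr, len) pair.
fn lift_string(memory: &[u8], ptr: i32, len: i32) -> String {
    let start = ptr as usize;
    let end = start + len as usize;
    String::from_utf8(memory[start..end].to_vec()).unwrap()
}

fn main() {
    let mut memory = Vec::new();
    let (ptr, len) = lower_string(&mut memory, "2021-10-12");
    // The string crosses the boundary as two i32s and comes back intact.
    assert_eq!(lift_string(&memory, ptr, len), "2021-10-12");
    println!("lowered to ptr={ptr}, len={len}");
}
```

This is why, later on, a function that takes two strings shows up at the core Wasm layer as a function taking four i32 parameters: each string becomes a (pointer, length) pair.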
Now, I don't want to get too theoretical here, so let's touch on a practical use case. Let's assume that the user-defined functions you're accepting in your platform need access to a date-formatting library: your users, for some reason, need to format some dates. I've created a repository, which you can visit, with all the code for
this presentation. First I want to visit the static linking approach: how you would do this if you wanted to create a single program that runs and formats some dates. You would have a project that looks like this: an index.js, and a package.json that imports a date-formatting library
from the npm ecosystem. This date-formatting library takes two strings and gives you a formatted string in human-readable form, depending on the difference between those dates. In your build process you use a tool like Javy, pass it your JavaScript, and get a final WebAssembly module. Your code will probably look something like this.
Now, that's the easy use case. Let's go to the dynamic linking approach. Since date formatting is the functionality common to all the user-defined functions, the first thing we want to do is factor out that common functionality. That basically means we're going to have some JavaScript that requests the date functionality, and we want to link it against a performant date library, which is going to be written in Rust.
The wit-bindgen tool for Rust expects us to fulfill this contract by implementing a trait, and the trait implementation basically looks like this: you get two dates, you pull in chrono and chrono-humanize (Rust libraries that are the equivalent of the JavaScript dependency), and you execute those. When we compile this to wasm32-wasi, we get the following export.
We export a function called format that, in terms of low-level types, receives four params, which represent the two strings, and returns a pointer.
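As a rough idea of what such a trait implementation computes, here is a hypothetical stand-in written against only the standard library; the real code pulls in chrono and chrono-humanize, so the names, parsing, and output format here are my own:

```rust
// Hypothetical stand-in for a date-difference humanizer. Dates are
// "YYYY-MM-DD" strings; output is a simple relative-time phrase.

// Convert a civil date to days since the Unix epoch (the standard
// "days from civil" algorithm, valid for modern dates).
fn days_from_civil(y: i64, m: i64, d: i64) -> i64 {
    let y = if m <= 2 { y - 1 } else { y };
    let era = if y >= 0 { y } else { y - 399 } / 400;
    let yoe = y - era * 400;
    let doy = (153 * (if m > 2 { m - 3 } else { m + 9 }) + 2) / 5 + d - 1;
    let doe = yoe * 365 + yoe / 4 - yoe / 100 + doy;
    era * 146097 + doe - 719468
}

fn parse_date(s: &str) -> (i64, i64, i64) {
    let mut it = s.split('-').map(|p| p.parse::<i64>().unwrap());
    (it.next().unwrap(), it.next().unwrap(), it.next().unwrap())
}

// Two date strings in, one humanized string out (naive pluralization).
fn format_difference(from: &str, to: &str) -> String {
    let (fy, fm, fd) = parse_date(from);
    let (ty, tm, td) = parse_date(to);
    let diff = days_from_civil(ty, tm, td) - days_from_civil(fy, fm, fd);
    match diff {
        0 => "today".to_string(),
        d if d > 0 => format!("in {d} days"),
        d => format!("{} days ago", -d),
    }
}

fn main() {
    assert_eq!(format_difference("2021-10-10", "2021-10-12"), "in 2 days");
    println!("{}", format_difference("2021-10-12", "2021-10-10"));
}
```

The point is just that this is ordinary high-level code over two strings; everything about pointers and params below comes from how the toolchain lowers it, not from the implementation itself.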
The idea is that I'm going to let the JavaScript runtime, QuickJS, know that at some point in the future this functionality is going to be present, and the usage is going to look like this: in the global scope I have access to an object called Date, with a key called format that represents a function, and in the last line I call formatDifference and pass it two strings.
A
Keep
in
mind
that
these
two
strings
are
high-level
javascript
values
that
we
are
going
to
need
later
on.
So
that's
the
important
part
and
the
way
this
is
done
in
pjs
or
in
java's
codebase
is
just
creating
a
callback
that
is
going
to.
a function that looks like this in Rust. The important part here is the extern block at the top, which declares that we expect this functionality to be available somewhere in the future.
Once we convert those two values, the two JavaScript strings, into pointers (a process called lowering), we can call that function. One thing to notice, though, is that we have five params here, because the last param is something we call the return pointer area, in which we encode what the function is returning: basically the pointer and the length up to which we can read memory.
A
So
this
is,
I
think
this
has
been
standardizing
the
canonical
api
and
then
the
last
part
that
we
do
here
is
basically
from
that
return
area.
Pointer.
We
read
that
specific
memory
and
then
we
create
a
javascript
string.
So
this
is
the
process,
that's
called
lifting,
so
we
return
this
string
to
the
user
and
then
we
can
have
access
to
that
functionality.
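Putting lowering, the return pointer area, and lifting together, a plain-Rust simulation of the whole call sequence might look like this. A `Vec<u8>` stands in for linear memory and all the names are illustrative; the real glue is generated by the bindings tooling:

```rust
// Simulated linear memory shared between "caller" and "callee"; the
// i32 values play the role of core Wasm values crossing the boundary.

fn lower(mem: &mut Vec<u8>, s: &str) -> (i32, i32) {
    let ptr = mem.len() as i32; // bump-allocate at end of memory
    mem.extend_from_slice(s.as_bytes());
    (ptr, s.len() as i32)
}

fn read(mem: &[u8], ptr: i32, len: i32) -> String {
    String::from_utf8(mem[ptr as usize..(ptr + len) as usize].to_vec()).unwrap()
}

// The lowered export: four params for the two strings, plus a fifth
// pointing at an 8-byte return area where (ptr, len) of the result
// are written.
fn format_export(mem: &mut Vec<u8>, p0: i32, l0: i32, p1: i32, l1: i32, retptr: i32) {
    let from = read(mem, p0, l0);
    let to = read(mem, p1, l1);
    let result = format!("{from}..{to}"); // stand-in for real formatting
    let (rp, rl) = lower(mem, &result);
    let r = retptr as usize;
    mem[r..r + 4].copy_from_slice(&rp.to_le_bytes());
    mem[r + 4..r + 8].copy_from_slice(&rl.to_le_bytes());
}

// Caller side: lower the arguments, reserve the return area, call,
// then lift the result back out of memory ("lifting").
fn call_format(mem: &mut Vec<u8>, from: &str, to: &str) -> String {
    let (p0, l0) = lower(mem, from);
    let (p1, l1) = lower(mem, to);
    let retptr = mem.len() as i32;
    mem.extend_from_slice(&[0u8; 8]); // reserve the return area
    format_export(mem, p0, l0, p1, l1, retptr);
    let r = retptr as usize;
    let rp = i32::from_le_bytes(mem[r..r + 4].try_into().unwrap());
    let rl = i32::from_le_bytes(mem[r + 4..r + 8].try_into().unwrap());
    read(mem, rp, rl)
}

fn main() {
    let mut mem = Vec::new();
    println!("{}", call_format(&mut mem, "2021-10-10", "2021-10-12"));
}
```

The shape of `format_export` here mirrors the five-param signature described above: two (ptr, len) pairs in, one return pointer area out.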
Now we have the consumer module and the producer module (the API): one's export looks like this, and the other's import looks like the following. We can see that these signatures don't necessarily match, and here is where we need to introduce something else: adapting the functions between the two modules so they can communicate effectively. This is where we started using a tool called wasmlink, and Radu, who was here, has written a blog post about it.
A
So
if
you
don't
know
how
this
works,
you
can
read
the
blog
there
and
it
might
give
you
more
insight.
So
the
idea
behind
wasom
link
is
that
you
give
wasom
link
a
consumer
module
like
our
javascript
module,
a
set
of
interface
types
and
then
another
implementation
of
the
date
of
the
module
that
is
exporting
the
functionality
that
you
want,
and
then
it
creates
a
statically
linked
module
that
contains
all
that
functionality
and
both
modules.
communicating together. I had to modify wasmlink, because kept the way it was, it worked against the size benefits that we can get by dynamically linking at runtime. In my modification, you give wasmlink a consumer module and just the interface-type definitions of the modules that you want to consume.
A
This
gives
us
a
final
module
that
is
thinner
and
it's
smaller,
and
you
know
it
doesn't
require
the
code
of
all
the
other
modules
that
this
module
is
trying
to
communicate
with.
So
the
way
I
invoke,
this
is
just
wasmlink.
I
give
it
the
consumer
module
and
then
I
get
you
know
the
the
result
is
a
module
that
looks
like
this.
At the top we have the type of an instance, a concept introduced by the (I'm not sure I should say deprecated) module linking proposal. The important part is the last line, in which we import date as an instance of the type defined above; that instance is required to export the format functionality, plus some other functions for allocating and freeing memory in this module. This is what I was explaining before.
A
We
have
the
two
pieces:
now
we
have
the
consumer
module
and
the
producer
one,
and
we
have
adapted
the
modules
between
the
calls
between
those
two
modules.
How
do
we
do
this
in
the
host?
Well,
here
I
have
a
wrapper
around
wasn't
time
which
I'm
you
know
I
created
sort
of
like
a
std
api
and
we
should
load
the
date
functionality
that
we
want.
A
We
instantiate
that
standard
library
piece
and
then
we
create
a
guest
module
from
the
one
that
was
given
to
us
from
the
command
line,
and
then
we
create
an
instance
and
grab
the
start
function
and
we
execute
that.
So
the
important
part
here
is
the
instantiate
std
part,
which
basically
looks
like
this.
So
was
some
time
up
until
version
dot,
35.3
provided
a
way
to
instantiate
a
module
and
then
registering
that
instance
as
an
instance
that
will
be
resolved
if
someone
else
asked
for
it.
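A simplified sketch of that register-then-resolve flow, assuming a toy registry in plain Rust rather than Wasmtime's actual Linker API:

```rust
use std::collections::HashMap;

// A callable export; here just a host function over strings.
type Export = fn(&str, &str) -> String;

// Toy stand-in for a linker: instances are registered under a name,
// and guests resolve their imports by (instance, export) name.
#[derive(Default)]
struct Linker {
    instances: HashMap<String, HashMap<String, Export>>,
}

impl Linker {
    // Register an instantiated module's exports under `name`, so that
    // later instantiations can import from it.
    fn define_instance(&mut self, name: &str, exports: HashMap<String, Export>) {
        self.instances.insert(name.to_string(), exports);
    }

    // Resolve an import requested by a guest module.
    fn resolve(&self, instance: &str, export: &str) -> Option<Export> {
        self.instances.get(instance)?.get(export).copied()
    }
}

fn main() {
    let mut linker = Linker::default();

    // "Instantiate" the date standard-library module and register it.
    let mut date_exports: HashMap<String, Export> = HashMap::new();
    date_exports.insert("format".to_string(), |from, to| {
        format!("{from} .. {to}") // stand-in for the real formatting
    });
    linker.define_instance("date", date_exports);

    // A guest importing (instance "date", export "format") now gets
    // the functionality at runtime, with no per-module glue code.
    let format = linker.resolve("date", "format").expect("unresolved import");
    println!("{}", format("2021-10-10", "2021-10-12"));
}
```

The real mechanism resolves Wasm instances rather than host function pointers, but the shape is the same: one registration of the standard-library instance serves every guest that imports it by name.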
Now, with the dynamic approach, we have a size of 900 kilobytes and a performance of around 160 microseconds.
This means that, in general, we have saved fifty percent in both size and performance by going with the dynamic linking approach. So, to recap: we have gotten gains in performance and in code size, and we have also enabled composability at the runtime layer. The other advantage of this approach is that everything is Wasm-based.
If you have a platform where third-party developers submit user-defined functions and first-party developers enable those APIs, you certainly want that functionality sandboxed. You could do some of this in the host too, but at that point you're getting out of the sandbox. Now, let's see some of the cons that we have seen so far.
A
So
the
first
one
is
the
bug
ability
when
you
have
a
composition
and
we
are
and
when
you
are
dealing
with
multiple
binaries
compiled
by
different
tool
chains,
the
debuggability
story
is
not
fully
sorted
out,
so
that
has
been
a
challenge
for
us
and
then
the
other
piece
is
memory
consumption.
So
the
approach
that
I've
shown
here
is
something
that's
called
the
shared
nothing
approach
in
which
in
which
each
module
keeps
access
to
its
own
memory.
A
And
so,
if
you
have
multiple
of
this,
you
would
have
this
like
in
this
in
this
graph,
for
example,
here
in
this
diagram,
we
have
eight
modules.
You
would
have
at
least
eight
linear
memories.
So
that
means
that
your
memory
consumption
can
go
up
depending
on
how
you
architect
this
yeah.
So
that's
everything
that
I
have.
I
would
say
that
this
is
highly
experimental.
A
All
the
tools
are
changing,
but
this
has
been
something
that
has
proven
to
be
very
useful
for
our
use
case,
in
which
we
want
to
keep
performance
under
control
and
also
module
size
under
control.
So
this
is
my
github
and
twitter
handle
in
case
you
want
to
reach
out
or
take
a
look
at
any
of
the
code
that
I've
shown
here
thanks.