From YouTube: Rust and WebAssembly Working Group Meeting 2019-07-11
A
Hello and welcome to another Rust and WebAssembly working group meeting. Thanks for coming out, y'all. So I switched up the agenda a little bit to put the Binaryen and Emscripten items near the top, so that folks who are only here for those things can discuss them and then, if they're not interested in the rest, can perhaps leave. And I don't think we have anything in the RFC triage stuff, so why don't we jump right into the topic of Binaryen stuff: wasm2js and wasm-opt.
B
One thing that we've been seeing is a bunch of cool things being done with Rust and wasm on the web, and often the binaries are not optimized. Typically it's just a matter of code size, but we've actually hit browser bugs because of this: there was one on Chrome where a wasm app that hadn't been run through wasm-opt had so many locals that it ended up triggering some V8 bug.
C
That was the whole reason we added Binaryen support a long time ago. It's just that, development-wise, it's recently been a little bit slow, but it's definitely our intention to run it by default, and it should be documented as well in terms of code size. So I'm actually curious as well, because everything that we've seen from my...
C
What we've seen is that it shaves 20 to 30 percent, a good double-digit percentage, off of code size, but it doesn't tend to affect runtime performance that much. And so we haven't really documented it in terms of "this is how to make code faster"; it's more "how to make code smaller". Does that kind of match what you're seeing, or does it actually accelerate what comes out of LLVM even more?
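(A hedged aside, not from the meeting: on the Rust side, the "make code smaller" half of this tradeoff usually starts in the Cargo release profile, before wasm-opt even runs. The settings below are conventional illustrative examples, not a recommendation from the group.)

```toml
# Illustrative Cargo profile biased toward code size; wasm-opt can then
# shrink the resulting .wasm further.
[profile.release]
opt-level = "z"   # optimize for size ("s" is the slightly less aggressive choice)
lto = true        # whole-program inlining and dead-code elimination
```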
B
Yes and yes. So, moving on to that: I wanted to mention that I finished a large chunk of work on wasm2js, fuzzing it, getting it passing the written test suites, getting it to emit smaller code than the old asm.js path. So, talking to others, maybe there's some lack of clarity on the long-term plans you guys might have. I was curious if you're still using wasm2js and, if you are, whether you've hit any bugs.
D
Exact numbers: we have a function which processes a fixed amount of data, and I know the time it takes in WebAssembly, and I recorded it all. It took around 200 milliseconds in WebAssembly, and after wasm2js it took around 400 milliseconds. I also played with LTO enabled and disabled and with different optimization flags.
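The measurement just described, timing one fixed workload in each build, can be sketched as follows. This is a hypothetical stand-in (the names and workload are invented); in a browser one would use performance.now() rather than std::time::Instant:

```rust
use std::time::Instant;

// Hypothetical stand-in for a function that processes a fixed amount of data.
pub fn process(data: &[u8]) -> u64 {
    data.iter().map(|&b| u64::from(b)).sum()
}

// Times a single run; comparing this across the wasm build and the
// wasm2js build gives the kind of 200ms-vs-400ms numbers quoted above.
pub fn time_once(data: &[u8]) -> (u64, std::time::Duration) {
    let start = Instant::now();
    let result = process(data);
    (result, start.elapsed())
}
```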
B
No, go for it, go for it. Yeah, in general I'd expect around a two-times slowdown, just because obviously it's not as fast as wasm, but anything sort of surprising, like ten times slower or whatever, is definitely something we'd need to look at. But I mean, any slowdown might be interesting if it's a surprising one. Mm-hmm.
A
The only thing I'd add on the subject of wasm2js, which I'm not sure if we've made clear so far, is that, again, similar to wasm-opt, we do plan on integrating it into wasm-pack, to have it automatically downloaded and kind of provide an out-of-the-box solution for, you know, IE 11. So yeah, cool. And you also wanted to talk about Emscripten moving away from fastcomp to the LLVM backend, yeah?
B
So we basically announced recently that we feel we're ready to switch Emscripten's default to the new backend, and our plan is to get rid of fastcomp eventually. The relevance here, and correct me if I'm wrong, is that in Rust there are two paths that work when using Emscripten with Rust: one is with fastcomp and one is with upstream. Is that still the case?
C
The asmjs-unknown-emscripten target is kind of a lie, because it no longer emits asm.js, it emits JS, but maybe that's fine. So that's one where we could either just drop it entirely, or we could change it to say it's not really asm.js, it's just JS or whatever, and that might work out just as well. Yeah.
C
To be clear, this is my own personal opinion, also from the perspective of maintaining Rust's CI, and so not building another LLVM would be really nice. But I don't think there's a ton of Emscripten users in this group in general, so this probably isn't super representative of the Rust community using Emscripten. I can pretty much guarantee you there are people who care a lot about this but aren't here.
D
There is one library which is compiled to JavaScript, a JavaScript transpile of ffmpeg, if I understand the report correctly. And, well, I'd like to use Rust and C++ together, and if the updates you're going to do will bring us this support, that would be really awesome for me. We have use cases where we use the two languages together.
B
Yeah, for kind of low-level C++, probably WASI makes sense, I think, and Emscripten makes sense for things like SDL, OpenGL, all these sys libraries; that's really all we especially focus on. So it's probably kind of a niche thing to need EMCC with Rust, but there are people with game engines using SDL and Rust.
C
And when I say they're not stable, I mean they still have alpha version strings and things like that; they're not polished, like a 1.0 or something, yet. And so I think we might want to wait for the ecosystem to kind of start having published stable versions as well, and then once we get there, I think it could be trivial for us to just go and add it all in. Okay.
C
On the Rust and std side of things, one of the major things is threading in Rust and WebAssembly. We don't really have a great story for how we're ever going to actually ship that, and so it is a very strong candidate for being "you'll just recompile it on the fly".
E
Thank you guys; I'll follow up on that issue. And then I also have a random question. So the CG recently voted to move the feature detection proposal to stage one, so we're doing feature detection, and I'm wondering: clang has a feature called function multiversioning, where you have different implementations of a function for different architectures. Does Rust support that? Because that might be a user-facing way to use the feature detection stuff.
C
We don't have direct support for function multiversioning, but we do have support for "compile this function with these features enabled", and that's all we have. I think multiversioning is a little bit of a layer above that, where it creates a function shim that does the dispatch for you, and you just call that. So we have the lower level. And yes, to your question, there is a way to express, on x86 today, "this function has AVX-512".
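The lower-level pieces just described can be combined by hand into roughly the shim clang's function multiversioning would generate. This is a sketch, not something shown in the meeting; the names and the scalar bodies are invented stand-ins:

```rust
// Baseline version, compiled with whatever features the whole crate targets.
fn sum_fallback(xs: &[f32]) -> f32 {
    xs.iter().sum()
}

// Same body, but compiled with AVX2 enabled just for this function:
// the "compile this function with these features" support mentioned above.
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn sum_avx2(xs: &[f32]) -> f32 {
    xs.iter().sum()
}

// Hand-written dispatch shim: the "layer above" that clang's function
// multiversioning would generate automatically.
pub fn sum(xs: &[f32]) -> f32 {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx2") {
            // Safe because we just checked at runtime that AVX2 is available.
            return unsafe { sum_avx2(xs) };
        }
    }
    sum_fallback(xs)
}
```

Callers just invoke `sum`; the runtime check picks the widest implementation the CPU actually supports.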
C
Basically, the only thing we would need on our end is, well, we largely rely on runtime feature detection for that, so there has to be some way to detect at runtime whether a feature was enabled. I imagine that would be some clang intrinsic, like some global that is set somewhere to whether the feature check passed, or whatever the LLVM, or rather the wasm, structure would be for "check the nth feature gate and whether it returned true", some weird intrinsic.
E
The way we're thinking of doing it is basically that it's decided at module instantiation time. So yes, there's a section with the MVP version and a section with the SIMD version, but then when you instantiate the module, the engine, which knows what features it supports, obviously sort of chooses to use the SIMD one, and so the user code never branches and decides between one or the other.
E
It just calls the function as normal, and if the engine supported SIMD, it would have instantiated the module to use the SIMD version; otherwise it would use the MVP version. So it's on the engine, but it's not quite at runtime in the same way. So what we'd really need is for the Rust compiler to just emit, and this will be in the object format and everything, basically to tell LLVM that both exist simultaneously, and just add the metadata or whatever it is.
E
Yeah, I don't remember exactly when the LLVM 9 branch point is, but these things are all new enough that, yeah.
C
So, no, well, I was at a hack week last week doing Rust and WebAssembly stuff, and I don't know, it seemed really cool to see the stuff they were doing. The use case I saw was that they were taking C#, compiling it to C++, and then compiling...