Description
The Past, Present and Future of JavaScript Engines - Alejandro Oviedo, Beamery
Speakers: Alejandro Oviedo
It’s been nearly 25 years since the first JavaScript engines were created, and over time we’ve seen tremendous progress and a proliferation of multiple architectures. As the language grew in adoption, we have witnessed a steady increase in performance, including improvements for the latest additions to the language specification. We will also focus on the current state of V8, ChakraCore, SpiderMonkey and JavaScriptCore, and discuss possibilities for what lies ahead for these well-established engines.
So, hey everyone, my name is Alejandro, and today we'll be talking about the past, present and future of JavaScript engines. You can find me on social media. Before we continue, I do apologize if we go too fast over these decades of the past; it's a lot of content to get through in all of the slides, but feel free to reach out, send me questions, or find me right after the talk, and we can dig into whatever question you might have.
Back then, there weren't many virtual machines around, and note that by virtual machines, in this talk I will refer to the type of virtual machine that translates from one instruction set to another instruction set. There are other types of virtual machines that virtualize operating systems and hardware; we won't cover those virtual machines today.
So one of the benefits of introducing virtual machines around this time is that you could have some of the code of your implementation stay the same. In this case, the transformation from code to an intermediate representation would stay the same for different CPU architectures. And remember that around this time there was no monopoly, no CPU architecture that was predominant over the others; there were a couple of CPU architectures around, and you would have to actually write your own compiler for a programming language for each of those CPU architectures.
This overly simplistic diagram is how a virtual machine would look at that time, and in this decade it wasn't conceived that virtual machines could be as fast as compiled programming languages like C. Also, Smalltalk was created: a language designed specifically for educational purposes, authored by Alan Kay. Its bytecode was also executed in a virtual machine, and it is an object-oriented programming language.
By this time, what we will call the first wave of adaptive optimization had started. In this first wave, there was a profiler involved that marked some of the functions to be sent to a just-in-time compiler. In this example, on the left, say you are executing a function a thousand times; the profiler could then identify that function as a target for optimization, to be run through a compiler. In the 90s there were other improvements over existing runtimes, like Smalltalk and Self, with Smalltalk actually being implemented in production systems, but the most iconic advance...
The increase in this innovation was not just a coincidence: personal computers, which previously were way, WAY too expensive, started to become more affordable by this time. From this graph, we can see that the slope starts to get pronounced around the 90s and goes all the way up to 2005 or 2010.
During this period began the second wave of adaptive optimization, which added multiple levels to what we had previously by having multiple just-in-time compilers: say, as in the previous case, a simple just-in-time compiler being applied to a function running maybe a thousand times, and another, optimizing compiler for functions that would go over fifty thousand or a hundred thousand times. In the 2000s, we saw the web's first big steps.
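The tier-up idea behind both waves can be sketched as a simple call counter. This is only a sketch: the thresholds, the function names, and the `tierFor` helper are hypothetical stand-ins, and real engines profile far more than a bare call count.

```javascript
// Minimal sketch of counter-based tier-up across two JIT tiers.
const BASELINE_THRESHOLD = 1_000;     // hand the function to a baseline JIT
const OPTIMIZING_THRESHOLD = 100_000; // hand it to the optimizing JIT

const callCounts = new Map();

// Record one call and report which tier would now handle the function.
function tierFor(fnName) {
  const count = (callCounts.get(fnName) ?? 0) + 1;
  callCounts.set(fnName, count);
  if (count >= OPTIMIZING_THRESHOLD) return "optimizing-jit";
  if (count >= BASELINE_THRESHOLD) return "baseline-jit";
  return "interpreter";
}

let tier;
for (let i = 0; i < 100_000; i++) tier = tierFor("hotLoop");
console.log(tier); // the hot function has crossed the optimizing threshold
```

A function called only a handful of times never leaves the interpreter, which is exactly the trade-off the talk describes: compilation effort is spent only where the profiler sees it paying off.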
Multiple browsers were released, like Firefox in 2002, and Opera changed into a free-to-use approach. I actually remember using Opera; it was a nice browser, with fast JavaScript execution and fast tab startup as well. Who else here remembers using Opera? Okay, it wasn't that bad. So the web pushed JavaScript engines: by that time, you could do all sorts of things on the web, like logging into chat rooms and talking with hundreds of people simultaneously, or browsing your emails.
We see an interpreter which would take the JavaScript and run it, with no native code being generated whatsoever at that time, and no optimizations either. SpiderMonkey around that time also had an interpreter, but they would also have an optimizing compiler that would target those hot paths we were talking about, if a function was being executed too many times.
V8 at the beginning (it was created around 2008, I think; Chrome was released later) didn't have an interpreter. Initially it had two compilers: one for compiling the JavaScript code as fast as possible, right away, and an optimizing compiler that would compile those functions that were being executed too many times.
But we still see an increase in Internet users: actually, in ten years it grew from five percent to twenty-eight percent, outgrowing, I believe, every projection that people had around that time. The number of websites also grew, from less than 200 million to almost 800 million or more in the year 2014 or 2015.
What we are seeing here is a graph that aggregates smartphones by the amount of memory, and it's dated for the year 2017. Dark gray bars represent devices with one gigabyte of memory; following, there's a yellow bar with devices that have two gigabytes, and the light gray is devices with three gigabytes of memory. I'm not sure if you can make out the labels below in the graph, but developing countries like Argentina, Brazil, Colombia, Nigeria and South Africa have devices with just one gigabyte.
Now, if we dig a little deeper into some of the trade-offs that just-in-time compilers have to deal with, we'll find that, as we apply more optimizations, the generated code starts to grow and internal representations start to get more complicated. You need to keep track of more things happening, whether you inline or you hoist something out of a loop; whatever optimization you do, it often results in a memory usage increase. The second trade-off that compilers have to deal with is the latency introduced in execution time.
The best optimizations available will introduce a higher latency compared to not applying any optimizations at all. On this scale, on the extreme left, we will find interpreters, because those don't require any code generation and only require the parsing of the code for them to start up.
SpiderMonkey, being the engine for Firefox, went through a couple of changes: over six or seven years they moved from one compiler to two compilers, and then later on they changed the optimizing one, in this case to IonMonkey, which is the layout they are using today. The interesting thing about SpiderMonkey is that bailouts from the optimized code would go back to the baseline just-in-time compiler, instead of going all the way back to the interpreter.
We see that they still hadn't added an interpreter: they would still have a parser that would output an abstract syntax tree and would compile that as fast as possible, with the optimizing compiler being Crankshaft; on deoptimization, the code would go back to the baseline just-in-time compiler. If we were to graph those two on the scale we had before, Crankshaft would be closer to the right, and the baseline compiler would be to the left, but not to the extreme left, because compiling code still has some latency.
By 2017, V8 switched completely to this new architecture. Of course, they didn't just flip a switch overnight: they introduced some changes progressively, did all the testing that they needed to, and once they felt very confident with this new architecture, they changed from having two compilers to having one interpreter, called Ignition in this diagram, and TurboFan, which replaced what previously was Crankshaft.
We haven't talked too much about garbage collection or their approaches, but nowadays, if we go through all major browsers, we would be able to say that V8 and SpiderMonkey use a precise strategy, so they know exactly how many allocations they did during the execution, while JavaScriptCore and ChakraCore use a conservative approach, which is looking at memory positions and determining how...
For this year alone, there were bounties granted: sixty thousand US dollars for an integer overflow vulnerability on the Amazon Echo, and another one of around twenty thousand US dollars for another integer overflow in a TV that would allow attackers to get a shell. And that's just this year. In this vein, I know Pwn2Own also had an event in Vancouver, and they released other bounties as well.
So, for what's next, a few caveats before we move on: these are all my observations, and keep in mind that I'm not a subject-matter expert in this field; I don't work on JavaScript engines on a daily basis. My goal is to share with you all the point of view of a developer on how things could look in the future.
Just-in-time compilers will keep getting better, but I don't see them overcoming the constraints on memory that we talked about before, specifically because of how we write software: we are drawn into making short-lived allocations, which is one of the fundamental pillars of generational garbage collection, the strategy that all JavaScript engines are using today.
This year there was a blog post written by the V8 team on how JSON.parse could be between 10 and 50 percent faster than parsing a plain object literal. It is not something the V8 team was encouraging us as developers to go and change in our repositories, but rather for tools like webpack to include in their logic for bundling. And it's also interesting because this is not Chrome-specific: it would yield performance improvements on other browsers, like Firefox or Safari.
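As a minimal illustration of that pattern (the data below is made up for the example): both forms produce the same object, but for large payloads the JSON string can be cheaper to parse, because JSON's grammar is much simpler than JavaScript's.

```javascript
// Same object, built two ways: as a literal, and by parsing a JSON string.
const viaLiteral = { name: "config", values: [1, 2, 3] };
const viaParse = JSON.parse('{"name":"config","values":[1,2,3]}');

// The results are structurally identical; only the parse cost differs.
console.log(JSON.stringify(viaLiteral) === JSON.stringify(viaParse)); // true
```

This is why it is a bundler-level transformation: a tool like webpack can rewrite large embedded data objects into `JSON.parse` calls mechanically, without developers touching their source.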
It first started happening with JavaScript engines specific to very-low-resource devices, and I believe it could be a good approach, or at least an interesting one, to see with Node: an option for Node could be having a different set of flags or configuration values to let the engine know it is running in a long-lifespan context. I do know there are flags to, say, modify the memory limits for V8, but that doesn't actually change how the engine manages memory.
This slide is from last June, and we can see that many of the efforts that have been in play have had an enormous effect on helping people utilize resources online, and I consider that a win-win. If we keep this number going up, services and companies will be able to sell to more people, and people will be able to reach more resources, knowledge and other services online as well.