From YouTube: Out of Memory Must Fail Fast in JS
Description
Presented at TC39 Oct 2019. https://github.com/tc39/proposal-oom-fails-fast
Achieved stage 1 status.
Slides:
https://github.com/tc39/agendas/blob/master/2019/10.oom-fails-fast-as-recorded.pdf
The ECMAScript specification nowhere mentions the possibility of running out of memory (OOM), and so cannot be correctly implemented on finite memory machines. Allocation in JavaScript is pervasive and implicit, implying that an OOM may happen anywhere in the execution of the program. If OOM threw a catchable error, computation within the agent would continue in an inconsistent state. Instead, we should immediately terminate the agent cluster, in order to abandon all unrepairable inconsistent state.
Java, on the other hand, throws an out-of-memory error, which is a catchable error, and some JavaScript engines do that as well. Java goes further: the out-of-memory error is in a category called virtual machine error in the explicit JVM contract, and a virtual machine error can be thrown between any two instructions, or some correspondingly fine-grained unit of atomicity; I'm not quite sure what the precise phrasing is.
As an example, here's a computer-science-101 doubly-linked-list splice, where we have a node of a doubly linked list, left, and we're splicing into the list, to the right of left, the node newRight. The first three lines of the function are not problematic: newRight is modified to point into the old list.
So if, between modifying left.right and modifying oldRight.left, we do something that provokes an out-of-memory condition, and that condition is caught, and this doubly-linked-list data structure is still reachable by the computation that proceeds from catching it, you're toast. You have unpredictable confusion; you don't know what is corrupted.
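As a concrete sketch of the window just described (the function and variable names here are mine, not taken from the slides), here is the splice with the inconsistent interval marked:

```javascript
// Splice node newRight into a doubly linked list immediately to the
// right of node left. Between the two writes marked below, the list's
// invariants are violated: left.right already points at newRight, but
// oldRight.left still points back at left.
function splice(left, newRight) {
  const oldRight = left.right;
  // First writes: only newRight is modified, pointing it into the old
  // list. The list itself is still consistent.
  newRight.left = left;
  newRight.right = oldRight;
  // Inconsistency window begins. If an out-of-memory error were thrown
  // and caught here, the caller would see a corrupted list.
  left.right = newRight;
  // Inconsistency window ends.
  oldRight.left = newRight;
}

// Usage: splice b between a and c.
const a = { id: 'a' }, b = { id: 'b' }, c = { id: 'c' };
a.right = c;
c.left = a;
splice(a, b);
console.log(a.right.id, b.right.id); // b c
```

Any allocation between the two marked writes, however implicit, is a point where a catchable OOM would hand the caller a corrupted structure.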
You might think that the same try-catch-finally logic could be used to repair the inconsistency, but even this trivially simple example shows how hopeless that is. If you put the problematic part, the little section during which the state is inconsistent, inside a try-catch, what the hell do you write in the catch clause? There's really nothing sensible to write there, and even if you did come up with something sensible to write…
Today, I can reveal the following exploit, which we've been sitting on in responsible disclosure for a month, the "we" here being a collaborating group of several companies, including Agoric and Salesforce and Figma and others. We received a responsible disclosure a month ago against the Realms shim.
The Realms shim creates a sandbox around code evaluation, where inside the sandbox, when code says eval, it should only get the safe evaluator, the evaluator constructed by the Realms shim. This code, run as sandboxed code, caused an out-of-memory at a crucial point in the execution inside the Realms shim, when its flag had been flipped one way and not yet flipped back, and then caught it.
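The real exploit provoked an actual out-of-memory condition; this sketch (all names hypothetical, and an ordinary TypeError standing in for the OOM) just shows the pattern of catching an error inside the inconsistency window while the switch is stuck in unsafe mode:

```javascript
// Hedged sketch of the stateful-switch pattern described in the talk.
// The scope handler flips allowUnsafeEval just before the magic direct
// eval, intending to flip it back on the very next lookup. If an error
// is thrown and caught while the flag is still true, a later lookup of
// `eval` leaks the unsafe evaluator.
const scopeHandler = {
  allowUnsafeEval: false,
  lookupEval() {
    if (this.allowUnsafeEval) {
      this.allowUnsafeEval = false; // flip back to safe mode
      return 'unsafeEval';          // stand-in for the real unsafe evaluator
    }
    return 'safeEval';
  },
};

// Simulated attack: provoke a catchable error while the flag is set,
// before the shim's own lookup has a chance to reset it.
scopeHandler.allowUnsafeEval = true;
try {
  null.x; // any caught error will do; an OOM in the real exploit
} catch (_) {
  // Computation continues in the inconsistent state...
}
console.log(scopeHandler.lookupEval()); // unsafeEval
```

The point is not the specific trigger but the shape: corrupt state, catch the error, then observe and exploit the corruption.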
Oh god, that's funny. If this was run on a conforming JavaScript system, which Apple provides and nobody else does with regard to the tail-call issue, this exploit would not have happened. However, obviously you could transform this program into one for which the recursion is not in tail position, in which case the exploit itself would still happen. So it doesn't change the point of the exploit, but you're right: the actual code would not have failed had the underlying JavaScript engine actually conformed to the tail-call part of the spec.
In this unrecoverable situation, the virtual machine throws a catchable error, and the attack code was able to catch the error and then continue execution. The particulars of the mechanism underlying the Realms shim are the eight magic lines of code that I explained at a previous ECMAScript meeting, and the core of it is that the eval call at the bottom is the thing that's evaluating the sandboxed code.
The sandboxed code is the contents of the src variable, the source-code variable. That eval has to be a direct eval, so that the sandboxed code is evaluated within the scope of the enclosing with, which captures all scope lookups and turns them over to the scope handler shown on the right.
But in order to do that, the lookup of the name eval itself, in order to do the direct eval, has to be looked up by the scope handler, by that same with, dereferencing it to the original unsafe eval, the box in red.
The way we did this in the Realms shim, at the point when the responsible disclosure was reported to us, is that we have a switch in the scope handler: before entering these eight magic lines, we flip the switch to say, the very next time somebody looks up the name eval, give them the unsafe eval, but as soon as you do that, remember to switch back to safe mode, where any further lookups of the name eval give you only the safe eval. The reason we can publicly disclose this now is that we publicly disclosed the exploit, and our fix, yesterday.
By making the scope handler logic much less stateful, JF in particular (behind you) fixed this. But we should stress that we're fixing a symptom of the underlying problem. The underlying problem, of causing inconsistent state, catching the error, and then exploiting the inconsistent state, is something that we're going to be chasing forever, as long as out-of-memory errors are catchable.
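One way to make such a switch less stateful, sketched here with hypothetical names (this is an illustration of the general idea, not the shim's actual fix), is to confine the privileged window with try/finally so that no caught error can leave the flag stuck:

```javascript
// The flag is reset in a finally clause, so even if the privileged
// code throws and someone downstream catches the error, the handler
// has already returned to safe mode.
const handler = { allowUnsafeEval: false };

function evaluateSandboxed(thunk) {
  handler.allowUnsafeEval = true;
  try {
    return thunk(); // the "eight magic lines" would live here
  } finally {
    handler.allowUnsafeEval = false; // runs even if thunk throws
  }
}

// Even if the sandboxed thunk throws and the error is caught,
// the flag has already been reset.
try {
  evaluateSandboxed(() => { throw new Error('simulated OOM'); });
} catch (_) {}
console.log(handler.allowUnsafeEval); // false
```

As the talk stresses, though, patterns like this only shrink the window; while OOM remains catchable, some window always remains.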
The green rectangles are agents, and multiple objects in the same agent can interact with each other synchronously, which is the red arrows, but an object in one agent can only interact asynchronously with objects in other agents. This means that, under that picture, the agent is a perfectly viable unit of preemptive termination, and this is exactly the Erlang model. The Erlang process, when it hits an unrecoverable condition, is immediately terminated; no further code within the process is executed.
So the fact that it's in an inconsistent state is not observable after that point within the process, and other processes are only asynchronously coupled to that process. The other processes now have the burden of reacting to the sudden absence of that process. So there's still a burden, to recover functionality, to recover from the condition, but it's a burden that can be met, because the entities that are reacting to the absence are only asynchronously coupled to the absent computation.
[Whether the state is] unrepairably corrupted, I don't know the answer to that; I propose that that be part of the investigation here. There's also the question, if it is contagious through writes through shared array buffers: do you do it according to the entire agent cluster, which is all the potential sharing relationships, or do you track the actual sharing relationships and have it be contagious along the actual sharing? All of these are open questions.
For other hosts, for a device for example: what do devices do right now when, let's say, a watchdog timer runs out? The device gets rebooted. You could imagine that it might be useful to also just reboot a device that runs out of memory. In a situation in which you have all the bookkeeping to support abortable transactions, like JavaScript run on a blockchain, you could imagine that these conditions cause a transaction abort, and then the computation falls back to a previously consistent state and moves forward from there.

And then what Erlang shows with its supervisor architecture, and likewise the KeyKOS operating system with its very similar keeper architecture, is that you can create abstractions in the language for some code to create units of computation that can fail preemptively, such that the creating code can arrange how the termination of the created code is handled.
So the approach that we're going to show, very much again still following the Erlang and KeyKOS model, can be thought of as a generalization of the philosophy that we've already taken to weak-reference finalization, which is the post-mortem philosophy. With Java finalizers, from inside the finalization, the condemned object is still accessible, and that has only caused trouble. In weak-reference post-mortem finalization, the condemned memory is never again accessible from anything, including the finalization logic that's reacting to the condemning of that memory.
The right units are such that when you create, let's say, an agent cluster, you can provide, as an option, an out-of-memory keeper, such that the agent cluster is allocated out of a different budget of memory. When it runs out of memory, it gets preemptively terminated, but then the keeper gets immediately invoked, so that recovery processes outside the condemned computation can then proceed to recover the overall functionality of the system. And this is very much in line…
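No such API exists in the spec or the proposal yet; as a purely hypothetical sketch (every name here is invented), the keeper idea could look something like this, with allocation against a budget standing in for real memory accounting:

```javascript
// A "cluster" runs against a memory budget. Exceeding the budget
// preemptively terminates the cluster; only then is the keeper
// invoked, entirely outside the condemned computation, in the
// post-mortem style described above.
function runCluster({ budget, keeper }, work) {
  let used = 0;
  const alloc = (n) => {
    used += n;
    if (used > budget) {
      // Preemptive termination: abandon all cluster state.
      throw { terminated: true };
    }
  };
  try {
    work(alloc);
    return 'completed';
  } catch (e) {
    if (e && e.terminated) {
      keeper(); // post-mortem: the keeper never sees the cluster's state
      return 'terminated';
    }
    throw e;
  }
}

const result = runCluster(
  { budget: 100, keeper: () => console.log('keeper: restarting service') },
  (alloc) => { alloc(60); alloc(60); }, // exceeds the budget
);
console.log(result); // terminated
```

The essential property is that the keeper, like a weak-reference finalizer or an Erlang supervisor, reacts to the termination from outside and can never observe the condemned computation's inconsistent state.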