From YouTube: SES-mtg: Realms vs Evaluator vs SES
C
Something that we saw ourselves in the code before it was reported; we expected it to be recorded. It's something that existed before the latest changes. What's happening is that we are evaluating a script, a new string, a new function, in the new realm every time. The realm on which evaluate is called, when it's a root realm and when the realm is not frozen, allows an attacker to poison the intrinsics. When this script gets evaluated, it uses the poisoned intrinsics, and then it opens the door to an exploit.
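To make the attack shape concrete: the following is a minimal sketch (not the actual reported exploit) of how code evaluated early in a non-frozen realm can poison a shared intrinsic and observe what later code passes through it.

```javascript
// Sketch only: how a non-frozen realm lets early code poison an intrinsic.
// Attacker code runs first and replaces Array.prototype.map:
const originalMap = Array.prototype.map;
let leaked = null;
Array.prototype.map = function (...args) {
  leaked = this; // capture whatever array flows through map
  return originalMap.apply(this, args);
};

// Victim code, evaluated later in the same realm, uses the intrinsic:
const secrets = [1, 2, 3];
const doubled = secrets.map((x) => x * 2); // silently observed

// Restore the intrinsic so the host environment is not left poisoned:
Array.prototype.map = originalMap;
```

Freezing the primordials before any untrusted evaluation closes exactly this door: the assignment to `Array.prototype.map` would simply fail.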
A
Security bugs against the realms shim when used with non-frozen realms: that is not a vulnerability for the platform that Agoric is building for our own secure use, which is the evaluator shim used both in a single-root-realm environment and where we do freeze the primordials before we load untrusted code.
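As a rough illustration of what "freeze the primordials before loading untrusted code" involves, here is a hand-rolled transitive freeze applied to a stand-in object graph. The real SES lockdown is far more thorough (prototype chains, accessor properties, hidden intrinsics); this shows only the core idea.

```javascript
// Transitively freeze an object graph (sketch; real lockdown does more).
function deepFreeze(value, seen = new Set()) {
  if (value === null) return value;
  const t = typeof value;
  if (t !== 'object' && t !== 'function') return value;
  if (seen.has(value)) return value; // break cycles
  seen.add(value);
  Object.freeze(value);
  for (const key of Reflect.ownKeys(value)) {
    const desc = Object.getOwnPropertyDescriptor(value, key);
    if (desc && 'value' in desc) deepFreeze(desc.value, seen);
  }
  return value;
}

// Stand-in for a realm's intrinsics (NOT the real globals):
const fakeIntrinsics = { Array: { prototype: { map() {} } } };
deepFreeze(fakeIntrinsics);

// After freezing, a poisoning attempt fails:
const poisoned = Reflect.set(fakeIntrinsics.Array.prototype, 'map', () => {});
// poisoned === false
```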
A
So one thing that I'm wondering, that I'd like us to figure out, is: is there someone else who wants to take on the burden of the realm shim, specifically where we have vulnerabilities there that do not overlap with vulnerabilities in the evaluator shim? And if not, then how do we get off a responsible-disclosure treadmill on a security kernel whose security we don't plan to use?
A
And it's also preserving realm separation, trying to keep the realms isolated from each other, and it's ensuring that the global object... part of the larger story there is that it's confining: it's giving a way to confine code in the created realm, so that the code running in the created realm has no more authority than is granted by the creating realm. Okay.
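The confinement contract can be illustrated with a hypothetical helper. This is not the realm-shim API, and `new Function` by itself does not actually confine anything; the sketch only shows the endowment-granting shape, where the creating realm decides exactly what the confined code receives.

```javascript
// Hypothetical helper: evaluate an expression with explicit endowments.
// The creating side grants capabilities by name; nothing else is passed.
function evaluateWithEndowments(src, endowments) {
  const names = Object.keys(endowments);
  // Strict-mode body; only the endowment names are supplied as parameters.
  const compiled = new Function(...names, `'use strict'; return (${src});`);
  return compiled(...names.map((n) => endowments[n]));
}

// The evaluated expression can use only what it was granted:
const sum = evaluateWithEndowments('x + y', { x: 2, y: 3 });
// sum === 5
```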
E
And I think this goes to more engineering work, which follows in the same vein as the security support that you're trying to get out from underneath, which is the disentangling that you did for the evaluator shim. Would there be, conceptually at least, a corresponding disentangling on the other side? Such that the realm shim would be just the realm shim, and the evaluator-shim component of it wouldn't be a component of it; it would just, well, use the evaluator shim inside it if you want to. Yes.
A
That's exactly right! That's what I'd like to see happen. Yeah, one of the things that I was hoping for, that I probed last meeting and got a negative answer on, was reconstructing the realm shim in a principled way (Agoric itself would not benefit from it), so that it was only doing its specialized job and leaving the evaluator shim's job to the evaluator shim. I would very much like to see that reconstructed realm shim, specialized in that way. That specialization would also make it much more likely that it can be done securely.
A
That's right; I want the standardization proposals to reflect that modularity. Even if nobody works on the reconstructed realm shim, I want it to be as if, you know, the hypothetical shims that correspond to what we want to specify should be that reconstructed, minimal realm shim. But Agoric has no need to reconstruct the realm shim, because we're not planning to use multiple root realms. I was hoping that Salesforce would be interested in picking it up, but it sounds like they're not.
A
Agoric did run across a good use case for SES that would involve both the realm shim mechanism, in order to create multiple root realms, and the choice not to freeze some. That was in talking to Brave about some things that they might do with SES. So that was a good exploration; we came up with a good plan, but Agoric and Brave are not pursuing that plan. Nevertheless, it's a...
A
Yes, Node right now has this vm system for creating multiple realms. The realm shim on Node leverages that. The evaluator shim would have no awareness of that, of course, and would just secure whatever realm it's in. If you wanted to do inter-realm computation within Node in a secure manner, then yes, the reconstructed realm shim would be quite relevant there. So.
D
Just to be clear: once we come back to the committee, update on the proposed separation and such, and continue working on the realm proposal, it's needed anyway for many cases. When you want to create a new realm instead of an iframe, those who use it directly will have to have the polyfill for it, and at that time we'll probably work on creating the polyfill, for those that may have used the iframe directly and don't have all the other feature set that was added to the polyfill.
D
So you have a polyfill for realms, and I feel that that's not a problem. You have it; someone will maintain it, maybe someone else maintaining it, but we'll have a polyfill for every proposal that we do, and we'll have a polyfill for the evaluator proposal as well, and hopefully this time around...
D
...we just don't add things that we haven't agreed on in terms of the specification of the proposal, and anything that is additive to it will be just a separate library on top of it, an abstraction on top of it, so we don't make the same mistakes that we made before, and I feel that someone will pick it up. It's not really a problem.
A
Okay, and obviously, as bugs are filed against it, we will check whether this indicates that there might be a security vulnerability in something that somebody is using in production, because sometimes one discovery applies to both. But once we get a "no" answer on that, then we will go ahead and treat it as public.
C
...attack. And yeah, I think that removing from the original proposal everything that's related to cross-realm is right. So if we go back to the history: there was the idea of a frozen realm, and then there was this realm proposal, and the frozen realm was really about an evaluator inside of an environment with the intrinsics. When it got merged into the realm shim, we got the added behavior of creating what we call the root realm, as opposed to the later realm.
C
Well, it's interesting, then, that the extension is removing from its parent everything that it was supposed to inherit from the realm API. It's the thing that we're getting out of, because it's the more problematic part, and there's value in just the subclass by itself, where conceptually there's no concern about, and no behavior related to, creating a new root realm.
C
Naming the problem, the realm shim aside: I think that we have to be careful with this treadmill of disclosure. We are not planning in any way to maintain the realm shim; we're just doing it ad hoc every time something is found, and going through those cycles is not... it's just a waste of time. So we should... and also, maybe in the minds of the attackers, we're not yet seeing the whole suite of the problems that have been discovered, and those are just dropped.
A
We're not yet saying that people should be using this in production when real assets are at stake. At some point we are going to be saying that, but until we say something like that, bugs against it also shouldn't need all of the responsible-disclosure process. We should still go ahead and check whether people doing similar things in production now, like Salesforce, might be vulnerable to anything reported against this.
C
But is this just salvaging the evaluator part of it? No, it includes everything we've learned through the history, even the latest discoveries. But I think we still need to have a full review, line by line, with you, like we did a year and a half ago. Yes, to confirm that that code is still safe. Yeah.
C
I think that's right. I think, you know, this is removing everything that's related to realms, and now that's the base on which we want to build. So we need to do the documentation; we need to go back and confirm the API and the terminology that we want to use. Things have changed a lot over the past year, I would say, and even a lot of the tests need to be conformed to an organization that follows the structure of the code.
C
So there's a lot of things that are still under the structure of the realm instead of the evaluator, but at least, by itself, it's devoid of any mechanism that involves re-injecting the code into a new root realm, and of all the complexities related to late capturing of intrinsics; all the intrinsics should be captured early with this system. Yeah.
C
That's the intention, but I'm still concerned about the handling of `this` and the techniques that we have in general to stay away from the intrinsics, because of the latest problem that we discovered, where the `this` value had been used in creative ways. I think we need to do another pass at that, but currently we just don't have time to do this. Okay, so.
A
We can say we're first trying to get to where we can use it with confidence, assuming the primordials are frozen before untrusted code is running, and then at a later stage we are still, or would still be, shooting for being able to be confident that the shim preserves security even when the primordials are not frozen. Yeah.
C
And also, looking at the integration, the actual usage of this library in the contexts that we intend to use it for, will help us confirm the API and the behavior of the evaluator. At the same time, if we could proceed with the architecture of the evaluator in relation to the realm, to the compartment, and to the module loader, this would also help shape the structure of the different proposals and how they will play with each other.
A
So I just pasted into the chat the draft spec for standalone SES. This was very much motivated by TC53; TC53 and Moddable really did start with the document that I just pasted. The point of this document is that there are really two aspects of the evaluator in SES. One is the API for starting from a full JavaScript system and creating this protected subset of JavaScript with these enforced protection properties. But the other aspect of this is that you can start from the inside.
A
The embedded JavaScript engines that they're anticipating just build this standalone environment from the outset. In fact, that's what Moddable actually did: Moddable has now directly built a configuration of their engine which they're planning to use for these purposes, which just starts out in the state that this document describes.
A
Here's the API for creating that end state, and here are the various switches, options, knobs, or whatever, that enable you to make variations on what the end state is. But the end state that you start off describing is the default one, which is: if you didn't use any of the options knobs, what do you get by default? And then what things you can switch with options will also be based on feedback from the device guys about what reasonable alternative configurations make sense in their world.
C
Yeah, I think their exploration is very valuable. A lot of TC39's own proposals come from methods or internal objects or internal abstract operations that are useful and have been worked into a pleasant public API. Moddable has done just this, and it offers the opportunity to look at how far they've gone and what type of API they've developed; it could become a standard by looking at the actual decisions that they had to make.
A
You know, one of the things that has been very, very satisfying, going back and forth with Moddable (they were just over here at Agoric the other day, by the way; yesterday, I think, or the day before), is that their primary focus, engineering-wise, has always been making things work on devices, which pressures them into being small. And they always had security goals, but they didn't really know how to carry through their security goals.
A
...until, you know, we started talking about ocaps and SES. But what they did for smallness just fit: what they did for optimizing for the device context is just so incredibly aligned with security. We've already talked about some aspects of that here, like the fact that they have a very simple interpreter that doesn't do lots of case splitting.
A
Another thing that's really fascinating is that they have what they call a preload architecture, where they run part of their JavaScript code at build time, in an environment where there are no devices; it's just a completely isolated, self-contained JavaScript system, but you engage in all the initialization computation for your app at build time, and then there's this snapshot of the heap that happens.
A
The snapshot only happens once the stack is empty and the promise queue is empty, so the heap really is all of the state. All of that gets snapshotted and turned into ROM, and then what happens on the device is that computation simply continues from that point, but it continues from that point now with all of the primordials being frozen. So, part of what they do, part of the semantics: there are two semantics attached to the snapshot. One is the same...
B
I guess so. Just a tail topic, since we're talking about the evaluator: I would just like to get possible feedback from everyone about the nature of potential parsing that anyone would be interested in, if they wanted that runtime aspect. You know, like, if people have identified particular uses for parsing, or priorities, because I'm trying basically to drive my work, where it's parsing JavaScript, to focus on things that would be of critical use. Okay.
B
So, you know, those are, I guess, load-time parsing requirements more than runtime ones. But the concern that comes to mind is when you're evaluating. Like, the only two places where runtime parsing may be important are when you import and when you evaluate. So has anybody, you know, encountered evaluation-related considerations, or, you know, use cases?
A
Oh, by the way, I looked up shim versus polyfill. The definitions that I found, like on Wikipedia and elsewhere, do not align with the difference you're explaining, so I'm inclined to continue to talk about what we're doing as a shim. The definition that I saw in several places is that a polyfill is a shim in the browser; it's a term specific to the browser, whereas a shim is the general term.
A
Yeah, we now have the regex where, if we see anything that looks like an import expression, then we statically reject, and in order to do that reliably without parsing, we're also going to statically reject if something that looks like an import expression appears inside a comment or inside a string literal. Yeah.
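A hedged sketch of that strategy (the pattern below is illustrative; the shim's actual regex is more refined): since the check runs over raw source text with no JavaScript parser, it must reject import lookalikes even inside comments or string literals, deliberately accepting false positives so it can never miss a real import.

```javascript
// Conservative, parser-free rejection of possible import expressions.
// Illustrative pattern only; the real evaluator shim's regex differs.
const possibleImport = /\bimport\b/;

function rejectPossibleImports(src) {
  if (possibleImport.test(src)) {
    throw new SyntaxError('possible import expression rejected');
  }
  return src;
}

rejectPossibleImports('1 + 1'); // accepted

// Rejected even though the word only appears in a comment: a false
// positive the strategy deliberately accepts.
let rejected = false;
try {
  rejectPossibleImports('/* import */ 1 + 1');
} catch (err) {
  rejected = err instanceof SyntaxError;
}
// rejected === true
```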
B
So can I just pull that thread a little bit further? Because I did explore this idea, right? So with the regex, you're looking for the keyword import. If there were an eval that takes in a string, you could mangle an arrow function that would do a dynamic import of its first argument, or whatever, and then you can mangle the way you go about constructing that string so that it would not actually be caught by the regex. Yeah.
A
So the reason we're not doing that for the evaluator shim is that, for the shims (this is distinct from the proposals), the confidence we have in the evaluator shim is, among other things, based on the fact that it does not have a JavaScript parser, and therefore it cannot blow its security by getting JavaScript parsing wrong.
B
Sorry, I didn't hear. Yeah, so, like, every time I try to wrap my head around how you could, you know, somehow gain access to either eval or import, and somehow hide it until it's a variable that you're calling that is neither, I find that you have to really control access to both. Every time there is an import, you have to... like, what I'm doing is parsing, right? My goal is to do parsing on everything coming in and do subtle rewrites for particular things.
B
That's my use case, right? So every time I have an import, I have to look for smells, for eval and import being wrapped as a function, and I just wondered: if it's a priority elsewhere, then I would dedicate more time to, you know, trying to create a lightweight parser approach for that. That definitely comes up, right.
C
So that's highly desirable, because what we identified in the past is: maybe use a very simple regex to find the pattern, which would trigger more false positives, but then, once those are discovered, send them to a more precise parser that could actually do the rewrite and do the right thing. That's the mental model.
C
And the other thing is to reduce or prevent limitations of the shim, for example when we declare global variables and things like that. So, for example, a top-level var could be rewritten in order to provide the proper behavior, and that would increase compatibility. It's not for security; it's to improve compatibility of the shim with the specs.
B
Okay, great. So can I also get some feedback on two schools of thought? One: you parse accurately. Or: you parse cautiously, but you leave room for the spec to subtly change (an operator gets added, something like that), all right? So the balance here is that if you really have to parse very accurately, you always have to be up to date, or else you're going to be a security risk.
B
If you have to parse carefully, in a way that is designed to actually borrow on the nature of how operators need to behave in certain ways when they're being written, by, you know, someone who's reading code; you know, you expect keywords to appear in certain places of the syntax, operators, you know. So with that logic you could write a, you know, forgiving parser that is not naively forgiving, but that's like, you know, a dream, really, you know, being able to get it.
A
Let me try an assumption here: the ECMAScript committee usually, almost always, tries to ensure that all old programs that used to parse continue to parse with the same meaning that they had. We actually have violated that occasionally, but almost never. Well, assuming we don't violate that going forward, then a parser which is constructed for a given version, that is precise in the sense that anything that is not syntactically accepted today gets reliably rejected...
B
Yeah, so, in terms of backwards compatibility of code in, you know, future revisions: code parsed by a new parser of a new version of ECMAScript, where that is old code. I believe, you know, that assumption definitely is important when designing a parser; you rely on that notion. You know, at least I was inclined to think of that a lot, and I wanted to confirm that.
B
That is actually the... I believe that is affirming it 100%, yes. And then there's what I do not believe; right now I don't have any, you know, particular fact-checking, but based on, you know, what I hit when I was doing all the work, that is the case: every time you introduce a new feature to the language, it should not accidentally parse semantically differently on an old parser.
A
So, given all of that, with regard to your first question: I would prefer an accurate, precise parser that was exact with regard to a version of the language, rather than trying to build something which is looser. I think the looseness, no matter how well-intentioned, is probably asking for trouble. Yeah.
E
Fair enough, fair enough. But the point being that your concern about doing a complete parse (correct me if my understanding is wrong) is not a performance consideration, like it's too long and complicated to do a complete parse, but that the probability of getting it wrong is higher, because it's much more complicated. Is that right?
E
And this is, I think, a case where you could put forward the argument that actually building a parser API into the language, with suitable qualifications in the spec, could be a huge boon, simply because if you require that the parsing that this parser API does is the same as the parsing that the running engine does, then even if it's wrong, it's going to be wrong consistently: it's going to interpret it exactly the same way the evaluation and execution will interpret it. Yes.
E
Exactly. And it's interesting to me that a lot of the most valuable innovations in the JavaScript language, as seen by the JavaScript programmer, some of the greatest innovations that TC39 has made, have come as a result of taking existing abstractions that were part of the language definition and surfacing them into the world visible to the programmer. And this is in that tradition. Yeah.
B
But it has a lot of pushback, though. So here's a hopeful scenario; that would really mean, you know, we have enough to try to, you know, fill the gaps. The hopeful scenario would be that we would know what parameters a parser would take to parse the different variants of parsing, you know, the different goals of parsing. So what are the options?
B
You know, there's pushback, because people don't just want parsing: they want ASTs that conform to something, and they want to be able to mutate those ASTs, and I don't think browsers, or really non-browsers, you know, like implementers, are really excited to make sure that there are no differences, or you know.
B
But I mean, if you could agree on the potential parameters that lead to parsing with different goals; like, they just talk about module code and global code, and, you know, there are others. Like, we need a section that really says what are the flavors of goals that you can start from as a parser. You know, a section dedicated to talking about parsing, you know, source text, not about how implementers parse source internally.
E
I mean, having access to a complete AST would be useful for a whole world of things, but I think there is a bunch of valuable use cases that don't require that, where you could simply get back an opaque reference to a thing, which you would then be allowed to use in other contexts, depending on what kind of thing it was. Yeah.
A
So there are several times in the past where somebody tried to propose something that gave you an AST, and there was pushback because people were just not ready to agree on an AST. Yeah. At this point there really is kind of a de facto standard AST, which is, you know, with some details divergent, but basically Babel and Acorn and all these others are pretty much using the same AST. Right, right.
A
...this format is only to minimize network traffic and parse time, or maybe even just to minimize network traffic, while explicitly disclaiming that this format is for interpretation by programs. And with regard to all the issues we're raising over here as to why we would like to be able to reliably interpret a program: I would need to know exactly where I would look in that binary AST format to reliably get an answer.
C
I think it was more of a Bloomberg concern, because it was on the library side. So there was an appetite; I think it was LinkedIn and Bloomberg, because of the size of their libraries at the time, when one of the pushbacks was that it would open a channel for people who have the means to provide servers...
B
...you know, a web component, and get it to run in the browser. And, you know, these kinds of additional trips that force people to actually have command lines, and, you know, not directly get a package and use it in the browser: those, for me, are problematic gaps. So I was trying to find parsing, you know, minimalistic parsing requirements, and solve those, so I would be able to avoid the need to actually have to link and have to go through, you know, the command line.
B
Exactly; it was Rollup, actually, in the experiments I did. It was nice, because with Rollup you could actually get it, like, by fetching and evaluating, right? And, you know, right there I had my build system in the service worker. So, from there, like: okay, can I, you know, avoid the need to fetch and run Rollup every single time? And that's where things started to unravel.
A
I have some things to report on. As I mentioned, I recently had a meeting with the Moddable guys. One of the things that, you know, we're wrestling with, both in these meetings and separately, in a bunch of, you know, work we've started doing at Agoric but haven't pushed far enough yet, is static checking for module purity. We identified, on paper, a set of static checking rules for checking that a module is purifying, and what I mean by a purifying module is...
A
...making immutable all of the state of the heap as of the snapshot, and that includes assignable variables and internal slots that currently have no user-level way to make them non-mutable. So the example is: the Date object has an internal slot where it keeps track of the current date, and the built-in Date methods will mutate that, so there's no such thing as an immutable Date object. Obviously you can do a wrapper, but the thing that's recognized by the language...
A
The Date object cannot be made immutable in the language. The Moddable guys, actually, if they find a Date object in the heap to be snapshotted, will turn it into an immutable Date object, and after the snapshot, on the device, if an attempt is made to mutate that internal slot, it actually throws a TypeError.
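Moddable enforces this inside the engine, at the internal-slot level. A rough userland approximation of the described behavior (a Proxy-based sketch, not what their engine actually does) looks like this:

```javascript
// Userland approximation of an immutable Date: mutators throw TypeError.
// Moddable's engine enforces this on the internal slot; this is a sketch.
function immutableDate(date) {
  return new Proxy(date, {
    get(target, prop) {
      const value = Reflect.get(target, prop);
      if (typeof value === 'function') {
        if (typeof prop === 'string' && prop.startsWith('set')) {
          return () => {
            throw new TypeError(`${prop}: Date has been made immutable`);
          };
        }
        return value.bind(target); // Date methods need the real Date as `this`
      }
      return value;
    },
  });
}

const frozenEpoch = immutableDate(new Date(0));
const time = frozenEpoch.getTime(); // reads still work: 0
let threw = false;
try {
  frozenEpoch.setTime(1234); // any mutator throws
} catch (err) {
  threw = err instanceof TypeError;
}
// threw === true
```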
A
As of the module being initialized: when the module is fully initialized, you look at the exports. If the exports are not transitively immutable, looking at the internals of them that only the platform has access to, that could become an error, with all of its exports turning into erroneous exports, very much like a module that fails to load. And if the module is in a strongly connected cycle, then, just like a module that fails to load, the entire strongly connected cycle fails together.
A
You couldn't reference built-ins that could not be made immutable. So this plays well with the read-only collections proposal, because you could create a snapshot of a map that was in the initial state, but you could not have a mutable map in the initial state, and that's in particular why Moddable is so interested in championing the read-only collections proposal.
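The shape of such a snapshot can be sketched in userland with a hypothetical helper (the actual read-only collections API may differ): capture the entries at snapshot time and expose only readers.

```javascript
// Userland sketch of a snapshot (read-only) view of a Map.
// Hypothetical helper; the real read-only collections API may differ.
function snapshotMap(map) {
  const entries = new Map(map); // copy entries as of the snapshot
  return Object.freeze({
    get: (key) => entries.get(key),
    has: (key) => entries.has(key),
    get size() { return entries.size; },
  });
}

const live = new Map([['a', 1]]);
const snap = snapshotMap(live);
live.set('b', 2); // later mutation of the live map...
// ...is not visible through the snapshot:
// snap.has('b') === false, snap.get('a') === 1
```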
A
So, with the current semantics of private fields, you would also have to turn that into... actually, with the current semantics of private fields, you have to treat that like a `let` variable, where you don't actually have to reject it merely because of the existence of the private field, because you can scan all of the source code in that lexical scope and see if it is ever assigned to.
A
If there are instances in the initial state that have private fields, and the code that's in the lexical scope of that private field never assigns to that private field, then you could decide to treat that as a non-assignable private field, just like I've been advocating that a `let` variable that's never assigned to is treated by these rules as if it is a `const` variable.
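That static rule can be sketched as follows (the helper name and the regex are illustrative only; a real purity checker would walk a parsed AST rather than scan raw text): look for any assignment to the binding other than its declaration, and if there is none, treat the `let` as if it were `const`.

```javascript
// Sketch: is a `let` binding effectively constant (never reassigned)?
// Illustrative regex scan only; a real checker would walk an AST.
function isEffectivelyConst(source, name) {
  // Match `name =` not preceded by a declaration keyword and not `==`.
  const reassignment = new RegExp(
    `(?<!let\\s|const\\s|var\\s)\\b${name}\\s*=(?!=)`
  );
  return !reassignment.test(source);
}

const neverReassigned = 'let counter = 0; use(counter);';
const reassigned = 'let counter = 0; counter = 1;';
// isEffectivelyConst(neverReassigned, 'counter') === true
// isEffectivelyConst(reassigned, 'counter') === false
```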
A
This was from a dinner on the side, between Shu and some other people from Google, Michael Saboff and other people from Apple, and myself and JF from Agoric; Patrick from Moddable was invited, but he didn't get the invitation until afterwards. The deadlock that we were trying to get past, and that we did get past, was this: the original Apple built-in modules proposal was proposing that there be a built-in `js:` namespace on module specifiers, for which TC39 are the gatekeepers. Or is it always that TC39...
A
...is the gatekeeper; the gatekeeper, I suppose. In any case, as those who were there might remember, this turned into a huge political fight, with Dominic taking a very, very strong position that there must be no such thing. In response to that, the Apple guys tried to evolve their proposal into something more complicated, in hopes of being able to get consensus on it. At this dinner, with the context being that Dominic is no longer representing Google at TC39, we came to agreement in principle on Apple's original proposal, without the elaborations, which none of us liked; none of us, including Apple, liked them. They were just done in order to try to find an agreeable position.
A
Rather, we all agreed on the initial position, which I'm very, very happy with. That doesn't turn into something ready for stage advancement, or a technical proposal, yet, because there are still two outstanding technical questions that are of concern to everybody, and very relevant to the concerns of this group. One is something Jordan brings up: that once there are built-in modules, some of the functionality that would have been added to the global is instead added to the `js:` part of the import namespace.
A
A script could do a dynamic import and put a `.then` on it, and then have the remainder of the script logic happen after that had completed, and that would be fine with me. But I agree in principle with Jordan that it's weird to add new functionality to the platform that, if it had been added earlier, before we made this change of style, scripts could have used synchronously, and merely the decision to provide it as a module rather than a global prevents scripts from using it synchronously. That's certainly an awkward consequence, but...
A
So I think we should examine that as a possibility. I don't immediately see anything that disqualifies it. All the complexity that I would have liked to avoid, that that introduces: all that complexity was already introduced by the import expression, so I think none of that is made worse by import statements in scripts.
F
...users and their ergonomics, because they are already forced to use asynchronous APIs for most tasks, especially with I/O. But polyfills: my argument here is, if the polyfill is loaded first, it doesn't matter whether it performs its work synchronously or asynchronously on the target, be it the global or a built-in module, because it will have first access and can perform its mutations as necessary prior to any other code running and being able to modify those modules.
A
That makes sense in the browser context. If the polyfill's work is spread over several turns, then, you know, can the normal HTML script-tag way of loading successive scripts have an earlier script postpone later scripts until after something that happens in a later turn is done? No? Okay, I think that's...
F
There are odd ways to do things, I should say, with document.write in particular: ad-serving mechanisms use document.write to effectively spin a nested event loop, and they could do it with that. It is not ergonomic, it's not pleasant, but in light of not having actual evidence of a real, concrete case of needing to do this, I do not feel we should jump to a conclusion about what we should do. Okay.
F
Essentially, what you can do is perform a document.write call, and the browser will stop all execution currently and perform anything within that, its first tick, within that, and so you can start to have odd timing situations. Let's go with that: effectively, document.write, if you insert a script tag, will block the page while it is executing its first tick. So what ad vendors and the like do is explicitly do that to control a tick and escape, and they can queue stuff before anybody else on the event loop.
F
What do you mean by tick? When the JavaScript stack unwinds. Okay.
A
So, turn. Turn.
A
A
F
A
A
A
A
Okay, so the other issue that comes up with regard to built-in modules is that there are two inconsistent demands, or at least naively inconsistent demands, that people want to make on built-in modules.
A
One is that they're born in a monkey-patchable state, in a shimmable state, just like the primordials are. And the other one is that they're pure, although most people asking for it use different terminology, and it might not mean precisely what we mean by pure; they mean essentially the same thing. I was very pleased at how many people independently came to the desire that the built-in modules be in a locked-down state of some sort.
A
A
A
So there's two pleasant ways, or at least not horribly unpleasant ways, I could imagine approaching reconciling these two needs for built-in modules. One is to have their exports... basically, to have the built-in modules be purifiable, but to have nothing unnecessarily pre-frozen in order to achieve purifiability. In other words, they're not pure until hardened, and before that, anything that you can reach by traversal starting from the exports you can modify, and then, having modified them.
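A toy illustration of that "patchable until hardened" shape. The `harden` below is a naive transitive freeze written for this sketch, not the real SES harden, and `builtinExports` is an invented stand-in for a built-in module's exports:

```javascript
// A hypothetical built-in module's exports, born in a patchable state:
const builtinExports = {
  parse: (s) => JSON.parse(s),
};

// Before lockdown, a polyfill may monkey-patch the exports:
builtinExports.parse = (s) => JSON.parse(s.trim());

// Naive transitive freeze over everything reachable from the exports:
const harden = (root) => {
  const seen = new Set();
  const walk = (obj) => {
    if (obj === null) return;
    if (typeof obj !== 'object' && typeof obj !== 'function') return;
    if (seen.has(obj)) return;
    seen.add(obj);
    Object.freeze(obj);
    for (const key of Reflect.ownKeys(obj)) {
      const desc = Object.getOwnPropertyDescriptor(obj, key);
      if (desc && 'value' in desc) walk(desc.value);
    }
  };
  walk(root);
  return root;
};

harden(builtinExports);

// After hardening, further patches are rejected (throws in strict mode):
try {
  builtinExports.parse = () => 'hacked';
} catch (_) { /* expected once frozen */ }
```

Everything reachable by traversal was mutable before `harden`, and nothing is afterward, which is exactly the reconciliation of the two demands.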
A
The... I think that, you know, Domenic's import maps are too browser-specific. We don't want those, and they're too complicated. But all of us know, from the SES perspective, from the Node perspective, from all these perspectives, from the work on module systems (the sort of module systems that the Moddable guys are doing, that Michael FIG is doing), that we need some kind of control over import remapping, so that initial code that starts off with the initial built-in module namespace can create a new compartment.
A
A
According to the compartment creator. And I think, you know, we're going to have such control over the import mapping anyway, so we might as well deal with the built-in namespace with the same lever. And I think we should just do both of them, because for the easy cases the mutation of exports will be adequate, and for the hard cases the remapping is sort of the universal hammer that, you know, makes everything possible, even if hard.
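A toy model of the per-compartment import remapping being described. `makeCompartment` and `importNow` are invented names for this sketch, not the actual Compartment API:

```javascript
// The creator supplies a module map; code inside the compartment can only
// reach what that map grants under each name.
const makeCompartment = (moduleMap) => ({
  importNow(specifier) {
    if (!(specifier in moduleMap)) {
      throw new Error(`unknown module: ${specifier}`);
    }
    return moduleMap[specifier]; // the creator decides what each name means
  },
});

// The compartment creator remaps a "built-in" name to an attenuated version:
const attenuatedFs = {
  readFile: () => { throw new Error('denied'); },
};
const compartment = makeCompartment({ fs: attenuatedFs });

// Code inside the compartment asking for 'fs' gets only what it was granted:
const grantedFs = compartment.importNow('fs');
```

This is the "universal hammer": even when mutating exports is not enough, the creator can substitute an entirely different module under the same specifier.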
F
A
With the out-of-memory proposal, there was a big surprise in the discussion that we had in committee after I made the proposal, which is also relevant to this group. Which is: I was purposely not including in the proposal a language-standard mechanism for creating a terminable unit of computation, which, for the moment, let's say is an agent, which is essentially a worker, which is basically a separate thread of control, where an agent can have many realms in it, and in the browser, many origins in it.
A
A
But the keeper that was created, that was attached to the agent, but itself runs in the creating agent, and with the memory resources of the creating agent: the keeper then gets invoked within the creating agent as notification that the created agent has been terminated. I sketched all that to suggest that eventually we might provide an in-language mechanism for attaching out-of-memory handlers to the out-of-memory condition.
A
But what I was proposing was only the requirement that the unit of computation, which is at least an agent, everything that necessarily has lost consistency, has unrepairable consistency loss, be preemptively terminated on out-of-memory, and then I was proposing to just leave it to the host, for now, how the larger system recovers from the death of that component. The surprise I got back from the committee.
A
Is that they wanted me to bundle the language-based new-agent, or new-agent-cluster, or whatever, creation mechanism in with the proposal, so that they're both part of one proposal and they go forward together, which I'm very happy to do. But that means that, in the same way that we have new Compartment and new Realm (new Compartment for the new evaluator-shim API, new Realm for the remaining Realm API for just creating new root realms),
A
this would also have a new Agent for creating a new preemptively terminable unit, which is potentially also a new unit of concurrency. There's an interesting knob here as well, which we talked over with the Moddable guys. They actually have this knob, which is: the notion of separate preemptive termination with a separate memory budget is actually de-linked from a separate thread of control, and it's sometimes useful. In fact, it's particularly useful for Agoric as well as Moddable.
F
A
To kill their agent, that's what I'm demanding, or rather, that we agree that it's an opt-in. Because Shu made the point that there is existing web content that depends on out-of-memory throwing a catchable exception. And Peter Hoddie from Moddable made the point that this is a real security vulnerability: the ability to induce arbitrary inconsistencies in any JavaScript program by forcing an out-of-memory at the wrong moment of execution, and that level of vulnerability should trump web compatibility. At the time, I didn't position it
A
that way; maybe I should have. I suspect we would have ended up with the same conclusion anyway, which is: no matter what their rhetoric, the browser makers would generally weight compatibility over security. But what we agreed to was some kind of opt-in mechanism. But once such an agent has opted in, then, if somebody does exhaust its memory, my proposal is exactly that the agent as a whole be preemptively terminated, because there's no way to resume execution within the agent without arbitrary, unpredictable, unrepairable loss of consistency.
A
F
A
A
Inside our code, we have this operation, throwTantrum, which is what we call when we want our unit of computation to preemptively halt with no further execution of user code, which we don't have any way to reliably do. And one of our security vulnerabilities was actually due to that: the memory exhaustion attack that was found on us. In that particular case, we happened to be invoking throwTantrum, but we were security-vulnerable because throwTantrum could not suppress further execution.
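A sketch of what a throwTantrum-style helper might look like. The names come from the discussion, but the body is a guess at the pattern, and, as noted above, plain JavaScript cannot actually suppress all further execution, so the best such a helper can do is refuse re-entry at every checked entry point:

```javascript
// Mark the unit of computation as dead, then throw.
let deathReason = null;

function throwTantrum(reason) {
  deathReason = reason; // further entry into user code should be refused
  throw new Error(`fatal: ${reason}`);
}

// Every entry point re-checks the flag; any unchecked path back into user
// code is exactly the kind of gap that made the shim vulnerable.
function enterUserCode(fn) {
  if (deathReason !== null) {
    throw new Error(`unit already terminated: ${deathReason}`);
  }
  return fn();
}
```

A language-level preemptive-termination mechanism would make the re-check unnecessary, which is the motivation being argued for here.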
C
One of the requirements from the browser makers, or their concern, is that a lot of websites could be using the fact that those errors are currently catchable, so they can put telemetry in JavaScript and be informed that something in production is not working as it should, and making the OOM behavior an enforced part of the specification would prevent that from happening. That's why there was the opt-in. I think it was a major use case, but I was reading, like, the others, yeah.
A
So I definitely agreed to the opt-in; I'm happy to agree to the opt-in. There was a bit of a fight, that I think went the right way, which is: what is it that has to opt in, in order for the opt-in to apply? And I was advocating that if any realm within the agent... let's just stay with agent for the moment; I'll come back to the issue of that.
A
If it's an agent: if any realm within the agent does whatever the opt-in action is, that opts in the agent as a whole. And Shu raised the objection that that means that all other realms within the same agent are now subject to that unilateral decision, and I made the point that there's only one thread of control within the agent. So anyone who gets control within the agent can just go into an infinite loop and deny everyone else any further execution anyway.
A
So I think the result, the way I remember the meeting, is that we at least did not have disagreement, and hopefully I can say we had consensus, to go forward with: any realm can opt in to this more robust handling of out-of-memory on behalf of the agent as a whole.
A
Well, also, if we have a new-agent API, I would make it a configuration parameter on the new agent, for the creator of the new agent to simply create an agent that's already opted into this robust behavior. And possibly even that the new-agent proposal would say: if you're making a new agent with this new API, maybe we specify that it's necessarily opted in. Maybe.
A
E
E
C
A
A
A
A
A
A
A
F
So at least there is existing precedent of encountering this situation in browsers, with the introduction of the ServiceWorker API: tabs, workers, and basically everything in the browser can send a shared array buffer to anything else. Unfortunately, they also can preempt and terminate workers, so at least with the current behavior, browsers do not throw errors, so.
F
F
However, they have implemented the agent option, [[CanBlock]] I believe is what it was called, or "can wait", there we go. So the UI thread of a browser cannot actually perform a synchronous wait on a shared array buffer, and that was one of the reasons, a long time ago, somebody proposed an asynchronous wait on shared array buffers.
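The blocking wait in question looks like this. On an agent where blocking is permitted (Node's main thread, or a worker), the timed form simply times out; a browser UI thread, whose [[CanBlock]] is false, would throw a TypeError instead of blocking:

```javascript
// A shared location that no other agent will ever notify:
const ia = new Int32Array(new SharedArrayBuffer(4));

// Block the calling agent for up to 50ms waiting for ia[0] to leave 0.
// On a browser UI thread this call would throw rather than block.
const result = Atomics.wait(ia, 0, 0, 50);
// result is 'timed-out', since nobody called Atomics.notify
```

The asynchronous variant mentioned above eventually landed as `Atomics.waitAsync`, which returns a promise instead of blocking the thread.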
A
F
F
A
C
A
A
A
A
F
Correct. Cookies, as I know it, are actually a view of an underlying data structure that is normalized by the browser. Local storage also has some normalization steps going on; it's not raw access.
F
A
So it's 3:11. I propose that we stop recording. Okay, I am stopping.