B
Yes, and basically I come from the printing world, where I've stopped being involved in the production side and focused a lot on research related to image processing, trying to create an image processing ecosystem that uses JavaScript purely, and taking it very, very slowly, I guess. Okay.
C
So yeah, I've been working on modules for quite a while — the module loader SystemJS — and I've also been working on a project called jspm for many years. I started working with Bradley on the modules integration into Node just over two years ago now, I think, and yeah, that has sort of grown into what we have now as more and more people have got involved, and I've also got more involved in the Node.js project as well from that side.
C
But the goal is that this kind of works towards the main major changes that we want to have in place in core for when we unflag and release, finally, after all these years, module support in Node.js. And so there's a number of things being done that are going to set new conventions in terms of how people use modules in Node, and therefore what modules they publish on npm as well, and how those npm modules behave.
C
They previously tried to deprecate the Buffer global, and it turned out that this wasn't possible at all because of the number of packages on npm that were using this Buffer global — I believe that's roughly what happened, but I might be wrong on the exact details. But the point is that once you have these things being used with such a huge amount of usage and reliance in the wild, you can't change them, and so there's the opportunity here, in the shift to ES modules, to possibly deprecate the process and Buffer globals.
C
And we won't necessarily have the chance again, because once people start writing ECMAScript modules and publishing them to npm, relying on them working in Node 12 whenever the experimental-modules flag is removed, then we're stuck with it. So it's one of those side semantics of modules that it would be nice, from a security perspective, if we could possibly remove now.
C
That is a difficult thing to do, because V8 doesn't actually allow us to use a different global in ECMAScript modules than in CommonJS modules, and it doesn't look like we're going to get that functionality soon. So it's been quite hard to craft something that can provide this support, and in the end — I actually originally did something quite similar to what the Realm shim does.
C
So that was — I believe that was the approach. I think it was with a proxy. Yeah, the proof of concept there was working, and the approach was able to achieve this goal of making sure that you couldn't access them in the ES module case, and only in the CommonJS case would you get access to them, because CommonJS modules are already wrapped. But there were a lot of objections to that approach.
C
So in that approach the proxy would be used in all CommonJS modules, and then the new ECMAScript modules would get the normal global. But yes, CommonJS modules would always have to use a proxy in that approach, so you were stuck with this proxy, and I think there were a lot of concerns about that, and also about changing the wrapper, because there's even code that relies on things like exactly how the wrapper behaves — so even at that semantic level there might be concerns.
C
Yeah, so we certainly agree: if we get first-class support, these kinds of changes would certainly be a path in future, and ideally these kinds of approaches can be improved. And I think there is a degree to which we can actually look at upstreaming to V8 any additional fixes that need to be made for Node to make this stuff work more seamlessly, but yeah.
C
So I finally found it — this is the PR where it got knocked down pretty quickly, and Matteo is objecting — who's a TSC member of Node.js — again kind of based on the performance argument of the with proxy, because a lot of packages will directly access process, possibly in tight loops as well. So anyway, that's just not a possible approach. But then something that made this possible recently was that V8 released a new...
C
Right, so this mechanism actually only applies to evaluated scripts, and there isn't an analog of it for modules — it only exists for the Script path. So yeah, we're using the compileFunction API. This is a complete replication of the V8 API arguments as well, and this argument here actually allows custom contexts, and I think the wrapper is done by actually just using the custom arguments that can be passed, so.
C
It behaves like a with statement, but at a lower level — so it's not a syntactic with statement; it's basically just that you can create a custom scope that wraps the execution and actually references an object. And so that's what I've used in this PR approach, where we have a CJS context object.
C
In addition to on the global object — and the reason for that is that on the global object we can then do stack detection, where we say: did you access the global object from a CommonJS module, or did you access the global object from an ECMAScript module? And if you accessed it from a CommonJS module, we can tell by checking the stack in C++. Okay, so.
A
As a kludge to get something working for demonstration purposes, all sorts of things should be on the table. I just want to make it very, very clear that I have been a guardian on the committee to prevent dynamic scoping from leaking into the semantics — there's nothing left in the semantics that is sensitive to the caller, or that even sees the caller. Yes.
C
What we're doing here is opening up the design space of being able to say process and Buffer — this PR actually still keeps process and Buffer available in ECMAScript modules; it just provides a warning when you access them. And the idea is that we can start to reserve the design space around not necessarily using these globals, and that'll allow for better semantics in future.
C
Okay, so they were replaced on the global object. They do still exist, but as getters which detect when you're not accessing them from a CommonJS module — which, you know, in five years' time will widely be regarded as an entirely legacy, deprecated module format. Now, when you access these globals right now, it warns, but it'll still give them to you; in future we could possibly even throw, or we could just return undefined.
B
A quick question I ran into a couple of days ago when I was playing with a build: I usually do typeof process === 'object' to detect if I'm on Node.js. Will typeof throw a warning, since it accesses the global? But then you're saying that we're moving towards a path of actually not having it there.
C
In CommonJS that would access the context version, which would do the with-style object lookup and get the normal process object. If you're in an ECMAScript module, it'll do the lookup on the global, and it'll then do the stack check — the full, slow stack check — and then either provide a warning or return undefined, depending on where we are in that deprecation cycle. So.
C
It could be possible. I did actually have some discussions with various members of the V8 team about having a separate global, but because it's something that has no need in browsers, it's difficult to get it prioritized for Node — I don't think it's something that came easily to hand. So I did have some brief discussions about that, but nothing concrete. Okay.
C
Yes, so I mean, unfortunately, yeah, it is something that's late in the day. It's something that I've been trying to bring up, but most people don't really care at all about this — most people are very uninterested in this and they don't think it's possible — and so it's one of those things that just kind of slips under the radar. And I think at the moment it's probably 50/50 whether this will be possible, but it's because of the security properties that it would enable for Node in future.
C
That's why I think it's an important thing to bring up: because it allows us to get to a place where ecosystem conventions will work with resolver-based security. Otherwise process continues to give access to native bindings and high-resolution timers, and you can access all the environment variables.
C
Yeah, so process has the ability to load any native Node binding, so you can get bindings to anything on the operating system, basically, just with that one global. The alternative would be to literally deprecate each of those paths individually over the coming years of Node's operation, which I think would effectively be a non-starter from a security perspective for those security properties. So we've got a nice opportunity here to kind of just do one quick fix and it's done — but yeah, it's a long shot. So.
A
What — so the Node thing that wraps an individual CommonJS module, the thing that used to be a function and is now this other mechanism that you just showed us: it brings new things into scope specific to that module in particular — require and module. Could you put process and buffer into scope using that mechanism, so that they're just in the global scope of CommonJS modules, but not on the global object?
C
Yes, so that is what is done to avoid that stack-lookup performance cost. This approach here uses exactly that — I can find it here; no, this is the wrong one — so yeah, we add process and Buffer into that CommonJS scope, specifically the CommonJS context object, and then they're provided in the CJS loader. Yeah. So.
C
That is because we need to have full backwards compatibility for CommonJS module users, and people do things like writes: they could change the process object, to instrument it for instrumentation purposes, by writing to the global. They could also access the global directly — when checking versions, for example, code that actually reads process from global.process first and uses that version. So for those backwards-compatibility scenarios, that's why we have to keep the global check, yeah.
A
Then it is the semantic cost of sensitivity to the caller, right? And that breaks a lot of things. The eval example that I mentioned — that was for the direct eval, I believe your answer. If instead we did an indirect eval — we do open paren, 1, comma, eval, close paren, open paren, quote, process, close paren — then it's not code in the module that's accessing process; it's the...
C
These days, if you're in an ECMAScript module, and we're at that point where it's been deprecated, it'll do the stack check and it'll just return undefined if you try to access the global process when you're not in a CommonJS module. I believe the only way that you can— oh, oh, oh.
C
Okay, yes — so the check isn't ideal at the moment. I've actually also spoken to the V8 team about an improvement to the check, to make sure that we're using unique identifiers so that we've got the right identification of the script, and I've already been suggested a way to do that better. So basically, if this does land — because at the moment it's just a string-based check that could effectively be wrong in rare cases — we would instead be checking that it's the right ID of the source file.
A
So by denying the ability to run code in sloppy mode, we were able to build on that to deny access to the global. You're basically doing the same trick with CommonJS: you're saying, by allowing code to run as CommonJS, I'm giving it this magic ability to see a process and Buffer global, but I'm giving it only to that code — nothing else magically gets that ability. So everyone else is running in a world in which there is no process or Buffer global, right?
A
Okay, so now — so now let me — okay, so.
A
So I think — I believe this — that it does have good stability properties, but it has some surprising and non-backwards-compatible consequences for CommonJS code. So if in CommonJS I do an indirect eval of process, I would see undefined and not the genuine process, because the CommonJS module is not at the top of the stack.
C
Yeah, I haven't tested that case, but I can check that. From a backwards-compatibility point of view it doesn't sound like something that would be a common pattern, and the most important thing is that — you know, I guess you have some threshold of backwards compatibility where it's...
C
So that's one of the reasons why it's so pressing to try and merge this in the next few weeks, while we still have the possibility of getting pull requests into the 12 release. And as I say, it's a difficult battle, because a lot of people don't see the benefit of it and they don't believe that it's going to be backwards compatible — they think that it's going to cause problems.
C
I haven't actually, but yes, that is the next step. Right now there's a block from the modules group itself — the modules group wants to discuss it further, which is happening on Wednesday — and then we'll bring the discussion back to core after next Wednesday. And then, yeah, moving to those tests is definitely the next step: I'll speak to those who run the CI jobs and see if we can get that going. Definitely, yeah — so just push it through, and, I guess...
C
Making the module reviewers aware of what these properties are and why they might be useful — and, I guess, also to convince them that it's possible and that this will work out. And definitely getting that data will be a huge one on this: showing that it can pass the tests of the however-many hundreds of packages that that CI check can run, yeah.
C
Certainly, if anyone wants to review the PR, further review is very welcome. If anyone wants to test it out as well, that's very welcome. Sorry, yeah — certainly the thing is just to get eyeballs on this. I've discussed it with Bradley a bit, but yeah, just looking to get further review and any help in persuading...
C
You know, a thorough review of the security properties here would be amazing. When I was working on the PR for the new frozen-intrinsics flag, I was provided with documents — I believe it was an SES document — on why you would want to freeze the intrinsics in a JavaScript environment, and that very much helped show that there was a body of research around the arguments there, and that it wasn't just me as an individual saying "hey, let's do this".
C
It was actually a whole philosophical direction for the project, and I think that helped a lot. So any kind of way of indicating that, you know, there are some security properties here that could be useful for Node as a project in future would be a huge help to the argument, because it is an uphill battle to get this through. But yeah, we'll know next week, basically. Yeah.
A
It helps me understand that this is, you know, not causing any semantic difficulties elsewhere. It's just a mode that we hope to be able to put to bed, where there's extra magic authority lying around that is otherwise denied. I would hope — if we had better platform support from V8 for this — that it would be nice if the properties were absent, except when looked at by a CommonJS module.
C
So — but, you know, there may be new ways of handling this in future, or whatever global hooks could open up in V8 in future, but yeah. Ideally these getter paths would, in the long run, eventually be deprecated. I mean, they'll certainly stick around for a while, though — yeah, I mean, it's no small thing to stick these getters in, which are going to lie around for many, many years. So.
B
Actually, it's very early — I'm not going to be presenting anything at this point — but I have been exploring this idea of first-class bindings, where we're actually trying to avoid the reliance on getters, and to extend the ability of how you would import from an ESM module, to make it something where you could actually have similar behavior taking place. I have not really explored how that would translate to the global, but it's related to dynamic named namespaces, basically. So it will eventually be...
B
So I'm trying to find a good way to phrase it, because, like, you have those getters that you're setting on the global object itself. Is there — like, if you compared that to how the window proxy object does almost the same behavior?
C
Hi — I think the window proxy approach was kind of like what we tried in the first PR, which was: you have a global proxy. And maybe we will end up with the global being a proxy at some point in future by default — I don't think it is a proxy by default right now in Node. If it ever does happen, proxy performance in V8 is getting so good that that's really not a problem, which I guess we're getting very close to these days, yeah. But that's why it might be modified, yeah.
A
So, a static way of thinking about what's being specified here — let me try it out, and you can tell me how it differs from what you have in mind — is that within CommonJS modules, every syntactic property lookup, meaning every dot and every square bracket that's looking up a property, has the magic behavior that those property lookups see the process and buffer properties as having those magic values.
C
Yeah, a hundred percent — and it has to be the exact global object; it won't work for any other objects. I guess you could basically read those getters off the global object and invoke the getters themselves, and then you'd have access to this magical piece of code, which would effectively be a new piece of functionality in your JS environment: it would allow you to effectively have a function that can tell the difference between a CommonJS module and an ECMAScript module. Maybe that's — that's!
B
I do have a question on the context — like a VM context, right, in Node. And correct me if I'm wrong — I believe at an earlier time I tried this, and maybe I didn't draw the right conclusion; I can't remember if I'm right or wrong, sorry — but when I used a context object that had a prototype, it basically exposed it as an object that didn't have a prototype within the context.
B
You know, the prototype that gives you the add-listener methods in Node — yeah, so the global object extends from the event target, or whatever it's called, that has the addListener and removeListener methods on it. So, you know, if you really look in the prototype chain of the global object, you will see all those levels there, whereas in the browser you now get this flat object, just like the console is now a flat namespace, right?
A
Let me just make sure that you're not falling into a trap that I've fallen into multiple times. When I first started writing CommonJS modules and I saw that, from within a module, the top-level this was bound, I assumed that it was bound to the global object — and it turns out the top-level this in a CommonJS module is something other than the global object.
C
Okay — yeah, I don't want to take up too much more of your time on this topic, but thanks for hanging in. If you're interested in the code, it's basically a function here in the getter. At the moment we're using the source name itself — the actual name of the source file — and then there's this kind of private C++ function, which is only available from the Node.js side.
C
But this is a terrible check, which will be replaced with a V8 approach of actually checking the origin ID and making sure that it corresponds to the exact ID of a CommonJS module — which I have an issue with V8 on, if this gets in; but it's gated, so we'll pay that cost when we get there, yeah. So I think that the syntactical wording makes sense, in terms of the getters being syntactical.
A
Okay, I think this is a good direction. I'm very worried about—
A
Let's discuss what some other topics are. I should take a look at what it is that I had announced in the email.
B
Just to verify, I guess — I was making — like, I drew a conclusion on something and I got confused. I wasn't really intentionally looking into it, so I got it wrong, actually. Thanks.
A
So a topic that I have is to talk about the relationship of SES and TC53. TC53 is a new standards technical committee under Ecma that's focused on...
A
So over there, I wrote in response to a message from Peter Hoddie, one of the XS architects. I wrote that, in order to have a specification of SES — not in terms of specifying an API for creating an SES environment out of a full JavaScript environment, but rather just focused on trying to specify what the resulting SES environment would be — in such a way that you could build an engine that was just a standalone implementation of SES.
A
So XS is already set up so that you can configure it at build time with regard to what components of JavaScript you want at runtime. So the main issue — just omitting the things outside of SES — they would do simply by revising their configuration options, because currently those don't line up perfectly with the distinction between SES and non-SES.
A
So the main thing that came up that diverged — that caused a difference from the way I've been mostly thinking about things — is that in SES and realms, the thing that we've gotten settled is the semantics of runtime evaluation: you know, eval, and then the evaluate method on the realm object, and the Function constructor, and all of that. So we've done all this engineering for safe runtime evaluation of code, including calculated strings, and we're now figuring out what we want to do for modules.
A
The typical configuration of XS omits any runtime evaluator, for reasons we can all understand, and only supports modules — and they support ECMAScript modules, not CommonJS modules. And the normal way they support them is that they do all of the module initialization and linking — sorry, all the module evaluation and linking — at build time, and then they snapshot the resulting heap and turn it into read-only...
A
...immutable objects — really immutable in our sense. Their interest in immutability is really purity in our sense, and their interest in purity is because, essentially, if it's pure, then you can put it in ROM. And in IoT, RAM is much more expensive than ROM: you can afford much more ROM space than you can afford RAM space. That fits well with security, because if you have a smaller RAM space, you also have a smaller state space to be reasoning about.
A
But in any case — so they purify: they have this ability to run the full XS engine at build time, without access to the devices that would be available at runtime, run initialization code for that purpose, and then snapshot the entire heap and make that pure as well. And the snapshotting and making-pure is done by magic — it's not done by a language-based mechanism; it's done by a configuration option that the XS engine knows directly about. And then the remaining state that was outside the snapshot...
A
...those things are the things that then proceed to run with mutable state at runtime. So our pure modules and our pure loader fit very well with their pure modules.
A
...define the static checking rules that can, by static checking, ensure that a normal-JavaScript-mode module is only exporting pure stuff. One could imagine proposing a mechanism such that a module declares that it's only creating pure state, and where the platform itself somehow enforces that by means that cannot currently be expressed in the language. That would certainly reduce a lot of complexity from our point of view, and make code much more natural to write: you wouldn't have to sprinkle harden calls everywhere.
A
We also talked about Jessie. One of the things that I thought was a nice perspective is that there are always tinier devices: as we succeed at doing circuits at smaller and smaller scales, it doesn't just mean that we always have more transistors to play with — it also means that we can build machines that are ever smaller, and that are therefore still constrained. And for some of the ones that they've actually encountered, Jessie would be useful.
A
There was actually an important use case for which XS was lost because it was too large, but which Jessie would have been fine for. And for Jessie, their sense is not to do it by configuring a subset, but rather to port their experience from XS rather than porting code: Jessie is small enough that just building an engine to run Jessie from scratch would seem to be the more useful approach, and it also enables them to drop...
A
That means going back and forth between — one is orthogonal persistence, where you just mask machine crashes: you just snapshot state and then restore from snapshot, such that computation simply continues as if the crash had never happened. Versus the explicit-state API, which is generally how things like object-relational mappings — and, I mean, pretty much the entire JavaScript world — handle any state that gets saved.
A
It's actually talking to some API to save state, because objects don't persist. But the explicit-API approach for object capabilities would still be one that traverses an object graph and does an explicit serialization of an explicitly traversed object graph. I'm very attracted to orthogonal persistence: it makes many things much easier to reason about, and it makes code much smaller and more obvious. It has some interesting costs, which I'll get back to, but...
A
It turns out that the XS engine already has all the core engineering needed to support object persistence, and the reason is because they did the snapshotting thing: to be able to execute up to some computation state at build time, take the resulting state, put it into ROM, and then proceed at runtime in a heap that's already populated by those ROM objects.
A
Two other aspects that we talked about, all of which seemed to come out very well: one is determinism for blockchain. It's not so much that we want to mask crashes in a single execution going forward; it's that we need deterministic replication, where all of the — in the blockchain world they call them validators — the participants that are running a replica of the computation, such that all the replicas are cross-checked against each other.
A
There's the legacy Date, which gives you access to the current time; there's Math.random; and there's SharedArrayBuffer, for doing shared-memory multi-threading, which we're never going to do on blockchain. Other than that, there's really no current dynamic non-determinism in JavaScript that I can think of. There's spec non-determinism and platform non-determinism, in the sense that the spec purposefully has some wiggle room in it, and different implementations of JavaScript make use of that wiggle room.
A
So they differ from each other in semantically significant ways. And this is the case with Wasm as well, and the Wasm-on-blockchain folk have already figured out what the right standardization interaction is to deal with that issue, which is: the Wasm specification has a tiny bit of wiggle room.
A
That wiggle room is absolutely necessary for the performance that Wasm provides. Specifically, they allowed themselves wiggle room on the bit representation of a floating-point NaN, because different platforms do that differently, and there's no way to normalize it without a performance cost that Wasm in general is not willing to pay. Wasm on blockchain is perfectly happy to pay that performance cost and more, because language execution is not your bottleneck on blockchain — so they just picked a NaN format.
A
So the general thing that's going on here — and they use the right terminology for it — is that, starting from the agreed spec, they created an agreed refinement of the spec. So it's a new spec, but the new spec is a refinement of the existing spec: all of the unambiguous choices that the new spec makes are choices that are allowed within the ambiguity of the original spec. I would want us to proceed in the same manner, because XS is the engine that we would be interested in running in such a manner, and I...
A
Refining that by simply writing the text content of presumably-English error messages into the refined spec is not something that any of us want to do. So I suggested that XS, at least when configured for these purposes, have all of the platform-generated errors carry an error code as their error string, and that the programming environments built around XS then use a side table — not part of the execution — to look up the error codes in order to present human-meaningful error message strings.
A
There is Array.prototype.sort, where the spec on purpose does not specify what sorting algorithm is used — but because of the nature of JavaScript, it's very, very observable what sorting algorithm is used. So they just use a quicksort modified for stability, and I'm fine with specifying that. Well, we already talked about error codes.
A
No one's going to specify a memory semantics such that the occurrence of an out-of-memory condition will be deterministic — we're agreed on that. So out-of-memory took us into a discussion of gas, which is specific to blockchain use. All the blockchains meter the computation, so there's some kind of budget, represented by a number called the amount of gas, and as computation proceeds, the number gets decremented.
A
And then, if the number reaches zero before the computation completes, the transaction in which the computation is happening is aborted, with essentially no side effects — so all side effects are hypothetical until the transaction completes and commits. And for JavaScript on a blockchain, the natural transactional boundary is the turn.
A
So if you run out of gas in the midst of a turn, then the turn is aborted and the engine falls back to the state as of the previous turn boundary. That would be a reasonable way to do gas. (I'm on the gas slide, by the way — just not sure about the font sizes here — oh, I think I'm actually not using it.) Yeah, gas and determinism are two separate headings here.
A
Gas is not within determinism, but gas actually has its own determinism issue. Gas has to satisfy two hard requirements and one soft requirement. It has to prevent infinite resource drains: it should not be possible for user code to engage in an infinite loop without consuming gas, and therefore it should be impossible to express non-terminating computation.
A
And all of the validators must deterministically agree, at the point in the computation, as to whether the computation ran out of gas, and in which transaction. And anything that does that is actually going to have complete agreement down to the fine grain, as to precisely when they ran out of gas, even though all that's observable is the transaction boundaries.
A
In the Jessie repository, Michael Fig and I, and Dan Connolly, have been going back and forth in a thread about the static checking rules for inserting harden, and he's finding it painful. He has a challenge example for me that I have not looked at yet, so I don't have a response yet, but it's certainly plausible that once you start using it, it turns out to be notationally more painful than I expect.
A
The reason why I am expecting it not to be that painful is that I've just been manually writing Jessie code, manually hardening things to the point that I'm confident in my own head that everything is hardened. But of course any static checking rule is going to be more severe than what I think I'm confident of in my own head. So Michael Fig has proposed two other interventions.
A
Hardening is a requirement for being pure, but we're requiring hardening a lot more than is required for purity, and the notational pain that Michael is running into is all about the additional hardening of runtime state that is not needed for pure modules. Therefore a built-in platform mechanism for purifying modules, rather than using our static checking approach, buys us much less than I was thinking a moment ago. Most of the harden calls, the ones needed to harden API surface, would still be needed with our static-checking-plus-harden approach.
A
The first of the two interventions he has suggested is a stronger form of harden that he calls immunize. What immunize does is, in the walk, whenever it sees a function, it replaces the function in place in the object graph with a wrapper of the function, where the wrapper of course is hardened. But the more interesting thing is that the wrapper immunizes all incoming arguments and outgoing return results. So it has the kind of transitivity that we associate with a membrane, but without intermediating an object graph.
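A rough sketch of the immunize idea as described above; this is an illustrative reconstruction, not Michael Fig's actual code, and the real proposal involves a full transitive walk with more cases than shown here:

```javascript
// Sketch of immunize: like harden, it freezes everything reachable,
// but whenever it meets a function it swaps in a hardened wrapper
// that also immunizes all incoming arguments and outgoing results.
function immunize(value, seen = new WeakSet()) {
  if (Object(value) !== value) return value; // primitives pass through
  if (seen.has(value)) return value; // tolerate cycles
  seen.add(value);
  if (typeof value === "function") {
    // Replace the function with a wrapper (the caller puts the
    // wrapper back in place in the object graph).
    const wrapper = (...args) => {
      const immunizedArgs = args.map((a) => immunize(a, seen));
      return immunize(value(...immunizedArgs), seen);
    };
    return Object.freeze(wrapper);
  }
  for (const key of Reflect.ownKeys(value)) {
    const child = value[key];
    const replacement = immunize(child, seen);
    if (replacement !== child) value[key] = replacement; // in-place swap
  }
  return Object.freeze(value);
}

const api = immunize({ add: (x, y) => ({ sum: x + y }) });
const sumRecord = api.add(1, 2); // the return result comes back immunized
```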
A
And you know, there's nothing wrong with that. So let's continue to examine it; we need to find out what the costs actually are and then evaluate it based on costs and benefits. The other intervention is something I raised that Michael Fig took farther, which is that we can define a language like Jessie that is written in the Jessie syntax, but does not require the calls to harden to be written by the programmer.
A
Rather, this language, which Michael I think called Jessa, with an 'a' at the end (a better name is solicited), is basically a source-to-source rewrite of Jessa into Jessie. It can essentially have the form of our static checking rules, where everywhere we would have rejected code in our static checking rules because of the absence of a harden, the rewrite would instead just insert the harden.
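A hypothetical illustration of the Jessa-to-Jessie rewrite idea. The example code is invented here, and the shallow stand-in for harden is only for this sketch; the real harden transitively freezes the whole reachable object graph:

```javascript
// Jessa source (no explicit harden written by the programmer):
//   const makeCounter = () => { ... };
//
// Rewritten Jessie output, with harden inserted by the rewrite
// wherever the static checks would otherwise have rejected the code:

// Shallow stand-in for harden, for illustration only.
const harden = (x) => Object.freeze(x);

const makeCounter = harden(() => {
  let count = 0;
  return harden({ increment: () => (count += 1) });
});

const counter = makeCounter();
```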
A
So you have a sort of preservation of functionality under lack of error, but not preservation of enforcement or execution under error. This is what I've called in the past a fail-stop subset of the language, and I think it's still a good category to keep in mind. From previous work, I can say that it's definitely a kind of subset that is useful to be thinking in terms of. And then Michael Fig.
D
It's actually very similar to the one that you presented on the issue, the idea being that you have either an array or an object where the properties in that graph are functions. And if that function has a reference inside the module, say, you would want to be able to do a comparison between the direct reference and the value at the right place inside the object graph.
A
So there have been a lot of variations as the thread has continued. I'm going to speak for the most conservative variation, since Michael is not on the call, and just see if we still have the problem in the most conservative variation. You wouldn't be allowed to say let foo at top level; you would have to say const, and then the right-hand side of the equals would need to be within an immunize.
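A sketch of what the most conservative variation would require at top level, with a shallow stand-in for immunize (the real immunize is the transitive wrapper-and-freeze walk discussed earlier):

```javascript
// Shallow stand-in for immunize, for illustration only.
const immunize = (x) =>
  typeof x === "object" && x !== null ? Object.freeze(x) : x;

// Not allowed at top level in this variation:
//   let foo = { x: 1 };
// Required instead: const, with the right-hand side immunized.
const foo = immunize({ x: 1 });
```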
D
Okay, so in this case that would still work. Okay.
A
It can interact with an object that's frozen but not immunized, where that object already contains a function that's not immunized. That object ends up in a graph of Jessie objects that Jessie would like to immunize, and you can just say: well, that's not what immunize is for. That's a valid position, but I don't think harden has that interoperation problem, or it has it less severely.
A
SES code could purposely want to keep objects non-frozen, and then they end up in a graph of objects that SES code hardens; you don't even have to bring Jessie into it. Once it calls harden, the SES code hardening them will proceed to harden objects that might have been created with the intention that they not get frozen, and that's just, you know, a coordination issue that we're just willing to accept.
A
But I think that interoperates perfectly well with Jessie code that's using harden, and I think it interoperates poorly with Jessie code that uses immunize, but not in a way that hurts the SES code. It's in a way that causes the immunize to fail, and if the immunize fails, even harden can fail. If the immunize fails or if harden fails, in both cases we say that none of the security guarantees follow if the operation failed. So there's no contradiction there either.