From YouTube: SES Meeting - Hot Module Replacement
Description
Fred Schott (Snowpack) and Jovi de Croock (Preact) join the SES community to discuss language support for hot module replacement.
A: All right, welcome back. I hope everybody had a great break. Today's topic is hot module replacement, and we have a few guests: Daniel has invited Jovi de Croock and Fred Schott to join us. Fred is, I recently learned, the creator of Snowpack, if I'm not mistaken, which is topically relevant. Jovi, would you like to introduce yourself?
B: Hey. I worked together with Fred when we were initially looking at HMR within Snowpack, and I'm a maintainer of Preact. I wrote the HMR implementation for Preact components.
C: Yeah, as you mentioned, I created and am leading the Snowpack project, which is a front-end build tool and dev server that really leans into ESM, so it's particularly relevant to all the standards work. HMR is also something we've been focused on providing a good experience with, which creates its own set of challenges when working directly with the ESM module graph. So, very excited to come chat with you all.
A: We brought this group together because we have been investigating the Compartments proposal, which is layered; its second layer is a module loader API that would allow us to incorporate modules into ECMA-262.
D: Yeah, I'm glad this group can come together. There are multiple ways we could think about the HMR problem at a high level. One way is by providing imperative hooks into the module loading process, which is something this group has been working on, and also something there's experience with in user space, which leads a lot of engineers to ask for it to be added to JavaScript and its environments.
D: Another way is through a higher-level approach, where we could add direct capabilities for certain things outside of compartments. Hopefully we can discuss the problem space at a high level, to get the scope right: what are the semantics of the HMR that we're discussing? Are we talking about migrating instances, or are we just talking about rebinding the live bindings that are exported? Then we can think about ways to implement it. Does that framing make sense?
A: Yeah. Much of this group, and certainly I, am probably not intimately familiar with how HMR works today in the various systems that support it. The intuitive design I would imagine is that you would be watching some subset of your working set for edits, that is, the modules that have been edited. An edit would evoke an event that causes those modules, and any modules that depend on them, to be reinstantiated, while sharing the unmodified instances they depend upon. That would entail the need for hooks to hand off state, which libraries like Preact would implement in order to conserve some amount of state.
A: Could you just spend a couple of minutes on that? It's an interesting topic.
C: Yes, I would love to. HMR is hot module replacement, which at its core is the idea that I edit a file and, without having to reload the browser, that edit is applied to the page itself. It live-updates, replacing the module in the module graph on the page.
C: It's a very popular feature of webpack and Rollup and any sort of dev environment. I think it even goes back before webpack, although that's certainly reaching into my ancient history. Today, at the very least, it's a very popular workflow. Some people use it on the server, but on the web it is essentially table stakes for a lot of people, and really useful for speeding up development.
C: What's interesting for the ESM story and for the browser is that webpack and Rollup and any kind of modern dev environment today work by replacing the module system: they ship their own module system to the browser. At its core, webpack is a bundler, but really what it's doing is saying, I'm shipping this code to the browser, and I own it and control it.
C: I'm shipping the module system that I like, and that gives them the hooks they need. So webpack can say, this file is updated, I'm going to go and replace it in my module cache, which I manage myself as the webpack client.
C: So this is a feature that up until now has been solved by people shipping their own module system, because there was no native module system in the browser, and building this functionality themselves.
C: What Snowpack is leading is this sort of new dev tool that is really ESM-focused and relies heavily on ESM. Instead of doing any bundling in development, we ship the ESM modules pretty much directly to the client, letting the browser do its native fetching and its native reloading.
C: Once a file has loaded, it is essentially cached and living at that URL; the module map is indexed by URL. So we run into the problem that we can't actually update a file after it's been loaded the first time without doing a full page refresh.
E: Can I ask some questions? There's something I need for orientation.
E: I've had a lot of experience with systems like Smalltalk, where you can update code on the fly within a system of live objects, and there were certain rough edges to that that seemed fundamental, that were never able to be fixed in a principled manner. I'm wondering if the same kind of rough edges exist here. Of the two that were roughest, one was anonymous closures: if you've got an anonymous closure, that's an instance.
E: There's no obvious way to correlate the function that the closure used to instantiate with the function the closure should currently instantiate. The other thing that was very rough in Smalltalk was stack frames, which in JavaScript would only come up if the reloading happens while there is a stack. If the reloading only happens at turn boundaries, when there's no stack, then at least you avoid the stack-frame problem. But I still can't imagine what you're doing for closures.
C: Yeah, that's a great point, and I actually don't know the exact answers; Jovi might. But I think a really important point here is that it's not automatic.
C: The request on the platform here is basically for the hooks to do this, but the actual implementation is user by user, site by site. The HMR that we provide in Snowpack is essentially just the hooks to say, okay, when an update happens, how do I apply it? So if there's any sort of...
C: Yeah, this is totally up to you, and I can share the interface that both of us use to do this, if that would help. Let me show what this interface basically looks like today.
C: This is a project we started when we began looking at this for Snowpack; it's essentially just an HMR interface for interacting with HMR on the client side, basically accepting these updates. We called it a spec, but it was very much a work in progress. You can see Jovi here; he spent some time on this. Evan You, who's been working on Vite, has spent some time on this as well. Everything in this new ESM-based dev tool space is essentially using a flavor of this.
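The interface being described centers on a per-module hot context with an accept callback that the dev server's client runtime dispatches to. As a minimal sketch of that shape only, with illustrative names (`createHotContext` and `applyUpdate` stand in for what a real dev-server client would inject; this is not Snowpack's actual implementation):

```javascript
// Minimal simulation of an ESM-HMR-style accept handler. A real dev
// server injects a hot context per module URL; here we model the
// client-side registry so the dispatch logic is visible.

const acceptCallbacks = new Map(); // url -> callback registered by that module

function createHotContext(url) {
  return {
    // A module calls accept() to declare it can apply updates to itself.
    accept(callback) {
      acceptCallbacks.set(url, callback);
    },
  };
}

// Called by the client runtime when the server reports an update for `url`.
// `newModule` stands in for the freshly imported (cache-busted) namespace.
function applyUpdate(url, newModule) {
  const callback = acceptCallbacks.get(url);
  if (!callback) return false; // no handler: caller falls back to a full reload
  callback({ module: newModule });
  return true;
}

// What a module author would write (via import.meta.hot in the real spec):
let currentRender = () => "v1";
const hot = createHotContext("/src/app.js");
hot.accept(({ module }) => {
  // Apply the update to ourselves: swap in the new export.
  currentRender = module.render;
});

// Simulate the server pushing an updated version of /src/app.js:
const accepted = applyUpdate("/src/app.js", { render: () => "v2" });
```

The key property, as discussed below, is that the accepting module keeps its original identity; only the handler's view of the exports changes.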
C: You can see here, this is a bit of us having to do stuff to work around the limitation. In webpack, replacement is the kind of default behavior; what we're doing here is saying, okay, to get around this limitation, we apply updates to the current module. So there's a bit of a workaround that you're seeing here, but if you'd just like to focus on the interface itself...
E: I'm also unclear who the players are. When you say "we", who is providing what, and who's consuming what? I just don't understand who the players are and how the responsibilities are divided among them.
C: Yeah, no problem. We, Snowpack, as the dev environment, provide this interface; webpack provides their interface, Rollup provides theirs. It's an interface for the handoff of an update between the server and the client.
C: The idea is that frameworks can also have a say in this: how does accepting this updated file end up re-rendering my page? In terms of the workflow, I don't want to get too far into the weeds here, but the basic flow is that an update happens somewhere at a leaf node.
C: It's part of that server-client communication, where the server understands the client's module graph and will basically try to bubble that update up until it sees one of these accept handlers that can accept it. If a file with an accept handler in it is the file that gets updated, it essentially just accepts itself. The handoff there is just saying, hey, this has updated; browser, go and fetch this update and then apply it using the logic found inside the accept handler.
C: The bubbling comes into play where, if you're building with really any one of these, React or Preact or Svelte, they all have this concept of re-rendering, and that usually only exists at the app root. In practice, you really only need to add one of these handlers to one of those app roots. That's where it basically says, okay, for any update that happens lower down in this tree, bubble it up to here; I'm the thing that handles rendering.
A: That is to say, the bulk of components don't need to write anything to engage with HMR, and there's a sort of default behavior. If you change one of those modules, then it will inform, I'm guessing, up the dependency graph until it finds one of these hooks installed. Is that right?
F: I have a question about this bubbling behavior. We actually removed the idea of parent modules from Node, because you can actually have multiple parents. What's the expected behavior there?
C: Yeah, we handle that by not relying on anything native to calculate it. We just monitor what we serve to the browser. For everything we serve, we do a quick static analysis of its import graph, what it imports and what is importing it, and basically recreate an idea of the module graph server-side. That way, when we get an updated file server-side, we can calculate and do the bubbling logic without asking the client, without asking anyone other than our own server.
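The server-side bookkeeping just described, scanning each served file for its imports, then bubbling a change up through its importers until every path reaches an accepting module, can be sketched as a pure function. This is an illustrative sketch only: the regex scan and the `accepts` set are simplifications, and a real server would use a proper parser rather than a regex.

```javascript
// Sketch of server-side bubbling: given the sources we have served,
// find the accept handlers responsible for a changed file, or decide
// that a full page reload is needed.

// Naive import scan; a real implementation would parse the module.
function scanImports(source) {
  return [...source.matchAll(/from\s+["']([^"']+)["']/g)].map((m) => m[1]);
}

// Build url -> set of importer urls from everything we served.
function buildImporters(served) {
  const importers = new Map();
  for (const [url, source] of Object.entries(served)) {
    for (const dep of scanImports(source)) {
      if (!importers.has(dep)) importers.set(dep, new Set());
      importers.get(dep).add(url);
    }
  }
  return importers;
}

// Bubble the change up every importer path. Each path must end at an
// accepting module; a re-visited module is treated as non-accepting,
// forcing a full reload, as described in the discussion.
function bubbleUpdate(changed, importers, accepts, visited = new Set()) {
  if (visited.has(changed)) return { fullReload: true, acceptors: [] };
  visited.add(changed);
  if (accepts.has(changed)) return { fullReload: false, acceptors: [changed] };
  const parents = importers.get(changed);
  if (!parents || parents.size === 0) return { fullReload: true, acceptors: [] };
  const acceptors = [];
  for (const parent of parents) {
    const result = bubbleUpdate(parent, importers, accepts, visited);
    if (result.fullReload) return { fullReload: true, acceptors: [] };
    acceptors.push(...result.acceptors);
  }
  return { fullReload: false, acceptors };
}

const served = {
  "/index.js": `import { App } from "/app.js";`,
  "/app.js": `import { Button } from "/button.js";`,
};
const importers = buildImporters(served);
const outcome = bubbleUpdate("/button.js", importers, new Set(["/app.js"]));
```

With `/app.js` accepting, an edit to `/button.js` bubbles up one level and stops; an edit to `/index.js`, which nothing imports and which does not accept, falls back to a full reload.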
C: If one of those doesn't have a handler, we consider the update unacceptable: it cannot be applied successfully or correctly, and that also triggers a full page refresh.
C: Each of those parents becomes a new bubbling path, a new path for this update to take, so each of those parents would need to have an accept handler for the update to be accepted.
C: So if A and B import C, and C is changed: if A and B both accept updates, they're basically saying, we accept HMR updates to ourselves and our children, and we can apply this update to the module graph successfully and...
F: Safely. And if somebody in the future imports it, so it becomes a module with two parents, two dependents, you can break reloading. Is that correct?
C: You can also invalidate programmatically, by calling something like invalidate here. And again, if we ever detect through that bubbling logic that there is a path that does not lead to an acceptance, we will just trigger a refresh.
C: We very quickly fall back to that. This is where it really benefits us: someone doing pretty rapid development, making changes to a single file and trying to catch those updates. If anything more complex happens and we can't handle it successfully, we pretty quickly fall out to a full page refresh, assuming the user is moving files around or doing something more complex.
B: If I remember correctly, it depends. We track visited modules and, if they've already been visited, they are seen as a non-accepting part, which then leads to a full reload.
C: Yeah, there are definitely a lot of details here. Webpack's documentation is also pretty standard, and again, because they have control of the module graph, they have a little bit more say in how an update gets applied. I don't think supporting HMR is a feature of the platform itself, but there are certain proposals in flight right now which would really benefit this use case.
A: And that is the context in which we're interested. What would the Compartments API, or whatever module loader API surfaces in ECMA-262, need to facilitate in order to make HMR easier to implement?
A: So far, I still have more questions about the semantics. Is it the case that, if...
C: It will basically follow the bubbling up the path. So if the leaf, say C, gets updated, it will bubble that event up to the accept handler that accepts that update. One of the things we have to deal with here as an implementation detail is, okay, how do we actually apply the update for that leaf node to the application? We basically have to reload everything in its path up to that parent.
C: Okay, all the way down the module graph to the file that was changed. Every transitive child on that path will get reloaded, because the URL is baked into the importing file, and now that import URL has to be changed to point to the new update.
A: So you're doing cache busting with a query string, essentially. Okay. Which is maybe something that we could avoid.
C: Yeah, in this context it's the idea of busting the cache, given that only one module can exist for each URL. When we load your application, the file path is the root, the main thing that gets loaded in the application.
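The constraint under discussion, one module instance per URL, worked around by re-requesting under a versioned URL, can be made concrete with a toy loader cache. The `?mtime=` parameter name is an illustrative choice, not what any particular tool uses:

```javascript
// Toy model of a URL-keyed module map (as browsers keep for ESM) and
// the query-string cache-busting workaround discussed here.

const moduleMap = new Map(); // url -> module instance
let instanceCounter = 0;

// Like import(): one instance per exact URL, forever.
function load(url) {
  if (!moduleMap.has(url)) {
    moduleMap.set(url, { url, instance: ++instanceCounter });
  }
  return moduleMap.get(url);
}

// The workaround: a changed file is re-requested under a new URL.
function cacheBust(url, version) {
  const [path] = url.split("?");
  return `${path}?mtime=${version}`;
}

const first = load("/app.js");
const again = load("/app.js"); // same instance: the map can't be updated in place
const fresh = load(cacheBust("/app.js", 2)); // new URL yields a new, orphaned instance
```

This is exactly why the bare URL keeps returning the stale instance, and why each update leaves an orphan behind under its busted URL.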
A: Yeah, and this is usually done by appending a query string with a version number, which we can do with the web's notion of ESM, I think. Okay, this brings up the relevant parts of the Compartments API as proposed today. One, we have...
A: In any case, that isn't the only avenue we could use to introduce this ability. Compartments also support lexical endowments and a global object, which would be equally suitable: you could introduce a global named hot in the context of a compartment. Could you have a unique one for each module? Do you need a unique one for each module? You do. You do need a unique hot for every module.
F: We can already establish a dependency graph from the Compartment API by tracking the host hooks.
F: But I think the idea that you're going to send a signal to your dependents might be a better basis than trying to model the exact API here and how to accept a generic value. That's more interesting to me. Having to recreate individual patching functionality per module, I don't think that is avoidable.
E: Some more questions, go ahead. Are the callbacks call-once callbacks? Because the new module will typically go ahead and register similar callbacks, but that way the current callback code is always according to the most recent module, the one being replaced.
C: Yeah, is this a question about the ESM-HMR spec?
C: The idea is that the accept handler has to be written in a way that actually applies the updates to the current module. A module that accepts updates doesn't have that same behavior of being replaced. The accept handler exists to apply updates into the module graph that existed at the moment it was first loaded, so the URL of this module never actually changes. By accepting updates, it says, I accept them in a way where my URL will not change; I will instead apply updates to myself.
C: The way we implement this is by running the import of the cache-busted version in a dynamic import. Behind the scenes, we'll go and fetch this new instance of the module, but it will be loaded as an orphan of the actual module dependency graph.
E: So the code that we're looking at is written in module X. When the programmer updates the source code of module X, including this part of it, you're not going to replace the accept handler with the revised accept handler that they wrote?
C: Yeah, we are able to rely on two things. One is that, again, in most applications your only integration point for this is at the render call. So if you imagine a React application that renders an entire route of something, those render calls are much fewer than the actual components on the site. You end up only really having to implement one of these, and it's generally just telling React to do its thing and render a new version of the application.
B: This ties in a bit with the point Mark made about Smalltalk: every framework will have its own logic to make a module hot-reload, because not everything is the same, and everything has its own logic for using the new module or function code.
A: The other thing where the Compartments API comes to bear on this: for one, the Compartments API decouples the fetch namespace from the logical namespace of the compartment, which would probably be convenient for this feature, because the module URL rewriting behind the scenes to do the cache busting can be transparent.
F: Yeah, on that topic: have you seen any pushback about memory usage going up over time? I know we've started to see that in Node with other workarounds like this.
C: I've actually kept my eye out for that. My approach was, when someone reports it, we'll deal with it, and no one has reported it yet. I think that's because it's so easy to just refresh the browser, which probably just happens in the course of someone doing web development over a long enough time. We have not seen it reported as an issue, but I am sure we have memory issues as the module graph grows.
A: Yeah, this is actually relevant to a point about Agoric, because what we're doing at Agoric is not on the web; it's on a blockchain. We're very interested in upgradability of smart contracts, and we're also very interested in smart contracts not indefinitely retaining all of the state of prior versions. Mark had some really neat ideas about how to bring something like HMR to bear on them.
A: And ideally we'd have a solution that works for both cases, if not...
E: Yeah, there's a big division here. Something whose purpose is development, where if things go wrong you can just refresh, doesn't need to reliably update old instances. Something used for production purposes runs blindly, with no developer there to repair things. Those are really two very different worlds, and a good...
E: A good precedent for the contrast is Smalltalk: updating code in place, upgrading old instance state, was purely for development. It was also, by the way, extraordinary. You could be inside the debugger...
E: ...going up and down stack frames, see a bug in the code, fix it, and continue the debug session live from that point. When it worked, it was great, and it worked something like 90 percent of the time. When it didn't work, that was fine, because you just started again. The other side of the contrast would be Java serialization, where the serialized state uses the fully qualified class name...
E: ...and identifies instance variables by name, and then there are all of these complex rules about what happens if you unserialize an old instance into a new class, where there might be new instance variables, or old instance variables might have gone away, or the typing might have changed.
E: Java serialization was something intended for production, and it had to be very, very clear about what the upgrade rules were. I'm not inclined to do it the Java way, but I'm just pointing out that it is doing this for production upgrade.
A: Yeah, and I think the industry in general has decided that that strategy is unwise, because of related issues with pickles, for example. If you serialize your state with a Python pickle and then deserialize it in a future version of your program...
A: ...you aren't actually guaranteed that the classes align with the state that's contained in them, and that is a frequent source of bugs. I think that's part of the reason we as an industry moved in the direction of using IDLs and tools like protobuf, to be more explicit about upgrading state, even though it's extremely expensive.
E: So, just terminology-wise, would it work, in the sense of not conflicting with current usage, to use "upgrade" consistently for the production problem, which is what Agoric is facing, and "replacement" for the development problem?
E: Upgrade would be for transforming the old state and code into a state that works with the new code but is a successor to the old state. The point about production versus development is that the upgrade process has to work with understood reliability across all possible instances of the old code.
E: Yeah, by the author of the code: the author of the code writes the code to upgrade old state into state that instantiates the new code. The key thing is that the author doesn't know all of the instance state; they're deploying the upgrade process to apply to instance state that the author of the code doesn't know about, because there are multiple instances of the old code out there in the world.
F: I think the differentiation seems reasonable to me. I do know V8 is removing its ability to do the kind of in-memory replacement you were describing with Smalltalk; they've had that feature, and it's no longer supported due to the number of bugs it had, so in their deprecation doc they recommend doing this live editing the way you're describing. I think that sounds good. I think there are a bunch of other questions...
F: ...if we do those manual upgrade handlers, particularly around what happens when the module namespace changes.
F: If a module namespace changes, that means you could add or remove exports, which is visible in a variety of other places. Export star in particular will be nasty: you re-export a star, and then you do this...
F: There's more to it: if you do export star, you probably should reload those modules even if they aren't on the path, if that makes sense. But as long as it's a manual process, I don't see any major objections. The major objection might be that the signal propagating out to the host to cause the reload needs to be clear, and I don't think, at least naively, we'll be able to find a cross-platform solution for a single kind of signal to do that.
F: So take web browsers and Node: if we produce a signal that causes this reload to happen, for development purposes say, the host has to accept it somehow. The host could be the person creating compartments, for example.
F: I don't see whatever we send up automatically working everywhere; I think it'll have to be per host for the most part. It sounds like Snowpack has at least some specification for doing this across different environments, and so that seems good.
F: So yeah, there can be many entry points into your module graph, Dan, and those are going to be the things originally without dependents.
F: This is going to have any node in the graph propagate to its dependent modules until it terminates, having visited all of its dependents and their dependents. Since we have a graph, it's hard to use tree terms for this.
D: But the idea is, this is related to the sort of cache-busting strategy where you're loading and kind of rotating that hash until you get up to something that does not...
C: The cache busting is more the implementation detail of how we get this today in an ESM-only environment. The idea of bubbling up, of traversing through the graph to dependents, exists because there is no such thing as an automatic acceptance of a module update. Using the React example: to accept it, you actually have to re-render your application, so there is no way, in that example, that the platform could automatically accept it. The accept has to be logic provided by the user.
D: I'm just trying to think about how this could work with different approaches to cache busting, because this is something I'm thinking about in the context of a standard bundling solution. Maybe these cache-busting identifiers should be part of a separate request header rather than part of the URL, or something like that, and I wonder whether that would affect the design.
D: So I guess part of what I'm wondering is whether we want commands to control the ESM import graph, to actually load or unload modules at all, or whether the current way the ESM module map works is enough. You talked about import maps; is your plan to make use of those at some point in connection with this? Because I think those also have this property that you can't go and change them later, like the module map.
C: In terms of what we're looking for, or what would help us, since I've described the interface and a lot of this is working around limitations, there are a couple of things. One is the idea of having to traverse up the import dependence path. Compare that to webpack...
C: They will not do that. They will just essentially replace the individual instance itself: basically sending the code itself over the wire, evaling that code, and replacing it in the registry that they own for all of these modules. In one sense, the way we do this is because we're using URLs as the identifier and we don't control that. There's something there about the ability to patch in the single update, and then to combine that with, okay, here's how I accept it.
C: If you check out webpack's spec for this, they don't have the same problem of, okay, I have accepted an update, but I'm still running the original accept handler. I believe, Mark, you're the one who pointed that out: what happens if you change the accept handler code? In a world where you control the module registry, you can pull in that new code, and then that new code is actually accepting itself. So they don't have that same problem.
D: Can you go into more detail about how that works? I think that would help my understanding. It seems like this relates somehow to this cache invalidation.
C: Loading the update into the module graph is where we hit that issue with a URL: once you've loaded the file at its proper file name, that thing just exists and you can't replace it. So there's the uniqueness, and there's no control over replacing something by its URL, when the URL is itself the key, the index into the module graph.
C: The other side of it is the application of that update. Again, because we have that limitation on loading, we have to load the accepting module but then apply it to the original module that was loaded in the graph. So we use those boundaries, essentially.
C: This is where this one module will actually never change. By having an accept handler, we will never be replacing this module via some uniquely generated URL that cache-busts and eventually replaces the original one. When there's an acceptance handler, we say, okay, this is a single instance; there will never be two of these running in the same module graph. Instead, as an accepting module, it will apply updates to itself, and its URL in the main module graph will not change.
C: Yes, on the word "replaced", just making the clarification that it does not itself get replaced: by being an accepting module, it keeps its own instance within the original module graph and applies updates to itself. Anything that is not accepting can be loaded a second time or a third time; you end up just orphaning instances, and, as I think you pointed out, that's where the memory issues come in. You end up essentially orphaning a lot as you develop.
D: So suppose we had an API, for example, something like import.meta.set, that took a module specifier and a new thing that looked like a module namespace object, and imperatively changed what the module map points to. I'm not saying we should add this, but would this kind of thing give you the capability that webpack has right now in changing its own module graph, or what is the missing capability?
C: Yeah, I believe it would. The missing capability here is that idea of: I want to load this new version of the module, whether that's done by sending code literally over the wire and then evaling it, or by saying, here's a new URL, load this. However it's loaded, the missing feature for us is the idea of, okay, I now want to replace the existing module in the existing module graph, at the original URL. That wouldn't change too much about our interface, right?
C: Every time we bubble up, we have to update the parent of that module and its dependents all the way up the chain. Instead, we would just be loading the changed file and then calling the accept handler, not in a new context or a new module or anything. It would just be called, assuming that its dependencies, its own already-loaded imports, now point to the new place all the way down the tree.
C: That would bring us more in line with the expectations of HMR today: not having to worry about this context of applying an update to itself and having to remap its own file's imports one-to-one. You could just say, okay, by applying this update in the tree, I have, using the live bindings, basically replaced it with the newest version of this module.
D: How would we feel about exposing an API like this to JavaScript, which would let you set these live bindings in that kind of way? This is for the case where you're not using an accept handler, where it's just supposed to update automatically, because it just replaces what the module exports.
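The effect being asked about, replacing what a module exports so that importers' bindings pick up the new values without reloading, can be approximated today in user space with a getter-based namespace, which is roughly how bundler HMR runtimes keep consumers current. Everything here is an illustrative sketch of that mechanism, not a proposed language API:

```javascript
// User-space approximation of rebindable "live bindings": importers
// read through getters on a stable namespace object, so swapping the
// backing record updates every consumer without reloading them.

function makeLiveNamespace(initialExports) {
  let current = initialExports;
  const ns = {};
  // Getters are created only for the initial export names; real live
  // bindings would also have to handle added or removed exports.
  for (const name of Object.keys(initialExports)) {
    Object.defineProperty(ns, name, {
      enumerable: true,
      get: () => current[name], // every read sees the latest record
    });
  }
  // The dev runtime would call this when a new module version loads.
  const replace = (nextExports) => {
    current = nextExports;
  };
  return { ns, replace };
}

const { ns: math, replace } = makeLiveNamespace({ double: (x) => x * 2 });

// An "importer" holds the namespace, much as `import * as math` would.
const before = math.double(4); // 8

// A new module version arrives; the importer's binding updates in place.
replace({ double: (x) => x + x + 1 });
const after = math.double(4); // 9
```

The caveat in the comments is exactly the namespace-shape problem raised later in the discussion: exports added or removed by the new version are not covered by the original getters.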
A: Yeah, there's also the matter that currently the Compartments API assumes an append-only module identifier namespace. So, which is on the table: do we alter the Compartment API so that it has some feature that would allow some of its internal module map to be invalidated and replaced, which incidentally would have to be asynchronous?
A
Incidentally, my intuition on this is: suppose that we change nothing about the compartment API. Can we support this feature? Does the compartment API better enable frameworks to implement this feature using this module loader API in the language? I think that the answer is yes.
A
It
would
be
possible,
for
example,
to
construct
a
new
compartment
that
at
least
partially
shares
some
of
the
module
instance
with
instances
with
its
predecessor
and
then
allow
it
to
to
live
on
on
its
own
again,
resuming
is
an
append,
only
module
module
map,
but.
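One way to picture that "successor compartment" idea is a sketch over plain Maps rather than the proposed Compartment API: treat a compartment's relevant state as its append-only map of module instances, and build the successor by sharing every instance except the invalidated ones. The `successor` function and the instance-map representation are assumptions made for illustration, not part of the proposal.

```javascript
// Build a successor "compartment" that shares already-initialized module
// instances with its predecessor, except those being invalidated.
function successor(predecessorInstances, invalidated) {
  const instances = new Map();
  for (const [specifier, instance] of predecessorInstances) {
    if (!invalidated.has(specifier)) {
      // Share the existing instance by reference — no re-execution.
      instances.set(specifier, instance);
    }
  }
  // Invalidated specifiers are simply absent, so the successor's loader would
  // re-fetch and re-initialize them on first import; from then on, its module
  // map behaves as append-only again.
  return instances;
}
```

A successor built this way never mutates the predecessor, which is what keeps each compartment's own module map append-only.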
E
I really like that idea: that it's a new compartment that is a successor to the old compartment, rather than just a new module instance within an existing compartment that somehow replaces the old module instance in place.
F
Remember not to cross the streams — that would be bad.
E
Oh, there are many ways you could mean "cross the streams." What are you talking about here?
F
There isn't a good mapping to our terminology in this group. It depends on how expensive it is to generate a new compartment, hopefully.
E
Yeah, we're not talking about instantiating a new realm. Instantiating modules should be trivially cheap — I mean, modules should be featherweight, extremely cheap per instance of a module within one realm, and that's certain. We can already take a look at the XS implementation, where I think they actually are more expensive than they should be, but there are still multiple compartments being instantiated in the light bulb.
A
The compartment shim is relatively lightweight; it's just a bunch of tables, a small number of them. In any case, we are over time, and I want to make sure that we thank Fred and Jovi for joining us, and Daniel for bringing this group together, and hope that maybe you would like to join us for this conversation in the future.
A
Let
me
know
if
there's
a
good
time
to
put
on
the
calendar
if
we
want
to
to
continue
this
conversation,
and
apart
from
that,
I
I
acknowledge
you
probably
all
have
places
to
be,
and
and
thank
you
again
for
coming.
F
So, just back to Daniel's question: I think exposing something seems reasonable to me, at least. I don't know about the API design of what that is — I'm not committed to any API design. I have lingering questions about the source of whatever this signal is. Right now it's coming in from the server somehow, and then it needs to notify the module graph. So are you just basically sending a signal to any module by string, by href, whatever you want to call it?
F
So
if
foo
updates
like
that,
server
needs
to
tell
foo
to
start
the
propagation
somehow
and
it
foo
can't
do
it
itself.
F
Yeah, so that's one end that I'm trying to figure out. I think doing it by URL or something seems fine. As for what the actual signal is, I don't have any strong preference — a slight preference that it should be generic, so you can basically have the same ergonomics as try/catch here. Per the acceptance criteria, you're asking for some kind of barrier semantics, which I need to think on. The barrier would be basically propagating out until you reach a depth that is entirely covered with these handlers.
A
A point which, in ESM semantics, is not a stack; it's a traversal.
F
Yeah, yeah.
F
And
once
you
reach
a
guard
for
these,
in
this
case
the
accept
handler
you
stop
the
traversal
on
that
graph
node,
and
so
you
continue
all
other
current
traversals,
because
there
can
be
multiple
active
at
a
time.
So
if
we
have
two
and
at
depth
one
we
encounter
and
accept
on
one
of
those
traversals,
we
stop
that
traversal
and
then
in
depth.
Five
on
the
other
one
we
encounter
it
and
there
are
no
other
active
traversals.
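The guarded bubbling just described can be sketched as a breadth-first walk over the reverse dependency graph: each path stops at the first accept handler it reaches, and a path that escapes to a root with no handler means the update is unguarded (typically forcing a full reload). The data shapes here — an `importers` map from URL to the modules that import it, and an `accepts` set of modules with handlers — are assumptions for the sketch, not any tool's actual API.

```javascript
// Walk upward from the changed module; collect the accept handlers that
// bound the update, or return null if some path reaches a root unguarded.
function findUpdateBoundary(changedUrl, importers, accepts) {
  const boundary = new Set(); // nodes whose accept handlers absorb the update
  const visited = new Set();
  const queue = [changedUrl];

  while (queue.length > 0) {
    const url = queue.shift();
    if (visited.has(url)) continue;
    visited.add(url);

    // A guard: an accept handler stops the traversal at this graph node.
    if (accepts.has(url)) {
      boundary.add(url);
      continue;
    }

    const parents = importers.get(url) || [];
    if (parents.length === 0) {
      // Reached a root with no handler on this path: the update is
      // unguarded, so the caller would fall back to a full reload.
      return null;
    }
    queue.push(...parents); // continue all other active traversals
  }
  return boundary;
}
```

With two importers of a changed leaf, one path can stop at depth one and the other continue to depth five, exactly as in the discussion above; the update applies only if every path terminates at some accept handler.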
F
The
barrier
idea
is
just
the
computer
science
idea
that
you
have
a
shared
condition
which
must
be
met
across
all
concurrent
executions
before
anything
continues.
A
I
think
that
fred's
description
of
the
semantics
of
of
of
what's
presently
implemented
with
hmr
is
a
great
deal
simpler
than
that.
I,
my
understanding
is
that
it's
by
a
complete
traversal
of
the
cog
graph
and
it
hap,
and
if
it
happens
to
be
that
there
are
multiple
accepts
in
that
in
that
traversal
of
the
code.
Co-Graphic
is
ambiguous
and
it's
discarded.
B
I think we could at least write up some visual examples and then come back to this.
F
I can write up some as well. All right, well.
A
Given
that
we're
over
time,
I
think
that
what
we
ought
to
do
is
put
some
more
time
on
the
calendar
when
jovi
and
fred
are
both
available
and
we
meet
regularly
at
this
at
this
time
every
week.
So
a
proposal
for
a
time
is
welcome.
I
I
think
that
yeah
we
were
booked,
for,
I
think
one
or
two
meetings
ahead,
but
we
can
come.
A
We
can
revisit
this
one
when
we've
had
some
more
time
to
to
think
about
it
and
come
up
with
a
proposal
on
well
we'll
just
keep
that
door
open
all
right,
I'm
going
to
stop
the
recording.