From YouTube: Node.js Foundation Modules Team Meeting 2019-04-10
Description
A
We are now recording the April 10th edition of the Node.js modules team meeting. I'm going to go ahead and quickly hand off hosting duties to Jordan Harband, and so I should be able to close this stream. Now, sorry I can't make it, y'all; I am in Europe and it is late and I'm at dinner. But please message me if you have any problems, and then I'll follow up with getting this video on YouTube afterwards. Ok, bye! Thank you.
B
Ok, so I've never done this before, so please feel free to interrupt me if I am forgetting anything. I'm looking here at the agenda that was posted on GitHub, and it looks like I see six different bullet points. The first one is about locking down the process and buffer globals. Since we have an hour and we always overshoot it, maybe we can try and keep things closer to 5 minutes than 10 minutes; and you know, if things need to run late, that's ok, but hopefully that will help us meet our time. So I think that's Guy.
D
Yeah, so as discussed before, the approach is to basically deprecate these globals just in ECMAScript modules. Again, my hope on this is not necessarily even to get full consensus that we want to do this, but more to kind of say we should keep the door open on this. That's the kind of position I'm taking on it now, because once we unflag those globals, we're stuck with them for the rest of Node.js's life, effectively, so the benefits are...
A
process gives you access to high-resolution timers, or the native bindings, all the environment variables; it's like root-level security stuff. And the question is: do we want every single module that you install from npm to always, for the rest of the life of Node, have access to every single root-level security thing? There is an argument to be made that it might be possible to improve the security of the module system in future, and that this could sort of be something on the path to paving the way for that, yeah.
D
Sorry, I'll just keep it very quick: the first step that we've done for that is to make process and buffer getters on the current Node 12, and that avoids one of the possible breaking changes of this feature, so we've kind of reserved it in that way. And at the moment Matteo has put a block on this feature, so I think that the main thing at the moment is actually working through the block with them, in terms of how to move forward on this.
B
So I agree with you, Guy, that once we unflag, these globals must remain present for the lifetime of ESM in Node. So if we are going to remove them, prior to unflagging is the time; I think it's fine to remove them at any point up until that point, if we decide we want to remove them. I think there's a bigger premise here that isn't necessarily agreed on.
B
The very long roadmap to having a different security model for Node modules: is that something that is possible or desired, consistently? I don't know; that's a long discussion, and it's probably not worth resolving now, or in this group. But the other thing is: the general thrust of my pushback has not been about the buffer global, which you can get rid of all you want; it's about the process global.
C
So, actually, two things. The question that I still don't have an answer for, and didn't find an answer for, is about platform detection; I think it comes hand in hand with worrying about removing process. Is there a clear path to, you know, what people will copy and paste to know they're in Node? And the second point is: I know that navigator in the browser is flavored depending on the context; the idea is that we could have process, even if it's through a proxy, like a frozen proxy kind of thing, become a sanitized proxy in the module's code. So I think those are the only two thoughts that keep coming up related to this, but definitely I'm for making it safer, sure.
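The copy-and-paste platform check being alluded to usually looks something like the following. This is a common community idiom, not anything specified by Node itself:

```javascript
// Common community idiom for detecting Node (vs. a browser): guard on the
// `process` global that this discussion is about locking down.
const isNode =
  typeof process !== 'undefined' &&
  process.versions != null &&
  process.versions.node != null;

console.log(isNode); // true when run under Node
```

Code like this is exactly why removing or sanitizing the `process` global has compatibility implications.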
D
Yeah, these are big discussions, and the security and usability arguments are very, very important areas, and areas that need to be fleshed out, and obviously we can't go through all of that right now. But to respond very briefly on those two points: if we someday have a process proxy, or some kind of restricted process, this work would be the first step towards that, because we're changing, we're providing, a different version of the global. So this would be a path to that, and it would be the path to that.
D
So that is one of the ways we could go, and the reasons for taming process are exactly as you say: detecting these common detection cases, and I think there's code that does that. We need to be able to work out what those workflows become. Jordan, you previously mentioned this concern.
D
I opened an issue with Deno discussing what they would want to do for process.env in a Node-like environment, and the response I got back, and I can share the thread, was, as far as I recall, that this isn't necessarily a good pattern: doing process.env.NODE_ENV, and having it through a sort of a global. A lot of the process.env.NODE_ENV production checks are actually doing requires.
B
Jan, you have your hand raised. So, I won't speak for long, but I'm with Guy, I think. So there's a few things. Number one: Deno already has a global called Deno, so you could use that for environment detection; I suspect people will. Number two: there's a long list of things Deno thinks are bad patterns, or Ryan thinks are bad patterns, where there is in no way broad agreement.
B
There's no broad agreement that any of those things are bad patterns. And number three: determining entry points per environment is only one of many use cases for detecting the environment. And so, while exploring alternative, you know, declarative ways to do that for each of those use cases is great, that's not going to solve them all, certainly.
E
Just very, very quickly here: also, my understanding is that we are removing the global, not process itself; like, you can always require process or import process, and you have the object. So if you want to get at the env, you still can, and it's backwards compatible with, I think, all LTS versions, so there's no reason why you couldn't import process from 'process' in Node 10 and do your checks.
B
It's true; I mean, build tools could be updated to support that, but at the moment they will not necessarily. I know that's also tricky; you're right, there's lots of paths for doing that. But there is also a lot of code on the web and in Node that behaves a certain way because typeof process is undefined or not. It's the same reason that, like, globalThis couldn't ship as "global".
C
I had just a secondary point on the importance of platform detection: it's not platform detection in a predictable way; you're rather doing it sometimes not even knowing if you're in a module. So when you're doing typeof process, sometimes you're writing a file that you expect would not necessarily run under any known conditions.
C
So it's really going to take a lot of work there. It's doable; definitely, everything is doable. But I think importing or requiring only addresses access to the features of process; it does not really satisfy the case where, if you import process, it crashes your browser because you're not in Node. So there will be very, very tricky questions here, I think.
F
No, I can say them; I mean, it's pretty much just that. Like, yes, removing these unsafe process globals is good, universally good; it's probably something that shouldn't have been global in the first place. However, compat concerns say that you can't quite just remove them; like, you have to do more than that. And by that same token, I don't think ES modules being made is, like, an excuse to say: well, we can remove them now. It doesn't enable that; it doesn't somehow make it okay.
F
For the same reason, I don't think that you can just take fs and discard the non-promise versions of fs when we're in ESM. Like, no: you keep presenting the same APIs. The fact that the module loader and the module system are changing doesn't give carte blanche to edit the APIs presented to those modules, and I don't think that it's similar to removing __filename or __dirname.
F
ESM is, to me, more of a technical restriction than anything else, and, like, moving process and buffer into module-scoped variables and then saying, well, they're okay to remove in ESM now, doesn't make it okay to me, because they are still effectively global. Like, that technicality is just a technicality to me: they still look the same, the implementation is just different, their usages are the same.
D
We put process and buffer into a kind of a scope wrapper, but that part of it is actually just a performance optimization. For all intents and purposes, the way this works is terrible, but the way this works is by checking, when you access process, whether you're in a module environment or not, and it's actually a stack-based check. So it's almost like an access control on a global, and this is not a pattern that Node should do going forward into the future.
D
But it's a pattern that works; I mean, you can have an eval in an eval in an eval that accesses the global, and it works at a syntactical language level. We couldn't do this for importing fs, necessarily, because we couldn't do this, for example, for property access; it's because the global access is specific syntax that we can make sure that this kind of check works out. Generally, in languages, you don't want to be doing this, but yeah, it's an exception that we can make work out.
H
Yeah, so, can you see anything? I don't know what the hell's going on here. So yeah, I opened an issue and a PR; let me know if you can see this. So --type got converted, kind of at the last second as part of the upstream PR, into --entry-type, right around when we'd kind of found what I considered a bug with it: that it wasn't behaving as at least I had expected it to. Clearly others had expected it to behave this way, and it wasn't; you know, almost a miscommunication there. Apologize for that, but yeah.
H
I feel that the current implementation is a footgun, in that, as several people in the upstream PR had mentioned, they expect that if you set the entry type, or set the type of your entry point, that sets the type of your whole project, or at least of that package scope, and that's not the case. It's like, if you have index.js and then, you know, start.js: if you set --entry-type=module and run index.js, that applies to index.js as ESM, but then start.js...
H
The initial proposal was a flag called package type, which basically, if you used it, um, does exactly the same thing as the type field in package.json, and which would behave, I think, as people were expecting entry type to behave. But people had some objections to that.
H
So then the kind of next-best option is to just get rid of a flag to override the type of a file at all. We still need a flag for eval, print and standard in, the, like, string input types, so I was proposing we rename it, not just rename it, but rename it from --entry-type to --input-type, and restrict it to only eval, print and stdin, and that's it. Like, if users want a flag for working with files, then we can see why they want it and evaluate their use cases.
H
Would it be more appropriate for entry type or for package type, or do we not want to support it? But, like, basically I was like: let's scale this back for the initial release as part of Node 12, and then, you know, kind of start the conversation with users, to see, like, if we want more flags, or a broader scope of a flag, what it should be from there. So, I can't see what people are chatting, so I'm going to stop sharing so I can see.
B
I just indicated that I was fake-raising my hand. So, I think the way you've explained yourself, in the issue in particular, is very well done, and certainly here as well. And I think, regardless of the bikeshed about the argument names, to me the only thing that is strictly critical is the, like, string input case, input type in other words, because there is no way to control the parse goal otherwise.
B
So for me, that's pretty... like, we have to at least provide input type. When we did all our use cases documents and stuff, really the only other thing that really overlapped with this was: I have a one-off script and I want to run it with node, and it's not necessarily part of a package. And if you can rename the file, you can already just give it a .cjs or .mjs extension or whatever.
H
Yeah, that was the thing that Myles had mentioned in the conversation about this, and that is basically the one downside: if we remove this flag, then this specific case, the extensionless case, essentially becomes slightly harder. In that, like, say, you know, npm ships an extensionless file, or, like, if you had a backup shell script and you want to be able to run it via just "backup" instead of typing "backup.mjs", it basically just means you have to create a symlink from that extension.
H
But I just don't think that that use case is so huge, of, like, wanting to run your extensionless ESM files, and I think there's a lot of people that are gonna be burned by this flag not behaving the way they expect it to. And I think, you know, asking the people who want to run an extensionless shell script, essentially, to make a symlink feels like a pretty small price to pay, in order to spare the broader public of, you know, a footgun, essentially. Sorry, sorry.
B
Yeah, so that was essentially, I think... like, I agree with your conclusions there. I think that the challenge is that whenever we've made a decision that makes a use case impossible, we definitely have to explain ourselves, and when we make a decision that makes a use case annoying, we should explain ourselves. And extensionless files: if we go with input type, we will have to explain ourselves for extensionless files.
H
Because it's a symlink to a file inside your package, your package would have a package.json, which could have a type: module field, and so that would cover it. If they happen to run it with, like, --preserve-symlinks, then it wouldn't work, but I mean, no one should be running... like, if they use that flag, then they know what they're getting, essentially.
C
The UX concern I had here was about the preserve-symlinks flag: in this very particular case, maybe we want to explore if this is a case that is separate from preserve-symlinks in general, or is it exactly what the people who found the preserve-symlinks flag useful would utilize. So that's all I'm trying to point out: sometimes a symlink is very, very predictable; this one may not be as predictable as a symlink that is in your sources.
H
I mean, I think a lot of things break when you use that flag; like, you can't run npm with that flag, for example. So, like, I think, you know, that flag has a particular use case that people would use it for; I don't think anyone's going to, like, put that in their NODE_OPTIONS and expect to, like, run all the time with that, because they'd never be able to use npm, for example. I don't know; I mean, we could also, like, push this out and see what people complain about.
H
Because I can't think of a use case for when people would use that flag, to be honest. There must be use cases out there, so, you know; but before, you know, we do something that I think most people wouldn't want, for a use case we can't think of, maybe we should, you know, kind of do it the other way around, and let people point out what they need and why.
H
That was... I deliberately put that as a separate PR. I mean, I don't know... obviously, people are gonna object to that, so I'm assuming we're not merging that in today. But even if that was approved today, I would not have put that as an upstream PR, like, merged together; like, I would submit two of them, because I totally expect upstream people to complain about auto as well, so I think that should definitely be its own thing, yeah.
B
Personal opinions aside, I think keeping them separate is wise, for all sorts of reasons. Cool, okay. So then it sounds like we do have quorum; there's 17 members and ten of them are here. So it sounds like we have consensus to merge the input type PR as-is, and then presumably to upstream that as well. Is that cool? Yeah? Everyone agree? All right; I've never run the meeting before, again, so just making sure. All right, so, look, great: that's in progress!
H
It was on there; it's on the agenda. Intriguing... oh, you're right, I do see input type auto. Okay, let's talk about auto. So, I mean, assuming we have objections, so I don't know; I've just spent too much time on this. But I was proposing... I think it's pretty common, especially for string input, where people, like, might be piping in to node from some other source, so they might not know what the string input is; like the coffee command and the babel command.
H
So at least we know that. So the question then is, like, for a file without an import or export statement: it could be ESM that just somehow doesn't import or export anything, or it could be CommonJS, or it might not be CommonJS, and we just don't know. So what do we do in that situation? There was discussion about whether to, like, look for references to the CommonJS globals, and if you see those, then it's likely that it's CommonJS, and if it's not, then it's truly just ambiguous.
H
Basically, the consensus seemed to be that that was just too messy, especially when you consider cases like, you know, typeof require to try to detect if you're in a CommonJS environment or not, and then, well, that is a reference to the CommonJS global require, etc, etc. So the end result of the discussion was basically: redefine the goal. We're not trying to definitively detect CommonJS; we're just saying, like, if it contains import and export, run it as module; otherwise, run it as CommonJS, just the way it is now.
H
And just by definition, ambiguous files get run as CommonJS, the way they do now, and that's just the definition of the feature, like you would put in the docs: that is what auto does. Run as ESM if import/export, and then else it does CommonJS. And so whether or not we add it to --input-type is kind of a separate question.
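A naive sketch of the rule as just defined. This is illustrative only: a real implementation would parse the source rather than pattern-match, and `guessModuleFormat` is a hypothetical name, not a Node API:

```javascript
// Illustrative-only sketch of the "auto" rule described above: source that
// contains a top-level import or export statement is treated as ESM, and
// everything ambiguous falls back to CommonJS. A real implementation would
// use a parser, not a regex.
function guessModuleFormat(source) {
  const hasEsmSyntax = /^\s*(import|export)\b/m.test(source);
  return hasEsmSyntax ? 'module' : 'commonjs';
}

console.log(guessModuleFormat('import fs from "fs";'));      // "module"
console.log(guessModuleFormat('const fs = require("fs");')); // "commonjs"
console.log(guessModuleFormat('console.log("ambiguous");')); // "commonjs"
```

The third case is exactly the ambiguity discussed above: a file with no import/export is run as CommonJS by definition.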
H
I would most want it to be added to, I think it's vm.Script, or whatever the run-script API is, so that Babel and CoffeeScript and TypeScript, and anything else that generates a string of JavaScript to be run, where those tools might not know the type of the input, have a way of, like, kind of deferring to node for that detection piece, rather than all these tools having to come up with their own solution for it, and then we have inconsistent solutions between the various tools. So, so, I have my... oh yeah.
B
For example, let's say you're using a hot reload, and you save it, and then you add an export or an import, and then suddenly it parses differently. Similarly, you wouldn't want to be typing in an ESM file and then delete an import, and then, until you put in a new one, it suddenly parses differently. That sort of silent switch strikes me as being very surprising.
H
There's a big difference, though, if I can just respond to that: this is explicitly opted into. Like, I don't think anyone's proposing we make this, like, the default behavior. So this would be only in use cases where you just don't know the goal of the input, for whatever reason, and you can't figure it out; you have a way to, like...
B
Opt into this, sure; but if we had, like, vm.runScript and vm.runModule, which I'm assuming we would, or some way of, you know, explicitly doing it, then you could use the general API to say "what do you think this is", and then the code could decide "I want to run it as a module", instead of allowing node to make that decision for them. So, so.
I
Can you hear me? So I'm just curious: given the most recent update to this, where it's just basically import/export, like you just said, it would be for something where you don't know the kind of the file at all; but all it's doing is detecting import and export statements. I think that goes faster.
H
The end of that conversation was that that kind of means that it's useless if you wanted to apply it to every file; but if it's the entry point only, it's still pretty valuable, because it's gonna be awfully rare that you have an ESM entry point that doesn't import or export anything. You know what I mean? So I'm...
G
So, I mean, I think I would prefer that we avoid doing auto-detection of any form, partially for reasons that were, you know, already stated. I do think that there's perhaps a valid argument in saying: okay, well, at least if we have something that's able to do this, maybe not as a flag, but maybe as a function, then at least everyone else's sort of, like, auto-detection will be consistent.
I
I just want to say, for the REPL, we're actually working on a custom parse goal that would go into TC39 for REPL environments, which is, basically, sort of... I mean, it's kind of very fuzzy at the moment, but it's not something that could be solved with this kind of detection anyway.
H
Anyway, how about this, just to wrap this up: why don't I refactor this PR into, like Jordan was saying, some kind of function on, you know, Module or something like that. That could be an API that tools can pull in, and we can kind of go from there and see what people think of that; and if it gets added to --input-type at some point down the line, it could, but it would really start here. What do people think of that?
F
The concept is thus, right: so back when we first started working, we sat down, and someone said, well, we have to do importing of things asynchronously, because spec and technical limitations, right. So we sat down and went: oh, so that means ECMAScript module graphs and Node module graphs can't really interplay, which means that it can't really be, strictly speaking, substitutable; you'll never be able to require ESM. And so, well, that means in CJS we'll need to have different APIs for importing ESM versus importing CJS, and that separated the entry points.
F
That entry-point mentality has permeated a lot of decisions we've made since then. Now, I've been honestly slightly unhappy with that the entire time, because of how it's changed all of the discussions we've had since then. And I was made aware of some work that... I'm sorry, what's Snek's real name again? Daniel? Oh god... Gus, there we go. ...that Gus had done, where he experimented with just synchronifying arbitrary promises in Node; and after looking at that...
F
This actually... like, the reason it doesn't work is because, with arbitrary synchronified boundaries, you introduce the potential for deadlock across those boundaries. The thing is, within Node core, that's not actually a problem when we're working with the loaders, because as long as we control the code executed within the loader, we can guarantee that there aren't any cross-boundary async dependencies; and therefore async versus sync is actually completely up to the choice of how things are executed in C++. Like, there isn't actually a difference from the runtime perspective.
F
It's just about what gets called when. And so, in that respect, I can actually go back, and I can say: all right, with, you know, a little bit of fiddling, we can in fact require ESM in CJS. And once we have that, there's no reason to have, air quotes, "dual-mode" modules, with the exception of being able to ship an implementation for old versions.
F
of Node. At that point, you want to just say: if I ship ESM, execute ESM all the time; whether I'm required from CJS or I'm imported from other ES modules, give me the same thing; let me only ship one implementation, please. Like, there's no reason to not want that, really, because to me it's almost insane that you would have multiple implementations of one module knocking about in the same runtime. And I know, because of versioning conflicts, that already does actually happen, having multiple implementations running around in the same runtime, and that does...
B
Then, separately: I absolutely will publish all hundred-plus of my packages in a way that they will work in both new and old Node. It would be great if I didn't have to avoid using ESM syntax to ensure that, and so I would like some way to do dual-mode packages. I think I agree... I know that I don't have a huge, like, conflict with either of the two proposals; I just want to make sure that it can be done in some way.
B
I have expressed in the past that I'm very in favor of extension resolution, because I think that does address it, because the way that the source-and-dist problem is currently solved in CJS is, like: you might have a couple top-level files that import from dist, and, you know, all your actual code is in source. So there's, like, hacky solutions, which need, deserve, a better solution; but not in ESM.
B
In ESM there's only one. So yeah, I think that if we have extension resolution, and if we have some answer in an ESM-supporting Node to make sure that the CommonJS file can't be required for the same specifier, can't be brought in, you know, so that we don't have two copies of the same package on disk, then I think that that would be a really good place to be.
F
I assume that means... oh yeah, I'm wholly with you; that's why I looked into this. Like, I do think that exports is a good thing to apply to both CommonJS and ESM, because clearly everyone wants it, right: like, people who are writing TypeScript do want the ability to just say, well, I've got my TypeScript in the source folder, but, you know, having my exports defined from the dist folder is very nice. And, like, right now...
F
...we already support extension resolution; old versions of Node already have a single main entry point. Working with what old versions of Node support, building a layer on top of it, and saying "this is how you get a new default behavior in new versions of Node that support ESM, but you don't have to; you can ship the same configuration file, and on an old version of Node you will get the older implementation that works on the older version of Node": it's a good way to layer that API, in my opinion.
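A sketch of the layering being described, as a package.json shape, expressed here as a JavaScript object so it can be printed. The conditional "exports" form reflects the proposal under discussion at the time, so treat the exact field shapes and file names as assumptions rather than a settled API:

```javascript
// Hypothetical dual-mode package.json: old Node versions only understand
// "main" and load the CommonJS file, while an ESM-supporting Node could
// prefer the "exports" mapping and load the ESM implementation instead.
const dualModePackage = {
  name: 'example-dual-pkg',      // hypothetical package name
  main: './dist/index.cjs',      // fallback entry for old Node versions
  exports: {
    import: './dist/index.mjs',  // chosen when the package is import-ed
    require: './dist/index.cjs', // chosen when the package is require-d
  },
};

console.log(JSON.stringify(dualModePackage, null, 2));
```

The point of the layering is that the same file ships everywhere: old Node never reads "exports", so it degrades gracefully.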
C
So that would be something that we would also have to really explore. A lot of this is, like, you know: child_process execSync already exists in Node; it works in pretty much the exact same way, yeah. So, you know, if we narrow down on this particular aspect of the chronological gap, can we work towards some sort of a model that we can, you know, throw things at and see if that works? I think that would be my suggestion: moving, you know, in that direction.
G
So yeah, I think I agree. Well, I do agree, because the ultimate purpose of having dual-mode packages is kind of invalidated if you have sync; it kind of exists, at least originally, to solve the problem of having sync and async. So I'm not sure if it's really... I don't think it's necessary to have dual-mode packages.
G
If we don't have this async/sync problem, I think it could be nice for backwards compatibility, but there are certainly other ways to do that, and people, I mean, already maintain, you know, various different branches and forks and things: existing modules that are backwards-incompatible to various versions of Node anyway. So that's a problem that exists anyways; I don't think
G
that's that big of a deal. I do want to mention, on the idea of making it sync: when I had looked into this, so, I had looked into, not necessarily, well, not specifically, making the promises or the promise chains in the loader execute synchronously, but I had looked into making, essentially, any true async calls that you made, as, for example, to libuv: they would open, like, some handle that would be persistent, and then you'd have to worry about...
H
Just... could we have a meeting dedicated to dual-mode packages? Maybe out-of-band, or not, I don't know; but I think that's probably one of the most important things in Phase 3 that I'd love to make progress on sooner rather than later. So I think maybe after Wes gets this PR up and we can look at it, then I think we should have a meeting dedicated to this, if people are up for it.
F
You still resolve all of the namespace objects before you even start evaluation, and the namespace objects are all you need; you return the appropriate namespace object to CJS. In fact, all I do is: do the resolution, get the namespace objects, return the namespace object to CJS; or, get the namespace object to return to CJS, kick off execution, and return the namespace object. If the execution is async, then it will be async; go for it, it's gonna be async.
F
The only thing that you could argue needs to be different about that is, depending on how you want the interop to go: do you want the require to implicitly await the asynchronous execution of an ES module? If you think it should, then that also needs to be sync, and that's where there's a problem. However, I don't think that should be the case, because require has never returned a promise implicitly before; and so, in that case, in the case where the execution does end up being asynchronous...
J
...it's not already awaited; and if you want your CJS module to wait for the results, as they're required to be ready, that's what dynamic import is for. But thank you for explaining that; I now understand your concept, which is: the require will instantly return the namespace object, regardless of whether the module has evaluated yet or not.
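Waiting for an ES module's result from CommonJS, as just described, already works with dynamic import(); a minimal sketch:

```javascript
// From a CommonJS module, dynamic import() returns a promise for the ES
// module namespace object, which is the interop path described above for
// waiting on an ES module's result today.
async function loadNamespace(specifier) {
  return import(specifier);
}

loadNamespace('fs').then((ns) => {
  console.log(typeof ns.readFileSync); // "function"
});
```

This is the asynchronous counterpart to the hypothetical synchronous require(ESM) being discussed: same namespace object, but delivered through a promise.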
B
So, just now, in case our meeting gets cut off any second: thank you for attending. We can keep trying to chat; I don't know how to keep everything going, but I'll either leave it or just cut it off, and we'll continue discussion on GitHub and see each other at our next meeting. Cool, all right, thanks everybody. We got consensus on a thing today, that's excellent, and we have some direction for some other things. So, you know, I'd like to keep that up.