From YouTube: Node.js Loaders team
A: All right. Oh, and I can hear myself, right? All right, we are live on YouTube. This is the May 14th, 2021 meeting of the Node.js loaders team. We have an agenda, which I will paste in the…
A: So, you mentioned on the thread for the meeting that the PR was waiting on something from me? Yeah, let's look at that.
C: Sure. I think it was a question forwarded to you by, oh god, what's his name, the guy in Vancouver.
C: He suggested doing something, and I think it was out of the scope of what we had agreed to in the original scope of work. So he tagged you to see if, I guess, it's okay to add it.
A: Even this one is, you know, 27 files changed; it's already pretty substantial. So I was feeling that, in general, we should just try to keep these as focused as possible and have many small ones rather than a few big ones. You know, unless there's a particular reason; that's just in general. Like, if there's a particular reason why… it adds a lot of work to push a piece off till later, but do you wanna…
C: The hooks… it would optionally allow a format to be returned.
A: Yeah, I think that was, I think, that's part of the design. So if you look at the loaders repo, I put the current design in as a markdown file in the loaders repo, so that way we can treat that as a working document, and we can open PRs against that markdown file to, you know, change the design over time, rather than it being locked in a comment on the GitHub issues thread.
A: getFormat, getSource and transformSource: collapse those into just resolve and load. That was like the only intent of this, yes. And I think the question there is, like…
A: Well, here's the only issue: right now it's kind of clean, in that the resolve hook, or the chain of resolve hooks, always generates one URL, just a string, and that string becomes the input to the load hook, and that's it. That's the only interface point between the two.
C: Yeah, and then I was suggesting that if they do include that optional format, it be included in the context object, which is the second argument of the load hook, as a "hey, this information is available if you want it."
A: Yeah, right, here. So what if the resolve hook always returned an object as well, with two values: one that's url, and the other that's format. And format would just be optional from the resolve hook, but required from the return of the load hook.
C: Yeah, exactly. And the resolve hook currently does already return an object; it just currently has only one property, which is url. So, yeah, I think it would be a pretty easy thing to facilitate.
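A rough sketch of the shape being discussed (the hook bodies and the .coffee example are illustrative, not the actual PR code): resolve keeps returning an object with a required url, and may additionally return an optional format that the machinery can pass along to load.

```javascript
// Hypothetical resolve hook returning an optional `format` alongside the
// required `url` (names follow the design being discussed, not a shipped API).
function resolve(specifier, context, defaultResolve) {
  if (specifier.endsWith('.coffee')) {
    // This loader happens to know the format up front, so it returns it.
    return {
      url: new URL(specifier, context.parentURL).href,
      format: 'module', // optional: a hint for the load hook
    };
  }
  // Everything else falls through to the default behavior,
  // which returns only { url }.
  return defaultResolve(specifier, context);
}

// Stub standing in for node's default resolver, for illustration only.
function defaultResolve(specifier, context) {
  return { url: new URL(specifier, context.parentURL).href };
}

const ctx = { parentURL: 'file:///app/main.mjs' };
console.log(resolve('./util.coffee', ctx, defaultResolve));
// { url: 'file:///app/util.coffee', format: 'module' }
console.log(resolve('./other.mjs', ctx, defaultResolve).format);
// undefined: format stays optional from resolve
```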
C: Yeah, I thought of adding it as another argument, but I typically try to avoid having lots of arguments, especially because it makes future updates more difficult to keep backwards compatible and stuff. So it seems like, since context is already an object that contains meta information, and all of the properties that would be on it are optional, that seemed like a proper place for it. But I don't…
A: So I think that's fine with me. I think then that maybe step one, though, should be a PR against this document, adding that.
A: I mean, I guess one thing that this document doesn't have is a description of each hook. There's an explanation of how they would chain, but there's nothing similar to, like, this page about…
A: You know, "this is the resolve hook," you know what I mean. Although maybe you don't need it on this page; maybe you just need it… I guess this PR probably updates the docs too. So if this PR updates the docs to add what you're describing, then that's probably good enough. Okay, does anyone else have any concerns about this?
A: I mean, one thing that does occur to me, though, is that if resolve returns a format… I think right now, yeah, so in these examples: these examples would need to be updated, because it always assumes that each resolve function returns just a string, so this would have to be an object with… oh, I think that's… wait.
D: So I believe at one point in time we did allow returning format. I don't know if that's been stripped from resolve, due to having an easier migration from an older form of hooks.
C: Because I think it was moved to load because, under certain circumstances, it's impossible to know what the format is ahead of time. And then this would just say, in the situations where it's not impossible, you can.
C: For instance, I think this was related to the HTTP loader, where you're fetching some content, and you don't know what you're going to get back until you have the header information.
A: Yeah, I think that was in the… I think I put it in this somewhere, like…
A: Like "prior art," or something like that; I forget where it went. But there was an issue that Jan had opened.
A: That was like the first pass of this PR, and Jan was trying to solve that issue of: oh, for some use cases you don't know the format until you've loaded the file, for example, and things like that. So we needed to somehow, or sometimes, be able to… I think most of the time we would get the format by just looking at the file extension on the end of the specifier, but there would be some cases where you need to look at the source of the file, or the headers of the HTTP request, or things like that. So yeah, it's in here somewhere, where that issue was that this was meant to solve. But so, yeah.
C: I would like to get this PR out the door, and it's not a lot of work to get it done, so I could do it today or tomorrow.
A: It's really up to you. I mean, I feel like, I mean, Bradley would know better than me, but I feel like smaller PRs get merged in faster, so you're kind of better off keeping it more scoped, especially if Guy's already reviewed this. I think maybe the next step is… so, I need to review this, because I haven't looked at this in a couple of weeks. I feel like I'm likely going to say it's all fine.
A: Have you resolved everything from Guy? Because Guy left a lot of notes, and he hasn't approved it yet. So, since he wrote most of the ESM loaders code, I think most people would be hesitant to merge this in before Guy approves it. He seems very positive in his notes, so maybe they've all been addressed, but I just want to make sure that we do, you know.
C: Yeah, I believe I have addressed all his comments, except for this one, since I was waiting for an answer from you. There's an outstanding issue where some automated thing is failing, because there are missing IDs in a markdown file; I don't know how to fix them.
A: Oh, that, okay. Where… is there a link to that?
A: I don't see any X's on this.
C: Maybe you need to be signed in. If I scroll to the bottom, it says "changes requested by DerekNonGeneric," okay, and then it says "workflow awaiting approval to proceed," which makes me think… and then, oh, and then it explicitly says "skipping automated checks."
A: But basically, if you look at the bottom of all the markdown files in the docs, they use kind of a unique system for links. So, like, here: this link here, util.TextDecoder. This means you need, at the bottom of this file…
A: So down here, yeah: util.TextDecoder.
A: So the lint step just checks that these are all valid links, essentially. So it's probably something like that: either it's missing from here, or the link isn't valid.
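For reference, the convention being described looks roughly like this (the anchor value here is illustrative; the real IDs come from the generated docs, so copy them from the actual file):

```markdown
Some prose that links to [`util.TextDecoder`][] inline.

<!-- At the bottom of the doc file, every bracketed reference above needs a
     matching link definition, or the lint step fails: -->
[`util.TextDecoder`]: util.md#class-utiltextdecoder
```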
A: All right, I'll give it a pass and double-check with… well, so: if you can fix the linting error, I'll give it a pass, see if I see anything, and double-check with Guy, if he's okay giving it a green check mark. I think with me plus Guy, that should be enough, and then it can land, with the automated tests passing. And Bradley, if you have any time to review it, I'd love your feedback on it too. All right. Any of you, of course.
C: I don't think so. I'm glad we got Guy's input on why he was doing that weird double-loader thing.
A: I was thinking about that. There are several things like that where I see it in the code and I'm like, I don't know what the hell… I don't know what the hell someone was thinking when they wrote this. So, you know, if you have the time, when you go back in to fix the markdown thing, if you could add some comments…
C: Yeah, I added a comment, and I added an example scenario for where and when it's important. I think there's actually probably a better way to go about it, just because the main issue is it needs to skip the cache. And so I'm thinking you could probably just have a flag that says "don't cache this." But maybe later.
D: Completely separate: you're acting almost as if loaders are in separate threads, but we couldn't get the perf people to let us put loaders in separate threads, okay. We've tried like three PRs to get them to do that, and they just shut it down, because it takes too long to spin up a thread.
C: Oh no, that's it! I added that stuff back in. I added test cases to assert that that mystery behavior, which was previously undocumented, does in fact persist, and documented why it was needed. So I think we're awesome.
A: And yeah, I guess, and we can talk about this after Bradley's thing, but then, once we get this merged in: if you don't mind putting a comment somewhere, maybe on this PR or something, with, like: okay, so on the loaders roadmap in the loaders repo, or maybe it could be a PR against the loaders repo, like: okay, step one is finished; did it complete anything from step two or beyond?
A: We should always know what the next step is, so that if you or someone else wants to work on a PR, here it is, and this is what we're going to do, and we've updated our, you know, design plan to accommodate it. But yeah, we can get this landed first; we should just try not to forget that, so that we keep the ball rolling. Cool.
D: Yeah. So, beyond just how loader hooks actually work, there are problems for people trying to use loaders, due to how ESM exists, things like timing. The way the spec was designed, it wasn't really designed for reflection or instrumentation. So we have a little bit of an open feature request. I put it up because it's been talked about several times and we just needed a concrete feature request.
D: Basically stating: we need a way to spin up a main thread with a loader that you configure using code. Due to ESM, you can't do this with a reflective API. You have to basically spin up a thread. It doesn't have to be a real operating-system thread; you have to spin up some code, let it do its JavaScripty thing, and then spin up another thread, thanks to ESM's timing.
D: So we can't do anything like require.extensions, because it's simply too late in the execution flow. There are problems putting it in preload steps like --require, because then you get kind of what we were talking about earlier, where you have two different module graphs, and you start having cases where you loaded something eagerly in the preload phase, but it no longer matches the execution phase. So you don't want that.
D: So if you use import in your preload, and it is doing something, it's loading who knows what, lodash, say, but your loader is the thing describing that… dash-dash-require?
D: Yeah, --require, or getGlobalPreloadCode in loader hooks, okay: it runs before the main entry point of the application.
D: So if your preload code were to, for example, import lodash, but your loader hook installs, replaces, lodash with underscore, you get into weird situations where they just don't match up, and bad things happen. This is what happened; this is why we have a separate registry for the loader hooks. Things don't behave well, particularly singleton modules don't behave well, if you do this. So we can't…
D: We can't put an API in the preload code, even though it's before the main entry point, and lock it down after that. We basically do actually need to spin up a JavaScript thing, run some JavaScript, and probably tear that JavaScript down and spin up another one with the result of that first one. This is kind of what the vm module does.
D: So we definitely never want to implement something with the vm module unless we have to, because the loader hooks, in particular module_wrap.cc, are able to do some stuff without doing the kind of JavaScript interaction that V8 really doesn't like. vm wraps module_wrap.cc and adds a bunch of JavaScript callbacks; V8 doesn't super like that, so you can see segfaults and things. Just look at Jest, which has tried to use vm to recreate a loader, and their multi-year effort of trying to make it work.
D: We don't want to go down that path. So, currently, if you create a new worker thread, you can specify new sets of loaders.
D: Okay, that's great, but you can't specify that your worker thread really should act as if it is the main thread. So you're locked out of parts of process, your standard I/O is actually piped (it's not the real file descriptors), and a few other things, and this can cause problems. So whenever you do things with worker threads, you might see stuff like: oh, my standard I/O, I'm piping out to a terminal, but the colors are disabled, and stuff like that, because the file descriptors aren't the same.
D: Yeah, they always do, but APMs are a huge knock on performance anyway, and that's the main group of people who need this.
A: Maybe discuss: is there some way we can either opt into this behavior? I mean, I feel like the whole point is to avoid flags, and having a flag that enables this behavior almost defeats the purpose. Or even have a flag to opt out, if we make this the new default; and then it's like, you need to pass flags if you want the high-performance, single-thread, whatever behavior.
D: So we actually do have a branching code path like this already within ESM. For loaders in particular: loaders de-opt, like, a bunch of the module loader itself if you have them enabled at all. Similarly here, if you call this API (you have to manually call it), it shouldn't affect anything until you call it. Once you call it, it's going to de-opt a bunch of stuff; that's just the nature of doing things at runtime.
A: That's kind of what I mean: I'm assuming that for the people objecting to this, their point of comparison isn't how fast node runs with loaders. Because if you're including a loader, then kind of by definition, you're not trying to achieve, you know, max-speed startup time. So it's going to be, like…
A: How does this compare for just running "node script.js", where it's essentially like a shell script? They want that to take, whatever, 200 milliseconds to start, not one second. Does this impact that, and if so, how much worse does it make the not-using-loaders use case? You know.
A: Similar to the exec call in a shell script.
D: Doing this is actually rather complicated, because a bunch of the stuff node is written for, Deno is written for, all these different host environments, has the assumption, basically, that on a given thread there's only one JavaScript "process," if you want to say it that way. And the main thread is a special thread in all of them, even in the web browser.
D: Best case, because we're going to have to run multiple event loops on the same thread, with the same underlying C++ data structures. So we'll have two different "processes," basically, sharing an event loop.
C: Would it be possible to tag them and say, like, who they belong to? So, yeah, you're running in the same…
D: Correct. We do this already with the environment, the node Environment data structure, but we'd have to add another thing where it's not just environment, it's like an isolate within the environment, which we just don't have. James Snell has done some prelim work on this with NearForm, but there's just been no funding or way to move that. We could bother him and see if they can move forward with it.
D: You can't use any of them with AWS Lambda. Basically, there's a bunch of environment-variable flags in node; you can use a couple of them with AWS Lambda, but not in the traditional way. And so basically we want to take all these flags that have to happen before user code executes, and we want to say: hey…
A: Okay, but backing up for a second: first off, I think that's no longer the case for Lambda. I haven't checked, but I remember, like a year or so ago, AWS allowed you to specify a Docker image to use in place of theirs, as the wrapper for a Lambda function. So I would think, if you can do that, you can have your Docker image launch node with whatever flags you want.
D: It is theoretically possible to do this already, but in reality, people are not doing that, because of the sheer difficulty of doing so. They would rather not use your product.
A: Okay, so then my follow-up question was: a Lambda function doesn't seem like the kind of thing you would want to attach performance monitoring to, because the whole point of those is that they run as fast as possible. So why would you put, you know, Datadog or New Relic or something on it that's going to slow it down, compared to running node in a Docker container, or ECS or EC2, or a more traditional environment, where you don't care how slow the startup is, because it's a long-lived process?
A: I'm asking because it kind of matters, in the sense that if it's a rare use case for people to want to do instrumentation in a Lambda, or anything that would require a loader inside a Lambda, then the rare people that have that need could do what I described: spinning up the custom Docker image, et cetera.
A: Because in the majority of use cases, you're either running instrumentation, but not in a Lambda, or running something fast inside a Lambda. If there's already a way to achieve what you want, even if it requires extra effort, it's like: is this a prominent enough problem to solve? Are there enough people affected by it?
E: APM is pretty common in Lambda. A Lambda is generally a simple enough thing that you don't actually need a lot of insight on the thing itself, but it's really common to have APM in there, just for distributed-tracing purposes.
A: Okay. I wasn't trying to argue that I was correct on this; I was just legitimately asking. I don't want us to build a feature that no one uses, you know what I mean. If you're saying that there are people out there that have this need, then sure.
A: Okay, so next question, just as devil's advocate: I looked on that thread, and Gus had a very interesting comment, which was: couldn't you make your entry point be a CommonJS file that just had a dynamic import of the ESM entry point? And if you did that, presumably couldn't you put, you know, require datadog right above that dynamic import? And that way, you don't need to do all this stuff of replacing the main thread, blah blah blah, because you have loaded the instrumentation before any ESM.
D: We could certainly add a check for "have you touched the ESM loader at all." I think we might have to change some caching behavior for native modules to do that, because the loaders can replace native modules. It's not going to work for anything except the module loader, probably. So, if you wanted to configure other things, like disabling, what is it called, codegen-from-strings…
D: It'll solve part of the need, sure. If you only care about getting a loader to work, and you never really want people to use ESM in preload code, and you never want to load an APM using ESM, and APMs never use ESM, it would work. It's like a cascade of "you can't use this whole section of node," and yes, yes, it'll work. Well…
D: Yes and no. There's a problem where, if you make a CommonJS wrapper of an ESM APM, the CommonJS can set up hooks, but it can't actually use the ESM implementation, because the hooks aren't set up. So if you try to import your APM, to delegate to it from your CommonJS wrapper, you have now loaded ESM, and you're now unable to actually use ESM.
D: So, yeah: you simply cannot implement loader hooks with ESM if you take that approach. Okay.
A: All right, so, okay: I've played enough devil's advocate. It sounds to me like, however you implement this, like some method on process or something, it's the equivalent of the Unix exec command.
D: Yes. The main objections are going to be around, like: you can do a privilege escalation by disabling some flags that you pass in normally that are not overridable. Currently, that's about it. Well…
A: Well, we could do that, right? I'm just thinking of how the exec command works in Unix: if you're running a shell script as a normal user, exec can't just spawn a new shell script as root or something, right? There must be some way that it can only inherit up to the limits of the permissions of the parent spawning process, right?
D: No, that's a little complicated, and goes into the Unix permissions model. You can setuid and change the effective user ID, which is different from the real user ID. Windows has a similar issue. So, if you're familiar with this, if you spawn OS…
A: Well, aren't they going to say that, like, you know, Amazon won't allow this version of node in Lambda? Because, like I said, somebody like Amazon isn't going to want people to be able to upload code where suddenly the node process is elevated and can reach outside of its box.
A: All right, so maybe attempt number one is: just implement it, and don't worry about this elevated-permissions issue, and when people bring it up, you can say why it shouldn't be a concern (rely on OS sandboxing, et cetera). And if that becomes a blocker, then it needs to be something that we deal with as part of this PR, in order to get it to land.
A: But yeah, I mean, it sounds like a cool idea to me, something we should support for sure. Is this something you have time to work on?
D: Nope; that's why it's a feature request. James Snell actually has a partial implementation; he's probably who we should hound.
A: Okay. Do you want to add it to the… so there's a loaders GitHub project, and then there's the loaders repo. Do you want to add the ticket, the issue you opened, onto the loaders project? And if there's anything worth updating in the loaders repo, like putting this on the roadmap or something, you can add it there too.
A: We're not going to solve this in two minutes, so maybe you can add an issue that becomes an agenda item for next time. But I think there was an earlier loaders thread. I want to say it was Gus's original loaders PR, which has way too many comments; it might take you a while to sift through those to find the discussion of this. But there was a debate about this, about, like, whether… say you've got three resolve hooks registered.
A: Do you always run all three? Because, like you're saying, if you don't always run all three, loader users might be confused. But the other problem is that you kind of need to be able to short-circuit, because if you can't short-circuit, then there are other problems introduced, where, like…
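One way the short-circuiting question can be sketched (the next-based chaining shape here is illustrative, not a settled design): each hook decides whether to call the rest of the chain, and returning early skips the remaining hooks.

```javascript
// Three illustrative resolve hooks. Each receives `next`, which invokes the
// rest of the chain; returning without calling `next` short-circuits.
const calls = [];

function hookA(specifier, next) {
  calls.push('A');
  return next(specifier);
}

function hookB(specifier, next) {
  calls.push('B');
  if (specifier.startsWith('virtual:')) {
    // Short-circuit: hookC and the default resolver never run.
    return { url: 'file:///generated/' + specifier.slice('virtual:'.length) };
  }
  return next(specifier);
}

function hookC(specifier, next) {
  calls.push('C');
  return next(specifier);
}

// Stub for node's default resolver, for illustration only.
function defaultResolve(specifier) {
  return { url: 'file:///app/' + specifier };
}

// Compose right-to-left so hookA runs first.
const chain = [hookA, hookB, hookC].reduceRight(
  (next, hook) => (specifier) => hook(specifier, next),
  defaultResolve
);

console.log(chain('util.js').url);        // 'file:///app/util.js' (all three ran)
console.log(chain('virtual:gen.js').url); // 'file:///generated/gen.js' (C skipped)
console.log(calls);                       // [ 'A', 'B', 'C', 'A', 'B' ]
```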
A: However, I hear what you're saying, that it's frustrating and confusing. So if there's some better solution that can, you know, handle both, then sure. But, (a), I think we should land this PR first, and, (b), we should figure it out at a design-doc level before we implement anything, because we already know that this model works, at least, because it's kind of what we have. It's just…
A: We can only chain from custom to node and nothing more, and so that's kind of why I was like: let's just, you know, get this working first, and then we can think about other variations to it. But yeah, I agree we should talk about: is there a better design possible? What would it be? And then, what would be the trade-offs? Like, you know, would it work for all the use cases that this model works for, and, if not, is that a problem, etc.? Okay.
D: It doesn't all have to be API stuff. We had a similar issue with the promises APIs in fs, particularly leaking file descriptors, and we just added a garbage-collector hook that basically told the programmer…
D: "You did something that doesn't look right," and we just printed it to the console. So if you don't return anything from your hook, and it doesn't resolve, and the next one gets garbage collected, you obviously are doing something weird.
D: It was basically the warning you get in the file-descriptor class, if you want to look at that implementation.
C: Yeah, that was basically what I said: if we were to move forward with that, the next thing we should add is something like, "hey, it looks like you messed up." But I'm just thinking, since it's probably going to be a common "hey, it looks like you messed up," maybe we could be kinder.
A: They're going to start trickling into this call, so I need to end the thing. But pleasure seeing you all. Jacob, feel free to open an issue to discuss that further; then it'll be on the agenda for next time. And thanks again, everybody. Cool, thanks, ciao.