From YouTube: Open RFC Meeting - Wednesday, January 27th 2021
Description
In our ongoing efforts to better listen to and collaborate with the community, we run an Open RFC call that helps to move conversations and initiatives forward. The focus should be on existing issues/PRs in this repository but can also touch on community/ecosystem-wide subjects.
A
And we're live on YouTube. Welcome, everyone, to another npm Open RFC call. Today's date is Wednesday, January 27th, 2021. We'll be following along the agenda that was posted in the npm RFCs repo, issue number 311; I'll just copy and paste, er, spam the meeting notes doc. Folks, feel free to add yourself as an attendee.
A
If you're joining, yeah, we'll be following along here. Apologies for the last few weeks; we've had to push these calls out, but I appreciate everybody being patient with us. A quick code of conduct acknowledgement: these calls, and all comms on the RFCs repo, are covered by a code of conduct, which you can go see on npmjs.com in the policy section. Please be kind and respectful on these calls.
A
We only have a small agenda today, but I've tried my best to curate it as well as possible and to formulate some discussion.
A
I know we don't have exactly everybody that we'd love to have on the call today, but I'm guessing that this is going to span not just this call but future calls for sure, and there's obviously opportunity to have async communication happen on this. Before we dive into the first item that is queued up here, I just wanted to put out a quick announcement from our team: we're looking to make npm 7 GA pretty soon here, which means that when you go to install latest, the latest npm will be v7.
A
So please, you know, if you can, test it out and continue to give us feedback. We've been poking and prodding at these releases for what feels like almost a full year now, and we really appreciate the feedback we've gotten so far. So, you know, that change is inbound; I just want to be mindful and give everybody a heads up.
A
If not, we'll dive into the first item on the agenda, which is the file path resolution review. This was sort of brought up based on the issue that I've linked here, which was 2145 in the actual CLI repo, and I will share that issue.
A
The behavior of, essentially, linked deps here has changed from v6: if a package isn't actually nested within the root of your project, we're not going to install its dependencies. Essentially, the reason why I want to bring this up again is just, you know, to question and ask if we want to actually change this behavior. It sort of ties into the next item, which is more of a workspaces review, but I want to, you know, get people's thoughts on this.
A
I believe we had some... I'm not sure, Isaac, if you can speak to this; you may not have a great connection. But I believe we have some course of action that we can take here in terms of, like, reintroducing and supporting this.
A
Just wondering what the impact is. We haven't seen a lot of people complaining, but...
B
Yeah, so just, like, real quick context: npm 6, and not just npm 6 but npm 6 and before, was a little bit weird with how it handled the targets of symlinked packages in the tree.
B
When you do an install, it would sometimes kind of read through and also install the dependencies of that symlinked target, within the symlink's target. It would occasionally hoist those dependencies up to the root package, even though the symlink target couldn't see them there, which was kind of a source of a lot of very strange bugs over the years. And the intention, which was not always followed, was that a symlink target would get its dependencies installed...
B
...if and only if it was underneath the root package, the root project. Now, in npm 7, we actually did implement that intention very faithfully, and what we found was that people were relying on the edge cases where that was not being done.
B
You know, where it would sort of descend into the child packages of a linked target, even though it was in some external folder.
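The situation being described can be sketched with a hypothetical layout (the names `app`, `b`, and `c` are illustrative, not from the call):

```
app/                      the root project
  node_modules/
    b -> ../../b          symlink whose real target is outside app/
b/                        the external link target
  package.json            declares a dependency on "c"
  node_modules/           npm 6 would sometimes populate this (or hoist
                          "c" up into app/node_modules); npm 7 skips it,
                          because b's real path is not under the root
```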
B
So this raises an interesting question: should we do that all the time? Should we just tell people, sorry, that was a bug, too bad you were depending on it, but you're going to have to just cd into that folder and then run an install there? Or is there a different kind of constraint for when we follow a symlink into its target to install dependencies, and when we don't? Personally, I don't think there's actually much harm in any of these approaches, so long as it's something that we can, you know, document and test and keep very consistent, because the inconsistency in how npm 6 and before did it was a bigger problem than any downsides of anything we choose here. So, yeah.
B
That's kind of why, last week, or a couple weeks ago, I suggested bringing this up just as a thing to discuss.
B
Yeah, Wesley, I see your hand's up.
C
Yeah, they moved the hand-up button, by the way; it took me a minute to find it. So if anybody's looking, it's in the reactions now. So, I think I've hit all of those things that you described, Isaac, in the past. I agree, it was very unpredictable.
C
I was also depending on it, and when I updated to seven it broke me. I didn't complain, because it was pretty clear what was happening and... well, actually, maybe I did; I might have brought it up in the Slack channel. Either way, I'm fully behind consistent behavior, but I think that asking people to go npm install in the other directory is a bad user experience, partly because it's unexpected given the previous behavior, and partly just because it's extra work for a person, right?
C
That said, all of these things, I think, were better way back in the day, when it wasn't a link and when the file protocol copied the whole directory over. So that's my two cents. I want that behavior somewhere, and, you know, I understand why you changed it, historically, but I've missed that behavior, because it was straightforward and understandable, and I think most cases didn't cause the sort of cascade of unexpected results that linking currently causes.
B
So, yeah, my preference would be: let's just follow the links, right? Let's do the thing that npm 6 did sometimes, right? Because then it's actually simpler; then we don't have to say, well, we will install the subdependencies if it's a subfolder of the current project, otherwise we just leave it. The hazard, and the reason why that was not the case, or at least the argument for it not being the case, is...
B
Let's say I have a symlink to, you know, another project, and I...
B
It might also be weird to lock those down in a lockfile within the parent, within the kind of originating project. So what we might want to do is, like: if we have a package-lock in that target folder, then, you know, we kind of do the install in the target folder, but as if it was a top-level thing, and we store the metadata there. Now, that's a more involved change.
B
Just setting, you know, follow: true on Arborist, that's super easy, but it's going to have kind of a weird impact, I think, in some of these cases, when it is a separate project.
C
So I agree; I don't like the changing of the package-lock on a separate project. I think that would be pretty unexpected if I got that. Maybe... I haven't seen this brought up, and maybe this totally doesn't work; this is totally me thinking about this off the cuff. But one of the things I've seen with the linking behavior that has been problematic is when they get different versions of the same package, because they're now under different directory trees, and I've always wondered: is it possible to do, like, back-linking as a way?
C
And that has caused a bunch of problems for different workflows that I've seen, particularly things like React, as sort of the big one. But, like, somewhere it just uses an in-memory reference to, like, you know, a symbol or something, and now they're different, and you compare, you know, is that symbol that it exported the same one, and it's not anymore, right? If instead package b, which was the link target when you built the tree, said, oh, actually, package c is shared...
B
So the thing that's weird, the thing that would be bizarre about that, is that now, if you're working in this package b separately, its dependency on c is linked over to some other project. That's sort of spooky action at a distance, and if you have multiple different, you know...
B
Let's say I have another project, x, that depends on b and c, and it's installed normally, and then I link b. Now b gets updated to x's c, and you've got a bunch of churn in a project that is in some completely different folder. So I think I mentioned... I dropped this in the Zoom chat.
B
Just so it wouldn't fall out of my head: we could also treat... let's say we do a link target to a separate project, right, ../b, and in that folder there is a package-lock file.
B
The thing that's weird is we end up storing all of the link target's dependencies within the sort of current project's package-lock, and maybe that's fine, because that is, in fact, what's being loaded. It's just a little bizarre, is all. And, honestly, I feel like part of this problem might go away if we just said: look, we don't follow, we don't install dependencies of external links, period.
C
But I do think, like, people have used the tools at their disposal to solve a hard problem, and not always well, but workspaces don't 100% cover that problem either. So I think saying, well, we don't install them over there, but also, like, there's this other thing you could be using... it's like, well, yeah, but that's not the problem I have.
C
I have 10 unrelated packages, you know, in my normal workflow, but, like, today I'm trying to test out one against another, and that's, like, when you might do the link, right? Which is not really a workspace; like, I'm not trying to define a workspace, I'm just, like, seeing, are these new changes I made compatible with my other thing? So, like, it's not simple, and I think your "we just don't install them" might actually be the only option that doesn't have all these weird caveats.
C
Yeah, well...

B
If I want to do that, like... I don't want to put everything into one mega-workspace just so I can share this one test framework, but I do want to make a change to the test framework and go through a bunch of projects and make sure it didn't break anything. But in that case I've already installed tap's dependencies, right?
C
Well, yeah, it is weird oftentimes, right? Like, you get unexpected results. One... this does harken back a little bit to one of the things I brought up, especially early on with Rory, which is, like: we need workspaces that aren't file-system-structure dependent. Yes, the problems we want to solve here are those, right? And if we can find another way, which I have a proposal for, and which, even, I think, Chrome has intent to ship, with the spec for source... sorry, import maps.
C
If we could just do import maps, it would actually pretty elegantly solve these problems, because you'd make a temporary import map that is all of the things you want, all over the file system, in any combination that makes sense at the moment. And it's not going to be spooky action at a distance, because your local project didn't have to change the other project to use its dependencies, right?
C
So if we had that in... I say in Node, but even as a loader that you could temporarily use... and npm had a way to just spit out the proper import map, then I think we could start, you know, seeing if that solves these problems, without having to go and fix the legacy of the past, with these, you know, link behaviors and file imports.
B
So I think that there is a need for a more comprehensive RFC, just on, like: when and how and why do we follow symlinks to install subdependencies?
B
Where do we store the metadata about that, and sort of what lock files or metadata do we respect or treat as authoritative? Even if the conclusion there, which I think feels kind of likely, even if the conclusion is we don't change anything, like: npm 7 is doing the right thing, it's doing it consistently, it's what npm 6 should have been doing.
B
Sorry that you became, you know, dependent on the bugs there, but at least then we would have it documented and have something to point to. And then our, you know, phase two could be, well: here's when you should use, you know... let's make workspaces, let's start workspaces out from, like, this is shared, this is not, this is linked, whatever, and then worry about the implementation details of workspaces sort of secondarily, right?
B
So we first figure out what should be shared and what should not be, and what should be loading what. And I do feel like a lot of these linking problems will kind of evaporate.
A
Yeah, so Wes is giving the thumbs up; awesome, reactions. And I...
A
So, in terms of actually getting that work done, though: putting together the RFC is probably the key here. Like, we need some initial time to actually put words down, and I think it's great to think that we can potentially mob-document this, but I don't know if that's actually, like, efficient.
A
So I'm wondering: is this something that we can just backlog as a to-do for our team, Isaac? Like, somebody to own the action to create that initial RFC, to essentially document linked deps, and, essentially, what you just stated: what we do and don't install, and why, with linked deps specifically.
B
I can take the lead on, like, documenting the current behavior, and also just kind of adding that thing I mentioned about, if the link target has a package-lock, trying to think through, like, what do we do with that. The simplest is: if the link target is outside of the current project, we don't touch it.
B
It's somebody else's problem. But at least say, like: this is how it was in npm 6, and this is how it is in npm 7, and here's why it changed, for consistency purposes, for consistency reasons, etc.
A
Appreciate that. So I've just added the takeaway that you have the action to draft that initial RFC; allowing the current behavior is sort of what I'm... yeah, yeah. Okay, so, in terms of... is there anything else for you? Yeah, this is...
A
Yes, which, you know, we do those sometimes. Yeah, I mean, this would have had to happen, like, a year ago, I think, because I think that's probably when this changed; that could be wrong. So sometimes, yeah, we back ourselves into the RFCs. Anything else on, let's say, specifically, like, linked deps, versus maybe workspaces, like the broader workspaces conversation, which seems to be the second item we have on the agenda?
A
If not, so, as sort of noted: that work is going to be a bit of a precursor to us even, like, really talking in depth, probably, about the way workspaces are implemented today. But, essentially, I tried to summarize, you know, the problem space that we've continued to hear about from folks like Jordan, who unfortunately can't be on the call today, about the differences between hoisted and sort of, like, shared deps, and how we could potentially, like, improve the workspace experience.
A
If we were to delineate these things better, that might mean changes to how we've implemented workspaces today, since we've treated them essentially as linked deps. So I'm wondering, you know: is there any conversation to be had, essentially, on workspaces that doesn't, you know, essentially include what we just talked about? Or, you know, should we circle back on that once we have that RFC?
B
I think it does, somewhat. I mean, at least for me, it summarizes my understanding of the feedback I've been hearing about workspaces. And I think that, really, where we're at now: we do need somebody to kind of sit down and outline, in words, in a document, like: these are the set of cases that exist.
B
These are the ones where we should be sharing dependencies between things within the workspace, within the workspaces project, and these are the cases where it should be sort of a separate install. And then, leaving aside...
B
You know, once we have alignment on that, then start to look at, okay, well: these are the cases we need to share; these are the cases we need to keep separate.
A
You want to outline those questions? Like, you're bringing up a bunch of good questions; like, could we essentially pull those out now, and write, like, right now, what we want to essentially answer in that RFC? Which might help you, or whoever it is that ends up drafting it; like, it might be easier to solve that one. Like, you know, a question would be, like: what do we and don't we install? Or, you know, what is and isn't in scope? And all we're doing is formulating the questions we want answered, versus trying to get the answers.
B
The question would be: which projects have access to which instances of their dependencies?
B
So if you have a workspace project that depends... a workspace that depends on another workspace within the same project, you know: in which cases should that be loading the same code that's in that workspace's folder, and in which cases should it be a separate install?
B
Let's say you have a peer dependency; it's a dev dependency; it's a regular dependency; it's, like, whatever. Like, there's a bunch of kind of aspects to that.
That vector of... that, like, matrix of possibilities, right? And we kind of just need to, like, write them all down and say whether it's shared or unique.
A
And again, for folks that are following along: if you want, we'll essentially be adding notes and comments here in the meeting notes doc. But, yeah, like, I think that going through an exercise of just, like, outlining what those questions are is going to be, yeah, helpful.
C
I just figured, if... it sounds like we have some time. I was just sort of following up on that. Since I mentioned the import map thing, I was clicking around to see where those issues were, and it does look like there's an experimental loader polyfill that somebody made. Would there be any interest from you all in a proposal for generating source maps... sorry, okay, there's too many map terms... import maps, straight from npm?
C
That would be something I would probably be able to spend a bit more time on, if you all are interested in that.
C
Yeah, here, I'll post the... this is the link to the loader, and this is the original issue, which has been open for a long time. And so, in that issue, I had written, way back, almost, yeah, well over a year ago now, an example, with Arborist, of generating...
C
...the import map. Oh, wait. Oh, did I never... I never posted that code? I should post it. Anyway, I could.
The question was more along the lines of: do you see that... because I know there's the tink work, but, like, I haven't heard a lot about that recently.
A
Yeah, we haven't been spending much time in that space, although, like, I've been following along in the last year with fs hooks, and sort of hoping that the foundation and the Node project would essentially make progress there. And I know there was some work that Chrysler and other folks were trying to do to consolidate effort, I think, in that space, specifically around hooks. But, yeah, I don't think that's one-to-one with what you're bringing up here. Yeah, so...
C
It's not... yeah, they're sort of orthogonal. They can solve some of the same problems, just with, like, a pretty drastically different approach. And I think... I haven't built... I built the one proof of concept with this, but I haven't built anything with, like, a hooks approach as an idea, so I don't know; like, I can't compare them on a technical basis, whether there's, like, clear pros and cons. I guess you could look at what Yarn does as an example of sort of a similar... I...
A
That's where my head was going as you were saying it, like: would we consider supporting something like this? And the work with tink is sort of in this space, I guess; it's sort of, like, yeah: do you mean a virtual file system, versus generating an import map? Those are...
C
One big pro is that this is a standards-based, you know, standards-track solution, which has intent to ship in browsers, which I think was the big... that was the big blocker in the modules working group when I originally proposed it. Which, obviously, you know: any solution with file system hooks or a virtual file system is never going to be cross-compatible in the browser environment, right? So that's just sort of written right off the table, which to me is, like, a big win for the import map approach.
A
Yep. No, I mean, I'd have to read this a bit more, but, yeah, you're right; like, obviously we do have some time right now, but, yeah, this is interesting. Maybe, could you give, like, a synopsis of the... a summary of, like, the initial issue that you created for support for import maps? Yeah.
C
So the idea would be... so there's this standard for import maps, which basically is trying to do Node-style short identifiers in the browser, right? So you could say import lodash, and then the import map would map from lodash to a fully qualified URL, right? And then it has a bunch of features in the spec that are intended to support the Node-style use case, like scoping, so you can say, well, if you import lodash from module foo, then it should be this URL.
C
But if you import lodash from module bar, it's a different URL, and you can sort of nest scopes and stuff, sort of like the node_modules resolution algorithm, right?
So the design of it was intended to be compatible with Node. It evolved quite a bit from there, just because browsers, you know, have different concerns, but it looks like what Chrome is intending to ship is compatible and should work for our use case.
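The shape just described can be sketched as an import map file; the paths here are illustrative, not from the call:

```json
{
  "imports": {
    "lodash": "/node_modules/lodash/lodash.js"
  },
  "scopes": {
    "/node_modules/foo/": {
      "lodash": "/node_modules/foo/node_modules/lodash/lodash.js"
    }
  }
}
```

Top-level `imports` handles bare specifiers; the `scopes` entry overrides the mapping for anything importing from within `foo`, mirroring nested node_modules resolution.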
C
So, basically, what we would have would be... so the proof of concept is using the loaders, so these are ESM loaders, but I also... I think the one I wrote was just using the old, like, hacked-together way: override the file path resolution... what's it called... resolveFilename or something. Yeah, _resolveFilename. And so, basically, what it does is, instead of using the file system to look up where it should resolve from, it uses the import map, and then returns to you the fully resolved file path.
C
This means you get a bunch of interesting features. So, one of them, in relation to what we were talking about earlier: you want to resolve something from outside the directory you're in? Fine; like, just tell me where it is, it's an absolute file path, right? Things like, you know, all the work that pnpm does to hard link, to save on disk space: well, you want to implement that? Okay, don't even do the hard link; just use the same file path from some shared cache that's system-wide, right? Like, there's a bunch of interesting things.
C
You also can do things like aliases; like, you can do them in, you know, the import map, instead of having to have them be, like, aliased on the file system. Like, there's a bunch of things that sort of magically go away as soon as you have this sort of data structure to represent what your imports look like.
C
What it would take, from an implementation side, would be something to generate the import map and something to consume the import map. So the proof of concept uses a loader; my proposal was to actually just build this straight into Node, and just have, like, Node look in a file path that we agree upon, sort of like it does today with node_modules. So you'd just have, like, ./node_modules/importmap.json, and if that exists, Node would load that and use it.
C
There was some feedback on that, so, you know, that's still up... it would definitely be up for discussion what the, like, technical details would look like. But if you had npm generating them, and Node supporting them, it would be pretty much fully transparent for the end user. The way it would stand today, if you wanted to do this, which you could do with this loader that I linked, that somebody wrote, you would just pass the experimental loader flag.
A
So, in terms of, like... because you did identify this, I think accurately: there's essentially the runtime consumption versus, like, the generation of the import maps, and I imagine that we could be on, like, the latter. Yes, the latter of those two, as, like, a first step; it might be something that we want to consider, like, helping to create, essentially, an import map, potentially from package.json, like the existing... right.
A
I imagine that's kind of where we could help with that. But until we're actually, like, in the runtime... I guess, like, npx or, you know, exec potentially could essentially be respecting, let's say, import maps that are defined, potentially, right? Like, this gets into, as you said, like, tink; like, if we're eventually going to, you know, do this, which, you know, to me makes sense, then we would try to get this implemented and supported in Node itself, and then npm could potentially be the helper.
A
You generate these from your existing package.json; that's kind of, like, where my head is, in terms of, like: if we were to look at this and make this actionable, I imagine we'd want to start with, well, can we potentially solve for, and help generate, based on things we already know, like the dependencies I have, and, like, yeah.
C
Yeah, I think you're spot on. That's, I think, where npm really could come in handy here. So, again, it's literally just a matter of iterating the Arborist tree and, like, writing this out. It's, I think, 20 lines; I think my PR... like, the proof-of-concept code is really straightforward. Obviously, there's tons of caveats, and, like, you know, you'd have to resolve for those, and it'd blow up.
C
Just back... and people apparently hate on WICG or something, I don't know, and, like, the fact that it wasn't, at the time, implemented in any browsers, or intended to be. But that has changed: Chrome has an intent to ship on it. So I think, if npm implemented it, and Chrome has an intent to ship, that might be the driving force to get the modules group on board, at which point I would definitely be... that's... this is something I've wanted to put time and effort into for a long time.
C
So I would definitely be able to champion that. I just got really frustrated with the modules group a year ago, when it was, like: this seems like a good idea, and it is standards-track, and everybody was, like, well, it's not the right standards body, so, sorry. Like, it's kind of frustrating, but I'd be willing to pick it back up, if you all are, you know, not going to just... not going to just implement tink and say, well, this is our solution for it, you know, kind of like Yarn did.
B
I don't want to, like, you know, downplay it, but put next to, like, getting workspaces working really well, or overrides, or some other things that are going to have a more substantial user experience improvement, or functionality improvement, like, tink sort of becomes this, like, optimization that we could do anytime, and it's not really essential to get to right away. There may also be some, you know, other changes to how we sort of do trees, or manage data, or caching.
A
We may want to start just referring to it as, you know, like, npm supporting a virtual file system, a virtual, like, dependency resolution, versus, like, you know, the project itself, because we may... I'm not sure if we would pick that up specifically. I'm...
B
Sure... we would not, right; right, exactly right. It's very tied to npm 6; it's sort of a very... it's a proof of concept that proved its concept, and, you know, if we were to do it now, it would probably be using different Node implementations, different, you know, tree management stuff.
B
I think that would be, you know, potentially a great way to go. Again, I mean, as far as just kind of getting back to the original problem here, of how we are going to make workspaces really good: we sort of need to figure out, like, okay, what is the behavior, like, when I do require x from within package y?
B
Which thing do I get? Then it's sort of, like, okay, well, depending on where we're at in the technical landscape: do we implement that using import maps, somehow? Do we implement that using symlinks? Or are import maps just the sort of thing that we can generate for your benefit, but you can use them or not? That's all kind of up in the air.
D
I think there's also something to be said for it being used for, like, Node... Node ESM, and how it plays into that as well.
C
Tierney, do you have any... has there been any conversation on those together? I haven't seen it; I'd love to read it.
D
I believe there might have been a little bit. I've had individual talks with people about how it could potentially help; like, it could help kind of get away from having to do, like, .js in the path, and kind of make the, you know, experience a little bit more node-y. So that's kind of the big thing, the big one.
D
That is, like, something people care about; I mean, like, I guess I get why, but that's a big one. And then there's other features that come... you know, other benefits that come from it in the context of ESM, I think. So, there are people I will talk to, and I'll tell you who those people are offline.
A
I mean, we could end early and go into a private session, if we want. So, in terms of circling back: I know, Isaac, you've taken some time to add some questions here for the, I guess, eventual draft that we're going to queue up. Did you want to speak to any of those notes that you've added, or no?
B
That is a particular kind of spec, which either is referenced in the workspace or is not; which thing do I get? Do I get a shared copy? Do I get a thing from within the workspaces project? Or do I get, you know, a new thing that's installed just for that one workspace? And I know there's kind of a few sort of pithy ways to summarize it, or to kind of apply some kind of principle to it, but we really need, like...
B
Those of you who have written code side by side with me know this is what I do, like, all the time, for every problem: you write a matrix, you fill it out exhaustively and tediously, and then usually the pattern kind of jumps out, and it makes it a lot easier to do the right thing. And when it doesn't jump out, well, you've got your test cases, so...
A
It's document-driven design. So, yep, yeah, cool. And Roy reacted: crying, crying, crying-laughing.
B
I just recently was, like, sort of dragging right through this process with the npm diff arguments, yeah. But it's... yeah, it can be a tedious process, but it saves a lot of even more tedious processes in the future.
A
For sure. Did folks have anything else they want to bring up on these topics specifically, or anything else that may have not been added to the agenda? I know this is sort of a bit of a deep dive that we've gone through; a high-level deep dive.
A
If not, I can give folks time back today, which might be nice to start the new year. And, ideally, if folks have any comments, or want to essentially add to this, feel free to do so async in the RFCs repo, as normal. Appreciate everybody jumping on today, again, and, yeah, feel free to continue to queue up RFCs, or any feedback that you have.
A
It sounds like the action item here will be to ideally get some documentation written, or an RFC written, by next week, if possible, for our next call, and then that might be something that we can look at together and review. Again, yeah: just one more chance for folks to add any announcements or any last comments.
A
If not, we'll see you next week. Appreciate everybody jumping on again today, and I'll see you soon. Bye.