From YouTube: Open RFC Meeting - Wednesday, January 6th 2021
Description
In our ongoing efforts to better listen to and collaborate with the community, we run an Open RFC call that helps to move conversations and initiatives forward. The focus should be on existing issues/PRs in this repository but can also touch on community/ecosystem-wide subjects.
A
And we're live. Welcome, everybody, to the first open RFC call of 2021. Happy new year, and hopefully some folks on the call were able to have a bit of a rest. Appreciate you joining us again for an npm open RFC call. We have a really small agenda today; there are only two issues that were flagged in the RFC repo. So, quickly, we can maybe do a round of introductions for some new folks that are on the call. I guess, Michael, or Gar?
B
Hi, I'm Gar. I'm joining the CLI team. It's day three, so I don't really know what that means yet. You all might have a better idea of what I'm going to be doing than I will, but that's me and what I'm doing. Cool.
C
Yeah, hello, I'm Jonathan. I'm not really working at npm; I was just invited to join the call. I'm the one who raised the issue about workspaces that we're going to be talking about. I work at Nexbot in Oklahoma, so hello.
A
Awesome, well, thanks for joining. Cool. So, quickly, as I normally do: just be mindful as we're speaking that there's a code of conduct that we follow in the RFCs repo as well as on these calls. It should be linked there in the npm RFCs repo. Be mindful as folks are talking, and please raise your hand if you'd like to say something and I'll call on you. Again, the intention of these calls is to work closely with the community and hopefully drive forward initiatives and features that we think are important, and also to, you know, close the loop on feedback and see improvements in the npm client quickly.
A
If there's nothing else, we'll dive right in. The first issue we had on the agenda was the RFC "support version specifiers other than semver ranges", number 301, and I'll link that for folks. Roy, I can take notes if you'd like to speak to this?

D
Sure, yeah, that'd be good.
Cool, yeah, I can give a summary for sure, and then Jonathan can also speak to it. He was the one who originally brought this up in the CLI repo as an issue there, and then I brought it over to the RFCs so that we can chat about it and make sure what the best way forward is here. So basically, the issue is that, as of today, workspaces only really works if you're using a semver range in the package that is consuming that workspace internally.
D
So if you want a workspace to be symlinked and used instead, you're kind of stuck with only semver ranges. Basically, the original issue is that Jonathan wanted to use a git version specifier instead: just point it at a repo and install that. That's kind of not supported; we hadn't even thought it through originally, but I think it's part of the process, right? So yeah, maybe Jonathan, if you want to add something. I see Jordan just raised his hand, which is nice.
A
Jordan, you can jump in, and then... sorry: Jonathan first and then Jordan.

C
Okay, so I have this large project that is checked out into the shape of a monorepo, but all the pieces are still in their own repos. There is an umbrella project that all the packages are checked out into, in these workspace directories, and so all of them install fine when you're just installing normally. But if you want to use the workspace, the links happen, but it also creates the checkouts in the subdirectories of each of the projects.
C
So it's like 95% of the way there; it just doesn't seem to think that the git versions honor the workspace link versions. I think that's what it's doing. I'm nowhere near involved enough to know more than that, but yeah. So, Jordan.
E
Yeah, so there's a couple of things. So, Roy, I guess the first question is: why that restriction in the first place? Let's say the package in my monorepo, package foo, is version two, and then in another package in my monorepo I depend on foo at version one. Does workspaces do something there where it's like: "oh well, you want version one and the local one's version two, so I'm not gonna link it, I'm gonna install that one"?
F
So... and then, yeah, go ahead. Yeah, this is actually what I wanted to clarify: what it actually does when you have workspaces. It's really pretty simple. Everything that's listed as a workspace is linked up at the top level, right, and then we go through and we resolve all of their dependencies in the normal way. So if you depend on foo at one and there's a symlink to a foo at one in the root, well, then that dependency is met and we move on.
F
If you depend on foo at two and there's a one linked in, you know, the folder above yours, then that dependency is not met, and in fact we need to nest another one in order to not have a conflict, right, because we can't have two of the same thing in the same place at the same time. That's what's happening here. And that's also like: if you depended on foo at ../foo.
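A minimal sketch of the rule Isaac describes, using hypothetical package names (illustrative only, not from the project under discussion). If the workspace `packages/foo` is at version 1.2.0, the root-level `node_modules/foo` symlink satisfies the range below, so the workspace copy is used:

```json
{
  "name": "bar",
  "version": "1.0.0",
  "dependencies": {
    "foo": "^1.0.0"
  }
}
```

Changing that range to `"foo": "^2.0.0"` would no longer be satisfied by the symlinked copy, so npm would nest a registry copy under `packages/bar/node_modules` instead.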
F
That would also work, because it would say: well, you want a symlink to this folder, and what I have is a symlink to that folder, so you're good, I don't have to touch it. If you depend on a tag, it will look and see that there is a version number on it and say: well, I'm guessing that it was that tag at some point in the past.
F
So I'm going to call that okay. The challenge here is: we have a symlink to a folder, and what you depend on is a git repo. So when we look at what the node is that's in there, what its resolved value is, all that we see on that root node is that it's a symlink to a folder. We're not actually going through and saying: well, is it a git checkout from, you know, this or that particular location?
F
Now, if we were a little smarter here, and I think we could be, and this is kind of where it gets into RFC territory: what we would need to do is, when we have a node, a package node, in the root or anywhere, that is a symlink, and that symlink has a .git directory in it, and we have a dependency on a git URL, we would have to check whether that checkout matches.
F
Another workaround, which has its own set of challenges: another way that you might approach this is to actually have the root depend on the git versions of all those things, and then their dependencies would still be fine. But what's going to happen there is: you're going to have checkouts in the root node_modules folder, and then you have to be smart enough to go in and do your editing inside of node_modules, or manually set up symlinks, etc., and that's also kind of challenging.
E
So, that's a really helpful explanation, and I think that your kind of implied RFC path there is a good way to meet Jonathan's use case. But I really think that the issue here is that npm's implementation of workspaces, and everyone else's, is fundamentally flawed, in that all of them use the node_modules hierarchy, and that is not how anyone actually wants to use their monorepo.
E
I think that both in that it allows your project to depend on something that it doesn't actually have in its dependencies manifest, which will then break users when you publish it, but also because it actually creates issues like this, where the conflation of node_modules folder hierarchies with the actually desired sharing behavior is causing problems for edge cases like Jonathan's monorepo setup. So, right, I don't know.
E
I know it's very bold and potentially impractical to suggest, like, "let's do the whole of workspaces totally differently", but the more that I see these edge cases, and the more that I interact with workspaces, the more I'm convinced that everyone's doing it in a fundamentally flawed way right now, and that we need to come up with something new to do it correctly, or we're just going to be chasing down these edge cases with RFCs forever.
F
Yeah, so I think there's something interesting here, like the question of: should we treat a symlink to a git checkout as a valid satisfaction for a git dependency? And I think there's a pretty strong case to be made that we should, workspaces aside, right? That's fair: if I've checked something out as a submodule, and then I depend on that as a symlink, and something else depends on the git repo URL, and it's at that branch or whatever.
E
Yeah, and doing that fix does not seem like it will obstruct what I'm talking about either, so, like, it seems fine.
E
But I want to yet again kind of implore us not to put a band-aid on it and then move on. I think we should actually do a deep dive on what a conceptually correct workspaces implementation would look like, and not be constrained by all of the prior art, which I think has done a good job of trying to solve the majority use cases, but has not actually looked at it holistically and come up with an alternative model.
F
Right, so I think... if I understand you correctly, what you're saying is basically: peer deps should always be hoisted.
G
This thing exists: it's called import maps. There you've got an intent to ship in browsers; I've had the issue open for almost a year now on Node. I think we could build a great proof of concept of that; I've actually worked on it before, just never finished it. But I think I fully agree, and I think that we've taken a great step toward this awareness being raised by having it in npm, where it followed the prior art.
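For reference, an import map (the browser proposal mentioned here) resolves bare specifiers without any node_modules hierarchy at all; a minimal, hypothetical example mapping two workspace packages straight to their source files (paths are placeholders):

```json
{
  "imports": {
    "foo": "./packages/foo/index.js",
    "bar": "./packages/bar/index.js"
  }
}
```

In a browser this goes in a `<script type="importmap">` tag; the Node.js side was still an open issue at the time of this call.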
G
But I think that, fundamentally, these things are all hacks around the fact that we don't have a really good way to deal with it. And if we're comfortable adding hacks, that you are going to have to support, you know, five years from now, 10 years from now, maybe forever, that might include this sort of logic around how we resolve git dependencies.
G
But then we have to look at every single one in that list. I was kind of reading it, and the original reason I raised my hand was because some of these are going to be ambiguous, and they're going to cause people confusion and problems if you can specify them. Even semver ranges can be troublesome for some folks, but I think those are pretty easily understood, whereas what you just described with git, you know: is it a git repo?
G
Is it the right git repo? That has a lot of weird meanings depending on how people structure things. Like, maybe somebody has a fork and they want to point theirs to the fork, and now the dependency isn't resolved, but it is, right? And they're like: why isn't it right? There are a lot of those edge cases that starting to go down this rabbit hole opens up, whereas what people really want, as Jordan was describing, is a feature set where they don't have to get into those kinds of details.
G
In a monorepo, those are the kinds of things that happen as soon as you start mixing all these ideas together in a real project, in a real-world scenario, and it was really hard to debug. So if we continue to add these features, I think we're risking going down that long-term path of making these things just unmaintainable, even if they solve today's use case really well. And, like, maybe there's some case for adding it, and then intentionally adding it in order to deprecate it later.
F
I want to just kind of summarize, as I understand it, what Jordan is proposing, and then I'll yield the floor to others. So, forget about hoisting; let's not talk about implementation details. The bottom line is: anything that is listed as a peer dep, we wish to be shared among all of the defined workspace projects.
F
Second, any of the workspace projects that list another of the workspace projects as a dependency, peer dep, what have you, will get the one that's in the workspaces project. This is obviously, you know, a pretty big departure from the implementation that we have. It's a very big departure from the way that Lerna and Yarn 1 work.
F
So that is a thing that I would love to see a write-up for, staying away from implementation details like "we're going to use node_modules for some of it". If it's the right implementation, if it gets us the right behavior, then let's not worry about what the implementation is, and just write up: how do I want this to work? What are the constraints?
F
What are the tests it needs to pass? Get that nailed down, explore those edge cases, and then we can implement something to that spec. The second thing, though, is about, you know: is a symlink an instance of a git repo?
F
I can imagine some cases which are not workspace-related that would have a similar kind of issue, and at the very least, we're being inefficient right now, right? If I have a git submodule that does a checkout of, you know, some github.com repo, and I have something else that depends on that github.com repo in one of my dependencies, and I'm actually depending on my submodule...
F
...as a symlink, I'm going to end up checking it out twice, which feels unnecessary. That feels like a separate thing from workspaces that we should fix anyway. And that's it.
D
Yeah, no, I wanted to mention... I was digging more when Jordan was speaking first, but basically, maybe just the possibility of doing that other model we discussed earlier on, where we just symlink everything into the node_modules folder of each consumer of that workspace. But yeah.
D
I just wanted to say that maybe it's not too late. I don't think it's too late at all: we just landed the foundation work for workspaces, so I think we can definitely rework it, and the set of features we support right now is so small that I don't think it's super disruptive, if it's not a complete departure. But that's basically what I wanted to bring up, yeah.
E
Just saying, like, over the next month or two I'm happy to help, to pair with Wes or Jonathan or whoever else is interested, to try and write the RFC. My instinct is that it will either be something import-maps-like, or something where we make a workspace_modules folder or something at the root and link from there, otherwise leveraging node_modules. One of those two, right? Yeah, or exactly: node_modules/.workspaces, something like that, I feel.
A
We could, actually... so, just a suggestion there: if that sounds like something we want to kick off, and sort of have that collaboration, we could kick it off with, like, a new HackMD doc where people could contribute to that async. If you think that would be an easy way to get that collaboration going, Jordan.
A
So that's one option there. I apologize: did anybody else have anything they want to add to this?
A
I know that it's come up a few times, or Jordan's made the note a few times, about the differences between the hoisting and then the shared, or not shared, dependencies sort of model and strategy. From my perspective, that would be something like a strategy config or something like that, which would change, essentially, how things get, like, refined on disk, or even built. I know Isaac corrected me the other day on this: it's what the ideal tree looks like, and then also how things are reified on disk.
A
So I think there's opportunity here to create a couple of RFCs for this stuff. In particular, though, for this one: do we want to just keep it on the agenda, and then the action item is we'll see something more concrete come out in the next little while?
G
What... oh, okay. Well, I was just gonna ask, from your side. I'm worried about these things in the long term and what the user impact is, but from your side, if we started going down the path of solving for the edge cases in these, you know, different spec types: is my concern real? Like, in the long run, is that going to be a problem for maintaining...
G
...you know, the workspaces implementation, if we have that? Because otherwise, if those concerns are sort of unfounded, I think adding this as an incremental step is probably really good, although I still believe we need to fundamentally rethink the whole thing.
F
Yeah, so I think both RFCs are worth exploring, and will probably result in some kind of change or improvement to the client.
F
I think there's, like, the small issue and the big issue, right? The small issue is: should a symlink to a git checkout be a satisfying resolution for a git dependency on that remote? And I have a hard time imagining why it wouldn't be. It seems surprising to me that it's not, even though I understand why it's not. That would solve...
F
You know, that does paper over the problem that Jonathan brought up here. But I also think the bigger RFC, of, like, let's really draw much clearer guidelines around what a workspace is, and what has access to which thing, and how to predict that, and so on, also would solve Jonathan's issue, and is potentially, you know, worth doing for a host of other reasons as well.
D
Oh yeah, one more thing about the original RFC I just opened here: it's also thinking about other different types of version specifiers, like even tags. They don't work right now: if you try to just depend on latest for a package, the current workspaces implementation is not going to be able to match that.
D
Right, yeah, I don't really know what's going on; I didn't look into it.
D
Yeah, and I put together a few examples to kind of try it out, and I found that even just using latest, right, wouldn't really match against it.
Yeah, so anyway, I just wanted to make sure that when we discuss the entire workspaces thing again, we have this kind of matrix of all the different version specifiers we support, and at the very least document that we are not supporting these other ones, if for some reason they are not supported.
F
Yeah, so, tag and remote types: a tag is only treated as valid if it has a resolved value, excuse me, and the resolved value type is "remote", meaning it was fetched from a tarball. That's really all that we can do: to say, well, you depended on latest, and we got this from the registry, and presumably it was latest when we got it, so that must be valid.
F
The other thing is: if you depend on a remote, we verify that it is the same URL as the one that we fetched it from. If it's git, we check to see that the resolved value is a git repo and that it's the same repo, and then there's kind of some other logic: if you depend on a branch or a semver range, we test that as well.
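The specifier types walked through here look roughly like this in a manifest; the names and URLs below are placeholders, not real packages. Shown in order: a dist-tag, a remote tarball URL, a git remote, and a git remote constrained to a semver range:

```json
{
  "dependencies": {
    "a": "latest",
    "b": "https://example.com/b-1.0.0.tgz",
    "c": "github:someuser/c",
    "d": "github:someuser/d#semver:^2.0.0"
  }
}
```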
F
Anyway, bottom line: there are two good ideas here that we should explore.
A
Awesome, and so you've taken the action item to look through the first one, is that right?
A
The smaller one, okay, cool. Did anybody else have anything on that? Jonathan, did you want to add anything, other than "you want to do this thing, let's make it happen"? Yeah.
A
No, no, no, we love that you brought this up. These calls are mostly about conversing through the problem space, and then the, you know, artifact of a more formalized RFC is kind of the place where we can collaborate on the actual definition and sort of spec, and then we go off and hopefully get some time and resources to actually go and implement that. So this is perfect.
A
So appreciate you, you know, proposing it, or asking us why it's not the way it is today. Cool. So if there's nothing more to speak on to that, maybe we can move on to the next item we had, which was added sort of last-minute. That is PR number 198, in, actually, the Arborist repo, and I'll share that with folks.
A
So they know where it is. This is a PR to introduce foreground scripts as an option. Isaac, did you want to maybe speak to this?
F
Yeah. I expect it's going to be somewhat uncontroversial; I just kind of wanted to bring it up in this forum. The idea is: right now we push all, like, build scripts to the background, and this adds an option so that we could do npm install --foreground-scripts, and then all the build scripts will not be pushed to the background.
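As an illustration of what the flag surfaces: given a dependency whose manifest declares a build step like the hypothetical one below, `npm install --foreground-scripts` streams that script's output to the terminal instead of running it silently in the background (the package name and build command here are placeholders):

```json
{
  "name": "native-widget",
  "version": "1.0.0",
  "scripts": {
    "install": "node-gyp rebuild"
  }
}
```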
F
And, you know, if you have, like, node builds that flood a bunch of warnings and errors because of, like, C++-style junk, then you get all those same problems back. But there's been a handful of cases we've seen where folks are not sure that a script is running, and then they, you know, say: "oh well, I updated npm and it's still not running the script", and we're like: "yeah, no, it is, you just don't see it."
F
So I think having that option to run with foreground scripts would at least help with debugging, and there have also been a couple of requests to do that for CI environments, where, okay, adding 30 seconds to the install time is not actually a big deal; what I want is a log of the build. So yeah, that's what this is for.
E
Yeah, I mean, this option seems fine, but maybe, just by default, when a package runs a lifecycle script, instead of outputting nothing, we output "package foo is running a preinstall script", you know, and then "...", and then "it exited with this code", or something like that. The purpose...
F
We do log it; it's logged at, I want to say, the info or verbose level, because it is really noisy. If you have a lot of things that do a lot of build scripts, like in a normal install, you can get really flooded, and it does look like an error if we're logging it.
F
But I think, I mean, what you're suggesting is: take this thing that's currently being logged as, like, verbose, and bump up the log level. The only issue there would be not wanting to make it look like it's an error. But if the build fails, then you do see it in your debug log; it is being tracked now. On the initial npm 7 beta it actually wasn't, and that led to all kinds of problems, so that's why we added that.
E
Oh, I mean, sorry, just real quick: even a summary line at the end, like "these packages", comma-separated list, "ran install scripts", you know, or whatever... just some sort of summary output that said these packages ran these install scripts and they worked would probably be sufficient. People are just, I think, looking for confirmation that their scripts are doing something; they don't care what it's doing.
G
Yeah, so, just to add a little bit of color commentary to this: I have recently seen some security issues that got caught specifically because all of the log output was there, even if most people don't want it. So it's definitely, like, a double-sided issue here, right? There are some pros and cons, and I think Jordan's proposal actually adds some great value, in that it gives just enough to know when you might want to go look a little bit deeper. As soon as you hide it all...
G
...you don't actually know that it's running anything, right? And a curl to a malicious server that happens in a, you know, postinstall script: that kind of thing is nice to know about. If you thought, "oh man, that one looks new to me", that might be just enough to trigger the end user to go: "oh, let me go look at the debug log and see what it actually did there, because I don't remember it running a postinstall script." What about, as one idea...
F
...kind of capturing all of that? Because we do capture it, right? We are piping it and saving it up as a string so that, if it fails, we can dump it to the error log. It wouldn't be too hard to write all of these things to a file in your npm folder.
E
Yeah, like... it would probably be more useful to list which dependencies ran build scripts as well there, but yeah, something like that would be fine for me. Just the idea... I mean, especially in an app where you have 60 things: maybe you're cool with 59 of them, but you really want to see that that 60th one ran, and you don't want to dig through a massive log file every time to, like, audit it.
G
I disagree... I mean, well, specifically, the case that I'm talking about was that the user recognized the package name as something that had previously had some security issues around it, and they went: "that seems strange, I remember that package name from last time, let me go look." If they had had to go into a file, they probably wouldn't have done it, you know? So specifically putting the package name there was the thing, in this very specific real-world use case, that saved us that one time.
A
Personally, I'm all for better messaging and communicating to the end user, and I'm probably closer to Jordan in terms of consolidating. I think a perfect example is the output that we get for, like, npm audit and npm fund, where it's like a single log, or a single line, that essentially collapses that information down to, like, a single metric.

So I'd rather see something like that than something super verbose in terms of the logging, and obviously we could do a better job in saying, like, when scripts are running and who ran them as well. So I think there's an opportunity to clean up that experience, for sure, and I think this would probably align with some, you know, UX and UI improvements we want to make, you know, towards the middle of the new year here, which might be helpful.
A
Because, clearly... it's just a new indicator. Yes, that would be net new, but I can see, you know, there's probably a lot of value in just essentially seeing a collapsed signal: even if it's a single line with a number of how many scripts were run, that's better than what we have today, which is nothing, and/or everything, right?
F
Foreground scripts would also make it possible for some scripts that still insist on having an interactive installer in a postinstall script to work.
A
So that's just an indication that, like, truly, it's like a sanity check for a lot of folks; that would be doing something along those lines, right? And/or it could potentially be like a quality check for folks that are like: "I just installed this thing in an environment and it ran, like, a thousand scripts."
A
So that was sort of a side channel: there's potentially an RFC there, where we could say... I know somebody else has created an RFC before about collapsing... I think it was install output, though I'm not sure.
F
Yeah, so I think a reasonable course of action here is: we move forward with foreground scripts in Arborist, and adding the flag in the CLI. The other bit of work there is to make Arborist count up how many scripts it runs, which is also pretty easy, or even, like, stash all the outputs somewhere even if it doesn't error, where right now, if it doesn't error, we just throw it away, because everything's fine.
F
So if we're gonna, you know... we could track that stuff on the Arborist object. It's just RAM; it's fine, you've got plenty of that. And then we can talk more, or have an RFC, about how we want to message that stuff up at the CLI level. But getting the tools in place is pretty straightforward, and I think there's a lot of support for, like, "yeah, I would use that", so it's not a waste of time.
A
Cool. So, if there's nothing else on that: did anybody else have any issues or any other RFCs they want to discuss?
A
Going once, going twice... I will give you 15 minutes of time back, or 14 minutes of time back, then. Again, happy new year to all the folks that have joined, and let everybody know that we're back here, and we'll be back here again next week, same time, same place. Hopefully I can encourage everybody to continue to contribute comments and feedback in the RFCs repo itself.