From YouTube: Open RFC Meeting - Wednesday, November 17th 2021
Description
In our ongoing efforts to better listen to and collaborate with the community, we run an Open RFC call that helps to move conversations and initiatives forward. The focus should be on existing issues/PRs in this repository but can also touch on community/ecosystem-wide subjects.
A
And we're live, welcome everybody to another npm Open RFC call. Today's date is Wednesday, November 17, 2021. We'll be following along in the agenda that was posted in npm/rfc issue 494, which I'll copy and paste here, folks, if you're just joining.
A
I know I've spammed this a few times, but the meeting notes link is in the chat; feel free to add yourselves as an attendee there, and I'll be taking notes as we discuss the agenda items. Quick code of conduct acknowledgement: we do ask that folks please be polite on these calls, if you can. As others are speaking, just be mindful, and raise your hand if you'd like to speak and we'll call on you. Appreciate everybody just being kind to each other and thoughtful on these calls, as well as on all the RFC issues and PRs themselves.
A
If not, we can jump right in. I started the agenda a little bit differently than it was generated initially, so I apologize if folks saw this shift a bit since I generated it this morning. The first item that's actually on here is item 493, so this is new from last week. I believe, Caleb, you put this together: adding npm copy to the CLI. Did you want to speak a bit to this?
B
Sure. I wrote this RFC after reading another RFC in the queue, which is about multi-app repository support (workspace support), and also after experimenting with workspaces myself. I found that it was hard to create a deployable bundle from a package if you have workspaces, because packing with bundle dependencies doesn't include dependencies that are hoisted above the package level. So npm copy is possibly a new command that would be included in npm, which copies the pack-list files and production dependencies of a workspace package, or packages in a workspace, into a new directory.
A
Yeah, I think we've had previous discussion in this space. I think there might have been historical RFCs, actually, that people spun up; I could be wrong, I'd have to go digging. But I'm not sure if other folks have context there. Maybe Isaac or Mattis, do you also want to chime in?
C
Yeah, yeah. So I think the multi-app workspaces support thing... What came out of that discussion was sort of two RFCs, one of which is to run prepare scripts on linked, bundled dependencies. So in a sort of multi-app monorepo, or a workspace monorepo, a multi-app workspace, whatever you call it: app A depends on package B via the sort of file-path specifier, and then also lists it as a bundle dependency, and then, when both of those things are true, running npm install inside of that package, right?
C
That sort of local node_modules part is sort of part of another RFC, which I think is still being worked on, and Jordan, I believe, was going to take that over. That one was more about kind of how npm install's tree layout works in a workspaces monorepo. But I'm a plus one, on board with the use case.
E
Yeah, so on the shared layout: I was just going to say what Jordan just said, but there is sort of a, I don't know, primordial form of the shared-layout RFC, which is written, and that's just basically suggesting that we do not hoist anything up above the workspace level, other than peer dependencies and dependencies on sibling workspaces.
E
If we were to do that, that would obviously lead to more duplication in some cases: if two things share a direct, you know, production or dev dependency, it's not going to get hoisted and shared. But we can say, all right, we'll solve that with isolated mode, which is going to give you the same, you know, maximal level of deduplication there with symlinks. Second, I think, in order to really make it satisfy this use case, the npm copy use case that Caleb was bringing up here...
E
We would need to do sort of the next step of that, which was Jordan's suggestion: anything that is shared between workspaces (you know, dependencies on sibling workspaces, or peer dependencies), not only do we not hoist them; instead, we put them in some hidden location, like node_modules/.npm/workspace-shared, and then symlink those packages from there, so that they are hoisted in a sense, but they're still not accessible to the top-level dependency, and they're explicitly linked into the workspaces.
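A rough sketch of the layout being described; the hidden path and package names here are illustrative, not settled:

```
node_modules/
  .npm/
    workspace-shared/
      lodash/          <- shared dep installed once, not visible to root code
packages/
  app-a/
    node_modules/
      lodash -> ../../../node_modules/.npm/workspace-shared/lodash
```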
E
If we do that, then npm pack should still just do the right thing, because everything that's a dependency of that workspace will either be a symlinked dependency or...
E
...something that's in its own node_modules folder. So, in that case, I think it would actually give us exactly what we need in terms of making npm pack just work out of the box, without an additional command. So that would be kind of an argument against this particular RFC that we're talking about. But I think the use case is one...
E
...that's been brought up several times, and so it's kind of just been a little challenging to figure out what's the best and least breaking way to satisfy that use case, in a way that's going to make things behave in an expected way.
B
So one of the weaknesses is parsing npm pack's output to determine what tarball file you would need to extract. For my use cases, I'm not interested in having a tarball file: I either want a zip file to upload to Lambda, or a directory layout. So I have to run pack, and then find the tarball that was created, and then extract it. Which is not terrible, but it's not aesthetic.
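The workaround being described (run pack, find the tarball, extract it) can at least be scripted by parsing the JSON that npm pack --json reports, which includes the generated filename. A minimal sketch; the sample output string below is invented for illustration, not captured from a real run:

```python
import json
import tarfile

# Example shape of `npm pack --json` output (npm 7+): a JSON array with one
# entry per packed package. This sample is illustrative.
pack_output = '[{"name": "my-app", "version": "1.0.0", "filename": "my-app-1.0.0.tgz"}]'

def tarballs_from_pack_output(raw: str) -> list:
    """Return the tarball filenames reported by `npm pack --json`."""
    return [entry["filename"] for entry in json.loads(raw)]

def extract_tarball(path: str, dest: str) -> None:
    """Extract an npm tarball; npm nests contents under a 'package/' prefix."""
    with tarfile.open(path) as tf:
        tf.extractall(dest)

print(tarballs_from_pack_output(pack_output))
```

In a shell this collapses to roughly tar -xzf "$(npm pack . | tail -1)", which is exactly the hoop-jumping the RFC wants to avoid.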
B
Another issue is that npm copy is designed so that you can list a number of workspace packages to include, while npm pack packs each workspace package individually. So if you needed to include, like, two top-level packages, you would end up with lots of duplication.
E
I am tipping my hand that I have not read it. I'm sorry.
B
I mentioned that because I noticed that somebody was typing out these comments in the meeting notes.
A
Yep, yeah. I just put a reference to the actual rendered RFC there in chat as well, for folks that are watching. But...
E
Right, so I think this is maybe better to zoom in on. Like, let's, you know, leave aside the "we should fix the tree layout in a way that will make npm pack behave reasonably for what it is". I think that's totally... I think it's somewhat orthogonal to this npm copy command (or, you know, spelling TBD).
E
Right, right. Unless you jump through a bunch of hoops to, like, you know, manually copy them in, which is not ideal. The use case that I'm hearing here, then, is: essentially, you need a folder structure which has the kind of fully realized package tree of one or more packages, referenced maybe as workspaces, or as folders, or just as, you know, name@version, so that you can kind of zip that up and send it somewhere. Yep.
E
Right, right. So, yeah: send it to Lambda, copy it to Docker, do whatever. That's the thing that wouldn't be super hard to do. I think we just kind of need to figure out the ergonomics of it, you know, the command shape and arguments and what have you.
A
So I'm just grabbing that link, actually.
A
Oh, awesome. Thanks, Caleb. Anybody opposed to, like, the strategy of implementing a new command?
F
The other question I'd have is: is this applying to workspaces only? Because it looks like it doesn't need to, but it seems like it's phrased like it is.
B
So I phrased it focused on workspaces, because npm pack and then extracting works pretty well without workspaces. The test implementation works with or without workspaces; there's just less of an argument to create this new command if you're not considering workspaces.
B
The ergonomics that need to be reviewed are which dependencies are included by default. So, if you're in a workspace: should root dependencies be included? Should root files be included? If you're not in a workspace, it's much simpler: obviously, all the dependencies and all of the included files should be included.
B
There are also some issues that I haven't documented (I'll add them to the RFC), but the current implementation doesn't run any scripts: it doesn't run prepare, and it doesn't run post-install stuff. It should probably run post-install for the package itself, because those files would already be on the exclude list and wouldn't be copied by the pack list. So there are some complicated things that need to get worked out.
E
Yeah, it might actually be worthwhile even sort of approaching it something like a global install to a predefined folder, right, but with a little bit more of an affordance for including one or more workspaces within your project, because that's kind of what you want it to be. You want to get it as if this had been installed, so that you can drop it into either a node_modules folder, or global space, or whatever, like on your Docker image.
A
Cool. It might be interesting just to, like, get visibility into that, get people actually using it. Would you be okay with opening up the PR between now and next week, maybe? Cool.
A
Yeah, I don't think... it sounds like there aren't any strong arguments against this, so long as it's, like, clearly documented, given the fact that "copy" is such a, I don't know, it's sort of very vague in terms of what you're actually doing.
E
Yeah, I don't have a ready suggestion that I would put forward as better than "copy", but I kind of also feel like it's a little vague, so I apologize for the unhelpful negative feedback. But yeah, that's something we can kind of... it seems like...
E
npm layout, npm dump-into-a-folder... I don't know.
E
Yeah, yeah. I mean, I think if we mess with the implementation and kind of, like, comb through the edge cases, and then iterate on the command name, it's entirely possible we just land somewhere like that. Like, maybe pack should just have an option that says "dump it to a folder", or pack should be able to take multiple arguments and give you one tarball that has all of them, and then it's a matter of just, you know, tar x $(npm pack a b c d e). But any changes to npm pack...
E
...I would kind of worry would be challenging, because it doesn't currently give you a way to pack multiple things into a single tarball. If you give it multiple things, you get multiple tarballs, and anybody who's relying on that will run into issues if we change it. Which we could, in a semver-major, if we decide that's more sensible, but...
B
Also, a straw-man use case: it's something that copy, as I implemented it, supports, but I don't... I guess I can think of a couple of times that you might want to create a single directory or archive that includes a couple of different workspace packages at a top level. Like, maybe you want one archive that you're going to use in multiple Lambdas that use different handlers. But I'm not sure that that's necessary.
E
I have definitely run into this in test cases, but I'm also one of, you know, a half dozen or so people who regularly run tests that involve npm-packing things, and package trees and folders and such. So that may also be sort of a very niche use case.
E
I mean, if we fix npm pack and then find that it actually just does satisfy this use case, then that's fine; maybe that's where we land with it. But I can imagine that there's some... even if it's just a flag to npm pack that says "put it in a directory", that might be worthwhile. Also, the case you bring up about bundle dependencies, I think, makes a ton of sense, because npm pack really should be like: this is the archive.
E
This is the artifact that we're going to be publishing; that's, like, the installable artifact. And what you're looking for here is more like: yes, create that installable artifact, but then actually install it in this folder, so it has all the steps and everything, bundled or not. And that is definitely different from what npm pack is kind of designed to do, because you're not trying to create a package; you're trying to create a package tree that's fully resolved.
A
Okay. Jordan, I see your hand's up, and then I'd like to time-box this so we can keep going.
D
Yeah, so I see in the RFC post that there's --production. Obviously, it makes sense that it should mirror the way those flags work with every other npm command. Is there any use case where you would not want to do only production, or does, like, every use case for this involve stripping dev deps?
B
I think so. Okay: I left production and omit the way that they were. Like, even the documentation for npm prune is like, "why would you run prune without passing production?" So whatever documentation rewrite for that would mirror it, but, you know, I'm happy with production being the default.
B
I've started an implementation, and I have something that works, but it doesn't have any options, so it's not configurable. I would appreciate some help, like advice on how to wire up options, because it's currently in a static method, and Arborist doesn't have access to all of the npm command flags; if we were to enable this, it'd have to be with some either boolean or replacement option that can all be async. We don't need to talk about that here.
A
We can get someone, potentially, to pair with you. So, if you've got time over the next week or two, let's see if we can get, like, a call set up, or even async give you some guidance on where to look.
A
So we have a few Slack channels that I've sent; I think, historically, I sent Matt a whole list. I'll add them to the actual RFC docs, so you know which channels we are in. But the OpenJS Foundation Slack has an npm channel.
A
Our Slack organization has a channel there. The Node tooling working group also has a Slack org that we have a channel in, and then the third sort of Slack async place we hang out is the Node.js (former Node Foundation) Slack org, where we have an npm channel. And so those three are great, great places to poke any of the core team. Okay.
A
Moving on to RFC 488: this is the npm install scripts one, making npm install scripts opt-in. Francesco, I know you're here; did you want to maybe speak to any updates that you've got in the last week or so? I know we were talking about telemetry and getting some more data as well.
G
Yeah. And I also just want to highlight that Peter's here too; he's actually been doing a lot of the actual data gathering, so I might hand off to him. But just to give a very high-level overview: we got, like, the very basics of the data we wanted to get, which is just, like, hey...
G
This is how many packages currently rely on some version of this, including kind of the more subtle ones, like, you know, the gyp-file stuff; how that compares to total numbers; and, if you're interested specifically, how many people are using install scripts versus relying on the, you know, auto-detected gyp-file behavior, that sort of thing. And it pretty much matches the last time we did something like this, which was, I think, over a year ago, from a package perspective.
G
I always have to kind of make this distinction, right, because there are different concepts of "package": there are versions versus, like, packages, and it becomes kind of complicated to, like, fully understand the full scope. But I think that this is kind of the easiest way to kind of look at the landscape.
G
It's around 0.6% of packages, and the kind of purpose of this metric, at least from our perspective, is, like, a worst-case scenario: let's say we had to go and, you know, change every single one, or something.
G
Just, you know: since it's not, like, a hundred thousand packages, since it is, like, 12,000, and since, in theory, it's going to kind of look like a long tail download-count-wise, we're just going to kind of manually go through and start looking at these things, and seeing whether kind of some of our intuitions match up with the actual use cases. Like: well, we think a lot of people are using it to either display a message, or to do some sort of, you know, fancy compilation or whatever; such that, hopefully, the next time we update the RFC, it's going to be like: okay, well, here are the raw numbers, how many people are using it.
G
Here's, hopefully, an okay understanding of how people are using it, right, which will ideally inform the discussion a little more. Like, you know: if we go through and we look at 100 packages, and all of them use it for, like, a completely unique thing that would require adding 100 features to npm, then that would clearly give us some pause; as opposed to, like, if we go through and, of the first hundred, it's like: well, they kind of fall into two big buckets, maybe.
G
Let's contact these package authors, ask them how they would, you know, envision this being handled automatically by npm, et cetera, et cetera. So I've also started kind of... there's just a lot of information.
G
Basically, trying to just get all that information into the RFC, so that, like, you know, you don't have to read through, like, 30 comments. But that's more or less where we're at now. I don't know, Peter, if you want to add anything, but I think that covers it, at least from my end.
A
Yeah, yeah, no, I think that answers it, all right. So go ahead, please. I was just gonna say: Bradley, I saw your hand was up (and I apologize if that was from before), and then, Peter, we'd love to have you top that up.
H
There are a bunch of packages, like the Datadog package I mentioned last time, that don't actually need the script in order to function, so that should be taken into account. But I suspect at least a few of those are going to have really high download counts, and then a bunch of them are not going to have so high download counts. And so, the practicality of 12,000 packages migrating...
H
...I am very skeptical of, after being part of the npm... well, the ESM migration on npm, and watching kind of the responses by various maintainers, some of whom are not so eager to support people outside their specific use case. So that's all.
G
Sorry, I did see the comment that you left; I just forgot to address it. So: we know how to calculate that number. It's... it's trivial in concept, but it's non-trivial, like, computationally, to figure out kind of, like, you know, the adjacency matrix of every package to every package, to see if there's kind of, like, an eventual connection. The kind of thinking on our part was... so, for example, just to use the two hypotheticals:
G
If every single package had an install script, that would, you know, be really hard, and probably you'd need to think of a different thing. If one package used an install script, then we'd know, like: oh, let's just convince this one person, and then we'd be okay. So the kind of idea behind this 12,000 number was just, basically:
G
That seems like the kind of thing where we can at least start sniffing around and seeing, like, what the easiness or difficulty is. We're certainly going to look into this adjacency-matrix stuff again. As far as I know, the best way to do it, unfortunately, is: you're just going to go open up every package.json, you're going to read every dependency of everything, right? You have a 2-million-by-2-million matrix; you put a little one every time there's a connection.
G
Then you raise that to the 2-millionth power, and in the resulting matrix, anything that's not zero has a transitive dependency, and then you go and count those. You know, easier said than done, just from the perspective of: you're either going to be curling, or, sorry, running npm info, to get dependencies, or you're going to be untarring to, like, grab the dependencies from package.json, et cetera, all that before actually creating the matrix and doing the matrix math. So we're gonna see, like, how reasonable that is.
G
And then, I guess, the question I had, kind of in relation to all the stats stuff, is... like, you know, the more (and this is something we've dealt with internally too) we, like, start thinking about this problem, the more it's like: oh, wouldn't it be really nice if we could, you know, query the genie for this information on the packages.
G
It would be interesting, like, ahead of time, to know what data is meaningful, from the perspective of, like, you know, what moves the needle, right? Like, if I had a number, like: well, if the transitive dependencies are around this number, then that would be a definite no; if it was around this number, it would almost be, like, an easy...
G
Yes. As opposed to kind of, you know... and again, something that everyone can get stuck on a lot is just the trap of "there's always another question to ask", which is kind of why we're falling more towards the, like: what would it look like to implement some of these solutions, and, like, you know, essentially reaching out to some of these package authors? And answering, precisely, I think, your very good point of, like: well, if they come back with "what's in it for me?" or something, then it's like, okay...
G
Well, maybe that's not the best approach. As opposed to, like: if we go, and we see that there are some big buckets, we reach out to them, we try to pitch this, like, "actually, this makes your life easier, you don't have to update your scripts anymore, blah blah", and we get a positive reception, having that be kind of, like, a positive indicator as to, I guess, starting to make some code progress. Because something that we mentioned, I think, last meeting too, is, I think, in an ideal world:
G
We begin to put some of these features into npm prior to any sort of big switch, right? Like, whether that's, like: "hey, we've made something easier, you can choose to use it", right? Nothing scary; no, we're not gonna take your scripts away yet, or anything. And then, even in between that and some big switch, just having the option of saying, "hey, I'm running npm without scripts and only enabling things", but that's something that you turn on. And if that starts to look really good...
J
Yeah, I mean, I don't think I have that much to add, but I definitely am interested in hearing...
J
...if there's any specific data, like what Bradley said, you know: any sort of metrics that would help make this decision easier. And, frankly, I think if we kind of establish what the criteria should be prior to getting the metrics, it would help us, like, I think, make this decision in a less sort of emotional way. But I'm totally... yeah, I'm basically all ears for any sort of data that would help.
J
Thank you. Yeah, that's really awesome, and I think that's in addition to the npm endpoint that zv is talking about in the chat.
H
It's separate, so josemnid downloads everything to a local database. Oh, version published as well.
K
A couple of different things. One: I don't know the best way to do this, but I do remember, when I used to work at Google, you could query GitHub data on BigQuery. I'm not sure how out of date that data set is. But there is another use case that we want to not forget about here.
K
Like, yeah, packages in your tree is a huge one, but, like: how many people are also using these lifecycle scripts within their own packages and projects? Albeit it's much easier for us to tell people to use different scripts. But, like, I actually have some repositories of my own where, like, I use the post-install lifecycle script to bootstrap other things in my own repos when I run npm install. And so, maybe the solution to that is we just say: hey, at the top level, we'll let those scripts go, because it's your project. But it's when you start creating these kinds of, like, edges, or, like, carve-outs, that a feature becomes really confusing, or it's not as easy to, like, understand, or educate people on how they work.
K
So I just want to, like, remind folks of that use case. Regarding transitive dependencies, and as folks were mentioning here: I do think that it's important to remember that, like, even if we approached every single one of these packages and they all updated their stuff, like, that's great; deep trees don't upgrade. Like, it's pretty much impossible, and I'm not trying to be, like, negative about it, and, like, we are working on things in npm and Yarn.
K
One piece of research that I was really interested in doing, but, like, for obvious reasons (as you all may have seen on the blog, I've been busy with other stuff), is: what workflows are we breaking? And I think we talked about that a little bit last week, but it's like: if I npm install react, or I install electron... or a great example that we saw: if we look at that list that you have, one of the top ones in there is node-serialport, which, to my understanding, is, like, using the post-install script to grab binaries. And, like, yeah.
K
I think just as useful, if not more useful, will be, like: what are these use cases, and what are these ecosystems? I always like to think of the JavaScript ecosystem...
K
...as an ecosystem of ecosystems. So, like: who are the ones that are going to be the most affected by this, and which are the ones that are, like, less likely to update, who's going to, like, end up getting caught in this? And the other side, and I just want to remind folks, is something we talked about last week, which is: while npm scripts absolutely are a place where malicious code can be put, and it's, like, the fastest path...
K
...if you steal a package, that you can get some stuff in there, right? Like, it is not foolproof; it is, to a certain extent, kicking a can down the road. If I wanted to be malicious, I could do some static code analysis...
K
...if I got access to some packages, and figure out the code that will run a hundred percent of the time whenever someone runs npm test for a certain package. I do want us to really be careful about going too far down the rabbit hole of, like, kind of completely getting rid of this really great developer...
K
...experience that we have, when, like, the problem that we're really talking about is, again, highly, highly utilized packages that are compromised. And there are a lot of other ways, and obviously we're looking at them; and this is not to say, "hey, we shouldn't fix this because there are other ways to fix it", absolutely not. And I think, Francesco, some of the things in your proposal that you're talking about give people better tools for protecting themselves. Like, even if the defaults, for example, are not going to make a huge difference, like, giving people...
K
...the tools they need to protect themselves is also awesome. And I think framing it that way, as you were before, we're like: hey, maybe what we're working on is, like, better tools for managing scripts, auditing scripts, handling your own repositories, becoming aware when things within your tree change. These are all things that are super awesome, and, you know, we could kind of see where the ecosystem goes with that without having to enforce it. So, anyways: lots and lots of words coming out of my mouth.
K
I really like the direction this is going, and thanks for the hard work, everyone.
L
Okay, thank you. So, in terms of tools for managing scripts: I started building some, and my conclusion, from looking into the topic and trying a few hacks (and getting suspended on npm, sorry), is that you actually want to stop new post-install scripts from showing up. So the first and simplest thing we can do...
L
...that's going to be fairly quick to introduce, is to put a note in package-lock: whether your dependency had a post-install script or not. And then, if you're installing again, if you're updating stuff, anything, we check back to see if this post-install script is new, and, if it is new, we don't run it.
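A minimal sketch of the check being proposed: compare the install-script flag recorded in the previous lockfile against the freshly resolved tree, and surface any package whose script is new. The field name hasInstallScript mirrors what npm's lockfile format already records; the package data below is invented for illustration:

```python
# Snapshot from the committed package-lock (invented example data).
old_lock = {
    "left-pad": {"hasInstallScript": False},
    "node-sass": {"hasInstallScript": True},
}

# Freshly resolved tree: left-pad's new version added an install script.
new_lock = {
    "left-pad": {"hasInstallScript": True},
    "node-sass": {"hasInstallScript": True},
}

def new_install_scripts(old, new):
    """Packages whose install script did not exist in the previous lock.

    These are the ones the CLI would skip, warn about, or prompt on,
    instead of running them silently.
    """
    return sorted(
        name
        for name, meta in new.items()
        if meta.get("hasInstallScript")
        and not old.get(name, {}).get("hasInstallScript")
    )

print(new_install_scripts(old_lock, new_lock))
```

Packages absent from the old lock count as "new script" too under this sketch, which matches the stated goal of catching scripts that show up for the first time.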
L
We warn, or ask, or stop the process somehow, to let the user check. Because the only popular risk with post-install scripts is someone adding the script to a package that didn't have it before, and we have so very few popular packages with post-install scripts existing already that taking one over would not be as easy as taking over, like, the majority of other packages and adding a post-install script.
G
So, definitely. I think the idea from the original RFC is to kind of have that feature through the, like, specific allowing of scripts for specific versions, under the theory that, like, if a new thing comes out, the new scripts would be on a different version, and I think you get the same feature. I guess, technically, the distinction would be, like, hashing the script and comparing the script, but, like, then we get into kind of weird use cases of, like: what if that relies on something else? But, I guess, just big picture:
G
I agree with that. My only kind of, I think, opinion on that matter is that that information should exist in the package.json, if possible, as opposed to package-lock.
G
Just due to the kind of unfortunate situation we have with GitHub, where, by default, package-locks don't render in PRs, because they're always very big, and, in my experience, no one reads through them; as opposed to, like, information in package.jsons, which, I think, does get kind of more eyes on it during the review process. At least in my experience, package-lock is a bit more kind of intended for a computer to read than a human, and it would be kind of doing double duty...
G
...if... and again, at least one of my desired goals of this is less about turning things on or off, but, precisely to kind of the previous point that's been made here, just surfacing this information, right? Like, putting the user in a position to be able to make the right decision. The other thing I wanted to clarify: I just want to, I guess, validate what Myles is saying here. Like, in my mind, it is a failure case...
G
...if we break things. And, like, I understand that, like, almost any change will break stuff. The way I like to operate is: I want to set, at least at this very early stage, the bar to be, like: can we do this where it doesn't break anything? Maybe, like, after a month thinking about it: yeah, we can't...
G
Can we provide the best experience and solve this particular problem without breaking stuff, and what does that look like, right? And I'm willing to believe that it's not possible; I'm just curious what happens when we set that restriction on ourselves. And, if nothing else, I just want that to kind of relieve anyone's idea that, like, you know, our position is to, you know, turn off scripts. That's not our position.
G
Our position is to, like, ideally... kind of, like, in a lot of ways, the thing I think we're all kind of saying we wish we could do is somehow magically say "no more scripts from now on, unless you really, really, really need them", right? Like, you know, if we just had some way of codifying that. And perhaps we can get close to that; you know, it's come through the chat, some stuff like... I definitely think that there are some approaches of, like: look...
F
I'm not sure who was first, zbr or... I think I was. The one thing I wanted to say, to what was brought up about package.json and including it there: I see zb kind of extended this, or might be talking about this too.
F
I think I totally agree that package.json, or package-lock, is not a file that should ever be processed by humans; like, it's just a huge risk, in a lot of ways, to have to rely on people processing that document. I think, you know, the way I would think about that is: it is intended to be processed by machine, and then, you know, the CLI will be the interface in which that gets surfaced to a person.
F
So when I run npm install, there would, I guess, need to be some storing of a decision log around what you're allowing and what you're not, but the CLI should be the thing controlling access, allowing scripts to run based off
F
what's in the package-lock, rather than the package-lock being something people would reference. And then I suppose you would need a metadata file, or a property in package.json, declaring which scripts to allow and which not, which I have a feeling is maybe what zb already did. But yeah, there needs to be something outside of it. I agree there needs to be something outside. I think that data should probably live in the package-lock, though.
L
Yeah, so what I meant about package-lock is: I think the easy thing we can do first is to not put people through the trouble of reviewing any of this.
L
So
package
lock
is
the
choice,
because
it's
the
cli
that
needs
to
store
that
information
and
only
react
to
the
risky
situations
and
then
inform
the
person
so
yes,
tampering
with
it
in
a
pr
would
would
be
risky,
but
then
tampering
with
package
block
maliciously
in
the
pr
and
getting
it
approved
is
much
riskier
even
now,
with
existing
capabilities
than
just
just
this.
L
Changing that one boolean. So yeah, I still think package-lock is better for the initial choice. And package.json is famously easy to overload with other stuff coming from the various things we tend to configure through it, so some people are very opposed to changing it. I don't know if that's something we still care about, but it used to be an argument before.
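As a rough sketch of the idea being discussed here, assuming a hypothetical `allowScripts` property (this is not an existing npm feature, and the shape is purely illustrative), a per-package decision record in package-lock.json might look like:

```json
{
  "name": "my-app",
  "lockfileVersion": 2,
  "allowScripts": {
    "node-sass": { "postinstall": true },
    "some-dep": { "preinstall": false }
  }
}
```

Under this sketch the CLI, not a human reviewer, would read and update that section, only prompting when a newly installed package wants to run a script that has no recorded decision.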
A
So, just to be mindful of time, we only have about seven minutes left. I know we're getting a little into the weeds here about an actual implementation. It sounds like there's still a lot of work to be done to actually uncover the impact, and it sounds like there's a bit of work that a few folks have agreed to take on.
A
I just want to make sure I capture that accurately. Bradley, did you note that you were willing to potentially help uncover, or utilize the Node project's tooling to uncover, whatever...
A
Okay, I think it would be important to have the overall impact of the depth of usage; let's say the transitive dependencies that are relying on one of these 12,000 packages that have already been uncovered to be using install scripts in some way. But then what might be interesting, and what I think, Francisco, you were getting at, was uncovering... and, Miles, look at this as well.
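As a starting point for that kind of measurement, a local scan for packages declaring install-time scripts could be sketched as below. The helper names are illustrative and not part of npm; for brevity this only handles a flat, unscoped node_modules layout.

```typescript
// Hedged sketch: enumerate installed packages that declare install-time
// lifecycle scripts. lifecycleScripts and scanNodeModules are illustrative
// helpers, not npm APIs; scoped (@org/name) packages are skipped for brevity.
import * as fs from "fs";
import * as path from "path";

const INSTALL_SCRIPTS = ["preinstall", "install", "postinstall"];

// Pure check: which install-time scripts does a package.json manifest declare?
export function lifecycleScripts(manifest: {
  scripts?: Record<string, string>;
}): string[] {
  const scripts = manifest.scripts ?? {};
  return INSTALL_SCRIPTS.filter((name) => scripts[name] !== undefined);
}

// Walk a flat node_modules tree and report packages with install scripts.
// A binding.gyp file also counts, since npm runs "node-gyp rebuild" by
// default when one is present and no install script is declared.
export function scanNodeModules(root: string): Map<string, string[]> {
  const found = new Map<string, string[]>();
  const dir = path.join(root, "node_modules");
  if (!fs.existsSync(dir)) return found;
  for (const name of fs.readdirSync(dir)) {
    const pkgDir = path.join(dir, name);
    const manifestPath = path.join(pkgDir, "package.json");
    if (!fs.existsSync(manifestPath)) continue;
    const manifest = JSON.parse(fs.readFileSync(manifestPath, "utf8"));
    const scripts = lifecycleScripts(manifest);
    if (fs.existsSync(path.join(pkgDir, "binding.gyp"))) {
      scripts.push("binding.gyp");
    }
    if (scripts.length > 0) found.set(name, scripts);
  }
  return found;
}
```

Crossing the results of a scan like this against the dependency graph would give the kind of transitive-impact numbers being asked for.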
H
I don't think a patch alone helps: if people are pinned on versions, like Miles said, they're never gonna update, so they'll never get the update. We'd need to know how big of a change we can allow and automatically roll into the transitive deps, which I don't think will be very big; I think it's going to be a patch.
A
Rebecca, do you have a comment? I'm just trying to read the chat here at the same time.
J
Yeah, so I think that's a very important callout. I do have that listed, and I'll talk with you offline, Bradley, just to coordinate some of this work. But I do have a list of every single package that has any of the lifecycle scripts, including a gyp file as well, and that list can be used to determine the dependency graph and get a full understanding of these transitive dependencies. I can't remember who it was that was asking for that.
A
I think there are also potentially a few RFCs, or even just discussions, we could start here that would help to begin addressing at least the platform- and OS-specific package distributions. Internally, our team actually looked at this and did some exploration into the space roughly six or seven months ago, when Yarn was considering a package variants RFC.
A
This is one of the use cases that has come up for post-install scripts: fetching native binaries. It's not all of the use cases, but it seems to be the one that's most often referenced. So potentially in the next week or so we could explore opening up at least a PR or RFC in that space to share sort of the initial
A
work and thoughts that our team had as well, which might allude to features we could add to the CLI that mitigate the need for install scripts.
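One existing pattern in this direction, used by projects such as esbuild, relies on the real `os` and `cpu` package.json fields: each prebuilt binary ships as its own package that npm will only install on a matching platform, and the parent lists those packages under `optionalDependencies`, so no post-install download script is needed. The package name below is illustrative:

```json
{
  "name": "my-native-tool-linux-x64",
  "version": "1.0.0",
  "os": ["linux"],
  "cpu": ["x64"]
}
```

A wrapper package would then depend on each platform package optionally (e.g. `"optionalDependencies": { "my-native-tool-linux-x64": "1.0.0" }`) and select whichever one was actually installed at runtime.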
A
We've only got a couple of minutes left, and I apologize to the other two RFCs we had here. Were there any updates to, I guess, 375, Isaac, that you could speak to? I know yourself and Jordan were going to sync up at some point and make some work happen there.
E
I'll update with the PR comment, basically what we already said earlier in the call regarding npm copy. I kind of touched on that; we have some idea of what the next step would be to make it a little bit less primitive than its current state.
A
Yeah, if there's any way for you two to sync up before next week; I know a lot of folks might be out, given the holidays that are coming up here in North America. And apologies, Matt: did you have any update on your side? Have you had a chance to sync up with Mike yet or not?
C
Yeah, no real update for me. I was out of town last weekend, but I should have some time this weekend, especially since I'm in all the Slack channels now. So if anyone wants to hit me up in your preferred Slack channel to chat about stuff, I'd be super interested, especially to find out if Caleb's version of the RFC makes mine not a thing, or vice versa.
A
Maybe that's something you two can sync up on once you both get into the same org; I'll invite Caleb to all three channels, or three Slack orgs, that I did for you, Matt. Cool. cb, I see your hand is up. Did you want
L
to give a comment just before we go: I just wanted to ask if it makes sense for me to write up a short RFC covering the minimal approach we could take, or if we want to continue discussing it just as an option or precondition to the existing RFC.
A
So I feel like it's very much aligned with your recommendation, and just where that information lives might be the thing you have differing opinions on. I think most folks on this call are aligned on lock files, but how we surface that information to end users is, I think, still up for debate.
J
Yeah, I have your email. I now have your email; I'll send you an email and try to get... and I think I still have Bradley's email, so I'll try to get a group going. Thank you. No problem.
A
Thank you so much, everyone. I know we're at time. I appreciate the conversation, and we'll continue discussing these items into the next week. Hopefully we'll see you next week. Cheers.