From YouTube: Open RFC Meeting - Wednesday, February 17th 2021
Description
In our ongoing efforts to better listen to and collaborate with the community, we run an Open RFC call that helps to move conversations and initiatives forward. The focus should be on existing issues/PRs in this repository but can also touch on community/ecosystem-wide subjects.
A: Welcome to another Open RFC meeting. Apologies for the late start here, folks, and apologies that we aren't able to stream live today; I'll try to figure out what exactly has happened with our options. Usually we'd be live on YouTube, so apologies to the folks who usually watch us there. I want to quickly thank folks for jumping on as well. I also updated the Zoom link, which seems to have changed and is now requiring a password again; I'll look into that for next week and make sure things are a little more streamlined.
B: Hello, everyone. My name is Victor and I am one of the Yarn contributors, and I would like to discuss the issue I have opened on your issue tracker.
A: Yeah, thank you for joining; it's always good to have new folks jumping on. Just for your knowledge, and for everybody here: these calls, and all comms on the RFCs repo, are governed under a Code of Conduct, which is linked in the RFCs repo.
A: Again, these calls are meant to push things forward and hopefully help work get done in the npm client. One quick announcement I have before I open the floor to others: we are looking to hire for the npm CLI team, so we have two open positions.
A: There's one req there, a job description that's on the GitHub website today; I've tweeted about it before. If you'd like to come work with us, we have some open positions and you can work with the great team we have here. Again, we'll be following along with the agenda that was posted in the RFCs repo; I believe that was issue number 326. For folks watching this async after the recording: today's date is Wednesday, February 17th. Before the first issue that we had queued up, maybe I'll just ask folks here: did anybody else have any other announcements they want to make?
A: If not, the one other thing I want to note before we jump in is the OpenJS World conference, which is being run by the OpenJS Foundation. They've extended the call for proposals, the CFP process. So if you're looking to speak at an event, you can still go and apply there. We encourage folks to do that: if you're contributing to open source, or you want to get more involved, applying to speak at an event like that is a great way of doing it. So yeah, we have a jam-packed schedule here; we'll try to go quickly and I'll try to time-box discussions so we can get through more of these.
A: You actually have the ability, I think, to write hooks, which isn't very well known. I know that our docs just got a fresh coat of paint and some updates in terms of the lifecycle events; Gar has been working on cleaning up that documentation, so it should be a little bit clearer about when things are run and when they aren't. But yeah, today you can still run a pre- and post-install hook when an individual package is being installed.
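As a concrete illustration of the per-package scripts mentioned above (a minimal sketch, not taken from the call; the package and script file names are hypothetical):

```json
{
  "name": "example-pkg",
  "version": "1.0.0",
  "scripts": {
    "preinstall": "node ./check-platform.js",
    "postinstall": "node ./download-prebuilt.js"
  }
}
```

When this package is installed, npm runs its preinstall script before the package's install step and postinstall once the package is in place.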
D: I would like to, maybe, use this as a request for an RFC, and sort of direct it towards my good friend and colleague Michael Garvin: document how we ought to be using all the lifecycle scripts, in a way that's sensible and has some underlying symmetry to it, within the bounds of maintaining our current functionality as best we can, and with as minimal disruption as possible.
F: I think the solution here is to work on this and come up with a spec for this use case: a lifecycle event that runs when I install a new package. Or maybe it's not even install, because what action are we actually trying to hook into here? It's when Arborist gets rebuilt, right? The npm CLI could definitely use that; we have things we want to run whenever we update our dependencies. So I think that's a discussion.
F: This RFC, it's brand new, it's only four hours old, it's great. The problem being solved, I figure, is a good one. Let's iterate on this.
E: Yeah. I'm aware of the history of the organic growth of npm, so I'm not trying to judge its growth, but I think npm has long lacked something: when you run `npm init`, the first question it fails to ask you is whether this is a package or an application, and almost every best-practice decision has two possible answers depending on the answer to that question. Should it be `"private": true` or not?
E: What scripts do you want to run, and so on. So I think it might be useful to think holistically about how we can differentiate those use cases. Something could obviously be both; ESLint is both an app and a package, in some ways. But the scripts that you want other people to run when they're installing you, and the scripts that you want to run when you are developing the package yourself, are completely distinct. It might be nice, if I had a time machine, to have two different scripts blocks, one for yourself and one for consumers; that would very cleanly differentiate them.
E: And it would, I think, easily answer the question of what should happen with a given script, because in those two contexts it's very straightforward. Right now we only have one scripts block and it already serves both purposes in many ways. So I'm not suggesting a solution, but I wanted to raise that as a holistic viewpoint that might help that kind of thought process.
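A hypothetical sketch of the split being floated (the `developScripts` and `consumeScripts` field names are invented purely for illustration; nothing like this exists in npm today):

```json
{
  "name": "eslint",
  "developScripts": {
    "pretest": "npm run build",
    "test": "mocha"
  },
  "consumeScripts": {
    "postinstall": "node ./setup.js"
  }
}
```

Under such a scheme, only `consumeScripts` would ever run on a consumer's machine, making the install-time behavior of a dependency explicit.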
A: Yeah. We should probably add notes back to this RFC, or sorry, to this conversation specifically, since, as Gar said, it's only a few hours old; it got picked up last minute here and shoved to the top. But yeah, I noted in chat that I think there is at least a way you can solve this today, to run essentially a script on any...
D: Hooks are kind of a hack that has not made sense since at least npm 1.0. They were a pretty useful way to do a lot of interesting things in npm 0.x, and then we started installing packages locally and it became kind of silly. They do work, but it is incredibly hacky and difficult to use, and the conceptual model doesn't really match most of the rest of the CLI.
D: I think they're best to kind of ignore and keep dragging forward as a regrettable wart, but I wouldn't hang too much on them as a blessed API. It's not really a solution.
G: Go ahead. I would also note that we ourselves kind of need this in the CLI. I remember me and you having the conversation that we wanted to run the bundle scripts and more, all from a post-install-package hook, whatever it's going to be called. So yeah, I think it's definitely an interesting RFC to follow up on.
F: I'll keep this in mind as I'm working on that request to update the lifecycle scripts documentation. I think Jordan hit the nail on the head: the context of these scripts is not clear, and the place to start is documenting what the current context is, that your install script means "when I'm being installed". The only chance of getting to the other side of this is good documentation first.
A: Thank you, Roy. I'm just talking to myself here; it's a day for technical glitches, I guess. So, moving on to issue 324: prefer peer dependencies over regular dependencies when both are specified together.
B: Currently, I have carried out some experiments with npm, both version 6 and version 7, and when I declared a dependency as both a peer dependency and a regular dependency, in all cases npm just ignored the peer dependency part and used whatever I had written in the regular dependency.
B: Now, this could be changed a little bit to give priority to the peer dependency part instead. This way, if Next.js declares webpack as both a peer dependency and a regular dependency, and npm gives priority to the peer dependency part, then Next.js can pick up the user's version of webpack, and only one version of webpack will be installed in the user's project. This case is pretty common, and the question of how to solve the problem when you want only one version of some dependency in the user's project is very important.
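The setup being described would look roughly like this in the framework's package.json (a simplified sketch; the version ranges are illustrative):

```json
{
  "name": "some-framework",
  "dependencies": {
    "webpack": "^5.21.0"
  },
  "peerDependencies": {
    "webpack": ">=4.0.0"
  }
}
```

The proposal is that npm should resolve webpack against the broader peerDependencies range, so the consumer's own copy can satisfy it, instead of installing a second copy from dependencies.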
E: Yeah, I mean, I think my general thought is: in npm 7 this workaround is sort of not needed, because if you just declare a peer dependency, your consumers will still get it by default, since npm 7 now auto-installs peer dependencies when it can, which wasn't the case before npm 7.
E: I'll let him speak to that, but it seems like preferring the peer is important, and also that the most useful thing here would be a warning, saying something like: you have something in peer and deps, or peer and dev deps, and the ranges are different, so you're risking duplication.
B: Yes, I'm just thinking: if the authors of Next.js declare webpack as a peer dependency only, which version of webpack will npm install, the one from the peer dependency of Next.js or the one from the peer dependency definition of vue-loader? Which one will npm pick?
D: That's a good question, somewhat orthogonal to this, though I can see how it's related. npm 7 will attempt to guarantee that everything gets a peer dependency, at or above its level in the tree, that matches its peer dependency range.
D: If it is impossible to resolve in this way, then it will raise an ERESOLVE error, which is either going to be a failed install or just a warning, and we will try to use some heuristics to get kind of the best possible match for everybody.
D: There's been quite a lot of discussion around, okay, in this kind of weird scenario and that kind of weird scenario these two conflict, but there's a prod dep and there's a dev dep but not a peer dep, et cetera. We can take that offline; I think it's kind of not relevant here.
D: The goal, though, is that npm 7 will, and in most cases does, find a peer dependency for everybody with a peer dependency range that matches what they need, and if there's a conflict, it will raise it as an error to the developer, who can fix the conflict.
D: So if it's at the root project, it'll say: hey, you have a problem, you are causing a problem, you need to fix it. If it's somebody consuming your package and it's causing an unresolvable situation, then it'll be a warning, and we will try to find the resolution that was most likely to have been picked by npm 6 or Yarn v1. So that's kind of the answer to that question. On the actual implementation here, there were two things I wanted to mention.
D: First of all, preferring peer over prod is incredibly easy; we can just do that. It's a two-line change and a test. I actually posted the patch in the issue, because I did it while I was waiting for this call to start. The thing that is kind of interesting is how opinionated we want to be about those ranges matching exactly. For prod and peer, I agree.
D: I may have a particular version I want to be running with in dev: I want to test in dev with version three, but I know that it works with any of versions three, four, or five, and you generally want peer dependencies to be a very broad range so that we can find that overlap.
D: So I'm not sure that's the thing we would want to warn about, because that would just be kind of an annoying warning in a lot of cases. But prod and peer not matching, yeah.
D: That should probably be a warning. What occurred to me as I was looking at this is that you can actually have, right now, a dev dependency which does not match a prod dependency, and the behavior there is pretty interesting. Say you have a project X that has Y listed in both dev dependencies and regular dependencies, at two different ranges.
D: It's an interesting footgun, because you are now in a situation where you're testing with a different thing than you're publishing, and I'm not sure what we want to do about that either. So maybe, when we're adding prod deps, if the edge exists already and the spec is different, then we should say: hey, you have two different things here, between prod and dev or between prod and peer. But if peer and dev don't match, okay, that's normal, it's fine.
E: Go ahead and join us, yeah. I think that whenever you have a peer dep range that is broader than one version, you are already in that footgun situation where the default installed thing isn't everything you support, and you're not implicitly testing all of it. The solution is usually to make a CI matrix that explicitly installs the versions you care about. And in npm 7, if the dev dep and peer dep ranges aren't identical and you then do that explicit install, it'll error out.
E: So it actually is a pain in the butt if they're not the same; I just think there's no real ergonomic solution there. But I agree that if you chose to only show a warning for prod deps, that would address the most important case, because in the dev dep case it's only harming the developer and they can figure that out. I have found through practice, though, that making the dev dep range also match the peer dep range is very important, and the most useful.
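The CI-matrix approach mentioned above could be sketched like this (a hedged example using GitHub Actions syntax; the package name and version list are invented):

```yaml
jobs:
  test:
    strategy:
      matrix:
        webpack: ["4", "5"]   # one job per supported peer range
    steps:
      - uses: actions/checkout@v2
      - run: npm install
      # explicitly install the peer version under test
      - run: npm install --no-save "webpack@${{ matrix.webpack }}"
      - run: npm test
```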
A: Just to be mindful of time, because we do have a number of other issues, and I know we've already added a couple of comments on this thread: maybe we can leave this open, bring it back up next week, and feel free to add some comments back async on the thread. The one note I missed there, Isaac: you said you fixed something just before the call. What was that again?
D: That was this issue, preferring peer over prod. It's very easy; I'm not sure why we didn't do it already, to be honest. Looking at the code, it seems like an oversight or something, because we prefer optional and dev over prod for the same reason. So the only thing to do here is give it a quick look over and make sure it doesn't break anything else, so I'll review the other tests in Arborist.
B: It would be nice to have some place documenting these situations that are unclear to users: what will be preferred in one case or another when the declarations contradict each other. It would be great to have some document which clarifies the behavior.
D: That's a good point.
A: Awesome, thank you so much for bringing this to us. So, moving on to the third item we had on the RFC docket: issue 323, improving the experience around security with npx and scoped packages.
H: Yeah, I guess I can give a quick overview. The issue there is probably broader, as was correctly pointed out: in general, there's a bit of a need for clarification, for a more deterministic way of resolving the naming collisions of the various bins and deciding what's preferred.
D: npm exec essentially creates a sort of temporary run-script in your package.json data, and then it calls run-script with that script name. It doesn't save it back to the package.json file, which is important, because you just want to run it as a one-off thing. The other piece is: if you are specifying a package that doesn't already exist in the current local node_modules folder, then it will try to install that package.
D: If you specify a package and that package has a bin, it'll swap out the package name for the bin name. So, you know, if I did npm exec...
D: ...and then a package name. I'm trying to remember a package name where the package name doesn't match the bin name. Okay, so if you do `npm exec @npmcli/arborist` and then some command name, it will look at the @npmcli/arborist package, see that the bin name is arborist, and swap that into the command that it actually invokes to run the arborist bin.
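For reference, the bin mapping being described lives in the package's own package.json, simplified roughly like this (treat the exact script path as illustrative):

```json
{
  "name": "@npmcli/arborist",
  "bin": {
    "arborist": "bin/index.js"
  }
}
```

`npm exec` reads this field to translate the package name you typed into the bin name it actually invokes.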
H: So it does look into the .bin folder at the time being. I guess that's part of the overall security issue, because I guess that was never really resolved in terms of what gets linked into the .bin folder and who gets the priority. So yeah, I have listed multiple issues here in a single RRFC.
H: I guess it will need to be split up, and we'll want to follow up, but maybe we need to discuss how we group these features and what's the correct course of action. I guess my number one thing is the regression stuff. I don't know if it's considered a regression or whether it was deliberate, but in npm 6, `npx @scope/cli` would just execute whatever the CLI is inside that scoped package.
H: It would actually execute whatever is linked into the .bin folder as the CLI, but it would strip off the scope, whereas right now it no longer accepts the scope next to the command. That is my primary concern there, and I guess if that starts working, and if it gets defined in the strictest, most secure way possible, then maybe it solves a lot of the other issues. So yeah, that's the one that impacts `npx @scope/cli`. The next one is this.
H: There was a long-standing issue in the old npx: if there is a command inside a CLI which does not match the package name, and you know that you always want to execute that particular command, it is very annoying to pass `-p` every single time you want to execute it. So it would be nice to have a kind of mapping somewhere. Your example was Arborist, right?
H: So if you want to execute the arborist bin from that specific package, maybe there could be a mapping somewhere where you define it, and then you always execute it from there, whether that's in the package.json or, ideally, I'd probably prefer the .npmrc, because that way it can be made global and it can work for global packages. There are some tricks I'm using, maybe they're anti-patterns, for when we're pinning versions of various tooling that needs to be installed globally.
H: For whatever reason, we re-expose the bin from an internal dependency: say, a specific version of ESLint exposed via a specific version of our tooling. We re-expose the same bin so that it gets promoted into that bin folder, overwrites the correct thing there, and is available globally. That was the primary concern there. So yeah, if we could somehow alias, or create a mapping of where certain binaries should be prioritized from, that would be nice to have.
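A hypothetical shape for such a mapping in .npmrc (the `bin-source` key is invented here purely to illustrate the idea; no such option exists in npm today):

```ini
; hypothetical: always resolve these bins from a pinned package
bin-source.eslint=@my-org/tooling@2.1.0
bin-source.arborist=@npmcli/arborist
```

Because .npmrc files can be per-project or global, a mapping here could cover both local and globally installed tooling.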
H: The other one is a small thing: if you're developing a CLI, right now you cannot run npx in the same folder as you're developing in. You have to call the bin directly, or use node, or use an npm run-script. So you need to maintain the context: am I working on this, or do I have this installed? It would be nice for `--package` to just default to the current folder if it matches the CLI name. And then, in terms of the security issues, yeah.
H: The fact that it's not explicitly defined, or at least I couldn't find it, how the CLIs are preferred when there are naming collisions: there's a bit of an issue there, especially in light of that recent article that was doing the rounds, and Isaac's response that you should be using scoped packages in this particular case.
H: It does not fully resolve the issue, in that if you execute `npx cli` and that CLI is available from both a scoped package and an unscoped package, the unscoped version from the public registry will be preferred over your scoped tooling, which may introduce a certain issue. So as a user, if you want to stay safe, you should always be passing `--package`, but yeah.
H: That is a lot of typing. And then, building on top of this: in general, if a CLI is exposed with the same name from multiple packages, they will all get linked into the .bin folder and they will get executed, but it is not strictly specified which package the CLI will be coming from, which means that maybe, ideally, the most secure option is to just forbid that behavior.
D: Well, I was just going to provide some context in regard to the bin collisions. We ran into an issue a while back which affected npm 6 and npm 7, as well as Yarn; it was prior to npm 7 shipping. Any time you installed a package in the global space that had a bin, and the file already existed, it would just go ahead and overwrite the one that's already there in the global space.
D: This is really, really, really harmful, because I can have a package that has a bin called, like, git or vim or bash, and now I've just owned somebody's entire machine. We fixed that pretty swiftly, and in the fix we attempted to do the same thing in the local space, and very quickly found that that's actually horrible: there are lots and lots of packages out there that people are using together that all export a bin called parser.
D: Nobody could use npm, right? So we were sort of stuck, and where we landed was: local packages just go ahead and clobber one another, and we let that happen, because rolling that back was just way, way too disruptive.
D: That said, if you just specify `npx @scope/cli`, it should run the CLI from that package, even if there's something else by that name in node_modules, just looking at the code. So I think it's worth us maybe digging into the issues you brought up here offline, and coming up with kind of a thoughtful response for each one of them.
D: Since we are short on time on this call, I think it's probably better to just sort of move on and take the action item to go through and address each one of these, if that's all right with you.

H: Sure, that works for me.
A: Yeah, thank you; this is very detailed. So, moving on to the next issue we had. I know this one actually came up from our morning stand-up, Isaac. This is 327: drop support for installing optional dependencies that specify a different platform than the one you are on when using the force flag. Did you want to quickly...?
D: We have a bug in npm 7 today that Nil has a fix for, with an open PR. Basically, the way the bug works is with optional dependencies which have a mismatched platform specification: an os, cpu, node version, or npm version. We do think we might crash on a mismatched npm version, but not on a mismatched os, cpu, or node version.
D: We just go ahead and install the optional dep, and then, if it fails to build, we clean it up, roll it back, and proceed with the rest of the install. But we shouldn't actually even be trying: npm 6 would not attempt to install those if they were optional dependencies, and this actually breaks some existing modules in the wild that are using this to ship one of a set of pre-built binaries.
D: So they'll have optional dependencies listing three or four or five different binary packages, and all of them have different os and cpu requirements. npm 6 would only attempt to install the one that matched, so they would always get the correct pre-built binary. npm 7 breaks that entirely by just unpacking all of them, and since none of them have a build step, they all "work", and now the package can't tell which binary package it's supposed to load.
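The pattern being described looks like this: a wrapper package lists per-platform builds as optional dependencies (the package names here are illustrative):

```json
{
  "name": "native-tool",
  "optionalDependencies": {
    "native-tool-linux-x64": "1.0.0",
    "native-tool-darwin-arm64": "1.0.0",
    "native-tool-win32-x64": "1.0.0"
  }
}
```

and each platform package pins its target via the `os`/`cpu` fields, so only the matching one should actually install:

```json
{
  "name": "native-tool-linux-x64",
  "version": "1.0.0",
  "os": ["linux"],
  "cpu": ["x64"]
}
```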
D: The thing that I ran into is that npm 6 will also install optional deps with mismatched platforms if the force flag is set, and I think that might actually be something we don't want to support, because having those pre-built binaries there in the first place can cause problems.
D: Presumably you don't need them there; that's why they're optional dependencies. And we currently tell users to use force in some cases to accept conflicting peer dependencies, so we've increased the cases in which force might be used. This is sort of a breaking change from npm 6, if we don't install these things when force is specified, but I think it's actually a more reasonable intention.
D: So if it's a mismatched platform and it's not an optional dependency, so it's a regular production dependency and presumably the package won't work without it, force will say: go, install it anyway.
E: Right: install it on my local machine so that in my Docker machine, which it is compatible with, it will work, for example. Sure, there are lots of use cases you could come up with where you might want it. Is there anything else, other than engines.npm, that you can't override npm on with force?
D: Well, you can't override version requirements, for example, the dependency requirements. There's a bunch of things where force is kind of used for those cases where something could go one way or another, and both are kind of technically correct, but one is safer, so we go with the safer one by default; and then, if you do force, we go with the less safe one.
D: The exception to this rule, obviously, is peer dependencies, where with force we'll actually allow something which is incorrect in some cases. But most of the time, force will push forward with something which is technically correct but potentially dangerous, like overwriting a file you're not supposed to, or whatever else.
D: So I think in this case, having that force option for prod deps and peer deps, that is, non-optional dependencies, is exactly what force is for, but having it for optional dependencies is kind of a hazard. And this is different from engine-strict, because we're actually talking about os and cpu requirements, which npm is much more strict about. There is one other thing I wanted to mention about this that we might want to do.
D: For that case you just brought up, where my Docker container is Linux even though I'm on macOS or something, we might want to let you override what os to use. I don't know if that's configurable right now; it just defaults to the operating system and the cpu. But it might be good to be able to tell npm: for the purposes of package resolution, pretend to be this other os and cpu. That's interesting.
A: Stubbing that out... well, I guess we're probably getting that information from the process. Yeah, we would.
A: Okay. Anything else you want to mention on that, or can we leave this up for now and see if folks want to give feedback over the next week?
D: Yeah, I would say let's leave it up; we're almost out of time. Let's leave it up and see if there are any comments on it, or use cases where somebody might actually be depending on using force in this way, since npm 6 did support it. Jordan, I think you're thinking in exactly the right direction there.
A: So the next two that we can try to get to here, with the last seven minutes, and maybe, Gar, you can speak to these, are PRs 321 and 319. These stem from the conversation we had last week about dist-tags. The first is the no-tag publish, PR 321, "add proposal for no-tag publish", and then 319 is the multiple dist-tags.
A: Again, a proposal for supporting essentially specifying multiple dist-tags to be modified. Did you want to just highlight, or give an overview of, these two? Cool.
F: First, I'd like to thank everyone for the discussions last week; they'll make today's discussion much quicker. I don't think we're at a resolution on either of them, because we're still dogfooding, but at least we know what we have. The first one is that there's no way to publish without a tag, and there are lots of use cases for why you'd want to do this, even if it's just getting your default config into that state so you can override it later.
F: The underlying problem is that the `--no-` prefix on `--tag` sets the tag to the string "false", and as we've learned with the CLI, there's going to be someone out there relying on that. Just to comment on Jordan's question about the private packages: I think we could check those, but given that it's arguably an intent someone's going to want, and the hoops you've got to jump through to go look at private stuff on the registry, I don't think we need to research it.
E: So would there be a way to figure out if anybody has requested the "false" tag for any packages, public or private? Because if nobody has requested it, it's probably always an unintentional bug when it exists. It is entirely possible that I have a "false" tag on at least one of my packages, because I ran `--no-tag` at one point thinking it would turn off the tag.
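The coercion at the root of this footgun can be sketched in plain Node.js (a hypothetical simplification for illustration, not npm's actual config code):

```javascript
// CLI config values end up stringified, so `--no-tag` (tag = false)
// becomes the literal dist-tag "false" rather than "no tag at all".
function resolveTag(tagConfig) {
  return String(tagConfig);
}

console.log(resolveTag("beta")); // "beta": the intended usage
console.log(resolveTag(false));  // "false": accidentally a valid tag name
```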
F: So that's that one. I would agree that `--no-tag` would be the preferred way to do it. The other issue here is that the npm registry itself won't allow it. So are we okay with that? They'll get a 400 error.
F: Yeah, maybe it requires trapping the 400 response and parsing it, but potentially we could do that, because the response is actually "400 Tag is required"; the error does say exactly why it blew up on the npm registry right now. So Darcy will help me get those logs, and if that's the case, we'll probably then be able to make a decision on this, to say `--no-tag` is the way to do it, and go forward with it.
F: If anyone else has any comments, that's RFC 321; go comment. The other one is the multiple dist-tags, where Jordan raised some really good questions.
F: Originally we were hoping we could comma-separate them, but that's not the way it would work. I thought commas were not allowed in tags, based on some code I'd read; it turns out that's not true, they're totally allowed, so we can't do that. Then a multiple `--tag` declaration on the command line was suggested, which I believe is pretty easy to implement.
F: The question, then, is what the CLI currently does: if I do `--tag foo --tag latest`, it'll take the last one you gave it and use that as the only tag. So if we're okay with that, then this RFC goes through; if not, we need more conversation. That's where the state and strategy are on those.
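The proposed invocation would look something like this (hypothetical; today the CLI keeps only the last `--tag` it sees, and the package spec is illustrative):

```
# proposed: publish one version under several dist-tags at once
npm publish --tag latest --tag lts

# today, the equivalent takes two steps:
npm publish --tag latest
npm dist-tag add my-pkg@2.0.0 lts
```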
A: Just be mindful that we only have one or two minutes. I appreciate you giving that overview. I think, at least personally, I'd be okay with shipping that as a minor; I don't think it would be a major, breaking change. But we can have that discussion either in the comments there, or we can keep these open for next week as well. Apologies for the tight timeline today. I think these are both great.
D: Yeah. So the main thing is that currently `tag` is always only a single string: you get the last one passed in, and it's coerced to a string. If you do `--no-tag`, it will set it to the string "false".
D: The other thing that's a little bit of a weird issue here, if we allow `tag` to be a list of strings or a boolean, is that we've got to outline what the effective default tag is for the purposes of package resolution and installation. If you specify it multiple times, we could just use whatever's last in the list.
D: The other question is: what is the default tag for package resolution if we set `tag=false`? I think in most cases, if we do that today, we'll just always fall back to latest, because if you don't specify it, it uses latest as the default.
D: The third idea I had, which is a little bit more dramatic and explicit but might be nice: we kind of have this default tag thing which is for installation, and we're reusing the same config value for publishing.
D: We might want to say: actually, the publish one and the install one are conceptually two different things. Although they both refer to the same data object, one is writing it and one is reading from it, so they should actually be two separate configs. So, basically, get rid of `publish --tag` and add, you know, `--publish-tag` or something, which would be a boolean or a set of strings.
D: You can do that for publish, actually: you just put a `publishConfig` in your package.json. But it's the only command you can do that for, right? The point is, it's still a single config which we read in the same way and validate in a single place, and I don't want to deviate from that.
D: I don't want to deviate from the way that we manage configs, but it seems like this config is actually serving two purposes here which are somewhat in conflict, because in one case you really only ever want to have exactly one thing, and you want it to only ever be a string, and in the other case we want to have multiple things, or no things.
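The `publishConfig` escape hatch mentioned above is an existing package.json field; for example, defaulting a pre-release package's publishes to a non-latest dist-tag (the package name and version are illustrative):

```json
{
  "name": "my-pkg",
  "version": "2.0.0-beta.1",
  "publishConfig": {
    "tag": "next"
  }
}
```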
A: I know that's a bit of a broader conversation, shared config versus scoped config, but I know we're over time now. I appreciate everybody jumping on today. Again, I apologize that we weren't streaming live, but we'll upload this recording to YouTube, so there's an artifact there for folks to see in a bit. I'll post the notes and the recording back to the issue thread, as I usually do for y'all. And yeah.