From YouTube: Open RFC Meeting - Wednesday, Sept 2nd 2020
Description
In our ongoing efforts to better listen to and collaborate with the community, we run an Open RFC call that helps to move conversations and initiatives forward. The focus should be on existing issues/PRs in this repository but can also touch on community/ecosystem-wide subjects.
A: And we're live. Welcome, everybody, to another open RFC call. Today's date is Wednesday, September 2nd, and I'll just copy and paste the HackMD doc for anybody else that's just joining.
A: Right, just a couple of housekeeping notes. I've been away for a couple of weeks, so I appreciate the work by Roy and Isaac and everybody else for keeping these going while I was away. A quick acknowledgement of the code of conduct here: all the discussions, these calls, and the discussions in the RFCs repo and all the repos in the npm org are under a code of conduct that we ask that you abide by, and there's a link there if you'd like to read more. The gist of it is: be kind. Please raise your hand when others are talking, and hopefully we'll have some good discourse. The idea behind these meetings is that we hope to push forward the discussion, and hopefully the work, around the npm project, and this is an opportunity and a channel for folks to help give feedback, and for us to have a discussion with the community about work they'd like to see us do and work they'd like to contribute. Did anybody have any announcements they wanted to share today?
B: Yeah, so we've been, I think, publishing every Tuesday, but we also have releases in between them. We may even have another one go out today, so just keep your eyes peeled for that. If you use the next-7 tag you'll get it if we push. Yeah, correction.
A: You're right, yeah. No, it's great, and yeah, so feel free to start using that and giving us feedback. We actually have a specific issue template which will label any bugs that you file for npm 7, specifically on the CLI repo, so that's a great way of giving us early feedback. If you see that something's breaking, or you're experiencing some sort of degradation from what you should be getting, we'd love to know early and often, and we're hoping to get more and more people using this, as we're hoping to get a generally available release out soon. So yeah, again, feel free to add yourselves to that HackMD doc, and if there aren't any other announcements, we'll just dive right into the agenda, starting with RRFC issue number 211, registry per package on the same organization.
C: Hello guys, yeah, it's Arnold from Paris. I don't know if I can just talk about what we want to do, so.
C: We have packages with my company, and we have some packages hosted on npm and some hosted on GitHub as private packages. We host those on GitHub, but we want to use some npm packages with the organization scope and also reference some packages hosted on the GitHub registry with the same organization scope.
C: So, in fact, I think we can't use two registries with the same scope for now, so the request is just to be able to use the same scope with two different registries: some packages from one registry, and for the others we use another registry. I don't know if everyone understands what I said.
E: Yeah, I think that's pretty clear. I read through the RRFC, and this seems pretty reasonable. The only potentially tricky thing about it is that it is going to require that we make very sure that we are setting that in all of the places where we select a registry, but I think that it's all pretty much going through one code path today. So it's just a matter of updating that: if we see a full package name and it matches, then we use that one; otherwise, if there's a scope that matches, we use that one.
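Isaac's lookup order, exact package name first, then scope, then the default registry, can be sketched roughly like this. The config-key shapes, especially the per-package key, are assumptions for illustration; nothing had been specced at the time of this call:

```javascript
// Sketch of the registry-selection order described above, under the
// proposed RRFC: a full package-name match wins, then a scope match,
// then the default registry. The per-package key is hypothetical.
function selectRegistry(config, packageName) {
  // 1. Exact package-name match, e.g. "@myorg/internal-tool:registry"
  //    (this is the new behavior the RRFC proposes)
  if (config[`${packageName}:registry`]) {
    return config[`${packageName}:registry`];
  }
  // 2. Scope match, e.g. "@myorg:registry" (this part works today)
  const scope = packageName.startsWith('@') ? packageName.split('/')[0] : null;
  if (scope && config[`${scope}:registry`]) {
    return config[`${scope}:registry`];
  }
  // 3. Fall back to the default registry
  return config.registry;
}

const config = {
  registry: 'https://registry.npmjs.org/',
  '@myorg:registry': 'https://registry.npmjs.org/',
  '@myorg/internal-tool:registry': 'https://npm.pkg.github.com/',
};

console.log(selectRegistry(config, '@myorg/internal-tool')); // GitHub registry
console.log(selectRegistry(config, '@myorg/public-lib'));    // scope match
console.log(selectRegistry(config, 'lodash'));               // default registry
```

This matches the use case Arnold describes: two registries sharing the `@myorg` scope, disambiguated per package.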
E: One potential alternative that we're likely to see, and I expect we'll see people request it since it has been brought up in the past, is a way to set the registry right in package.json for a given dependency. The challenge there is that it can make things a little bit complicated, but one of the things that it would allow us to do: for example, if you publish something that has dependencies on multiple different packages that are on multiple different registries, today those dependencies can't be fetched by anybody using it.
E: The other hazard is making sure that you have listed all the things that need to be on the public registry. And again, this is not me pushing back on this RRFC; I think this approach is simple and easy to implement, and I love it because it doesn't make my life too hard. But the shortcoming I expect, or maybe one of the issues I expect here, is that it does require listing out all the public things sort of one at a time, and it doesn't have that additional capability of saying: well, this dependency of mine is from this alternative registry.
E: Yeah, so usually it comes down to putting an object rather than a string as the value in your dependencies list, and then that object could say, you know, version and registry. The pushback on it is usually: well, it's not backwards compatible, it's more complicated, and it has the potential to kind of split the ecosystem in weird ways.
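The object-instead-of-string shape Isaac mentions usually looks something like this in package.json. This is a hypothetical shape from past proposals, not a real npm feature:

```json
{
  "dependencies": {
    "lodash": "^4.17.20",
    "@myorg/internal-tool": {
      "version": "^1.2.0",
      "registry": "https://npm.pkg.github.com/"
    }
  }
}
```

The backwards-compatibility concern is visible here: any tool that expects every dependency value to be a string would choke on the object form.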
A: So this seems like something that we actually want to include. Do we think we can turn this into an actual RFC?
E: I don't see why not. I mean, Arnold, if you want to take the lead on writing that up, that's fine; otherwise one of us can get to it, or we can even just implement it and reference this discussion. I mean, it's not too controversial.
A: Okay, awesome, yeah, sounds good. Cool, so moving along, if there's no other discussion there, to issue 192, the RRFC for a new subcommand, npm clone. I'm not sure if anybody's had a chance to look at this. Isaac, you were the last to, I guess, or actually, sorry, Roy, you were the last to note this. I'm not sure if there was some discussion last week on this.
E: Yeah, so I did post some comments in the issue, and I see meta replied back. The hardest thing about this, honestly, is just the bike-shed on what the options look like, but I think that's entirely resolvable, so we'll probably just do a little bit of that, and once we kind of have that nailed down, then writing it up should be easy. Again, this is relatively uncontroversial.
E: The tricky part is whether or not we install dependencies when you clone one of these, like when you clone a repository. Is it clone and install?
E: Right, right. One possibility is we say npm clone does not install, and that's just how it is, and then we add another command, like clone-install, that will clone it to a folder, then cd in there and npm install it. Maybe.
A: Sure. Roy, do you have a good grasp of it, or Isaac?
E: Yeah, so a quick summary: there's a package on the registry that lists a git repository, and I want to clone that git repository into a folder without having to, you know, do git clone "$(npm repo module-name)" module-name. So npm clone would effectively just do that, and then potentially also run a command in the folder. So one possibility to do installs: we could just say, like, npm clone express -c "npm install", right?
E: That would be one way to cd into that folder and npm install it. But otherwise, yeah, it's something... I actually have a shortcut in my bashrc to do exactly this; I find myself doing it so frequently. Right, I think.
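The bashrc shortcut Isaac mentions presumably looks something like the following sketch. The function name is made up and the URL cleanup is simplified, but `npm view <pkg> repository.url` is a real command for pulling the repository field from the registry:

```shell
# Hypothetical version of the "clone the repo a package points at" shortcut.
# This is an illustrative sketch, not what an npm clone command would ship as.
npm-clone() {
  local pkg="$1"
  local url
  # Ask the registry for the package's repository URL
  url="$(npm view "$pkg" repository.url)" || return 1
  # Strip the "git+" prefix that npm repository metadata often carries
  url="${url#git+}"
  # Clone into a folder named after the package
  git clone "$url" "$pkg"
}
```

With something like this in place, `npm-clone express` would clone the express repository into `./express`, which is roughly the one-step flow the RRFC asks for.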
B: The one question I will have, though: if we do have that, let's say we do have npm clone, maybe we will also want to support something similar to pacote extract, right? To do the same thing, but from the published tarball from the registry. So maybe have npm clone with an option for just fetching from the registry.
A: Yeah, I don't see myself wanting to extract the tarball. I don't see a use case where I would want that, Roy.
A: Sorry, I see Killian and Christian with hands up. Killian, would you like to go first then, and then Christian? Sorry if it was the other way around.
D: Yeah, so a lot of packages just don't have much in the actual upload to npm. I mean, if the package is built smartly with the files field and all, you will get the files, sure, but most of what's actually making up the package won't be there anyway, so checking out an actual repository will give you way more. That's an issue... well, I don't have an issue with it; it's just limited usefulness.
H: Yeah, kind of the same thing: what if you have a monorepo? Or, not really, that is an issue, but especially if you're looking for a specific version, which is probably something you want to do with npm. I know this is probably kind of hard, right, to be able to clone a specific version, but I could see that might be useful. I don't know, or maybe it's just me.
E: Yeah, I mean, that's actually a pretty good idea. So, to address both your points: for the first one, if it's a monorepo and it specifies a directory in the repository, we can sort of handle that. There's an open question as to whether we would want to only clone that one folder, which I know there are some ways to do in git, or if we would just use that folder as kind of the thing we cd into if you pass a command-line argument.
A: So Jordan's made a comment in a side channel here: npm unpack would be interesting.
F: But as for the clone thing, I think it'll be incredibly tricky because of tag-name conventions. The most common is a v and then the version number; the next most common is just the version number; and the next most common, in monorepos, is the monorepo subpackage name, an at sign, and then the version number. But there are also, like, a bajillion other conventions.
E: Yeah, so the thing we would have to do is the same sort of semver matching: just look for a semver range, like what we do for semver tag matching for git dependencies, which is not perfect, but it does work in a surprisingly large number of cases for most of those common patterns. It doesn't work well for monorepos, but yeah.
E: I think there's still a handful of bike-sheds to kind of sort out on this one, but I don't think any of them are particularly challenging; they all just come down to the UX. The basic idea, that there should be an easy way to clone something that lists its repo, like, I would use that all the time. That is something obvious and good that we should have, yeah.
A: So maybe we can ask for either meta or one of us to make the actual RFC, because I think that we can actually do any bike-shedding in the PR itself. I think this is something we want, right? I would use this like day one; I just know that already. So yeah, okay, any other feedback on this? Go ahead.
B: I know meta was sending some messages in the YouTube chat, which we often forget to check. So yeah, basically he says the main idea is to make contributing to packages more straightforward. Yeah, but then he says: oh, realized you are not reading this, so sorry about it.
A: There might be a delay as well. There we go. Okay, sounds like they'll try to flesh it out; that sounds good. Cool, so moving on then to issue number 191, adding a registry identifier to the publish prompt.
A: So this, again, is an issue brought up by meta. It's closed; I'm not sure if you just want to note sort of the next steps here. Roy, you closed it, and we accepted essentially a publish prompt. So basically, I think you added this as an agenda item just to note that we closed issue 191, add registry identifier to publish prompt, sure, yeah, and then you referenced essentially the RFC for the publish confirmation prompt, which I know you had put together, yeah.
B: Yeah, or maybe I just forgot to add it for the last meeting, but anyway, it's a good reminder. We ended up amending the previous RFC to just append it.
A: Cool. Moving on then to PR 146, the notification system for CLI updates. This was brought by you, Roy, so if you want to speak to it, I can take notes while you're speaking.
B: Yeah, cool. So I put that back on the agenda just so that we can give a heads-up to the community, because we ended up landing a new notification system for the client in order to unblock us for the beta. We had some problems with update-notifier, so we ended up replacing it in the CLI already, but the RFC still does have a bunch of cool ideas that we want to implement.
B: So I guess the action item here is to update the RFC, remove the references to update-notifier, and keep just the UX of the thing itself, and now it's gonna be more focused on the header idea that Isaac had before. So maybe, Isaac, if you want to add anything.
E: Yeah, so just to review, because it's been a while since we talked about this, a very quick review. The idea is that, on the registry side: npm, pnpm, and yarn, and pretty much all CLIs that talk to the registry, send a user-agent header that tells us exactly what version of the client they're using. The idea would be that, on the registry itself, we would look at which CLI and which version you are using, and then send back a specific, well-known header that would say, you know, latest version is blah. Then those CLIs could decide whether they want to take that info and print out a notice telling the user what to do, or just ignore it and throw it away; whatever, it's up to them.
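A sketch of the exchange Isaac describes. The response header name here is invented, since nothing had been specced at the time:

```
GET /some-package HTTP/1.1
Host: registry.npmjs.org
User-Agent: npm/7.0.0-beta.7 node/v14.9.0 darwin x64

HTTP/1.1 200 OK
Content-Type: application/json
X-Npm-Latest-Cli-Version: 7.0.0-beta.8
```

The registry parses the user-agent it already receives on every request, and piggybacks the "a newer CLI exists" hint on the response; the client decides what, if anything, to do with it.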
E: If they want to show that every time, or only once per week, it's entirely up to them. But that would save sort of an extra round trip to the registry that's otherwise unnecessary, and that actually downloads the full packument of whatever CLI they're using, so it's not a small request.
E: It's not huge, but it's not nothing. So yeah, I still think we should do that. It's almost entirely registry work, so it's probably going to sit and come, priority-wise, behind a bunch of other, more important things, for, you know, stability and scaling and better features and such, but yeah, we can leave the RFC open as long as it needs to be.
A: I didn't realize how long that would stay open. Cool, so moving on then to RRFC number 210, peer-specific overrides. I think, Killian, you had proposed this. I was away last week, and I think there was some discussion already on this by, I think, Isaac and Roy; is that right?
E: And Killian, yeah. Killian can correct me if I'm wrong, but I believe where we landed was that we're gonna do regular overrides, as specced, rather than this, because overrides are gonna come relatively soon.
D: I forgot. Last time I just added the link to the... what's it called, the...
E: So, why were optional peers chosen? That's a great question.
E: This is something that yarn added and then pnpm added, the idea being that it is a peer dependency that is not installed if you do --no-optional. So it's essentially the same as an optional dependency, but its resolution target starts at the parent of the dependent rather than inside the dependent.
E: Right, exactly. So again, it's just like an optional dependency that has to be a peer; another way to put it, it's the overlap of those two.
E: You know, circles in the Venn diagram. In terms of how this impacts module resolution and how we sort of build out the dependency tree: a peer optional dependency, if we can't place it anywhere, or if it can't be fetched, then just like an optional dependency, it will not be treated as a critical failure that we crash the install for. That's kind of the main difference. So if there are any errors installing it, or if the engines or platform don't match as well (I believe we skip optional deps if we can't find an engine match), or if its build scripts fail, we just kind of clean it up and move on.
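The optional-peer shape being described here is the `peerDependenciesMeta` field that yarn introduced and that npm 7 supports:

```json
{
  "peerDependencies": {
    "react": ">=16.8.0"
  },
  "peerDependenciesMeta": {
    "react": {
      "optional": true
    }
  }
}
```

With the `optional: true` flag, a failure to place or fetch react does not crash the install, exactly as with an optional dependency, but when react is present it must still satisfy the peer range.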
F: Some of the history, I believe, or motivation for doing that in the first place, was that there were some small but vocal parts of the ecosystem that were under the mistaken impression that peer dependencies were optional and had been using them in that way, and then they ran headlong into the fact that 99% of the ecosystem treats them as required.
F: They wanted a way to still do what they wanted to do without having people yell at them and make them feel bad, and then they advocated for this feature. And since, I mean, it has a real use case, I'm not trying to marginalize these cases; that's just kind of my understanding of how the feature came to be.
E: No, it's a reasonable telling of the story. I mean, that's kind of how a lot of features come to be; I get it. I mean, I also thought at the beginning that peer dependencies were optional.
E: Because they're not installed by default, that sort of de facto behavior in npm and yarn and pnpm is what made them be essentially treated as an optional dependency. Although, you know, then authors sure got annoyed that they got bug reports because these dependencies weren't there, or people couldn't install them, and...
E: Yeah, last time we talked about loosening the peer dependency requirement, so it would potentially accept a pre-release if there's one there already that would otherwise match its semver range. I'm kind of with you, Killian; thinking about that more, I don't know if it's a great idea. We were sort of spitballing some ideas of how to get past a lot of the mess.
E: That's built up over the years of peer dependencies not being installed by default and not being, you know, sort of required to be in a valid state by default. I think that we've landed on some better approaches in Arborist and the client that don't require that. So I'm happy to, you know, strike, like, walk back that suggestion I made last time and pretend I didn't say it. It was a bad idea, yeah.
D: Yeah, all right. At first I thought it was right too, but thinking about it, same as you. What do we have on how your overrides RFC works, like, if you specify an override... here, I have the example in the link I sent you. What if, like... do you have it open?
D: I don't, because, I'm just telling you, nobody will get it otherwise. So, basically: we have a root; the root has a dependency on package A; package A requests a peer of B; package A also has a dependency on C; and C requests a peer of B.
E: Okay, so root has a dep on A; A has a peer of B and a dep on C; and C has a peer of B. B is overridden with X from the root towards A, yeah. So anything under that branch of the dependency graph would get the overridden version. I mean, overrides are very powerful, and you can absolutely break your package by doing this. So in that case, yes, C would get the overridden version.
E: The X version, in this example, because we're overriding it from the very root, right? So everything that matches C will instead be replaced with X, and all the deps on C will get X instead. The more common use case here would be situations where I depend on, you know, React 16.13 or whatever, and I have something in my tree that has a peer dependency on React 16.8.
E: The next beta release will make it somewhat more comprehensible, and will also allow them to get past this by specifying false, or sorry, specifying force, to say: you know, I know that there's a peer conflict, but there's something else higher up in the tree that has a direct dependency on this version of React, so too bad, buddy, you're going to get the new version of React. With an override you could be more precise, so you would not have to say force and enforce everything; you could say just this.
F: This is new; it was just a quick question: can we make sure that there is a simple way that I can say, hey, person filing an issue on my package, what's the output of this command? And then it will tell me what their overrides are, so that I can tell them: I don't support that.
E: Yeah, I think listing out your overrides is probably pretty reasonable. We also do need to, at some point relatively soon I think, take a fresh design pass over the output of npm ls, because the way that it prints things out doesn't really clarify whether something is a peer dependency, a dev dependency, or whether it's been overridden.
E: I mean, yeah, so you can only specify overrides at the root package level, but you can specify them in a nested way. So in my project I can say: any dependency on C that comes from within the A portion of the tree needs to be overridden, but other dependencies on C, I'll leave them as they are.
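The nested form Isaac describes would look roughly like this in the root package.json. The overrides spec was still being finalized at the time of this call, so treat the exact shape as illustrative:

```json
{
  "overrides": {
    "a": {
      "c": "2.0.0"
    }
  }
}
```

Read as: any dependency on c reached through the a portion of the tree is forced to 2.0.0, while dependencies on c elsewhere in the tree are left alone.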
E: Yeah, I mean, overrides are primarily a kind of app-level feature; that's the intent. But yeah, there are cases where, you know, for example, let's say in this case with Gatsby: the dependency in question, I mean, Gatsby controls it, so it's managed by the same team. They can totally fix the issue on their end; I have an open PR to help them fix the issue on their end, and it's not that hard to get that resolved.
E: But if it was a case where, you know, Gatsby is using some dependency they don't control, and that dependency has a peer dependency on an incompatible version of React, or, you know, an unnecessarily strict semver range on React, but it actually does work fine with the latest and greatest React 16, then basically Gatsby would work fine if you have this override, and it won't work...
E: ...if you don't, unless you specify force, or you add the override yourself in your app. So yeah, that is kind of an open question, right? On the one hand, I don't necessarily want, you know, the author of one of my dependencies to have that much control over everything in the tree outside of the package that I'm actually installing.
E: It unfortunately also reduces the power that the app author has to control what they want to control with overrides. So, for example, one of the... I don't know if Wes is... no, Wes appears to not be here.
E: Wes had a bunch of real-world cases for overrides coming out of Netflix and some of the stuff that they're doing in production. One of the main use cases was: there's, you know, either a security vulnerability or a bug, or just some kind of bad thing, with a published module that they're using, and so what they wanted...
E: You know, they have a fix, they have a patched version, and what they want to do is say: anybody who's using, you know, version 2.3.4 of foobar, if it would resolve to that anywhere in the tree, we want to use our patched version instead, which fixes this security issue or other bug. And so, if something in the dependency graph has an override on that thing, they don't want to respect it; they want to say, no, like, I know what I'm doing, and I want you to use this override instead. So it's more similar to yarn resolutions, where the root package basically always takes priority, sure, but...
D: ...using that package now, like, the package that specified the override: no matter what overrides the user of that package specifies at their level, they wouldn't get past that other override; the original override from that dependency would still be in effect. So it is a... well, that's not...
D: Yeah, that's the point. But if we don't have an app but a package, same use case: the package doesn't want its dependency to be overwritten in any way. So with exactly that inheritance, the override that that package specified would take priority, and the new root, the one who uses that package, couldn't overwrite that with the peer system I have here.
D: That's also why I made that a separate point, the peer overrides versus the normal overrides. I don't think they're necessarily competing; they are for different use cases, in my opinion.
A: Okay, I think I just want to be mindful of time, because we only have about 12 minutes left, and I want to make sure that we get to the last two items.
A: Yeah, the error message and warning I see is another note that you added here on that, so yeah. I appreciate that you put a lot of work into this, so I mean, I think if anybody gets some time after the call, we should definitely, like, I'm gonna read through this properly, because I haven't had time to do that.
A: If we want to move forward with something like that, it might help with ensuring that people understand how that overrides spec is going to work, and the fact that it's not going to override or supersede the root, or application, or project-level override. It would probably be good for us to implement something like that.
A: So we may want to update the spec, or the RFC on overrides, to add some line about how we're actually going to implement that, so that we avoid that kind of edge case for folks.
A: So they don't have the assumption that you can publish a package with overrides specified that are going to be essentially accepted. So that might be something we want to take away and add to that RFC, and then it sounds like there's some more discussion we can have on your specific PR there, Killian. I'll definitely take a look and add some notes myself after this call.
A: Awesome. So if we can move on to the next item, PR 209, provide download counts by versions. I'm not sure if this was talked about in the last RFC call. No? I'm not sure.
B: I don't think we got to it last time, and... no, I think they mentioned they wouldn't be able to attend. Okay.
A: I'm not sure if anybody's had any time to look at this, then; maybe we should just move on if they were not able to speak to it. This is something, at least, it's definitely a registry feature that we've talked about many times.
F: I definitely think that, like, I made a package called ls-engines that checks that for your dep graph, and I would love to see that be a first-class command in npm, and I would love to see npm do that check when people opt into it, and perhaps even do the check and show the output without failing the install, on install. But I think it's pretty critical, personally, that mismatched engines do not fail the install, because, you know, those things are often missing.
F: Often wrong, and, you know, it would generate a ton of noise. If my two options as a package author are either to have my engines support just be star, or a greater-than-or-equal-to, like, that's one option, which avoids the noise but means people are going to complain that things are broken. The other option is that I make them be correct, but then the instant a new version comes out, even if it already works...
E: The problem with engine-strict, and the reason why it's not the default, is that people, not all package authors, but a significant portion of package authors, tend to be way overly strict in terms of the requirements that they put in their package.json as to which Node version they support.
E: There's some reasonable justification for this, right? Like, if I haven't tested my code on Node 14, and then Node 14 comes out, or Node 15 comes out, or whatever, and then I get a bunch of people complaining at me, I want to be able to say: look, I never promised that this would work there. I tested on Node 12 and it worked fine, and you're using it on 15 and it doesn't; leave me alone.
E: The hazard there is that it probably does work fine on Node 15, and so if we crash, or we refuse to install it, now a bunch of apps just won't work on Node 15, and that sucks for Node. Now Node is getting all of these errors saying, like, well, everything breaks.
E: So we ran into this a long time ago, with Node versions being updated and apps breaking as a result, and we made the decision to make engine-strict not the default, and since then I think life's been mostly pretty good. So the functional thing, the thing that's actually worth talking about in this RFC: we're not going to make engine-strict the default; that's too disruptive, just given the priorities of npm and how we want to work within the Node ecosystem. But the suggestion in this RFC we're talking about is: so, I do an installation with, let's say, Node 14, and it chooses the best match for all of my deps, and it builds out this dependency tree and saves it in my package-lock.json. Then later, on Node 12, I have this package-lock.json and I run npm install.
E: If there's a package-lock, we don't rebuild the ideal tree. We just look at what's in the package-lock and say: all right, you decided already, in the past, what the ideal tree should be, so I'm gonna make it that, because that's what's in the package-lock. And so I'm gonna look at what's in the actual tree, and if that's empty, then, you know, I'll just dump everything in there that the package-lock says and not even think about it.
E: So the suggestion in this RFC is to essentially re-evaluate, for engine strictness, anything that's in the package-lock file that would not otherwise be getting evaluated, and there are two ways that we can go with that. First of all, that's adding work to the install, so it's not something we should probably do by default.
E: Second, we can add that, so we can either make that a separate flag, so when you're installing, like, you know, do this deep engine checking, or we can make it something that npm ls does, where it will, like, highlight those problems as it's reading your package tree and say: oh look, this thing says it has an engine requirement on foo; you have bar, so there's a problem. The other thing, which is where it starts to get into kind of dangerous territory:
E: We use the engines even when we're not in engine-strict mode. We still do use the engine declaration as a heuristic to decide which version of a dependency to pull in. So if your dependency range says, you know, I support version one or two, and two has an engine requirement of node greater than or equal to 14, and one has an engine requirement of node greater than or equal to six, and you're on Node 12, it'll pick version one, because that's the one that doesn't cause an engine problem.
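A toy sketch of that heuristic; real npm does full semver-range matching via the semver package, while this stand-in only compares major versions, purely to illustrate the selection order Isaac describes:

```javascript
// Simplified stand-in for npm's engines heuristic: among the candidate
// versions that satisfy the dependency range (newest first), prefer the
// newest one whose engines requirement is satisfied by the running Node.
// Only ">=N" engine ranges and plain major-version comparison are handled
// here; real npm uses full semver matching.
function pickVersion(candidates, nodeMajor) {
  for (const c of candidates) {
    const m = /^>=\s*(\d+)/.exec(c.enginesNode || '>=0');
    const required = m ? Number(m[1]) : 0;
    if (nodeMajor >= required) return c.version;
  }
  // Nothing satisfies the engines requirement; fall back to the newest,
  // which is roughly what a non-strict install would warn about.
  return candidates[0].version;
}

const candidates = [
  { version: '2.0.0', enginesNode: '>=14' },
  { version: '1.9.0', enginesNode: '>=6' },
];

console.log(pickVersion(candidates, 12)); // picks 1.9.0 on Node 12
console.log(pickVersion(candidates, 14)); // picks 2.0.0 on Node 14
```

This is why re-running resolution under a different Node version can produce a different tree than the one recorded in package-lock.json, which is the non-determinism hazard discussed next.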
E: So if we were to go back and sort of re-evaluate the dependency resolutions for everything that has an engine conflict, now, essentially, even though you have a package-lock, we're installing something different than what's in the package-lock file, and that's where it starts to get into, like, really dangerous, non-deterministic kinds of cases, which I'm not entirely comfortable with. So for that sort of second, more advanced, rebuild-the-package-tree behavior: if you want to do that, you run npm update explicitly.
E: Right, like, would that be possible? Essentially, the logic there, the pseudocode, whatever, would be: get a list of all the things that have an invalid engine in them, and run npm update in your current context just for those packages, right? So that's something that we could potentially add as a flag or an option.
E: Yeah, I'm certainly not opposed to storing, you know, stashing the engines that we used to generate a lock file in the lock file itself. I don't know that it solves this problem per se, because if we're making any change to the package tree, you've got to think that that could completely change what resolutions we have, and changing one thing is the same as potentially changing everything, right? Because these might have different deps; it might cascade and so on.
E: Right, and so you'd have to run, you know, run some specific command. The use case here is: there's a CI process that's running, and it's pulling in things that have an engine mismatch; they're running their CI on multiple Node versions with the same package-lock file. And so, I mean, my kind of thinking on this...
E: The way I would approach it is, like, well, if you need to support Node 12, you need to be running with Node 12, installing and building the lock file with Node 12, and then your CI will install what's in the lock file and test it on Node 12. And if it doesn't break on 12, and it doesn't break on 14, even though the engines are, you know, allegedly broken, then okay, cool, it works.
A: So it sounds like we need to give some feedback to narrow in on what problem we could actually look at trying to help solve, versus what we know we're not going to support. So, okay, it sounds like we can add a couple of comments to that thread, then. I appreciate the discussion that's already been had there.
A: I apologize for running a couple of minutes over. Feel free, again, to bubble up any issues or PRs that we might have missed on this call, and definitely keep having conversations in the actual issues and PRs themselves. But yeah, I want to thank everybody for joining again, and we'll be in the same time and the same place next week. Yeah, I'll talk to you all soon, ciao.