From YouTube: Open RFC Meeting - Wednesday, Sept 9th 2020
Description
In our ongoing efforts to better listen to and collaborate with the community, we run an Open RFC call that helps to move conversations and initiatives forward. The focus should be on existing issues/PRs in this repository but can also touch on community/ecosystem-wide subjects.
B: Okay, and we're live on YouTube. Welcome again to another open RFC call. Today's date is Wednesday, September 9th. We'll be following along the agenda that was posted, I believe in issue 220, and I've copied and pasted, or spammed, the HackMD doc, if you'd like to add yourself as an attendee. Just quickly, a couple of housekeeping notes: all of these conversations and these calls are run under a code of conduct, which is linked there in the meeting notes and should also be referenced on the actual RFC repo itself. The Coles Notes there is just to be mindful and respectful of when people are speaking.
B: Please raise your hand if you'd like to add something to the discussion. And, sort of the outline or desired outcomes of this: this call, and this channel, is to have some discussion with the community and ideally be pushing forward ideas and concepts, and the discussion around the work that we're doing in the npm project. Hopefully it's another channel for the community to collaborate with us. So, yeah, awesome.
C: No, I mean, better releases continue. Yeah, please, please keep checking them out.
B: I was gonna say, that's Roy's usual go-to in some of the other working group calls, but Isaac's right: we continue to ship releases. Isaac and Ryan have been on top of those, and usually you'll see something from us each Tuesday, but intermittently we've also had releases in between that. So we'll continue to, until forever. Awesome. So yeah, please be checking out the npm v7 beta. You can install it with `npm i -g npm@next-7`.
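Spelled out, the install command mentioned above looks like this (the `@next-7` dist-tag is the one referenced in the call; beta tags may have changed since):

```sh
# Install the npm v7 beta globally
npm i -g npm@next-7

# Confirm which version you ended up with
npm --version
```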
B: So there's information there on our blog about what's shipping in each release, and you can also follow along in the changelog; lots of really interesting details there about what we're shipping. So yeah, that's great. So, moving on to the first item we have today: it's a bit of a different topic, but we want to note some of the work that's gone into-
C: Yeah, so one thing that Jordan actually brought up, when we were removing the underscore fields from the package.json files that are installed into node_modules: there's kind of this power-user hidden feature in some of that metadata, where you can see-
C: Okay, well, I'm installing this thing, but I need to know what it's being installed for. And if you kind of go through and track the metadata that's getting stuffed into those package.json files, you could sort of work that out. So one downside of stripping that, or not injecting that data into those files, is that you can't do that anymore. And the feedback I had then was like-
C: Oh, don't worry, we'll have, you know, better facilities to provide you with that kind of info, like an `npm explain` type of command or something. And after we landed the change to improve the error messages from ERESOLVE errors, you could see, like: okay, I'm getting a conflicting peer dep, and it can't be installed without `--force` or `--legacy-peer-deps`.
C: Well, how come? Right? So one of the main things you need to know there is: why is this thing blocking? What's it actually conflicting with, so I can go fix it? And it turned out, once we landed that: oh hey, we have the code sitting right here.
C: That will explain what a module is doing in the tree, and it was pretty trivial to just add an `npm explain` command. So it'll take either a package name, a package name at a version, or a path in your node_modules folder, and then explain all of the things that are depending on that thing and kind of track it back to the root project.
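As described, the command accepts a few selector forms. These invocations are illustrative only (the package name, version, and path are examples, not from the call); they would be run inside a project that has the package installed:

```sh
# By package name
npm explain react

# By package name at a version
npm explain react@16.13.1

# By a path inside node_modules
npm explain node_modules/react
```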
C: Oh no, I didn't. So the other thing is, when we have a peer dependency conflict: one of the things that we ran into very quickly is that real-world peer dependency conflicts tend to be pretty messy.
C: So, you know, you've got a large project that's using Gatsby and React and a bunch of your own plugins, and everything's depending on Slate, which, you know, has peer dependencies on slightly different versions of React, and you've got ESLint in there too, and who knows what else. And so the first pass, where we were just fully explaining everything that's in your tree: it ended up being-
C: You know, in a lot of pretty typical examples, around several hundred to a couple thousand lines of output, and that's just too much to put in an error message. But you may actually find it useful if your next step is to go and track that down and actually send a PR to bump package versions and dependency ranges and stuff. So we do two things. The first is, we give you a really trimmed-down version of that error message, so it only goes-
C: I forget what it is, three levels deep, and it trims the amount of things that it actually shows you. And then, at the same time, if we do encounter that error and it crashes the install, we create a file in your cache folder which has the full report, including the raw JSON of the object explanation that we've generated. So, you know, if you want to dig into it and do some forensics and figure out exactly what's going on-
C: You do have that data, but it will just sort of be thrown in a file that you can also just ignore and let get overwritten in your cache, if you don't care about it. Awesome.
B: Any feedback, or anybody? I know, Jordan, I appreciate that you're always on top of these, so I know that you already gave some feedback on-
B
Looking
forward
to
it
awesome,
so
that's
a
great
note:
it's
a
sort
of
an
announcement.
If
we
can,
let's.
F: I guess- I see that you're explaining react, but it looks like I'm going down the tree, because of the indentation, when really what you're doing is going up the tree, and that confused me at first glance. And I think if it said, from the root project: tap, then treport, then- I'm trying to read this in reverse order, so it's even confusing to me here.
F: So it's tap, ink, react- and tap, react, right? But by going in the reverse order, the way it displays here, you might think, like: wait a sec, but I don't depend- what I thought at first is: wait, does the project depend on ink? No, tap depends on ink. Anyway, it just feels odd to me that you are going up the tree while going down with more indentation, which just feels counter to me.
C: Well, yeah, it's kind of weird, right? So if you just ran `npm ls react`, it would give you that: it would tell you, from the root project, all the things that depend on it. So I think, you know, this is kind of just a- The purpose of this was to start from a given thing that's conflicting, and show me why it's conflicting, right? And so from that point of view, you kind of need to get to that-
C: To that end result. You know, it's just a data object, though; we could walk it in whatever order we want when we produce that output. So I think that there's definitely some room for improvement there, in the actual output of the command itself. And one of the things that is much needed, and is on the agenda-
C: You know, on the plans, but not in the immediate future, is to take a much more thorough look at all of the output that we produce, and to sort of more thoughtfully design-
C: Like, what does `npm ls` output, and what does `npm explain` output, and how do we show packages? We've got a bunch of these things where the output is designed more or less in isolation, so outdated and ls and explain are all kind of doing their own thing in terms of how they print this output, how they display this information.
F: I actually think that throwing out most of this output, and just saying: here is the path, here's the other path, this is the node at which it conflicts, would probably be more effective, to me at least, to find that. Right? Because what you're saying here is- and I'm just reading from the example that you posted in the issue, with the colors and all that, right?
C: Oh no, in the example I posted in the issue, everything is fine; that's just explaining a node that lives in the tree. When we show a conflict, it's because the install is failing.
F: For a conflict there- and maybe that's part of the reason why- I'm just not sure what to take from the output. I guess that's my general feedback: I look at this and I go, yeah, it's a lot of information about how things got in there, but it takes me just as long to read it there as it would have taken me to cat the package-lock.
C: Right, yeah, that's reasonable.
C: We implemented explain because it was trivial: the code was already there. But it is experimental, and so I'm sure in the next few versions we'll see some updates to it, especially once we ever get around to properly designing all of our various types of output.
B: Yeah, I think, to Isaac's point, there's room for improvement, and Wes, you've said as much in other meetings we've had, for sure: where it's the same set of data, just with different statuses, essentially, aligned or associated with that data. It would be great if we had a standardized view of how we're looking at deps, and then the different slices of it for these commands, right? So-
F: Yeah, totally. No, I'm not saying that this is bad; it has all the info you need, it just takes longer for me to parse it than I would hope. I'm also thinking of this because my team at Netflix has this exact feature in Gradle, in their Nebula stuff, and it gives you a human-readable thing, which I have always found kind of nice. It says, like: you are getting this version; it conflicts with this other version.
B: Cool, so moving on to number three: PR number 18, npm audit resolve- audit and audit-resolve.json.
B: I don't think ZB's here, but I think I put this on the agenda just to make sure that we were circling back on this conversation and keeping it in mind. I know that we're discussing this, and I think there's gonna be a deep-dive conversation tomorrow from the package maintenance working group folks. I think Wes is actually the person who is championing that, I believe- or maybe Michael. Awesome.
B: Yeah, you're right, sorry: it's Michael Dawson who's setting that up, to essentially have an out-of-band deep dive on this. I'll have to double-check and find the issue here; I'll take that as an action item, to add it to the notes and maybe follow up with a comment in the issue thread itself, to make sure there's some visibility. But yeah-
B: I wanted to essentially put this on the agenda and make sure we're not forgetting about it, and that there's still lots of discussion going on about how to approach this. And there's been some feedback, which I know I've given and you should bring to the table- again, I think the biggest concern is immutability of these kinds of contracts. So, yeah.
B: So, moving on: PR 217, the RFC for adding a registry per package name.
B: It's by valoren, who also created the issue for this, actually. So this is the issue, and he also created- or is this the PR? I know there were two here- 217.
B: Exactly. So I'm not sure if anybody's had time to look at this; it was only opened last week and added on to the agenda, and yeah, it's pretty uncontroversial, I think, from the conversations we've had, right?
C: Yeah, I think the only thing that we probably need to get done about this is: if we're getting close to calling v7 feature-complete, this is definitely something that will have to be in sooner rather than later.
C: Cool. But I guess valoren is not on the call.
C: The nut of it- kind of the nuts and bolts of it- is: right now you can do `@scope:registry` to set a registry for a given scope, and what they want to be able to do is set `@scope/package-name:registry` and use that registry for that package name. It's a really small code change.
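A minimal sketch of what that could look like in an `.npmrc` (the scope, package name, and registry URLs here are made-up placeholders; the first line is existing behavior, the second is what the RFC proposes):

```ini
; existing behavior: route a whole scope to one registry
@myorg:registry=https://npm.private.example.com/

; proposed: route a single package within that scope elsewhere
@myorg/my-open-source-pkg:registry=https://registry.npmjs.org/
```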
C
It
is
a
potentially
and
its
chance
of
being
disruptive
for
the
broader
community
is
pretty
slim
just
because
it
it
happens
in
the
at
the
config
at
the
user
config
level.
So
there's
not
much.
You
know
you're
not
like
publishing
something
with
a
versions
field
that
other
versions
of
npm
will
will
fail
to
read
or
will
read
incorrectly
so
low
risk
relatively
easy
low
cost.
I
think
we
should
do
it.
F: Yeah, so I have a question on the multiple versions of npm. The way I anticipate this happening is a little unfortunate; I don't know if there's a way around it. Basically, somebody on the team who's using a newer version of npm is going to say: oh great, we got this new feature; throw it in our .npmrc, commit it to the project. And then some CI system somewhere, or some other person, is going to run it on an older version of npm.
F
It's
going
to
ignore
the
config
and
they're
going
to
get
either
a
security
vulnerability,
which
would
be
a
remote
code
execution
because
they
now
downloaded
a
package
from
public
npm
that
they
were
trying
to
get
from
their
private
registry
or
something
or
it's
just
going
to
fail,
both
of
which
are
bad
outcomes.
C: Right. So the most likely outcome is that it will just fail, because I don't imagine you would do this if you had both- you have to manually add it in .npmrc, first of all. Second of all, I don't imagine that you would do this and not also have a scope pointing at a specific registry. Basically, the use case here-
C
The
most
common
use
case
here
is,
I
have
my
scope
and
it's
pointing
at
my
private
registry,
but
I
also
have
these
open
source
packages
that
I'm
publishing
to
the
same
scope
in
the
public
registry,
and
so
when
I
you
know
when
I
try
to
fetch
these
open
source
packages
from
my
private
registry,
it
fails
because
they're
not
there,
and
so
it's
a
little
bit
of
like
an
escape
hatch,
to
to
be
able
to
do
that.
C: I don't know exactly what the plans look like for this with, you know, GitHub and npm integration, but I know there is some work on sorting out how org accounts on npm can translate into GitHub Packages instances, or what have you. That's almost certainly quite a ways off, and there's a lot of logistics to work out there. I don't want to speak for the GitHub package registry- the GitHub Packages product team- on their agenda.
C: I don't know exactly what it is, but that's the use case that valoren brought up: they're using GitHub Packages, which has a scope, and they have the same scope on npm, Inc.- you know, on the public registry- for their open source packages. And so they want to install their open source from the public registry and have the rest of their scope go to their private-
F: Registry. Yeah, that makes perfect sense. As a feature, this is a great feature; I'm not trying to disagree with that. I'm just worried that we are opening up another attack vector related to multi-registry setups, which are already confusing and problematic today for this reason. Because if a single consumer has, for some reason, something that doesn't respect that config that's committed to your repo, and downloads code from somewhere, there's nothing even in the process telling them: hey, you got this from the wrong spot. Right? Like, if there's a-
F: Or, like, the CI servers is what I'm really concerned about, right? One CI script accidentally running the older version of npm, because that's what's installed globally on your Jenkins runners or whatever, and suddenly you're now downloading code from the public repository- public registry- that you didn't know you were going to download. That, to me, is a huge security risk that we don't have today, because today it's also the whole scope or nothing. And- well, we do have it today.
C: I don't think it's complete. You know, I know v6 is kind of- I don't know what the best word is- it's in cryosleep. But I think it's not out of the question to have v6 detect that config and then either raise a warning or throw an error if it has a problem.
F
It's
still
not
gonna
fix
anything,
because
if
people
are
staying
up
to
date
on
npm
six,
they
will
probably
just
have
gone
to
npm
seven.
It's
the
people
who
have
some
again
somewhere,
stuck
on
npm
six
at
dot,
two
six
dot
two
or
something
where
it's
probably
more
worrisome.
Again,
I
don't
think
there's
a
good
solution
here.
I
just
wanted
to
raise
that
awareness
that
I
think
you
know
somebody
will
come
along
and
say.
Look
my
old
thing.
Just
totally
ran
some
malicious
code
and
my
ci
servers
are
pwned
like
right,
because
right
wrong
place.
C: The reason for adding- and, I mean, honestly, yes: the good solution that you're suggesting is that we go back in time and add this in 6.0. But if we add it in, you know, 6.latest at least, then we can say: well, look-
C: You need to upgrade your npm; you're using something that's super out of date. And if what we're saying is: you're using something that's out of date and you need to upgrade to v7, which has all these breaking changes and shifts in the semantics of this and that- that's a harder pill to swallow, right? Yeah. And so I think there are people who will be capable and willing to upgrade to 6.latest that are not capable or not willing to upgrade to v7.
B: Yeah. So would you mind- or can somebody take the action; even I'm willing to- just note that hazard? And maybe we should add some language to this RFC, then, specifically to say that we would ship this, but also that this is predicated on maybe shipping some sort of warning or throwing logic into the pieces like v6.
E: Yeah, I mean, my rough opinion- my concern seems to not actually be a problem after some discussion. In other words, I think it's critical that an engine mismatch not fail installs by default. I think it's great-
E
If
people
want
to
opt
into
that
themselves
on
their
apps
with
engine
strict
and
with
either
of
those
settings,
I
think
it's
very
important
that
engine
mismatches
do
the
intuitive
thing,
which
makes
sense
that
it
should
be
a
warning
and
not
a
failure
in
the
default
case,
but
then
it
should
be
a
failure
in
the
non-default
case,
like
the
strict
case
and
kind
of
reading
this,
this
rfc
it's
it's
saying
that
that
it's
not
doing
that
check
af
when
there's
already
something
in
a
lock
file.
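For reference, the two pieces being discussed here: a package declares its supported Node range in the `engines` field of `package.json`, and a consumer opts into hard failures with the `engine-strict` config. A minimal sketch (the package name and version range are illustrative):

```json
{
  "name": "my-pkg",
  "version": "1.0.0",
  "engines": {
    "node": ">=10"
  }
}
```

With `engine-strict=true` in `.npmrc` (or `--engine-strict` on the command line), a mismatch fails the install instead of only warning.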
C
If
it's
just
not,
if
it's
a
dependency
that
we're
not
re-evaluating
so
right,
you
know
whether
it's,
whether
it's
in
a
log
file
or
just
in
your
node
modules,
folder.
We
don't
like
scan
everything
in
your
tree
and
provide
an
engine
warning
or
what
we
especially
don't
do
and
there's
multiple
levels
to
what
we
could
do
here
so
just
to
go
through
them
like
the
the
easiest
is
what
we're
currently
doing,
which
is
nothing
if
it's.
C
You
know
when
we're
installing
something
we
we
do
this
this
heuristic,
where
we
try
to
find
something
that
will
match
your
engine,
your
current
engine
setup.
So
we,
whether
you're,
an
engine
strict
mode
or
not,
we
do
prefer
the
one
which
is
which
supports
the
engine.
C
Using
if
we,
if
we
can't
do
that,
if,
if
even
given
that
heuristic
the
the
dependency
range
restricts
us
to
something
that
is
not
supported
by
your
engine
range,
then
we
either
warn
if
it's
out
of,
if
it's
not
strict
or
we
throw
if
it
is
now
what
we
don't
do
is
like.
Let's
say
you
have
this
whole
package
tree
and
it's
all
installed,
and
then
you
run
npm
install
foo
to
just
add
a
new
dependency.
C
We
don't
go
and
check
the
rest
of
the
tree,
and
so
the
easiest
thing,
like
I
said,
is,
is
to
keep
doing
that
because
it's
really
efficient
to
not
even
do
anything.
E: Well, I have a suggestion that would be efficient. So I wrote this tool, ls-engines, and it crawls the entire tree, but it generates, you know, for either everything or just production deps-
E: Depending on what flags you pass, it generates a single semver string for the node and the npm engines- or, I think, just node right now. And that doesn't tell you why; it doesn't explain why there's an engine conflict, but it gives you a single string by which you can then judge an engine conflict. So would it be possible, when generating the lock file, to just generate and, like, pre-compute a top-level string for dev deps and prod deps, both, or something?
C: Right. So the first level of things we could do, which is only slightly less efficient than nothing- and I respect your comment that it is efficient, but it is definitely not as efficient as just not doing it- or, it's not as cheap as not doing it. Well-
C: Sure. So the next relatively easy thing we could do is always scan the tree, even if we have no other reason to, and always warn if there's an engine conflict.
C: I'm not sure if it even makes sense at that point to throw if you're in strict mode, if it's not something that we're trying to fix. Now, the much harder thing- which I think is probably not worth doing, although it might be in some cases, or with some opt-in- is to actually re-perform those heuristics. So, go through the tree: I'm npm-installing foo, to add a new dependency, and I've got bar buried deep in there, and now I'm on a different npm version- or, sorry, a different node version.
C: So what I could do is effectively add all of bar's dependents to the list of dependencies that I'm queuing for re-evaluation, so that they re-check to see if there's a new version of bar that they can potentially get. The hazard of doing this by default is that now we kind of have this spooky action at a distance- which, in the past- I've learned to be aware of that hot stove, because users do tend to get confused and upset.
C: When, you know: I installed foo, so how come it updated all these other things that are completely unrelated? And that can even introduce bugs or instability. So it would have to be something where you're saying: I want to update everything that is an engine mismatch, and do your best at re-resolving it.
C: Specifically, right- so the feedback for this RFC, specifically, is: maybe we should re-evaluate the tree. I don't know if it's worth doing. I don't think that we should do the re-performing of the resolutions by default; that's extremely hazardous. With the engine-strict flag, maybe- well, with the engine-strict flag, what I would expect as a user is that if I have engine-strict on and there's an engine conflict, then you fail, right?
C: You mean the work of re-evaluating, or the work of just checking? While really, like-
C: I think in that case you may still prefer to have a failed install, rather than an install that works and makes some changes you did not expect or anticipate, right? If it's in a CI environment, for example, I might just want my Node 10 builds to start failing- right, right, exactly- but if they kept working while doing a different thing, then I may have test failures that are unrelated, or other things that are harder to track down.
C
So
that
really
has
to
be
opt-in.
So
you
know
what
you're
doing.
C: Yeah. So, engines- I guess the takeaway here, it sounds like, is that there's some broad consensus for- because right now it just throws a warning, right- re-evaluating the parts of the tree that we're not otherwise walking, even just if there are any engines, right? We can maybe make that more efficient by adding a top-level queryable field in Arborist- in the Arborist inventory- for engines, so we could only walk the ones that have an engines declaration.
B: Yeah, did you have anything to add, broadly, sir?
D: Yeah. So I was presuming you would have a top-level field as a range, in particular. That would keep it so you would avoid the action-at-a-distance thing; you would have an essentially viewable "what are my supported engines currently" as well, rather than having to try to find a way to extract that. And that way, when you install foo, foo can check against the currently supported range and see if you're outside of it, for any of the things that are installed.
C: Right. So this isn't about a- when I say a top-level queryable field, I mean in the sense that, within Arborist, we have this inventory class, and as nodes are added to and removed from the tree along the way-
C: We sort of maintain a couple of lists for some very common things that we want to be able to query on. The most common one that we use within npm is just package name, because there are quite a few cases where we say: I need to get all the packages named bar that are anywhere within the tree. And so, if we did that along the way, we might be able to skip having to do a second full tree walk, and just sort of maintain a collection of each package, based on the node version that they support.
C: I'm sort of thinking in real time here, so I'm not sure- it's not quite a fit for the way that we use these queryable maps right now, but I'm getting into the weeds of implementation details here, so-
C: It's really kind of just an optimization for being able to get this information quickly at runtime.
C: Yeah, I think we can move forward, and I think that there's enough consensus on what this is actually requesting and specifying that I don't think it would be too hazardous to do, and we can do-
C: -a subsequent RFC to add some capability to say: re-evaluate all my engine conflicts and try to make them good.
C: The benefit of adding that queryable map in Arborist- in the Arborist inventory- is that, you know, ls-engines might get a lot shorter, because you'd load the actual tree and then just walk over this map that already has everything that has engines, so you don't have to walk them all.
C: No, we should be warning. The plan- I guess the relatively efficient plan, if we can tweak some things in Arborist- is to make it warn on every install, regardless of strict settings, but throw if strict is true. Okay, okay- but essentially-
C: Essentially, just run that check-engine command all the time.
C: Not very interesting: engine-strict will fail if there's an engine mismatch that we can't resolve, yeah.
B: True, true. So, yes- no, it totally is supposed to work that way. Okay, makes sense. Okay. So let's move on, then, to the last item, just because we only have about 15 minutes left, and I want to give some time in case there was anything we've missed.
B: There is a new RFC, issue number 221, for no auto-install for peer deps that are marked as optional. This was made just yesterday by somebody playing with npm 7; I'm not sure if they were able to join. I don't think Jeremy's on the call- yeah, this was the-
B
Oh
there,
it
is
oh
sorry,
damon
c
yeah,
yeah
yeah.
B
Okay,
awesome:
would
you
like
to
go
over
this
issue
quickly
or
the
rfc,
and
let
us
know
what
it's
all
about.
G: Yeah, all right. So, pretty much: with npm 7 now, we have the ability for peer dependencies to automatically get installed again, which is great. I think the majority of developers definitely want that feature; it's super handy for, you know, ESLint configs and things like that, that have been a major headache for developers. And one thing that I know I've personally wanted, and I've seen in the ecosystem, is: you take a project like, let's say, a database ORM- something like Knex or Bookshelf.
G
You
know
any
of
these
types
of
orms
that
support
tons
of
different
databases
and
they
require
you
to
install
another
package
in
order
to
use
the
adapter-
and
you
know
say
you
would
install
your
your
base
rm
and
then
you
might
need
to
install
you
know
the
the
node
pg
package
in
order
to
support
postgresql
or
something,
and
what
I
have
definitely
seen
myself
across
the
community.
Is
that
that's
kind
of
been
a
wild
west
scenario
for
a
long
time
where
developers
avoid
listing
these
essentially
optional
adapters
and
other
things
like
that.
G
They
avoid
listing
these
as
pure
dependencies
so
that
they
can
avoid
having
errors
whenever
an
npm
install
is
happening.
But
then
now
we
have
this
automatic
installation
of
peer
dependencies
and
what's
happening
is
we
have
the
luxury
of
marking?
Things
is
optional,
so
that
developers
don't
get
a
warning
whenever
they're
installing
optional
peers,
but
then
all
of
peers
that
are
marked
as
optional
actually
get
auto
installed
now.
G
So,
in
the
case
of
something
like,
I
was
mentioning
the
rm
or
some
other
sort
of
tool,
then
we
go
right
back
to
that
same
sort
of
wild
west
scenario,
where
it's
essentially
developers
not
being
able
to
specify
a
peer
dependency
range
you
I
want
to.
As
an
author,
I
want
to
be
able
to
say
that
my
my
orm
supports
you
know
the
node
pg
package
with
a
specific
range,
but
I
want
to
be
able
to
mark
that
as
optional
and
currently
I
don't
have
the
ability
to
do
that.
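What the author is describing maps onto the `peerDependenciesMeta` field: the ORM declares a version range for the adapter while flagging it as optional. A sketch with an illustrative package name and range:

```json
{
  "name": "my-orm",
  "version": "1.0.0",
  "peerDependencies": {
    "pg": "^8.0.0"
  },
  "peerDependenciesMeta": {
    "pg": {
      "optional": true
    }
  }
}
```

The RFC under discussion (issue 221) is about npm 7 not auto-installing peers marked optional this way.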
G: If you have the package installed- or they just try to do a try/catch, to see if they can import it. And that's been a whole headache: that kind of stuff normally works great in npm, but then there are issues if a package doesn't have its peer dependencies listed, in other package managers- you know, pnpm, Yarn, all that. So it's just a complete mess, and I feel like this is a really great opportunity to move forward and actually fix the mess.
C: Yeah, that's a- I was just saying in chat, that's a really good use case I hadn't really thought all the way through. But yeah: if you were previously listing these things as peer dependencies and then marking them as optional, you update to npm 7, and now we try to install the connectors for every single database that exists.
C: So that's not great. There's another thing that I seem to recall being suggested, which was: in that peerDependenciesMeta, actually add an auto-install boolean that defaults to true- or defaults to false, maybe, if you're Yarn or pnpm- which could be another interesting possibility.
C
I
think,
and
it's
a
it's
a
very
like
weekly
held
kind
of
bit
of
feedback
on
this
idea,
but
I
think
one
slight
advantage
might
be
that
it's
somewhat
more
explicit
than
calling
it
optional
and
saying
well,
optional
things
don't
get
installed,
which
might
be
weird
since
regular,
optional
dependencies.
We
do
try
to
install
them.
We
just
are
okay
if
they
fail,
but
we
do
treat
it
as
a.
C
We
treat
it
as
a
dependency
mismatch
or
dependency
con
like
an
invalid
dependency,
if
the
version
that
you're
loading
is
not
within
that
range,
so
you
do
get
kind
of
warnings
that
are
helpful.
There.
E
So
I
think
the
difference
here
is
that
we're
we're
you're
you're
expressing
an
intuition
based
on
your
combination
of
optional
and
peer
semantics,
which
has
the
optional
winning,
which
is
the
like
or
sorry
which
is
the
peer
winning,
which
is
the
that
the
pure
means
it's
it's
supposed
like.
Well,
let
me
just
step
back
the
way
I
would
view
it
is.
A
peer
dependency
is
required,
because
it's
saying
this
pro
like
this
is
how
it
has
to
be
here.
If
it's
not
there,
the
program
won't
work
and
so
trying
to
install
it.
E
For me, that is, you know, npm trying to be helpful and fix my omission, right. But an optional dependency is...
E
I think it's arguable in both directions whether peer deps should be optionally installed, but I feel like there's just not really a strong argument to automatically install these optional ones, because if someone wants it, it seems appropriate to me that they explicitly have to opt in to getting it.
C
Yeah, yeah. The only pushback I have, and again, I'm mostly with you, I'm kind of just exploring the space a little bit, the only pushback is: well, why is that different for regular optional dependencies, right? They're also the same thing. They're saying we only need this, like, we'll benefit by this being here, but if it's not here, we're fine.
B
Like, what do we think about supporting, essentially, peerDependenciesMeta optional?
C
Right, so I mean, we do support it now; we just treat them with kind of the overlap of peer and optional semantics. So, you know, we try to install it. It starts resolution at the parent of the dep that's being installed, so it has to kind of be at that level or higher, and if it fails to install, then we just proceed, right. We just log, I think, not even a warning.
C
We log at, like, a verbose level and then move on. So the shift here is: don't try to auto-install optional peer deps. I'm not sure exactly what the impact would be, though, if we do actually have a version there already that doesn't satisfy it. Should that be considered a failure? Because detecting that situation will be a little bit trickier. It's trivial to just not try to install them, just don't add them to the list. That's easy.
C
Yeah, I mean, I think this is definitely a very valid use case, and I can see how this RFC provides exactly the right-shaped escape hatch that we need for it. So I'm fine with proceeding on it. I'm just kind of thinking out loud about how it's going to look.
B
Sure. Jeremy, would you be willing to follow up with an actual RFC, like a PR? I know this is an issue to get some commentary.
G
I can most likely get a PR sent over for today, because, I mean, personally I'd like to do anything I can if this is something that everybody is on board with, whatever direction to go in. If everybody's on board with trying to get this in before seven is finalized, I think that we kind of avoid the, you know, not-being-able-to-put-the-genie-back-in-the-bottle kind of scenario, of having to reel the feature back in; it could be a big misunderstanding everywhere. So I'd like to do whatever I can to get it done.
B
Yeah, that'd be amazing. There's an issue template there that you can just copy and then fill out. I think you've done a lot of the work already based on the issue; you can copy over a lot of the language. So, yeah, it's essentially: support peerDependenciesMeta "don't install", I don't know what you want to call it, but don't install optional peer dependencies by default.
B
Awesome, and then we can bring that up and look at that, maybe next week. Cool, all right. Well, thanks, everyone. Were there any other issues or agenda items or PRs that we didn't get to today that folks wanted to bring up? We have a couple minutes left.
A
One thing I would like to make a note about regarding the notes: everyone who spoke, if you want to just double-check them. We had a lot of discussions and it was kind of hard to keep up. So, if you just want to give it a quick look before I submit it to the repo.
B
Awesome, thank you again, Roy, for doing that, and we'll try to get some rotation going here. Maybe next week you can run it and I'll take notes.
F
I realized I should have done this at announcement time, but I think tomorrow the meeting we did schedule with the package maintenance working group is another technical discussion on create pkg.
F
I think that's what is scheduled tomorrow. So, if anybody's interested: basically, what it will be is a community-driven package initializer. So it would be, in theory, a more robust future replacement for npm init.
F
If anybody's interested in that discussion, go on over to the package maintenance working group. We did the original "what are our high-level goals" discussion last week, and so this week we're going to discuss some more technical direction and see if we can set out what the first version should contain, what the structure of it is, and maybe even who's going to help work on it.
B
Links are there. Mark, Martin, do you want to add something as well? Mark Dodd.
I
Yeah, sure. There's been an outstanding RFC, number 27, for a while now. I've been pinging a couple of times on that PR; it's just not been put on the agenda. I wasn't sure whether I'm missing some step to get it on the agenda or not. Which one?
I
It can be discussed next time. I'm just wondering whether I may have missed a way of getting it into a discussion. No, apologies about that.
B
Usually we put agenda labels on, and then the RFC call agenda is automatically created. So, apologies.
B
We didn't get to this; we can also probably circle back on that comment on that RFC async, but I'll add the label just so we don't forget it, in case we don't get any discussion on it between now and then.
B
Thanks for bringing that to our attention, Mark, and I apologize. If there was anything else, feel free to continue to have discussions on the RFCs themselves, on the issues and PRs. I appreciate everybody coming to the call today, and we'll have another one of these next week, same time, same place. Hopefully everybody's staying happy, healthy, and safe. I'll talk to you next.