From YouTube: Open RFC Meeting - Wednesday, May 27th 2020
Description
In our ongoing efforts to better listen to and collaborate with the community, we're piloting an Open RFC call that helps to move conversations and initiatives forward. The focus should be on existing issues/PRs in this repository but can also touch on community/ecosystem-wide subjects.
C: I don't make English people try to pronounce it — I just use the first two letters: it's ZB. I've been using that with English-speaking people for the last seven, eight years; it works, I react to it — good. And you might know me as an actor from the internets, and I'm the guy who wrote npm-audit-resolver, and I hope to talk about it today.
A: Awesome — I know.
A: Actually, before that — is there any item that should be changed or added to this? I know Roy had a chance to modify this, but if there's something on here that folks want to talk about that isn't listed, let me know. But we'll dive right in and see.
The first item is PR number 138, the RFC for adding an npm app ID in an HTTP header. I know we got into this sort of late last week, I think, when we were discussing it first — I'm not sure.
I: You know, copy and paste the string somewhere — but if we want to do more interesting stuff with it, like link it to a git repo or an action, or, you know, specify some kind of metadata that sort of rides along on this app that's not being published, then it could be interesting. And there are some more aspects of the surface area that should probably be outlined a bit more. So we could talk more about some of those potential use cases, or — yeah.
I: The result — oh, sorry. Go ahead, yeah. No, no — you go. I think you're right, and I think that's kind of what I'm getting at: the result may end up being functionally identical to what this RFC proposes. I just think there needs to be a little bit more commentary on, and exploration of, what it could be used for. Okay.
A: So it sounds like, on this specifically, we should just give some feedback. It's probably not something we're going to do — it sounds like there's a workaround to help you achieve this today, and if we want to explore something along these lines, then it would probably be best in a net-new RFC, with all the various use cases that supporting something like this would unlock.
I: Yeah — so the other thing about, like, setting the app ID via an environment variable: that seems fine, you know, or config or whatever; we could do that a number of different ways. We could have the server set it based on an HTTP header that it could set back. But the main pushback that I had on this RFC specifically was: rather than put it in the package-lock.json, it should just be a thing that rides along with every HTTP request you make.
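As a rough sketch of that idea — an app ID riding along on every registry request as a header, rather than living in package-lock.json — here is a minimal illustration. The header name `npm-app-id` and the option shape are hypothetical, made up for this sketch; they are not part of any real npm API:

```javascript
// Hypothetical sketch: attach an app identifier to every outgoing
// registry request instead of persisting it in package-lock.json.
// The "npm-app-id" header name is invented for illustration only.
function withAppId(requestOptions, appId) {
  if (!appId) return requestOptions; // nothing configured: no-op
  return {
    ...requestOptions,
    headers: { ...requestOptions.headers, 'npm-app-id': appId },
  };
}

// Every request a client makes would be wrapped the same way, so the
// ID "rides along" uniformly rather than being special-cased per call.
const opts = withAppId(
  { method: 'GET', headers: { accept: 'application/json' } },
  'my-ci-pipeline'
);
```

The point of the wrapper shape is that the ID never touches any file that gets committed or published; it only exists at request time.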
I: So if you specify an app ID, then npm-registry-fetch will just send that with every request it makes. And then, if you wanted — that actually frees us, keeps us less tied to the current approach that we use with npm audit right now. npm audit sends your entire package-lock.json, decorated with a little bit of metadata — you know, for, like, node version and npm version. What I'd like to do, actually — there are some problems with that approach, actually, for doing npm audit.
I: So we are actually looking at maybe doing just a really simple — like, extremely terse — bulk advisory fetch when we do npm audit in npm v7, which is, you know, an order of magnitude or two faster on the server. So it can be much more reliable, it's more cacheable, and it's less data over the wire. So, for something like this that's saying "oh well, we should add it to package-lock.json" — well, that actually isn't going to be as future-proof as you might think.
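The "extremely terse" bulk fetch mentioned here can be pictured as sending only a map of package names to the versions actually present in the tree, instead of the whole decorated package-lock.json. This is a sketch of the idea, not the actual npm v7 wire format:

```javascript
// Sketch: condense an installed tree into a name -> [versions] map,
// the kind of terse payload a bulk advisory endpoint could accept.
function bulkAdvisoryPayload(nodes) {
  const payload = {};
  for (const { name, version } of nodes) {
    if (!payload[name]) payload[name] = [];
    if (!payload[name].includes(version)) payload[name].push(version);
  }
  return payload;
}

// A tree with a duplicated package still produces one compact entry.
const body = bulkAdvisoryPayload([
  { name: 'minimist', version: '0.0.8' },
  { name: 'minimist', version: '1.2.0' },
  { name: 'mkdirp', version: '0.5.1' },
]);
// body: { minimist: ['0.0.8', '1.2.0'], mkdirp: ['0.5.1'] }
```

Compared with shipping the whole lockfile, this payload is small, carries no project-identifying structure, and is far easier to cache on the server side — the properties claimed above.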
A: So, item number three, PR 135 — clarify/outline the RFC withdrawal proposal. Essentially, I think we actually merged this the last time — yeah, this got merged, I think, last week. Essentially this was an amendment to the readme and, sort of, the process that we take to ratifying and then potentially withdrawing support for work. So feel free — if somebody wants to change the language in the future, make a pull request — but this will help us.
A: Essentially, we should take some time as the team, probably offline, to update some of the accepted-but-not-implemented RFCs, and then open up some PRs for us to migrate those into that withdrawn folder — with, obviously, reasons why they don't make sense anymore. So that's a good thing to note. So that's now been merged; I'll take off the agenda label, actually — I think that's what got pulled out, yeah. So, moving on up here: number 133, remove the depth field from npm outdated. And I see somebody's taking notes — thank you.
H: A bunch — so I actually ripped out the part that calls npm outdated and just reimplemented it. Not to say that that means that's what you should do, but that's what I did. So I'm working on some different user experiences there; when I'm ready, I'll revisit, but I don't think I'm ready to make any proposed changes yet. But, unfortunately, there were a couple of reasons why it made more sense for me just to fetch all the packuments myself, copy and paste the logic out from npm outdated, and do it separately.
I: Yeah, yep — it's mostly implemented. The stuff around update --depth — er, sorry, outdated --all — is a little bit weird, but I don't know if there's any way around the weirdness, like showing scope in it. No — okay, so the idea, the thinking behind npm outdated, is that it should be a list that says: if you run npm update, this is what it will do, right?
I: The problem is, it can never actually be that. Because if you run it, for the top level, it will accurately say: these are the top-level dependencies that are out of date. If you run npm update, it will definitely update those things. However, when you look deeper in the tree, you might have an outdated dependency which itself has an outdated dependency that the new version no longer depends on, right?
I: So if you say npm outdated, it's going to show you both of those things — but actually only one of them will be removed when you do npm update. Now, I think that's probably fine: the actual thing that you probably care about is, am I using anything that is, you know, maybe no longer supported, or whatever — and we can answer that question. But the bigger question of "well, what will npm update do?" — it's like: run it; you know, run it with --dry-run and see, and that will tell you. And now that we do have --dry-run for all the reifying commands — all the commands that modify the tree — it should be fine to just run npm update --dry-run, if that's what you want to know.
H: For what it's worth, this actually tackles a little bit of some of the oddities I was seeing. So when I brought it up in Slack, one of the answers was: yeah, well, npm outdated should tell you everything that would change with npm update — and I was like, oh, that's just not really what I was expecting, right. And what you just said —
H: Right, and it's like — what's useless? I just want to know: does my package.json need to be updated a little bit, right? And so, if we changed npm update --dry-run to be "this is going to tell you what update would do" — versus what npm outdated does, which doesn't tell you that it's missing; it just says, yeah, your package.json says "I want 1.0" and there's a 2.0 available, right — that would actually resolve the main reason why I moved away from using npm outdated in the tooling I was writing. Which is just: I want to be able to run it on — I want to be able to fetch a package.json by itself and just say, tell me, what is an update I'd like to apply here, right? So the tool that I have right now is actually just pulling it from the files API in the Stash repo, right — I don't even have to look; I just fetch the package.json, and now I'm able to run my version of outdated just by fetching the packuments, right. And so that's where, if outdated had the ability to just run on a fresh clone of a git repo, you know, it'd be a lot more valuable as a reporting tool than it is today — as long as we have an outdated that's, like, a dry run telling us the other version, right: what would outdated do, right?
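The reporting use case described here — running an "outdated" check against just a fetched package.json, with no node_modules on disk — can be sketched roughly like this. The version handling is deliberately naive (plain x.y.z strings, no real semver range logic), and `latestVersions` stands in for data a real tool would fetch as packuments; this illustrates the shape of the check, not npm's implementation:

```javascript
// Naive version comparison: strip a leading ^ or ~ and compare x.y.z
// numerically. Real tooling would use a proper semver library instead.
function parseVersion(v) {
  return v.replace(/^[\^~]/, '').split('.').map(Number);
}

// True when `latest` is strictly newer than the declared version.
function isOutdated(declared, latest) {
  const a = parseVersion(declared);
  const b = parseVersion(latest);
  for (let i = 0; i < 3; i++) {
    if (b[i] > a[i]) return true;
    if (b[i] < a[i]) return false;
  }
  return false; // identical
}

// Given only a package.json object and a name -> latest-version map,
// report which top-level deps have something newer available.
function outdatedReport(pkg, latestVersions) {
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  return Object.entries(deps)
    .filter(([name, range]) => name in latestVersions && isOutdated(range, latestVersions[name]))
    .map(([name, range]) => ({ name, wanted: range, latest: latestVersions[name] }));
}
```

Run against a package.json pulled straight from a git host's files API, this answers "does my package.json need updating?" without ever building a tree — the property being asked for above.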
I: So I think the change that needs to be made here for npm outdated is: it needs to build the ideal tree in such a way that it will, you know, look in your package-lock.json; if that's not present, it'll look at what's installed; and if that's not present, then it will build up the tree from first principles, right — based on your dependencies — and then loop over it.
C: Okay, so let me introduce the concepts first. The idea is — it all started with me insisting on running npm audit as a step in CI. So, initially, while finding something was still pretty rare, it was okay — we actually started with the predecessor of npm audit, but let's leave that. So at some point we noticed that every time we ran it, there was a totally irrelevant ReDoS somewhere in our dev dependencies — and that's where it started.
C: So the idea here is to allow developing a culture of caring about dependency security in a team, by putting this in CI and flagging stuff, etc., while still being productive. So we need to be able to keep the tooling there, but skip it for 24 hours if it's the last day of the sprint, right, and no one wants to deal with it now. So that's one thing.
C: The second thing is to ignore stuff in general. So we want to ignore things that we're certain will not affect us in any way, or that don't have a resolution and we're fine running with those. And there are some more in-depth cases here which I'm not gonna go into — that rabbit hole — because this is the most important feature here: the key thing I wanted to propose is that the whole feature of ignoring things is not based just on the advisory ID — it's ignoring a specific installation of a specific package.
C: So imagine I have a package that's my dev dependency, and I obviously want to ignore a denial-of-service on that. So I do that — but then the same package is a fourth-level dependency of something I'm using in production, and if that shows up, I don't want to ignore it; I want to be told about it again. So whenever I install the same vulnerability in a different location, I need to be alerted about it again, even though I already ignored it. So that's one. The other is —
C: Obviously, the same advisory could show up in different packages, or even the usage of the same package can change — so I install another dependency, and a vulnerable version gets shifted up by deduplication. So I need to know that this thing that I already ignored is now used from a different place in the tree, and that means it changes how it makes my app vulnerable.
C: So, when you run npm audit and there is an audit-resolve.json file with some decisions that were made, npm audit would take that into account when deciding whether the exit code should be 0 or not — and that's the basic feature here. And then, obviously, the output is going to change, because it's going to say "you have this vulnerability that you ignored" instead of saying "you have this vulnerability", right.
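To make that concrete: a decisions file along these lines might key each entry by advisory ID plus the install path, so the same advisory showing up in a new location is surfaced again, and an expiry can cover the "skip it for 24 hours" case. The field names and key format below are illustrative, not the published audit-resolve schema:

```json
{
  "version": 1,
  "decisions": {
    "118|example-cli>minimist": {
      "decision": "ignore",
      "madeAt": 1590000000000,
      "expiresAt": 1590086400000
    },
    "577|some-dev-tool": {
      "decision": "fix",
      "madeAt": 1590000000000
    }
  }
}
```

Because the key includes the path, installing the same vulnerable package at a different place in the tree produces a key that has no recorded decision — so the alert fires again, exactly as described above.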
C: But overall, it's intended to be something that you embed in npm — or other package managers — and something that you use to build other tools, for editing the audit-resolve file, on top of that. So it has a JSON schema for validation; it has some tooling for migrating between versions of the schema, because I already made some breaking decisions initially, even before the RFC. So that's already there, and I just want to improve the API a little bit. Yeah — that's the overview of what's still valid from the RFC, yeah.
A: That's awesome. I have to commend and applaud you — you know, this has been open for a long time, and you're going out and actually creating the tool, showing the use case, and ensuring that there are folks that would be using this in the wild — it's incredible. So, you know, it validates the assumption that this is something that we want and need, and I think, anecdotally, we can all see a use for this.
C: So this is developed, and I would like to suggest keeping it separate for now, and working out a different — maybe just a simpler — pattern for helping people with editing the file. Because, overall, the schema of the file is designed in a way that it should be human-readable and human-editable, with the only fix I need to make being the formats where it uses timestamps — I believe it should support a wider range of notations for a date and time. But other than that —
C: It's fairly doable for a human to maintain this file, even manually. And I believe the next step would be for npm to have a resolve command that would produce a template for the file, with everything coming from npm audit that's not fixed. So you run the resolve, and it fixes everything it can and marks it as fixed, and everything that wasn't fixed is marked as "decision: none" — and then you can go in and edit the decisions before committing the file, with the other benefit of this file being pretty-printed.
H: So I think this is really interesting, and I'm surprised I haven't actually seen this RFC yet — I must have just looked over it in the list. Because I think this would be another very interesting topic to talk about with the package maintenance working group, and see if there are any ways — so, for example, if you have a specification for a resolve file: you know, we're working on this package support specification.
H: It seems like that would be something we would maybe want to get involved with and help give feedback around, and help promote once, you know, it's stabilized a bit — especially if it looks like npm has some interest in, you know, taking up implementing it inside of npm. Because these kinds of things are going to be, you know, widely impactful, and if package maintainers have the ability to, you know, work with their communities and get them on board with, like, "oh yeah, we have a new —"
H: — you know, communicate that to the community. I'd be interested, at least, in, you know, reading through a detailed specification for what it looks like to write this file. And, like I said, the package maintenance working group seems like a good place to have a discussion around that, if you're interested in, you know, broadening the conversation and getting feedback from a wider array of community members. So — slightly unrelated to, you know, our RFC call, but this sounds really interesting to me. Yeah.
I: I have a similar idea. One thing that comes to mind here is — you know, we've sort of long punted on this idea of, like: I'm using handlebars, and it has a ReDoS, you know — or the mkdirp minimist thing is a great example, right. Like, if you're passing user data to your command-line mkdirp, then, oh, it can break mkdirp — like, nobody cares.
I: If you're passing user data to the command line there, then you're already owned, right. And so what would be nice is a way for me, as the mkdirp maintainer, to sort of add an advisory thing that says: yes, there's this audit warning, but you probably don't have to worry about it, and here's why. And then, when we're doing those resolutions, we could actually look at those upstream decisions and say, like, you know: hey, do you want it?
I: We have run into a bunch with the dependencies of nyc and tap, where it's just — it's not a problem. It's only a problem if you're running it in a server, and you're probably not loading a test framework in your production server anyway — like, it's not a big deal. But we have to go bump versions on, like, a hundred different things because of these. So being able to just sort of mark them, you know, in that kind of — and serve it in that kind of format.
E: To expand on that slightly, I think there's one simple, low-barrier-to-entry thing that could be limited but useful: letting the maintainer mark whether an advisory is a production audit concern or a dev-dependency concern. And then, if it's a production concern, it filters out of the alert when it's installed via dev deps — but, sorry, yeah.
I: I mean, we definitely look at that. There have been a fair number of issues that we've encountered on the registry, and most of them were actual malware, right — so we end up pulling it from the registry anyway. But being able to own a dev machine is actually really valuable; it's arguably more valuable than owning a production machine, and you get all of them. So I definitely —
F: — means that the security researcher got some money; that's all it means. So — I totally agree with you. Like, I don't want — like, right now, because of npx, actually, all of my repos are only checking audit against production deps, because I had to block all the dev-dep ones, because my audit-checking tool uses npx — and npx currently has a CVE on it, right. It's a CVE.
F: That's why we need to be able to target specific things, both as an end user but also as a package author. And, like, I think there just has to be a decision between: do we want to undermine end users' entire confidence in the CVE system, or do we want to risk letting package authors be slightly more malicious and hide real CVEs? I should go through the response event, and then —
I: The thing that we've sort of hesitated to do is mark a vulnerability as only affecting production deps, and ignore it if it is found in a dev dep. And I think there is absolutely a balancing act — Jordan, your background is extremely appropriate while we're discussing the subject. It's, you know — it's difficult, and there are not just conflicting motivations but perversely conflicting motivations, in a lot of cases.
C: So I want to say — I'm not sure if the entirety of the conversation is something we would want to do around the audit resolutions. Because, for the package maintainer to be able to sort of defend their package — important air quotes — it's more of a thing that we should add to the audit itself. So the audit data that's being checked should pull in some information from the package maintainer, which could be put in package.json — famously, with other unexpected stuff — or even edited through —
C: So this would make the same file serve two purposes, and I'm not sure I feel comfortable having that happen in my apps. But I would appreciate the maintainer being able to defend themselves and say: hey, this is a CLI tool, so if it has a ReDoS you shouldn't care. And it would be helpful to display that when reviewing audits in more detail, yeah.
E: I just wanted to say — and I think it echoes what was already said — but I get nervous about adding more and more file formats. And, like, if we do end up adding an additional format, I would prefer it to be something more broadly generic — so, like, an audit.json, or adding this to package.json, or having the option to parse it from packages — instead of audit-resolve. But I do get nervous about, like — okay.
H: I also agree, but this is why I think this needs a bigger conversation. Because there's a bunch of related things like this where, if we just solve for audit, we're not really solving for the other ones that have the exact same needs. So — I mentioned in the chat — Express locks its external dependencies. And we say "external" meaning something not managed by the Express team; we don't want anything that's not managed by the Express team.
H: We want that to never have a caret or a tilde or any semver range specifier, right — but we have no way to signal that intent; it's just whether it has it or not. So all of the automated dependency-update tools just don't work for us, because they're always going to be, like, complaining and opening PRs that we don't want, right.
H
So,
like
there's
a
bigger
conversation
about
what
is
my
intent
by
specifying
this
version
right
and
an
audit,
it's
exact
same
thing,
I'm,
saying
I'm,
specifying
this
version,
and
my
intent
is
also
by
doing
that.
Ignore
CVEs
related
to
that
version.
Right,
like
there's
a
there's,
a
general
intent
that
we're
missing
by
just
having
a
single
semver
range
specifier.
That
means
that
all
of
these
automated
tools
in
the
ecosystem
aren't
able
to
do
their
full
job.
I.
Think
moving.
H
This
conversation
is
something
like
the
package
maidens
working
group
spending
six
months,
flushing
out
what
intense
do
package
maintainer
x'
have
that
are
not
represented
today
and
how
could
we
come
up
with
a
way
to
help?
Do
that
and
then
build
experimental
tooling
around
that
that
we
would
then
look
at
later
coming
back
into
NPM
proper,
and
maybe
the
start
is
your
NPM
audit
resolve
right
and
like,
and
then
we
could
get
a
group
of
people
around
building
out
this
experimental
tooling,
but
at
a
broader
scale
right.
Not
just
thinking
about.
H: — "oh, we know we have audit-resolve." Like, I can tell you Express would be very interested if we could have a way to signal to automated dependency-update tools: no, no, ignore that when you're looking at what to update — and then we would use that, right. So that would be a way to really up-level this conversation, and flesh it out before npm makes any decision on, you know, how do we solve this one very specific problem, which is audit complaining too much.
A: Yeah, that'd be great — and to bring that discussion there, as there have been other discussions about supports. And, tangentially, the next item on the agenda actually is overrides — funny enough, I feel like this is very, you know, similar in terms of the approach, and there's, I think, a little bit of overlap in these discussions. Isaac, would you like to speak to that? The RFC itself is 129, yeah.
I: So I did actually just update this, at long last, from our prior discussions several meetings ago. So, essentially, the simplest recent kind of rev on this RFC simplifies the way that we do nested override object rule sets, as well as string overrides. So, essentially, the first rule to match is always the last one to match, right — we don't keep looking after we hit one that creates an override on that particular edge.
I: From there, the way that you can, you know, both override a particular resolution as well as provide a rule set for its child dependencies is: you would use the "." key to apply to the package — the resolution step currently being considered.
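A small example of the two rule shapes being described, assuming the syntax from the draft RFC (the package names and versions are just placeholders): a string override pins a package everywhere, while a nested rule set uses the "." key for the package itself and further keys for its child dependencies:

```json
{
  "overrides": {
    "minimist": "1.2.5",
    "tap": {
      ".": "14.10.7",
      "nyc": "15.1.0"
    }
  }
}
```

Under the first-match-wins rule just described, once the `"tap"` rule set matches an edge, resolution stops there — the nested keys govern tap's subtree and no later rules are consulted for that edge.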
I: I suspect that there may be one kind of pathological case here. If you have meta-deps that are being overridden in some rather weird ways, I think there may be cases where you can get into a sort of infinite loop of duplication within your node_modules folder — but I haven't been able to solve for that just yet, yeah. But otherwise, I think most of the bikeshedding is done.
I: The one issue that does need to be sussed out — and I'm not sure how to do it without having running code — is whether or not overrides apply to shrink-wrap and bundle dependencies. I guess the contract that npm provides with bundle dependencies is: whatever you bundle and list as a bundle dependency — that is, I guarantee it, what will be installed; it'll be installed at that path.
I: Jordan has found, you know, the specific limits of that contract — it doesn't actually apply to the root project, but, you know, whatever. Basically, when you pack your package, whatever is a bundle dependency will be included, as-is, in the package artifact; and then, when it's installed as a dependency, everything that's bundled will be left alone and respected. So, yeah — should an override apply? It's certainly going to be easier to say no. Personal feelings — no personal thoughts and feelings.
I: Right — but, on the other hand, like, if I want to override the version of nyc that's installed with tap, and it's a bundle dependency — like, I should be able to; I feel like I should be able to do that. And it sort of depends on which side of the installer/consumer divide you're sitting on, whether or not you want to do that, personally.
I: But also, it's about who has more understanding of what's going on and — and, you know, what's right for your app. Like, if I'm bundling something, it's probably for a good reason, and it might not even be the thing that's on the registry — it may be my own custom fork of it or something; so, you know, I might be floating a patch to fix something. The other concern is — so, that's bundle dependencies; shrink-wraps are kind of another thing, where we have a similar contract that says: whatever you lock in your npm-shrinkwrap.json file, that is the tree that will be resolved from that point in the package dependency graph. So, can I override that? Now, that's a lot easier to do than bundle dependencies, but it is still easier to just say: it's in the shrink-wrap —
I: — we're gonna leave it alone. Because, essentially, the way that we do this in Arborist today is: we do a loadVirtual on that particular tree, and then just root it in the thing that has the shrink-wrap and say, all right, that's the tree — I don't even have to look any further or do any kind of resolution. So if we can do overrides on shrink-wrap deps, now, in Arborist, we have to walk that tree that we unpacked, apply any of those resolutions, and then, essentially, do a whole ideal-tree build on it. So my strong suspicion is we're gonna say this is just too hard, and say we're good — you know, overrides don't apply to shrink-wrap and bundled dependencies.
A: Yeah — quick question on that, and just to be mindful of time, because I know Orta's here again, and I apologize: we keep leaving the types discussion to — it seems to always end up towards the end here. But, in terms of the questions that you're sort of formalizing about overrides and those edges — could we just put those down in a thread, and kind of do that async, or flesh those out async? Yeah. So —
A: So, ideally, you can get some feedback in time for the next meeting, maybe, on that specifically. It'd be good, going forward, if we have questions like that, to open up polls or something — like, if there's a question with immediate, multiple-choice answers or outcomes, maybe we can just, like, see what the community — try to source feedback en masse.
I: And just to be clear — like, if 99% of users all say yes, it should apply to bundled deps and shrink-wraps, but it's really, really hard, or there's some, you know, horrible edge case that we can see — like, we might still not do it. But if there's sort of overwhelming support for it, at least that'll give us some kind of direction about, like, how hard we should look. Yeah.
A: The only thing my mind goes to — or the only reason I bring that up — is just because asking those questions in the RFC itself isn't as actionable; the commentary usually ends up being mis-grouped, or a smaller subset of the community. Whereas I could potentially make it something easier that's super actionable, like a poll or something like that — an issue thread, which we have support for with Probot polls — so it might be an option, right. Let's do it — cool, yeah. Or — I apologize.
D: So, most of the time, people just — it just automatically looks to see whether there are any .d.ts files, or Flow files, inside the tarball, and then just adds it as part of the registry metadata automatically. And then there's sort of the opposing idea — so we have to kind of decide between these two ideas — which is: instead, we use npm publish to sort of push people in the direction of using the fields that are already supported by TypeScript, and could be supported —
I: Go ahead — I can help you with this. The — yes. My decision is: yes, we should put this metadata in package.json. And I'm stating this very definitively, but, you know, if there's pushback, I don't actually feel that definitive about it. We should put it in package.json; that should be the sort of authoritative place where it lives.
I: When we do the manifest — you know, when we read the package.json for a manifest — if it does not have a types field, but there is, you know, an index.d.ts, then we should guess at what the types field should be, if it's possible to do it automatically. We can do that client-side, and we can also override it based on, you know, whatever — if there's something specifically in there. We do this already for a handful of fields; we default, like, the — I forget.
I: Now — we default, like, the gitHead and the bugs field and a bunch of others, yeah, based on what's in the folder. So I would say: yeah, let's make the spec be, like, "what do you put in package.json if you have types", and then the next step is we'll update the read-package-json package to sort of automatically shove that into the manifest that gets published.
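The "what do you put in package.json if you have types" step amounts to the field TypeScript already reads today, e.g.:

```json
{
  "name": "example-lib",
  "version": "1.0.0",
  "main": "./index.js",
  "types": "./index.d.ts"
}
```

The publish-time guess Isaac describes would then just fill in `types` when a declaration file like `index.d.ts` is present in the folder and the field is absent — the same defaulting pattern already used for `gitHead` and `bugs`.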
I: That's definitely the path of least resistance. It's already 100% supported by the registry — well, I guess maybe not quite a hundred percent: we probably also want to add it to the corgi, the minified packument documents. But again, that's something really trivial and easy. The hard part is deciding what the data is.
D: Yeah, not blocked — and I bet I probably have to make a small amendment, just to sort of make it more definitive; right now it's, like, a question — "this is what we could do". But, basically, from that — if you give it a read-over and that seems good, I'm happy to make npm publish do that in the CLI. I don't know — anything else?
I: Go ahead — I don't think we'll be there by next week, but just calling out that the discussion — like, this discussion kind of around audit, and, you know, not just audit-resolve but also the data set for how you express whether or not something is a valid vulnerability in a particular use case, and, like, what do we do with that data, where does it come from, how do maintainers make their automated tooling a little smarter — that feels like it's very deep-dive worthy. I —