From YouTube: Open RFC Meeting - Wednesday, October 6th 2021
Description
In our ongoing efforts to better listen to and collaborate with the community, we run an Open RFC call that helps to move conversations and initiatives forward. The focus should be on existing issues/PRs in this repository but can also touch on community/ecosystem-wide subjects.
A: And we're live on YouTube. Welcome, everyone, to another npm RFC call. Today's date is Wednesday, October 6th, 2021. If you haven't already, feel free to add yourself to the attendees list in the meeting notes doc; that's the HackMD doc, which everyone should have shared access to, whether you're logged in or not. I've just copied and pasted it, spammed it in the chat as usual. And yes, Fritzy, that is not a bug, that is just me spamming chat as usual.
A: We will be following along in the agenda that was posted in issue number 470. It's a very small meeting agenda today, only a few items, so we'll have lots of time at the end to bring up any other topics that folks want to speak to. Just a quick reminder that these calls, and all comms on the RFCs repo and all CLI repos, are covered under the Code of Conduct. We ask folks to please be kind and thoughtful, especially as others are speaking. Please raise your hand and we'll call on you, and just be mindful of the fact that these calls are recorded and live. I appreciate everybody being polite. I want to give some time and space for any announcements that folks might have; if you have any, feel free to bring them up now.
A: If there's nothing right now, we can move into the agenda portion of the call, starting with RFC number 466. This was presented, or created, by Jordan; this is the "npm publish if needed" one.
A: Moving on then to number 463, this is the multiple-app monorepo support. Mystery Command, are you on here, Matt?
D: Sure, my name is Matt Hayes. I am in Chicago, Illinois, and I'm Mystery Command on GitHub. It's my first time at an npm open RFC meeting.
D: I'm not sure how controversial to expect this RFC to be, but it covers a use case that a couple of colleagues and I have run into, specifically in our case with building monorepos that contain multiple lambdas that share code between them.
D: I think the ideal case that we came up with was that each lambda package, or workspace, in the monorepo would be able to basically npm install or npm ci its dependencies, and then we could zip that folder up and, you know, S3 it off to AWS.
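The workflow Matt describes here could be sketched roughly as follows; the workspace names, bucket, and paths are hypothetical, and this assumes each lambda folder is self-contained after its install:

```shell
# Install only production deps for one lambda workspace
# (hypothetical monorepo layout: packages/lambda-a, packages/b)
cd packages/lambda-a
npm install --only=prod

# Zip the folder, including its node_modules, and upload to S3
cd ..
zip -r lambda-a.zip lambda-a
aws s3 cp lambda-a.zip s3://my-deploy-bucket/lambda-a.zip
```

The sticking point discussed below is the first step: with hoisted workspaces, the lambda folder's own node_modules is mostly empty.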
D: The problem with publishing it is just that it seems like a bunch of extra work, plus some weird CI cases where, for example, a change that affects both a shared dependency and one lambda requires that lambda to use the new version of the package. Which means, if we want to put it into a staging environment, we have to publish pre-release versions of both of those things and make sure that everything happens in the right order.
D: There's a lot of orchestration, where really it's just right there; I would just want to use it. I essentially want to call npm pack on the shared library and then give the result of calling pack to the lambda app, so that it can have it as a kind of local dependency at the app level.
B: I've talked many times about workspaces, and the ideal way I think workspaces should work is that there is a node_modules folder next to each package.json that contains only the dependency graph relevant for that package.json, and that things that need to be shared (peer dependencies, sibling workspace packages, and so on) are shared through a mechanism that is not hoisting to the root node_modules. I haven't thought about it in depth as it relates to Matt's RFC, but at first blush it seems like that would completely solve this use case, as well as many others.
C: I think I agree with Jordan, but just to kind of clarify the use case here: I want some kind of a command that will create a packed tarball of a workspace that has all of that workspace's dependencies included within it, right?
C: So one way you can imagine that: the workspace declares all of its dependencies and, let's say, adds all of them to bundleDependencies, so that even if a dep is a symlink, packing will include those things inside of that tarball. That doesn't work today, because there's nothing in its node_modules folder, right? Everything has been hoisted up to the top level.
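The bundleDependencies idea Isaac describes would look something like this in a workspace's package.json (package names here are hypothetical); npm pack includes anything listed under bundleDependencies inside the tarball's node_modules, which is exactly what hoisting defeats:

```json
{
  "name": "lambda-a",
  "version": "1.0.0",
  "dependencies": {
    "package-b": "^1.0.0",
    "lodash": "^4.17.21"
  },
  "bundleDependencies": ["package-b", "lodash"]
}
```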
C: If it was this kind of future world, where anything that's not a peer dep and not a sibling workspace would be placed inside of its node_modules folder, and anything that is a peer dep or a sibling workspace would be, you know, linked in some way so that all the workspaces use the same one, then npm pack could just work like it does today. We wouldn't actually need to make any changes there; we'd just follow the folder structure.

C: Am I getting the use case here more or less accurately?
D: Yeah, I think so. The sort of gotcha that occurs to me, while you were redescribing it, is that, say lambda A depends on package B: we want to call pack on package B, and not just, you know, copy the folder structure over.
D: If that makes sense. It's like the build script that we have in the apps, or in the lambdas that we're building, does npm install --only=prod, and then, because we're using TypeScript, we actually have a bunch of aliases to make it hop around and sort of bundle the lambda, which seems also kind of unnecessary. But there should be a way to specify a prepare step in between, you know, building the dependency graph. Does that make sense?
C: Okay, so that would be, again, not the way it works today, but hypothetically lambda-a/node_modules/b could be a symlink to packages/b, and then, when you run pack, it's a dependency on a sibling workspace from the point of view of npm.
D: Yeah, but could package B also be packed in as a part of that process? So that when I call npm pack on lambda A, and package B is a folder full of TypeScript files with some pack script or prepack script or whatever specified that actually builds it, it would honor things like an .npmignore file in that package, so that it wouldn't include the source and stuff like that.
E: I remember that; I thought we finished it. Let me go look.
C: Okay, it's possible I just figured out how to do it and told you, and then didn't follow up. But yeah, I definitely remember us pairing on that, so that should be something you can do. Basically, what I'm saying is you can do that with the prepare script, but you would have to run the prepare script within packages/b first; it's not going to do that automatically.
C: When you run npm pack lambda-a now, you could have a prepare script in lambda-a that does exactly that: it cds into the packages/b folder and runs its prepare script, so that everything is built and ready to go and the ignore files are all set up properly.
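The workaround Isaac sketches, a prepare script in lambda-a that builds the sibling workspace first, could look like this; the relative path assumes the hypothetical packages/lambda-a and packages/b layout used above:

```json
{
  "name": "lambda-a",
  "scripts": {
    "prepare": "cd ../b && npm run prepare"
  }
}
```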
B: So what I'm hearing, if you don't mind me cutting in, is two RFCs to solve one use case, right? Both are kind of needed to make it super ergonomic. The alternative workspaces layout is, to me, the big one, and then there's the ergonomics of, for any deps that are shared, maybe doing the extra step of reaching in and running their prepare scripts automatically, without requiring extra steps, because that way you don't have to hard-code which of your deps are internal.
C: Okay, so Gar reports that we did make the change to npm-packlist, but we did not pull it into the CLI yet, so that's going to go out soon. Basically, what that does is it will respect the ignore files within linked bundled dependencies that get reified into the packed tarball.
C: A subsequent thing, to run the prepare scripts of symlinked bundled deps when running npm pack, yeah, that seems like a decent RFC. It's a little bit trickier than just respecting the ignore files, because npm-packlist has no concept of running lifecycle scripts; that's all done outside of it. npm-packlist is just determining which files we include and which ones we don't. So yeah, I think so.
B: Any bundled dependency, right? Like, I would assume it would apply to any category of dependency that is symlinked: you would want to automatically reach in and prepare it. Well, why? Because if it's symlinked, it has to be local, which means that it's not coming from a registry, which means that the registry preparation steps probably haven't been run as recently as needed.
C: The real trick here, though, is getting that kind of shared story (you know, what is shared, what is linked, what is isolated) ironed out, and moving to reification, so that we're putting those things under workspace/node_modules rather than in the root node_modules.
D: Yeah, but there's another problem with that too. Well, I mean, I guess if adding everything that's in dependencies to bundleDependencies and then calling npm pack will create the same local node_modules structure, then I guess that would work. Right now, what we're doing is calling npm install --only=prod.
F: That sounds real ick; you shouldn't have to be doing that. That sounds like something that should be solved by the CLI.
D: So the other thing that we tried was doing npm install while cd'd into the lambda to produce a package-lock, and then running npm shrinkwrap, because that seems like the recommended lock file to ship with your deployable, and then running npm ci from inside the lambda, which got us some of the way there too.
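The shrinkwrap-based flow Matt describes is roughly the following, again with hypothetical paths:

```shell
cd packages/lambda-a
npm install      # produces a package-lock.json for this folder
npm shrinkwrap   # renames it npm-shrinkwrap.json, which is included when shipped
npm ci           # later (e.g. in CI), install exactly what the shrinkwrap pins
```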
C: It wouldn't run the prepare script for you yet, as we're talking about, and there isn't this nice handling of it within workspaces, because B is going to be linked up at the top level rather than directly into the thing that depends on it. So there are two changes that we would need to make to really get rid of this problem for you.
C: But that could be kind of a workaround, I guess, in the short term. I will write it out here.
C: As a solution, I think that is a good approach. It's a little bit like the isolated mode, but without going all the way there.
B: Right, well, it overlaps somewhat with the isolation mode, in the sense that there are deps that are shared, you know, via not hoisting, but it doesn't go all the way there. Correct; it limits itself to the completely non-controversial and almost universally expected things that are shared.
B: You'd be avoiding extra disk usage from anything that's shared, because it would only be on disk once instead of more than once. But it allows for more disk usage, just like isolation mode and npm's default mode do when the versions are different; it just doesn't seek to minimize disk usage on the non-shared dependencies.
B: Yeah, and if that ended up being something people wanted to fix, in a way that also didn't go all the way to isolation mode, we could probably add some sort of in-between step later, but I don't anticipate that being a big issue. I have hundreds of fully installed node_modules folders on my disks, like hundreds and hundreds, and it's a couple of gigabytes; it's just not that big a problem.
C: The other way to go about that could be to just make it a top-level dep, but we're getting pretty far off topic here. So, cool. Thanks; let's move on.
A: Yeah, so, action items from our end to facilitate this: it sounded like there was one, that we will actually ship the improvements to npm-packlist that are just sitting there and should get pulled in, I think, next. But then it sounds like we have another ask here. What's the other action item on our side here, Isaac?
A: Just wanted to circle back, Matt. Do you feel like these action items, well, specifically the running of prepare scripts for linked bundled deps, do you feel like that solves the use case that you're proposing?
A: Thank you, I think that works. Thank you, and thanks for joining, calling in, and proposing this. Yeah, we definitely try to take in as many different use cases as possible and try to meet all the needs. Let's move on then to the last item that we have here. We have about a half hour left, so we can go a little bit longer on this, and I know Jordan has some thoughts. This is issue 445.
A: This was essentially us announcing npm 8, which we've got queued up to actually release tomorrow, as of right now. The primary breaking changes for npm 8 are dropping node 10 support, essentially changing our engines matrix of node versions we support, and also updating node-gyp and shipping major releases to a whole bunch of deps that are going to coincide with this change. So I just wanted to bring this up, because it is sort of this imminent release that we're trying to cut. With that in mind, Jordan, I know you had some initial feedback for this, which was to try to map out what the historical, you know...
B: Well, so there are a few things. Upgrading more than one major of anything at a time is a risk, and it's generally something best avoided. If we tell people to hop directly from six to eight, that's not often going to work out well; they may be able to do it, but in my experience it's always better to move one major at a time.
B: So that's the first thing: anyone who can't upgrade from six to seven is increasing their risk, even if they're able to upgrade from six to eight. The second thing is, it's totally fine to aggressively add major versions, but it becomes a problem for all sorts of things, CVEs and also bug fixes and so on.
B: That is, if backports are not policy. To me, the things that go well together are: "I don't do backports, but I also don't do breaking changes", or "I do breaking changes whenever needed, but I also do backports whenever reasonable". And it sounds like npm is intending, unlike what they did from six to seven, to have a different policy for seven to eight. From six to seven, six and seven were both maintained for a long time, and correctly so.
B: It was a decision that was weighed for a while, I'm assuming, and made very cautiously, and that was a correct amount of weight to put on it. To start closing all those issues, saying "we don't support npm 6 anymore", like, it was a good thing; I appreciate that. Seven and eight are mostly identical, but it still seems like a good thing for eight to come out and seven to still be supported for a little while. If there's anyone still on node 10, it's not because they're lazy.
B: It's because it's hard for them to upgrade beyond node 10 for some reason. My guess is that right now it would probably be things like compiled dependencies that don't work above node 10, things like that. So I see no problem with cutting npm 8 immediately, without npm 7 getting all those bug fixes, if those bug fixes can eventually land in both npm 7 and 8; I think that's fine, that would be no big deal. But if npm 8 launches now, and the policy is basically that npm 7 will never get those fixes, I think that was the whole point of bringing this up last week: I didn't want that exact situation to be created. We shouldn't have a situation where the advice is...
A: Yeah, so I guess we have a couple of things to dig in on there. One, the support policy isn't really changing at all, and I've referenced it here. Our support policy still remains the same, in that we are going to support the versions of npm that are in stable versions of node that are currently actively maintained. That's why we do actually still ship security updates to npm 6, but we're not actively maintaining it for other kinds of bug reports, etc., and we're not going to do any future development on it.
A: Our hope, and essentially the sign-off we've got from the node project, is that npm 8 would land in a minor to the current node 16. So our understanding is that npm 8 is essentially going to trump the usage and use case for npm 7 in any maintained version of node.
A: Node 16; are they going to backport it to 14 and 12 as well? No. So that's why we have to continue to maintain six, and why seven essentially becomes, in what is very similar to node's release cycle, almost what you could consider experimental, and this is actually where we're trying to move towards.
A: Does that kind of give you some insight into the thinking behind this, and also how we see it in terms of usage? Today we're only being distributed in node 14 or 16.
B: I see the argument, but I think this is sort of an accident of the fact that, at the moment when npm 8 is about to come out, npm 7 is only in node 16, the node project is willing to pull npm 8 into 16, and the changes between 7 and 8 are essentially zero except for dropping an engine. Is that going to be the case moving forward? Like, when nine comes out, are eight and nine going to be identical, except that nine drops an engine?
B: I guess naturally it would be at least that, because of the way versions work, but at some point we're going to make breaking changes that are not just dropping an engine, like breaking config, for example, and when npm does that, this scenario, I think, isn't really going to apply. So if this is just a special case and not likely to happen again, then okay, but I don't know, it just seems like an interesting...
B: It seems like a strange choice to make, unless those bug fixes to seven are more difficult when supporting node 10, then, like...
A: To backport, to continue to backport, let's say, fixes, and continue to maintain multiple release lines: we just don't have the capacity to do that, and that's never been the support policy. Since, at least, I've started working on the project, it's never been that we were going to maintain multiple major release lines, and we actually made a concession to support six recently; that was a special condition when we decided to ship seven.
A: So the special case we're in today is that we actually are supporting essentially two major release lines, and yes, seven will go away, because there will be a net-new major that exists that essentially trumps it. But Gar, go ahead; I apologize.
E: Yeah, this is a fundamentally different semver major than six to seven. Six to seven was lots of changes; this is the engines update. And the idea is that this is something we're going to be a little better about as node moves forward.
E: This is something we'll do; what that looks like with node 18, I don't know. Maybe we'll be able to time it a little better, so that npm 9 or 10 or whatever, as we cut it, happens shortly after 18 comes out or something. But yeah, as of right now, this is a unique situation, because if 9 comes out, it's not going to land in node 16, and so we would still have to support 8, because it's in 16, like we do now.
E: So it's not something we were wanting to plan to do, because it is going to be onerous, given the fact that seven's really eight except for an engines drop. I mean, if you're on node 10, you're going to have to get off that to get onto the newest npm, and that's just kind of going to be the reality of it.
C: Yeah, I had forgotten about the fact that we do want to shuffle around a bunch of the internal API. That's why we're putting that curtain up, to not expose that internal API anymore.
B: Well, I'm sure there are some bugs for which that's true, right, and I understand the practicality argument there, but I also suspect that there are some other bugs that could be fixed inside arborist and will not be impacted by the internal npm API refactoring. If the bugs in that list that was collected have been explored and in fact triaged to the point where the difficulty of their fix is understood, and none of them are, as I'm thinking, easy ones that could be done today...
B: If somebody had time, right, then great, there's no problem. But my guess is that the time hasn't been put in, in the last six and a half days, to fully triage all of those issues and determine, in fact, that they will be difficult to do in npm 7 after eight is cut. You know what I mean?
B: Right, and again, my expectation is that most of those bugs for eight will be fixable in ways that aren't going to be dramatically affected by npm's internal API refactoring, and so would be really easy to backport. And it's a fine argument to say "it's fine to backport them, but we're not going to do it, but we'll take the PR"; that's a different...
A: Position, yes, and that's the hitch, I think; that's kind of what I would say. And the drift already becomes unmaintainable, based on the fact that we've updated all the dependencies, and any of those fixes that would go into those dependencies essentially wouldn't be able to be backported unless we also then backported the fixes into all of those deps, right? So it becomes unmanageable at some point to do that work, even for what might be a relatively small fix; that isn't wrong.
A: Yeah, so when we say small, and we say easy, it is time, and for sure there's lots of work to do; we have a very large backlog. So, yeah, Isaac, I see your hand's up.
C: Yeah, I mean, I think that, obviously, if there is some extenuating circumstance, if it is extremely easy to do and it doesn't impose a huge amount of cost and it makes our users' lives better, of course we're going to do it. If it's a security vulnerability that's affecting some major number of users that can't upgrade, fine, sure. The real discussion here, and I think we all sort of understand this, is that a line is going to be drawn somewhere.
C: There will be some changes where, yes, we could fix that in seven, but we're not going to ship a new version of seven just for this fix, and the real question is just kind of where that line is.
C: Doing that work will keep getting more and more expensive, even for things that are not affected by the internal shuffling of npm's API surface. We don't need to explore in this call all the different reasons why that may or may not be feasible; we're just going to deal with it as we go.
B: For the API, I don't really think so. The API has been explicitly not tied to semver since, like, npm 3, which broke me, and I had to stop relying on it many years ago. But you have to do it for the engine change anyway; I get that.
C: Yeah, yeah. It's one thing to say "well, we don't really have great tests of that, and it's not well documented, and it kind of moves and we don't really notice"; it's another thing entirely to say "no, require('npm') throws now", right? One of them you could hand-wave and just say "oopsie, I dropped the plate"; the other one is like, "no, there are no plates here anymore."
E: I think we should expect those to be sprinkled in, and to have the same kind of situation there, where this version is just the LTS support, and I think we're going to try to keep those ones granular like this one.
A: Yeah, if they happen to align, then that's great, right? If we happen to be able to preemptively align any major breaking changes at the same time that we want to line up with node dropping LTS support for a previously supported major, then it's great that that aligns. But I think you're right that we should also make the distinction between what we're doing here versus, you know, net-new breaking changes.
A: My expectation is that we probably will keep at least a cadence of a semver major, probably a breaking change, within roughly 12 months, but potentially, who knows, it could be more, could be less. I feel like that's at least a reasonable expectation, and it gives us some wiggle room to plan to have breaking changes within a year of a major cut.
A: And yeah, we've talked internally as well, Jordan, in terms of the experimental versions; seven and nine, let's say, could be examples potentially of having more short-lived...
B: ...versions, and mirroring node's, like, very obscure versioning pattern. That's a major change for npm's support policy, even if it doesn't require changing the wording.
B: Yeah, I'm not trying to rules-lawyer here and say you violated some wording of something. It's just in terms of what users may or may not expect, and what difficulties users are likely to run into when trying to upgrade and so on. I don't actually care what the support policy is on paper; I'm just talking about the actual effects of actions.
A: Yeah, so there will no longer be an npm 7 version in an actively maintained node distribution once we cut eight and land it in node 16. So that's...
A: Nope, okay. Did anybody else have any other feedback?
A: If not, then I can give you all 10 minutes of time back today, and hopefully chat with you next week. Thanks, everybody, for jumping on again; as usual, appreciate your time and discussion, for sure, and I will see you next week. Cheers.