From YouTube: Open RFC Meeting - Wednesday, Feb 5th 2020
Description
In our ongoing efforts to better listen to and collaborate with the community, we're piloting an Open RFC call that helps to move conversations and initiatives forward. The focus should be on existing issues/PRs in this repository but can also touch on community/ecosystem-wide subjects.
GitHub Agenda Issue:
https://github.com/npm/rfcs/issues/99
Notes:
https://docs.google.com/document/d/1xxv0FprrAGextilwjKK2XpsnkI2jf6lzx5DrahdQknc/edit
A
I just want to restate that these calls are under a code of conduct that we expect and hope all folks abide by. That's outlined here in the meeting notes doc, as well as in the npm Open RFCs repo, which is where we source most of the agendas from. If you haven't checked it out yet, feel free to add yourself to the attendees list, and if there are any notes from folks, or anything we want to shuffle in terms of the agenda, feel free to.
A
I think for a lot of these, since they are ongoing and we've actually touched on many of these RFCs in the past, it's essentially going to be status updates here on this call. So yeah, let's dive into PR number 68, multiple funding sources. Jordan unfortunately noted that he can't jump in on this call today, but wanted to see if we can essentially ratify this RFC, as the work is already queued up, and further discussion could be had.
B
Good, okay. Yeah, he said he updated the RFC, so I wasn't sure what the final changes were. It looks like those three commits came after his comment.
B
Which likely we're gonna roll together, so I'm not exactly sure what the verbiage change is. I know he said one of the things specifically was that he was going to add more examples to the RFC. It looks like he's done that, so there are two examples in there now. Yep, three, even four. C'mon, let's see how this views. So yeah, this could probably be fine.
A
So, if possible, yeah, okay. So yeah, we're gonna have to backlog, I think, a bit of work, I guess, on the website, where this information is primarily consumed and used at this point. So if you're taking notes, Michael, can you just make a note of that? I'll essentially make sure that there's an issue or something backlogged for that work. Cool, so I will pull this in right now.
D
Okay, so the first question I have there is: the RFC does talk about a new user permission that grants a user the ability to read or promote packages in staging, but nowhere in the RFC does it mention how 2FA interacts with the staging process. And, I mean, whether it's implied or not, I'm not sure; I suppose it is implied, but the fact is that it doesn't state that explicitly, so I assume.
D
I do have a pending code review there with my notes, so I will just publish those after the meeting, and then they will be there. Okay. The other question I have there is: the spec talks about packages, but when it says "package", I'm not sure it means the same thing that I mean when I say it, and I think it's.
C
So the understanding there is you can stage a particular version of a package name multiple times, overwriting it in a way that you cannot if it's a published version, but they're still keyed based on the version number. So, for example, you don't have three different packages at version 1, three different, like, instances of, you know, foo at 1.0.0.
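A minimal sketch of that keying rule, assuming hypothetical `versions` and `stagedVersions` fields on the package document (the real field names are not settled in the RFC): staging the same version again overwrites the previous staged copy, while republishing a published version is rejected.

```javascript
// Toy model of the keying rule described above. The document shape
// (`versions`, `stagedVersions`) is an illustrative assumption.
function stage(doc, version, dist) {
  // Staging over an existing staged version simply overwrites it.
  doc.stagedVersions[version] = { version, dist };
}

function publish(doc, version, dist) {
  // Published versions are immutable: same key space, stricter rule.
  if (doc.versions[version]) {
    throw new Error(`cannot publish over existing version ${version}`);
  }
  doc.versions[version] = { version, dist };
}
```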
D
Yeah, okay, and then, okay, that explains, I suppose, the follow-on questions. It wasn't clear from the spec exactly, so maybe some clarification is needed there; I'll leave a note. Okay, and then this sort of answers the other question that I have there, which is: how does that interplay with semver?
B
Or, how do I phrase this? I think, by what I've just heard, there's an implicit restriction in that you could not have the same semver in your staged versions section as there is in versions. Which is to say, you can't have a 1.0.0 in versions that's published and a 1.0.0 that's also staged. Is that correct?
B
And then, when promoting, you would basically... so I'm trying to just imagine what the packument metadata looks like, right? You'd have a staged versions hash, or whatever, keyed on version. So when you promote one, you basically remove it from there and add it to the versions list, right? Because it's no longer a staged version; it's now a version. Right, right.
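That promote step might be sketched like this; the `stagedVersions` hash is the hypothetical field imagined above, not an actual registry schema:

```javascript
// Promoting moves a staged version into the published `versions` map.
// The document shape is assumed for illustration only.
function promote(doc, version) {
  const staged = doc.stagedVersions[version];
  if (!staged) {
    throw new Error(`${version} is not currently staged`);
  }
  if (doc.versions[version]) {
    throw new Error(`${version} is already published`);
  }
  delete doc.stagedVersions[version];
  doc.versions[version] = staged;
}
```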
C
A really good point, and I don't think that's in the spec already. So yeah, I don't think it is either. To capture that: right now we just have, you know, _npmUser is the one who published it, so we're gonna have to kind of figure out what happens to that field. Should that be, like, you know... I would suggest that _npmUser is still the person who published it originally, like who created the package, and then we add a new field for who promoted it.
C
...point of view: like, if you tell me to publish this to staging, I will try to do that, and the server will allow it or not. And if it tells me to try to promote it, I will try to promote it, and the server will allow it or not, and the server will either prompt me for 2FA or not. But I feel like getting this actually delivered...
C
You know, even if we implement this in the CLI, if it's not supported on the registry, it doesn't actually do anybody any good, even though it's obviously a big part of the project and worth doing. I think we can, for the purposes of the implementation on the CLI, leave a lot of that stuff ambiguous, but it does need to get sort of prioritized with the registry team. So.
E
So I agree that ultimately a lot of that stuff needs to be there. I would rather see you ship the basic thing so that we can start working with it and then get the details ironed out later with the auth. It's like, if I can have "all users who have publish have promote" for now, fine, right? Like, that's significantly better than where we're at, so, like, ship it. That said, I do think, in the long run, the granularity will be important.
C
That's never a concern of mine, getting things perfect, but the two big issues there are just the difference in the ACL and the difference in the 2FA requirements. And the sort of minimum viable server-side support is that the corgi doc has to include the staged versions as well as the published versions, and...
C
It has to be different than the ultimate location. It's a bit different, because the reason why it has to be different is because of the cache timeouts that we set on all of the URLs that match our kind of current standard location for tarballs. Because they can never be changed, we serve them with a very, very long timeout on our CDN, because, you know, there are around 10,000 packages that account for about 99 percent of all downloads.
A
Be mindful here that what's being presented here essentially works much differently than versions, so you can stage and overwrite an existing staged version, which you can't publish over; you know, you can't change a published package, right? So, yeah, the downstream, where the actual tarball lives, is indeed, like, one of the complexities, I think, for the inference. So, see.
E
Do you see the location on the public internet (not talking about your internal, you know, stuff) changing when it goes from staged to published, or do you see it staying the same? "No, it will have to change. Well, we have..." Is that a requirement? That's kind of what I'm asking, right? Because if you change... if I am installing from a staged version and then the location changes, my lock file is gonna change, right? So, like, I kind of want it to stay the same, because it is... it should be the same.
C
On your local machine it will still be a cache hit, because the integrity will be the same. So your lock file will get updated: the resolved value will change, but the integrity value will not. And if you happen, if we happen by some magic, to have two tarballs which are exactly identical that you're installing, let's say you install one from the public registry and one from some other registry, but they're the same exact object.
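In lock-file terms, the point being made is that a promote would touch `resolved` but not `integrity`; a toy illustration (the URLs are made up):

```javascript
// A promote moves the tarball to a new URL, but the bytes are
// identical, so the content hash in `integrity` stays the same.
function updateLockEntryOnPromote(entry, promotedUrl) {
  return { ...entry, resolved: promotedUrl };
}
```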
E
I'd like to prevent the package-lock changes, though. Right, if we have a thing and we're staging it, we try it with a couple of teams; like, I'm thinking in, like, a Netflix: if we have a couple teams, we say, "check out the new staged version," and then the next time they go and update their package, their, you know, app, they also get a bunch of changes unrelated to them.
A
Say, Wes, what you're asking them to do is test something that's in development or in review, and, like, a staged version shouldn't be considered, like... it's gonna change, right? Like, you're gonna have to add... in order for somebody to actually install staged versions, you're gonna have to add a flag. So my guess is that, like, the artifact that gets generated is gonna be different than when you're pointing at prod, you know, if you're going to use kinda, like, those kinds of analogies, right?
D
There are also more implications for the shrinkwraps, namely that, because these are essentially not immutable, the integrity could also change if you are publishing a new version, right? So you will get a new integrity, and so the shrinkwrap, or the lock file, probably also needs to indicate that this was installed from a staged URL. And that would also mean that the client needs to respect the fact that, in this particular case, if it was previously installed from staging, then it should accept the fact that the integrity changed. Oh.
C
There ought to be... we have quite a few tests verifying that integrity mismatches do cause an error, so there may be... there may be a way to thread this needle. One possible way to do it, and we could dig into this in the spec (I could go think on this and kind of come back with some suggestions), but one way that we could do this is basically: if you stage a package, it goes to a...
C
It goes to a URL which is different from kind of the current way that we structure our tarball URLs, but it's still somehow permanent, right? So you have, you know, imagine instead of /foo/-/foo-2.0.13.tgz, it's /foo/staged/&lt;timestamp&gt;/foo-2.0.13.tgz, and then, when you promote it, we just basically move it into the new location in the packument, but on our CDN we can still say, like, hey.
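The two URL shapes being contrasted might look like this; the published form follows the registry's familiar `/-/` convention, while the staged form with a timestamp segment is purely speculative:

```javascript
// Published tarball URLs are fixed and cached forever by the CDN.
function publishedTarballUrl(name, version) {
  return `/${name}/-/${name}-${version}.tgz`;
}

// Hypothetical staged URL: a unique timestamp segment means each
// staged upload gets its own permanently cacheable address.
function stagedTarballUrl(name, version, timestamp) {
  return `/${name}/staged/${timestamp}/${name}-${version}.tgz`;
}
```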
A
The whole point of, like, the staged versions, allowing for you to re-stage a version, is for you to essentially be updating and publishing over top of (and when I say publishing, I mean publishing to the staged versions) over top of that. Which means that you can't essentially cache that, or you shouldn't be trying to cache the reference for that artifact, the tarball, right? So we should be expecting that somebody should be fetching that latest content every time, right? So.
B
Right: what's the... what's the difference between... yeah, what's that... so I'm on the same page as Wes. So what's the difference between the URL that points to a staged version and the URL that points to, like, a regular published version? Because in my mind they should both do the same, and the registry should just handle it, right? Like, if the...
B
It says, like, "I want version 3," and that happens to be a staged version. If you didn't pass along the header that says, like, "I'm getting staged stuff," then the registry should say, "sorry, this doesn't exist," just like it would if the package at that location didn't exist, right? 'Cause then your URL never changes, right? It's, like, a package URL is a package URL is a package URL, right? Hold on.
C
Okay, yeah. Like, I think there's some confusion about whether we're talking about fetching the packument and then getting the URL out of that packument, right, versus being in a lock file where we already have the resolved URL. Because we're not saying, in that case, "hey, registry, give me version 2.0"; what we're saying is, "give me this particular tarball URL, and I expect it to have that integrity." Sure. So the URLs that we have...
C
If you look in a packument, in the package manifest, in the dist field, the tarball URLs that we have have a very sort of specific structure, and any URL that matches that structure, our CDN is basically told: cache this forever, unless we tell you otherwise. So when we unpublish a version, we just basically purge that one URL out of the cache, and we also send headers that say, you know, this is never going to change. So the issue is, yeah.
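The "cache this forever" instruction described here is the standard immutable-asset pattern; a sketch of what such response headers could look like (the registry's actual header values are not quoted in this call):

```javascript
// Immutable caching for tarball URLs: a long max-age plus `immutable`
// tells intermediaries never to revalidate; unpublish then relies on
// an explicit purge of that one URL from the CDN.
function tarballResponseHeaders() {
  return {
    'Cache-Control': 'public, max-age=31536000, immutable',
  };
}
```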
C
The issue is, you have a package-lock file that has stored, like, this, you know: I install a staged version, and then it goes into my lock file, and from the lock file's point of view, it doesn't know that this is a staged version, or, you know, a thing that came out of a packument versus a thing that came out of the URL.
C
It's just, like: this is the URL that it resolves to, and this is the integrity that I expect. So then, later, when I go to run my installation in CI or something, it just fetches that same URL and gets that same integrity value. Now, if you've told your team, you know, "please try out this staged version," right? Okay, so they try out the staged version; it updates their package-lock, and then, you know, sometime later, you do one of two things. You promote that staged version, and what we want...
C
What Wes is suggesting is they should be able to run an install and everything keeps working, with the same resolved URL and the same integrity value. The second possibility is you've published a new staged version over that same version number, right? So I said, "try out our staged version 2.0." They do it. Then, sometime later, we make another change to it. We push that new staged version; it overwrites the previous one.
C
At that point, I would suggest that their install should fail (you're installing from a package-lock), or, if it doesn't fail, it should at least update, right? So those are two possible cases: if it updates, it's going to be a change to the package-lock, and if it fails, it's going to be a 404, because that resolved value now does not resolve to a proper URL that exists. And I think we just kind of need to handle both of those cases. You know, if we see that it's...
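Those two outcomes could be sketched in client logic roughly as follows (the fetch-result shape is hypothetical, not an npm CLI internal):

```javascript
// Installing from a lock entry whose tarball was staged:
// - a 404 on the old resolved URL (the version was promoted away)
//   means the lock file needs updating;
// - an integrity mismatch (the staged version was overwritten)
//   should fail loudly.
function classifyFetchResult(entry, res) {
  if (res.status === 404) {
    return { action: 'update-lock', reason: 'resolved URL no longer exists' };
  }
  if (res.integrity !== entry.integrity) {
    return { action: 'fail', reason: 'integrity mismatch' };
  }
  return { action: 'ok' };
}
```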
D
They can always publish a new version with a new version number, right? So, for example, you staged the 2.0 and you did not want that to happen. Well, then there's not allowed to be a 2.0; okay, so there's going to be a 2.1. There's nothing in the registry that prevents that happening, and it's also not a problem. You know, it's nice to have version numbers go nice and sequential, but it's not a problem.
C
I mean, I think I'm kind of halfway between the two of you here. I think for versions that you've published, that you've made available for people to install by default, reusing version numbers is a horrible, horrible thing to allow. It was a big mistake. It was a mistake that was very swiftly corrected, really, in the history of the registry; we made them immutable pretty quickly. It really results in some very unfortunate impacts.
C
However, for staging things, it's really, really useful to have version numbers that match, and people do apply some interesting semantics onto these things. One case that I know of, where I know folks will be using staged packages pretty regularly, is in cases where you have a bundle of projects, a bundle of packages, that are kind of all one project, right?
C
So, like, the folks who are using lerna, like all of the lodash modules, all the babel modules: the way that they work now is they bump the version of everything and then publish them all at once. So the expectation, when you're using anything with babel, is that all of the, like, @babel/* packages are the exact same version number, and it actually provides some really pretty nice ability to, like, pin dependencies, to verify that you're using the things that you tested against, and so on.
C
Because of the fact that you can't reuse version numbers, which I think is justifiable for published packages, that means that they basically have to try to publish everything; if anything fails, they have to unpublish everything, bump the version number on everything, and kind of try again. A nice feature of staging is they'd be able to do this in a much kind of softer way: you can stage all of your packages.
A
Yeah, so I just want to circle this back, because I think we got onto this because, Isaac, you know that there is a heavy dependency here, and a blocker, on the registry actually supporting all this, and that downstream implementation question is: where does the package live at rest in staging, versus where does it live when it's actually published properly?
E
I agree: the client is where I see the issue, and if the URL changes and your package-lock changes, you're going to just cause a ton of churn for people. And I don't think even having those package versions be permanently accessible is the thing, because then you've got somebody who deploys a project because their local developer said, "oh, I'm gonna try out that new thing," and then, like, it got shoved in a lock file and they shipped it, and now it's running in production, like.
E
Their lock file has the hash in there; it still resolves; it's the same tarball, right, and the same URL. Then the URL is fine, same tarball, fine, because it's just a promote. If it was an overwritten version, then the install would fail; it would block their build, and it would not let them get to production with a staged tarball that changed, right? So that's actually a benefit.
D
Right, and if there's the staged flag... because inferring that from the URL, I mean, sure, it would work, but I'd have doubts about such a feature; it would feel like magic. So there's probably going to be a need for a specific flag in the package-lock to say that it's staged, and if there is such a flag, then you're gonna have churn anyway. Then the whole URL debate is moot.
C
So then, what happens then? That actually breaks the use case where you promote it and you don't want to have to do anything extra, right? Because now the package-lock thinks it's staged, even though it's been promoted, without requiring that we make another fetch for the packument every time we see a staged flag. The...
E
But this is why I'm saying let's not have churn, like, yeah. This puts it on every user. If we just keep the same URL, no staged flag, nothing, then a promote is transparent to the end consumer. Then, you know, it might fail: if they try to get a new one, say they republish and then promote, that would fail the checksum. So it would fail, but it rightfully should, like... well, and the...
B
I was going to suggest this earlier. However, there's gonna be... I don't know how we handle the difference in time, right? Because purging cache is not an instantaneous thing, right? Like, I mean, we're on Cloudflare, and I don't know what optimizations they've done, but I know with, like, Amazon, for instance, it takes like 20 minutes to purge all edge nodes, right? There...
C
There will be some period of time, and, you know, this is something we have a bunch of disclaimers around with unpublishing: when you unpublish something, it's not instantaneous, because there are PoPs that are going to cache that tarball URL essentially forever. The only really, really sure way to guarantee that you're gonna keep getting a new thing every time we change what the bits are is to bust the cache with a URL that has a unique identifier in its name.
C
Now, we can put a unique identifier in the URL that is just, you know, related to that checksum, right? Like, or related to the timestamp, or really anything, any arbitrary thing. It doesn't even have to say "staged"; it could just be some string of numbers, and those people who are in the know would be able to look at the URL and say, "oh, I see that there's an arbitrary string of digits here; it must have been staged at some point." I...
C
It should either, you know... and that can be either a 404, as the URL then has to reflect it and causes churn in the lock file, or it gets an invalid checksum, but it has to get that right away. What I don't want to do is have a situation where it's ambiguous what lives at a particular tarball URL, because... I'm just... I've been traumatized in the past with having tarball URLs where the contents change. It's just...
C
It results in some really, really awkward, hard-to-debug things, where, you know, we've got people calling us saying it just failed again, but, you know, in ten minutes it's gonna succeed, and it's, like, impossible to debug realistically. Or it's only failing in Frankfurt, which, I don't know why; for some reason, for the longest time (it was like six months of my life), we're just like, "installs in Frankfurt don't work any time we see the fr..."
A
Sure appreciate you taking notes, though. So yeah, as a takeaway, it sounds like we have to follow up in the PR itself, probably, with some of these questions, and it sounds like you also have some peer review feedback that you're gonna give on this. I think it would be great to just...
B
...sort of the use cases we spoke of. I think it would be great, Wes, and I guess, if you guys could, just, like, outline a use case that you want to see, because I'm just curious to know it, like, "I would like to see this because I can do X with it," and, like, what does that look like? And if we all said that about it, that's technically our own happy paths. I'm curious.
C
The other one, yeah, I can basically just provide a brief update on that, and...
C
So there's a couple of things that need to be set in the environment by the CLI itself, and then, when we run the script, there's a couple of package-specific things that are going to be passed through. So rather than just dumping every single thing that's in package.json, we're just going to set the npm_package_json environment variable, which is a path to the package.json for that particular file, for the particular package running a script. If you want to know what's in it, then you go look there.
C
It's easy to parse JSON in JavaScript. And then, beyond that, I've added in arborist already npm_package_resolved and npm_package_integrity to the environment, and also npm_package_optional, npm_package_dev, and npm_package_dev_optional, which correspond to the flags in the package-lock and in the arborist node objects. So what we're not going to do is pass through every single config value; there's a handful that ljharb (Jordan Harband) and a few other folks have brought up as...
C
If we do update a module at a particular depth, that can change the depth at which another module of that same name is now found, in the case of dependency cycles, right? So previously it only existed at a depth of ten, but then we update something, and now that new thing depends on it, and now it's no longer at a depth of ten anymore; it's at a depth of 2. So, you know...
C
So the first thing I try to update is bar. The new version of bar does not depend on foo, so foo's two-level depth has now been removed, and its new depth is only 10, right? Because you have to recalculate that, and keep it in memory. So the only way that I could think to even do this, and actually what npm 6 is doing, is recalculating this repeatedly by re-walking the tree each time to see how deep something is, and essentially it's, like, just a thousand traveling-salesman problems.
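The re-walk being described amounts to a breadth-first recomputation of each package's shallowest position after every change; a naive sketch (not arborist's actual code):

```javascript
// Recompute the minimum depth of every package name by re-walking the
// tree from the root. BFS guarantees the first visit to a name is its
// shallowest occurrence; revisits (including cycles) are skipped.
function computeDepths(root) {
  const depths = new Map();
  const queue = [[root, 0]];
  while (queue.length > 0) {
    const [node, depth] = queue.shift();
    if (depths.has(node.name)) continue;
    depths.set(node.name, depth);
    for (const child of node.children || []) {
      queue.push([child, depth + 1]);
    }
  }
  return depths;
}
```

Redoing this from scratch after every candidate update is what makes the naive approach so expensive.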
C
I mean, it's really pretty gross. It gets infinitely more complicated... not infinitely more, it's significantly more complicated in a world of peer deps, because it's not clear how deep a peer dep actually is if it's been sort of installed automatically by virtue of being a peer dep, right? It's sort of like it should be either at a depth of, you know, n plus 1, or maybe n, because it's at that n level. It's really, really awkward.
A
Just to be mindful of time, we are at time right now, so I'm wondering... I think I agree with what you're saying, but if anybody has any feedback on that proposal, feel free to. There haven't been any comments since you created the RFC, Isaac, which I'm kind of surprised by; I kind of feel like, that's... no news is good news.
C
I mean, the main way that people use depth is they run depth equals 9999, yeah. But, like, the depth arguments that people specify are either one or infinity, and I'm like, okay, well, updating to a depth of one doesn't make sense, and the other... you probably just... and actually, what this does is, it means that npm update is the replacement for deleting my package-lock.json and running npm install again, yeah.
A
I've got a hard stop, my friend, I'm sorry, but I appreciate everybody jumping on for this call. And again, we do these bi-weekly, so we can carry on the conversation in the PRs themselves, or async in the various other channels that we have. But I just want to say thanks, really, for jumping on to the call today. So thank you so much, and I'll see you in a couple weeks. Thanks. Cheers.