From YouTube: Open RFC Meeting - Wednesday, November 10th 2021
Description
In our ongoing efforts to better listen to and collaborate with the community, we run an Open RFC call that helps to move conversations and initiatives forward. The focus should be on existing issues/PRs in this repository but can also touch on community/ecosystem-wide subjects.
A: Welcome, everybody, to another npm Open RFC call. Today's date is Wednesday, November 10th, 2021. We'll be following along with the agenda that was posted in issue 492 in the npm RFCs repo. As usual, we ask that folks on these calls, and in all comms, abide by the code of conduct we reference in the agenda.

We may have some time at the end to add topics if folks want to, or we can end early and give some folks time back. But I just wanted to jump quickly into the first item we had here, PR 488. This is "make install scripts opt-in." Francisco — I think that's how you say your name — did you want to do a quick introduction, and then quickly speak to this? And then, as noted, we can go from there.
B: Sure, yeah. As I kind of mentioned in the comment, the RFC itself is hopefully just starting a discussion around the feasibility of, and what it would look like, to not have install scripts be the default action of typing `npm install`. Happy to talk about it if people want to.
B: That's totally fine — like I mentioned in a comment, I would love to punt the discussion until next week, just because a lot of people had a lot of good ideas, and I wanted to make sure those ideas were incorporated into the actual RFC.
B: But additionally, we kind of started a bunch of processes over here that take several days to complete, where we're trying to gather a bunch of data. I would love to be able to say exactly how many packages use, say, the prepare script — and if it's a small number, then we can say "these are actually the use cases" — versus, for example, the postinstall script, which gets used a lot more, and whose uses fall into these buckets. If nothing else, it would give us a really concrete idea of what the usage of these kinds of things looks like.
B
I
just
think
that
that
would
inform
just
future
discussions
about
like
you
know
why
people
want
these
things,
what
it
means
to
be
able
to
replace
that
behavior
in
a
world
where
we
want
to
get
rid
of
some
of
the
behavior
etc.
So
that's
that's
kind
of
like
what
I
would
ideally
like
to
have
to
be
able
to
present
kind
of
next
week.
A: Yeah, what I'll say on that quickly, before I go to Bradley, is that we're also doing something similar — I've actually asked some teams whether or not they have insights or analysis like that on usage by dependencies, just to see what the breadth of a change like this would look like.
A: Who would be impacted, and what are the use cases that we currently have? I'll let you speak, Bradley — and I think Miles also had a note that he wants to make as well.
C: Yeah, so I'm working at Datadog, and we have pre-built binaries for just about everything. We did some fiddling around after seeing this RFC, and we noticed that npm does run arbitrary commands if you have a binding.gyp, for example, and in a few other situations that are not in the package.json.
C: So the first thing we did — because we don't normally install using those commands; they're there as fallbacks for things like FreeBSD, where we don't supply a pre-built binary — was take the whole scripts field out of our package.json. And we still got compilation errors, because we generally run in environments without compilers. So this RFC definitely needs to grow in scope, because there are certain behaviors of arbitrary code execution that are not in the package.json, and it needs to identify those.
C: The other thing we noted was that a lot of the RFC is centered around the person adding a package in their workflow. There's not much yet about cross-computer synchronization, or about preventing the maintenance burden from falling on everybody consuming a package, and on everybody who has to maintain and publish packages multiple times over. So it'd be good to add more perspective to the RFC on those two things. That's what we had from Datadog.
D: I have a couple of thoughts on this. To be frank, I flip-flopped a bunch this week on what I think the best long-term strategy regarding scripts is — and to be honest, it's something that I've flip-flopped on for many, many years. I think we could enumerate a very long list of things that the client does dynamically via scripts.
D: Those would all benefit from being done statically as much as possible. A great example of that is some of the really cool work I've seen coming out of the esbuild team, who are using optional dependencies and per-platform published modules as a way to distribute native modules. It doesn't require install scripts, it doesn't require stuff like node-pre-gyp, and it doesn't require fetching binaries from some third party that doesn't have reliability, or verified shasums, or versioning. There's a ton of great things that we get from taking these dynamic things and making them static.
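The esbuild-style pattern being referenced can be sketched roughly as follows (package names here are hypothetical, not esbuild's actual packages): the main package lists one optional dependency per platform, each platform package declares `os`/`cpu` fields so npm only installs the matching one, and a small loader picks up whichever build landed:

```javascript
// Per-platform prebuilt-binary pattern (illustrative names).
//
// Main package's package.json:
// {
//   "name": "native-tool",
//   "optionalDependencies": {
//     "native-tool-linux-x64": "1.0.0",
//     "native-tool-darwin-arm64": "1.0.0",
//     "native-tool-win32-x64": "1.0.0"
//   }
// }
//
// Each platform package restricts where it installs, e.g.:
// { "name": "native-tool-linux-x64", "os": ["linux"], "cpu": ["x64"] }
//
// No install script and no compiler are needed on the user's machine.

// Loader in the main package: require whichever platform build installed.
function binaryPackageName() {
  return `native-tool-${process.platform}-${process.arch}`;
}

function loadNativeBinding() {
  try {
    return require(binaryPackageName());
  } catch (err) {
    throw new Error(
      `No prebuilt binary installed for ${process.platform}-${process.arch}`
    );
  }
}

module.exports = { binaryPackageName, loadNativeBinding };
```

Because every platform build is a normal registry package, each one gets the registry's usual integrity checks (the verified shasums mentioned above) rather than being fetched from an out-of-band download server.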
D: We actually were far stricter about the fail cases, but it broke so many workflows that we were concerned it would essentially result in folks just not being able to upgrade their clients. So we introduced some escape hatches, like `--force`, and then we realized that some of the things we do under force we could maybe do automatically for people — but anyway.
D
I
won't
get
too
much
into
that,
but
but
more
just
that,
like
that
level
of
breakage
needs
to
be
to
be
very
thoughtful,
and
there
is
the
concern
that
if
you
go
too
far
on
breaking
that
a
people
don't
upgrade
their
clients,
so
you
don't
have
the
desired
effect
because
people
refuse
to
upgrade
and
the
other
problem
is
that
people
blanket
opt
out,
and
this
is
a
problem
that
we've
discussed
extensively
also
about
like
npm
audit.
D
I'm
not
saying
this
is
the
world,
but
I
could
see
a
world
where,
like
we
introduced
a
change
like
this
and
then
basically
every
single
npm
tutorial
from
all
the
big
companies,
all
the
big
projects
like
react
or
angular
or
whatever
kind
of
start
with
like
the
first
thing
you
should
do,
is
like
change
this
setting
to
make
sure
that
things
will
actually
install
and
to
be
clear.
This
isn't
a
reason
not
to
do
it.
D
But
this
is
a
reason
to
be
like
really
thoughtful
about
the
way
that
we
make
changes
and
to
be
really
careful
about,
like
kind
of
how
we
move
the
needle
and,
if
possible,
to
do
it
in
kind
of
small
consumable,
measurable
improvements,
rather
than
kind
of
like
one
hard
sweep
of
just
disabling
things.
D
The
second
concern
that
I
have
which
to
be
clear,
I'm
not
entirely
convinced
of,
but
I'm
still
kind
of
working
through
it
is
I've,
heard
the
term
security
theater
thrown
around
a
little
bit
in
that.
If
a
module
itself
has
been
compromised
and
the
script
is
now
malicious.
Well,
if
we
disable
the
use
of
the
scripts.
D
Yes,
we
lower
the
threshold
and
the
attack
surface
and
the
number
of
packages
that
like
kind
of
immediately
get
compromised,
but
we
don't
totally
solve
it
because
you
know
a
sufficiently
clever
attacker
who's
planning
an
attack
if
they
know,
for
example,
that
if
you
load
react,
a
hot
path
always
gets
to
like
this
code.
Path
can
inject
code
in
that
hot
path
and
if
the
attack
that
we're
thinking
of
is
you
know,
install
crypto
miners
in
ci,
cd
or
install
password
installers
on
desktops,
it's
just
kind.
D
It's
kicking
the
can
down
the
road
for
lack
of
a
better
way
of
putting
it
and
to
be
clear,
it's
not
perfect,
it's
never
a
good
reason
not
to
do
anything,
but
I
think
that,
in
light
of
what
a
large
ecosystem
dx
breaking
change,
this
would
be.
I
think
that
is
an
important
thing
to
keep
in
mind
as
we're
thinking
through
the
solution
space
and
so
to
clarify
kind
of
all
of
those
statements
at
once.
D
The
intent
is
not
to
shut
down
discussion.
The
intent
is
not
to
say
that
we
shouldn't
do
this
at
all.
It
is
mostly
to
say
we
need
to
be
very
careful.
We
need
to
be
very
thoughtful
and
if
there
are
ways
that
we
can
think
about
this
in
a
phased
approach
that
takes
steps
towards
improving
the
status
quo
to
get
us
towards
that
goal.
D
I
think
that
we'll
have
much
more
success
in
significantly
improving
the
security
of
the
registry,
and
I
think
that,
if
we
did
it
in
one
fell
swoop,
we
would
not
accomplish
the
goals
that
that
we're
after
right
now
in
in
protecting
the
ecosystem.
C: Yeah, so "security theater" is an interesting thing. If we do want to build a threat model for this exact feature, we can do that. In particular, I have concerns about people just calling out security theater without having a model for what they're talking about. One of the nice things that we talk about is shipping pre-built binaries onto the client's computer, so they don't have to have a compilation toolchain, and they don't have to take the time to compile.
C
Don't
have
to
execute
arbitrary
shell
commands
basically,
but
in
order
to
do
so,
you
still
execute
those
commands
somewhere
generally
some
ci
server
realistically
and
so
discussion.
C
Really,
when
we
want
to
talk
about
what
is
being
secured,
we
need
to
just
identify.
You
know
what
this
does
affect.
What's
practical
about
it
like
it's
not
practical,
to
have
a
read,
only
file
system
on
your
build
server,
but
it
usually
is
fairly
practical
to
have
a
read-only
file
system
on
you
know
a
development
production
deployment,
usually
that
works
out
okay.
C
So
this
is
a
good
case
where
you
have
an
attack
like
the
event
stream
attack.
We
have
historical
data
on
it.
It
did
the
attack
at
run
time.
It
wasn't
using
an
install
script,
it
kind
of
undid
its
attack,
afterward
and
all
this,
and
it
wouldn't
be
affected
by
this
ignore
scripts
thing,
but
other
attacks
that
we
have
on
record
would
be
prevented
by
it.
So
whenever
we
get
to
this,
I
think
just
enumerating
what
they
mean
by
its
security
theater.
They
need
to
back
up
what
they
think
security
is.
A: Go ahead, Isaac, and then Francesco — Francisco, is that how you say it? Sorry, apologies. Great.
E: As Bradley points out, it does reduce — or at least change — the security attack surface if we don't run install scripts by default. But we're still going to be running the code: you can ship a pure JavaScript module, and it's going to get required — that's why it's there in the tree in the first place. So it does prevent taking over a machine at install time, but it doesn't really prevent taking over a machine, period.
E
That's
still
that's
still
reducing
the
attack
surface.
I
don't
know
exactly
how
meaningfully
it's
reducing
the
attack
surface.
For
me,
I
I
feel
like
the
I
I
don't
know
if
I
would
go
so
far
as
to
say
it's
security
theater.
If
we,
it
would
certainly
be
theatrical
if
we
said
well
now
that
we
don't
run
install
scripts
anymore.
Npm
is
100
secure
and
you
can
trust
everything
like
yeah
okay.
Well,
but
if
you
take
over
an
account,
you
can
still
ship
malicious
javascript
code.
It
just
can't
be
malicious,
build
time
code.
E
The
the
thing
that's
much
more
interesting
to
me
is
actually
the
kind
of
angle
that
bradley
you.
You
initially
touched
on
with
how
we're
how
handling
and
sort
of
the
workarounds
and
hacks
that
people
have
have
put
together,
some
of
which
are
very
clever
and
kind
of
sophisticated
we've.
Seen
we've
seen
nexjs
also
doing
some.
E
You
know
creative
stuff,
where
they're
sort
of
sniffing
the
using
optional
dependencies
and
then
actually
sniffing
the
build
tool
chain,
so
they
can
have
a
different
build
depending
on
what's
available
in,
like
which
header
files
are
available.
Even
and
that's.
I
think
that
that's
actually
a
really
interesting
angle
to
explore
as
a
way
to
deliver
something
like
a.
E
What
was
the
variance
like
something
like
a
variance
type
of
feature
where
you
know
your
ideal
tree
still
includes
these
12
different
optional
dependencies,
but
only
one
of
them
will
actually
be
reified
by
npm,
based
on
your
particular
machine
setup,
so
things
like
that
might
be
really
interesting
to
explore,
and
I
think
I
think
that
we're
going
to
run
into
some
issues
with
you
know
the
the
clever
but
somewhat
surprising
default
of
like
well.
E: There's what is explicitly written in the package.json file, and then there's the package manifest that we load from a folder — which contains a package.json as well as other files — and those two things are not a hundred percent the same. It's still deterministic, but it's not necessarily exactly what you have in package.json. So I think the big thing to consider is really just: what is the impact to the community?
E
If
we,
if
we
said
and
I
haven't,
I
haven't
really
analyzed
this
rfc
in
in
detail.
So
I
I
apologize
if
anything
I'm
saying
has
already
been
addressed,
but
we
really
need
to
look
at
like
what
are
the?
What
are
the
things
we're
gonna
break
if
we
make
install
scripts
opt-in?
E: I should be able to say, "Yes, this is safe — don't ever prompt me about that particular package and version again," like adding it to an allow list, which could be good. But on the other hand, you have the issue you see with SSH logins, where you just say yes the first time you see the prompt — most people never even look at it, and certainly don't go out of band and check the key like you're, quote-unquote, supposed to.
E: I'm sorry I'm bringing up more problems than solutions, but I feel like that's the stage this is at. Where I'd really like to explore — maybe out of band for this RFC — is how we can get a little smarter with our defaults around when we run node-gyp, versus when we just say, "Hey, there's a pre-built thing here, I don't need to rebuild it."
B: Sorry, ahead of time — I was raising my hand manually and wasn't aware of the metaverse version of hand raising. Some of these are kind of statements, but just pretend there are question marks at the end, because I'm kind of asking too — I just want to know the best way forward. There's a lot of information that's been lost in the comments that can be put into the RFC.
B
But
you
know,
I
think
one
of
the
kind
of
fundamental
issues
here
is
that
there
are
like
a
lot
of
stakeholders,
and
you
know
I
want
to
set
expectations
for
the
rrc.
Like
specifically
hey.
This
is
a
set
of
problems
we're
trying
to
solve.
We,
we
agree,
there's
other
problems
but
like
this
is
the
bar
we're
trying
to
reach
and
an
example
of
that.
I
think
it's
like.
B
I
think
it's
very
easy,
since
I
mean
I
suffer
from
this,
and
since
I
I'm
isomorphic
right,
everyone
the
same
code
on
the
back
and
as
the
front
end,
I
immediately
jumped
to
this
conclusion
of
like
well
yeah.
You
can
just
put
the
code
in
the
package
and
it's
gonna
get
run,
but
then
I
remember
oh
everyone
who
actually
has
ruby,
back-ends
and
python
back-ends.
B
The
only
chance
that
the
npm
package
ever
has
to
run
on
their
computers.
Usually
is
the
install
script
right
because,
like
their
tests,
usually
run
in
puppeteer,
so
they're
in
that
sandbox,
and
so
there
actually
is
for,
like
this
huge
part
of
the
community
that,
like
often
doesn't
get
represented
in
the
comics
just
because,
like
they
like
to
them.
B
Npm
is
this
french
thing
it's
like
for
the
front-end
people,
but
that
like
actually
opens
up
this,
this
attack
surface
of
their
computers
in
an
environment
where
they
would
never
expect
right,
like
if
you're,
only
using
javascript
in
the
browser
by
default.
Your
intuition
is
that,
like
the
attack
surface
is
like
oh,
how
are
they
potentially
messing
with
the
cookies
of
my
users
or
something
like
that?
You
would
not.
I
think,
intuitively,
expect
the
possibility
of
something
taking
your
keys
in
your
ci
machine,
because,
again
you're
again,
just
by
intuition.
B
Your
expectation
is
that
this
code
is
only
ever
running
in
a
sandbox
browser
environment.
Why?
Why
would
it
run?
You
know
in
on
your
main
machine?
You
know
a
separate
like
attack
surfaces
that
I've.
Actually
you
know,
I
don't
think
I've
ever
been
attacked
but
like
I've
certainly
had
it
happen
to
me,
is
I'll
just
like
mis-type,
the
name
of
the
thing
like
npm
install
low
dash
with
two
h's,
and
if
you
have
people
squatting
on
that
on
those
you
know,
typo
names,
they
can
just
work
a
process
immediately.
B
You
can't
control
c
fast
enough
and
like
go
check
activity
monitor
if
you
misspell
the
name
like
who
knows
what's
running
there
right
like
so,
I
guess
what
I'm
trying
to
say
is
like
it's.
This
like
multi-dimensional
thing
where,
like
you
know,
the
motivation
section
of
the
rfc
should
be
like.
Well,
there's
all
these
different
stakeholders
and
then,
like
you,
know,
there's
different
usage
cases
and
so
forth.
B
So
I'm
not
I'm
not
sure
the
best
way
to
present
that,
but
like
I
want
to
just
acknowledge
that,
like
yeah
first
in
certain
use
cases
it
feels
like
this
is
the
least
of
my
concerns,
whereas
for
other
use
cases
it
seems
like
it's
almost
basically
the
only
attack
surface.
That's
that's
presented.
I
I
also
wanted
to
touch
on
this
kind
of.
B
Like
thing
that's
been
talked
about
a
bit
of
like
well,
what
are
the
alternatives
that
we
should
give
people-
and
I
I
put
some
of
this
in
the
rfc,
but
I
actually
think
that
that
that
might
just
be
the
most
interesting
part
of
this
and,
like
you
know,
perhaps
it's
worth
filling
out
the
rc
substantially
more.
B
I
didn't
want
to
like
completely
like
fill
it
up
with
just
like
here's,
a
bunch
of
wacky
ideas
for
how
to
do
this
stuff
separately,
but
I
I
I
agree
that,
like
the
correct
way
to
do,
this
is
not
with
a
big
switch
like
in
my
mind.
This
is
like
you
know.
B
This
is
the
the
the
theme
of
a
set
of
changes
that
take
place
over
the
course
of
time
where,
like
the
shared
goal,
is
to
be
at
a
place
where
it
feels
very
reasonable
to
remove
this
right.
Like
I
think,
a
great
example
is,
like
you
know,
mkm
added
the
like
funding.
You
know
package
package.json
field.
B: Can we do that with most of the ecosystem — a big part of the head and of the tail — so that you're left with only fairly esoteric cases? I think we can. And the other thing I don't want conveyed here: I think a lot of people see this and interpret it as "no install scripts," and that's not the point. In an ideal world, install scripts would be exceptional — they would require exceptional interaction.
B
You
know
interaction
and
like
if
we
can
get
to
a
point
where
that's
not
just
a
dream,
but
like
a
reality-
and
I
think
you
know
it's
not
that
unacceptable-
to
be
like
yeah
you're,
installing
a
really
weird
package,
and
it
requires
this
weird
flag
and
that's
okay
right.
I
guess
the
the
other
kind
of
thing
just
in
terms
of
like
goals
or
like
usages
or
something
is,
I
think
at
least
to
me.
B
One
big
success
metric
would
be
if
there
could
be
some
sort
of
commit
history
around
the
change
from
something
is
running
on
your
computer
or
something
is
not
running
to
something
that's
running
right
like
currently.
That's
not
the
case
just
due
to
the
nature
of
december
ranges,
and
you
know
the
fact
that
github
by
default
will
just
hide.
You
know
large
package
lock,
json
changes
so
like,
even
though
it
is
technically
true
that,
like
a
lot
of
you
know,
malicious
stuff
could
be
snuck
into
a
mainline
package.
B
It's
a
lot
easier
to
sneak
a
version
bump
into
a
package
lock
to
a
packet
or
even
sneak
a
new
package
into
a
package
log
file
that
everyone's
eyes.
Just
plays
over
and
like
they
don't
look
at
that
part
of
the
pr
right.
So
in
a
world
where,
like
the
kind
of
explicitness
of
like
hey,
we
are
explicitly
allowing
these
scripts
to
run,
gets
kind
of
percolated
up
to
the
package.json
file
and
there's,
like
perhaps
like
every
single
thing
that
you
initially
you
know
enable
with
the
with
the
flag
gets
recorded
in
the
package.
Json.
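What that "percolated up" record could look like might be sketched as follows. This is purely hypothetical — `allowScripts` is not an existing npm field; the idea is just that opt-ins live in package.json, where reviewers actually see PR diffs:

```javascript
// Hypothetical sketch: recording script opt-ins in package.json.
//
// package.json might carry, alongside normal dependencies:
// {
//   "dependencies": { "sharp": "^0.32.0", "left-pad": "^1.3.0" },
//   "allowScripts": ["sharp"]
// }
//
// Adding a new name to "allowScripts" then shows up as a one-line,
// human-reviewable diff, unlike a buried package-lock change.

// Check an installer could run for each dependency before executing
// its lifecycle scripts:
function mayRunScripts(pkgName, manifest) {
  const allowed = manifest.allowScripts || [];
  return allowed.includes(pkgName);
}

module.exports = { mayRunScripts };
```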
B: On the node-gyp thing, too — at RunKit, we actually npm install every single package, and we quickly found out that, oh, we thought we had caught all the ways code runs at install; no, I guess there are other ways. I am curious — again, these are questions, so I'll just leave it at that — aside from reading the code, is there a place where it is completely documented?
B
What
the
like
exhaustive
list
of
things
that
cause
code
to
run
on
your
computer
are
those
that's
kind
of
my
first
question
that
I
want
to
leave
and
then
my
second
question
is:
I
had
so
far
been
operating
on
the
kind
of
assumption
that,
like
I'm,
trying
really
hard
to
come
up
with
solutions
that
don't
require
back-end
or
registry
changes.
My
intuition
is
just
that.
That
makes
it
a
lot
easier
to
both
be
approved
and
like
have
community
support.
Obviously,
since,
like
you
know,
we
don't
have
access
to
you
know
internal
code
or
whatever.
B
If
you
know
mpm
feels
like
there
are
certainly
a
lot
more
interesting
things
that
can
be
done
on
the
back
like,
like
you
know,
the
floodgates
kind
of
open
on
like
possibilities.
If,
like
all
of
a
sudden,
you
can
do
like
analysis
on
the
back
end
or
add
new
features
like
again
like
if
there
was
like
an
official
support
for,
like
you,
know,
multiple
different
versions
of
binaries
or
whatever,
and
it's
not
like
a
thing
that
we
have
to
figure
out
on
the
side
that
certainly
changes
the
dynamics
of
the
proposal.
B
I,
however,
I'm
still
leaning
towards
like
how
much
can
we
do
just
in
the
client
so
yeah?
Those
are
my
kind
of
two
questions
to
leave.
A: Make sure, if you're following along in the meeting notes — I'm trying to do my best to capture everything as you're saying it — that we capture exactly how you want to state those two questions. One is: can we help find all the use cases — like you said, an exhaustive list of use cases for install scripts today would be great. And then I think you have the scope correct, in that this forum is trying to figure out what the CLI can do.
A
Independent
of
let's
say
registry
features,
and
I
think
that's
the
right
scope
to
to
keep
these
rfcs
too,
since
that's
something
that
we
can
actually
affect
and-
and
I
think
that
yeah-
that's-
that's
the
right
like
lens
to
be
approaching
this
problem
set.
I
don't
know
who
was
first
but
caleb.
Maybe
you
can
speak,
I
see
your
hands
up
and
then
miles.
G: Howdy. I think the best thing you could do for this RFC is to add an appendix with data about the number of packages that will run scripts, and an analysis of what kind of scripts they run. Then, in addition to using the data to justify whether it's safe — or what impact disabling scripts would have — we'd also be able to try to classify what people are doing.
G: Another thing I noticed is that the RFC is written suggesting that we would ignore scripts, or just disable running scripts. I'd be interested in failing if a package has a script and you haven't allowed it to run, because a package could be installed in a non-functional state if you just ignore its scripts. At least for DX, I think it makes sense to say: this package has a script, you've chosen not to run it — then fail, and require users to explicitly allow it.
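Caleb's fail-rather-than-silently-skip behavior might look something like the sketch below. This is hypothetical, not current npm behavior — today's `--ignore-scripts` simply skips scripts without failing:

```javascript
// Hypothetical "fail instead of silently ignore" policy check:
// if a package declares install-phase lifecycle scripts and the
// user has not allow-listed it, abort with an actionable error
// rather than producing a possibly non-functional install.
function checkScriptPolicy(pkg, allowList) {
  const lifecycle = ['preinstall', 'install', 'postinstall'];
  const declared = lifecycle.filter((s) => pkg.scripts && pkg.scripts[s]);
  if (declared.length > 0 && !allowList.includes(pkg.name)) {
    throw new Error(
      `${pkg.name} declares ${declared.join(', ')} but is not allow-listed; ` +
      `refusing to install rather than silently skipping its scripts`
    );
  }
}

module.exports = { checkScriptPolicy };
```

The design point is that the error surfaces at install time, when the user can act on it, instead of as a mysterious runtime failure later.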
D: I'm with you, Caleb, on the research — we've been talking about what kind of research we could do internally. Another bit of research that I'd really love to do is explore a handful of the top workflows and projects that people use. We're kind of enumerating them — Francisco, you're doing a great job of enumerating.
D
Like
you
know,
some
people
are
using
this
in,
like
ruby
and
like
they're,
never
even
running
in
node
and
there's
some
people
who
are
using
react
native
there's
some
people
who
are
using
electron,
there's
web
apps
there's
desktop
apps,
there's
cli
apps,
there's
like
functions,
there's
edge
like
there's
so
many
different
runtimes
and
targets
that
that
folks
are
are
trying
to
accomplish
things
and
if
we
think
about
like
what
are
some
of
those
top
workflows
and
what
are
some
of
those
top
tools
like
we
already
have
ignore
scripts
that
you
could
do.
D
What
are
those
workflows?
Do
we
break
today,
which
ecosystems
would
we
like?
I
always
like
to
think
about
javascript
and
npm
as
like
an
ecosystem
of
ecosystems
so
well,
it
may
be,
like
90
of
things
are
unaffected
by
this,
but
we
like
completely
break
the
react
native
ecosystem,
for
example
like
that
is
that's
bad
and
again
it
doesn't
mean
that
we
like
need
to
move
forward
on
on
script,
but
it's
just
like
another
form
of
research
and
notes
that
we
could
bring
together.
That,
I
think,
would
be
helpful.
D
The
other
bit
here
regarding
the
scripts
that
I
that
I
think
would
be
useful
and
I
think
my
brain
forgot
about
it,
because
there's
so
many
things
going
on
here,
give
me
just
a
half
second
to
reboot
and
see
if
I
can
defrag
my
brain
and
find
the
thing
I
was
looking
for,
I'm
stalling,
as
you
could
tell
no
it's
gone
so
I'll.
Let
you
I'll
get
I'll,
let
you
respond
and
then,
if
it
comes
back
I'll
in
like
five
minutes,
which
likely
will
I'll
let
you
know.
A: I just noted in the chat, as you were speaking, one nugget that came out — a recommendation about knowing when a package has changed or, let's say, introduced install scripts. There is a flag right now that we define specifically in the truncated version of the documents — the packuments, as we refer to the registry docs. That "corgi" doc actually provides a flag today, for each individual package version, saying whether or not it has some lifecycle scripts defined, which could potentially be used as an indicator in the future.
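The flag being described is `hasInstallScript`, carried per version in the registry's abbreviated ("corgi") packument, as served for `Accept: application/vnd.npm.install-v1+json`. A small helper over such a document might look like:

```javascript
// Lists the versions in an abbreviated packument that carry the
// hasInstallScript flag. Comparing this list across versions could
// surface a dependency that suddenly grows install scripts in a
// patch or minor release -- the trust-change signal discussed here.
function versionsWithInstallScripts(packument) {
  return Object.entries(packument.versions || {})
    .filter(([, meta]) => meta.hasInstallScript)
    .map(([version]) => version);
}

module.exports = { versionsWithInstallScripts };
```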
A: It could indicate whether an unexpected change has been introduced between, say, a patch or a minor version of a dependency — which might be a good flag that you don't trust that thing anymore; that the trust has changed, potentially. I just wanted to bring that note up, because you mentioned something in that space, and there is at least some information about it that we could use today. Miles, did you want to...
D: Yeah, you jogged my memory — thank you. The other thing I was thinking about here: we've talked about threat models a couple of times, which I think is really important. One of the conversations we've been having internally is about the various ways we can protect the ecosystem — protect people from installing compromised packages.
D
We
actually
do
this
today,
so
try
to
publish
some
packages
with
names
that
are
very
similar
to
top
packages,
and
you
won't
be
able
to
now
there's
a
threshold
and
we're
still
in
the
process
of
like
defining
and
fine-tuning
that
threshold,
but
we're
kind
of
thinking
about
this
idea
of
like
high
impact
projects
and
essentially
like
once
it
gets
past
that
threshold.
D
You
know
there's
some
additional
things
that
we
have
in
place,
and
I
mentioned
that
because,
realistically,
I'm
not
saying
100
of
the
time,
but
you
know
we
see
a
lot
of
malware
published
to
the
registry
and
in
general,
when
people
report
malware,
if
it's
a
package
that
has
you
know
zero
downloads
or
less
than
ten
downloads
and
no
dependents,
the
odds
of
that
compromising
computers
is
extremely
low
and
so
like
we
have
an
sla
as
to
like
how
long
we
take
to
respond
to
these
kinds
of
reports.
D
I
think
it's
like
48
or
72
hours
or
something
that
I'd
have
to
check
into
it,
and
this
is
like
internal
slas
for
the
different
teams
that
manage
it.
But
it's
like
we
don't
consider
these
things
to
be
high
risk,
they're
a
risk
we
need
to
clean
it
up,
but
like
the
odds
of
them
compromising
people
are
fairly
low,
so
realistically
in
feel
three
to
push
back
or
disagree.
D
The
the
cohort
of
packages
that
we're
concerned
about
here
are
highly
distributed
highly
downloaded
packages
that
are
compromised,
compromised
with
malicious
code,
malicious
code
that
is
using
the
scripts
as
an
injection
point
to
take
over
to
take
over
or
do
malicious
deeds.
The
reason
why
I
mentioned
this
is
like
there's
definitely
things
that
we
could
do
at
a
registry
level
as
well
to
have
signals
to
better
identify
these
types
of
scenarios
like
for
what
we
examine.
D
What
we
suggested
was
hey,
maybe
if
there
is
a
change
in
the
has
scripts
bit
on
an
extremely
highly
downloaded
package.
Well,
maybe
we
don't
promote
that
immediately
and
we
send
an
email
to
the
maintainers
to
say:
hey
did
you
mean
to
to
publish
this?
Is
this
you?
Who
actually
did
this
I'm
not
advocating
for
that
as
a
solution?
D
But
I
more
mean
that,
like
I
think
enumerating
this
problem
space
and
also
thinking
about
what
are
the
cases
that
we're
specifically
trying
to
protect
against
that
there's
other
solutions
that
we
can
come
up
as
well.
That
may
be
less
disruptive
that
have
the
same
kind
of
like
net
results
in
the
end
of
kind
of
protecting
people
from
being
compromised,
especially
if
we
think
that
we're
really
only
dealing
with
a
cohort
of
you
know
like
highly
dependent
on
highly
installed
packages
being
compromised.
That's
very
different
than
all
install
scripts
on
every
package.
A: I just wanted to give the floor to Francisco, then Bradley — and if we could also time-box this a bit, because we have a couple of other items, and we also noted that we want to go deeper into this next week. But obviously it's a good discussion, so feel free to go, Francisco.
B: Yeah, I just wanted to say I definitely agree with the point from a while ago that it should fail, and not just silently ignore scripts. I'm sorry that the RFC reads that way — the intention, in my mind, again in the CI case, was that your tests fail because the install failed, because some script didn't get run. In the same spirit, I also wanted to ask: how do you make this not be one big switch?
B
One
way
to
think
of
this
is
kind
of
like
ignore
scripts
plus.
I
know
I'm
kind
of
just
contradicting
myself
since,
like
you
actually
wanted
to
fail,
but,
like
I
think
and
correct
me,
if
I'm
wrong,
one
of
the
unfortunate
things
about
ignore
scripts
is
that
it
is
just
a
yes
no
like
if
we
could
quickly
push
out
and
by
quickly
I
just
mean
faster
than
doing
the
real.
You
know
everyone's
affected
by
default.
B
Change
like
it
might
be
take
a
year,
but
like
a
thing
where,
like
people
are
empowered
to
essentially
say
pretend
like,
I
want
default
off
with
the
ability
to
piecemeal
turn
on.
I
think
that
would
certainly
help
kind
of
this.
You
know
be
a
more
active
investigation
of
like
well.
B
How
often
can
people
actually
start
using
it
in
this
way
without
it
breaking
their
stuff,
because
I
think
right
now,
just
with
ignore
scripts,
the
the
threshold
is
artificially
high
as
to
how
many
things
break
because
they're
turning
everything
off,
not
not
just
some
things.
B
And
I
had
actually
one
registry
question,
slash
request,
maybe
in
terms
of
this,
oh
actually,
the
two
things
but
sorry,
let
me
say
one
thing,
then
I'm
gonna
make
my
registry
thing.
The
framing
of
this
has
all
been
security.
I
I
don't
know
whether
it's
worth
to
to
mention
and
there's
been
a
lot
of
people
that
have
become
interested
for
non-security
reasons.
Like
I
know,
versailles
is
very
interested
in
the
fact
that
you
know
a
a
set
of
packages.
B
That
is,
you
know,
more
declarative,
both
surfaces,
a
lot
more
interesting
information
right,
like
you
just
start
querying
for
more
interest
like
how
many
people
want
funding,
as
opposed
to
you
know
like
the
the
old
way
of
doing
things
where
it's
like
you're,
trying
to
solve
the
halting
problem
of
figuring
out
what
scripts
are
actually
asking
for
funding
or
whatever,
but
additionally
it
does
lead
to
like
a
significantly
more
cash
ability.
Just
because
you
know
everything
is
going
to
kind
of
the
mkm
happy
path
so
just
wanted
to
bring
that
up.
B
I don't know; I'm happy to leave that completely out of the RFC and the discussion, but there are people who have, surprisingly, become interested in this for non-security reasons. And then my registry question slash request: perhaps it's a legacy or technical thing, and for the record maybe this has changed, but at least as of the last time I checked...
B
...I don't know that I'm going to want a bunch of, you know, satellite packages, and then, when I go later to get the scope name, someone else has already taken it. So there's kind of just a fundamental expectations mismatch for users, in that literally the same name can be owned by two different parties. I think the implication when you see babel and @babel is that they have the same trust mechanics, right? But that's not the case.
B
As far as I know, currently. And I understand that there's probably a big legacy problem of, well, what about all the existing packages and scopes that are already owned by different people. But I would certainly be really interested in a from-now-on rule: when you register a package, you should just get the scope, right? It seems like asking for trouble to let someone else get the scope later, or to let you get the scope and not the package, if that makes sense.
A
For sure. I think that's definitely outside of the scope of where we can provide any kind of insight into a policy change like that, but we can take that back internally and discuss with our peers on the registry team, because it does sound pretty sensible. Bradley, I want to give you the floor, and then I was hoping we could... it'll be short? Okay, okay.
C
So one thing to know: when you're only concerned with popular packages, one of the problems is that on developer machines you have less popular packages; sometimes they're different from deployed packages.
C
So one of the attacks that I've seen in the past is you actually attack the developer machine, which is the most likely place where you do want to run your actual install scripts. So I don't think popular packages are the only thing you need to protect here. I think the scariest attacks are going to be on developer machines. Yeah, that's it.
A
So yeah, very interesting to consider, but cool. I just wanted to quickly move off this and note that we will leave it on the agenda for next week as well, since it ate a lot of time here, and encourage everybody to continue discussing this async, potentially in the RFC thread itself. I appreciate any analysis and surfacing of data as folks find it; I think that's great.
A
I know I'll try to be doing the same, and we've poked a few teams, at least on our side, to investigate the use cases for install scripts and what people are doing today. So I want to move on to the other item we had here: resolving registry overrides. Caleb, if you want to quickly go into this? I know we didn't really spend a lot of time on it last week.
G
Sure. So the core idea is that at Amazon we use different custom registries, and we'll build the same package against multiple different registries. Currently, lock files have a behavior where, if they use the default registry, it's a magic value that is always resolved as the currently configured registry. But if your lock file references a different registry, you can't switch to a new custom registry and install using that registry.
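The "magic value" being described is the resolved URL recorded in the lockfile. In lockfile v2 an entry looks roughly like this (package, version, and the truncated integrity hash are placeholders):

```json
{
  "packages": {
    "node_modules/lodash": {
      "version": "4.17.21",
      "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz",
      "integrity": "sha512-<hash>"
    }
  }
}
```

When `resolved` points at registry.npmjs.org, npm substitutes the currently configured registry; when it points at a custom registry host, no such substitution happens, which is the switching problem being raised.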
G
I'm proposing a couple of different options that would change the way npm behaves. One of them would just write out lock files without a resolved key, so that you always use your currently configured registry. There's another option which affects installing packages with shrinkwraps, which is actually a change we've observed in npm 7. Prior to npm 7, it appears that the resolved key was always ignored, but after 7 the resolved key is used, which has caused install failures on our network-isolated hosts for packages with shrinkwraps that reference, say, the Yarn package registry. So another option there is to ignore the resolved key when you're reading shrinkwrap files from packages that you're trying to install. There hasn't been much activity on this RFC since last week; I don't know if that means no one's interested in it or if it would be rejected.
A
Yeah, I apologize, I haven't looked into this enough myself since last week, and I'm not sure anybody from our team has. Isaac, have you looked at this, or anybody else?
E
I have, very briefly. I kind of read through it. It has been a thorny problem for a very, very long time, the fact that registry.npmjs.org is kind of this magic string.
E
I'm not confident about the thing to do here. Like I said, I read through the RFC, and I'm not really confident to comment on it and say whether it's necessarily the right solution approach or not. But it did strike me that this is a pretty good explanation of a use case that is not being well handled right now, and that's the ideal place to start.
G
Currently we have some custom build tools which will attempt to clean the package-lock after an install, or really any modification of the package-lock. They just delete the resolved keys, and that's working pretty well for us, but it does cause problems when people don't use those tools. So if somebody runs update, now they have a lock file that has these resolved entries. It'd be nice if we could bundle this into the CLI itself.
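A minimal sketch of the kind of cleanup their internal tooling does: recursively stripping every `resolved` field out of a parsed `package-lock.json` object. This is not their actual tool, just an illustration of the idea; the lockfile shape below is a tiny example.

```javascript
// Recursively delete every "resolved" key from a parsed package-lock,
// so npm falls back to the currently configured registry on install.
function stripResolved(node) {
  if (node === null || typeof node !== "object") return node;
  if (Array.isArray(node)) {
    node.forEach(stripResolved);
    return node;
  }
  delete node.resolved;
  for (const value of Object.values(node)) stripResolved(value);
  return node;
}

// Example on a tiny lockfile-shaped object:
const lock = {
  lockfileVersion: 2,
  packages: {
    "node_modules/lodash": {
      version: "4.17.21",
      resolved: "https://registry.internal.example.com/lodash/-/lodash-4.17.21.tgz",
    },
  },
};
stripResolved(lock);
```

In practice this would be wrapped with a read of `package-lock.json` and a write of the stripped result back to disk.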
B
Yeah, just a question. I'm trying to read it really quickly, but we're certainly interested in registry resolution stuff, so just to say at least one thing: I definitely think it's worthwhile. I'm curious if this is specifically about switching registries after the fact, or if it would allow package-by-package registry options, which I think is kind of hard now. But again, it's been a while since I've thought about this, so I apologize if this has a well-known solution or something and I'm derailing the conversation at all.
G
This RFC doesn't really address package-by-package registries. I know that's currently supported as long as they're scoped; you can select different registries for different scoped packages. I don't know what effect that has on the package-lock, though.
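For reference, the per-scope registry selection mentioned here is existing `.npmrc` configuration (the scope and host names below are placeholders):

```ini
; packages under @myorg resolve against the internal registry
@myorg:registry=https://registry.internal.example.com/
; everything else uses the default registry
registry=https://registry.npmjs.org/
```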
E
No, not without assigning them to a scope. Okay, yeah. So the use case that we're really optimized for is for your internal registry to be a pass-through caching proxy, at least, so that anything that can be fetched from the public registry could be fetched from your internal registry. That seems to be at least a fairly common approach to solving this at a lot of companies we've talked to. That works with the resolved values on registry.npmjs.org, assuming your internal registry is set up in such a way that it's a forward proxy, and also assuming you set the --registry= config value to that internal registry. And then it kind of says: oh well, this resolved value is registry.npmjs.org, but I'm actually looking at a different registry right now, so I'm going to fetch it from there instead, using the same URL path. But like I said, it's thorny and it's sticky, and there's a lot of weird hidden magic in place to support npm 5 and 6.
E
So it's definitely in need of a refresh: if not now in npm 8, then certainly something we need to carefully consider for npm 9, since that'll be a bit bigger a breaking change anyhow. So both the current...
G
...a problem, but in our case our private registries don't host tarballs at the same path. It would work for...
E
So you have to actually fetch the packument and look at the dist.tarball field and sort of re-resolve it. Maybe the other downside of this approach of just saying "throw out the resolved field", which is working, is that it's adding a considerable number of extra HTTP requests. Now, in your case those extra requests are necessary, because you don't have the correct data, so better to make those extra small JSON requests than to try to fetch a tarball, get a 404, or crash your whole build.
G
Yeah. So one of the alternates that I proposed was another configuration option for recording a registry. With this existing magic behavior for the default registry, we only have a problem when we generate the lock file while we're using a custom registry, because the lock file references that custom registry, which doesn't have this magic behavior.
E
Right. We could also leave the backwards-compatible section of the lock file in place. We encountered this magic-string behavior prior to biting the bullet and making lockfile v2, when we were still trying to make it work without a bump to the lockfile version. Within that dependencies, or within that... gosh, I forget now which one is which. Within the packages section of the lock file, we're much more flexible, right?
E
We
only
have
to
worry
about
backwards,
compatibility
to
npm
seven,
and
so,
if
we
just
stop
putting
the
resolve
field
there
and
start
putting
some
other
fields,
that
is,
you
know
a
little
bit
more
like
explicit
and
a
little
bit
more
clear
that
like
hey,
this
is
where
we
got
it
from,
but
this
wasn't
the
default
registry,
or
this
was
the
registry
we
were
using
at
the
time.
So
if
you
switch
registries,
you
have
to
re-resolve
it
there's
probably
some
interesting
solutions
we
could
explore
there.
A
In terms of action items here, do you want us to give a green light to explore some of this, Caleb? It sounds like you have some time to even potentially work on that implementation. Is that right?
A
Maybe we can get somebody from our team to pair with you, even for an hour-long session or something like that, to look into this. Let's queue that up, I think, just to investigate this further so we actually make something happen here.
A
Let's take that away and try to set up a call with somebody from our team and you, Caleb, if you've got time in the next week or so. That works for me. Cool, awesome. I want to give the last five minutes to the other two RFCs that we've had open for a while now. I just followed up...
A
I know I followed up today on the running-prepare-scripts-for-linked-deps one, to connect Matt with Michael Perra, who used to be on our team and who said he could help with moving that forward. I'm not sure if you have any other updates there, Matt, that you want to share.
F
I do think I've made some progress on this, sort of the hacky solution, or the manual solution, of just sort of hard-coding the coordination of running pack and prepare on linked, bundled dependencies.
F
But, and I think this is something that came up maybe in one of the previous meetings, it doesn't work deeper than one level. So if I set up the prepare script, and a depends on b and b depends on c, I've gotten as far as orchestrating it so that when I pack a, it includes the linked, bundled b, but the linked, bundled b doesn't include its own linked, bundled c.
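To make the a/b/c shape concrete, a minimal sketch of `a`'s `package.json` (names and paths are just the placeholders from the discussion; `b` would carry the analogous `file:` dependency on `c`, plus its own `prepare` script):

```json
{
  "name": "a",
  "version": "1.0.0",
  "dependencies": { "b": "file:../b" },
  "bundleDependencies": ["b"]
}
```

Packing `a` runs `b`'s prepare and bundles it, but, per the report above, the bundled `b` does not in turn include its linked, bundled `c`.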
F
Okay, is there... To fix this right now, I could... I'll definitely take the introduction and reach out to Mike, and I'll probably make some noise in one of the Slacks this week.
A
Could you make, and I'm not sure if you've already done this, a simple use case or test case project that we could be looking at? Yes? Just so we can poke at it and play with it collaboratively, even.
A
Yeah, cool, that'd be really helpful. And yeah, I apologize it took a while, but hopefully you and I can work on that; I've got you set up there. So, Isaac, did you want to give any update on PR 375, the dependency outlining of what's shared amongst workspaces?
E
Yeah, I think that's still pending some more meeting of the minds between Jordan and me. I think where we left it was: hey, this is a good step; it doesn't go as far as anybody would really like, but it also has basically no downsides.
E
Let's go ahead and do it. I have a hacked-together, sort-of-working implementation. The next step of it, to do it as almost a pseudo-isolated mode, like an isolated-mode light for workspaces, is definitely...
A
Okay. Is that something that you folks are going to get together on in the next week? Is there a call you can set up between you two, maybe?
A
Okay. Do you need any help figuring out schedules, or can you two take that away? Yeah, you know how to get hold of each other; I've seen you talk quite a bit in Slack before. Okay, that'd be great, though, if you can find some dedicated time even over the next week just to hash things out; I think that would be really helpful. I know Jordan said he hates writing RFCs, so...
A
Okay, I know we're at time, so I appreciate everybody jumping on today. Apologies if we didn't get to something that you wanted to discuss, but I definitely appreciate any discussion that can be had async between now and next week, and we'll probably keep most of the items that were on today's agenda for next week as well, in case you didn't get to comment on something. But yeah, I appreciate everybody jumping on today, and I'll see you next week. Cheers.