From YouTube: 📦Package Managers WG Weekly Sync March 5, 2019
A: Thanks, Jim. Cool, so what have I been doing? Right, so I've been on holiday, so I haven't actually done very much. I found a problem with npm-on-IPFS, where the pub/sub mechanism that it uses to broadcast updates of new modules will just stop working after a while. Turns out there's a magic number for how big a pub/sub message can be, and if you send a message that's bigger than that, subscribers silently disconnect from you. I didn't see that documented anywhere, but there was an issue that said it might be a good idea, so it seems like somebody's implemented it without documenting it. Vasco is going to have a look into the implementation, see whether that's how it works, and make sure it throws an appropriate error message if you're about to send a message that's going to cause people to disconnect. It's not going to stop malicious clients doing it, obviously, but for the developer who's trying to do the right thing, it'll be nice for them to know sooner rather than later. Yeah, that's it. I'm not blocked on anything.
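The failure mode described above can be guarded against on the sending side. A minimal sketch, assuming a 1 MiB cap (go-libp2p-pubsub's default maximum message size; the exact limit and behaviour in any given implementation are assumptions here):

```python
# Hypothetical guard for a pubsub publisher: reject oversized messages up
# front instead of silently losing subscribers. The 1 MiB figure matches
# go-libp2p-pubsub's default, but treat it as an assumption.
MAX_PUBSUB_MESSAGE_BYTES = 1 << 20  # 1 MiB


class MessageTooLargeError(ValueError):
    pass


def safe_publish(publish, topic, payload):
    """Call `publish(topic, payload)` only if the payload fits the limit."""
    if len(payload) > MAX_PUBSUB_MESSAGE_BYTES:
        raise MessageTooLargeError(
            f"pubsub message is {len(payload)} bytes, "
            f"limit is {MAX_PUBSUB_MESSAGE_BYTES}"
        )
    return publish(topic, payload)
```

This turns a silent disconnect into an immediate, explainable error for the well-behaved publisher, which is exactly the developer-experience point made above.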
A: No, I think they can be shared, yeah. They can be shared. It's things like adding missing features to IPFS. Well, I was trying to add support for IPFS to tink, which then turned out to mean adding IPFS support to npm itself, because tink just uses the npm CLI to install dependencies, and I got so far down that hole that it was clear it would require more work than I imagined the npm team would be willing to accept without some kind of prior consultation.
B: IPFS support in Homebrew: I've not sent a pull request yet, because I did share the work that I've done with Mike McQuaid, and he said that it would be unlikely that they would merge an IPFS transport until a package had its primary source available via IPFS. Which was interesting, because IPFS is a Homebrew formula, so there might be a lever there that could be used to kind of bend that slightly. It actually turned out to be quite easy.
B: Homebrew already has a number of different transports that you can use to make different kinds of checkouts — I'll paste a link in the chat here as well. It uses the original URL, and then you basically add the hash in as an extra argument, and if that hash is available, it will try to download it via IPFS.
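That "original URL plus an extra hash argument" scheme can be sketched roughly as follows. The gateway URL and function shape here are illustrative assumptions, not Homebrew's actual API (its real download strategies are Ruby classes):

```python
# Sketch of a download strategy that prefers IPFS when a content hash is
# supplied, and falls back to the original URL otherwise. All names are
# hypothetical.
ASSUMED_GATEWAY = "https://ipfs.io/ipfs/"


def pick_download_url(original_url, ipfs_hash=None):
    """Return the URL to fetch: the IPFS gateway URL if a hash was given."""
    if ipfs_hash:
        return ASSUMED_GATEWAY + ipfs_hash
    return original_url
```

The nice property is that the formula keeps its canonical URL, so nothing breaks for users without IPFS; the hash is purely additive.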
B: And then the nice thing about Homebrew is that you can fork the whole registry and add that in, so we could potentially have a fairly simple way of saying, like, brew install ipfs/core/package-name and have that pulled from IPFS — assuming that we've backed all of those formulas somewhere, so that there's always someone seeding them. The only real area that's not supported, from the end user's point of view, is bottles, which are the pre-built binaries.
B: So that would be fairly easy to add in. The area that would be more difficult, to make it especially fully IPFS, is cloning the Homebrew formula repository from IPFS rather than from GitHub, because it is a gigantic repository — it was 200-and-something megabytes — so there could end up being some files that were rather large there, and making that work with the existing git setup.
B: Basically, it always clones — or shallow clones — the latest version of the repository, so they don't have the same kinds of different ways of transporting their registry around, which means we can't just pluck a copy of the file system as the latest checkout. Which is a bit of a shame, but it might be doable, and it's also the kind of thing that you could step into incrementally.
B: Yes, Jim, I did check out git on IPFS. There's a tricky limitation with the maximum size of a file. That gets into the problem of trying to make it so that you can do git clone with IPFS as the transport, rather than just downloading a git repository that happens to be on IPFS. But I think there's things that can be done there, and it doesn't necessarily have to be all in straight away.
B: So it might be interesting to basically write a script that will go through the formulas, add them all to IPFS, and commit the hashes to a fork that someone could then use instead. And Mike McQuaid, the Homebrew maintainer, seemed pretty open to and interested in it, rather than kind of dismissive, so there might be an option there. And what else did I do?
B: I also spent some time reworking, or writing out, a different approach to categorization, grouped by implementation from an IPFS standpoint, and that basically gives three different categorizations that quite closely match my previous categorizations of multi-registry, centralized registry, and portable registry. The main groupings being: filesystem-based, which is most system package managers. That's probably because rsync works really well for keeping mirrors of those things — they can literally just treat everything as a file in the file system.
B: Copying that around is actually fairly easy, but it means all their metadata is stored in files. Maven and CPAN also work like that. Then you've got the database-backed ones, which tend to be more modern and have, like, SQL or some kind of web application with a database that is doing authentication and handling the transformation of packages. Usually it means it is quite self-service: you don't need to get someone else to add the packages to the file system for you, which happens on Maven, for example.
B: You need to send your package to Maven Central and say, can you publish this? Whereas most of the database ones don't require someone else to be available for you to publish your package. But the downside is that it's not anywhere near as easy to mirror, because you need a copy of the database, you need to run the web application server, or you need to make something that pretends to be the web application.
B: So they don't have to host their own databases and be their own DBAs — but also you then have some kind of history of versioning, which requires a lot more implementation if you're going to do it yourself. So it's kind of the lazy-maintainer way, which is totally reasonable. It does mean that when you get to Homebrew's size and history, you can't actually view the history of any one package in git, because GitHub just goes "no, this times out."
B: It takes way too long to step through the git history. But there's some interesting things there. Also notice that, apart from Homebrew, most of those all have a database as well to power their web applications. Homebrew actually exports their registry into JSON files and puts it on GitHub Pages, so that you don't have to use git as a database in production, because that ends up being incredibly slow in general. And then the registry-less style git registries are basically like Go or Swift or Carthage.
B: Where the namespaces in those cases are URLs on the internet that point to git repositories, and the history of the versions is stored inside each one of those git repositories. So the actual dependency resolution ends up being quite slow, because you're cloning git repositories recursively and trawling their history — their tags or their git commits, maybe on different branches as well.
B: There's no requirement for anyone to check anything when any new published package goes out, so there's very little in the way of gatekeepers. Which is where Go comes from: the assumption that everything is going to be vendored anyway inside of Google, and this also happens to work for other people, maybe. They're slowly bringing it back to being useful for people outside of Google, but it's still in flux.
B: A lot. Whereas Swift and Carthage are much more stable — they're not changing on a regular basis. And they're all susceptible to a repository going away, or a repository being force-pushed to; some of them will keep track of what the repositories looked like the last time they talked to them, and other ones won't. So that gives an interesting way of thinking about it: when we come to solve a problem for one of the registries on IPFS, potentially that maps to the others like it, so we can document that process and then go, oh —
B: What else did I do? So I just opened this issue about making this working group public. It's really more of a discussion starting point — the link's in the CryptPad, and it's on the GitHub repository — just thinking about, like, are there any steps that we need to do before we can start having these discussions in public, so we can point people there? I tried to organize a call with someone in New Zealand, and it's actually really hard to schedule a time that's not, for one of us, like 3 a.m.
B: So I thought it'd be much easier if we could do this asynchronously on a document that was on GitHub. And the other thing — a tiny little thing — as I was learning about Homebrew and how that worked, I found the IPFS package depended on a number of things that weren't actually used, because IPFS's Makefile bootstraps itself via IPFS through the gateway. It doesn't actually use gx when it's building; it downloads its own copy of gx. So I was able to remove three different dependencies from the Homebrew formula.
B: So anyone who installs IPFS via Homebrew now — it's just much faster, because they're not pulling down dependencies that never get used. And that got accepted, along with a number of fixes that I found for formulas that use Go. Homebrew basically implements its own wrappers around package managers it doesn't trust — Python and Go being the two that I've stumbled across so far.
B: They're like: well, put in every individual top-level, or potentially even transitive, dependency as git repositories in your Homebrew formula, and they will be cloned into the path they're expected to be in once the package manager would have finished doing what it's doing, so that a formula is repeatable rather than changing every time the bottle server builds it. Which is quite interesting — it's a bit of a crutch in that they basically work around how other package managers work, although Homebrew has its own problems with that, where they're basically always like —
B: The "how do we get it the first time around" question is slightly painful. It's more like: we're going to need to have this as a good transport, and this is what the IPFS project says is the best way to download it — even if, the first time, you're going to download it over HTTP through the gateway, rather than having to somehow find the IPFS binary in order to install the IPFS binary.
C: So, in the sort of category of package managers where we can probably take the database and run with it: one thing that I worry about is that, a lot of the time, the way that you export, or get a feed of, that data is somewhat out of date, or it eventually becomes out of date. This actually happened with npm. So, like, initially the feed that you would use to sort of replicate —
C: The database was the live database, and then, as it scaled up, that changed, and it went from being a couple of seconds behind to — now there are literally different systems, and because the updates are so fast, it's like ten minutes behind. So it's still a really good option if we want to build an offline-capable version of the package manager: we can pull the data in, we can turn it into IPLD terms, or whatever we need to do to move the data around.
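The npm replication feed being discussed is a CouchDB-style `_changes` feed (npm's public one historically lived at replicate.npmjs.com). A minimal sketch of consuming one batch of such a feed and checkpointing by sequence number — the field names follow CouchDB's response format, but treat the endpoint details as assumptions:

```python
# Sketch: process one batch from a CouchDB-style _changes feed and return
# the last sequence number seen, so replication can resume from there.
def apply_changes(batch, handler, since=0):
    """`batch` is shaped like a CouchDB _changes response:
    {"results": [{"seq": 3, "id": "left-pad"}, ...], "last_seq": 3}
    Calls `handler(change)` for every change newer than `since` and
    returns the new checkpoint sequence."""
    last = since
    for change in batch.get("results", []):
        if change["seq"] > since:
            handler(change)
            last = max(last, change["seq"])
    return last
```

The checkpoint is what makes a mirror resumable: the lag the speaker describes is simply the distance between your checkpoint and the registry's head sequence.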
B: Yeah, all of the database package managers are going to have a similar problem — npm is actually pretty reasonable for getting data out. Some of them don't even have RSS feeds or any way of saying "is there a new package here?"; you literally have to trawl through, maybe even paginate, a list of HTML links on a page to find out if there is a new thing. So for those, I think there are two approaches.
B: One I can see is that we actually get IPFS support directly in the production registry API. So it's doing the work of: oh, I published a new package, so I've also announced it and added it to IPFS, added the CID into my database, and it's available via the API. And the other way is the kind of Artifactory-style proxy.
B: That's going to lazily go to the registry, and ideally have enough caching — or IPFS-backed caching — that it can go offline, or it can be lazy in the way that it's doing it. Also, there's often no — I think npm just recently added the ability to roll back in time and say: oh, I want packages from before this date.
B: Most of them don't have that kind of ability. It's always assumed that you're running on the bleeding edge of the registry, and things may or may not work otherwise. The interesting one there, that stands out to me, is that the R community has a package manager called CRAN, and their package manager doesn't allow you to pin: you basically can't say there's a maximum version number, which means you always pick up the newest version.
B: It's called MRAN, for Microsoft CRAN, and so you can actually point your registry at it: when you say "I want to install all the packages for this R program," you say the date that you want to install it as of. And everyone then basically freezes their whole community in time, because they can't come back out from under a broken new version — you can only roll forwards — and that's not really an acceptable way of doing it.
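The MRAN model — resolve against the registry as it looked on a given date — is easy to express over any registry that records publish timestamps. A sketch of that resolution rule (the data shape here is invented for illustration):

```python
from datetime import date


# Sketch: pick the newest version of a package published on or before a
# snapshot date, which is how a date-pinned registry like MRAN behaves.
def resolve_at(releases, snapshot):
    """`releases` maps version string -> publish date. Returns the newest
    version visible at `snapshot`, or None if nothing was published yet."""
    visible = [(published, version)
               for version, published in releases.items()
               if published <= snapshot]
    return max(visible)[1] if visible else None
```

Note this gives you repeatability (everyone using the same date gets the same set) but, as pointed out above, no way to *selectively* hold back one broken package — the whole snapshot moves together.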
B: So IPFS is, like, the ideal copy there, to go: well, you can always roll back to a previous version — especially if we're using some kind of top-level root that holds all of the metadata. You can always point back to a previous version of that metadata, assuming that someone else is hosting it or has made it available, or, I guess, you've become responsible for your own mess there at some point.
A: I think we should really be pushing for the first option, where we get people to add CIDs directly to the metadata for the actual registries that we're trying to target. Because we've got this incoming npm-on-IPFS thing and, to be honest, it really shouldn't exist, you know? Because we have to write it, we have to maintain it, we have to host it. Whereas, if the actual registries were just adding the CIDs themselves, then I think —
B: So that's basically: we take snapshots of everything all the time — and the Timeless Stack obviously also has that; Eric's just quietly lurking down there at the bottom. And I guess you have it halfway with the git-backed ones, but you're hoping that a lot of the git-backed ones will have the metadata stored in an immutable way — yet they point at HTTP URLs on S3, or randomly on — like, Homebrew points at whoever is hosting the source originally, which could be some university; it could be anything.
B: It doesn't mean I can roll back six months and all of the URLs on all of those formulas will resolve correctly. So you're not going to have that nice way of being able to snapshot and roll back the whole thing — which is why you see lots of companies use, I think, Artifactory and things like that, because they want to be able to freeze the world.
D: So the main thing: ES6 modules. They've been coming for a few years in web browsers, and they let you write ES6 JavaScript where, instead of doing require with an npm name like in Node.js, you can import — you can import from an HTTP URL. So it's really the opposite of package managers.
D: But the thing is, all the JavaScript lives on npm. So there's this CDN service called unpkg which allows you to use this import syntax via HTTP to the CDN, and the CDN unpacks all the npm packages — it's sort of exploding all the npm packages out so they can all be referred to by HTTP. But it's got to do some special little tricks to do it. This is the project, and it seems interesting to me, because —
D: They've got it out, and their new service is this HTTP interface where, if you're using import syntax in your raw module, it will just pull the raw file — there's no build necessary, right? That's why they've been indexing all of the packages that actually do that; that made no sense until they released this thing, and now that they've released it, it's like: oh yeah, it's actually really elegant — you don't need a separate build.
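What unpkg's module mode does, in essence, is rewrite bare import specifiers into full CDN URLs so the browser can follow them. A rough sketch of that rewriting idea — the regex and URL scheme here are simplified assumptions; unpkg's real implementation parses the JavaScript properly:

```python
import re

# Sketch: rewrite bare ES module specifiers ("lodash") into absolute CDN
# URLs, leaving relative specifiers ("./local.js") alone. This mirrors the
# idea behind unpkg's ?module mode, not its actual implementation.
CDN = "https://unpkg.com/"
IMPORT_RE = re.compile(r'(from\s+["\'])([^./"\'][^"\']*)(["\'])')


def rewrite_imports(source):
    return IMPORT_RE.sub(
        lambda m: m.group(1) + CDN + m.group(2) + "?module" + m.group(3),
        source,
    )
```

The trick is that once every specifier is an absolute URL, the browser's native module loader can resolve the whole graph with plain HTTP requests — no bundler involved.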
D: Anyway, just the idea of doing raw import over HTTP with the source files on npm — I think that's a powerful idea. There are some related things. So Node.js was written by Ryan Dahl, and he's doing this new thing called Deno, which is like Node but with the characters switched around, and he's basically saying: no npm, I hate npm. Which is funny because, you know, he used to be best friends with Isaac, but —
D: I haven't talked to him, but it's sort of awesome, because it's like — anyway, if we had the raw files... The other weird thing is he's saying only TypeScript files. But there's already an issue on Deno, which he closed, saying: support for IPFS? And he's basically got a "not anytime soon — I can conceive of it happening someday, but it's in the future" position, because IPFS is not very good right now and Deno is not very good right now, because it's —
C: One of his complaints is the immutability issues, right? Like having to resolve versions and all of that kind of mess. And he sort of likes the Go model, where you just point at a URL, but he hates the fact that that's mutable. So I think if they get there — yeah, there could be an interesting opportunity, yeah.
D: And then another interesting project, from a guy involved with that — he's actually done this for IPFS. It's basically taking Node.js and messing around with the loaders, so you can load ES6 modules over HTTP directly. So you can use import syntax on Node.js itself, so you can use unpkg. And Gozala — Irakli — has actually been using this in his lunet project.
D: So that's — and he said, hello, it's not so bad: it gets cached, so once it's loaded — maybe the first time it's going to be a little bit slow, but I've never seen anybody do really big things with it. So I've done some experiments with ES6 modules myself on the web, gluing together really big packages, and it can get pretty slow, but once everything's synced locally, it's not so bad. So I've got some experiments from last year.
D: I can show that. So I think this is somewhere we could go. I think another project to mention is TX GS, which is one of ours — JavaScript modules to IPFS. I don't know how this all fits, but we're going to have npm, we're going to have all this JavaScript, and we can sort of blow it apart and publish it immediately. So I could drop some of these links into an issue on the project — I don't know if it fits into the scope here.
B: On gx: I've been looking into it — I'm watching some of the threads, especially on the Protocol Labs cluster of Go projects. Lots of people have a lot of pain with it, and for modules that are dependencies of IPFS but get used by other projects as well, they really want to not have that pain.
B: I think the main source of pain is where gx requires every transitive dependency to declare its dependencies as hashes — basically frozen-in-time versions — which means, if you want to update any transitive dependency, you need to update everything in the chain to be able to get the latest version of the transitive one. So I'm going to write up a little proposal. But it's also not clear — there seems to be a bit of confusion within Protocol Labs over how much gx is something to keep.
B: "We definitely want to keep using it" versus "we should just let Go modules take over and not worry about it." So I think making an issue and sticking a little flag in it, to be like, hey, should this be paid attention to, might be a good place to swing the conversation one way or the other, I think.
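The transitive-hash pain is easy to see if you model the dependency graph: changing one package's hash invalidates the recorded hash in every package that transitively depends on it, so the whole chain must be re-published. A small sketch computing that blast radius (package names below are just examples):

```python
# Sketch: given direct reverse-dependencies (who depends on whom), compute
# every package whose pinned hashes must be re-published when `changed`
# gets a new hash. This is the gx situation described above.
def must_republish(dependents, changed):
    """`dependents` maps package -> set of packages depending on it
    directly. Returns all transitive dependents of `changed`."""
    out, stack = set(), [changed]
    while stack:
        for dep in dependents.get(stack.pop(), ()):
            if dep not in out:
                out.add(dep)
                stack.append(dep)
    return out
```

With version ranges, updating a leaf touches one lockfile; with hash pinning at every level, the update propagates to the root, which is exactly the chain-update burden described.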
B: You can see a world where gx turns into more of a tool for end users, or end applications — web applications written in Go — where it's a snapshot of frozen dependencies in time, but it actually continues to work with the underlying tool that each project has decided to use, which basically is going to end up as Go modules, if in —
B: — a very slow, kind of transformative way. It's going to be similar to how Node and the browsers are all moving towards ES6 modules: it's always going to have to have some kind of support for previous things, if some maintainers don't want to get on board straight away, or there are dead projects which are still heavily depended on. It's going to take a long time to shake out all the edges of the leaf nodes of that massive graph. Yeah.
D: So it's like, do we want to keep gx around? But I think it's useful as a learning exercise in a lot of ways, in terms of, you know, what happens if you actually do try to freeze everything and expose it — like, if developers have to deal with hashes, and check the hashes and stuff. I tried to do some Go stuff with gx and I just kept crashing into it, and yeah, the transitive dependency thing — it's really difficult.
B: Particularly when the import string contains all the metadata that you have for that whole module. There were no other files — not until Go modules came along — that said anything; you'd need some third-party tool to keep track of the actual revisions, the known-working versions, of any of those modules. So then, swapping the hash out, you're like: oh, suddenly this means nothing to me — I have no idea where the reference point of this is; it's somewhere else.
C: Remember — this is going away back — but Nodejitsu, before npm was a company, was sort of running the registry for a minute, because they acquired IrisCouch, the company that was running the npm registry, and they had built this concept that never got released as a product. Essentially, what it did was: when you gave it a package version to install, what it would return —
C: — was basically the computed deep graph of everything that it needed, right? So it sucked for caching — that was one of the main problems: you couldn't really give it your local cache, and you would get back this tarball with stuff that you already had in it. But it was significantly faster than sort of walking down that data and then resolving it yourself.
C: And if you look at a lot of the performance improvements that npm has made since, a lot of it is using the registry metadata, before you pull down a package, in order to understand the graph, right? I wonder, in particular, if we did something that allowed offline ability: you could create these really nice cache states, and then, when you give back that cache state as sort of an immutable graph, it would have references to a bunch of things you actually could have cached locally.
B: RubyGems has a very elegant solution to that. They have a particular API they had to build for Bundler because, as RubyGems grew, it got way slower to do "download a gem, check what the dependencies are, go and do the same work again." They have — I think it's about a four-megabyte file, quite crammed with information — that gives the current state of all versions, and all of their dependencies, that are published on RubyGems.
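The file referred to is RubyGems' "compact index" (the `/versions` endpoint Bundler fetches). Its lines look roughly like `name version[,version…] <checksum>`; a sketch of parsing that shape into a lookup table — treat the format details here as an approximation of the real spec:

```python
# Sketch: parse compact-index-style lines into {gem: [versions]}.
# Real compact index files also carry info-file checksums and yanked
# markers (versions prefixed with "-"), which this parser simply skips.
def parse_versions(text):
    index = {}
    for line in text.strip().splitlines():
        name, versions, _checksum = line.split(" ")
        index.setdefault(name, []).extend(
            v for v in versions.split(",") if not v.startswith("-")
        )
    return index
```

One flat, append-only file that answers "what versions exist?" for the entire registry is what lets Bundler resolve without a network round-trip per gem — the point being made above.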
B: So you don't need extra requests for your dependency resolution. It feels like more package managers could use that to speed things up, especially ones like Go and Swift, where it's "I need to keep cloning git repositories until I've..." — especially when you're like: okay, why am I cloning all of Kubernetes just to pull this one package embedded inside of it?
C: Yeah, I mean, this is why I'm a little bit more interested in sort of the Docker layer model for what we could do, because the interesting thing about these Docker layers is that they are immutable snapshots of a file system. So if we have the hash of them, we can effectively say "okay, we have that" before we've pulled down any data, and when they try to access the data in that file system, we can give it to them in real time.
C: There was a proof of concept that Mathias Buus wrote on WebTorrent a long time ago — this was like four or five years ago — where he wrote a FUSE file system on top of WebTorrent and would basically mount a Docker image. He would have none of the files, and then it would boot — it would be, like, a half-a-gig image or something, and it would boot in like 10 seconds, because the things needed to boot the Linux image are like a couple of megs.
C: You just have no idea what those files are until you need them. So we could get, you know, potentially insane performance improvements out of things that we can treat as a layered file system. Theoretically, we could do the same thing with any gzip — it's just unbelievably more expensive to do the work upfront to unpack the tarballs and turn them into real files.
C: And I did a proof of concept earlier, on top of IPLD — the UnixFS stuff — just looking at what we could do with deduplication if we were unpacking the gzips. And it turns out that deduplication does not save you as much space as you'd think; also, the data itself that we would be storing would be bigger, so that would be problematic.
D: Okay, what you're describing is basically the last thing on my blog — which I haven't updated in a year and a half — called pipette, which built off work Max and Mathias were doing. I would boot a whole Linux container, and it would load up the blocks over the network; but then, because it pages sparsely, I would just capture what was paged in, and then I would —
D: — gzip-compress it or something like that. And it would turn out that everything you need to boot up a particular Linux image could be packed into like 30 megabytes, and do some pretty heavy work, because most of the stuff in a typical image — like 99 percent — is never used. And that was really interesting, because then you can just ship the little zip ball around, and yeah.
C: I posted an interesting visualization in the metrics repo — I posted a link to it here — but it's just the Q4 2018 additions to each package manager, and it gives you a good visualization of the difference in scale between a lot of the different common package managers and how many packages they're adding. Also, for reference, the last time that I looked at the data for this, updates to packages are also happening at a much higher rate on npm as well.
B: A count of unique versions as well might be an extra data point to add in there, but npm is miles away. The other one that's really hard to pin down is Go, because it depends what you declare as a package. So, in general, all statistics related to Go package management could just go out the window, yeah.
C: No, actually — ModuleCounts for a while was keeping track of GoDoc as its package-manager number for Go, and so I did a quick analysis, and it was over-reporting by about four times what the actual packages were, looking at it per repo. I did this — yeah, it was pretty bad, yeah. There's just no way to do anything there — like, I think we could probably do —
C: Yeah, well, so my issue with it was, I couldn't figure out a way to do it for more than just one package at a time, right? So if I wanted to look for, like, "I'm depending on something with ipfs in it" — that would be doable. But if you just wanted to categorize any package and get a unique list, it looked like you were going to hit the query limit pretty quickly.