From YouTube: ⚡️ⒿⓈ Core Dev Team Weekly Sync 🙌🏽 December 3, 2018
Description
https://github.com/ipfs/team-mgmt/issues/650#issuecomment-443755007
Try out npm on IPFS!
https://github.com/ipfs-shipyard/npm-on-ipfs
`npm config set registry https://registry.js.ipfs.io`
A
Alright, I have power. It's that time of the day again, at that time of the week — it's time for the JS core dev team weekly sync. It's December; it's the third of December, 2018. The hackpad is in [the chat], and Jacob has volunteered himself as note-taker — thank you very much. Please add your names to the attendees list on the hackpad, and we'll get started in a moment.
A
Also, if you haven't put your weekly update down, then please add it in. We will now go for a round of updates to see how everyone has done this week, what they're blocked on, and what they're planning on doing next week. Who is first on the list? Vasco is first on the list — would you like to give us your update?
B
Sure, hello. So last week I worked mainly on getting [inaudible] ready to be merged this week. I fixed several review comments on both of the PRs, and I also created interop tests to diagnose [issues with] the DHT. Now we have interop green for both of the PRs, and Hugo and Jacob also added some reviews.
B
So we are on a good way. Then, while running the interop tests for the DHT, I also found some problems with the new files API, and I'm fixing the interop tests for that too. I also worked on the DHT stress tests — last week I did intensive work on the simulation tests, and I added [more] intensive tests as well. For this week, I want to get [inaudible] and the DHT finally merged.
D
Alright, I have been working on the IPLD formats stuff, because if you read the specs you find out that they are not really good or accurate, so I've been fixing a few things there together with [inaudible]. I also worked on starting to implement the new JavaScript IPLD [API], just to see if the specs actually work in practice — so it's not just on paper.
A
Any questions for Volker? I can recommend: if anyone on the JS team hasn't gone and looked at the proposal that Volker added for the IPLD API changes, then you should go and take a look, because it's rad and super interesting — but also, you'll probably have to be using it at some point soon, so you might as well contribute to it, or at least read it to know what's going to happen. So yeah, cool, thanks Volker. Next person is — oh sorry, any other questions for Volker? No, I see no hands. Cool. It's me, okay.
A
So yeah, last week I added some CID-version-agnostic interop tests. I put a picture of the [results] up — [we've] got some work to do, but we can see that it works for go, and we know that because they have released a version which allows it, so I just need to go ahead and implement those things. But now we have some tests that prove when I'm finished doing that, so that's great. Other things I did: we renamed ipfs-api to ipfs-http-client, to better communicate what it is.
A
Basically, if you've got a QUIC multiaddr and you used the js-ipfs-api — an old version of it — to get hold of some swarm peers that are using QUIC, then it would have failed, because it didn't know [the protocol]. This is especially a fix for when you don't have a [protocol code] in your codec table: if a newer version comes out and you're using an old version, then you can still see what peers you're connected to. It doesn't just fail. So that's good.
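The failure mode described above can be sketched like this (a hypothetical toy, not the actual multiaddr internals): a client with an older protocol table degrades gracefully on codes it does not recognise, instead of throwing.

```javascript
// Toy protocol table — older clients won't have newer codes in here.
// The numeric codes for ip4/tcp/udp match the public multiaddr table;
// 460 is the code newer tables use for quic.
const KNOWN_PROTOCOLS = new Map([
  [4, 'ip4'],
  [6, 'tcp'],
  [273, 'udp']
])

function describeProtocols (codes) {
  // Return a name for every code; unknown codes are reported as
  // 'unknown-<code>' rather than causing a hard failure.
  return codes.map(code => KNOWN_PROTOCOLS.get(code) || `unknown-${code}`)
}

console.log(describeProtocols([4, 6]))        // [ 'ip4', 'tcp' ]
console.log(describeProtocols([4, 273, 460])) // [ 'ip4', 'udp', 'unknown-460' ]
```

The design point is simply that an unrecognised entry in a peer's address should be preserved and skipped, not treated as a parse error, so old clients stay forward-compatible.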
It has the object API changes and the files API refactor in it. I also opened a pull request to interface-datastore, which I think most of you commented on already — so thank you for that, that's super rad — switching it to async iterators. Then I used that as a base for doing the same thing to datastore-level, which was pretty easy, because levelup has a promise-based API anyway.
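As a rough illustration of the async-iterator direction mentioned above (a toy in-memory store for illustration, not the real interface-datastore API):

```javascript
// Minimal sketch of a datastore whose query method is an async
// generator: consumers pull results lazily with for-await instead of
// wiring up pull-streams or callbacks.
class MemoryDatastore {
  constructor () {
    this.data = new Map()
  }

  async put (key, value) {
    this.data.set(key, value)
  }

  async get (key) {
    return this.data.get(key)
  }

  // Results stream out one at a time, and the consuming loop body can
  // itself await (e.g. a levelup read) between entries.
  async * query ({ prefix = '' } = {}) {
    for (const [key, value] of this.data) {
      if (key.startsWith(prefix)) yield { key, value }
    }
  }
}

async function main () {
  const store = new MemoryDatastore()
  await store.put('/blocks/a', 1)
  await store.put('/blocks/b', 2)
  await store.put('/pins/c', 3)

  const keys = []
  for await (const { key } of store.query({ prefix: '/blocks' })) {
    keys.push(key)
  }
  console.log(keys) // [ '/blocks/a', '/blocks/b' ]
}

main()
```

This shape composes well with a promise-based backend like levelup, which is presumably why the datastore-level port was straightforward.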
A
So that's kind of fun, and then I did a whole bunch of reviews. I'm not blocked on anything. My plan for next week — or this week — is to implement the CID-version-agnostic gets and puts, now that I've got the tests for them, and I also want to hopefully get round to implementing the addFrom* methods in js-ipfs in time for the 0.34 release. Any questions for me?
E
So, a lot of work last week on the js-libp2p daemon. I've got an initial branch of that going, with support for connecting and opening and handling streams from the daemon. I'm also using the daemon work to experiment with the libp2p async API, because there's a lot of stuff there we need to do with things like prefixed streams. Also, the libp2p-websocket-star [rendezvous server] release has gone out, which also improves the peer discovery startup time: before, it was like a 10-second wait to get peers; now, when you connect, you'll immediately get the list of existing peers back, so that should be quicker. So that's [it] for the PeerPad support work.
F
Yes, so basically, last week I added a Travis CI config — the pull request is open, have a look at it; it's structured [like] the Circle CI one. I also did some work on [inaudible]; that one should be ready to go, everything is green on it. I reviewed a bunch of pull requests, did some new releases of ipfsd-ctl with the new ipfs-http-client, and also started a pull request with the proposal regarding the errors.
F
I would really love you guys to review that. It's basically an improvement on just using the error codes as we've been using them, especially on libp2p — please review it, read what's there, and give your feedback. I also have two pull requests almost ready to go reducing the bundle size, one for the API — or, now, the HTTP client — and one for ipfs-repo; I just need to figure out a couple of [things], because some recent commits changed some stuff that increased the [size] again. So if you can look at those too — they basically also reduce the bundle size, so it would be great to have that enabled by default instead of the current setup. This week I'll continue the size work — I need to go through the smaller repos — and continue on the error codes, hopefully using the new approach I proposed on the previous pull requests, and also continue the mplex work with [inaudible], plus a bunch of new reviews. So, anyone have questions?
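For context, the error-code style under discussion generally looks something like this (an assumed shape for illustration only — `errCode`, `findPeer`, and the code names here are hypothetical; see the actual proposal for the real design):

```javascript
// Attach a stable, machine-readable `code` to each Error so callers
// can branch on it without matching on message text.
function errCode (message, code) {
  const err = new Error(message)
  err.code = code
  return err
}

function findPeer (peers, id) {
  if (!peers.has(id)) {
    throw errCode(`peer ${id} not found`, 'ERR_PEER_NOT_FOUND')
  }
  return peers.get(id)
}

try {
  findPeer(new Map(), 'QmHypotheticalPeerId')
} catch (err) {
  // Message text may change between releases; the code should not.
  if (err.code === 'ERR_PEER_NOT_FOUND') {
    console.log('handled:', err.code)
  } else {
    throw err
  }
}
```

The payoff is that downstream consumers (libp2p included) can handle specific failures programmatically, while messages stay free to improve.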
F
The
experimental
build
basically
it's
already
merged,
but
it's
behind
like
a
flag
and
just
picture
opened
that
issue
link
there.
So
people
can
like
test
if
everything
works,
it
probably
works,
but
you
should
test
it
anyway
and
get
a
feel
of
it.
If
you
think
it's
ready
to
go,
we
just
remove
the
old
code
and
make
that
the
default
one
just
one
little
difference
between
it
between
two.
A
Browserify and webpack were trimming our code in ways they're not meant to and causing issues — well, to be fair, I think it's uglify. So, okay, I take your point, but it would be nice to have some actual tests run against those minified build artifacts. That's probably a point for [inaudible] or something, yeah.
F
Going through the [bundle] stuff, figuring out [what's needed] — we could just list all the things we actually need, like [inaudible] and stuff like that. I mentioned some on some pull requests, and I'm starting to do some tests to see if it can improve speed and so on; memory is at least okay.
G
Hi. So, I've worked on the production setup for the benchmarks; that's all in place now. I wrote a bunch of small stuff — Dockerfiles and docker-compose — and the Circle CI is in place, so we're automatically deploying master to the environment on any kind of merge or commit, and there's also now an endpoint to actually trigger a run.
G
What
is
still
missing
there-
and
it's
also
on
my
next-
is
to
make
it
run
on
a
specific,
commit
and
also
the
feedback
you
get
from
that
API
currently
just
gives
you
some
random
URL
and
that
URL
should
point
to
something
that
is
relevant
to
that
test
run.
Basically,
so
that's
what
I'm
going
to
be
working
on
a
little
bit
more
I'm,
not
blocked
any
questions.
H
Hi. Not so much work on my side, mainly because there was a lot of security release [work] going out and then follow-up work, so you can guess what I've been doing. However, I do have a question for Alan, or maybe everybody — there is an issue I'm blocked on. I did a little bit of analysis on our benchmarks data using Node Clinic; however, whatever we are benchmarking right now is not significant enough to produce good data.
H
We
need
to
get
a
longer
run
so
proximally.
We
need
to
transfer
bigger
files,
so
I
just
need
some
guidance
on
what
the
best
files
to
measure
will
be.
The
current
the
current
numbers
for
our
benchmarks
are
pretty
good,
so
if
those
are
the
size,
they
decide
that
we
are
benchmarking.
Right
now
is
one
megabyte,
an
1010
kilobyte,
which
are
probably
small.
H
The
number
city
of
God
are
kind
of
good,
since
we
switched
to
use
a
pre
generated
certificate
and
the
pre-generated
Brito
and-
and
this
type
of
things
so
and
then
I
am
blocking
on
that.
So
Alan,
if
you
can
trim
in
into
that
thing,
then
this
we
can
plan
to
dig
a
little
bit
more
into
the
issue
of
getting
faster
degeneration
in
place
and
yeah
more
code
reduce
further
study.
Yeah,
that's
mean
anymore
anyway,
since
yeah.
C
Quick suggestion: I think you can find exactly what you're looking for in this file. This is our interop tests, and you can see at the top a bunch of sizes and directory structures, because every file and every directory changes the shape of the graph. So it would be interesting to tune those values, and you already have the code base here — it feels like pretty much a copy-paste into the benchmarks, and running, yeah.
A
Also, in the meeting we had on Wednesday with Alex and Ron, there was a whole bunch of other kinds of tests and benchmarks that we'd like to look into, so my job from that has been to create a list of those for you guys, including bigger files and things like that. That task is at the top of my stack now, so I will do it tomorrow morning, first thing, and you'll have it pretty soon.
I
I also just updated the readme on how to add a new test, using the test template that we have in place. I think that was pretty much it — I think the test format on the output was added, yeah. Nothing is blocking me. So this week, for Alex to be able to run on a certain branch, I need to make some changes also to [inaudible], but that should actually be pretty easy.
I
Since I did the refactoring to be able to run subtests locally and individually — this is for Node Clinic — and then also for some more tests that Leitão needs, we need an option to run these tests without the pre-generated keys too. Right now everything runs with the pre-generated [keys], but [I'll] put a flag in there to run without them.
J
[Alex] Right, so yeah, I disabled the preload config in [the js-ipfs node], because the effect was that npm-on-ipfs was trying to upload all of npm to the gateways every minute, which is fun. If you don't know what that is: until we get the DHT into js-ipfs, we're just periodically pushing your [content] to the gateways, so that other people can discover your content, yeah.
J
So if you're running npm-on-ipfs locally, [you'll have seen] that kind of thing happening — it's happening for users too, so [something] like unixfs metadata would be really useful, to not have that happen in future. But we're going to get the DHT in [eventually] anyway. [There were also] a bunch of performance improvements, like not listing files we don't have to — because obviously, if we pull down a node and it's got lots and lots of children, it gets really big and really expensive.
The really big one was not searching the entire HAMT shard for a given leaf node, which I discovered unixfs was doing. That was super tedious, because the HAMT shard that is npm is enormous, and it was actually loading every single leaf node to find a single file. So now it descends by working out what the indices would be for any given file: because the indices are stable, based on the input file name, it's relatively easy to predict which branch you need to follow to get to the file that you want.
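The descent optimisation can be sketched like this (a toy hash and fanout for illustration — not the real murmur3-based unixfs implementation):

```javascript
const FANOUT = 256 // toy fanout: take 8 bits of hash per shard level

// Deterministic stand-in for the real hash function (FNV-1a-style).
function toyHash (name) {
  let h = 2166136261
  for (const c of name) {
    h = Math.imul(h ^ c.charCodeAt(0), 16777619) >>> 0
  }
  return h
}

// The sequence of bucket indices for `name` is stable, so a lookup can
// descend directly — one node load per level — instead of scanning
// every leaf in the shard.
function bucketPath (name, depth) {
  const path = []
  let h = toyHash(name)
  for (let level = 0; level < depth; level++) {
    path.push(h % FANOUT) // bucket to follow at this level (toy rule)
    h = Math.floor(h / FANOUT)
  }
  return path
}

console.log(bucketPath('left-pad', 3))
```

The key property is the one mentioned in the call: the indices depend only on the file name, so lookup cost drops from O(leaves) to O(depth).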
J
That is quite cool. So, npm-on-ipfs was ingesting npm at a speed of about 0.003 modules per second; now it's doing about 0.8 modules per second, so it's gone from taking about six months to ingest npm to probably about ten to twelve days, which is quite nice. The download speeds have changed from being, like, thirty seconds a module to a few hundred milliseconds, which is a massive, massive performance improvement.
J
I've got a demo on stage on Friday, which is cool, because now it's not going to take longer to install something than the time I have for my talk — so that's cool. The other thing that I've been doing is trying to get deep requires of pull-stream modules [in place], because I noticed one of you guys was trying to make the bundled js-ipfs smaller, so I thought I'd help out in the only way that I could. I also added streaming methods to the ls stuff in MFS, because at the moment it just gives you back an array of all the files when you ls a directory — which is bonkers, even though deep down it's just arrays anyway.
J
But
obviously,
if
it's
a
shard,
then
you
get
lots
and
lots
of
nodes
in
this
way
it
will
stream
out
and
sort
of
like
giving
you
all
of
it
and
crashing
your
process
if
the
directory
is
too
big
cool.
That's me. I'm kind of — not really — blocked on a sparse-array pull request, but not really, because I literally just opened it; I just noticed it was sorting on every get, which doesn't look like it's necessary. So for this week I'm going to be doing more performance [work] and giving a talk.
K
The other thing is: so, you're putting the full tarballs into ipfs. Have we looked at what kind of performance profile it might have if we actually decompressed the tarballs and encoded them with rabin? Because most of what's in a lot of those tarballs, I think, is the same stuff — especially across a particular package, it's a lot of the same code.
K
If
we
use
wrapping
I
wonder
if
we
would
actually
reduce
the
overall
storage
and
especially
the
cache,
it
increases,
improve
the
cache
at
sorry
if
we
actually
decompress
the
tar
balls
when
we
stored
them
and
then
like
maybe
recompress
them
on
their
way
out
to
make
NPM
work,
it
sounds
crazy,
but
I
think
it
actually
might
be.
J
You know, that makes a lot of sense, because when you break the files down into their CID-addressable blocks, the changes between given module versions can generally be very small, so you should be able to see some pretty significant savings in the S3 stores that we're using for the whole thing. But to be honest, I've just been concentrating on making it fast rather than making it small — that would be, like, the next kind of thing.
K
Yeah
I
think
it
would
make
it
faster
in
terms
of
cash
hits
right,
so
it
wouldn't
make
initial
gets
faster,
obviously,
probably
make
them
slower,
but
the
the
cash
hits
like
once
you've
ever
installed
a
package.
Any
new
version
of
that
package
would
probably
come
in
faster
I
would
think
yep.
A
Right, you heard the man: break the thing and unleash it. Cool. All right — really quick, we are over time by two minutes, so any other real quick questions, or shall we just go? Okay, cool. All right, thank you all for joining us. It's been a pleasure, and we'll see you again next week for another exciting round of what the JS core dev team did last week, what they're blocked on, and what they'll do next week.