From YouTube: JS Core Dev Team Weekly Sync, January 14, 2019
B: What we normally do is a weekly update, where we tell each other what we did last week, what we are blocked on and what we plan to do this week, so I'll just work from the top. If you haven't put your weekly update down on the hackpad, then please do. Since I am listed first, I'll just quickly go over what I've been doing. I was on holiday last week, so I haven't done much, but I've done some things today.
B: An IPFS hash for a piece of data basically comes from the unixfs importer module. So I've been doing some dependency wrangling to try and get the 0.34 release out. Whilst I was away, libp2p-crypto got updated again, so I've been updating a few modules to get them all onto the same version. We have many modules that depend on it and it is quite big: it's like 200k gzipped, I think.
B: So it is worth not having multiple versions of it in our bundle. Then, last week, which I didn't attend because I was away, there was an issue blocking 0.34 from being released: listeners were being added to a local EventEmitter beyond the default limit of 10, so the max-listeners warning was coming up in Node and that was causing the build to fail, or something like that. Anyway, that's now fixed, and I've got another problem.
B: The bandwidth tests are now failing. Whilst I was gone, I think some modules got updated again, and I found out that big.js has been swapped out for bignumber.js. Some of the modules have that change and others don't, and so our tests are failing. So I need to resolve that again.
B: So this week, what I plan to do: I've got some stuff to do for getting the timeline sorted for the js-ipfs roadmap, to get those calls and things all finalized. There's also a bunch of emails for reviewing the benchmark work that has been done by NearForm, so I need to get on and look at those. I've basically been doing backlog clearance.
C: I have been working on multihashing-async, porting the API from callbacks to promises. Not iterators yet, because it doesn't need them, though it probably will; for now it's all promises. The PR should be up today. I also had a call with Alex about the infra work for the CI part and the CI integration, and talked a little bit about the whole benchmarks repo, and he answered all my questions, so we should be good to go during this week.
C: I'll look more into it, but yeah. I also did some research and brainstorming with Volker and other people about supporting ArrayBuffers and typed arrays, internally and in the API. I'm still trying to figure out the best way to do it and getting a feel for it; I used the multihashing-async repo to do that. That's why the PR is not up yet: I need to clean it up and take out the stuff about the ArrayBuffers, but I will make a proposal shortly, so we can think more about it.
C: How can we support that stuff and also still support Node buffers at the same time? Also, the window size for requests in the libp2p repos has been giving me a little bit of trouble, especially in the mplex repo. It shows everything should have been passing by now, but there are still some issues between switch and mplex with the latest changes, so I'm trying to fix that.
C: But everything else, or almost everything else, is already merged and released. Right now it's only a matter of getting mplex fixed; I think only js-ipfs and the http-client are missing after that, and of course libp2p is also missing, but that should be easy to do once I make mplex pass the tests with switch. So for the rest of the week I'll be finishing that up, and I will be integrating the benchmarks trigger on the CI pull requests that we have, for GitLab or Travis and so on. And that's about it.
E: So we should really make sure that we have multiple maintainers for all the main modules we have. For example, I propose that it should either be the lead maintainer plus the tech lead, or maybe the repository owner. So in this case it would be Alan, as the js-ipfs maintainer, because then you can release anything you want.
E: In this particular case it wouldn't have helped, because neither Alan nor Alex would have been around, but hopefully normally one of those two people is there. I just wanted to bring up that we should go through all the modules we own to check that we don't have those issues. Anyway, it's fixed now, and next I will work on implementing tree() for js-ipld, as I originally planned.
E: Most people agreed, but I also want agreement from those who are on holiday. I proposed that in multicodec we change the codec from being a string into being a constant, which in the background is just a number, and I will also propose doing this for the CIDs. Those are things that I can easily do in parallel to my IPLD work, as I go through all the APIs cleaning those things up. It therefore takes a while, but I think the outcome is quite nice.
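The string-to-constant change can be sketched as follows. The numeric codes are the real ones from the multicodec table, but the constant names and the helper are illustrative, not necessarily what js-multicodec shipped:

```javascript
'use strict'

// Old style: codecs passed around as strings, so a typo is only caught when
// something downstream fails to match it.
const oldStyle = 'dag-cbor'

// Proposed style: constants that are plain numbers under the hood, so
// comparisons are cheap and a mistyped constant name throws immediately.
// Codes below are from the multicodec table.
const multicodec = {
  RAW: 0x55,
  DAG_PB: 0x70,
  DAG_CBOR: 0x71
}

function isDagCbor (codec) {
  return codec === multicodec.DAG_CBOR
}

console.log(isDagCbor(multicodec.DAG_CBOR)) // true
console.log(oldStyle === 'dag-cbor')        // true, but 'dag-cobr' would be a silent false
```

Numbers also match what actually gets encoded into CIDs, where the codec is a varint, so the constants remove a translation layer rather than adding one.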
F: A comment: I just noticed there are like 300-ish projects using js-ipfs directly. Some of those are our own modules, which don't count, but our own are fewer than 30, so I can see more than 270 projects out there using it directly that we never really engage with.
F
There's
no
breaking
changes
on
the
CPI
and
like
some
of
those
projects,
are
pretty
developed
like
I,
recognize
some
names
from
some
PG
theses
that
I've
read
or
some
other
projects
in
the
world
on,
and
the
hacker
news
so
like
perhaps
reaching
out
to
those
projects
that
have
active
teams
working
on
them
and
ping
them
for
feedback.
Just
like
getting
them
involved
in
the
project
would
be
really
useful.
It's
it's
something
that
we
we
all
over
already
to
get
better
at
and
like.
F
Sometimes
we
forget
that,
like
our
user
is
not
just
ourselves,
we
actually
have
a
confused
large
community.
So
yes,
this
is
like
another
piece
of
work
to
do
and
I
can,
when
I
takes
a
lot
of
time,
but
I
consider
like
if
you
find
the
space
the
mind
space
just
based
on
this.
Your
calendar
to
just
like
go
through
these
300
repos
and
I,
see
how
active
these
projects
are,
and
it's
like
people
I
think.
G: There's the GitHub dependents graph, which really helps, because it sometimes gives you a little bit more information about the people that are using it. Another thing, too: you should do an alpha release. For the next major, do an alpha release, and then when you ping them you can say, hey, can you try this out and tell me if it actually does everything you need and whether it would work for you, so that you're not just showing up with, oh hey, we have this idea for an API.
B: So in general we sort of need a way, some scripts or something, to get all of those people who are depending on our project and be able to open a pull request or an issue on all of those repos, to encourage them to upgrade or to do something. That would be really useful. It's probably against the GitHub Terms of Service in some way, I don't know, but I will look into that.
G: If it is against the Terms of Service to launch, you know, hundreds of pull requests, which it probably is, or identical issues, which it probably is, one option is to use the data that you pull from that to find out who the maintainers or owners of all those repos are, and then just ping them in an issue to let them know. It's not as nice, probably, but it's probably within the rules.
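The "ping them in an issue" idea can be sketched like this. Building the payload is pure; actually posting it would use GitHub's real create-an-issue endpoint (`POST /repos/{owner}/{repo}/issues`) with an auth token. The repo name, handles and message text here are made up for illustration:

```javascript
'use strict'

// Hypothetical helper: build the issue payload that pings a repo's
// maintainers about an alpha release.
function buildPingIssue (repo, maintainers, version) {
  return {
    title: `Heads up: js-ipfs ${version} alpha is out`,
    body: `${maintainers.map(m => '@' + m).join(' ')} — this project ` +
      `(${repo}) depends on js-ipfs. Could you try the ${version} alpha and ` +
      `tell us whether it covers everything you need before we ship the major?`
  }
}

const issue = buildPingIssue('example/dapp', ['alice', 'bob'], '0.35.0')
console.log(issue.title)

// Posting it (not run here) would be roughly:
// await fetch(`https://api.github.com/repos/example/dapp/issues`, {
//   method: 'POST',
//   headers: { authorization: `token ${token}`, 'content-type': 'application/json' },
//   body: JSON.stringify(issue)
// })
```

One issue per dependent repo, with the maintainers @-mentioned, stays much closer to normal GitHub etiquette than bulk identical pull requests.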
H: I basically spent some time catching up, since I've been away from the product for a while. I looked through some issues and then decided to pick up js-ipfs-repo, which has a dependency on the datastore as well, so I might look at that project too, because I saw that, Alan, you're working on a sort of datastore refactor. So if there's anything low-level that's critical, I can pick it up, so that the higher-level projects like js-ipfs-repo can get done.
B: Cool, yeah. So I did the interface for the datastore; I think I sent a pull request, at least, and I sent pull requests for one or maybe two of the datastores that we already have. So low-hanging fruit would be to do a similar thing for some of the other datastores we have, basically following what was proposed.
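The datastore interface being discussed follows the shape of interface-datastore (put/get/has/delete). This is a deliberately simplified, Map-backed illustration of that shape; the real interface also takes Key objects and defines batch() and query(), which are omitted here:

```javascript
'use strict'

// Minimal in-memory datastore following the put/get/has/delete shape.
// String keys are used here for brevity; the real interface uses Key objects.
class MemoryDatastore {
  constructor () {
    this.data = new Map()
  }

  async put (key, value) {
    this.data.set(key, value)
  }

  async get (key) {
    if (!this.data.has(key)) throw new Error(`Not found: ${key}`)
    return this.data.get(key)
  }

  async has (key) {
    return this.data.has(key)
  }

  async delete (key) {
    this.data.delete(key)
  }
}

async function demo () {
  const store = new MemoryDatastore()
  await store.put('/blocks/a', 'hello')
  console.log(await store.get('/blocks/a')) // hello
  await store.delete('/blocks/a')
  console.log(await store.has('/blocks/a')) // false
}
demo()
```

Because every backend (memory, filesystem, leveldb, s3) exposes the same surface, the "similar thing for the other datastores" work is mostly adapting each backend to these methods.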
I: I've been on the multi-peer browser tests, and they were working fine locally. Basically, we had an old version of libp2p; locally I fixed that and it was running fine, and then when I actually put it on the server I forgot about that piece, so it actually took a while to troubleshoot what the problem was. So Alex and I both worked on troubleshooting that and fixing our install and deploy scripts. There's still an issue with them, and Alex is going to work on it.
I: It's now in a React app, which will just make it easier when you guys are adding new tests. I also created a presentation for our meeting coming up on Wednesday on how to add tests; we're going to go through each of the Node, Go and browser tests. So I worked on that presentation, and I need to update it after I do this refactoring. That will be coming up Wednesday, and we can answer more questions then.
J: Essentially it's just someone scripting, hitting the API for a particular slew of hashes, and it's a handful of IPs that are thrashing it. It's not really what the preload nodes are for, so I'm not sure of the best route. There was one over the weekend that I just blacklisted for a little while, but of course the IP is back. So yeah, can I just block these IPs at will when they seem to be taking down the nodes, or...?
F: Also, people can totally abuse this. It's like with the js.ipfs.io site: you go to the field that says add, put in a giant file, and only those people know they're loading bytes into it. So we need to... it's kind of normal in peer-to-peer networks.
F: This problem happened a long time ago too, with the bandwidth that got used by each node connection, and the bandwidth, storage and compute consumed because of all the encryption that we do. Typically the way to mitigate this, and this is just denial-of-service prevention for peer-to-peer networks, is by having some kind of proof of work. Not the Bitcoin proof of work; something lighter, which can range from solving crypto puzzles to even just checking a node's uptime.
F
For
enough
time
to
like,
if
I
know,
doesn't
prove
that
they
are
willing
to
stay
up
enough,
cuz
I'm
a
lot
like
a
file
that
is
large
enough,
then
their
requests
should
be
put
on
the
queue
at
the
bottom
versus
a
node
that
is
actually
requesting
a
smaller
file.
I
wonder
if
this
is
actually
something
so
I
think
like
the
preload
nodes
will
continue
to
exist
for
a
very
long
time.
They
are
like
a
way
to
increase
the
performance
in
the
experience
of
the
network.
J: Yeah, on that, at least: I think these preload nodes came about in a hurry, and they are now pretty production-y as a service. I'm not sure there was a lot of requirements gathering or designing around exactly what the service should look like, so it feels like maybe that's the next step.
J: I feel like throttling does make a lot of sense, at least just to keep these nodes functioning for the thing that we want them to function as. And then maybe an initiative to actually set some requirements for what a preload node, cluster or otherwise, should look like; you know, actually do the design work on that and then deploy it as a real service.
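The throttling being proposed is typically a per-IP token bucket. This is a sketch of the idea, not the configuration that was actually deployed; the capacity and refill rate are made-up numbers:

```javascript
'use strict'

// Per-IP token bucket: each IP may burst up to `capacity` requests, then is
// limited to `refillPerSec` requests per second; excess requests are refused.
class Throttle {
  constructor (capacity, refillPerSec) {
    this.capacity = capacity
    this.refillPerSec = refillPerSec
    this.buckets = new Map() // ip -> { tokens, last }
  }

  allow (ip, now = Date.now()) {
    let b = this.buckets.get(ip)
    if (!b) {
      b = { tokens: this.capacity, last: now }
      this.buckets.set(ip, b)
    }
    // Refill in proportion to elapsed time, capped at capacity.
    const elapsedSec = (now - b.last) / 1000
    b.tokens = Math.min(this.capacity, b.tokens + elapsedSec * this.refillPerSec)
    b.last = now
    if (b.tokens >= 1) {
      b.tokens -= 1
      return true
    }
    return false
  }
}

const throttle = new Throttle(2, 1) // burst of 2, then 1 request per second
console.log(throttle.allow('1.2.3.4', 0))    // true
console.log(throttle.allow('1.2.3.4', 0))    // true
console.log(throttle.allow('1.2.3.4', 0))    // false, bucket empty
console.log(throttle.allow('1.2.3.4', 1000)) // true, refilled after 1s
```

Unlike a blanket IP blacklist, this keeps the service usable for everyone while capping the handful of thrashing clients, which matches the goal of keeping the preload nodes functioning for their intended purpose.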
J
So
just
to
let
everyone
know,
I
am
planning
on
attending
flies,
then,
okay,
those
tickets
today
hopefully
and
yeah,
and
will
be
in
the
area
for
them.
For
two
weeks
after
that,
the
second
week
I
will
be
in
London
meeting
with
yeah
your
crew
there.
So
so,
anyway,
just
letting
everyone
know
I'll
be
physically
co-located,
so
we
can
maybe
knock
out
some
of
this
work.
J
Well,
while
we're
face
to
face
that
seems
to
make
some
things
go
faster,
so,
okay,
so
right
now
it
feels
like
that
good
solution
is
to
do
a
little
bit
of
throttling.
Maybe
I,
don't
know
I'll
do
some
testing
on
what
the
sort
of
delays
and
then
and
then
we
can
like
solve
the
problem
in
a
more
sustainable
way
in
the
future.
It's
up
to
me,
I.
J: Yeah, one of the weird things about looking at the logs is that the overwhelming majority of the requests are cancelled before upstream actually answers; they're 499s. So, looking at the logs, I'm not really sure I get what's happening just yet. For example, there's no user agent getting passed along, so I don't exactly know what it is.
B: There are a couple of things about how the preload works that I wanted to mention. David was talking about it in the browser, and that people shouldn't be preloading stuff in the browser, but it is not restricted to running in the browser: if you're running a js-ipfs node in Node.js, preload will also still happen. So that's that, and there's currently no kind of restriction on what is being preloaded.
B
So
if
you
try
and
get
anything-
and
it
will
try
and
preload
that
if
you
try,
try
and
add
anything,
then
it
will
also
try
and
preload
that
if
you
have
stuff
in
your
MSS
every
every
30
seconds
or
so
it
will,
it
will
try
and
preload
your
MSS
root
if
it's
changed,
and
so
that
was
originally
a
problem
with
I
know
Alex,
it
was
had
npm
on
ipfs
pre-loading,
the
MF
s
read
the
whole
of
NPM
for
a
while
until
we
disabled
that,
so
that
was
probably
not
helping,
but
in
terms
of
like
the
preload
requests
being
cancelled,
they
are
done
asynchronously.
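For context, a preload call is essentially an HTTP request asking the preload node to fetch the DAG for a CID, fired asynchronously and aborted when no longer needed; an aborted request is what shows up in the node's logs as a 499. The exact endpoint below is an assumption based on how the js-ipfs preloader asks via the refs API, and the CID is made up:

```javascript
'use strict'

// Build the URL that asks a preload node to recursively fetch a DAG.
// Endpoint shape is an assumption (refs with r=true), for illustration.
function preloadUrl (node, cid) {
  return `${node}/api/v0/refs?r=true&arg=${encodeURIComponent(cid)}`
}

console.log(preloadUrl('https://node0.preload.ipfs.io', 'QmExampleCid'))

// Fire-and-forget with cancellation (not run here); aborting the controller
// is what the preload node's reverse proxy logs as a cancelled 499 request:
// const controller = new AbortController()
// fetch(preloadUrl(node, cid), { signal: controller.signal }).catch(() => {})
// controller.abort()
```

That asynchronous, abortable pattern is why a burst of adds can leave a trail of cancelled requests in the preload logs without any client error being visible.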
B: We'll talk about it, but those were just some things to know about. On the requests being cancelled immediately: it's probably worth checking that these IP addresses aren't our own, because maybe it's coming from our testing servers. We are spinning up ipfs nodes that have preload enabled, and we're adding test data to things on Jenkins, so that could be getting preloaded onto the preload nodes. It just occurred to me.
J: One that I looked up was coming from a WeWork location, so it could be anyone in a WeWork right now, just...
B
Just
throwing
out
ideas
and
yes
to
David,
we
can
just
pretty.
We
should
just
realize
they
were
pretty
loading
on
testing
and
we
should
also
I
think
actually
Jacob
went
through
and
disabled
all
the
bootstrap
servers
for
tests
that
don't
need
lead
them.
So
our
tests
don't
try
and
connect
to
the
bootstrap
servers
and
so
yeah
that's
another
thing
we
can
do
anyway.
They
were
just
my
Forte's
sorry
to
make
everyone
listening
to
that.
B
We
are
like
five
minutes
over.
So
sorry,
I
completely
lost
track
of
time.
It's
really
nice
to
talk
to
you
all
and
if
you
have
anything
else
to
say,
let's
take
it
offline
and
thank
you
all
for
coming
and
I'll
see
you
again
next
week
for
another
exciting
round
of
JSI
PSS
what
we
did
last
week,
what
we
got
blocked
on
and
what
we're
gonna
do
this
week,
cool
all
right,
bye,
everyone.