A
Hi everyone, welcome to the IPFS core implementations weekly sync for Monday, the 23rd of November 2020. We're going to go through our high priority initiatives, then other initiatives, Q&A, the parking lot, question and answer time, all the fun things. It's going to be amazing. Without further ado, let's start with the high priority initiatives: upcoming and shipped releases.
B
Yes, go-ipfs 0.8: we're hoping to maybe land the RC this week, but with holidays in the US landing later this week, it will likely push to next week. We just have a few things that we need to finish up for pinning services, which we will talk about shortly.
C
I'd say the pin remote commands, which is like six commands for pins and for managing remote pinning services, are nearly ready for release candidate one, so they should not be blocking when you are ready to cut RC1. I've added, oh yeah, my mistake, pin remote service ls with a pin count when you pass that parameter.
C
It will test the endpoint and try to fetch how many pins you have in each status, like queued, pinning, pinned and failed. That acts both as a quick validation of the service you've added, and it will also be used by Web UI in our final end-to-end integration. The CI is still not green, but we are working on a fix so that the pinning service we use on the CI goes green.
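(For reference, the command being described would look roughly like the sketch below; the exact flag name for the pin-count option is an assumption here, not something confirmed on the call.)

    # list configured remote pinning services and, with the flag,
    # the number of pins in each status (queued / pinning / pinned / failed)
    ipfs pin remote service ls --stat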
C
And the HTTP client, which is used by Web UI and the JS side: adding support for the pin remote commands there is work in progress. I believe D is tackling that one.
D
Yeah, I was just adding items. No, so, I worked on the JS mock pinning service, because that lets you integrate it with go-ipfs so you can test things out. I'll be publishing it sometime today.
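(As a rough illustration of how such a mock service could be exercised against go-ipfs for local testing: the service name, endpoint URL and access token below are placeholders, not details from this call.)

    # register a locally running mock pinning service with go-ipfs
    ipfs pin remote service add mock-pinning http://127.0.0.1:5000/api/v1 test-access-token

    # then exercise the remote pin commands against it
    ipfs pin remote add --service=mock-pinning --name=example <cid>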
A
Neato. Next up is local pinning.
E
All right, that's nearly ready for RC1. All that we're waiting on is a couple of decisions, minor implementation decisions concerning things like encoding, whether to use multibase or not. So it's just a matter of making sure this is going to be ready for any future changes: do we need to do anything to future-proof it, or is it good?
E
So, once those decisions have been made, then everything should be ready to move along in terms of getting that implementation in, as well as making sure the migrations are tested with the latest, and that go-ipfs is depending on the appropriate version of the pinner. So that's all we're waiting for at this point.
B
Yeah, so last week that got removed from the bootstrap nodes. So we are doing some monitoring on the network now to see how things go and to assess the situation, but we are in watch-and-observe mode.
A
So far, so good. Next up is improved discoverability and connectivity.
F
Yes, so the auto relay example was merged last week, and with that all the auto relay stuff is now complete. On the rendezvous front, I started last week addressing Jacob's review. We also had a call to align on some things in the review, and with that I already did some work. Basically, we decided to separate the client and the server, and with that we aim to move the client part into libp2p.
F
Then I also added the DoS protection for the register, and I'm currently working on a Docker Compose setup with MySQL. Basically, I'm working on creating the database model and queries so that we don't keep everything in memory and instead store everything in a MySQL database. This week I will continue on that, and I also need to work on improving the garbage collector.
F
That's because I will be changing how we currently deal with the data. Then we also need to align with libp2p regarding message signing; that's one of the things that Jacob and I discussed. Eventually it will make sense to add message signing in the same way as we have for pubsub, but it's currently not part of the spec. So yeah, I also created an issue for that, and I aim to eventually get message signing in as well. And that's it.
A
Good stuff. Next up is bidirectional streaming and streaming errors in the browser. So, I have marked the PR as ready for review. It has implementations of the four different types of gRPC-web messages: unary, client streaming, server streaming and bidirectional streaming. I have documented the protocol that it uses, because it's somewhat made up; it's not part of the spec, since the spec doesn't allow for WebSockets, but it is based on other people's implementations, so it's not plucked out of thin air. Anyway, it's all there.
A
It's documented, along with a rough plan of how we can get it into go-ipfs as well, so I'd love some eyeballs from the Go folks, and Go people generally, not just guys, since we're not all guys. I'd love some input from people across the ecosystem, because ultimately this should really go into go-ipfs as well, so we can have awesome stuff happening in the browser with proper full-duplex streams everywhere.
G
This new work, and also the js-ipfs monorepo, needs some work on the types. We have a new proposal that's been written up that basically describes a bunch of stuff that we've been talking about. It's a little bit more complicated in that repo, because we need to reuse types across multiple packages, so we need to organize things better, but you can read all about it in the proposal. Yeah, that's it.
A
Awesome. Next up is the use of a shared node from a service worker.
D
Oh yeah, I was actually trying to figure out if I need to do anything there or if it was ready, but it seems there's one issue that needs to be addressed, if I'm reading the PR correctly. So I'll try to address that this week and hopefully land it.
A
Nice, then we can get it in. Badger 2 support: no update, I believe.
B
Traversal, that's me, yeah. So Arch is working on improving our dialability statistics. We started adding some stuff to the hydra nodes so that, as we observe traffic going through, we start pinging people, just trying to get a better idea of what dialability looks like on the network, so that we can use that for our success criteria when we implement full NAT traversal. And then he is also working on some issues with AutoNAT being a little bit flaky in terms of observed addresses, so, working on stability.
A
Radical. Next up is UnixFS v1.5 in go-ipfs.
A
I don't think there's been any movement on this since last week, though I did put a demo repo together that lets our fantastic contributor run the js-ipfs interface test suite against an arbitrary go-ipfs without having to jump through lots of hoops.
E
Still working on the design. I've actually had a number of discussions about how we can maybe use some of the existing infrastructure for doing more intelligent garbage collection. We have the ARC cache and the bloom filter and things like that, which we don't want to duplicate if we don't need to, if there's any existing functionality that we can use.
A
Good stuff. That brings us to the end of the other initiatives. So next up is design review proposals. Do you want to propose something for design review?
H
Hey guys, I just put something in there. I've been working on the JavaScript implementation of graphsync, and Mikeal Rogers asked me to join this call to discuss a possible integration point with js-ipfs, as I think you're making some plans for that next year or something. So I'm here to answer any questions or discuss it; I just don't know exactly how to proceed.
H
It's still in progress, so you're welcome to look at it, but I'm still working on a number of things. So I think it's more about getting a placeholder in place for the future, from an integration point of view.
H
Just as a quick summary of where I'm at right now: I can issue a request to a responder, and I've validated that against go-ipfs.
H
And I did a prototype of a responder. What I've been working on most recently is doing some validation of the responses coming back, to make sure a bad-actor responder doesn't send me bad stuff. I think I'm getting pretty close to putting that all together, and then I'll take the prototype responder and integrate it properly. I think I'll be in pretty good shape by the end of December; that's what I'm expecting.
D
Is this something that we need to detect per node, where some peers would have graphsync and others don't, or is it something that just replaces the current bitswap?
H
Well, in go-ipfs they have the responder turned on, but there's nothing that ever makes a request out. So within go-ipfs you can't enable it to replace bitswap; all you can do is enable the responder to provide a new way of accessing data. And I think, if we took this on, one big question is: if we integrate with js-ipfs, what do we want to do with it?
H
Do we want to do just what go-ipfs does and make it act as a responder in case a request comes in? Do we want to have a mode where it can actually issue requests to other nodes, as, like you said, a bitswap replacement? There are questions like that in terms of what the exact functionality is. I don't know if go-ipfs's graphsync has some roadmap or plan for that, but yeah, the exact way you integrate it is open.
B
So, thanks. Hannah and Alex will be coming back to IPFS early in December, and some of the stuff they're looking at is the IPLD go-prime integration and also the graphsync and bitswap story for go-ipfs. So I think it would be really good for you and Alex to be able to sync up with them, so that we can all kind of coordinate and launch those things together.
B
So, okay, I'll make a note for us to make sure that we sync up, probably sometime next month, so we can figure out what we want to do in Q1.
G
Yep, the libp2p update, bumping simple-peer to the latest version.
A
Incredible. I guess that's it, we're done. Thanks for coming, everyone. Please do fill in your async updates; people do read them, and it's very useful. But yes, lovely to see you all. This has been the core IPFS, sorry, the IPFS core implementations weekly sync for the 23rd of November. Stay safe out there, things are getting better, it's almost Christmas. See you on the internet, bye bye.