A
Hi everyone, and welcome to the IPFS core implementations weekly sync for Monday, the 25th of January 2021. I am making brain; I will be your host. We're going to talk about our high priority initiatives, other initiatives, the parking lot, and Q&A. If you've been to these things before, you know the drill by now. We can start with the high priority initiatives; the first item is upcoming and shipped releases.
B
So we are hoping to get the next RC out this week. Last week a bunch of the team was preoccupied with research things, but yeah, we will be attempting to get it out this week, including an updated version of the IPFS distributions site, which makes deployments, and letting people play around with it, a whole lot easier.
D
We also plan to have a web UI release without pinning service integration, to decouple the release from when pinning services are available, and also to handle the transition period when a web UI may be used with a back-end which does not have the pin remote commands. I believe MFS auto-pinning is in a state which is compatible with the web UI, but we are lacking docs and we need to retest it with Pinata.

D
I think that's it on my end.
A
There's the PR against js-ipfs for the HTTP client to add the commands. I gave it a once-over; it all looks fine, there are just some conflicts that need to be resolved. I think it's going to be trivial.
D
Yeah, I think if we could just review, merge and ship that, it would make things much easier on the web UI side, because npm does not support dependencies from a branch if the project is in a subdirectory, and js-ipfs uses Lerna now, so that introduces an additional step for testing. Yeah.
B
There's nothing particularly new here. This is, I think, on pause until we decide if and when we're allocating more time to change the API for local pinning and add a new command for that, and all that jazz.
A
Fair enough. A wild Hannah has appeared; you could give an update on data transfer speed improvements.
E
Yeah, so I just copy-pasted something from a wrap-up email, so it's a little long, but things are going pretty well on this. We've shipped a bunch of improvements to graphsync, not directly related to the data transfer, but sort of preparatory work, kind of lining everything up. We've started to get our testing infrastructure in place, and we're taking some preliminary steps towards implementing what we hope will be some good solutions.
E
The hard part is that it's very hard to find a solution that will offer dramatic improvement without requiring clients to upgrade to get that improvement, or requiring the network to upgrade to get the most improvement. So that's going to be a bit of a challenge, because we're trying to work just with what exists in IPFS and bitswap and graphsync, in deployed ipfs 0.5 or whatever it's going to be.
E
It's going to be hard to get a major improvement working just with that, but we'll do our best. So yeah, we're making progress.
G
I am creating a design proposal for it, and it should probably be ready for feedback this week, so I will probably bring the js folks in. Once that is done, it might be good to have a design discussion next week.
F
And let's see: one of the things we were talking about a little bit ago is the next priorities for the group, and trying to get everybody back to being focused on more collective issues instead of everybody kind of on separate stuff. So some of the stuff we talked about is working together to finish up the TypeScript work, and then looking at what we need to do to fix the DHT in Node.js, or at least get it more performant.
F
But we may want to focus that, so we can have you, Hugo and Alex all able to work on the DHT together.
G
Yes, but don't you think that we should at least have some improvements on the connection manager? That would be important for the DHT.
F
We will need to look at that and figure out what those changes are for the DHT, and what we'll need to do that. But again, focusing on the DHT doesn't mean putting three people in the DHT repo; it's "let's make sure that we're fixing all of these systems in tandem to achieve the goal of a more performant js DHT".
A
Okay, that's the end of the high priority initiatives. Moving on to the other initiatives: TypeScript integration is the first one.
H
Me
right:
okay,
so
we
merged
a
bunch
of
the
a
bunch
of
pr's
data
stores,
data
storage,
fs
data
store
level
data
store
core
interface
data
store
a
bunch
of
stuff.
So with this, the ipfs-repo types PR is almost done, just missing, like, two PRs, the multiaddr one and one other, and I'm still thinking about what to do with the migrations: just ignore them for now, or PR the migrations too.
H
So if anyone wants to help with my questions, feel free. We finally figured out how to do the documentation properly, meaning the documentation automatically generated from the types actually works now: the tool that we use to do that released a new version, so now it can actually understand CommonJS code better, instead of just ESM, so we will finally be able to have nice docs for the APIs without jumping through a million hoops to make them look good.
H
So that's fun, and yeah, I think that's pretty much it for me.
A
I started working on the unixfs importer and exporter, which meant I started working on ipld-dag-pb, which meant I started working on protons, and I'm currently in hell right now.
A
But
yeah
so
all
the
data
so
stuff
unlocks
the
repo
which
then
unlocks
bitswap,
which
means
we
can
then
start
bubbling
this
stuff
up
to
to
ipfs,
which
is
going
to
be
great.
C
No
updates
on
that
at
this
point
I
think
we're
the
palmer's
still
waiting.
Do
we
want
to
go
ahead
and
create
a
a
new
repo
for
that
I
don't
think
there's
been
a
decision
on
that
I'll
defer
any
more
to
a
dean.
B
Yeah, I think we should write an issue and get some feedback from the Badger 2-using folks, but I think, after talking with Daniel about this a little bit, probably the best option is just to use a branch: just use branch versioning and release tagging. I trust him that, even though the go mod website recommends using subfolders, this is something that is no longer a good idea, because GOPATH is not used everywhere anymore.
I
Yeah, I think the only real question at that point is: are we making another tag or branch in the badger 2 repo, or are we trying to combine all of these different versions of badger into just the badger repo? That would sort of make more sense, but we've already gone this other way of having repos per version, and, right, why are you getting badger 3 out of the badger 2 repo? That starts to be somewhat unintuitive, which is the unfortunate thing.
B
If we were going to use one repo, I'd probably put them all in the badger repo, for sanity purposes, yeah.
F
Yeah, so Arsh had the POC of TCP and QUIC hole punching working. He's currently working on cleaning up those PRs, because he did a lot of hacks to get around some of the interfaces, and then we just need some reviews on the multistream extension for the spec. Arsh is looking at putting together a proof of concept of hole punching coordination over DHT servers, the idea being:
F
If
we
can
reasonably
allow
dht
servers
to
do
limited
bandwidth,
hole
punching
coordination,
then
we
could
just
leverage
our
existing
infrastructure
to
do
a
lot
of
that
instead
of
running
expensive
relays.
So
one
of
the
things
he's
looking
at
doing
is
determining
what
the
actual,
like
bandwidth
rate,
is
for
coordinating
that.
F
We can then calculate, based on the connections to DHT servers, how much coordination traffic we anticipate going through them to coordinate hole punching. It's just a proof of concept right now, not necessarily what we'll eventually use, but once we have that data we can make a determination of: is this worth going down that path, and what does that look like?
F
That is, if we did bandwidth rate limiting on the DHT relays, because we don't want to open them up as general relays, then understanding what bandwidth management we need to do. In most instances, when you go to find somebody, you're going to query the closest people, so if I can just get hole punch coordination with my 20 closest peers, that provides a lot of value in being able to get that final content.
F
So it's just determining how feasible it is and how expensive it would be on those DHT nodes; really, making a cost assessment there.
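As a back-of-envelope illustration of the cost assessment described above, an estimate might be sketched like the following; all of the message sizes and attempt rates here are made-up placeholders, not measurements from the proof of concept.

```go
package main

import "fmt"

// coordinationBytesPerSec gives a rough, illustrative estimate of the
// hole-punching coordination load on a single DHT server. All inputs
// are hypothetical placeholders, not measured numbers.
func coordinationBytesPerSec(peers int, punchesPerPeerPerHour float64, bytesPerExchange int) float64 {
	punchesPerSec := float64(peers) * punchesPerPeerPerHour / 3600.0
	return punchesPerSec * float64(bytesPerExchange)
}

func main() {
	// e.g. 20 closest peers, each attempting one punch per hour,
	// with roughly 2 KiB of signalling messages per attempt.
	fmt.Printf("%.2f bytes/sec\n", coordinationBytesPerSec(20, 1, 2048))
}
```

With those placeholder numbers, 20 peers each attempting one punch per hour at about 2 KiB of signalling works out to roughly 11 bytes per second per server; the real measurements would either confirm or refute figures of that order.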
I
Interesting stuff. Can we estimate that from the existing autorelay? We've got an existing coordination thing off of autorelay, right; can we take that net bandwidth and then spread it over our estimated size of the DHT to get an estimate of what that is going to be? Because it should be the same protocol for doing it.
C
Yes. Actually, on Friday I was testing the prototype that I've come up with so far, and there are some improvements. There are also some big problems with it that I have to figure out how to address, and I will fold those findings back into the current design doc.
C
One of the things that came out of that, though, was that I'm looking at performance, some DHT-related issues, since we've been discussing that here. Some of you may have seen the discussion that happened on the ipld channel, and the conclusion was that we have a couple of areas that are causing idle CPU usage to hover around 15 to 20 percent on most of the nodes. I was able to cut down my DHT connections, and the idle CPU business seemed to scale with that, anyway.
C
So that may branch off into some other performance-related discussions and things we need to do. But as far as the GC is concerned, yeah, there are some improvements, but there are also some things that I'm probably going to have to ask some questions about, in terms of how we want to solve them. So the prototyping work is going forward, but it's not like "yay, all the problems are solved".
C
The handling of the packets, particularly the incoming packets, was responsible for a large amount of the idle CPU usage: it would keep an eight-core i7 at around 15 to 20 percent CPU with the default configuration, and that's just a steady state after running for half an hour.
C
It wasn't Windows, though; I didn't have it on Windows. It was running on amd64 Linux, a MacBook Pro and a FreeBSD machine, and all three of them exhibited similar performance profiles with the default configuration. And this was all on either the RC1 or the current build of the stable or current development branch.
C
And it was specifically in quic. I can share the profiles; I did a number of profile runs, they all came out pretty similar, and it's all in quic, it's all packet handling, and a good portion of that is in memory allocation. So something we should probably open an issue to look into is quic performance.
F
That'd probably be worth pinging Marten Seemann, so he can take a look at that. He's the maintainer of that repo, I believe, right? Yeah.
C
I will share the profile captures for him, then. Yeah, perfect.
A
Do you want to skip straight on to the next one, which is the go-ipfs migrations rework?
C
I am waiting for review; we haven't really had time to do a review of that. There's a lot of work in there and, as I said last week, the distributions PR still needs to get approved and merged, and everything else comes behind it. No more work is necessary unless a review points out that there is. I'm excited to get that in, because it looks like it will be.
C
A
huge
improvement
and
it'll
offer
going
forward
a
way
to
actually
sanely
write
migrations
without
a
gigantic
amount
of
hassle,
so
very
happy
about
about
that
being
there
and
when
we
have
time
it'll,
it'll
be
reviewed
and
merged
and
accordingly,
according
to
relative
priority.
B
Yeah, I think it's basically: step one is to do the next RC, which includes the update to dist, and then once that's done we can start to look at merging in these other ones, because they all sort of build off of that; all these PRs to dist make everything make a little more sense.
A
Cool. The next item is the ipfs pubsub API revamp. There is no sign of Gozala, so nobody to give me an update.
A
Moving on to the memory leak in js-ipfs: did you want to give an update?

G
Yeah, so the current pressing problem here with our testing setup is that libp2p CI fails on Node 15 and npm 7. I basically created a proposal and a proof of concept, linked in the notes. The main issue is a circular dependency: we have our libp2p modules depending on libp2p, and libp2p uses those modules as dev dependencies to test them with integration tests. With the new npm 7 behaviour of automatically installing peer dependencies, this basically gives us a mismatch of versions. So, yeah.
G
The short-term goal would be to move the integration testing out of libp2p and into the libp2p modules; then we could support Node 15. In the long run we should probably create some system tests using Testground and follow up with the interop suite, and probably we should also discuss afterwards, in this context, eventually moving the libp2p repo to a lerna repo, following the wins of js-ipfs. And yeah, that's it; it would be good to have feedback on.
A
That's it for the other initiatives. Moving on to the rest of the items: design review proposals. Anybody got anything you want to propose for a design review?
B
Now that Hannah and Alex are back, I put up a bitswap proposal around large blocks. This isn't something that we're necessarily going to tackle right now, but I want to get some sanity checks, like "does this make sense", and also, I guess, see how you think this fits in with some of the other stuff that you were planning.
E
I actually did; I've read your proposals. I'm sorry I haven't made any comments. Let me discuss it with you offline.
C
There's a topic sort of related to saving blocks. There's an issue out there that there's been some conversation on, concerning a proposal for streaming data directly into a file and bypassing the block store, and that offered some interesting possibilities for some things I was running into, just looking at when we have very little space left to write blocks, and ways around that. I just wanted to bring that up, in case it's something that we...
E
Yeah, I mean, it's super problematic in implementation, because of all kinds of problems that it introduces: you could delete the file off the file system and you'd have no way of knowing. But I'm just curious, do we have that in ipfs? Because that is a thing over in Filecoin; they use that.
B
Yeah
I
mean
the
file
store,
the
file
store
exists.
I
don't
think
it
does
any
of
the
things
that
one
might
expect
like.
Have
the
os
mark
files
as
read
only
so
you
don't
blow
them
up
while
you're,
while
you're
using
them
or
whatever,
but
but
the
file
store
is.
Is
there
I
think
it's
experimental,
yeah,
okay,.
E
I mean, in theory it would not be hard to stream something you're getting from bitswap, or wherever, from the dag service, directly into a file instead of saving it.
E
In theory. I think the current architecture of the software may make it a little bit challenging, because, and this is something I'm going to have to look at in bitswap in general, bitswap is pretty hard-coded to save straight to the block store once it gets a block. We'd probably have to, at minimum, introduce a layer of indirection there, because there are a million scenarios where you might want to put it in a different block store, you might want to put it in a file, you might want to not put it in a block store just yet.
E
You know, there are a bunch of reasons to do that. Obviously it's also super integrated with the providing system, which is going to be a real fun thing, so all of that is going to have to be somehow broken apart, and that's probably going to be software that Alex and I write.
C
Because, architecturally, conceptually it's fairly straightforward, but yeah, like you said, you really need a layer of indirection so that you don't have to worry about what you're writing to. You know: here's your incoming block; something else takes care of wherever it's going to put them and arrange them, be it a filestore or whatever. Yeah.
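A minimal sketch of the indirection layer being discussed, with hypothetical names rather than the real bitswap interfaces, could have the receive path hand blocks to a pluggable sink instead of writing straight to the block store:

```go
package main

import "fmt"

// BlockSink is a hypothetical indirection point: instead of bitswap
// writing received blocks straight to the blockstore, it hands them to
// whatever sink the caller configured (a blockstore, a file, a buffer
// that defers storage, and so on).
type BlockSink interface {
	Put(cid string, data []byte) error
}

// MemorySink stands in for a blockstore-backed sink in this sketch.
type MemorySink struct{ blocks map[string][]byte }

func NewMemorySink() *MemorySink {
	return &MemorySink{blocks: map[string][]byte{}}
}

func (m *MemorySink) Put(cid string, data []byte) error {
	m.blocks[cid] = data
	return nil
}

// receiveBlock is where bitswap would call the configured sink instead
// of a hard-coded blockstore write.
func receiveBlock(sink BlockSink, cid string, data []byte) error {
	return sink.Put(cid, data)
}

func main() {
	sink := NewMemorySink()
	receiveBlock(sink, "bafy...example", []byte("hello"))
	fmt.Println(len(sink.blocks), "block stored")
}
```

A file-backed or deferred sink would just be another implementation of the same one-method interface, which is what makes the "here's your incoming block, something else decides where it goes" shape attractive.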
A
I'm just conscious of the time here; we have just run over, so if anybody needs to drop off, please feel free. If that's it for the design review proposals, we can move on to the blockers and asks.
B
The
questions
yeah
I've
got
a
question.
There's
a
someone
filed
an
issue.
I
don't
know
whether
it
was
in
the
go
or
the
js
repo
around
the
fact
that
ipves
import
export
is
not
like
quite
the
same
across
going.js.
B
Basically
because
js
had
it
first
and
uses
like
you
know,
passwords
to
encrypt
things
and
whatever
and
and
go
is
just
like.
You
are
going
to
send
me
a
key,
and
I
don't
know
anything
about
passwords
and
any
of
that
do
we
have.
F
...what we need from the client? Because we need, at minimum, to be able to export the keys locally on the command line. But yeah, we should just have that discussion, probably in the interface spec for the client, the HTTP API, and figure out exactly what that thing is supposed to do, and then unify it, if it makes sense to do that across the two implementations, which it probably does.
B
Yeah, for context: go-ipfs currently allows you to import a key, but does not allow you to export the key if the node is online, because there's work to be done on API security. With local things running on your local machine, how do you tell who's asking you to run these commands?
I
Yeah, I guess sort of related to that, potentially: Jen was asking me if we should do some sort of blog post or write-up about the security workshop, and one of the things that I realized I was very unsure about is: are we going to talk about any of the things that came out of that as community desires, as things we're going to do, or are these just things that people talked about? You know, are we going to be trying to deal with different application profiles?
I
Are we going to be dealing with this HTTP permissioning of CIDs? We should have some sense of that. So my answer was: let's wait a week until prioritization happens and we have a sense of the timeline, of how much effort any of us are going to be putting into any of that, before we try and set expectations.
F
The
thing
that
I
think
is
going
to
happen
regardless
is
that
we're
going
to
end
up
in
a
situation
where
this,
the
security
issues
here
are
a
use
case
that
we
are
trying
to
unblock
as
a
core
development
and
protocol
development
team,
and
what
we
want
to
do
is
make
sure
that
people
like
peer
goss
and
everybody
else
that
was
on
that
call,
have
the
ability
to
easily
extend
ipvs
to
do
those
things
and
maybe
will
at
least
guide,
if
not
participate
actively
in
the
development
of
those,
but
really
make
sure
that
you
have
the
ability
to
compose
ipld
and
ipfs
in
the
block
store
and
whatever
else
is
needed
to
do
that.
F
And so we need to work to make that easier to build off of, and then perhaps we'll also help build those features out. So I think that's kind of the minimum set that we'll look at for this year, and that will very likely happen; I think it's just the degree to which we'll be involved that still needs to be determined.
I
Right, but the most concrete one there was this CID permissioning in bitswap, and the initial proposal is somehow based on HTTP headers coming into a gateway-ish thing. They want to then have their bitswap session be able to decide on permissioning against some sort of external API that it passes the headers to, and there are, I think, at least some rounds of design, which we probably have opinions on, that need to happen for something like that to actually exist.
I
So that may be the ask: we probably need to review that initial issue that came in and start thinking about what a plausible design actually looks like.
B
I suspect, for that, the bigger engine that we need to worry about is how, as part of a bitswap request, you send a token that grants you access to something; plumbing that through the HTTP client and responses is highly bikesheddable, but also kind of not.
A
Either way, we're not going to sort that out in this meeting. So the final section is the parking lot. There are two items in the parking lot; one is the dag selector CID cache.
E
Yeah, this is something I just want to put on people's horizons, probably a ways off, and also only because I saw something in Andrew's update that suggested some similarity. Do we currently have anything in go-ipfs that tracks a root CID to all its children, or anything like that? I know we have a block store; do we have a dag store of any kind?
E
Yeah, okay, this may be something we look at down the line in terms of speeding up graphsync, especially as we look at moving towards graphsync as a thing that can serve you lists of CIDs. I think there are a lot of other things that it might come into play on. I only wonder... again, this is just an idea; it's very germinal.
E
It
might
like
I
mean
that's
like
I
don't
know
if
it
would
help
with
gc
to
be
able
to,
like
you
know,
do
something
like
that.
I
mean,
probably
by
the
time
it's
implemented.
It
won't
be
a
dad
query.
E
It
won't
be
like
a
dad
store,
it'll,
be
like
or
like
a
whole
dag
store,
it'll,
be
like
a
root
c
id
and
selector
query
where
we're
just
recording
things
as
we
serve
a
graph
sync
request
and,
and
then
the
next
time
we
do
it,
we
might
be
able
to
serve
it
much
faster
by
like
just
loading
the
blocks
and
setting
them
anyway.
That's,
like
you
know,
it's
just
a
it's
a
far
updating,
but
it
could
come
into
the
mix
in
terms
of
speeding
up
some
of
this
stuff.
So
yeah.
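A minimal sketch of that idea, with plain strings standing in for real CIDs and serialized selectors, is a cache keyed on the (root CID, selector) pair that records the block order observed while serving a graphsync request:

```go
package main

import "fmt"

// selectorCache maps a (root CID, selector) pair to the ordered list of
// CIDs that a previous traversal for that query touched. A repeat
// request can then load those blocks directly instead of re-walking
// the dag. Keys and CIDs are plain strings here purely for illustration.
type selectorCache struct {
	entries map[string][]string
}

func newSelectorCache() *selectorCache {
	return &selectorCache{entries: map[string][]string{}}
}

func key(root, selector string) string { return root + "|" + selector }

// Record stores the CID order observed while serving a request.
func (c *selectorCache) Record(root, selector string, cids []string) {
	c.entries[key(root, selector)] = cids
}

// Lookup returns the cached CID list, if any, for a repeat request.
func (c *selectorCache) Lookup(root, selector string) ([]string, bool) {
	cids, ok := c.entries[key(root, selector)]
	return cids, ok
}

func main() {
	c := newSelectorCache()
	c.Record("rootCID", "all-children", []string{"cidA", "cidB", "cidC"})
	if cids, ok := c.Lookup("rootCID", "all-children"); ok {
		fmt.Println(len(cids), "cached CIDs")
	}
}
```

At roughly 40 bytes per CID, a cached traversal of a million blocks is on the order of 40 MB, which is the "probably small next to the block store" assumption mentioned later that would still need validating.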
C
So that actually has some implications for what garbage collection can do, and so far I've avoided that, just because of the sheer amount of memory it can take up for a large dag, or if you have a lot of objects that have a lot of deep dags, yeah.
C
But
if,
if
they're,
going
forward
with
design
yeah
I'd
be
very
interested
in
at
least
staying
up
with
that,
because
you
know
keeping
up
with
whatever
the
thoughts
are
there,
because
that
does
have
direct
implications
of
what
I
could
do
with
garbage
collection.
E
Yeah, for sure; that's why I just wanted to put it out there as something I'm thinking about. I imagine it could take a lot of space, but, I mean, CIDs are 40 bytes. I would imagine that, in comparison to the block store, it's just not going to be a huge thing, but obviously we need to find out if that's a true assumption; it's possibly not true.
I
Oh, the other person to ask about that is Reba, who's been doing a PostgreSQL-based datastore in the Filecoin context that is keeping similar data information.
E
Nice, yeah. By the way, we may want to have at least vague awareness of what they're doing over there, because they're doing a lot of datastore testing, and it might be somewhere we go "oh, we could do this too and get a big improvement", though I don't know if the profiles of the way we use things are
E
You
know
the
same
so
that
it
would
make
sense.
Sorry,
okay,
that
was
long,
I'm
just
going
to
quickly
go
on
to
the
other
party
item,
which
is
really
quick,
which
is
me
I
am
just
curious.
I
I
randomly
reach
out
to
chris
happy
happy
who
at
one
point,
was
working
on
a
js
craft
sync
and
he
said
he's
no
longer
contracting
with
pl,
which
I
think
means
there
is
no
one
on
js
crossing
at
the
moment.
So
I
mean
you
know
like
I
don't
like.
E
I don't exactly know; it doesn't even actually have, like, bitswap fully going yet, so maybe it's a long way off to even worry about, but, you know, especially if we're going to be banking on graphsync being a thing in ipfs that enables fast data transfer, it would be a bummer if we couldn't do whatever in js. So I don't know if that's even a concern.
F
Yeah, we just did a js handoff on graphsync, because he can't currently work on it; he might come back and work on it later. So we'll probably pause on that until we have a better picture of what exactly we're doing on go, but there is a working version of it in the repo. I added the repo to the notes; we'll migrate it over at some point in time. It also has working examples of pulling from go, but I don't believe it works the other way currently, yeah.
A
I think that is it; I think we are done. Thank you for sticking it out if you made it this far. This has been the ipfs core implementations weekly sync from Monday, the 25th of January 2021. Please fill in your async updates, so people know what's happening. Otherwise you are free to go enjoy Burns Night; I hope you're drinking whisky already.