From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2021-03-01
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
I'll start with myself. I've worked on the js-multiformats stuff for a while, and generally speaking on js-ipfs. I started with js-ipfs unixfs and did a small patch that just used the new protocol buffer stuff. But then, when I was integrating it into ipfs, it turned out that the whole CID handling is different and so on, so I just tried to get all of the new js-multiformats into js-ipfs instead.
A
It worked, so there's no js-ipld in the branch I have. We'll see how this works with js-ipfs. The interesting thing, I think, was that we've been talking about abstractions or missing pieces that js-multiformats doesn't have, and what js-ipld was doing: it basically combined encoding, building blocks, and storing them on disk.
A
All in one thing. It kind of feels strange that all of this is bundled together, but when I did the porting to the new stuff I found out that it's actually quite useful, mostly for writing tests, because you always do the same steps: encode this thing, get the CID, store it somewhere. Having that as a single step is quite convenient, so I'm still not sure whether we should end up with similar abstractions.
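The one-step pattern described above — encode, derive an identifier, store — can be sketched in Go. This is not the js-ipld API; it is a toy illustration using JSON and SHA-256 as stand-ins for a real IPLD codec and CID.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// BlockStore is a toy in-memory store keyed by a hash string.
type BlockStore map[string][]byte

// Put bundles the three steps — encode the value, derive an
// identifier from the encoded bytes, and store the block — into one
// call, mirroring the convenience of the old js-ipld abstraction.
func (s BlockStore) Put(v interface{}) (string, error) {
	data, err := json.Marshal(v) // stand-in for a real IPLD codec
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(data) // stand-in for a real CID
	id := hex.EncodeToString(sum[:])
	s[id] = data
	return id, nil
}

func main() {
	store := BlockStore{}
	id, err := store.Put(map[string]int{"answer": 42})
	if err != nil {
		panic(err)
	}
	fmt.Println(len(id), len(store)) // 64 1
}
```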
A
Again, it probably won't be called js-ipld; perhaps something else, or it would be part of js-ipfs. I don't know, but it turned out to be quite useful having this in one step. I'll go on with this in the next week and see how it goes, and hopefully get something working. Next is Eric.
B
If anyone wants to comment on it: there's been a PR out for a while now on the link system in go-ipld-prime, and it still hasn't officially landed, but for the record, as far as I'm concerned, it's fine to build on. The reason it hasn't landed yet is that I had some more conversations with the other people who are interested in the maintenance of our other Go libraries, especially go-multihash, which I ended up accidentally half replacing in the course of this go-ipld-prime PR. It sounds like we're agreed that we're actually going to try to update a bunch of stuff in go-multihash upstream.
B
That's kind of exciting. I'm a little daunted by it, because go-multihash is one of our older libraries and is heavily depended upon transitively, so we'll have to be very careful about making changes to it. But that's probably what I'm going to be up to in the next week.
B
There was some work I did as part of this link system PR that started using a bunch of standard library hash interfaces, which are a common thing in Go's standard library and support streaming hashes. I also did a bunch of work on the import paths so that you could compile your program without transitively depending on a bunch of unusual hash functions — or not even unusual ones, just ones not in the standard library. SHA-3, for example, is not in the Go standard library, so you'd be pulling in more transitive dependencies.
B
If
you
use
it
trying
to
migrate
all
that
stuff
to
a
reasonable
plug-in
system
so
that
you
can
choose
whether
or
not
you're
going
to
have
them
in
your
transit
dependency
tree
and
end
up
in
your
binary.
So
the
current
state
of
go
multi-hash
forces.
You
have
all
these
things
in
your
binary
and
we're
going
to
fix
that.
C
Well, I guess I'll provide an update of where we've been — and this is as much reporting on behalf of Hannah and Alex as myself.
C
There is a repo that has now taken shape, called ipfs/go-fetcher, which provides an ipld-prime fetching interface. That will allow us to transition away from the current go-merkledag based fetching interface that is used throughout ipfs. This is the next step of the ipld-prime-in-ipfs work.
C
There
is
a
I
would
like
to
get
some
data
based
on
a
sid
which
will
both
attempt
to
get
it
from
your
local
data
store,
but
also
will
potentially
invoke
network
things
to
fetch
it
through
bitswap
or
graphsync
or
it'll.
Do
this
whole
networky
thing?
So
that's
what
gofetcher
is
is
it's
the
I
would
like
to
get
data?
Please
then,
there's
a
set
of
interfaces
that
are
just
for
the
local
data
store
like.
C
I
would
like
to
add
this
data
into
my
local
data
store
and
I
would
like
to
remove
this
data
from
my
local
data
store
and
that's
really
a
codec
serialization
thing
that
the
link
system
storing
has
something
close
to
it
and
that
it
can
serialize
a
dag
into
a
byte
stream.
But
we
would
like
to
serialize
a
dag
into
your
data
store
please.
So
that's
a
slightly
higher
level
thing
that
just
built
on
top
of
it
so
we'll
put
that
somewhere
and
remove
a
dag
from
your
data
store.
C
And then the third one is: "I've got unixfs and I would like to add a file to a directory." So there's the set of mutation-y things, and those may just happen in the ipld-prime layer, or we may have some helper functions to move towards semi-standard ways of doing things, so that we don't have to change it in multiple places. Anyway, that's in flight. The other work I did last week is I took the git codec —
A
Things
any
updates
from
anyone
else
else
we
get
to
the
gen
items
so
also
feel
free
to
add
engine
items.
So
I
have
there
the
kind
of
like
update
from
danielle
and
also
talking
about
goku
deck
pb.
D
So,
to
recap,
from
last
week
will
was
asking
what's
the
difference
between
pld
prime
proto
and
go
kodak
deck
pb,
because
they're
both
simple
to
implement
the
deck
pb
codec.
I
don't
want
to
say
spec,
because
I
think
that's
only
the
latter,
but
essentially
we
have
to
choose
one
right,
because
what
I
want
to
maintain
both
and
something
that
hannah
said
last
week
is
it
doesn't
really
matter
which
one
we
choose
as
long
as
it
works
with
unix
fs
data
and
uses
ipld.
Prime,
so
I
started.
D
I
picked
up
this
task
of
okay,
let's
decide
which
one
we
want
to
use.
The
advantage
of
the
prime
one,
which
is
the
one
hana
wrote
is
that
it
uses,
I
think,
the
same
code
that
unix
fs
uses,
which
is
the
one
from
gomer
called
ag.
But
it's
not
api
wise.
It's
not
very
nice
in
terms
of
ipld
prime,
but
it
also
pulls
in
go
merkle
dac
as
a
dependency,
so
that
recursive
module
dependency
is
technically
not
a
problem
for
go.
D
It
builds
fine,
but
in
longer
term
it
would
be
a
problem
to
have
it
would
cause
problems
like
it's
just
a
web
of
dependencies.
That's
not
very
nice.
At
the
level
of
modules,
the
other
one
is
dac
pb,
which
is
one
that
rod
wrote
at
some
point,
and
the
question
was:
is
that
actually
compatible
with
unix
fs
data?
D
So
what
I
did
was
there's
a
package
in
gomercl
that
called
pb
and
it
has
a
table
driven
test
called
testcompat
and
it
uses
the
encoder
and
decoder
for
that
pb
in
gomerkeldaq,
and
it
has
a
bunch
of
test
cases
like
no
links
a
bunch
of
links,
some
data
blah
blah,
and
I
essentially
rewrote
that
using
that
pb
and
it
mostly
works,
there's
a
few
test
cases
where
it
doesn't
where
it
errors
out-
and
this
is
the
tricky
bit.
D
This
is
where
I
want
to
ask
you
all
if
you
think
this
will
cause
problems
for
reading
and
writing.
Unix
effects
data.
Some
of
the
tests
that
fail
are
to
be
expected.
They're
the
ones
like
having
a
nil
links,
slice
as
opposed
to
having
an
empty
links,
slice
and
that's
fine,
because
they
encode
to
the
same
thing
and
protobuf.
So
it's
just
a
different
way
to
represent
them
and
go,
but
as
long
as
they
encode
the
same
thing,
that's
fine
at
the
level
of
the
encoding.
That's
fine!
D
Oh,
I
might
need
to
reread
that
my
apologies,
the
other
test
case
that
failed
is
essentially
invalid
links,
so
links
that
lack
a
hash
that
kind
of
thing.
So
so
I
guess
the
question
there
there
is
I've
seen
that
the
spec
does
say
that
it
does
say
that
the
hash
technically
is
optional,
but
should
not
be
treated
as
such.
When
reading
and
writing
nodes,
but
has
anybody
verified
whether
unix
fs
v1
has
such
links.
E
No,
this
is
a
problem
that
now
you're
you're
touching
on,
which
is
a
good
thing.
So
part
of
the
issue
we
had
with
the
spec
for
dak
pb
was
just
getting
input
up
the
stack
into
unix
fs
and
people
that
actually
use
tag
pb.
So
there's
a
lot
of
brave
assumptions
being
made
there
about
what
can
and
can't
exist
and
what
to
deal
with
it
and
just
getting
answers
for
things
like
what
what
happened,
because
what
happens?
E
If,
if
I
get
a
hash
that
it's
not
a
valid
multi-hash
or
is
zero
bytes
like
what
hap?
What
do
I
do
with
that,
and
is
that?
Am
I
likely
to
encounter
that
anywhere
in
the
wild?
What
is
the
likelihood
of
that?
Getting
answers
to
that
has
been
very
difficult.
So
there's
some
brave
assumptions
made
in
that
spec
that
that
may
turn
out
to
be
problematic
and
it's
the
spec
is
not
set
in
stone.
It
is.
E
It basically just says: here's go-merkledag doing this PBNode/PBLink stuff; let's just wrap that around whatever comes out of it and not make any assumptions at all. It'll take whatever go-merkledag gives you, but what go-merkledag gives you is basically just a stream of consciousness that arises out of the bytes, with no checking — it's raw pb. And we know that —
E
That's not quite the case, because on the encode side there are some rules, so it's complicated. If you go to that link, it's kind of hidden now because the doc got updated, but I wrote a whole lot of stuff there about the differences. It is good that you're touching on this now and getting into real-world data. I can't say for sure whether the tests you're seeing failing are actually accounting for real-world things, or just accounting for things that are in the code.
D
We can't really use it, but let's encode it again and pass it somewhere else — though if you do that, you might as well use the original bytes. So I don't know. I guess what I'll do is just continue with this — sorry, continue using go-codec-dagpb — but in the case that we do find that unixfs does use this kind of semi-broken data,
D
I
guess
we'll
get
into
a
territory
of
making
the
spec
more
lags
to
actually
be
compatible
with
unix
of
s.
Okay
and.
D
Yeah, so that was me. We'll most likely use go-codec-dagpb, because I can't really experiment any more than what I've already done, and, like you say, it's hard to tell whether the tests were driven by real data or not.
E
And
here's
another
thing
actually
so
go
merkle.
Dag
pb
also
has
additional
tests
that
I
put
in
there
as
part
of
the
spec
process.
So
it's
again
it's
going
to
be
even
harder
to
figure
to
disambiguate.
What's
where
for
what,
because,
when
I
was
doing
that
testing
work,
that
was
really
testing
edge.
E
The
edges
of
you
know
what
is
a
pv
unknown
in
pv
link
and
and
what
does
this
thing
produce
in
these
different
states
and
finding
that
there's
a
couple
of
different
states
where
you
can
get
the
same
bytes
from
different
data,
and
so
that
was
part
of
the
process
of
getting
to
the
spec
like
that.
We
should
have
one
way
to
get
to
the
same
data
like
one
data
equals
one
bytes.
So
it's
just.
E
There
is
a
looseness
that
has
been
around
this
whole
area
for
a
very
long
time
that
is
inherited
from
protobuf,
but
also
just
just
from
the
you
know.
Let's
make
it
work
and
move
on
kind
of
thing,
which
is
fair
enough
in
these
early
early
things.
But
you
know
we
only
just
now
have
been
going
back
to
try
and
formalize
it.
So.
E
Right, and you'll see those tests have been mirrored in go-codec-dagpb, but I've commented out some of the ones that I've asserted shouldn't be passing. So the same tests are there in go-codec-dagpb, in a slightly different format, but with a few removed — and I think they're still in the file, I've just put notes saying "this is bad." That compat test is me testing the limits of what go-merkledag does.
D
Okay, last question, I promise: any other tests that you think I should look at? Because besides TestCompat, I didn't really find any tests that seemed worthwhile.
E
Yeah, true. This is the thing: when I was going through this, I was just hungry for real-world data, and it's like — where do I get representative real-world data from, other than just picking someone's random dag? It doesn't exist. It's not encoded in the tests, and it's a real problem. I'd love to see real-world data that tests all the edges, but no one's really collected that, and it's really hard to collect in a day, kind of thing.
A
Danielle
just
in
case
so
in
so
as
what
said
currently
the
go:
codec
deck
pb
is
the
same
as
the
javascript
one,
and
basically
this
is
what
I
currently
work
on
getting
this
new
codec
into
unisfs.
So
just
in
case
you
make
any
changes
because
of
like
edge
cases
or
whatever.
Please
like
take
notes
or
something
so
that
we
can
also
then
update
the
javascript
version,
because
yeah
yeah
cool
yeah.
F
Cool. I just want to ask Rod: can you clarify, when you say real-world data that pushes the edges, what do you mean exactly by that?
E
Okay,
what
I
mean
is
dag
pb
is
the
most
used
codec
in
ipfs
yeah
like
it's
like
the
other
ones,
don't
even
write
a
mention
as
much
as
we'd
love,
dyke's,
e42,
and
so
this
stuff
is
out
there
in
the
wild
being
pushed
around
and
it's
hitting
edges,
and
we
have
these.
We
have
a
codec
on
the
deco
side.
That
is
extremely
sloppy
right
now,
so
on
decode
go
merkle.
E
Dagpb
will
there's
it's
a
lot
of
ways
that
it'll
just
say:
yeah
that
sounds
good,
I'll
use
that
cool
and
it'll
decode
really
sloppy
bytes
on
the
end
code
side,
it's
more
predictable,
there's
a
determinism
to
the
end
code,
but
the
decode
is
very
independent
undeterministic.
So
so,
some
of
the
some
of
the
decisions
that
went
into
the
spec
was
closing
some
of
those
holes
and
saying
well.
E
We
shouldn't
see
bytes
that
are
of
this
form,
because
we
I
we
don't
see
a
d,
an
encoder
that
we
have,
that
is
producing
them,
but
that's
a
very
brave
assumption
that
those
things
don't
exist
because
there's
other
things
creating
dag
pp
data
and
there's
also
a
lot
of
historical
wpb
data
that
still
lives
and
so
real
world,
meaning
this
dag
pb
stuff
out
there
that
people
are
passing
around
on
ipfs.
That
may
have
existed
for
many
years
that
pushes
the
limits
of
what
we
do.
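The lenient-versus-strict decode distinction can be sketched as follows. `Link`, `decodeLenient`, and `decodeStrict` are hypothetical names for illustration, not the real codec's API; the point is only that the spec tightens what the old code silently accepted.

```go
package main

import (
	"errors"
	"fmt"
)

// Link is a simplified stand-in for a dag-pb PBLink.
type Link struct {
	Hash []byte
	Name string
}

// decodeLenient mirrors the old behaviour described above: accept
// whatever arises out of the bytes, including links with no hash.
func decodeLenient(links []Link) []Link {
	return links
}

// decodeStrict mirrors the spec's tightening: a link without a hash
// is rejected instead of being silently passed through.
func decodeStrict(links []Link) ([]Link, error) {
	for _, l := range links {
		if len(l.Hash) == 0 {
			return nil, errors.New("dag-pb: link is missing a hash")
		}
	}
	return links, nil
}

func main() {
	bad := []Link{{Name: "no-hash"}}
	fmt.Println(len(decodeLenient(bad))) // 1
	_, err := decodeStrict(bad)
	fmt.Println(err != nil) // true
}
```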
F
That's
basically
why
I
was
asking
because,
like
what
does
representative
mean
because,
like
I,
you
know
when
I
when
I
first
learned
about
ipfs,
I
did
a
load
of
very
like
weird
protobuf
stuff-
that's
not
represented
right,
I
mean
they're
still
on
github.
You
can
still
like
import
them
and
stuff
like
that,
but
this
doesn't
mean
that
you
should
support
them
so
yeah
from
that
perspective,
but
now
yeah
it
can
answer
my
question.
You
are
aware
that
I
forgot
which
subsystem
it
is
something
to
do
with
geolocation
within
the
within
lip.
E
I
knew
there
was
some
prototype
stuff.
I
didn't
know
they
were
using
date,
pb,
but
see,
and
I
never
stepped
up.
I
never
stepped
up
the
level
to
unix
fs,
which
was
which
was
my
intention
to
get
there,
because
it's
like
it's
this
weird
onion
that
we
do
with
pvp
decoding.
A
Will the data end up in some cross-language fixture kind of thing? I was wondering because the rust-ipfs people obviously have their own dag-pb encoder, which I've seen — and I think I fixed some things — but I've never really looked into how it actually works or whether it does the right thing, and I would assume it probably doesn't do the same thing as the other ones. So I guess it would make sense, because I would still consider them early.
G
I wonder if there are any old OpenBazaar dag files floating around, because they would have been pretty early users and creators of dag-structured data that could be used for fixtures, and they might be fine with providing access now that they've kind of shut that down and moved on a bit.
E
Cross-Language
test
features:
that's
I'm
handing
that
over
to
eric
it's
his
responsibility.
Now
my
plan
for
this
was
I
was.
I
was
working
my
way
to
to
dag
jason
and
I
wanted
to
get
that
title
up.
So
I
could
then
use
dag
chasing
to
sit
back
into
these
other
things
and
represent
things
in
dag
jason
and
and
use
them
as
test
cases
for
pushing
the
other
formats
around.
So
it's
a
circular
thing,
but
jason
isn't
quite
finished
up.
A
Okay,
cool
yeah-
I
it
just
reminded
me
that
also
like
even
in
rust,
ipld
there's
a
deck
pb
codec,
which
I
probably
should
check
out.
Yeah
cool
anything
else.
Let
me
check
the
agenda.
No,
all
right.
Does
anyone
have
any
other
items
or
wants
to
discuss
anything?
G
So
I
have
one
question
for
eric
about
and
if
michael
was
here,
I
would
ask
him
to,
but
so
the
textile
team
has
found
ourselves
in
need
of
a
hash,
consistent,
sorted
data
structure.
G
I've
been
playing
around
with
the
javascript
stuff,
it's
just
lovely,
actually
the
chuck
and
I
think,
michael's
chunky
trees.
Library
is
a
nice
implementation
of
that.
Do
you
know
of
any
go
implementations
of
something
like
that,
even
that
I
could
build
on.
G
I don't think so. I mean, I know there's the one from that paper I linked you guys to — I can't remember what that paper was called — but anyway, there's an implementation of that one, which has some similar ideas, that's been done in Go. So I'm guessing a bunch of copy-pasta could be done there to make something compatible. But I was just wondering, because we'll do it if it doesn't exist.
E
So
the
path
that
I
think
we
had
on
that
was
michael
was
trying
to
cement
something
like
trying
to
really
get
to
a
some
end
point
where
we're
like
this.
This
feels
good
enough
and
then
then
perhaps
propose
to
do
work
that
provides
multiple
implementations
of
it.
That's
that's
sort
of
on
hold
at
the
moment,
michael,
is
pushing
that
I
think
he's
you
know,
he's
got
his
interest
in
ip
sequel
and
he's
he's
still
pursuing
that.
But
michael's
prioritization
has
changed
quite
a
bit,
as
has
some
of
ours
as
well.
E
So
if
you
do
end
up
pursuing
that
further
and
it'd
be
great,
if
you
could
keep
us
in
the
loop
and
we
might
be
able
to
connect
many
dots
together.
G
Cool
yeah
I
mean,
if
we
do
it's
going
to
be
a
pretty
minimal,
like
implementation,
just
to
satisfy
our
primary
needs,
but
it
could
provide
some
baseline
for
something
else,
because
ultimately,
it's
probably
going
to
end
up
being
a
pretty
lazy
data
structure
that
we
need
and
so
it'll
be
pretty
minimal.
Just
like
take
some
data
figure
out
where
in
the
tree
it
belongs
and
then
like
that's
about
as
far
as
we'll
ever
have
to
go,
but
anyway
that's
useful.
B
Yeah,
so
we
we
did
have
a
recent
fresh
implementation
of
some
of
the
hemp
stuff,
which
is
so
it's
also
a
hash
tree.
But
it's
not
it's
almost
the
opposite
of
sorted
right.
So
probably
not
the
exact
implementation
that
you
want,
but
we
were
working
on.
One
of
those
really
recently
and
danny
was
actually
just
wrapping
up
a
bunch
of
the
stuff
on
that,
and
so
it's
probable
that
that
would
also
be
a
really
good
example
of
how
we
would
do
it
and
then,
and
then
just
the
sharding
logic,
I'm
sure
will
be
different.
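The "opposite of sorted" remark can be made concrete: a HAMT places entries by bits of the key's hash rather than by key order, so iteration is effectively unsorted. A minimal sketch of the slot-selection step, not any real library's API:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// hamtIndex shows the core HAMT idea: take bits of the key's hash to
// pick a slot at each level of the trie. Because placement follows
// hash bits rather than key order, a HAMT does not keep its keys
// sorted — which is why it doesn't fit the sorted-structure need.
func hamtIndex(key string, level int) int {
	sum := sha256.Sum256([]byte(key))
	// Use 8 bits per level here for simplicity (256-way fanout);
	// real HAMTs commonly use 5 bits (32-way fanout).
	return int(sum[level])
}

func main() {
	for _, k := range []string{"a", "b", "c"} {
		fmt.Printf("%s -> level-0 slot %d\n", k, hamtIndex(k, 0))
	}
}
```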
B
Where should I look for that? I think it's github.com/ipld/go-something, adl-hamt, or something like that — I made that up off the top of my head, assuming our naming scheme is consistent.
B
It's
not,
I
guess,
wrong,
we'll
find
it
somewhere.
G
That's
great
that'll
be
that's
very
helpful
and
then
my
other
question
is
more
of
a
comment,
which
is
what
I
call
questions
that
are
actually
really
in
the
form
of
a
comment.
G
Usually
you
get
them
at
conferences
and
things
like
that
where
people
pretend
to
ask
a
question
but
are
really
making
some
snag
comment
about
something
you
presented,
but
I
just
wanted
to
mention
that
there
may
be
some
real
world
edge,
casey
type
hack
arounds
in
some
of
that
originally
we'd
written
a
javascript
implementation
called
ipfs
lite
and
now
it's
been
migrated
over
to
some
ipfs
repos
and
it's
called
like
ipl
ipfs.
G
It's a bit out of date now — it's not up to date with the latest js-multiformats stuff — but there may be some comments like "note: this is here because of incompatibilities between cid libraries," and those notes might be valuable.
G
Also,
that's
kind
of
like
an
interface
that
I
just
recently
redid.
Actually
I
kind
of
wish
that
ipfs
js
ipfs
built
their
like
dag
service
in
a
composable
way
like
that,
so
that
you
could
just
swap
in
a
different
one.
G
So
if
you're
in
there
messing
around
it
would
be
wonderful
to
like
make
that
dag
resolver
a
bit
more
plugable
just
a
suggestion,
because
I
know
that
rackley
wants
that
as
well
for
some
of
his
work.
So
that's
my
comment.
G
Dag
service
is
huge
because
it
encompasses
like
so
much
of
what
you
actually
really
only
need
ipfs
to
do.
In
a
like
lightweight
browser
environment.
I
need
to
fetch
some
dags
pull
them
together
and
display
something
for
you
and
all
I
need
for
that
is
like
access
to
a
block
store,
and
I
live
p2p
host
and
a
dag
service.
That
knows
like
how
to
deal
with
those
and
that's
it.
I
don't
need
pinning
apis.
I
don't
need
you
know
ipns,
I
don't
need
pub
sub.
I
don't
need
like
all
these
other
sub
modules.
G
One of the reasons I was pulling it apart and rebuilding our own is that, instead of bitswap, we wanted a graphsync-type protocol: with our two peers, we know that they are tracking the same dags, so there's no point in a more general bitswap — they might as well just do a sync together. Being able to swap in those components is nice as well.
A
Cool
yes,
yeah.
This
kind
of
thing
has
been
like
on
my
agenda
since
I
joined
protocol
labs
and
worked
on
ipld.
It
was
like.
Oh
I
just
I
just
need
the
ipld.
I
don't
need
all
those
like
ipfs
things,
just
just
networking,
storage
ipld,
that's
it
but
yeah
yeah.
So
eventually
we'll
see
cool
anything
else.