From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2020-07-28
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
For more information on IPFS
- visit the project website: https://ipfs.io
- or follow IPFS on Twitter: https://twitter.com/IPFS
Sign up to get IPFS news, including releases, ecosystem updates, and community announcements in your inbox, each Tuesday: http://eepurl.com/gL2Pi5
A
Welcome everyone to this week's IPLD meeting. It's July the 27th, 2020, and as every week, we go over the stuff we've worked on in the past week, any open issues we might want to discuss, and any agenda items. I'll start with myself. I mostly worked on rust-multihash. The story there is that there was a huge PR on rust-multihash to make it not allocate as much and just be stack allocated.

And then there was a bit of discussion about it, and then people got annoyed with each other, so the PR was closed. But the idea was still good, and it was refactored into something called tiny-multihash, and I still want to get this code upstream into the normal rust-multihash.

So I worked with the creator of this PR on it. And I've made a pull request, because the code felt too complicated to me and I couldn't really tell why; it just didn't click, and it was strange. Then I took the time to really dig into it, which also led to some documentation I haven't published yet.

It's on my machine; it's about what multihash libraries should do in general, because there are several features where it might not be obvious that a library should support them. And while I was digging into how it works and so on, I finally figured out a way that I think is simpler. So I made this pull request, which is also linked in the notes, which I think is a simpler approach, and it's very similar to the old one.

It's still quite complicated, but if anyone wants to check it out, feel free, and leave comments about it. There isn't any documentation in there yet, so it's probably still hard to follow, but that will come in upcoming commits, and hopefully it will be solid enough to then be moved upstream.
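For context, the idea under discussion, a multihash whose digest lives in a fixed-size buffer instead of a heap allocation, can be sketched in Go along these lines (an illustration of the concept only; the type and function names are assumptions, not the rust-multihash or tiny-multihash API):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Multihash sketches a stack-allocatable multihash: instead of a
// heap-allocated byte slice, the digest sits in a fixed-size array and
// Size records how much of it is actually used.
type Multihash struct {
	Code   uint64   // multicodec code of the hash function (0x12 = sha2-256)
	Size   uint8    // digest length in bytes
	Digest [64]byte // big enough for the largest supported digest
}

// Sum hashes the input and returns the multihash by value, so nothing
// needs to escape to the heap.
func Sum(data []byte) Multihash {
	d := sha256.Sum256(data)
	m := Multihash{Code: 0x12, Size: uint8(len(d))}
	copy(m.Digest[:], d[:])
	return m
}

func main() {
	m := Sum([]byte("hello"))
	fmt.Printf("code=0x%x size=%d digest=%x\n", m.Code, m.Size, m.Digest[:m.Size])
}
```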
A
Yes, that's all I have, and of course next week I will probably also work on those things. Next on my list is Eric.
B
So I did a lot of docs again, and a bit more research on advanced data layouts, some of which showed up in the docs repo, and some of which was together with Rod.

I've been looking a little bit at HAMTs in the wild, as they appear in some of the Filecoin stuff, the Lotus codebases, and things in that area, just trying to extract some understanding of what, in practice, those folks are doing with these structures already, and to gain information about user stories from that. It has been really interesting: it turns out they're doing a lot of stuff in really very direct ways that we probably would have discounted if we hadn't seen people solving real problems that way. So yeah, that's been interesting.

I did a little bit more work on the golang codegen stuff. There are some PRs that are boring bug fixes, don't look at those, but there's also another PR that is maybe fun to look at: I cleaned up the generation of the schema-schema enough that it's actually a commit you can look at, and it will probably be headed to master sometime soon, because it appears to work now.
D
Yes, my update is going to be super short. I actually didn't get the chance to work on anything IPLD-related last week. I am essentially working on my Filecoin tester certification, air quotes. We're basically working on various ways and designs for how to test and how to use Filecoin with Dumbo Drop down the line, and this involves a ton of meetings and clarifications and stuff like that. Only one thing, I guess, is somewhat IPLD-related.

I looked into implementing my own CommP calculator, because the thing included in Lotus is based on rust-fil-proofs and is super resource-intensive.

So I just wanted to see how hard it can be, and looking over the stuff that Rod wrote way back in JavaScript, I actually ran into inconsistencies, not only with small files, but with any CAR file of a specific size that is very close to the limit of a piece.

So now I'm actually downloading a couple of CAR files that kind of match the description, to see if our CommPs are incorrectly calculated, and if I do find files like this, we can figure out who is incorrect, whether it's Lotus now or what was there before. But yeah, that's all I have.
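The edge case described here is plausible because of how Filecoin sizes pieces: payload bytes are Fr32-expanded (every 127 payload bytes become 128 bytes) and the result is rounded up to a power of two, so a CAR file sitting just under a boundary can tip over it. A rough sketch of that arithmetic in Go (my own illustration of the published piece-size rules, not the Lotus or JavaScript CommP code):

```go
package main

import "fmt"

// paddedPieceSize sketches Filecoin piece sizing: the payload is
// Fr32-expanded (127 payload bytes -> 128 bytes on disk) and the result
// is then rounded up to the next power of two.
func paddedPieceSize(payload uint64) uint64 {
	fr32 := (payload + 126) / 127 * 128 // ceil(payload/127) * 128
	p := uint64(1)
	for p < fr32 {
		p <<= 1
	}
	return p
}

func main() {
	// A CAR file "very close to the limit" of a 32 GiB piece:
	limit := uint64(32 << 30)
	payload := uint64(127) * (limit / 128) // largest payload that still fits
	fmt.Println(paddedPieceSize(payload) == limit) // true
	fmt.Println(paddedPieceSize(payload + 1))      // one more byte jumps to 64 GiB
}
```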
A
Thank you. Next is Chris.
E
Hey guys. So last week I've been working on a little more refactoring of Dumbo Drop, to unify the configuration. Right now there's some stuff in environment variables, some stuff in command line arguments, and I think something else, so I'm going to clean that up. I also worked on making its processing more deterministic.

Right now it's not deterministic, which works, I mean, but I think it'd be better; I think we could make some changes to make it deterministic, which would enable some things. So a bit of work there. I also worked on some application-developer-oriented documentation: thinking through a typical developer, and if they're going to IPLD-ify their app, how can they do that? I have some interesting starting work on things like configuration (you know, something you would put in a dot file or something) and logging.

How would you log to IPLD? How would you manage standing data, input files, output files? It's actually pretty cool. The one problem, though, is that as I keep thinking it through, I keep discovering new stuff, and I get into this spin lock where I write a page and then rewrite it, because I think of a better way of presenting it. So I'm not happy with it; I'm not comfortable with committing it yet.

So that's been frustrating. The other thing is I've had some other distractions, some stuff going on that took away my time last week. I also got a chance to start playing around with DagDB, because, talking to Mikeal, one of the things I want to do is take Dumbo Drop, which is a typical application design where you have, you know, configuration, logging, and all this kind of stuff that drives it, sitting in traditional files, and see if we can move all of that, like a hundred percent, into IPLD. And DagDB is kind of one of the key pieces we need to make that possible, because you have to be able to name your objects, otherwise you're writing those names out to a file. So the goal here is: all the configuration, all the input files, all the logs, everything is in IPLD, and we use DagDB quite extensively.

I personally think that is a key missing piece to really make IPLD easier to use in a complete sense, for a lot of standard things applications do. So I'm pretty excited about it, and anyway, that's my update.
A
Thank you. Next is Rod.
C
Okay. I spent a chunk of time in the Filecoin HAMT, which is in the ipfs GitHub org and is named go-hamt-ipld.

There was discussion about moving it to the filecoin-project org, and so I've opened up an issue there to formalize that, and it looks like there's some agreement around that. One of the reasons that's relevant to this team is its place: you know, it's in the ipfs org for historical reasons, but its name as well, and also that we link to it in our HashMap spec.

Its layout is slightly different, and there's also this question of context that keeps on coming up: when you load this thing, how do you know what parameters it's using, to be able to read through the structure? This one takes the approach that the parameters always come from the code where you're loading it. So you will know, when you load this thing, what the parameters were that created it, and you don't need to find them anywhere in the data itself, whereas in the HashMap spec and elsewhere they live in the data.
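To make the contrast concrete, here is a sketch of the two shapes as Go types (illustrative only; neither the actual go-hamt-ipld structs nor the exact hashmap-spec layout):

```go
package hamt

// Two ways a HAMT can learn its parameters, sketched as Go types.

// SpecRoot follows the hashmap-spec style: the root block carries its own
// parameters, so any reader can walk the structure with no out-of-band
// context.
type SpecRoot struct {
	HashAlg    uint64 // multicodec code of the key hash function
	BitWidth   uint8  // bits of hash consumed per level
	BucketSize uint8  // entries per leaf bucket
	Node              // the root node itself
}

// Node follows the go-hamt-ipld style: blocks hold only the data, and the
// bit width and hash function are fixed by the loading code, so the bytes
// alone are ambiguous.
type Node struct {
	Bitfield []byte
	Pointers []Pointer
}

type Pointer struct {
	Link []byte // CID of a child node, or
	KVs  []KV   // an inline bucket of entries
}

type KV struct {
	Key   []byte
	Value []byte
}
```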
C
So that's a big divergence with this one, and it's not going to change with Filecoin, because Filecoin has got this very strong opinion about versioning and where that information comes from, and also about the importance of bytes. So that's lacking from this one, and it'd be good not to give the impression that this is a generic workhorse that people can rely on. And, as Eric's been saying, we're not too far off having a new Go one that users can just pull off the shelf and use. So that'll be moved, hopefully.

There's a pull request there, number 52, that's just chock full of documentation. I've gone through the code, documented it, added some surrounding documentation, found a bunch of to-dos in there for things that need testing and a couple of things that look like bugs, and it spawned some really interesting conversations with Eric about this stuff.

Another thing I did was have a good chat with the folks over at VulcanizeDB. They are combining IPLD, cryptocurrency blockchains, and Postgres to do query and verification work. IPLD comes in handy because it helps to prove stuff (these blockchains contain Merkle trees that you can do proofs on), and so sticking all that in Postgres is interesting from that angle.

But it's also a good way to hold the data and format it, so that's interesting. It turns out they've got a ton of overlap with the work I've been doing on blockchains. I've been doing Bitcoin and Zcash; they've put those aside, even though they have some support for them.

Theirs is just not complete yet, but they've been doing Ethereum work, and I haven't done all the Ethereum work yet; it's turning out to be a nightmare. They've taken some older Ethereum IPLD work and re-implemented or upgraded it all to actually work fully with current Ethereum data. They've even implemented their own format for representing Ethereum state, which is kind of complicated, so they have a fork of the main Ethereum client that they use to extract state information. A really interesting conversation, and I think for the Ethereum archiving work I'm doing, I need to lean on what they're doing, whether that's trying to get them to participate or just using their work, and then also, as we've talked about, either upstreaming or extracting their IPLD work, because right now it's embedded deep in a repo that does all this other work. It would be nice to get those things out as codecs, and we even talked about doing those codecs for ipld-prime. So Ethereum's the main one there, but they've also done some work on Bitcoin and something else, so yeah, there's good collaboration space there that I wanted to shout out. I also spent some time in CBOR again, just tinkering at the edges.
C
CBOR has two modes for lengths: you either say what the length is up front, or you say the length is indefinite and wait until I give you a break. That's good for streaming, but it's not good for DAG-CBOR, because it means there's variation in the layout you can have for any given set of data, and so for DAG-CBOR I don't want that.
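For reference, CBOR signals "indefinite" in the initial byte of an item, so a strict decoder can reject it up front. A minimal hand-rolled check (my own sketch, not taken from any particular library):

```go
package main

import "fmt"

// isIndefinite reports whether a CBOR initial byte opens an
// indefinite-length item: major types 2 (bytes), 3 (text), 4 (array),
// and 5 (map) with additional-info value 31.
func isIndefinite(initial byte) bool {
	major := initial >> 5
	addl := initial & 0x1f
	return addl == 31 && major >= 2 && major <= 5
}

func main() {
	fmt.Println(isIndefinite(0x9f)) // true: indefinite-length array
	fmt.Println(isIndefinite(0x84)) // false: definite array of 4 items
}
```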
C
I want to be able to turn that off, but to be able to use test fixtures to make sure I'm compatible with everything, it's sort of nice to have that coverage in there. So I've already got this evolving in there: this idea of strictness, and flags to turn features on and off, and this is going to have to be one of them if I fully implement it, where for DAG-CBOR we could say: no, you can neither write like that, nor can you read data like that and get away with it.

And that's one of many. This is really educational in terms of the ideal IPLD format, and how CBOR is not it in its current state, and even the way we've been making DAG-CBOR more strict is probably not it either.

It's not doing any strictness checking at all, because they're making performance their top priority. So it'll read sloppy data, which could be a problem for their blockchain, but that's a different issue. It also currently just doesn't sort the keys properly in maps, so it won't even write data according to the DAG-CBOR spec. So this is why CBOR as-is is not a great solution for IPLD, but maybe there's some sort of evolution.
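The sorting rule in question is the RFC 7049 canonical ordering referenced by the DAG-CBOR spec: map keys sorted by encoded length first, then bytewise. A small sketch of that comparator:

```go
package main

import (
	"bytes"
	"fmt"
	"sort"
)

// canonicalLess implements RFC 7049 canonical map-key ordering: shorter
// encoded keys sort first, and equal lengths sort bytewise.
func canonicalLess(a, b []byte) bool {
	if len(a) != len(b) {
		return len(a) < len(b)
	}
	return bytes.Compare(a, b) < 0
}

func main() {
	keys := [][]byte{[]byte("bb"), []byte("a"), []byte("ba")}
	sort.Slice(keys, func(i, j int) bool { return canonicalLess(keys[i], keys[j]) })
	for _, k := range keys {
		fmt.Println(string(k)) // prints: a, ba, bb
	}
}
```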
C
Maybe there's something we can do with CBOR where it's not only strictness: whole features just get cut out, and you can't go anywhere near those features and still be valid. I was thinking about it, and I was imagining, instead of DAG-CBOR, something like "DAG-CBOR strict": make an entirely new multicodec codec that is just super strict, where you have to do all this stuff to implement it, you have to cut out features, and you have to validate all sorts of stuff on read to be "DAG-CBOR strict". Maybe that could represent our next evolution of data format: instead of going fully our own, actually carve out a very strict space in CBOR, call it DAG-CBOR strict, and you have to do this stuff in and out. Sorry, Peter.
C
Yes, yes, it's a problem. The spec is strict now, but nothing out there is strict enough; none of them are doing all this stuff, and we have this float problem as well, which is live, and we have real data being produced with DAG-CBOR. Like, when Filecoin goes live, it's producing DAG-CBOR blocks that will live forever, and if the sorting stays the way it is, they will not be valid according to the spec.

How do we deal with live data that we can't change, when we've got this spec that says one thing, and then we start switching on our DAG-CBOR codecs to be more strict for reading data? How does that work? That's crazy land. It's like: okay, you're reading Filecoin data, therefore turn off your strictness checks.

Well, I did actually wonder whether that would be the case, but the CBOR encoder there is pretty good, and it doesn't implement much either; I don't think it implements floats at all, and it implements just enough, which is ideal.

It's just this strictness thing that concerns me, because there's nothing to stop other Filecoin clients from producing, say, data encoding every integer as a 64-bit one.

It will still read them, even though it shouldn't according to the strictness rules. But that's also the same with all of our other DAG-CBOR codecs as well: they will all read bad data and say, yeah, that's fine, I don't care. Nothing will reject it and say: hey, this is not right, this shouldn't have been encoded like this, this hash is wrong for this set of data.

Anyway, the last thing, continuing on these ramblings: there's a pull request in the specs repo, 283, noting some of this JavaScript number craziness, because that's another area where we run into trouble, and we will run into trouble with every codec, which is the JavaScript number layout. I was going to go into some detail there as well, just in general, about typed versus untyped and the problem with numbers there.
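The number problem is the usual IEEE-754 one: JavaScript numbers are float64, whose 53-bit significand cannot distinguish neighbouring integers above 2^53. The same float64 behaviour, illustrated in Go:

```go
package main

import "fmt"

func main() {
	// float64 has a 53-bit significand, so 2^53 and 2^53+1 collapse to
	// the same value; this is why a codec can't round-trip big integers
	// through JavaScript numbers.
	a := float64(1 << 53)
	b := float64(1<<53 + 1)
	fmt.Println(a == b) // true
}
```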
F
Yeah, hey, yeah, sorry: I was commenting on GitHub. Okay, I did some stuff. Oh yeah, that's right, everybody needs to apologize to me for last week, for thinking that I was not going to figure this out and get everything building for all the different stuff with ESM. So anyway, you're welcome: that all works, and there's a tool called limbo that we can use to manage those builds now.

That's all awesome, and it got me thinking about that problem in general, because it is a lot of builds to manage, just a lot of extra stuff to manage. So I started thinking about that, and I started building this thing called ipjs, which is basically a build system to handle this actual problem that we have, but which also, internally, can use a data structure that's in IPLD, and can have a native package format.

So, in addition to making things that can be easily published to npm like we need right now, things that are unified, that are universal JavaScript, we also have a future open to us where we can do really cool stuff with all the new ESM and all these data structures. So I wrote a bunch of code, did a demo of that, and sent it around.

That's pretty cool. And then I wrote a bunch of other code in other places. I think I fixed a bunch of bugs in Block and in multiformats and DagDB, all over, and I wrote some docs. But yeah, that's me. I don't know who's next.
E
Yeah, so one of the things I've been thinking about, from an application development point of view: a lot of times text data gets compressed with gzip or whatever, and once you move to IPLD, yeah, you could gzip something and stick it in as a raw block, or you could hope that there's a block store that will automatically compress stuff. But, as you know, some blocks may be more compressible than others.

So you run into a problem where, if you don't have any kind of hint, you don't know if you should try to compress it without wasting time. And so, I don't know, I'm sure you guys have talked about compression before, but based on my thinking, it seems like having, you know, a DAG-CBOR-gzip codec would make sense, or something like that. So essentially the data is compressed, but you can actually follow links: you have to decompress it first in the codec, but you can do it.
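As a concrete picture of what is being proposed (a purely hypothetical codec; no such multicodec exists), the decode side would simply layer gunzip under an ordinary DAG-CBOR decode:

```go
package dagcborgzip

import (
	"compress/gzip"
	"errors"
	"io"
)

// decodeDagCBOR is a stand-in for whichever DAG-CBOR decoder the stack
// already uses (hypothetical; a real version would wire in go-ipld-cbor
// or similar here).
func decodeDagCBOR(payload []byte) (interface{}, error) {
	return nil, errors.New("plug a real DAG-CBOR decoder in here")
}

// Decode sketches a hypothetical "dag-cbor-gzip" codec: the block bytes
// are a gzip stream whose payload is ordinary DAG-CBOR, so links stay
// traversable once the payload is inflated.
func Decode(r io.Reader) (interface{}, error) {
	zr, err := gzip.NewReader(r)
	if err != nil {
		return nil, err
	}
	defer zr.Close()
	payload, err := io.ReadAll(zr)
	if err != nil {
		return nil, err
	}
	return decodeDagCBOR(payload)
}
```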
E
You can do it pretty memory- and CPU-efficiently with something like gzip; it's pretty fast. And then the developer would get to choose: okay, I know this is going to be a highly compressible data structure, so I'll use that. But if it's not, let's say you have byte arrays or something embedded in your CBOR that holds compressed chunks or something, then you can just use typical DAG-CBOR; you're not going to get the benefit. So what have you guys talked about as far as codecs?
D
Before we go into the answer, I have a question; this very question comes up quite often. In your thinking, Chris, what are you solving by introducing this codec?

E
So what I'm solving is, I guess, more efficient compression for things that are highly compressible. When you think about an application developer that wants to move as much as possible, if not everything, into IPLD, one of the stumbling blocks is quickly going to be: well, now I'm using a ton more data storage than I was before, because it can't compress. And so I think, you know, allowing the developer to have some control over the data...
D
Mikeal, do you want to answer that?

F
Sorry, I missed it; I'm still commenting on this dumb thing. Okay, well, sorry, there's a codec...
B
So Peter actually just thought through a bunch of this stuff recently, in text in some other channels, so maybe he'll be super ready to answer it. But the counter-force that's always incredibly strong is: if you do a hash over the compression, that's just very likely to produce enormous amounts of sadness downstream, because you can never vary the compression again, and the chance that you froze in the ideal compression algorithm is approximately zero.
F
Yeah, links, yeah. So I'll say a couple of things, because, yeah, I do have several threads about this that are in different places, so they're not easily discoverable. One is that it is just kind of an obvious, easy win for storage abstractions and network abstractions to implement compression. If you have a store and you're worried about size limitations, and you're willing to trade compute for storage bytes, implement compression there, and then you can compress everything regardless of the codec.

That's where you want to get that win, and where you're in the best position to decide whether or not you want to make that trade-off. Transports should all just implement basic compression, because we have optimized libraries for transport compression; there's no reason that they don't, other than that people are lazy, which is the case right now. That doesn't mean that I'm opposed to any form of compression in a codec.

I'm just opposed to taking general compression solutions, taking a serialization library, like a format, and pairing them together as a new codec. Because, one, it creates a multitude of codecs, one for every compression, and then none of the data that gets created across them is deduplicatable, and we don't have unified addresses for it.

But if you imagine a new block format: if we were writing a new block format, we would probably have some form of application-specific compression in there. We would be able to say: oh, if a CID appears hundreds of times in the same block, there's no reason to write it hundreds of times in the same block. We could write it once, then maintain references to it, and create a really optimized block format. And you can see this across some other format libraries: if you look at the HTTP header compression in HTTP/2 (and I think it's also in HTTP/3), it's very specific to headers. It understands the problem space enough that it can create a really efficient, application-specific solution.

No, no, but they're going to then put that data into the network and send it to other people, and you just made that decision for all the other people. Whereas the developer also picks their block store, and if they want to turn on compression in their block store, they get all this compression for all of their storage, and they've been able to make that trade-off in isolation, for themselves, and not for the data that then goes out to the network.
C
Something similar, not the same thing, but the same class of question, came up just yesterday, I think it was, with this HAMT. In the HAMT spec that we've got for the hashmap, we say that murmur3, a particular type, the 64-bit version of murmur3, the x64 version of it, is the algorithm to use for it, which is great because, you know, that's not a cryptographic hash, it's just a good hashing function for generic use. And then Filecoin is now discussing, in fact they've just merged it, that they're going to switch that to a cryptographically secure hash, so that they avoid the problems associated with hash collisions, where people could actually use knowledge of murmur3 to force hash collisions. Great.

Except it turns out, and Jeremy did some testing with this, that SHA-256 is faster on the average machine than murmur3 is, because computers are optimized for it. And so you've got this case where you, as a developer, think you're making decisions that extend through the whole stack, but other people at the different levels of the stack are also making decisions which are optimizing in different ways. In this case you've got operating system and hardware vendors that are optimizing for SHA-256, and we as developers think: oh no, we'll take the simpler hash. But no, it's actually not that efficient.

So it's the same kind of thing with compression, where different layers of the stack, and different people involved in those layers, are making decisions already, and sometimes it might be best to defer those decisions to those parts of the stack. I'm not saying no to your suggestion; I'm just saying that it's complicated, in the way that Mikeal was talking about.
F
And frankly, I think the place where we're most likely to see a win from this kind of compression is probably not actually in DAG-CBOR and, you know, natively encoded block structures, but just in raw bytes.

So I would be curious to have a conversation about what a raw-gzip codec would look like; that's probably a bigger and more obvious win than compressing CBOR. Most people, if you really look at their data store, it's mostly raw blocks if they have a ton of data; this is certainly true across all the data processes.
E
Well, so I think one thing we do have to have an answer for is: if there is a developer that feels their solution requires compression, otherwise IPLD is a non-starter for them, we need some guidance about how they do that. So, you know...
E
There should be a hint for whether you should try to compress in a block store, because what we don't want is a block store trying to automagically figure out whether a block is compressible or not; that can be extremely expensive, a waste of CPU time, especially when the developer can provide a hint. Kind of like, you know, content negotiation in a browser: hey, this is what I want, and you should give it to me that way if you have it, but I can also support these.
F
"I don't need to compress that." I will say, and I don't know where the thread is about this, but we talked about this, about just having our stores default to having compression on, and there were actually objections from people to defaulting it to on, because they mostly had video data, or they mostly had other data that was already compressed, and they didn't want to waste that time. So again:

I think the best place we probably have to offer it is in the storage abstraction right now, and I think at the moment our team doesn't manage those storage abstractions; they're mostly on the IPFS side. So yeah, it's not quite for us to handle yet, unfortunately.
F
Well, that is great, great, yeah. I mean, like, I have a block abstraction in DagDB, right, and it does not currently have compression, and I would be interested to see what it would look like to add compression as a feature there. You could actually abstract it pretty substantially.

There's a discussion, I'll link you to it, in js-ipfs-lite about where that API should go and some of the stuff that it should handle, and that includes really thinking about what the block store abstraction looks like, because rockaway's been doing work on putting the block store in a worker so that you can share it more easily. (This is why a lot of those CID changes came in: so that we could share them across the worker boundary really easily.) But yeah, we should get compression onto that list of thoughts and potential features as well.
E
So, the other way you can do it: if you had a block store API that had an optional hint, an optional thing you pass in that could carry a hint like that, then that's one way. The other way, and I don't want you guys to overthink this, is you could actually have, on the application side, multiple block stores: one that doesn't compress, another one that does. You could choose to save the CID to this one versus that one, and then aggregate across the multiple block stores.

You have to do lookup, right? If you don't know which block store it's in, you just check all of them until you find it. Have you guys thought about that design pattern? It's kind of a nested question too: what do you guys think about multiple block stores that aren't unified, kind of like the way IPFS is?
F
So that's how I kind of think about it: you stack them on top of each other. And this is also why I very firmly think that our block store abstraction should continue to work with CIDs and not break down to the multihash layer, because we can actually use the CID as a hint, right? I mean, when you configure it for storage, you can say: hey, compress these codecs but not those codecs, because we don't know if those compress well, or we do know.

And things like that. Whereas you lose that level of granularity about the data if you're only thinking about it from the point of view of the multihash.
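A sketch of that CID-as-hint idea: a wrapper store that consults the CID's multicodec before deciding whether to compress. The interfaces here are hypothetical, not the actual go-ipfs-blockstore API, and a real version would also need Get to detect and reverse the compression:

```go
package store

import (
	"bytes"
	"compress/gzip"

	"github.com/ipfs/go-cid"
)

// Store is a minimal hypothetical block store: CID in, bytes out.
type Store interface {
	Put(c cid.Cid, data []byte) error
	Get(c cid.Cid) ([]byte, error)
}

// CompressingStore gzips blocks on the way in, but only for codecs the
// caller opted into. The CID itself carries the hint, and addresses never
// change, because the CID still names the uncompressed bytes.
type CompressingStore struct {
	Inner    Store
	Compress map[uint64]bool // multicodec codes worth compressing
}

func (s *CompressingStore) Put(c cid.Cid, data []byte) error {
	if s.Compress[c.Prefix().Codec] {
		var buf bytes.Buffer
		zw := gzip.NewWriter(&buf)
		if _, err := zw.Write(data); err != nil {
			return err
		}
		if err := zw.Close(); err != nil {
			return err
		}
		data = buf.Bytes() // stored compressed; Get must undo this
	}
	return s.Inner.Put(c, data)
}
```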
B
Oh, sorry, I think in ipld-prime we have a thing that should allow that multiple-block-stores idea, or something like it: the link context parameter. It allows you to peek at some of the immediate context around the link that you're about to do something with, so you can go look for other info over there that hints. Like, if you have schemas, you can look at what type this is, or you can look at some sibling fields, to get some hints about how you want to store things.
E
Well, like I said, from an application developer point of view, I think we need to have a pretty solid answer about how they can accomplish it. I don't think we have to hash it out today, but I know it'll be a barrier. Maybe some of the initial use cases for IPFS had to do with data that's not very compressible, you know, movies, or whatever it is, files that are already gzipped, already compressed. But when I think about using something like DagDB, you could probably imagine use cases where that data is highly compressible.

You know, so anyway, or they'd probably just not compress something like, you know, binary data, and leave it as is. But anyway, I think maybe we can revisit that later on. One thing I think relates to that: I'm confused, Mikeal, why IPFS owns the block store API, when IPLD should not be dependent upon IPFS. It seems like we should define the IPLD block store API, which IPFS then implements.
D
IPLD should be completely independent from a block store. Like, IPLD should exist without a block store in the first place.
F
There are implementations that don't even think about storage and do, you know, half of the stuff that our stack does. So when we talk about "our stack", and I feel this in general, we need to keep in mind that there are people that implement only tiny parts of this, and we need to stay compatible with them, and it's a good thing that those exist; it's not like they're competing.

So in our stacks we do have several block abstractions, and most of the time those block abstractions correspond to storage layers somewhere; they're just usually not owned by IPLD, because IPLD tries really hard to maintain an agnostic relationship to the storage and network layer. That's what it really is: we don't want to have an opinion about how you store the data. That doesn't mean we're not going to hand you an abstraction that you can very easily store, or that we might not even work on some libraries that make that easier in the future. We totally can; we can totally make some block abstractions that do this and put them out as libraries. There's nothing stopping us from that.
F
Yeah, so in JavaScript, that is a single function that takes a CID and returns a block. Across a lot of the JS stack, that's what it uses. Eric has a much more sophisticated system.
C
No, Eric's is just loaded as well... But this has come up before, though, Chris, because we still need this for things like shared test fixtures, and I keep on bumping into it when I want to test a library and have a place where I can pull blocks from, just a generic way of communicating about how these things are stored.

CAR files are a great example. With the CAR file stuff, and I did zipcar as well, I built the interfaces around the same block storage interfaces that IPFS uses, and they're really clunky for just doing block storage. So it makes me want something much cleaner to say: this is an IPLD block store, and this is how we interact with it.

There have been some good discussions about the kinds of features we want from that, and it would be good to have something spec'd out one day, but it certainly wouldn't be an integral part of the stack; it would just be something that we have and use, because it's useful.
F
Yeah, oh, I should clarify a little bit: everything that we've done so far, sort of in the IPLD stacks, is very intentionally agnostic about this. That doesn't mean that we won't take on some work that will make this better. If you look at DagDB, there's a storage abstraction there that's much more sophisticated than this.

It actually maintains an index of links and things like that, and you can look at the thread in js-ipfs-lite where I'm talking about some of that. This may need to make its way into ipfs-lite and IPFS, because without that sort of link indexing over the graph you can't do very efficient garbage collection and operations, and I have an entire replication system in DagDB, quite nice, that really relies on having those links as well.
D
Yeah, yeah, I want to add... oh, go ahead. I actually want to add some thoughts on that. So, a few things. Going back to something you said about a developer working with this who doesn't want this stuff to take a whole bunch of space:

You are implicitly thinking about the storage portion of this, the block store. You're not talking about IPLD; plus, you yourself said that you would have to do the decompression on the fly to basically thread the links and stuff like that. So that's one thing: you're veering off into implementation detail without compression actually being problematic within IPLD itself. That's number one. Number two: a lot of the space saving, in many contexts, actually comes from stripping compression; that's part of what has delayed DAGger.

The ZIM Wikipedia dumps, all of them, are vast repositories that contain pretty much the same data, which will never deduplicate, because it is so compressed. If you decompress it, you will end up with a repository that contains all of this, already unwrapped for you, at a fraction of the size of what it takes to host the compressed version of the data. That's super important.

That's point number two. Point number three: in order for you to have a codec that is verifiable, just like what Rod was talking about with the CBOR implementation, you need a super standardized compression algorithm, with the parameters frozen. You would think that gzip, having been there, you know, forever, is a well-known format, and that everything can produce the same gzip stream from the same input. That's absolutely wrong, to the point where I actually have a chain of correspondence with Klaus Post, the person who writes the standard gzip within Go. Between versions of the same library, the output changes with the same parameters, and I basically got into a correspondence with him, like: okay, is there actually something that you guarantee you will not touch, that I can actually rely on? And the answer is: no, with the same parameters, I cannot guarantee you that the actual compressed result at the end will be the same going forward. So it's super important to keep in mind:

Basically, if you do a codec, you will have to have a full, complete specification of what the compression implementation of this codec is, otherwise nobody will be able to verify your blocks down the road. Or you'll have to, you know, have the entire algorithm written down so that when everybody just plugs it in, they get the same stuff out, which is the same thing: it's called a specification. And the last point I want to make about this is that compression is not free in general, and it is definitely not free once you count the amount of signaling and extra metadata that you have to carry around through your stack to differentiate "compress this part, but don't compress this part". So those are all pieces that you need to put together in order for a codec like this to come together. It's not insurmountable, but it's way more involved than it looks on the surface. So that's kind of my thoughts about that.
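The determinism point in miniature: the same payload gzipped with different settings (or, per the Klaus Post exchange above, a different library version) yields different bytes and therefore a different hash, so a compressed codec only verifies if the compressor is frozen completely. A Go illustration:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"crypto/sha256"
	"fmt"
)

// gz compresses data at the given level; in a content-addressed codec,
// the hash would be taken over these exact bytes.
func gz(data []byte, level int) []byte {
	var buf bytes.Buffer
	zw, _ := gzip.NewWriterLevel(&buf, level)
	zw.Write(data)
	zw.Close()
	return buf.Bytes()
}

func main() {
	payload := bytes.Repeat([]byte("ipld "), 1000)
	a := sha256.Sum256(gz(payload, gzip.BestSpeed))
	b := sha256.Sum256(gz(payload, gzip.BestCompression))
	// Same payload, different compressed bytes, different hashes.
	fmt.Println(a == b) // false
}
```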
C
I think in general the answer to this right now is documentation, because I think there will be a lot of developers showing up saying "I want my stuff compressed", and for us to patronize them and just wave it off and say "oh, that's just handled somewhere else", I think that's wrong; it's not going to be a satisfactory answer. But we can do some education here about those layers.

We need to have something that explains the position and says: look, first of all, you should question whether you really want it at this layer, because (a) it can be done at these other layers, and (b) it may already be done at these other layers and you don't even realize it. And then talk about the difficulties of these decisions, and talk about the fact that you could do it today: if you are doing something that has raw bytes and you want to compress it, just gzip it before you encode it in raw, and then discuss the problems with convergence that come with that particular approach. But maybe for things like video data it doesn't matter, because there's too much craziness going on there anyway. So there's a lot we could document here.
D
Okay, and I guess one more point, actually, on speed going forward, something that Rod brought up, and for the purpose of the recording...
E
Well, we spent a lot of time on this, but I think I agree with Rod that, as application developers come on, we do need to give them direction. So maybe one thing we can do is, next week, if you guys have noodled on it, think through: what is our unified, you know, messaging, as opposed to "don't do it", and how do you do it? I think that would be good. Maybe we can pick it up then, if that's cool, yeah.

I use it all the time. But I guess, just real quick on the other one: we don't have a catalog of design patterns or best practices for IPLD-ish stuff right now, do we, anywhere?
E
So, and I think it's actually kind of language-independent, it's kind of like: for example, to get maximum deduplication, you don't want to put anything like timestamps in an object; you probably want to put that up at a higher level and link to it. I mean, there have got to be things like that, that you guys have thought about, where you're like: oh, this is the way to do that.
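The timestamp example, sketched as types (illustrative shapes, not a published IPLD schema): keep the volatile fields out of the content node and link to it instead, so identical content always hashes to the same block.

```go
package patterns

import "time"

// Document holds only the content; identical documents hash to the same
// CID regardless of when they were written.
type Document struct {
	Title string
	Body  string
}

// Entry carries the volatile metadata one level up and links to the
// content by CID, so the timestamp never breaks deduplication.
type Entry struct {
	CreatedAt time.Time
	Content   []byte // CID of the Document block
}
```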
C
Don't use floats, yeah. Well, schemas get part of the way towards that. A lot of the thinking in schemas, and even the documentation, points towards some of these best practices; schemas actually encode best practices in a lot of ways. So yeah, it would be good to have a more high-level discussion of that, but I think if you read the schemas docs you'd already be in a good position to think about design decisions.
A
Okay, so yeah, I guess that's all for this week. See you all next week. Goodbye, everyone.