From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2020-12-14
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
Welcome, everyone, to this week's IPLD sync meeting. It's December 14th, 2020, and as every week, we go over the stuff that we've done in the past week and the stuff that we'll do next week, and then discuss any agenda items there might be. And today we even have agenda items, which is great. My update is again not that exciting and only partially IPLD related: I still worked on the store, the hash stuff, the storage for content-addressed data.
A
Basically, I was just running benchmarks, but on larger datasets, so it just takes so long. The dataset I was running on at first was about 70 million keys, which is about 80 gigabytes of data, and then there's the huge dataset, which is 77 gigabytes, with around 800 million keys, and it depends on the store.
A
So, for example, I had a test run over the weekend with LMDB; it took 24 hours to insert. My storage was around eight hours on the first run, and Badger was like five and a half hours; although if I do mine in pure Rust, not going through the Go stuff, it's also around five hours. And just to get a bit of an idea: it's really interesting how the test runs compare.
A
For example, at the 80 million keys, LMDB is super comparable to Badger; there's not that much performance difference. But then, if you go to around 10 times as much data, it totally changes how they perform, which depends on how your data store works. That's kind of expected, but it's good to measure those things as well. Although, yeah, I'm still not sure what Filecoin really needs, and whether it needs to scale like this or not; we need to figure that out.
A
Do we need such things or not? But still, it's interesting to see how it compares, and it's not optimized at all yet, but we'll see. So basically, what I do now is clean it up and push it to the module system in Rust, and then I'd kind of call it done, because then my prototype is done. Then we can discuss moving forward with this or not, porting it to Go, or whatever. But at that point I'd basically consider my job done, up to further discussions.
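The kind of bulk-insert timing run described above can be sketched roughly like this. This is a hypothetical harness, not the actual prototype being discussed: the `Store` interface and the in-memory implementation are illustrative stand-ins for LMDB, Badger, or the custom store.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"time"
)

// Store is a hypothetical minimal key-value interface standing in for
// LMDB, Badger, or the custom store under test.
type Store interface {
	Put(key, value []byte) error
}

// memStore is a toy in-memory implementation used only so the harness runs.
type memStore struct{ m map[string][]byte }

func (s *memStore) Put(k, v []byte) error { s.m[string(k)] = v; return nil }

// benchInsert inserts n synthetic content-addressed entries (key = hash of
// value) and reports the elapsed wall-clock time, mirroring the kind of
// bulk-insert measurement mentioned in the update.
func benchInsert(s Store, n int) (time.Duration, error) {
	start := time.Now()
	var buf [8]byte
	for i := 0; i < n; i++ {
		binary.BigEndian.PutUint64(buf[:], uint64(i))
		h := sha256.Sum256(buf[:])
		if err := s.Put(h[:], buf[:]); err != nil {
			return 0, err
		}
	}
	return time.Since(start), nil
}

func main() {
	s := &memStore{m: make(map[string][]byte)}
	d, err := benchInsert(s, 100_000)
	fmt.Println(len(s.m), d > 0, err == nil)
}
```

The interesting effect discussed in the meeting, how the relative ordering of stores flips at 10x the key count, only shows up when the same harness is pointed at real disk-backed stores at both dataset sizes.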
A
Yeah, that's all I have. Hopefully this week I really want to do some IPLD stuff again. And so, next on my list is Daniel.
B
Cool. So I've had a kind of slow week, but I did continue working on the refactor for ipld-prime to use int64 for values. The basic idea is that in Go, the type int is the machine integer size. So if it's a 32-bit machine, then you can only fit 32-bit integers, and I guess that's not a good idea for IPLD in general. The obvious ones here are AsInt and AssignInt, as Rod pointed out a few days ago.
B
We
actually
had
thought
about
this
a
few
weeks
ago,
but
I
forgot
when
you
erased
that
issue,
which
was
a
bit
silly
of
me,
but
I've
got
a
couple
of
agenda
items
later,
because
there
are
actually
many
other
methods
that
use
integers
and
the
question
is:
should
those
also
be
in
64
or
not?
I
lean
towards
yes,
but
I
want
to
check
I'm
also
continuing
to
help
martin
with
go
quick
reviews,
so
most
of
the
small
ones
that
he
wanted
to
get
out.
B
So
it's
not
surprising
but
sort
of
it's
sort
of
a
chicken
neck
problem
because
he
says
I'm
going
to
test
it
with
core
116
once
beta
1
is
out
and
beta
1
is
2
weeks
late
and
there's
it's
only
like
six
weeks
until
the
final
release
and
that's
more
like
three
weeks
because
of
the
holidays.
So
I
think
ipfs
is
going
to
ship.
Well,
one
go
16,
ships
go
ipfs
might
not
even
work
with
that
version.
B
I
have
no
idea
we'll
see,
but
I
do
think
there
should
be
some
sort
of
policy
with
testing
with
upcoming
go
releases
with
time,
and
I
think
the
time
would
be
now
which
is
like
about
six
weeks
before
the
final
release
and
to
end
on
a
good
note.
I
know
my
lighting
has
been
horrible
for
a
long
time.
So
I
got
something.
A
Thanks. I forgot one item: on Friday night there was a bit of a js-cid feature-release craze. I did like four releases or something within five hours, because random things broke. The problem was that TypeScript information was added, and we do some crazy things in JavaScript, and it was kind of, like, simplified.
A
So basically, the problem was that I don't even have enough knowledge about the whole TypeScript thing, and I was relying on other people to basically tell me: yeah, okay, that's totally fine to do. And then, yeah, things broke, which kind of makes me think that, for now it's okay, but perhaps someone else should maintain js-cid.
A
Someone who knows those things, because I really didn't feel comfortable doing all this. But anyway, now we have a fixed version which works for everyone who reported something, for everyone involved. It's a bug-fix release, so everyone should upgrade; if you haven't upgraded within those five hours, you should be totally fine and won't see anything. But yeah, it was kind of not that good, what we did there. Hopefully we can prevent such things in the future, and also not do it on Friday nights.
A
Perhaps. Next on my list is Eric.
C
So, let's stick to the list, in order. In go-ipld-prime, we have tagged and shipped a release of something we're calling v0.6.0.
C
The point of this release is to contain absolutely nothing exciting. Anybody who is using this library should upgrade to it immediately, because it should be trivial: there's a bunch of small features and small bug fixes since v0.5.0, which was in mid-summer, but there are no breaking changes. So it should just be easy.
C
Now, we did go ahead and merge the PR which is a prototype of the schema package. That provides the type information and all of the information that powers the code generator. There's a new one which is prototyping handling that data using codegen types. We merged it; it has not completely replaced the old placeholder types that you have to drive programmatically, but it's moving in that direction.
C
I've been picking up a bunch more Protocol Labs internal comms this week, of which, it turns out, there's a lot; I'll say no more about that. And there is a document where I've been cooking on goals that we could have for IPLD in 2021.
C
I think I dropped that link in here a week or two ago; it's still there, and I'm slowly growing it. I haven't added too many more goals, there's no scope creep, but I've just been trying to explain some other things, like challenges that we've become aware of this year, with a little more explanation and backstory about why a bunch of the goals start talking about tooling stuff.
D
Oh yes, yes, I am here. So, actually, again a very chaotic week with very, very little to show for it. An update on the store that I am trying to put together, basically where Lotus runs against a fully relational database: it finally syncs correctly, it does the job; it's still not sufficiently fast for me to kind of call it done and let it, you know, run until new year to catch up. That's number one. Number two:
D
I analyze every individual block that gets put in the store, and I make placeholders for any CID that is being referenced, and then later on I expect those CIDs to be backfilled. Obviously, stuff like the CommPs and the CommRs and, you know, inline CIDs and a few types of, what do they call them...
D
A
few
types
of
robots
rossi
ids
are
kind
of
expected
to
never
show
up,
but
we
have
some
actual
cyborg
stuff
that
is
referenced
by
messages
that
I
have,
and
I
do
not
have
the
related
content
for
them
and
also
from
a
regular
learning
clubhouse.
D
I
also
cannot
find
this
content,
so
I
need
to
make
a
pass
over
the
entire
thing
to
make
sure
that
I'm
actually,
you
know
not
rocking
something
myself,
and
I
guess
I'll
hand
this
over
to
like
rat
or
somebody
to
to
do
a
little
bit
more
analysis
like
what
exactly
is
missing
but
yeah
other
than
that.
D
It's
it's
interesting
like
it's
still,
it's
still
a
massive
massive
massive
database,
because
I
additionally
store
a
bunch
of
extra
metadata
about
the
blogs
themselves,
but
because
it
is,
you
know
nicely
packaged
in
in
standard
boring,
stuff
postgres.
Basically,
I
one
would
be
able
to
pull.
D
You
know
to
pull
this
over
the
network
and
be
able
to
just
you
know
to
just
use
that
locally
and
request
the
queries
to
work
as
performantly
as
I
expected
to
so
you
can
like
do
a
full
export
of
cafe,
essentially
in
like
in
like
minutes,
I
suppose
the
hour
that
they
need
right
now
for
the
for
the
big
cars
that
we
actually
need.
Four
hours
for
for
the
big
cars
within
touching
so
yeah,
it's
been
interesting
and
well,
I
know
you're.
We
are
waiting
for
this,
it's
like
almost
there
but
yeah.
D
Yeah
I
can,
I
can.
I
can
push
it,
but
I
basically
like
I
will
push
it
but
absolutely
no
guarantees.
It's
not
cooperate
with
anything
that
anybody
like
tries
to.
E
That's
fair
it'll
just
become
more
real
for
me
once
it's
existing
yeah
absolutely
and
for
the
missing
cids.
If
you
figured
out
where
they're
being
referenced
from
that
isn't
expected.
That
already
sounds
super
valuable
and
like
something
that
I'd
be
interested
in
tracking
down
or
raising
fires
about.
D
Yeah
I
like
I'm,
not
entirely
sure
that
it
is
a
real
mistake,
or
it
is
just
my
thing
dropping
stuff,
because
that
it's
it's
a
bit
of
a
mess
of
concurrency,
because,
like.
E
If
you,
if
you
could
say
well,
this
message
is
referencing
it
and
I
don't
have
it.
That
would
be
enough
that
I
I'd
spend
a
cycle
trying
to
figure
out
if,
like
api
chain
love
and
some
of
the
other
official
lotus
notes,
have
it
and
if
not,
then
that's
a.
Where
is
this?
Why
isn't
this
getting
put?
Let's
start
tracking
around
this
sort
of
thing.
A
All right, next is Rod.
F
Okay, so I'll continue on that thought, then, because I did have a continuation of that one. So I've been doing work in that area, but it's about...
F
I've been trying to get to the analysis stuff, and I'm trying to do it in JavaScript. I know people will frown at that, but it's just so much easier to prototype stuff in JavaScript, so much quicker, and I'd also want to use it as a way to push ahead our tooling. But trying to do it hits this problem of scale, which is what Peter's running into as well. And it's really good to be running into it, because it's exposing weaknesses everywhere: weaknesses in our tooling, weaknesses in the way we even think about this stuff. That's really good, and it's spurring work everywhere; even Volker's work is the same stuff.
F
So
this
problem
of
scale
is
really
interesting
and
causing
problems,
and
I
can't
I
can't
ingest
the
data
fast
enough
in
javascript.
It's
just
not
practical.
It's
completely
impractical
and
I've
been
trying
to
figure
out
why
a
lot
of
that
is,
and
one
of
the
things
I
ended
up
working
on
late
last
week,
was
getting
back
to
doing
more
of
my
javascript
cbo
stuff
to
just
to
see.
F
If
I
can,
because
when
you're
doing
traversals
you,
you
just
tend
to
want
a
certain
subset
of
the
data,
just
one
links
and
seeing
if
there
was
ways
I
could
improve
that
just
low-hanging
fruit
and
one
of
the
things
that
surfaced
was
that,
like
we've
known
our
sibo
parsing
library
has
not
been
optimal,
but
I
my
parser
I've
managed
to
get
at
least
a
10
times.
Speed
up
and
just
basic
decoding,
which
is
like
that's
encoding
is,
is
different,
but
decoding
is
like.
F
I
don't
even
know,
there's
not
there's
some
weird
thing
going
on
in
that
code.
That
is
causing
that
it's
like.
I
don't
even
know
how
you
get
that
slow
and
yet
it's
benchmarked
against
other
seymour
libraries
and
it
claims
to
be
faster,
and
so
so
I'd
like
to
understand
some
of
the
some
of
what's
going
on
in
that.
But
in
the
meantime,
I've
got
at
least
I've
got
a
path
to
actually
doing
and,
of
course,
I've
got
code
that
if
I
just
want
to
pull
out
links,
I
can
do
that
too
now.
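As one illustration of the "decode only the links" idea (sketched here in Go rather than JavaScript): DAG-CBOR encodes every link as CBOR tag 42 (bytes 0xd8 0x2a) followed by a byte string whose first byte is a 0x00 multibase identity prefix. A naive byte scan like the one below can find candidate links without a full decode; a real implementation must track CBOR item boundaries to avoid false positives inside byte strings, so treat this purely as a sketch of the idea:

```go
package main

import "fmt"

// scanLinks returns the payloads of byte strings tagged with CBOR tag 42
// (0xd8 0x2a) in a DAG-CBOR buffer. It only handles the short (0x40..0x57)
// and uint8-length (0x58) byte-string headers, which covers typical CIDs.
// Deliberately naive: it does not track item boundaries.
func scanLinks(data []byte) [][]byte {
	var links [][]byte
	for i := 0; i+3 < len(data); i++ {
		if data[i] != 0xd8 || data[i+1] != 0x2a {
			continue
		}
		j := i + 2
		var n, start int
		switch {
		case data[j] >= 0x40 && data[j] <= 0x57: // short byte string
			n, start = int(data[j]-0x40), j+1
		case data[j] == 0x58: // uint8 length follows
			n, start = int(data[j+1]), j+2
		default:
			continue
		}
		if start+n <= len(data) {
			links = append(links, data[start:start+n])
			i = start + n - 1 // skip past the link payload
		}
	}
	return links
}

func main() {
	// A map {"l": <tag 42, 5-byte string: 0x00 prefix + 4 fake CID bytes>}.
	buf := []byte{0xa1, 0x61, 0x6c, 0xd8, 0x2a, 0x45, 0x00, 0x01, 0x55, 0x00, 0x2a}
	fmt.Println(len(scanLinks(buf))) // prints 1
}
```

The appeal is exactly what's described above: a traversal that only needs links can skip building the full decoded object for every block.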
F
So I've got these pieces that are slowly falling into place. This is not high-priority work, but it's becoming higher priority because of this scale thing. So that's one of the things I've been working on. The bulk of my time last week, though, was spent on three things. The first was docs for the AMT, which I've had...
F
I
had
a
pull
request
for
that
repo
a
number
of
months
ago
and
the
code
moved
on
dramatically
since
then,
and
so
I've
redone
the
docs
and
that's
ready
to
land
there's
just
one
little
quibble
that
may
have
even
been
resolved
today,
but
that's
pretty
much
ready
to
land
and
it
it
it's.
F
It's
code
comments,
it's
algorithmic
description
and
then
there's
a
high
level
algorithm
description
in
a
a
doc.go
file
that
really
summarizes
it
and
volker
actually
had
read
that
and
said
that
he
understood
the
algorithm
by
reading
that
which
is
I'm
very
happy
about,
because
that
means
that
I've
done
my
job
there.
F
So,
though,
that's
a
big
deal
it
the
aim
there
is
to
make
it
so
that
this
algorithm
isn't
just
hidden
in
code
in
really
obscure
code,
uncommented
code,
that
it's
actually
that
multiple
people
can
understand
it
and
have
input
into
it,
and
so
because
currently
there's
like
three
or
four
people
that
rock
it,
and
so
this
should
help
with
that.
So
that
should
get
merged
this
week
I
think
go
dag
pb,
I
tied
it
up
finished
up,
got
all
the
tests
done.
F
I'm
really
happy
with
that,
and
that's
now
in
the
ipld.org
as
go
codec
dagpb
and
that
surfaced
the
32-bit
bug
thanks
to
martin
and
his
ci
work,
actually
testing
on
32
bits.
So
that
was
the
and
the
reason
the
interesting
thing
was
that
they
came
up
because
of
cross
language
tests,
which
I
was.
I
was
testing
the
javascript
integer
limits
in
the
test
fixtures
and
then
putting
them
into
go
that
takes
you
to.
You
know
like
a
53
bit
integer
and
that's
more
than
32
bits,
and
so
it
was.
F
You know, truncating that to minus one. So it'd be nice to get that sorted out, or at least have some approach to deal with that. And lastly, I did some help with ProtoSchool, working on some Merkle DAG content. It looks sort of like an introduction to thinking about IPLD content, and it's quite impressive work as introductory material. It has a lot of overlap with some of the presentations that a few of us have been doing the last few months introducing IPLD ideas.
E
Well, yeah, not a lot of IPLD stuff in the past week, or likely this week, but a range of things going on, and a couple of things that are maybe worth asking, because the people here are probably the ones who know the answers.
E
So
one
thing
that
I
have
been
thinking
about
was
when
I
was
thinking
through
the
graphql
interface,
something
that
would
be
super
nice
to
have
is
a
schema
and
data
matching
that
schema
that
can
be
used
as
the
making
this
a
concrete
example,
and
also
as
testing,
which
is
what
I'd
really
like
is
you
know
something
that
is
doing
things
like
it
has
a
map.
It
has
some
unions,
it
has.
E
You
know
it
exercises
lists
and
it
exercises
all
the
different
parts
of
our
schema
or
most
of
them
at
least,
but
then
I
can
use
that
to
generate
what
the
graphql
thing
is
and
then
I
can
have
tests
on
that
generated
thing
right.
When
I
look
at
all
the
testing
in
graphql
implementations
and
go,
they
have
a
star
wars
schema
as
their,
like
example,
and
they
use
that
as
a
thing
that
makes
a
real
like
thing
where
you
can
query
which
characters
appear
in
which
movies
and
so
they've
got.
E
You
know
some
examples
of
you
know
many
commanding
relationships,
one
to
many,
even
each
one,
some
enums
of
types
of
characters,
so
so
they've
got.
You
know
an
extensive
enough,
but
still
small
and
concrete,
and
so
there's
you
know
a
small
database,
that's
in
the
repo
and
just
sort
of
allows
for
that
self-contained
thing
of
like
seeing
how
you
would
use
it
in
various
situations,
which
is
super
valuable
to
have
and
so
having
a
car
and
a
schema
definition
for
that
car.
E
That
is,
you
know,
order
from
egg
and
yet
extensible
enough
to
have
everything
seems
like
something
that
I
was
looking
for
and
haven't
found.
If
we
have
one
or
can
think
of
a
topic
that
we
would
be,
you
know
a
reasonable,
concrete
thing
on
which
to
have
one
that
isn't
too
cheesy,
but
still
interesting.
E
That
seems
like
it's
a
thing
that
would
be
nice
to
then
be
able
to
face
things
off
of
so
I
haven't
come
up
with
one
that
I
I
want
to
directly
propose,
but
you
could
imagine
maybe
like
astronomical
objects
that
that's
in
fitting
with
our
various
themes,
something
like
that.
Maybe.
E
The other thing that I'm wondering is if there's something I should point to: yet another, like, blockchain company was like, we want to store our historic archives on IPFS and make them available. Right now what we have is order-of-a-hundred-gig archives that we snapshot every six hours.
E
And I know we have the Bitcoin one. Is that the right thing to point them to? Is there tooling that would help them watch their chain for new blocks and then push the new blocks, as they exist, into pinned IPFS nodes or things? Do we have enough of this story that there's more than just archives?
F
Yeah,
there's
no
good
answer.
Basically,
no,
but
no
there's
there's
a
number
of
discussions
going
on
and
actually
you
should
link
you
should
loop
in
adding
to
that
discussion.
He's
been
I've
been
pulling
him
into
those.
Those
discussions
have
been
extremely
helpful
in
it
really
understanding
the
limits
of
ipfs
for
that
kind
of
use
case,
and
also
some
alternative
models
for
doing
it.
But
yeah
this
it
sounds
like
you've
got
overlap
between
us
and
ipfs
like
at
least
the
din.
What
hit.
F
Definitely do loop him in again, because he's been really good. But your point about having it, like, actually as IPLD data is really good, and definitely that should be something we can talk about, so you can feel free to loop them in to us.
F
If
you
want
I
and
and
yes
the
bitcoin
stuff
is
a
good
example,
and
yes,
that
is
one
of
those
things
I
need
to
finish
up,
so
it
can
be
used
as
an
example,
because
it's
the
spec
is
not
done,
and
I've
only
got
the
javascript
code,
which
is
not
as
helpful
to
a
lot
of
people
and
but
yes,
that
that
keeps
on
coming
up,
and
so
it
it's
back
into
one
of
my
higher
priorities
to
get
done.
But
I'm
happy
to
talk
about
it
with
people
at
least
so
awesome.
E
I don't know if I can say; it's Parity, so it's yet another known blockchain.
E
Other
than
that,
putting
together
a
talk
trying
to
scope
out
what
measurement
of
ipfs
also
reconciling
there's
a
resnet
lab
wants
to
have
a
network
observatory.
Ipfs
wants
to
have
metrics
on
their
stuff.
F
Didn't
have
it,
I
thought
that
sorry,
there
was
a
thought
that
I
had,
if
that's
okay,
just
to
go
back
to
the
discussion
about
peter
finding
missing
blocks,
so
I've
been
doing
traversals
from
headers
out
into
the
state
and
so
far
I
think
the
I've
been
walking
back
in
the
chain
and
on
file
coin
and
so
far
there's
only
that
one
block
I
reported
to
you
peter.
That
was
missing
that
I
I'm
aware
of
so
I'm
not
seeing
like
a
systemic
hole
in
the
the
block
data.
That's
been
stored
anyway.
D
Yeah,
it's
it
that
there's
a
tricky
part.
It's
not
systemic
in
terms
of
like
I
get,
I
think
about
one
in
a
million,
so
you
will
not
find
that.
D
So that's how I can just run queries like, you know: give me all the stuff that doesn't have content, because the content is stored in a different table, and I'm like, just give me everything that doesn't have content yet; and they keep growing over time as the chain progresses. And the whole point is that there shouldn't be garbage like this left in the block store, because the commit of Lotus is supposed to be, like, complete: if it's storing something, it needs to store all the pieces of it too, and even if this thing ends up not being referenced by the winning chain, it still shouldn't end up incomplete in the block store. Like, something is not right there.
D
I
am
at
this
point
I'm
like
50
percent
willing
to
blame
my
own
code
for
this,
and
I
need
to
instrument
better
and
it
has
turned
a
little
faster
than
what
it
does
right
now,
because
I'm
still
doing
about
six
seconds
per
tip
sets
with
very
heavy
caching,
and
that's
not
enough.
D
I need to bring it down to, like, two seconds or one second, and then I can let it drip. And the other thing is that the problem with the S3 data was that I assumed that the Badger block store would just give me a list of what it has, and I realized that this does not actually work well. So if you have a static block store, you literally cannot pull every single block out of it with a simple list; you literally have to walk the chain, because it works with...
D
You
know
give
me
this
particular
key
and
then
give
me
this
particular
key,
but
it
does
not
work
very
well
with
the
give
me
this
channel
of
your
you
know.
I
know
how
many
million
keys
and
it
doesn't
actually
give
everything
back
and
then
there
was
also
the
problem
with
the
with
the
latency
that
everybody
ran
into.
So
I
can
depend
on
the
sd
part
and
yeah,
because,
like
the
way
that
is
organized
now
they
put
this
in
the
ipld
channel
and
there's
a
schema
there
as
well.
D
But basically, you can literally write a recursive query and spit out all the blocks that you want, essentially with a selector: because instead of having to know how to traverse the blocks when I store them, I just say: this block, those are all its children, pre-parsed. This costs a ton of space, but it's worth the space.
A
All right, moving on to the agenda items. The first item is from me; it's about js-multicodec. It came up in recent discussions, like yesterday and the day before, since Friday, about when we were breaking js-cid, and also there are ideas about restructuring js-multicodec and so on. And certainly, at least I consider the current stuff we have as, like, the legacy part that is still maintained for IPFS, but you should really use the new stuff, and...
F
Do we still need it? The answer is yes. It should cover the bulk of the surface area that people care about; it's just that IPFS doesn't use it.
F
Then
a
lot
of
the
use
cases
are
problematic
so
because
people
often
come
to
ipld
and
want
to
couple
it
with
ipfs
or
you
know,
do
that
that
thing
in
the
middle
and
because
that's
that
hasn't
happened.
There's
this
awkwardness,
but
you
can
see
in
that
because
I
was
talking
today
about
even
bringing
forward
the
timeline
to
integrate
that
work.
A
Yes,
so
we've
also
taught
it
today
in
the
ipfs
def
meeting
or
whatever
it
is
called,
and
so
basically
like
also
for
from
alex
the
signal
that
we,
like.
Probably
it's,
really
something
that
should
be
in
the
okrs
for
q1,
for
the
ipfs
team
that
they
really
make
space
to
look
into
upgrading
to
the
js
multi-format
stuff,
and
the
idea
from
alex
is
to
do
it
similar
to
what
we
did
with
like
the
asynchrony
reflector
that
you
kind
of
like
start
at
the
bottom.
So
in
our
case
it's
ipld.
F
We've got dag-pb now; that was the big missing one, but yeah, we've got all the codecs. And one of the nice things about the new toolchain is that the dependency trees are really shallow, and so when you pull in the pieces, you just don't get this massive string of things.
F
There was an outstanding question that is relevant, which Gozala had, because he's looking at this WASM stuff and he's thinking about the challenges of the current approach. And we've had ongoing discussions about whether the current approach is right, or whether we should be heading towards something a little bit more like ipld-prime, and we've even got disagreements on the team about that. But that raises the question of: is it...
F
Is it safe to assume that the current js-multiformats approach is stable enough, or are we going to be replacing it in six months with a new thing? And I don't think we've got an answer for that, because there's not enough agreement on, yeah, the way forward, and there's enough uncertainty amongst some of us to suggest that the answer may be no.
A
Related to that: since I also worked on the js-ipld thing that IPFS is using, what's our current replacement for it? I remember, like, Mikeal's Block API, but what's the current one? Like, do we currently have a replacement for what js-ipld is doing currently, which is basically just, like, bundling the codecs and having some API?
F
Good question. Mikeal was going to do... where is it? He...
F
...was doing Block as, like, an update. This was actually a point of disagreement between him and Gozala, the API there. So I think he has a branch where he started migrating to the latest multiformats, but him and Gozala have a difference of opinion about how the pieces pull together there. My approach has been simply to just push that away and say: no, I'm just going to use the base pieces and forget that. Okay, so...
A
I want to know what the replacement would be, because, like, this is kind of, like, the missing piece, obviously. Because, ideally, when we do the whole upgrade thing, what the js-ipfs team would do is just talk to a different js-ipld, kind of, and, like, then everything else will be, like, the new stuff, ideally. Like, I'm well aware that this won't work this way, but in theory this would be it.
F
So I think there's enough in there as it is. It's just this question of how you use it, because it's built around the idea that all these codecs and hashes are separate entities and you use them individually, and then there was this concept of: how do we step up from there and pull them all together into a singular thing that we interact with? And that was the outstanding issue. Okay, and I really don't believe that they have agreement there, yeah, because I was...
A
...asking about a lot of stuff, because, like, ideally, that's kind of, like, also my long-time plan: basically, this is my last action on the JavaScript kind of things, hopefully. And obviously, I'll help moving on from, like, the old stuff to the new stuff, and then I'm kind of, like, more or less out of it, off to the Rust stuff.
A
So, therefore, basically, I'm asking: who would I talk to? And I'm, like, totally happy doing this: if I do the whole first quarter only JavaScript work, that's totally fine with me, like, getting this transition done, because it's a lot of work, certainly. I'm totally happy to, like, yeah...
F
Yeah
help
with
that
so
yeah,
I
I
mean
in
terms
of
putting
a
name
on
it.
I
I
take
responsibility
for
all
the
codecs
and
hashes
and
stuff
and
and
then
I
would
just
hope
that
other
people
would
jump
in
as
as
they
have
expertise,
but
for
the
codex.
I
I
think
I'm
pretty
much
taking
responsibility
for
them.
A
Okay, because then, when I talk to the js-ipfs people, I'm, yeah, picking the right people. And yeah, okay, cool; I think that answers all my questions. Cool, yeah. I'll probably talk with Gozala about it, and then, yeah, we'll figure something out.
F
Like, you get and you give the shapes that go into the data according to the spec, the schema that we defined for dag-pb, which is really nice, because now it matches the other codecs. It's a bit different, but it means that IPFS has to take more responsibility for forming those objects, right? There is some help in the dag-pb codec for doing that, but it's, yeah.
A
So that's certainly something I can use between js-ipfs and js-ipld, then. And ideally, because js-ipld is really, yeah, more or less just the bundle, I think it will be even less: there won't really be a js-ipld, it will be just very little glue code, hopefully.
F
Yeah-
and
I
think
that
was
part
of
the
aim,
at
least
from
michael's
perspective-
is
that
this
thing
shouldn't
come
up
to
this
singular
point:
that's
just
not
how
we
javascript,
and
it's
also
not
it's
not
it's
also
not
really
what
probably
what
we
should
be
doing
with
ipld
ends
as
well
like
it's,
this
whole
thing
about.
Are
we
a
product
team
or
not?
And
no,
I
don't
think
we
are
so,
let's
not
produce
a
big
big
product.
A
Okay, yeah, cool, thanks; this helps a lot, and yeah, I will see what we do there. We're also busy, so we should certainly also plan time: if the js-ipfs team decides to do it in Q1, we should plan time accordingly to help them get this done, because, like, we will be the first ones that do some work, I guess. Yeah, cool, all right. There are two more agenda items, from Daniel, so yeah.
B
So then the question is: there are many more interfaces and methods that use int instead of int64, and they generally revolve around lengths of things or sizes of things, such as Length for a list kind, or, when you iterate over something, the index it gives you is an int, or path segments, and LookupByIndex, which also work on int.
B
So
on
one
hand,
my
initial
thought
was
well
these
things.
If
their
lists
are
mapped
in
general,
they
should
fit
in
memory.
So
int
is
fine
because,
interestingly
for
things
that
fit
in
memory,
but
then
some
of
ipld
is
designed
so
that
you
know
you
can
use
data
structures
that
are
maybe
too
big
to
fit
into
a
single
machine
and
on
the
other
hand
you
also
have
cases
where
you
take
an
integer
value,
and
then
you
use
it
as
a
key
or
as
an
index,
so
some
pieces
of
the
code.
F
I was talking to Martin about this yesterday, and he was equally frustrated, but I feel like I'm missing something here about Go and its decision to use int as the default type. Okay, so here we go: the only theory I had for why you would do...
F
...that is: because if you encourage developers to be flexible about their integer types, then you get the opportunity to make smaller and more efficient binaries on 32-bit platforms when you compile them down, because if everyone's just using 64-bit ints, then you're probably going to end up with awkward binaries on 32-bit platforms.
F
So
by
encouraging
that
soft
space
in
the
middle,
then
you
open
up
opportunities
for
optimizations.
That's
the
only
thing
I
come
up
with
to
me.
It
just
seems
like
you
should
just
use
in
64
by
default.
D
Honestly,
I
think
it's
the
sea
heritage,
looking
through
of
all
the
authors,
being
basically
sea
otters
as
well
and
there.
The
end
is
your:
you
know
it's
your
basic
end
and
in
goal
it
essentially
looks
through
from
lengths
of
arrays
and
lengths
of
you
know,
but
basically
lengths
of
the
primitive
types
and
the
primitive
types
are
expressed.
B
Yeah
and
this
might
be
going
into
a
tangent
but
there's
a
proposal
by
one
of
the
creators
of
go
that
says,
let's
swap
in
to
be
arbitrary
size
and
supposedly
the
compiler
could
be
smart
enough,
so
that,
if
you
use
a
local
end
variable
and
the
cobala
can
see
that
it
fits
into
64
bits.
For
example,
then
it
wouldn't
have
to
use
a
big
end.
It
would
automatically
just
use
a
small
end
and
then,
in
code
that
it
doesn't
know,
it
would
essentially
branch
depending
on
whether
or
not
it
detects
an
overflow.
F
Anyway, the summary of that, from my perspective, is: I don't see a reason not to have int64 in these places. Indexes are a little bit harder to justify, but it just seems to me like int is flawed, so just avoid it.
B
D
A
B
B
And then the other item is just, you know, I'm trying to automate all of these changes as much as possible, so that downstreams won't have to manually fix a lot of code. Fixing the function signatures, the function types, is fine, it's easy. Fixing the code that uses those functions: initially I thought I could just fix the code to, you know, insert the correct type conversions and so on, but then I realized I can't really, because I might be introducing overflow handling that maybe the user doesn't want or hasn't realized they need.
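That dilemma can be made concrete: an automated rewrite would have to pick an overflow policy on the user's behalf. A hypothetical strict helper (the name and behavior here are illustrative, not part of any real API) is one of several choices only the downstream author can make:

```go
package main

import (
	"fmt"
	"math"
)

// toIntStrict converts int64 to int and errors on overflow rather than
// silently truncating. On 64-bit platforms the range check never fires;
// on 32-bit platforms it rejects values beyond the 32-bit int range.
func toIntStrict(v int64) (int, error) {
	if v < math.MinInt || v > math.MaxInt {
		return 0, fmt.Errorf("value %d overflows int on this platform", v)
	}
	return int(v), nil
}

func main() {
	n, err := toIntStrict(42)
	fmt.Println(n, err)
}
```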
B
So I think for all these changes downstream should just carefully adapt the code, and the question is: are we okay with that? I think we are, because I think most changes should be fairly trivial, but I guess it does open up the question that some downstreams might need to think a little bit harder about overflows and so on.
C
And the places, the exact places, in which I hated my life were... you would think that you would get some sort of a knack for it, that you would be able to anticipate where you're going to need these casts. And two hours into the project I had no functional anticipations; five hours into the project I had no functional anticipations; 20 hours into the project I still had no functional anticipations. And every once in a while I'd be like: let's try changing this thing, this isn't a big number.
A
G
Yeah, so I've had some progress with the DID specification, so I'll put it into the chat. Currently it's still a draft, but DAG-CBOR is in there. I'm getting some pushback right now from the W3C along the lines of: well, IPLD isn't a formal specification yet, so we can't point to it normatively.
G
So I would appreciate it if you take a look at it. It's a huge amount of work. Mostly we had all these arguments about an abstract data type model to actually go between different formats, including JSON, JSON-LD, YAML and CBOR, and so I wrote the entire CBOR section, including the DAG-CBOR part, which I'm obviously very invested in, mostly just limiting that to everything that's in CBOR, with the only addition being that you're using a tag, and that tag is 42.
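For reference, that tag-42 convention can be sketched at the byte level: DAG-CBOR represents a link as CBOR tag 42 wrapping a byte string whose content is the binary CID prefixed with a 0x00 multibase identity prefix. This is only an illustration of the wire shape (a real codec such as go-ipld-prime should be used in practice, and the helper below only handles short lengths):

```go
package main

import (
	"bytes"
	"fmt"
)

// encodeLink shows how DAG-CBOR frames a link on the wire.
func encodeLink(binaryCID []byte) []byte {
	var buf bytes.Buffer
	buf.Write([]byte{0xd8, 0x2a}) // tag(42): major type 6, 1-byte argument
	payload := append([]byte{0x00}, binaryCID...) // multibase identity prefix
	switch {
	case len(payload) < 24:
		buf.WriteByte(0x40 | byte(len(payload))) // short byte-string header
	case len(payload) < 256:
		buf.Write([]byte{0x58, byte(len(payload))}) // 1-byte length header
	}
	buf.Write(payload)
	return buf.Bytes()
}

func main() {
	cid := make([]byte, 36) // stand-in for a real 36-byte binary CIDv1
	fmt.Printf("% x\n", encodeLink(cid)[:4])
}
```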
G
Carsten did get back to me, actually; he was quite helpful. He wouldn't actually join the working group, probably because he said: forget the W3C, that's just nuts, why would you want to do that, like Juan warned me about two years ago. But it is getting some progress. I have a meeting tomorrow. There's big pushback about DAG-CBOR because, like, how do you bridge different representations? So CBOR, it's easy, but what about DAG-JSON, and how to actually take the representation in tag 42 and represent that in JSON, so in different formats?
G
So there is such a thing as a DID core registry, which I'm also involved in. I wrote all the CDDL for the validation of our DID document and different subsections of the document, all in CDDL, which works fine in CBOR and JSON, CBOR being a superset of JSON.
G
But I need some help pushing back against the W3C. Mostly my argument tomorrow is going to be that the DID specification is representing the data model and what's on the left-hand side, the properties; you can't restrict people as to what they're going to put in the right-hand side, the values of that.
G
F
We do have it defined for DAG-JSON, and we did... was this last week or the week before? We spent some time clarifying some of the outstanding issues, particularly around that and also the bytes stuff, and that hasn't quite made it up into our DAG-JSON spec. But I don't know how far you want to push it. It does convert cleanly to DAG-JSON and back again. But then does that mean that you have to write up a whole DAG-JSON section in there as well?
G
It's so amazing, because Manu Sporny is the one who is actually giving me the most pushback, and PL and Juan actually contracted him to help write up the multiformats, the multihash at least, specification for the IETF, but he's the biggest one pushing back on CIDs, and that's the surprise.
G
There's... I'm sorry, I'm mixing up my multis: there's multiformats, multicodec, multihash, multiaddr...
F
Maybe that was because there was some work on multiformats, multicodec and multihash, but that was James Snell, who you've possibly bumped into in that area as well. But that wasn't... that was like last year.
G
So one thing we had talked about before with Michael was really the roadmap for getting the IPLD specification published, like in the IETF, and I know it's in, like, Q1 or Q2 of next year, but it really helped me with making a counter-argument: that yeah, this is an emerging specification, we're defining the data model for interoperability with other DID specifications, here's DAG-CBOR for the structure, and the value of it is actually being fleshed out.
F
So this, I guess, gets to a bit of a conflict for us, because we explicitly don't want to be the IETF for IPLD. We don't want to do what the IETF does for IPLD, and we don't want to spend our time endlessly bikeshedding about minor details, but we also want to provide functional specifications that people can actually use. So finding that middle point is tricky, but we're getting pulled into things like that.
F
You
could
force
a
hand
if
we
were
to
go
too
deeply
because
we
don't
have
a
fully
full
spec
for
seaboard,
so
we're
not
going
to
use
it
well.
Are
we
going
to
go
and
do
a
full
spec
for
daxibor?
That
would
they
would
be
happy
with?
Probably
not
because
that's
the
whole
point
is
that
that's
what
there's
a
reason
why
we
haven't
done
this
for
the
ietf,
because
we
don't
want
to
be
doing
this
for
that
year.
F
So I don't know how to tackle this one, because the more we go into that, the more we turn IPLD into an IETF offshoot, and I don't think any of us want that.
F
But if there are practical things that we can do to clarify, like if there's real, concrete feedback about "this is under-defined" or "this is not clear", then that's probably feedback we should take under advisement and actually do something about, where it's meaningful, and be able to see the point of what's too much: if it's unreasonable, then we need to figure out what it means to be unreasonable.
A
I can also kind of see the problem from Sporny's side, that lately, for other standards, in order to adopt IPLD or parts of it, it needs to be a normative spec. So I was a bit involved in the OGC, which is kind of like, I don't know, the W3C for geo standards, and there it would be the same problem.
A
So
if
I
would
want
to
convince
someone
there,
they
would
say
where's
the
number
to
spend,
and
if
you
don't
have
one
like
yeah,
you
don't
have
a
chance
to
get
it
in.
So
you
can
totally
see
this
point,
but
yeah.
G
So I think the defense right now is that, hey, basically we're doing that. The entire purpose of the DID working group is data modeling, and right now explicitly stating tag 42 and DAG-CBOR is: this is how you do it. But the work of the working group is not about the values on the right-hand side; that's other specifications.
C
G
A
All right, is there anything else?
A
Yeah,
so
if
there's
nothing
else,
then
thanks
everyone
for
attending-
and
oh,
I
also
put
like
on
the
notes
like
I
will
be
off
for
three
weeks
and
the
monday
after
so
you
won't
see
me
the
next
few
meetings,
so
someone
else
will
probably
do
this
meeting
next
week.
So
yeah
see
you
all
next
year
and
some
people,
perhaps
next
week.