From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2020-04-06
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
Although most people are on the list already, because I've copied it over from last week, I'll start with myself. I actually couldn't recall what I did last week, but one of the things was that I was still doing multiformats stuff, or was that the week before? In any case, things got merged, and this leads to the proper Block API, which I will be working on this week. Also, as discussions came up on the Rust side, I basically promised that this week I will get some code working with the Block API, so people can see how it works, because even the existing issue is outdated by now. There's also still a PR open for rust-cid, but my main reviewer is really busy with Filecoin and I don't want to bug him. It's not super urgent, so I'll just wait for him to have time to review it; he'll probably have time this week or next week. One additional note: I will be off on Friday and Monday. That's all I have. Next on my list is Mikeal.
B
Yeah, sorry, checking... mute... yeah. So I did a lot of major refactoring work on the data processing stuff that I've been doing. The good news is that I broke through a bunch of the limitations that we had in Lambda and in our whole pipeline. So now we're processing data at around six gigabits a second per bucket, creating CAR files at like 10 to 14 gigabytes a second, and generating CommPs for them at like 20-plus gigabytes a second, just using massive amounts of concurrency.
B
We can finally push around 3,000 concurrent invocations in Lambda. Once you go above 3,000, you start to hit other AWS infrastructure, like rate limits and abuse detection mechanisms, which are very undocumented and not changeable. So even though our limit is technically ten thousand, we can't really do that much in practice; very bad things start to happen. Anyway, that's awesome. That's going to finish out pretty soon and we'll have a lot of IPLD raw data; by the end of this we'll have petabytes of IPLD data.
B
So you can very efficiently, without parsing through an entire graph's worth of information, figure out whether you need any of the data from one side or the other, and who has what. So that's pretty cool. The tests for that are like eight to ten times the size of the actual implementation; it's an unbelievably difficult thing to test. That'll get merged pretty soon, and I'll point people at it when it's in a slightly better state. Right now there's only an in-memory implementation; I want to finish up the implementation on S3 before I do.
C
So for me it's all about Bitcoin this week: understanding the format and figuring out the cleanest way to extract the blockchain into IPLD blocks that we can store as a flattened graph in CAR file form. It feels like a rabbit hole. I expected this, but there's a rabbit hole here of a nature I didn't quite expect, and it's raising interesting questions.
C
Questions that would help me form an approach here. Our concept of CID plus block sort of breaks down a bit when you get to these blockchains, because of the way they conceived of identifying blocks of data, and I'm wondering where the edges of our concept are. The fact that one of our hashes is double SHA-256 in the multihash table suggests that we have this flexibility at the edges.
C
But
how
far
does
that
go,
for
instance,
in
in
in
a
block
chain
block,
there's
some
compounding
examples
here
that
get
more
and
more
crazy
as
you
go
on,
so
you
can
push.
You
can
push
this
fairly
far
you
so
either
we
have
to
decide
the
line
or
we
say.
Well,
it's
it's
really
flexible
and
you
just
sort
of
choose
your
point
along
the
continuum,
but
then
that
has
implications
back
from
multi
hash
and
multi
codec
and
yeah.
C
Because we have these two multicodecs, bitcoin-block and bitcoin-tx. One of the interesting approaches in the Go library is that bitcoin-tx identifies either a proper transaction or a leaf in the Merkle tree that gets you to that transaction. So there's this idea of a "maybe transaction": you're parsing this block and it's a maybe-transaction. The block could be either two hashes put together, so 64 bytes, or, if it's longer than 64 bytes, then it's an actual transaction.
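The 64-byte heuristic described here can be sketched roughly like this (a hypothetical illustration of the idea, not the actual go-ipld-btc code):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    # Bitcoin's "hash" is SHA-256 applied twice.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def classify_bitcoin_tx_block(block: bytes) -> str:
    # A bitcoin-tx IPLD block is either an inner Merkle-tree node
    # (exactly two 32-byte hashes concatenated, i.e. 64 bytes) or,
    # if it is longer than 64 bytes, an actual serialized transaction.
    if len(block) == 64:
        return "merkle-node"
    return "transaction"

# The identifier in both cases is the double SHA-256 of the raw bytes.
left, right = bytes(32), bytes(32)
inner_node = left + right
parent_hash = double_sha256(inner_node)
```

In both branches the block still hashes to its identifier the same way; only the interpretation of the bytes differs, which is what makes it a "maybe" transaction.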
C
So you could then theoretically decompose a Bitcoin block into a large number of IPLD blocks with CIDs, and that would be the header, the transactions, and then all of the leaves of the Merkle tree that get you to the transactions. All right, so that's interesting. But then that stretches the idea of multicodec, because bitcoin-tx identifies two different types of things, which you can only figure out by inspecting the data.
C
So you can't just take the raw binary data, apply double SHA-256 to it, and get an identifier that's the same kind of identifier as they use. You either have to follow their rules, which I think will give you... There is another path to get all the data properly hashed, but it would probably involve creating at least one more multicodec, and then adding additional IPLD blocks into this tree.
C
Into this weird-looking tree. Or, back to my original point, we stretch the concept of what a hash is, and potentially you could do something like say: well, this block is hashed "with Bitcoin", and that simply means that you can take a block of data and derive a hash from it that happens to include a double SHA-256, plus a Merkle tree, plus more double SHA-256s.
C
Plus this weird SegWit thing. And that's just the hashing algorithm: it's all of that stuff, and it can be derived from the data, but it's not a classic hashing algorithm in the usual sense. It's not a SHA-256; it's a Bitcoin method of deriving a clean, hashable identifier from a section of data. And then you'd throw out the bitcoin-tx multicodec and just say: this is a Bitcoin block with Bitcoin hashing.
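As a rough illustration of what such a composite "Bitcoin hashing" would involve, here is the standard Bitcoin-style Merkle root computation over transaction IDs (a sketch; the byte-order details of real Bitcoin are glossed over):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids: list) -> bytes:
    # Bitcoin pairs up hashes level by level, duplicating the last
    # entry when a level has an odd count, until one root remains.
    assert txids, "a block always has at least the coinbase transaction"
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        level = [double_sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

So deriving a block's "identity" from its data is entirely deterministic, but it is a multi-step procedure (double SHA-256 over the header, a Merkle tree of double SHA-256s over the transactions), not a single pass of a standard hash function.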
C
I know, I know, but the whole point of it was that they wanted to remove the witness data from the standard hashing algorithm, because people were using it to do hacky things with mining. They had this whole section of witness data that they could fiddle with to do more efficient mining, and people were inserting garbage data and stuff. It just became a mess.
C
So the idea was to remove that from the process. But I believe there is a pointer to all of these things; no, a pointer to the root of a Merkle tree over all of these things, and it's stored in a field of the first transaction of a block. I haven't figured out exactly how that works, because there's a bit of an inception thing going on there, but I believe it's still part of an intact tree.
C
So you could extend this: bitcoin-block, bitcoin-tx, and then have bitcoin-witness as another multicodec. And then you could still potentially link them all together, do the maybe-bitcoin-witness thing, and say the Merkle tree has leaves that are also this kind of thing. I think you could do that as well. So...
B
So my instinct here, and I want to know what Eric thinks, my instinct is to not hack up the link at all, including the multihash. If they're not putting this information in the link, if the data is linked in some other way somewhere else and you can derive how to get there with just the information from the header, then I don't see why we need to hack up the link any farther. It's only if they actually include some of this information somewhere in that link, or in their header, and that's how the information gets replicated around, that we would need to hack up the link somewhere and add another multihash. I know that it would make it more usable if the header included a more sufficient link and that kind of stuff.
B
But if this is an application-specific thing in how they did their data structure, and you just have to know something about the traversals in order to find the information, then as long as our parts of the graph will grab all the data somehow, and you can always figure it out if you know how that chain works, I don't think that we should extend it any farther.
C
If you know how to get into a header, then you'll be able to use it to walk into the rest of the block, as long as you've got the rest of the block in the data. But right now we've already introduced this concept of slicing off a header and saying: no, this is the block, it's the 80 bytes. So we'd have to walk that back and say: no, no, we don't do any slicing; it's the whole block or nothing.
B
This would all just break, like, blegh. This would break in go-ipfs when they finish the migration to multihash storage, because for them the block is stored by its multihash, and the multihash has to validate the block when it comes out. So you can't have extra data on the block. So we would basically only store the header by that CID, and the other data would be on its own.
C
This is back to my point about stretching the multihash concept, because we've already done that by introducing double SHA-256, which is not a standard hashing algorithm. That's SHA-256 applied twice, which we introduced for bitcoin-block. So already we're saying: there's a weird hashing algorithm that you've got to do here. You can't just look it up in a hash table; you have to go and implement it somewhere else, because I don't think multihash or any of those libraries implement it themselves.
B
They're the same thing. The double SHA-256 is just SHA-256 applied twice to the data. Okay, well, no, I mean, that's a valid hash function though; it's agnostic to the data. We could throw any data at it and get a hash, and we'd be good. I mean, it would be a silly thing to do, but you could do it, theoretically. I think where we may have messed up is that we shouldn't be storing block data...
A
But I totally see Rod's point: even if we have the special Bitcoin function, you could also basically code it in a way that you can put any data in. Why not? It basically takes the first 80 bytes, and if the data is not long enough, you pad it with zeros or something, so it would also be a hash that works with any data you want.
C
When you look at the Go IPLD Bitcoin code: to derive a CID, it first decodes the data, then rebuilds it without the witness data, and then hashes that. And there you've got a transaction ID, but you've just discarded the witness data that was on there to get your CID. Now, I don't know if that currently maps directly to the transaction IDs that you would expect from the Merkle tree, but already, in go-ipld-btc, it's doing a discard to do a hash.
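The distinction being described, hashing a re-serialization that discards the witness data, is the standard SegWit txid/wtxid split. A toy illustration (the serialization here is a simplified stand-in, not the real Bitcoin wire format):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def txid(tx_without_witness: bytes) -> bytes:
    # The transaction ID hashes the serialization *without* witness data,
    # so fiddling with the witness cannot change the txid.
    return double_sha256(tx_without_witness)

def wtxid(tx_with_witness: bytes) -> bytes:
    # The witness transaction ID hashes the full serialization.
    return double_sha256(tx_with_witness)

core = b"version|inputs|outputs|locktime"   # stand-in for the non-witness fields
witness_a = core + b"|witness-stack-a"
witness_b = core + b"|witness-stack-b"
# Same core data, different witness: the txid is stable, the wtxid is not.
```

This is exactly why hashing "start chunk to end chunk" over the raw bytes gives the wrong identifier for SegWit transactions: their txid is not the hash of the bytes as they sit in the block.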
C
So here's my problem right now: okay, I've done that. I've got a decoder in JavaScript; I can decode an entire Bitcoin block, and all the pieces come out. But then in my tests I've got: okay, this is the structure in JSON that the Bitcoin API tells me this block contains, and this is the structure that my code told me.
C
It
tells
me
it
contains,
and
then
every
fifth
or
seventh
transaction
identifier
is
wrong
because
they're
the
ones
that
contain
this
separate
witness
data
they're
the
one
they're
the
segment
transactions,
because
the
way
that
I've
been
able
to
derive
hashes
just
by
simply
saying
because
my
hashing
algorithm
is
just
so
it
just
says-
start
chunk.
End
chunk,
do
double
char
256
over
that
section,
and
that
should
be
the
identifier,
and
all
of
them
are
wrong
for
the
ones
that
contain
segwayed
data,
so
I
need
to
it.
C
Sorry,
buddy,
I,
think
and
I
hope
this
will
have
to
verify
today
and
tomorrow
that
if
I
was
to
do
that,
but
but
basically
it
decode
the
block
and
then
reconstructed
without
the
witness
data
I
would
get
the
same
identifiers
that
they're
using
and
then
I
would
pass.
But
then
I'm
not
hashing
the
same
thing:
I'm,
not
hashing
the
entire
data
I'm,
putting
a
side
part
of
it
and
saying
no
we're
not
gonna
hash
that
bit,
which
is
what
seg
what
it
was
all
about.
C
That
was
one
of
the
reasons
about
is
removing
it
from
the
full
hash
algorithm.
But
then
there's
that
question
of
well,
it's
gonna
be
linked
somewhere,
yeah
I
agree.
It
does
that's
just
weird
to
me
that
it
wouldn't
be,
but
basically
they're,
they're
transaction
IDs.
Don't
include
part
of
the
data
of
the
transaction.
C
Well,
anyway,
that's
that's,
that's
the
that's.
The
thing
that
is
plaguing
has
been
plaguing
me
for
days
now
exactly.
How
is
this
thing
linked
and
is
it
linked
in
a
consistent
way,
and
if
it
is,
does
that
mean
we
need
to
use
include
another
multi
codec
that
gets
you
to
that
and
then
are
we
okay
with
some
of
these
multi
codecs,
actually
pointing
to
leaves
of
a
Merkel
tree
and
then
having
these,
maybe
decoders.
C
That's fine with the first one: any Bitcoin block, in their sense, has to decode into many blocks in the IPLD sense. So that's what you're saying there, and we can't just store... We know right now we could just store the headers, and we could make a stream of headers; we've got all the code to do that right now. But then...
B
That
yeah,
we
just
need
to
figure
out
where
somebody's
link
to
it,
linking
to
yeah
and
like
what
I
imagine
is
that,
like
we'll,
probably
have
to
either
add
a
codec
or
repurpose
that
GX
codec
or
something
because
like
to
basically
to
have
to
basically
point
at
action
visually,
a
pointer
that
sorry
I
blocked
the
points
of
the
transaction
and
to
the
other
underlying
data.
That
was
part
of
hash.
What
imagine,
but
we'll
see.
C
Well,
I'll
I
need
to
yield
my
time
because
I'm
just
taking
the
most
of
it
so
Eric
pretty
nice
to
have
he's,
go.
D
Solving that problem in general is probably going to be hard, but there's a little bit of new code this week. There's now a traversal SkipMe value that can be handled: when a user provides a link loader callback to do some of the work in a multi-block traversal, one of the things they can do now is just return the SkipMe token, and it does what it says on the tin; it causes the traversal logic to skip that link. So you can use this to implement memoization by CID pretty trivially.
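A rough sketch of the idea, with Python standing in for go-ipld-prime's actual traversal API (the harness, the `{"link": cid}` link model, and the names are all hypothetical): the link loader can return a SkipMe sentinel instead of a node, and the traversal skips that subtree, which makes memoization by CID a one-line loader.

```python
class SkipMe:
    """Sentinel a link loader may return to skip a subtree
    (inspired by go-ipld-prime's traversal.SkipMe)."""

SKIP_ME = SkipMe()

def traverse(node, load_link, visit):
    # A "link" is modeled as {"link": cid}; everything else is plain data.
    if isinstance(node, dict) and "link" in node:
        loaded = load_link(node["link"])
        if isinstance(loaded, SkipMe):
            return          # the loader asked us to skip this subtree
        traverse(loaded, load_link, visit)
        return
    visit(node)
    children = node.values() if isinstance(node, dict) else node if isinstance(node, list) else ()
    for child in children:
        traverse(child, load_link, visit)

blocks = {
    "cid-a": ["x", {"link": "cid-b"}, {"link": "cid-b"}],
    "cid-b": ["y"],
}
seen_cids = set()

def memoizing_loader(cid):
    # Memoization by CID: skip any block we've already walked once.
    if cid in seen_cids:
        return SKIP_ME
    seen_cids.add(cid)
    return blocks[cid]

visited = []
traverse({"link": "cid-a"}, memoizing_loader, visited.append)
```

The second link to `cid-b` is never re-walked: the loader returns the sentinel and the traversal simply moves on.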
D
If you have a struct that has, let's say, five fields, and some of them are nullables and optionals, the type-level iterator will always say five: it will tell you each of the fields, explicitly even if they're absent. The representation-level iterator might say it's only three, if two of the optionals are absent. So all of that stuff works now, and it has test coverage.
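The two iteration views can be illustrated with a toy model (hypothetical, not the go-ipld-prime API): the type level reports every declared field, surfacing absent optionals explicitly, while the representation level reports only the fields actually present in the serialized map.

```python
ABSENT = object()   # explicit marker for an absent optional field

# A struct with five declared fields; two optionals happen to be absent,
# and one nullable field is present with the value null/None.
declared_fields = ["a", "b", "c", "d", "e"]
values = {"a": 1, "b": None, "c": 3}   # d and e are absent optionals

def type_level_fields(decl, vals):
    # Always yields every declared field, using ABSENT for missing ones.
    return [(name, vals.get(name, ABSENT)) for name in decl]

def representation_level_fields(vals):
    # Yields only what would appear in the serialized representation.
    return list(vals.items())
```

So the type-level view answers "what does the schema say this struct is", and the representation-level view answers "what is actually on the wire".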
D
So it's using the same memory to describe a bunch of these types, but I can cast the pointer types in just such a way that I can decorate them with different methods for the type-level view versus the representation view. And so that's just making the whole thing really efficient, and I'm really excited. It's also coming up with some interesting questions about how nullables and optionals work.
D
But
one
situation
I've
been
staring
at
a
lot
lately
and
wondering
if
it
this
is
working.
The
way
that
we
want
is
at
the
very
root
of
something,
if
I,
to
unmarshal
some
serialized
documents
and
at
the
very
root
of
it,
I
want
to
accept
either
like
some
hole,
struct
or
not-
that's
actually
kind
of
hard
to
express
in
the
currents
index,
because
we
we
did
previously
consider
what's
worth
mentioning,
having
a
syntax
that
says
like
type
food.
Is
the
type
name
knowable
bar
bar
being
some
other
title?
D
Have
knowable
knowable
and
that
doesn't
make
any
sense
we
didn't
want
to
make
that
syntactically
expressible.
So
we
don't
allow
type
definitions
like
that,
but
this
makes
it
hard
to
put
a
null
Abul
at
the
root
of
document.
If
you
wanted
to
do
that
for
some
reason,
it's
good
I
don't
know.
I'm
rethinking.
D
This
one
it
turns
out,
you
can
work
around
because
you
can
write
a
kind
of
Union
where
null
is
one
of
the
contents,
so
for
as
often
as
somebody
games,
yeah
so
powerful.
So
this
is
a
little
verbose,
but
it
works.
It
can
describe
the
correct,
behavior
and
I
think
for
as
often
as
some
an
end
user
wants
to
do
this
not
being
a
little
verbose.
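The workaround might look something like this in IPLD Schema syntax (the type names are hypothetical, and the exact spelling of the null member is approximate):

```ipldsch
type Foo struct {
  fieldA String
  fieldB optional nullable Int
}

# A kinded union dispatches on the representation kind of the data,
# so a null at the root selects the Null member and a map selects Foo.
type MaybeFoo union {
  | Foo map
  | Null null
} representation kinded
```

This gives you "struct or null" at the root without ever writing a standalone nullable type.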
D
Maybe
that's
actually
desirable
that's
kind
of
it's
about
right,
but
if
we
wanted
to
express
allowing
a
whole
document
or
absent
I,
don't
know
how
to
express
that
it's
in
the
card
system-
and
so
that's
just
kind
of
keeping
me
up
at
night
at
the
moment
and
I-
don't
really
know
what
to
do
about
that.
I
don't
know
if
anyone
else
is
gonna
have
any
thoughts
on
that
and
I
will
push
for
them
now,
because
it's
kind
of
good
it
gets
weirder
the
longer
you
look
at
it.
This.
D
Yeah
than
the
most
it
follows
from
the
fact
that
you
can't
really
implicitly,
whenever
you're
handing
some
information
to
a
nun
marshal
function,
you're
handing
it
something
that
has
its
identified
by
a
tightening
and
there's
no
way
to
have
a
named
type.
That
has
one
of
the
optional
inaudible
properties
on
it,
and
there
are
other
reasons
that
we
decided
that's
good.
But
for
this
one
particular
application.
It's
like.
D
I think it's something that can definitely be solved by pushing this information into the errors. I'm just wobbling on whether or not that's desirable, because it makes me a little touchy; I have a decent amount of fear around the error handling there, around EOF-style errors. It seems to me like those are just a pernicious source of bugs, and I wonder if we're producing one of those bug factories by not treating this case.
D
With some algorithms, you might know, as the algorithm author, that that's not going to be a problem, and then you can just memoize by CID and get on with it, and that's fine. So this feature is basically going to let somebody do that, if that's the situation that they know they're in, and I'd rather let them do it today, because I don't think I can finish implementing the fancy high-powered version of memoization by CID plus selector sub-clause soon, but this I certainly can, and it's connected and working already.
D
And someone did tell me, from GraphSync, that apparently it would sometimes be nice to say "skip this subtree", as in "I don't have it". If I have to give you an IO reader, the thing is going to say EOF, and that's going to kill the traversal abruptly, when what I actually want to do is continue somewhere else.
D
So
it
also
makes
that
so
Michael
a
little
I
would
love
to
get
the
speaker
fancy
memorized
by
tuple
of
CA
and
selector
sub-clause,
but
that
also
is
unfortunately
non-trivial
because
some
selectors
sub-clauses
you
need
to
sort
of
smash
out
some
of
the
details
like
on
recursive.
You
don't
want
that
recursion
depth,
counter
decrement
fields
in
there,
because
that's
of
course
kind
of
break
gate
you
know,
but
what?
If
that
field
is
going
to
count
down
to
zero
somewhere
mid
block?
Now
you
do
need
it
to
be
correct.
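The caching hazard being described can be made concrete with a toy cache key (hypothetical, not the real selector representation): if the recursion-depth counter stays in the key, the same CID visited with different remaining depth never hits the cache; if you strip it out, a subtree whose depth limit would expire mid-walk gets wrongly reused.

```python
# Two visits to the same CID during one traversal, arriving with
# different remaining recursion depth in the selector state.
visit_1 = ("cid-x", {"recurse": True, "depth_left": 5})
visit_2 = ("cid-x", {"recurse": True, "depth_left": 2})

def key_with_depth(cid, sel):
    # Keeping the counter: identical work is never deduplicated.
    return (cid, sel["recurse"], sel["depth_left"])

def key_without_depth(cid, sel):
    # Stripping the counter lets repeat visits hit the cache, which is
    # only safe if the walk cannot run out of depth mid-subtree.
    return (cid, sel["recurse"])
```

Neither key is right in all cases, which is why the "carefully massaged and whitelisted sub-clause" version is non-trivial.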
D
Now, if you know your data, you can use the simple optimization to do the thing. I'm wondering whether this might stick around even so; I'm not sure, because even when we do the advanced "memoize by CID plus carefully massaged and whitelisted sub-clause of selector", there's no way that's going to be free. I don't know what Richter scale of cost it's going to land on, but it might very well be that if you know you don't need that memoization, you might not want it either.
A
It also fits pretty well to ask whether we should extend the time slot from half an hour to one hour. What I would suggest, and to me it doesn't matter because I surely won't have any meetings afterwards, is: what about officially extending it to one hour, but trying to still keep it to half an hour? Because I get super tired, it's 11:45 here, but basically so that people like Mikeal have it blocked out.
C
Yeah, that's a good point. Mine just moved this week; it moved back an hour, but still, it's seven o'clock for me. I had this fear when daylight saving changed that I'd be having 6:00 a.m. meetings again, but then I remembered that we pin to UTC, because when I was doing meetings with US people regularly, we would have a two-hour shift. So I'd end up at 6:00 a.m. regularly, and that sucks.