From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2020-07-13
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
Welcome everyone to this week's IPLD sync meeting. It is the 13th of July, 2020, and, as usual, we'll go over the stuff that we've done in the past week and then discuss any announcements we might have today. There's one topic I know of — oh, and there's another one we've got to talk about, that's great — and then we'll see.
A
Yeah, I'll quickly start with myself, because there's not that much to say: I just came back from vacation and caught up on things. I started a discussion about size-limiting the identity multihash, which is also one of the discussion topics we'll get to after the round of updates. For next week, I really plan to get back to my own stuff.
A
Some Rust multiformats refactoring, mostly rust-multihash, because I was hitting limitations when I tried to integrate it — I looked into how Filecoin is using it and IPLD, and therefore multiformats, and hit some limitations which need some redesign. It was interesting because another contributor did a refactoring which was a similar thing, so I guess we both agree that something needs to happen. Yeah, more on this next week. So, next on my list is Peter, yeah.
B
I actually forgot to mention that I worked a lot on Filecoin stuff, but this might also be interesting for others. I worked a lot on getting the specification from a draft document into the official spec repository, and the interesting thing is there's a lot of LaTeX math in it. So in case you have such a document and want to get it into a spec, we now have everything wired up to do such things — and it's not trivial, yeah.
D
So far it is only the keyed union representation, but having one of them is enough to prove out the core concepts, so this is kind of a big milestone, because unions were the last recursive type kind that we hadn't yet implemented in any way in the IPLD schema work at all. So now they're implemented in codegen, and no abstractions were shattered — which is such a basic bar for sanity, but it means that we've now gotten through all of the recursive type kinds.
D
Some of the fun things included in this: the user's choice of two different internal memory layout strategies in your generated code. I've called them embedAll versus interface, and this is a parameter which you would set not in the schema itself, because it's completely Go-specific, but you'd set it in some of the adjunct config for the code generation tool. This is still only available when you're calling it as a library — I haven't built the config CLI or anything for this yet.
D
But
the
possibility
is
there
and
these
two
different
options
for
internal
memory
layout.
Basically,
let
you
decide
what
kind
of
performance
traits
that
you
want
in
your
code.
So
if
you
use
the
embed
based
strategy,
you
get
the
fastest
runtime
performance,
because
you're
advertising
allocations.
Still,
if
you
use
the
other
strategy,
which
is
based
on
interfaces
and
pointers,
a
lot
more,
that
you
have
smaller
resident
memory
consumption,
but
it
may
have
higher
costs
to
do
allocations.
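To make the trade-off concrete, here is a minimal sketch in Go of what these two layout strategies look like in principle. This is not the actual go-ipld-prime codegen output; the type names are invented for illustration.

```go
// Two ways generated Go code can lay out a child value in memory.
package layout

// embedAll style: the child value is embedded directly in the parent,
// so one allocation covers the whole tree of values (fast at runtime).
type PointEmbed struct {
	X, Y int64
}

type LineEmbed struct {
	Start PointEmbed // embedded: no pointer, no separate allocation
	End   PointEmbed
}

// interface style: the child sits behind an interface, so unset fields
// cost only one word of resident memory, but each populated value may
// need its own heap allocation.
type Node interface{ Kind() string }

func (PointEmbed) Kind() string { return "point" }

type LineIface struct {
	Start Node // pointer-sized slot, allocated lazily
	End   Node
}
```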
D
So those are two of the trade-offs that you will sometimes encounter wanting to make in the wild. Also, to review something more common about how schemas work, but in the context of unions in particular: there are two things being codegen'd here. You have the type-level semantics for unions, and then you have the keyed-representation behavior of unions. And so, the type-level semantics:
D
They
will
act
like
a
map
when
you
interrogate
them
as
I,
peeled
the
data
model
semantics
and
the
keys
will
always
be
the
type
names
and
the
values
will
be
values
when
you're
doing
the
key
union
representation.
It
will
also
act
like
a
map.
In
this
case
the
type
representation
and
the
representation
representation
are
about
as
orthographies
in
the
representation
will
instead
be
the
strings
that
users
specify
other
representations
will
be
more
divergent.
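As a sketch of what this means in practice, here is a keyed union in the IPLD Schema DSL, carried in a Go string for illustration; the type names and representation keys are made up, not from any real schema.

```go
package example

// At the type level, a value of FooOrBar acts like a single-entry map
// keyed by the type name ("Foo" or "Bar"); at the representation level
// the key is the user-chosen string, so it encodes as e.g. {"f": {...}}.
const keyedUnionSchema = `
type Foo struct { froz Bool }
type Bar struct { bral String }
type FooOrBar union {
	| Foo "f"
	| Bar "b"
} representation keyed
`
```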
D
So far I've had a bunch of placeholder types which, in Go code, represent the type info, and all the codegen has been based around these; that was purely for bootstrapping purposes. In the near future I'm going to be able to take this codegen output and use it for that instead, and what that will get us is that suddenly we'll be able to parse IPLD documents of arbitrary schemas.
D
It
will
turn
very
easily
into
this
in
memory
structure
produced
by
the
code,
gem
I
will
not
have
to
write
any
new
codecs
for
any
of
this.
They
should
just
plug
together
and
then
I'll
be
able
to
power
the
code
down
off
of
this,
so
that
should
scrape
off
a
huge
amount
of
the
work
in
order
to
make
this
an
actual
like
easily
reusable
CLI
fixing
tool
very
exciting.
Also,
it's
just
like
a
huge
tunnel.
D
There
are
some
titties
in
this
I
did
change
some
of
the
unions
to
be
a
little
simpler
for
the
current
schema
schema.
Some
of
those
will
probably
turn
into
the
PRS
back
to
the
schema
schema.
Some
of
them
are
too
dues
for
a
little
more
work
on
the
code
generator
to
support
detailed
features,
but,
broadly
speaking,
it
works.
D
The
output
size
in
terms
of
generated
lines
of
code
is
quite
large.
It
turns
out
to
be
more
than
a
megabyte.
I
have
made
no
attempt
to
minimize
or
optimize
this
yet
so
there's
probably
a
lot
of
things
for
movement.
Some
which
will
probably
be
a
priority
in
the
teacher,
will
see.
The
generated
code
is
providing
a
lot
of
features
and
so
I'm
not
sure
how
low
this
number
is
going
to
be
able
to
go
but
I
hope
lower
than
the
current
baseline.
There
should
be
a
lot
of
room
for
movement.
D
The
team
that
we're
missing
a
couple
of
basic
pieces
of
documentation
and
our
discoverability
is
a
bit
of
a
challenge
and
a
lot
of
information
about
how
it
all
fits
together
is
something
that
is
still
not
easily
discoverable
from
our
written
documentation.
And
that
needs
to
improve
so
I'm,
probably
going
to
spend
a
bunch
of
time
there.
Instead
of
on
code
near
future.
E
Okay, so — OKRs. The OKR review got very good feedback. We already had in there that the docs were important and that we would be doing more, but it basically got reiterated that it's even more important than that: so actually drop a bunch of other things and make these the priorities. And not just sort of docs
E
the way that we may have been thinking of them before, but real tutorial work, real content work — potentially even blog posts or videos, or whatever we can do to try and fill this knowledge gap between where developers are and where they kind of need to be, to think in decentralized data structures. So, working on that: I got the website outline up on Friday, and a beginning doc. So there's a docs repo now. That repo works such that if you run VuePress in there, then you get a VuePress
E
site — eventually that'll be automated into a real website, and it will replace the current ipld.io website. So we'll have all of our resources about the project, and how to learn everything, kind of in one place, through one path. I'd like to even move the schema documentation that we have into there, because the schema documentation is very user-focused — it's not really spec documentation anymore — and the specs repo will, you know, still have everything that isn't that; we'll separate it up.
E
What is the next thing? Yeah — docs, okay. And then, in my time that is left over after managing and doing everything else and all of that, I needed to do something that was not Merkle trees for a bit.
E
So I've been diving more into ESM and built some nice tooling around it, starting to realize that the world of ESM is different enough that you don't actually want some of the stuff that we've been using — so I started to build some of that tooling, and had some really good conversations with people who are maintaining popular compilers.
E
You know, the Pika and Snowpack stuff — I talked to Fred, and talked to Myles Borins a little bit, who does the Node.js ESM stuff, all the native stuff — and got the tooling to a point where it's actually kind of nice. I can kind of see how to write agnostic modules now: modules that don't require a compiler and just work in the browser and in Node.
E
It has really made me realize how nice the new module system is compared to the old one. If you're building any kind of tooling, or you have to create any kind of platform around the syntax, having imports map to a file directly — rather than through a complicated and potentially mutable resolution algorithm — is a big change, and it's so nice. So yeah, I started working on the test thing, and I realized I could probably throw Deno in there, and we could have our stuff work on Deno pretty easily, actually. And so:
E
I talked to Ryan Dahl and figured out kind of where things would need to go and what the direction of the project is, to make sure that it all will align, and that went well. So yeah, it's all looking good. This is a nights-and-weekends thing for the most part, though, but it's looking very nice. The world in the future is going to look a lot better than the world in the past, as far as JavaScript goes; it's going to get way, way
E
simpler. I think the last people to recognize it are probably going to be people with my background — Node people — because if you just write Node and you don't deal with the browser, you already don't deal with compilers, you don't deal with these big toolchains. But the difference when you do have to have something run in the browser, and the toolchain you need to stand up around it — it's a world of difference.
E
If you can deal only with ESM, and you can have these agnostic modules, a lot of the crap that we've had to deal with — like extricating ourselves from Buffer, and, you know, dealing with all of these dependencies that then pull in all these polyfills — goes away. That's actually going to be kind of nice in this new system. So anyway — who's next? Chris, yeah, hey.
F
Yeah, so last week I was mainly working on the Rust Filecoin commP generator, and the good news is it's kind of up to date with the latest now. I did a lot of cleanup and whatnot, and a lot of the time was actually spent just getting the AWS Lambda build to work — there were a number of issues with it. One kind of interesting one — you may have heard about this — is that there's a part of the Rust Filecoin project that uses OpenCL, and apparently it's written using this language called Futhark, which I'd never heard about before.
F
But supposedly it's better than writing OpenCL directly, and there's this compiler called genfut which takes this Futhark code and generates C code, and that gets linked into the Rust Filecoin library. It turns out that that generated C code didn't compile with the ancient version of Linux that's needed to build for the AWS Lambda custom runtimes — if you're doing Rust on Lambda, you have to conform to their Linux image, and it's, I don't know, CentOS 6 or something.
F
So I went down many wrong dead ends in this process, but the good news is I think there's a reproducible build path for it now. So that's good. And then, once the bug fixes trickle down — they go into genfut, which goes into neptune-triton, which means an update to the Filecoin proofs and then into the commP-generate thing.
F
So, you know, a couple of weeks maybe for all the dependencies to be updated, and then it will be clean — but that was the adventure. And then, yeah, the good news is we completed the first Dumbo Drop import of 18 terabytes of data, so that's cool. I do have to verify it, so right now I'm working on a quick verify thing that will take the original files and just make sure I can get 100% compatibility with what's extracted out of the car files. So that's what I'm doing right now, too.
F
Yeah — so the repo has a command-line utility, and it will generate commP for you. I mean, it uses a memory store, so you can only do files up to a certain size, otherwise it'll just run out of memory, but that's there. And then it's also wrapped in an AWS Lambda, so you can just fire off thousands of things in parallel and it just does it.
B
Hey — I'm more asking, like, a question about commP. I want to make sure that if you run your stuff on the same data that we already have and are testing with right now, then with all your upgrades you're actually going to arrive at the same thing. So we have one of the small files that we use — I'll get the data from the other guys, which piece CID exactly they use for that — so we can verify, you know, and see if we're going to get there again with the small car file. Well...
E
It's hard with this toolchain to produce the same car file twice, because the way that it allocates stuff into the car file — from Dynamo, or from the kind-of server — isn't stable. But if you just want to check that commP hasn't changed, that's actually really easy. Now that this new commP thing is there, we could literally go through the table for commP, just pick a few out, run them, and go 'oh hey'.
F
I'd like to see that as well. Yeah — I mean, I don't know how much I want to invest in this tool, but that's something I'd like to make reproducible, for at least one of them, because it has been arbitrary right now. We can't make it perfect, but we can make it better, yeah.
E
It does them in order — though it might parallelize the reads from the block bucket, and so those might not come in in order. So I'm not — yeah, I'm not a hundred percent sure; those are not necessarily deterministic.
F
No — but actually, you know, one of the challenges with the Rust Filecoin commP generator was that there was no testing at all, so it's hard to know whether it's actually working. So I did a unit test, so that as future changes come in we can be sure that it's doing it right. And it has a set of files and commPs — it actually has data files that are being used as test cases for car files and commP.
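A fixture test of that shape might look like the following sketch in Go. The file name, the digest function standing in for the real commP computation, and the golden value are all placeholders, not the actual repository's test code.

```go
package commp_test

import (
	"crypto/sha256"
	"encoding/hex"
	"os"
	"testing"
)

// TestKnownFixture feeds a recorded input through the generator logic and
// compares against a previously recorded ("golden") output, so regressions
// in future changes are caught.
func TestKnownFixture(t *testing.T) {
	data, err := os.ReadFile("testdata/fixture.car") // hypothetical fixture
	if err != nil {
		t.Fatal(err)
	}
	sum := sha256.Sum256(data) // stand-in for the real commP computation
	const golden = "<recorded digest for this fixture>"
	if hex.EncodeToString(sum[:]) != golden {
		t.Fatalf("output changed: %x", sum)
	}
}
```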
B
Because we keep mixing this up with the identity hash: this has to do with the length of a CID in general, because you can have Blake3, or the Keccak stuff — they can all output gigabytes of data as a hash, and you can request that. So I guess what we should do instead is say, number one, you always have to look at the length of your CID before you read the rest — because you have to look at the length anyway — and every implementation must support at least...
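As a concrete illustration of "look at the length before you read the rest", here is a minimal sketch in Go of parsing a multihash header and enforcing a cap before buffering the digest. This is not the go-multihash API; the function and the cap value are assumptions for illustration.

```go
package mhparse

import (
	"encoding/binary"
	"errors"
	"fmt"
)

const maxDigestLen = 512 // an assumed cap; the right number is exactly the debate here

// parseMultihash decodes <varint hash code><varint digest length><digest bytes>.
func parseMultihash(buf []byte) (code uint64, digest []byte, err error) {
	code, n := binary.Uvarint(buf)
	if n <= 0 {
		return 0, nil, errors.New("bad hash code varint")
	}
	length, m := binary.Uvarint(buf[n:])
	if m <= 0 {
		return 0, nil, errors.New("bad length varint")
	}
	if length > maxDigestLen {
		// reject before reading the digest itself
		return 0, nil, fmt.Errorf("digest of %d bytes exceeds cap %d", length, maxDigestLen)
	}
	rest := buf[n+m:]
	if uint64(len(rest)) < length {
		return 0, nil, errors.New("truncated digest")
	}
	return code, rest[:length], nil
}
```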
E
Yeah, I mean, I would be fine having a 'should' in there that said you should support CIDs of at least length X. But if we're talking about a CID limit — recommending a limit specifically on the identity multihash — I think that it should not even be in the 'should' category; it should just be 'this is what we're saying'. Yeah, but...
A
And therefore I'd like to say — basically, more or less, my point is: you write a new Rust multihash — sorry, a new multihash implementation in any language — you are the author, and you make a decision: how big do I make the identity digest? Then I think there should be an answer somewhere, so that you don't pick some random number, because then you are not interoperable with other implementations.
A
It might shake out differently. In this issue we have the case of basically having an identity — the case of what we call inline CIDs, where you inline the actual data. I care only about the case which says: if my hash would be longer than my original data, why should I hash the data instead of just inlining the data? I think that's the use case.
E
I brought this up several weeks ago, but in your multihash implementation, when you have a hashing function and you know that its output should never be over a particular size, you should put that size limit in there just to protect yourself. And if you want it, that would be where you would implement a default limit on the size of an identity multihash. And yeah, I think it's reasonable to set whatever you think is fine — I mean, I can go with Steven's recommendation from the issue.
E
I can go with anything else too — I mean, no matter what you do, you're going to make somebody mad and you're going to crowd out some use case. But it might be more important to have the limit than to keep that use case, and if it's more important to have the limit than the crowded-out use case, then take it. That decision is just going to differ by implementation.
A
They can do that — nothing stops them — but I consider them a power user. Even if it's a 'should', they're totally fine to deviate from it. But for me, as a library implementer of multihash, if there's a 'should', I just do that, everything is fine, and it works with Go, it works with Rust, it works with JavaScript.
A
I care about the case — so for me, the only valid case for the identity multihash is the case where the data is shorter than the hash I would otherwise use, and that makes it quite short. I think we should put a short limit — whatever it is — on the identity hash. If you want to do some crazy stuff, like putting huge data in there, then just create your own codec and you're fine, mate — just do it.
E
You're asking for an objective technical criterion for what you've already stated as an opinion. Your opinion is that the only valid use case for the identity multihash is when the data is smaller than the hash function output — that is your opinion, which you're allowed to have, but you can't...
B
Somebody out there is watching this YouTube stream right now and thinking: I know what I'm going to do — 16K hashes!
B
The point is that, even with a limit, they will be able to build their thing and prove it — it actually might end up being useful — and then, for you to be able to support it, because you go 'okay, this kind of makes sense', all you have to do is switch a bit in your application or your library, and the world will not break. Whereas what you're proposing right now is: we give something out there that is unlimited, for good or bad reasons.
E
To be perfectly honest, I have reservations and concerns with identity multihashes inside CIDs... they break the conceptual representation of the key, and there are libraries right now, including the JavaScript Block API, that flat-out won't decode them. You can't pass one off to a storage layer, because it's not a storage key, and you can't get the block to decode it, because it doesn't come with data associated with it — it's only a key. We don't support key-only
E
storage and key-only blocks right now; that's not a concept, and it would really start to break down the model. So you're worried about this limit for identity multihashes causing an incompatibility — what I'm saying is that the identity multihash right now already fully breaks things at any size, and is not universally supported at any size. So, like — okay, whatever you want, okay.
E
I mean, in your multihash implementation, you kind of have to have it there. And in blocks, if you have a property that is a CID, we can represent it: the CID comes out as a CID instance and has an identity multihash in it. It's just that if you want to decode that into real block data, you're going to run into a problem.
E
You're going to have a problem when you traverse that as a CID; you're not going to have a problem representing the CID. So the break that you're talking about happens a little bit before that — it's slightly different. But there are so many issues involved in this already that it's just not at the top of the list of things to worry about, because you're already off in crazy hack land if you're doing this.
E
Because block storage returns blocks — your key with your data — it doesn't just return data. You could return the data, but then how do I represent it? Here's the thing: a block is a representational pair of a key and binary data. So when you have a key only, and there's no binary data — it's just derived from the key; it actually is the key — it breaks that representation.
E
But yeah — when you have these conceptual layers of the library layer and the ecosystem, that's the compatibility layer that you're worried about. You're basically asking every block store to also implement this thing, and your block abstraction to think about this thing, and that doesn't work — there are things that you can't do that way. There is a reason to have a logical separation between key and value, and to not break that representational layer.
B
Yeah — so we could actually very well say, and I would be comfortable saying, that in the multicodec table we have a bunch of things that are required for you to implement, and the identity multihash might not be among this minimal set. Yeah.
E
Multihash on its own is fine with identity multihashes. It's really CIDs where this becomes very problematic, because you're not just an address anymore — you are a codec associated with an address — and so there needs to be a way to translate this key's value into something, and that becomes really complicated.
A
Okay — so I think I'm convinced that the identity hash is not a good idea; I will just remove it from our implementation. But I still think — and I'm also happy not having a 'should' — that it makes sense to put some note about some size, so that if someone implements it and just wants to have some number, they can just use this number. I think this would make sense wherever it ends up living.
A
Why would I ever write this up and standardize it? The reason is: in Rust, you could have an implementation which avoids allocations — you could write one which doesn't do any heap allocations at all if there's a limit, a low limit — and that's the point: it makes such an implementation possible.
A
And if you don't, then you basically need weird special cases — in this case it's a totally different data structure if you really need heap allocations and whatever — and this is why I brought it up. But I agree with Peter, actually: since we have things like Blake3, which can be whatever size, we perhaps should put a 'may' on the multihash and say — I don't know, what could current hashes be? — I think 1k is probably good enough for now.
A
If you try to read it — like, get bytes into it — so, basically, in Rust we have a multihash object, and you can either derive it from bytes or from other stuff. But let's say you get it from bytes: you take some bytes, put them in, and say 'please parse this as a multihash' — and if it's too big, it will just say 'too long: this is a digest of whatever length'.
A
You could — you could, but I wouldn't implement it, because... theoretically you might need it, but who needs it? Of course you could parse just the length and then say, 'oh, I'll use this other implementation' — currently we even have some optimization like that, which is super hacky, whatever — but the question is: should it be implemented? Because actually no one needs it. You don't need a 16-megabyte Blake3 hash.
E
I would make the limit much closer to the actual things that people are doing, like 512, and then wait for somebody to bump up above it; and when somebody logs a bug, we see what their use case is and deal with it then. Because the problem that I have with coming up with this language, coming up with a limit, and putting it in the spec is that the places where we've seen a limit get actually implemented and enforced are much smaller than that.
A
Then perhaps — a totally different idea — we do a similar thing to what we do for codecs: we add a section for implementers of multihash saying that, for example, it is good practice — or that many implementations do this — to impose a limit on the total size of the hash, to make implementations easier or faster or more convenient; and, for example, note what the biggest hash with real security support looks like.
A
I mean, it's similar to the document we have for codecs, ideally: it's more like a 'hey, this is what other implementers do; this is where you might want to go.' I think, really, it's a pretty high bar to implement the multiformats stuff.
A
If you start from scratch and you don't have all this context, it's super hard to actually know where to get started; you make many mistakes and then find out, 'oh, I could have used a limit' — but you never had the idea, because it's written down nowhere. So I think, yeah — okay, I'm fine with having it not in the spec, but just as some guidelines, or whatever you call it.
E
Okay — so, yeah, Peter: I think that, like, I agree that we should change this FBL to have a union in that one field, but you wanted to get kind of some alignment, I think, on some of the earlier stuff. So let's — yeah, yeah, we ran...
B
I'm going to have trouble with the entire concept of 'you have chunks that are weird sizes — don't do that', because in my special case, what I actually have is a file that is already written in such a way that groups of chunks, at the very low end, add up to 64 kilobytes together. So there might be one chunk, there might be several, and in between these you can have short records of two bytes or three bytes.
B
You can have these pieces put in between each up-to-64K chunk, and then what you end up with — in a streaming fashion, by the way — is the original data, plus the deflated data with a zero-deflate indexing plan. So then I can use this sub-stream as a portion of a gzip, as a portion of a zip, as a portion of a git packfile.
E
Yeah — you can mix types in an array in CBOR just fine, and you can mix types in an IPLD schema as long as you set up the value as a union. So what we need to do is update the FBL, because right now it says that field has to be a link back to the recursive structure, but we could change it so that it's a union — a kinded union for linking — and then, yeah, it could be inline bytes or not. Okay.
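A hedged sketch of what that union might look like, written in the IPLD Schema DSL inside a Go string; the names here are illustrative and are not the actual Flexible Byte Layout spec text.

```go
package example

// Each part of the byte layout could then be either a link to another
// node or inline bytes, distinguished by the value's kind at decode time.
const fblUnionSketch = `
type Bytes bytes
type Part union {
	| &FlexibleByteLayout link
	| Bytes bytes
} representation kinded
type FlexibleByteLayout struct {
	totalSize Int
	parts [Part]
}
`
```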
B
And then my other thing is that we keep talking about schemas as if, no matter what you do, whenever you have a stream, you essentially have to give a schema with it — it's almost like XML, where without a DTD you're screwed. And my use case is that it needs to work the way that dag-PB works right now: not in terms of the structures, but in terms of how it could be used.
B
Essentially, it's a test, right? No matter how you stick these things together, it will give you a stream back out. There is no clear story for how I get from the FBL to an actual something that a web gateway will eventually understand — that doesn't even seem to be on the roadmap, and that's what's confusing. Sorry.
E
It's not in the spec — I did cut it at one point; early versions of the spec had that in there, and I cut it because, when I was implementing, I could not figure out a good way to actually use that feature, because I was always getting data as a stream. And then, if you wanted to decide whether to inline bytes or to hash them as a raw block, you would also have to be constructing the layout at the same time.
E
Because that's the only way you would know how much space you would have before you might go over the max block size. And the more I worked on that, the more I realized that that doesn't really work. What you need to do is run through the entire chunking, take all the lengths and the links, and then create the layout, because you have way better information to do the whole layout at that point.
E
But if you had something so small — like it was two bytes, and they were just all over the place — you could just keep those in memory; that'd be fine, you wouldn't really care, you'd just kind of hold on to them. So I can see it. Also, on Slack I brought up that, actually, for videos and for most compressed formats, if you want to support seeking, you have a header or a footer that you need to read before you can do anything. So why...?
B
Cool. I just want to correct the misconception that you cannot stream and chunk — you can; that's what the very first implementation did. Because if you read ahead two of your max block sizes, you always have enough space in there to figure out where your chunks are, so you can stream and chunk at the same time, no?
E
No — it wasn't that I couldn't stream and chunk at the same time; that was working. What I couldn't figure out was constructing the layout tree while chunks were coming in, because I didn't know the length, and I didn't know how many chunks would be part of it. So: what is my branching factor from the root of the data structure? How big are the leaves? In order to, like, balance the layout — I couldn't figure that out. If I don't know the length, and I don't have any idea
E
how many chunks I'm going to get, I can't figure out a good streaming algorithm to produce the layout. So what I did was just say: okay, here's all of my links at once — I know all of the chunks, I can create all the intermediary nodes of the layout — and that's not a huge amount of space in memory; it's just a bunch of hashes, and even that is immaterial, actually.
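A rough sketch in Go of that two-pass approach: first collect every chunk's link and length, then build a balanced layout tree over them. The types and the branching factor are illustrative, not from the FBL spec.

```go
package layout

type ChunkRef struct {
	Link   string // placeholder for a CID
	Length uint64
}

type TreeNode struct {
	Children []TreeNode
	Chunk    *ChunkRef // set on leaves only
}

// buildLayout groups chunks into a tree with a fixed branching factor.
// Assumes len(chunks) >= 1 and branch >= 2.
func buildLayout(chunks []ChunkRef, branch int) TreeNode {
	nodes := make([]TreeNode, len(chunks))
	for i := range chunks {
		nodes[i] = TreeNode{Chunk: &chunks[i]}
	}
	// Repeatedly group `branch` siblings under a new parent until one
	// root remains; this is only possible because all chunks are known.
	for len(nodes) > 1 {
		var parents []TreeNode
		for i := 0; i < len(nodes); i += branch {
			end := i + branch
			if end > len(nodes) {
				end = len(nodes)
			}
			parents = append(parents, TreeNode{Children: nodes[i:end]})
		}
		nodes = parents
	}
	return nodes[0]
}
```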
D
Good. I would actually really like to drag us through going over the why, and the connection to the highest-level visions for all of that stuff we just discussed, because I'm imagining listening to this conversation as somebody who wasn't fully in sync with everyone in the conversation, and I think they'd be lost.
D
In IPLD, our job — and one of our highest-level visions when we started looking at things and revamping them a couple of years ago to make our new generation of libraries — was this: we noticed that after people had been using all this stuff in IPFS and dag-PB, big maps are a thing that people constantly want. It showed up as directories in dag-PB, but we want big maps with arbitrary keys. And the other thing that showed up is that we want big bytes.
D
We called those files, and dag-PB had a totally specialized, one-off thing that did this. Both of those are really important. When we make IPLD, we want to make all of these things general and codec-agnostic, and we've designed ADLs to encompass both of those things: big maps — and we added big lists, even though those didn't really exist in dag-PB — and big ranges of bytes.
D
'ADL' for short — Advanced Data Layout. Because we came to the conclusion that sharding often requires logic — it might need code, which can be a little scary — the concept of ADLs involves some sort of plug-in structure through which one might supply code. It also gives us a way to expand on things in the future without being too prescriptive, as long as we get some of the central interfaces around this correct: we want the interfaces for maps and lists and bytes to be able to be big.
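To give a feel for what "central interfaces" means here, the sketch below shows a simplified node interface in Go through which an ADL could present sharded data as an ordinary map, list, or byte range. It loosely mirrors the shape of go-ipld-prime's Node interface but is illustrative, not the actual API.

```go
package adl

// Node is the one interface both plain nodes and ADLs present, so a
// sharded map reads exactly like a small in-memory map.
type Node interface {
	Kind() string // "map", "list", "bytes", ...

	// Map/list access; an ADL implementation may traverse shard nodes
	// behind the scenes to answer these.
	LookupByString(key string) (Node, error)
	LookupByIndex(i int64) (Node, error)

	// Bytes access for byte-oriented ADLs such as a flexible byte layout.
	AsBytes() ([]byte, error)

	Length() int64
}
```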
D
This FBL acronym that we've referred to a couple of dozen times now — I don't remember who named the full thing, but it stands for Flexible Byte Layout, and Flexible Byte Layout is the name of the spec that Mikeal made, which is an advanced data layout: a specific way to solve the big-bytes problem. It is just one of them — we could make more of them. We think that this one is a particularly reasonable thing, and we're going to try to make it as usable as possible, but it is also not, like, blessed by the gods.
D
This is something I want to talk through, because we should make sure that we have grounded, shared understandings on this. So, right now, when a file renders on the gateway and it's coming from dag-PB — with dag-PB's totally specialized understanding of big bytes and all of this jazz — it's relying on a lot of super specialized stuff that was hard-coded in dag-PB. This is important to highlight, because we are not carrying forward half of those super specialized things.
D
So it's reasonable to warn that, when you're talking about some public service like a gateway, it is probably going to have a whitelist of known ADLs that it will allow you to use — because it is running code, and letting people run unknown code on your infrastructure, on your budget, is not a thing that you like to do.
E
I think the goal of FBL is to allow you to implement any chunker, any layout that you would care about, in a common data structure, so that the read side of it is the same no matter how you decided to lay it out, or where you decided to put the bits, or what chunker you used — it shouldn't be dependent on that. So we can ship more chunkers in the future.
B
To be fair, this is what we have today with dag-PB: no matter how you arrange stuff, no matter how you inline things — it can actually inline things, with identity CIDs and raw nodes in between — of course it will render the same thing, no matter how you arrange it. And if FBL actually allows this, then problem solved, you know — and I just need to figure out how to actually do FBL on the web, so to speak, because that's still not there yet. But yeah.
D
There is a little bit of a gap between FBL on the web and, well, FBL itself, right? Because the FBL description is topologically applicable to the data model tree structure, and you can choose any codec — which is something that wasn't possible with dag-PB, because all of the hacks were in the dag-PB codec. So Flexible Byte Layout can actually work on CBOR — I mean, using it on JSON is a little funny, because then you're going to get base64, but you could — it just absolutely plugs together; the interfaces do compose.
E
There's been some crazy talk about this, though. Because so much of the UnixFSv1 code exists as dag-PB hacks, it doesn't really respect that sort of transition — dag-PB isn't really a codec in the way that we think about them now; it has a lot of very application-specific stuff in it.
E
There's been talk about just reinterpreting dag-PB, so that what gets spit out of the codec actually matches the new UnixFSv2 schema, so that they don't have to write — they don't have to maintain — two implementations. This is an idea that Steven had, and nobody's really followed up on it, but that was, like, in the ether, yeah.
D
Yeah — I think how to have a heterogeneous mix of structures with wild slurries of UnixFS... dag-PB and UnixFSv1 is a whole issue. Like, git is out there attempting to migrate hashes in their repositories; look at some of the issues discussed over there about how doing this incrementally is actually not a good idea, and maybe consider ourselves cautioned and make choices in light of that information. That's my personal opinion, you know.
E
I mean — there really isn't that much data in it that is dag-PB; the majority of it is raw blocks, right? The majority of the data is raw blocks, and recreating the structures on top is actually relatively easy. We need to have a much larger, longer discussion — we should probably start a thread about this — about the best way to do compatibility from v2 to v1, and, if we want to create a union, what that union would
E
look like, for instance. I think that the inclination here is to rely on the codec as the identifier, and if that's the case, then we don't actually have a way to represent that in schemas right now — schemas really assume that you are basically codec-agnostic. So that is either a missing area, or we just shouldn't use that: we should actually use something inside of the UnixFSv1 data structure to tell us whether it's a new one or not, and then create the union on that.