From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2020-11-30
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
B
Welcome everyone to this week's IPLD sync meeting. It is November 30th, 2020, and as every week we'll go over the stuff that we've worked on in the past week and discuss any open agenda items. Today we even have an agenda item, which is good. But I'll start with myself. It's not that much IPLD, but kind of: I was working on a storage system for content-addressable data. Currently the main purpose is Filecoin, but it's really for content-addressable data in general, and therefore relevant to IPLD. It's still a prototype; it's not finished.
B
You can't even get data out yet; currently you can only put data, but that's the next step. And it looks pretty promising: if you run it on a huge machine with 700 gigabytes of RAM, it works.
B
Which was amazing; it's fun to operate those machines. Probably more on this in the coming weeks or so, but if anyone is interested, it's written in Rust. I've put the link in the chat, so feel free to check it out. Next on the list is Daniel.
C
Cool. So I released go-multicodec, the tag that I spoke about last week. When releasing it, I actually ran into a bug.
C
The Go proxy and sum database, which are like a central place where you can download Go modules and their source code, kept giving me weird errors for about half an hour, and it turns out a couple of other people at PL ran into the same issue months ago. So I raised that upstream. It's kind of like that TV show where you try to fix one thing, and then you find another thing that's broken, and then another thing that's broken, and then it's a chain of...
C
Why
is
everything
broken
when
I
tried
to
touch
it
anyway?
That
was
frustrating
and
I'm
just
replacing
the
old
code
that
inlines
the
constants.
As
I
go
with
this
library
we
also,
I
also
spoke
to
eric
and
we
planned
a
bunch
of
ipld
prime
releases.
C
The TL;DR is that we're going to do 0.6 this week, and then 0.7 is going to include a bunch of backwards-incompatible changes that we've talked about for a while. We also reviewed and merged some stuff that had been in flight for the past few weeks. And what I've been doing for the last couple of days is figuring out how to automate some of the larger low-level changes, such as making the AsFoo methods (AsBool, AsString, and so on) not return an error, because that's an easy enough change to make in the interface.
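As a rough sketch of the shape of that change (hypothetical, trimmed-down interfaces for illustration; the real go-ipld-prime Node interface is much larger):

    // Hypothetical before/after sketch of the accessor change discussed
    // above; not the actual go-ipld-prime definitions.
    package sketch

    // Today: every scalar accessor returns a value and an error, so
    // every call site needs its own error check.
    type NodeBefore interface {
        AsBool() (bool, error)
        AsString() (string, error)
    }

    // Direction discussed: accessors return only the value, and failure
    // surfaces through some other channel, so call sites get simpler.
    type NodeAfter interface {
        AsBool() bool
        AsString() string
    }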
C
You can do a little bit of trickery in Go with a single line where you index into a map and also assert the result to be a certain type. Both operations can also return a boolean, and if you do both at once, it can be confusing which boolean you're getting.
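For illustration, here is that single-line trick in runnable form; the comma-ok boolean reports only whether the type assertion succeeded, not whether the key was present:

    package main

    import "fmt"

    func main() {
        m := map[string]interface{}{
            "a": "hello",
            "b": 42,
        }

        // One line that both indexes the map and type-asserts the result.
        s, ok := m["b"].(string)
        fmt.Println(s, ok) // "" false: the key exists but holds an int

        s, ok = m["missing"].(string)
        fmt.Println(s, ok) // "" false: the key is absent; same output, different cause
    }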
C
If you think about it hard, it's kind of obvious that it's the second one, but if you just skim the code quickly, it can be confusing. In go-ipfs we found something like four different bugs caused by somebody confusing the semantics of this boolean. So we spoke about it for a while; there's a thread on Slack about it, and I drafted a proposal for the Go language to try to restrict that sort of hackery of doing both things at once in a single expression.
C
I reckon it's going to be rejected, because it feels kind of uncertain, but at the same time, if people keep getting the semantics wrong and it's a confusing corner case of the language, it might be worth restricting. I don't know; I'll probably raise it next week, so if anybody has any feedback, it's welcome. Also, my second priority for quarter four was writing a CI best-practices document for Go, and I thought it could be upstreamed, because it's fairly generic; it's not just about PL.
C
The problem is it has sort of stalled upstream, because I waited a month for their technical writer to respond to me and he just didn't, and then, when somebody else finally responded, they said: well, we don't know where this document could go, so just stick it into the wiki. And the wiki is the GitHub wiki, which is horrible.
C
The reason being that you can post a page there, but anybody can edit it, and the only thing I can do is subscribe to changes; if anybody spams it or deletes my content, I then have to go back and revert that, which is really annoying. So I don't know what I'm going to do. I might just publish it as a PL document after all. And that's it for me.
B
Thanks. The next one is Michael.
D
Well, yeah, Thanksgiving was last week, so it was a pretty short week, but I got a little fed up. I was debugging CADB and realizing that, the way I implemented the tree, the abstractions just aren't in the right places to debug it well, and it's a total mess. So I did my third implementation of these trees, which are now called chunky trees, by the way; Mikeal just isn't comfortable naming things after himself, so they're called chunky trees now and not Mikeal trees.
D
So I did a third implementation of these trees, now with the intention of having a base, abstract kind of type system for the trees, so that I can implement CADB on it, or any IPLD trees; the IPLD trees basically use those base classes.
D
Anyway, if you look at the implementation, it makes a lot more sense than what I'm saying. With that library I've now implemented a CID set, a sparse array, an ordered map, and a database index. They all work really well, and the tree is really holding up: we can just crank out these data structures on these chunky trees, varying some of the key types and other details, so it's actually really, really nice.
D
These trees are fantastic. Also, having the implementation this clean and abstract lets you bring the branching factor way down and really control where things are, so you can write tests that trigger the right merges and things like that in a really predictable way. When I was trying to debug these things and write nice tests for them against a fully materialized data structure, it was just too unpredictable because of all the hashing randomness.
D
You don't know when you're going to hit some of these cases and when you're not, and then you change a tiny implementation detail and all of a sudden none of that stuff works anymore; it doesn't do what it used to do. So it's all just a lot nicer now. That's it for me; I don't know who's next.
A
I'm having one of those "trying to remember what I did all week" weeks. One of the big things is I've been trying to work on a document that accumulates some known goals for the new year, so we have those written down someplace and can maybe start circulating them to anyone else who wants to be aligned on that scale of planning.
A
We will make many smaller, more local-in-time plans throughout the year as well, anyway. So there's a link to a document for this; it's full of a lot of brain dump from me, but it could improve by having more brain dump from other people too. So I'd love some review on that.
A
I think Daniel already took most of the wind out of my sails by discussing that we're planning some new releases for the go-ipld-prime stuff. We did some merges already; I'm going to be doing a few more this week, basically trying to wrap up and nail down anything that could move, and then we want to tag one version coming up very shortly, which will have minimal breaking changes.
A
Almost nothing should surprise anyone when they jump to it. Then, as soon as we get that safe checkpoint, we're going to move ahead and do a bunch of API changes that will require manual intervention for somebody to upgrade across them, and we're going to try to get a bunch of these done really quickly, within a month or so. These basically all have the goal of usability improvements to the core interfaces, things we figured out from more use over time and from the feedback we've gotten from integrations so far.
A
For example, one of the big ones is our Node interface, the thing that kind of feels like an AST. It's costly to use, because a bunch of the read methods all return a value or an error, and that means you can't chain those calls. Syntactically it's just a big pain in the butt; it's the very best of Go code.
A
It has "if err != nil" branches on almost every alternate line, and nobody likes that, what a surprise. So we're going to try to trim down some things like that, but those will be breaking API changes by necessity, so we're going to try to do them all in as small a window as possible and then move on. Hopefully anyone starting to interact with this library by the new year will get the new, improved experience with fewer errors.
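A minimal sketch of that ergonomic difference, using a hypothetical miniature node type rather than the real go-ipld-prime API:

    package main

    import "fmt"

    // Hypothetical toy node, purely to illustrate the ergonomics.
    type node struct {
        kids map[string]*node
        err  error
    }

    // Value-and-error style: each step must be unpacked and checked
    // before the next call, so paths can't be written as chains.
    func (n *node) LookupByString(k string) (*node, error) {
        c, ok := n.kids[k]
        if !ok {
            return nil, fmt.Errorf("no member %q", k)
        }
        return c, nil
    }

    // Chainable style: the error rides along inside the returned node,
    // so lookups compose and the caller checks once at the end.
    func (n *node) Lookup(k string) *node {
        if n.err != nil {
            return n
        }
        c, ok := n.kids[k]
        if !ok {
            return &node{err: fmt.Errorf("no member %q", k)}
        }
        return c
    }

    func main() {
        root := &node{kids: map[string]*node{
            "a": {kids: map[string]*node{"b": {}}},
        }}

        // Value-and-error style: an if-err branch after every step.
        a, err := root.LookupByString("a")
        if err != nil {
            panic(err)
        }
        if _, err = a.LookupByString("b"); err != nil {
            panic(err)
        }

        // Chainable style: one check at the end of the whole path.
        leaf := root.Lookup("a").Lookup("b")
        fmt.Println("chained lookup error:", leaf.err)
    }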
E
My week has just been all Filecoin, just swimming in data. I've been trying to tackle the issue of the sheer volume of data in Filecoin, and in some ways I'm coming at it differently, but I'm hitting the same issues that Peter has been dealing with: just the volume of this stuff.
E
So many things break down when you're dealing with this much volume, and also with the fact that everything is so small: all these blocks are so tiny, and there are so many of them to navigate through. It shouldn't be surprising, because that's how so much real-world data is, but it just makes everything hard, and it's not a static data set either.
E
It's becoming frustrating. There's some other Filecoin-related stuff in there too: interacting with the specs-actors team and talking about data structure shapes, sizes, and algorithms. I think I shared with most of you some of the things that have surfaced out of that, about the way the incentives drive the size and shape of their data structures.
E
I've given some feedback about that, but it's probably worth us discussing at some point, because they're trying to apply fairly brutish methods of costing the various operations with regard to IPLD, and, you know, when you simplify anything in a complex system...
E
...you end up with these things bursting out in unexpected and unwanted ways. I think we're already seeing some of the fruit of that, and we will continue to unless we find some good ways of talking about how the different constraints cause different trade-offs.
E
Anyway, that's been my week, and I don't have much to show for it other than frustration. I'm also really behind on my GitHub backlog, so if there are things I haven't responded to, I'll catch up.
B
Thanks. Next is Peter.
G
You're muted. Sorry, the joke was: what Rod said, same here. So yeah, basically multiple requests to move this large set of data to different parties, for different tests, and so on.
G
So, of course. I think I finally have a solid way forward that actually works; I just need to tweak the schema a little bit more. The original idea was to have the blocks accessible via some kind of synchronous block store, whether that's S3 or, you know, IPFS in the picture, whatever it is. But the amount of blocks that we have makes this absolutely impractical; even with any kind of fan-out you can do, there is just not enough.
G
There are not enough round trips in the day for you to even pull a single state tree out, let alone something suitable for analysis. So the version I'm settling on, which seems to be working (and we'll actually be talking a little bit after this call about how this can be integrated into Sentinel), would be a humongous Postgres database that we publish. It is sufficiently normalized, but it doesn't go all the way into breaking down the blocks as far as Sentinel does. It does, however, maintain the relationships between the individual blocks: I parse every block before I put it in, pull out the links, and actually recreate the linkage within the database itself, which basically lets me do things like exports and other stuff.
G
The same way you would do it by traversing a tree, but with recursive CTEs, which obviously is way, way faster than anything we can do on the Go side.
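As a hedged sketch of that approach: the table and column names here (blocks, block_links) are hypothetical stand-ins for the schema described above, queried from Go via database/sql. UNION rather than UNION ALL deduplicates blocks that are reachable along multiple paths of the DAG:

    package main

    import (
        "database/sql"
        "fmt"

        _ "github.com/lib/pq" // Postgres driver
    )

    // Walk an entire DAG inside Postgres with one recursive CTE,
    // instead of one network round trip per block.
    const walkDAG = `
    WITH RECURSIVE dag AS (
        SELECT cid FROM blocks WHERE cid = $1
      UNION
        SELECT l.child_cid
        FROM block_links l
        JOIN dag ON dag.cid = l.parent_cid
    )
    SELECT cid FROM dag;
    `

    func main() {
        // Connection string is illustrative only.
        db, err := sql.Open("postgres", "dbname=chain sslmode=disable")
        if err != nil {
            panic(err)
        }
        defer db.Close()

        rows, err := db.Query(walkDAG, "<root-cid>") // root of the subgraph to export
        if err != nil {
            panic(err)
        }
        defer rows.Close()
        for rows.Next() {
            var c string
            if err := rows.Scan(&c); err != nil {
                panic(err)
            }
            fmt.Println(c)
        }
    }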
G
On the Go side we'd have to unpack every block and so on and so forth, whereas if I actually have the graph in the database, it just works. Moreover, because all of this is sufficiently compressible, it is practical to start publishing base backups of this database every day or so, once it's working and caught up. Back-of-the-envelope says it will take eight days to actually run through the entire chain once I start, because I cannot just import an existing data store.
G
I need to re-analyze every single block. And there's a little bit more niceness: for every block that comes in, I found a way to record when in time it happened, so we also have temporal clustering of blocks and so on and so forth. Plus, there is something similar to file-system access times, so we can actually tell, for a particular block, which epochs touched it throughout time.
G
As the chain progresses, that is. It's not clear what this will allow us to do in the future, but recording this information once and keeping it going forward is virtually free, so we are going to keep this as well. And you can publish a base backup of this and then publish a streaming WAL, the write-ahead log, on top of it.
G
So essentially, anybody who wants to get this humongous database running should be able to just pull a recent backup from IPFS and then sync up from there. There will have to be a series of read replicas so as not to jeopardize the main database, but if everything works correctly and it scales the way my initial testing shows, this will be a way to provide the data to whoever wants it in a very cheap and relatively decentralized way. So this should be solved. Of course, it doesn't...
G
...you know, help with how many actual rows you have in these tables and so on and so forth. But that's how it is; just don't run a SELECT COUNT(*), because that will take a long time. So there is that. And another piece of news, which is literally why I was late: this is the first time we are doing actual Dumbo Drop deals, with a real sector, with a real miner, for real.
G
Things are going way better than I expected based on my previous experiences. So hopefully we will have our first sector sealed, and if it all works correctly, the power shows up where it's supposed to show. It's a simple matter of automation at this point, so fingers crossed that it will go off without a hitch. That's my week, IPLD-wise. What's next? Chris, do you have any update?
H
Yeah, I got validation plugged into the Graphsync requester, so that's good news. I'm moving on to adding extensions, and then responders. So that's my update.
B
Thanks. Okay, so I have one agenda item, which is about one of our favorite topics: inline CIDs. It came up again with the DAG-COSE and DAG-JOSE spec work, and so I had a chat with the developer who is working on those things. Before that I had also talked briefly on Slack with Eric about inline CIDs and whether we couldn't do schemas differently, and so on. So, long story short...
B
It was interesting to me because I've encountered inline CIDs from implementing them in Rust and such, and I've seen how they can kind of fall apart, because, well, they are not fixed-size, among other things.
B
But as I talked with him about it, it was interesting to see that, from an application developer's perspective, they are just super useful. If I didn't have the whole background of how to implement them, I'd find them really nice to use. And what I ended up with, which was interesting, is what I'd like to discuss, or at least point out...
B
...the thing that inline CIDs give you, at least in my understanding, which you currently can't get with pure schemas, is that you can still traverse the whole thing transparently.
B
Let's say, in the DAG-JOSE case, you have JSON and the payload is clearly defined as a CID. It could be an inline CID, or, if you put a normal CID in there, it of course points to a different block; and no matter which form it is, you can just traverse it.
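To make that concrete, a minimal sketch using the go-cid and go-multihash libraries: an inline CID is simply a CID whose multihash is the identity function, so the addressed bytes travel inside the link itself, and a traversal can read them without fetching another block.

    package main

    import (
        "fmt"

        cid "github.com/ipfs/go-cid"
        mh "github.com/multiformats/go-multihash"
    )

    func main() {
        payload := []byte("tiny payload")

        // A normal CID: hash the payload with a real hash function.
        sum, err := mh.Sum(payload, mh.SHA2_256, -1)
        if err != nil {
            panic(err)
        }
        normal := cid.NewCidV1(cid.Raw, sum)

        // An inline CID: the identity "hash" embeds the payload itself,
        // so no separate block exists for it.
        id, err := mh.Sum(payload, mh.IDENTITY, -1)
        if err != nil {
            panic(err)
        }
        inline := cid.NewCidV1(cid.Raw, id)

        fmt.Println("normal:", normal)
        fmt.Println("inline:", inline)

        // Decoding the inline CID's multihash yields the payload directly.
        dec, err := mh.Decode(inline.Hash())
        if err != nil {
            panic(err)
        }
        fmt.Printf("recovered: %s\n", dec.Digest)
    }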
B
Whereas if you use schemas, your field will be either a CID, or bytes, or some object. So the traversal changes: your traversal path is different depending on whether it's inline or not. And from an application developer's perspective, the transparent version is super useful.
B
Yeah, and what I then also came up with: we had a similar discussion about Filecoin, where in one place they store a CBOR-encoded block in a field as bytes. It's a separate question whether you should do that or not, but it is the case; and if that, for example, were an inline CID, which I don't propose to do, you could just traverse it, which is kind of nice. So the question to me is...
B
Yeah, so: does anyone have any thoughts on this? Or should we just kick it to next week, or to the next meeting, to see if anyone has had some thoughts? And if not, that's fine too; it just came up in this PR, and I found it interesting to think about.
D
What we're still pushing back against on one of those threads is not the idea of ever using inline CIDs where you think they might be valuable. It's literally having some automagic logic where, whenever you're about to encode something and link to it, you decide based on the length whether or not it should be an inline CID, effectively turning every link into a potential inline value. That's where things get really tricky.
A
And that means that if somebody is switching between different codecs, or switching other parameters of linking, like switching between an actual link and an inline CID, then we have very little ability to describe that switching, and that makes a lot of discussions much harder whenever these things appear.
D
And we don't have entirely consistent handling in some places. We're still not entirely sure about the CAR file thing: Stephen seems to think inline CIDs go in the CAR file, Peter doesn't think so, and in my view they shouldn't be their own entry in the CAR file at all; that's just a waste of space. But no, I mean, they're definitely useful in some cases.
D
I stand by my original analogy, which is that they're data URIs. They're the same thing: you take the value at the address and you make it the address. And those work, like, ninety percent of the time. Data URIs work like ninety percent of the time; they're really useful when they do work, and I use them. But sometimes they don't work, because sometimes you need an address to be a value type, and they don't work very well as value types, and they don't have the same guarantees as address value types.
D
Yeah, at one point I had some objections around the fact that, because it's not an actual hash, there are assumptions you might make about it that would be problematic if you're trying to use the link as a value type. Which is true, but upon reflection I don't think that's any different from insecure hash functions: the same caveat applies whenever you use a hash digest for something that needs it to be an actual hash digest.
B
So what they need, for example, for DAG-COSE and DAG-JOSE is this: inline CIDs carry two bytes, or rather two varints, more than you would need, because what you basically need is an identifier for the encoding, the length, and then the data. And the thought I had was: perhaps there could be, in the future, a CID version 2 which is defined in a different way, so that it could be just codec, length, data.
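For reference, the byte counting behind that observation, assuming single-byte varints throughout; the "v2" layout here is purely the hypothetical one floated above, not an existing spec:

    package main

    import "fmt"

    func main() {
        data := []byte{0xAA, 0xBB, 0xCC}

        // CIDv1 with an identity multihash:
        // <version><codec><mh-code><mh-length><digest>
        inlineCIDv1 := append([]byte{
            0x01,            // CID version 1
            0x55,            // codec: raw
            0x00,            // multihash code: identity
            byte(len(data)), // digest length
        }, data...)

        // Hypothetical "codec, length, data" form from the discussion:
        hypotheticalV2 := append([]byte{
            0x55,            // codec: raw
            byte(len(data)), // length
        }, data...)

        // The version byte and the multihash-code byte are the two
        // extra bytes (extra varints, in general) mentioned above.
        fmt.Println(len(inlineCIDv1) - len(hypotheticalV2)) // 2
    }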
B
It would be a bit as if, compared to a multihash, you swapped the length and the hash function. Not quite, but just as an idea: would it make sense to come up with something that reads like a CID but isn't a CID? Again, this is not something I'm pushing for or proposing; it's just the thought that it might be a useful concept to have something in IPLD which is just codec, length, data.
D
I mean, you're only saving two bytes, and if we ever do another version of CID, we're going to have to deal with a lot of code that just checks the first byte and freaks out if it's not one. So it would be a huge amount of work to support this everywhere, just to shave two bytes. It's probably not worth it.
D
You could imagine doing something like this within a codec, though: one that was very aware of inline CIDs and just shaped the bytes this way, so it had its own typing token for them and could identify them. So you could do this reduction and have this type within a codec, as a method of compression, and that's probably good enough.
B
Sure
should
then
in
reverse,
be
illincided
is
more
of
a
first-class
citizen
than
if
we
like,
like
be
like,
because,
like
julia
hey,
if
the
feeling
that
we
treat
them
like
well
they're
there
and
I
can
use
them
but
like
they
might
break,
they
might
not
break,
and
it's
like,
like
so
yeah,
do
we
have
like
do.
We
then
have
plans
to
say:
okay,
that's
a!
B
...this is a proper thing; it's not really a hash function, depending on the definition you use for a hash function, but we're still okay with it, and it's fully traversable in our tools and so on? Or do we just keep saying: well, this thing exists, you can use it, and we will see what happens? I mean, that's also totally fine.
A
I'm not super convinced by some of the reasoning we entertained earlier in this call about why inline CIDs are superior. I think a lot of those things are actually still just equal: transparent pathing will work the same way whether there's an inline CID, a link to another block, or just an inline piece of additional content through a kind of union.
D
In that case you just want the binary value, which in JSON is base64url-encoded, and you just want to change the interpretation of it to say it's a CID. So you actually don't have an opportunity to do a union or say it's any other type; it has to be one type, because otherwise you'd have to change the format, and we don't want to change the format.
D
The
thing
that
I'm
worried
about,
though,
is
that
if
you
do
this
with
encryption,
it
depends
on
the
encryption
function
that
you're
using
but
you're,
putting
like
highly
identifiable
known
bytes
at
the
beginning
of
the
encrypted
data
or
at
the
beginning
of
the
decryption
data.
So,
like
you
can
use
that
to
break
the
encryption.
I
think,
like
some
algorithms,
I
think
are-
are
fine
and
jumble
things
around
enough,
where
it's
not
a
big
problem,
but
some
of
them
like
this
is
a
problem
like
they
would
need
to
worry
about
this.
B
Okay,
so
I
I
can
see
the
point
of
like
you
probably
shouldn't:
do
a
code
exchange
in
an
inline
in
if
you
use
you
probably
shouldn't
change
the
coding,
which
kind
of
like
seems
pretty
reasonable
for
the
normal
use
cases.
Okay,
yeah,
you
can
see
now
that
okay,
then
I
kind
of
doesn't
make
a
difference.
Okay,
I
can
see
that
okay.
D
Yeah. Actually, Peter, you would know, since you parse these CAR files all the time: are there inline CIDs in the CAR files, as unique entries, as unique blocks?
D
You have to handle them, right? Yes. But in my view, we should just update the CAR files to not write these, because it's just a waste of space, and that's the easiest thing to keep consistent, since we know there are these other wrappers everywhere that just make them vanish.
D
...exploited someday. There's going to be some operation we do where we ask somebody else "do you have this?", and if they have it, then we know something about them, and somebody's going to find a way to abuse that.
B
I
I
think
this
is
kind
of
the
reason
why
I
was
running
if
we
should,
like
yeah
push
for
us
in
90s
or
not
yeah,
but
I
guess
also
like
this
ship
has
sailed,
I
mean
it's,
they
exist.
We
have
to
deal
with
them,
but
I
guess
yeah.
As
I
said,
we
could
do
a
better
job
at
like
yeah,
explaining
like
yeah,
explaining
what
they're
good
for
and
not
good
or
like
what
they're
good
for
and
why
you
want
to
use
them
or
why
you
like
might
want
to
do
something
else.
D
Yeah
and
what
you
should
do
like
like
documenting
this
conversation,
we
just
had
about
storage
layers,
like
should
just
return
them
if
people
ask
for
them
is
like
a
good
thing
to
say,
because
I'm
pretty
sure
that
most
of
our
stores,
like
don't
right
now
like
a
lot
of
them
and
then
in
go
there's
just
like
this
magic
layer.
That
goes
on
top
of
everything
that
makes
all
of
it
do.
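A sketch of that kind of wrapper layer, assuming a hypothetical minimal Store interface (go-ipfs ships something similar as a blockstore wrapper): identity-hashed CIDs are answered from the CID itself, and everything else falls through to the underlying store.

    package main

    import (
        "fmt"

        cid "github.com/ipfs/go-cid"
        mh "github.com/multiformats/go-multihash"
    )

    // Hypothetical minimal store interface, for illustration only.
    type Store interface {
        Get(c cid.Cid) ([]byte, error)
    }

    type mapStore map[cid.Cid][]byte

    func (m mapStore) Get(c cid.Cid) ([]byte, error) {
        b, ok := m[c]
        if !ok {
            return nil, fmt.Errorf("not found: %s", c)
        }
        return b, nil
    }

    // idStore answers inline (identity-hashed) CIDs without ever
    // consulting the wrapped store.
    type idStore struct{ inner Store }

    func (s *idStore) Get(c cid.Cid) ([]byte, error) {
        dec, err := mh.Decode(c.Hash())
        if err != nil {
            return nil, err
        }
        if dec.Code == mh.IDENTITY {
            // The block's bytes are embedded in the CID itself.
            return dec.Digest, nil
        }
        return s.inner.Get(c)
    }

    func main() {
        s := &idStore{inner: mapStore{}}
        id, _ := mh.Sum([]byte("inline!"), mh.IDENTITY, -1)
        data, err := s.Get(cid.NewCidV1(cid.Raw, id))
        fmt.Println(string(data), err) // "inline! <nil>", no store lookup
    }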
B
Yeah. So, just as background for the Rust stuff: as default codecs we only ship cryptographically secure hashes, so no identity hash, obviously, because it's not cryptographically secure.
D
If it were possible for inline CIDs, it would be possible for other CIDs too, for other data. It is the same thing: there's no way to put a link in there to a parent.
B
Yes,
this
is
so
if
anyone
who
wants
to
do
a
homework
and
spend
the
weekend
on
proof
that
it's
not
possible.
B
...I'm happy to read it. Anyway, coming back to the DAG-COSE stuff: basically, we need to post a little comment there, mostly on the open questions. We'd say they should just go with what they currently have, because it's totally fine; it's what they have now. And I've also talked with Eric briefly about the fact that we could potentially still upgrade it to a kind of union later, since that's not a breaking change.
B
So
if
we
ever
like
decide,
that's
a
better
way
of
doing
it,
we
could,
but
I
also
think
like
we
shouldn't
like
keep
blocking
them
on
anything
like
it's
just
like
yeah.
So
then
I
will
post
a
comment
like
concluding.
Like
just
writing.
Well,
we've
talked
about
in
the
team.
You
can
watch
the
recording
blah
blah
blah
just
through
it
and
yeah.
Okay,.
A
Yeah,
I
just
dropped
a
bit
of
a
comment
on
that
big
long
pr
already
about
this
too,
and
the
migration
potential.
A
I
think
another
thing
that
we
could
use
a
lot
of
docs
on
in
general
in
the
future
is
migration
potentials,
because
we've
started
thinking
about
this
in
this
team
and
the
people
in
this
call
on
a
regular
basis
quite
a
bit,
but
it's
very
non-obvious
to
people
outside
of
this
group.
I
think
what
kind
of
migrations
are
easy
and
what
aren't
easy
and
like
it
turns
out
that
yeah.
D
Also, people, in their efforts to byte-shave, will move toward these really thin representations of their structs that make certain types of unions really, really difficult. So if you don't design for that kind of stuff and future-proof early, you don't have any options down the line.
E
It's really hard to solve for, and nobody's really doing a good job of mapping all the variables and their impact on the kinds of graphs that get built. And that's really our job; this is something we should be looking at more and be able to give answers on. I was in a call last week and brought up this whole thing: we keep coming back to a sort of CAP theorem for IPLD that we've never fully defined, and I just couldn't pull it out of my head.
D
...like, in their heads, because that's not what they've worked with before. But you can't just think about the one-time representation of these things and how many bytes you shave; you have to think about how much of the structure is going to change and get orphaned as it changes over time. That's something you think about a lot in database engineering, but you don't necessarily think about it in the immutable world.
E
So, just to raise this thing I brought up last week with a few of you: in Filecoin they have this mechanism that is driving decisions around data structure shape, where everything is gas-costed. Every operation costs gas, right? So how do you tune your data structures so they cost less gas? And it's meant to drive down the costliness of running the chain.
E
It's
meant
to
drive
that
so
the
incentives
are
pushing
down
that
the
the
costliness
of
running
a
node,
because
these
things
shouldn't
be
too
expensive.
So
you
don't
have
these
perverse
structures
that
are
just
taking
up
cpu
and
disk
space,
but
to
do
that
they've,
it's
like
there's
too
many
variables,
as
I've
been
saying,
and
so
they've
they've
narrowed
it
down
to
some
important
variables
and
the
so
and
a
lot
of
it's
based
on
benchmarking.
But
a
lot
of
it
is
also
just
negotiation
between
people
and
other
different
incentives
going
on.
E
So a lot of it gets detached from the real world, and it is genuinely difficult to represent the real world. These data structures are primarily driven by three costs: the cost of an IPLD get, the cost of an IPLD put, and a per-byte cost. And the per-byte cost is really small compared to the other two, so they end up indexing heavily on gets and puts.
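To make that weighting concrete, a small sketch with made-up gas numbers (the real Filecoin gas schedule differs; these constants are hypothetical):

    package main

    import "fmt"

    const (
        gasPerGet  = 100_000 // hypothetical
        gasPerPut  = 300_000 // hypothetical
        gasPerByte = 10      // hypothetical; tiny next to the other two
    )

    func cost(gets, puts, bytes int) int {
        return gets*gasPerGet + puts*gasPerPut + bytes*gasPerByte
    }

    func main() {
        // One wide node: a single get and put of a 64 KiB block.
        fmt.Println(cost(1, 1, 64*1024)) // 1055360

        // Narrow nodes: four gets and puts of 1 KiB each.
        fmt.Println(cost(4, 4, 4*1024)) // 1640960

        // With per-byte cost this small, operation count dominates,
        // which pushes designs toward fewer, larger blocks.
    }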
E
So when you try to mutate, you end up copying these huge blocks and changing just a small bit of them, which then drives up the amount of storage you need to store the blocks. But maybe that's the right decision, because maybe storage is cheap and dealing with lots of blocks is not. Anyway, that's what's going into this decision-making...
E
...and it would be nice to be able to engage in that conversation in a way where we can bring metrics, measurements, and tools to it. Sorry, Peter.
G
Yeah, you're absolutely right, and I basically want to help on exactly that. Not only is the measurement tooling not there; the measurement culture is not there either, because there are a lot of misconceptions about what Lotus actually does, what levels of caching exist, and what is acceptable in terms of how much we have on disk, how much sits in the intermediate VFS cache, and how much lives in actual Go-style sized internal caches.
G
This is being treated a little bit as a black box by the people doing the measuring, which is not the way to approach it. So I want to challenge basically everybody on the call to not jump into design decisions before doing the measurements themselves. It's like our experience with Dumbo Drop: we had to verify every single thing works, and actually, while this call was running, every single deal worked flawlessly for this person. Once we got everything debugged, the rest kind of...
G
...took care of itself, I think. It's the same with measurement and design: we need to remeasure everything at least twice to be comfortable with the numbers.
E
This goes back to what our job is, though, because the reason this becomes a black box is that we're talking about almost obscure knowledge. When you push it down to the block level, it's in a place where most programmers just want to throw it in the database and ignore it: I put it there, and then I get it out of there; it just goes there. And we've...
D
But then, you know, you want to assign a cost to something, and so you have to actually bubble up the cost of the abstractions in some way and understand them. All of a sudden you can't really live inside any abstraction; you have to unwind everything and quantify everything. It's quite difficult.