From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2020-10-19
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
Welcome everyone to this week's IPLD sync meeting. It's October the 19th, 2020, and as every week, we go over all the stuff that we've worked on in the past week and then discuss any action items. Today we even have an agenda item, but we'll still start with the round of updates. I'll start myself: this week I actually had plenty of stuff to do in the IPLD world, which is great.
A
On the documentation side, I've made two pull requests into the exploration reports repo, with a small script which does everything automatically. I had to patch it a bit, but now it's quite nice, and I think the output reads quite well as Markdown. On the library side of things: currently I'm still holding the rust-multihash release until I'm sure that it works well for the libp2p folks.
A
I've made a pull request on libp2p, and I hoped they would reply quickly, but they haven't; it's been six days or so, so I've pinged them again on IRC to see if there's anything I could do to get it reviewed. As this also blocks the rust-cid release, what I'm going to do this week is this: there are a few changes that I think they will need, so I'll do a rust-cid release with those, and if there's anything they still need, there might just be another breaking change for it. I don't expect it, but in case they do, I'll just do another release and we'll figure it out. I don't want to be blocked on the libp2p review. As for what else I did:
A
There's the simple DAG stuff that has been coming up in this meeting quite a bit, and I looked into it again because Eric was asking how it is different from DAG-CBOR. So I looked into it and decided to make a pull request to show that it's actually quite similar; it's linked. Basically, the point is that I think IPLD data model strings should be valid UTF-8, and Eric thinks that IPLD strings should just be 8-bit byte sequences. He has published a document, and I've published a reply to it. I think this is a discussion that will be ongoing and that we will have for quite a while; we'll see how it turns out.
B
Besides that, it was a bit of a short week for me, because I was giving a talk today at a Go conference called GoLab. I had to record the talk, because they didn't do live talks, and you don't learn how difficult it is to do a full hour of edited talk until you try it for the first time. So I spent a lot of hours on that, mostly over the weekend but also a little bit on Monday, and then obviously the live Q&A today.
B
I tried to switch my ADL repo to that new schema, and I spent most of today on that. I was stuck for a while, and initially I thought it was my fault, because when something doesn't work you kind of assume it's your own fault. But after some digging I realized that the spec is inconsistent: the new HashMap node type is sometimes represented as a map and sometimes as a tuple (a list), so the schema doesn't really work. And then Eric's code generator could give a nice error, but right now it just spits out invalid Go code, which is what was confusing me for a while. We can talk later about how to fix that.
B
Otherwise, I've been adding more tests and fixing TODOs in the ADL repo, though nothing I've pushed so far, and I also filed some of the simpler go-ipld-prime issues, or rather design changes, that we had been discussing with Eric. The more difficult ones I'm leaving in the draft notes for now, because those are not anything we could fix anytime soon. And that's it for this week.
C
Last week I took on board some of that Go maintenance that we talked about, and the main thing I focused on was go-cid, which is now fully updated and happy. I increased the test coverage by 30% while I was at it, because the repo had a coverage rule for things to be merged, and I think it had just slipped below that, so no pull requests were getting merged. Instead of trying to find someone with administrative access, I decided to just increase the test coverage, because that's a good thing anyway.
C
I actually managed to test some areas that just were not exercised; there were whole masses of code that were never touched, so it's good to touch them and validate that they're doing what they're intended for.
C
More tinkering with ipld-prime and, I think, the PB thing: touching lots of things inside go-ipld-prime from the inside and hitting some interesting places, but mostly that's going surprisingly well. I'm not having much of the pain that I thought I might have in trying to make it work; at least my initial test runs are showing that the thing's working as it's supposed to. So, looking good.
C
We did the HAMT stuff, which we need to revisit, probably today; we can probably knock that off at the end of this meeting. And then just administrative stuff, including the peer review stuff, which is just sapping work. Aside from that, mostly little bits and pieces.
D
Oh yeah, a lot of what I dealt with was the review stuff as well; we'll probably have another round of that next week. This week I actually have to write a talk for Filecoin Liftoff, so that's going to be fun. The main kind of code stuff that I did, though, is that I got back to DagDB, that little key-value store, and just started adding coverage and tweaking it. I'm really liking it.
D
I think I'm going to write a little blog post, probably on the IPFS blog, about how you would write a database on top of IPFS, and then just use the DagDB library as an example. That seems like a good idea. That's the main stuff I can think of right now; interested, though, in Rod talking later about the HAMT changes.
E
Next is Peter, right? So last week I assisted the people who launched Filecoin, so we did launch Filecoin. It was very quiet, actually suspiciously quiet. We'll probably see some stuff this week; fingers crossed we don't see anything, but one never knows.
E
One thing that is kind of IPLD-related, and somewhat interesting to this team because a lot of you worked on this: as you might know, we had a few hiccups with how we transferred the data that we assembled for Filecoin Discover from AWS to actual hard drives. There were a number of mistakes made along the way.
E
So, just literally yesterday, we wrapped up a project to ship a very small, very foolproof program that is able to take a random drive and basically say whether it is good or not good, that is, whether it was assembled well enough for the purposes we're trying to use it for when we actually start sending deals and such. The amount of things we had to figure out how to do is because of the hostile place where this thing runs.
E
It
doesn't
really
have
a
good
like
back
end
service
or
anything
like
that.
So
step
number
one
was
basically
just
take
the
entire
dynamo
table
from
piece
like
the
entire
thing
cut
it
down
to
16
bytes
of
cash
for
each
cid
instead
of
the
full
32
and
just
bundle
this
thing
entire
entirely
into
the
binary,
which
is
only
like
140
megabytes.
So
it's
not
that
bad.
So,
basically,
whoever
runs
whoever
runs.
E
So, basically, whoever runs the program has the entire data set of what CommP to expect, what file size to expect, and which datasets they're part of; all of this is part of the thing. Also, we have an actual CommP calculator that is not based on the Rust FFI stuff but is implemented entirely in Go.
E
It is currently only about three times faster than what we're using in Filecoin right now (I haven't spoken to the crypto lab people yet), but because it is implemented in Go, it's actually supposed to be 10-15 times faster once everything works together and we can basically channel the different layers in parallel. So it can utilize a decent CPU with SHA extensions and run at about 500 megabytes to one gigabyte per second in constant memory.
E
So,
right
now
the
the
program
actually
takes
about
50
megabytes
of
memory
to
protect
some
dark
car
file
just
takes
slower
than
I
would.
I
would
like
to
do,
and
once
that
is
ready.
Hopefully,
hopefully
this
will
be
able
to
clean
this
up.
E
We
will
try
to
push
it
into
into
waters
as
well
and
yeah
that
pretty
much
is
what
what
took
like
two
three
days
of
work,
but
it's
finally
out
and
now
we'll
we'll
see
if
multiple
people
running
this
will
be
able
to
find
extra
bucks
or
not
in
the
implementation
of
all
her
partners
too
and
yeah.
I
think
that's
all
I
have
what's
next.
A
Eric?

F
So I don't have a ton to report this week; I had a lot of non-IPLD things to deal with in life this week. But I guess, since you brought it up, Volker: strings.
F
It's way longer than anyone probably wants to read, I'm sorry. It talks about where strings appear. It proposes a definition which, as Volker said already, is this: the definition that I would advocate for is that strings SHOULD (caps lock, RFC 2119 terminology) be UTF-8; however, complete IPLD libraries MUST (caps lock) support the full range of 8-bit byte sequences. The details of that are covered.
F
It goes a lot into the design rationale; I've got six distinct ones separated out there, or is it seven. And then there's a whole chapter full of alternatives, and all of these are labeled as rejected, but I want to restate that this is because it's written in the exploration report style, so it took a position and it's running with it.
F
Some of these things are more thoroughly rejected than others; they have between two and eight reasons why I consider them not such a good idea. But some of them, I don't know, maybe they're salvageable, if somebody wants to take this all in a different direction. What I would suggest is: read it, and if you want to pluck one of these things out, pick one of them and try to run with all of the concrete implications of what we would need to change in order to make it happen.
F
So,
for
example,
one
of
the
things
that's
rejected
is
maps
with
mixed
key
kinds,
because
if
we
supported
maps
with
mixed
key
kinds
entirely,
then
a
whole
bunch
of
the
debate
about
how
strings
need
to
be
specified
would
actually
change.
It
would
loosen
some
constraints.
It
would
add
some
new
ones
and
it's
very
non-local
which
of
those
things
it
affects.
F
So if somebody wants to try to take that definition and run with it, that would be really cool. But good luck, because just saying "no, I don't want it to be rejected" will not get you very far; you actually have to work through the implications of all these things. I've tried, and ended up failing to be able to work through the implications of a lot of these, but maybe some of them are possible; I just don't have enough headspace right now. One of the other alternatives that's in here, that I've marked as rejected but maybe shouldn't be, maybe you could do this, is that we could change the data model to have multiple map kinds. We could have map-with-string-keys as a distinct kind from map-with-byte-keys, and another distinct kind for map-with-int-keys. That would be an option.
D
So there was a crazy discussion that I had with Volker today, actually, and this is a good time to kind of put it on your radar. We were talking about the map key thing within the context of strings, and we were talking about this: if you pull back and you stop thinking about data modeling types for a second, just...
D
What is the thing that we're trying to ensure? The main thing that we care about is round-tripping, right? We just want to make sure that we can always round-trip all of these things between languages. And to round-trip, you don't actually need a consistent decision about this for all maps, or even for all languages. What you need is a very consistent decision for each codec, for each language, and the main constraint that you have is that it cannot be mixed types.
D
So I landed kind of at the same place that you ended up, where you can't mix the key types, but we don't actually need to specify what type the keys are. For instance, what if you decide to encode arbitrary binary into a string key in CBOR, which is what Filecoin is doing now?
D
That
actually
round
trips
in
javascript
is
fine,
because
javascript's
map
only
takes
strings,
but
those
strings
can
include
characters,
so
you
actually
can
get
a
round
trip
out
of
that.
It's
fine
and
then,
if
you
know
that,
that's
what
you
want
to
be
doing,
if
what
you
really
want
are
are
binary
keys
but
you're
typing
them
to
strings
because,
like
whatever
then
in
your
codec,
you
can
say:
oh,
like
I'm
in
I'm
in
a
language
that
supports
binary
keys,
it's
always
a
binary
key
for
maps.
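The round-tripping point above can be made concrete in Go, whose strings, like JavaScript map keys in the relevant sense, are 8-bit clean: arbitrary bytes survive a bytes-to-string-key-to-bytes round trip with no validation step. This is an illustrative sketch, not code from any IPLD library; a language that validates UTF-8 on string construction would reject the same input instead.

```go
package main

import (
	"bytes"
	"fmt"
)

// RoundTrips reports whether a byte sequence survives being used as a
// Go map key (a string) and converted back to bytes. Go performs no
// UTF-8 validation on []byte <-> string conversion, so this is always
// lossless here.
func RoundTrips(b []byte) bool {
	m := map[string][]byte{}
	m[string(b)] = b // bytes -> string key, no validation
	for k := range m {
		if bytes.Equal([]byte(k), b) { // string key -> bytes
			return true
		}
	}
	return false
}

func main() {
	// Even bytes that are not valid UTF-8 round-trip intact.
	fmt.Println(RoundTrips([]byte{0xff, 0x00, 'a'}))
}
```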
D
You can't have mixed types. So that was an interesting thing, and I think a lot of our string constraints tend to point back to "well, we're using them in map keys for this thing". But I think that we should maybe just decouple that; we don't actually need that to be the case.
D
But
it
can
just
be
arbitrary
data
and
it
doesn't
have
to
be
utf-8,
which
is
like
what
we
actually
have
to
support
now
and
then
in
some
languages,
or
even
like
you
know,
in
literally
in
that
library,
in
file
coin
they're,
just
treating
them
as
bites
right,
I
don't
know
what
do
you
think?
How
does
that
fit
with
like
your
picture
of
this
eric.
D
Well, yeah. So the DAG-CBOR spec should be very specific about how you encode this. What I'm saying is that the data model needs to set some constraints for the codec spec to say what to do, and then, when you go to implement that spec in your language, you're going to make particular decisions about it. Here's a very good example that comes back to the string value thing.
D
If
you
look
at
the
problem
we
have
here,
it's
really
that
languages
do
not
agree
about
strings
and
like
we
can't
fix
that
like
like
we're
not
going
to
fix
that,
and
so
the
choice
becomes
like
do
we
do
we
try
to
get
really
heavy-handed
and
force
people
to
use
types
that
aren't
string
types
to
represent
strings
so
that
they
can
support
things
that
other
people
can
do
in
other
languages?
Or
do
we
basically
say?
D
Look
if
you
do
things
that
aren't
utf-8
you're,
going
to
have
a
really
bad
time,
you're
going
to
see
exceptions
in
some
languages,
because
they're
going
to
try
to
convert
this
and,
in
fact
like
no
matter
what
we
say
in
the
data
model
that
is
going
to
be
the
case.
We
actually
have
no
ability
whatsoever
to
solve
this
because,
like
even
if
all
of
our
codecs
that
we
ship
have
this
ffi
type
like
basically
a
binary
value
for
strings.
D
The
first
thing
everybody
does
with
them
is
convert
them
to
a
string
and
if
it
has
invalid
utf-8
characters
in
it,
it's
going
to
throw
an
exception.
So
all
we've
done
is
move
the
error
from
the
codec
to
like
the
thing
that
people
would
do
with
the
data
right
when
they
first
get
it.
It's
not
actually
solving
anybody's
problem.
D
That is 100% the right thing to have in the spec. Where I think we really don't have the ability to dictate things is the "must support" section, and a good example of this is JSON.

D
Yeah, but now you're in ADL land and things change a bit; they're not going to be enforced by a codec in the same way. But to come back to it, and to sort of try to unwind this again: you often pair a SHOULD with a MAY, not a SHOULD with a MUST.
D
So I think the thing that the spec should probably say is: you SHOULD encode strings in UTF-8. You MAY encode arbitrary data into those strings, but you should know that that is going to cause you some problems in languages that do not support that; that is the consequence of doing it. Saying that you MUST support it means that it would force a compromise in some languages that I don't think we're willing to make, which is to say that all of the strings now use non-native types. I mean, even the FFI type in Rust is effectively a non-native type, or you're using some kind of tagged binary type or something; you're going to end up doing something special to them.
D
So
I
think,
like
that's,
that's
probably
where
the
language
needs
to
be,
and
then,
when
we
look
at
specific
codecs
in
specific
languages,
we
can
get
much
stricter
about
this
right,
like
we
can
say
specifically
in
seabor
like
these
must
be
strings
like
they
might
have
arbitrary
data
in
them,
but
they
need
to
be
typed
strings
for
the
math
key
values.
Right
like
we
can.
D
We can even say in some codecs: you know what, you actually can only do UTF-8 for strings in this codec, and here's how you do that cleanup or that translation for this codec, for that reason. And in fact, this sort of happens already in JSON, because JSON is not a binary format, it's a string format. The final outputted thing has to be a valid string, and so, if you try to just shove arbitrary bytes into the string values, it'll actually mess up the encode, so you have to do weird escaping sometimes.
D
We have to warn people about inline CIDs too, and just say: this is a thing that's possible, but you're going to have a bad time sometimes, because there aren't always agreements about what the representation should be. Yeah, I mean, I agree that the spec should be maximally useful, but the actual implementations also need to be maximally useful, and here we're hitting something of a conflict a little bit. But I think the language that you came up with works if we just replace the MUST with MAY for the "support arbitrary stuff" part.
A
Okay. I think we should go to the agenda item first; we could probably discuss this one for another three hours, as we did on Friday. So is it okay if we go on with the agenda item first? Okay, cool. Just for the record, I would also want to discuss this, but I'm also trying to keep still. Okay, so...
C
Look, this is the same topic, but: ChainSafe, the people that are doing the Rust implementation of Filecoin, opened this issue, and they're having trouble with the abuse of map keys. It looks like they're using the standard serde implementation, and it's just balking on the arbitrary bytes converted to strings that they're doing in Go. Peter pointed out that this is because they're serializing the label, which contains arbitrary things; the user can even control what goes in it. It's the YOLO kind of deal.
C
I was talking to Alex yesterday about a whole bunch of stuff to do with specs-actors and how they're planning to do regular upgrades, and he showed me some of the migration stuff, which has been a bit of a black box for me. So there is scope for us to propose some changes in some of these things. It's not that we can rewrite history, because the history is sort of stuck, but we can have points in time where the history gets rewritten from a point onwards, and they're going to be doing that regularly anyway.
C
So there's a couple of things here. One is: we have scope to speak to Filecoin and say, here's a better way to do it. Maybe we say, you know, you should be using UTF-8-clean keys (that's just a hypothetical), but maybe there's a way you should be doing this that is going to be more stable across implementations, because this is kind of rough, for all the reasons we've talked about. What the Filecoin implementation has done in multiple places is said: well, the IPLD people said we can't have anything but string keys, so we'll just put everything in bytes and stringify them, with the Go string-of-bytes conversion.
C
Okay, it's just not very friendly to others, so we could suggest changes here. And that's actually what the Rust implementation people are suggesting, that this be changed. But also, because we were called into this: do we have any feedback on this? Do we have anything to say? Because I think the Rust people are going to have to live with it anyway; this is already on chain.
G
So it's not about keys; it's really about a value. So there's two things, right? There's the HAMTs and the AMTs, the ADLs, which are CBOR-encoded correctly, in that they're a weird map thing that has tuple values, and the values are bytes, and then you're supposed to interpret them as a map that has these bytes as the keys, maybe going through a string conversion, but the CBOR representation is all good.
E
Yeah, that's a thing that started simple, then became JSON, became... but that's not the point. The point is that there are places within the blockchain right now that take arbitrary user input, because the label is just conventional; you can literally write anything in it right now. And there are two places: one is the deal proposal, and one is, sorry, sent transactions, and in sent transactions it is bytes, so you can literally write whatever you want and it round-trips.
A
I think the problem is this: if you changed it to bytes now, and you use the auto-generated codegen serializers, like the default ones they currently have in the Rust implementation, the problem is that currently, in the actual CBOR, this thing has the tag "string" but arbitrary bytes, and if you change it to bytes at the higher level, it becomes the tag "bytes" and arbitrary bytes. But of course, I guess, it's a blockchain.
D
Yeah, so the real problem here is that they're stepping on the spec change that we were literally just talking about: they encoded a thing into a string that is not UTF-8, and now they're in a language that needs valid UTF-8, so they're getting an exception. That's the literal problem. Okay, so what they need to figure out is exactly when they're decoding these particular blocks, and how they're going to somehow tell the decoder to treat that as a byte value instead.
D
Well, but that's unreasonable. I mean, they have a string type, so they're going to use it for strings; that's what everybody does, that's pretty much it. There might be a certain problem in that there should be an easier way for you to say "hey, don't encode this as that" in certain cases. That may...
A
...be a problem. I think it was just basically the quick fix, I guess, just to make it work, because that's the whole point. But still, it's there. I think that's the underlying problem: if the strings were all valid UTF-8, everything would be fine.
C
Although, I mean, maybe our feedback is simply: you should just change the spec to say this is bytes, and then even historical data works. It just means that they have to interpret it; the Go people will say, well, this can be a string in some places. I don't know, maybe there's a reason. Maybe this is something we actually punt over to Hannah, because this is really...
D
Hold on, but I do want to just say that, Eric, you are probably going to end up needing a feature where somebody can apply a schema that says "this is bytes, and I don't care if it says it's a string in the data model; you encode and decode it as bytes", so that they get something consistent.
F
I'm going to assume that it is fine and I do not care; that is an acceptable amount of knowledge for me to have. We seem to all agree, in this scenario, this particular issue 1248, that there is this field that is currently specified as if it is a string, and it contains non-UTF-8 bytes, and we think that that is silly and it should just be a bytes field, because that's...
F
...what it is. Fine, solid, super easy. Additionally, the only thing here worth dwelling on is that we might, as a team, look at this as a potential learning experience for what happens when people do put non-UTF-8 bytes in serialized strings and then other people have to deal with the result. That's the one thing that's interesting.
E
But they wanted to replace it with this Unicode thing. Okay, so it will keep it correct. Okay, that's...
F
Yeah, no, Go has a couple of library features that have that kind of behavior around the Unicode error byte; I'm not a big fan of it. It shows up in JSON libraries and some string pretty-printers and stuff, but it's in relatively high-level libraries that we can just choose not to use, and it's fine.
C
Sorry, I'll go back and just say: look, this is one of those unfortunate things of language implementations, and the Rust people are just going to have to deal with the historical data at least. But our suggestion is that this field should just be changed to bytes in a future upgrade, so that it's encoded as a byte string, and then it's easy for everyone, because, let's just admit it, this thing is arbitrary. Literally arbitrary.
C
It says so in the spec. So make it arbitrary and stop dealing with strings; that's the best we can do. But unfortunately, and I'm sorry, I'm sure the Rust people already know, they're going to have to deal with this data anyway, so we've got to deal with the historical cruft. But let's make it better in the future. And then maybe I'll pull Hannah into that, because I don't know if Hannah wants more things to deal with, but that's kind of been Hannah's basket.
J
Yeah, just a quick question: what's the current thinking about having a B-tree ADL?
D
I actually got approval today to contract somebody to do this; Rod and I are talking to somebody right now, but it wasn't even really ready to talk about yet. It'll probably be a red-black tree.
J
Well, yeah, something that allows you to do non-exact lookups. We have HAMTs for exact-match lookups, but if you want to do a range query or something like that, then you need a B-tree. So I was just wondering, since we don't have one, and I didn't see anything in the specs, whether we had decided that we're never going to do that for some reason; but it just sounds like no one's...
D
Somebody
actually
built
it
yeah.
No,
I
mean
rod
did
a
lot
of
exploration
when
he
first
started
around
this
and-
and
we
have
a
bunch
of
sort
of
different
sort
of
collections
that
we
want
to
look
at.
D
The reason why they haven't been done so far is that so much of the work has been going into Filecoin, and these data structures are not going to be hash-consistent. Unless you can guarantee that you're always going to do a rebalance, they're not going to have the same hash as somebody else with the same data, because insertion order is going to change the structure. So they're not useful for blockchains, and they're not useful for some content-addressing use cases, but they're really important for databases and other stuff like that. I really want it for DagDB, because for secondary indexing I don't need a consistent hash, since you don't actually share that between parties.
D
I only need that for the primary store, so it's fine. But yeah, we're finally now getting around to that. And then, once we have some good patterns and some good stuff there, I want, Volker, right, an R-tree. Volker needs to write an R-tree. Okay, so what's an R-tree? Geospatial indexing.
F
I would also mention somebody else who's not on the call right now but might be good to talk with: Ian from the Peergos project. I know at some point they used other structures like this, I think closer to B+ trees, and then they moved over to CHAMPs.
F
But
you
might
have
some
interesting
observations
about
that.
I
think
they
were
driven
a
lot
by
the
the
content
addressing
convergence
story,
but
they
they
did
ship
with
a
bunch
of
other
tree
structures
for
a
while.
So
there
might
be
learning
experiences
there.
F
He
lurks
in,
I
think
he
works
in
our
irc,
pretty
reliably
too.
If
email
can't
be
found,
I
don't
know
whatever.
C
Some users won't care about convergence; they just want efficiency. Some users won't care about block size, and so, you know, a binary tree might be fine; but some users will care about block size. And then mutations come into the question. So I think it's right that there should be multiple implementations of this.
D
Yeah, I think that we sometimes mistakenly think that IPLD is at the level of the database in the analogous centralized model, but it's actually at the level of the disk. So it's not like we're going to have a database, or one way to do this. There are many, many databases, because they all have very different performance profiles, and the way that they implement those performance differences is primarily how they write those things to disk and how they do transactional architectures. So we're going to end up with many, many different trees that functionally do the same thing but have wildly different performance characteristics.
D
Yeah, I mean, the way you implement linking in a database is that you literally write where the other location on disk is, and we're basically at that linking layer, where you define how those things link together. So it is somewhat analogous.
D
What's interesting is, I think we're underestimating a little bit here how much work we can do above the primitive ADL layer. If you look at something like DagDB, for instance: you already have the hash of the database between different states, and then you can always do a diff of the primary store between those states.
D
I
would
have
just
combined
like
putting
a
hamp
next
to
a
red
black
tree,
just
the
roots,
like
already
you
have
like
really
nice
kind
of
performance
characteristics
between
them
and
you're
and
you're
getting
over
a
lot
of
the
problems
you
would
have
so,
and
I
wonder
like
how
far
you
can
stretch
that
right
like
how?
How
hybrid
can
we
get
with
like,
where
we
shove
some
of
these
different
data
structures
together
to
get
different
characteristics.
H
So I just put a link in. I've been working with CDDL quite a bit, and you might want to take a look at that for writing the entire spec. I've been doing some branching logic for my work, and it did work: testing the validity in CDDL and branching based on the type. This Rust library was just updated yesterday; I think it now supports validating CBOR as well as JSON.
C
Hang on, sorry, there was this HAMT thing; were we going to talk about the HAMT? Okay, so the problem we have right now is that the changes we merged last week have a conflict in the kinded union. I went with a tuple representation for the nodes; I think I've been infected by the Filecoin need to make everything compact and small. There's the link, yes. And so...
C
But then, when you get to this recursive point where you could have buckets or links to child nodes, it's a kinded union, and the bucket itself is a list, while the HashMap node we've listed as a map. So that's wrong. It seems to me the easiest answer is to remove the tuple representation there and then just have them as maps, and the objection to that is simply that you're wasting space by having the map keys in your encoding.
C
It still shortens to "b"; it's a mapping. Okay, so do you think we should use the rename in there, or should we just make the names short?
A
No, it doesn't seem like it. Then goodbye, everyone, and see you all next week.