From YouTube: 🖧 IPLD Every-two-weeks Sync 🙌🏽 2022-04-12
Description
An every two weeks meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
Welcome everyone to this week's IPLD sync meeting. It's April 11th, 2022, and as every two weeks, we go over the stuff that people have worked on and then go over any discussions; there might be your agenda items. So I'm sorry for the delay, I had technical difficulties, but now everything should be fine.

A
I should also perhaps put the meeting notes into the chat room. So I guess, should we start with announcements? Perhaps this time Eric.
B
The project will continue, there will still be lots of other people to triage things, we'll figure it out. And that's all my voice can handle, so talk to you later.
A
Thanks Eric for the update. So now we go to the regular stuff that we usually talk about. In the meeting notes I don't see any updates from anyone, which is interesting, because this has never happened before.
C
I have some updates, or things to share with people, if we have time, but I don't want to eat up all the time.
A
Yeah, so I also don't have an update really from my side. So I guess, I think you should just get started, and then.
C
Okay, cool, all right. Let's start. I guess I have two things. One is, I mentioned this last week: I hacked together some stuff for IPLD plus BitTorrent things, like a codec for bencode and an ADL for files.

C
They exist and work, and I can do things like load pictures over gateways, by making a minor tweak to the gateways to let them ask: is your node bytes? If so, that thing probably will render as a file, and then do that. And so that means any ADL that outputs bytes would then be able to turn into a file.
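The bencode format that codec targets is small enough to sketch: four wire types (integers, byte strings, lists, dictionaries). Here is a minimal decoder in Python for illustration; the codec discussed in the meeting was actually written in Go and WebAssembly, and the function names here are mine.

```python
def bdecode(data: bytes):
    """Decode a single bencoded value from `data`."""
    def parse(i):
        c = data[i:i + 1]
        if c == b"i":                       # integer: i<digits>e
            end = data.index(b"e", i)
            return int(data[i + 1:end]), end + 1
        if c == b"l":                       # list: l<items>e
            i += 1
            items = []
            while data[i:i + 1] != b"e":
                v, i = parse(i)
                items.append(v)
            return items, i + 1
        if c == b"d":                       # dict: d<key value ...>e
            i += 1
            d = {}
            while data[i:i + 1] != b"e":
                k, i = parse(i)
                v, i = parse(i)
                d[k] = v
            return d, i + 1
        # byte string: <len>:<bytes>; NOT guaranteed to be UTF-8 text
        colon = data.index(b":", i)
        n = int(data[i:colon])
        start = colon + 1
        return data[start:start + n], start + n

    value, consumed = parse(0)
    assert consumed == len(data), "trailing bytes after value"
    return value
```

Note that keys and string values come back as raw bytes, which matters later in the discussion about strings that are not UTF-8.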
C
So
I'd
like
to
see
some
of
that
work
get
pushed
through
and
that's
kind
of
like
a
pretty
cool
step
in
the
direction
of
like
unix
fs
equals
less
special,
which
is
awesome
because,
like
a
lot
of
people
have
a
lot
of
problems
with
the
xfs
all
right.
The
next
thing
is
like,
oh
so
so
this
bittorrent
stuff
and
whatever
this
seems
this
seems
cool
and
all
but
like
you
have
to
write
the
codec
and
you
have
to
do
it
in
all
the
languages
and
like
dean.
C
Are you going to feel sufficiently motivated to write this in JavaScript? And the answer is no. But I hear that people are really into writing ones that run everywhere, which may or may not be the cure to everything, or a way to make everything slightly less performant all the time, and that is WebAssembly.
C
And so I have written a codec and an ADL in WebAssembly that also get called from Go, or can get called from Go. And so you can then use those things, and I can do things like this.
C
And this is a selector. This selector is, you know what, this is actually much easier: because go-ipfs has these things, I can do it like this.
C
Represented
as
dag
json,
it
is
the
please
interpret
me
as
a
bittorrent
file
and
then
just
like
grab
the
thing
match
on
it
and
just
grab
the
grab
that
node.
For
me,
please-
and
so
this
is
saying,
take
this
be
encoded
thing
please
be
interpret
me
as
a
bittorrent
file
and
then
render
me
and
it's
like
oh
hey,
you're,
bytes
and
then
renders
it
at
fights.
How
does
it
know?
It's
a
jpeg
same
way.
Everything
else
knows:
everything's
a
jpeg.
C
They
sniff
the
first
few
bytes
and,
like
maybe
it
looks
like
a
jpeg,
which
is
how
the
gateways
work
kind
of
already,
and
if
you
wanted
to
make
that
more
annoying,
you
could
do
like
or
handle
all
the
edge
cases.
You
could
do
things
like
put
in
the
download
file
name
to
end
in
jpeg,
and
then
it
would
do
the
sniffing
like
that
too.
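The sniffing described here can be sketched with a magic-number table. Real gateways use a fuller MIME-sniffing algorithm (Go's standard library ships one as `http.DetectContentType`), so this Python sketch with a hand-picked prefix table is illustrative only:

```python
# Magic-number prefixes of a few common image formats, the kind of table
# content sniffers consult before falling back to a generic type.
MAGIC = [
    (b"\xff\xd8\xff", "image/jpeg"),
    (b"\x89PNG\r\n\x1a\n", "image/png"),
    (b"GIF87a", "image/gif"),
    (b"GIF89a", "image/gif"),
]

def sniff(head: bytes) -> str:
    """Guess a MIME type from the first few bytes of a payload."""
    for prefix, mime in MAGIC:
        if head.startswith(prefix):
            return mime
    return "application/octet-stream"
```

Any ADL that yields bytes can be fed through a sniffer like this, which is why "is your node bytes" is enough signal for a gateway to render a file.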
C
Now again, the codec, for those not fluent in multiformats, is the thing that says bencode. This one is written in Wasm; there's a Go one too, but this is WebAssembly, and the ADL thing in here is also WebAssembly.

C
Okay, yeah, so that's not quite right, but basically the way this was done was: first, I made a new codec.
C
Basically
eric
has
some
some
great
docs
on
this,
but
we
have
no
codecs
that
are
complete
and
fitted
to
the
ipld
data
model,
where
complete
means
it
represents
all
the
things
the
data
model
represents
and
fitted
means.
There
is
only
one
way
to
represent.
Take
your
codec
thing
and
like
put
it
in
a
data
model,
and
vice
versa.
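The "fitted" property can be illustrated with any canonical encoder: if the encoder has no freedom left (sorted map keys, fixed separators), two logically equal values always produce the same bytes, so the value-to-bytes mapping is one-to-one. A Python sketch using JSON as a stand-in; WAC itself is a binary format, so this only demonstrates the property, not the encoding:

```python
import json

def canonical_encode(value) -> bytes:
    # "Fitted": every data-model value has exactly one encoding.
    # Sorting keys and pinning separators removes the usual sources
    # of encoder freedom, so equal values encode identically.
    return json.dumps(value, sort_keys=True, separators=(",", ":")).encode()
```

With a fitted and complete codec, decode and encode are true inverses, which is what later lets WAC bytes cross the Wasm boundary and be turned back into data-model nodes losslessly.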
C
We
don't
have
one
of
these,
like
seaboard,
doesn't
do
it
that
jason
doesn't
do
it
dog
tv
definitely
doesn't
do
it.
So
I
made
a
new
one,
mostly
junked,
from
michael's,
simple,
dag,
but
made
slightly
different.
C
This
is
the
motivation.
Is
it
perfect?
I
don't
know
called
it
whack.
I
need
a
better
name.
It's
the
web
assembly
codec.
The
idea
is,
I
don't
want
to
keep
crossing
the
boundary
between
webassembly
and
the
host.
C
Maybe
when
the
host
is
rust,
it's
cheap
when
the
host
is
go,
my
understanding
is
it's
not
cheap?
A
lot
of
these
things
come
with
caveats.
My
first
line
of
rust
code
was
last
sunday,
so
who
knows
yeah
we
can
look
at.
I
can
link
to
the
spec,
and
we
can
talk
about
that.
If
people
are
interested,
there's
things
to
still
work
on,
mostly
there
may
be
better
ways
to
optimize
encoding
and
decoding.
C
I
haven't
specified
floats
mostly
because
the
thing
I
stole
from
simple,
dag
had
not
really
described
floats
and
I
think
in
general,
the
ipld
model
is
a
little
sketchy
on
floats,
so
we
might
want
to
like
get
that
described
so
that
it
can
then
be
accurately
represented
in
the
codec,
and
then
people
seem
generally
grumpy
about
parts
of
the
data
model
and
because
I
am
faithfully
executing
the
data
model.
That
means
there
is
some
grumpiness
there.
C
I
think
we
can
use
this
as
like
an
opportunity
to
like
flesh
that
out
and
figure
out
what
our
options
are,
if
any
to
to
progress,
but
like
because
there's
like
a
a
thing
to
maybe
represent
or
work
with,
it
may
make
it
like
a
little
more
concrete
for
us
for
those
less
familiar.
Some
of
the
common
points
of
grumpiness,
aside
from
what
is
afloat,
include
what
is
a
string
and
what
is
a
map
key
yeah?
C
So
there
is
what
there
is
a
wac
implementation
in
go
and
in
ross
I
haven't
written
the
encode
format
just
decode,
because
I
didn't
need
in
code
in
order
to
show
the
koala
yeah,
I
sort
of
yellowed
the
calling
conventions
again,
first
line
of
rust
code
like
last
sunday,
but
basically
you
allocate
for
for
a
codex
decode.
You
just
allocate
some
space,
you
load
the
bytes
into
the
space,
you
call
it
and
then
you
return.
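That allocate, copy in, call, copy out convention can be sketched by simulating Wasm linear memory with a plain byte array. Everything here is illustrative: the function names, the bump allocator, and the (pointer, length) return shape are my assumptions, not the actual WAC calling convention, which the speaker says is still TBD.

```python
MEMORY = bytearray(64 * 1024)   # stand-in for the module's linear memory
_next_free = 0

def wasm_alloc(size: int) -> int:
    """Bump-allocate `size` bytes inside linear memory; returns a pointer."""
    global _next_free
    ptr = _next_free
    _next_free += size
    return ptr

def toy_decode(ptr: int, length: int):
    """Stand-in for the guest-side decode export: reads its input from
    linear memory, writes its result back in, returns (ptr, len)."""
    out = bytes(MEMORY[ptr:ptr + length]).upper()
    out_ptr = wasm_alloc(len(out))
    MEMORY[out_ptr:out_ptr + len(out)] = out
    return out_ptr, len(out)

def host_call_decode(codec_decode, encoded: bytes) -> bytes:
    # 1. allocate space inside the guest and copy the input in
    ptr = wasm_alloc(len(encoded))
    MEMORY[ptr:ptr + len(encoded)] = encoded
    # 2. "call" the guest; it returns a pointer/length into its own memory
    out_ptr, out_len = codec_decode(ptr, len(encoded))
    # 3. copy the result (WAC-encoded, in the real system) back out
    return bytes(MEMORY[out_ptr:out_ptr + out_len])
```

The point of the sketch is where the copies happen: one copy in, one copy out, and nothing else crosses the boundary during the call.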
C
The
data
that's
returned
from
the
webassembly
call
is
just
a
a
wac
encoded
thing,
which
means
I
can
then,
because
it
is
fitted
and
complete.
I
can
then
turn
it
back
into
the
data
models.
That's
the
point
of
this,
so
I
don't
have
to
keep
crossing
boundaries
yeah.
There
are
some
tbd
things
like
better
calling
conventions
figuring
out
how
to
return
errors
and
how
people
do
this
in
webassembly.
C
Encode,
the
node
as
whatever
then,
whatever
the
node
is,
you
know,
string
bytes,
something
a
map
encode
that
thing
in
as
whack
pass
it
into
webassembly
with
the
create
adl
thing
and
then
just
call
functions
on
that
pointer,
for
instance,
read
into
this
buffer
for
me
or
seek
you
know,
seek
into
into
the
ada
into
the
bytes
in
the
adl
somewhere.
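A bytes-node ADL of this shape is essentially a read/seek interface presented over many underlying blocks. A Python sketch of that interface; the class and method names are assumptions for illustration, not the actual ADL API:

```python
class ChunkedBytes:
    """Sketch of an ADL "bytes node": several small chunks (blocks)
    presented as one seekable byte stream, the shape a gateway needs
    in order to serve the whole thing as a file."""

    def __init__(self, chunks):
        self.chunks = chunks
        self.pos = 0
        self.size = sum(len(c) for c in chunks)

    def seek(self, offset: int):
        self.pos = offset

    def read(self, n: int) -> bytes:
        out = b""
        offset = 0                       # absolute start of current chunk
        for chunk in self.chunks:
            lo = max(self.pos + len(out), offset)
            hi = min(self.pos + n, offset + len(chunk))
            if hi > lo:                  # this chunk overlaps the request
                out += chunk[lo - offset:hi - offset]
            offset += len(chunk)
        self.pos += len(out)
        return out
```

As noted just below, the same pattern would work for lists and maps (index or key lookup instead of read/seek); only the bytes node was built for the demo.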
C
I've
only
done
this
for
the
bytes
node,
but
you
know
we
could
do
the
same
thing
for
for
lists
and
maps
and
things
like
that,
and
that
is
all
there's
a
lot
more.
I
could
dive
into,
but
yeah
I
don't
want
to
eat
everyone's
time.
So
I
will
let
questions
come
in
and
then
use
that
to
figure
out
how
much
time
to
allocate.
C
Yeah, I think the idea here, what I'm trying to get to: there's one other thing that I have in my demo for the BitTorrent codec, which is sort of the "what's next", or what it takes to make this thing great. And I think one part of this is making the codecs and ADLs more portable, which is the WebAssembly thing, but the other is being able to move around arbitrarily sized blocks of data.
C
The
problem
is
like
you
have
to
cut
the
line
somewhere
for
what
is
the
block
limit
in
your
program
in
order
to
make
sure
your
memory
doesn't
blow
up
and
your
disk
space
doesn't
blow
up
and
things
like
that,
as
you
wait
for
the
data
to
be
downloaded,
so
you
can
verify
it
and
whatever
your
application
chooses,
the
magic
number
is
going
to
be
wrong
for
someone's
definition
of
wrong
right,
you're
going
to
say
two
megs
and
then
the
the
iot
people
are
going
to
be
like.
What
are
you
doing?
C
You're
killing
my
ram
and
you're
going
to
choose
two
megs
and
the
people
are
gonna,
be
like
what
are
you
doing?
It
should
have
been
80
megs,
it's
so
inefficient
and
there's
really
no
winning.
So
in
order
to
be
able
to
be
backwards,
compatible
and
really
have
like
use
ipld.
As
like
the
the
connector
of
all
the
dags,
you
need
to
be
able
to
be
backwards
compatible
with
those
formats
which
means
being
able
to
figure
out
how
to
handle
the
large
blocks.
B
The use of selectors is amazing, although I also hope that we see it work with other signaling mechanisms in the future; I know that's for later. You've got an interface for big bytes where you're doing an ad hoc seek API, right? So I'm guessing:
B
This
probably
isn't
being
zero
copy,
which
okay,
I
guess,
that's
probably
a
really
ridiculously
high
bar
to
shoot
for
crossing
languages.
So
sometimes
you
could
have
a
read
request
and
it
will
like
cross
two
internal
blocks,
because
you
have
no
way
to
align
fair
enough.
B
C
I just did that one because I didn't have time, slash I didn't think, to do another thing, but yeah, I definitely think we could do that. There are other things that are worth considering, and people who know things about WebAssembly and/or Rust could probably help here, in the sense that, you know, maybe, for example, instead of calling into the host as I do the reads, as you move along through:
C
Maybe you don't want to have to get the full block from the block store every time. You probably want to cache some of the blocks in the Wasm code for a little while, maybe cache a few blocks around, instead of needing to cross the boundary and go into the block store in the host to get the bytes and reconvert through WAC and all of that.
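The caching idea can be sketched as a small LRU sitting in front of the expensive host call; the capacity, eviction policy, and names here are arbitrary choices for illustration:

```python
from collections import OrderedDict

class BlockCache:
    """Tiny LRU over block loads, so guest code doesn't re-fetch (and
    re-copy across the Wasm boundary) the same block on every read."""

    def __init__(self, load_block, capacity: int = 4):
        self.load_block = load_block    # the expensive host call
        self.capacity = capacity
        self.cache = OrderedDict()
        self.misses = 0

    def get(self, cid):
        if cid in self.cache:
            self.cache.move_to_end(cid)  # mark as recently used
            return self.cache[cid]
        self.misses += 1
        block = self.load_block(cid)     # boundary crossing happens here
        self.cache[cid] = block
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return block
```

The interesting metric is `misses`: each miss stands for one host round trip, which is exactly the cost being minimized.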
C
That
is
the
whack
codec
and
maybe
that's
a
good
idea.
I
don't
know
this
certainly
seemed
easier
for
someone
who
is
unfamiliar,
so
I
did
that
one.
D
This is a comment about zero copy for WebAssembly: at least coming from the host into WebAssembly, there's none. There is a spec, I think, getting worked on, but it's not likely gonna land for WebAssembly soon, so you do have to copy from the host to WebAssembly. Internally, WebAssembly can do zero-copy stuff, but yeah.
C
Yeah,
I
think
so
a
lot
of
people,
and
so
I
think
this
is
pretty
cool.
A
lot
of
people
get
very
excited
and
they're,
like
oh
cool,
you
have
webassembly
codex,
have
you
considered
loading
everything
by
cids
everywhere
and
like
having
cids
as
codecs
and
adls,
and
attaching
them
to
all
your
data?
C
We
can
start
to
write
more
of
these
things
and
we
can
reuse
them
in
more
environments,
doesn't
necessarily
mean
we
jump
like
right
away
to
network
load
everything
everywhere,
because
then
we
sort
of
locked.
We,
if
we're
not
careful,
we
may
have
locked
ourselves
into
whatever
terrible
ipld
calling
convention
that
I
made
up
without
understanding
rust.
C
So,
like,
I
think,
that's
kind
of
the
order
of
operations
we
get
to
like
explore
a
little
bit
here.
How
to
do
this
better
and
then,
as
we
learn
more
about
that,
we
can
start
to
be
like
oh
okay,
this
is
cool.
Now
we
get
to
start
answering
some
of
the
hard
questions
like.
C
How
do
I
move
these
things
around?
How
much
gas
do
they
cost?
I
tried
turning
on
gas
cost
just
to
see
if
it
would
work-
and
I
was
like
the
mechanism
I
used
for
determining
gas
costs-
was
I
kept
increasing
the
number
until
it
stopped
failing,
but,
like
that's
not
really
meaningful
right,
I
wrote
it
for
one
code.
I
wrote
it
for
one
codec
and
one
adl.
That's
not.
D
Yeah
parity
tech
has
a
lot
of
work
into
gas
costs
robust,
so
it
might
be
useful
to
talk
to
about
that.
C
Yeah
that
makes
sense.
I
also
suspect
that
steven
and
the
folks,
working
on
the
fvm,
probably
have
some
notion
of
gas
costs
there
as
well
and
are
also
thinking
about
some
of
the
ipld
stuff,
including
like
what
does
it
mean
to
call
into
the
host
for
how
they
do
blocks
they're?
Doing
it
a
little
differently.
I
think
they're
they're
keeping
all
of
the
data
in
the
host.
B
My best understanding is that, especially as long as we live with algorithms where the performance is state dependent, which describes reality in almost all cases, right, I think the best thing people have figured out, for blockchains in general, where they have a strong incentive to figure this out, is:
B
Pre-Perform
the
computation
and
measure
how
expensive
it
was
and
then
ask
the
network
to
deal
with
that
cost
and
then,
if
you
have
operations
that
need
to
be
performed
on
the
networked
computation
on
the
the
publicly
replayed
computation
that
turned
out
to
be
data
dependent
on
some
state
that
changed
by
the
time
the
network
gets
long
to
replay
the
computation
and
that
changed
the
cost.
C
But the only way to do the verifiable thing is to understand the IPLD pieces, right? So if I wanted that to work with, you know, the BitTorrent ADL, then it means that the JavaScript code needs to understand that too. And I don't want to be the only one doing this, right? I suspect there's lots of file formats out there. Git has a file format, BitTorrent has a file format, most of the data-storage blockchainy things have data formats. If you're Storj or Sia or Arweave, there's some blockchain transaction somewhere.
C
That
has
a
link
to
all
the
pieces
that
you're
supposed
to
be
verifying
right.
Those
are
all
data
formats
that
you
could
just
represent
as
files
and
you
could
be
like
yep
they're,
all
just
ipld
things.
They
turn
into
files
to
adls.
E
So, Adin, good to see you. One question I had: can you chain them? Like, for the koala, basically, let's say you want to also encrypt it or decrypt it. You're loading it from the Wasm, it's supposed to be a JPEG, you're selecting it as a JPEG, and, by the way, once it's decrypted, it should be displayed as a JPEG.
C
Yes,
you
can
do
chaining
in
the
sense
that,
let's,
if
we
pretend
for
the
sake
of
argument
that
I
did
what
eric
said-
and
I
had
also
built
one
of
these
for
large
maps
instead
of
just
large
bytes-
I
could
traverse
through
like
nested
layers
of
large
maps,
or
I
could
traverse
through
a
nested
layer.
Large
maps
through
into
large
bytes
encryption
is
trickier,
because
I
have
to
find
a
way
to
like
thread
the
keys
in
the
right
place
at
the
moment.
C
As
you
saw
like
the
way
that
I'm
I'm
putting
the
keys,
I'm
putting
data
in
is
through
like
the
way
I'm
signaling
is
using
the
selectors.
We
don't
have
a
signaling.
This
is,
I
think,
where
I
was
getting
with
like.
We
should
have
more
signaling
mechanisms.
If
there's
a
signaling
mechanism
I
could
throw
in
there.
C
That
was
like,
oh
by
the
way,
use
this
decryption
key.
You
know
aes
decrypt.
This
thing
I
don't
see
why
that
wouldn't
that
would
work.
E
I
think
at
least
from
juan's
early
libraries
he
basically
used
it
as
a
an
anchor
id
and
and
you
grab
it
from
the
url.
So
I
can
see
the
problem
that
your
the
the
scheme
of
the
adl
actually
like
you
pass
through.
You
can't
necessarily
put
that
in
that
that
goes
past
those
bytes,
but
maybe
you
pull
it
from
either
the
javascript
pulls
it
from
the
anchor
id
when
you
display
it.
So.
C
If
I
could
go
through
the
final
sets
of
things
to
the
end
right,
if
the
only
thing
I
need
inc
decrypting
is
like
the
file
at
the
end,
I
could
definitely
see
building
a
program
where,
like
you,
you
pass
in
the
cid
and
the
selector,
and
it
gets
you
the
large
bytes
and
then
you
put
like
you
know,
sort
of
hashtag
decryption
key,
and
then
it
decrypts
that.
But
if
I
need
to
like
decrypt
the
links
along
the
way
or
something
then
oh.
C
It affects what user stories you're trying to do. Like, do you need to do the decryption stuff along the way? For example, if I'm trying to follow a graph that has encrypted links in the middle, I either have to do it bit by bit, sort of bitswap style, where I walk the graph locally; or I have to ask you to over-send me bytes, because you don't really know which ones to send me yet, so that I can then get them back; or you need to have a particular type of encryption scheme where I'm willing to let you know the full structure of the graph, but not what any of the bytes are in the graph. And as with almost anything that's access related, whether it's encryption or ACLs, everyone has their own opinion on what is the best way to do it: you're sending too much data, or you're protecting it too much and it's inefficient, and whatever.
E
There
was
an
interesting
apple
and
google
were
working
on
and
I'll
put
into
the
chat.
This
proposal
through
ietf
about
a
mailbox
for
sharing,
I
guess
they're,
calling
it
secure,
credential
transfer
and
so
we're
like
very
similar
to
the
hashtag
id
or
the
anchor
id,
is
basically
being
used.
As
for
the
decryption
mechanism,
and
so
I
think
this
is
just
where
my
head
was
at
today-
cool.
B
So
when
you
have
adl
code,
of
course
it
can
do
whatever
the
heck
it
wants
internally,
it
could
call
on
other
things
that
do
other
layers
of
processing
and
then
you
would
still
have
all
the
like
key
management
questions
for
encryption
as
usual.
But
you
know
that's
always
a
thing
if
you
want
to
have
separate
adls
in
different
places
as
you're
doing
a
walk,
that's
a
more
like!
B
B
So
that's
what
happens
when
you
have
like
a
map
dealing
with
a
directory
and
then
another
map
dealing
with
another
directory
and
then
something
for
bringing
bytes
together
for
a
file.
So
that's
much
easier
for
trying
to
stack
things
on
each
other
doing
a
series
of
transformations
in
a
way
that
isn't
hard
coded
within
the
ado.
B
I
started
thinking
about
this
recently
and
there's
an
exploration
report
with
the
title
of
lenses
in
the
name
that
came
out
recently,
but
this
is
not
at
all
finished
standard
and
has
no
prototypes
yet.
So
this
is
just
like
something
to
think
about.
We
haven't
actually
had
a
sufficiently
strong
call
for
that
kind
of
composition,
yet
that
it's
been
forced
to
exist,
but
maybe
it
will
happen
soon.
That
would
be
cool.
C
One
thing
that's
sort
of
interesting,
as
you
mentioned,
that
is
like
the
I
couldn't
think
of
if
it
was
really
necessary,
because
I
can't
think
of
any
examples
for
this,
but
the
way
in
which
the
like
the
wasm
adls
are
specified.
You
don't
get
to
like
the
only
callback
I
passed
then
was
the
give
me
a
block
callback,
which
means
all
of
the
other
adls
from
the
rest
of
the
universe
are
like
foreign
to
you
and
all
of
the
other
codecs
or
any
other.
Like
weird
logic
is
foreign
to
you.
C
Your
adl,
just
like
is
a
thing
that
takes
in
some
data
model
stuff
and
then
spits
out
some
other
data
model
stuff
a
little
fancier
in
that
like
it
exposes
parts
of.
I
guess
like
the
node
interface,
because
if
it
didn't
do
that,
it
might
be
really
inefficient
right,
like
for
large
bytes
or
large
maps
or
in
theory
large
integers.
C
It
could
be
really
inefficient
to
do
that,
but
in
general,
like
the
codex
and
other
stuff,
I've
like
avoided
implementing
the
node
interface
in
a
lot
of
places,
because
I'm
trying
to
minimize
the
number
of
like
cross
boundary
calls.
D
Sorry, as for calling WebAssembly and having it run other WebAssembly, to support someone's lenses or anything like that: there is a working spec to make that practical. I don't have the link off the top of my head, but yeah, it's a working spec to make that feasible. Like, the closest support you could have to do that:
D
That is like doing multi-threading with web workers in the browser, where you just have a shared array, so things get really messy really quick. Rust makes it easier to deal with that stuff, but yeah, that's currently the only way I can think of that's feasibly possible to get multiple WebAssembly modules to work together to decode a single piece.
C
Yeah
I
mean
the
good
news
is
that
right,
right
once
run
everywhere,
has
been
like
the
semi
holy
grail
of
people
writing
software
for
like
50
years
or
something
so
we
can
rely
on
other
people
doing
stuff.
We
can
kind
of
like
piggyback
on
whatever
is
working
the
most
at
the
moment.
F
And thanks. It's a very small update: I'm making slow but steady progress on helping Eric with the incremental update API. We're going back and forth and have some ideas there, so I captured some notes on the GitHub issue, and I'm gonna continue to work on that.
D
So, if you want dynamically allocated things, then usually you'd have a feature flag; actually it's the other way around: usually, for the no_std stuff, you have stack-allocated types, and that's usually behind a feature. But there are some questions as to interoperability between them, like: do you expose the same type, or do you have something different? What's the interface like? And how important is getting everything on the stack?
A
That's
that's
a
good
question,
so
I
can
answer
from
like
what
what
happened,
kind
of
what
happened
in
the
past
so
yeah.
It
was
always
like.
A
It
was
always
a
problem
to
find
people
to
work
on
rust
on
the
multi-format
stuff
and
ipod
stuff,
and
I
basically
then
looked
whoever
worked
on
those
things
to
collaborate
with
them
and
see
where
we're
going
and
so
the
most
recent
installment
of
the
what
you're,
mostly
referring
to
the
the
multi-format
stuff
like
cid
and
multihash
and
multibase,
and
those
people
really
wanted
to
get
dressed
like
on.
Like
smaller
systems.
A
Also, if people go crazy about performance, they want to do stack allocations, and I think this is where the current installment of the libraries comes from. I'm also not sure how much it matters; for example, for CIDs, if they are heap allocated, how much it would actually matter. It depends on your application.
A
So
what
certainly
is
the
problem?
Is
we
need
to
stick
a
location?
Is
that
at
least
in
rust
or
in
the
how
we
implemented
it,
things
get
quite
complicated,
I
would
say,
like
it's,
not
super
intuitive
to
use
anymore,
and
then
the
risk
is
quite
big
like
if
you
think
about
last
martial,
for
example,
it's
I
think
it's
way
more
complex
and
large
than
it
should
be,
and
it's
mostly
due
to
this
whole
stake
education
story.
A
So
I
think
it
would
be
great
to
also
have
some
allocated
alternative,
but
yeah,
as
you
say,
it's
like
the
question
is:
is
it
a
separate
library?
Is
it
the
same
library?
Is
it
with
features?
Is
it
totally
separate
is
its
different
grades?
A
I
don't
know
like
we
have
to
figure
out,
but
then
again
to
conclude
kind
of
what
would
be
that.
I
think
it
should
be
bound
to
some
application
or
like
someone
using
it,
because
I
find
it
especially
harder
us
to
develop
something
out
of
the
blue
and
like
from
my
experience,
you
don't
get
the
apis
right
and
if
you
have
really
a
use
case
and
some
user
of
it,
it
might
be
easier.
A
Yeah, so currently, as I speak about applications: one big driver for the Rust IPLD and Rust multiformats ecosystem is currently the Filecoin virtual machine development, where I'm also involved in work on IPLD. And there, of course, they are quite happy about doing stack allocations and having fixed-size things and buffers and all those things, especially if you think about CIDs and multicodecs and multihashes. Normally they are fixed sized.
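The fixed-size point is concrete for multihashes: a sha2-256 multihash is always varint code 0x12, varint digest length 0x20, then the 32 digest bytes, so the whole thing is exactly 34 bytes, which is what makes a stack-allocated buffer attractive. A Python sketch of that layout:

```python
import hashlib

def sha2_256_multihash(data: bytes) -> bytes:
    """Build a sha2-256 multihash: <varint code><varint length><digest>.
    0x12 is the sha2-256 multicodec code, 0x20 (32) the digest length;
    both fit in a single varint byte, so the result is a fixed 34 bytes."""
    digest = hashlib.sha256(data).digest()
    return bytes([0x12, 0x20]) + digest
```

Because the size is known at compile time for a given hash function, languages like Rust can hold the whole multihash in a fixed-size array with no heap allocation at all.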
A
So
you
like
yeah,
it's
easy
to
work
with,
so
I
think
for
them
it's
important,
or
at
least
like
it's
a
nice
to
have,
and
especially
at
least
I
got
the
impression
that
also
the
people
that
want
to
compile
rust
into
web
assembly
are
also
quite
key
into
not
using
the
standard
library
which
is
kind
of
like
separate
issue
but
kind
of
connected
because
like
when
you
call
it
rust.
The
no
standard,
no
allocation
stuff
is
kind
of
related,
and
I
think
those
people
want
to
they
want
to
use.
D
Yeah, from a WebAssembly standpoint, avoiding allocations is pretty important: you have a fixed page size, and memory you allocate can't get returned back. There is a good allocator implementation, but it's not great.
C
I wonder, so, one thing I noticed as I was poking around with the Rust IPLD stuff, and this is probably something we can discuss more offline or another time, but it doesn't quite... and this is where I was sort of getting at with: what is the IPLD data model? It's like:
C
It has some stuff that's missing, like CIDs, and sort of how we deal with that. And then one that was sort of obvious was the map strings thing, which got me into some trouble, because bencode only has strings. There are no bytes; there's just the strings thing, and the strings are sometimes UTF-8 and sometimes they're not UTF-8.
C
In the case of bittorrent files, the pieces field is just a concatenated list of SHA-1 hashes, which is not UTF-8, and then you have koala.jpg in the name field, which is UTF-8. And so v1 of me trying to make this work was: I just wrote a transcoder, a serde transcoder that moved the data across, you know.
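The pieces field is a good illustration of why bencode "strings" cannot all be treated as text: it is a flat concatenation of 20-byte SHA-1 digests sitting next to genuinely textual fields like the file name. A Python sketch:

```python
import hashlib

def split_pieces(pieces: bytes):
    """The bittorrent `pieces` value is a flat concatenation of 20-byte
    SHA-1 digests, a bencode "string" that is binary, not UTF-8 text."""
    assert len(pieces) % 20 == 0, "pieces must be a multiple of 20 bytes"
    return [pieces[i:i + 20] for i in range(0, len(pieces), 20)]

def is_utf8(b: bytes) -> bool:
    """True if the bytes happen to be valid UTF-8 (like a name field)."""
    try:
        b.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False
```

A data model whose strings are required to be valid UTF-8 has no lossless place to put a value like pieces, which is the mismatch being described.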
C
From whatever the input codec was to the output codec. Which was nice, because there was bencode stuff already sitting around, but it didn't quite work, because the data models don't match up. Which is why I sort of went this other way, where I was like: okay, I'm not going to try and, like, build Rust.
C
It
may
not
be
like
the
optimal
format
for
people
who
are
like
actually
coding
in
rust
to
actually
move
stuff
around,
like
maybe
the
way
things
work
now
is
the
way
to
do
it,
but
it
was
one
of
those
like
weird
things
where
it
seems
like,
for
instance,
like
bite
map
keys,
are
part
of
the
are
supported
under
the
data
model,
but
I
don't
think
they're
supported
by
any
of
our
codecs.
C
But
that's
not
so
that
just
you
know
if
the
ipld
data
model
allows
for
bytes,
but
like
dag
c
board,
doesn't
it
just
means
that,
like
dag
cbor,
isn't
a
complete?
You
know
a
complete
in
that
definition.
Ipld
data
model
represent
representation
as
opposed
to
the
ipl
data
model.
Representation
should
be
different,
although
I
am
slightly
curious
as
to
like
I've
heard
rumors
of
the
various
horrendous
things.
People
are
doing
that
are
non-spec
compliant,
but
I
don't
know
what
they
are.
A
Yeah, so there is even an issue open somewhere about serde and IPLD, from two years ago or something, where I said: well, IPLD and serde just don't work out, because the data model differences are just... it's just a different data model. And I fought that for a long time.
A
We
don't
use
30
and
if
you
look
at
the
current
like
what
is
on
github
ipod
d,
slash
slip,
ipod,
it's
not
using
certain
and
but
so
the
new
versions
that
I'm
currently
working
on
also
for
the
ritual
machine
is
using
certain.
A
The
reason
is
that
we
also
inherited
a
lot
of
code
from
chain
saves
forest,
which
is
a
fico
implementation,
and
they
used
certainly
heavily
also
in
a
pretty
bad
way,
and
basically,
they
also
had
hey
to
gather
it
had
it
hacked
together,
working
with
the
databall
and
so
on,
and
now
I've
worked
with
steven
on
those
things
and
I'm,
I
think,
I'm
now,
okay
using
30
and
we
found
a
hack
which
is
acceptable
but
still
yeah.
You
have
this
mismatch
between
the
the
server
data
model
and
the
ipd
data
model.
A
It's
not
ideal
and
what
you
certainly
end
up
with
is
which
I
think
is
a
problem.
So
my
vision
with
certain
was
that
well,
if
you
certainly,
you
can
use
them,
as
I
did
mention
those
transcoders
or
other
formats,
that
he
did,
he
used
an
existing
one
and
those
things
just
won't
work
automatically.
A
So
it
means
so
I
I
describe
it
as
we
use
the
certain
infrastructure,
but
not
really
the
codex.
So
we
use
the
whole
mechanism
and
how
sergio
works
internally.
A
But
if
you,
for
example,
want
to
create
your
own
codec,
let's
let's
say
even
you
use
a
like
a
popular
one,
and
you
want
to
say
that
you
want
to
represent
your
ipod
data
model
as
tomil,
for
example,
and
you
can't
just
plug
it
in
and
it
would
work,
but
you
really
need
to
make
changes
that
it,
for
example,
recognizes
the
cids,
and
this
is
really
a
custom
code
and
you
can't
just
plug
it
in
so
you
really
need
to
fork
the
library
so
that's
unfortunate,
but
on
the
other
side,
what
we
found
out
working
as
the
ipod
team
worked
on
possible
codex
for
javascript
and
go.
A
We
found
out
anyway
that
in
the
end,
we
ended
up
coding,
our
codecs
anyway
for
dax
zebo
and
for
the
xjson
and
so
on,
and
that
protocol
buffers
so
as
we
do
it
anyway,
it's
not
such
a
big
problem,
I
would
say,
but
that's
just
for
those
people
following
us
and
so
on.
It's
good
to
know
that
you
really,
you
would
need
to
re-implement
your
code
yourself,
although
it's
really
more
like
a
copy
and
paste
job.
A
So
you
basically
paste
in
this
the
cid
deserializer,
which
I
have
coded
and
I'm
pretty
sure
you
can
just
really
copy
and
paste
it
from
your
dex
hebrew
coding
and
it
works
in
deck,
json
and
somewhere
else
as
well,
but
yeah.
You
need
to
make
changes,
it's
not
ideal,
but
on
the
good
side
of
things
like
certain
it's
just
like,
if
once
it
works,
it's
really
nice,
like
you,
can
just
work
with
your
native
rust
structures
and
you
just
encode
them
as
iprd
or
you,
for
example,
can
now
go
directly.
A
I think it should work with those transcoders: you can directly go from DAG-JSON to DAG-CBOR. Of course, internally it has an intermediate step, but you won't see it; you basically don't go through an intermediate Rust struct step, but kind of go directly, more or less, from one to another, which is quite nice. And then, of course, if you think about the vision of IPLD, we have many formats and so on.
A
You
only
need
to
implement
your
codec
once
for
survey
and
then
you
can
transcode
them
to
all
the
other
ones
and
don't
need
to
do
basically
the
the
m
n
by
m
thing,
where
you
need
to
code
it
for
everyone,
and
so
this
is
basically
what
we
get
from
using
30.
Although
we
have
those
problems,
but
I
think
yeah.
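The "once for serde, then transcode to everything" argument can be sketched with toy codecs sharing a common intermediate value: each codec supplies only decode-to-intermediate and encode-from-intermediate, so N codecs need 2N functions instead of one converter per pair. The two codecs below are hypothetical stand-ins for DAG-JSON, DAG-CBOR, etc., not real IPLD codecs:

```python
import ast
import json

# Each codec is a (decode, encode) pair going through a shared
# intermediate value (plain Python objects standing in for the
# serde / IPLD data model).
CODECS = {
    "toy-json": (
        lambda b: json.loads(b),
        lambda v: json.dumps(v, sort_keys=True).encode(),
    ),
    "toy-repr": (
        lambda b: ast.literal_eval(b.decode()),
        lambda v: repr(v).encode(),
    ),
}

def transcode(data: bytes, src: str, dst: str) -> bytes:
    """Convert between any two registered codecs via the intermediate."""
    decode, _ = CODECS[src]
    _, encode = CODECS[dst]
    return encode(decode(data))
```

Adding a third codec means adding one (decode, encode) pair; every existing pairing of formats then works for free, which is the whole appeal despite the data-model mismatches discussed above.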
A
So
this
is
the
background
about
the
sort
of
stuff
kind
of-
and
I
haven't
talked
in
this
meeting
much
about
the
criminal
implementation,
because
I'm
still
working
on
replacing
the
underlying
keyboard
library
for
the
third
time
now-
and
I
just
yeah
it
just-
doesn't
make
sense
to
follow
basically
because
it
changes
almost
every
day
so
but
hopefully
in
the
near
future.
We
have
something
that
they
can
then
like
probably
publish
and
then
talk
about,
and
then
this
will
be
like
the
new
version
of
kind
of
lib
ipd.
A
So we switched to another CBOR library; cbor4ii is the name of the one we now try to use. But more on this once I have a really working version, because if I'd talked about it three weeks ago, I'd have talked about us using Ciborium, which we now don't use. So it changes.
A
Yeah
and
about
this
whole
like
map
map,
keys,
spites
and
strings,
and
so
on,
like
this
I
mean
we
can
discuss
this
for
weeks
if
we
want
to
which
we
also
have
in
the
past
so
yeah.
I
we
can't
finish
this
discussion
in
five
minutes.
Yeah.
C
I wasn't thinking we were gonna finish it in five minutes. What I was thinking is, as I was reading through the many very good documents that have been generated on this by folks over the years: there's trying to understand what it means when you're trying to make something that is supposed to be, like, IPLD data model.
C
Transitive to some extent is part of this, or is one way to talk about the description here. And so I was thinking there may be some lessons to learn from here in terms of how far down the rabbit hole we want to go. And it's fair that — I mean, in Go we have two.
C
We
have
two
ipld
implementations.
One
one
is
one
is
much
newer
and
shinier
than
the
other,
but
we
do
have
two
in
rust.
I
think
there
were
a
whole
bunch,
there's
now
fewer
right
and
so
like
every
one
of
those
like
library
sets,
will
sort
of
follow
their
own,
their
own
mechanism.
C
For
for
what
works,
but
I
think
like
I
think,
especially
if
you
take
a
look
at,
I
think
eric
last
time
showed,
like
a
little
graphic
of
like
how
all
of
the
very
how
a
lot
of
the
various
pieces,
whether
it's
like
the
codex
or
the
adls.
Now
like
some,
it's
like
lens
stuff,
they
all
sort
of
end
up
have
exposing
an
interface
that
is
like
the
data
model
and
so
understand
having
like
agreement,
I'm
like
yup.
This
is
what
the
data
model
is.
You
don't
have
to
have
all
of
it.
C
Some
of
it
might
suck.
You
might
be
like
no.
In
the
same
way,
you
might
be
like
seabor
seems
good,
nah,
no
tags,
no
special
tags
for
you
right.
We
could
be
like
yeah
here.
This
is
the
ipld
data
model,
but
like
no,
you
don't
want
to
do
this.
Okay,
fine,
just
so
that
we
know
like
what
things
are
fair
game
to
to
implement.
C
You
know
the
example
with
the
map
the
map
byte
keys
is
like:
if
map
by
keys
are
not
okay,
then
it
also
means
that
they're,
not
you,
you
probably
don't
want
to
make
them.
Okay.
For
you
know
the
adl
or
the
lens
layer,
because
then
they
don't
things,
don't
quite
line
up
exactly
and
if
you
do
then
okay,
then
you
get
this
power
everywhere
and
you
get
the
trade-offs
everywhere.
So,
like
understanding
a
little
bit
of
like
what
it
is,
people
are
expecting
from
that.
E
So, not to dwell on it too much, but either way there's also Packed CBOR, or now CBOR-LD, which is basically embedding a dictionary. So an entry in the dictionary can be a URI, or it could be tag 42 for IPLD, which basically means that even though you're using bytes or numbers as keys, there's a dictionary where you actually go through an algorithm to expand it.
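A rough sketch of the dictionary-expansion idea described here, in the spirit of CBOR-LD-style term compression (the dictionary contents and types are invented for illustration): compact integer keys are expanded back into full string keys via a shared table.

```rust
use std::collections::HashMap;

// Expand compact integer keys into full string keys using a shared
// dictionary, roughly how CBOR-LD-style term compression works.
// Returns None if any compact key is missing from the dictionary.
fn expand_keys(
    compact: &HashMap<u64, String>,
    dictionary: &HashMap<u64, String>,
) -> Option<HashMap<String, String>> {
    let mut out = HashMap::new();
    for (code, value) in compact {
        // Every compact key must resolve through the dictionary.
        let term = dictionary.get(code)?;
        out.insert(term.clone(), value.clone());
    }
    Some(out)
}
```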
C
Yeah,
so
I
think
that
that's
sort
of
part
of
it
is
like
one
reason
why
I
think
people
have
been
like
a
little
more
like
I
don't
know
what
to
do
with
this
in
the
past
is
like
if
we
look
around
at
our
codex
and
like
none
of
them
support
this
thing.
It's
like
we.
We
want
this
to
exist
because
we
want
it
for
like
for
things.
You
know
whether
it's
like
adls
or
lenses
or
otherwise,
but
like
in
some
codecs,
but
none
of
the
ones
that
we
have
right
now,
all
right.
C
It's
like
a
little
hard
to
justify
because,
like
the
codex
aren't
there,
whereas
was
you
know,
sort
of
when
they
are
maybe
becomes
a
little
easier
to
discuss
because
the
same
like
you
could
you
could
make
a
version
of
of
json
that,
like
was
able
to
support
more
exotic
things
like
binary
map
keys
by
being
like
yup,
you
don't
get
to
be
keys
anymore.
C
You
get
to
use
like
our
magic,
slash
that
we
use
in
all
sorts
of
places
and
like
yep,
we're
just
appending
like
a
list
of
tuples
to
the
end
of
you,
you're,
not
a
map
anymore.
It's
like
you,
have
all
the
map
keys
and
then
you
have
like
here's
all
the
byte
keys
or
all
the
string
keys,
and
then
you
have
all
the
byte
keys.
Like
that's
like
one
way,
you
could
do
this,
but
those
only
come
from
once
you're
like
out
the
data.
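A sketch of the workaround described here, with invented encoding details: a map whose keys may be bytes is rendered as a JSON-ish object holding the string-keyed entries as normal fields, plus a trailing list of (byte-key, value) tuples. The reserved `"/"` field name and the hex encoding of byte keys are assumptions for this sketch, not an agreed format.

```rust
// Illustrative map entry: keys are either strings or raw bytes.
enum MapKey {
    Str(String),
    Bytes(Vec<u8>),
}

// Render a mixed-key map as a JSON-ish object: string keys become
// ordinary fields; byte keys are appended as ["hex-key", value]
// tuples under a reserved "/" field (both conventions invented here).
fn render_mixed_map(entries: &[(MapKey, i64)]) -> String {
    let mut string_fields = Vec::new();
    let mut byte_tuples = Vec::new();
    for (key, value) in entries {
        match key {
            MapKey::Str(s) => string_fields.push(format!("\"{}\":{}", s, value)),
            MapKey::Bytes(b) => {
                let hex: String = b.iter().map(|x| format!("{:02x}", x)).collect();
                byte_tuples.push(format!("[\"{}\",{}]", hex, value));
            }
        }
    }
    if !byte_tuples.is_empty() {
        string_fields.push(format!("\"/\":[{}]", byte_tuples.join(",")));
    }
    format!("{{{}}}", string_fields.join(","))
}
```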
C
A
All right, and we are hitting the hour mark, so thanks everyone for attending. I think today was really quite an exciting meeting; I learned so many new things and saw so many cool things. So hopefully it will be like this every two weeks. See you all again in two weeks, then. Goodbye everyone, and have a good time. Bye.