From YouTube: 🖧 IPLD Every-two-weeks Sync 🙌🏽 2022-08-30
Description
An every-two-weeks meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
I need to open the meeting notes. As so often, I don't have any updates this week, but first on my list is Rod.
B
Okay, I'm just opening my notes. Out of my notes I've got two major things, and I think if I had more time, if I was more prepared, I'd probably have more things put in here. But the big things are: there was a pull request to go-merkledag, pull request 87. I think I talked about this last week; it got closed out and the new version got released. That's to do with this link-sorting thing that was creating different traversal ordering in different situations. The pull request stabilizes it, so that when you're using go-merkledag for whatever reason (which you probably shouldn't be doing) and you're loading in dag-pb blocks, the links you load will be stable according to the order they get loaded in. The second pull request is in the ipld repo for the website; that's pull request 233 in the ipld/ipld repo, and it's got lots of notes about what's going on there.
B
So if you want a further explanation, it's all documented in there. It talks about how there are three separate version ranges in go-merkledag that have different behavior; the current behavior was introduced in version 0.4. It starts to matter when we have these situations where we actually care about traversal order, and we're getting more of these situations where two parties need to agree on the shape of a dag, or on what they've got. The main one is Filecoin, and there are these other use cases around Filecoin where the order matters; particularly, I think, as the FVM starts to go forward with Filecoin, I imagine there are going to be other situations where multiple parties need to agree about dags.
B
So these cases where traversal order is different are probably going to be important to iron out. Anyway, that's obscure for most people, so probably not that interesting. The second one is in JavaScript: I spent some time with the IPLD schema stuff in JavaScript that was out of date. It's been frustrating me for nearly a year, because there was this flurry of activity last year with the schema DSL...
B
...changing it, and then the schema DMT, which is the data-model form of it (it's like an AST for the schema language), was changed too, and I didn't have a good reason to spend the time on the JavaScript side. But I really want to use it in JavaScript, and so I did this big update where I fixed it all up. It talks the right language now.
B
I think it even does more than the Go version now, so it'll parse more of the schema language and do more of the stuff, and you can go both ways: you can take a schema DMT and it'll print a schema for you as well. So the schema DSL stuff is all complete, it's fully featured. But the most interesting thing in there, I think, is this:
B
You
can
take
a
schema
and
you
can
ask
it
to
create
you
a
what
I'm
calling
validators
and
converters.
So
we
don't
have
the
full
node
interface
like
in
go,
but
this
gets
some
sort
of
towards
that
direction.
B
Where
it
gives
us
the
ability
to
have
typed
and
representation
forms
of
of
your
data
model
objects
in
javascript,
so
you
can,
the
workflow
would
be
you
would
create.
You
would
take
it
to
schema.
You
would
make
a
this
little
validator
thing
and
then
you
would
load
your
your
data.
B
You
would
instantiate
your
data
into
the
data
model
from
from
the
you
know,
from
a
codec
you'd
pass
it
through
this
validator
and
it
would
either
come
out
the
other
side
or
nothing
would
create
the
other
side,
and
so
it
would
which
means
it
didn't
validate,
and
there
are
various
schemas
that
will
transform
the
data
as
well.
So
you
will
go
from
your
representation
form
that
we
call
is
what
we
call.
B
So
it's
the
at
what
we
serialize
into
bytes
and
then
you've
got
the
typed
form,
which
is
according
to
the
schema,
and
that
may
involve
some
transformation,
and
then
you
can
do
the
reverse.
So
you
could
then
modify
your
data
in
your
application,
where
it
makes
sense
and
then
pass
it
back
through
the
the
readme
for
that
repo
I've.
I've
got
it
in
in
the
in
that
validator
and
converter
section.
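To make that workflow concrete, here is a minimal, self-contained sketch of the representation/typed round trip. Everything in it (the `makeConverter` helper, the tuple-represented struct) is an illustrative assumption, not the actual JavaScript schema library's API.

```javascript
// Toy sketch of the validator/converter idea: a schema describes a struct
// with "tuple" representation, so the serialized (representation) form is a
// terse array while the typed form is a named-field object.
// Hypothetical schema: struct Point { x Int, y Int } representation tuple
function makeConverter(fields) {
  return {
    // representation (array) -> typed (object), or null if it doesn't validate
    toTyped(repr) {
      if (!Array.isArray(repr) || repr.length !== fields.length) return null;
      const typed = {};
      for (let i = 0; i < fields.length; i++) {
        if (!Number.isInteger(repr[i])) return null; // all fields are Int here
        typed[fields[i]] = repr[i];
      }
      return typed;
    },
    // typed (object) -> representation (array)
    toRepresentation(typed) {
      return fields.map((name) => typed[name]);
    }
  };
}

const point = makeConverter(['x', 'y']);
const typed = point.toTyped([3, 4]);        // { x: 3, y: 4 }
const repr = point.toRepresentation(typed); // [3, 4]
const bad = point.toTyped(['a', 4]);        // null: fails validation
```

The round trip is the point: the terse array is what would be encoded to bytes, while the named-field object is what the application works with.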
B
I've got a big example of doing that in there. You take some very terse data (it's actually somewhat similar to Filecoin chain data, which is very terse), and if you were to load Filecoin chain data, or just look at the blocks if you were to decode them, it would just be rubbish, because they're just arrays of stuff. But when you apply a schema to it, the data emerges out of that, because you can suddenly understand what's in the arrays. The example is a bit like that: a really terse encoding form that would be meaningless...
B
...otherwise, but you can pass it through these validators and you get a form that your application cares about, and you can actually use these objects. You can modify it and pass it back through again to get the form to encode. So I'm really happy with that, and I've got lots of things I want to do with it. If you're using JavaScript, I would encourage you to have a look at it, because I think it's really good. Anyway, that's me.
C
Yeah, so I think this would be more of a discussion point, so maybe we do that at the end, but I started thinking about a new download client that would support both Bitswap and GraphSync, and what I want to use is a divide-and-conquer strategy where I take part of the dag and divide it into subtrees, and I basically do that all the way down. I ran into trouble with two things. The first is, assuming I want to use selectors (which I don't know if I will, but right now it's what I plan to use)...
C
...I then call explore on it multiple times, lazily, while I'm exploring the dag. So now I'm three levels deep with a new selector, and I need to transmit the selector to another node to start doing GraphSync, and right now we cannot, because once you compile the selector (meaning, in the selector package in Go), you have a function...
C
It
just
takes
a
node
and
gives
you
backsight,
or
you
cannot
do
this
the
other
way
around,
which
means
you
have
no
way
of
serializing
the
current
static
torque
that
you
have
and
the
second
h
case,
which
is
similar,
which
is
if
I
have
a
sharp
piece
of
the
deck.
So
you
have
a
dag
that
references,
the
same
blocks
multiple
times,
and
these
are
just
edge
cases
where
like,
if
you
have
like,
for
example,
aesthetic
source
that
is
traverse
everything.
Three
level
dips
and
the
first
one
come
here
with
the
widget
of
zero.
C
When
you
just
explore
that
block-
and
you
don't
go
further,
however,
the
second
time
that
you
export
this
block,
now
you
have
a
budget
of
one,
so
you
will
go
one
level
deeper
and
that's
an
hk
is
which
is
like
we.
I
need
a
way
to
complete
aesthetic
work
between
them
and
like
to
know
that,
oh,
I
could
go
deeper
with
that
one,
so
I
should
use.
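That budget edge case can be sketched with a toy in-memory DAG. The walk below is a plain recursive traversal with a depth budget, an illustration rather than go-ipld-prime's selector engine, but it shows the shared block being reached first with a budget of 0 (not explored further) and later with a budget of 1 (explored one level deeper):

```javascript
// Toy DAG where 'shared' is referenced twice: once via 'a' and once directly.
const dag = {
  root: { links: ['a', 'shared'] },
  a: { links: ['shared'] },
  shared: { links: ['leaf'] },
  leaf: { links: [] }
};

const visits = [];
function walk(id, budget) {
  visits.push({ id, budget });
  if (budget === 0) return; // budget exhausted: don't go deeper
  for (const child of dag[id].links) walk(child, budget - 1);
}

walk('root', 2);
// 'shared' is visited twice, with different remaining budgets each time:
const sharedVisits = visits.filter((v) => v.id === 'shared').map((v) => v.budget);
// sharedVisits is [0, 1]: the second visit is allowed to go one level deeper
```

A naive "already seen" set would skip the second visit entirely and miss `leaf`; comparing the remaining budget is what tells the walker it can now go deeper.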
D
Yeah, hi. I have been up to some more IPLD stuff lately. Namely, outside of my direct work, I've been helping get this grant along to standardize a new ADL called Prolly Trees, based on Mikeal's work in JavaScript.
D
This
group
called
ken
labs
based
primarily
out
of
china
and
europe,
and
they
are
working
on
standardizing
some
of
the
stuff
that
michael
did
in
golang,
using
ipld,
go
ipld
prime
adls
and
schemas,
and
also
documenting
exactly
how
the
adl
works
and
how
it's
structured,
also
probably
playing
around
with
different
weights,
and
so
they've
started
that,
hopefully,
within
a
couple
months,
we'll
have
a
initial
spec
out
and
then
some
of
the
next
questions.
D
There
are
whether
they're
going
to
go
to
do
a
rust
implementation
or
a
javascript
implementation,
next
rust
being
appealing
because
they
kind
of
work
want
to
work
on
integrating
it
with
the
fvm
altcoin
virtual
machine,
but
rust
ipld
kind
of
is
not
as
advanced
as
say,
go
or
even
javascript,
so
they
might
also
be
putting
together
a
dev
grant
to
work
on
rust,
ipld
just
in
general,
so
we'll
kind
of
see
how
that
develops.
D
Probably,
if
there's
people
that
want
to
work
on
rest
apld,
maybe
we
should
talk
yeah
outside
of
that
I've
been
working
more
on
ipld,
urls,
slash
gateway
protocol
handlers
and
I've
actually
sketched
up
an
initial
kind
of
like
usable
thing
in
ipfs
fetch,
which
is
this
library
used
for
playing
with
protocol
handlers
for
ipfs
and
ipns.
D
I
feel
that
and
I've
integrating
it
integrated
it
into
the
agricore
web
browser,
and
so
part
of
that
is
figuring
out
the
ux
of
like
how
to
actually
use
ipld
and
protocol
handlers
that
works
nice
with
browsers
and
makes
it
easy
for
applications.
D
So
one
of
the
cool
parts,
in
my
opinion,
is
you,
can
have
the
protocol
handler
deal
with
encoding
and
decoding,
and
your
application
can
think
closer
to
what
the
data
model
is.
So
one
way
you
could
do
that
is.
If
I
am
uploading
some
data
to
ipld,
I
can
do
a
post
request
to
the
browser's
protocol
handlers
or
do
a
curl
request
with
a
a
post
to
a
gateway
with
data
in
json,
and
then
I
can
tell
the
gateway.
D
Oh,
when
you
actually
save
this
to
disk,
do
it
in
the
dag
cboard
format
so
that
it'll
be
stored
more
efficiently
and
then
give
me
back
an
ipld
url
for
the
dag
c
board
version
of
this
data,
and
this
means
that
a
client-side
application
doesn't
have
to
understand
dag
c-bore
and
can
be
a
lot
more
thin,
which
isn't
useful
everywhere.
There's
definitely
problems
with
gateways
that
people
have
raised
recently,
but
from
an
application
in
the
peer-to-peer
web
space.
I
think
it's
gonna
be
very
exciting.
D
Next,
on
my
oh
and
part
of
that
was
I
wrote
a
javascript
ipld
url
parser
and
serializer
called
js
ipld
url.
I
think
it's
on
npm
now
yeah
and
my
next
steps
are
going
to
be
to
get
the
stuff
that
rod
did
with
schemas
and
see
how
we
can
integrate
schemas
into
ipld
urls.
Like
can
I
have
a
url
that
has
the
sid
for
some
ipld
tree
and
can
I
also
embed
the
schema
to
apply
on
top
of
the
tree
in
the
url
parameters
and
then
kind
of
like?
D
The new builds are also going to be ESM-only, no CommonJS. And part of that is also going to be standardizing the build tooling across all of our repos. In the JavaScript IPFS ecosystem there's some tooling that's been made by other teams, namely this thing called aegir (I don't know how to pronounce it still; I really should), and it does all of the TypeScript build stuff and just has a few command-line utilities.
D
This
means
that
we
can
get
rid
of
like
five
six
dependencies
per
repo
and
have
just
like
a
standard
thing.
So
if
someone
wants
to
create
a
new
js,
ipld
related
library,
they
can
just
kind
of
like
base
their
work
off
of
this
template.
I'm
also
going
to
be
updating
the
github
workflows
to
use
more
standardized
cis
from
across
other
ipfs
related
repos.
D
So
this
does
mean
that
people
that
are
trying
to
import
these
js
multi
multi-formats
libraries
from
within
common
js
in
a
bundler
that
doesn't
do
fancy
magic
to
make
that
work.
They
won't
be
able
to
use
this
new
version
once
it's
published.
D
Also, as part of this change, we're still going to have TypeScript typing support as part of the build process, so I'm going to be doing some tests to make sure that TypeScript isn't horribly broken the moment we release this, or at least not horribly broken for absolutely everyone.
D
Anyway, yeah, that's all that I've got that I remember.
A
Thanks
so
I
call
this
tool
agir,
but
yeah
just
no
reference,
but
I
guess
everyone
calls
it
somehow
different
so
and
yeah.
Okay,
so
is
there
anything
else
before
we
got
into
the
discussions
about
the
graph,
singing
and
selector
stuff.
B
I had one more thing that I forgot off the list that was non-trivial. Gozala has had a pull request to the js-multiformats repo for a long time now, and because it's large and because he's stretched for time, it's been difficult to get it over the line, but we're getting close. We've got a second one...
B
...now that sort of replaces it with something slightly different, but I'm hopeful we can get this released very soon. One of the problems we have in the JavaScript ecosystem is that, regardless of how much of it you use, you'll almost always need the CID class to do stuff. Regardless of what codecs you want to use or anything else, you end up having to use js-multiformats, and very likely getting the CID class to use in various places. Not always the case...
B
You
could
use
other
other
packages
if
you're,
depending
on
them,
but
that
that
brings
up
challenges
with
typescript
and
having
this.
You
know
this
this
cid
class
everywhere
and
it
having
to
be
the
same
version
and
typescript
getting
hairy
about
that
stuff
and
so
a
to
do
for
a
while
has
been
to
turn
that
into
an
interface
so
that
we
can
have
this
cid
interface
that
you
rely
on
and
then
you
don't
have
to
have
the
exact
type
it
just
has
to
match
the
interface.
So
it's
a
bit
nicer
for
typescript
users.
B
It's
now
morphed
into
this
we're
calling
it
link
so
not
just
cid.
It's
a
link
interface,
which
is
a
bit
simpler
than
a
cid,
but
also
potentially
a
broader
concept
than
just
a
cid.
B
So
when
this
gets
merged,
people
just
be
able
to
rely
on
that
interface
and
they
just
need
to
conform
to
that
interface,
and
we
won't
need
the
exact
version
of
cid
and
we
can
do
some
more
creative
things
with
it
probably
sounds
less
interesting
than
it
is,
but
for
typescript
users
it
should
be
a
big
deal.
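The benefit can be sketched with plain duck typing: instead of `instanceof CID` (which breaks when two package versions each ship their own class), code checks the shape of the value. The property names below are illustrative stand-ins, not the exact Link interface from js-multiformats:

```javascript
// Shape check in place of an instanceof check: any object that looks like a
// link is accepted, whichever package version constructed it.
function isLinkLike(value) {
  return Boolean(
    value &&
    typeof value === 'object' &&
    typeof value.version === 'number' &&
    typeof value.code === 'number' && // multicodec code of the linked block
    value.multihash &&
    value.multihash.digest instanceof Uint8Array
  );
}

// Two objects, as if built by different (hypothetical) package versions:
const cidFromOtherVersion = {
  version: 1,
  code: 0x71, // dag-cbor
  multihash: { digest: new Uint8Array(32) }
};
const notACid = { version: 1, code: 'dag-cbor' };

const a = isLinkLike(cidFromOtherVersion); // true: the shape matches
const b = isLinkLike(notACid);             // false: code is not a number
```

This is the structural-typing move the interface enables: TypeScript consumers depend on the shape, not on one concrete class identity.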
B
Unless somebody writes a concrete form of it that does crazy things; then you shouldn't rely on it, but that's the risk in every language, and particularly JavaScript, where you can duck-type even with TypeScript. So yeah, okay.
B
It's
sort
of
part
of
it.
The
assumption
is
that
the!
If
and
when
we
we
arrive
on
a
on
on
what
that
means.
That
would
just
be
a
simple
extension
to
the
existing
cid
implementations,
so
go
cid
and
jsmody
formats
that
there
will
be
an
additional
method
or
methods
where
you'll
be
able
to
get
that
extra
metadata
out
of
the
cid.
That's
the
assumption,
I'm
working
on
that.
B
I
suspect,
there's
there's
some
more
creative
modes
where
that,
where
our
tooling
you'll
want
to
show
up
with
a
cid
and
combine
it
with
something
else,
and
so
you
have
these
two
portions
that
you
bring
together
and
rather
than
just
squishing
them
into
a
cid
like
take
a
cid
extended
and
then
hand
it
off.
There
might
be
ways
and
places
where
you
these
things
are
separate,
but
you'll
want
them
to
combine
at
some
point,
and
so
we
might
have.
B
We
might
end
up
doing
some
experimenting
with
what
those
kinds
of
apis
look
like,
but
I
don't
think
I
don't
think
anything
that
we're
doing
is
getting
in
the
way
of
that.
I'm
not
sure
that
what
anything
we're
doing
is
explicitly
enabling
those
modes,
but
we're
certainly
mindful
of
the
possibility
of
extending
cids.
E
Don't they sort of just let you put whatever you want in the record field or the metadata field? And this is cool, because it means that we can start experimenting with other types of data-transfer protocols that aren't strictly libp2p based. If they're libp2p based, then you can just use your peer ID: you do identify, you learn about what the other side supports, and you negotiate and figure it out.
E
That
doesn't
help
me
if
I
want
to
use
http
or
something
you
know
ftp
or
something
else
to
do
my
downloading,
and
so
I
can
put
it
in
there,
there's
a
little
bit
of
work
to
go
and
expose
all
of
that
and
make
that
all
work
with
like
the
reframe
api
stuff,
but
the
implementations
of
the
reframe
api.
But
that
should
be
fine.
E
So if you wanted to do this "I want to transfer arbitrarily large blocks" thing, like "I want to transfer the SHA-256 of a one-gigabyte blob", which is too big for all of our block sizes in IPLD everywhere, you could get around it without making the other party implement a custom data-transfer protocol, by basically putting in there: here is the CID that contains all of my blocks.
E
But
that
makes
sense.
So
you
know
imagine
you
have
really
a
really
big
blob
as
long
as
you
can
write
client
code
and
as
long
as
you
have
this
hook
in
the
content
routing
system,
you
could
take
your
data
bash,
it
up
store
it
in
any.
You
know
ipfs,
pending
service
thing,
you
want,
you
feel,
like
you
know,
pinata
and
fiera
some
file
coin
nodes
and
without
them
having
to
change
how
they
do
anything
else.
E
You
could
incrementally
verifiably
pull
this
like
shot,
two
of
a
one
gigabyte
blob
which
is
kind
of
nifty,
because
getting
people
to
implement
new
protocols
is
hard,
but
maybe
getting
them
to
store.
Data
is
easier.
D
One
tough
thing
about
that
is
mutability,
because
a
lot
of
protocols
that
I
personally
work
with
and
that
I've
been
looking
into
have
some
sense
of
like
stuff
changing
over
time.
So,
for
example,
like
I
think
three
years
ago,
I
was
standing
at
d-web
camp
and
talking
to
some
folks
that
were
like
oh
yeah,
we
have
like
ipfs,
it's
so
cool.
We
should
just
have
secure
scuttlebutt
and
hypercore,
and
these
other
things
just
build
on
top
of
lib
p2p
and
ipfs,
and
it's
cool
for
the
content.
Addressable
part.
D
I just think that, if having more data shoved into IPLD is the goal, or more protocols, then not having mutability be part of the conversation really limits what's available. BitTorrent is a really great example: if you don't use mutable torrents, then, like, a SHA-256 of a one-gigabyte file is cool, but things that are a little more advanced become harder. I don't know.
E
You
know
most
if
you
download
something
with
docker.
There
are
tags,
people
follow
tags
and
they
try
and
download
the
latest
one
or
maybe
they
download
a
fixed
version,
but
the
mutability
is
handled
by
like
the
docker
registry
and
the
immutable
stuff
is
handled
by
at
this
point,
still
the
docker
registry,
but
like
maybe
you
could
use
ipld
and
ipfs
for
that.
E
I
think
mostly
like
people
like
as
long
as
you
can
they're
sort
of
the.
I
think
the
main
issue
of
contention
is
like
it's
a
little
bit
like.
E
E
If
I
do
that
or
not
and
like
what
which
of
my
properties
have
I
preserved?
If
I
allow
you
to,
you
know
to
store
mutable
links
inside
of
immutable
data
structures,
I
feel
like
that's
kind
of
where
it
is
there's
some
interesting
like
discussion
on
this.
If
you
search
like
the
ancient
ipld
stuff
like
before
my
time,
this
is
like,
like
nicola
and
other
folks,
talking
to
each
other
about
how
they
think
that
maybe
this
should
operate
or
not
operate.
A
All
right-
and
we
have
a
few
more
questions
from
other
folks,
so
I'd
like
to
hand
it
over
to
the
dvd-r-w
so.
F
Okay,
well,
I
have
a
fair
few
questions,
I'll
I'll
start
with
well,
it's
sort
of
a
misleading
question.
It
sounds
simple,
but
there's
a
lot
of,
I
guess
confusion
on
my
end
behind
it,
so
the
goal
of
ipld.
As
far
as
I
understand
it,
is
to
right
to
reach
independence
from
wire
formats
right,
so
you
write
code
once
which
just
works
on
data
abstractly
and
then,
however,
it
gets
serialized.
That's
not
none
of
your
concern.
F
My issue is, when writing code (and I've been interfacing pretty low-level with IPLD), what's the developer-intended way of supporting as many codecs as possible? Or rather, how do you support serializing, but mainly deserializing, arbitrary codecs?
F
A concrete example: you're parsing some data, you're expecting some sort of data to appear in some sort of format, and you do this by... the user gives you a CID, you download the block, and now here's the problem: you have to decode that to get to the data.
E
Yeah, so I mean, you've got to draw the line somewhere, right? I'll say there are two different philosophies for how to address this problem. The one which is much closer to how things work right now is basically: the codec tells you. There's a field in the CID that tells you how to decode the data, and then you have a map somewhere in your program that maps that code to, basically, what type the data is.
E
Is
it
dag
cbor?
Is
it
git?
Is
it
json,
whatever
dag,
pb
and
and
then
the
function
for
decoding
it
into
the
data
model
right?
So
you
just
have
a
map
that
does
that,
for
you
and
all
of
the
libraries
have
all
of
like
the
implementations,
have
this
map
somewhere
right
now.
The
question
is:
if
you
want
arbitrary
expansion
of
that
map,
right,
which
is
you
know,
the
somebody
writes
a
new
codec
tomorrow
registers
the
number
in
the
code
table
and
now
you're
asking
yourself.
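That code-to-decoder map can be sketched in a few lines. The multicodec codes below (0x0200 json, 0x0129 dag-json, 0x55 raw) are from the real multicodec table, but the decoder functions are simplified stand-ins, not real codec implementations:

```javascript
// Map from multicodec code (the field in the CID) to a decode function.
const decoders = new Map([
  [0x0200, (bytes) => JSON.parse(bytes.toString())], // json
  [0x0129, (bytes) => JSON.parse(bytes.toString())]  // dag-json (simplified!)
]);

function decode(codecCode, bytes) {
  const dec = decoders.get(codecCode);
  if (!dec) throw new Error(`unknown codec 0x${codecCode.toString(16)}`);
  return dec(bytes);
}

// Late binding: a user registers a codec the library didn't ship with.
decoders.set(0x55, (bytes) => bytes); // raw: bytes pass through untouched

const obj = decode(0x0200, Buffer.from('{"hello":"world"}'));
```

Arbitrary expansion then just means letting callers add entries to the map, which is exactly the late-binding question raised next.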
F
Essentially... well, like I said, it's a bit of a misleading question. I've seen exactly that map pattern, with codes mapping to decoders, to codecs, and I was just wondering if there is an already-existing multiformats library that exposes some sort of interface, mainly for library writers, so you can do what would be late binding: you let your library users pass you additional codecs and then you integrate them.
B
Yeah,
I
think
it's.
This
is
a
javascript
question,
though,
isn't
it
yeah?
That's
where
so
I'll
explain
the
back
I'll
explain
the
background
of
the
javascript
stack
as
it
is
today,
so
it
used
to
be
the
case
that
we,
the
old
stack,
did
have
some
notion
of
this,
where
you
would
load
in
codex
essentially-
and
you
would
just
have
this
sort
of
expanding
core
that
we
just
shove
stuff
into,
but
the
the
the
pattern
that
we
tend
to
find
with
ipld
is
there's
there's
two
main
use
cases.
B
One
is
the
ipfs
use
case
and
the
other
one
is
almost
everything
else.
So
the
ipfs
use
case
is
I
I
want
to
deal
with
these
arbitrary
blocks
coming
in
and
out
of
my
system,
and
so
I
want
to
be
able
to
maximally
support
those
things,
and
so
the
for
most
of
its
life
ipld
has
been
catering
to
that
use
case.
B
And
so
the
current
stack
is
is
built
around
this.
This
problem
of
we're
trying
to
cater
to
these
systems,
where
they
typically
just
have
one
codec,
that
they
don't
they're
not
as
interested
in
saying
I
want
to
be
able
to
load
any
code.
That
comes
my
way,
because
no
I'm
building
a
content
address
system-
and
I
know
my
serialization
type
and
that's
all
I
really
care
about
I've-
got
a
codec.
I've
got
a
hasher.
B
I
put
these
things
together
and
that's
my
system
and
when
I'm
loading
blocks,
I'm
almost
certainly
guaranteed
they're
going
to
be
one
codec,
and
so
you
can
build
this
narrow
stack
in
the
in
the
ip
current
ipld
tooling
that
we
have
built
around
js
multi
formats,
where
you
just
say
these
are
my
pairs
of
things
and
then
your
bundle
only
contains
those
things,
but
but
because
we
pushed
it
far
that
way.
We
have
this
challenge
that
you,
the
exact
challenge
you're
talking
about
off.
B
If
you,
if
you
find
yourself
in
a
situation
where
you
want
to
support
arbitrary
codecs,
then
we
don't
have
good
tooling
anymore
to
to
do
that.
But
I
I
know
you've
got
a
pull
request
in
js
multi
formats.
I
don't
think
I've
commented
on
and
I've
been
meaning
to
there
is.
There
is
a
proposal
that
gazilla
has
had
up,
because
because
allah
helped
build
js,
multiple
masters
and
he
recognized
this
problem.
B
Let
me
just
see
if
I've
got
the
pull
request
right,
but,
and
he
wanted
his
his
idea
was
to
build
in
these
combinators
of
for
both
the
hash,
probably
because
we've
got
this
problem
with
hashes
as
well
as
codex,
because
you
could
have
arbitrary
number
of
hashes
that
you
want
to
need
to
support
and
update
number
of
codecs
that
you
want
to
support.
B
So
he
wanted
to
have
a
situation
where,
because
now,
when
you're,
when
you're
dealing
with
cids
and
blocks
your
the
api
makes,
you
show
up
with
a
codec
and
with
a
hasher
to
those
things.
So
he
wanted
to
build
a
combinator
of
that
would
let
you
combine
multiple
codecs
into
one
thing
that
had
the
had
the
codec
interface
so
that
then
you
could
pass
data
through
it
and
it
would
do
it
would
choose
the
right
thing.
I'm
pretty
sure
the
pull
request
is
still
open.
F
Well,
I
I've
built
a
more
or
less
a
library
just
like
that.
I
guess
it's
an
interface
which
does.
B
I
think
they're
related
the
two
oldest
pull
requests
that
are
still
open
and
it's
got
this
combinator
thing
where
you
can
smush
them
together
into
a
single
interface,
and
I
see
that
that's
kind
of
like
what
you've
been
tinkering
with.
I
think
similar.
B
And
and
this
yeah
this
week
we
could
push
this
forward
because
it
is.
It
is
a
neat
concept,
because
because
then,
once
once
we
did
this
upgrade
to
js
multiple.
Once
I
went
back
to
js
ipfs
to
like
okay,
let's
do
the
upgrade
there,
and
then
I
found
myself
just
implementing
that
map
again
and
then
just
smashing
all
the
codex
in
there.
F
I'm
working
on
the
ipns
method
for
for
decentralized
identifiers.
If
you
know
that's
right,
I
I
saw
there's
was
sort
of
a
draft
spec
for
it
that's
been
sort
of
abandoned
and
in
writing.
Well,
first,
firstly
and
a
reference
implementation
for
all
of
this,
I
yeah
I
saw
myself
practically
copy
pasting,
this
exact
map
code
all
over
the
place
so
yeah,
that's.
B
The main reason is this bundling thing, where we want to enable use cases where you just have one codec and one hash function. That's so common. Those of us who tend to work in Filecoin and IPFS and these other places are thinking about this multiplicity of codecs and hash functions...
B
...but that's not the norm for content-addressed data. The norm is "I've got one way of doing things", and so we built the stack around that one way. There is, in js-multiformats, this thing called basics.
B
We export this thing called basics that has all the things in it. It's not like it has a really nice interface, but it has all the things in it. So if you want all that stuff, you import this basics thing: it's got all the bases and all the hashes. It doesn't have all the codecs; we still do those separately. So...
D
Yeah, if I may: I actually ran into this issue of using multiple codecs and keeping track of stuff, and I think, Rod, you pointed me to the .or method on decoders. So you can chain a bunch of codecs together, effectively, when you're decoding data and you're not sure what it might be, or it might be a bunch of different things.
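A toy version of that or-combinator idea: several decoders are combined into one object that still looks like a single decoder, choosing by codec code. The object shapes here are illustrative, not the exact js-multiformats API:

```javascript
// Combine two decoders into one that dispatches on the codec code.
function or(a, b) {
  return {
    codes: new Set([...a.codes, ...b.codes]),
    decode(code, bytes) {
      if (a.codes.has(code)) return a.decode(code, bytes);
      if (b.codes.has(code)) return b.decode(code, bytes);
      throw new Error(`no decoder for 0x${code.toString(16)}`);
    },
    or(next) { return or(this, next); } // allows chaining: a.or(b).or(c)
  };
}

const json = { codes: new Set([0x0200]), decode: (_c, b) => JSON.parse(b.toString()) };
const raw = { codes: new Set([0x55]), decode: (_c, b) => b };
const combined = or(json, raw);

const val = combined.decode(0x0200, Buffer.from('[1,2,3]'));
```

The combined object satisfies the same "decoder" shape as its parts, which is what lets callers bring only the codecs they care about and still pass one thing around.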
D
So I just use that, and then you bring the codecs that you care about. Yeah, there's no default that combines all of these things. Honestly, maybe we should have a PR that adds that, like a sub-dependency you can import, "import all the bases", for people using it. I don't know.
E
That's
what
we
have
in
go
by
the
way
it's
like
in
go.
You
can
also.
We
haven't
done
this
for
for
all,
but
for,
like
all
the
bases
and
stuff,
because
most
of
that
tends
to
just
get
bundled
in
any
way
like
standard
library
things,
it's
not
much
extra
weight,
but
like
the
codecs
and
the
hashes
you
now
can
like
import,
smaller
subsets
of
them
and
there's
just
like
a
package.
You
can
import
that
imports
all
the
other
that
imports
all
the
things
for
you
and
chucks
them
into
the
map.
E
But
there
is
a
global
map
that
you
can
use.
Then
there's
probably
like
the
non-global,
the
non-singleton
map
that
you
can
pass
around
that
gets
to
do
more
fancy
stuff
right,
the
link
system,
which
just
has
like
interfaces
on
it
that
lets
you
do
whatever
you
need
to,
and
if
you
pass
that
around,
then
it's
like
low,
it's
like
low
weight
right
and
you
have
to
control
your
configuration.
B
When
we
were
redoing
this
core
stack,
there
was
this
intermediate
version
that
that
michael
made
that
we
and
you
still
see
some
code
that
hasn't
been
upgraded,
where
we
would
do
something
like
the
link
system
and
you
would
have-
and
it
is
essentially
a
singleton
that
you
you
maintained,
and
it
was
it
was
the
original
js
multi-formats
before
it
is
what
it
is
today,
where
you
have
this
thing
called
multi-formats
and
you
would
plug
things
into
it,
and
so
you
held
onto
the
object
you
plugged
in
your
hash
function
you
plugged
in
your
codecs,
and
then
you
passed
that
around
everywhere.
B
You
want
to
do
something
and
that
that
worked
for
this,
like
it
solved
this
problem.
It
was
this
global
thing
that
had
all
the
things
and
you
could
and
then
a
library
would
export
its
its
functions
and
and
to
use
it
you
would
have
to
pass
in
your
multi-formats
object,
so
he
could
use
that
to
figure
out
what
to
do.
B
Let
me
make
you
a
new
version
of
myself,
but
I
need
your
multi-format
object
before
I
can
make
myself
and
it
just
created
these
really
hairy
apis,
and
we
just
wanted
something
simpler,
so
yeah,
but
there
is,
there
is
it's
definitely.
This
is
definitely
an
area
where
we
could
do
with
some
work
to
make
this
stuff
easier.
So
these
com,
improving
these
combinators
and
you
know
the
or
function
building
on
that
yeah.
F
I
have
a
few
more
questions,
mainly
working
with
with
the
ids
they
have.
Obviously
part
of
them
is
a
path,
and
now
past
resolution
is
from
what
I've
figured
out
on
using
you
know,
reading
the
history
on
git,
it
seems
like
it
has
a
lot
of
legacy
to
it.
Where,
especially,
you
know
with
dag
pb
and
non
dag
pb
passing
it
works
very
differently
and
well
I
mean
obviously,
I
understand
how
it
works.
F
Now
I
was
thinking
which,
obviously,
most
codecs
will
probably
use
the
default
passing
behavior,
but
maybe
passing
should
be.
You
know,
part
of
what
a
coder
has
to
define
so
that
you
could
have
this
varied
behavior
per
codec
and
once
again,
in
the
you
know,
thinking
of
the
multiplicity
of
codecs
you
have
to
deal
with.
F
This
would
make
it
a
lot
easier
than
having
to
basically
hard
code
and
manually
discover
which
codecs
are
pathologic
and
then
implement
a
a
pathing
algorithm
for
them,
specifically,
maybe
an
interface
over
that
and
then
generalizing
over
from
that.
When
I'm
writing
the
the
spec
for
the
ipns
method,
I
realize
I
have
to
explicitly
sort
of
define
what
supported
codecs
for
the
you
know.
Document
blocks
have
to
do
like
a
sort
of
a
list
of
guarantees.
F
You
know
that
they'll,
you
know
sort
map,
keys
and
order
and
stuff
like
that,
so
maybe
a
sort
of
trade
system
where
each
codec
can
sort
of
self-describe
what
you
know
the
guarantees
it
provides
and
sort
of,
like
basically
object-oriented
interfaces.
Something
like
that
and
you
know
you
could
think
of
a
couple
of
implementations,
but
I
think
it's
worth
thinking
about.
If
you
know
ipld
is
to
move
towards
this
universal
data.
Polymorph
polymorphic
over
wire
format,
future
just
thought
that
might
be.
B
...a point of discussion, yeah. I think we try and solve this by defining the data model. I know the pathing stuff sounds confusing in the way it's written, because it's accounting for history, but we try and make that history not matter anymore. It does matter in, imagine, ipfs, Kubo and friends, where they have to deal with legacy APIs, but for any new stuff...
B
We
just
ignore
that
history,
stuff
and
paths
are
just
as
they
work
over
the
data
model,
and
so
when
we
push
all
this
stuff
into
the
data
model-
or
we
say
that's
a
the
codex
concern
is
to
get
it
to
the
data
model
and
back
again,
and
that's
where
all
that's.
Where
we
deal
with
all
this
set
of
problems
and
pathing
then
works
on
top
of
the
data
model.
So
pathing
should
take.
B
You
know
something
that
has
been
instantiated
into
data
model
form
and
be
able
to
work
over
those
the
nodes
of
that
graph
and
then,
as
long
as
the
codec
is
doing
stable
and
predictable
things
to
get
into
the
data
model,
then
that's
that's.
That
should
help
the
the
main
challenges
we
have,
then
are
some.
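B's point that pathing operates purely on the data model can be sketched as a resolver that never touches bytes or codecs. This is a minimal illustration over plain Python maps and lists, not a real IPLD library API:

```python
def resolve_path(node, path):
    """Walk a '/'-separated path over an already-decoded data model value.

    Each segment is exactly one hop: through a map by key, or through a
    list by index. Which codec produced the value is irrelevant here —
    the codec's job ended once the bytes became a data model node.
    """
    for segment in path.split("/"):
        if segment == "":
            continue  # tolerate leading/trailing/double slashes
        if isinstance(node, dict):
            node = node[segment]
        elif isinstance(node, list):
            node = node[int(segment)]
        else:
            raise TypeError(f"cannot path into scalar at segment {segment!r}")
    return node

# The same path works no matter which codec decoded this value:
doc = {"files": [{"name": "a.txt", "size": 12}]}
name = resolve_path(doc, "files/0/name")  # -> "a.txt"
```

Because map lookup is a single hop, this resolver is indifferent to map ordering — which is exactly why, as B says next, ordering only starts to bite with selectors and round trips.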
B: — some codecs, in the way they go to and from the data model, and the differences that creates. The big one is map ordering. It doesn't matter so much for pathing, because pathing is like one hop: it doesn't matter what order the maps are in, you're just going to make one hop, so that's fine. But when we do selectors, which are like paths on steroids, then it can matter what order the maps come in, and it also matters when you're doing round trips, if the maps are differently ordered — that's what the pull request for the dag-pb spec that I was doing was dealing with. But for paths it's much simpler, because it's one hop at each of these nodes of the graph. So I think the answer is that we try to solve it by saying the codec's concern is to get to the data model and then back again, and everything else we push on top of that. Yeah.
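One way to make map order stable for round trips and selector traversal is a canonical key ordering applied on encode. The length-first, then bytewise rule below is the canonical-CBOR style of ordering, shown here only as an illustration of the idea, not as the rule any particular IPLD codec mandates:

```python
def canonical_key_order(keys):
    """Sort map keys length-first, then bytewise (canonical-CBOR style).

    Applying a fixed ordering on encode makes decode -> encode round
    trips byte-stable, and gives a selector a reproducible visit order
    regardless of the insertion order of the source map.
    """
    return sorted(keys, key=lambda k: (len(k.encode()), k.encode()))

# Two maps with the same entries but different insertion order end up
# with the same traversal (and encoding) order once canonicalized:
a = {"bb": 1, "a": 2}
b = {"a": 2, "bb": 1}
assert canonical_key_order(a) == canonical_key_order(b) == ["a", "bb"]
```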
E: It may be slightly deceptive, but dag-pb pathing is almost never your problem, because it's really UnixFS, sort of, and the /ipfs namespace and what that implies. Even if you don't account for dag-pb pathing at all — even if, in dag-pb, everything had been named correctly and I wasn't using these indices and the names inside the links and traversing through — even if none of that was there, I would still run into a problem where, when I try to do a traversal and I'm at a UnixFS HAMT, I traverse that as if it's a directory and go through multiple layers of directory structure internally, instead of just one IPLD node. So what's happening is that you're applying multiple of these lenses that are not just dag-pb things and codec things, but multi-block data structure things. So I think /ipfs is a little bit stuck with this.
F: That's an interesting approach. I guess our goals are similar. I just thought that instead of trying to, I guess, force everything — well, the future approach currently is just to use default pathing, and that puts the burden of making paths pretty either on the path format or on the developers making the structures.
C: The way it works is you can get back to a normal, data-model-style traversal from a HAMT using the reifier, which is a piece of code that takes the data-model HAMT, makes sense of it, and gives you a data-model view of it — basically it does the HAMT projection for you. So I guess that would be one way, and we would just need to find a way to give it the ADL.
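C's reifier idea, reduced to a sketch: a piece of code that takes the sharded data-model form of a HAMT and hands back a plain map view, doing the multi-block traversal for the caller so paths and selectors can treat it as one logical node. The shard layout, field names, and class name here are all invented for illustration — this is not the real UnixFS HAMT format or any real ADL API:

```python
class HamtReifier:
    """Hypothetical sketch of an ADL reifier over a sharded map."""

    def __init__(self, load_block):
        # load_block: callable taking a link and returning a decoded
        # data-model node (stands in for a real block loader).
        self.load_block = load_block

    def reify(self, root):
        """Collapse all shards reachable from `root` into one plain map."""
        out = {}
        self._walk(root, out)
        return out

    def _walk(self, shard, out):
        for entry in shard["entries"]:
            if "link" in entry:                      # entry points at a child shard
                self._walk(self.load_block(entry["link"]), out)
            else:                                    # leaf key/value pair
                out[entry["key"]] = entry["value"]

# A two-block "HAMT": the root holds one leaf and one link to a shard.
blocks = {"shard1": {"entries": [{"key": "b", "value": 2}]}}
root = {"entries": [{"key": "a", "value": 1}, {"link": "shard1"}]}
view = HamtReifier(blocks.get).reify(root)   # -> {"a": 1, "b": 2}
```

The caller paths into `view` as an ordinary map; the multi-block structure has been hidden behind the reifier, which is the lens-stacking E described.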
E: Except what it means is that I can't implement this HAMT ADL — this HAMT thing — such that it works over both dag-cbor and dag-json. I have to reserve a codec, reserve a magic number, that's the same HAMT over dag-json, and then the same HAMT over dag-cbor, and then the same HAMT for every IPLD codec I might want to abstract over — and so I'm no longer decoupled from the serialization layer.
F: I guess my understanding of it is — it's cute, but I thought you could compose ADLs, and then obviously the order matters; you could put the serialization one in front, or just any sort of transformations you want to do, put them in order, and the result of that is again some sort of IPLD value which you can do whatever with.
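F's composition idea can be sketched as plain function composition over data-model values, where the stacking order visibly changes the result. The two lenses are made-up stand-ins for ADL or schema transformations, not real IPLD components:

```python
def compose(*lenses):
    """Chain data-model -> data-model transformations; order matters."""
    def apply(node):
        for lens in lenses:
            node = lens(node)
        return node
    return apply

# Two illustrative lenses (hypothetical stand-ins for real transforms):
drop_empty = lambda n: {k: v for k, v in n.items() if v}   # remove falsy values
add_defaults = lambda n: {"size": 0, **n}                  # fill in a default field

doc = {"name": "x"}
a = compose(add_defaults, drop_empty)(doc)   # default added, then dropped again
b = compose(drop_empty, add_defaults)(doc)   # default survives: order changed the result
```

Either way, the output of the chain is again an ordinary value, so anything downstream — pathing, selectors, further lenses — works on it unchanged.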
E: However, if what you try to do is hide the — if you say the codec is the thing that signals to you how to do, say, traversals, then either what you're doing is expanding the data model, to say the data model now also includes pathing.
E: Maybe that's fine, right — so, expanding the data model. But that expands it everywhere, and then you still need the ADL thing. For example, the UnixFS HAMT traversal wouldn't get covered by a special pathing definition in the dag-pb codec.
D: — if you look at them raw, just because it's efficient that way, and you can put together nice schemas to make the pathing nice. And with IPLD URLs, what we'll be able to do is combine that, so you can have your cake and eat it, effectively.
D: Well, for some cases where you need an ADL, schemas won't solve that. But down the line, WebAssembly might help: once we have WebAssembly auto-codec and auto-ADL stuff, you could do really, really crazy things with pathing — because, for example, you could run a PHP script inside a wasm blob to return a web page as part of pathing over it, or other very horrible, horrible things. That will be very exciting.
B: Yeah, the way that we in IPLD have told ourselves to look at this stuff is: let's do all of this in the data model. Let's invest all the functionality into the data model piece and push everything else outside of that. So codecs are just a way to get into the data model, but then we do these transformations of data model to data model to data model. So you can have the base data model —
B: — we call it the representation layer, that's what the codecs deal with — but then you can have a transformed version that is done through schemas. The js-ipld-schema repo that I linked in the chat has that really raw data form where everything's an array, and you just put that into the node —
B: — "I want to path this data, but I want it to be transformed by this schema." So you end up showing up with codec, schema, path, and data, and you have to smush all these things together. And then the additional layer that we're also dealing with — which works in Go, and we don't really have great answers for in JavaScript yet — is where you show up with an ADL that works across multiple blocks.
B: And you show up with ADLs, schemas, blocks, codecs, and "here's my path", and you smush them all together and you get what you want. The ideal is that you should be able to layer these things and then paths just work — same with selectors, same with a bunch of other operations — and it should operate through the transformations all the way down. That's the ideal that we're aiming for, but — yeah.
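The layering B describes — a codec gets bytes into the data model, transformations map data model to data model, and pathing runs over whatever comes out — can be sketched as three small functions chained together. All names here are illustrative, not a real IPLD library API; the "schema" step is a toy tuple-representation transform:

```python
import json

def decode_dag_json(raw):
    """Codec layer: bytes -> data model (dag-json is JSON-shaped,
    so plain JSON decoding stands in for it here)."""
    return json.loads(raw)

def schema_transform(node):
    """Schema layer: representation form -> typed view.
    Here, a tuple representation ["name", size] becomes a named map."""
    return {"name": node[0], "size": node[1]}

def resolve(node, path):
    """Pathing layer: one hop per segment, over whatever the lower
    layers produced — it never sees bytes or codecs."""
    for seg in filter(None, path.split("/")):
        node = node[int(seg)] if isinstance(node, list) else node[seg]
    return node

raw = b'["a.txt", 12]'
node = schema_transform(decode_dag_json(raw))
name = resolve(node, "name")   # -> "a.txt"
```

An ADL would slot in as one more data-model-to-data-model step between `decode_dag_json` and `resolve`, which is the "smush them all together" composition B describes.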
A: All right, we are almost running out of time, and some people have to run, so sorry for that.
B: I have to go — I'd love to talk about this. Maybe we can talk async, because I'm very interested in what you're doing with selectors and deacon party, because I was looking at that problem myself a while ago.
A: Cool, yeah — then I will close the meeting. Normally after this meeting we often have some after-hours, but as we're already over time we probably won't have one. Feel free to stay around in case we do. So thanks, everyone, for attending, and see you all again in two weeks. Goodbye, everyone.