From YouTube: 🖧 IPLD Every-two-weeks Sync 🙌🏽 2022-08-15
Description
An every two weeks meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
And we appear to be broadcasting on YouTube and recording our every-two-week sync for IPLD. It is the 15th of August, depending on where you are in the world, and we have a smallish crew today to look at what's happening in the IPLD world. So, has anyone else put their stuff before me? No, it looks like I'm first, so I'm going to do an update on IPLD first. I have done a lot of different stuff these past couple of weeks, ranging over a variety of things.

A
So, the thing that I spent probably the most time on: my first two items are related. There's a very obscure bug in the Filecoin deal-making apparatus, where a CAR for a Filecoin deal is made twice. It's made once to grab the CommP, which is basically a hash of the CAR, and it's made again when it's received, or depending on how the flow works.

A
There are these two places where CARs are made, and the CommP hash is compared between them. One component called Boost, which does a lot of this these days, was coming up with these mismatching CommPs. A CommP mismatch is basically a hash mismatch: the CAR that you gave me does not match the CommP that you told me about. They're very frustrating, and this bug has been showing up increasingly in this component and getting more frequent.
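As a rough illustration of why this bites (hedged: the real CommP is Filecoin's commitment over the padded deal payload, not a plain SHA-256 as sketched here), any commitment over the serialized CAR bytes changes if the same blocks come out in a different order:

```python
import hashlib

# Illustrative only: the real CommP is not a plain SHA-256, but the point
# survives either way: a commitment over serialized bytes depends on the
# *order* in which the traversal emits blocks, not just the block set.
blocks = [b"block-a", b"block-b", b"block-c"]

car_one = b"".join(blocks)            # order produced by one traversal
car_two = b"".join(reversed(blocks))  # same blocks, different traversal

digest_one = hashlib.sha256(car_one).hexdigest()
digest_two = hashlib.sha256(car_two).hexdigest()
print(digest_one == digest_two)  # False: same blocks, mismatching commitment
```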
A
Apparently no one could figure out what it was. So anyway, after lots of investigation, I found a really minor spec difference, well, a codec difference, something that we never factored into the dag-pb spec. It's all to do with traversals and, Eric, you're going to love this, the way that the data is instantiated into memory.

A
So it turns out, in dag-pb we have this Links array, and when you encode, by spec you're supposed to sort it by the Name field. It's a bytewise sort on the Name field when you encode. That's the spec thing; there are reasons why that might be nice, but it's not strictly required. It's just that all our codecs do that.

A
So all our dag-pb encoders, old and new, do this sort, and it's a stable sort, so if there's no name, or if there are duplicate names, they just stay as they are and go down. It turns out that go-merkle-dag, the old dag-pb handler...

A
...also does a stable sort when it loads from the bytes. But it does it in these really strange locations: you can load it from the bytes, and then you've got this dag-pb thing in memory, but it won't sort it until you make certain calls. So you make a call, and suddenly your links are sorted. But it shouldn't matter, if codecs are sorting links when they're encoding, and so it doesn't matter most of the time.
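The encode-time rule can be sketched in a few lines (hedged: the link objects and CIDs here are placeholders, not the real dag-pb wire format): a bytewise sort on Name, where stability means duplicate names keep their original relative order:

```python
# Placeholder link objects, not the real dag-pb wire format: the point is
# the spec'd encode-time ordering, a *stable* bytewise sort on Name.
links = [
    {"Name": "b", "Hash": "cid-1"},
    {"Name": "a", "Hash": "cid-2"},
    {"Name": "a", "Hash": "cid-3"},  # duplicate Name: stability keeps it after cid-2
]

# Python's sorted() is stable; comparing UTF-8 bytes gives the bytewise order.
encode_order = sorted(links, key=lambda link: link["Name"].encode("utf-8"))
print([link["Hash"] for link in encode_order])  # ['cid-2', 'cid-3', 'cid-1']
```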
A
But now we have more and more codecs in the wild, by third parties, that are not quite doing the right thing. So it turns out we have some major players that are storing stuff on Filecoin using their own codecs; there are a couple. One of them is written by somebody who's now on the stewards team, who's gone and fixed up his codec. Another one is by the tsar working group, which has academic data; they've written the dag-pb Python implementation, and it doesn't sort links when it goes out.

A
And so we had a CAR file with blocks that are out of order, because the traversal is different. And there's a bunch of other minor things in there; there's an investigation issue if anyone's interested in this. But basically I'm having to update the dag-pb spec to make this clear. I also have to write about the go-merkle-dag differences with go-ipld-prime, and there are some things to fix as well, but there was a little fix.

A
That was just basically extracting the links list early in this traversal path, before it gets sorted by some other code. That was the fix. So, lots of fun with that one, but yeah, the joys of sorting.

A
The second thing I've been working on comes out of that same issue, which is trying to remove go-merkle-dag as a path for Boost, because one way to solve this is to stop using go-merkle-dag. But it turns out that Will had already done some work in the go-car library, for various reasons.

A
We need a CAR writer that can skip a certain number of bytes at the beginning of the CAR before it starts writing out. Will had started this, and I picked it up during the last week to try and finish it off. I've got it mostly complete, and there are various constraints it has to live up to to be able to replace what's in there now.
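A minimal sketch of that kind of writer (hedged: `SkipWriter` is a made-up name for illustration, not the actual go-car API): it swallows the first N bytes it is asked to write and passes everything after through:

```python
import io

class SkipWriter:
    """Discard the first `skip` bytes written, pass the rest to `dest`."""

    def __init__(self, dest, skip):
        self.dest = dest
        self.remaining = skip

    def write(self, data):
        if self.remaining >= len(data):
            # Everything in this chunk falls inside the skipped prefix.
            self.remaining -= len(data)
            return
        # Drop whatever is left of the prefix, forward the remainder.
        data = data[self.remaining:]
        self.remaining = 0
        self.dest.write(data)

out = io.BytesIO()
writer = SkipWriter(out, skip=5)
writer.write(b"hello")   # fully inside the skipped prefix
writer.write(b" world")  # forwarded
print(out.getvalue())    # b' world'
```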
A
But I've got it to the point where I'm starting to question whether it's even worth it, because of the sheer complexity of doing this with a selector, with an arbitrary selector: having proper caching, being able to skip the first part of the traversal, and not touching the blockstore as much as possible.

B
If anyone wants to talk more about this... I guess I have a brief question, which is: one thing I've noticed a little bit is that pathing with ADLs covers a lot of what you might want with selectors. Not everything, mostly because selectors define a language, and because I'm not transferring my ADL code around, you can run my selector stuff.

B
Even when you don't have the ADL code, which we had some fun experience with when people were trying to use selectors to grab UnixFS data when there was no UnixFS ADL support inside of selectors, or no ADL support at all inside of selectors. And so I wonder, if you did this stuff on pathing instead of on selectors, whether it would be easier. To some extent, a lot of the logic that lives in go-merkle-dag and go-ipld-format, whatever, is path based, and so it may be...

B
...that trying to replicate that in the go-ipld-prime world is much easier, much more reasonable to do, if that makes sense. It just gives us another way of describing subgraphs that's less powerful but easier for us to work with.

A
I want a full graph, I want an exhaustive walk, and that turns out to be the most common mode of using selectors, and yet most of the code that we write for selectors is taking into account the complex cases. And that's what I'm finding here with the requirement for this go-car thing.

A
The immediate requirement is: I just want an exhaustive selector, I just want to walk the full graph. And I'm writing code that's trying to manage arbitrary selectors, and I don't even have a use case for arbitrary selectors. So it's like, okay, I could back up and just say this only does exhaustive, you can't even provide me with a selector, but then that brings up API discussions and all that sort of stuff, but even within the selector engine itself.
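The exhaustive case really is just a recursive walk; a toy sketch (hedged: a dict standing in for a blockstore, strings for CIDs):

```python
# Toy DAG: a dict stands in for the blockstore, strings for CIDs.
dag = {
    "root": ["a", "b"],
    "a": ["c"],
    "b": [],
    "c": [],
}

def walk(cid, visited=None):
    """Depth-first exhaustive walk: the 'give me everything' case."""
    visited = [] if visited is None else visited
    visited.append(cid)
    for link in dag[cid]:
        walk(link, visited)
    return visited

print(walk("root"))  # ['root', 'a', 'c', 'b']
```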
C
They were just first, by like a month or two, maybe not even that much. Okay, the oldest draft of selectors is way older, but by the time I heard about it, anyway. And yeah, pathing over ADLs is actually more powerful, because it has the ability to do data-dependent things, like understand the internals of sharding, which selectors still can't and never will. So yeah, if people want to build simpler APIs that explicitly talk about pathing and not selectors, and say hey, this covers a lot of use cases, see if you can get there.

A
Yeah, and that's what I was feeling with this go-car thing. I could get this to mergeability, but then there'd be two people that would be able to maintain it in the future, and given enough time, I wouldn't be able to maintain it, because I don't have the greatest memory, and so in a year's time I'll look back at this code and say, you know, and take a week to figure out what on earth it's doing.

A
So it's almost like the most common operation is: here's a root, or here's a path to a root, give me the full dag underneath that root. And possibly the second one is: here's a path, and I want that block, not the dag.
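Those two request shapes can be sketched against a toy DAG (hedged: dicts and strings stand in for a blockstore and CIDs): resolve the path, then either return that block or walk everything beneath it:

```python
# Toy DAG: each "block" maps link names to child CIDs.
dag = {
    "root": {"a": "cid-a", "b": "cid-b"},
    "cid-a": {"c": "cid-c"},
    "cid-b": {},
    "cid-c": {},
}

def resolve(root, path):
    """'Here's a path, I want that block': walk one segment at a time."""
    node = root
    for segment in path.split("/"):
        node = dag[node][segment]
    return node

def walk(cid):
    """'Give me the full dag underneath that root.'"""
    out = [cid]
    for child in dag[cid].values():
        out.extend(walk(child))
    return out

print(resolve("root", "a/c"))       # 'cid-c'
print(walk(resolve("root", "a")))   # ['cid-a', 'cid-c']
```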
B
Sometimes we have other ones, like, effectively, range requests: I want this byte offset. The things that have been added to selectors more recently, like I want to do an ADL thing, and I want to have a slice inside the matcher, those all came out of use cases that we probably still need. I'll say there's some good news and some bad news.

B
You could just use selectors as that syntax, and then just verify that they fit the model, that they don't do anything weird, when you get them. That might be one way to avoid a design discussion and get to the thing you're going for faster. Because I think what you're saying, I totally agree with you: there are all these optimizations we can take when we allow ourselves a few of the easy paths, and the easy paths cover a lot of stuff.
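A sketch of that "accept the syntax, restrict the model" idea. Hedged assumption: the document below is the commonly circulated "walk everything" selector in its DMT JSON form; the acceptance check itself is the illustrative part, not a real API:

```python
# The commonly circulated "walk everything" selector in selector DMT JSON:
# ExploreRecursive ("R"), no depth limit, whose body explores all fields.
EXHAUSTIVE = {"R": {"l": {"none": {}}, ":>": {"a": {">": {"@": {}}}}}}

def accept(selector):
    """Take the general selector syntax, but only admit shapes we implement."""
    return selector == EXHAUSTIVE

print(accept(EXHAUSTIVE))               # True
print(accept({"a": {">": {".": {}}}}))  # False: a shallow explore-all
```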
A
Yeah, okay, I'm just going to quickly finish the rest of my update, and then I'll make a note here and we can talk more about it at the end if we want. So some other stuff I've been doing: I finally went back and updated the JavaScript schema packages to the latest schema changes.

A
There was a flurry of changes last year to the schema language and DMT that was super frustrating, because the internals just changed a lot, and so I put a big to-do in the JavaScript land. I finally did that, because I want to be able to use them in some stuff I want to build.

A
So yeah, I updated that on the weekend, and we can now parse selectors and have the full DMT that's in the right format and all that sort of stuff, and there's also a validator as well. Currently it's fairly simple.

A
I want to validate that blocks match a schema. As part of that, there's also a TypeScript schema-schema, which is actually pretty nice to use when you're working with the schema DMT and you've got the TypeScript behind it telling you, you know, there's this thing here, or this thing can't be here, or this thing is optional. They're actually pretty nice to use in an editor, having that sort of TypeScript linting.

A
I am working towards putting this all into a newly named package, @ipld/schema. Currently it's ipld-schema on npm, but I want to have it at @ipld/schema, bundle it all into the same package, and have these utilities that you can pick out. One of the things I'm doing is updating the validator to be a transformer, a transform-and-validate.

A
It's a little bit like the layering in go-ipld-prime, where you pass just a JavaScript object through, you mush it with a schema, and either it doesn't come out, in which case it hasn't validated, or it comes out, potentially transformed according to the validator. So you end up with this representation-and-typed-layer version of things, and then you can do the reverse operation.

A
Where you say: this is my typed version, give me the representation version. And then you can do things like, you know, take your tuple structs and have arrays come out as objects that have got the right names, which would be quite nice. So I've been working on that.
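A sketch of that transform-and-validate layering for one case, a struct with tuple representation (hedged: hand-rolled for illustration, not the @ipld/schema API): fields are positional in the representation form and named in the typed form:

```python
# One schema case, hand-rolled: struct { x, y } with tuple representation.
fields = ["x", "y"]

def to_typed(rep):
    """Representation -> typed: validate the shape, attach field names."""
    if not isinstance(rep, list) or len(rep) != len(fields):
        raise ValueError("does not match schema")
    return dict(zip(fields, rep))

def to_representation(typed):
    """Typed -> representation: the reverse operation."""
    return [typed[name] for name in fields]

print(to_typed([1, 2]))                     # {'x': 1, 'y': 2}
print(to_representation({"x": 1, "y": 2}))  # [1, 2]
```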
A
That's not a high priority, so it's become something that I'm doing to get my head out of Go and do some JavaScript again, so I'll be working on that. The other two things are mainly discussion-y. There's a CIDv2 proposal in the...

A
...specs repo that we're talking about, that adds a metadata portion at the end of the CID in the form of, sort of, a second CID, so two together, and you can use that second one for things like content and context for the CID: here's what I'm linking to, and let me tell you a bit about it, so that you can carry context through links. There's an interesting discussion there, and I've had some more thoughts since the last time I posted.

A
You know, it's chopping and changing where this thing fits and how it's working, but that seems to be a fruitful discussion, and mostly it's people really learning about the design goals of the stack and getting in deep and excited about it. So that's very good.
D
Yep, so I haven't been super busy compared to Rod, but I've been making some more progress on the IPLD gateway. I'm actually in the process of expanding on some experiments.

D
I did some initially to get a URL parser and serializer layer in JavaScript for IPLD URLs, which includes the fancy parameters thingy. Once I've got an initial version of that, I'm going to try to integrate it into js-ipfs-fetch and just kind of show what it could look like to have a gateway that speaks IPLD URLs, so that we could also start filling out the IPIP for it more. Outside of that, I've been catching up on all the WebAssembly discussion stuff and rewatching all of the recordings from the IPFS Thing.

D
Seems my internet died for a bit. Did you hear me talking about YouTube?

D
Yeah, so you can look up all the wasm stuff on YouTube; I'm looking through it. There's also the discussion that happened last week, which has yielded some useful directions. So I'm going to try to understand this space a bit more and see if we can find a way to get a tight integration with all the IPLD stuff and all the WebAssembly stuff in the ecosystem.

D
I'm also working on moving a bunch of the IPLD stuff to ES modules. So I've actually worked with the build tools that were proposed. What this is going to do is get rid of a bunch of legacy build tooling; we're going to be using this thing called aegir.

D
I don't know how to pronounce it, but it kind of makes it easier to configure JavaScript stuff.

A
I've also been going through the IPFS Thing videos. They've been quite good to watch, but I have to watch most of them on like 1.5 or 1.75 speed.
D
Yeah, sorry for my random disconnections; this is what happens when you have a duopoly, it sucks. Anyways, the Filecoin grant-giver folks were like: hey, maybe this prolly tree standard could be done with Rust instead of Go to start. The question is: how feasible is that, what is the state of IPLD in Rust, in particular with reference to ADLs, and also how could it integrate with the FVM?

D
So I'm going to research that, because I haven't dug in too far. With Rust, it seems like it's basically just... so pointers would be helpful, but I'm probably going to read source code, yeah.

B
Yeah, I don't know if all of them made it up. There were a lot of talks around some of the IPLD stuff at the IPFS Thing; I think they're basically all up there. There were some more informal discussions; I'm not sure if they made it there. I'll have to see. I think some of the videos that were a little more informal didn't get posted, but I'll have to check on that.

A
Dean, maybe you should go and do yours. Okay.
B
I'll do mine in the meanwhile, okay. I guess I'll start with this UnixFS thing. So, following up on what Rod mentioned... oh, should I pause?

B
We have a person back? No? Okay, all right. Following up on what Rod mentioned: yeah, UnixFS implementations. Our specs and test fixtures are insufficient, and implementing UnixFS is not the easiest.

B
There's, I guess we'll call it, go-unixfsnode, and there's Iroh, which is implementing this in Rust for their IPFS implementation. There's probably a good opportunity here to try and collect the test fixtures that they're using and try and...

B
I suspect people will be happier to include the Rust implementation via FFI than the Go one, which means it might end up in more places, and so we should really make sure that that thing is correct and do what we can to help Friedel and co out there.

B
I guess that's my pitch for that: noticing that, you know, we've already found a few bugs in there just recently, and this thing is a whole bunch of years old.
B
This WebAssembly codec thing got some feedback. Using the IEEE floats as the floats, and only having one instead of positive and negative, seems reasonable. This does mean this is sort of the one way in which the spec will not cover the data model, but I think that's fine. As soon as people start wanting to use really big floats, we can talk about it; we can make a WAC v2. But until then, it doesn't seem like that's needed.

B
Yeah, I have a question. I guess I'll just raise it now and people can answer later, which is: is anything other than my Bencode codec implementation putting non-UTF-8 characters into strings? Because I feel like a lot of our codec implementations seem to reject that, and a lot of our implementations of things that I know about already, at least at the spec level, say that they should not have it.

B
I guess what I mean is, as far as I can tell, everyone's specs assume, like the DAG-CBOR and even dag-pb specs, I think, assume that the strings are UTF-8. There are some ancient comments in some of the go-unixfs code that say this is UTF-8, but I don't know if anyone ever abided by it, because it was a string comment in some code written six years ago.
A
The only place where this has really come up as a concern is Filecoin chain data. There was a field for the first year of Filecoin, this label field, that was used as just bytes, and it was encoded as a string, a DAG-CBOR string, and that's completely unreadable if you were to load that block in JavaScript.
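A sketch of why that's lost data rather than just ugly data: a UTF-8-only string layer has to substitute something for invalid byte sequences, and the substitution can't be reversed (Python shown here; JavaScript's behavior is analogous):

```python
# A label that is raw bytes, not valid UTF-8.
raw = b"\xde\xad\xbe\xef"

# Any UTF-8-only string layer must replace the invalid sequences...
as_string = raw.decode("utf-8", errors="replace")

# ...and re-encoding cannot recover the original bytes.
round_tripped = as_string.encode("utf-8")
print(round_tripped == raw)  # False: the original bytes are gone
```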
A
So it's just lost data, and I was coming up with strategies for sort of preserving the bytes of strings through the codec, so that you could access them if you needed them later on. But thankfully that got fixed in a FIP, I think it was late last year, and just changed to bytes, like it should have been in the first place. But there was that one place, and there's a lot of data out there with that.

A
It has been painful, and I think one of the other reasons this changed for the FVM stuff is to get that all sorted out, so that we don't have to muck about in Rust.

A
So it's a possibility, but I can't say I've seen issues that would flag that. If people are sticking to the Go implementations, they're probably fine, but as soon as they start to want to traverse through different implementations, and maybe once Iroh starts getting going...

A
Unless they bake in support for byte strings, and vice versa, then they might find some of these edge cases.
C
We don't know, right? I don't know. I would be absolutely shocked if there wasn't wild data out there in the aptly named wild. Like, as you correctly observe, the comment that this field should be UTF-8 is a comment in the protobuf headers from, like, closer to a decade ago than not. There has never, to my knowledge, been any code in prod anywhere, ever, checking it. So yeah, there's probably strange data out there; I just can't imagine that there isn't.
B
To some extent, I'm asking because I'm wondering if, even though I'm just following the data model, maybe the Bencode thing... like, am I rocking the boat by making a codec that will definitely end up putting bytes in string fields? Because that's how the Bencode format kind of works, unless I make the codec do something different.
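A sketch of why that's inherent to the format (hedged: a toy decoder, not a full Bencode implementation): Bencode "strings" are length-prefixed byte strings, so nothing stops them holding non-UTF-8 bytes:

```python
def bdecode_string(buf):
    """Toy decoder for one Bencode string: '<length>:<bytes>'."""
    length_part, _, rest = buf.partition(b":")
    length = int(length_part)
    return rest[:length], rest[length:]

value, _ = bdecode_string(b"4:\xff\xfe\x00\x01")
print(value)  # b'\xff\xfe\x00\x01'

try:
    value.decode("utf-8")
except UnicodeDecodeError:
    print("not valid UTF-8")  # yet it decoded as a Bencode 'string'
```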
A
I think if it's explicit at the codec layer, then that's sort of fine, because every language has a way of dealing with this. It just gets really awkward when we pass it up from the codec layer and say to the data model: here's this thing I'm calling a string, but it's actually just arbitrary bytes. That's when it starts to be a problem. But if it came out of the codec saying "this is maybe not a string", then that would be nice.
B
I don't remember if I already put up the Bencode codec spec or not... I did. All right, I'll just ping some more folks for reviews on it, or fill it out more if it needs to be. Yeah, right, I will ping some folks to review the Bencode codec spec, and we can chat about what's the better thing to do there.

B
I think some of the CIDv2 stuff is sort of interesting. You know, we can talk more about it later if there's time, but it seems a lot of this is about signaling: where does signaling live, where is it appropriate, and what types of signaling make sense inside of a CID. And so it seems reasonable that we're talking about this at the same time people are talking about...

B
...you know, the IPLD URI scheme, things like that. I'm also in the middle of writing up a couple of, you know, docs-post things, I guess an opinionated view on some of this.

B
In particular, sort of the combination of what I would like IPLD and IPFS to be able to do, what that means for having IPLD in URI schemes and things like that, and what this means in terms of the whole question of having large blocks.

B
If we have time, I'm sort of curious what people think about large-blocks-related things. I'll dump some links in the meeting notes, but if anyone reads them, note that they are very, very work in progress. So any comments and feedback are appreciated as this moves along, and that's all I've got for now. Yeah, we'll see, yeah.
C
I've mostly been here to look and see what people are up to. I am shifting what I'm focusing on with much of my time going forward, because I'm working...

C
...I guess I'll still certainly be supporting it as well: a project about computing and hashing, and I won't talk about it too much beyond that. But because wasm is popular here, I'll just say: think less wasm and more like dealing with the reality of programs. But anyway, not for this meeting.

C
I guess I'll confess that I did a little personal R&D day recently and discovered that Tree-sitter exists. I don't know if you folks have heard of this, but I want to just shout out that it's freaking amazing. It's a tool for building grammars, and it is the best tool for building grammars I've ever seen. It's incredible; I'm just gobsmacked by how good it is.

C
I won't say why I was looking into that, either; skunkworks, possibly nothing will ever come of it. But it is a lot easier to write parser grammars with this tool than anything else I've ever used. Including, Rod, I remember PEG.js being amazing, because when you used it, it looked good and I understood it. This is in that territory, similar, possibly better. It's really... it's incredible.

C
And on the prolly trees thing, I'll have to move this to a text chat since their internet's cheated on them, but my desire for big trees that are sharded and give me maps and are sorted is as high as ever. I would really be way more excited about using those for applications than I would be about, say, the random-order thing HAMTs do. It's fine, but sorted would be a lot nicer.

C
There are people working on Datalark, the Starlark bindings, on and off, so there's code being pushed into that GitHub repo still. It's kind of going along; we're trying to figure out where to go with some of it. There's a bunch of basic binding work done that you can use if you're not going to use any advanced schemas or, like, any fancy features at all.

C
You can just... it's already done. But we're working on how to use schemas as part of a way to make things syntactically more easy and fun, and to try to have, like, a DSL for wrangling data over there. That's a more interesting process and kind of evolving. So I just think that's cool; I want people to know about it if it hasn't been mentioned in a while.
A
Okay, so I think there are three topics that we can put in the notes afterwards. One of them we didn't go into here, that maybe we should, is the CIDv2 stuff.

A
Yeah, so Eric, I don't know if you've been following any of this stuff. The proposal is to smash two CIDs together and have the first one be the link, and the second one be what you put under the collective idea of context.

A
That's quite broad, and it ends up being largely signaling, but also genuine context, like: here's this thing I need to pass on as I traverse these links, that I don't have a good way to pass on otherwise. I suspect what it'll end up being used for, very heavily, is signaling: just more about what this data is and where it fits into the stack. And so one of the challenges with it is, since we're essentially having... like, the link part is fine.

A
Maybe we need to have a discussion about what multicodec is in that context, because maybe it's something different, and maybe the boundaries of that concept are different when you come to that second part. That's already come up in the initial draft of the spec itself. It came from the Lurk perspective, the compute-over-data perspective, where they really want to pass along this signaling thing: they've got these schemas for data that they really want to attach when they're traversing links, and they can't. Initially, their request...

A
...was: we want to be able to use multicodec for this in CIDv1, but we don't have enough numbers; we want to have arbitrary-size varints as multicodecs, so that we can have millions of these numbers to pass on, using the number as a signal. But then the idea of being able to fit it into a second CID comes up, and it's like: we can put anything in it as an identity multihash.
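A sketch of that "anything as an identity multihash" point (hedged: single-byte varints assumed, so payloads under 128 bytes; layout follows the multihash convention of code, length, digest): the "digest" is just the payload itself, so a second CID could carry inline context bytes:

```python
IDENTITY = 0x00  # multicodec table code for the identity hash function

def identity_multihash(payload: bytes) -> bytes:
    """code, length, digest: for identity, the 'digest' IS the payload.
    Assumes payload < 128 bytes so both varints fit in one byte each."""
    return bytes([IDENTITY, len(payload)]) + payload

def extract_payload(mh: bytes) -> bytes:
    assert mh[0] == IDENTITY
    return mh[2:2 + mh[1]]

mh = identity_multihash(b"schema-hint")
print(extract_payload(mh))  # b'schema-hint'
```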
A
But then what is the codec there? Anyway, so that seems like something that we should look at more carefully in that spec. Anyone else want to talk about it?
B
One I didn't add there, but that I think is a little bit interesting: if I understand correctly, we've seen a few times, and I don't know enough about the Lurk case to know if this is one of them, but it seems similar, we've seen cases where the structure of data is very rigid and the hashing is very rigid, and then that starts to cause problems.

B
Maybe if you had that, then you would just be in the same scenario we normally are. We're talking about people wanting to, you know... what type of typing are they trying to throw in the data? Are they trying to use nominative typing inside of CIDs, which seems probably not the point, because it's generally more about structural typing? Or is it something else?

B
We don't want to... I don't think it's our style to tell people "don't do this, because it's a bad idea" and stop them from doing it. Our style is more to give them the information they need to understand why this is a terrible idea, and then let them go ahead, as long as they're not getting in anyone's way.
A
That's where these discussions become hard, because we end up having to do some amount of policing, and it's an awkward kind of policing, because we're at that point of saying: look, we do have some criteria for getting into this table. So we want to have a discussion with you, and maybe gently talk you out of the thing that you're proposing doing, but at the end of the day we probably shouldn't be gatekeeping too hard. But what is "too hard"?

A
You know, there is a gate there, but is it just open? It's not open. And this CIDv2 does open that even more, and I think what we would see is we'd come back to that discussion of: well, it's a multicodec, I want to use multicodec for signaling. And then we're saying: no, no, that's not what it's for. But maybe in that second spot it's not a multicodec.

A
Maybe it is a signal, or something else that could be a multicodec or could be something else, and maybe we can put them in the table, but when you call it that thing, then we don't gatekeep so strongly, because it's like: well, you want to do that signal thing? Great, put the signal there.
B
Yeah, I mean, I guess it is a side thing, and again I don't know whether this needs to be a full CID thing or not. Like, if we had a parameter in there, right, the same way that multihash has a digest: if we had a parameter for the codec, in theory we could have a codec that was just called, you know, "yolo", and then you put a string in there, and that's the name of your codec.

B
So you don't have to reserve a number in the table, and it starts looking a little bit like how libp2p operates, where you can choose your protocol name, and you should just make sure you're not colliding with other people unless you know what you're doing. I think, to some extent, they'll probably end up thinking about the same problem too.

B
I know that when they get time to think more about how to improve Protocol Select, or the thing that replaces multistream, I think they're interested in how they can save a few bytes for common protocols by swapping the names with numbers, but still preserve the names thing that makes people not have to use the table. But sometimes it's not even about the table, right? It's about other people needing to support your data structure. Like, I can't move my...

B
...if you don't understand the IPLD codec I've given you, it's not an option I have. And if we start adding support for ADLs, the same thing happens: whether it's a string, as it is right now, or a number, you still need the code. Which is where people start getting into, like, WebAssembly land, where they're like: what...

B
...if I could, you know, if I could just pull my data from, like, the global universal registry and then execute it in the sandbox? That's where that desire comes from: to resolve that part of the equation.
B
Yeah, I'd like to see more examples. Like, I was hoping, and I guess we can ping some folks like Michael, to get where they're coming from, because I have a suspicion that the types of signaling that he's looking for and the types of signaling the Lurk folks are looking for are not the same, and understanding what's going on there will probably help us learn more about what this needs to look like. Because I guess one of my concerns is, everyone's like...

B
...oh, there should be two things, mine and yours, right? Or the regular one and the one I need for my format. And, like, I say that because I have a special thing in mind: I want an "A" at the end. And, like, Eric comes by and he's like: yep, sounds like a good idea, there should be the standard one, and there should be an extra one at the end that's, like, "E" for Eric. And then we're both like: yep, yep, one plus one. And then we start using it.
A
Yeah
and
you
can
already-
you
can
already
see
that
problem
in
the
fact
that
this
was
birthed
from
michael,
but
then
it
was
taken
to
respect
by
lurk
and
even
in
that
process
it
became
something
a
bit
different,
so
yeah.
I
agree.
We
do
need
to
see
it's
I'd
like
to
I'd
like
michael
to
nominate
someone,
maybe
iraqi,
to
come
and
tell
us
about
what
this
thing
would
be
used
for
and
how
and
because,
because
I
do,
I
do
suspect
some
of
it.
A
Some
of
it
is
very
much
that
that
problem
of
we
want
to
do
this
multiplicity
of
things
and
we're
stuck
by
not
being
able
to
signal
and
new
cans.
I
think
come
into
play
here
as
the
big
thing
in
the
the
big
elephant
in
the
room.
Maybe
it's
not
even
an
elephant,
but
it's
it's
in
the
room
and
they
want
to
do
more
stuff
with
ucans
and
cids
and
and
then
we
get
to
the
discussion,
if
you
do
can
be
a
codec
or
just
taxi
ball
with
some
other
signaling.
B
Yeah
and
including
things
like
you
know,
even
things
like
signature
verification,
like
sometimes
people
put
someone
put
signature,
verification
and
again
because
the
signal
is
there,
they
want
to
put
it
in
the
codec
because
they're
like
well,
then
I'll
know
it
works
and
I'll
know
it
works
with
this
type
of
tooling
that
when
it
loads
the
data
is
all
sort
of
the
signatures
are
all
verified
you're
like,
but
is
that?
Is
that
what
you
wanted?
B
A
It does, but we are trying to support an ecosystem of doing things that we wouldn't necessarily do ourselves, and getting past that problem of: we want the data to pass through all the systems with the DAG integrity maintained throughout.
A
That
seems
like
a
top-line
problem,
and
if
we,
if
we
don't
provide
people
tools
to
do
the
things
that
they
want
to
do
that,
we
may
not
want
to
want
them
to
do,
but
they
want
to
do
anyway.
Then
we're
going
to
end
up
with
a
lot
of
codecs
that
can't
pass
through
our
system.
So
maybe
this
is
simply
just
an
escape
patch
for
hey.
You
know
just
use
the
codex.
You
have
give
us
put
it
in
the
city
like
normal,
but
you
can
put
extra
stuff
in
there.
A
That
does
your
thing
and
it'll
still
pass
through
the
systems,
and
then
your
thing
will
be
maintained
all
the
way
through
and
then
you'll
get
it
at
the
other
end
and
really
do
your
custom
thing
with
it.
Maybe.
C
B
Yeah, although again, if you just need a little more signal on things that are sort of actually data, then maybe, instead of an extra CID or an extra multihash at the end, it's just an extra optional field on the codec, right? Where I can be like: nominative typing for my application parameter, because I've decided.
A
I think we should push the discussion in that issue into why: why a multicodec plus a multihash? Do we really need that? Because then we get to the problem of: it's still called a multicodec and a multihash, but if you're not actually going to use either of those things for what they're called, then maybe that shouldn't be what we're doing. Because I can imagine a lot of situations where people are just saying, oh, all I want is a signal.
A
I'm
going
to
put
a
number
in
there,
I'm
going
to
use
the
multi-coding
field,
but
then
I've
got
this
multi-hash
thing
and
it's
just
going
to
be
three
zeros
just
because
so
it's
a
null
so
we're
going
to
end
up
with
the
number
and
three
zeros
and
if
that's,
if
that's
what
we
end
up
with
with
civ2
v2,
it's
just
a
cidv1,
a
number
and
three
zeros.
Then
that's
a
bit
of
a
failure.
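The failure mode being described — a "CIDv2" that degenerates to a CIDv1 plus a bare number plus zero padding — can be sketched in bytes. This is a hypothetical illustration, not any proposed spec: the version number, the SIGNAL codec value, and the overall layout are made up for the example; only the varint encoding and the CIDv1 prefix structure follow the real multiformats conventions.

```python
# Hypothetical sketch: what "a CIDv1, a number, and three zeros" could
# look like on the wire. Not a spec; SIGNAL and the v2 layout are invented.

def varint(n: int) -> bytes:
    """Encode an unsigned integer as an unsigned LEB128 varint."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

# A genuine CIDv1 prefix: version 1, dag-cbor (0x71), sha2-256 (0x12),
# 32-byte digest (zeroed here for brevity).
cidv1 = varint(1) + varint(0x71) + varint(0x12) + varint(32) + bytes(32)

SIGNAL = 0x3E7  # made-up "application signal" multicodec number

# The degenerate case: the extra multihash slot carries no real hash,
# so it collapses to an identity hash code (0x00) with zero-length digest.
cidv2_ish = varint(2) + cidv1 + varint(SIGNAL) + varint(0x00) + varint(0)
```

Here the tail of `cidv2_ish` is literally the signal number followed by zero bytes, which is the point of the complaint: the multihash field is named for a hash but carries none.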
B
Yeah, and the ramifications of this — like understanding the alternative — so that even if, you know, everyone agrees we've found the new answer, CIDv2, here it is, it'll save the day, there's going to be a while before it gets rolled out in a lot of places. And helping people understand: do I want to try using the CIDv2 thing, or do I want to just stick with CIDv1, because it'll give me — and I'll just get around this somehow, I'll put my signals in a different place.
B
Right
yeah
seems,
I
feel
like
that
kind
of
thing
seems,
like
you
know
again,
interesting,
there's
a
thing
for
us
to
explore
and
understand
right
like,
for
example,
I
could
see
like.
B
Maybe
this
makes
a
lot
of
sense
for
lurk.
What
lark
is
doing
is
very
different
from
what
basically
anyone
else
is
doing
around
this
stuff,
and
so,
if
they
wanted
to
mandate
like
no
we're
waiting
around
for
people
to
update
and
take
this
new
stuff,
and
you
need
that
to
interoperate
with
what
we're
doing
anyway.
A
B
A
But I think Lurk is different, but it's also similar to all the non-Go use cases. I think we're hitting a problem where we come to all this stack through the lens of Go and our Go stack — so ADLs and the nice things fitting nicely together. If you come from a perspective of not having those things, then your concerns start lining up differently. We're seeing that in the JavaScript world, which is why we're seeing it come from Michael and the UCAN people.
A
They
don't
have
that
same
stack
and
they're,
viewing
things
differently
and
they're
saying.
Well,
we
just
we're
just
gonna,
have
lots
of
codex
and
they're
like?
Oh,
maybe
that's
not
a
great
idea,
but
we
want
other
things
so
and
I
think
and
the
rust
and
well
I
mean
look,
it's
not
even
rust
to
really
want
to
define
it
purely.
A
But
it's
lurk
is
in
that
world
of
they've
got
their
own
stack
and
it's
like
the
basics
of
it,
and
we
may
even
see
things
similarly
come
from
come
out
of
what
iro
is
doing
and
moving
away
from
the
core
go
land.
So
maybe
this
is
just
a
symptom
of
not
having
a
full
ipld
stack.
B
Yeah
yeah,
I
think
that's
part
of
the
thing
to
explore,
is
understand
like
where
yeah,
where
some
of
the
boundaries
are,
and
some
of
it,
I
think,
is
also
making
it
like
easy
for
people.
Like
one
thing,
I
I
noticed
when
I
was
doing
some
of
the
the
webassembly
ipld
stuff
was
that
or
just
trying
to
implement
the
or
even
I
guess,
even
aside
from
the
web
assembly
stuff,
just
trying
to
implement
a
new
codec
and
adl
for
bittorrent
things
was
like.
B
It
wasn't
obvious
where
to
draw
the
lines
which
logic
lived
where
and
like.
While
I
could
just
decide
on
one.
It
felt
like.
B
That
that,
like
mental
burden
of
needing
to
think
about
it,
I
feel
like
can
somehow
can
like
throw
people
off,
and
so
maybe
like
yeah,
whether
it's
just
you
know
how
the
different
layers
of
I
guess,
lenses
or
the
different
layers
of
like
these
interpret
interpret.
As
this
interpret
as
this
layers
that
we
keep
throwing
on
stuff.
Helping
people
understand
when,
when
to
use
the
different
sets
of
tools
and
like
what
they're
for.
A
All
right
so
cool,
well
I'll,
stop
the
the
youtube
and
thanks
everyone
for
tuning
in
repeated.