From YouTube: 🖧 IPLD Every-two-weeks Sync 🙌🏽 2021-09-27
Description
An every two weeks meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
B
Hello everyone, and welcome to this biweekly IPLD meeting. It's being streamed for the first time in a long while, which is great. It's September 27, 2021, and as usual, every two weeks we go over the things that we, or other people, have worked on or heard about, and then discuss any open items there might be. I don't have anything to report recently, so I can hand it over to Danielle, who has something to say.
C
Yeah, I was totally not finished with my own notes here. I actually haven't been in any of these calls for about two months, because I was away for a couple of weeks, then we had a team meeting, and then I was traveling, and so on. So I'm going to give a two-month update instead of a two-week update.

The main chunk of IPLD work I've been doing is something Eric and I had been talking about in early summer, which we call the "schema schema" work. Essentially, it's being able to parse and use schemas written in the schema language that we define in the specs, because right now there's no parser for that. There is the go-ipld-schema repo, which does implement a parser and a sort of AST, but it was never — what's the word I'm looking for — fully merged into the main go-ipld-prime repo. So what I've been trying to do is think about how it could fit, because I believe Eric tried a couple of approaches, with a schema package and also a schema2 package, but neither of them really worked. There's a PR, which I linked in the notes, that works as a proof of concept: it parses a schema into a DMT, which is just an AST, a syntax tree, and then it has another function called Compile.
C
That,
then,
gives
you
a
type
system
with
those
types
and
that
works
well,
it's
supposed
to
work.
I
think
I
didn't
try
to
use
it
and
I
just
panicked,
but
you
know
it's
got
bugs
it's
a
proof
of
concept,
so
I
believe
eric
suggested
to
move
that
to
a
different
package.
So
I
will
do
that
should
take
like
an
hour
and
I
believe,
after
that
it
can
be
merged.
C
And then it's just going to be a matter of actually seeing that the whole thing works: going from a schema file to a type system, and then using that with bindnode or whatever. I think it should work; as far as I've seen it doesn't yet, but it might be some silly thing that's missing. And then, ideally, we would end up with a top-level function like ipld.LoadSchema.
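As a toy illustration of that staged design — parse the schema text into a DMT-like tree first, then compile that into a queryable type system — here is a stdlib-only sketch. The types and the `ipld.LoadSchema`-style convenience wrapper are made up for illustration; this is not the real go-ipld-prime API.

```go
package main

import (
	"fmt"
	"strings"
)

// TypeDecl is a stand-in for one node of the schema DMT (the parsed syntax tree).
type TypeDecl struct {
	Name string
	Kind string
}

// ParseSchema is a toy first stage: it turns lines like "type Foo string"
// into DMT nodes. The real schema language is much richer than this.
func ParseSchema(src string) []TypeDecl {
	var dmt []TypeDecl
	for _, line := range strings.Split(src, "\n") {
		fields := strings.Fields(line)
		if len(fields) == 3 && fields[0] == "type" {
			dmt = append(dmt, TypeDecl{Name: fields[1], Kind: fields[2]})
		}
	}
	return dmt
}

// Compile is the toy second stage: it turns the DMT into a "type system"
// that can be queried by type name.
func Compile(dmt []TypeDecl) map[string]TypeDecl {
	ts := make(map[string]TypeDecl, len(dmt))
	for _, d := range dmt {
		ts[d.Name] = d
	}
	return ts
}

// LoadSchema mimics the desired top-level convenience: text in, type system out.
func LoadSchema(src string) map[string]TypeDecl {
	return Compile(ParseSchema(src))
}

func main() {
	ts := LoadSchema("type Foo string\ntype Bar int")
	fmt.Println(ts["Foo"].Kind, ts["Bar"].Kind)
}
```

Keeping parse and compile as separate stages means tooling can inspect or transform the DMT (which is itself just data) before committing to a type system.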
C
The other piece of work I had done before this is bindnode, which is an IPLD node implementation without code generation. I haven't really touched it in a long time; it seems like Eric has been doing some work there. I don't believe many people have been using it, but as a proof of concept I think it works okay. What I was writing here is that I might spend some of my next rotation fixing some bugs that he uncovered and adding some more tests. And that is it for me, I think.
D
Sure. I put down a few notes of things that are going on that I think are at least tangentially related to IPLD — not all of this is stuff I'm going to take credit for. One thing we've been thinking about is better ways to get CARs created. Right now, if you've got a bunch of files, or a file and a directory, or that sort of stuff, and you want to pull it into a CAR, the recommended path is to import it into IPFS — which makes use of a very large amount of code called the mutable file system abstraction — and then, once you've got it there in that mutable file system, you can export it from IPFS out to a CAR. The question is: is there a direct way to go from a file or a directory to a CAR?
D
I've started looking at it a little bit — I should push what I have. I'm doing this as a combination of, I think, better builder abstractions that we can add to the go-unixfsnode ADL abstraction that's been written for how we think about UnixFS data, along with the car CLI command that we've added somewhat recently to the go-car repo. It would be really nice if you could just say something like `car pack` and then either a file or a directory, and get the CAR representation of that. Based on what's happening there, it looks like it's maybe going to be annoying to replicate all the functionality that's possible with IPFS as it does this ingestion — right now there are a lot of different options that are possible, and we don't necessarily need all of them, but we need something that's compatible, and that should actually be pretty doable, at least for the file part.
D
I think Eric has maybe also looked at some of this in an IPLD CLI that he started poking at the other day. Another one that I know of that's going on — they're not on the call, but the people from Kubelt have made a CLI called makecar that builds a CAR out of JSON-structured data. So if you've got JSON-LD data and you want to turn that into an IPLD DAG in a CAR, it'll do that conversion.
D
So if you've got structured textual data, or stuff where we've got a schema, you can do an IPLD-native conversion to a CAR, and that seems like a useful thing to see enter the world a little bit more as well. A short update on go-ipld-prime in go-ipfs: that got merged to master a while ago and is almost out the door. There are some flags changing, but I think that's largely fine.
D
And then there's some underlying work going on in graphsync — the data transfer protocol, as it relates to IPLD. The interesting thing there: we should see pretty soon an update to go-data-transfer/graphsync that supports partial DAG transfer. So rather than just failing, the transfer simply stops when it can't follow a link, and lets the client decide whether that's a failure or whether you just got what you got — you can potentially make another request for that missing link, or something. And then the other thing we'll need to think about eventually is how graphsync as a protocol, especially in this sort of world, can be extended to multiple sources.
D
There's this fundamental thing where bitswap can handle getting blocks from multiple peers at the same time, while graphsync right now is a one-to-one protocol. We probably need to scope what the selector and graphsync changes are to be able to say things like: great, I got this root block from you, and it has a bunch of children. There's existing work on how to do this efficiently that we can lean on, but that's probably going to be the next larger design/algorithms/practical thing that needs to happen in that data transfer stack. So, some exciting stuff there!
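The "stop instead of fail" behavior described above can be sketched in miniature: walk a DAG held in an in-memory block map, and when a link points at a block we don't have, record it and keep going rather than aborting. The types here are toys, not the graphsync or go-data-transfer API.

```go
package main

import "fmt"

// Block is a toy DAG node: a list of links (by id) to child blocks.
type Block struct {
	Links []string
}

// WalkPartial traverses from root, returning the ids it visited and the
// ids of links it could not follow, instead of failing on the first gap.
// The caller then decides whether missing links are an error or just
// "you got what you got", and can re-request only the missing ids.
func WalkPartial(store map[string]Block, root string) (visited, missing []string) {
	seen := map[string]bool{}
	queue := []string{root}
	for len(queue) > 0 {
		id := queue[0]
		queue = queue[1:]
		if seen[id] {
			continue
		}
		seen[id] = true
		blk, ok := store[id]
		if !ok {
			// Can't follow this link: record it and keep walking.
			missing = append(missing, id)
			continue
		}
		visited = append(visited, id)
		queue = append(queue, blk.Links...)
	}
	return visited, missing
}

func main() {
	store := map[string]Block{
		"root": {Links: []string{"a", "b"}},
		"a":    {},
		// "b" is deliberately absent: a link we cannot follow.
	}
	visited, missing := WalkPartial(store, "root")
	fmt.Println(visited, missing) // [root a] [b]
}
```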
B
Thanks. I just reminded myself that I do have something to say, because I'm still maintaining the Rust libraries for IPLD. There's not that much happening, but there's an open PR about making Read/Write no_std-compatible, and if anyone's watching this, more eyes would be appreciated — I mean, I have some experience with no_std, but if someone has more experience, that would be great. I'd like to have another review on that one. Yeah, that's all. Next is Eric.
E
Yeah, so I will also start off with some recap-type stuff, because some of this has not been touched on in the videos that we've recorded and made public. In the same sort of box of history where we started actually seeing go-ipfs use a bunch of this stuff, we've cranked out and tagged a couple of releases of the go-ipld-prime library. There was a version 0.11.0, where the most notable change was that a bunch of our codecs started sorting map keys on encode. They still won't force sorting on you on the way in — be aware of this as you update things, because it's one of those fun changes that does not actually get hinted to you by the compiler. There was also a version 0.12.0 tagged, and a couple of patch releases after that too.
The big news there is that there was a really big refactor, so that there is a new package called datamodel, into which a bunch of the most core interfaces went. As a result, the root package of the repo — the ipld package — contains so many aliases, using the Go type alias feature, that I believe almost any user can just upgrade to this and nothing happens. In the future, I think if you're writing library code that you want to be highly reusable by other people, you should use the new lower-level datamodel package; and if you're writing code just for yourself and you don't really care about reusability, go ahead and keep using the root package.
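A quick stdlib-only illustration of why type aliases make that upgrade seamless (the names here are stand-ins, not the actual go-ipld-prime declarations): an alias declared with `=` is the same type, not a new one, so values and implementations flow between the old and new names with no conversion and no compiler complaints.

```go
package main

import "fmt"

// Pretend this is the new lower-level "datamodel" package's core interface.
type Node interface {
	Kind() string
}

// Pretend this alias lives in the old root package. It is NOT a new type,
// just another name for the same interface, so existing code keeps compiling.
type LegacyNode = Node

type stringNode struct{ v string }

func (s stringNode) Kind() string { return "string" }

// Old code written against the root-package name...
func describeOld(n LegacyNode) string { return "kind: " + n.Kind() }

// ...and new code written against the datamodel name are fully interchangeable.
func describeNew(n Node) string { return "kind: " + n.Kind() }

func main() {
	n := stringNode{v: "hi"}
	// The same value satisfies both names because they are one type.
	fmt.Println(describeOld(n) == describeNew(n))
}
```

Contrast with a defined type (`type LegacyNode Node` without the `=`), which would be a distinct type and would break callers.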
E
The only real functional difference is that the root package drags in more transitive dependencies — for example, you will always drag in a JSON parser even if you didn't technically need it. And in the future, now we can accept a lot more PRs about adding helper methods and cool examples to the package, and things like this. So, very exciting.
E
One of the other pieces of review news — it's a couple of weeks old now, but still really cool, so I want to touch on it briefly — is the testmark fixtures format. This is a new format for writing fixtures and embedding them in your markdown. You still have to write a little bit of glue code in your language of choice, but parsing the fixtures out can now be automated, and you can start scripting around that stuff. I've written a system in Go, and Rod has already contributed a system in JS — it took him a day, because he's both a wizard and, I'd like to claim, because the parser is fairly easy.
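For a sense of how simple that parsing is: testmark names a data hunk with a markdown comment line of the form `[testmark]:# (name)` immediately before a fenced code block. A minimal stdlib-only extractor in that spirit (a sketch, not the real go-testmark library, and without its edge-case handling) might look like:

```go
package main

import (
	"fmt"
	"strings"
)

// ExtractFixtures scans markdown for testmark-style hunks: a line like
// "[testmark]:# (some/name)" followed by a fenced code block. It returns
// a map from hunk name to the block's contents.
func ExtractFixtures(md string) map[string]string {
	fixtures := map[string]string{}
	lines := strings.Split(md, "\n")
	for i := 0; i < len(lines); i++ {
		line := strings.TrimSpace(lines[i])
		const marker = "[testmark]:# ("
		if !strings.HasPrefix(line, marker) || !strings.HasSuffix(line, ")") {
			continue
		}
		name := line[len(marker) : len(line)-1]
		// The next line should open a code fence; capture until it closes.
		if i+1 < len(lines) && strings.HasPrefix(strings.TrimSpace(lines[i+1]), "```") {
			var body []string
			for j := i + 2; j < len(lines); j++ {
				if strings.HasPrefix(strings.TrimSpace(lines[j]), "```") {
					i = j // resume scanning after the closing fence
					break
				}
				body = append(body, lines[j])
			}
			fixtures[name] = strings.Join(body, "\n")
		}
	}
	return fixtures
}

func main() {
	md := "[testmark]:# (hello/input)\n```\nsome fixture data\n```\n"
	fmt.Printf("%q\n", ExtractFixtures(md)["hello/input"])
}
```

Because the marker is a markdown comment, the document still renders normally on GitHub while remaining machine-extractable.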
E
We started using this in a couple of pieces of the IPLD specs-and-docs website already, so our latest generation of fixtures and docs for selectors is the same thing: the documentation is literally the data that we evaluate in the tests. So, for the first time, I think the documentation on selectors is good.
E
I hope we start doing more of this. I think Rod's also converted some codec fixture stuff. It's really nice — I'm just using it everywhere I can. In more recent news: if anybody out there is hankering to keep an eye on the progress towards implementing things like UnixFSv1 as ADLs, I popped open an issue for that — it's number 258, or you can go to the HackMD and find the link. We'll see if anybody is going to be able to pick that work up and focus on it.
E
But now there's at least a discussion point there. I'm also kicking off a new, interesting little project — maybe it'll go somewhere, maybe it won't: I'm making an IPLD CLI. I'm going in with the intention of building some sort of jq-like vibe here. I think a bunch of us have talked about this for just ages, and I'm going to start taking a chisel to stone and doing it, so we'll see how that goes. I'm using more testmark to document the heck out of it from the start and to try to build examples of how this should all work.
E
And if people have ideas about the behaviors that they would want from it, I think it's already in a PRs-welcome state. I've gotten it to the point where there are some hello-world commands and a little bit of framework showing where you would add new commands and new test fixtures for them. So from now on, if anybody has an idea for something they want to see, or think would be cool to do —
E
— let's do it. I'm putting this, for the moment anyway, inside the go-ipld-prime repo as a separate Go module, so it does not drag additional transitive dependencies into your system if you're just using the libraries, but it's still in one repo, so development is pretty straightforward. I have no idea how that's going to go; I hope it goes okay. If it doesn't go okay, it'll get extracted to a new repo — we'll see.
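The separate-module-in-one-repo setup described there amounts to a nested `go.mod`: the Go toolchain treats each `go.mod` as a dependency boundary, so library consumers never pull in the CLI's dependencies. A sketch of such a layout (the paths and module names here are illustrative, not the actual ones in go-ipld-prime):

```
go-ipld-prime/
├── go.mod          # module github.com/example/go-ipld-prime (libraries only)
├── datamodel/      # library packages: importing these pulls in only this module
└── cmd/cli/
    ├── go.mod      # a separate module: CLI-only deps are declared here
    └── main.go
```

During development, a `replace` directive in the inner `go.mod` can point back at the repo root so the CLI always builds against the local library code.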
E
I still have one more thing — I'm so sorry for taking so long. There's also a new repo out there called go-datalark, which is a portmanteau of, I don't know, "data" and "starlark". Starlark is an interpreted language that has been hoisted out of the Bazel project, and it's kind of a cool little Python-like dialect that's easily interpretable and has, I think, at least three different interpreters now — there's a Go one, a Rust one, a Java one. If anyone is interested in this, PRs are extremely welcome. I kind of did a weekend hobby project here, and we'll see how far it's worth going with. So if anyone's interested: yes, please find that repo and get in touch.
A
Mine's not as interesting! I've spent a lot of time doing just minor supportive work across both JavaScript and Go — lots of little dangling pieces in the JavaScript stack that sort of fall on me. For multiformats and IPLD in JavaScript, the state of the stack is pretty good, but again fairly minimal in terms of the level of support it offers, and we haven't found a way to prioritize anything more sophisticated.
A
I do tinker fairly regularly with alternative approaches, but it's hard to prioritize doing that, because the need isn't as clear. It would be nice to have some more higher-level functionality for IPLD in JavaScript — one day. So I've been getting a lot more familiar with go-ipld-prime, doing Go work here, and that's been good; contributing to the go-ipfs work as well was very instructive. Now I've got my head all around selectors — both consuming them and working with them — and a bunch of other things as well. So that's been good, but there's not a lot to report there. The only other major thing was the codec fixtures work that Eric mentioned: there's a repo, codec-fixtures, with the fixtures.
A
So we take some data, we decode it using one codec, and then we re-encode it using a different codec, and then we check that the CID is what we expect. When we do that between all the combinations and check that the CIDs are the ones we want, we can assert that each codec both decoded the data correctly and encoded the data correctly, going both ways. So it's using codecs to test each other, and CIDs to assert correctness.
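The round-trip idea can be sketched with stdlib pieces only — here JSON stands in for an IPLD codec, and a sha256 hex digest stands in for a CID, so this illustrates the shape of the check rather than the real codec-fixtures harness:

```go
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
)

// digest is a stand-in for a CID: a hash over the encoded bytes.
func digest(encoded []byte) string {
	return fmt.Sprintf("%x", sha256.Sum256(encoded))
}

// RoundTrip decodes fixture bytes and re-encodes them, returning the digest
// of the re-encoded form. In the real harness the decode and encode steps
// use different codecs, and the expected CIDs are stored as fixtures.
func RoundTrip(fixture []byte) (string, error) {
	var data interface{}
	if err := json.Unmarshal(fixture, &data); err != nil {
		return "", err
	}
	// encoding/json writes map keys in sorted order, giving a deterministic
	// encoding to hash — the property real codecs need for comparable CIDs.
	reencoded, err := json.Marshal(data)
	if err != nil {
		return "", err
	}
	return digest(reencoded), nil
}

func main() {
	// Two encodings of the same data (different key order and whitespace)
	// should converge on the same digest after a round trip.
	a, _ := RoundTrip([]byte(`{"b": 2, "a": 1}`))
	b, _ := RoundTrip([]byte(`{"a":1,"b":2}`))
	fmt.Println(a == b)
}
```

If either the decode or the encode side mangles the data, the digests diverge, which is exactly how mismatched CIDs flag a codec bug in the fixtures repo.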
A
That
data
has
also
been
extracted
using
the
test
mark
format
into
the
new
ipld
ipld
repo,
but
it's
not
yet
consumed
from
there.
That's
another
bit
of
work.
There
will
there's
probably
always
going
to
be
a
case
for
a
special
repo
to
do
to
run
some
of
the
tests
just
because,
when
you're
pulling
in
all
the
codecs
together,
it's
like
like
I-
I
don't
in
javascript,
for
example-
I
don't
want
to
be
te.
A
I
don't
want
to
be
pulling
in
dag
pb
and
dag
jason
just
to
test
dag
the
dag's
ebook
ziba
repo,
and
you
don't
want
to
be
pulling
them
all
into
each
other's
repos
just
to
test
them.
A
It's
a
little
bit
different
in
go
where
it
might
be
practical,
at
least
to
a
lot
of
this
in
go
I
billy
prime,
but
then,
if
we
start
adding
fixtures
for
like
dag
jose,
which
I
I've
got
a
pull
request
for
to
add
the
deco
say
fixtures
into
there
or
some
basic
ones
which
they
don't
work
across
languages
at
the
moment.
So
there's
problems,
then
you
just
you're
pulling
you're
potentially
pulling
in
all
sorts
of
codecs
from
around
the
place
to
test
each
other.
A
So I think the case for a separate repo probably still remains, in some form, even if we do push the fixtures to that docs — the specs and documentation repo. And there's a cron GitHub Action in there as well that will pull in the latest of all of the pieces and test them, so we sort of get early warning of any failures if there's any weirdness going on in any updates.
A
So that's that. Aside from that, there's not a whole lot of other interesting stuff; it's mainly just been small contributive work around the place. So that's me.
B
Thanks. Any other updates from anyone?
C
I left a couple, yeah, but they're short ones. They're more directed at Eric, but I thought I might as well bring them up here. One of them — I think I brought it up in the IPLD channel in Slack a couple of weeks ago — is that back in winter, somebody contributed a new multicodec Go module, because the previous one we had was massive and unmaintained. The new one is just code-generated constants with a String method and a Set method, I believe, and that works fine.
C
But
after
that
ipld
prime
also
gained
a
multi-codec
package,
which
is
a
multi-codec
registry,
for
I
think,
registering
encoders
and
decoders,
and
that's
fine,
but
having
the
two
packages
with
the
same
name
is
really
confusing.
So
could
we
please
rename
the
ladder
to
something
like
multi-registry
or
something
like
that?.
C
Cool, I will do that. And the other one is...

E
Something else that I guess is briefly on my mind — and I probably don't want to solve it synchronously in the meeting, but I'm starting to think about it — is which of the data storage interfaces we want to run with in the future. That's sort of coming into my mind again.
E
We've got some melange of interfaces: stuff that I've done with the link system, the datastore and other abstractions, and some old generations of things named dag-something or data-something-or-other plus the word "service" or "store". There's a whole bunch of repos there, and I'm not sure which of them we're supposed to really be a fan of going forward.
E
If anyone has early thoughts on which of those interfaces are the most useful to bridge, and which ones maybe aren't, that would be really interesting to me, because I have a heck of a time figuring out which is which. And it's getting to the point where — now that we've done all this groundwork to have the root ipld package able to include more connective tissue and show how things work, and now that I'm starting to work on the CLI tool — I'm really seeing it, like: okay, yeah.
A
I think Hannah's done the most work with the bridging there, and it's been pretty messy — because it's not just the DAG ones, it's the block ones as well, the ones that have blocks.
E
Yeah, I have this intuition that the number of things in motion here should go down from whatever the number is now to approximately one. One would be a good number of things to be in motion around the storage APIs, but I don't know what the road to get there is going to be like, because it's definitely not there today.
A
And there have been some additional interfaces added along the way that are really special-purpose, narrow ones — like car has its own store reader that's just a subset of a block reader. You know, it would be nice if you didn't have to make your own interfaces just to do a read.