From YouTube: 🖧 IPLD Every-two-weeks Sync 🙌🏽 2021-06-07
Description
An every two weeks meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
And welcome to the IPLD every-two-weeks sync. Who's starting off?
B
I think that's me. So I've got a bunch of stuff to talk about. A quick update on the IPLD reflection stuff: I refactored the tests to run on both the codegen, which was already happening, and the new stuff, which was a bit trickier than expected, but it's working now. I'm essentially extending the tests to hammer all the TODOs that I've got in the new reflection implementation, which is in the bindnode package.
B
So that means hitting methods that the existing tests didn't hit, like LookupBySegment and LookupByNode, some weird stuff around representations, and so on.
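For readers unfamiliar with what a reflection-backed node has to do: bindnode uses Go reflection to answer IPLD node lookups against plain Go structs. The sketch below is not bindnode's actual code — the type names and error messages are illustrative — but it shows the core move a reflection-based LookupByString has to make, using only the standard library's reflect package.

```go
package main

import (
	"fmt"
	"reflect"
)

// Person is a sample struct standing in for a schema-typed value.
type Person struct {
	Name string
	Age  int64
}

// lookupByString loosely mimics what a reflection-backed ipld.Node's
// LookupByString must do: resolve a map key to a struct field at
// runtime. This is an illustrative sketch, not bindnode itself.
func lookupByString(v interface{}, key string) (interface{}, error) {
	rv := reflect.ValueOf(v)
	if rv.Kind() == reflect.Ptr {
		rv = rv.Elem() // follow the pointer to the struct
	}
	if rv.Kind() != reflect.Struct {
		return nil, fmt.Errorf("not a struct: %s", rv.Kind())
	}
	f := rv.FieldByName(key)
	if !f.IsValid() {
		return nil, fmt.Errorf("no such field: %q", key)
	}
	return f.Interface(), nil
}

func main() {
	got, err := lookupByString(&Person{Name: "ada", Age: 36}, "Name")
	fmt.Println(got, err)
}
```

Extending tests to hit exactly these code paths (valid field, missing field, non-struct input) is the kind of hammering described above.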
B
I plan to do that for another day or so, add a couple more high-level examples, probably polish the API a little bit, and then I'm just going to share it with a bunch of people and say: try it and see what happens. They're probably going to come back and say it panicked, and then I'm going to fix that.
B
But I think it's going to be a good exercise to see what they think of the API at first glance. I also filed a bunch of quality-of-life improvements that I've been thinking about for a while. We did talk about "give me a Go value for a basic IPLD node" a couple of weeks ago, and initially I thought I wouldn't file this, but then I changed my mind again, because in dealbot we did see a few places where we did the work manually for small lists and things like that.
B
I also think we should have a TypedPrototype interface in the schema package, because we have TypedNode, so you can get the representation out of that — but you do not have anything like that for prototypes. If you have a prototype for, say, a dag-pb node, there's no way to build a representation of it. You have to have the representation prototype separately, and the codegen does provide you both prototypes, but again you have to grab them separately, which is a bit weird.
B
I think it would be a little bit nicer to have a basicnode chooser, because I've seen a lot of people write this pattern where they layer the choosers — for example, for dag-pb, to choose the right prototype for a multicodec — and that works fine, but then what they write at the bottom as the default case is always the same: it's just return basicnode.Prototype.Any. So I think we could make that a one-liner.
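The layering pattern being described can be sketched as follows. The real go-ipld-prime chooser (traversal's LinkTargetNodePrototypeChooser) takes a link and linking context rather than a bare multicodec code, so the types here are simplified stand-ins — the point is only the shape of the wrapper that supplies the repeated default case.

```go
package main

import "fmt"

// Prototype is a stand-in for ipld.NodePrototype.
type Prototype string

// Chooser is a simplified codec-specific chooser: it may decline.
type Chooser func(codec uint64) (Prototype, bool)

// withDefault wraps a chooser so anything it does not recognize falls
// back to the equivalent of basicnode.Prototype.Any — the boilerplate
// default case everyone currently writes by hand.
func withDefault(c Chooser) func(codec uint64) Prototype {
	return func(codec uint64) Prototype {
		if p, ok := c(codec); ok {
			return p
		}
		return Prototype("basicnode.Prototype.Any")
	}
}

func main() {
	// A chooser that only knows dag-pb (multicodec code 0x70).
	dagpbOnly := func(codec uint64) (Prototype, bool) {
		if codec == 0x70 {
			return Prototype("dagpb.Type.PBNode"), true
		}
		return "", false
	}
	choose := withDefault(dagpbOnly)
	fmt.Println(choose(0x70)) // dag-pb gets its typed prototype
	fmt.Println(choose(0x71)) // dag-cbor falls back to the default
}
```

Shipping withDefault (under whatever signature the library actually uses) is what would turn the repeated pattern into a one-liner.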
B
Please shout if you disagree. And then there are two things that are also quality-of-life improvements but which are not as easy, so I didn't file them directly.
B
One of them is: can we finish the IPLD schema parsing stuff? Because working through the examples for IPLD node reflection using IPLD itself gets much easier, but the schema part is still quite verbose and difficult, because you have to wrangle the type system and then generate those stringy definitions and all that. It would be much easier if I just had a file with the schema and just parsed that, and it just worked — especially nowadays, because I could embed that schema into the Go program, right?
B
You have a package to build selectors, and it's quite powerful, but could we just have something that just parses a selector from a string? That could be quite handy, and if you have both parsing and printing, there's a proposal to have text marshalers and text unmarshalers be usable as flags with Go's flag package. So you could make a flag transparently work as a selector, which would be pretty cool. And that's it. Any comments, disagreements, thoughts?
D
Go for it. I'm not very good at navigating or using GitHub issues, so I tend not to file them unless somebody else chases me, but feel free.
D
Oh yeah, so most of my stuff to report on will be docs and specs consolidation updates again. I think this is cruising in towards the finish line, or at least I'm desperately going to try to make it. So all three of the biggest content repos in the past have now been unified and are in the ancestry of one coherent git history: the old specs repo, the old docs repo, and the old ipld/ipld repo are now all consolidated.
D
I have updated the master branch pointer in the ipld/ipld repo to be this new, unified history, so everything that was ever in that repo before is still reachable by commit hash — it's all there — and everything in the other two repos is now also there. So we now have all this content in one place. The contributor count is, of course, the union of all the contributor counts of the previous repos, and it's nice to see that number be so high.
D
So this is now, I think, pending on doing the DNS update. I would like to launch this and have ipld.io pointing here very soon. I don't think the DNS update has quite happened yet — we're waiting for somebody who holds keys to that — but it should be soon. I'm not yet going to move the old docs.ipld.io DNS entry, so that can stay there with the content that's currently there for a while. At some point, though, we should figure out what to do with it, and other content work can continue from here.
D
The finish line that I'm aiming for here is: we have more consolidated content than ever before, and when new content is written, we can now make it clear that this is the repo where it goes, and make a nice paved road around that area. But there are still definitely to-do blocks in the website too. I'm hoping that sometimes people can see them and jump in, maybe, if we are so lucky as to have wonderful contributors out there.
A
Wonderful, congratulations! Congratulations, Eric! That was a lot of work. Thank you! Yes, okay. My update, since last I was in one of these meetings — actually there's a lot more than this, but this is the stuff I remember from the last two weeks and a bit. So I recorded an IPLD module for the ResNetLab on Tour program. I've been working on this for most of the year, just chipping away at the content, but finally, in a flurry of activity, I recorded the final session. Hopefully that's being edited now and they'll publish it. It's good.
A
I think it's not awesome, but it fits in with their content, and I won't be unhappy with it as an introduction to IPLD. To me, anyway, for my purposes, it feels like an iteration towards explainers for what we're doing — another iteration of learning the best way to frame these things. It was a little bit constrained.
A
It sort of hinged a bit on IPFS, so it had to be clear about that relationship. Anyway, that's hopefully done and just put aside. I finally did some minor fixes and test additions to go-car and js-car, just for header detection and version detection — just to make sure that they error correctly, in anticipatory prep for CAR v2, which may be using a special little header pragma thing that should break the old format. This is just making sure that the older code, if you're still using it without any updates, will tell you properly.
A
I did some API cleanup and additions, just to make it usable, because I also added some examples just so I could figure it out, and the examples sort of exercise the thing and show you how to use it both in and out of IPFS. Quite interesting and, I think, fairly usable. You know, if people need a signing or encryption solution for IPLD, then JOSE is pretty much all we can offer them at the moment, and people are using it, so we may as well get it working.
A
There is a Go version of this that apparently is heavily used — it's used in stuff that they're doing somewhere; I can't remember the details. Oh, and I finally merged the DAG-JSON spec cleanup PR that had been sitting waiting to be merged for a while — that was a couple of weeks ago, I merged that.
A
I actually don't know if that made it into your docs yet, Eric, but we probably should get that synchronized. It cleans up a bunch of outstanding issues. And along with that, I got a new version of the JavaScript DAG-JSON implementation shipped out, which does a lot of fancy stuff. It lost a little bit of performance in the process, but it's nicely spec-compliant, and I'm really happy with how strict it is all around the edges — it does all the right things.
E
Yeah, so on my side, a very small update. I basically closed out a pull request, which I opened literally a year ago, to get a very rudimentary text format parsed into a selector, and then wired this up to Filecoin in the Lotus binary — and actually, partial retrievals work with that. It is not pretty — both not pretty from a UX perspective and not pretty from the implementation perspective — but you know, this works, so that's actually a great success.
E
Apparently this is literally the first working prototype of this capability in Filecoin so far. And yeah, I am hoping that we will get a little more agreement in this call on what we can ship as, like, pieces of what I did — like helper libraries to make both combining selectors and building selectors a little bit less painful. So I have...
C
Great — we may already have alignment. I think the only thing I've got is the never-ending saga of ipld-prime and IPFS: integration continues slowly. We spent another week attempting to understand how the dag get command works.
C
More proposals for more complicated processes were considered and rejected. The current plan is that we will need it to work sort of like how ipfs cat works, where we will not deal with any of this encoding in the encoders library, which was the previous plan. It turns out that the encoders thing, while it provides help which is nice to have, also doesn't happen in the correct process all the time — it happens in the client process, but we would prefer to use the encoder plugins that are available in the daemon process.
C
There was a proposal of: maybe we always have the backing command actually push out some sort of DAG-JSON thing, and then we turn that into ipld-prime nodes and then serialize them with the intended codec closer to the client. That seems really, I don't know, suboptimal, and it also means that you could have a codec available on the daemon but not available on your local client, and then you couldn't do that — which also seems like it would be not the thing that the user would be expecting.
C
So I think having it be just within the command itself — this command just puts out the bytes of whatever the codec encoding of the expected DAG is — is the simplest thing to explain and try to provide as a service. So that's the direction we're going now, and now we just need to do it, but it should actually be simpler code.
A
This is just actual bytes to the console — not hexadecimal or anything? Okay.
C
I'm not surprised. There also seemed to be, I don't know, people staring at this, and also staring at some of that gateway set of questions, and being like: this current thing is both extremely confusing and not what we want, and also intertwined in such a way that it's not going to be pleasant to remove. So what's the path forward?
A
I mean, in a way it's nice that we're stuck where we are, finally addressing the dag-pb lock-in problem that's really at the heart of this, and it's nice to be doing that, because without doing...
C
The thing is that there's some attempt at generic outputting from all IPFS commands that goes through ipfs commands and either gets serialized to JSON over an HTTP API or directly to some other thing, and then gets re-put through the commands layer that does the final encoding — and that's just an artifact purely of how IPFS has structured its two-process daemon-and-client setup, and is not related to the data model.
C
There's a real problem, which is: you've got an IPFS daemon and then you've got some other process, and so you need some sort of serialization between these. And if you're saying, great, and now we need to be able to let you encode into JSON or into XML — apparently they've got an XML codec; we don't have that yet — maybe you want to do that on the client side, and also you may want to interface with the daemon over the web.
C
So: can we have that same interface be the thing that goes through an HTTP transport to some sort of JavaScript client? It's that boundary that, you know, I understand. I think there are very valid motivations to want to abstract that one level, so that you can have these different things that pipe into it. But we have done that in a way that we now want other things that it can't do.
D
So I also jumped in and added a few more things to the docket that I forgot on the first round. Does anyone want to talk about test fixtures?
D
I'm not sure I do either, but I will make some information available, and if people are interested in it, give me some reactions. I had done a little research on the problem domain of cross-language test fixtures in the past — I think it might be in last week's recording; I don't remember if I mentioned having a prototype solution.
D
And I guess I would just be interested in people taking a look at this and telling me if it looks crazy or not. The intention was to make something relatively human-readable, but also relatively clear for a machine to update, and — god have mercy on my soul — somehow also be consistent about line-break maintenance, which, man, computers. Enough said. So the state of this is that I've made a prototype implementation of something I hope is human-readable; opinions may vary. It uses indentation as the escaping mechanism — that's the process of hitting the sweet spot.
D
You basically have escaping or nothing, honestly, right? You have escaping or nothing, and indentation is the only concept of escaping that is trivial to do as a human, because you select a bunch of text and you press tab — you're done. As contrasted with going through the entire thing and putting in slashes: that's somewhat harder to do. You can do it with a sed script, but does your editor have a button for that? If it's Emacs, maybe, but everybody else, probably not. Anyway.
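The "select and press tab" escaping being described can be sketched in a few lines: embedded documents get one leading tab per line, and recovery is just stripping that tab, so no character inside the body ever needs slash-escaping. This is a minimal sketch of the mechanism, not the actual fixture-format code.

```go
package main

import (
	"fmt"
	"strings"
)

// indent embeds a body by prefixing each non-empty line with a tab —
// the operation an editor does when you select text and press tab.
func indent(body string) string {
	lines := strings.Split(body, "\n")
	for i, l := range lines {
		if l != "" {
			lines[i] = "\t" + l
		}
	}
	return strings.Join(lines, "\n")
}

// outdent recovers the original body by stripping one leading tab.
func outdent(body string) string {
	lines := strings.Split(body, "\n")
	for i, l := range lines {
		lines[i] = strings.TrimPrefix(l, "\t")
	}
	return strings.Join(lines, "\n")
}

func main() {
	// The embedded document can itself be JSON, another format, or
	// plain text; none of it needs character-level escaping.
	doc := "{\"a\": 1}\nplain text, no escaping needed"
	fmt.Println(outdent(indent(doc)) == doc)
}
```

Because the round trip is exact, whitespace and line breaks inside embedded documents survive unchanged — the property the format is after.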
E
Okay — and your expects are flat, all of them? It's only the selectors that are tabulated. Okay.
D
Right. So the intent of the format is to be human-readable while maintaining all the whitespace and everything else too. In this example file, some of the things that make that interesting are that it contains a bunch of other formats — some of it's JSON, some of it's not — and so all of that should be maintainable, and yet, hopefully, the sections between those separate documents should be clearly visually delineated.
E
I don't know, here you can see... Well, I actually don't know: do we have equivalents — the long names for the short keys, you know, like where you have "f"? Oh, we're talking about selectors now, yeah.
E
Well, I'm looking specifically at your example. It's like: do we actually have them? Because off the top of my head, I actually don't know how to read this myself — I'd have to go and find the schema and so on and so forth — and I remember we had a long version of those. Maybe we could consider using that.
D
Yeah, that was my initial dream a long time ago. We still have some docs on the website now which are using those long-form things — the words in the schema — and they are arguably more legible as a result. I'm really not sure if that was a good idea in hindsight. I was hoping that we would rapidly get the tooling relating to schemas to such a point of quality where I could say to somebody:
D
Okay, take this document with the long names, now run this transform tool and give it the schema file, and it'll produce the short names — or go vice versa. And then that would be sufficiently executable and clear to a user that we could actually write docs in either format and say: click a button, here's the flip. But we just haven't gotten there, and so I've become pretty uncomfortable having those longhand things showing up in the examples, because right now, if you plug that into code, does it work?
E
Yeah, the fact that the long names actually do not work — that wasn't apparent to me. So yeah, then, right, yeah, that's the way to go. You'll want to swap those examples, obviously.
B
I don't really object to the format. I think it's fine — I think I said this two weeks ago: any format is better than no format, and this certainly seems fine.
B
The first thing I was going to bring up is what Peter brought up, which is: why not be strict about at least one leading tab within the content of each file? Because if you didn't require that, I guess you could have a tool like gofmt that rewrites them to have that, but I don't know — I wouldn't want to have inconsistently tabbed files.
D
Yeah, maybe we should make that a strict rule. There are some comments about that in the readme of that PoC repo that flag it as a Postel's-principle thing. I guess I was actually highly influenced here by Go's txtar, which has this really intensely provocative comment at the top of its documentation, saying it is impossible to have a file which does not parse as txtar.
D
And I kept going back to that comment being like: that's a really cool feature. But then, of course, the flip side of that is it really boxes in the format quite a bit, right? It can have all sorts of weird data sections that can be incorrect, and there are many kinds of typos which can produce data that did not parse the way that you intended, even though it did successfully parse. So maybe you're right — maybe that's not an ideal trade, and it's actually more reasonable to be strict about more of the format.
D
Yeah, the idea there was: when we end up writing fixtures that involve multiple blocks of data, we usually have them addressed by CID links, and so it would probably be useful to have a way to add a bunch of those hunks.
D
The document here doesn't need that feature, because it was all single-block selectors, but we should have another set of fixtures for multi-block selectors, and then that feature would suddenly become visibly useful.
D
You can easily write these things. The main thing that we've found to be irritating is, as the human authoring these fixtures: you write a bunch of blocks and you fill them with your JSON data or whatever, and then, when they're referring to the other blocks, you have to write the CID in there, and you have to go through a cycle with a tool to tell you what the CID is — and there's no way around that, right?
D
There's no total fix for that, other than using some sort of pointer-based reference and having something resolve it. But we could get an interesting form of halfway there by having a tool that looks at this file and then says: okay, any of these hunks that have, like, slash-dag-slash in the middle — I'm going to hash that content, turn it into a CID, and rename the hunk to have that CID as a suffix of its name.
D
And then this would still force you, the human, to go through some cycles: you write content, and then you run that over the file, and then it's going to replace the file, and then you have to make sure that your editor picked up the change to the file, and then you can write the next hunk that refers to that one. But it would at least let you have that editing cycle, and I think that would be one of the least crappy user stories that I've seen.
D
#ipld, and you'll get there. There's also Discord bridges again, which is super nice — apparently people like Discord, I believe. Bridging back to IRC again on the new Libera network is planned; I'm not sure if it's happened yet, but it's in the works. So if you haven't found us there, go find us there. It's cool.
E
It seems to be. I would actually ask Ollie and Andrew — they're both heavy into this general automation stuff.
A
Okay, well, let's end the recording and finish up. Let's see — okay, thanks, everyone, for...