From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2020-08-17
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
So this week I started with myself. I don't have anything to report on the IPLD side, because I haven't done any IPLD-related work last week and I'm still busy with Filecoin stuff. That will hopefully be finished soon, so soon I will again have a proper list of things to report. Next on my list is Peter, but he also writes that he doesn't have anything, so: next.
C
So I mean, I can say it again — I'll write the text up in a second, I'm just making tea, so I was off camera for a second. Yeah, I haven't written up my notes yet, but the main thing was this Filecoin retrieval stuff. We now know that the only way to get any data out of the Filecoin network, ever, is with retrievals and selectors.
C
So I wrote up kind of why, and then wrote up two feature requests that would make it more doable in the future, which I kind of expect to get prioritized after the launch. And there's some other fallout: I talked with Volker today about accelerating some of our work to do wasm-based, read-only codecs, so that we could have a DAG of read-only codecs that could be linked, that you could safely load off the network, and actually be able to read data with a lot more codecs than you ship with. That's it for that. I also made some more progress on the ipjs build tool.
C
It's now building; I'm working on the test side of things so that we handle, mainly, a thing that Rod brought up over the weekend, and that should be working really soon. I had a little side quest where I was trying to do a new compiler with just the AST, because it actually isn't that hard. But then I realized that there were some other features in Rollup that I wanted to use that we're not going to be able to replicate with just the AST.
C
So I actually figured out a way that we can leverage some of the caching and performance stuff that Rollup does, and not hit some of the bottlenecks we hit before. That's actually implemented, and it's even faster than it was before, when I was using all those workers. So that's really cool to see, and that's where I'm at.
D
Right, yeah — so, my two main things.
D
It looks like it's just a matter of bandwidth for the other folks to deal with that. Well, it looks like Steven's been merging some things today, so that's good. And then just back to some JavaScript work, to try and get some closure on some other things — but nothing else to report. Other than that it really wasn't that eventful. I did have Eric along for the ride for some of the HAMT stuff. He was doing some benchmarks and trying to figure out how the flush semantics exactly worked and what they actually benefited, and he got some good docs in for that. But yeah, that's it for me, really.
A
All right — does anyone else want to give any updates? If not, then we can also open it up for discussion, if anyone has anything to talk about or discuss.
B
Can we talk a little bit more about the wasm stuff, since you two have spoken to each other about it already? How do you envision the wasm stuff being implemented for read-only things? Is it basically going to be a callback for the link loader, or is it going to be more: give the wasm a stream and get back blocks, or something else?
C
No, it's definitely going to be transactional, with all of the block data at once. So it'll just be, you know: here's a block, give me the decoded version of that. Or, more accurately, what it may be is: here is block data in X format, give me back DAG-CBOR — because there's going to be some kind of binary serialization anyway, and that's the only one that we know everybody already has, and it supports the whole data model. So that may be what it does, but some of that detail is still being worked out. It definitely doesn't need to work with a stream, though, because it's always working with a whole block at a time.
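A minimal sketch of the transactional call shape being described: the host hands over one whole block and gets back the decoded value. In the actual design each decoder would be a wasm module; here the `decoders` registry and the `decode_block` name are purely illustrative stand-ins, not a real interface.

```python
import json

# Illustrative registry: codec name -> function taking raw block bytes
# and returning a plain data-model value (dicts, lists, strings, ...).
# In the design being discussed each decoder would be a wasm module;
# plain Python functions stand in here just to show the call shape.
decoders = {
    "json": lambda block: json.loads(block.decode("utf-8")),
}

def decode_block(codec, block):
    """Transactional call: one whole block in, decoded value out."""
    if codec not in decoders:
        raise KeyError(f"no read-only decoder loaded for {codec!r}")
    return decoders[codec](block)

value = decode_block("json", b'{"hello": "world"}')
```

The key property is exactly what's said above: no streaming, the call always sees a complete block at once.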
B
Right — so basically you're going to incur the penalty of going into wasm and back out on every link.
C
Yeah, it's not intended to be the fastest thing in the world. Like, you know, if you want a really performant codec, you would ship with a native version. I would imagine this is mainly for when you want to be able to read data that you did not ship with the codec for, and so it may just be a little bit slower.
B
And the mechanism to get a trusted version of a reader, or whatever, would be what? Like, let's say you're a miner and you get a deal — what happens next?
C
So we would publish the CID for a DAG, and that DAG would associate the codec identifiers with the wasm binaries. So you would look up in that DAG, like: hey, which one of these? Then you would pull it out, and then you have a call that is, you know, a kind of share-nothing call, where you just say: here's a block of data and some memory, go decode this for me. You know, Volker, now that I think about it, I think Peter's right.
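The registry DAG idea above could be sketched as a simple mapping from codec identifier to the wasm binary that implements it. Everything named here is hypothetical: the keys, the placeholder CID strings, and the helper function are illustration only, not a published format.

```python
# Hypothetical registry DAG, keyed by codec identifier. The values are
# placeholder strings standing in for CIDs of wasm binaries; a
# published root CID would point at a structure shaped roughly like this.
registry = {
    "dag-cbor": "wasm-cid-for-dag-cbor",
    "dag-json": "wasm-cid-for-dag-json",
}

def lookup_codec_binary(registry, codec):
    """Answer 'hey, which one of these?': map a codec identifier to
    the (placeholder) CID of the wasm binary that can decode it."""
    if codec not in registry:
        raise KeyError(f"no wasm codec published for {codec!r}")
    return registry[codec]
```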
C
In addition to just deserializing the entire structure, we should have a way for you to return just CIDs back, and come up with some kind of serialization for just the CIDs — because then, if you're just parsing all the links out in order to do the graph traversal, that would work.
C
And then it's just: here's a whole block, give me back all of the paths to links — so the path and the link — and that's just an array of tuples, right? So you just send a CBOR array of tuples back.
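The "array of (path, link) tuples" shape can be sketched over an already-decoded block. The `CID` wrapper class here is a hypothetical stand-in for a real parsed CID, since the transcript doesn't pin down a representation:

```python
class CID:
    """Stand-in for a parsed CID; only used to mark link positions."""
    def __init__(self, text):
        self.text = text

def links_with_paths(node, path=""):
    """Walk a decoded block and yield (path, link) tuples — the shape
    that would be serialized back as a CBOR array of tuples."""
    if isinstance(node, CID):
        yield (path, node)
    elif isinstance(node, dict):
        for key, value in node.items():
            yield from links_with_paths(value, f"{path}/{key}")
    elif isinstance(node, list):
        for i, value in enumerate(node):
            yield from links_with_paths(value, f"{path}/{i}")

block = {"a": {"b": CID("bafy-one")}, "c": [CID("bafy-two")]}
tuples = list(links_with_paths(block))
```

Note how the walk illustrates the point made later in the discussion: finding the links at all forces you to accumulate their paths along the way, so returning both costs nothing extra.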
A
Exactly, yeah. So basically the first version will probably just return everything, and then you can add things like pathing — getting back just the links is the most basic version of pathing, I guess — and then you can add new things, whatever. But I guess, yeah.
C
We should start working on designing the data structure, but the data structure should basically be a struct for the codec, and then for each method — for each one of these methods — you would just have a separate wasm entry point, right?
C
Well, no — often you need to know the path of the links, and when you traverse the block to get the links, you have the paths anyway. So all of our APIs that give you all the links in a block give you the path as well, because there's no reason not to.
B
Because then the design of this entire thing becomes even simpler: you literally do not even need to think about how to decode a map in this codec, or anything like that. You literally just need to — in CBOR, for example — look for anything that is tagged 42, and you don't care what structure it is.
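As a toy illustration of "look for anything tagged 42": in CBOR, tags in the range 24–255 are encoded as the byte 0xd8 followed by the tag number, so a tag 42 header is the two bytes `d8 2a`. The naive byte scan below ignores item boundaries, so it could also match those bytes inside a string payload — a real scanner has to parse the structure — but it shows the spirit of the point:

```python
def find_tag_42_offsets(block: bytes):
    """Return byte offsets of CBOR tag 42 headers (0xd8 0x2a).
    Naive sketch: a real scanner must parse item boundaries so it
    never matches these two bytes inside a byte-string payload."""
    needle = b"\xd8\x2a"
    offsets, start = [], 0
    while (i := block.find(needle, start)) != -1:
        offsets.append(i)
        start = i + 1
    return offsets

# A CBOR map {"x": tag42(<3-byte string>)}, hand-encoded:
# a1 = map(1), 61 78 = text "x", d8 2a = tag 42, 43 = bytes(3)
sample = bytes.fromhex("a16178d82a43000102")
```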
A
It depends on the selector. Let's say the selector selects something within your object and you only have the CIDs — you can't, from the selector's perspective, tell if you're going to select this link or not.
C
I mean, we have the whole selector engine there — you can actually use it in the retrieval market — and we haven't yet explored what people are going to do with that. Especially in some of these potential partial-graph cases: we have thought about things like recursive selection criteria in paths and things like that, so you would want it.
C
I mean, I'm not opposed to just having two methods. But given that we don't know of a codec in which we can easily parse out the links without getting the path information anyway — with all of our current codecs that I know of, when you go and look at them, in order to even parse out the links you have had to accumulate the path to the link already — there's no reason not to return it.
C
I think that, theoretically, there could be a block format in the future that just puts all of the links in the header without the paths to them. That's certainly possible, but I'm just not sure we should optimize for that, because it's just going to make the DAG way bigger: we'd end up having two methods for every codec, one for just the links and one for the links with paths.
C
Yeah, for one of the use cases — but we want to keep it flexible. We did the same thing in the JavaScript Block API: it returned the paths as well.
C
Oh, one thing that I forgot to put in the notes — I should add it, though — is that we did have the DAG-JOSE/COSE call last week, and that went really well. Actually, things are a lot simpler than we'd initially made them out to be. So the top level of that was that the actual format of the data for DAG-JOSE and DAG-COSE will be unchanged from the JOSE and COSE specs.
C
It will basically just look identical, but when you use a DAG-JOSE or DAG-COSE codec, that means that the thing you're signing is a link. So the binary payload that you're signing should be interpreted as a link, and in our codecs we will want to, you know, add an adjoining property to the payload, called "link", so that it is in the decoded data and you can traverse through it.
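A sketch of that "adjoining link property": the JWS payload stays the base64url string the JOSE spec requires, and the decoded IPLD view additionally exposes the payload bytes interpreted as a link. The helper names are illustrative, and the real codec would parse those bytes into an actual CID rather than leaving them raw:

```python
import base64

def b64url_decode(s: str) -> bytes:
    """Decode base64url, re-adding the padding JWS strips off."""
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def decode_dag_jose(jws: dict) -> dict:
    """Return the JWS unchanged except for an added 'link' entry
    exposing the payload as link bytes (a parsed CID in the real
    codec) so the decoded data can be traversed through."""
    decoded = dict(jws)
    decoded["link"] = b64url_decode(jws["payload"])
    return decoded

payload = base64.urlsafe_b64encode(b"\x01\x71cid-bytes").decode().rstrip("=")
node = decode_dag_jose({"payload": payload, "signatures": []})
```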
F
Yeah, I think we have to work out the wording and then add that to the — I guess, where would we dump the documentation for that?
C
I believe that we have PRs open and started for the specs for those, and they were really complicated, because they were talking about alternative schemas and stuff like that — but now they should be a lot simpler. So in the IPLD specs repo there should be an open PR for the DAG-JOSE/COSE specs, and there...
F
That's where we want to put it then, to actually make sure we work on that. The other thing that I was working on, which is tangential, is ABNF notation for IPLD. ABNF is like all of the syntax for a fully qualified URL, and this has to do with — I mentioned it briefly to Dietrich — me working on getting the ipfs: scheme name registered.
F
I dropped the ball, mostly because the fully qualified path, query and fragment have to be fleshed out, and the current draft of the spec doesn't actually spell that out. But I think I was waiting to do this for IPLD until we actually get things like selectors in — like a full path/query/fragment interpretation — and I think...
C
In the interest of getting something usable sooner, we could consider, instead of ipld:, just doing cid: — full stop. Then there are no selectors, nothing: it's only a URL scheme to point at CIDs. That might be useful in the interim, because I don't know when we're going to figure out the pathing and selector stuff. It's like every time we open that box, the bottom of it looks deeper than it was before. And I mean even pathing —
C
— we don't have a stable spec for yet. And, you know, Peter has actually done work on it, and we're still not there; it didn't go well. Yeah.
C
It's a really tough problem. I think, honestly, some of the stuff that you're talking about with selectors might actually fix some of these problems for us. It may actually be a lot easier to just say: you know what, you need to encode your selector as an IPLD object and then reference it by CID, rather than trying to do paths as selection criteria, because that's just too complicated.
C
I mean, we're also looking at it in terms of: we would like to get to the point where a CAR file — a future version of the CAR file format — includes a selector in the header, and that selector would basically be the selection criteria that created the CAR file. So even when it's an incomplete graph, you always have a complete selector over it, and that would be a selector query in that case, for sure.
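That future-CAR-header idea could be sketched as a small extension of today's header map, which holds a version and the roots. The `selector` field, the version number, and the selector value below are all speculative placeholders, not a shipped format:

```python
def make_car_header(roots, selector=None):
    """Hypothetical future CAR header: alongside the roots, carry the
    selector whose evaluation produced the (possibly partial) graph."""
    header = {"version": 2, "roots": list(roots)}  # version is speculative
    if selector is not None:
        header["selector"] = selector
    return header

# The selector value is an opaque placeholder, not the real
# selector serialization.
header = make_car_header(["bafy-root"], {"explore": "all"})
```

The payoff is the one stated above: a reader of a partial-graph CAR can always recover the complete selection criteria that produced it.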
F
I'm working on some of the interoperability with, let's say, DID methods that use JSON-LD, and I want to be able to translate between DAG-CBOR and back to native JSON, and represent this, and know that the link — the "/" link in JSON, or tag 42 in CBOR — is represented basically as a fully qualified URI, in which case it would be conforming with the specification.
F
Yeah, I don't really need the pathing, although it would be helpful, because then there's so much more robustness you can get. Especially for me, in the DID method that I did for IPLD, the anchoring into specific blockchains would actually be important, because I'm blockchain-agnostic when doing proof signing and anchoring these proofs into proofs of existence.
C
I do wonder if we're missing just a simpler path spec. Because we have the data model, and if we're willing to say that paths are data model only — that we're not going to touch them beyond that — then, like in selectors, we should just be able to borrow the URL path rules and say that when you fall outside of them: I'm sorry, we can't path into your data. Because we have the data model —
C
— it specifies how all of the different potential key values would work. I don't think that we have a great system right now for differentiating between integers and strings. That's the only thing that I feel uncomfortable about.
F
Yeah, and the current path is basically JSON Pointers, I think, so you're basically indexing into a specific item in the array. Yeah.
C
Yep, yep, yeah — if we had a solution to that, I'd be comfortable putting out a proper spec that was only on the data model, because we also say that string map keys need to be UTF-8 — in the data model spec as well, I think.
F
Exactly, yeah — and I mirrored that for the DID specification, basically mirrored it, because I don't want it to be basically IPLD but to be backwards compatible — or, basically, to allow for full CBOR on top of IPLD. In that case the CID points to a DAG-CBOR object, in which case you just use the multicodec that is CBOR, and everything works fine, right?
F
No, it's either that the MIME type is application/cbor+ipld, or it's in JSON but the JSON starts with ipld://. Oh, interesting — I don't even know if we actually want the slash-slash, because it's not hierarchical. I'm sorry, it is hierarchical because it's a DAG, but it's not authoritative. So the rules have to say, as far as syntax goes, what's required for a fully qualified URI that happens to point at IPLD objects.
B
Yeah — Peter, so specifically on this part: at least in the Go selector implementation, it is implemented as a kind of union on the resolver side. So if it is an array, it treats the segment as an offset; if it is a map, it treats it as a key with the string value.
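That union behavior — a segment means an offset into an array, or a string key into a map — is easy to sketch. This mirrors the resolver-side interpretation just described, not any specific library's API:

```python
def resolve_segment(node, segment: str):
    """Interpret one path segment against a node: an integer index
    into a list, or a string key into a map."""
    if isinstance(node, list):
        return node[int(segment)]
    if isinstance(node, dict):
        return node[segment]
    raise TypeError("cannot path into a scalar")

def resolve_path(node, path: str):
    """Apply each '/'-separated segment in turn."""
    for segment in path.strip("/").split("/"):
        node = resolve_segment(node, segment)
    return node

doc = {"a": [{"b": 1}, {"b": 2}]}
```

Defining the behavior on the data like this is exactly the point raised next: the path itself never has to say whether a segment is a key or an index.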
C
Okay, we need to change that. We don't have a representation for that in several languages, so that's the only way we can ensure cross-language compatibility. No, no — actually, no: that's a really good point, though. You can actually just define the behavior on the data, rather than on the interpretation of the path in the URL.
C
Just fail, exactly, yeah. And if you're trying to do it on a map, it would just convert it — and then that would also force people to never be able to use integer map keys. Yeah, okay, yeah. Let's get that spec in then, for data model pathing, and then we can leverage data model pathing for any kind of URI spec that we would ever need. That sounds good to me — and then, you know, for more advanced stuff...