From YouTube: 🖧 IPLD Every-two-weeks Sync 🙌🏽 2022-09-26
Description
An every-two-weeks meeting to sync up on all IPLD (https://ipld.io) related topics. It's open to everyone and recorded. https://github.com/ipld/team-mgmt
A
Welcome to this week's IPLD sync meeting. It's September 26th, 2022, and as every two weeks, we go over the things people have worked on and then discuss any open agenda items. You can also drop by and ask questions about IPLD.
B
Yeah, so I was looking over my history for the last few weeks. It's not a whole lot done; I've been sort of part-time for the past week and a half or so for health issues, but there are lots of little keeping-the-wheels-on commits: updating repositories, keeping everything going and chugging. The biggest thing I want to report on is that we have a bunch of issues around identity CIDs at the moment.
B
That's being addressed from various angles, and I think this is going to be an ongoing topic for a little while. It mainly stems from some work in Filecoin that I came back to, and some work in go-car, but there are also ongoing discussions about the limits of identity CIDs. I've linked an issue in the notes: in the multihash repo there's a long-standing issue about whether there are practical limits.
B
Should there be limits on identity CIDs? We're not in agreement at all about limits, or about their reasonableness, or even whether it's reasonable that identity CIDs could contain additional identity CIDs. To some...
B
Hey, speak of the devil. To some, that's a reasonable proposition. Anyway, as part of that work I found a place where I had to impose some limits, so I came up with a somewhat arbitrary limit for this particular use in Filecoin retrievals: a maximum size of 248 bytes and, because it mattered in this case, a maximum of 32 links inside the identity CID, even when they contain recursive identity CIDs.
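As a rough illustration of the kind of limit being discussed, here is a minimal sketch of checking an identity multihash against a size cap. The 248-byte figure echoes the Filecoin-retrievals limit mentioned above but is otherwise arbitrary, and the function names are invented for this sketch; real code would use the multiformats libraries rather than hand-rolled varint parsing.

```javascript
// Sketch: enforce a cap on identity multihash digest size.
// Assumes raw multihash bytes: <varint code><varint length><digest>.
// The 0x00 multicodec code denotes the identity "hash".
// MAX_IDENTITY_DIGEST is illustrative, not a published standard.

const MAX_IDENTITY_DIGEST = 248

function readVarint (bytes, offset) {
  let value = 0, shift = 0, i = offset
  for (;;) {
    const b = bytes[i++]
    value += (b & 0x7f) * 2 ** shift
    if ((b & 0x80) === 0) return [value, i]
    shift += 7
  }
}

function checkIdentityMultihash (bytes) {
  const [code, afterCode] = readVarint(bytes, 0)
  const [length] = readVarint(bytes, afterCode)
  if (code !== 0x00) return { identity: false }
  if (length > MAX_IDENTITY_DIGEST) {
    throw new Error(`identity multihash digest of ${length} bytes exceeds cap`)
  }
  return { identity: true, length }
}
```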
B
It's an interesting issue. For those that want to engage, I think the best place to do so is in that multihash issue, and you can read through the history there. I'm trying to post updates there as I do things in various other parts of the stack, but I really think we're coming to a point where we're going to have to start imposing a global maximum.
B
But
at
least
having
some
sort
of
global
standards
for
what
a
reasonable
sizes
for
internet
acids
and
it's
probably
going
to
be
use
case
specific,
but
we
have
too
many
places
where
they're
unbounded
and
that
keeps
on
causing
problems
so
yeah.
That's
all
for
that,
and
that's
all
for
my
contribution.
For
this
week,
foreign.
C
So would these limits also apply to the CID v2 conversation, on how big a CID v2 might be?
B
Yeah, that's actually the same sort of set of issues, because for CID v2 we did start discussions about limits, and I really do think we need to impose something before we commit to it. It's such a hard thing, though, because you don't want to constrain people arbitrarily, and these numbers that we come up with are arbitrary.
B
But it's these unconstrained things that we keep running into problems with, so I think we're going to have to have something like a hard limit for CID v2. What I'm imagining is that maybe we come up with a hard limit, this is the biggest you should make one, and various parts of the system will balk if you exceed it.
B
Maybe we'll have global constants in our core libraries, something like a max CID v2 size, and then encourage a conversation when people start to run into problems with that limit.
B
So if that limit is not high enough for you, come and talk to us, we'll have that conversation, and maybe we'll increase it. Whereas the reverse, which is what we're doing now, is: let's just leave it unbounded and then, when we run into problems, we'll have a conversation. But it's been unbounded for so long.
D
Yeah, I had a very, very quick comment I wanted to make on whether it is reasonable or not to have these, basically about how people end up with them.
D
Right, it's not that I particularly have anything like "oh, this is so cool" or something like that. I was more commenting that this is the natural way for a DAG to form. The moment you have any kind of DAG builder which is recursive, bottom-up, your effective decision tree for "is this an identity or not" is "how big is my block", right?
D
So once you apply this recursively, you automatically, just by construction, end up with these inception stages, as you call them. I basically wanted to say in the conversation that it's not something folks are going out of their way to make; in fact, if you disable them, folks would have to go out of their way not to make them.
B
Except that this comes from the fact that we are setting these limits higher than the size of a CID, because if the limit was the size of a CID, you wouldn't have that. (Precisely, yes, correct.) So sure, I could see identity CIDs being reasonable for byte ranges that are smaller than the size of a CID, but pretty much everywhere we see this, the main use comes from the...
B
What is it called, the inline builder from the CID utils for UnixFS building, and the most common value I've seen there is 127 bytes, which you can pack a few CIDs into. That sort of size then leads to: okay, we can pack a CID in there, and oh, the next bytes are small enough that we should pack them inside there as well. Now, I understand why bigger than a CID makes sense.
B
But
it
just
leads
to
these
really
perverse
cases
where
things
get
awkward
and
in
fact
this
all
started
from
the
fact
that
this
is
so
common
in
filecoin
because
of
the
127,
where
we
end
up
with
a
a
root
of
a
dag.
B
That's
got
just
two
cids
in
it,
so
a
a
an
identities,
the
ID
that
pax2c
cids
in
it
and
that's
the
root
of
a
dag,
that's
stored
at
the
root
of
a
car,
there's
no
block
in
the
in
the
car
that
has
that
and
just
chaos
anyway,
but
yeah
no
you're
right.
This
is
hard,
but
it's
we
have
this
possibility
of
using
identities
so
that
we
can
squeeze
out
efficiencies,
but
I
think
you
know
you
know
this
discussion's
all
in
that
issue,
I
think
so.
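The bottom-up inlining decision being described (a builder that inlines any serialized node at or under a threshold as an identity CID) can be sketched roughly as follows. The names, the raw-codec choice, and the hand-rolled bytes are illustrative only; 127 bytes is the commonly seen default mentioned above, and real builders would use js-multiformats.

```javascript
const INLINE_LIMIT = 127 // bytes; the commonly seen inline-builder default

// Illustrative identity "CID" bytes: CIDv1 (0x01), raw codec (0x55),
// identity multihash (code 0x00), single-byte digest length (valid
// here only because INLINE_LIMIT < 128), then the block bytes themselves.
function identityCid (blockBytes) {
  return Uint8Array.from([0x01, 0x55, 0x00, blockBytes.length, ...blockBytes])
}

// hashedCid stands in for a real sha2-256 CID of the block.
function cidForBlock (blockBytes, hashedCid) {
  return blockBytes.length <= INLINE_LIMIT ? identityCid(blockBytes) : hashedCid
}
```

Because 127 bytes is bigger than a typical 36-byte CIDv1, an inlined block can itself contain inlined CIDs, which is how the nested identity CIDs arise without anyone going out of their way to make them.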
A
All right, next on my list is Mauve.
C
Yeah, so personally I've been working a lot on IPLD URL stuff lately. I put together a library called js-ipld-url-resolve, and it's kind of comparable to the LinkSystem in go-ipld-prime, but it's in JavaScript and it's using IPLD URLs. One of the fancy things is that it's got a concept of lenses built in.
C
So at the moment you can have a CID with some path segments, and in the path segments you can actually specify schemas that you want to interpret a given IPLD node as. It will load that node, wrap it in a schema, and then continue traversing over the wrapped node. Part of this has been using Rod's recent work on js-ipld-schema, which has been really nice: it's got validation and pretty error messages, so I'm pretty happy with it so far.
C
This is probably also where ADLs will go, when we get to figuring out what ADLs mean in the context of IPLD URLs, and this also relates to the Wasm stuff that Adin worked on. So hopefully I'll get some of that Wasm stuff, either ported over or something similar, into this system, and then potentially this could mean really fancy URLs linking to different data types. So schemas are in there right now, and I'm working on integrating it with the js-ipfs-fetch library that I have, where I have basically different protocol handlers for IPFS-related stuff, like IPNS or IPFS and pubsub.
C
And now IPLD too. I'm also going to be writing some more specs that are more concrete about how these parameters are supposed to be used in URLs and how the schemas should be loaded.
C
Actually, another thing relating to schemas that I forgot to write down: I brought up that it might be useful to think about schema reuse, or importing schemas within schemas, and how to do that. There have been some nice conversations there; let me link to that.
C
So right now we're kind of bikeshedding some new syntax, potentially, or some sort of preprocessor, where in your DSL you could reference a different IPLD schema file, or the CID of an IPLD schema, and then that can be resolved to a CID when we're actually compiling the DSL to the DMT, using a tool or something along those lines.
C
This will let us have schemas which can be reused across projects: either by having people copy-paste some files around and then hope for the best on deduplication, or we could just have registries, similar to schema.org, of schemas that people can import by their CID. This is still super raw; I don't think anyone, including myself, has committed to actually implementing it yet, but I wanted to get the conversation started after having worked with IPLD schemas and dynamically resolving them.
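For a sense of what is being bikeshedded, here is one purely hypothetical shape such an import could take; neither the `import` keyword nor this resolution behavior exists in the IPLD Schema DSL today, and the names and CID are invented for illustration.

```ipldsch
# HYPOTHETICAL syntax, not implemented anywhere: pull a schema in
# either by file path (resolved to a CID when compiling the DSL to
# the DMT) or directly by the CID of an already-published schema.
import "./common-types.ipldsch" as common
import bafyreigh2akiscaildcqabsyg3dfr6chu3fgpregiymsck7e7aqa4s52zy as feed

type Post struct {
  author common.Identity
  body String
  parent optional &feed.Post
}
```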
C
It felt like a natural next step, letting schemas reuse each other. Longer term, I've been talking to some ActivityPub and RDF related folks, and I'm trying to get them excited about IPLD and the potential to get structured data that's peer-to-peer, content-addressable, and linkable, mixed in with some of the existing ecosystems like ActivityPub, where they already have a lot of schemas and a lot of types of data.
C
I've also been talking to the folks from Ken Labs who are working on creating a new ADL for Prolly trees, which will be useful for database search indexes. They're kind of going off the structure of the HAMT and the other ADLs on the IPLD website, and they're progressing: they'll have an initial spec and a Golang implementation, and then probably after that they're going to work on a Rust implementation for the FVM. Once we have that in place, we can have some really interesting databases and things ported directly to IPLD.
A
Thanks. Next on my list is Will.
E
Sure, just a few notes of other things happening that are a bit tangential or adjacent to IPLD.
E
There continues to be a conversation about proofs of data inclusion in Filecoin: is this data in a deal? There's this question of how we want to talk about that, because the natural proof mechanism that Filecoin has, which is a Merkle tree inclusion proof, lets you say:
E
These bytes of data exist in a deal. But there's a disjoint, a gap, between that and our current packaging in a CAR, because you can have the bytes that represent a block, but it's not easy to say that this is going to be a named object that you can actually retrieve out of this deal. That requires a non-corrupt CAR header, and it requires that the striding you do through the CAR, with those size-offset jumps all the way through, all works out, such that those bytes of data actually are properly named in the CAR file.
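The striding described here, where each section after the CAR header is a varint total length followed by a CID and block bytes, can be sketched like this. It assumes the header has already been consumed and skips CID parsing entirely; a real tool would use a CAR library (e.g. the go-car or @ipld/car implementations) rather than this.

```javascript
// Sketch of the CAR "striding": after the header, a CARv1 body is a
// sequence of sections, each a varint total length followed by
// <CID><block bytes>. To show a block is properly *named*, every
// earlier section's length varint has to check out. This sketch just
// collects section offsets; parsing each CID is elided.

function readVarint (bytes, offset) {
  let value = 0, shift = 0, i = offset
  for (;;) {
    const b = bytes[i++]
    value += (b & 0x7f) * 2 ** shift
    if ((b & 0x80) === 0) return [value, i]
    shift += 7
  }
}

function sectionOffsets (body) {
  const offsets = []
  let at = 0
  while (at < body.length) {
    offsets.push(at)
    const [len, dataStart] = readVarint(body, at)
    at = dataStart + len
    if (at > body.length) throw new Error('truncated CAR section')
  }
  return offsets
}
```

The global nature of the property shows up directly: recovering any one section's offset requires walking every varint before it, which is what makes proving this over a 32 GiB deal expensive.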
E
So it's not just an inclusion proof, it's this other global property, and trying to prove that global property on something that could be 32 gigs ends up being potentially quite expensive. That's going to be an interesting challenge for IPLD.
E
Right now we're using CIDs, and that's one of the core IPLD things, it's how we talk about data, but we're going to have to think about how we match that with what we can get out of Filecoin, and whether we can find ways to bridge the gap between what our tools and the current piece CIDs give us.
E
So that was one thing. A second thing: there was some selector oddness noticed that we may at some point want to dig into. This is again going back to that ADL interpret-as in selectors.
E
It looks like when you have a union selector, so if your selector is a union of a matcher and of an interpret-as, the union actually causes you to go down one stop. The interpret-as is not executed on the direct node that you pass into that selector, but on the children of it.
E
So if you want to run the ADL first, you have to put that above the union, which is not intuitive. It may be a bug in how we've got unions set up right now; it's certainly not intuitive, but there's no way to both match and ADL the top-level node currently, so that's probably somewhere we may need to look at how that edge of selectors is structured.
E
That being said, the behavior of the interpret-as selector tends to mean that matching stops making sense: what did we even mean by doing a union of a matcher and an ADL? It's a little confusing, because at the points where we've actually used these ADLs in practice, like in data transfer, what we've cared about more is the block loading, the substrate-level blocks that are accessed, rather than the transformed node.
E
That goes back to the callback, which is not actually itself a node that exists in practice, so wanting to match this reconstructed ADL node is a little less obvious, versus matching the underlying blocks that are accessed in the creation of that ADL node. The other thing, though, that we realized in this interpret-as ADL selector space is that there's no way to ask for an ADL to be lazy versus non-lazy. In particular, think about a UnixFS directory that's sharded: right now that is a lazy reconstruction, and so if I interpret-as that directory, I just get the top level of it.
E
I,
don't
get
the
enough
to
fully
iterate
through
all
of
the
directory
entries
and
if
I
treat
it
as
the
map
of
name
to
files
that
the
ADL
makes
it
as
and
recurse
one
level
I
also
get
the
top
locks
of
all
of
the
files
in
that
directory,
and
so
we
don't
have
a
way
still
to
Traverse
just
the
contents
of
the
directory,
and
so
there's
a
question
of.
Do
we
need
some
option
to
the
ADL.
Do
we
have
two
ADLs
one?
E
That
is
the
lazy,
unixfs
and
one
that
is
a
non-lazy
unixfs
and
that
that
may
be
the
easiest
ways
that
we
just
have
two
different
named
ADLs
there?
E
But
we
should
think
a
little
bit
about
what
the
what
our
answer
would
be
for
people
who
want
to
transfer
all
the
blocks
needed
to
list
a
sharded
directory,
because
we
don't
have
an
actual
selector.
That
would
work
for
that
right
now,
I
think
having
two
unit
xfs
is
one,
that's
lazy
and
one
that's
not
lazy
registered
is
probably
the
easiest
way
there.
E
So, in about a month there will be a bunch of conversations. We should think about whether there are IPLD conversations worth having there and, if so, we can advocate to make time to have those. I don't believe there is a direct IPLD track.
C
Not a track, but I'm definitely going to be doing some sort of IPLD talks; I'm still figuring out the specifics. Having a track is a lot of effort, and I personally can't commit to that. If there's someone in the community that wants to go for it, great, but we could probably just piggyback on some existing ones.
C
Maybe it might be useful to also point people to where the IPFS Camp discussions are happening, because I actually don't know, and I've had people asking me.
E
That is a great question. There is, I believe, an IPFS Camp Slack; I do not know if it has fully opened yet, but the website is probably the thing that will update, and I believe 2022.ipfs.camp has the current details for the camp, which is not a ton yet besides ticketing and some forms to apply or to propose things that you would be interested in.
A
All right, thank you. I've added an item: news from other people in the community that I've heard about. There is a Rust implementation that has been in the works for years. I talked with the author, I think two or three years ago roughly, and he finally found some time to work on it again. So if you're interested, check it out. The major thing there, I would say, is that it's focusing on schemas.
A
That is what the author really cares about and wants to push forward, which clearly is not deep in the IPLD implementations in Rust that you might know; there are bits of it there, but not fully. So if you're interested, check it out and give feedback, and we'll see how this goes. We're certainly in touch, so it's kind of a friendly parallel development.
B
Well, I had one more thing to mention. The work Mauve started on the js-multiformats ESM upgrade, and a bunch of other things, has turned into an epic js-multiformats version 10 release. That pull request is number 199 in js-multiformats; I'll put that in the notes.
B
We're
really
close
to
just
to
releasing
that.
But
we
keep
on
coming
up
with
these
little
things
to
tackle
so
we're
we're
painting
bike
sheds
in
there
a
little
bit,
but
there
should
there's
a
bunch
of
work
that
goes
into
this.
That
will
be
fairly
major
once
it's
released,
then
there
will
probably
be
a
bit
of
pain
for
dependencies.
So
there's
going
to
be
other
repos
that
we
have
to
update
all
the
codecs
and
lots
of
other
utilities
that
use
multiple
JS
multi
formats.
B
We're
gonna
have
to
drop
common
JS
support
for
this,
but
it
does
simplify
things
a
lot
and
it
and
it
enables
our
release
process
to
be
a
bit
more
streamlined
with
other
release
processes.
We
have
so
I'll
drop
that
note
in
there.
A
Thanks. Does anyone else want to share something or ask questions?
A
Then I also invite everybody to the after-party: after we close this meeting, we still have some time to hang out if you want to, without the public streaming, in case you want to share something that shouldn't be publicly streamed, and so on. Oh, and one more note: as the daylight saving time changes will happen soon, this meeting just stays on UTC time.
A
So
whenever
you
like,
wherever
you
are
in
the
world,
the
time
might
change,
but
this
meeting
will
be
always
on
the
UTC
time
and
it
will
make
things
a
bit
nicer
for
the
Europeans
and
the
Australians
soon
all
right.
All
right,
then
yeah
see
you
all
again
in
two
weeks
and.