From YouTube: 🖧 IPLD Every-two-weeks Sync 🙌🏽 2023-03-13
Description
An every two weeks meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
Welcome everyone to this week's IPLD sync meeting. It's March 13th, 2023, and as every two weeks, we talk about IPLD stuff and discuss what you've worked on, but also discuss any agenda items you might have. Or also, if you have questions, feel free to join. And also, please everyone add yourself to the hackpad.
A
So this week I also have an update, because over the past two weeks I did an update on serde_ipld_dagcbor, which is kind of the Rust DAG-CBOR implementation, and a new release. Not many new things, but the important part is that it now has stricter decoding. So if you have CBOR data and it has NaN in it, it will error... sorry, I got this wrong.
A
NaN is okay; undefined is not. And then also if you have indefinite-sized elements: in CBOR it's supported that you have, like, unlimited-size lists, and you just have some stop operator. And this is not supported in DAG-CBOR, because what we really want is only one way of representing things, and for this there would be two different ways. And the idea here was also, first of all, to match the other implementations in JavaScript and Go, but also to be really, like...
A
I would rather, because it's kind of like a new library, have it be strict in the beginning, and then, if it turns out there's a problem with existing data or something, we can always add a feature flag or an option or whatever. But being strict at first is a good idea, I think, and then we can go from there.
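The strictness rules described above can be sketched without the actual serde_ipld_dagcbor code. In CBOR, the low five bits of an item's initial byte equal 31 for indefinite-length items, and the byte 0xf7 encodes `undefined`. A minimal, stdlib-only Go sketch of such a check (the function names here are hypothetical, not the library's API):

```go
package main

import "fmt"

// CBOR initial byte: high 3 bits = major type, low 5 bits = additional info.
// Additional info 31 marks an indefinite-length item, terminated later by a
// 0xff "break" code.
func isIndefiniteLength(initial byte) bool {
	major := initial >> 5
	info := initial & 0x1f
	// Indefinite length only applies to major types 2 (byte string),
	// 3 (text string), 4 (array) and 5 (map).
	return info == 31 && major >= 2 && major <= 5
}

// checkStrict rejects the two leading-byte cases a strict DAG-CBOR decoder
// errors on: indefinite-length items and the "undefined" simple value.
func checkStrict(data []byte) error {
	if len(data) == 0 {
		return fmt.Errorf("empty input")
	}
	if isIndefiniteLength(data[0]) {
		return fmt.Errorf("indefinite-length item not allowed in DAG-CBOR")
	}
	if data[0] == 0xf7 {
		return fmt.Errorf("undefined (0xf7) not allowed in DAG-CBOR")
	}
	return nil
}

func main() {
	// 0x9f = indefinite-length array start; 0x83 = definite array of three.
	fmt.Println(checkStrict([]byte{0x9f, 0x01, 0x02, 0xff}))
	fmt.Println(checkStrict([]byte{0x83, 0x01, 0x02, 0x03}))
}
```

A real decoder of course has to apply this check recursively at every item, not just the first byte; this only illustrates the byte-level rule.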
A
So that was fixed. This is also the library the Filecoin Virtual Machine is using, and hopefully they will update it for the next release. And I guess they will, because I've told the developers it shouldn't break anything they're doing. All right, then next on my list is already Rod.
C
I just put in the chat that there's a library in Rust that is deterministic CBOR, and I haven't really dived into too much detail...
C
Oh, that's way better, okay. I had just put something in the chat about deterministic CBOR that's coming out from Christopher Allen and Wolf McNally. I'm not sure of the overlap between DAG-CBOR and... but they actually have an IETF application right now for the deterministic CBOR used in blockchain, mostly Bitcoin. Okay, have a read. And it's actually going at RFC 8949, which is, I think, yeah, what you guys did too, with the deterministic or canonical section. So I'm not sure how it's different, but worth the read.
B
Yeah, that's probably going to be the main thing, I think. If anyone starts down this path of deterministic CBOR, you have to decide, first of all, which things you want to exclude, and we exclude things based on our data model. So there's a bunch of stuff we don't bother including, because it doesn't really work for our data model, but you could actually choose to include a lot more. And the other thing is map ordering... no, key ordering. Yeah, that's interesting.
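As a concrete illustration of the key-ordering point: DAG-CBOR sorts map keys length-first, then bytewise for equal lengths, whereas RFC 8949's core deterministic encoding sorts the encoded keys plain bytewise, which can produce a different order. A small Go sketch of the DAG-CBOR rule (a hypothetical helper, not taken from any of the libraries discussed):

```go
package main

import (
	"fmt"
	"sort"
)

// sortDagCborKeys orders map keys the way DAG-CBOR specifies:
// shorter keys first, and keys of equal length compared bytewise.
func sortDagCborKeys(keys []string) {
	sort.Slice(keys, func(i, j int) bool {
		if len(keys[i]) != len(keys[j]) {
			return len(keys[i]) < len(keys[j])
		}
		return keys[i] < keys[j]
	})
}

func main() {
	keys := []string{"bb", "a", "ab", "c"}
	sortDagCborKeys(keys)
	// Length first: "a" and "c" (length 1) before "ab" and "bb" (length 2).
	fmt.Println(keys)
}
```

Under plain bytewise ordering the same keys would come out as "a", "ab", "bb", "c", which is exactly the kind of divergence two "deterministic" CBOR profiles can have.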
B
Thanks for the link. Okay, my notes: I've been doing a bunch of stuff around selectors and UnixFS.
B
So a lot of the work I'm doing right now is focused on retrievals. We've got a retrieval client for IPFS and Filecoin, and the ideal for IPFS and Filecoin is that they become the same thing, essentially: when you fetch from IPFS, you could also be fetching from Filecoin. The whole point of Filecoin was that it was meant to serve as an incentivized storage layer for IPFS.
B
So over time we should see those things merging and becoming fairly transparent... sorry, fairly opaque, really, in terms of where your data is coming from. So we have this retrieval client that we've been building out called Lassie, and it can fetch from Filecoin or IPFS. It's not like Kubo or another IPFS client, in that it doesn't maintain a long-running connection, or even long-running knowledge about the DHT.
B
It
doesn't
bother
doing
DHT
searching
it
uses
the
index
service,
which
Now
can
do
DHT
searching
for
for
peers
that
have
blocks.
So
we
have
these
multiple
endpoints
that
we
can
talk
to
to
discover
who
has
what
content
so
Lassie
is
a
is
a
really
dumb
client.
B
It
doesn't
just
doesn't
maintain
State,
it's
just
for
downloading
content
and
it
can
do
from
Kubo
peers
or
elastic
ipfs
or
filecoin
storage
providers
anyway
right
now,
our
main
use
case
is
we're
focusing
on
Unix
FS,
because
that's
the
main
type
of
data
people
use
and-
and
lastly,
is
very
ipld-
Prime
Centric.
It's
the
way
it
uses
the
Apple
ID
Prime
model,
so
we
use
selectors
and
we
use
all
the
good
pieces
in
like
go
Unix
first
node,
which
is
a
ADL
for
Unix
service.
B
So
there's
been
lots
of
fun
sort
of
building
out
that
little
ecosystem
of
pieces
that
are
not
fully
complete,
like
there's
some
rough
edges
on
some
of
them,
but
we're
improving
them.
So
last
few
weeks
the
things
I've
been
touching
are
there'd
been
some.
Some
changes
to
go
unit,
xfs,
node
and
go
car
in
parallel,
so
go
car
is,
is
our
our
primary
utility
for
interacting
with
car
files?
B
If
you're
doing
anything
with
car
files
then
go
car
has
a
command
line
command
line
program
that
you
can
use
to
interact
with
cars,
and
it's
it's
mainly
a
kitchen
sink
that
we
throw
utilities
into.
It
does
a
whole
bunch
of
stuff
over
time.
I
think
it's
starting
to
become
a
little
bit
more
coherent.
But
one
of
the
one
of
the
main
Tools
in
there
is
is
car
extract
where
you
can
extract
Unix
content
out
of
a
car
and
to
date
it
has
been
focused
on
complete
dags.
B
So
you've
got
to
come,
it
assumes
you
have
a
complete
Unix
of
s,
tag
and
you're
going
to
extract
it.
So
you've
done
a
car
create
so
you've
imported
a
directory
structure
or
a
file
and
car
extract
will
do
the
reverse.
But
when
we're
doing
retrievals
we're
often
dealing
with
partial
dags
and
even
very
specific,
just
like
one
file
in
a
DAC,
because
now
with
Lassie,
we
support
pathing.
So
we
can
do
really
complex
pathing
from
a
route
down
through.
You
know
a
sharded
unixfest
directory.
B
The
exactly
example
actually-
and
it's
become
a
test
case-
is
to
fetch
a
Wikipedia
page
from
a
Wikipedia
dag
which
otherwise
would
be
170
terabytes,
but
just
to
fetch
six
blocks
to
get
to
a
single
page,
Through
The
Shard
in
the
Unix
of
s.
So
you
get
a
car
with
six
blocks
in
it
and
you
want
to
extract
that
page.
So
the
car
is
verifiable.
It's
got
all
the
blocks
from
the
route
to
the
page,
but
you
want
to
extract
the
page.
B
So car extract... sorry, yeah, extract now can do partial extractions. It can do path extractions as well. So you can say: here's my CAR, I just want this path, whether it's a directory or a file. It'll also accept from standard in, and it'll also send to standard out if you've just got one thing you're extracting. There are some changes to go-unixfsnode to propagate errors better, because go-unixfsnode hides a bunch of details when it's going over shards; it's doing block loading and stuff in the background.
B
With car extract, I wanted to have an option where you can let it ignore missing pieces of your UnixFS DAG. So you do a car extract, and it's the UnixFS Wikipedia with one page, and I don't want to have to specify the path; I just want to extract whatever's in there. So now you can do that, and it'll recognize that it's skipping over missing blocks and just extract whatever it can find in there.
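The skip-missing-blocks behavior can be illustrated with a toy walk. This is not go-car's implementation, just the pattern: a partial DAG where some referenced blocks are absent from the CAR, and extraction keeps whatever is reachable instead of failing:

```go
package main

import "fmt"

// A toy "block store": node name -> child names. A name with no entry
// stands in for a block that is referenced but missing from the CAR
// (a partial DAG).
type store map[string][]string

// extract walks the DAG from root, collecting every reachable node, and
// records (rather than fails on) links whose blocks are missing.
func extract(s store, root string, out *[]string, skipped *[]string) {
	children, ok := s[root]
	if !ok {
		*skipped = append(*skipped, root)
		return
	}
	*out = append(*out, root)
	for _, c := range children {
		extract(s, c, out, skipped)
	}
}

func main() {
	s := store{
		"root": {"shardA", "shardB"},
		// "shardA" is absent: we only fetched the path through shardB.
		"shardB":    {"page.html"},
		"page.html": {},
	}
	var got, skipped []string
	extract(s, "root", &got, &skipped)
	fmt.Println(got)     // [root shardB page.html]
	fmt.Println(skipped) // [shardA]
}
```

The real tool works over CIDs and a CAR index, but the shape is the same: a missing link is noted and stepped over, and everything present still comes out.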
B
You can fetch a Wikipedia page with Lassie, pipe it to go-car, and then pipe that to standard out, and you've got your Wikipedia HTML on stdout, which is a pretty nice little toolchain. And you can do all sorts of interesting things when you start to build those pieces up together, so we're nicely focused on these little pieces.
B
The
other
thing
I'm
doing
in
in
amongst
this
is
one
of
the
challenges
we
have
with
selectors
is
that
they
so
because
selectors
are
all
about
maintaining
the
the
correct
order
of
the
traversal
it's
hard
to
do
selectors
in
parallel,
so
which
is
fine
for
graph
sync
graph.
Sync
was
built
around
selectors.
That's
a
protocol
for
transfer
synchronizing,
two
two
Piers
their
graphs,
so
they
both
have
an
understanding
of
the
the
graphs
they
have
and
the
selectors
that
you
want
to
synchronize.
B
So
you
can
do
parallel
work
in
that
if
you
share
an
understanding
with
the
other
with
the
other
side,
but
when
we're
doing
something
like
bits
where
we're
fetching
from
lots
of
peers,
you,
it's
all
you're
the
one
holding
all
the
information
and
the
p
is
just
here-
requests
for
blocks.
So,
if
you're
doing
a
traversal,
then
it's
really
hard
to
parallelize
that
when
you,
when
you
traversal
it's
being
strictly
ordered,
you
know
getting
down
this.
You
know
the
left
side
and
then
you
go
your
depth.
First,
all
that
sort
of
stuff.
B
So
what
I've
been
working
on
is
a
way
to
parallelize
bit
swap
traversals
while
still
maintaining
the
ordering.
So
this
is
what
I'm
working
on
this
week,
but
the
the
trick
here
and
I'm,
starting
on
some
work,
that
Hannah
started,
there's
a
link
in
there
to
to
a
pull
request
in
go
iPod
Prime.
The
trick
is
to
when
you're
doing
a
traversal
and
you
hit
a
block
to
do
a
scan
first
of
what
are
the
links
that
are
in
that
block
that
I'm
going
to
encounter.
B
So
you
do
the
current
version.
My
my
sort
of
re-implementation
of
this
based
on
Hannah's
work
is
to
you:
do
a
traversal
over
the
block,
the
single
block
and
you
stop
at
the
Block
boundary.
You
collect
all
the
links
you
send
those
links
off
to
an
optional
preloader
and
then
you
go
back
and
you
do
the
proper
traversal
and
then,
in
the
background
the
preloader
is
receiving
each
time
it
receives
a
new
block.
B
It
you
receive
this
list
of
links
and
the
preloader
then
has
the
option
to
go
and
in
parallel
pre-fetch
it's
somewhere
all
of
those
blocks
so
that
when
they're
eventually
encountered
again,
they
are
already
there
there's
a
bunch
of
challenges
in
choosing
which
box
to
fetch
all
that
sort
of
stuff.
But
what
it
means
with
bit
swap
is
that
when
we're,
when
we're
synchronizing
a
dag,
we
can
optimistically
fetch
blocks
in
parallel
with
multiple
bit
swap
Communications.
So
we
can.
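The preloader pattern described here can be sketched with plain goroutines. This is an illustration of the idea only, not the API from the go-ipld-prime pull request: the ordered traversal announces the links it is about to visit, workers fetch those blocks in parallel into a cache, and the strictly ordered walk then finds them already local.

```go
package main

import (
	"fmt"
	"sync"
)

// preloader caches blocks fetched in the background so an ordered
// traversal doesn't have to wait on each fetch serially.
type preloader struct {
	mu    sync.Mutex
	cache map[string][]byte
	wg    sync.WaitGroup
}

func newPreloader() *preloader {
	return &preloader{cache: map[string][]byte{}}
}

// announce kicks off a background fetch for each link scanned out of the
// current block (in the real system, e.g. a Bitswap request per link).
func (p *preloader) announce(fetch func(string) []byte, links []string) {
	for _, l := range links {
		l := l
		p.wg.Add(1)
		go func() {
			defer p.wg.Done()
			data := fetch(l)
			p.mu.Lock()
			p.cache[l] = data
			p.mu.Unlock()
		}()
	}
}

// get is called by the ordered traversal; it falls back to a direct fetch
// if the preload hasn't landed yet, so ordering is never violated.
func (p *preloader) get(fetch func(string) []byte, link string) []byte {
	p.mu.Lock()
	data, ok := p.cache[link]
	p.mu.Unlock()
	if !ok {
		data = fetch(link)
	}
	return data
}

func main() {
	fetch := func(cid string) []byte { return []byte("block:" + cid) }
	p := newPreloader()
	// Scanning a block yields its links before the real traversal descends.
	p.announce(fetch, []string{"a", "b", "c"})
	p.wg.Wait()
	// The ordered traversal then visits a, b, c in order, served from cache.
	for _, l := range []string{"a", "b", "c"} {
		fmt.Printf("%s ", p.get(fetch, l))
	}
	fmt.Println()
}
```

The design point is that parallelism lives entirely in `announce`; `get` preserves the traversal's strict order, which is what makes this compatible with selectors.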
A
Thanks, that's really cool. I especially like that the tooling finally gets to where it can actually make useful things. So that's super cool.
B
Yeah, it's nice. It's nice to not have to get a kitchen-sink application like Kubo to do all this stuff, like: get Kubo and then it will do all of the things, but do them all sort of awkwardly. But having these pieces that you can use independently and piece together? It's kind of nice.
A
Yeah, cool. Does anyone else have any updates, or want to share something, or ask something? Or... yeah.
A
Cool, yeah, then I guess it was a quick meeting. As every time, we will stop the live streaming, and then we have the after party, in case you have things you don't want to share publicly. And else, we see us again in two weeks. And for everyone else: the daylight saving time changes around the world start to kick in, or did already kick in, so be sure that you turn up at the right time; this meeting currently is really fixed to a certain UTC time.