From YouTube: 🖧 IPLD Every-four-weeks Sync 🙌🏽 2023-04-24
Description
An every four weeks meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
Welcome everyone to the IPLD sync. This community call happens every four weeks, and today it's April the 24th, 2023. As every four weeks, we cover the stuff that people worked on, but we also have talks.
A
Also, if you have questions, or if you want to present something, just let us know. We're still looking for people to present things, and it doesn't need to be about building core things for IPLD; it could just be cool things built with IPLD as well.
A
Well, let's see. Should we start with the presentation first? Or, I don't know, Rod, would you even start?
B
Let's see: has anyone else got anything they would like to present or talk about?
C
I mean, I think one thing, Rod, that maybe you'll talk about, but that I suspect will come up over the next week or two, is that there are going to be at least two efforts likely working to do traversals in various ways and thinking about validation of CARs in traversal order.
C
I know Jorropo expressed interest last week in extending the CAR spec to describe what an ordered CAR is, because currently IPFS has some code paths where it can write a CAR without any guarantees of the blocks being ordered.
C
When it's asked for them, if a peer for a later block gets it first, it'll write that block, even though it's later in the sequence. That's the ultimate source of where blocks may come in potentially unordered right now. And then I think Jorropo has a unixfs-to-car, or something like a linux-to-car, binary that just does a mapping-type thing, but wants to play some tricks that would result in the file also being unordered, or having some interesting ordering properties.
C
So I think there's work that both Adina is going to do, and also, as I think we'll hear from Rod, work that looks at: you get a CAR file and you want to be able to incrementally verify it as you get it. Based on my current understanding of the code, that likely means there are both go-ipld-format and go-ipld-prime variants of being able to read in a CAR stream and incrementally verify it, coming out of those paths of work over the next weeks.
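To make the ordering idea concrete, here is a minimal Go sketch — a toy model, not the real go-car API — of the property an ordered CAR gives a streaming validator. Blocks are reduced to an ID plus pre-extracted links, both hypothetical stand-ins for real CIDs and codec-aware link extraction:

```go
package main

import "fmt"

// block is a decoded CAR section reduced to its essentials:
// the block's CID and the child CIDs it links to.
type block struct {
	cid   string
	links []string
}

// inTraversalOrder reports whether a stream of blocks could have been
// written in traversal order from root: the first block must be the root,
// and every later block must have been referenced by an earlier one.
// A streaming validator can therefore reject an unordered CAR as soon as
// an unexpected block appears, without buffering the whole file.
func inTraversalOrder(root string, stream []block) bool {
	seen := map[string]bool{root: true}
	for _, b := range stream {
		if !seen[b.cid] {
			return false // arrived before anything referenced it
		}
		for _, l := range b.links {
			seen[l] = true
		}
	}
	return true
}

func main() {
	root := block{cid: "R", links: []string{"A", "B"}}
	a := block{cid: "A"}
	b := block{cid: "B"}
	fmt.Println(inTraversalOrder("R", []block{root, a, b})) // true
	fmt.Println(inTraversalOrder("R", []block{a, root, b})) // false
}
```

Note this only checks a necessary condition ("referenced before seen"), not a strict depth-first position for each block; a full spec for ordered CARs would pin the order down further.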
B
Well, one of the reasons I put my hand up to present something is because there's actually been a lot of activity around IPLD coming out of this gateway work, and we've been really in the thick of it, which is really exciting, because for the past while it's been a bit more difficult to justify working specifically on IPLD, because there's so much other stuff to do. But we've come full circle: as we start to do retrievals from Filecoin and then integrate that in with this broader notion of IPFS retrievals, it's given us a lot more opportunity to do IPLD stuff.
B
Do
the
people
want
to
hear
my
little
ramble
about
work,
we're
doing
with
Lassie,
okay,
so
I'm
gonna
share
and
now
I
I
have
this
as
a
this?
Is
a
markdown
presentation
and
I'm
not
doing
this
I
promise
I'm
not
doing
this
to
be
a
hipster
I
did
I?
Did
this
last
night
really
quickly?
So
so
you
can.
You
can
follow
along
in
my
little
markdown
presentation
here,
I'm
going
to
scroll
through
this
and
and
talk
through
and
I
I'm
I'd
love
to
have
this
be
interactive.
B
If
folks
want
to
ask
questions
or
comment
on
the
way
through,
then
then
please
interrupt
I'd
love
to
clarify
or
go
deeper
into
things
that
folks
find
interesting
and
I've.
Also
I
can
poke
around
and
code
if
folks
want
to
see
anything
in
particular.
But
hopefully
this
is
broadly
explanatory
and
and
you
can
go
and
find
out
more
if
you're
actually
interested
it
is
yeah
so
Lassie
is,
is
on
GitHub
under
the
filecoin
project.
B
Org
Lassie
is
a
a
universal
retrieval
client
for
ipfs
and
filecoin.
So
it's
focused
on
retrieving
it's
technically
an
ipfs
implementation,
but
it
doesn't
store
or
publish
data.
So
it's
not
it's
not
a
reciprocal
node.
It's
a
it's
a
client
that
fetches
and
the
name
Lassie
is
so
that
we
can
do
things
like
Latin,
so
go
to
fetch
me
my
data,
it's
ipld
native,
and
it
only
only
deals
in
ipld
blocks
as
car
format,
so
it
stops
short
of
doing
anything
fancy
with
the
output.
B
It
leaves
that
up
to
you,
it's
got
a
fairly
a
strict
boundary
of
responsibilities,
so
yeah.
So
that's
what
makes
it
interesting
for
iple,
because
it
is
very
high,
purely
heavy
it's
written
in,
go
and-
and
it
offers
three
things
at
the
moment
in
in
one
package.
So
there's
a
command
line
utility
you
can
download,
binaries
or
install
it
using.
What
does
it
go,
install
I
start
off,
GitHub,
GitHub
and
then
the
main
way
to
interact
with
it
is
this
Lassie
fetch
command.
B
It
also
offers
a
a
minimal
ipfs
Gateway
like
HTTP
API,
so
you
can
run
it
as
a
Daemon
and
there's
a
there's,
a
an
API,
that's
roughly
similar
to
an
ipfs
Gateway.
That
will
allow
you
to
fetch
content
through
that.
So
you
can
use
it
as
a
like
an
IPC
with
another
tool,
so
you
don't
have
to
integrate
it
so
deeply,
and
then
there's
also
a
go
Library
interface,
where
it
makes
sense
to
connect
in
with
with
existing
go
code.
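As a sketch of talking to such a daemon from plain Go — the listen address and the exact Accept media type here are assumptions drawn from the gateway-over-HTTP style of API, not confirmed Lassie defaults, so check the daemon's own documentation before relying on them:

```go
package main

import (
	"fmt"
	"net/http"
)

// buildCARRequest prepares a gateway-style request against a locally
// running `lassie daemon`. The baseURL port and the Accept value are
// assumptions for illustration, not confirmed Lassie defaults.
func buildCARRequest(baseURL, cid, path string) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodGet, baseURL+"/ipfs/"+cid+path, nil)
	if err != nil {
		return nil, err
	}
	// Ask for the raw CAR rather than deserialized content.
	req.Header.Set("Accept", "application/vnd.ipld.car")
	return req, nil
}

func main() {
	// "bafyExampleCID" is a placeholder, not a real CID.
	req, err := buildCARRequest("http://localhost:8080", "bafyExampleCID", "/some/path")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.Path, req.Header.Get("Accept"))
}
```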
B
It's also a work in progress — it's under heavy development — but it works great right now. It is the best way to fetch from Filecoin in particular, and we are busily making it the best way to fetch from any IPFS protocol. So why are we doing this? Well, it started as a Filecoin retrieval tool. Our team was focused on Filecoin retrievals, and this is a way of pulling together that work.
B
Its code has history in Jeremy's filclient, which was integrated into the autoretrieve code for bridging IPFS to Filecoin.
B
Some of the code legacy traces back to there. Now we've expanded support for the protocols: it supports GraphSync and Bitswap, and we're working on the verifiable-HTTP-CAR support, which is what Will was talking about and which I'll talk about a bit more later. So it can talk to Filecoin nodes and IPFS nodes, including Elastic IPFS.
B
It now serves as the back end for Saturn, a CDN network built on top of Filecoin and IPFS. It's the back-end component that fetches content from IPFS and Filecoin, so it's serving some very heavy traffic inside of Saturn. And there's ongoing work to replace a large chunk of the existing IPFS gateway with a Saturn back end, so there's this ongoing work that is pushing a lot of this stuff forward. That's driving performance needs and specifications and all sorts of interesting things.
B
Lassie is very lightweight. Another one of the purposes it serves is being something very lightweight and small, which is not something that you can currently get with the tooling around our ecosystem. If you want to do IPFS fetching — you know, generating a CAR from a DAG off IPFS — you go grab Kubo and run a node and do all this stuff, and it's very heavyweight, whereas Lassie is intended to be lightweight. There are currently no config files; there are a lot of command line options and library options that you can use.
B
You
don't
run
a
local
ipfs
node,
depending
on
how
you
define
that.
There's
no
persistent
storage,
it
shouldn't
leave
a
trace,
except
for
what
you've
requested
and
it's
very
fast
startup
time.
It'll
get
going
very
quickly
and
it's
got
just
enough
functionality
to
get
ibld
content.
We,
as
I,
said
we
keep
the
the
the
boundaries
of
its
functionality
fairly
tight
when
we're
trying
not
to
to
span
too
far
beyond
its
core.
B
How
so
we
don't
run
a
node,
so
we're
not
out
there
doing
constant,
Discovery
and
figuring
out
where
to
get
stuff
off
file
coin
is
not
the
most
straightforward
thing,
but
there's
the
filecoin
and
ipfs
network
indexer
service,
which
we
integrate
with.
So
we
rely
on
the
index
service
to
do
the
content
Discovery
for
us,
and
thanks
will
and
team
for
all
the
work
on
on
the
indexer,
because
that
we
get
to
completely
put
aside
that
Discovery
side
of
of
the
the
question.
B
So
the
indexer
can
tell
us
where
content
is
so
A
lassie
fetch
will
query
the
indexer
with
a
CID
and
it
finds
first
of
all,
it
finds
filecoin
storage
providers,
because
it
has
a
record
of
Bitcoin
storage
providers
that
have
advertised
their
content
onto
the
index
of
service
and
we
can
get
a
list
of
those
and
then
it
also
queries
the
the
indexer
can
query
the
DHT
for
us,
so
we
can
ask
it
to
do
DHT
Discovery
for
us
and
it'll
find
nodes
that
have
the
content.
B
So,
let's
see
we'll
begin
graph,
sync
and
all
bid
swap
sessions
to
retrieve
content
from
peers
and
we're
working
on
HTTP
fetching
as
well.
Soon
it's
it's
a
work
in
progress
right
now,
so
it'll
it'll
work
on
model
protocols.
Currently
it
does
a
race
but
we're
working
on
more
sophisticated
ways
to
do.
Multi-Protocol,
chatter
and
then
it'll
it'll
request
the
the
data
that
you
want
as
a
graph.
So
you
request
to
CID,
but
you
also
request
you
have
a
ways
of
requesting
graphs.
So
there's
multiple
things
going
on
here.
B
You
can
request
a
path,
so
the
CID
plus
path.
You
can
request
the
entire
dag
under
a
CID
and
it'll
it'll,
fetch
everything
you
can
request,
just
just
the
CID
and
and
the
and
or
the
the
path
so
that
just
one
little
block
that
you're
pointing
to
or
you
can
fetch
just
the
unixfs
entity
under
the
root
or
enter,
and
that's
I'll
explain
that
a
bit
more
later.
B
But
there's
these
these
three
different
modes
of
fetching
of
of
describing
a
graph
our
code
currently
also
does.
If
you
use
Lassie
as
a
library,
you
can
do
arbitrary,
selector
fetching
as
well.
We're
not
sure
if
we're
going
to
continue
to
expose
that
in
quite
the
same
ways.
Arbitrary
selectors
are
quite
hard,
but
there
is
still
currently
support
for
fetching.
According
to
your
own
custom,
selector
I
I,
we
may
retain
that
in
some
form.
B
It's
just
it's
hard
to
do
arbitrary,
as
well
as
all
the
very
specific
things
we
want
to
do
and
there's
a
bunch
of
other
problems
with
arbitrary
selectors.
So
that's
tapping
the
air
a
bit
and
then
the
content
is
returned
as
a
verifiable
car.
B
So
we
only
deal
in
cars
and
they're
structured
to
be
verifiable,
and
it
turns
out
to
be
a
perfect
partner
with
GoCar
which
we're
also
developing
in
tandem
with
Lassie,
so
you
can
fetch
from
with
Lassie
you
pipe
the
output
car
to
go
car,
which
you
can
then
extract
the
data,
and
then
you
could
even
pipe
it
onto
something
else.
B
So
on
specifics
on
the
the
dag
forms
that
we're
selecting.
So
as
I
said,
we
you
select,
you
query
primarily
with
a
CID
and
an
optional
pass.
That's
the
primary
mode
for
dealing
with
Lassie.
Although
there
is,
there
is
still
this
ability
to
do
your
own
selector.
B
So
so
you'll
say
this
is
my
CID
and
the
past.
It'll
start
at
the
CID
and
it'll
walk
the
path
to
that
you've
asked
for
according
to
ipld
pathing
rules,
there
is
a
caveat
there
that
I'll
come
back
to,
which
is
that
the
the
default
it
will
default
to
Unix
FS
passing
semantics,
where
possible.
So
I'll
explain
this
this
in
a
minute
because
it
I
I
find
this
interesting.
Masters
may
not,
but
there
are
two
different
ways
of
doing
pathing,
with
primarily
with
ipfs.
B
One
is
playing
ipoding
where
you
park
the
nodes
in
blocks
and
then
the
other
one
is
pathing.
According
to
Unix
FS
semantics,
where
you
path
according
to
the
named
links
in
a
unixfs
graph,
so
we
will
default
to
unixfs,
because
that
makes
sense
for
most
of
the
content
we
need
we
have
to
care
about,
but
it
will
do.
It
will
fall
back
to
a
plain
iPod
pathing.
B
So
single
block
fetch
as
I
said,
is
just
give
me
the
block,
but
give
it
to
me
at
the
the
Terminus
of
the
path.
So
so
I
can
specify
the
path
to
the
final
block
that
I
want.
You
can
fetch
the
entire
dag
underneath
the
the
path,
if
there's
a
path
or
this
entity
fetch
which
is
I'm,
calling
it
unixfs
entity
fetch,
we
don't
have
a
great
name
for
it.
B
There's
various
names
for
this
in
code
and
in
specs,
but
what
I'm
calling
entity
fetch
is
give
me
the
Unix
first
entity
at
the
Terminus
of
this,
this
CID
and
path.
Now
an
entity
can
be
defined,
as
is
it
a
a
sharded
file?
So
well,
is
it
a
file
it's
file
or
directory
within
Express?
Is
it
a
file?
Just
give
me
the
whole
file?
Is
it
if
it's
sharded?
That
is,
if
the
file
is
spread
across
many
blogs,
then
give
me
all
of
the
blocks
of
that
file,
which
ends
up
being
the
complete
Dag.
B
But
it's
it's
like
I,
don't
know
what
this
is.
You
just
give
me
the
you
know
what
what
the
full
thing
that's
here,
but
no
more,
because
if
it's
a
if
it's
a
directory
and
it's
a
Sharda
directory
well,
if
it's
a
directory,
it's
not
Charlotte,
then
it's
just
a
single
block.
So
you
get
a
single
block
in
that
case,
but
you
don't
get
the
leaves
of
the
directory
if
the
shutter
directory,
then
you
want
to
get
all
of
the
blocks
that
make
up
that
directory,
but
not
the
leads
so
I.
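Those entity rules can be condensed into a small decision table. This is an illustrative sketch only — the Kind names and the helper are invented here, not Lassie's API:

```go
package main

import "fmt"

// Kind is a simplified classification of the unixfs node found at the
// terminus of a path. The names are illustrative, not Lassie's types.
type Kind int

const (
	SingleBlockFile Kind = iota
	ShardedFile          // file spread across multiple leaf blocks
	PlainDirectory       // directory that fits in one block
	ShardedDirectory     // directory sharded into a HAMT
)

// entityBlocks states which blocks an "entity fetch" returns for each
// case: the whole file including leaves, but for directories only the
// directory structure, never the entries' contents.
func entityBlocks(k Kind) string {
	switch k {
	case SingleBlockFile:
		return "the one block"
	case ShardedFile:
		return "root + every leaf block of the file (the complete file DAG)"
	case PlainDirectory:
		return "the single directory block only, no leaves"
	case ShardedDirectory:
		return "all HAMT shard blocks, but none of the entries' contents"
	}
	return "unknown"
}

func main() {
	for k := SingleBlockFile; k <= ShardedDirectory; k++ {
		fmt.Println(k, "=>", entityBlocks(k))
	}
}
```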
B
So the reason we're doing unixfs by default is that at least 95% — this is always a guess, I don't know, but at least 95% — of the content stored on IPFS and Filecoin is UnixFS. So we just assume by default that you're using UnixFS. But an increasing amount is non-unixfs: people are making their own formats, and we will probably see formats like WNFS expand, which will not be unixfs. For now, anyway, UnixFS is table stakes that people expect.
B
So we say: we'll try looking at this thing as UnixFS, but if the blocks we get back are not dag-pb, or they are dag-pb but we can't interpret them as UnixFS because they don't load as UnixFS, then we default back to plain IPLD semantics.
B
So, excuse me: pathing on UnixFS lets us do named paths.
B
So
if
I
say
path
to
thing,
the
the
actual
ipld
path
for
that
on
a
unixfs
block
on
on
instead
of
Unix,
first
blocks,
it's
really
ugly.
It's
you
navigate
into
into
the
block
and
then
you
go
into
the
links
array,
and
then
you
choose
the
index
in
the
length
array
you
want,
and
then
you
go
to
the
link
in
that
index,
which
is
called
hash.
So
you
end
up
with
this
ridiculous,
looking
path
that
people
do
use,
but
it
doesn't.
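For illustration, the rewrite from a friendly unixfs name to that structural form can be sketched like this — a toy model where only the Links/<index>/Hash shape is taken from dag-pb and everything else is simplified:

```go
package main

import "fmt"

// link mimics one entry of a dag-pb "Links" list: a name plus a child
// reference (the field holding the child CID is literally called "Hash").
type link struct {
	Name string
	Hash string
}

// structuralPath rewrites one unixfs named segment into the equivalent
// plain-IPLD path through a dag-pb node: Links/<index>/Hash.
func structuralPath(links []link, name string) (string, error) {
	for i, l := range links {
		if l.Name == name {
			return fmt.Sprintf("Links/%d/Hash", i), nil
		}
	}
	return "", fmt.Errorf("no link named %q", name)
}

func main() {
	links := []link{{Name: "about"}, {Name: "wiki"}}
	p, _ := structuralPath(links, "wiki")
	fmt.Println("/wiki ->", "/"+p) // prints: /wiki -> /Links/1/Hash
}
```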
B
Back to the entity fetch thing: sharding is really the problem here that we have to solve, because often files are bigger than a safe IPLD block size. People have files that are bigger than one or two megs, and UnixFS will encode them in multiple blocks and then pretend that it's a single file.
B
Well, it is a single file, but it's spread across blocks, so that makes it complicated. When people say "I want to fetch this thing at the end of the path", if they're giving you a plain path they don't typically mean "I want just the block" — because if they're using this through a gateway, they're expecting a page, and if the page is huge they want the whole page. If you just give them the block, you might only give them part of the page. And it's the same with directories.
B
When directories grow over some size, they become sharded and turn into a HAMT, and we get this really complicated tree of blocks to describe the directory.
B
So,
for,
as
I
said,
the
the
directories
are
an
interesting
case,
because
the
Hampton
could
end
up
being
quite
large
with
a
Charlotte
file.
It's
a
sort
of
a
bit
more
linear,
but
with
a
ham
to
just
this
sort
of
web
of
of
of
dag
that
that
you
need
to
have
the
whole
lot
of
it
to
be
able
to
look
inside
everything.
But
you
don't
always
need
it.
B
So
we
can
pass
through
a
Hamp
to
get
to
somewhere
else
and
just
pick
out
the
pieces
along
the
path
that
we
want
without
getting
any
more
blocks
than
we
need.
So
there's
nuances
to
the
way
this
passing
stuff
works,
and
but
the
thing
I
wanted
to
I
want
to
talk
about
was
how
we're
achieving
this,
because
the
logic
of
it
all
is
very
complicated,
but
we
get
to
in
in
the
code.
B
We
get
to
hide
a
lot
of
the
complexity
by
using
ADLs,
which
is
quite
exciting
because,
as
with
a
lot
of
the
other
stuff
we're
doing
with
Lassie,
we
are
we're
actually
building
on
top
of
the
ipld
Prime
vision
of
ipld,
which
Eric's
been
working
on
for
I.
Don't
know.
Five
years
and-
and
it
feels
like
when,
when
I'm
working
on
Lassie,
it
feels
like
we're
pulling
together
so
many
pieces
of
air
exhibition
for
ipld
and
and
instantiating
them
in
in
this
final
form,
that
is
sort
of
it.
B
It's
validating
a
lot
of
the
work
that
Eric
did.
Finally,
a
lot
of
it.
Iple
Prime
is
used
heavily
all
across
the
stack,
but
it
feels
like
in
Lassie
we're
pulling
it
together
in
a
very
like
we're,
putting
lots
of
it
together.
So
ADLs
are
this.
This
concept
I've
been
working
on
for
a
few
years
and
they've
been
variously
applied,
mainly
Unix
office
through
go
unit,
Express
node,
which
will
hasn't
had
a
big
hand
in
in
developing.
B
So
we
get
to
use
Unix
FS
as
an
abstract
data
layer
where
we
say
to
the
the
traverser
and
to
other
components.
We
say
when
you
get
to
this
thing.
We
want
you
to
be
able
to
see
it
as
Unix
of
fists,
not
as
plain
iPod.
B
So
basically,
it's
a
lens
like
you
when
you
get
to
these
blocks,
put
these
glasses
on,
and
you
view
it
in
this
way,
rather
than
viewing
in
the
way
the
default
way
that
you
want
to
and
that
lets
us
do
all
these
things
with
the
the
path
you
know
the
Unix
first
pathing,
but
also
this
entity
fetching
thing
where
we
get
to
use
all
the
tools
of
ipld
prime,
but
when
we
get
to
unixfest
jump
into
this
different
mode,
where
we
say
well
now
you're
dealing
with
something
special.
B
So
we
want
you
to
Traverse
according
to
these
rules,
and
so
it'll
it'll
do
things
like
it.
You
know
it's
traversing
across
you
know
the
CID
and
get
to
a
path
and
this
path
segment
called
path
on
the
third,
let's
say
thing,
a
path
simply
called
thing:
it'll
get
to
a
block
that
has
these
and
it'll
know
to
Traverse
these
three
different
nodes
within
that
block,
because
it,
the
the
Unix
of
sadl,
will
load
the
block.
B
Look
in
it
and
figure
out
where
thing
is
so
thing
is,
is
a
name
of
a
link
within
that
array
and
it'll
figure
out
which
one
to
go
through
and
it'll
it'll
skip
skips
it,
but
it'll
also
do
other
things
like.
If
the
thing
it
was
a
directory
within
a
big,
a
sorry,
a
a
file
within
a
Sharda
directory,
it
could
be
multiple
blocks
deep,
so
a
common
place
we
see
this
is
in
when
people
are
using
Wikipedia
over
ipf
ipfs.
B
When
you
get
to
the
into
Wikipedia
on
ipfs
and
you
get
to,
you
know,
go
to
slash
Wiki,
slash
en
or
maybe
Third
Way
Around
Ian
Wiki,
and
then
you
go
slash
something
and
you
name
a
a
page.
You
want
to
view
on
Wikipedia
English
Wikipedia
is
so
big
that
and
and
it's
a
flat
directory
like
every
page,
is
in
the
same
directory,
so
that
gets
sharded
as
as
a
big
hamped
and
to
navigate
through
those
pages
you'll
you'll
like
just
to
get
to
a
single
page.
B
There
you
end
up
having
to
jump
four
or
five
different
blocks
to
get
there,
and
so,
but
we
get
to
transparently
do
that
into
our
coding
in
our
traversal.
We
don't
hit
this
and
it's
like.
Oh
no,
it's
a
complication.
What
do
we
do?
We
just
say
this
is
unique,
so
let's
figure
it
out
and
and
it
will,
it
will
figure
out
how
the
path
through
the
Hampton
to
get
to
the
page
you
want
as
deep
as
it
is,
which
is
great
so
so
this
is.
B
This
is
proving
the
power
of
ADLs
applied
to
this
stuff.
There's
still,
obviously
a
lot
more
work
to
do
on
this
layering
aspect
of
ibld,
prime,
but
it
is
working
really
nicely
for
you
next
service.
It
would
be
interesting
to
see
what
would
what
would
happen
when
we
have
other
ADLs
to
apply.
B
I've
been
talking
a
little
bit
of
that
in
some
of
my
spare
time,
but
that
hopefully
we'll
get
there
where
it's
not
just
hey.
We
need
to
be
able
to
do
plain,
iPod,
all
unixfs.
There
might
be
some
other
thing
that
somebody
shows
up
with
saying.
Hey
I
want
you
to
interpret
this
as
winifests
or
as
something
else,
and
then
we've
got
to
figure
out
how
on
Earth
to
plug
that
in
because
currently
in
xfs
is
a
special
case
like
really
a
special
case
to
everywhere
in
our
code,
because
it's
so
common.
B
Okay
and
deterministic
and
verifiable
cast
this
is
what
will
was
talking
about
the
beginning.
This
is
ongoing
work.
Now
we've
been
doing
this
in
the
last
season
since
the
beginning,
because
we
get
this
free
using
iple,
primes
traversal
engine,
so
the
the
cars
we
output,
what
we
call
verifiable
and
they're,
also
deterministic
so
Unix
of
S
node,
provides
the
deterministic
Unix
of
s.
Traversal
iple
Prime
provides
overall
deterministic
traversal,
so
for
any
dag
you
know
we
can.
B
We
will
always
Traverse
the
tree
in
the
same
way
whenever
you
encounter
it
in
you
know
different
times,
it's
not
going
to
there's
no
Randomness
here.
There's
no,
you
know
latency
doesn't
impact
the
order
of
the
blocks
that
you
get
out
of
Lassie.
It's
always
the
same
order.
They're
also
verifiable,
which
is
really
neat,
and
this
is
this-
is
now
being
specked,
but
it's
something
that
we've
needed
for
a
long
time.
B
So
what
this
means
for
Content
the
easiest
way
to
think
about
why
you
would
want
something
to
be
verifiable
and
what
it
means
to
be
verifiable
is
to
consider
apparently
what
we
have
with
the
ipfs
Gateway.
So
if
you
load
a
thing
from
the
Gateway
and
the
most
obvious
example
is
something
with
the
park,
particularly
so
you've
got
you
give
it
a
CID,
and
then
you
give
it
a
path
to
something.
So
maybe
it's
a
page
and
it
gives
you
the
HD
HTML
and
you're
viewing
a
page.
Yeah
I've
got
it.
B
How
can
you
verify
that
that
page
you
got
is
in
any
way
related
to
the
CID
that
you
asked
for
you
can't
there's
no
there's
no
way
for
you
to
do
that.
So
you
off
whenever
you
use
an
ipfs
Gateway,
you
offload
all
of
your
trusts
to
that
Gateway
and
you
say
I
trust
you
entirely
that
the
content
you're
giving
me
is
in
any
way
related
to
the
CID
that
I
asked
for
which
is
a
bit
unfortunate,
because
now
you
know
the
whole
thing
about
content.
B
Addressing
is
that
you're
supposed
to
be
able
to
verify
it
in
theory?
If
you
didn't
ask
for
a
path-
and
you
just
asked
for
a
page,
you
should
be
able
to
verify
it
by
repacking
the
page
and
checking
the
crd,
but
given
the
way
Unix,
the
best
file
packing
is
is
not
deterministic.
You
can
pack
files
in
different
ways.
B
It's
it's
not
that's,
not
always
going
to
be
possible.
The
file
might
be
packed
across
multiple
blocks.
It
might
have
different
chunking
all
that
sort
of
stuff,
and
you
don't
know
that
the
Gateway
doesn't
tell
you
that
they
had
they
had.
B
It
has
been
work
to
to
work
on
this
previously,
but
now,
with
all
this
work
around
Saturn
and
this
Distributing
of
the
Gateway
stuff,
we're
really
truly
specifying
this
and
working
on
it
in
in
multiple
places,
as
will
said
so,
Lassie
app
puts
these
cars
that
import
that
include
the
the
CID
that
you
requested,
regardless
of
whether
you
had
a
path
or
not.
It'll
always
give
you
the
CID,
you
requested
and
every
block
along
the
path
that
it
needed
to
load.
B
So
if
that's
Wikipedia,
then
there's
like
four
blocks
in
there
through
a
hamped
as
well
as
any
blocks
before
that
to
get
to
slash
en
slash
Wiki,
so
it'll
give
you
the
root
and
then
every
block
along
the
path,
and
then
the
content
that
you
ask
for
so
the
way
you
can
establish
trust
is
well
you
for
some
reason
trusted
that
CID
you
asked
for
so
you
have
reason
to
trust
that
CID
is
what
you
wanted,
whatever
that
reason
might
be,
but
given
that
you
have
a
trust
of
that
CID,
you
can
then
verify
that
at
that
block
for
that
CID
exists
in
your
car.
B
You can check that the next block is included in the block that you trust, and you can verify that, et cetera, all the way down to the content that you wanted. Suddenly you have this chain of trust, from the CID that you trust all the way to the content, and you can say: hey, I got what I wanted. And so you can do things like use go-car as a coupling to this.
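A rough sketch of that chain-of-trust walk in Go — toy CIDs here are bare sha-256 hex digests and link extraction is pre-computed, so this shows the shape of the check, not go-car's implementation:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// cidOf builds a toy CID: the hex sha-256 of the block bytes. Real CIDs
// also carry codec and multihash prefixes, elided here.
func cidOf(data []byte) string {
	sum := sha256.Sum256(data)
	return hex.EncodeToString(sum[:])
}

type block struct {
	data  []byte
	links []string // child CIDs found inside the block
}

// walkTrust verifies a chain of trust: starting from a CID you already
// trust, check that each block hashes to its CID and that each next CID
// in the chain is referenced by the block before it.
func walkTrust(store map[string]block, chain []string) error {
	for i, cid := range chain {
		blk, ok := store[cid]
		if !ok {
			return fmt.Errorf("CAR is missing block %d", i)
		}
		if cidOf(blk.data) != cid {
			return fmt.Errorf("block %d does not hash to its CID", i)
		}
		if i+1 < len(chain) {
			found := false
			for _, l := range blk.links {
				if l == chain[i+1] {
					found = true
				}
			}
			if !found {
				return fmt.Errorf("block %d does not reference block %d", i, i+1)
			}
		}
	}
	return nil
}

func main() {
	leaf := block{data: []byte("the content you asked for")}
	leafCID := cidOf(leaf.data)
	root := block{data: []byte("root"), links: []string{leafCID}}
	rootCID := cidOf(root.data)
	store := map[string]block{rootCID: root, leafCID: leaf}
	// A complete, untampered chain verifies with no error.
	fmt.Println(walkTrust(store, []string{rootCID, leafCID}))
}
```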
B
This is a challenge when you're doing deterministic DAG traversal. Deterministic DAG traversal — where there's just one way to traverse, and Will mentioned this as well — is difficult, because the network doesn't always give you the blocks in the order that you want them: some blocks might be smaller, or they might be closer, or whatever.
B
If you were doing a full parallel "give me everything", then you'd get blocks out of order. But because we want to do deterministic DAG traversal, we have a specific way of doing it: we want to assess which block to load as we get them. We get a block, we look in it and figure out where to go next, and so on all the way down that chain, one by one by one. It's a very serial process.
B
GraphSync as a protocol works around this partly by having parallelism built in, because it's a single peer talking to a single peer. They establish an agreement about what they're talking about — here's my root and selector — and then they synchronize the blocks between them, and they have this way of doing multiple blocks at a time: you don't have to go back and forth one at a time. It's this shared agreement about what you both have, and then "give me that stuff". But it's only a single peer to a single peer.
B
You want to have lots of them, like BitTorrent. Bitswap is multi-peer, but there's no graph awareness with Bitswap, which is annoying, because you can't make an agreement with a Bitswap peer to say "we're going to talk about a graph here". You just have to fetch the blocks and figure out the graph yourself, so you have to figure out what you're getting next. So Bitswap with deterministic DAG traversal means that you're stuck in this serial land.
B
So
what
we
did
is
built
a
pre-fetching
parallelism
or
or
this
traversals
before
use
over
bits.
What
so?
What
we
do
is
we.
We
have
a
selector
that
has
rules
that
it
will
pass
a
particular
way
over
a
the
the
the
nodes
in
a
block
to
figure
out
where
to
go
next.
So
when
we
get
a
block
in
our
traversal,
we
do
two
passes
on
it
now,
so
we
do
it
with
a
selector.
B
The
first
pla
pass
will
stop
at
the
edges
of
the
block,
and
just
say
these
are
all
the
links
in
the
block
that
we
are
going
to
visit
next.
So
for
this
block
we
will
be
visiting
all
of
these
links
eventually
and
then
it
comes
back
and
does
a
second
pass
where
it
will
follow
the
normal
rules
where
it
will
go
down
that
one
that
first
route
to
the
first
link.
B
It's a depth-first traversal where each block gets two passes. The second pass is the normal traversal pass, but the first pass allows us to say, for every block: here's the full list of links that we will want to traverse at some point during the traversal within this block. So we get to build up this list of blocks that we are going to need eventually.
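The two-pass idea can be sketched over a toy in-memory DAG — the prefetch callback stands in for kicking off parallel Bitswap requests, and none of these names come from the real ipld-prime preloader API:

```go
package main

import "fmt"

// node is a toy DAG node: a name plus links to children.
type node struct {
	name  string
	links []string
}

// traverse walks the DAG depth-first. For each block it does two passes:
// pass one just collects every link in the block so a fetcher could start
// pulling those blocks in parallel; pass two is the normal serial
// descent, which would then find the blocks already waiting for it.
func traverse(dag map[string]node, root string, prefetch func([]string), visit func(string)) {
	n := dag[root]
	// Pass 1: announce everything this block will eventually need.
	prefetch(n.links)
	visit(n.name)
	// Pass 2: the ordinary deterministic depth-first descent.
	for _, l := range n.links {
		traverse(dag, l, prefetch, visit)
	}
}

func main() {
	dag := map[string]node{
		"root": {"root", []string{"a", "b"}},
		"a":    {"a", []string{"c"}},
		"b":    {"b", nil},
		"c":    {"c", nil},
	}
	var order, prefetched []string
	traverse(dag, "root",
		func(links []string) { prefetched = append(prefetched, links...) },
		func(name string) { order = append(order, name) })
	fmt.Println("visit order:", order)         // [root a c b] - always the same
	fmt.Println("prefetch hints:", prefetched) // [a b c] - known ahead of the descent
}
```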
B
So
we
can
put
them
aside
and
say:
let's
prefetch
these,
because
we
know
when
we
come
back,
we'll
need
them
again.
So
then
we
get
to
parallelize
bit
swap
traversals.
It
doesn't
always
work
because
sometimes
there's
not
a
whole
bundle
of
blocks
that
a
dag
can
just
be
a
single
line.
Like
a
you
know,
a
blockchain
backbone
that
doesn't
go
anywhere,
so
sometimes
you
don't
get
the
opportunity
to,
but
often
particularly
with
unixfs,
you
you're
fetching
a
dag.
B
That
happens
where
you
have
these
lists
of
these
blocks,
that
you're
going
to
fetch
eventually,
so
we
can
pre-fetch
them,
put
them
in
a
temporary
place
and
then
eventually,
when
the
selector
comes
back
and
says,
I
want
this
next
block,
we
say:
well,
we
already
had
it
so
this
gives
us
the
ability
to
parallelize
bit
swap
for
deterministic
date,
vessels
which
is
neat.
It
has
some
difficulties.
There's
some
limitations
to
this.
B
Doesn't
work
so
great
if
you're
working
with
Lincoln
node
budgets,
which
we
currently
are
I-
think
we're
going
to
be
this
we're
going
to
have
alternative
strategies
for
this,
but
we're
currently
in
our
the
way
that
we
are
using
less
the
in
Saturn
in
particular,
has
linked
limits.
So
don't
when
you
don't
don't
give
you
more
than
10
blocks
or
whatever
it
is.
We've
got
a
block
limit,
so
this
preloading
has
difficulty
with
with
that,
but
it
in
a
normal
mode.
B
You
wouldn't
a
budget's
sort
of
like
a
specialist
case
anyway,
so
that
so
preloading
is
currently
available
in
ipob
Prime
Master.
It's
not
in
a
release
version,
but
we'll
get
it
released
and
and
some
of
the
tooling
for
working
with
the
preloader
and
managing
block
loads.
It's
currently
living
in
Lassie,
but
we
will
pull
some
of
that
out
and
put
it
into
ability
Prime.
B
So
anyone
can
use
this,
but
this
is
really
neat
because
it's
been
one
of
the
criticisms
of
of
deterministic
traversals
with
happy
LD
Prime
is
that
you
can't
parallelize
it
well
now
you
can
Lassie
and
go
car.
As
I
said,
these
are
great
a
great
pair
and
we're
working
on
them
actively
together.
So
you
can
do
things
like
go-kart
verify,
car
inspect,
I.
Think,
and
you
can
check
that
a
car
has.
You
can
verify
the
blocks
in
it,
and
particularly
car
extract
to
work
with
the
nxfs
content.
B
So
you
can
you
can
pipe
to
car
extract
from
Lassie
or
you
could
just
car
extract
a
lassie
car
and
get
your
unixfest
content
out
of
it.
So
we'll
work
on
on
both
of
these
over
time
or
something
I
would
particularly
like
to
do-
is
have
this.
B
This
verified
car
stuff
encoded
into
go
car
as
well.
So
if
you
know,
if
we
say
Lassie
produces
a
verified
car,
you
should
be
able
to
verify
it
easily
with
another
utility.
So
you
could
say
the
go-kart.
This
is
a
verified
car
make
sure
it
is
actually
verifiable
in
the
way
that
it's
described
currently
it'll
it'll
verify
the
the
dag.
So
a
car
extract
will
actually
verify
the
content
for
you,
but
we
we
should
do
some
more
work
to
make
that
more
explicit.
B
Finally,
as
as
will
talked
about
as
well,
this
ipf
ipfs
HTTP
transport
is
currently
under
under
specification,
and
it's
a
it's
a
new
transport.
This
will
be
a
third
transport
in
Lassie,
so
you'll
be
doing
graph,
sync
bit
Swap
and
HTTP,
and
it's
for
sending
verifiable
cars
over
HTTP.
B
So
this
this
sort
of
this
nested,
you
know
verifiable
car
chatter,
going
on
where
a
peer
might
be
able
to
give
you
a
car,
that's
already
in
the
format
that
you
want
so
which
is
great,
I,
think
elasticfs
by
weird3
storage.
We'll
we'll
be
doing
this
I
presume
where
we
will
query.
The,
Elixir
and
the
index
will
say:
hey
this
peer
over
here.
Does
this
transport?
Actually?
What
is
the
transport
ipfs,
Gateway,
HTTP
and
also
great?
We
don't
have
to
do
Pit
swap
or
graph
sync
with
them.
B
We
just
say
we
just
pass
on
the
information:
here's
the
Cid
in
the
past.
Can
you
give
me
that
please
and
here's
the
you
know
the
diagram
that
I
want
and
it
will
give
you
a
car
and
then
we
can
just
pass
that
on
to
the
user,
but
in,
but
what
Lassie
will
do?
It
will
do
its
own
verification
of
has
the
peer
given
me
the
car
in
the
way
that
I
want
it
and
is
it
does
it
include
all
that
trusts,
those
trust
properties
so
yeah?
B
It
will
it'll
be
like
we'll
be
able
to
talk
to
gateways
as
well.
So
you
know
elastic
can
sort
of
act
like
a
Gateway
and
it
will
be
able
to
talk
to
other
gateways
and
there's
this.
This
neat
Progressive
thing
happening
there,
but
that's
a
work
in
progress
anyway.
That's
the
end
of
my
notes.
B
I
hope
that
was
interesting.
There's
a
lot
of
ipld
stuff
in
there
and
if
anyone's
got
questions,
I'll
be
happy
to
hear
them.
I
could
poke
into
code
I'm,
not
sure
that'll
be
so
interesting,
but
if
anyone's
interested
in
that,
let
me
know
thank
you.
Questions
comments.
B
A lot of this is thanks to the great work by both Hannah and Will, who have been particular champions in pulling this stuff together, but it feels like we're just making use of all of these things and really proving them out. It's quite satisfying. I wish Eric was still around to, you know, make the most of it, but he's off on other adventures — still using IPLD, but not as deeply in the core of it.
A
Then I might quickly answer; it's not something that I've worked on, but just some information for the people that work in the Rust IPLD ecosystem. There has been a huge refactoring of rust-multihash on the master branch. It's merged, and so, if you're using multihash somehow, it would be great to try out the master branch, because, as I said, it's now also split into different crates and so on, and it should be fairly smooth to upgrade.
A
We don't have any documentation yet, but before we do a release there will be proper documentation about how to upgrade. It should be straightforward. I'd like to see, before I do the release, whether it actually works for people, or whether we've missed some cases, ways people use multihash that we haven't caught. So the background is: the problem, obviously, is always that if you do changes while it's still developing...
A
...if you do some changes, then you have breaking changes, and then you have to upgrade and so on. So now the core of rust-multihash is really only the multihash type, and the whole codec table and so on is moved into a separate crate. So, for example, if we add a new hash function or something...
A
...it's not a breaking change for the core part, which is probably all that many people will use. And one point is: the original idea of the current implementation of rust-multihash is that everything is stack allocated, which makes it quite difficult to use. That is partly due to it being stack allocated, but also the idea has always been that you create your own codec tables for the hashes that you actually use. But people basically just kept on using the bundled one, and now the bundled table is kind of separate.
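The core/table split described here is easier to see with a toy example. A multihash is just `<varint hash-code><varint digest-length><digest>`, and a "codec table" is nothing more than a mapping from codes to hash functions that each application can define for itself. This is illustrative Python, not the rust-multihash API; codes 0x12 and 0x13 are the registered sha2-256 and sha2-512 entries in the multicodec table:

```python
import hashlib

# A user-defined "codec table": only the hashes this application uses.
# This mirrors the idea that the bundled table is optional -- the core
# only needs the multihash type itself.
CODEC_TABLE = {
    0x12: hashlib.sha256,  # sha2-256
    0x13: hashlib.sha512,  # sha2-512
}


def multihash(code: int, data: bytes) -> bytes:
    """Encode <code><length><digest>. Codes and lengths below 128 fit in
    a single varint byte, which keeps this toy encoder trivial."""
    digest = CODEC_TABLE[code](data).digest()
    assert code < 0x80 and len(digest) < 0x80
    return bytes([code, len(digest)]) + digest
```

Because the table lives with the application rather than the core type, adding a new entry to `CODEC_TABLE` changes nothing for code that only handles the encoded bytes, which is the non-breaking property being described.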
B
Related to yours, but with Python: there is a discussion currently happening about the Python libraries, where somebody's contributed a page for ipld.io that lists Python libraries you can use for IPLD, and they've developed a JSON codec for Python, which is great. But I've been actively deprecating some of the non-core language libraries (core in the sense of what Protocol Labs mostly cares about, which is JavaScript and Go) under both IPLD and multiformats, because a lot of them are not actively maintained.
B
We don't have trusted maintainers to take care of them, and so they collect issues and pull requests that don't get responses, which is, you know, not great open source. So instead I've been archiving them, so people know these things aren't actively maintained; but if you want to maintain them, then fork them, and also contribute links back to our documentation, which somebody's done for Python. Now I've opened up an issue on the multiformats org, which I've linked in the notes here, under the py-multicodec repo, just to see...
B
...if there's anyone around on the existing team there that is actually maintaining it. Otherwise, I'm going to deprecate all the Python libraries in multiformats, and I think I've already done that in IPLD. Then the new page on ipld.io will link to external libraries, and one in particular that folks seem to be using is a multiformats library that is in the hashberg.io org. There seems to be some interesting activity on Python, so.
A
Thanks. All right, if there are no further questions, I will close the meeting and also say when the next one is. It's clearly a bit of a manual process to publish them everywhere, but let me quickly check. So it will...
C
A
Yeah, I also thought about that; it's probably a good idea to do it every four weeks instead of monthly, because perhaps someone has no time exactly on this one Monday of the month; if it's every four weeks, people might have a chance to join, so we are rotating.
A
So let's see, it will then be... one, two, three, four... May 22nd will be the next meeting. But yeah, I will put it on the channel and also on Luma and wherever those things are. All right, so thanks everyone for attending, and see you all in four weeks. Bye everyone, and, as always, feel free to hang out for the after party, obviously. Yeah, all right.