From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2021-02-23
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
Welcome everyone to this week's IPLD sync meeting. It's Monday, February the 22nd, 2021, and as every week we'll go briefly over the stuff that we've done and will work on next week, and then discuss any agenda items we might have. I'll start with myself. I'm currently working on getting the js-ipfs implementation to use the js-multiformats stuff, kind of as a first step.
A
What I'm doing is trying to get unixfs using the new dag-pb (Protocol Buffers) codec that Rod was working on. The unixfs part was unexpectedly easy, but the problem now is integrating it into js-ipfs, because js-ipfs is of course using IPLD internally, and of course the whole IPLD thing that we currently have uses the old CIDs, the old codecs and so on. So yeah, this just doesn't work well together.
A
So the idea — what I'll try, and then I'll see if it's a good idea — is that internally, instead of going through this IPLD wrapper that we had, js-ipfs uses the repo or block service directly, and uses the codecs directly: it encodes nodes, decodes them, and puts them straight into the store. There isn't really anything in between anymore, and this way I should be able to use the dag-pb protobufs for unixfs.
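The flow described above — codec encodes the node, the bytes go straight into the block store with nothing in between — can be sketched roughly like this, using Go stdlib stand-ins (JSON for the codec, a SHA-256 hex digest for the CID, a map for the repo; all names here are hypothetical, not js-ipfs's actual API):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// blockStore stands in for the repo/block service: hash key -> raw
// encoded bytes, with no IPLD layer in between.
type blockStore map[string][]byte

// putNode encodes a node with a codec (stdlib JSON here, standing in
// for dag-pb), hashes the bytes, and puts them straight into the store.
func putNode(store blockStore, node any) (string, error) {
	data, err := json.Marshal(node)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(data)
	key := hex.EncodeToString(sum[:]) // CID stand-in
	store[key] = data
	return key, nil
}

func main() {
	store := blockStore{}
	key, _ := putNode(store, map[string]any{"name": "file.txt", "size": 42})
	fmt.Println(len(key), len(store[key]) > 0)
}
```

The point of the shape is that encode, hash, and store are one short path the caller controls, rather than a generic resolver layer sitting in the middle.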
A
We'll see how this turns out, but as a first step I want to get unixfs working, because this is really the core of what IPFS, at least for me, is doing, and it would be a really great step forward to have it using the new libraries that we also want people to use in the JavaScript ecosystem. This is what I'm continuing to do over the next few weeks.
C
Cool. So something that has finally started to shape up is the — I don't want to call it version two of encoding/json, but it kind of is that. It's an experiment to write a new JSON implementation for Go. I've been working on this with a couple of other people for a few months, and we did want to open source it eventually. We're not fully there yet, because we don't want to draw too much attention, but there is a public snapshot on GitHub now, and it's importable as a Go module.
C
So anybody who wants to can play with it right now. It's a full token encoder and decoder, which means you can do fairly low-level things. I think that's going to be interesting mainly for things like dag-json in Go, but also, for example, quic-go has a logger in JSON that has to be really, really fast. Right now it generates a ton of code with an external library, and I think that could be about 50 lines with this new token encoder — and it's going to keep improving.
C
So the design part that I'm picking up is the node-modification APIs in go-merkledag, which is what a lot of the IPFS packages use. I've linked to a HackMD where we're collecting all the modules and packages that use these APIs and interact with them. I'll probably have to sync with Eric on this, because I think he's been designing similar APIs for modifying and navigating nodes — I think they're called the transform APIs, but I might be wrong there.
C
But
I
want
to
talk
to
him
because
you
know,
essentially
I
should
decide.
Do
I
want
to
reuse
his
work
there
if
it's
close
to
being
ready
or
do
I
want
to
do
something
simpler
and
go
merkle
jack
directly
or
something
like
that,
and
that's
it
for
me.
D
Sure. So I'm also part of this group with Daniel that's looking at applying IPLD, and ipld-prime in particular, to IPFS, and I think we're still coming to terms with what we're biting off in that project. There's a lot of cruft: as we look at each git repository we're like, oh, that's still using gx. There's a lot of low-hanging fruit there.
D
So
there's
some
design
work
as
dan
mentioned
that
a
lot
of
this
is
figuring
out
how
to
map
what's
currently
called
a
dag
service
that
combines
a
few
different
things,
including
the
concept
of
how
you
fetch
data
served
generically,
but
also
how
you
add,
and
remove
data
to
your
local
data
store,
which
is
somewhat
of
a
separate
interface,
because
it's
not
necessarily
pulling
from
the
network
in
the
same
way.
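The split described above — fetching (possibly over the network) versus purely local add/remove — can be sketched as two separate interfaces instead of one combined DAG service. The interface and method names here are illustrative, not go-ipfs's actual ones:

```go
package main

import (
	"errors"
	"fmt"
)

// NodeGetter is the "fetch" half: resolving a block that may come
// from the network or from anywhere else.
type NodeGetter interface {
	Get(cid string) ([]byte, error)
}

// NodeStore is the "local mutation" half: adding and removing blocks
// in the local datastore, with no networking implied.
type NodeStore interface {
	Add(cid string, data []byte) error
	Remove(cid string) error
}

// memDAG trivially satisfies both halves with a map; a real node
// would back Get with the network and Add/Remove with the repo.
type memDAG map[string][]byte

func (m memDAG) Get(cid string) ([]byte, error) {
	data, ok := m[cid]
	if !ok {
		return nil, errors.New("not found")
	}
	return data, nil
}

func (m memDAG) Add(cid string, data []byte) error { m[cid] = data; return nil }
func (m memDAG) Remove(cid string) error           { delete(m, cid); return nil }

func main() {
	dag := memDAG{}
	var store NodeStore = dag
	var getter NodeGetter = dag
	store.Add("bafy-example", []byte("hello"))
	data, err := getter.Get("bafy-example")
	fmt.Println(string(data), err)
}
```

Keeping the two interfaces separate lets callers that only mutate local state avoid depending on anything network-shaped at all.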
D
One of the unblocked pieces that I'm hoping to get through this week: there's a bunch of additional codecs that IPFS is able to at least read from and think about in terms of data, including git and tar, and we should figure out how to take that same data and read it into ipld-prime nodes, so that we can still have the network as we move to an ipld-prime world. So I expect to end up with at least the same level of functionality: being able to go from a git repo in the datastore, or a tar on disk, to ipld-prime node views of that same data, so that we're unblocked as we start moving to ipld-prime interfaces. We will, I think, also end up posting a bunch of links in GitHub issues to help people follow along on this work. But the end view, ideally, is that additional IPLD codecs can make their way into IPFS much more easily, which hopefully makes our lives happier.
E
Oh boy, a lot of things happened in the last week, and I didn't keep a day-by-day diary, so I forgot most of them, but a couple of things are interesting to this group. I did start looking at the new JSON stuff that mvdan linked earlier, and it's cool — surprise, surprise.
E
Something I've also been wondering about, and am still wondering about, is whether we'll be able to use the token concept from that library for other, broader work — like using it as the foundation for dag-cbor token handling — and I think that's probably a lot less likely at first glance. I'm going to keep thinking about it, because obviously the more it can be used, the happier I am, but it's just meant to do JSON things, so it wouldn't be surprising if it's not super reusable.
E
Where
that
starts
using
the
token
from
the
json
libraries
we'll
see
all
that
stuff
that
I'm
also
just
looking
at
not
actually
writing
code
on
yet
because
that's
a
whole
endeavor
and
it's
not
published
yet
something
I'd
like
to
ask
for
input
from
other
folks
here
on
is
we've
talked
about
cross
language
fixtures
before
and
we've
got
some
of
them
in
some
of
our
projects
in
some
various
formats
and
I've
started
taking
notes
on
that
with
a
couple
of
examples
of
places
where
we're
doing
it
already
and
thoughts
about
how
that's
working
out
and
some
options
for
things.
E
I
should
think
that
we
should
look
at
going
forward.
Man,
I'm
figuring
how
to
speak
english.
So
there's
a
pr
with
a
bit
of
an
exploration
report
going,
and
I
would
love
comments
on
that.
I've
only
got
a
handful
of
formats
identified
as
possibly
useful
there
so
far,
and
so
it's
just
things
that
I
have
heard
of,
and
there
might
be
more
so
yeah,
take
a
look
at
that.
A
Thanks. I can see that Peter doesn't have an update this week, and next is Carson, who also doesn't really have an update — but feel free to say something about your research.
F
Yeah, not much this week. Our team just came off a big research sprint, so there's not a lot of code progress of any kind. In particular, there are some interesting research findings I could talk about, but they're a little tangential to the discussion here, so I'll leave those. Some work: I met last week with pvh from Ink & Switch and we had a chat about IPLD, because that crew has been kind of IPLD-curious for a while. I think I convinced them to give it a try.
F
So
expect
some
questions
from
that
spectrum
of
the
internet
soon
and
the
cool
earth
thing
on
that
front
is
that
we've
started
to
move
forward
with
some
proposals
around
some
of
the
hash,
consistent,
sorted
tree
work
that
michael
told
me
about,
but
I
think
maybe
alex
or
somebody
who
was
someone
wrote
that
up
anyway.
So
that's
looking
very
promising
for
some
thread,
syncing
stuff,
that
we're
doing
so
that's
cool.
F
Cool, okay. Our team is also actually starting to move — like many people, we're trying to do more of the research stuff in the open, so we've started to move stuff to GitHub Discussions, and that's actually looking pretty good as a way to get stuff out. So I could potentially link some of that once it's discussionable.
A
Thanks. Does anyone else have any updates before we go to the agenda items? Also feel free to add agenda items.
A
Cool, then we will hear about it next week, I guess. Cool, yeah. So I also have a question for you folks about the IPLD-in-IPFS stuff. The first one is about the manipulation APIs you talked about.
A
It
has
something
better
and,
as
I
clearly
move,
js
ibfs
to
the
new
stuff,
it
also
like
kind
of
like.
Does
it
make
sense
to
move
this
old
thing
or
if
you
also
think
about,
should
we
move
this
old
thing?
It
might
make
sense
to
discuss.
What
are
we
gonna
do
with
this.
D
Yeah, so I talked with Adin earlier today, and he was open, on the go-ipfs side at least, to changing that interface. He said it felt like a weird, awkward thing given where we are with ipld-prime — maybe the object API merges with dag, or maybe we deprecate it in favor of dag. Michael has something to say, it seems like.
G
So
we're
we're
entering
into
like
a
real
kind
of
status
quo
bias
here,
like
I
think
that
we
should
actually
frame
two
questions
and
put
them
in
github
issues
and
see
if
we
can
find
any
sufficient
response,
one
for
the
objects,
api
and
one
for
the
dag
api
and
literally
ask
for
anyone
to
defend
it
existing
like.
Can
it
does
anyone
think
that
it
is
a
good
idea
to
still
have
these
and
if
it
is
not
better
to
just
use
the
block
api
in
the
other
apis
like
is
there?
G
Is there a real defense of these things? Because we keep patterning around it — maybe we deprecate it, maybe not — but I haven't heard anyone in years really try to defend them. So let's frame it that way, and then I think by the end of it, it'll be really clear what we need to do. Maybe I'm wrong.
D
Yes,
for
object
for
having
two
of
them.
That
sounds
totally
right
and
even
from
a
command
line
like.
Why
are
you
interacting,
with
structured
data
on
this
inherently
weirdly
sort
of
serialized
command
line
thing
that
also
feels
like
this
weird
disjoint?
You
know
you
could
totally
see
it
would
sure,
be
nice
over
either
the
http
api
or
over
your
internal
like
if
I'm
embedding
as
a
library,
I
probably
want,
like
my
ipld,
dag,
to
be
able
to
go
in
and
out
right,
and
that
is
in
some
sense
like
this
interface.
G
The dag API is like a really bad JSON-y API for creating block data with different codecs. You can always design a better experience by just having the codec in the client and using the block API than you can with the dag API. They're basically one-to-one — there's no additional important feature that the dag API gets you, no special treatment or improvement there.
D
So
so
we
should
think
about
where
these
move
to,
but
like
that
seems
in
some
sense,
like
the
natural
thing
like
there's,
this
very
well
used
and
trodden
block
thing
that
a
lot
of
people
using,
but
if
we've
got
ipld
prime,
we
might
also
expose
it
like
it
sure
seems
useful
to
expose
the
actual
fpld
nodes
right.
That's
that's
the
thing
that
is
like
the
natural
thing
to
work
with,
but
we
need
to
figure
out
how
we
end
up
there.
A
Yeah,
so
I've
also
talked
with
a
ghost
salah
about
it
on
the
javascript
set
of
things
that
he
also
said
basically
yeah,
so
he
would
want
to
yeah
either
move
things
into
the
block
api
having
a
separate
api,
but
also
so
we
had
the
same
discussion
about.
Does
the
deck
api
make
sense?
Does
the
object
api
make
sense
and
just
yeah
but
anyway?
So
it's
good
to
hear
that.
A
Basically,
both
sides
like
the
goal
side
and
the
transcript
side
says
we
need
more
discussions
and
figure
something
out,
and
I
think
the
idea
from
michael
is
good,
because
I
I
also
have
the
impression
because,
like
if
someone
had
questions
about
it-
or
I
always
told
people,
don't
use
the
data
api,
don't
use
the
objects
api
and
I
guess,
like
many
of
us,
did
this
in
the
past
few
years.
So
hopefully
people
just
don't
use
it,
but
I
also
can't
tell
like
it
could
totally
be
that
people
use
those
apis.
A
I
don't
know
cool
yes,
so
who
like?
How
do
we
move
forward
like
who
is
the
owner
of
those
like
opening
those
issues
or
get
the
discussion
started?
I.
G
Yeah, so there have actually been some developments there — this is a bit in flux. I think it was the Vulcanize folks: they just did a whole big document on Ethereum and IPLD, which I believe had recommendations for slight modifications to the way that we did it, or perhaps it just normalized to something that Rod had done before.
G
But
I
think
that
we
have
like
better
documentation,
at
least
on
that
now,
and
I
think
there
is
a
proposal
for
them
to
do
some
grant
work
around
this,
which
may
end
up,
including
either
new
versions
of
those
which
may
end
up,
including
new
versions
of
those.
In
fact,
one
of
their
one
of
the
things
they
brought
up,
that
they
need
is
an
ipl,
the
prime
version
of
a
bunch
of
these
codecs,
and
so
I
think
that
we
may
actually
be
funding
a
grant
for
them
to
do
that.
G
But
I'm
I'm
not
100
of
where
that
landed,
because
there's
a
few
different
grants
that
we
have
with
them,
and
I
don't
know
like
what
the
staging
is.
A
So
I
was
basically
just
wondering
if
we
care,
for
example,
about
like
future
parrot
feature
parity
between
go
in
javascript
and
so
on
about
those
things
that
so
in
my
opinion,
if
someone
wants
to
use
ethereum
with
js
ipfs,
they
just
should
start
from
scratch.
Like
that's
just
the
honest
answer,
because
you
can't
just
build
on
top
of
the
stuff
that
we
currently
have,
and
so
I
would
rather
deprecate
it
and
or
remove
it
from
js
ipfs
and
then
yeah.
A
Yeah, okay. Then for me the action item will be that I talk to the JavaScript side of IPFS about what is up, and whether — if we really do the update to js-multiformats — we take the chance to say, okay, we don't maintain those codecs anymore, or stuff like that.
G
I
don't
know
how
in
the
loop,
you
are
gonna
stay
in
your
new
role,
with
the
vulcanized
people
and
and
the
pm's
over
there
that
are
doing
that,
but
we
should
at
least
get
word
to
them
that
the
javascript
side
of
this
also
needs
some
love
and
potentially
you
know,
maybe
there's
another
grant
that
vulcanize
wants
to
do
if
they
have
some
js
engineers.
G
Yeah — and, I mean, just making sure that the PMs over there on the ecosystem side know that there's a hole here on the JavaScript side, so they can at least be on the lookout for people that might be able to fill it.
E
Something that's probably already a little bit of good news, from what the Vulcanize people are proposing and working on, is that they've started writing nice schemas for the way they're transforming this data. So if, in the future, we do look forward to having somebody take on fresh JavaScript implementations of the Ethereum stuff, that'll probably be a huge help in making sure the implementations actually match.
A
All right, yeah. So those were my questions — or my agenda items. Is there anything else?
H
Okay, I have a question — and don't laugh at me when I ask this — but I was talking about hash-authenticated data structures to someone in healthcare, and they asked the question: what about a collision between two data items? How do you deal with that? I've always kind of answered, "I just trust the math nerds on this", but that doesn't fly so well. So I'm curious if you have a more structured answer, or what you think about that.
G
Yeah,
so
I
mean,
if
you
do
the
math
on
the
collisions,
you
can
tell
them
sort
of
what
the
probability
is
in
practice
like
a
sufficient
hash
function.
If
you
see
a
collision,
it
means
that
someone
is
attacking
the
algorithm
like
like
like
that's
that's
what
that
means.
It
means
that,
like
it's,
been
compromised
and
they're
going
after
your
data
like
like
literally
you
just,
you
won't
really
see.
Given
the
probability,
you
won't
see
natural
collisions
with
a
sufficient
hash
function.
That's
really
the
answer.
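The probability being gestured at here is the birthday bound: for a b-bit hash and n distinct items, the chance of at least one accidental collision is roughly 1 - exp(-n(n-1)/2^(b+1)). It can be worked out in a few lines (the numbers below are illustrative):

```go
package main

import (
	"fmt"
	"math"
)

// collisionProbability approximates the birthday bound: the chance of
// at least one collision among n random values drawn from a b-bit
// hash, p ≈ 1 - exp(-n(n-1) / 2^(b+1)).
func collisionProbability(n, bits float64) float64 {
	// Expm1 keeps precision for the tiny exponents involved.
	return -math.Expm1(-n * (n - 1) / math.Exp2(bits+1))
}

func main() {
	// A trillion distinct items hashed with a 256-bit hash:
	p := collisionProbability(1e12, 256)
	fmt.Printf("%.3g\n", p) // astronomically small
}
```

For SHA-256 the result for even a trillion items is on the order of 10^-54, which is the "you will never see this happen naturally" claim made above, in numbers.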
G
No — so, again, assume that this didn't happen naturally, that it happened because there's an attack: there are going to be ways to put bits in that still look like valid data, so it's not going to crash when it's under attack. The only time it would crash is if it was expecting data to look a particular way and then it didn't, because there was a natural collision — which, again, is like: you're more likely to be hit by lightning while being hit by a bus.
I
Can I put it in terms that your folks will understand? Basically, tell them that every single Bitcoin wallet is secured by the very same stuff, and the fact that there is no theft of Bitcoin left and right means that this stuff actually works — because there are actually people out there trying to guess, you know, hashes to actual wallets, and they're not very successful. So that's your, you know, actual risk scenario.
H
So I think the concern is that healthcare data has certain regulations around it being correct, and if there were a collision, data could potentially be lost. I think that's their main concern, and I'm not sure how to address it, because, I mean, theoretically I could have two medical images, each of which is different but creates the same hash — theoretically. Now, practically speaking, I agree with you guys that it won't happen, but when you're talking to a super risk-averse person, I have to address that in some way. So do they misunderstand?
I
What my example shows is that there are people out there actively looking for collisions, with an immense payout, and they're not successful — and your scenario is, "oh, I'm just going to randomly have a collision without anyone looking for it". So that's basically my argument. It's the same as when they were starting the LHC and asking the physicists, "can't it, like, explode?", and they were like, "well, theoretically it can, like, turn into a black hole, right?". It's basically the same level of stuff.
G
I
mean
I
mean
cosmic
ray
might
set
some
red
flags,
but
I
would
say
that,
like
the
consistency
guarantees
that
they're
used
to
getting
from
writing
this
to
hard
drives,
those
are
much
more
likely
to
like
there's.
Those
hard
drives
are
much
more
likely
to
corrupt
that
data.
In
fact,
like
we
know
that
there
is
like
a
definitive
like
corruption
rate
of
these
hard
drives
like
it
is
just
yeah,
it
is
it's
more
likely
that
that
would
corrupt
than
there
would
ever
be
a
collision.
H
Okay, that's helpful. I think that's the way I have to think about it: there are lots of probabilities of errors, and this one is just — here's how it relates to the other ones that you're worrying about. The way I thought about it quickly, on my feet: I basically said, oh, we'll just actually generate a hash using two different hash functions. The probability of both of those colliding for the same piece of data is, like, impossible.
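H's two-hash idea, just as a sketch: pair two independent digests over the same bytes, so an accidental combined collision would require both functions to collide on the same pair of inputs (the function name and format are made up for illustration):

```go
package main

import (
	"crypto/sha256"
	"crypto/sha512"
	"encoding/hex"
	"fmt"
)

// dualHash pairs two independent hash functions over the same bytes.
// For the combined value to collide accidentally, BOTH functions
// would have to collide on the same pair of inputs.
func dualHash(data []byte) string {
	a := sha256.Sum256(data)
	b := sha512.Sum512(data)
	return hex.EncodeToString(a[:]) + "-" + hex.EncodeToString(b[:])
}

func main() {
	fmt.Println(dualHash([]byte("medical-image-a")) !=
		dualHash([]byte("medical-image-b")))
}
```

Whether that actually buys anything beyond a single strong hash is debatable (as B notes next, a single 256-bit hash is already far beyond visualizable odds), but it can be a useful rhetorical device for a risk-averse audience.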
B
This is one of those problems where probability becomes hard to visualize. It's like the chessboard thing: there are more possible chess moves than, I don't know, atoms in the universe or whatever the stat is — something ridiculous. You've got this chessboard in front of you, and there's this probability statement about how you can't statistically produce all of the possible moves on this small chessboard in a reasonable, measurable set of outcomes — and that's really hard to imagine.
B
But
that's
the
kind
of
communicating
you
need
to
do
here
is
look.
This
is
it's
seen
on
the
face
of
it.
It
sounds
ridiculous,
but
the
the
probabil,
the
probability
underlying
all
this
stuff
is,
is
so
extreme
that
it's
just
you
know
unlikely
to
happen
in
your
lifetime.
A
No
cool,
then
I
closed
the
meeting,
and
so
thanks
everyone
for
attending
and
see
you
all
next
week
again
goodbye.