From YouTube: 🖧 IPLD Every-two-weeks Sync 🙌🏽 2021-10-25
Description
A meeting every two weeks to sync up on all IPLD (https://ipld.io) related topics. It's open to everyone and recorded. https://github.com/ipld/team-mgmt
A
Welcome, everyone, to this week's IPLD sync meeting. It's October 25th, 2021, and as every two weeks, we go over the stuff that we've worked on and then discuss any open agenda items, or just answer questions, if anyone has questions.
B
Thank you for the dogs, they are important. So, I have continued to be slogging along on the storage APIs that go-ipld-prime offers and asks for. That's something that, I want to confess, probably sounds boring, but it's also a really important thing to have been working on, because what we are starting to get to now is clearing a bunch of roadblocks that kept people on old interfaces.
B
So this week, some of the new stuff is: there are now adapters to any of the old generations of storage APIs that you might have been used to from IPFS land. The go-datastore APIs have an adapter now; the blockstore APIs have an adapter now; even go-blockservice, which is not really a storage layer, but is indeed a wrapper for the entirety of Bitswap.
B
There is now an adapter to that, which makes it act like the new go-ipld-prime storage APIs. So you can plug in any of those things and have them do put and get, meaning you can use the LinkSystem store and load methods for whole nodes and graphs, and so on, and it'll go all the way down to this layer and then shell out to the old code.
B
On my machine — non-scientific, you should try it on different file systems and kernels and things, I am sure — but it is noticeably faster. It also has about 40 fewer allocations in the path, for some reason. That was also my first try; the ceiling may be considerably higher. I have done no targeted optimization work on that whatsoever.
B
The benchmarks are hours old; I have done no pprofs, nothing. So the odds that there is low-hanging fruit to improve further on that already pretty decent performance increase are, I think, really high, so that'll be cool. I don't know if I'm going to pick that up, but if anyone else out there wants to go fast, there's a new place where you might be able to go very fast indeed.
B
It gains the ability to trigger the codegen tools for Go, so you can now finally have that in a nice CLI tool that you can, like, pick up anywhere and run without a fuss. Edem made a PR for that, and it was really small and landed really easily. It only took a little bit of naming bikeshedding, and the code was brief; I just had to plug things together. So that was a huge victory, and it was really awesome that somebody was able to contribute that, and that it worked, and everything just came together.
B
It's
a
good
day
to
be
working
on
ipld
there's
also
a
huge
amount
of
progress
that
comes
from
mv
dan,
who
is
often
in
this
call,
but
isn't
around
today
on
the
bind
node
apis
in
goley,
which
have
gotten
way
more
stable
and
easy
to
use.
If
you
hold
them
wrong
previously,
it
would
sometimes
panic
now
it
mostly
doesn't.
B
It will check that your Go types are correctly formed for the schema that you're trying to match with them, right away, and it'll give you reasonable error messages right away, meaning no panics later. Way better. There's also a huge amount of new support for the schema DSL in Go, also from mvdan: the syntax around the various representation strategies is now supported.
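For context, the representation-strategy syntax in the schema DSL looks roughly like this (an illustrative fragment; see the IPLD Schemas documentation for the full grammar):

```ipldsch
type Point struct {
  x Int
  y Int
} representation tuple

type Entry struct {
  key String (rename "k")
  value Int
} representation map
```

The `representation` clause after the type body is the part whose parsing support in Go is being described here.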
B
I
think,
for
almost
everything
I
don't
have
like
a
future
table
in
front
of
me,
but
from
what
I
remember
in
the
code
view
it's
darn
near
everything
I
think
there's
a
couple
of
things
around
any
which
are
still
pernicious
and
need
a
little
more
work,
but
you
can
use
it
on
the
majority
of
schema
syntax
now,
and
it
should
do
the
right
thing,
and
that
is
also
showing
up
in
the
ipld
tool.
As
mentioned,
that
has
all
these
other
features
like
can
now
run
the
code
gen.
B
So this stuff is finally getting to the ease of use that we're hoping for. Check it out. All right, that's it for me.
C
Oh, well, Eric, you forgot to mention the test fixtures in the ipld repo for schemas that you've been testmark-ifying. Surely that goes — that goes along with this; it deserves a mention. So true. How close are we to — how close are they to getting those finished? I don't think I've actually looked at the latest PR. Okay.
B
Anyone out there who is developing, especially CLI applications, in Golang: the testmark framework has gotten more and more features to make that easy. I'm basically building these so that the ipld CLI tool can use them, and because I'm getting up to the point of doing the stateful stuff in the ipld CLI tool, I added a bunch of new features to testmark to help you test things with on-disk state.
B
This is something that I'm doing only on the Go implementation so far, and it's purely conventions, so the spec — the syntax — isn't extended at all. It's just that special naming conventions trigger behavior, but it's really useful. So anybody who wants to use that sort of thing, have fun with it. I'm having a good time with it. All right, I'm done.
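A testmark fixture is just markdown with specially named code blocks, roughly like the following. The block names and command here are made-up placeholders for illustration, and the on-disk-state naming conventions mentioned above are Go-implementation conventions layered on top, not part of this syntax:

````markdown
[testmark]:# (example-test/script)
```
ipld --help
```

[testmark]:# (example-test/output)
```
(the expected output would be captured here)
```
````

Because the fixture is plain markdown, it renders readably on GitHub while still being machine-checkable, which is a large part of the format's appeal for CLI testing.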
C
Okay, mine's easy. It's just — it's been... last week? Was it last week? All right, maybe the week before — on the DAG API for the js-ipfs stack. So we did that work for go-ipfs, which is in v0.10.0, and it's got a bunch of breaking changes in it for the DAG API, which are really nice — the changes that we got in there are really nice.
C
But then everything broke for everyone, particularly people using the HTTP client from JavaScript against their go-ipfs back ends, and now we've got people stacking up saying, "yeah, this is not working for me." So I did compatibility work there. Unfortunately, with js-ipfs, even if you just want to fix the client, you've got to fix everything. So the client, the server, the js daemon — everything had to get fixed for this to work. But anyway, that's done; it's waiting for review.
C
I think Alex is on a break. So, unfortunately, there are some people who are after those changes, but they're on a branch, and there might be some tweaks before it gets landed, because there's still some discussion about exactly what should be happening. The js-ipfs stuff with the DAG API actually leans a lot on the block API to do stuff internally, rather than deferring to the back end, which is apparently for performance reasons, but I'm not sure they apply anymore. Anyway.
C
The other thing was: there's been a collaborator showing up in the js-multiformats repo, and we've been having some interesting discussions with him on a variety of topics. The most interesting one, I thought — I actually haven't put a link in there — was the multibase: the identity multibase. So, that's —
C
You know, instead of an encoding — it's like, what would it be? It's base-256: the identity multibase. In JavaScript, it does a UTF-8 conversion, like we do with all other bytes-to-strings, and then it gets into those problems of "sorry, we can't represent your bytes, because they're not valid UTF-8." But he raised that, for identity, that doesn't really apply — you're not actually after UTF-8.
C
Actually,
after
a
string
that
matches
the
bytes,
and
we
can
actually
do
that
in
javascript,
we
had
a
discussion
about
the
mechanics
of
that
and
I'll
put
a
link
in
there,
and
so
I
might
actually
go
and
do
that
just
for
multi-base.
The
challenge
is
to
make
sure
it's
absolutely
clear
that
this
only
applies
to
these
situations.
Where
we're
not,
we
don't
have
a
utf-8
contract,
because
we
don't
want
this
code
being
used,
say
in
a
ipld
codec,
where
you
want
to
get
the
string
form
of
some
bytes.
C
We
actually
do
want
to
do.
Utf-8
probably
utf8
conversion
there,
but
it
also
has
me
thinking
about
the
possibility
of
doing
some
something
that's
closer
to
what
go
does
with
strings
and
bytes
in
javascript
and
just
bypassing
the
the
default
utf-8
parser
and
actually
implementing
one
that
does
it
in
the
same
way.
I
just
don't
know
what
the
implications
of
that
are,
but
that's
a
separate
issue,
but
basically
we're
back
at
this
whole
utf-8
string
thing
so,
if
anyone's
interested
in
that,
that's
the
discussion
linked
in
the
notes.
E
Now that we have indexed CAR files, you can create a CAR file from a unixfs file or directory, or multiple, and it works like a tar file, and it does sharding and such. You cannot yet extract a CAR file back into files, but you can split it based on sub-DAGs and things like this, so you can do some of the splitting that you might want to do — not comfortably, so there'll be a bit more, we're thinking.
E
Some
extraction
would
be
useful
if
it
does
happen
to
be
unixfs
and
then
also
being
able
to
take
a
car
file.
That's
an
arbitrary
car
file
and
get
the
like
com
key
calculated
hash
out
of
it,
because
we
can
from
any
car
file
stream
the
canonical
version
of
that
car
file,
which
just
orders
the
box
in
a
select
or
traversal
order.
E
The other thing that we've been working on is an IPLD graph — DAG — synchronization data transfer protocol. There's a library called go-legs that does the legwork of synchronizing a DAG between a producer and consumers. That has previously worked over graphsync and gossipsub: gossipsub announcements of changes will trigger graphsync pulls of the delta parts of the DAG that the subscriber does not yet have.
E
We now also support an HTTP-based transport for that. What that allows is that the producer doesn't have to actually be active: it can just be some HTTP server somewhere — it drops data into, like, an S3 bucket or some static web server, and then the subscribers can pull from that as well. So that's cool. One of the things that that meant was we have to understand the conversion between multiaddresses and URLs, and it turns out we hadn't actually dealt with that one yet.
D
Yeah, good to see you guys; I haven't been on in a while. So, I mostly work on the W3C DID specification — the DID and Verifiable Credentials — and then, for better or for worse, the SMART Health Cards, which is the COVID credentials, which are controversial, I suppose, but still interesting.
D
Now, I was working on the dag-cbor representation of the DID document that would play nicely into the IPID DID method that I created, but we compromised, via politics, to a plain CBOR representation — I put the link in the notes — and I'm still trying to advocate for how it could be, like, a DID dag-cbor core representation, given all the goodness of IPLD, but politically that's challenging. You've probably heard, guys, that the DID specification, version —
D
1.0 got formal objections from Google, Apple and Facebook, and so it's working its way through some political challenges. And, you know, actually even Mozilla also had a formal objection, but I think Mozilla's objection was actually well-founded, meaning that a lot of the DID methods were divergent in their technologies, and not convergent.
D
And
so
I
think,
and
a
lot
of
my
reservations
about
like
we
have
all
these
different
blockchains
and
all
these
different
people
with
150
different,
did
methods
and
ultimately
150
different
protocols,
and
so
you
know
I
was
still
advocating
hey
ipld
as
being
the
narrow
waste
of
the
protocol.
If
we
could
all
just
talk
that
then
actually
there's
interoperability
from
the
get-go
now
the
challenge
still
is
that
getting
the
multi-codex
work
out
of
the
get
repo
and
how
to
formalize.
D
That
into
registry
is,
I
think,
still
needing-
and
I
think
that's
you
know
now.
Since
ipfs
scheme
is
registered,
you
know
types
of
things
of
formalizing,
some
of
the
registries
of
the
multi-codecs
and
ideally
through
ietf,
which
would
be
my
preference,
not
w3c,
and
then
I
saw
the
the
call
last
week
with
stephen
about
the
ipld
wasm
work,
and
I'm
super
excited
about
that
and
I'm
diving
into
looking
at
this
stuff
and
just
right
now.
D
It
mostly
looks
as
though
it's
some
markdown
and
some
some
notes,
and
so
I'm
really
gonna
just
dive
into
more
juicy
detail
of
some
of
that
work
and
and
just
my
day,
job
is
mostly
still
working
on
data
modeling
and
healthcare
applications.
I
I
work
in
ipld
and
how
to
model
that
in
for
machine
learning
and
just
ask
me.
D
Thank you for all the great work — cool. Yeah, I think, for three bucks, we were working not only on go-jose, but also the CDDL for COSE — let's say both: JOSE and COSE, so the CBOR representation, and COSE. And I think I'm favoring CBOR, just because of the more deterministic representation versus all the nuances of JSON. So — but, super cool.
C
So — so, Johnny, regarding multicodec — sorry to quickly divert — but regarding the multicodec stuff and specification: we did start an IETF —
C
I think it was an IETF registration thing for that, a proposal for that, I think two years ago. So there is draft stuff to try and do that; we never pushed it over the line. But if that's really needed, then maybe we should have a discussion about reviving that, and seeing how PL can help push that forward. So, be in touch if that's a pressing concern.
D
Yeah, I think it's just — a lot of the pushback that I got on the W3C side was, like: ultimately, the political battle of the internet standards bodies not recognizing each other's work — IETF not recognizing W3C work, and then W3C not recognizing IETF — like, who's the authority? And so, I think — my — just now...
D
My experience is that I think IETF is basically just a lot easier and more straightforward, and it's just — to create a registry, just like registering the scheme representation. And, I think, if it just needs, you know, busywork to actually get it over the finish line, I'd be happy to help out. I'm just curious, as far as, like, as new things are added to the registry, what that process looks like.
C
The process could be formalized, but it's — yeah. The problem we have is that, because the registry covers so many different types of things, the discussions are always very different. "I want to add a multiaddr," or "I want to add a hash," or — you know, every discussion is weirdly different in some novel way.
C
So we have this prioritization problem right now, which is that this stuff is maintained, but it's not highly prioritized. But if we have something like DID, or some other standard that was important, then we could probably easily get it reprioritized, because getting some of this stuff standardized has been a desire for a long time. So, if there's a path to getting the priority bumped up, then that might happen, and then we might get some more eyes on it. And, yeah.
A
Okay,
is
there
anything
else,
I
don't
see
any
agent
items
or
questions.