From YouTube: 🖧 IPLD Every-two-weeks Sync 🙌🏽 2021-05-10
Description
An every two weeks meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
Welcome everyone to this every-two-weeks IPLD sync meeting. It's May the 10th, 2021, and as every week, we go over the stuff that we've worked on and then discuss any open agenda items.
B
Sure. I think I'm getting ready to unveil and declare that we have a new unified site for documentation and specifications, and pointers to libraries, pointers to design docs, and everything else. This is the combination of content from the old ipld/ipld repo, the old docs, the old specs repo, and a bunch of other stuff that has been just floating around in the notes document. I have a link that is on Fleek, where I've been hosting the static site build, and Fleek is kind of a cool service.
B
I guess I talked about that in the past. It's still there, it still works, it's nice. The source for this is in the ipld/ipld repo right now, and it's on a branch called 2021 if you want to check out the source. So that is where that is living right now.
B
It is now, but I have not ported a hundred percent of the content. So the way I'm going about this is: in the source there's a _legacy directory, and I put all the old repo content in there, and I'm basically deleting it out of that directory as I put it into the new site, to make sure that I don't lose anything. There's a bunch of stuff that's still in the legacy directory, but, for example, all the schema specs have come along.
B
All of the codec specs have come along; a lot of stuff has come along already, and I'm gonna just keep grinding away on that. But I think at this point it's more complete than any of the other things that we pushed to the web before; the bar there isn't super high, unfortunately. So if you want to take a look at that and scream about anything that's missing, I think you can do that now.
C
Yeah, I guess I can. This is somewhere in between a question and an update. So, as you may know, we've been spending a while doing an ipld-prime integration into go-ipfs, and that is continuing to go along reasonably well.
C
In the last couple weeks we were able to run the branch that is using ipld-prime under the hood on some of the gateway machines that Protocol Labs runs, and took profiles of them, and found that there weren't any noticeable performance issues due to the use of ipld-prime at the data layer. That was good, because it unblocks the need for performance debugging at this stage. So really the only thing we're trying to get finalized, before we're happy merging this into go-ipfs and getting it into a release, is the interfaces for how you access ipld-prime data in ipfs. The current interface that we're targeting is: there's a subcommand called ipfs dag get, and an ipfs dag put, where you put in data and get data out, and we have these set so that they can use ipld-prime codecs as the data formats, both in and out. And so one of the questions that we're still working with ipfs on in some sense, and that I think is sort of the only unresolved design thing in my head, is what happens when you go to get data back out.
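The round trip being described looks roughly like the following. Note this is an illustrative sketch: the exact flag names for selecting input and output codecs have changed across go-ipfs releases (`--format`/`--input-enc` in older versions, `--store-codec`/`--input-codec`/`--output-codec` in newer ones), so check `ipfs dag put --help` on your version.

```shell
# Store a small document via an IPLD codec; prints the resulting CID.
CID=$(echo '{"hello":"world"}' | ipfs dag put)

# Read it back out, decoded through an IPLD codec on the way out.
# (Flag name varies by version; --output-codec is the newer spelling.)
ipfs dag get "$CID"
```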
C
The way we've done that is: there's a sort of generic library, in a separate repo from ipfs, called go-ipfs-cmds, which is the command-line-type thing that ipfs sends stuff through. It already knows how to serialize anything coming out of ipfs as json or xml, using Go's default serializers; it's got a few of these sort of hand-built serializers. And so what we added as our first pass was: if it's not one of the four or so hand-built serializer things that this go-ipfs-cmds library knows about, but you specify either a codec number or a named multicodec that there is an ipld-prime codec for, it'll see if what it's been given is an ipld node, and if so, it'll pass it through the codec and then give you that.
C
But one thing that's true is: the data being passed to that commands library is not just dag get data; in fact it's any data coming out of ipfs, and some of that data is not going to be an ipld node. It's going to be arbitrary structured data that the default json serializer is using reflection on to turn into json. And so we're sort of trying to understand.
C
Should we take this fallback, or this secondary use of ipld-prime codecs, and make it only applicable on the dag get method, where we know what we're getting out is an ipld node? Or do we want to integrate the reflection work, which I believe is being worked on, so that when we're given some struct that is not already an ipld node, we turn it into an ipld node and then pass it to the codec? So that, whatever sort of comes out there, we can turn it into an ipld-prime node and then serialize it in the format you asked for, if you ask for an ipld-prime codec serialization. And so then, you know, the upshot is: okay, you got some structured data out of a direct ipfs get or something, and you'll be able to encode that as json or cbor or whatever your other custom codec is, which is sort of interesting.
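The fallback being debated can be sketched as follows. This is a toy illustration only: the `Node` interface, the codec registry map, and `encode` are simplified stand-ins invented for this sketch, not the real go-ipld-prime or go-ipfs-cmds APIs.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Node is a stand-in for the ipld-prime Node interface
// (hypothetical simplification for illustration).
type Node interface {
	AsMap() map[string]interface{}
}

type mapNode struct{ m map[string]interface{} }

func (n mapNode) AsMap() map[string]interface{} { return n.m }

// codecs maps multicodec names to encoder functions, standing in
// for a registry of ipld-prime codecs.
var codecs = map[string]func(Node) ([]byte, error){
	"dag-json": func(n Node) ([]byte, error) { return json.Marshal(n.AsMap()) },
}

// encode mirrors the fallback described in the call: if the value is
// already a Node, pass it straight to the named codec; otherwise try
// to bind the arbitrary structured data into a Node first (here only
// plain maps; the real work would use the reflection-based binding
// discussed later in the call).
func encode(v interface{}, codecName string) ([]byte, error) {
	n, ok := v.(Node)
	if !ok {
		m, isMap := v.(map[string]interface{})
		if !isMap {
			return nil, fmt.Errorf("cannot bind %T to a node", v)
		}
		n = mapNode{m}
	}
	enc, ok := codecs[codecName]
	if !ok {
		return nil, fmt.Errorf("no codec named %q", codecName)
	}
	return enc(n)
}

func main() {
	out, err := encode(map[string]interface{}{"hello": "world"}, "dag-json")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```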
C
And you get the benefit that you've got ipld-prime codecs now at this generic layer, where all commands get them, which is cool. But is that what the reflection work was meant for, and is that subverting this whole thing? So that's a question that Eric maybe has opinions on, and that Eric and ipfs should come to consensus on, and then we will do whatever the consensus turns out to be.
C
No, okay, I have questions, because it's still... When we're doing it on the command line with dag get, it is an ipld node. It has not been serialized and recreated; it's the direct node, right, that it gets. It gets the direct thing that the actual command chooses to output. But it is one of two options; or rather, I'm using it in one of a few different ways.
C
But instead you have the ipfs HTTP API, which does its own serialization of stuff over an HTTP wire. And then you have a files client that either exposes as a separate process that's consuming ipfs over the HTTP API, another Go library that does a secondary serialization there; or you can have a direct thing. That gets us that, like, there's a light client that talks to the real ipfs over the HTTP API, for cases where you've got a long-lived daemon process separate from the one that you're using to interact with it from the command line.
B
So, to topic switch to this reflection thing that Daniel has been working on: I forget how much we introduced that in the previous calls. I guess that probably started since the last time we had a bi-weekly call; things move fast. Daniel, who's not here in this call right now, started working on an implementation of the Go interfaces for ipld-prime, to make a node which is implemented using Go reflection.
B
So you can sort of bind it to your own implementation of, like, however you want to define your Go structs, and if you want to morph them into nodes, you can do it. And so this is going to be a third major way to make nodes. The options you've got right now are: there's this basic node implementation, which is general-purpose but not efficient, because it just uses untyped maps and lists; and you can use the codegen, which is super efficient, but then you're using codegen, and there's the barrier to entry.
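The reflection idea described above can be illustrated with a toy conversion using Go's `reflect` package. The real implementation in go-ipld-prime implements the full Node interface (lazily, without this eager copy); `toNodeData` and the `Person` type here are invented for this sketch.

```go
package main

import (
	"fmt"
	"reflect"
)

// toNodeData walks a Go value with reflection and produces the plain
// map/list representation that a basic, untyped IPLD node would hold.
// This is an eager toy conversion for illustration only.
func toNodeData(v interface{}) interface{} {
	rv := reflect.ValueOf(v)
	switch rv.Kind() {
	case reflect.Struct:
		// Each struct field becomes a map entry keyed by field name.
		m := map[string]interface{}{}
		t := rv.Type()
		for i := 0; i < rv.NumField(); i++ {
			m[t.Field(i).Name] = toNodeData(rv.Field(i).Interface())
		}
		return m
	case reflect.Slice:
		// Slices become lists, converting each element recursively.
		l := make([]interface{}, rv.Len())
		for i := 0; i < rv.Len(); i++ {
			l[i] = toNodeData(rv.Index(i).Interface())
		}
		return l
	default:
		// Scalars (strings, ints, bools, ...) pass through unchanged.
		return v
	}
}

// Person is a hypothetical user-defined struct, written without any
// codegen or tags, that we want to treat as an IPLD node.
type Person struct {
	Name    string
	Friends []string
}

func main() {
	fmt.Println(toNodeData(Person{Name: "ada", Friends: []string{"bob"}}))
}
```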
A
B
This lets you take structs that you had already written and, using reflection, or possibly even struct annotations that happen to power reflection later (the library might conceal that; we'll see how it goes), use your own Go structs and get nodes out of them. So it'll be a lot closer to how the Go standard library's json package works.
B
We don't need tags, I don't know. There's also, previously, we've had the refmt library, which let you do all the configuration for how to map a struct into something that is functionally the precursor of ipld nodes, but wasn't called that; and that library let you do all this stuff programmatically. So, if anyone has seen that, we could do that approach again; but tags, and having the library do a bunch of stuff for you, is also often practical.
B
And I don't know if I want to report on the progress bar too much, since the person doing the work is not in the video call, but we've been talking about it a lot and it seems to be going well. I think the current implementation handles all of structs already, basically. This will be kind of fun to document.
B
I think Go structs, which map to schema structs, are implemented and, most interestingly, passing tests. It turns out that the tests that we have for the codegen stuff are successfully being reapplied to this new work. So, like, I'm really happy about that; I tried to do that and wasn't sure that I'd succeeded, so I'm really glad it's working. It means that when we say things are spec-compliant and, like, feature-matching codegen, we'll actually be reasonably confident. That'll be nice.
B
The last I heard, he's still working on the unions, and he's optimistic that that's going to go really fast, and I am not, because unions surprised me with how complex they were when I added them in codegen. So we'll see; maybe it'll be easier the second time. We'll see, but there's exciting stuff on the horizon anyway.
A
Cool. Then I close the meeting. So goodbye everyone, and see you again in two weeks.