From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2020-11-16
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
Welcome everyone to this week's IPLD sync meeting. It's November the 16th, 2020, and, as every week, we go over the stuff that we've worked on in the past week and then discuss any open items. Let me just open the hackpad... yeah. So I'll start with myself.
A
So I've been working on lots of stuff on the Rust side of things still, and what I'm currently doing, which is also the basic plan for this quarter, is to concentrate on getting people that use the upstream Rust stuff actually using it. There were major changes on the multiformats stuff, and now I'm helping those people upgrade. Of course, ideally they'd upgrade themselves, but as it's really quite a huge breaking change, I help them, because I also want to see if things can still get smoother.
A
So, for example, for libp2p it was too bad a breaking change, and then they helped me to basically figure out: oh, how can I make it smoother for them and release the new version? So I really want to take the chance to get the upgrade path smoother, and, yeah, things just smoother for everyone. So I help them, and so far that's libp2p and also Forest.
A
Forest, the Rust Filecoin implementation, has already upgraded to the latest multihash and was upgrading to the latest CID version, and they even see performance improvements, which is great, basically just due to the upgrade. So the next stuff is obviously rust-ipld itself, and then the IPFS implementations: ipfs-embed and rust-ipfs will be the next candidates I will look into. And then, it's not highly related to IPLD, but still, I'm working, for the Filecoin stuff, on getting code coverage for Rust projects.
A
Code coverage for Rust stuff is currently a sad story, but last week there was a new PR merged into Rust which improves the coverage tooling, and I will look into that as well. Of course, then I could do it for Filecoin, but it's obviously also beneficial for any Rust project we have. So let's see how this goes. And this week I'm also preparing a talk; it's exciting that this week we have two IPFS meetups.
A
On Thursday there is the Munich IPFS meetup, the virtual one, and I will give a talk about IPLD. It will be kind of a broad overview of everything, so it really goes from multiformats to schemas. I already promised that I'd also get into schemas, so I might ping some of you folks to get some more fun stuff, because it's really meant for people that basically know nothing, but also for people that have already used IPLD, to get excited about the new stuff we're building. So perhaps I'll even get into the GraphQL stuff, which is kind of cool; it's also related to schemas, and, yeah, seeing basically where all this could go might be interesting for them.
A
So this is what I'm also working on this week. Yeah, I think that's pretty much it. Next on my list is Daniel.
B
Cool. So I finally merged the first version of go-multicodec with the table of contents that we spoke about a bunch of weeks ago. It's taken a while, but it was a repo that had some users and contributors before, and I just wanted to make sure I wasn't going too fast in case anybody else had any thoughts.
B
There's a second pull request now to essentially polish the API a little bit. I had already reviewed the original author's code a bunch of times before it got merged, but there were a few remaining things, such as the names of the constants, which were still not very nice. So I simplified that a little bit, and once that's reviewed and merged, I'll tag the first release and then we can actually start using it in the rest of the Go repos.
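For a sense of what the generated table gives you, here is a minimal sketch of the kind of API go-multicodec exposes: one typed constant per multicodec table entry plus a name lookup. The `Code` type and `String` method here are a self-contained illustration, not the actual generated package; the two numeric values shown are the ones registered in the multicodec table.

```go
package main

import "fmt"

// Code mirrors the shape of a generated multicodec constants table:
// a typed integer per table entry, with a human-readable name.
type Code uint64

const (
	Sha2_256 Code = 0x12 // "sha2-256" in the multicodec table
	DagCBOR  Code = 0x71 // "dag-cbor" in the multicodec table
)

// String returns the registered name for a code, so call sites can
// use typed constants instead of magic numbers or string literals.
func (c Code) String() string {
	switch c {
	case Sha2_256:
		return "sha2-256"
	case DagCBOR:
		return "dag-cbor"
	}
	return "unknown"
}

func main() {
	// A consumer can switch on typed constants rather than raw uint64s.
	fmt.Println(DagCBOR, uint64(DagCBOR))
}
```

The point of the table of contents is exactly this: downstream Go repos reference one shared, generated set of constants instead of each hard-coding their own numbers.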
B
I also continued with the HAMT, so now it follows links, and it also knows how to create links when you create maps that are too big for a single node, and stuff like that. I'm now working on node copying: right now, when I modify a node, I just modify it in place, which is fine for the tests but not for much else. So I'm working on that, and the only blocker I've got there... I shot Eric a question a little bit ago.
B
It's not clear to me how the API should look for this stuff, because if I modify a node that points to another node, I can just modify that link and that's fine, but the root node I can't just modify in place. If I make a copy, I need to give this copy back to the user somehow. So it's not clear to me what the Go API for that should look like.
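One shape the answer could take is a functional, copy-on-write API, where every modification returns a fresh root and the caller must keep the returned value. The sketch below is a hypothetical toy, not the real go-ipld-prime HAMT API; it just illustrates the "hand the copy back to the user" contract being discussed.

```go
package main

import "fmt"

// Node is a toy stand-in for a map-shaped IPLD node.
type Node struct {
	entries map[string]int
}

// New returns an empty root node.
func New() *Node { return &Node{entries: map[string]int{}} }

// Set does not mutate the receiver: it copies the node and returns a
// new root. Old roots stay valid as immutable snapshots, which is one
// possible resolution of the "who owns the copy?" question.
func (n *Node) Set(k string, v int) *Node {
	out := &Node{entries: make(map[string]int, len(n.entries)+1)}
	for key, val := range n.entries {
		out.entries[key] = val
	}
	out.entries[k] = v
	return out
}

// Get reads a value without any copying.
func (n *Node) Get(k string) (int, bool) {
	v, ok := n.entries[k]
	return v, ok
}

func main() {
	root := New()
	root2 := root.Set("height", 42) // caller must keep root2
	fmt.Println(len(root.entries), len(root2.entries))
}
```

The design cost is visible in the usage: callers who forget to capture the return value silently lose their write, which is one reason the right Go API shape here is genuinely unclear.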
B
I've also been helping Eric review and merge some changes to ipld-prime, because he's been posting a bunch of pull requests for review to the repo over the past few weeks, and it's quite a lot of code, so we're figuring out how to get that reviewed properly.
B
And, lastly, I'm giving a talk at a Go meetup tomorrow, the Belfast one. It's going to be about the Go build cache, so it's not going to be IPLD related, but if anybody's interested, I think it is somewhat relevant, especially for some of the more complex repos like ipld-prime, where the tests do a lot of funky stuff and then maybe the cache doesn't work very well with that kind of thing. So it might be interesting. And that's it.
C
So, it's one of those weeks where the accounting for it doesn't feel like I'm capturing all the stuff I did, but I think it's just because this Filecoin chain data IPLD data structure stuff actually was a lot of work. I went back through and combed through all the code and pulled out types and documentation as well, so there's a lot of inline documentation and a lot of structure in the linked pages that I've put in the notes.
C
So there's the main chain, and then there are the messages, which also have returns, and they're all versioned as well. It's been quite educational going through it. There's additional stuff in there too: the actor types are identified with a code, and I've generated all the different codes in the documentation, so you can actually use them.
C
Actually, if you were to build this, you could hardwire in the byte strings to check for the codes. And there's a bunch of places where there's just additional documentation put in, describing what things are. So I think it's a good resource; it's a good start, anyway, for getting this documented. The next step is probably to restructure the ADL stuff to get that out of the way, because it is mixed up in between, but it does need a nice way to do that.
C
So it's not completely hidden yet, and there were also a couple of minor things. But what else did I do? The syntax highlighter in VS Code for IPLD Schemas, which is quite nice to use when you're doing pages of schemas, so I'm happy about that.
C
Now the only place that... oh, there are two places that still need syntax highlighting: one is Vim and the other one is GitHub, which is not going to get there. So that's really all for me.
D
Sorry about that. Okay, yeah. So, I continued on GraphQL codegen off of an IPLD schema. That's now in a place where people can play with it, which is cool. There's ongoing work to make it faster and to figure out all of the various... there's a bunch of design stuff still. For example, what methods do you want when you've got a map? Because a map is not a thing that GraphQL has; for whatever reason it just has structs, but you can define methods on them.
D
So I left a few notes, probably mostly for Eric, on various design things and how they mesh between GraphQL and the IPLD codegen type schema. In particular, it's very frequent that you want to do paging, where you've got different requests coming in and those want different parts of a list or a map, and right now the only thing that we expose on ipld.Node is an iterator. That means we either have to statefully keep that iterator between different requests, somehow, to know where it is, or we have to start again with a new iterator, or somehow keep our own sense of how to get...
D
You know, the second 10 items in a map or a list. And especially in a list, I mean, I can get by index and hope that that works, but we should think a little bit about whether there's a way to have cursors or some of these other standard things for making selections. That's worth it, you know, for making people's lives easier in that sense.
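To make the trade-off concrete, here is a hedged sketch of faking a cursor on top of a forward-only iterator by re-walking and skipping. Everything in it (`Entry`, `Iterate`, `Page`) is hypothetical and not the ipld.Node API; it shows why a restart-per-request approach costs O(n) per page, which is exactly the concern above.

```go
package main

import "fmt"

// Entry is a toy key/value pair, standing in for a map entry.
type Entry struct {
	Key   string
	Value int
}

// Iterate is a stand-in for the forward-only iterator that ipld.Node
// exposes today: you can only advance, never seek.
func Iterate(m []Entry) func() (Entry, bool) {
	i := 0
	return func() (Entry, bool) {
		if i >= len(m) {
			return Entry{}, false
		}
		e := m[i]
		i++
		return e, true
	}
}

// Page returns up to limit entries that come after cursor (the last
// key the client saw), plus the cursor for the next request. Because
// the iterator cannot seek, every page restarts from the beginning
// and skips: O(n) work per page.
func Page(m []Entry, cursor string, limit int) ([]Entry, string) {
	next := Iterate(m)
	var out []Entry
	skipping := cursor != ""
	for {
		e, ok := next()
		if !ok {
			break
		}
		if skipping {
			if e.Key == cursor {
				skipping = false
			}
			continue
		}
		out = append(out, e)
		if len(out) == limit {
			break
		}
	}
	nextCursor := ""
	if len(out) > 0 {
		nextCursor = out[len(out)-1].Key
	}
	return out, nextCursor
}

func main() {
	m := []Entry{{"a", 1}, {"b", 2}, {"c", 3}, {"d", 4}}
	page1, cur := Page(m, "", 2)
	page2, _ := Page(m, cur, 2)
	fmt.Println(page1, page2)
}
```

The alternatives mentioned in the update, keeping the live iterator between requests or a real seekable cursor on the node, avoid the rewalk but each bring their own statefulness or API-surface cost.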
D
I also took the HAMT as it exists in filecoin-project/go-hamt-ipld, or something like that; it still has "ipld" in the name. I added the ipld.Node methods to it and was successfully able to plug that in as an IPLD type, which means that it can pass through it without loading the whole ADL, which is quite good from a performance point of view.
D
But we should think about what that actually means in practice, and why we would probably discourage people from doing that. I also fixed, or added, a little bit more validation, and I'm starting to understand the intricacies of the Go code generation a little bit more, and will try to continue putting some code in there.
E
Yeah, right. So, the IPLD stuff from last week: yes, I finally figured out how to capture everything I need from Lotus, including all the state that we care about, and figured out how to correctly segregate it into slabs, because just putting it into storage, or just putting it into a database, is not sufficient for what we need to be able to do with it.
E
Now I actually have a way to capture that this particular block was seen for the first time at such and such epoch, and, more interestingly for the stuff we'll be doing, I can also capture and will record, for the main chain, that this particular block was accessed on this day. I will not be able to... I mean, I could build you an entire DAG in the database, but then...
E
It would also be one terabyte, and we don't want to do that. But we can at least say, with a very high degree of granularity, that this particular block was accessed on this, this and this day, which is probably sufficient to get a rough idea of where to look, and from there you can just drill further down, and stuff like that.
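As a rough illustration of the coarse index being described, per block you keep the first-seen epoch plus the set of days it was accessed, rather than materializing the whole DAG. The record shape below is an assumption for illustration only, not the actual schema used in the ingestion.

```go
package main

import "fmt"

// BlockStats is a hypothetical per-block record: when the block was
// first seen, and the set of days on which it was accessed.
type BlockStats struct {
	FirstSeenEpoch int64
	AccessDays     map[string]struct{} // day strings, e.g. "2020-11-16"
}

// Index maps a block's CID (as a string) to its stats.
type Index map[string]*BlockStats

// RecordAccess notes one access. The first call for a CID fixes its
// first-seen epoch; every call adds the day to the access set, so
// repeated accesses on one day cost nothing extra.
func (ix Index) RecordAccess(cid string, epoch int64, day string) {
	s, ok := ix[cid]
	if !ok {
		s = &BlockStats{
			FirstSeenEpoch: epoch,
			AccessDays:     map[string]struct{}{},
		}
		ix[cid] = s
	}
	s.AccessDays[day] = struct{}{}
}

func main() {
	ix := Index{}
	ix.RecordAccess("bafy...a", 252000, "2020-11-15")
	ix.RecordAccess("bafy...a", 252120, "2020-11-16")
	fmt.Println(ix["bafy...a"].FirstSeenEpoch, len(ix["bafy...a"].AccessDays))
}
```

Day-level granularity keeps the index tiny compared to the full DAG while still answering "when was this block touched?" well enough to know where to drill down.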
E
I just got this to work on Nerpa yesterday, and hopefully we'll be able to start the main ingestion today. There is a little bit of difficulty there: I need to go through the entire chain once to actually record all these interactions. But yeah, hopefully it will finish this week, so as a result you will have a way to see not only what is there but when it has been accessed, and of course we will have access to the entire chain history going forward.
E
I am actually in the process right now of uploading a slightly older set of blocks, from like two or three days ago. I just want to see how it will go and what adjustments I need to make to get it to ingest faster. But yeah, one question for later is: did anyone have a chance to walk through the small set that I put together, just to make sure that I didn't actually miss things? But yeah, that's more or less it.
F
Hey guys. So, not a lot to report, but I did meet with Hannah last week, talking about some of the issues around validating GraphSync requests, and got some inspiration for a better approach. It's actually kind of more similar to how she did it in go-graphsync, but from, I guess, a high-level design point of view. So I started implementing it this morning and can keep on working on that. Let's go! That's my update.
A
Thanks. Eric, do you have any updates?
G
Yeah. So, in the last week I've done some chatting about HAMTs; those are coming along, mostly through other people's work, which is very exciting. We've been having specs meetings on Thursdays, and last week we managed to do that again. We should probably say, for anyone out there on the internet who's listening to this: these aren't super secret, they're just a different meeting than this one.
G
If anybody wants to come fight about specification and documentation issues with us, let us know, either by showing up in this meeting and talking about it, or by sending us messages.
G
I think we want to keep an intense kind of working-group vibe to it, but it's also not at all a secret. So anyway, that's happening on Thursdays. Last week, the main thing that I was interested in is that we tried to hash out what's going to happen with the "any" specification in the schema layer again, and we've accrued a bunch more information about that, and a couple of other topics as well. I still owe everybody a PR with those notes into the team-mgmt repo, where, again, anybody else on the internet can see them.
G
We are not recording those meetings, personally because I frankly want to be able to speak, well, frankly in them; but we are producing textual notes from them, and that's the artifact of record, so hopefully that's a productive thing. In code news, I've torn off a little bit of work in trying to revamp some of the codecs in go-ipld-prime. This is something where I'm probably going to carry the baton a little way and then try to find a safe, quiet place to put it down again, because it's not the hugest priority in the world right now.
G
But the goal of doing this is mostly to do something about the linking API in the go-ipld-prime repos, which has been something we've known we want to improve for a long time, so this is trying to step in that direction: get some APIs that are normalized, so they're more pluggable and just normalized; and try to figure out where codec configurability should fit, when we tolerate it, which of course is not always. Multicodecs do not necessarily like supporting configurability, but sometimes it's awfully nice.
G
If we can have one tiny little switch that you can use, if you really know what you're doing, to get, say, whitespace tolerance... right now those things are a crazy pain in the butt to configure in the go-ipld-prime code base; it requires reaching across several packages.
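The "one tiny switch" idea could look like a single options struct that defaults to strict behaviour, configured at one call site instead of across several packages. The sketch below is hypothetical; `DecodeOptions` and the toy decoder are illustrations of the pattern, not go-ipld-prime's actual codec API.

```go
package main

import (
	"fmt"
	"strings"
)

// DecodeOptions gathers all codec knobs in one place. Zero value =
// strict: a codec should be byte-exact unless you explicitly opt out.
type DecodeOptions struct {
	// TolerateWhitespace, when set, lets the decoder skip
	// surrounding whitespace instead of rejecting the input.
	TolerateWhitespace bool
}

// Decode is a toy decoder showing how the switch changes behaviour:
// strict mode errors on surrounding whitespace; tolerant mode trims it.
func Decode(input string, opts DecodeOptions) (string, error) {
	if opts.TolerateWhitespace {
		return strings.TrimSpace(input), nil
	}
	if strings.TrimSpace(input) != input {
		return "", fmt.Errorf("strict mode: unexpected surrounding whitespace")
	}
	return input, nil
}

func main() {
	v, err := Decode("  \"hi\"\n", DecodeOptions{TolerateWhitespace: true})
	fmt.Println(v, err)
}
```

Keeping the default strict preserves the multicodec expectation that a codec has one canonical behaviour, while the single struct avoids today's reach-across-packages configuration.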
G
It's just ugly, and I'm trying to fix that: trying to get more internally reusable code by placing the tokenization boundaries better, and hoping to get, as a target of opportunity anyway, performance improvements at the end. So there are two PRs out of this already: 101 was up last week already, I think, and 112 is now demonstrating some more of what I hope a new codec will actually look like with this.
G
I think some of the code is turning out simpler. Benchmarks are still a big to-do, so we'll see about that performance improvement, but things seem to be sort of moving along there. The 112 PR is up and available if anybody wants to review it, but it is also a little preliminary, for example the missing benchmarks, so, you know, take that with a grain of salt. A lot more stuff is probably needed there before that's anywhere near landing.
G
It's going to be really brief, probably not actually interesting to anybody who's following all of these sessions in detail, but hopefully it'll be a good high-level review. So I'll copy the sign-up link into our text notes for this meeting too, in case anybody wants that, and that is it for me. I need to go prepare for that, in fact.
A
So, does anyone have anything else? I don't see any agenda items.
A
No? Then I think it was a quick meeting, which is good, because originally, when we expanded from half an hour to an hour, we said that we'd try to squeeze it, still try to keep it to half an hour, and now we're better at that. So that's good, yeah. So thanks everyone, and see you all next week. Goodbye.