From YouTube: 🖧 IPLD Every-two-weeks Sync 🙌🏽 2022-02-14
Description
An every two weeks meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
Hello and welcome, this is the IPLD community call, for anyone who would like to join us. It is February the 14th. We do these calls about every two weeks, and people who would like to contribute or find out more about IPLD stuff should join us here. We're going to do two phases of this meeting: we're going to go on the record, and we're streaming live to YouTube now, and these meeting recordings will be made available for posterity.
A
Anybody who would like to share updates about stuff they're working on that's IPLD related, or developments they're making in the IPLD ecosystem, this is a great time to do it. We are also going to do a little bit more informal round table that is not fully recorded at the end of this meeting. So if you have things that you would like to share, and you want a free and open environment to do that, open but less recorded, that's available too.
A
If you are joining us, we also have some meeting notes that are in a shared document in HackMD. You can find links to all these things in the github.com/ipld/team-mgmt repo. The first thing I'll do in running through our shared notes document is see that there are some notes from vmx, who I think is not here to present these, so I'll do it on his behalf.
A
Actually, I don't want to do it myself right now, I want to give my mouth a break. So, Will Scott, may I choose you to continue?
B
Sure. The only thing that I've remembered so far to write down is that I said on the last one of these calls that there was some work on thinking through the traversal library, where you walk through what's matched by a selector, to support some resumption use cases.
B
So there's now a couple of PRs about that in go-ipld-prime, with a couple of different options. They've gotten some review already, and we're trying to think through what makes sense to live at that level, and what's probably closer to data transfer and should be a level above, wrapping stuff. I guess there's two PRs that are active in go-ipld-prime, and then there's also an implementation that's already getting some use of this in the older go-ipld-format instance.
B
So we're trying to understand why that got made, and how we can support the same features in the current set of libraries. I think that's all for me.
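The resumption idea can be sketched in Go against a plain nested map. `Walk` and its path-token scheme here are invented for illustration and are not the actual traversal library API; the sketch assumes alphanumeric map keys, so that depth-first visiting order matches lexicographic path order and a single path string can serve as a resume token.

```go
// Hypothetical sketch of a resumable depth-first walk, standing in
// for "walk through what's matched by a selector". Not a real API.
package main

import (
	"fmt"
	"sort"
)

// Walk visits members depth-first in sorted-key order, skipping any
// path that sorts at or before resumeFrom, and stops after limit
// visits. It returns the visited paths and a token to resume from.
// Assumes alphanumeric keys so DFS order == lexicographic path order.
func Walk(node any, prefix, resumeFrom string, limit int) (visited []string, next string) {
	var walk func(n any, path string)
	walk = func(n any, path string) {
		if len(visited) >= limit {
			return
		}
		if path > resumeFrom {
			visited = append(visited, path)
			next = path
		}
		if m, ok := n.(map[string]any); ok {
			keys := make([]string, 0, len(m))
			for k := range m {
				keys = append(keys, k)
			}
			sort.Strings(keys)
			for _, k := range keys {
				walk(m[k], path+"/"+k)
			}
		}
	}
	walk(node, prefix)
	return visited, next
}

func main() {
	doc := map[string]any{"a": map[string]any{"x": 1, "y": 2}, "b": 3}
	// First leg: visit two nodes, then stop.
	v, token := Walk(doc, "", "", 2)
	fmt.Println(v, token)
	// Second leg: resume from the token and finish the walk.
	v2, _ := Walk(doc, "", token, 10)
	fmt.Println(v2)
}
```

The returned `next` path plays the role of a resume token: a later call skips everything at or before it, so two partial walks cover exactly what one full walk would.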
A
All right, I'll go back into order in my doc. So I've got a couple of technical things that I will share just real briefly that I've been working on, and then one more policy-oriented thing. So, for tech stuff: I've been starting to grind away on trying to produce a patch system for IPLD.
A
The idea is to do something very declarative and basically go with the JSON Patch RFC, which is a pretty well-rounded RFC, and have something based on that which you can apply to IPLD documents.
A
So it would have many of the same features that JSON Patch does: declarative add, remove, move, etc. But being based on IPLD libraries, it should be able to work on your choice of codec. Of course, it should work even with lensing systems like schemas; it should work even with lensing systems like ADLs. And what else is going to be cool there? Oh, of course, it should work over links. So the idea is: even if you have a large DAG that includes many sub-documents connected by content addressing.
A
This will be pretty cool. There's a work-in-progress PR in go-ipld-prime, though, and some other PRs in the IPLD meta-specs repo, which contains some fixtures.
A
So far I'm mostly grabbing fixtures from the JSON Patch RFC, because it's a well-written RFC and those exist in a couple of places. Things are diverging a tiny bit: for example, we maintain map order, and the JSON Patch RFC didn't. So there are a couple of small switches like this, but mostly uninteresting.
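As a rough illustration of the declarative patch idea, here is a minimal JSON-Patch-style `Apply` over plain Go maps. The `Op` type, the map-only traversal, and the subset of operations (add, replace, remove) are simplifications for this sketch, not the proposed IPLD patch API:

```go
// Toy JSON-Patch-style (RFC 6902) apply over nested Go maps.
package main

import (
	"fmt"
	"strings"
)

// Op mirrors one JSON Patch operation; only map traversal and the
// add/replace/remove subset are handled in this sketch.
type Op struct {
	Op    string
	Path  string // "/"-separated, e.g. "/a/b"
	Value any
}

// Apply walks to the parent of each op's path and mutates in place.
func Apply(doc map[string]any, ops []Op) error {
	for _, op := range ops {
		segs := strings.Split(strings.TrimPrefix(op.Path, "/"), "/")
		parent := doc
		for _, s := range segs[:len(segs)-1] {
			child, ok := parent[s].(map[string]any)
			if !ok {
				return fmt.Errorf("no such path: %s", op.Path)
			}
			parent = child
		}
		last := segs[len(segs)-1]
		switch op.Op {
		case "add":
			parent[last] = op.Value
		case "replace":
			if _, ok := parent[last]; !ok {
				return fmt.Errorf("replace target missing: %s", op.Path)
			}
			parent[last] = op.Value
		case "remove":
			if _, ok := parent[last]; !ok {
				return fmt.Errorf("remove target missing: %s", op.Path)
			}
			delete(parent, last)
		default:
			return fmt.Errorf("unsupported op: %s", op.Op)
		}
	}
	return nil
}

func main() {
	doc := map[string]any{"a": map[string]any{"b": 1}}
	if err := Apply(doc, []Op{
		{Op: "add", Path: "/a/c", Value: 2},
		{Op: "remove", Path: "/a/b"},
	}); err != nil {
		panic(err)
	}
	fmt.Println(doc)
}
```

A real IPLD version would additionally dereference links during traversal and preserve map order, per the divergences mentioned above.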
A
I'm finding the error-handling path is rather gnarly, and I keep running into null pointer exceptions quite a lot, and I'm thinking this would be drastically easier if we just used the absent value consistently. Migrating to that might be painful, but I don't think it will get less painful as more time passes either. So, feedback welcome on this.
A
Then, on a policy note: I'm thinking that it would be great to get more people involved and have the ability to merge PRs across some of this stuff, and especially in the go-ipld-prime repo I'd like to propose.
A
We'd do a model of, like, plus-two and plus-one votes as people do reviews. People who have been core contributors for a while and are persistently involved in merging and maintaining can cast plus-two votes; people who have done some contributions before should get plus-one votes, roughly. And as long as you can get a plus-two total on some PR, then my proposal is that we should feel good about merging. How exactly to implement...
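The proposed rule boils down to one sum. `Review` and `MergeOK` below are hypothetical names just to make the arithmetic concrete; the actual process and tooling are still under discussion:

```go
// Toy model of the proposed merge-vote rule: reviewers carry a
// weight (+2 for long-standing core maintainers, +1 for past
// contributors), and a PR is mergeable once the total reaches +2.
package main

import "fmt"

// Review records one reviewer's vote and its weight.
type Review struct {
	Reviewer string
	Weight   int // +2 or +1
}

// MergeOK reports whether the vote total reaches the +2 threshold:
// one +2, or two +1s, etc.
func MergeOK(reviews []Review) bool {
	total := 0
	for _, r := range reviews {
		total += r.Weight
	}
	return total >= 2
}

func main() {
	fmt.Println(MergeOK([]Review{{"alice", 1}, {"bob", 1}}))
}
```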
A
This
may
be
subject
to
some
more
discussion,
but
I've
put
more
notes
on
this
in
the
ipod
chat
channels
and
discordant
matrix.
So
if
people
want
to
look
at
that
and
offer
more
comments,
there,
please
do
we'll
probably
figure
out
how
to
formalize
this
more
in
coming
weeks,
and
I
think
that
is
it
for
me.
I
see
rod
is
next
in
the
document
if
you'd
like
to
take
it
away.
C
Yeah, just some small things I noted. Not my work, mainly Gozala's work: he's been pushing a sync hasher interface through the JavaScript stack. In our JavaScript stack we do our hashes as asynchronous, because, particularly in the browser, you have access to these async APIs for hash algorithms.
C
So that's been our parent interface for all of the hashes, even though a lot of them are not asynchronous. But there's been a use case in unixfs, because Gozala has been working on unixfs, cleaning that up, and wants to use the murmur3 hasher in that, and it being asynchronous is annoying, because it doesn't need to be and it's inefficient. So he's pushed through a sort of optionally-sync interface for multihash in multiformats, and that goes down to murmur3. And there are other places where you can do this, like the identity hash, for example, another one that's easy to make synchronous. What it means for users is: if they want, they can feature-detect whether a hasher is synchronous or asynchronous.
C
If they would rather go for the synchronous API, they can feature-detect and call down to that; otherwise they can just await anything, any hasher. So there's a pull request there with discussion.
We merged it, and then had to back it out, because it broke the TypeScript types for that package, which had a flow-on effect to js-ipfs.
C
So we're redoing that, and we're having a discussion about how best to do the breakage of that. So, slightly awkward, but it might be interesting for JavaScript people.
C
Two things in go-ipld-prime. One is: garbage is now in go-ipld-prime, so go-ipld-prime is officially garbage-capable. It's just a little utility to generate garbage blocks of different sizes, and you can weight different kinds in there as well and get some interesting shapes to run through tests and fuzzing and stuff.
C
The other thing is EncodedLength for dag-cbor, in the dag-cbor encoder. You can run this over a node to get the length that node will encode to as a block using dag-cbor. There is discussion in the PR for that, which hopefully ended up mostly in the docs: if you're reaching for this as a pre-allocation technique, it may not end up being more efficient to do that. So it's something very particular to the use case that you want.
C
We
wanted
it
in
graphsync,
because
graphsync
does
a
memory,
budgeting
thing
for
limiting
clients,
it's
like
a
client
quota,
and
so
it
it's
not
for
efficiency,
it's
more
for
making
sure
clients
don't
use
up
their
memory
budget,
and
so
this
is
just
a
useful
way
of
of
accounting.
For
when
I
end
up
sending
this
message,
this
is
the
these.
I
can
add
up
the
different
portions
of
the
message
to
build
it
to
send
it,
and
so
it's
it's
a
it's
a
yeah.
C
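The arithmetic behind such a length helper is straightforward for CBOR, since every item starts with a fixed-size head. This toy `encodedLen` covers only a small subset of kinds and is not the go-ipld-prime implementation, but it shows how a size can be totted up for budgeting without actually encoding anything:

```go
// Toy CBOR encoded-length calculator for a subset of values.
package main

import "fmt"

// headLen returns the size of a CBOR head (major type + argument)
// for the given argument value: 1, 2, 3, 5, or 9 bytes.
func headLen(arg uint64) int {
	switch {
	case arg < 24:
		return 1
	case arg <= 0xff:
		return 2
	case arg <= 0xffff:
		return 3
	case arg <= 0xffffffff:
		return 5
	default:
		return 9
	}
}

// encodedLen computes the CBOR-encoded size of uints, strings, lists,
// and string-keyed maps -- enough to total up a message's portions
// against a memory budget before building it.
func encodedLen(v any) int {
	switch x := v.(type) {
	case uint64:
		return headLen(x)
	case string:
		return headLen(uint64(len(x))) + len(x)
	case []any:
		n := headLen(uint64(len(x)))
		for _, e := range x {
			n += encodedLen(e)
		}
		return n
	case map[string]any:
		n := headLen(uint64(len(x)))
		for k, e := range x {
			n += encodedLen(k) + encodedLen(e)
		}
		return n
	}
	return 0
}

func main() {
	// {"msg": "hello"} encodes as a1 63 6d7367 65 68656c6c6f: 11 bytes.
	fmt.Println(encodedLen(map[string]any{"msg": "hello"}))
}
```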
D
I just wanted to mention that, as discussed earlier, I started documenting my thoughts around it, and also doing a little bit of prototyping.
E
Cool, looks like I'm next up. This is more on the process side, not anything cool engineering-wise, but as folks may have seen, Protocol Labs' EngRes working group is trying to move out of its internal PL Slack. It's something we've talked about for a while, and we're finally trying to pull the trigger to make that happen, so I'm involved in that effort. So you'll see more channels sprouting up in the IPFS Discord; we're not doing anything to the existing Matrix channels.
E
The existing bridges will all live, and we can see if it makes sense to add additional bridges between any new channels that get created. But that effort is well underway, and you should start to see even more presence from PL engineers here this week, as we do the lift this week. And I've tried to link here to everywhere the posts have been, including the public notice on this, which then links to a thread in the IPFS Discord.
E
So if you have any issues or have comments, please leave them; myself and others will be going through those to make sure we're listening, paying attention, and addressing any problems that come up. But thanks for your patience in this as we work through these bumps. At least it gets us one step out of being totally silent in PL Slack, even knowing that some of this is going to bring gravitational pull to Discord, which is not our ideal end state. But it's a transitionary move for now.
F
Yeah, so last time I presented the custom chunker I was writing, and some of you were curious to know about the performance. So I fixed some different bugs, mainly the padding: all of the copying I was doing was not properly aligned to 4K, so it was slower than it needed to be. And I've just finished a performance comparison.
F
Basically,
it
is
about
10
times
faster
than
gokar
at
the
exact
same
job,
so
just
taking
a
file
making
a
car
out
of
it,
I'm
I
am
tim
them
faster
yeah.
That's
all.
A
F
Yeah, exactly, it doesn't need to be. Like, adding Linux to IPFS is actually faster than the write speed of my disks, just because I'm doing reflink copies. So, in the link I've shared, I show the btrfs tool, which shows the usage here, that is about 42 gigabytes. So I'm not actually copying any data from the file, I'm just reading it to hash it. Obviously there is a bit of overhead, because you have to write, like, the CAR and the linking DAGs, but yeah, basically that's it.
F
Yeah, exactly: if you do a copy on, like, non-4K alignment, it makes an actual copy, it doesn't make a reflink. And so, to have reflinks, I just insert, like, empty nodes: blocks that are full of zeros. I just insert zero blocks, and that pads my actual data to 4K, and it works.
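The padding trick boils down to rounding up to the next 4096-byte boundary; a minimal sketch of that arithmetic (the function name is invented here):

```go
// Compute how many zero bytes to insert so real data stays 4K-aligned,
// which is what lets the filesystem make reflink copies instead of
// actual copies.
package main

import "fmt"

const align = 4096

// padTo4K returns the number of zero bytes to append after n bytes of
// data to reach the next 4096-byte boundary (0 if already aligned).
func padTo4K(n int64) int64 {
	r := n % align
	if r == 0 {
		return 0
	}
	return align - r
}

func main() {
	for _, n := range []int64{0, 1, 4096, 5000} {
		fmt.Println(n, "->", padTo4K(n))
	}
}
```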
A
So, if you want to share some more performance measurements: I don't know if you use, what's it called, the pprof system in Go?
F
A
Yeah, if you can be bothered, I would love to see some of that. Probably the PNG-format outputs are the easiest ones to, like, post on the web to share with other people, or the SVGs, or the pprof profile itself, but those pictures can be really, really fun to share sometimes. I have a question that our system is, like...
F
A
No, I don't think I have a good answer to that; maybe somebody else out there will. I have not figured out how to draw that well, other than drawing the wall-clock-time one instead of the CPU one, which kind of tells you that a little.
A
Anyone
else
who
wants
to
say
stuff
on
the
record,
I
think,
we've
already
flown
through
the
agenda.
That's
in
the
note
stocks
here,
if
you
want
to
do
stuff
on
the
record,
feel
free.
Otherwise,
I
guess
we
can
shift
into
the
second
half
of
the
meeting
and
have
more
of
the
round
table.
That's
a
bit
more
conversational
off
the
record
last
call.
Anybody
want
to
speak.