From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2020-08-10
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A: Welcome everyone to this week's IPLD dev meeting. It's August the 10th, 2020, and as every week we go over the stuff that we've worked on in the past week and will work on in the next week, and then discuss any open agenda items we might have. This time I even have a general section before I start with the weekly updates, because it also involved people from outside, so it's not really on anyone's plate; perhaps later on someone will put it on their plate.
A: I don't know, but it's important, because there is a change in CID. It's not a breaking change; it's just a change to the definition of what a CID is. So we don't have a separate version number for CIDs anymore: the version number is actually also a multicodec.
A: We have already reserved versions two and three in the multicodec table, because it just makes sense to have those numbers, but in theory it could be something else. We could even decide, and this was discussed, though I'm not sure if it's a good idea, that CID version 0 is defined as the same multicodec as sha2-256, because that's what it actually is. So we could change the definition of what CID version 0 is and say that.
A: In theory we could do that; we probably won't, but anyway. If anyone who's watching this is using CIDs, nothing really changes for you. It's just a matter of having it defined in a different way; it's not really breaking anything. Cool, then I'll go on with myself.
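To make the "version number is also a multicodec" point concrete, here is a minimal sketch of decoding the leading varint of a binary CIDv1. This is an illustrative decoder written from scratch, not the multiformats library API:

```javascript
// Sketch: the leading varint of a binary CID doubles as a multicodec code.

// Decode one unsigned LEB128 varint from a Uint8Array.
function readVarint (bytes, offset = 0) {
  let value = 0, shift = 0, pos = offset
  for (;;) {
    const b = bytes[pos++]
    value |= (b & 0x7f) << shift
    if ((b & 0x80) === 0) return [value, pos]
    shift += 7
  }
}

// A binary CIDv1 is <version><codec><multihash...>:
// 0x01 = cidv1, 0x71 = dag-cbor, 0x12 0x20 = sha2-256 with a 32-byte digest.
const cid = new Uint8Array([0x01, 0x71, 0x12, 0x20, ...new Array(32).fill(0)])

const [version, afterVersion] = readVarint(cid)
const [codec] = readVarint(cid, afterVersion)
console.log(version) // 1, which is also the "cidv1" entry in the multicodec table
console.log(codec)   // 0x71, dag-cbor
```

The version slot is just another varint, which is why reserving 2 and 3 in the multicodec table keeps the two namespaces from colliding.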
A: I haven't had that much time to work on things last week, and hopefully this week I'll have more time. On the js-ipld side, the change to use Uint8Arrays instead of Node.js Buffers bubbled up, so it's now also in js-ipld and the stacks below, like multiformats and so on. But this work was basically done by achingbrain; I just did reviews and released pieces of it, while he did the hard work of actually making the code changes. The only things that are kind of left are js-ipld-bitcoin and js-ipld-ethereum, which are internally still using Buffer, so we convert the data on the way in and on the way out to make it work. Ideally we would change this; it should be possible for bitcoin with Rod's new library that he did.
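The boundary conversion described here can be done without copying, since a Node.js Buffer is itself a Uint8Array over some backing memory. A minimal sketch, with a hypothetical `legacyCodec` standing in for the Buffer-based internals (this is not the real js-ipld-bitcoin API):

```javascript
// Sketch: wrap a Buffer-based codec so callers only ever see Uint8Arrays.
const legacyCodec = {
  serialize: (node) => Buffer.from(JSON.stringify(node)),  // returns a Buffer
  deserialize: (buf) => JSON.parse(buf.toString('utf8'))   // expects a Buffer
}

// Zero-copy view of a Uint8Array as a Buffer (shares the same memory).
const asBuffer = (u8) =>
  Buffer.from(u8.buffer, u8.byteOffset, u8.byteLength)

const codec = {
  // On the way out: hand back a plain Uint8Array view.
  serialize: (node) => {
    const buf = legacyCodec.serialize(node)
    return new Uint8Array(buf.buffer, buf.byteOffset, buf.byteLength)
  },
  // On the way in: present the caller's Uint8Array as a Buffer.
  deserialize: (bytes) => legacyCodec.deserialize(asBuffer(bytes))
}

const bytes = codec.serialize({ hello: 'world' })
console.log(bytes instanceof Uint8Array && !Buffer.isBuffer(bytes)) // true
console.log(codec.deserialize(bytes)) // { hello: 'world' }
```

Because the views share memory, the wrapper costs an object allocation per call but no byte copies; the remaining downside is simply that Buffer stays a (bundled) dependency, as noted below.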
A
This
will
be
at
one
point
if
someone
creates
a
proper
js
ethereum
library
that
we
might
want
to
use
which
also
rotten
might
do
in
far
distant
future,
but
I
hope
so
so,
but
that's
like
not
anything
blocking
or
something
it's
just
yeah.
If
you
want
to
get
rid
of
the
full
node.js
buffers
in
the
full
stack,
you
would
need
to
also
update
those,
except
if
we
don't
bundle
them,
then
it's
fine,
because
if
you
don't
bundle
them,
you
don't
have
the
dependency.
A
Okay
and
that's
all
I
have
next
on.
My
list
is
peter.
B: Yes, again another underwhelming update; I'm mostly still doing PowerPoint things. For those who wonder what I'm testing that much when we have the entire Filecoin Oni team: what I'm focusing on is doing things on the live chain, with all the extra messiness that comes from, you know, having more people on the chain, and having fluctuations where your messages are taking one minute, and the next minute nothing goes on chain at all, and things like that. This unfortunately takes an extraordinary amount of time, as does discovering misconceptions in between, like between the implementation team and the CE team, and so on and so forth. So a lot of time goes there.
B: I am for now not giving up on Go just yet. I'm looking at what it would take to get TinyGo to actually produce wasm for me. I know that some folks already looked into that, I think from the drand project, and it was kind of deemed like, yeah, there are issues here and there, like some things are missing. Actually it was you, I forgot. And my thing is a little bit simpler.
B
B
The
reason
I'm
still
focusing
on
gopher
a
little
bit
until
I
completely
give
up
is
because
the
rust
story
on
multi-trading
is
it's
workable,
but
it's
nothing
like
the
go
multi-trading
story,
and
while
it's
not
interesting
for
wasm
itself,
it's
interesting
for
having
essentially
one
code
base
that
you
can
both
push
into
awesome
and
keep
it
in
goal.
B
Whereas
a
go,
one
can
saturate
as
many
cpus
you
can
throw
at
it
where,
if
the
bottom
one
essentially
just
for
for
a
browser
and
doing
that
in
the
rest
on
the
high
end
of
basically
pushing
as
much
of
the
cpu
to
do
things
with
unmarshaled
shared
memory
is
doable.
But
it's
it's
a
lot
of
work.
It's
not
not
nothing!
I
can
go
so
that's
why
my
team
is
to
try
to
go
as
far
as
I
can
with
go
and
if
it
doesn't
work,
then
we'll
switch
some
parts
to
rest.
B
Oh
yeah
trading,
who
wasn't
it's
entirely
experimental,
I'm
more
like
if
I
switch
the
parts
that
run
on
the
regular
console
like
on
on
regular
cpu,
not
in
mosum
to
rust,
doing
the
multi
trading
there
in
a
regular
compilation
is
just
you
can
do
it,
but
because
you
essentially
have
posix
threads
or
you
have
like.
You
know,
tokyo
and
stuff,
like
that,
it's
nowhere
near
as
low
overhead.
As
what
goal
gives
you
for
when
you
need
exactly
the
thing
that
I
need
for
basically
moving
bites
around.
C: I guess I can say from drand's perspective: there were just a couple of library things that failed on the initial attempt at TinyGo, and we didn't spend much time looking into it. I suspect with not too much effort we could have gotten a TinyGo compilation target, but it was a web demo that we were aiming for, so we really didn't need it.
A: Okay, next on my list is Rod.
D: Looking at Filecoin, doing a bunch of work. Well, I got the docs merged, the ones I wrote, so that's all merged, and the godoc should be up now. So that's good; that was a big chunk of work, but that was from like two weeks ago.
D: So the idea is that the HAMT should only have one form for any given set of data, so when you delete, it needs to wind back into the right shape, and the collapse algorithm didn't look right to me at all. It looked both incomplete and also a little bit wrong, but after pushing at it a lot, forcing it into different situations, it seemed to be working just fine. It's just written in a way that was not how I conceptualized the algorithm in my head, and I think part of that is Go, and just the way that it's written as well.
D
So
so
I
I
I
cleaned
up
the
code
anyway
and
put
some
extra
things
in
there
to
make
it
much
more
explicit,
what's
happening
and
there's
a
full
request
for
that
in
the
go:
hemp
ipld
and
I
cleaned
up
the
oh,
you
know-
and
this
and
some
next
chest
for
that
as
well
for
the
collapse
stuff,
then
what
I
spent
most
of
my
time
was
this
block
load
time
validation
work.
D
So
it's
the
hampton
is
really
loose
about
what
it
accepts
and
it
sort
of
it'll
try
and
pass
a
block
and
then
try
and
navigate
it
and
just
sort
of
fail
when
it
absolutely
can't.
So
I've
made
it
much
more
strict
now
about
when
it
loads
a
block
that
it'll
it'll
check.
Does
this
block
match
my
expectations
of
what
a
block
should
look
like,
and
so
it's
the
kind
of
thing
we
were
doing
in
schemas.
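Strict load-time validation of this kind can be sketched in a few lines. The node shape and error messages below are hypothetical, chosen only for illustration; they are not the actual go-hamt-ipld structure:

```javascript
// Sketch: reject any decoded block that doesn't match the expected node
// shape at load time, instead of failing later during traversal.
function validateNode (node) {
  if (!Array.isArray(node) || node.length !== 2) {
    throw new Error('node must be a 2-element array [bitfield, elements]')
  }
  const [bitfield, elements] = node
  if (!(bitfield instanceof Uint8Array)) {
    throw new Error('bitfield must be bytes')
  }
  if (!Array.isArray(elements)) {
    throw new Error('elements must be a list')
  }
  // Cross-check: the number of set bits must equal the number of elements.
  let setBits = 0
  for (const byte of bitfield) {
    for (let b = byte; b; b >>= 1) setBits += b & 1
  }
  if (setBits !== elements.length) {
    throw new Error(`bitfield has ${setBits} bits set but ${elements.length} elements`)
  }
  return node
}

// A well-formed node passes; a mismatched one throws immediately.
validateNode([new Uint8Array([0b00000101]), ['a', 'b']]) // ok: 2 bits, 2 elements
try {
  validateNode([new Uint8Array([0b00000111]), ['a', 'b']]) // 3 bits, 2 elements
} catch (err) {
  console.log(err.message)
}
```

The cross-check between the bitfield and the element count is the same kind of consistency rule Rod describes adding below for the fixed-width bitmap.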
D
If
we
had
all
that
stuff
ready
like
we,
we
might
be
able
to
assert
a
lot
of
the
stuff
I
did,
but
this
involved
doing
a
lot
of
generating
a
lot
of
manual
c
board
to
force
it
to
in
the
tests
to
force
it
to
fail
validation
on
a
bunch
of
different
shapes,
so
there's
a
link
there
to
pull
request
with
all
that
work.
I'm
happy
with
that,
but
it's
it's
pretty
strict!
D
Now
it's
not
it's
not
gonna,
accept
anything
that
doesn't
smell
right,
oh
yeah,
and
then
so
the
the
next
things
that
are
coming
up.
I
think
now
that
I've
done
some
work
around
the
seaboard
format,
so
validating
that
it
accepts
the
right.
Just
the
right
thing,
there's
a
couple
of
things
now
that
will
change
the
format,
so
I've
got
some
tests
now
that
have
the
format
explicitly
hard
coded.
D: One of them is the bit field in this HAMT, which is quite Go-specific in that it uses a big.Int and just takes the bytes from that. The idea is, you have this byte array and you set bits on and off in the byte array to indicate which elements of the array you have, and the Go implementation of big.Int will give you bytes to represent the integer you've created.
D
That
is
only
as
small
as
it
needs
to
be
as
in
big
india,
and
so
you
end
up
with
different
size
bitmaps
in
the
serialization
format,
depending
on
which
bits
are
set
and
there's
not
not
necessarily
a
bias
towards
smaller
either
it's
it's
sort
of
it
doesn't
actually
save
you
a
ton,
but
it
just
gives
you
inconsistency.
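The inconsistency is easy to demonstrate; here JavaScript's BigInt stands in for Go's big.Int, as both produce minimal-length big-endian bytes:

```javascript
// Minimal-length big-endian bytes for an integer, the way big.Int-style
// APIs produce them.
function minimalBytes (n) {
  let hex = n.toString(16)
  if (hex.length % 2) hex = '0' + hex
  return Uint8Array.from(hex.match(/../g).map(h => parseInt(h, 16)))
}

// The same logical bitmap width, but different serialized lengths:
console.log(minimalBytes(1n << 200n).length) // 26 bytes: a high bit is set
console.log(minimalBytes(1n).length)         // 1 byte: only the low bit is set

// A fixed-width encoding sidesteps this: always emit ceil(width/8) bytes.
function fixedBytes (n, widthBits) {
  const out = new Uint8Array(Math.ceil(widthBits / 8))
  for (let i = out.length - 1; i >= 0; i--) {
    out[i] = Number(n & 0xffn)
    n >>= 8n
  }
  return out
}
console.log(fixedBytes(1n, 256).length)         // 32, regardless of bits set
console.log(fixedBytes(1n << 200n, 256).length) // 32
```

With a fixed width, the serialized bitmap length becomes a function of the node's fanout alone, which is what makes the validation described next possible.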
D
So
I've
got
agreement
amongst
most
of
the
people
that
are
concerned
with
this,
that
we
should
just
make
it
a
fixed
width,
and
then
we
can
validate
the
width
of
it
against
the
number
of
element
of
elements
in
the
array
against
a
couple
other
things,
so
there's
extra
validation
there
and
there's
also
an
opportunity
to
consider
maybe
changing
it
to
little
endian
I'll.
Look
at
that
further
and
see
what
we
ended
up
with
the
javascript
side
and
a
couple
of
other
places.
D
I
think
the
pierre
goss
hamped
as
well
uses
little
endians,
so
I
think
there's
more
standardization
around
little
endian
and
it's
easier
to
to
mess
with
the
some
of
the
the
bits
when
you
do
when
you've
got
little
nd
just
because
we've
got
more
tools
for
that.
So
maybe
we'll
do
that.
D: But anyway, that's one thing, and the other thing is that we're going to change a keyed union into a kinded union, so that we ditch a map and a string key and just end up with the element itself.
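In serialized terms, the change drops the wrapping map around each element. A rough sketch of the two shapes, with illustrative key and value names rather than the actual spec:

```javascript
// Keyed union: the discriminator is an explicit map key wrapping the value.
const keyedElement = { link: { '/': 'bafy...example' } }

// Kinded union: the value's own IPLD kind (map vs link, say) is the
// discriminator, so the element is stored bare and the wrapper map plus
// the string key disappear from the encoded form.
const kindedElement = { '/': 'bafy...example' }

// The saving per element is the outer map header plus the key string.
// JSON length is a stand-in here for the CBOR encoded size.
const size = (v) => JSON.stringify(v).length
console.log(size(keyedElement) > size(kindedElement)) // true
```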
D
So
it'll
that'll
save
bytes
on
the
other
end,
so
those
two
format
changes
will
make
a
break
making
it
make
it
breaking,
but
we'll
bring
it
a
lot
closer
to
what
we
have
in
our
hand
spec.
D
So
there's
there's
a
lot
of
room
here
to
pull
this
in
alignment,
which
is
what
we
tried
to
do
last
year,
but
now
we're
getting
to
do
and
then
we
we
have
potential
to
read
and
write
this
with
our
other
libraries
that
we're
building
for
amps.
So
that's
all
positive
and
along
the
way,
I
added
a
diagnostic
for
my
printer
to
my
seaboard,
encoder
parser,
which
was
really
fun
so
being
able
to
stick
in
cbor
and
print
out
a
really
nice
diagnostic
format.
D
It's
very
satisfying
and
that's
my
week.
A: Eric?
E: Yeah, so I also spent some time doing HAMT things, adding some benchmarks to the same libraries that Rod is looking at, and just trying to get like a gross statistical understanding of what's going on in them. That's interesting; we're trying to understand how some of the caching works there and, I don't know, just get more graphs out of the performance stuff. There's a little bit of work like that so far, but not a ton, so we're just expanding on that.
E: In codegen news, I always have a little bit: kinded unions are now supported in the Go codegen, which is kind of cool. If we do a total review of all of the completed features in codegen now: we've got maps, we've got lists, we've got unions now, the keyed and the kinded ones alike. We already had structs with the map representation, we got structs as tuple last week, and we've had structs as stringjoin for a while. Of course we've got all the scalars: the strings, bytes, ints, lists, links, yeah.
E
This
is
a
lot
of
things,
so
there
are
still
a
couple
more
features
for
which
cogen
can
be
implemented,
but
like
we're
really
getting
down
to
the
esotera
and
all
the
like.
Rarely
used
representation
modes.
Those
are
the
things
that
are
left
now,
so
I'm
starting
to
be
able
to
use
this
in
alpha
ways,
and
this
is
still
generating
more
tests
and
fixes.
But
it
is,
I
would
call
it
alpha
usable
now,
I'm
starting
to
grind
things
out
and
then
see
if
I
can
wire
them
up
and
the
answer
is
like
yeah.
E
Actually
I
can
so.
If
anybody
wants
to
play
with
alpha
things,
then
I
would
say
it
is
beginning
to
be
about
that
time.
A
Thanks
next
is
michael
yeah.
Let
me
pull
up
the
list
here.
F
Oh
come
on
too
many
things:
okay,
all
right
yeah,
so
I
did
figure
out
how
to
store
ipl
d
blocks
in
git
lfs.
So
I
wrote
a
little
storage
backend
for
that
and
I
wrote
a
feature
for
dagdb
to
test
it
out
and
it's
pretty
rad.
You
can
even
create
a
database
and
give
it
with
a
github
action
as
the
type
of
database,
and
then
it
will
pull
all
the
credentials
in
the
current
environment
from
a
github
action
and
use
that
as
its
storage
mechanism,
it's
pretty
awesome.
F
Lfs
is
like,
like
one
of
the
worst
pieces
of
technology.
I've
ever
had
to
reverse
engineer,
but
so
it's
very
slow,
but
it
is
like
kind
of
amazing
because
it's
just
you
know
github
repos
and
everything.
It's
it's
really
nice
and
it's
not
really
noisy
in
the
github
repo,
because
it's
l
and
lfs
yeah.
F
I
realized
trying
to
finish
up
that
work
that
I
really
really
don't
like
the
tooling
that
we
have
right
now
for
trying
to
build
things
from
our
esm
libraries,
and
so
I
sat
down
over
the
weekend
to
try
and
really
build
out
the
ipjs
build
tool.
F
Basically,
just
I
just
want
a
workflow
where,
like
we
write,
a
very
simple
esm
file
or
sorry,
we
write
esm
modules
in
javascript
and
then
we
just
have
a
couple
named
exports
that
point
at
whatever
things
in
source
or
whatever,
like
the
thing
that
bulkers
wanted
forever,
where
we
don't
have
all
these
files
in
the
root
and
and
then
this
will
generate
the
right
common
js
and
esm
and
the
new
package.json
and
everything
you
would
need
to
publish
that
as
a
universal
module.
That
would
just
work
everywhere.
F
So
that
is
like
80
done
now.
I
think
the
the
thing
that
really
tripped
me
up
was:
I
was
overthinking
it.
You
don't
actually
need
a
compiler
for
most
of
this
stuff.
You
can
just
manipulate
the
ast
and
it's
much
much
faster.
So
even
the
ast
parsing
manipulation
I
put
into
workers,
so
it's
an
incredibly
fast
build
tool
as
well,
so
that
should
be
done
pretty
soon
and
yeah
that'll
really
help
us
with
publishing,
like
all
of
our
modules.
F
Now,
in
all
the
modules
in
the
new
js
multi
formats
and
block
stacks,
so
yeah
that's
going
fixing
bugs
and
js
multi
formats.
I
got
the
tests
running
again
because
they
had
broken
because
of
that
paladina
bug
in
mocha
8
and
I
wrote
some
more
docs
for
ipld
in
for
dagdb
and
that's
what
I
did
last
week.
A: That's everyone from whom I wanted an update. Does anyone else want to give an update? Perhaps, Michael, it would be good if you talk about this, because we basically talked about the transition plan to js-multiformats. Can you give...
F
Right
right
so
pretty,
basically,
I
think
I
think
what
we
landed
is
that
all
of
the
changes
that
we
want
to
make
for
js
multi-format
are
going
to
be
accessible
in
jsc
id
pretty
soon,
and
all
of
the
old
methods
will
print
warnings
but
they'll
still
work,
and
so
that'll
be
like
a
nice
transition
path
for
people.
There
was
one
kind
of
final
sticking
point
where
I
think
a
lot
of
you
dropped
off
and-
and
I
think
we
ended
up
just
not
doing
that.
So
I
actually
backed
this
out.
F
We
had
taken
a
pr
in
js
multiformats
to
make
it
look
like
an
array
buffer
view,
so
it
had
like
a
dot
buffer
property
that
was
a
unit
8
array
and
not
a
new
js
buffer
and
that
conflicted
with
the
old
property
that
was
a
node.js
buffer,
but
it
still
kind
of
worked
if
it
was
the
right
thing.
But
the
issue
with
that
is
that,
because
it's
potentially
a
view,
then
that
buffer
could
be
much
larger
than
the
actual
cid.
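The hazard is easy to demonstrate: for a typed array that is a view, `.buffer` is the whole underlying allocation, not just the view's bytes.

```javascript
// A 4-byte "CID" stored as a view into a much larger buffer,
// e.g. a slice of a network read.
const backing = new ArrayBuffer(1024)
const cidBytes = new Uint8Array(backing, 100, 4)

console.log(cidBytes.byteLength)        // 4: the bytes you meant
console.log(cidBytes.buffer.byteLength) // 1024: what .buffer hands back

// Anything that consumes .buffer directly silently picks up 1020 bytes
// of unrelated data around the 4 real ones.
const wrong = new Uint8Array(cidBytes.buffer)
console.log(wrong.length) // 1024
```

That silent over-read is why removing the property and throwing on access, as described next, is the safer choice.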
F
And
so
it's
the
silent
bad
behavior
that
kreptan,
and
so
I
actually
pulled
that
out
and
now
there
is
no
buffer
property
and
it
will
like
throw
if
you
try
to
access
it
or
write
it
to
things
and
that's
what
we'll
ship
with
for
a
while
in
js
multi-formats
and
then
we'll
kind
of
look
at
the
landscape.
A: I actually checked today; I think that ArrayBuffers don't have a buffer property.
F: So that's Uint8Array, Uint16Array and, you know, all of the different view types; ArrayBufferView refers to those, as well as DataView. And DataView is like a view of an ArrayBuffer that you can then do operations on for every kind of typed array. So it's just like another thing, because we didn't have enough things that were binary types in JavaScript.
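The relationships described here, in a small example: several ArrayBufferViews can sit over one ArrayBuffer, and DataView offers reads of any numeric width at any offset, with the endianness spelled out.

```javascript
const buf = new ArrayBuffer(4)

// Two different typed-array views over the same memory.
const u8 = new Uint8Array(buf)
const u16 = new Uint16Array(buf)
u8[0] = 0x01
u8[1] = 0x02

// DataView reads any width at any offset, with explicit endianness.
const dv = new DataView(buf)
console.log(dv.getUint16(0, true))  // 513 (0x0201, little-endian)
console.log(dv.getUint16(0, false)) // 258 (0x0102, big-endian)

// All of these are views over the same allocation.
console.log(u8.buffer === u16.buffer && u16.buffer === dv.buffer) // true
```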