From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2020-02-17
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
Okay, welcome everyone to this week's IPLD sync meeting. It's 2020, February the 17th, and as every week we talk about the stuff that the IPLD team is working on, has worked on, and will work on in the next few days, and then discuss any open agenda items we might have. I'll start with myself. I've worked on the Rust IPLD stuff, and it's finally really getting started, and it's shaping up nicely, so I found out how to do it and got some coding done.
A
So there is a PR for multihash, which is currently in review, and the exciting thing is it got reviewed by quite a lot of people, I think by now seven different people from seven different companies. So it's amazing: it's not only PL people reviewing it, it's really independent people from other companies. So it's really nice that this happened; it's really exciting.
A
So this makes me really happy, and it's shaping up quite nicely. And then the other thing is that there is currently a dev grant open for rust-ipfs, which is close to being approved, and those people have already opened an issue, which I'm linking in the notes, about IPFS in Rust. There's a repository where they basically want to collect all the people that are involved or want to get involved with the Rust foundation of IPFS, but then also, of course, Rust IPLD and multiformats and so on.
A
So in case you're watching and want to get involved, go to github.com, I think it's under rust-ipfs, and you'll find the issue there. With those people I also had a call last week, and the notes from that meeting are also linked in the notes.
A
It was pretty good just to see what the plan is, what they need to do, what they plan to do, what the focus will be and so on. If you're interested, just read the notes. What I will work on: two or three weeks ago I started with some Block API work; I want to complete a draft so I can discuss it with the other people. And then there's also what I really need to do this week.
A
That's writing a blog post about all the Rust IPFS efforts, because just last week another repository was opened by a company in China. They also started a Rust IPFS implementation, I guess, and I also pinged them and said we should work together and so on. So I really want to make sure that we get the word out, so people know where the entry points are, how to get in touch with the people, and that this is something happening, and then we can work on all this stuff together.
B
You know, I think both of those pieces could be streamlined; I think both of those things could be done in a streaming way. But this is deep in the bowels of the Filecoin proofs, so we're stuck with that. So what I did do is this: the Merkle tree caching backing store is pluggable. By default the Filecoin proofs plug in a disk backing store, and that's what we had to swap out to get it into memory, with a backing store that it already ships with, which is a vector backing store. So what I did was write a backing store that pieced together a disk and a vector backing store, and put a bit just under four hundred megs on disk and then the rest in memory.
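The disk-plus-memory split described here can be sketched as follows. This is a minimal illustration of the idea only: `SplitStore` and its methods are invented names, not the real rust-fil-proofs `Store` trait, and a real implementation would buffer I/O rather than seeking per byte.

```rust
use std::fs::OpenOptions;
use std::io::{Read, Seek, SeekFrom, Write};
use std::path::Path;

/// Sketch: the first `disk_limit` bytes live in a file, the rest in a Vec.
struct SplitStore {
    disk: std::fs::File,
    disk_limit: usize,
    mem: Vec<u8>,
}

impl SplitStore {
    fn new(path: &Path, disk_limit: usize, total: usize) -> std::io::Result<Self> {
        let disk = OpenOptions::new().read(true).write(true).create(true).open(path)?;
        // Reserve the on-disk portion up front.
        disk.set_len(disk_limit.min(total) as u64)?;
        Ok(SplitStore {
            disk,
            disk_limit,
            mem: vec![0; total.saturating_sub(disk_limit)],
        })
    }

    /// Write bytes, splitting the write at the disk/memory boundary.
    fn write_at(&mut self, offset: usize, data: &[u8]) -> std::io::Result<()> {
        for (i, &b) in data.iter().enumerate() {
            let pos = offset + i;
            if pos < self.disk_limit {
                self.disk.seek(SeekFrom::Start(pos as u64))?;
                self.disk.write_all(&[b])?;
            } else {
                self.mem[pos - self.disk_limit] = b;
            }
        }
        Ok(())
    }

    /// Read bytes back, consulting disk or memory depending on position.
    fn read_at(&mut self, offset: usize, buf: &mut [u8]) -> std::io::Result<()> {
        for i in 0..buf.len() {
            let pos = offset + i;
            if pos < self.disk_limit {
                self.disk.seek(SeekFrom::Start(pos as u64))?;
                let mut one = [0u8; 1];
                self.disk.read_exact(&mut one)?;
                buf[i] = one[0];
            } else {
                buf[i] = self.mem[pos - self.disk_limit];
            }
        }
        Ok(())
    }
}
```

The point of the shape is that callers only ever see one flat address space, while the boundary between the two media is a tuning knob (here, the "just under four hundred megs" figure).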
B
Then that comes in under the memory sizing and uses a bit of disk to get there. Now it's working as a Lambda; I'm actually getting results from it, at least from some tests with simple test data, one gig of test data. I'm not super confident in the code, though. Volker reviewed it for me and fixed a bunch of my Rust rustiness, but there was one bug in particular that logically should have caused a wrong calculation but didn't, so that raises questions about why it didn't show up as a bug in the results. That undermines my confidence a little bit that this is doing the right thing, even though it seems to be giving me the right results. It's the kind of thing that, as I've said before: the further we move away from using the code that Lotus and go-filecoin are using, the more reduced confidence we should have in the results, once we commit to this path.
B
So I figure we'll just try my bits of it and see if I can get the results that I should get, and that's an interesting exercise, probably not going to be massively fruitful, but interesting anyway. So I should be able to have a Rust Lambda that I can publish; I'm just not super confident in it. And then the other thing was, I had a chat with Eric yesterday about some general IPLD data structure topics, like internal representations and APIs, and a little bit about the sorting stuff: key sorting, in-memory representation of key sorting versus codecs versus all that sort of stuff. There was a thread in Slack about this, and we had a chat about it. I took some notes; I think Eric took some, probably better, notes. It occurred to us that this is the kind of stuff that Volker should really be looped in on as well, while he's at the early stages of the Rust implementation. We need to get some...
C
Yeah, so last week I finished up a lot of the fixes and migrations for this new slicer project that gets the CAR files down below a gig. I lost like a day: I figured out that there's a segmented scan that you can do that's supposed to be faster, but the moment I started doing it, we were overloading our Dynamo table in terms of throughput, so I had to back off and go back to sequential scans. That sucks, but I'm nearly done migrating and fixing basically all the bugs in the old data.
C
...probably know why, yeah, so that'll be interesting. That's something that'll hopefully be kind of wrapped up by the end of the week. And then I've also been talking with Brian, sort of kicking around this idea for a database built on IPLD stuff. The way to think of it is that it's basically a transaction log over a HAMT.
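A minimal sketch of what "a transaction log over a HAMT" could look like, under heavy assumptions: `Db` and `Op` are invented names, and a plain `HashMap` stands in for the content-addressed HAMT, which in a real design would be linked from each log entry by its root CID.

```rust
use std::collections::HashMap;

/// One key/value operation inside a transaction.
enum Op {
    Put(String, String),
    Del(String),
}

/// Append-only log of committed transactions; history is never rewritten.
struct Db {
    log: Vec<Vec<Op>>, // one Vec<Op> per committed transaction
}

impl Db {
    fn new() -> Self {
        Db { log: Vec::new() }
    }

    fn commit(&mut self, tx: Vec<Op>) {
        self.log.push(tx);
    }

    /// Replay every transaction in order to materialize the current state.
    /// A real version would fold each transaction into a persistent HAMT
    /// and record the resulting root CID alongside the log entry, so any
    /// historical state stays addressable.
    fn state(&self) -> HashMap<String, String> {
        let mut m = HashMap::new();
        for tx in &self.log {
            for op in tx {
                match op {
                    Op::Put(k, v) => {
                        m.insert(k.clone(), v.clone());
                    }
                    Op::Del(k) => {
                        m.remove(k);
                    }
                }
            }
        }
        m
    }
}
```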
C
So yeah, I've been poking at it a little bit, and it made me start thinking about some of the API questions that we've had for a long time in a new light. In particular, we're definitely going to need a storage layer that knows how much of a graph is stored for a particular CID, so basically whether the whole graph under that CID is stored.
C
So that's one thing. Another thing is in the type generation stuff that I have from schemas. For the longest time I had this get method on it that would get blocks out, traverse its way through multiple layers of linked blocks, and then cast the results into the types. Now that I'm trying to use that in this context, I'm realizing that that's not quite the way you want to use it. It's actually probably better to have something that takes the storage layer, takes the types, and has an API for gluing them together and casting and traversing through them properly.
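One way to picture that "glue" layer, with everything here invented for illustration: the block format, the `Person` type, and the `TypedStore` API are all hypothetical stand-ins for schema-generated types, not any real IPLD library.

```rust
use std::collections::HashMap;

/// Toy block storage: cid -> serialized payload.
struct BlockStore {
    blocks: HashMap<String, String>,
}

/// A "generated" type. `friend` holds a link (cid) to another Person block.
struct Person {
    name: String,
    friend: Option<String>,
}

/// Decode a toy payload of the form "name|friend_cid" into the type.
fn decode_person(raw: &str) -> Person {
    let mut parts = raw.splitn(2, '|');
    let name = parts.next().unwrap_or("").to_string();
    let friend = parts.next().filter(|s| !s.is_empty()).map(String::from);
    Person { name, friend }
}

/// The glue: owns the storage, knows the types, and resolves links.
struct TypedStore {
    store: BlockStore,
}

impl TypedStore {
    /// Load a Person and follow `hops` friend links across blocks.
    fn load(&self, cid: &str, hops: usize) -> Option<Person> {
        let mut p = decode_person(self.store.blocks.get(cid)?);
        for _ in 0..hops {
            let next = p.friend.clone()?;
            p = decode_person(self.store.blocks.get(&next)?);
        }
        Some(p)
    }
}
```

The design point is that neither `Person` nor `BlockStore` knows about the other; only `TypedStore` combines storage access with typed decoding and link traversal.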
D
I did not make my goal for the week of getting all of the interface changes, the basic node rewrite, into the core of go-ipld-prime, or the codec traversals and stuff. That was really ambitious, it turns out. What took a bit more time than I expected was primarily that, while I was doing all these new interfaces, for the sake of iterating quickly I just slung all of the new code...
D
...into like one giant package, together with all the benchmarks and all this stuff, so I just pressed on through it. So now, as I get ready to move this into the core and make it correct and compartmentalized, I'm taking the basic node stuff, which is what I'm renaming it to; it had a code name before, but the code names have gotten too much for me, so it's the basic node package now, versus...
D
These
things
cannot
touch
in
a
central
way
when
they're,
actually
in
separate
packages,
and
so
there
was
just
some
learning
experiences
for
the
future
code
in
in
there
and
I'm,
basically
tabling
fixing
those
for
this
package
because
it
doesn't
matter
for
like
it
was
a
research
project
right.
So
I'm
just
going
to
like
kind
of
drop
those
bits.
But
the
lesson
was
learned
and
the
good
side
effect
is
I'm.
D
...getting a ton of design docs out of the week, which are probably going to continue to pay off in the future and will hopefully be generalizable. I'm working on a nice write-up that identifies all the different values you could prioritize, like speed versus assembly size, and then goes through the trade-off selection. That sort of explains a lot of the code very quickly, at least I hope.
D
I've also been filing more issues than usual. If anybody listening wants to do some Go coding work that's pretty well-defined, with self-contained, fun places to play, there's a bunch of new issues on go-ipld-prime for some traversal interfaces that I think would be cool to have and are probably easy to do on top of the core stuff.
D
And we're maybe starting some conversations with go-ipfs about integrating some more of all this stuff. Those are still very early conversations; research is needed on generating some concept of scope and of what would go in what order in that process. We'll see how that goes, and that's wrapping it up for me.
A
Two things, and I have a question about the padding stuff. So you mentioned at one point that the problem isn't the header, which you could obviously put at the end, but what they call padding, the bits shifting around somehow. Then you can't stream it, because it also goes backwards and so on. Would it be insanely complicated to change it to streaming, or is it just mildly insane, perhaps?
B
This is what I'm dealing with in JavaScript right now, trying to understand why. There's this weird code in there; in Rust it needs the Seek trait because it backs up: it goes forward, then it backs up, and then goes forward again. I'll try to understand why it backs up and why it just doesn't keep going, because I think it's only backing up like a byte at a time. So, in theory, this is not a seek to a random place.
B
It's heaps of documentation; it's just that some of it is really obscure. But I think the reason that it's really complicated is that it's allowing for arbitrary padding. We only care about padding of two bits per 254 bits: that's the only padding we care about, but this thing is written so that it could take any number of padding bits per any number of bits.
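The two-bits-per-254 scheme mentioned here can be made concrete with a small size calculation. This sketch only computes sizes and leaves a trailing partial chunk unpadded, whereas the real padding code also does the cross-byte bit shifting that makes streaming awkward.

```rust
/// Padded size in bits when 2 zero bits are inserted after every
/// 254 bits of input: each full 254-bit chunk grows to 256 bits,
/// an expansion factor of 128/127. A trailing partial chunk is
/// passed through unpadded in this simplified sketch.
fn padded_bits(unpadded_bits: u64) -> u64 {
    let full_chunks = unpadded_bits / 254;
    let remainder = unpadded_bits % 254;
    full_chunks * 256 + remainder
}
```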
C
One thing is that I wanted to keep it rounded to 1 meg, obviously, because the slicing is already aligned: the way that we tend to slice is already aligned to one-megabyte slices, or one-megabyte chunks, sorry. And I wanted to make sure that there was enough room for all of the blocks, all of the unixfs-v1 blocks, plus the CAR file overhead.