From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2020-03-16
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
Well, it's a perfect use case for IPFS, and then I figured out that loading in 60 gigabytes is not that easy. And it's interesting that the way they do the dumps, it's not a good fit for IPFS with the current chunking: you end up with a lot of unique data, so you don't get that good deduplication. With a custom chunker it might be cool. So if we had a custom chunker for the OpenStreetMap data, it might work very well, yeah.
B
It's currently exported with some very funny structs; we're debating doing some weirder style of encoding in order to make it look nicer on the output. Feedback welcome if anybody cares. Otherwise, I spent some time on a side quest looking at IPFS and how it uses IPLD, and trying to consider how we can make those go more smoothly together in the future, and in some probably upcoming rewrites.
C
Let's go. So, oh yeah, still working on this big data stuff. Hit some really interesting performance issues, particularly with Dynamo, that are blocking a lot of progress, and so it made me re-evaluate the entire approach, toward an approach that would actually avoid some of these large-scale Dynamo issues. It also led me to uncover a weird corner of S3 performance, where apparently S3 will bottleneck on a prefix. So that's where you'll find the maximum read and write capacity.
C
We get maximal performance for every single block, which is great. (Oh, and now my daughter is typing.) So, using that now, and generally working on a new approach to this whole big data collection that kind of refactors even the first thing that we were doing, but overall it's actually going to be way faster and easier. So, working on that. Also made a lot of progress on DagDB. This is surfacing some really interesting things.
C
New use cases for CAR files, where it's sort of using the CAR file like a git repo, and then every time that it does an operation it's just replacing the CAR file with a new one. And so the APIs that we currently have for manipulating CAR files are not super well suited to this; it's like a full rewrite of the entire graph.
C
So yeah, all the projects are doing contingency planning for the coronavirus, and so I'll be putting up a plan so that, you know, if I have to get pulled away by my daughter to go play videos for her or whatever, the team doesn't have to stop doing work, and everybody knows who to contact. It'll probably just be a list of the different areas of IPLD and who to contact. It won't be like "here's who is now the new me", because I can't single out anyone right now.
C
Yeah, no, I mean it's certainly not designed for that now, but it's interesting to think about what it would take to do that, and it overlaps with some of the other stuff that we're doing. Just recently we were talking about how, in order to do really efficient pinning, what we need to do is keep around an index covering, for each block:
C
What does it link to, and who links to it? That's what we currently have in the datastore, right. Having that index around would allow us to do really efficient garbage collection. You want to do something very similar with CAR files, where you want to have that layer of an index, so that when you do the manipulation you can see which blocks were orphaned, and if those were the only links to them, then you can do a really efficient new write of a CAR file without re-encoding.
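The index described here, forward links plus reverse links per block, is what makes the "which blocks did this CAR rewrite orphan?" question cheap. A toy sketch over a plain object standing in for a block store (the fake CIDs and the store shape are illustrative; a real implementation would walk IPLD links):

```javascript
// Toy block store: cid -> list of cids it links to.
const blocks = {
  rootV1: ['a', 'b'],
  rootV2: ['a', 'c'], // the "rewrite": b was dropped, c was added
  a: [],
  b: [],
  c: [],
};

// Build the reverse (who links to me?) index from the forward links.
const reverseIndex = (store) => {
  const rev = {};
  for (const [cidKey, links] of Object.entries(store))
    for (const l of links) (rev[l] = rev[l] || []).push(cidKey);
  return rev;
};

// Blocks reachable from a root, via the forward index only.
const reachable = (store, root, seen = new Set()) => {
  if (seen.has(root)) return seen;
  seen.add(root);
  for (const l of store[root] || []) reachable(store, l, seen);
  return seen;
};

// After swapping rootV1 -> rootV2, orphans are old-reachable minus new-reachable.
const before = reachable(blocks, 'rootV1');
const after = reachable(blocks, 'rootV2');
const orphans = [...before].filter((c) => !after.has(c));
console.log(orphans); // [ 'rootV1', 'b' ]

// The reverse index answers "were those the only links to it?":
// b's only referrer was rootV1, which is itself now orphaned.
const rev = reverseIndex(blocks);
console.log(rev['b']); // [ 'rootV1' ]
```

With that orphan set in hand, a CAR rewrite can copy every surviving block's bytes forward verbatim and skip the orphans, with no re-encoding of the kept blocks.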
E
Sorry, I thought it might work like this: if you had some core format, which sounds like CAR but is just a log format, storing tagged data but very flexible, and then CAR is a specific version of that. CAR means content-addressed archives; it has a specific form, and you could have other things built on top of the same underlying log-store kind of idea. But yeah.
C
I mean, it's interesting territory. I would say that we've been thinking for a while now about what a storage API would look like that did a better job of this and would have information like: do we store the entire graph for this block or not, right? Like, giving us an index of all the things that this links to, and the links to it, in the current storage system.
C
Right, so, what are the add-ons that we need to do more efficient operations? And when I think about how to port that to different storage backends, I don't see a place for a specific file format being helpful, because if I'm doing it on S3, what I'm actually doing is using just the key-value storage to store a little index with these things, and stuff like that. So yeah, so there.
G
So I got to chat with Michael last week, just to see how I could help the project, and basically decided to focus on providing feedback from an app developer point of view, around usability of APIs and whatnot. As some of you know, I'm kind of working in the medical imaging space, building a prototype. So I provided a couple of issues, and a dialogue, maybe with Michael, about some of those things from my point of view, but I am available to help with other things.
G
If there is anything, just feel free to ping me, and I'll do my best to help. Closely related to that, I'm doing a first pass of working with it, getting some real-world, first-hand experience, and at some point, I'm not sure the best way to engage this team, but I would like to share a little bit about how that works and get some design feedback. So maybe next week, or we can schedule a separate meeting; I'm not really sure. Probably a separate meeting.
G
That way, if you don't want to come, you don't have to, you know, take up your time. One thing is that I need some kind of way... okay. I was originally thinking, for my personal project, to use a centralized database to point into IPFS/IPLD, but then the work that Michael did on DagDB kind of made me think you could actually build a database on top of IPLD and not have to have a separate conventional database, and so I ended up looking at OrbitDB.
G
I'm sure you guys have looked at it too, but it's just kind of limited; it mostly requires everything to be in memory, so it wouldn't be appropriate. But I really liked the idea, and Michael, with the start you made on DagDB, it seems like it could transition into something that could be really cool, in terms of a hugely scalable database that provides the kind of search capability that is useful. So that's my update. I have a couple of questions at the end.
A
Cool, and I forgot an update, because I was also at a conference. It was also about geo standards, and so there's this standards organization called OGC, which does standards for open geospatial data, and there is an upcoming standard which is kind of like the basis for all the other upcoming standards, so they kind of replace their old one.
C
But what it kind of surfaced is that even Rabin isn't going to be very good, because there's some compressed data in there that introduces noise into the fingerprint, and what you actually want is just a custom chunker. And there are probably a lot of cases out there where a custom chunker is actually what you want to write and use, a lot of the time, and so just having APIs that make it easier to provide a custom chunker is probably going to be really nice to have.
C
Yeah, yeah, I mean, what I needed effectively ended up being a custom chunker for this big data work, and so I had to get that working on top of js-ipfs, and it was really hard to do with the code the way that it was in there. But now it's actually quite easy, significantly easier, to do this, so to some extent it was just a tooling issue. But yeah, I'll say so.
G
Actually, related to that is binary data management. So in medical imaging, the bulk of the data that we're dealing with is medical images, and they can be compressed in different ways, and I've actually moved away from IPFS files to store them, for a couple of reasons. But one was, just like we're saying, no control over the chunking, so I couldn't take advantage of the deduplication.
G
There's deduplication inherent in the data, so having control over where my file gets chunked up would actually make IPFS work much better for my use case. So what I did is, I've actually been storing the binary data via IPLD using the raw codec, which you guys may kind of shudder at, but the reason I did that was performance: it's actually the fastest to store and retrieve from. And that's actually one point of feedback: is that really evil, or is it okay to do?
G
Because really it's just a piece of binary data out there. But the question I actually have is: it seems like there's a need, or there could be value, in having a separate schema specifically for blobs. So we have UnixFSv2, which takes, you know, kind of file attributes: file names, permissions, some other things. But if you just want to store a blob, like in my case, I don't want to actually store it as a file; I just want a multi-block blob sequence.
C
I believe the final name that we settled on, after deliberation, was "flexible byte list", and so it's just arbitrary-length binary, and it scales from anything from an embedded byte array to just one linked byte array, to nested ones. And the goal there is that you don't tie the layout algorithm to the read method. So you can have a common read method and then figure out better layouts in the future, and everybody can still read the old data.
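The "layout decoupled from the read method" idea can be sketched with plain objects: a node is either raw bytes or a list of (length, part) pairs, and one recursive read function handles any nesting the writer chose. This mirrors the shape being described, not the actual IPLD flexible byte list spec (the field names here are made up):

```javascript
// A "flexible byte list"-style node is either:
//   { bytes: Buffer }                        -- leaf
//   { parts: [{ length, node }, ...] }       -- nested layout
// `length` is the byte length of the subtree, so reads can skip whole parts.

const totalLength = (node) =>
  node.bytes ? node.bytes.length : node.parts.reduce((n, p) => n + p.length, 0);

// One read method for every layout: read `len` bytes starting at `offset`.
function read(node, offset, len) {
  if (node.bytes) return node.bytes.slice(offset, offset + len);
  const out = [];
  for (const { length, node: child } of node.parts) {
    if (len <= 0) break;
    if (offset >= length) { offset -= length; continue; } // skip this subtree
    const got = read(child, offset, Math.min(len, length - offset));
    out.push(got);
    len -= got.length;
    offset = 0;
  }
  return Buffer.concat(out);
}

// Two different layouts of the same 6 bytes...
const flat = { bytes: Buffer.from('abcdef') };
const nested = {
  parts: [
    { length: 2, node: { bytes: Buffer.from('ab') } },
    { length: 4, node: { parts: [
      { length: 3, node: { bytes: Buffer.from('cde') } },
      { length: 1, node: { bytes: Buffer.from('f') } },
    ] } },
  ],
};

// ...read identically, so writers can adopt better layouts later and every
// reader with the common read method still understands the old data.
console.log(read(flat, 2, 3).toString());   // 'cde'
console.log(read(nested, 2, 3).toString()); // 'cde'
```

In the real thing the nested parts would be links to other blocks rather than inline objects, which is exactly why the inline-vs-link choice discussed later matters.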
C
Actually, my plan is to shove that into DagDB. So DagDB has this way where you can take special objects, and then basically, when you pass them to DagDB as a value, it'll create another block and then link to it, and so that block is basically a signal to say: do something special with this.
C
So one of the things that I'm going to have in DagDB is that you can pass it a stream, any stream, or actually an async generator that produces binary, and it will basically use that flexible byte list code to encode it as that. And then, when you read it out of the database, you'll also get back a stream object, and that's the underlying data structure below it, and a few things. So yeah, I'll send you a link to the schema for it.
G
That's awesome. I don't know if I could actually contribute something; you guys are ahead of me here. Yeah, I'll look at that, because there's a lot of things you're talking about there that match what I was thinking; there are two ways to do it. Okay, so given that, here's one other question. Oh yes, just real quick: in my implementation, did you use the raw codec to store the binary chunks, or did you use something like dag-cbor?
C
So the schema isn't specific to a codec, so you can use dag-json or dag-cbor. You probably can't use dag-pb, because it's not data model compatible. But basically, you can either embed the binary in as a value, or you can link to the block. So if it's really small, you can inline it, basically.
C
So
the
idea
is
that,
like
you
still
have
one
data
structure
and
one
read
method
we
pass
around,
even
if
you
just
have
them
embedded
bytes,
so
you
don't
have
to
do
that
like
Union,
if
def
code
like
somewhere
else
but
yeah
and
then
yeah
administered
Rob
locks
like
it
just
it
uses
raw
blocks
for
all
of
the
underlying
data
it
doesn't.
It
doesn't
do
that
the
old
thing
and
egg
PB
the
be
used
to
do
where
you
like,
wrap
the
binary
and
add
egg
could
be
known
every
time,
because.
G
What I've been meaning to ask is: is just storing the raw chunks in the raw codec kosher? It sounds like it is reasonable in this case. Okay, that's the preferred method, for sure, yeah. One other quick question, hopefully this is fast. So I've been looking at graphsync, and I'm just trying to understand, from the scope, is it intended to replace bitswap some day? That was one question, or is it a separate thing? Well...
C
It's just not... it's not going to be good at some things, and it's going to be way better at some things. So if you had a lot of peers with the data and you had a really wide graph, you'd end up getting it faster out of bitswap, because you can get data from multiple peers at one time, and it naturally kind of balances.
C
If
you
look
at
how
BitTorrent
operates
really
efficiently
was
really
high
demand
torrents,
it
looks
a
lot
like
bit
swap,
but
when
you
have
a
linear
chain
like
we
have
in
a
blockchain
graph,
sake
is
like
orders
of
magnitude
faster,
like
you
know,
based
on
the
grip,
the
depth
of
the
graph
you're.
Basically,
it's
saving
all
of
those
round
trips.
So
just.
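The round-trip saving can be put in rough numbers. For a linear chain of depth N, block-by-block fetching (bitswap-style, where each block's CID is only learned from its parent) costs about N sequential round trips, while a graphsync-style query that describes the whole traversal costs about one. A toy latency model with assumed numbers, not a benchmark of either protocol:

```javascript
// Rough model: total time = sequential round trips * RTT (ignoring bandwidth).
const rttMs = 50;   // assumed peer round-trip time
const depth = 1000; // linear chain, e.g. a blockchain

// Block-by-block: each link is discovered only after its parent arrives,
// so the fetches cannot be parallelized across the chain.
const blockByBlockMs = depth * rttMs;

// Graph query: one request ships a selector for the whole traversal and
// the responder streams every block back. (Transfer time still applies.)
const graphQueryMs = 1 * rttMs;

console.log({ blockByBlockMs, graphQueryMs, speedup: blockByBlockMs / graphQueryMs });
// the speedup is proportional to the depth of the graph: here 1000x
```

This is also why the wide-graph case flips the other way: with many known CIDs up front, bitswap's parallel fetching from multiple peers wins, exactly as C says.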
C
Yeah, so, I mean, that is adding quite a bit. But if you're using the default block API, then you're including dag-pb anyway, so you're going to have the protobuf library in there; it's not really going to increase the bundle size in JavaScript, I suppose. And we don't have to deal with the dag-pb codec stuff; we can just use a protobuf library. That's reasonable, I guess, yeah. It does kind of suck. That's...
B
I would like to come back to a couple of questions ago, and give a recap for anyone else listening who might not have heard of these keywords before. You asked about how to handle large pieces of binary data, and Chris and Michael talked a little bit about this flexible byte list for that. Yeah, "flexible byte list", I think.
B
We
understand,
tacitly
that
some
amount
of
code
is
needed
to
make
sense
of
the
reading
in
the
writing,
and
so
so
usually,
this
is
to
do
with
splitting
stuff
up
into
multiple
blocks,
but
it
could,
for
example,
also
be
used
for
doing
encryption.
There's
one
hypothetical
in
this
case.
We
have
no
proofs
of
that
yet,
but
it's
something
that
has
been
discussed
as
a
cool
idea,
at
least
no
crus
of
it
that
I
know
of
maybe
it's
that
they're
I
don't
know,
and
so
so
that's
the
general
feature.
B
That
sounds
like
topologically
what
it
is
and
and
then
you
just
need
a
little
bit
of
code
on
top
in
order
to
read
or
write
it,
and
so
we
are
hoping
that
that
is
gonna,
be,
as
you
were,
also
thinking
a
pretty
reusable
thing
and
we'll
probably
try
to
drop
it
into
the
middle
of
unix
FSB
too,
and
also
hopefully,
works
great
for
your
standalone.
Byte
needs
that
have
enough
we'll
be
doing
five
systems.
Yeah.
C
I think the good differentiator here is that everything in the data model works with just any kind of IPLD library: you can serialize and deserialize and read bytes and do everything. Once you hit these more advanced use cases, like "I have a ton of binary that gets split over a tree", you're now in the land of: okay, well, you need IPLD plus some other code. And that's basically what we've been referring to as an "advanced layout", where there is something outside of just our off-the-shelf IPLD library.