From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2020-07-06
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
Welcome everyone to this week's IPLD weekly sync meeting. It's the 6th of July 2020, and as every week we'll go over the stuff that we've worked on and will work on next week, and then discuss open issues or answer questions, so feel free to drop in. And yeah, I'll start myself. I was on vacation for two weeks, so I don't have any work to report. Well, before my vacation I worked a bit on the Rust IPLD stuff and the Rust multiformats stuff, and next is just catching up with the things I missed, and probably quite a bit of Filecoin work. So I'm not sure if I will do much IPLD work this week, but I will surely catch up, so expect me to do code reviews. And yeah.
B
Third time's the charm: going through another setup right now, hopefully a full deal flow, end to end, with multiple nodes, and we'll be able to issue the queries with these requests and stuff like that, to actually check that everything works as advertised. On the dagger front, I met with Michael and tried to kind of reintroduce him to what it is we're doing, you know, with this set of tools.
B
A lot of alignment happened there, because Michael wasn't actually aware which direction this had gone, and we are going to figure out sometime this week what our next steps are. But step 0.5 seems to be that we'll see if we can cross-compile it, either in parts or whole, to Wasm, and see how that will work, because everybody who Michael knows who would be interested in this is very, very much in the Wasm realm. So that's that. And I also had a long meeting with the person whose name I never remember, I'm...
B
Sorry, the person working on the Rust UnixFS add command, and walked them through what the components of adding something to IPFS actually are, which parts basically are at stake, which ones they can do, which they don't, things like that. It became a way longer discussion than I originally anticipated, and that's pretty much my week.
C
Yeah, so last week was mainly just kind of digging into Dumbo Drop code and doing a lot of refactoring. The key highlights are creating a Visual Studio Code container configuration which, if you haven't played with that before, is kind of cool, because you can encapsulate your whole toolchain and dependencies in a Docker container and then just basically spin that up. So it helps people get started really quickly, as opposed to figuring out version conflicts or the other dependencies. Smoke test...
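For anyone who wants to try the same trick, that kind of configuration normally lives in `.devcontainer/devcontainer.json`. A minimal sketch (the name, Dockerfile path, and extension ID here are illustrative, not taken from the Dumbo Drop repo):

```json
{
  "name": "dumbo-drop-dev",
  "build": { "dockerfile": "Dockerfile" },
  "extensions": ["dbaeumer.vscode-eslint"],
  "postCreateCommand": "npm install"
}
```

VS Code builds the image from the Dockerfile, reopens the workspace inside the container, and runs the post-create command, so every contributor gets the same toolchain without resolving versions locally.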
E
So this is kind of a big deal, because unions, despite possibly sounding simple, generally turned out to be one of the trickiest parts of codegen and the schema system, and especially to implement, because they involve making so many choices based on what data they expect to be within them. They're actually a lot harder to deal with than anything else. Like structs: with a struct, you know you're going to have all of these things, and that's simple enough.
E
But we've got the keyed representation and the type semantics both implemented now, and so that will get us pretty far, and it just, like, proves out that the core design around them works coherently, which is just a huge relief. So this gets us really close to the point of being able to do a bunch of really cool, practical things. I've basically not written a schema these days that doesn't have at least one union in it, so now that we've finally got this, it's like: yes, we're gonna almost even self-host the schema-schema.
E
Almost. The one big caveat there is that I used inline union representations in the schema-schema, and so I'm actually thinking of changing the schema-schema to back that out and switch them over to keyed unions, because keyed unions are just better anyway, and it would happen to make this self-hosting thing a lot easier.
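As a rough illustration of the difference being discussed, here is the same union value under the two representation strategies, shown on the JSON text form of the data. The member name and the `tag` key are invented for the example; this is only a sketch, not how any IPLD library actually models unions.

```python
# Keyed representation: the discriminator is the single outer map key, so a
# decoder knows the member type before it reads any of the payload.
keyed = {"cat": {"name": "Felix", "meows": True}}

# Inline representation: the discriminator is a distinguished entry ("tag"
# here, a made-up key) mixed into the member's own map, so the payload and
# the discriminator share one namespace.
inline = {"tag": "cat", "name": "Felix", "meows": True}

def member_of_keyed(node: dict) -> str:
    # Exactly one entry; its key names the member type.
    (discriminant,) = node.keys()
    return discriminant

def member_of_inline(node: dict, discriminant_key: str = "tag") -> str:
    # Must look inside the map to find the discriminator entry.
    return node[discriminant_key]

print(member_of_keyed(keyed))    # cat
print(member_of_inline(inline))  # cat
```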
E
I haven't taken action on that yet, but probably will, yeah. So that's exciting. I've had a couple conversations with Rod about docs; I think we just need to carry those further: harder, better, faster, stronger. And in the future I'm just gonna be doing more of that. I'm so close to getting the schema-schema self-hosting codegen that I kind of want to get that across the finish line in the next... however long it takes. The next fun thing that I have in there is...
D
As your experience with inline unions and your change of heart with the schema-schema suggests, we should consider revisiting some docs about inline unions. I think we might have downplayed them, just saying that they're harder to discriminate, but maybe we need to put some more clarity in there about how this is one of the least ideal, or the lesser ideal, of the union types.
E
Yeah, I would really like to clarify that as much as we possibly can in the docs. It's a feature that we need, because people make data like that, and there's a number of them in the wild that we care about describing accurately. But yeah, the more I get into it, the more I'm just aware of how massive the performance implications of these decisions are. Keyed unions are just... I think I said this in text in enough places, but:
E
You don't necessarily know that your discriminating information, your hint, is going to come at the front of this range of data. So if it comes somewhere way at the back, you have to buffer all this data that you don't have the power to decide what to do with yet, and so your options for doing anything efficient there are, like, not there. And so, while I want us to be able to support reasoning about that kind of data, it would be great to encourage people to not do it. Yeah.
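A minimal sketch of that buffering problem, assuming a decoder that is handed map entries one at a time in serialized order (all names are invented for illustration):

```python
def decode_inline_union(entries, discriminant_key="tag"):
    """Consume (key, value) pairs from a stream. Entries seen before the
    discriminator must be buffered, because we can't pick a decode strategy
    for them until we know which union member this is."""
    buffered = []
    member = None
    for key, value in entries:
        if key == discriminant_key:
            member = value                 # finally know what we're decoding
        else:
            buffered.append((key, value))  # can't act on this yet
    if member is None:
        raise ValueError("no discriminator found in map")
    return member, dict(buffered)

# Worst case: the hint arrives last, so the entire value sat in memory first.
stream = [("name", "Felix"), ("meows", True), ("tag", "cat")]
print(decode_inline_union(stream))  # ('cat', {'name': 'Felix', 'meows': True})
```

With a keyed representation the discriminator is, by construction, the first thing a streaming decoder sees, so nothing ever needs to be held back.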
D
There's a bunch of stuff in the schemas that exists to describe data that we know exists in the wild, but then there's ideal data that would work really well with schemas, and the schema is describing something that is more ideal than a lot of what there is in the wild. So making that differentiation clear in the docs, to say: hey, these schemas should be able to describe 90% of the common data formats or data layouts that are out there.
D
This discussion is actually continued in the issue that you opened last week about a newly proposed union format that you called a "keysniff". It may or may not be a reasonable name, but it's a good placeholder for now. But there's an issue open about this keysniff union type that would expand that 90 percent to maybe 95, to describe more types of data in the wild, but it does have all those problems of inline unions that make it less than ideal.
D
But I do think unions in this area, where they're trying to capture this messiness in data, which is where the sloppiness is in a lot of data formats that exist in the wild... the more we can capture that within the safe bounds that schemas provide, I think, the better. But it does open us up to that sloppiness a lot more, so we have to be very careful to tread very carefully.
D
Well, inline unions also have this potential, I think, to act in a space where you could have a complex structure that's got a lot of optional things, or you could discriminate between two types of strict structures. So it does exist in this space where you have optionality in how you write schemas. Because the examples I can think of, like, you know, I reached for things like the npm data, which has got so much optionality in it.
D
Now, the Zcash blocks. I have to rewind a little bit, because what I did, as I said, was the same thing as with Bitcoin: I packed them up into one-gig CAR files, and I didn't want to have a leftover CAR file, so wherever the last one gig took me to is where I stopped. But these things could potentially be added to later as well, so it can be run again. So it took me to about block 810,000, which is within the last two months, I think, and yes, that's all done and uploaded.
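The packing strategy described here can be sketched roughly as follows. Real CAR files have a header and varint-framed CID-plus-block sections, so this only models the one-gig rollover decision, with invented names:

```python
ONE_GIG = 1 << 30

def pack(blocks, limit=ONE_GIG):
    """Group serialized blocks (bytes) into archives of at most `limit`
    bytes each, never splitting a block across archives."""
    full, current, size = [], [], 0
    for block in blocks:
        if current and size + len(block) > limit:
            full.append(current)  # this archive is as full as it can get
            current, size = [], 0
        current.append(block)
        size += len(block)
    # The trailing partial archive is returned separately: stopping at the
    # last full archive, as described above, means holding this remainder
    # back and resuming from it on a later run.
    return full, current

# Tiny limit for illustration: three 4-byte blocks into 8-byte archives.
full, leftover = pack([b"aaaa", b"bbbb", b"cccc"], limit=8)
print(len(full), leftover)  # 1 [b'cccc']
```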
D
So there are fewer edge cases that have to be caught. Once I'd got the basics in place, it ran pretty nicely, and that's really interesting in itself: just the way that people have really been poking and prodding at Bitcoin to stretch the format, to enable different mining types and to pack weird things into the transactions, and people just aren't doing that with Zcash, because it's not as fun or it's not as profitable. So the code works fine.
D
It's all good. I've got a lot of code for this stuff, and the Bitcoin work, in pull requests that are not merged, and that's largely because of documentation which is not fully complete: the code documentation for Bitcoin is complete, the documentation for the Zcash stuff is not quite complete. And then the next item is related, which is doing the Bitcoin spec for the IPLD specs repo. I filed it in the specs repo; I am still working on that, chipping away at it. It's a bit tedious, and I keep on coming back to it.
D
I'd like to finish that up and then do the Zcash one as well. That's basically where all my brain dump about these formats as content-addressed formats goes: all the edge cases, how to think about them as content addressed, how to bridge the world of blockchains and cryptocurrencies with content-addressed data structures, which needs a lot more bridging than you would think from, you know, a cursory look at the two things. So that's going on. I also handed off some CommP work.
D
That's all open source anyway, it's just not very pretty, and also that code overreaches, because it was trying to figure out how to do it, and how to do it efficiently enough to run in Lambda. So there's a lot that can be trimmed back there, but also some docs. And also the Filecoin proofs code has churned a lot since then, so it needs to be updated against the Filecoin proofs code. So Chris, he's got that. And I helped Peter get the Filecoin CID stuff and...
B
Did it get merged? Do you think they will get it merged this year? No, not at all, but it is on their board for the next testnet reset, which is in about two weeks. But it's a massive change for them; there are just so many pieces they need to touch. But it basically landed on their board, so again: good.
D
This is something that is in an area where it might be useful to have some sort of concept docs about it. We continue to face, I think, philosophical differences with the blockchain world when it comes to Filecoin, and it's the same stuff that I have encountered with Bitcoin in particular, but there's... it's:
D
This content-addressed versus blockchains-for-cryptocurrency thing. When you're building a blockchain and you are thinking about it from the perspective of a miner, you have a lot of things available to you for making decisions about your format, its changes, and its ability to change over time. Most notably, you have the chain height, so you can make decisions based on what height you're at at any point
D
in time. So say you're at a height of, you know, a hundred thousand, and you say: okay, I want to switch out the way I'm storing these numbers; I want to start storing my floats as strings at that height, one hundred thousand. You can do that, and all the miners, all these full nodes, have that height information. But height is not something that's stored in the blocks. So if your file format is branching based on height, then you need...
D
You need to be able to navigate all the way back to the genesis block to make that decision when you're reading the format. And from our perspective, when we're viewing content-addressed data, the more locality the better. So when I encounter a content-addressed block, I've got a hash and a chunk of data, and you've got a multicodec
D
that says: this is the type of data it is. It's nice to be able to say, this is the type of data and I know how to read it. But if you then have to say, okay, I have this block, but I need the hundred thousand blocks before it before I know how to read it: that's not ideal. And in the blockchain world it's sort of reasonable, for two reasons.
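To make the locality point concrete, here is a toy contrast between the two approaches; the format rules below are invented for illustration, not Bitcoin's or anyone else's:

```python
def decode_with_height(raw: bytes, height: int):
    """Height-gated format: the decode rule depends on information that is
    NOT in the block itself. You must already know where the block sits in
    the chain, which ultimately means walking back toward genesis."""
    if height >= 100_000:
        return raw.decode("ascii")     # e.g. floats-as-strings after the fork
    return int.from_bytes(raw, "big")  # old rule: big-endian integer

def decode_with_version(raw: bytes):
    """Local versioning: the first byte carries the format version, so a
    lone block is self-describing and can be decoded with no context."""
    version, payload = raw[0], raw[1:]
    if version >= 2:
        return payload.decode("ascii")
    return int.from_bytes(payload, "big")

print(decode_with_height(b"3.14", 100_000))      # 3.14
print(decode_with_version(b"\x02" + b"3.14"))    # 3.14
```

The second decoder needs nothing but the block's own bytes, which is exactly the property content addressing rewards.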
D
One is that most software that is decoding these blocks has that information, and so they store height adjacent to the binary data; they have it, so it's not that big a deal. And also, there is this philosophical idea where the blockchain provides you security if you can validate it back to the genesis block, so there's a pull in that direction as well.
D
So just having that ability to say, this thing is connected to the genesis block, is part of the idea behind a blockchain, and so height is part of that. It's just that locality makes everything else so much easier, and so trying to get these data-versioning things into more localized tokens is better. Zcash, notably, is using their version fields: they've got version fields in their block headers, but also in their transactions. So the units that are content addressed have a version field in them, and they're
D
actually using that version field. They've got a version and a version group ID, and when you combine these two things, written at the beginning of the binary format, you get to make those decisions locally: okay, this is the version, this is the version group ID, I know how to decode the rest of it. I don't know what it all means.
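If I'm reading the Zcash serialization correctly, those two fields sit in the first eight bytes of a transaction: a little-endian header whose top bit is the "overwintered" flag and whose low 31 bits are the version, followed by the version group ID. Treat both the layout and the Sapling constant below as assumptions to check against the Zcash protocol spec; this is only a sketch:

```python
import struct

SAPLING_VERSION_GROUP_ID = 0x892F2085  # assumed constant, verify against the spec

def tx_version_info(raw: bytes):
    """Read the two leading fields of a raw Zcash transaction."""
    header, group_id = struct.unpack_from("<II", raw, 0)
    overwintered = bool(header >> 31)   # top bit of the first 4-byte field
    version = header & 0x7FFFFFFF       # remaining 31 bits
    return overwintered, version, group_id

# A Sapling-era transaction would start with these 8 bytes:
raw = struct.pack("<II", 4 | 0x80000000, SAPLING_VERSION_GROUP_ID)
print(tx_version_info(raw))  # (True, 4, 2301567109)
```

The point being made in the meeting: this decision is made from the first eight bytes alone, with no chain context.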
D
Bitcoin has a version field but doesn't really use it as well, and as far as I know, Filecoin doesn't really have version fields. So I did have a write-up about that, just to pass on to the Filecoin team. It's something we've talked about a couple times before, but it's never been sticky, actually.
B
While you were talking I was going to look it up; I'm not sure why they didn't use it. Either they have a flaw in it, or they just never got consensus that they need to start with something like that. But the TL;DR is that for Filecoin this might actually be a thing to consider, because it wasn't really well thought through from the outset.
D
Yep. But especially if you're in these situations where you, as the author, don't control how this thing evolves over time. With a blockchain, a cryptocurrency, you can't say, well, I know how this will evolve over time, because you have to reach consensus among your miners, and there's a good chance you're not gonna be the one with enough power to just make those decisions. So you know this thing will evolve out of your hands, and I...
D
That's how we think, but I don't know how to bridge that to the blockchain world, because I know that's not how they think, and I know, and I think it's reasonable, that that's not their focus. So the challenge is: how do we, as people who think at the block level, communicate that as an ideal for people that are not thinking at the block level?
D
To put the point another way, I think: my understanding is the blockchain for Filecoin is gonna be very thick, and so it's gonna generate a lot of data over time. With Bitcoin, it's coming up on 11 years and you've got around 300 gigs worth of data; Ethereum is around 300 gigs and they're younger. Filecoin could be bigger, and bigger sooner, if it's successful, and so the cost of being able to say, I'm at height a hundred thousand...
D
It's not like this stuff is insurmountable, but there is a cost to having to know the height, and so then you end up minimizing the trust that comes with blockchains, because, as you see now with Bitcoin, there are not that many full nodes, and the number of full nodes has been reducing over time, and so we're now having to use these third parties. And so you go to a Bitcoin...
D
...a blockchain explorer, and you say, I want to understand what's in this block, and then you have to trust them that they're giving you the right information: at height 100,000 it's doing this. Okay, I'm trusting you that you know the height, and I'm also trusting you for all these other reasons, because I don't want to run a full node to understand that data. So this trust relationship breaks down as it becomes pretty impractical to have the whole thing.
D
So, the more you can verifiably say, this is likely data from that blockchain, and not only likely, but this is certainly what the data says, the better. So anyway, that's been really interesting philosophical stuff. We had this argument with Filecoin last year, but I hadn't really dived as deeply into the blockchains, these cryptocurrencies, until this year, and so there are some really interesting focal points for all the work we're doing that are coming out of this. Anyway.
D
As Eric mentioned, we discussed some near-term priorities for doc work in the specs repo. I want to take some of these gists that he throws out, full of these really good ideas, and turn them into more formal spec documents, whether they're concepts or specs. And with that, I'm currently working on the one that he came up with last week, which was codec completeness, which builds on some work that's been mulling around for a little while about the spectrum of completeness for any IPLD codec.
D
When considering that: IPLD has an aim to be able to describe a large number of content-addressed formats, from Git to Bitcoin, and then along that continuum of completeness towards things like DAG-CBOR and DAG-JSON, which are close to being able to fully describe our ideal state of somewhat arbitrarily shaped data, not just these specific formats. So what does that spectrum look like?
D
That's really tricky, actually, but it's good to get that out in this way, because the doc at the moment basically says there is an ideal end of the spectrum that we don't exist at yet. So we could push further, but it would mean defining our own format, or changing the way we're using CBOR, for example. We could do that and push towards the ideal even further, but it might be that at that ideal end you're just in a realm of compromises, and you're just making the right compromise for your use case.
D
I'm extremely tempted to, because it would probably be really, really easy, and it's just a matter of how much bang for the buck that is. There are so many coins that would be really fun to throw out there, but yeah, probably not. But it probably wouldn't be hard, 'cause it's just, it's like an easy fork of Bitcoin's. Oh, I just...
A
And one piece of information for Chris: if you work on the CommP stuff and look also into the actual rust fil-proofs code, and you wonder why this thing isn't just a streaming operation: it could be. It's on my to-do list to do this, but I haven't gotten into it. So, just in case you wonder why it does things in memory or on disk or whatever, when you could just stream it: you can. So, yeah, I would expect you'll stumble across it, and then, yeah.
D
On that note, Volker, because I did describe all that to Chris: there is this odd bit where the CommP calculator has to re-implement the whole CommP generate function, because rust fil-proofs only allows using the Merkle tree algorithm with a specific backing storage, a cache storage, and we need memory storage, and there's no way with the rust fil-proofs API to just switch the storage. So you have to re-implement a whole variety of it.
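The missing flexibility is essentially a storage parameter on the tree builder. A toy sketch of that API shape in Python (names and the hash function are invented; rust fil-proofs' actual types and tree shape differ):

```python
import hashlib

class MemStore:
    """In-memory node storage, the kind of backend being asked for."""
    def __init__(self):
        self.nodes = []
    def push(self, node: bytes):
        self.nodes.append(node)

def build_merkle(leaves, store):
    """Build a binary Merkle tree over `leaves`, writing every layer into
    the caller-supplied `store`, and return the root. Odd-sized layers
    duplicate their last node."""
    layer = [hashlib.sha256(l).digest() for l in leaves]
    for node in layer:
        store.push(node)
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])
        layer = [hashlib.sha256(a + b).digest()
                 for a, b in zip(layer[::2], layer[1::2])]
        for node in layer:
            store.push(node)
    return layer[0]

store = MemStore()
root = build_merkle([b"a", b"b", b"c", b"d"], store)
print(root.hex())
```

Because the storage is an argument rather than a fixed type, the same builder runs against a disk cache or plain memory, which is the switch the rust fil-proofs API currently doesn't expose.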