From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2020-11-02
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
I'll start with myself. The big news, and a big relief, is that rust-multihash 0.12 is finally released. This is the release that contains the tiny-multihash work; tiny-multihash was forked off rust-multihash a few months ago.

A
So now you can really use rust-multihash as a replacement for tiny-multihash, in case you were using it. It's now no_std compatible, which, for those not that familiar with Rust, means you can use it on systems that don't have an allocator. It doesn't do any heap allocations for the multihash handling itself, though there is obviously some functionality where you do want allocation support: for example, if you want to get the result into a proper vector and so on. But that's just an optional feature.

A
So that's pretty cool. The other thing, on the IPLD side: I wrote an exploration report about extending schema maps with a more advanced listpairs representation. It's related to the string discussions, but it also stands nicely on its own.

A
I have even more thoughts about this kind of thing, but I still have to write them down, or think about them more. I think this one really is interesting, because it also gives a different view on the separation between schemas and the data model and how they relate. So it's worth a look, in case you are flooded by all these string discussions and don't want to read them all.
B
So, essentially, I finished the new go-multicodec implementation with an external contributor last week, but I hadn't actually pushed it to the main repo until now. I've now unarchived the repo, and there's a pull request to delete all the existing code and add the new implementation. It code-generates all the codes as Go constants.

B
Initially we said untyped constants, but now they do have a Codec type, which might need to be called Code instead; I'm not sure, input welcome. The whole point of having a named type is so that we can have methods like String, which I think is nice, because otherwise they're just integers and you can't really do much with them unless you have separate functions. We also had a couple more string calls, mostly with Volker.
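For illustration, a minimal Go sketch of the named-type idea; the names and coverage shown are examples, not the actual generated go-multicodec output:

```go
package multicodec

import "fmt"

// Code is a named type for multicodec table entries. A named type lets
// us attach methods to the values instead of passing bare integers.
type Code uint64

// Two example entries from the multicodec table; generated code would
// cover every registered code.
const (
	Raw     Code = 0x55 // raw binary
	DagCBOR Code = 0x71 // dag-cbor
)

// String gives a human-readable name, which is exactly the kind of
// method you cannot hang off an untyped integer constant.
func (c Code) String() string {
	switch c {
	case Raw:
		return "raw"
	case DagCBOR:
		return "dag-cbor"
	default:
		return fmt.Sprintf("Code(0x%x)", uint64(c))
	}
}
```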
B
I think the point here is that right now we have people that maintain repos and packages and so on, but sometimes things fall through the cracks and get forgotten, just because we have so many repos. So I'm going to start keeping an eye out, at least once a week or so, for issues and pull requests from external people that we might have forgotten about; say, anything that hasn't had a comment from us in a few days or a few weeks. And also some more HAMT work.

B
I added that about an hour ago. If the codegen PRs get merged I'll be so happy, because my code will get so much better, and I see one of them has already merged, so that's good. I've been reviewing those, and I've also been adding some more tests from the point of view of the Go API, because I expect that all the internal details that people wouldn't use directly will be covered by the spec compliance tests, which I assume are still a priority for this quarter. Of course, I've already offered my help in getting those done, because I will be using them. I also implemented initial support for link loading, you know, referencing other nodes.

B
It's kind of messy at the moment, so I'm going to post the pull request tomorrow. The last thing I have is that tomorrow I'm going to be on the Go Time show (it's a podcast, by the way) talking about what features you could remove from Go, which I think is an interesting topic, as opposed to everybody talking about adding features. I've got some ideas, like dot imports, or struct embedding and things like that, which I honestly hate.
D
Okay, so a busy, thick week of work. I felt like I was working a bit too late at night and then deep into my weekend, just because I was doing that focus thing; you know how it goes, you get in the zone and you just bang it out. I think that's Mikeal's usual mode with weekends: you can't let go of something. Anyway, I was doing the CAR-to-schema thing.

D
I finished it. Well, I didn't finish it, because there's so much more in this area, but it's at the state I wanted to get it to for the initial purpose: how far can we push this thing in utility, with minimal work, without creating a whole new rabbit hole? So this thing now does a multi-stage process.

D
You can run it across a CAR file and it'll find the common forms in there, describe them with schemas, and give you a huge list with stats about how frequently they're encountered. Then you can feed it a library of known schemas as well, and it'll run with those known schemas and add them to the mix. So when you add your own schemas, you can start to name the common shapes; the shapes that you know are common. For example, in the Filecoin data, the HAMT, because it's variable arity: you have these blocks with between 1 and 32 elements in an array. It's all the same data structure, but the blocks end up getting described as separate things, because the tool doesn't know they're the same thing. Once you start describing that with a schema and say, okay, this is actually just an array of elements and it can potentially be any length, then you start to capture all of the variations in one thing. So you build up a library, and eventually you find no novel schemas in there; then you've described all of the things, and you can get stats on where they appear. I did that for the Filecoin data. There are a couple of other neat little hacks in there, because the Filecoin data is so big.
D
It can do things like: you run it across the data with your library and say, any novel blocks you find, put them into another CAR file over here. Then you can use that to keep incrementally making your novel CAR file smaller and smaller, so it's quicker and quicker to do these iterations of describing schemas. So I can imagine this being an interesting way of doing schema development for existing data, and of validating schemas that you've manually written for known forms.

D
I want to use this now to go back to the Bitcoin data, because I'm in the middle of that Bitcoin spec, and actually verify that the schemas I've manually written for the Bitcoin shapes match the variations of the data that I'm spitting out. So that's another use case it's quite nice for. But there are limitations, because this is looking at blocks without context; this comes back to that whole context question.

D
So yes, you can describe the shape and you can match it to schemas, and so far, as far as I can find (maybe someone can think of a counterexample off the top of their head), there's nothing in the Filecoin schemas that conflicts: nothing like two completely different data structures that encode to the same-looking form. There are a lot of tuples with two elements in them, but in most cases they're either attached to something else, or the two elements are switched around in order: you've got string-and-bytes, but then you've got bytes-and-string. So by accident we've got all these unique forms, and you can take a form you encounter within the blocks and say, this is actually this thing over here, even though, if you switched those elements around, it would be the thing on the other side.

D
But there are all these data structures in the middle of things, particularly the HAMT and the AMT, where, if you've got the leaf nodes and they're storing inline values, you know what it is. And in a lot of these, particularly the HAMT, a large number of nodes are leaf nodes, because the HAMT doesn't just have a single leaf layer: the leaves can be scattered throughout it. With the AMT, though, you've got a leaf layer, and everything above that is intermediates, and in that whole space everything looks the same. So that context (how did I get to this node, what linked me to it) means you can't know what it's hanging off of. And also, if you don't inline the values, if you put CIDs at the end of the leaves in your data structures, then it all looks the same.

D
So I've got schemas with names like "AMT non-leaf" or "AMT with links", and you can collect these things. So far there's a limited number of things that actually have links at the leaves; a lot of these structures are inlined. But if we were to encourage people to link values rather than inline them, to save churn, that would probably exacerbate the problem.
D
So the ideal state is somewhere beyond this. This is helpful for schema authoring and getting everything pinned down, but to go further with this analysis you really do need random access to blocks: you need these things in a block store where you can actually start querying them and build up that chain. That connects to some of the work Mikeal's been doing with putting this stuff into a data store you can actually retrieve from, and also the stuff I think we'll be doing with the GraphQL work, where you're describing these connections. That, I think, is really important work for us: to figure out the way that context works with these blocks, and how we can build that into our tools in a way that is not super expensive. Because when we're talking about a CAR file that's 400 gigs and you want context, it just gets out of hand. So anyway, interesting work.

D
There are lots of other avenues for going further. I think the Filecoin team would love answers right now. I see the Filecoin team pushing ahead with optimizations that they think are good, like Steven's currently expanding the width of the AMT, which everyone thinks is good, but I don't think anyone knows how wide it should be, and there's a whole lot of questions about the AMT that we don't have answers to. You could say: here's the kind of churn you'd see if you increase the width, and here's the kind of savings you would get if you swapped it out for this other data structure. That's high-value work that would be good to do at some point soonish, but not right now. My second item on the list is that I'm backing up: I really have to close out some of these things I've left open.

D
I've left a trail of open work, and I have to close it out. So in the second half of my week I went back to my CAR work, and part of that was designing a new blockstore API that could potentially be shared in common with other JavaScript block storage systems. There was some design work in that: how can we think about this, how can we extract some of this out? Using the datastore API has just been terrible, because it puts too much into the format. But this new abstraction that Gozala helped me get to really pulls it apart by functionality, so you can address these storage structures just for the particular functionality that you need. If you need random access, just getting arbitrary blocks, then yeah, you can have that. But if you just need to iterate over it, then you can load it as an iterator and say: I just want to see what's in there, or even just, I want to see all the CIDs. That's a separate thing. You can say, here's my CAR data, in whatever form you give it (as a binary blob, as a stream, whatever), give me all the CIDs, and you just get an iterator. So it's pulled apart into these separate functions, by pieces of functionality, and the APIs are separate enough that you could load them separately and only get the code paths in your bundle that need them.
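The library itself is JavaScript; purely as an illustration of the shape of that split (capability-scoped interfaces instead of one monolithic store), here is a hypothetical sketch in Go, with all names invented:

```go
package carstore

import (
	"context"

	blocks "github.com/ipfs/go-block-format"
	cid "github.com/ipfs/go-cid"
)

// Hypothetical capability-scoped interfaces: a consumer imports only
// the capability it needs, so a bundle only carries those code paths.

// RandomAccess is for consumers that need arbitrary block lookups.
type RandomAccess interface {
	Get(ctx context.Context, c cid.Cid) (blocks.Block, error)
}

// BlockIterator is for consumers that only need to walk all blocks.
type BlockIterator interface {
	Blocks(ctx context.Context) (<-chan blocks.Block, error)
}

// CidIterator is for consumers that only want to list the CIDs.
type CidIterator interface {
	Cids(ctx context.Context) (<-chan cid.Cid, error)
}
```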
D
Along the way with that, I spent half of my week learning how to include TypeScript annotations in my code and in my build step. I'd been meaning to really get more of a handle on TypeScript, I've been doing that for the last little while, and it's turned out to be really nice. It sucks a lot less than I thought, and it adds a really nice linting step to the build process: you've got regular linting, but then you've got the structural linting, which is legitimately finding errors, which is fantastic. So I'm getting some of the benefits of types without going full TypeScript. It's just plain JavaScript with TypeScript annotations, also using TypeScript to describe the API. There's a link in the docs to, for example, the external API TypeScript definitions.

D
That's not used as a core part of the code, but it's exported, and it's used to validate all the code paths within the whole thing, including the tests. So yeah, I'm happy with that and I'm going to continue doing it. The CAR stuff's pretty much done; I just need to get the docs written and published, and then I can tick that one off. Anyway, that's me; sorry for taking too much time.
E
Right, so I actually don't know where my week went; very, very little of it on things that are important to IPLD. A few points, actually, that tie in with the struct-embedding comment earlier: I managed to get Lotus to play ball, with a lot of struct embedding, by the way. Without struct embedding this wouldn't even be possible, because I basically need to split one interface into three separate ones without having to change a ton of code. This blockstore interface and its relatives are used in something like 350 to 400 sites, and I need to break them apart without rewriting the world. So that's not great, but if everything works out, you will have access to individual blocks from a blockstore that can actually do, you know, concurrent access: a key-value store whose name starts with "s" and ends with "3", which is essentially the only thing that can keep everything we need at scale right now. Yes, we still need to figure out some way to do non-AWS-hosted blockstores, but at the scale we're going, very soon it will not be possible to do any of this on a single machine, even if you put all of it into IPFS without the DHT provider. Even just storing the blocks, at the scale and amount of blocks that we need, we cannot do this on a single IPFS instance anymore, because of Badger and other limitations.

E
So it's great that the work is happening to get our data stores a little bit more performant, but we basically need something like a 10x to 200x improvement. Until then, what's going to happen is that the chain will migrate mostly to S3. The twist to this is that there should be a way to get Lotus to bootstrap from that as well, provided we have a separate way to feed it what the current head is that we assume is correct. And if somebody wants to walk through the entire chain from genesis, all the way, they can still do that, because a blockstore is a blockstore: it doesn't matter where you get the stuff from, the hash will match and it will validate. So it's all good, but yeah.

E
That's what I've been trying to do here since Friday, and it's coming along, but not well enough, because our blockstore abstraction is so bad. It's not as bad as our datastore abstraction in Golang, specifically, but yeah.

E
At some point we need to design things that work with DAGs in mind. Almost nothing we do right now on the storage side has this in place. Take the simplest thing: being able to GC. Ideally, especially with DAGs, you want reference counting, and in order to do reference counting you need to know, for a block that you just stored, what other links it has. If you want to do this today, in the blockstore you basically have to parse every single block and pull the links out, which is crazy. Instead, we would have an interface which says: hey, store this block, and by the way, these are the links that it refers to; from there on you don't have to do anything. But because of how our interfaces are built, that's not possible.
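A minimal sketch of the kind of interface being described; the names and shape here are hypothetical, not an existing go-ipfs-blockstore API:

```go
package blockstore

import (
	blocks "github.com/ipfs/go-block-format"
	cid "github.com/ipfs/go-cid"
)

// LinkedBlockstore is a hypothetical interface where the caller, who
// has usually decoded the block already, hands over its outgoing
// links at put time. The store can then maintain reference counts for
// GC without ever re-parsing block bytes itself.
type LinkedBlockstore interface {
	// PutLinked stores blk and records that it references links.
	PutLinked(blk blocks.Block, links []cid.Cid) error

	// Links reports the recorded outgoing links of a stored block,
	// with no decoding required.
	Links(c cid.Cid) ([]cid.Cid, error)
}
```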
E
There are also things like (we already talked about this) the enumeration of every single block that you have in the store, and the way the put interfaces work: batching is weird. So yeah, mini rant there, but we probably need to come up with better ways to do things, because what we have right now just doesn't scale, and we'll probably get plenty of experience with that in the Lotus world as well.
F
Just for reference, DagDB has that indexing you just talked about. DagDB has its own blockstore abstraction, and every time you write a block it indexes all of the to-and-from links of that block, so you can always go and check. We use it for efficient graph synchronization (you can tell really easily whether you have a full graph or not), but you could also use it for GC; that was one of the intentions behind it.

E
Yeah, but does it parse the block itself, or do you feed it the links? Who does the parsing, basically? Do you have the Filecoin problem, where you have to understand everything that is being written into it, or is it opaque: you just say, here's the block, and here, separately, are the links that this block refers to?

F
I think it parses them, because it works with full block instances, so they already have a cached decode if they've been decoded. So I think it just reads the links out; it doesn't take them from you. And again, you're writing the to-and-from, so every new block that gets written has its to-and-from recorded, and you can always look up a block and know all the blocks that were written as a result.
G
You were right, and it's there now, so rerun your generation and you should have nicer things. I ran into these while trying to prototype writing some bigger things myself using the schema codegen output, and it was just very visible that we needed some more of these methods, so that, once you have something in the native type system, you can traverse over those data structures and continue to know what the native type of the subsequent things is. Previously a bunch of the iterators ditched that and went to the overly generic Node interface; now there are alternative methods which keep you with the concrete information, and those are a lot easier to use. They'll probably go faster, because the compiler can inline things around them; autocomplete works; you know, things that are good. There's another PR out that is about rearranging the codegen output.

G
Currently the codegen spits out a file per type in your schema. This has been terrible to use because, among other things, it means that if you regenerate after changing the names of some types, it doesn't go clean up the old files. Now it emits a fixed number of files, so you just don't have to worry about that.

G
Also, to use any of that you'd have to call these janky special placeholder go-away methods; that is all going in the bin. And now we have a draft of the schema types re-implemented, using the schema-schema codegen to produce types that match the data model. So now you can connect any IPLD codec to those codegen outputs from the schema-schema, and it will match and recognize schema DMT (data model tree) documents, and you can parse those into the fully reified schema system that the rest of this library uses. So we can glue that to codegen. The gluing and decoding part is still a little bit draft, but it's probably going to come out in the next week or two; it's almost there. Even before it's glued to codegen, this is getting really, really cool. It has tons more tests than the existing stuff, so now we can take stuff like: I have tests written which take a JSON document of schema, and that can be parsed.

G
It goes into the schema types. This is coming in two packages, if you're looking at it in Go, by the way. One package is called schema/dmt; again, DMT is short for data model tree, in contrast to an AST, an abstract syntax tree. It's conceptually the same, but there's no syntax attached, because you can choose any IPLD codec, right?

G
The difference between the two is that the one handling the data model tree can only do so many validations: you can check that the information is structurally correct. But there are other properties we want. We should validate that the whole type system is complete, that all types that reference other types actually form a connected graph, and verifying graph connectivity is a potentially computationally expensive operation, right? So we're not going to do all of that in the DMT layer alone.

G
This now includes all of the validation rules, which have never been implemented before, so this is kind of a big deal, and I'm hoping Rod is going to be available to review some of these with me, because I know we've talked about a bunch of them before, and now I've finally implemented them. All the little validation rules; like, for an inline union that contains structs as members, it really ought to be checked at compile time, so to speak, that the inline union discriminant key does not collide with any of the struct field names in the representation form. Things like this. We've had no tooling to validate this before; now we do. I don't know how many rules like this there are (I should actually count; there are several dozen small rules like this), and these are things where previously, if you wanted to write a schema, you'd have to read the spec real closely and make sure you didn't screw up. Now we're finally getting automated tooling to recognize these things. So this is going to be kind of huge.
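A made-up example of that particular rule, in IPLD schema syntax: the member struct's field name collides with the inline union's discriminant key, which a schema compiler ought to reject.

```ipldsch
type Shape union {
	| Circle "circle"
	| Square "square"
} representation inline {
	discriminantKey "kind"
}

type Circle struct {
	radius Float
	kind String # collides with the discriminantKey "kind" above
}

type Square struct {
	side Float
}
```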
G
I've also got one more gist, on strings. Like Volker mentioned, we're probably all tired of this, but hey, here's a link. This write-up is still largely oriented around the thesis of: this data exists; what are we actually going to do about it?

G
So, in Cliff's Notes form: it is possible to use ipfs add with any file name, where "any file name" of course means any sequence of bytes; not necessarily UTF-8, not necessarily even Unicode at all. Jeremy tested that, and there's a link in here to the IPFS web gateway with the result. It turns out the dag-pb codec stores the bytes we give it, so this works. It is interesting to note that the dag-pb codec documents this as a string (the protobuf spec file says string), but it is clear that in actuality the supported range is a full sequence of bytes; it is not limited to Unicode. There's a comment to that effect. Yep, so that's fun.

G
The web gateways do largely handle this correctly, with some exceptions that I will note in a moment. The URLs in links in the web gateways, for directories and such, are fully precise; they lose nothing. They use escape sequences for bytes that are non-Unicode; this ends up using percent-based escapes, which is normal for URLs and can apparently handle this entire range of data. This is the general pattern: anything that's using dag-pb data directly generally handles things correctly, because it gets the raw data, so it works. The web gateway is one example of that. The mounting subsystems also work: they will give you back files with their full, arbitrary file names. But not everything works. Here's the exception I mentioned earlier: the dag-json codecs, which do get used in some of the IPFS APIs, are currently, as implemented, lossy, and so we now have this demo that is linked to in the web gateway.

G
So this is lossy; it throws away bytes. But it only shows up in some places, and I have not entirely traced how the web gateway reaches this state. What I can observe at the end of the day is that the links are correct, where they're using URL escaping and percents, but the file names as rendered are not correct, and they're going to render however your browser renders that character.
F
We have talked a lot about doing a pathing spec, and this makes me very confident that we should not write a pathing spec: we should point to the URL path spec instead, because then we have all the same escaping rules, and we're not worried about people not implementing them in gateways and all these other in-between layers.

D
And it would be really good to get dag-json sorted; there are a bunch of points in dag-json that are not sorted out. I'm using dag-json more and more in test-fixture-related things, and it's a real problem that it's not consistent between languages, and not even internally consistent. So it would be good to get that one hammered out fully.

C
Okay, let's see, I can go next; I think I'm next. So last week, after being put on the async-generator path as the way to solve all my complexity around error handling, I ended up abandoning it. Working with Mikeal, it just turns out that the graphsync protocol is a bit too complex to fit into the iterator/generator paradigm. We tried, but it just wasn't converging, so I've switched back to the old-school way and have been working on that: adding in validation and error handling for the requester logic. That is my update.
F
To be clear, the way the protocol works is that there are two streams, one for control and one for the blocks, and because of the way it's designed, you can't actually implement effective flow control. You can't do a generator over it, and you can't map the right pauses and resumes onto that structure, because it wasn't really designed with flow control in mind; you can't even pause from the client, which is crazy. It very much looks like Bitswap in a lot of ways, and Bitswap doesn't have this problem because you always request a block.

C
Yes. And I'm actually eager to write another, new protocol that's far simpler; but anyway, that's after this one.

F
Okay, so a lot of what I did last week was get the reviews ready for a pre-calibration call. Other than that, what else did I do? Oh, I did some more migration today to the new multiformats stuff. I have to say, I really do like these new primitives; they're a lot nicer to work with, and all the code's getting cleaned up. It's really, really nice. Then, over the weekend...
F
So, I couldn't get this thing that Mccola figured out for those sorted trees out of my head, and I started to think: what is the simplest possible application of this technique? What I realized it looks like is just a sorted set of CIDs. Not a map, because all you need are things that are ordered by some ordering, and then you need something that hashes. So you just make the key and the value the hash, you order them by binary sort order, and then you look at the last byte: if it's zero, you chunk on that. So you don't have to implement a rolling-hash algorithm or anything; you just rely on the randomness from the hash. I really just wanted to implement it so that I could see how it works in practice, see how you would implement it, and then start to look at some of the performance dimensions of the structure. Having something this simple and predictable will allow us to really tweak the performance dimensions and see what that produces in terms of different block structures.
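A minimal sketch of the chunking idea as described: sort the digests, then close a chunk whenever a digest's last byte is zero, so boundaries are content-derived and no rolling hash is needed. The names here are illustrative, not from any IPLD library:

```go
package main

import (
	"bytes"
	"fmt"
	"sort"
)

// chunkSortedDigests splits a set of hash digests into chunks. Digests
// are first put in binary sort order; a chunk boundary is declared
// after any digest whose final byte is zero. Because hash output is
// effectively random, boundaries land roughly every 256 entries on
// average, and the same set always chunks the same way.
func chunkSortedDigests(digests [][]byte) [][][]byte {
	sort.Slice(digests, func(i, j int) bool {
		return bytes.Compare(digests[i], digests[j]) < 0
	})
	var chunks [][][]byte
	var current [][]byte
	for _, d := range digests {
		current = append(current, d)
		if d[len(d)-1] == 0 { // content-defined boundary
			chunks = append(chunks, current)
			current = nil
		}
	}
	if len(current) > 0 {
		chunks = append(chunks, current)
	}
	return chunks
}

func main() {
	// Toy digests (real ones would be e.g. 32-byte sha2-256 outputs).
	ds := [][]byte{{0x09, 0x00}, {0x01, 0x42}, {0x05, 0x00}, {0x03, 0x7f}}
	for i, c := range chunkSortedDigests(ds) {
		fmt.Println("chunk", i, "has", len(c), "digests")
	}
}
```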
F
So I did that. And once I did that, a conversation I'd had with Jeremy was stuck in my head, where we were talking about these block storage problems. I was relaying some of what Volker and I had talked about, and he said, I think outright, that the right solution to this problem seems to be an append-only B-tree: a single-file database that's just append-only. I was like, yeah. And then once I implemented this tree, I was like: oh, you could totally use this to implement an append-only tree. So I started to design that and worked it out as a database file format. This is really cool. I've started implementing it, and there's a pretty detailed spec on the design. Right now it's called CADB, content-addressed database, but effectively it's a database that's just a single file.

F
All of the keys have to be hash digests, and the values obviously are binary blocks. It doesn't do any of its own validation: you have to make sure that you're giving it good hashes and that the block data matches; the storage interface doesn't do that for you. But basically you use that same technique, and then every time you write a new entry, you rewrite the leaves that you've adjusted, and so on; it's typical B+ tree stuff. We'll have to see what the reads look like, because the depth can get kind of big when you're doing the tree like this. The thing is, though, the leaves are a pretty predictable size once you know the average hash digest size.

F
So anyway, that's really cool, and I'm working on implementing it. If it works and it's performant, then I'll probably pull Rod in (after you finish all of your dangling tasks) to work on a more unified spec and a hardened implementation. Then eventually I'd like a new version of the CAR file format that just uses this as the block storage, where we take the other stuff from a CAR file, like the header and the roots, and stick that somewhere in it. Because then, if we started using this as the CAR file format, we would instantly get indexed CID access to everything, right? And when you compact these, they're deterministic, so we still have a deterministic hash over a compacted store. And as we ship around CAR files, you don't actually have to wait to load them; you can just start the database on them, right, and they'll just work. It's kind of nice; there are a lot of really, really nice properties to this format.
A
Thanks. So this time I did scroll up to the agenda; I don't see any agenda items, but is there anything anyone wants to talk about? Yeah, Peter.

E
Yeah, I want to add to what Mikeal said: if you're actually looking for good caching performance, I would recommend splitting it; basically having two files, with the index entirely separate from the data. That's pretty terrible as far as moving it around, but it's the most ideal thing for the VFS to actually keep the right bytes in memory at all times, and for being able to leverage... what was it called? Not pre-read... prefetch.
F
I don't know, I mean, I'll have to see; if we keep the nodes in memory it's not going to be much different. And the problem with having two files is that you no longer have the nice transactional guarantee. The nice thing about this append-only file format is that the page file is always written as an atomic write, so the file is never in a state that's unreadable or corrupted, as long as you ensure those atomic writes. You just do a bunch of writev's for the page file, and the last thing in the page file is actually the position and length of the current root.
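As a sketch of that commit discipline (the layout is invented for illustration, not the actual CADB format): append the new pages, then append a small footer recording the position and length of the current root, and sync. A reader that scans to the last complete footer always sees a consistent tree.

```go
package cadb

import (
	"encoding/binary"
	"os"
)

// commit appends freshly written pages followed by a fixed-size footer
// that records where the current root page lives. Because data is only
// ever appended and the footer is the final thing written, a crash
// mid-write leaves the previous root (and a fully readable file)
// intact.
func commit(f *os.File, pages [][]byte, rootOff, rootLen uint64) error {
	for _, p := range pages {
		if _, err := f.Write(p); err != nil {
			return err
		}
	}
	var footer [16]byte
	binary.LittleEndian.PutUint64(footer[0:8], rootOff)
	binary.LittleEndian.PutUint64(footer[8:16], rootLen)
	if _, err := f.Write(footer[:]); err != nil {
		return err
	}
	// Flush to durable storage so the new root is actually committed.
	return f.Sync()
}
```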
F
But if you have two files, then you don't know whether you're mid-transaction, updating something, so you can't take a live database or blockstore and just copy the file out; which would be really nice, to just copy the blockstore out of your Lotus node without shutting it down. This is what people do with CouchDB, actually; it's really, really nice. It's one of my favorite features.

F
At the file system layer, at least, yeah. But also, you have to keep the entries in a sorted tree anyway in order to update them in the database. That's why you don't actually need a separate index: you already have a sorted structure to do the lookups in, right? I don't think there's actually much we would get out of an index in a separate...

A
...file. I think one advantage of putting the index into a separate file is that the index here really is a secondary index: the actual data is in the main file, and the index doesn't contain any data that is not in the file. So you can just throw it away, and you can always recreate the index; and if it's a separate file, you can just do that.
F
Right, yeah; but that is because the structure of the CAR file format doesn't keep things in order, because it isn't actually a database, right? That's why it needs a secondary index. Now, we'll probably end up with a secondary file eventually anyway, because Peter's thing of maintaining the link indexes for GC is not going to be in the same file; that would still need to be a separate file, and probably exported separately.

A
Yeah. For us, the advantage of a second file is also that you sometimes need different guarantees. For example, for the main file you really want to make sure the fsync really happened, and so on. But if your index gets corrupted (I mean, it shouldn't get corrupted, but if it does), it's not a big deal, because you can always regenerate it. So you might end up with systems where you put the index file on storage that is faster but might have more errors or something, and you don't lose that much.

F
I know, but here's the thing: I understand why you would want a secondary index over a structure that isn't well sorted, but this structure actually is a well-sorted structure in the file format. So I don't really see what the index gets you; it's only saving you some branch reads, effectively. The last head pointer points to the root of a sorted tree structure, which is the actual index. So you do have an index: a well-sorted structure for seeking into the file and finding what you need. A secondary index doesn't really save you much, especially once things get a little bit hot.
F
The shape is the same regardless of compaction; where things are on the disk changes if you compact. That's what we're talking about: throughout that whole file format, a bunch of places don't have data in them, but that doesn't actually impact the performance of the tree, because the tree is still the same shape whether you compact it or not, since as you change it, it rebalances with the hash.

H
The cost, I would say, is your reads, especially on a cold start: you're doing log(n) reads down that tree to get to your data item, and all of those...

H
No, it doesn't need to mmap the file; to generate it, it basically does, but for reads it doesn't need to. After generation it's a B-tree, and it does the search without being mmap-backed, so it doesn't all need to be in memory at once. There is a file; it knows that it's the sorted list of CIDs and the offsets into the main data file. It does log(n) reads into that index file to learn where the item is, and then it does one read into the CAR file to get the item.
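A minimal sketch of that lookup pattern: O(log n) probes into a file of sorted fixed-size records, then a single read into the data file. The record layout here is invented for illustration:

```go
package carindex

import (
	"bytes"
	"encoding/binary"
	"os"
)

const recordSize = 32 + 8 // illustrative: 32-byte digest + 8-byte offset

// findOffset binary-searches a file of sorted fixed-size records for a
// digest, reading one record per probe (O(log n) reads, so nothing
// needs to be resident in memory), and returns the byte offset of the
// block in the main data file.
func findOffset(idx *os.File, digest []byte, numRecords int64) (int64, bool) {
	var rec [recordSize]byte
	lo, hi := int64(0), numRecords-1
	for lo <= hi {
		mid := lo + (hi-lo)/2
		if _, err := idx.ReadAt(rec[:], mid*recordSize); err != nil {
			return 0, false
		}
		switch bytes.Compare(rec[:32], digest) {
		case 0:
			return int64(binary.BigEndian.Uint64(rec[32:])), true
		case -1:
			lo = mid + 1
		default:
			hi = mid - 1
		}
	}
	return 0, false
}
```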
H
It's all in one compact thing, because I'm doing a single scan over it; and it is just that implicit record array that it's using as its sorted list of those CIDs. But what you could do is have a tree with branches, write a new root, and maintain your metadata in an append-only way as well, like you're imagining doing in your CAR format.

H
You're still pulling in a larger block on each read, so if you can get locality, that's sometimes nice, and I think that's what you would get as a byproduct of your updates: your root and the things close to the root will often all be written together (at least a path of them changes), and you'll end up with those in the same block that's getting pulled back into memory. So that's worth thinking about, maybe, but I suspect you're right that it really doesn't matter that much that the things near...

F
...and it's all in the same file. One thing I do do, though, is keep separate file descriptors open for reading and for writing, just so the position on the reader is always the same. That actually is a lot nicer; it does speed things up quite a bit. The access pattern is not usually a huge deal, but I don't know; I'll benchmark it and see what it looks like.

H
I can say I've got a PR that generates a GraphQL schema out of an ipld-prime schema. I'm starting to work through auto-generating a GraphQL server from that as well, so that you can serve out of your IPLD data store and deal with incoming queries that are in an IPLD format. Basically, that's additional codegen that follows the existing Golang codegen.

A
Yeah, cool. We are out of time, so I'll close the meeting. Thanks everyone for attending, and see you all again next week.