From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2020-10-26
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A: Welcome everyone to this week's IPLD sync meeting. It's October 26, 2020, and as every week we go over the stuff we've worked on in the past week and what we'll work on in the next one, and then discuss agenda items. I'm just checking whether the live stream actually works, because it looks strange. Oh, it does. Oh, cool. Somehow the YouTube autoplay is not autoplaying anymore, because I know I had to hit the pause button.

A: I was confused that I didn't have to, but okay, that's great. So I'll dare to start with myself. This week I actually had time to work on some IPLD stuff.

A: I did a few exploration reports about the IPLD data model and map keys and strings and so on. The short version is that I opened one which was just a starting point for the discussion there. The important thing there, I think, is this.

A: The conclusion of the exploration report is that map keys should perhaps be their own thing and not be a kind that we already have. And just to make it clear for people watching this: exploration reports are written by someone and are more or less an opinion about how things work. They're not the definitive specs that we use, but rather ideas, which might then go into specs. And on the Rust side of things, so rust-multiformats and so on:

A: Finally, I did a rust-multibase release, which was long overdue, which adds the base36 encoding. Sadly, on rust-multihash there's no release yet, because there's still one PR that is kind of in review. It's all details, but hopefully this week I will do a proper rust-multihash release and then the rust-cid release, and I still haven't heard back from the libp2p people.

A: But I will now just go ahead with the release and then rebase my PR on libp2p, and hope that they will then review that PR and get it merged. I got a quick review on the libp2p stuff and it seems that generally it's okay. I wanted to postpone the release basically so that they could still get breaking changes in before I do the release, but it seems that they don't need that. So I'll just do the release, and if they still need it, we can always do another release. I think that's all I did for IPLD. So next on my list is Rod.
B: Okay, so I've done a bunch of little things and a few big things. A little bit of multihash and multicodec work, mainly JavaScript, such as a blake2 multihasher, and some Go dag-pb work. Not much of that, though; I really would like to just get that finished off. And I've done some more go-car work and some more JavaScript car work, which is another one that is nearing completion.

B: I just need to close that off and publish it as this new car library. Also a little bit of js schema work, which I haven't touched for a while because it overlapped with the other work I was doing, and then these three big things that I've done last week. One of them is js-ipld-schema-describer, which will take a JavaScript object and describe it with an IPLD schema in as minimal a way as it can, without using some of the flexible options like nullable and unions.

B: So it'll give you a relatively naive schema for any JavaScript object that falls within the data model anyway. And then, on the flip side, I did js-ipld-schema-validator, which will build a little JavaScript validate function for any given IPLD schema that you can run against a JavaScript object and ask: does it match the schema? They're minimal and fast, and it supports all of the schema features except for the stringjoin and stringpairs representations.
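A rough sketch of that describe/validate pairing, with made-up function names and types rather than the actual js-ipld-schema-describer and js-ipld-schema-validator APIs:

```typescript
// Toy "describe" pass: infer a naive IPLD-schema-like description of a plain
// JavaScript object, with no nullable/union tricks. Names are illustrative only.
type SchemaType =
  | { kind: 'Int' } | { kind: 'Float' } | { kind: 'String' } | { kind: 'Bool' }
  | { kind: 'List'; valueType: SchemaType }
  | { kind: 'Struct'; fields: Record<string, SchemaType> }

function describe(value: unknown): SchemaType {
  if (typeof value === 'boolean') return { kind: 'Bool' }
  if (typeof value === 'number') {
    return Number.isInteger(value) ? { kind: 'Int' } : { kind: 'Float' }
  }
  if (typeof value === 'string') return { kind: 'String' }
  if (Array.isArray(value)) {
    // Naive: assume non-empty, homogeneous lists and describe the first element.
    return { kind: 'List', valueType: describe(value[0]) }
  }
  if (typeof value === 'object' && value !== null) {
    const fields: Record<string, SchemaType> = {}
    for (const [k, v] of Object.entries(value)) fields[k] = describe(v)
    return { kind: 'Struct', fields }
  }
  throw new Error('value falls outside this sketch of the data model')
}

// The matching "validate" pass: does an object conform to a description?
function validate(value: unknown, schema: SchemaType): boolean {
  switch (schema.kind) {
    case 'Bool': return typeof value === 'boolean'
    case 'Int': return typeof value === 'number' && Number.isInteger(value)
    case 'Float': return typeof value === 'number'
    case 'String': return typeof value === 'string'
    case 'List':
      return Array.isArray(value) && value.every((v) => validate(v, schema.valueType))
    case 'Struct':
      return typeof value === 'object' && value !== null &&
        Object.keys(schema.fields).every((k) =>
          validate((value as Record<string, unknown>)[k], schema.fields[k]))
  }
}

const block = { height: 42, miner: 'f01234', parents: ['bafyexample'] }
const schema = describe(block)                  // naive schema for this exact shape
console.log(validate(block, schema))            // true
console.log(validate({ height: 'x' }, schema))  // false
```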
B: Those are the only things it doesn't support. That's really neat, and you can see a lot of use cases for it, but it's mainly interesting within the realm of schemas, which is why I put these two things together in this car-to-schema project.

B: This comes about because we did make some not-quite-explicit commitments to Filecoin about helping them understand their data better, because the pressure is on them to reduce the size of their storage by some degree, and there are all sorts of proposals out there, some of which we have flagged as being a problem, but which will probably go ahead anyway if we don't have anything better.

B: So I thought somebody had better do some of this work. This car-to-schema thing will take a car file, run through it, and describe all the shapes in there as schemas, so it'll spit out all the unique schemas. But what I've now added is the ability to feed it a library of schemas as well, so it'll first check against your existing schemas and, if it's got them, it'll tally them up; if not, it'll describe them with the schema describer.
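A sketch of that tally-or-describe loop. Reading and decoding the CAR is elided, and the "schema" here is deliberately reduced to a fingerprint of top-level field names; the real tool works with full IPLD schemas:

```typescript
// "Check against known schemas, otherwise describe": walk decoded blocks,
// tally the shapes you already know, and record the ones you don't.
type Tally = Map<string, { count: number; known: boolean }>

// Deliberately simplistic stand-in for a schema: sorted top-level field names.
function shapeOf(obj: Record<string, unknown>): string {
  return Object.keys(obj).sort().join(',')
}

function tallyShapes(
  decodedBlocks: Iterable<Record<string, unknown>>, // stand-in for CAR contents
  knownShapes: Set<string>                          // the schema library fed in
): Tally {
  const tally: Tally = new Map()
  for (const block of decodedBlocks) {
    const shape = shapeOf(block)
    const entry = tally.get(shape) ?? { count: 0, known: knownShapes.has(shape) }
    entry.count += 1
    tally.set(shape, entry) // unknown shapes get "described" (recorded) here
  }
  return tally
}

// Example: two block shapes, one already in the library.
const library = new Set(['height,miner,parents'])
const blocks = [
  { height: 1, miner: 'f01', parents: [] },
  { height: 2, miner: 'f02', parents: [] },
  { deals: [], kind: 'market' },
]
console.log(tallyShapes(blocks, library))
```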
B: You end up with this two-part thing, with the aim that you would move from no library to a full library, and then you would be able to see which blocks are taking up the majority of storage. You can count them by block counts that match the schemas, and then, beyond that, it would be nice to have some statistics about sizes of blocks.

B: That's the kind of thing you should be able to say with this, and that should lead to better decisions about how to optimize, and not just jumping the gun on things like "let's just inline small blocks as CIDs", which is on the table and will probably be implemented as one of the first solutions to dealing with chain size if we don't have better input. So that's that. Where I'm at with that now is I was going through Will's project.

B: What is it called? What's the project called? Oh, statediff, where Will has already built schemas in the go-ipld-prime schema language for Filecoin, and I've been trying to build some. The problem with these data structures is that a lot of the nodes are things like the HAMT or the AMT, where the values are inline.

B: And so you want the schema to describe the data structure plus the inline values and avoid the use of "any", because you really do want to be able to describe that part as well. So you're going to end up with schemas that are like "AMT with this value" and "AMT with that value", and yeah, that'll be interesting.
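A sketch of that "AMT with this value" problem in TypeScript terms. The node layout and the value types are illustrative, not the real Filecoin structures; the point is that inline values force the container description to be specialized per value type:

```typescript
// Because leaf values sit inside the AMT nodes themselves, a description of
// the container has to be parameterized by the value it carries.
type Link = { '/': string } // a CID link, data-model style

interface AmtNode<Value> {
  bitmap: Uint8Array
  links: Link[]   // child nodes, by CID
  values: Value[] // inline leaf values, hence the type parameter
}

// Two concrete "schemas" that differ only in the value they carry:
type DealState = { sectorStartEpoch: number; lastUpdatedEpoch: number }
type MinerPower = { rawBytePower: string; qualityAdjPower: string }

type DealStatesAmt = AmtNode<DealState>
type PowerTableAmt = AmtNode<MinerPower>

// IPLD schemas have no generics, so each of these has to be written out as its
// own schema, which is exactly what the describer ends up producing.
const example: DealStatesAmt = {
  bitmap: new Uint8Array([0b1]),
  links: [],
  values: [{ sectorStartEpoch: 0, lastUpdatedEpoch: 10 }],
}
console.log(example.values.length)
```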
C: So yeah, I punted on all of that stuff in the one that I had; as you can see, I did the sort of interpreted representation, so a HAMT is just a map of things, for instance.

B: I can't find anything that matches that so far, so it's a bit of a game. It's the kind of thing where you want somebody who's much more familiar with the data structures to do some of this work, but I don't know if I can really pull anyone off for that.

B: Yeah, that's a nice ideal; it's just that tackling such a large amount of data in this sort of random-access way is tricky, and so the way I've been solving it so far has been: just feed me blocks in whatever order they are in the car file and I'll figure it out. But there is utility in the linking information that I just haven't touched at all, because it's a problem space that's a little bit too complicated for what I want to bite off for now. Really you should be able to talk about relationships between blocks, but right now it's: I reached a CID, that's just a link, and whatever's beyond that I don't care about for now. But this is throwing up a whole lot of really interesting areas, really interesting spaces for additional work or research, that I don't have time for now. I don't really want to be doing this for a long time.

B: I've got to get back and finish off a bunch of other things, but it's definitely a rich space for practical work, I think. I can already see useful things with this. For instance, I have to finish off the Bitcoin codec spec, and part of that was describing things in schemas, and I've already written some schemas.

B: Just, you know, handwritten ones that match what I see. It'd be nice to be able to validate them, or actually use this to create some initial schemas and then build some matching there. So there's already utility. I've applied it to dag-pb as well, which is interesting work too, to validate some of the schema decisions there. So there's actually a lot of practical stuff in here that could grow.
D: Or point me to an example. I mentioned last week that I'm doing a lot of work with CDDL, the Concise Data Definition Language, and there's actually a Rust library I pointed you guys to last week. It's not fully completed, but I'm going to start contributing to that for the stuff I'm working on with DIDs, and building in some branching logic.

D: So basically you can just do test cases of either classes or types, to satisfy that in Rust. So I think, if you just give me a pointer, I'll ponder it a little bit.
E: Rod, I was just thinking, I may end up actually just writing this or poking at it, but I think there's an interesting sort of iterative approach that you could take to using this tool, where you basically just say: this is the schema for the root block, and this is where you start, and then as it's building out, or trying to figure out other schemas, you at least know where the branching came from, right? So if you're saying,

E: "oh, I don't know this bunch of data that's in this property, and here's a bunch of schemas that I've kind of generated for it", you at least kind of know the context of where it came from, and then you have a much easier time writing a new schema on top of the data that you just got. And every time you lock one in, you gain more information and you're segmenting more of the information about where the blocks are that you don't understand, right?
B: Yeah. Even yesterday I was talking with Alex about their challenges with storage and Badger, and then looking at novel ways to deal with that, some extremely novel. But even just knowing what the nature is of the way that the heads of these data structures lead to different sizes of structures underneath them. Okay, we've got the ultimate root, but we have these other roots beneath them of different things, like, what is

B: the spanning nature of these things? Just not being able to see that, or have numbers for that, is really frustrating, and it would be good if we had better tools for that. But yeah, anyway, this is a start and there's a bunch of ideas that come out of it.
F: Cool. So I think I mentioned this a few days ago, but there was somebody who actually wrote a version of go-multihash, as in the library, and we didn't know of each other, because I had written down a plan to do it but I just hadn't written much code yet. So I ended up reaching out to them; there's a link in the notes, and surprisingly they were, you know, open to my feedback.

F: I also updated to the new schema, which I mostly did the week before this one, but that's when I unearthed that bug in the schema, which was fixed after the call. Right now what I'm working on is support for nodes that are linked instead of just inline, because that requires me to add a bit of extra API for a link loader and so on.

F: And finally, I also filed a couple of issues to improve go-ipld-prime's Go code generation. Besides that, not really IPLD related, but because Go's freeze is coming up in like four days, I've been scrambling the past week, because there were a few things that I really wanted to get in but they were lagging behind. I got two of them merged, but the third one is in nitpicking hell and I'm really upset about that.
G: Oh boy, yeah, hi everybody. So actually very little IPLD stuff, even though I was kind of part of all these efforts around the chain in one way or another. The very IPLD-related thing is that I got some actual raw data out to a bunch of folks.

G: You know, a lot of the initial exploration was done over live data. I'm also part of several efforts, both in Sentinel and in the exploration of data store performance, to basically quantify what works and what doesn't.

G: To be honest, the people who know way, way more than me about this stuff are focused on optimizing Badger and stuff like that, whereas long term this just can't work for us; our chain is just too big. And when I say too big, I really recommend for anybody who hasn't actually touched Filecoin to just get one of the car files that I left in the IPLD room.

G: Different levels of big, essentially, yeah.

G: Yes, yes. And because there is little focus on longer-term thinking, I want to put together something super janky, not to be included anywhere (not like what happened to my first proof of concept, which is now becoming a PR, please no), but to see, basically, whether it's even viable, and then we can go from there. Kind of to give a little bit more context to what Rod said:

G: the schema, as we understand it, is a relationship model; that's number one. Number two, I'm actually not sure that there is a way to get out of this, because if you run a histogram on the actual block store, 95 percent of the blocks are under 2 kilobytes, of these 500 million. No, sorry, my bad: 99 percent are under 2 kilobytes, 95 percent are under 512 bytes. We have a ton of blocks that are one byte large, with the hash and everything around it. So what...
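A quick sketch of the kind of histogram being described here, over a list of block sizes; actually reading the sizes out of the block store or a CAR export is elided:

```typescript
// Report the cumulative share of blocks below a few size thresholds, the kind
// of numbers quoted here (95% under 512 B, 99% under 2 KiB).
function sizeHistogram(blockSizes: number[], thresholds: number[]): Map<number, number> {
  const sorted = [...thresholds].sort((a, b) => a - b)
  const counts = new Map<number, number>(sorted.map((t) => [t, 0]))
  for (const size of blockSizes) {
    for (const t of sorted) {
      if (size <= t) counts.set(t, (counts.get(t) ?? 0) + 1)
    }
  }
  // Convert counts to cumulative percentages.
  const result = new Map<number, number>()
  for (const [t, n] of counts) result.set(t, (100 * n) / blockSizes.length)
  return result
}

// Example with made-up sizes; a real run would walk the ~500M chain blocks.
const sizes = [1, 40, 200, 300, 480, 700, 1500, 3000]
console.log(sizeHistogram(sizes, [512, 2048])) // Map(2) { 512 => 62.5, 2048 => 87.5 }
```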
B: It comes back to that same thing: we need to be able to provide ways to visualize, or at least provide statistics for, how these shapes work out in practice, because a lot of the architecture that has gone into this has been "this seems like the right place to hang something", or "let's put a link here, that seems right", or "well, let's inline this one, that seems right", but you just can't see the result of any of those decisions.
H: This healthcare thing I'm working on with some other guys, we're like, we just need to be able to visualize the DAG, to understand it and communicate it. I didn't really think about the size of the blocks, but that's actually another aspect of visualization. And I agree; I mean, I think when you start moving in this world, whereas with tools like relational databases you can draw the schema out, you know, and it's constrained, it's controlled, there are other ways you can...

G: And yeah, so that's kind of my update, so that was...
G: I'm not saying there are millions of them, obviously there aren't. I'm saying there are even blocks that are one byte long, and basically there is the entire gamut. Like, oh yeah, yeah, okay, okay.

G: I think I have it, hold up. Yeah, there it is. So this is the histogram of the chain. I mean, it's small because I ran it this way. The first three columns are up to 390 bytes.
E: Number of bytes? The height of this graph is total size, not number? No, no, the height of this graph is how many, the number of blocks. How many, okay, okay. So, but when you get down farther here, that actually could be more of the actual bytes of data. It's just, yeah. I mean, that doesn't help you with Badger, because the problem is the number of keys, but I'm just...
B: Well, this goes back to those same trade-offs we've been talking about for ages, which is that they're optimizing, I think in a lot of cases, for mutation costs. They did benchmarking early on for the HAMT and came up with a bit width of five, which only gives you an arity of 32 in a block, which makes relatively small blocks. But the reason that was chosen was because the speed of mutation and the cost of doing mutations was low enough.

B: So you end up with a lot of small blocks, but you keep on reusing those blocks, because your mutations don't have to insert new big blocks just for a small change. So, trade-offs everywhere. I think this is a big part of our job: educating and providing tools around trade-offs.
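A back-of-the-envelope sketch of that trade-off: arity is 2^bitWidth, depth grows roughly with log base arity of the entry count, and a point mutation rewrites about one block per level, so wider nodes mean fewer but larger blocks rewritten. Rough estimates only:

```typescript
// Rough HAMT sizing: bitWidth w gives arity 2^w, a map of N entries is about
// log_arity(N) levels deep, and a single insert/update rewrites roughly one
// block per level (each block holding up to `arity` entries or child links).
function hamtEstimate(bitWidth: number, entries: number) {
  const arity = 2 ** bitWidth
  const depth = Math.max(1, Math.ceil(Math.log(entries) / Math.log(arity)))
  return { arity, depth, blocksRewrittenPerMutation: depth }
}

// bitWidth 5 (arity 32) vs bitWidth 8 (arity 256) for ten million entries:
console.log(hamtEstimate(5, 10_000_000)) // { arity: 32, depth: 5, blocksRewrittenPerMutation: 5 }
console.log(hamtEstimate(8, 10_000_000)) // { arity: 256, depth: 3, blocksRewrittenPerMutation: 3 }
```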
G: Yes, because again, the chain is big because we are trying to put a lot of messages on it; there is a limit at which you just cannot optimize things anymore, you just need separate state. So what I'm trying to say is: try to see, for a smaller set of these small blocks, what they're hanging off of, because this might just be what we designed, data-wise, and there is no...
E: So I had a quick chat with Jeremy today, and he mentioned that Raul is poking around at a potential Badger replacement, and he...

E: The point of this, and I should have thought of it, and Rod, you'll appreciate it, is that Badger uses an LSM tree, and an LSM tree really wants your keys to be ordered and to have some kind of consistency in where you're mutating them. So the fact that we're inserting random keys into this just makes the actual data structure on disk really inefficient.
A: No, no, it doesn't matter. I forgot the name of the storage engine; there was an earlier one which basically did something similar to LSM but not sorted, it was just a hash map basically. You can kind of do LSM with a hashmap kind of structure.
B: Totally, a multi-level storage system. But the LSM... I was talking to Alex about this, because Alex is poking at it too: these data structures like Badger and LevelDB and even LMDB all exist to support range queries, and they're optimized for the sorted case. The traditional LSM exists to support sorting. There are use cases where you can do it without, but these things are optimized around sorting, and there's nothing that we do

B: that requires sorting. All we care about is fast retrieval of keys, which sorting helps with, but it's not the only way to do it. So there is a lot of scope here for potentially just saying: let's just have a new data store.
G: That is, yeah, customized lookups. And, more importantly, the insertions are actually what's killing us, because for every single key, the next insertion is guaranteed to be on the other side of the index because of the hashing, yeah.

G: And to put it more in perspective: as of today, I have been running a node from the very beginning and it doesn't stay in sync anymore.
C: Sorry, the C API, right; there are synchronization costs to get over to Rust in the same way as to C, and that is fine for more expensive things, like proof validation, but for all of your storage items that's going to be pretty painful, and to...
A: Yeah, yeah. Just, who's currently poking at it? Because I want to forward a paper. Like, who's currently looking into this on the Filecoin
B: side? There are two efforts: there's Raul actively looking at, you know, possibly replacing things, and Alex is also doing some other work, I think longer term. I've already suggested that he talk to a couple of you, but I think you should definitely hit Alex up with some information. Yeah, cool, yeah.
C: Yeah, I finally got around to opening the HackMD this time. Just a couple of short things. So, thanks to Peter, last week we did a bunch of work actually getting the car data store, or car block store, to work. Speaking of new types of data stores: if you've got a car export, this will build an index.

C: It has a couple of different indexes for random-access retrieval into the car file, and then exposes the block store interface over it, so that you don't have to re-import back into Badger or something like that, but can just access it directly.
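A sketch of that idea: index once, then serve block store reads by seeking into the CAR. Parsing the CAR to produce the index is elided here, and the shape of the interface is illustrative rather than the actual library's API:

```typescript
// CAR-backed block store sketch: a CID -> (offset, length) index over a CAR
// file, with get(cid) answered by a single positioned read instead of
// re-importing everything into Badger. Building `entries` from the CAR is
// left out.
import { openSync, readSync } from 'node:fs'

interface IndexEntry { offset: number; length: number }

class CarBlockStore {
  private index = new Map<string, IndexEntry>()
  private fd: number

  constructor(carPath: string, entries: Iterable<[string, IndexEntry]>) {
    this.fd = openSync(carPath, 'r')
    for (const [cid, entry] of entries) this.index.set(cid, entry)
  }

  has(cid: string): boolean {
    return this.index.has(cid)
  }

  // Random-access retrieval: one positioned read per block.
  get(cid: string): Uint8Array | undefined {
    const entry = this.index.get(cid)
    if (entry === undefined) return undefined
    const buf = Buffer.alloc(entry.length)
    readSync(this.fd, buf, 0, entry.length, entry.offset)
    return buf
  }
}
```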
C: The next thing that's on my mind is: we've got an IPLD schema, and it would be really great to get GraphQL-style access over this same thing. So how can I do the codegen off of the existing schema to a GraphQL format of it?

C: And how can I then use the same machinery to actually do the GraphQL queries off of that? That's probably the path I'll start walking down here, because a lot of what has made our lives slow is, there are sort of two things, I suppose. One is that accessing these individual items is not so bad because, as Peter said, they're small, but full enumeration is often very expensive. So being able to selectively pull out the small things you want, and have an efficient way to path down to them, is the part that we're often missing.

C: I don't know if that's going to mean sort of custom IPLD nodes that I have to make to allow pathing without fully loading or realizing the HAMT and AMT nodes; likely that's what it means, but I'll be working on that.
E: Yep, okay, so I need to look at the thing now; I've lost it. I did do stuff last week, I promise. Here we go. Okay, oh yeah: I wrote and gave a talk at Filecoin Liftoff. It took a lot of stuff from different talks that we've given at different times and put it all together, and it went really well. Rod and I had a talk with Mikola Senko and we set up the grant for this B-tree project.

E: That was after we finished the standup. And then also, Carson Farmer has been poking at DagDB for a little while, and last week he told me that he had written a custom CRDT data structure using the new value type stuff that I have in there, with a custom replicator that understands the CRDT, so that's pretty hot. And then he also went and wrote a DagDB remote, like we have for, you know, HTTP and other stuff, but over libp2p.

E: So this is actually using the libp2p network for replicating between DagDB nodes, but it's not using Bitswap; it's actually still using the internal protocol in DagDB, which is actually a lot more efficient than what we have with GraphSync or Bitswap, because it has block link indexing. So it's really, really good for replicating around when you have different cache states.

E: So that's really cool, and it prompted me to do some other improvements to DagDB today, like making that remote setup quite a bit nicer, and some other things. We were running the git-lfs tests always, and we should really only run them in CI, because they're kind of hard to run locally. Yeah, that's where that stuff is. Next up is Chris.
H: Big old wall, a bunch of text up there, but mainly I just worked on GraphSync last week, with kind of a lot of, I guess, dead ends or false starts. One thing is, go-graphsync has this kind of message queue mechanism where it aggregates, you know, requests and responses and sends them out, and I started implementing the same thing, but it's getting super complex, and I thought about it and I think it's really not worth doing that up front.

H: So basically I kind of punted on it and ended up just creating a new stream per request, which isn't as performant, but that's probably insignificant from a requester's point of view. And then I met with Eric on selectors, because despite reading the spec multiple times, I still couldn't quite grok it, and the selector I was sending to go-ipfs

H: was not doing what I thought it was, so he walked me through it. I think this link has gone out before, but the ChainSafe guys have this example (the link is in there) which shows data with a selector and then what the selector does as a result, kind of a test fixture, which is super helpful. I think overall we need some kind of tutorial, though, because the spec is just not grokkable, probably, for a lot of people.

H: It wasn't for me. Let's see. So I moved forward with working on implementing the requester logic inside, and it's actually working end to end, where I can actually receive blocks, put them in the block store, and view them. So that's cool. And I started working on doing some basic validation, since I don't have a selector engine. I was thinking,

H: oh, I'll just do this simple depth-first block validation, but the problem with that is that, at least with go-ipfs, it only works with depth-limited recursive selectors, and only up to a depth of 100 as well.
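For reference, this is roughly the kind of selector being talked about, written out in the dag-json shorthand as I understand it from the selectors spec ("explore everything, recursively, up to depth 100"); treat the exact keys as an approximation of the spec rather than gospel:

```typescript
// R = ExploreRecursive, l = limit, :> = sequence, a = ExploreAll,
// > = next, @ = ExploreRecursiveEdge. Keys approximate the selectors spec.
const depthLimitedSelector = {
  R: {
    l: { depth: 100 },                 // the depth limit go-ipfs caps at 100
    ':>': { a: { '>': { '@': {} } } }, // at each node, explore all children and recurse
  },
}
console.log(JSON.stringify(depthLimitedSelector))
```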
H: So that means that if I have a graph that's deeper than 100 nodes, it's going to give me a partial graph, and I have no way of actually truly validating that without a selector engine, which also works at the node level, so I couldn't just get by doing block-level depth-first validation. That was kind of frustrating

H: when I figured that out. And then the other thing is I worked through error handling and API design; it just feels a little bit nasty, but you know, it's doable, it just didn't feel quite right, and I'm going to come back to that at the end here. I also discovered, and this is also weird, I don't know how I didn't know this, but js-multiformats is the latest home for CIDs in JavaScript.

H: It's not js-cid, and I didn't know that until today; I don't know how I missed that or didn't realize it. And I also found the new home for blocks; that's the other thing I didn't know was there. But I did need this prefix functionality, and I know Volker mentioned before that it was deprecated, but I still need to either create a local implementation or put one in another shared library, because GraphSync has this prefix as part of the message, so I have to be able to get at that.

H: The last thing is just discussing the API stuff with Mikeal today. He originally suggested using async generators and iterators for this, but I was just trying to get the basic stuff working. Talking through them, I think they can actually be helpful for a couple of the issues I mentioned up there.

H: One is cleaning up error handling: instead of having one kind of master object that keeps track of all your types of errors, I can distribute them throughout the various async generators and iterators in a pipeline. So where I have a whole pipeline that takes care of multiple things, I could put error handling more local to them, and it also makes it more optional, so I won't have to clutter the API with all this error handling and status stuff.

H: It should also allow me to move the validation logic out of the core library. Again, I don't have a selector engine yet; someone eventually will build one, and at that point a user library could just plug it in there to do the validation if they want to. In some cases they may not care about validation, so that makes sense. And then also the whole block creation...
H: I need some kind of queue structure to connect where I receive messages to the actual generator, because the way GraphSync works is that the requests go out on one stream and then the responses come back on another one, and so I have to connect that stream that comes back into the original object where the generator is. I'm going to look at using it-pushable, which is an async-generator queue of some type, and if anyone else has any other ideas about a queue that's async-generator friendly...
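A minimal sketch of the pushable-queue idea (the concept it-pushable provides, not its actual API): one side pushes incoming response messages as they arrive, and the other side consumes them as an async iterable where the request's generator lives:

```typescript
// Minimal pushable queue: push() from the inbound response-stream handler,
// consume with `for await` on the requester side. Concept sketch only; the
// real it-pushable package has a richer API (ending with errors, etc.).
function pushableQueue<T>() {
  const buffer: T[] = []
  let notify: (() => void) | null = null
  let done = false

  return {
    push(item: T) {
      buffer.push(item)
      notify?.()
    },
    end() {
      done = true
      notify?.()
    },
    async *[Symbol.asyncIterator](): AsyncGenerator<T> {
      while (true) {
        if (buffer.length > 0) {
          yield buffer.shift() as T
        } else if (done) {
          return
        } else {
          // Wait until the next push() or end() wakes us up.
          await new Promise<void>((resolve) => { notify = resolve })
          notify = null
        }
      }
    },
  }
}

// Usage: the inbound stream handler pushes, the requester consumes.
async function demo() {
  const responses = pushableQueue<string>()
  setTimeout(() => {
    responses.push('response block 1')
    responses.push('response block 2')
    responses.end()
  }, 10)
  for await (const msg of responses) console.log('received', msg)
}
demo()
```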
H: I didn't talk to them, but that's one of the things I have to figure out over time, how the libp2p side works, because at its core it's using async generators for streams. So the libp2p stream is not like a Node.js stream; it's an async generator of data. So yeah, they are using that, and that does have effects on the design.
E: And it's not just an async generator; they decorate it with these it-* modules that they have, and it's actually kind of difficult to use sometimes without their other iterator modules as well. So it's actually a lot trickier to figure out than you would think.
A: Cool, so the next one is Eric.
I: One of the things is I've been working on and off on making a ROT13 ADL demo, a demonstration of an advanced data layout, which is implemented in Go essentially as documentation. It does the ROT13 string transformation, so it's useless except as a demo. There's a new PR up that is continuing to nudge that along. Its purpose is documentation, so the PR might actually be the artifact, but hopefully it gets merged at some point as well.
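The shape of that demo, sketched in TypeScript rather than Go (the real one is go-ipld-prime specific): an ADL presents a synthetic node whose value is derived from a substrate node, and here the derivation is just ROT13:

```typescript
// The ADL idea in miniature: the substrate is what actually sits in the
// encoded block; the synthetic view is what callers see. Here the "layout" is
// just ROT13 over a string, which is why it is only useful as documentation.
function rot13(s: string): string {
  return s.replace(/[a-zA-Z]/g, (c) => {
    const base = c <= 'Z' ? 65 : 97
    return String.fromCharCode(((c.charCodeAt(0) - base + 13) % 26) + base)
  })
}

const substrate = 'uryyb vcyq'              // the stored (substrate) form
const synthetic = rot13(substrate)          // the view the ADL exposes
console.log(synthetic)                      // "hello ipld"
console.log(rot13(synthetic) === substrate) // true: the transform is its own inverse
```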
I: It's also got a couple of capslock review comments in it so far that raise interesting choices. Some of this is trying to figure out how codegen for internal nodes should compose with the idea of ADLs also having a synthetic high level and a substrate internal level, and if the substrate is also codegenned, does that need to be visible?

I: Or should it not be visible? Those kinds of questions. So that's kind of what's going on over there. I think Daniel is encountering probably many of the same questions in his much more encompassing and difficult work with HAMTs, and so I've been trying to also wrangle them in a smaller domain and see if that is simpler. It might not even be simpler.
I: Anyway, a bunch of codec hardening work was necessary in go-ipld-prime in this last week or so, so there has been some new stuff merged to master which does budgeting during deserialization. Some of those fixes were already made last week, so I think I might have talked about them, but now they're in master and I've added some regression tests that actually make sure they work, which is kind of important.

I: There were also some unexpected panics around malformed CIDs, and those are now fixed. A lot of this comes from fuzzing efforts done by the Least Authority team in general, and a couple of folks in particular; I've put their full names in the document, since they're not easily pronounceable. Anyway, I looked at codecs a lot more as a result of some of that, and I've made a fresh run at codec APIs, because those are also bugging me.
I: The codec APIs in go-ipld-prime were a first draft and they're kind of due for a second draft at some point. They currently work, but they leak weird abstraction details: it's very clear that they're using this other library, and some of that library's configuration structs leak through these interfaces, which is bad. So I'm taking a fresh run at those, trying to make that API less leaky and, at the same time, trying to make more reusable components, and this involved doing a fresh run at how tokenization works.

I: The idea is very similar to the current implementations, but it moves the boundary of what a token is a little bit. The big shift is basically saying we're going to treat IPLD links as a whole token themselves, opaquely. I think that actually changes the composability boundary quite a bit, and writing new codecs will probably get a lot more code reuse.
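A sketch of what "links are a whole token" means for a codec tokenizer, written in TypeScript rather than Go; the real work is in go-ipld-prime and the names here are illustrative:

```typescript
// Token-stream sketch: instead of a link surfacing as "some string/bytes the
// codec has to interpret", it is its own opaque token kind. A shared
// tokenizer/serializer core can then be reused by dag-json and dag-cbor, which
// only differ in how each token kind is read from or written to the wire.
type Token =
  | { kind: 'MapOpen' } | { kind: 'MapClose' }
  | { kind: 'ListOpen' } | { kind: 'ListClose' }
  | { kind: 'Null' }
  | { kind: 'Bool'; value: boolean }
  | { kind: 'Int'; value: bigint }
  | { kind: 'Float'; value: number }
  | { kind: 'String'; value: string }
  | { kind: 'Bytes'; value: Uint8Array }
  | { kind: 'Link'; cid: string } // the whole link, opaquely, as one token

// A codec-agnostic consumer can handle links without knowing whether the wire
// form was a CBOR tag 42 or a dag-json {"/": "..."} map.
function* linksOf(tokens: Iterable<Token>): Generator<string> {
  for (const t of tokens) if (t.kind === 'Link') yield t.cid
}

const example: Token[] = [
  { kind: 'MapOpen' },
  { kind: 'String', value: 'parent' },
  { kind: 'Link', cid: 'bafyexample' },
  { kind: 'MapClose' },
]
console.log([...linksOf(example)]) // [ 'bafyexample' ]
```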
I: I hope. We'll see; it's not done yet, but I think it should be possible to implement dag-json and dag-cbor with a lot more actual shared code, whereas currently they inspire each other very closely, but there are enough divergences that they are textually quite forked, and maintaining this really sucks. I was reminded how much it sucks while implementing the budgeting fixes, which then had to be done twice.

I: I would rather not, so hopefully this new token system makes more shared code possible there in the future, but we'll see. If somebody wants to review this, it's an interesting topic and it's at a point where you can. I'm not sure when I'm going to push the ball further on this, because the next couple of steps would be porting a lot more codec code, and I'm just not sure whether that's a priority right now, so it might sit for a while. I talked with Chris about selector stuff
I: just now. We've got selectors as implemented by libraries, and most of the specs are talking about that behavior as libraries, and a lot of this involves the concept of either iterating over things or having a callback-style experience (it doesn't matter which), but they step across things at the node level. We've also got IPFS having APIs for this stuff, and those operate on a totally different semantic level: they expose quite a narrow amount of what selectors do, they're basically very block oriented, and they give you a bunch of CIDs back. That's a useful API for some purposes, but it's way narrower than the general power level of selectors, and so we should probably add explicit documentation about that.

I: We've talked a lot more about docs and specs with a lot of folks in the last week; we've had a couple of additional meetings to figure out stuff around that. One of the things that Volker brought up was stuff around the vocabulary we use around multiformats versus multicodecs and what the issues are there.
I: If you use the sequence-of-eight-bit-bytes definition, and "text" is how I'll refer to the "we actually check Unicode" thing, then the document explores both of them at the same time, just as different columns in this little table of things we could do, and then the far left side of the table, so the feature of each row, is how we define what map keys are and how traversals work implicitly. This comes out to be a really big table.
I: There are a couple of things that are plausible. There are a couple that are less plausible. Some of the reasons are to do with codecs. Some of the reasons are to do with how difficult they would be for libraries to implement. Some of them are how it will close off future codecs. Some of them are about how it would struggle to be retrofittable to existing codecs.
I: I cannot read the entire document out loud again, so I'll just stop now and say: such a document exists, please enjoy it if you are so inclined. And that will be the end of my update, other than to say: holy crap, Rod's work this week is ridiculously cool and he should post more screenshots of it, because it's so cool. I'm done.
B: The string thing is a problem: we have to find a limiting function for how deeply we go with strings. The risk we have with this one, and I know it's important for you, Volker, for your implementation, but we have a risk here of going into that space of designing a system so perfect that it's not usable.
A: It's just interesting that basically we all share the same concerns, but we differ on what the outcome means. We all want an easy-to-implement system, and we have different ideas about what that means, but it's cool that we share the values. So that's fine.
A: All right, is this everything? Then I'll close the meeting. Anything else? No?
E: Yeah, so I talked to a couple of people about this already, but not everybody's had a chance. So we asked Mikola in this grant to write us a B-tree, and we assumed that we'd need to do rebalances and then it wouldn't be fully hash-consistent,

E: because those are the constraints that we'd decided were reasonable for sorted structures, since we didn't have a way to not do that. And Mikola figured out a technique, and we're waiting on the implementation now to make sure that it works, but the technique seems really sound when you think about it, and I'm kind of kicking myself for not having seen it before. So, real quick: if you're familiar with how the HAMT works, we hash the keys, and that puts them in basically a random address space.
E: So what Mikola figured out was that if you take a rolling fingerprint algorithm, like Rabin, or probably something better, and you start to tweak the settings, the settings basically give you a target size, by some definition of size: they make these things more or less likely to end up at a particular chunk size.

E: If you throw random data into it, you literally just get that behavior. It's not going to find sort of critical common boundaries like you do in text; it's actually just going to give you chunks around the size that you were asking for, depending on the randomness. You just have to make sure you feed it random data.

E: So what Mikola decided to do, and you can do this with any sorted structure, but take the B-tree for instance: you have these key/value pairs, and if you make all of the values links to blocks, then there's a hash digest that you can use as a random value, instead of having a static setting for what the bucket size should be.
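A sketch of the boundary rule as I understand it from this description: hash each entry's value link and close a node whenever the digest falls below a threshold, so the split points depend only on the (sorted) entries themselves, never on insertion order. The algorithm and threshold are exactly the "settings" being discussed, and this is not Mikola's actual implementation:

```typescript
// Deterministic node boundaries for a sorted tree: each entry's value is a
// link (hash digest), and that digest doubles as the per-entry randomness.
// Closing a node whenever the digest is below a threshold gives an expected
// node size of about 1/probability, independent of insertion order.
import { createHash } from 'node:crypto'

interface Entry { key: string; valueCid: string }

// Probability that an entry closes a node; ~1/32 gives ~32-entry nodes.
const BOUNDARY = Math.floor(0xffffffff / 32)

function isBoundary(e: Entry): boolean {
  // Stand-in for "use the value link's digest": hash the CID string and look
  // at its first four bytes.
  const digest = createHash('sha256').update(e.valueCid).digest()
  return digest.readUInt32BE(0) < BOUNDARY
}

function chunkIntoNodes(sortedEntries: Entry[]): Entry[][] {
  const nodes: Entry[][] = []
  let current: Entry[] = []
  for (const entry of sortedEntries) {
    current.push(entry)
    if (isBoundary(entry)) { // the same entry always closes a node
      nodes.push(current)
      current = []
    }
  }
  if (current.length > 0) nodes.push(current)
  return nodes
}

// Because boundaries are a pure function of the entries, building from scratch
// or arriving at the same entry set through mutations yields the same leaf
// nodes, and therefore the same hashes all the way up the tree.
const entries: Entry[] = Array.from({ length: 100 }, (_, i) => ({
  key: `k${String(i).padStart(3, '0')}`,
  valueCid: `bafyvalue${i}`,
}))
console.log(chunkIntoNodes(entries).map((node) => node.length))
```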
E: So it's a little bit more expensive than a traditional sort of B-tree change, because it's self-balancing as it works, but it's also fully deterministic. Regardless of the insertion order, you'll get a consistent hash for the entire structure no matter how you mutate it, and it's going to be relatively balanced depending on your settings. The only question is what algorithm you use for the fingerprinting and what the settings are,

E: because those are going to give you dramatically different performance characteristics for the structure. But that's where all of Peter's work comes in, having this whole setup where we go and throw a bunch of data at this problem: take all this random data and then tell us what these structures look like, so that we can figure out what the ideal one looks like.

E: So we're in a really good position to use this technique. Yeah, we're kind of waiting on the implementation and stuff, but this gives us a whole new area of data structure development and research, because we had thought that we wouldn't actually have a way, generically, to just build sorted structures that were fully hash-consistent. So you couldn't use them in Filecoin, you couldn't use them in any place where the insertion order was really going to matter, and now we can do that.