From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2020-09-28
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A: Welcome everyone to this week's IPLD sync meeting. It's September 28, 2020, and as every week, we go over the things we've done in the last week and plan to do, and discuss any other items that someone might have. So I'll start with myself again, with news from the Rust side of things.

A: The major rust-multihash change that I've been talking about in the past few weeks finally got merged. It was quite a big refactor — in the end it's back to roughly what it was after the first refactor. If you now want to integrate it into the current rust-cid, it's really nice, because only really small changes are needed, which I think is a good sign that I got things right.
A: This brings us a bit closer to merging upstream into rust-multihash. What I worked on today: although this refactor landed, everything is now stack-allocated, and if you define your own code table with hashes, you need to define how big that stack allocation actually is. We decided to make it explicit.

A: So you really have to set some value, but it could still be that you use a hasher which produces a bigger digest, and then your code will just panic. The plan is to have better error reporting at compile time, saying: hey, one of your hashes would want to allocate more than you have specified, so you might want to increase it.
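To illustrate the check being described, here is a minimal sketch in TypeScript — the real work is Rust's derive-based code table with a const allocation size, so every name below is invented for illustration:

```typescript
// Sketch: a code table declares an explicit allocation size up
// front, and registering a hasher whose digest would not fit is
// reported loudly instead of panicking later at hashing time.
interface Hasher {
  code: number                        // multicodec code for the hash
  digestSize: number                  // bytes this hasher produces
  digest(data: Uint8Array): Uint8Array
}

function makeCodeTable(allocSize: number, hashers: Hasher[]): Map<number, Hasher> {
  const table = new Map<number, Hasher>()
  for (const h of hashers) {
    // rust-multihash aims to surface this at compile time; the
    // runtime equivalent of that check looks like this:
    if (h.digestSize > allocSize) {
      throw new Error(
        `hasher 0x${h.code.toString(16)} needs ${h.digestSize} bytes, ` +
        `but the table only allocates ${allocSize}; increase the allocation size`
      )
    }
    table.set(h.code, h)
  }
  return table
}
```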
A: Then everything will be fine, and people won't even notice, because it's only a change at the syntax and type level. At least it should smooth things out for people implementing things. The other thing that needs to happen before we can merge into upstream rust-multihash is the support work that finally got kicked off by a community member — I'm only doing the reviews, but it's slowly getting there.

A: Other than that, next I'll look into some awesome stuff; I'll get into more detail about it next week, I guess, when I know more. Next on my list is Daniel.
B: Cool. So I continued with the HAMT. I spent quite a bit of time updating the schema and the code generation to match Rod's schema in the HashMap document, and that's done now. We also discussed a bunch of ways we could simplify the schema — mostly things Rod already had in mind — which would be nice to finish up before I write a lot more code, and getting some agreement on those was also nice.

B: I also got nerd-sniped into looking into a JSON panic that Lotus hit a week or so ago. I was very confused by it; I started looking into it around Wednesday last week. Initially I thought it was just a JSON package bug, but I was digging more into it today, and it's gotten to the point where I think there's a language bug somewhere, because methods are not being promoted consistently. That's pretty scary, because the Lotus project is hiding one method and replacing it with another method — and the replacement is just not working as expected. So I filed a spec bug, if anyone wants to look at that. I also helped with the reviews on go-merkledag, so I think I follow most of what Rod is doing there.
C (Eric): I was kind of all over the place this week, answering a lot of questions. I'm building up a backlog of things I've now discovered we need: a little more API documentation, on top of our usual need for docs, and a little review of the dag-pb stuff that Rod's working on — I'm really glad he's working on that.

C: This is because we want these things to be resistant to denial-of-service attacks, even in the case of fairly maliciously formed inputs. So we started getting some fuzzing in on things, and it's uncovering places where we need to sanity-check more values, make sure allocations are reasonably bounded, and things like that. I've started some PRs that keep running budgets, and that's pretty exciting.
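As a sketch of the running-budget idea (the actual PRs are against the Go codebases; the names below are illustrative only):

```typescript
// Charge an allocation budget *before* materializing anything, so
// a malicious length prefix fails fast rather than triggering a
// huge allocation.
class AllocBudget {
  constructor(private remaining: number) {}

  charge(bytes: number): void {
    if (bytes > this.remaining) {
      throw new Error(`allocation budget exceeded: need ${bytes}, have ${this.remaining}`)
    }
    this.remaining -= bytes
  }
}

function readBytes(buf: Uint8Array, offset: number, len: number, budget: AllocBudget): Uint8Array {
  budget.charge(len)  // sanity-check the claimed length first
  if (offset + len > buf.length) throw new Error('truncated input')
  return buf.slice(offset, offset + len)
}
```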
C: Getting those things totally consistent is interesting; at this point there's some accounting that's reasonably consistent, and some other things that are willing to let you run ahead of the budget by a fixed amount. I think this will end up okay in total — it could be more pure, but it should do.

C: I've also been doing a little work this week with Will on describing some more of the Filecoin systems using IPLD and IPLD Schemas via his statediff project, and that is being really darn exciting. I don't know if he'll want to chime in with more on this, but basically it's generating a lot of experience points, and it will probably continue to do so.

C: We're finding that we can describe a lot of Filecoin structures with IPLD Schemas — unsurprisingly, because a lot of them were originally drafted in the same headspace as the schema language, so it stands to reason.
C: A couple of things have evolved in interesting ways that are a little bit trickier to describe, and a couple of things fall into corners of the schema codegen system that I haven't implemented yet, because they're the less frequently used features — so I'm getting lots of to-do lists out of it. We've also had some really interesting reflections on how developers might want to make custom codecs.

C: There are a couple of interesting little spiny bits in Filecoin where the serialization — the binary protocol — has a certain specified structure that's going to be encoded in DAG-CBOR; all the hashing for the trees is DAG-CBOR.
C: So that's clearly the way the thing is serialized. But part of the goal of this statediff project I was working on is to have human-readable presentations of these things, so that we can create visual diffs. Most of the time it's perfectly reasonable to just swap the codec — the multicodec; that's kind of the point of them. You just stop using DAG-CBOR and start using DAG-JSON instead, and then you get reasonably diffable stuff.
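With the new JS stack, that swap is just decoding with one codec and re-encoding with the other — a sketch, assuming the @ipld/dag-cbor and @ipld/dag-json packages, not an excerpt from statediff:

```typescript
import * as dagCbor from '@ipld/dag-cbor'
import * as dagJson from '@ipld/dag-json'

// Same data model node, two codecs: decode the canonical DAG-CBOR
// bytes, then re-encode as DAG-JSON for a readable, diffable view.
function cborToDiffableJson(cborBytes: Uint8Array): string {
  const node = dagCbor.decode(cborBytes)
  return new TextDecoder().decode(dagJson.encode(node))
}
```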
C: There are a couple of cases where the Filecoin project had already made some other choices, where some structures are supposed to be turned into JSON in a custom way — and this is not generally how we would suggest a project do it, if it's doing new work in the IPLD universe, because it cuts both ways.

D: Which is: you've got things that have a string representation — which is what you would use, for instance, because they're used as map keys everywhere, and in JSON you need strings as your map keys, so that's what you would expect. But in CBOR you can use bytes, a much more compact representation — and so they do. So are we going to say that this is bytes, based on the CBOR encoding, which is our canonical encoding? But then if I want JSON, I probably want the string, and not non-printable Unicode map keys.
C: Yeah — I was also thinking about the weird shenanigans going on with the address field, which seemed to require custom handling; and we're also looking at the byte-keys-in-maps situation again.

C: Yep, and figuring out how to pretty-print those is, like, oof — it's just unclear what should be done there, and the usual "just replace the CBOR codec with the JSON codec" doesn't quite feel great. It works, but it doesn't… anyway. We'll have more to report on that in the future, I hope. That's all I'll say about that.
C: I wrote a new document recently about map ordering — a piece of our docs and specifications for the whole project that we really need to nail down. I've launched it as a gist, so there's a link in the group chat here, and the document will be posted to the team repo later.

C: That's almost it. I'm supporting Daniel a little bit on the HAMT stuff, and I've opened a new issue writing up the remaining roadmap to getting self-hosting codegen: doing codegen of the types that describe schemas, so that we can use those types to drive codegen.
E: Excuse me — oh man, I'll be glad when this moves by an hour; I think that's next week, when daylight saving ends for me. Okay: I said last week I was going to escape dag-pb land, and I completely did not escape dag-pb land. That's pretty much all I did last week, and I was deep in it. I feel like I have mastered it, but I won't be making any commitments, and I'm not going to escape it this week either, because there could be some tidying up to do.

E: I went deeper into the decoding and identified three separate ways that you can vary the bytes and still get valid dag-pb — you can represent the same data in different forms, and combined these could give you infinite variation, because of the way they stack up. This is because protobuf is really designed for protocols; it's not designed for content addressing like everything else we do. (This connects to the stuff Michael's been talking about too.)
E: It's just a convenient way of representing things in binary, not a way of doing it consistently or canonically. So in protobuf, the decoders will accept out-of-order fields: even though the protobuf schema tells you the order, they just accept whatever order the fields come in and build the result up however you like. They'll also accept repeated fields — if a decoder receives the same field twice with different bytes, it just says "oh, okay" and overwrites what it had before, without complaining; it's completely silent. And they'll accept extraneous bytes that are valid protobuf but not valid for the schema you've presented — they just skip over them, as in: "this looks like protocol buffers, but I don't know what it is, and I'm going to ignore it."

E: There is a nice mechanism in the Go protobuf codegen and related tooling for collecting that extraneous stuff and attaching it to part of the structure, so you can validate it — in Go, at least.
E: So I implemented the super-strict decoder for JS, and it's in the js-dag-pb library. I'm still interested to see what it does with real-world data out there, because there's a lot of dag-pb in the wild.

E: But the encoders are all very good — they do what they're supposed to; it's just the decoders that are flexible. So unless there's a particularly bad encoder out there in the wild somewhere, this should be fine. What this implements is a one-to-one mapping:
E: this is how it has to be, on the way in and on the way out. I also revisited the spec PR again; I think it's ready for landing as of yesterday. I wouldn't mind a couple more pairs of eyes, but otherwise, if I don't get any more reviews, I'm going to land it. I put in some more clarity about the schema and the constraints, and after Eric's input — which I thought was quite good — the Links field is now not optional.

E: Making it optional introduces another state in the cardinality that you have to deal with, and there are ergonomics there that can be annoying as a programmer, because then you, as the person interacting with this object, need to do the check: does this thing exist?

E: Okay, then I'll look inside it. But if you make it not optional — always there — then you can take for granted that it exists, and say: I don't need to check for this to exist; if this codec is implemented correctly, it will always be there. So that's in the spec now, and it's in the codec — the JS codec, anyway.
E: So when you construct one of these things, you must always have that array, and when they come out of binary form and decode, it's always there — it's just always there. The `prepare` method in JavaScript is there to help you with that as well, so you don't have to write these Links arrays out manually if you have a use case where that doesn't make sense — which is a lot of places; making an empty array is not something you normally do.
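In js-dag-pb that looks roughly like this (a sketch against the @ipld/dag-pb API as described here):

```typescript
import { prepare, encode, decode } from '@ipld/dag-pb'

// `prepare` coerces loose input into the strict form, including
// the mandatory (possibly empty) Links array:
const node = prepare({ Data: new TextEncoder().encode('hello') })
console.log(node.Links)     // => [] even though we never set it

// Strict encode plus strict decode means Links is always present
// on the way out too — no existence check needed:
const decoded = decode(encode(node))
console.log(decoded.Links)  // => []
```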
E: And then I put some additional notes in the spec around strictness of decoding. These things that I've detailed above — I wrote them into the spec, saying that these are SHOULDs; they're not MUSTs. They're not MUSTs because this is not how you normally deal with protobuf: you don't normally write your own decoder. But if you're in a position to do so, then you should make sure it strictly does these things.

E: We've got that in JavaScript now, and I did a Go version as well: there's a go-dag-pb repo in my GitHub that mirrors the JavaScript one. It has all the strictness in the decoding, it does everything cleanly in the encoder, and it also tries to deal with the data-model representation side as well, where the thing you give it must be of the right shape — this is just the low-level portion of it.
E: It's like a drop-in replacement for go-merkledag — at least for the protobuf portion of go-merkledag. So it could drop into go-ipld-prime-proto, or we could make a new go-ipld-prime-proto that has this stuff — and I'm interested in that.

E: It's still fuzzy to me what the builder interface is when you have a custom codec in ipld-prime, and Hannah never fleshed that out with go-ipld-prime-proto, because that use case was more about pulling the data out so you could use selectors over it. So I don't really know what it looks like when you've got a codec that is very strict about the forms it accepts.

E: What your builders end up looking like, how you expose assemblers from that thing — that is completely fuzzy to me, and I wouldn't mind understanding it, because the same thing comes up with all the other codecs that we haven't implemented yet, like Bitcoin and Git and all the rest, where the shapes have to be particular. You can't just freeform-build these dynamic things; you can only build them in this very narrow way.
E: What does that look like in ipld-prime? I would love to know — not necessarily because I want to implement something, but I keep running into it when I look at this code. Maybe I should try to implement it, but I'm not going to hold myself to that; it could be somewhere I go this week, just to have a look. The other thing was: I gave a talk at the end of the week —

E: at Speakeasy.js, titled "Content Addressed Data Structures" — and I decided to back right up to the beginning, because there have been a bunch of talks, including Michael's at Speakeasy.js, that went straight into content addressability. Even the folks doing Hypercore and —
E: — the Beaker browser: they just jump straight into content addressability and data structures, "let's talk about B-trees", "here are distributed data structures". I feel sorry for the audience, because mostly this stuff is not that well socialized in our industry. We're still a little bit fringe, even though it's straightforward once you delve into it.

E: So I'm not super happy with how the talk actually went, mainly because I was really tired, but I'm going to extract the talk from that stream and put it up as a separate video, and I have the slides, if anyone wants to take them, use them, and improve on them. I wouldn't mind trying to improve on it myself if I get opportunities to iterate on it, but I thought the sequence worked out quite well. And that's me.
A: Thanks. Next is Chris.
F: Hey — that's interesting, Rod; I'm glad you posted that video. I wanted to come listen to your presentation, but I couldn't make it at that time. I guess I'll lead with my last item, since it follows on from yours: I've been evangelizing the heck out of IPLD and content-addressable storage in the healthcare and medical-imaging community, and it's tough — I mean, you're right.

F: This is still a very fringe thing, and my challenge is trying to figure out how to communicate it in a way that leads people along, because there's no way you can go in for an hour, do a brain dump on someone, and have them just get it — unless they're super-genius material. It just takes time. But I am making good progress, and just so you guys know, if you're curious —
F: — I've got some really strong interest from people at Microsoft and at other major health systems like Kaiser Permanente. I don't know if you know who they are over in Europe, but these are some pretty big players who think this is a good idea. It'll take a while, but I have been doing that, and it's been a good success so far.

F: In terms of my day job: I did finally begin working again last week, on a Go/JS crossing spike, and it was frustratingly slow progress — just trying to figure out the libp2p stuff. It was really hindered by my lack of Go fluency: the libraries are all there,
F: and I can read Go code, but I can't necessarily find the code I'm looking for amongst everything else. But at the end of last week I did actually successfully get a JavaScript requester connected to go-ipfs graphsync, and the responder connecting to a go-ipfs graphsync test app, and then the two talking to each other. So at least I have the basic libp2p connection things working. And for those wondering why that was so difficult — because it really shouldn't have been:

F: libp2p decided to change the default encryption provider from SecIO to Noise, and I just didn't know this, so I was getting all sorts of weird encryption errors I wasn't expecting. One side was using an older version with the original SecIO default, while my code was using the newer one with the Noise default, or something like that. And then there was also an undocumented thing in the graphsync spec about framing of the actual messages.
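For anyone who hits the same mismatch, pinning the connection encryption explicitly avoids relying on the changed default — a sketch using the 2020-era js-libp2p module names:

```typescript
import Libp2p from 'libp2p'
import TCP from 'libp2p-tcp'
import MPLEX from 'libp2p-mplex'
import { NOISE } from 'libp2p-noise'

// Be explicit about connection encryption so both peers agree,
// instead of depending on a default that moved from SecIO to
// Noise between releases.
async function createNode() {
  return Libp2p.create({
    modules: {
      transport: [TCP],
      streamMuxer: [MPLEX],
      connEncryption: [NOISE]
    }
  })
}
```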
F: So apparently it prefixes each message with its size as a varint in the stream. I found an issue about that, and I'm going to update it — or I'll update the spec repo with this and the other changes I find. I think the graphsync spec could use a little bit of love as it's written right now, so I'll do — I won't say an overhaul, but a bigger update later on. I'm currently working through it, so I'm not quite out of the weeds on the libp2p side right now.
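The framing itself is simple once you know it's there: each encoded message on the stream is preceded by its byte length as an unsigned varint. A sketch using the varint npm package (not spec text):

```typescript
import varint from 'varint'

// Length-prefix framing: prepend each encoded graphsync message
// with its size as an unsigned varint.
function frameMessage(message: Uint8Array): Uint8Array {
  const prefix = Uint8Array.from(varint.encode(message.length))
  const framed = new Uint8Array(prefix.length + message.length)
  framed.set(prefix, 0)
  framed.set(message, prefix.length)
  return framed
}
```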
F: There's some kind of weird streaming issue with it-pipe and family that I spoke with Mikeal about, and I have a new strategy to deal with it. I'm also planning to use your new dag-pb library, Rod — so just know that's coming; I'm sure everything will just work first time. I'm hoping to get the spike done early this week, tomorrow or maybe Wednesday, and then I'm going to start refactoring that code into a library and do some kind of quick integration with js-ipfs, to allow the browser-extension guys to go forward, and then finish the rest of the functionality after that. So that is my update.
G: Yeah — so, the new docs site is up, and the main website URL is now redirecting to it, so that's all done. I worked with Gozala to land his big js-multiformats refactor, and that actually went pretty well.

G: The API is really nice — it looks really nice. The implementation is not how I would have done it, but I think it's a lot nicer for TypeScript people, so, whatever — it works, and people like that.
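For reference, the refactored API looks roughly like this (a sketch against the new multiformats package as discussed here):

```typescript
import { CID } from 'multiformats/cid'
import { sha256 } from 'multiformats/hashes/sha2'
import * as dagCbor from '@ipld/dag-cbor'

// Codecs and hashers are plain importable objects, and a CID is
// assembled from (version, codec code, multihash digest):
async function example(): Promise<CID> {
  const bytes = dagCbor.encode({ hello: 'world' })
  const hash = await sha256.digest(bytes)
  return CID.create(1, dagCbor.code, hash)
}
```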
G
It's
just
the
the
situation
with
formats
like
our
codecs
and
the
formats
in
general
conversations
with
rod
and
a
few
other
people
last
week,
particularly
the
one
with
raul
in
lit
p2p.
They
just
want
to
send
some
messages,
but
they
are
signing
those
messages
and
they
may
hash
them
at
some
point,
and
so,
even
though
they're
not
putting
links
in
them
and
they
don't
need
dag
features
like
ipld,
usually
provides,
they
do
have
they're
going
to
end
up
with
a
lot
of
the
same
kind
of
hash.
G
Consistency
concerns
that
we
have
and
and
trying
to
advocate,
like
what
should
they
do.
G
It
was
very
clear
that,
like
we
don't
have
the
thing
they
would
want
to
use
and
we
need
like
a
much
simpler
format
for
them
to
use
for
some
simple
case
like
this,
and
so
here's,
my
thinking
and
I'll
I'll,
just
kind
of
bring
it
to
some
people,
and
they
can
tell
me
what
they
think,
but
as
we
look
at
how
these
are
actually
being
implemented
and
adopted,
the
way
that
we
adopt
and
use
this
technology
is
just
not
the
way
that
most
consumers
use
it.
G
G: We live in this world where we have to create the stack where all of these different choices are options, and we're getting to the point where we have really nice abstractions for working with all this stuff. But in practice, most applications work with one type of data, with all these choices as constants, not as variables, and it's just significantly simpler for them to implement the protocols inline.

G: And if you look at a lot of our protocols, they're really nice, and they're getting adopted this way because they're so simple — we've actually learned not to worry about people getting this right.
G
Doing
this
right,
like
a
lot
of
people,
you
know
produce
multi-format
stuff
and
they
just
sort
of
you
know
they
just
stick
buffers
together
and
stick
binary
bits
together
and
prefix
things
and
like
hey,
it
works,
and
it's
really
obvious
and
like
we're
not
worried
about
them,
messing
that
up
or
writing
bad
data
and,
as
we
see
the
codecs
get
adopted
like
if
you
look
through
filecoin
and
like
how
filepoint
is
adopting
it
and
like
we
just
heard
about
a
bunch
of
extra
things
and
filecoin
a
bunch
of
extra
things
in
bagpv
that,
like
you
know
you
you
can
do
and
that
people
are
doing
in
the
wild
where
they
have
these
implementations,
where
they're
moving
outside
of
the
data
model
they're
not
like
they're
they're,
not
picking
up
all
the
hash
guarantees
that
we
have
in
dag
c
board,
they're,
not
really
producing
day
support
data.
G
Frankly,
that's
just
like
not
what's
happening
with
multi-formats,
and
it's
not
how
you
would
really
want
to
design
a
resilient
protocol
if
you
kind
of
unwind
everything
we've
been
coming
at
this
from
the
point
of
view
of
like
it
would
be
better
to
build
on
an
existing
format
and
then
not
really
inspecting
why
that
might
be,
and
normally
the
reason
why
you
want
to
work
with
an
existing
format.
G
Is
that
there's
a
bunch
of
libraries
already
and
a
bunch
of
people
already
know
about
it
and
there's
a
bunch
of
mind
share,
and
so
you
get
to
bootstrap
on
top
of
that
ecosystem
and
not
write
a
new
format,
but
since,
since
our
number
one
concern
are
these
hash
consistent
round
trips
that
jumps
to
the
top
of
our
stack?
And
we
don't
want
to
make
any
other
trade-offs
for
that,
and
so
the
way
that
we
implement
that
on
an
existing
format,
is
a
library.
G
So
we're
not
really
writing
a
protocol
like
I,
I
really
don't
feel
like
the
way
that
it's
being
adapted
and
the
way
that
we're
asking
people
to
adopt
like
doug
seabor
is
a
protocol.
It's
like
a
library
like
we
wrote
a
bunch
of
libraries
that
we
need
people
to
use
because
it's
really
easy
to
this
up,
and
so,
like
don't
write
it
yourself,
don't
write
a
partial
implementation
of
just
the
things
that
you
need
like.
G
If
you
need
to
serialize
just
a
list
of
integers,
I'm
sorry,
you
need
to
take
this
whole
library
to
do
that,
and
that's
not
going
to
be
resilient
to
the
ecosystem
that
we're
trying
to
build
like.
We
should
expect
that
people
are
going
to
keep
doing
what
they've
been
doing
so
far
and
doing
partial
implementations
of
oneness
implementations
and
what
we
probably
need
actually
is
a
format
that's
resilient
to
that,
not
one
where
there's
an
existing
ecosystem
that
we
thought
would
be
a
benefit.
G
But
it's
actually
our
number
one
problem,
because
when
people
just
grab
seymour
libraries
and
do
things
with
them,
they
tend
to
write
bad
data
because
we
literally
need
them
to
not
do
that
so
anyway,
in
an
afternoon,
I
wrote
a
really
simple
format
and
I'm
glad
that
it
didn't
take
very
long
because
it
should
be
really
simple
to
implement,
because
then
that
means
that
people
won't
it
up,
and
it
looks
a
lot
like
multi-formats
for
the
ipld
data
model.
G
There's
just
sort
of
you
know
typing
prefixes
in
front
of
data,
and
that's
just
you
know
it's
just
turtles
all
the
way
down
in
terms
of
the
collections
and
I've
been
working
with
with
folker
now
on
on
what
to
do
about
floats.
There's
still
like
a
lot
of
questions
there
and
there's
going
to
be
some
more
documents
and
stuff
and
we'll
work
that
out.
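To give a flavor of that shape — and only that; the draft wasn't linked in this meeting, so the type codes and layout below are invented for illustration, and floats are deliberately left out since they were still open:

```typescript
import varint from 'varint'

// Invented type prefixes, purely to illustrate the
// "prefix in front of data, turtles all the way down" idea.
const T = { NULL: 0, INT: 1, STRING: 2, LIST: 3 } as const

type Value = null | number | string | Value[]

function encodeValue(v: Value): Uint8Array {
  if (v === null) return Uint8Array.from([T.NULL])
  if (typeof v === 'number' && Number.isInteger(v) && v >= 0) {
    return concat([T.INT], varint.encode(v))
  }
  if (typeof v === 'string') {
    const body = new TextEncoder().encode(v)
    return concat([T.STRING], varint.encode(body.length), body)
  }
  if (Array.isArray(v)) {
    const items = v.map(encodeValue)  // recurse: turtles all the way down
    return concat([T.LIST], varint.encode(items.length), ...items)
  }
  throw new Error('not covered by this sketch')
}

function concat(...parts: (number[] | Uint8Array)[]): Uint8Array {
  const arrays = parts.map((p) => (p instanceof Uint8Array ? p : Uint8Array.from(p)))
  const out = new Uint8Array(arrays.reduce((n, a) => n + a.length, 0))
  let offset = 0
  for (const a of arrays) { out.set(a, offset); offset += a.length }
  return out
}
```

The point of keeping the encoding this dumb is exactly what's argued above: someone who only needs a list of integers can implement the INT and LIST branches and nothing else, and still produce bytes that a full implementation hashes identically.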
G
But
one
of
the
things
here
is
that
I
want
to
get
the
implementation
fully
tested
in
javascript
and
and
get
folker
happy
with
it
and
what
the
rust
implementation
might
look
like,
and
then
I
want
everyone
else
to
look
at
it
and
consider
implementing
it.
Just
so.
We
can
make
sure
that
it
actually
is
really
easy
to
implement
that
it
is
really
resist
resilient
to
implement,
especially
partially
implement,
so
that
we
can
still
design
to
that
and
what
I
really
want
to
have
is
just
like.
G
I
wanted
to
be
able
to
point
in
a
dag
format
and
say
if
you're
doing
a
simple
encoding,
here's
here's
what
you
should
use
and
if
you're
just
serializing
a
list
of
integers,
you
can
just
serialize
a
list
of
integers
and
just
implement
that,
and
it's
really
it's
a
really
simple
path
to
write
that
and
you're
not
going
to
mess
it
up,
and
I
think
that
that's
going
to
just
get
us
out
of
a
lot
of
the
problems
that
we're
having
on
top
of
our
existing
formats,
especially
since
in
this
in
this
format,
in
particular,
we're
kind
of
trading
a
little
bit
of
the
compactness
of
cbor
for
just
ease
of
implementation
and
resiliency
to
different
implementations
and
partial
implementations.
G
In
particular.
You
know,
z,
z,
like
you
know,
I
love
it
and
I'm
gonna
keep
working
on
it
for
probably
a
number
of
years
before
it'll
be
usable,
but
it
is
like
not
resilient
to
partial
implementations.
G
You
know,
there's
a
million
little
compression
algorithms
that
you
have
to
implement,
but
yeah
I
mean
this
is
like
kind
of
a
big
departure
from
where
we've
been
going,
which
is
like
dead
sea
board,
never
see
more
new
subor
and
to
be
clear,
like
we
can't
get
rid
of
it
like
the
file
coin
chain
is
in
it
in
itlb.
G
This
is
the
the
right
path,
and
this
is
making
the
trade-offs
that
we
think
are
like
actually,
the
right
ones
and
for
some
of
the
stuff
like
folker's
building
around
the
wason
read-only
codex,
like
he
needs
an
intermediary
binary
format
to
move
the
data
model
back
and
forth
and
implementing
all
of
dead
seabor
is
probably
like
much
less
attractive
for
that
interchange
format
than
something
like
this,
where
it
would
be
a
lot
easier
to
implement
in
different
host
languages
for
these
readable
awesome
products,
so
yeah
yeah.
G
If
people
have
a
chance,
they
can
check
out
the
spec
or
just
wait
until
folker,
and
I
finish
up
the
floats
stuff
but
yeah.
I
mean
thoughts
and
comments
and
any.
H
So
I've
been
thinking
about
this
quite
a
bit
with
my
work
in
that
did
specification
and
in
particular
like
the
challenges
of
converting
from
seaboard
to
json
and
not
just
dagsebor
but
cbor.
And
how
do
you
just,
I
think,
numbers
and
date?
Timestamp
is
actually
like
really
hard
to
actually
to
to
do,
and
so
I've
recently
actually
for
this.
I
have
a
deliverable
for
tonight
to
actually
to
get
like
I
mentioned
last
week
to
get
the
seabor
specification
for
dids
into
the
specification.
H
Otherwise
it's
being
threatened
to
pull
it
and
I'm
actually
using
cddl,
not
sure
if
you
guys
are
familiar
with
concise
data
definition,
language
as-
and
it
also
is.
H
Basically
it
really
is
an
abstract
data
model,
and
it
really
is
like
more
of
an
interface
data
language
than
it
is
a
library
and
where
you
explicitly
state
the
the
abstract
structure
of
the
data
and
as
either
objects
or
maps
or
numbers
or
floats,
and
also
in
in
seaboard,
there's
actually
quite
a
bit
of
tags
to
let
you
know
like
what
encoding
this
should
transfer
into
if
it's
basically
binary,
but
it's
prepended
with
a
tag
21
or
23.
H
It's
supposed
to
basically
convert
into
adjacent
objects
being
hexadecimal,
and,
and
so
I
feel,
your
pain
and
I
think
it's
being,
but
I'm
really
liking
cdl
as
being
explicitly
as
possible
in
order
to
make
sure
the
the
ease
of
conversion
between
cyborg
and
jason
is
backwards.
Compatible.
It's
completely
round
tripping
and
I've
been
doing
a
lot
of
testing
all
weekend,
actually
to
make
sure
that
all
the
stuff
we
have
in
the
specification
it's
mostly
json
centric
because
they
want
it
to
be
readable.
H
But
a
lot
of
the
seaboard
stuff
isn't
very
readable,
it's
machine,
processable
and
but
using
this
abstract
language
approach.
Basically,
it's
a
a
structure,
and
it
basically
it
is
this
in
c
boards,
this
and
json.
It
doesn't
matter
as
long
as
you
explicitly
state
it
in
cddl
and
everything
from
maps
arrays
to
it's
very
expressive
and
even
allows
a
lot
of
regular
expressions
for
timestamp
information,
for
instance,
and
I
had
to
brush
up
on
my
skills
and
of
regular
expressions
in
order
to
get
it
to
fully
work.
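As a concrete instance of the tag-driven conversion mentioned above — the tag numbers are CBOR's "expected conversion" tags (21 = base64url, 22 = base64, 23 = base16), while the function itself is just a sketch:

```typescript
// Convert a tagged CBOR byte string into the JSON string form its
// tag asks for.
function taggedBytesToJsonString(tag: number, bytes: Uint8Array): string {
  switch (tag) {
    case 23:  // expected conversion to base16 (hexadecimal)
      return Array.from(bytes, (b) => b.toString(16).padStart(2, '0')).join('')
    case 22:  // expected conversion to base64
      return btoa(String.fromCharCode(...bytes))
    case 21:  // expected conversion to base64url
      return btoa(String.fromCharCode(...bytes))
        .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '')
    default:
      throw new Error(`no JSON conversion rule for tag ${tag}`)
  }
}
```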
H
It
would
be
interesting
to
actually
have
a
a
multi-codec
for
cddl,
which
is
basically
the
explicit
abstract
data
model,
as
basically
like
an
embedded
almost
like
an
at
context
for
json,
but
basically
like
at
cddl.
That
allows
you
to
say-
and
this
is
the
structure
explicitly
of
the
this
object
of
of
things
that
are
actually
in
it,
and
here's
where
it
could
be
binary.
Here's
where
how
many
members
an
array
it
could
have
here's
the
either
regular
expressions
and
even
tag
information
you
actually
can
put
in
that
this
needs
to
be.
H
This
data
has
to
be
in
binary
prepended
with
a
tag
23
for
hexadecimal,
and
it's
it's
completely
binary.
As
far
as
the
representation
of
the
the
semantics
and
the
structure
in
the
syntax.
G
I
can
see
how
cvvl
would
be
really
nice
in
a
spec,
but
I
I'm
just
in
terms
of
like
using
existing
formats
for
these
content,
addressing
cases
like
unless
that
format
happened
to-
and
I
haven't
seen
the
format
do
this
yet,
but
unless
the
format
happened
to
say,
okay,
hash,
consistent
representations
are
the
most
important
thing
and
we
have
not
made
any
trade-offs
in
the
other
direction
like
it's
just
it's
hard
for
me
to
see
how
we
we
end
up
not
being
in
the
same
situation,
no
matter
what
the
format
is,
where
we're
we're
we're
not
really
describing
a
protocol
as
much
as
we
are
saying
you
have
to
use
this
library.
H
Well,
I
think
this
is
in
between
which
is
it's
not
a
fully
fledged
library.
It
is
very
concise,
interface
declaration
of
the
the
objects
that
you're
dealing
with,
and
I
think
without
it
being
like,
like
the
full
like
again
like
you
basically
like
in
seabor,
you
prepend
it
with
a
like
tag
and
it
basically,
it
hints
at
as
far
as
what
the
expanded
string,
interpretation
and
interpolation
might
be
in
it
in
other
languages,
other
representation,
other
core
representation
models.
E
I
feel,
like
the
point
o
with
all
of
these
specs.
It's
this
catch-up
thing
that
we're
doing
where
we
add
on
we
on
the
you
know
with
the
spec
plus
content
addressing,
and
it's
always
that
additional
bit
that
where
it
just
breaks
down,
because
none
of
these
things
are
designed
for
that,
none
of
these
spec
processors
care
about
content,
addressing
and
consistency
in
that,
and
so
when
we
tack
it
on
the
end,
we're
the
ones
that
are
accepting
compromise
and
there's
always
these
compromises
involved
and
and
there's
nothing.
E
We
can
do
about
it
because
we're
not
connected
to
the
spec
process
and
the
spec
processor
doesn't
care
about
our
use
case.
Yet
so
it's
we're
sort
of
trapped
when
it
comes
to
dealing
with
interacting
with
any
of
these
specs.
G
I
don't
think
that
we
really
understood
how
important
prioritizing
that
hash
consistency
was
until
you
know,
later
on
after
we
had
already
kind
of
wrangled
on
top
of
these
formats,
like
it's
not
like
a
trait
like,
we
can't
really
make
trade-offs
in
the
other
direction,
like
you
know,
even
when
you
know
we're
just
mapping
stuff
on
top
of
a
format
it's
like,
if
you
have
all
these
ways
in
which
you
can
still
get
out
of
it,
then
it's
it's
not
actually
doing
the
job
that
we
wanted
to
do.
E
I'm
still
a
little
bit
shocked
at
digging
into
protobuf,
just
how
bad
it
is
at
this,
because
dagpb
has
it's
got
literally
two
data
types.
That's
all
it
cares
about
in
the
whole
format
when
it
comes
to
the
binary
representation,
and
it
still
can't
do
them
consistently,
and
this.
G
E
A
I: I did, yeah — and I want to talk about this really quickly; it came up today. The problem with that is the following: when I was originally looking into how to implement all of this, it became clear pretty quickly that we need to only allow somebody to supply a path that terminates in a single thing, because the entire Filecoin transaction essentially allows for one — and only one — selector, and nothing else. So you are asking the other side to send you something starting from one root, and this something, number one, has to be a complete graph —
I: some sort of dag or similar, which you can just give back: okay, you asked me for this sub-path; I'm actually going to take this path selector, ask the other side for this selector plus full recursion, they're going to give it to me, I'm going to run the selector again to find out where the sub-root you asked for is, and then I'm going to give that back to you, either as a CAR file or as direct output. Yes, Michael, go ahead.
G: Well, that's not necessarily true. The use case we have for paths — and why we're looking at adding paths in — was the Lotus retrieval command. It wasn't just "give me these blocks"; it was "give me this file at this path", so that definitely did need to be one whole thing that was assembled into one file that you would then output. But we have cases like what Chris is building now for js-graphsync,

G: which is going to hook up over an RPC interface to some Lotus client and then make graphsync retrieval requests, and what comes out of that is actually the raw blocks from the retrieval request. So it's actually fine, in that case, if you give it a selector and you're selecting a bunch of stuff in various different files that don't necessarily concatenate together or become one semantic representation — it can work with the raw graph retrieval data from the selector.
I: Yeah — but, number one, it needs to be a full graph; and number two: yes, if it streams out the blocks directly, that's fine. If it doesn't, then it goes into this entire trouble where — I'm trying to find the word — if I understand this correctly, we basically now have a sub-store per retrieval, which is then garbage-collected when the retrieval ends.
G: So now we're in this position where it's like: oh yeah, we have to keep the entire retrieval around in order to do this. Anyway, yeah — that's just a bug, in my opinion. We just shouldn't allow nondeterministic graph replies: if you give me the blocks out of order from what I'm expecting, based on my local interpretation of the selection, then I'm going to give you an error — I'm going to throw — and that's how we should solve the buffering issue.

G: Okay, that's totally acceptable, and maybe that solves some of the buffering stuff, but I'm not sure. Yeah.
G: I mean, selectors have a bunch of issues that we should take more seriously, like potential DoS vectors, but in the Filecoin retrieval market we don't need to be too worried about that, because you have to pay for them. It's not a great DoS vector if you have to pay money to exploit it — that's usually how you fix DoS vectors. So I can't think of anything to be worried about right now. Maybe Eric does, though.
I
From
my
perspective,
it's
not
that
much
about
it's
not
that
much
about
attacks
it's
more
about!
Well,
you
let
me
use
this
thing,
but
it
doesn't
work.
Make
it
work
for
me
now
and
it
like
would
not
be
able
to
work
on
a.
I
G
I
G
I
C
One
of
the
things
michael
said
about
like
yes,
the
traversal
order
should
be
entirely
deterministic
and
protocols
should
heart
should
halt
immediately
and
hard
when
things
come
in
an
unexpected
order.
This
is.
I
G: The thing is, I've been down this road — I've played this game before — and there is no satisfactory answer for how large that buffer should be. So the answer is to not do it, and to know that you're stable; and then, the moment somebody doesn't do this correctly, they get an error, and they don't try to ship that code.
F
Is
is
there
a
defined
order,
said
blocks
in
graph
sync
since
we're
kind
of
talking
about
this
right?
I
don't
recall
reading
about
that.
C
G: It's the encoded order — which in some codecs we enforce and in some we don't — but whatever was encoded, even if it was encoded badly, is what you should follow.
G: Yeah, ideally, yes — because you have to realize that if you allow a small variation in order, that variation very quickly becomes a variation in link order, which means the subsequent selection of the subgraphs is out of order, which means you have to buffer that entire sub-selection — unless you have a concurrent selection validator, which nobody's written — so you very much have to buffer it all around.
G
So
it's
not
like,
we
can
say,
like.
Oh,
no,
keep
keep
a
few
blocks
around.
It's
like!
Oh
no,
like
this
is
going
to
very
quickly
turn
into.
I
I'm
I'm
buffering
entire
sub
tags
of
whatever
length
and
eventually
that
link
gets
too
big
when
somebody
has
a
bug,
but
nobody
notices
it
until
they
have
a
really
big
graph
that
they're
running
over
this
bug,
which
is
why
no,
this
error
right
away
when
somebody
does
the
wrong
order,
so
they
know
that
they
have
a
bug
and
then
we
don't
ship.
It.
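A sketch of that hard-fail receiver (the expected traversal is precomputed here for brevity; a real implementation would derive the next expected CID incrementally as blocks arrive):

```typescript
import type { CID } from 'multiformats/cid'

interface Block { cid: CID; bytes: Uint8Array }

// Enforce deterministic traversal order: compare each incoming
// block against the locally computed order and fail hard on the
// first mismatch, instead of buffering and reordering.
async function* verifyOrder(
  expected: CID[],
  incoming: AsyncIterable<Block>
): AsyncGenerator<Block> {
  let i = 0
  for await (const block of incoming) {
    if (i >= expected.length || !block.cid.equals(expected[i])) {
      // the far side has a bug; surface it now rather than buffer
      throw new Error(`block ${i} out of order or unexpected: ${block.cid}`)
    }
    i++
    yield block
  }
}
```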
F: Well, I hear all the rationale. The one thing I'd say: I think graphsync is supposed to be a high-performance protocol where bitswap isn't — at least peer-to-peer, requester to responder, it's supposed to go really fast where bitswap is limited — so this could impact that goal of graphsync by limiting performance.

F: What do you think about that? …Yeah, this is helpful information, especially since I'm implementing it exactly this way, but it's kind of surprising too, so I'll noodle on it a bit more.
C: Yeah — if somebody wants to increase the throughput of these data pipelines, then, big picture: the point of selectors, and of how graphsync is supposed to go fast, is that you request many blocks all at once in one query, and then you stream them back, with this one question being answered by however many blocks it takes — that is the answer. So it does remove round-trip times, right?

C: You don't need a ton of acknowledgements flying back — the sender can just keep streaming blocks back, at whatever rate they can, to saturate the pipe; that's fine. And at some point, if the recipient of all this response data starts getting things in totally the wrong order — well, I'm assuming we're using TCP, so that just shouldn't happen; and if we're not using TCP, then something at the transport layer should be responsible for restitching.
C: So then, if the recipient starts getting these chunks streamed in totally incorrect order, they should send a hard NAK and close the connection, because the far side is just not behaving correctly at all, and they appear to be trying to waste my memory — trying to make me hold on to stuff I can't verify. I think increasing the throughput of this should pretty much just focus on increasing throughput at the transport level.
C
Yeah,
I
don't
see
why
that
would
be
really
necessary
unless
you're,
like
did
somebody,
add
an
https
4000
or
whatever
version
google
is
up
to
now,
where
it
like,
makes
more
connections
and
we're
streaming
whole
different
blocks
on
them,
for
reasons
so
go
faster.
Like.
F: Yeah — the thing I'm thinking about is: take a graph that has some fairly large blocks and some very small blocks, and my responder basically realizes, oh, there are like 20 child blocks on this one; I'm going to issue async requests for all of them and start shipping them over as soon as they come back — and it just so happens that the first block in the canonical order is the big one.
I: Let me say something — I think there's a disconnect. Chris seems to be under the impression that graphsync is this super-parallel protocol, whereas graphsync is designed explicitly to be serial: to walk the graph super-efficiently, but not to walk the graph in parallel efficiently at all. In fact, Qri are building something like that, because graphsync doesn't work for them and bitswap doesn't work for them — but bitswap is actually more performant than graphsync as implemented and as designed.
G: If you have any cached data, bitswap can beat it, just because you're asking for fewer things — you need fewer things. Graphsync isn't really designed for that: if you already have any of the blocks, it's all your own manual work to figure that out, and then you're still doing a bunch of round trips for sub-queries.

G: Yeah — and I think the way we always imagined people might handle that is by just doing multiple queries: they'd do the shallow query, like you were talking about, and then a couple of parallel queries and so on; and if you have them on the same graphsync channel, you do get the deduplication of the blocks coming out. But —
I: But yeah — and in Filecoin you cannot do any of this, because there is only one connection, because it is predicated on payment. So you cannot do anything.
G: Yeah, yeah — I mean, there are a lot of considerations here, and performance is not the top one in the retrieval market. I think graphsync has performance requirements for use cases other than retrieval that make people want to make it faster, but retrievals are not going to be — people are going to be blocking on chain access; they're not going to be performance-bound on these parallelism issues.
I: No — I just know that they're working on it. Qri.io — it's the company that Brandon is working at — are supposedly finishing it up and going to publish it: the way they move blocks between the shards of their semi-IPFS gateways.
C: I don't know if they've done anything new, but to do a quick review of what I know they've done historically, and been happy with: they made a two-phase protocol where they build manifests of all the structure and size of the data, send those around first, and then stream all the data later — and that's exactly —
G
That
works
really
well,
if
you
know
the
use
case,
it's
hard
to
do
generically,
like
you
just
don't
know
how
big
that
diff
is
going
to
be
and
where
the
information
is.
C
It's
like
the
thing
where
windows
file
copies
now
they
give
you
the
spinny
progress
bar
for
a
while,
while
they
say
calculating
size,
but
then,
after
that,
the
progress
bar
is
predictable
and
people
like
that
yeah
is
it
faster?
Actually,
no,
it's
almost
guaranteed
to
be
slower,
but,
like
very
saturated
experience,.
F
Yeah
yeah
and
there's
I
mean
I
have
to
think
about
this-
I
mean
actually
this
is
this
blot
this
ordering.
I
need
to
think
about
it
for
my
use
case,
something
like
this
manifest
or
streaming
manifest
with
string
blocks.
On
top
of
it
may
be
a
better
fit
for
what
I
need,
but.
G
Okay,
I
think
that
we
just
need
a
lot
more
time
with
different
use
cases
and
protocols
to
understand
what
what
something
generic
would
need
to
look
like
like
I'm,
I'm
not
really
happy
with,
like
any
of
the
replication
protocols
to
be
honest
and
I've
even
written
a
couple
like
different
ones,
and
all
of
them
work
really
well
in
particular
use
cases,
but
not
generically,
and
the
more
that
I
dig
into
dagdy
being
like
what
you
actually
want
to
do.
G
When
you
replicate
it
gets
a
little
bit
too
complicated
for
something
generic,
particularly
when
you
you
get
into
like.
I
want
to
keep
around
parts
of
this
graph
by
reference,
but
I
don't
actually
want
to
move
the
data
around
like
those
kinds
of
decisions,
get
like
really
really
difficult
to
embed
in
generic
interfaces,
but
anyway
we're
we're
at
time.
So
we
should
yeah
we.
G
So
I
need
everybody
who
who
who
who
works
for
me
to
stick
around,
because
we
have
to
do
some
planning
at
the
end
of
the
call.
C
No,
I
got
one
more
thing
to
tack
on
to
the
end:
yeah,
okay,
the
so
this
manifest
discussion
that
we
were
just
having.
I
think
it
would
be
really
cool
if
we
could
extract
some,
maybe
mild,
convention
or
loose
specification
for
that
sort
of
thing.
It
seems
to
me
like
it
would
be
totally
reasonable
to
make
some
function,
which
is
roughly
give
me
your
block,
store
and
your
starting
hash
and
a
selector,
and
I
will
apply
this
function
and
it
will
give
me
a
manifest.
C
I
don't
think
we've
specified
that
yet
it's
something
that
you
can
pretty
easily
construct
but
like.
Oh,
why
not
make
it
standard.
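A loose sketch of that function's shape — `traverse` stands in for a selector walk and is injected rather than named, since no such API is being specified here:

```typescript
import type { CID } from 'multiformats/cid'

interface Blockstore { get(cid: CID): Promise<Uint8Array> }

// Hypothetical: walk the selector over a local blockstore and
// record { cid, size } in traversal order — the "manifest" phase
// of a two-phase transfer; the blocks themselves stream later.
async function buildManifest(
  store: Blockstore,
  root: CID,
  selector: unknown,
  traverse: (s: Blockstore, r: CID, sel: unknown) => AsyncIterable<CID>
): Promise<Array<{ cid: string; size: number }>> {
  const manifest: Array<{ cid: string; size: number }> = []
  for await (const cid of traverse(store, root, selector)) {
    const bytes = await store.get(cid)
    manifest.push({ cid: cid.toString(), size: bytes.length })
  }
  return manifest
}
```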
A: All right — thanks. So yeah, thank you everyone, and see you all next week.