A: So this is a talk about GraphSync. I want to tell you a little bit about what it is, what it can do, and how it came to be what it is today, like a VH1 Behind the Music documentary: there was a lot of really awesome, early, unexpected success, followed by a ton of debauchery and regret.
A: So in any case, I'm going to go over a bunch of concepts, but there's plenty of room for interaction and Q&A, which is a subtext that these slides are not actually fully finished.
Okay. So what is GraphSync? GraphSync is a libp2p protocol that allows a requester to express an IPLD selector query to a responder; the responder executes the query and sends back to the requester all the blocks needed to satisfy the query. The responses are designed to be sent back in a way that allows the requester to incrementally verify incoming data against the query. So you can run a selector, get the responses back, and process them in a way that does not force you to absorb the whole graph before you know whether you're getting the right stuff.
This is the GraphSync message format. I know this is too much to put on the slide, but I want to point out a couple of things: you can see a request, you can see a response, you can see a block, and then you can see the message, which is the one at the bottom. Now, you may or may not be able to see this, but you'll notice that the GraphSync message has multiple requests in it, and then it has the GraphSync responses, but blocks are not directly tied to responses. So that's the format: a bunch of requests, some responses that are mostly response codes and metadata, and a bunch of blocks.
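The layout described above (several requests and responses per message, with blocks carried alongside rather than nested inside any response) can be sketched with plain data types. This is only a sketch: the field names and the status code are illustrative, not the actual GraphSync wire schema.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative shapes only; the real wire format is defined by the
# GraphSync spec and carried over libp2p, not by these Python classes.

@dataclass
class Request:
    id: int            # request id, referenced by responses
    root: str          # CID of the root node to start from
    selector: bytes    # encoded IPLD selector

@dataclass
class Response:
    request_id: int    # which request this answers
    status: int        # response code (in progress, complete, error, ...)
    metadata: bytes    # e.g. which links were followed

@dataclass
class Block:
    cid: str
    data: bytes

@dataclass
class Message:
    # One message can carry several requests and several responses,
    # and the blocks ride alongside, not inside any particular response.
    requests: List[Request] = field(default_factory=list)
    responses: List[Response] = field(default_factory=list)
    blocks: List[Block] = field(default_factory=list)

msg = Message(
    requests=[Request(id=1, root="bafy...root", selector=b"<selector>")],
    responses=[Response(request_id=1, status=20, metadata=b"")],
    blocks=[Block(cid="bafy...a", data=b"\x01"),
            Block(cid="bafy...b", data=b"\x02")],
)
```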
A: Now you might be thinking that's a little bit of a weird format for a request/response protocol that sends around selector queries and gets a stream of data back, and that is because GraphSync is not actually a request/response protocol. It is a pub/sub protocol, just like Bitswap: you can see this looks a lot like a bunch of wants, a bunch of responses, and a bunch of block responses.
A: So this is probably the record-scratch freeze-frame moment where we wonder how we got here. Essentially, I started here at Protocol Labs working on Bitswap, trying to fix some critical performance problems we were having with the gateways; you may have heard this story as a starting point for Protocol Labs. I worked on that for about six months and made some progress: I worked on some of the early sessions stuff, did some code cleanup.
A: And whatnot. At the time there was this idea that we would write this other protocol; this future protocol had been thrown around for years, called GraphSync. Eventually one day I was called into a meeting where, basically, they said: write this protocol. I used to be a web consultant, and when I got to work on IPFS, I have to say, the first six months were just: I am in such an amazing place, getting to do such cool stuff. I felt a little outclassed and I really wanted to prove myself, so I said yeah, sure, I could totally do that, on the theory that this would be a way to show that I knew something. And it was challenging, and I wrote it, and I'll give you a very brief introduction to how it works and then some of the challenges.
Okay. So, first of all, what are selectors? I'm not going to go too deep into this, because folks have probably heard this term before, but I do know that most people haven't dug into what they actually are, so I just want to give you a very brief intro. Selectors are essentially a language for expressing queries against an IPLD graph, and they're quite powerful, or quite expressive. Here are just a couple of examples.
We have one called ExploreFields, which means: given a data structure that's a map, go down one or more paths in that map based off of a set of keys. We have ExploreRecursive, which is: apply this selector, then go down the path with that selector, and then do it again, as many times as needed, up to a limit. There are some lesser-known ones, like ExploreIndex, which allows you to get a numbered item out of a list, and ExploreRange, which allows you to do a selection within a list. And then there's a really wild one that I don't think anyone here has used, called ExploreUnion, which does actually work: it allows you to run two selectors simultaneously on the same graph and then combine the result, which is pretty wild.
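For reference, a selector such as "explore everything, recursively, up to a depth limit" can be written out in the compact map form the IPLD selectors spec uses. The short key names below ("R", "l", ":>", "a", ">", ".") follow my reading of the selectors schema; treat them as an approximation and check the current spec before relying on them.

```python
# An "explore the whole DAG up to depth 5" selector, in the compact
# map representation of the IPLD selectors schema:
#   "R"  = ExploreRecursive      "l" = recursion limit
#   ":>" = sequence to recurse   "a" = ExploreAll
#   ">"  = next selector         "." = Matcher (match this node)
explore_all_recursively = {
    "R": {
        "l": {"depth": 5},          # stop recursing after depth 5
        ":>": {
            "a": {                  # follow every field / list index...
                ">": {".": {}}      # ...and match each node visited
            }
        },
    }
}
```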
A: But again, most folks don't use it that much. Here is an example of a selector that you can use now in Go: you can do a UnixFS path selection, which is super duper awesome, and that is not actually a selector; that is a function you call with some Go library code, because assembling that selector is quite complicated. So selectors are an incredibly powerful language that can do so much, and the developer experience also kind of sucks: in order to use them really well, you need a lot of tooling. There's probably a lesson here: if you need that much tooling, maybe your language should be simpler, or maybe you should all just commit to building lots of awesome tooling. It's an interesting question.
A: Let's talk about how GraphSync works, starting with the client side, or the requester. I should say GraphSync requester, because we don't say client/server; again, it's a pub/sub protocol. Only, there really are clients and servers.
So how does the GraphSync client work? The basic architecture, the way I describe it, the conceptual shift I use to try to help folks understand, is not that I'm sending a request and getting a response back, but rather that I am executing a selector query on my local machine that happens to be backed by an asynchronous stream of data coming in from a remote machine, which is actually sending me the data I need to satisfy that query. And the reason it works this way is incremental verifiability.
A: So on the local machine I execute a selector query against the response I get back, and that allows me to see that the responses I'm getting from the server are actually the correct responses for that selector query. The hard part about implementing that is that you're getting an asynchronous stream of responses; you probably also have local data, and you may need to combine those two and reconcile them. The other condition for all of that working is that you need a deterministic order to the data. This is why, if anyone has mentioned that selectors are always a depth-first, single-threaded traversal, that's why: there's a deterministic order so that the verification can work. There are pluses and minuses to that, and you can certainly optimize it; you could use other orders, but I think the determinism does help. That's the basic operation of it.
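The verification scheme described above (run the traversal locally in a deterministic depth-first order, and check each incoming block against the next CID the traversal expects) can be sketched like this. The "CID" and the codec here are toy stand-ins, not real IPLD.

```python
import hashlib

def cid(data: bytes) -> str:
    # Toy stand-in for a real content identifier: a sha-256 hex digest.
    return hashlib.sha256(data).hexdigest()

def links(data: bytes):
    # Toy codec: first line is the payload, remaining lines are the
    # CIDs of child blocks, in order.
    return data.decode().splitlines()[1:]

def verify_stream(root_cid, incoming_blocks):
    """Consume a stream of blocks, checking that each one is exactly the
    next block a deterministic depth-first traversal would ask for, so
    bad data is rejected without buffering the whole graph."""
    stack = [root_cid]                      # expected CIDs; LIFO = depth-first
    for raw in incoming_blocks:
        if not stack:
            raise ValueError("more blocks than the traversal needs")
        if cid(raw) != stack.pop():
            raise ValueError("block does not match the expected order")
        stack.extend(reversed(links(raw)))  # children pop in document order
        yield raw                           # verified: safe to hand onward
    if stack:
        raise ValueError("stream ended before the traversal finished")

# Tiny DAG: root -> [leaf_a, leaf_b]; depth-first order is root, a, b.
leaf_a, leaf_b = b"leaf a", b"leaf b"
root = "\n".join(["root", cid(leaf_a), cid(leaf_b)]).encode()
verified = list(verify_stream(cid(root), [root, leaf_a, leaf_b]))
```

Sending the blocks in any other order, or substituting a block, fails the `cid` check immediately, which is the incremental part of the verifiability.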
How does the server work? The GraphSync server is somewhat analogous to a web server: you have a lot of the same concerns. You want to only serve so many simultaneous requests at once overall, in order to not overload your machine, and you may want to try to prioritize that distribution between lots of peers.
A: The server is executing a selector query, and as it executes, the query says: here's the next block I need in order to continue. The server takes that block and sends it over the wire to the client. Probably one of the more complicated things the server implements, and it's pretty critical when you're doing this, is backpressure.
A: Basically, you do not want to execute the whole selector query and buffer into memory all the blocks you're going to send over the wire. You need to pause the selector query occasionally, to prevent more blocks from coming off the disk and into memory faster than you can queue them up to go over the wire. So you have this interesting backpressure. You could load a block into memory, immediately send it over the wire, and block the selector query until you send the next block; but you may also want to load blocks into memory ahead of the time they go over the wire, up to a certain amount, because disk I/O is also a potentially blocking operation. So you want those going a little bit in parallel.
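One minimal way to get the behavior described above is a bounded queue between the traversal (pulling blocks off disk) and the network writer: the traversal blocks once the queue is full, so only a limited number of blocks are ever in memory. This is a generic sketch, not go-graphsync's actual mechanism.

```python
import queue
import threading

# A bounded queue gives backpressure for free: put() blocks once
# MAX_IN_FLIGHT blocks are buffered, pausing the traversal instead of
# letting it read the whole DAG off disk ahead of the network.
MAX_IN_FLIGHT = 3

def traverse_and_queue(blocks, outbox):
    # Stands in for the selector traversal reading blocks off disk.
    for blk in blocks:
        outbox.put(blk)        # blocks here when the wire is slow
    outbox.put(None)           # sentinel: traversal finished

def send_over_wire(outbox, sent):
    # Stands in for the network writer draining the queue.
    while True:
        blk = outbox.get()
        if blk is None:
            return
        sent.append(blk)

blocks = [f"block-{i}".encode() for i in range(10)]
outbox = queue.Queue(maxsize=MAX_IN_FLIGHT)
sent = []

producer = threading.Thread(target=traverse_and_queue, args=(blocks, outbox))
consumer = threading.Thread(target=send_over_wire, args=(outbox, sent))
producer.start(); consumer.start()
producer.join(); consumer.join()
```

Setting `maxsize` above one is the "read a little ahead of the wire" part: disk reads and network writes overlap, but never by more than the window.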
A: Another key architectural component of GraphSync is these message queues, and this is something that came over from the Bitswap conversation as the thing we wanted, and I might say: be careful what you do there. What GraphSync does is this: you have multiple responses executing in parallel, potentially across multiple peers, and then, for any given peer, since again GraphSync allows you to have multiple responses, multiple blocks, and multiple requests in a given message, you're sending all the responses, and potentially requests, into a queue that packages them up into appropriately sized messages. So each message may have one or more responses, with one or more blocks, up to a certain max message size, because of block limits and all that, and also theoretical libp2p message size limits, which may or may not be real.
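The packaging step can be sketched as a greedy, size-limited batcher. The limit and the single-peer framing here are simplifications of what's described above; a real queue also interleaves responses and metadata for several in-progress requests per peer.

```python
MAX_MESSAGE_BYTES = 1 << 10   # illustrative limit, not a spec value

def package_messages(pending_blocks, max_bytes=MAX_MESSAGE_BYTES):
    """Greedily pack queued blocks into messages no larger than max_bytes.
    Each returned message is a list of blocks destined for one peer."""
    messages, current, size = [], [], 0
    for blk in pending_blocks:
        if current and size + len(blk) > max_bytes:
            messages.append(current)       # current message is full
            current, size = [], 0
        current.append(blk)
        size += len(blk)
    if current:
        messages.append(current)
    return messages

pending = [bytes(300) for _ in range(7)]   # seven 300-byte blocks
msgs = package_messages(pending)           # packs as 3 + 3 + 1 blocks
```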
A: But in any case, that turns out to be another layer of concurrency and another layer of logic that is quite complicated, and I would encourage folks to factor that in. It sounds like a really nice architecture: you just hand it all off to a message queue, and that queue arranges things into this nice little packaging line of messages. It's pretty hard to implement. Among other things, if you're doing a big streaming protocol like GraphSync, there's some real fun stuff in there, like: what if your peer goes offline right now? You have this queue of packaged-up messages that contain multiple responses, and you probably want to send a signal all the way back to the queries, if they're still executing: no, this is not going to go through; we need to stop, wait for the client to come back online, and redo the query. The logic around that now works; I submitted the fix PR for that. The original behavior was just that every single one of those messages failed, and if you were listening for GraphSync errors, you'd get like a hundred of them at once. It was not a pleasant behavior for the higher-level libraries. I submitted a thing to fix that; it took me three weeks, and my co-person working on it, Rod, who is one of the sharpest people that I've worked with, that was the one PR where he was like: I don't know what this does, but sure, let's merge it.
A: In any case, so yeah, that can be complicated, and I would encourage folks to be aware that, although it sounds like a really simple architecture, it's quite complicated; or maybe Go is just bad at concurrency, I don't know. So, GraphSync extensions: the interesting thing about GraphSync is that it has all the things that people are always asking for.
An extension has a name, and the data for the extension is an IPLD data structure, to be determined by the extension. And the person who's implementing the server essentially has the ability to set up a hook that looks at every incoming request, and whether it's based off the extension or the selector or whatever else, you can choose to accept or reject that request. So that's where we have filtering based off of things like: I don't want to execute an infinite recursive selector against an untrusted peer, right?
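The hook mechanism described above might look roughly like this. All the names here are illustrative, not the actual go-graphsync API; the point is the shape: a user-supplied function sees every incoming request and decides whether to serve it.

```python
# Default-deny request validation, sketched generically.
hooks = []

def register_incoming_request_hook(fn):
    hooks.append(fn)

def handle_request(peer, selector, extensions):
    # A request is only served if some registered hook accepts it.
    if any(hook(peer, selector, extensions) for hook in hooks):
        return "accepted"
    return "rejected"

TRUSTED_PEERS = {"peer-A"}   # hypothetical allow-list

def limit_recursion_to_trusted(peer, selector, extensions):
    # Refuse unbounded recursive selectors from peers we don't trust.
    if selector.get("recursive") and peer not in TRUSTED_PEERS:
        return False
    return True

register_incoming_request_hook(limit_recursion_to_trusted)
```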
A: The default implementation, and the failure mode, is: don't accept any requests unless they're authorized. Now, there's another kind of protection we have against too-long selectors, which is called the selector budget: as you execute the selector, you have a decrementing budget, and that's just a global limit I think you can set for the selectors.
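A decrementing budget is essentially a shared counter spent during the traversal, aborting once it hits zero. This is a sketch with a toy graph rather than IPLD nodes; the one-slot list just lets the decrement be shared across recursive calls.

```python
class BudgetExceeded(Exception):
    pass

def traverse(node, children_of, budget):
    """Walk a graph depth-first, spending one unit of budget per node
    visited, and abort once the budget is exhausted."""
    if budget[0] <= 0:
        raise BudgetExceeded("selector budget exhausted")
    budget[0] -= 1
    visited = [node]
    for child in children_of.get(node, []):
        visited.extend(traverse(child, children_of, budget))
    return visited

graph = {"root": ["a", "b"], "a": ["c"], "b": [], "c": []}
```

With a budget of 10 the walk completes (`["root", "a", "c", "b"]`); with a budget of 2 it raises `BudgetExceeded` before touching most of the graph, which is the protection being described.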
A: We did put that in because at one point we realized the selector language is too powerful, and as a result it has some potential geometric-time executions for certain types of DAGs, which is really bad, and we haven't fixed that yet. But in the meantime we're just putting in limits. So anyway, that's another thing you can do to limit things, you know:
A: "I don't want to execute too long of a query." Or you can limit up front and just say: I don't authorize this request. And there's a whole authorization structure: you look at incoming requests, you can set up a default authorization policy; it's got all the things you would expect from the way people do cookies and auth in web 2. Yeah, what?
A: Oh, GraphSync has some pretty useful filters that you can send as extensions, to sort of augment what you can do with the selector, and this is mostly used for resuming requests. The very first one we wrote was called do-not-send-CIDs, which was just: along with my request for a selector, I'm sending you a giant list of CIDs that I don't want you to send me the blocks for, in case you pass them in the selector.
A: This was the very first version of: let's see if we can do a resume request. As you might imagine, with a very large DAG it can be potentially problematic to send a giant list of the CIDs of everything you have, so we sort of stopped using that. The one we're using right now is do-not-send-first-blocks, which is basically just: execute the selector, and after the first n blocks, start sending me data. That works pretty well.
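The do-not-send-first-blocks behavior is simple to sketch on the responder side: run the full deterministic traversal as usual, but suppress the first n blocks on the wire, since the requester already received them before the interruption. The function name is illustrative.

```python
def respond(traversal_blocks, do_not_send_first_blocks=0):
    """Responder side of a 'skip the first N blocks' extension: the
    deterministic traversal still runs in full (the requester relies on
    its order for verification), but only blocks after the first N are
    actually sent."""
    for i, blk in enumerate(traversal_blocks):
        if i < do_not_send_first_blocks:
            continue      # requester already has these from last time
        yield blk

blocks = [b"b0", b"b1", b"b2", b"b3", b"b4"]
resumed = list(respond(blocks, do_not_send_first_blocks=3))
```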
A: There are some potential problems with that; I'll talk about it in a minute. There are other ones we could do that wouldn't be that hard to implement. One would be: start the selector at an IPLD path. We actually already have this machinery in the IPLD selector execution in Go; we just haven't written the GraphSync extension for it. Now, these are all extensions, so as you implement them, you potentially create differences between implementations.
A: I mean, they are mostly documented in a somewhat out-of-date spec, but those two at the top are implemented by default by the Go implementation, and they are extensions to the protocol. So it's an interesting, complicated thing.
A: So, start at an IPLD path. Another one you could do is send a bloom filter. Now, the problem with bloom filters: they're a set, and you get false positives. You would really like to only get false negatives, and I don't know enough about bloom filters to know if there's a version of that. But what you could do is take the list of CIDs you have, put them in a bloom filter, and send that; then they could send you back the data. They could potentially miss some CIDs, but you could just try again until you got everything. That's an idea; I don't know, it's a thing. I think it would be pretty easy to implement, and I know some people are interested in that.
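The bloom filter idea can be sketched as follows. Note the property being called out above: anything added to the filter always tests positive, but false positives mean the responder may wrongly skip a block the requester never had, so the requester has to notice the gap and ask again. The filter here is deliberately tiny and illustrative.

```python
import hashlib

class TinyBloom:
    # Deliberately small, so false positives are plausible; real code
    # would size the bit array and hash count for the expected set.
    def __init__(self, m=64, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        # May return True for items never added (false positive),
        # never False for items that were added.
        return all(self.bits >> p & 1 for p in self._positions(item))

have = TinyBloom()
for c in ("cid-1", "cid-2"):
    have.add(c)

def respond(all_cids, have_filter):
    # Responder skips anything the filter claims the requester has;
    # the requester retries for any blocks skipped by false positives.
    return [c for c in all_cids if c not in have_filter]

sent = respond(["cid-1", "cid-2", "cid-3"], have)
```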
A: So there are lots of ways. There's an interesting pattern here that I've noticed, which is: the selector language can do a lot of things with IPLD, but it's also really useful to do some selections that are really block-level. So that's another interesting lesson.
Okay, I want to talk very briefly about something GraphSync does, that we're talking about doing in a number of cases, that I think is a really interesting concept. Essentially, GraphSync has a stream of data incoming, you want to verify it, and you're verifying it against a deterministic order. In GraphSync that order comes from a selector, and a selector is a pretty complicated concept, but it could be anything: you could write a protocol that had a deterministic order for sending UnixFS, and you could verify against it.
A: That would be pretty straightforward, as long as it's deterministic. You might want to send a very large block, and we talked a little bit yesterday about the idea of using a GraphSync-like query to send back, essentially, a chain of nodes to verify a hash against large data; if you weren't here for that session, don't worry about it. It's another potential use case of that kind of concept. The one thing GraphSync has done is implement this and get it worked out, and if it works against selectors, you could probably copy the code and make it work off of pretty much anything, because selectors are pretty complicated. And then the other addition to this, which I think we're starting to have conversations about, is: what if, rather than having the person who sends you the list of things you're going to go over send you the data as well,
A: what if they just send you the list of things that a deterministic traversal is going to go over, and then you get the blocks themselves yourself with a protocol like Bitswap? We're going to have a session on this thing, called manifests, and I think that's a really interesting concept, because it's a very straightforward way to plan queries: you get, essentially, a list of things you're going to go over, and then you go get them yourself and verify against it.
A: Now, that gets complicated, because if you just get the list, it's not verified at all, so you have this interesting trust and incremental-verifiability problem. You probably want to ask ahead in a manifest queue, but then you need to constantly avoid getting so far ahead that you take in too much unverified data, while balancing that against wanting to stay ahead: I want to always have data requests out, so I can get blocks as quickly as possible.
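The balance described above (stay ahead so requests are always in flight, but never so far ahead that unverified data piles up) can be sketched as a bounded prefetch window over the manifest. Here `fetch` stands in for a block protocol like Bitswap, and the verify-then-consume point is where each block would be checked against the traversal.

```python
def fetch_with_window(manifest, fetch, window=4):
    """Walk a manifest (the ordered list of CIDs a deterministic
    traversal will visit), keeping at most `window` fetches
    outstanding ahead of the block we need next."""
    outstanding = {}   # cid -> fetched block, at most `window` entries
    results = []
    i = 0
    for want in manifest:
        # Top up the window ahead of the block we need next.
        while i < len(manifest) and len(outstanding) < window:
            outstanding[manifest[i]] = fetch(manifest[i])
            i += 1
        # Verify-then-consume point: take exactly the next expected block.
        results.append(outstanding.pop(want))
    return results

manifest = [f"cid-{n}" for n in range(10)]
fetched = fetch_with_window(manifest, fetch=lambda c: ("block", c))
```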
A: So that's a really interesting concept; I see it come up a lot. There's some good code in GraphSync to deal with that if you're working in Go; again, with concurrency in another language, maybe it's not a hard thing to do.
Yeah, let's see, what else. I would say: be aware, this is another one of those things that sounds super simple to implement, but there are dragons in there. Take a look at what exists before you think "I'll just go implement this and it'll work."
Okay, lessons. This is where some of the slides get a little messy, because, yeah, I ran out of time. Well, first of all, obvious lessons: be careful what you ask for. GraphSync at this point, and I'm going to make a bold statement here, mostly works.
A: After three years of development, after lots of rounds of clearing out all the memory leaks, which, I guess, is another thing that wouldn't have happened in certain languages. But in any case, Estuary has made lots of GraphSync requests; we've tested it under a decent amount of load. It is not a perfect library, and among other things, well, I'll get to this; there's another lesson in there.
A: But there is one thing I do want to respond to here. I have heard a few times that GraphSync is slow, and I just want to refute that by saying: no, it is not.
A: That is the un-angry, translated version of it. Look, here's the thing: we have not tested point-to-point performance in libp2p for real. Bitswap is a block protocol; we have not tested how fast we can move streams for large amounts of data. You can go look at all of the annoying messages I've sent, if you're on the Filecoin Slack, in various channels, but HTTP over libp2p is slow.
A: The trick is that we've got to get down to the protocol layer, and I think that's a worthwhile topic: figure out where in the protocol layer, I mean the transport layer, the problem is, and really try to work that out. But, and I'm sorry to be a jerk, it is not the case that GraphSync itself is the slow part.
A: You do have some problems there, and I say this after a year of gaslighting myself on it; I'm not the type of person who tries to just say "you're wrong," but I've really dug into this. Anyway, this probably doesn't even concern you; I should take it up with my manager. But FYI. Let me see. Oh yeah, the slides really end here.
A: But this leads to a really important lesson, which is that GraphSync is freaking complicated, and there's a downside to complicated protocols, which is that people don't understand how they work, and when they don't work, they tend to blame the people who wrote them. But more importantly, aside from personal job challenges, it means they're harder to implement, harder to understand, and harder for other people to work off of. Now, you might have seen from Bitswap that once you add sessions and all that, it is also complicated, and to some extent writing network protocols is complicated.
That's another thing to keep in mind: you will not get it right on the first try. But in any case, there's a lesson here: why did we need to implement the entire selector language up front? Maybe we should have started with just a couple of selectors. Maybe we should have started with something other than a multi-request-per-message protocol. If I could go back and do one thing, I would have listened to Volker, who is always right, by the way.
A: If you hear him say something, pay attention. He works in Rust; I think he's currently on FVM. He's someone who's been at Protocol Labs for a while but isn't very well known. One time he said to me: I have this idea for a database table; I wrote a thing in Rust. It was a hackathon, and I was like, oh yeah, I'll port it to Go. That thing runs the indexer now. Anyway.
So that's a sidebar, but he had said: just make it a single request/response; simple protocols are easier to work with. And I was like, no, I have this design document that I was given by the luminaries. And you know what? That would have been simpler.
Other lessons. I am not convinced of the complete value of a streaming protocol for large requests.
A: A lot of you will push me on this, but I think I'm kind of coming around to it: if you're executing a very complicated query in a request/response form, you lose the ability to split things easily, which is kind of our superpower.
A: GraphSync will work well, you know; if we work out all the low-level things, it can certainly compete with HTTP. But we're trying to be better than the regular web, and we have an uneven playing field, in that most of our nodes are not Amazon nodes; we have some, but a lot of them are not going to be as good. So we need to have multi-party transfer; that's how we win.
A: In any case, you should plan for request splitting at the beginning, and that may mean that sending big queries is not always the right thing.
The other thing, and I think we've said this a few times: the higher-level the data model you work at, the more you're taking on a lot of shared understanding between the two nodes.
A: So yes, you can execute a selector query against any encoding that ipld-prime supports, and probably at this point, if you're talking between two Filecoin nodes, because we've sort of piggybacked on the chain upgrade process to force upgrades to the protocol, you will be able to do cool UnixFS queries and all that. But there are a lot of things that need to be shared understanding, and then, if you add something new, how do you communicate that?
A: So there's a question: do you want to put all the different possible higher-level data models in a single protocol? Or do you want to do multiple protocols for all the higher-level stuff, and then just have something like Bitswap underneath to move the blocks around? I don't know. Yeah, those are a lot of thoughts, a lot of things. Questions?
C: What's your sense, given the usage of GraphSync right now, of whether the changes to GraphSync are worth it, in terms of how it's now being used, versus just packing a bunch of stuff on top of Bitswap and using that?
A: Yeah, so I would say it depends on your goals. Filecoin's usage of GraphSync is for point-to-point transfers, and for that I would probably just use GraphSync, if you want to use libp2p. What I would say is that there is a problem with Bitswap around deep graphs, and you probably need a solution for that.
A: I think these sorts of ideas of manifest protocols, to get ahead of your local traversal in terms of what you know about the graph, are probably sufficient for that. There are certain queries where it would just be simpler to use GraphSync, like if your goal is to get to a deep path in a directory.
A: Why would you split that up? It's efficiently linear; you might as well just get all the blocks down to the file and then go to Bitswap. And there's some interesting thinking about when it's better to split up queries, because if you're truly traversing a blockchain, a traditional blockchain, not a Filecoin blockchain, there's no value in splitting that, for the most part; though maybe you want to get the blocks from multiple peers.
A: You know, there's still value, I guess, but if it's small enough, you're probably going to save time just doing one round trip and getting everything at once. If the data is small, that is; not the shape of the graph, but more like, if you've got lots of small nodes, you might as well just hand it to one peer and get it back quickly. Yeah.
A: I mean, you can also build one that's deterministic but not very linear. You can make it breadth-first, as long as you say it's this particular order of breadth-first. And then, yes, you can play games with trust. The only thing I would say about that, when I say play games with trust: if you want to make it non-deterministic, maybe you accept a certain amount of extra stuff buffered in memory before you can verify it.
B
I
mean
I
don't
mean
well,
I
don't
mean
from
a
trust
perspective.
I
mean
like,
like
a,
I,
have
a
branching
file
right
right.
Instead
of
you
know,
walking
each
of
the
blocks
in
some
traversal
order,
like
once
I
put
a
node
that
has
you
know
whatever
256
spread
right,
whichever
order
I
get
those
blocks
in
shouldn't
really
matter,
I
can
verify
them.
A: Yes. So yeah, I think the UnixFS file case is a good example of where you should split it between GraphSync and Bitswap; or you don't even need to use GraphSync, just a linear traversal, and just fetch it. The simplest thing would be to transmit that tree of very small blocks that lay out a UnixFS thing, down to the raw blocks at the bottom, and then go to Bitswap.
A: Yeah, I mean, there's a point of termination in a selector where you're like: I'm not going to go further, I'm at the last layer of links. And that part you can totally verify in whatever order you want, because you have the thing immediately before it that tells you these are the links that are in this block. But there are cases where there are more complicated things to do. Honestly, I don't think you get that much out of non-determinism.