From YouTube: Session: Graphsync Deep-Dive
Description
Recorded at DataSystems September 2021 Colo at LabSpace Liech. Join us in #retrieval-market on the Filecoin Slack to ask questions or build an open data transfer stack with us!
A
Cool, so this is going to be a little bit about Graphsync: trying to get everyone here at least to the point of understanding the basic architecture, and feeling like you know enough about it that, if you were assigned a ticket in Graphsync, you might be able to make progress, even if it was a smallish ticket. So that's what we're doing.

First of all, what is Graphsync? Graphsync is a network protocol to synchronize IPLD graphs across peers. A very short description, but with a couple of important elements. First, IPLD graphs: Graphsync is a protocol to transfer data at the level of the IPLD data model, which is a model for structured, content-addressed data that is linked together on the distributed web. The thing that's important about that is that, to synchronize IPLD graphs on the internet, both sides have to understand IPLD, and that's a slightly higher bar than, say, Bitswap, our other main transport, which operates solely at the block level. Bitswap just requires both sides to understand content addresses and serialized blocks.
A
So that's the first piece. And we're going to be synchronizing across peers: this is primarily a protocol that operates on libp2p, at least for now, and it's oriented towards peer-to-peer distributed data, where the peers don't necessarily trust each other. So let's talk about our goals. The first is that Graphsync is a trustless protocol, and the goal is for the data you transfer over Graphsync to be incrementally verifiable.
A
That's a pretty key guarantee: when you make a request with Graphsync, you should not accept any data that you cannot verify as the data you requested, and that verification should happen in reasonable chunks, so that you don't have to take in a gig of data before you're able to confirm it's the right data. That's an important guarantee when you don't necessarily trust the other party, but you do have a content address: the root of your graph. So you're verifying against that root content address. It's a little more complicated than that, because you're verifying a graph, and you can't really verify blocks past the first one until you essentially perform the selector traversal to get down to them. We'll get into that.
A
Second, Graphsync is generally designed, ideally, to be resilient and attack-resistant, particularly on the side responding to incoming requests, because you effectively have an open protocol address on the internet, so you have to be careful. It should also be performant: we should be able to serve lots of requests at once, make lots of requests at once, operate in parallel, and do so in relatively efficient ways.

The other big thing about Graphsync, particularly compared to Bitswap, is that it's very configurable. There's a lot of ability in Graphsync to change request behavior over the course of a request, and to change things like where you're loading data from on your local disk. Those are things that could be in Bitswap but mostly aren't today; part of the reason they're in Graphsync is that it was written post-Bitswap.
A
I
go
any
farther.
This
is
like
interactive
presentations.
So
please
interrupt
me
because,
like
I
don't
have
like
lots
of
entertainment
and
jokes
for
this,
this
presentation,
so
if
you
want
to
stay
interested
stay,
bring
your
own
jokes,
yes,
and
also
like
you
know,
I
feel
like
like
when
you
don't
have
a
lot
of
you
know.
You
don't
have.
B
A
Under
interrupted,
talking
gets
a
little
overwhelming
anyway,
so
just
looking
at
the
like
super
highest
level,
architecture
of
graph
sync,
we
have
grassland,
has
a
top
level
interface.
These
are
the
methods
that
you
can
call
as
a
person
who
is
using
graph
setting
and
that's
like
there's.
Basically
one
thing
you
construct
and
that's
your
top
level
interface.
Everything
else
is
constructed
underneath
that
you
have
and
then
in
terms
of
the
code
itself,
there's
a
it's
sort
of
divided
up
into
three
other
major
components.
A
Besides
the
top-level
interface,
you
have
like
a
requester
implementation.
This
is
the
side
that
is
gonna,
make
requests
and
process
responses
from
the
internet.
You
have
a
responder
implementation,
which
is
the
this.
Is
the
site
that's
going
to
receive
incoming
requests
and
serve
them,
and
then
you
have
a
message
sending
layer
and
we'll
get
into.
Why
there's
a
separate
message
sending
layer
in
a
minute.
A
It'll probably become apparent why it's an important architectural decision that this part is separated from everything else. Essentially, as people wish to send requests or responses over the network, this thing is gathering and collecting them, and then sending them out in discrete network messages.

Just a sidebar about terminology: I'm going to say requester and responder, and you can vaguely think of the requester as the client and the responder as the server.
A
The reason we use requester and responder is that, theoretically, on a libp2p network, anyone can send a request and anyone can serve a response; everyone is running their responder implementation. This is obviously different from HTTP, where there's someone who is the server and someone who is the client. So yes, a given request flows one way; the distinction is that with HTTP, basically everyone on the internet is either running a web browser or a web server, not something that does both, whereas everyone who runs Graphsync is running both.
A
So
that's
that
is
that's
the
main
distinction.
Grassland
has
just
just
a
couple
dependencies
that
it
needs
like
this.
This
network
implementation
dependency
it's.
This
is
just
a
really
simple
wrapper
around
the
limp
pdp
protocol
to
to
like
be
able
to
send
and
receive
messages,
and
it
mostly
exists
so
that
in
testing
and
graph
sync,
you
can
basically
take
the
p2p
out
of
it.
If
you
don't
want
to
actually
talk
over
the
network
and
then
it
take,
it
needs
a
default.
A
Essentially
implementation
of
a
local
storage,
as
expressed
through
an
ipld
link
system
instance.
So,
for
those
of
you
guys
who
know
I'd
build,
a
prime
link
system
is
effectively
now
the
mechanism
by
which
you
configure
the
loading
and
storing
of
ipld
data
from
disk.
So
it
does
need
a
default
link
system.
You
will
see
in
a
bit
that
you
can
change
a
lot
of
that.
That's
all
you
really
need
to
supply
to
it.
A
Now for the goal of serving an incrementally verifiable IPLD selector traversal request, starting on the requester side. The requester needs to encode and send the request to the responder; that part is pretty simple. Once it's been sent and the responder receives it, the responder needs to start running an IPLD selector traversal.

The way these selector traversals work is that you give IPLD a root node and a selector to perform the traversal, and you also give it a LinkSystem. Every time, in the process of running that query, the traversal gets to a link boundary, it calls out to the LinkSystem to load the next block in the traversal from disk, and then it continues on with the traversal.
A
So
what
the
responder
is
going
to
do.
Is
it's
going
to
load
the
blocks
from
the
local
storage
to
back
the
selector
query.
A
Also
needs
to
encode
and
send
those
blocks
that
it's
traversed
and
metadata
about
that
traversal
to
back
to
the
requester
over
the
network.
Right,
meanwhile,
on
the
requester
side,
they
need
to
verify
the
blocks
that
they
get
from
the
responder.
Because
again
we
can't
just
trust
the
blocks
that
they
they
get
from
the
responder
and.
A
This
is
we
perform
our
oh,
we
perform
the
exact
same
selector,
traversal,
query,
backed
by
the
network
responses
on
the
requesting
side
and
and
then
the
requester
stores
the
blocks
to
its
local
store,
as
those
blocks
are
verified
by
running
this
local
traverse.
C
About the verification you're doing on the requester side: I perform this selector request, you start sending me blocks, and you're saying that we verify ourselves that the blocks you're sending me belong to the selector request that I sent you?

A
Yep.

C
How is this done if you don't have all of the blocks?
A
Yes,
you
do
not
have
all
the
blocks
yet
so
this
is.
This
is
actually
part
of
the
the
core
complicated
part
of
this.
You
you.
So
we
know
that
when
we
run
a
selector
traversal
in
ipld,
it's
going
to
run
its
own
code
until
it
gets
to
a
block
boundary.
At
that
point,
it's
going
to
say,
load
this
block
and
that's
where
that's
where
the
requester
is
going
to
do
a
lot
of
magic
and
in
fact
that's
the
next
slide.
A
Is
this
like
core,
like
thing
that
happens
on
both
sides
is,
is
intercepting
the
process
of
loading
blocks
and
doing
other
things
there?
That's
basically
the
the
mechanism
by
which
everything
happens
in
in
to
a
large
extent,
in
grassland
right.
So
in
the
requester
side,
we're
running
this
selector
traversal
we
get
to
a
blocker
block
boundary,
and
now
we
need
to
wait
until
we
have
the
block
to
load
from
the
network
right
and
so
we're
gonna.
A
We
know
that
it's
a
verified
block,
so
we
need
to
now
write
that
to
local
storage.
At
that
moment
right,
this
is
all
happening
in
the
block,
loader
function,
and
meanwhile
the
other
thing
is
that
we
might
that
we
do
is
we're
in
order
to
be
efficient.
A
If
we're
performing
a
selector
traversal
in
a
graph
sync
request,
and
it
encounters
the
same
block
twice,
we
are
only
going
to
send
it
once
and
the
second
time
what
we
can
do
is
since
we
already
got
it
and
we
put
it
in
our
local
storage.
We
can
fall
back
and
be
like.
Do
we
have
this
in
local
storage?
Oh,
if
we
have
a
local
search
and
we'll
just
read
it
right
and
then,
and
that
way
the
the
responder
doesn't
have
to
send
it
twice.
A
So that's the sort of funkiness on the requester side. The other thing we do in there is call hooks, announcing "we got a block" to anybody who's registered. You can register more than one hook for almost every hook type; the order is currently just the order of registration, and you can unsubscribe to unregister. Go ahead.
D
Ahead
in
the
previous
slide,
it
said
something
like
the
responder
includes
a
block
to
send
it
to
the
requester.
But
will
the
responder
already
have
the
encoded
block
somewhere
because
it
loaded
block
from
somewhere.
A
Pausing and resuming requests: that's one of the tricks by which the whole retrieval market payment protocol works. So that's kind of the basics.
A
I
want
to
look
briefly
at
the
grassland
message
format,
which
will
hopefully
illustrate
why
there's
a
whole
message
sending
layer-
and
this
is
one
could
argue-
this
is
not
like
this-
is
worth
worth
and
revisit
so
a
graph
sync
network
message
is
a
protobuf,
probably
at
some
point,
actually
there's
a
proposal
to
just
take
this
and
rewrite
it
and
see
more
so
that
it
would
be
and
make
sure
that
it
matches
the
ipld
spec
so
that
it
has
like
an.
B
A
Schema
to
describe
it,
but
for
the
moment
it's
a
protobuf
and
the
thing
that's
interesting
about
this-
I
don't
know
do
you.
Do
you
all
know
how
like,
if
I
go
through
this,
this
is
protobuf
description
format.
Hopefully
it
makes
some
sense.
You
can
see
that
within
the
grasstank
message,
we've
just
defined
a
request,
type
and
response
type
and
a
block
type,
and
then
the
actual
content
of
the
message
is
just
a
list
of
requests,
a
list
of
responses
and
a
set
of
blocks.
So
that's
a
it's
an
interesting
message:
format.
A
It's
not
exactly
the
simplest
message
format
because
you
can
have
multiple
requests,
multiple
responses
and
blocks
in
the
in
a
single
message
right
and
the
other
thing
that's
interesting
about
this
is
you'll,
see
that
there's
not
an
obvious
in
the
current
message.
Format,
there's
not
an
obvious
connection
between
the
blocks
you
receive
and
which
responses
they
were
part
of
the
there
actually
is
in
every
graphics
message
that
encoding,
but
right
now
it
is
one
of
these.
A
You
can
see
each
both
the
requests
and
the
response
have
a
map
from
string
to
bytes
of
extensions.
This
is
you
can
basically
pack
arbitrary
extensions
into
graphing
messages.
The
the
thing
that
is.
A
In any case, going over it briefly, you can see that a CID ends up sent at least twice: it's sent here, and then it's also sent in the response metadata. The reason it's sent in the response metadata is that the metadata also includes another core piece, which I'll pull up here: this is the known extensions list.
Why did we do it that way? A couple of reasons. First of all, that message format was designed in 2018 by people who really understood Bitswap, and looking at it right now, I feel like if I were to do it over, I would do it differently. Also, when I implemented it, it was presented as the message format, and the spec existed with that message format for the most part; obviously it's changed a few times.
B
A
Like
I
was
like,
oh
my
sure,
yeah
I'll
implement
it.
That
way,
and
now
I'm
like
there
are
some
reasons
right.
First
of
all,
like
two
different
responses
might
contain
the
same
block
right
and
they're.
Ideally,
you
should
probably
not
send
it
twice
if
both
responses
contain
the
same
block
in
the
same
message,
the
so
when.
A
If
they're
separate
messages,
you
also
so
that's
an
interesting
thing
about
like
one
of
the
most
interesting
questions
in
grassland's
responder
implementation
is:
how
often
should
you
duplicate
blocks
right?
There's
two
things
that
seem
fairly
simple.
One
is
that
if
two
different
responses
send
the
same
block
and
the
same
message,
you
probably
shouldn't
descend
it
twice
right.
The
other
is
if
the
same
request
encounters
the
same
block
twice
at
two
different
parts
of
the
traversal.
You
probably
shouldn't
send
it
twice
once
you
go
beyond
that.
A
There's
some
really
interesting
questions
right
because,
like
a
responder,
could
they
they
could
simply
never
send
the
same
peer
at
the
same
block,
twice
right,
but
now
they're
tracking
peers
that,
like
you
know
for
theoretically
in
perpetuity
and
basically
maintaining
a
list
of
what
sids
they
have
so
obviously
they're
not
going
to
do
that
forever.
A
The
implementation
of
go
graph
sync
makes
this
sort
of,
like
I
don't
know
like
what
seemed
like
a
fairly
logical
decision
at
the
time,
which
is
essentially
if
I
have
two
requests.
If
I
have
a
request
in
progress
from
from
up
here
and.
A
Progress
phone
appear
if
you
encounter
the
same
block
twice
in
each
request,
but
in
a
different
message
as
long
as
they're,
both
in
progress,
I'm
going
to
go
ahead
and
say:
let's
not
send
it
again
right.
So
that's
sort
of
like
the
best
case.
The
spec
doesn't
just
it
doesn't
say
what
the
deduplication
logic
is,
which
is
interesting.
A
Right now the Go implementation supports this fancier case, and it adds a ton of complexity to the code, because there are a lot of challenges, particularly when, on the requesting side, you start writing your responses to different block stores; now your deduplication is actually going to be super different than you expected.
A
There are a bunch of things to deal with there right now, but it is what it is. The link metadata, or response metadata, is really simple: it's a CBOR-encoded IPLD node, and all it is is a list of data structures that each contain a CID and whether or not the block was present on the responder.
A
Had
that
block,
that
is
important,
because
one
of
the
things
we
support
with
graph
sync
or
we
intend
to
support
is
partial
responses,
meaning
the
server
executed,
their
selector
traversal
to
the
best
of
those
were
missing
some
of
the
blocks,
which
means
that
they
had
to
stop
the
traversal
of
that
block
there
that
mean
they
may
have
actually
had
some
of
the
blocks
underneath
it,
but
as
soon
as
it
hits
something
it
can't
go
past
like
it's
going
to
skip
over
that
and
say
I
didn't
have
this
block
so
then,
on
the.
A
Yeah, there's an extension specifically for that: an extension called dedup-by-key. We ended up using it because we use multiple link systems in Filecoin, since every request and response now effectively writes to a separate CAR file, so the data is going to a different place each time. When the requester sends the request, it says: this is the key by which you should dedupe, and any requests that don't share this key should not be deduped against this request. And I believe that's automatic in Graphsync: if you use an alternate link system, the requester implementation will send that without you having to do anything.
A
That would be good to take up later anyway. So that's that thing. That was literally the simplest version of it we could write, and we use it; we totally use, and kind of misuse, it in our data transfer to do restarts, because right now restarting a data transfer is done by sending the request again with every single CID you've already traversed.
A
Yes,
there
are
people
on
ignite
who
probably
hate
me
for
that
in
any
case
yeah.
So
those
those
are
some
things.
Where
were
we,
but.
B
Like
like,
how
would
you
know
how
to
like,
if
so
imagine,
that
we
go
from
do
not
send
sids
with
the
individual
sids
to
do
not
send
do
not
traverse
past
sub
bags
of
yes
and
I've
got
my
partial
profile?
Yes,
I
guess
I've
got
my
selector
and
somehow,
with
the
selector
and
that
partial
set
of
sits
in
the
profile
they
can
figure
out.
A
Yeah,
no,
that
actually
is
probably
not
the
most
efficient
version
of
the
resume
request.
Most
likely,
we
need
to
have
an
extension.
That's
like
don't
send
me
the
first
end
bytes,
you
would
have
sent
me
over
the
the
network,
though
that's
also
not
totally
simple,
probably
more
like
you,
you,
god,
there's.
A
For
doing
resuming
requests
that
are
probably
better
implementations
than
do
not
send
individual
sids
but
like
do
not
send
individuals
says
it's
really
easy
to
write
so
yeah.
D
This
might
send
you
in
a
bit
of
a
rabbit
hole,
but
if
a
traversal
is
deterministic,
can
you
like
say
the
last
state
that
I
received
was
this
kind
of
wizard
from
this
point.
A
Yeah. Another interesting thing generally about Graphsync is that, for the responder, you're basically asking them to do a lot of work every time you make a Graphsync request. So you have to be pretty careful about what kinds of requests you allow the responder to serve.
A
You might want to say: if it's a data transfer connected to a storage or retrieval deal that these two parties have already agreed to, then go ahead and serve it, no matter what the selector is. But the default implementation, originally written with the idea of putting it in IPFS, is that you would serve certain kinds of selectors and not others. Actually, the current check for that is really not a great check for "let's not serve infinitely long requests": right now it's based on the recursion limit on a recursive selector, which is pretty ineffective for certain types of DAGs. What you probably want instead is something like: I only serve selectors up to a certain number of blocks.
A
All
requests
for
graph
sync
will
be
rejected
unless
they
are
validated
as
part
of
storage
or
retrieval
deals.
What
else
can
we
say
about
the
crossing.
A
So
we
saw
this
like
pretty
funky
message
format.
I
wanna
just
talk
briefly
about
how
that
all
leads
into
the
the
architecture
of
grass
think
right.
One
of
the
interesting
things
is
because
this
message
format
is
going
to
be
combining
messages
for
requests
and
responses
and
de-duplicating.
A
We
have
a
whole
separate
layer.
That's
like
a
message
sending
layer
that
you
basically
just
say
hey.
I
want
to
add
this
request
to
the
next
outgoing
message
or
I
want
to
add
this
response.
The
next
outgoing
message
I
want
to
add
these
blocks
to
it,
and
that
is-
and
basically
we
have
a
running
thread.
That's
like
a
message
cube
per
peer
that,
like
basically,
I
said
you
know,
lets
you
keep
assembling
things
until
the
next
message
goes
out
and
then
takes
everything.
You've
assembled
puts
it
into
the
queue
sends
it.
A
While
you
build
the
next
message,
so
that's
what
this
whole
message
sending
layer
does.
It
also
does
some
interesting
things
like
you
probably
like
it
needs
to
know
when
to
like
cut
off
a
message,
because
you're
building
too
large,
of
a
message
so
after
you've
added
a
certain
size
of
blocks
to
a
message.
If
you
try
to
add
another
one,
it's
gonna,
it's
gonna,
actually
go
right
to
the
next
message,
instead
of
continuing
within
that
message.
So
that's
that's
another
thing
to
know.
A
There are a lot of threads in go-graphsync. There are basically two ways the top-level interface gets called. The main one you're going to work with is calling the request method, which initiates a new request. The other big one is incoming network traffic: when it comes in on the Graphsync handler, it goes right into a couple of methods on the top-level interface.
A
If
you
have
a
request,
if
you
want
to
if
they,
if
somebody
requested
making
an
outgoing
request-
or
you
receive
an
incoming
response,
it
goes
over
to
the
the
requester
implementation,
which
is
called
the
request
manager.
These
probably
should
be
should
probably
be
called
requester
instead
of
like
request
manager,
because
it
sounds
like
you're
like
you
could
be
managing
incoming
requests,
but
it
actually
means
managing
outgoing
requests.
A
And
then,
if
you
have
an
incoming
request,
you
go
over
to
the
response
manager,
which
again
is
generating
responses
to
incoming
requests,
which,
again,
probably
these
could
use
a
rename
actually,
as
as
we're
saying,
as
I'm
saying
them
aloud.
In
any
case,
yeah
within
the
request
manager,
like
another
thing
about
to
know
about
this,
is
like
the
request
manager
has
its
own
independent
thread.
That
runs,
and
basically
is
like
the
single
thread
for
everything
that
comes
in
and
all
the
other
things
that
can
happen.
A
There's
an
internal
internal
thread
in
the
request,
manager
and
you're
gonna
send
this
in
it's
going
to
send
the
request.
If
it's
an
outgoing
request,
it's
going
to
send
it
to
the
network.
A
If
it
is
an
incoming
response,
it's
going
to
feed
it
into
this
module
called
the
async
loader,
and
that
is
the
thing
one
of
the
hardest
parts
is
figuring
out
how
to
deal
with
the
problem
of
you're
running
this
traversal
to
verify,
and
meanwhile
you're
receiving
blocks
at
totally
unknown
times,
you're
receiving
them
for
like
multiple
responses
at
once,
potentially,
and
you
have
to
figure
out
how
to
feed
these
incoming
blocks
to
the
right,
selective
queries,
and
so
this
this
async
loader
is
the
thing
that
does
all
that
magic.
A
It's
relatively
complicated
because
it's
got
basically
when
it
gets
new
response
like
when
you,
when
you
ask
it
to
load
a
block
it
like,
tries
to
load
it
immediately
if
it
can't
it
like,
instead
of
loading
instead
of
returning
a
an
actual
block,
it
returns
a
channel.
It's
gonna
return,
a
response
once
it's
done
and
so
then,
like
meanwhile,
like
as
it
gets
new
responses.
It
looks
at
all
these
like
held
up
requests
to
load
blocks
and
it
like
feeds
the
responses
to
those
blocks.
A
In
any
case,
meanwhile,
while
that's
happening,
you
have
independent
threads.
You
have
this.
These
probably
need
to
re
rename.
But
do
you
have
independent
threads
executing
these
selectors
right
getting
fed
blocks
as
they
come
in
from
the
network
and
then
once
you
traverse
the
blocks
and-
and
you
want
to
serve
it
back
to
the
person
called
request,
one
of
them,
possibly
not
great
design
choices,
but
also
kind
of
nice
from
a
user
ability
standpoint.
Is
that
like
when
you
call
dot
requests
you
get
channels
back?
A
You
can
read
them
whenever
you
so
choose,
because
grassing
has
a
whole
separate
thread
for
your
request
to
collect
all
the
resp,
collect
everything
and
essentially
infinite
buffer,
the
channel
for
you,
which
is
super
cool
and
also
potentially,
probably
a
possible
memory
problem.
If
you
were
to
do
a
huge
response,
a
huge
request
and
never
read
any
of
the
responses
in.
A
Yeah
so
they're
in
a
couple
different
places.
There's
a
couple
things.
First
of
all,
when
you
make
a
request
using
the
request
method
on
the
top
level,
you
can
you
can
you
can
pass
it
extensions
in
the
to
the
request
method
and
they
will
go
out
with
the
encoded
request
you
there.
A
When
you
do
there's
a
couple
places
you
can
see
extensions,
otherwise,
are
primarily
handled
by
hooks
right,
because
the
idea
is
that
hooks
are
the
user
supplied
code
they're,
the
ones
that
are
going
to
be
caring
about,
like
what
extensions
are
in
the
request,
with
the
exception
of
the
metadata
piece
which
is
like
built
into
grassy
and
also
like
the
handling
of
do
not
send
sids
on
the
responder
side
is,
is
also
built
into
grassland,
but
for
the
most
part
you're
going
to
assume
that
they
they
live,
except
for
well-known
extensions,
they're
gonna
be
handled
by
user
code
in
hooks,
and
so
you're
gonna
see
a
couple
hooks
here.
A
There
is
one
on
the
requester
side.
You
can
encode,
you
can
intercept
at
the
point.
You
get
an
incoming
network
message.
I
believe,
there's
like
an
incoming
network
response.
That's
it
like
the
like
intercept.
It's
like
register
incoming
response
hook.
I
think
this
is
an
incoming
response
hook.
There's
also.
A
Hook
at
the
point,
the
block
is
actually
verified
and
and
the
basic
structure
of
hooks
I'm
going
to
pull
up.
I
think
I've
got
the
grass
and
code
up
here
somewhere.
I
don't
want
to
do
that.
No,
no,
it's
cool,
because
I
wanted
to
actually
you
know
what
I
do
want
to
talk
about
hooks,
but
let's
talk
about
them
in
a
minute.
If
that's
cool,
yes,
go
for
it.
A
So, yes, in the sense that no Graphsync implementation is required to know about any extension other than the metadata extension; and that one should not be an extension, it needs to go into the core protocol.
A
If
you,
you
know,
if
it's
an
extension,
not
everyone
knows
about
it.
The
list
of
known
extensions
is
purely
a
specification
so
and
that's
like.
Basically,
these
extensions
are
ones
that
we
you
might
want
to
like
know.
Some
might
want
to
do
something
with,
but
the
idea
is,
you
can
do
arbitrary
extensions
and
then
like,
for
example,
data
transfer
has
all
has
its
whole
own
set
of
extensions,
but
they
are
not
part
of
the
known
extensions
list,
because
it's
not
it's
not
assumed
that
someone
who
is
implementing
grass
sync
would
care
about
those
extensions.
A
Not
necessarily
I
mean
the
grassland
request
is
only
going
to
one
pier
and,
and
although
you
could
technically
pack
lots
of
extensions
into
it
and
do
aggressive
requests
it
like
there's
a
message
size
limit,
so
you
actually
couldn't
like
it's
only
like
four
megabytes
that
you
could
put
into
a
single
message.
So
so
I
mean
like
you,
you
can
definitely
pack
lots
of
extensions.
The.
A
Yeah, a single Graphsync request is from one peer to another peer for an entire traversal. At least at the moment, we don't have any request farming. One of the things that's interesting about Graphsync in the long term, for it to work well in terms of fast downloading, is that you probably want your request to be served by multiple peers. But then you have to figure out how to break up the request, which is not necessarily obvious. With Bitswap it's really easy to break up requests, because you're asking for individual blocks, and they are verifiable as a unit: if I ask for a block, you send me back the bytes of that block, and I can verify them against the CID I asked for; that's all I have to do. That means that if I'm doing a DAG traversal, I run that traversal locally, I run it with lots of threads, I end up needing lots of blocks at once, and I just send different block requests to different peers. So it's pretty easy to split up.
A
There are also a bunch of really interesting things you might do mixing the two. One of the things I'm most interested in: there's really only one core problem with Bitswap. Well, there are a few problems; because you're dealing with block-sized chunks, there are some optimizations the other side can't do. But the main problem with Bitswap is the nature of IPLD data and verifying it, and the fact that you really can't do much when you have a deeply nested DAG. Canonically, a blockchain is the worst example for Bitswap: you request the first block, get the results back, then go to the next level of the chain, and you end up going through the entire blockchain with a round trip per height of the chain. You may have seen this proposal from when we were looking at different approaches.
A
Yes, exactly right: if you have that CID list, you can essentially get around the Bitswap problem while still using Bitswap, where it's really easy to split requests. The biggest problem there is that someone can just send you a malicious CID list, and then, you know...
A
People were going to build a lot of cool stuff on that, and it sort of never quite happened, partly because the Graphsync use case ended up being Filecoin, which kept pushing it towards more configurability within an individual request, as opposed to that distributed-request direction. And it's interesting, because now we have a ton of configurability in Graphsync, and all of it is missing from Bitswap: you can't attach things to blocks; basically, how would we jam the payment system into Bitswap?
A
Definitely
something
about
the
protocol
that
is
inherently
anti-payments
in
payments
for
things
but
like
it
still
has
to
be
written
so
anyway.
That's
that's
the
that's
a
lot
of
stuff
that
I
think
I
just
went.
A
Yeah, there are more things we can discuss in the future. Oh, just one more thing: let me go over the hooks briefly, so that you can see how they work, and we can probably keep working from there. This will just be a brief tour, but I want to show you these so that you know what's available. Actually, let me just... is this big enough?
E
A
Yeah, so you have a bunch of different ones. One of them is to register an alternate link system. This is useful. This is "register persistence option": you give it a link system and a name, and that becomes useful later for hooks. Then, if you wanted to configure a request to write somewhere else,
A
you have this "register outgoing request hook". Basically, you can't actually specify it in the call to request; you instead register an outgoing request hook, and you look at the request and you're like, "oh, for this one I'm going to use this persistence option." Let me just show you how this ends up happening. It's like, you have this on-outgoing-block hook that you have to supply; this is a function you supply to it, and the on-outgoing-block hook is going to get...
A
That's going to get the peer, the request... oh no, sorry, the outgoing request hook, apologies. It gets the peer, the request, and this thing called hook actions. Hook actions are the things you can do within hooks to modify the request. If we were to go and look at this, the things you can do are: you can change the persistence option, which is the thing I was just talking about, and then you could also use an alternate link-target node-prototype chooser.
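The registration-and-hook flow just described can be modeled in a short sketch. To be clear, this is a Python toy mirroring the shape of go-graphsync's Go API (`RegisterPersistenceOption`, `RegisterOutgoingRequestHook`, `UsePersistenceOption`), not the real library: every class, method name, and request shape below is invented for illustration.

```python
class HookActions:
    # What a hook is allowed to do to the outgoing request: a sketch of
    # the hook-actions idea, not go-graphsync's actual interface.
    def __init__(self):
        self.persistence_option = None
        self.node_prototype_chooser = None

    def use_persistence_option(self, name):
        self.persistence_option = name

    def use_link_target_node_prototype_chooser(self, chooser):
        self.node_prototype_chooser = chooser


class GraphsyncSketch:
    def __init__(self):
        self.persistence_options = {}     # name -> link system / store
        self.outgoing_request_hooks = []

    def register_persistence_option(self, name, link_system):
        self.persistence_options[name] = link_system

    def register_outgoing_request_hook(self, hook):
        self.outgoing_request_hooks.append(hook)

    def request(self, peer, request):
        # Before the request goes out, every registered hook inspects it
        # and may, e.g., pick a persistence option by name.
        actions = HookActions()
        for hook in self.outgoing_request_hooks:
            hook(peer, request, actions)
        return self.persistence_options.get(actions.persistence_option)
```

The point of the pattern is that the caller of `request` never passes the store directly; locally registered hooks decide per-request.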
A
No, no, hooks are completely local, and the assumption is, if you register a hook with local code that you're going to run on your machine, and you provide that code, you are taking the risk that you are not putting anything in that hook that is, like, you know... I mean, we trust the person who's calling register hook, who's a local computer user, to not... you know, I mean, if there's a thing in that hook that...
B
A
That would do different things, yeah. The assumption is the mechanism for doing that is: you send an extension, and then the responder has to be able to read that extension in a hook and cause it to do something different, the goal being that no code runs on the responder that isn't the responder's own local code. Though theoretically, I guess, one day we'll have Blossom and IPLD blocks and you can run all the code you...
A
With the exception of the ones that go-graphsync already implements itself, which are "do not send CIDs", and it also implements "dedup by key". I think those are the only ones it knows how to just handle: if you send those extensions, even if you have no hook, it will know what to do with them, I believe. And then...
A
No, there's no other... the assumption is that, basically, go-graphsync will handle any extension that the requester side of go-graphsync generated itself, or could have generated itself. So, like, dedup-by-key will automatically get sent in the request if you call "use persistence option" in an outgoing request hook, so...
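The self-generated-extension behavior just described can be sketched like so. The extension name string and the request shape below are illustrative stand-ins, not Graphsync's actual wire format: the idea is only that when a persistence option was chosen via a hook, the library attaches the matching dedup extension itself, with no extra user code.

```python
def build_outgoing_request(root, selector, persistence_option=None):
    # Sketch: if the requester picked a persistence option via a hook,
    # the library itself attaches a dedup-by-key extension keyed to that
    # store, so the responder can deduplicate per store. No user hook is
    # needed for an extension the library could have generated itself.
    extensions = {}
    if persistence_option is not None:
        extensions["dedup-by-key"] = persistence_option
    return {"root": root, "selector": selector, "extensions": extensions}
```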
A
There is no existing... well, theoretically, someone could register an incoming request hook that did exactly that, because it's user-supplied code, right? So the only hooks that are registered right now, for the most part, are the data transfer hooks. But anyone can register a hook, right? So there's no code that would do that, unless the person who is running Graphsync's responder on their machine added code that they wanted to run in an incoming request hook that would trigger further things.
E
So, where I'm going with all these questions: the security thing you mentioned...
A
An exponential generation? You can absolutely write a hook and register it that would do terrible things. The assumption is that, since you're running this on your computer, and you're the programmer who's initialized Graphsync and can set it up with hooks, any user-supplied code you personally put in a hook on your own computer, right, not a remote hook, you are assuming the security risks for whatever code you supply. So that is... that's sort of... I don't think there's anything else...
A
You could do there, because... yeah. But there are a lot of things that are in Graphsync, or around it; basically, I mean, the security model is my own...
A
Very probably a misinterpretation of, like, the JavaScript security model, which is: in the browser, anything that the user initiates, or that somebody initiated by actually doing something with the Graphsync code, is considered fine; like, you know, you can do whatever you want based on that. So, one thing that's interesting is that we have a whole mechanism on the responder side to try to not process a bajillion incoming requests at once.
A
Right, like, if somebody sends me a thousand requests at the same time, ideally that should get buffered in some fashion, so that my computer doesn't go to 100% CPU, or 100% disk, trying to process all of those.
A
The assumption on the other side is that, because every outgoing request was initiated by someone calling request, you can do as many as you want at once. But then that came up as a potential problem, because data transfer has a mechanism by which the outgoing request is not initiated by someone calling the code, but rather by someone sending a data transfer push request, which is one of the main things data transfer implements in addition.
A
If you say, "I want to actually receive this data," the data transfer code initiates a Graphsync request back to the original person to receive the data. So that created problems, because, like, in Filecoin right now, if somebody sends you 100 deals and you accept all of them, all those transfers are going to get kicked off at once, which is problematic for Filecoin miners.
A
So yeah, there are some issues there, though I think we're probably going to end up putting outgoing request buffering in Graphsync itself, as opposed to in the data transfer system. The traversal is depth-first, based off of whatever the iteration order of nodes is in lists and maps in IPLD, which I think is, like... well, I guess in lists it's just the order of the list by index, and then I guess in maps I think it's actually predictably guaranteed to be sorted.
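The traversal order just described (depth-first; list entries by index; map entries by key, assuming sorted map keys as the speaker suggests) can be sketched over plain Python lists and dicts standing in for IPLD nodes:

```python
def traverse(node, path=()):
    # Depth-first walk: list entries by index, map entries by sorted
    # key, yielding (path, leaf) pairs in deterministic order.
    if isinstance(node, list):
        for i, child in enumerate(node):
            yield from traverse(child, path + (i,))
    elif isinstance(node, dict):
        for key in sorted(node):
            yield from traverse(node[key], path + (key,))
    else:
        yield path, node
```

Because the order is fully determined by the data, both sides of a transfer can walk the same DAG and agree on which block comes next, which is what makes incremental verification possible.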
A
That's the case, yeah, exactly. And it's all single-threaded depth-first, which is its own potential problem in the future, just for performance reasons.
A
E
Yeah, I mean, even putting aside the connectivity, the key advantage there is approximate physical location. It might be faster or beneficial to me if you could resolve... tell me something about the thing that I'm going to request, because for you it's much cheaper to find out, and it may mean that I no longer need to traverse.
A
No, there are a bunch of interesting... well, one of the things that's interesting right now is that Graphsync is all transfer, no discovery. Bitswap has a form of discovery in it, in the sense that, in Bitswap, I can say "I want to know if you have this block," as opposed to "I want you to send me this block," and Graphsync doesn't have any functionality like that yet.