Description
Every other week, the Retrieval Market Builders get together to share progress on their projects in a demo format. We want to sincerely thank all of our collaborators for demoing their developments and helping to improve the FIL Retrieval Markets one demo at a time.
In this video, you can find quick demos from:
• Magmo on go-nitro testing on Testground
• Myel on Rust Graphsync
A
Welcome to this week's Retrieval Markets demo day. Today's date is Wednesday, the 14th of September 2022, and we've got a very exciting couple of talks. Today we've got Alex Gap from Magmo speaking about go-nitro testing in Testground (and about go-nitro more generally), and we've got Thomas from Myel speaking about their work on Rust Graphsync.
B
Awesome, okay, yeah. So my name is Alex, I'm from Magmo, and I'm here today to talk about our go-nitro framework and how we're using this really cool tool called Testground to test performance and capture metrics.
B
I'll just start with a little bit of an outline of what I'm going to talk about and show. I'll give a little intro to our team and what we do, and then I'll go over our go-nitro framework a little bit, just a very high-level view of how it works.
B
So who are we? Yeah, like I said, we're team Magmo. We're currently a four-person team, part of ConsenSys Mesh.
B
We focus on state channels and payment channels. We're maintainers of statechannels.org (the link is there on the slide), and we've worked on some pretty cool stuff, like Web3Torrent, which was a demo of a browser-based torrenting app with embedded micropayments. As for what we're working on right now: we've been focusing on bringing our state channel framework to Go, and we call it go-nitro; it's a port of the original state channel framework we'd written.
B
The goal of this framework is to provide fast, off-chain payments. It lets retrieval clients pay retrieval providers off-chain, quickly and easily. We've completed one grant with Filecoin already; that grant was to write out the basic protocols and functionality in Go. Now we're working on a second grant, and the focus of that second grant is to productionize go-nitro.
B
Let's jump to the very end. Awesome. So how does go-nitro work? Basically, go-nitro is a state channel client framework that allows micropayments using a shared intermediary, what we call a hub, and the hub is just a new role, a participant.
B
This requires everyone doing direct deposits on chain. Once this is done, we can use this funded on-chain channel to open virtual payment channels. That part is done completely off-chain, so it's really quick and easy. And then, once a retrieval client has this payment channel,
B
I
can
then
use
that
payment
channel
to
send
these
micro
payments
to
the
provider
and,
like
I
said
since
it's
not
on
chain,
but
it
can
be
quick
and
easy
once
we're
done
with
that
payment
channel,
we
eventually
close
it
up
with
cooperation
between
the
client
fighter
and
hub.
B
This updates the ledger channels with the balances, and this is also done completely off-chain. Then eventually, at some point, when the provider wants to withdraw, they'll go directly to the blockchain and, with cooperation of the hub, just close that channel, and that will get them all their funds back.
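To make the flow above concrete, here is a toy sketch in Go of the off-chain bookkeeping: a deposit funds a channel, micropayments only mutate shared state, and closing just reports final balances. All type and function names here are illustrative, not the go-nitro API.

```go
package main

import (
	"errors"
	"fmt"
)

// VirtualChannel is a toy model of the off-chain balance updates described
// above. It is NOT the go-nitro API, just an illustration of the idea that
// micropayments only move numbers in a shared state, with no transactions.
type VirtualChannel struct {
	balances map[string]uint64 // participant name -> funds
}

func NewVirtualChannel(client, provider string, deposit uint64) *VirtualChannel {
	return &VirtualChannel{balances: map[string]uint64{client: deposit, provider: 0}}
}

// Pay moves funds off-chain: no blockchain interaction, just a state update.
func (c *VirtualChannel) Pay(from, to string, amount uint64) error {
	if c.balances[from] < amount {
		return errors.New("insufficient channel balance")
	}
	c.balances[from] -= amount
	c.balances[to] += amount
	return nil
}

// Close returns the final balances that the ledger channels would be
// updated with (cooperatively, still off-chain).
func (c *VirtualChannel) Close() map[string]uint64 { return c.balances }

func main() {
	ch := NewVirtualChannel("client", "provider", 100)
	for i := 0; i < 3; i++ {
		ch.Pay("client", "provider", 10) // three micropayments
	}
	fmt.Println(ch.Close()) // map[client:70 provider:30]
}
```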
B
So that's just a very quick, very high-level view of our go-nitro protocol, but I'm going to chat a bit more about how we look at its performance and how Testground fits into this puzzle. As a big part of the productionization effort, we've been focusing on performance and on things like fuller integration with the blockchain.
B
When
I
started
looking
for
tools
like
that,
I
found
a
lot
of
them
that
are
based
around
like
a
client
server
model.
A
lot
of
them
based
on
like
assuming
you'd,
be
running
some
http
server,
but
there
wasn't
a
lot
for
the
kind
of
distributed
peer-to-peer
system
we
have.
B
However,
luckily
there
is
this
tool
called
test
rounds
and
it's
actually
kind
of
written
and
signed
for
our
use
case.
It's
a
tool
designed
for
benchmarking
distributed
peer-to-peer
systems.
It
was
originally
designed
to
test
and
measure
the
ipfs
and
the
pdp
copies,
which
is
kind
of
similar
in
spirit
to
what
we're
doing
yeah.
So
what
is
test
ground?
Basically,
it's
a
platform
for
testing
benchmarking,
simulating
peer-to-peer
or
distributed
systems.
B
Basically, it lets you write a test script that will be run in a configurable number of instances. Testground handles spinning up as many instances as you want and running the code in those instances. For most of our tests we use Docker; Testground can spin up separate Docker containers for you, but you can also just run it completely locally. We often use the Docker container configuration rather than some kind of central coordination dictating everything.
B
It also provides some nice stuff for metrics: an easy way to capture metrics and report on them. And it provides some nice stuff around manipulating your network: it lets you add mock lag or jitter, so you can simulate various network conditions. So how does Testground work? Basically, it runs as a daemon listening for test requests on some server, and when it receives a request, it builds the code that you've submitted.
B
This is kind of nice because it lets us share metrics and test runs around the team, or with whoever we want, quite easily. We also use the Testground metrics API, so we record metrics for our go-nitro client, and we have various dashboards to report on those metrics. I should also mention that we have a Docker container running Hardhat that serves as the blockchain instance for our tests, and that's running on that cloud VM.
B
This basically means everyone has a funded ledger channel with the hub and is able to create virtual channels. The clients then open payment channels with random providers via random hubs, use those to send payments, and then close the channel.
B
One really cool thing about this work is that it lets us easily capture metrics and add reporting on them. So this is just a little example here: at the top we have some of our client code, and to record the duration of this function and report it in the metrics, all it takes is adding this nice one-line defer statement here, a report-function-duration call, and then we'll automatically pick that up when we run our test and see it in our Testground metrics.
B
This line here is where we specify the various instances and the number of them. For this first run we're going to have one hub; we're going to have six clients, which in the tests we call payers; and then two providers, or payees, for a total of nine participants. We run the payment test for 20 seconds, just so we don't have to wait forever, and then we have this concurrency value set to five, which is basically how many payment channels clients try to maintain.
B
That's
going
to
connect
off
to
our
test
ground
instance.
First
thing
you
can
see
here
is
it's
going
to
build
our
code
and
once
it
does
that
it's
going
to
start
running
it?
You
can
see
here
that
test
ground
is
nice
enough
to
synthesize
all
the
output
from
all
the
various
instances.
B
So
we
can
see
here
that
each
instance
is
like
reporting
back
to
deskground
at
the
start,
they're
just
initializing
their
network
and
doing
lots
of
setup
stuff,
and
if
we
scroll
down
a
bit,
we
can
see
that
we're
now
sending
payments
and
receiving
payments
that
continues
on
for
about
20
seconds.
B
On the right here, you might see some stuff happening. This is our Hardhat container, so you can see the deposits happen at the start of the test.
B
Cool. So now that we've run those tests, let's go take a look at the metrics. As I mentioned, Testground provides easy metrics integration, and what we've used it for so far is to focus on measuring this average time to first payment.
B
So here we're looking at the first test run, where we had just the one hub. You can kind of see that the average time to first payment keeps creeping up and up and up. On the left here we have the incoming-message view, which is basically the number of messages in a client's inbox to deal with, and in this case you can see this particular hub is kind of getting slammed and slowing everyone down; see how the message inbox gets filled up.
B
If we jump to the other scenario, which I think is this one, you see a much nicer picture. With more hubs there's less (not deadlocks, sorry) less competition for resources, and we see a much quicker time to first payment, and the message queues, as expected, don't get filled up as much. The really nice thing about this, too, is that you can maintain historical data, so I can easily pull up this five-minute run I did maybe yesterday and see the results of that as well; not terribly exciting.
B
Another really nice thing is that the metrics have really enabled us to dig deep into performance. So we have a report like this that can actually let us see various function call durations in our framework. This is going to help us identify bottlenecks and things like that, and hopefully help reveal those things.
B
Yeah, awesome. So I'm just going to quickly talk about how it's helped us out so far. One of the most obvious things is that it's revealed bugs, lots of bugs. We've tried to write tests for all our components and do integration tests, but nothing really competes with putting it all together and seeing how it works.
B
So it's been really great for revealing all those bugs, and it's going to let us start recording and monitoring metrics.
B
It's going to be really great to get a sense of benchmarks that we can refer to, and to let us monitor performance and make sure we don't break it all of a sudden. It also made us use our own client API, which is pretty great; it makes us dogfood, I guess. It's pretty easy to write an API and just ignore it, so being forced to use it has been really, really helpful.
B
As for what we're looking at doing next: more benchmarking. We'd love to have more standardized benchmarks and report more metrics in general. And we'd love to integrate this into CI. Since this is running on a cloud VM, it's actually going to be pretty easy, I think, to have a go-nitro pull request just ping
B
The
test
fan
runner
and
run
an
instance
of
it
whenever
we
like
check
into
master
or
something
or
check
in
domain,
or
something
like
that
so
be
really
great
to
do
that
and
have
that
kind
of
close
loop
between
making
a
change
and
seeing
performance,
we'd
love
to
contribute
back
to
the
test
ground
project.
We've
already
done
that
a
little
bit
around
some
m1
our
support
for
m1
max,
and
I
think
mike's
done
some
work
and
some
docker
container
stuff,
which
has
been
awesome
so
it'd,
be
great
to
kind
of
keep
that
up.
B
And then a really neat feature of Testground is that it enables large-scale, distributed testing using AWS and Kubernetes, so it'd be really cool, once we get a bit further along, to set up a properly distributed, really large-scale test and see how that works. And that's about all I have to say on that.
B
If you want to check us out, you can reach me at alex.gap at mesh.xyz, or you can reach me on Slack. Check out our website; the GitHub repos that I used for this code are linked there if you want to check it out. Yeah, any questions?
B
Oh sorry, I'm just reading the chat now: "Have you tried playing with some latency simulations?" Yeah, so we haven't really dug into that too much yet. Sorry, so the question is: have you tried playing with any latency simulations, curious how much distance between hub and peers impacts performance? And yeah, we haven't really explored that yet.
B
I think Testground has provided us the tools to do so, but we just haven't taken the time to dig into it yet. I think that's something we're hoping to explore coming up, as part of our benchmarking process, and we can definitely do that thanks to the fact that Testground can manipulate the network.
C
Yeah, it's sort of for what we call the go-nitro API, which is the implementation of the state channel client code in Golang.
C
It's at the stage where it's fairly complete in a sense, but it hasn't been tested in a real network setup, and so where I think Testground is going to be really powerful is in forerunning a lot of those scenarios and trying to simulate a lot of the issues that you'd see in a real topology before you deploy there.
C
Traditionally,
you
just
kind
of
like
earned
my
experience,
start
off
with
a
small,
topology
and
kind
of
expand
it
gradually
and
see
what
what
issues
you
hit,
but
I
think,
with
test
ground.
It's
going
to
be
really
great
to
just
maybe
skip
some
of
the
initial
steps
and
go
immediately
to
a
larger
topology
check
it
out.
A
Awesome, thanks guys. Thanks, Alex. I had one question, which was: were there any parameters you could change, like the number of payers or payees or hubs, which caused anything unexpected, or highlighted something, or a change that you wanted to make to the protocol?
B
Yeah, it's a good question. I think right now we've mostly been using it to just reveal when things don't work, so I think we're just now at the stage where we're starting to dig into it and ask, "why does the graph look like that?" So nothing yet, but hopefully we'll have something to report soon.
C
Yeah, it's been sort of hard-failure-like: we would tweak parameters and we would deadlock the whole network, and then it would just be, "oh well, yeah, of course, that makes sense: this particular thing is emitting things on a Golang channel, nothing is reading those subscriptions, and when the channel fills up the whole system stalls out." Those hard failures have been really important to know about, but we're still sort of getting there.
C
Once things are working more smoothly, then it's going to be time to look at more interesting, performance-type things for the system.
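The deadlock mode described above (events emitted on a Go channel that nothing reads, until the buffer fills and the sender stalls) takes only a few lines to reproduce. This is a generic Go illustration, not go-nitro code; the select timeout is there just so we can observe the stall instead of hanging forever.

```go
package main

import (
	"fmt"
	"time"
)

// publish emits events on a buffered channel. If no subscriber drains the
// channel, the send blocks once the buffer is full: the whole-system stall
// described above. The select timeout lets us observe the stall.
func publish(events chan<- int, n int) (sent int) {
	for i := 0; i < n; i++ {
		select {
		case events <- i:
			sent++
		case <-time.After(50 * time.Millisecond):
			return sent // sender is stuck: buffer full, no reader
		}
	}
	return sent
}

func main() {
	events := make(chan int, 2) // small buffer, no subscriber reading it
	sent := publish(events, 5)
	fmt.Printf("sent %d of 5 events before stalling\n", sent) // sent 2 of 5
}
```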
E
Definitely, that's what we expect: they're going to be the most critical component. But it's just an opportunity for us to say here that our particular brand of virtual channels is designed such that the intermediary is only involved in funding, setting up the channel, and tearing it down at the other end. That's a huge benefit, because those intermediaries, which are the performance-critical part, are not involved in the micropayments.
E
The micropayments are the things that are going at really high frequency, so the intermediaries can effectively just step out of the protocol for most of it. We're hoping that brings real benefits, yeah.
C
Yeah, adding a new hub would be kind of tricky. Well, not tricky, but that's when the hub is really going to experience a flood: when you add a new hub, probably a lot of different clients are going to want to establish a relationship with that hub, especially since presumably you're adding hubs to a sector of your network that is overloaded.
C
That's
why
they'll
be
economic
incentive
would
be
there
to
add
those
hubs,
and
so
presumably
there
will
be
a
ton
of
interest
in
the
in
that
node
as
soon
as
you
add
it,
and
now
it
would
get
slammed
with
a
lot
of
channel
open
transactions,
and
then
you
would
want
to
make
sure
that
that
doesn't
doesn't
grind
the
hop
to
haul
yeah.
D
I mean, this kind of liquidity function of the channel intermediary is going to be a dual partner with the indexers that are doing the lookup from CIDs and kind of finding the channel. So to a large degree it's going to be a similar problem there: when a new indexer comes on, it's going to want to set itself up and let people know who it is, and other things, so there's a larger discussion
D
We're
going
to
have
and
there's
the
whole
performance
aspect,
because
in
some
ways
as
much
as
this
problem
is
hard,
you
know
actually
managing
a
good
indexing
service.
For
you
know,
file,
storage,
storage
provider
discovery
is
a
really
big
problem
as
well.
So
this
will
be
a
joint
effort,
I'm
sure
in
the
end.
A
Awesome, thanks so much, that was a great presentation. Next up we have Thomas with a review of the Rust Graphsync work.
F
Hey guys, Thomas from Myel here. I'm going to share my screen really quick. Cool, so yeah: today I'm going to talk about Rust Graphsync and, kind of generally, about our experience working with Rust and rust-libp2p: why we went there, what the status is, where we're going next, and things like that. Hopefully that helps out folks who are debating getting into the Rust part of the IPFS and libp2p ecosystem by sharing our experience with it.
F
So, we built (you probably saw a demo on one of the previous Retrieval Market demo days) a JS version of Graphsync, and we were just not really satisfied with the performance we were getting. So the next thing we were thinking about was digging into the Rust ecosystem, and it was kind of nice because ChainSafe has already done a lot of things with Forest, and now there's the FVM,
F
which is doing a lot of things in Rust, so it just kind of seemed like a good place to explore and see if we could gain some performance. It turns out that, performance-wise, it was harder than we thought to really get to what we thought we would gain, but we ended up having a really nice experience with the development. It was kind of my first time really building things for production in Rust, beyond pet toy projects and things like that, and it ended up being a really nice experience for building libraries and a kind of refreshing way of building systems and composing systems together. So let me give some backstory as well.
F
We built three different implementations of Graphsync in Rust to really figure out how we wanted it to be. Initially, the first version was very taped together, using some existing protocols like the request-response protocol that rust-libp2p has. rust-libp2p has a really interesting design framework and paradigm
F
In
terms
of
how
it
it's
built
and
how
you
can
compose
behaviors,
so
there
that's
what
they
call
network
behaviors,
and
so
it
kind
of
basically
enables
allow
you
to
orchestrate
how
messages
are
sent
and
how
peers
connect
and
basically
how
the
protocol
operates
and
how
handlers
basically
are
executed
over
substreams
and
how
new
stop
streams
are
created,
and
things
like
that
so
based
on
that,
you
can
compose
different
protocols.
F
So
that's
how
currently,
for
example,
forest
has
a
gossip
sub
protocol,
that's
implemented
for
gossiping
new
blockchains
on
the
filecoin
blockchain
of
their
falcon
implementation,
and
so
these
are
all
different
type
of
handlers
that
respond
to
different
protocols
and
and
basically
has
just
worked
with
the
the
standard
lib
p2p
multiplexer,
that
is,
you
know,
implemented
in,
go
and
in
javascript
and
etc.
So
these
are
just
kind
of
how
it
works,
and
basically
you
can
compose
those
network
behaviors
and
just
like
that,
have
very
isolated
logic.
F
that really corresponds to your protocol. But this kind of design basically means... I mean, it was really designed for message sending, which is not exactly what we're doing with Graphsync.
F
So
you
know,
graphing
is
an
interesting
protocol
because
you
basically
open
a
pipe
upon
a
request
and
create
a
substream
and
then
you're
going
to
be
streaming
a
whole
bunch
of
messages
during
an
entire
life
cycle
of
a
transfer
on
the
same
substream,
which
is
not,
which
is
different
from
like
passing
messages
where
you're
sending
you
know
a
request
and
receiving
a
response,
for
example,
on
the
existing
networks.
F
So
it
was
kind
of
adapting
this
kind
of
you
know
paradigm
to
to
basically
build
something
that
would
work,
and
that
would
stay
simple
and
that
would
be
nice
to
use.
So
we've
really
gone
for
like
simplicity,
for
this
implementation
and
and
create
kind
of
like
an
ex
an
interface,
that's
that
makes
it
easy
to
use
which
is
sometimes
challenging
with
rust.
You
don't
want
things
that
are
passing
down
a
ton
of
generics.
You
don't
want
things
that
are
like
making
it
hard
to.
F
You
know
basically
use
different
references
to
the
same
extract
and
things
like
that
so
kind
of
based
on
that
we
kind
of
created
just
like
a
simple
extendable
protocol,
where
you
just
kind
of
create
requests
and
then
just
extend
those
requests
with
different
types
of
extensions
and
things
like
that
and
so
yeah.
F
A
lot
of
the
things
that
we
did
as
well
is
in
the
ipld
world,
where
we
kind
of
created
a
package
for
ipld
operations
with
things
like
a
blog
store,
the
block
store
interface
is,
for
example,
based
on
what
the
av
the
fbm
has
been
using.
So
we're
trying
to
like
stay
very
similar
to
what's
already
out
there.
So
you
know
once
there's
an
official
version
of
it.
F
we can line up with it. Then you have the link system and things like that, which are very similar to the ipld-prime interfaces; so if you're familiar with those, it feels similar and you can get your bearings that way, which is kind of nice. For the selectors, we've implemented a whole bunch of different selectors and ways to build them, including reifiers, because that really enables us to do things like accessing paths in UnixFS directories and things like that.
F
So
basically,
you
can
do
things
like
just
use
a
string
with
a
cid
and
a
path
to
a
directory,
and
then
we
have
a
little
method
that
you
can
import
and
that
will
just
generate
your
cid
into
selector
and
then
you
can
use
that
to
you
know,
select
whatever
you
want
inside
of
that
tree,
and
so
then
we
have
different
traversals
in
what
we've
changed
from,
for
example,
the
traversal
is
that
instead
of
doing
recursive
traversals
using
you
know
callbacks
and
things
like
that,
this
is
very
iterative.
F
So
it
really
implements
the
the
standard
rust
iterator
trades.
So
it
makes
it
way
nicer
and
simpler
to
use.
So
you
just
kind
of
create
your
iterator,
which
is
kind
of
a
there's
there's
a
couple
of
them.
For
example,
it's
an
ipld
traversal,
there's
a
block
traversal
and
based
on
what
you
want.
So
these
return
ipld
objects.
If
you
want
that
or
you
can
also
return
blocks,
if
you,
if
you
want
that
so
depending
on
this,
these
are
used.
F
You
know
behind
the
scenes
to
create,
on
the
provider
side
a
bunch
of
blocks
and
then
send
them
the
messages,
and
so
it's
quite
nice,
you
just
kind
of
iterate
over
until
you're
done,
basically
and
and
then
you
just
consume
the
iterators
like
this
so
yeah.
These
are
all
kind
of
documented
in
the
library
that
you
can
import.
F
You can use that separately for different things if you want. Eventually we're probably going to have to standardize these things, but I think it's a good example, and hopefully it can help out other people when they design their own implementations and things like that. So yeah.
F
This
is,
this
is
part
of
it,
so
this
is
kind
of
a
package
that
you
can
import
separately,
but
it's
consumed
by
by
graph
sync
and
then
I'll
just
share
to
kind
of
show
you
a
little
demo
of
what's
happening,
but
essentially
just
to
kind
of
show
you.
We
have
a
couple
examples
that
are
nice
in
there.
F
Oh,
so
one
of
the
the
things
that
we've
also
explored
quite
in
depth
was
wasn't
compilation
and
figuring
out
how
we
could
improve
the
performances
inside
of
browsers,
it's
still
very
unstable
and-
and
you
know,
we've
we've
gotten
a
ton
of
learnings
and
this
probably
needs
a
bit
more
work
to
get
things
very
stable
in
that
sense
and
use
that
in
production.
But
the
performance
is
it's
pretty
nice
better
than
the
js
version,
and
so
I
think,
there's
different.
F
It
depends
on
the
use
cases
and
things
like
that,
but
it
could
be
quite
interesting
for
some
projects
and
so
and
then
there's
a
data
transfer
protocol
example.
Basically,
that's
showing
exac
an
example
of
how
to
implement
another
protocol
on
top
of
graphsync.
In
this
case,
the
filecoin
data
transfer
protocol,
which
is
used
for
retrievals,
and
things
like
that.
So
this
is
basically
a
full
implementation
of
the
data
transfer
protocol.
F
It's
not
completely,
it
doesn't
have
feature
parity,
it's
probably
missing
a
few
things,
but
it
really
kind
of
shows
how
you
can
compose
other
protocols
with
graph
sync
and
then
it
impulse
implements
things
like
pool
and
push,
and
you
can
see
how
you
can
also
how
you
can
integrate
it
in
a
bigger
application.
F
So
you
have
things
you
know
like
a
cli
and
you
can
see
how
the
commands
work
and
how
you're
passing
messages
to
the
swarm
and
then
basically
how
you're
creating
vouchers
and
things
like
that.
So
it's
quite
useful.
If
you're
really,
you
know
looking
to
build
your
own
kind
of
retrieval
protocol
and
things
like
that,
and
it's
actually
quite
nice.
I
mean
it's
really
enjoyable.
We
find
it's
really
enjoyable
to
to
develop
like
this
and
rust.
F
It's
just
been
really
initially
it's
a
little
bit
of
a
learning
curve,
but
then
it's
it
just
feels
quite
nice
and
everything
just
feels
way
more
robust.
In
a
sense,
I
don't
know
if,
like
the
the
forest
guys
have
ever
shared
their
experience
and
why
they
went
for
rust
initially,
but
I
in
in
our
opinion
it
just
feels
more
predictable.
There
was,
for
example,
in
our
go
implementation.
F
we were definitely battling with some race conditions and things like that that were sometimes a little hard to figure out, whereas with Rust, none of this really happened. We had one race condition at some point, but it was easier to flag. So things like that were just more stable, in a sense.
F
So
all
this
is
compatible
and
fully
interoperable
with
the
go
version
of
data
transfer
in
graph
sync
and
with
the
js
version
of
data
transfer
and
graph
sync,
and
so
now,
I'm
I'm
going
to
share
just
kind
of
a
a
few
operations
that
I'm
I'm
running
with
this
example.
So
this
is
a
go
provider
that
I'm
running
that
runs
go
data
transfers.
F
So
the
the
goal
is
really
to
show
compatibility
and
see
how
it
will
work.
So,
for
example,
here
I'm
using
a
push
operation,
meaning
that
the
the
node,
the
the
rust
node
is
gonna,
basically
trunk
this
file
and
then
request
a
push
to
the
go
provider.
So
this
is
basically
what
happens
when
you
send
content
to
a
filecoin
provider.
F
You
know
to
a
five
coin:
miner,
for
example
in
some,
so
I
don't
think
boost
use
it
anymore,
but
the
regular
one.
So
if
all
goes
well
so
here
we
just
sent
it
so
we
created
the
the
dag
and
then
sent
it
to
that
the
other
peer,
and
so
now
we're
good.
And
so
now
I
can
show
you
an
example
of
running
pool.
Just
gonna
need
to
update
the
peer
address,
but
this
is
the
same
cid
here
we
go
and
then
just
run
it
boom.
F
So
we
got
it
here,
I'm
just
running
cargo
in
unoptimized
mode.
So
that's
why
the
transfer
looks
a
little
slow,
but
when
you're
running
it
in
a
fully
optimized
build,
it
goes
way
faster
and
so
one
of
the
things,
for
example,
we're
still
missing.
There's
a
there's,
a
lot
of
leeway
for
improving
performance,
and
actually
it's
been
quite
interesting.
F
Where
we've
kind
of
found
this
bottleneck
at
the
basically
at
the
seaboard
decoding
level,
so
decoding
seabor
messages
on
the
the
incoming
substream
is
actually
slower
on
the
rust
version
than
on
the
go
version,
which
is
a
little
sad,
so
we're
still
trying
to
figure
out
where
and
how
it's
happening.
It
just
seems
that
maybe
the
the
rust
the
survey
and
rust
library
for
decoding
c
boards
is
just
not
that
good
or
just
we.
F
It
just
functions
weirdly
with
the
with
the
async
and
the
the
p2p
async
substream,
and
things
like
that.
So
it's
kind
of
to
investigate
still
so
be
curious
to
like
compare,
I'm
still,
basically,
writing
benchmarks
right
now
to
really
compare
how
things
are
doing
at
different
levels
and
and
see
how,
where
the,
what
we
can
do
to
improve
this
and
probably
bubble
these
issues
up
with
other
teams
and
see
what
they're
thinking.
F
But
this
it's
been
really
like
good,
so
we
basically
know
what,
where
things
are
blocking
and-
and
I
think
there's
a
lot
of
actionable
items
to
really
improve
things
still,
but
right
now,
it's
quite
usable,
it's
quite
stable!
It's
and
I
mean
personally,
we
really
enjoy
working
with
these,
like
this
little
libraries
so
yeah.
I
think
it
just
it's
a
little
scary.
F
If
you
don't,
you
have
no,
you
know
rust
experience,
but
playing
around
with
it,
and
once
you
you
got
the
you,
take
the
pill,
it's
just
you're
you're,
basically
convinced
it's,
it's
quite
nice,
so
yeah.
F
A
A
All right, thanks so much. One question came through on the chat; do you want to read it out?
E
Sure. I was just asking about the difference in performance between the Rust and the JavaScript implementations. You said it was a bit faster, but do you know how much faster, I mean?
F
The native Rust is obviously way faster, but the Wasm implementation is twice as fast. So, I mean, it's kind of sad, a little bit, because, off the top of my head, the JavaScript implementation goes up to 18 megabytes per second, which is kind of slow already, and then with Wasm we get up to 30 megabytes per second, which is definitely quite nice. The only challenge, obviously, is you have a lot of,
F
Essentially,
you
know
you're
still
using
some
some
bindings
with
a
javascript
apis,
so
like
the
websocket
transport
and
things
like
that,
and
I
think
these
still
slow
down
a
little
bit
on
some
levels,
but
I
think
I
believe
that
if
we've
solved
some
of
the
the
issues
with
decoding,
for
example,
we
could
really
improve
that
yeah.
We
could
really
improve
the
performance.
E
I've got one very quick follow-up question, since Mike didn't ask: you mentioned you implemented three different things in Rust. Was there any difference in performance between those, or was it more just usability that you were optimizing for?
F
Yeah, well, definitely the first implementation was not good in terms of performance, but it was also, you know, kind of spaghetti. It was really about learning, and taping things together to really understand and learn more. I mean, rust-libp2p is not that well documented either.
F
So
you
there's
a
little
bit
of
a
learning
curve
where
you
go
through
the
code
base
and
you
play
around
with
things
and
so
based
on
that
we
like
had
a
very
you,
know,
naive
implementation,
where
we
just
like
stuck
a
channel
somewhere
and
then
send
some
some
output
in
there
and
then
you'd
use
the
channel
to
like
receive
the
things,
whereas
you
know
there's
a
whole
stream.
embedded inside of the swarm that you can use. So we were just not doing things very well in terms of understanding the initial philosophy around the design of the whole library. Then the second version was very radical, where we took apart rust-libp2p and only used some modules that we liked and needed; for example, we only used the multiplexer and then the transport. So we basically removed the swarm and everything in there, because we were like, "this is all adding so much fluff."
F
It actually abstracted away a lot of the things that we ended up implementing, and it was just a lot of code and a lot of channels and threads going in all directions, which made it pretty hard and scary for developers to look at. So now, after all these learnings, when we built it, when we created it, it's not much code.
F
So it's really nice that way, because you have very little code: if you look at the source, it's just five files. There's basically your message encoding, your traversal logic, and then putting it all together. You end up having something that's way easier to maintain and that just looks really nice, so it's way more satisfying in that sense, for the developer experience and maintaining it and everything.
C
Yeah, I'm just curious, mostly. I have quite a bit of experience with Golang, but I've never built anything in Rust, and so I was curious: it seems like Go is more widely used in Ethereum at least (I'm not sure about Filecoin), so I wanted to hear why you think that's the case, and why you think that Rust (you mentioned that it's more predictable and leads to potentially fewer race conditions) is that way.
F
Yeah, I mean, from my perspective, it's really been that Go allows you to do a lot of things. You know, you have all these native goroutines and all these different primitives in there that enable you to build pretty complex code, and I feel like all of these basically enable you to build pretty complex code that then becomes pretty hard to reason about.
F
In Rust you can import channels, and you have a lot of actual modules that reproduce a lot of the Go native primitives, but if you just use Rust simply, you end up with a much more crude way of doing things, basically. It forces you to design things in a way that's more simple and rudimentary, where you're basically forced by the compiler and the idioms of the language to build something basic.
F
That's
how
I
actually
I
made
it.
You
know
when
I
when
I
was
writing
a
lot
of
go.
I
was
doing
a
lot
of
fancy
things
and
felt
you
know
with
channels
and
all
those
things
and
and
and
selectors
and
things
like
that
and
so
yeah
I
just
ended
up
really
complex.
I
I
think
I
mean
I'm
not,
you
know
the
best
goat
developer
either.
So
I
feel
like
there's.
Definitely
like
really
good
go
that
you
can
build.
That's
really
safe,
but
like
rust,
kind
of
forced
me
right
off
the
bat.