From YouTube: Domain-specific IPFS content indexer with Aqua - @folex - Content Routing 1: Performance
Description
Domain-specific IPFS content indexer with Aqua - presented by @folex at IPFS þing 2022 - Content Routing 1: Performance - https://2022.ipfs-thing.io
A: So what I'm going to show you today is how it's possible to think about writing distributed algorithms and experimenting and iterating quickly on them, without going into the nitty-gritty details of sockets and all that stuff; ignoring all of that and trying to think more declaratively.
A: What I'm not going to talk about today is how to build a production-ready, really decentralized IPFS indexer; that's not the goal. I don't have a solution for that, obviously. I'll try to show you how to experiment quickly, how to think in terms of distributed algorithms and ignore everything that doesn't matter.
A: So what do we have? It's kind of hard for me to talk, because we didn't have an introductory talk on Fluence, and no one explained what Fluence is and how Aqua works, so I'll try to do that real quick. We have a peer-to-peer network running on rust-libp2p, and it has several layers: a networking layer, a computation layer where you can host microservices, and an orchestration layer. Orchestration is the most interesting one; you can do things like this.
A
You
can
have
like
a
code
that
orchestrates,
for
example,
like
this,
that
orchestrates
distributed
system,
you
can
like
say,
hey,
go
through
those
providers
and
I
upload
something
to
all
of
them
to
any
every
provider
like
you
can
have
do
this
in
parallel.
You
can
like
wait
for
results.
You
can
handle
errors.
You
can
check
whether
something
exists
or
not.
A
You
can
call
external
services
you
can
integrate
like
we
have
integrated
ipfs
natively,
but
you
can
integrate
any
other
network
into
the
fluence,
because
we
can
call
like
external
binaries
that
are
on
hosts.
So
as
long
as
you
have
clis
on
hosts,
you
can
integrate
any
networks
and
join
them
and
effectively
make
influence
a
network
of
networks.
A
So
this
how
a
distributed
algorithm
might
work
this
one.
What
this
one
does.
It
goes
through
some
subset
of
the
fluence
network
talks
to
the
ipfs
nodes
running
on
them
and
upload
the
files
there
and
then
checks
whether
they
exists
doing
it
in
parallel
and
waiting
for
some
time
out
on
or
a
number
of
results.
At
the
end,
I
will
go
through
details
a
bit
later,
just
trying
to
sketch
a
whole
picture
here
so
yeah,
that's
how
we
can
like
work
on
distributed
algorithms.
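The pattern just described, fan out in parallel and settle on either a timeout or a quorum of results, can be sketched outside Aqua too. A minimal TypeScript sketch, assuming nothing about the Fluence API (all names here are mine):

```typescript
// Run tasks in parallel and collect results until either `quorum` results
// have arrived or `timeoutMs` has elapsed; return whatever was collected.
async function waitForQuorum<T>(
  tasks: Promise<T>[],
  quorum: number,
  timeoutMs: number
): Promise<T[]> {
  const results: T[] = [];
  return new Promise((resolve) => {
    const timer = setTimeout(() => resolve(results), timeoutMs);
    const finish = () => { clearTimeout(timer); resolve(results); };
    for (const task of tasks) {
      task
        .then((r) => {
          results.push(r);
          if (results.length >= quorum) finish();
        })
        .catch(() => { /* a node being down is fine; just skip it */ });
    }
  });
}
```

Callers get back the partial result set, so slow or dead nodes never block the whole algorithm; that is the declarative "wait for N or timeout" behavior the talk attributes to Aqua.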
A: One of the goals I have today is a request for comments, ideas, and solutions to open problems. If we can orchestrate easily and iterate on distributed algorithms, what do we do next? How do we approach actually decentralizing the indexers, decentralizing different services?
A
How,
for
example,
instead
of
like
having
maybe
instead
of
having
one
big
indexer
for
the
whole
network,
we
can
have
subnets
and
per
domain
indexers
and
every
app
developer
or
whoever
does
something
interesting,
can
have
an
indexer
for
his
own
domain
for
his
own
application
and
use
it
for
use
it
with
his
users
and
not
share
it
globally
with
everyone,
so
not
storing
petabytes
of
data,
but
just
gigabytes.
Maybe
that
will
work.
A
So
I
have
a
kind
of
workshop
today
and
what
it
means
that
you
can
actually
like
go
with
me.
You
can
clone
this
repository.
I
have
even
somewhere
a
qr
code
that
you
can
scan
if
you,
if
you
wish,
but
it's
pretty
easy.
It's
on
github,
slash,
fluence
labs,
slash
indexer
workshop.
A
Well,
pretty
easy
to
remember.
I
guess
this
to
me:
it
is
yeah
and
you
can
clone
do
npm,
install
and
just
follow
along
should
work.
Hopefully
it
should
work.
A
So
since
I
like
I'm
talking
about
content
indexer,
it
does
index
content
what
it
effectively
does
it's
just
stores
for
every
cid
that
I
upload
it
stores
what
nodes
now
store
the
file
like
it.
A
Has
this,
like
association
between
cid
and
the
list
of
multi
addresses
that
are
ipfess
nodes
that
would
allow,
for
example,
some
app
developer
to
have
a
set
of
ipfs
nodes
store,
a
file
and,
for
example,
I
have
10
ipfs
nodes
that
I
want
to
use
as
a
data
plane,
but
I
want
to
store,
like
avatars
on
first
three
nodes
with
replication
factor:
three,
all
the
like
databases
on
fif
and
six
node
and
stuff.
Like
that,
I
can.
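The index just described is simply a CID-to-multiaddress association. A minimal sketch of that structure (the class and method names are mine, not the workshop's):

```typescript
// An association from CID to the multiaddresses of the IPFS nodes
// currently known to store that file.
class ContentIndex {
  private providers = new Map<string, Set<string>>();

  // Record that `multiaddr` now stores the file with this CID.
  add(cid: string, multiaddr: string): void {
    if (!this.providers.has(cid)) this.providers.set(cid, new Set());
    this.providers.get(cid)!.add(multiaddr);
  }

  // Remove a provider, e.g. after detecting that it lost the file.
  remove(cid: string, multiaddr: string): void {
    this.providers.get(cid)?.delete(multiaddr);
  }

  // All known providers for a CID.
  providersOf(cid: string): string[] {
    return [...(this.providers.get(cid) ?? [])];
  }
}
```

In the workshop this mapping lives inside a Rust service (backed by SQLite, as described later), replicated across the subnet's nodes rather than held in one process.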
A
Okay,
so
let's,
let's
just
try
and
do
that,
how
that
will
like
will
work,
how
the
whole
process
works.
We
have
a
like
a
network,
peer-to-peer
network,
and
I
have
my
laptop
here.
A
Yeah,
okay
and
I
say
hey
just
for
this
example.
A
I
will
use
like
those
nodes
for
just
because
I
had
a
beer
with
their
owners
and
we
decided
that
we
will
work
together
and
we
can
have
a
human
agreement
about
that
and
what
I
want
to
do
when
I'm
like
I'm
providing
a
web
app
for
people,
and
whenever
someone
wants
to
upload
a
picture
to
my
application,
he
will
use
aquascript
that
will
go
through
the
epiphyse
nodes,
decide
with
what
replication
factor
to
store
that
file
and
just
upload
it
to
different
ipfs
nodes
and
on
upload
it
will
automatically
add
as
a
part
of
uploading
process.
A
It
will
out
in
a
push
in
a
push
mode.
It
will
automatically
store
that
this
file
is
stored
on
this
node
on
all
these
nodes
and
it
will
use
the
whole
network.
A
This
part
of
the
network
to
have
services
a
service
that
store
the
index
so
actual
iphone's
nodes
that
like
store
the
files
are
one
part
of
the
network
and
the
whole
other
network
can
be
used
to
store
information.
The
index
yeah.
So
let's
go
for
the
code
and
see
if
it's
like,
if
it
makes
any
sense,
yeah
so.
A
Uploading
a
file
first,
we
have
like.
I
said
that
we
have
like
a
domain.
It
could
be
any
anything
like.
I
want
to
to
call
my
my
application
index,
for
example,
or
I
could
call
it
like
telegram
or
whatever,
and
I
want
to
to
associate
some
nodes
with
this
domain
and
say
hey,
I
will
use
use
those
nodes
for
my
application.
I
may
later
pay
them
somehow
or
just
have
some
different
agreement
with
them,
but
I
will
use
them.
They
will
serve
me.
A
That's
kind
of
solid
question
that,
let's
just
assume
that
it
is
and
we
use
a
kind
of
first,
we
started
to
use
like
dht
for
that
goal
to
store
the
routing
information.
But
then
we
understood
that
there
is
kind
of
tragedy
of
commons
happens
where
everyone
uses
dht,
and
why
would
nodes
like
provide
their
dht
to
store
the
routing
information?
They
will
just
evict
it
and
you
won't
have
it
anymore.
A
So,
instead
of
that
this
I
have
an
agreement
with
nodes
and
they
will
serve
the
indexing
service
that
I
implemented
in
rust
and
deployed
to
them
and
those
nodes
are
stored
in
all
over
the
network.
I
can
always
resolve
them
by
my
domain
name,
and
so
I
will
always
like
know
where
those
nodes
are
and
how
to
access
my
indexing
service.
A
Okay.
So
once
I
have
those
nodes
that
have
the
indexing
service,
I
can
actually
go
and
upload
them
a
file
to
to
their
ipfs
nodes
and
store
information
about
where
the
file
stored
to
index,
so
how
that
works.
A
I
go
through
every
node
that
I
work
with
in
parallel
and
I
retrieve
its
multi
address
of
its
ipfs
node
and
I
say:
hey
okay,
now,
let's
upload
a
file
that
I
have
locally
on
my
laptop
to
every
of
this
node
of
this
nodes
so
and
that
uploading
mechanics
is
implemented
as
implemented
as
a
javascript
code
that
I
have
locally
on
my
laptop,
so
it
transfers
file
to
every
apis.
A
Node,
that's
running
on
the
fluence
network,
from
my
local
laptop
and
the
beauty
of
this
like
script,
is
that
it
could
be
a
file
not
from
my
laptop.
It
could
be
a
file
like
from
any
other
computer.
Instead
of
uploading
it
locally
to
those
nodes,
I
could
retrieve
it
from
some
other
remote
node
directly
to
the
nodes
that
are
in
the
network
and
that
would
like
change
just
a
code.
Just
a
little
bit
like
here.
A
The
code
says:
go
to
my
local
laptop
through
the
entry
point
that
I'm
using
to
connect
to
the
network
and
upload
the
file
to
remote
ipfs
node.
I
could
change
it
to
some
more
complex
logic
and
say,
for
example,
if
I
know
that
the
file
stays
on
my
entry
point
node,
that's
called
hostpure
id.
A: Okay, so once we have this, the thing just goes through several IPFS nodes and uploads a file there, and then it waits for the files to be uploaded; or, if some of the nodes go down, they might not answer me, so it waits for a timeout and then goes on. And then we go to the index service and store that information into every index that we have; we have the indexing service distributed across the network.
A
So
it's
not
like
a
single
index
instance
but
like
it's
replicated
and
I
have
like
tens
of
them
out
there.
So
if
some
any
of
it
goes
down,
that's
fine.
I
will
still
have
my
replication
factor
that
I
want
so
yeah
and
that
indexing
service
is
implemented
in
rust
and
compiled
to
webassembly
and
deployed
to
all
the
nodes.
A
So
it's
possible
for
anyone
to
host
that
index
service
by
just
retrieving
the
webassembly
file
from
ipfs
and
deploying
it
to
his
own
node
and
just
using
it
just
becoming
a
part
of
that
indexing
protocol
kinda
and
serve
the
purpose
of
the
application
that
I'm
building
so
that's
kind
of
a
push
model.
Instead
of
pull
not
so
every
user
to
upload
a
file
will
run
that
script
in
their
browser
or
or
on
their
mobile
device,
and
every
time
he
uploads
a
file.
He
pushes
the
data
to
index.
A
So,
instead
of
like
waiting
for
discovery
to
happen
or
traversing
the
network
constantly,
it's
done
right
away
on
upload.
So
you
can
this
way
clients
become
like
orchestrators
of
the
backends.
They
use.
A: It will start with the aqua run CLI tool. What it does: it says, hey, I have a function somewhere in these source files; the name of the function is "upload to subnet"; take the local path, this very path, read the file into memory, and send it to the IPFS nodes. It also pins aggressively, because I had some troubles with pinning.
A
As
we
have
seen,
there
was
a
like
cycle
through
the
provider
nodes
that
we
can
upload
to
and
once
it
uploaded
it
sent
a
message
back
to
my
local
laptop
with
a
login
instruction
to
say
that
hey
we
have
uploaded
to
this.
So
this
way
we
can
report
progress.
We
can
stream
data
and
progress
and
anything
we
want
send
some
events
along
the
execution.
A
So
we
don't
just
wait
for
response
to
happen.
We
might
not
even
have
a
response,
that's
totally
possible
to
like
you
can
program
it
in
the
way
that
there
is
no
response
to
my
client.
It
just
goes
to
the
network.
Does
some
effects
there
and
stays
no
need
to
to
give
me
a
response?
If
I
don't
want
it,
but
in
this
particular
case,
because
I
wanted
to
show
the
progress,
I
wanted
the
response,
so
I
received
the
first
log
that
said:
hey,
hey,
we
upload
to
the
file.
A: We have an association between the CID, this one, and those providers, and it's all stored on several copies of the indexer service out there. So what can we do now? We can always retrieve the index, right.
A
Yeah,
so
you
see
it,
it
calls
a
function
and
passes
environment
variable
cid
there
and
I
didn't
pass
it
on
the
first
time.
Instead,
I
was
passing
file
okay,
so
we
have
the
same
index.
So
it's
pretty
fast.
You
can
just
like
say,
hey,
give
me
an
index
and
every
provider
will
start
streaming
your
results
and
you
can
use
them
as
they
come.
So
you
don't
have
to
wait
for
all
the
responses
from
all
the
nodes.
You
can
just
stream
everything
you
yeah.
A
We
have
extremes
in
the
language
and
they
allow
you
to
allow
you
to
have
concurrent
executions
that
happen
when
new
data
arrives.
Okay,
great
so
now
we
have
an
index,
but
what,
if
some
of
the
nodes
they
lose?
The
data
like
something
bad
happens,
and
someone
removes
the
data.
Let's
just
try
to
do
that.
A
We'll
just
unpin
a
file.
I
know
that's
not
very
secure
to
provide
rpc
to
outer
world,
but
hey.
Why
not?
A
Great
so
now
we
have
an
index,
that's
not
correct!
What
do
we
do
now?
There
is
a
way
to
calculate
who's
absent,
given
the
index
so
like
who's
absent
in
a
sense
that
which
nodes
has
lost
have
lost
the
file.
A
I
will
show
how
how
that
code
works
in
a
moment,
because
it's
pretty
interesting,
let's
copy
this
id
again
now
we
I
want
to
like
get
a
list
of
nodes
that
actually
lost
that
file,
and
now
we
have
unpinned
the
file
from
two
nodes,
mx
and
zq,
and
the
aquascript
detected
that
they
are
exactly
absent.
How
did
that
work?
A: It goes to each node and asks: hey, does the file exist? That was kind of the hard part, because IPFS doesn't have an API for file existence, so I had to implement it like this, in JS: I just do a pin ls, and if the file is unpinned, I conclude that it doesn't exist. So what happens in this script?
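The existence check and the absent-node calculation can be sketched together. This is a hedged approximation of the idea, not the workshop's actual JS (the helper types are mine; a real implementation would call the IPFS node's pin-listing API):

```typescript
// Given a way to fetch a node's pin list, approximate "does this node have
// the file?" as "is the CID pinned there?", then compute which providers
// from the index are absent.
type PinLister = (multiaddr: string) => Promise<Set<string>>;

async function fileExistsOn(
  pins: PinLister,
  multiaddr: string,
  cid: string
): Promise<boolean> {
  // Unpinned is treated as "the file doesn't exist" on that node.
  return (await pins(multiaddr)).has(cid);
}

async function findAbsent(
  pins: PinLister,
  providers: string[],
  cid: string
): Promise<string[]> {
  // Ask every provider in parallel, as the script in the talk does.
  const checks = providers.map(async (p) => ({
    p,
    has: await fileExistsOn(pins, p, cid),
  }));
  return (await Promise.all(checks)).filter((r) => !r.has).map((r) => r.p);
}
```

With the unpinned nodes from the demo, `findAbsent` would return exactly those two peers.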
A
Is
I
go
through
all
the
providers
and
from
my
laptop
I
ask
them:
hey:
do
you
have
this
file?
Do
you
have
this
file,
and
you
know
that's
not
very
efficient
way
to
do
that,
because
what,
if
my
laptop
network,
is
bad
yeah,
but
that's
a
way
to
iteratively
experiment
on
an
algorithm
like
you,
you
can
start
simple.
You
can
start
implement
by
implementing
a
service
that
works
with
ipfs
in
javascript
on
your
laptop.
A
But
then,
if
you
think
hey,
this
is
a
good
algorithm
that
works
well,
but
maybe
not
very
efficient
because
of
all
these
topology
hubs.
You
can
just
take
that
javascript
service,
rewrite
it
in
rust
or
any
other
language
that
compiles
to
webassembly,
deploy
to
the
network
and
change
that
script.
Just
a
little
bit
like
that,
for
example,
if
providers,
if
every
provider
has
a
service
that
talks
to
ipfs,
we
could
just
do
that
and
it
will
work.
A
And
it
will
just
work
and
we
already
have
designed
the
algorithm.
We
have
like
iterated
on
pinning
and
pinning
made
some
decisions
and
when
we
want
to
make
it
more
production,
really
more
like
a
better,
we
can
just
change
a
little
bit,
but
the
whole
logic
will
stay
the
same
okay.
A
So
we
did
check
that
I,
where
file
exists
and
where
not
and
we
send
streamed
the
data
back
to
my
laptop
and
we
received
like
this
information.
So
now,
what
do
we
do?
How
do
we
repair
the
index?
There
are
like
two
ways
to
do
that,
like
logical
ways.
First
one
is
to
alter
the
index
and
remove
nodes
that
have
lost
the
file
from
the
index
and
another
one.
A: Now I want to show you a difference in topology. When I was running the uploading script, it worked like this: it went to one of the entry point nodes in the network, then it sent messages to nodes on the network, and they said: hey, I have an IPFS node, here's its address, you can upload something there; and I did that uploading directly from my laptop to their IPFS APIs.
A
That's
one
way
of
doing
it,
but
when
I'm
I'm
doing
like
a
repair,
what
I
I'm
actually
I'm
doing
a
different
topology
approach.
Like
I'm
commanding
the
network,
hey,
can
you
heal
itself,
here's
a
script?
How
to
do
that?
Can
you
just
download
the
file
to
the
nodes
that
have
lost
it
from
the
ones
that
still
have
it,
so
we
can
do
both
ways
and
initially,
when
I
was
like
just
developing
this
example,
I
started
to
do
repair
very
same
way
as
I
did
ip
fast
upload.
But
then
I
thought:
hey
this.
A: So, as I said, we retrieve the index, we check which nodes still have the file, and we go through all the nodes that lost the file, in parallel, and we try to download the file from every node that still has it. So we're basically doing two iteration loops, two loops: the first one through the absent nodes, and the second one through the nodes that still have the file; and on the absent node, you see this "on" instruction.
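The two nested loops of the repair step can be sketched like this (the `Downloader` callback is hypothetical; in the workshop this is Aqua's `on` instruction executing on the absent peer):

```typescript
// For every node that lost the file, try the surviving providers in turn
// until one download succeeds; return the nodes that were healed.
type Downloader = (
  onNode: string,     // the absent node that should fetch the file
  fromNode: string,   // a provider that still has it
  cid: string
) => Promise<boolean>;

async function repair(
  download: Downloader,
  absent: string[],
  present: string[],
  cid: string
): Promise<string[]> {
  const healed: string[] = [];
  // Outer loop: absent nodes, in parallel, as in the talk.
  await Promise.all(
    absent.map(async (node) => {
      // Inner loop: surviving providers; stop at the first success.
      for (const source of present) {
        if (await download(node, source, cid)) {
          healed.push(node);
          break;
        }
      }
    })
  );
  return healed;
}
```

As the talk notes next, the inner loop over every surviving provider is wasteful when thousands still hold the file; that is exactly where the chunked-retry heuristic below comes in.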
A
So
if
there
are
like
two
absent
nodes
and
multitude,
like
thousand
node,
who
still
have
the
file
that
wouldn't
be
very
efficient,
but
I
can
like
iterate
on
that
right.
I
can
put
some
heuristics
here
and
I
can,
for
example,
have
a
local
service
written
in
javascript
that
for
every
interaction
with
any
ipv5
node
stores,
some
metrics
on
their
latency,
and
I
can
try
to
use
that
information
to
wait
who
to
ask
and
who
not
to
ask
to
restore
the
file.
A
And
if
I
decide
that
hey
this
is
a
great
approach
having
the
metrics
okay,
I
can
move
that
metric
server
to
the
network,
just
rewriting
it
to
webassembly,
and
then
everyone
can
do
the
same,
and
I
can,
as
I
said
my
goal
here,
is
not
to
show
you
a
solution
to
greatest
indexer
or
to
just
any
indexer.
My
goal
is
to
show
you
the
thinking
that
you
can
write
this
simple
code
that
operates
across
multitude
of
nodes.
B: How would you... say, pick at random some set of the present nodes, and then retry until it succeeds, with some timeout, and then try others?
A
Yeah,
you
could
do
that.
We
we
don't
have,
for
example,
here's
the
limitation
that
not
limitation
like
right
now
aqua
doesn't
have
anything
to
manipulate
lists
yeah
and
that
could
be
not
very
like
that
could
be
very
uncomfortable
to
to
program
with.
But
we
can
just
implement
a
service
that
can
like
we
can
call
at
least
operations
like
and
that
can
be
a
function
get
random.
A
And
here
I
would
say,
hey
give
me
a
random
random
portion
of
of
the
present
note
right.
A
Like
present
right
and
instead
of
using
present
here,
why
do
you
don't
like
it?
Okay,.
A
And
instead
of
using
like
the
whole
present
array
here,
I
I
would
use
this
random
part
right.
It
would
yeah,
that's
fine,
just
some
type
safety
here
and
you
can
expre
and
you
you
asked
a
little
bit
more
complex
questions
like
how
to
try
like
some
portion,
then
another
portion,
then
another
portion
by
timeout,
right
yeah.
You
can
still
code
that
that
would
take
some
time
and
some
mind-bending,
because
you
on
every
portion
you
have
to
think
about
how
to
exactly
do
time-outing
and
then
take
another
portion.
A
I
would,
I
guess,
do
something
like
not
get
random
but
like
get
random
chunks,
so
it
would
return,
not
one
list
of
providers,
but
several
and
for
every
list
I
would
iterate
and
try
it
and
if
it
doesn't
succeed
I
would
get
next
list.
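That chunked-retry idea can be sketched as two small helpers (the names `getRandomChunks` and `tryChunks` are mine, mirroring the hypothetical list-service the speaker describes):

```typescript
// Fisher-Yates shuffle, so chunk membership is random each run.
function shuffle<T>(items: T[]): T[] {
  const a = [...items];
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

// Split the providers into random chunks of at most `chunkSize`.
function getRandomChunks<T>(items: T[], chunkSize: number): T[][] {
  const shuffled = shuffle(items);
  const chunks: T[][] = [];
  for (let i = 0; i < shuffled.length; i += chunkSize) {
    chunks.push(shuffled.slice(i, i + chunkSize));
  }
  return chunks;
}

// Try one chunk at a time (each attempt can carry its own timeout)
// until an attempt succeeds.
async function tryChunks<T>(
  chunks: T[][],
  attempt: (chunk: T[]) => Promise<boolean>
): Promise<boolean> {
  for (const chunk of chunks) {
    if (await attempt(chunk)) return true; // stop at the first chunk that works
  }
  return false;
}
```

The per-chunk `attempt` is where the earlier quorum-or-timeout wait would plug in, so failing over to the next chunk happens only after the current one times out.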
B
Against,
for
example,
so
in
this
section,
where
you're
just
being
able
to
run
some
code
and
all
those
is
very
useful,
being
able
to
take
a
set
of
candidate
nodes
sort
them
by
some
quality,
where
you
use
some
features
about
them
to
be
able
to
arrive
at
a
certain
set
and
then
try
try
to
run
something
on
them,
where
you
can
treat
all
of
them.
As
the
same.
B: Either with some amount of parallelism, or some bounded wait time.
A
Yeah
yeah
and
we're
like
this
is
a
living
language.
It's
like
become
became
stable
and
fast
a
few
months
ago,
so
it
still
has
missing
some
features
like
array,
manipulation
and
what
you
describe
like
just
doing
the
syntactical.
A
It
all
could
be
implemented,
but
it
would
take
time,
but
we
sure
need
some-
have
to
add
some
some
syntax
sugar
to
that.
So
so
it's
easier
to
to
do
manipulations,
you're
talking
about
and
about
like
quality
metrics
and
like
weighted
sorting.
We
have
like
thoughts
about
how
to
approach
protection
from
cyber
attacks
and
by
waiting
and
having
some
reputation
system
and
we're
like
right
now,
where
I
was
doing
like
get
providers
here
in
pin
sets
file
here.
A
This
thing
would
also
integrate
the
truss
graph
and
it
would
return
weights
for
every
peer
and
that
weight.
It
would
be
subjective
for
my
peer
or
for
nodes
that
I
trust,
for
example,
yeah.
I
trust
protocol
labs
and
practical
apps
say:
hey
those
peers
are
nice
and,
like
I
don't
know,
parity
says
their
spirit.
Peers
are
nice
too,
and
I
trust
them
as
well,
and
some
I
would
have
like
weights
for
every
peers
assigned
indirectly.
A
I
didn't
know
about
those
peers,
but
you
did
know
party
did
know
so
I
I
have
weighted
them
and
I
can
use
them
for
like
prefer
them
over
some
random
pairs
yeah.
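The indirect weighting just described can be illustrated with a toy one-hop model. This is not Fluence's trust graph, just a sketch of the idea that peers endorsed by someone I trust inherit a discounted weight:

```typescript
// My direct trust plus one hop of endorsements: a peer vouched for by a
// truster I weight at `w` gets weight `w * discount`, keeping the best
// weight if several paths reach it.
function weigh(
  direct: Map<string, number>,           // my direct trust: peer -> weight
  endorsements: Map<string, string[]>,   // truster -> peers they vouch for
  discount = 0.5
): Map<string, number> {
  const weights = new Map(direct);
  for (const [truster, peers] of endorsements) {
    const base = direct.get(truster) ?? 0;
    for (const peer of peers) {
      const w = base * discount;
      if (w > (weights.get(peer) ?? 0)) weights.set(peer, w);
    }
  }
  return weights;
}
```

Sorting candidate providers by these weights is one way to "prefer them over some random peers" when choosing who to ask during upload or repair.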
A
Where
the
awesome
code
gets
applied,
yeah,
let
me
show
how
do
I
let
me
just
remove
all
that
stuff.
Let
me
show
how
I
did
deploy.
Actually
when
I
was
like
doing
this
example,
this
workshop,
we
have
some
tools
that
I'll
assist
you
in
deploying
and
managing
configuration
files
and
ids
and,
like
this
kind
of
cloud
control,
plane
cli
stuff,
but
it
didn't
have
all
the
features
that
I
wanted
you
to
have.
A
So
I
kind
of
rewrote
it
in
several
hours
in
aqua,
and
this
is
the
deployment
system
that,
like
deployment
function,
that
takes
a
list
of
peers
and
a
local
service
that
could
say
where
to
get
webassembly
files
and
for
like
I
have
this
json
file
that
describes
the
service
that
I'm
deploying
it's
the
rust
service.
That
has
two
two
modules.
One
is
the
indexing
and
another
one
escolite.
A
So
it's
basically
let
me
show
the
code,
maybe
I'm
answering
to
detail,
but
still,
I
think,
that's
pretty
interesting.
So
I
have
this
trust
code.
It's
an
indexer
code
that
just
uses
sqlite
underneath
it
could
use
instead
in
memory
hash
table,
but
I
think
using
this
collide
here
more
versatile,
because
you
can
actually
download
the
sqli
database,
put
it
to
ipfs
and
replicate
across
the
indexers
I
didn't
get
to
that,
but
it's
still
possible.
A
So
that's
a
service
written
in
rust
and
that's
a
configuration
file
that
says
how
to
build
that
service
from
web
assembly
files
how
to
deploy
it,
and
this
is
a
script
that
can
deploy
that
service
to
that
set
of
peers
and
that
set
of
peers
could
be
anything
and
how
it
works.
It
goes
through
all
the
modules
that
I
have
in
config,
two
modules,
uploads
uploads
them
to
every
node
that
I
have
here
in
peers
and
then
just
creates
a
server
a
service
from
it
so
like.
Where
does
it
create
yeah?
A
So
we're
going
through
every
peer
that
we
have
uploaded
modules
to
like
we
upload
modules
to
ipfs
and
then
every
peer
downloads,
the
modules
from
ipfs
takes
them
as
a
list
and
just
creates
a
service
from
that
list.
This
is
basically
a
configuration
file
for
a
service
and
then
appear
can
create
a
service
from
that
configuration
file
from
webassembly
modules
that
live
on
ipfs,
and
you
can
like
create
that
service
call
function
and
kill
it
or
it
can.
A
It
can
be
a
long-running
service
until
you
want
it.
So
this
way
you
can
manage
like
any
strategy
of
deployment,
what
you,
whatever
you
can
imagine
like
if
you
want
to
deploy
on
every
second
node
in
the
network,
and
they
allow
you
to.
You-
can
do
that
if
you
so
that's
up
to
you
to
decide
the
strategy
of
deployment,
but
we
will
sure
give
some
primitives
to
do
that.
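The deploy flow just walked through, upload modules to IPFS, then have each peer pull them by CID and create the service, can be sketched with hypothetical helpers (these are not the Fluence CLI's functions; the two callbacks stand in for the IPFS upload and the peer-side service creation):

```typescript
// Upload each WebAssembly module to IPFS once, then have every peer pull
// the modules by CID and create the service from that list.
interface Deps {
  uploadToIpfs: (moduleBytes: Uint8Array) => Promise<string>;             // -> CID
  createService: (peer: string, moduleCids: string[]) => Promise<string>; // -> service id
}

async function deploy(
  deps: Deps,
  modules: Uint8Array[],          // e.g. the indexer module and SQLite
  peers: string[]
): Promise<Map<string, string>> {
  // Upload all modules and collect their CIDs.
  const cids = await Promise.all(modules.map(deps.uploadToIpfs));
  // Each peer downloads the modules from IPFS and creates the service.
  const services = new Map<string, string>();
  await Promise.all(
    peers.map(async (peer) => {
      services.set(peer, await deps.createService(peer, cids));
    })
  );
  return services;
}
```

Because the peer list is just an input, any deployment strategy (every second node, a weighted subset, and so on) reduces to how you build `peers` before calling this.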
A: The nice thing about Aqua is that all these algorithms, when you write them, you can just publish to npm, or whatever your favorite package manager is, and reuse them. So if you're writing a research paper and you decide: hey, this is a nice strategy for deploying services, you write the paper, and you also push the code you just wrote to npm, and everyone can reuse the distributed algorithm that you have, and it will be abstracted over the actual implementation.