From YouTube: Fluence Protocol: A P2P Computing Protocol
Description
Speaker: Evgeny Ponomarev, co-founder & COO at Fluence Labs
Paris P2P Festival #1
May 2022
Source: https://www.youtube.com/watch?v=NFcHnXR5alA&list=PLNeNFYqVeWnNy8KdZOdOTlzSkKoBWyfqO
A
Okay, hi everyone, I'm Evgeny from Fluence, and I'm going to talk quickly about Fluence today. I think my talk is the last one before the hackathon officially starts, so I want to be really quick, which means you'll hear some obvious things at the beginning.
A
We believe that peer-to-peer systems are the future of software in general, and I assume most people in the audience would agree with me. Right now we're at the stage where crypto is efficiently decentralizing the financial markets, and we're pretty successful in that part. But we're also trying to make the transition from centralized systems to peer-to-peer systems in other areas, like storage, where we're also quite successful, and identity.
A
And networking. But I specifically want to talk about the computing part, because the tricky thing about computing is that blockchains are also decentralized computing, but a very specific kind of decentralized computing: computing with consensus, usually run by all nodes in the network. Some consensus algorithm, proof of work or proof of stake, basically assumes verification of all the transaction data stored in this replicated database, which puts constraints on the computation and data that you can put there, and puts constraints on the cost.
A
So it basically becomes super expensive. There was another attempt to address the decentralized compute problem by other projects, which created hardware marketplaces. Some of them are still alive, but they mostly target compute-intensive work like video rendering or scientific computations: cases where you have a huge task and want to parallelize it across many machines to execute it faster and cheaper, instead of just deploying it to a single machine or a single cloud.
A
Instead, we want something that would let you provide APIs, that you could use for almost any use case where we use centralized clouds today. Instead of putting back ends on the cloud, it would be nice to just deploy the computation into the network and get it magically executed, or send requests to the network and get responses from the network: a magical, decentralized back end for any application. But it should also not be limited by devices, because right now the cloud model dictates a very specific client-server model, where we have thin clients in the browsers.
A
Those clients are limited, living on user devices, while the performant computation lives in the cloud, and the development stacks there are very different, so you cannot just take a piece of code that runs in the cloud, put it on a user device, and run it.
A
It would just not work, so we're looking for something more universal. The reasons why this is important are obvious, but quickly, again: centralized infrastructure, meaning server hardware owned by centralized companies, is a risk, but centralized APIs are also a risk. Say you're building your application using someone else's data, enhancing your application with someone else's data, like Facebook data or Twitter data or any other Web 2 API.
A
Basically, whoever's API you call, they control the data, and they control your access to their data. So if you build your business on top of some other company's data, you depend on them; they can cut you off and you can lose your business, just because of this broken model of centralized silos of data and centralized access to that data.
A
And this platform risk also exists now, in Web3. We have this success with decentralizing data via blockchain networks, which replicate the state of the network, so you can connect to any blockchain node and download any transaction data that you need, or with IPFS, where you can do the same. And a lot of projects are building on top of that.
A
Basically, they index the data stored in decentralized storage like blockchains or IPFS and convert it into different formats that better fit different use cases. For example, if you want to display the list of NFTs for a particular wallet, you cannot do this with a single request to an Ethereum RPC node. Before that, you need to build an index, because otherwise you would have to make a lot of requests from the user's device to the back-end Ethereum node.
A
So they build their own Web 2 HTTP back ends, and then the applications that use them again get this problem of being dependent on centralized access to these back ends. We're kind of repeating the story of Web 2, and it would be nice to solve this properly and enable real, safe composability of applications: for example, the level of composability that we have inside blockchains, where we have smart contracts that are immutable. If I deployed some smart contract and removed the admin from the contract...
A
...it means that it will be there forever, and it's safe to build on top of that smart contract, because it cannot be changed and it cannot cut off access from whoever uses it. So it would be nice to have something similar not just on chain but also off chain, and this is how we envision proper peer-to-peer computing. It sits somewhere between on-chain and centralized computing.
A
So we have no middleman and no forced redundancy; the computations are optionally reliable, and it's obviously cheaper than on chain, so it gives you a lot of flexibility. The use cases are all kinds of cross-chain use cases where you need, for example, oracles, or asset transfers, or dynamic NFTs, things like that, and a lot of use cases around user-controlled data and data privacy, where you want to bring compute to the data rather than send data from the device to some untrusted server.
A
So you should be able to bring the compute, compute locally, and then send the results somewhere into the network, or connect users directly and build local-first applications. And there's one more use case. We have decentralized networks, but there is still a gap between DAOs, which sit on chains and govern the new generation of applications, and the execution of those applications. DAOs work for governing on-chain applications, but they don't work for governing off-chain applications.
A
If we have DAOs that can really govern off-chain applications, then we've built the full stack. We can have the on-chain organization that manages the funds, the processes, the decision making, and so on, and when a decision is made on chain, the off-chain runtime triggers the update of the application: it rolls out a new feature, removes a feature, updates the app, basically anything.
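As a rough illustration of that DAO-to-runtime link, here is a minimal Python sketch (not Fluence code; `DaoDecision` and `Runtime` are made-up names for this example) of an off-chain runtime that applies an update only when the on-chain vote approved it:

```python
# Illustrative sketch: an off-chain runtime applying an app update only after
# an on-chain DAO decision approves it. All names here are assumptions.

from dataclasses import dataclass

@dataclass
class DaoDecision:
    proposal: str       # e.g. "deploy v2"
    approved: bool      # outcome of the on-chain vote

class Runtime:
    """Off-chain runtime that tracks the currently deployed app version."""
    def __init__(self, version: str):
        self.version = version

    def on_decision(self, decision: DaoDecision, new_version: str) -> bool:
        """Roll out the new version only if the DAO approved the proposal."""
        if decision.approved:
            self.version = new_version
            return True
        return False

rt = Runtime("v1")
rt.on_decision(DaoDecision("deploy v2", approved=True), "v2")   # update applied
rt.on_decision(DaoDecision("deploy v3", approved=False), "v3")  # rejected, stays on v2
```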
A
So you can really link the community governance of the app to the runtime of the app. We solve this with Fluence. Fluence is three things: the development stack for the applications, the network that runs this development stack, and the economics around it.
A
The development stack consists of the two most important things. The first is called Marine, a WebAssembly runtime. It allows you to run functions on any device in the same way, and it brings features for how you build these functions: you can link a few modules together, and you can access the file system. It supports all the WebAssembly standards, so it's pretty cool. The second is Aqua, the control plane for the execution of these functions. Aqua is a new programming language for peer-to-peer systems, in which you describe the execution.
A
So it looks like this. Aqua allows you to implement any workflow; for example, say you want to implement some MapReduce on the network. The red squares are nodes, and the functions inside them are Marine functions. This Aqua code basically says: get a list of nodes that provide a getPrice function, call getPrice on several of them, then calculate the average on some other node and send it to a second user.
A
So this is an application that works without any centralized coordination server, without any centralized coordination place. The request is issued by the first user, and the second user gets the result; but it could just as well be the first user who gets the result. It can be programmed either way. It's a workflow, and you can program this workflow.
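The workflow described above can be sketched in plain Python (a conceptual model only, not Fluence's API; `Node`, `discover`, and `get_price` are illustrative assumptions): discover the peers that provide getPrice, fan out the calls, average on another node, and deliver the result to a second user.

```python
# Conceptual sketch of the getPrice workflow, with plain Python callables
# standing in for Marine functions hosted on network peers.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Node:
    """A stand-in for a network peer hosting named functions."""
    peer_id: str
    functions: Dict[str, Callable] = field(default_factory=dict)

def discover(nodes: List[Node], fn_name: str) -> List[Node]:
    """Find peers that advertise the requested function."""
    return [n for n in nodes if fn_name in n.functions]

def run_workflow(nodes: List[Node], deliver: Callable[[float], None]) -> None:
    """Fan out getPrice calls, average on another node, deliver the result."""
    providers = discover(nodes, "get_price")
    # Sequential here; in Aqua these calls would be fanned out in parallel.
    prices = [n.functions["get_price"]() for n in providers]
    averager = discover(nodes, "average")[0]
    deliver(averager.functions["average"](prices))

# Example network: two price providers and one averaging node.
network = [
    Node("peer-a", {"get_price": lambda: 10.0}),
    Node("peer-b", {"get_price": lambda: 14.0}),
    Node("peer-c", {"average": lambda xs: sum(xs) / len(xs)}),
]

received = []
run_workflow(network, received.append)   # the second user "receives" the average
```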
A
Probably the closest analogy would be Amazon Step Functions. If you know the Amazon stack, they have Lambda functions, which are like pure functions, and then Step Functions to describe the workflow. Their workflow description is basically a JSON file, whereas Aqua is much more flexible.
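For comparison, a minimal Step Functions workflow in Amazon States Language looks roughly like this (the function names and ARNs are placeholders):

```json
{
  "Comment": "Minimal two-step workflow",
  "StartAt": "FetchPrice",
  "States": {
    "FetchPrice": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:getPrice",
      "Next": "Average"
    },
    "Average": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:average",
      "End": true
    }
  }
}
```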
A
Aqua is a full-featured programming language, so you can create any kind of algorithm, network algorithms or distributed systems, in Aqua. Aqua turns complex cloud services into libraries of the language, which is pretty cool: things like load balancing, routing, auto scaling, orchestration, and deployment systems all become pieces of code in Aqua, so you can build really complex distributed or peer-to-peer systems just using Aqua.
A
Again, an example: in a traditional cloud back end you usually have a centralized API gateway that talks to the services behind it, while with Aqua all of this is served without any centralized coordination place.
A
This is what a Fluence peer looks like. Every Fluence node runs Marine functions and has an Aqua VM, which receives requests, proxies them into the execution of Marine functions, and then sends the result to whoever asked. It also has scheduled scripts, scheduled Aqua scripts, which trigger Marine functions on a timer. On the other side, every Fluence peer can connect to the external world, via HTTP or via linked binaries sitting on the same physical machine, so it can access APIs and the file system.
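The scheduled-scripts idea can be sketched like this (an illustrative Python model, not the node's actual implementation; `ScheduledScript` and `tick` are invented names): each script fires whenever its interval has elapsed.

```python
# Illustrative sketch of timer-triggered scripts. A real node would schedule
# Aqua scripts; here a plain callable stands in for the scripted action.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ScheduledScript:
    interval: float              # seconds between runs
    action: Callable[[], None]   # the function to trigger
    last_run: float = 0.0

def tick(scripts: List[ScheduledScript], now: float) -> int:
    """Run every script whose interval has elapsed; return how many fired."""
    fired = 0
    for s in scripts:
        if now - s.last_run >= s.interval:
            s.action()
            s.last_run = now
            fired += 1
    return fired

log = []
scripts = [ScheduledScript(interval=60.0, action=lambda: log.append("poll"))]
tick(scripts, now=60.0)    # fires: 60 seconds have elapsed
tick(scripts, now=90.0)    # does not fire: only 30 seconds since the last run
```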
A
It can reach Web 2 networks, Web 3 networks, basically anything. A Fluence peer by default always ships with IPFS, so you always have IPFS access from a Fluence node, but any Fluence node can decide which effector services it provides to the network or has access to. There are also two implementations of the Fluence peer, JavaScript and Rust, and they are a little bit different, because the JavaScript one is designed to work in the browser.
A
So it has less flexibility in terms of connectivity to the network, and it always connects to the network via a relay node, where a relay node is any full-featured Fluence node on the network. The Rust peer is more of a back-end sort of node, but they are designed to be as close as possible; we try to make them as identical as possible. And if we think of the Fluence stack, what we are trying to build is to reinvent the whole cloud stack, from bottom to top.
A
So we have the execution of the functions, then we have the control plane for the functions, and then we start building abstractions on top of that, which unlock different features like failover, clusters, replication, consensus, load balancing, auto scaling, and things like that. It's building up; this is what exists right now.
A
A lot of things still need to be done, but applications can already be built with the tooling that exists now. So again, besides the pure Marine runtime and the pure Aqua language, what you can use right now when you build on this stack: you can discover peers and connect to them, and you can discover resources, basically which Marine functions are deployed where.
A
You can deploy functions, you can call functions, and you can schedule the execution of these functions across the network. There's also a thing called the trust graph, which allows you to score and select nodes based on that score; I'll talk about it in a minute. And there are plugins to external things, to external data layers like IPFS, Ceramic, and on-chain blockchains, which are also available right now; you can just deploy them to your nodes. And then there are all the other cloud features that we expect from the cloud, all this magic with scaling and fault tolerance.
A
So there's going to be a lot of that. Now, about the trustlessness of the computing: usually when we use a decentralized network, we expect some security model, some verification of computation, or something like that. The approach we take is that by default the computation is trusted. If you're using some remote node and deploying your function on it, you trust that node to execute the function correctly. So what does that mean?
A
In that case there's obviously no reason not to trust each other, but there's also some flexibility to it. There's a thing called the trust graph, and it works the same way SSL certificates work in the browser: you have a list of root certificates in the operating system, then you open a website, you see its certificate, and you verify that certificate against the root certificates that you have in the system.
A
This way you can see that the website has a valid certificate, so it was trusted by a root authority that you trust, like VeriSign, for example: VeriSign issued them a certificate, and you trust VeriSign. Similarly, here in the network, nodes can issue trust certificates to each other. For example, we at Fluence Labs, our company, run several nodes, and we issue certificates to these nodes.
A
So when your application, or users of your application, discover these nodes and want to execute some functions on them, they will be able to see that these nodes are trusted by us, by Fluence Labs, and if they trust Fluence Labs, they can trust these nodes. This chain of trust can be really long, but it's subjective: every user, every application developer can choose whichever authorities they want to trust. So you can basically build your own trust graph.
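The certificate-chain idea can be sketched in Python (a conceptual model, not the actual TrustGraph API; real certificates carry signatures and expirations, which are omitted here): walk the chain of issued certificates back to an authority the user already trusts.

```python
# Conceptual sketch of trust-graph style verification: a certificate is just
# (issuer, subject), and a peer is trusted if a chain links it to a root.

from typing import Set, Tuple

Cert = Tuple[str, str]  # (issuer peer id, subject peer id)

def is_trusted(peer: str, roots: Set[str], certs: Set[Cert]) -> bool:
    """Return True if some chain of certificates links `peer` to a trusted root."""
    if peer in roots:
        return True
    issuers = {i for (i, s) in certs if s == peer}
    # Remove the cert we just followed so cycles cannot recurse forever.
    return any(is_trusted(i, roots, certs - {(i, peer)}) for i in issuers)

# The user chooses their own roots; here they trust only "fluence-labs".
roots = {"fluence-labs"}
certs = {("fluence-labs", "node-1"), ("node-1", "node-2")}

is_trusted("node-2", roots, certs)   # True: fluence-labs -> node-1 -> node-2
is_trusted("node-9", roots, certs)   # False: no chain to a trusted root
```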
A
It allows you to distinguish more trusted nodes on the network from less trusted nodes. The second way to get verification is pluggable consensus. A consensus algorithm is a distributed-systems algorithm, and it's also written in Aqua.
A
We don't have it on the network yet, but the simplest algorithm you can implement is basically Aqua code that calls the same function on several nodes and then collects the results of the execution together with signatures. Then you can specify that if you get k out of n signatures, you consider the result correct, so you have a consensus on the computation. You can do very simple things like that.
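The k-of-n check just described can be sketched as follows (illustrative only, not network code; in practice each signature would be cryptographically verified rather than merely counted):

```python
# Sketch of k-of-n result acceptance: execute the same function on several
# nodes, collect (result, signature) pairs, and accept a result once at
# least k nodes signed the same value.

from collections import Counter
from typing import List, Optional, Tuple

def accept_result(signed: List[Tuple[str, str]], k: int) -> Optional[str]:
    """signed = [(result, node_signature), ...]; return the result with at
    least k matching signatures, or None if no value reaches the threshold."""
    if not signed:
        return None
    counts = Counter(result for result, _sig in signed)
    value, n = counts.most_common(1)[0]
    return value if n >= k else None

# Three nodes agree, one returns a corrupted value; we require k = 3 of n = 4.
responses = [("42", "sig-a"), ("42", "sig-b"), ("42", "sig-c"), ("999", "sig-d")]
accept_result(responses, k=3)   # "42"
accept_result(responses, k=4)   # None: consensus not reached
```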
A
You decide how strong a consensus you need on your computation. So this development stack runs on the network, and the network is a marketplace of services. Every node is different; it's not a blockchain, and there is no global consensus on the network. The network is like the IPFS network, where every IPFS node keeps a different set of data, but the data is discoverable.
A
So here every node is different: every node decides what it wants to host, what it wants to provide to the network in terms of external things, and what it wants to host in terms of what developers deploy to it. That means this network can scale almost infinitely, and if you have some consensus there, it's always something like 10 nodes out of a million nodes in the network; not every node has to be involved. And if we have such a network, which is a marketplace of hosting providers who host different applications, we can now solve this problem of API access...
A
...being cut off, the problem of the non-composability of Web 2 APIs. Because now we're decoupling the hosting of applications from the authors of applications, and we have the same model as in blockchains, where blockchain nodes are run by some people and the smart contracts are developed by different people. So now, say I have an application, for example a messenger application. It was deployed to the network and is hosted by nodes, and then I release a new version in which I introduce...
A
...some bad feature. The nodes may disagree with this bad feature and may decide not to update: some nodes may decide not to update, some may decide to update. So now I've basically forked the application, because I don't fully control the runtime. I cannot say that this new version of the application will be adopted by every node, and nodes serve users.
A
So now we basically have two versions of the application, and users may choose whichever version they want to use: whether they want to switch to the new version, where I introduced this bad feature, or stay on the previous one. There are a lot of details about how this would all work, how the two versions would coexist, and so on and so forth, but it's all solvable. The concept here is that we are decoupling the runtime from the author.
A
So with this approach, having the network independent from the application developers, we solve the deplatforming problem, and applications are basically hosted on demand. As long as nodes are interested in hosting, they will host. It's not the situation where I, as a developer, paid for my Amazon server, then my credit card expired and my application doesn't work anymore; this model works differently.
A
Currently we have the Aqua language, which works, and the network, which we are constantly improving: fixing bugs, working on stability, and things like that. We are moving toward launching the Fluence DAO and the governance model for the whole network, which should happen this year, and then we will transition into focusing on the economics of the whole network: how all the incentives work, how to make sure the hosts are online, how to pay for hosting, and things like that. There's a lot to think about there.
A
So we would basically be able to enable a new economy of decentralized computation. Right now on Amazon you have this huge list of details about what you pay for and why, and different services have different payment models. We can have all of that, and more ways of paying for computation, because different nodes would accept different payment models. Some nodes would be fine being paid...
A
...for hosting a certain piece of code for a certain time, within certain limits; other nodes would want to be paid per request; a third node would want to do short-term offerings, or flexible pricing depending on the time of day and their load, and things like that. It's all possible, and we're trying not to hard-code anything; we're trying to enable the whole spectrum of different ways to pay for computation.
A
Also, think of DeFi on top of all of this; that's pretty cool. And the last thing here: if you have this off-chain computation network as an application platform, we might really start having DAOs for applications, which you can fund on chain, and these DAOs will be tied to the application itself.
A
That solves a problem with today's on-chain DAOs, which are decoupled from the real product: someone can basically run away with the money and the product would stop working, and a lot of other things can happen. Now we can really make sure that a DAO is tied to this particular software product, which is run by the network, so it's kind of there forever. And another cool thing is open source monetization.
A
Because everything runs on the peer-to-peer network, it's all transparent, and if we have the on-chain economics, we can enable monetization: rewarding the people who created the most useful components of an application. There's this story from the Web 2 world, where huge open source databases were taken over by the cloud, basically because people wanted to have them in the cloud and run them in the cloud, and the clouds make billions of dollars by providing these open source databases as services, while the authors of those databases usually don't get anything back.
A
Usually they have to create their own cloud and compete, and it's really hard to compete with big clouds like Amazon, because Amazon has everything, and developers wouldn't really want to switch to the database developer's cloud, which has only that database but nothing around it. So if we have these services, basically the new peer-to-peer cloud services, running on the Fluence network, then if I created a database and people reuse it across applications in the network, we can drive a part of the nodes' revenues to the author.
A
So the author can have some kind of royalty from the payments, and this is very cool: a new way of monetizing open source, and a kind of innovation in software that we didn't have before. So yeah, a couple of links. This is the network dashboard.
A
You can see some services and nodes that are running right now. And at this hackathon, I'm not sure if I'm allowed to talk about prizes, but we have two prizes: basically, just build the best application with Fluence and get these great prizes. This is the link where you can see a little more detail about our bounties. And yeah, that's it, thanks so much.
B
A
Yeah, so for Aqua I think we don't have any fresh metrics. We did benchmarks on Marine.
A
We tried to compare it to running native code, and it was about 30% slower when you use WebAssembly and Marine, but that's not really bad; the advantages and benefits it gives you are much bigger than this 30% loss. As for Aqua, it would be nice to do some fresh benchmarks, but I can't say for now.
D
How do you make sure that the function you execute in the network is properly executed, and how do you compare the result with other nodes? How does that work?
A
Basically, if you're writing your code, your workflow, in Aqua, you have full control over it. So if you don't trust a particular node, you can do the following. In this example, you see there's a loop among peers that sends parallel requests; this "par" says parallel. So I can specify a list of peers and just send parallel requests to execute the same functions, and I know it's the same function because it's the same compiled Wasm, identified by an IPFS CID.
A
So I know it's the same. But, as I said, if I don't trust the nodes, theoretically they can corrupt it, or they can run something else instead of it. So when I do this parallel execution, some of the results, if they corrupted it, will be different, or they will return an error or something. That's why, if I don't trust, I should always have some redundancy.
A
What Aqua gives me is security from step to step: it makes sure that whatever result was returned by the node that executed the getPrice function, that exact result will go to the next step and will be sent to the peer that calculates the average. That's what Aqua can guarantee; the internals of the function execution are the tricky part. So if you don't trust, use consensus: execute several times, ask for signatures, and check that you get the same result.
E
Hey, thanks for the presentation. I have a question about the code creator economy: how do you actually make sure that there's no third party that would benefit from getting in between the code creators and the network? How do you build a system that makes sure of that?
A
I can give you an example with this open source monetization concept. Say I create a database, publish it to the network, and then people use it. On the payment side of it, I have a sort of contract such that when a developer pays for hosting their application, they also pay for hosting of the database, and, say, one percent of the payment for the database hosting goes to the author of the database.
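That payment split can be sketched as follows (illustrative only; the one percent figure comes from the answer above, and the function names are made up):

```python
# Sketch of the royalty idea: when a developer pays for hosting a reused
# component, a fixed share of that payment is routed to the component's author.

AUTHOR_FEE = 0.01   # the "one percent" share mentioned in the talk

def split_payment(amount: float, author_fee: float = AUTHOR_FEE):
    """Return (host share, author share) for a single hosting payment."""
    author_share = amount * author_fee
    return amount - author_share, author_share

host, author = split_payment(200.0)   # host gets 198.0, author gets 2.0
```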
A
The thing is, if I'm a hosting provider who wants to provide the same database but one percent cheaper, I may say: I'll provide this database, but I'll drop the author fee; if you consume this database from my node, you won't need to pay the author fee. That's possible. But this mechanic is the same as forking open source: every open source product is open.
A
You can fork it, but how would you drive people to use your fork over the original product? The original product is chosen for particular reasons: it's maintained by the original authors, so it keeps getting updated, and you would need to maintain your fork as well. So if you're a node and you want to fork this database, you would also need to keep maintaining your fork, which excludes the author fee each time. So we cannot guarantee one hundred percent...
A
...that a hundred percent of developers will pay the fee. And it only makes sense to switch to the node that forked if their product is better, if they made the product better. But that basically means they made a different product, and now they receive a fee for it. That's the logic here.