From YouTube: Aqua: the Language - @alari - IPFS Implementations
Description
Aqua language - presented by @alari at IPFS þing 2022 - IPFS Implementations - https://2022.ipfs-thing.io
My name is Dmitry, and I'm going to speak about a new concept: Aqua, the language. We are thinking a lot about going to planet scale, and beyond planet scale into extraplanetary things. At planet scale, solutions to problems are delivered as distributed protocols, and as we have many different problems, we have many different solutions, and this means many protocols.
Then you need to re-implement each protocol in every language: if you want Go, you implement it in Go; if you need Rust, you implement it in Rust; and so on. An implementation in one language usually doesn't make it simple to implement in another, and often the way from research to deployment can take infinite time. Here is how it happens: the problem is identified.
We have many implementations, and some algorithms exist in only some of the libp2p implementations. They are not automatically ported to other languages and implementations, because it's quite hard to reverse engineer an implementation back into a specification and then back into another implementation, and sometimes that lasts forever. The same goes for approaches to solving scalability problems, for example.
Yesterday, or the day before, I attended the talk about partitioning Kademlia, and it has this simulation: it solves some of the problems, it shows great capabilities, but will it get into libp2p? I don't know; probably it would be hard. And that's the moment to talk about Aqua. Aqua is a new language, based on pi calculus, made for scripting decentralized protocols on top of libp2p, IPFS, and beyond.
So basically it's a paradigm shift: Aqua means that you split the distributed workflow away from computations and data. Aqua is made to express workflows in untrusted, coordinator-free setups. It's based on pi calculus in order to reason about security, and to have completeness: a framework powerful enough to program almost any algorithm, so that developers are not limited without need. Developers can write algorithms in Aqua and then compile them to a set of AIR scripts.
AIR is the Aquamarine intermediate representation: a very small language which is barely usable by humans, but humans can use Aqua. These scripts have a very strict execution contract, and every peer involved in the workflow is able to execute the script, validate it, do some computation and proceed. So the high-level Aqua language basically turns distributed algorithms into composable library functions, and this leads to platform-independent implementations of distributed algorithms.
Very briefly, it works like this: we have libp2p for connectivity, transport and all that stuff. We have a kind of smart package, which we call a particle, that contains immutable code and mutable data. It comes in via libp2p, and libp2p hands it to a pool of Aqua virtual machines, the Aqua interpreters.
So compute is controlled by Aqua in this case, but only controlled: it's not executed by Aqua. It is delegated from Aqua to the local capabilities of the peer, and that could be WebAssembly, native code, Docker, FVM; you can plug in almost anything, Aqua doesn't depend on it. We have implemented such functions in native Rust, native JavaScript and TypeScript, and in WebAssembly with our Marine WebAssembly runtime.
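The delegation described above can be sketched roughly like this; the service registry, names and backends below are invented for illustration and are not Fluence's actual API:

```python
# Sketch: the workflow layer only names a service and a function; the peer
# resolves the name against whatever local backends it has registered
# (which could be Wasm modules, native code, Docker, and so on).

class Peer:
    def __init__(self):
        self.services = {}  # (service, function) -> callable

    def register(self, service: str, fn_name: str, fn):
        """Plug a local capability in under a name."""
        self.services[(service, fn_name)] = fn

    def call(self, service: str, fn_name: str, *args):
        """Execute a call requested by the workflow, using a local backend."""
        fn = self.services.get((service, fn_name))
        if fn is None:
            raise KeyError(f"no local capability for {service}.{fn_name}")
        return fn(*args)

peer = Peer()
peer.register("math", "add", lambda a, b: a + b)  # backend is opaque to Aqua
result = peer.call("math", "add", 2, 3)           # -> 5
```

The point is that the coordination layer never executes anything itself; it only asks a peer for a named capability, so the backend can be swapped without touching the workflow.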
We have the data from the previous execution of this flow, and when new data comes in, we check that everything is correct and merge the new data with the previous data. Then the decision is made: what, if anything, should be executed locally? And finally, if there is something that is not executed on this peer and should be executed elsewhere, the data is moved on to the next peers, one or many. Basically it's a state machine: the AIR interpreter is pure, a pure function with no effects, just input and output. Code execution is independent, and the execution trace, this mutable data, forms a conflict-free replicated data type.
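As a rough sketch of that idea, here is a pure interpreter step over a simplified execution trace; the (step_id, result) trace model is invented for illustration and is not AquaVM's actual data format:

```python
# Sketch: a pure interpreter step. It takes immutable code and mutable trace
# data, merges an incoming trace with the local one (union-style, so replayed
# steps converge), and decides what this peer should execute next.

def merge_traces(local: list, incoming: list) -> list:
    """Keep every step once; a completed result wins over a pending None."""
    merged = {}
    for step_id, result in local + incoming:
        if step_id not in merged or merged[step_id] is None:
            merged[step_id] = result
    return sorted(merged.items())

def interpreter_step(script: list, trace: list, me: str):
    """Pure function: (code, data, peer id) -> (data, steps to run here)."""
    done = {step for step, result in trace if result is not None}
    to_run = [step for step, peer in script if peer == me and step not in done]
    return trace, to_run

script = [("fetch", "peerA"), ("compute", "peerB")]  # immutable code
local = [("fetch", "ok")]                            # mutable trace data
incoming = [("fetch", "ok"), ("compute", None)]
trace = merge_traces(local, incoming)
_, to_run = interpreter_step(script, trace, "peerB")  # -> ["compute"]
```

Because the step is a pure function and the merge is order-insensitive for completed steps, every peer can validate and extend the trace independently, which is the conflict-free property the talk refers to.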
And a few simple examples of what Aqua looks like. At the top you have a sequence diagram with three different peers: some client, some relay, and some other node, or many nodes. Usually, to implement this flow, you take the verticals and implement each vertical separately: I'm waiting for the request; when it comes, I handle this particular event; then I send the request on to the next node. With Aqua it's different.
There could be fire-and-forget, and we can easily code a fork-join with parallelism. In this case we do some function execution on a set of peers which we fetched somehow, as arguments or something like that, and then on some peer we wait: we do a join. You can also do a partial join, like N of M, two-thirds, anything.
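A fork-join with a partial, N-of-M join can be sketched with asyncio standing in for Aqua's parallel execution; the peer calls below are simulated stubs, not real network calls:

```python
# Sketch: fork a call to every peer in parallel, then join partially,
# proceeding as soon as any n of them have answered.
import asyncio
import random

async def call_peer(peer: str) -> str:
    """Stand-in for a remote function execution on a peer."""
    await asyncio.sleep(random.random() / 100)  # simulated network latency
    return f"result-from-{peer}"

async def partial_join(peers: list, n: int) -> list:
    tasks = [asyncio.create_task(call_peer(p)) for p in peers]
    done = []
    for fut in asyncio.as_completed(tasks):
        done.append(await fut)
        if len(done) >= n:  # N-of-M: stop waiting once n results arrived
            break
    for t in tasks:         # abandon the stragglers
        t.cancel()
    return done

# Wait for any 2 of 3 peers before proceeding.
results = asyncio.run(partial_join(["p1", "p2", "p3"], n=2))
```

The same shape covers a full join (n equal to the number of peers) and fire-and-forget (no join at all).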
Then we do some calculations and pass the result on to some other peer. So if you want to do notifications, for example, you can easily code it in a very small number of lines, and you don't need to recompile or redeploy anything. And Aqua tries to involve the least possible number of peers in this flow.
You can play with algorithms fast and improve them incrementally, in A/B style, because it's scripts: it's very fast to try things out. Many problems are abstracted away because they already have solutions; connectivity, for instance, is already handled very well. If you need any computations or any other capabilities, you can easily delegate them to the local capabilities of the peers. You are focused on something new and on how to deliver it.
So there is no need to redeploy the network, no need to recompile the node: you can just publish. You can do research using this approach, this language, publish the result as an Aqua library, and engineers will catch up. It's quite easy to get from research to some deployments, and a deployment doesn't mean that everything should change; it could be a very small deployment. To use the new algorithm in many languages, little to no extra work is needed; it depends on the compute requirements. If you use something portable to express computations, then you can just deploy that WebAssembly.
Then you ask the peer to run it. If you use something not portable, because you need more performance or something special, or because you did it that way to speed up development, it's still easier to port: you don't need to deal with requests, responses, awaits, all this network-oriented stuff. You just need to work with data and do some data transformations, and so on. So it should still be much easier.
There are some problems, and this solution is not perfect, at least not yet. One of the questions that we would like to discuss during the IPFS þing is how to lower Aqua back down to libp2p protocols. Currently Aqua forms a single, very simple protocol of its own: it consists of one kind of libp2p package. It is a non-trivial question how to lower down from what can be expressed with Aqua to native libp2p protocols, which would be very useful.
We have investigated this question and we have some ideas, but we need to validate them with the community and check what can be done and what cannot. Another question is whether we should have a closer integration with IPLD. It looks like yes, definitely, but we also need to discuss how exactly.
Also, there is currently no language-level way to express failures and supervision, which is really needed. You can still express it, but with a lot of code. In Aqua we want to add sugar for this, but the question is: what is the best way to express these things?
So that's all. Thanks for listening, and see you at our events during the IPFS þing. We have two events: tomorrow, and the day after tomorrow we have a deep dive into Aqua and IPFS, where we'll speak about AquaVM internals, contracts, pi-calculus-derived instructions, and things like that. And for sure, any time: a big part of the Fluence team is here, and we are ready to talk.