From YouTube: Idea/Brainstorm: Aqua and IPFS - @alari - Aqua and IPFS
C
Half node or something — I was thinking about moving this data. So it's a network protocol, and there is code for generating data packets to be sent to the network. That code would be compiled to WebAssembly and run as a service on Fluence. That service could be associated with a stream that identifies it, and every piece of data that comes out of the service could be sent through a libp2p connection to others. You know, whatever Aqua says to do — it just says: hey, this.
C
"What service sends this kind of message to this peer" — it would just generate a byte array and say: hey, send this to this peer; and once you receive the message, send it back to the service. So the service would have a local memory and...
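The pattern C describes — a service that only emits routing commands and keeps its own local memory, while the host does the actual sending — could be sketched like this (illustrative Python; all names are hypothetical, not Fluence APIs):

```python
# Hypothetical sketch: a packet-generating service that never touches the
# network itself. It returns explicit "send these bytes to that peer"
# commands and remembers what it processed; the host performs the send.

from dataclasses import dataclass, field

@dataclass
class SendCommand:
    peer_id: str      # destination peer
    payload: bytes    # packet bytes to push over the libp2p connection

@dataclass
class PacketService:
    seen: list = field(default_factory=list)  # local memory of the service

    def handle(self, message: bytes, dest_peer: str) -> SendCommand:
        # Remember what we have seen, then emit a routing command.
        self.seen.append(message)
        return SendCommand(peer_id=dest_peer, payload=b"pkt:" + message)

svc = PacketService()
cmd = svc.handle(b"hello", "peerB")
```

The host stays in charge of the libp2p connection; the service only decides what goes where.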
C
One new deployment — but I could open connections, so, without qualifying connections, and then send something through the stream; that has yet to be implemented on all of them. But it's pretty simple, you just...
D
Essentially what happens is: the service is being called, and it knows that it's called with arguments from something. And it knows: okay, I am the service, this peer deployed me, I have the peer id where I am executed — and it runs with very isolated, sandboxed access.
D
So if you need any capabilities, you as the node operator may whitelist this service and say: I allow this service with this hash — this IPFS hash — to have access to these effects: to this folder, to this binary, to this socket, whatever. Or, for the linking use case, probably you will not use WebAssembly at all, because probably you will have this direct integration — that was your question about optimization. These aspects — performance, security and so on — are closely related.
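A minimal sketch of the whitelisting just described, assuming capabilities are keyed by the hash of the service's Wasm module (all names here are hypothetical, not the Fluence implementation):

```python
# Operator-side capability whitelist: a service, identified by the hash of
# its module, is only granted the capabilities the node operator listed.

import hashlib

WHITELIST = {}   # module hash -> set of allowed capabilities

def module_hash(wasm_bytes: bytes) -> str:
    return hashlib.sha256(wasm_bytes).hexdigest()

def allow(wasm_bytes: bytes, capabilities: set):
    WHITELIST[module_hash(wasm_bytes)] = set(capabilities)

def check_access(wasm_bytes: bytes, capability: str) -> bool:
    # e.g. capability = "fs:/var/data", "binary:/usr/bin/ipfs", "socket:4001"
    return capability in WHITELIST.get(module_hash(wasm_bytes), set())

module = b"\x00asm-fake-module"
allow(module, {"fs:/var/data", "socket:4001"})
```

A service whose hash is not in the table gets nothing, which matches the sandbox-by-default behaviour described above.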
B
Like algebraic effects: you have an isolated place for a computation, and you can reason only about computations — here you don't need to know anything else. You have one place where you have this access to the socket. You can first use channels for that, then write to the socket directly, and so on. So in this place, in the host — in the peer — you have all the context that you need to make the decision whether you should execute this call or not.
D
In many cases I say that, finally, Aqua code is a pure function, yes — and finally, the peer...
A
For Marine, or something like that, and the same for Bitswap, for example — it's another resource, you just plug it into the control plane or VM. But this Bitswap would work through the p2p correctly; it's a different way.
B
At a low level — could it be within the VM runtime? Yeah, yeah. So, like: can you implement this? Yeah. That's what I was trying to understand — so that you handle all of these. We just have to figure out first how to patch it — how to register a handler, yeah, how to teach AquaVM to register one for that service, so that you can see.
F
From the back end, it would be seen as, like, a name or a channel where events should come to the other services. Yeah.
E
Everything comes from the peer, I guess — the peer is like an identity, but the service can be... you can map that to a hash of the code that's going to run. The function is also just some offset on that, and then the arguments to the function can all be hash-linked — and that whole thing is now visible, and you have a way of naming this thing, you can, like, you know.
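E's content-addressing idea — the code hash, the function within it, and hash-linked arguments naming the whole call — might look like this (an illustrative sketch, not any real system's API):

```python
# Identify a computation by hashing the code that will run, the function
# name, and a hash of each argument; equal calls get equal addresses.

import hashlib
import json

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def call_address(code: bytes, function: str, args: list) -> str:
    arg_hashes = [h(json.dumps(a).encode()) for a in args]  # hash-link each argument
    # The call id commits to the code, the function, and every argument hash.
    return h(json.dumps({"code": h(code), "fn": function, "args": arg_hashes}).encode())

a1 = call_address(b"wasm-module", "classify", [[1, 2, 3]])
a2 = call_address(b"wasm-module", "classify", [[1, 2, 3]])
a3 = call_address(b"wasm-module", "classify", [[9, 9]])
```

Because the whole call is one visible hash, two peers can agree they are naming the same computation without exchanging the inputs themselves.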
E
If it's for pure functions, you can maybe label things that are pure, and then you can run them only once in the network and store the outputs separately. You can then start doing things like — yeah — take an entire long computation, decompose it into different particles, then suspend typed states and bring them back.
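The run-pure-calls-only-once idea can be sketched as a content-addressed memo table (hypothetical names; a networked result store would replace the dict):

```python
# If a function is labelled pure, key its result by the call's content
# address and serve repeated calls from the store instead of re-executing.

import hashlib
import json

RESULT_STORE = {}    # network-wide store of outputs, modelled as a dict
calls_executed = 0

def pure_call(fn_name: str, fn, args):
    global calls_executed
    key = hashlib.sha256(json.dumps([fn_name, args]).encode()).hexdigest()
    if key not in RESULT_STORE:          # first observation anywhere: execute
        calls_executed += 1
        RESULT_STORE[key] = fn(*args)
    return RESULT_STORE[key]             # later calls reuse the stored output

r1 = pure_call("add", lambda a, b: a + b, [2, 3])
r2 = pure_call("add", lambda a, b: a + b, [2, 3])
```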
E
You
can
do
you
can
have
a
program
on
top
of
this.
That
gives
you
like
core
routines,
like
the
the
weight,
basically
type
models
where
you
can
write
a
very
complex
program,
and
whenever
you
async
away
you
compile
down
to
different
particles,
and
they
run
correctly
like
I'm
in,
like
the
right
sequences
yeah,
it's
called
fork
song
yeah,
it's
all
over
there.
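The compile-to-particles idea — suspend at each await point and ship the segment in between as its own particle — can be mimicked with a generator (purely illustrative; real particles carry much more than this):

```python
# A long computation written as a coroutine; each `yield` marks an await
# point, and the driver turns every segment between suspensions into its
# own "particle" to be executed remotely.

def long_computation():
    part1 = 1 + 1
    remote = yield ("particle", part1)   # suspension point 1: ship part1 out
    part2 = remote * 10                  # resumes with the remote result
    final = yield ("particle", part2)    # suspension point 2
    return final + part2

def run(make_coroutine, remote_results):
    # Drive the coroutine, feeding back one remote result per suspension.
    gen = make_coroutine()
    emitted = [next(gen)]                # run until the first suspension
    result = None
    try:
        for r in remote_results:
            emitted.append(gen.send(r))  # resume with each remote answer
    except StopIteration as done:
        result = done.value
    return emitted, result

emitted, result = run(long_computation, [3, 4])
```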
E
Do you like the mechanics? Yes, yeah — so Fluence is a flow of particles, okay, and...
B
With these particles, we have them split sometimes, and we have different observers, and then we have an observation. Well, actually, it's the same particle — it just flows. So we have a particle and many observations of this particle by different peers. Usually they have different views, but finally it converges.
B
So that's about the particle. And when the AquaVM execution is done on the peer, the result is a list of the next peers plus the subjective observation of what the data is, and the same data — the same particle — is sent to all of them and can trigger state changes in the peers that it crosses.
B
So if I'm persisting — I mean, if I'm sending something to Mike, for example — can I change your...
D
Your state? We have two layers of state changing here. The first one is per particle: I remember what I've seen, so I have a cache of my last observation, and when I get new data from somebody, I merge — I learn only the new facts and keep the old facts that are already in view, and I remember it; that's the state. So we say that every particle creates a single-use coordination network, and this coordination network exists for every participant — every participating peer.
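That first layer — each peer merging new observations of the same particle into its cached view — can be sketched as a set union, which is order-independent and therefore converges the way D describes (illustrative only; real particle data is structured, not a flat set):

```python
# Each peer caches its last observation of a particle; on receiving data
# it learns only the facts it has not seen yet and keeps the old ones.

def merge(cached: set, incoming: set) -> set:
    new_facts = incoming - cached      # learn only what is new
    return cached | new_facts          # keep the old facts too

peer_a = {"x=1"}
peer_b = {"y=2"}
# the same particle flows both ways; merge order does not matter
peer_a = merge(peer_a, peer_b | {"z=3"})
peer_b = merge(peer_b, peer_a)
```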
D
The second layer is what can be done with a service call. A service call can trigger any kind of event — effects — on the peer, and we have different approaches to that. For example, within the same particle, within the same execution flow, we have a file vault: an isolated, single-use file system, so that our services are sandboxed but can exchange data inside the local peer, inside the same request — things like that. And that's how we integrate, yeah.
The kind of use case is an indexing service, where we have different providers that want to broadcast, like, an update of their systems. If I see the particle updating — I'm sending you an advertisement, or broadcasting, or talking to a set of users — can I update your state of...
B
The state is isolated per service, right? Yeah — so you want to have some replication, right. So maybe another mapping correction here: for each particle you have a very small amount of state that you define — it's kind of like an actor model, like the local state of that object — and when you are going to run it, you, like...
C
It
like,
like
unfreeze
the
state
you
contact,
switch
into
it.
You
then
process
an
operation
and
then
you
produce
some
output.
That's
another
message:
it's
very
similar
to
the
active
model,
and
then
you
potentially
update
the
internal
state
of
the
actor.
And
then
you
like,
freeze
that
my
question
is
from
one
actor.
I
can
access
the
state
of
an
alright.
That's
no!
Well,
you
shouldn't
be
able
yeah,
but
but
the
services
they
are
our
system.
They
they
always
run
so
they
yeah.
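The freeze / unfreeze actor lifecycle just described could be sketched as follows (illustrative, not the Fluence API; "freezing" is modelled as JSON serialization):

```python
# Actor-style lifecycle: state is kept frozen between particles; handling a
# message unfreezes it, processes one operation, emits an output message,
# and freezes the (possibly updated) state again.

import json

def unfreeze(frozen: str) -> dict:
    return json.loads(frozen)

def freeze(state: dict) -> str:
    return json.dumps(state, sort_keys=True)

def handle(frozen: str, op: str, value: int):
    state = unfreeze(frozen)                 # context-switch into the actor
    if op == "add":
        state["total"] = state.get("total", 0) + value
    output = {"reply": state["total"]}       # the outgoing message
    return freeze(state), output             # freeze the updated state

frozen = freeze({"total": 0})
frozen, out1 = handle(frozen, "add", 5)
frozen, out2 = handle(frozen, "add", 2)
```

No other actor ever touches `state` directly — exactly the isolation the speakers agree on.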
B
But how can you describe that in Aqua? That's what I'm having a hard time with. So let's say that we have the indexing actor, and then we have some other service that transfers, or is just waiting, basically, yeah. So you have...
D
Absolutely doable with what we have with Fluence. But anyway, the first approach is to run the computations every time you need them, and stop when you don't. So, for example, that's what one of our users did.
D
He was reading from Ethereum the hash of the WebAssembly, then running this WebAssembly, then getting the IPFS hash of the previous state and copying this state into the service; the service does the new computation, brings the state back to IPFS, and removes the service. So in this case you don't have the service outside the particle, because you do everything — the whole lifecycle — inside. But usually it's not this way; usually the service is long-running.
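That user's lifecycle can be sketched end to end with dicts standing in for Ethereum and IPFS (everything here is a stand-in, not a real Fluence, Ethereum, or IPFS API):

```python
# Ephemeral-service lifecycle: resolve the module hash from the chain, load
# the previous state from the content store, compute, write the new state
# back, and drop the service.

CHAIN = {"module_hash": "Qm-module"}               # fake Ethereum contract
IPFS = {"Qm-module": b"wasm", "Qm-state-0": 10}    # fake content store

def run_once(state_key: str, increment: int) -> str:
    module = IPFS[CHAIN["module_hash"]]   # 1. resolve and fetch the module
    assert module == b"wasm"              #    (pretend we instantiate it)
    prev_state = IPFS[state_key]          # 2. copy the previous state in
    new_state = prev_state + increment    # 3. do the new computation
    new_key = f"Qm-state-{new_state}"     # 4. push the state back to the store
    IPFS[new_key] = new_state
    return new_key                        # 5. the service itself is now gone

key = run_once("Qm-state-0", 5)
```

Nothing persists between runs except what went back into the content store, which is exactly why the whole lifecycle fits inside one particle.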
B
Actually, I want to — yeah, I really want to, because I've been dreaming about this slim libp2p host with computation, which I think you have. Instead of implementing a protocol several times, you could have a single implementation that runs, because as long as you have it on Fluence, you would be able to run it yourself with a single implementation, and if a new host — like a JavaScript node — comes, they just have to worry about the core.
A
So they're kind of trying to do it in traditional web2 ways. So it's basically like Docker — they like to orchestrate and deploy; they're not talking about how they deploy things, right? It means it's basically manually deployed on some machine, and mostly what they care about is: how do they make sure that they can execute, you know, a machine-learning model on top of local data?
G
How do you verify the computation? Like, how do you... I know you have the incoming — basically you have the incoming set: you include particles, you have the original state, the new state, and then you have a set of operations between them. It's a good question, yeah — so, like, why...
D
Yeah, yeah — but I'm not sure about the guarantee that it can't leak, because, well, the only guarantee that I could imagine is that you whitelist the services — or at least the APIs of the services — that have access to your data, and you're sure that they have no output except a number, for example.
E
...at a time, in a single container, but we're modifying it so that you can run a pipeline — a lot of those, with leads and everything, right — but it all happens on one node, one centralized compute provider, right. So the next step after that would be super cool: a more peer-to-peer API. So maybe one provider has one type of data, another provider has, you know, another type of data, and we want to, like, you know, go between them.
G
So what we could do is — you could do a PR with the Ocean provider code that incorporates an Aqua node in it, and then they could, you know, potentially double their readiness, because you could route between them.
G
Because — yeah, the way I think of it is: we think about everyone having their own, kind of like, IPFS node, their own, let's say, GPU, we're assuming. And so, if we have, let's say, an AquaVM on top of that, we can essentially run services through, let's say, these, you know, Python Docker containers, through the GPU. And I'm not sure if you're familiar with Ray?
G
Yeah, right, right. So basically, the way we have it right now — I'll just show you. This is kind of the high-level idea, but you have, like...
G
And then clients are essentially local cloud databases or other compute peers, and then essentially you can compose them whichever way. And then Ray is essentially — you can deploy a class or an object into this object space, you can dedicate resources to it locally, and they're daemons.
G
Essentially, you can also call them — basically, you can call them as a remote function. So I can basically say what module I want, what function, and then the inputs — exactly, that's it, yeah. So this is all done locally, or it could be done in a cluster — it's compatible with Kubernetes. And so, basically, yeah: this is how we think of replacing the scope here with essentially an AquaVM, and then the services would just be, essentially, client calls to — I don't know — to Ray, and then, yeah, basically we have, like...
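The remote-call shape G describes — name a module, a function, and the inputs, and let a dispatcher execute it against deployed objects — can be sketched like this (this mimics the shape of such calls, not Ray's actual API):

```python
# A registry of deployed objects ("object space") plus a dispatcher that
# resolves (module, function, inputs) to a call. Locally this is a dict
# lookup; in a cluster the same shape would be an RPC.

REGISTRY = {}   # module name -> deployed object

def deploy(module: str, obj):
    REGISTRY[module] = obj

def remote_call(module: str, function: str, *inputs):
    return getattr(REGISTRY[module], function)(*inputs)

class Stats:
    """A deployable class with callable methods."""
    def mean(self, xs):
        return sum(xs) / len(xs)

deploy("stats", Stats())
result = remote_call("stats", "mean", [2, 4, 6])
```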
G
Right now, the stuff that we're building is connected to EVM. So it's through, like, Brownie — so you're connecting... you can actually connect these Python processes and trigger that, and enable smart contracts, like, in Python. So you can trigger smart-contract events through Ray, essentially, via Python. And we want to also add Cosmos and, like, Polkadot functionality — that's probably when you pull it out to the next step, and so for that we would probably either... well.
G
That is, in order to create your own, like, Polkadot node, we would essentially use our own Substrate node, and then we would probably incorporate some of the AquaVM stuff to include that functionality — or have a separate AquaVM node as well as a Substrate node, but they both have, like, peer-to-peer functionality, so it might be a bit redundant.
G
Yeah, so essentially I would have a client — the Python would be a client that would trigger through, let's say, MetaMask, or through, essentially, an Ethereum client or Ethereum-based client. Like, I have a Ganache container that essentially allows you to sign transactions from Python, and that's how you connect your calls with Python — I'm sorry, with EVM chains — and you can do similar things with, like, Polkadot as well, any other chain.
G
But you don't have to do it through Python — like, you know, there's always JavaScript — but I prefer not to... I prefer to have all the backend stuff in one language and then all the frontend stuff, like, in JavaScript, or, like, yeah.
G
So we use React.js, and we're also trying to build, like, you know, these... so we're using this graph — kind of, like, visualizing topologies. So we'll use, you know, this type of graphing structure — it's a bit more than that, but, like, you know, we want to visualize the interactions as well, so that it's easy for developers to just, you know, drag and drop and connect things with each other.
G
I guess another thing I was really into was combining this with... I guess you need to form proofs of contribution — proofs of, let's say: if I'm using your VM — let's say it's not on the Fluence network — but if, let's say, someone else is out there and I want to use their, say, GPU, then how do you, I guess, verify that? That's another kind of... there's a lot of problems, yeah, so, like, you can't solve them all.
G
I'm not sure about that one — people are trying to solve that one separately — but, like, yeah: just even having a proof of concept with Fluence would be pretty cool, I think, because then that would open up your product to all these Python developers. It's just insane how many people could probably use this who are Python developers; this would just, like, bring in a...
G
For example, they have two modules, where the first module's aim is, like, to crowdsource data — to get people to add labels, like cats and dogs — that'll be the first module, and then the second module will be, like, training a classifier to classify cats and dogs. So you could crowdsource 10 images, train it, and see what the performance is. If the performance isn't good, go back and crowdsource 10 more.
E
And then train it again. And this kind of dynamic way of training models, like, with crowdsourcing and training in a loop, has never been possible with machine learning, and I think we've just completed that — yeah. And it looks like the key is Aqua, because it's easily represented — exactly. And, you know, you're thinking about federated learning, when you have deeper, heavier parameters learned on different machines and combined together — and it also looks like MapReduce; that's a very natural pattern for it.
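The crowdsource-then-train loop described here could be sketched like this, with a trivial majority-label "classifier" standing in for a real model (illustrative only; the labellers, threshold, and model are all made up):

```python
# Crowdsource a batch of labels, train, check performance; if performance
# isn't good enough, go back and crowdsource 10 more — training in a loop.

import random

random.seed(0)

def crowdsource(n):
    # fake labellers: mostly "cat", with some noisy "dog" labels
    return [("img", "cat" if random.random() < 0.8 else "dog") for _ in range(n)]

def train(dataset):
    cats = sum(1 for _, label in dataset if label == "cat")
    majority = "cat" if cats >= len(dataset) / 2 else "dog"
    accuracy = max(cats, len(dataset) - cats) / len(dataset)
    return majority, accuracy

dataset, rounds, accuracy = [], 0, 0.0
while accuracy < 0.75 and rounds < 10:     # performance not good -> go back
    dataset += crowdsource(10)             # crowdsource 10 more images
    model, accuracy = train(dataset)       # train it again
    rounds += 1
```

Each loop iteration is the kind of step that, in the setup being discussed, would be expressed as its own particle.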
G
Yeah, I think it would be really, really useful — because I believe people should set up their own peers and, you know, manage their own peers and be able to connect with other peers that are very local; and then, obviously, if they can't do that, they would need some public infrastructure, so they can go through, like, the Fluence network, and, yeah, like, it's...
G
Between agents, it's all implemented in Python and Go — so similar to how IPFS is. It's kind of like: you have a peer — a Go peer — and then that's making, like, gRPC calls with this Python part, and so each agent is essentially just a model, and then they just communicate with each other, yeah. So, like, we're still in a pretty early kind of stage. There's also Bittensor, which is implemented... I've been looking more and more into that, I think, yeah.
G
It's called Bittensor. So basically it's, like, a network of all these... they take the Shapley values of all these models over one problem, and then they try to find which models are the best.
G
It's only solving text models — they're just doing text modeling right now — but, yeah. The way I was thinking of extending this: so they're actually using a Substrate — like, Rust, you know, Substrate/Polkadot — kind of, you know, node as a peer-to-peer layer, and then they had, like, a Python layer on top. But I was thinking, like, you know: what is the value of a network, right? The value of the network is essentially the API.
G
It's the endpoints — like, the endpoints are essentially providing value to certain customers. So, like, I think they're just focusing on one, kind of, network — but what if you could allow people to create their own networks and allow them to form their own endpoints, so that whoever buys into those endpoints fuels value into that network?
F
You're describing exactly our idea of soft networks. Yeah, yeah — it's really pretty cool, because, yes, the idea was that, like, public interfaces form... so it's really, really what we're thinking about, and what we want to implement — but we don't have enough capacity now, and we've postponed it; but yes, it's sitting there as a task, yeah. Then, basically, I have a question: what is the state of your project? Is it, like, already implemented somehow, or is it only... I know it's only...
G
So, at least for me — I know you've done, like, a lot with Ocean Protocol, but for me it was like: the Ray stuff is integrated; like, I can basically show you how to do it locally. You can deploy it, and... I just need to connect them across, like, different peers, and it's all...
F
Yes. And also — do you know how WebAssembly could be integrated with neural networks? So there is, like... WebAssembly could be considered as a white box that has exports and imports, and, you know, like, WebAssembly can call an imported function.
F
The host system satisfies the imports, and there is a special standard called WASI — it stands for the WebAssembly System Interface — and there is a sub-standard, wasi-nn, the WebAssembly neural-network interface, which extends WASI and is a standard for neural networks. I'm not sure whether it's serious for you or not, but...
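The import/export "white box" picture, with a wasi-nn-style host function behind an import, can be modelled in plain Python (a toy model — not real WASI, wasi-nn, or any Wasm runtime API):

```python
# A module is a black box that can only reach what the host wired into its
# imports; a host-provided "nn_infer" import lets it run inference on
# hardware (e.g. a GPU) the module itself cannot see.

def host_nn_infer(weights, inputs):
    # host-side "neural network": here just a dot product standing in for
    # real inference executed outside the sandbox
    return sum(w * x for w, x in zip(weights, inputs))

class WasmModule:
    """White box with explicit imports and exports; it can only call imports."""
    def __init__(self, imports):
        self.imports = imports            # everything the module may reach

    def export_classify(self, inputs):
        score = self.imports["nn_infer"]([0.5, -1.0, 2.0], inputs)
        return "positive" if score > 0 else "negative"

module = WasmModule(imports={"nn_infer": host_nn_infer})
label = module.export_classify([2, 1, 1])
```

Swapping `host_nn_infer` for a GPU-backed implementation changes nothing inside the module — which is the point of standardizing the import surface.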
G
Can it, like, make calls — let's say, if you have a WebAssembly kind of, like, VM making calls to, let's say, Python, or maybe it could be, like, a local API? Yeah — I'm not sure whether the standard would be enough; it's highly likely that it won't, because it's...
F
On
a
proposal
stage,
but
that
if
so,
it
could
be
used
from
our
runtime
directly,
meaning
that,
like
you,
could
just
deploy
a
module
or
service
that
could
call
this
import
functions
like
from
the
standard.
F
And
then
this
function
would
be
called
from
without
the
front
and
it
would
be
called
actually
like
python
or
other.
Like
neural
network
called
like
gpu.
G
It
doesn't
measure
like
it
depends,
depending
on
your
like
runtime,
depending
on
how
it
is
fired
yeah.
So
I
guess
like
if
that,
so
that
assumes
that
all
of
the
the
whole
runtime
has
to
be
within
what
I
was
in.
They
can't
like.
So
I
was
thinking
like.
Maybe
you
can
like
have.
Can
you
do
api
calls,
but
then
like
to
like?
Let's
say
python:
do
you
like
your
vocabulary,
because
that
might
be
a
better,
a
better
fix?
Yes,.
G
What's
the
name
program,
so
I
would
have
a
talk
about
our
runtime
and
I
would
also
recover
the
topic.
But
yes,
I
think
if
it
just
don't
require
some
form
of
connection.
G
You
wouldn't
need
to
like,
as
long
as
your
environment
is
is
running
like
I
guess.
As
long
as
the
container
is
running
with
the
python
environment,
gq
enabled
then
like
you
can
call
that,
through,
like
you,
wouldn't
need
explicitly
to
have
that
embedded
in
webassembly
just
have
a
function.
That
would
call
that,
like
local
kind
of
like
a
lovely
vehicle,
essentially,
yes,.
G
Yeah,
be
that
that'd
be
really
cool,
that'd,
be
really
important,
yeah
that
would
be
yeah.
So
what's
right.
A
A lot of people thought: are you, like, doing scaling — like, Ethereum scaling solutions — is that what you're focusing on here? We were researching this before, but we were, like, always...
A
Kind
of
starting
from
different
direction
like
not
not
from
the
direction
of
hey,
like
we
have
a
theorem
in
a
video
and
how
do
we
scale
at
the
end?
We
were
like
hey,
we
want
to
build
the
clouds.
How
are
we
going
to
use
blockchain
to
build
a
cloud
and
we
have
like
a
100
clusters
and
things
like
that?
We
could
deploy
like
a
spin
of
the
ad
hoc
television
clusters
under
there
yeah
and
you're
like?