A
A runtime thingy could be nice, and actually this links with an idea that probably everyone has had throughout all of these years: the idea of decomposable computation. To have a network where we have a lot of devices and computation can flow among them, kind of the Fluence Labs approach, where we transform the network into an operating system: we have memory, we have computation, and we can orchestrate executions throughout the whole network. So that's the idea behind this, but here is the short-term problem that we have.
A
I don't know if you were in the hierarchical consensus (HC) talk, but one of the things that we are trying to do to scale Filecoin is to deploy new subnets, right? The same way that we have a load balancer and balance load over a backend, we will try to spawn new subnets as we need them, instead of explicitly partitioning the state of Filecoin.
A
Every time that someone wants to implement a consensus... so, if we have different Filecoin clients, one written in Go and one written in Rust, and someone wants to support a specific consensus algorithm, they need to implement it in every single language, which in practice means we would end up with a single node supporting it. Whereas if we had this common runtime in all of the clients, we would have a way of implementing a single consensus protocol once and having the different client implementations run it. So this is the problem, and it can be applied more broadly.
A
So the idea is that if we have this really light node with this runtime, we would be able to have protocols that are implemented once. Granted, if there are bugs, those become vulnerabilities everywhere, and so on, okay. But from the perspective of the end-to-end cycle of deploying a network, this would be really interesting. So the motivation is: this is our problem.
A
The motivation is to be able to have this script, or this libp2p host, which handles all of the networking, with a small runtime.
A
On top of that, we can attach all of these protocols and pull them from the network. We may have different implementations of this script, or of these nodes: a Rust implementation, a JavaScript implementation, whatever. But the idea is that we should have this target runtime where we can run all of the different behaviors that we want for our nodes. Bruco mentioned runtime upgrades; this would be great too.
A
The fact that we can ship really small nodes and configure the behaviors that we want for each node, that is another thing that we are seeing. So, this protocol is being implemented in a fork of Lotus, and now we are starting an implementation with Forest, right, so we have two clients that run HC.
A
The problem is that Lotus is heavy, like really, really heavy, with a lot of behaviors, and the only thing we're interested in from Lotus is syncing. We don't want storage and sealing and all of that; we just want to sync with mainnet. It would be great if we could have a really shallow node and say: okay, we just want to use the syncer, the networking side of things, the DHT, and so on.
A
It would be great if, while we design this universal runtime, we keep these things in the back of our heads, because we transform the network into an operating system and we can treat it as such. We could even come up with a programming language; I think that Aqua does kind of like this already. We would have all of the abstractions that we have in a computer: we have memory, we have storage, we have, potentially, computation.
A
Networking is hard, and it has proven to be horrible. And then we have a universal runtime where we attach the actual implementations of the different protocols. So an IPFS node could look something like this: IPLD codecs, Bitswap, and the Kademlia DHT. A Filecoin node could be something like that, with the FVM also targeting the runtime, so all of the protocols target the runtime, then Expected Consensus, the actors, and so on. An HC node could be something like this.
A
Here I could have different consensus algorithms and the HC-specific protocol, and they would all be operating in the same network. We could even have an Ethereum node that just targets proof of work and the syncing protocol specific to Ethereum. So the idea is to have these content-addressable protocols. It's always the same idea: the same way that we can fetch computations from the network and execute them in our node, what if we could determine the behaviors of our nodes through CIDs? Say I want to run these three protocols: I just mark the three CIDs and I would be running them, and an upgrade would be as simple as pointing to another CID. I'm oversimplifying it.
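To make the "behaviors as CIDs" idea concrete, here is a minimal sketch. All names here (`cid_of`, `publish`, `Node`, the in-memory `STORE`) are hypothetical, not any real API: a node's behavior is just a manifest of protocol CIDs, and an upgrade means swapping one CID for another.

```python
import hashlib

def cid_of(module_source: str) -> str:
    """Toy stand-in for a real CID: a hash of the module's content."""
    return "bafy-" + hashlib.sha256(module_source.encode()).hexdigest()[:16]

# Toy content-addressed store of protocol implementations,
# standing in for fetching modules from the network.
STORE = {}

def publish(module_source: str) -> str:
    cid = cid_of(module_source)
    STORE[cid] = module_source
    return cid

class Node:
    """A 'shallow' node whose behavior is a list of protocol CIDs."""
    def __init__(self, protocol_cids):
        self.protocol_cids = list(protocol_cids)

    def resolve(self):
        # Pull each protocol implementation by its CID.
        return [STORE[cid] for cid in self.protocol_cids]

    def upgrade(self, old_cid, new_cid):
        # A runtime upgrade is just pointing at a different CID.
        i = self.protocol_cids.index(old_cid)
        self.protocol_cids[i] = new_cid

bitswap_v1 = publish("bitswap-v1 implementation")
dht = publish("kademlia-dht implementation")
bitswap_v2 = publish("bitswap-v2 implementation")

node = Node([bitswap_v1, dht])
node.upgrade(bitswap_v1, bitswap_v2)
```

As the speaker says, this oversimplifies (distribution, verification, and sandboxing are the hard parts), but it captures the shape of the idea: the manifest, not the binary, defines the node.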
A
It would not be quite that simple, and I would love to see how it works under the hood. And then there's another concept that I really like from IPLD. We are always saying "I'm fetching something with a CID from somewhere", but I think it would be really interesting, as an additional model for this, to have something like the link system that we have in IPLD Prime: some way of allowing the user to choose the behavior of their loader.
A
Because it's not always the same. That's why I'm not tying it to IPFS: you don't always just go to IPFS. You may want to access a CID from your local device, or load the protocol from Filecoin. So I think it would be really interesting to also have these dynamic loaders, or link systems, or whatever, that you can use to teach your node how to get the behaviors to execute the protocols over this runtime. And yeah: content-addressable Aqua.
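The "dynamic loader" idea can be sketched as a pluggable resolver, loosely modeled on IPLD's link-system concept. Everything below is hypothetical illustration, not the go-ipld-prime API: the user registers loaders (local device, network, Filecoin, ...), and the node tries them in order when asked to fetch a behavior by CID.

```python
class LinkSystem:
    """Toy link system: an ordered chain of loaders, each a function
    taking a CID and returning bytes or None. The user decides where
    CIDs come from and in which order sources are tried."""
    def __init__(self):
        self.loaders = []

    def register(self, loader):
        self.loaders.append(loader)

    def load(self, cid):
        for loader in self.loaders:
            result = loader(cid)
            if result is not None:
                return result
        raise KeyError(f"no loader could resolve {cid}")

# Two example backends: a local cache and a stand-in for the network.
local_cache = {"cid-consensus": b"consensus module (local copy)"}
network = {"cid-consensus": b"consensus module (network copy)",
           "cid-syncer": b"syncer module"}

ls = LinkSystem()
ls.register(local_cache.get)   # try the local device first
ls.register(network.get)       # fall back to the network
```

With this shape, "teaching your node where to get behaviors" is just registering a different chain of loaders; the runtime above it never changes.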
A
Like Brooke said, let's discuss it; let's see if we can come up with a short-term plan. I don't know what the requirements are, or the models, or what pieces we can reuse, but I feel that a lot of people are working on this and there are a lot of things that we can reuse. And that's it. Thank you.
B
Questions: can you talk through how you imagine a hierarchical consensus network running, with some chain running at a really fast speed and so on? Can you talk through the processing model? Even though you're using a blockchain model for consensus, that doesn't mean that the jobs themselves have to be consensus. Could you elucidate the utility of VMs like this, and these kinds of models, for running many other kinds of jobs from within the blockchain processing model?
A
So the "what if" idea is that you can run any consensus: BFT, state machine replication. In the end we want to do state machine replication more than consensus, and have some kind of verifiable proof that we can propagate somewhere else, so that we trigger the state changes.
A
We don't even need consensus; maybe it's just verifiable computation of our moves, but then we want to propagate a verifiable proof that we played the game and I was the winner, so that we can update the scoreboard. With HC, the only thing that we provide is the pipes for this. So the idea is that we are running this tournament, and we all deploy a subnet for the tournament.
A
We are running a consensus, maybe a proof of work or a BFT, and then what we want is to create a new subnet. The two things that we need are: to talk the same semantics, which we have, because that's what the HC protocol gives us; and then a runtime where we can do all of the verifications and run this state machine replication. So I think that this could really help with state machine replication.
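A minimal sketch of the state-machine-replication-plus-proof idea (purely illustrative, not the HC protocol itself): replicas apply the same ordered log of moves, and the resulting hash chain acts as a checkable fingerprint that could be propagated to a parent to trigger state changes.

```python
import hashlib

class ReplicatedStateMachine:
    """Deterministic state machine: applying the same ordered log of
    operations on any replica yields the same state and the same
    hash-chain digest, a toy stand-in for a verifiable proof."""
    def __init__(self):
        self.state = 0
        self.digest = hashlib.sha256(b"genesis").hexdigest()

    def apply(self, op: int):
        self.state += op
        # Chain each operation into the digest so the whole history
        # is committed to by a single value.
        self.digest = hashlib.sha256(
            (self.digest + str(op)).encode()).hexdigest()

def replay(log):
    sm = ReplicatedStateMachine()
    for op in log:
        sm.apply(op)
    return sm

log = [3, -1, 10]          # e.g. scored moves in the tournament example
replica_a = replay(log)
replica_b = replay(log)
```

The point the speaker is making maps onto this directly: the ordering of the log can come from any consensus (or none, if the computation is verifiable), while the common runtime is what lets every client replay and verify it identically.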
C
It's like, theoretically, you could just make a subnet where everyone says: okay, we really don't care, we're just going to pass in the boolean true, and that's our consensus, but we'll still do operations. So you don't get a useful consensus out of it, and it's not consensus per se, but it is using that sort of model: yes, we go up above and we share state, but we're only sharing the state because we're pulling it down from the parent, and we don't actually share any back.
D
I just want to note that, beyond all of that, this architecture lets you do something useful. Currently, when people build apps on IPFS, they'll have a bunch of stuff in the browser, and the browser tends to fall over. The other way to do this is to have something in the browser try to talk to a local node, but then you have that really big barrier.
D
This lets you move a lot of heavy code into a local node. So even for people on a desktop this is quite useful: some form of daemon could basically be this runtime, where it runs your IPFS-related services and libraries and whatever your apps need, and then you can have most of the GUI code in the browser. You can do all the connections, whatever you want, inside the daemon.
E
Thank you for the talk. I have a question: what are the main differences between this idea and the Fluence protocol?
A
Okay. Content addressability, to be honest. You kind of have everything there; you even have this language to propagate the jobs, but yeah, the data...
A
So it's content addressability. There's a video where we were discussing this yesterday: even the language is really open to being represented as content, and to having CIDs as a first-class primitive, so that it handles all of this loading and all of these things under the hood. Because you're already doing those calls to the network, right, but not by CID. So you don't understand IPLD.
A
I don't know if... so, it's the content addressability part of things. For instance, you specify jobs through a hash, but that's it. And if you had jobs that were a CID, and these intermediate states that you're... so, I don't know if I have your... no, I don't have that. But in your VM you had the current state, the previous state, and then a few inputs.
A
That would even lend itself to a pause-and-resume model. I mean, if something breaks, I could fetch the previous state and say: hey, my node ran out of...
A
I could point to all of the work that I've already done through the CID and send it to my Fluence node, which is at home or in AWS. I sometimes say that I really like Bitswap and GraphSync over HTTP because of this: you can reload from where you left off. Right now, if an HTTP download fails, you have to start from scratch in many cases.
A
With these, you can start from the latest block that you have for the execution. I think it's the same concept: instead of having to do the executions end to end, I would be able to resume them from where I left off.
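The resumability point can be sketched like this. It is a toy stand-in for Bitswap's block-level behavior, with hypothetical names throughout: because content is split into addressable blocks, a restarted fetch only requests the blocks it does not already have, instead of starting from scratch the way a failed HTTP download often does.

```python
def fetch_blocks(wanted, have, network):
    """Request only the blocks we are missing. Blocks already in
    `have` are reused, so a restart resumes rather than repeats."""
    requested = []
    for block_id in wanted:
        if block_id not in have:
            have[block_id] = network[block_id]
            requested.append(block_id)
    return requested

# Toy network holding five content-addressed blocks.
network = {f"blk{i}": f"data{i}" for i in range(5)}
wanted = list(network)

have = {}
first = fetch_blocks(wanted[:3], have, network)   # transfer dies partway
resumed = fetch_blocks(wanted, have, network)     # restart: only the missing blocks
```

The same principle is what the speaker is proposing for execution: checkpoint intermediate states by CID, and resume a computation from the last state you hold instead of re-running it end to end.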