Description
Blockless: https://blockless.network/
Nunet: https://www.nunet.io/
BOINC: https://boinc.berkeley.edu/
A
Okay, we're started, so hello, everyone. Thank you so much for joining the Compute over Data working group session. This is our first time getting together as a community after the excellent series of events in Lisbon a few weeks back, earlier this month. Before I hand over to our first presenters, I wanted to share a couple of points of reference. If anyone has follow-up discussions or questions, or wants to share documentation for their project, please jump into the Slack link here for the Compute over Data working group. That will take you to the Filecoin Slack server and specifically to the Compute over Data working group channel, which is a public channel, so you do not need anyone's permission to be added there. We are also keeping track of the recordings from the summit at codsummit.io, and in the Slack channel there is a link to the talks broken down by speaker, so I'll repost that so everyone has it available now.
B
All right, so we are NuNet, and we are building a global economy of decentralized computing; that is our slogan. Basically, the concept is that we want to build an economy of hardware, software and data, where all of these entities interact in a flexible, decentralized computational universe, which can be described as some sort of decentralized hardware and software mesh without a central point, of course. I think I will start with the vision.
B
So what does that mean? In terms of vision, what we want to build is an evolving, decentralized computational universe, meaning that software agents can find compute resources to execute themselves on, and pay for their own execution costs.
B
Knowing what those execution costs are, and knowing their place in the business model, hardware resources or agents can advertise their capabilities, so that other components in this whole computational universe can find them in the network, send requests and receive responses. We want to integrate data sources, which can declare their data content and price that data, so that software agents can buy data points or data sources based on that declarative knowledge.
B
The whole thing then also becomes an evolving system of algorithms and software, where application developers could construct the backends of their applications on top of this infrastructure that we would build, with algorithms running on their own, paying for the compute resources they use on the platform.
B
One of the problems we want to solve is utilization of latent compute resources, because there are a lot of compute resources around the world which are not utilized, since everybody is using the big data centers. That could be achieved by building connectivity between devices of different types and localities, running different software and so forth, into some kind of single decentralized cloud. Single meaning they are connected.
B
But they are not managed by a single entity. Our assumption, based on certain research in the space, is that by building this decentralized cloud, decentralized by design, we can deliver better business value and security for the computational universe than centralized approaches can even imagine offering. There is a lot of work there.
B
Just a little bit of history: NuNet is a spin-off of SingularityNET, the global AI marketplace. We were incubated there as a small team since 2018, and we were spun off as a separate company in 2021, when we issued our cross-chain utility token, for which we are building usage so that it powers the whole economy and handles the settlements in the network. Currently we are a team of 90-plus people, more than half of them developers.
B
Yeah, I will skip the rest. In terms of the conceptual architecture, we are basically building a layer between software and hardware, as I mentioned: a layer of location and context awareness, mobility, and communication (between hardware and software, hardware and hardware, and software and software), plus a payment layer, so that all these components, which can come from different owners, different entities, etc., can also be integrated economically, meaning they exchange value and pay each other. Now, in terms of the ecosystem.
B
So basically we aim to integrate data providers; compute providers, that is, hardware; algorithms (right now, as we are a SingularityNET spin-off, we are looking to integrate SingularityNET, though we are not constrained to that); consumers of applications, which are business models themselves; and application developers.
B
At a certain point we would like to construct a way for people to use NuNet as a backend and to monetize it; basically we are looking at being able to monetize each call, depending on what kind of hardware and what kind of software is running. So here is a short-slash-long list of what we want to achieve in terms of tech. Autonomy of hardware and software agents, or nodes. Context awareness, meaning a kind of computational reflection: every node should understand its place in the topology based on where it is.
B
What the data is, what the latency of its connections is, etc. Computational reflection in the sense that we want a layer where algorithms could understand, to a certain point, what hardware they run on, what kind of hardware they need, and how much that costs, and make those decisions more or less autonomously: how to select that, and vice versa.
B
And the hardware should know what kind of software is running on it and at what cost, which amounts to making all these entities first-class citizens with a certain level of intelligence within the network. Mobile computational processes, meaning processes should be able to port themselves to wherever they want to be in the network, in order to optimize the value exchange between data, compute, hardware, etc., including payments at that layer. Logical scalability, meaning that each agent in the network should be able to spawn, and to ask other agents to do work for it.
B
Collective verification, validation and security. These are requirements for the platform that we want to build in, and I could put a question mark there, but "collective" means that we do not want to become any central point that decides who does verification and who does validation and security. This is where the whole space comes in, and where blockchain, at least conceptually, comes in with its concepts.
B
So we are building an open-source platform that allows choosing the permissioning level, which means we would like to open it up for commercial usage: we do not really want to prevent proprietary code from running on the platform, but we want to keep the platform itself open source. Completely decentralized ownership, meaning that every machine in the network and every algorithm in the network can be owned by different people, different entities.
B
Conceptually, the basis of how we try to build the model is that we take the actor model of computation as a guiding model, which means everything is an actor. I will probably skip that part, but let's take it as given.
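Since the talk leans on the actor model, here is a minimal, illustrative sketch of the idea in Python. This is generic actor-model code, not NuNet's implementation, and all names are invented: each actor owns private state and a mailbox, and reacts to messages one at a time, so no locking is needed around its state.

```python
import queue
import threading

class Actor:
    """Minimal actor: a mailbox plus a single thread that processes
    one message at a time, so the actor's state needs no locking."""
    def __init__(self):
        self.mailbox = queue.Queue()
        self.total = 0  # private state, touched only by this actor's thread
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, msg):
        """Asynchronously deliver a message to the actor."""
        self.mailbox.put(msg)

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:      # poison pill: stop the actor
                break
            self.total += msg    # react to the message

    def stop(self):
        self.mailbox.put(None)
        self._thread.join()

adder = Actor()
for n in (1, 2, 3):
    adder.send(n)
adder.stop()
print(adder.total)  # 6
```

In a system like the one described, hardware nodes, software agents and data sources would each be actors exchanging request and response messages.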
B
This is a computational model of the internet as a platform. In a certain sense it is a self-organizing graph of different entities.
B
It is connected together by certain APIs that we are building, or aim to build. In another sense we can say it is serverless-ish; I took the "ish" from, I think, David's presentation some time ago, since nothing is completely serverless.
B
Then we want to build this layer of collaborative searching and matching between all the agents, which brings us to a research-project-level thing we call AI-DSL, an AI domain-specific language: a general-purpose description language for AI engines, or software in general, plus hardware and data. We cannot really expect to build this immediately, but I believe we can build it iteratively, starting from manually constructed DAGs which define programs, hardware requirements, etc.
B
And then making it more and more intelligent as it goes, opening it up for the algorithms themselves to choose hardware, or maybe to choose other algorithms, etc. So the next thing is a somewhat unordered list.
B
These are certain aspects of the system we are looking at, and things can be missed here because it is pretty huge. NuNet's identity management and authentication system is something we are looking to research; it is part of the research we are looking forward to, and we are trying to find partners to build it up. So there is also data management and automation within the network, and authentication implies identity.
B
We want to build a reputation API, which should allow reputation systems to interact with each other, rather than building one reputation system which supposedly has to know everything. A tokenized data fabric, or mesh, with metadata describing the data, so that algorithms can choose what kind of data to use for whatever computation they are doing. And caching of compute state: this is something we need to solve, and we are looking very much toward IPFS for it.
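As a rough illustration of what a reputation API that lets independent reputation systems interoperate might look like, here is a sketch under assumptions: the interface, the score scale in [0, 1], and the averaging rule are all invented for illustration, not NuNet's actual API.

```python
from typing import Protocol

class ReputationSource(Protocol):
    """Common API each independent reputation system exposes."""
    def score(self, agent_id: str) -> float: ...

class UptimeReputation:
    """Toy system scoring agents by observed uptime fraction."""
    def __init__(self, uptime):
        self.uptime = uptime
    def score(self, agent_id):
        return self.uptime.get(agent_id, 0.0)

class ReviewReputation:
    """Toy system scoring agents by averaging peer reviews."""
    def __init__(self, reviews):
        self.reviews = reviews
    def score(self, agent_id):
        r = self.reviews.get(agent_id)
        return sum(r) / len(r) if r else 0.0

def combined_score(agent_id, sources):
    """Aggregate across systems through the shared API, so no single
    system has to know everything."""
    return sum(s.score(agent_id) for s in sources) / len(sources)

sources = [
    UptimeReputation({"node-1": 0.9}),
    ReviewReputation({"node-1": [1.0, 0.8]}),
]
print(combined_score("node-1", sources))  # 0.9
```

The point of the design choice is the shared interface: new reputation systems plug in without changing consumers.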
B
So, as I said, what we are trying to do first of all is build a backend where you can just drop a manually constructed DAG which defines the logic of the backend you want to run. Given that all those algorithms are onboarded onto the network, we can simply construct and run it dynamically, based on the use case's requirements for latency and so forth.
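A minimal sketch of the "drop a DAG" idea: a manually constructed DAG whose steps declare their dependencies and hardware requirements, scheduled in dependency order. The step names and the requirement fields are hypothetical, not NuNet's actual format.

```python
from graphlib import TopologicalSorter

# A manually constructed DAG: each step names the steps it depends on
# and declares (hypothetical) hardware requirements.
dag = {
    "fetch":  {"deps": [],                 "needs": {"ram_gb": 1}},
    "clean":  {"deps": ["fetch"],          "needs": {"ram_gb": 2}},
    "train":  {"deps": ["clean"],          "needs": {"ram_gb": 8, "gpu": True}},
    "report": {"deps": ["clean", "train"], "needs": {"ram_gb": 1}},
}

def schedule(dag):
    """Return the steps in an order that respects every dependency."""
    ts = TopologicalSorter({name: spec["deps"] for name, spec in dag.items()})
    return list(ts.static_order())

order = schedule(dag)
print(order)  # ['fetch', 'clean', 'train', 'report']
```

A real backend would additionally match each step's `needs` against advertised node capabilities before dispatching it.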
B
We can provide, or at least aim to provide, the answer to the front end, whatever the application is. In the future we would like to reach a certain stage, which is probably the long-term vision, where we can build those DAGs dynamically based on intelligence which is in the network itself. As for the way we are building it: we are building what we call an open API of APIs covering the whole functionality of the NuNet platform.
B
We define it as a collection of open APIs for each component within the platform, and we make these APIs publicly available. Then we try to develop the platform ourselves on the basis of those APIs, and we also expose everything to the community, both people who want to use the platform and platform integrators, so that we can coordinate the development within the ecosystem.
B
The ecosystem, as I said, is defined by this open API of APIs. I will publish this presentation so that you can follow the links. Basically we define the APIs in, I think, the AsyncAPI specification, and then we publish them.
B
We use it to see and to describe things: some of those things are just requirements for certain functions to be implemented, and some of them are already implemented. The goal is that we then implement the functions and calls that we need for our integrations against those API specifications.
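For a sense of what describing a component in the AsyncAPI specification looks like, here is a small, hypothetical fragment. The channel name and payload fields are invented for illustration and are not NuNet's published API.

```yaml
# Hypothetical AsyncAPI document for one platform component.
asyncapi: '2.6.0'
info:
  title: Device Management Service (illustrative)
  version: 0.1.0
channels:
  device/telemetry:
    subscribe:
      summary: Capability advertisements a device publishes to the network.
      message:
        payload:
          type: object
          properties:
            deviceId: { type: string }
            cpuCores: { type: integer }
            ramGb:    { type: number }
```

Publishing such documents per component lets integrators generate clients and validate messages against the spec.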
A
Okay, Kabir, just one little side note: this is excellent so far. If you could wrap up in maybe two to three more minutes, we'll pass on to the next group, but I know you have a lot of content, so thank you.
B
I think I will skip ahead, but basically this is an image that I think I made about three years ago, and this is what we want to get to. We want to build this API of APIs, which connects the different platforms in the space, and we also want to provide the tokenomics, which means value exchange.
B
The way we are developing is what we call a use-case-based platform development model, which means we are integrating use cases and building features of the platform based on specific use cases we already see, based on what the platform can be used for. Therefore we are looking for platform integration partners, meaning platforms or solutions that can run certain things well, based on what I said, and integrate into this.
B
That is, integrate based on this API, and on the whole API of APIs, which hopefully we can develop collaboratively. We are also looking for use case integration partners, where we look for business models and real-world users that can already be served at whatever level of the platform we have. So we try to bridge the two and find the sweet spot for the next step of development that we can reach. I guess I will skip the use cases and prototypes; these are the use cases.
B
These are prototypes that we have built, and these are use cases that we are building right now; we have a bunch of things. One of the things I wanted to say is that we are building blockchain integration, right now with the Cardano blockchain, but basically we are blockchain-agnostic: if we see that we need to do certain settlements between the agents, we basically expose this. I mean, we build this economics API.
B
We expose it via each component in the network, which we call the DMS, the device management service, and we allow everybody to use it. Then, just a few days ago, we launched a network which right now is a Discord server, but that is not the main thing.
B
Yeah, it is just the kind of demo to show that we have something, but I think the main thing is what I was showing: what we are looking for is collaboration with the community. So I would invite you to get in touch with us and try to connect. We are trying to categorize the community in order to work out how to collaborate, on the platform integration level and on the use case integration level, and to build up this network where we can actually develop things on that basis.
A
Thank you so much. Normally we would save some additional time for Q&A, and I think people probably have a lot of questions, because I really like your architecture and the APIs, but at a minimum we will post all this information in the Slack channel, and then, if we can get a recording with you, we might post that as well, or maybe schedule you for a future session. I'm thinking 30 minutes is a better fit.
A
Keep going some more. All right, well, let's transition to our next set of speakers, from Blockless: Butian and Derek. Are you guys ready? Take us away when you're ready. Sure.
E
All right, I swear I know how to use technology. Okay, so I should be unmuted and you can see my slides. I'm just going to share one screen; I don't know if sharing two broadcasts both at the same time, but just so that it's readable. So hello, everybody, and thank you for giving us the time today. I'm Derek Anderson of Blockless, one of the members here; Butian is also here. I'll give a quick introduction about myself: I've been working on the internet...
E
...since the mid-90s. I did a stint at big Fortune 200 companies, from PayPal and eBay to LG Electronics and Walmart. I've got a couple of patents around security, consumer-electronics security and sandboxing, and I've been working in web3 for about three years now. You may be familiar with one of my alma maters, which is the Akash Network; I was there for about two years before switching gears and joining this team. We have a little bit different vision on what we think.
F
Oh yeah, sure. Hi everyone, it's a great opportunity to be here to present Blockless. My first project, which I did back in 2017, was awapi, and later on I moved to the investment side of web3. As for the team, right now we have 13 people working full-time on Blockless, and a couple of the engineers are themselves entrepreneurs or former CTOs in their own right, who used to run projects backed by layer-one ecosystems as well.
F
We are spread all across the globe: the majority of us are here in the US, with Derek in South Dakota and a couple of folks in the Bay Area, while the rest of the engineers are in India and China.
E
All right, so what problem exactly is Blockless solving? I think we're all, as part of this group, aware of it: we want compute to be verifiable as well, and that's exactly where we're looking at moving our compute layer. The special thing that we're trying to do is abstract the scalability and fault tolerance away from the developer who is actually developing these applications.
E
There's really a big benefit that you get today when you develop blockchain-based applications, or use a blockchain as a backend, and we don't see that same kind of benefit coming to the middle layer (the API layer) or the end-user layer; a lot of that has been skipped in the current topologies. Okay, so what is our solution in this problem space? Really, it is verification and composability, modular verification as well.
E
Those are really the pillars and the foundation of the compute system we're creating. We want to ensure that all individual components behave as expected and have verifiable use, so that developers can build really complex systems by combining these smaller portions of the network.
E
A lot of what this does is that we use Wasm to wrap a lot of these functions and make those functions composable together. As for the product itself, so we can get more into what Blockless is doing: it's an entire suite of products. There's the developer experience tooling that will help a developer build these serverless decentralized applications. We have customizable consensus, which is really quite big for the platform, and that's where IPFS and its protocol come in, at the base of our node.
E
Our node topology really runs on the IPFS gossipsub protocol. We use that protocol to do a dynamic roll call, where node workers fall in line with whatever consensus algorithm is defined: pBFT, Raft, or simply a first-in, first-out, fastest-node-worker scenario. The Wasm payloads are currently all distributed through IPFS and Filecoin portals, using the IPFS gossipsub protocol, with direct connections for the execution layer itself.
E
Really, it's supposed to be easy to use, with single-point distribution; we want rapid adoption of this, and that's one of the reasons we've gone with a Wasm-based interface, because of all the benefits it brings to an application like this. Our execution layer is meant to be high performance. The P2P layer just uses IPFS for discovery; there is no consensus on that layer.
E
We actually run a separate blockchain layer that's outside of IPFS. Right now we're using Cosmos, but we have tried it with Ethereum, and it really could be based on any L1; that's preferential. The key here is that we're augmenting whatever the L1 is with an IPFS-based collection network. The nodes themselves are really portable: because the nodes don't do anything blockchain-like and we're not keeping state, they are just focused on compute.
E
We can really run everywhere, especially with the benefit of Wasm: compile once, run everywhere, with a fairly high degree of native execution. We can run on smartphones, laptops, industrial-grade servers, even in the web browser. We have multi-language support: where a lot of smart contracting falls down is that it requires specialized languages to interact with the network.
E
Our network, our workers and our execution environment rely on standard WASI and Wasm protocols, which are all defined by specification, so we can support any language that compiles directly to Wasm. Some of the ones we support directly within our team are Rust, Go, JavaScript, C and C++, but we could obviously even do .NET, Python, or any of the other languages and environments out there.
E
It's secure: by nature the Wasm machine itself doesn't have an interface to the outside. So, with IPFS as our protocol and discovery layer, what we're adding on top of that is the system interface that controls how the Wasm interacts with the entire system. All of this is then put onto IPFS: IPFS becomes the storage for our assemblies, the Wasm itself and the manifests, using things like IPNS and pinning.
E
We can then start to create an assembly structure where we pull these from IPFS to assemble an execution strategy for a user, and that's really where we get into composable functions. The Wasm assemblies are stored on a Filecoin IPFS node; we use Filecoin because it helps the end user in that last mile.
E
There's nothing wrong with interacting with IPFS directly, but putting Filecoin on top of it allows us to tap into all of the portals that are stood up around the world and really get the fast, CDN-like retrieval that we'd expect to happen.
E
Pulling down images in mere seconds, rather than waiting for a plain IPFS fetch. The entire platform itself is really built to mimic the serverless functions you see today, while offering the smart-contract composability we're used to seeing, and you can interact with several different networks using our SDK.
E
This is all then executed on a node topology of the developer's choice, based on how trusted they want their execution environment to be. Web2 developers who may not need that much trust just to get started could execute on a single node and get a single response. Then, for next year in quarter one, we have actually planned to finalize what we call the Blockless app engine, which runs an x86 emulator, kind of flipping the container story on its side.
E
If you will: running an x86 emulator inside of a Wasm machine, instead of running Docker and targeting the native platform underneath to abstract it. We're going to take advantage of the fact that the WASI interface is about 95 percent native no matter where we run it, and we've begun work on writing a Linux x86 machine for Wasm, so that we can port traditional web2 applications, including full-stack Python, Node.js and .NET Core applications, and boot them up in about 10 to 15 seconds from a cold boot.
E
We imagine that will probably be used for long-running services, but we can obviously experiment with serverless in them as well, really moving those web2 users to the web3 world. This all still runs on our distributed topology: the IPFS-augmented node topology, where we're sending these Wasm archives around and executing them based on the developer's chosen topology. So, to quickly get into the technical architecture and discuss how this works.
E
The head nodes here would be what you'd think of as your IPFS boot nodes in terms of our topology. We could dial all the way up to the public IPFS network if we want, but because of the size of the nodes at this early stage of our network, we really want to be able to measure the impact. So we run 25 distributed nodes across the globe: three head nodes and 22 workers. We're using other web3 providers, such as Flux and the Akash Network, to also distribute our topology.
E
Currently, as we go through testing, this is pretty much how our system works: we do a roll call using pub/sub; we get roll call responses over pub/sub; then a worker selection happens at the network level. Once the worker selections have been doled out for the pool, the workers do a direct libp2p dial to each other.
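The roll-call flow described above can be sketched as a toy simulation. This is a simplification under assumptions, not Blockless code: the real flow runs over gossipsub and may apply pBFT, Raft, or a fastest-responder policy; here we model only "qualify on attributes, pick the fastest responder."

```python
# Toy model of the roll-call flow: broadcast requirements, collect
# responses from qualifying workers, pick the fastest responder
# (a first-in selection; real networks may instead run pBFT or Raft).

workers = [
    {"id": "w1", "cpu_cores": 2, "has_function": True,  "latency_ms": 40},
    {"id": "w2", "cpu_cores": 8, "has_function": True,  "latency_ms": 15},
    {"id": "w3", "cpu_cores": 8, "has_function": False, "latency_ms": 5},
]

def roll_call(workers, required_cores):
    """Workers 'respond' only if they hold the function and meet the
    requested attributes; the earliest (lowest-latency) response wins."""
    responses = [
        w for w in workers
        if w["has_function"] and w["cpu_cores"] >= required_cores
    ]
    if not responses:
        return None
    return min(responses, key=lambda w: w["latency_ms"])

chosen = roll_call(workers, required_cores=4)
print(chosen["id"])  # w2
```

Note that w3 is faster but never responds, because it has not yet downloaded the function; after selection, the real system would proceed to a direct libp2p dial.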
E
All right, so I've got a quick demo I'll show here.
A
Take your time, Derek, on the logistics, and if you could wrap up in maybe the next three or four minutes, that will give us plenty of time for David and his presentation at the very end. Thank you.
E
All right, so hopefully now we're looking at a video, because I want to make sure the demo goes through.
E
What we're doing here is we have a SaaS-based offering as well that goes with our network, so we're hopping into the SaaS and launching a GitHub template. What we're going to show in this demo is that we're using GitHub Actions with our DX tooling to build a canned hello-world Wasm application.
E
It's going to be built into Wasm, archived into a CAR, and uploaded to IPFS, and then our SaaS offering is going to get a signal that it is deploying. The IPFS nodes that we have for the Blockless network are going to download that Wasm archive, verify that they can execute it, and send back that it has been deployed, and then we'll jump in and actually execute it. I'll speed up the video here, but just to show you that it's real time: this all happens in under two minutes.
E
So now we're deploying and finishing here, and we'll be able to hop in and grab this assembly. Here we go, we've been deployed; the job should finish and report here on the right. Now we're going to go in. We've made this ingress that's built into the SaaS offering currently (it'll be abstracted out as part of our node topology), and it's asking for hello world to be executed. It happens in about 43 seconds.
E
Or 33 milliseconds, excuse me. We see that whole roll call go out: we request that the nearest nodes to this worker let us know if they can execute this function; they respond back if they meet the attributes that were requested; a worker is selected; the function is executed; and the results are relayed back, all through the libp2p library. Then they're sent to the SaaS offering, which turns them into this web response. Cool.
E
I think I've just got a couple more slides then. Here's our roadmap, which you can take a quick look at: we're already in private alpha, testing with our partners now as we do use-case build-outs and solutions for them, and, as I said, in quarter four or early quarter one we have the app engine coming to start onboarding traditional web apps. Then early next year we're looking at launching our zero-knowledge Wasm solution, so that the compute executions are verifiable.
E
That means verifying the executions that happen, not just consensus, as well as starting GPU support, so we can start to take on some more interesting loads through distributed compute of this style. If you have any questions: thank you so much for your time today. We have our website, blockless.network; you can follow us on Twitter, where we have some hangouts and talk from time to time. If you'd like to see our code or contribute, you can hit us up on GitHub at the Blockless network organization. And if anybody on this call is looking for a job, we're hiring.
A
Brilliant, well done. I love your architecture, especially the Wasm and WASI components; our engineers are going to be very fascinated to see your approach, because we're in the middle of that in so many different ways ourselves. That was lovely. All right, more to come on Blockless in the Slack channel. David Anderson, are you ready? We'll hand it over to you if you're ready.
C
Yeah, let me bring up my slides.
C
Okay, my name is David Anderson. I'm a computer science researcher at UC Berkeley. I got involved in distributed computing a long time ago through a project called SETI@home, and that led to another project called BOINC, which developed a platform to let other scientists run large-scale volunteer computing projects like SETI@home. The idea of volunteer computing is that we want to take advantage of consumer computing devices, like laptops and potentially game consoles.
C
We've actually had a project that used the processors in cars, which I think is a really interesting resource for the future. The reason to be interested in this is that there are a lot more consumer devices than there are organizational ones.
C
At its peak, which was maybe 10 years ago, there were about a million computers running BOINC, and that supplied an amount of compute power that was, at that point, greater than the largest supercomputer in the world. And of course home computers also have a lot of storage: a PC comes with one or two terabytes of disk, and most people don't use much of that because of cloud services.
C
BOINC itself is open source (LGPL, on GitHub) and it has a very open architecture. It consists of a whole bunch of programs that interact through various kinds of documented network APIs, and it's easy to use as part of larger systems.
C
People have built commercial systems based on it in different ways, and there's a cryptocurrency called Gridcoin that is based on BOINC.
C
The architecture of BOINC is that the people doing the computing are, in general, science projects; the focus of BOINC has been scientific computing. That kind of breaks down into projects like SETI@home and other astronomy projects, which have instruments that produce huge amounts of data that has to be analyzed in compute-intensive ways. That's one category.
C
Another category is projects doing simulations, mostly molecular simulations, things like virtual drug design and protein folding, which are not data-intensive but need essentially infinite computing power.
C
The BOINC client is what the volunteers run on their home PCs or their phones or cars or whatever. The idea is that when you run the client, you can pick from among all the available projects, decide which ones you want to participate in, and adjust what fraction of your resources goes to each one of those projects.
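The resource-share idea can be illustrated with a little arithmetic (the project names and share values here are made up): each project the volunteer attaches to gets roughly share / total of the host's resources.

```python
# Illustrative resource-share arithmetic: a volunteer assigns each
# attached project a share, and the client aims to devote
# share / total_shares of its compute time (and storage) to that
# project. Project names and numbers are invented for illustration.

shares = {"einstein": 100, "rosetta": 50, "climate": 50}

def fractions(shares):
    """Map each project to its fraction of the host's resources."""
    total = sum(shares.values())
    return {name: s / total for name, s in shares.items()}

for project, frac in fractions(shares).items():
    print(f"{project}: {frac:.0%}")
# einstein: 50%
# rosetta: 25%
# climate: 25%
```

The same fractions are used later in the talk when the client divides disk space among attached projects.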
C
Is
it
periodically
contacts
those
projects
to
to
report,
completed
jobs
and
to
get
new
jobs
and
and
it
downloads
the
executables
and
the
input
files,
and
then
what
it
does
is
it
tries
to
do
as
much
Computing
and
as
much
use
of
storage
as
it
can
without
the
user
of
the
computer
being
being
aware
of
that,
so
it
it
keeps
track
of
of
whether
the
user
is
actually
at
the
computer
and
whether
they're
doing
other
Computing
it.
C
It looks at the memory usage to make sure that it doesn't cause thrashing. It's really kind of infinitely configurable: you can control the number of cores that it uses when you're at or not at the computer, and various preferences about how much disk space it uses.
C
It also has a whole bunch of features related to storage. Of course, you can't do computing without storage: it needs to store executables and input files and virtual machine images, and some of these things can add up to a lot. It allocates storage among the different projects that the client is attached to using the resource-share idea.
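The resource-share idea can be sketched as a simple proportional split. This is a hypothetical illustration in Python, not BOINC's actual code; the project names and numbers are made up:

```python
def allocate_storage(total_bytes, shares):
    """Divide available disk among attached projects in proportion
    to their user-assigned resource shares.  (An illustrative
    sketch of the idea, not BOINC's actual algorithm.)"""
    total_share = sum(shares.values())
    return {project: total_bytes * share / total_share
            for project, share in shares.items()}

# hypothetical example: shares 100 and 50 split 300 GB two-to-one
quotas = allocate_storage(300, {"einstein": 100, "seti": 50})
```

The same proportional split applies to CPU time and other resources, which is why a single share number per project is enough to configure the client.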
A
C
Yeah, thank you. I switched to slideshow, and maybe that screwed things up.
C
Yeah, so anyway, one of the early BOINC projects is called Einstein@Home, which analyzes data from gravitational-wave detectors. Their files are fairly large, about 100 megabytes, and each file has to be analyzed with a lot of different jobs and a lot of different parameters. So the BOINC scheduler has a mechanism called locality scheduling where, when it's deciding where to send a job, it preferentially sends it to a client that already has that file, to minimize the amount of network traffic.
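Locality scheduling as described can be sketched roughly like this. The host records and field names below are invented for illustration; this is not BOINC's real scheduler logic:

```python
def pick_client(job_file, clients):
    """Prefer a client that already has the job's input file, to
    minimize network traffic; otherwise fall back to any client.
    (A sketch of the idea only, not BOINC's real scheduler.)"""
    have_file = [c for c in clients if job_file in c["files"]]
    candidates = have_file or clients
    # among the candidates, pick the least-loaded one
    return min(candidates, key=lambda c: c["queued_jobs"])

# hypothetical host records
clients = [
    {"name": "host1", "files": {"gw_000.dat"}, "queued_jobs": 3},
    {"name": "host2", "files": set(), "queued_jobs": 0},
]
```

With these records, a job on `gw_000.dat` goes to host1 even though it is busier, because host1 already holds the 100 MB input file; a job on a file nobody has falls back to the idle host2.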
C
So at some point I started thinking about using BOINC just for pure data storage, and it's important to understand what the host population is like. Home computers are turned off some fraction of the time; it turns out that a lot of computers are left on all the time, about 60 percent of them, but you need to take into account the other ones. In addition, people start running BOINC and then eventually they turn it off, or the computer goes away.
C
The idea was to store files much larger than you could fit in a single computer on volunteer hosts, and to be able to access them, but the latency of access could potentially be long, because it might take a day or two before a particular computer gets turned on again. To achieve reliability you need to replicate the data, and of course you have to be able to tolerate the departure of a large number of computers, so whole-file replication is way too space-inefficient.
C
So this system uses what's called erasure coding, where you divide a file into a whole bunch of chunks and then generate some additional, what are called, checksum chunks, with the property that you can reconstruct the file from a subset of that whole collection; you can tolerate the loss of any N of them.
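The principle can be shown with the simplest possible erasure code: one XOR parity chunk, which tolerates the loss of any single chunk. The system described would use stronger codes (e.g. Reed-Solomon) that tolerate N losses; this sketch only illustrates the idea:

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(chunks):
    """Append one parity chunk (the XOR of all data chunks), so
    any single lost chunk can be rebuilt from the rest.  Real
    erasure codes (e.g. Reed-Solomon) generalize this to
    tolerate the loss of any N chunks."""
    parity = chunks[0]
    for c in chunks[1:]:
        parity = xor_bytes(parity, c)
    return chunks + [parity]

def recover(encoded, lost_index):
    """Rebuild the chunk at lost_index by XOR-ing the survivors."""
    survivors = [c for i, c in enumerate(encoded) if i != lost_index]
    rebuilt = survivors[0]
    for c in survivors[1:]:
        rebuilt = xor_bytes(rebuilt, c)
    return rebuilt

encoded = encode([b"abcd", b"efgh", b"ijkl"])
```

Losing any one of the four stored chunks, data or parity, still leaves enough information to reconstruct the file, at a storage overhead of only one extra chunk instead of a full replica.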
C
So we use a kind of refinement called multi-level coding, where we break the file into these chunks and checksum chunks and then recursively divide each one of those into sub-chunks, with two or potentially more levels to that hierarchical encoding. I developed a simulator for how the whole system would work. The simulator models a host population, the arrival and departure of hosts, and the hours when they're available or not available.
C
It lets you kind of predict the amount of network traffic, or the latency that people are going to see. So I got all this working; it never actually got used for anything.
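In the same spirit as the simulator described, a toy Monte Carlo can estimate how often enough erasure-coded chunks are reachable at any given moment. All parameters below are invented for illustration; the real simulator also models host arrival, departure, and hourly availability:

```python
import random

def sim_availability(p_on=0.6, n_placed=60, n_needed=40,
                     trials=10_000, seed=1):
    """Toy Monte Carlo in the spirit of the simulator described:
    a file is erasure-coded into n_placed chunks on distinct
    hosts, each independently reachable with probability p_on;
    estimate how often at least n_needed chunks (enough to
    reconstruct the file) are reachable.  All parameters are
    invented for illustration."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        online = sum(rng.random() < p_on for _ in range(n_placed))
        if online >= n_needed:
            hits += 1
    return hits / trials

availability = sim_availability()
```

Runs like this make the design trade-off concrete: raising the coding overhead (placing more chunks per file) buys higher instantaneous availability and lower access latency at the cost of more network traffic and storage.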
C
But if anybody is interested in poking around with this, I'd be interested in potentially collaborating on that. So anyway, that's all I have to say. I don't know if there's a question or two.
A
In this case, we actually do have a few minutes left, so we can pause for questions if anyone has them. And David, I've got one question to start everyone off; sorry, two questions, if you don't mind. One was: throughout the life cycle of BOINC, were there any major surprises in terms of how the system was architected in the past and how you might architect it differently in the future?
D
C
Well, the technology has kind of evolved gradually, essentially as hardware has evolved. You know, we had to add support for multi-process applications.
C
We had to add support for GPUs, and I should mention that at this point GPUs are so much faster than CPUs that about 80 percent of the FLOPS, the floating-point operations, come from GPUs these days. But that was a big change to the architecture, because a GPU application generally requires a particular version of the driver software, and there are all sorts of other considerations.
C
So the question of whether a particular application can run on a particular computer can become very complex, and we had to add a very powerful mechanism for that, and then similarly with supporting applications that run in virtual machines. But basically every time that Apple or Microsoft introduces a new version of their operating system, something breaks somewhere, so at this point we're kind of in maintenance mode. But it's a very active maintenance mode.
B
C
Just send me an email. If you go to the BOINC website, which is boinc.berkeley.edu, you can find my contact info. Yeah, I'm interested in collaborating, either about storage or about computing. Yeah, there we go.
D
If one machine goes down, then another machine takes its place, like, dynamically?
C
Yeah, so BOINC, the framework, allows for checkpointing. In fact, most of the applications that use BOINC have jobs that can take hours to finish, and you don't want to lose that whole time, so most of them do checkpointing: they write a file from which you can resume.
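The checkpoint/restart pattern being described might look like this in outline. This is a minimal Python sketch, not BOINC's actual API or file format:

```python
import json
import os

def run_job(n_iters, state_path="checkpoint.json"):
    """Sketch of the checkpoint/restart pattern: on startup,
    resume from a saved state file if one exists; while running,
    periodically write resumable state to disk.  (Illustrative
    only; BOINC's actual API and formats differ.)"""
    if os.path.exists(state_path):
        with open(state_path) as f:
            state = json.load(f)          # resume where we left off
    else:
        state = {"i": 0, "total": 0}      # fresh start

    while state["i"] < n_iters:
        state["total"] += state["i"]      # stand-in for the real work
        state["i"] += 1
        if state["i"] % 100 == 0:         # checkpoint periodically
            # production code would write a temp file and rename it,
            # so a crash mid-write can't corrupt the checkpoint
            with open(state_path, "w") as f:
                json.dump(state, f)

    if os.path.exists(state_path):
        os.remove(state_path)             # finished: clean up
    return state["total"]

result = run_job(1000)
```

If the process is killed between checkpoints, only the work since the last checkpoint is lost; rerunning `run_job` picks up from the saved state rather than from iteration zero.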
C
BOINC does not move these checkpoint files from one computer to another. If a computer goes away forever, it just restarts the job from the beginning on a different computer. That's partly because the checkpoint files themselves can be architecture-dependent, and it introduces a lot of complexity. There are other systems, like Condor, another framework for distributed computing, that do have the ability to move computing state from one place to another, but BOINC doesn't do that.
A
Very good, all right. And thank you again, David; it's extremely important for us to learn from people that have been doing this in the space for a while, so I very much appreciate the learnings there. And if anyone has further questions, through the BOINC website you can get David's contact information.
A
I'll also be posting that in the Slack channel for everyone, with some follow-up notes. So that will wrap us up for today. Thank you, everyone, for joining; this was a tremendous session, and we'll meet back again in two weeks for more content. Thanks, everyone; have a great day.