From YouTube: Compute Over Data Working Group 5th Session
Description
On today's call:
Jim and Jonathan from the GridCoin team take us through their history, architecture, and roadmap for incentivizing research projects on BOINC (e.g. SETI@home).
Charles Cao from FilSwan takes us through their new Multi-Chain Storage (MCS) solution and how it can save time for CoD project teams integrating payments and storage into their solutions.
GridCoin: https://gridcoin.us/ https://boinc.berkeley.edu/
FilSwan and MCS: https://filswan.com/ https://docs.filswan.com/multi-chain-storage/overview
A: All right, we're recording. Hello everyone listening to the recording, welcome. This is the fifth meeting of the Compute Over Data working group, and we're very excited: we've got some stellar guests. Today we have Jonathan and the GridCoin team, who are going to give us an overview of the history of GridCoin. It turns out that GridCoin has been around for a very long time, and they're doing some really interesting things to benefit humanity, specifically around decentralized compute for research projects, so we're going to let them give us a detailed breakdown there. Then, in the second half of the session today, we'll hand it over to Charles, who has been working on some very interesting things related to multi-chain storage with FilSwan, so we'll get some details there.

Before I hand it over to Jonathan, though, I do want to give two quick announcements. One is for our in-person IRL meetup, which is going to be November 2nd through 3rd in Lisbon, Portugal, just after the FIL Lisbon events the previous week, so please pencil that in on your calendars. Also, if you go to discourse.cod.com, we'd love to have everyone in the community chime in with questions and see if there are some topics with shared interest; I just want to let everybody know that we have that available now as well. So that's all I have for announcements. Jonathan, looking forward to hearing from you guys, so I'll hand it over to you whenever you're ready.
B: Awesome, yeah, thanks Wes. As I share my screen here, let me know if it doesn't pop up. I have been watching back over the recordings of these past meetings and they're very exciting, so I know me and Jim, who should be popping in here shortly, are both really excited to see what can come from this. Because, as you said, GridCoin's been around for a very long time, we've been part of a couple of these working groups over the years, and one of our fundamental mechanisms actually came from one of those working groups.
B: I will give a quick overview of the history and then jump into some of the details of what we're working with. So we are a coin: we are a layer-one blockchain started in 2013. We shifted from a proof-of-work bootstrap to proof-of-stake in 2014, and the main unique aspect we're working with is a unique emissions mechanism that lets us do multiple incentives.
B: So we currently incentivize blockchain consensus through proof-of-stake, and BOINC contributions. BOINC is the Berkeley Open Infrastructure for Network Computing, which sprang out of SETI@home in 2004; SETI@home is from 1999, though, so it's been around for quite a while. But GridCoin's emissions mechanism can incentivize any point-accruing or leaderboard system, and we'll get a little more into that later.
B: In 2019 we figured we had about 3,400 unique staking nodes securing the blockchain, and that would put us at about seventh of all blockchain projects if you compare it to staking nodes or consensus nodes in 2021. So it is a fairly large project, albeit fairly under the radar as well.
B: We have 16,000 active contributors contributing to these distributed computing projects on BOINC that are incentivized by GridCoin. There are 30-some-odd BOINC projects, and 18 of them are incentivized by GridCoin. We also just had one of our largest releases, which is manual rewards claim (MRC); we'll get into that a little later, but it's basically a form of delegated staking by request, and it makes GridCoin, as far as we know, the first effectively no-cost entry for a proof-of-stake blockchain.
B: You basically contribute distributed computing resources to a research project and earn your GridCoin until you have enough GridCoin to contribute to consensus on the blockchain through proof-of-stake. It doesn't require anyone to buy GridCoin to start staking; we think that's very cool. These are more BOINC contributions, but these are all coming from projects that GridCoin has incentivized, and we contribute a significant amount of processing power: between three and five petaflops to the BOINC network.
B: I think something like 20% of BOINC computation is incentivized by GridCoin. Rosetta@home, out of the University of Washington's Institute for Protein Design, modeled COVID before anyone else; with those resources they also developed a vaccine that they sold to South Korea.
B: We've got Einstein@Home and a bunch of space projects that have identified a number of pulsars. World Community Grid, which used to run out of IBM and currently runs out of the Krembil Institute in Canada, has a lot of subprojects, notably in medicine: they do cancer marker and treatment research, and they also do some climate change projects; there's a lot out of that one.
B: ClimatePrediction.net runs out of Oxford; they've done climate simulations specifically. And then Goofyxgrid, and this is more important than it might seem, is a user-started project, because a big aspect of BOINC and GridCoin is that it's permissionless. Anyone can set up a computation project with any data they want, and they are essentially guaranteed computation resources to get that data crunched; we'll get into that a little later as well.
B: With regards to how GridCoin is developed, it is an entirely open-source project. The people who contribute are doing it because they love the stuff: they love science, they love distributed computing, and they love science communication and participation.
B: That's not to say we don't have some GRC to play around with, although GRC is not worth much; big number, little value. We've got 15 million GRC in a community-run wallet and we use it for development when we can, some marketing initiatives, and mainly community bounties. Like I said, it's not worth much, so it's more bragging rights than anything, but we have used it to fund development. And we did implement something called side-staking in 2018.
B: That came from one of those working groups; it came from PinkCoin which, if you know Giveth.io, could be considered a precursor to Giveth. It enables automatic donation-based funding and, along with MRC, is probably going to lay the foundation for a future treasury structure that we're going to build.
B: With regards to philosophy, when it comes to development we have been focusing on what makes us unique, which is that emissions layer and everything incorporated with it, and letting other projects focus on things like blockchain scaling and UX/UI and all that stuff. So we like to experiment with different tools, then try to automate them, and then find out where we can iterate and improve.
B: This is a graph of the repo activity since 2014, when we shifted to proof-of-stake. We also use an on-chain voting mechanism, so you can think of it like a DAO from 2014. It's not binding, it's not contractually binding; we are a permissionless blockchain, so whatever software is running on the blockchain nodes is the ultimate choice. But this voting mechanism has been incredibly useful in informing people on the decisions we're making, letting feedback come in to developers, and letting developers know what will generally be accepted by the network.
B: It's stopped some developments in their tracks because no one would accept them, and it's helped us move forward in specific directions over the years. And now we get to the fun stuff. So we've got the basic GridCoin infrastructure, and this is a very old, very dense project, so we're going to try to simplify it a lot here. But basically there are six layers here, or five if you remove the participant, though we do think the participant is probably the most important person.
B: Starting from the bottom, you've got the participants; then the statistics providers, which are BOINC projects or any other computing platforms, so we could incorporate Folding@home, or Bacalhau, which looks very interesting there; we could also incorporate any point-accruing system. One of the examples we use is that we could incorporate the Olympics: we could use our emissions structure to reward anyone who gets a gold medal or a bronze medal in the Olympics if we wanted to. On top of that we have our beacon ID, on top of that we have our oracles, then we have our incentivization mechanism, which is the bulk of everything, and on top of that we have the base layer, the blockchain and economic protocol. So statistics providers, as I said, are any point-accruing system, and they're entirely independent: they do what they do, and we bring them into the network by using our voting system.
B: So it is up to the network participants to verify and vet these statistics providers and say: hey, these guys are not going to cheat, they're doing the right stuff, they're aligned with our network values, we're going to vote them into our incentive structure, which will incentivize the contributors to these statistics providers through the emissions layer.
B: Once these statistics providers are added to the incentivization mechanism, they are basically guaranteed incentivization: they are guaranteed an amount of computation resources and a chunk of the GRC that the protocol mints. The beacon ID is essentially a way to verify and identify crunchers, the contributors to these computation projects. There is a process a user goes through to verify they are who they say they are and that their wallet is connected to the statistics provider they are contributing to, and it's so far worked very well. It is the only required cost to enter the system, and it was once only a transaction fee.
B: I don't know if we've shifted that to something like a 1 GRC fee, maybe Jim can enlighten us a little later, but it is a very, very low cost; that's why it's effectively a no-cost entry to the PoS system. Oracles: the oracles are the scrapers we use to collect the statistics from the statistics providers. The contributions of all the people contributing to these various projects are collected by these oracles, and they do a whole bunch of stuff with it.

It's outlined on this page, and there is a lot more technical detail when it comes to the oracles, but they essentially collect the statistics and send them off to the blockchain network, and the blockchain nodes, not the oracles, come to consensus on whether or not the oracles are being truthful. So it is a very decentralized system. The oracles do have some amount of trust in them, but really most of the trust is on the blockchain network.

We thought that was very important as we develop our mechanisms: to keep things as decentralized and trustless as possible.
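A minimal sketch of that cross-check, assuming blockchain nodes accept a statistics snapshot only when a supermajority of independent oracles submit the same content hash (the 60% threshold and the data shapes are illustrative, not GridCoin's exact rule):

```python
from collections import Counter

SUPERMAJORITY = 0.6   # illustrative convergence threshold

def converged(oracle_hashes):
    """Accept the snapshot hash a supermajority of oracles agree on, else None."""
    if not oracle_hashes:
        return None
    best_hash, count = Counter(oracle_hashes).most_common(1)[0]
    return best_hash if count / len(oracle_hashes) >= SUPERMAJORITY else None

print(converged(["abc", "abc", "abc", "abc", "xyz", "abc"]))  # 'abc' (5/6 agree)
print(converged(["abc", "xyz", "def"]))                       # None: no convergence
```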
C: Just going back on that, jringo, or John: the only real caveat with the oracles, and this is part of the vetting process for approval of an oracle, is to ensure they're independent, so that one individual is not running more than one oracle. That allows us to effectively treat them as transparent, in combination with the way they're publishing stats.
C: That was three years ago; yeah, it was a while back. It was one of the first major technology pieces we put in place to solve the centralization of statistics collection, along with the load on the statistics sites. Before, each node was downloading the statistics directly and trying to do its own comparison, and it was killing the projects because it was like a DDoS attack. So, to eliminate the scalability problem, we implemented these oracles. There are six of them on mainnet right now; there can be an arbitrary number of them, although it's silly to have more than about 10 for a lot of different reasons. They download the statistics, and they also cache: they check the statistics on the project sites for changes, so we're looking at the hash of the statistics, or the ETag, as we pull those down over HTTPS, and if they don't change, we don't download them again.
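A minimal sketch of that conditional-download pattern (the stats URL below is hypothetical; the real scraper lives inside the GridCoin client, this just shows the HTTP mechanism Jim describes):

```python
import requests

STATS_URL = "https://example-boinc-project.org/stats/user.gz"  # hypothetical stats export

def fetch_if_changed(cached_etag):
    """Download the stats dump only when its ETag differs from our cached copy."""
    headers = {"If-None-Match": cached_etag} if cached_etag else {}
    resp = requests.get(STATS_URL, headers=headers, timeout=30)
    if resp.status_code == 304:   # unchanged on the server: spare the project a download
        return None, cached_etag
    resp.raise_for_status()
    return resp.content, resp.headers.get("ETag")
```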
B: That's great. All right, so moving along through the layers here, we then get to the incentivization mechanism which, as I've said, is multi-incentive. There's a lot to this, but the basic way it works is: we mint a certain amount of GRC per day, which is 37,600. Then 75% of the GRC that's minted by the protocol gets distributed to crunchers, or you can think of them as participants, contributors to statistics providers, and 25% of that is distributed to stakers.

We have what we call a CBR, a constant block reward, of 10 GRC per block. That was determined by a network vote way back when, I think 2016 or 2017, and it was a great use of the voting mechanism to come to consensus on how much we're going to distribute to crunchers, how much to stakers, and how much we're going to make the block reward.
C: Now I'm going to make a comment on this. The transition to CBR was really to solve a low-difficulty problem. We used to pay people at the first layer by interest, which was time-accrued, and people would just start their wallet to get on and collect their interest and then go back off again, and in a proof-of-stake system you need people actively staking.
C: So we made a decision as a community to push towards a constant block reward, which only rewards you if you're actually staking. The beautiful thing about that is we saw our difficulty go up from about three, and each difficulty unit in GridCoin represents about 10 million coins online, effectively. So we went from a difficulty of three, which was 30 million effective coins staking at any one time, and our difficulty today is averaging 19.
C: So we have 190 million out of a total issued 400 million coins staking continuously. I'm very proud of that statistic: nearly half of the coins that have ever been put into existence for GridCoin are actively staking at any one time.
B: Yeah, it goes to the point that this is a field we're working in where the people who end up contributing to these projects are very interested in it; they want to see this work, from what we have seen at least. But to carry on: so that's 9,600 GRC per day for stakers and 28,200 GRC per day for crunchers, or again, contributors to statistics providers, continuing with the incentive mechanism.
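As a sanity check on those figures: the cruncher side is the stated 75% of the daily mint, and the staker side follows from the 10 GRC constant block reward; the roughly 90-second block target used below is an assumption not stated on the call:

```python
DAILY_MINT_GRC = 37_600            # daily emission as stated on the call
CRUNCHER_SHARE = 0.75              # share distributed to BOINC contributors
CBR_GRC = 10                       # constant block reward per staked block
BLOCK_TARGET_SECONDS = 90          # assumed block spacing (not stated on the call)

crunchers = DAILY_MINT_GRC * CRUNCHER_SHARE                   # 28,200 GRC/day
stakers = CBR_GRC * (24 * 3600 // BLOCK_TARGET_SECONDS)       # 10 * 960 = 9,600 GRC/day
print(f"crunchers {crunchers:,.0f} GRC/day, stakers {stakers:,} GRC/day")
```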
We've got a unit called magnitude, which helps determine how much each contributor earns out of the GRC emissions to contributors.
B: The reason we have this unit is that when you're doing distributed computing, at least when you're doing it on top of BOINC, or when you're allowing yourself to incentivize multiple different leaderboards, there's no standardized point system among all these projects. We have to standardize the information that comes in within GridCoin; we have to make it equal to itself. So we distribute the GRC to contributors relative to the other contributors that are contributing to that same specific project,
B: a specific statistics provider, that is, and magnitude is the process by which this happens. It ends up creating a couple of very interesting, sort of non-intentional but passive incentive layers, because the way it works is essentially: there's a certain total amount of magnitude, and that gets evenly distributed among all the statistics providers, and then a contributor earns a portion of that distribution based on their relative contribution when compared to other contributors to that same project.
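A minimal sketch of that allocation; the magnitude pool size, project names, and credit figures below are made up for illustration:

```python
TOTAL_MAGNITUDE = 115_000   # illustrative network-wide magnitude pool

# hypothetical recent average credit (RAC) per contributor, per whitelisted project
projects = {
    "rosetta":    {"alice": 9_000, "bob": 1_000},   # crowded project
    "goofyxgrid": {"carol": 50},                    # tiny project, one contributor
}

per_project_mag = TOTAL_MAGNITUDE / len(projects)   # even split across providers

for name, contributors in projects.items():
    total_rac = sum(contributors.values())
    for user, rac in contributors.items():
        mag = per_project_mag * rac / total_rac     # proportional within the project
        print(f"{user} on {name}: magnitude {mag:,.1f}")

# carol earns the whole project's share alone, which is why under-crunched
# projects attract contributors: the passive incentive described above
```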
B: So if someone wants to earn the most GRC possible from the emissions, they have to get the highest magnitude, and the way they do that is to go to projects that don't have a lot of contributors. So take one of these citizen-scientist projects like Goofyxgrid, which has zero marketing and zero funding and is just run on someone's laptop in a closet: if that project gets into the incentivization layer through the network vote, people are going to go to it because they get paid to, not because they were told it's a really good project, and not because IBM has supported it with a whole bunch of funds. The only challenge for these statistics providers is to communicate their science to the network as best they can.
B: Once they get on the network, magnitude is the reason they're guaranteed a portion of compute cycles. So I just described, for the most part, the crunching contribution distribution. That last thought is the passive education part: a crunching project, a distributed computing project, or any leaderboard is incentivized, because we have so many resources, to educate the general population as to why the research is worthwhile, instead of just educating other scientists on why their research is worthwhile.
B: They need to get science communicators, or otherwise develop communication material, to get people to approve the incentivization of their project. At the same time, on the other side, the participants are passively incentivized to keep up to date on what these projects are doing and to scrutinize them for malpractice or security flaws, because each project that's incentivized by GridCoin represents what the network sees as valuable.
B: So we want to make sure a project isn't, say, trying to hack the NSA or doing anything that's highly illegal. It's a very fun process to watch, because people do have very intense debates about what makes quote-unquote good science or good research.
B: If one of these projects does get compromised, only that chunk of emissions is compromised; no one can monopolize or compromise the entire emissions schedule, and basically, the more projects we can incentivize, the higher the security goes. Because of that distribution, there's also no way for ASICs to come about: each distributed computing project runs better on different hardware, they're different projects, so you can't build an ASIC to monopolize emissions the way you can with Bitcoin. It's built for users.
B: It's built for people with PCs and cell phones or servers to come in and get some reward for contributing distributed computing. We've already been through most of this: we run on proof-of-stake, and we have these things called superblocks, which help with statistics aggregation and storage.
B: We've got the magnitude, which helps with emissions distribution, and we are decentralized at many different layers, from the statistics providers to the magnitude system to how currency is distributed to our oracles. And of course, at the end of the day, the way we distribute the GRC rewards to people is that they stake a block, so we're incentivizing people to get enough GRC to stake their own block, and once you become a consensus contributor, we are further decentralized.
B: This has something to do with MRC too, because MRC makes it so anyone can get their rewards without staking a block, but we haven't seen how that plays out yet. When it comes to the users and the values of the most important layer of GridCoin, the participant: permissionlessness is the key to everything we build.
B: A lot of us have been contributing to BOINC for many years, since before crypto was even a thing, before blockchain existed, and BOINC itself is a permissionless system where anyone can come in and create a distributed computing project; as long as they can convince people to contribute to the project, they get their data crunched. With GridCoin, we developed incentives that guarantee the data gets crunched.
B: You don't need to keep convincing anyone: if you can convince people once that your research is worthwhile and you get in the network, then you are guaranteed computational resources. We believe at our core that anyone should be able to set up a project and anyone should be able to receive computational power for free, but that's not to say you can't build business models on top of this.

If there's a base layer that just says you come in and you get 1/n of the computation power, maybe someone can get 1/(n+1) plus one computation power by doing something additional, and those would be layer-two incentives: project creation and management, magnitude weighting, there's a lot of potential there. We won't get into them here, but this is part of where we want to explore collaboration: developing these new economic models. We've been around for three cycles and we're entering the fourth right now. In the first one we kind of experimented:
B: does this even function, is this a good idea? It turns out it was, because people kept it alive until, basically, Jim came along and stabilized the technology, along with some other really fantastic core devs. Then for the past several years we've been working on economic stability: getting that system worked out, getting MRC implemented, side-staking, all that stuff. In the next cycle we're hoping to experiment, automate, and iterate once again. The experiments we want to conduct have to do with economics:
B: we want to see how these levers can be changed. For example, dynamic economics is changing the emissions schedule based on the context of the network at any given time, so take in some information and change the amount of GRC that's distributed, and in which way. We want to automate the voting mechanism and the approval process, and we want to iterate on what types of projects we accept into the incentive layer and what sort of treasury structure we have.
B: We do have a small one right now, but it needs to be built out, in my opinion. In terms of collaboration, this is where I think we can bring a lot to the table and where we want to learn a lot as well. Work unit verification: work units are what BOINC calls the data packets that get sent out to users to crunch, to work on. We want to improve the way the work done by a user is verified,
B: so we can know it's good work. We want to expand the incentives and the economics of the system, we want to bring more different leaderboards into the system, and we want to look at how business models can be built on a base technology that is rooted in open-source ideals, where anyone can participate.
B: We want to figure out how to use UX and UI to spur adoption, because I'm sure, as everyone here knows, this is a lot of technology that, at the end of the day, needs to be basically one or two clicks for a lot of people. And then, when it comes to standards, the social standards the way we view them are that permissionlessness aspect.
B: Anyone should be able to build a computation project and get their research done, and we would love to see different levels of approvals, like maybe a badge by some committee or something that says: this is a permissionless system, it works this way; or this is semi-permissionless,
B: it works this way. And then technical standards, when it comes to application development and automation for these researchers: a lot of what we've found over the years is that researchers don't necessarily want to go through the process of building out their application. It kind of needs to be one click for them as well. So if we can develop standards around how to compile and arrange data so that someone else can take it and just build that distributed computing application in one click, that would be absolutely great. So that is GridCoin in a nutshell; happy to take any questions, and definitely looking forward to seeing how we can collaborate with any of the projects here.
E: I also want to second that; really tremendous stuff. I think the layers between how you do the orchestration and distribution of the jobs versus the actual execution of the jobs are worth teasing out a lot here. I know that you don't own the computation, which is fine, but I know that a lot of folks here do want to participate in the computation, and I think that's probably something that goes beyond this conversation.
E: But figuring out exactly how those various projects work together will, I think, provide a lot of really interesting, valuable layering, so you can have innovation in various ways. So, for example, let's say I'm a compute provider for, I don't know, CUDA, because whoever needs a job executed on CUDA needs the project to do it on CUDA only. Surfacing that through GridCoin, making this a targeted compute provider and sharing the incentives between the two groups, would, I think, be really powerful.
C: Yeah David, it's interesting. Implicit in some of the stuff you might read into this is that there are a number of blockchains that have tried to tackle this problem by trying to do direct compute on the blockchain, you know, distributed computing, with the idea that you can do, let's call it "proof of research"; we even used to use this term, and we don't use it anymore because it's misleading.
C: There's some vetting that's done on a per-project basis to make sure we're not including bad projects, and then the projects themselves are actually responsible for validating and checking the work units they distribute and then issuing the statistics on that. And then, as John talked about, there are mechanisms of security built into the way we're doing the distribution of GridCoin.
C: We think this is really the right approach, rather than trying to do an on-chain proof-based approach to research, which, quite frankly, even if you can make it work, severely limits the types of problems that can be done. The second thing is it's not scalable: the whole idea of being able to verify a problem means you have to run it on so many nodes that it goes directly against the whole idea of distributed computing.
E: Absolutely. So again, this is something that we're exploring as well. The Bacalhau project is looking only at being a computation provider; we are not trying to universalize job scheduling and things like that, which you are. And I think there are many projects like us who want to be a compute provider, or want to do something very specialized, but don't want to deal with, or take on, yet another kind of computational incentive layer.
B: One of the interesting aspects of the Bacalhau project, from what I gathered from your presentation on the 17th, is the way you were going to look at verifying different compute providers: by just randomly auditing the work they do and then giving them...
C: Actually, I like that idea a lot, because right now our vetting is effectively single-shot. This is our whitelisting process: we go through a process, we have a defined set of standards to whitelist, and we have a whitelisting committee of trusted community members.
C: People nominate a project and it goes through this procedure, and we look at different things, making sure they're doing what we call wingman work units: they issue the same work unit to more than one node and cross-check.
C: I really like this idea of being able to measure and audit, and you really don't need to do a large sample size to catch rampant cheating; if someone's doing that, it's crazy. I totally agree.
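Jim's point about sample size can be made precise: if a provider cheats on a fraction f of its work units and each unit is independently audited with probability p, the chance of escaping n units is (1 - f*p)^n. A quick illustration with made-up rates:

```python
def escape_probability(n_units, cheat_rate, audit_rate):
    """Probability a cheater survives n work units of independent random audits."""
    per_unit_catch = cheat_rate * audit_rate
    return (1 - per_unit_catch) ** n_units

# cheating on 20% of units against only a 5% random audit rate
for n in (100, 500, 1000):
    print(n, f"{escape_probability(n, 0.20, 0.05):.4f}")
# 100 -> 0.3660, 500 -> 0.0066, 1000 -> 0.0000: rampant cheating is caught quickly
```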
E: Agreed. I don't want to distract from the entire group, by the way.
E: Yeah, for those that do want to get together, I think we definitely should figure out a place to join together and talk about some of these things, talk about the various layers. But I think what you're already doing is tremendous, and we very much want to support you.
C: It bedeviled us for years until we finally cracked it in our Fern release, which was the big release we did a couple of years ago, and it is rock solid. It's too gory to get into; John got into a little bit of it, but this is an area where we learned via the school of hard knocks how to put this whole thing together and make it work.
C: We're not as scalable as a pure blockchain, because our underlying layer is vanilla proof-of-stake, which is very Bitcoin-like, so we have the same transaction rate limitation. But the second layer could actually be applied on another blockchain base that's not GridCoin and that is more scalable. That's something else to consider: we were never aiming for absolute transaction capacity and all that kind of stuff; that really wasn't our aim.
A: Thank you guys, yeah, this is tremendous, and there's a lot more GridCoin tech under the covers that we'll try to get into in a future session. So thank you guys for that. And so, turning it over to Charles: we're looking forward to learning all we can about what you guys are up to. In particular, I'm curious to learn if this is a technology that other CoD working group members can be building on to make their integration with ecosystems easier.
D: Okay, great. So basically we're currently doing a cross-chain solution that brings a few pieces of the data computing stream together. Currently, we see that when people start using blockchain storage and computing, they ask: there are so many tokens and so many chains, where do I store the data, and which token do I use for payment? They need a whole basket of different things.
D: For example, you need to buy one token for hosting websites, you need to buy a storage token to host your files on Filecoin, and you need lots of other tokens when you want to do different things, for example computing, which has its own tokens as well. So it's a big problem for people: okay, look, we need to hold a basket of different tokens and we need to integrate a basket of different SDKs.
D: What we're trying is based on IPFS and Filecoin. We enable people to use the Binance chain or the Polygon chain to pay USDC to buy the storage, with the proof going back on-chain to the user, and the funds are locked there until the data's deal gets on-chain. Our next step is an integration of edge computing solutions and also CDN solutions into our network. So one example is about NFT storage.
D: IPFS has good things, but IPFS lacks an incentive layer, which means that if a node runner turns off the node, you probably can't get all the data distributed on that node back; those are the issues. So currently we do Multi-Chain Storage: basically, you can use a MetaMask wallet to connect with, for example, the Polygon network, and you can make a payment for storage.
D: You pay for the storage and we pin the data on IPFS. At the same time, we are sending the storage deals out to up to five Filecoin nodes, which means that if you have a data loss, you'll be able to retrieve the data back from a Filecoin node and add it back to your IPFS for temporary usage. After the storage is on-chain, we use a chain oracle to unlock and settle the funds, so the user and the storage provider can get the payment.
D: So this is the basic technical stack. A user uploads a file or a document and pays with a particular token, which gets locked in the contract, and then we go to a DEX to swap the token, for example from USDC to wrapped Filecoin. The IPFS storage will be caching your data, and at the same time we are backing it up to the Filecoin network on five nodes.
D: Each of the nodes will put the deal ID and other information on-chain, including the proof, and the chain oracle is in charge of bridging those on-chain proofs back to the Polygon smart contract. Once we get the on-chain proof, we know the data was backed up successfully and your funds will be unlocked. This means that if you back up to five nodes and only two nodes succeed, you only pay for the two successes.
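A minimal sketch of that settlement rule, paying only for the deals whose on-chain proof arrived; the data structures here are hypothetical, not FilSwan's actual contract:

```python
from dataclasses import dataclass

@dataclass
class Deal:
    node_id: str
    price_cents: int          # price of this backup deal, in USDC cents
    proof_on_chain: bool      # set once the oracle bridges the deal proof back

def settle(locked_cents, deals):
    """Pay storage providers only for proven deals; refund the rest to the user."""
    paid = sum(d.price_cents for d in deals if d.proof_on_chain)
    return paid, locked_cents - paid

# five backup deals attempted, only two proofs arrived on-chain
deals = [Deal(f"node-{i}", 10, proof_on_chain=(i < 2)) for i in range(5)]
print(settle(50, deals))   # (20, 30): pay for the two successes, refund the rest
```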
D: We've been running our testnet, where we got about 5,000 creators of NFTs, and you can click-mint on OpenSea. We are going to launch the product officially in September. We give users about 30 gigabytes of free storage as a first-time user, and for every 30 gigabytes beyond that you pay a small amount for using the services. Currently it is still pretty low cost, because MCS is a verified data provider in the Filecoin ecosystem.
D: On the storage provider side, they charge nothing; we only ask the user to pay for the oracle fees, which is the gas fee. We have SDKs and tutorials for integration with JavaScript and Python. If you want to upload and retrieve data programmatically, you can automatically upload thousands of files or storage pieces. It's already been done by the community, it has been tried a lot, and we asked them to write all the documents. And now we are in the process of a hackathon.
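For a flavor of what that integration looks like, a toy sketch in Python; the class, method, and field names below are illustrative assumptions, not the real MCS SDK API, so check the FilSwan docs (https://docs.filswan.com/multi-chain-storage/overview) for the actual interface:

```python
# Hypothetical client sketch: simulates the upload-then-pay flow Charles describes.
import hashlib

class ToyMcsClient:
    def __init__(self, wallet_address, chain="polygon"):
        self.wallet_address = wallet_address   # MetaMask-style address paying in USDC
        self.chain = chain                     # payment chain: "polygon" or "bsc"
        self.uploads = {}

    def upload(self, name, data: bytes):
        """Simulate pinning to IPFS: derive a fake content ID and remember it."""
        cid = hashlib.sha256(data).hexdigest()[:16]
        self.uploads[name] = cid
        return f"ipfs://{cid}"

    def pay(self, name):
        """Simulate locking USDC for the upload; funds unlock once deals are proven."""
        return f"locked payment for {self.uploads[name]} from {self.wallet_address}"

client = ToyMcsClient("0xYourWallet")
print(client.upload("dataset.bin", b"example bytes"))
print(client.pay("dataset.bin"))
```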
D: We have a special track for computing over the data: one problem is doing computing over IPFS or the Filecoin network using new data or any scientific data, and we give a demo about how to do computing over IPFS. So I can do a demo here. This is the Multi-Chain Storage dashboard; I've already pre-connected with my MetaMask. If you want to upload to storage, you go here and choose any data you want.
D: For example, I have a data set, something like a GIF. I can choose something small or medium-sized, say around one megabyte, and upload it to the IPFS node, and we just approve that, which means we submit to create a contract, and after that the funds will be locked there. This is on the testnet; on the mainnet the first 30 gigabytes are free, so you don't need to pay.
D: Okay, so we see that you immediately get an IPFS path for retrieving your data, and we start sending the data to different nodes. It takes a while, because there is a queue of orders over the available storage nodes in the network. For example, we can see the different nodes accepting the deal; for the deals that don't succeed, you can get a refund for that part of your funds here, and this is a successful one here.
D: You can directly see the deals on-chain, and also which node they're on, and if you want to retrieve the data you can use the code here. The oracle then approves the unlock, because we need to make sure the data is on-chain; then we pay it back cross-chain, meaning from Filecoin to the Polygon network. We rely on those requests to get the data back, and this is the payment smart contract.
D: You can see the logs; if you analyze them, they tell you things like the original source of the data and the task IDs, so on top of the deal IDs you will have different IDs. If your file is too small, we are building something like a sidecar to enable multiple files to be compressed together into one bigger deal, because nobody will really accept three-megabyte storage deals; so we compress over one thousand files into one deal and do the unlock.
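A minimal sketch of that batching idea, greedily packing small uploads into bundles that clear a provider's minimum deal size; the 1 GiB threshold is illustrative, not FilSwan's actual figure:

```python
MIN_DEAL_BYTES = 1 << 30   # illustrative minimum size providers will accept

def batch_files(sizes):
    """Greedily group file sizes into batches that each clear the deal minimum."""
    batches, current, total = [], [], 0
    for s in sizes:
        current.append(s)
        total += s
        if total >= MIN_DEAL_BYTES:
            batches.append(current)
            current, total = [], 0
    if current:                 # leftovers wait for the next batching window
        batches.append(current)
    return batches

print(len(batch_files([3 * 1024 * 1024] * 1000)))   # 1,000 x 3 MiB files -> 3 deals
```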
D: Yeah, so now you can see the image is still loading, but you can see it's already become an NFT. So if you have important data you're doing training or labeling on, and you find it's something of high value, you can use this function to do a one-click transfer and listing on OpenSea for sale.
D: Also, you can do the same thing for your training results. About the training part, I'll just give you an example; these are the famous monkey pictures we have. You can write something to directly fetch the data: if you've already uploaded the data to the IPFS node, a small script can directly download the data from IPFS, and we have a script to help people download it. If you already have it downloaded, you can just start training from a local instance.
D: So I hope teams here can make those training algorithms container-based or something like that, so we can deploy them on the same nodes as the Filecoin sealing servers. If we can make the storage node and the computing node as close as possible, it's much easier for people to do the training. Since we have the CID running in the edge environment, that will be a great advantage for us, so we're working on bridging those things, and when everything's ready, you can use it.
D: Currently, we already have over 50 nodes worldwide running our storage provider software, integrated with storage and other functions, which means we have the potential, if we get the computing nodes integrated together, to distribute to those fifty nodes worldwide.
D: Currently we have two big nodes owned by the Swan team: one in Canada with over one thousand servers, and one in North Carolina in the eastern US with over 300 servers. With computing and storage together, they can provide substantial capacity for computing tasks, and the entire network is even bigger than that. That's why we added the payment part: we can get payment incentives for our users.
D: It would be great for them to bring the training jobs, and the same oracle structure we had before will also be used for the computing tasks. So if you guys, or anyone, can give us a proof that the computing has done something, whether a ZK proof, a transaction hash, or a task hash, we can put it in our smart contract. We are using the same stack to integrate computing and storage, because other Web3 computing tasks are distributed on different networks.
D: We have a timeline here. We're working on the DAO system: treasury management, proving, voting, and arbitration if a claim happens. For edge computing, we're aiming to start the integration work in Q1 or Q2, depending on whether we find good candidates; that's the first approach to integration. And in Q3 we'd like to integrate CDN capability into the geographically distributed nodes worldwide, so we can move the computing tasks and the data as close together as possible.
A: Lovely, Charles, thank you so much. I'm super interested; there are so many use cases where, if I'm a compute-over-data developer and my ecosystem is very Ethereum-centric or very MATIC-centric, and maybe I don't have Filecoin integration today, I could use your service to simplify that and just pay in my native currency with the wallet dapp that I built, and let you guys manage the back-end storage. How should I think of MCS: is there an exchange process between my native token and the FIL tokens that would happen, or does it just get locked up somehow as wrapped tokens?
D: So basically, we don't run exchanges; we use existing exchanges. For example, on the Polygon network it would be SushiSwap, on the BSC network it would be PancakeSwap, or Uniswap. We use the existing exchanges to do the token swap; we don't manage the exchanges, that's too much risk. So your funds are safe and you only pay for usage, which means you don't really need to prepare Filecoin tokens ahead of time.
B: Where does the expense for gas fees go if you're switching between chains?
D: Yes, the gas fee is actually part of the smart contract; it gets deducted within the smart contract, because everything in the swap happens on the same network. We calculate that fee and lock it up front, and it is deducted from there. The Polygon network fees are currently very small; that's why we chose Polygon as the first one.
D: We may integrate other Ethereum layer-two solutions as a way out as well, so when it's running there the fees become tiny too; currently it's less than one cent on the Polygon network. The oracle approvals do cost some tokens as well, because when you're doing the approve calls you need to use some tokens. But the good news is that we're doing batch approval: we're not doing single-file approval, but every two hours we approve all the nodes at the same time,
D: which means you can unlock the funds together with a thousand other users. It's a bit slower, but it saves money.
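The cost argument behind that batching is amortization: one approval transaction's gas is split across everyone unlocked in the window. A rough illustration with made-up gas figures:

```python
GAS_PER_APPROVAL_TX = 150_000     # made-up gas cost of one on-chain approval
GAS_PRICE_GWEI = 50               # made-up gas price
MATIC_PER_GWEI = 1e-9

def cost_per_user(users_in_batch):
    """Approval gas in MATIC, amortized across everyone unlocked in the batch."""
    tx_cost = GAS_PER_APPROVAL_TX * GAS_PRICE_GWEI * MATIC_PER_GWEI
    return tx_cost / users_in_batch

print(f"single-file: {cost_per_user(1):.6f} MATIC")     # each user pays a full tx
print(f"batched:     {cost_per_user(1000):.9f} MATIC")  # two-hour batch of 1,000
```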
E: Let me use that as a moment to step in. We don't just meet once every two weeks; we have a very active Slack channel. We would love to talk about this with everyone as a community, so please do come join the Slack channel and continue the conversation. Let me additionally add a little bit of housekeeping before we run out of time: we are planning on November 2nd and 3rd in Lisbon.
E: This will overlap with Web Summit, so for those already going, you'll already be there. We are planning to do a Compute over Data summit, and we would very, very much love to have each of you give a short presentation there. Obviously we're going to work really hard to get a lot of people attending; you can show off your stuff, and you can work with me.
E: I have a lightweight session doc, or a schedule, that we're working through to figure out what the schedule looks like, but it should very much feel like a community-driven thing over two days. We have the space, and we would love to see you all there.
A: Very good, all right. Well, I think we can go ahead and wrap unless anybody has anything else for today. All right, well, much fun! Thank you, Charles and Jim and Jonathan, so much for presenting today, and looking forward to continuing the conversation in the next session. Thanks so much, guys. Thanks.