From YouTube: Compute Over Data Working Group 3rd Session
Description
On today's call the Kamu team shares a demo of batch and real-time data pipelines that pull data from the Ethereum blockchain and analyze the value of a wallet over time. Al from the Koii Network shares an overview of the Koii network architecture and leads a discussion on verification solutions for decentralized compute platforms.
Kamu: https://www.kamu.dev/
kamu-cli tool to try on your laptop: https://github.com/kamu-data/kamu-cli
Self-serve demo to try kamu from your browser (click the demo link): https://docs.kamu.dev/cli/get-started/self-serve-demo/
Open Data Fabric protocol: https://github.com/open-data-fabric/open-data-fabric
Koii Network: https://www.koii.network/
A: Okay, all right, hello everyone, and thanks for joining the Compute Over Data working group. This is our third session. Today we're very fortunate to have Nelai and Sergey from the Kamu team, who are going to give us a deeper dive on their technology stack, and also Al Morris from the Koii Network. We're very excited to have an overview of both of these technologies. Before I turn things over to Sergey, I do want to give a brief announcement: we are in the early stages of scheduling the Compute Over Data Summit, which will happen in Lisbon around November 2nd or 3rd, so you'll see a lot more communication about that. We want everyone in the community involved, as well as ecosystem partners, your users, potential investors, and those sorts of things. So please pencil that in on your calendars. That's the only announcement I have. Sergey, I'll hand it over to you guys and let you take us through the content on Kamu.
B: All right. As you might know, our focus with Kamu in the Web3 data space is on how organizations can rapidly and efficiently exchange information, and also how they can collaborate on enriching and improving data, making iterative improvements to it, and all of this should work in a completely decentralized and trusted way. So today I'm going to be showing you our tool, kamu. It's a single-binary app that I have installed on my laptop.
B: You start working with kamu by creating a workspace, so I'm running the kamu init command here, and we can see that our workspace right now is empty; there are no datasets in it. In this demo I'm going to show you how we can build a very small data pipeline that pulls data from the Ethereum blockchain for some random account, and we're going to analyze the market value of their portfolio using kamu.
B: So I'm going to run the kamu pull command and explain what it does. Basically, we want to make data sharing as easy as possible, so we try to achieve the same simplicity that IPFS brings to sharing files; we do the same, but for structured data. With this pull command you can see me pulling information from an IPNS subdomain of our website. This URL basically points to an IPFS CID, and we're actually pulling a structured dataset from IPFS and just giving it a certain name.
B: We can immediately jump in and start exploring this dataset. For example, the kamu tail command is going to give us a sample of the last events from this dataset. We can also do cool things like run SQL right from the command line.
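As a rough sketch, here is what that command-line flow looks like end to end. The dataset names and the IPNS URL are placeholders, and the exact flag for naming a pulled dataset is an assumption that may differ from the current kamu-cli syntax:

```bash
# Create an empty workspace and confirm there are no datasets yet
kamu init
kamu list

# Pull a structured dataset shared over IPFS/IPNS (URL and local name are placeholders)
kamu pull "ipns://datasets.example.com/prices" --as my.prices

# Explore it: sample the most recent events, or open an SQL shell over the workspace
kamu tail my.prices
kamu sql    # e.g. SELECT * FROM "my.prices" LIMIT 10;
```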
D: Sorry, I just wanted to add: we have an entire online demo, which we will share, where you can try this session and others in your browser without downloading anything. We just wanted to add that quickly. Sorry, go on.
B: So, moving on from this, you might be wondering where this dataset is actually coming from and how we got it into the system. I'm going to show you how you add such a root dataset. Datasets in kamu are all expressed as YAML files, so you're going to get a lot of Kubernetes vibes if you've worked with Kubernetes before. Here we're going to look at one such file that I prepared, called the account-transactions dataset.
B: It fetches the transactions of this specific address; then there's some boring stuff like massaging the data from the JSON format into a more typed schema. We're using the Spark SQL engine, one of our plugin engines, to shape it into better types and rename some columns. So I'm going to go ahead and run kamu add on the account-transactions dataset.
B: I'm going to add this YAML, and immediately we see the dataset is here, but currently it's empty. So I'm going to run the kamu pull command again, on account-transactions this time, and it's going to start ingesting the data. What it's doing here: it reached out to the API and downloaded the data, and now it's spawning the Spark engine once again and ingesting the data per the specified schema and the specified SQL.
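A minimal sketch of the add-and-ingest step described above; the manifest filename and dataset name are illustrative:

```bash
# Register the root dataset from its YAML manifest
kamu add account-transactions.yaml
kamu list                       # the dataset now appears, but holds no data yet

# Ingest: fetch from the source API and run the Spark ingest step defined in the manifest
kamu pull account.transactions
```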
D: One other thing to note here, sorry, go ahead, please. One other thing to note is that you can put any boilerplate data-cleaning steps in this ingest, and you can automate this process so it repeats without you ever having to touch the pipeline again. Sorry, go ahead.
C: That's great. So then do you create a hash of the data and the intermediate stuff, so you know you don't have to do it again?
B: Yeah, exactly, and this is what I wanted to show right after this. As you can see here, if we run the kamu log command on this account-transactions dataset, the data in kamu is actually ingested in the form of a ledger. The data itself is a ledger, but we also have the metadata ledger. In this ledger of metadata you can see the first block is a seed block, which assigns a globally unique identity to the dataset in the form of a DID.
B: Then we have the block that defines where to get the data from. We also have vocabulary blocks that define names for special columns. You can have licenses, you can have provenance information, you can have governance information in there; the metadata format is basically extensible.
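For reference, the metadata chain just described is what the log command prints (dataset name illustrative):

```bash
# Shows the metadata ledger: the seed block with the dataset's globally unique identity,
# the source definition, vocabulary and license blocks, and one block per slice of data added
kamu log account.transactions
```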
B: We want this to be the standard way to propagate any kind of meta-information along with a dataset, because it is so important to keep it alongside the data. If you don't know where the data is coming from, how can you trust it? How can you even use it? We need to keep this information close by.
B: So when you ingest the data, you can see a data block was added describing that, okay, we got 1,000 records into this dataset. And what's important: if we run the kamu pull command again, and I'm going to run it on the crypto-compare dataset because it updates very frequently, it's going to check for updates. It will reach out to the CryptoCompare API, check for updates, and notice that some new data is present, based on caching headers.
B: It's going to ingest this data again, but it will only add the data that hasn't been seen before. So if I run kamu list again, there are now 78,930 records; previously it was 78,908. And we can see in the log for this dataset that the last add-data transaction added only 22 records. In kamu tail we can see that the data itself is in the form of a bi-temporal ledger.
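Sketched as commands, the incremental update loop is just a repeat of the earlier steps (dataset name illustrative; the record counts are the ones quoted in the demo):

```bash
kamu pull crypto.prices    # re-checks the source API; only previously unseen rows are appended
kamu list                  # record count grows, e.g. 78,908 -> 78,930
kamu log crypto.prices     # the newest add-data block covers just the 22 new records
kamu tail crypto.prices    # each event carries both an event time and a system (ingestion) time
```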
B: So we remember the outside-world clock time when the event happened, when this data point was captured, but we also remember the system time, when it got into the system. This is our way to ensure complete verifiability of the source data. If you're a publisher publishing data in this format, any consumer of this data can come back to you and say: okay, you gave me this data last week.
B: Can you confirm that this is the data you gave me? It's all based on hashes, so you can verify the data even long after the fact. This is something you don't get from JSON APIs, for example, because once you got the data from a JSON API, tomorrow that API might not exist anymore, right? Yeah.
C: I think it's fantastic. So then, do you run your own workflow engine, or does that happen on-chain?
B: Yeah, I'm going to get into this in a minute when I'm explaining the pipelines. So now that we have two datasets, both of them source data, we can start processing the data. I have prepared another couple of datasets. One of them is the account-balance dataset, and you can see this is the second type of dataset.
B: It's a derivative dataset. Derivative datasets, unlike root ones, are not allowed to reach out to anything external; they only use data that is already in the system, already in the network. This one is pretty simple: it just takes the account-transactions dataset and computes a cumulative sum of all the ETH deltas in the transactions of this account, giving us, at every transaction and point in time, the total balance of the account.
B: The second dataset is the market-value dataset, and this one is much more interesting because it takes two inputs: it takes the balance, and it also takes the exchange-prices dataset. Its SQL is a bit more complex, but what it does is basically a temporal join of two data streams, so this is really cool stuff.
B: I renamed the dataset right before the demo; that was a mistake. So what it does now: it actually spawns the Apache Flink engine and does all these streaming transformations using Flink SQL. So we can switch back to the notebook.
B: What's interesting is that we can jump into the kamu UI and talk about this a little more. This spawns a web server serving our web front end on top of your workspace, and we can explore the data in the same way we did through the command line and run some SQL. But most importantly, what I wanted to show you is this lineage graph, because it's really nice to look at your data like that. What we created here is not just a disassociated derivative dataset, right?
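For reference, launching that front end is a single command:

```bash
kamu ui    # serves the web UI (data explorer, SQL, lineage graph) on top of the local workspace
```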
B: If I now pull this derivative dataset, it's going to immediately tell me that there are no updates, because the two datasets that were inputs to it have not changed. But I can supply the recursive flag, and then it will descend down the entire dependency chain and start pulling those source datasets, checking for updates. It's going to notice that the crypto-compare dataset was updated, ingest the new data, and propagate all the updates through the pipeline.
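A minimal sketch of that update propagation, using the recursive flag mentioned in the demo (dataset name illustrative):

```bash
kamu pull account.market-value                # reports "up to date" if none of the inputs changed
kamu pull account.market-value --recursive    # walks the dependency chain, refreshes the root
                                              # datasets, and propagates new data downstream
```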
B: So it's very easy to keep these datasets up to date. What we built here, basically, is a data pipeline, not just some disassociated datasets, and you can inspect the ledger of these derivative datasets and understand where the data is coming from and how it was produced, because all of these SQL transformations are stored inside that ledger. It's also very important to mention that this SQL is a special kind of SQL; don't think of it as running an SQL query inside your database.
B: So once you have this pipeline, you can imagine, and I forgot to say this, that the solution is super composable. For example, in another window here I have a different pipeline; this is the one you would build if you followed our demo. You can see it can grow into a pretty complex pipeline, and while we did all of this on one laptop, every stage of such a pipeline could be created and operated by a different person or a different organization and shared with the world, provided you have the keys set up. The next thing I wanted to touch on is this: once you get this derivative data shared with you, once you have it locally, you will immediately start questioning how you can be sure that you can trust it.
B: This is why kamu has gone to great lengths on verifiability, reproducibility and provenance. I'm just going to quickly show you the kamu verify command. If you run kamu verify on account-market-value, what it is going to do is scan the metadata ledger of this dataset, understand what transformations have been run and when, and replay those transformations using the data from the input datasets, basically starting over. You can also supply the recursive flag, and it will start over from those input datasets: run the transformations on account-balance, then run the transformations on account-market-value, and produce the result.
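And the verification step, again as a hedged sketch with illustrative dataset names:

```bash
kamu verify account.market-value               # replays the recorded transformations and compares hashes
kamu verify account.market-value --recursive   # also re-verifies every upstream dataset first
```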
D: And yeah, so everybody in the network can basically challenge the results of these data transformations.
B: Yeah, and we're hoping the network effect will be similar to open-source software, where everyone constantly verifies each other's work. When you use an open-source library, you don't immediately go and read the code; you trust that a security audit was done by someone else, because the library has thousands of stars. This kind of collaboration and network effect is what we're trying to achieve for these data pipelines as well.
B: So this basically concludes my demo; I'm happy to jump to questions. To summarize, what we're trying to achieve is IoT-scale data volumes, near real-time latencies, and working seamlessly for on- and off-chain data. I forgot to mention it, but we're bringing the trust anchors to the publishers, and some of the biggest publishers can be the blockchains themselves. So the trust in the data can be rooted in a blockchain directly, allowing you to easily query and merge data from both on-chain and off-chain sources, enabling collaborative processing, and keeping the data verifiable throughout.
B: So yeah, hardware is an important question right now, because we're using these big enterprise data engines like Spark and Flink. We're planning to migrate to and support more streaming data engines that are Rust-based and require less hardware. But for now, what we want to reach is a state where we don't have to move data around; so, for example, doing a streaming transformation on just one dataset.
B: We would submit a workload to, let's say, the Filecoin VM, which could actually spawn and run this engine for us and do the transformations locally, collocated with the data itself. So you can imagine the pipeline that I showed, but where the nodes and the data processing would be collocated.
B: When I hear Wasm, it immediately triggers the reaction that we just want to run some simplistic function on data. But if we want continuously processing data pipelines that are resilient to all kinds of messy situations with data, imagine joining two datasets that are constantly out of sync with each other, then there are a lot of edge cases, like backfills, that you need to account for. So it's really important not to ignore the history, the decades of experience that went into these enterprise-scale engines.
B: So if we can manage to run them through Wasm, collocated with these nodes, I think that's going to be a pretty amazing state.
A: Well, if there aren't any other questions, I'll highlight that we are going to include references to all of this in the meeting notes, so that will be available on YouTube for anybody who wants to find more information. Thank you guys so much for sharing.
A: Wonderful, thank you guys. All right, so next up on the list is Al from the Koii Network. Looking forward to learning a lot more about the platform; I'll hand it over to you.
E: Great presentation, guys. I don't have a demo today, because the stuff we're working on is pretty complicated and mostly for developers, so it's a little trickier to give a full-on demo here. What I will show is a little bit of background on why we're approaching the problem this way and what it is we're really trying to accomplish with Koii. At its base, we have an attention economy which attracts people to run nodes.
E: Most of these nodes will be running on people's personal devices, so we've started to work on a way of distributing compute operations onto many personal devices and then wrapping those up into a stack that gives you verifiability of a full web stack within this very decentralized architecture. We'll run through a few different ways this helps, but just to tie it into this Compute Over Data working group, I want to talk a little bit about the three different approaches we've identified.
E: The first one, which most people in the blockchain space are familiar with, is using a VM on a layer-one chain: the EVM, the FVM, any of the other layer-one networks, with the very deterministic structure of the smart contracts they have there.
E: The second one, of course, is things that are more like fully verifiable compute, where internal proofs are being generated as you verify things; that would be like Lurk and Move. And then what we're working on at Koii is a little bit more of a flexible paradigm.
E: It's meant to complement those two existing options, and it's sort of a localized, verifiable determinism, which is probably a bad use of terms, but it's the best way we've found to describe it. What this gets you is a middle ground between the different capacities you might be looking for, and it's fairly flexible. So if you want something with higher performance, you can tweak it for that.
E: If you want something with higher verifiability, you can add more proofs, and if you're looking for something with greater privacy, you can pull things in that direction as well. So let's tweak the knobs a little bit: I'll go through a few different ways we're tweaking them and some examples of how we've done that. But first I want to give a little bit of background about how Koii actually works at a ground level.
E: These things called tasks are basically designed to have an executable file that runs on some device, plus an audit that you write to verify that the executable was run properly and that nobody tried to mess with the outcome. And then you have a set of nodes that we've been building up from our community, which have a reputation and collateral that prevent them from misbehaving.
E: If they get audited and found at fault, they lose some of their collateral and some of their reputation, and if they run the executable successfully, they earn a bounty in tokens. We're working on broadening that beyond the Koii token itself, so ideally we'll have stablecoins and other forms of incentives available as well.
E: This is a really quick overview of what that looks like; I'll just step through the points. As a developer, you deploy one of these tasks as an executable file. You write essentially a chunk of code, you deploy it out to a storage network like Filecoin or Arweave, and once it's up there you register that storage location with our task contract and pay a bounty into it at that time.
E: Node operators stake some tokens and then start running the executable. In the process, though, they are also going to be witnessed by other nodes, and these witness nodes can look through all the payloads being submitted by the worker or service nodes. The witness nodes verify that everything has been done properly, using the audit we discussed. So basically you've got a binary outcome here: if a vote is triggered and a bunch of the witness nodes see that something needs to be audited, then the stake of the offending node can be slashed, they can be excluded from the prize pool, and their data is discounted as malicious, so we don't use that data anymore. This is really useful for things like web scraping, or things like downsizing thumbnail images; we'll go through a bunch more examples of that.
E: Before we get into that, I want to talk a little bit about gradual consensus, which is the way we've set this up so far. Oh, actually, I guess the execution environment comes first. Within this task execution environment you get a couple of different features that make it really useful for building what we call consensus games. The first thing you get is a JavaScript VM; we're working on having these for Python and hopefully Go.
E: If you wanted to, you can compile down to Wasm and put that in there. You get full support for all the existing npm modules, which makes it really easy to build in things like proofs, because there are plenty of verifiable random number implementations out there; you can just pull in a proof library and use it inside your tasks. So you get a lot more flexibility and composability, and you don't necessarily have to build everything yourself.
E: You also get a file system on the node, and these are all namespaced. The file system you get is basically a little folder that's just for your task, and it lives on the device that is running the task, so each device might have its own copy of each set of files. That means you can have competing versions of truth, and then, when they come together, you get to compare those competing versions when deciding rewards.
E: You might also need something a little faster than a file system, so we installed a Redis cache as part of this. You also get a REST API built in. We chose a REST API because then you can put it behind caching.
E: That makes it a lot easier to start distributing information without putting a lot of load on these end nodes, and you also get cron-style timers in case you have recurring operations that need to happen over time.
E: Finally, you also get two types of secrets. You get secret injection, where you can ask the node operator to configure something like an API key and then inject that API key into the task execution. So you can have your node operators all go and get, say, a free-tier API key for something they need to fetch.
E: Maybe price data, maybe some kind of Infura key or something like that. That means these nodes can actually act like full-fledged servers, because they're each going to have their own set of private keys, or secret keys, I should say. And then, in addition to that, you obviously also get some private keys for signing things. We've been working on keeping these at a distance from the node operators themselves.
E: The way this will work, based on the latest iterations we have, is that you will be able to establish your core key, and then you can have secondary keys backed by that core key. So as a node operator, you're signing every payload, but you're not exposing your personal keys to the task environment; you're actually generating a new set of keys for each task. A couple of caveats on this: I mentioned that we're looking at extending the task environment a little bit, so you can do things like Python, Go and Rust via Wasm, but eventually we'd like that to be a bit more robust. And then, for the keys: you get an ECDSA key for Ethereum, and you get Arweave keys, which are RSA.
E: We also have Solana keys, because our core settlement layer is a fork of Solana, and we've been working more closely now with NEAR and Filecoin, so we'll be establishing those keys as well in the future. The idea here is to put together a set of tools that make it extremely easy for any developer who's familiar with basic programming languages to create consensus games that can be played by the various nodes in the network.
E: And the neat thing here as well is that if you're having a hard time coming up with audits for your system and you just want to run this, say you have a mobile app, you can run it across all the devices that are part of your mobile app's network, and you can potentially even find a way of permissioning access to it.
E: So you can say, maybe there's an OAuth hook that has to happen when somebody creates an account; you put them through a reCAPTCHA, and then they can become a node operator, but only in a very permissioned sense, so that your specific task might have a specific subgroup of nodes. We're really trying to establish a design pattern more than to provide a one-size-fits-all network.
E: To give a little idea of how this gradual consensus thing works, it probably helps to understand proof of history and how Solana operates. We actually recently forked the Solana core chain. We're not really trying to compete with Solana on what they're doing; they're very DeFi focused, and they've got a lot of NFT-type stuff they're doing.
E: Our goal has mainly been to create a central heartbeat that can act as a cadence for all of these different consensus games that are happening. With Solana you have epochs; Solana epochs can take two to three days. Ours happen twice a day, so every 12 hours, and what this lets you do is establish periods of your game. So your first epoch of the day is kind of the creation epoch.
E: Then you might be thinking: well, what happens if I want something to happen in the afternoon? Wouldn't that be a huge problem? But the neat thing about this is that these various epochs actually layer on top of each other, so your first epoch can be happening while another epoch is happening, because the nodes that are part of this group will be in the process of being audited, and if their stake is slashed before the end of that epoch, they're not going to be able to submit their results to the system. So you're still just as secure, but you're basically able to layer these things in sequence, so that you can have consistent uptime of the service you're trying to run without having to wait for the verification. The trade-off, of course, is that we're slowing the whole consensus game down: we're not trying to provide something instantaneous, and we're not trying to provide something that always works exactly as expected. We're mostly trying to prune the group of nodes to find the ones that will always perform reliably, and then give those a higher reputation so they can be included in more tasks. To bootstrap the network, we've basically offered a kind of pre-sale to people that will be running nodes over the initial period of the network.
E: We've got about 120 of these pre-sale nodes that have joined, and they've all purchased tokens, so they're invested in making the network successful. From there we'll be opening up to the community, where we now have almost 45,000 devices RSVP'd. The idea is that over time we can build up this group of people we know we can reliably trust to run Kubernetes-style operations across their devices, and then, if anybody does get corrupted, for example, we can exclude that data fairly efficiently from the pool. So this is mainly meant as a design architecture, and we're working on establishing it in a variety of different places.
E: I'll go through a few of those now and give an idea of how this can be used. One thing to keep in mind as we talk about these different applications is that the idea is for each individual task to solve one problem at a time. You could use these web servers more broadly if you wanted to; you could have a wider super-app that runs as a single task. The problem with doing it that way is that you then have more moving parts that can break, and it becomes harder to audit. Ideally, you break your tasks down into multiple, more deterministic sections, audit each of them separately, and potentially even have them run on separate nodes, which is really cool. So imagine the original use case of Koii, which was web scraping.
E: If you wanted to scrape the internet, index the data, and then search or filter that data, you would not actually want all of that to happen in one task. You want it to happen in multiple separate tasks, and then, ideally, all the data gets stored in an IPFS-style cluster and is accessible to people throughout the entire process; or you can query some of the nodes, which have the REST API available, and all the data would exist there, which is how we've been doing it so far. The interesting thing about this, though, is that the second group, the one indexing data, can fetch all of the web-scraping data from the original group of nodes over the REST API, and then they can add metadata to it and index it.
E: So, some examples. Proofs are one way you can do this. I mentioned verifiable random numbers before; those have been around for a while. They're possible to spoof if you have enough compute capacity, but if you have enough people doing this verifiable random number process, you can layer on top of it.
E: You basically have end users submitting proofs of real traffic on the websites they're viewing, which effectively are just signatures, but they're all verifiable. If you have enough of these verifiable signatures coming together, you can create something that is verifiable on a macro scale as well as a micro scale. Another tool in the tool belt here is deterministic actions, which we've been using for things like transformations.
E: If you want to take a thumbnail and size it down, you can do that, and you can also use these deterministic actions to do things like indexing, because if your index comes out to a specific pattern, it's fairly reliable to go over that same dataset on another device and verify that the index is generated the same way, assuming we have access to the same dataset.
E: So it's really about breaking down these tasks to make sure each individual task is verifiable. And from there, there's one last component, which I think is probably unique to the Koii network: you can do time-based verifications of sorts.
E: The epochs are actually also broken into slots, which each have a unique hash, and so, as long as each of your items is timestamped properly, you can do things like retrieval markets, and you can make sure that nodes are giving no preferential treatment.
E: Because if the retrieval requests are all posted to the node's API, you can see that they're actually coming in with proper timestamps, and if an end user submits a request and doesn't get a response back from the server, they can submit that same request to another server, another node as we call them, and that node can serve the request as well. Similarly, this also works really well for the web-scraping use case, just as a side effect of the fact that some content changes quite regularly.
E: If you know where the node is in the world, and you know what time they actually queried that server, then you can create a proof that verifies they have a valid result from that particular website. That's very different from how you do this in the traditional sense, because with traditional web scraping and proxies it would actually be very hard to tell whether you were getting a spoofed result.
E: Most of the time you rely on proxy botnets to do this, and they don't seem to do it very well. So there are a lot of these different components; this is the cookbook we've been working with so far. We're also in the process of exploring a lot more types of proofs you can put in here, and there are a lot of other deterministic actions that have been coming to the front.
E: We've even got some people at this point working on ways to verify audio files that match each other: extracting a standardized snippet from a piece of audio and then verifying that it matches a specific master file, kind of like Shazam-type behavior. So there's a lot of flexibility to this; it really opens the doors for developers. If anybody has interesting applications they'd like to explore, we're really open to helping you work through those and find the right structure: proofs, deterministic actions and potentially timestamps, to make that possible. And I guess the last thing would be that there's always the possibility of having all three of these components, or even more, as a combo. If you have something like search and you want it to run in a reliable, decentralized way, you can combine a lot of these different factors to ensure that your service quality is consistent across all the nodes providing it. Of course, these are not solved problems.
E: What we're creating here is not a general solution; it's more of a sandbox that allows you to try things out and build these consensus games. So, a slightly deeper example with these proofs: verifiable random numbers are really designed to give you the ability to generate a random number and then prove that there is randomness within it. When we first started talking about this kind of application, people looked at it and said, well, it just happens to turn out that that's something you can do quite easily. This exact example is one of the main reasons we stumbled a bit in the beginning, and then we realized that these kinds of generic proofs you can generate seem to support a lot of different types of behavior. There are actually a lot of people who have worked on this across the whole blockchain ecosystem, because, generally speaking, almost every type of oracle needs to have a proof.
E: This has actually been a kind of boon for us: with crypto, people have effectively been incentivized to create all these proof libraries and distribute them more or less for free, so they're all out there and you can pretty much jump right into them. I mentioned the attention tracking before; that's a good example of a signature-type proof. One of the main things we have with Koii is the ability to create these multi-wallets that hold all kinds of different wallets within them.
E: So if you want somebody to generate a signature, and their wallet is attached to an account with some reputation in the system, it becomes fairly unlikely that somebody with that reputation is going to forge a proof, or contribute forged proofs to somebody else's account. A lot of these signature-type proofs amount to saying: this is my data and I'm standing behind it. If it turns out there's anything in there that's incorrect, that node is now accountable, and you can build that into your audit function as well. Similarly, of course, the voting process that releases rewards or audits people is also all signature-based. So you have a lot of this data being generated, and it creates a sort of auditable trail behind the whole consensus process. And then, deterministic actions:
E: I mentioned a couple of them. Transformations like generating thumbnails are a really good example. If you've got, say, a five-megabyte image, you don't want that in a list view on a social media website, because it's going to bog everybody down and make the page basically useless on mobile. What you can do, though, is run a hash function against it: you hash the original image and you get a hash from that.
E: Then you compress the image, and you can do a fairly deterministic compression, and you get a hash of the smaller file. Now we know that that is the correct thumbnail for that image. A lot of these processes can be set up to run pretty effectively on most hardware, so, depending on what the environment is, you can usually rely on them to output reliable results.
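As a hypothetical sketch of that audit pattern using ordinary command-line tools (ImageMagick and sha256sum stand in for whatever a real task would use, and in practice the tool versions and settings would have to be pinned for the resize to be bit-for-bit reproducible):

```bash
# Worker node: publish the source hash, the thumbnail, and the thumbnail hash
sha256sum original.jpg                                   # identity of the source image
convert original.jpg -strip -resize 128x128 thumb.jpg    # the deterministic transformation step
sha256sum thumb.jpg                                      # claimed thumbnail hash

# Auditor node: re-run the same transformation on the same source and compare the hashes;
# a mismatch is grounds to vote and slash the worker's stake
```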
E: Of course, these have to be tested depending on the application, but you can pay a few Koii tokens to run a test across multiple nodes and make sure it's going to work on lots of different hardware specs.
E: We're also working on some tools to index all the different hardware running on the network. We're just in the process of onboarding nodes now, but we're hoping to also have a list of what operating system they're running, what hardware specs they have, all the parameters you would need to calibrate this kind of thing.
E: So that's hash comparisons in a nutshell. And I guess there's a question of performance trade-offs here: you obviously don't want most of the nodes in the network spending their time compressing the same image file, and what you can do to avoid that is recognize that you don't actually need everyone to go through and audit every single thing that happens. You need more of a random sampling, a sort of fisherman-style test, and this can be really effective as a way of reducing the duplication of work, so that you get the same performance without an overwhelming amount of compute happening on all these devices. Most of these devices are consumer hardware, so they're basically free.
E: For all intents and purposes, most people aren't that worried if their computer is doing these things while they're sleeping; they don't mind doing a bunch of extra compute, and they're pretty happy to just get paid for having the hardware and the internet connection. But if you, as an operator, want to be a little more eco-conscious and reduce your energy output, this is a really good way to do that as well.
E: You can choose how much verifiability you want and how you're going to execute on that. And I guess the final point here is that this also supports some interesting privacy attributes, because, depending on how your proofs are generated and what you're trying to do, you could implement zero-knowledge proofs and things like that, which would allow you to integrate significant amounts of privacy into the structure. I'll mention the time-based verification thing one more time, just because it's kind of fun.
E: This means that, effectively, with the K2 Solana fork you can get these slot numbers all the time. With traditional client-server architecture, if you're submitting a payload to a server and the payload is too old, or there's something about the timestamp that the server doesn't want, you can have certain protocols for requesting another payload, and I think the exact same thing should apply here for the most part.
E: Again, you need to know when the page was scraped, because if the time the page was fetched varies too much between data points, it's very hard to have consensus about the data being output. Whereas if you have a significant number of timestamps for all these different points, and the node accepting the payload sees the same epoch, or the same slot, as the node submitting the payload, then they can both tell that it is the correct one. And since these slots are generated every couple of seconds, it provides a very small window for that data to have come out of that website, for the specific locality where the scraping node is located.
E: Sorry, that's a big mouthful. We've been working on a lot of stuff for a while, so we're working on condensing it down to make a little more sense, but hopefully this is somewhat educational. Beyond just the web-scraping market, there's also a lot of use for this in terms of retrieval.
E: If you're trying to fetch some data off IPFS or something like that, being able to timestamp your initial request, and then timestamp the data you get back when it comes back, is also really helpful, because then you can tell whether the retrieval nodes are doing their jobs properly, especially if there's some kind of relayer in the middle.
E: So if you have one node that receives the request and then passes it up to another set of nodes that retrieve it from the system, that allows you to verify that the nodes actually retrieving the data are doing so efficiently, and you can start to benchmark them against each other. And then I mentioned the data-gathering and web-scraping side of this as well, so there are some options there.
E: So again, just to recap this cookbook: you've got proofs, verifiable actions and timestamps, and then, on the other side, you can also use APIs between the nodes to take some shortcuts. You can get the nodes to host something on their API that has proof data associated with it. You can also use storage layers instead of blockchains, which makes a lot of this stuff much more efficient.
E: If it's on IPFS, it's going to be significantly easier to index and retrieve, and it means you're not trying to pay absurd gas fees to upload something. You can also get your nodes to act as IPFS pins as part of their tasks, so they can temporarily hold on to something and then pass it off to a Filecoin miner to hold for the longer term. And I would say: generally try to use witnesses and audits, not always consensus, because that reduces your performance issues a little bit; you're not actually always worried about having perfect consensus.
E: You're mostly worried about auditing things that are going wrong or being used incorrectly. That leads to the last point, which is that speed of results is not speed of verification. With most of these systems you can take an optimistic approach, because you know that the nodes participating in your tasks are probably going to behave consistently.
E: This is kind of similar to Uber or Airbnb, where they're actually trusting that the person who runs the Airbnb is going to consistently be a sane person and not change their behavior too erratically. An Uber driver, for example, who has never driven drunk and has never been in a car accident while driving is unlikely to have a car accident while driving a passenger, and the same is probably true of these node operators.
E: For the most part they're going to be fairly unsophisticated operators, and so, as a result, they're mostly going to press the button and run the task, and if they're consistently doing that, it's very unlikely they're ever going to run a malicious script. Over a long enough time period you can build up a significant group of these trusted people who you can effectively guarantee will run the task as you've requested. Again, this is not necessarily for the sort of stuff people are doing very close to the file storage.
E: We're not trying to do massive data processing here, but it's really effective for things that happen at the web-application layer. Last year, over about an eight-month pilot, we got about 30 million views on the network, which led to about 300,000 tokens being minted, and now we have about 45,000 consumer devices, between phones and computers, that have RSVP'd to be part of this network.
E: That number goes up just about every day, and so we've been gradually building up this attention economy as a way to bring in all these people to give you all kinds of compute capacity. We have lots of grants available in Koii tokens as well, if you'd like to try deploying some tasks onto the network. We're actually just about to start onboarding these nodes, so we could use any crazy ideas people have that they'd like to try on our network of honest nodes.
A: Excellent, thank you so much, Al, that was tremendous, and I was just following along with you there. For folks that want to get involved and deploy use cases on the Koii network, is there an easiest path? You talked a lot about the witness approach and verification. If I just have some web server that I want to have running arbitrarily, does that require any amount of verification, or is it manual, or what do you recommend?
E: When you deploy the task, you're going to write the task itself, and then you're also going to write an audit. We're working on creating a kind of cookbook for these audits, so for a lot of standard operations you would have on a web server, we'll try to give you examples of how to do all of those in our tutorials, and we're really hoping we can get to the point where this is composable.
E: So if you say, "I want an endpoint that does X," we'll say: well, this is how you audit X, just try this audit script that's already written. It should hopefully be pretty plug-and-play within a few months.
A: Brilliant, all right, thank you so much, that was tremendous. Lovely. Dan, I'll pause in case you guys have any questions, and if not, we can wrap up.
B: Yeah, I have one. I'm curious about the layering of the tasks; you mentioned web scraping and then indexing. When you run the first layer, do you have to wait for the task to settle fully, with all the audits and verifications passed? Does that basically mean there's a 12-hour settlement window between the layers? Or can you kick off the next layer of tasks and have them all undone if the data proves to be non-reproducible and unverifiable?
E: There are a few ways to look at that. You could wait 12 hours to make sure everything has been fully audited, but you could also take a sort of optimistic approach, where you say: I know this node operator has been consistently operating the task for a year and a half, and they have a really significant stake.
E: They have a very high reputation for their node-operation skills, and at that point you can trust those results ahead of time, if you want to. Your indexing task is going to depend on them for its success.
E: Each of the indexing nodes is competing for that reward, and they're also trying not to get audited. So, if they wanted to, they could shortcut that by getting the results from one of the nodes they already trust, using that to compute their own result, and then they're ahead of the curve and get to submit their result first. So there are a few ways to build in layers of accountability here, but yeah, you're right.
B: Yeah, I guess if there were a way on the second layer to detect that, okay, this part of the data has actually proven to be untrustworthy, and to undo it, to un-incorporate it from the overall dataset, that would be pretty interesting.
E: Yeah, and I think that will happen fairly organically, because as a node loses its stake or is audited, it is immediately cut out of the pool, and so its address would no longer be shown as one of the nodes offered in the task, for example, and it wouldn't be queried by the nodes that are part of that task. And actually, I guess one of the other sides of this is that you can run an audit at any point.
E: So if a node is broadcasting something on its API, it can get audited even during that first epoch, because that would technically be part of the previous audit period, and it might get locked out within three hours instead of within 12. So it can happen on a much shorter time span if you get enough confirmations.
A: We can go ahead and wrap for today. To the Koii Network team and the Kamu team, thank you guys so much for presenting; that was excellent content. We're going to post this up on the YouTube page soon, and hopefully we'll get some feedback from the community.