From YouTube: Game Theory in Distributed Computing - Levi Rybalov
Description
Anti-cheating mechanisms and pricing models for decentralized computing platforms.
Okay, so hi — I'm going to be presenting now about game theory in distributed computing. Pretty much every distributed computing project is going to encounter some form of the problems I'm going to be talking about. In fact, almost all of them very much explicitly have these issues, and that's what I'm going to be discussing.
There aren't really any robust distributed-computing anti-cheating mechanisms in use right now. People are researching them, but the space is really sorely lacking proper incentives. And while pricing mechanisms are a bit more prevalent, without anti-cheating mechanisms they're not secure.
If somebody's giving you a price, but you don't know that the compute they're providing you is accurate or legitimate, then the price is just suspect. So we have to ask what role anti-cheating mechanisms and pricing models play in the larger scheme of distributed computing in general, because anti-cheating mechanisms and pricing models are only a subset of the problems faced in this space.
There we go. So here's the basic problem statement: when a client receives a result from a compute node, what assurances can be made that the result of that computation is actually correct?
How do you know that the node is computing honestly? Let's set aside the possibility of faulty nodes for now and just ask: how do you know that a compute node isn't just giving you a result without actually doing the computation, in order to get the price — the reward — for computing it? And the second question is: how do we actually price that compute?
Can we do it per job — think of a job as a Docker container or a Wasm image — or per time? How do you actually price these computations? So, there are a number of ways of avoiding cheating in distributed computing.
One approach, naturally, is cryptographic — SNARKs or fully homomorphic encryption — or trusted execution environments, and then there's verification via replication. We're going to focus on verification via replication in this talk, the reason being that the cryptographic methods — SNARKs, homomorphic encryption and so on — have very, very high overhead, and they will for a long time, even in the best-case scenario.
Unless there are some incredible breakthroughs — we'll see, who knows. And TEEs, trusted execution environments, are known to have vulnerabilities. This may change over time, especially as the low-hanging fruit gets picked off, and we might come to a time when TEEs are very much trusted, but that's not the scenario right now.
So, if you have a two-sided marketplace, there's a difference between the client being sure — or relatively sure — that their task was computed correctly (and maybe the compute node would also want to know, in case the result is disputed), and every node on the network being sure that the result was computed correctly. There's a difference between the client, plus maybe a few other nodes, knowing and being secure that the result was computed correctly, and the whole network knowing it. This is local versus global consensus.
The natural example of global consensus would be something like a blockchain — contract computations on a blockchain, for example. And I'm just going to assume in this talk non-malicious nodes, but nodes that are utility-maximizing agents: they're not going to specifically go out and try to mess up a result on purpose.
So the basic idea behind verification via replication is to verify occasionally. You can, for example, just do two or three copies of every job, but this is relatively expensive, and once you start doing that, you start approaching the same costs that you would get with centralized cloud providers. While there are still benefits to using a distributed system, there are also benefits to using a centralized system, and in that case the trade-off becomes a bit less clear.
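To make that overhead concrete, here's a minimal back-of-the-envelope sketch — not from the talk, with invented function names — comparing always replicating every job against re-running only an occasional sample:

```python
# Hypothetical cost comparison: full replication vs. occasional spot-checks.
# Assumes all jobs cost the same and verifying a job means re-running it once.

def replication_overhead(copies_per_job: int) -> float:
    """Cost multiplier when every job is run `copies_per_job` times."""
    return float(copies_per_job)

def spot_check_overhead(check_rate: float) -> float:
    """Cost multiplier when only a fraction `check_rate` of jobs is re-run."""
    return 1.0 + check_rate

print(replication_overhead(2))    # 2.0x  -> approaches centralized-cloud cost
print(spot_check_overhead(0.02))  # 1.02x -> only 2% of jobs are re-run
```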
Ideally, you want to minimize your costs, and there are a few problems. If you're doing verification via replication, you need to know that the nodes verifying each other don't collude on a result, either correct or incorrect. They could collude on an incorrect result — neither of them computes it, and they both just send you the same answer.
And now you're in a really bad spot, because both nodes gave you the same result and the result is wrong. Alternatively, one of the compute nodes can compute the result honestly and just send that result to the other node, and while that's not the worst scenario, you generally want to avoid that too.
There's a whole space of solutions here; I'm just going to focus on two. One of them is based on the paper — I'll just skip to the end right here — by Dong et al., "Betrayal, Distrust, and Rationality: Smart Counter-Collusion Contracts for Verifiable Cloud Computing." The method they use there is basically to take the two nodes that are supposed to be replica-verifying a job.
They put them into something they call a prisoner's contract: they design a game, basically the prisoner's dilemma, where the two nodes have an incentive to give each other false results, because a node gets a higher reward if it is correct and the other node is wrong.
The way you do that, of course, is in the context of having deposits and collateral, which is a standard assumption in these scenarios. And then, hypothetically, those two nodes could collude and agree to override the prisoner's contract with their own contract — so you override that with the traitor's contract, which basically says that the first node to report a potential collusion gets rewarded more. That's the TL;DR of it.
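As a rough illustration — with made-up payoff numbers, not the paper's — of why the traitor's contract kills the collusion:

```python
# Toy payoffs for one node, given (my_action, other_action).
# "honest"  = compute correctly
# "collude" = agree with the other node on a fake result
# "betray"  = join the collusion, then report it under the traitor's contract
REWARD, DEPOSIT, BRIBE, BOUNTY = 10, 20, 5, 15  # hypothetical magnitudes

payoff = {
    ("honest",  "honest"):  REWARD,
    ("collude", "collude"): REWARD + BRIBE,   # collusion goes undetected
    ("betray",  "collude"): REWARD + BOUNTY,  # first to report gets the bounty
    ("collude", "betray"):  -DEPOSIT,         # reported -> deposit slashed
}

# Given the other node colludes, betraying beats staying in the collusion
# whenever BOUNTY > BRIBE, so the collusion is never stable and honesty
# survives as the rational play.
assert payoff[("betray", "collude")] > payoff[("collude", "collude")]
```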
The second paper I cited replaced this model with a kind of mediator role, where you verify occasionally. By the way, this first paper didn't do that — they assumed you're always sending out two copies of every job, which has a pretty high overhead.
In the second paper, they verified occasionally, but the verification happened via a kind of trusted node — a mediator. The way it worked is that every client and every compute node would have a list of mediators that they both trust.
If the client or the compute node doesn't like the performance of their mediator, they can just remove them from their list of accepted mediators, and that's the incentive for mediators to be honest. There are a few issues with that model, but that's the basic model. In that paper, they were also focused on deterministic workloads.
I'll get to non-deterministic workloads in a minute, but with deterministic workloads you can actually replicate the result, which is really important for double-checking the work. In that second paper — I'll just skip to the end here: "Mechanisms for Outsourcing Computation via a Decentralized Market" — the way they did it was to have the mediator check for non-determinism.
They had the mediator repeat the computation n times to check whether you get a different result at any point, and if you do, then the client gets slashed, because this is a potential client attack: the client can, for example, send a non-deterministic workload, then dispute the result from the compute node and thereby try to get the computation for free — which is exactly what they're trying to avoid. So they have the mediator run the computation many times in order to check for non-deterministic output.
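A minimal sketch of that mediator determinism check (the interface here is hypothetical, not the paper's):

```python
def mediator_check(run_job, n: int = 5) -> bool:
    """Re-run the disputed job n times on the mediator.

    True  -> all outputs identical: treat the job as deterministic.
    False -> outputs differ: the workload is non-deterministic, and the
             client who submitted it gets slashed.
    """
    first = run_job()
    return all(run_job() == first for _ in range(n - 1))
```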
And this is a bit of a security hole, I would say. Probably a safer method for doing this is to take those n repetitions of the same job and actually spread them across n different nodes, so you have almost a Byzantine-fault-tolerant — in quotation marks — way of verifying the result of a computation.
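And a sketch of that safer variant: scatter the n repetitions across independent nodes and accept only a quorum agreement (hypothetical helper; a real system would compare result hashes):

```python
from collections import Counter

def verify_by_scatter(results: list[bytes], quorum: float = 2 / 3) -> bytes | None:
    """`results` holds the outputs of the same job from n independent nodes.

    Returns the output that at least `quorum` of the nodes agree on, else
    None (meaning: raise a dispute). With non-colluding, utility-maximizing
    nodes, a lone dissenter is evidence of cheating rather than a fault.
    """
    winner, count = Counter(results).most_common(1)[0]
    return winner if count >= quorum * len(results) else None

print(verify_by_scatter([b"r1", b"r1", b"r1", b"r1", b"xx"]))  # b'r1' (4/5 agree)
```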
Of course, you now want to avoid malicious coalitions with high probability, and you could allow any node to raise a dispute and keep expanding the set of nodes verifying — but that takes you to global consensus, not local consensus.
So there it is: the per-job pricing model. How do we actually price this compute? This is going to focus on a gas-based approach — gas in the traditional blockchain sense. Assume that we can get deterministic results and that we can count instructions. Ideally, we want to be able to count different types of instructions — fp32, fp64, integer operations and so on. That has a bit of overhead, but this is the more idealized model. And assume that we're paying per job.
So you have some Docker container that you're running — or a Wasm image; ideally Wasm, because let's take it that in that case you can be relatively sure about determinism — and you're paying for the entirety of that job; you're not paying per time right now. And so you basically construct a two-sided marketplace for the different components of compute: you have a price per fp32 instruction, a price per fp64 instruction.
And you can go on and on: you can of course include memory usage in this, GPU compute — GPUs have fp32 and fp64, and now int8 or whatever computations of their own. When the compute node returns the result, it returns its instruction counts and its usage of other resources — basically a vector of numbers indicating its resource consumption.
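So the job price is just a dot product of the pre-agreed price vector with that reported usage vector — a minimal sketch with invented prices and resource names:

```python
# Pre-agreed prices per unit of each resource (hypothetical units and values).
PRICES = {"fp32_ops": 1e-12, "fp64_ops": 4e-12, "int_ops": 5e-13, "mem_byte_s": 1e-15}

def job_price(usage: dict[str, float]) -> float:
    """Price of a job from the instruction/resource counts the node reports."""
    return sum(PRICES[r] * units for r, units in usage.items())

reported = {"fp32_ops": 3e9, "fp64_ops": 1e8, "int_ops": 5e9, "mem_byte_s": 2e12}
print(job_price(reported))  # the client pays this, or disputes the counts
```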
Then the client can choose to pay the pre-agreed-upon price based on that report, or it can dispute. And disputing a price is actually basically the same as disputing a result, because how do you check that the instruction count is correct?
Well, you just send the job to another node and ask that node to compute the result itself, and then you basically invoke the anti-cheating mechanisms you were using before. Ideally, if that first node was honest, then when you send it to a second node, and that second node is honest, it should return the same result with the same instruction count.
And then the question is: how do we get to a per-time pricing model, which is a bit more familiar in the traditional cloud-computing sense? So again, assume that we can get deterministic results and count different types of instructions, and, as before, compute nodes offer some cost per type of computational resource.
But now we have resource per time. You can think about this like a Docker container, where you can set the parameters of what that container can hold — the CPU limitations, RAM limitations and so on.
So now that you have cost per resource, cost per resource times resource per time is cost per time. That's the really basic general model of how you get there. And then the question is: how do you actually check for correctness, or honesty, from the compute node?
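In other words — a sketch with invented rates; the identity cost/time = Σ (cost/resource) × (resource/time) is the model just described:

```python
# Price per resource-unit per second (hypothetical values).
RATES = {"cpu_core": 1e-5, "ram_gb": 2e-6, "gpu": 3e-4}

def cost_per_second(allocation: dict[str, float]) -> float:
    """cost/time = sum over resources of (cost/resource) * (resource/time)."""
    return sum(RATES[r] * amount for r, amount in allocation.items())

# e.g. a container capped at 4 cores, 8 GB RAM, 1 GPU:
rate = cost_per_second({"cpu_core": 4, "ram_gb": 8, "gpu": 1})
print(rate * 3600)  # cost of one hour at this allocation
```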
This isn't on the slide, but naturally the way you would do it is to have an escrow from the client — a pool that's used to pay the compute node — and as long as the job is not finished, payments keep moving.
Parts of the payment would move from that escrow either directly to the compute node or to some kind of intermediate smart contract that can maybe be disputed or revoked for safety purposes, and that would pay out slowly over time for every single step of the operation. But we still have to be able to check for correctness and honesty, and there's a ton of questions around this.
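A minimal sketch of that escrow flow — all names hypothetical; a real version would live in a smart contract with dispute/revoke hooks:

```python
class Escrow:
    """Client pre-funds a pool; payment streams out per completed step."""

    def __init__(self, deposit: float, rate_per_step: float):
        self.balance = deposit
        self.rate = rate_per_step
        self.pending = 0.0  # held in an intermediate, disputable bucket

    def step_completed(self) -> None:
        amount = min(self.rate, self.balance)
        self.balance -= amount
        self.pending += amount  # revocable until the dispute window closes

    def settle(self) -> float:
        """Release pending funds to the compute node (no dispute raised)."""
        paid, self.pending = self.pending, 0.0
        return paid

e = Escrow(deposit=100.0, rate_per_step=1.5)
for _ in range(10):
    e.step_completed()
print(e.settle(), e.balance)  # 15.0 paid out, 85.0 still escrowed
```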
So, can we compare subsections of the computation? We can: we could take a snapshot of the Wasm memory and then check — say we split the computation into 100 pieces; we could check checkpoint 57 to 58 and see whether we get the same result out. The problem is that the bandwidth consumption for this might be very, very high.
If you have, say, a 10-gigabyte job — which is not unrealistic — you now have to send a 10-gigabyte snapshot of the Wasm memory to another node in order to check whether step 57 to step 58 was actually done correctly, and that can be really, really expensive.
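A sketch of the checkpoint idea — hypothetical interfaces; shipping the snapshot to the verifier is exactly the bandwidth problem just described:

```python
import random

def spot_check_segment(snapshots: list[bytes], run_segment) -> bool:
    """Re-execute one random segment i -> i+1 and compare end states.

    `snapshots[i]` is the (e.g. Wasm) memory image at checkpoint i, and
    `run_segment(state, i)` deterministically runs segment i on `state`.
    Sending snapshots[i] to the verifying node is the expensive part.
    """
    i = random.randrange(len(snapshots) - 1)
    return run_segment(snapshots[i], i) == snapshots[i + 1]
```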
And so the question is: is it possible to analyze subsets of the computation, in a totally generalized way, that gives the same guarantees as recomputing from scratch? Can you, for example, take any subset of the computation and only check that, and have the runtime scale accordingly — if you only need to check one-fifth of the original image, you want the runtime to be one-fifth?
Can you achieve that in a way that has the same security guarantees as recomputing everything from scratch? That's an open question. There are people working on it; to the extent of my knowledge it's not totally doable yet, but maybe we'll find a way to do it in the future. And then, of course, what about weaker guarantees?
Well, what if you don't have the same security guarantees, but you have some decent security guarantees? There's a trade-off in there, and that's kind of a theme in general in game-theoretic verification via replication: there are always trade-offs. The more you're willing to pay, the more security you're going to have. Now we get to deposit economics. Predicting the cost of a job in advance might be easy in some cases, but in general it's really difficult.
There are some exceptions to this: for example, if you're running inference on some AI model, you have a rough approximation of the number of operations required to do inference on the neural network, and so on. But in general it's really hard to predict, and so this is kind of a challenge, and it's why even per-job and per-time pricing have these kinds of internal issues.
So naturally, the deposit needs to be used as collateral for slashing, and the higher the deposit relative to the estimated amount the task will cost, the higher the security. This is also coming from that second paper I mentioned.
The issue is when you have a very, very high deposit — I'll actually skip to the last point here: the collateral has to account for potential cheating and not getting caught until a task is chosen for replication.
Hypothetically, if only two percent of your tasks get looked at — verified via replication — you could probably cheat for a while until you get caught. So when you do in fact get caught, you want the deposit to be high enough to actually make having cheated on all those prior jobs a non-dominant strategy. And the problem with this, of course, is that there's a massive capital inefficiency caused by using very high deposits.
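That sizing logic can be made concrete with a one-line expected-value check — a sketch, not the paper's exact condition: if a fraction p of tasks is verified and cheating saves g per task, a cheater expects to slip roughly 1/p tasks past you before being caught, so the slashable deposit D has to outweigh those accumulated savings.

```python
def honesty_is_dominant(deposit: float, gain_per_job: float, check_rate: float) -> bool:
    """Expected jobs until caught is ~1/check_rate, so cheating nets about
    gain_per_job / check_rate before the deposit is finally slashed."""
    expected_cheating_gain = gain_per_job / check_rate
    return deposit > expected_cheating_gain

print(honesty_is_dominant(deposit=40, gain_per_job=1.0, check_rate=0.02))  # False: 40 < 50
print(honesty_is_dominant(deposit=60, gain_per_job=1.0, check_rate=0.02))  # True:  60 > 50
# The deposit must scale like gain/check_rate -- the capital inefficiency above.
```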
Pooling resources, insurance-like, might solve this problem, but in general it's kind of an issue, because it also means it costs a lot to join the network. You could also potentially replace this with reputation — I'll get to that in a minute. So, some miscellaneous points.
There's a really interesting heuristic in that decentralized computational marketplace paper, where clients and compute nodes choose their mediators, directories, and solvers — a solver here, by the way, is a market maker; I'll get to that in a second.
If a node doesn't like how its mediator performed, it can just remove that mediator from its list of trusted mediators, and interestingly, that actually creates a market for mediators — that's the idea they have for how to get mediators to act honestly. Otherwise, mediators are basically just other compute nodes; there's really no other difference. And most clients don't really want to choose their compute nodes manually, or even define their own algorithms for doing so.
If you're thinking about how a client might use AWS or GCP: they are not choosing from among a list of machines that all have different prices. I mean, there are different tiering options and so on, but within those tiers it's totally homogeneous.
Your options are totally homogeneous, whereas in this environment you're going to have a really, really heterogeneous set of hardware and a heterogeneous set of prices — especially because you're going to have different local electricity prices, different local bandwidth prices, and even different local hardware prices, and so on and so forth.
So you're going to have this very heterogeneous set of prices, and who on Earth wants to actually look at that list and pick from it? So there is a space for solvers — market makers, dispatchers, whatever you want to call them — that do the market matching between clients and compute nodes and provide the bridge that makes that problem pretty easy. That's assuming, of course, that those solvers are behaving honestly, and that is another issue, arguably, with that paper.
But assuming those solvers are behaving honestly, they basically have an incentive to do as efficient a matching as possible between the clients and the compute nodes while simultaneously maximizing their own revenue. And especially when you consider all of the complexities involved in this market matching,
that might involve some very, very complex algorithms, and you could have a genuine market for these, and business models around designing such algorithms. Additionally, if you're trying to do something like global consensus, you could also have a smart contract do all of that.
The design of that smart contract would have to be agreed upon by the network participants. There are issues with that as well — naturally, people will still try to game it — but at least there's one simple set of rules for everyone, fixed and kind of deterministic in that sense. In the literature, there's relatively limited analysis of repeated games and of bounded rationality and real behavior.
All of these games are repeated games. In this marketplace of clients and compute nodes, a client is usually going to send out more than one task in its lifetime, and a compute node is going to compute more than one task in its lifetime.
So these are actually very much repeated games, and that's kind of a gap in the literature. And then, of course, we have bounded rationality and real-life behavior: as we know from behavioral economics, or just common sense, not everybody is the utility-maximizing rational agent.
Arguably, does utility even exist? But whatever, that's a different story — I'm just kidding. So there's another question: are analytic solutions the way to go in these environments? For example, in that first paper — the smart-contract, prisoner's-contract one, which was a really phenomenal paper — they found that their method induces a sequential equilibrium, which is a very, very strong form of game-theoretic equilibrium.
It basically means that if you have this tree of sequential actions, then along every single path in that tree, the most rational action leads to both nodes being honest — a very, very strong form of equilibrium. In the second paper, they found a Nash equilibrium, if I remember correctly. The question is: are these types of analytic solutions the way to go? Because, again, none of these takes into account real-life
behavior or repeated games. They're all doing verification via replication, but in a truly robust marketplace you would have the TEEs and SNARKs and any type of general computation, and every single time you add one more parameter to this set of possibilities,
the analytic results will almost certainly start breaking down, because the complexity of the problem just increases massively. Also, can non-deterministic jobs be supported by these models? There's a lot of focus on deterministic tasks, because they're very easy to replicate.
But if the compute node can't tell that it's being sent a non-deterministic task — it can't really distinguish between a non-deterministic task and a deterministic one — then it's better off playing it safe and assuming that it's deterministic. And if you start really introducing a probability that a task might be non-deterministic, that changes the game theory, and so on.
But most compute nodes are probably not going to do that — though of course you do have to account for the ones that are; I think I have a point about that on the next slide. It's also really important to note that the game theory changes quite a bit when the client actually does have enough hardware — the computational power — to check a result for themselves.
So, as I was describing: the reason most people use cloud computing and so on is that they don't have the local resources required to do all the computation — either the hardware isn't powerful enough to do it at all, or they don't have enough hardware to do all the computation they need — and that's why they outsource the computation in the first place. But if the client does in fact have enough hardware to do the computation by themselves, this does change the game theory a bit.
For example: you send out tasks to a compute node, and you want to check one out of every 100 tasks — or one out of every however many — to see whether it was actually done correctly. And if you dispute one — if you say, oh, that compute node lied to me and didn't give me the right result —
you still need to be able to check that. Or not you, the client — the network needs to be able to check that you, the client, are being honest about them lying. That's a slightly different scenario, but it's solved in a similar way: you just want a subset of nodes to all run the computation themselves and compare the results. So basically, a robust distributed computing platform would have the capacities for trusted execution environments,
SNARKs, ZK-SNARKs, eventually fully homomorphic encryption when we get there, and so on, and the question is how to best balance the needs of the clients and the resources of the compute nodes. Like I said before — are analytic solutions the way to go? I'm not sure they are.
I think simulating this is probably the best approach we can have, simply because, again, every single time you add another parameter to this really complicated environment, it's too much for mere mortal mathematicians to try solving analytically. Maybe AI mathematicians in the future will be able to do a better job.
I wouldn't be surprised, but we're not there yet. And we do want to be able to account for, for example, what the benefit is of using a SNARK or a ZK-SNARK, and what the overhead is.
What's the computational overhead? It's going to cost a lot more, but if you need the privacy, or if you need to know that it was done correctly — well, that's a different story. And the same thing is true of trusted execution environments, TEEs: what's the overhead of setting one up, and what's the overhead of running inside a TEE? My understanding is that it's very, very low, but the cost of doing that versus doing it natively should still be taken into account.
It's also important to note that the stream-processing scenario is different: nodes might not be available for the entirety of a job cycle, the value of the output decreases with time, and so on — so there's a slightly different game theory in that environment. The authors of that decentralized compute marketplace paper followed up with a really good paper on this.
There were all sorts of reputation algorithms in the 2000s — EigenTrust and PowerTrust and PeerTrust and so on — designing reputation systems for nodes in distributed file storage or distributed computing and so forth. The question is: why can't you use these? And the answer is, it's not that you can't. In fact, a robust distributed computing platform would have capacity for reputation.
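For reference, the core of EigenTrust is just a power iteration over normalized local trust scores — a minimal sketch with a toy matrix; the full algorithm also adds pre-trusted peers and damping:

```python
import numpy as np

def eigentrust(local_trust: np.ndarray, iters: int = 50) -> np.ndarray:
    """Global trust = left principal eigenvector of the normalized trust matrix.
    local_trust[i, j] = how much node i trusts node j (rows nonnegative)."""
    C = local_trust / local_trust.sum(axis=1, keepdims=True)  # row-normalize
    t = np.full(C.shape[0], 1.0 / C.shape[0])                 # uniform prior
    for _ in range(iters):
        t = C.T @ t
    return t

C = np.array([[0.0, 1.0, 1.0], [1.0, 0.0, 2.0], [1.0, 1.0, 0.0]])
print(eigentrust(C))  # node 2 ends up with the most global trust here
```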
You really want to have the capacity for any one of these kinds of models. Reputation gets tricky, but it also could be very useful. For example, you might be willing to pay more for nodes with higher reputation. But you also have to be very, very careful, because nodes can build up a very, very good reputation over time, and then, if they're checked very infrequently, they can start messing things up
if they want to. That's what a malicious node would do — but not necessarily a utility-maximizing one. And the space of what can be done with reputation is really, really large. There are also other questions, like: what happens if a client goes to a compute node as a repeat customer, versus just uniformly choosing among the nodes whose bids are within some range?
Does this change the game theory at all? Naturally, if all clients are using the same verification strategy, then in expectation honesty will still be a dominant strategy. But if the clients have different verification strategies, that might not be the case — say one client only verifies one percent of the time, but another client verifies 10 percent of the time.
Well, if a utility-maximizing compute node is able to observe how often each of these clients replicates, and to switch between the two — noticing how often verification happens — that again changes the theory quite a bit as well. And that, plus the repeated games and so on and so forth, is why I advocate more for the simulation strategy rather than going for analytic solutions.
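This is exactly the kind of thing that's easy to simulate and hard to solve analytically — a toy sketch, with all parameters invented, of a node that conditions cheating on each client's observed verification rate:

```python
import random

def simulate(check_rates, gain=1.0, penalty=50.0, jobs=10_000, seed=0):
    """Utility-maximizing node: cheat a client only if the expected gain
    beats the expected slashing given that client's verification rate."""
    rng = random.Random(seed)
    profit = 0.0
    for _ in range(jobs):
        p = rng.choice(check_rates)  # this job's client, drawn from the market
        if gain > p * penalty:       # cheating pays in expectation
            profit += gain if rng.random() > p else -penalty
    return profit

print(simulate([0.01, 0.10]))  # cheats the 1%-verifier, spares the 10%-verifier
```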
And a couple of other things we would want to know about: we do need a list of potential client attacks.
One of the most obvious client attacks is to get a task computed for free and not be called out on it, and obviously the client needs to put up stake — collateral — to make sure that doesn't happen. But the question is what other attacks clients can launch: other than software attacks, viruses and so on, what other kinds of manipulation can clients do? And then another question is: can we get determinism on GPUs?
The answer, from what I understand, is basically yes, but it's a bit complicated. I know there are people working on getting determinism on GPUs, both on inference and on training. It's a bit tough, but my understanding from the people I've spoken to is that it is very much possible. And these are the two papers I cited — there's plenty more, but these are for people who are interested.
I would say these are the two critical papers if you're interested in learning more about this. And yeah, I think that's everything. Thank you, and I'll take any questions if you like.
Any questions — any serious questions? Yes?
Okay, so I'll repeat the question: do the deposit economics put a bound on the cost of a job — or what was it?
Yeah — so that's a bit of an issue. The question is what happens if you have a really, really expensive job. The authors of this last paper here, the very last one, basically said you just have to put in a lot of slack for the potential cost of the job.
Actually, the slack is more for being able to pay for the mediator running the computation n times. But yeah, that is an issue, and I think I said this: predicting the cost of a job in advance is hard.
Oh yeah — "collateral must account for potential cheating and not getting caught until a task is chosen for replication." That's kind of another issue, insofar as, if you're only verifying two percent of the jobs, you can get away with cheating for a really, really long time until you get caught — especially if you're playing some complicated game.
So yeah, paying for the unpredictability of the cost of a job is a really, really challenging problem in this environment. And arguably, the ability to break up a job in that per-time pricing model that I described helps: the client can say, hold on a second, does this job really cost this much? Is this being computed honestly, or correctly?
To the extent of my knowledge, I haven't seen people address that in the literature. I've thought about it myself, and my basic answer is that you just have to have enough in the escrow to pay for anything like that. For example, in the per-time pricing model, where you're paying incrementally as long as the compute node says the job isn't finished: if the money in your escrow runs out and the compute node is saying the job isn't over — well, first of all, you might want to check whether they're cheating, and rerun a small subset of the computation; but otherwise, you just have to put more money in your escrow.
If they're running Wasm, they can just freeze the memory state and wait until you pay more money. It's not a great solution, but there might be — and hopefully will be — better models for that going forward.
So — streaming, yeah. The authors of "Mechanisms for Outsourcing Computation via a Decentralized Market" followed it up with a decentralized market for stream processing as well, and that is a different environment. I should probably have prefaced the talk by saying that what I'm describing is focused on batch processing.
The game theory is somewhat different for stream processing, but that wasn't covered in this talk. There is robust work on it — we can definitely talk about it after.
Yeah — the same authors, or a slightly expanded set of authors, followed this paper up with a paper in 2022 focused specifically on the decentralized streaming-market use case.
So the question is: why do you have to count different types of instructions? The reason is that different hardware has different advantages and disadvantages on different types of computations, and it becomes very, very complex, because you really do have to take into account the diversity of hardware — especially in a decentralized, distributed system where people have gaming rigs and phones and tablets and so on and so forth.
You have to be able to take into account this combination of different motherboards and RAM and CPUs and GPUs and BIOS settings and operating systems — the list of things that cause complexity goes on and on. Different machines are going to have different capacities, and they're going to be better or worse at different types of computations.
So they might each prefer a different way of breaking down the price when they're charging somebody, and that actually helps both the client and the compute node, because it helps match tasks to the types of hardware they're better suited for.
Yeah, exactly — CPUs have specialized instructions for different subsets of operations and so on, so that's basically the logic behind it. But as far as I can tell, based on people I've talked to, the ability to count different types of instructions in Wasm — in both of these models — incurs a pretty high overhead.
Apparently it doesn't exist right now, but it's apparently possible — there's just a very, very high overhead for counting different types of instructions, and that's a bit of an issue. My understanding is that counting instructions in general can be done without much overhead, but counting different types of instructions is very expensive. But this was a more idealized model anyway, which is how I was bringing it up.
Okay, any other questions? All right — yeah, thank you again. Thank you so much for coming. Thank you.