From YouTube: PoRep Day 2020 // Filecoin Proofs - Nicola Greco
Description
For more information on Filecoin
- visit the project website: Filecoin https://filecoin.io/
- or follow Filecoin on Twitter: https://twitter.com/Filecoin
Get Filecoin community news and announcements in your inbox, monthly: http://eepurl.com/gbfn1n
For more information on IPFS
- visit the project website: https://ipfs.io
- or follow IPFS on Twitter: https://twitter.com/IPFS
Sign up to get IPFS news, including releases, ecosystem updates, and community announcements in your inbox, each Tuesday: http://eepurl.com/gL2Pi5
This presentation is going to be a little bit more high level. It tries to show the different challenges that we went through, not just in theory but also in practice, and what our construction looks like. So I'll go very quickly into how we use PoRep in Filecoin in practice, this time-based Proof-of-Replication, and then I will have some time to explain the challenges that we would like to address in order to get a much better proof of space and Proof-of-Replication for Filecoin. So let's get into Proof-of-Replication in Filecoin.
For those new to Filecoin, I said I would give a little bit of an overview. Filecoin offers two main services. One is to create a decentralized storage market, which in other words means that anyone in the world can put up their storage resources and offer them in this market, and any user that participates can make deals with these miners, delegating their storage with a guarantee that, if they are paying, they are getting the service that they were promised. How do we guarantee this? With the proofs of storage that we are doing.
In other words, I don't have to trust the prover, the storage provider, to provide me storage, because they are going to put proofs on the network, and they are only going to get paid if they put proofs on the network. If they happen to lose my files, they stop getting paid and they also lose some amount of collateral.
But in Filecoin, differently from other networks out there, miners, or storage providers, don't only earn tokens by providing storage to clients. They can reuse the same storage to do proof of space, or more technically this Proof-of-Replication, and this allows them to participate in a consensus protocol where they win rewards and create blocks proportionally to their amount of storage power.
Storage providers won't only earn rewards from the storage market; they will also earn the block reward. The block reward is meant to subsidize part of the expensive computation of the proof of space, of the Proof-of-Replication, and at the same time it will attract a lot of mining facilities, so exabytes of storage, to the Filecoin network. Even today on the testnet that went out in December, without incentives,
there have been over three petabytes of storage brought into the network, and so we believe that with incentives we are going to be able to attract much more storage. And, this is my speculation, if there is a lot of storage in the network and the storage providers are also earning the block reward, the price of storage could become a competitive price.
Let's go back to the Proof-of-Replication. We have seen how this works: there is an honest prover that does a computation which is either slow or expensive. What matters is that if they did not do it, or if they did do it but then deleted the data, they need to redo it, and then it is going to take too long, or it is not going to be rational for them to do the attack. And why do we care?
There are several ways to think about this attack. We want to make sure that, if multiple identities claim to store different copies of my file, they are really storing different copies and not just one. We want to make sure that if I give my data to a miner, this miner is not outsourcing the storage to someone else, and the same matters for the network as a whole.
So the ideal requirements for this Proof-of-Replication are the following. We want to make sure that replication is short, so it doesn't take too long to add storage to the network, and cheap, so that the miners can spend less resources on computation and more resources on buying hardware. The second one is very important: we want to make sure that these proofs are small, because if these proofs are large, every miner then needs to put some amount of proofs on the blockchain per gigabyte.
Another property, and I don't know if you remember but we are going to go into this, is that the prover is challenged and has to reply by a certain time. We want to make sure that we give them enough time to post the proof onto the chain, because there are a lot of problems with posting a proof onto the chain: there could be network problems, or the gas fee a miner put could be a little too little.
Say, for example, that the proof takes one hour to generate and one hour to attack. Then we can have a challenge window of the length of an hour. But if it takes one minute to attack, the challenge window cannot be one hour; it must be smaller than one minute. And finally, we want to make sure that a client that wants to extract the data from the replica can do this fast.
Let's say that we have a node in the graph: in order to encode the next node, we need to have all of its parents. In other words, this visual is just to show that in order to encode node 2 you need to have node 1, in order to generate node 3 you need to have 1 and 2, and so on. And there are some special types of graphs, which are called depth robust graphs.
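As a rough sketch of that sequentiality (a hypothetical encoding, not the actual Filecoin construction; the node/parent layout here is illustrative):

```python
import hashlib

# Minimal sketch, assuming a DAG where parents[i] lists indices < i.
# Encoding node i requires the encodings of all of its parents, so the
# work is forced to proceed in topological (sequential) order.
def encode_graph(blocks, parents):
    encoded = []
    for i, block in enumerate(blocks):
        h = hashlib.sha256()
        for p in parents[i]:
            h.update(encoded[p])  # a parent's encoding must already exist
        h.update(block)
        encoded.append(h.digest())
    return encoded
```

Because each node hashes its parents' encodings, deleting a parent forces recomputing everything on the path leading to it.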
These graphs guarantee that a particular depth is preserved. In other words, we can use this particular game, which is called the pebbling game, meaning we can place some pebbles on the graph in the same way that we are playing here, and we can guarantee that there is no better strategy than the sequential one: there is a particular sequential time that we claim for this graph.
This is the particular claim about these graphs: there is some sequential time, and you cannot do better than that sequential time. The property of these graphs is the following (this is not the exact graph we propose and use for the Filecoin network, but the property carries over): even if some fraction of the graph is deleted, a long path survives. So let's say I am the prover, I followed the previous protocol, and at each step I get my data.
How long will it take for me to regenerate that fraction? The way that we are using it is more like this: say that I delete nodes 1 and 2 and all the nodes in the red area. If I were to delete that, it would take me some time to regenerate, and that time is that orange line. So what we know about this particular graph is that, sure, you can delete some percentage of the graph,
A
Otherwise,
you
could
have
regenerated
the
graph,
so
this
would
be
beautiful
if
these
graphs
would
give
us
fantastic
D,
which
is
the
path
and
fantastic
e,
and
this
is
not
the
case,
and
actually
we
don't
even
know
how
to
construct
these
graphs.
We
know
how
to
construct
these
graphs
but
know
in
an
efficient
way.
So
we
know
probabilistic
way
to
construct
these
graphs
and
we
don't
have
very
good
bounds
on
on
these
graphs.
Because these graphs have an e of 20% and a d of 20%, which in practice, in the way that we would be using them, means that the prover would be able to delete 80% and still be able to regenerate before Tmax. So let's assume that this threshold is 80%: if they delete 80% plus some of these nodes, then they will be forced to regenerate the long path, and regenerating the long path would push them beyond the challenge time.
So we understand the space gap, but there is another very critical number, which is the length of the path. The length of the path is 20%, and this means that the difference between generation and regeneration is a factor of five, because one over 20% is five. This means that if an honest prover doing the honest generation takes one unit of time, a dishonest prover would take
twenty percent of that time. In other words, if you want Tmax to be ten seconds, meaning the attack takes ten seconds, or a little more than ten seconds, to do, we need the initialization to be five times longer, so it would take fifty seconds to do the initialization.
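The arithmetic in that example can be written out directly (numbers from the talk, used here purely illustratively):

```python
# If a regenerating attacker only pays the depth fraction (20%) of the
# honest sequential time, the honest initialization must be
# 1 / depth_fraction = 5x longer than the challenge window t_max.
def required_init_seconds(t_max, depth_fraction=0.20):
    return t_max / depth_fraction

# 10 s challenge window -> ~50 s initialization
assert abs(required_init_seconds(10) - 50) < 1e-9
```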
This is in the ideal world, where every step of computation takes the same amount of time across all machines, and this is not the case. Why? Because I could try to build particular hardware that allows me to do the same number of steps in less time. So what we wanted in this adventure, especially with the Supranational folks, was to try to understand what the smallest
or best delay function is, one for which we know what the best possible speedup is. And luckily there is a hash function which we could use as this delay function, meaning we use this hash function to encode every node: SHA-256 is inside basically everyone's CPU, and its latency, meaning how long it takes from the time I call SHA until the time I get the answer, is well understood.
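A hash-chain delay function of the kind described can be sketched like this (a minimal illustration, not the production code):

```python
import hashlib

# t sequential SHA-256 invocations. Each call consumes the previous
# digest, so the chain cannot be parallelized; the wall-clock time is
# bounded below by t times the hardware's SHA-256 latency.
def delay(seed: bytes, t: int) -> bytes:
    out = seed
    for _ in range(t):
        out = hashlib.sha256(out).digest()
    return out
```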
So this is a very quick overview of how, once you build the graph, the proof of space based on pebbling works. You get some data, you divide the data into the graph, you encode the graph using this delay function, and then, I'm not sure if you can see it, you commit to the graph.
The verifier will give you some challenges and you will give them the challenged nodes and all the other material, the auxiliary information that the verifier will need in order to verify that the challenged nodes were encoded correctly. And if you ask for enough challenged nodes, then the verifier is able to get a high guarantee that not just the challenged nodes but the entire graph was encoded correctly. The verification simply reruns the encoding for each of these steps.
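The per-challenge check the verifier reruns might look like this (a hypothetical sketch consistent with the hashing-based encoding above, not the actual circuit):

```python
import hashlib

# The verifier recomputes the encoding of one challenged node from the
# claimed parent encodings and data block, and compares the result with
# the value opened from the commitment.
def check_challenge(claimed, parent_encodings, data_block):
    h = hashlib.sha256()
    for p in parent_encodings:
        h.update(p)
    h.update(data_block)
    return h.digest() == claimed
```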
The issue is that we need to ask for many of these challenges. Say that we want to challenge many nodes: for each node we need to give the node, the parents of the node, and also some vector commitment inclusion proof, because we have committed to the data. We want to make sure that the prover didn't just give us random data that happened to encode correctly.
We want to make sure that the data they give us is consistent with the data that they have committed to, so we also need them to give us some Merkle proofs. And you can do some very simple calculation: if the Merkle tree is of height 30 and we ask for 200 challenges, it would be 200 challenges, times 30, which is the path length of the Merkle tree, times 32 bytes, which is the node size.
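That back-of-the-envelope calculation, with the talk's (illustrative) constants:

```python
# Proof size estimate: challenges * Merkle path length * node size.
def proof_size_bytes(challenges, tree_height, node_size=32):
    return challenges * tree_height * node_size

# 200 challenges, height-30 tree, 32-byte nodes -> 192,000 bytes,
# i.e. on the order of hundreds of kilobytes per proof.
assert proof_size_bytes(200, 30) == 192_000
```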
This would be on the order of hundreds of kilobytes, if not megabytes, and so we cannot just use these proofs as they are. So what do we do? We have the prover generate a proof that they verified these Merkle trees and these checks correctly: we generate a zero-knowledge proof, a SNARK, that proves that we have verified our proof correctly.
So we kind of view SNARKs as a way to compress the proof size from hundreds of kilobytes to just a few bytes. And there are many challenges here as well: are there vector commitments that are more efficient to do inside the SNARK? Can we avoid the expensive parts of verification in our circuits?
A
Someone
can
correct
me
if
little,
but
they
are
large
part
over
70%
medical
tree
verification,
and
if
we,
if
we
were
to
have
replaced
the
miracle
tree
with
an
vector
commitment
which,
for
which
we
need
to
do
more
snack
friendly
operation,
that
would
need,
we
would
would
have
them
more
efficient
proofs
and
in
first
place
it
would
be
fantastic
if
there
would
be
a
snark.
If
we
there
would
be
a
proof
or
application
that
does
not
need
to
go
and
use
these
very
expensive
tools.
So the space gap is a problem, and we wanted to avoid having this very large space gap. From Ben's paper we have two constructions. One of these is called SDR: we stack multiple DRG graphs and we connect them in a special way, for which we can
have a better space gap, in fact a tight space gap. In other words, we can choose what the space gap is, but we need to make some trade-offs with some parameters. The issue with this particular construction is that the security is only based on the security of a single layer of the graph. What does that mean in practice?
SDR gives us a better space gap, but say there are ten layers in SDR and the best attack is an attack on a single layer, and we know that we can attack a single layer by doing just 20% of the computation in parallel. Then this means that with parallelism we go five times faster, and because we have better hardware we go three times faster, so fifteen times in total, because the best attack targets one of these ten layers.
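The single-layer attack arithmetic, spelled out (the 20% and 3x figures are the illustrative numbers from the talk):

```python
# Parallelism regenerates one layer in 20% of the sequential time (5x),
# and faster hardware gives another 3x, for 15x total on that layer.
parallel_speedup = 1 / 0.20
hardware_speedup = 3
total_speedup = parallel_speedup * hardware_speedup
assert abs(total_speedup - 15) < 1e-9
```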
And so, just to summarize the challenges that we had: it would be great to have DRGs with much better practical bounds. We kind of solved the problem of a very large space gap, and the DRG generation issue was superseded, as we are going to see later on. We do have a delay function with a known Tmax; it's just that it's not SNARK-efficient. There are other delay functions that are SNARK-efficient, but for which we don't know what the best speedups are.
And it would be great to have a very large challenge time, but the larger we make the challenge time, the more we increase the time it takes to replicate. SDR, because we have multiple DRGs, requires more complex SNARK circuits, but we came up with several tricks to try to reduce that.
So the time-based proofs cannot give us a short initialization, at least not in the construction that we have seen, and in practice they also happen to be expensive because of memory requirements. But in spirit they are meant to be long and cheap, so let's say that we are not cheap yet, but there are a lot of optimizations that could appear so that they would also be cheap. The on-chain footprint is small, so we have small proofs.
We do have a large challenge time, but this means that the larger the challenge time, the larger the replication time. And the thing that we are missing from this particular scheme is fast retrieval. We have other ways in Filecoin to guarantee fast retrieval, but if you were to extract the data from the Proof-of-Replication, unfortunately you would have to rerun the replication, or you would have to do some form of regeneration,
like the attacker would do in order to regenerate. What I'm saying is that, regardless of the tricks that you could use, it still takes time to extract the data, and ideally we wouldn't have this. Ideally, in terms of clock time, we would like the retrieval to be much faster.
Yes, I'm not going to do this now. I think this is something that we need to do, and we have some intuitions and ideas that we have worked on, but they didn't work out as well as we wanted, to get faster retrieval. These are three ideas, and then, I think, we don't necessarily just need good ideas; we need a much better definition that encompasses them.
[Other speaker] So it gives the storage manager something to play with, right? And how are those related? We can probably describe that; those are like the research areas that we are trying to align, going at really fast speeds. And yeah, getting precise about the numbers is probably something we can do; probably after the talks we can write it up.
So I want to give just some intuition on how some constructions could lead to faster retrieval, constructions which we wound up not using because of small issues with our confidence in their security. The idea is the following: say that it takes a long time to do the replication because of the sequentiality of each of these graphs, but generating the graph would be fast.
Sorry, generating the replica would be slow, but going from the replica back would be fast. Say that, sorry, unfortunately the graph is not really showing. Let's assume that we have the data at the top and we encode the data through these graphs, and then there is a way to decode from the replica up to the data. We could construct Proofs-of-Replication where generating the replica is slow, so for someone to regenerate it would still take a long time, but decoding
the data is fast. Not because we do less computation; in practice we end up doing the same amount of computation, but this computation can be parallelized, and if we can parallelize the computation, then we can go fast. The issue with constructions like this is that there is a particular attack, which I guess Joe will dive into later. But the idea is the following: say that I generate a replica, and then I use this replica to generate another replica.
Then I only store R2, and when I'm challenged on R1 I can just very quickly go up to R1, and I would be able to stack multiple of those. Because going upwards is fast, I would be able to extract each of them and get the answers to the challenges that I need, while only holding one unit of storage. There are some ideas on how to get around this type of attack, for example.
Well, I think we should just dive into them later, but we have some ideas on how we would go about this seal stacking, though none of these ideas worked out. The alternative is to go into the cost model. In other words, let's say that I can do the sealing in parallel, that I can
do the replication in parallel, so I can be very fast, but doing the operation is expensive. What we want to make sure of is that the cost of regeneration is much higher than the cost of storing the data for that amount of time. And you cannot do this with the simple definition of proof of space. Ling Ren has this construction where, if you are not storing the data that you are supposed to store, then you need to do 2^k
computation. Unfortunately, we cannot use constructions like this, because in Filecoin we don't have just a single prover that runs a single instance while nobody else is doing anything. Other people are also doing proofs of space, so we need to make sure that our proof of space is secure under parallel composition, and so we cannot rely on the 2^k computation, because they could be reusing
the storage. In other words, say that I claim to have two units of storage while I only have one. I have enough to do the first proof, and instead of doing the 2^k computation I could just reuse the storage to rerun the encoding, which takes less than the 2^k computation. So what we need to do is make sure that we know what the best regeneration attack is, and that is key.
So once we have a proof of space, we need to know what the best regeneration attack is, and then we need to place a cost on the regeneration attack. We tried multiple paths here. We tried to understand how much it costs to hash: for example, we could tune the amount of hashing such that the cost of regeneration, because it involves a lot of hashing, is higher than the cost of storing the data.
Storing the data, and I think these numbers are all per second, costs some amount of money, and doing a hash costs that much, so the number of hashes that we would need to require would be so high that the initialization would be almost comparable to a proof-of-work. So we excluded hashing, and then we looked into memory bandwidth, which basically comes down to the cost of energy.
The idea is this: say you have some amount of memory and you are forced to use some of that memory consistently throughout the computation; then you could not use that memory for something else. Before, we were thinking about the cost of accessing memory; now, if we could force some computation to keep some amount of memory occupied, then we could force the
attacker to buy a lot of RAM units. And this avenue looks promising, because the cost of RAM is very high, and so if we force the attacker to fill the RAM, they cannot really use the RAM for something else during that time. And so, if they want to pull off this attack on a small sector, it has a particular cost; sorry, for a small file, it has a particular cost.
Can we add other assumptions that could give us a replication which is both cheap, like the first one, which was based on time, and fast, like those that are based on cost? Let me give you an intuition, which was the original intuition with Filecoin: we can run some initialization, which takes a small amount of time, and then we continuously check, such that we know a particular Tmax, meaning we need to be challenged.
They would spend most of the challenge time doing this alternate construction with VDFs and its execution, and then they still have some time, which is this Tmax, to be able to reply on time. So it looks like there might be space for a proof of space, a Proof-of-Replication, that has both a cheap initialization and a very fast initialization, and this would allow us to get this idea of a true Proof-of-Replication.