Description
For more information on Filecoin
- visit the project website: Filecoin https://filecoin.io/
- or follow Filecoin on Twitter: https://twitter.com/Filecoin
Get Filecoin community news and announcements in your inbox, monthly: http://eepurl.com/gbfn1n
For more information on IPFS
- visit the project website: https://ipfs.io
- or follow IPFS on Twitter: https://twitter.com/IPFS
Sign up to get IPFS news, including releases, ecosystem updates, and community announcements in your inbox, each Tuesday: http://eepurl.com/gL2Pi5
Currently, sectors are 32 gigabytes, although that number is subject to change depending on the PoRep parameters. The way this works is that a storage miner collects data from users, packs that data into a sector, and then seals that sector. Sealing is the Filecoin terminology for encoding the sector using the PoRep encoding scheme before storing the data. And so, as you heard from Aaron and Nicola, this sealing is essential to enforce unique storage resources dedicated to each sector.
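To make that pipeline concrete, here is a minimal Python sketch of the collect, pack, and seal flow. Everything here is illustrative: the names, the tiny sector size, and especially the XOR "encoding" (a stand-in for the real slow, sequential PoRep encoding) are hypothetical, not the Filecoin implementation.

```python
import hashlib

SECTOR_SIZE = 64  # toy value; currently 32 GiB in Filecoin

def pack_sector(pieces: list[bytes]) -> bytes:
    """Pack client pieces into one sector, zero-padding the remainder."""
    data = b"".join(pieces)
    assert len(data) <= SECTOR_SIZE, "pieces exceed the sector size"
    return data.ljust(SECTOR_SIZE, b"\x00")

def keystream(seed: bytes, length: int) -> bytes:
    """Expand a seed into `length` bytes (hypothetical stand-in for the
    slow, sequential PoRep key derivation)."""
    out, block = b"", seed
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:length]

def seal(sector: bytes, miner_id: bytes, randomness: bytes) -> bytes:
    """'Seal' a sector: encode it under a key unique to this miner,
    sector, and chain randomness, so every replica is distinct."""
    key = keystream(miner_id + randomness, len(sector))
    return bytes(a ^ b for a, b in zip(sector, key))

sector = pack_sector([b"client data"])
replica = seal(sector, b"miner-1", b"chain-rand")
assert seal(replica, b"miner-1", b"chain-rand") == sector  # XOR seal is symmetric
```

Note that this toy seal is symmetric (decoding is the same operation as encoding), which is exactly the property the talk will poke at later.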
Even if you rule out that trivial case, you can say: well, the miner can just store encrypted zeros to which only they have the key, and you wouldn't know the difference between storing encrypted zeros and storing some encrypted useful data on behalf of an actual user. So this is why, in particular, if we're relying not just on retrievability of useful data but on storage resources as the basis for the blockchain, for consensus and for the block reward, PoRep, and not just proof of retrievability, is essential.
Now again, as you heard from Nicola, PoRep is used in tandem with what we call proof of spacetime, or PoSt. The way this works is that at some frequency (and this can happen randomly or at some regular interval, depending on the parameters), each miner is sampled for a PoSt challenge on some subset of the miner's sectors, that is, the miner's storage. When this is done at regular intervals, the idea is to make sure that either it's done frequently enough, or it requires a fast enough response, that the miner can't possibly, or it doesn't make sense rationally for the miner to, regenerate the data to meet each of the challenges. That is, they have to be storing the data the whole time, because they don't know when, and on what data, we're going to challenge them.
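A minimal sketch of that challenge-response loop, with hypothetical parameters: the challenge samples a random sector and a random position within it, and the deadline is set so that reading the stored replica succeeds while re-running the slow sealing computation cannot.

```python
import random
import time

CHALLENGE_PERIOD = 30.0   # seconds between challenges (illustrative)
RESPONSE_DEADLINE = 5.0   # max seconds to answer (illustrative)

def issue_challenge(sealed_sectors: dict, rng: random.Random):
    """Sample a random sector and a random position within it."""
    sector_id = rng.choice(list(sealed_sectors))
    position = rng.randrange(len(sealed_sectors[sector_id]))
    return sector_id, position

def respond(sealed_sectors: dict, sector_id, position):
    """An honest miner just reads the challenged byte from disk; a miner
    that discarded the replica would have to re-run the (slow) sealing
    computation, which cannot finish before the deadline."""
    return sealed_sectors[sector_id][position]

rng = random.Random(42)  # in the protocol this comes from chain randomness
sectors = {"s0": b"replica-bytes-0", "s1": b"replica-bytes-1"}
start = time.monotonic()
sid, pos = issue_challenge(sectors, rng)
answer = respond(sectors, sid, pos)
assert time.monotonic() - start < RESPONSE_DEADLINE
```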
Alright, so I won't dive into the details of the PoRep and PoSt model too extensively. Rather, I want to highlight some specific challenges that we face in Filecoin, which, both because of the more stringent requirement to enforce storage of useful data and because of the technical restrictions of the protocol and the scalability targets we're aiming to hit, are particularly salient in the Filecoin setting.
So the first issue is one we've discussed previously, which is that the encoding procedure has to be expensive, and the way in which it's expensive is pretty flexible. It could be cost-based, meaning that the adversary needs to spend a certain amount of some resource in order to re-encode. Or it could be latency-based, meaning no adversary in the world, even one with fifty million dollars to build a custom machine, can possibly regenerate the data quickly enough to meet the challenge. So it has to be expensive. Now, if we make it expensive in the cost model, that's not ideal for the honest user (we'll get to the details of that momentarily), but essentially, because the honest user has to do this at least once, it enforces a bound on how often we can require the adversary to do it in order to make the rational equilibrium work out. That is to say, it has to be cheap enough that the honest party will actually do it and actually want to participate, but it can't be so cheap that the adversary can do it often enough to answer all the challenges. The cost model is usually easier to prove.
This motivates a very sharp constraint here, which is that, in the extreme case, sealing once (which is something the honest party has to do regardless) has to be cheaper, and ideally much cheaper, than just storing the data for several months. Otherwise, the honest miners are just not going to store the data. And the problem is that hard drives are really cheap. I mean, that's great: we can potentially get tens of exabytes on the network because hard drives are cheap. But also, because hard drives are cheap, sealing has to be really cheap; yet it has to be expensive in some way. So the ideal case would be that it's so cheap that we're not even depending on a cost assumption, but there's still this intrinsic latency bound that means the adversary can't possibly do this inherently sequential computation fast enough to answer the challenge.
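As a back-of-the-envelope illustration of that constraint (all numbers invented for the sketch, not protocol parameters): the one-time sealing cost has to come in under several months of dirt-cheap disk, which leaves very little room.

```python
# Illustrative numbers only, not Filecoin's actual cost parameters.
SECTOR_GB = 32
DISK_COST_PER_GB_MONTH = 0.002   # hard drives are really cheap ($/GB/month)
SEAL_COST_PER_GB = 0.01          # hypothetical one-time sealing cost ($/GB)
MONTHS = 6

storage_cost = SECTOR_GB * DISK_COST_PER_GB_MONTH * MONTHS
seal_cost = SECTOR_GB * SEAL_COST_PER_GB

# The honest miner pays seal_cost once, then storage_cost over time; for
# honest participation to be rational, sealing must not dominate storage:
print(f"store {MONTHS} months: ${storage_cost:.2f}, seal once: ${seal_cost:.2f}")
assert seal_cost < storage_cost, "sealing too expensive for honest miners"
```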
So that's sort of where we're at now. Another challenge is one that we've heard about in previous talks as well: what we call retrieval latency. Sealing may be extremely slow and, in fact, has to be extremely slow in terms of sequential time, especially for large sectors.
Typically, what we want to show is: if the attacker wants to re-encode a sector from scratch, they have to spend at least 30 seconds sequentially, a latency bound of 30 seconds. And if the PoRep is symmetric, that is to say, encoding is the same operation as decoding (or a similar operation with the same kind of cost profile), then decoding, i.e., retrieving the original data, also has very high latency, which is not good.
So if we want to enforce a bound of 30 seconds (and this sort of falls into the analysis that Nicola mentioned as well), you have this bound of 30 seconds, and you have some loss factor inherent to the PoRep construction, where, for example, the honest party has to compute the entire graph to encode, but the adversary only has to compute 20% of one layer. That's a pretty brutal constant here, and the hardware gap (ASICs) only makes this worse.
Let's say the adversary can compute SHA-256 in four nanoseconds sequentially, and it takes us 12 nanoseconds sequentially using the SHA extensions on the CPU. That's another factor of three. So if we want a symmetric PoRep, where encoding is the same as decoding, and we want to enforce a challenge-response time bound of 30 seconds, then we have this ridiculous latency required for the honest miner to actually retrieve the original data, that is, to decode the PoRep encoding.
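Plugging the talk's numbers into that argument shows how brutal the symmetric case is: a 30-second adversarial bound, a 5x loss factor (the adversary recomputes only about 20% of one layer), and a 3x hardware gap multiply straight into the honest decode latency.

```python
# Numbers from the talk (approximate, for illustration).
target_bound_s = 30.0     # required sequential time for an adversarial re-encode
loss_factor = 1 / 0.20    # adversary recomputes only ~20% of one layer -> 5x
hw_factor = 12 / 4        # honest CPU SHA-256 ~12 ns vs adversary ~4 ns -> 3x

# For a symmetric PoRep (decode == encode), the honest decode latency is
# the adversary's bound amplified by both factors:
honest_decode_s = target_bound_s * loss_factor * hw_factor
print(f"honest sequential decode latency: ~{honest_decode_s:.0f} s")  # ~450 s
```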
That's a big problem, and so this motivates a variety of new paradigms and new design choices for PoRep in general. One of these is asymmetric PoRep, which I can go into in a bit more detail later, but it's where, essentially, encoding and decoding are different operations. The other is a new direction which I'll allude to later as well, which we call wide PoRep. Basically, we want to provide an additional knob to tune: we still want some relatively stringent latency bound, but we don't want the latency bound to depend on the parameters of the construction. We don't want to say: well, if you want 32 gigabyte sectors, you have to do 32 gigabytes times degree sequential hashes. That puts a very severe limit on the latency bounds we can get relative to the decoding latency.
The other problem is again one that we've discussed in previous talks; I'll go into a little bit more detail here. These are what we call regeneration attacks. In a full regeneration attack, the adversary actually regenerates all of its data, every sector, every time it gets challenged, and this seems ridiculous.
So these are the two types of regeneration attacks we're concerned about. Right now, in the Filecoin protocol, we have a sort of blanket mitigation which is effective against both, but comes at a considerable cost for honest parties, which we call Election PoSt. The way this works is essentially contrary to the traditional challenge-response paradigm: we don't sample a miner, then sample sectors, and then challenge them on specific data items. Rather, throughout the continuous execution of the protocol, every block in the chain, each miner is required to examine all of its storage, or some non-trivial constant fraction of all of its storage, to mine for winning tickets (that's the language we use). And so what we say is: well, you don't know when you're going to be challenged, because you're always challenged; you're challenged every epoch, every step in the blockchain's execution. You need to scan your storage with that time-dependent state from the chain and see if you win. So if you try to do a full regeneration, what that amounts to is regenerating all of your data every epoch, every step of the chain, every 15 seconds or 30 seconds, and if you're going to do that, you might as well just store the data.
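A minimal sketch of that always-challenged flavor of Election PoSt (the ticket derivation and difficulty here are hypothetical; the real protocol's sampling and proofs differ): the point is that every epoch's ticket depends on both fresh chain randomness and the stored replica bytes, so the replica must be consulted, or regenerated, every epoch.

```python
import hashlib

WIN_TARGET = 2 ** 256 // 100  # hypothetical difficulty: ~1% win chance per sector

def ticket(chain_rand: bytes, miner_id: bytes, sector_id: int, replica: bytes) -> int:
    """Derive this epoch's ticket for one sector. Reading `replica` is
    the point: the miner must consult its stored data (or re-seal it!)
    every single epoch."""
    probe = replica[:32]  # stand-in for a challenged slice of the replica
    h = hashlib.sha256(chain_rand + miner_id + sector_id.to_bytes(8, "big") + probe)
    return int.from_bytes(h.digest(), "big")

def mine_epoch(chain_rand: bytes, miner_id: bytes, sectors: dict) -> list:
    """Scan all (or a constant fraction of) stored sectors for winners."""
    return [sid for sid, replica in sectors.items()
            if ticket(chain_rand, miner_id, sid, replica) < WIN_TARGET]

# Every epoch (~15-30 s) the chain supplies fresh randomness, so a miner
# that discarded its replicas would have to regenerate everything, every
# epoch, at which point it may as well just store the data.
winners = mine_epoch(b"epoch-7-rand", b"miner-1",
                     {0: b"replica-0" * 4, 1: b"replica-1" * 4})
```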
So that's where this sort of rationality assumption comes in: you could do full regeneration, but it doesn't make any sense, because you're challenged essentially every epoch. And similarly, if you try to do selective regeneration, what you'll find is that whichever sectors you decide to regenerate are the only ones on which you can win. So it scales linearly; it doesn't help you to regenerate fewer sectors, because then you win less. It'll be as if you were a smaller miner.
So this is our current mitigation. It works quite well against both types of regeneration attacks. It comes at a considerable cost, though, which is that honest parties have to scan their entire storage. We're spinning all these disks, we're burning all these processors computing these proofs, and it imposes very serious constraints on, essentially, the cost of participating in the protocol. That is to say, you would like, for a number of reasons, for participation to be essentially free.
That is, in a robust, low-latency Byzantine agreement protocol, for example, as you would use to maintain consistent state on the chain, you don't want parties to have to spend resources or consume latency in order to participate. You want it to be completely frictionless, because the faster you can do that, the more frequently you can have rounds of the consensus protocol, the more efficiently you can exchange this information, and the more rapidly you can achieve convergence of the chain; and separately, the less margin...
If we can help it, and this is where, again, we'd want some very strong notion of soundness that's completely based on latency: we just say we're going to challenge a miner, give them a certain amount of time to respond based on the chain state, and they can't possibly respond within that period unless they've stored the data, or a large fraction of the data. So that's the goal.
Now I should also mention an attack that's somewhat specific to the context of Filecoin, and useful proof of replication in particular, and this is also something that Nicola alluded to in his talk, which is what we call seal stacking, the term coming from the fact that we refer to PoRep encoding in Filecoin as sealing. The way this works is: the adversary starts with some data they can easily regenerate, let's say all zeroes for simplicity.
Posing as a client, they submit this as data to be stored. They make a deal with themselves to store the data, they encode the data, and they get some replica data X1, the PoRep-encoded data. They submit this as a client again (if you want, they can encrypt it so that you can't tell they're doing it), and make another deal with themselves through the chain, posing as some other Sybil identity.
They repeat this K times and only store the final layer. Now, whenever they're challenged on, let's say, X_i, they'll just look at their final encoding and decode only that section of the data to answer the challenge. And so the idea is: if decoding is arbitrarily fast, as we would like it to be functionality-wise, then we cannot escape this attack, because if it's arbitrarily fast, they can always decode in time to meet the challenge.
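Here is the attack written out, using a toy XOR seal like the earlier sketch (again purely illustrative): K self-deals stack K encodings, only the last layer is stored, and a fast symmetric decode walks back down to answer any challenge.

```python
import hashlib

def keystream(seed: bytes, length: int) -> bytes:
    out, block = b"", seed
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:length]

def seal(data: bytes, deal_id: int) -> bytes:
    """Toy symmetric seal (XOR with a keystream derived per deal)."""
    key = keystream(deal_id.to_bytes(8, "big"), len(data))
    return bytes(a ^ b for a, b in zip(data, key))

# Seal stacking: start from trivially regenerable data (all zeroes),
# make K self-deals, each sealing the previous layer's replica...
K = 5
layers = [bytes(64)]                       # X_0 = all zeroes
for k in range(1, K + 1):
    layers.append(seal(layers[k - 1], k))  # X_k = seal(X_{k-1})

stored = layers[K]  # ...and store only the final layer.

def answer_challenge(i: int) -> bytes:
    """When challenged on any X_i, decode back down from the top layer."""
    x = stored
    for k in range(K, i, -1):
        x = seal(x, k)  # the XOR seal is its own inverse
    return x

assert all(answer_challenge(i) == layers[i] for i in range(K + 1))
# If decoding is arbitrarily fast, the attacker gets credit for K sectors
# of storage while physically storing only one.
```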
So that's a serious problem, and this, I think, gets to Ben's question as well: ultimately, decoding can be too fast, while retrieval can never be too fast. So what we would like is a tunable knob. There are a variety of ways we can get it, but we want a knob that lets us increase or modify the parameters so that decoding certainly takes longer than four milliseconds.
It has to take longer than an honest party seeking to the location on disk to read it, but not so long that it's impractical to use in the network, and this is where the figure Juan mentioned, 400 milliseconds, comes in. Anywhere in between there is great. We would like to say: decoding a particular entry must take the attacker at least, say, 15 milliseconds. That would be fantastic. If it must take the attacker 50 milliseconds, but the honest party can still do it in 200 milliseconds, that's sort of the Holy Grail, and it's hard, but we claim not impossible, to achieve, and we'll see a few directions towards achieving this in the last few slides.
All right so, as promised, here's one new direction that we're looking at, which is what we call wide PoRep. Wide PoRep is still in the symmetric paradigm, so it's not an asymmetric PoRep: decoding and encoding still look roughly the same. But the idea is that we want to provide this tunable knob. We want to say you can crank this parameter up or down to make both encoding and decoding slightly more or less expensive, in order to hit this narrow window between four milliseconds and 400 milliseconds. Concretely, the way this works is: imagine that the entire original data is divided into what we call windows.
Intuitively, you can think of a window as something like 32 megabytes, so a pretty small chunk of data. You still have these very large sectors, maybe 32 gigabytes, maybe larger (in fact, we're playing with some constructions where the sector size is a terabyte), but each is divided into these very narrow windows of 32 megabytes. Within such a window, you follow the typical symmetric PoRep paradigm, which is to say: in order to seal, you derive some randomness from the chain.
You use that as the input to some depth-robust graph (DRG), and we assume that this depth-robust graph has to be significantly larger than the 32 megabyte window: let's say 512 megabytes, or 1024 megabytes. As we'll see, it often helps for this to fit in the memory of a GPU, for example if you're going to use some memory-hard function as well, as I'll show momentarily. But you can think of the window as 32 megabytes.
Think of the DRG as a gigabyte or thereabouts, and think of it like this: there's only one DRG in the construction, only one inherently sequential path, and that's this path. All of the rest of the construction, the entirety of these layers, is geared towards making every challenge hit a large fraction of this DRG as dependencies. So concretely, what we do is first reduce this DRG via some parallelizable procedure, like taking the first k items, hashing them, and collapsing them to one item.
We first collapse it to some small constant multiple, let's say 2x or 4x, of the mask. We then essentially amplify the dependencies: using butterfly or superconcentrator edges, or expander edges, we connect these layers (which themselves are just singletons, not DRGs) to each other, and we finally distill them down to the 32 megabytes, which we XOR with the data to produce the encoding, as we would in a standard symmetric PoRep.
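A highly simplified sketch of one window's sealing under these hypothetical parameters; the expander/butterfly wiring and the real DRG edge structure are elided, but the shape (a wide sequential DRG, collapsed down into a window-sized mask that is XORed with the data) follows the description above.

```python
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

WINDOW_NODES = 4   # think 32 MB of 32-byte nodes in the real thing
WIDE_FACTOR = 8    # the wide DRG (~1 GB) is ~32x a 32 MB window; toy 8x here

def wide_drg(seed: bytes, n: int) -> list[bytes]:
    """Toy sequential DRG: each node depends on its predecessor.  (A real
    DRG adds depth-robust long-range edges on top of this chain.)"""
    nodes = [H(seed)]
    for i in range(1, n):
        nodes.append(H(nodes[i - 1], i.to_bytes(4, "big")))
    return nodes

def collapse(layer: list[bytes], out_len: int) -> list[bytes]:
    """Parallelizable reduction: hash groups of k nodes down to one."""
    k = len(layer) // out_len
    return [H(*layer[i * k:(i + 1) * k]) for i in range(out_len)]

def seal_window(window: list[bytes], chain_rand: bytes) -> list[bytes]:
    wide = wide_drg(chain_rand, WINDOW_NODES * WIDE_FACTOR)  # sequential part
    mask = collapse(wide, WINDOW_NODES)  # distill down to window size
    # (the real construction interposes expander/butterfly layers here so
    # every challenge depends on a large fraction of the wide DRG)
    return [bytes(a ^ b for a, b in zip(w, m)) for w, m in zip(window, mask)]

window = [H(b"data", i.to_bytes(4, "big")) for i in range(WINDOW_NODES)]
replica = seal_window(window, b"chain-rand")
```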
What this aims to achieve is a tunable parameter: the bigger the window, and ultimately the bigger this wide DRG, the stronger the latency assumption, that is, the longer the path of hashes the adversary has to compute in order to recompute the mask. Concretely, if the adversary drops some fraction of this storage and gets sampled there, they don't have that data, and the rest of these layers are just there to amplify the dependencies of the challenged data item.
So maybe the challenge gets enlarged to two items here, four items here; the adversary stores one, so it's three items; but then it gets amplified to six items, twelve items, and eventually it's a huge fraction of this layer that they have to recompute. And of course they can't store this entire layer, and this is the key insight behind the construction: they can't store this entire layer, or even an appreciable fraction of it. It's too big; it's way bigger than the data.
So if they have this budget of storage to allocate, which is all that makes sense rationally, all they can do is store maybe one eighth of this graph, which is nothing. They store one eighth of this graph, and then we hit three fourths of the graph. That's a very strong invocation of the DRG property, the depth-robustness property. And what that tells us is that by tuning these parameters, we can get a very large latency bound for a very small window.
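To see the numbers (illustrative parameters only, not the production values): depth-robustness turns that "store one eighth, hit three fourths" situation into a forced sequential path whose length, times the adversary's per-hash latency, is the latency bound.

```python
# Illustrative parameters only, not the production values.
n = 2 ** 25    # nodes in the wide DRG (~1 GiB at 32 bytes/node)
beta = 0.25    # depth-robustness: the ~3/4 subgraph that survives the
               # adversary's deletions still contains a path of ~beta*n
hash_ns = 4    # adversary's sequential hash latency (ns)

# The adversary can afford to store only ~1/8 of the wide DRG, so a
# challenge forces it down a long sequential path through the rest:
forced_latency_ms = beta * n * hash_ns * 1e-6
print(f"forced sequential recompute: ~{forced_latency_ms:.0f} ms")  # ~34 ms
```

With parameters in this regime, the forced recompute lands inside the 4 to 400 millisecond window discussed earlier, even though the window itself is tiny compared to the sector.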
Nothing comparable to this full amount of data is needed. So this is one building block; it's sort of the essential lower bound on latency that we'll need in order to mitigate the seal stacking attack, and this building block, in conjunction with what we call asymmetric PoRep, is what we currently believe is the best path to achieving low-latency retrieval.
Now I'll also briefly mention one other direction, which is primarily geared toward achieving different goals than fast retrieval, but it's still interesting and could play a sort of auxiliary role in the new constructions. We call it MemPoRep, or memory-intensive PoRep. Here you can imagine that we have an arbitrary PoRep construction; the schema is parametric over the construction, so we can take wide PoRep...
...we can take SDR, we can take ZigZag, any construction. What we describe here is sort of a schema for modifying it such that (a) rather than being based only on the latency of hashing, it's also based on memory hardness, either the cost of memory access or the latency of memory access, and (b) the edges in the graph or graphs are determined dynamically. In other words, rather than being a fixed DRG a priori, the edges of the DRG are determined by the outputs of the intermediate hashes.
So when you produce some node, you hash all of its parent nodes to produce it, and then you use the hash output, interpreting those bits as indices, as a sample from the DRG distribution, which tells you what the parents of the next node are going to be. By doing so, you essentially force the adversary to have general-purpose random-access memory. They can't bake the graph into the ASIC or the FPGA, or even into the algorithm in general.
They can't do some a priori analysis on the DRG, some costly analysis to reduce the depth, because the DRG is going to be different for every instance; in fact, they won't even have enough storage to store the DRG itself, those edges, without running the computation again.
So these are the two key insights here. Basically, we have a relatively small sector size, let's say four gigabytes, so that it comfortably fits in memory on a GPU, for example, and there are a ton of edges, a very high degree (there are a number of tricks to accomplish this without blowing up the overhead, which we can go into later). And what this buys us is...
...essentially, we're forcing the adversary to have a machine with general-purpose random-access memory with very high bandwidth, and what we're betting is that, no matter what kind of chip they build, they're not going to do much better than a GPU, since GPUs are already designed for exactly that. In theory this could work either in the cost or the latency model, although I think it is best suited for the cost model.
Audience: So the question is: if the wide DRG is the same size as the data, then [inaudible], so it sounds like you're not able to detect that, and you need this to be much wider. So the question is: why, and how much wider? Because the depth is going to help you; you'd have to do less computation in order to get this, right?
Speaker: So you have a lot of sectors, right, and each of them is one replica. If this wide DRG blows up your storage by a factor of 10, then you definitely don't want to blow up all your storage by that factor. But if you can use one single wide DRG for all your storage, then you're just adding a constant, a small constant amount of storage, which you can totally... sorry.