Description
Agenda: https://github.com/ethereum/pm/issues/432
EIP-4444: https://eips.ethereum.org/EIPS/eip-4444
Discussion to: https://ethereum-magicians.org/t/eip-4444-bound-historical-data-in-execution-clients/7450
Contact Ethereum Cat Herders
---------------------------------------------------
Discord: https://discord.io/ethcatherders
Website: https://www.ethereumcatherders.com/
A: Work incrementally, and start thinking of what we need to do to make this initial proof of concept a reality. That's why I added those agenda items on GitHub, in particular: what kind of format we should use, which kind of client we should use, and all that stuff. We also talked with some Portal Network people, who I think are in this call, who could give us an idea of how Portal could facilitate four fours. And yeah, that stuff.
C: Is someone else here from the Portal Network? I do think we could have a pretty helpful conversation there. At least for me it'd be helpful to hear more about how they see that fitting in here, because I definitely think we want something like the Portal Network to serve historical data, and it would support four fours quite nicely.
C: We're working on some of the client stuff and talking about the Portal Network, and then, yeah, separately. Cool, good to see you.
A: I mean also, regardless of the Portal Network, IPFS would be another place to store historical data, as one of the easy backends. Torrent is the obvious one; IPFS is another obvious one. So that's another useful thing, having expertise in IPFS.
A: All right, look, it's a rough agenda that I made up on the spot. The way I started thinking about this is that we should think of how we expect nodes and clients in the future to use historical data. Say we have a file that includes historical data, perhaps per year or per month, we don't care at this point, and we get it through torrent or the Portal Network or whatever.
A: How do we want clients to consume this data? So, for example, the most obvious thing to do is to have an import functionality, import it into the local state of the client, and have the client manage it. Another thing I started thinking about is whether we want to be able to use the historical data archive files directly for queries.
A: Like as databases. I'm not sure that's worth having, I think it might complicate things, but it's also something worth considering: how we expect the UX to be. And then the next agenda item is, given the UX we want to support, how do we expect the format to be, what kind of encoding do we want?
A: You guys are writing stuff in the chat, but I'm not monitoring it while I'm talking, so if it's something relevant, feel free to interrupt.
A: So, what kind of encoding do we want for the historical archives? Do we want a binary encoding? It's going to be eth1 data, so RLP stuff. I know that some clients, like Besu, have an RLP export function; how does that work, I don't know. Do we want to just use that? I'm not sure what we want to do here.
A: So that's the second agenda item, and the third is, moving toward the proof of concept, to decide which client we want to target first. As someone mentioned, in theory we just need one export functionality, one client to export everything, and we want all the clients to be able to import. It's not necessary for all clients to be able to export, since we just want one good export that everyone's going to use. So yeah, that's the agenda, and then have some discussion with the Portal Network group about how that would look and how integration would work, and that stuff.
B: It feels to me like the biggest blockers are just going to be convincing people that this is a good thing to do. I don't feel like any of the technical discussions we're looking to have are going to be things we don't have answers to, or don't have things that might solve those problems; it's more just bikeshedding over what we think the best way is to do it.
D: This is related: I'm thinking about the length of the period of time for which the history is retained, under the guarantee that it's stored on the nodes in the network. Probably this one year is too much if we potentially increase the size of a block.
D: I've been doing some very rough calculations. What is the maximum size of a block, it's two megabytes, right?
D: The size of a block with the current gas limit, the max size of a block, is about two megabytes, right? Okay, and if we have this, let me do the math.
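(For reference, a rough back-of-the-envelope version of that calculation, with illustrative numbers that are not from the call: at a ~13 second block time there are roughly 2.4 million blocks per year, so worst-case 2 MB blocks give on the order of 5 TB of history per year, while typical block sizes give far less.)

```python
# Rough, illustrative numbers only (not from the call).
SECONDS_PER_YEAR = 365 * 24 * 3600
BLOCK_TIME_S = 13            # ~13 s average pre-merge block time
MAX_BLOCK_MB = 2.0           # worst case at the 30M gas limit
TYPICAL_BLOCK_MB = 0.1       # historical averages are far below the max

blocks_per_year = SECONDS_PER_YEAR // BLOCK_TIME_S        # ~2.4 million
print(blocks_per_year * MAX_BLOCK_MB / 1e6, "TB/year worst case")
print(blocks_per_year * TYPICAL_BLOCK_MB / 1e6, "TB/year at typical sizes")
```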
B: Yeah, I guess the question then is a little bit: if we make a decision on this time period, how difficult is it going to be to tune that over time? Is this the sort of decision where we're going to say, here is the period in which you can expect, we're going to guarantee, that clients will maintain that data, and then change it in six months or one year or some amount of time?
B: Yeah, I mean, I definitely think more recent is more important, and it's not clear to me where it starts becoming exponentially less important, if it ever does. But I would guess that there are still a fair number of people using things between a three and 12 month period, and the fewer people we can upset with this change, the better.
C: I think it's okay to tune, but also, in terms of de-risking this, because it is such a huge change, it makes sense to start with something very big, where you almost don't notice it, and then once we have it working, and we have all these other methods, and people feel comfortable going to torrent networks to get the history and things like this, then we can say: okay, let's ratchet it down to, like, six months.
C: And, you know, there's going to be some bound: the consensus needs this much history to do anything, so there's going to be a certain amount of time. We can probably go slower than a year if we need to.
A: So right now it's embarrassingly small, it's like 14 days, and we want this to get bigger. But, you know, I don't think it will under the current model; I don't think it will ever get bigger than, say, eight months or something. So it's true, one fundamental thing here is that the prune period needs to be bigger than the weak subjectivity period, but I don't think that's going to be too big anyway, as long as it looks like it does right now.
B: That's where I think the historical data will start growing really quickly, because I think it's unrealistic for us to have consistent two-megabyte blocks, since that assumes the full 30 million gas is used. Realistically it's closer to under one megabyte, and for the most part we're not pushing 50-60 percent of roll-up data yet, so we're still a ways out from seeing that much block growth.
B: I think that probably won't happen at this point. Okay, I think Tim said he talked to a lot of client devs, and people want to focus on the merge, and that's totally understandable. So I don't think 4488 will happen before the merge anymore, which kind of gives us a little bit more flexibility with four fours: we can just spend a little time focusing on that, rather than trying to, you know, combine them and do them at the same time.
A: However, I think the mental model is that four fours should guarantee that clients can fetch the data needed for weak subjectivity sync over the p2p network, but over time they will not be guaranteed to fetch any other data. Potentially, if we start shrinking the period, they will still get what they need for weak subjectivity sync, but anything else they want they might not get.
A: And if clients start working toward that assumption, you know, we can start with a big period like one year, but then we can start shrinking it, and more mechanisms will come along so that this will be easier for clients. I don't know if this makes sense.
B: So, okay, that kind of leads us to the question you asked, though, which is how historical data will be consumed by clients. The way I've sort of been understanding this is that, in the ideal world, it's still built into clients; you're just not using the devp2p protocol for the most part. And it depends on exactly what the solution is: maybe, if it's BitTorrent, we don't want to tie that into the client; that should be something that we just download.
A: On the second point: not necessarily, yeah, I'm not necessarily anticipating that. I think that, (a) it's a decision of the client whether they want to have that code, and then (b) I was thinking of this more in terms of the orchestration wrapper around potentially multiple releases, just so that you don't have to go acquire those yourself and deal with the raw block data. But I think in the beginning it's unlikely that clients will start removing old code immediately, and so having a simple way of allowing people to continue doing that.
B: Probably, yeah. I think the thing is, I don't know when that software would come into play. Like, I don't know, maybe Marius does.
B: I don't know, I mean, I guess you guys tell me how important you think it is. I think it is an important piece, and I don't think it's a question of whether it can be done; it's just engineering work to do. And so to me it's kind of important to determine: is that something on client developers' very-soon roadmap after four fours, like, as soon as four fours is able to make the data available?
A: So, just on the original question, to understand a bit more: let's say that I got the archive file from wherever, a list of mirrors or whatever. So I have an archive file that contains, I don't know, blocks from the past four years, and I import it into the client. Does the client need to, I don't know, run all the EVM and do everything, or does it just trust the blocks and provide them through the JSON-RPC API? What does it have to do to import them?
A: Okay, so it doesn't run all the transactions and it doesn't do the entire thing; it just does some hashing to establish the chain.
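(A minimal sketch of that hashing-only import check, assuming a trusted head hash; the Block fields and the keccak stand-in are illustrative, not any client's actual API.)

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Block:
    number: int
    parent_hash: bytes
    header_rlp: bytes                        # RLP-encoded header

def keccak(data: bytes) -> bytes:
    # Stand-in for keccak-256 (hashlib's sha3_256 is NIST SHA-3, not keccak).
    return hashlib.sha3_256(data).digest()

def import_archive(blocks: list[Block], trusted_head_hash: bytes) -> bool:
    """Verify an imported archive by hash-linking alone, with no EVM execution.

    `blocks` must be ordered oldest-to-newest and end at the trusted head."""
    expected = trusted_head_hash
    for block in reversed(blocks):           # walk backwards from the head
        if keccak(block.header_rlp) != expected:
            return False                     # broken hash link: reject
        expected = block.parent_hash         # previous block must hash to this
    return True
```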
B: In my mind, the bare minimum is just blocks. I mean, that is the thing that we have to provide: all the block bodies. And then, potentially, we might also make available all the receipts. But then you start moving into this realm of, should we also make it possible to download any historical state, and I think that would be where I would draw the line.
B: Providing historical state trie data would not be something I would want to provide, and so if they wanted that information, those are artifacts that they generate themselves.
B: I don't have a strong view; part of me would just say, let's just serve block bodies, because you can generate the receipts from the block bodies, and that to me feels like it's a little bit simpler and less data. Anything that makes this simpler and less data, I think, gets more people on board with serving it, and I think that creating the largest pool of people serving that data is important for the guarantee.
B: I think all clients, or most clients, support an import subcommand that takes the block RLP.
B: Yeah, I would need to look a little bit more and see exactly how it handles block bodies, and how it handles a large amount, like millions of block bodies. I think it's generally okay, though: you give it genesis and all the blocks, and I don't see why it wouldn't just go ahead and build the chain exactly like you provided it. But, you know, the question is: do we want to do RLP? And I think we should look at using SSZ, to be honest.
I: I think if we go the SSZ route, we won't actually end up getting rid of RLP; we'll end up with RLP and SSZ, because you need RLP to be able to do certain types of verification. And so the only compelling reason that I would give for using SSZ would be the large-scale merklization, which allows you to have a single hash that identifies a huge blob of this stuff and lets you get at individual components of it. And I have no idea whether that's compelling or not, right, because there are other ways to accomplish that.
I: But we don't have any SSZ hashes in our protocol for the execution chain, and so we don't have the right hashes to be able to verify SSZ objects: everything would have to be converted into its RLP objects before you could actually verify it. So I don't think there's a big win there at the getting-rid-of-RLP layer; the only win that I think makes it compelling is the features SSZ adds, but we don't actually get rid of RLP.
G: For encoding, we definitely stick with RLP, just because it's used everywhere and difficult to remove. At the same time, however, for the commitment, the part that merklizes or hashes the data, we can have both, because we only end up having one 32-byte object at the very end. And then there's the schema: how do you commit over all the data?
G: There can be many of them, and once we have these well-defined snapshots of historical data, you can just compute each of these different versions of it. The data is the same; as long as there's some software that can reconstruct and recompute the commitment, then it can be used for whatever use case you need that form of commitment for.
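(A toy sketch of that separation, with SHA-256 as a stand-in hash: the stored encoding stays the same while two different commitment schemes are computed over it, one flat and one Merkle-style that also supports per-chunk proofs.)

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def flat_commitment(blob: bytes) -> bytes:
    # Scheme 1: one hash over the whole snapshot.
    return sha256(blob)

def merkle_commitment(blob: bytes, chunk_size: int = 1024) -> bytes:
    # Scheme 2: a simple binary Merkle root over fixed-size chunks,
    # which also allows proofs for individual chunks.
    leaves = [sha256(blob[i:i + chunk_size])
              for i in range(0, len(blob), chunk_size)] or [sha256(b"")]
    while len(leaves) > 1:
        if len(leaves) % 2:                  # duplicate the last leaf if odd
            leaves.append(leaves[-1])
        leaves = [sha256(leaves[i] + leaves[i + 1])
                  for i in range(0, len(leaves), 2)]
    return leaves[0]

snapshot = b"rlp-encoded blocks would go here"
print(flat_commitment(snapshot).hex())       # 32 bytes either way
print(merkle_commitment(snapshot).hex())
```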
C: Yeah, I mean, I think RLP makes sense where it already exists. My question then would be: can we actually think of use cases today where we'd actually need commitments like this to the history? Because what we're saying is, there'll be some year-long archive of the chain, and then we'll merklize it and get some root?
I: Can anybody tie this together into a cohesive thing? I know that there's one thing being talked about, which is making large bundles, you know, the first million blocks, the second million blocks. Is that kind of what we're talking about here: taking a million blocks, bundling them up together as a single giant chunk in some format, having a single hash that identifies all of that? And then what are we serving them from? Are we making a torrent? Is this a new protocol?

B: Yeah, so I don't think that we've settled that; we're hoping that there are many different ways of accessing that data. The way I've just been thinking about it, just in simple terms: to me, how to access the data is a black box, and we're discussing different ways. The Portal Network is one way that I think could be used to access the data, where you just request "I want these blocks," and then you get the blocks.
B: The way that I think about it, just in the simplest terms, is that there would be a list of mirrors, of servers, providing all of the blocks; they've just elected that they would provide them. They post on some smart contract, saying "here's my server address, I'll provide the blocks," and then I can just go down that list and request the blocks from them. The torrent is also another potential option, but I'm thinking about it in the most simplistic terms.
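(A minimal sketch of that mirror-list flow; the mirror addresses, URL scheme, and query parameters here are hypothetical, not a real API.)

```python
import urllib.request

# Hypothetical: addresses that mirrors have posted to some on-chain registry.
MIRRORS = [
    "https://mirror-a.example/history",
    "https://mirror-b.example/history",
]

def fetch_block_range(start: int, count: int) -> bytes:
    """Walk the mirror list and return the first successful response."""
    for base in MIRRORS:
        url = f"{base}/blocks?start={start}&count={count}"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()      # raw block bodies; verify by hash next
        except OSError:
            continue                    # dead mirror: try the next one
    raise RuntimeError("no mirror could serve the requested range")
```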
B: In that sense, I don't know about the bundling of blocks together. This is something that I'm not sure about: is there value in taking some number of blocks and then committing a single hash to all of those blocks? Because it feels like I am going to need to download them anyway and make sure that they form a chain.
B: Right, I totally agree with that. I think I'm more getting at the point where you were saying that there would be, like, a commitment or some sort of hash to authenticate those blocks against. Is that something we would want, in addition to the authentication that already exists via the headers?
J: We can always sync the header chain and know exactly which header was at which height, which block was where, and we can just fill the header chain from the torrent. I would prefer having it accessible by hash.
J: So we know the block hash, and we need the block data for that, and once we have the header chain, we know exactly whether the data that was just sent to us is correct.
J: Yeah, and you can build optimizations on top of that as much as you want, by saying: okay, I'm not sending a single header request, but asking for the range from here to here, please give me all the data, and you can always check it against the header chain.
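(A sketch of that range check, assuming an already-synced header chain; the names and the hash stand-in are illustrative only.)

```python
import hashlib

def keccak(data: bytes) -> bytes:
    # Stand-in hash for illustration; clients actually use keccak-256.
    return hashlib.sha3_256(data).digest()

def verify_range(known_hashes: list[bytes], received_headers: list[bytes]) -> bool:
    """Check a downloaded block range against the local header chain.

    known_hashes[i] is the hash our header chain records for height start+i;
    received_headers[i] is the RLP-encoded header we were just sent."""
    if len(known_hashes) != len(received_headers):
        return False
    return all(keccak(h) == k for h, k in zip(received_headers, known_hashes))
```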
B: I guess the last thing I wanted to say on the encoding topic is: yeah, we have RLP, and yeah, it's very prevalent across the execution layer, and it's going to be more work to use SSZ.
G: For example, when we have a series of block headers, like a long set of block headers, like a million block headers, how do you encode that group?
I: So, Proto, is the suggestion here, maybe just as a straw man: our data format is block headers chunked in groups of 1024, so we take the first 1024 headers and we put them in an SSZ list with a max size of 1024, with the element type being byte lists. And so, right, the data structure for the bundle of block headers is SSZ, and inside of that it's RLP-encoded.
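(A sketch of that straw man; a real implementation would serialize the outer container with an SSZ library, which this stand-in skips.)

```python
CHUNK = 1024  # headers per bundle, per the straw man

def bundle_headers(header_rlps: list[bytes]) -> list[list[bytes]]:
    """Group RLP-encoded headers into SSZ-style List[ByteList, 1024] bundles.

    Each element stays RLP-encoded; only the outer container would be SSZ."""
    return [header_rlps[i:i + CHUNK] for i in range(0, len(header_rlps), CHUNK)]

# e.g. a million headers -> ceil(1_000_000 / 1024) = 977 bundles
```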
C: With the bundle, we're just saying there's a million blocks, and then, yeah, I guess I don't understand.
I: When you want a specific block, it's because there's a transaction in it you want to look at, or there's a receipt in it that you want to look at. That's the individual-block use case: authenticate against that header.
B: I think a lot of the discussions we've had so far have focused on: you're going to just download all the blocks, because you want to have some sort of client with a header chain running. Do you think that's something we should start considering more? If somebody is trying to request a historical block because they want a certain transaction in it, should we have facilities for them to do that, or is that out of scope for four fours and something they should look to the Portal Network for?
G: Can we please separate the authentication of the data, or the commitment to the data, from the encoding? Because if a client asks for one piece of data, we can have a different encoding for the response than for the local storage of the full snapshot, or a snapshot divided in chunks, or whatever it may be.
G: There are two different things here, and for the authentication of data, you can have many different commitment schemes, and you can optimize for the user that requests the data, and that can be different from the optimization for the rotation of the snapshot, or how we collect the full data somewhere.
C: Yeah, it sounds a bit to me like we're mixing the idea of getting one block out of the history with the history itself. For me, so far, four fours is about passing around chunks of a year of history at a time, and the question of how I get one block from three years ago is different from how I get the snapshot from three years ago, and we could have a conversation about where that boundary is.
K: So I think that if the Portal Network were finished right now, then I don't think we'd have a need for this other thing that we're talking about for four fours. The issue is, of course, that the Portal Network is a much harder problem than "here's a torrent with, you know, the last seven years of history." A torrent with seven years of history is something that we could throw together.
I: Yeah, I have a decent answer for that. We did roadmap planning last week and looked at what's coming: we've shifted priority to the history network, over from state, because of four fours and because of the, you know, immediate demand.
I: And that has the ability to be accelerated with additional people getting in on this, but it's probably not going to be two or three months; we're still looking at Q2, Q3.
K: Given those timelines, which are about what I expected to hear, my gut is that for four fours, in order to get it out ASAP, we need something, like anything, literally. We need some way for people to be able to download the history.
K: I'm hesitant to spend a huge amount of time engineering some really complex solution to downloading history that's just going to be replaced by the Portal Network in 12 months. That's my biggest worry: that we are building two things, one of which we need right now, and one of which, I think everybody agrees, someone correct me here, is the best long-term solution, which is the Portal Network, because it's fully decentralized and you can pull individual blocks.
K: You don't need to get all of history at once; it works for archive nodes and it works for light clients; it solves everything at once. And so I think long term we want the Portal Network; short term, we need something so we don't have to wait a year for four fours.
C: Can we have a world in which we have both? Because it kind of seems to me they're almost addressing different issues, right? Like, very separately, four fours is saying: here's the history, how can we pass it around to whoever wants it? And almost a layer on top is the Portal Network, right? Because you still have the question: how do I bootstrap a Portal Network node?
K: Yeah, so I think, once the Portal Network's done, you bootstrap a Portal Network node by just connecting to the Portal gossip network and just starting to download. A light client may say, "I only want block 4,322,000," whereas an archive node may say, "hey, Portal Network, start giving me all the blocks from one and start counting up," and I believe the Portal Network can solve both of those quite well.
K: The caveat is, I do suspect that, no matter how good the network is, a single giant BitTorrent file is going to be faster. If you do in fact want to sync eight years of data, I don't see any solution that's going to beat BitTorrent for speed.
K: That being said, again from an engineering standpoint, it's expensive to build a solution, and if the Portal Network, you know, does well enough, and we have all of history in a BitTorrent up to January 2021 to bootstrap from, I'm just hesitant to sink engineering effort into two solutions that overlap like 80 or 90 percent.
I: So a simple solution here is to go real simple and not invent anything new: we bootstrap a couple of torrents, and we're done; it's quite easy. We have to, you know, come up with some way for everybody to agree that, yes, these are the torrents, and yes, we're happy to hard-code all of these.
I: Whether we're talking about magnet links or whatever the hashes are that commit to the data in the torrents, it's super easy and straightforward and boring, and it does a good job of doing what it's supposed to do. And, in theory, it still allows all of the other stuff that we want, which we talked about with SSZ data structures: being able to index into it and grab specific pieces. It turns out you can do that with torrents too.
I: If, you know, things are split up across a couple of files and things like that, and maybe the files within the torrent are blocks of a thousand things at a time, encoded with SSZ, all of that seems pretty uncontroversial and straightforward to implement. And then clients need to support ingesting this stuff, which means embedding a torrent client and being able to process the data files.
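(A sketch of how a client might locate data under that layout; the magnet URIs and block ranges below are placeholders, not real torrents.)

```python
# Hypothetical, hard-coded history torrents (placeholder magnet URIs):
# each torrent covers a range of block numbers, and the files inside
# bundle 1,000 blocks apiece.
HISTORY_TORRENTS = [
    {"magnet": "magnet:?xt=urn:btih:<placeholder-1>", "first": 0,         "last": 6_999_999},
    {"magnet": "magnet:?xt=urn:btih:<placeholder-2>", "first": 7_000_000, "last": 13_999_999},
]
BLOCKS_PER_FILE = 1_000

def locate(block_number: int) -> tuple[str, int]:
    """Return (magnet, file_index) for the file containing a given block."""
    for t in HISTORY_TORRENTS:
        if t["first"] <= block_number <= t["last"]:
            return t["magnet"], (block_number - t["first"]) // BLOCKS_PER_FILE
    raise ValueError("block not covered by any hard-coded torrent")
```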
I: So the work here, we could bikeshed over the format and stuff, but the work here is for somebody to sit down, just go ahead and write that down, and bang out a proof of concept for it. Right, so I guess, what am I missing? Where's the point of contention in that proposal: a torrent full of big files that are SSZ-encoded, full of, you know, whatever data it is that we're embedding in here, and they're available for download, and we have a bunch of people who seed them?
C: I don't think you're missing anything. I think that's aligned with at least how George and Matt and I have been thinking about it. The conversation we were having, I think, was: how does that align with, yeah, something like the Portal Network? Because then we have this question from just a second ago of...
C: Are we somehow duplicating engineering efforts? Like, if we're on board with moving forward with this torrent idea for the one-to-two-year mark, and then, you know, once the Portal Network comes online, we can reevaluate, that's fine. Do we consider it wasted work if we do build out this, quote, torrent client everywhere, or at least in some clients?
C: So, yeah, Marius basically said you might not like the torrents. Do you want to chime in, Marius, if you can?
J: I don't see us dropping history without some really trustless and decentralized and reliable way to get the data, and I don't see torrents as really reliable, and I also don't see getting blocks out of protocol as a reliable way to really know that this history will always be there.
J: I believe so. Of course, I cannot make the decision myself, but from my point of view, I would not drop the requirements for serving history.
K: It basically allows us to separate out the "hey, I've got a bunch of disks, but I don't really want to run a full Ethereum client; I'm going to just serve, you know, this data" case, and you can serve that off of a hard drive, a spinning disk. For example, you don't need an SSD or an NVMe drive; you can just have some old spinning disk.
K: The one you had from five years ago: put the torrent on it and seed it to the world. And so I think my argument is that, well, I can see the argument that torrents aren't reliable, but I would argue that they are more reliable than the current Ethereum gossip network for storing and serving archive state.
D: What would the incentive be for anyone to, yeah, really altruistically just serve these kinds of blobs?
K: So I think, I'll agree with Mikael here a little bit, in that Geth specifically, I don't believe, has an option for not downloading and keeping old state. Other clients, like Nethermind for example, just have a command-line flag where you can say, "no, I'm not going to store it all," and a lot of people who run Nethermind do not serve the data. And so I think there is an argument to be made that it's not altruism so much as laziness.
K: If you have some need or reason to run an Ethereum client, and for whatever reason you choose Geth, because everybody chooses Geth by default, then unless you do a bunch of work, you're going to serve all the old data. Whereas if we switch over to a torrent, those people who are like, "oh, I want to run an Ethereum node, but I'm not going to go out of my way to be an altruist," will stop serving, and so we will reduce from lazy-plus-altruists to just altruists with the torrent option.
J: Yeah, and this is why I like the idea of having a new networking protocol that achieves this: basically, retire history retrieval from the current eth protocol, implement eth/68 or /70 or whatever, and create a new networking protocol that is only there to serve these old states. And then we can build nodes that just serve these old states; we can build our own torrent network; and we can even have...
I: So being part of that history network means keeping the data and serving it, and so now you're relying not on just anybody who wants to host a torrent and serve it, but only on people who want to host an Ethereum node, with all of the complexity and heaviness that involves, plus the history data. So that seems, on this axis at least, worse than the torrent solution in terms of reliability.
J: And then, for example, the Portal Network could tap into the same protocol and serve the same blocks on this protocol, and we could have...
J: Yeah, and like, I agree about that, but I'm just saying that I'm not comfortable dropping our requirements before we have this decentralized solution. I agree that we can build it, and we can do the BitTorrent thingy or whatever torrent we want to do, but I'm not comfortable merging the pull request to drop the historical states before we have some... our own version of it.
I: From the geth team, who wants to start building out a Portal client in the geth code base? Because that's the solution for that, right? I'm being a little pushy here, I get it, and I know that you're not speaking for the geth team, but if the only way that you're comfortable, you know, accepting four fours and dropping this data out of the client is to have an owned, decentralized, peer-to-peer solution for this...
J: Yeah, right now we're prioritizing work on the merge, so we have our hands full with this, but yeah, I think that Zsolt is interested in doing something like this. I'm not sure; we haven't really discussed four fours in the team yet, because we have been focusing on, in my opinion, more important topics. But yeah.
J: I agree that geth building a Portal client, at least on the history retrieval, on the history network, would be very important.
C: I think we're about to hit the time that we had scheduled, so I think the takeaway here is that we need to probably better understand...
G: I posted this thing in the chat: this EIP tries to solve three different problems, and I'd love those problems to just be solved separately. So first we have import functionality, then we have distribution of data, and then we have the final removal from the defaults, where we don't serve historical state anymore.
K: Yes, I agree, I agree with that. I do think that we do need to solve one and two; I just want to be clear that the EIP itself is only solving three. It's just that one can argue that it will never pass All Core Devs without one and two being solved separately already. Like, we need someone to write two.
B: Right, yep, thanks everybody for coming. I'll post the recording for this on the GitHub issue and then Discord, and, you know, we'll just keep the conversation going. Yeah, we'll try to synthesize what we talked about a little bit; I think we had some good discussion about how to solve one, and so maybe we'll try to write something more concrete up about that. Sounds good.