From YouTube: Ethereum Core Devs Meeting #121 [2021-09-03]
A
Hello everyone, welcome to AllCoreDevs number 121. I have a couple of things to discuss today. Most of them are related to the merge; two big things there. So Mikhail put together a document a week or two ago about the consensus API, how the beacon chain and the execution layer communicate after the merge. We discussed this on a merge call and then ran out of time, so we can kind of continue that discussion here. And then, a bit later in the call: Felix from the Geth team put together a spec for basically what a post-merge sync algorithm could look like. So that's the other big thing we'll need to discuss, and then a bunch of other topics. But yeah, Mikhail.
A
B
Obviously, there will be two counterparties in the client software after the merge, which are the consensus client part and the execution client part, and we need a communication protocol between them in order to communicate blocks and other stuff. And we have something already, which was designed for the Rayonism hackathon project, and it's based on JSON-RPC. But we might want to extend this protocol, add some other stuff and other restrictions to the underlying communication protocol. So there is a doc; I'm dropping it into the chat.
B
We started to discuss this document and stopped not far from the beginning, so I'm going to share my screen to continue the discussion from the point we stopped at. I've also made some adjustments to this document and updated it with the results of the discussion we previously had. So I'm sharing my screen.
A
B
No, no, no.
B
Yep, yeah, I'm sorry, yeah. This is the agenda. Oh okay, okay, yeah! This one is the document, right? Yeah, okay, cool. So I would encourage us not to fall into deep discussions right now, and if any item that we are discussing requires a deeper conversation, let's just mark it as requiring further discussion and continue offline on Discord or make another kind of call, so as not to spend too much time on everything. Yep.
B
So let me turn on the chat and persistence. Okay, yeah. So I was starting from the top; we'll go through the comments a bit. Here is the comment from Jacek that we should consider REST for this kind of API.
B
The other thing is that REST is about resources, which are entities, so I'm not sure this fits our protocol well. But yeah, it's okay; if you want to discuss this, let's discuss it on Discord more.
C
Real quick: the usual counter to that is server-side events, which can facilitate that with RESTful HTTP. But I'm just putting that out there; I don't really want to discuss it.
B
Yeah, yeah, sure. Okay, so on the previous call we decided to replace assemblePayload with a couple of related methods; they are here now. So there is preparePayload, which gives a command to the execution client to start building the payload.
B
It has these parameters here, and the execution client will keep the payload up to date until getPayload is called. The process of producing the payload stops then, and the most up-to-date payload is returned back to the consensus client, which can then take it, embed it into the beacon block, and fire this block into the network. Yeah.
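The two-call flow described above can be sketched as follows. This is a rough illustration, not the actual spec under discussion: the method names (`engine_preparePayload`, `engine_getPayload`) and parameter names (`parentHash`, `timestamp`, `random`, `feeRecipient`) are assumptions based on the conversation.

```python
# Sketch of the preparePayload/getPayload flow: preparePayload starts the
# build, getPayload stops it and returns the most up-to-date payload.
import itertools

_ids = itertools.count(1)

def rpc_request(method, params):
    """Build a JSON-RPC 2.0 request object for the engine API."""
    return {"jsonrpc": "2.0", "id": next(_ids), "method": method, "params": params}

def prepare_payload(parent_hash, timestamp, random, fee_recipient):
    # Tells the execution client to START building a payload on top of
    # parent_hash; it keeps the payload updated until getPayload arrives.
    return rpc_request("engine_preparePayload",
                       [{"parentHash": parent_hash, "timestamp": timestamp,
                         "random": random, "feeRecipient": fee_recipient}])

def get_payload(parent_hash, timestamp, random, fee_recipient):
    # Stops building and returns the payload; it carries the same parameter
    # set so the execution client can check consistency with preparePayload.
    return rpc_request("engine_getPayload",
                       [{"parentHash": parent_hash, "timestamp": timestamp,
                         "random": random, "feeRecipient": fee_recipient}])

req = prepare_payload("0xparent", 1630650000, "0xrand", "0xfee")
```

The consensus client would send the first request at the start of its slot and the second one when it is ready to embed the payload into the beacon block.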
B
There is a note that if preparePayload is called with another set of parameters after the first one, then the process of building should be restarted with this new parameter set. This makes sense, as the consensus client may receive a new block that becomes the head of the chain, and it might want to restart this process because it will build its block on the other one.
B
Now, getPayload has the same set of parameters here. It could be argued that it should not be so, but the reason they are here is, first of all, that it can be used without preparePayload this way, so it can work as assemblePayload did.
B
I don't know if there are any use cases for this property, but the other thing, which I think is more important, is the additional consistency check. A new block may be received by the consensus client, and preparePayload might be sent before the getPayload with the same parameters is processed, so there could be a kind of race between these two messages. This is very much an edge case, but it could potentially happen.
B
So this is why the set of parameters is here as well, and if it does not match what was sent with preparePayload, the payload should be either adjusted, if that's even possible, or a new one created with this set of parameters and returned back to the consensus client. This is to avoid the weird case where the consensus client proposes a block with a payload that does not relate to this block.
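The consistency check just described can be sketched as a tiny decision function. This is an illustrative sketch of the rule, not the actual implementation; the function and parameter names are hypothetical.

```python
# If the getPayload parameters do not match the preparePayload call that
# started the build (the race described above), rebuild with the new set so
# the returned payload matches the block the consensus client proposes.
def serve_get_payload(prepared_params, get_params, build_payload):
    """prepared_params: parameters of the in-flight build;
    get_params: parameters received with getPayload;
    build_payload: function producing a payload for a parameter set."""
    if prepared_params != get_params:
        return build_payload(get_params)   # restart/adjust with new params
    return build_payload(prepared_params)  # matched: return prepared payload
```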
C
D
B
E
The execution engine could at some point have a timeout and say: okay, stop trying to add transactions, cut it off here, send the block, because we've run out of time. If you don't have any kind of sense of how long is acceptable, then presumably the execution client is going to just do whatever it normally does to get a block, which maybe means hitting a remote server, maybe it means just building until the block is full, and these things can take, you know, seconds.
E
If all I've got is an empty block, then I can send that right away, whereas if all you get is the getPayload, then either you default to sending only the empty block, because you have no time to prepare anything, or you decide to spend some amount of time actually acquiring a block, and there needs to be, you know, some limit on that. Presumably you don't want two minutes, for example; that's obviously wrong. Is it 10 seconds?
B
Yeah, as I understood it, you were saying that malicious transactions in the mempool could take a lot of time to execute, and in this case we might want to add this time restriction to preparePayload as well, because preparePayload has much more time in advance, right, and it could include all those transactions without any problem. This is what I was mentioning, Micah.
E
E
C
E
B
Yeah, I do see value in doing this, but if we want to discuss more on that, let's continue on the Discord. What do you think?
B
I'm always happy with this, okay, so let's move on, yeah. And executePayload: it verifies the payload according to the execution environment rule set, which is defined in the EIP. Here is the question from Martin: what if the parent block state is missing, will some error type be defined?
B
This document has a section on consistency checks which answers this question, so once we get there we can discuss it. But the basic idea is that if executePayload sends something that can't be processed by the execution client, because some information is absent, the execution client responds with a corresponding message that something is wrong, and the consensus and execution clients start the recovery process.
B
This is one of the options; or the execution client goes to the network, goes to the wire, and pulls all this data. This is the default.
C
Ah, Danny, oh, I was going to say, and this is getting ahead of ourselves, but I think in a sync protocol it is going to make sense for the beacon chain to be optimistically processing forward without execution validation, and I think it's likely most simple, to handle most of the optionality of the sync protocol underneath, for it to continue to run executePayload and just continue to send the messages to the execution layer. And in that sense I think there might be value in having an enum.
C
B
That's right, this doc has a suggestion on the sync status return instead. So yeah, it's optional, so it also covers that right here, but it depends on the sync entirely. So yeah: the consensus_validated message, which is mapped onto the proof-of-stake consensus-validated enum from the EIP.
B
It's sent to the execution client by the consensus client when the beacon block gets validated with respect to the beacon chain state transition or, as the EIP says, with respect to the consensus rule set. So this is required before the block can be persisted by the execution client, even if executePayload returns that the payload is valid with respect to the execution environment rule set.
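The two-phase rule just described, execution validity first, then consensus validation before persisting, can be sketched as a small state holder. This is a minimal sketch of the described behavior, assuming hypothetical method and field names, not the actual client code.

```python
# Two-phase flow: executePayload checks the payload against the execution
# rule set, but the block is only persisted once consensus_validated arrives.
class ExecutionClient:
    def __init__(self):
        self.pending = {}    # hash -> payload that passed execution checks
        self.persisted = {}  # hash -> payload confirmed by the consensus layer

    def execute_payload(self, block_hash, payload, valid_wrt_execution):
        if not valid_wrt_execution:
            return "INVALID"
        self.pending[block_hash] = payload  # valid, but not yet persisted
        return "VALID"

    def consensus_validated(self, block_hash):
        # Beacon-chain validation is done: the block may now be persisted.
        payload = self.pending.pop(block_hash, None)
        if payload is not None:
            self.persisted[block_hash] = payload
```

Keeping the two messages separate is what allows the beacon-block and payload processing to run in parallel, as noted next.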
C
B
The alternative would be to send executePayload after the beacon block has been imported, which would cause a delay required to process the beacon block. Keeping these two messages separate opens up the ability to parallelize the beacon block and the execution payload processing, which is nice. Any questions here? Any questions so far?
C
Payload
and
just
runs
consensus
validated
with
that
just
trigger
xc,
payload,
plus
consensus
validate
and
return.
It.
B
C
Yeah, yeah: if executePayload has not been called when consensus_validated is called on a block, that'd be a trigger to kind of run all the processing, and we can just note that as a weird edge case to think about.
G
B
C
B
Okay, yeah, checking the chat. Okay, cool. engine_forkchoiceUpdated.
B
There is a PR to the EIP, I'll drop it into the chat, that unifies the two previous events, which were the chain head set and the block finalized, into one. So this document matches, follows the EIP currently, or vice versa. Anyway, here is the suggestion from the previous call and a comment from Micah. I've called it confirmed block hash, which means that this block is confirmed by two thirds of the attesters in the network; they have voted for it.
B
This is for JSON-RPC, for the users of JSON-RPC; actually, here is a bunch of stuff. So yeah, there is a head block hash and a finalized block hash, which must be... and the confirmed block hash. All this information must be updated.
B
All the changes related to this method call must be applied atomically to the block store, in order to avoid weird cases where the head block, even for microseconds, points to one fork while the finalized block hash and the confirmed block hash point to another. So this comes out of this unification. There is one note here, more for consensus client developers: in the EIP this event should stub the finalized block hash before the transition, before...
H
B
Before the first finalized block hash, it should be stubbed with all zeros. So this event will be sending the actual head block hash, but the finalized block hash will be stubbed with all zeros before we actually get the first finalized block hash in the system. But there is no additional work required to do this kind of stub, because after the merge fork we will have the execution payload in the block filled with all zeros, so for the finalized block hash...
B
We
will
have
this
block
cache
already
stopped
serious.
I'm
sorry,
I
I
could
be
a
bit
messy,
but
you
can
read
this
and
yeah.
It
should
be
enough
to
to
understand
what
I've
just
talked
about.
B
Yeah, this was my first try at introducing the confirmed block hash stub. So it just stands, for each block, for the status: invalid and... confirmed, which is...
E
C
Oh, just that consensus_validated means: I checked the proposer signature, I checked the attestations and the other kind of outer consensus components of something I previously had you execute and check on the execution layer, and that you can put it into your block tree. Updating the fork choice is independent of the fact that a block was valid to insert into your block tree, and a block that I insert into your block tree may or may not ever be the head or in the canonical chain.
C
E
I
E
Okay, so first the consensus client will say: hey, here's a block, please execute it. The execution engine executes it and replies back: this is good. The consensus client then does some additional checks and then says: hey, my extra checks are also good. And then at some point later it'll say: hey, this is now the head block; and then eventually: this is the confirmed block; and eventually: this is a finalized block. That's the normal path of a block through the process.
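The "normal path" just described can be written down as an ordered list of statuses. The status names below are illustrative labels for the stages mentioned in the conversation, not spec identifiers.

```python
# The happy-path lifecycle of a block as described above.
BLOCK_LIFECYCLE = [
    "executed_valid",       # execution engine: the payload is good
    "consensus_validated",  # consensus client: extra checks are also good
    "head",                 # later: it becomes the head block
    "confirmed",            # two thirds of attesters voted for it
    "finalized",            # eventually finalized
]

def advance(status):
    """Return the next status on the happy path, or None at the end."""
    i = BLOCK_LIFECYCLE.index(status)
    return BLOCK_LIFECYCLE[i + 1] if i + 1 < len(BLOCK_LIFECYCLE) else None
```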
C
A lot of that can happen in parallel, like checking attestations and things like that, and then the final thing it's going to do is actually compute the beacon state root, which includes the execution stuff, and then pass it back.
F
J
Hey guys, I was just quickly wondering: why is the fork choice stuff communicated to the execution layer in so much detail? I mean, I haven't really looked at this API, you know, ever, and seeing it now, it just feels kind of weird that the execution layer should know all of these details about the fork choice.
L
K
...is really useful, because the execution client has different tricks for storing state that basically optimize for making it really easy to update, but at the cost of making it hard to revert. And if...
K
Okay, hold on, sorry about that, and why I was super quiet. Now, okay, yeah, basically I was...
L
K
...saying that, for the finalized block hash in particular, the issue is that the actual execution clients have a lot of optimizations where they basically trade off increased efficiency of reading and writing to the state as it is now, in exchange for making it harder to go backwards and revert to previous states. And so, if you give the execution clients a finalized hash, so that it knows it's never ever going to have to revert past...
K
That
point,
then
the
execution
client
can
use
that
information
to
like.
Basically
like
dump
all
the
information
like
jump
to
the
journal
and
like
flash
memory
and
do
all
sorts
of
things
that
makes
more
efficient.
K
Finalization
information
is
still
useful,
like
it's.
It's
a
trade-off
space
right
so.
G
About
the
api,
the
problem
is,
if
you
only
have
the
latest
block,
that
is
a
very
unreliable
information
and
it
might
be
even
less
like
confirmed
than
currently
on
proof
of
work,
so
we
want
most
applications
to
follow
a
slightly
less
aggressive
head.
Basically,
that's
why
they
confirmed
this
in
there.
C
So to be clear, yeah, you want the head and you want finality, and you want to update that information atomically; those are really required. And then this notion of confirmed, or safe, is a definition which might help with serving web3 APIs on a head.
B
Here is the proposed list of the new statuses for the block for the JSON-RPC, a new identifier set in JSON-RPC for the block. It could be finalized; it could be safe, which means it's confirmed; or unsafe, which is unconfirmed. So it's extended with finalized and safe, and safe will be an alias to latest. According to this proposal, latest will always point to the confirmed block.
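A sketch of how the proposed block identifiers could be used over JSON-RPC follows. The tag names ("finalized", "safe", "unsafe") come from the proposal as discussed above; the exact identifiers and aliasing of "latest" were still under debate at the time, so treat this as an illustration only.

```python
# Build an eth_getBlockByNumber request using one of the proposed block tags.
def get_block_request(tag, request_id=1):
    assert tag in ("latest", "safe", "unsafe", "finalized"), "unknown block tag"
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "eth_getBlockByNumber",
        "params": [tag, False],  # False: return transaction hashes, not bodies
    }
```

Per the discussion that follows, an average app would query "safe" (aliased to "latest" in this proposal), while MEV or bot operators would explicitly ask for "unsafe".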
B
And this is aligned with what we currently have in the fork choice, because latest always points to a block that could be accepted by the network, in terms of the proof-of-work verification and in terms of all the consensus verifications.
B
So this is the same as in the proof of stake with the confirmed blocks, where two thirds of attesters voted for a block.
J
J
L
J
E
Yeah, yeah: for your average user, latest meaning safe is kind of a very reasonable default behavior if you're using an app or whatever. However, if you're doing something like MEV extraction or bot work or whatever, then you almost certainly want unsafe. But you also know what you're doing, and you recognize that you're taking risks; you intentionally want to build very specifically on the absolute latest block. And that's why we return both, because both have different use cases.
B
E
B
E
M
A
Yep, we can probably do another five or so minutes on this, so let's finish everything in the next five minutes, yeah, just to move on to the Felix document as well after. Okay, cool.
B
So the block processing flow is here, to illustrate: yeah, this couple of sequence diagrams just illustrates how the block will be processed. I should probably add the fork choice stuff here.
B
The forkchoiceUpdated stuff here, for clarity; I'll do that. Now we are going through the transition process, which is a very critical part of this API. And yeah, all the transition stuff, all the stuff that is marked with transition scope, including some parameters of some methods, will be deprecated after the merge and could be removed from the clients in subsequent updates, once the merge has already happened.
B
So we have here a couple of things that will help for the case when we would like to override the terminal total difficulty, or set the terminal proof-of-work block, which overrides the terminal total difficulty. So, these two methods.
C
B
B
So if there is a kind of emergency and either of these parameters is communicated on some public channel, the clients should be restarted with either of them, and they will be communicated down to the execution clients when they are set on the consensus client side. More on the reasoning behind this...
B
...there is an issue here, yep. Also, by the way, I forgot to mention that this terminal total difficulty override will also be used for setting the terminal total difficulty in the normal case. So once the merge fork happens, the terminal total difficulty gets computed by the consensus client and communicated via this method to the execution client, so it will know at which total difficulty it must stop processing the proof-of-work blocks. This is all specified in the EIP.
B
Yeah, I feel like we should stop here, and if we have any time to answer the questions, we could do this.
A
Yeah, and I guess one thing that might be worth discussing on the Discord after is whether we want another merge call next week, to maybe finish going through this, like before the eth2 call, right? Yeah, we don't need to agree to this now, but I think it's just worth seeing; it's definitely something we can do.
C
A
C
A
Right, right, so, okay, let's do that! Let's do a call before the eth2 call next week, for an hour. Cool, yeah, thanks a lot, Mikhail, for sharing this, and yeah, let's keep the conversation on Discord in the next week. And yeah, Felix, do you want to give us a quick rundown of your document around the post-merge thing?
J
Yeah, I can do this. I was actually kind of hoping to be able to share the document on the screen, but for some reason I can't seem to do this in here. I don't know.
A
Okay, I should be able to share it. Give me a sec.
J
Yeah, so I'm really sorry about this, but for some reason it doesn't always work. Anyway, yeah, while you dig it up, I can also just start talking. So how we're probably going to do this is: basically, I can just talk for a couple of minutes about the general idea behind this sync stuff and where we're coming from with this, and then after this we can kind of discuss.
J
So a couple of weeks ago we had our team meeting, and in the team meeting I asked Peter a little bit about his ideas for this thing, because he had been kind of busy thinking about it and trying out some stuff, how it could be implemented and so on, and then yeah.
J
He told me about his ideas, and we made some drawings and basically tried to get a good picture of it, and then Peter basically went on vacation, and now I'm the guy who's carrying the torch forward. I suspect that when he's back he will likely take over and keep working on this. So this document that I released yesterday is basically only really concerned with the sync.
J
This is kind of important, because when I asked some people for review, they immediately jumped in and were discussing the API that is used in this document and whether it matches the real API that is going to be used between the clients and stuff. And it's not about this API. It's really only about a very specific part of the sync, which is exactly the sync that isn't processing non-finalized blocks.
J
So basically the main interest here is the part where the client is trying to sync up finalized blocks. For the clients to be fully in sync with the network, obviously they have to get to the real head of the chain, and at the end of the sync the client will basically just perform the same operation that it would always perform if it's already synced, which is just processing very recent blocks. So it's not about this part, and it's also not really about handling reorgs and things like that during this later normal operation. This is really only about the earlier part, where the client doesn't have the full chain yet and is just trying to basically get to a state where it can start processing blocks. So this is the main importance here, and then basically I wanted to quickly go over the definitions.
J
So basically, what you can see there is that I define three operations, which are basically calls that can be made by the eth2 client to the eth1 client, and you will see these calls all over. It might be a bit confusing, especially for people who are very familiar with eth2, because these calls don't directly match the consensus engine API, and they also work a little bit differently from what you might expect, and it will be changed later.
J
No, it's a block. The terms are actually defined right above this, but it's probably a good idea to go through them quickly. So in this document we have lowercase b for beacon chain blocks and uppercase B for execution layer blocks, and B is always a complete execution layer block. And then we also have H, which is for block headers; the block hashes actually never occur in...
J
So this is why it has this star, and basically the idea is that it provides this block, actually only the header, the execution layer header of this block, to the client in the first step, and that's really it; it doesn't really need to do anything else. And then the idea is that from this weak subjectivity checkpoint block it...
J
This is step number three. Now it actually provides the execution layer block which is embedded in this beacon block to the eth1 client, and now we can go a bit further down and go to the next part. So now, basically...
B
I'm sorry, Felix, I have a question. So the first finalized-block call should be made with the latest finalized block, yeah?
J
In this case, yeah; you had the question in your document, yeah. Basically, it's just an assumption for now, which just makes it easier to explain the procedure. And then, basically, now, since it has provided a finalized block, it just keeps providing the finalized blocks as they happen, so it keeps following the chain and keeps providing the finalized blocks to the eth1 client, and it has to do this while the eth1 client is syncing, which will take, you know, a lot of time.
J
So basically our assumption here is that it actually takes t beacon blocks' worth of time to synchronize, and this can be quite long, and eventually, when the eth1 client is done, it will basically respond to one of these finalized calls with the signal that it is synced to this particular block that was just provided. Once that's the case, we can basically go into the regular processing and start putting the non-finalized blocks through.
J
So basically, after this point, when the eth1 client says that it has synced up to the latest finalized block, it is ready to process non-finalized blocks, and this is basically the end of the sync. So that's kind of it from the eth2 point of view, and now we can go to the eth1 side. Are there any questions at this point?
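The eth2-side procedure Felix describes, keep providing newly finalized blocks until the eth1 client signals it is synced to the block just provided, can be sketched as a small driver loop. The function names are illustrative stand-ins for the document's informal notation, not a final API.

```python
# eth2-side sync driver: feed finalized blocks to the eth1 client until it
# reports being synced to the one just provided; then regular (non-finalized)
# block processing can begin.
def drive_sync(finalized_blocks, eth1_finalized):
    """finalized_blocks: iterable of finalized execution blocks, in order.
    eth1_finalized(block): the 'finalized' call; returns True once the eth1
    client is synced up to this block."""
    for block in finalized_blocks:
        if eth1_finalized(block):
            return block  # sync done: switch to non-finalized processing
    return None  # still syncing when we ran out of finalized blocks
```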
B
Have
one
regarding
this
payloads
execution
after
the
sync
is
done
when
we
got
this
message,
there
are
two
options:
one
is
the
execution,
client
stores
all
the
execution
payloads
and
then
then,
when
the
sync
is
done,
it
just
executes
them
on
top
of
the
pivot
block.
The
other
option
is
that
it
communicates
that
the
sink
is
done
to
the
consensus,
client
and
consensus
client
replace
these
execution
payloads.
In
this
case
the
execution
client
don't
need
to
store
them,
but
yeah
it
should
store
them
right.
J
The way I see it, okay, so I was assuming basically that the execution layer... So my assumption is very simple: basically, the execution layer shouldn't really store anything that is totally unverified, and even the eth2 client in this case cannot really verify these blocks, because it cannot process them, because there is no state to process them on. So I felt like it doesn't really make sense even for the consensus layer to process or look at these blocks. It can always look at them later.
J
You know, when it's kind of ready for it. So these blocks don't need to be stored in the execution layer before it has reached the finalized block, because these blocks might be totally invalid and they can be reorged at any time. So why would it even care about these blocks in the first place? It should really mostly care about blocks that it can actually verify. So this is why I didn't put it...
J
C
...kind of like operating on block headers and just looking at difficulty, and making the trade-off that, okay, when I get to the head, that was probably a reasonable head, because everyone else agreed, and following the beacon chain for the valid execution head is probably making a similar assumption.
J
And so we are assuming here that if the block was finalized by eth2, there is a pretty high chance that it has a valid state transition, because eth2 should not be finalizing invalid state transitions, right?
C
Right, and the head with respect to attestations, beyond finality, is also, you know, there's a degraded amount of security, but it's operating kind of in the same way, yeah.
J
C
To me, I care about the detail, because I think that it simplifies things: the consensus client just continues to provide the data normally, like, here's what's finalized, here's the finalization process, and the execution client, no matter what its sync process is, when it's at the end, just ends up with a state corresponding to what the... yeah.
J
Yeah
we
will
get
to
the
state,
so
basically,
the
way
I
want
to
do
is
basically
I
go
through
a
document.
In
the
end
we
did,
we
discussed
so
basically
yeah,
so
the
the
each
one
perspective
is,
you
know,
kind
of
you
know
like
a
mirror
of
what
we
just
had.
So
basically
what
happens
is
it
gets
the
signal
to
start
to
swing?
J
This
is
the
step
number
one
in
the
diagram
by
you
know
receiving
the
first
call
to
the
final,
and
previously
it
has
also
received
this
checkpoint
header,
it's
the
it's
the
hw,
and
now
the
idea
is
basically
that
very
simply,
it
starts
downloading
the
historical
headers
in
reverse
and
it
does
it
until
it
reaches
the
genesis
block
and
when
it
crosses
the
checkpoint.
J
It
also
has
you
know,
a
validation
step
where
it
actually
checks
that
the
it
also
checks
that
the
the
downloaded
headers
match
this
checkpoint,
and
this
is
just
a
safety
net
to
basically
not
land
on
like
totally
invalid
chain.
Otherwise
we
would
have
to
go
all
the
way
to
all
the
way
back
to
the
genesis
block
to
find
out
that
it's
the
wrong
chain.
So
that's
why
we
have
this.
Like
intermediate
thing,
we
will
obviously
also
verify
the
genesis,
but
if
it
matches
the
weak
subjectivity
checkpoint,
I
think
we're
pretty
safe.
J
It would be kind of weird if that one's wrong. So then, when we're kind of done with the headers, we can actually download the block bodies in the forward direction. This is step number three in the diagram, and by the way, the text for this is below the diagram; so if you're trying to look at the text, the text that describes all this is actually below that. And then you basically go through the block bodies, and here you have two options.
J
You
can
either
basically
perform
the
full
string,
in
which
case
you
simply
process
every
block
body
as
you
download
it
and
incrementally
recreate
the
state,
and
then
the
other
option
is,
of
course,
the
state
synchronization
where,
instead
of
processing
it,
you
just
download
the
blocks
and
while
you're
doing
it,
you're
also
concurrently
downloading
the
application
state,
and
we
expected
you
know,
because
we're
like
in
the
guest
mindset
we
expect
is
probably
going
to
be
done
with
something
like
the
snap
sync,
and
so
the
idea
is
that
you
will
basically
provide
this
and
then
what's
really
important
to
understand
is
also
in
the
diagram.
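The reverse header download with the checkpoint safety net can be modeled as a toy walk over parent links. This is a simplified sketch of the steps described, with illustrative header fields; real clients download headers in batches over the network rather than following in-memory links.

```python
# Steps 2-3 of the eth1-side sync: walk headers in reverse from the latest
# finalized header down to genesis, verifying that the weak-subjectivity
# checkpoint is crossed; bodies are then fetched in forward order.
def sync_headers_reverse(head, get_parent, checkpoint_hash, genesis_hash):
    """head: latest finalized header; get_parent(h): parent header or None;
    fails fast if the checkpoint is never crossed (wrong chain)."""
    chain, h, seen_checkpoint = [], head, False
    while h is not None:
        chain.append(h)
        seen_checkpoint |= (h["hash"] == checkpoint_hash)
        h = get_parent(h)
    if not seen_checkpoint or chain[-1]["hash"] != genesis_hash:
        raise ValueError("header chain does not match checkpoint/genesis")
    chain.reverse()  # forward order: bodies are downloaded in this direction
    return chain
```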
J
While these steps two and three are happening, we are actually getting notifications about newly finalized blocks, and these notifications need to be processed. How they are processed is described above the diagram.
J
It's
less
required
for,
for
example,
the
full
string,
but
it's
it's
really
needed
for
the
snap
string.
So
this
is
why
it
also
it
has
implications
on
the
on
the
on
the
sync
and
then,
if
there's
any
other
finalized
block
provided,
then
there
are
two
options
either.
The
block
is,
you
know,
a
historical
block,
in
which
case
it's
kind
of
you
know
was
provided
for
whatever
reason,
and
in
this
swing
model
we
don't
care
about
it.
J
So
we
just
say
it's
old
and
or
invalid,
and
then,
if
it's
a
future
block,
then
we
restart
the
sync
on
this
future
block,
and
the
idea
for
this
we
will
get
to
it
later
is
for
the
like
restart
handling
of
the
sync
that
basically
like.
If
the
e2
client
was
restarted
and
now
you
know
has
reached
a
different
finalized
block,
then
we
basically
just
restart
the
entire
swing
procedure
on
the
east.
One
side
and
just
you
know
like
try
and
basically
do
redo
the
missing
steps.
So
now
we
can
go
even
further
down.
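The rule for finalized-block notifications arriving during the sync can be captured in one small function. This is a sketch of the described behavior, with hypothetical names; the document's actual notation differs.

```python
# Handling a finalized-block notification during an in-progress sync:
# historical blocks are ignored ("old"), while a future finalized block
# retargets (restarts) the sync on that block.
def on_finalized(current_target_number, new_block_number):
    if new_block_number <= current_target_number:
        return ("ignore", current_target_number)  # old/historical block
    return ("restart_sync", new_block_number)     # retarget the sync
```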
J
Oh wait, wait, wait, one second! So what you can see is that, basically, after all of this is done, two blocks have stayed in this diagram. One is the H_G, which is the genesis block; the state of this is always available. And the other one is this block B_f+t, which is basically the final block of the sync.
J
So
when
this
block
is
reached,
we
have
to
guarantee
that
the
complete
application
state
is
available,
and
this
is
why
it
has
the
green
star
in
the
diagram
to
show
that
this
is
you
know
the
block
with
the
you
know,
final
state
in
the
case
of
the
forcing
we
actually
may
have
more
state,
and
we
will
get
to
the
question
of
state
at
the
very
end,
but
basically
for
now
what
you
can
assume
that,
after
the
sync,
what
is
guaranteed
is
that
this,
like
the
sync
block,
has
the
state
available,
and
this
is
kind
of
it
for
the
eth1
side,
because
after
that
it
will
simply
receive.
J
you know, calls to process non-finalized blocks, and these blocks can be processed on top of the state which is available. There may also be reorgs, but the reorgs and the sync are two different things for now, so it's not really related. So we're done with this as well.
J
Now we can quickly skip over the section which talks about the client restarts. I don't really want to go into it too much, but I think this is going to be very important for the eth1 client authors to consider these things. So basically, here we mostly talk about how to handle the content of the database when there are multiple sync cycles, and how to efficiently reuse the information that was already stored in some previous sync.
J
We have a couple things here. One is the handling of the case when the chain that was previously stored is… when you're syncing a different chain on top of one that was already synced, then you need to erase the old information, and you can reuse the parts by way of this marker system, which is described in the second-to-last paragraph.
J
So it explains that basically, if we have previously synced an entire segment of finalized blocks, we can efficiently skip over this segment and not have to recheck every single one if we already have it. Basically, we can skip a lot of work this way, and I think it will be quite important to implement something like this, especially when we change the sync later or when it, you know, becomes…
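The segment-skipping idea described here can be sketched roughly as follows; the marker representation (inclusive `(start, end)` pairs) is a guess for illustration, not the format from the spec:

```python
# Hypothetical sketch of the marker idea described above: a client records
# contiguous, fully verified segments of finalized blocks, so a later sync
# cycle can jump over them instead of rechecking every header.

def next_unsynced(block_number, segments):
    """Return the first block >= block_number not covered by a stored segment.

    segments is a list of (start, end) pairs, inclusive, each marking a
    range of finalized blocks that was fully downloaded and verified.
    """
    for start, end in sorted(segments):
        if start <= block_number <= end:
            block_number = end + 1  # skip the whole verified segment
    return block_number
```

For example, with segments `[(0, 999), (1000, 4999)]`, a sync restarted at block 0 would resume work at block 5000 without rechecking the earlier blocks.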
J
What we assume here is basically that the clients are supposed to start the sync on the latest finalized block, and as the finalized frontier moves, they also have to retarget their sync to this block. So this state needs to be available in the peer-to-peer network in order to be downloadable.
J
So this is why we recommend here that the clients should keep this state available in their persistent store. And the document argues that, since most eth1 clients are now moving to the model where they really only store one entire copy of the state and then a bunch of additional information to facilitate reorgs in some way, it is best to simply store the state of this particular block, because it's the easiest to handle. And we also describe what to do in order to facilitate the reorg processing.
J
And finally, we get into this part that should probably have way more text, and it's kind of a bit of a controversial topic. Also, no, no, it's not the issues section yet; for now we're still talking about the reorgs. So we have this thing with the manual intervention reorg.
J
So basically, the issue is as follows: in the current ethereum mainnet there is an assumption in the clients, especially in geth, and this is where we are coming from here, that there has got to be a safety net for handling issues that arise in the live network. For example, if there's a consensus failure in the network, and we just had one, so it's a really good example.
J
Then it's kind of good if there is a little bit of a time window where reorgs are still possible, and in geth this time window is defined to be 90,000 blocks long. So at the moment it is basically always possible: geth will always ensure that it has the possibility to perform a 90,000-block reorg. And the reason for this is not so much that during normal operation these reorgs will happen all the time, generally.
J
It is not expected that there will be a 90,000-block reorg, but the specific case where this is really, really important is if your client version, for example, had an issue in its processing, because in this case it will not be able to follow the new chain until you have installed a software update. And because of this, you've got to have a bit of a time window to actually update your client, and when you do so, it needs to be able to
J
actually, you know, reorganize back, even if the wrong chain has also advanced by a significant number of blocks. And this did happen even with the most recent consensus failure: some of the pools were still mining on the chain that had the bug in it, so it's kind of that, basically, you know…
J
What we recommend here is that the execution layer client should maintain backward diffs of the state in some kind of persistent store, so it should be possible for them to reorg below the latest finalized block. Even if it is a rare occurrence, it gives you the safety net to be able to say: if there was a problem, you can kind of reorg out of this problem by applying these reverse diffs to your persistent state,
J
until you reach the common ancestor of the two chains, and from this point onwards you can then process forward to get to the good state. So we feel pretty strongly about this, and we would really like to recommend it, and as we will see just now, it is also probably going to be required to do something like this. So now we get to the issues.
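As a rough illustration of the reverse-diff reorg just described — the data layout (a flat dict for state, and a per-block dict of previous values) is invented for the sketch, not taken from any client:

```python
# Minimal sketch of the recommended reorg procedure: walk the bad chain
# backwards, applying each block's reverse diff, until the common ancestor
# is reached. A previous value of None marks a key the block created.

def apply_reverse_diff(state, diff):
    for key, prev in diff.items():
        if prev is None:
            state.pop(key, None)   # the key was created by this block
        else:
            state[key] = prev      # restore the pre-block value

def reorg(state, old_chain, ancestor_number):
    """Rewind state block by block until the common ancestor is reached."""
    for block in reversed(old_chain):
        if block["number"] <= ancestor_number:
            break
        apply_reverse_diff(state, block["reverse_diff"])
    # from here the client would execute the new branch forward
    return state
```

Executing the new branch forward from the ancestor is then ordinary block processing, so it is left out of the sketch.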
J
So the main problem that we discussed right away is that actually everything that I just said is, you know, totally wrong, because finalization doesn't work in the way that we in the geth team initially understood it. We were not aware that in the eth2 consensus, finalization is something that can take up to two weeks in the worst case. So what this means for us is that our current scheme of persisting the finalized block will actually, like…
J
We have basically been thinking about solutions the last couple of days, how to really do it, and what we find is that we're probably going to have to adapt the sync a little bit to add the notion of the calcified block, which will usually be the finalized block, but it may also be an unfinalized block. And adding this calcified block will have a lot of implications on the sync, because it basically makes the whole thing a lot more complicated. I really invite you to look with us through these issues in the upcoming weeks and figure out how we can solve it in the best way.
J
We will find a solution for this, but for now, basically, we would really like this thing to work in the way that is described in the rest of the document. Unfortunately, because finalization can take so long, it means we have to do some more engineering to really figure it out.
B
Thanks alex. I have a question first: how much space do you think those diffs for these two weeks will take?
J
Diffs — we don't really know about this, so this is generally something that we need to discuss. The problem with the reverse diffs is that… I'm actually not sure. It might be that erigon — maybe someone from the erigon team can comment on how they handle the reorgs. I think they might have something like this already implemented.
M
Hi, it's andrew from erigon. Yeah, we have reverse deltas, so we can really implement reorgs by applying reverse deltas.
J
I don't know off the top of my head — peter would know — what the usual size of the diff of each block is. I think it's manageable; I mean, it's definitely going to take some disk space. I don't really know: what's your window in erigon for these diffs at the moment?
M
Well, it's configurable. We even have the mode, like in the archive node, where we don't prune anything, so we have deltas for the entire history of the mainnet, and it takes roughly one and a half terabytes for the
M
full archive node. With pruning it's configurable; we can configure it for something like 90k blocks as well, and then the total database size will be about half a terabyte, but I don't know off the top of my head how much of that is the deltas.
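For a rough sense of scale, one can divide the numbers just quoted. The block count of roughly 13 million (mainnet around the time of this call) is my assumption, not a figure from the call:

```python
# Back-of-the-envelope from the Erigon numbers above: ~1.5 TB of deltas
# for the whole mainnet history. Assuming on the order of 13 million
# blocks, that gives an average delta size per block and a ballpark for
# a 90,000-block window. Illustrative only.

TOTAL_DELTA_BYTES = 1.5e12
BLOCKS = 13_000_000

per_block = TOTAL_DELTA_BYTES / BLOCKS     # ~115 KB per block on average
window_bytes = per_block * 90_000          # ~10 GB for a 90k-block window

print(round(per_block / 1e3), "KB per block")
print(round(window_bytes / 1e9), "GB for a 90k-block window")
```

Note the average is dragged down by the many small early blocks, so a window of recent, full blocks would be somewhat larger than this estimate.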
J
The
the
changes-
well,
that's,
that's.
That's
pretty
good
information.
It
kind
of
matches
my
expectations
as
well,
so,
okay
yeah,
so
there
you
have
it.
So
I
think
it
is.
It
is
manageable
to
like,
if
you
know,
aragon
actually
has
it
already
implemented
like
this.
Then
we
can
definitely
say
that,
like
this
is
a
this
is
a
manageable
approach
with
the
reverse
diffs.
J
It
does
mean
that
the
reorg's
below
below
this
point,
where
we,
where
we
keep
the
like
main
state,
they
will
take
a
lot
longer
to
apply
because
you
have
to
basically
you
know
like
adapt
the
state
incrementally
for
each
block.
You
can't
really
skip.
I
mean
you
could
store
larger
deltas,
but
then
that
would
take
even
more.
B
So
what
could
probably
be
used
is
the
like
if
the
consensus
client
would
communicate
the
like
finalized,
the
most
recent
finalized
checkpoint,
the
most
recent
finalized
block
and
the
the
most
recent
epoch,
the
most
recent,
the
block
at
the
most
recent
epoch
boundary
each
time
this
boundary
happens.
It
could
be
used
to
like
to
handle
this
kind
of
this
kind
of
non-finality
periods
if
they
are
too
long.
B
So
the
execution
client
may
see
these
two
checkpoints
and
decide
what
is
the
like
distance
between
them
and
I
think
it
makes
sense
for
this
calcified
block
conception
and
to
use
the
blocks
and
the
boundaries
at
some
follow
distance
from
the
head.
So
this
just
basic
thoughts
on
that
yeah.
Also,
we
could
use
the
justification
stuff
justify
checkpoints,
but
I
assume
that
if
we
have
no
finality,
then
we
potentially
don't
have
the
justified
blocks,
but
it
could
be
also
used.
B
So
if,
if
there
is,
if
the
justified
checkpoints
might
is
much
closer
to
the
most
recent
boundary,
it
could
be
also
used
as
a
pivot
block.
J
It doesn't have to be very smart about the threshold, but the main problem is just that it basically needs to not be further than, you know, a hundred blocks from the head. So anything that satisfies that is good enough, and I suspect that we're going to have to calculate this on both sides, so I think it would be easier to just make it a very simple definition. In my definition I just put: it's the finalized block, or it's some block which is 512 blocks away, if the finalized block is older than that. So it basically just puts a bound on that, and either way this…
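That simple definition could be sketched like this; the 512 figure is the one mentioned, while the function name and shape are mine:

```python
# Hypothetical sketch of the "calcified" block selection described above:
# it is the finalized block, unless that block is more than 512 blocks
# behind the head, in which case it is head - 512.

MAX_CALCIFIED_DISTANCE = 512

def calcified_block(head_number, finalized_number):
    return max(finalized_number, head_number - MAX_CALCIFIED_DISTANCE)
```

So with a head at 10,000 and finalization at 9,900 the calcified block is the finalized one, while during long non-finality (say finalization stuck at 2,000) it trails the head at 9,488.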
J
The
change
to
the
calcified
block
will
have
huge
implications
because
it
basically
requires
that
during
the
certain
reorgs
need
to
be
need
to
be
handled.
You
know
like
in
in
in
some
way
there
are
some
cases
where
reorg
is
not
possible
during
the
sync
due
to
constraints
on
the
state,
so
it
it.
We
will
have
to
think
a
lot
about
these
cases
and
also
the
the
in
general,
it's
kind
of
like
a
bit
messy,
because
we're
going
to
end
up
in
a
situation
where,
like
since
the
calcified
block,
may
not
be
final.
J
It
puts
some
like,
for
example,
in
the
case
of
aragon,
it's
like
yeah,
it's
configurable,
but
it
will
no
longer
be
configurable
in
this
in
in
you
know,
for
for
anything
after
the
merge,
because
you
will
have
to
provide
a
certain
number
of
these
tips,
so
you
basically
have
to
restrict
the
use
of
freedom
there,
because
otherwise
their
client
will
not
be
able
to
follow
the
chain
correctly.
Should
the
situation
happen
and
things
like
that,
so
it
I
think
it
has
big
implications
on
the
clients,
the
dislike,
adding
the
calcified
block.
J
C
See
the
value
I
see,
the
like
practical
engineering
need
for
handling
state
in
these
times
of
non-finality
and
having
things
that
do
not
go
to
the
depth
of
finality.
I
do
note
that,
in
the
event
that
you
didn't
have
finality
and
in
the
event,
there's
some
sort
of
attack
scenario
network
partition
that
if
reorg's
beyond
the
calcified
state
are
very
expensive,
then
all
of
a
sudden
that
actually
becomes
like
a
place
to
attack.
C
If
you
can
get
the
chain
to
flip
between
states
that
are
beyond
the
calcified
state,
then
you've
now
like
grinded
most
clients
to
a
halt,
trying
to
do
that
expensive,
reorg
operation
from
disc.
So
there's.
I
know,
there's
like
very
much
practical
engineering
considerations
here,
but
there's
also
probably
security
considerations
that
need
to
be
discussed
in
tandem.
J
Yeah,
I
would.
I
would
also
like
to
note
that
basically,
like
my
first
reaction
was
that
you
know
like
we
should
rather
change
the
e2
to
basically
make
the
finalization
a
bit
more
reliable,
but
I
already
heard
it
from
like
multiple
people
that,
unfortunately,
it's
not
going
to
be
possible
to
change
e3
for
this,
so
we're
gonna
have
to.
I
guess,
find
the.
G
Well,
this
is
a
fundamental
consensus
property
that
you
can't
have
that
well,.
G
As
though
maybe
one
one
possibility
to
what
danny
just
said
would
be
to
make
reorgs
beyond
the
calcified
block
manual,
because
I
mean
I
reckon
when
you
are
in
that
mode,
you
would
probably
still
say
yeah
sure
reorg
said
large
can
happen,
but
there's
a
high
probability
that
it
is
actually
an
attack
if
that
happened.
If
that
does
happen,
so
you
might
actually
want
user
intervention
to
pick
pick
the
pick
the
fork
in
that
case.
C
Felix
and
I
discussed
that,
maybe
when
you
do
trigger
that
type
of
re-org,
the
execution
client
responds
and
says:
that's
really
expensive.
Are
you
sure,
and
then
that
can
either
be
trigger
from
annual
intervention
or
the
beacon
node,
even
trying
to
get
better
information
before
it
triggered
such
an
expensive
reorg?
So
there's,
maybe
there's
a
lot
of
different
like
trade-offs
on
that
spectrum.
G
J
Right
again,
I
mean
it
depends
on
the
on
the
on
the
implementation
of
the
state
and
it
implements
it.
It
depends
on
on
you
know,
like
I
mean
I
again,
since
basically
only
aragon
has
this
exact
system
implemented
right
now,
so
it
was
kind
of
you
know
like
the
way
I
wrote
it
was
kind
of
inspired
by
how
I
think
their
their
stuff
is
working
seems
like
mostly
works,
like
that,
it's
kind
of
that.
Basically,
I
think
they
might
be
able
to
give
some
context.
B
Know
how
long
would
it
be
great
to
get
those
numbers.
J
It's
just
a
it's,
but
again
it's
not
really
gonna
be
a
guarantee
because
it
it
highly
it's
highly
dependent
on
the
actual
on
the
client
implementation,
how
it
is
able
to
do
this
processing.
You
know
like
what's
going
on
in
the
client
at
the
time.
We
cannot
really
say.
J
I
think
it's
definitely
not
going
to
be
on
the
order
of
seconds,
because
reorganizing
many
blocks
in
this
way
basically
just
means
like
a
ton
of
of
writes
to
the
disk
and
yeah
I
mean
you
can
always
cache
some
things
and
optimize
some
things,
and
it
might
be
that
we
eventually
get
to
the
point
where
this
stuff
is
actually
kind
of.
You
know
like
fast,
but
we
can't
really
say
for
sure.
I
would
just
basically
really
like
to
assume
for
now
that
it's
an
expensive
operation,
because
if
it
would
be
so
quick,
we
wouldn't
yeah.
D
Rewinding
and
blocks
is
similar
in
time
as
going
forward
and
blocks.
So
if,
like
a
block,
is
processing
in
100
milliseconds,
that's
probably
your
rough
estimations,
but
there's
no.
K
G
G
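Taking the 100 ms per block figure at face value, the worst case discussed on the call (rewind 90,000 blocks, then re-execute the other branch) lands in the hours range; the arithmetic below is only an illustration of that estimate:

```python
# Rough worst-case timing: rewinding is assumed to cost about the same as
# forward execution (~100 ms per block, the figure given above).

BLOCK_TIME_S = 0.1        # ~100 ms per block
WORST_CASE_BLOCKS = 90_000

rewind_s = WORST_CASE_BLOCKS * BLOCK_TIME_S   # unwind the bad chain
reexec_s = WORST_CASE_BLOCKS * BLOCK_TIME_S   # execute the new chain
print((rewind_s + reexec_s) / 3600, "hours")  # 5.0 hours
```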
B
And
in
the
worst
case
we
will
have
to
like
in
the
worst
case
I
mean
like
we
have
this
nine
ninety
thousand
blocks
and
we'll
have
to
re-execute
them
like
from
the
last
latest
finalized
checkpoint.
So
it
just
it
depends
on
the
time
of
the
execution
but
yeah.
It's
just
a
few
hours
to
my
communication.
So.
J
In
the
chat,
so
basically
we
also-
we
already
have
this
kind
of
optimization
implemented
in
the
get
as
well,
and
it's
definitely
applicable
here
so
like
if
you
need
to
do
like
you
know,
a
basically
really
large
backward
movement
on
the
state,
it's
also
possible
to
minimize
the
number
of
rights,
because
you
can
just
combine
multiple
diffs
into
one
in
into
one
in
the
memory
before
writing.
J
Anything
and
doing
this
usually
saves
quite
a
bit
of
time,
because
there
is
this,
the
state
has
kind
of
high
turnover,
so
you
may
be
able
to
skip
quite
a
few
operations
if
you
just
basically,
instead
of
writing
it
out
every
single
block
backwards,
you
can
basically
skip
over
some
and
hope
that
you
know
the
divs
kind
of
cancel
each
other
out.
It's
usually
the
case.
So
it's
like
something
something
else
to
keep
in
mind.
I
don't
think
we
have
to
discuss
the
details
about
this
too
much.
J
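The batching optimization just described could look something like this sketch (the diff layout is invented): reverse diffs are applied newest first, so when merging several of them in memory, the entry from the older diff must win for each key.

```python
# Sketch of batching reverse diffs before writing: merging a run of diffs
# in memory yields one write per touched key instead of one per block,
# and entries that cancel out never hit the disk at all.

def merge_reverse_diffs(diffs_newest_first):
    """Merge reverse diffs applied newest-to-oldest into one dict."""
    merged = {}
    for diff in diffs_newest_first:
        # a later (i.e. older) diff overwrites: its value is the older
        # one, which is what the state must equal after the full rewind
        merged.update(diff)
    return merged
```

For a key rewritten in every block of the batch, this collapses many disk writes into a single one holding the oldest value.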
C
I
think
that,
as
we
consider
this
design
that
it's
important
to
consider
it
to
so
we
we're
not
writing
like
a
very
ad
hoc
communication
protocol
between
consensus
and
execution
for
this
particular
sync
and
instead
we're
writing
something
that
generically
provides
the
adequate
information
to
support
underlying
sync
methods
so
that
we
don't
like
design
this
too
pigeoned
to
the
particular
thing
that
we're
dealing
with,
and
I
I
have
some
ideas
for
that,
and
I
think
that
generally
what
you've
written
can
be
adapted
to
that.
J
Yeah
so
for
for
now
I
will
keep
this
like
the
operations
that
are
being
used
there.
I
will
try
to
keep
it
a
bit
abstract,
because
I
think
it's
going
to
be
really
easy
for
us
to
later
change
it
to
the
like.
You
know
like
map
these
onto
the
like
real
operations,
and
you
guys
have
a
lot
of
good
ideas.
I
already
check
out.
You
know
like
the
the
api
design
document.
J
It
is,
you
know
like
there's
a
lot
of
information
available
from
from
the
e2
node
that
can
be
used
also
during
the
sync
and
for
sure
we
will
have
to
make
use
of
it
when
we
redesign
it
for
this
calcified
block,
for
example,
we
will
likely
need
you
know
like
some
notion
of
like
what's
the
current
head
of
the
chain,
and
things
like
that,
so
we
will
work.
A
Yeah,
I
think
we're
kind
of
at
time
for
this,
just
because
we
have
a
few
more
things
on
the
agenda
and
only
10
minutes.
I
guess
we
can
continue
discussions
about
this.
Obviously,
in
discord
and
yeah,
I
perhaps
on
the
merge
call
next
week,
I'm
not
sure
if
I'm
kind
of
thinking,
maybe
mikhail's
dock
will
take
the
full
hour,
but
I'm
not
sure
if
maybe
doing
like
half
the
consensus,
api
and
half
this
makes
more
sense.
A
J
C
It would be invaluable for erigon, who I think generally relies on differencing protocols, full sync and these rewinds, to think about how they're going to be doing it in this context and see what overlap and what differences their requirements need.
J
Right,
yeah
yeah,
I
would,
I
would
really
like
you
know
some
more
feedback
from
especially
from
the
eth1
client
author,
so
this
is
kind
of
written
I
mean
like
we
have
written
it
from
the
like
geth
perspective.
We
know
we
can
implement
it
like
this
in
the
geth,
but
you
know
like
how
it's
going
to
be
for
everyone
else.
I
don't
really
know-
and
this
is
specifically
about
this
later
section,
which
is
about
the
reorg
processing
and
the
state
availability
like
this
stuff
really
touches.
J
You
know
on
the
core
aspects
of
the
client
and
we
hope
it's
something
that
can
be
implemented
by
everyone
in
some
way,
but
we
have
to
like
this.
I
think
it's
more
a
matter
of
you
know
like
agreeing
among
the
eth
one
clients
how
we're
gonna
do
this,
and
so
it's
important
yeah
for
you
guys
to
basically
check
it
and
and
think
if
it
makes
sense
for
you
or.
A
Cool
yeah
so
yeah,
let's
definitely
discuss
it
in
two
weeks,
once
yeah
different
client
teams
have
had
time
to
have
a
look
and
you've
you've
made
the
updates
felix,
but
yeah
thanks
a
lot
for
sharing.
This
was
pretty
valuable
and
the
last
kind
of
big
thing
we
had
on
the
agenda,
which
apologies,
we'll
probably
have
to
do
a
bit
quicker,
and
we
can
also
discuss
again
that
a
future
call
is
eip.
35,
37
56,
the
gas
limit
cap,
light
client
you've
put
this
together.
N
Sure
I
can
keep
it
pretty
short
as
well,
so
setting
some
sort
of
in
protocol
limit
for
the
gas
limit
has
been
something
that
people
have
wanted
to
do
for
a
while.
It
was
originally
a
part
of
1559
and
then
removed,
and
then,
in
march
of
this
year
there
was
eip-3382
the
proposed
hard
code,
the
gas
limit,
and
I
think
that
3382
failed.
For
you
know
the
main
reason
it
failed
was
because
it
didn't.
N
To
reduce
the
gas
limit
in
the
case
of
some
sort
of
attack
on
the
network-
and
you
know
building
on
top
of
that
eip
the
next.
The
next
plausible
solution
would
be
to
just
have
a
upper
bound
of
the
gas
limit,
and
that's
what
three
five
three
seven
five
six.
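To sketch what an upper bound adds: today a block's gas limit may drift relative to its parent by less than 1/1024 of the parent's limit, and the cap would be one extra check on top of that rule. The cap value below is a placeholder for illustration, not the number from the EIP:

```python
# Sketch of gas limit validation with an added in-protocol cap.
# Existing rule: the limit may move by less than parent_limit // 1024
# per block, and must stay above the protocol minimum of 5000.
# GAS_LIMIT_CAP is a placeholder value chosen for this example.

GAS_LIMIT_CAP = 30_000_000

def valid_gas_limit(parent_limit, limit):
    if abs(limit - parent_limit) >= parent_limit // 1024:
        return False                  # moved too far in one block
    if limit < 5000:
        return False                  # below the protocol minimum
    return limit <= GAS_LIMIT_CAP     # the new upper-bound check
```

Note the cap still lets the limit be voted *down* freely within the per-block drift, which is exactly the flexibility 3382's hard-coded limit lacked.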
A
Right
and
one
bit
of
context,
I
think
I
would
add,
is
when
we
had
the
discussion
around.
I
forget
the
number
but
the
previous
eip.
They
capped
the
gas
limit.
One
of
the
arguments
against
that
was
kind
of
backwards.
Looking
saying
you
know,
miners
have
historically
always
been
aligned
and-
and
you
know,
like
they've
done
a
good
job,
so
it
doesn't
make
a
lot
of
sense
to
remove
this.
A
This
this
degree
of
freedom
from
them
and-
and
I
think
over
the
past
couple
months-
we've
seen
like
you
know
there-
can
be
external
incentives
like
tokens
of
what
mods
that
that
pop
up
to
game
this,
especially
as
block
space
on
ethereum,
becomes
more
and
more
valuable,
so
yeah.
I
I
think
the
the
kind
of
reasoning
that
we
had
around
like
well
miners
have
always
been
good
in
the
past,
might
not
hold
forever
like
looking
forward.
A
Basically,
if
there's
more
and
more
incentives
for
people
to
to
try
and
influence
that
process,.
A
Yeah,
I
guess
people's
general
thoughts
are
on
this
feel
free.
Oh
a
couple:
hands
up
alex
blasov.
I
think
you
were
first.
O
Yeah
well,
my
question
is
somewhere
like
for
consistency
of
this
eep,
which
was
proposed
in
a
very
short
form
without
any
any
estimates
on
what
and
the
number
of
state
grows.
What
is
actually,
there
is
a
factor
which
potentially
affects
the
security
of
the
network.
O
Most
like
I
couldn't
find
any
different
questions
in
any
way
and
in
anyone's
work,
any
blog,
post
or
whatsoever
like
is
it
indeed
that
latency
of
cds
caccess
is
the
stopping
is
like
is
the
point
which,
which
is
like
is
the
most
vulnerable
point
in
processing
the
new
block
like
what
is
a
state
growth
rate?
What
is
what
can
be
called
acceptable
state
growth
rate
and
then
like?
O
What's
the
state
growth
rate
per
clients,
because,
as
I
was
quite
surprised
to
hear
in
emerge
call
well,
I
mean
I
cannot
make
a
good
contribution
there,
but
still
very
interesting
for
me
that
it's
now
was
kind
of
implied
that
the
clients
would
behave
in
some
way
regarding
how
they
store
the
data
and
like
it
means
that
it's
break
it's
like
in
the
future.
O
It was very hard for me to react to this in any form, so I would just ask to extend it. It would also obviously affect the number for the current limit, but I'm just curious, and for more consistency — maybe it's just not in the eip yet, but if there is already some analysis, it would be great to see it.
A
Right, thanks. Yeah, I'll just get to the other comments. lightclient, do you want to…
A
Cool
and
yeah
just
because
we're
almost
that
time,
there's
three
more
comments.
I
think
we'll
take
those
and
then
we'll
wrap
up.
I
think
it
was
enzgar,
andrew
and
marius,
so
asgard
you
want
to
go
first.
I
Sure
so
I
only
like
a
specifically
brief
question,
as
you're
saying
right,
like
that,
the
the
the
motivation
here
would
be
to
just
make
sure
xbox
base
becomes
more
valuable,
but,
like
miners,
basically
don't
succumb
to
the
temptation
at
some
point
to
like
go
like
to
to
abuse
the
control
there.
But
the
the
situation
is
just
that
right
now,
we
plan
on
the
next
hard
fork
with
any
features
to
be
the
merge
at
which
point
that
won't
be
minus
anymore.
I
So
I'm
just
wondering
is
this
still
a
concern
like
for
proof
of
stake
and
if,
if
not,
if
this
is
really
mostly
about
minus,
do
we
plan
on
in
case
we
end
up
with
like
a
december
ice
age
fork,
and
they,
I
don't
know
january
february,
merge
or
something
would
we
consider
this
eip
to
basically
be
included
in
the
ice
age
fork,
because
otherwise,
it
seems
to
me
to
not
like
like
this
would
be
the.
C
Only
circumstances,
I
would
still
say
that
this
mechanism
is
right
for
abuse,
for
any
set
of
actors
that
can
control
it
and
I'm
not
I'm
not
claiming
one
way
or
the
other
on
on
this.
I
don't
really
want
to
get
in
there,
but
it
is
still
contextually
and
if
it's
an
issue
with
minors,
it's
an
issue
with
stakers
and
if
there
are
mechanisms
that
can
be
designed
to
incentivize
the
miners
to
do
certain
things
that
same
exact
mechanism
can
be
used
on
stickers.
N
Yeah
and
regarding
the
including
an
ice
age
fork
the
way
it's
written.
It
could
also
be
included
as
a
soft
fork
before
the
ice
age.
M
Cool
andrew
right,
so
I
think
the
weak
consensus
in
the
aragon
team
is
that
we
are
against
this.
This
change,
but
we
are
not
going
to
die
on
this
hill.
Personally,
I
think
it's
bad
for
two
reasons:
first
it
if
it
requires
a
hot
fork,
then
it
will
distract
us
from
the
merge
and
second
is
that
currently
the
fees
are
very
high
on
ethereum.
A
Thanks
for
sharing
and
marius,
I
think
you
had
a
comment
also.
F
Yeah,
I
think
that's
I
I
would
just
like
to
react
to
that.
I
think
that's
a
bad
argument
that
we
would
force
current
clients
to
change
the
architecture
by
increasing
the
gas
limits.
I
think
all
current
clients
are
looking
into
increasing
into
changing
the
architecture
in
in
similar
ways
as
eragon
does
so.
The
people
are
already
looking
at
it.
Putting
pressure
on
the
teams
is
just
not
going
to
increase
the
the
speed
in
which
this
is
going
to
be
implemented.
F
The
other
the
other
small
comment
I
have
that
it's
currently,
in
my
opinion,
it's
not
about
state
growth
with
the
current
gas
limit,
it's
about
dos,
I'm
not
sure.
If
all
of
you
are
aware
away
but
aware,
but
like
there
are
some
dos
vectors,
we
found
some
dos
vectors
recently
and
it's
pretty
hard
to
measure
this
and
so
yeah.
I
don't
think
this.
This.
This
parameter
is
extremely
dangerous
and
it
should
not
be
in
the
hands
of
people
that
are
not
familiar
with.
A
Yeah
thanks
for
sharing
yeah,
just
because
we're
already
past
time,
you
know
we
can
obviously
continue
this
conversation
on
discord
and
bring
it
up
on
a
future
call.
I
think
we
have
kind
of
some.
You
know
definitely
areas
to
look
at.
I
think
puja.
You
had
put
a
couple
eips
on
the
agenda:
eip
24,
23,
64
and
24
64,
which
are
basically
the
eat
64
and
865
protocol
eeps,
as
I
understand
that
the
issue
is
like
both
of
those
are
shipped,
but
the
eips
are
still
like
in
draft
right.
L
Right
so
the
main
issue
here
is
eip2481,
which
is
for
866..
That
is
in
the
last
call.
Actually,
the
last
call
duration
has
also
passed,
and
we
would
want
to
move
that
to
the
final
status,
but
the
problem
is
that
proposal
requires
865,
which
is
2464
and
865
requires
e64,
which
is
two
three
six
four,
and
both
these
proposals
are
still
in
draft
status.
L
So
we
would
want
that
these
two
proposals
should
move
to
the
final
before
we
could
make
move
eip
2481
to
the
final
status,
I'm
happy
to
make
the
request
to
request
a
status
change.
It's
just
that.
We
wanted
to
make
sure
that
it
is
in
knowledge
of
get
team.
If
anyone
from
get
team
wants
to
volunteer
and
do
that
fair
enough,
and
if
not,
then
we
can
do
that
and
we
would
need
just
author's
approval.
J
So
thanks
for
the
initiative,
what
I
can
say
is
that
I
think
the
last
time
we
tried
to
do
something
like
this.
There
was
like
a
huge
amount
of
backlash
for
some
reason,
because
then
people
came
on
and
you
know
like
wanted
to
actually
see
some
justification
for
these
eeps,
even
though
they
are
like
four
years
old
or
something
I
don't
really
know
like.
This
is
definitely
something
that
I
would
like
to
avoid.
So
I
feel
like
these
eeps
they
have
been.
You
know
we
don't
even
use
864
anymore,
it's
already
like
past.
J
You
know
it's
it's
it's.
Basically,
it's
already
happened
like
it.
We
there
is
really
no
reason
not
to
move
it
to
the
final,
because
it's
not
even
supported
anymore,
and
I
mean
the
mechanism
in
it
is
obviously
still
supported
because
it's
carried
over
into
the
newer
protocol
versions,
but
the
protocol
version
itself
has
already
advanced
like
beyond.
It's
like
already
like
after
it's
end
of
life
now,
so
I
feel
like
it's
like
from
this
point
of
view.
A
L
J
L
We have some eip editors on the call, if they have a view. I just pasted a link to the earlier pull request which we created for eip-2481, where we received some comments from eip editors mentioning that these two proposals should be moved. So if we can directly move them to final, happy to do that.
E
I can answer that quickly. I'm not a fan of skipping straight to final, because it encourages people to drag their feet: if you drag your feet long enough, eventually you can just avoid the bureaucracy. And as much as I hate bureaucracy, I don't want to create perverse incentives for people to just not go through the process, knowing that if they wait long enough, they can eventually avoid it.
E
But
let's
talk
about
this
in
discord
only
because
I
suspect
90
of
people
remain
in
this
call,
probably
don't
care.
A
Yeah,
okay,
cool
and
okay,
so
last
thing,
and
that
was
it
in
terms
of
content,
but
next
friday
same
time
as
all
core
devs
1400
utc
we're
gonna
have
another
call
to
discuss
wallet,
support
and
infrastructure
support
for
1559
yeah.
So
if
you
are
kind
of
an
application
or
wallet
or
just
generally
interested
in
kind
of
broad
adoption
for
eip-1559,
you
can
join
that
we'll
post
the
link
in
all
core
dev
and
there's
an
issue
on
the
ethereum
pm
repo
for
it
and
yeah.
A
That's
pretty
much
all
we
have
thanks
everybody.
I
appreciate
you
staying
real.
H
Last
thing:
if
you
are
an
application
developer,
please
update
your
web3.js
version
to
the
latest.
It
looks
like
it
may
be,
causing
some
issues
with
metamask.
It's
supplying
some
different
priority
fees,
which
are
incorrect,
so
just
make
sure
to
update
to
the
latest
version.
Thanks.
A
Right
yeah,
if
it
is
a
pre
free,
1559,
3js
version,
it
doesn't
return
a
1559
transaction.
So
that
means
you
get
the
gas
price
to
set
to
both
the
max
fee
and
max
priority
fee
and
that
basically
causes
overpayment
for
for
some
users,
thanks
for
reminding
trent.
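The overpayment can be illustrated with the 1559 fee rule; the numbers below are made up for illustration:

```python
# Why copying a legacy gas price into both 1559 fields overpays: the
# whole margin above the base fee is then treated as a tip to the miner,
# instead of being an unused cap. All values in gwei.

def effective_gas_price(base_fee, max_fee, max_priority_fee):
    # EIP-1559: pay base fee plus the tip, where the tip is capped both
    # by the priority fee and by what remains under the max fee
    tip = min(max_priority_fee, max_fee - base_fee)
    return base_fee + tip

base_fee = 100

# correct 1559 request: a modest tip, the rest of max_fee is just headroom
ok = effective_gas_price(base_fee, max_fee=130, max_priority_fee=2)
# buggy legacy fill-in: the quoted gas price copied into both fields
bad = effective_gas_price(base_fee, max_fee=130, max_priority_fee=130)
print(ok, bad)  # 102 130
```

In the buggy case the sender tips the full 30 gwei above the base fee rather than the 2 gwei they intended.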
A
Cool, well, yeah, thanks a lot to everybody. See you all in two weeks, thanks!