From YouTube: Ethereum Core Devs Meeting #125 [10-29-2021]
A
Welcome to AllCoreDevs 125. A couple of things on the agenda today: merge updates, updates on Arrow Glacier, and then Ansgar has an EIP which would modify the EIP-1559 mechanism to better account for missed slots after the merge. And then finally, Dankrad and Guillaume are here to chat about verkle tries and the general stateless roadmap. We could kind of get through the first three things and then give Dankrad and Guillaume the rest of the call to go over their stuff, yeah.
B
Until now, nothing major came up. I've also started setting up a server for merge-fuzz, and I'm fuzzing Geth right now, but I'll add some other execution layer clients to it. For those who don't know, merge-fuzz is a differential fuzzer that basically just calls the Engine API of two different clients and sees if they do exactly the same thing. Yeah, so that's our update.
B
So far we found something in the synchronization code, in the sync code, that was not final yet; nothing really major. Awesome.
C
Yeah, we continue cleaning up our merge code, and we set up all consensus clients in our infrastructure with Nethermind as the execution engine. Of course we are in contact with the Nimbus team, because something is not working between Nimbus and Nethermind, yeah. It looks like we are working after Marius's test, and that is all, I think, from the Nethermind side.
A
Okay, I thought it was someone in the Discord. I don't think they had their real name, but they were starting to implement the merge and asking a bunch of questions. Yeah.
F
Oh, Danny, you were gonna say... I can, yeah, I can give an...
G
...update. Primarily Mikhail, myself and some others are quickly closing in on the final updates to the consensus specs, execution layer specs and the Engine API that came out of Amphora.
G
I might do a pre-release later today and point to a commit on each that kind of gives you pretty much what's happening, but we likely will finish more on Monday; there's a little bit more work to patch up between now and then, but we're very close. And then that would be breaking with respect to Pithos, and we would start kind of a new wave of testnet targets in November.
G
Got it, and just for context, if you don't have the context: it's largely similarly structured. There are probably a few edge cases that are patched up, and generally simplifications and reductions in the Engine API. So it's not throwing a ton out or anything like that.
D
Got it. Is anyone from Besu on the call?
H
Awesome, nice. Yeah, we don't really have any updates right now. We're cruising along; we're merging in all of the code from the interop exercise. That's still in progress, so nothing new or major to report.
A
Next up is Arrow Glacier. On the last call we kind of decided on a block number and a delay for the difficulty bomb. I guess I'm curious to see how far along our clients are in implementing this, and how realistic it is to have a release with this from client teams in the next week, just so we could announce it about a month in advance. Yeah, I guess, anyone?
H
Same story here: we haven't implemented it, but we have a quarterly release scheduled in the next week, so we'll likely have it included there. Awesome.
I
Yeah, go ahead. Yeah, I just want to... As far as I know, there are new difficulty tests in the tests repo, so when you implement it, you can check out the new tests as well.
A
And
cool,
so
I
guess
for
people
listening
and
who
are
looking
to
like
plan
their
upgrade.
The
fork
block
is
gonna
happen.
Oh
the
fork
is
gonna
happen
at
block
13
million
seven:
seven:
three:
zero:
zero!
Zero!
That's
expected
to
hit
around
december
eighth,
so
a
bit
more
than
a
month
from
now
and
one
thing
that
would
also
be
useful-
I
I
guess
only
get,
has
it
implemented
now,
but
if
somebody
can
just
share
the
four
cache
like
the
2124.
I
The fork id? Yes, yeah, it's in my PR. I can paste it in the chat.
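For reference, the EIP-2124 fork identifier mentioned here is a CRC32 checksum over the genesis hash followed by the block numbers of past forks, each as an 8-byte big-endian integer. A minimal sketch; the genesis value and fork list below are placeholders, not mainnet's:

```python
import struct
import zlib

def fork_hash(genesis_hash: bytes, fork_blocks: list[int]) -> str:
    # EIP-2124: CRC32 over genesis hash, then each past fork block number
    # encoded as a big-endian uint64, folded into the running checksum.
    checksum = zlib.crc32(genesis_hash)
    for block in sorted(fork_blocks):
        checksum = zlib.crc32(struct.pack(">Q", block), checksum)
    return checksum.to_bytes(4, "big").hex()

genesis = b"\x00" * 32  # placeholder, not the mainnet genesis hash
before = fork_hash(genesis, [1_150_000, 1_920_000])
# Scheduling a new fork (e.g. Arrow Glacier at 13,773,000) changes the hash:
after = fork_hash(genesis, [1_150_000, 1_920_000, 13_773_000])
```

This is why clients need to ship the new fork in their fork-id lists before the fork block: peers with mismatched fork hashes drop each other.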
A
Okay,
if
not,
then
ensgar
has
been
working
on
an
eip
which
would
basically
modify
eip1559's
mechanism
around
the
merge
so
that
we
can
better
account
for
missed
slots
in
how
we
update
the
base
fee.
I
think
he
literally
finished
a
draft
yesterday.
Do
you
want
to
take
a
couple
minutes
and
start
to
kind
of
walk
us
through
how
this
changes
things
and
why
this
might
be
important
to
to
do
for
the
merge.
J
Should I share a screen for that, or just talk about it?
J
Yes, okay, awesome. So basically the situation is that with 1559 right now, the base fee of course looks at the gas used in the parent block to determine whether the base fee should go up or go down. And with the merge we'll have these usually regular slots every 12 seconds. But then, if there's a missed slot, that basically means there's a 24-second window, and transactions just continue to accumulate, right.
J
They
could
basically
always
expect
a
24
seconds,
a
block
basically
to
have
on
average
twice
as
many
transactions,
so
missed
slots
would
usually
result
in
little
base
fee
spikes.
And
so
the
question
was
just
a
is
this
a
big
problem?
Was
it
just
a
small
annoyance
and
then
b?
Is
there
something
small
we
could
do
to
to
to
mitigate
that?
And
so
basically
this
is
just
one
proposal.
So
the
question
is
just
basically
the
actions
we
could
take.
You
would
be
do
this.
J
Do
nothing
at
all
do
something
else
or
just
wait:
wait
wait
for
shanghai!
Basically,
yes,
so
so
the
the
approach
here
is
relatively
simple.
J
It's
just
because,
like
the
kind
of
the
core
problem
here
is
that
we
don't
really
account
for
block
times,
it
adds
like
a
a
simple
block
based
rule
to
the
to
the
calculation,
important
to
note
that
under
proof
of
stake,
there's
no
way
for
for
block
purposes
to
manipulate
the
block
time,
because
it's
like
it's
always
enforced
on
the
beacon
chain
side
that
the
like
every
slug
only
has
one
valid
timestamp
that
it
could
use
so
there's
no
there's
no
wiggle
room
or
anything.
J
So
it's
really
like
a
very
simple
way
similar.
It's
basically-
and
this
has
the
same
properties
as
the
slot
number,
but
it's
easily
accessible
from
the
execution
side
and
so
for
yeah.
First,
why
why?
Why
could
it
be
kind
of
like
important
to
do
that
with
the
merge?
So
it's
for
one.
Of
course
it's
a
little
bit
annoying
with
these
with
the
space
free
spikes.
I
think
that's
the
least
of
the
problems.
The
second
thing
that
is
a
little
bit
more
important.
J
I
think
is
that
basically,
every
lost
slot
is
like
a
permanent
lost
throughput
for
the
chain,
so
if
we,
if
basically
the
slot
dismissed,
that
means
that
we
just
have
50
million
or
whatever
guys
we
have
in
the
blog
and
just
less
of
overall
throughput
and
one.
I
think
I
would
argue
that
this
is
just
not
desirable,
because
when
we
reset
these
throughput
targets
to
reach
them
or
to
have
to
stay
below
them,
but
also,
more
importantly,
this
kind
of
like
gives
a
more
clear
incentive
for
a
denial
service
attack
against
proposals.
J
So
just
because
we
don't
have
yet
like
these
secret
selections.
There
are
some
concerns
that
potentially,
what
was
this
could
be
anomalized
and
targeted
in
our
service
attacks
and
in
the
under
the
current
situations.
Every
bulk
deposit
that
you
can
basically
stop
from
from
producing
a
block
means.
The
throughput
of
the
chain
goes
down
by
that
pokemon,
which
kind
of
increases
the
incentive,
and
if
we
could
mitigate
that,
of
course,
that
would
kind
of
like
make.
J
You
know
our
service
attacks
less
useful
for
attackers,
which
would
be
really
nice
and
then
the
last
concern
is
whatever
we
I'm
describing
here
and
they
like
degradation
during
consensus
issues.
J
So
I
mean,
of
course,
hopefully
this
will
never
become
relevant,
but
if
we
ever
have
situations
where,
like
a
large
number
of
validity,
goes
offline
at
the
same
time
and
approved
work
right
now,
this
is
really
quickly
self-healing,
where
basically
you
just,
they
have
the
difficulty
adjustment,
and
so,
while
the
block
times
go
up
for
a
little
while
they'll
they
come
back
down
quite
soon,
and
so
the
throughput
is
only
like
impacted
for
a
very
short
time,
and
if
we
stake,
we
still
have
a
self-healing
mechanism,
but
it
could
take
much
longer
right,
like
especially
if
it's
less
than
a
third,
so
the
same
change
still
finalizes.
J
It
could
take
literal
months,
even
if
we
like
below
the
finalization
threshold,
and
we
have
difficult
and
inactivity
leaks,
it
could
still
like
be
weak
so
basically
like
and
of
course
we
could
then
start
to
manually
intervene
and
just
increase
the
gas
limit.
But
again
also
this
manual
attack
intervention
would
take.
Quite
some
time,
so
so
with
much
more
permanent
impacts
to
the
throughput
of
the
chain,
and
that
is
physically
during
times
where
the
stress
on
the
chain
is
already
at
the
highest
because,
like
we,
we
have
these
consensus
right.
J
So
so,
not
only
do
we
have
like
a
period
where
there
will
be
more
more
activity,
because
people
will
want
to
react
to
the
consensus
issue,
but
it's
also
reduced
throughput,
which
is
just
like
not
ideal
for
the
stability
of
the
chain,
and
so
obviously
I
think
it
would
be
important
to
do
this
at
the
point
of
the
merge
already.
So
we
don't
have
a
period
of
proof
of
stake
without
without
this
adjustment.
J
The update rule, if you look at it, really just changes like five lines of code or something. It adds these two constants: a block time target, and basically a maximum of what we want to allow. This just means that we allow up to 95% of a block to be used as the target. So, say, right now with the elasticity of two...
J
Basically,
this
would
mean
that,
like
we
basically
allow
the
the
the
glass
tiger
to
go
up
to
1.9
times
the
block,
and
then
the
only
changes
in
here
is
that
we
have
this
extra
sorry.
It's
extra
line
that
that
does
this
adjusted
guest
target
calculation
and
then
uses
it
down
here,
but
it's
yeah.
So
this
is
basically
like
four
five
lines
of
change,
so
it's
really
minimal,
so
it
shouldn't
impact
the
much
work
too
much
it.
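A minimal sketch of the kind of time-adjusted gas target being described, assuming illustrative constant names and values (a 12-second block time target, a 95% cap) rather than the draft EIP's exact code:

```python
BLOCK_TIME_TARGET = 12               # seconds per slot after the merge
MAX_GAS_TARGET_PERCENT = 95          # adjusted target never exceeds 95% of limit
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8  # as in EIP-1559
ELASTICITY_MULTIPLIER = 2            # as in EIP-1559

def adjusted_gas_target(gas_limit: int, parent_block_time: int) -> int:
    # Scale the usual half-full target by how long the parent block took,
    # capped so the target can never exceed 95% of the gas limit.
    gas_target = gas_limit // ELASTICITY_MULTIPLIER
    adjusted = gas_target * parent_block_time // BLOCK_TIME_TARGET
    return min(adjusted, gas_limit * MAX_GAS_TARGET_PERCENT // 100)

def next_base_fee(parent_base_fee: int, gas_used: int,
                  gas_limit: int, parent_block_time: int) -> int:
    # Standard EIP-1559 step, but against the time-adjusted target.
    target = adjusted_gas_target(gas_limit, parent_block_time)
    if gas_used == target:
        return parent_base_fee
    delta = (parent_base_fee * abs(gas_used - target)
             // target // BASE_FEE_MAX_CHANGE_DENOMINATOR)
    if gas_used > target:
        return parent_base_fee + max(delta, 1)
    return parent_base_fee - delta
```

With a 30M gas limit, one missed slot (a 24-second parent block) roughly doubles the target, so a twice-as-full block no longer spikes the base fee.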
J
That would be a significant change, right. And that's basically it. The one important thing is maybe to briefly talk about limitations: with this minimal change, all you can do is really account for more or less one missed slot. As soon as you have two or three missed slots in a row, you basically just can't, because we only have this local adjustment.
J
The blue line would be what we have today, and then what the first option would look like is the brown line here. So basically, as you can see, we have a much more gradual decline initially, which is exactly what we need for DoS protection, right. There's almost no decline initially, and then even in situations where 20, 30, 40 percent of block proposers are offline, we still have much less degradation than we would usually have.
J
But
of
course
it's
it's
not
perfect,
and
so
like
the
last
thing
is
where
maybe
I
would
want
some
input.
Is
there
some
extension?
So
you
could
you
could
make
this
much
better,
basically
much
much
less
degradation
until
until
you
go
down
way
way
further,
but
those
would
require
slightly
more
involved
changes.
J
So
this
would
add
one
header
element
or,
alternatively,
could
also
do
this
by
accessing,
not
only
the
parent
and
the
grandparent
of
the
book,
but
like
the
last
10
15,
20,
30
and
ancestors,
but
that
also
seems
like
a
more
more
kind
of
like
substantive
change
and
this
year
would
be
if
we
were
to
increase
the
elasticity
of
a
block
from
two
to
two
point:
five
or
three
or
something
that
would
also
help
quite
a
bit.
J
I
would
argue
this
might
actually
be
feasible
just
because
under
proof
of
stake
we
have
these
12
second
block
times
as
a
minimum,
and
so
it's
already
much
reduced
stress
right
now.
We
could
have
block
times
of
three
seconds
if
we,
basically,
if
the
randomness
of
proof
of
work
turns
out
against
us,
and
so
the
strain
is
already
much
reduced
and
approved
work.
So
I
think
there
might
be
a
case
to
do
this
as
well.
But
again
the
objective
was
to
keep
the
change
as
minimum
as
possible.
So
these
are
optional
right.
J
I
think
that's
basically
all,
and
so
then
just
basically
for
context.
It's
it's
really
just
because
as
then,
you
were
saying
we
kind
of
the
the
kind
of
the
specs
for
the
execution
side
for
both
execution
side
and
competitive
side
kind
of
are
supposed
to
be
more
this
final
very
soon.
So
if
we
really
want
to
consider
this
for
the
merge
which
that
would
be
the
call
we'd
have
to
make
very
soon
yeah
and
so
feedback
would
be
appreciated.
I
Okay
yeah.
I
had
a
bit
of
a
hard
time
understanding
what
you
meant
by
denial
of
service
yeah,
how
how
it
improves
the
situation
against
denial
of
service
and,
as
I
understand
it-
and
please
correct
me-
if
I'm
wrong
like
what,
if
there
is
some
transactions
that,
for
whatever
reason,
causes
a
large
majority
of
the
nodes
to
process
the
blocks
very
slowly.
So
the
block
time
increases,
I
think
it
I
mean
it
can
be
seen
as
if,
like
50
of
the
seeders
go
up
line.
I
Now,
if
I
understood
you
correctly,
the
the
what
would
happen
is
that
the
base
b
would
go
down
and
the
actual
transactions
included
in
the
blocks
would
go
up
to
yeah.
Basically,
the
top
miners
go
down
and
block
times
double.
Then
you
would
have
double
the
amount
of
transactions
in
the
block,
so
it
feels
like
that
would
make
the
denial
service
attack
worse
now
did
I
misunderstand
something.
J
I
think
that's
right
exactly
that's
it
so,
basically
that
they're
truly
service
problems,
one
is
as
you're
saying,
like
transactions
that
take
a
long
time
to
execute,
but
the
other
one
is
to
target
specific
block
proposals.
So
you
can
like.
There
are
some
worries
that
you
can
just
basically
based
on
message
relay
patterns.
You
can
you
can
de-anonymize
the
like
the
appeared,
addresses
behind
specific
validators
and
then
because
it's
known
in
advance
when
it's
their
turn
to
produce
a
block.
I
The
post
merge
right
so
in
the
context
of
the
pre-merge
world.
J
Right
so
well,
this
was
more
small
thing,
just
it
would
not
actually
decrease.
Basically,
it
would
just
stop
the
basically
from
increasing
when
the
block
be
like,
if
say
like,
basically
only
only
every
other
blog
proposal
actually
well.
Okay,
if
you're
talking
about
proof,
work
of
course
right.
If
basically,
the
block
times
start
going
up,
then
that
would
basically
mean
that
the
the
blocks
were
allowed
to
be
more
than
half
full
on
average.
J
That
does
mean
indeed
that
if
we
have
a
proof
of
work
attack
similar
to
like
this
shanghai
attacks,
where
we
just
have
like
very
slow
to
process
blocks,
that
that
would
basically
mean
that
the
well
it
wouldn't
actually
change
much.
It
would
just
basically
mean
that,
like
the
new
equilibrium
would
be
a
little
bit
different
instead
of
basically
having
say,
I
don't
know
if,
if,
as
you
were
saying
your
example,
usually
like
say,
the
block
times
would
double
now,
they
would
basically
go
up
4x,
but
every
bug
would
be
twice
as
full.
I
Reach
a
new
equilibrium,
though
I
mean
if
if,
if
the
throughput
as
in
you
know
gas
per
second,
while
gas
per
block
should
be
constant
and
the
nodes
cannot,
I
mean,
wouldn't
at
least
like
form
some
kind
of
cycle
where
the
block
gets
slower.
And
so
we
have
more
transactions
go
into
it.
And
that
makes
it
even
slower
and
more
transactional.
When
ended
like
some
kind
of
self-reinforcing
cycle
and.
J
So basically, there is no room for that cycle. If you were to completely remove the ceiling, yeah, you're right that this...
J
Right
but,
but
again
I
would
I
like.
Basically
I
would
say
that
that's
what
we
have
the
gas
limit
adjustment
mechanism
for
right,
like
I
mean
similar
to
what
happened
during
the
function
high
techs,
that
in
that
scenario,
you
would
just
basically
advise
miners
to
reduce
the
the
gas
limit.
I
How
your
proposal
interacted
with
the
gas
limit,
if
it
did.
J
No,
no,
it
does.
It
does
not
in
any
way
change
the
gas
limit,
and
I
think
that
it's
important,
because
indeed
like
the
test
limit
is
this
is
irrelevant
for
security
considerations,
so
it
just
acts
within
the
gas
limit.
It
just
basically
sets
the
gas
target.
Instead
of
always
targeting
half
full
blocks,
it
allows
more
than
half
of
blocks
to
be
targeted
if
bugs
basically
come
in
slower
than
expected,
so
so
that
it
balances
out,
but
it
never
changes
the
gas
limit.
K
Yeah,
I
have
a
simple
question:
why
can't
we
iterate
over
this
is
slots,
treat
them
as
an
empty
block
and
just
re
use
already
existing
formula
and
with
each
iteration
apply
this
formula
to
compute
the
base.
Video
of
the
block
that
finally
exists.
J
So
do
you
mean
basically
yeah
okay?
So
so,
if
we
just
basically
insert
artificial
empty
blocks
into
missed
slots,
the
problem
is
that
that
kind
of
set,
basically
just
sends
it
incorrect
signals,
would
be
because
an
empty
block
would
signal
that
there's
no
demand
so
so
that
will
basically
lower
the
the
base
fee
and
but
but
it
would
end
up
resulting
more
or
less
in
the
same
situation,
because
you'd
lower
the
base
fee
before
the
next
block
and
then
in
the
next
block.
J
You
because
it
would
on
average,
be
twice
as
full.
You
increase
it
back
up,
but
that
just
basically
means
that
we
have
the
incorrect
base
fee.
It
would
end
up
in
almost
the
same
situation
as
in
my
proposal
just
that
these
the
slot
after
in
the
mist
slot,
would
just
basically
have
artificially
true
low
base
v
and
so
too
much
extraction
by
the
mine
like
p
extraction
by
the
miner,
and
but
it
would
honestly.
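The point about the two approaches nearly converging can be illustrated with the standard EIP-1559 update step (simplified integer parameters; an illustration, not client code):

```python
DENOM = 8  # EIP-1559 max-change denominator

def step(base_fee: int, gas_used: int, gas_target: int) -> int:
    # One standard EIP-1559 base fee update.
    if gas_used == gas_target:
        return base_fee
    delta = base_fee * abs(gas_used - gas_target) // gas_target // DENOM
    return base_fee + delta if gas_used > gas_target else base_fee - delta

fee = 100_000_000_000   # 100 gwei
target = 15_000_000

# Treat the missed slot as an empty block, then a twice-as-full block:
after_empty = step(fee, 0, target)                   # -1/8 -> fee * 7/8
after_full = step(after_empty, 2 * target, target)   # +1/8 of the dipped fee
# Net factor 7/8 * 9/8 = 63/64: almost back where it started, but the
# intermediate block saw an artificially low base fee, as described above.
```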
K
The two formulas are kind of equivalent, then. Yeah, cool. I thought that this... the mechanism that you have proposed is just a bit different from what I've been thinking about.
J
Just because the base fee calculation happens when we validate the base fee in the block, that just means we have to look at the situation in the parent, and so within that we need the block time of the parent. The block time of the parent is the difference between its grandparent and the parent, right, because the base fee is never adjusted for the block itself; it's always adjusted for the next block.
M
What would break if we just made it based on the time delta between the block and the parent instead? Because, I guess, the way that I philosophically think about this is that I think of the base fee update as being two separate updates: there's an always-positive update as a result of gas being consumed, and an always-negative update as a result of time passing. So, in theory, it shouldn't matter if the order of the two gets flipped.
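A small check of this "two independent updates" intuition (the constants are illustrative, not from any spec): an always-upward multiplicative factor from gas used and an always-downward factor from elapsed time. As exact rationals, the two nudges commute, so the order really cannot matter.

```python
from fractions import Fraction

D = 8  # max-change denominator, as in EIP-1559

def up_from_gas(base_fee, gas_used, gas_limit):
    # Always-upward nudge: a completely full block gives the max +1/8 step.
    return base_fee * (1 + Fraction(gas_used, D * gas_limit))

def down_from_time(base_fee, elapsed_seconds, slot_time=12):
    # Always-downward nudge: one slot elapsed gives a -1/16 step
    # (half the max step, matching an elasticity of two).
    return base_fee * (1 - Fraction(elapsed_seconds, 2 * D * slot_time))

fee = Fraction(100_000_000_000)  # 100 gwei
a = down_from_time(up_from_gas(fee, 15_000_000, 30_000_000), 12)
b = up_from_gas(down_from_time(fee, 12), 15_000_000, 30_000_000)
# a == b: multiplication commutes, so flipping the order changes nothing.
```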
K
Yeah,
who
happened
if
yeah
there
is
also
the
like
miss
slots
before
the
parent
block,
like
miss
loss
between
the
grandparents
and
the
parents,.
J
Yeah
yeah
that
generally
sounds
possible
just
again.
I
think
that
yeah
just
have
to
have
to
think
about
it
briefly,
just
to
make
sure
that
there's,
no,
even
no
small,
within
a
block
inconsistencies
where,
like
the
basically
in
a
block,
is
not
the
one
it
should
be,
and
then
even
if
it
like.
Basically,
this
returns
to
the
correct
one
in
the
next
clock.
Ideally
you
you
never
want
to
individual
box
where
the
miner
basically
has
it
too,
too
low
or
too
high.
Basically,.
G
No,
it
could,
I
mean
it
knows
the
time
stamp
and
the
timestamp
is
ensured
to
be
congruous
with
slot
time
by
the
consensus
player,
but
it
does
not
know
the
kind
of
like
beacon
genesis,
time
and
the
slots
per
second
which,
if
it
did,
if
that
was
just
baked
deeply
into
the
configuration.
The
execution
client
could
calculate
plot
time
based
on
timestamp.
M
One nice benefit of making it entirely timestamp-based is that if we do change the slot time in the future, we don't need to do anything else to ensure that capacity stays the same across the change.
L
The
constant
should
be
per
second
right.
I
mean
it's
a
like
if
we
for
some
reason
decided
that,
like
poor
pos
has
to
be
two
times
slower
for
some
reason,
then
we
would
increase
it
by
a
factor
of
two.
So
I
would
argue
that
the
fundamental
dimension
of
the
constant
is
gas
per
second.
A
...are in the past, and then you still have future blocks coming, right. So imagine you missed three blocks, and you have, you know, a three-times-bigger block, because your elasticity gives it to you. You still need to be able to process that three-times-bigger block before the next block, which you assume won't be missed, arrives. So it's like you can't...
N
So
it
I
think
that
if
we
ignore
what
we
have
implemented
today-
and
we
just
think
like
conceptually
what
what
do
we
want,
what
we
want
is
we
want
the
chain
to
have
a
certain
number
of
amount
of
gas
per.
Second,
how
many
blocks
per
second
are
unrelated
to
that,
like,
fundamentally,
we
want
gas
per
second
and
the
execution
client
does
know.
N
You
know
when
was
the
last
block
and
how
much
gas
did
that
last
block
use
and
how
much
time
has
passed
since
that
last
block,
and
so
I
think
we
should
again
purely
theoretically
ignoring
currently
implementation.
We
should
have
enough
information
to
do
gas
per
second
without
knowing
the
future
slot
time,
without
knowing
what
the
intention
intended
slot
time
is.
We
should
have
enough
data
to
answer
that
question.
J
In
that
scenario,
it
would
just
replace
the
elasticity
multiplier,
because
we
don't
right
now
we
don't
actually
explicitly
set
the
gas
target.
We
set
only
the
gas
limit
and
instead
there's
a
decision
multiplier
to
calculate
the
guess
target
and
if
we
would
set
the
guess
target
on
the
per
second
basis
and
we
would
set
the
block
gas
limit,
which
we
still
need
in
order
to
know
what
the
maximum
block
size
is,
then
we
would
just
no
longer
need
elasticity,
which
would
just
be
implied.
Then.
N
Yeah,
so
I
guess
I
guess
my
argument
here
is
weekly
that
I
would
rather
see
us
come
up
with
a
larger
change
that
makes
it
so
we
don't
need
to
have
the
execution
client
know
what
the
block
intended
block
time
is
when
for
the
merge
like
with
the
merge,
I
would
rather
not
have
to
leak
information
about
block
time
into
the
execution
client
if
we
can
avoid
it.
If
that
means
making
a
larger
change.
N
I
think
I
would
prefer
that
personally,
like
a
larger
change
to
this
formula,
I
would
prefer
that
over
a
simpler
change
that
does
result
in
a
leak
of
information
from
the
case,
so.
O
Yeah, I will probably throw a wrench into the works, but it feels like we are thinking about fixing a consensus problem in the execution engine, in general.
G
You can't... you can, but I mean, the way that a proposer is selected is just fundamentally different than in proof of work, and so you could do some sort of backup model where it's built in: if somebody doesn't show up within a second, you could have somebody else do a backup. But even if you did that, you could still have missed slots that result in reduced capacity, which it seems natural for the execution layer to be aware of.
J
Yeah,
I
just
wanted
to
briefly
say
with
the
guts
the
to
the
conversation
about
leaking
information
about
the
block
time
slot
time
and
to
the
execution
client.
I
don't
actually
think
that
this
is
avoidable,
just
because,
even
if
we
set
the
gas
throughput
per
second
and
the
block
gas
limit
separately,
we
still
want
them
to
go
up
and
down
in
lockstep
right
so
like
because
otherwise
we
don't
have
a
mechanism,
and
so
we
would
still
need
to
hard
code
the
elasticity.
J
To
go
up,
how
would
you,
how
else
would
you
ever
was
over
the
yeah,
the
gas
limit
and
the
car
started
up
and
down
separately.
J
No, I mean... actually, just as it is right now. And I don't think we want to have this EIP remove the control by the block proposer as part of the merge. So right now block proposers can slightly nudge the gas limit up and down, right, and if they continue to be able to do so, we would hope that the gas target per second would also go up or down by the same fraction.
N
We
could
we
could
make
it
so
so
so
this
is.
This
is
where
it
gets
into
the
a
bigger
change
to
avoid
the
data.
The
information
leak,
but
one
can
imagine
a
change
where
the
way
that
they
increase
is
now.
You
increase
the
gas
per
second
rate
instead
of
the
gas
for
block
rate,
and
so
the
limit
that
would
go
up
is
the
per
second
rate,
and
so
every
proposer
can
do
1,
10,
24
increase
or
decrease
in
the
gas
or
in
the
gas
per
second
rate.
N
So
just
like
the
rates
like,
if
we
want
you
know,
10
gas
per
second
or
1000
gas
per
second
or
whatever
and
that's
our
target,
then
each
proposer
can
say
I
would
like
to
increase
or
decrease
that
by
up
to
1
10
24,
which
would
be
functionally
the
same
as
them
currently
increasing
the
block
limit
by
1
10
24th.
But.
N
Yeah, yeah: if the rate goes up... or if the slot time goes up, then it means that the block limit would also go up, and that isn't necessarily what we want; we may want blocks to come every...
J
Right
and
just
to
briefly
put
maybe
out
here-
I
I
think
it's
probably
not
not
ideal
to
just
talk
about
these
details
too
much
yeah.
I
definitely
think
that,
like
this
is
just
a
specific
proposal
that
just
came
out
of
me
talking
to
a
couple
of
people,
so
I
think
they're
definitely
other
flavors
of
this.
That
would
maybe
even
be
better
to
reach
the
same
goal.
The
question
is
really
more.
J
Maybe
just
talk
briefly
about
how
we
go
from
here
and
then
do
the
actual
discussion
offline.
I
think.
A
Yeah,
that
seems
reasonable.
I
guess,
does
anyone
think
yeah?
We
should
not
do
this
at
the
merge.
Like?
Is
anyone
strongly
opposed
and
there's
kind
of
the
weak
objection
around
like
the
information
leak
that
micah
just
posted
in
the
chat
but
like
assuming
we
yeah?
I
guess
that
aside.
Does
anyone
feel
like
it's,
not
something
we
should
do.
A
Right
right
so,
but
it
could
be
on
the
consensus
center
side
sure,
but
it's
it's
and
I
guess
the
trade-off.
There
is
obviously
you
know
if
we
do
it
and
it's
non-trivial.
It
does
add
some
work
related
to
the
merge,
but
it
does
seem
important
and
verdi's
kind
of
seem
worth
exploring
more.
G
Just
to
speak
to
that
there's
always
additional
things.
You
could
probably
do
in
the
consensus
layer
to
try
to
avoid
missed
slots,
but
there
is
just
a
stronger
notion
of
time
and
there
is
a
notion
of
something
not
happening
during
the
time.
Even
if
you
do
shore
it
up
in
some
ways,
and
so
there
is
this
notion
of
like
you
can
have
mis-slots
and
I
don't
think
that's
going
to
go
away,
and
thus
the
evm
can
either
react
to
that
or
not
with
respect
to
its
capacity
right.
A
Right
does
it
make
sense
to
maybe
just
schedule
like
a
breakout
room
for
this
sometime
next
week
or
the
week
after
something
like
that,
it
does
feel
like
we
probably
want
to
think
through
the
like
design
space
and
like
come
with
some
proposal
that
at
least
you
know
yeah.
We
we
all
agree,
makes
like
the
right
tri-duffs.
G
I'd
definitely
suggest
this
coming
week.
I
do
think
that
this,
if
we
do
anything
to
the
evm,
that
this
is
the
thing
to
do
with
the
merge.
G
I
do
think
that
10
of
block
proposals
going
offline
because
of
some
reason
or
other
is
like
totally
something
that
could
happen
and
having
reducing
the
incentive
for
that
to
be
happened
from
from
an
attacker
and
reducing
the
impact
that
has
on
the
execution
layer
and
on
capacity.
I
think
it's
very
nice
to
have,
and
if
it's
going
to
happen,
then
we
need
to
really
make
a
decision.
A
Okay, so yeah, maybe as a next step: Ansgar, can you propose a couple of times that work for you in the AllCoreDevs channel for next week, and we can have a round table there.
A
Okay
last
thing
on
the
agenda,
I
suppose
probably
take
up.
The
bulk
of
the
call
is
dankrod
and
guillaume
have
been
working
on
stateless
and
vertical
trial,
implementations
to
facilitate
that,
and
they
wanted
to
share
just
kind
of
the
a
the
general
road
map
around
stateless
and
why
it's
important
and
kind
of
the
solution,
space
they're
in
and
then
why
specifically
vertical,
tries,
are
an
important
step
in
that
direction.
Yeah
solid
two
of
you
go
with
it.
L
Hey
I
just
shared
my
screen:
can
you
see
that.
L
Okay,
so
I
I
wanted
to
give
a
quick
overview
over
the
vertical
drive
work
for
evernote
on
this
call,
so
that
you
know
where
we
are
and
like
why
we
are
proposing
these
changes
and
that
are
quite
fundamental
and
so
I'll
start
by
quickly,
like
just
giving
a
very
rough
idea
on
on
on
on
the
whole
thing.
So
so,
what's
a
vocal
tri
vocal
stands
for
vector,
commitment
and
merkel,
and
so
it's
basically
a
tree
that
works
similar
to
merkle
trees.
L
But
the
commitments
are
vector
commitments
instead
of
hashes
and
tri
stands
for
tree
and
retrieval
so
like
it
just
means
a
tree
where
each
node
represents
a
prefix
of
keys,
which
is
already
the
case
in
the
current
merkel
partition
trees.
And
so
what
does
that
mean?
L
So
when
we
look
at
a
marker
proof,
I
made
an
illustration
here:
if
we
want
to
prove
this
this
green
leaf,
then
when
we
go
up
the
the
tree,
we
need
to
compute
all
the
yellow
nodes,
all
the
hashes
at
the
yellow
nodes
right
and
for
that
we
need
to
provide
all
the
siblings
of
either
the
green
or
yellow
nodes
so
that
we
can
always
compute
the
parent
hash.
L
Okay.
If
we
change
this
here,
I
I
showed
like
what
what
happens
if
we
go
to
with
four
instead
of
the
binary
tree
that
I've
shown
just
now
for
a
merkle
tree,
then
what
happens?
L
Is
we've
reduced
the
depth,
but
now
we
need
to
give
three
siblings
at
each
layer
instead
of
the
one
we
had
previously
and
so
actually,
by
increasing
the
width,
we
increased
the
size
of
the
proofs,
which
is
like
currently
one
of
the
big
problems
with
ethereum
state
and
that
mercury
partition
trees
are
actually
with
16,
so
the
proofs
are
huge,
and
so
how
do
worker
trees
change
this?
So
here
is
an
illustration
of
what
happens
in
a
vocal
tree
for
the
same
situation.
L
We
again
have
the
green
leaf
that
we
want
to
open
and
instead
of
having
to
provide
all
the
witnesses
in
a
in
a
good
vector
commitment
in
quotation
mark
so
like
in
one
of
the
the
ones
that
we
are
going
to
propose
instead
of
having
to
give
all
the
siblings,
which
is
happens
when
you
use
a
hash
as
a
vector
commitment,
which
is
what
merkle
trees
do
you
only
need
one
opening
for
each
of
these
layers
and
and
that
opening
is
constant
sized.
L
And
so,
if
we
have
this
very
tiny
example,
then
then
we
have
to
just
provide
this
inner
zero
one
node
as
part
of
the
proof
and
these
two
openings
as
part
of
the
vector
commitment,
openings
and
even
better,
typically
we're
going
to
use
additive
commitments.
So
all
these
openings
will
collapse
into
one.
L
So
that's
basically
like
a
short
summary
like
here.
Basically,
what
happens
is
you
have
to
give
this
one
in
a
node
and
one
opening
that
gives
a
proof
that
leaf
zero
one?
One
zero
was
part
of
inner
zero
one
and
in
a
zero
one
it
was
part
of
of
the
root,
so
there
you
can
see
where
vertical
trees
gain
the
efficiency.
It's
from
this,
this
property,
that
you
don't
need
to
give
all
the
siblings
anymore,
but
only
like
a
small
proof
that
everything
is
a
part
of
the
parent
okay.
L
So
I
made
a
short
illustration
here
like
on
how
how
good
they
are
so
basically
like.
We
are
we're
going
to
suggest
that
the
proposed
gas
cost
per
state
access
will
be
1,
900
gas
and
at
the
current
gas
limit,
that
would
mean
about
15
000
state
accesses.
L
If
you
use
a
hexa
merkle
tree,
then
currently
the
witness
sizes
are
about
3
kilobytes
per
witness
and
that's
47
megabytes.
So
that's
absolutely
huge.
If
we
change
this
to
a
binary
merkle
tree,
we
would
have
about
half
a
kilobyte
per
witness
and
then
we
would
bet
eight
megabytes.
That's
still
pretty
big.
L
Now,
if
we
use
a
width,
256
vocal
tree
instead
with
a
32
byte
group,
so
like
each
commitment,
has
32
bytes
as
it
has
now,
but
it's
going
to
be
a
different
type
of
commitments.
Then
it
would
only
be
about
100
bytes
per
witness,
and
so
that
reduces
it
to
1.5,
megabytes
and
now
we're.
Finally,
in
a
range
that
we
can
consider
is,
is
reasonable
and
lets
us
do.
Statelessness.
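The quoted sizes can be sanity-checked; assumed inputs are a 30M gas limit and the per-witness byte figures given above:

```python
GAS_LIMIT = 30_000_000
GAS_PER_ACCESS = 1_900
accesses = GAS_LIMIT // GAS_PER_ACCESS  # ~15,789 state accesses per block

def witness_mb(bytes_per_witness: int) -> float:
    # Total per-block witness size in megabytes.
    return accesses * bytes_per_witness / 1e6

hexary_merkle = witness_mb(3_000)  # ~47 MB, matching the figure above
binary_merkle = witness_mb(500)    # ~8 MB
verkle_256 = witness_mb(100)       # ~1.6 MB
```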
L
Some summary of the costs: I made some estimates here. If we want to produce such a proof, each of the 25,000 openings that you would have to compute would need 256 times 4 field operations, and each of those costs about 30 nanoseconds. So that's 750 milliseconds for such a proof, which seems pretty reasonable in terms of prover time.
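That estimate is just a multiplication, reproduced here for reference (all constants are the rough figures from the call):

```python
# Prover-time back-of-the-envelope from the call.
openings = 25_000             # openings to compute for the proof
field_ops = 256 * 4           # field operations per opening (width-256 tree)
ns_per_op = 30                # ~30 ns per field operation
total_ms = openings * field_ops * ns_per_op / 1_000_000
print(f"~{total_ms:.0f} ms")  # ~768 ms; the call rounds this to 750 ms
```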
L
Okay, so now I'm going to come to the tree structure that we're going to suggest, because that's important for the roadmap and for why we are suggesting these changes in a certain order. The design goals we had in mind when we came up with the structure were to make access to neighboring code chunks and storage cheap, but at the same time to distribute everything as evenly as possible in the tree, so that state sync becomes easy.
L
And then we want this whole thing to be fast in plaintext, which means fast in the direct application as we're suggesting it now, but we also want it to be fast in SNARKs. We envision that within a few years it will become very feasible to compress all these witnesses using SNARKs, and for that we optimized everything so that it can be done very efficiently in a SNARK. That would also help anyone who designs rollups or anyone who wants to create state proofs and feed them into a SNARK.
L
Finally, the whole thing should be forward compatible, so we're basically designing a pure key-value interface with 32 bytes per key and per value.
L
So keys are basically derived from the contract address and the storage location. The two are used to derive a stem and a suffix, and the stem is simply a Pedersen hash (a type of hash that is also efficient to compute in a SNARK) of the contract address and the storage location, except for the last byte.
L
So we're excluding the last byte from the hash, and instead we put the last byte of the storage location directly into the suffix. The nice thing about this is that for any storage locations that differ only in the last byte, the stem will be the same, and only this last byte will differ.
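A minimal sketch of that key split, with sha256 standing in for the SNARK-friendly Pedersen-style hash actually proposed (function and variable names here are illustrative, not from any spec):

```python
from hashlib import sha256

def tree_key(address: bytes, storage_key: bytes) -> tuple:
    """Split (address, storage location) into a 31-byte stem and a one-byte
    suffix, as described on the call. sha256 is only a stand-in for the
    Pedersen-style hash of the real scheme."""
    assert len(address) == 20 and len(storage_key) == 32
    stem = sha256(address + storage_key[:-1]).digest()[:31]  # last byte excluded
    suffix = storage_key[-1]  # selects the leaf within the stem's extension
    return stem, suffix

# Slots differing only in the last byte share a stem, so they land in the
# same extension node and are cheap to open together.
stem_a, _ = tree_key(b"\x11" * 20, b"\x00" * 31 + b"\x01")
stem_b, _ = tree_key(b"\x11" * 20, b"\x00" * 31 + b"\xff")
assert stem_a == stem_b
```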
L
And then we're going to put them into a tree that looks like this. We basically have this Verkle trie at the start that locates the stem, and that works very similarly to the current account trie. Then, at each stem, there is an extension node that commits to all the data in that extension. That means that opening several pieces of data in the same extension (or the same suffix tree, as we also call it) is very cheap, because the whole stem tree is already opened at that point. There's nothing new to do; you just have to open another point in these polynomials, C1 or C2, and that is a very fast and very cheap operation.
L
It's about five times smaller compared to binary Merkle trees and more than 30 times smaller than the current hexary tries, so that's pretty huge. Verification times are pretty reasonable, similar to a binary Merkle tree, and the prover overhead isn't huge either: even in the worst case, I estimate it can be done in a few seconds.
L
Our solution doesn't need any kind of trusted setup. It's all basic elliptic curve arithmetic, which we're already using in Ethereum, just the discrete logarithm assumption, and I think it's currently the only known solution for the Ethereum state that doesn't come with huge trade-offs in terms of how big the witnesses are and so on.
L
Right, so yeah, that was my quick introduction. I can't see: are there any questions about it at the moment?
N
Yeah, I mean, anything.
L
Go ahead, yeah. So please feel free, if anyone has any questions about this, to reach out to me; I'm very happy to explain anything. I also wrote a few blog posts about it. It's obviously something to get into, and it is a big change, but I think it also comes with huge advantages.
G
I did have a question from the chat. You showed the max witness size, the worst-case witness size. Do you have data on what the witness sizes would be with, you know, current average blocks on mainnet?
L
So that's not technically the max witness size; I mean, there are different definitions of max. That is the max if you don't spam the tree. What I shared here was a rough estimate for a block that does only state accesses for the whole block; that's what the slide shows. If you spam the tree, there are some worse things that you can do, obviously.
L
I do have that data, actually, and I wanted to share it as well. I made a little calculator, which I can share, where you can play with these things. Sorry.
L
So it's this one, and basically this is how you use it: it uses all our suggested parameters, and you can adjust them.
L
You can set how many elements are in the tree, and it will compute what the average depth is. This is all the input for the suggested gas changes, and then you can enter the numbers you expect. So this is an example where we access one thousand different branches, that is, one thousand stems, and four thousand chunks; then we have 400 different branches updated and 1,200 chunks updated. It then gives a number here that would be the gas cost for that.
L
So this example would spend more than 6 million gas just on the state accesses. That seems like a roughly reasonable average case, maybe even a high average case; I don't know if people are going to spend that much, and I don't know how to estimate it right now, because people are obviously also going to adapt.
L
In this case, the total data: any scheme has to provide the data, right? That's an absolute must; unless you SNARK the whole execution, you have to provide the data. The data size here would be about 200 kilobytes, and the total proof size, that is, all the commitments and the openings that you have to give in addition, would be 110 kilobytes. So the total witness size would be about 308 kilobytes. This is a roughly average case, and I put the link up on the ACD call as well.
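A sketch of the gas side of that example. The stem-access and edit costs are the draft numbers quoted later on the call; the 200-gas chunk-access cost is my assumption from the draft EIP, and the calculator shown on the call includes further components (such as chunk fills), so this reproduces only the itemized terms, not the full 6-million figure:

```python
# Draft per-operation witness gas costs (the chunk-access cost is assumed).
WITNESS_BRANCH_COST = 1_900   # first access to a stem in a transaction
WITNESS_CHUNK_COST = 200      # assumed per-chunk access cost
SUBTREE_EDIT_COST = 3_000     # first write to a stem
CHUNK_EDIT_COST = 500         # per chunk edited within a written stem

def state_access_gas(stems, chunks, stems_edited, chunks_edited):
    return (stems * WITNESS_BRANCH_COST
            + chunks * WITNESS_CHUNK_COST
            + stems_edited * SUBTREE_EDIT_COST
            + chunks_edited * CHUNK_EDIT_COST)

# The example from the call: 1,000 stems / 4,000 chunks accessed,
# 400 stems / 1,200 chunks updated.
print(state_access_gas(1_000, 4_000, 400, 1_200))  # 4500000 from these terms
```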
N
So basically, yeah, sorry: those benchmark estimates, what very rough class of hardware are they assuming?
L
Yeah, that's my estimate. As I said, these are basically based on what the dominant operation is for each of these things, and on estimating how many of those you need. I think those two things I can estimate fairly well. However, this does depend on eliminating all the other bottlenecks, so just to be clear here: the prover time.
L
Cool. So the reason we brought this to the call now is that we basically have an idea for a roadmap for how we make Ethereum stateless, and the idea is this.
L
The idea is to spread the changes over three different hard forks. At the end of it, Ethereum would not be stateless in itself, but it would gain optional statelessness; I will explain in a minute what that means. The change in the first hard fork, which I suggest should be Shanghai, would be to make the gas cost changes that enable all of this. The reason for making the gas cost changes first is, one, that they are relatively easier to implement than the whole commitment structure (well, actually we do need to make some changes to the data structure, but not the whole commitment structure). And two, the most important thing, I think, is to give signals to developers as early as possible about how they should handle state accesses in the future.
L
Basically, every month in which state access remains cheap and everything remains as it is, is another month in which new contracts get deployed that all depend on the current gas scheme. They will all be upset when everything changes later and some things become super expensive, when they could have been developed in a more efficient way. So that's really annoying.
L
It would be so much better if we could get them to the right numbers as early as possible, and I think, realistically, let's be honest, the only way to do that is to actually change the gas costs. That is why I suggest making these gas cost changes in the first hard fork.
L
In the subsequent hard fork (I call it Shanghai-plus-one here; Cancun or whatever), what we do is just freeze the current Merkle Patricia trie root exactly as it is at that point, and we add a Verkle trie commitment. That is initially going to be an empty commitment, and we just track all the changes in it from then on.
L
At Shanghai-plus-two, we replace the frozen Merkle trie root with a Verkle trie root. The reason for this staging is that at no point in the roadmap does there need to be an online recomputation of the state: all the recomputation of the database and commitments can be done in the background and doesn't have to happen online. Okay. And the gas cost changes I showed are based on work by Vitalik, but Guillaume has separated them out into a separate EIP draft, and the idea is basically this.
L
So keep in mind this design for the Verkle tree. We have these different parts: the stem tree, where you basically try to group together things that are in similar storage locations, and then the extension nodes, each of which is a node representing 256 storage locations that are close together. So we basically have two different kinds of costs.
L
We have one cost if you access any of these stem trees, and a separate cost when you access chunks within a stem that you've already accessed, that is, within the same suffix tree. Okay. And the nice thing about that is that some things will actually get cheaper.
L
And I guess that's one piece of good news for smart contract developers: not every state access would suddenly become crazy expensive, and if they design things well, they can actually save some gas. We suggest basically five different costs that depend on what you do. First, for each stem that you access during a transaction, you pay 1,900 gas.
L
Then, in addition, for writing: for each stem that you write to, you pay a fixed cost of 3,000, and for each chunk within that stem you pay 500. So if you edited 10 of these, you would pay 5,000, but you pay the fixed cost only once if they're all within the same stem.
L
And finally, when you fill a new chunk, that is, when you add a node that has never been written to before, you pay 6,200, so adding new state is still somewhat expensive.
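The write-side schedule just described can be summarized in a small helper (a sketch of the draft numbers from the call, not a spec):

```python
# Write costs named on the call (draft numbers, subject to change).
SUBTREE_EDIT_COST = 3_000   # first write to a stem in a transaction
CHUNK_EDIT_COST = 500       # per chunk edited within that stem
CHUNK_FILL_COST = 6_200     # chunk that has never been written before

def write_gas(new_stem: bool, chunks_edited: int, chunks_filled: int) -> int:
    """Gas for the write portion of a transaction touching one stem."""
    gas = CHUNK_EDIT_COST * chunks_edited + CHUNK_FILL_COST * chunks_filled
    if new_stem:
        gas += SUBTREE_EDIT_COST
    return gas

# Ten chunk edits within one already-written stem cost 10 * 500 = 5,000 gas,
# the example given on the call.
assert write_gas(new_stem=False, chunks_edited=10, chunks_filled=0) == 5_000
```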
L
As for SELFDESTRUCT, the way we deactivate it is that we rename it to SENDALL: it moves all the ETH in the account to the target, but it doesn't do anything else. It doesn't destroy code, it doesn't destroy any storage, and it doesn't refund anything for destroying those, because they aren't destroyed.
L
Unfortunately, these changes do require database changes, and that's one of the reasons we're trying to introduce this whole thing early and get feedback on it. Because we are reducing the costs for these chunk accesses, if in practice they still have the same cost as they do now, then that's a DoS vector, and that would be annoying. So it would require that clients already make some of the adaptations to their database, so that accessing chunks with the same stem is cheap, and I think the reasonable way to do that is for clients to just store this whole extension in one location in the database.
L
That's the easy way: even if the suffix tree is full, that's about eight kilobytes of data, which is not a huge amount, and every time you read it, you just read the whole thing. Whether you read 32 bytes or 10 kilobytes makes almost no difference; it's always the number of IO operations that actually matters, so that should alleviate the concern.
L
Yeah, so that would be the suggestion for the Shanghai hard fork. In the next hard fork, we would freeze the Merkle Patricia trie root and add a Verkle trie commitment: we would say the current state root is frozen exactly as it is, and no changes are made to it.
L
Then we add this empty Verkle trie root that contains nothing, but whenever anything from the state is written to, or even read from, we transfer it into the Verkle trie. That doesn't mean we remove it from the Merkle Patricia trie.
L
We leave that as it is, and then in the background, at any point between this fork and Shanghai-plus-two, you can do the background computation where you recompute the MPT root as a Verkle root, and we would then replace the MPT root with the Verkle trie root.
L
So yes, basically you can do that locally, and it can be done in the background, because all the data is constant. Even if you have to access the same database, you can do it in another process or whatever; people can run it at any point, and that's nice, because you have several months to do this. An even simpler solution for some clients could be to simply provide the converted database as a torrent.
L
I guess there should still be a way to verify it, but maybe not everyone has to do that. The trust model, if you just download it as a torrent, is not really different from doing a snap sync, so as long as we have a reasonable number of people doing this, I don't see a security concern there. And if we actually already have state expiry ready at that point, then we don't need to do a database conversion for most clients at all, because we can simply use state expiry for the last step: we can say that from now on, the old state has expired, and then you only need to literally replace the root. Normal nodes that don't keep all this old state don't even need to convert it anymore.
L
There's the libp2p network, on which all the eth2 clients run, and we can simply distribute stateless blocks on it, but there could also be other, more experimental networks that do things with these stateless blocks. And then, I guess, as an optional, or very likely, future thing, there is one way we could get to full statelessness.
L
Is
we
just
deprecate?
The
old
devp2p
network
say,
like
consensus,
is
now
100
percent
just
on
lib
p2p
and
the
fp2p
remains
as
a
state
certain
network.
So
anyone
who
wants
to
have
full
state
goes
on
there
and
that's
like
a
network.
That's
mainly
used
in
order
to
get
full
state
and
p2p
is
used
by
all
the
consensus
nodes
like
clients
and
so
on.
Whoever
one
wants
to
get
blocks
with
witnesses.
L
Yeah, cool. That's my introduction to the whole thing. Do we have any questions at the moment?
N
So, first question: the change in Shanghai. It feels like, and maybe I'm missing something here, that's a pretty significant change: changing the database structure in such a way that current execution clients can correctly calculate the future gas costs.
L
Right, so to be precise, you can easily compute the gas costs; that's not the difficult part. We could add that to a client right now and compute these correctly.
L
Yeah, without database changes you can compute all of that. You need to compute the new keys, and you need to have some array where you store everything that has been accessed, but that's all tiny; that's not a problem.
L
The problem is that we are making some things cheaper with these gas changes, and that realistically requires some database changes. I agree it's a concern, but that's also one of the things I wanted to bring here for feedback.
N
We could do what Marius, I think, is suggesting, which is to calculate both gas costs and use the higher of the two: calculate what the gas cost is using the old database layout, and also calculate what the new gas cost will be, and then just use the higher of those two for each operation.
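That transition rule is simple enough to state in code (a sketch of the suggestion as relayed on the call; the example prices are illustrative, not a schedule):

```python
def transition_gas(old_cost: int, new_cost: int) -> int:
    """During the transition, charge the maximum of the old and new
    schedules so that neither database layout is underpriced."""
    return max(old_cost, new_cost)

# Illustrative prices only: an access that got cheaper still pays the old
# price during the transition; one that got dearer pays the new price.
assert transition_gas(2_100, 200) == 2_100
assert transition_gas(2_100, 4_100) == 4_100
```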
N
Second question, on the Shanghai-plus-one fork. If I understand correctly, every database lookup will require two reads: one to read the Verkle tree to see if the key is present, and then a second one to check the old MPT, and then potentially migrate it as well. Are we accounting for that in the gas costs, the cost of the double read and migration, or are we just going to say that hopefully this is uncommon enough?
L
Right, so I guess the point, maybe to be clear here, is that I don't think there should be two databases. At least the data, the actual keys and values, only has to be in one database. Maybe that's not the way it's currently implemented, but that's the right way to think about it.
L
I think the data and the commitment scheme are two independent things. So there are two different commitment schemes at that point, but not necessarily two different databases, if that makes sense.
L
Basically, yes, and that database simply has a little marker that says: is this already in the Verkle part, or is it still in the MPT?
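A minimal sketch of that single-database migration marker (all names are illustrative; no client works exactly like this):

```python
# Each key carries a flag saying whether it has been folded into the Verkle
# commitment yet. The MPT side is frozen, so nothing is ever deleted there.
def read_slot(db: dict, key: bytes) -> bytes:
    migrated, value = db[key]
    if not migrated:
        # First touch since the fork: from now on the key is tracked by the
        # Verkle commitment (the commitment update itself is elided here).
        db[key] = (True, value)
    return value

db = {b"slot": (False, b"value")}
assert read_slot(db, b"slot") == b"value"
assert db[b"slot"][0] is True  # now part of the Verkle commitment
```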
E
Well, in Erigon we have the so-called plain state, which is separate from the hashed state required for the Merkle Patricia trie. So we already have that in Erigon, and for us it will be relatively easy to implement this additional Verkle trie commitment.
H
For Besu, that's kind of hard to say. I'm not sure about, you know, the trie layout, but we do have the ability to swap things out as far as the underlying data structure itself goes. So it's a really tough question for me to answer right now, but we'll keep an eye on it; it's one of the things where we're intending to modularize the same way Erigon does.
N
So, related to that, the question: will we need gas accounting for the migration step? I'm guessing that's a non-free operation: the first time you read something from the MPT and you need to write it to the Verkle tree, does that need to cost more gas than any subsequent reads?
L
I don't think the question is actually about the third step there. Right, yeah.
L
I haven't thought that through, and I don't know if Vitalik has, because he created the ideas and the gas costs. One way to potentially think about it is to apply the costs from the perspective of the Verkle tree. So if we say, yeah.
E
Yeah, just, I think, because the new target costs are already on average higher than the status quo, if we defensively make them even higher in the transition, that might potentially be prohibitively expensive for smart contracts. So I'm thinking: if having the new target costs requires some kind of database refactoring, perhaps it's still worth doing that, and delaying the gas cost change not to Shanghai but to a later fork.
A
We're about at time. What's the best place to continue this conversation, Dankrad? I think we have a channel for this on the R&D Discord, do we? Yeah, we have a state expiry channel.
L
Yeah, so basically, we intentionally designed everything so that it's independent from state expiry and address extension, so that all of these can be worked on independently and they aren't blockers for each other. So, next steps: yes, I'm very happy, if anyone wants to understand more or wants to ask any question, please reach out. And also, I guess, maybe the big, yeah.
L
The big question here that I would want to discuss is the database changes that would be required for the Shanghai gas cost changes we're suggesting. If we can discuss that, see where each client is on it, and how big those changes are, that would be great to understand.
A
Okay, great, yeah. I think we can just use the Verkle trie channel here; I'll type the name in the chat in case people are not on it. Yeah, thanks a lot. Thank you, Dankrad and Guillaume, for sharing and obviously for working on all of this. Any final questions?
N
Any ideas on how we can do address space extension? That would allow us to prioritize state expiry, and I think this transition process is actually significantly easier if state expiry can be done simultaneously, or first.
A
Great, and there is also an address space extension channel on the Discord.
A
One thing I'll note before we head off: at least in North America, daylight saving time is changing before the next call. I'm not sure about Europe, I think so as well, but please double-check the time. The call stays at 14:00 UTC two weeks from now, but at least in North America that's one hour earlier in your local time, and I think that might also be true in Europe. So please just double-check that before we meet again in two weeks.