From YouTube: Ethereum Core Devs Meeting #135 [2022-4-1]
B: I posted the agenda in the chat already. I think everything is working on the live stream. So a couple of things today: a bunch of updates on the merge, mostly on the shadow forking. I don't think there's much on Kiln itself, and then Mikhail had some stuff about the latest valid hash conversations we've been having. Then on Shanghai, a couple of things as well: Alex had some updates on the withdrawal EIP based on the call last time, and then there were two other EIPs, transient storage and the removal of SELFDESTRUCT, that were brought up in the comments. And then, after all that, we have Afri here to talk about some of the issues we've seen on Goerli and some potential solutions to them.
C: Just to bring everyone up to speed: since the last ACD, our testing efforts have been focused on shadow forks.
C: All the nodes configured with the modified genesis JSON file would fork off into their shadow fork, basically, but since we've set the merge fork block to be far in the future, we continue importing transactions. So the main idea is that we're able to test syncing, we're able to test actual block production under load, and so on.
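The shadow-fork setup described here boils down to shipping the participating nodes a genesis/config override. Below is a minimal sketch of that idea, assuming a geth-style genesis JSON with a `config.terminalTotalDifficulty` field; the field names and values are illustrative assumptions, not the actual shadow-fork tooling:

```python
import json

def make_shadow_genesis(genesis: dict, shadow_ttd: int) -> dict:
    """Derive a shadow-fork genesis from an existing network's genesis.

    The shadow network reuses the source chain's state and history but
    lowers the terminal total difficulty, so that only the shadow nodes
    fork off while everyone else keeps following the original chain.
    The chain id is deliberately left untouched, so that transactions
    signed for the original network remain replayable on the shadow fork.
    """
    shadow = json.loads(json.dumps(genesis))  # deep copy via JSON round-trip
    shadow["config"]["terminalTotalDifficulty"] = shadow_ttd
    return shadow

# Example: a trimmed-down, goerli-like genesis (illustrative values only).
source_genesis = {
    "config": {"chainId": 5, "terminalTotalDifficulty": 10**22},
    "difficulty": "0x1",
    "alloc": {},
}
shadow = make_shadow_genesis(source_genesis, shadow_ttd=10**15)
```

The key property is that the override is non-destructive: the source genesis is untouched, so the same machine could keep a canonical-chain node running alongside the shadow node.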
C: We had two attempts so far. The first attempt followed the mainnet client distribution, and we found issues in, I think, every client, especially related to expectations of how quickly timeouts should happen, or how block production logic has to work, and so on. So we had a second shadow fork that happened the day before yesterday, and that was a lot more stable. We didn't do the mainnet distribution for that one; we did a more equal client distribution.
C
I
think
we've
only
found
like
one
or
two
minor
issues
and
they
have
patches
being
released
so
we're
attempting
a
third
version
again
with
mainnet
distribution.
Ttd
is
supposed
to
hit
on
monday
and
information
about
the
conflict
has
been
shared
on
the
testing
channel
in
the
rnd
discord
and
irrespective
of
how
this
one
goes,
we
want
to
already
try
a
a
mainnet
shadow
fork
sometime
next
week
and
the
main
idea
is
just
to
collect
as
much
information
as
possible
right
now,
so
that
we
can
make
an
informed
decision
about
about
the
merge
sooner.
D: So one thing where we want to push for the mainnet shadow fork so quickly is to collect data on how the clients behave on mainnet, because we've seen on the Goerli shadow fork that syncing becomes an issue, and bigger blocks become an issue.
D: It would be really nice to see how clients work post-merge on mainnet. Sorry, I'm not in a great place. So one thing we had for the first shadow fork is that we provisioned eight-gigabyte nodes, and because of non-finalization...
D: These ran over, and Geth ran out of memory and was killed, and then was killed repeatedly, and stuff like this is, I think, really important to see with mainnet state and mainnet transactions.
C: Yep, and we don't really have an expectation for client teams to spin up their own nodes to test; I'm spinning up nodes on your behalf, more or less. Every client team knows whose SSH keys are on which machine. It would be great if you guys can keep an eye on your nodes: test weird sync states, start syncing, switch ELs midway through sync, weird things that people might do there.
D: Yeah, so there are multiple issues there. One was...
D: So, because Geth right now builds blocks synchronously in forkchoiceUpdated, that's not an issue, but for Nethermind, which builds blocks asynchronously, every block is empty, because they only have 100 milliseconds to build a block. That was one issue. The other one was that clients didn't give us enough time to execute the blocks.
D: So I think some clients had a timeout of 500 milliseconds for newPayload, and we can, and we do, execute blocks during newPayload, and so, yeah, we timed out, and so some clients weren't able to sync or to follow the chain.
D: Yes, and I'm not sure if we had the same issues for Nimbus, but for the other clients it was the case.
E: Yeah, I would expect, if it's HTTP, I would expect timeouts set to some big number, one minute or whatever, even more than the seconds per slot. So I...
F: I was going to ask: should we provide some sort of recommendation for timeout durations? I guess for different methods there may be different recommendations, and I think it's fairly important that all clients use the same timeouts, so we don't see some sort of network asymmetry down the line.
G: Yeah, I mean, timeouts are fundamentally dangerous, because if there's a timeout, I can construct a DoS block that sits right on the border of that timeout, depending on the machine, and then I can split the network. So the timeout, if there has to be a timeout, should be on the order of something far beyond DoS blocks. But yeah, maybe there should be an explicit recommendation.
A: Are we discussing the timeout for forkchoiceUpdated or for newPayload? Because for forkchoiceUpdated we should have a relatively small timeout, I think.
E: Yeah, I was just going to say the same. It could be a newPayload that is processed synchronously, like if it's two or three blocks, and a forkchoiceUpdated, like what Marius said. So the timeout should be enough to process a few blocks. I would even expect that WebSocket is used; I don't think that WebSocket has timeouts inside of a session.
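The point being circled here, that a timeout must mean "retry later", never "the payload is invalid", can be made concrete with a small sketch. The method names below match the Engine API, but the timeout values and the retry policy are illustrative assumptions, not spec text:

```python
import concurrent.futures as cf
import time

# Illustrative per-method timeouts in seconds; the spec does not fix these,
# and the call is about agreeing on shared values so clients behave alike.
ENGINE_TIMEOUTS = {
    "engine_forkchoiceUpdatedV1": 1.0,  # head update: should return quickly
    "engine_newPayloadV1": 8.0,         # may execute several blocks' worth of work
    "engine_getPayloadV1": 1.0,
}

def call_with_timeout(method: str, handler, *args):
    """Run an engine API handler with that method's timeout.

    Crucially, hitting the timeout is treated like SYNCING ("try again
    later"), never as evidence that the payload is INVALID.
    """
    with cf.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(handler, *args)
        try:
            return future.result(timeout=ENGINE_TIMEOUTS[method])
        except cf.TimeoutError:
            return {"status": "SYNCING"}  # retry later; do NOT mark invalid

status = call_with_timeout("engine_forkchoiceUpdatedV1", lambda: {"status": "VALID"})
```

One design consequence, matching the discussion above: a slow but honest EL and a broken EL look the same to the caller, which is why the speakers prefer the EL to actively signal SYNCING rather than letting the CL infer anything from a timeout.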
G: Also, in your case, though, it might actually be better to have the execution layer signal that it is still working, maybe via a return of SYNCING, rather than hitting the timeout, because then the consensus layer has no idea what's going on.
E: And I think that, since the communication channel is a trusted relationship between the CL and the EL, I would expect that the timeout should be relatively high. It's not necessarily that the EL is misbehaving; it could really be, in case of attacks where every block is huge, that it takes, let's say, 20 seconds to execute. I don't know, it's just a random number.
F: I think, coming from the consensus perspective, from Prysm, what I can say is that most of the issues, or most of the edge cases, we're seeing now are always regarding timeouts.
F: I think this is a very tricky area to tackle. Just before, we were treating a timeout like the payload is invalid, but that's not true, because the payload is not invalid; it just means that we have to try again later. So we discovered a few edge cases there, and it may be important, like I said, to define some timeout values across all the methods, to make sure everyone's on the same page.
D: Yeah, I think we should define these async and not do it now. One thing that maybe I didn't hammer home hard enough, because Pari just wrote me about it: basically, the time that the consensus layer clients currently wait between when they ask us to construct a payload and when they fetch the payload from us is too short.
D: So they send us the payload attributes, we start creating the payload, and then they already send us the getPayload, and this is like 100 milliseconds apart, and in these 100 milliseconds we cannot create any blocks. I think this is going to be an issue, yeah.
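The timing problem described here can be modeled with a toy asynchronous builder: the payload improves the longer it builds, so fetching it roughly 100 ms after the forkchoiceUpdated that started it yields a nearly empty block. The tick granularity and transactions-per-tick below are arbitrary assumptions, not client behavior:

```python
class PayloadBuilder:
    """Toy model of asynchronous payload building.

    Time is modeled in discrete ticks (think ~100 ms each). Each tick the
    builder packs more pending transactions into its best-so-far payload,
    so the earlier the CL sends forkchoiceUpdated with payload attributes
    relative to getPayload, the fuller the returned block.
    """

    def __init__(self, mempool):
        self.mempool = list(mempool)
        self.best_payload = []  # an empty block is always returnable

    def tick(self, txs_per_tick=5):
        taken = self.mempool[:txs_per_tick]
        del self.mempool[:txs_per_tick]
        self.best_payload = self.best_payload + taken

    def get_payload(self):
        return list(self.best_payload)

builder = PayloadBuilder(mempool=range(100))
builder.tick()                 # CL fetches only ~one tick after starting the build
early = builder.get_payload()  # barely any transactions made it in
for _ in range(10):
    builder.tick()             # CL instead waits most of the slot
late = builder.get_payload()
```

The contrast between `early` and `late` is the whole complaint: getPayload arriving ~100 ms after the attributes leaves the builder no time to fill the block.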
G: Consensus layer clients should be doing that if they're trying to have anything profitable coming out of it, yes, but we can make a clearer recommendation. I mean, it should be as soon as you know the fork choice of the prior slot and you know you're the proposer of the next slot.
A: But I think I noticed this some time ago and I reported it, and I think only Teku is doing it in the correct way. Yeah.
F: Prysm had that fixed. We actually just began caching the payload ID one slot prior, when the head changes. I can give you a new image to try, yeah.
G: A different one: say, for example, at slot n, the proposer of slot n actually makes two conflicting blocks. They're going to get slashed, but there are actually two conflicting blocks out there, and maybe what I see as the head initially, and begin my build process on, actually switches because of the weight of attestations throughout that slot. So there are these edge cases where you might switch, but you're right: as soon as you get the information.
G
Even
if
it's
low
confidence,
you
should
probably
begin
the
build
process,
and
only
if
you
fork
choice
update
you
could
change
the
build
process.
If
you,
if
you
in
these
edge
cases,.
G
Yes,
it's
deep
enough
yeah
if
it
was
a
deep
enough
fork,
well,
yeah
a
late
block
potentially
or
a
deep
enough
fork
where
the
shuffling
is
diverged,
then
I
might
all
of
a
sudden
resolve
that
I
want
to
switch
forks
halfway
through.
E: Also, another comment related to timeouts, or a question: is it possible to process multiple payloads at a time on the EL side? Just curious how it's currently implemented in the clients.
D: No, it won't, because we have locks.
K: And theoretically we could do them in parallel. I don't know if we do; Marius probably knows better. No?
B: Yeah, I think that makes sense. We can use the interrupt channel to do that.
B
Okay,
next
up
mikhail
you
had.
Basically,
there
was
a
conversation
on
discord
this
week
around
the
latest
valid
hash.
I
shared
the
discord
link
in
the
agenda
and
you
put
together
kind
of
a
summary
explaining
your
thoughts
about
it.
You
want
to
maybe
just
take
a
minute
or
two
and
and
share
your
thoughts,
and
you
can
share
your
screen
also,
if
you
want,
with
with
the
document.
E: Yes, okay, cool. So yeah, we've touched a bit on this latest valid hash, and apparently it has some complexity in supporting it. So I've just put down some notes, some initial thoughts, on how this could be implemented and when it's important. TL;DR: we have two cases, basically, that should be considered when implementing this requirement.
E
First
one
is
important
and,
to
my
opinion,
it's
when
the
payload
realization
happens
synchronously.
So
it's
like
when
you
submit
the
new
payload
and
it's
responded.
It
appears
to
be
invalid.
This
is
the
easy
case
right.
So
here,
latest
valid
hash
is
also
the
print
hash
and
it
might
seem
that
this
latest
valid
hash
parameter
is
redundant
in
this
case.
But
if
we
take
a
look
at
factories
updated
which
has
the
same
parameters
in
response.
E
It
could
be
the
case
when,
like
the
first
block
is
invalid
in
this
fork
and
what
you
do
is
respond
with
invalid
and
in
this
case,
latest
valid
hash
will
point
to
the
first
block
in
this
work,
and
this
is
important
because
we
need
cl,
need
to
invalidate
all
all
the
blocks
that
starting
from
this,
like
first
invalid
block,
so
that
that's
this
is
where
it
should
be
like
implemented.
E
It
should
be
straightforward,
because
el
client
has
all
the
information
and
it
re
it
just
sends
it
back
to
cl.
E
So
the
other
part
is
when
yell
is
sinking
and
it
makes
an
invalid
box
somewhere
in
the
middle
of
the
chain.
It's
it
has
been
syncing
with,
and
the
el
is
syncing
with
this
chain
because
consistently
a
client
said
that
this
chain
is
canonical
and
fed
all
the
required
information
to
el
to
start
syncing.
E: So that's the kind of cache required here, and this check is pretty simple: for every newPayload, if its parent hash equals this invalid tip hash, then, having this information, the EL client can just respond correctly to this request. Yeah, we should not account for...
E
I
think
that
we
should
not
account
for
the
case
when
the
some
payload
is
missed,
and
the
re
and
el
can't
just
build
the
chain
like
to
understand
that
this
payload
p1,
for
example,
is
linked
to
the
invalid
chain,
because
for
some
reason,
cld
didn't
send
the
all
required
information
to
build
this
link
and
that's
it.
So
I
think
that
this
these
two
two
steps
would
be
enough
for
implementing
it
in
the
asynchronous,
without
validation
case
also
yeah.
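The two-step bookkeeping described above, remembering the current invalid tip together with its latest valid hash and answering children of that tip immediately, can be sketched roughly as follows. The class name, eviction policy, and response shape are illustrative assumptions, not Engine API spec text:

```python
class InvalidTipCache:
    """Sketch of the 'invalid tip' bookkeeping discussed on the call.

    When the EL invalidates a block while syncing, it remembers
    (invalid tip hash -> latest valid hash). Any later newPayload whose
    parent is a known invalid tip can be answered INVALID immediately,
    advancing the tip to the child, without re-validating anything.
    """

    def __init__(self, max_entries=5):
        self.max_entries = max_entries
        self.tips = {}  # invalid tip hash -> latest valid hash

    def record_invalid(self, block_hash, latest_valid_hash):
        self.tips[block_hash] = latest_valid_hash
        while len(self.tips) > self.max_entries:
            self.tips.pop(next(iter(self.tips)))  # evict the oldest entry

    def on_new_payload(self, block_hash, parent_hash):
        if parent_hash in self.tips:
            lvh = self.tips.pop(parent_hash)
            self.record_invalid(block_hash, lvh)  # the tip advances to the child
            return {"status": "INVALID", "latestValidHash": lvh}
        return None  # fall through to normal validation / syncing

cache = InvalidTipCache()
cache.record_invalid("0xbad1", latest_valid_hash="0xgood")
resp = cache.on_new_payload("0xbad2", parent_hash="0xbad1")
```

Note the deliberate limitation mentioned above: if the CL skips a payload, the EL cannot link the gap back to the invalid chain, and the cache simply does not fire.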
E: This is the option; it's not that complicated, but it's complexity anyway. The other question that is important: what would happen if we do not support latestValidHash, or we do not support sending this information to the CL while the EL is syncing? What may happen is just that the CL will not know that this chain that it follows is invalid, and will keep following it and pulling it from the network, if it's available in any place of the network.
E
Probably
in
this
case
a
node
is
under
eclipse
attack
and
this
being
fed
with
invalid
chain.
Ndl
will
just
like
assume
that
the
l
just
see
the
invalid
block
and
drops
the
entire
chain
and
cl
doesn't
know
about
it
anything
and
keeps
sending
new
blocks
from
this
invalid
chain.
They
all
started
syncing
again,
and
this
going
over
and
over.
E: This one is, I would say, optional, to my observation, but people may have other opinions on that.
E
I
would
say
that
you
should
have
probably
l
review
cash
like
for
a
few
entries
so
and
just
catch
them
and
drop
the
most
latest,
one
that
were
was
added
to
this
cache.
It's
like
yeah.
This
this
case
is
only
actually
at
at
one
point
in
time.
There
will
be
only
one
invalid
chain
that
matters,
because
el
only
syncs
with
the
canonical
chain
and
if
canonical
chain
is
invalid,
then
it
matters
otherwise
yeah.
So
there
is
only
one
chain:
yale
is
synced
with.
E
This
is
the
assumption,
and
this
was
also
discussed
previously,
so
basically
yeah.
Probably
there
could
be
two
chain.
A
reward
happens
that
might
make
sense
to
store
in
this
cache
two
three
five
but
propanol
should
like
be
on
on
every
chain
that
new
payload
is
sent
for.
E
Because
yeah,
because
in
this
while
sinking,
what
only
matters
is
the
canonical
chain,
while
this
is
synchronous,
payload
validation,
there
is,
the
response
is
immediate
and
all
the
information
is
available
to
to
make
a
response.
According
to
the
spec.
K
So
two
things
so
I
mean
you
know
I
wouldn't
say
that
this
will
most
likely
happen
during
an
ecliptic
attack
most
likely.
The
el
has
some
data,
corruption
in
the
database
or
in
bad
ram
or
something.
E: Yeah, right, if it's synchronous... yeah, you're right, fair question. Because if you don't have the parent, you would respond with SYNCING, right? So it falls into the second case, not the first one. Synchronous validation happens only when you have all the information available and the parent block is known, and obviously it's valid; in this case, the assumption is that this block is valid.
E
Yes,
you
will
have
to
think
if
cly
consists
on
on
sending
a
a
child
of
invalid
payload.
So
it
must
be
something
bad
with
cl,
because
it
it
got
rid
of
a
response
that
the
parent
is
involved.
G: All right, so, Martin, in your example, the consensus layer sends the next block and it forgot that you already said something was invalid. So this could happen theoretically, like if the consensus layer has a failure, or the consensus layer turned off and then turned back on, or something like that. But we do need to handle the case where it does try to insert something that is...
M: Not only invalid. So if the EL returns SYNCING, but then, while syncing, it discovers an invalid block: because it has already replied SYNCING on the engine API, the consensus layer doesn't know yet that the block is invalid; it only got SYNCING for that block. Then it starts building on that new chain and sends more and more new blocks to the EL.
E: So I've heard that Nethermind stores a few of the most recent invalid block hashes, but this is neither the invalid-tip cache nor the latest valid hash; it's the invalid child of the latest valid block. So...
L: Yes, for Nethermind: we were thinking about it, and our potential fix for that is to keep invalid blocks in our block tree and just mark them as invalid, to have the whole thing, and then, when something is finalized, we can prune the invalid ones from the tree, to not keep garbage there.
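The approach just described, keeping invalid blocks in the tree, marking them, and pruning at finality, might look roughly like this toy sketch. It is not Nethermind's actual implementation, and the propagation pass assumes children are inserted after their parents:

```python
class BlockTree:
    """Toy block tree that retains invalid blocks instead of deleting them.

    Invalid blocks stay in the tree, marked, so later children can be
    rejected without re-execution; once finality is reached, the invalid
    entries are garbage-collected.
    """

    def __init__(self):
        self.parent = {}     # block hash -> parent hash
        self.invalid = set()

    def insert(self, block_hash, parent_hash):
        self.parent[block_hash] = parent_hash
        if parent_hash in self.invalid:
            self.invalid.add(block_hash)  # a child of an invalid block is invalid

    def mark_invalid(self, block_hash):
        self.invalid.add(block_hash)
        # Propagate to known descendants (relies on insertion order:
        # children were inserted after their parents).
        for child, parent in self.parent.items():
            if parent in self.invalid:
                self.invalid.add(child)

    def prune_finalized(self):
        # Once something is finalized, drop the invalid garbage.
        for h in self.invalid:
            self.parent.pop(h, None)
        self.invalid.clear()

tree = BlockTree()
tree.insert("a", "genesis")
tree.insert("b", "a")
tree.insert("c", "b")
tree.mark_invalid("b")  # b and its descendant c become invalid
```

Deferring the pruning until finalization matches the concern raised earlier in the call: before finality, a branch that looks invalid could still matter for recovery, so nothing is thrown away prematurely.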
E: Yeah, and so you can just query the database, right? So when you insert the new payload into the block tree, you may check that the parent is valid. Oh, and...
E: Right, but yeah, this sounds like an attack vector, I mean, if someone...
E
If
this
chain
is,
you
know
pretty
long
and
you
you
should
always
hit.
You
should
make
like
multiple
database
reads
to
to
just
traverse
this
chain
to
find
out
the
latest
valid
hash.
G: You could recompute the cache, essentially: if you get some block deep into an invalid chain, you recursively walk back and then you'll see, oh, this is invalid, and then you can have the cache again. But storing it is fine too; it would just be recomputed if the consensus layer were continuing to pound information into that branch.
G: I think it's very difficult to construct realities where these flows induce load on any large number of clients. So, you know, I want to kind of minimize complexity, but also not leave this as a glaring hole. I don't know, that's not much of an answer. Yeah, I need to think about it.
G: This can only become a problem when a node is syncing, so the attack vector becomes: can I induce a lot of nodes to sync, and thus make this a problem? The answer to that hopefully should be no, but, you know, if I can find a bug that can induce lots of nodes to sync, then I can maybe exploit an edge case in this that we didn't want to deal with, yeah.
G: Building on Marius's point: essentially they have these branches that, because of the way forkchoiceUpdated and newPayload can return SYNCING, SYNCING, SYNCING, once the execution layer does finish syncing, they need to resolve as invalid or not, and if not, they just have these branches where they can never really know if they're valid or not. If this cannot be resolved... I mean, the edge case is really: I have inserted a branch, it has n blocks.
G
Now
you
tell
me
the
the
nth
block
is
invalid.
What
about
the
the
n
minus
one
blocks
before
I
don't
know,
and
then
I
don't
know
what
to
do
with
them
is
the
that's
the
problem.
So
it's.
E: Yep, so we need to think more about edge cases for the syncing part of it, but for the case when the parent block and state are known, and this information can be easily derived, I think it's a must to implement.
E
Yeah
I
like,
I,
don't
think
it's
just
a
must.
Yeah,
that's
bad.
E: And this cache, I don't think it should be persisted, actually. It would be enough that, if the EL has restarted, this cache is gone.
A: Well, can we remove invalid blocks from the block tree? I'm not sure how it is working right now, but on mainnet we are removing invalid blocks from the block tree.
M: Yeah, so I would like to say that in Erigon we probably, most definitely, will need to implement something like this, because it's a problem, and I would like to have tests in Hive, or somewhere, for these kinds of scenarios. And also, on how things with the merge are in Erigon: I would say currently our implementation is alpha quality, and before switching public testnets...
M
I
would
like
aragon
to
reach
beta
quality
and
we
need
to
fix
the
tests
in
hive
implement
this
at
kh
case
and
quite
a
few
things
to
finish
and
also
start
start
preparing
a
release.
So
I
would
say
it
will
take
us
roughly
a
month
to
reach
like
to
move
from
alpha
to
beta
and
yeah
from
from
aragon's.
From
my
point
of
view,
I
would
like
quite
a
bit
of
time
before
before,
switching
public
testnets
to
profile
stake
right.
B
That
was
going
to
be
the
next
thing
I
bring
up
just
want
to
make
sure
there's
nothing
left
on
the
on
the
latest
valid
hash.
But
then
I
can
kind
of
share
my
my
thoughts
about
that
and
how
we're
tracking.
B
Okay
yeah,
so
so,
basically,
I
guess
you
know
everyone's
aware.
The
difficulty
bomb
is
set
to
to
to
happen.
It's
probably
it's
gonna
start
being
felt
around
june
and
and
kind
of
slowly
ramp
up
from
there
tj
rush
and
and
others.
Somebody
else
has
like
a
dune
analytics
dashboards
and
vitalik
has
a
script
which
estimates
it
and
basically,
roughly
around,
like
the
end
of
july,
is
when
you
start
getting
blocks,
which
would
which
would
exceed
like
17
seconds,
which
seems
pretty
long,
and
it's
hard
to
all.
B
These
are
estimates
it's
really
hard
to
to
estimate
how
quickly
the
bomb
goes
off
once
it
actually
starts
starts
having
a
bigger
impact
on
the
overall
hash
rates,
it's
probably
even
harder
to
estimate
now,
because
if
the
merge
is
the
next
upgrade,
people
might
start
selling
their
miners
and
so
take
all
these.
You
know
they
took
like
a
grand
assault,
but
basically
I
think
if,
if
we
want
to
avoid
pushing
back
the
bomb,
what
you
what
you'd
want
is
ideally
not
reach
like
17
second
block
times
and
people
can.
B
You
know,
disagree
on
the
number
feels
like
14
15
might
not
be
too
too
terrible,
and
a
few
calls
ago,
vitalik
had
this
idea
where,
if
anyways
we
are
gonna,
we
are
gonna,
put
out
a
release
with
sort
of
a
fake
fork
block
in
order
to
like
disconnect
nodes
who
who
haven't
upgraded
from
the
merge,
so
just
changing
the
fork
id
and
we
could
do
kind
of
a
mini
bomb
push
back,
but
that
only
buys
us
a
couple
of
weeks
because
it
means
that,
like
we
need
to
make
sure
that
you
know
by
the
time
the
pushback
happens
on
main
net,
the
bomb
is
still
manageable.
B
So
all
this
to
say,
if
we
want
to
like
aim
for
that
sweet
spot
of
like
we,
we
we
upgrade
and
run
through
the
merge
kind
of
before
we
hit
like
17
or
more
second
block
times
working
backward.
If
you
want
to
have
you
know
reasonable
time
for
like
a
main
net
announcement
and
then
reasonable
time
for
test
nets,
we
probably
need
to
make.
Oh,
I
guess
we
we
need
to
make
a
call
about
like
the
test
net
fork
blocks
about
a
month.
B
From
now
so
like
two,
not
necessarily
the
next
awkward
devs
but
the
one
after
I
think,
if
we're
in
the
spot,
like
like
late
april,
we're
like
we're
not
ready
to
say
you
know,
the
fork
is
gonna
happen
on
test
nets.
In
like
a
couple
weeks,
then
I
I
I
think
we
probably
want
to
consider
like
a
longer
bomb
delay
and
how
long
is
something
we
can.
B
You
know
we
can
discuss,
but
that's,
I
think,
that's
roughly,
where
we're
at
right
now,
where,
if
we,
if,
if
like
in
a
month,
basically
four
weeks
from
today,
we're
comfortable
saying
we're
gonna
fork
the
test
nets
in
another
like
three
to
four
weeks,
I
think
we're
in
a
good
spot
to
only
need
probably
like
a
sort
of
mini
bomb
delay
which
we
can
include
with
the
merge
release.
B
If,
if
in
a
four
weeks,
basically
we're
not
confident
about
moving
to
test
nets,
then
I
think
it
makes
sense
to
just
delay
the
bomb
a
bit
more
independently
and
then
and
then
you
know
for
that's
when
we're
when
we're
confident
and
the
client
releases,
and
obviously
you
know
stuff,
like
shadow
14
main
net.
Next
week
will
give
us
a
lot
of
data
about
you
know
how
how
many
more
issues
do
we
find
and
and
yeah?
B
I
guess
just
general
client
readiness
for
to
implement
all
the
stuff
we
just
we
just
discussed
yeah.
So
that's
roughly
my
my
point
of
view,
I'm
curious.
If
anybody
else
have
high
thoughts.
A: I think it is good to observe that, but one comment from my side is that I think that we should run the first public testnet for a longer time; the first testnet should be, I don't know, one month or something. Not like: a week after, let's run the first testnet, and a week after the first testnet we are running the second testnet. That is my opinion, of course; I don't know what you are thinking about it.
B: I guess the counterargument I can see to that is that we get rapidly diminishing amounts of information: like, if the fork actually works, that's a lot of de-risking, and then, if it's still up for an hour after, that's great, and then, if it's still up for like a week after, that's really good. Yeah, I guess maybe I'm wrong here.
B
I
feel
like
that,
like
from
one
week
to
one
month,
we
probably
get
less
information
than
we
do
from
like
nothing
to
a
week,
but
if
that's
wrong
yeah
happy
to
understand
why
andrew.
B: Got it. Marius?
D: Yeah, this is something we should already be testing right now on the shadow forks. I think we shouldn't wait another month to start testing stuff like syncing from scratch. I tested that on Geth yesterday already, and it kind of works. It's painful, of course, because syncing a full node on one of the big testnets from scratch is already painful, but yeah.
D
So
my
my
my
opinion
is
that
we
should
have
two
to
three
weeks
for
between
the
first
between
the
first
and
the
second
test,
maybe
maybe
only
a
week
after
polio,
because
I
don't
think
that
there's
much
activity
on
the
polio,
but
for
like
a
bigger
test
net,
we
should
have
for
the
first
bigger
test
that
we
should
have
like
three
weeks.
D
Yeah
three
weeks,
I
think,
is
a
good
good
thing
and
after
afterwards
the
other
testament
should
be
in
more
rapid
succession,
maybe
every
week,
maybe
every
two
weeks,
but
we
should
like
we.
In
my
opinion,
we
shouldn't
wait
for
every
big
test
net
test
net
for
a
month.
That
would
just
like
delay
things.
A: Yeah, just not to release all testnets in one client release, so not in one release; a separate TTD release for every testnet. That is my opinion.
B: Would it maybe make sense... so we have basically Goerli, Ropsten, and Sepolia that are going through the merge. You mentioned, you know, Goerli and Ropsten are probably the ones we want to see live, or like...
B
We
want
to
have
like
the
longest
data
for
so
like,
maybe
there's
something
where,
like
you
can
start
with
gourley
and
then
two
weeks
after
you
do
sepolia
or
gordia
robson,
whatever
like
whichever
one
of
the
big
ones,
and
then
you
do
sepolia,
maybe
like
two
weeks
after
because
you
kind
of
expect
that
if
you
know
you
shouldn't
like
learn
that
many
new
things
on
sepolia
assuming
the
the
previous
one
went
well
and
then
two
weeks
after
that
which
is
like
four
weeks
from
the
original
you
have,
you
have
the
second
big
test
net
and
then
and
then,
if
you,
if
you
do
that,
then
by
the
time
we
like
choose
magnet
blocks
like
once,
we
once
we've
seen
that.
B: Yeah, maybe using this testnet, the Sepolia testnet, which is lower stakes, in the middle allows us to have more time on the other two, which have more activity, basically.
B
Okay,
I
guess
yeah,
we
can
continue
kind
of
the
test
net
ordering
conversation
async
as
well,
but
I
guess
people
generally
seem
to
agree.
We
want
like
more
than
one
week
between
each
of
the
test
net
like
we
did
for
like
london
and
probably
more
on
the
order
of
like
two
to
four
weeks
and
depending
on
the
type
of
test
that
does
that
generally
make
sense.
B
Okay,
cool
and.
B: Yeah, I'll look at what that would look like from a schedule perspective, and we can chat about it on the next call; and, yeah, when we would need, basically, to make a call on delaying the bomb, based on when we choose the testnet forks.
L
Did
you
ten
testing
around
pruning?
How
do
you
avoid?
Do
you
like
track,
would
finalize
and
allow
pruning
only
the
things
that
are
finalized
or
how
is
it
working
for
you.
D: I think we committed ourselves to not doing that, like, again back in the summer, because we think, if something breaks, and if we were to break finalization, then we should be able to handle it, and we want to be ready for that. We hope that it's never going to happen, but if it's going to happen, then we should at least support it.
K: Yeah, presumably. I mean, with proof-of-work, the way we work, we would only store a bad block if its proof-of-work is valid; if someone sends us some random junk, we wouldn't have done so. If the proof-of-work is great... and in a post-proof-of-stake world, well, I'm not sure, I guess anything...
G: You say you keep the stuff not in the archive, or not in the freezer, whatever, for 30,000 blocks, so that you can repair things if things go wrong. But if you're only storing the first block of an invalid chain, an invalid chain could in fact be a consensus-split chain. How does that actually help you recover?
B: Okay, so next up: Shanghai. First, we had Alex with some updates on EIP-4895, and then we had some other potential CFI EIPs. So Alex, you want to go ahead and share the list?
N: Yeah, yeah, I mean, so basically, we've talked through the process of withdrawals over, you know, a couple of AllCoreDevs now. Just to remind everyone where we've gotten: essentially, you'll have withdrawals happening on the consensus layer; if you actually just want to go to the rendered EIP, that works as well, but either way, you have the consensus layer managing withdrawals, and they're dequeued in some way and piped into the execution layer.
N
The
questions
that
we
had
open
last
time
are
basically
essentially
syntax
like
how
do
we
want
to
you
know,
sort
of
structure,
the
block
and
the
header
and
different
things,
and
so
yeah
thanks
tim.
This
is
just
what
I
want
to
show
now,
if
you
scroll
down
just
a
little
bit
further,
there's
a
block
rlp
here
where
essentially
all
I
did
was
say:
okay,
this
is
going
to
be
appended
right
after
everything
else,
and
this
is
like.
N
Similarly,
in
the
header,
you
have
a
root
for
all
withdrawals
that
is
again
opened
to
the
end.
So
I
think
that
was
the
main
open
question
and
I
just
wanted
to
get
feedback
on
that
point.
Otherwise
everything
else
should
be
about
the
same.
K
Sorry
I
got
a
phone
call
in
the
middle.
I
didn't
notice.
That
was
my
parent.
Yes,
I
was
wondering
if
we
need
a
protocol
update
for.
M: So, one small comment, I think, just for precision: the withdrawals shouldn't be an RLP; the withdrawals should be specified as a list, because the RLP should be applied at the end. Like, everything is a list or a byte array in RLP, and then you apply RLP at the end. That's it; it's a small technical comment.
M
I
think
in
in
you,
you
define
withdrawals
as
as
an
rop
of
a
list,
but
then
kind
of
because
rop
is
a
string
representation,
but
we
had
a
similar,
somewhat
similar
problem
with
type
transactions
when
you
additionally
wrap
rop
as
a
byte
array
of
things
like
that.
So.
M
Currently
like,
for
instance,
trunk
transactions
in
in
block
bodies,
transactions
are
defined
as
list
and
then,
when
you
serialize
block
body
as
an
rlp,
then
you
serialize
transactions
as
an
rlp.
But
if
you
define
withdrawals
already
as
rop,
it
means
that
they
are
a
string
instead
of
a
lister
kind
of
you.
You
additionally
wrap
a
list
into
as
rop
byte
array
right.
I
want
to
avoid
this
additional
wrapping.
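The distinction being drawn here can be shown with a minimal hand-rolled RLP encoder (short forms only, enough for the example): embedding the withdrawals as a list item inside the body produces a different serialization than pre-encoding them and embedding the result as a byte string, which is the extra wrapping being objected to.

```python
def rlp_encode(item):
    """Minimal RLP encoder: items are bytes or (nested) lists of items.

    Only the short forms (payloads up to 55 bytes) are implemented,
    which is all this example needs.
    """
    if isinstance(item, bytes):
        if len(item) == 1 and item[0] < 0x80:
            return item  # a single byte below 0x80 encodes as itself
        assert len(item) <= 55, "long-string form omitted in this sketch"
        return bytes([0x80 + len(item)]) + item
    payload = b"".join(rlp_encode(x) for x in item)
    assert len(payload) <= 55, "long-list form omitted in this sketch"
    return bytes([0xC0 + len(payload)]) + payload

# A withdrawal as a plain list of fields
# (index, validator_index, address, amount) -- toy values.
withdrawal = [b"\x01", b"\x02", b"\xaa" * 4, b"\x64"]

# Preferred: the withdrawals list is a *list item* inside the block body,
# and RLP is applied once, at the end.
body_as_list = rlp_encode([[withdrawal]])

# Objected to: pre-encoding the withdrawals and embedding the result as a
# byte string adds an extra string wrapper, so the bytes differ.
body_wrapped = rlp_encode([rlp_encode([withdrawal])])
```

Since the two serializations disagree, any spec wording that says "the withdrawals field is an RLP" rather than "a list" would make clients produce incompatible block bodies.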
J: And we already agreed last week that we're not going to use the ommers hash; we're going to add a new thing on the end. Is that correct?
G: Oh, it was initially put in there actually to disambiguate, you know, if you're following logs, and to make it useful, because, theoretically, with partial withdrawals you could have a collision there where you get the same withdrawal twice. But this might actually have negligible value, and we can consider removing it, especially with the push method. Very open.
G: No, I mean, if you were tracking all withdrawals ever, then there would be a one-to-one mapping from the consensus layer to the execution layer, but they are dequeued from the consensus layer, so they're not tracked in a list forever. But if you were tracking them, you could map them one-to-one with no problem.
G: Yeah, the light client... there is one; we can link to it here.
B
Okay, so next up we had Moody with an update on EIP-1153, and I know there were already a lot of comments in the agenda about this. Moody, do you want to summarize where things are at?
P
So right now, I think my goal for this call, which might be a little bit lofty, is just to get this EIP to CFI for Shanghai. And that's not to say that I want people to commit to putting it into Shanghai; I just think we need to get it to a point where we can fund and do the work for potentially a future hard fork, and I think CFI will help a lot with that, just in terms of signaling from client developers that it is actually a good and useful change. And so, from the last call.
P
When I talked about this EIP, there were a few follow-ups, which I can talk about, if that sounds good to you guys.
B
P
Yeah, so last time there were a few follow-ups. First was the impact the EIP would have. The second was a concern about the linear memory expansion costs, which we've elaborated on a bit in the EIP. And the third was: is this the best version of the feature — the best API for some kind of persistent memory across call contexts? So I'll start with the first one, about impact.
P
So I mentioned the issues — a bunch of developers from different teams are all interested in seeing this feature implemented. We estimated in the thread, just for Uniswap v2 — the Uniswap v2 reentrancy lock alone, and just for the storage load operation being saved — it's on the order of billions of gas, just for Uniswap v2; and then there's also the question of how much load there is on nodes.
P
We prototyped things that weren't possible before — for example, this ability to take a lock across multiple pools and do things between all the pools without doing any ERC-20 transfers. In some cases the gas savings are drastic when you don't have to transfer ERC-20s, or call into ERC-20s, in certain cases. There's a lot of context on that in the thread, including some gas estimates we did for different prototypes.
P
So that's a little bit about the impact. The second one is the memory expansion cost. The amount of memory you can allocate is one issue, but Martin brought up an issue about the journaling costs, which isn't really fully included — or not fully analyzed — in the EIP. I think a simple solution to the journaling cost is just limiting the number of keys that can be stored in the map per address, and I can update the EIP to do that.
P
I think a thousand keys is just fine — 1,024 keys, or double that, or something on the order of a thousand keys is fine. And then there was a lot of conversation about: is this the best version of the feature?
P
I think one alternative was making a persistent memory opcode, which is by address and works more similarly to memory. We've had a lot of conversation — I've talked to Charles from Vyper.
P
He was asking about a different API where TSTORE and TLOAD were by address but still backed by a map in the client, so you could write any 32-byte word at any offset. But that one was a little bit obfuscating for the Solidity developers and the compiler developers. So I think this TSTORE/TLOAD interface is probably the best version, just because the language support is the easiest to add — it's very similar to SSTORE and SLOAD.
P
So it's easy to think about, and there's also already sort of a concept of transient storage in the EVM, which works through SSTORE and SLOAD. It's just not very well supported, because of the refund cap and the fact that you have to use refunds, and then it also has to load from storage.
P
So there's already this concept in the EVM, supported with SLOAD and SSTORE, so it kind of makes sense to implement it via TSTORE and TLOAD. And then finally, mappings and dynamic arrays are really heavily used in the use cases we have, and they're not supported today with memory in Solidity, so it would be extra work to get developers to start using it and for it to be available in the language. And then, yeah.
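To make the semantics being discussed concrete, here is a toy Python model of EIP-1153-style transient storage — a per-address key/value map that is discarded at the end of the transaction — together with the kind of reentrancy lock the Uniswap discussion refers to. This is only a sketch: the 1,024-key cap is the illustrative bound floated on the call, not a spec value, and none of the names come from any client.

```python
class TransientStorage:
    """Toy model of transient storage: a per-address map of keys to
    values that lives only for one transaction, so nothing hits disk."""

    MAX_KEYS_PER_ADDRESS = 1024  # illustrative cap to bound journaling cost

    def __init__(self):
        self._slots = {}

    def tstore(self, address, key, value):
        slots = self._slots.setdefault(address, {})
        if key not in slots and len(slots) >= self.MAX_KEYS_PER_ADDRESS:
            raise RuntimeError("transient storage key cap exceeded")
        slots[key] = value

    def tload(self, address, key):
        # Unset transient slots read as zero, like SLOAD on empty storage.
        return self._slots.get(address, {}).get(key, 0)

    def end_transaction(self):
        self._slots.clear()  # everything is discarded after the transaction


# A reentrancy lock on top of it: one transient slot per contract,
# set on entry and cleared on exit — no SSTORE/refund gymnastics.
LOCK_SLOT = 0

def with_reentrancy_lock(ts, address, body):
    if ts.tload(address, LOCK_SLOT):
        raise RuntimeError("reentrant call")
    ts.tstore(address, LOCK_SLOT, 1)
    try:
        return body()
    finally:
        ts.tstore(address, LOCK_SLOT, 0)
```

A nested call into the same address while the lock slot is set fails, and `end_transaction` wipes the map, which is the property that makes this cheaper than the SSTORE-plus-refund pattern.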
P
There was one more comment, about the fact that this is sort of just a gas optimization and doesn't revolutionize anything. I think it.
P
I think it opens up new smart contract patterns that will quickly become canonical, and so, in my opinion, it does have a bit more impact than is clear to see via, you know, gas estimates. I think it has a lot of potential to improve the developer UX for interacting with, you know, swap contracts, for example, but it's hard to express that without sharing too much.
B
Right, thanks a lot for the update. I know Martin and you had a long back-and-forth in the agenda comments, and Andrew also brought up earlier the idea that something like, say, deactivating SELFDESTRUCT could be higher priority than this for Shanghai. So I guess, just to take a step back here: within Shanghai we already have a handful of small EIPs — I don't have the full list.
B
We already have this withdrawal EIP, and — oh, EOF was the other big one. So we have EOF, this withdrawal EIP that's CFI'd, and a couple of other small changes like the warm COINBASE, the PUSH0 opcode and whatnot. And I think on the previous calls we discussed a bunch of potential solutions to lower the cost for layer twos; none of those are officially CFI'd yet, but there's some progress on them. So I guess, yeah, I'd just be curious to hear from clients in general.
B
Like, do you think we can add anything more as CFI'd for Shanghai now? Would you want to wait until we're a bit farther into the implementation to decide that? Are there things you think we should potentially earmark now? Yeah, I'm just curious how people generally think about that. And Andrew, you have your hand up.
M
M
So EIP-4758, deactivate SELFDESTRUCT, is a great candidate for Shanghai inclusion, because it's a simplification and it also paves the way for the Verkle tree. So our preference would be to include 4758 in Shanghai. As to 1153, I would say we can CFI it for Cancun, and also, I think, because Shanghai is already big, I would CFI 4844 — shard blob transactions — for Cancun as well.
B
Got it, thanks. Micah, you have your hand up.
J
I just wanted to comment that I'm ambivalent on Shanghai or not Shanghai, but I do think there's significant value in core devs giving signals as to whether this is something we are likely to include in, you know, some near-future hard fork. I think we all agree this is kind of a neat, useful thing, but is this something where we can say: yes, you know, Moody, go tell the Uniswap team and other app developers that you should spend money and time —
J
— you know, developing this further, and working with the Solidity team and with client devs to implement this? Or should we tell them, you know, we're probably not going to get around to this anytime soon, and I wouldn't recommend spending money and time on further development for now.

I just want to say that Moody's question — I think what he started with was whether this could be moved to CFI, not necessarily whether it should be included in Shanghai. I think that's ideally what he'd like, and the Uniswap team is interested in it, but I think he's just talking about CFI. And I don't know if it's right to say, as Andrew just brought up, that this should be removed from Shanghai — which it already isn't being included in — in favor of removing SELFDESTRUCT. I don't know if that's the right way to think about this, right?
B
I guess the one thing I'm just cautious about is that we haven't implemented anything that's in the CFI list yet. If we already had three-quarters of it implemented, I think it would be a different conversation, where it's easier to say, okay, let's try to potentially add this as well, and then, if it's working as expected and there are no issues with it —
B
— then we can move forward. But the fact that there are already at least six things that are not implemented — yeah, I'm just a bit cautious that we get, you know, to after the merge, and we have a list of ten things we need to implement, and we're basically back in the same spot. Yeah.
P
So sorry, I should give an update on the status, in terms of the development work we've done. We've actually implemented it — it's incomplete — we've implemented it in the ethereumjs VM and it's been merged, and that's how we've been doing the testing: against, you know, Hardhat with that ethereumjs VM. And yeah, I agree there's a ton of testing — we've written some of the EVM bytecode tests, but of course we have more to write. Okay, yeah.
B
So yeah, sorry — I guess I was more responding to axic. But yeah, it's good that it's obviously implemented in one client and that it works, and I think that's really valuable. But there is a difference between that and its being implemented across, like, the four clients, where, you know, we've tested it and it works, and we've had a devnet running it — and we don't have a good word to discern between those two things.
B
I guess I'm curious: does any client team feel strongly that this should be made CFI for Shanghai now? Yeah, that's probably a good place to start.
B
P
K
You know, including this — I mean, I would say yes. But if CFI is some kind of stamp that this is now slated for inclusion — maybe not in Shanghai, but definitely Cancun or whatever comes next — then I would be hesitant to say so right now.
J
B
B
So it's like — I think even if we had strong consensus on this call that EIP-1153 is the most important thing to do after Shanghai, it's kind of hard to make a commitment that that'll still be true in a year. Yeah. And I guess we can collect signals — we can timebox this to maybe two more minutes.
P
So I'm not sure I understand what CFI means. Just based on the naming, it sounds like it's "considered for inclusion," not necessarily inclusion, so in —
B
— in practice, yeah. In the past, what we've done is we've tried to implement all the CFI'd EIPs, and if issues came up during the multiple client implementations, we would, you know, remove them — as happened with, say, the BLS EIP, where we implemented it and we still weren't confident in it. But there was this implication that if we implement it, then it's all good — the default path would be for it to go to mainnet.
B
Yeah, I'm sorry to cut this short, just to leave some time for the Görli conversation. I do think, as Marius commented, we probably don't have the consensus to move this to CFI today. And maybe one thing we should discuss on the next call is just, generally, when do we want to start — do we want to maybe hold off on accepting new stuff for CFI until we're a bit farther ahead with Shanghai implementations and we have a better view? But I think we can probably discuss Shanghai CFI at a higher level on the next call, and if there is no time to discuss that on the next call, then it probably means by default we're not ready to add new stuff, because we're still dealing with the stuff that's happening right now. But yeah — just to make sure we can cover the Görli thing. Afri?
O
Yes, thank you very much. I know 90-minute calls are draining, so I will keep it short. There is this kind of issue with some speculation on the Görli testnet, and I kind of feel a responsibility to counter this speculation. I personally believe the Görli testnet still has a lot of significance — initially with eth2, or consensus-layer, testing, and now with merge testing, with Prater depending on Görli. And I believe, as opposed to older testnets such as Rinkeby and Ropsten —
O
— that might be deprecated soon, Görli should be around for another couple of years at least after the merge. And I would love to drastically inflate the available supply of Görli testnet tokens — first of all, to avoid further speculation on any non-zero value of Görli test tokens, but also to provide application developers, and also consensus-layer testing engineers, with the necessary resources to conduct testing. I used to have a huge stock of Görli ether.
O
This is coming to an end soon, and I know the timing is not very fortunate right now, because we are going all in on the merge this year, but I would like to propose to inflate the Görli testnet supply. I have written an initial proposal — I called it Görli EIP-1, because I don't think testnet-specific stuff necessarily needs to go through the EIP process or to bind that many more resources.
O
So I proposed to pre-fund an externally owned account with 92 quintillion ether on the Görli testnet, to keep it short. Martin said that we should not do this, and I think Péter commented somewhere that we should not do network-specific or testnet-specific forks. Eventually, Martin proposed that instead of forever hard-coding one key I hold, we could have a more generic approach: pre-funding all active validator balances by a certain amount.
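The generic approach could look roughly like the following one-off state patch at a chosen fork block — a sketch only: the top-up amount and the account layout are placeholders for illustration, not the actual Görli EIP-2 numbers.

```python
# Illustrative per-validator top-up; the real amount would come from the
# Görli EIP-2 discussion, not from this sketch.
TOPUP_WEI = 10_000_000 * 10**18  # e.g. 10M test ETH per validator

def inflate_goerli_supply(state, validator_addresses, topup=TOPUP_WEI):
    """Apply the top-up once, at a chosen fork block: credit the
    execution-layer balance of every active validator's address,
    creating empty accounts where none exist yet."""
    for addr in validator_addresses:
        account = state.setdefault(addr, {"balance": 0, "nonce": 0})
        account["balance"] += topup
    return state
```

Compared with hard-coding one pre-funded key, this spreads the inflated supply across every active validator, which is what makes it a one-off patch rather than a permanent consensus-engine change.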
O
So I created Görli EIP-2 — it's linked in the agenda under the first comment — and again, if anyone wants to take a look: I just want to get a general notion from the core devs on this call as to whether this is something feasible that we should consider — I would really appreciate it if we could do something like this on the Görli testnet — or, no, we should not. It has also been proposed to do something with the block reward.
O
But I believe we should keep it really simple, because if we modify the clique block reward, then we will be playing around with the consensus engine, and this might drain too many resources from core developers, which is not really necessary. So it should be a one-off thing, I believe.
B
So, just — yeah, because we're very close to time: does anyone have an objection to generally addressing this? And, you know, whether we do it through a single balance increase or an all-validator increase — just at a high level, is anybody opposed to the idea of increasing the supply, to make sure that the testnet coin's value basically stays zero?
J
D
B
And so — okay, so I guess the other thing we might be able to get rapid consensus on is basically: is this something we want to do before the merge or after? I suspect doing it before the merge would delay things, but I'm just curious, from client teams — you know, assuming we go ahead with some version of this proposal — what's simplest from a client developer perspective in terms of timing? How do people feel about this?
K
So with this proof-of-authority network, the coordination of the network itself is pretty easy — it's like four or five nodes that need to be updated. But then you have the other infrastructure and the CL tests that rely on Görli, and all of those — I mean, getting the testnet itself to upgrade with the signers is pretty easy, but getting all the other infrastructure and CL integrations to upgrade as well might be a bit more tricky, and that would be the thing that causes a bit of delay, as well as operational issues.
B
So, okay, clearly there are some comments in the chat with different thoughts. What's the best way to continue this conversation async over the next two weeks, Afri? And then we can cover this a bit earlier on the next call.
O
I personally would appreciate it if people could spend some time on the Görli testnet repository to just leave a comment. I personally believe that, because it's only addressing a testnet, this can be totally solved asynchronously. Also, there are alternate ideas — just to wrap this up: creating a new testnet, which obviously has the downside of losing all the existing infrastructure. There might be a compromise in between these ideas: to have a regenesis — so basically, we reset the Görli genesis.
O
This is also something that could be interesting: retaining the name and the infrastructure, but having a new genesis — basically starting from scratch. This is also something we can discuss, and I will invite everyone to the Görli testnet repository to discuss these ideas and eventually come to a conclusion.
B
Q
Sure, yeah, just a really quick update: we're continuing to work towards getting some sort of stable devnet running. We have George working on a tool to submit data blobs that actually commits to the blobs with the KZG commitment; Vitalik wrote a really nice FAQ for the EIP, which I linked above but I'll link again here; and then proto is going to start working again on the integrations next week.
Q
So, you know, I think by the next ACD it's pretty likely we'll have some sort of local devnet available for people interested in using it, and then maybe by the ACD after that we'll have more of a longer-running devnet available. I think we're going to onboard some more people onto this over the next couple of weeks as well, so we'll just keep posting updates in ACD and in the other relevant Discord channels.