From YouTube: Ethereum Core Devs Meeting #137 [2022-4-29]
C: So, right as we start streaming, Micah is in with the comments. Welcome everyone to AllCoreDevs number 137 today. I've posted the agenda in the chat. We have a bunch of merge-related updates; frankly, that's probably all we're going to have time for. I guess, to kick us off: do we have Pari here? Yes? Okay, so we have Pari. Pari, do you want to walk us through the two shadow forks that happened last week and what happened there?
D: So we had two shadow forks last week, during Devconnect. The first one was a Goerli shadow fork, Goerli shadow fork 4, and this was, I think, the first shadow fork where we had multiple clients taking part. I think everyone made it through the transition, but Besu and Erigon had issues post-transition and stopped working. During the week the teams pushed a bunch of fixes, and we had mainnet shadow fork 2 on Saturday, so that's about six days ago, and mainnet shadow fork 2 worked a lot better.
D: We didn't have any major issues. It seemed that all clients did hold through the transition, and well after it too. We did uncover a couple of issues with deposit processing, and we're looking at more ways to harden that. We did have an issue with late blocks being proposed by Prysm; a fix was pushed relatively soon after it was discovered, and the network has been quite good since then.
D: The other issues we found through the week were some proposal-related incompatibility between Nimbus and Nethermind, which has been fixed now. I think we had two issues with Besu-Prysm, but I think those have also been fixed by now, and Erigon-Prysm is still undergoing triage. I don't think we know what's going on there yet, but there are other Erigon nodes that are in sync, so it could just be some incompatibility we have to figure out. In general, the network is stable.
E: Yes, I'd like to add that occasionally I hear reports of Erigon nodes being stuck, especially when people try to sync mainnet some time afterwards. So I have to investigate the sync-stuck issue, and fixing Hive tests is also something on my plate. So there is still a lot to fix in Erigon for the merge.
C
Call
it
any
other
crime
team.
F: Yes, I have one thing I've been discussing a bit with Pari after the shadow fork, and I guess this also ties into the Erigon comment just before. These shadow forks are kind of nice to test the transition.
F: But they are always testing it with perfect clients, so to say: all the clients, both the beacon clients and the execution clients, are in sync, so everybody is just going in lockstep and waiting for the thing to hit. While that is nice, I think it would also be interesting to somehow create some tests where certain client combinations are out of sync, or are whacked, essentially. Setting this up would probably be quite messy or quite complicated; maybe some API endpoints would be needed to actually test these scenarios.
F: I guess the problem here is that manual testing has a thousand problems. It's not really reproducible, so if something goes wrong, you have no idea what happened or what went wrong. At least speaking from Geth's perspective, we have an API called set head. So what we could do is, right before TTD hits, just set the head back, I don't know, 60 blocks or something, and see what happens.
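For reference, a minimal sketch of the kind of rewind being described here, assuming a local Geth node with the debug API enabled on localhost:8545; `debug_setHead` is a Geth-specific debug method, and the 60-block depth is just the figure mentioned above:

```python
import requests

RPC_URL = "http://localhost:8545"  # assumed local Geth endpoint

def rpc(method, params):
    resp = requests.post(RPC_URL, json={
        "jsonrpc": "2.0", "id": 1, "method": method, "params": params,
    })
    resp.raise_for_status()
    return resp.json()

# Right before TTD would be hit, move the head back 60 blocks and
# watch how the client re-syncs through the transition.
head = int(rpc("eth_blockNumber", [])["result"], 16)
rpc("debug_setHead", [hex(head - 60)])  # Geth-only debug method
```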
F: I don't know if something like that is available on beacon clients, but I think it would be nice to be able to do at least some basic corner-case checking, even if not a very exhaustive list. The problem is that what we did test with the shadow forks is that the whole thing can transition through, where the transition is just the beacon client feeding the execution client blocks one by one. But if, let's say, your node is offline for whatever reason, say you just updated Geth because it wasn't ready for the merge, and you restart it and all of a sudden nothing works because there's some synchronization issue, then it's going to get messy. Eventually it will happen that you miss a block for whatever reason, and then you need to actually fall back to proper sync, and that needs to be somehow tested.
C: Got it. Anyone on the testing side have any comments about this?
I: Yeah, sorry, I might have covered something in Amsterdam. I did a bunch of tests where I stopped the beacon nodes, so the consensus side; the execution layer nodes would just stand there and not be fed any new blocks, and then I restarted the beacon nodes so that they sync up and feed the execution nodes the blocks one by one.
I: I also had a synced node delete the consensus layer database and resync the consensus layer with the execution layer running, and the other way around.
C: Awesome. And Péter, just to make sure I understand exactly what you're saying: you want to make sure that we can have these imperfect clients, with all these syncing states, when the merge is happening, right? The fear is not that they can't get into sync after the merge, but it's as we're actually going from TTD to finalizing.
F: Then, if you reach the TTD, you need to wait for a beacon client to pop up if there isn't one already listening. This is a bit different for full sync and snap sync. So there are these weird scenarios that we try; I mean, I tested them quite a lot in various settings on the Kiln testnet, so we do try to test it extensively.
J: For us, we are still working a bit on these cases, and we are aware of some problems, so we need to test it on Kiln and the shadow forks.
C: Is it a possibility to also tell people that they should have a synced node when the actual merge happens, and if not, basically sync it on the other side, or something like that? I think if they're in step and synced pre-merge, they should obviously be fine, and if they sync once the merge has happened, they should obviously be fine there as well. But is it possible to just tell people, I mean, you need to...
L: ...be up at the head and synced. I want to just, that's already Geth.
N: And you might actually wind up in a worse situation than if you had just stayed not synced, because then it wouldn't be stuck in the past, including over snap sync. So whenever we issue some documentation about this, about syncing and the merge, we should try to highlight that. I guess the same thing needs to be true for other clients as well; presumably that syncing needs to be done with a beacon node attached.
C: Basically, if you run a node post-merge, you are running two pieces of software, right: the consensus layer and the execution layer. I think in the first testnet announcement post we're going to try and expand on that even more, for stakers, for non-stakers, for people who are running a node on the execution layer today, so that it's very clear what they need to do. Because if you're running just Geth without a consensus layer post-merge, then you're kind of not running the full Ethereum chain.
K: There's also the ability to surface warnings and errors to the user, because the exchange-transition-configuration call would not be happening, so you wouldn't be getting any ping, essentially, on the engine API. You could do a lot with that information, but at the bare minimum you could expose some big warning to users.
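For context, this is the periodic handshake being referred to: the consensus client calls `engine_exchangeTransitionConfigurationV1`, and if the calls stop arriving, or the two sides disagree, the execution client can surface a warning. A rough sketch, assuming an engine API endpoint on localhost:8551; the TTD value here is a placeholder, and the JWT authentication a real engine API connection requires is omitted:

```python
import requests

ENGINE_URL = "http://localhost:8551"  # assumed engine API endpoint

config = {
    "terminalTotalDifficulty": "0x0",  # placeholder value
    "terminalBlockHash": "0x" + "00" * 32,
    "terminalBlockNumber": "0x0",
}
resp = requests.post(ENGINE_URL, json={
    "jsonrpc": "2.0", "id": 1,
    "method": "engine_exchangeTransitionConfigurationV1",
    "params": [config],
}).json()

# The EL echoes back its own view; a mismatch deserves a loud warning.
if resp.get("result") != config:
    print("WARNING: transition configuration mismatch:", resp)
```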
F: What we've been discussing is that after the merge, clients would make a new release in which it is actually marked that this network transitioned, essentially just some field in the genesis config, and that would be super helpful, exactly to circumvent this scenario from causing trouble long term. Because we can say, okay, there was one field which was the TTD, but we can also add a second field.
A: Could we include that in the pre-merge release? Like, is there a reason that people should be able to run Geth, the latest release that you put out right before the merge, without a beacon client? This flag would essentially disable that.
A: I'd assume that we'd still be able to communicate with them; you'd still be able to communicate with people who haven't upgraded up until the merge, right? It's not until the fork actually happens that you lose communication. And so, if you're running a merge-ready client and the merge has not happened yet, and I'm sure I'm missing something here, but it feels like we should just say: hey, if you don't have a beacon client at that point, your execution client will stall.
F: Well, yeah, but we would release Geth here, you'd have a month to upgrade and then make sure the connection and communication work. Whereas if I add this feature, all of a sudden it's not about making sure you have a correct setup at the merge; rather, you need to make sure you have a correct setup at exactly the point you upgrade Geth. But since you haven't upgraded Geth yet, you don't even know what the correct setup is.
A: Your concern here, if I understand correctly, is about a user who upgrades their Geth. You want them to be able to upgrade and have downtime of, you know, a minute, however long it takes them to upgrade, and then they can iterate on connecting to a beacon client, yada yada yada, and they have a month to do that. Whereas if Geth refuses to start without a beacon client, then when they go to upgrade they're offline for, you know, a week, while they figure out how to run a consensus client. Yeah?
A: I think you want to be careful that the execution client does not do any sort of database upgrades or anything like that internally until after it has established communication properly, so you can downgrade. Basically, if you want to do that, you have to make sure that the downgrade path is very, very clean, because people will upgrade to the latest version and then find "oh, I needed a consensus client to make this work", and then they go to downgrade. You want to make sure that downgrade works very, very well.
F: Plus, the other potential question is how you even tell the user that they need to do something. For example, if Geth just refuses to start, maybe it will go into some boot loop where some Kubernetes manager keeps trying to start it and it keeps refusing to start. So somebody needs to somehow dig up a log, and you have downtime of, I don't know, half an hour until the operator actually figures out what's wrong. I don't know.
K: I guess the counter to that is: if it starts and it just works and they don't look at the logs, which are warning them that their beacon node's not synced, then they're going to fail at the merge. But again, I think this can be left to clients. It seems like a user-experience decision that each client can make.
C: Yeah, and on the communication side, we will communicate loudly, and already have started to, that you need to run both parts. I think this is probably the most anticipated upgrade in Ethereum; no one who's running infrastructure-level, production nodes does not know that the merge is happening.
C: I think as long as it's clearly explained in clients what the behavior is, and we clearly explain in the announcements what is required of different stakeholders and how they should set up their infrastructure, it seems unlikely that most people, or a large part of infrastructure, would do this wrong. There will be someone somewhere who messes this up, like we see in every single network upgrade, but I think the vast majority should be very well aware that this is happening, and as long as we have good communications they should be able to do the setup. And Mikhail...
C: That's not quite true: there were a bunch of miners who messed up London, yeah. I spent a couple of days after London reaching out to the people who had messed it up. So there will be some, but that's expected every time.
F: This approach is opening up a bit of a can of worms. I mean, every client has their own cross to bear, but one issue that I can see is this: let's suppose you do have a proper setup already, you do have the beacon client, you do have Geth properly upgraded, and then you just want to restart your system. Well, your Geth node starts up faster than the beacon client, and boom.
F: It's refusing to start because there's no beacon client yet, because the beacon client is still starting. Or what happens if the beacon client just drops off because you update and restart it? It depends on how aggressive this mechanism is, but you can end up with weird scenarios even after the merge, for example.
C: And yeah, earlier in the chat Pari said he can try to manually set up some of these out-of-sync instances, so we can actually run a bunch of manual tests on them, at least. Beyond that, is there anything else...
C: ...that people feel we should be doing? Obviously, all the client teams should test their own software, and the various combinations with other clients, and make sure that they walk through the edge cases, but beyond that I'm not sure what else we can do.
I: In Amsterdam we discussed a lot of testing tools that we need, or that we could implement, and Sina from the Geth team has already started implementing some of them in Geth. But it might be really good for other client teams to also implement some of those RPC calls, so that we can at least make debugging issues, if they arise, way easier.
C: Any other thoughts on the shadow forks in general?
C: Okay, and there is another shadow fork planned for this Thursday, Pari, is that correct?
D: Yep, exactly, the next shadow fork is Thursday. My nodes are already syncing; I just have to upload the configs to GitHub, and I'll announce it in the channel soon.
D: I think if we were to automate the process a bit more, then we could have Goerli shadow forks every two or three days; I would see value in that. Otherwise, if we're only going to manually do two shadow forks, I don't see the value in doing the Goerli forks anyway.
D: Okay, sounds good. Another thing is, I'd like to deprecate mainnet shadow fork 1, and we'd keep mainnet shadow fork 2 around, the one that happened last week. Unless someone's testing anything on fork 1; if not, I'd like to deprecate it later today.
C: No objections. Anything else on shadow forks generally?
I: Oh yeah, because Jamie just wrote: we saw one issue that we only saw on the Goerli shadow fork, so I have to think about whether it might make sense to have another one or two Goerli shadow forks.
D: But when we were talking about it in Amsterdam, we said that we'd rather use the dev time allocated to Kurtosis for testing weirder edge cases, for example pausing Docker containers around the transition, or, I'm not sure, we have to figure out what other weird cases we want to toss in there. Do we still want to go down that route, or do we want to focus their efforts on having a Goerli shadow fork in Kurtosis? I don't think we have enough manpower to do both.
C: Next thing: on the last call we discussed the latest-valid-hash issues. Mikhail, I know there have been a lot of conversations about that over the past two weeks. Do you want to give us a quick recap of where we ended up with that?
M: Yeah, in Amsterdam we decided that the engine API spec stays the same, and every client will just adhere to the spec. Also, we'll cover this with tests. Basically, that's it. It means that the EL will respond with the most recent valid ancestor hash in case an invalid block is found on the chain the EL is syncing, and the CL may use this information to remove the invalid subchain from its block tree.
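As a rough illustration of this behavior (the tree representation here is hypothetical, not any client's actual code): the EL answers `engine_newPayloadV1` with a payload status, and on `INVALID` the CL can walk back from the rejected payload to the reported `latestValidHash`, dropping the invalid segment:

```python
def prune_invalid_subchain(parents: dict, invalid_hash: str,
                           latest_valid_hash: str) -> None:
    """Drop blocks from invalid_hash back to (not including) the
    most recent valid ancestor reported by the EL."""
    cursor = invalid_hash
    while cursor != latest_valid_hash and cursor in parents:
        cursor = parents.pop(cursor)  # remove block, move to its parent

def on_new_payload_response(parents: dict, payload_hash: str, status: dict):
    # status mirrors the engine API PayloadStatusV1 shape, e.g.
    # {"status": "INVALID", "latestValidHash": "0x..", "validationError": ..}
    if status["status"] == "INVALID" and status.get("latestValidHash"):
        prune_invalid_subchain(parents, payload_hash,
                               status["latestValidHash"])
```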
C: Okay, got it. First off, anyone have thoughts or comments on that?
S: Yeah, hey, this is Asu here. We're currently looking at our latest-valid-ancestor code, and it looks pretty much doable for what Mikhail is talking about, but I could personally use a better definition of what the latest valid block is, real quick. Our understanding right now is that it is the common ancestor that has been validated, that is, it has been considered valid. Is that the simplest definition we've come up with?
S: All right, I think that clears it up. Thank you.
C: Okay, next up: Mikhail, this morning you also posted another kind of request for comments, about an engine API response status. Do you want to go over that?
M: Oh yeah, this is related to latest valid hash, and it's a kind of blind spot in the spec currently. The problem is: if literally the first proof-of-stake block in the chain is invalid, what should the EL return as the latest valid hash? It may return the proof-of-work block that is the parent of this first proof-of-stake block, but this information isn't relevant for the CL; it doesn't have this proof-of-work block, which is basically the terminal proof-of-work block, in its block tree.
C: Okay, next up, this is something that's been open for a while, but I just wanted to bring it up because we are getting close to being done with the consensus-level changes. We still have some open questions around JSON-RPC and how to go about, basically, finalized, safe and unsafe.
C: I don't want to take too much time on the call to go over this, but it would be good to get this PR merged in the next couple of weeks, so that clients can agree on what we're implementing. Danny, Mikhail, Micah, the three of you seem to feel the strongest about this. Do you want to take a minute to share your thoughts on how we should move this forward?
K: Yeah, there are two ways to think about what these words mean. One is the algorithm that derived them; the other is the actual status and state of the item. I think that developers and people reading the API would generally assume the latter, so if something says "unsafe", they'll literally think it's unsafe, not that it was derived from an algorithm that is unsafe. And thus I think the problem is: you can have a safe algorithm...
K: ...that gives you the head, and an unsafe algorithm that gives you the head, and thus you could have unsafe and safe being the same block, which I think is very confusing for end users. So I think anchoring on the latter, it actually being a property of that block, is better, and I would say you leave latest.
K: I would say you define safe, and you hope that you get a better algorithm over time, and you add finalized. I think justified is a nice-to-have, but it would require a change to the engine API, and I don't think it's valuable enough to do a breaking change at this point.
A: I think that when a developer asks for unsafe, they're not choosing between asking for unsafe and safe and then comparing the two. They're just saying: the thing I need is whatever is safe, give me that; or the thing I need is whatever is unsafe, give me that. I don't think the user actually cares what they get back, nor do I think they're going to be comparing that to the other options. They're just going to say: hey, I need something.
A: I need something that's safe because I'm taking this off-chain, like I'm an exchange; or I need something that's unsafe because I am running an MEV-extracting client and I need the absolute latest, and I don't care if it's going to go away. And here we are.
K: ...serves people very well, and people understand that when they're using latest today. I know the argument that maybe it's a footgun and it should be renamed, but I don't know if I want to debate this too much today. Okay.
A: Yeah. So as of the merge, or before the merge realistically, but as of the merge release, users should be able to do getBlockByNumber and pass in "safe" as a block tag. Currently they can do "pending", "latest", "earliest" and, I think, one other, but anyways: we want to add "finalized" and "safe" to that list of things they can request.
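Concretely, the proposal is about what a user can pass as the block parameter; a minimal sketch, assuming a node on localhost:8545 that already supports the new tags:

```python
import requests

RPC_URL = "http://localhost:8545"  # assumed endpoint

def get_block(tag: str) -> dict:
    return requests.post(RPC_URL, json={
        "jsonrpc": "2.0", "id": 1,
        "method": "eth_getBlockByNumber",
        "params": [tag, False],  # False = return tx hashes only
    }).json()["result"]

# "latest" works today; "safe" and "finalized" are the additions
# under discussion in the PR.
for tag in ("latest", "safe", "finalized"):
    print(tag, int(get_block(tag)["number"], 16))
```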
Q: I mean, if you do deprecate latest, I think a lot of applications will break.
C: If we let it, and I would rather not. It would be good if people who have strong opinions can go on that PR and share them, and it would be even better if, by the next AllCoreDevs, we could have some consensus on this and have that PR merged one way or another. Because at the end of the day, the next couple of weeks is when clients are going to want to start implementing this, just because we're going to want it before we...
C: ...have releases up for public testnets. So I think the most important thing is just to be done with arguing over what the options are. I have very weak preferences about the actual outcome, but we should try to wrap this up in the next couple of weeks.
F: My two cents is that unless there's a very good reason to change an existing tag or deprecate something, it should stay just as it is. If for whatever reason it is super appropriate, then maybe, but I don't really see that, because currently latest is kind of unsafe anyway: you can have a reorg, a one- or two-block reorg, we have them. So it's not like it's safe currently.
C: Okay, moving on from this, but please comment on the PR. lightclient, you had some comments about basically the block gas limit when we move to a proposer-builder world. Do you want to give some context there?
T: ...and, you know, whether we are also being good custodians of this power, and whether we want to confer that power into this new system. Because we have the ability right now to add some different configuration parameters to the external block-building protocols that will be available post-merge, to make it so that individual validators can choose their gas target. I wanted to ask this here because it feels like a bit of a departure from where we currently sit, where we can coordinate with a fairly small group of people to change the gas limit if needed, versus allowing individual validators to choose it.
K: That argument is just that there should be a lever that developers, or a small set of people, have, which we currently kind of do. I don't know if having more players involved in this decision is necessarily a bad thing. I also think that validators' incentives aren't the same as end users' incentives, but I do think that builders' incentives diverge even more, and so putting this in builders', you know, MEV searchers' hands is not a good equilibrium, in my opinion.
A: In the various proposals for builder-proposer separation, can we put the block gas limit in the proposer's hands instead of the builder's hands easily, or is that overly complicated?
K: It could go in the container that the proposer signs, so in in-protocol PBS, when there's a larger redesign here, it's certainly easier. With the sort of stopgap measures where you try to simulate that in an extra protocol outside of it, like what people are designing with MEV-Boost, it's certainly more complicated, but it's not impossible, and not that much more complicated, I'd say.
U: I would say yes, because you can win and dominate the builder auction for a pretty long period of time just by being willing to spend more money than other people, whereas doing that for proposers is much harder. So a very mature builder ecosystem with very easily winnable auctions is more vulnerable to builders choosing it.
A: Yeah, okay. I think personally I'm convinced that we should probably do something. I'm okay with either just removing the ability for the gas limit to be increased or decreased by block builders, or moving it to proposers; I would be happy with either, personally.
C: Does anyone want to offer a strong counter-position to that?
A: The only question would be: if this turns out to be incredibly difficult to integrate because of something we're not thinking about here, should we bring this back up, removing this from block headers with the merge, or should we just accept that builders get to control the block gas limit?
U: With the merge, I mean, I think it would be very hard to make any consensus change.
U: The laziest fix is basically to add a one-line rule that just says that proposers only accept a header if the header contains a gas limit of exactly 30 million, right?
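That rule really is about one comparison; an illustrative, hypothetical version of the proposer-side check, not taken from any client:

```python
PINNED_GAS_LIMIT = 30_000_000  # the "exactly 30 million" in the discussion

def accept_builder_header(header) -> bool:
    # The proposer only signs builder headers that keep the gas
    # limit pinned, so builders cannot move it up or down.
    return header.gas_limit == PINNED_GAS_LIMIT
```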
C: Okay, so next I had basically two things related to testnets. First, there was some discussion in Amsterdam about the future of the testnets. Afri posted a comment on the agenda which links to it. Is anyone on the call who was at that session in person? I wasn't.
C: Got it. And I guess just from our perspective, I assume nothing changes, in that we're still happy with officially running Ropsten, Goerli and Sepolia through the merge. So if some other company wants to maintain Rinkeby, and may or may not run it through the merge, they obviously can. But in terms of the data that we want from these testnets, these three are still sufficient, and we're happy with that, right?
I: Yes, exactly. And to clarify that comment: taking them over means that we deprecate them in our software. We don't maintain them in our software anymore, and they can either run their own version of Geth or just use an old version where we still have support for these testnets.
C: Okay. And I guess the other thing I wanted to talk about for testnets is basically how we get from where we are today to there. We had this shadow fork at Devconnect, which was smoother than the ones we've had before, and we have another one planned for this week.
C: I'm curious to hear from client teams: what do we want to see in terms of success on shadow forks before we're comfortable moving to upgrading the public testnets? And is there stuff outside of the shadow forks themselves that we still want to test or do before we're comfortable moving even the first of the public testnets?
I: Besu is currently not on the Hive test instance, but I think they're looking into fixing that quickly. Geth is only failing two tests, and it's still an open issue whether that's a problem with the assumptions of the tests or with Geth. Another client, for example, is failing 28 tests, and it would be really important to get all of those tests fixed so that we have good confidence in the software.
J: Yeah, I think we should also have green Hive tests, of course, for every client, and they need to cover most of the stack. And what's more, maybe fuzzing, because right now, correct me, Mario, the fuzzing framework...
E: I have a question: why is the number of tests different between Erigon, Geth and Nethermind? I think the total number of tests for Erigon is 46, for Nethermind it's 47, and for Geth it's 54.
C: Awesome. And then Justin had a comment in the chat about the MEV situations being compelling, and that we'd want to run something like that on a shadow fork even though we don't have the infrastructure to do it. Is this something that client teams can manually try on the fork this Thursday and see?
C: Awesome. And then Pari has a comment about two mainnet shadow forks with no slashings and only really minor issues.
C: So obviously we have one scheduled this week, and I assume we'll have one scheduled next week, and then two weeks from now I assume we can do those. I guess, if those kind of work smoothly, and we obviously spend some time on Hive in the next two weeks to make sure that the different teams pass the tests...
C: ...do people feel like at that point we would generally be in a good spot to start looking at upgrading testnets? Obviously there's always a delay: we need to set the block, then put up the software, and then there are a few weeks until we actually hit the actual upgrades. But assuming these shadow forks go smoothly and that Hive support is there...
C: ...is it realistic to think we might start looking at testnets in, like, two weeks, or do we feel we need much more time than that?
C: Thank you, Martin, awesome. I think that makes sense. One thing I'll also share from discussions at Devconnect: we had some chats about the difficulty bomb and when we should actually make a call about pushing it back or not. This stuff is really hard to estimate, so please don't quote me on this in two months...
C: ...if I'm wrong, but it seems like we can probably get to late May, June-ish, before we really feel the impacts of it. And from that point, in the past we've managed to ship difficulty bomb upgrades in a few weeks when needed. So my feeling from talking with different client teams in Amsterdam was that it seems like it would be better to wait...
C: ...without necessarily doing anything about the bomb, and then if we get to late May, June, so two or three calls from now, and we see that we're not moving forward on the testnets, either because we found some issues or things are slower than expected, we can coordinate a bomb pushback pretty quickly then. Basically, not even thinking about it now and being able to move a bit quicker in the short run would be better. Is anyone strongly opposed to that?
V: Well, I was going to say, I think it would be unfortunate to make a mistake there, as far as getting into a situation where you're forced to delay the bomb.
V: I'm sure everybody understands that, but let me just share this one chart. I think you can see pretty clearly from the chart how quickly the bomb goes off once it goes off. Is everybody seeing this? Yeah.
V: These are the last two bombs here, and we delayed them in plenty of time, before they actually affected the block time; so here are 14-second blocks. But this one was the one where we kind of forgot to set it, and we had to react really quickly, because there were two hard forks right after each other, I think about a month apart. So this is about a month, and it just literally drops off the cliff.
V: This red line is June 15th, which was when we thought we were setting it back to, back in December. And it looks to me like it's getting ready to drop off the cliff; I'm just looking at this now. I never want to predict the future, because predicting the future of this thing is hard, but I can see that we're getting to the point where it might start dropping off the cliff.
V: By this day, say May 15th, we should definitely have a decision about whether we're going to delay it or not, because you don't want to get to the place where you're forced to delay it because it went from 18-second blocks to 22-second blocks in a two-week period. That's what I would say.
C: Right, and I think we'd probably make that call much before we hit 18 seconds; that threshold has to be pretty low. But because there is this chance that we might actually be able to ship the entire upgrade without moving back the bomb, probably waiting until we're in the 14-ish-second range, maybe 15, to see how we're feeling about this, and then that means, you know, by the time we coordinate, it takes...
C: Yeah, and I think the risk is just that I don't want to paint us into a corner where we're then forced to act if, say, the bomb is not actually showing up as we'd expect. So if we can get, say, an extra four weeks without having to make a decision, or an extra six, I think every week where we can delay making that call is valuable to us, because it gives us more information on the readiness of the merge.
C: So I guess the best thing we can say is that today we're far enough from the bomb being an impact that we can at least bump this conversation another two weeks, and obviously we'll keep looking at it. That would be my approach. Łukasz, I see you have your hand up as well.
W: Based on this graph, I would say that we have four weeks: in four weeks we should decide if we should move the bomb or not. That looks reasonable to me.
P: Yeah, that would be two calls from now. Vitalik?
U: We can analyze it mathematically; there are scripts that have predicted it pretty accurately, and we do know how it progresses, right? Every hundred thousand blocks, the discrepancy between the actual block time and the ideal block time doubles, and that's basically how it keeps...
U: ...going, right? So it is something that we can model. The main unknown variable is basically the hash rate, but even the hash rate is not, I mean, that unknown.
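A back-of-the-envelope model of the doubling being described, not any client's actual code; the delay constant is the EIP-4345 (Arrow Glacier) value, and the base block time and hash rate would be measured from the live network:

```python
def bomb_term(block_number: int, bomb_delay: int = 10_700_000) -> int:
    # Bomb: 2**(period - 2) extra difficulty, where the period
    # advances once every 100,000 blocks past the delay offset.
    period = max(block_number - bomb_delay, 0) // 100_000
    return 2 ** (period - 2) if period >= 2 else 0

def projected_block_time(block_number: int, hashrate: float,
                         base_time: float = 13.3) -> float:
    # Extra seconds per block ~= bomb difficulty / network hash rate,
    # so each additional 100k blocks doubles the added delay.
    return base_time + bomb_term(block_number) / hashrate
```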
V: It goes off and then the hash rate recovers; that part is dependent on the hash rate. But I agree with you, it's predictable. I only ever predicted the first time, you know, when it really starts going off. I'm saying this is right: we're going to start seeing it go off by the time we get here.
U: Yeah, I mean, basically, when it breaks 15 seconds, then it's pretty clear that it's 100,000 blocks away from breaking 17, and another hundred thousand...
U: Yeah, that's right. So it is a kind of multi-variable problem, because we also have to evaluate the pain of doing an extra delay...
G: ...versus living with 21- or 25-second blocks for a while, which is, you know, something that we'd have.
N: So, yeah, I'm just...
A: So instead of trying to project forward, why don't we just pick a block time that we consider unreasonable, and then, as Vitalik mentioned, we can calculate backwards when we would need to act. If we know that it's going to take us three weeks to get a release out, and we know we definitely don't want 30 seconds, we can calculate at any given point in time what is three weeks before 30 seconds. I think that's easier than trying to project forward.
C: Right. At least for me, and maybe others disagree, if we have 20-second block times today, I would consider that intolerable. If we have 30-second block times but we can ship the merge two weeks later, then that's maybe worth it; I'm not convinced, but what we can tolerate goes up as we get closer to the merge.
W: So, two things. One, we can have this unpredictability based on hash rate leaving: I expect that close to the merge the hash rate will be going down, because people will just sell their hardware before everyone else does. And the second thing: I would really like Ethereum to be considered a reliable network, not something that goes from 14-second blocks to 20- or 25-second blocks or whatever.
W: So I would really like the decision to be made as late as possible, but not after we are seeing block time increases; before that. I would prefer that. And a third thing: moving the difficulty bomb is like the easiest hard fork ever, so I don't think it would take a lot of effort away from pushing the merge to move the difficulty bomb. It's a very easy thing.
U: I think basically close to immediately, right, because it can adjust by, I think it's something like a factor of e in a thousand blocks.
C: Right, so that means even if the hash rate goes down, it doesn't make the bomb worse, because then the overall difficulty for the network goes down.
U: Right, yeah. I mean, if people want to open betting pools on how much hash rate will drop before the merge, I'm definitely betting that it will drop by less than a factor of two, probably significantly less.
V: This is why I never predict the future; I only predict when it first comes.
A: One comment for me before I walk away from this conversation, because I don't care much: the anxiety and effort we're putting into having these discussions, just that alone, is it worth just pushing back the bomb so we can stop worrying about it? We've spent a very nontrivial amount of time on this, and there's a lot of anxiety around it.
N: No, it does sound good, but the thing is, as Mark said, it's trivial for us to make such a hard fork postponing the bomb, but it's not trivial for people to, I mean, the whole coordinating of the update. It might be easier if we just plan early on for a bomb update and not let it affect the merge at all, or as little as possible.
C: Yeah, if you can push it back early, then obviously it's not a ton of work; there's some coordination work, but it does make it smoother. Danny, you have a comment about proof-of-work?
K: Yeah, we might see some proof-of-work forks here, but if the chain is actively degrading at the time they're performing the fork, they have to do two things at the same time: do an upgrade and convince exchanges to list them. Whereas it's very easy to do the proof-of-work fork if you have months of a defused bomb, and that's months to potentially damage users.
C: And yeah, we have basically a few minutes to the end of the call. Does it make sense, based on the conversations we had earlier, to spend the next two weeks obviously focused on the shadow forks and on Hive, see how we're feeling two weeks from now, and also see how the bomb is progressing?
C: I think we do have much more than two weeks to make this decision: the chain is not being affected today, and it's not going to be noticeably affected in two weeks; realistically we probably have maybe even four to make that call. So I think it's worth at least moving forward on the testing, seeing how far along we feel we are in the process two weeks from now, and looking at it...
G: ...looking at it then, yeah.
C: In two weeks. And obviously we'll continue this conversation about the pros and cons, just from the proof-of-work side, in the Discord.
C: We have two minutes; anything else people wanted to bring up?
C: Okay, well, we finished early for the first time in a long time. Thanks everyone for coming on, and talk to you all in two weeks, as I was promised.