From YouTube: Ethereum Core Devs Meeting #126 [2021-11-12]
A
Hello everyone, oh, hello. Everyone, welcome to AllCoreDevs number 126. A couple things on the agenda today.
B
As mentioned on the consensus layer call last time, we're going to spend some time talking about Kintsugi and the upgrades we have there, and then we have a couple EIPs and things about the merge to discuss. But first I just wanted to mention Arrow Glacier. So the Arrow Glacier upgrade is happening on December 8th. All of the clients have a release ready. I'll just share my screen real quick: there's a blog post on blog.ethereum.org with all these releases; otherwise, in the execution-specs repo there's also a link to the spec with all of the client releases associated with it. Yeah, so you can find them here.
B
So that was the first one. Next up: Kintsugi. So I'm curious, yeah, have any clients basically made progress on the specs? Any issues people want to bring up?
D
Yeah, so I created a version with EIP-4399 enabled. I'm not sure if, like, we have to decide whether we want to run the first testnet, the testing on Monday, with 4399 enabled or disabled, because it changes the block hash. And yeah, so I have one version with 4399 enabled, but I can also create another one without it. And yeah, I created test vectors; the test vectors are without 4399, but I can also create test vectors with 4399. And Mario created a really nice tool to run test vectors and to create easy test vectors for the execution layer. And yeah, currently it only runs with geth, but we hope to extend it to other clients soon.
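For context on why enabling it changes the block hash: EIP-4399 has post-merge blocks carry the beacon chain RANDAO mix in the header's mixHash field (with difficulty set to zero) and repurposes the DIFFICULTY opcode (0x44) to return that value. A minimal sketch of the opcode's semantics, using a hypothetical header dict for illustration:

```python
def opcode_0x44_result(header: dict, is_post_merge: bool) -> int:
    """EIP-4399: post-merge, DIFFICULTY (0x44) is renamed PREVRANDAO and
    returns the beacon chain RANDAO mix carried in the header's mixHash
    field; pre-merge it still returns the proof-of-work difficulty."""
    if is_post_merge:
        return int.from_bytes(header["mix_hash"], "big")
    return header["difficulty"]

# Hypothetical headers for illustration
pow_header = {"difficulty": 12_000_000_000_000_000, "mix_hash": b"\x00" * 32}
pos_header = {"difficulty": 0, "mix_hash": bytes.fromhex("ab" * 32)}
print(opcode_0x44_result(pow_header, False) == 12_000_000_000_000_000)  # → True
print(opcode_0x44_result(pos_header, True) == int("ab" * 32, 16))       # → True
```

Because difficulty and mixHash change meaning under 4399, any header built with it enabled hashes differently from one built without it, which is why the devnet has to pick one or the other.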
E
A couple things there. The weekly devnet launches, which we like to do anyway: next week's will be on Thursday rather than on Monday, so there's a little bit of time to coordinate between now and then. And I'd say, if possible, it would be best to just do it full-feature unless we have a particular reason not to. But I guess we should circle back once Pari puts a proposal together, so that on Monday or Tuesday we can iron out the last details before Thursday.
F
There
is
also
something
raised
by
merrick
on
the
test
vectors
as
a
response
for
the
update
forks
choice.
If
there's
no
execution
payload,
it's
geth
is
returning
0x
for
payload
id.
Is
that
the
behavior
we're
expecting
or
one
thing,
because
the
spec
seems
to
imply
the
null
value
or
not
doesn't
exist
on
the
object.
G
Well,
what
once
again,
what's
the
case
when
it
should
be.
G
I'm
also
wondering
what
should
cl
sound
when
the
focus
is
updated
when
it
literally
creates
the
first
initiate.
E
E
Which makes sense. I mean, if you had two viable competing terminal proof-of-work blocks, the consensus layer needs to pick one and build on one, and so that, I think, should be the way we do it.
G
It's
just.
It
means
that,
with
this
call,
the
choice
rule
will
be
switched
to
the
proof
of
stake
of
choice,
rule
as
well,
right,
which
again,
I
think,
makes
fun
yep.
A
Cool
anyone
from
aragon
or
base
you
have
updates.
K
K
Okay, regarding EIP-4399: I started looking at it yesterday, so I'm still, like, trying to understand how we can implement it in Erigon. But yeah, nothing new for now.
L
L
Yeah, I can talk. Oh...
L
Yeah
about
this
gary
is
mainly
working
on
this
and
I
can't
add
too
much
looking
at
the
issue.
That
seems
to
be
a
lot
of
progress
lately
so,
but
gary
can
for
sure
tell
more,
but
it's
not
online
at
the
moment.
E
E
It's only on the consensus layer, and so the consensus layer might have a view where terminal total difficulty might be earlier than what the EL has hard-coded. And so the EL must listen to these forkchoiceUpdated events even in the event that it seems a bit early compared to the terminal total difficulty. So there's no need in the proof-of-stake fork choice rule to validate TTD, but you'll still use the local TTD to turn off block gossip and block import for the p2p. And I linked...
E
The
second
thing
I
linked
is
a
deeper
discussion
of
that
design.
Consideration
and
I
just
wanted
to
there
were.
There-
was
some
testing
and
some
different
failures
and
different
logic
changes
around
the
validation
of
ttd
in
el.
So
I
just
wanted
to
point
that
out
that
the
spec
says
slightly
otherwise
than
what
it
did
in
amphora.
E
Okay,
if
you
have
any
questions
around
this
logic,
change
just
pick
me.
G
This is mostly, like, for UX purposes: so the CL client will be able to just log some message if the payload is invalid, and it can read that from the logs without the need to go to the EL client log and match the payload hash to investigate what has happened.
G
So
it
would
be
great,
just
you
know,
to
see
more
opinions
before
making
the
final
decision,
how
it
should
be
done
for
more
yeah
el
client
developers.
A
Okay,
if
not,
I
think
miguel,
it's
also
worth
it
to
go
over
your
related
point
about
the
fork
identifiers
for
the
merge
before
we
dive
into
the
two
eips.
We
wanted
to
chat
about.
G
These
numbers
and
these
derivatives
should
not
be
affected
by
the
merge,
so
literally
work.
Next,
just
stay
zero
and
for
cash
is
the
same
as
of
the
previous
fork,
but
when
it's
turned
into
the
spec
change
the
question
of
how
how
it
should
properly
be
implemented
yet
again,
so
I
just
wanted
to
like
just
discuss
this
and
like
make
a
final
decision
to
make
their
respective
change
to
the
eip.
G
And
my
main
question
here
is:
if
we
have
the
fork
hash,
let's
just
suppose
the
mesh
has
happened
and
the
fork
hash
hasn't
been
changed
and
suppose
the
proof
of
work
network
keeps
progressing
and
considering
the
fact
that
the
same
process
and
the
the
like
the
fortress
rule
and
everything
else
will
be
like
coming
from
the
cell
side
to
el
and
also
considering
that
block
gossip
will
be
disabled.
G
So
that's
the
main
question,
if
my
my
god
is
that
nothing
bad
can
happen,
they
will.
G
These
notes
may
exchange
with
the
transaction
messages,
but
I
don't
think
it's
a
problem
so
yeah
we
need
to
decide
because
we
can't
update
for
cash
easily
like
retroactively
or
like
with
the
the.
How
is
currently
specified
in
the
eip
is
that
the
fork
hash
is
going
to
be
updated
when
the
real
transition
block
number
is
known,
but
if
it's
done
it
could
like
split
the
network
for
because
nodes
that
are
in
sync
and
not
yet
know
this
transition
block
number
their
fork.
E
E
Ahead of Danny, I was just gonna say: this helps filter peers when things are well specified and known in advance, and if we don't utilize this mechanism for this upgrade, because it is hard to get right and can cause splitting because of the dynamic nature of the fork block, it...
H
Like, that's in this background, right? So the network should split after the first finality, is that correct? Like, everybody should disconnect from each other, basically, and they'll stop trying to talk to each other, and so we should get a nice clean partition at finality. Is that an accurate statement?
M
M
I think, if all that happens is that one side disconnects the other, that's not the clean disconnect split. It just means that the disconnected partner will try again, and succeed to connect, and then eventually get kicked out again, and it will continue that way.
M
If
we
want
something
clean,
I
should
think
we
should
think
about
how
this
type
of
how
we
could
integrate
this
with
quark
id.
Somehow-
and
I
know,
there's
been
a
discussion
about
that.
4K
is
usually
based
on
numbers
and
after
the
fork
we
do
know
the
numbers.
So
we
could,
I
don't
know
if
we
like
retroactively,
could
modify
the
fork
id.
E
Instead, once the transition block is known, we're going...
G
I'm worried that, according to this spec, a node that has just started to sync with the network will try to connect to those that have just passed the transition, even when they've reached finality, and whenever the fork hash has changed, it will...
G
And,
according
to
the
current
stack,
it
should
just
not
connect
to
this
node
because
it
doesn't
know
it's
not
it's
not.
This
is
a
known
fork
for
fork
hash
or
for
the
node
for
the
local
node
that
connects
to
the
remote
one,
and
it
just
drops
the
connection.
G
If
you,
if
you
yeah,
if
you
said
it
retroactively
and
you're
in
run
time,
so
you
can
only
do
this
when
you
know
the
exact
number
right.
But
if
you
yet,
this
number
is
not
known
for
the
local
node,
because
it's
still
syncing
and
hasn't
reached
this
block
it
will.
It
will
use
different
four
cache
and
it
will
need
to
connect
to
someone
to
to
give
to
pull
the
chain
data
and
get
some,
but
it
can't
connect
because
of
all
cache
is
different.
G
If
you
can
recreate
the,
if
you
can
get
the
same
cache
using
the
fork
next
and
the
upcoming
four
caches,
then
you
can
connect.
If
you
can't
do
this,
you
should
disconnect
or
you
must
disconnect
according
to
the
spec.
This
is
my
understanding.
Probably
it's
wrong,
but
I've
like.
I
H
Block number... okay, so the issue here is: normally, for every other fork, we know the exact block number, like, from the genesis block. Like, if you just start up a brand-new client, and you have genesis and a config file, you already know the block number for every future fork up to that client release, and so, therefore, you should be able to connect to anybody, because you all agree on the fork numbers. The issue here is that we have a fork whose block number we can't hard-code...
H
The
block
number
in
at
least
until
the
client
releases
after
the
fork
happens,
and
so
we're
in
this
weird
situation.
Where
you
you
don't
actually
know
that
from
genesis
and
therefore,
when
you
connect,
you
don't
actually
know
what,
for
what
block
number
to
say?
Is
that
hey,
I'm
expecting
this
for
the
fork
id?
Is
that
all
accurate.
O
Actually,
you
can't
even
hard
hard-coded
after
the
fork,
because
if
you
are
good
after
the
fork
and
for
example,
I
restart
my
client-
and
I
say
that
okay,
I
did
a
four
ten
blocks
ago
and
everybody
else
will
drop
me
from
the
network
because
they
will
see
that
I'm
at
block
20
million.
I
didn't
work
1
million
block
ago,
but
they
are
again
similarly
unlocked
20
million,
but
they
did
not
do
a
fork.
One
million
dollars
essentially
them-
and
I
am
on
two
separate
forks
based
on
the
four
id
rules.
H
Yeah
that
way,
we
get
the
nice
clean
separation
of
networks,
which
will
basically
be
people
who
upgraded
to
proof
of
stake
and
people
who
didn't,
or
rather
people
who
upgraded
their
execution,
clients
to
a
proof-of-stake,
capable
execution,
client
and
those
who
didn't
and
once
that
block
number,
which
is
a
fork
block
with
essentially
no
code
changes
in
it
once
that's
reached,
the
network
should
partition
cleanly
and
then,
when
we
get
into
ttd,
we
don't
need
to
worry
about
forklifting
that
I
know.
G
Yes, I have...
G
Before
this
walk
identifier,
it's
also
used
in
the
in
discovery
right.
B
And I do think there's maybe value in that, because it kind of gets people downloading both clients. Like, anyway, there's going to be a change to the consensus layer before we hit TTD, so we can tell people to also upgrade their execution layer clients at that time, and then they're going to have to upgrade their execution clients once more as we get closer, I guess.
H
No. So if you upgrade your execution client to the client that has the code that says there's a new fork, an empty fork, at this block, then we know that your execution client, assuming you didn't hack it or whatever, will turn itself off when TTD is reached. And so we're confident that you will at least not continue on a proof-of-work network. Like, you'll just stop at TTD, worst-case scenario, if you don't...
E
Notice
something
a
bit
funny
here
with
my
interest.
Essentially,
if
they
intend
to
not
do
the
fork,
then
we
will
disconnect
from
them.
At
this
point.
B
Oh
yeah
right,
but
they
still
have
that's
not
actually
true,
though,
because
they
still
have
an
incentive
to
mine
all
the
way
up
to
the
very
last
block
like
they
get
paid
so
they're
basic.
If
they
disconnect,
then,
if
the
disconnect,
then
it
basically
says
you
know
like
we're
losing
this
amount
of
hash
rate,
and
that
might
actually
be
a
useful
data
point.
Knowing
like
x,
percent
of
the
hash
rate
is
not
even
going
to
bother
mining
close
to
the
merge
yeah.
H
So
that
is,
that
is
true,
but
you
have
there's
it's
like
already:
there's
kind
of
incentive
for
miners
to
leave
early
because
sell
your
hardware
off
beforehand.
This
is
just
one
more
incentive
like
we're.
Basically
saying
you
have
to
do
this
last
upgrade
or
if
you
want
to
stop
a
week
early.
You
know
you
don't
even
have
to
bother
upgrading
your
infrastructure
or
it'd
be
nice.
If
we
can
just
say,
hey
miners,
don't
have
to
touch
their
infrastructure,
just
keep
running
your
old
clients
right
up
until
the
last
minute.
H
You
don't
need
to
do
anything
and
that
way
we
keep
retain
as
much
as
possible,
so
we
don't
have
a
precipitous
drop
in
hashing
power
at
the
last
minute.
Now
I.
B
They want to keep making money. You know, some of the miners, as you said, are mining pools; a lot of them will also use something like Flashbots or whatnot. Then, like, if they want to keep, you know, yeah, basically making money on MEV as well, they need to upgrade. So it feels like, if they're gonna drop, you know, if for them the calculus is, like, 'I want to drop before the merge,' yeah, they're probably going to drop anyway. Martin?
M
Yeah,
I
was
so
I
understood
one
of
the
one
of
the
drawbacks
of
having
a
dynamic
4kd
thing
is
that
it
makes
difficult
print
notes
that
are
in
the
middle
of
syncing
or
are
wanting
to
do
a
sync
right
around
when
t3.
O
So if the entire network simultaneously swaps in a new fork ID, that's fine. But if part of the network swaps in a new fork ID and the other part does not, for example because it requires a new client or requires a restart, then what happens is that the clients who swapped in the new fork ID will suddenly advertise a fork in the past that the other clients are not aware of, which means that, chain-wise, they aren't compatible with one another.
O
I
thought
that
the
suggestion
this
particular
suggestion
was
that
we
wait
until
pos
arrives,
and
then
we
just
retrospectively
say
that,
oh
by
the
way,
yesterday's
block
was
the
pos
block.
D
But what if there are two blocks at the terminal... different...
M
Yeah, but so the idea would be that the entire network that does continue on proof of stake does so at basically the same time.
H
You won't be able to connect, because you don't think that... because you're past the fork block, and so you try to connect to people, and they all say, 'hey, the fork block was, you know, back there, that block you downloaded,' and you say, 'I'm past that block; that was not the fork block, because you have not yet reached TTD.' Is that the issue?
G
Okay, once again: everybody has transitioned and switched their fork identifiers in the network. I'm, like, the new one, the new guy, just starting my nodes, let's say half an hour after the transition has happened. I'm trying to join; like, my execution layer tries to connect to everyone that has the new fork identifiers, but I don't have it locally, because I don't know it.
O
I think it will allow connections. Because, from one side, the peer that you join to will see that you're advertising the next block, what will be a fork for you. So they will say, 'okay, Homestead is at block 1700-something; you think that that's the next fork; you're outdated, you are just wanting to sync.' So the peer that you joined will allow you to join, because they don't know that you're on some other fork.
O
The
only
information
they
have
is
that
you're
at
genesis
and
the
next
work
is
homestead
it
matches
up.
You
want
to
connect,
so
from
that
perspective
the
connection
will
be
allowed.
From
the
other
perspective
you,
what
you
will
see
is
that
the
other
person
is
actually
on
the
fork
id
that
you
have.
You
know
nothing
about,
and
that
might
actually
cause
you
to
disconnect,
because
the
other
side
will
actually
advertise
for.
I
did
if
this
hash
checks
on
that.
G
Yeah, that's my understanding too. Also, I don't think that, if we set this fork next before the actual transition, it would differ much from if it were just zero. Because if somebody wants to keep...
G
They
will
just
take
this
client
release
with
this
fork
next
and
remove
the
ttd
and
other
stuff
and
do
other
stuff
that
needs
to
keep
supporting
the
proof
of
work
network
and
just
use
this
client
and
the
same
fork
next
will
be
there
so
as
if
you're
just
zero,
so
I
think
it
should
be
after
the
transition
has
happened.
If
we
want
to
split
these
two
networks.
O
Honestly,
I
think
it
doesn't
really
matter
whether
it's
a
little
bit
before
the
transition
a
little
bit
after
the
transition
generally.
The
reason
why
we
introduced
forecast
was
that,
especially
on
the
test
mats,
we
have
generally
maybe
10
of
the
nodes
upgraded
to
a
new
fork.
90
percent
didn't
and
then
it
was
for
the
rest
of
the
90
percent.
O
It
took
three
months
to
upgrade
so
during
these
three
months
it
was
really
annoying
to
finance
peers,
because
you
were
constantly
finding
peers
that
were
stuck
on
whatever
old
block
or
old
chain
and
meaning
that
the
way
to
somehow
separate
the
two
networks.
So
obviously
previously
we
did
it
very
very.
O
Precisely
because
to
be
precisely
new,
but
there
isn't,
I
don't
think
it's
necessarily
essential
for
this
fork.
I
need
to
be
extremely
precise,
so
if
we
can
just
update
the
fork
id,
I
mean
hardcore
the
fork
id
to
an
approximate
block
where
the
us
could
happen
and
essentially
that
block
will
be
the
one
that
will
split
clients
that
upgraded
versus
clients
that
did
not
operate,
and
it's
not.
This
is
actually
a
protection.
O
Against
malicious
people,
so
if
somebody
is
malicious
and
they
want
to
download
the
code
and
they
download
guest
source
to
their
ability
to
qualify
or
they
try
to
convince
other
people
to
run
malicious
modified
code,
we
I
mean
we
can't
do
anything
they
could
as
well
take
the
4k.
So
4k
is
kind
of
like
to
prevent
naive,
non-operated
users
from
causing
too
much
annoyance,
and
from
that
perspective
I
think
it
it
doesn't
matter
whether
the
fork
id
gets
updated,
a
thousand
blocks
or
five
thousand
blocks
before
pos
or
five
thousand
bucks
after
pos.
O
It's
just
the
idea
is
that
it
shouldn't
cause
havoc
for
three
months
straight
after
the
pos
switch.
I
mean
yeah.
Of
course
it's
definitely
better,
the
more
precise
it
is,
but
I
don't
really
see
any
particular
downside
if
it
is
not
that
precise,
if
there
are
a
couple
days
plus
or
minus,
which,
in
my
view
it
just
do
the
same.
P
Quick
question:
what
speaks
against
just
updating
it?
Well
ahead!
So
the
I
I
don't
know
all
the
details
on
the
peer-to-peer
network,
but
here's
one
reason
why
that
could
be
good.
Basically,
people
who
forget
forgot
to
update
their
clients
would
then
see
a
gradual
degradation
and
thus
get
a
very
strong
signal
that
they
did
something
wrong
and
potentially
be
nudged
into
updating
it
in
time,
and
so
you'd
have
fewer
people
who,
for
some
reason,
went
right
for
the
upgrade.
E
Yeah
my
primary
argument
against
that
is
that
if
a
contingent
of
miners
want
to
run
the
chain
beyond
the
proven
stake,
upgrade
and
intend
to
do
so,
then
they
wouldn't
upgrade
their
software
to
proof-of-stake
software
mode.
And
thus
you
might
create
a
partition
between
those
that
want
to
upgrade
a
proof-of-stake
and
those
that
do
not
prior
to
the
merge
and.
P
Part
at
all,
though,
or
is
it
only
in
a
networking
thing?
No,
it's
not
working
right
because,
like
I
mean
do
we
actually
know
like
because
miners
might
not
actually
just
use
the
normal
peer-to-peer
network.
Even
they
might
use
specialist
servers
that,
like
guarantee
them
faster
fusion
of
blocks
and
stuff,
like
that,
they
might.
H
I
think
I
would
I
also
have
with
with
the
anchored
here
I
think,
red.
I
think
I
have
a
weak
preference
that
I'd
rather
find
out
sooner
rather
than
later
there
there's
a
contingent
of
miners
that
plan
on
continuing
to
run
proof
of
work.
H
Which would be great. I'm just saying, like, if Danny's theory proves out, I'd rather know, you know, a month in advance instead of a day in advance of the fork. Like, I'd rather not see the hash rate drop off, you know, in the last minute; I'd rather see it drop off, you know, a month ahead. So that way, when we set the TTD, we can set it appropriate to the actual hash power that's going to stick around to the end.
E
And
this
isn't
necessarily
my
theory.
I
mean
the
synthetic
fork
before
was
my
idea,
and
I
I
kind
of
like
it.
I
just
that
is
the
primary
risk
is
that
we
accidentally
fragment
the
miners
off
the
network
because
they
do
not
intend
to
upgrade
it
for
sick,
but
I
I
don't
know
if
that
I
wouldn't
say
that.
That's
necessarily
what
I
think
is
like
the
outcome,
but
I
think
that
is
the
primary.
P
I
mean
if,
if
venice,
if
that
is
rifts,
that
we
see,
then
we
can
also
easily
add
a
command
line
flag.
That
says
yes
like
follow
this
new
proof
of
stake,
fork
id
but
do
not
switch
to
proof's
sake
like
an
override
flag
for
that,
and
then
miners
can
easily
just
do
that
and
don't
have
to
worry
about
modifying
the
clients
and
stuff
for
that
which
is.
P
Do
we
really
think
we
can
stop
that
fork?
Existing
yeah
like
I
am?
I
doubt
it
so
I
I'm
not
worried
about
adding
that.
I
I
just
want
it
to
never
be.
The
default
option
like
the
default
option
should
always
be.
If
you
just
do
the
normal
thing,
upgrade
your
client,
you
should
end
up
on
proof
of
stake
and
you
should
never
on
that
path.
You
should
never
just
end
up
on
a
proof
of
work
for
and
everything
seems
to
be
working
and
you
didn't
notice.
That's
like
you're
off.
O
Question: what do we want to solve for? So, essentially, what is the problem that we're trying to solve here? Because the fork ID, originally, the idea behind the fork hash was that we needed...
O
And
upgraded
versus
non-upgraded
networks,
apart
after
the
upgrade
happened
now,
if
this
is
the
goal,
we
actually
have
an
interesting
other
thing
that
we
can
abuse
for
this
purpose,
specifically
that,
as
far
as
I
understand
it
directly,
if
I'm
wrong
after
eos
mode,
the
total
difficulty
of
the
chain
of
conceptually
remains
constant.
O
Okay, so in that case, for existing nodes: once PoS happens, and let's say finality also happens, so that's the fork plus, I don't know, 15 minutes or something like that. Essentially, after that point, when I'm in fully-PoS mode, whenever an eth handshake happens, both sides advertise their total difficulty to one another.
O
Now,
if
somebody
advertises
a
total
difficulty
that
is
higher
than
my
dtd
or
not
sorry,
not
the
dtd
rather
higher
than
the
the
total
difficulty
of
my
last
block,
the
pos
block,
from
which
point
the
whole
thing
is
locked
in
then
I
cannot
disconnect
it.
So,
with
this
trick
essentially
eos
aware
clients
can
always
disconnect
non-upgraded
clients
if
there's
actually
at
least
one
more
block
mine
on
top
of
the
dtd
yeah.
That's
interesting.
G
But
yeah,
as
you
have
mentioned,
as
you
have
pointed
out,
that
this
is
more
important
for
like
to
force
it
as
an
indicator
of
as
an
indicator
in
the
signal
for
a
user
that
it
just
forgot
to
update
it's
fine
software.
G
So
I
mean
this
fork
id,
so
it
will,
after
the
like
after
the
client
release
yeah,
so
it's
not
gonna
disconnect
or
will
it
will
it
disconnect?
The
notes
that
has
different
fork
next
or
just
set
of
zero,
like
my
client
is
updated,
and
somebody.
G
To
me
the
the
and
has
the
fork
next
set
to
zero,
but
my
client
has
the
fortnite
step
to
correct
value.
I
I
think
it's
permitted
right.
O
I
think
in
that
case
you
will
be
the
one.
Your
client
will
be
the
one
disconnecting
if
once
it
goes
past
yeah,
oh
no
wait.
Actually,
if
there's
only
the
torque
next,
it's
different,
then
clients
don't
disconnect
because
then
it
just
means
that
one
of
them
might
not
be
up
to
date.
But
as
long
as
that
work
didn't
happen,
it's
fine,
but
once
that
work
happens
on
the
updated
side,
then
the
fourth
next
will
be
zero
at
that
side,
and
just
the
four
cash
will
be
different
and
that
will
trigger
it.
G
Right,
so
if
we
want
this
for
this
kind
of
purpose,
then
we
should
set
it
before
transition
before
connects
to
the
full
transition,
and
I
think
it's
reasonable.
H
Okay, so we already have the messaging and everything; this would just be adding a line to the client that says: disconnect if the total difficulty reported is higher than the TTD. That, I think, should exist, right? Yes.
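The 'one line' being described, as a sketch: peers advertise their total difficulty in the status handshake, and anything strictly heavier than the terminal total difficulty must be a proof-of-work continuation. The TTD constant below is mainnet's eventual value, chosen long after this call, and is used only for illustration.

```python
# Mainnet terminal total difficulty (set well after this call; illustrative)
MAINNET_TTD = 58_750_000_000_000_000_000_000

def should_drop_peer(peer_advertised_td: int, ttd: int = MAINNET_TTD) -> bool:
    """Drop any peer whose status handshake advertises a total difficulty
    strictly above the terminal total difficulty: past the merge, the
    canonical chain's total difficulty stays frozen at its final PoW
    value, so a heavier advertisement means a PoW continuation chain."""
    return peer_advertised_td > ttd

print(should_drop_peer(MAINNET_TTD + 1))  # → True
print(should_drop_peer(MAINNET_TTD))      # → False
```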
O
So you could just somehow have a node that... so, essentially, if I understand correctly, the thing that we're afraid of is that, during or right before the fork, there's a set of miners which don't want to upgrade, and we still want to consume their proof-of-work blocks. And I think we could do that by simply having some peers on the network that, quote, connect to both networks and just relay the blocks across them.
G
To your point, we can use the hard-coded TTD to disconnect peers, so it will be hard-coded and everyone will know it, even if a node just starts to sync. Yeah.
O
I mean, I guess, you know, actually, if you're a fresh joiner, then you don't, anyway. I think we shouldn't really hold up there.
G
Yeah, I prefer to have it at the full transition, so that it's a good indicator for a user too, and yeah, like, a good... it's one of the factors telling them that they should update their nodes.
B
Cool. And yeah, I guess Mikhail, Danny and I can chat about the next steps for, like, the other approaches, and yeah, see how we can kind of flesh those out a bit more.
B
Cool, yeah. Just to be mindful of time: unless anyone has any, like, urgent comment about this, I think we should just move on, because we have two more EIPs we wanted to discuss.
B
Okay,
so
next
up,
ensgar
and
bartabay
are
here
to
give
an
update
on
eip-4396,
which
is
the
one
that
proposed
to
change
how
eap
1559
works
in
a
post-merge
context,
yeah
and
scary
barnaby.
Do
you
want
to
give
a
quick
update
of
kind
of
what
you've
looked
at
over
the
past
couple
weeks?.
Q
Yeah,
I
can
maybe
start
with
a
summary
of
the
breakout
session
from
last
monday,
so
basically,
what
yeah
splitting
splitting
up
the
the
erp
into
two
parts
like
the
the?
Why
might
we
need
to
do
something
and
then
what
exactly
to
do
kind
of
sides
of
things?
I
think
the
the
main
result
from
monday
was
that
there's
generally
uncertainty
around
how
urgently
we
need
to
do
some
things
like?
Q
Is
there
something
that
can
wait
until
shanghai
or
is
it
something
we
should
do
at
the
at
the
point
of
the
merge?
There
was
also
some
some
talk
on
the
mechanism
side,
but
then-
and-
and
I
kind
of
spent
some
more
time
since
then
kind
of
looking
into
the
mechanisms-
and
I
I
think
that's
that's
really
solid,
so
the
question
is
really
more
is.
Is
there
needs
to
to
do
to
do
something?
Q
Basically,
and
just
to
recap
that
briefly
so,
like
there's,
one
small
concern
in
general
about
just
like
that
empty
slots
have
have
a
negative
like
have
a
distorting
effect
on
the
base
fee.
Where
there's
this
brief
base
v
spike,
usually
after
after
missed
slot,
but
the
main
the
main
concern
is
about
the
throughput
loss.
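The spike follows from the EIP-1559 update rule: a missed slot leaves pending demand to pile up, the next block fills completely, and a full block moves the base fee up by the maximum step. A minimal sketch of the update rule, with illustrative numbers:

```python
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8  # max 1/8 = 12.5% move per block
ELASTICITY_MULTIPLIER = 2            # target is half the gas limit

def next_base_fee(parent_base_fee: int, parent_gas_used: int,
                  parent_gas_limit: int) -> int:
    """EIP-1559 base fee update: moves toward demand, capped at 12.5%."""
    gas_target = parent_gas_limit // ELASTICITY_MULTIPLIER
    if parent_gas_used == gas_target:
        return parent_base_fee
    if parent_gas_used > gas_target:
        delta = (parent_base_fee * (parent_gas_used - gas_target)
                 // gas_target // BASE_FEE_MAX_CHANGE_DENOMINATOR)
        return parent_base_fee + max(delta, 1)
    delta = (parent_base_fee * (gas_target - parent_gas_used)
             // gas_target // BASE_FEE_MAX_CHANGE_DENOMINATOR)
    return parent_base_fee - delta

# A completely full block (e.g. right after a missed slot) pushes the
# base fee up by the full 12.5%, even though demand per unit time is flat.
print(next_base_fee(100, 30_000_000, 30_000_000))  # → 112
```

The rule only sees gas per block, not gas per second, which is exactly the time-unawareness EIP-4396 targets.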
Q
These
these
kind
of
through
productions
can
be
compensated
in
the
medium
term
through
gas
limit
adjustments.
So
if
we
see
that
say,
three
percent
of
other
datas
are
offline
permanently,
we
can
just
increase
the
gas
limit
by
three
percent
and
then
on
average
it
smoothes
out
there
were
concerns
around
specifically.
Q
How
well
will
we
be
able
to
adjust
the
gas
limit
after
the
merge
so
right
now?
So,
of
course,
that's
that's
the
role
of
the
block
builder.
So
right
now
the
miners
and
then,
after
the
merge,
the
validators,
it's
already
not
trivial,
to
coordinate
miners
around
changes.
Every
time,
there's
like
a
there's,
some
change
it
takes
takes
a
while
and
then,
of
course,
after
the
merge,
validities
are
even
more
decentralized.
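The medium-term compensation described above is simple arithmetic; a sketch (the 30M gas limit and the 3 percent offline figure are illustrative, not from the call):

```python
def compensating_gas_limit(current_limit: int, offline_fraction: float) -> int:
    """With a fraction f of proposers offline, only (1 - f) of slots
    produce blocks, so scaling the per-block gas limit by 1 / (1 - f)
    restores the original average gas throughput per unit of time."""
    return round(current_limit / (1.0 - offline_fraction))

print(compensating_gas_limit(30_000_000, 0.03))  # → 30927835
```

Strictly, 3 percent of slots missed calls for a roughly 3.09 percent raise (1/0.97), so the 'increase by three percent' figure above is a close approximation.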
Q
So
so,
there's
more
people
we'd
have
to
reach
out
to
before
before
we
get
to
51
of
people
signaling
a
different
guest
limit.
One
thing
that
could
help
actually
is
the
centralization:
that's
probably
going
to
be
introduced
by
the
flashbots
proposed
merge
architecture
which,
by
the
way,
I
think,
should
be
a
topic
for
for
one
of
these
oco
devs
as
well,
because
that
might
introduce
some
general
centralization
concerns.
Q
But
in
this
case
it
acts
to
our
advantage,
because
it
means
that
you
basically
only
have
to
reach
out
to
flashbots
to
adjust
the
gas
limit
and
then
yeah,
and
so
basically
that
that's
that's
that
all
helps
for
medium
term
adjustments.
But
the
issue
remains
for
short-term
adjustment
and
that's
really.
Basically,
the
question
really
is
like:
how
important
are
these
short-term
adjustments
and
do
we
have
to
do
something
about
it
in
the
merch
and
so
again
the
facets?
There
are
like
for
one
there's
the
concern
around
dos
vulnerabilities,
and
this
is
important.
Q
This
is
kind
of
not
the
what
we
used
to
talk
about
in
the
dos
context
like
say:
slow
blocks
where
basically
transactions
kind
of
like
take
very
long
to
compute.
This
is
the
different
form.
Of
course.
This
is
kind
of
like
identifying
the
the
real
identities
the
ip
addresses
behind
this
individual
validator
and
then
targeting
them
and
bringing
them
offline
right
before
they.
Q
They
were
producing
the
block
and
there's
more
incentive
to
do
that
if
they're,
if
if
there
is
a,
if
that,
basically
results
in
a
throughput
loss
to
the
network,
so
you
can
actually
attack
the
network
through
that,
whereas
if
the
throughput
is
compensated
for
then
of
course
you
don't
have
much
of
an
immediate
incentive
anymore.
Q
So
so
that
could
be
mitigation
against
these
those
attacks
and
then
also
there
are
the
other
situations,
though,
say
like
a
big
staker
goes
offline
for
a
couple
hours
for
some
maintenance,
a
reason
or
something
that
just
could
be
like
a
10
throughput
loss
for
a
couple
hours
of
network
which
is
not
great
or
in
this
in
the
scenario
of
like
a
client
bug
or
a
consensus
issue,
or
something
that
they
could
be
they
they
could.
Q
They
basically
say
we
have
two
j,
two
forks
and
both
have
fifty
percent
of
our
debtors.
Then
at
least
until
the
gas
limit
starts
moving,
which
again
could
take
like
a
day
or
more,
but
like
the
the,
the
forks
will
only
have
half
the
throughput
yeah.
So
that's
that
that
was
made
mainly
in
other
necessity
sites.
So
there's
a
lot
also
to
report
on
the
mechanism
side,
but
it's
more
like
detail
oriented,
and
so
I
probably
first
want
to
just
stop
here
and
get
feedback
on
like
do.
Q
So, kind of: these throughput losses and the DoS incentives and everything, is that something that is bad enough to do something about it now? And that was just... basically, on Monday, our conclusion was basically, like, yeah, hard to tell; kind of, like, on the fence.
E
Yeah
I
mean
yeah
a
few
weeks
ago.
I
was
naively
optimistic
that
we
could
shove
this
in
here
tested
quickly
and
there
wasn't
much
complexity
to
deal
with.
E
Although
I
do
think
this
is
the
good
and
arguably
correct
behavior,
I
do
not
think
it's
critical
to
have
at
the
merge
and
that
personally,
I
think,
if
we're
sorting
things
merge
sooner
is
better
than
getting
this
in,
but
I
would
I
would
be
a
strong,
strong
advocate
for
putting
it
in
shanghai.
E
Having
the
gas
having
the
the
gas
limit
as
a
lever,
even
though
a
slow
lever,
we
can
mitigate
that
most
of
those
concerns
there.
B
I
guess
the
does
anyone
see
an
issue
with
potentially
raising
the
gas
limit,
the
offset
kind
of
the
average
throughput
loss
and
I
think,
like
right
now
miss
slots
are
like
less
than
one
percent.
If
I,
if
I
recall
correctly
so
we
you
know,
we
should
expect
them
to
be
in
that,
like
single
digit
percent
range,
so
that
solves
kind
of
the
general
case
issue.
The
issue
where
it
doesn't
solve
is
if
a
large
subset
of
validators
goes
offline
for
a
while,
then
we
have
like
a
throughput
reduction
in
the
chain.
Yeah.
Q
But they are just vulnerable there, because, I mean, like, at the end of the day, it's just a number, right? Like, what you're probably concerned about is the general throughput per time, and that will change.
H
State growth, yes. If I see an opportunity to lobby for reducing state growth, I will take it. So if there's an active discussion on whether we should increase, maintain, or decrease block throughput, I will vote to decrease it — which is just what I'm saying here. Again, though, I recognize this is not related; the question asked was whether anyone would be against it, and yes, I am.
Q
Yeah, just to point out — from a state-growth point of view, of course, I don't think there should be any concerns, except maybe that the gas limit increase could be sticky: once people come back online, maybe it's hard to bring it back down. But besides that, there really shouldn't be any concerns, because it just keeps the state growth rate constant; it only compensates for people being offline.

On the peak-networking-constraint side, of course, it does impose a bit of added strain, because while the average remains constant, the peaks go up: instead of having one block every 12 seconds, maybe with a couple percent offline we have on average one block every 13 seconds, but each block is ten percent bigger or something. So that's ten percent more peak networking strain.

But it's important to point out that after proof of stake we already reduced the peak network strain quite a bit, because under proof of work we had these stochastic block times — you could have two, three, four blocks within a couple of seconds — and now we have this minimum of 12 seconds between blocks. So there's already a big reduction in peak strain, and this kind of small added increase when we raise the gas limit should not be a concern under that viewpoint either.
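The point about stochastic proof-of-work block times can be made concrete with a toy model — a sketch only, using the standard approximation that PoW block arrivals form a Poisson process (so inter-block gaps are exponentially distributed); the 13-second mean is illustrative:

```python
import math

POW_MEAN_INTERVAL = 13.0  # assumed mean PoW block interval, seconds

def prob_gap_below(t: float, mean: float) -> float:
    """P(inter-block gap < t) for exponentially distributed gaps."""
    return 1.0 - math.exp(-t / mean)

# Under proof of work, very short gaps are routine: roughly 14% of
# consecutive blocks arrive within two seconds of each other.
print(round(prob_gap_below(2.0, POW_MEAN_INTERVAL), 3))
# Under proof of stake the slot structure enforces a hard 12-second
# minimum between blocks, so that probability drops to zero.
```

That hard floor is why the transcript argues the post-merge network has headroom for a modestly higher per-block peak.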
B
Right, but it's worth noting that — assuming you go with this, and you increase the gas limit to offset the average missed-slot loss — you're still in a case where you might have effectively lower throughput if validators go offline in an emergency fashion, and that's the case we're not solving for. So if a large swath of the validators goes offline for six hours, that's probably not enough time to coordinate raising the gas limit.
B
So what happens is that for those six hours the network just has lower throughput, and that's basically the price we pay by not implementing your proposal: if a large part of the validators drops for a short enough amount of time that we can't actually coordinate to raise the gas limit, that throughput is kind of lost forever.
Q
Yeah — it sounds like right now we're really pointing towards pushing this to Shanghai, which I'm personally absolutely okay with. Maybe just one last attempt, though: Danny, you were saying you think we're already so late in the process that this would probably, unavoidably, delay the merge, and I was just curious to hear a little bit more about that.
Q
I can definitely see that for the more involved proposals — there was this extension section — but the base mechanism really seems to me like a five- or six-line code change in each of the execution clients, plus of course tests and everything. So I'm just curious.
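To illustrate why the base mechanism is described as only a few lines: a purely hypothetical sketch, not the proposal's actual code — it assumes the change amounts to scaling the EIP-1559 gas target by observed block time (the 12-second slot and elasticity multiplier of 2 are the published EIP-1559/merge parameters; everything else here is invented for illustration):

```python
# Hypothetical sketch of a time-aware gas-target tweak in a client.
# NOT the EIP under discussion -- just a plausible shape for it.

ELASTICITY_MULTIPLIER = 2   # EIP-1559 parameter
SLOT_TIME = 12              # seconds per slot after the merge

def gas_target(parent_gas_limit: int, block_time: int) -> int:
    """Scale the usual gas target by how long the block actually
    took, so that missed slots don't permanently cost throughput."""
    base_target = parent_gas_limit // ELASTICITY_MULTIPLIER
    return base_target * block_time // SLOT_TIME

print(gas_target(30_000_000, 12))  # normal slot: the usual target
print(gas_target(30_000_000, 24))  # after one missed slot: doubled
```

Small as that is, the transcript's counterpoint stands: the testing and analysis burden, not the diff size, is what drives the schedule.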
E
I think the intention right now is to have a Kintsugi testnet up in the first week of December, to stand up through the holidays, so we can begin making decisions in January about very concrete and realistic timelines. I believe if you don't put an EVM change into that testnet, and are then working on testnets in January with that change, you've very likely in practice delayed the merge, because of our need to have things on testnets — even though it's only a couple of line changes.
E
And correct me if I'm wrong, but the discussion and analysis that I think we'd probably want to throw behind this is probably still not totally done — is everyone comfortable with where we're at? So shoving it into Kintsugi, the devnet in two weeks' time, doesn't seem likely to me.
Q
Okay — if that's the case, then I agree that pushing it to Shanghai is the better choice, I would say.
B
Okay, I guess that's pretty clear then. I also personally feel this is something that would be really valuable to have in Shanghai, and there are a lot of interesting conversations to have about the elasticity factor and whatnot that we might want to change for Shanghai. So yeah, I think it makes sense to just not include this in the merge.
Q
One last thing, briefly, just because it came up — it's not really related to the EIP itself, but it did come up in the discussion on Monday, where we talked a little bit about the elasticity. One of the extra concerns that Martin raised was about slow-block DoS attacks on the network and how this could make them worse.
Q
And even if we don't do this EIP now, slow-block attacks in general are still a thing, and one thing that did come up on Monday was that the impact of these kinds of DoS attacks will be different after the merge than before it. So it's maybe something to keep in mind, and something that could be worth testing on a testnet: basically spin up a testnet and just really crank up the gas limit.
Q
All the way until the computation time of blocks starts to get close to, say, six seconds or something, so we can see how the network would react under that kind of situation — because, if not handled well in execution clients, the impact here could be worse under proof of stake than it is under proof of work. So, just something that came up; it might be relevant even without the EIP.
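A back-of-envelope for the stress test being proposed — how high would the limit need to go before execution approaches six seconds? A sketch only: the 15 Mgas/s execution speed is a hypothetical client throughput, not a number measured or quoted on the call:

```python
# Illustrative only: relate a target block-execution time to the gas
# limit that would produce it, given an assumed client execution speed.

GAS_PER_SECOND = 15_000_000  # hypothetical execution throughput

def gas_limit_for_exec_time(seconds: float) -> int:
    """Gas limit at which a full block takes ~`seconds` to execute,
    under the assumed gas-per-second figure."""
    return int(seconds * GAS_PER_SECOND)

# A ~6 second full block at the assumed speed needs a 90M gas limit --
# the kind of crank-it-up experiment the testnet suggestion describes.
print(gas_limit_for_exec_time(6.0))
```

The interesting part of the experiment is not the number itself but what happens at that point under proof of stake, where a six-second block eats half the fixed 12-second slot.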
B
Yeah — just so we have time for lightclient and George's EIP.
B
Cool, let's continue this in the merge channel. Cool — last but not least, lightclient and George, oh, and Alex, have an EIP that bounds the historical data in execution clients. Do you guys want to share some context?
R
Yep, hello there. I'll do a small summary — I don't know if we have enough time to really exhaust everything, but I'll do a small summary of the proposal as it is. The high-level thing is that we want to specify how clients can treat historical data: old blocks, states, receipts, and that kind of stuff.
R
The obvious benefit is that there is a variety of use cases that don't use that data, so we can free a lot of hard-disk space by pruning it, and also execution engines don't need to keep the old EVM versions around to parse those blocks — so there are a bunch of benefits. That's the high-level thing. Now, in terms of specification, EIP-4444 does two main things.
R
One: it specifies the time threshold below which you can, as a client, start pruning historical data if you want. Another important thing it does — maybe even a bit controversial — is that it specifies that clients must not serve that old historical data over the p2p network. Right now EIP-4444 is forcing clients not to serve such historical data, because it does not want to make it optional and then have other clients rely on that optional feature, with the quality degrading over time as more and more clients ditch the optional thing. So those are the two things the proposal does: define the time threshold and specify the networking logic.

This also has implications for syncing. Since historical data won't be kept until infinity, clients won't be able to do a full sync and that kind of thing, and the proposal basically piggybacks on the weak subjectivity system, so that clients can still sync from a safe checkpoint. But exactly how that should happen is out of scope for EIP-4444.
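The threshold mechanic is simple enough to sketch. This is illustrative, not the EIP's normative text: the one-year figure matches the rough cutoff discussed around this proposal, but the exact constant and its unit are for the EIP to define:

```python
# Sketch of the EIP-4444 idea: a client may prune -- and would not
# serve over p2p -- chain history older than some cutoff.
# The one-year cutoff below is an assumption for illustration.

SECONDS_PER_YEAR = 365 * 24 * 3600
HISTORY_CUTOFF = 1 * SECONDS_PER_YEAR  # assumed threshold

def is_prunable(block_timestamp: int, now: int) -> bool:
    """True if a block's headers, body, and receipts fall outside the
    window a client is expected to retain and serve."""
    return now - block_timestamp > HISTORY_CUTOFF

now = 1_700_000_000
print(is_prunable(now - 2 * SECONDS_PER_YEAR, now))  # old block: prunable
print(is_prunable(now - 30 * 24 * 3600, now))        # recent: must keep
```

Note this only covers the retention rule; the MUST-vs-SHOULD question about serving, discussed below, is a policy choice on top of the same check.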
R
Finally, the proposal also contains a bunch more discussion on various miscellaneous things — for example, the various ways such historical data can be retrieved, how clients can sync from genesis, or how the UX should be — but it basically just touches on them and leaves them for other EIPs or other topics. So that's it. So far we've pushed the EIP to wherever EIPs get pushed, and we've gotten a bunch of feedback.
R
Most of it is around the networking logic and whether we should force clients to not serve ancient data — whether that should be a MUST clause or a SHOULD clause — and also about the devp2p responses that clients should give when they're asked for such data. So that's it for the EIP-4444 summary. Alex or lightclient, you can say more, or we can go into the discussion.
M
So this looks very much like something that Peter started thinking about, and that I talked about in, I think, Prague, a couple of years ago.
M
It didn't pan out eventually — I mean, we haven't implemented it — but I'm curious, Peter, since you considered this a couple of years ago, what your thoughts on it are now, in this context.
O
So, way back, one of the biggest problems was that we kind of needed a way to distribute the old chain data. Currently in Ethereum, the Ethereum One world, the promise is that the chain is accessible — whether that's past headers, blocks, or even receipts — and apps kind of rely on that. So for us to just replace this guarantee with some other infrastructure,
O
we kind of needed something that is as reliable, or almost as reliable, as locally having the blocks available. And I think the best suggestion was that we could have some form of infrastructure run by major players — Consensys, whatever, a lot of the bigger companies — running this infrastructure, and they could just serve the past historical blocks.
O
But it takes a lot of bureaucracy, a lot of politics, to dream up such a system. So the challenge here is not really the technical aspect; it's rather the whole governance aspect of who is going to be part of it, and why, et cetera, et cetera. And then, essentially, we just had other things to do — that's why we dropped it.
O
The problem is not synchronization — I mean, we could have synchronized; we could have just shipped hard-coded blocks or checkpoints, and this whole weak subjectivity idea came up a long time ago. The problem is that dapps rely on past state being available, and Ethereum — Ethereum One — promises to have that available, which means everything is built on the assumption that you can always access the transactions in block 5, or the receipts of block 5.
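To make the "block 5" point concrete: fetching an ancient block is a single standard JSON-RPC call today (`eth_getBlockByNumber` is the standard method; the payload below is exactly what a dapp's tooling would POST to its node — no node URL is assumed here):

```python
# Why dapps care: ancient-block access is one ordinary JSON-RPC call,
# and tooling everywhere assumes the node can answer it.
import json

def get_block_request(number: int) -> str:
    """Build the JSON-RPC payload a dapp would POST to its node."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "eth_getBlockByNumber",
        "params": [hex(number), True],  # True = include full transactions
        "id": 1,
    })

# Block 5, mined in 2015 -- exactly the kind of request a post-4444
# full node would no longer be obliged to answer from local storage.
print(get_block_request(5))
```

Breaking that assumption without a replacement retrieval path is the governance problem Peter describes above.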
E
Well, and it does reduce the debate. You could argue about releasing checkpoints under proof of work — people would say, "oh well, that's not pure" — whereas in proof of stake you must have a recent piece of information to safely sync the network. So the need to have all historic block headers on the network, to be able to find the chain head, does become reduced because of the security-model shift. That's not the crux of the issue, though.
B
Yeah — just because we're basically at time: Andrew, you have your hand up, so let's go over your thoughts and then wrap it up. I've shared the discussion link in the chat as well, for async conversations.
J
To my mind, it should be bound to state expiry — when we have a solution on state expiry, or some infrastructure for state delivery — or at least wait until all clients have their own system of reliable state delivery. I agree that there was a promise in Eth1 that all the historical data is available, and we cannot break this promise without having either proper infrastructure in place or some other mechanism — either an individual mechanism for each client or some agreed-upon mechanism.
O
If we ever want to do this, then we must, at the merge, explicitly state that the state — sorry, the chain segments, blocks older than X years, I don't know — will not be available at the protocol level, so that full nodes don't guarantee this anymore. Now, whether we actually implement this in one year, two years, five years, or never — that's a different story. The idea is that we need to get people to stop relying on, stop expecting, blocks to be forever available.
O
And to answer the other note that you made — that this whole thing should be rolled out together with state rent or expiry or whatever — the two things are a bit separate. For one, the blocks currently outweigh the state four to one, or three to one, or something like that, so they are significantly heavier.
O
And essentially, every current protocol improvement proposal — for witnesses and stateless clients of varying degrees of statelessness — requires some witnesses that need to be retained besides the blocks, and these witnesses are significant in size: maybe about one order of magnitude larger than the blocks themselves.
B
Just because we're past time — there's a comment in the chat about maybe doing a breakout session next week. I'm not sure there's urgency here: given this is something we might want to do in the next year or two, we could also just discuss it on the next AllCoreDevs and pick the conversation up there. So, does anyone have a strong preference for a session next week, or is the next AllCoreDevs fine with people?
B
Yeah — and the thing is, this also kind of benefits from having a lot of different people involved in the conversation, and when we have these breakout rooms we tend to only get a subset of people. So my hunch is we should probably wait until the next AllCoreDevs, so that we have more people, and also center the conversation around the guarantees we want to provide — and stop providing — around the merge; EIP-4444 is related to that.
O
How do we stop — sorry, the unbounded growth, right? Yeah, that is the goal. Essentially, if you want Ethereum to stay alive for the next 10 years, that problem needs to be solved, and just kicking the can down the road won't work. So we need to start deleting stuff, right?
B
Cool — that seems like a good place to end. We're already a couple of minutes over time, so I appreciate people staying over. Yeah, thanks, everyone, and I will see you two weeks from now.