From YouTube: Ethereum Core Devs Meeting #138 [2022-5-13]
A
Okay, I've moved us over to the main screen. Welcome everyone to All Core Devs 138. Tons of merge stuff today, and then, if we have time, there are some updates to what was EIP-4938. Oh, and Felix had a networking EIP he wanted to talk about, and then there are two other EIPs that wanted technical feedback. Hopefully we get through them. I guess first: we had a mainnet shadow fork earlier this week. Pari, do you want to walk us through how that went?
B
Yeah, hey everyone. So we had mainnet shadow fork two yesterday — we hit TTD around 4 pm, exactly — and it was relatively uneventful; nothing big happened. I think all the clients that were there before were also there after. We added a bunch of validators to minority clients that we didn't have in previous shadow forks — for example Besu/Lighthouse, etc. — instead of purely Geth/Prysm.
B
Just so that we can track attestations. And the one thing we noticed — Nethermind mentioned that a couple of client pairs are proposing empty execution payloads: so we're getting proposals, but the proposal itself has an empty execution payload. I know they're talking about it and potentially have a fix — I think there was a Prysm fix that they pushed yesterday. I've updated the notes with it, but in general it was a really good run.
B
We did have an issue with Erigon syncing up, but that, I think, is unrelated — a non-merge, codebase-related issue.
E
Yeah, so we are working on a new sync mode and, as Pari said, the issue is not related to the merge. So we have a fix, but it's not merge-related.
E
Yes,
for
the
we
have
this
news,
sync
mode
which
which
we
are
still
debugging
and
like
the
finishing
and
things
like
that,
but
the
the
old
sync
mode
it
works,
fine
got
it.
A
Okay, cool. And anyone from Nethermind want to chime in about the empty blocks issue?
F
Yeah, so the problem is still the timing issue that I mentioned many times. So, for example, Lodestar and Nimbus are not giving enough time for block production, and because of that we have empty blocks — and if we have empty blocks with something else, it is probably also something wrong there.
G
And as of yesterday, I think Marius changed that behavior in Geth to be async production of blocks on fcU. That would probably manifest in that codebase later.
C
And for anyone listening: there's kind of a prepare step, where you're saying, hey execution engine, I'm going to ask for a block — and then you call a little bit later and say, I want the block. And if you put those two too closely together, you just get an empty block from the execution engine.
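The two-step flow Danny is describing can be sketched as a toy simulation — this is illustrative only, not any client's real implementation; the class, method names, and timing values are invented. The consensus side first signals the execution engine to start building a payload (the "prepare" step, engine_forkchoiceUpdated with payload attributes), then fetches the payload a little later (engine_getPayload); fetching too early returns an empty payload.

```python
import time
from threading import Thread, Lock

# Toy execution engine: after the "prepare" call starts a build job,
# transactions trickle into the payload until the payload is fetched.
class ToyExecutionEngine:
    def __init__(self, mempool):
        self.mempool = mempool
        self.payload = []
        self.lock = Lock()
        self.building = False

    def forkchoice_updated(self):
        # "Hey execution engine, I'm going to ask for a block."
        self.building = True
        Thread(target=self._build, daemon=True).start()

    def _build(self):
        for tx in self.mempool:
            if not self.building:
                break
            time.sleep(0.05)  # packing each transaction takes time
            with self.lock:
                self.payload.append(tx)

    def get_payload(self):
        # "I want the block": stop building, return whatever we have.
        self.building = False
        with self.lock:
            return list(self.payload)

# Calling get_payload immediately after the prepare step yields nothing:
engine = ToyExecutionEngine(mempool=[f"tx{i}" for i in range(10)])
engine.forkchoice_updated()
empty = engine.get_payload()

# Giving the engine some time to build yields a non-empty payload:
engine2 = ToyExecutionEngine(mempool=[f"tx{i}" for i in range(10)])
engine2.forkchoice_updated()
time.sleep(0.5)
full = engine2.get_payload()
print(len(empty), len(full))
```

The point of the sketch is only the ordering constraint: the gap between the two calls is what gives the engine time to fill the block.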
A
Okay. And then, Pari, as I understand it, next week we are doing another mainnet shadow fork, but instead of using client distributions that mirror mainnet — where there's obviously some majority clients — we're going to do kind of an equal split across CL and EL. Is that correct?
C
Okay, one thing maybe worth noting, just with the issue that was seen: if there are some amount of blocks where they do produce beacon blocks but without any transaction payloads, the elasticity from 1559 — as long as it's not the majority of blocks — would allow for no reduction in capacity. So that would be something we would want to fix if we saw it on mainnet, but it would have been kind of a no-op for users, which is nice.
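The capacity point rests on EIP-1559's arithmetic: the gas target is half the block limit, so blocks after an empty slot can carry up to twice the target while the base fee adjusts. A minimal sketch of the update rule, with constants from EIP-1559 (the starting fee and gas target are example numbers):

```python
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8
ELASTICITY_MULTIPLIER = 2  # block gas limit is 2x the gas target

def next_base_fee(parent_base_fee: int, parent_gas_used: int, gas_target: int) -> int:
    """EIP-1559 base fee update (simplified, integer math as in the EIP)."""
    if parent_gas_used == gas_target:
        return parent_base_fee
    delta = abs(parent_gas_used - gas_target)
    change = parent_base_fee * delta // gas_target // BASE_FEE_MAX_CHANGE_DENOMINATOR
    if parent_gas_used > gas_target:
        return parent_base_fee + max(change, 1)
    return parent_base_fee - change

gas_target = 15_000_000
fee = 100_000_000_000  # 100 gwei, arbitrary example

# An empty block drops the base fee by 12.5%...
after_empty = next_base_fee(fee, 0, gas_target)
# ...and the following blocks may use up to 2x the target,
# so overall throughput recovers as long as empty blocks are a minority.
after_full = next_base_fee(after_empty, gas_target * ELASTICITY_MULTIPLIER, gas_target)
print(after_empty, after_full)
```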
G
But that would discourage client diversity. Yes.
A
Okay, next up: on the last call, Mikhail, you had a request for comment about an engine API status response when the merge transition block is invalid. I believe we just went with the third option you proposed — do you want to just give a quick update there?
J
I know everyone is busy with engineering, so I understand that it didn't get too much attention, but anyway it's been communicated in advance, and there will be a Hive test for it. Basically, when the work on tests was going on — on the test checklist — this blind spot was discovered. That was the initial reason to have this kind of change.
E
Yeah, my thinking is: when do we finalize the engine API? Because — would it be finalized for Ropsten, or for which testnet? Because we should say: okay, this is it, we have merged all the pull requests that were already under consideration.
C
Yeah, I think we're at the place where we should probably put up a PR that's called a release candidate, and not have anything go into it unless it's heavily discussed and noted by client teams — and that release candidate would probably stand until we're pulling the trigger on choosing mainnet.
J
I think it's more or less final. There are a couple of things in terms of, like, clarification — so it's not changing behavior or whatever — but I think we can do them shortly. There were a couple of requests to clarify responses in some cases; for example, a safe block hash set to zeros should not be responded to with an error. This also should be clarified in the spec. But that's not updating behavior or design — it's just clarifications.
A
When do we think we can have those small clarifications and then a release candidate? Can we do that sometime next week, or does it — yeah?
J
Definitely next week we can do this. I'll take care of these changes, and then we can communicate and cut a release next week, I guess — unless there are any other opinions.
A
Okay. The other thing that we did finalize was the discussion over the JSON-RPC finalized and safe tags. Mikhail, do you want to also give a quick update on that?
J
Sure. So there are two new block tags that we have added to the namespace. We had earliest, latest and pending, and now, in addition to the previous ones, we also have finalized and safe. There was a discussion to have an unsafe tag and make latest an alias of unsafe, but we decided not to introduce unsafe at all — so latest will always point to the head of the chain, as it has previously, all the way. And one thing here is that execution layer clients should respond with an error — this error is specified in this change, in this PR — if safe or finalized blocks are requested before the transition gets finalized. That's one thing that's worth mentioning.
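As a sketch of what this looks like from the RPC side: the new tags slot into the same parameter position as the existing ones in calls like eth_getBlockByNumber. The helper below is illustrative (the endpoint and exact pre-transition error are whatever the spec PR defines and are not reproduced here); it only builds the request body.

```python
import json

# The full tag set after the change: the two new tags join the existing three.
KNOWN_BLOCK_TAGS = {"earliest", "latest", "pending", "finalized", "safe"}

def get_block_by_number_request(tag_or_number, request_id=1) -> str:
    """Build an eth_getBlockByNumber JSON-RPC request body.

    `tag_or_number` is either one of the known tags or a hex block number.
    """
    if isinstance(tag_or_number, str) and not tag_or_number.startswith("0x"):
        assert tag_or_number in KNOWN_BLOCK_TAGS, f"unknown tag {tag_or_number}"
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "eth_getBlockByNumber",
        "params": [tag_or_number, False],  # False: return tx hashes, not full txs
        "id": request_id,
    })

# Before the transition is finalized, a client is expected to answer a
# request like this with the error specified in the spec PR.
req = get_block_by_number_request("finalized")
print(req)
```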
A
Okay. Next up: we are kind of getting close to testnets, and one thing to note there is that while Goerli has an existing beacon chain associated with it (Prater), Sepolia and Ropsten do not, and there have been some discussions over the past couple of weeks about how we structure those chains. Danny and Pari, I know you've thought a lot about this this week.
C
Yeah. So, first of all, naming-wise, I think we should just call them the Ropsten beacon chain and the Sepolia beacon chain, and the unification of the networks after is just Ropsten and Sepolia — that's easy.

Ropsten, as far as I understand, will be deprecated, probably on the order of some amount of months after this. In doing that, there are two options: one would be, you know, to have a conservatively sized validator set and just kind of get it up and going with an open validator set, or to do a permissioned validator set with the ERC-20 contract. Based on some discussions, I think doing an open one is most beneficial to the community, and we can start with a hundred thousand validators that we control, and community members can add to that — and it would be unlikely that community members would add so much that it would disrupt, you know, the stable backbone that we've added. Ropsten, I think, is going to be the first testnet that we fork, so we really should get this beacon chain up in the next couple of weeks. And then the Sepolia beacon chain —
C
— I think the idea would be to do a 2x validator set size in comparison to mainnet today, and also make it unpermissioned so that other people can jump in — or was it permissioned? I'll let Pari chime in. But this would give us the chance to kind of see if anything shakes out with such a large validator set. Prater, which will become Goerli['s beacon chain], is about the same size as mainnet, and we try to roughly track that. So, pretty much: we need to launch two beacon chains. The Ropsten one will be unpermissioned — hopefully people will join. I believe the Sepolia one will be unpermissioned as well, but I can't remember — Pari, did we land one way or the other there?
C
Right — and one of the reasons we might permission that one is to do random testing on it, in the event that we want to churn the validators or have a more controlled environment.
C
I think the thing that we'll do is pretty much: have suggested configs, have a suggested distribution of validators, kind of make the rounds and get quick thumbs-ups, and have the teams join us in kicking off these beacon chains.
B
Yeah. So with Ropsten, I agree, it's pretty much straightforward: we can start with something that's like 100k, and people can join in; it should be relatively easy to set up, and anyway it's going to be deprecated in a few weeks or months, so we don't have to worry about it too much. But the one I would like to discuss and get some consensus on is Sepolia.
B
We
essentially
have
two
options:
either
we
want
a
large
beacon
chain
there
or
a
small
beacon
chain.
A
large
beacon
chain
means
that
we
essentially
saved
early
eta
because
we
don't
have
to
keep
growing
prata.
If
we
go
with
the
small
beacon
chain
for
sephora,
then
we're
gonna
continue
to
eat
up
a
decent
amount
of
garlic.
B
Exactly
and
the
question
then
would
be
if
we
have
a
really
large
beacon
chain
for
supporter.
That
means
client
teams
have
to
now
run
two
times
a
decent
number
of
validators
and
have
no
idea
how
how
open
they
are
to
them.
C
Yeah
to
answer
your
question
justin
I
mean
I
think
most
client
types
can
handle
thousands
of
validators
per
node.
I
don't
know
I
mean
you
can
also
throw
more
resources
in
a
node
and
handle
lots
of
validators
for
now.
So
I
don't
I
don't
we
don't
have
to
spread
them
necessarily
too
too
widely.
I
don't
think
it
looks
much
different
than
what
you
would
do
on
crater
today.
M
Sure. I mean, Goerli-and-Prater is already fairly big — it has a lot of users and a lot of validators. Sepolia is still kind of a very unknown quantity: we don't have a beacon chain yet, and we don't have many users building on Sepolia yet. Naturally, we have to keep in mind that we are duplicating a lot of testnets in the coming months or years.
M
So my personal opinion is that we should use Sepolia as, like, a fairly stable application-developer testnet. Because it's fairly new, we still have the time — or the chance — to define how the consensus layer, the Sepolia beacon chain, would look, and I would personally just keep it simple for this network. And I would personally say that we should continue growing Prater.
M
But again, it's not that easy, because we have a limited supply of Goerli ether. Putting this aside, the main reason why we should grow Prater and Goerli is that it's already fairly big, and we have a much more interesting foundation for testing and for growing this network.
C
Yeah, that was my original intuition, and then I was convinced to maybe do the 2x. I think it's totally fine to do a smaller, permissioned, very stable net — something that kind of feels locked in for users and doesn't have to grow unbounded. And there are a number of kind of emerging ways for us to test load across many, many nodes that are more like transient-type testnets, not public testnets.
H
Oh right, I'm stupid, sorry — but yeah. But doing it this way on Goerli: if we were to do the token deposits on Goerli, we could create new Goerli ETH.
A
I guess one interesting thing, though: there was a suggestion to upgrade Goerli to give the clique signers a huge amount of ETH. I guess you can still do that even after the merge on Goerli, because you still know the accounts, but it's kind of weird, because it's like a retroactive thing. Yeah.

Okay — anything else on the beacon chains?
C
Right. I would suggest that on Ropsten we just kind of make it happen very quickly, in the sequence of an initial 2,800 epochs, and then maybe for the Sepolia one plan it a bit more — like, it's an event, it's a thing that's happening: run your node before it happens.
A
Right. And then that means that if on Ropsten we want this close kind of upgrade of genesis, Altair, Bellatrix, we need the TTD for the Bellatrix upgrade, correct?
C
Right — I guess you would want to know it at that point in time, right? So if you have...

But I guess the distinction on what to do there — I would defer to when we're having a Ropsten upgrade conversation, which I think we're having soon.
A
Okay. Yeah — Pari, thanks for the summary in the chat: so Ropsten is 100k validators, plus unpermissioned for people to join, and Sepolia will be more like 20k and then be permissioned — probably without an option for people to join, or maybe just not as easy.
A
Yeah, but then more stable, obviously. Great, okay. So yeah — I guess, you know, on the last call we kind of briefly talked about Ropsten, and over the past two weeks I've tried to talk with the different client teams and testing teams. My general impression is that client teams don't have quite stable releases yet: there are still some open issues that they're looking at, still some failing Hive tests here and there. So it's clearly not at a spot where the code we would deploy today is what would go on mainnet.
A
That said, Ropsten is basically a testnet we intend to deprecate. And one thing we also talked about in the past is that these upgrades are a bit more hands-on for node operators: previously they would just download the new version of whatever client they're running and upgrade it, whereas now they need to figure out —
A
You
know
running
an
el
and
the
cl
in
parallel
and
making
sure
that,
like
that
whole
setup
works
and
and
that
their
infrastructure
still
works
it
and
what
not
it
might
be
worth
moving
to
like
drops
in
a
bit
a
bit
quicker
than
we
otherwise
would
because
then
you
get
people
like
another.
A
You
give
people
like
another
chance
to
try
the
software
and
make
sure
that
the
the
overall
setup
works,
even
though
obviously
like,
what's
going
on
on
robs,
what
would
go
on
robson
is
not
what
would
end
up
going
on
main
net
we'd
probably
still
have
like
some
bug
fixes
and
whatnot.
A
So
I
guess
I'm
curious,
you
know
how
do
people
generally
feel
about
that?
Do
we
feel
like
it
makes
sense
to
do
roxanne,
even
though
we're
you
know
still
kind
of
working
heavily
on
testing.
Do
we
prefer
to
wait
to
do
robson
and
then
the
risk
there
is?
Potentially
you
know
we
we
might
have
to
push
back
the
bomb,
but
that
might
still
happen.
Obviously,
if
we,
if
we
find
an
issue,
a
critical
issue
at
any
point
in
the
process,
so
yeah
curious
how
how
people
feel
about
that.
G
So Nethermind is fine. The only potential issue is that it will take us a bit to release a version that we could use on Ropsten. So, depending on the date, we might release a version, let's say, a little bit late — of course before the merge on Ropsten, but a little bit later than usual.
H
So for Geth, we want to create a release anyway next week, and if we have the TTD for Ropsten, we can bake it in. There are a couple of open PRs still to merge for the merge stuff, but those are only minor issues — they were found by Hive, and so they're not really relevant to node operation or to the testnets — so we can bake them into the release, or we can also just ship them in the release afterwards.
N
So I just wanted to add that I kind of agree with what you said previously — that this hard fork is a bit special, in that all operators need to do a lot of extra work to figure it out and set it up. So I am very, very supportive of the idea of forking Ropsten, letting things hit the fan, so that everybody kind of figures out what it actually means to be part of this merged network, and then seeing where we go with the rest.
A
Got
it
one
thing
you'll
also
add
on
that
front
is,
I
think,
for
mainnet
there's
a
world
where,
like
the
hash
rate
is,
you
know,
potentially
going
down
and
whatnot,
and
we
may
want
to
fork
like
to
have
the
ttd
happen
quicker
than
like
the
blood
times
we
set
usually
happen.
P
Yeah — Besu, similar to Geth: we have a regular scheduled release next week, and if we have TTD configs, we should be able to bake the Ropsten merge into those configs.
P
We also have some failing Hive tests that we're working on, so we want to get those sorted out as quickly as possible. Our release is planned for Wednesday of next week, so, depending on when we have TTD configs, I think we should be able to get that baked in.
H
Yes — it's really unrelated, but I think we should think about moving the Sepolia fork up a bit, because it's relatively hard for solo stakers to set up nodes on Ropsten and Goerli — they have to sync so much — and giving them the ability to test on a newish testnet, so that they don't have to sync too much, might be a good idea.
E
Well, next week I'm on holiday, so... We still have quite a few things missing, or not fully implemented, for the merge: we haven't updated to the very latest engine API, and we also have a lot of tests failing in Hive, because clique mining is not set up there, and also I haven't tested —
E
I haven't fully tested our sync performance. It's quite a few things, but we can provide a kind of raw alpha version if the TTD is known.
Q
I pretty much agree with what everyone said, so yeah — no thoughts on my end.
C
You know, I don't have direct answers from the rest of the teams, but we have talked as though this was very likely to be the next step and to happen around now. So I don't expect much pushback from anyone.
A
Okay-
and
so
I
guess
the
does
it
make
a
difference
like
so,
it
seems
like
geth
and
and
and
basu
can
pretty
much
release
something
next
week
without
too
much
issues.
Another
mine
still
needs
like
a
bit
more
time
and
and
and
then
eragon
holy
literally
something
like
right
now
or
then
also
also
needs
more
time.
Does
it
make
like
a
difference
if
we
choose
the
ttd
today
or
like
in
the
next?
A
Basically
in
the
cl
call
next
week
like
does
having
one
extra
week
before
you
know
the
ttd,
and
then
we
can
put
out
a
release
which
combines
everything.
Does
that
help
people,
or
does
that
not
really
make
a
difference?
So
it's
like
if
we
choose
it
now
and
have
a
release
next
week
versus
choosing
yet
next
week,
along
with
a
slot,
I
guess
for
belatrix
in
the
cl
call
and
then
having
your
release
like
the
week
after
so
like
two
weeks
from
now.
Yeah.
H
We can always — like, if we want to have releases out in two weeks, then we can have releases out in two weeks. But choosing the TTD next week is just really weird to me. So we should choose it right now, and either decide to have the releases out by next week or by the week after — but artificially postponing the decision to choose the TTD doesn't make sense to me at the moment. Okay.
E
Yeah,
I
would
prefer
to
have
a
two
weeks
window
for
the
release
because,
as
I
mentioned
next
week,
I
am
on
holiday
and
I'm
not
going
to
work
on
anything
like
maybe
like
on
the
ttg.
The
bare
minimum,
so
another
week
would
be
helpful.
E
Just
another
week
to
ship
I
release.
N
Picking a TTD — I think we might as well do it; there's no harm, really. The only catch is that Ropsten is fairly easy to attack, so to say — meaning that if we pick it now, and somebody just starts mining with, say, four times the hash rate, that TTD might arrive tomorrow or something. So we need to also have a contingency on what happens if somebody goes crazy.
O
All right — but hold on, that's not a problem, is it? It will just mean that the clients... Of course, what would be a problem would be if we hit the TTD before we actually made the release — before clients have been released. So — right, right, yeah.
A
Yeah, okay. So I guess —
C
You might be right, but if you're running a Ropsten miner, it's just going to keep building on a single chain, likely, rather than making a hundred little fork chains around the TTD.
A
Okay. So once we have the releases out — you know, there's the blog post on the EF blog — how long do we think we want to give people to upgrade their Ropsten nodes? So, say, two weeks from now we have a blog post that goes up and says these are the versions for Ropsten: is another two weeks before we hit TTD sufficient for people?
J
On a related topic, I would just like to remind people about fork next.
J
If we are deciding about forking Ropsten, I think it also needs to be decided what to use for the fork next value.
H
And so, in theory, you shouldn't make any decisions based on the merge fork block. I know that in the past some clients have implemented some stuff differently, like, except for the fork ID change — and so we can set the merge fork block either before or after the actual fork. So —
R
It's also important to note that fork next was kind of invented for this world where forks are scheduled at a specific block. So it might be better to not set it for the merge.
H
Felix, we have a merge fork block specifically that is not the fork where the merge actually happens — it is only there to split the networks afterwards.
J
We were discussing one reason to set it before: it would force users to upgrade their nodes, because they would see that they are starting to lose connections with other peers. But I don't know if this is valuable to do, and I don't know if it would work as discussed.
H
And the downside to it is that you will alienate all the people that are not upgrading — and this might include miners, so miners might be forked off from the network.
O
Yeah, I have a question-slash-thought. If we set fork next to a high value, that won't split the network until that high value has been hit, right? Because I was wondering if there might be any value in setting fork next to some high value, just so that we can more easily determine, at the peer-to-peer protocol level, kind of how large a percentage of the network is upgraded.
O
No, I mean — for Ropsten, we do the release, and then we can get a kind of good estimate by just connecting to 100 peers and checking how many signal this new fork ID next value, and know how large a share of the network has upgraded.
O
Because if we ever do hit that, then we will actually cause a split. So if we use it only for that purpose, then we should set it to, like, three years in the future, so that we know that two years from now people will be using different software where we have disabled this.
E
Yeah — as far as I understand it, if we set it too far into the future, then it won't cause a split, and we already see this happening on shadow forks. So for mainnet, I would think we should set it to something happening reasonably soon after the TTD is reached — like maybe two weeks, or one week, after.
R
Running into it, yeah. So it's designed to be a block number, and we cannot change the definition now, because all the other software also publishes it as a block number. We could make a new version of, sort of, the ENR entry and things like that — we could create a new system that works a bit differently — but then it won't really be supported by the older software.
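The mechanism being discussed is the EIP-2124 fork identifier: FORK_HASH folds the genesis hash and each past fork's block number into a CRC32 checksum, and FORK_NEXT advertises the next scheduled fork block — which is exactly what the merge, being TTD-triggered rather than block-scheduled, cannot supply. A sketch of the checksum folding, using a synthetic genesis hash and made-up fork block numbers:

```python
import zlib
import struct

def fork_hash(genesis_hash: bytes, past_fork_blocks: list) -> bytes:
    """EIP-2124 FORK_HASH: CRC32 of the genesis hash, folded with each
    past fork block number encoded as a big-endian uint64."""
    checksum = zlib.crc32(genesis_hash)
    for block in past_fork_blocks:
        checksum = zlib.crc32(struct.pack(">Q", block), checksum)
    return checksum.to_bytes(4, "big")

def fork_id(genesis_hash: bytes, past_forks: list, next_fork: int):
    # FORK_NEXT is the next scheduled fork block, or 0 if none is known.
    # The merge has no block number to put here, which is the whole issue.
    return (fork_hash(genesis_hash, past_forks), next_fork)

genesis = bytes.fromhex("aa" * 32)  # synthetic genesis hash, illustration only
id_before = fork_id(genesis, [1_150_000], next_fork=0)
id_after = fork_id(genesis, [1_150_000, 2_000_000], next_fork=0)
# Once a fork block passes, it folds into FORK_HASH, so peers that don't
# know about it no longer match and get disconnected.
print(id_before[0].hex(), id_after[0].hex())
```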
N
No — the problem with fork ID is that the moment the fork passes the block number, all of a sudden it gets enforced, and anyone who tries to connect to you saying that, let's say, they are up to date with the network — but they don't know about this —
A
And
so,
given
that-
and
you
probably,
it
probably
means
that
you
want
people
to
retroactively,
upgrade
the
fork
id
kind
of
at
the
same
time.
Does
it
make
sense
to
target
a
block
number
that's
at
least
over
a
year
out,
so
that
when
we
have
the
next
hard
fork
after
the
merge,
we
can
update
both
block
both
fork
ids.
So
we
can
like
update
this
fake
one
and
we
can
update
the
real
one
to
whatever
the
next
fork
block
will
be.
R
This is kind of what we were saying, right? So it's okay, I think, to basically just try and work around the fork ID for the merge. The consensus that I'm hearing is that the fork ID is a potential source of trouble with the merge, and it's also not possible to set the block number, so we just kind of want to ignore the fork ID for the merge and, like —
R
I mean, we will see — the people who will upgrade for the merge will also upgrade for the fork after, I think, and that fork can literally only be about — I don't know — probably something has to be fixed anyway, and then, you know, we can schedule it by block number and make the fork ID, and everything will be fine again. But I think, specifically for the merge, since it is not scheduled by block number, the fork ID system cannot help at all.
C
Then that's fine. But I also think it's not necessarily unsafe, if we do want to quickly upgrade this after, to just firmly put it at plus three months of what we think the longest TTD would be — then you get the natural segmentation afterwards anyway, without having to upgrade the nodes again. I don't really care; I think that's relatively clean, but otherwise it's fine to also just do nothing.
I
Yeah, I just wanted to mention a kind of counterintuitive aspect of the fork ID when it's added, that was found working on Nimbus. When you add a next fork ID, it's not true that all of the nodes that haven't upgraded will sync up to the block of that fork ID — if they're sufficiently far behind, they actually stop syncing at the previous fork ID.

So we had a situation where, once London had passed and everybody else had reached consensus on London, software we were running that didn't know about London actually only synced up to Berlin, and then it wasn't able to connect to nodes after that. So it's perhaps just something to keep in mind, because you mentioned earlier that you believed all the nodes that haven't upgraded will keep syncing up until the next block associated with the fork ID — it doesn't always work that way.
E
I don't know who said it, but you can have headers from the wrong fork — like, from mainnet, when you're on a shadow fork — and so on. And that happens because on shadow forks the merge netsplit block is set so far into the future that it doesn't cause a split. So if we set it to TTD plus three months, then for three months, if we don't do anything, we'll have this weird situation where you have peers from both proof-of-work and proof-of-—
S
— stake. Let's see. So, if I understand correctly, there are basically three options: one, we don't use fork ID at all, which would still result in the same problem that you just described; or we set the fork three months in advance, in which case we have three months of that problem; or we do a quick upgrade right after TTD and set it — or do some sort of manual workaround — so that we don't have that problem at all. Is that accurate?
H
That,
like
I
wouldn't
I
think,
trying
to
calculate
it
plus
two
weeks
is,
is
a
bit
dangerous
because
we
don't
want
to
have
have
it
before
the
fork,
because
we
don't
want
the
majority
of
miners
to
drop
off
before
the
fork.
So
I
would,
I
would
say,
let's
do
something
like
plus
one
month
plus
two
months
and
and
that's
it
and
that's
basically
been
the
idea
from
the
beginning.
G
Yeah,
so
we
had
still
have
this
issue
somewhat
and
shadow
forks.
When
we
tried
to
sing,
we
were
trying
to
sing
from
wrong
piers,
so
our
work
workaround
there
was
to
when
we
connect
to
the
peers,
ask
about
of
one
of
the
latest
blocks.
We
got
from
beacon
train
about
the
hash
if
they
have
it
and
just
disconnect,
if
they
don't
so
that's
kind
of
a
workaround
that
we
disconnect
piers.
H
Also, these issues on the shadow forks are only this harsh because you have, like, 100 nodes on each shadow fork, but you have thousands of nodes on mainnet. If you turn it around, and most of the nodes actually have the canonical chain, then you won't have these issues of syncing from the wrong peers.
S
Yeah,
but
I'm
I'm
concerned
because
upgrading
this
time
around
is
so
much
harder.
I
have
a
small
fear
in
the
back
of
my
head
that
we
will
have
significantly
more
people
that
don't
upgrade
correctly
or
don't
upgrade
at
all,
just
because
it's
hard
this
time,
whereas
previously,
as
you
know,
just
update
your
docker
image
update
your
package
whatever
now
it's
like.
Oh,
I
gotta
do
a
bunch
of
work,
maybe
I'll
put
that
off
and
not
do
it,
and
so
I'm
concerned
we
might
end
up
with
actually
a
significant
number
of
nodes,
not
updated.
G
So
to
clarify
we're
asking
only
about
the
header,
so
currently,
after
the
after
the
match,
everyone
starts
with
syncing
the
headers
like
backwards.
Probably
so
that's
you
don't
really
disconnect
anyone.
That's
on
the
after
the
match,
train
so.
A
To kind of try and wrap this up: does it make sense to just do nothing with the fork ID on Ropsten, see how that goes, and if we see that it raises a bunch of issues, then try and do, like, a plus-three-month thing on Goerli and Sepolia and see how that goes? I think, Micah, your concern about people not upgrading will be at its truest on Ropsten, because this is where people have the least incentive to actually upgrade their nodes. So, like —
A
Mainnet shadow forks, though, are very different, because on mainnet shadow forks there are, like, a hundred nodes that we control and thousands of nodes that we don't. And so that means that, just statistically, the peers we get are all on the wrong fork from the shadow fork's perspective — which won't be true... it might be true on Ropsten to, like, a 50-50 degree, but not to the, like, 95-to-1 degree. Yeah.
S
I hear what you're saying. My gut tells me that having two peer-to-peer networks that are incompatible with each other, and that don't have a way to distinguish between each other, is likely to cause problems that we may not even foresee — that may or may not hit until mainnet. And so I'm hesitant to just kind of YOLO it and just hope we don't run into anything. But that's just a gut thing; I don't have any actual evidence.
H
I would just say: let's shelve this discussion for now. Let's just say we're not going to schedule a merge fork block on Ropsten — we're only going to do the TTD one. And, I don't know, Geth never had issues finding a good peer and syncing from them, so yeah.
S
The only reason I would push back on that a little bit — and this is pretty weak — is just that if the final solution we come to involves doing something other than just setting a number — like, if we decide to write some extra code — I would really like to see that tested on Ropsten, and so not deciding until later, I feel like, will cut out a handful of potential solutions, whatever this might be. Yeah.
A
Okay, sold. So no FORK_NEXT value on Ropsten, and then back to the TTD discussion.
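For context, the fork ID under discussion is the EIP-2124 scheme: a node advertises a FORK_HASH (a CRC32 checksum folded over the genesis hash and past fork block numbers) plus a FORK_NEXT value announcing the next scheduled fork, with 0 meaning none. The "do nothing" option is simply leaving FORK_NEXT at 0. A minimal sketch of the checksum, using illustrative inputs:

```python
import zlib

def fork_hash(genesis_hash: bytes, past_fork_blocks: list) -> bytes:
    """EIP-2124 FORK_HASH: CRC32 folded over the genesis hash and each
    past fork block number, encoded as a big-endian uint64."""
    checksum = zlib.crc32(genesis_hash)
    for block_number in sorted(past_fork_blocks):
        checksum = zlib.crc32(block_number.to_bytes(8, "big"), checksum)
    return checksum.to_bytes(4, "big")

# A peer then advertises (fork_hash(...), FORK_NEXT); FORK_NEXT == 0
# announces no upcoming fork, which is the option chosen for Ropsten here.
```

The genesis hash and block numbers passed in are placeholders; real clients feed in their chain's actual fork history.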
So if we say we want the releases for Ropsten two weeks from now, that's like the week of May 23rd.
A
That means we then give people kind of two weeks to upgrade their nodes, so we want to have the fork happen on Ropsten the week of June 6th. We had someone on our team, Mario, not Mario Vega, Mario Havel, try to estimate TTDs on mainnet and Ropsten for a while now.
A
He tried a bunch of different models, and this is just a simple polynomial regression, which seems to work the best; it seems to work relatively well up to about a month out. So I would just suggest that we go with the June 8 value, which is roughly in the middle of that week, and this gives us this TTD value, which I'll paste here.
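The estimation approach described can be sketched as follows: fit a polynomial (degree 1 here, the simplest case) of total difficulty against time from recent chain snapshots, then extrapolate to the target date. The sample points below are made up for illustration; the real estimate used observed chain data.

```python
def fit_line(points):
    """Least-squares fit y = a + b*x over (x, y) points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Hypothetical daily snapshots: (days from now, total difficulty).
samples = [(0, 1.00e22), (1, 1.05e22), (2, 1.10e22), (3, 1.15e22)]
a, b = fit_line(samples)
estimated_ttd = a + b * 28  # extrapolate roughly four weeks out
```

A higher-degree fit (as in the actual estimate) works the same way, just with more normal-equation terms; the trade-off is that higher degrees extrapolate less reliably beyond a month or so, matching the caveat above.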
A
Anyone have an issue with that? We can make it look like a palindrome if people really want that, but otherwise I'm also happy to just go with this estimate. I'll also share the GitHub repo in the chat here in case people want to have a closer look. Oh yeah, good question: TTD is terminal total difficulty.
A
It's the total difficulty value on the proof-of-work chain at which we trigger the transition to proof-of-stake. The one I posted in the chat here is the one that would be hit on June 8, which is basically four weeks from now, so it gives us two weeks for the client releases and then two weeks for people to upgrade their nodes.
S
So, as was brought up earlier, scheduling the Ropsten fork far in advance risks someone trolling us and hitting it, you know, next week or tomorrow.
A
S
H
Basically, that's exactly what we did, and what we do, on the shadow forks, and I think there might be a bunch of issues there.
E
S
P
If the very first testnet wouldn't merge until June, that's definitely getting us into territory where we would need to push back the difficulty bomb, in my opinion.
E
O
I would agree, but my view is that this is testing. I don't consider Ropsten a production network; it's a test network. So, therefore, I think the sooner the better, because it gives us better testing for when it's the real thing.
G
S
So, from my perspective, just to play devil's advocate here a little bit: us pushing back the difficulty bomb indicates that we are going to delay the merge, which is good for them, and so that feels like something they would be on board with.
A
Okay, so if we did do June 15 rather than June 8, it also means, I guess both for Nethermind and Erigon, that you'd also want to delay by one week when we actually announce the releases, right? Because it's not just announcing the releases and having three weeks to hit Ropsten; it's more like...
A
There's like a weak majority in favor of June 8th. Yeah, I don't know. Besu, do you have any updated thoughts?
A
Danny left. I think Danny favors... I can vouch that Danny would prefer earlier rather than later. I don't know if that's his personal preference or the aggregated preference of the CL teams, yep.
A
So I think I would also slightly land on June 8, and one of the reasons is that we can definitely update the releases that clients put out; we've done that in the past, even for mainnet.
A
If you look at the London fork blog post, there are a couple of scratched-out releases. So I think if there are going to be some clients, like most of them, that are ready, and if Nethermind and Erigon have a release that's maybe not the one they would prefer, we can kind of start with those, and then if, a week later, Erigon and Nethermind have an updated release...
A
...we can definitely just update the blog post and communicate that. And I also think that if one of the things we do want to test is people configuring their nodes, then the sooner the better, because we might find some issues that we're not aware of about just people running these client combos, and it's something where, like...
A
I think the fact that the release is a bit more polished is probably not a huge deal breaker. So my weak preference is also that I would rather get this into the hands of people to try client combinations as soon as possible, and then just make sure that we also update the release versions for Nethermind and Erigon as soon as there's a new one. I don't know, does that generally make sense?
A
Okay, so thank you, Afri, for the palindrome, which I was too tired to recognize. I'll copy-paste it here, and I'll also just share it in All Core Devs right now. So consider this the TTD value; I'll make it a proper update to the Shanghai spec in the execution repo.
A
And the thing that clients on the CL side need to figure out in the next week is basically the slot heights, well, basically the genesis for the beacon chain and then the slot heights for Altair and Bellatrix, based on this. Does that make sense?
H
A
Okay, thank you, Marius. Okay, so there are three people on the call who wanted to discuss EIPs.
I
doubt
we
can
get
through
all
of
them,
but
if
we,
if
we
stay
on
an
additional
five
minutes,
we
can
give
them
each
five
minutes
yeah.
So
first
up
felix,
you
had
eip
four
four,
nine
three,
eight.
R
Yes, and this can be really quick. This isn't already pre-agreed; I just wanted to let you guys know that, for formal reasons, we are pursuing this EIP. EIP-4938 is about removing the GetNodeData message from the eth wire protocol, and we have discussed this extensively with all client teams that we are aware of.
R
We want to make this change in geth, and we have been wanting to make this change in geth for a very long time. I can only really repeat what is in the EIP: we are set on making this change, because it will allow us to, for example, restructure our database to not store all of the trie nodes by their hashes.
R
We do provide an alternative to this protocol message in the snap protocol, and all of the existing users of GetNodeData can be replaced by the messages in the snap protocol, so it is not without a replacement. But can I just very quickly get from the client implementers some signal that this is okay?
G
We are currently using it for healing. There is work being done to move to snap sync healing, but it's not done yet.
Q
P
We should probably get back to you on that, actually, because the snap sync implementation that we have is pretty solid, but it's not really production-ready yet, so we probably want to discuss before we have an opinion.
R
So what I wanted to say here is that there is no need to implement the complete snapshot algorithm to use the snap protocol for this purpose. Basically, this is just a way for us to say that we want to roll out this new protocol version, eth/67, which will not have GetNodeData. It doesn't mean that eth/66 will go away immediately.
R
We will keep having eth/66 for a while, because phasing out a protocol version takes a good while. So all we really want to do is basically move forward and define protocol version 67, which does not have the message, and then later we will remove version 66 and it will become unavailable.
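The coexistence period described above amounts to ordinary capability negotiation: both versions stay advertised, and each peer pair settles on the highest one they share. The capability tuples and helper below are a hypothetical sketch, not any client's actual API:

```python
# Illustrative local capability set during the phase-out: eth/66 and
# eth/67 advertised side by side, alongside the snap protocol that
# replaces GetNodeData.
LOCAL_CAPS = [("eth", 66), ("eth", 67), ("snap", 1)]

def negotiate_eth(remote_caps):
    """Pick the highest eth protocol version both sides support,
    or None when there is no shared version."""
    local = {version for name, version in LOCAL_CAPS if name == "eth"}
    remote = {version for name, version in remote_caps if name == "eth"}
    shared = local & remote
    return max(shared) if shared else None
```

Under this scheme, dropping eth/66 later is just removing one tuple from the advertised set; peers that only speak 66 then fail negotiation, which is why the deprecation step is the part the other teams want discussed first.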
R
So, for the time being, eth/66 will be served by geth, and I guess all the other clients will serve it in the same way that they have been serving it already. For example, in some clients, like Erigon, the message is not implemented anyway, so yeah.
G
So, question: when would this go in, in terms of the timeline, and when would GetNodeData stop being served?
R
G
So, in terms of defining the protocol as that, I'm fine. In terms of phasing out eth/66, could we have a guarantee that we have a discussion before that?
R
Yeah, yeah, it doesn't mean it will go away tomorrow. We are definitely open to discussing it.
G
In terms of the dates, because we are actively working on it: I think we will be ready by the autumn to stop using GetNodeData. I'm not sure if we'll be ready to serve data for snap sync, which is the harder one, but yeah, I'm fine with the general direction.
U
Hey, yeah, this is basically just to clarify what Gary was saying: we're fine with the new protocol version; it's the deprecating of the old one that gives us a little bit of pause.
A
Okay, and yeah, we can discuss that well after the merge. Yep, okay, moving on. Sorry, just to make sure we at least give other folks the opportunity to speak: greenlucid was the GitHub handle, I'm not sure who that maps to on the call, but they wanted to discuss and get feedback on EIP-5022, which increases the cost of SSTORE when you go from a zero to a non-zero value. Are you on the call?
A
Going once, twice... okay, I will post their issue in the chat here, if people want to chime in, and on Eth Magicians. One thing I did mention: the issue is phrased for Shanghai inclusion, and I reiterated that we've kind of paused those decisions, but they said they would just like to get technical feedback on the EIP. So folks who are interested can have a look there. And then last up we had... I can't even pronounce this handle, "belf". Hopefully I got it right.
A
Oh well, perfect. Walk us through your presentation. We can't hear you.
V
Yeah, yeah, hi, my name is Zion Joe, or Victor, and this is a quick presentation; early feedback wanted for the team. So, basically, the proposal is to add an expiration field in the transaction payload. One question, one valuable piece of feedback I got from Micah, is the potential denial-of-service attack it exposes: basically, it's possible for someone to throw in a soon-to-expire transaction, get it propagated over the network, and cause a denial-of-service attack.
V
That is, I propose that we add a new field in the transaction payload, and for a block to be valid, all of its transactions have to not be expired; expiredBy is a block-number requirement. So if the current block number is 100, and someone throws in a transaction with expiredBy 101 and a high transaction fee, it's possible that it gets propagated over the network but then expires, so within blocks 101 and 102 it doesn't get executed at all. Could that cause a denial-of-service attack?
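The validity rule just proposed can be sketched as a simple per-block check. The field name `expired_by` and the dict shape are illustrative of the proposal, not a finalized spec; this sketch reads the rule as "a transaction is includable through its expiredBy block, and invalid afterwards":

```python
def block_is_valid(block_number: int, txs: list) -> bool:
    """A block is valid only if none of its transactions has expired:
    every tx with an expired_by field must satisfy
    block_number <= expired_by. Transactions without the field
    (pre-existing types) are always acceptable."""
    return all(
        tx.get("expired_by") is None or block_number <= tx["expired_by"]
        for tx in txs
    )
```

The DoS question above then becomes: a transaction with `expired_by` just above the current height can be gossiped, become unincludable a block or two later, and have consumed network bandwidth without ever paying fees.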
V
That's the first question, and so I put a bunch of questions here. First of all, is that really a problem? Because to me it's very natural that some transactions become invalid in the network for various reasons, and expiredBy is just one more reason a transaction becomes invalid. And the argument was that an attacker can, at very low cost, generate a network-wide attack by specifying a low expiredBy number.
V
But my first argument is that it seems like, if they want to attack, they're also risking some fees. So that's one thing. And the second thing is that I have some network-layer mitigations in the proposal, and my question to the client authors is whether nodes are incentivized to adopt a counter-DoS approach at all. I believe they are, but...
O
...made an EIP proposal about expiring transactions, and I know this because I also made one like a year after and had forgotten that he had made one. So there are at least two previous attempts at this. I don't think there's anything dramatically preventing this from a node-implementer point of view, like saying "oh, we really cannot do this, because then we get lost". Yeah, Micah may want to...
O
I just think this has not been picked up, because there have always been other, more interesting modifications to transactions on the table, and this has kind of been seen as a nice-to-have.
S
I'm surprised to hear you say that, Martin, because I could have sworn you were the person who argued fairly strongly that dust vectors from expiring transactions were a problem, because someone can spam the network and be confident that they won't have to pay for it if they can avoid getting included.
O
What I meant is that there are other transaction types which I rejected for that reason, for some of the batch ones where you can do this kind of thing. But for expiring ones, I mean, we already have right now the case where there are big activities, thousands of people sending in transactions, but instead of them being expired and able to be rejected and flushed out...
O
...they just have to flow slowly through the system for days after this big event, and then they will fail. They cost a little bit, but they just hog the bandwidth. So yeah, I think it would be nice to have expiry.
S
I agree that expiring transactions are super useful, and I would love to figure out a way to get them. My concern, which I thought was actually your concern, but I will state it as mine since apparently I was wrong there, is that currently, if you submit a transaction, you have no way to purge it from the mempool, and there are some people out there who have insanely large mempools and can store basically everything forever. And so, if you force the network to propagate your transaction, you are nearly guaranteed...
S
...you will eventually have to pay for that transaction. It is never free: either you pay for it, or some other transaction replaces it later, but that replacement is going to need, you know, that 12.5% increase, so there's a very limited amount of spam you can do. So, given an account or a set of accounts that each have just enough gas for one transaction, you can basically spam once, maybe twice, and then the fees start getting too expensive, whereas with expiring transactions...
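The replacement constraint referenced above can be sketched as a minimum fee bump check. The 12.5% figure follows the remark in the discussion; actual clients pick their own defaults (geth, for instance, defaults to a 10% bump), so treat the constant as illustrative:

```python
# Minimum percentage a replacement transaction must raise the fee by
# before a node will evict the original from its mempool (illustrative).
MIN_BUMP_PERCENT = 12.5

def can_replace(old_fee: int, new_fee: int, bump: float = MIN_BUMP_PERCENT) -> bool:
    """A replacement is accepted only if it bumps the fee by at least
    the configured percentage; integer math avoids float fee values."""
    return new_fee * 100 >= old_fee * (100 + bump)
```

This is what makes unexpirable spam self-limiting: each replacement compounds the required fee, so an attacker's budget runs out after a few rounds, which is exactly the property expiring transactions would remove.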
O
S
Sure, but I mean, you could just have a thousand accounts that each have enough gas for one transaction, and they expire in five or ten minutes, or something like that. And if you're good with base fees, you can, you know, look at base fee history and do a little bit of math and figure that it's unlikely the base fee is going to drop, you know, because this is a Monday afternoon and on Monday afternoons it never drops below 20, or whatever.
A
So, sorry, yeah, we're already a bit over time, and so I guess: what's the best place for people to comment? Do you mind sharing that in the chat as well?
V
Yeah, the best way is to share on Eth Magicians, which also appears as the discussion link on the EIP, as usual. Okay.
A
Awesome. And one thing I'll highlight from the chat: lightclient had two comments about 4337 and 3074 being ways you could also support this feature, basically. So it's probably worth looking at those and seeing if they're a good replacement, because, like Martin was saying, even if there are no DoS concerns, there's still a high chance that there are just higher-priority...
A
...EIPs that end up taking most of the work. And 4337 specifically is not a core EIP, so it means that basically clients can implement it without requiring changes to the consensus protocol, and I'm pretty sure Nethermind already has support for it in at least one project.
V
I think, yeah, that actually answered a good question of mine, which is whether we need a new transaction type or whether we can reuse the existing transaction type and append a new field. It seems like, if people want backward compatibility, it has to be a new one, so that clients that don't want to adopt it sooner can just avoid seeing it before supporting it. Yeah.
A
And yeah, I would recommend, Micah, if you want to share your EIP number: yours was basically, if I remember correctly, a new transaction type with an expiry, so that might be helpful to look at as well. We're already over time, so yeah, I guess we'll wrap this up. Yeah, thank you for the presentation as well; this was one of the higher-production-quality presentations that we've had. Yeah, thanks, everyone.
A
Thanks, everyone, for joining. I've posted the TTD value in the All Core Devs chat, so we can use that for the releases on Ropsten, and I guess we'll expect to put a blog post together sometime in the next two weeks. And then we'll figure out all the stuff about the beacon chains for the testnets on the consensus-layer calls, if it's not already done before then. Yeah, thanks, everyone.