From YouTube: Consensus Layer Call #86 [2022/5/5]
A: Okay, the call should be starting. This is Consensus Layer Call 86, and here is issue 521 on the PM repo. We will focus on the merge as much as there's an appetite for it, do a little bit of client updates, and there's a couple of things that have come up for discussion topics, episub and the builder API, and we'll go from there.
A: Okay, to kick it off: we did have a merge testing call on Monday, I believe. So if there's anything else, in addition to what we covered on that call, that we'd like to cover today, we can cover it now in the testing section. I know that there was a...
B: Yeah, hey everyone. So we had mainnet shadow fork 3 today; I think we hit TTD around 1pm. Pre-TTD we were at something like 99.8% participation, and we were seeing really healthy block production. I think the only issue pre-TTD was that sometimes the Prysm/Besu combination was missing proposing a couple of blocks, and I think the Besu team already had a theory for why that could be, but it's yet to be triaged.
B: Post-TTD we were at 97.6 percent, so the same Prysm/Besu node that had some issues before was then dropping off the chain post-TTD. A while later we noticed something with Prysm/Nethermind as well. Marek was looking into it, but it's also not like a static issue: that combination attested and proposed for a while, then dropped off, and, I think, came back again. But there was also someone looking into it and there might have already been a fix, so it might just need to be updated; I'm not sure.
B: So, all in all, I think it was a great shadow fork. We were almost bugless this time and we're still seeing really healthy block production. I have a monitor enabled on Teku, and I don't really see any blocks being late, so we're seeing blocks produced on time. Besides that, there was an additional test done this shadow fork; I just posted the result of that test as well. Essentially, what we did was spin up a couple of nodes.
B: We allowed them to sync up to the head before TTD; then, about one to two hours before, we paused either the CL or the EL to simulate them having to sync, and the moment TTD was hit, we unpaused them. This was before post-TTD finality. The results were kind of mixed: I think the Prysm/Geth and Lighthouse/Geth combos had some issues syncing up, but Prysm/Nethermind and Lighthouse/Nethermind had no problems; they seemed to be synced up perfectly.
B: This is still new, so I don't think any of the client teams have had time to look into why this happened, but yeah, that's essentially the overall status update. Congratulations, everyone!
A: Great, thanks, cool. The next thing I wanted to talk about, and I believe some of you were on the call, or at least you're aware of it, is some discussion last week on the AllCoreDevs call around the difficulty bomb.
A: I just want to do a quick status update and get everyone on the same page there. On the execution side, the proof-of-work side, it was discussed to still attempt to not defuse the bomb, but to revisit this on the next call and the call after, given the status of testing, mainnet shadow forks and things like that. I think, essentially, in May we need to either be making decisions about forking public testnets, and dates around those, or beginning to make a decision about defusing the bomb. Much discussion ensued. Tim, are there any other relevant points from that discussion you want to share here?
C: No, that's pretty much it! We're in this weird spot where, if everything goes well, we might be able to merge without delaying the bomb, but if we have some delay in the merge, then we'll probably have to. And for everything to go well, we kind of have to start looking at moving to testnets in the next couple of weeks. Yeah.
A: Okay, cool. Any discussion points around that? Obviously it takes two to tango: if we're having issues on the consensus layer, that would cause us to delay the bomb, and if we're having issues on the execution layer, that would as well. So I don't mean to say this decision (because it wasn't necessarily a decision) was made without y'all; it's kind of that we're in this phase in May where we're monitoring whether we're go or no-go to the next testnet phase.
A: Ropsten, Sepolia and Goerli are planned to go through the merge. I believe Ropsten is at the beginning of that list, although that might be up for discussion. And to do a Ropsten merge, we have to make a Ropsten beacon chain, or, you know, parallel to Ropsten, make a beacon chain. So I think, to be prepared for that, and maybe to give that beacon chain at least a couple of weeks of activity before doing the merge, we need to have that conversation now.
C: I think the idea is that we (we being the client devs) would like to no longer maintain it. Not necessarily, you know, the day after the fork, but, like, call it a few months after, to then only have Goerli and Sepolia be the two client-maintained, long-lived testnets. Yeah, people don't like it when we deprecate testnets, so there might be some company that chooses to then maintain Ropsten, but yeah, I guess from our perspective, we don't want the maintenance burden anymore.
A: Right. So we have two beacon chains to make: one for Ropsten, one for Sepolia. I think we should make a plan for them. I think, primarily, our options are low validator count versus high validator count, and permissioned versus unpermissioned. For permissioned, we can use the ERC-20 variant of the deposit contract to essentially restrict the set of validators.
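To illustrate the permissioning idea being discussed, here is a minimal sketch of how an ERC-20-gated deposit contract restricts the validator set: deposits consume a gating token rather than ether, so whoever controls token distribution controls who can register validators. All names and the 32-unit amount are illustrative, not the actual contract interface.

```python
# Sketch only: a token-gated deposit registry. An open deposit contract
# accepts ether from anyone; this variant instead requires the sender to
# hold (and spend) a restricted gating token per validator.

class TokenGatedDepositContract:
    DEPOSIT_AMOUNT = 32  # one validator's worth of the gating token

    def __init__(self, token_balances):
        self.balances = dict(token_balances)  # address -> gating-token balance
        self.validators = []                  # registered validator pubkeys

    def deposit(self, sender, pubkey):
        # The gate: without enough of the token, the deposit reverts.
        if self.balances.get(sender, 0) < self.DEPOSIT_AMOUNT:
            raise PermissionError("sender holds too little of the gating token")
        self.balances[sender] -= self.DEPOSIT_AMOUNT
        self.validators.append(pubkey)
        return len(self.validators) - 1  # validator index
```

In this model, "unpermissioned but token-gated" (as raised later in the call) just means the token is handed out deliberately, e.g. to client teams, instead of being claimable by anyone.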
D: Just to make sure I understood correctly: Goerli will be unpermissioned, right? Correct, yeah. I don't have any opinion on the others.
A: For Ropsten, I think it might make sense for us to also make it unpermissioned, but for us to have a bunch of the validators, just so that if others want to get in on practicing the transition and running validators, they can do so; but we also don't care if it becomes unstable eventually, because we don't plan on maintaining it.
A: So then the question is on Sepolia: do we want permissioned or unpermissioned, given that Goerli would be kind of the unpermissioned one in general?

C: One thing we discussed, like, a few months ago when we were talking about this is that if we have a permissioned one, it's easier to actually test some, like, chaotic states: you know, shutting down half the validators for a week and seeing what happens, and things like that. And we probably don't want to do stuff like that on a testnet which has a ton of end-user applications.
C: Goerli obviously does. So I think it probably makes sense versus...
C: No, not today; like, yeah, today, basically none. So I think it might make sense for us to use Sepolia as, like, a consensus-layer testnet, and people are obviously free to deploy to it as well. But this way, if we decide we want to shut down the validators, or part of them, for a week or two, we can see how the inactivity leak happens, or long periods of non-finalization.
B: I'd like to add, on that point, I do think it makes sense to have Sepolia as permissioned in the end. If you use the permissioned contract, it'll be token-gated, so we could still, like, make it unpermissioned, but people are not using bots to steal, like, the underlying asset. So people don't have to fight over Sepolia ether; they'd rather fight over whatever token we create.
A: Yeah, makes sense. I mean, permissioned meaning large operators could run validators if they wanted to, and if there was demand for individuals to be able to run a validator or two, we could set up one of those faucets, although those are always ridiculously gamed. Okay, is anybody opposed to keeping the validator set on the smaller side?
A: Yeah, I'm okay either way. Does anybody feel strongly? I'm also kind of implicitly assuming that the entities on this call, client teams and otherwise, would help run nodes and validators on this new sustained testnet. Obviously, if I'm incorrect in that assumption, please speak up.
A: Okay, I'm going to open up an issue in the PM repo about the three beacon chains: Prater, which already exists and will take over for Goerli; one to be created for Ropsten, where we can discuss how we want the genesis of that to be, and that will be unpermissioned; and then a permissioned one for Sepolia, which it seems like there's a desire to go for a larger validator set on. And we can...
B: We could also do the two separately. For example, the Ropsten one we could have small, so it's not too much of a task for client teams to spin up nodes, even if it's cost-ineffective, and just have that in, like, two weeks from now. Whereas if we want to have Sepolia at twice the size of mainnet, I think it might take a bit longer for client teams to find a place to run them, and that could be, like, three or four weeks, maybe. And that also kind of follows the order in which we're going to fork.
A: Great. Otherwise, on merge discussions: any technical points people would like to discuss, any issues they would like insight on, how other people are dealing with things? Just general discussion.
E: So take a look, please. Also, yeah, this PR is not only about replacing the terminal block validation with something else; it also covers a blind spot in the spec that was discovered, like, months ago, when I was going through the list of checks against the engine API spec. And yeah.
A: Gotcha. There's also a few other PRs open that I believe will be merged very soon. Based off of conversations at Devconnect, there was 216 open, which is a retake on the engine timeouts, and there's a couple of things related to error codes. So these will be bundled into a release, but if you're following them, you can probably, as they get merged, begin to make PRs against them.
A: Yeah, I think we should sidestep naming them by just calling them what they are, the Ropsten beacon chain, but, you know, now I've been entered into that conversation.
A: Okay, anything else anyone wants to talk about in relation to the merge?
A: Okay, any other client updates that people would like to share?
H: I'll throw some stuff in. I think everyone's seen this, but Michael posted an issue about running fork choice before block proposals. I'll put it in the chat just in case anyone hasn't seen it, but it looks like everyone has. The other thing that we're roughly looking at: we've got a bit of pressure from the community to implement IPv6.
H: Well, we're starting to get Gitcoin bounties; specifically, they're saying that a lot of them have IPv6-only nodes behind, I know, some more involved infrastructure, and so they want IPv6 support. But I'm not entirely sure how that's going to play out, especially, like, when you're discovering other nodes that are mainly IPv4 and can't communicate over IPv6, though a lot of them have dual stack. So there's this kind of complication we have to add into the lower protocols, which is what we're working through.
I: Yeah, I'll chime in as well. So, we released version 2.1.1. This has the merge support; it also has web3signer support. We also have experimental checkpoint sync support as well. It also enables proposer boost, and has a bunch of nice features, like a vectorized SHA-256 optimization. So yeah, that's pretty much it from us.
J: All right. We are quite focused on the merge and the shadow fork testnets, but we are shipping one interesting feature that I'd like to share: Nimbus can now work in a web3signer setup in such a way that we connect to multiple remote signers and obtain partial signatures from them, which are combined. This is similar to the secret shared validator setup, and it was actually requested by some staking pools which have this concern that a lot of employees have access to the validator keys.
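The "combine partial signatures" step being described rests on threshold secret sharing: each remote signer holds a share of the key, and any threshold of partial results can be recombined. Real distributed-validator setups do this over BLS signatures; the toy below uses plain modular arithmetic purely to illustrate the combination step (Lagrange interpolation at zero), so everything here is an illustrative stand-in, not the Nimbus/web3signer protocol.

```python
# Toy 2-of-3 Shamir recombination: shares are points on a degree-1
# polynomial f, and f(0) (the "secret") is recovered from any two shares
# by Lagrange interpolation evaluated at x = 0.

P = 2**61 - 1  # prime field modulus (illustrative)

def lagrange_at_zero(shares):
    """Combine (x, y) shares by Lagrange interpolation at x = 0."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P          # numerator: prod (0 - xj)
                den = den * (xi - xj) % P      # denominator: prod (xi - xj)
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

# f(x) = secret + 42*x; any 2 of the 3 shares recover f(0).
secret = 123456789
f = lambda x: (secret + 42 * x) % P
shares = [(1, f(1)), (2, f(2)), (3, f(3))]
assert lagrange_at_zero(shares[:2]) == secret
assert lagrange_at_zero(shares[1:]) == secret
```

In the BLS setting the same interpolation happens "in the exponent" over partial signatures, so the full signing key never exists in one place, which is exactly the property the staking pools asked for.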
A: All right, any other client updates before we move on to some discussion points?
K: We have some optimization in the backlog around calculating and applying the transition, so the next release will include some optimization there. We are currently working on implementing the builder APIs, and on the merge side there are just minor things to improve; nothing important.
A: Moving on, there's a couple of discussion points. In the chat, H would like to discuss a potential upgrade for gossipsub called episub. H, what's the status on that spec? Is it actually a well-defined spec, or is there R&D to do to actually get it to an implementable place?
H: Yeah. So, from, like, two years ago, I think in, like, 2019, there was a planned upgrade path going from, like, floodsub to randomsub to gossipsub, and then this final thing called episub. And I've been looking into the bandwidth that all our clients are using, trying to minimize it, which we've had discussions with various people about. In particular, I wanted to try and make, like, some of the mesh parameters a little bit more dynamic, because we're seeing quite a few duplicates.
H: But anyway, the concept of episub is to make the mesh more efficient without having to change all that much about gossipsub. I talked to Vyzo, who's one of the main authors of gossipsub, and he is essentially saying there are very minimal modifications needed; he essentially explained the modifications to me. I won't go over them in this call, but they're very minor. So there's some small changes we can do to gossipsub that would be backwards compatible, and that should make our meshes and stuff more efficient.
H: The reason that it hasn't been specced out or built in libp2p yet is because gossipsub seems to work for Filecoin and IPFS and everyone else using it, and no one really has, I guess, the interest to kind of reduce the bandwidth.
H: So I think the current plan, given what Vyzo and I discussed, is for me to do an implementation of these modifications that we can then spec, and, ideally (I'm chasing, hopefully, a Go developer), to either mimic the implementation, or copy whatever we build in Rust and port it to Go, because they have a lot more testing infrastructure in Go, so we can test this thing out before actually chucking it onto testnets.
H: "Is there anywhere that's a good source to read about this?" Yeah, so I think the answer is no. I went through and read all the previous specs for episub and then reached out to Vyzo, and in the call that I had with him, he was just kind of like: don't worry about all that, we just have to do this, this and this, and it's all kind of small. So I can write up some documentation that people can read about it, if there's interest.
E: Yeah, just to confirm: it's not what is proposed in the episub documentation in the libp2p repo, I mean. You want to try some particular mechanics from this spec; it's not like implementing the spec? Because when I was, like, reading it, I was under the impression that it's still in active research and was not implemented. Or is there any reference implementation for episub?
H: Yes. So the previous research was quite involved, and I think I've seen, like, some PRs or some issues around, and also a different kind of spec version. I kind of read through all of these, and the modifications are kind of scattered and convoluted, let's say. The actual changes that we would need, that create what Vyzo calls episub, are essentially that we just add an extra control message.
H: I'm not sure if I want to go into these details, but we keep the mesh the same size, and if we start seeing a number of duplicates from some particular node, we send, like, a choke message, so that they stop sending us full messages on the mesh and only send us gossip messages. So it's more like throttling the mesh, not changing its actual size, and this is just adding a single control message. This is not written anywhere that I've seen on the internet.
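The throttling logic described above can be sketched in a few lines: count duplicate deliveries per mesh peer, and once a peer crosses a threshold, emit a choke so it falls back to gossip-only (IHAVE-style) announcements. The class name, threshold, and return convention are all illustrative; nothing here comes from a written spec, which matches H's point that none exists yet.

```python
# Sketch of duplicate-driven choking for a gossipsub-like mesh.
from collections import Counter

class MeshPeerThrottle:
    """Tracks duplicate deliveries per mesh peer and decides when to choke."""

    def __init__(self, duplicate_threshold=10):
        self.duplicate_threshold = duplicate_threshold
        self.duplicates = Counter()  # peer -> duplicate deliveries observed
        self.seen = set()            # message ids already received
        self.choked = set()          # peers asked to stop full-message forwarding

    def on_message(self, peer, msg_id):
        """Return 'choke' when this peer crosses the duplicate threshold."""
        if msg_id not in self.seen:
            self.seen.add(msg_id)    # first delivery: useful, no action
            return None
        self.duplicates[peer] += 1   # redundant delivery from this peer
        if (self.duplicates[peer] >= self.duplicate_threshold
                and peer not in self.choked):
            self.choked.add(peer)
            return "choke"           # peer should now send only IHAVE gossip
        return None
```

The key property, as described in the call, is that the mesh membership itself is unchanged; choked peers still announce message IDs, so the receiver can fetch anything it missed, and the change stays backwards compatible.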
H: Okay, maybe if there's people that are interested in this, in particular a Go dev, just reach out to me, because I'll probably start doing a little bit of work on this; because if we can make it backwards compatible, then at least the nodes that are running this can get some bandwidth savings.
A: Okay, thank you. Moving on: lightclient wanted to discuss the builder API status. lightclient, can you give us a status update?
F: Yeah, so, a couple of update points and then a few questions. We talked about the builder API a bit on AllCoreDevs last week, and I wanted to get an idea of how important it was for validators to still control the gas limit that they want the blocks that they'll propose to target, even if they're not performing the production of the block, and the overwhelming answer was yes.
F: So we extended some of the signed messages to also sign over a preferred gas limit that the builders should adhere to, and there was a small discussion within that about whether they should be signing over the gas target or the gas limit, because after 1559 the actual independent variable is the gas target. But most clients today still take a gas limit to target what the size of the block will be.
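The "sign over a preferred gas limit" idea can be made concrete with a small sketch. The field names below follow the draft builder API registration message as I understand it (fee recipient, gas limit, timestamp, pubkey); treat the exact shape, and especially the ad-hoc byte encoding, as assumptions for illustration.

```python
# Sketch of the validator's signed preference message for external builders.
from dataclasses import dataclass

@dataclass(frozen=True)
class ValidatorRegistration:
    fee_recipient: str  # execution-layer address that should receive fees
    gas_limit: int      # the proposer's preferred block gas limit
    timestamp: int      # lets relays discard stale registrations
    pubkey: str         # BLS public key identifying the validator

def signing_bytes(reg: ValidatorRegistration) -> bytes:
    # Real clients SSZ-serialize the message and BLS-sign its hash tree root;
    # a stable ad-hoc byte encoding stands in for that here.
    return "|".join(
        [reg.fee_recipient, str(reg.gas_limit), str(reg.timestamp), reg.pubkey]
    ).encode()
```

Because the gas limit sits inside the signed payload, a builder or relay cannot silently substitute its own value; anyone can check the proposer's signature against the limit the block actually used.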
F: We've had a lot of feedback in general on the builder API PR, which is in the execution-apis repo, and I think a couple things have come out of it. Most notably, some teams started implementing it, and they felt that there was a lot of work going into redefining the serialization schemes for beacon types that need to go over the JSON-RPC methods as originally proposed, and there is a request to pivot that API to the HTTP REST style that the beacon API is using.
F: It seemed like all the people in the block construction channel were okay with this pivot, and so I've started rewriting the API in that style and hopefully will have that done in the next couple of days. The core logic will be the same, but, you know, from people who have a lot of experience working with the beacon API in the OpenAPI format, I would love to have some feedback to make sure that the style is consistent with the beacon API and all those other things. So I'll post some more information about that in the next couple of days.
F: When that's available... A couple of questions. One is: right now, these things have been living in the execution-apis repo, and it doesn't really feel like that's the right place. I don't know if anybody has an objection to this living in its own repo, maybe a builder-apis repo under the ethereum organization. Is that something that seems like a reasonable home?
A: It's another piece of software. You had suggested builder-specs instead of builder-apis; the builder-specs repo.
F: Oh, it does? Oh, okay. Yeah, I don't know; it's like the -api or -apis postfix has been used for these specific specs, and the -specs postfix has been used for, like, markdown. It feels like we kind of need both, like there's still some specification-related things, but we can talk about that offline.
F: I'm wondering what needs to be done to keep that moving, because it's been sitting now for maybe a week and a half, and I think that's, like, the main thing on the critical path for consensus layer clients to implement, so that we can actually get these non-consensus messages signed by the validator keys.
A: Is this 206? Yeah, I can just review it right after. It seems like there's been a lot of comments here; was there anything contentious lingering in here?
F: I think the main question that's a little contentious is that people feel it's weird to have an optional element on this method; that was something that came out of the discussion in Amsterdam. I really don't have a preference whether this is a new method or an optional add-on to this existing method. I just want to go down a route.
F: Other than that, does anybody have additional comments or feedback on the builder API, or the timeline that we're on? It feels like this is something that should be integrated into merge testing right now, and we're still at the speccing stage. So I don't know if people are worried about this or anything.
D: You said proposers will provide a gas target and then the builders should respect that. Did you mean must?
N: Well, what about a proposer that actually signed a block that didn't include his gas limit? Now we can, in another...
E: Also, we should, like, take into account that the gas limit may be, like, say, 10,000 gas higher than the current gas limit, and in this case what builders should do is probably, like, increase the gas limit as much as possible towards the gas limit announced by the proposer, right? Exactly.
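This "move toward the announced limit" behavior follows from the execution-layer rule (as I understand it) that each block's gas limit may differ from its parent's by less than parent_gas_limit // 1024. A builder therefore can't jump straight to the proposer's preference; it steps as far toward it as the bound allows, which is a sketch like:

```python
# Illustrative stepping rule: clamp each block's gas-limit change to the
# per-block bound, moving toward the proposer's announced preference.

def next_gas_limit(parent_gas_limit: int, announced_limit: int) -> int:
    # The protocol bound is strict (|delta| < parent // 1024), hence the -1.
    max_delta = parent_gas_limit // 1024 - 1
    if announced_limit > parent_gas_limit:
        return min(announced_limit, parent_gas_limit + max_delta)
    return max(announced_limit, parent_gas_limit - max_delta)

# A small 10k bump (the example from the call) fits in one block:
assert next_gas_limit(30_000_000, 30_010_000) == 30_010_000
# A jump from 30M to 40M instead advances only ~29k gas per block:
assert next_gas_limit(30_000_000, 40_000_000) == 30_000_000 + 30_000_000 // 1024 - 1
```

So "must respect the announced limit" can only ever mean "must step toward it as fast as the protocol allows," which is presumably why the should/must wording came up.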
F: Any other general comments or feedback on the status of the builder API? Do client teams feel like this is something that they're going to have the ability to implement in the next couple of weeks, assuming that we have a spec and some testing infrastructure available in the next ten days?
F: The execution-only things that need to be done are really in the hands of the external builders, who, you know, right now it's mainly Flashbots, and so they have people working on implementing that interface. I'm working on a testing implementation in Mergemock, so that people who are implementing this have something that can respond to requests against.
A: Hey, the distribution's looking much more reasonable on one website; Lighthouse entered the, quote, "yellow zone," because it looks like they might be a third of the network now.
F: ...that's responded to by getPayload. I think maybe we're too late in the game to have that change, but it still seems something that's important: to have some heuristic for CLs to figure out if the relays are just giving you blocks that say they're only going to pay you 0.01 ETH, when just the basic tip from the EL block would be higher. That would be a good start.
E: If I understand correctly, the constructed block is not, like, added to the canonical chain automatically by the EL. I might be wrong.
F: I think that would be the only way right now to get, you know, the full picture of how much was earned. I think you could statically do it by just going through the list of transactions, figuring out how much gas each transaction executed and checking the tip, but that doesn't account for coinbase payments.
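The static estimate described here can be sketched directly from the EIP-1559 fee rules: each transaction pays the proposer an effective tip of min(max_priority_fee, max_fee - base_fee) per unit of gas used. As F notes, this misses direct transfers to the fee recipient ("coinbase payments"), so it is only a lower bound on the block's true value; the field names below are illustrative.

```python
# Lower-bound estimate of a block's value to the proposer from tips alone.

def effective_tip_per_gas(max_fee, max_priority_fee, base_fee):
    # EIP-1559: the tip is capped both by the priority fee and by what
    # remains of the fee cap after the base fee is burned.
    return min(max_priority_fee, max_fee - base_fee)

def static_block_value(txs, base_fee):
    return sum(
        effective_tip_per_gas(t["max_fee"], t["max_priority_fee"], base_fee)
        * t["gas_used"]
        for t in txs
    )

txs = [
    {"max_fee": 100, "max_priority_fee": 2, "gas_used": 21_000},
    {"max_fee": 51,  "max_priority_fee": 5, "gas_used": 50_000},
]
# At base fee 50: full 2-wei tip for the first tx, capped at 1 for the second.
assert static_block_value(txs, base_fee=50) == 2 * 21_000 + 1 * 50_000
```

Note it still needs per-transaction gas_used, which without execution means simulating or trusting the builder's receipts, which is the gap getPayloadV2-style "return the block value" proposals aim to close.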
F: I don't think that's given in the current return of getPayload, right?
D: So I think, with what we have right now, if the execution layer didn't do anything, the best-case scenario is you would have to execute things again, because we don't surface the information that we need, right?
E: Right, but executing a block will cause a delay, and if a CL makes a decision after executing a block and verifying the state, it probably will just miss the opportunity to propose a timely block; like, it reduces the amount of time to propose a block.
E: Yeah, and disseminate it over the network. I would suggest the thing that we discussed at Devconnect: have a getPayloadV2 and, yeah, see how it can be implemented. Probably it can be implemented even pre-merge, or, like, shortly after the merge. I don't think we need, like, a hard fork to release this feature, so it could be a bit independent.
A: I was wondering if there's any sort of, like... should we give, like, a timeout in seconds, like a recommendation: if you don't get a reply back within this time, you should just go with your own block? Because it's pretty...
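The fallback being floated here is easy to sketch: race the relay request against a deadline, and if the builder's block doesn't arrive in time (or the call errors), propose the locally built block. The function names and the timeout value are illustrative stand-ins, not a recommended spec value.

```python
# Sketch of "go with your own block" on relay timeout.
import asyncio

async def choose_block(fetch_relay_block, build_local_block, timeout=1.0):
    # Build (or have ready) the local fallback first, then race the relay.
    local = await build_local_block()
    try:
        return await asyncio.wait_for(fetch_relay_block(), timeout)
    except Exception:
        return local  # relay too slow or erroring: propose our own block

async def demo():
    async def slow_relay():
        await asyncio.sleep(10)       # simulates an unresponsive relay
        return "relay block"

    async def local():
        return "local block"

    return await choose_block(slow_relay, local, timeout=0.01)
```

Catching broadly is deliberate in this sketch: any relay failure mode, slow, malformed, or unreachable, should degrade to self-building rather than a missed proposal.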
A: Okay. Other research, spec, or other discussion points for today?
A: All right. I believe IPv4 support is a must in the phase 0 p2p spec, in terms of the client needing to be able to implement it.
D: Sure, sure, so the client can support it, but that doesn't mean the user on some particular network has access to IPv4, right? If we are actually living in the future where there exist people who only have IPv6 access, the question becomes: do we want to give them good guarantees that they will be able to connect to the network and stay connected, or do we want to say, if you're on IPv6, you're on your own, you may get partitioned off and we don't care?
D: So you have IPv6 in the sense that the libraries you're using all support IPv6, but currently discovery doesn't. Does that sound correct?
H: Yeah. So, inherently, with libp2p and TCP, you can have an IPv6 listening address, so you can set up TCP connections. But for discovering nodes, the way ENRs are defined, you can have an IPv6 address and UDP port, and then an IPv4 address and port, and then, I guess, you have to decide whether you're going to put them in your local routing table and advertise them.
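The ENR shape H is describing can be sketched as a key/value map. As I recall the ENR spec (EIP-778), the defined keys include separate IPv4 ("ip", "tcp", "udp") and IPv6 ("ip6", "tcp6", "udp6") entries, so a dual-stack node advertises both while an IPv6-only node simply omits the IPv4 keys; the RLP encoding and signature are omitted here for brevity.

```python
# Sketch of the dual-stack key/value content of an Ethereum Node Record.
import ipaddress

def make_enr_kv(ipv4=None, ipv6=None, udp=9000, tcp=9000):
    kv = {"id": "v4"}  # identity scheme; the signature is omitted here
    if ipv4:
        kv.update(ip=ipaddress.IPv4Address(ipv4).packed, udp=udp, tcp=tcp)
    if ipv6:
        kv.update(ip6=ipaddress.IPv6Address(ipv6).packed, udp6=udp, tcp6=tcp)
    return kv

# A dual-stack node advertises both address families in one record:
rec = make_enr_kv(ipv4="203.0.113.5", ipv6="2001:db8::1")
assert set(rec) == {"id", "ip", "udp", "tcp", "ip6", "udp6", "tcp6"}
```

The discovery question from the call then becomes concrete: an IPv4-only peer reading this record can still route, but a record with only the "ip6" keys is useless to it, which is where the partition risk comes from.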
A: Yeah, I believe, like, go-libp2p is going to have a lot of baseline support, but it doesn't necessarily play nicely with how discovery is working.
D: So, for the merge, is this something that we care about, or are we okay, at merge time, being kind of under the risk of a network partition between v6-only and v4-only nodes, possibly?
D: So you'd probably let the people who are asking for it know, at least just to set expectations; just tell them: hey, we're working on it, but while you may be able to make something work in some client, you may also get partitioned.
H: Yeah, yeah, yeah, we will do. We're still trying to figure out the best way to make some of these, just, I don't know, preferences, I guess, or biases in discovery, before I tell everyone how an IPv6-only node would work on the network: whether it's just going to be by itself, or whether it can, like, you know, find some dual-stack thing and connect. But yeah, we will inform the people that are asking.
Q: Soon, yep; next week. I'll make some announcements tomorrow. Today, okay.