From YouTube: Merge Implementers' Call 2
B
Okay, hey everybody, let's start. Welcome to the Merge implementers' call number two. There are apparently some problems with Berlin, so probably some Ethereum core devs can't attend this call, but let's just go through the agenda and discuss some items that we can do without them. So, okay, let's start from the first one: we have this new terminology.
B
The key replacement here is that we replaced the "application" term with the "execution" one, so there is the execution layer instead of the application layer. This is to not confuse people with smart contracts and the applications using them — applications built on top of mainnet. That's the purpose of it. Also, the term "layer" is arguably not the best one for execution and consensus, because they're not actually layered. So here we can think more about it.
B
I don't want to spend much time discussing this, but probably it's better to call them subsystems, or engines, or whatever. If people have any ideas, just drop them in Discord and let's discuss it offline. So — anything on the terminology?
B
Pretty cool, let's just move on. So, the execution discussion. My initial idea was to go through the key parts of the execution stuff and ask for updates, or for understanding — probably questions from Ethereum developers. That's the initial idea, and we can probably do this anyway. So: any questions on the communication protocol?
C
Where's the most updated link? The communication protocol is the Rayonism spec that you're maintaining, Mikhail?
B
Yeah right, that's the one. I also put the link to the previous one — the new link, the Rayonism link, is at the top of the previous document. So that's the latest one, anyway.
E
I have a question: I'm not entirely sure how to handle potential issues and errors in this protocol. For example, if the assemble block fails, or the new block fails, or the set head fails — because either the payload is wrong or some internal error happened — I'm not sure how to handle it. The spec doesn't specify anything.
B
Non-existent, also, yeah. The other option is to use the errors in JSON-RPC as we have them today, right?
E
You mean the result — yeah — and error? Is it in the spec?
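The error shape being referred to — a standard JSON-RPC 2.0 response carrying an `error` object instead of a `result` — can be sketched as follows. The method context and error code below are illustrative only, not taken from any merge spec:

```python
import json

def error_response(request_id, code, message):
    """Build a JSON-RPC 2.0 response carrying an error instead of a result."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "error": {"code": code, "message": message},
    }

# e.g. an execution engine rejecting a hypothetical newBlock call
resp = error_response(1, -32000, "invalid payload")
assert "result" not in resp  # a response carries either a result or an error
print(json.dumps(resp))
```

In JSON-RPC 2.0, codes in the -32000 to -32099 range are reserved for server-defined errors, which is one place protocol-specific failures (bad payload vs. internal error) could be distinguished.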
B
We actually, in general, expect that new block will be called, because there is a state transition happening on the consensus side. When the block is assembled and proposed, the state transition is triggered, and it will trigger the call to the new block method.
C
Only if they find a solution does something get added into the block tree.
F
Yeah, but in proof-of-authority chains you would generate the block and add it immediately, so that...
C
You could, presumably — there's not an immediately obvious use case for that.
C
Actually, a valuable question: is it worth the complexity here, in being able to point to anything arbitrary to build on, rather than just building on the head? I mean, presumably...
C
The beacon node keeps the execution engine in sync with what it thinks is the current head, and so if there were a reorg, you would trigger that and then call assemble block — primarily when you definitely know the head and you can tell it the parent hash to build on. But then you're opening up a functionality requirement on the execution engine to be able to build on arbitrary heads, which I don't know is worth the complexity.
B
Because it could be the case that an arbitrary block became the head afterwards. I can imagine a bit of racing between new head and assemble block. So: what if the head has changed while the block was being assembled — what should happen here?
C
I mean, at some point the beacon node has to make a decision about what it thinks the head is and assemble the block based on that. But the idea would be: I begin assembling a block; some other subsystem signals that there's a new head halfway through me assembling it; I ask the execution engine for the transaction payload, but it's gotten a trigger from somewhere else that there's a new head, and now I'm out of sync on that — and this protects against that case.
G
Consistency cases — sure, go ahead.
E
Yeah, so I would say we need to actually think about any concurrent calls to those RPCs. What happens if we had one set head and then a second set head? We should probably queue them — last one wins, for example, or something like that in the implementations. That's important; finalized block is probably not important.
B
So my intuition is that all these messages should be processed sequentially. Set head and new block are causally dependent, so they must be processed sequentially; others could be processed concurrently, but I'm not sure it could be done in all cases.
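The sequential-processing intuition above can be sketched as a single worker draining one FIFO queue — a hypothetical model, not any client's actual implementation; the method names are stand-ins:

```python
from queue import Queue
from threading import Thread

class ExecutionEngineStub:
    """Causally dependent consensus messages (newBlock, then setHead) go
    through one FIFO queue drained by a single worker, so they are applied
    strictly in arrival order."""

    def __init__(self):
        self.applied = []          # messages in the order they were processed
        self._queue = Queue()
        Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            method, arg = self._queue.get()
            self.applied.append((method, arg))  # stand-in for real processing
            self._queue.task_done()

    def submit(self, method, arg):
        self._queue.put((method, arg))

engine = ExecutionEngineStub()
engine.submit("newBlock", "0xaa")
engine.submit("setHead", "0xaa")
engine._queue.join()               # wait until both are processed
assert engine.applied == [("newBlock", "0xaa"), ("setHead", "0xaa")]
```

A real client could still run independent messages (e.g. finalized-block notifications) on a separate path, as the speakers suggest — only the causally dependent pair needs this strict ordering.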
B
Probably, okay. Anything else here? We'll move to fork choice and chain management.
C
Yeah, I just want to highlight again: with that parent hash in there, and there not really being any bounds on it, an assemble block could trigger — not a reorg, because it wouldn't be changing the head — but an arbitrary attempt where you have to go and put yourself into a different state to build a block. There might be complexity there, and it's worth people investigating that over the next week or two, so we can talk about it again next time.
B
Yeah, but I would not enforce these checks on the execution engine, because this is the responsibility of consensus, and in some cases there will probably be a case where consensus switches from one finalized checkpoint to a concurrent one — a case of some forks or whatever.
E
So I have a question about finalized block: how much of the height of the chain might not be finalized yet? I'm not aware of that, and it's important for state management, pruning, implementation things like that. That's a problem that arises here, from what I'm aware of on the eth1 side. The practicality of this problem is: how big could this unfinalized chain that we could reorganize be?
C
That depth — that's normal operation. In the happy case you get pruning at reasonable depths, but you cannot aggressively prune if you're in a time of non-finality, and you could go days without finality. So there's definitely a variance that has to be handled in pruning.
J
So the risk regarding non-finalized state is what happened during Medalla: for a couple of days the chain didn't finalize and we had many, many forks, and in that case, technically, you are...
B
Okay, cool. Let's just move to the fork choice and chain management. I know that people have started to investigate how hard it would be to make the fork choice pluggable, and how big an impact it has on modifying the chain management of their clients.
E
So maybe I will start, from the Nethermind side: it's actually pretty easy — we had it fairly broken down already. The problem there might be — I haven't investigated that much — syncing the network up to the head later and then starting it; so integrating the syncing and the fork choice management itself might be harder. But starting from head, like for the hackathon, is fairly easy.
B
Yeah, because for the hackathon we only have this total difficulty rule for the beginning.
G
I guess I can give an update, if there are no folks from Geth on the call.
H
Yeah, sorry about that. So if the question was what Geth's progress is on these things: yesterday — I think yesterday or two days ago — we had a meeting with Proto, and he kind of ran through various stuff. I guess the...
H
The conclusion was that if you need something for Monday, then probably the closest we can give you is Guillaume's PR, which just kind of hacks things in — it essentially just does inserts directly into the blockchain and hacks around all the internals.
H
We started working on essentially a new consensus engine which does the whole new fork choice rule, but we haven't merged that in yet, and as far as I know it's not finalized yet either. I've also started working on the synchronization, but I'm kind of sidetracked a bit, because in order to make the synchronization work I also need to change some other parts of Geth, and yeah.
B
Yeah — so, any questions on the chain management and the fork choice?
B
Okay, so the sync process. There's been some kind of high-level proposal in this high-level design doc we discussed on the previous call, for how to download the state and do the block syncing.
B
If people have any assessment of whether it's viable or not, or any other input and things to discuss, we can do it right now.
B
Okay, cool — so let's just assume that that will work and talk about it later.
B
Right, I agree — let's just assume for now that it may work. There was a question in the chat, in the Discord; I don't remember where exactly, probably in Discord.
B
Which part would decide on the gas limit and do the target voting — how will this happen after the merge? My basic thought is that it just doesn't change: the execution engine has this voting mechanism, and every proposer will be able to do this as miners do now. So — any other opinions and thoughts on that?
C
Yeah, I'd say by default it remains exactly the same, which is: the block producer — regardless of whether it's a miner, a proposer, or a validator — does that. Similarly to how, with 1559 post-merge, the block producer would be responsible for the base fee for transactions and figuring that out, in a similar way.
H
I mean, we can always add methods to change it, because it's a relatively small thing, but I don't think people generally want to keep tweaking it at runtime. But yeah, if there's a reason to be able to tweak the limits at runtime, it's more than trivial to just add it.
H
Yeah, but essentially, if you look at mainnet, miners generally always run with the maximum gas limit that was deemed safe for the network, and it's not really changed — maybe once every half a year or so. So it's not like you need to constantly tweak it.
B
I think after a consensus upgrade there is no reason to change this part. Okay, so the next item is slot clock ticks. I guess this was missed on the previous call and in the doc, but I think it could be important, because the consensus part has the slot clock, and these ticks should be propagated to the execution side.
B
Yeah, they might change the execution flow — inside of a transaction, inside of a smart contract method it calls. It should also be important for the pending block functionality, because you have to restart the block each time a new timestamp is observed.
C
So it's: either the execution engine can know about time, decide what slot it is, and use that; or it can be told about time and use that.
H
Okay, but then essentially this would mean that the eth1 blocks should also hit the same twelve seconds.
C
This is more of — I think Mikhail is concerned that when you call produce block, you give the timestamp, and that's fine: it gives you a deterministic result. I think Mikhail is worried about systems that are maybe dependent on the timestamp.
H
I think we discussed this with Diederik a couple of days ago: the original RPC APIs also had this "random" thing plus some second field, right, which at least in the past API were just passed along as two more fields, independent of the block. And I just wanted to ask: if we ever want to add those fields back, then we probably need to get them integrated into the header. And since, with this minimal merge spec, we've nuked three or four fields out of the header — for example the mix digest and others — we can always repurpose them if we want to have them in, with minimal damage.
C
Not really, because there's a timestamp field, and the timestamp field's consistency with the slot can be checked on the consensus side. So I don't think you really need another field in there. I think Mikhail is more concerned about the execution engine knowing what slot it is without the context of a new block call, so it can make decisions about things like the mempool.
B
Yeah, so my question was about how mempool transactions are executed — against which block. There is a pending block that is created and restored each time after a new one is received from the wire — after the new block is received and imported.
B
You have to verify — validate — the transaction before propagating it to the wire, right?
H
So for the pending block, yes. I guess the question is — if you want to enforce this 12-second thing, then possibly it would make sense to somehow introduce into the consensus engine a rule that the timestamps for the pending block would also be on this 12-second boundary.
C
I mean, calls to assemble block are only ever going to be on that 12-second boundary, so anything opportunistic like the pending block should respect that. And then there's the question: can the execution engine just use its local time mod these 12-second boundaries, or should it be told explicitly, on a tick from the beacon node — okay, new slot; okay, new slot — so it doesn't have to worry about time-sync issues?
H
No, I think it's safer to just let the pending — I mean, you don't really care what the real-world time is; you only care that it's in sync with your 12-second tick, right? And the pending block is either way just some opportunistic "let's try to execute a batch of transactions and see what happens", but it's...
C
So the worry would be: if I had the beacon node and the execution engine on separate machines, and the pending block becomes, like, one second off — so it's a slightly different slot — then when I actually call assemble block, the pending block's not as useful to me. That would be the reason for the beacon node ticking on that boundary; that would be interesting.
H
Yeah, so honestly — in Geth, if you are not mining, then you are creating these pending blocks, and if you are mining, then you are not creating these pending blocks; rather, you are specifically creating mining blocks, which are a bit different and handled differently. So for validators — for average nodes — they would just try to guess the next time, and they won't care; they won't ever get called to finalize something. And for miners, well, yeah.
H
So the reason I would say it's useless is because you have 4,000 transactions in the pool — maybe even more, if you count the bigger pools — and miners will pick a few. So your local node sees 4,000 transactions, picks 200 to execute, and then you can check the result.
H
Sorry, just one more thing: the only reason we didn't really push for getting rid of the pending block is that it acts as this nice little caching layer. Meaning: I'm maintaining the list of transactions that I think will get included in the network; I pick the 200 best; I run them as a pending block; and there's a fairly high chance that out of those 200, maybe 150 will actually land in the next block.
C
The pre-cache, yeah, okay — so it keeps your cache a little hotter, got it. So if you want to keep that functionality, we pretty much just need to have the execution engine respect mod-12-second timestamps, and then I think you get most of the functionality of today. And even if you didn't, you'd probably get most of the functionality, because most things probably aren't calling the timestamp output.
H
Yeah, so I guess the only request I would have is: if there's this specific behavior that every block will be on the 12-second mark, perhaps just add to the spec that this is to be expected, plus that pending blocks are expected to behave accordingly.
B
Okay, exactly, yeah. And also I was thinking that the pending block might be useful for applications that send some transactions and just read them back from the nodes they are hosted on — I mean, the pending block serves the pending state. Okay, anyway — by the way, then, what is the functionality that is used for miners?
H
Well, Geth is a bit — that's...
H
...a question. So Geth currently, during a single mining cycle, recreates a block multiple times: first it creates an empty block, then it fills it, then it tries to create better blocks with different transactions, and all of them can be mined. This is with the proof-of-work network; with Clique...
C
Yeah, so either works. And if there were a known, say half-a-second, delay to be expected, then the proposer — who is supposed to broadcast right at that boundary — would essentially call it early, to be able to pack the block. But if it's doing the pre-packing, then it can call it later.
B
Yeah, I was thinking about sending not only the current timestamp but also the timestamp of the next slot, to fit this kind of functionality which prepares the block in advance.
B
Okay, cool — yeah, I'll think about it more, and probably add this to the specification as a separate message.
H
Oh yeah, but I mean, if the eth2 chain accurately tracks the 12-second marks — every block is on a 12-second mark — then I can just calculate which will be the next 12-second mark, based on my chain head and the current time. So I don't think that's a problem.
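The next-mark calculation described here is simple modular arithmetic over the genesis time; the numbers below are illustrative:

```python
SECONDS_PER_SLOT = 12

def next_slot_boundary(genesis_time: int, now: int) -> int:
    """Return the first 12-second slot boundary strictly after `now`,
    where boundaries fall at genesis_time + k * SECONDS_PER_SLOT."""
    elapsed = now - genesis_time
    return genesis_time + (elapsed // SECONDS_PER_SLOT + 1) * SECONDS_PER_SLOT

assert next_slot_boundary(0, 25) == 36      # mid-slot: round up to the next mark
assert next_slot_boundary(0, 24) == 36      # exactly on a mark: the *next* one
assert next_slot_boundary(100, 105) == 112  # genesis offsets the 12-second grid
```

This is the "local time mod 12" option from the discussion; the alternative is the explicit tick message from the beacon node, which avoids depending on the two processes' clocks agreeing.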
B
I mean, I would add a separate message which just sends the time — this time update.
H
Actually, is it acceptable — so what happens when the eth2 client wants me to make a block? What's the procedure, what's the timeout, what is the expected propagation time, creation time?
C
What's expected is to begin propagation at that boundary, and so sometimes there's a little bit of pre-work done, because you know that you're about to propose; and then propagation should happen in that sub-second, in normal operation.
H
Yeah, but let's say it takes me half a second — or let's say one second — to produce a block. How does that influence the eth2 consensus? Does...
C
If I wait until the slot boundary and it takes one second, then as long as I still have one to two seconds of propagation across the full network, it's still fine — you're looking for sub-four-seconds between when I begin my job and when you get full propagation. But if there were delays in getting the block that took, you know, a second, then I, as a block producer, would just start my job early, such that at the beginning of the slot...
H
So I mean — I don't think it's a good idea to make the eth2 client the smart one. What I meant is that it takes one second depending on how many transactions I cram in, and it might take less or more. I'm just asking about the worst-case scenario: if I take one second, what happens? Does that break consensus, does that break block production, or is it just a bit unpleasant?
C
It's likely fine; if you're taking two or three seconds, it starts to not be fine.
I
Why would you not — I mean, I guess my assumption is that what a miner does is continuously make new blocks, and whenever they have a block available they start mining on it. Can't you do a similar approach, where you start making blocks from maybe four seconds before your slot time, and whenever you're done you start making the next block with the latest information and send the current one to the beacon node, so that it can immediately make a block? If it's, yeah.
I
No, it shouldn't do that. I mean, my assumption would be that the beacon node knows a block is coming up, tells the execution engine, say — I don't know — six seconds before, and then the execution engine starts making blocks with that timestamp, which would then still be a few seconds in the future, but that doesn't matter. Sure.
E
For example, it depends on hardware — depending on your hardware it can take longer or shorter to produce a block. So if I was implementing eth2, I would do what you suggested: ask for a block as soon as I know I can ask for a block, and then re-ask for a block if possible. That's how I would...
H
You ask for a block, but I don't know: should I make better ones? Should I stop? Will you request once, or twice, or 300 times, or what? Polling is a bit unpredictable, whereas if you make two calls, then at least I know that, okay, I gave you my best block, and I can throw away all that scratch work because it won't be used.
I
I think you can reproduce that with polling as well: the eth2 node just polls, and whenever it gets a block it immediately starts the next request, and uses the last one it got from that sequence, yeah.
I
It would — because you stopped making — I guess it produces potentially one more block than necessary. That would be the only downside, I guess, but that doesn't seem huge.
C
Well, and one signal could also be: if the execution engine is more than a slot past the last call for the slot, you know that no one's going to be asking for it — even if there's some sort of time discrepancy. But then you're starting to make assumptions about time and the relationship between the two, which is probably not great.
B
All right, okay, cool — great. It's now much more clear, at least for me. Okay, so I guess we can move on to the consensus engine — to the consensus. I have a few things to discuss here, and some updates. Okay.
B
So the first thing for consensus is that there is an idea of an improved transition process, which is basically: we have a transition epoch, and when that epoch happens, the consensus node decides on what the total difficulty of the transition will be — the transition total difficulty. It could be done like: take the current...
B
Right, so when the transition epoch happens — yeah — the eth1 data that is in the state, right.
B
We can use this block hash, get the difficulty, and add the difficulty up to the most recent block, probably. Why is this a good idea? Because we get an exact point in time regardless of what the difficulty on the network will be, and we keep this kind of total difficulty mechanism preserved, which has its benefits.
C
The fork — the actual change and update of the consensus code — happens with a lead time before the actual transition and puts the new code in place, and then the transition happens. So doing it dynamically, as a function of that, I think makes sense, because it also removes another thing miners can potentially play with: if 75% of the miners go offline, you know, they don't delay the transition by timing and different things like that.
B
Yeah, so the open question here is how to compute this transition total difficulty — what to use. We can think about it and get back to this discussion. I will also think about what potential ways of doing it we have, in relation to the inputs we already have in the beacon state and the beacon block, and those we can get from the execution engine.
C
Yeah, I guess the actual worst case in hard-coding it, rather than doing it as a function of this transition epoch, is with the beacon chain fork that adds the new functionality: if you set the total difficulty, say, three months ahead, and miners actually sped things up — which is obviously difficult and unlikely — but they sped things up and made the transition total difficulty happen prior to the actual forking of the code. And this prevents that kind of crazy case from happening.
B
Okay, cool. The other thing to discuss is the execution payload size. The biggest field here is transactions, which has a max size of up to 16 gigabytes at the moment. This is because we have to handle two different cases: a few transactions with huge transaction data, and a lot of transactions with no transaction data. That's why it is what it is, and there are two limits — basically on the number of bytes of each transaction and on the number of transactions.
C
Can I add some context? The SSZ lists have a max size because this comes into play in the structure of the Merkleization rules — the structure of the tree — and so these things all have to have a max size. And thus, when you take the current max byte payload and max number of transactions, you get some ridiculous numbers.
H
So this might be a bit unrelated, but maybe not.
H
devp2p itself also has a cap on the message size. That cap is, as far as I know, 16 megabytes, but at least Geth limits the eth-subprotocol packets to 10 megabytes. This means that if somebody mines an 11-megabyte block, Geth will not be able to propagate it; if somebody mines a 20-megabyte block, Ethereum 1 clients will not be able to propagate it with the current specs. That doesn't mean we cannot update it, fix it, extend it — it's just a mental note.
B
Okay, yeah. I think the way to limit this kind of stuff is on the network — by just limiting the gossip message size. Yeah, I was going to say, on the beacon...
H
One more thing to keep in mind is that, at least with the eth1 network, we've kind of seen that unless you have a very, very beefy connection — aka Amazon...
H
For snap sync we are using half-megabyte packets, and I can request packets from quite a lot of peers simultaneously — and we've actually managed to overload the local node. We've managed to have timeouts not because the remote node isn't sending us the data fast enough, but because we just overload our own inbound bandwidth with data, and it takes that much time to get it through.
H
So again, I don't know what the long-term goals are on how to scale things, but we also probably need to keep in mind that network messages should be somewhat meaningful in size.
B
Okay, so — got it. The option is to limit it on gossip. The gas limit should also work, but the gas limit will be checked only after the message is received anyway, and if there is a 16-gigabyte message, nobody wants to download it. So it makes a lot of sense to just refuse this kind of thing on the gossip network stack.
B
So the default option for the consensus side is not to cope with these different transaction types, and just use this opaque-transaction approach — representing a transaction as an RLP string which, from the consensus standpoint, is just a string of bytes.
B
This is what's already done, but we could also introduce a union type with a selector, which for now would allow only one type — this string of bytes — but would give us some forward compatibility with future updates, when we decide to move away from opaque transactions and have them explicitly in the execution payload. That was the idea, right? Yeah.
C
That's the idea — the idea being that you can...
C
...for simplicity, we can do the opaque selector for now, and then in the future deprecate the opaque selector in favor of specific selectors. I think this is an idea from Proto — do you have anything to add?
G
We do not use the union type, but we can still improve it. What we would basically do is define a single prefix byte for the transaction, and then define a single selector for the opaque transaction — for all the existing types in their encoded form. And I'm talking about the envelope, including the inner selector.
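The envelope described here can be sketched as a one-byte selector prefixed to the transaction bytes; for now only one selector (the opaque RLP bytes) would exist, and new selectors could be added later without changing the container. The `0x00` value for the opaque selector is an assumption for illustration, not from the spec:

```python
OPAQUE_SELECTOR = 0x00  # hypothetical selector value for "opaque RLP bytes"

def wrap(tx_bytes: bytes, selector: int = OPAQUE_SELECTOR) -> bytes:
    """Prefix the encoded transaction with a one-byte selector."""
    return bytes([selector]) + tx_bytes

def unwrap(envelope: bytes):
    """Split an envelope back into (selector, transaction bytes)."""
    return envelope[0], envelope[1:]

raw = b"\xf8\x6b\x01\x02"        # stand-in for an RLP-encoded transaction
sel, body = unwrap(wrap(raw))
assert sel == OPAQUE_SELECTOR and body == raw
```

The forward-compatibility point is that a consensus client never needs to parse `body` for the opaque selector; only future, non-opaque selectors would carry structure it understands.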
B
I don't think there's anything else to discuss in this regard here, so if anyone has any opinion, let's discuss it offline. The last item is the uint256 in the beacon chain spec, which is used for total difficulty — which is now about 72 bits, I don't remember exactly — which just exceeds uint64, so we have to use something bigger.
B
So, what are the options here? The first is not eliminating it at all, because it's not used in any arithmetic except for comparison: the spec just compares whether the transition total difficulty has already been reached or not, and that could be handled. The other option would be...
B
...I don't know, to denominate it somehow, but that would probably require some denomination happening on the execution engine side, because it returns the total difficulty. I don't think it's like...
B
Probably it would work, but it just requires additional work. Not sure which way — it's...
C
Essentially, we've avoided bigint arithmetic in eth2 on the node side so far, and right now, with total difficulty, there is a bigint.
B
So the transition happens once a certain total difficulty is reached.
C
And right now the beacon node literally just does not have bigint arithmetic. So the total difficulty could be denominated in a uint64, dropping a bunch of the precision, and you'd also have to have a function that returns it with that lessened precision.
J
We already have bigints for eth1, so we can change it.
C
Cool. Let's ask the Lighthouse folks too, but let's just operate as though we can do a bigint comparison for this one little thing, unless we hear otherwise.
B
If it's just for ordering — oh yeah, right, yeah. But you will receive it from the wire in JSON format, I guess. But you can — I see, if...
B
If it's big-endian encoded, you can even compare it as a lexicographic byte array. That's like...
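The point being made can be checked quickly: for fixed-width big-endian encodings, lexicographic byte order equals numeric order, so the transition check needs no bigint arithmetic. This is a sketch, not client code:

```python
WIDTH = 32  # a uint256 total difficulty, fixed-width

def encode_td(value: int) -> bytes:
    """Encode a total difficulty as 32 fixed-width big-endian bytes."""
    return value.to_bytes(WIDTH, "big")

# around the magnitude discussed for mainnet total difficulty (~2**72)
a, b = 2**72 - 1, 2**72
assert (encode_td(a) < encode_td(b)) == (a < b)
assert (encode_td(b) < encode_td(a)) == (b < a)
assert encode_td(a) < encode_td(a + 1)
```

The fixed width is essential: variable-length encodings break this property (`b"\x02"` would sort after `b"\x01\x00"` even though 2 < 256).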
J
In an exact decimal form, maybe — but otherwise it's more tricky.
G
Sure. So, in the past week or so we have had a few of these office-hours-type calls, which are more casual calls where you can stay in sync with the very bleeding edge of Rayonism. I'll give a summary of what we have done so far.
B
Thanks. I would just go through client updates on where everybody is with regard to Rayonism. So maybe we can start with Geth.
H
So essentially, the first version — the decision was that we're keeping Guillaume's API. I mean, it will be changed and updated to conform to whatever the current API spec is, but otherwise it will still be based on directly injecting data into the chain.
B
Yep, great. Nethermind?
E
So, we have an initial implementation that I am currently testing. I hope I will finish testing and stabilizing it by tomorrow, and if any of the eth2 clients would like to participate in testing integration with the RPC, please contact me — I would be very happy to work on something like that, for example tomorrow.
B
Great. Do you have any guide on how to run Nethermind in Rayonism mode?
E
Yes, I can write something tomorrow, but I would like to, you know, just experiment — check that I didn't miss something in the spec — and I can communicate with an eth2 node, if anyone has this kind of test setup or something. Cool.
B
Yeah, great. I'm actually working on Teku — it's going to be ready tomorrow — so I guess I can experiment with Catalyst and with Nethermind as well. Just reach out.
B
Anybody from OpenEthereum? TurboGeth? Besu? Besu is starting to work on this spec as well.
B
So yeah, let's go to the consensus clients. As I said, I'm working on Teku; it should be ready by tomorrow. I guess we'll test with Catalyst first, then try Nethermind, hopefully. Anybody else? I don't know what their status is, so...
A
Yeah, there's still not much progress on the API side from my end — I'm still reviewing the changes, so I think once the API becomes more formalized I'll pick it up. It will probably take me a bit to catch up, so that's not too bad. Other than that, we built a faucet for our Rayonism work: it's fully configurable, it comes with ready React and Angular projects as a reference, and it's also dockerized.
B
Catalyst — okay, great. Anybody from Lighthouse? Nobody's here. Anyone else want to give an update?
B
Okay, great. Thanks, everybody.
E
I have a question, not an update, if possible: can you give a rough estimate of the dates and plan for the devnet?
G
Can we try the RPC in something more of a shared devnet? I'd just like to try and spin up whatever kind of prototype we have, in the next week or so. I have this example configuration for the first devnet up in the Rayonism repository — I'll share the link again in the chat — and there I specify Monday as the Ethereum 1 genesis, which can be skipped, and then the actual genesis after that.
G
But this is purely an example right now — I would like to confirm it, and I'll probably wait for one or two more office-hours calls to learn about the readiness of clients.