From YouTube: Merge Community Call #1
Description
Merge Community Call #1
Agenda: https://github.com/ethereum/pm/issues/402
B
Sorry, yes: if anybody does not want to be recorded, we are recording this, and hopefully that's okay. All right, I'll get that spiel again. So this call is the first merge call that Tim and I are running. It's not the only one; Pooja has run some related to this as well, and you should definitely check out those videos. But this is the first one that we're running, and we're going to go over it.

A
Sure. I guess there are a lot of new faces on this call, so maybe it's worth taking five or so minutes to walk through what the merge actually implies from an Ethereum node perspective. Then I have a couple of things that I think people should be aware of at the application and infrastructure level, and then I think we can take most of the call to just answer people's questions or chat about it.
A
One thing that would be helpful is knowing if people have questions: even if we can't answer them on this call, we can schedule calls in the future. I wouldn't say we're early in the merge progress, but we're still at the spot where we have enough time to have dedicated calls about specific topics over the next couple of months. Cool, yeah.

So I guess I'll just share my screen, and I'll post the link to what I'm sharing in the chat here. At a high level, I tried to summarize what the merge actually... oh, I can't actually share my screen, Trent.

B
Embarrassing. Go ahead, try now.
A
The way to think about an Ethereum node after the merge is this diagram that I'm sharing right now: you're going to have to run both a beacon node and what we call an execution node, which is the equivalent of an eth1 client today. Basically, what the merge is, is taking the current eth1 clients and, instead of having them follow proof of work or proof of authority for their consensus and to find the chain head, we have them follow the beacon chain. So you have your whole Ethereum client, which is the sum of your beacon node, which will maintain its peer-to-peer connections like it does right now; the whole peer-to-peer network where attestations and blocks are shared will remain part of the beacon node. Similarly, all the beacon APIs that already exist will remain part of those beacon nodes, and you can query your node directly for that.

Then the execution engine is a modification to eth1 clients, and those will be made available by the different teams, so you'll be able to download the Erigon version of this, for example. This is basically an eth1 client which strips out everything related to consensus; it just relies on the beacon node for consensus, but it'll still be in charge of block execution, validating that transactions are valid, and maintaining its own transaction pool, as well as gossiping transactions.

This is why you also keep kind of the same peer-to-peer layer, where the main difference is that you are not broadcasting blocks anymore on the execution peer-to-peer layer, but transaction propagation still happens there. Similarly, all of the JSON-RPC APIs will still be present. And then, in order to facilitate communication between the beacon node and the execution engine, we've introduced a new API called the Engine API. This is basically the API by which the beacon node will share a new head and a new finalized block with the execution engine, and will also query the execution engine, when it's the validator's time to produce a block, for a valid block that it can share with the network. And at a high level,
A
that's really it. The way we transition at the merge is basically by using a total difficulty value on the Ethereum mainnet. There are a couple of security reasons for that, but at a high level it's just the safest way to go about it. So what will happen is that we're going to have an upgrade on the beacon chain where we'll add a terminal total difficulty value: the kind of final difficulty that we want to see on the proof-of-work chain. The beacon chain will be querying the proof-of-work chain every block, asking it: did you reach this total difficulty? No. Did you reach this total difficulty? No. And at some point it will.

So whichever block has a total difficulty larger than or equal to this terminal value will be the final block on the proof-of-work chain, and from that point on the eth1 clients will begin following the beacon chain for consensus rather than proof of work. One thing to note is that obviously there can be multiple blocks that come in at that time with a similar total difficulty, and then we rely on the beacon chain consensus to decide the canonical block between, say, two competing blocks that were shared at the same time.
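To make the transition rule just described concrete, here is a rough Python sketch of how a client could pick the terminal proof-of-work block: the first block whose total difficulty meets or exceeds the terminal total difficulty (TTD), where its parent is still below it. The TTD constant and the block data are invented placeholders for illustration, not real values.

```python
# Hedged sketch of the terminal-block rule: a block is terminal iff it
# crosses the TTD and its parent did not. Numbers here are made up.
TERMINAL_TOTAL_DIFFICULTY = 1_000  # placeholder; the real TTD is far larger

def is_terminal_pow_block(block: dict, parent: dict) -> bool:
    """True iff this block is the last proof-of-work block."""
    return (block["total_difficulty"] >= TERMINAL_TOTAL_DIFFICULTY
            and parent["total_difficulty"] < TERMINAL_TOTAL_DIFFICULTY)

chain = [
    {"number": 1, "total_difficulty": 800},
    {"number": 2, "total_difficulty": 950},
    {"number": 3, "total_difficulty": 1_020},  # first block at or above TTD
    {"number": 4, "total_difficulty": 1_100},
]

terminal = next(b for b, p in zip(chain[1:], chain)
                if is_terminal_pow_block(b, p))
print(terminal["number"])  # prints 3
```

As noted in the call, several competing blocks can cross the threshold at roughly the same time; this check only identifies candidates, and the beacon chain's fork choice decides which one becomes canonical.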
A
And so after the merge, what the beacon chain, or, sorry, what blocks will look like is kind of like this, where on the outer layer you have what's a beacon chain block today, which contains things like the current slot, the signature, the RANDAO, all the attestations, the deposits, the validator exits. Within those we'll basically add what we call an execution payload, which is the equivalent of an eth1 block today, and this is produced by the execution layer.

So this execution payload is what gets passed around between the validator layer and the execution layer, and these contain exactly what you'd see in an eth1 block today.
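As a rough sketch of that nesting (these are not the actual consensus spec types, and most fields are omitted), a post-merge beacon block body carries the familiar consensus fields plus an execution payload that looks like today's eth1 block:

```python
# Illustrative only: simplified stand-ins for the real SSZ container types.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ExecutionPayload:
    """Roughly what an eth1 block is today; executed by the execution layer."""
    block_hash: bytes
    state_root: bytes
    transactions: List[bytes]  # opaque to the consensus layer

@dataclass
class BeaconBlockBody:
    """Existing beacon chain contents, plus the new execution payload."""
    randao_reveal: bytes
    attestations: List[object]
    deposits: List[object]
    voluntary_exits: List[object]
    execution_payload: Optional[ExecutionPayload] = None

body = BeaconBlockBody(
    randao_reveal=b"\x00" * 96,
    attestations=[], deposits=[], voluntary_exits=[],
    execution_payload=ExecutionPayload(
        block_hash=b"\x01" * 32, state_root=b"\x02" * 32, transactions=[]),
)
```

The point of the shape is the one made in the call: the consensus layer wraps the payload and never executes it; only the execution layer interprets the transactions inside.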
A
So, you know, the hash, the state roots, the list of all the transactions. And again, when you receive a block on the network, it'll go to your consensus layer, and you'll pass it to the execution layer to actually execute the block and make sure that it's valid. Similarly, when you need to produce a block for the network, you'll just query it from your execution layer and then propagate it on the consensus layer. I mentioned earlier there's this Engine API that we're working on, which is the communication interface. We're still finalizing it, but at a high level there are three APIs that are going to be added.

One is called engine_executePayload, which is the consensus layer sending a block to the execution layer to just validate it; the execution layer returns whether it's valid or invalid, and if it's still syncing, it just returns "syncing" and asks for the block to be sent later. The biggest addition is the forkchoiceUpdated call, which the consensus layer uses to tell the execution layer that there's a new head and a new finalized block on the network. Optionally, it can also pass what we call payload attributes, which ask the execution layer to start producing a block, giving it things like the timestamp, the RANDAO value and the fee recipient (or coinbase value) that's required. This is how it works: if the execution layer gets a forkchoiceUpdated call which contains this payload attributes field, it knows it needs to start producing a block.

And then there's a final call, engine_getPayload, which asks the execution layer to return its current best block. So that comes after you've asked it to produce a block, and then you just ask it to send one back. And ideally those are really the only three endpoints we're going to add.
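A minimal sketch of those three Engine API methods, framed as the JSON-RPC request bodies the consensus layer would send to the execution layer. The method names follow the draft Engine API as described in the call; the parameter shapes here are simplified placeholders, since the spec was still being finalized at the time.

```python
import itertools
import json

_ids = itertools.count(1)

def engine_request(method: str, params: list) -> str:
    """Build a JSON-RPC 2.0 request body; actually sending it is out of scope."""
    return json.dumps({"jsonrpc": "2.0", "id": next(_ids),
                       "method": method, "params": params})

# 1. Send a block to the execution layer for validation only.
execute = engine_request("engine_executePayload",
                         [{"blockHash": "0x" + "ab" * 32}])

# 2. Announce a new head and finalized block; the optional payload attributes
#    ask the execution layer to start building a block on top of the head.
forkchoice = engine_request("engine_forkchoiceUpdated", [
    {"headBlockHash": "0x" + "ab" * 32, "finalizedBlockHash": "0x" + "cd" * 32},
    {"timestamp": 1_640_000_000, "random": "0x" + "00" * 32,
     "feeRecipient": "0x" + "00" * 20},
])

# 3. Ask for the best block built so far.
get_payload = engine_request("engine_getPayload", ["0x1"])
```

The design choice mentioned in the call shows up here: the whole consensus-to-execution interface fits in three calls, which is what keeps the communication channel simple.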
A
As I said, it's still being discussed, but we really want to try and keep this communication channel simple. And then, yeah, just worth noting: not much changes on the execution layer. There's an EIP that describes all the changes, EIP-3675, but basically the block itself won't change. The only thing is that any field related to proof of work or to uncles gets set to zero, or, you know, the data structure equivalent of zero, just because we don't need those anymore. And obviously, once the merge happens, there's no more block reward, so that stops.
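A small sketch of that EIP-3675 rule: the proof-of-work and uncle-related header fields are not removed, they are pinned to zero-equivalent values. The field names below are informal; the empty-ommers constant is the well-known keccak256 hash of an empty RLP list.

```python
# keccak256(rlp([])): the hash of an empty uncle list.
EMPTY_OMMERS_HASH = ("0x1dcc4de8dec75d7aab85b567b6ccd4"
                     "1ad312451b948a7413f0a142fd40d49347")

def normalize_post_merge_header(header: dict) -> dict:
    """Return a copy of a header with PoW-related fields zeroed out,
    in the spirit of EIP-3675 (field names here are informal)."""
    h = dict(header)
    h["difficulty"] = 0                  # no more proof of work
    h["nonce"] = "0x" + "00" * 8         # 8 zero bytes
    h["ommersHash"] = EMPTY_OMMERS_HASH  # uncles no longer exist
    return h

post = normalize_post_merge_header(
    {"number": 1, "difficulty": 12345, "nonce": "0xdeadbeef00000000"})
```

Keeping the fields, rather than deleting them, is what lets the block data structure stay unchanged for existing tooling.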
A
But it's worth noting that transaction fees still get processed by the execution engine, and one thing that's not really obvious is that transaction fees get sent to an eth1-style address: they don't accrue to the validator addresses, but to addresses on the execution layer. And then, yeah, here's again a diagram by Danny that shows how the merge happens. At the left side of it you have the proof-of-work chain as it is today.

So you have proof of work, and within proof of work we have our execution layer engines that produce eth1 blocks, and we have a chain of those. Similarly, on the beacon chain we have blocks that are kind of empty; well, they contain beacon chain data, but they don't contain any executable transactions. And then around the merge we're basically just dropping proof of work, and this part that contains what gets executed in a block becomes the execution payload in the post-merge system. We just remove proof of work, and the beacon chain is now our source of consensus.

And then, as I already covered: basically, both the consensus and execution layers maintain a peer-to-peer network, and they maintain their current user APIs. The big thing that we're still working on is sync. Sync obviously needs both parts to interact with each other, and there are a couple of prototypes for different syncing mechanisms, but there's not one that's been fully deployed yet. So I'll probably pause here; it feels like I went on for a while, and I see there's a lot in the chat. I guess there are questions in the chat; I see there's, like, Micah, lightclient and Marius. Okay, yeah. Does anyone have questions or thoughts?
C
A
No. Given that work... I mean, it's the main priority after the merge, but I'm just not comfortable giving, like, a month or a date, given that we haven't done the merge yet. But after the merge, basically the highest-priority thing is beacon chain withdrawals. Gotcha, gotcha.
D
And just for some small additional context related to your question: when we say transaction fees, that also includes MEV-type stuff. There's a good chance, if current history predicts the future, that the MEV stuff might actually outstrip all the other forms of validator rewards and payments. So don't assume that, just because it's transaction fees only, it's going to be some tiny little amount; it could be significant.

A
So there's a question: what do you mean by MEV? So basically, yes, it's still a transaction fee, but...
D
Not all of it comes in as transaction fees; sometimes there are direct payments to what was previously called the coinbase, which will now be the fee recipient. So you'll get a transaction that will show up in a block that you as a validator produce, and the transaction will just directly send money to the block producer. And so, if the block producer puts in some Ethereum address as the address they want to receive fees at, that MEV payment will also go there.
B
I'll just jump in and say really quick: one of the things that we're trying to adapt to is different naming schemes, and many of you are probably familiar with that. The merge, or, you know, this big project, used to be called eth2, but that sent some different messages to people about sequentiality and how the product works. So you've probably heard Tim say a few times "execution layer" and "consensus layer", and these are the terms that we're trying to shift towards, as they are more accurate and they don't have to deal with numbers, because in reality there is going to be one Ethereum; there won't be separate editions of it.

E
Yeah, this has been super helpful; thanks for bringing us all together. I'm curious: from my understanding, there's an existing testnet which everyone is testing this on. Will the merge happen on other testnets also?
A
Yes, so that was going to be kind of my next thing. At a high level, we have a devnet right now called Pithos. I would recommend joining it only if you know you want to be at the very forefront; it's still quite experimental. The best way to join it is on the EthStaker Discord; there's a Pithos channel, and it'll be quite a manual onboarding right now. Throughout November we're working on a second iteration of devnets, and the goal is to get a public, long-standing one before the holidays, one that comes with, you know, READMEs and actual instructions for people to join. That'll be a new network, and hopefully it should have the final spec.

So if you want to see it first, that'll be the place to go. Then we are going to run the merge on several of the existing testnets, like Ropsten, Rinkeby and whatnot. The order and timing is still not determined. There's a high chance Ropsten would be first, because Ropsten has a difficulty bomb on it which might go off sometime in January; if that happens, then it might just make sense to run the merge on Ropsten first. But before we do that, we want to have the new devnets up, ones that we know are working well. Yeah.
A
Sorry, Greg has another question: fee recipient is a new word that we've used, but it maps to the coinbase. It's basically just to make things clearer, because it's not mining, but it's basically the same as the coinbase.

D
But the field itself will be on the execution block, not the consensus block, or slot, I should say. Oh.

G
Tim, can you clarify, from that diagram that you showed, that block propagation will be by the beacon chain? Yes. And so all that sort of pre-consensus, you know, pending blocks moving around, that's going to continue on the p2p layer of the execution engine, but the only way that the execution engine knows about blocks is through that Engine API? Exactly, yeah.
A
Okay, so Ben has a question: will every beacon chain block have an execution layer block in it? Basically, no: you can still have somebody miss their validator slot. Just like today, if your validator is offline when it needs to propose a block, that block will be missed, and we've been looking at the impact of that on EIP-1559, so how we want to treat missed slots with regards to the base fee and the block capacity. There might be some changes that we make alongside the merge just to take potential missed slots into account when calculating the base fee.

We expect, obviously, that validators have an incentive to produce the block, and it becomes even stronger after the merge, because they actually get the transaction fees. But if a validator is offline when it's their turn to produce a block, then that block kind of gets skipped. And then there's a good question also: how do we plan to run full testnets without finalizing the sync mechanism? So we have a couple of prototypes of the sync mechanism; I'm not super familiar with them, I don't know.
H
Yeah, so we're currently working on that. We have a prototype for Pithos, but yeah, there's still some stuff to do.

A
I guess, Mikhail, since you were speaking: the next thing I was going to cover is your new EIP, 4399. Do you want to take a few minutes to walk through that? Basically, not a lot changes for the smart contract layer and for anybody using the execution layer data, but that's one of the changes. Do you want to maybe walk through that, and any other changes that people writing smart contracts, or just depending on the execution layer, should be aware of?
I
Oh yeah, sure. So what this EIP basically does... okay, thanks for sharing the screen. It replaces the existing DIFFICULTY opcode with the new RANDOM opcode. There are several things here. First of all, the number of the instruction is not changed, so it's basically a renaming of DIFFICULTY to RANDOM and a bit of a change in the semantics.

Usually the difficulty is less than... I don't remember what the exact number of bits currently is, but it's less than 64 bits, I guess. And the mechanics, I don't know if we want to dive into the mechanics, but the mechanics is just to use the mix hash instead of the difficulty to keep this randomness inside of the execution layer block.

This is to avoid potential issues with the difficulty field, which is currently used as a source of information for the fork choice rule in the proof-of-work network. So we really want this difficulty field to be zero after the merge, and this is why this EIP proposes to use the mix hash to hold the randomness instead of the difficulty. From the smart contracts perspective, existing smart contracts should be fine, because it doesn't matter how the difficulty is named.

Actually, what matters is that the instruction number is preserved. It will start returning some really large number, not unbounded but capped at 32 bytes, still large with respect to what is returned currently, but it's still going to be a valid randomness output. And for new smart contracts, this is the opportunity to use this RANDOM as a source of randomness in their code. So that's basically it.
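A small illustration of the EIP-4399 semantics Mikhail describes: opcode 0x44 keeps its instruction number, but post-merge it returns the beacon chain's RANDAO mix, a full 32-byte value, while a pre-merge difficulty fits well under 2**64. That gap lets code infer which regime it is in. The sample values below are illustrative, not real chain data.

```python
# Heuristic from the EIP-4399 discussion: a value returned by opcode 0x44
# at or above 2**64 cannot be a realistic proof-of-work difficulty, so it
# must be a post-merge RANDAO mix.
POST_MERGE_THRESHOLD = 2 ** 64

def looks_post_merge(opcode_0x44_value: int) -> bool:
    return opcode_0x44_value >= POST_MERGE_THRESHOLD

pre_merge_difficulty = 13_000_000_000_000_000            # ~1.3e16, below 2**64
post_merge_randao = int.from_bytes(b"\x8f" * 32, "big")  # a 32-byte mix value
```

As the call notes, the returned value is capped at 32 bytes either way; only the magnitude and meaning change at the merge.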
A
H
Yeah, so basically we currently maintain a pending block, which is basically: we just take the current head and apply all transactions on top of it, and this is the new pending state. We need that, for example, for creating transactions: if you have already sent three transactions that are sitting in the mempool and you want to get the nonce for the next transaction.

The problem is, with MEV you don't get to see a lot of the transactions, and the transaction ordering is different, so we cannot really rely on the pending block anymore. And because, when we create this pending block, we need to apply all of the transactions, we're currently spending a lot of CPU time on something that's not really meaningful anymore, and so we would like to deprecate this.
E
Yes, okay. One more question: is the relationship between execution clients and consensus clients one-to-one or one-to-many? Like, can you run multiple execution clients for one consensus client?

H
You cannot run multiple beacon nodes with one execution client, because the execution client has to maintain a state, but you can, of course, run multiple validators with one beacon node. So you can run multiple validators with one beacon node with one execution client, or you could even run multiple validators with multiple beacon nodes, each running with multiple execution layer clients. So yeah, there's the possibility for you to run one validator on top of all 25 combinations of beacon and execution layer clients.
A
And yeah, Ben, you had a couple of questions in the chat about the block time, and whether we should assume that it stays at 12 seconds. So just to summarize: the block time right now is on average 13 seconds, with a very random distribution across that, but at the merge it'll go to 12 seconds, so we effectively get one-second-quicker blocks, but we can miss blocks.

So, you know, you're not necessarily getting one every 12 seconds, and that's probably not going to change soon, but it might change in the future. I guess I'm curious what breaks if the block time changes. Does anything break going from 13 to 12 seconds? I wouldn't think so, but what if it went from, like, 12 to 16 or something like that? I'm curious if there are any applications or tooling that will break.
J
Just in my experience, I've seen contracts developed that made some kinds of assumptions about block times. I actually think the Compound contracts do this, or at least did this; I don't know if there have been upgrades or changes over time when it comes to interest rate calculations and that sort of thing. But yeah, just as a general point, be careful about making assumptions about block times in contracts.
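The timing point above can be made concrete: post-merge, slots tick at a fixed 12 seconds from the beacon chain genesis, so produced blocks land on a fixed time grid even when a slot is missed (the block for that slot simply never exists, while block numbers stay contiguous). The genesis constant below is the beacon chain mainnet value; treat the sketch as illustrative, not as a client API.

```python
GENESIS_TIME = 1_606_824_023  # beacon chain mainnet genesis (1 Dec 2020 UTC)
SECONDS_PER_SLOT = 12

def slot_timestamp(slot: int) -> int:
    """Wall-clock time (unix seconds) at which a given slot starts."""
    return GENESIS_TIME + slot * SECONDS_PER_SLOT

# Pre-merge, block times average ~13s with a wide random spread; post-merge,
# every produced block lands exactly on this 12-second grid, but a slot can
# be empty, so consecutive blocks may be 24s (or more) apart.
gap = slot_timestamp(101) - slot_timestamp(100)
```

This is exactly why contracts that convert "number of blocks" into elapsed time, as in the Compound example above, need revisiting around the merge.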
F
I don't think I'm the best person to share, honestly. I don't know if Danny or Matt wants to talk about it.

D
I can, if Matt wants to chicken out. So, with proof of stake we have several concepts of heads. Currently, in proof of work, we have the latest block: the most recent block your client has seen that has validated the proof-of-work algorithm and validated that the block is legitimate, that the state transitions are accurate, that this is a valid block.
D
Now, as everybody knows, reorgs can happen, and so this block may not always be the current head; at some point in the future it may get reorged out and no longer be in the chain at all. But that's what you get: when you ask the execution client, "hey, give me the latest", you get back that block; it's the most recent thing you have. With proof of stake, we're switching to a mechanism where there are basically three types of blocks. One is what we're calling the unsafe head.

Someone correct me if I'm misremembering the name we're using, but the unsafe head is basically a block that we have seen and that appears to be valid, but not enough people have attested to it, and so we don't have any confidence that this block is going to stick around. We've seen it; it's possible this may be the next block in the chain, but we really can't say for sure. Then, after that, we have what's called the safe head, and this is a block where we've seen it, we've validated it, and a bunch of people have voted on it; they've gotten attestations on it, a bunch of people that are participating in the consensus clients' validation scheme.
D
And by people I mean validators, sorry. So a bunch of validators out there have all said, yep, I think that's going to be the next block, and once that reaches a sufficient volume (I don't know what it is, 33 or 66 or 50 percent, some number), once it reaches that magic number, then we start calling that the safe head. The reason we call it the safe head is because it is very, very unlikely that that block will be reorged out. It still can be reorged out, so it's not a guarantee, it's not finality, but it is very unlikely, and so it's very safe to build on top of that block, use that block, and assume that block is probably going to stick around.

The third type of block we have is post-finality blocks; I don't remember what we're calling it exactly, but it's a block that has been finalized, and the only way that you're going to undo a finalization is with a user-activated fork or something, so user intervention.
D
So when you're building things: if you're building an exchange, you may want to wait to pay people out until you see the finalized head include the transactions. That would be a good way to decide, okay, am I going to pay this person out in fiat or not: you'd wait for the finalized head, and once you see a finalized head that includes the transaction somewhere behind it, then you're safe. If you're building an app that's pretty much anything else, any normal sort of app, or maybe even a merchant payment system where you don't have to worry too much about people doing really complicated reorg attacks, the safe head is probably fine.
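The guidance above condenses into a small lookup: which head an application should read for a given risk tolerance. The mapping is the speakers' rule of thumb from this call, not an official API, and the use-case names are invented labels.

```python
# Which chain head to follow, by use case, per the discussion above.
HEAD_FOR_USE_CASE = {
    "exchange_fiat_payout": "finalized",  # wait for finality before releasing assets
    "merchant_payment": "safe",           # reorg of the safe head is very unlikely
    "generic_dapp": "safe",
    "block_explorer": "unsafe",           # freshest data, may still be reorged
    "high_frequency_trading": "unsafe",
}

def head_to_follow(use_case: str) -> str:
    """Default to the safe head, which is what 'latest' will return."""
    return HEAD_FOR_USE_CASE.get(use_case, "safe")
```

Defaulting to "safe" matches the point made later in the call: asking the execution client for the latest block will return the safe head.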
D
You really should only use the unsafe head if you're doing data analytics, or you're running Etherscan or some sort of block explorer, because then you want very instant information, or if you're doing some sort of high-frequency trading, say; the unsafe head is very interesting there. Again, for most people the safe head is what you want, and so if you ask the execution client for the latest block, you will be getting the safe head. There will be new ways to ask for the unsafe head and the finalized head, but if you just continue doing what you're doing right now, you're going to get the safe head, and the safe head will usually be, and I think Mikhail can correct me on this, somewhere between 0 and 12 seconds behind the unsafe head.

I
Yeah, it should probably be like four seconds, when we see the attestations. If the conditions in the network are good, it should be four seconds behind the unsafe head, so it's really close, actually.

D
Yeah, so if the network's behaving healthily, which the vast majority of the time it should be, you'll get the safe head pretty quickly after the unsafe head, and you'll get finalization some number of minutes later; I forget how many. If the network's unhealthy, then you might actually end up with the unsafe head for quite a while; you could, in theory, end up with the safe head kind of never showing up, and finalization may never show up if the network's very unhealthy.
F
Yeah, I think the interesting thing here for people building applications is: if you are releasing real assets, whether that's money or physical assets in the physical world, and you're looking at a Nakamoto-consensus chain like we have on eth1 today, then you might decide how many blocks to wait until you think a reorg is unlikely to happen. Some exchanges say maybe 35 blocks; at that point, we don't think a reorg is going to happen.

But there's a large spectrum; you could choose five blocks if you're doing something with a much lower value. With this, that spectrum is reduced a little bit: you're either relying on the safe head, which the consensus clients are deciding based on what they see in the network, or you're relying on the finalized head if you need ultra finality and absolute security.

That's also what I would like to know, actually. This is something I was trying to figure out from some of the consensus people: whether they had an idea of how much value is behind the safe head. Because with Nakamoto consensus you can easily determine that if I reorg five blocks, that costs roughly this much in hash rate, and so it would be good to know that number for the beacon chain safe head.
D
Yeah, Mikhail: what is the percentage of validators required to...?

I
It's not that easy to quantify in terms of a percentage of validators. There are a couple of assumptions that we make when we're reasoning about the safe head. There is a four-second synchrony assumption on the network, so every message is propagated to the network participants within four seconds, and in that case we may say that the safe head, if we see the safe head, will eventually be finalized. As for how the safe head is computed, there is a good presentation on it. You're starting with the most recent justified checkpoint and counting how many votes, how many attestations, we have received since this checkpoint for each block in the block tree starting from that checkpoint, and if everything is good and enough votes have been cast for each block in this chain (I'm simplifying quite a bit), then we can conclude whether this head is safe or not.

If we see, like, five percent of attestations for a block, it can be reorged; for LMD GHOST the threshold is fifty percent of votes for a block. But that's not the only condition, so it's better to get that presentation and watch it.
B
All right, really quick, I'm going to jump way back out for a second. I feel like we got a little bit deep into some stuff there, and I know there are some application developers and other people on the call, so I'm just going to share my screen really quickly to show a really simple diagram I put together.

The beacon chain, formerly called eth2, and the current eth1 chain, which we're now calling the execution layer, which is wrapped in this proof-of-work red bubble: as you can see, these two chains have been operating side by side since the beacon chain launched at the end of last year, and eventually they will come together at the merge. The disclaimer being that timelines are approximate and these dates are always subject to change, but we're hoping for, let's say, Q2 next year as the rough idea.
B
But the thing to note here for any application devs is that there's no migration process required. You're not going to have to redeploy your contracts. Ethereum state, you know, addresses, user balances, contract code, any sort of information that tells us what happened in the past on Ethereum and what's going to happen in the future: that state continues throughout. Previously there had been talk of what it would look like for applications, whether they were going to have to choose which shard to go on; the merge is just the replacement of the consensus mechanism, so you don't have to worry about migrating your users, migrating your contracts, migrating your state. This is all going to happen automatically in the background.

Really, unless you're watching, you know, the merge community call, you and your users will not even realize that it has happened, aside from there being more dependable block times; that's probably the most obvious thing that'll change. And then at the top I just put together a quick summary of the things that have happened so far. There's a longer article which goes with these, but the work for the merge has been going on since earlier this year,
B
you know, taking the output of Rayonism to the next level. And right now we're in this last blue blob at the far right, and that's, you know, devnets, iterating on the spec, broadening the participation, and that's going to keep going. The next hack, or the next testnet, or... actually, maybe I'll let Tim talk about this, but we're moving into the next step of this, which is, towards the end of the year, we're going to move into different testnets.
A
I
can
so
basically
kind
of
like
we.
We
think
that
at
the
beginning,
we're
having
like
a
next
iteration
of
devnets
right
now
called
consuege,
which
were
in
which
we're
kind
of
redoing
the
same
thing
we
did
in
greece,
where
we're
gonna
get
every
client
to
implement
the
spec.
A
Then,
once
they've
implemented
the
spec,
you
kind
of
get
one
one-to-one
combinations
between
execution
and
consensus
layer,
clients,
then,
once
you
have
that
working
you
try
to
get
many
too
many,
and
then
you
kind
of
grow
the
amount
of
pairs
that
you
have
on
the
network.
You
send
some
transactions
on
the
network
which
which
basically
test
all
of
the
the
functionality,
for
example,
just
testing
kind
of
the
changes
to
the
difficulty
op
code
and
then
towards
the
end.
We
hope
that
we
have
kind
of
a
def
net.
A
That's run through the transition from proof of work to proof of stake, and that kind of stays hosted so that people can join it. And that's what we're hoping, yeah, that's what we're hoping to get before the holidays. And then this way people can start playing around with it during the holidays, or integrating it right after, in January.
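The one-to-one then many-to-many pairing strategy described above can be sketched roughly like this. The client lists are just illustrative examples, not an official devnet matrix:

```python
from itertools import product

# Illustrative lists of execution-layer and consensus-layer clients
# (examples only; the actual devnet participants may differ).
execution_clients = ["geth", "nethermind", "besu", "erigon"]
consensus_clients = ["prysm", "lighthouse", "teku", "nimbus", "lodestar"]

# One-to-one: each execution client is first tested against a single
# partner consensus client.
one_to_one = list(zip(execution_clients, consensus_clients))

# Many-to-many: every execution/consensus combination joins the network,
# growing the number of pairs that have to interoperate.
many_to_many = list(product(execution_clients, consensus_clients))

print(f"{len(one_to_one)} initial pairs, {len(many_to_many)} total combinations")
```

With four execution and five consensus clients, the combination count grows from four initial pairs to twenty, which is why the many-to-many stage is where most interop issues surface.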
B
C
A
Basically walks through what each of the milestones implements and kind of the status for every client. So, you know, if you're really waiting for your favorite client to have this implemented, this is the spot where they're gonna post updates. But yeah, unless you're following the implementations very closely, this is maybe too in the weeds. Someone...
H
Quickly answer it like this, because it's easier to say: so, you will not run a client that stores both the proof of work chain and the beacon chain. You will run two different clients, one for the beacon chain, one for the proof of work chain. Sorry, I'm in the EF office now, it's really loud here. You will run two different clients. The execution layer client will be syncing from genesis and executing all the blocks and...
H
B
Thanks, Marius. I'm just going through the questions that are coming in the chat. Looks like the next one was from Ben, about communicating with miners. We haven't had any merge-related calls directly with them, but earlier this year we had a few; by "we" I mean Pooja, who's run at least one that I know of. And I've made a point to engage with miners on Reddit, which is the EtherMining and GPU mining subreddits, to try and communicate these things, especially because in the build-up to 1559 we wanted to make sure people were aware of this.
B
I think, you know, proof of stake has been on the horizon for miners for a long time; they're aware of it. But the issue is, you know, since it's been on the timeline for so long, many have just discounted it or they think it's never going to happen. So even if we started communicating these things, that may or may not have an effect. Hopefully they start to pay attention and can see that things are happening and specs are being released, things like that.
B
We have testnets; it's more than just a meme, and it's gonna happen sooner rather than later. But yeah, all throughout this year we've been telling people conservatively that mining was going to end at the end of the year. Obviously that's not the case, but I think we've started the communication process and...
B
I know some of the mining pools are also planning to participate in staking, so they'll be communicating this, again hopefully sooner rather than later, to their constituent hash rate, to make sure that they're aware of what's happening once we get closer to the merge. And I expect that they'll be doing their own sort of messaging and marketing.
B
We're not going to be able to reach every miner, but as for what's going to happen between now and the merge, that's possibly something to consider. But we'll definitely continue at least what we've been doing, which is regular and consistent messaging, going into the miner communities, trying to make sure that they're aware that this is imminent.
A
And one thing I'll add to that is, a lot of people kind of don't upgrade until there's a blog post on ethereum.org, so we're definitely gonna have that. I think what we want before having that is just much more finalized specs and whatnot. And not only for miners, but given the uniqueness of this upgrade, which isn't the same as previous upgrades, where you just tell people, download the new version and that's it.
A
We're probably gonna have to be much more thorough in explaining that, and what people should do: not only miners but, like, you know, people that are running a validator, people that are running an eth1 node and not mining. So we're gonna need to be pretty explicit as to what all the different types of users need to do.
A
D
On the... so, miners: we have discussed quite a bit the various attack vectors that miners could launch against Ethereum on the way out. As we approach the merge, the incentive to be good citizens decreases. The way the merge is designed, and the way the process will happen, is designed specifically to deal with that; like, we could do the merge much simpler if we didn't have to deal with that. And so, yes, we have talked about it.
D
We believe that the way the terminal difficulty stuff is set up mitigates the biggest set of attack vectors without, you know, going completely crazy and delaying the merge by another two years of engineering effort.
I
It's safe, yeah. Right, terminal total difficulty is the trigger for the actual merge, for the transition from proof-of-work to proof-of-stake, and the question that often arises here is why we don't use the block number, as in the regular hard forks. In simple words, the fork choice rule handover between proof-of-work and proof-of-stake happens at the point of transition, and since the proof-of-work fork choice rule is based on the total difficulty, it means that we need to do this handover at a certain total difficulty. Because if we use a block number, there could be a minority fork that is built and withheld by some adversary.
I
And is revealed later, at the point of merge, and this fork may have much less total difficulty, so it could be easier to build; it makes this kind of minority-fork attack possible. And suppose we also have an adversarial proposer, or it could be the same party, and then...
I
The proposer of the first proof-of-stake block can take this minority fork and build a block on top of it. With respect to a block number, block height rule, everything will be okay, but with respect to the work contributed to this fork, it will not be okay, because it will have a minority of the work, and not be the one that would be canonical in terms of the total-difficulty fork choice rule. So that's why the terminal total difficulty is used to trigger the, yeah, the actual transition, I think.
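The terminal-block condition being described can be sketched as follows; this mirrors the terminal proof-of-work block check specified in EIP-3675, with made-up difficulty numbers purely for illustration:

```python
# Sketch of the terminal proof-of-work block check from EIP-3675.
# The TTD and the difficulty values below are made-up illustrative numbers.
TERMINAL_TOTAL_DIFFICULTY = 1_000_000

def is_terminal_pow_block(parent_total_difficulty: int, block_difficulty: int) -> bool:
    """A block is terminal when it crosses the TTD and its parent has not."""
    block_total_difficulty = parent_total_difficulty + block_difficulty
    return (block_total_difficulty >= TERMINAL_TOTAL_DIFFICULTY
            and parent_total_difficulty < TERMINAL_TOTAL_DIFFICULTY)

# Why total difficulty rather than block number: a withheld minority fork
# can match the majority chain in *height*, but it cannot match it in
# accumulated work, so it does not reach the TTD first.
majority_td = 999_990   # honest chain, just below the TTD
minority_td = 600_000   # withheld fork at the same block height

print(is_terminal_pow_block(majority_td, 20))   # True: this block crosses the TTD
print(is_terminal_pow_block(minority_td, 20))   # False: far short of the TTD
```

The second condition (parent still below the TTD) is what makes the terminal block unique on any given branch: only the first block to cross the threshold qualifies.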
H
I
I'll link it, yeah. There is also the rationale section in EIP-3675.
D
This may impact people in this call, and the reason for that is, because we're doing terminal total difficulty, we can't pick a TTD, a terminal total difficulty, until very close to the time. Normally, when we do hard forks, we pick the hard fork block like a month ahead of time. We say, okay, this is going to be the hard fork block; whenever we get there, that's when the hard fork happens.
D
We can't say what the TTD is going to be early, because it could end up that we've gotta wait two years to actually reach that TTD when we meant to plan for like a month. So more than likely what's gonna happen is: clients will be released that have all the code for the merge in, and then some placeholder TTD, maybe. And then, when we're like a week away from when we want to actually run the merge, we'll release... we'll do two things.
D
One, we'll announce what the actual TTD is, and we'll release updated clients that include that TTD baked in. Also, these clients will have a mechanism for overriding the TTD. So that way, if you've already upgraded your infrastructure and everything when we release the clients like a month in advance, you only have to change like one config option in order to use that new TTD that we're going to announce, you know, a week before the merge happens. And so users can either choose to just upgrade their clients again; for home users...
D
It's probably the easiest one. Whereas if you're running some sort of infrastructure, or you're an infrastructure provider, and you want to do a bunch of testing against the clients as far in advance as possible, then you probably don't want to upgrade a week before the merge. In which case you can just change the config option: the environment variable or CLI option or config file or whatever the mechanism is that you use to configure your client. And so that may impact people, and again, it's slightly different than our normal forking process.
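The override mechanism being described can be sketched like this: ship a placeholder TTD baked into the release, and let operators replace it via an environment variable or a CLI flag once the real value is announced. The variable name, flag, and TTD values here are invented for illustration; each client documents its own actual mechanism:

```python
import os

# Placeholder TTD baked into a hypothetical client release; the real value
# is only announced roughly a week before the merge.
DEFAULT_TTD = 2**256 - 1  # effectively "never", until overridden

def effective_ttd(cli_override=None):
    """Resolve the TTD: CLI flag beats environment variable beats baked-in default."""
    if cli_override is not None:
        return cli_override
    env = os.environ.get("OVERRIDE_TTD")  # invented variable name
    if env is not None:
        return int(env)
    return DEFAULT_TTD

# Operators who upgraded early flip one setting once the value is announced:
os.environ["OVERRIDE_TTD"] = "100000000000000000000"  # made-up example value
print(effective_ttd())        # the announced value from the environment
print(effective_ttd(12345))   # an explicit CLI flag wins over the environment
```

The point of this layering is exactly what's said above: infrastructure operators can deploy and test the merge-ready binary weeks in advance, then change a single configuration value rather than redeploying right before the transition.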
D
B
Yeah, the beautiful thing about difficulty leading up to the merge, you know, trying to balance between... are miners going to stay on the network or are they going to leave? Is that it's self-adjusting, in that if too many miners leave, there'll be a ton of profit left leading up to the merge, and it should auto-balance. But those are kind of the things we're thinking about, and it seems like it should be all right leading up to the merge. We are almost exactly at time, Tim.
A
I feel like it probably makes sense to have another one of these, like, a month from now; probably not two weeks, but like a month. Ideally, by then we might not have, like, the Kintsugi devnets, but we probably will have... you know, we'll probably be close enough to it to tell you what to expect. And then, if people have other questions, we can just spend time answering those. But yeah, this was really, really valuable. Thanks to everyone who asked questions and showed up.
A
B
So we like to keep it small when they can be small, and yeah, the recording of this will be uploaded to the EF YouTube channel. Just DM me on Discord or Twitter and I can give you the link to the Discord. Anything else?
A
B
Yeah, or at the very least a rough summary of the stuff that came up, and that would be okay.
A
B
Great, thanks everybody for showing up and participating in the discussions, and we're all excited for the merge. I know everybody has always been excited about the merge, but you know, it's real, it's happening, it's going to be amazing. All right, thanks for coming, everybody.