From YouTube: Merge Community Call #2
Description
Agenda: https://github.com/ethereum/pm/issues/419
How The Merge Impacts Ethereum’s Application Layer: https://blog.ethereum.org/2021/11/29/how-the-merge-impacts-app-layer/
AllCoreDevs Update 007: https://hackmd.io/@timbeiko/acd/https%3A%2F%2Ftim.mirror.xyz%2FsR23jU02we6zXRgsF_oTUkttL83S3vyn05vJWnnp-Lc%3Fdisplay%3Diframe
Ethereum Protocol Update - Nov 2021: https://trent.mirror.xyz/82eyq_NXZzzqFmCNXiKJgSdayf6omCW7BgDQIneyPoA
Join ECH Discord - https://discord.io/EthCatHerders
A
Okay, we can probably get started. Hello everyone, welcome to the second Merge community call. I'm Trent; I work with the Ethereum Foundation doing ecosystem work: talking to stakeholders, running things like this. Very glad to see all these new faces and some familiar ones, so welcome to the call. Tim's going to be doing most of the talking, and then I'll probably tap in a couple of other people, maybe Marius, if he wants to talk about people helping with testnets, and I'm sure there will be other people I asked to share something. But yeah, let's get started. I put the agenda in the chat and I'll add it again; I don't know if new joiners can see it. We're just going to go through this, and if questions come up, I think you can raise your hand in Zoom and then I'll unmute you. Actually, I think everybody should have the ability to unmute, which shouldn't be the case, but yeah. Tim, is there anything you want to start with, or should I just jump into the agenda?
C
Jump into the agenda, yeah, go ahead. Cool. Trent did a great job posting some pre-call links, so I strongly recommend people read those if they haven't yet, just because we tried to make it clear what's going to change at the application layer, what's going to change running a node, and whatnot related to the merge. I'll go over them pretty quickly, but yeah.
C
Yeah, and I guess hopefully I can use this as a way to highlight what's in it, and then you all can dive deep into what happens to your specific project. At a high level, the merge shouldn't impact applications built on Ethereum too much, but there are some changes you want to be aware of.
C
First of all, kind of obvious, but after the merge there will be no more proof of work blocks. So basically the contents that are currently the core of proof of work blocks, meaning all the transactions and the metadata around the block hash, the base fee, and whatnot, will all become part of the beacon chain blocks.
C
Relatedly, all the fields in the proof of work block that basically relate to proof of work or to uncle blocks (ommer blocks) are going to be set to zero. We're not going to remove those fields from the block header, just so we don't break any tooling or whatnot, but basically the ommers hash, the ommers list, the difficulty, and the nonce are all going to be set to zero. The one thing that is not being set to zero, but is actually changing in value, is the mix hash, and the reason for this is a bit complex, so bear with me for a second. At a high level, we have this opcode on Ethereum today called DIFFICULTY, which returns the difficulty of a block. It's a pseudorandomness value that people can use, and a lot of smart contracts use it for different reasons.
C
It is not perfect randomness; it's biasable by the miners. But obviously, if we went from setting that to some pseudorandom value to setting it to zero all the time, a bunch of applications would probably break. So what we're doing instead, after the merge, is setting this value to the RANDAO value.
C
So basically the DIFFICULTY opcode, which is opcode 0x44, is not going to point to the difficulty slot anymore; it's going to point to the mix hash slot. We're just going to rename mix hash to random, and also rename the opcode to RANDOM.
C
So if you're a smart contract using DIFFICULTY for pseudorandomness, nothing should break. And yes, you shouldn't use this for actual randomness, but people do, so we just want to minimize the damage there. One thing that's neat about this, too, is that the size of the value returned by the opcode will change: if the value is greater than 2^64, you can query that in a contract and know that the merge has happened.
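As a rough sketch of that trick in plain Python (illustrative only; in a contract you would read the opcode's value directly, and the exact threshold semantics are specified in EIP-4399):

```python
# Sketch of the merge-detection trick described above. Post-merge, the
# DIFFICULTY (0x44, renamed RANDOM) opcode returns the RANDAO value, an
# essentially uniform 256-bit number, while real proof-of-work difficulty
# always fit well below 2**64. So a value above 2**64 implies post-merge.

THRESHOLD = 2**64

def is_post_merge(opcode_value: int) -> bool:
    """Return True if the DIFFICULTY/RANDOM value indicates a post-merge block."""
    return opcode_value > THRESHOLD

# Pre-merge mainnet difficulty was on the order of 10**16 (far below 2**64).
assert not is_post_merge(12_000_000_000_000_000)
# A RANDAO mix is a ~256-bit value, far above 2**64.
assert is_post_merge(2**200)
```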
C
So that's kind of a neat trick that's exposed through an opcode, if you want to know in your contract whether the merge has happened. Again, this is a bit more complicated; hopefully the article itself explains it better. The other noteworthy change is that the block time will change after the merge, and we saw in the last community call that this would affect some contracts. Basically, right now blocks come in on average every 13 seconds.
C
There's a lot of variance on that because of proof of work. After proof of stake, they come in every 12 seconds exactly, except in the cases when the validator who has to propose a block is offline. Then you basically miss a block, and the gap can go all the way to the next slot.
C
This currently happens less than one percent of the time, so in practice it still comes to about a one-second reduction in average block time. The use cases we've seen for this are things like staking reward contracts or liquidity mining reward contracts that try to send out tokens every block, or make an allocation every block.
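The arithmetic behind that one-second reduction can be sketched as follows (the 13-second average and the roughly one percent missed-slot rate are the figures mentioned above, used here as assumptions):

```python
# Rough sketch of the block-time arithmetic discussed above.
# Assumptions: ~13 s average block time pre-merge; 12 s slots post-merge,
# with about 1% of slots missed when a validator is offline.

SECONDS_PER_DAY = 86_400

def blocks_per_day(avg_block_time_s: float) -> float:
    """Blocks produced per day at a given average block time."""
    return SECONDS_PER_DAY / avg_block_time_s

pre_merge = blocks_per_day(13.0)            # ~6646 blocks/day
post_merge = blocks_per_day(12.0) * 0.99    # 12 s slots, ~1% missed

# A contract emitting a fixed reward per block pays out roughly 7% more
# per day post-merge, which is why such contracts need a second look.
increase = post_merge / pre_merge - 1
assert 0.05 < increase < 0.10
```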
C
Last but not least, the safe head. Right now, under proof of work, if you want to get the head of the chain over JSON-RPC, you can ask for the latest block, and it's expected that this block can reorg under proof of work.
C
So applications relying on that should assume that there are going to be reorgs, and in practice the way they handle that is with the concept of confirmations: they'll get a block and then wait six blocks or 30 blocks or something, and once those blocks have passed, they'll assume that whichever latest block has had those confirmations is unlikely to be reorged. Under proof of stake, we can actually get some slightly better guarantees.
C
So we have this concept of a safe head. There's a full presentation linked here that explains the entire theory behind how the safe head is calculated, but at a high level it's a block that we expect not to be reorged under normal network circumstances. The circumstances under which it would be reorged are things like an attack on the network or a large network delay.
C
So it gives you slightly better assurances than the head of the chain. We're going to be changing the JSON-RPC response for the latest block to point to this safe head, which in practice should come within about four seconds of the start of a slot, so it's not going to delay things too much. If you still want to use the absolute tip of the chain for some use cases, we've created this new label called "unsafe" to make that clear.
C
So this will return you the last seen block on the beacon chain, regardless of how many attestations and whatnot there are, so you should expect that it is somewhat likely to reorg. And then, finally, because with the beacon chain we have the concept of finalization, we're also going to be able to return the last finalized block over JSON-RPC, which can serve as a nice, stronger substitute for confirmations.
C
So if you're, say, a crypto exchange or something that usually has this logic where you're waiting n confirmations, you can probably move to using the finalized block. Basically, the condition for that to be reorged would be a major attack on the network where two-thirds of validators are trying to finalize a competing chain, and that would put a third or more of the stake at risk of being slashed, which is over 10 billion dollars today.
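The two settlement policies contrasted above can be sketched side by side (block numbers are purely illustrative; real integrations would read these heights from their node):

```python
# Sketch contrasting confirmation-count logic with finality-based logic,
# as described above.

def settled_by_confirmations(tx_block: int, latest_block: int, n: int = 30) -> bool:
    """Pre-merge style: treat a transaction as settled n blocks after inclusion."""
    return latest_block - tx_block >= n

def settled_by_finality(tx_block: int, finalized_block: int) -> bool:
    """Post-merge style: settled once the transaction's block is finalized."""
    return tx_block <= finalized_block

assert settled_by_confirmations(100, 131)       # 31 confirmations >= 30
assert not settled_by_confirmations(100, 120)   # only 20 confirmations
assert settled_by_finality(100, 100)            # its block is finalized
assert not settled_by_finality(101, 100)        # still ahead of finality
```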
C
So, oh sorry, I lost the agenda now because I think I just clicked through it.
C
So I think it's probably worth pausing there. Oh, and sorry, yeah, there's one more thing. Okay, yeah, it's probably worth pausing there and just discussing. I don't know if people have questions or thoughts about the application layer; then there are a couple more things we can say about what changes for running a node. But maybe let's just pause and see if people have questions or concerns about the application layer before we move on to the actual node architecture.
A
Yeah, anybody: if you want to talk, raise your hand and I can unmute you.
B
I just wanted to add a brief comment: if you are going to use the RANDOM opcode, make sure you understand the attack vectors against it. It is not a perfectly random thing, so don't think that, just because we're giving you a RANDOM opcode, you can now write a dice game naively. You really need to make sure you understand the caveats, the restrictions, and the constraints.
E
Also, about the RANDOM opcode: if you call difficulty in Solidity right now, you will get the randomness. The problem, at least in Geth, is that if you call it in a view function, it will still return the difficulty.
E
So it's only implemented correctly if you use it on chain; calling a view function does not trigger a transaction, and so we don't have the correct randomness there. It's just a bug in Geth right now, but we should fix it.
C
Okay, so yeah, I'll take a couple minutes and talk about how the architecture of things changes post-merge. There's a note here I just want to make sure I don't forget; we'll cover it once we get to what happens post-merge, I think it'll make more sense there. So, sorry, a bunch of stuff at the beginning. Oh no, this is actually the wrong thing.
C
This should be the right one. Okay, so at a high level, at the merge, what running an Ethereum client looks like changes. A full Ethereum node becomes the combination of a beacon node and an execution engine. You'll often hear the beacon node referred to as the consensus layer and the execution engine as the execution layer, and those are basically the equivalent of what an eth1 node and an eth2 node are today. That means, if you are running a node on the proof of work network today, you're going to have to add a beacon node in order to keep track of the head after the merge.
C
And similarly, if you are running a beacon node and/or a validator node today, you're going to have to run an execution layer node alongside it in order to validate blocks. One thing that's also worth highlighting: right now a lot of stakers are able to depend on Infura, because they only need an RPC to basically look at the deposit contract and return that data when they're validating on the beacon chain; post-merge that won't be enough.
C
Two interesting notes there, I guess. One: obviously, doing that as a validator means you get the block reward, but it also means that you get to decide where the transaction fees go, which is really interesting. The transaction fees on blocks will still be sent to what you might call legacy Ethereum addresses, so not your validator's address, but any kind of address on Ethereum. And what that means, I guess, in beacon chain
C
language, is that they're immediately withdrawable, right: they're not locked alongside your validator rewards. So that's a nice property of the system: if your validator proposes a block, you get to keep the transaction fees. Also worth noting: both the beacon and execution layers will maintain their peer-to-peer networks and their sets of APIs. So whether you're using JSON-RPC on the execution layer or you're using the beacon APIs, none of those change, modulo what we just went over with the head stuff. You can still query your node, run tracing and whatnot, and you can still get your information about the consensus layer. And then both nodes will also maintain their peer-to-peer networks, where the beacon node will be connected to a set of beacon nodes and the execution engine will be connected to a set of execution engines.
C
The only thing that changes at the gossip level is that block gossip will happen at the beacon layer rather than the execution layer, because basically the blocks are sealed by the beacon node and then broadcast on the network. Transactions will still be gossiped at the execution layer, though, so that your node can run them as it gets them.
C
Finally, obviously we need a way for those two layers to communicate, so there's an engine API that's been put together, which is basically always one-directional, going from the beacon node to the execution engine. At a high level, the beacon node will provide information to the execution engine about what the latest valid head and the latest finalized block are, and it will also ask it to create blocks and request blocks
C
when it's your turn to propose one. And, sorry, one last thing: you ask it to validate blocks. Once you get a block from the network, you get it at the beacon level; you send what we call an execution payload (the contents with all the transactions, basically the eth1 block) down to the execution engine, run it in the EVM, and the execution engine returns whether it's a valid or invalid block. At a very high level, that's how it works.
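That one-directional flow can be sketched as below. The method names echo the engine API spec of the time (the real methods carry version suffixes like engine_forkchoiceUpdatedV1, and clients speak JSON-RPC); the handler logic here is purely an illustrative stand-in, not a client implementation:

```python
# Illustrative sketch of the engine API flow described above: the beacon
# node drives, the execution engine responds.

def handle_engine_call(method: str, params: dict) -> dict:
    if method == "engine_forkchoiceUpdated":
        # Beacon node tells the execution engine the latest head,
        # safe, and finalized block hashes.
        return {"payloadStatus": {"status": "VALID"}, "payloadId": None}
    if method == "engine_newPayload":
        # Beacon node sends an execution payload (the transactions, i.e.
        # the old eth1 block body) to be run in the EVM; the engine
        # reports whether the payload is valid. Validity check is a stub.
        valid = params.get("blockHash") is not None
        return {"status": "VALID" if valid else "INVALID"}
    raise ValueError(f"unknown method: {method}")

resp = handle_engine_call("engine_newPayload", {"blockHash": "0xabc"})
assert resp["status"] == "VALID"
```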
C
This is how it works. Again, we went over this picture, but you're going to be in a spot where all of the consensus data lives on the beacon chain, and the beacon block contains this execution layer payload, which contains the transactions as well as some other data that's in the current eth1 block header. Here we go into detail on the different calls of the engine API. This was written a month or so ago,
C
so I'd recommend looking at the spec, and obviously anything in the spec takes precedence over this, but I still think the general architecture is the same, and this basically gives you an idea of how the merge actually happens. It's quite similar to the picture we had above, but at a high level: right now we have blocks on the beacon chain which don't have anything in them except this consensus metadata.
C
We have these blocks on proof of work, which obviously have some data about proof of work and then all of the transactions, and the merge is triggered by a total difficulty on the proof of work chain. So once we hit a certain total difficulty, which we call the terminal total difficulty, we basically say that the block after the one which equaled or exceeded this terminal total difficulty will be proposed by a validator on the beacon chain rather than mined on proof of work. So you can imagine this image:
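The terminal-block rule just described can be sketched as a single predicate (a simplification of the transition logic; parameter names are illustrative):

```python
# Sketch of the terminal total difficulty (TTD) rule described above.
# A PoW block is "terminal" when its total difficulty first reaches the
# TTD; the next block is produced by the beacon chain instead.

def is_terminal_pow_block(block_td: int, parent_td: int, ttd: int) -> bool:
    """True for a PoW block whose total difficulty first meets the TTD."""
    return block_td >= ttd and parent_td < ttd

TTD = 1_000
assert not is_terminal_pow_block(block_td=990, parent_td=980, ttd=TTD)
assert is_terminal_pow_block(block_td=1_005, parent_td=990, ttd=TTD)
# A child of a terminal block cannot itself be a valid PoW block:
assert not is_terminal_pow_block(block_td=1_020, parent_td=1_005, ttd=TTD)
```

Note that several competing blocks can each satisfy this predicate, which is exactly the depth-one fork situation discussed below.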
C
You have these blocks in parallel, then you have the next one. The second proof of work block is the one which would have hit the terminal total difficulty, and that means that afterwards the next block is fully produced by the beacon chain.
C
So you can see that there's no more proof of work, and then all this content, which has the transactions and whatnot, becomes part of the beacon chain blocks. And it's possible that at this point there are several competing blocks that each look like the last proof of work block, because they all need to hit this terminal total difficulty, and if they do, their children cannot be valid proof of work blocks.
C
So we'll possibly get a set of competing blocks, but that tree will only have a depth of one, and then the beacon chain will choose which one is the canonical block, and at some point we'll finalize one.
C
So if you're running, say, an exchange, or again something that's reliant on confirmations and on making sure reorgs don't happen, then you basically want to wait for the first finalized block after the merge. At that point reorgs are extremely unlikely, again except in the case of a major attack on the network, and by then the merge is basically over: we're fully on the beacon chain. And we already covered this, yeah. I think that's pretty much it.
C
Again, we'll highlight: outsourcing your execution engine to Infura or another similar provider will not be possible after the merge, mainly because you just can't produce blocks if you do that. And over time we'll also have a proof of custody added into the design, which will penalize you if you did decide to do it. So this is really the right time to get to running your own execution layer.
C
And I think that's all I had. EIP-4399 we covered; that's just the one that changes the DIFFICULTY opcode to RANDOM. There is a merge spec in the execution layer specs folder, so we have a full spec now, which only has two EIPs, but if there are any other EIPs that come up or whatnot, they'll be added there. Just like any other kind of network upgrade, there's a spec.
C
I guess I can then end on this: what we've been doing for the past month is trying to spin up devnets every week with the latest specs and get the different clients to communicate with each other. We're hoping to spin up one more next week, and then in the second week of December we can spin up a more permanent one, which we'd leave up and running throughout the holidays and maybe early January, so that folks who want to understand and play with this have something relatively stable to use.
C
Yes, so that's kind of the goal: expect, in the next two weeks, a Kintsugi network that's up and running. And Marius, who's on
C
the call, has put together a great guide about trying to get nodes running on the network, and there are some types of things that would be helpful to try and test if you do want to help with testing. And I guess just generally: if you run an application, infrastructure, or tooling, telling us what breaks is really helpful feedback, the earlier the better. I think one thing that's worth stating is that when we have these network upgrades,
C
we obviously try to leave time for people to upgrade their nodes, and typically that's like one to two months. We understand that the merge is much more, I guess, very different from regular network upgrades, and I think one part where the community can really help is by trying stuff early. We can hopefully minimize the delay between when all the code is done in clients and when this goes live on mainnet.
C
You know, in the world where nobody tries anything until we have a final, proper release, it might be several months of people trying things, figuring out what breaks, and getting comfortable with it. Hopefully, by having these devnets, we're able to accelerate that a little bit and get to proof of stake a bit quicker.
F
Yeah, so just to give you guys a bit of scope about the devnets: devnet 2 is still meant to introduce testing and running nodes to the wider community, and a lot of clients are still figuring out small bugs or differences, so don't expect it to be extremely stable.
F
Like Tim mentioned, we'll have devnet 3 coming out next Tuesday, and a public testnet that's coming out on the 13th and 14th. So if you're an infrastructure provider, or if you're running any sort of Ethereum-based tooling, please do start testing now and figure out where all the tooling lies, etc.
F
I've posted a link in the chat that's a compilation of all the tools we have right now. So there's a beacon explorer, a regular Blockscout explorer so you can check transactions, and a faucet, so you can withdraw some ETH, deploy smart contracts, and other nice stuff. There's also an RPC if you don't want to sync your own node, but we do recommend that you sync your own node, just to get to know how things work. Please let us know if something breaks, and yeah, I'll let Marius take over for the testing talk.
E
On testing early: we created some documents about how to set up some of the clients. Not all of them are there, so if you're really interested in running a particular client combination or something, do that, test it, and put it in the document. And if you have any questions about testing, just DM me, either on Discord or on Twitter; you'll probably find me because of my really unique last name.
E
I already have DMs from over 400 people right now that are interested in doing it, and I also set up a page of ideas for what people could work on if they wanted to. If you have other ideas for things we should really test before the merge, you can just add them to this document. That's it.
A
Let's see if there's anything else left on the agenda. I think we've covered everything, so we can open it up to anybody else who has questions. We've still got quite a bit of time, so if you're shy or unsure of how to phrase your question, don't be. Yeah, go ahead, Omar.
H
A quick question on the diagram of the execution layer and the consensus layer: there seems to be a one-to-one relationship across the engine API. Is that something that's true, or just a limitation of the diagram? Can you run one beacon node and have multiple execution layers talk to that beacon node?
E
Yeah, so you can do that. You cannot run multiple beacon nodes with the same execution layer; that's not working with the current spec.
E
But yeah, you can run one beacon node with multiple execution layers and take the majority vote of them, for example. Peter has been working in his free time on a client that does exactly this: it takes the majority vote of, like, three or four different execution layer clients for one consensus layer client.
I
A comment here: yes, you can have multiple execution layer clients with one beacon node. It's just a matter of propagating all the messages from this beacon node, over the engine API, to all these execution clients.
I
But the opposite case, when you want to run one execution client to serve multiple beacon nodes, is not supported by the engine API spec, and it will not be supported. In theory, there is one thing that can come into conflict here, which is the update of the fork choice state coming from multiple beacon nodes to one execution layer client. In theory, if you have a kind of master beacon node that is the only one sending these forkchoiceUpdated messages, and the others do not update the fork choice, it's possible to do this kind of setup. But it's more complicated and there are implications, so this is why it's not supported by default by the engine API spec. So that's it.
H
To picture it: imagine you have two beacon nodes that think there are two different heads on the chain at some point in time. That's possible because of network delay and other factors, so the execution layer client will receive two conflicting forkchoiceUpdated messages from the two beacon nodes, and it's difficult to decide what to do in this case. That's why this setup is not supported by default.
J
You'd need to have a separate fork choice pointer for each of them, so you'd need this many-to-one relationship, and the thing that is not in the spec is that you'd need to somehow differentiate which messages come from which beacon node. We might experiment with this at some point, but we don't have capacity right now, so maybe closer to release.
B
Another thing to keep in mind for anyone wanting to go down this path is that at some point in the future, it is likely that we will start requiring proof of custody when producing blocks, I believe. So I just want to recommend caution: don't sink a huge amount of engineering effort into this, because there may be changes to the protocol that critically break it.
A
All right, I think we covered that. Thanks for the question, Omar. Anybody else with a question? Otherwise, maybe we could talk about what Raul is talking about; maybe a little tangential, but it's an interesting discussion. Yeah, go ahead, Noam.
K
Hey, I wanted to circle back to the devnet discussion. Two questions here. One: during the last community call, you guys mentioned that difficulty bombs might go off on existing testnets like Ropsten, and I think the timeline there was slated for around January. Is that still the case?
C
Possibly. We haven't made a call on that yet; I think it depends on how far along the client implementations are, but we would ideally like to merge Ropsten before the difficulty bomb goes off on it.
A
Is it true, I believe, that the Geth team is going to propose a new testnet to replace the proof-of-work testnets?
E
Which has already been started, but there were some issues with setting up the servers, so we haven't publicly announced it yet.
E
But yes, we plan to create a new proof-of-work testnet. Of all the other proof of work testnets, Ropsten is pretty big already and not really suitable for testing, so we would like to create another one.
K
And somewhat related to this: there are kind of two use cases for testnets right now. One is for the eth core devs to experiment on the protocol, and the other is for application developers to sandbox their applications during development. Has any thought been given to having two distinct testnets for these two uses, or not just two but multiple testnets for these multiple use cases, with redundancy baked in, obviously?
I
That's a good question. So by block data, you mean the execution layer blocks, right?
I
You mean the history? Okay, so the execution layer blocks will still be stored on the execution layer client side, and they will be accessible in the execution layer network. And the question is, yeah: the beacon blocks will contain the execution payload as well, so there will be some duplication between the layers in terms of data storage.
I
But we are looking at deduplication techniques, like having the beacon blocks in the consensus clients keep the execution payload headers and fetch blocks on demand from the execution layer clients. This is one of the potential solutions for the deduplication. But in terms of accessibility of blocks on the execution layer client side from the network, as I see it, this is not broken and it stays unaffected by the merge.
G
So in the short term there will be some duplication, and eventually it will be optimized to have less storage?
L
Yeah, sure. Hi, my question is: is there any change in whether it makes sense to have the consensus and execution layers on the same machine, performance-wise or bandwidth-wise, or any recommendation here?
A
Oh, I feel like that should be the default; I feel like most people already do that.
B
For the connection between the execution and consensus clients, you do want it to be low latency: you don't want to go to the moon and back, because there are time constraints on validators doing work, and some of that work requires talking to the execution client. But if you have, you know, two hosts in the same data center, that's not a problem. If you have two hosts on the same side of the country, that's probably not a problem.
B
I think the question is about what the requirements will be for running a client after the merge, right, with respect to hardware, bandwidth, and ports: how much does a consensus client require in addition?
A
Okay, sounds like we've covered most everything. I guess, just generally, to wrap up: we got pretty into the technical weeds, but if you're a smart contract developer, we're trying to make it as simple as possible. Basically you shouldn't have to do anything: there's no migration, you don't have to redeploy your contracts. If you're a user listening to the call or watching this afterwards, you won't have to, you know, move any tokens that you own. All of this stuff will happen in the background.
A
Oh right, sorry, yeah. Tim points out that if you are a smart contract developer and you have a strong dependence on block time, in the way you set up your lending functions or how you calculate time for lending rates or something like that, then you may need to redeploy. But by default, most users and developers shouldn't have to worry about things.
B
If you are a developer who is in a situation where your contract depends on block time, you should not just update it to be 12 seconds; you should remove the dependence on block time. You should not assume that we will keep the block time at 12 seconds: it may change to 10 seconds in the future, it may change to 20 seconds, it may change to five seconds. Contracts should not assume block time is stable over time; you should just use the timestamp field on the block.
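A minimal sketch of that advice (illustrative numbers; the timestamp field mentioned is what Solidity exposes as block.timestamp):

```python
# Sketch of the advice above: measure elapsed time with block timestamps
# rather than by counting blocks at an assumed average block time.

ASSUMED_BLOCK_TIME = 13  # fragile assumption: breaks when slots move to 12 s

def elapsed_by_block_count(start_block: int, current_block: int) -> int:
    """Fragile: hard-codes an average block time that the protocol may change."""
    return (current_block - start_block) * ASSUMED_BLOCK_TIME

def elapsed_by_timestamp(start_ts: int, current_ts: int) -> int:
    """Robust: uses the block timestamp field directly."""
    return current_ts - start_ts

# The block-count estimate drifts as soon as block time changes;
# the timestamp difference stays correct by construction.
assert elapsed_by_block_count(100, 200) == 1300
assert elapsed_by_timestamp(1_000_000, 1_001_200) == 1200
```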
A
All right, I think we covered everything and there are no more questions. Last chance for anybody to jump in or ask a question; otherwise we'll wrap up.
A
Okay, thanks to everybody who showed up: Marius, Mikhail, Pari, Micah. It's really helpful to have people to answer questions, so I appreciate that, and then we can go a little bit deeper technically for people who have questions about that stuff. So yeah, we'll wrap up here. This will be uploaded sometime today, or before the weekend, and if anybody wants to listen to it again, it'll be uploaded to the Ethereum Cat Herders YouTube.
A
That's it, thanks everybody, appreciate it. Oh yeah, and depending on the need, or how many questions come up, we'll probably host another one, maybe in another month. I don't anticipate there being huge changes or, you know, a ton of new information that we need to relay other than what's already been prepared in the agenda. But yeah, Tim, do you have any final thoughts?
C
Yeah, it probably makes sense to host one of these sometime in January, once Kintsugi is out there, and probably once we have a better view of what's going to happen with Ropsten. But definitely not before the holidays. And yeah, keep an eye on the blog.ethereum.org page; we're going to have a post there when the actual, I guess, final iteration of the Kintsugi devnets is up. I'll make sure to post something there.
C
Yeah, so post-merge, transaction fees don't go to a validator address; they keep going to, like, a regular Ethereum, call it an EVM, address. So that means, if you run validators, you can capture those transaction fees basically as they come in; they're not going to be locked or anything. This is also true of obviously any MEV fees, so any of these fees, basically, that are paid to the block producer,
C
which I guess today would go to the coinbase address, get captured by validators directly post-merge. No, they're not locked or anything. Hopefully that's clear.
A
Yeah, and that's pretty significant, especially if you've been validating since genesis, the beacon chain genesis last year. The unlock for funds will hopefully come in the upgrade after; sorry, the unlock for staked ETH, or being able to transfer it to the execution layer, use it in smart contracts, use it like you typically do.
A
But until then, like Tim said, if you're a validator you'll be able to use, you know, the priority fees that miners currently get; those will be directed to validators, or a validator-controlled address on the execution layer, as well as any MEV. And I know there's a lot of people working on how to integrate MEV into post-merge clients; specifically, the Flashbots team is working on that.
A
Okay, I think we've squeezed all the questions out of people. As always, myself, Tim, and anybody who answered questions on here is more than happy to answer further questions that come up. If you're not already in the Eth R&D Discord, that's where a lot of this discussion and testing planning takes place, so feel free: we'd love to have you join there and contribute, or observe and learn alongside us. Anything else, Tim? Nope, not for me. All right, thanks everybody. Yeah, we'll have another one in about a month, maybe the beginning of January.