Description
Slides: Bit.ly/eth-21q4
Join Ethereum Cat Herders Discord - https://discord.io/EthCatHerders
A
Hey everyone. I want to present to all of you our good friend Tim. He's going to go over, as you can already see, understanding the transition to proof of stake. Without further ado, I will let Tim go ahead and present to all of you right now. The floor is yours, Tim.
B
Cool, thanks. Can everyone hear me?
B
Okay, awesome. So I'll share my slides. Unfortunately, I won't be able to watch the chat while I'm presenting, but I'll pause a couple of times during the presentation. If there are questions in the chat, make sure to note them, and ask, I guess, Clement or anybody else.
B
Yeah, when we get there. But yeah, please feel free to just drop a few questions as we go. I want to make sure people are following along and that I don't lose everybody halfway through; that's the caveat. Cool, so I'll be talking about basically the Ethereum roadmap, and obviously a big part of that is the transition to proof of stake, but I want to make sure I put it all in context.
B
Let me put this full screen. Can you all still see the slides?
A
B
Okay, great. Yeah, so my name is Tim Beiko. I work at the Ethereum Foundation. What I do there is I run what we call the AllCoreDevs calls, which are the calls where the different teams that are working on the Ethereum protocol get together and basically think through and implement the changes to Ethereum.
B
Every one of these upgrades introduced a bunch of features, but there was usually one major one. The biggest one for the Berlin upgrade was a fix for a potential denial-of-service vector: we had some operations on Ethereum where the gas cost was too low relative to how much computational time it actually takes a computer to run them.
B
So the worst-case scenario was that you could create blocks on Ethereum that would fill the gas limit but take longer than Ethereum's block time of roughly 13 seconds to execute, and that kind of stalls the network if you can do it. So by changing the gas costs on the network we were able to fix this, and then we also introduced a couple of other changes.
B
One of them was the concept of transaction envelopes, which allows us to introduce a bunch of different transaction types in the future, and this will be relevant for the future upgrades. And then we also did a bunch more minor gas repricings.
B
In August we had another big upgrade on mainnet called London, and this one mostly introduced EIP-1559. That in and of itself could be the subject of a whole talk, but at a high level, EIP-1559 improved how the fee market works for paying transaction fees on Ethereum, and as a byproduct it burns some of the transaction fee. That's what a lot of people tend to focus on.
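The fee split he describes can be sketched in a few lines. This is a simplified model, not client code: the function and field names are illustrative, and real base-fee updates follow EIP-1559's per-block adjustment rule, which is not modeled here.

```python
# Rough sketch of the EIP-1559 fee split described above.
# Function and parameter names are illustrative, not taken from a client.

def transaction_fee_split(gas_used: int, base_fee: int,
                          max_fee: int, max_priority_fee: int):
    """Return (burned, to_block_producer) in wei for one transaction."""
    if max_fee < base_fee:
        raise ValueError("not includable: max fee below base fee")
    # The tip is capped both by the user's priority fee and by what is
    # left between the max fee and the current base fee.
    tip_per_gas = min(max_priority_fee, max_fee - base_fee)
    burned = gas_used * base_fee          # the base fee is burned
    to_producer = gas_used * tip_per_gas  # only the tip goes to the producer
    return burned, to_producer

burned, tip = transaction_fee_split(
    gas_used=21_000, base_fee=100, max_fee=130, max_priority_fee=2)
print(burned, tip)  # 2100000 42000
```

The point of the design is visible here: the burned portion scales with the base fee set by the protocol, so users bid only on the tip.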
And in order to ship 1559, we basically had to introduce a new transaction type, so the work that we did in the Berlin upgrade was kind of a prerequisite there. And then a couple of other changes we brought forward in London were removing gas refunds for a lot of use cases, which basically helped deal with gas tokens, which we used to see on the Ethereum network a lot; the problem with these is basically that they were minted when gas costs were cheap.
B
So, by removing a lot of the refunds, we were able to just keep a more consistent execution pattern, and that helps us make better decisions about things like the gas limit. And then the other big thing we did: for a long time we've been wanting to upgrade the EVM, and there's always a problem when you try to upgrade the EVM. How do you make sure you don't break the contracts that already exist?
B
So one thing we did in London is we reserved a couple of bytes. We looked at all of Ethereum and asked: okay, are there contracts that start with these specific bytes? There weren't, and then we decided that after London it would basically be illegal on Ethereum to start a contract with those bytes, so that we can use those bytes in the future to identify new versions of contracts.
B
Then the last thing we did in London is we delayed the difficulty bomb, which is a mechanism on Ethereum that basically forces us to upgrade every now and then; we pushed it back. And then, finally, last month in October, we had the first upgrade to the beacon chain.
B
So the beacon chain has been live for a bit less than a year now, but it hadn't ever been upgraded since it went live, and so this was really the first time we managed to test the upgrade mechanism and make sure that it works. And then, as part of this upgrade, the penalties on the beacon chain were raised. When we launched the beacon chain last year, it was quite new.
B
Penalties for being offline or for being slashed were lowered, to incentivize people to try it out. But now that it's been up and running for almost a year, we felt quite confident bringing the penalties back to their original level, and there were also some changes to how validator rewards were calculated.
B
So that's what we did this year, and then we have one more small upgrade coming in a couple of weeks. I mentioned earlier this idea of the difficulty bomb. This is something that was put in Ethereum in order to help us transition to proof of stake, but because the proof of stake transition is always delayed, we tend to push it back. So we're pushing it back one more time in December, which we hope will be the final time.
B
So we're going to have just a single upgrade that does nothing else except push back this value. The upgrade is scheduled for block 13,773,000, which should arrive around December 8th. So if you do run an Ethereum node, you need to upgrade it to one of the versions listed on the slide. And so that's basically 2021. Looking forward, the big thing
B
that's in store for Ethereum is the transition to proof of stake, or what we call the merge. And it's worth taking a step back here to walk through basically how the Ethereum 2.0 roadmap has evolved over the past couple of years and what got us to this current merge architecture, because, you know, there was a lot of information given out on this stuff, and I just want to make sure that we're all on the same page here.
B
So originally we had this idea of phases zero, one and two, where the roadmap would come in three phases. First we would have proof of stake; then we would create shards, which kind of attach to proof of stake; and then at some point we would turn on computation in these shards, and we would basically have a bunch of EVMs that are running in parallel, and that's how we would scale Ethereum.
B
But while this was happening, there was some more progress made on a different scaling technology called rollups. Rollups are what people call a layer 2 scaling technology, which allows you to launch a kind of separate Ethereum chain that can process a bunch of transactions and has very high security guarantees: even if, basically, the operators of that second chain were to go offline or to act maliciously, you can always withdraw your funds to the main chain. And that's really important.
B
Because historically, if you had things like side chains, the operators of those side chains could either censor you or basically steal all your funds, and that's just not a suitable scaling solution for something like Ethereum.
B
So we shifted to what's called basically the rollup-centric roadmap, where we would still have proof of stake.
B
We would have rollups as a way to scale computation, kind of as a replacement for phase two, and then we would still keep sharding. The reason is that rollups generate a lot of data, so if we can just use shards as a way to store data cheaply on the network, it reduces the transaction costs for rollups. And so that became kind of the intermediate roadmap.
B
If you want, you basically focus on rollups, have a single EVM running, and scale Ethereum that way. And then the last refinement that came to this was this idea that we call the executable beacon chain, which is more or less the architecture we have today, where we kind of realized that the current Ethereum clients, so clients like Geth, Nethermind, Erigon, Besu, already have this concept of switching to a different consensus mechanism.
B
So if you put all this together, you basically have a system where the beacon chain is the one that's kind of directing the network: telling it what the latest valid head is, handling reorgs if needed, and whatnot. Then the current Ethereum clients like Geth are the ones that still process all of the transactions and run the EVM, and then we can use rollups as a way to scale computation, which we're already seeing on mainnet today.
B
And eventually we can also add sharding if we want to lower the transaction costs on rollups. So we kind of went from a spot where we had to build all of this different software that didn't exist, to using the software we have today, which has been battle-tested for several years, and trying to use it in ways where things like transitioning consensus mechanisms are already kind of built into it.
B
The Ethereum client becomes a combination of a beacon node, so that would be a beacon chain client today, like Prysm or Lighthouse or Teku, and an execution engine, which is an Ethereum 1 client today, so something like Geth, Besu, Nethermind, Erigon and so forth. I'm sorry, Zoom is just kind of playing weird tricks on me. Give me a second.
B
Okay, we are back, apologies. Yeah, so at a high level, you are basically running a combination of this beacon chain client and an execution client, where they communicate together. The beacon chain node will track the head of the network, will gossip, attest to and validate blocks, and receive validator rewards for that. And then, similarly, the execution engine will receive blocks from the beacon chain and execute them.
B
So that's where the actual EVM will be located, and it communicates back to the beacon chain whether a block is valid or invalid via a new API. And it's worth noting that both the beacon and execution layers will continue maintaining their APIs and peer-to-peer networks. So that means that, you know, all of the JSON-RPC infrastructure that's built around stuff like Geth, and all of the beacon APIs
B
that are built around Prysm or Lighthouse, will all be unchanged. And similarly, the beacon nodes will still be kind of gossiping all of their attestations and whatnot, and the execution engine will still be gossiping transactions, maintaining a mempool and so on. The one difference is just that the execution engine will no longer be the one gossiping blocks: the block gossip is moving from the execution layer to the beacon layer.
B
So I can pause again here. I don't know if this clarifies your initial question a little bit, or if anybody has a question on this specific diagram.
B
Oh yes. So to answer your question about the two p2p ports: does it mean that every node can have different peers for different layers? Yes, that's correct. So you could peer with, like, a specific beacon node and then a specific execution node, and they might not be the same. And in practice you have many peers, so say you have 50 peers on the beacon chain and 50 peers on the execution layer.
B
It's not a given that those are the same operators; they can be completely different. That's a really good question.
B
I'll keep an eye on the chat. If there are more questions, just throw them in.
B
So if we kind of zoom in a bit more, we can look at what a block looks like after the merge. We said that the execution layer is just going to switch its consensus engine to proof of stake, and what that means is: right now, on the proof of stake chain, we already have blocks, but they only contain basically metadata.
B
It won't have the difficulty anymore, because it's not proof of work, but it'll have things like the base fee and then the list of all the transactions that are part of that block. And this is kind of how you can think of a block post-merge: you have this outer layer of consensus information,
B
and then you have this inner payload, which contains everything, modulo the proof-of-work stuff, that an eth1 block contains today. And then, I kind of touched on it, but there are a few noteworthy things. One, the block time changes a little bit compared to right now on proof of work.
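The outer-consensus / inner-payload nesting he describes can be sketched as plain data structures. The field names here are abbreviated stand-ins for illustration, not the exact spec names.

```python
from dataclasses import dataclass, field
from typing import List

# Sketch of the post-merge block nesting described above: an outer
# consensus-layer block carrying an inner execution payload.

@dataclass
class ExecutionPayload:
    parent_hash: str
    fee_recipient: str
    base_fee_per_gas: int
    transactions: List[bytes] = field(default_factory=list)
    # No proof-of-work fields here: difficulty, nonce and ommers are
    # gone (or zeroed in the eth1-style view of the same block).

@dataclass
class BeaconBlock:
    slot: int
    proposer_index: int
    randao_reveal: bytes
    execution_payload: ExecutionPayload

block = BeaconBlock(
    slot=4_700_013,
    proposer_index=12345,
    randao_reveal=b"\x00" * 96,
    execution_payload=ExecutionPayload(
        parent_hash="0xabc...", fee_recipient="0xdef...",
        base_fee_per_gas=100, transactions=[b"tx1", b"tx2"]),
)
print(len(block.execution_payload.transactions))  # 2
```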
B
Right now on proof of work you have roughly 13-second average blocks; post-merge you go to fixed 12-second slots, so you kind of have slightly quicker blocks, but also you can always assume that on a multiple of 12 seconds there is a block happening. And, as I said, a lot of fields that are basically related to proof of work, and also the ommer blocks, or uncle blocks, are set to zero. So because there's no, say, difficulty, we obviously don't need to track it.
B
The reason to set them to zero, rather than just remove them, is that a lot of tooling basically expects a difficulty field. So in order to not break, say, Etherscan or Infura or Truffle or Hardhat, it's just much easier to set those fields to zero than to completely remove them. Maybe at some point in the future we'll decide to remove them from the blocks. One thing, though, that we could not just switch to zero is the DIFFICULTY opcode.
B
So the DIFFICULTY opcode is used in smart contracts, often as a source of pseudo-randomness.
B
So, you know, if you're looking for weak randomness, it's usually fine. But if we set the difficulty to zero and we just set this opcode to zero, that would break all of those contracts, because basically, instead of having some weak randomness value, they would always get a zero, and you can imagine a lot of things break under that. So that opcode will be renamed to RANDOM, and instead of returning the difficulty, it will return the RANDAO value from the beacon chain.
B
So the beacon chain produces a pseudo-random value every slot, so we'll be able to just swap what that opcode points to, in order to not break any applications that use on-chain pseudo-randomness. Sorry, let me check the chat.
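A toy model of that opcode swap: opcode 0x44 really is the slot used for DIFFICULTY, but the environment dict and function name below are invented purely for illustration, not EVM client code.

```python
# Toy model of the DIFFICULTY -> RANDOM opcode change described above.
# The "env" dict and the function name are invented for illustration.

def op_0x44(env: dict) -> int:
    """Opcode 0x44: DIFFICULTY before the merge, RANDOM after."""
    if env["is_post_merge"]:
        # Post-merge: return the RANDAO mix supplied by the beacon
        # chain, so contracts using 0x44 for weak randomness keep
        # getting a changing value instead of a constant zero.
        return env["prev_randao"]
    return env["difficulty"]

pre = {"is_post_merge": False, "difficulty": 12_345_678, "prev_randao": 0}
post = {"is_post_merge": True, "difficulty": 0, "prev_randao": 42}
print(op_0x44(pre))   # 12345678
print(op_0x44(post))  # 42
```

The design choice he describes is exactly this: keep the opcode slot, change what it reads, so deployed contracts keep working.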
B
Okay, sorry, let me check the chat. Will the merge impact L2s in any way? Not right as it happens. Ideally, most applications and L2s should not really feel the merge, and we've kind of designed it to be as seamless as possible, modulo small things like, yeah, the blocks are a bit quicker, and, you know, maybe that changes how quickly L2s can settle data on mainnet, but it doesn't have any drastic kind of impact there. And then, Jacob, you have a question.
B
Is it fair to say that the eth2 consensus layer figures out which blocks to process next, and the eth1 execution layer processes the blocks? Yes, basically, that's it. So if you contrast this to now: right now on proof of work, it's basically a race between all of the miners to figure out who's going to have the next block, and this race is kind of weighted based on their hash power. On eth2, or on the consensus layer,
B
the validators are kind of randomly shuffled, but they'll know a few minutes in advance when they're due to produce a block. So every slot, basically, the validator who's due to produce a block will produce it, and then other validators will attest to that block. And then every time you produce a block as a validator, you're going to want to ask your execution layer for a valid block.
B
So it's kind of like when you're mining right now: you want to put a bunch of transactions together, figure out which ones are the most profitable and, you know, how many can fit in a block. Your consensus layer will ask the execution layer: give me the most profitable block, and it'll produce that. And then, if you're on the receiving end, so if you get a new block from the network, you get the block, but you're not sure, at the consensus layer,
B
if it's actually a valid block. So you'll send it down to your execution layer and say: hey, can you run all these transactions, make sure the transactions are all valid and that this block is valid? And then the execution layer will return whether the block is valid or invalid, and then you can either attest to it or just import it as part of your local storage. Does that answer your question, Jacob? If not, I guess just ask a follow-up in the chat.
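The back-and-forth he walks through, the consensus layer handing a payload down and the execution layer answering valid or invalid, looks roughly like this. The class and method names are stand-ins, not the real Engine API methods, and the validity check is a placeholder.

```python
# Sketch of the consensus/execution split described above.
# Names and the validity rule are stand-ins for illustration.

class ExecutionEngine:
    def __init__(self):
        self.chain = {}  # block_hash -> accepted payload

    def new_payload(self, payload: dict) -> str:
        """Run the transactions and report whether the payload is valid."""
        if any(tx.get("invalid") for tx in payload["transactions"]):
            return "INVALID"
        self.chain[payload["block_hash"]] = payload
        return "VALID"

class BeaconNode:
    def __init__(self, engine: ExecutionEngine):
        self.engine = engine

    def on_block_from_network(self, beacon_block: dict) -> bool:
        # The beacon node can't judge the transactions itself, so it
        # sends the inner payload down to the execution engine.
        status = self.engine.new_payload(beacon_block["execution_payload"])
        return status == "VALID"  # attest / import only if valid

node = BeaconNode(ExecutionEngine())
good = {"execution_payload": {"block_hash": "0x01",
                              "transactions": [{"invalid": False}]}}
bad = {"execution_payload": {"block_hash": "0x02",
                             "transactions": [{"invalid": True}]}}
print(node.on_block_from_network(good))  # True
print(node.on_block_from_network(bad))   # False
```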
B
How do I think L1 gas prices will change with the merge? This shouldn't really change. So again, modulo the small block time change, the capacity of the chain will not change significantly with the merge, so there shouldn't be any change either up or down.
B
What's the usage of the p2p port of the execution layer, then, since it just receives blocks from the consensus layer? Okay, so why do you still need a p2p port on the execution layer? That's a really good point. Basically, if you want transaction data, so if you basically want to do something like getTransactionByHash, and you want to actually run, say, a trace on that transaction or something like that, you still need to ping the execution layer directly. So basically, every single JSON-RPC call that's not, like, a get-block
B
or something is still valid post-merge. Yeah, but the main use case would be something like tracing a transaction, or just retrieving your transaction, or also, like, getting the balance of an account, for example. Then: block time going from 13 to 12, isn't that exactly faster, or isn't that a measure of how fast the network is? I think it's just that... I'm not quite sure why we chose 13 on proof of work; that's a really good question for somebody
B
who's been working on Ethereum longer than me. The reason it's 12 on the beacon chain is that it's just an easier number to multiply a bunch of times: there are five slots in a minute, so you can say, you know, there are 300 slots in an hour and stuff like that. It's just a round number. Both 13 and 12 are good enough values so that blocks are able to propagate quickly enough on the network and so that you can generate a block and whatnot.
B
So, you know, we can't have, like, two seconds, because that would just be way too short, but 12 or 13 doesn't make a big difference, and 12 is just an easier number to work with in a lot of cases.
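The "12 is an easier number" point is just this arithmetic, shown here as a trivial sketch:

```python
# Why 12-second slots divide time periods evenly, as described above.
SECONDS_PER_SLOT = 12  # beacon chain slot time

slots_per_minute = 60 // SECONDS_PER_SLOT    # 5, exactly
slots_per_hour = 3600 // SECONDS_PER_SLOT    # 300, exactly
slots_per_day = 86_400 // SECONDS_PER_SLOT   # 7200, exactly

print(slots_per_minute, slots_per_hour, slots_per_day)  # 5 300 7200
# With 13 seconds none of these divide evenly: 60 / 13 is about 4.6.
```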
B
Cool, those are really good questions. I'll move on, but yeah, if there are more questions, just feel free to throw those in the chat. So, okay, I have one last kind of diagram about the process, and this basically goes through how the transition happens, in order to move from proof of work to proof of stake.
B
Rather than picking a block number in advance, we use a total difficulty value. The worry with a block number is that miners could privately mine a chain that reaches this block number and then, once the merge is in process, kind of release another chain, and that would kind of derail the network: it wouldn't be clear which one is the valid chain. So by using a total difficulty value, you kind of impose a cost where, you know, if people want to exceed that value, they need to mine and spend a lot of hash power.
B
So we'll have, basically, clients on both the execution and consensus layers listen for this total difficulty value. So every time a block comes in, it just checks, you know: is this block's total difficulty greater than or equal to what we call the terminal total difficulty?
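The per-block check he describes can be sketched like this. The TTD number and block fields here are hypothetical; as he notes later in the talk, the real terminal total difficulty had not been chosen at this point.

```python
# Sketch of the terminal-total-difficulty check described above.
# The TTD value and the block dicts are illustrative.

TERMINAL_TOTAL_DIFFICULTY = 1_000_000

def is_terminal_pow_block(block: dict, parent: dict) -> bool:
    """Last proof-of-work block: crosses TTD while its parent has not."""
    return (block["total_difficulty"] >= TERMINAL_TOTAL_DIFFICULTY
            and parent["total_difficulty"] < TERMINAL_TOTAL_DIFFICULTY)

parent = {"total_difficulty": 999_500}
child = {"total_difficulty": 1_000_200}
grandchild = {"total_difficulty": 1_000_900}

print(is_terminal_pow_block(child, parent))      # True
print(is_terminal_pow_block(grandchild, child))  # False: parent already crossed
```

The second condition is what makes the forks he mentions below at most one block deep: once a parent has crossed the threshold, no proof-of-work child of it qualifies.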
B
So once we do see a block whose total difficulty meets or exceeds the terminal total difficulty, we'll kind of call it the last proof of work block, and what this means is that no node will accept a child block of that one under proof of work as a valid block. And if you're very familiar with proof of work, you'll realize that you could have a spot where multiple blocks come in at the same time: there are basically two competing blocks, right, that come in at the same height, but that both exceed the terminal total difficulty. And in that case,
B
basically, the proof of stake mechanism will vote on which of those two is the right one. And there might be more than two; you know, you could see a world where there are three or five, but that's basically kind of a short chain fork, which we see pretty frequently on mainnet. And what's really important is that those forks will only be one block deep.
B
So, because again you can't have a child of a block which already exceeds the terminal total difficulty, no matter how many competing forks you have, they're all going to be the same depth, and then they kind of get resolved in the next block on the beacon chain. And then, once you've basically had this block, kind of the final proof of work block,
B
the next block gets produced by a validator on the beacon chain. And then, at the same time, you still kind of listen for blocks which could be a final proof of work block, but the beacon chain has a concept of finalization: after about six minutes, an epoch is finalized on the beacon chain. So as soon as that happens, we have kind of our first finalized block post-merge.
B
Then the clients on the execution side simply drop everything related to proof of work altogether, so they won't share or gossip any proof-of-work block, and you can say that the transition is now fully complete. And if you're, say, an exchange or something that's waiting to re-enable deposits and withdrawals, that's the point at which you should be good.
B
So it's really a question of minutes from the time that we see the block that exceeds the terminal total difficulty to a spot where we have a fully finalized block on the beacon chain. And I have a final diagram that kind of shows how this process happens. On the top row,
B
we have the beacon chain, which has, you know, all this metadata about proof of stake but, up to the merge, has no execution data in it. And then the merge happening is basically removing this proof of work layer and moving the execution layer into the proof of stake blocks. So in this example, you know, the second proof of work block that you see would be the one that exceeds the terminal total difficulty, and then the block after that would be produced within the beacon chain.
B
And then, after that, you know, there's no more proof of work; we're just producing blocks in the beacon chain. So I'll pause here again, because I see there are a couple of things in the chat.
B
Yeah, so speeding up the blocks really wasn't the big goal of the merge. There's a question about: have we decided the terminal total difficulty? No, we haven't yet; we'll probably decide it only, like, a couple of weeks in advance.
B
The reason for that is that we obviously want the code to be ready before we decide it, kind of like when we choose a hard fork block, and also it's just really hard to predict hash rate over time, because hash rate is kind of correlated with price. And also, with the merge happening, there might be a world where, like, we lose a ton of hash rate because people want to sell their GPUs before the merge happens. So it just feels safest to wait until we're much closer to the merge to pick a value which is, at that point, a couple of weeks in the future.
B
Compliments on the diagrams: Danny Ryan did all those diagrams. I will take no credit, but I'll let him know that you all appreciate them.
B
What does change for existing validators is, one, they're going to need to run basically an execution engine, so they're going to need to run, like, a version of Geth or Nethermind or Besu. And I know today a lot of validators can kind of outsource this to Infura, because they're only relying on a small subset of data. In theory, validators should already run at least one node, but because they're only looking at beacon chain deposits, I think they can get that from Infura, whereas they won't be able to do that
B
post-merge, so they'll really have to run kind of their own execution layer alongside the beacon chain. And then one thing that validators will be quite affected by is that transaction fees will now accrue to validators after the merge, and transaction fees will keep accruing to addresses on kind of the EVM chain.
B
So, whereas the 32 ETH that you put in as a validator, and the validator rewards that you gain on that, are currently locked (you can't withdraw them), the transaction fees will not have that constraint: they'll directly accrue to whatever Ethereum address you want to send them to. And it's worth noting that the withdrawals for the actual kind of staked ether won't be coming with the merge.
B
They'll come in the upgrade right after, just because we wanted to kind of separate concerns and have the merge ship earlier rather than later. But yeah, at a high level: you need to run an eth1 node, that's the biggest change, and then the biggest kind of benefit validators get from the merge is the ability to collect transaction fees when they propose a block.
B
Cool, so that's kind of it for the overall architecture. What I'll walk through now is, basically, where are we at with this merge: what have we done and what's left to do? So we really started working on this kind of in earnest in May of this year, where at that point we had kind of an initial spec for the merge that had this general architecture, but still had a lot to work through.
B
So we had a month-long hackathon with all of the client teams on both the execution and consensus layers to just try and see: can we actually build a testnet that, like, runs with this architecture? Is this actually workable? So we had the Prysm, Lighthouse, Teku, Nimbus, Geth, Besu and Nethermind teams all get together, work on this for a month, and they managed to put together a testnet which was called Nocturne. And if you're familiar with the beacon chain explorer, this is kind of cool.
B
So, if you look at this, this is a random slot on the beacon chain for this testnet. If you look at the top, you have, like, this RANDAO reveal, which is a field that's already on the beacon chain. The graffiti is already on the beacon chain. The eth1 data is already on the beacon chain.
B
That's what counts the new deposits. But then you had, for the first time, this execution payload, which is taking basically what was previously an eth1 block, but having it validated and included in the beacon chain. And so this is what we managed to build in a month.
B
Just a system where you had this kind of general architecture up and running. Then we obviously found a bunch of issues that May, and we kept working through them, and around October we felt like we had made enough progress on the spec to get everybody together again and see: okay, can we actually build not only a testnet that is using this architecture, but one that starts on proof of work, kind of mines some blocks,
B
runs transactions on proof of work, does the full transition to proof of stake, and keeps running on proof of stake? So we got all of the client teams together for a week-long event, again kind of the same teams, some additional, some fewer, and within a week we managed to do that.
B
We basically did all the spec changes that we had accumulated since May, then started a network on proof of work, mined a bunch of blocks, ran transactions, and then ran the full transition to proof of stake and kind of kept the network up and running after proof of stake. So that was, like, a really big achievement, because it showed us that not only was the post-merge architecture sound, but that we could actually transition the whole thing, and I believe we had a network with something like 10,000 validators.
B
What we're working on right now is basically the spec changes that came out of our October event. So obviously, there we also figured out a lot of things that, you know, were not perfect, and we started tweaking those. So between November and December, every week or so, we're trying to set up a new devnet with the changes, run it through the transition, make sure everything works, figure out what we need to fix, and, you know, the week after that, set up another one. And our hope is that by the holidays, so, you know, call it early to mid December,
B
we can have a devnet that's up and running with a spec that's mostly final, that we can then kind of share with infrastructure, tooling and application teams, so that they can start kind of adapting to this post-merge network. So there's a link here (there's a link to these slides at the end, if you want to get them), but you can see kind of in real time all of the client combinations that work and where every client team is at.
B
This is, like, the main target right now: before the holidays, get something that's stable, that's as close to a final spec as we can, that we can kind of share with the world and not just have core developers iterate on.
B
I have another link here. So basically we have this checklist that we keep, if you're interested, which just runs through basically all the tasks we need to do. This screenshot is out of date; there are more boxes that have been checked since then. But I'd say, where we're at right now, we have probably 80 to 90 percent of the actual implementation work done, and probably, like, a third to a half of the testing work done. So a lot of the effort in the next few weeks
B
and months is going to shift from, like, implementing this to actually testing this, and also getting feedback from the community and tweaking things based on that. But we're in a spot where, yeah, most of the actual implementation work is done.
B
And yeah, I guess I'll take the questions again, but I'll just share this link. If you want the slides with all the links to the different trackers and whatnot, there's a bitly link: it's 2021 Q4, so 21q4. Yeah, so feel free to go there, and I'll take a couple of extra questions we've seen in the chat.
B
Okay, so, like, yeah: how does the decentralization of the merge work, right? So obviously it's possible on Ethereum to, like, own more than 32 ETH as one person and, you know, have a lot of validators controlled by a single entity. We see this in practice, you know, with the Kraken, Binance and Coinbase staking offerings. I think the way you want to think about it is twofold.
B
One is: what's the minimum that it takes for somebody to independently become a validator? And this is, like, the 32 ETH, which, when we launched the beacon chain, wasn't a ton of money; now it's obviously much, much more, and I think that's the metric we want to keep working to lower. Because, yes, there will always be kind of centralized entities that run,
B
you know, pools and whatnot, but you want it to be possible for others to also do the same thing. And then the two other comments I would add: one, you know, we're also working quite hard on things like trustless staking pools; Rocket Pool is obviously the one that's most popular.
B
They've been working on launching that, and it is starting to launch recently, but that's a way that you can get people who have less than 32 ETH to stake and kind of participate in this. And then the last thing I'll say is that it's also really important on the network to have non-validating nodes. And for those of you who follow Twitter, there was some drama today, because a bunch of people added non-validating nodes on Avalanche, and Avalanche kind of made
B
It
seem
like
a
bad
thing
like
people
were
like
you
know,
looking
at
what
was
going
on,
but
on
ethereum
we
have
the
philosophy
like
this
is
actually
quite
important,
because
even
if
you
have
centralized
staking
providers,
there's
a
very
there's
a
much
smaller
limit
to
what
they
can
do.
If
you
have
a
large
amount
of
nodes
that
are
just
verifying
the
transactions,
because
what
what
somebody
would
like
a
large
portion
of
validators
can
do.
B
For example, they can't get an invalid transaction onto the network if there are a lot of independent nodes that just won't accept that transaction. For them to do that, say they wanted to create a block which gives them a thousand ether: every other node on the network would see that transaction and be like, hey.
B
This is an invalid transaction, and then they'd wait for another block to come in. Whereas if you have a system where you only have a small number of validating nodes, and you don't have non-validating nodes or a large, decentralized set of validators, it's much easier for those changes to kind of slip in. So, I guess, long way to say: it is a major concern.
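The dynamic described above, where every independent node re-checks blocks so an invalid one simply never propagates, can be sketched as a toy model. Everything here (the reward cap, the node count, the function names) is a hypothetical illustration, not actual client code:

```python
# Toy model of why independently-verifying nodes matter: every node checks
# each block against the protocol rules before passing it along, so an
# invalid block (e.g. one minting 1000 ETH for its proposer) never spreads.

MAX_REWARD = 2  # hypothetical per-block reward cap, in ETH

def is_valid(block: dict) -> bool:
    """Stand-in for full protocol validation: reject blocks whose
    proposer pays itself more than the rules allow."""
    return block["proposer_reward"] <= MAX_REWARD

def gossip(block: dict, nodes: int) -> int:
    """Return how many of `nodes` independent nodes accept the block.
    Honest nodes all apply the same rules, so an invalid block is
    rejected by every one of them."""
    return nodes if is_valid(block) else 0

honest_block = {"proposer_reward": 2}
cheating_block = {"proposer_reward": 1000}  # "give me a thousand ether"

print(gossip(honest_block, 5000))    # → 5000: accepted by every node
print(gossip(cheating_block, 5000))  # → 0: the block is simply ignored
```

With few verifying nodes the `gossip` step disappears and an invalid block has nothing to stop it, which is the "slip in" scenario above.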
B
We work hard on trying to keep the barrier to entry as low as we can, and even if you're not a validator, just running a node provides extra security to the network. Yeah, I'll pause here. So, the deck is available at that link. Can I explain light clients and sync committees? Not in a good way, unfortunately; you would need Danny Ryan to give that presentation.
B
So if your question is, you know, what's beyond light clients, I can't really help there. But light clients are just clients that rely on full nodes and validate only a subset of the data on Ethereum; they're obviously much easier to run. And the Altair upgrade on the beacon chain introduced the sync committees as a way for light clients to kind of hook into full nodes on the beacon chain. That's my very high-level explanation; I can't really do much more than that.
B
What are non-validating nodes? So basically, on Ethereum 1.0, you can think of this as every node on Ethereum that's not a miner, right. I think there's something like a couple of thousand nodes, I forget exactly the number, on Ethereum 1.0; maybe a hundred of those are miners, but every other one, so 90-plus percent, is just a node.
B
That node is receiving transactions and gossiping transactions, but it's not actually creating new blocks. And you have the same thing on the beacon chain, and on Ethereum post-merge, where you're just running a node that receives and sends transactions. If you also want to get your own copy of the Ethereum blockchain's data, that's how you would get it, and by doing that, you also enforce the protocol rules.
B
If someone produces an invalid block, your node will reject it. And so, if there's a large percentage of nodes on the network that end up rejecting these invalid blocks, then, you know, there's no incentive for validators to try to sneak those in, because they're just going to lose the rewards they could have had with an honest block. That's why you kind of want these non-validating nodes on the network as well.
B
Oh, I see Pooja is here. So, yeah, Pooja has recorded a PEEPanEIP episode about the light clients and sync committees related to Altair; she just shared the YouTube link in the chat. So if you want to dig more into that, I recommend watching it.
B
Any other questions? Oh, I can turn my... Is there consideration to lower the 32 ETH collateral? I really want this to happen. If you're asking me, in five years, you know, I hope it's much lower; I don't think it's going to be lowered in the next year or two. I think, if your hope is more like the next one to three years, there's going to be much, much more innovation in the decentralized staking pools than there will be in lowering the 32 ETH collateral.
B
The reason why it's 32 ETH, by the way, is that there's a trade-off between the amount that you require and the amount of gossip that you have on the network. So if we had, say, 16 instead of 32, you would double the number of validators, and so double the amount of peer-to-peer messages on the network, and that creates bandwidth issues; it's just not manageable for the nodes to handle. So there's no, what I'd call, science problem in lowering it, but there are a lot of hard engineering problems.
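That trade-off is simple arithmetic: for a fixed amount of total stake, halving the deposit doubles the validator count, and attestation gossip grows with the number of validators. A back-of-the-envelope sketch, where the total-stake figure is a hypothetical round number rather than a live network statistic:

```python
# Back-of-the-envelope: halving the per-validator deposit doubles the
# number of validators for the same total stake, and therefore roughly
# doubles the attestation messages the network has to gossip.

TOTAL_STAKE_ETH = 8_000_000  # hypothetical total stake, for illustration only

def validators(deposit_eth: int) -> int:
    """How many validators the same total stake supports at a given deposit."""
    return TOTAL_STAKE_ETH // deposit_eth

for deposit in (32, 16, 8):
    n = validators(deposit)
    print(f"{deposit} ETH deposit -> {n:,} validators "
          f"({n / validators(32):.0f}x the gossip load of 32 ETH)")
```

Dropping the deposit to 8 ETH quadruples the validator set, which is why the constraint is bandwidth engineering rather than protocol "science".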
B
Okay, yes, so Paul has a question about the lower and upper bounds on the DIFFICULTY opcode. I'm not exactly sure about a lower bound, but it is a much, much bigger value than the current difficulty. So it's also a way for applications to be able to tell if the merge has happened, basically just because the value is much bigger. And I'm not quite sure how exactly the random number is generated; it's just part of how the beacon chain works.
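What's being described here is EIP-4399: after the merge, the DIFFICULTY opcode returns the beacon chain's RANDAO mix, an essentially random 256-bit value, while proof-of-work difficulty always stayed far below 2^64. That gap is what lets an application use the opcode as a merge detector. A minimal sketch of the check, with illustrative sample values:

```python
# EIP-4399: post-merge, DIFFICULTY returns the beacon chain's RANDAO mix,
# a 256-bit value. Proof-of-work block difficulty never came close to
# 2**64, so that threshold cleanly separates the two regimes.

MERGE_THRESHOLD = 2**64

def merge_has_happened(block_difficulty: int) -> bool:
    """True if this block's 'difficulty' field must be a RANDAO value."""
    return block_difficulty > MERGE_THRESHOLD

# A late-2021 proof-of-work difficulty, on the order of 1.2e16: below 2**64.
print(merge_has_happened(12_000_000_000_000_000))  # → False
# A random-looking 256-bit RANDAO mix is astronomically larger.
print(merge_has_happened(2**200))                  # → True
```

A contract can apply the same comparison on-chain to the opcode's return value, which is the "applications can tell if the merge has happened" point above.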
B
And yes, Jacob, okay, so I agree with you: all the nodes on the network validate blocks. What I meant by "non-validating nodes" was nodes that are not validators. Sorry, that's a mistake on my part. But yes, even your non-validator node validates the blocks on the network, and those are important as well. Yeah, we basically just have poor terminology there. Will proof of stake make gas prices lower, and by how much?
B
Unfortunately, no. Proof of stake is really just a change in consensus algorithm. So, you know, it obviously reduces Ethereum's ecological footprint, since we don't need to use miners anymore. It also reduces issuance, so we don't have to pay as much ether to secure the network as we did under mining, about 10 times less, but it doesn't do anything for gas prices.
B
The two things that are really going to affect gas prices are rollups, which we're starting to see go live, and then sharding, which enables rollups to store data more cheaply, and sharding will basically be the next big thing we work on after this merge. Clare has a comment about staking pools: they're still essentially run on a single owned piece of hardware. Yes, and obviously the hardware is a single point, right; there's only one computer that does it in most designs.
B
The thing that you want to decentralize is the key to, basically, the funds, and if you're able to trustlessly share that, and not have anybody be able to take control of the funds, you can now kind of decentralize staking. Slides will be in the Slack. When is sharding coming, and is it the next big priority?
B
After the merge. We're trying to see if there are ways we can do kind of an MVP sharding sooner rather than later, but definitely, yeah, after the merge, and I think we'll probably get the merge in, like, the first half of next year. So, unfortunately, it's not right around the corner.
B
So, yeah, the miners, right: will they be angry? I'm trying really hard to reach out to them and, like, let them know that this is happening. Since last summer, a lot of the big mining pools have moved to offering staking services as well.
B
So I think that the operators are quite aware. What I'm always concerned about is, like, the random person who just doesn't really follow Ethereum; they just buy a miner and plug it in through, like, Ethermine, and someday it just shuts down. So, I mean, we try: I try to go to meetups with the mining community, try to write blog posts; we try to get the blog posts translated into Chinese, because a lot of the miners are Chinese.
B
If people have suggestions for forums or venues where, you know, they think this information is valuable and isn't being seen yet, please send them to me. I'm tim@ethereum.org, and I'm also on Twitter. Yeah, I'm happy to find more ways to reach out to miners.
B
If there are miners listening to this, you know: I don't have an exact date for the merge, but I would really not buy a new GPU now. I think if you can't be profitable in, like, the next three months of mining, you should probably not start mining on Ethereum. Yeah, it's hard to give a date, but this is really near the end, in my opinion.
B
So if you're not already kind of in the green on mining, and this is all kind of bonus for you, it's probably really worth it to think about how you get your investment back.
B
Are L2-to-L2 bridges possible? The short answer is yes. The long answer is, you know, the thing with bridges is they all have different trust assumptions. You can imagine the most trusted bridge possible would be, like, an exchange, right: you send your funds to Coinbase, then you can send them to Arbitrum or Optimism, and you can send them back, and, like, Coinbase effectively becomes an L2-to-L2 bridge.
B
Obviously, if you do that, you're trusting Coinbase, but basically that kind of architecture can be made more decentralized, where there is a bridge and it has different trust guarantees. So if you can do it through, like, a fully centralized system, you can do it through a more trustless one.
B
Are Eth2 validators feasible on a Raspberry Pi? I think they are today; don't quote me on that. What happens if you can't keep up, do you get penalized? The short answer is yes; the long answer is that if you're penalized alongside other people, the penalty is bigger than if you're just penalized by yourself. So this is made to kind of nudge things towards solo stakers.
B
So if Coinbase goes down, for example, and all their validators go down, their penalty is much bigger than if your internet goes down or your Raspberry Pi is, like, chugging behind. So there is a penalty if you don't keep up. What I would recommend you and other validators do is just look at the documentation for the specs of whatever client you're using; they all have, like...
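The correlation effect just described, an outage hurting more when many validators go down together, can be shown with a toy model. The scaling rule below is purely illustrative; the real beacon chain penalty schedule is more involved than this:

```python
# Toy model of correlated penalties: being offline costs more per validator
# when a large fraction of the validator set is offline with you. The
# formula is illustrative only, not the actual beacon chain specification.

BASE_PENALTY = 1.0  # hypothetical unit penalty for one missed duty

def penalty(fraction_offline: float) -> float:
    """Penalty per offline validator, scaled by how correlated the outage is."""
    return BASE_PENALTY * (1 + 10 * fraction_offline)

solo = penalty(1 / 300_000)  # one home staker out of ~300k validators drops off
exchange = penalty(0.10)     # an operator running 10% of validators goes down

print(f"solo outage:     {solo:.5f}")  # barely above the base penalty
print(f"exchange outage: {exchange:.5f}")  # roughly double, per validator
```

The shape matters more than the numbers: a solo staker's downtime is nearly uncorrelated, so the incentive design favours many independent stakers over a few large operators.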
B
Cool, well, yeah, it seems like that's it. If you do have more questions, oh, I guess you can... the best place to reach me is Twitter: Beiko, first name last name. And there are two last questions, so I'll answer those and then I'll head off. Will there be any changes at the application layer from moving to proof of stake? Very little.
B
So, you know, I kind of mentioned this: the 12 versus 13 second block time, the DIFFICULTY opcode. I'm actually working on a blog post right now that describes those in more detail, so if you are building an application, yeah, it'll be on the Ethereum blog sometime next week, probably after Thanksgiving.
B
The last question is: when am I taking a vacation? As soon as we have this devnet that we can share with the community, before Christmas, I'll go on vacation, and kind of see all the bugs that people find when I get back. Well, yeah, thank you all; this was really, really fun, and I really appreciated the questions.
A
Yes, and thank you, Tim, for doing this; we all very much appreciate it. Once again, we will have a recording of this in the ETH Builders Slack channel; the invite was put into the chat. And, yeah, once again, you know, be on the lookout for more events from us. Tim, thanks once again, and I wish everyone a great and happy evening, and happy Thanksgiving to all of our American friends.