From YouTube: Devcon VI Bogotá | Workshop 1 - Day 1
Description
Official livestream from Devcon VI Bogotá.
For a decentralized version of the stream, visit: https://live.devcon.org
Devcon is an intensive introduction for new Ethereum explorers, a global family reunion for those already a part of our ecosystem, and a source of energy and creativity for all.
Agenda 👉 https://devcon.org/
Follow us on Twitter 👉 https://twitter.com/EFDevcon
A: Yeah, users are welcome, and then of course, like the users, we want useful apps utilizing the light client. So what are those? It would be nice to have some minds collaborate on that. What are these apps, and what are people building?
A: Yeah, I lost the NFT on that one, that's it. Any questions at all? Otherwise, I'm going to bring Etan up here. He has a few slides that go into a little bit of the protocol, with some proposals.
C: Before we get into that, I put up a quick recap about how the light client protocol works.
C: So if we have a beacon block from the beacon chain, it has two signatures. The proposer signs the main signature, but that alone we cannot actually verify on the light client, because we don't have the data: we would have to verify not just the signature but also all the attestations, which is several gigabytes of data that we would need. So what was added in Altair is this additional signature down here from the sync committee, and this one actually signs the parent hash.
C: So it's the block before this one, and the light client then trusts it when more than two-thirds of the committee signed the same message. Why two-thirds? It's sort of arbitrary, but the main network operates under an honest-majority assumption, and the sync committee is just sampled randomly from the validator set, so the same security guarantees apply to it as well. This is what the light client data looks like: it sort of describes a set of three different blocks. The main block is the attested block.
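The two-thirds participation rule described here can be sketched in a few lines (a simplified illustration of the Altair check; the constant matches the spec, but the helper name is chosen for this example):

```python
# Simplified sketch of the light client's supermajority rule described above.
# SYNC_COMMITTEE_SIZE matches Altair; the helper name is illustrative.
SYNC_COMMITTEE_SIZE = 512

def has_supermajority(sync_committee_bits):
    """Accept a signed header only if at least 2/3 of the committee signed it."""
    participants = sum(1 for bit in sync_committee_bits if bit)
    return participants * 3 >= SYNC_COMMITTEE_SIZE * 2

# 342 of 512 participants meets the 2/3 threshold; 341 does not.
print(has_supermajority([True] * 342 + [False] * 170))  # True
```

The integer comparison `participants * 3 >= size * 2` avoids floating-point rounding, which matters right at the threshold.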
C: Then there is a Merkle proof to this finalized block up there, and this allows a light client to follow both the finalized header and the latest optimistic header. So this is the chain head, and this is the latest finalized block on the network.
These structures look like this: there is the attested header, then a Merkle proof to the next sync committee, the finalized header, and then the overall signature.
C: There are also a couple of smaller objects that describe the same thing but don't include all the information, because, for example, the next sync committee only changes once a day, and finality only changes every 12 minutes or so. And then there is also this bootstrap structure down here, to get the initial sync committee from a trusted block. The entire protocol for a light client then looks like this.
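As a rough sketch, the update object described above can be pictured like this (field names follow the Altair light client spec; the plain Python types are stand-ins for the real SSZ containers):

```python
from dataclasses import dataclass
from typing import Any, List

# Schematic of the LightClientUpdate described above. Real implementations use
# SSZ containers; plain Python stand-ins are used here for illustration only.
@dataclass
class LightClientUpdate:
    attested_header: Any                     # header the sync committee signed over
    next_sync_committee: Any                 # pubkeys for the next ~1-day period
    next_sync_committee_branch: List[bytes]  # Merkle proof into the attested state
    finalized_header: Any                    # header proven to be finalized
    finality_branch: List[bytes]             # Merkle proof for the finalized root
    sync_aggregate: Any                      # participation bits + BLS signature
    signature_slot: int                      # slot the signature was created in

update = LightClientUpdate(None, None, [], None, [], None, 123)
print(update.signature_slot)  # 123
```

The smaller objects the speaker mentions are subsets of this: a finality update drops the committee fields, and an optimistic update keeps only the attested header and the aggregate.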
C: So this has been standardized for REST and in libp2p; those protocols have been merged into the official specifications. Nimbus is implementing them, Lodestar has them I think mostly done, and Teku I think opened an issue to start implementing it as well. So progress is going fine. And then there is also the Portal Network, which is getting more important over time, as it also stores historic data for Ethereum.
C: So most of us know the structures already. There are a couple of problems that are still not solved.
C: This is a relatively new protocol, and the first of them is the security. Technically, if you assume that only validators who already exited the chain would sign conflicting messages, then it is safe to use four-months-old data to stay in sync with the network. So if you start from a checkpoint that's four months old and then sync forward, that's still safe according to this research.
C: The problem is, right now there is nothing that prevents a sync committee member from just signing multiple chains at the same time. There is no slashing defined, and it's also kind of difficult to define a slashing. If you want to slash on any non-canonical finalized block, the problem is that if someone signs an incorrect chain, or even signs something that later gets orphaned, it becomes a problem. So we can make this a bit weaker and slash only if you sign multiple finalized headers.
C: That's also kind of difficult, because right now slashings do not have this sort of history, so maybe it needs to be bounded to a day or two, I don't know. But what could work is to slash on signatures of conflicting finalized sync committees. So this chain that we had before here: if you can break this, you can basically send someone to an arbitrary chain, and with this slashing we can at least prevent that.
C: So if someone signs the correct chain and also signs a conflicting chain, we can submit Merkle proofs showing that these are two different chains, with two different sync committees that are, according to your signature, finalized.
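The proposed slashing condition can be sketched as a simple predicate (a hypothetical helper; the field names below are invented for this illustration, not taken from the consensus spec):

```python
from collections import namedtuple

# Hypothetical shape of a signed finality claim; these field names are made up
# for this sketch, not taken from the consensus spec.
FinalityClaim = namedtuple("FinalityClaim", ["period", "finalized_committee_root"])

def is_conflicting(claim_a, claim_b):
    """Two claims by the same signer that finalize different sync committees
    for the same period are provably conflicting, as described above."""
    return (claim_a.period == claim_b.period
            and claim_a.finalized_committee_root != claim_b.finalized_committee_root)

a = FinalityClaim(period=100, finalized_committee_root=b"\x01" * 32)
b = FinalityClaim(period=100, finalized_committee_root=b"\x02" * 32)
print(is_conflicting(a, b))  # True
```

In practice each claim would carry the signer's BLS signature and the Merkle proofs mentioned above, so the conflict can be verified on-chain without any history lookup.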
So it's a provable statement. Then the next problem is this data structure: it doesn't contain anything about the execution layer.
C: But many use cases for light clients actually want to prove Ethereum data that is part of the state, so we have to embed it somehow in here. There are a couple of use cases for that. For example, the top one, forkchoiceUpdated: that one is basically, instead of a beacon node, you use a light client to drive your execution layer. Right now it's not possible, because the block hash is not in there, so you still have to download the full block, which is like two megabytes every 12 seconds.
C: Then there is the problem that Geth, right now (I don't know about the others), still requires a newPayload call, and newPayload also requires the full block. But I actually talked to them yesterday, and they actually only need the block hash, the block number and the parent block hash. But that's more like a...
C: It would be very difficult to engineer it the other way, but technically it would be possible to also just start with the block hash. Someone else was mentioning that maybe they need the entire execution payload header, like everything instead of the actual transactions. Yeah, and then for all the proof endpoints: for example, if you have a wallet and want to prove that you have a certain token balance, you need the EL state root, which is part of the block hash as well.
C: If we were to add the full execution payload header, it would basically quadruple in size; it would be a kilobyte. And if we only put in the block hash, it's still like a doubling of the current size, because the EL block hash is rooted quite deep in the block. So we have the beacon block, and then inside there is the beacon block body, and then there is the execution payload header, and in there is the EL block hash.
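To see why the nesting matters for size: each container level adds roughly log2 of its (power-of-two-padded) field count in 32-byte sibling hashes to the Merkle proof. A back-of-the-envelope sketch, with illustrative rather than exact field counts:

```python
import math

# Rough illustration of why a deeply nested field needs a long Merkle proof:
# each container level contributes log2(padded field count) sibling hashes.
# The padded field counts below are illustrative, not exact spec values.
def proof_length(padded_field_counts):
    return sum(int(math.log2(n)) for n in padded_field_counts)

# beacon block header -> body -> execution payload header -> block_hash
levels = [8, 16, 16]          # padded field count per container level
siblings = proof_length(levels)
print(siblings, "hashes =", siblings * 32, "bytes")  # 11 hashes = 352 bytes
```

A few hundred bytes of proof on top of a roughly 300-byte update is what the speaker means by "a doubling of the current size."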
C: So one suggestion there, which would require a consensus change as well, is to just move that EL block hash into the beacon block header. This way the structure can be kept at around 312 bytes, but the challenge is that it would be the first-ever addition to the beacon block header; no one has done that yet.
C: So you can have, for example, a door lock where you push all these light client updates and a proof that you have a certain NFT in your wallet to open the door, and that can happen via Bluetooth or via ultra-wideband, and there you really don't want to have very big messages. So yeah, I'm not sure which option is the simplest here to minimize size; it's up for discussion. Then the third one is the engine API.
C: That one is the API between the beacon node and the execution layer, how it informs the execution layer about the latest head. Right now the spec says that if you pass a forkchoiceUpdated with a hash that the execution layer doesn't know yet, it is sort of optional whether the execution layer actually does anything with it. So the proposal here is to just make it a "should", so that in general it should be okay to just use forkchoiceUpdated to sync an EL.
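For reference, the call being discussed has roughly this JSON-RPC shape (per the Engine API spec; the hashes here are placeholders, and the null payload attributes mean the caller is only steering the EL's head, not asking it to build a block):

```python
import json

# Sketch of an engine_forkchoiceUpdated request that a light-client-driven
# beacon node would send to its execution layer; hash values are placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "engine_forkchoiceUpdatedV1",
    "params": [
        {
            "headBlockHash": "0x" + "ab" * 32,       # optimistic head from updates
            "safeBlockHash": "0x" + "ab" * 32,
            "finalizedBlockHash": "0x" + "cd" * 32,  # latest finalized header
        },
        None,  # no payloadAttributes: sync only, no block building
    ],
}
print(json.loads(json.dumps(request))["method"])
```

Under the current spec the EL may answer SYNCING and then ignore an unknown head; the proposal above is to strengthen that so the EL should start syncing toward it.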
C: Also, for accelerated syncing, it would be great if the deposit contract were prioritized during the sync, because then you can have a situation like this, where you have the optimistic sync that syncs forward, and then it jumps into the light client data and brings it all the way to the wall slot, so there can be multiple months of data that is skipped there. But then you have this light client head, and you have the latest block, but you may not have all the validator keys to verify that.
C: So then we have to sort of import the deposit contract logs, build up our set of validator keys, and then we can sync backwards and validate all the proposer signatures, because they are chained by hash.
C: So yeah, those are the challenges that I'm seeing right now with light clients that should be addressed in the near future to make them more useful. Most importantly, it's this slashing, and also adding some notion of a block hash or an execution payload header into the light client data.
C: Okay, yeah, the Portal Network is an important step there as well. Actually, we from Status/Nimbus have an ongoing project where we try to make a zero-knowledge proof, so you can get from any point to any point in constant time, and we actually have something that's working, using a recursive circuit. But yeah, I'm not sure about the details. I mean, the problem is, complexity-wise you still have to validate something of the complexity of a BLS signature, like the pairing is still there. So for the embedded use case...
G: Phil and Etan, I don't think I have too much more to say. I think now, unless somebody wants to raise anything relating to the talks, we can just move to smaller self-organized working groups. We kind of did this at past events; at Devconnect this year we did, you know, some talks and then just sort of self-organized working groups, just to get people in the same place. Those are some things I think are probably some great jumping-off points, yeah.
G: Unless there's anything else, I think we can just move into that phase of things.
H: So basically that was like the proof-of-work to proof-of-stake transition session, where we hosted some community sessions as well, alongside the Eth 1.x to Eth2 transition. These two sessions were kind of successful: a huge room, where Vitalik went on for a huge session and brought so many research posts that inspired the discussions that happened during the session. And then, just to give you a quick overview of the roadmap of what we've been discussing.
H: Just a few points. Mainly we've been discussing EIP-1559, the fee market change for Ethereum, which was a very hot discussion, and also ProgPoW, where we agreed that it's sort of a conspiracy-driven EIP and that we should not pay much attention to it. Then we also started discussing state rent, as we realized that many people are using the blockchain as storage.
H: And then we were also discussing sharding, like how the Ethereum shard design would work, basically, and then the transaction history archive, which is basically about how we can keep the full state of all transactions, and archive nodes and all that.
H: So you guys may be wondering why everybody's sitting down; even I'm sitting down, because I don't want to be like, oh, I'm important and you guys are not. Basically, Ethereum Magicians sessions are always like everybody's sort of on the same line; there is no such thing as, oh, these are the core devs, or...
H: Oh, these are the speakers and they're important. No: everybody is equal, and everybody is as important, whether you're a speaker or you are sitting in the audience, and I highly encourage you to join us anytime in this whole circle.
H: So if you have something to say in this session, or about the topic being discussed, feel free to switch in anytime; anybody can just take a seat and be like, hey, I want to say something about this, and even grab a mic. I also encourage you to take some notes, or note some ideas that you have. We also have someone, thank you so much, who is going to take notes from this session.
H: I forget your name, but there is a guy who is going to take notes in Spanish, so we are going to have translated notes from this session as well; make sure to publish them on the Ethereum Magicians forum. And then I just figured that this would be a cool quote to kick off this session: to make Ethereum more diversified, as I feel like...
H: We need more clients to be run in the Ethereum ecosystem, and we should not just be focusing ourselves on the main two clients. And by the way, the QR code: we'll let the QR code stay up there, and that's where you can find the slides, and in the slides there are many links to what I've been mentioning during my introduction speech. And that's pretty much it from my side. I'm just going to try to encourage you all to speak up and not be scared, even of the mic.
H: It doesn't bite you; it's just our friend. And if you want to speak up about something, we want you to make sure that everybody will hear you, basically. And that's pretty much it. I'm excited for this. I'm just giving the word to Tim, and let's kick off.
I: Yeah, thanks Annett for the intro. So I guess on the point of people asking questions and contributing: I do not have near three hours of content here, right? I have a couple of questions to kick things off, but this is very much on you. If you have stuff you want to discuss, this is the place for it, or we'll be out of here in 50 minutes. But I guess there are like three things on the slides I think we need to discuss with the people here.
I: First is just how the merge went. It was a pretty big piece.
I: Not only technically, but just making it happen from a human perspective, with all these teams working together. So yeah, I'm personally curious to hear from the different client teams and folks involved how that whole process went. Are there things we can do better for the next one, or things we did well that we didn't expect to do so well? After that, we keep talking about the Surge, Verge, Purge, Splurge, all that stuff; I think I just want to make sure we have space.
I: For what people think is important, where things are generally at, and then answer all the questions from you guys, because I think it's easy to look at the four acronyms, or meme names, and then, when you get into the details, there are many more open questions, so hopefully we can get into some of those. We'll try and keep time at the end, or actually we will keep time no matter what, at the end, to just discuss specific EIPs. There's a bunch of EIP champions; Shanghai planning has been a bit different because we canceled the calls after the merge to give people a break, so everyone has EIPs that they want to put in and not a lot of places to discuss them right now.
I: So we'll use this as a sort of pressure valve for some of those discussions. So that's roughly it, but again, please ask questions throughout; otherwise this is going to be over pretty quick. But yeah, to kick it off:
I: I was just curious to hear from different folks on client teams their perspective, not so much technically how the merge went (I think we've covered that a lot already), but just how they felt during the process, and, you know, now that it's over and we have a bit more distance, looking back: how do you feel it went, and are there things that could have been better?
K: So that's kind of the personal view of it. I don't know how you change that; I'm not sure we should, because we should have been under pressure to ship, and we were. But you know, I guess I'd say: never be in doubt that core devs feel the pressure to ship.
K: We really do. Yeah, I think there were a lot of really good things we did. The testing that we managed to do was phenomenal; it was another couple of steps up on anything we've done before for hard forks, and the kind of coordination and work that went into that was absolutely fantastic, starting with the client teams testing their releases. But then the EF support of spinning up the shadow testnets was a brilliant idea, and Mario and other people contributing to the test
K: Suites and Hive tests and that kind of thing were really, really valuable. I don't think we started those Hive tests in particular early enough, and I think that probably had a knock-on effect, when we didn't necessarily include the execution clients early enough, which might have been because they were busy with, you know, 1559 or something giant. So I'm not sure what the timelines there exactly were, but it felt like the consensus side almost felt confident and ready, and execution was kind of like: oh yeah, the merge. The merge is the next thing.
K: We've got time now, we can look at it. And it wasn't a surprise to them, don't get me wrong, but I think that was a big deal; it would have helped to bring that forward and then have more time on things like being able to pay real attention to the Hive tests. We were struggling as client developers to keep up with the spec and the last-minute things we knew about, and dealing with these Hive test reports that are in a format that, particularly for the consensus side, we weren't particularly familiar with.
L: Thank you. You made a mistake: you're sitting too close to the stage. Yeah, I agree, it was really stressful, very very stressful, for years. I think a large part of it was figuring out how to deal with that and how to keep performing and keep delivering good code. I'm not sure what to do about that; perhaps, having gone through it once, this group of clients will be better at doing it again.
L: I think Marius also managed to create a lot of momentum, so I think there's a lot to be said about just the right person at the right time relieving the right pressure point. I don't know what we can do to make that happen more in the future, but those individuals should be enabled when they want to do something.
L: I think it went really well. I have been quite disappointed with MEV since it launched. I know there have been, with some of the implementations, some bugs, like with JSON encoding, that really, really shouldn't have hit us on mainnet. I think it's probably because MEV, like, we all kind of realized that that's a thing happening quite close to the merge; it didn't quite get enough time to run, didn't get enough time to be integrated into our setups.
L: I think there's also some catching up that those teams need to do in terms of announcements and comms. If they have failures with their software, they need to announce it. They need to tell people instantly that there is a problem and that it's being looked into, not sit on it for some time and then publish it. I'm not naming names here on purpose, because I think it's just useful for everyone to know. So I'd really like to see the folks there improve on that, same with the builders.
M: I was really happy, actually; we almost launched in October, just the next year. But then, after I said that, I was like: yeah, maybe it was not really the common agreement, the common sense. And I came back to the Nethermind team, asking everyone: hey, do you think I can do that, for October? Do you think it's possible, at least, or do I have to go and just tell everyone, hey, I was wrong? And they said: no.
M: We think it's possible. So I thought: okay, yeah. And well, there was a lot of work, but at the same time, I remember thinking on the same day that it was good to say it, because suddenly all the infrastructure teams started calling everyone and saying: hey, we're launching in October, we're totally unprepared, we have to start preparing. And it was needed, because we needed node operators to start
M: Looking at this; we needed the community to say: okay, there's some date, for the first time. So I missed it by a lot. But then, at Amphora...
M: Yes, I was still very confident, because I'd seen things progressing, I'd seen everyone moving very fast; but it was all those small things that take so much time, really, in the testing, in proper testing, and the confidence. And I think when I was a bit pessimistic was around April this year, when we were thinking about MEV-boost, how ready MEV-boost was, and how much everybody was prepared for MEV-boost; I was more stressed at the time about that.
M: Does it make sense to use this difficulty bomb thing as something that would drive the dates? Talking to the developers on our team, I feel like there's really no more motivation you can get, no more pushing or shouting or asking that you could get for delivery on time. So there's absolutely no sense in introducing any other stressor. So the difficulty bomb moved slightly; I hoped it wouldn't move the merge date, and it didn't.
I: Nice. Yeah, let's focus on the other side of the room.
N: I want to share, I guess, a different aspect of the merge. We are one of the minority clients, the Besu team, and one thing I would have done differently was preparing for people choosing client diversity, getting ready for them. We didn't realize, I guess...
N: Like two, three weeks before the merge timeline, we saw a huge amount of people, thanks to a lot of solo stakers, coming through Discord, asking tons of questions, and I think the maintainers were super busy with, you know, trying to get the final release out; we didn't get to really respond to a lot of them. So I think that's something that, if we had to go through the merge again (not really, but if we had to), we would do differently, to be ready for community support.
N: How do we get these new users coming through this ecosystem, like, how can we support that better? Honestly, it's definitely a hard challenge, because as maintainers you're really deep in the weeds, really trying to get the code out, and on top of that trying to support tons of users coming through; it's really challenging. But thanks to a lot of the contributors who also chimed in and answered a lot of other people's questions on how to set up their validators and all that.
N: It's definitely a community effort; it's not just the maintainers and core devs. So thank you to those solo stakers coming in and asking questions and really improving client diversity, and also to the other contributors supporting each other. Thank you for that. I guess one last comment is: please don't threaten us minority clients by saying, oh, we're going to go to Geth. I think that's like a stab in the heart, and twisting it. So yeah, thank you.
O: Hello, okay. So I want to double down on Paul. I'm from Prysm, and from my team's perspective, the merge had always just been consensus, execution and the engine API, and boom, done, right? And then, I guess just a month before the merge, we learned: okay, there's this MEV stuff we have to work out now, and then there's this MEV-boost, and there's this relayer, this builder.
O: So we just kept adding more and more actors to the picture, and then even a few weeks before the merge, there was this Tornado Cash, OFAC stuff, and then Flashbots would be the only relayer. So our team had these big internal discussions: okay, should we drop the support right now? Like:
O: What should we do if Flashbots was the only relayer and they are censoring transactions, right? Even though MEV-boost is a neutral piece of software, if Flashbots is the only relayer, then as a client team we are indirectly supporting censorship, and of course I'm not happy with that; this is not why we're here, right? So I think we've learned a lot; I learned a lot. It's just, if today relayers are producing 50% of the blocks on mainnet, are they censoring?
L: Okay, yeah, good points by Terence. I think it's worth saying as well that, even though we did have some problems with MEV, and I was pretty harsh on them before, I think we did come out of it in a better spot than we were in before. There's no MEV-geth anymore.
L: So I think we really opened up the world to MEV, and although we didn't solve it, and we're still not in a great spot, I think we ended up in a better spot, and I think Flashbots have done some good stuff towards that as well. Yeah, the circuit breaker is good as well: now the consensus clients can detect when the chain is unhealthy and then just disable MEV.
L: So that's something that we couldn't really do before with MEV-geth; with MEV-boost, Flashbots could have implemented it, but you know, it's a lot of work for them. So yeah, now there's a lot more control from the consensus clients, at least, over what's going on with MEV, and I think that's a really good place to be in.
L: Actually, it's a good question; I kind of wasn't thinking about that, but...
L: Yeah, I guess, if that's still the case, it's not ideal; but I guess what's better now is that every one of those blocks that gets produced is verified by the node before it's published on the network, right?
Q: Yeah, and I guess, from my perspective, the whole situation with MEV was very interesting, because MEV has been around for several years now, and, you know, we've kind of seen, through the DeFi summer and the NFT summer, these crazy numbers about how much MEV there is in blocks. And from my perspective, I was thinking: there's so much money in this industry, the solutions are just going to appear, and they're going to be really robust and they're going to work.
Q: And then, you know, it turns out that, six, seven months before we were really projecting the merge to happen, we started to realize that the solutions that we as core developers felt needed to exist weren't in the place that we felt they needed to be, and so we started trying to help accelerate that path. And the way that it turned out, I think it's better than it could have been.
Q: I would have been very curious to see what, you know, institutions would have created as their own MEV-boost had this open-source public thing not existed, because I think that there is enough money in this industry that some of these big stakers would have realized: you know, there's no way for us to extract the MEV anymore; let's create some centralized endpoint for people to send bundles to, and we'll just extract it for our customers, and, you know, screw all the rest of the stakers.
Q: They can figure out their own ways. So it's not where we want it to be today: we want to improve MEV-boost and make it better, and we eventually want to move to PBS, where it's part of the protocol, and so we're just working towards that. And I think, also, with respect to thinking that the MEV industry was going to build all this stuff:
Q: The core development community is also very anti-financialization of a lot of things, and so, to a degree, it felt like we didn't want to think about it. It felt wrong for us to think about MEV in a lot of cases, and it was only later on that we realized how much interplay there was between censorship and MEV and the ability to produce blocks, and it started to just unfold very naturally over the last six to eight months.
I: Yeah, and I guess, talking about the design of the whole thing, I'm curious to hear from Proto and Mikhail; you both spent a bunch of time early on with the engine API and speccing that out. But yeah, Mikhail, do you want to just walk us through: how did we come up with the merge? How did we get here?
R: Yeah, hey, okay. So for me, the merge started two and a half years ago, roughly at this moment in time. By that time, it was the idea of the two-way bridge, of bringing Casper FFG as a finality gadget from the beacon chain to the proof-of-work chain.
R: That was the starting point, and there was an idea of actually transitioning the proof-of-work chain, eventually, to one of the shards, which were not yet designed by that time. And yeah, it was a long journey.
R: I can't recall everything that happened during this more-than-two-years period, but one thing that I would just like to mention here, one of my key takeaways from working on the merge project, is that, retrospectively, what I can recall as the first collaboration with somebody else working in the same space was collaborating with Guillaume. Guillaume prototyped basically the first execution layer client; it was not called that by that time.
R: But anyway, it made it much easier for me to prototype the whole thing, the consensus interacting with the execution layer. It was not done through the engine API by that time, but anyway, it was the whole idea of putting it in a shard and making it work somehow. So we did this prototype, and then, after some progress on the merge specs and ideas, we prototyped it again with the Rayonism project.
S: I mean, I can talk about that. I would first want to reiterate over the first moment in time when we developed the merge. I think it was before COVID, before the whole pandemic. I think at SBC there were still discussions about eWASM, but then, less than a month later, in Paris during EthCC, after those discussions, eWASM, or the idea of execution shards, was basically over and done, and these discussions between you, Guillaume, Danny, me, others, Brandon, also started: hey,
S: Maybe we can prototype the Catalyst thing with the eth1 engine attached to Teku, and yeah, that was kind of cool. But that was 2020; it took a year, more than a year, till we got to Rayonism. So that's a huge leap already in where things happened. And then with Rayonism we were in this awkward spot where Altair was still not shipped, and London was still not shipped, and we were at this moment in time of stacked development, where you have to convince people that they need to dedicate resources to something
S: That's not the main hard fork, to make some kind of progress. Definitely, there were moments in time where I was slightly out of my humor because of the way these calls would go. For a month we were thinking of it like a hackathon, where, okay, we just do the prototype with more than one client.
S: Often it just meant that some clients would not join these calls, to even just get a glimpse of what we were trying to do after Altair and after London. And I think this pre-fork process, for future forks, is something we can improve: if we had had more people thinking about the merge earlier, we could have realized: hey, we need to make these communication channels with things like Flashbots and MEV, because the merge is very soon; otherwise it's pretty late, and then you've got these new situations.
R: Can I continue on that? Yeah, it's just a few sentences left from my end. Okay, so after that, this was the first time we had engaged more client developers and client development teams in this: we had written the first spec of the engine API, to make it, you know, some kind of standard or whatever. It was even before Amphora; it was more than one year ago, like April and May.
R
When we won — yeah, it was great. So then Amphora, then, yeah, all these off-chain — sorry, on-site, offline — events. It's a really huge facilitator of the progress: all the developers in one place, talking to each other in person. I think it really helps to collaborate on things even after the event, when we're back on the internet, back to work. And, as has been mentioned, the testing efforts.
R
Client developers gave a lot of feedback, and all this stuff. The key takeaway from what has been said: yeah, the right people at the right moment in time make things happen. So, retrospectively, this is what happened with the merge, and I think that this may only happen —
R
this kind of thing — the right people at the right moment in time — may appear in a healthy community, which we currently have: a healthy community of researchers and developers. And for me, the merge is proof that this community of researchers and developers that we have in the Ethereum ecosystem is capable of delivering huge, big, sophisticated projects, and the merge is, like, the first one, right? So next is danksharding and other things — Verkle trees.
R
H
Teku — don't we want to give Lodestar, like, a chance to tell —
A
— this? Up here? Hey, thank you. Sorry for being late, as we are most of the time — we are the fifth consensus client. So, yeah, we actually produced our first block on mainnet in November of last year, but it was actually fun to basically sprint up with everyone else and basically catch up. So that sort of retrospective was amazing, to see the kind of feat that we could get there — and, of course, with the help of all the other client teams as well,
A
we were able to pull through. So it basically goes to show what we're capable of when we all work together, especially when we're all focused on that one thing — and it was to achieve a successful merge. So it was really great for all you guys to help us, basically pull us up to where you guys are, I guess.
A
One of the hardest things that we had to deal with, being a later client to be ready for mainnet, is really just the fact that client stickiness is a really, really huge issue, and we definitely noticed it as we were going towards the merge. People tended to not necessarily want to experiment with a new client going up to something as critical as the merge. So we found that to be quite difficult, even though we, you know, tried our best to sort of make it easier for people.
A
But, you know, we didn't see as much traction with that as we'd like to. But now that we're pretty much caught up, going into withdrawals, Shanghai, danksharding, we're hoping to see much more adoption there. And yeah, we definitely can't do it without the help of all of our collective minds together. So, really, our retrospective is: if you are an up-and-coming client, we're all here to help you. Nice.
T
K
So we've just shared — kind of, mostly — core devs' and researchers' retrospectives on the merge. So my question to everyone out in the community is: what's it been like? What's your retrospective on how the merge has gone? How has it been as a validator, how has it been as a dapp developer, those kinds of things. Someone's got to start — just give someone a mic. Yeah.
U
Thank you, hi. It was crazy, because in Mexico I'm trying to teach people what this vision is making while people like you are building, are creating. All the terminology is very hard to understand, and they don't teach people what you are doing — and guys like me, that are nerds, read it and all the stuff.
E
U
I started to make threads about All Core Devs in Spanish, and Skylar came to me and said: hey, nice, what you are doing. Then Tim Beiko started to share my posts. So thank you to everyone for all this. I don't know, I feel very nervous, because I think that I am with very, very smart guys — people — and it's like crazy. But that day, that night, I was with the ETH Latam community speaking about the merge — what is it going to do?
U
K
B
This might be a random question, but can we have more documentation resources for future core developers? Because I've looked around and it's very hard to find, and I myself would like to volunteer to take that task and would love to create documentation, tutorials and roadmaps, because I really want to work on that. But it's very hard to find, like, a clear path to become a core developer. Okay.
H
V
Thank you. Have you heard about the Protocol Fellowship? It's exactly the answer to your question. Are you already — I'm sorry about that — but you can still participate. So this is the thing: if you didn't know, the Protocol Fellowship is kind of an internship program where anybody can come and start working on a project, and in a way it provides sort of mentorship from many of the folks who are sitting here and other core devs, and there are also resources.
V
So the whole Fellowship is coordinated via a repository on GitHub, where there is also suggested reading — there are a few things they put together. But I agree, it's not easy; it's a high wall that you have to go through. But I believe that this might make it easier.
V
There is a lot of things to dive into, and the reading in the repo can help you to look into what you're interested in, and then you can just start contributing — because the calls and everything, even if you're not accepted, are still open.
V
You can join if you are not accepted — you just don't get a stipend, but you can still come. And if you work on a project and over a few weeks we see that you have valuable output, we can still give you the stipend as well. But it's fully permissionless: you can come to the calls and propose the idea that you want. There are calls where you can propose an idea in an issue, similar to ACD,
V
that you would like to discuss, and we'll invite some of the mentors to help you with that, give you some guidance. So yeah, hopefully it can help.
H
Well — and one of our projects is also core development as well, where we are very happy to guide you as well. Yeah, we are always hiring new interns, so I don't —
E
H
Yeah, so this should be sort of your base layer for where to find more core-dev-related resources, or, like, how to become a core dev — although that was kind of controversial, where, I think, Lane or Amin — yeah, I mean, it was at least a core dev thing.
W
Will there ever be a hackathon for this sort of thing? Maybe, like, a three-day event where people can work on fun stuff? We —
H
— should do that! Because Chris once did one, like a Nethermind hackathon, but it was supposed to be internal. But I'm very happy to take this over and do sort of a core dev hackathon, and, I mean, Mario should definitely collaborate on that.
X
Just quickly, I was gonna say: a lot of the open-source repos also have tickets that are marked good to start development on, so that's another avenue. It does get taken up by contributors, and we review and give feedback and help. So there are a few avenues, but it is a big — it's a learning curve.
Y
So we were working this weekend at ETHBogotá on some hypotheses regarding the new proof-of-stake system, and one question we couldn't get answered, now with the new proof-of-stake system, is how the blocks get built. Is it still purely an economic incentive, where they choose the highest fees — or the highest prices — to include in the blocks, or are there other incentives included, as there's an increased risk of slashing?
M
So the block building doesn't really lead to slashing risk. Slashing risks are related to double signing — like signing two different blocks, or attesting to two different blocks: proposing conflicting blocks or attesting to conflicting blocks. So yeah, it should be an economic incentive — it should be an aligned incentive, something that is very natural. The block builder is looking for the payout and collects the transactions that are paying the most, plus there might be other not clearly economic — not financial, but still quantifiable — benefits.
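As a toy sketch of the fee-driven selection just described — the builder simply greedily fills the block with the highest-paying transactions. This is illustrative only, not any client's actual algorithm (real builders also handle nonces, bundles and MEV); `build_block` and its `(fee, gas)` tuple format are invented for the example.

```python
def build_block(mempool, gas_limit):
    """Greedily pick the highest-fee transactions that fit under the gas limit.

    mempool: list of (fee_per_gas, gas_used) tuples -- a stand-in for real txs.
    """
    chosen, used = [], 0
    # Sort by fee, highest first: the "aligned incentive" is simply payout.
    for fee, gas in sorted(mempool, key=lambda tx: tx[0], reverse=True):
        if used + gas <= gas_limit:
            chosen.append((fee, gas))
            used += gas
    return chosen

txs = [(50, 21_000), (200, 100_000), (120, 50_000)]
print(build_block(txs, 150_000))  # [(200, 100000), (120, 50000)]
```

The 21k-gas transaction is skipped even though it fits alone, because the two higher-paying transactions already fill the limit — the ordering is purely economic, exactly the point being made above.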
S
Right — so Thomas is right about slashing. There are inactivity penalties, which are often confused with slashings, which you can incur if you miss your proposal — for example, if the latency is very high because you're using an external block builder service. So that's one incentive for people to use lower-latency services, whether it's your own or not. But I do think
S
that's part of the whole MEV discussion, and how we make home stakers as competitive with regular stakers when it comes to publishing blocks, so that they get attested timely and they're not missed. And larger blocks, obviously — they cost more, they cost more time to propagate — so that might play a role.
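The double-signing rule described above can be sketched as a minimal check. This is a hypothetical simplification of the consensus-layer condition — the field names below are stand-ins, not the real SSZ container types: a proposer is slashable only for signing two *different* blocks for the same slot, so ordinary block building carries no slashing risk.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SignedHeader:
    slot: int
    proposer_index: int
    block_root: str  # stand-in for the SSZ hash tree root of the block

def is_slashable_proposal(a: SignedHeader, b: SignedHeader) -> bool:
    """Two signed headers form a proposer slashing iff the same proposer
    signed two conflicting blocks for the same slot."""
    return (
        a.slot == b.slot
        and a.proposer_index == b.proposer_index
        and a.block_root != b.block_root
    )

h1 = SignedHeader(slot=100, proposer_index=7, block_root="0xaa")
h2 = SignedHeader(slot=100, proposer_index=7, block_root="0xbb")
h3 = SignedHeader(slot=100, proposer_index=7, block_root="0xaa")
print(is_slashable_proposal(h1, h2))  # True  -> conflicting blocks, slashable
print(is_slashable_proposal(h1, h3))  # False -> same block twice, not slashable
```

Missing a proposal entirely, as discussed, matches neither clause — that only costs missed rewards (and, under non-finality, inactivity penalties), never a slashing.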
Y
Q
This is where I'm coming from, you know: do you have a project in mind that I might be able to help with? And I think once you start doing a little bit of that and showing that you're creating value, then core teams love hiring people. So this is what I would recommend to all.
I
A tiny note on that, about following All Core Devs: as the guy who runs it, for the first six months it was my job to attend All Core Devs, and I just didn't understand anything that was happening, and I was literally too intimidated to speak up. So if that's your feeling, that's normal — and if you're smarter than me, it'll probably not last six months. But there's a lot of implied context that, I don't know, it's really hard, and I felt really good about myself
I
when Danny Ryan told me the same thing about when he started. I think before he started at the EF, he would attend the Casper CBC calls and he would just share links in the Zoom chat. He was telling me, like, you know, when somebody was like "oh, this paper came out", he would just Google it to be the first guy to share the link in the Zoom chat. So yeah, the calls are hard — there's not a good way to dive into it that I've found. So if it feels weird for a while, that's kind of normal.
O
When people ask me questions, I always throw in these random ideas. I think this is a good idea: just be extremely obsessed with, like, the eth research forum, to the point where you read the forum one, two, three times and then take notes — basically notes that you can understand. Then you can turn the notes into blog posts, and then you can share the blog posts with people. And when you're able to do that, you can actually be fairly —
O
you can actually be very knowledgeable on that topic already. And then other things, like podcasts in general: just pay attention to core dev podcasts and take notes on that, and then also share your notes, because people will actually appreciate that — not a lot of people have time to watch YouTube videos and listen to podcasts and stuff; people will prefer to read notes.
H
And just make sure to share everything on Twitter — that's where the hype is — and that's how you can get involved. And go to events and just speak up, or even take notes and then DM them to people. That's how you can basically get started. Also a great way to get even closer to the people in Ethereum is to just volunteer at a conference:
H
"this is my skill set and this is how I want to help you", or "I found a bounty", or "I found a bug in your code" or something — or just get involved with the GitHub repos.
Z
So in the beginning there was — probably — this article on CoinDesk talking about how there were, like, eight or nine different client teams trying to build for the merge, but a lot of them dropped off. I think there was a point at which the repo for how Ethereum's consensus was going to happen got, like, totally overturned, and that made a lot of client teams mad.
Z
And so then it shrank down to four, and I think the closer the client teams got to the merge, it was very clear that there wasn't really much more discussion to be had from just anybody. So when the Discord channel was privatized, basically no one talked, because we really have to focus on the merge right now. And then there started to become differences in client teams'
Z
readiness for the merge — some client teams were more ready than others, had more features ready than others. I think the differences between the preparation and resources of each client team got a lot clearer the closer it came to the merge. So I think, moving forward and looking ahead to some of the other bigger upgrades —
Z
it's been clear that it was just the merge, but now there's going to be so much to discuss, so much about Ethereum's roadmap. So I thought that it was really part of the whole topic around ossification: the more decentralized this process is, the harder it is to make changes — but clearly there's some amount of centralization that's needed, and that we've already seen, to make the merge possible. And so now there's Shanghai coming up. So will it still be consensus-layer / execution-layer calls, or how are we thinking about governance, I guess, moving forward, to start to pull off some of the other big, big roadmap items?
I
Wow, there's a lot to unpack in there. I think the really short answer is: slowly iterating from what we have. I think the merge itself was kind of a feat of governance to some extent — to get, like, nine client teams to work together and agree. And agreeing on the big stuff was easy for the merge; I think everybody wanted to do it. It's all the tiny things — I don't know, I have "the latest valid hash" burnt
P
I
into my head from typing up the All Core Devs agenda. It's just these thousand tiny things where the coordination was really hard. I think the other thing — and we're about to get into it in a second — is balancing large protocol changes (something like the merge, something like sharding, something like statelessness) with smaller things that bring a lot of value as well.
I
I think this is where most of the tension probably arises — different prioritizations there. I think it's time — curious to hear from the crowd.
I
AA
I mean, my view on All Core Devs — I think the big thing that I feel is missing, and that I want to see, is representation of Ethereum's users and the whole community, rather than it mostly being focused around one side of Ethereum, which is the development. I think that's a big thing, and we would see very interesting and different proposals if we had more voices like that on the call. Also —
H
The EIP process is being improved over time, and right now what — not us, but the EIPIP group — is working on, alongside the Ethereum Cat Herders, is pretty much that they are going to sort of fork, or separate, the core EIPs from the other, different EIPs, and then eventually —
H
Oh jeez — I mean, also, Hudson — we did not use the stage because we wanted everybody to be on the same level.
H
Okay, thank you. So pretty much we wanted to separate ERCs from EIPs, but what ended up happening is that we ended up sort of separating core EIPs from the rest of the EIPs. That's —
J
Exactly right. And there's been years and years of debate over whether to separate EIPs from ERCs, and a lot of it — the biggest problem I'm finding, looking back, is that there wasn't a steward to be an ERC editor.
H
J
H
Especially because — back then, I think we had this discussion with Mario, right, in Amsterdam, where we started talking about how the EIP process is going to look, and that was, like, April this year, and basically the only answer we pretty much got was that it will be consensus-driven.
H
I
For the execution layer there's a bunch of, like, mechanical changes we can do to the process, and I think they'll be better. But that's different from how we actually come to consensus on the thing — whether it's an EIP or the yellow paper or, like, a consensus-spec PR, you know, that's a marginal difference.
I
I think — so one thing I actually really don't like about All Core Devs is that it's calls. And the reason for that is: calls are hard because they optimize for a bunch of weird things. They optimize for people who are awake at the time, who are good English speakers, who are very eloquent and can think on their feet and respond live during the call. And also they're not very scalable.
I
We can't have, like, 90 people on a call. And so, to your point about having more of the community involved — calls are good because they give a forcing function and some rhythm, so I wouldn't take them away completely, but moving to be more async is something I try to think a lot about. And, you know, so for Shanghai we have this tag on Eth Magicians, and, like, you know, anyone can tag their EIP there.
I
Client devs can review it — I can't force client devs to look at your EIP if they don't want to; no one can do that — but at least there's a list they can scroll through, and if it sounds good to them, it might be more accessible. And maybe you wrote it, you know, at 9am Vietnam time rather than at 4am during your All Core Devs call. So yeah —
I
I mean — so we're going to segue into it. You don't have to.
AB
So now that we've talked about how we are going to come to consensus on these new upgrades and new things, we could maybe talk about the new upgrades and new things. What's —
Q
AB
I like — for 4844, that's pretty nice.
I
Okay, maybe — okay, a better question on the new upgrades, I guess. I'm curious, from the client teams' perspective, how much have you been thinking about new upgrades? Because everyone on Twitter obviously talks about 4844, and a bunch of people not on client teams have been contributing — a couple of client teams have as well — but, you know, mostly we shipped the merge a month ago.
I
You know, it's not a lot of time. So yeah: how are you all thinking about what's next? Or are you trying to not think about it as much as possible and decompress from the merge?
AB
Yeah, so I kind of think our main responsibility is our users right now, and so we have a bunch of upgrades coming up that — oh yeah, okay, hi — we have a — so for Geth, at least from my perspective, for Geth the most important thing is the current network. It's not the future, and it's not —
AB
It wasn't the merge. We want to make sure that the current network runs, and the current network doesn't go down, and it's secure, and it's usable, and it's decentralized, and you can run your own node.
AB
So we have a bunch of other things coming up to improve things around the database in Geth — the state, the way we store the state — and so yeah, I'm really excited about those. And because we've been kind of focusing on those, we haven't focused too much on the upcoming EIPs. Okay — like, in general, the team — I personally also, like, started implementing 4844 in Lighthouse.
AB
The wrong client — but it was a learning experience, and I —
AB
I also worked a bit on the KZG ceremony, because that was just interesting to me. We're in a fortunate position that we have a lot of outside contributions, so people actually — if they propose an EIP, they usually implement it in Geth. And so when, like, the fork time rolls around and we have to have the implementations for the EIPs, we have something to build on, and we're in a very good position there compared to other client teams. And so we can kind of focus a bit more on testing, making sure that the spec is correct.
L
Yeah, cool. How do we feel about upgrades? I think we're generally keen — definitely keen for withdrawals, which Mark is working on. I kind of feel like the merge is not really finished until we have them in.
L
We kind of have a bit of an outstanding promise that we're yet to fulfill to the stakers, so definitely keen to get that one in — without even saying, like, you know, "what do you want to do after the merge", because it's not really after the merge for me until we've done that. Yeah, so keen to work on that, keen to get that in soon. Definitely keen to get 4844, and I —
L
think that looks really cool. Personally, not super keen to rush it — I think it'd be nice to have a bit of time, like Marius was saying, to kind of watch the network breathe. Yeah, so definitely keen for upgrades. Sorry —
L
Yeah, yeah, that's right, yeah. We have — Sean's been working on it, Marius has been working on it for Lighthouse. Yeah, you're gonna put it in your timesheet, man. Yeah, so we're keen. I'll pass it to Teku.
K
I think I really support Paul's point that the next thing we do is withdrawals. It's got to be done. I don't think it's reasonable to put in bigger things that are then going to delay getting withdrawals out. We made a promise, we stretched the promise a bit, and you were all very forgiving because we got the merge sooner — and now we're going to deliver the promise; we've got to come through on that.
K
But I think the thing that really comes after the merge is the user support for the merge, and the optimization and the cleanup and the learning, now that we're actually seeing it in the real world for real. There's a whole heap of stuff where we can now go "oh, we should make this better", and it's not going to be protocol upgrades — it's just client improvements and so on.
K
So that's going to take a good chunk of time, but I think we can start looking at a bunch of other things as well, and so I think 4844 is probably getting the lion's share of attention as the next big thing after withdrawals.
M
Right, yeah, exactly — so yeah, that's it. People are so convincing in delivering those visions, yeah. I think, coming back to how the Nethermind team thinks about the next delivery, I almost would like to activate Daniel here, to tell us about his plan for delivering them.
AC
So I think in our case it's very similar to what I just heard from other teams. So, right now I would like to relax a little bit and focus on improving our client, yeah. The merge was extremely stressful,
AC
to be honest, for the whole team, and now people want to do things at, let's say, a slower pace. And additionally, we would like to clean up a little bit here, because there was no time. I don't know if you are aware, but Nethermind a few months ago was completely different — the team was undersized, very small.
AC
At some point there was only one guy, Marek, working on the merge, which was crazy, and, you know, I was really sorry when I joined and recognized: okay, we have to do something about that. And yeah, now we are in a completely different state. We can, you know, improve our client a lot; we have a lot of things that we are actually excited about, and they are not, I would say, one-to-one related to Shanghai.
AC
These are things like, you know, database improvements, our sync improvements, the robustness of the client. These are the things that maybe are not so exciting for the community, but they are exciting for us, and that's what we are mostly talking about right now. And there are also things like, you know, documentation, community support, user support. These are things that didn't work well in the past, and I think in general they should be improved — not only in Nethermind but in the whole community.
AC
So this is what actually excites us right now. And in terms of particular EIPs, we already started, you know, investigations — but, you know, taking it slowly, you know, having fun with it. Like 4844 — there is one guy working on it from our team, and I see that he's really excited about it; that's cool. And we already started with withdrawals — we have some kind of draft implementation, and we know that maybe it's gonna change, but yeah, why not start playing with it? So yeah, in general, yeah.
P
A
Yeah, I can give you an additional perspective from a minority client like Lodestar. We're pretty much in the same boat, I guess, as Nethermind, where before, at a point, we hadn't even hired the people that you guys have at this point. So in terms of time, we really haven't had as much time to think too far ahead — we're getting most of the stuff from, you know, you guys in the community and such. But technical debt is definitely a huge thing for us as well.
A
We have a lot of documentation we need to update, stuff like this, which, you know, it would be nice to have a sprint that's just, you know, for technical debt and getting our client up to par in that sense. But we're really excited, of course, to implement all the upcoming stuff in the roadmap. Like, this was great for me, because I haven't even specced out 4844 as much as I should yet. So that's, you know, where we're at, basically.
P
So, on the Nimbus side, there is a delicate balance between implementing new things that are still moving targets and doing the things that are expected from client teams, like testing and trying not to introduce regressions. Like, we had a lot of point releases in the past three weeks before the merge, and that took all of our focus. But we do like implementing new things as well.
P
For example, we took the lead with Lodestar on the light client thing, so this is something that we will continue. But we didn't start at all on anything related to the Surge, the Verge, the Purge and the rest of the branches. Well, we did try to look into KZG commitments, but they changed a lot in the past two years, so everything has to be thrown out and recoded.
AB
I also think it's fine for these smaller clients to not be on the brink of research, because especially with these upgrades — like 4844 right now — the spec will change a lot, and I think it just doesn't make sense for smaller client teams, that need to focus on getting their client up to speed, to also think about the research and the iteration on the spec.
AB
So I think that's something that we did pretty well with the merge: there were the bigger client teams — Geth, Lighthouse, Prysm, Nethermind, Besu — iterating on the spec, and the other clients kind of, like, following a bit. And I think that's a good way for the teams that have more funding and more people to take some of the load away from the other teams. That's also what we're trying with testing, and the Ethereum Foundation testing team, where we're currently looking for new people.
AB
So if you're interested in testing, come talk to me or Mario Vega. Thank you.
F
AB
So we want to completely revamp the way we do state tests, to make it really, really easy for people to implement state tests and to take some load off the client teams.
N
Just wanted to share from a small-client perspective. So for Besu, honestly, the past month has been equally as hectic as preparing for the merge, with a bunch of major bug fixes. So, honestly, we haven't even thought of future EIPs — honestly, I think we haven't even gotten a chance to rest, and a lot of the maintainers couldn't make it because they're still working really hard on the fixes.
N
Some even donated some of their paternity leave to work on the fixes — so quite, quite intense. But I think afterwards we'll hopefully get some time to rest and get all the maintainers together to talk through what's going to be the future work that we want to work on.
N
What really resonated with me personally from the earlier talks by Aya was the subtraction part. So I'm hoping that, as maintainers, we get to see which parts of our codebase we could actually subtract. I think we're still talking about what EIPs we're going to add and add and add to our codebase, but I think there's going to be some — like, you know, tech debt from the past — to clear out, to modularize the product itself. And so we're looking into a lot of, I would say,
N
client improvement — that's currently what I see maintainers talking a lot about. But yeah, definitely we would have to come talk about EIPs. Yeah.
O
So I'm from Prysm, and I think withdrawals are important, but they're also on the easier side. And on EIP-4844 we have been lucky: Optimism and Coinbase have been contributing to our codebase, so thank you for that. But I do want to throw, like, a curveball: I think censorship resistance is equally as important as 4844 and withdrawals. Just look at mevwatch.info — 47% of the blocks are OFAC-compliant right now. So 47% of the blocks on mainnet all have some sort of censorship
O
built into them, right? So I do think, like, before full PBS there's definitely something that we can do to make it better. We do have some hybrid PBS things we can do: we can leverage the builder API, we can iterate very fast, we can have some MEV-Boost censorship-resistance schemes, this type of thing. And there's this latest research post that's pointing us in some nice direction, and Barnabé also has a nice research post as well. So very excited for that, and that's definitely something I want to prioritize right now.
AD
I'm very sympathetic to teams who talk about wanting time to work on their client and handle their technical backlog, and I thought that 4844 and withdrawals were a really good example of this. They both kind of individually look like: okay, we've got kind of an idea of how these two things are going to work. But then putting them in the same
AD
fork actually does increase complexity, for, you know, minor technical reasons. I guess, you know, we're talking about switching the forking from being based on a block number to being based on a timestamp, and, you know, the more places that you have to change in the codebase, the more work that's going to be, right? So 4844 introduces a new transaction type, which means, you know, more places where this forking based on timestamp has to happen, for example. So it's just an interesting place to see why you might get pushback on: hey —
AB
So, just as an example of this: I actually implemented the forking based on timestamps for Shanghai, which was like a 20-line change — and I looked into doing the same thing for, sorry, for withdrawals, not for Shanghai — and I looked into doing the same thing for 4844, and it would be like 70 different files that I would need to touch, just for changing the way we verify the signatures and the sign-up. So individually it's — it's — it's —
AB
Okay,
but
together
it
creates
even
more
complexity.
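For context, here is a minimal sketch (with made-up configuration values, not any client's actual code) of the switch the speakers are describing — activating a fork by block number versus by block timestamp. Every call site that asks "is fork X active?" has to change its trigger, which is why a fork that also adds a new transaction type multiplies the number of places to touch:

```python
from dataclasses import dataclass

# Hypothetical fork configuration: pre-merge forks (like London) activated by
# block number; post-merge forks (like Shanghai) activate by block timestamp.
@dataclass
class ForkConfig:
    london_block: int     # block-number activation
    shanghai_time: int    # timestamp activation

def is_london_active(cfg: ForkConfig, block_number: int) -> bool:
    return block_number >= cfg.london_block

def is_shanghai_active(cfg: ForkConfig, block_timestamp: int) -> bool:
    return block_timestamp >= cfg.shanghai_time

# Illustrative values only; the timestamp is invented for the sketch.
cfg = ForkConfig(london_block=12_965_000, shanghai_time=1_680_000_000)
assert is_london_active(cfg, 12_965_000)
assert not is_shanghai_active(cfg, 1_679_999_999)
```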
M
So, while we say that we want to slow down, at the same time I'm thinking that Nethermind is ready to think like a big client — like one that is leading the effort on the exploration. So, by the fact that we have a large team, we can both keep cleaning the technical debt but also participate in the research and in prototyping. So we've heard that from Mikhail talking about — is Mikhail still here?
M
No? Okay. So he was saying how helpful it was to have the prototype from Guillaume, right, and how much Maurice's experimentations were pushing things forward. So we have those excited people — like, you know, at Nethermind — and they want to explore, they want to experiment, and this should help everyone to push things forward. And these deliveries are critical, and they're still time-critical; we'll be doing that in parallel, even if the feel will be that we change the style, that we do it in a bit more mature way. I think that maturity will come also from the learning from the merge, and from the team being larger — so I think.
AC
But there's also another factor that we need to consider, yeah: changing, let's say, 10 lines of code or 17 files two years ago was completely different from changing the same amount of files right now. The code base is much bigger, the complexity is much bigger, and sometimes, you know, a small change requires days or maybe weeks of research — and testing, and testing, especially testing here; that's something that we need to improve. And so, yeah, the teams grew, but at the same time it takes much longer.
S
If I can add something else on clients: right now the experience with testing is not much better than the experience of introducing an EIP. I think we need a better platform to discuss the bottlenecks of Ethereum, to understand the complexity of syncing, of disk I/O, and so on — and tools like Hive are sometimes just as stagnant as the clients themselves when it comes to making changes or improvements.
AB
And we're slowly transitioning this over to the testing team. So Mario wrote a lot of Hive tests for the merge — basically all of the Hive tests for the merge — and, as I said, we're trying to ramp up the size of the testing team within the Ethereum Foundation, and that should take some of that load.
AE
We did an amazing job there, like, relative to what it was before. Like, when we shipped 1559 I was not 100% confident, and then when the merge went live — you know, I was expecting some operator to mess it up, but not the network to break; it felt like the testing was super robust. And the challenge, I think, with shipping: if you want to do something like blobs, we don't have blob testing, right? And so we're going to need to build all that, and then we'll have it, and it'll be easier to, like, you know, grow the size of blobs or something. But then, when we want to do, I don't know, data availability sampling, you need, like, always to build the new testing stuff, and it's kind of hard, because you can't build it in advance — because you need something to test.

We have gotten better. I don't know that it's ever going to feel better doing the work — which is maybe my thing: we just do more of it, or better at it, but it still feels like we're kind of pushing ourselves, because it's just hard stuff and you need to build it. And, yeah, I mean, you know, to touch on 4844: obviously client teams are kind of tired, you know, want to take a break, want to focus on withdrawals — but having Optimism step up, having Coinbase step up, we wouldn't even be talking about 4844 if this hadn't happened. And I'm not convinced that, like, four years ago it would have been possible — there were no L2 teams — for, like, the Plasma team to ship something like that. So, yeah, I think we're making progress; I don't know that it's ever going to feel better, is my rough feeling.
K
So I came in from Tokyo — I just moved from over there; sorry, for the context. So I think the other factor that's slightly interesting is there's this real cycle to doing hard forks, in that client teams in particular get swamped. Early on, the research team probably gets swamped first, then the client teams get swamped, and then our work is done and we're kind of waiting on coordination — and there's this lull, and that lull is where we get all of our client optimizations and all the other stuff kind of done. So that's a real opportunity, and a lot of the community coordination happens there. There's a lot of time it takes to go from "the code is ready and done" to "the code is tested in every possible situation, and we've automated all of that" — in terms of the cross-client stuff, and the level of detail we want for Ethereum — and then "the code is actually ready from the community's perspective, and all the tools are updated", and so on. And all of those kind of happen in sequence. So, as long as we don't shoot for a massive hard fork, we can get a little hump: we can get, you know, say, withdrawals done, because they're relatively simple, pick up a lull, and then be ready to pick up something big again. So it's not a case of stop-the-world; it's just managing those cycles. And probably the other factor is that teams go through that kind of cycle as well: you'll have a great team and you're going well, and then someone will get a better job offer somewhere, and suddenly you've got fewer people on your team and you're rebuilding with some new people again. Yeah.
AF
I don't know — I was expecting to come and try and make a pitch for why it's crazy to do more than just withdrawals, which is a big thing, in the first hard fork after the merge, but actually I've not heard a single voice advocate for that so far. So — oh, but maybe Dankrad. Oh.
AF
That answers the question anyway. But I did want to ask two points about withdrawals, mostly to consensus clients. So the first point is: a few people have said things like "oh, it's a relatively small change", and I wanted to just check — is it? In one of the other sessions recently, one of the points that was made was that, you know, at the time that withdrawals become enabled, withdrawal key rotation also becomes available. So there's going to be an enormous queue of people wanting to do key rotation; there's going to be a massive move from those validators who are trying to exit before they get hacked, because their key's been compromised over the last two years — all kinds of other sort of things you maybe haven't thought about very much. And so, do those worry people at all, or not at all? And the second point I wanted to raise about withdrawals — it kind of links to the tech debt point that a few people have mentioned — is: is there? It seems quite natural that you might want to just have a corresponding deposit operation: clean out the deposit contract, deal with the issues — like, kind of, you know, the double-counting issue — all this kind of stuff that would make the protocol more understandable for future generations. Is anyone interested in that, or is that for later?
S
I heard some good arguments about cleaning up the deposit contract, to keep the supply and the balances actually matching all the expectations, rather than looking up — or, like, burning the deposits and then minting through withdrawals. So that's fair. I do think deposits are user-initiated and withdrawals are system-initiated, so they're fundamentally different.
AA
Yeah, I want to share my perspective on this as well. I think we have heard a lot about tech debt, which is fair — the devs here have the best overview of that, and that is a very important point, of course — but I think we also have a different kind of debt, which is that, right now, we cannot serve the vast majority of people who might want to use Ethereum. So, I think it illustrates my point from earlier: we are having these discussions among core devs, and they are good and they're important, but somehow they can't be the only input into the decision-making.
AA
So I can't clearly say we have to do 4844 as part of Shanghai, but personally I would love to see it, and I think there are very good reasons to try to do it. And I think only looking at the tech debt and saying "we can't do it because of this, and we need to do everything in sequence" — so what is the consequence of not doing it? Like, my fear: of course, if we have, say, Shanghai, and we manage to do it in February, and then we can do 4844 a few months later — okay, everyone's happy with that. But what happens if actually Shanghai does not happen in February — it happens in June, it happens in September or something — and 4844 slips into 2024? I mean, I would be fairly unhappy with that. So we should consider these scenarios.
AA
We
should
also
think
about
what
it
means
to
ethereum.
Yes,
it's
like
it's
a
bear
market
now,
so
maybe
we
don't
have
these
high
fees
as
pressure
pressing
an
issue
anymore,
but
I
think
they
are
still
actually
a
pressing
issue,
because
right
now,
maybe
like
we
don't
feel
the
pain
as
much,
because
our
the
things
that
we
have
been
doing
are
fine,
but
still
a
lot
of
applications
are
not
being
built
because
the
fees
are
too
high
and
because
they
can't,
like
the
experimentation,
can't
happen,
because
I
can't
like
do
some
fun
stuff.
AA
If
things
costs
one
dollar
per
transaction,
which
I
could
do
if
it's
one
cent
per
transaction
so
I
think
like.
We
should
like
be
more
willing
to
like
think
about
this
part
as
well,
and
I
would
love
to
like
understand
how
we
can
also
like
get
that
in
that
thinking
into
our
governance.
Pro
process
as
well.
P
So, currently we're releasing hard forks basically when they're ready. So the more things we put inside, the longer it takes to be ready — because of testing, for example. So if we actually do withdrawals, which are supposedly simple — though the talk from yesterday was less reassuring regarding that — then, yeah, maybe in February we can have withdrawals, and then 4844 in the next one. But if we put both together, maybe it would be only in June. So there is a balance there. So.
D
I guess one question there is: do you think it's impossible to have one fork in February, and then another one, like, literally three or four months later?
D
Okay, well — I mean, this is a discussion, but just one other thing I wanted to add, just to what Dankrad said: thinking about it from the perspective of the tech debt of layer-2 protocols and of rollups, right? One of the kind of philosophical goals of 4844, I think, was to be, like, the change to end all changes, specifically for layer 2s — in the sense that 4844 introduces stuff like the point evaluation precompile and the concept of blobs, and that allows layer 2s to kind of set their code once: they can literally write their code and launch it. And then, you know, no matter how much we screw around with the sharding design later, as long as we kind of screw around within certain parameters, rollups will be able to, you know, breathe easy and know that they don't have to do that kind of re-architecting again, right? So I think there's also value in trying to get that phase done earlier — because the earlier they can do those kinds of changes, the sooner they can get to the phase where they can start building on it, instead of having to know that they have that rework coming eventually. It's worth talking to layer-2 teams about that too — I'm sure they'll have their own perspectives.
P
So I guess we have a rough kind of consensus that we want 4844 and withdrawals within the next nine months, let's say. So the question becomes: both together, or withdrawals first and then 4844 after? And so, planning team —
AH
Sure. So one thing I would add, with regard to one hard fork versus two, is that generally, with hard forks, implementing something usually takes less time than rolling it out. So implementing both withdrawals and 4844 in the same hard fork definitely makes it longer, but I think overall it would still be shorter than doing two hard forks. Essentially, I think rolling out a hard fork always takes two to three months of just: while every client is — "yeah, I'm not done yet", "let's test it", "tests are not done yet", "okay, let's do a hard fork" — that's not a hard fork, that's not [how it goes]. So — I don't necessarily want to say we suck at rolling out things, but we don't really push very hard. So if we want to roll out two hard forks, then you have this boilerplate time that will eat up from both of them. So that's the extra cost.
F
I get it, I get it — like, it's the first time you've had me in this room — but, you know, we've been working with Proto for the last five months on EIP-4844, implementing it in Prysm and in Geth. And I think, to Dankrad's point, the difference between, like, H1 of next year and 2024, for a business like Coinbase, is massive.
F
Most of our customers who are not in the US can't pay for it, and we as a business, especially in the bear market, can't subsidize the cost for them. And so what that means is that, in the context of these conversations, the people who maybe understand the technology benefits of decentralization or security — they then go and say: "hey, there's all these other EVM chains, they have sub-one-cent fees; can we just deploy this thing on that? Like, would that be faster? Can that get this thing shipped in Q1?"
F
And it's up to the people like me, maybe, or others in the company who understand the whole process — and why we're doing rollups, and why this is such an important investment from a decentralization and security perspective — to say no: we need to wait; we need to wait for this to have the right solution. And so I think waiting until the beginning of next year — you know, the first half of next year — I feel like that's a thing we can hold; we can make it happen. Waiting until 2024 is really hard.
F
That's going to be a real challenge for us. And so I think, from where I sit, our feeling is: let's make the list of all of the things that all of the people in this room feel we need in order to feel comfortable — if that's better monitoring of the network, so we can understand bandwidth; if that's better testing, so we can feel more confident in the change that we're making; whatever it is — give us the list, give Optimism the list, and we're ready to throw resources at this and support this. And I know that, again, we have a lot of trust to build; that will come through in the work. A year from now, I hope that there's a lot more trust here. But I do want to voice: that's the impact for a business like us, and we're ready to come to the table with you all and work together to figure out how we make it so.
AB
I have two rebuttals to putting 4844 into Shanghai.
AB
One of them is: I don't see any rollups that are trustless right now. Most of them don't implement the fraud proofs; they're putting the data on chain, but the data is not really used for anything — so you cannot use it to prove that the rollup is wrong. And so, if we implement danksharding, we basically say to the community: use these rollups — but the rollups are not secure.
AB
I think that's kind of a minor problem. The other thing — the bigger problem I see — is that, from yesterday's conversation, it just doesn't seem ready yet. So withdrawals, the way I see it, are basically done: they are implemented in some of the clients already, and the spec seems to be pretty stable. Whereas with danksharding we recently had a new fee-market thing, and, from my point of view, it looks like we have a hammer — the 1559 hammer — and now everything looks like a nail. And so I think there has to be more.
AB
Yeah, yeah. So — those are the things that were not quite clear to me, and so, from my point of view, we could ship withdrawals easily in January, especially if we, like, kick out some of them.
AB
Okay, it's probably a bigger change for the consensus layer; for the execution layer, I think, it's pretty much done. And okay — maybe I'm totally wrong here — but I agree with the argument that, even if we were to ship withdrawals in January, I think we could only ship danksharding in September of 2023. If we were to do both, we can probably ship it somewhere in the middle. And so, actually, if I really think about it, it would probably cut two months to do both of it — even though I don't like it, and even though I don't like to admit it, I think it might be the right thing to do.
AA
The rebuttal you have is probably that it is a bit more risky to do both at the same time — and I think most people would probably agree with that: making a bigger change definitely adds to the risk of the hard fork itself. And I think we also should have an honest conversation on which cases we are willing to accept those risks in, because what we're doing is just so important that maybe some risks have to be accepted as part of doing it.
X
Paul from the Teku team. Just on that — I mean, there's a couple of aspects. Maybe we need to get better at doing hard forks and releasing them; honestly, it's a long time between code completion and getting to that gate, and that may be a thing we need to address. But on the other side of it, playing devil's advocate: I understand that 4844 might be important, and if it's relatively well defined, there's nothing physically stopping us from doing that first — just delivering 4844 and not delivering withdrawals. I mean, it is —
K
Yeah, we should absolutely focus on doing the most important thing first, always. I think the point that's come up a couple of times is, you know, when we ship withdrawals — and it's actually when we finish code completion, because running through testnets doesn't take that much time; once we put out a release, it should be pretty quick and easy. There is coordination cost, and it consumes all core devs for a while, so you've got to know what's coming next if you're going to parallelize it. But it's code completion that's the big thing — I'll give them that.
AH
In between, I'll just pop in. So, in my opinion, withdrawals are kind of specced out — I mean, there are variations, but it's simple; it's really, really simple — whereas 4844 will most definitely take a lot more time to spec out, and it has a lot more potential problems, with denial of service and everything; it affects the network. So withdrawals — that's just a tiny consensus change: you know, tweak the data structures a bit and done. 4844 has much deeper implications, so I think it will require a lot more work. And, in my opinion, it would be much simpler to say that, okay, withdrawals are definitely going in, and maybe give a cutoff: if, for some reason, 4844 gets complicated and we cannot finish it by month X, then just say, okay, we're rolling with the withdrawals first, and then the other whenever it's ready.
AC
So, a quick question from me, because I heard the term a few times — about code completion and then rolling out the change, and that we are very slow in the second part. You know, I joined recently, right — yeah, I'm a new guy — but what I observed with the merge: we shipped in the middle of September, but, from my definition, I know the code was not completed in August, and some teams were pushing changes at the very end. And for me, when you say that code is completed — you know, you start testing and you don't introduce new changes; the code is stable. So it looked a bit different from my perspective.
I
Yeah, I agree — the merge was not a case where we were code complete three months before. But we have done that before, right? And, you know, again, I think Berlin and London was a good example: as soon as we were kind of code complete on Berlin, we started working on London, before it shipped. And part of the reason for that three-month delay is not for the client teams — it's for everyone running a node, folks like Coinbase — because usually people (not the Coinbase folks here) just don't care about the hard forks until they're announced in a blog post on blog.ethereum.org, and then they're like, "holy —", and then they message me and they're like, "oh my God, we need to upgrade all our infrastructure". And especially so if it's a complicated hard fork — even if it's just introducing a new opcode, they need to upgrade their nodes, and still sometimes, you know, they're like, "oh, we need two months to do that" or something. So it's worth noting: we can set our schedule somewhat independently of the release schedule, but that doesn't necessarily mean you want to do code complete and ship two weeks later, because there's still value in giving people time to upgrade. And I would also argue the merge was cutting it close — and, I think, you know, the merge was a big change, the whole proof-of-stake transition, so there was a bunch of variables in that one. But historically we've been kind of slow, and I think that's healthy: people who are not core developers should not have to look at this stuff every day to know, like, "is there a hard fork in 10 days", right?
AI
What if we prototyped some of our ideas — relating to things like EVM and transaction formats — on a layer 2? That way the libraries and the tooling could adapt to that on the layer-2 side, and then all the downstream stuff is ready, and then layer 1 can implement it when they have the bandwidth, not having to rush to get those in. So you can get cool things like BLS transaction formats sooner, rather than in 2024 or 2025.
AI
When we go through that big list of proposed EIPs that's coming up in the next session, like half of them are just EVM stuff, and that stuff is easy to push through. But then you get into the situation like we had with subroutines, where it was implemented, it was ready, and then, the week before the first testnet, all of a sudden it gets ripped out. I mean, I like the idea, but you've voiced the exact concern.
AI
How do we get the commitment that, if it's done, it will ship on layer 1 without substantial change? Because then, if you have these changes on layer 2 and they change substantially, it's a burden on the layer-2 chain — if they put it on one of their premier chains rather than on a testnet.
I
Just — yeah, we can wrap up here. If there are final comments, we can do that, and then maybe we should take a short break. But I just want to make sure: we wanted to keep the last hour to discuss actual EIPs, because we don't usually have forums to do that with all the client teams. So, if people want to have last comments, or take a short break, you can do that. But let's —
H
Yes or no — we'll see. Cool. Thank you so much, everyone. Thank you for closing the doors — so perfect; you guys are — I love you so much, because you're the best. So, I just wanted to say quickly: now we are going to start the Shanghai EIP pitch session, and there should be, all lined up, the EIP authors, right? If not — and if you are a Shanghai EIP author, please come sit in front, as we want you to pitch your EIP, and —
H
— keeping track of time. Should we start? Yeah.
T
Hey guys, I'm Sarah, I'm a smart contract engineer at Uniswap, and I'm here —
T
And so we're gonna — we're — are we done? Should I drop the mic? Cool. So we're gonna kind of tag-team this EIP today. Yeah — first of all, I want to thank you guys for hosting the session. I think it's super important that, you know, when we're planning for building open-source software, we're really bringing a lot of diverse perspectives to the table.
T
You know, a lot of times client devs and core devs are focused on this really long-term vision for Ethereum, and, unfortunately for application developers, that means some of the stuff we want to see does not get through. So hopefully I'm here today to convince you that this is worthwhile — and, actually, this EIP really will complement the kind of future vision for Ethereum.
AJ
Okay, so we're here today to talk about EIP-1153, which is transient storage. This EIP adds two new opcodes to the EVM, and this concept of transient storage, which is basically like a key-value store per account. Any time that you TSTORE or TLOAD — which are analogous to SLOAD and SSTORE, but instead of putting it into state you put it into this transient storage map — each one is namespaced by the account, and it persists throughout the duration of a single transaction's execution.
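As a rough illustration of the semantics just described, here is a toy Python model (not client code — the real EIP-1153 also journals transient writes so that they revert along with the call frame, which this sketch omits):

```python
class TransientStorage:
    """Toy model of EIP-1153: a per-account key-value store whose
    contents live only for the duration of one transaction."""
    def __init__(self):
        self._slots = {}                      # (account, key) -> value

    def tstore(self, account: str, key: int, value: int) -> None:
        self._slots[(account, key)] = value   # like SSTORE, but not state

    def tload(self, account: str, key: int) -> int:
        return self._slots.get((account, key), 0)  # unset slots read as 0

    def end_transaction(self) -> None:
        self._slots.clear()                   # discarded; nothing persists

ts = TransientStorage()
ts.tstore("0xA", 1, 42)
assert ts.tload("0xA", 1) == 42   # visible within the transaction
assert ts.tload("0xB", 1) == 0    # namespaced per account
ts.end_transaction()
assert ts.tload("0xA", 1) == 0    # gone once the transaction ends
```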
T
Yeah — and so, actually, we have this concept already: you know, storage is used in a transient way right now in the EVM. You can see this sometimes with, like, reentrancy locks, right? This is when we clear a slot back to its original value before the end of the transaction, and then, you know, we're allotted some amount of refunds. And, you know, achieving transient-ness in this way is actually quite messy from the developer point of view.
T
You
know
it's
really
not
straightforward.
You
know
how
the
accounting
will
work
for
this,
especially
because
refunds
are
now
capped,
and
so
you
know
enshrining
this
directly
in
the
evm
is,
is
a
more
direct
kind
of
use
case
to
get
transientness.
T
You
know
also
developers
are
kind
of
having
to
go
and
do
the
sort
of
messy
implementation
where
you'll
see
a
lot
of
times
like
you
know,
one
zero
one,
instead
of
clearing
to
zero
we're
clearing
to
some
dirtied
value
because,
as
we
actually
end
up
getting,
you
know
more
refunds
in
that
case,
and
so
it's
really
the
sort
of
patchy
way
of
achieving
transientness
in
the
evm
and
so
kind
of
the
way
that
that
we
look
at
this
EIP
is
actually
sort
of
a
cleanup.
T
It's
it's
relieving
some
of
this
Tech
depth
here,
because
this
is
a
real
use
case
and
it
is
wanted.
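To make the "messy pattern" concrete, a hypothetical sketch (Python standing in for contract code, not the Uniswap implementation) contrasting the two lock styles — a persistent-storage lock that must be toggled back between dirtied values to earn the capped refund, versus a transient lock that is simply discarded at the end of the transaction:

```python
UNLOCKED, LOCKED = 1, 2   # the 1 -> 2 -> 1 toggle: resetting to a nonzero
                          # "dirtied" value is what earns the better refund

class Guarded:
    def __init__(self):
        self.storage_lock = UNLOCKED   # models a persistent SSTORE slot
        self.transient = {}            # models EIP-1153 transient storage

    def with_storage_lock(self, body):
        if self.storage_lock == LOCKED:
            raise RuntimeError("reentrant call")
        self.storage_lock = LOCKED     # SSTORE on entry (full cost)...
        try:
            return body()
        finally:
            self.storage_lock = UNLOCKED   # ...SSTORE again for the refund

    def with_transient_lock(self, body):
        if self.transient.get("lock"):
            raise RuntimeError("reentrant call")
        self.transient["lock"] = True  # TSTORE: flat cost, no refund games
        try:
            return body()
        finally:
            self.transient["lock"] = False
        # the whole transient map is discarded at the end of the transaction

g = Guarded()
assert g.with_transient_lock(lambda: "ok") == "ok"
```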
AJ
So
one
really
cool
side
effect
of
this
is
it
actually
helps
a
lot
with
parallelization,
because
you
know
we
want.
We
want
to
scale
ethereum.
We
want
to
increase
the
throughput
of
the
system
and
we
can
do
that
by
paralyzing,
the
evm
and
right
now,
anytime,
that
there
is
a
lock
taken
in
a
contract.
It
is
writing
to
storage
and
that
prevents
parallelization
of
that
transaction
with
other
transactions
that
are
trying
to
interact
with
the
same
contract.
AJ
So
if
we
move
all
these
into
transient
storage,
instead,
then
we'll
be
able
to
paralyze
a
lot
more
transactions
and
it's
important
to
try
to
get
this
change
in
sooner
rather
than
later.
So
we
can
start
adopting
this
pattern
now
and
have
more
of
the
network
using
this
kind
of
way
of
doing
locks,
so
we
can
have
more
parallel
execution
in
the
future.
AJ
Another
problem
is
that
it's
really
difficult
to
know
how
much
gas
or
how
much
gas
is
going
to
be
used
when
you're
allocating
memory,
because
there's
this
like
crazy,
non-linear
function.
So
this
makes
it
much
more
straightforward,
because
every
you
know
t-store
uses
the
same
amount
of
gas.
So
it's
easier
for
a
developer
to
like
know
how
much
gas
they're
going
to
be
using
when
they're
writing
their
smart
contracts.
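The non-linearity being referred to is the EVM's memory expansion cost, which has a quadratic term, whereas each TSTORE/TLOAD is proposed at a flat cost (100 gas in the draft parameters, if I recall — treat that figure as an assumption). A quick sketch of the Yellow Paper memory cost function:

```python
def memory_cost(words: int) -> int:
    # Yellow Paper: C_mem(a) = 3*a + floor(a^2 / 512), a in 32-byte words
    return 3 * words + words * words // 512

def expansion_cost(old_words: int, new_words: int) -> int:
    # You pay only for the newly touched region, but the marginal price grows.
    return max(0, memory_cost(new_words) - memory_cost(old_words))

assert expansion_cost(0, 10) == 30       # small buffers: ~3 gas per word
assert expansion_cost(0, 1024) == 5120   # 32 KiB: the quadratic term kicks in
assert expansion_cost(1024, 2048) > expansion_cost(0, 1024)   # superlinear
```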
T
Cool
and
kind
of
on
the
on
the
last
note
here,
I
just
want
to
re-emphasize
that
this
is
not
necessarily
an
addition
to
the
evm.
It's
it's
really
we're
thinking
about
it
as
a
cleanup.
A
cleaner
way
of
of
achieving
this
use
case
in
the
evm
I
also
want
to
point
out.
This
is
a
really
siled
change.
It's
two
op
codes.
It's
easily
testable
and
also
benefit.
It's
already
been
implemented
across
four
clients,
so
nethermind,
Bay,
Zoo,
ethereum,
JM
or
ethereum,
JM,
vs
and
also
Geth
is
implemented.
T
We've
also
written
tests
for
this.
They
all
pass,
and
so
the
kind
of
the
final
ask
today
is
to
just
really
get
some
more
client
Dev
eyes
on
these
PRS
on
the
tests
we've
written
and
to
actually
kind
of
seriously
open
this
conversation
of
having
CFI
for
1153
in
Shanghai.
AH
So, just FYI: I do like the EIP. However, I kind of have a feeling that the rationalizations are not necessarily all equally valid. With the parallel execution — I think that's kind of a faraway dream. So — I mean, essentially, every time you execute a transaction you will touch some state; so if you touch the same contract, you will touch some state anyway. So I don't think it helps there.
AB
I
I
don't
really
see
the
point,
but
I'm
also
not
a
not
a,
not
a
contract
developer.
It
seems
to
me
like
a
nice
to
have,
but
nothing
that's
critical
for
for
us
right
now.
AH
So
I
can
read
about
that.
One
of
the
one
of
the
points
that
I
do
like
about
it
is
that,
with
the
with
these
mutexes,
essentially
it
touches
the
state,
and
even
if
it
does
nothing,
just
flip
some
bits
back
and
forth,
it
still
has
to
touch
disk,
and
this
would
allow
us
to
do
these
things
without
touching
this.
So
that's
a
net
benefit
for
me.
M
I
see
like
it's
really
paralyzation,
that
has
been
quite
a
lot
of
work
in
analyzing
how
much
the
transaction
can
be
paralyzed
and
even
at
flashbots
we
were
running
the
analysis
of
the
of
the
clashes,
the
bundle
clashes,
the
transaction
clashes,
so
I
think
any
improvements
in
this
would
would
be
quite
nice
to
see.
M
There
are
some
Builders
there,
like
I've,
seen,
actually
the
the
modifications
to
get
that
were
introducing
parallelization
and
they
were
working
and
I
was
really
surprised
like
how
some
developers
were
able
to
to
do
that
in
their
own
in
their
own
implementations.
Just
for
the
simulation
efficiency
consider
already
gains
from
paralyzing
transactions,
such
as
the
the
implementations
are
so
complex
and
so
specific
for
for
searching
that
they
are
not
coming
to
the
journal.
M
View
so
it'd
be
great
to
see
that
I
think
the
pitch
was
really
great,
so
I
I
was
not
considering
this
one,
because
I
was
always
thinking
that
anything
that
was
touching.
The
storage
was
awful
amount
of
testing
and
huge
risk
of
of
something
escaping
and
and
some
contract
breakages.
So
so
this
is
my
biggest
worry
like
we
were
modifying
the
cost
of
storage
in
the
past.
M
We're
modifying
the
behavior
of
refunds,
and
this
one
feels
a
bit
like
that
like
when
you
say:
oh,
it's,
it's
a
known
cost,
but
do
we
do
we
cup,
the
the
storage?
Okay,
I,
see
yeah.
I
So, I guess, just to wrap it up — sorry — but, like: we have tests for this, we have — I guess my question is, from the client teams, very quickly: is there something you want to see from here, like a big open question you have about this? Daniel, you have the Solidity question — but, yeah, just to wrap it up: anything you wish you would see from this that would help you better understand it?
AB
Do
you
have
tests
or
benchmarks
for
like
just
writing
as
much
to
the
storage
as
possible,
writing
in
in,
like
small
chunks
into
it
doing
doing
separate
calls
into
different
different
contracts
that
each
write
their
own
I
would
really
like
to
see
this.
T
Yeah, there's an open PR in the ethereum/tests repo; I think most of those cases that you just said are covered. I haven't looked in a bit, but yes, reentrancy I think is covered. I'm happy to share that out with the wider group, but there are extensive tests.
AK
I just had a comment on this: we already did part of it, adding it to the assembly. There wasn't a PR, we just edited it, but it's easy to add the opcode, and you can use it in inline assembly, and if the EIP goes live, that's going to be added instantly. But adding it into the language is going to take quite a bit, because it's a lot of changes.
AL
Thank you. I'm a developer, and I'm here to champion EIP-5978, entitled Gas Refund on Reverts. Some motivation for this EIP: when a transaction or any of its sub-calls reverts, it drops any state modifications, but the user has to pay the full price for those state modifications, even though the state modifications are not preserved forever. There are two problems with this.
AL
One is that users overpay; the other is that it limits some Solidity patterns where you may have a call and revert it. It makes that extremely expensive, to the point where sometimes, instead of reverting a call, you may just transfer the tokens back from one address to another, just to restore the storage, instead of paying the higher gas price of reverting the call. In my opinion that is an anti-pattern, because some side effects can be missed, and it may eventually result in critical hacks and loss of funds. The EIP suggests repricing the following opcodes (SSTORE, CREATE, SELFDESTRUCT) through the gas refund mechanism.
AL
So they're not going to be free: you are still going to pay some price for touching those addresses or storage variables, but it seems unfair to pay the full price. The EIP is in an early version, so I just want to spread more awareness around this problem and hear what kind of comments people have. Thank you.
AH
Maybe I missed something, but is the suggestion essentially to reprice some of the storage opcodes, or...
AL
Sorry, for clarity: let's say an SSTORE operation (not SLOAD, SSTORE) inside a reverted transaction costs 22,000, and then this call reverts. Then the cost of the operation should be repriced as just touching the slot, a different, lower number. You should not pay twenty-two thousand for modifying a storage slot which is not modified at the end of the transaction, and the repricing should happen through the gas refund mechanism.
AE
So I have comments about the general idea. It mostly means you would need to remember all of the changes you've done, and then, at the single point where the revert happens, you would need to go through all of the changes and compute at least the refund you get from that. So you have a single operation that is actually unbounded in its internal complexity, and that might still be doable.
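As a toy sketch of the mechanism being described here (the journal walk and the 22,000 figure come from the discussion above; the 100-gas "touch" cost is an illustrative assumption, not the EIP's number):

```python
# Toy model of refund-on-revert: record every SSTORE in a journal, and
# when a frame reverts, refund the difference between the full write
# cost and a cheap "touch" cost for each undone write.
# Costs are illustrative placeholders, not the EIP's actual schedule.
SSTORE_SET = 22_000   # charged up front for a fresh storage write
TOUCH_COST = 100      # assumed cost a reverted write "should" have had

def execute(ops):
    storage, journal, gas_used, refund = {}, [], 0, 0
    for op, key, value in ops:
        if op == "SSTORE":
            journal.append((key, storage.get(key)))  # remember prior value
            storage[key] = value
            gas_used += SSTORE_SET
        elif op == "REVERT":
            # The unbounded part: walk every journaled change.
            for key, prior in reversed(journal):
                if prior is None:
                    storage.pop(key, None)
                else:
                    storage[key] = prior
                refund += SSTORE_SET - TOUCH_COST
            journal.clear()
    return gas_used - refund

# A reverted write ends up costing TOUCH_COST instead of SSTORE_SET.
```

The single REVERT step is where the "unbounded internal complexity" lives: its work is proportional to the number of journaled writes.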
AH
Just one more comment: usually with reverts you can actually have nested reverts, and that can end up quite nasty, when you have sub-calls that revert while the outer call doesn't revert, and then the outer call reverts after all. So it's not a very, very easy EIP to tackle. It might not be too hard, but I think reasoning about it is not necessarily easy.
AE
Yeah, I think if that is assuming we have to keep the journal all the time anyway, and use it for all the other operations, then this idea fits into that perfectly; I think that might be considered.
AE
But if it is something where you would need to keep a different data structure just for this, I think it would be really difficult to have it. And I know some of the implementations actually don't use a state journal; I mean, that's not something we managed to get used everywhere. In other words, they would be forced to keep something like that anyway, although it's not required right now.
AI
You have to keep track of a whole lot of information for the refunds anyway, for the storage refunds anyway. So it's the same stuff in all implementations, I think, whether it's a journal or a cache: you have all the info you need. But I'm just still traumatized by some of the code I had to write.
AI
We were doing the repricing in EIP-2200 (I can't remember the exact name of it), where it was overfitted to Geth, and our code analysis tool freaked out at the complexity of the code. So I'm kind of concerned about the maintainability of some of these. I mean, I was implementing the algorithm as specified, and Sonar said: that's too complex, you can't do it, obviously.
AM
Actually, it was the same EIP, transient storage. So without reviving everything and reiterating everything there, just kind of adding on a few points. One:
AM
Right now, some of the higher-level... oh, sorry, background: I'm actually a contributor to the Huff language, which is a low-level assembly language, and formerly I worked with the Superfluid protocol, and I think both of these could benefit from transient storage. So, on the Huff side: higher-level languages like Solidity and Vyper have these re-entrancy locks, or modifiers to facilitate re-entrancy locks,
AM
that are very well built, and it's very easy to build this in a safe and secure way. But when it comes to assembly languages, this is actually a problem, because if you set a lock in storage and you don't explicitly free it by the end of the transaction, those locks are now bricked. Obviously you should catch this in unit tests, but it's just one more foot gun on the stack.
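The foot gun can be illustrated with a toy model (plain Python standing in for EVM storage, not real Huff or Solidity): a lock kept in persistent storage survives a transaction that forgets to free it, while a transient-storage lock (EIP-1153 TSTORE/TLOAD) is discarded when the transaction ends.

```python
# Toy model: a reentrancy lock in persistent storage bricks the contract
# if a buggy path forgets to clear it, because SSTORE values survive the
# transaction. A lock in transient storage (EIP-1153) is wiped for free
# at the end of every transaction, so this particular foot gun goes away.
class Contract:
    def __init__(self):
        self.storage = {}    # persistent: survives across transactions
        self.transient = {}  # transient: cleared when the tx ends

    def guarded_call(self, kind):
        slots = getattr(self, kind)
        if slots.get("lock"):
            raise RuntimeError("reentrancy lock still set")
        slots["lock"] = 1
        # ... body would run here; this buggy path never frees the lock ...

    def end_transaction(self):
        self.transient.clear()  # what the EVM does automatically for TSTORE

c = Contract()
c.guarded_call("storage")
c.end_transaction()
# Next transaction: the persistent lock was never freed, so it is bricked.
try:
    c.guarded_call("storage")
    bricked = False
except RuntimeError:
    bricked = True

c.guarded_call("transient")
c.end_transaction()
c.guarded_call("transient")  # fine: the transient lock vanished with the tx
```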
AB
Sorry to interrupt you; I think you're kind of recapitulating the governance process right now, because we already talked about this EIP, and rehashing the discussion is, I think, not great. So I would rather move on with the next EIP, because we discussed this one already.
I
Yeah, yeah, let's move on. I do think, though, it is valuable to know that low-level languages and projects like this want it; if you can, write it on the Eth Magicians post or somewhere.
AN
So I'm working on EIP-5027, and also another EIP, 5478. I'll just discuss 5027 first. Basically, it aims to remove the contract code size limit, right now 24 kilobytes, that was introduced in EIP-170.
AN
I think the motivation is pretty clear: a lot of people complain about the 24-kilobyte contract size limit, especially since contracts right now are significantly more complicated than when EIP-170 was introduced. The major concern behind EIP-170 is basically a DoS attack: if a large contract, maybe 100 kilobytes, were deployed on Ethereum, and a call to it were charged using a flat fee (right now 2600), then it may be significantly undercharged.
AN
So right now the workaround is to split the contract into multiple contracts and then call through them, kind of like a chain of contracts, so they can retrieve their data; but it basically makes the whole logic much more complicated. My current idea for handling the DoS attack in this EIP goes two ways. One is basically to introduce, next to the contract code hash, the contract size.
AN
So when we call a contract, we immediately know its size (the size is very small, like a four-byte number), and then we are able to pre-charge according to the actual size of the contract. For example, maybe we can just charge 2600 per 24 kilobytes, so that we have basically the same gas behavior as calling multiple contracts today, but with everything in a single contract. So this is one idea. Another idea is:
AN
If the contract size is greater than 24 kilobytes, then we can append the size together with the first 24 kilobytes of code, which tells us what the actual size is. When the contract is first accessed, we first charge 2600 for the first 24 kilobytes and read the contract size; once we know the size, we can further charge for the rest of the contract, put it in memory, and then execute. So basically, this is the basic idea.
AN
I also have a simple basic implementation, together with some concerns about addressing warm code storage, and also the P2P packet size. Right now we have a limitation on the P2P packet size, but with, for example, a 50-million-gas block gas limit divided by 200 gas per byte, the contract size is essentially limited to 250 kilobytes. So right now that still fits into a P2P packet.
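Putting these numbers together as a sketch (the 2600-per-24-KiB pre-charge and the 50M-gas / 200-gas-per-byte packet bound are the figures from the discussion, not a finalized spec):

```python
import math

CHUNK = 24 * 1024   # the EIP-170 code size limit, in bytes
COLD = 2600         # current cold account access cost

def call_precharge(code_size):
    """Charge one cold-access fee per started 24 KiB chunk, mimicking
    today's cost of calling through a chain of 24 KiB contracts."""
    return COLD * math.ceil(code_size / CHUNK)

# One 100 KiB contract costs about what five chained contracts cost today.
assert call_precharge(100 * 1024) == 5 * COLD

# Independent practical bound: deployment calldata must fit in a block,
# roughly block_gas_limit / calldata_gas_per_byte bytes.
max_deployable_bytes = 50_000_000 // 200  # 250,000 bytes, about 250 KB
```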
AI
So there's also EIP-3860, not 3978, which is to limit and meter initcode, so we've got two conflicting EIPs. I'm not opposed to changing the limit, but completely unlimiting it, I think, has issues. I think it's Martin that has some really glorious code that shows the performance problems with the current jump analysis on the legacy formats.
AI
The jump analysis is different in EOF, so there are not the same risks, but then, as you mentioned, you get into the issues of how it impacts the storage: bringing, you know, not kilobytes, megabytes of code out of storage. And yeah, I think unlimiting is going to be a hard sell; changing the limit, I think, is going to be an easier sell.
AH
I guess my question is kind of similar to this. Saying that 24K is too small, I can definitely accept that; I think that's a valid concern. My question is: what is reasonable? Because if you go towards saying it should be arbitrarily large, it will get so complicated that it definitely won't ship. But, for example, saying let's raise it from 24 to, I don't know, 64: that's a thing that can be analyzed; we can put a number on it.
AN
Yeah. So I did some experiments using the code: I deployed a lot of contracts of around 200 kilobytes on a testnet, and it looks like everything is working fine. The testnet has been running for more than half a year. Yeah, and especially, right now, on the gas:
AN
Regarding the jump analysis: right now the gas metering charges 2600 per 24 kilobytes, which I think is essentially equivalent to saying I call this contract, which calls another one, which calls another one; so basically I charge for this chain of calls, but in a single payment.
AI
So if we tie this to EOF (you can have larger contracts if you do it in an EOF container), we simultaneously solve the jump analysis problem and add reasons to motivate people to use EOF. So I think there are a lot of things we can combine, and some horses we can trade, to make this work.
AB
So I think it's not possible to do this on mainnet right now, and also not possible to increase the code size. If you're on a client team and you're interested in my reasoning, come talk to me afterwards.
AB
What is possible is to do it in EOF, and I think that's what you should strive for: do it in EOF, when we have the jumpdest analysis stuff.
AH
Just a tiny, tiny comment: you mentioned that you had a test setup up and running, and it's been running perfectly with all these changes. The catch is: that's the average case. We know that it runs perfectly, because the code is written well; the problem is how attackable it is, and on your private testnet nobody is going to attack it.
S
Okay, hello everyone, I'm Proto, I work with OP Labs. We have this dream of Serenity; Serenity includes proof of stake and sharding. We have achieved proof of stake; I'm on the continued sharding part. I'm fully bought into Ethereum for this combination, not for one and not the other, and I think right now the EIP process has been kind of imbalanced since the merge, because we have an execution layer and a consensus layer, and I think an EIP...
S
Just in case, about EIP-4844, if you're not familiar already: EIP-4844 increases the data for layer 2. Layer 2 is meant to be an extension of Ethereum. You could think of the previous sharding dream of Ethereum as this execution sharding thing, where all the complexity was left on Ethereum itself.
S
Layer 2 enables this to be more competitive and to be split off from Ethereum: we have the execution layer as layer 2, and we have layer 1 focused on securing data availability. This is what this EIP focuses on and achieves. Through these means we can adopt a lot more Ethereum users onto layer 2, and projects like Coinbase, or other larger Ethereum users, won't have to look at these "Ethereum killers", in quotation marks, because they can actually host these users at low cost.
I
Okay, you heard it here. Yeah, I just want to make sure we can get to as many people in the next 20 minutes as possible. Thanks; who's going to add something on?
AB
We should remove SELFDESTRUCT. Yes, that's the pitch. We...
AB
Okay. We need to remove SELFDESTRUCT for Verkle trees and history expiry (sorry, state expiry) and all of these upcoming changes, so it needs to be done. The question is: do we do it now, or do we do it later? And I think it's a really small change, so we should do it now.
AH
So, just in case somebody is not really on the page of why we want to remove SELFDESTRUCT: essentially, for every single opcode on the EVM, the cost is linear, or, I mean, tries to approximate the actual execution, the resources it consumes. SELFDESTRUCT is one of those opcodes where, deleting the contract storage, it's a single opcode call, but it can result in an arbitrarily large execution. Currently, the only reason it works is that SELFDESTRUCT assumes clients represent the state in a specific way, the Merkle-Patricia way, and it also assumes that the state does not get deleted from disk; just a couple of branches of the Merkle-Patricia trie get updated. But the moment you want to do something fancier, like what Erigon is doing, or what Geth's new pruning is doing, that assumption breaks.
AE
So yeah, I have only the comment that the current SELFDESTRUCT has a quirk: you can destroy ETH with it. And the question is if we actually want to make the send-all work the same way, or if we want to fix it and make it more intuitive. I think it's kind of a choice between more backwards compatibility and something where it's more obvious how it works.
AB
So, the way I implemented it now, it just doesn't destroy the Ether. And the idea...
AB
Okay, and for everyone in the room: we are not trying to remove SELFDESTRUCT, but we're changing it so that SELFDESTRUCT will just send all of the Ether that is in the contract, but the contract itself will stay; so it will keep working much like the current way.
AB
The only thing that is kind of iffy about it is that there's a pattern where you self-destruct and re-create the contract, but there has been an analysis about it, and it doesn't break too much stuff; we talked to the people that we would break with it, and they seem to be okay with it.
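The changed semantics can be sketched with a toy account model (illustrative Python, not client code): today SELFDESTRUCT deletes the account and its storage, while the proposed behavior only sweeps the balance.

```python
# Toy account model: the old SELFDESTRUCT removes the account, which is
# the arbitrarily large state deletion, while the proposed variant only
# moves the Ether and leaves code and storage in place.
def selfdestruct_old(accounts, addr, beneficiary):
    accounts[beneficiary]["balance"] += accounts[addr]["balance"]
    del accounts[addr]  # deleting code and storage is the unbounded work

def selfdestruct_new(accounts, addr, beneficiary):
    accounts[beneficiary]["balance"] += accounts[addr]["balance"]
    accounts[addr]["balance"] = 0  # contract and storage stay

accounts = {
    "0xaa": {"balance": 7, "code": b"\x60\x00", "storage": {1: 42}},
    "0xbb": {"balance": 1, "code": b"", "storage": {}},
}
selfdestruct_new(accounts, "0xaa", "0xbb")
# "0xaa" keeps its code and storage; only the 7 wei moved to "0xbb".
```

Note the quirk mentioned above: with the old semantics, naming yourself as beneficiary credits the balance and then deletes the account, destroying the Ether.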
AP
Yeah, Ronan. I basically build on-chain games, and also other applications, many things actually; and by that I mean applications or games that have zero back-end, where the user, the player, provides their own node through the wallet as the source of truth. In that context, I am building an indexer that runs in the browser, so you can fetch the logs and it all works fine. But some applications or games rely on time
AP
information, and most developers assume, rightly, that the timestamp is available, so they don't add the timestamp to the events they emit. Unfortunately, the logs don't contain the timestamp information. In my game, for example, with 20,000 events, I can fetch them very quickly, like in five seconds, and all the state is synced; but if I have to add the timestamp, then I need to make 20,000 more requests, and I cannot even batch them, because EIP-1193, which is the only interface I have, cannot do that.
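One client-side mitigation, sketched here under an assumption (the `get_block_timestamp` callback stands in for whatever single-request provider method the wallet exposes): many events share a block, so one timestamp lookup per distinct block is enough.

```python
# Events repeat block numbers, so the number of timestamp lookups needed
# is the number of distinct blocks, not the number of events.
def attach_timestamps(events, get_block_timestamp):
    """events: list of dicts with a 'blockNumber' key.
    get_block_timestamp: hypothetical provider call, one block per request."""
    cache = {}
    for ev in events:
        bn = ev["blockNumber"]
        if bn not in cache:
            cache[bn] = get_block_timestamp(bn)  # one network round-trip
        ev["timestamp"] = cache[bn]
    return len(cache)  # number of requests actually made

events = [{"blockNumber": bn} for bn in [100, 100, 101, 100, 102]]
requests_made = attach_timestamps(events, lambda bn: 1_600_000_000 + bn)
# 5 events, but only 3 round-trips.
```

This reduces, but does not remove, the extra requests; the underlying complaint (logs carry no timestamp, and EIP-1193 offers no batching) still stands.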
AH
So, one of my concerns here is that, long term, Ethereum attempts to remove access to old chain segments, and ideally I would also completely remove access to logs that are older than, I don't know, I would say a month, three months, something fairly high. So essentially...
AH
What I was getting at is that this is kind of a consensus in Ethereum: past chain segments need to be pruned, otherwise the network implodes. From that perspective, the amount of logs you will have to access is more limited, so it might not be that big of an issue; I mean, you could always retrieve the timestamps if you have a bounded number of logs you can access.
AP
Do you think that, if we go to that stage, the wallet interface will also evolve, with a defined mechanism, so that the application can remain decentralized? Or are you giving up on complete decentralization from the application point of view?
AH
But okay, so I think events are completely being misused: they are used as a database instead of as events. In my opinion, Ethereum should use them as events, and everybody else should adapt. But that's my two cents.
AP
I mean, I also have a comment to make, because I think it's a bigger discussion, a lot bigger than what we have time for now, but we have all these applications relying on this. The reason why we use events as the database is, I mean, the typical example is the NFT: if you want to know the list of the tokens you own, and a lot of applications do that, they have this function called fetch...
AI
The GraphQL... there's a standard for the GraphQL interface, and it is in the execution APIs, so Geth and Besu both implement it and can expose it, but...
Q
I'll keep it short, I'll keep it short. So, my name is Matt, and I am an author of EIP-3074. [Off mic: talk more into the mic.] I thought she was going to take it away from me. You're finished now.
Q
Well, that's one reason. Another reason that I think EIP-3074 is very valuable is that it lets all users of EOAs sign a message to create some sort of social recovery mechanism, and if they happen to lose their MetaMask or their Ledger or whatever wallet they're using, they can go and recover it with the people that they signed with. And the third thing I think is really interesting with 3074, and is a testament to how powerful it is...
AI
There are some huge user-experience risks with how it is currently done, and the revision took some of the guard rails off. We don't have enough time to go into some of those issues with safety, and those are, I think, my number one concern on that right now. But if we need meta-transactions, let's make a meta-transaction transaction format, and some of the other ones, you know, account abstraction, yeah.
I
Yeah, okay. [Crosstalk.] They're on the same team. We have 10 minutes left. Sorry, no, well, I will give a shout out: there is an account abstraction panel, I think Matt is on it, later this week. So if you want to go ahead and have a heated debate about the various flavors of account abstraction, and fake account abstraction, and 3074, right.
AB
So, cool; so no one actually proposed EOF, so I guess...
AE
Can I... one comment: if you combine functions and relative jumps, you can get rid of jumpdest analysis entirely, because they kind of replace it, unless...
AE
If we'd like to remove all of these, we can do that with these two features, but we didn't champion it that way, because I think it's not on us to actually say it's great; we need input from people who say they want to use it.
AK
I do have another EIP. I didn't want to keep talking about EOF, because we spent like two hours on it at the protocol workshop, but this EIP is really cool. It's called MCOPY, for memory copying. It's not merged yet, because of the EIP process, but I'm going to summarize it. Basically, to copy memory right now, there are two ways. One way is to do it with a loop of MLOAD/MSTORE; that inefficiency was recognized early on, and the identity precompile was introduced.
AK
I think for the first few months after the launch of Ethereum it was used by the Solidity compiler, but then, with the Shanghai attacks, calls were repriced, and calling it was becoming too expensive, so nobody used the identity precompile anymore. It is just there.
AK
I think Vyper uses it now, but Solidity still uses the loop. So the MCOPY opcode fixes all of this; I'm just trying to read the numbers, so...
AK
Yeah: it takes like 800 gas to copy 256 bytes with the post-Shanghai-repricing cost; with the recent cost it is 160; with the loop it is towards 100; and with the EIP it would be 25 gas. We did some analysis, and I think like 25% of all memory copying would be improved by MCOPY. And there's actually one feature in the Solidity compiler which is, I mean, not blocked by this, but not implemented: slicing of memory arrays. In a lot of cases
AK
people are forcefully using calldata stuff, because that can be sliced in the compiler; so a cheap MCOPY would also improve Solidity as a language.
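A rough back-of-the-envelope comparison (the per-word MCOPY schedule below mirrors CALLDATACOPY-style pricing, and the loop overhead is an assumed figure; neither is the EIP's final number):

```python
import math

WORD = 32
VERYLOW = 3  # cost class of MLOAD, MSTORE, ADD, etc.

def loop_copy_gas(n_bytes, per_iter_overhead=12):
    """Rough cost of a hand-rolled MLOAD/MSTORE copy loop.
    per_iter_overhead is an assumed figure for the pointer arithmetic,
    comparison and jump in each iteration."""
    words = math.ceil(n_bytes / WORD)
    return words * (VERYLOW + VERYLOW + per_iter_overhead)

def mcopy_gas(n_bytes):
    """Assumed MCOPY schedule: a very-low base cost plus a very-low
    charge per copied word, ignoring memory expansion."""
    return VERYLOW + VERYLOW * math.ceil(n_bytes / WORD)

# Copying 256 bytes: 144 gas for the loop vs 27 for MCOPY in this model,
# the same order of magnitude as the figures quoted above.
```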
AQ
Yes; the reason I say it's a "half EIP" is because it's almost certainly not for Shanghai, but I don't think it's been talked about at all, and it's nice to get people's brains brewing on it. So, first of all, this one is based on ERC-4337, right, which is smart-contract account abstraction: a way of getting account abstraction without requiring a hard fork, to avoid all that EIP-process mess.
AQ
And we found that people quite like this approach, because they can already start using their smart contract wallets. But we found that users actually still complain quite a bit about smart contract wallets, because they already have their money on EOAs, and switching all their balances and all their NFTs and everything is usually just too much for them. So we were thinking quite a bit about: okay, how can we develop account abstraction further? How can we perhaps enshrine it a bit?
AQ
So some ideas are just floating around; again, there's no EIP set and no specific roadmap set. But an example is making a new transaction type which converts an EOA to a smart contract that you specify in a data field. This should basically be quite a simple new transaction type; there's not really that much complexity, as far as I can tell. But please, I'd love to hear some comments.
AQ
Some more advanced ones, again just ideation: perhaps making an EIP which converts all current EOA accounts into a sort of default proxy smart contract wallet, which uses the current ECDSA signature scheme that the EOA has already used. And another, sort of more advanced one: this ERC-4337
AQ
works with a so-called entry-point smart contract, through which you route all your user operations to interact with your wallet, and this costs a lot of gas, because you do all the signature verification, all the stuff, on chain, using normal EVM opcodes. So what if, instead, you made this part of the protocol, so that it could be validated outside? We would usually save lots of gas.
AO
Oh yeah, just real quick; this isn't on that, and it hasn't been suggested for Shanghai, but prior to the merge it had a bit of support: time-aware base fee calculation. It would essentially just make 1559 missed-slot friendly. 1559 is aware of blocks; it's not aware of slots. With proof of work you could have, say, an empty block, but now you can have missed block proposals. So here we go.
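A sketch of the idea against the EIP-1559 update rule (treating each missed slot as an empty block is one possible interpretation of "time-aware", not a written EIP):

```python
# EIP-1559 adjusts the base fee per *block*; if slots are missed, no
# downward pressure is applied for the elapsed time. A time-aware
# variant could treat each missed slot as an empty (zero-gas) block.
DENOM = 8  # EIP-1559 base fee max change denominator

def next_base_fee(base_fee, gas_used, gas_target):
    delta = base_fee * (gas_used - gas_target) // (gas_target * DENOM)
    return base_fee + delta

def time_aware_base_fee(base_fee, gas_used, gas_target, missed_slots):
    for _ in range(missed_slots):
        base_fee = next_base_fee(base_fee, 0, gas_target)  # empty "block"
    return next_base_fee(base_fee, gas_used, gas_target)

# Each missed slot decays the fee by up to 1/8 before the normal
# per-block update is applied.
```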
AH
So, with the wallets: I think there were three proposals. I just wanted to mention, there was one proposal where you said that we could just auto-convert everything.
AH
Essentially, that's already a huge issue for Verkle trees, where you just want to do an upgrade where the state gets flipped over, and it's a huge linear migration, and we have absolutely no idea how we're going to do it for Verkle trees. So let's not do it twice. But...
AB
But that would actually break the new semantics that we introduced with... right, you know, which...
Q
But there's no code in the account, so it's already empty, and so it would basically be like me sending a transaction, and rather than executing it the same way that we do today, it would realize that the recovered address has no code in it; then it would just start executing it in an EVM frame, with some default account code that implements the same concept as the ECDSA account.
I
Okay, I think we're going to wrap up; it's past six. First of all, thanks everyone for coming. There are more places we can discuss all this this week: we mentioned there's an EVM panel, where we can get into EOF, 1153, all that good stuff; there's an account abstraction panel, and we just had some fresh account abstraction content as well; and then, finally, there's an ERC kind of Eth Magicians session as well throughout the week. I don't know when they are, sorry; they're all on the agenda.
S
You heard it; oh, Proto, right. So on Friday there's a session about danksharding and proto-danksharding. If you're interested in helping with the EIP-4844 effort, please contact us; we are hosting co-work sessions.