From YouTube: Ethereum Core Devs Meeting #127 [2021-11-26]
D
First thing quickly before we get into it: we have the Arrow Glacier fork happening in two weeks, for anyone listening. It's happening on December 8th, so if you run an Ethereum mainnet node, please upgrade. In the agenda for this call there's a link to the Arrow Glacier spec, with the release versions for all of the clients. Again, Arrow Glacier is just an upgrade to the difficulty bomb, so it doesn't make any substantive changes. You want to upgrade before December 8th, or block 13,773,000.
D
With that out of the way: Mikhail, I saw you had a couple of points you wanted to discuss about Kintsugi. I think we can probably just get a quick update from the different client teams and then dive into those. I don't know if any team wants to start and share their progress over the past two weeks. I know there was some syncing happening this morning or last night.
E
Yes, I can go first. So, the devnets: I'm not sure if we talked about devnet zero on the last call, but devnet zero basically broke because I ran two miners at the head, which meant we had two competing proof-of-work blocks, and clients were not able to fork to the correct one after the proof-of-stake chain started. That meant half of the network was on one chain and half of the network was on the other chain.
E
This is not really fixed yet, so we have to think more about how we can handle these forks around the terminal proof-of-work block, and also maybe how we can surface that to the user, to make the user aware that there are two forks and that they have to decide between them.
E
Then we can have social consensus on the right proof-of-work block and build on that. So that was the first devnet; then there was the second one, devnet one.
E
Devnet one hit the bug as well, but we have updated Geth now, and fortunately Nethermind was correctly producing the chain. We then fixed Geth and synced to the chain. I think currently on the chain it's Nethermind, Geth and Besu, but I'm not really sure; and for the consensus-layer clients it's Lodestar, Lighthouse and Teku that are currently running, I think. It's going.
E
Okay, and I also tested the optimistic sync with Teku today. There was a bug in Geth that prevented Teku from optimistically syncing, but that is also working now, so for the Teku team it works. It was just a bug on our side, and we're going to fix it.
D
Wow, great progress. Anyone from any of the other client teams want to add some more context?
F
Okay, so from Nethermind: we had to do several non-critical fixes and quite a lot of refactoring. Our sync process allows us to work on any devnet; however, this is not our final solution, and we plan to rebuild it. We are in sync and we are producing blocks on the Kintsugi devnet, as Marius said.
F
It seems that Nethermind is working fine, and we will definitely continue testing different client combinations. I think that is the update from Nethermind.
G
From Besu: we've implemented the Kintsugi spec and we've successfully interoped with Mergemock, but we have been focusing more on trying to get our merge branch merged into main, and we haven't tried to interop on the devnets yet. We're at a point now where we probably could do that, but we aren't at a point where we can do that with the main branch, so we'll probably jump in with a client from our merge branch instead. Got it.
H
Right, so for Erigon: we're still developing the merge, so we haven't been able to sync yet. I have a question about the merge. Now we have EIP-4399 as well, and there is a conflict: EIP-3675 says that mixHash should be zero, while in EIP-4399 we are reusing mixHash. So can we actually clarify EIP-3675 so that mixHash doesn't have to be zero?
D
You know, based on subsequent EIPs: I didn't want to assume that every implementation, say another network that wants to use 3675, would also use 4399. But yeah, I just tried to make it clear, and 4399 does basically supersede 3675 on that point.
I
Yeah, hey. So while the aim of the merge devnet wasn't to test the execution side, like random transactions and so on, there is, however, a faucet deployed. So if someone has interesting edge cases they want to try out, please claim some ether and test it out. I'll also share a couple more resources, so if you find something to break, feel free to break it.
E
Also, if people are testing the execution layer: there's the EIP-4399 difficulty handling before the merge and the RANDOM opcode after the merge, so both are in the testnet. If you don't have 4399 enabled, then you cannot sync post-merge and you will crash on block 1641.
C
Okay, in that case, Mikhail, you had two issues you wanted to discuss. The first was the message ordering for the...
J
What's the edge case? It is this: it's very important that forkchoiceUpdated messages are processed in the same order as they appear on the CL, and for this purpose we use JSON-RPC request ids. They must be constantly increasing, and the EL must not process a forkchoiceUpdated method call if it has an id that is lower than the previously processed call of this method.
J
That's how this PR proposes to adhere to the order of forkchoiceUpdated events happening on the CL. The edge case is: suppose the CL just went offline and came back. It will either need to persist the request id, to start from the same value, or we need to reset it somehow via the Engine API. Otherwise the counter will start from some default value, which is one or zero, and the execution layer will have to...
J
If it follows the spec, it will just have to reject those messages until the counter gets back to the same state as it was before the restart of the CL client. So, yeah, the proposal for resetting this counter is just: execution-layer clients reset it whenever they see that the request id is zero.
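To make the rule concrete, here is a tiny illustrative sketch (my own, not from the Engine API spec or any client; the class and method names are invented) of an EL-side gate that rejects out-of-order forkchoiceUpdated calls and treats a request id of zero as the reset signal just described:

```python
# Illustrative only: an EL-side gate for forkchoiceUpdated request ids.
# Ids must be strictly increasing; an id of 0 signals a restarted (or
# swapped) CL and resets the counter, per the convention discussed above.
class ForkchoiceGate:
    def __init__(self) -> None:
        self.last_id = -1  # no request processed yet

    def accept(self, request_id: int) -> bool:
        if request_id == 0:
            # CL restarted: reset the counter and process this call
            self.last_id = 0
            return True
        if request_id <= self.last_id:
            return False  # stale or duplicate message; must not be processed
        self.last_id = request_id
        return True
```

After a reset, the counter simply starts climbing again from zero, so the EL never has to persist anything across a CL restart.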
K
That's it. This seems totally reasonable to me. The edge case is even worse than that: it's not just one single CL resetting. CL A should be able to be swapped for CL B transparently to the EL, and anything that you have to persist across those two layers to be able to communicate after that swap is, you know, worse than just resetting CL A. So I think the zero makes sense to me.
C
Cool, and there was a second issue you wanted to bring up: adding an eth_getBlockBody... or sorry, an engine getBlockBodies method.
J
Yep, yeah. There is a proposal; it's not even a PR yet, it's like a request for comments in an issue. What it proposes is to implement a getBlockBodies method in the Engine API. It allows for pruning execution payloads on the CL side, which saves potentially a lot of space on disk.
J
In this case, the CL client will prune block bodies, which after the merge are just the transaction lists, and whenever it needs to serve beacon blocks to another node or to the user, it will just go to the EL and request the transactions of those payloads that are supposed to be served.
J
One thing worth mentioning here is that this getBlockBodies maps onto the GetBlockBodies message in the eth protocol, so I'm assuming it's pretty straightforward for execution-layer clients to implement, to expose this same logic via the Engine API: the logic basically already exists, and it just needs another interface to be accessible.
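For illustration, a request for such a method might be shaped as below. The method name, version suffix, and parameter layout are my assumptions, since at the time of this call the method was only a request for comments:

```python
import json

# Illustrative sketch of a JSON-RPC request a CL might send to fetch
# pruned transaction lists back from the EL by block hash.
# The method name and params shape are assumed, not taken from a spec.
def get_block_bodies_request(block_hashes: list, request_id: int = 1) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "engine_getBlockBodiesV1",  # hypothetical name
        "params": [block_hashes],
        "id": request_id,
    })
```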
B
So, a question on the response: do you expect it in JSON format, or how are the transactions included?
J
It's basically an array of bytes according to EIP-2718, so each item is either an RLP-encoded legacy transaction or a binary-encoded typed transaction, according to that EIP.
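As a small sketch of that encoding distinction: under EIP-2718, a typed-transaction envelope begins with a single type byte in the range 0x00 to 0x7f, while a legacy RLP-encoded transaction begins with a byte of 0xc0 or higher (the start of an RLP list), so the two can be told apart from the first byte:

```python
# Distinguish an EIP-2718 typed-transaction envelope from a legacy
# RLP-encoded transaction. A leading byte <= 0x7f is a transaction type;
# RLP list encodings start at byte value 0xc0.
def is_typed_transaction(raw: bytes) -> bool:
    return len(raw) > 0 and raw[0] <= 0x7f
```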
B
Yeah, okay. So not a single transaction, but the list of transactions is included. Okay, so I would expect the consensus format, in short, whatever that is; so not the JSON format, rather the binary format.
K
I'd say something like this seems very reasonable. If this becomes a complexity, though, I think it can be optional at the point of the merge and something that's pretty high priority to get in place after; or even just the utilization of the pruning mechanism can wait, even if the method exists. But I think the duplication does make sense.
K
Okay, so mandatory as part of the spec, but whether a CL is doing pruning at the point of merge or not can be optional.
K
Yeah. So I think, as you are all aware, the devnets have now launched, on Tuesday and then Thursday, and we had spoken about attempting to do the persistent testnet.
K
You know, next week or the week after. I think, given progress and given some of the iterative changes that are still coming out on the Kintsugi specs, we'll be at a Kintsugi v3 by about Monday or Tuesday. I would suggest that we aim for the persistent testnet launch on the 14th rather than the 7th, just to give us time. Kintsugi v3 specs are out Monday or Tuesday, so then the 7th can be a v3 devnet, and then the 14th can be the persistent testnet.
K
To the question Marius has: yeah, we will have the changelog and the Kintsugi spec linked in the table.
D
One thing I'll point out for the client teams: if we do aim to have, say, a devnet that's stable and that we want basically non-client people to use over the holidays, I think it was Infura, and maybe Coinbase, that mentioned that just having it in master behind a flag really helps. So if that's something we can aim for on the client side for the December 14th release, I think it'll help get more folks onboarded to it.
B
But the problem is that once we merge something to master, it's... I mean, as was said previously, someone is going to just dig it up again to see if there's something wrong with it or not.
D
Fair enough. In that case, having a clean branch we can point them to, and, if possible, a command-line flag on that branch, is probably as good as we can get.
D
Yes. Danny will be away for the next few weeks, a month-ish, so you're stuck with me and Mikhail.
D
So, okay. We had two other topics from the last call that we had kind of bucketed: basically how we do fork IDs for the merge, and the discussion around EIP-4444 and how we want to go about potentially implementing that after the merge. I think it's worth maybe moving to the next section first, because it might affect those discussions.
D
In parallel to that, in the past two weeks there have been a lot of discussions about transaction costs on rollups and how we can potentially help alleviate those, and two proposals have been brought forward which would reduce the calldata gas cost in different ways, potentially with the desire to see if they could be brought to mainnet fairly quickly, given that one of them is literally a one-character change.
D
So I think it makes sense to have the authors, if they're all on the call, walk through those proposals and why they're valuable, and get some general feedback. The impact of those probably determines how we want to deal with fork IDs and what we have to do with regards to historical data.
O
I'm happy to talk about 4488. Sure, okay. The idea behind 4488 is that it decreases the calldata gas cost from 16 gas per byte to three gas per byte, so it decreases it by more than a factor of five, and this would make rollups about five times cheaper. So, for example, on average Optimism and Arbitrum tend to be around the two-to-five-dollar range.
O
You would bring them under one dollar; and Loopring and zkSync are often about a quarter, and it would bring them under five cents. The one extra feature that the EIP has is that it also adds a separate calldata size limit per block. It says that each block can have at most one megabyte of total transaction calldata, plus 300 bytes for every additional transaction.
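That validity rule can be written down directly. A minimal sketch, with the constants as described above (treat them as the values in the draft EIP under discussion, not as final):

```python
# EIP-4488-style per-block calldata limit as described above:
# a 1 MB base plus a 300-byte stipend per transaction in the block.
BASE_MAX_CALLDATA_PER_BLOCK = 1_048_576  # bytes
CALLDATA_PER_TX_STIPEND = 300            # bytes per transaction

def block_calldata_limit(num_txs: int) -> int:
    return BASE_MAX_CALLDATA_PER_BLOCK + CALLDATA_PER_TX_STIPEND * num_txs

def block_is_valid(tx_calldata_sizes: list) -> bool:
    # The block is valid if total calldata fits under the dynamic limit.
    return sum(tx_calldata_sizes) <= block_calldata_limit(len(tx_calldata_sizes))
```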
O
So there's a limit on gas and there's a limit on calldata, and historically I think we've been wary of adding two-dimensional limits, because two-dimensional limits make the algorithms for figuring out which transactions to include harder: you can't just take the top priority fee, then the next, then the next, and keep going down until you run out of space.
O
So the EIP has a couple of mitigations for this. One is just the fact that we already did EIP-1559, which means that most of the time the constraint is not going to be block size; most of the time the constraint will just be that you take everything that's willing to pay the base fee until the mempool at that level clears.
O
And then, also, just the fact that the limit is quite high.
O
I don't think we've even seen blocks today that rise anywhere close to that limit; the average is somewhere around 15 times less than the limit. And there's also this extra stipend of 300 bytes per transaction, which basically says that even if you create a block and that block fills up to the calldata limit, you would still be able to keep on including transactions with less than 300 bytes of calldata, which is on average about 90 percent of all of the transactions in the mempool.
O
The reason behind this limit is basically that, historically, we have been concerned about the possibility that if there is a really, really big block, then that would just temporarily crash the network in ways that we don't fully understand, because we haven't really had blocks of that size yet. So this is basically just a keyhole solution. It says: well, you can...
O
You can't have blocks that are larger than a level somewhere between one and one and a half megabytes, which is actually lower than the current de facto limit on blocks. Today you could theoretically construct a block of 1.87 million bytes, and with the limits in the EIP it would be between 1 and 1.5 million bytes, depending on the amount of gas and the number of transactions.
D
And maybe before we go to comments, it's worth also highlighting the other proposal, which was 4490. This one basically proposes to just reduce the calldata cost from 16 to six, so a smaller reduction, but then does not add these mechanisms to cap the amount of calldata in a block or per transaction. So it's just a one-line change to the actual gas cost of calldata in transactions. And Micah, I see you have your hand up.
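To show how small the 4490-style change is, here is a sketch of the calldata gas calculation with the per-byte constant as a parameter. Current mainnet values are 16 gas per nonzero byte and 4 per zero byte; the proposal, as described above, only moves the nonzero constant:

```python
# Calldata gas, parameterized on the per-nonzero-byte constant.
# Today: 16 gas per nonzero byte, 4 per zero byte (per EIP-2028).
# A 4490-style change would only lower the nonzero constant (e.g. to 6).
def calldata_gas(data: bytes, nonzero_cost: int = 16, zero_cost: int = 4) -> int:
    return sum(nonzero_cost if b else zero_cost for b in data)
```

For example, `calldata_gas(data, nonzero_cost=6)` models the proposed reduction, which is why it amounts to a one-line constant change in a client.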
O
Right, so this mitigates the two-dimensional optimization problem. Basically, the problem with adding any kind of extra limit other than the gas limit is that the naive algorithm for filling a block is not going to be optimal anymore, which would mean that either we have to actually implement some much more sophisticated algorithm, or blocks created in a non-standard way are going to be more profitable, and so even more block production is going to migrate to things like Flashbots.
O
So the idea behind the 300-byte stipend per transaction is basically that if you still create a block using the current naive algorithm and your block fills up to the calldata maximum before you run out of transactions or before it hits the gas maximum, then you would still be able to keep on including any transaction with calldata less than 300 bytes. So you'd still be able to keep on including ninety percent of the mempool.
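The argument above can be seen in a toy version of the naive greedy packer. This is entirely my own sketch, not any miner's code; the gas limit and the tuple format are illustrative, and real block builders do much more:

```python
# Toy greedy block packing under both a gas limit and a 4488-style
# calldata limit with a per-transaction stipend. Each tx is a
# (priority_fee, gas, calldata_size) tuple, pre-sorted by fee descending.
BASE = 1_048_576   # base calldata budget in bytes (illustrative)
STIPEND = 300      # extra bytes of budget per included transaction

def pack_block(txs, gas_limit: int = 30_000_000):
    included, gas_used, calldata_used = [], 0, 0
    for fee, gas, size in txs:
        if gas_used + gas > gas_limit:
            continue
        # The limit grows by 300 bytes per included tx, counting this one.
        limit = BASE + STIPEND * (len(included) + 1)
        if calldata_used + size > limit:
            continue  # only large-calldata transactions get skipped here
        included.append((fee, gas, size))
        gas_used += gas
        calldata_used += size
    return included
```

A big rollup transaction can exhaust the base budget, after which a 500-byte transaction is skipped but a 100-byte one still fits, which is the "ninety percent of the mempool" point made above.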
P
And just to briefly add to that: it's important to note that in the extreme scenarios, say rollups become very popular while this is in effect, you have one big rollup transaction in every single block, more or less, and that fills up almost all the calldata. This would also start interfering with the 1559 algorithm, because basically, if you don't have a stipend, then after this rollup transaction there could only be ETH transfers, since those are the only transactions that don't consume any extra calldata.
P
But you might just not have enough ETH-transfer demand at that base-fee level, and so the base fee would be artificially depressed, and then you basically have the rollup transactions doing the first-price auction again on the priority fee. So the stipend just ensures that blocks can be produced as normal.
O
Right, so in that kind of extreme world, basically the rollups would sometimes end up competing by setting high priority fees, and if your transaction is one of the ten percent that has really big calldata, you would have to push your priority fee up to compete with the rollups. But if it's under 300 bytes, then you would still be able to get in with the usual priority fee of one to three.
H
Right. I'd like to relay Alex's concern, which is that with this change we might actually inhibit the adoption of data sharding, because the input data on the execution layer will be so cheap that nobody will be incentivized to move to data sharding. That was his concern.
K
Oh, the data sharding one would eventually have much more capacity than what can be provided here, and does not have the competition with additional execution. So I think you would naturally see the move to data sharding, because, one, this capacity would be exceeded, and, two, you could likely find cheaper data elsewhere.
Q
I mean, I think generally there's a strong case for just reversing this change when we have data shards. Why not?
D
Right. And I guess one thing in the chat that Ansgar said is that sharding is obviously probably still a couple of years away. So that's kind of a medium-term fix, contrasted with the current state, which is high fees on the network.
R
So, therefore, you make three tiny, tiny transactions that are, I don't know, at the minimum, and that frees up, gives you another 900 bytes that you can put into the block.
P
Right, so actually this is, in a sense, intended. Basically the idea is to say that in a block, the first megabyte of calldata is really cheap: you basically get it almost for free, just paying the normal calldata pricing that the reduced three gas per byte provides. Afterwards, calldata just becomes really expensive, meaning that for every extra 300 bytes you want to consume, and 300 bytes is a super tiny fraction of one megabyte, right...
P
You have to include one more transaction, which is at minimum twenty-one thousand gas. So if you're in this situation, let's say you have a 600-byte transaction and you're really willing to pay quite a bit to get it in: even if you're not going to compete for this prime spot at the beginning of the block, it's perfectly fine to even, I don't know, use Flashbots to create a bundle where you add an ETH transfer in front of something.
P
This would be a super rare workaround, but the gist of the idea is just that it makes calldata after this first megabyte expensive. It's not that there's a hard cap of 300.
O
Yeah, and you don't actually need Flashbots; you can just send a series of transactions with the same sender and sequential nonces.
P
Right, but basically the padding transaction would only be a workaround for inefficient mining algorithms, because if the miner runs a mining node that can actually deal with this, then it just reorders the transactions. Again, the 300 bytes is chosen such that 90 percent of transactions are below it.
P
So once you deduct the big one-megabyte rollup transaction, the rest of the block, in almost all circumstances, would on average consume less than 300 bytes of calldata per transaction, smoothed over all of them. So the only problem that can occur is that if the miner naively assembles the block, they might skip a transaction with 600 bytes of calldata, even though, if they just included all of them, it would still on average work out.
P
Fine, and the block would be valid. So if they are smart about it, this will just never happen. This is really just a workaround for either miners that run an inefficient mining algorithm, or the very occasional block where, even after the big calldata transaction, the average calldata per transaction is still above 300. That might be worth looking into, but I would just assume that's basically, I don't know, one block a day or something, and having two or three padding transactions a day shouldn't be a problem.
O
Right, yeah. Basically, I think it's a very exceptional case, because it would have to be both for the subset of transactions that are bigger than 300 bytes, in that exceptional case where calldata actually is going over the cap, and also a situation where priority fees are getting so high that padding with junk transactions is a cheaper strategy...
O
...for that sender than just bumping up their priority fee, right? Because if you're willing to add a bunch of junk transactions, then you would also be willing to add a pretty big priority fee. So that would be my answer. If we really want to make this a non-issue, there's the possibility of redesigning the EIP to basically say that data under 300 bytes doesn't increase the stipend.
O
So, instead of saying the total calldata has to be less than a million bytes plus 300 times the number of transactions, we would say that the total number of bytes beyond the first 300 of each transaction can't exceed one megabyte. I guess that would be less exploitable, though.
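Under the same illustrative constants as before, that redesign would change the check to count only each transaction's excess over the 300-byte allowance (a sketch of the idea as floated here, not spec text):

```python
# Alternative 4488-style limit floated above: only calldata bytes beyond
# the first 300 of each transaction count against the 1 MB block budget,
# so small transactions never move the effective limit at all.
LIMIT = 1_048_576  # bytes (illustrative)
ALLOWANCE = 300    # per-transaction free allowance in bytes

def block_ok(tx_calldata_sizes: list) -> bool:
    excess = sum(max(0, size - ALLOWANCE) for size in tx_calldata_sizes)
    return excess <= LIMIT
```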
O
That would also increase the number of times the limit actually gets hit, basically because people's ordinary ETH transfers would no longer increase the limit.
P
And just for the record: if this really were a sustained concern, then I'm sure we would find someone willing to just write an optimized miner algorithm that is smart about this, so this one won't be relevant. If no one else would do it, I mean, I'd be happy to do it myself.
S
So is it possible that this will incentivize miners to skip the huge calldata transactions and have an effect opposite to the one desired?
S
So, for example, it makes the block smaller, so they can, I don't know, propagate it quicker, etc.
P
Right, so that's actually a good point. I think if we look at, say, the conversations around what the expected optimal priority fee is, there was some reasoning that we would expect a priority fee slightly above one gwei, to account for the increased uncle risk of basically having bigger blocks. And so it could well be that, indeed, miners choose to demand a priority fee above one gwei specifically for big calldata transactions.
P
This would be a trivial rule to add: you could just look at the calldata and, if it's above some threshold, you basically say, okay, I ignore transactions if they don't have at least two, three, four, five gwei, whatever the miners choose to counter the uncle risk. But this should just be a normal, simple economic decision, and then there's simply a level of the priority fee. So it might well be that rollups just have to pay a little bit more priority fee, just to make miners whole there.
D
And I saw you commenting about this this week, and I think you feel pretty strongly that this is something we should do sooner rather than later. So do you want to take maybe a minute to explain your position and why you think this is really important?
Q
So it's just like a temporary subsidy, you could say, for the rollups, in order to push this. And I would basically argue that we should try to just get this done, even this year, even if there are some risks associated with that. I think it's a very simple change, and I think it would show the community that we really do care, and that Ethereum is still alive and punching and isn't completely ossified, and can actually make a change like this in several months.
B
Well, I mean, the theory sounds nice, but I think you seriously underestimate the effects it has on every other part of the system. Of course, it's a one-character change to drop the calldata gas cost, but, for example, Geth currently limits transactions to a maximum of 128 kilobytes.
K
Yeah, there are two very clear, clean mechanisms to deal with that. One is they just limit the size of the rollup blocks and are able to check in multiple rollup blocks in a single L1 block. The other is that there already are mechanisms, I think Optimism uses this, where Optimism preloads the data in a transaction and then uses a subsequent transaction to essentially check in the data, and so you could do batches on the first rather than needing a monolith.
B
So the reason I brought it up is that, in the past week or so, somebody, I think Arbitrum, was opening issues on the Geth repo asking whether we could raise the transaction size limit to half a meg. Essentially, there's definitely a sweet spot: 128K was kind of put there as a standard a little bit arbitrarily, and we could go a bit up, but there are limits.
N
And this was my comment as well: I agree with what Peter said last meeting, which was that while we don't need to implement EIP-4444 right away, we need to commit to implementing 4444 in the future. 4444, for those who don't remember, is the one where we assert that we're going to start dropping historical bodies and receipts. With that, history growth is much less of a concern; without that, then I agree...
N
...that's a big, big concern. So I don't have a problem with this EIP as long as it comes with the promise of committing to 4444 eventually, which means we advertise to users today: hey, we are going to be dropping history; rewrite your dapps to not rely on infinite history storage.
B
Yeah, so that's what I also wanted to get to before. The thing is, what we kind of need to be aware of is what the timeline would be. So we could commit to 4444, saying that, yes, we will start dropping, but we kind of need a realistic attack plan for the whole thing, because just promising that we will do it someday, while the chain grows wild in between, is a bad place to be.
K
I think we can reasonably form a working group that pushes this out by the end of 2022, with an even more aggressive timeline potentially being available. A lot of these things don't have to concern this group directly, so it can be done in parallel without taking a lot of resources from this group.
B
It will probably be a bit weird, a bit hiccupy, when all of a sudden you have large blocks appearing. So I'm kind of curious how the network will take it, but yeah.
Q
Maybe a question on this. Right now, the only real concern about having this large chain data... realistically, I could run a node dropping all the chain data, except that then I'm not a fully functional peer-to-peer client, right? People could request data from me, and I can't give them those blocks, because I don't have them.
N
Yes, and there are some who also use calldata, which annoys me greatly, but...
O
Yes, or the implied alternative: stop serving transactions older than a year. I forget what the exact change was.
T
I think one by one it's generally okay, though, because if they're doing this to populate their own database, then we can just require them to download that data out of band and run it through, maybe, a more sophisticated client. But I'm worried about the case where the dapps are literally calling your local client and requesting historical data.
O
My impression would be that a lot of those applications are going to be crazy slow already in the world where you're not talking to Infura, right? There's a bunch of dapps that already basically had to switch to The Graph, because using Ethereum's built-in log querying over very long time spans is just too slow.
O
And I think it can be done, especially given that I'm expecting a serious push for the revival of light-client usage. Especially post-merge, now that we have Altair, we know that very efficient light clients are possible, and once you have light clients, then doing history queries and log queries and everything is pretty simple.
D
Just to bubble this up a little bit: obviously there are concerns about chain-data growth, and some concerns about just the max transaction size, which we can kind of sidestep for now. We spent a lot of time earlier talking about the optimal block-packing strategy, and I'd be curious to hear, just between the two proposals: do people feel that having, you know, a lower calldata cost with a couple of...
D
...additional constraints is significantly harder to implement than just changing the actual calldata cost? Or, basically the flip side of that: is just changing the calldata cost, even if it's not lowering it by as much, already too much in terms of the worst-case block size, so that there's not a world where we just lower the calldata cost, because we need some cap anyway?
D
So I guess, yeah, kind of a roundabout way of asking: is there basically a proposal people feel is more realistic or, you know, simpler to implement, just so we can focus the conversation?
D
And otherwise, more explicitly: say we just reduced from 16 to 6 and had no other mechanism, so no cap on the block. Would that be, chain-data growth aside (which would grow slightly slower than at a cost of three), reasonable?
N
I agree, as long as we're okay with punting the block-stuffing strategies to searchers and professional block producers, and we just say, hey, it's their problem, because in that case it really is very easy to change: it's one line.
N
Basically, if we want to force the Geth team, on the other hand, to go and design a better block-stuffing strategy, then that costs, you know, a quarter of time, significantly more.
P
Yeah, yeah, but just to reiterate: that's basically exactly what the stipend mechanism is supposed to address. As we talked about earlier, even if you do the naive thing, you already get 95-percent-optimal blocks; you have the occasional...
N
G
Yeah, I was going to say, I don't think Besu has a significant portion of mainnet mining, but I think we're okay with making this change.
Q
D
P
This time it's actually on purpose. Okay, perfect. No, I just briefly wanted to try and bring this back to the timeline discussion, because I generally agree that a general commitment to something like 4444, or just in general that we say that in the medium term we expect, or we are certain, that clients will no longer provide all the history by default...
P
If we are certain about that, in any version, it doesn't matter exactly how we end up doing it, but I think something like that would indeed be important here. At the same time, for context, right now we only have a few rollups live on mainnet, and those usually only settle very occasionally. I haven't looked specifically at Optimism, but zkSync and so on,
P
last time I checked, it was every couple of hours. So there's a long way to go between the current situation and one where they settle every single block, like one rollup settling every single block. Hopefully we get there, of course, because that means we have more adoption. But it won't be the case that we turn this on and immediately have a yearly growth of three terabytes; that would be the long-run situation, and even then it would take a year to get to the three terabytes.
P
So I think it's something where, as we're saying, as long as we're reasonably certain that by the end of next year, or even by early-to-mid 2023, we have some version of history limiting in place, this would be perfectly fine. The question is just that I think a lot of people here would really want to see this change in a special-purpose fork before the merge.
P
And for that, of course, the timeline would have to be quite accelerated on decision-making. The question is: if it takes us a lot of time to discuss the details of 4444 along the way, how comfortable are we with already making a preliminary decision on a special-purpose fork for this before we have fully worked out the details? I think that's really the main question.
D
Right. And maybe to give some context on the timeline: the default path, if this was accepted, is we'd say it goes into Shanghai, and realistically Shanghai is a year away, plus or minus a quarter. The pushback there would be, well,
D
transaction fees are already super expensive, they're non-trivial even on rollups, and waiting another year to reduce them is just really bad. So the options you have then are: there's something between the merge and Shanghai that's just this, though then you also open up the can of worms of all the other things we want to do after the merge. Then, obviously, there's the merge itself; that's a fork, and we're already making two EIPs there.
D
The other thing is, we discussed on the last call potentially having an empty fork before the merge just to set the fork ID for the merge, and that's a time where we might have to get everybody to upgrade their nodes anyway. And then the other option is a completely separate fork with just this, at the risk of potentially pushing back the timeline for the merge. I think that's the other thing.
D
Obviously people are very excited about the merge, and you don't want to push it back, for a bunch of reasons. So if we have a completely separate fork, is that actually pushing things back, or is that maybe the most efficient way to do it, because then you're separating concerns much more? I think that's kind of the timeline discussion.
D
K
The fork ID change wouldn't be two software releases. Say the fork ID changes two or four weeks before the merge is supposed to happen: you upgrade your node before then, and you don't have to upgrade your node a second time later. So if you coupled this, or anything, with that, then you have now coupled them inextricably.
K
B
I guess if somebody just creates a proof of concept to see what it would take, then it might be interesting to consider whether we can get it in before the merge without delaying the merge, and if not, do it after the merge. The only downside there is that that will take, I don't know, half a year at least. The upside would be, however, that after the merge we would have a nice, simple, so to say, hard fork to test out how we can fork stuff in a post-merge world, which isn't a bad thing either.
P
All right, I just quickly wanted to mention that Quilt looked, just out of curiosity, into an implementation of 4488, and basically implemented it. It didn't get tested yet, so it's definitely only a prototype, but it was pretty straightforward. So I don't think the implementation complexity would be high. The other comment I want to make is that I think we have talked repeatedly in the past about how
P
there is a problem with the AllCoreDevs governance structure in that user voices are kind of underrepresented, and I think this is really by far the purest example of that. I think the user side is very overwhelmingly for trying to do something like this before the merge, if there's any chance for it. And listening to this call, at least, it sounds like everyone here was at least ambivalent.
P
I didn't hear any strong opposition against this, so I would argue that we should do this before the merge, because we are more or less ambivalent, but the community, I don't want to overstate it, but it seems the community is very strongly in favor of something like this before the merge. I even saw a lot of people saying, yeah, even if
P
the merge is delayed by a couple of weeks, and I don't even think that's necessarily the case, but even if it was, that would be more than worth it. So I really think we should try. Even if no one is on the call here to represent this broader set of people, I think we should really keep in mind that that is an important
V
opinion. All right, I agree with that 100%. I think it will provide the change in both narrative and cost for users to do the migration to rollups. In general, we, from the software side...
D
H
I think it's not a good idea to do it at the same time as the merge, because I agree with Peter that the merge is already complex enough. And if we decide to do it before the merge, my slight preference would be to go for the simpler 4490, because it's a more trivial change than 4488.
D
Got it. And is this based on Erigon's engineering capacity, or just on analyzing the second-order impacts of the change?
H
I think it's the second-order impacts, because it's not hard to add an extra validation, but there might be implications for block proposers and so on.
D
So would it be valuable to, say, in the next couple of weeks get feedback from, I don't know, say Flashbots, or some team that has experience with the mempool, to see how easy or hard it is to have an optimal strategy? Would that help with these concerns? I know Erigon also has a separate design for the transaction pool.
D
So I'm not sure if it's helpful to, say, have an implementation in Geth, or if that's just so different from what you all have. I guess I don't know what would be the way for you to feel more comfortable about that. Is it just taking more time? Is it seeing a proof of concept?
H
V
I even want to point out that we could, you know, do a 4490 before, and then do 4488. I mean, 4490 is just a simplified version, and 4488 just goes deeper.
V
That's a very good question, and I don't have a good answer for that. I don't expect those two to actually happen, because obviously there is a max transaction size in Geth, which is 100 kilobytes, which means that every time we publish a transaction there is at least 21,000 gas spent on the signature itself, like another base cost. So we will never get to those numbers. But I didn't quantify the actual math, right.
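The math being alluded to can be sketched roughly as follows. The 100 KB size cap and the 21,000-gas per-transaction base cost are the figures mentioned on the call; the 30M gas limit and the 6 gas/byte calldata price are my assumptions for illustration.

```python
# Sketch: per-tx size caps plus intrinsic gas mean a block can't be one
# giant calldata blob, so the pure gas-limit worst case overstates things.
GAS_LIMIT = 30_000_000
INTRINSIC_GAS = 21_000      # base cost paid by every transaction
MAX_TX_BYTES = 100_000      # ~100 KB tx size cap mentioned on the call

def max_calldata_bytes(calldata_gas_per_byte: int) -> int:
    """Max total calldata bytes per block when every tx is at the size
    cap, so each cap-sized chunk also pays the 21k intrinsic cost."""
    gas_per_tx = INTRINSIC_GAS + MAX_TX_BYTES * calldata_gas_per_byte
    n_txs = GAS_LIMIT // gas_per_tx
    return n_txs * MAX_TX_BYTES
```

At 6 gas per byte this comes out a bit under the 5 MB naive bound, which is the point the speaker is making: the base costs shave the worst case down, though not dramatically.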
N
Right. I think what we need to remember is that if someone can crash the network, or break the network or whatever, by making a five-megabyte block, we will see a five-megabyte block. I don't think the question is: are we likely to see one naturally? It's: will one break the network? And if so, we will see that block.
K
V
That's a very fair statement. Yeah, I don't have a clear answer for that. Someone said that in the worst case, due to EIP-1559, it also goes to 10 megabytes in the worst-case scenario, because of the 30 million gas limit.
D
Yeah, so the worst case is a 5-megabyte block, not a 10-megabyte one. And it's worth noting that it's easy to send one of these worst-case blocks, but then, if you're spamming the network with these five-megabyte blocks, the cost goes up exponentially.
D
So
so
you,
you
know
you
can't
send
like
105
megabyte
blocks
in
a
row.
The
cost
to
do
that
would
just
like
be
millions
of
ether
with
the
basic
going
up.
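The exponential-cost point can be sketched numerically. Under EIP-1559, each completely full block raises the basefee by up to 12.5%, so the per-block cost of sustained worst-case spam grows geometrically; the starting basefee below is an arbitrary illustrative value.

```python
# Sketch: cumulative basefee burned to fill n consecutive blocks, with
# the basefee rising 12.5% after each full block (EIP-1559 max step).
GAS_LIMIT = 30_000_000

def spam_cost_gwei(start_basefee_gwei: float, n_blocks: int) -> float:
    """Total basefee (in gwei) burned to fill n_blocks full blocks in a row."""
    basefee, total = start_basefee_gwei, 0.0
    for _ in range(n_blocks):
        total += basefee * GAS_LIMIT
        basefee *= 1.125  # each full block raises basefee by up to 12.5%
    return total
```

After 100 full blocks the basefee has risen by a factor of roughly 1.125**100 (about 130,000x), which is why sustained spam quickly costs millions of ether.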
D
B
So, just to add, since we're talking about five-megabyte or 10-megabyte blocks: devp2p has a message limit of 10 megabytes, so essentially a 10-megabyte block would be impossible to propagate on the Ethereum network, possibly even eight megabytes, I don't know. I'm not entirely sure what the payload caps are, but I think Geth has a cap of two megabytes or something; it has a pretty aggressive cap on how much it's willing to forward.
B
B
You will actually propagate it to seven peers, and you can expect to get it from seven other peers that you have, so it's going to cross your network line 14 times. If you have a 5-megabyte block, that's 70 megabytes of data traffic just for that one block to propagate through the network.
D
One
thing
sorry
lucas
I'll
get
to
you
right
after
this,
but
one
thing
that's
worth
highlighting
also
is
next
call
is
the
last
call
of
the
year.
So
next
awkward
f
calls
will
be
december
10th
then
the
next
one
would
fall
on
december
24th.
I
don't
think
anyone
here
wants
to
have
a
call
on
christmas
eve,
and
even
if
some
people
did,
I
would
probably
get
quite
low
attendance
and
if
obviously
we're
talking
about
potentially
doing
one
of
these
changes
before
the
merge.
D
If
not,
you
know
by
the
next
call,
definitely
by
the
absolute
first
call
in
january,
and
so
I
think
it's
it's
just
worth
kind
of
highlighting
that
like
to
see,
I
don't
like
the
people,
think
we
should
like
think
of
basically
plan
out
to
think
about
prototype
a
potential
fork
for
this
in
february.
I
I
don't
think
we
need
to
agree
to
it
today
and
if
so,
basically,
what
are
the?
D
What
are
the
things
that
we
want
to
figure
out
in
the
next
two
weeks
right
like,
and
and
can
we
potentially
come
back
with
that
and
and
make
a
decision
or
like
get
closer
to
a
decision
on
the
next
call,
just
because
yeah?
If,
if,
if
we
are
doing
this
before
the
merge,
we
we
need
to
make
the
decision
sooner
than
we
otherwise
would.
M
That we should do 4444 or some derivation thereof, there is some community
V
disagreement over the change in security assumptions for optimistic rollups. So I don't know if the community will not fight this; I've already seen someone on Twitter being extremely active against 4444.
V
P
Because you might be the best one to ask: isn't it the case that we expect rollups to move to sharded data availability in the next two or three years anyway? And sharding, as has been pointed out, already has the same security; it only provides data availability for the current data, but not the history. So basically, isn't it the same situation that we end up in in the long run anyway?
Q
People confuse data storage and data availability; these are different properties. And I don't know, Lukas, did you get this from any actual rollup people? I know there was someone on Twitter who did that, but I think they didn't understand the problem.
V
So, from the perspective of the rollup, there is a solution to 4444, which is to have an off-chain queue of, like, when slots were last touched or modified. So there is a way around it without forcing the rollup to manage that. I'm not familiar with how Optimism can sustain the change in assumptions, so I can't talk on behalf of Optimism from here.
H
Yeah, so we discussed 4444 inside the team and we are not opposed to it, so Erigon is in favor of it. Thanks.
K
Right, and the assumption is that 4444 is coupled with historic block distribution standards that are outside of the p2p network, and that if the blocks are available via any of these standards, you can always recursively validate all the way to the base of the chain, via knowing recent, secure information about the head. So the expectation with data availability is that the data is made available, and then those that want to have it can use it, and there are no security risks around state-withholding attacks, once it's made available and infrastructure and communities are utilizing it.
Q
Yes, agreed. Basically, we should be careful, because data storage is not an attack vector. You can't make someone forget data; you can only withhold data in the first place. That's the attack vector.
A
V
That's a very good question. For the zk rollup, I think there is a solution, which is, as I said, keeping a queue of the data being stored, like when it was modified, and publishing and republishing it before the grace period, before the destruction of the data, to guarantee that availability over time. What I'm saying is, of course, it would require engineering work, and it's something that would be quite an extensive modification of what we call the core OS of StarkNet.
V
Once we can manage it, we can probably work around it. But what I'm saying is, obviously, for the other zk rollups I can't talk on their behalf, so they should come also, or we should reach out.
Q
U
Q
V
On re-running the network and being able to reconstruct the storage under 4444, I do think zk rollups can make it; I don't know for optimistic rollups. This is important; maybe we should check.
Q
D
So, I'm sorry, just because we only have about five minutes left: obviously 4444 is going to need a lot of community outreach. I suspect whatever version goes live is probably not exactly what the EIP is today, but something like it. This probably has to happen at some point.
K
D
D
What I think we can do in the next two weeks is try to highlight what the different teams want to see to be comfortable with 4488. Obviously, having implementations across all clients is one thing, and then there are Andrew's concerns, and others have raised this as well, around how you do optimal block packing. I'm not sure that's something we can get done in two weeks, but it might be something where, even say we have
D
You
know
the
client
implementations
ready.
We
still
need
to
figure
that
out,
but
we
we
don't
necessarily
have
to
block
everything
on
it,
and
I
don't
know
how
like
would
most
client
teams
have
the
bandwidth
to
prototype
this
in
the
next
two
weeks
and
if
not,
I'm
happy
to
reach
out
and
try
to
find
teams
or
contractors
that
can
that
can
work
on
prototypes
yeah.
V
I just wanted to correct one thing before we drop: I'm not trying to fight them. I was just expressing the current security model, and maybe the security model can be changed, and I agree with that. On 4444 being a necessity, there is really no doubt about it. I was just saying that we should...
E
So Tim, one thing that I would really like to see is: what's the worst block with the current packing algorithm if we change it to this new one? Like, how different can this block be from a really smartly packed block? If someone could work on that, that would be really, really nice.
E
No, I mean, given a set of transactions and a smart algorithm to pack them, how much overhead can there be compared to the naively packed version of that block, and...
P
P
Okay, because I think it doesn't make sense to analyze an artificial worst case, because you can always construct a case where transactions are pending in just the right way that you lose out on a lot of profit. But that's not an attack vector; taunting someone with potential profit and then not giving it to them, that's not an attack. So basically what you'd have to look at...
D
And yeah, so basically: there is a PR against Geth, Nethermind can prototype, Erigon can probably pull in the consensus changes but not the transaction pool changes, and it would take some extra bandwidth to do that. Besu?
D
U
Yeah, we're always open to PRs. Two weeks for a prototype would be tight for us, but I'd have to talk to the rest of the team.
D
Okay, so I guess let's see what we can do in the next two weeks. If you are listening to this call and you've got a Java or Go background and you can help with PRs, please reach out to me. We can definitely find or set up some bounties and whatnot for that. And I guess in two weeks we can get back here, discuss, and see what progress was made and
D
You
know
how
how
realistic
is
it
to
put
this
potentially
before
the
merge,
we're
and.
K
I want to say, I think that the EF can also commit to providing resourcing for pushing 4444 along throughout 2022. Yep.
D
Cool, we are one minute away from time, but there's someone on the call who had a quick shout-out. Zach is here; he's working on a documentary about Ethereum called Infinite Garden, so he'll probably be trying to reach out to a bunch of people on this call, and others in the community, over the next few months. It's already started, so I just wanted to give him a few minutes to explain what they're working on before we go.
W
Thanks, Tim. I just don't want to take too much time, but I wanted to jump on. Obviously we are huge fans of the work of the core devs, and we're making this film called Ethereum: The Infinite Garden, which was actually funded by the community via a crowdfund on Mirror. It's allowed us to focus on how Ethereum is already affecting people's lives around the world, we've been filming all around the world, and how Ethereum will affect people's lives in the future.
W
So
we're
filming
through
the
merge
and
obviously
the
work
of
the
core
devs
is
fundamental
to
the
story
of
this
film.
So
I
just
wanted
to
jump
on
and
you
know
if
we
pop
into
your
telegrams.
I
wanted
you
to
know
who
it
was
so
you
know
we'll
be
talking
to
a
lot
of
you.
Moving
forward
and
yeah
really
just
appreciate
the
time
and
all
the
work
you're
doing.
D
Thank you. Where's the best place for people to reach out if they want to ping you or have something interesting to share?
W
Yeah, absolutely. My email is zac at optimus.com, and you can check out all our work, our previous films that are on Netflix and HBO, on optimus.com.
D
Awesome, thanks. And one last quick thing before we go: Trent is organizing a merge community call next week at AllCoreDevs time, so 1400 UTC, which is now a new time in, I guess, Europe and North America because of daylight savings. If you are basically an infrastructure provider, tooling provider, or application that wants to see what's happening with the merge, ask your questions, and get the updates, please come next Friday. There's a link in the chat here.
D
It's
on
the
ethereum
pm
repo
for
the
agenda
and,
yes,
final.
Shout
out.
Please
update
your
notes.
If
you
are
running
a
note
on
the
proof
of
work
network
right
now,
aero
glacier
will
have
happened
before
the
next
awkward
devs.
D
Yeah
happy
thanksgiving
for
americans
and
yeah
thanks
for
anyone
in
the
us
who
made
it
on
this
call
much
appreciate
you
spending
your
holiday
weekend
here
with
us
and
yeah.
I
think
that's
it
thanks
a
lot
everyone
this
was.
This
was
really
really
good.