From YouTube: Ethereum Core Devs Meeting #122 [2021-09-17]
A
We can skip the validation of proof of work, yeah, and that is all. Lukasz, do you want to add something about it?
B
Okay, awesome. Yeah, thanks for the update. So people, please upgrade if you're running Nethermind, and in the next call or the one after we'll probably go into more detail on the actual vulnerability itself. Cool. I guess first on the agenda we have updates around the merge and interoperability between the different clients on the execution and consensus layer. Mikhail, do you want to just give a quick high-level overview of the different updates you had?
E
Yeah, sure. Thanks, Tim. There has been work on the draft edition of this Engine API, and I think it will be published pretty soon, probably today or early next week. Matt, what do you think, how much time do we need to do this?
E
Right, not 100%. Okay, cool. Apart from this Engine API stuff, there are a couple of updates that I would like to share here. First is the proposal on using the hard-coded terminal total difficulty in both clients, in the consensus client and in the execution client. Here is the link to the pull request against the beacon spec.
E
I've also made a pull request to the EIP, and you may go there and read the details. The rationale here, in general, is reducing the complexity and protecting us from various edge cases that could arise around implementation and usage of the dynamic terminal total difficulty that we previously had in the stack.
So another thing, this is a small kind of thing, is that the extra data field is brought back.
E
So the deprecation of the extra data field has been removed from the EIP, and it's been added back to the execution payload. So it's back again. And yeah, the rationale here is that it's useful in investigations of various incidents on the mainnet, as the default usage for the extra data is to just express the client version, which is pretty helpful.
E
So, what's next here? Yeah, also in general about the merge interop specs: I think we will have them and their versions settled down and published around the middle of next week, probably early next week, so stay tuned on that. Yeah, this is it with regard to updates. I also want to make one proposal here and to hear from client devs what they're thinking about it.
E
Yeah, yeah, sure. This is about message ordering between the consensus and execution clients, like making the order of Engine API calls more strict. And for the interop, the proposal is to use synchronous calls on the consensus client side, which means that if a call is made, the consensus client just waits for the response before moving forward with its flow of block processing or block production or any other stuff, or the fork choice.
E
The reason here is that it's just the easiest way to handle the message ordering stuff, and I think it's reasonable to simplify this part for the interop. We can further discuss the production-ready solution for this part of the protocol and not focus on that just during the interop; I think it's not super important for interop. That's it.
E
And the consensus client will synchronously make the Engine API calls. So it will send the request and wait for the response, instead of sending one request, then getting back to the usual block processing, and sending another request without waiting for the previous one to get processed, which could lead to a mess around the message ordering and some inconsistencies.
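A minimal sketch of what "synchronous" means in the proposal above, assuming a generic JSON-RPC shape; the method name and transport here are illustrative stand-ins, not the published Engine API spec:

```python
# Hypothetical sketch of the "synchronous calls" proposal: the consensus
# client issues each Engine API request and blocks on its response before
# sending the next one, so message order is preserved by construction.
import itertools

class SyncEngineClient:
    def __init__(self, transport):
        self.transport = transport            # anything with send(dict) -> dict
        self.ids = itertools.count(1)

    def call(self, method, params):
        request = {"jsonrpc": "2.0", "id": next(self.ids),
                   "method": method, "params": params}
        # Blocking round trip: no second request is in flight until this
        # response has come back and been processed.
        return self.transport.send(request)

class FakeTransport:
    """Stand-in execution client that records the order requests arrive in."""
    def __init__(self):
        self.seen = []

    def send(self, request):
        self.seen.append(request["method"])
        return {"id": request["id"], "result": "VALID"}

engine = SyncEngineClient(FakeTransport())
engine.call("engine_executePayload", [{"blockHash": "0xparent"}])
engine.call("engine_executePayload", [{"blockHash": "0xchild"}])
# The child payload can never arrive before its parent, because the second
# call is only made after the first response was handled.
```

The trade-off, as discussed on the call, is throughput for simplicity: a production design could pipeline requests, but then needs an explicit ordering mechanism.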
E
That is, if we don't have a strict mechanism specified to preserve the order of messages. And that's just a workaround for the interop, just a temporary solution, to avoid getting into the discussion, specification, and implementation of any message consistency mechanism that is more sophisticated than just synchronous calls.
D
Yeah, so we have this kind of asynchronicity on the, let's say, transport layer, right, like you're saying. But on the protocol level, preparePayload and getPayload give us some kind of asynchronicity, and the executePayload, and then consensusValidated and, I don't remember the last one, also the validated message, would be giving us asynchronicity on the protocol level, but not on the transport layer level. Right?
E
It depends on the transport, but yeah, yeah, right. So we have this asynchronicity and we want to have it, but it implies that we need to deal with the order of messages, and it needs to be preserved. Like, if you're receiving the message about some payload, some child payload, and the parent payload hasn't yet been processed, that could be a problem, yeah. This is just an example.
G
Oh, I said, I agree that it makes sense as an iterative step: get it right in a synchronous method and then layer asynchrony in where it makes sense.
B
Okay, and I guess, yeah, I just had one question to make sure it's clear. So the decision for the hard-coded terminal total difficulty: it seemed like there was mostly consensus on that on this call, but I'm curious if anyone feels otherwise. Because, as I understand it, the trade-off is that you then just need to put out an extra release, potentially, which has that terminal total difficulty hard-coded. Is that right?
E
We need to be definitely sure that the merge fork on the beacon chain happens before this total difficulty hits on the mainnet. The other option is to have a couple of releases.
E
One is to ship this merge hard fork and wait for the hard fork, and then another one that just releases the clients with this terminal total difficulty hard-coded. That's kind of the two options here. And in this second option, actually, the first release potentially doesn't affect the execution clients, right? This is yet to be figured out, but it might work this way.
E
So we would have only one release, as we used to have with the dynamic total difficulty, for the execution clients. So it even reduces the complexity at all... I mean, removing this one extra release from the execution client side.
G
The risk in doing one release is that you have to forecast total difficulty over a longer time period, which could be subject to attack or just high variance. I think I personally would take the hit and just do one release, because there is also a manual override specified in the event of an attack, or, say, total difficulty dropping lower and lower, or the per-block difficulty it's tracking dropping lower and lower, so that it takes a really long time.
G
We have discussed that. Is it actually practical, though?
E
And won't they need, like, a greater, a bigger hashing power than is currently mining?
G
There's two sequences of events: the beacon chain hard forks to actually, you know, update the data structures to allow for the payload, and that's empty, and then, after that, the terminal total difficulty is hit and, you know, the execution layer payload is inserted into the beacon chain. And so the risk in an acceleration would be if they could hit terminal total difficulty prior to the actual beacon chain fork. Because then, I think, once the fork happened, the transition would happen immediately, and happen on a past block.
B
But yes, I think, Micah, you're correct that the beacon chain upgrade is triggered by, I assume, epoch or slot number, and yeah, it's not triggered by the total difficulty itself.
K
I see. So the real source of all these problems, then, is the fact that the beacon chain and execution chain are forking on different metrics, and no matter what those metrics are, there's a possibility that they will diverge and one will happen before or after the other, and we need them...
G
Yeah, I mean, that's Mikhail's argument for two releases. I wouldn't want to do two releases in, like, a two-week timespan, but if we did time it such that the beacon chain fork were months before, I think that's much more palatable, really.
E
This is to my understanding at the current moment in time; I haven't thought much about this scenario.
K
Is there any reason we can't do the consensus clients' update, or fork, or whatever, long in advance, like, let's say, next week? Just throwing it out there. Is there some reason that we need to do it somewhat near the actual proof-of-stake switch, or can we do it anytime prior?
K
That's one, just to make sure I understand here: when this consensus client goes out, that is the start of the merge, but there is not any sort of time constraint between when this new consensus client goes out, when the execution client eventually goes out, and when the hard fork happens; like, that duration can be whatever we want. It just needs...
E
Yeah, both softwares should be tested with each other, like, with the most recent version of each other, before making these releases.
B
Oh, sorry, I was just gonna say: in the next couple months, as we're working on this, it probably makes sense to maybe default to hard-coding the total difficulty, because it'll be mostly on devnets and whatnot, and as we get further down with the spec, we can start actually thinking about how to release this on mainnet.
K
So yeah, I agree that we shouldn't hold up devnets for this, but at the same time I think we do want to test the production code paths as soon and as early as possible. Like, I don't want to wait until the last minute to change the way we do our releases, and then have it, you know, only run on Goerli or whatever we wanted.
J
Just one thing I don't understand: why are we talking about, like, one month to change a single number in the execution client? It feels to me, yeah, I agree, cutting a new release is something difficult, but not if the change is literally just a single number; you know exactly what you're going to release, no?
B
Yeah, it's not the time to do the release; obviously that can be done in, you know, a day or two. It's getting everybody to upgrade to the release, to, you know, read the blog post. I do think, if we wanted to go the two-release route, we can get around that, where we tell people, like, you know: the first release is out on date X.
B
The second release is out on date Y, so they know that they should be expecting to release. But there's just the risk that, yeah, every time we do releases, people don't upgrade and they don't get the memo. And if there's two of them, there's just the risk that they've updated to the first one and they're not aware that they need to upgrade again. But I don't think it's a technical barrier; it's a communication one.
B
I mean, just to give a recent example: Geth basically did that a few weeks ago, and two major mining pools have not upgraded, right? And, you know, again, it's not like an impossible problem, but it is something that, well.
G
In this case, look, I agree, Dankrad, that it might be the most palatable solution, and you can orchestrate and make sure to communicate well, but it's definitely a consideration. And I also think that we don't necessarily need to decide right now. We can spend a little bit of time over the next few days to think through whether the dynamic setting of terminal total difficulty can actually be done in a tractable way. There was...
G
An issue emerged that kind of highlighted that there might be a number of corner cases here. So we can think through it and try to make sure, by mid next week, whether we definitely want to scrap it, and then, if so, we're kind of working with the static value, which I think also solves a couple of other problems.
G
Primarily, and Dankrad brought this up on All Core Devs two weeks ago: if a user naively runs an execution engine and doesn't connect the beacon node, and it was relying on being told the terminal total difficulty from the beacon node, and it never connected to one, then it would literally just follow the proof-of-work chain forever. And not following the transition would not show up as a failure to them, even if they wanted to be following the transition.
G
Whereas if there was a terminal total difficulty released in the execution engine, then they would see essentially a failure at the point of the transition and not see any new blocks. That would be essentially an alert that something's wrong, and then they'd figure out that they need to run their beacon node. That's the primary thing it solves, other than some of the corner cases with sync that we identified a couple days ago.
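The behavior described above, where an execution client with a hard-coded TTD simply stops at the terminal block, comes down to a check along these lines. This is a rough sketch with a placeholder constant, not any real value; the authoritative definition lives in the merge specs:

```python
# Sketch of the terminal proof-of-work block check implied by a hard-coded
# terminal total difficulty. The constant is a placeholder, not a real
# mainnet value.
TERMINAL_TOTAL_DIFFICULTY = 1_000_000  # placeholder

def is_terminal_pow_block(block_total_difficulty: int,
                          parent_total_difficulty: int) -> bool:
    # The terminal block is the first one to cross the threshold: its own
    # total difficulty is at or above TTD while its parent's is still below.
    return (block_total_difficulty >= TERMINAL_TOTAL_DIFFICULTY
            and parent_total_difficulty < TERMINAL_TOTAL_DIFFICULTY)
```

With a check like this compiled in, a client that never hears from a beacon node refuses to extend the chain past the terminal block, so the failure becomes visible as "no new blocks" instead of silently following proof of work forever.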
E
There are some other details and implications of this dynamic total difficulty stuff; they are just exposed in the issue. I mean, they are described there in detail. You can drop the link.
B
Yeah, Mikhail already linked it in the chat. So this is PR 2605 on the consensus specs, and that links to another issue as part of its rationale. Yeah, oh, and then there's also PR 2603. Oh, I guess, anything else on this specific topic?
N
I opened this issue on the consensus specs repository about transaction typing. So the API and the execution layer don't change, but within the consensus spec there's something we can change to be more compatible in the future with new transaction types. So, if you're interested in typing, just have a look.
B
Cool, and that's issue 2608 in the consensus specs. Okay, anyone else have an update?
O
Not much to say for Besu: we have the latest specs stubbed out; for the moment we're working on the implementation, but we're not ready. Okay.
A
And we are not ready either in Nethermind. We started working on the interfaces, but I think we have good progress.
E
Yeah, one other thing, I just wanted to repeat: stay tuned. We will publish the spec version of this Engine API; it'll be different to the design doc in some stuff, some details.
B
Sweet. Yeah, we'll share that in AllCoreDevs and maybe in the announcement channels as well on the R&D Discord for people to follow next week. Anything else on the merge before we move on?
B
Okay, if not, we had a couple of EIPs that people wanted to discuss. The first one was 37... oh, sorry, Mikhail, did you say something?
E
I was, like, yeah, this is on the agenda. I'm not sure; I was just looking to get any updates there are on the specs, talk, or documentation on this draft design proposal.
I
Yeah, so he had some dentistry going on, and so he wasn't really able to work this week; he's going to finish it next week.
E
Yeah, and my question is: are there plans by the client developers to implement the sync for the interop? And yeah, do you have capacity for that, like, who's planning...
L
To do this? From Geth's perspective, Peter is, as far as I know, implementing that, but in order to do it he is, yeah, doing some major refactorings that are needed before you can really get started on the new stuff.
I
Yeah, so we need to implement some other stuff before we can start with it, but Peter is doing that right now. I'm not sure if we can actually make it until the merge, until the interop, but we should until the merge, probably. But until the interop we should have at least some beta version that we can try.
B
And I guess one thing that's just worth mentioning, I saw Peter say this week, is that, as part of these refactors within Geth to support merge sync, fast sync is considered being dropped. So if you're a Geth user running fast sync, you should probably switch to snap sync going forward.
L
So, we don't have a concrete PR; it's more of something that has been kicked around a bit, and, yeah, soon.
D
Yeah, it's understandable that it's not scaling extremely well, so we are planning either to implement snap sync or to move to a more Erigon-style approach. I'm more in favor of implementing snap sync, but with everything going on with the merge, etc., we haven't managed to do that yet.
I
And just for the people listening in: snap sync is way faster than fast sync. It is way faster and takes way less bandwidth, and it's superior in all aspects, except in name.
B
Cool, okay. So, yeah, next up: lightclient wanted to give an update on EIP-3756, the gas limit cap. Yeah, that's kind of...
P
Yeah, we presented this at the last All Core Devs and didn't have a lot of time to discuss it, so just to briefly reiterate: the gas limit cap is motivated by putting an upper bound on what the gas limit can be set to by block proposers.
P
In the benign case, high gas limits can increase the size of the state and history faster than we can sustain; in the malicious case, it amplifies attacks on clients. In the past this hasn't really been a significant issue, because miners have worked pretty diligently with All Core Devs to set gas limits, but they've shown in the last few months that they are potentially willing to be incentivized to increase limits beyond what may be considered safe. So this is an EIP proposing to set an upper bound on that limit.
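A hedged sketch of where such a cap would sit in block validation: today's rule only bounds the per-block change relative to the parent, and the EIP would add an absolute ceiling on top. The 30 million constant is the figure discussed on this call, not a finalized value:

```python
# Sketch of gas-limit validation with an absolute cap layered on top of the
# existing elasticity rule: the limit may move by less than parent // 1024
# per block, and must stay above the protocol minimum of 5000.
GAS_LIMIT_CAP = 30_000_000  # value discussed on the call, not finalized
MIN_GAS_LIMIT = 5_000

def gas_limit_valid(parent_gas_limit: int, gas_limit: int) -> bool:
    max_delta = parent_gas_limit // 1024
    return (abs(gas_limit - parent_gas_limit) < max_delta
            and gas_limit >= MIN_GAS_LIMIT
            and gas_limit <= GAS_LIMIT_CAP)   # the proposed new upper bound
```

Without the last condition, proposers can ratchet the limit upward indefinitely, roughly 0.1% per block, which is the attack surface the EIP closes.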
K
I can give some of the cons. The primary one is that it means that All Core Devs will have to actually come to consensus on the gas limit upper bound. Previously we have kind of been able to shrug off having to make that decision; if we make this change, we will now have to officially make that decision.
K
As a group. And that then leads to needing to decide what an acceptable rate of state growth is, and there is a huge amount of diversity amongst client devs on what is reasonable in terms of state growth; there's also, you know, disagreement on just execution time per block, on what's reasonable there. So I think the biggest con here is that we will be taking something that currently we can kind of ignore, or at least have plausible deniability about, and we're taking...
K
Yeah, so when we want to increase it: if this EIP passes, we presumably would have some default; I'm guessing 30 million is a reasonable default. If we wanted to then change it later to something bigger, say 45 million or whatever, then we would need to have an EIP, then wait for the next hard fork and get it included in a hard fork.
L
So, I mean, there's definitely... oh, yeah, I didn't mean to interrupt, but my question is basically: for this EIP, is the motivation that we think that, before the merge, miners are going to, like, not have the long view on Ethereum, and they'll just bloat the state, YOLO, in the last time they have? Or is the motivation longer-term than that?
L
Do you think that, even in the future, after the merge, this is going to be needed or wanted or desirable?
G
I think that if those tokenomics work, and the hype around collecting tokens is around when staking's around, stakers could easily be motivated as well to abuse the limit. There's maybe a bit of counteraction there, because they have a longer-term view, but I think a lot of them have machines that can easily run 40 million instead of 30 million, and they would do so at the cost of others. But I don't know; I can't speak for all the stakers, but I think the incentives are still there.
K
Most people I talk to who are not core devs, and even a lot of core devs, have a belief that, you know, state growth is not a problem yet. There are definitely core devs, me very much included, who believe state growth is a major problem. And so, whether the group is stakers, or it might be someone else, token voting, whatever...
K
Even if they are altruistic, it doesn't mean they're well informed. And so, if we do decide to go that route, we would need to make sure we have some mechanism of ensuring those people are in fact well informed, and of educating them, which is again a hard problem. I think all the options here are hard problems.
K
To answer Martin's question, though: I do think that whatever rule we apply to miners, we should probably similarly apply to stakers. I think their incentive structures are similar. There is a slightly longer time horizon, as Danny mentioned, but it's not that much longer, relative to the state growth problem.
H
How would we use it, right? Because, say we set the upper limit at 30 million: that would actually basically be a recommendation of going with 30 million, unless we have technical issues where we temporarily reduce it or something. One alternative could also be to even, like, today set it at, say, 40 million or 50 million, something like that, where we don't recommend people actually go that far.
H
But it's like: this is the limit where we still consider the network technically safe, or something, right? And in that circumstance, where we usually expect not to be at the limit, this kind of change would be much less invasive, while still giving at least some upper bound, so there's no limitless potential for catastrophe in any sense.
K
I think, as a kind of Schelling point, 30 million is a reasonable start, but, just to give everybody fair warning, I will argue to lower it from that. I personally think that 30 million is too high, for a number of reasons.
P
On EGL movement: I haven't followed the voting as much, but I've looked at the mining pools that are still using it. It looks like two smaller pools are still sweeping rewards, Flexpool and BeePool. F2Pool was doing it in the beginning, and it looks like they didn't sweep rewards for a pretty long time, but I saw that they did sweep rewards a few days ago. So it's not clear exactly what their involvement still is, but there are two small mining pools that are actively sweeping rewards.
B
And another thing that's worth mentioning, I guess, is that the gas limit is still 30 million, right? Yeah, so it's not currently being pushed upwards, but obviously that could change.
F
Yeah, it's me, Andrew. Well, I think our position is the same as before: we are not in favor of this EIP. But if the majority decides that we are going through with this EIP, then we are not going to fight it, and we are not going to die on that hill.
B
Right, and, you know, we also need to upgrade in December because of the difficulty bomb, right? Yeah. Marius, you have your hand up.
I
Yeah, I would oppose moving this into Shanghai or, like, the next fork, just because I think we should really keep the fork as clean as possible, and if we were to include an EIP, it would have to be a hard consensus-critical EIP. And I don't think that this EIP is.
I
It's not security-critical enough to do that. So I'm in favor of the EIP, but I think I would want to have the fork in November be as small as possible, and if we were to add something, it would need to be security-critical at this point, not something that may be introduced in the future or something, maybe.
B
I guess, on that note, it might be helpful to understand from client devs how much work it would be to activate this as a soft fork. So say that, you know, two weeks from now we have a call and the block gas limit is up to, you know, 40 million, either because of miners moving it up or EGL moving it up or whatever else...
B
And it's felt that that's unsafe. Given this is like a minimal change, how quickly can this be deployed? Is it a matter of, like, weeks?
L
So, how do you mean? Deploying it doesn't really work, because the soft fork is a minority fork, and it becomes...
G
Bad, yeah. You still need to coordinate and get everyone to upgrade. Especially if you picked a hundred million and, you know, we were nowhere near it, maybe you could argue doing a soft fork, and it just takes time for it to ripple out. But yeah, you still need to coordinate; it's a hard fork, in my opinion.
B
And I guess, so, given that, and given that basically, you know, we need to have a fork, we have the fork in November, then we have the merge: how much lead time would we need to add this to the November fork if we wanted to, right? Like, is this something, basically...
B
Yeah, because in that case, can we, basically, kind of following off Marius's comments around, you know, keeping the merge as lean as possible: is this something that we can have a spec for that's kind of ready to implement, and, if we see an issue, we can combine it with either the November fork or the merge? Yeah.
L
Yeah, we can have it in the back pocket as an "if we have to," and not otherwise schedule it as part of Shanghai. That's...
G
Possible. I do want to mention, and I apologize if this is actually in the spec, but I think it was implied to me that it wasn't: because of the soft-fork issue, if the gas limit on mainnet is already above the cap that we'd want to select, you would actually need some sort of hard fork to discreetly change it back down to that point. Is that already covered in the EIP? I apologize if it's not. That's the main technical question. Okay, sure, so that we could...
B
So, as I guess, yeah, as I understand it, there is, you know, wanting to keep Shanghai lean; some core devs obviously being opposed to it; and needing to come up with an actual value for it, which I guess we could default to 30 million. But yeah, those seem to be, like, the three...
P
My view is there's never really a good time, exactly, to do some of these things, and if we don't do it in Shanghai (it's already a simple change, and there's not anything in Shanghai already), we're not gonna do it during the merge, and likely not going to do it after the merge, because there's a lot of other things to do. And so then it's going to be sitting around for a while, and the attack vector isn't going away.
B
Right, yeah. I guess one thing, I don't know; like, I know there are also other EIPs on the agenda today that people wanted to discuss for Shanghai, and yeah, it does kind of feel like we had this consensus earlier to only focus on the merge, at the very least until we have devnets up for the merge and, you know, we're far enough along in the implementations, and then kind of make decisions around, you know, what we do in November based on how much bandwidth we have. Yeah.
B
So I think that the big concern I have is that it kind of sets a precedent where we, you know, move away from just being focused on the merge to starting to work on this, and potentially other things, when we don't yet have the merge devnets set up, and we don't have a good feel for how much work is left to do on that. Yeah, I don't know; other client teams or developers, what are your thoughts?
B
And I do think, from what I'm getting, the exception to that would be: if we do see, you know, the gas limit being raised to levels we consider unsafe on mainnet, that would basically increase the urgency that we need this with, and having it already specced out would be valuable. But assuming the kind of urgency for it stays the same, it seems like it...
B
It just might be better to focus on the merge exclusively, at least until we're farther along with the implementation of it.
K
It feels like, if we want to go down that path, we then need to come to some sort of agreement on what a dangerous level is. I believe there are some core devs who believe 30 million is a dangerous level already, due to some known DoS vectors against certain clients. Like, is 31 million safe? 39? 40? 45? 70? If we're going to say that at some range of increase into dangerous levels we will do an emergency fork...
L
Yeah, I don't think we necessarily need that. If our stance is "if this gives us problems, then we might have to roll out with it," I don't think we need to try and extract what is...
L
Oh, sorry. Well, from a denial-of-service perspective, I don't think we need to pre-define what would be a problem, or what a problem would look like, because we would notice if we were getting hit by denial-of-service attacks. But from a state growth perspective, if that is the concern, then I'm not sure exactly how to define what are dangerous levels of state growth and what are okay levels of state growth.
H
Yeah, I'd personally be in favor of at least stating clearly, or trying to come to an agreement clearly, that we would consider participation in EGL to raise the gas limit kind of illegitimate, even if we wouldn't immediately act, right? Because there's definitely some room above 30 million where it wouldn't immediately become problematic in any sense.
H
But just because this is like a slippery-slope situation, where maybe 32 million is still okay for state growth, and maybe 33 is, but maybe 34 isn't, or something, right? And just because, if we signal ambivalence, I think a lot of miners will be much more interested in the future in kind of being involved there. And I think at least basically stating very clearly that we would very much expect them to not participate and not raise it above 30 million...
H
And if they do, we basically will likely do something about it. Even if, like, there's probably some room where, if it's only a little bit, we might just not bother to look into it or something. But I feel like, if we basically signal that there's some room before we act, then that does implicitly just make it an endorsement, I think, in a sense. So I think that's the interesting part.
B
Okay, of course. So, moving on: Alex had two EIPs that he wanted to discuss. One is EIP-3855, which is the PUSH0 instruction, and then the other one is EIP-3860, limit and meter initcode. Alex, do you want to give a quick overview of them?
Q
Yeah, yeah, I will, given the word. I'll cover PUSH0, and Pawel will talk about the initcode one. So the PUSH0: it's a very simple one.
Q
It just introduces a new instruction which pushes the constant zero onto the stack, and this is specified in a way that it can actually be implemented in the same place where the rest of the pushes are implemented. Because most of the EVMs are implemented such that they just have the starting opcode of the push range and the current opcode, and they just subtract them, so they know how many bytes to read; in the case of PUSH0, they don't need to read anything.
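The implementation point above can be shown with a toy push handler. A sketch, assuming the 0x5f opcode assignment from the EIP, not any particular client's interpreter:

```python
# Toy push handler showing why PUSH0 fits the existing code path: the number
# of immediate bytes is derived from the opcode, and for PUSH0 it is zero,
# so int.from_bytes(b"", "big") naturally yields 0.
PUSH0 = 0x5F   # opcode proposed by EIP-3855
PUSH1 = 0x60

def exec_push(code: bytes, pc: int, stack: list) -> int:
    n = code[pc] - PUSH0                       # PUSH0 -> 0 bytes ... PUSH32 -> 32
    stack.append(int.from_bytes(code[pc + 1 : pc + 1 + n], "big"))
    return pc + 1 + n                          # program counter after the push

stack = []
exec_push(bytes([PUSH1, 0x2A]), 0, stack)      # PUSH1 0x2a: reads one immediate byte
exec_push(bytes([PUSH0]), 0, stack)            # PUSH0: reads nothing, pushes 0
```

The same subtraction that tells PUSH1..PUSH32 how many bytes to read simply produces zero for PUSH0, which is the point being made here.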
Q
So that's on the technical side. And then, regarding the motivation: I think the EIP does a pretty good job explaining the motivation, and I will try to replicate it here, but probably I won't be as good as what we put into writing. But basically, there are a lot of cases where someone needs to push a zero onto the stack. A good example is calls: since returndata was introduced quite a few years ago...
Q
Many of the calls would just end up with at least two zeros at the very end, which would be the pointer and size for the return memory, because people are not using that anymore; they'd rather use the explicit RETURNDATACOPY opcode to retrieve the data. So that's one good example of where the zeros are used, but there are many more cases. And the problem we have seen...
Q
So we have been motivated by looking at actual bytecode on the network, as well as talking to the Solidity team and some challenges they face in this regard. So, basically, pushing a zero can be done in many different ways.
Q
One way is just with the PUSH instruction, which means it is two bytes and it costs three gas, but one can also use a DUP in case a zero was already on the stack; that also costs three gas. These are the clean and nice ways to do it.
Q
But people are always looking to save gas, so there are certain other instructions which, in certain cases, return zero. One example is RETURNDATASIZE: in case there has been no return data, it would be zero.
Q
CALLDATASIZE is the same, and plenty of others are listed in the EIP. So in many cases contracts and people start to use these just in order to save gas, because some of these only cost two gas at run time and are only one byte, so they're cheaper to deploy. The one problem we have seen with this is that it kind of puts us in a position where certain changes would be harder to make. One example was the transaction packages, which would change the behavior of RETURNDATASIZE.
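The trade-off Alex walks through can be tabulated. The figures below are the ones stated in the discussion (PUSH0's two-gas cost being the EIP-3855 proposal); this is just an illustrative summary, not an exhaustive list of tricks:

```python
# Zero-pushing options discussed, with deployed size in bytes and runtime gas.
# The RETURNDATASIZE trick only yields zero when no return data exists yet,
# which is exactly the fragility mentioned above.
encodings = {
    "PUSH1 0x00":     {"bytes": 2, "gas": 3},  # the straightforward push
    "DUP":            {"bytes": 1, "gas": 3},  # reuse a zero already on the stack
    "RETURNDATASIZE": {"bytes": 1, "gas": 2},  # context-dependent zero
    "PUSH0":          {"bytes": 1, "gas": 2},  # proposed by EIP-3855
}

cheapest = min(encodings, key=lambda k: (encodings[k]["gas"], encodings[k]["bytes"]))
```

PUSH0 matches the cheapest tricks on both axes without depending on execution context, which is the point of the proposal.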
Q
So that's one reason why this optimization is a bad direction for many people to go in, and another reason is what's happening in Solidity.
Q
The team would like the code generator to have a nice way to push a known value in the cheapest possible way, and they don't really want to go to the extent of trying to use these other instructions.
Q
Lastly, we also did an analysis of how much gas has been wasted on this. We only looked at all the bytecode deployed and how many PUSH1 zero instructions it had (which is 6000 in hex), and according to that it seems like, I think, 60 billion gas; let me just check it quickly.
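The measurement Alex describes can be sketched roughly as below. This is a simplified scan, not the actual analysis tooling: a naive search for the byte pair 0x60 0x00 would over-count, since push immediates may themselves contain those bytes, so the scan has to skip push data.

```python
def count_push1_zero(code: bytes) -> int:
    """Count PUSH1 0x00 (bytes 0x60 0x00) occurrences, skipping push immediates."""
    count, pc = 0, 0
    while pc < len(code):
        op = code[pc]
        if 0x60 <= op <= 0x7F:                     # PUSH1..PUSH32
            if op == 0x60 and code[pc + 1 : pc + 2] == b"\x00":
                count += 1
            pc += 1 + (op - 0x5F)                  # skip the immediate bytes
        else:
            pc += 1
    return count

# PUSH1 0x00, PUSH2 0x6000 (immediate data, must not be counted), PUSH1 0x00
sample = bytes([0x60, 0x00, 0x61, 0x60, 0x00, 0x60, 0x00])
```

Multiplying such a count by the three-gas cost of each PUSH1 across deployed contracts gives the kind of aggregate figure quoted.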
B
That's a lot of gas. Does anyone have thoughts or comments on this?
L
L
B
P
B
Right, I think this is, you know, kind of the higher-level conversation where we have these things like, say, the merge, which is obviously very important, and after the merge there are going to be other pretty important things to do in the protocol, and it does feel like:
B
We have a pretty strong consensus about not doing any changes before the merge, but yeah, I think, not necessarily today, we need to figure out how we keep making these improvements to the EVM. And you know, this is not the only one: Alex has proposed EIP-3540, which was very popular, and there's been 3074, which also had a lot of community support. Yeah.
F
Yes, so from my point of view I'm in favor of this EIP. I have a suggestion: maybe we can create a tentative placeholder for a hard fork after the merge, and we can tentatively approve EIPs that can go into that next post-merge fork. What do you think?
B
Right, I think that generally makes sense. It feels like it's maybe a bit early for that, just because we're again at the very beginning of implementing the merge, but I don't know, yeah, I do agree that when we start having the merge implemented, it makes sense to look at what's after, and we already have a pretty long list of proposals.
B
You know, I mentioned a couple, and I've purposefully left them all open in the ethereum/pm repo, so we have a pretty long list of ones that basically all asked for Shanghai. And I think we have consensus that, assuming we do the fork in November to move back the difficulty bomb, that fork is called Shanghai.
B
It seems like we don't want to add any other features to that, but we have a lot of features that are valuable and that we want to do after the merge. Yeah, I'm just not sure what's the right time to start discussing that; it feels like it's probably a bit early, but we probably don't want to wait super long either.
B
I guess, Alex, to give a quick summary because you dropped off: it seems like people are, you know, weakly in favor of the proposal.
B
It seems like we don't want to do it before the merge, and it seems like there's also a desire to start thinking more deeply about what we want to do after the merge, given this and all the other proposals that are pending.
B
Okay, the next one then was EIP-3860.
R
Hi, yeah, Pawel here; I should take care of this one. So this EIP mostly adds some limits to the initcode size and an additional cost to it. Just as quick background: before the EVM can execute any code, it needs to do a JUMPDEST analysis of the code, and the cost of this analysis is not directly reflected anywhere. It is partly limited by two factors.
R
So the analysis of the deployed code is at least capped by the bytecode size limit, but for initcode there's no limit, so this is unbounded and the sizes can be in megabytes in practice, I mean in attack scenarios. Previously there was an earlier version of this concept, EIP-2677, which only introduced the size limit for initcode, and at some point we realized that CREATE2 already does something similar.
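The JUMPDEST analysis Pawel mentions is, in essence, a linear scan that marks valid jump destinations while skipping push immediates; a minimal sketch follows (illustrative only, not any particular client's implementation):

```python
def jumpdest_analysis(code: bytes) -> set:
    """Return the set of valid JUMPDEST positions, skipping push immediate data.

    The scan is linear in code size, which is why unbounded initcode makes
    its unmetered cost a concern.
    """
    valid = set()
    pc = 0
    while pc < len(code):
        op = code[pc]
        if op == 0x5B:                   # JUMPDEST
            valid.add(pc)
        if 0x60 <= op <= 0x7F:           # PUSH1..PUSH32: skip immediate bytes
            pc += op - 0x5F
        pc += 1
    return valid
```

A 0x5b byte inside push data is not a valid destination, which is why the analysis must walk the code rather than just search for the byte.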
R
In CREATE2 there is precedent for charging additional gas for the initcode size, which is related to the requirement of hashing the initcode, and we want to include a similar mechanism for initcode in general. So this EIP proposes charging 2 gas per 32-byte word of initcode. For reference, CREATE2 already charges six gas per word for that, so that would be increased to eight, and for regular CREATE the cost would be two. The costs were taken from the performance of the current Geth implementation of the JUMPDEST analysis.
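The proposed charge can be expressed as a small calculation, using the numbers given above (2 gas per 32-byte word for CREATE; CREATE2's existing 6 gas per word for hashing rises to 8 in total). A sketch, not the normative EIP-3860 spec text:

```python
def eip3860_initcode_cost(initcode_len: int, is_create2: bool) -> int:
    """Per-word initcode charge as described: EIP-3860 adds 2 gas per 32-byte
    word; for CREATE2 this comes on top of the existing 6 gas per word for
    hashing, giving 8 per word in total."""
    words = (initcode_len + 31) // 32          # ceil(len / 32)
    per_word = 8 if is_create2 else 2
    return words * per_word
```

This charge comes on top of the base creation cost and, per the proposal, is paired with a hard cap on initcode size.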
R
In our opinion it's quite low, so yeah, that's mostly the description of the EIP.
B
Thanks. Oh, Martin?
L
Oh yes, so yeah, I'm one of the co-authors, so obviously in favor. But what I wanted to add was that I've basically been wanting to have something like this for a number of years; that's why I wrote the limitation-of-initcode EIP earlier. That one is a bit of a hack, and I think it's not great, and this one is more rigorous.
B
Thanks for sharing. I'm curious, does anyone else have thoughts on this?
R
So yeah, first I want to confirm: I don't know the other thing mentioned, but I think it should work more or less the same, so there might be some coordination required. The EIP also applies the cap to the transaction, if I recall correctly, and to internal calls. In terms of when it should happen, I actually don't have strong opinions about that; I don't think it's my job to decide this, so I will leave it to others.
I
Yeah, probably a bit unfair because I'm also on the Geth team, but I would also like to have this in as soon as possible. I know that some clients are way worse in their JUMPDEST analysis, so it might make sense to increase the gas cost even more, but I think the limitation of initcode size is a no-brainer for me. I think that should go in either way, and the gas cost increase too.
B
So I guess it does seem like, I don't know, the Geth team, given Martin is an author, and the authors are the most familiar with this. Does it make sense to just give everybody two weeks to familiarize themselves with this and see what all the client teams' thoughts are about including this alongside the merge? I did like the idea of having a precedent that we don't accept something into an upgrade on the same call it's proposed, just to make sure people have time.
Q
I was wondering why this should go in with the merge and maybe not before, so like in Shanghai, just to reduce the scope of the merge.
B
Oh, good point. I guess one reason I can think of, and please let me know if this is wrong, is that if we include this in the merge, we're going to have testnets for the merge regardless. If Shanghai is just a difficulty bomb pushback, we don't need to deploy that across all the testnets. So that's one reason I can think of, but yeah, there might be others.
B
And yeah, I guess there's a comment by lightclient about adding 3860 being obviously bigger than the two other EIPs we discussed, and it seems, I guess from what I'm getting, that the Geth team feels there's this kind of security risk, and proto mentioned there might just be a better interaction with how the beacon chain is already set up. So that seems to be the rationale.
P
I guess my understanding of the pushback against the other two simpler EIPs is that, if we wanted to have Shanghai not have any features, the initial energy required to have other features in the fork was great enough to not include these relatively simple EIPs. But if people want to do 3860, then there has to be effort put into building out the tests.
P
I am personally fairly opposed to including anything in the merge that doesn't absolutely need to go in, but I'm specifically curious about this question.
B
Got it, sorry, so I got that wrong. I do think in that case, yeah, your argument is probably good: if we are going to set up infrastructure for Shanghai with this, then it basically becomes a proper feature fork, and we need testing and we need to deploy across the testnets and whatnot, and yeah, that's a much bigger overhead.
B
Because none of the testnets have difficulty bombs, we would add reference tests, obviously, but I don't think we would spin up a testnet.
B
The big difference is: if we just push back the difficulty bomb, it's something we can release only for mainnet and literally not have an upgrade on testnets. If we do add anything else, it needs to be on the testnets, and that also implies that the work has to be done much sooner, because, say we wanted to fork mainnet mid-November with the difficulty bomb, that means that ideally we fork the testnets at least a month before.
B
So that's like mid-October, and that basically means you would want releases out for clients in like two weeks, early October, and that just seems kind of unrealistic time-wise, yeah.
B
So I'm not sure. I don't know, I struggle to see how we would put anything in Shanghai at this point which doesn't have to go through a full testnet deployment. One thing I can do in the next two weeks is just double-check the difficulty bomb calculations and whatnot, you know, to see if anything has changed, but as I understand it, it's supposed to go off early December, and I think no one wanted to fork around the holidays.
P
Are people strongly in favor of trying to put 3860 in Shanghai?
D
So we in Nethermind recently optimized it a bit, looking at the Geth code, so I would say that in terms of vulnerabilities we are more or less at the same level. Of course, there's maybe some runtime overhead or something, but generally I agree that this probably needs to be done; I just don't feel that strongly about it.
B
I think, and we're already at time today, I think we would need to be like 100% in agreement on the next call, and that's the absolute latest we could do it. I can look into what it would look like with the difficulty bomb and whatnot, but yeah, my understanding is that we can't really go past the next call to make a decision on that, because then it'll be mid-October, and that just seems way too late. I think that's too early given the precedent we set. Okay, I think every client team that wants it, that's in favor of it, should implement it by next week, and then maybe have some discussions over a call or the AllCoreDevs channel or something. I don't really want to have another off-schedule meeting next week, because I think it's also bad if we have them every week.
B
Yeah, and given the fact that these proposals are also kind of new, I'd be less inclined to have an off-schedule meeting where we just put them in. And you know, yeah, I do think people watch these calls and they read the notes.
B
The notes take a couple of days to come out, so yeah, I think there's value in discussing it on the Discord over the next two weeks, and in teams who think this is really important having at least a preliminary implementation, and yeah, we can make a call on the next AllCoreDevs. Okay, oh sorry, yeah.
B
Okay, yeah, we're already at time. I guess a couple of announcements before we wrap up: Geth put out a postmortem on the split that happened when they basically announced that there was a vulnerability that they'd patched. This is linked in the AllCoreDevs agenda; you can read it there. There are also two, or I guess three, other EIPs.
B
We didn't have time to discuss those, and they're not urgent; if people want to have a look async, they're also linked in the agenda. Yeah, that's pretty much it, so thanks, everyone.