From YouTube: Ethereum Core Devs Meeting #62 [2019-05-24]
A: Hello and welcome to Ethereum Core Devs meeting 62; let's jump right into it. We're gonna go to the summary: if you click on item 1, we're gonna review the actions required, decisions made, and suggestions. All right, so we've got action 61.1, the Ganache/spec compliance issue in go-ethereum. We were going to come back to that issue in a couple of meetings; do we have anything updated on that? If not, we'll come back to it in the next meeting. David?
A: No problem. "Review time frame for hard forks in June to refresh memories": yeah, we'll do that in the next item, when I pull up the link for the June timeframes. "Danno's gonna add nine-month-out hard fork kickoff timeframes": that one, looking here, I think is done. Danno, did that one get done? Yeah, I did.
C: I'd also propose, for the next batch, adding another checkpoint that comes two months after the kickoff of EIP proposals. What we'd do, which is kind of what we did last week and this week, is replace what the "EIPs ready" checkpoint does, to give us more time to review and work through the EIPs before we hand them off to client development. So that's something I'm proposing for the next one, but we'll see how this goes and whether we need it.
A: Okay, sounds good. The next item is the EIP 615 decision: "discussion at meeting, status is a PR will be made". EIP 615 is the EIP for subroutines and static jumps for the EVM, and I guess that would be Brooke or Christian or Paweł. It looks like that was done, but I'm not positive. Is anyone here who can speak to that?
A: Okay. Next we have Martin and Alex to confirm whether EIP 689 needs to be implemented. It's a work in progress; that was the status last time. This is "address collision of contract address causes exceptional halt", which Yoichi originally drafted. Martin or Alex, is there any update?
D: We can erase that action item for sure. I have an update on that from the testing side: it still needs to be written into tests, and perhaps the intention of the EIP was for it to be updated in the yellow paper, which hasn't happened. So, implicitly, it exists in the clients because it's already been accepted, but explicitly it isn't in the yellow paper or in the tests yet.
F: Yeah, the issue was raised: it's an objection to using shared libraries for the precompiles. I discussed that with the team, and no, we have no objection to that. One comment might be that using shared libraries kind of reduces the benefits of having multiple implementations, but we'll just leave it at that. We can go forward with this and use the shared library there.
A: That sounds good. So that's all the actions required. The decisions made: 61.1 is that there's value in hard fork deadlines. 61.2 is that the first deadline should serve as an upper bound for the maximum number of changes that might go in, with anything further going into a subsequent hard fork after Istanbul; but of course not everything that is proposed by the deadline will necessarily make it into the hard fork.
A: Okay, so I actually wasn't here last week. Can anyone elaborate on exactly what the summary of that decision was? It kind of makes sense, but in my opinion it's a little bit unclear whether, if something is in draft form, it's something we can take, or whether everything needs to be complete before it's in an accepted state.
A: Agreed, yeah, that's kind of how I read it too; I was just making sure I read it right. "The EIP 615 PR will be made": that's decision 61.4. I'm gonna open that one, subroutines... oh, that's already been addressed, and we don't have anyone to speak for that one, so we'll skip it.
A: Today we're kind of focusing on which EIPs are going into Istanbul, so we probably won't act on that suggestion today, in my opinion, unless someone feels strongly that we should. And then the Ganache spec compliance issue in go-ethereum: "it would be helpful to spin it out into separate repos" is suggestion 61.2. Does anyone from go-ethereum have any comments on that? Nothing.
A: Let's see, what is the next item? Okay, so part of the actions was to review the timeline for hard forks to refresh our memories. I think that happened last week, if the notes are correct, but let's go ahead and just go to the roadmap, item two, and see what that says. So, June 19th was the soft deadline for major... wait.
A: Sorry. May 17th was the hard deadline to accept proposals for Istanbul; that was on an off week, so we're gonna do that today. And then Friday the 19th of August... wait, not August: July 19th is the soft deadline for major client implementations, and we'll go from there after that.
A: We expected it to be in November at the time, I'm sure. Maybe we should change it to make sure it's not during Devcon, but then that would kind of wreck the rest of the timelines and stuff. Does anyone feel uncomfortable with having it so close to Devcon, where people usually have stuff going on right afterwards or right before?
A: Sounds good. Also, I guess on the bright side, if Devcon goes until the 11th and people stay over in Japan a few days, we would actually be in town for the hard fork... or the test network upgrade... no, not for the test network upgrade: some of us would potentially be in town for the mainnet upgrade, if some of us stay in Japan a little bit longer. So that might be beneficial. Okay, and then the rough plan is April 2020 for the next hard fork.
A: I'm on the wrong agenda. Luckily the first three items are the same in almost every agenda, so I was almost fine. Okay, so we do refer to the roadmap list from the roadmap link, and we have a bunch of proposed EIPs. We're gonna finalize these today as accepted or not accepted for the Istanbul hard fork. So: subroutines and static jumps for the EVM; it looks like that one is done.
E: So, I have my reasons why. I'm not a proponent of it, and I'm not against it per se; I mean, it's a good thing in the end, what they want to do. But I think the benefits that we gain are not commensurate with the overhead and the complexity of the EIP in total, because it brings a lot of complexity to the consensus engine.
E: It brings a lot of overhead, as far as I understand it, to the actual runtime execution, until we get to a point where the code is actually validated before it's deployed, so that it can be executed without any additional checks. But if I understand it correctly, in the first iteration it will actually make things slower, because we need to do that.
E: The big benefits are basically better compilers and better optimizations during compilation, and also better verification, especially adversarial verification, where you might not actually have the source code; because if you do have the source code, then you can do pretty good verification at the AST level of the source code. So to me it's very complex, very large, with no real substantial gains in the areas where I'd want them.
E: I mean, if we want the versioning, if you want deploy-time verification and versioning, then I think it would be better to have that first: let's see that it works, iron out any kinks, and then in the second hard fork have something that uses this versioning. I think it's...
A: Okay, wait: am I unmuted? Yep, I'm unmuted. Okay, so this is an interesting process question for officially confirming EIPs to go into a next hard fork: should the champion, or at least one of the EIP authors, be on the call in order for it to get approved? I don't really have a strong opinion either way on that. Does anyone have a strong opinion they'd like to suggest reasoning for?
A: Okay, so what we'll do is try to reach out to the authors after this call and then make a decision using the All Core Devs Gitter channel and the Ethereum Magicians link that's already been posted with the EIP. And then Alex, or was it Paweł, pointed out that the wiki is out of sync with the official EIP meta that has the links to what is being proposed. So can I get a link to 1679? Alex, did you have that open?
J: I think it [evmone?] came out about 30 times more efficient than [inaudible], but I think part of that is because of the complexity of the stack abstraction, and having things like a local variable store would help immensely with this, even over not having it. You could, you know, compile real languages all the way down; you just compile to EVM code, which would be quite useful.
K: So if, in general, we would like to improve the EVM over time, I think that's the right direction to go, but I kind of agree about splitting it up, and there are proposals for how to do that at the practical level. I mean, we could take the two quite small features in this 615 EIP that can be implemented up front. And yeah, I kind of agree with Martin about versioning: deploying that before we can actually verify at deploy time, I'm not sure about that.
C: For something like the code versioning, you could do the... what is it, 1706? I'm not sure from memory... the versioning that requires the version bit to be there for the new opcodes to work. So we don't need 615 in for that versioning EIP to have meaning; we could put those two together and still get a meaningful deployment of the versioning signaling inside the EVM.
E: Yeah, I agree that versioning is a pretty powerful construct, and I think it could actually stand on its own even if 615 in the end is never implemented. One more thing about versioning, though: there are two types of versioning. One is the type where a contract says, "I want to play by these new cool rules; use these new opcodes."
M: Yeah, it was also mentioned that one of the benefits, besides the verifiability, is performance. But can we actually put some numbers on that? I haven't been following the EIP, so apologies if I missed this, but do we know exactly what performance improvements could be expected from it? For example, you mentioned the C++ implementation: do we have some numbers for how a smart contract could be rewritten to perform better with the EIP rather than without?
M: The reason I'm asking is that I kind of have the feeling people are a bit wary of this because it seems large; but if we could put some numbers on it and demonstrate that it is good because of this and this and this, then maybe people would be a lot more open to actually doing it, if they can see some tangible benefits.
A: Okay, that's something we can also talk to their team about; I'm guessing they probably ran something. But just to timebox this, let's go ahead and move on to the next one; we've got a big list to go through. We'll follow up on this in the All Core Devs Gitter channel. I'm gonna write that down as a note for me to do specifically.
L: Now, I think this has to be well discussed. I don't think that in its current form, without a discussion, this should be accepted; it should be properly discussed, and maybe some limitations need to be put in. And especially, consideration should be made so that this could be a precursor to the changes in 615, the main benefit being accessing...
L: There are three options listed in the EIP. One is, as you explained, having an immediate value. Another one is requiring a PUSH up front; okay, technically it's not requiring a PUSH but rather taking the value from the stack, but you'd want to extend that with validation that it requires a PUSH up front, and then we're getting into 615 territory. So I think this should be discussed in depth off of this call, as it might take long. Okay.
M: Okay, so I'm probably not going to argue against this, because I'm not familiar with the design. Absolutely just one thing I would add: since we do have 1,024... I think, right? Our stack limit is currently 1k.
M: Honestly, I never really understood the reason for limiting access to these stack items to only the top 16 words, because, I mean, we're storing the whole thing in memory anyway; we don't have any specialized CPUs for Ethereum that would benefit from the 16 limit. So from my perspective, raising or removing this limit seems like a good extension. Even if it's not too valuable, it just seems like an arbitrarily chosen limit that doesn't make sense.
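The top-16 limit being discussed is easiest to see in a toy model (my sketch, not from the call): classic DUP1..DUP16 encode the depth in the opcode, while a generalized DUPN-style instruction, as floated for EIP-663, would take the depth as an operand.

```python
# Toy model of EVM stack access (illustrative only). Today DUP1..DUP16 and
# SWAP1..SWAP16 encode the depth in the opcode, so only the top 16 items are
# reachable; a generalized DUPN-style opcode would take the depth as an
# operand, bounded only by the 1024-item stack limit.
def dup(stack, n):
    """Classic DUPn: duplicate the n-th item from the top, 1 <= n <= 16."""
    if not 1 <= n <= 16:
        raise ValueError("DUP only reaches the top 16 stack items")
    stack.append(stack[-n])

def dupn(stack, n):
    """Hypothetical generalized DUPN: any depth up to the current stack size."""
    if not 1 <= n <= min(len(stack), 1024):
        raise ValueError("bad depth")
    stack.append(stack[-n])

stack = list(range(20))       # 20 items; the item at depth 17 is out of DUP's reach
try:
    dup(stack, 17)
except ValueError as err:
    print("DUP17 rejected:", err)
dupn(stack, 17)               # the generalized form reaches it
print(stack[-1])              # prints 3, the value that sat at depth 17
```

Since the whole stack lives in client memory anyway, as the speaker notes, the generalized form costs the implementation essentially nothing.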
L: I think pretty much every single EIP proposed needs more discussion; I'm not sure we can actually make decisions, maybe with a few exceptions, on this list of 35 where we can say it's accepted or rejected. I think the majority of them need a lot more discussion, where we need to figure out whether they can be considered for Istanbul or not.
A: So we would need another section on the meta, which would be... actually, don't we already have "proposed"? So we would basically have "accepted" and "rejected" on the meta, and then "proposed" would be the ones we can't decide today. So far, 615 is can't-decide-today, 663 is can't-decide-today, and 1057, which is ProgPoW...
A: A group of GPU miners and folks like that are gonna try to put the hardware piece back in. I think we're approving a hardware auditor in the next couple of days; we found one who doesn't have a conflict of interest, who isn't a giant ASIC manufacturer or anything like that, so that's really promising. But the reality of this is that it most likely won't get into Istanbul, because the audit will likely not be done before Istanbul.
A: If that changes between now and the next All Core Devs meeting, I'll let everyone know. Since we still have some proposed ones to come, as far as the status goes, I propose that this stay in the proposed-EIPs section as something that is still pending further information from the audit.
M: And I think the question is whether we would like to move forward from that, one way or the other; that's the one decision. If they do find something, then we can always withhold it up until the very last moment. I mean, the question is: if the audit doesn't find anything, do we want to go forward with it? Because if yes, people need to actually start implementing it, and if they already find some issues, then it's down the drain anyway.
A: This is both software and hardware. I'm only speaking to the fact that the hardware one was delayed, which also delayed the software one, because the audit package had to be signed by all parties involved, and it wasn't able to be signed because the hardware auditor dropped out.
C: I'm filling that role as champion, even though I didn't author the EIP; I'm mostly just listening to the discussion. I do have one concern that I asked Hudson to add to the audit. I'll detail it in the Gitter, but I would want the auditor's opinion on it, if we do it as written, before we go forward, because I'm kind of of the opinion that there's one thing that needs to be changed. So I'll detail it there, so as not to derail this meeting, but I don't think the EIP as written today [should go forward].
A: Love your jokes, but no: I think the likelihood of it going to April is greater now, and I will explore how long the audit is gonna take over the next week and then update the All Core Devs chat about it. So it'll just stay in proposed for now, even though it has been tentatively approved pending the audit; that being the second time we approved the EIP, there's a possibility of a delay. CoinDesk, if you're listening: I have not officially delayed this at all, and neither have the core devs. We're just saying.
J: I can give a bit of background and say a few more words; I think it would be a very, very strong win for Ethereum. So this EIP has been in the works for a while, and in previous hard forks it wasn't included, partly because the performance of the existing client implementations of the BN precompiles didn't justify the proposed gas costs, and also because the value of reducing the gas cost was more speculative then; things have since changed quite significantly.
J: There are a number of companies and organizations developing privacy-preserving technology and solutions on Ethereum; AZTEC, the privacy protocol, is one of them. We use elliptic curve cryptography extensively, you know, in our protocol and in smart contracts, and at the moment the precompile gas costs are the bottleneck to deploying more advanced and innovative cryptosystems that could solve a lot of [these problems].
J: So it would address some of Ethereum's technical limitations, such as the lack of privacy or transaction throughput. And actually, in the EIP we give a description of some of the things we would be deploying to Ethereum, were the precompile gas costs lower, that would start to be actually practical: things like confidential voting, confidential decentralized exchanges, that sort of thing.
J: We know of quite a few people and institutions building today who would be able to do more advanced, innovative things with Ethereum, and I think the whole ecosystem and community would benefit from this quite significantly. Also, for the implementation: the actual [optimized code] has already been implemented in the major clients, so the implementation work should be extremely light.
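For context, the concrete numbers in the proposal (taken from the EIP-1108 text itself, not stated on the call) look roughly like this:

```python
# Gas costs for the alt_bn128 precompiles: Byzantium values vs. the EIP-1108
# proposal. Figures are from the EIP text, not from the call.
BYZANTIUM = {"ECADD": 500, "ECMUL": 40_000, "PAIR_BASE": 100_000, "PAIR_PER_POINT": 80_000}
PROPOSED  = {"ECADD": 150, "ECMUL": 6_000,  "PAIR_BASE": 45_000,  "PAIR_PER_POINT": 34_000}

def pairing_cost(prices, k):
    """Cost of one pairing-check precompile call over k point pairs."""
    return prices["PAIR_BASE"] + k * prices["PAIR_PER_POINT"]

# A pairing check over 2 point pairs drops from 260k gas to 113k gas:
print(pairing_cost(BYZANTIUM, 2))                 # 260000
print(pairing_cost(PROPOSED, 2))                  # 113000
print(BYZANTIUM["ECMUL"] // PROPOSED["ECMUL"])    # 6 -- ECMUL gets ~6.7x cheaper
```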
E: Okay, a question about the benchmarks that were published. The benchmarks listed for the 2.1.1 beta release and the benchmarks after the pairing optimizations: you're listing the exact same test vectors, and I'm just curious whether, on the second run, you took the exact same set, or took the ones that performed the worst.
H: And if I can chime in for a second here: from a very high level, the reason these were priced so high in the first place was to prevent DoS attacks from a relatively new on-chain primitive, right? We have no evidence that DoS attacks are forthcoming, and the actual compute-time cost of these is much lower than the gas costs would imply.
A: I have a quick question: the "precompile for elliptic curve linear combinations" EIP, is that generic enough that we wouldn't need this, or is that not the case? Because I know there are other EIPs where that is the case, I believe.
E: I mean, I've no idea if that's the case, but I totally agree that if we can make these cheaper, we definitely should, because we went through all the trouble of putting them in there, and if no one's using them, then it's wasted effort. And it's extremely cheap; I mean, it's really easy for us to just lower the price. We just need to make sure that we know [it's safe].
A: We're not gonna get through all of these today, I'm calling it now, but we'll see what we can do.
L: I have a quick comment. I've been commenting on that one, and initially the proposal was very different: it was about changing the existing calls to behave differently if they are targeting a precompile. That was changed to proposing a specific opcode, but I couldn't reach Jordi for a couple of months; he wasn't responding on the forum. So I made another EIP, the last one on the list, 2046, which practically does the original proposal: it only changes STATICCALL to behave differently.
A: Yeah, so after this is done... I think it was James Hancock who was keeping up with which ones are accepted, rejected, and proposed; is that right, James? "We're noting it in the chart while we're talking." Oh, wonderful, so there's a Google Doc. So now there are three places for these to be: there's the wiki, there's the meta EIP, and there's a Google Doc, and the Google Doc is gonna be the most updated.
A: Okay, Phil, could you let Wade know of our request? Also, if you didn't catch all that, it's gonna be in the notes, and probably the summary, after this call.
C: Also, there are some models of non-mainnet chains where, when they fork, they change their chain ID to make it clear that they changed the rules of the chain. It's not always a contentious fork, and it's not always forking off of mainnet, but I think that's something that could be introduced on their own clients; I don't know that it's a requirement for a mainnet client to support it, which would be my opinion of it. So I think they both should go in, but I also think they can go in independently.
M: Between the two of them, I concur with Martin that the opcode one, CHAINID, really should be a thing. I mean: get the current chain ID; here's the current chain ID, and you can do whatever you want with it. If a past chain ID is important for a contract, it can even store it or something. Whereas the alternative, where you can look up the chain ID at an arbitrary past time, has one of these weird quirks: why does... why?
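The "get the current chain ID and do whatever you want with it" pattern can be sketched like this (my illustration, not from the call; the hashing stands in for a real signature scheme):

```python
# Sketch of CHAINID-style replay protection (EIP-1344 flavor): a contract
# mixes the *current* chain ID into the signed message, so a message signed
# for chain 1 is rejected on a fork that changed its chain ID to, say, 61.
import hashlib

def signed_digest(chain_id: int, payload: bytes) -> bytes:
    # Off-chain side: domain-separate the payload by the target chain ID.
    return hashlib.sha256(chain_id.to_bytes(32, "big") + payload).digest()

def contract_accepts(current_chain_id: int, payload: bytes, digest: bytes) -> bool:
    # On-chain side: recompute with the chain ID the opcode would return *now*.
    return signed_digest(current_chain_id, payload) == digest

msg = b"transfer 10 tokens to 0xabc"
d = signed_digest(1, msg)                 # signed for mainnet (chain ID 1)
print(contract_accepts(1, msg, d))        # True on mainnet
print(contract_accepts(61, msg, d))       # False on a fork using chain ID 61
```

A contract that wants to honor messages signed before a fork can cache the old ID itself, which is the "they can even store it" point above.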
E: I actually asked them about that on the PR. As far as I understood it, it's because of how it was at an earlier point in history: in [EIP] 155, it was considered that even 0x0 was a valid chain ID. It was something to that effect, that the subset only decreases.
M: Another problematic thing, if we specify that this is a restricted address range: previously, when we had the cleanups, during Spurious Dragon or whatever fork it was where we cleaned up the empty accounts, we actually accidentally deleted the RIPEMD [precompile account], and now every client has this weird special clause in the code base saying that RIPEMD is a special snowflake. And the reason the rest of them weren't special snowflakes is because somebody actually sent...
M: ...one wei to every one of them, so that they became non-empty. But I think we only sent one wei to the first 256, and then, okay, what happens with the higher ones? So it's a bit weird; I mean, it doesn't really provide any value, but it does make reasoning about things a bit more complicated, because all of a sudden you can misinterpret things more easily.
M: Honestly, I would say that precompiles should be... at least in geth we have the list of precompiles hard-coded, and in my opinion, if you want to make precompiles address-agnostic, so to say, then calling them shouldn't depend on the address; rather, it should depend on how the chain is configured. If the chain defines that we have eight precompiles, then it doesn't matter what you have at what address: those eight get the fee waiver and the rest of them have to pay.
A: There is one... I thought there wasn't for a second... there is an Ethereum Magicians thread; okay, great. So there are a few people on the call today who came specifically to advocate for their EIP, so instead of going in order anymore, for the last 15 minutes let's run through those EIPs. I know a big one was the fee market change for the Eth 1.0 chain; it's 1559. Eric, is that the one you're championing?
M: I've never seen this before, so apologies if I say something stupid, but, skimming it: could you also try to briefly describe, I mean not here but rather on the EIP, how this whole thing works for transaction propagation? Because the transaction pool logic is really heavily tied into how miners accept different transactions, and if we start changing that logic, then maybe the networking layer needs some patches.
O: Currently, disk IO is over-utilized and is the bottleneck, while network and computation are underutilized; because they're overpriced is the main reason they're underutilized. So there's the EIP to reduce the price of calldata, which will help better utilize network IO; there's an EIP to increase the price of SLOAD, which will help rebalance disk IO; there's an EIP to increase the price of SSTORE and CREATE, and so forth. And that makes it possible to boost the block gas limit while maintaining the same rate of state growth.
O: If we boost the block gas limit while simultaneously increasing the price of state growth, that's effectively the same as keeping the block gas limit the same but reducing the price of all other operations, all the non-state-expanding operations. That's, again, the computation- and network-related operations, which means reducing the price of transaction data and calldata.
O: How much we can reduce the price of transaction data depends on when bandwidth becomes a bottleneck; how much we can reduce the price of computation depends on when computational opcodes become the bottleneck. Currently, computational opcodes are, as we'll see, not the bottleneck, even in an unoptimized interpreter. That's what Martin Swende's recent benchmarks showed: SLOAD is the biggest bottleneck. And so the EIP to increase the price of SLOAD proposes taking it from a gas cost of 200 to a gas cost of 800.
O: Even after that 4x increase in the cost of SLOAD, it would still be the bottleneck according to those benchmarks. And note that these benchmarks were done on geth; just using the speed of computation in geth, the benchmarks show that the cost of disk IO needs to be raised substantially, or the cost of computation should be reduced, or a combination of both, because the costs are all relative: reducing one is the same as raising the other. It's equivalent.
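The relativity argument can be sketched numerically. The SLOAD 200 to 800 figure is from the repricing EIP; the 8M block gas limit and the 3-gas ADD are just the then-current mainnet values, used here for scale:

```python
# Gas prices only matter relative to each other and to the block gas limit.
# Raising SLOAD 4x while also raising the block gas limit 4x leaves the
# disk-bound work per block unchanged but quadruples the compute budget.
GAS_LIMIT = 8_000_000

def ops_per_block(gas_limit, price):
    return gas_limit // price

sloads_before = ops_per_block(GAS_LIMIT, 200)        # 40,000 SLOADs per block
adds_before   = ops_per_block(GAS_LIMIT, 3)          # ADD costs 3 gas

sloads_after = ops_per_block(4 * GAS_LIMIT, 800)     # still 40,000: disk work unchanged
adds_after   = ops_per_block(4 * GAS_LIMIT, 3)

print(sloads_before == sloads_after)   # True
print(adds_after // adds_before)       # 4
```

Which is why the speaker says raising the SLOAD price and lowering everything else are equivalent moves.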
O: So what's kind of crazy is, if you benchmark geth against an EVM implementation that's optimized for speed, computational speed, like evmone, which is what we did, the graphs of those benchmarks, which are linked in the agenda and in the EIP, show that we can get like a 10x speed-up just from optimizing the EVM. The proof-of-concept fast EVM implementation that does this is called evmone, and Paweł wrote it for fun during Christmas break.
O: So that's two significant speed-ups, which are basically low-hanging fruit right now. The first is just rebalancing the costs to the current geth and Parity speeds, and the second speed-up is optimizing geth and Parity to get the same speed-up as evmone, or you could, you know, just use evmone. But a fun thing to try is to take evmone and benchmark some of the most optimized EVM contracts, and so we did this with the contract that Zac Williamson wrote; he was speaking on the call earlier.
O: The contract is called Weierstrudel; it implements the same thing as ECMUL, the elliptic curve multiplication precompile that was added in Byzantium, and it beats the precompile in gas cost. In computation time, evmone executes its ECMUL in 500 microseconds; that's compared to native Rust, which Parity uses for that precompile, which executes in 300 microseconds, and geth, which is more optimized with some native Go and assembly and runs in 100 microseconds.
O: You know, an optimized interpreter executing optimized bytecode can achieve speeds that are not a lot slower; I would almost say near-native. And even that result of 500 microseconds isn't the best we can do; there are more optimizations remaining, like a well-known one for elliptic curves, Montgomery multiplication, that wasn't done in Weierstrudel, because Weierstrudel was optimized for gas cost and not speed.
H: Right now it takes 16 million gas to evaluate one Equihash [proof] at a low security parameter; for chains that are going to use it at higher security parameters, it would take 32 or 64 million gas. So I have a use case in mind for this today that I could implement; you know, that's just a comment on the Blake2 precompile.
H: So the thing about Equihash is that you have to do 32 invocations; even if you reduce it by an order of magnitude, you're still looking at 1.6 million gas per invocation. In order to verify a set of Zcash headers, for example, you'd need to do somewhere in the neighborhood of 30 to 40 invocations, depending on how much security you want, right?
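A back-of-the-envelope version of the numbers just quoted (the 8M block gas limit is my assumption for scale, not from the call):

```python
# The arithmetic behind the Equihash figures: ~16M gas per evaluation today,
# roughly 10x cheaper with a Blake2 precompile, and 30-40 invocations to
# verify a batch of Zcash headers.
COST_TODAY = 16_000_000                 # gas per Equihash evaluation, low security
WITH_PRECOMPILE = COST_TODAY // 10      # ~1.6M gas per invocation

for invocations in (30, 40):
    total = invocations * WITH_PRECOMPILE
    blocks = total // 8_000_000         # assuming an ~8M block gas limit
    print(f"{invocations} invocations: {total:,} gas (~{blocks} full blocks)")
```

Even after the order-of-magnitude reduction, a header batch still costs several full blocks of gas, which is why the FlyClient approach discussed below matters.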
J: Yeah, I'm happy to share this if that works for you, but yeah, basically it's about 30x.
H: The question here is whether this is a useful thing with an 11-month lead time. Assuming the next hard fork has about the same lead time, we're looking at a 23- to 24-month lead time for something like this. I think, you know, even assuming that this optimization comes out in two years, it puts us in a really uncertain place with the launch of Eth 2, and it's a full year slower.
A: And if we can connect you with Zac and Casey to further this discussion, I think that would help, because of our time constraints on the call. And then, if you want to come next week, James, or two weeks from now, and continue the discussion, just because I didn't give you a ton of time, that's perfectly great as well.
A: Yes, if they are completed; not implemented, but if they are completed as far as being merged as a draft and having the motivation and specification sections and things like that, they can be considered for Istanbul, as long as you spur discussion on the Fellowship of Ethereum Magicians and in the All Core Devs Gitter channel and get a little bit of consensus on it.
H: I'm happy to point you to them if you shoot me a message after this. One of the main, really cool things that differentiates this from BTC, really, is that the Zcash team is considering FlyClient Merkle Mountain Range commitments for their next hard fork, which would be about six months from now. So getting a Blake2 precompile into Ethereum...
H: ...on, you know, a slightly longer timeframe would allow you to do not just a Zcash relay, which would be prohibitively expensive, but the ideal minimal FlyClient relay, which is orders of magnitude more efficient than relaying every header. So it's kind of a unique opportunity here, because the Zcash team is looking at hard-forking in features specifically for this as well.
Q: Brett Miller; I'm the developer relations manager at Electric Coin Company, and honestly, James did a phenomenal job framing a lot of the reasons why we want this. I was really here just to show support and to make sure that we at least have some more time to try to make sure this can make it into Istanbul. We're very interested in seeing it pushed through; I'm personally shepherding this effort on the Electric Coin Company side, and I can also coordinate with the Zcash Foundation.
M: One thing I would like to add: here we kind of have two approaches. One of them is Casey's one-EVM-oracle [idea]; honestly, I'm not sure whether that would ever fly, because it would require all clients to use a single implementation, so that's probably a very political thing. But that said, the precompile approach to me seems like a trivial...
M: ...idea, or feature, to add. Because, essentially, Blake2, for example, is a popular hash function; it's implemented in probably every single language out there, so integrating it would probably be half an hour's worth of work. So from this perspective, if there is a legitimate use case, I think adding precompiles for a couple of high-profile hash functions seems like a no-brainer.
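As a small illustration of how ubiquitous Blake2 is (my example; note the actual precompile EIP exposes the Blake2 compression function F rather than the full hash shown here), Python has shipped it in the standard library since 3.6:

```python
# Blake2 is ubiquitous: Python carries BLAKE2b in the standard library, which
# supports the point that wiring it into a client is mostly integration work,
# not cryptography work.
import hashlib

digest = hashlib.blake2b(b"abc").hexdigest()
print(digest[:16])   # ba80a53f981c4d0d -- start of the RFC 7693 BLAKE2b-512("abc") test vector
```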
Q: There's also been quite a bit of work already done on this, which is, you know, another positive, I think. Electric Coin Company previously did quite a bit of work on this: Jay Graber, she was, I guess, one of the two authors on the original EIP, as well as a reference implementation.
M: Yes, I guess we can make a list of the hashes that we would like supported, and, I mean, adding one hash or adding five of them is probably the same effort. Maybe testing them is a bit more effort, but it's not really an effort to support a new hash function, so I don't really see a reason why we wouldn't do it. I mean, the precompile address space is kind of infinite, and adding the code is just pulling in an external library, because obviously nobody is going to implement it from scratch.
O: I didn't mean to suggest that we should do, you know, [EIP] 2045 and reduce all the computational gas costs instead of adding Blake2b; just to point out that, regarding the need to add Blake2b now, there is another option which would potentially supersede both the precompile for Blake2b and many other use cases.
A: Okay, all right, thanks everyone. There is a Fellowship of Ethereum Magicians forum thread for Blake2b that people can participate in, and I think Brett and James are monitoring that one, as far as I know; it's linked in the EIP. Thanks everyone for coming today, and we'll talk more on the All Core Devs Gitter channel to try to wrangle in some of these EIPs that are still stuck in proposed, and as quickly as possible decide which ones are being implemented for Istanbul.