From YouTube: Ethereum Core Devs Meeting #98 [2020-10-16]
A
Okay, so we're recording to the cloud. Welcome everyone to Core Devs number 98. I just shared the agenda in the chat. There were not too many things on it, so if there's anything people want to bring up during the call, we'll definitely have some time for it, I think. First up we had two new EIPs that people wanted to discuss. The first is a pre-EIP for the binary trie format; I believe Guillaume was the one who wanted to bring that up. A quick overview?

B
Yeah, sure. So it's basically a proposal to make a simple trie structure. This is the first time I've presented it, so I'm mostly looking to get feedback, and I would like to create the pull request afterwards.

B
There's been a back and forth with a few people, and the goal is simply to have two rules.

B
Clearly the first thing is that the trie is a binary trie. We also wanted to have, because apparently there are a lot of zero-knowledge applications for this, an input that is just two 32-byte items, with one 32-byte item as the output. And otherwise, another thing was to get rid of RLP, by popular demand; everybody wanted to get rid of RLP.

B
I also tried to get rid of extension nodes, but that is clearly too difficult: the number of hashes that needs to be done if you get rid of extension nodes is just too big. So, pretty simple. If you have any comments, I'm happy with any feedback.
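As a rough illustration of the "two 32-byte items in, one 32-byte item out" rule Guillaume describes, here is a minimal sketch, not the proposed spec itself: hashlib's sha3_256 stands in for Ethereum's Keccak-256 (only the padding rule differs), and the power-of-two leaf layer is an assumption for brevity.

```python
import hashlib

def merge(left: bytes, right: bytes) -> bytes:
    """The primitive: two 32-byte children in, one 32-byte parent out."""
    assert len(left) == 32 and len(right) == 32
    # sha3_256 is a stand-in for Keccak-256; only the padding differs.
    return hashlib.sha3_256(left + right).digest()

def binary_trie_root(leaves: list) -> bytes:
    """Fold a power-of-two layer of 32-byte leaves up to a single root."""
    layer = list(leaves)
    while len(layer) > 1:
        layer = [merge(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]
```

Note that there is no RLP anywhere: every internal node is a fixed 64-byte hash input, which is what makes the structure friendly to zero-knowledge circuits.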
C
Oh, okay, so I was just wondering: this is the format of the binary trie itself, but previously you had some suggestions as to how to actually move to that format?

B
That is independent, yeah. There's still this other EIP, indeed, that I don't really want to tie to it, because last time Peter wasn't really sold on the idea. It could happen with ReGenesis, it could happen with the overlay trie method; that's also open.

B
An example: so I don't have the exact number (I used to have it, but I lost my SSD two weeks ago and I need to recalculate everything), but it was maybe half a second, when you had just one thousand leaves in the tree, to recalculate everything on an average machine.

D
Okay, so sorry, one thousand leaves in the entire tree? But why would it... Basically, maybe I should just go deep into it, because I don't know whether this makes sense to me. I mean, it doesn't make sense to me, but we should look at it. Okay.

D
So what I'm basically saying is that I need to verify that the addition of extension nodes is justified, because we kind of agreed that extension nodes do bring extra complexity, and if we can get rid of them without sacrificing too much performance, then it would be good. But yeah.
B
Yeah,
I
totally
agree
with
that.
I
mean
there's
so
many
nice
properties
even
for
witnesses,
and
things
like
this,
if
you
don't
include
the
extension
nodes
and
but
yeah
yeah
the
way
I
see
it,
there
were
too
many
two
things
that
really
increased
the
complexity.
There
was
the
extension
notes
and
really
the
all
the
bit
twiddling
about
like
yeah,
that
is
included
in
the
hex
prefix,
especially
and
all
the
the
rlp
you
know
rule.
If,
if
your
rop
is
less
than
32,
then
you
need
to
hash
it.
B
Sorry,
then
you
need
to
include
this
verb
verbatim.
Otherwise,
you
you
have
to
hash
it.
In
my,
in
my
view,
this
is
where
the
complexity
really
explodes,
so
the
extension
node
that
I
would
have
liked
to
to
be
able
to
get
rid
of
them
by
all
means.
If
you
find
a
way
to
to
get
rid
of
them
and
still
keep
the
performance,
I'm
I'm
all
ears.
I
was
not
able
to
to
do
that.
D
Okay,
so
when
you
were,
let's
just
narrow
down
what
what
kind
of
performance
do
we
mean?
Do
we
mean
the
performance
of
constructing
the
tree
or
like
an
entire
tree,
or
do
we
mean
the
performance
of
let's
say
verifying
the
proof
with
a
miracle
proof
yeah
constructing
the
tree,
but
we
do
not
talk
about
verifying
numerical
proof
right.
B
I
mean,
let
me
say
you
know,
I
think
it's
the
same.
It's
the
same
problem.
Unless
I
mean
yeah
verifying
a
miracle
proof.
Presumably
there
will
be
less
leaves
than
rebuilding
the
the
tree.
Although.
D
If
you're,
if
you
don't
have
extension
nodes
that
that
basically
means
that
all
the
merkle
proofs
will
have
the
same
exact
depth
right
even
righteously
yeah,
so
and
even
though
you
could
say
that
the
amount
of
information
you
have
to
carry
in
the
proof
is
still
going
to
be
small,
but
the
amount
of
calculations
that
you
need
to
perform
to
verify.
The
proof
will
be
essentially
constant
and
that
constant
will
be
basically
depth
depending
on
on
these
entire
depths
of
the
like
64
or
whatever.
It's
called
yeah.
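To make that point concrete: in a fixed-depth binary trie, verification walks one sibling hash per key bit, so the work is constant regardless of how sparse the trie is. A minimal sketch, reusing the merge helper from the sketch above; the bit ordering is an assumption for illustration.

```python
def verify_proof(root: bytes, key_bits: list, leaf: bytes, siblings: list) -> bool:
    """Check a fixed-depth Merkle proof: one sibling, one hash per key bit."""
    node = leaf
    # Walk from the leaf back up to the root; the last key bit is deepest.
    for bit, sibling in zip(reversed(key_bits), reversed(siblings)):
        node = merge(sibling, node) if bit else merge(node, sibling)
    return node == root
```

With a 256-bit key this is always 256 hashes per proven key, which is the constant cost being discussed.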
D
Okay, so yeah. Basically, I would like to separate these two things and then look at them separately: the verification of a Merkle proof and the construction of the trie. I guess, right, yeah.

B
So if you look at the PR... sorry, not the PR; if you look at the draft, there's a link to a HackMD I did to explain precisely this: separating the Merkle proof generation from the trie. You could use extension nodes in the witness, in the proof, and then recalculate it one at a time. So that can be done. But even then, the production of a block, in my view, would take too long.

D
Sure, okay, cool. Because basically my approach to that would be just this: again, when you talk about calculating the hash of the entire tree, I would also separate it into two parts, the initial calculation and the incremental calculation. So even if the initial calculation is somewhat slow, that could be worth sacrificing if we can show that the incremental calculation is still okay.

D
Because basically, when you think about it, there's the logical structure of the tree, which basically assumes that the initial calculation might be very slow, and there's the physical sort of model of how you store it. And inside the physical model, you can already take into account the fact that there is a kind of extension node, and then you can basically skip most of those calculations.

D
So basically, I think you can make the incremental calculation pretty much the same as with the current thing, because I think the times when we really touch the extension nodes are probably quite rare, statistically. So I think we could work on this; I think we could actually solve this problem.
B
Actually,
I'm
not
sure
I
understand,
because
you're
saying
every
time
we
touch
an
extension
or
like
every
time,
you're
going
to
write
a
new
and
you
you
value
like
a
new
key
value,
you're
going
to
have
to
update
the
extension.
D
What
I'm
say,
what
I'm
saying
is
that
let's
say
that
we
could
keep
the
same
physical
model
that
we
have
right
now
in
your
implementation,
for
example,
because
your
implementation
essentially
has
the
the
whatever
embodiment
of
this
extension
mode
right,
which
is
essentially,
but
you
can
still
have
this
embodiment
of
extension
nodes
in
your
physical
but
by
by,
but
by
having
a
logical
model,
without
extension,
so
that,
if
nothing
changed
in
that
in
that
node,
you
don't
need
to
update
it.
So.
E
D
My
hypothesis
is
that
the
number
of
extension
nodes
are
not
that
large
in
practice,
and
we
don't
this
statistically,
we
can
figure
out
how
much
it
is
that
you
know
we're
hitting
extension
mode
and
how
hard
can
you
hit
them
in
the
worst
case
and
then
maybe
that
that
is
not
going
to
be
a
significant.
So
that's
what
I'm
saying.
C
C
D
No,
what
that,
by
hitting
extension
mode,
I
mean
modifying
something
which
is
beyond
the
extension
node
so
like
beyond
meaning
like
closer
to
the
leafs,
because
if
you
don't
modify
anything
which
is
which
is
basically
beyond
the
extension
node,
then
you
don't
need
to
do
anything
with
that
extension
because
you
sort
of
pre-cached
the
the
hash
of
that
sub-tree
yeah.
So
so,
basically,
I
I
do
not
have
this
data
at
hand,
but
how
often
do
we
do
we
hit
that
you
know
the
the
things
beyond
the
extension
nodes.
D
Do
you
think
so?
Because
I
think
what
would.
B
Yeah,
I
think
I
think
I
think
this
is
the
correct
analysis,
but
let
me
offer
like
I:
I
need
to
recover
the
data
from
my
ssd
like
what's
left,
what's
left
there,
I
will
rerun
the
test
and
push
them
and
publish
them
on.
G
Well,
there
is
a
property
of
k,
chuck
hash
function
that
it
can
calculate
hash
of
300
and
oh
of
132
bytes
as
fast
as
of
like
128
bytes,
as
fast
as
a
hash
of
64
bytes.
So
I
would
say
if
you
would
get
rid
of.
G
Value
in
a
node
tuple
and
also
get
rid
of
the
main
separation
hash,
which
you
call
the
node
prefix,
then
most
likely
you
can
get
radix
four
three
having
the
twice
performance
compared
to
radix
two.
If
we.
B
B
Okay,
so
if
for
your
information,
there's
no
value
and
that's
one
of
the
things
that
got
dropped
there,
there's
no
more
possibility
of
storing
values
in
in
the
internal
nodes.
G
E
G
With
prefix
in
another
format,
it's
just
a
little
bit
confusing
for
me,
but
it's
not
a
problem.
I
mean.
If
you
make
a
putting
a
domain
separation,
exactly
136
bytes,
then
you
can
make
kind
of
specialized
well,
then
you
can
make
128
specialized
keychuck
256
hashes,
with
their
external
fade
being
non-trivial,
and
then
you
can
use
four
and
then
you
can
hash
four
elements
simultaneously
into
a
single
node
without
increasing
the
actual
hashing
time.
G
So
your
tree
will
be
twice
less
deep,
but
a
time
you
would
spend
to
calculate
it
as
effectively
the
same.
G
G
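This observation comes from Keccak-f[1600]'s 136-byte rate: any message up to 135 bytes is absorbed in a single permutation, so a four-child node (128 bytes) costs the same permutation as a two-child node (64 bytes). A rough timing check, assuming hashlib's sha3_256 as a proxy (it shares the 136-byte rate, even though its padding differs from Ethereum's Keccak-256):

```python
import hashlib
import timeit

# Messages up to 135 bytes fit in one block at a 136-byte rate; 200 bytes
# needs a second permutation and should be visibly slower.
for size in (64, 128, 200):
    msg = b"\x00" * size
    t = timeit.timeit(lambda m=msg: hashlib.sha3_256(m).digest(), number=200_000)
    print(f"{size:3d} bytes: {t:.3f}s")
```

If 64 and 128 bytes time out near-identically, hashing four 32-byte children per node halves the depth at no extra hashing cost, which is the radix-4 argument.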
B
Okay, that's of course worth investigating, indeed.

A
Is it worth making this into an actual EIP at this point, or is it better to just keep iterating on the HackMD?

B
I'll make a new one when everything is integrated. Okay, cool, and thanks for your feedback, guys.

A
Great. Next up on the list was EIP-2926, about the chunk-based code merkleization. I believe, Sina, you wanted to bring this up?

H
...So I can benchmark and arrive at those values. I just wanted to bring it up today for some initial feedback, so happy to take any questions.

I
This also might be answered in the EIP, and I apologize if it is: where the boundaries for the code hashing are, is that something that is controllable by Solidity or contract authors? Function boundaries seem like a very obvious place where you'd want to have your code-hash boundaries. Or do you just not get to choose it: it's going to be a boundary where it is, and you can't optimize for that at all?

F
I think the current approach is just hashing every 32 bytes, and I don't actually even think that the losses of doing it that way are that significant, just because generally functions are...

E
So don't take that as gospel.

D
And also, we did not actually have a proper analysis of what the optimal chunk size is. I don't know if anybody has done that yet, what the empirically optimal size is, but I'm not too worried about this for the moment.

C
Regarding these 32 bytes: are there any guarantees that a chunk never starts on a data segment, or is it blindly 32 bytes, always?

H
We're not taking any extra consideration for the data section. The only edge case that I came upon was this: at the beginning of every chunk we store the offset of the first instruction in that chunk, to be able to skip push data, and if the data section at the end of the bytecode has something resembling a push opcode, then the data for that push opcode goes over the bytecode length.

H
That is just something that has to be handled in the code, but it can be safely ignored.
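A sketch of the chunking rule Sina describes: fixed 32-byte chunks, each annotated with the offset of its first real instruction so a verifier can skip PUSH immediates that spill across a boundary. This illustrates the idea rather than the exact EIP-2926 encoding, and the edge case mentioned above (push data running past the end of the bytecode) falls out harmlessly here.

```python
CHUNK_SIZE = 32

def chunkify(code: bytes) -> list:
    """Split code into (first_instruction_offset, chunk) pairs."""
    fio = {}  # chunk index -> offset of the first instruction in that chunk
    pc = 0
    while pc < len(code):
        op = code[pc]
        # PUSH1..PUSH32 (0x60..0x7f) carry 1..32 bytes of immediate data.
        size = 1 + (op - 0x5F) if 0x60 <= op <= 0x7F else 1
        next_pc = pc + size
        # Every chunk whose first byte lies inside this instruction's
        # immediate starts mid-instruction; record where the data ends.
        # (An offset of 32 marks a chunk that is pure push data.)
        for c in range(pc // CHUNK_SIZE + 1, (next_pc - 1) // CHUNK_SIZE + 1):
            fio[c] = next_pc - c * CHUNK_SIZE
        pc = next_pc
    n_chunks = (len(code) + CHUNK_SIZE - 1) // CHUNK_SIZE
    return [(fio.get(c, 0), code[c * CHUNK_SIZE:(c + 1) * CHUNK_SIZE])
            for c in range(n_chunks)]
```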
C
So what you're saying, basically, is that if there is a jump to another chunk, we can just load it and see in the metadata where the first opcode is, and thereby determine whether the jump is valid or not.

D
So what we're trying to do is to prove, for as many contracts as we can of those which are currently deployed, whether they ever exhibit the behavior of jumping inside push data. And so far I think we have proven, for 97% of the contracts that were deployed, that it can never happen. So basically we're just working on doing the proof checker and stuff like this.

D
But the idea behind this is that, for most of the programs which do not specifically want that, if you compile them with Solidity, Solidity pretty much always generates code which never jumps into the push data. So you actually have to craft the code specifically to violate this rule. So the suggestion was there:
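For context, the push-data question is the same one clients answer today when building the JUMPDEST table: a byte is a valid jump target only if it is 0x5b and not inside the immediate of a preceding PUSH. A minimal sketch of that standard scan (this is the EVM's existing rule, not the proof checker described here):

```python
JUMPDEST = 0x5B

def valid_jumpdests(code: bytes) -> set:
    """Collect the only PCs that JUMP/JUMPI may legally target."""
    targets = set()
    pc = 0
    while pc < len(code):
        op = code[pc]
        if op == JUMPDEST:
            targets.add(pc)
        if 0x60 <= op <= 0x7F:
            # Skip PUSH immediates so 0x5b bytes inside them don't count.
            pc += op - 0x5F
        pc += 1
    return targets
```

The analysis described in this discussion goes further: it proves statically, via abstract interpretation as mentioned below, that no reachable JUMP ever lands inside push data, which would let the chunking drop the per-chunk metadata entirely.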
D
If we figure out how to put that in, then we can even simplify the chunking rule, so that we don't even need to include metadata. That means that for the contracts which we were able to analyze, we put a flag into the account saying that we proved that, for this account, there is no possibility of jumping into the push data at all. That means you can chunk them whichever way you want, without even needing the metadata. And for those where...

F
Interesting. I mean, radical, but instinctively I kind of like it. Have we done analysis on... I guess the question is: how difficult is this proving, and what percentage of contracts are... 97, you said? 97%.

D
Essentially, there are two parts to it: there's a proof generator and there's a proof checker, and we hope to have a proof checker which is much, much simpler. The prover basically uses a thing called abstract interpretation, where you go through...

D
So now we are trying to figure out how big the proof is going to be and how simple the proof checker is going to end up. And what I also wanted to add: these suggestions about the subroutines and dynamic jumps, we now see that if we did not have dynamic jumps in the EVM, then those proofs would be pretty much trivial. What we could do is actually go in that direction and essentially sort of back...

F
Right. Do you know what percentage of contracts use dynamic jumps...

D
...that are not immediately preceded by a push? Oh no, this happens all the time, because the most common pattern of using dynamic jumps is essentially calling a subroutine in Solidity from different places in the code. So what it does is push the return address...

E
Oh, as in adding a new opcode to access the tree hash instead of the code hash?

F
My instinct would be against it, because of the possibility that we'll end up changing the trie format again later, and we'll have a growing list of hashes that we'd have to store for backwards compatibility.

E
Fair point.
F
Yeah, by the way, speaking of other trie approaches, I should also bring up something I brought up once before, which is the option of using Kate commitments, in addition to a Merkle root, for hashing code. Basically, the argument for Kate commitments is that you can fairly easily generate a proof for an arbitrary subset of chunks, regardless of where they're located, and this proof would just be a constant size, 48 bytes, which has the possibility of making witness sizes for code quite trivial.

C
Do you see that we would keep the current code side by side in the implementation, or would we actually load these 750 leaves and concatenate them? Have we thought about that?

H
Yeah, so in the EIP we currently talk about this; there's a segment on the transition process, and we thought maybe it's easiest to only store the flat code right now. So when a new contract is created, the client merkleizes it and computes the root, but then stores just the root and doesn't touch it again, and only stores the normal full bytecode in the database. And because of this, we won't need, for now, to change gas costs for CALL and any other code-accessing opcode; and later on...

C
Okay, so the proposal as it is, it kind of already is open to there being trie-backed code and flat-backed code living simultaneously on the network?

H
Oh, so the idea was that you merkleize all of the contracts, including existing contracts, compute the code root for them and store it in the account record, but then only store the full bytecode, and not the actual leaves in the code trie anymore, because code is static and the tree structure of the code doesn't change. And clients, from what I gathered, wouldn't need to have the tree structure until stateless...

L
So I had a question about chunking in the middle of push data. If you're going to keep metadata anyway, flagging if the chunk ends in push data, then could you not have, not exactly fixed-size chunking, but chunking that takes that into account and chunks at around 32 bytes, but always at the end of a push's data?

E
See, the central theme here that I've kind of seen since the beginning is that there are a lot of little efficiencies that we could possibly add in around the edges, and it seems like pretty much all of them end up not being justified, just due to the minor savings they seem to provide and the higher complexity they seem to add. Not saying we shouldn't look at these things, but that has been kind of a central theme along the way.
M
So there have started to appear quite a few data contracts. Anyway, through the subroutines proposal, Martin and I tried to analyze contracts in the state, and out of that work I had the idea that maybe we could do a one-off validation of contracts and mark them in the account: whether they are data or, to say it more nicely, whether they can be executed or not. And I wonder if that could just be merged with your work, because you're already setting a flag and you do analysis.

M
I wonder how easy it would be to check if the code can be executed at all, and leave it at that.

M
The heuristics we added were, you know, in how many steps it would terminate and with what kind of inputs. But many of them would just start with an opcode which is invalid, like a truncated push or an invalid opcode, and those are clearly not executable under any circumstances.

D
We weren't interested in that specific thing; we were basically interested in building the control flow graph. For the things that terminate straight away, the control flow graph is very simple: it's just one node. And if by chance they have a little logic which does something, I think in most cases the control flow graph is still quite simple.

M
So one motivation for this executable flag in the account would be that when a new contract is deployed, it is analyzed as to whether it would terminate immediately, and if it wouldn't, then it's marked executable. And this is somewhat, maybe, a replacement for EVM versioning, in the sense that there have always been concerns when a new opcode is introduced:

M
How would that affect existing contracts? If you already have old contracts which were invalid, maybe because of a non-existing opcode, and they're not marked executable, then even after introducing that new opcode, they won't be executable. That was the idea.

D
Okay, I see what you mean. So, to answer your question: I think this is basically a special case of our analysis, because, as I said, we're already trying to figure out these control flow graphs for anything that we see, and either we can build the control flow graph or we can't, which means we don't know; like...

D
...maybe it's just super dynamic. But if you look at this control flow graph and you can see that it's clearly always failing, because it will be obvious from the control flow graph that it always fails, then you can mark it as non-executable. So I think what we do is basically a bit wider than what you're asking. So yes, we can do that.
C
Yeah
just
just
to
add
some
context,
so
the
analysis
we
did
like
alex
it
focused
on
a
different
thing,
and
one
thing
that
caused
problems
for
us
were
not
the
type
of
data
contracts
which
are
just
blind
data,
there's
another
type
of
data
contracts
which
basically
function
that
you
do
a
delegate
call
to
them
and
that
you
delegate
call,
for
example,
load
segment,
one
and
then
it
just
it
does
an
internal
code
copy
from
the
code
to
memory.
C
So
you
get
like
segment
one,
and
then
you
can
continue
your
execution,
because
you
got
that
in
memory
and
then
you
can
do
that
again
and
load.
Another
segment
of
whatever
is
data
you
need
to
load,
which
is
cheaper
than
doing
like
x
code
load
index,
one
two
blah
blah
blah,
or
at
least
it's
easier.
I
don't
know
why,
but
that's
the
type
of
data
contract
which
is
actually
executable
but
which
does
contain
like
white
random
data,
although
it
should
not.
I
mean
the
control
flow
graph
should
not
lead
into
that
random
data.
D
Well,
basically,
what
if
you
look
at
the
if
you
think
about
the
this
particular
contract?
D
So
when
we
do
delegate
call
right
from
the
point
of
your
control
flow
graph
it
it
doesn't
really
matter
what
this
delegate
quote
does
because
it
just
returns
with
either
failure
or
success,
and
we
know
that
the
top
of
the
stack
will
be
just
either
zero
or
one,
and
then
it
does
not
really
affect
the
control
flow
graph.
So
we
can
see
that
our
analysis
will
probably
show
that
this
is
executable.
D
M
Yes
or
maybe
I
have
brought
you
on
the
wrong
track,
because
I
was
just
trying
to
remember
all
the
contexts
and
it's
actually
multiple
different
discussions
and
ideas.
The
data
contract
question
is
just
one.
The
having
invalid
instructions
is
another
one,
but
it
would
be
perhaps
nice
to
discuss
this
and,
and
I
don't
think
that
all
cordes
is
the
best
place
to
discuss
it.
D
Yeah
yeah,
so
because,
because
basically
we
are
we're
doing
this
control
flow
graph
analysis,
not
sim,
not
just
for
the
chord
localization,
but
another
thing
we
would
like
to
use
it
for
is
for
looking
for
data
dependencies
in
the
past
transactions
to
be
able
to
sync
faster.
So
we
want
to
create
the
data
dependency
graph
to
be
able
to
parallelize
the
execution
and
all
these
kind
of
things.
M
Maybe, as a closing question: should we consider a brainstorming call, like one of those breakout things, or just have a text discussion on the AllCoreDevs channel? What is the best way forward?

D
I would say that it depends on the priorities, to be honest. We could probably have some sort of background discussion going on about it.

D
For one, I think we don't need more Discord channels. So if there are people who are interested in the discussion, I think we could self-organize: if people have time and it's a priority, we can talk about it. Because what I don't want to do is create a discussion for something which we're going to do in, like, one year, at the detriment of something we have to do in the next two months.

A
Okay, sounds good, and yeah, we can just use the basic AllCoreDevs channel for that. So, next up on the agenda were the YOLO networks. James shared some specs for both YOLO v2 and YOLO v3; let me just post those in the chat. I guess, first of all, it's probably worth asking the different client teams where they're at with YOLO v2, and then maybe we can have a follow-up conversation on v3 and the status there.
A
So
for
v2,
which
was
basically
the
eip2537,
the
bls
curve,
2315
the
subroutine
and
then
29.29
the
gas
cost
changes
yeah
artem
I
see
you're
on
the
call.
Turboget
is
not
doing
yolo
v2
right,
no.
D
No
we're
not
collecting
that
okay
yeah
we're
not
we're
not
doing
yolo
v2
at
the
moment,
because
again
we
there's
other
things.
We
have
to
fix.
Okay,.
A
Yeah
and
then
sorry,
I'm
going
on
order
on
the
screen.
I
see
here
so
martin,
any
update
from
guest.
C
C
It's
pretty
slow,
I
don't
know
why,
but
so
far
I
haven't
found
any
differences
and
as
for
test
updates,
I
know
there
has
been
work
done
on
the
converting
the
these
standard
tests
with
two
29
29
rules
on
you,
yolo
v2,
and
that
seemingly
has
a
pretty
large
piece
of
work
because
it
changes
all
the
expect
sections
or
rather
a
lot
of
expect
sections
start
to
fail,
and
thus
the
tests
are
do
not
get
filled
properly
and
dmitri
is
working
on
that
and
also
dano
has
done
some
work
on
that.
N
Yeah
I've,
as
is
dimitri,
gets
him
checked
in.
I
bring
him
over.
I
film,
with
bisu,
and
I
run
him
on
gas.
I
haven't
seen
any
differences
yet
since
we
got
those
last
few
issues
relating
to
the
constant
values
resolved.
N
So
as
far
as
reference
test
goes,
it
looks
fairly
solid,
yeah
and
so
basically's
got
all
three
of
the
things
for
yellow
v2
and
we're
ready
to
go
on
it.
As
far
as
the
evm
test
I
updated
on
the
bug,
I
think
it's
because
when
we
use
the
bitcoin
ec
k256
whatever
it
is,
we
do
the
default
randomization
to
prevent
side,
channel
attacks
and
I'm
thinking
that
it's
blocking
on
native
entropy
we've
seen
that
before
as
some
of
our
unit
tests,
so
there's
a
a
special
flag.
N
C
Yeah
just
another
thing
that
might
be
worth
mentioning
is
that,
in
my
opinion,
the
you
know,
2929
gets
huge
coverage
from
the
existing
22
000
tests,
so
we'll
have
to
look
into
if
there's
something
that
is
not
being
covered.
But
you
know,
since
it
changes
pretty
a
long
list
of
actual
normal
op
codes
and
doesn't
introduce
any
new
things.
C
I
would
say
that
it
gets
great
coverage
from
the
existing
test.
N
C
C
A
Cool,
so
I
guess
yeah
that
covers
it
for
the
base
you
update
as
well
dragan.
I
hope
I
got
your
name
right,
you're
from
open,
ethereum
right.
O
Basically,
we're
not
going
to
participate
in
yoga.
Here's
some
current
tasks
that
we
need
to
focus
on.
O
K
A
Okay-
and
I
think
that's
everybody-
anyone
else
want
to
speak
up
that
I
might
have
forgotten.
P
I
have
a
request
that
is
the
icip
meeting,
like
as
a
part
of
new
process
that
we
are
starting
for
eip
standardization
and
network
upgrade
the
consensus
there
was
to
have
the
change
in
the
statuses
of
eip,
and
that
should
be
reflected
at
the
time
when
we
are
getting
into
the
f
mirror
or
into
the
test
net
and
all
like
at
at
this
point
of
time,
all
the
eips,
the
two
eip,
two
three
one,
five,
two
five,
three
seven
and
two
nine
two
nine
all
are
in
the
status
of
traff.
P
But
now
the
work
has
enhanced.
I
mean
like
we
are
looking
into
devnet.
We
just
want
to
bring
into
the
attention
of
authors
that
it
would
be
worth
considering
the
change
of
a
status
and
they
can
make
their
request
change
without.
It
is.
P
I
Review
means
that
the
spec
is
at
a
point
where
other
people
should
start
looking
at
it.
Draft
is
really
meant
just
for
I'm
working
on
this,
and
maybe
one
other
person
is
looking
at
it,
but
no
one
should
look
at
this
because
it's
not
done
review
means.
We
think
it's
done
as
an
author.
Other
people
should
now
take
their
time
to
look
at
it
pretty
much.
Anything
that
has
made
it
into
this
call
is
probably
should
be
reviewed
by
that
by
now
like
making.
A
Okay,
good
to
know
so
yeah
for
the
authors
of
of
all
the
stuff,
that's
being
discussed
for
the
yellow
networks,
definitely
worth
moving
the
eeps,
and
so
in
terms
of
next
steps
for
yolo
v2,
it
seems
like
geth
and
bazoo
are
going
to
keep
working
on
adding
tests
and
setting
up
the
network.
Then
another
mine
will
probably
join
shortly
after
and
that'll
kind
of
be
it
on
the
last
call
we
discuss
a
potential
yellow
v3,
I'm
not
sure,
that's
worth
you
know.
A
If
discussing
now,
given
we're
still
kind
of
wrapping
up,
yellow
v2
or
I
don't
know
what
do
people
think
the
yellow
v3
was
basically
yellow,
v2
plus
2718
the
type
the
transaction
envelope
and
2930
the
optional
access
lists,
but
it
feels
like
on
the
last
call.
I
don't
know,
people
have
like
different
opinions
about
how
valuable
this
would
be,
whether
or
not
this
list
made
sense.
A
So
yeah,
I'm
just
curious.
If
people
have
just
general
thoughts
about
the
idea
of
a
yolo
v3
and
whether
we
should
do
it
or
not,
then
if
so,
whether
the
current
list
of
each
on
it
makes
sense.
K
Actually,
the
team,
since
you
mentioned,
yellow
v3
and
2565,
is
not
on
the
list.
I
think
then
it
means
that
it's
in
yellow
v2.
So
let
me
add
something
I
was
finally
running
the
benchmarks
in
nethermine
for
2565
and
surprisingly,
for
most
of
the
operations
on
the
modex.
The
previous
gas
pricing
was
better
aligned
with
the
results
that
I've
seen
in
nethermind
and
we're
using
this
like
built-in
library
for
big
integer
exponentiation.
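For reference, the repricing being benchmarked here is EIP-2565's replacement for the EIP-198 ModExp formula. A sketch of the proposed pricing as I read the draft around the time of this call (the constants may have shifted since):

```python
import math

def mult_complexity(base_len: int, mod_len: int) -> int:
    # Cost of one multiplication grows with the square of the word count.
    words = math.ceil(max(base_len, mod_len) / 8)
    return words ** 2

def iteration_count(exp_len: int, exp_head: int) -> int:
    # exp_head is the first (up to) 32 bytes of the exponent as an integer.
    if exp_len <= 32:
        it = 0 if exp_head == 0 else exp_head.bit_length() - 1
    else:
        it = 8 * (exp_len - 32) + max(exp_head.bit_length() - 1, 0)
    return max(it, 1)

def modexp_gas(base_len: int, exp_len: int, mod_len: int, exp_head: int) -> int:
    return max(200, mult_complexity(base_len, mod_len)
               * iteration_count(exp_len, exp_head) // 3)
```

Tomasz's point is that any formula like this is calibrated against some implementation's throughput; if a client's big-integer library is much slower than the one used for calibration, the cheaper prices turn that client into the bottleneck.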
Q
Sure, yeah. So I think the quick update is: it looks like Besu has now implemented it, and the test vectors are matching what they're supposed to be with the new pricing, so good progress in that direction. As Tomasz mentioned, when he ran the benchmarks and we looked at it with the new pricing, there were some items that were down...

Q
...there's one, I think, as low as seven or eight million gas per second. So, unfortunately, this is another case, sort of as we had with OpenEthereum, where the native, or standard, modular exponentiation library is not as performant as it could be. We're just starting to look at what that looks like for .NET and what the alternatives are.

Q
I guess, that being said, I would personally be interested, if YOLO v3 happens. We know that there could be a sort of denial-of-service vector, given that some of these are down near 10, but I think, to the extent that we could test it from a cross-client perspective, making sure everything works, assuming we can fix this .NET performance problem, that would be good.

C
Just to note: when you mention, like, seven or eight million gas per second, how does that compare to the other precompiles? I think that's an important metric.

Q
This is one of the major things that, I think, and maybe it's something that I could potentially spearhead moving forward: as we move forward, and these precompiles are more important, right, for things like rollups, and are getting more use, the gas repricing situation is not crystal clear.

Q
We've got multiple different clients. You know, is 15 million gas per second the right number? That's what ecrecover is on Geth; it's as low as 15 million. Other folks are targeting 30 or 40. There's no standard: are we on a four-core MacBook at two gigahertz? Are we on an eight-core at five gigahertz? So I think one thing, at least from my perspective, that would really help on these repricings...

Q
...moving forward, is maybe even to have an EIP to specify a reference configuration for how to reprice these things, because I think it's very hard to do apples-to-apples on any of these things when people are running them at different frequencies and different numbers of cores.

Q
And I think that's a great point. I mean, if you go back and look at some of these other EIPs, some target to try and get them on par with ecrecover. But, as I mentioned, with something like Geth that's closer to 15 million gas per second, whereas Nethermind's been targeting 30 or 40. Well...
G
Well, the problem with non-absolute values is even worse. As Martin mentioned, they use SHA-256, and the result of the measurements for the 2565 EIP is quite different on different platforms and implementations. And even now, if we continue to use ecrecover as something like a baseline, and we assume that everyone uses the same library: first of all, you can compile it differently to get quite different performance...

G
...if you really want to. Second, with the recent optimizations there, depending on fast multiplication, you will get a boost there of a factor of 30 at least, if you try to estimate the gas price as a gas-per-second constant based just on the C implementation of ecrecover.

C
If the miners have decided it's 50 million gas, then it doesn't matter what we execute; it takes roughly the same amount of time whether it's a simple loop or a precompile. That's, like, the goal, and if it takes too long a time, then they will have to lower the gas limit to make it faster again. And as long as we have it balanced, it's not like someone can...

C
...some blocks will suddenly take a minute just because, hey, someone used a precompile to do denial of service. As long as it's balanced on your everyday hardware, whatever that is. So I don't think we should have this idealized reference hardware; I don't think that will work as well.

G
Then, what to use as a measurement? Because even if we try to do it with the current ones, there are two options. One: we measure each of those and calculate some gas-per-second constant based on each of those, and then we have to, ideally, linearly reprice each of those up or down, assuming that the initial formulas for gas are even correct, and they are not, for some functions.

G
So at least fix some baseline, like 3.5 GHz with turbo boost off, to get something; that was basically what was done for 2565: say that there is a constant, benchmark everything on your computer and set a gas formula. And it works forward perfectly, but it doesn't work backward, because you cannot say that your formula was correct in the first place, especially if it's for variable-length inputs.

D
Because what I think we should expect is this: let's say they are now calibrated to whatever, 20 megagas per second, or over 30, whatever that number is. That means that if we were to lift the gas limit for any reason, then one of the things we'd have to do is raise the cost of these operations again, because otherwise they would be a denial-of-service vector.

D
We should look at it like this: reducing the cost of an operation is not the only way to make it relatively cheaper. The other way to make it relatively cheaper is to increase the gas limit. So we should look at it from both perspectives, not just from a single perspective, yeah.
K
So what I'm just suggesting is that, after the repricing, the ModExp with the current library in Nethermind becomes a bottleneck; the repricing would make it worse. Which means that we just have to think about whether we can look for a different library here.

A
Okay, so Nethermind, you'll look at a different library, and I think it's worth having the conversation around what the right benchmarks are offline, after this call.

Q
Yeah. I mean, maybe one question for Tomasz: in terms of YOLO v3, do you think that you could run forward with 2565 using the current library for now, just to get the cross-client tests of EIP-2565?

K
I mean, since we're changing only pricing, there's almost no benefit in doing the cross-client testing. Everything will be covered, probably, by the consensus tests. It's not the particular type of EIP that benefits so much from a cross-client testnet. It's no problem to add it, but it sounds more like pushing it into the testnet with a hope that it would be there, rather than actually benefiting from testing it.

K
I think it's a relatively easy one to include, and an important one, and I think what we should do, and what would make it more likely to be included, is just to suggest one library, some C or C++ library that can be compiled and added. And it should be super simple: we don't use this big-integer library, the native one, anywhere else in Nethermind, because for all the 32-byte operations we now have a separate UInt256 library. So this is the only place, because here we can have arbitrary-length numbers.

A
Okay. And so, yeah, for YOLO v3: I guess we had a list last time, which had 2537, 2315 and 2929, which is YOLO v2, and then added to that 2718 and 2930. Given that only Geth and Besu have done YOLO v2, and other clients are not doing it, is it worth having a larger YOLO v3? How do the different client teams feel about that? Are there other things we want to include in it? Yeah, what are people's thoughts on that?
K
So
if
if
people
will
be
interested
and
think
to
include
it,
then
this
is
the
one
that
I
want
to
propose.
But
apart
from
that,
I
I
don't
have
strong
opinions
about
yellow
v3
v2,
so
in
between
the
bezel
gas
and
other
projects.
P
Yeah,
so
one
thing
that
the
presentation
thomas
was
talking
about
it's
about
people
and
we
invite
all
the
client
team
and
people
who
have
any
questions
on
these
particular
proposals
earlier.
We
also
did
an
episode
for
2565.
I'll
share
the
link
in
the
agenda
for
people
to
refer
and
for
the
next
proposal.
D
I
I
For
yellow
v3,
just
as
a
reminder
this-
I
don't
know
if
this
came
up
in
all
corners
or
one
of
the
other
media.
Many
meetings
that
we
now
have
yellow
v3
appears
has
turned
into
pre-berlin,
and
so
I
know
the
original
intent
that
ever
that
we
wanted
for
yolo
was
it's
yolo,
doesn't
matter
what
goes
on
here?
It
doesn't
mean
it's
going
into
berlin,
but
that
has
changed
just
naturally
to
this
is
pre-berlin.
So
if
something
doesn't
make
it
into
yolo
v3,
then
it
sounds
like
it
probably
won't
make
into
berlin.
I
A
A
K
Yeah,
I
think
it's
a
good
example.
Actually
I
believe
that
two
five
six
five
can
make
it
to
berlin
without
participating
in
the
multi-client
test
net.
It's
just
a
matter
of
finding
one
more
library,
not
too
difficult.
We
just
need
to
resolve
this
small
problem
and
testing
should
be
entirely
encompassed
in
the
consensus
test.
A
And
I
guess
the
other
option
is
we
don't
have
a
yellow
v3
and
we
just
you
know,
come
up
with
the
final
list
for
berlin
in
a
couple
weeks.
The
people
feel
like
that
would
be
a
better
approach.
A
Because
I
know
I
see
james
press
switches
on
the
call-
and
I
know
last
time
around
you
had
some
questions
about.
Was
it
2539
your
other
bls
curve,
I
believe
and
and
the
fact
that
it
was
quite
similar
to
the
one
included
so
yeah?
I
I'm
curious
what
like
from
someone
who's
a
bit
more
like
outside
the
process
than
just
trying
to
get
something
in
what?
What
do
you
think
would
provide
you
with
the
best
outcome.
K
2718, I think, is the kind of change that definitely requires multi-client testing. It can be in a separate, EIP-specific testnet, which is what I wanted to suggest: to focus more on the EIP-specific testing.

J
Okay, sorry, I've been holding my tongue through the conversation a little bit, and then was muted when I tried to talk. So, I don't want to be just someone outside this process that's trying to get something in; I'd like to be sticking around long-term and contributing, so I'm going to speak against my own interests a little bit here.

J
It feels like every two weeks we move the goalposts back on YOLO v2 and YOLO v3 by two more weeks. So we decided, last week... sorry, two weeks ago, that there would be a YOLO v3 and that its EIP list was set, and now we're kind of reopening that and questioning the decisions we made two weeks ago.

J
I would love to get some kind of defined process around this, so that issues, like, you know, me coming in as an outsider with an EIP, so that issues like what EIPs get in, and what the deadlines for those things are, are set, and we can know those in advance, so we don't end up with this kind of process mess every two weeks.

A
Yeah, there is kind of a tension there; I strongly agree on that. Danno had put an EIP forward, I think a year ago, that tried to address some of that, EIP-1872, where we could kind of preset some dates in the future for when we would actually have the mainnet hard forks, and that allows you to go backwards and set deadlines for stuff. And I guess the reason why this process for Berlin is maybe a bit clunky is that it's the first time we've tried these YOLO networks.

A
We didn't have a fixed date for Berlin, so I agree it kind of leads to growing and reducing the scope and pushing things back. If what people care about most is just having things move forward, then yeah, I think it makes sense: we keep YOLO v3 as is, and then maybe we have one last, like, pre-Berlin...

A
...it wouldn't even need to be a testnet, I guess; it would just be, like, running the state tests. Do people feel like that would be a better approach? So we do YOLO v2, which is basically done, you know, in the next week; start implementing YOLO v3, which I suspect will take us another two-ish weeks; and maybe set a final date for inclusion into Berlin, for stuff like 2565, to be resolved.

C
I'm not so sure. For me, I think 2718 seems big. Sorry: 2930 depends on 2718, and there is a very strong belief that we should not do 2929 unless we also include 2930, and there's also a strong belief that we should include 2929. So those three basically come together as a package: either we include all three or we don't include any of them, is kind of the situation.
I
Yeah, the fear, for those that haven't been following along, is this: 2930 will introduce access lists, which will make it possible for your contract not to become completely inaccessible because of the 2929 gas repricing, because you'll be able to use an access list to get those gas prices back down to where they were, if you absolutely need to; but no one is required to use them. So the fear is: if we do 2929 without 2930, then there may be contracts that are completely inaccessible, because it's now too expensive to call them.
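A sketch of how the two EIPs interlock as described here: 2929 makes the first ("cold") touch of an account or storage slot in a transaction expensive, and a 2930 access list lets the sender pre-warm exactly the entries a call needs. The constants are from the EIP drafts under discussion; the class is an illustration, not any client's implementation.

```python
COLD_ACCOUNT_ACCESS = 2600   # EIP-2929: first touch of an address
COLD_SLOAD = 2100            # EIP-2929: first read of a storage slot
WARM_ACCESS = 100            # subsequent touches
ACCESS_LIST_ADDRESS = 2400   # EIP-2930: per address declared up front
ACCESS_LIST_SLOT = 1900      # EIP-2930: per storage key declared up front

class AccessTracker:
    def __init__(self, access_list=()):
        self.warm_addrs = set()
        self.warm_slots = set()
        # 2930: entries declared in the transaction start out warm.
        for addr, keys in access_list:
            self.warm_addrs.add(addr)
            self.warm_slots.update((addr, k) for k in keys)

    def account_access_cost(self, addr) -> int:
        if addr in self.warm_addrs:
            return WARM_ACCESS
        self.warm_addrs.add(addr)
        return COLD_ACCOUNT_ACCESS

    def sload_cost(self, addr, key) -> int:
        if (addr, key) in self.warm_slots:
            return WARM_ACCESS
        self.warm_slots.add((addr, key))
        return COLD_SLOAD
```

Declaring a slot costs 1900 gas up front but turns a 2100-gas cold SLOAD into a 100-gas warm one, which is how an access list caps the worst case for callers that would otherwise break under the repricing.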
K
I think 2929 is the only EIP at the moment with enough of a feeling of urgency; that's why we delay everything else, just to have 2929 packaged properly, and it ended up being slightly more difficult than we were planning. It's not just raising gas prices; now it's trying to solve a few more issues, and it comes with other things.

A
So this one is already included, but I think that 2537, the BLS precompiles, was another pretty important one. Even though we might not ship Berlin prior to the deposit contract going live, deposits into eth2 will keep coming, and I think there's a lot of value in being able to validate those deposits on chain.

I
Are there people who still demand 2930 in order for 2929 to go in? I don't know that we actually know this will break any contracts; there's just a fear that it will. And I personally am okay with breaking things, but I know I'm much more aggressive than most people here. So is that still desired? Do we still demand 2930 if we're doing 2929?

R
I just wanted to very briefly raise the question: how confident are we that there are indeed any contracts that would basically break under 2920... which, I think I got the number wrong, but... because...

R
Adding a new transaction type with access lists: I think there are concerns around the usability of access lists in general, with some use cases that might not lend themselves to static state access; there are concerns there about which to use anyway. And then, obviously, it is a rather large commitment, adding a new transaction type.

C
Yeah, so one thing that we could do, now that I'm fairly certain that the implementation is in line with the specification, since I've matched it with Besu, is that I can redo the same analysis that I did on Görli. That is: take a couple of blocks, re-run the transactions under the 2929 rules and see if they start failing, and if they start failing, see if they also would have failed if more gas had been given externally. I'm sure I will find a couple that will still fail.
C
So yeah, such an analysis I can do; it's quite intense.

C
I mean, it's a lot of work to weed out some of the results, but if you want some extra clarity on whether these cases are actual, I can see if I can find a couple of such cases. Although, if I don't find them, we won't know that they don't exist, because, you know, I can't do it for the entire chain.

C
No, well, it would basically be like full syncing, but a lot slower, yeah.

D
We do have a way to do it, because we actually do this for call traces, and at the moment I think it would run for about two days or something to do something like that. So there is a possibility to do that. Although I wanted to actually suggest a potential alternative to EIP-2929, because, as a lot of people know, I'm still mildly opposed to it.

D
I was going to ask, because we started to look at this more specifically a couple of weeks ago, and one thing we did is look at using some kind of filtering, bloom filters for example, and we were able to successfully defend against the most potent attacks, the ones that EIP-2929 is basically designed to protect against.

D
This filtering needs some nuances, but essentially it's a way to do it without changing the rules. Although there are still attacks that hit not the absent data but the existing data; those attacks are a bit harder to mount, and you probably cannot sustain them for a long time, but we are looking into those as well. So I still think that EIP-2929 is too complex for what it needs to do.

D
And actually, we will share, because we're doing an analysis now on different types of filtering. What we have already done, which works: we've done a very simple implementation of a bloom filter. We took, let's say, a half-gigabyte bloom filter, and also a quarter-gigabyte bloom filter with 15 hash functions, and that's only for accounts, and the half-gigabyte bloom filter is able to protect perfectly from the...

D
So basically, it has zero false positives when you try to read non-existent accounts. If you reduce it to, let's say, a quarter gigabyte, then the false positive rate is something like 0.01 or so. And then we're also going to look at two other filters, like cuckoo filters and stuff like this, because they allow deletions.
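A toy version of the filter described here: an m-bit array with k hash functions sitting in front of the state database, so a read of a non-existent account can be rejected without touching disk. The half-gigabyte and 15-function figures are the speaker's experiment, not a specification, and BLAKE2b personalization is just one convenient way to derive the k independent hashes.

```python
import hashlib

class Bloom:
    def __init__(self, m_bits: int, k: int):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        # Derive k independent bit positions via BLAKE2b personalization.
        for i in range(self.k):
            h = hashlib.blake2b(item, person=i.to_bytes(8, "big")).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item: bytes) -> bool:
        # False means definitely absent: the database miss is skipped
        # entirely, which blunts the "read absent accounts" attack.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))
```

The cuckoo filters mentioned trade a little space for the ability to delete entries, which a plain bloom filter cannot do when accounts are destroyed.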
D
But
generally,
I
think
we
might
be
able
to
find
a
way
to
kind
of
at
least
protect
against
those
attacks
and
move
on
to
the
next
level,
which
is
the
kind
of
more
expensive
attacks.
But
it
could
still
be
executed
and,
as
I
said,
that
it
doesn't
require
the
change
of
rules
and
doesn't
require
this.
Basically,
I
think
the
ip
is
becoming
a
bit
too
complex.
C
D
A
It
so
we
have
basically
two
minutes
to
go
sorry
to
kind
of
jump
in
here.
I'm
not
sure.
What's
like
the
best
next
step
here,
because,
yes,
we
do
have
the
lesser
yellow,
v3
and
yolo
v2,
but
is
it
even
worth
implementing
all
this
stuff?
If
there's
a
high
chance
that
we
don't
end
up
doing
it,
I'm
curious
what
yeah,
what
people
feel
is
like
the
best
next
step
for
29.29
and
then
the
bundle
of
27,
18
and
29
30.?
C
No
sorry
so
I
was
just
kind
of
saying
all
that
I'm
still
a
pro
2929
and
2930.
D
Oh,
no,
that's
not
necessary
because
you're
you
don't
have
to
sink
from
zero
to
to
construct
the
filter.
You
need
to
just
simply
iterate
through
the
state
and
build
the
build
it
initially.
So
that's
that
that
could
be
done
or
you
can
even
like
what
you
can
do
as
well
is
that
you
can
download
the
the
pre
well,
not
pre-made,
but
because,
because
actually
everybody's
balloon
filter
has
to
be
slightly
different.
N
N
Q
A
Okay,
so
I
guess
we
should
probably
also
just
continue
that
conversation
on
the
core
devs
chat.
I
know
this
is
yeah
not
kind
of
a
great
outcome
in
terms
of
clarity
of
process,
but
it
feels
like
it's
it's
probably
worse,
to
push
like
a
yellow
v3
forward.
If
we
don't
have
clarity
on
that,
especially
given
that
the
implementation
on
yellow
v2
is
is
not
fully
done.
Hopefully,
we
can
agree
kind
of
maybe
async
on
what
the
yellow
v3
spec
would
look
like.
Does
that
make
sense
for
everybody.
K
It's a bit more complex: it has some dependencies, it doesn't have an agreement around it, and everything for Berlin, whether it's YOLO v2 or v3, if we want to bundle the EIPs together, everything collapses around 2929.

K
I suggest we stop binding the EIPs into YOLO v2 or v3 and allow people to actually push for EIP-specific testnets that will be able to show that two or three clients can sync on a particular EIP, for example 2718, or 2565 (although this one doesn't even require a testnet), and things like whatever other people are suggesting, because they're waiting for Berlin to be defined. Like here: James comes over and says there is something that is potentially important for them and potentially useful for the network, and it's totally not dependent on 2929, but they are locked there, waiting for that, and I feel paralyzed.

K
So we need to have a path for them, if they want to spend the time, to provide a working prototype that shows "this is how we can implement this change" and for all the clients to agree. So I have a proposal for this, and in a way it's iterating over two great things that we were discussing in the last year. One was the EIP-specific upgrades, which is fine, but it only gets much, much stronger with the EIP-specific testnets that we started introducing recently with EIP-1559.

K
I think, by binding these two things together, we can show an exact path for any external team: how they can deliver a fully tested, described and analyzed EIP specification that can be easily pushed all the way to mainnet, if everyone agrees that it's beneficial for the network, for particular use cases, or an improvement, and stop bundling things and making them delayed just because some things are not agreeable.

J
Oh yeah, it's been running for a couple of weeks. We've been fuzzing the implementations against each other.

A
Yeah, so maybe, again, we won't resolve this in the three minutes left on the call, but James, if you can share that testnet on the core devs Gitter chat, that would be great, so people can have a look at it. Yeah, sorry, did I say Gitter chat? On the Discord, that would be good. And then the other thing that was on the agenda, which we can maybe very quickly cover, is that we were talking on Discord yesterday...

A
...I think, about the fact that most clients seem to be working on a sort of flat database approach: Geth has snapshots, at Besu we have Bonsai tries, Nethermind is working on something as well, and Turbo-Geth is obviously architected like that from the start. And because we're all working on kind of different flavors of it, it might make sense to set up a call to just discuss that and share notes on it.

A
Would people be up for that, generally? Then we can maybe find a time on the AllCoreDevs chat, if that makes sense.

A
Okay, then: thanks, everybody.