From YouTube: Ethereum Core Devs Meeting #77 [2019-12-13]
B
Okay, so now that we've got everyone on here, let's talk about the Istanbul update that happened about a week and a day ago — or less than a week ago, I suppose. Is there any update from anybody? It looks like it went well. We had a community call during it that I thought went really well, and according to the ethernodes website, the last time I checked, I think it was like 95% of nodes that had updated. Oh — I'm seeing, yeah, it's 97% of nodes now that have updated.
B
Okay, so now we're on Muir Glacier. If you click on that link, you'll see that it is in last call, and we're gonna change that to final. The review ended yesterday, and no one raised any concerns that I'm able to see, in the Magicians thread at least — I'm guessing they would have left it there instead of in a PR. So.
C
One concern that I think was on the Magicians thread is that some people felt that the four million blocks was too big of a push, right? So there were a couple of comments — I'm pretty sure it was either on the EIP thread or on the hard fork thread, or on the actual EIP's thread, I'm not sure — but there were just a couple of people saying that four million might be too far in the future, and they just wanted to make sure we thought about that.
D
Maybe a couple — two or three — comments that we released it too soon, and two or three people said that they don't really like that we pushed it back too far. But generally, the rationalization that they posted was that this means we are all of a sudden delaying Ethereum 2.0. So for some reason people kind of link this delay to Ethereum 2.0, and from my perspective the two things are completely unrelated.
C
I think those were some of the comments. I think I just posted the thread I was referencing to you. I think the one concern that at least, I don't know, two people seemed to have was: if we push the bomb so far, does it make it harder to ship stuff like the finality gadget, which will potentially reduce issuance for miners — and what's the miners' incentive to upgrade to that
C
if there's not the bomb in place anymore? So it seems like there's at least a little bit of concern on, like, the eth1 side, or at the very least there's, like, a link between eth1 and eth2. So yeah, I shared the thread. I don't want to, you know, kind of put words in people's mouths, but that seemed to be another concern. At the same time, you know, it's like one or two people had those concerns.
B
Okay, sounds good. Let's see — I think that that is fine, then. I think we can go ahead and just continue to mark it final, because there seems to be some misunderstanding of what the implications would be to 2.0. Like Peter said — anybody opposed?
D
My reply to these people was — and I kind of stand by it — that they're kind of pretty late to start debating whether four million, or three point nine, or however much, would be more appropriate. So my suggestion is that we go ahead with whatever, so that we can just release the thing and make sure that it still functions in a month, and then, if somebody feels really strongly that this is a bad number, I think it's completely fine to adjust it in the next fork — if somebody is really opposed to it for some reason.
B
I believe it'll be before the four million blocks hit, as we already talked about, so I think that's sufficient, personally, for dealing with the delay and adjusting it if necessary, based on community feedback, once that happens — or once we get, like, past Muir Glacier. I'd also encourage people who do feel like it's not accurate to make another EIP that either adjusts it, takes out the difficulty bomb, extends the difficulty bomb, whatever you want to do, so that those could be up for consideration after Muir Glacier.
C
I guess, yeah, one thing I'll just add: the reason I think we should still go to final is that the absolute worst-case scenario seems to be that, you know, if it's true that miners will not upgrade unless faced with the difficulty bomb — which I'm not personally convinced of, but anyways, if you take that as a given — then it seems like, in the worst case —
C
What you get is something like: the finality gadget won't go live until, you know, the bomb kicks in, which is, like, even one and a half years out or something like that. I'm not sure how quickly the finality gadget could go live, but I doubt you'll see that in less than, like, six to twelve months. So the absolute worst-case scenario seems like you add maybe another, like, six-ish months before you can deploy the finality gadget.
D
I don't think that's realistic either, because — so I think this whole "miners will not upgrade" thing kind of originates from the Bitcoin world, which is a fairly stable protocol. So people expect that to not change at all, and you require quite a lot of force to change it. But in the case of Ethereum, people are more accustomed to regular updates. So even now we have two hard forks lined up, and we have the crypto stuff that we can talk about, actually, I think, afterwards.
B
Sounds good. So, let's see — two weeks is the 27th. I'll just go ahead and say I think we should have a meeting in two weeks. It can be very, very light, but just to talk about Muir Glacier. And I'd probably recommend that people who want to bring their EIPs maybe should wait for a different meeting, since it'll probably be very lightly attended.
G
I mean, just a friendly reminder of what the EIP is: the purpose of it, the high-level purpose, and how it achieves that high-level purpose. The high-level point — it's an EIP by Vitalik — the point is to add stability to the gas-ether price, or the ether gas price, and it also has some other useful side effects, in terms of, you know, removing zero-fee transactions. And the way it does this is, basically, we add a base fee.
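(For illustration, a minimal sketch of the two-component fee just described: per unit of gas, the base fee is burned and only the premium goes to the miner. The function and field names are illustrative assumptions, not types from the EIP-1559 spec or any client.)

```go
package main

import "fmt"

// splitFee sketches the fee split described above: the base-fee portion is
// burned, and only the premium (tip) portion is paid to the miner.
func splitFee(gasUsed, baseFee, premium uint64) (burned, toMiner uint64) {
	burned = gasUsed * baseFee
	toMiner = gasUsed * premium
	return
}

func main() {
	burned, tip := splitFee(21000, 4_000_000_000, 1_000_000_000) // 4 gwei base, 1 gwei tip
	fmt.Println("burned (wei):", burned, "to miner (wei):", tip)
}
```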
G
We bring the gas pricing under consensus, and then each transaction, instead of having a single component in terms of the fee that goes to the miners, now has two components: some of the gas fee is burned and some of it goes to the miners. And that's basically it. It's actually a set of — you know, it's a phased rollout. So Vitalik wrote the "skinny" 1559, which is not what we implemented. We implemented
G
what was in the implementation study, which is two phases, basically, because, you know, as you can imagine, changing how transactions work — oh, and another major benefit is that it really simplifies things: you no longer need ETH Gas Station. It really simplifies the user experience; it's much easier to figure out what your gas fee is going to be in advance. And because it changes how transactions work, you know, you can't just flip a switch and expect all of the downstream tooling to have changed overnight. That would be — you know, to me that was a much greater risk, and so we've made it two phases, so that there's a period of time where both transaction types are valid. That adds some complexity, but I think it's necessary to actually get adoption without breaking everything. Were there any questions? Yes.
G
Well, let me actually — that's inaccurate. What I had hoped would happen was that we'd actually do some modeling and simulations, to sort of prove that this isn't going to blow everything up, but we couldn't get funding for that. What we did get funding for was the implementation. So we wrote an implementation, and in that process there were changes that needed to be made — and, you know, the EIP update is forthcoming — but those were separate tracks.
F
So it has this base fee and a premium, and I don't really understand what prevents all the miners from basically colluding and setting the base fee to zero, and still only accepting transactions which have the highest premium — falling back, basically, to the same situation we're at today. Yeah.
G
So I think — I think before I answer that question: yeah, I completely understand what you're saying about the EIP, and in the review, in my discussions with people, to be frank, it seemed as though this was the shortest path to having a discussion. Frankly, since no one responded to any of my comments or anything else — and when I say no one responded, I mean very few people responded — it didn't seem... I was having a difficult time
G
having a discussion about it. Okay, okay — I don't expect there to be a decision today. So — so thank you, yeah. So, there's an averaging of the base fee over a large number of blocks, and the idea is that the miners would need to have an overwhelming amount of the transaction volume for a long period of time to adjust the price. You know, they'd basically need to be the majority of demand. So, you know, it's kind of weird, but —
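(A hedged sketch of the kind of base-fee update rule being discussed: the fee moves toward a usage target slowly, so a colluding minority cannot push it down unless it dominates demand for a long time. The constants are illustrative; the published EIP-1559 rule uses a per-block bounded adjustment with denominator 8, not necessarily this exact form.)

```go
package main

import "fmt"

// nextBaseFee nudges the base fee toward the usage target, bounded by a
// fraction of the current fee per block.
func nextBaseFee(baseFee, gasUsed, gasTarget uint64) uint64 {
	const denom = 8
	switch {
	case gasUsed == gasTarget:
		return baseFee
	case gasUsed > gasTarget:
		return baseFee + baseFee*(gasUsed-gasTarget)/gasTarget/denom
	default:
		return baseFee - baseFee*(gasTarget-gasUsed)/gasTarget/denom
	}
}

func main() {
	fee := uint64(1_000_000_000)
	for i := 0; i < 5; i++ {
		fee = nextBaseFee(fee, 12_000_000, 10_000_000) // sustained usage above target
		fmt.Println("block", i+1, "base fee:", fee)
	}
}
```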
G
You know, they're taking some average number of blocks, and these are exactly the sorts of questions that I thought were very difficult to answer. And this attack that you pointed out — we sort of have a sketch of a solution, but I felt like, given the importance of the change, we needed, you know, a lot more engagement to actually answer that question.
D
The question is: how does this EIP relate to dynamic block sizes? Because, sure, on Ethereum mainnet we kind of have it fixed at ten million currently, but in theory it should have been dynamic. So if we add this, how will these two values interplay? Because, as the blocks are getting fuller, the miners would, in theory, push the block size up, which would make transactions cheaper — and your proposal is doing exactly the opposite.
D
Yeah, what I'm saying is that — so essentially the problem is that, in theory, what the Ethereum protocol specifies is that, if blocks are getting full, the original spec was that the gas limit should be raised. Now you're saying that the price should be raised instead. But I think it's important to touch on what happens, for example, on a network where we don't have this limit — for example, on Rinkeby.
D
Currently we've configured it so that the blocks are 10 million in size, but they are allowed to go up to 15 million if there's high network traffic. Now, in this case, the trigger for pushing the block limit up would be that the blocks are full — but at the same time, in your EIP, this would also trigger transactions becoming so expensive that the blocks won't be pushed up. So I just want to make sure that we're not accidentally murdering an existing mechanism with this one.
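(A minimal sketch of the gas-limit voting mechanism being described: each block may move the limit toward the miner's target by at most parent/1024. This mirrors the mainnet rule in spirit; details are simplified and the numbers are illustrative.)

```go
package main

import "fmt"

// nextGasLimit moves the gas limit toward the miners' target, bounded by a
// per-block step of parent/1024, as in the classic gas-limit voting rule.
func nextGasLimit(parent, target uint64) uint64 {
	maxStep := parent / 1024
	switch {
	case target > parent+maxStep:
		return parent + maxStep
	case target+maxStep < parent: // guard against unsigned underflow
		return parent - maxStep
	default:
		return target
	}
}

func main() {
	limit := uint64(10_000_000)
	for i := 0; i < 3; i++ {
		limit = nextGasLimit(limit, 15_000_000) // miners voting up toward 15M
		fmt.Println("block", i+1, "gas limit:", limit)
	}
}
```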
F
I have another question. So right now there's a cap: we know that even when the blocks are full, we won't go greater than, I don't know, whatever it is. But here, in this proposal, it looks like, yeah, we'll target 8 or 10 million, but actually the hard cap is at three times that amount — so it might suddenly be 24 million; a 24-million-gas block would be valid. Am I reading it right?
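(Worked check of that reading — the 3x multiplier and the 8 million target are taken from the discussion, not from a final spec.)

```go
package main

import "fmt"

// With a target of 8 million gas and a hard cap of three times the target,
// a 24-million-gas block would indeed be valid.
func main() {
	const target = 8_000_000
	const elasticity = 3
	fmt.Println("hard cap:", target*elasticity) // 24000000
}
```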
G
Yeah, I think — again, I completely agree with that, and I think that, yeah, if someone were able to sustain that... I mean, as far as I'm concerned, you know, I sort of volunteered to shepherd this EIP through. I think these, of course — you guys know this stuff better than anyone else, basically. I think these are really great questions, and these are exactly the types of questions that I was trying to surface prior to writing any code.
B
Yeah, and taking this to, like, the Ethereum Magicians thread is gonna be very helpful, I think, to Rick and the rest of his team. So if anyone here has further stuff after looking deeper into the implementation, I think that would be important. And then even more important than that, in my opinion, would be an update to the EIP itself — even if it's not pushed through the EIP process, having a PR that has the changes, Rick, that you and your team have implemented.
D
Before we kind of deflect into a different topic, I just wanted to emphasize it a bit more, because I kind of have a feeling that it's not being taken as seriously as Martin intended. So, currently, the 10 million gas limit that the Ethereum network is running on — well, the reason why it was originally capped at 8 million is because that was considered the only sane limit, so that it doesn't murder the network.
D
And, yes, we did some optimizations, and now people pushed the gas limit up to 10 million. But essentially, we really, really don't want to get into the position where all of a sudden we go beyond what we suppose we can handle — I'm just seeing around the number of 15 million, and after 15 million things start to get screwy. Now, if you all of a sudden allow people to expand to 24 million, then that's going to be really, really bad.
D
The 3x is not a horribly bad idea if you look at the average network usage. So currently Geth can process blocks in — I don't know, honestly, I haven't checked, but maybe around 150 milliseconds. So if you were to 3x that, that would mean maybe half a second. So that's not that bad. But the thing is that this is whatever people throw at it when they are using Ethereum; it's not the worst-case possible attack scenario, and we need to keep the limits in control for that scenario.
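(Back-of-the-envelope check of that estimate — the 150 ms figure is the speaker's rough guess, not a measurement.)

```go
package main

import (
	"fmt"
	"time"
)

// If a full block takes roughly 150 ms to process, a 3x larger block lands
// near half a second of processing time.
func main() {
	perBlock := 150 * time.Millisecond
	fmt.Println("3x block, rough processing time:", 3*perBlock)
}
```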
H
Before this discussion — oh, perfect, go ahead. Yes. Well, first of all, I'm sorry for being a distraction for the last few months, because we were — well, I was busy with the academic side, publishing the paper, which we had to finish. Otherwise, on the implementations: there is only one major roadblock right now, and this is how to measure the gas cost for a precompile call for one particular family, where my initial ideas on how I would do it actually failed.
H
So I was kind of simulating the calls, with a lot of parameters around one big limb just drawn uniformly from the available parameter space, and then I was trying to do kind of multi-parameter fitting. Unfortunately, the dependency on some of the values was a little bit weak, so I couldn't factor them out and get a final formula. So I will now have to do it another way: by first dissecting the function call into subroutines which are independent and which should have quite trivial, kind of a priori, parameter dependencies, and then I will have to just combine the three formulas. But, unfortunately, I have to do all this measurement once again. After this, there are no technical problems: all the same stuff will be ported from the Rust to the C++ implementation, which was done before, and in the same way they will be run through another round of fuzz testing to check the correspondence between those two.
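(A minimal sketch of the gas-cost fitting approach described: measure running times for calls with varying parameters, then fit cost ≈ a + b·param by ordinary least squares. The real work fits several parameters at once; this one-dimensional version and its numbers are only illustrative.)

```go
package main

import "fmt"

// fitLine does an ordinary least-squares fit of y = a + b*x.
func fitLine(xs, ys []float64) (a, b float64) {
	n := float64(len(xs))
	var sx, sy, sxx, sxy float64
	for i := range xs {
		sx += xs[i]
		sy += ys[i]
		sxx += xs[i] * xs[i]
		sxy += xs[i] * ys[i]
	}
	b = (n*sxy - sx*sy) / (n*sxx - sx*sx)
	a = (sy - b*sx) / n
	return
}

func main() {
	limbs := []float64{4, 6, 8, 12, 16}         // e.g. modulus size in limbs
	nanos := []float64{210, 300, 395, 580, 760} // measured time per call (made up)
	a, b := fitLine(limbs, nanos)
	fmt.Printf("cost ≈ %.1f + %.1f*limbs ns\n", a, b)
}
```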
B
Yeah, and I'm looking at the Fellowship of Ethereum Magicians — it looks like no one's commented since August, and the last comment before that was July, talking about getting some test cases. And so the roadblock is still that curve work, in order for you to generate test cases — am I reading that correctly from the Magicians' forum?
H
Well, in the main repository, where there is the Rust code, there are a few test vectors which were dumped just for four known curves, which can be pulled either from various papers or from various kind of standard repositories with curve descriptions. For those, I just dumped the kind of binary-encoded blob, which is in the right form for the input, and then the precompile should return some answer — as I said, either a boolean or just a series of bytes. Those will be available.
H
But right now there are only two kind of implementations — like, almost full-scale implementations — which are both done by us: one in Rust and one in C++. And to test the correspondence between them — to check all this, well, to check for consensus between two different implementations — we do the fuzz testing, which basically first fuzzes the contract itself and then also compares the output results. So for this I can definitely make a huge set of test vectors.
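(A sketch of the cross-implementation fuzzing just described: feed the same random input to two implementations of the precompile and require that they agree byte-for-byte, including on the error case. The two functions here are stand-ins, not the real Rust or C++ bindings.)

```go
package main

import (
	"bytes"
	"fmt"
	"math/rand"
)

type precompile func(input []byte) ([]byte, error)

// fuzzCompare checks that two implementations agree on random inputs.
func fuzzCompare(implA, implB precompile, rounds int) error {
	for i := 0; i < rounds; i++ {
		input := make([]byte, rand.Intn(256))
		rand.Read(input)
		outA, errA := implA(input)
		outB, errB := implB(input)
		if (errA == nil) != (errB == nil) || !bytes.Equal(outA, outB) {
			return fmt.Errorf("divergence on input %x", input)
		}
	}
	return nil
}

func main() {
	echo := func(in []byte) ([]byte, error) { return in, nil } // placeholder impls
	fmt.Println(fuzzCompare(echo, echo, 1000))
}
```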
H
Yeah, well, there are two: one is in my kind of main working repository, which I use right now for the gas schedule estimation, and the other one is also in the Matter Labs github — the C++ implementation, which is named eip1962_cpp. I think I noticed that people from [inaudible] were interested in trying to make an alternative one. I talked to them, I think, three weeks ago, and, I mean, they looked at the spec — the sets of explicit formulas which were also published quite a long time ago.
H
Personally, I would suggest that — because those two implementations are both ours, done by Matter Labs, and they're not kind of very much independent — I would argue that it's much easier to use just one, because it removes a lot of questions about consensus between results; the difference between those two will be very small. So it's still kind of easier to use just one, even while being able to test them for differences, you know, right?
F
But the core problem here being that this is extremely complex stuff — this is basically an EVM for complex cryptography. Oh, I totally agree that it would be a lot simpler to just have one reference implementation, because then you wouldn't actually need to specify everything — you'd, you know, just have consensus by reference implementation — but it feels kind of dangerous.
H
Well, my argument is not that I don't want to have separate implementations — I would want to have separate implementations. But right now, those two implementations that will be available in any form production-ready will both be done by us, with the same set of public documents, specs, and all decisions. So the difference between them is most likely just very small — I mean, they're also in different languages, but the difference which one would expect will be small. So unless there will be a next one — I mean, for this period of time, until there is a next, pretty much independent one — it's kind of less risk to use one, which will not crash and will just anyway give consistent results, than to try to use two which are very much similar.
H
First of all, it's preliminary, and it kind of can be changed a little bit, but for now it looks reasonable. And for the part where a discrepancy may come from: there's a much larger chance that it may come from the arithmetic, even while the formulas are mostly explicit everywhere, and those are present in a separate document, also with explicit formulas.
H
From the experience of the previous round of fuzz testing between the two implementations: we found a set of discrepancies, which were kind of checks of whether some stuff was empty or not, for example — but we didn't find any discrepancy in the ABI parsing code between our Rust and C++ implementations, for example. So, just from practical experience, there is a much smaller chance to get an error there.
I
My concern is: functions with a lot of different ways that you need to pack the parameters, with a lot of different ways you need to gauge the gas — wouldn't it be conceptually simpler just to have each one of these pairs of function and curve have its own individual call? And we could easily isolate the testing cases that way.
H
It's a big-endian representation, so, I mean, two bytes address which call you do, and the rest just uses the same set of functions to slice this data and iterate over it in some way. So there is no independency between them — I mean, even if you separated it into 20 separate function calls, which is still fine, they will still have a very similar-looking way of how you would call them. So they would have their own kind of binary interface — and maybe I did use the term "ABI" a little bit wrongly.
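(A sketch of the single-precompile calling convention just described: the first two bytes of the input select the operation, and the rest is sliced by shared parsing helpers. The operation numbers here are made up, not the EIP-1962 values.)

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

const (
	opG1Add uint16 = 0x0001 // illustrative selectors
	opG1Mul uint16 = 0x0002
	// ... further operations would follow
)

// dispatch reads the two-byte selector and routes the body to the operation.
func dispatch(input []byte) ([]byte, error) {
	if len(input) < 2 {
		return nil, errors.New("input too short")
	}
	op := binary.BigEndian.Uint16(input[:2])
	body := input[2:]
	switch op {
	case opG1Add:
		return nil, fmt.Errorf("g1 add on %d body bytes: not implemented", len(body))
	case opG1Mul:
		return nil, fmt.Errorf("g1 mul on %d body bytes: not implemented", len(body))
	default:
		return nil, fmt.Errorf("unknown operation %#04x", op)
	}
}

func main() {
	_, err := dispatch([]byte{0x00, 0x01, 0xaa})
	fmt.Println(err)
}
```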
I
Do we even need those two bytes? I mean — you also know what the issue is with the gas calculation: there are different gas calculations for each curve, and then to calculate this in the implementation you need, you know, a giant switch statement. We think it could be easier just to say that this curve has its own set of functions with its own gas calculation, rather than: for this curve you do this giant four-way switch, depending upon which curve you are on, and then you go down these complex things.
H
I mean, just from the way they're implemented — just from any perspective, the curves' implementation would be done such that the difference between those is literally just one switch statement, which is a much simpler part than the rest of it. I mean, every function will use the same entity, which is a finite field, and no matter which of those calls you use, you will have to first specify the parameters of this finite field — and then such parsing will be done inside the call to any of those 20 functions anyway.
H
So there is no good independency between them. That's why, when I was working on it, I just didn't separate them — because there is no good separation between them. Yes, the single switch statement at the beginning tells you which functions you call, but after this, all of those 20 functions just use the same set of primitives to do their work, since they're not that much independent — and the same is true for gas.
D
I'd highlight that it's completely fine to have one single function, implementation-wise, within the EVM, that just does a big, huge switch and then calculates everything the way that's cleanest. The reason people are suggesting the 24 or however many precompiles is because the EVM is kind of — all the other operations are structured in one way, and if we were to have 24 precompiles, then yes, maybe behind the scenes those 24 precompiles would just call the exact same single function, but it would avoid introducing an extra encoding idea or concept into the EVM code itself.
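(A sketch of that alternative: expose many precompile addresses, each bound to one operation, while behind the scenes they all call the same internal routine with a fixed selector. Addresses and selectors are illustrative only.)

```go
package main

import "fmt"

// run is the single shared routine; each address just fixes the selector.
func run(op uint16, body []byte) ([]byte, error) {
	return nil, fmt.Errorf("op %#04x on %d bytes: not implemented", op, len(body))
}

// precompiles maps an address byte to a thin wrapper, so the EVM-facing
// interface needs no extra encoding concept.
var precompiles = map[byte]func([]byte) ([]byte, error){
	0x0a: func(in []byte) ([]byte, error) { return run(0x0001, in) }, // e.g. G1 add
	0x0b: func(in []byte) ([]byte, error) { return run(0x0002, in) }, // e.g. G1 mul
}

func main() {
	_, err := precompiles[0x0a]([]byte{0x01})
	fmt.Println(err)
}
```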
H
First of all, I should note that such a switch statement would anyway happen, not at the level of the EVM but inside of the implementation, because — well, at least how it's done right now — the precompile implementation just takes a set of bytes as input and internally parses it. Most likely, I was expecting that this will be the way the data is passed from, say, the EVM to a precompile — but this is a very minor issue.
H
The reason why I didn't want to split it initially is — just as a solid example — in any of those calls, even if there will be 20 of them, the first parameter will always be the same, and this parameter will specify the modulus of the finite field over which one wants to work and define a curve. So even if there are 20 of those, independently, you will still have to specify the parameters, which are very similar for each of those calls — and this is not a huge switch statement anymore, just independent calls.
H
That's why I decided it that way — it's kind of backwards: if you have a lot of similarity in the way you call each of those, then most likely you don't want to separate them, just from a logical perspective. Correct — I don't have any argument that we should strictly do it one way or another. If you want 20 separate function calls, I'm perfectly fine with this; I just described why I didn't put it that way initially.
H
Okay — I mean, if this is kind of the decision, I will separate this function into the required number of sub-calls. It's not a problem from any perspective. It's just that this decision never kind of reached the final point, and I have always said that I'm fine with any of those. So if there is consensus that we should make it 15 separate calls, then I will make it 15 separate calls. It's not a problem, I think.
H
It allows you to do, right now, seven different operations. First of all, three operations which are arithmetic on an elliptic curve defined over the prime field — there are three kinds of operations you can do there: addition of points, multiplication of a point by a scalar, and multiexponentiation, which is just an efficient way that saves you from consecutive calls of multiplications and additions of intermediate results. Those are three functions.
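(The seven operations, as a sketch: the three G1 operations are named explicitly in the discussion; the G2 triple and the pairing are the usual completion of such a set, as in the EIP-1962 drafts, and are listed here as an assumption.)

```go
package main

import "fmt"

type op uint16

const (
	opG1Add      op = iota + 1 // point addition over the prime field
	opG1Mul                    // point multiplication by a scalar
	opG1MultiExp               // multiexponentiation: fused mul-and-add of many terms
	opG2Add                    // assumed G2 analogues
	opG2Mul
	opG2MultiExp
	opPairing // assumed pairing check
)

func main() {
	for o := opG1Add; o <= opPairing; o++ {
		fmt.Printf("operation %d\n", o)
	}
}
```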
J
Not with the design, just in general: I think it's kind of premature to argue about ABI encoding — how any of these operations should be laid out — before there are actual example codes using this precompile. Because, eventually, what should also be part of the decision is the cost contracts have to incur while interacting with these precompiles, and, secondly, the dev-ex — the experience devs have when actually interacting with these precompiles.
J
Probably the arguments against using the standard ABI encoding would include that it may just be too big — and therefore, if we actually examine how the precompile would be used: I guess in the pairing case it would be fine, but for doing repetitive additions it would just be too big of an overhead. And the other thing I think people will say regarding the ABI encoding is that it may just be ambiguous — but I think that can be argued.
H
I think I can kind of shortly answer both. I never thought about using the standard encoding — it may be an option, I just didn't measure this. And still, the cost of parsing is negligible compared to the operations themselves, but I didn't estimate the cost of actually forming, say, the array of data in memory and doing this call — this part I didn't estimate, so we don't have a solid answer for this. For developer experience: I will link this into the gitter, just to have it kind of stated somewhere.
J
On the example: I actually meant a real-life example contract, where tests would actually be beneficial — which is not just, like, calling a single function on the precompile; rather, I would assume that in any case you would call a bunch of different functions on the precompile a number of times. So I think a complete real-life example would, I think, be necessary to actually reach a correct decision on the design.
H
Okay, okay — I get the example. Yeah, I think I can kind of quickly write the equivalent of the current SNARK verification kind of routine, but for how it will work here, which is pretty viable — I think this would be a good example. There is a large reuse of the parameters, in principle, which makes it possible to push the efficiency to the limit, much like developers will want, and such optimization needs to be done only once. So yeah, it's not a problem to make one real-life example. Great.
J
Just one more comment regarding the cost you mentioned: I think there are two important costs, both from the developer/EVM side. One, the cost of preparing the message for a precompile, because we want to keep that cost low; and second, the actual cost of the call — the data sent through the call. I think the cost on the precompile side of decoding any of these is negligible, because we're creating the precompile in the first place because we think it's cheaper to do calculations in the client as opposed to in the EVM.
H
Obviously, there will be some overhead in terms of the message being prepared in memory, because one would have to specify more parameters. But after this — the part which I measure for the gas cost right now is the second part, which involves parsing (which is negligible) and then the actual arithmetic, which is what's required; mostly because I don't have a way to affect how expensive the cost of memory is in the EVM. But still, even for the simplest operations right now, by my rough calculations, the cost of forming the memory chunk is at maximum 15% for the simplest call, and this ratio will go substantially down for calls — for precompiles — which involve more arithmetic operations.
H
Yeah, well, this part I didn't estimate. And, well, the reason for having the custom ABI section is, a little bit, to simplify my own work, because of the way the scalars are encoded — and also the field elements; they're just basically large unsigned integers. There is one byte which tells how many bytes after it encode this number, and then there is a sequence of bytes which is interpreted as a big-endian encoding, with the additional limitation that the top byte should be meaningful — so it's not zero. This is a kind of very simple set of checks which I would need to do, and it allows me to quickly estimate over how large the numbers are on which I will have to do my arithmetic — which is also beneficial for doing the quick gas schedule checks without actually parsing the full set of bytes and then checking again how many bits I actually have there, as I would with a redundant encoding using, for example, fixed chunks of 32 bytes. This was the reason.
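(A sketch of the encoding just described: one length byte, then that many big-endian bytes, with the top byte required to be non-zero so the bit length can be bounded cheaply before full parsing. Function names are illustrative.)

```go
package main

import (
	"errors"
	"fmt"
	"math/big"
)

// parseUint reads one length byte, then that many big-endian bytes, and
// rejects an empty number or a zero top byte.
func parseUint(input []byte) (*big.Int, []byte, error) {
	if len(input) < 1 {
		return nil, nil, errors.New("missing length byte")
	}
	n := int(input[0])
	rest := input[1:]
	if n == 0 || len(rest) < n {
		return nil, nil, errors.New("empty or truncated number")
	}
	if rest[0] == 0 {
		return nil, nil, errors.New("top byte must be meaningful (non-zero)")
	}
	return new(big.Int).SetBytes(rest[:n]), rest[n:], nil
}

func main() {
	v, rest, err := parseUint([]byte{0x02, 0x01, 0x00, 0xff})
	fmt.Println(v, rest, err) // 256 [ff] <nil>
}
```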
D
I guess we could always just check and see — maybe axic could. So if we have an actual contract, a real use case, then it will probably be a lot easier to just check: okay, if we were to encode it with your ABI, or with axic's ABI, or just a dumb binary encoding — which would be preferable? That's probably something we can try out if we actually have live code to play with.
J
Yeah — probably my main message is that we definitely should have actual EVM implementations of contracts using this precompile, or any other precompile which is proposed, because otherwise we're going to end up with a situation like we did with Blake2, where the design had no input from how you would actually use it from within the EVM, and it ended up being suboptimal in some cases. And I think that applies even more to this precompile, because it's a lot more complex. So, yeah.
B
All right — there is an EVMC, or, that is, an EVM C binding, that was posted in the chat by axic. So if anyone's interested in that, you can check out the Zoom chat. Were there any other comments on this? Otherwise, this was a great discussion, and we can take everything back to the Magicians forum.
H
The formulas do define exceptions — and especially the arithmetic one defines exceptions — but, luckily for us, by how you can implement this arithmetic, if you use the formulas explicitly, there are only two exceptions in the arithmetic. So, of the exceptions which will kind of tell you that the precompile didn't output any data, there are only two which happen in the arithmetic — so they're just propagated, and the precompile call just returns an error. And there is a much larger set in the ABI, which is just verification that all the parameters were encoded correctly. Except for this, if you just use the formulas and you just do the maths and just write this, at the end of the day you will always just get the same result — just because this is how the arithmetic works for us, if we did all the checks before.
H
Yes, you're correct, and this is what actually happens. There is only one edge case in the arithmetic, and this edge case is basically telling us that you don't have the inverse of the element — which is actually the same as having kind of a division by zero. In this case, if you encounter it anywhere in your code, you just propagate it all the way up and say that, well, my code just didn't produce any output, and this is an error.
D
Yeah, so — this is an interesting question, because, for example, in the Go implementation of the various crypto curves, the various precompiles, if you input junk, it will actually not — it will just throw the whole thing out, saying, for example, that this point is not on the curve, bye-bye. So it won't just start computing junk and returning you the computation results; it will actually refuse to compute it, because it just does some pre-checks, and —
H
It is not very large, and when you go into just the formulas — and the formulas are mostly explicit — there is only one edge case; and after you did all the checks, if they passed on the same input data, then on all of the nodes you will get the same answer. So there are checks for inputs, yes, and those are largely the same as you just mentioned, plus a few additional ones. But yeah, if you hit one of those checks, you basically get the answer that there is no result from the precompile — so, just an error, on a large scale.
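(A sketch of the single arithmetic edge case described here: the inverse of a field element does not exist when gcd(a, m) ≠ 1 — morally a division by zero — and the error is simply propagated up as "no output" from the call.)

```go
package main

import (
	"errors"
	"fmt"
	"math/big"
)

// inverse returns a^-1 mod m, or an error when no inverse exists.
func inverse(a, m *big.Int) (*big.Int, error) {
	inv := new(big.Int).ModInverse(a, m)
	if inv == nil {
		return nil, errors.New("no inverse: precompile returns an error")
	}
	return inv, nil
}

func main() {
	m := big.NewInt(13)
	fmt.Println(inverse(big.NewInt(5), m)) // 8, since 5*8 = 40 ≡ 1 (mod 13)
	fmt.Println(inverse(big.NewInt(0), m)) // error: gcd(0, 13) != 1
}
```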
B
Yeah, with all the hard fork stuff and other things — oh, I want to go back to Muir Glacier just real quick, because I realized I didn't actually check with each client to see if they have a compatible version. The Ethereum Cat Herders want to release a blog post, and so does blog.ethereum.org, with a link to all the clients that have Muir Glacier. So let's just go through each client that's on the call and see where everybody's at. We know Geth has a client that's Muir Glacier compatible. Is there any other information?
B
Perfect. Cat Herders — is there anything else that I'm missing as far as Muir Glacier? This could be Pooja or Tim.
D
It's kind of hard to say, because the difficulty just got bumped six hundred blocks ago — so, I don't know, that's a quarter of an hour, half an hour ago — which kind of means that probably all the estimates are a bit off now. So let's wait two more days and see whether the numbers change — the estimates that Etherscan and we all can suggest.
C
Does it make sense, then, yeah, to release the blog post on Monday? Hopefully Parity can have a release by Monday, and, like, we can release it at the end of the day, Americas time — so that means that, you know, Parity — I think you're all in Europe — it's kind of well past the end of the day. And we could use whatever numbers are on Etherscan Monday, and that should give it a couple of days to readjust its estimation, given the difficulty change just kicked in. Does that make sense?
B
Awesome. Yeah, we can definitely talk more then, and if it needs to be on Tuesday because of other clients releasing, that'd be fine. We don't even have Trinity on the call, or ethereumJS, so I can reach out to them manually, or we can just hit up their Gitter channels. I think we're nearly done here. Let me go back to the agenda, if I can find it — I have 30 tabs up right now and don't know where to go. All right, guys, almost there — I've been going through all my tabs.
B
Well, I'm just gonna go back to it again. Anyways, we have one more EFI, and basically that EFI is not truly a new EIP that we're discussing; it's kind of just a formality that we need to all agree on, for putting EIP 1057, programmatic proof-of-work, into EFI, because it was already accepted in other All Core Devs decisions, back basically a year ago, multiple times. So, is anyone opposed to adding ProgPoW to the list of EFIs, granted that it was accepted before any of this process was discussed?
B
It kind of seemed like — at least to James and I, it seemed like something that we would need to do just for procedural reasons. So if anyone does have a comment on that, feel free to talk on the Gitter or bring it up here. The EIP — the EIP improvement proposal — meeting: I've dropped the ball on that a few times; we haven't had it yet, and then I kind of started thinking about it, and everyone's slowing down because of the holidays.
B
So it might be better just to push it till January, unless there's a different opinion on the call. And until then, I can make a Telegram group and start, you know, getting people on it who have ideas and suggestions, from those who've expressed that they wanted to be involved — because I think, yeah, at least Wei, and I think Danno, reached out, and Pooja's gonna be a leader on that one, and a few other people wanted to be involved.
B
The last one — let's see here... oh, oh, I'm on the wrong thing, sorry. 76 is done; I merged it today, because there were a few corrections that needed to be made. So if you go to pm / All Core Devs meetings / Meeting 76.md, it's gonna have the decisions. So EIP 2384 was listed as accepted, and final.
B
Thanks, Pooja. Also, EIP 2387 is put to last call — that one... oh, that one is the hard fork meta for Muir Glacier; the other one — so, 2387 is the meta and 2384 is the difficulty bomb delay EIP itself; there's just the one EIP in the meta. Got it, okay — so this just basically covers Muir Glacier. Okay, if those were the decisions made, that's fine, both.
J
There was one comment on the wording of the EIP itself, regarding 2384, and the comment said the EIP refers to — I suppose, the previous... I can't find it, but anyway — it says that the EIP refers to the previous difficulty bomb delay EIP and just explains the difference, which is the block number.
B
Okay, if that needs to be addressed — who thinks that needs to be addressed? Is that because, like, the last difficulty bomb adjustment did deal with an issuance reduction, I believe, but since this one doesn't, why does it need to be included? Did they explain that in the comment?
C
The two last ones had had the reduction — I agree it might make sense to have, like, that one-line change to say that, specifically, this one doesn't. I think the meta EIP might capture some of that. I know James had added a section about, like, the rationale of why we're doing this upgrade and —