From YouTube: Peep an EIP #3: EIP-2537 in five slides with Alex Vlasov
Description
Presentation slides: https://docs.google.com/presentation/d/1uN-ziUVXP1xtxEyKc5piHcVnOqcrTk26WIk-fzkbOMs/edit
EIP: https://eips.ethereum.org/EIPS/eip-2537
Discussion at EthMagician: https://ethereum-magicians.org/t/eip-2537-bls12-precompile-discussion-thread/4187/12
Follow Alex Vlasov on GitHub/Twitter (@shamatar)
Follow the schedule of Peep an EIP for another exciting EIP at
GitHub: https://github.com/ethereum-cat-herders/PM/projects/2
Contact Ethereum Cat Herders at
Discord: https://discord.gg/sgdnxZe
Twitter: https://twitter.com/EthCatHerders
Medium: https://medium.com/ethereum-cat-herders
A
Today we have Alexander Vlasov, also known as shamatar, with us. We are also joined by James Hancock, the Ethereum hardfork coordinator, William Schwab, and Aleta Moore from the Ethereum Cat Herders team. The Eth Magicians link for active discussion and the EIP are provided in the description below, so without further ado, let me invite Alex. Welcome, Alex. Let us know a little bit about yourself and then we will dive into the proposal.
B
Yeah, well, a little bit about myself. Currently I'm one of the two co-founders of Matter Labs, and our main goal is scaling for public blockchains with zero-knowledge proofs, as many of you, or maybe some of you, have heard. We launched our mainnet called zkSync back in June, and we use zero-knowledge proof SNARKs, PLONK as a proof system, and an existing Ethereum precompile to actually do our work and provide scaling benefits.
B
But I think this is not that relevant for this discussion; I would be able to answer any questions later on, if there are some.
B
So if it's fine, we can just jump directly to the EIP description. It should be short, it will not be too technical, and a little bit from a historical perspective.
B
Initially I proposed another EIP, which is 1962, which was a totally generic set of precompiles. Actually, it was maybe just one precompile with huge functionality which would allow different arithmetic operations over elliptic curves, and the goal for this precompile was to make a generic solution which would serve everyone's needs.
B
But then people had an impression that the complexity of such a precompile is enormous, and that's why it was cut into a few parts, and the first one, the simplest one, is 2537, which is only for the BLS12-381 curve. First of all, you should not mix it up with the BLS signature, which is quite a commonly discussed construction right now. Well, the BLS curve and BLS signatures are two different things, and as far as I remember they only share the initials of Ben Lynn, one of the researchers.
B
But let's put it aside. So the main reason is to add a precompile which allows efficient operations on an elliptic curve which, first of all, supports pairings, has good pairing performance, and also provides 120 bits of security, so this curve can be used in the long term.
B
Right now there is already one elliptic curve which supports pairing operations in Ethereum in the form of a precompile, which is the BN254 curve, and depending on the estimate it has from 80 to 100 bits of security, which is fine. But since BLS12-381 is kind of a de facto standard and is explicitly named in one of the IETF drafts for signature schemes, for the BLS signature scheme, we decided that this is a good first candidate. So this precompile is quite simple, but it brings kind of three different flavors of operations.
B
The first set of those doesn't involve pairings; this is just a standard set of operations over points on an elliptic curve, which involves additions and multiplications, and those operations are already enough for some of the protocols. For example, bulletproofs do not need pairings at all, but still provide interesting constructions.
B
So these operations are for common use. There is also the pairing operation, which requires the curve to have special properties. This is the main operation which is utilized by the signature scheme and by zero-knowledge...
B
...proof verifications by zk-SNARK proof systems. And also there is an auxiliary operation, which is called map-to-curve, which allows one to take a message, or a hash of the message, and map it from arbitrary 32 bytes to a point on the elliptic curve in a secure way, because the BLS signature scheme requires that the message be mapped to a curve point. It is a requirement, and this operation is expensive and cannot be well expressed in terms of standard EVM primitives.
B
So that's why it was introduced as a separate operation, and also was introduced as part of the precompile and made efficient.
B
I already mentioned what constructions could potentially use an elliptic curve with pairing properties, but let's just go through it once again. First of all, it's everything which uses zk-SNARKs in various proof systems. It can be for scaling, it can be for a privacy solution. Maybe some of you heard about a recent zero-knowledge based game.
B
It's actually, I would say, wonderful, and you should definitely try to play it. Well, as was mentioned many times, the BLS signature scheme requires a curve with pairing operations. So this can be used for various things like DAO governance, or rollups or plasma constructions, where you have to get a signature from a large number of participants and verify it very fast and efficiently.
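The aggregation property referred to here can be illustrated with a deliberately insecure toy: BLS aggregation works because signatures are linear in the secret keys, so signatures on the same message can be summed into one short aggregate and checked against the summed public keys. The sketch below mimics only that linearity with modular integers; it has no curve, no pairing, and no security, and all names and numbers in it are made up for illustration.

```python
import hashlib

# Toy model (NOT real BLS, NOT secure): only the linearity that makes
# aggregation work is modeled. Real BLS uses curve points and a pairing
# check e(agg_sig, G2) == e(H(m), agg_pubkey).
P = 2**127 - 1  # a prime standing in for the group order

def h(msg: bytes) -> int:
    """Hash the message into the toy group."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % P

def sign(sk: int, msg: bytes) -> int:
    return sk * h(msg) % P            # sigma = sk * H(m): linear in sk

def aggregate(sigs) -> int:
    return sum(sigs) % P              # one short aggregate signature

def verify_agg(pubkeys, msg: bytes, agg_sig: int) -> bool:
    # In this toy, pk = sk; summing pubkeys mirrors aggregating
    # pk_i = sk_i * G on the curve.
    return agg_sig == sum(pubkeys) * h(msg) % P

sks = [11, 22, 33]                    # three toy validators
msg = b"block header"
agg = aggregate(sign(sk, msg) for sk in sks)
print(verify_agg(sks, msg, agg))      # True: one check covers all signers
```

This is why one on-chain pairing check can stand in for verifying hundreds of individual validator signatures.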
B
It also will be required for cross-chain interactions: you can verify the validity of a block of another blockchain if such a validity check, for example, just requires a simple signature from a majority of the validators. This idea was also a part of having a light client for Eth 2.0.
B
When Eth 2.0 is launched, as a smart contract on Eth 1.0 it would be possible to get the data from the Eth 2.0 beacon chain and refer to it from Eth 1.0, and that's also kind of a very good feature for the Eth 2.0 deposit contract, because it allows a user to opt in to the validation that his public key is well-formed.
B
So, let's get to the current state of affairs. At the moment this precompile is included in the codebase of all major Eth 1.0 clients, and if someone would want to make a BLS signature and do just about anything using this BLS12-381 curve, there is a huge set of libraries which allows one to do it, for various platforms, various languages, and also various licenses, in case that's also important. So this curve is indeed a de facto standard and there are a lot of tools to work with it.
B
After integration there was a fuzzing effort from the Geth team to fuzz the implementations between different clients, because clients use different libraries to actually implement what is required by the precompile. The interesting part: while initially people viewed this precompile as very complex and math-intensive...
B
...there were actually no errors in the mathematical part of all this code. The only errors found during the fuzzing process were integration errors: people didn't place proper constants in gas calculation functions, people didn't handle all the errors properly.
B
So, as it happens, the arithmetic parts and heavy math weren't a problem at all. So for now the status is: all the clients have an implementation, and the only thing which is left is just to declare that this precompile is included in Berlin, have Berlin at some point, and then it's activated everywhere and usable by everyone.
B
So it was indeed in five slides, so I can stop the screen demo.
B
Yeah, thank you, James. Okay. I will also pull up the list of people, because I don't recognize everyone by voice.
C
Oh yeah, go back a couple slides, to the one where you talk about who needs it. Yes. So the cross-chain interactions with Eth2, Filecoin, and Tezos: how is it, because you're using the same version, or how does that work? How is it that it enables this kind of cross-chain interaction?
B
Well, it more or less gets to the point of how one can verify the validity of one chain in another chain. So, as an example, in Eth 2.0 it would boil down to verification of a BLS signature, well, too many BLSs here, but the verification of the BLS signatures of the validator set over the block header, and it would allow attesting to Eth 1.0 that this header is part of the canonical chain of Eth 2.0.
B
Well, this applies in principle to any proof-of-stake or proof-of-authority chain which would use a BLS signature over this curve as a kind of seal of the main and valid chain. I hope I managed to explain it. Yeah.
B
It's kind of the same as how you would most likely just prove in Eth 2.0 that there was some transaction with some properties. So there was some set of funds locked in Eth 1.0, so it can be released in Eth 2.0, but the particular nature of the interactions is up to the developer. First of all, you need to attest that this block is a part of the canonical chain.
B
Yeah, such precompiles are not solutions for a particular problem. It's more like a tool which allows a large set of applications. I am a huge fan of SNARKs and also of randomness beacons, but in the randomness part my knowledge is much more limited.
B
I was only more or less following the construction with RSA-based verifiable delay functions and how it's incorporated in randomness generation, but it looks like there are other solutions which are kind of simpler and would also want to have a BLS curve, which is required for them to work.
C
Yeah, let's talk a little bit more, or I want to give people who are users of the chain, or developers on the chain, a sense of: when you say it enables zk-proofs and privacy, how does that kind of benefit translate to users, or to the people?
B
Well, this part is a little bit more technically involved, but, as I made an example, there is already the BN254 curve which allows pairings and which allows zk-SNARKs using this curve, and this is what we actually do. So I'll try to use the word "capacity" here.
B
To make it simple: when you make a zero-knowledge proof, you have to express the statement you are proving as a circuit, which involves, at a high level, addition and multiplication gates. And to be able to run an efficient prover for such a circuit, you need to have a set of roots of unity in the field over which you work. Usually the circuit is expressed over a prime field, in this case, and the number of roots of unity is a property of the modulus.
B
In the case of the current curve there are 2^28 roots of unity, which is the maximum size of the circuit.
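The circuit-size limit described here comes from the two-adicity of the curve's scalar field, i.e. how large a power of two divides r - 1. As a quick illustrative check, the constants below are the well-known scalar field moduli of the two curves (they are not quoted in the talk itself):

```python
# Standard scalar field moduli (well-known constants, not from the talk).
BN254_R = 21888242871839275222246405745257275088548364400416034343698204186575808495617
BLS12_381_R = 52435875175126190479447740508185965837690552500527637822603658699938581184513

def two_adicity(r: int) -> int:
    """Largest k such that 2^k divides r - 1; the max FFT domain is 2^k."""
    n, k = r - 1, 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

print(two_adicity(BN254_R))      # 28 -> circuits of up to 2^28 gates
print(two_adicity(BLS12_381_R))  # 32 -> circuits of up to 2^32 gates
```

The prover's FFT must run over a multiplicative subgroup of order 2^k, which is why the modulus itself caps the circuit size.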
So if, let's say, you try to process transactions using a zero-knowledge circuit, and each transaction takes, for example, one million gates, and you only have access to a maximum of 2^28 gates, then the largest number of transactions which you can process is around 256.
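The arithmetic in that example can be sketched directly; the one-million-gate figure is the talk's own assumption:

```python
# Capacity arithmetic from the talk's example (illustrative round numbers).
GATES_PER_TX = 1_000_000  # assumed circuit cost of one transaction

def max_txs(two_adicity: int, gates_per_tx: int = GATES_PER_TX) -> int:
    """How many such transactions fit in a circuit of at most 2^two_adicity gates."""
    return (2 ** two_adicity) // gates_per_tx

print(max_txs(28))  # 268, i.e. "around 256" as stated in the talk
```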
B
This is for the BN254 curve. The BLS12-381 curve has 2^32 roots of unity, so it's four times larger, and it allows you to batch more transactions in one zero-knowledge circuit, and the verification costs for the BN254 and BLS12-381 curves are roughly the same. So at the end of the day, when you do all the calculations of what the final cost is, you will get that you can get something like four times the compression for the same amount of gas required on chain to verify the proof.
B
Yeah, if you just divide the total gas spent to verify the proof by the number of interactions, or transactions, which you can verify with one proof, you get something like a four-times gain in efficiency.
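That division can be made concrete with round numbers; the half-million-gas verification cost is a figure used later in the talk, and the transaction counts follow the earlier capacity example:

```python
# Amortized on-chain cost per transaction (illustrative numbers).
VERIFY_GAS = 500_000  # assumed flat gas cost to verify one proof on chain

def gas_per_tx(txs_per_proof: int, verify_gas: int = VERIFY_GAS) -> float:
    """Gas per transaction when one proof covers a whole batch."""
    return verify_gas / txs_per_proof

print(gas_per_tx(256))      # ~1953 gas per transaction
print(gas_per_tx(4 * 256))  # 4x the batch -> ~488 gas, 4x cheaper per tx
```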
C
So, and this would be in, like, a layer two sort of thing, so that would also end up being cheaper?
B
Yeah, I mean, this would translate to layer twos having larger capacity per block, so effectively it would decrease the costs. But, I mean, for layer twos there are other cost considerations, so it's more complicated. But at the end of the day, if you run it at full capacity, and you just send a lot of proofs for verification and you do it almost every block, then at the end of the day your costs for end users will be four times smaller.
C
That's pretty good. Let's see, is it the same kind of thing for privacy solutions, like it's more efficient or cheaper to do privacy solutions, like Tornado Cash or something like that?
B
I mean, for now, well, if you want to put Tornado Cash as the example here, it's a little bit more technically involved: you would want to run recursion. But it's kind of the same: no matter what your construction is, if you use zero-knowledge proofs and you want to prove a larger statement, or to prove a statement over a larger set of small, simple chunks and substatements, it will directly translate to savings on the cost.
B
If you use a special algorithm for this, you get huge discounts, and such an operation is a part of many verification procedures for zero-knowledge proofs. So I would imagine that even for them it would immediately benefit verification, and then, if they decide to go into recursion, the same capacity argument would also apply for them.
B
In principle, now you can choose recursion over just one curve, the same curve. The problem is that it's more computationally expensive, so there are trade-offs. As for the recursion argument: this curve is not strictly for recursion, it is the de facto standard curve for all the interactions listed here, but if you want to name it, there is another EIP, which is 2539, which is for the BLS12-377 curve, a slightly different one. The only thing which is largely different is the modulus...
B
...the prime number which defines the field over which the curve is defined. This curve is also known as the Zexe curve, and the Zexe construction is from the Zexe paper, but we won't go deeper into the different technical details. Yeah, this curve is recursive-composition friendly, because it allows various tricks to make recursion cheaper, but I mean, we can go deep, we can go in this direction if you want, but I don't think it would be useful; we'd just scare people away.
B
Well, from my perspective, and from the implementation of one curve or another: the BLS12-377 proposal is like a copy-paste of the 381 proposal. Everything will be the same. The API will be the same. Gas costs will be the same. All edge cases and errors will be the same. The only thing which will change is roughly four equations in the implementation, and these equations are usually not even hardcoded, they're just parameterized in the source code.
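The "just parameterized" point can be sketched with a toy: the arithmetic is written once against a parameter set, so a new curve is new data, not new code. The tiny numbers below are made up for illustration; a real parameter set would hold the 381- and 377-bit moduli and the actual curve coefficients.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CurveParams:
    name: str
    p: int  # base field modulus
    b: int  # curve coefficient in y^2 = x^3 + b

def on_curve(pt: tuple[int, int], params: CurveParams) -> bool:
    """Check the short Weierstrass equation; same code for every curve."""
    x, y = pt
    return (y * y - (x * x * x + params.b)) % params.p == 0

toy_381 = CurveParams("toy-381", p=23, b=4)  # made-up toy parameters
toy_377 = CurveParams("toy-377", p=19, b=1)  # same code, different data

print(on_curve((0, 2), toy_381))  # True: 2^2 == 0^3 + 4 (mod 23)
print(on_curve((0, 1), toy_377))  # True: 1^2 == 0^3 + 1 (mod 19)
```

Swapping the parameter set is the whole "roughly four equations" change described above.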
B
So if this one is accepted, the modification for BLS12-377 will take maybe 50 lines of code to provide an implementation for 377.
B
It will not even change external interfaces, and clients would be immediately able to integrate it. So, from the perspective of how it is implemented in code, there's a small change of parameters, so nothing will change for an external observer, but it would be an asset, a building block, for various constructions.
B
These are the various options of what you can do now in the area of zero-knowledge proofs; there is a smaller set of options of what you can do with zero-knowledge proofs in Ethereum. So what I try to do is gradually increase this set of options, because for now, for example, if you look at Zcash, they're not limited by precompiles, and the users can use any crypto they want, and they can use very interesting constructions like Halo.
B
If you look at Coda, they also recently introduced their approach for efficient composition, and a set of smart contracts which would be possible with just this kind of algebra. So it's kind of an extension.
B
I try to extend Ethereum to support more and more primitives, which would eventually allow me and other people to build something which maybe I cannot even imagine right now. For example, zero-knowledge games are great. I was proposing this some time ago, I think first at the Zero Knowledge Summit last year, but I'm happy to see that someone has built it.
D
So I have a quick question. You said that zero-knowledge proofs in Ethereum are limited a bit, and I'm wondering, well, first off, why, and how is it different in Ethereum? Like, how is this implementation different from, say, you know, Zcash or something like that?
B
It's limited in the sense that, well, first of all, the amount of computation required to calculate the pairing is quite large. It's a few milliseconds.
B
It allowed Zac from Aztec to actually implement the same BN254 curve, which was available in Ethereum as a precompile. He implemented it as an EVM contract, and it had decent performance before the BN254 precompile was repriced, because implementations had improved, so the gas cost was reduced. For the BLS12-381 or 377 curves it's even more difficult, because you have to work over numbers which are 381 bits long, so they don't fit in a single Ethereum unsigned integer. So you have to do more and more tricks.
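The "doesn't fit in a single word" problem can be sketched in a few lines: a 381-bit field element needs two 256-bit EVM words, and every arithmetic operation must be emulated across that split. This is illustrative Python, not EVM code; the modulus is the standard BLS12-381 base field prime (a well-known constant, not quoted in the talk).

```python
# BLS12-381 base field modulus (381 bits).
P = 0x1A0111EA397FE69A4B1BA7B6434BACD764774B84F38512BF6730D2A0F6B0F6241EABFFFEB153FFFFB9FEFFFFFFFFAAAB

WORD = 1 << 256  # one 256-bit EVM word

def to_words(x: int) -> tuple[int, int]:
    """Split a field element into (low, high) 256-bit words."""
    return x % WORD, x // WORD

def from_words(lo: int, hi: int) -> int:
    return hi * WORD + lo

def field_add(a: int, b: int) -> int:
    """What native code does directly; in the EVM it must be emulated word-wise."""
    return (a + b) % P

a = P - 1
lo, hi = to_words(a)
print(hi > 0)           # True: the element really spans two words
print(field_add(a, 5))  # 4: wraps around the modulus
```

Multiplication is worse still, since the 762-bit intermediate product must also be carried across words, which is where most of the EVM overhead comes from.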
B
That's why an implementation like this in the EVM would be expensive: it would, first of all, be difficult, and then it would definitely be more expensive than if you implement it in native code, as the example with the BN implementation showed, and no one wants to pay a huge amount of gas for an interaction. So such an implementation has to be efficient.
B
That's why it's usually implemented as a precompile, which is just native code, just a function which is executed in native code and then returns back to the EVM.
D
So, if I understand how it works correctly, has it ever been considered to sacrifice some of the benefits of a Turing-complete architecture for the benefit of the more complex zero-knowledge proofs? Has this ever been considered?
B
I have to at least improve the interpretation of this question a little: zero-knowledge proofs as they are, in terms of zero-knowledge circuits, are not Turing complete to begin with. This is the part which is purely for zero-knowledge proofs, which can be over different proof systems and different curves.
B
But then you get to the point where you have to verify this proof, and in most of the cases which we discuss here, verification happens in Ethereum. So Ethereum itself should be able to do the computations which are required to do the verification, and these computations right now involve running the BN254 precompile, and later on they would be able to use this BLS12-381 curve, which is also a precompile.
B
So this part has to be run in Ethereum and has to be as fast as possible. Right now there is only one curve there; maybe there will be two in Berlin, but those two curves are not the only curves and constructions possible for zero-knowledge proofs.
B
There are other curves which we don't discuss here. There are other proof systems which have different tradeoffs: they can trade off bandwidth for speed, for example, and not all of those kinds of interactions are reasonable to have in Ethereum.
B
That's why there is a larger landscape for zero-knowledge proofs, which is used by other chains, but we discuss Ethereum only here, so we try to bring in the constructions which are reasonably efficient from both perspectives: running in a public blockchain, which is important, while also not having the full power to extend this public blockchain in any way you want. That's why all these precompile inclusion procedures are quite slow.
B
Even while there are other constructions in the wilderness which are also very interesting, they will not be usable in Ethereum until more and more precompiles are introduced, or until something like 1962 is introduced, which allows any curves, any functions, and also high performance. But it was considered to be too difficult to include.
A
So, talking about Berlin, I'm curious to have a little conversation about the latest EVM384 that came up in our last All Core Devs call. So would you be able to elaborate on how it is going to affect EIP-2537?
B
Well, ideally, you should separate things here a little. Okay, there is a huge set of possibilities.
B
Some of those are better suited for Ethereum, some of those are not well suited for Ethereum. For example, the size of a zero-knowledge proof using pairing-friendly curves is, let's say, a few kilobytes, which is reasonable, and verification takes constant time and, let's say, half a million gas, which is also reasonable.
B
There are proof systems which have a larger proof size, for example STARKs, where a single proof verification, counting together the proof size and running the contract which does the verification, takes 5.5 million gas, as far as I remember, for one block. So this fits into the block, but is more expensive, and there are proof systems where the proof sizes would be even larger, so those are not suitable for Ethereum.
B
We don't consider them, but even in the subset of those which are suitable, there is a variety of choices, and those different choices would require different building blocks. Some of those are present in Ethereum, some of those are not present in Ethereum, and even where some blocks are present, there are better options for those blocks, for example the BN254 curve versus this BLS12-381.
C
And then, Pooja, your question on EVM384 and EIP-2537: it came up on the last call, and you have space to respond here instead.
B
Yeah, I mean, I definitely cannot answer how it's going to affect it, because I'm not the person who makes the decision. But from what I've seen, and I didn't dig very deep, so there are maybe some other factors which I don't count: right now, in short, the performance difference is a factor of three, so this would affect anyone who would want to use a curve like BLS12-381 implemented either in native code or using these new opcodes.
B
So it would force people to pay three times the gas. Also, the comparison which was presented there was between kind of generic implementations of both, which is a reasonable comparison.
B
So if people start to do optimizations on both sides and they go to the extreme limits, I think the ratio will stay largely the same, so there will still be a factor-of-three penalty, no matter if everyone goes all the way down to assembly to implement basic operations.
B
It's kind of a simulation of this via operation counts: doing the same number of primitive operations and seeing how long it takes. So there is no implementation, there is no full implementation there, there is no control flow, and I don't know how difficult it would be to implement it. So it's still in research.
B
There are also other parts, for example subgroup checks, which I know how they are implemented in 1962, which was a baseline for them, and I don't know how they were simulated in EVM opcodes, and for this reason the difference can also be quite substantial, because they take up to 40 percent of execution time.
B
So we will see how this progress goes, but my prediction is, more or less, there will be a penalty for end users from a performance perspective somewhere around a factor of three, if someone would want to force that such operations be implemented using the new opcodes instead of native code.
B
Oh yeah, yeah, I mean, I thought this was kind of obvious, but yeah, I mean, if we assume that every second of computation is some amount of gas spent, then yeah, it would directly transform into a higher gas cost for any...
A
...operation. Thank you. And I hope that this comparison will be coming up soon, and we will get to know how it is going to go ahead: EVM384, or this one, or both.
C
Well, I can give my perspective on that, Pooja, at least from the hardfork coordinator side. (Yeah, please go ahead.) I think EVM384 is a good direction, but is it a reason to delay other good things? I don't think anyone's decided anything; that's kind of where I view this: as another good thing that is pretty far along and can benefit and give value to both devs, in the scope of what things they can do, and users, in that it's cheaper to do things.
B
Yeah, well, it can be. I mean, for this particular choice: I don't know what's happening with the hardfork coordination, in the sense: is there a plan to have it at some point?
B
Is there a deadline to have it, or can you delay it more and more until all these results come in, which effectively leads to not having features which could have been there already? But, purely from a technical standpoint: maybe it's kind of useless for 384 bits, but it can be usable for other cases. For 384 bits, as it happens, there is definitely an overhead in crossing the boundary between the EVM and the code which actually does the field addition or Montgomery multiplication.
B
But if you go to a larger field size, for example, let's say, twice larger, this ratio will definitely change, and maybe for field sizes which are roughly 700, like 768 bits, let's say, this ratio will be such that the overhead will not be three times, maybe it will be 30 percent, and then it would definitely make sense to have a universal solution over particular ones. Maybe it's just not the best choice for 384 bits, but that's kind of another level of research.
A
Yeah, thanks to both of you. Thank you, Alex; thank you, James. This was really an interesting conversation, and we got to know more about this particular EIP. With this program we are hoping that it will be useful for the community to learn and ask questions directly from the author. That's what we are trying to achieve here, and I think we have achieved it; the questions that we received have been answered.
A
So I hope to see you again on this show to learn about a proposal, and for those who are watching us on YouTube, I hope to see you all in another episode of Peep an EIP with another proposal.
B
Well, we're not that active on social media. There is an About page on our website, which is easy to google, and there are links there. Well, there is a link to the GitHub, and usually when we have something, we write a blog post, but all the work is happening on GitHub, and we're not that active on social media. But yeah, you can just try to find us on Google, something like Matter Labs.
B
You will find it easily, and you can check the materials and a few of our blog posts; three of them were quite recent, on Medium. Other than this, I don't think there are good sources, or, if necessary, you can reach me in a private message on Twitter or on Discord. I have the same handle everywhere, so you can just PM me if something comes up.