From YouTube: Breakout room meeting #7
A
As far as I recall, the EIP-2537 precompile for BLS12-381 curve operations was included in YOLO v1, so it has been on the developers' testnet at least once, and I suppose clients must have looked into the integration process.
C
Well, maybe I should add here that this is an ongoing discussion, and it has two large points in it. One is the particular implementation of the BLS12-381 primitive, which was delayed or superseded (or I don't know what exactly happened) by the advent of the EVM384 proposal. So this is a particular question: how, and which, primitives should be implemented for Ethereum.

New cryptography is needed anyway, so one way or another developers should get access to it. The second part is whether native-code precompiles are still the better option. They do require more work in terms of testing at later stages, and fuzzing to avoid consensus failures, but such requirements should ideally be formalized, and EIP-2537 would be a good example for that, because it is most likely the highest-quality precompile related to cryptographic primitives that was ever included in Ethereum.

So if the decision is that precompiles are still the way to go, and I don't think there is another option, since there are functions which are not algebraic and we would need precompiles for them anyway, then at some point the question becomes: what would be a formal list of requirements for such precompiles to be considered for inclusion? Out-of-thin-air arguments that something is just not safe and not tested enough cannot be made for such cases, because such an argument is purely subjective at the current stage.
A
Right, I totally agree with that, and I believe we will at least try to get some answers to these questions: how we can decide, maybe a flow for considering the proposals, and, if one implementation is superseding another, how we can come to a decision on that.
D
I'm looking at the agenda and at what has been said in the last few minutes, and I'm wondering about the goal of this, because the two are somewhat conflicting. Is the goal a technical discussion today, or more of a political discussion? If it's a political discussion, I don't think we have the right audience, because a lot of the core devs who can actually make a statement on what the requirements for precompiles would be, and maybe agree on those requirements, are not on this call.
C
Well, the second part of what I tried to describe will logically follow. If during the technical part of the discussion we agree that, in any short or medium term, native-code precompiles are still the way to go, and if we do get to that outcome (which is how I see it at the current stage, in terms of performance and gas costs), then later on we should just proceed in a similar direction, maybe with a slightly different set of people.

The first part is: is there any progress on the BLS12-381 curve? Implementation was stopped largely because people thought that EVM384 would solve everything and be efficient enough, which wasn't the case according to my last analysis. So I want to know the technical updates. I understand that it is possible to fix any technical problem; it's largely just the amount of work which I see is required to make these changes, taking into account the current speed of development and the speed of accepting anything beyond small hotfixes.

This applies to the same core clients. With all that in mind, I would say it would take at least a year to implement the changes which would make EVM384, in its current form, gas-efficient enough to be considered a viable option for mainstream developers. It would still be more or less okay in terms of price to use it once or twice per block at maximum, but taking into account that more and more people will want BLS signatures, account abstraction and similar things, this primitive is important and it should be as efficient and as cheap as possible.
A
So, to answer Alex's exact question first: this is a breakout room meeting, and in the past it has been suggested that we do not try to make any decisions here. So your point that we may not have the right audience for the discussion is fair; we will try to collect the thoughts of whoever is here.

First, the development update from the proposal side, the authors; then we will try to collect the concerns that Alex Vlasov has just mentioned, such as the concerns with gas efficiency and others. We'll try to document all those things and share them with the AllCoreDevs meeting to make or announce decisions as required.

I believe this can be a good start to at least begin discussing what clients think about these proposals and which proposal we should be considering for a future upgrade: either one, or both, or whatever it turns out to be.

So, can we get some updates first from the EVM384 team on where we are standing right now, and on what would be a good place to go and look for the updates? I believe there is a thread on the Ethereum Magicians forum for questions and answers, but yes, let's get updates here if we can.
D
Yeah, we have been working on a couple of different things regarding EVM384, and we plan to release those findings. There was the holiday season, which was a cause for the delay, and unfortunately we don't have the update ready for this call, even though we really planned to have it ready. But we plan to get it out, hopefully by tomorrow. The updates we have provided so far covered the different variants we could do with EVM384, the different versions of the opcodes.

The pairing operation is the most expensive operation on BLS12-381, and we have dissected it into two big parts, the Miller loop and the final exponentiation. These two big parts have been implemented using the EVM384 opcodes, and you probably remember that there were two updates explaining each of them. Since then we have done further work to get those implementations into better shape and, more importantly, we looked into determining the actual cost of the EVM384 opcodes themselves and the process we went through to do that: things like dealing with memory, the Keccak opcode, a number of different opcodes. We also looked at a different approach, dissecting just the core cost of the interpreter and looking at the actual processing cost of the opcodes on top of it. Through this process, which took quite a while, we did arrive at some costs which we believe don't pose any risk to the network, but we also found that these costs, just the actual opcode costs themselves, can be improved further by various means.

The way we looked at the runtime cost is that we used four widely different machines, most of them actually four to five years old. These are definitely not powerhouses, so I would say we are looking at the safer side of the spectrum.

That's a part we also spent quite a bit of time on, and we are hoping to publish some of our findings in that regard. It's not part of EVM384; it's a separate process. Through it we concluded that the other opcodes that are used, which we labeled "control flow" (though they are much more than control-flow instructions), could also be significantly repriced. In essence, we have total estimated numbers for the pairing operation: what could be achieved without repricing anything else, and then, under different proposals, what could be achieved if those other opcodes were repriced. I think all these numbers are actually looking quite nice, and it definitely isn't on the order of what was mentioned, that you could only run a single pairing in a block; it's way, way below that.

And lastly, I would like to emphasize that we only looked at the pairing operation because it's the most expensive and most complex one, but it would be possible to implement other operations using the primitives and tools we have written, and those would definitely have much lower costs as well. Actually showing you the write-up would have been much better than me trying to give some kind of overview, but I hope you can see it tomorrow.
A
Great, thank you. I have tried to list all the resources I could find for EVM384 here; I'm sharing the link in the chat. If something is missing, feel free to send it my way and I will add it, and we'll try to make this a reference point for whoever is looking into EVM384 updates. So, moving on: can we get some quick updates on EIP-2537 and EIP-2539? I believe there were some.
E
Sorry, I lost my mute button there.
C
Yeah, I think James should give an update on the fuzzing and testing which he did for all three similar precompile proposals, because the EIP-2537 one, as it stands, had been brought to the point where it was in a shape everyone considered safe and acceptable for inclusion.
E
Yeah, so my interest in 2537 and 2539 is primarily because of the upcoming Celo hard fork: we wanted to add both of these to the Donut hard fork, which is going to activate in March. The update on those is that we have high-quality Go implementations integrated with the client, and we're on track to push them out to testnets in the next month and to mainnet in March.
E
As part of our testing, integration and confidence-building process, we did a lot of structured fuzzing. We worked with Alex Vlasov to build fuzzers that are aware of the algebraic structures involved: instead of passing random byte vectors to the precompiles, we pass randomized points on the curves, i.e. group elements. We ran those for a month or so and did a few billion iterations without running into any issues with our implementations, Alex's implementation, or compatibility between them.
E
So we have a very high degree of confidence in these and we're going to be moving forward with them. This is not going to prevent us from looking at EVM384 integration in the future if Ethereum goes in that direction as well.
E
Alex and I could both talk about the fuzzing approach a little more, but at a high level: the fuzzers give valid inputs to the precompiles in most cases, but also have the ability to generate targeted edge cases and invalid inputs. So we're going through all of the possible cases.
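The structure-aware approach described here can be sketched as a toy differential fuzzer. This is an illustrative sketch only: it uses a made-up tiny curve over F_97 rather than BLS12-381, and it cross-checks two differently structured scalar-multiplication routines rather than real precompile implementations.

```python
import random

# Toy differential fuzzer in the spirit of the structure-aware fuzzing
# described above: instead of feeding random byte strings to an
# implementation, generate random *valid* group elements and check that
# two independently structured implementations agree on every input.
# The curve (y^2 = x^3 + 7 over F_97) is illustrative, not BLS12-381.

P = 97  # toy field modulus

def add(p, q):
    """Affine point addition; None represents the point at infinity."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P == 0:
        return None  # inverse points sum to infinity (also covers y == 0 doubling)
    if p == q:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def mul_naive(pt, k):
    """Reference scalar multiplication: k repeated additions."""
    acc = None
    for _ in range(k):
        acc = add(acc, pt)
    return acc

def mul_double_add(pt, k):
    """Differently structured implementation: binary double-and-add."""
    acc, base = None, pt
    while k:
        if k & 1:
            acc = add(acc, base)
        base = add(base, base)
        k >>= 1
    return acc

# Enumerate all valid points once, then fuzz with structured inputs.
points = [(x, y) for x in range(P) for y in range(P)
          if (y * y - x ** 3 - 7) % P == 0]

random.seed(1)
for _ in range(500):
    pt = random.choice(points)       # a valid group element, not random bytes
    k = random.randrange(1, 200)
    assert mul_naive(pt, k) == mul_double_add(pt, k)
print("all structured fuzz cases agree")
```

Real structure-aware fuzzers for the precompiles additionally mutate inputs into targeted invalid encodings (points off the curve, wrong subgroup, bad lengths) to check that all implementations reject them identically.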
E
You'll have to forgive me for being a little rambly; it's 8:20 and my coffee is still settling in.
A
For BLS12-381 and EIP-2537, what would be the best place for your fuzzing updates, if there are any? Is that published anywhere?
E
We're going to be doing some write-ups of this as part of our hard fork documentation release process, so I can take some time in the next few days to clean that up and publish something about it.
A
That
would
be
great,
okay,
so
moving
on
to
the
next
topic
here
I
mean
it's
not
the
topic,
but
I
think
it's
it's
a
it's.
The
next
piece
of
puzzle
that
we
are
trying
to
solve
here
is
about
the
concerns.
A
Most
of
the
people
are
very
not
very
sure
like
why
eip2537
was
dropped
from
berlin.
Why
edm
caught
attention
and
why
none
of
them
is
seen
in
the
berlin
update?
So
let's
talk
something
about
that
like
in
the
beginning
of
this
call
alex
class.
I
was
mentioning
about
the
difference
of
the
performance,
so
if
we
could
collect
some
thoughts
on
that
like
what
do
people
think
here?
What
are
the
concerns
and
how
we
should
move
forward
addressing
those
concerns.
F
So I think that the testing is a concern, but I don't think it's the main blocker, and I'm curious whether people agree with me here. I feel like the main blocker we've seen is that, regardless of the testing, client developers don't feel confident reviewing the code themselves, and that leads to: what happens if something goes wrong? We can't really fix it ourselves. Given that, one way to not necessarily solve but at least somewhat mitigate those concerns is to think through what would happen if there were a bug in one of the implementations on mainnet.

If there's a consensus failure where two implementations disagree with each other, what happens? Say geth has an implementation and they're the one with the error, while the minority clients used different libraries, so Besu, Nethermind, whoever, were actually right mathematically but are on the wrong side of the chain. To me, that's what has not been addressed, and it's what seems more appealing about EVM384: because those are just independent opcodes, it's much easier for client developers to have high confidence that they are implemented correctly and that consensus is tested properly across different clients. That feels like the gist of it to me.

Yes, there is a lot of testing, but I don't think people are confident, not necessarily in evaluating it, but in making the call, and that's what leads to the frustration. Alex, you mentioned it at the beginning: how much testing do we need? We've done a lot, and it seems to go around in circles, and I think that's why.
E
Yeah, the two recent security events are definitely really interesting case studies. Does geth not have a cryptographer on staff?
G
No, we don't have a designated cryptographer.
G
Because I wanted to know, I measured today how big the difference is if we were to merge EVM384 from, I think, one of the branches that is currently proposed. It's around 589 additions, so around 600, and back when we merged EIP-2537 it was around 15,000 additions. So this is roughly 15,000 lines of code compared to 500 lines of code, and this is basically the big blocker for us: it's just a much bigger attack surface.
C
Yeah, I think that's one of the reasons why every EIP has a champion, and there was a lot of effort put into building this particular one. A lot of independent implementations were made, so clients would have the freedom to choose which ones they like. We went even further, with paid audits and formal verification performed on them, especially on the blst one, the fastest implementation currently available, which can be used by all the clients.
C
So, regarding the attack surface, I don't understand what exactly you mean here. Yes, it's possible for someone reviewing the code to try to find places where it could lead to discrepancies between different clients, but that is exactly what we try to do when we implement those libraries: first, we ensure that they are correct in the good cases, so that we get the same outputs for the same inputs.

Then we check particular edge cases, both for the mathematical operations themselves and for the general logic, cases which we are aware of and which are documented everywhere from academic papers to the spec itself, which tells you that certain inputs are not valid, for example because they break the API of the contract or the definition of the pairing operation. All of this was done, one item after another.

So it comes down to the amount of effort that was put in, and to review by people who are comfortable auditing such code. I mean, it's not trivial to implement 384-bit modular additions and Montgomery multiplications, and if you have never done it before, you have a high chance of making a mistake.
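To illustrate the kind of pitfall being described, here is a minimal sketch of 384-bit modular addition over 64-bit limbs. The modulus used is an arbitrary illustrative value, not an actual curve parameter; the easily-missed step is that the raw sum can exceed the modulus even when no carry escapes the top limb.

```python
# Minimal sketch of 384-bit modular addition over 64-bit limbs, showing
# the carry propagation and final conditional reduction that a first
# implementation can easily get wrong. The modulus M below is an
# arbitrary odd 384-bit value chosen for illustration only.

MASK64 = (1 << 64) - 1
NLIMBS = 6  # 6 x 64 = 384 bits

def to_limbs(x):
    return [(x >> (64 * i)) & MASK64 for i in range(NLIMBS)]

def from_limbs(limbs):
    return sum(l << (64 * i) for i, l in enumerate(limbs))

def mod_add(a, b, m):
    """(a + b) mod m on limb vectors, as fixed-width code would do it."""
    out, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry        # fits in 65 bits: carry is 0 or 1
        out.append(s & MASK64)
        carry = s >> 64
    total = from_limbs(out) + (carry << 384)
    mod = from_limbs(m)
    # The easy-to-miss step: even with no top-level carry, the raw sum
    # can still exceed the modulus and needs one conditional subtraction
    # (one is enough because both inputs are assumed reduced mod m).
    if total >= mod:
        total -= mod
    return to_limbs(total)

M = (1 << 383) + 1234567   # illustrative 384-bit odd modulus
a, b = M - 1, M - 2        # inputs already reduced mod M
got = from_limbs(mod_add(to_limbs(a), to_limbs(b), to_limbs(M)))
assert got == (a + b) % M
print(hex(got))
```

Montgomery multiplication layers several more of these subtleties (per-limb reductions, a final conditional subtraction) on top, which is why independent review matters so much for such code.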
C
If you have done it ten times in your lifetime, you are much less likely to make a mistake. The same goes for experience in implementing such security primitives, and this is not the first time for these authors.

Look, for example, at Zcash: they have a few different clients, they have the cryptographic operations, and they still took this path, so they have independent implementations. Did they review them? Yes. They have a larger team of specialists, and I would say they are much more strict about security, and they were satisfied with the quality.

In this case I, by default, would have to be the person to contact in an emergency. And as a reflection of this mitigation, one which also gives the greatest benefit to the end user: there is already a precedent where a single library is used to implement cryptographic operations, namely the secp256k1 curve implementation.

So we cannot judge it the other way and claim there are no examples of existing code which can be exploited. Yes, any code can have vulnerabilities, even a standard library; no one is protected from this. As far as the policy of what would happen if clients diverge: logically, I would say that we fix toward the mathematically valid implementation rather than the invalid one, but that would not be my call to make.

So if the fix is fast, it can go to either chain; if it's slow, then an exception should be made in some particular code. But taking into account how much time we spent on it, with many different people reviewing it (maybe not specifically penetration-testing and vulnerability-research experts), the chances that someone will manage to exploit it with bad intent get smaller and smaller. In a similar manner, I think it's possible to argue that it is most likely easier to exploit the EVM implementation itself in different clients and trigger the same kind of failure, since the EVM implementation is most likely much larger and more complex, as we saw with the bug in the memory-copy precompile which happened a few months ago, I think.

It was a long speech, but more or less: everything is bounded by the time spent on reviews, testing and development in general, and by who actually wrote the code. I see several people new to the area elsewhere, but that's not the case for this precompile. For any precompile we still have to accept that humans can make mistakes, but we did our best, to the best of our knowledge, to not let them sneak into the final code bases.
D
Yeah, just something to add which hasn't come up today. When I look back at when all these discussions started, and it's been a long time, I think the complexity question was one of the big discussion points, but it was not the only one. The other big discussion point, and this goes back even before the current version of the EIP was proposed, was the large space of different curves people may want to use. Looking at the different curves, different projects were looking into different ones, not just BLS12-381. BLS12-381 came into prominence, to my limited understanding, due to its adoption in Eth 2.0 and some other chains, I guess for signatures.

But if I understand correctly, some other projects would be looking at different curves for zk systems, and that's why the old version of this precompile was more generic, trying to solve this larger problem. That problem, at least if solved by a precompile, was deemed too big in scope and surface area, and that's why I think the discussion somehow shifted.

If I recall correctly, the question that comes up is: if these nine precompiles are added to Ethereum now, does that mean that all these projects, and more specifically the zk projects, will be happy and able to accomplish their goals, or will they be looking for other curves?

And if the answer is that yes, they will likely want to use other curves, then we are going to have this discussion again. Are we going to introduce another nine precompiles for those, and if we do, where do we stop? Are we going to introduce nine precompiles every six or twelve months for different curves? If we do that, the complexity will blow up quite a bit. So maybe this is something we should think about at some point.
C
There is actually a very simple answer to this concern. BLS12-381 was historically the first such curve, found by the Zcash team: a curve with specific properties which also satisfies their security requirements in light of the recent number-field-sieve attacks, which required a larger curve. Initially people thought that BN256 was a very good choice, on the grounds that it is over a roughly 256-bit field and should have provided enough security.

That turned out not to be the case, so the base field for the curve point coordinates needed to be extended to something like 384 bits. When they generated the new curve, they gave it specific properties, namely a large number of roots of unity for zk-SNARK applications; that is unrelated to BLS signatures, for example, and not strictly necessary for them.

It was a curve with these particular parameters, and then Zcash was released with Sapling over this curve, and it has great parameters from many perspectives. It was included in more and more projects. BLS curves as such are unrelated to BLS signatures, but BLS signatures do require pairings, and one way or another such convenient aggregated signatures require a pairing-friendly curve. The only pairing-friendly curve with enough security was BLS12-381, whose main defining property is that it is over a field whose modulus is 381 bits long.

But then there was work by the Zexe team, which used one layer of recursion, and they generated another BLS12 curve, BLS12-377, which has some extra interesting properties. It allows you to do one layer of recursion: you prove one zk-SNARK over this curve, and then use another curve, with a modulus for the curve points in roughly the 768-bit range, to prove a SNARK about properties of the SNARK you have just proven over the BLS12-377 curve. Those are the two curves of interest at the moment, and that's why there are two proposals, EIP-2537 and EIP-2539, one for each curve.

So if a discussion comes up that people want another curve over an even larger field, it would mean either new precompiles (by default there is no other option) or a new set of opcodes. I see a question in the chat: let's make it arbitrary-precision over all the parameters. It's an option, and it is what was done in EIP-1962; whether it can be made reasonably efficient, I don't know. It may be a final answer, but for that it would be necessary to address the concerns about at least the EVM384 implementation: what I was calling the control-flow opcode cost, which in the current state of EVM opcode pricing is much larger than even the cost of the EVM384 opcodes that do the actual work in the function.
D
Sorry to interrupt, but I wanted to respond to your short answer before you went on to a different topic. Are you saying that only these two curves are the ones which, for the foreseeable future, will be needed by people, and that there is no chance any other curve will be devised in the next three years?
C
I cannot guarantee that no one will find another one. The point is that there are families of curves; they have particular properties and give different levels of security, and for the families that are known now, we can basically generate those curves easily by choosing one scalar parameter.

At the current stage these BLS12 curves provide the optimal balance, and those two curves cover a wide sweep of the interesting properties which people need for different end solutions, such as non-recursive SNARKs, singly-recursive SNARKs, BLS signatures themselves, and whatever people come up with later.
E
So, from a little bit less of the cryptography side: we can't say that no new curves will be found, but what we can say is that there are no ongoing standardization efforts for curves besides BLS12-381 and BLS12-377.
C
And an option to implement arbitrary-precision modular arithmetic opcodes may be a solution, but then it will become, I think, as complex as the original EIP-1962, because ideally you would want to price all of them differently depending on the length of the modulus, and that will add another painful layer of complexity to the EVM, which is already quite a fragile construction, I would say.
G
Yeah, if we want to do arbitrary-precision arithmetic, then the gas calculation would gain another layer of complexity.

C
Well, I mean, if you want to make it efficient, and by efficient I don't mean the pure execution time but how much the end user will pay for this implementation...
C
...then you would ideally want to price modular arithmetic over a 384-bit field cheaper (well, up to rounding errors, since gas costs are integers) than, for example, over a 768-bit field, because the complexity is quadratic for multiplications, so there should be a substantial difference.

So yeah, if you want to give people the ability to work with moduli of different lengths, in terms of the number of limbs, say a different, shorter function for 384 bits compared to 768 bits, then you would want some branching there, or something else. But the branching and the dispatch are not the concern; the concern is that the actual execution time for modular multiplication will be very different, and ideally end users would want to benefit from that.

You could price all those opcodes at the upper bound, but for additions the actual execution time for different modulus lengths will not differ much: it's just a linear number of arithmetic operations that you need to implement them. For multiplication, the scaling is quadratic in the number of limbs required to represent the modulus on the machine, and there the difference in final execution time, and hence in gas costs, will be substantial.
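The scaling argument can be made concrete with a rough operation-count model. The counts are illustrative only, not an actual gas schedule: addition touches each limb once, while schoolbook multiplication forms every limb pair.

```python
# Rough operation-count model behind the pricing argument above:
# limb-level additions scale linearly with the number of limbs, while
# schoolbook multiplication scales quadratically. Illustrative only,
# not a proposed gas schedule.

LIMB_BITS = 64

def limbs(field_bits):
    # number of 64-bit limbs needed to hold the modulus
    return (field_bits + LIMB_BITS - 1) // LIMB_BITS

def add_ops(field_bits):
    # addition: one pass over the limbs (plus carries) -> linear
    return limbs(field_bits)

def mul_ops(field_bits):
    # schoolbook multiplication: every limb pair -> quadratic
    return limbs(field_bits) ** 2

# Moving from a 384-bit to a 768-bit modulus:
print(add_ops(768) / add_ops(384))  # 2.0  -> modest difference
print(mul_ops(768) / mul_ops(384))  # 4.0  -> substantial difference
```

This is why pricing every width at the 768-bit upper bound would overcharge the common 384-bit case far more for multiplication than for addition.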
C
I'd say it would be questionable to price it at the upper bound: then no one gets the benefits, and in the most frequent case, the smaller fields, people will most likely pay a lot for the lost efficiency. But from a technical perspective it's possible, yeah.

I think the proposal by Marius was that it's still a single opcode, but one that can take various lengths of the modulus; it's not a set of three opcodes for every 64-bit multiple. If I understood the chat message correctly, it was just a proposal that it's a single opcode which can work over some set of modulus lengths. But maybe Marius can comment on what was actually meant in his proposal.
G
Thank you, you understood it correctly. I don't think it's feasible right now, but it may be feasible in the future to have three opcodes, or even precompiles, just for addition and multiplication mod N on arbitrary-precision integers.

That would be if we find a way to change the gas behavior of calling precompiles. I had the idea that you could put the EVM into a particular state so that you don't have to pay for calls into precompiles when you call separate precompiles, or something like that. Whatever; it's not really feasible right now, and it's also not really applicable to the question at hand, so I think we should just skip it.
C
Yeah, well, Marius has touched on the part which I was mainly arguing against from the beginning of the discussion. I understand that for EVM384 it's possible to fix everything; it would require a huge repricing of all the other opcodes in the EVM, extensive testing, potentially fractional gas costs. Then it would be possible to bring the final gas cost of the pairing operation at least very close to the current proposal of 2537, maybe a little more; by a little I mean, let's say, 50 percent. I understand that this is perfectly doable. It's just that the amount of work required for it is huge, and including such changes is already a slow process, and this change would be large and quite radical. Even as an example, the EIP-2666 precompile repricing, which is complete and much smaller (it's just six coefficients that needed to change), is still in a pending state. So I doubt it would be possible to pull this off in the short term, and by short I mean even a year, and people still want to use the primitive, so the primitive should be included one way or another.

So either it stays highly inefficient without a huge rework of the EVM and a repricing of many opcodes (which I largely believe means the stack operations, but we will see the report), or we should proceed with including the precompile in a similar manner as before, until there is a better solution, or until Eth 2.0 is there, so that we can pay for efficiency with something like Wasm and tolerate the slowdowns there too.
D
So, just to respond to what you said: that if there were different repricings in the EVM, then you think the cost could be driven down significantly. We came to the same conclusion, but there are multiple levels to this, levels of changes you may want to make, and you don't have to go all the way down to fractional gas costs. And if one were to do that, it would not only benefit something like EVM384; it would benefit...
C
So that's why, since the precompile was already ready for Berlin, it should be brought back on track, maybe even brought back into Berlin, because it was already integrated in all the code bases, for example, even if it's not in the new Berlin scope.

Well, we cannot make decisions here, but let's follow this approach: this precompile is good enough, and that part should not be questioned; meanwhile, the work on arbitrary-precision arithmetic opcodes and a larger repricing would also be nice to pursue, because I too would want to see those working. My doubt is only about the time.
I
Just to point out that for OpenEthereum, we think that the code is not ready because it's not reviewed, and we cannot add anything without a proper review. It's written by an excellent developer and it's a magnificent implementation, but it's not reviewed, and for us that means it's not really ready. The review of the code costs 1170k at minimum, and yeah, we need to...
I
A
So, to be clear, OpenEthereum is referring to the BLS curve right here?
I
B
I
A
Okay, so we are a little over time, so I would like to quickly summarize. I'm not sure if we are in a state to conclude whether this or that should be chosen for implementation or integration, and why or why not, but from the conversation today there are some concerns that surfaced. We discussed some of them: one related to gas efficiency, another the testing concern that is really important for mainnet readiness, like consensus.
A
If there is any consensus failure, how to address that; and the complexities that were highlighted for the BLS curve. Do people think that, before we can go ahead and submit a final report... obviously we are going to submit a summary to the All Core Devs meeting. We can have one more meeting on this, decide on some of the questions that we collected today, and reach out.
A
I'll be reaching out to teams separately, and James would be reaching out to teams as well, to get answers and then decide on the clients' part, because I believe there is some more clarity needed. Geth seems to be... I mean, Geth have shown their concerns and have done their research. Same goes for the Besu and OpenEthereum teams, as they mentioned that the auditing would be quite expensive for them. So yeah, what do people think here could be the next step?
F
So clearly we haven't come to a conclusion. I suggested at least having a Discord channel so that maybe we can move some of this stuff forward async a bit more. And Alex B., you mentioned that you had the report for EVM384 coming out tomorrow, so I think, ideally, if we could have just somewhere where we can discuss that on the chat, it'll make...
F
I don't think it'll solve the problems, but it will make the process of solving them slightly more efficient.
A
Yeah, okay. And one more thing, I'm not sure, it's just a question from my end: do we have any thoughts on adding EVM384 as a proposal, as an EIP, sooner rather than later? Because it would be easier for people to follow and to start generating momentum if it is planned to be included in any of the future upgrades.
D
Yeah, we plan to publish an EIP once we get these updates out.
A
Okay, sounds good. So, on the suggestion of having a Discord channel, is it something that we should proceed with, to begin the conversation, move the conversation?
A
F
Hopefully, yeah. Oh, I like #crypto. Does that make sense? And I feel like the most near-term thing is figuring out this EVM384 versus EIP-2537 question, and I think it's valuable to have discussions about both in the same place. Does that make sense?
A
Very good, yeah, okay. So that is one thing, and I think that is one of the next steps. On the part of meetings, there were two questions. One: do we want to have frequent meetings, not as frequent as weekly or so, but maybe once a month?
B
I have a thought on it. I think it should be driven by demand. So if we have a meeting that's recurring but nobody posts any agenda items or anything, it doesn't make much sense to me. But if it becomes more popular and people are posting... you know, a meeting with no agenda items doesn't make sense for anyone to join. I think that's my point.
B
So, just to finish up, if it's recurring, then I...
A
Makes total sense. So let's go ahead with the Discord channel, and we'll try to get momentum there. Like, what do people think about having it, the frequency of it? And if there is demand, based on that, we can have a future meeting.
A
Yeah, okay. I will go ahead and summarize today's meeting highlights as a comment in the same agenda section, and we'll share it with the All Core Devs in the next meeting. Hopefully, with the help of this new channel, we'll get answers to the questions and the concerns that we are raising, and we'll see... you guys can decide on either, or both, or which one should go ahead. Clients will get clarity, and that's my hope.