From YouTube: Ethereum Core Devs Meeting #81 [2020-2-21]
A: Hello and welcome to Ethereum core developer meeting number 81. This is Hudson. Let's see, real quick — okay, I just muted someone who was typing. Let's go ahead and start with the agenda.
A: This is going to be a super packed agenda today, so let's try to be cognizant of getting through your topics really quickly, with minimal interruption and not too much off-topic discussion. When it comes to the technical aspects, if something can be talked about on Gitter or on the Magicians forum, just make sure to keep that in mind.
A: Okay, we can skip that for now. Let's start with the EFI stuff, and I'll just — let's tag-team this with James. Is that okay with you, James?
B: Yeah, I would actually discuss any of the EFI or EIPs that are looking to be in Berlin first, and then have the general EFI discussion happen after.
B: There's been a couple of EIPs that are almost ready to get in, and then recently the eth2 deposit contract is looking to use a precompile so they can validate the BLS curves within the contract itself, which they can't do in Solidity.
B: 1962, or the BLS one that the eth2 team has created — they've created a proposal for a simplified version that only has the curves that they need. We can talk about that when we get to it; we can do that as the last EIP on that list. Then there's Danno's EIP, which is about scheduling — doing the scheduling with block time for the forks and just making it easier.
C: I do agree that we could make a call today and say: yes, we're all for this EIP — or some other "go for it." That I think we should and can do.
B: Yeah, that was more of what I was intending. As far as the scheduling goes, the only one where we really need to figure out when it could happen is the signature for the eth2 contract, and then whatever ends up ready by that time can also be included. It would just be helpful for me to get an idea of who wants to hit that timeline, and then, as a group, to get a general okay for that — not saying it will or will not.
E: Could you remind us what the timeline is?
B: Is Craig — the other Greg — are you here?
F: I thought — maybe I'm wrong here — but I thought that the eth2 deposit contract would happen — sorry, the eth2 beacon chain launch was to happen around July, which means the deposit contract would probably be, you know, weeks to a month-ish before that.
B: Yes, so the bigger conversation is: by June, which of those could be ready, so that the deposit contract could be made — and then which of the EIPs would we accept as EFI before that, so that the authors of the EIPs can either get ready and make it for this window, or make it in the next window when the next fork happens. Does that feel all right, Martin, the way I said that?
E: Well, it's fine by me, but I'm not currently implementing a client. So I think Martin — who else — is Danno here?
A: Yeah, Danno's here. I would say: Greg, can you give like a two-sentence overview of what your EIP is, again, just as a reminder? I see there's a lot of new people here today.
E: Okay, the proposal's out there. Essentially it adds three opcodes at this point — possibly two. There's the BEGINSUB opcode to mark the beginning of a routine — Martin and I have been discussing whether maybe just a jumpdest would do, but you have to have something to go to — and then two opcodes: one of them is JUMPSUB, go to a subroutine, and the other one is RETURNSUB, come back. For most implementations you would simply have a return stack.
E: So when you jump to a subroutine, you push the current PC onto the return stack, and when you return from the subroutine, you pop the stack and resume execution where you jumped from. It's really just that simple. It's a two-stack design, just like Forth, so it's basically getting the EVM up to 1970 standards.
H: Yes, I just want to add something to Greg's comments. Basically, I think that's what other machines are doing — subroutine jumps and a return-address stack — so that's pretty standard. And also the LLVM compiler could support that with a few modifications. So yes, I think this is a good EIP that we should put in. Thanks.
I: So one thing that we've been discussing with Martin about the EIP is that on the surface it looks really nice, and we are wondering how much time it would take to hack Solidity so that it can actually use this — so that we might somehow try and run some benchmarks against existing contracts. It would be really nice to see actual numbers for what this would mean.
H: Yes, but it's up to the Solidity compiler to do the change, and it has nothing to do with any Solidity language-level modifications. Basically it's just a compiler change, and the real change —
J: So the request was not to have all the compiler changes ready just before the hard fork, including this EIP. But the idea was that it is unclear if, you know, it provides an actual benefit to the performance of, let's say, Solidity smart contracts.
J: I think it would be very good to validate this EIP before deploying it in a hard fork, by basically just trying out the changes. I mean, the EVM changes are implemented at this time, so it is possible to actually spin up a small laptop network, use a modified compiler, and then just run some benchmarks — this is basically what we were thinking of.
J: The gas side would be the next step for this EIP, because it is such a low-level change, and, you know — I think it deserves to be evaluated before inclusion.
B: I can help connect you with them. Assuming that we have the Solidity benchmarks: is there any opposition to having this in, or strong feelings?
C: No — we discussed this a bit in the Geth team, and I think, as a whole, we're supportive of this proposal being eligible for inclusion. There was one thing, though, that Greg already mentioned: that's another stack, a parallel stack. So that's a kind of big change to the EVM — but it's not a showstopper.
C: In our opinion, another thing worth pointing out is that if we do have the BEGINSUB opcode, it means that when doing the jumpdest analysis we also need to mark out BEGINSUBs. For Geth this is easy — we do one pass and mark code sections and data sections — but other implementations which specifically look for the JUMPDEST opcode would have to either convert to our style, or do one more jumpdest-style analysis, basically a BEGINSUB analysis.
E: At the end of the EIP there's an example laying out how subroutines are going to show up in the assembly code, so I think it's pretty clear there. The difference in gas count — actual measured performance, I think, would have to be better, but that would have to be measured. But certainly it's going to save on gas substantially.
J: I have a quick question, though. I think there's an implementation attached to the EIP against the Geth code base? — Yeah, there's a PR.
I: The EIP mentions three opcodes, whereas the implementation does two. — Yeah, the implementation is earlier. — Yeah, okay, okay, that's fine. I was just wondering which one is the canonical version — but okay, then the EIP. Cool, thanks.
B: Okay, great. Danno, do you have any updates on your EIP? I'm not remembering the number right now — was it 2456?
K: One thing I was thinking of is a possible change. I don't think Jason Carver's on the call — at least not that I can see on my Zoom screen. His concern was that going back a thousand blocks to check might be too much load for lighter clients that don't store as many blocks. So one thing that crossed my mind is: instead of going back a thousand blocks to check the trigger —
K: We could go back ten blocks to check the trigger, and ten blocks back is far enough that, with the ommer limit of six, we won't have stray ommers when we trigger the upgrade. So if the block ten back has a timestamp past the trigger time, we would do the upgrade.
K: So, you know, mostly I think we're just haggling over how far the look-back is, and whether it's going to be, you know, zero or ten or a thousand — and that's probably a discussion that's better to have on the EIP's discussion thread.
K: I think we would need some testing. The current testing structure for these activations, I don't think, would quite support the look-back, so it needs some updates to the reference test files, and I'd also like to see some live transitions before we go to a testnet. So there's more of a testing burden on this, and it looks like most of the work on this is in the test code. So that's mostly it.
C: Yeah, I was thinking more about actual test cases — Hive tests — but maybe that's not a problem, because it can just ignore the system clock and use old timestamps for all blocks. So that's not a problem.
A: Well, yeah, so I think Danno said — good. I have a question for Wei, and anyone who was on the Parity team — the Parity Ethereum client team. What is your involvement right now in the OpenEthereum initiative, and is that something where we should consider you guys having an up-or-down opinion on this? Because what I heard was that Parity Technologies won't be supporting Parity Ethereum / OpenEthereum after Q2.
L: We will — I mean, we will still provide support for OpenEthereum for at least the next hard fork, and there are a lot of other teams — for example Gnosis — taking over a lot of the development work. So yeah, I think OpenEthereum should continue to be considered a usable client implementation.
L: Oh yeah, yeah — so I think that should be fine for us. It requires an actual stack, but I don't see any problem with that.
A: Trinity — oh, that's right, sorry, Trinity, that's another good one. They've been focusing so much on beam sync and 1x. I forget, when they come to the calls, who's from Trinity.
B: Oh, that's perfect. So Greg's EIP, which is number 2315, is moving to EFI. Danno's EIP — did we move it to EFI, or did we... I believe we had a general thumbs up, but you wanted to talk to Jason.
A: Oh, say that one more time — it cut out. I didn't think we gave any... you don't think we gave it EFI. Okay, sounds good. We can continue to not have it in that status as long as you haven't talked to Jason.
K: ...2456 is a fundamental enough change that it shouldn't be considered [inaudible] past where it's at.
B: The next one — I wrote a proposal for this one. It's two—
B: And so there's an exponential that takes off, until eventually the network can't support it — the block times become too big for the network.
A: Sure. So, looking at this EIP — let me just pull it up real quick. So it's EIP-2... was it 2515? I believe it's only in...
A: Perfect, okay, that sounds good — I was just making sure I was on the right page. All right, there's a few things. Since my assumption is you're not a client dev, you wouldn't be able to put this in yourself, as far as implementation and testing goes. But let's just talk about, in just a paragraph or two, the general idea of how this is laid out; then we'll get some opinions, and after that we'll see.
B: So the general layout is that the difficulty bomb has gone off a few times — sometimes due to having to delay a fork, so it goes off for a while — and then most recently it went off because I had miscalculated it and it wasn't double-checked, so we weren't sure when it would happen. And that's primarily because the difficulty bomb's effect is affected by the network.
B: There's the adjustment factor that happens within the block — trying to make sure the block time is between 10 and 20 seconds — and when the difficulty bomb is insurmountable by that adjustment, it depends on the current difficulty on the network. And because we don't know what the difficulty will be in the future —
B: We can estimate it — and there was a great calculator from...
B: Yeah, they made a great calculator for predicting that. The difficulty bomb as it is now still relies on assuming what the difficulty will be at a certain time, and over time that will get more accurate. But I think it would just be a lot easier if we split those two abstractions apart and kept the difficulty bomb — because, as I've heard from the community, there's a lot of people who have strong feelings about keeping it.
A: So, any questions on that? Anybody have comments?
A: It might be a situation where people need a little more time to read it, and if that's the case, I'd say it's a good idea, James, to just shop it around to the different teams — in the AllCoreDevs Gitter, or just people you talk to individually — and then, once it's in draft status... if it's in draft status by next meeting, that would be good to do.
F: I have a question — maybe this is a dumb question — but wouldn't freezing the difficulty and then linearly increasing it kind of go against the whole difficulty adjustment? I'm not sure — I haven't read the whole EIP — but maybe just trying to clarify how those two mechanisms would interact. What's the trade-off of not having the difficulty adjustment, and the potential security implications?
F: Yeah, so that's what I mean. Say — this is kind of a toy scenario — but say you freeze the difficulty and then 2x the amount of miners start mining on Ethereum. Are we going to have seven-second blocks instead of 15-second blocks, because our difficulty is kind of frozen? I understand that over time it'll grow, but if the amount of hash power grows quicker than the rate at which the difficulty bomb slows the network, we have a quicker network.
B: ...to jump on and increase the hash rate — and because the difficulty is now adjusting up only slowly, the block rate would increase quite a bit for a short amount of time, and they would accelerate their pace into the linear increase. The risk there is that, if block times are too fast, the uncle rate could be high enough to result in a sort of fracturing of the network — or just block times becoming very fast for a short time.
F: Yeah, I think the mining reward would also play into that, right — like how profitable it is for them and whatnot. But to me, that's one thing where I'd like to see someone who understands the various incentives much better than me kind of talk through the possible cases: what happens if there's, you know, 50% more miners, or half the amount of miners — because we kind of lose this dynamically adjusting parameter.
B: The reason I think that approach would have the same problem is not really being able to predict effectively when that first increase would happen. As long as the difficulty adjustment piece is in there, and the increase is some static function that it can adapt to, then there is this requirement to guess — to predict — what the difficulty of the network will be, in order to predict the effect of the bomb.
B: So I don't know how we could move that window. The other concern I would have with that approach is that the people most affected by this going off — or not going off — are the miners, and that never really was the intention of the difficulty bomb: to make it so miners are less able to pay for their electricity that month. So while it is less impactful for us —
B: We are also not as impacted by having block times increase, in the same way other stakeholders on the network are. So making the consequences more real — more visceral — for us in this room, who are the ones who can address it, means we will also be more likely to address it in a timely manner.
A: Okay, we can probably just go to the next one and just shop it around some more, I guess, James. That sounds like a good idea to me.
G: Yeah, yeah — so, just following the discussion on the last call: people wanted to get someone outside of this call, and an implementation of the EIP. So I have invited a few people — Zak and Kobi are here. Unfortunately, Joseph from EUI couldn't join due to personal reasons; I got an email from him an hour ago.
G: Unfortunately, this week wasn't the best one to get more people, because the Stanford Blockchain Conference is on right now and it's 6 a.m. there — a little bit inconvenient. But still, I think the people who were actually interested in getting something new in terms of elliptic-curve cryptography into Ethereum are here. So, in principle, if you want to ask them about the difficulty of the precompile or something else, you can talk with them. I also chatted on Telegram with people who implemented the cryptography, like Shambo, and for them, for example, it's not a problem —
G: — the difficulty of such a precompile and its edge cases; the documentation also didn't raise any questions for them. In general, what was adjusted over the last two weeks between the calls: the gas price was recalculated to increase the constant of gas per second — now it's 30 — even while it's way above what my laptop gets for the BN precompile, I mean the current implementation of the BN precompile.
G: Oh — Josef sent his colleagues, that's great. Yeah, so also the gas formulas were adjusted a little, but it's not a large difference. Otherwise all implementations are ready, and what we're doing now is just waiting on Sayed — who is most likely listening to this call but never joins — the author of the Go implementation; he's porting the gas metering routine to Go, which is the most critical part of the precompile.
N: Yeah, I'm happy to share a couple of thoughts, if now's a good time. I just kind of wanted to give a bit of an external viewpoint on the value of this EIP.
N: There are quite a few companies working in this space, and the current precompile situation means that it's difficult to deploy state-of-the-art or cutting-edge cryptography to Ethereum — especially given that there's been a lot of new developments in the past year that we can't currently leverage because of the limited precompile support. And I guess this EIP is kind of —
N: I see it as a way of future-proofing Ethereum, so that it can become a test bed for a lot of the advanced cryptographic techniques, and that, in turn, should provide a lot of value to the community, to the wider ecosystem — particularly in the form of roll-ups, in the form of, you know, using STARKs and SNARKs for proofs of data availability and for scaling. And — sorry, hang on, I just lost my place there a minute.
N: Oh yes — and basically there's also been some research lately, spearheaded by Kobi actually, which highlights the fact that if you want 128 bits of security for SNARK roll-up scaling solutions, that kind of thing, the BLS12-381 curve isn't really sufficient. So whilst it would be a big improvement for Ethereum to have the BLS12 precompiles, the ability to use more secure curves, which this EIP supports, would be extremely valuable.
N: When it comes to implementation, I'm about to add a few thoughts; otherwise I just echo what Alex says. It is a complicated EIP, and there is a lot of potential attacks that need to be closed off — but fundamentally it's not new cryptography that's being deployed, or new techniques. And also, the teams that would be using these precompiles as part of their tech stack —
B: Yes — so I can see the value of it; I think all of us here agree on the value of it. The concern is: how do we do it in a way that is secure, given previous experience with alt_bn128 and others? So if your organizations, or others, could — I mean, not audit it in the formal way, but look at the implementation and look at the specification and just say, "yes, both of these things are lining up" — as a vote of confidence for those things.
C: It's basically a virtual machine for modern crypto, which I don't think is suitable for Ethereum. I think we should add precompiles for well-defined use cases, and if we need some particular precompile for some well-defined use case — such as Zcash interoperability or eth2.0 — then we can add a precompile for that. But I think it's too large a step to take to add this big, generic precompile, and there are several concerns. One is the actual crypto correctness, and there I can only, you know, trust that the cryptographers know what they're doing.
C: The other large concern is that these are extremely large code bases. So even if the crypto is right, there may still be mistakes in the implementation — and, like, I briefly looked at the code, the golang code base, and saw that, like 12 days ago, there was a commit which fixed a simple mistake that copied a value into the wrong destination in one edge case. These things happen, but if they slip into production, then we have a problem.
G: Well, I think I should explain a little bit about why this precompile was made universal in the first place. It was actually to eliminate, forever, the discussion about how many people want this feature, how many people want this curve, and who is to decide whether we include it or not — and, second, concerns about code-base size and similar things.
G: This is the reason why we run the testing: to find such mistakes. And if you look at the implementation — for example, if you would ever want to add the BLS12 curve with a 381-bit base field, and if, after this, you would ever want to add the BLS12-377 curve — it's a different one — that gives you a huge other set of capabilities which people would potentially want.
G: Actually it's — well, not eighty percent, as in my previous example, but most likely 75. So let me explain a little bit how it works for the Go implementation.
We
originally
said
that
for
now
it's
it's
only
for
what
they're
64-bit
processors
like
performance,
one
especially
done
by
handcrafted
assembly
which
was
later
cross-checked,
and
I
mean
still
independent
orientation,
and
it's
not
didn't
hit
the
like.
G
We,
we
didn't
hit
the
inclusion,
we
didn't
hit
the
hardware,
so
we
can
fix
the
mistakes
that
at
least
the
gas
price
inform
was
very
calculated
on
the
lisa
last
week.
So
this
small
correction
is
not
a
big
deal.
I also included a set of what could be called downgrades — or a whitelisting feature — which people have discussed before.
G: Unfortunately, when I started to collect the list of the curves which are known to be of good use for a lot of various applications of modern crypto — I mean, for existing projects, already — when I got into the list of these curves, I collected quite a huge one.
G: It's also listed in the document — the main readme file in this repository. So, at the end, this precompile wants to eliminate any form of centralized decision about which curves go in and which curves don't — and that's not going to happen if you implement it as a set of specialized precompiles.
G: But this is the state of modern cryptography and its specifications now: with recent discoveries, there are more choices which give you a lot of different features. And the discussion about whether you use SNARKs or STARKs just kind of becomes a separate discussion. So that's why it's universal, and why you call it a virtual machine.
O: Yeah, sorry — I just want to react on top of this. You see, I mean, I understand the logic of implementing one crypto precompile for all and being done with it. At the same time, you can't ignore that it's an attack vector, and I'm just going to go back to one specific mistake that happened with Windows — who have a very skilled crypto team. That happened just one month ago.
O: So my concern at this point, concerning the EIP, is not the crypto itself — because the crypto, the math, works, and we can verify the math; it's okay. The risk is probably at the implementation level, and what concerns me here is that we don't have — I mean, I know you told me that there is another team, but they haven't come up to present.
O: And also, we don't have some other known figure — Kobi being one of them, or Zak being one of them — actually doing the implementation and saying that this current spec is enough for anyone to implement against.
G: I mean, we cannot ask someone specifically if he doesn't want to implement it or do the duplicate work. As far as I know, Zak usually writes in C++, and a C++ implementation exists; if he wants to take it and change it, or fork it just for his own interest, he's free to do this. But no one has volunteered to make an alternative implementation according to the spec, because the set of languages in which it's already implemented is wide enough and covers the most usable ones —
G: — and we cannot force anyone to do this. Regarding the Windows hack which you mentioned: as far as I know, the problem was not even the cryptography itself, it was in the parsing format — and that is equivalent to the API part of this precompile, which is likely much simpler than the encoding format in the usual certificates. But, I mean, attack —
G: — surfaces: that chapter is also covered in the documentation. There were only three main parts, and I spent a few pages explaining what they are, how they are addressed, and why this way of addressing the attack surfaces is enough.
G: I mean, the precompile documentation and readme were updated quite a lot over the past weeks, and I cannot force people to actually look at it and pay attention, because it's quite a large document. I mean, for these questions, there is a formula.
C: Well — no. Well, I think that's a bit of a strong argument. I mean, I have read the attack surface description; I just don't agree with the conclusions there, actually. What can I say — there's one little paragraph about it being consensus-breaking, where you somehow say that it cannot be consensus-breaking because of conventions. Yeah — I don't agree. Sorry.
J: So I would like to also expand on that. I think, with this EIP, our main concern is really with the complexity of this particular precompile — and this would definitely also be my concern.
J: It is definitely possible to create large programs which behave correctly and are in consensus across multiple implementations, as the Ethereum implementations show. But especially with this precompile, I think it might be stretching the limit a little bit as to how big a precompile can be. And I do understand that it is quite important for applications today to have access to these kinds of cryptographic features.
J: But I do disagree with the notion that the process of adding precompiles is too complicated right now to make that happen. As client implementers, I don't think there's any problem with adding any particular precompile, as long as it is reasonably simple and there is a use case for it. With this particular EIP there is definitely a use case, but it is not simple — and I think, as such, it doesn't really meet the criteria for a good precompile.
G: No, no — I mean, I wanted to answer these questions. For the difficulty question, you may ask: there is a list of curves which would potentially be of interest to users. Unfortunately, this list is quite large, so at the end of the day it would result in making, let's say, eight different EIPs.
G: Each of those would take something like three to six precompile addresses, and I can easily — I mean, I can definitely split this precompile by just doing the calculations for some hard-coded set of parameters, even using the same code base as a reference, and say: well, let's include these eight curves. But the problem is, all of those curves are usable, and people want to use them for one or another particular application. What would be the chance to actually get all of this included in a time frame of a few months?
G: And, I mean, it's eight precompiles with a huge set of addresses — each with a vanishingly small chance in this case.
G: I just want to — I mean, because this is what I wanted to avoid with this precompile: avoiding this specific decision every time. What would the chances be for that case?
P: Right, I just want to comment on some things that happened — so I won't go again into stating the value.
P: There's a bunch of use cases that we want to use it for, so I won't go into that. I will say, to Alex's credit — or to Matter Labs' credit — this has explicit formulas for how to implement the specific crypto features that are needed, which is quite unusual. Usually people implement from papers, and the papers are scattered all around, and people go to different websites and implement different ways to add or multiply points. So having these explicit formulas is super helpful for making an independent implementation.
P: So this is a very good, positive point about the EIP. One thing that, for example, I don't completely understand yet — or can't internalize yet — is the gas side.
P: I understand the ideas, but it's hard for me to internalize it, and I think this is somewhat related to how people perceive the complexity of this EIP — because it feels like it's a very big unit that you must take as a whole, all or nothing, and that is kind of scary, and kind of problematic. Or maybe not problematic —
P: — but it makes you look at this 20,000-or-whatever-lines code base and say "this is very big," and I can say I disagree with that. I think it might come down, and maybe we could discuss at some point how to make this at least lighter. So, like Alex mentioned, you could divide this — so you wouldn't have, like, five precompiles, you would have 60 precompiles for all the different curves that we want, and maybe even EIP-1962 would be the underlying implementation.
P: So you could divide this into 60 precompiles and do that, but then it would require a hard fork every time — which is maybe fine — and I think that's something we could discuss.
P: Does this improve the complexity worries? Or, for example, is even including one curve already scary, such that even that should be evaluated further? These are things we could discuss, as to how we can actually move something like this forward — because I totally understand the worries about the complexity it introduces, and I think it's mostly a discussion of how we can modularize it so it is not perceived as so complex.
I: So, just to add something to it: our biggest concern with the code, with regard to complexity, is that I'm almost certain there are bugs in it, because I don't believe that three times 20,000 lines of code is bugless. So, from my perspective, there will be a consensus issue — that's for sure. And the question is: Parity is currently maintained by, I don't know, two people, and is in the process of being switched over to OpenEthereum, with God knows what governance model and who will actually be maintaining it.
I
Geth
is
again
maintained
by
a
handful
of
people
and,
if
hits
the
fan,
who
is
going
to
fix
it?
That's.
A
What I'm saying actually, so, just real quick, Peter: I was wanting to get buy-in from some of the cryptographers in the community, including people on this call who either built or support this EIP, to be in a Gitter or Telegram or, you know, some bridge channel, so that if it hits the fan we can call on you all to jump in and, within a very reasonable amount of time, fix things. And when I say fix things, more identify the problem, that kind of thing. So, Peter, what about that idea?
I
To jump in within a reasonable amount of time: I would like to mention the time of the Shanghai attacks, when you called me at 4am in the morning to get up and solve some issues. So is anybody, is a group of cryptographers, willing to be called at 4am in the morning and fix it at 4am? Because if not, then that's a huge problem, because we cannot fix it, I think.

G
I mean, I saw this as a... well, a responsibility. I mean, I didn't know that it existed, but I thought it's by default, so you can count it as a yes from my side.
A
Regardless, I mean, it's not going to fix the concerns that the Geth team has, for sure, but if this goes through, anything that we can have in addition to just putting it in and everybody leaving, like having this supported until well after eth2 becomes a thing, I think is going to help with some concerns. Not all concerns. Peter, you can continue, because I interrupted you earlier.
I
Yeah, yeah. So, I mean, I would gladly fix something, and obviously, if it hits the fan, I would be super pissed and I would be staying there. But if it's something that I cannot, that I don't understand... So I think it's also really important to consider what the consequences of the whole thing are on the network. So yes, everyone wants everything to go really smoothly, of course.

I
But what happens if... let's suppose the implementation is huge, so you can have two types of issues. One of them is:

I
You can have a consensus issue where, for example, if there's a bug in, let's suppose, the Go implementation, and eighty percent of the network goes off in a different direction, or whatever, let's suppose 80% of the miners go in the direction which has the bug, then the only way to solve it is to roll back the chain, which is this horrible situation, or pause the chain, IOTA style. So, are we willing to do that?
I
Probably not. Then, even if all the implementations are in consensus and there is a bug in the cryptography itself, I cannot verify it. I can say that the five different implementations do the same thing, but whether that makes sense or not is beyond me. But if something like that happens, then it's game over.

I
It's a catastrophic failure for Ethereum, and these are the risks that we are talking about, and the reason why Martin is kind of against this and was suggesting that individual EIPs that introduce curves one by one can help, is because it's much easier to say that, yes, BN256 or whatever, BLS signatures... even adding that one curve can be really dangerous, but I mean, it's a tiny surface compared to enabling a Turing-complete cryptography machine.
G
Well, it kind of gets us back to the idea that we can roll it out gradually, but not by making individual curves individual pre-compiles with addresses and all other things, so people don't really know what's under the hood. We can just whitelist the curves, I mean, as Kobi mentioned.
G
Well, even if we set it up as a set of independent pre-compiles for different curves and still use the same machinery underneath, then it's not that different from just making a whitelist, which in this case is just prefix matching for whatever the user wants to call in the pre-compiles.
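The whitelist-plus-shared-machinery idea described above can be sketched roughly like this. This is a hypothetical illustration, not any client's actual code; the addresses and curve operation names are made up:

```python
# Hypothetical sketch: several curve-specific pre-compile addresses are
# individually whitelisted, but all dispatch onto one shared generic
# backend (the EIP-1962-style machinery). Addresses and names are
# illustrative only.
WHITELISTED_CURVES = {
    0x0A: "bls12-381-g1-add",
    0x0B: "bls12-381-g1-mul",
    0x0C: "bls12-381-pairing",
}

def shared_backend(curve_op: str, input_data: bytes) -> bytes:
    # Placeholder for the single generic implementation that every
    # whitelisted address would share.
    return b"result-of-" + curve_op.encode()

def run_precompile(address: int, input_data: bytes) -> bytes:
    """Route a whitelisted pre-compile address to the shared backend."""
    curve_op = WHITELISTED_CURVES.get(address)
    if curve_op is None:
        # Anything outside the whitelist is rejected up front.
        raise ValueError(f"address {address:#x} is not a whitelisted pre-compile")
    return shared_backend(curve_op, input_data)
```

The point of the sketch is that adding a new curve later means extending the whitelist, not shipping new machinery.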
O
Can I make a proposal for this? Could we imagine providing the EIP with sort of a beta-testing mode, where we offer a bounty, and no one can use it except to demonstrate an attack on this EIP?

P
Because then they would wait for it to go to mainnet.

O
Because, as Martin said, there is, 100 percent, like 99... there is probably a bug there, and if someone hits that in an individual implementation, we would have a consensus issue. At least having a bounty-like challenge for this cryptography library could probably help limit the attack vector in the first phase, and maybe letting it run for like six months or whatever, until it gets trusted enough.
F
So maybe this is a separate conversation, but I feel like, with this EIP and what you're mentioning, Lewis, just having this longer testing, it kind of reminds me of what we were talking about with EIP-1559 on a previous call, where it's like:

F
We maybe want something more than, you know, activate it on a testnet for six weeks and then fork the mainnet. I'm not sure how we do this.

F
I don't think the 17 minutes we have left is enough to discuss a proper plan for it, but maybe it's worth thinking about what's a better way to test these complex EIPs, because there's more coming down the pipeline, so that we're kind of, you know, confident with them. And I know we wanted to talk about ProgPoW today as well; that's another one of those, and I feel we'll just have the same exact conversation. So what's the... how do we test hard forks?
O
Too... I'd like to be there. Great, so let's maybe coordinate something at EthCC in Paris and, you know, start discussing there.

A
Yeah, I think that's a great idea. Also, just before we end this, just as quick as you can, Danno and Wei: we know Geth's position, but I haven't heard from Besu or from Parity, and there's particular concern over Parity not being kept up with. So I just want y'all's perspectives. We can go, Danno.
L
I believe this is 1962. So, I mean, there's an implementation for us, so that's good for us, but we also think this pre-compile is quite complicated, and yeah. So that means we are slightly against implementing it.
A
Okay. This is why we have this type of governance system, and I think having in-person meetings at EthCC, where there could be longer-form discussion and individual discussion, is going to help clear some things up. But in no way can we confirm this EIP is going in, because at this point there are clients who would not want to implement it right now, or, what I should say, have serious concerns about implementing it. So, as unfortunate as that is for people who are wanting to use these curves...

A
This is just how the process goes. But I do thank everybody for coming here to discuss this, especially the people who took their time on the west coast during Stanford Blockchain Week to discuss this particular EIP, and the cryptographers who were offering to lend support if this goes in. Peter, do you have one last thing? I saw you pop up.
I
Yes, so, just since you mentioned that: I don't want to say that the Geth team opposes the feature itself. We have absolutely nothing against adding whatever cryptography is deemed useful and necessary.

J
Yeah, yeah, minimalism, I think, is what he was going for. It's kind of like, you know, Ethereum has a large amount of complexity already, and we have been pretty good at slowly expanding the complexity and not adding a whole bunch of complexity all at once. And I think this is something that, you know, over time, as Ethereum 1 evolves...
J
There will be more and more complexity in the execution layer, because this is where all of the interesting features are, and there are many, many things that could be done to improve what you can do in a smart contract. But I think there are certain things where we really have to slow down and really check: okay, so what is the thing...

J
...we really need right now, that will add the most value, and then maybe we go for the next thing, and then for the next thing. And I think doing it this way will basically allow us to vet each feature more or less completely. I mean, the last few times we added cryptographic pre-compiles, there was a lot of exhaustive testing done, and I see there has been a lot of exhaustive testing on this particular EIP, with all the fuzzing that's been going on.
J
However, you know, just adding this large chunk of complexity all at once is just a really scary thing, and maybe we're all going to be able to absorb that scare, and, you know, in a couple of weeks or whatever we're all going to say, you know, why did it...
G
If I may have like a 10-second closing word: first of all, the 20k lines of code is most likely not the right estimate, because more than half of this is testing. Second one:

G
If Martin has a counter-example for why there will be no consensus breaking if the implementation is done correctly, I would like to see this counter-example, because otherwise I believe in my kind of semi-formal analysis.
B
So, going there: there is a pre-compile that we want to look into individually getting in, and I can see that this is something that will take a longer amount of time for everyone to be comfortable with, and I don't want to continue the conversation on that, I think.
A
Okay, so yeah, I agree that we need to figure that out. Would there be an appetite for meeting in one week and fleshing out specifically this EIP, after discussion on Gitter and the Magicians? Who would be able to come to that?

A
This might have to wait for two weeks, given the EthCC stuff, and that's unfortunate because of the timing; that might put off some of the other discussions. But I guess this is the process now: if it doesn't get in at Berlin, it doesn't get in at Berlin.
A
We can only do so much with the time we have. James, if it's okay with you, I was wanting to get the OpenRPC discussion going, if that person is still here, because we promised last time that they'd be able to have a little bit of time, and we're at the end of the meeting again with like eight minutes left. So I think...

B
Yeah, I think we should do that, and then we should also discuss ProgPoW. I know things might go over, but I think it's important to talk about, as far as, in conjunction with Berlin going in, the eth2 stuff anyway continues.
A
We'll do light discussion on ProgPoW; that won't be anything binding, if people have to leave. So yeah, let's start with Zach, though. Go right ahead. Cool.

Q
Yeah, thanks for having me. Yeah, sorry, I turned up a little bit late today, but yeah, glad to be here.
Q
So I just wanted to quickly maybe start by saying a little bit about what OpenRPC is. So yeah, OpenRPC is what is called a service description specification, so a way of describing a service. There are other ones that exist; OpenAPI is probably the most well... definitely the most popular one, and that's the one that I've used a lot in the past, and I found that there were some particular challenges when using OpenAPI with JSON-RPC services.

Q
It also has a lot of features that are specific to REST-based APIs, and so that's why we made OpenRPC, which started out as a fork of OpenAPI, and we worked with the guys that maintain that to figure out basically how to set this thing up. And so yeah, we started out by deleting all the stuff that only pertained to REST, and the beauty of JSON-RPC is really its simplicity.

Q
And you can really tell that by how much stuff we were able to remove from OpenAPI. And we also worked a lot on tooling around it as well. So the motivation behind all this is really...
Q
We wanted the same sort of tooling that you get from OpenAPI, but specific to JSON-RPC. And there are a couple of other things that we added in there as well, like the concept of service discovery, which maps really well onto JSON-RPC, whereas for the OpenAPI stuff they have ways of doing it, but it's a little bit different. And so, like...
Q
For example, we have rpc.discover, which is a method that you can add to your JSON-RPC service, "rpc." being a reserved prefix for JSON-RPC, and the idea there is that that method would return the service description for itself. And so you can sort of ask a service: hey, what methods do you have? You know, what parameters do they take? What do they return?
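The rpc.discover call described above is an ordinary JSON-RPC 2.0 request. A minimal sketch; the response shape in the comment follows the OpenRPC convention and is illustrative rather than copied from any particular server:

```python
import json

# Build a standard JSON-RPC 2.0 request for the rpc.discover method
# described above. Its result is the service's own OpenRPC description.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "rpc.discover",
    "params": [],
}

payload = json.dumps(request)

# A conforming server would answer with something shaped like:
# {"jsonrpc": "2.0", "id": 1,
#  "result": {"openrpc": "...", "info": {...}, "methods": [...]}}
```

Because the method lives in the reserved "rpc." namespace, it cannot collide with application methods like eth_getBalance.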
Q
So, you know, fundamentally we built this stuff for JSON-RPC, not for any particular one technology, just JSON-RPC in general. Then, you know, of course it has specific applications within blockchain, because most blockchain clients use JSON-RPC, but also, within Ethereum, Ethereum Classic, whichever, there are multiple client implementations.

Q
All trying to, you know, hopefully adhere to some common base set of methods and interface, right? And maybe some clients add certain functionality and whatnot, but, moreover, having a way to communicate these differences is really important. And so yeah, that's pretty much the gist.

Q
We've put together a specification for Ethereum, like the base-level set of methods for Ethereum. We haven't had too many eyes on it really, but we're using it inside of multi-geth, so multi-geth is supporting those methods and it's implementing the service discovery as well. But yeah, there's also a lot of interest outside of blockchain as well, so yeah, happy to answer any questions that you have, and also we're really happy to help out.
A
And just to make sure I understand: is the ask to implement this at the client level? And, for those who might not be familiar, this is not a consensus-breaking change, right? This is just something at the client level.

Q
Yeah, totally, yeah. Definitely, it's at the client level. It also... I guess... I have a couple of examples, I don't have the links handy right now, but one example of a way that it could be used is when proposing an ECIP or an EIP.
Q
If the EIP includes a new JSON-RPC method, or a change to an existing JSON-RPC method, or something like that, it can be, you know, specified in a structured format; you know, you can just include the OpenRPC definition of the method, right?
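As a rough illustration of that idea, an OpenRPC-style method description that an EIP could embed might look like the following. The method name and schemas here are entirely hypothetical, chosen only to show the shape of such a definition:

```python
# Illustrative only: an OpenRPC-style definition for a hypothetical
# eth_exampleMethod, the kind of structured description an EIP could
# embed when proposing a new JSON-RPC method.
method_definition = {
    "name": "eth_exampleMethod",
    "summary": "Hypothetical method used to illustrate the format.",
    "params": [
        {
            "name": "blockNumber",
            "required": True,
            # JSON Schema describing the parameter's wire format.
            "schema": {"type": "string", "pattern": "^0x[0-9a-f]+$"},
        }
    ],
    "result": {
        "name": "exampleResult",
        "schema": {"type": "string"},
    },
}
```

Tooling can then consume the same definition that reviewers read, instead of prose descriptions drifting apart from implementations.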
Q
So that's quite nice. But aside from that, yeah, for each client it's up to them to implement this stuff or not. Like you said, it's not consensus-breaking or anything like that.

Q
It's just something that, for example, with multi-geth...

Q
So yeah, the idea is to just save time and not really force it onto anyone, I guess is a good way to put it. So there's no need for anyone to implement anything if you don't want to; it's just a tool.
A
Cool. Anyone have questions? Or Peter?

I
They are super complex, and you have API generators for pretty much everything. Essentially, I was using the Go APIs, and the problem is that, yes, you do get these APIs generated, but they are more or less useless by themselves, because after the API reaches a certain complexity, with the generated code you just essentially map function calls, or API calls... you just translate them to your language. But most often the user doesn't really want to do the low-level calls individually and assemble and pass in all the configurations.
Q
Agreed, there are some cases where the complexity is too great for a generated client to really fit the bill. But, similar to the OpenAPI world, there are many REST patterns that people use that don't necessarily fit REST or OpenAPI, and in that case you either accommodate it by adding plug-in interfaces to the generators, or, as you said, you're stuck with having to write your own, and if that is the case, that is the case, right?

Q
So... but certainly there are a lot of cases where what you want is just a very lightly wrapped interface to your methods that includes static typing, and, say, if it's a JS or JavaScript generated client, then you want JSDoc-annotated functions, so that, you know, in your editor, when you're calling like eth.call or whatever, you don't have to go look up the docs for how to use it, right?
J
So the thing is, I think we are happy to add the server side for this. I doubt that we will change, in mainline Geth...

J
We won't necessarily change the client based on this, because I think we are still at that part where, basically, we do have a handwritten client, which I think basically includes the functionality that you would need to interact with the chain, and we do add to it from time to time, but in general we're more trying to provide a stable sort of Go API, even if the underlying mechanism used by this particular Go API changes. And we do strive for the stable API on the client side also, and so I don't think we'll be using the generator, but we're super happy to just add the rpc.discover endpoint and return it.
Q
Yeah, cool. I mean, the intent is for, you know, this stuff, I guess, to highlight differences between clients, really. So, you know, if you're...

J
Oh yeah, okay. So the thing is that, within that server implementation, we basically need to have a way to auto-generate this schema from the provided methods, and that's not something we are doing right now.
Q
Right, so that's a very good point that you brought up, so I'll...

Q
First start by saying that we have a fella on our team named Isaac who is working on exactly this stuff, and what he's trying to do is, yeah, infer from the code what the schema, what the service description document, ought to look like. And so that's sort of in the realm of document introspection, which is definitely one avenue, another one being starting from the document and then updating the typed interfaces in your code, which, you know, breaks compilation; you go and fix it, and now everything's happy, okay, you've implemented the change to the interface, right? So they're sort of two different schools of thought.

Q
I suppose... so.
J
There's a big benefit to this. So the big benefit, if we'd go down the path where, basically, we would define the official sort of Ethereum RPC interface, as you have done, as a schema like that, then, loading it into the server, we'd also be able to provide canonical parameter names, which is something that we cannot do right now. So...
J
They are, you know, by position, and that can actually be a bit of a hurdle, because you actually have to remember which thing goes into which place, and some of the methods work around it by just taking an object as a parameter, but that's kind of a hack. So I do feel that this is something that could provide a big benefit to, you know, users of the RPC interface, because they'd be able to use named parameters, finally; people have been requesting that feature for a long time.
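The positional-versus-named distinction being discussed can be shown with two request payloads. A sketch: the parameter names in the named variant are illustrative, since the canonical names would come from the agreed schema:

```python
import json

# Today's Ethereum JSON-RPC uses positional parameters: the caller must
# remember the order of the arguments.
positional = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_getBalance",
    "params": ["0x0000000000000000000000000000000000000001", "latest"],
}

# With a schema providing canonical parameter names, a server could also
# accept by-name parameters, so ordering no longer matters. The names
# "address" and "block" here are hypothetical, not canonical.
named = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "eth_getBalance",
    "params": {
        "address": "0x0000000000000000000000000000000000000001",
        "block": "latest",
    },
}

payloads = [json.dumps(positional), json.dumps(named)]
```

JSON-RPC 2.0 itself already allows `params` to be either an array or an object; what is missing today is an agreed set of names for each method.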
A
I want to time-box this a bit, absolutely, but Zach, if you could just tell us how to get in touch with you: are you on the All Core Devs Gitter? Just, basically, how to get in touch with you or your team, if people are interested in this.

Q
That is a good question. GitHub is definitely the best place to chat about this stuff if it's related to the specification; there's the open-rpc/spec repo, happy to chat there and entertain any questions. Otherwise, I'm on, you know, Telegram, Discord, Twitter, all that stuff. Gitter...

Q
Yeah, yeah. So actually we've been talking about this, sort of having the need to maybe start a Gitter, for these exact reasons. So maybe I'll get to that this week, or today.
A
Feel free to come back and give any announcements of better communication methods if you'd like. Even if it's not to talk about this whole thing, y'all are always welcome here.

Q
Oh well, well, thank you very much for your time, and it was nice to meet some of you at ETHDenver last weekend. Oh yeah, thanks again, all.
A
Right, thanks, Zach. All right, on to ProgPoW; thanks for staying over, everybody. James, did you... or actually, I should put it this way: we could have James go ahead with it, or Martin, you put it on the agenda, so maybe it's better to hear what you wanted to do with it today and then go to James.

C
Yes, sorry. So my idea was basically: I think we should have another discussion of this proposal and see what the next step is. I would, for example, propose that we launch a new testnet with the updated 0.9.3 implementation, and yeah, I don't really know where we're at in the discussion or in the decision-making process.
A
So, Andrea... or Andrea, I'm really sorry if I'm not pronouncing your name correctly: where do you see where we are in the process?

B
Yeah, so, their readiness: they have some testing on some testnets, they're mining on the 0.9.3 spec, and it's, in my opinion, the closest EIP to being ready to launch. So as far as status of implementation goes, they're pretty much ready to go, unless Andrea has an opposing opinion to that. I do have a proposal for a hard fork scheduling for this, which is the BLS pre-compile getting in sometime by June, having a fork scheduled for that, whether it's with the 1962 one whitelisted or if it's a specific pre-compile there.
B
The eth2 team is working on that right now, and it's important that we get that done before the deposit contract, so things can be validated on chain. Then for ProgPoW and its inclusion: I would suggest that it is a contentious upgrade, but as I have done research around that, there isn't...

B
The likelihood of a network split is very, very low, because I haven't actually seen someone willing to be on the other side of that. But I do agree that it is a contentious upgrade. So my proposal is that we have a fork for the BLS precompile, and then three weeks later, the next third Wednesday after that, we have a fork that includes only ProgPoW, and...

B
And that's the suggestion, and there's a lot to go in on as to why; I don't think we have any more time to really go over that.
A
My main question is, when you say three weeks: is that we're testing them at the same time, but we're just including them in forks at different times, in order to make sure people have the opportunity to fork off if they want to? Is that correct?

F
I think, from a community perspective, with enough people opposing ProgPoW, I feel like putting the eth2 pre-compiles and ProgPoW a month apart is kind of conceptually weird, for some reason. You know, if you told me like three months or two months between the two, sure. I'm also unsure... like, I know we've had a lot of discussions around whether people want ProgPoW or not, and given how contentious it is...
F
I'm personally a bit uneasy about including it. I think kind of deciding through a network split is far from ideal. So yeah, I'm a bit uneasy about ProgPoW in general, but I'm especially uneasy about having it very close to an eth2-related upgrade, just because of the potential confusion.

B
So, as far as the releases go: we could have the release for Berlin as soon as that's ready, and we could release the ProgPoW one shortly afterwards. And it's more of... if we wait another three months, then ProgPoW eats up six months of our development cycle, where really ProgPoW is actually the most ready to go out of any EIP right now. But the reason we aren't scheduling it is because we don't want to just do the ProgPoW-only one first.
B
And as far as timing goes, getting the BLS signature pre-compile in is more important than having ProgPoW come in first. And so having multiple releases beforehand, and then having people upgrade to the one that they want: that has already worked. And so it isn't like, okay, once Berlin is done, then we'll release the version for ProgPoW.

A
So what I remember from the DAO fork is that there was a lot of pushback on a switch, because that implies a default to the switch, whereas running an entire client where the default is one or the other is a much more explicit switch than having a software-enabled switch that you have to manually go in and flip.
F
I think one thing that's also worth considering is that the exchanges have the biggest incentive to support two coins, right? Like, if Ethereum splits, it's bad for everybody except exchange fees, and it's worth being mindful that this is basically how ETC came into existence. So, you know, I can absolutely see... once the first exchange is like, we'll let users decide, we'll run both ETH-ProgPoW and ETH...

F
Then it's almost like, you know, every other exchange has to do it, because they end up competing for the fees, and we've kind of split the network. So again, this is kind of why I'm personally uneasy on this.
B
Yeah, yeah. And I understand that, and as I've read from the community, there are people who oppose ProgPoW, but their stance is that we would all go to the ProgPoW chain, or we would all go to the non-ProgPoW chain. And if that is actually the case, then there is no network split; we just go on with the one with ProgPoW, or we go on with the one without, and I have not seen any evidence that there is.
A
Yeah, that probably wouldn't be necessary, but what I'd say is, I've seen very little... the only person, I can just call it out, I'll just say: Amin's the only one who said they'd step up, and I'm not sure if they still have that position today, and even if they did, it just kind of seemed like... I don't know, I'm skeptical of that happening. But again, I don't want to be proven wrong, but if I am, then that's just how things go.
F
Yeah, I was gonna say, another sort of broad critic I found is the Gnosis team, and Martin specifically, and, you know, I think it might be worth getting their thoughts on it on a core devs call, especially because they're gonna be maintainers of OpenEthereum.

F
I personally would feel much more comfortable with this plan if, you know, Gnosis, as maintainers of Parity Ethereum, or OpenEthereum, say they're kind of okay with this. You know, like... I know for a fact they don't, actually. Martin is not Gnosis.

F
Split, yeah. So I think, yeah, again, I would be much more comfortable if Gnosis, Martin K from Gnosis, you know, agrees that OpenEthereum will implement that and is kind of okay with it, even though, say, you know, he's personally somewhat opposed to it, because I think that's the other kind of credible...
C
Like the two people... yeah, I don't know. I mean, we've had discussions... everything that's going to be said has already been said. I think we kind of just need to go forward. That's my view.

A
With that view, however: Tim, if you want to ask them and make sure they're aware of it, or at least Stefan from Gnosis...
A
Feel free to ping them and make sure that they know this is happening, because the thing that got me last time was people saying this is being snuck in. So I'm gonna make an effort to make sure that's not the case this time, as I'm sure James and others will do as well, and there'll be plenty of time for open dissent. That won't really change the decision, necessarily, because we've already gone back and forth and approved it twice, but at least they'll be aware.

A
So if people want to do that, they can. That's why forking off is the ultimate consensus mechanism.
D
Yeah, just a quick question: is there a better way to advertise this than through your Twitter account, like a more official venue? ...Good call, I'd say; I can't think of one. What about the Ethereum blog? I mean, if you were really serious about this.
A
There are trade-offs to doing that, and when I say that, what I mean is: it looks like it's an endorsement by the EF, which it really isn't, because Geth acts autonomously from the EF, although it is funded by the EF. As far as their decisions go, no one outside the Geth team influences, or has ever influenced, any decision for what goes into a network upgrade, within the EF. So like Aya or any of that... they don't just go in and be like...

A
...you have to put this in. And so that would be one misconception. The other one would be... it would be kind of fanning the flames of something that doesn't really need to be fanned. But I'm open to other suggestions; if enough people want that to happen, I'm perfectly comfortable bringing it up to the other blog editors.
B
Mine would be: if you have grievances, come to me. I am taking responsibility for this.

J
An announcement of the decision, like... you know, All Core Devs deciding that the way forward is to include ProgPoW is kind of something that I feel is newsworthy, and it's definitely something that can be announced on the Ethereum blog even without seeing it as an endorsement, because it's basically just documenting the fact that, yeah, All Core Devs has, you know, collectively decided that ProgPoW is the way to go. And if the community doesn't accept ProgPoW, and we all turn around and we're like...
J
Okay, you know, maybe it wasn't such a good idea, and then we just go with the other chain... that's fine as well. But at least having an explanatory write-up on the blog that actually explains why ProgPoW was adopted, and, you know, that it is being adopted because of the decision in All Core Devs; I think these things can just be included in the blog, and it's not an endorsement.
A
Yeah, there's a happy medium we can definitely approach once things get closer, especially if we put a blog post out there very early, rather than like two weeks before, like I've been prone to doing. I think we should.

B
Yeah, we should have it, we should have the... yeah, next.

A
Okay, we'll discuss this more next meeting, because it's not going to happen before next meeting, and I can definitely draft something up. Just so that we don't keep everyone here all day: Artem makes a good point that we should definitely invite Stefan and Amin, if they want to come on, to give their perspective on why they would like that, that they're splitting, just so people are aware. But I'm going to talk to Amin and just see what his position is now, he knows.
A
So that's a hard thing to balance, and I think James is doing a good job of that, and I'm going to try to also do a good job of that. Any other final comments before we close out the meeting? And what we'll do for the next meeting: let's look at what we missed this meeting and try to prioritize that, along with the BLS curve stuff.
A
J
I have a quick request, yeah. So, you know, it's been quite a while since we've had EIP-778 and EIP-868 merged. To refresh you guys: this is Ethereum Node Records and the Node Discovery v4 ENR extension. These things have been live for quite a long time, and there are implementations in Trinity, geth and Aleth, but we are not really seeing many other implementations of that.
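As background for the EIP-778 discussion above: a node record is an RLP list of the form `[signature, seq, k1, v1, k2, v2, ...]`, with key/value pairs sorted by key. The sketch below is a minimal, hypothetical illustration of that layout with a hand-rolled RLP encoder; the key names (`id`, `ip`, `udp`) follow EIP-778, but the signature is a zero-byte placeholder rather than a real secp256k1 signature over the content.

```python
def encode_length(length, offset):
    """RLP length prefix: short form below 56 bytes, long form otherwise."""
    if length < 56:
        return bytes([offset + length])
    length_bytes = length.to_bytes((length.bit_length() + 7) // 8, "big")
    return bytes([offset + 55 + len(length_bytes)]) + length_bytes

def rlp_encode(item):
    """Encode bytes or (nested) lists of bytes as RLP."""
    if isinstance(item, bytes):
        if len(item) == 1 and item[0] < 0x80:
            return item  # single low byte encodes as itself
        return encode_length(len(item), 0x80) + item
    payload = b"".join(rlp_encode(x) for x in item)
    return encode_length(len(payload), 0xC0) + payload

def make_record_content(seq, pairs):
    """ENR content list: [seq, k1, v1, ...] with keys in sorted order."""
    seq_bytes = seq.to_bytes((seq.bit_length() + 7) // 8, "big") if seq else b""
    content = [seq_bytes]
    for key in sorted(pairs):
        content.extend([key, pairs[key]])
    return content

pairs = {
    b"id": b"v4",                        # identity scheme
    b"ip": bytes([127, 0, 0, 1]),        # IPv4 address
    b"udp": (30303).to_bytes(2, "big"),  # discovery port
}
content = make_record_content(1, pairs)
signature = b"\x00" * 64  # placeholder; real records sign rlp_encode(content)
record = rlp_encode([signature] + content)
print(record.hex())
```

EIP-868 then extends discovery v4 so a node's current ENR (identified by its `seq` number) can be requested directly with an ENRRequest/ENRResponse packet pair.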
J
So it would be very useful to have these things, because we have just rolled out another critical piece of infrastructure that will deprecate the bootstrap nodes in the long term, and that is the DNS-based discovery. And I would really like for people, especially client implementers, to go ahead and just add these features to their discovery implementations. They do not take a lot of time, but it would be very helpful for the network.
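The DNS-based discovery mentioned here (EIP-1459) publishes node records as a Merkle tree in DNS TXT records. The sketch below walks such a tree; the entry prefixes (`enrtree-root:v1`, `enrtree-branch:`, `enr:`) follow the EIP, but the zone data is a made-up stand-in for real DNS lookups, the hashes and ENR payloads are fake, and the root signature is not verified.

```python
# Hypothetical zone: in a real client these would be DNS TXT queries.
ZONE = {
    "nodes.example.org": "enrtree-root:v1 e=AAAA l=BBBB seq=1 sig=xxxx",
    "AAAA.nodes.example.org": "enrtree-branch:CCCC,DDDD",
    "CCCC.nodes.example.org": "enr:-fake-record-1",
    "DDDD.nodes.example.org": "enr:-fake-record-2",
}

def resolve_txt(name):
    """Stand-in for a DNS TXT record lookup."""
    return ZONE[name]

def collect_enrs(domain):
    """Walk the tree from the root entry, collecting all leaf ENRs."""
    root = resolve_txt(domain)
    assert root.startswith("enrtree-root:v1")
    # Parse "e=... l=... seq=... sig=..." fields; "e" is the record tree root.
    fields = dict(f.split("=", 1) for f in root.split()[1:])
    enrs, stack = [], [fields["e"]]
    while stack:
        entry = resolve_txt(f"{stack.pop()}.{domain}")
        if entry.startswith("enrtree-branch:"):
            stack.extend(entry.split(":", 1)[1].split(","))
        elif entry.startswith("enr:"):
            enrs.append(entry)
    return enrs

print(collect_enrs("nodes.example.org"))
```

Because every subdomain lookup is an ordinary TXT query, a client can discover peers through any resolver, which is what lets this scheme replace hard-coded bootstrap nodes over time.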
R
Okay, okay, if I may: I have just finished working on the eth/64 implementation for OpenEthereum, so it's just hit master, and now I will also be working on Ethereum Node Records and integrating it into OpenEthereum's network stack.
J
Oh, that's very nice. That is very, very nice. So do note that, basically, you can just contact me if you have any questions regarding that, and I'm very happy that this is finally happening. And I can really recommend that you look into the rust-libp2p repo, because they have already implemented all that stuff as part of the Discovery version 5 draft work, so you can just use the implementation from there. You don't have to implement ENR again.
R
In fact, I have been in contact with Sigma Prime, and they have, yes, they have split their implementation of ENR into a separate crate, so we will reuse it. And yeah, very...
A
Very helpful, okay. Yeah, so Felix, if you could get on the AllCoreDevs Gitter, put up your specification and ways to contact you, and then tag Nethermind and Besu; it sounds like those are the ones that we haven't accounted for as to whether they've implemented this.
D
A
J
So with these EIPs, basically, the situation is a bit unclear anyway, because we approved them on AllCoreDevs a long time ago, but the status in the EIP was never updated, so they're all still in draft. But actually, you know, we have discussed these two or three times on AllCoreDevs, even more than a year ago, and by now... because the ENR was proposed in November 2017, right?
J
So that is more than two years ago, yeah, and it's been around all this time, and I kind of feel like moving these things to Final would be great. But also, I don't really know; we still don't know what the process is for those networking things, because, you know, is it "move to Final when everyone has it", or is it "move to Final when we've discussed it enough", or...
A
There's a process change happening in the EIPs right now to make that more clear; there are EIPIP meetings, which are EIP Improvement Process meetings, and you're welcome to come to those. They're every other Wednesday. But basically, right now, the next step, as the current system stands, would be to put it in final call or last call, which is...
A
Perfect, okay, yeah, just ping me; I can merge it. Okay, thanks all. All right, any other final stuff?
A
All right, we went over by exactly half an hour; not bad. All right, thanks, everybody. Thank you, Artem, for your update. Artem, just for the record, you're on OpenEthereum. Are you with... I just totally forgot: are you with Parity Technologies, or are you with a different...
R
A
Yeah, welcome to the calls. I think you've been on before a little bit, maybe, but yeah.