From YouTube: Ethereum Core Devs Meeting #78 [2020-1-10]
B
All right, it looks like we are live. Thanks, everyone, for showing up. Today is Core Dev meeting number 78, on January 10th.
B
The first topic of discussion is EIP-2387, the Muir Glacier updates - just talking about how it went. I think the Cat Herders are doing a review of it, if I'm not mistaken, and we'll have Pooja talk about that a little bit, if that's happening. But first, I guess, let's throw it to whoever added that. So that would be - okay, I guess Tim added that, but does anyone want to speak on it?
C
Yeah, I can talk about it. Great. So, for those following along at home: at block 9,200,000 we had Muir Glacier activate, and it actually went off well. While there was quite a bit of drama in the community about it beforehand, technologically speaking it went off really well: three of the four clients were perfect, and Nethermind had a quick update that happened afterwards without any negative effect. That fork included the EIP for pushing back the ice age, and block times have now dropped to the fastest they have been in some time.
D
I'm not aware of a postmortem of Istanbul, but I've certainly come across a discussion. I'm not sure - was that you, James, working on it? And yeah, we are willing to help you on that.
C
I have some initial suggestions for takeaways, but it's not really in a finished state yet, so I can speak about that later and we'll get back to it.
D
The Muir Glacier rollout went really well. The percentage of readiness was higher than for Istanbul: it was over 92% at the time of the fork, and currently it is around 99.5%. So yes, we did really great with Muir Glacier, to sum up.
C
Yeah, the target was to try to be around the same - we thought we'd rather give operators the same amount of time to update their nodes, and then the fork happened.
D
The expectation was that it would happen within a 48-hour window for both mainnet and the testnet, but somehow Ropsten got delayed, and yes, it is coming up around...
B
Monday? Okay, well, that's good to know. The next thing is testing updates. Let's see, do we have Dimitry? No? Does anyone else have testing updates?
B
Okay, next we can go to the eligibility-for-inclusion EIP review. The first one is going to be 2456, and that's one that Danno added to the discussion. So yeah, go right ahead, Danno.
F
So this is a proposal to try and move the fork away from picking a specific block, to try and get some method where we can go on a time. And getting a time-based fork is a tricky issue: there are plenty of ways to introduce new attack vectors, and ways to basically make things more complicated than they ordinarily would be, just by saying, well, we fork, you know, next Wednesday at noon.
F
They were all off by at least three days, and as I mentioned earlier in the call, Ropsten at its current rate is going to fork probably early next Monday morning, which is about a week after our intended fork time. And there were times when, at the then-current block rate, it was forecast to fork two weeks afterwards. That level of unpredictability is just incredibly bad for our downstream partners who have to maintain nodes and run exchanges.
F
So this is the first proposal I put together - you know, I'm open to other proposals. I just want to get some sort of mechanism that is predictable, into a much smaller window.
F
So one of the problems with forking on a date is that we have to deal with uncles, and there's also the issue of reorgs. Reorgs, I think, are initially the most obvious issue.
F
Uncles are also a very subtle issue. If we fork at a specific time, what if there's a reorg that includes that block number? That also gives the miners some opportunity to play games with the numbers to try and force the fork forward. Geth and Parity, as far as I know, only accept blocks up to 15 seconds in the future; I don't know about Trinity, Aleth, and Nethermind - I don't know what their rules are. But even then, you can have all sorts of dips, all sorts of difficulty changes.
F
If you're trying to fork at a specific block that's not round, it could be random. So the first thing I proposed is that the transitions - the network upgrades - only activate at block numbers that are round multiples of a thousand. And the second thing is, I propose a two-phase commit: we do the transition at the second opportunity, where a block number that is a round thousand comes after the fork time. So that gives us a trigger event about a thousand to almost two thousand blocks in the past.
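A minimal sketch of the two-phase, round-number scheme just described; the constant and the function name are illustrative assumptions, not taken from the EIP's text:

```python
ROUND = 1000  # upgrades may only activate at multiples of this block number

def activation_block(block_number: int, block_time: int, fork_time: int):
    """Two-phase commit sketch: when `block_number` is the first round-numbered
    block whose timestamp reaches `fork_time`, it acts as the trigger, and the
    upgrade activates one round later, giving roughly 1000-2000 blocks of
    notice. Returns the activation block, or None if this block cannot be a
    trigger. (Hypothetical helper; the caller must ensure this is the *first*
    qualifying round block.)"""
    if block_number % ROUND != 0 or block_time < fork_time:
        return None
    return block_number + ROUND
```

For example, if block 9,200,000 is the first round-numbered block timestamped at or after the fork time, activation would land at 9,201,000.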
F
To say: hey, we're going to be upgrading at this block, on average about 1,500 blocks in the future. And there are some calculations that I did in there to say that this would happen anywhere from two to twenty hours after the initially proposed fork time, which is a lot narrower a window and a lot more manageable. In theory, you could have somebody working full time during that window, rather than putting someone on call the whole time for 20 hours. On Ethereum Magicians, Peter was concerned about the uncles...
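The two-to-twenty-hour figure follows directly from block counts and block times; a back-of-the-envelope helper (illustrative only):

```python
def window_hours(n_blocks: int, seconds_per_block: float) -> float:
    """Convert a warning window measured in blocks into hours."""
    return n_blocks * seconds_per_block / 3600.0

# With ~13-second mainnet blocks, 1000 blocks is about 3.6 hours and
# 2000 blocks about 7.2 hours; faster or slower chains shrink or
# stretch the window accordingly.
```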
F
...if there were any additional rules that might change the header validation - how a malicious miner might be able to litter uncles with transition-eligible blocks. Luckily, it looks like the two-phase commit on round numbers is going to limit that window to about three blocks before the second transition that they could fill up, and even then there are financial, economic limitations on how effective that can be. There are only so many uncle blocks.
G
Peter, yeah. I just wanted to say that my concerns aren't necessarily malicious miners doing weird things; rather, it's just the complexity of the consensus rules. I'm not sure how the rules are currently laid out, but in theory it could happen that, along the way, one of the clients forks on the wrong block.
F
Yeah, bugs are actually more difficult to untangle than a malicious attack, because with an attack it's usually pretty obvious what they're doing.
A
I had the same issue with this as Peter brought up, I think, and I wrote it in the Fellowship of Ethereum Magicians, and I have since edited it because I realized that it was based on a misconception.
F
Right, we can, because we've never had a thousand-block rewind. So that was something that went back and forth in the discussion: whether after the fact we go back and mark a number canonical, or whether we stick with the date defining the number - that one seemed too strong.
A
Yeah, because it's easier right now. When we do syncing, we can do checks for specific blocks - whether the peer has them or not - and, you know, if we want to sync to after that block, and check that it's on the right side of the fork and so on.
F
Since we don't know the future block, we would be advertising a future time, and unless we then update our record at the block when we think it happens... if we stick with that number - meaning where the fork block is - we don't have to change our hash nearly as often; otherwise there's a small window where you might have a difficult time peering.
F
...based on that identifier. And I did mention in the proposal that clients might want to include both the fork number and the fork time, so during a fast sync you could use the block number as an aid to make sure that you're doing it correctly. But in all the synchronization methods right now, everyone's getting all the headers anyway, so you'll have time to validate, and since you're only checking every thousandth header, it shouldn't be too much of a validation burden on fast syncs.
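Since a fork boundary can only sit at a multiple of a thousand under this scheme, a syncing node only needs to inspect every thousandth header; a sketch (names are illustrative, not client code):

```python
ROUND = 1000  # only multiples of this can be fork boundaries

def headers_to_check(start: int, end: int):
    """Yield the header numbers in [start, end] that a syncing node would
    need to validate against the fork rules, i.e. the round-numbered ones."""
    first = ((start + ROUND - 1) // ROUND) * ROUND  # round start up
    yield from range(first, end + 1, ROUND)
```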
C
Just a question: why is it that Ethereum is so irregular? If I look on Etherscan, the average block times bounce up and down quite a bit. What is driving that? And would this also be a way of getting a more consistent block time?
F
So what's hitting us on Ropsten is that the hash rate is much more highly variable than it is on mainnet - there's no economic incentive to keep all your hashes pointed at it to make money. The experience with Istanbul: somebody pointed a lot of hash rate at it early, because I guess they were testing a new rig or something, and they doubled or 10x'd the hash rate, so that brought it forward three days sooner than we thought it was going to happen.
F
We wanted it on a Wednesday and it still happened on a weekend. For mainnet, what hit us with Istanbul was that we were not sure what the impact of the ice age was going to be - we didn't know if it was going to be a half-second impact, a two-second impact, a ten-second impact. I think our estimate for getting Muir Glacier on the sixth was based on a 22-second block time, and it wound up being 17, which pulled it into the day after New Year's.
F
So it's the things we can't predict - that's the problem with targeting a date. You know, there's an SI definition for a second, and we're going to follow that; but there's no SI definition for how fast you can hash, and that's the unpredictable part there.
G
I think in general, previously we managed to hit the fork more or less accurately, but, for example, I think Petersburg hit somewhere around midnight UTC. So again, I'm not sure whether that was intended or not.
G
I assume it wasn't really intended. So, for example, what a time-based fork would also allow is to essentially delimit the fork within two hours: you could say that this fork is going to happen between 6 pm and 8 pm UTC, and then everybody knows specifically that you have to be online at that point in time.
J
And I think one thing that's nice about this proposal is the fact that there's this thousand-to-two-thousand-block delay, which is, I don't know, a couple of hours. So you can imagine a case where you do set up alerts for that time, and then it gives you a thousand-block warning to, you know, monitor the fork. And also, just within the community, it gives an opportunity to message: hey, this thing is happening in a couple of hours.
J
The only potential drawback I see is whether or not having this window would mean that exchanges and whatnot would pause deposits and withdrawals for a longer period than they do now. Right now I think they only do it for a couple of hours; if this is a thousand blocks, I'm not sure if that's longer or shorter. But it's probably good to give this two-step warning that the upgrade is happening.
G
No, I'm not sure. If your node is the one warning you that there's an update coming in half an hour, I'm not sure you have time to react meaningfully if you haven't heard about it in advance.
F
...15 and after, and, you know, there's about a four-hour window for 15-second blocks, if we're...
B
Oh, okay, perfect. Okay, we can go to the next part of the agenda. That's going to be EIP-1962, eligible for inclusion - or rather, the eligibility for inclusion of EIP-1962 - and that's going to be Alex, and I think Louise is here for that too, right?
B
Okay, Alex, you can take it away.
I
Yeah. So, as I wrote briefly on GitHub, right now the C++ implementation and the Rust implementation are both complete, both feature-wise and from a testing perspective. Their performance is also within some plus-or-minus 10 percent deviation.
I
So I used the Rust one as a kind of metering source for all the gas estimates. In principle, what's left right now is basically how I should integrate it into a few existing clients - I'm only familiar with Parity and Geth - and there are a few options: whether to use a single implementation, or to use two independent implementations in two independent clients, for example, with everything that comes with that. For Parity, the Rust library is easy to include and compile; for Geth...
I
...it's not that easy to compile the C++ source, or the Rust library, in any form, so it would require some kind of additions to the existing continuous-integration pipelines.
I
So any advice would be welcome. It would also make sense to simplify alternative implementations, because I know there is one happening right now in Go, which is still incomplete as far as I know. Maybe it would be possible to put in a performance margin: use the current gas numbers, which are acceptable for Rust and C++, and just say, well, an alternative implementation in Go, for example, must not be more than, say, twice as slow as those two existing ones - and I can incorporate all those changes.
G
However, we cannot integrate Rust code, so the only thing we could do with Rust is to have a completely separate Rust compilation step that builds a shared library or a static library, and then somehow link that into Geth. But that completely and utterly blows up the entire build process: instead of using Go as the build tool, we would need to have makefiles and custom steps, and that's definitely not something we want to do. So technically it is doable, but practically it just nukes the entire project's simplicity.
I
Yeah. So the C++ implementation is written in a quite modern dialect of C++, and it's not even just the library itself: it also relies on a dependency which does all the arithmetic. So the arithmetic implementation in the C++ code is not done by me - I've taken an existing one - but it still gives me the same results as the Rust one, where I have written everything myself. And those two require C++17.
I
I don't know what containers you use in your build pipeline, and depending on what Linux edition and distribution you use, you may or may not have a modern enough compiler. This is my only concern on the C++ side. Otherwise, if you say that you can easily build the C++ library - where I had some problems the last time I tried - then sure, it's as good for me as any other option.
A
Yes, sorry for making noise. So what I read in your comment here is that it moved to ten separate operations that map to precompile addresses, and I looked at the EIP and I've seen no such changes in the EIP. So I'm kind of wondering where these changes are taking place and where we can follow the progress.
I
Well, there is a kind of reference GitHub repository with all the Rust code, and also a few documents which describe the binary interface and the operations - of which there are now 10 separate ones. I didn't yet... I did just make another pull request to the EIP to update it to the current state, but I still maintain the list, and also the description of the binary interfaces, now with 10 separate operations.
I
Okay, I will make a post to the corresponding Ethereum Magicians thread.
I
Yeah. So, basically, there are two implementation- and integration-wise questions. What would most unblock my own integration into Parity is to know what precompile address range I should use - it should be 10 addresses - and, basically, what the number is for this performance margin for alternative implementations.
I
Oh well, I mean, I have the draft for the Parity client - which is completely a draft - but basically it integrates the Rust implementation, and the gas-metering routines, into the Parity client.
I
But to integrate it completely, I would also need to change a few config files, I think, which will require me to know the addresses for all these 10 precompile functions people wanted me to make - they should be mapped to 10 independent precompile addresses - and to finish this integration I would need to know those.
F
Typically that's what's been done, and they've always been packed tightly, so there have been no open spaces. That's something we might want to reconsider, or not.
I
Well, I mean, if someone wanted to integrate it into Parity themselves, for example, I'd say go ahead - when people need to integrate it, they will need to know this information. I can just leave placeholders for now; it's not that important, I think. So the second question is the performance margin for the future.
I
If alternative implementations arise with a noticeably different performance... So right now the difference between C++ and Rust is within ten percent - plus or minus in different parts, more like a deviation. But if in the future, let's say, there is some other implementation that doesn't match that performance, but is still a valid alternative that matches the existing ones completely, and someone wants to use that implementation, for example in Geth, then we can just give it the performance margin. So I would just blindly multiply all the gas prices by two.
G
The problem with that is not just the gas prices generally. If the margin is 10%, then nobody really cares, but once you start entering 2x-3x territory, the problem is that those start to look like denial-of-service opportunities.
G
So, for example, either you double the gas price - in which case people would just be charged more and blocks won't be as useful - or, if you keep the gas price at some meaningful level, then a slower implementation could just be too slow for the network. So yeah, 1.5x is probably something that is still okay and doable.
I
Yeah, for this I would need advice. I can do the comparison with the current BN curve: right now, for most of the operations, it's cheaper - I just didn't try it with the pairing operation. But even a coefficient of, for example, 1.5 is still a good margin anyway, and it's a security feature too.
I
Even if later some required modifications or extensions decrease the speed of one of the existing implementations, it should still be, first of all, faster than the existing precompile for the BN curve with the current pricing after the Istanbul fork; and second, it will give access to a wide set of curves which are not accessible right now, which is still beneficial.
I
That's it. I mean, it's beneficial from both sides: it will not make the precompile completely useless, and the coefficient - whether it's 1.5, for example, or two; I also wouldn't want to go above 2 - would be a way to allow people to make alternative implementations in other languages, which will not give this exact performance.
G
Yeah. So generally, the way we decide on these gas prices is: if we have multiple implementations, we just try to run benchmarks on multiple different machines with the different implementations, gather all the numbers, and then try to compare them to the existing opcodes, and then just give a number - and maybe multiply it by a bit, just to be on the safe side.
I
Well, this was largely part of my work, because I also did a gas schedule. I mean, I have all the formulas for gas scheduling depending on the input parameters, and to measure those, for parameters which are in a narrow limit...
I
I use worst-case scenarios. For example, if you multiply an elliptic curve point by a number which is in a certain range of bit widths, the worst case is just having all the ones - all the bits set - so you have the maximum number of additions and multiplications, basically doublings. All of those were used in the gas estimate, so those numbers are worst-case right now.
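For scalar multiplication with plain double-and-add, the all-bits-set scalar maximizes the work, which is why it serves as the worst case. A tiny cost model (an illustration, not the EIP-1962 metering code):

```python
def double_and_add_ops(scalar_bits: int):
    """Operation counts for naive left-to-right double-and-add on a k-bit
    scalar: k-1 doublings always happen, and one point addition happens per
    set bit after the leading one, so the worst case (all bits set) performs
    k-1 additions as well."""
    doublings = scalar_bits - 1
    worst_case_additions = scalar_bits - 1  # every bit after the first is set
    return doublings, worst_case_additions
```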
I
Well, I mean, okay: there is a set of parameters for every operation. For example, the addition of two elliptic curve points is basically a lookup table.
I
It only depends on what your field size is, so for this I basically just give a lookup table, because those numbers do not depend on anything else but the bit width of the modulus. For multiplication there is also the bit width of the scalar by which you multiply, and those are split into windows - categories, each of which is 64 bits wide - and in each of the categories I use the worst case to give the gas cost. So from this side, that's it.
I
Well, it's the safest way, because I should not introduce any underpriced cases. It's maybe not the most optimal way, but right now it's already safe if the C++ or Rust implementation is used.
I
Oh well, I mean, as far as I understood, I used the standard measure of 15 million gas per second in my gas scheduling.
I
Well, yeah. So right now this number - which is 15 million gas per second, or just 15 gas per microsecond - is a global constant which I can change. And if this is a standard measure across all the machines on which you run the benchmarks, I will just... and if you can point me to some benchmark which will allow me to check what this number is on my machine?
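One way to turn such a benchmark into a gas number is to scale the measured runtime by the 15M gas/s figure just mentioned and pad with a safety factor like the 1.5x-2x margin discussed earlier. A hypothetical helper, not any client's actual code:

```python
GAS_PER_SECOND = 15_000_000  # throughput target mentioned on the call

def gas_from_benchmark(worst_case_seconds: float, safety: float = 1.5) -> int:
    """Convert a measured worst-case runtime into a gas cost by scaling to
    the gas-per-second target, then padding with a safety margin so slower
    alternative implementations remain viable."""
    return round(worst_case_seconds * GAS_PER_SECOND * safety)
```

For example, an operation measured at 100 microseconds would price at 2250 gas with the default margin.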
I
So is there an existing page - is there existing code - to do something like this, or should I just write one myself?
B
Okay, that's great progress, Alex - thanks for the update. The next eligible-for-inclusion EIP is EIP-2348, Validated EVM Contracts, so that's going to be Danno.
F
This one should be shorter. I just want to point out that in the Ethereum Magicians forum I gave responses to two of the concerns. The first one was about validating in transactions, and my principal argument was that the contract can't be too long because of the gas limits: it's either going to be two and a half megabytes if it's nothing but zeros, or something between half and a full megabyte if it is not full of zeros.
F
So I put some numbers in there. I don't know if those will address Martin and Peter's concern about the attack, so this is a request for comment in that thread. Also, the second concern was about why headers, and why not some other mechanism, to identify contracts that are subject to the validation rules. The strongest argument actually came from some of the work that Wade's been doing on gas.
F
So I'm not ready to have it voted on next week - it's at least a month away. I just wanted to take the time to solicit responses in the Ethereum Magicians thread.
B
Okay. The EIPIP - the Ethereum Improvement Proposal Improvement Process meeting, also known as the EIP improvement processes meeting - is going to start happening next week. I still need to schedule the day; I don't know what day is best, so I'm going to guess Wednesday - that feels right. It's going to be organized over Telegram, but I might put a chat bridge together if that'll help some people. Reach out to me, preferably on Telegram, or at hudson@ethereum.org.
B
All right - review of the previous decisions made and action items from call 77. The first two are about Muir Glacier. 77.1: release the blog post on Muir Glacier.
B
They did that. 77.2: Hudson to contact the clients to get the latest versions for Muir Glacier - I did that. Action item 77.3: create an EIPIP Telegram channel - yes, I did that, and again, you can email me at hudson@ethereum.org or chat with me on Telegram to get added to that. Action item 77.4 is for someone to contact Eric and add the "block rewards are unchanged" line to the EIP. I don't know if that ever happened. - I did.
B
Okay, great. Then that clears up all the action items, and I think that's the last thing. Does anyone else have anything they want to add to the meeting?
L
I do. I just want to bring to your attention that there was a discussion between StarkWare and Geth about the maximum transaction size that gets accepted into the mempool. This has been being resolved with the Geth team - they had a limitation there - and I just want to bring it to the attention of the other clients that this limitation, or others like it, could be problematic for many layer-2 solutions.
L
By default, I mean. With Martin, this is getting solved on its own - we've already been in a discussion for a few months and this is getting resolved. I just want to bring to everyone else's attention that this sort of limitation could be a problem for L2 solutions in the future.
G
It was mostly because we wanted to make sure that no denial of service can occur due to transactions. But I guess the important memo here is that Geth, probably from the next release, will also allow propagating larger transactions, so people might be able to use transactions that are not limited at 32 kilobytes, but actually at 128.
A
While we're on the subject, Peter, do you want to mention anything about the proposal for fetching transactions?
G
Yes, actually, I wanted to do that - I just figured I'd wait until the end, but I guess now is as good a time as any. So currently, the way Ethereum nodes propagate transactions in the network is horrible: Geth actually relays every single transaction to every single peer it has. In theory, it would be enough to relay to a logarithmic number of peers.
G
The problem is that if you have some weird connection, then it can happen that you miss out on some transactions, and the Ethereum networking protocol currently does not have a means to request transactions: if you missed something, you cannot ask your peers for it.
G
You just have to wait until they deliver it. Essentially, we already have a PR that would bump the Ethereum protocol to eth/65, and we propose adding two message types: similarly to how, for block propagation, we have a message to propagate a block and we also have messages to just announce a block and to request a block, the same way...
G
...we would extend it for transactions, so that besides the current message that just propagates the transaction, we would also support announcing a transaction - essentially just the hash, or at least that's the plan currently - and then the remote side could request it if it doesn't have it yet. And we're kind of hopeful that this will help drastically reduce the network bandwidth that the entire Ethereum network consumes.
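The announce/request flow described here can be sketched in a few lines. The class and method names below are purely illustrative (this is a toy model of the proposed exchange, not the PR's code):

```python
class TxRelay:
    """Toy model of announce-then-request transaction exchange between peers."""

    def __init__(self):
        self.pool = {}  # tx hash -> raw transaction bytes

    def announce(self):
        # Advertise only the hashes of pooled transactions, not full bodies.
        return sorted(self.pool)

    def missing(self, announced_hashes):
        # Receiver side: decide which announced transactions we still lack.
        return [h for h in announced_hashes if h not in self.pool]

    def serve(self, requested_hashes):
        # Return full bodies for requested hashes we actually have.
        return [self.pool[h] for h in requested_hashes if h in self.pool]
```

A peer that already holds a transaction never downloads its body again, which is where the bandwidth savings come from.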
G
Yeah, so essentially that would be the plan, and we'll probably write up an EIP, maybe for the next AllCoreDevs call, of course. So it would be kind of a small addition to the network protocol: just to add this support for announcing a transaction and requesting it if you don't have it yet.
E
Go ahead, yeah - sorry. When we connect nodes to each other, I think at the moment Geth is sending all the transactions it knows about on connection. Is it possible to change it so it only sends hashes?
L
It seems to me very similar to what Bitcoin is doing with - I don't remember if it's FIBRE or compact blocks - there could be something to be taken directly from them.
G
Well, this is really kind of trivial - it's just the same. So currently, in the Ethereum protocol, you can request blocks by hash, you can request receipts - you can request everything by hash except transactions. So it would be extending that, the same way that you can announce blocks. So, I mean, if you have a link to somebody else's work, we can definitely take a look, but this is really something super trivial.
G
Yeah, so that's definitely also something that we've been thinking about, but in reality... Sorting the transaction-propagation problem out is fairly trivial - it's just a tiny addition - so we can get a new protocol version released and see that it actually works, and then we can look at whole-block propagation, whether that's an issue or not. Generally, since we're only propagating blocks to a logarithmic number of peers, it's not really an issue, in my opinion. I mean, we can definitely make it more optimal, but it's not pressing.
L
On that specific point, I completely agree about block propagation itself. I mean, if we could also reduce the propagation time - which is roughly 200 milliseconds today, if I'm correct - it would be something that, compared to the mining time of an Ethereum block, is still somewhat significant. I mean, not as of now, of course; just in the future, that could be a nice improvement.
C
I have an observation - and this isn't anything to do with, this doesn't say anything about your EIP or your process, Alex - but I think it's a comment on the process in general, and possibly one also focused on improving it, which is: if I think back to Blake2b, and I think back to 1559, and I think back to others - a single source of truth we're keeping up to date, and where that is, tends to be a difficult thing to keep track of.
I
Yeah, I just don't know if I can just make another pull request to update the text of the existing EIP in the corresponding repository - I just don't know what the previous standard process was.
I
I can post it in both places, or just continue updating the EIP GitHub - whatever we decide now, I will just do it that way.
B
Yeah, I would say the EIP GitHub, in combination with the Ethereum Magicians thread if you have one already - that should cover it, in my opinion, and that's kind of what we're leaning toward, or I feel like we're heading in that direction, standards-wise. It was kind of vague before. What do you think, James?
C
I'm just noticing that there is - and this is more of a process thing; it's not about your actual EIP, and you're doing great on that - but there seems to be a lot of tension between keeping those things in line. The amount of effort required to keep them in sync: is it worth it, or should we try it in a way that takes less?
B
What was that? You cut out. Sorry - a link to the EIP would be perfect.
L
I have a general question about networking issues: are they, at this point, standardized or discussed as part of, if you like, the protocol itself, or not?
L
My question is: those networking messages and standards - are they discussed today as part of the protocol itself? I feel - my impression from the outside is - that they are designed and decided ad hoc by each client. Am I correct or wrong?
G
I don't think that's correct. The networking protocol was pretty much standardized even before Frontier, and then we used the usual EIP process to make changes to it. The only thing that's not standardized is, for example, LES, since it's only supported by Geth - it was never really modified through the AllCoreDevs call, of course, because nobody else had it.
G
So there was not really a point in introducing all that bureaucracy there. But with regard to the eth protocol, or devp2p, or discovery, all of them were pretty much made here. One thing that was not designed through the AllCoreDevs call, of course, was discovery v5 plus the ENR stuff, but that, again, was because nobody else cared about them, so people just assumed that whatever the devp2p authors come up with is fine. But again, discovery...
G
...v5 was something completely independent and completely new, and after the whole initial specs were done - even before we started implementing it - I think Nethermind already jumped on it. So we had a few other teams collaborating. It wasn't really coordinated through the AllCoreDevs call, but it was a collaborative thing.
E
There are multiple things in networking that would actually benefit from being standardized. There is one particular example that I can give from Nethermind: when we're sending requests for the nodes in fast-sync mode, the requests are handled both by Parity and Geth, and the way...
E
we are optimizing it is: we create the request batches first and then decide whether to send each one to Parity or geth. The limits for the sizes of the requests in Parity are at around 1024, and geth, I think, handles 192 at most and requests up to 256. The thing is, if Parity doesn't want to respond to a big request, it just sends you back as much as it wants to send, and you can handle that part and then re-request the rest.
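The batching scheme described here can be sketched as follows. This is an illustrative sketch, not Nethermind's actual code; the names are hypothetical and the per-client limits are rough figures taken from the discussion.

```python
# Illustrative sketch of the batching described above: batches are built
# first, then cut down to the limit of whichever peer they are sent to,
# and anything the peer chose not to answer is re-queued, not treated
# as an error.

PEER_LIMITS = {"parity": 1024, "geth": 256}  # rough figures from the call

def assign_batch(pending, peer_client):
    """Split the pending keys into one batch for this peer and the rest."""
    limit = PEER_LIMITS.get(peer_client, 192)  # conservative default
    return pending[:limit], pending[limit:]

def handle_response(batch, answered, pending):
    """Re-queue whatever the peer did not answer, ahead of the backlog."""
    missing = [k for k in batch if k not in answered]
    return missing + pending
```

The point of the split is that a partial response is a normal outcome, so the requester's only job is to keep the unanswered keys in the queue.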
E
But geth actually disconnects you as a punishment for requesting too much, which means that we cannot ever use the fact that Parity can give us more data: if we send a big request without knowing exactly which node we'll be assigning it to, we might be disconnected by all the geth nodes. These are all small things that are not really specified; in eth/63 there is no definition that we should get disconnected.
E
There are some specific behaviors, like when null headers are sent by geth, for example, which was not following the protocol and not specified anywhere, and there have been bugs reported in the past in Parity and in Nethermind when that special condition was hit. So any further, very detailed specification of those network protocols would improve things for the future.
E
Debugging the situations when the other nodes are disconnecting is really hard right now, because there is no information on why we are being disconnected. You very often have to run two debuggers at once, geth's or Parity's code and, at the same time, the Nethermind code, to check what exactly was the series of events that led to the node being disconnected, for whatever reason: either it was sending too many messages or something was malformed. That is extremely time-consuming.
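For reference, the devp2p Disconnect message does carry a numeric reason code, though the codes are coarse, which is part of the debugging pain described here. A small sketch of decoding them for logs; the reason strings are paraphrased from the devp2p spec and `describe_disconnect` is a hypothetical helper, not any client's API:

```python
# Subset of devp2p disconnect reason codes (paraphrased from the spec).
DISCONNECT_REASONS = {
    0x00: "disconnect requested",
    0x01: "TCP subsystem error",
    0x02: "breach of protocol (e.g. malformed message)",
    0x03: "useless peer",
    0x04: "too many peers",
    0x05: "already connected",
    0x06: "incompatible p2p protocol version",
    0x08: "client quitting",
    0x10: "some other reason",
}

def describe_disconnect(code):
    """Turn a raw reason code into a log-friendly string."""
    return DISCONNECT_REASONS.get(code, f"unknown reason {code:#x}")
```

Even with the code logged, "breach of protocol" says nothing about which message triggered it, which is why the speakers are asking for a more detailed spec.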
E
So any more detailed specification would be great work from all of us to improve this, also for any new nodes being built by any new teams.
G
Just to react to one of those points: you mentioned that geth disconnects you if you request more data than some limit, and that's definitely not something we want to do. If you request too much data, then we simply stop. I mean, we have limits and, for example, if you request 1,000 state entries, then you will get 370-something back, and that's it, and you can re-request.
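The serving behavior described here, capping the reply rather than disconnecting, can be sketched as below. `MAX_SERVED` and `serve_node_data` are illustrative names, not geth's actual code, and the cap value is only indicative of the "370-something" figure mentioned.

```python
MAX_SERVED = 384  # illustrative cap, chosen near the "370-something" figure

def serve_node_data(requested_hashes, db):
    """Answer at most MAX_SERVED entries and silently drop the overflow."""
    reply = []
    for h in requested_hashes[:MAX_SERVED]:  # ignore the excess, don't punish
        if h in db:
            reply.append(db[h])
    return reply  # the requester re-requests whatever was not answered
```

Under this model an oversized request is never a protocol violation; the requester simply sees a short reply and asks again for the remainder.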
E
There is code that actually takes the limit and, if the request is over the limit, disconnects the requester, and that's what you see happening in the actual behavior. I'm talking about node data, not about blocks or headers.
G
Yeah, then you probably want to open an issue, because that's definitely not the behavior we intend. We never intend to disconnect just because you requested more data; it would have to be a clear protocol violation. We only disconnect if you send something that's definitely junk and cannot be interpreted.
E
Okay, yeah. The code looks like that was intended, so maybe you should treat it as a bug and raise an issue. But also specifying in the future what the behavior should be when the requests are in different formats, that would be helpful.
G
Yeah, I guess the thing with fast sync and eth/63 is that when we implemented this whole protocol, geth was essentially the single client which had it. So eth/63 was kind of just specced as "hey, these are the messages", but there was nothing implemented on the cpp-ethereum side. This was before Frontier. I think this is like a common
E
pattern in what we are hearing here: fast sync was geth-only when the only client was geth, and discovery v5 is geth-only. I think the reason they have remained geth-only for so long is that the spec would have to be more detailed, and it's very natural; we do the same whenever we're doing something that only we do at the moment.
E
Obviously such a spec ends up less detailed, so we should try to resurface all those changes and spec them as soon as possible, and involve more of the core devs so they are aware of any experimentation in the clients. That way we can build some better communication here, and it will help.
G
Yeah, I completely agree, and that's pretty much the reason why we came up with this whole fork idea and wanted to spec it out as an EIP. And that's specifically the reason why we brought it up today: hey, we have this whole transaction propagation issue, let's spec it out properly.
E
It affected both Nethermind and Parity, and it's almost like there is a second forum of maybe 20 people discussing it around the R&D and OpenEthereum channels. I was there as well, and we discussed the fixes already, and because it didn't affect geth we were not raising it here.
E
The general thing was that there was an incorrect block being sent, and Parity was adding it to its cache of incorrect blocks under the hash of the block header. But only the body of the block was modified, and the header hash was all fine, so the block validated; then it was blowing up the processing, and it was added to the cache as an invalid block. Yet there were perfectly valid blocks on the network with the same header hash, so a different hashing mechanism had to be introduced. And while the same thing affected Nethermind,
E
in our case it was a slightly different thing: we were missing validation on some of the incoming new blocks, and we've added that, which solves it as well. And Parity already has a fix where that cache actually hashes the raw content of the block body and stores that separately, so a block is only immediately discarded as invalid if the hash of the content is exactly the same, not the advertised hash.
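The fix described for Parity can be sketched as keying the bad-block cache on a hash of the raw body bytes actually received, rather than on the advertised header hash, so a mangled body cannot blacklist the legitimate block that shares its header hash. This is a minimal illustration with hypothetical names, not Parity's actual code; SHA-256 stands in for keccak256.

```python
import hashlib

bad_bodies = set()  # keyed by content hash, NOT by advertised header hash

def content_hash(raw_body: bytes) -> bytes:
    """Hash the exact bytes received (SHA-256 standing in for keccak256)."""
    return hashlib.sha256(raw_body).digest()

def mark_bad(raw_body: bytes):
    """Remember this exact payload as bad after it failed processing."""
    bad_bodies.add(content_hash(raw_body))

def is_known_bad(raw_body: bytes) -> bool:
    # Only the exact same bytes are rejected; a different (possibly valid)
    # body arriving for the same header hash is still processed normally.
    return content_hash(raw_body) in bad_bodies
```

The design point is that the advertised hash is attacker-controlled metadata, while the content hash is derived from what was actually delivered.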
E
Yeah, that's how it was done in Parity. Actually, we noticed, when we were saying that geth nodes are sending this, that every now and again geth nodes are propagating invalid combinations of headers and bodies. Also, for example, I'm receiving a lot of invalid nodes in GetNodeData responses from geth; sometimes the hash doesn't match the content. Whether these are specifically crafted attacks or just some bugs, it's hard to say.
G
But that seems really weird, because we fast sync, we run benchmarks, and we never see bad blocks, and we never see invalid hashes either.
E
Sure, I definitely should do that.
E
Because at the moment we have code that says: only when you send us some number of bad things in a second do we disconnect you; we literally have to be very, very soft on the behavior of other nodes. So it might be that we should start communicating more on those network issues, raise them more, and then remove all of those problems.
A
So our monitoring geth node is still picking up these bad blocks and storing them in the bad-block cache. So geth, like Parity, flags a particular hash as bad, but the difference is that we don't actually use that as a blacklist afterwards. We do maintain it, so if we want to analyze it from the outside we can retrieve the bad blocks, but we don't blacklist based on it. Therefore, later on, once we get the correct body content for that header, we successfully import it.
E
Oh okay. So we do blacklist them and we never propagate them, but the blacklisting works as long as you do it at the right stage. Parity was just doing this a bit too early; we only blacklist after full processing, if everything matches.
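The staging point under discussion, blacklisting only after full processing of the exact header-and-body pair, can be sketched as below. The names are hypothetical and this is not any client's actual code.

```python
def process_block(header_hash, body, validate, blacklist):
    """Import a block, blacklisting the exact (header, body) pair only
    after full validation of that pair has actually failed."""
    if (header_hash, body) in blacklist:   # exact pair, not header hash alone
        return "rejected"
    if validate(header_hash, body):        # full processing happens first
        return "imported"
    blacklist.add((header_hash, body))     # only now is it provably bad
    return "invalid"
```

Blacklisting on receipt, before full processing, is the "too early" failure mode: a good header hash paired with a junk body would poison the cache against the real block.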
C
Would a post-mortem on this be something Nethermind would be able to do? Do you have something?
E
We can try to copy the data from the Telegram channel where we're discussing it with, yeah, with the Parity team. I'll send out a few questions, a few answers, and whatever more there is. But I cannot promise you that it will be done very quickly in a formal way,
E
so if we don't have to rush it, then I think the next week or two.
B
Yeah, that sounds good. Anybody else have
B
anything? Should we have the meeting two weeks from now? We realize now that we're on the same weekly schedule as the eth 2.0 calls, but I think someone mentioned there's not a ton of overlap, and also we've stayed on schedule and we think they got off schedule, so it might be on them to not have it the same week if they don't want that.
B
And I will be gone in two weeks on a trip, so Tim will be taking over that meeting. Thank you, Tim, who had to leave this meeting, I think, but Tim's
B
Yeah, we'll figure it out. He already told me he should be able to do it, but there are other people, I think, who can if he can't. So that's it. Thanks, everybody, see you in two weeks. Thank you.