From YouTube: Ethereum Core Devs Meeting #38 [05/18/18]
B
All right, looks like we're on the live stream — Peter, we're good. All right, awesome! Thanks everybody for joining. Let's go ahead — we've got a lot on the agenda today, so let's jump into that. Dimitry, I think you're on here. Let's start with testing. You also had an EIP for a common genesis format across all client implementations — a genesis JSON that basically everyone can agree on.
C
Yeah, that's right. I've been thinking about this for a long time now. When I started doing this RPC testing tool, I saw that each individual client has its own genesis config fields, field names and parameters, but some of them basically mean the same thing, and we should come to an agreement on how we should name them. At the very least the format should be general across all of the clients. That would be much easier for everybody.
D
Do you have — I can write about that, but actually I think there are two problems. One is the structure of the genesis file, which differs between the clients, and the other is the semantic meaning: some clients have a fork block number for each fork, whereas Parity, for example, hasn't necessarily divided it up by forks but by features, and that may vary between clients, so that they have feature X at block X instead of a certain hard fork.
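The two problems described here — divergent field names and fork-based versus feature-based activation — can be sketched with a toy translation. The field names below are simplified illustrations of the two styles, not an exact copy of either client's schema:

```python
# Toy illustration of the two genesis-config problems discussed above:
# (1) the same parameter has different names across clients, and
# (2) one style keys activation on named forks, while another keys
# individual features on block numbers. Field names are simplified
# examples, not real client schemas.

# Style A: one activation block number per named fork.
fork_style = {"homesteadBlock": 1150000, "byzantiumBlock": 4370000}

# Style B: individual feature transitions keyed by block number.
feature_style = {"eip100Transition": 4370000, "eip649Transition": 4370000}

# A common format needs a table saying which features each fork enables.
FORK_FEATURES = {"byzantiumBlock": ["eip100Transition", "eip649Transition"]}

def forks_to_features(forks):
    """Expand fork activation blocks into per-feature activation blocks."""
    features = {}
    for fork, block in forks.items():
        for feature in FORK_FEATURES.get(fork, []):
            features[feature] = block
    return features
```

A shared genesis format would pin down one of these representations (or the mapping between them) so every client reads the same file.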
B
Okay, sounds good. Let's move on to the next topic: client updates, and then research updates after that. I decided to move them from the end of the meeting to the beginning, so we're not having to rush through them at the very end of the meeting, and if we run out of time we can just table some of the other items till afterwards. That's one of a series of changes that are going to be happening to this meeting.
E
Right, yeah. I'd say the biggest update is the 1.11.1 release. We have a bunch of performance improvements and general improvements there, but the two main ones are: we've introduced support for Whisper, and we have a complete refactoring of the transaction queue, so verification is happening in parallel. Previously, fetching items from the queue was O(1), but inserting was O(n^2).
E
Now it's much more performant for inserting, and we've split the transaction pool out to have a scoring function and a readiness function separately, which makes it a lot easier for miners to view the pool in a more coherent way and be able to do a lot more with it. So I think that's a pretty big change on our end; otherwise it's just general improvements.
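The scoring/readiness split described here can be sketched roughly as follows — a minimal Python model of the idea, not Parity's actual Rust implementation; all names are illustrative:

```python
# Minimal model of a transaction pool with separate "readiness" and
# "scoring" concerns, loosely following the split described above
# (illustrative only, not the actual Parity implementation).

class Tx:
    def __init__(self, sender, nonce, gas_price):
        self.sender, self.nonce, self.gas_price = sender, nonce, gas_price

class Pool:
    def __init__(self):
        self.txs = []

    def insert(self, tx):
        self.txs.append(tx)

    def pending(self, account_nonces, score=lambda tx: tx.gas_price):
        """Return ready transactions, best score first.

        Readiness: a tx is ready when its nonce is the next expected
        one for its sender. Scoring (here: gas price) only orders the
        ready set — the two policies are independently pluggable.
        """
        ready = [t for t in self.txs
                 if t.nonce == account_nonces.get(t.sender, 0)]
        return sorted(ready, key=score, reverse=True)

pool = Pool()
pool.insert(Tx("a", 0, 10))
pool.insert(Tx("a", 1, 50))   # not ready: nonce gap until a's 0 is mined
pool.insert(Tx("b", 0, 30))
best = pool.pending({"a": 0, "b": 0})
```

Because readiness and scoring are separate functions, a miner (or any other consumer) can query the same pool with different policies without touching the insertion path.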
G
No, not much — we're mostly working on performance improvements. We've seen that in the last couple of weeks, transactions on mainnet, especially spam transactions, kind of went up, so we're trying to track down some issues we're seeing, especially around memory usage. I'm not really sure — we have a few hunches. Hopefully those will fix the issue, but we're currently benchmarking them on testnets.
G
The other thing is basically some form of resource accounting for the light client, which should be able to help a lot with decentralization and with incentivizing running full nodes. Basically, we'd like to separate the light client resource accounting — which clients use how much resources — from the monetary layer, so that anyone can monetize their node however they like, with whatever mechanism, and then geth would just count it and make sure that you adhere to your own quotas. I think that's mostly it.
G
We've also done quite a bit of hacking on a new synchronization algorithm, but I don't really have any numbers yet. I'm hoping that by next week — sorry, by the next core dev meeting in two weeks — I could give you guys some hard numbers on whether it's worth it or not. If it seems to be worth it, I'll probably post some document next week, so that everybody would have time to look through it. That's it.
C
We've sent binaries for cpp-ethereum — the eth client and testeth — to the GitHub releases. It's now in the form of a development snapshot, but we will make a stable release soon. We have some RPC improvements and fixes, and also all the tools within cpp-ethereum now accept dynamically loaded EVMC implementations, so you can load, for example, the ewasm backend as a shared library; you don't have to rebuild all the source code in case you would like to use some external VM.
H
We have finished everything we want in this release. It is a release candidate now, and we have started to test it. It's mainly a performance release: we have improved processing time by up to five to ten times. Before, it was sometimes difficult for us to stay on top of the chain — some blocks arrived early while we were still processing the previous one. We have solved this issue now; it's fast. We are still behind geth, but it's a big improvement for us.
I
We've been knocking out low-hanging performance issues for the last two or three weeks, and we've gotten things down to a point where we at least probably have a viable alpha right around the corner. We have mainnet full and fast sync functional, same with functional light clients. We're working on getting some of the RPC endpoints up and running against those, and basically knocking off a pretty short checklist right now towards a public release.
J
Thanks for having me here, guys. We started Exthereum — it's a pretty new client — probably about six months ago, and the focus has been to help diversify the community. Plus, a lot of people have been using Elixir for their backend servers, and we figured having a lot of these libraries in place would be beneficial for people building services on top of Ethereum. For us there's just been a lot of work on getting to a v1 client.
B
OK, am I missing any clients — not research teams, but clients? Turbo-geth put their update in the notes in the agenda, if anyone's interested in that; there are some really cool improvements, and there's also a link to the video of the talk Alexey did — really good, I was able to catch that.
K
We've been honing in on the EIP spec as we've been talking with clients and figuring out all the little details. The updated spec is live, with an associated changelog and a discussions-to issue. There are some concerns about the parallelization of vote transactions and how clients will actually handle this, and initially these conversations are leading us to think that we might put all of the data that votes do modify into a separate trie.
K
That trie is guaranteed to not be modified by anything else but those votes, so it would be really easy for clients to just copy the state root hash — like that contract root. Anyway, we're working on that. We're also working on some decentralized pool implementations, just some proof-of-concept stuff right now. This is also something that recently made it onto the grant wish list, if anyone out there listening is interested in getting involved. Other than that, we're just keeping trucking along. I know that a lot of the clients are implementing it; it's exciting. Getting there. Great.
B
Any number of people can take that — Lane or Casey.
M
The main reason we've been focused on that, and testing it with cpp-ethereum, is to get greater ewasm test coverage over the EVMC interface. With evm2wasm we can transpile all the EVM state tests — the multi-client test suite — and then check a lot of edge cases, and we're discovering and fixing bugs in Hera and in evm2wasm.
B
I'm not sure if Phil has a mic or anything, but if you are able to talk to us later, Phil, just jump in — I wanted to see if there were any updates from your end on the Jello Paper or anything else like that. So now that we've gone through all of the updates, the next item on the agenda is "reward clients for a sustainable network": when each transaction is validated, give a reward to the client teams for developing the client.
L
You could add a hash of the previous block verifications, with an indication of whether the address of the client has verified the transactions, and in order to make sure that the client is legitimate, they could be included in an access list, and you could check that the address of this provider is in the access list in order to check that it's legitimate. So that's essentially it. However, there are complications with implementing the access list in order to do it in a trustless way.
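The mechanism as described might be sketched like this — a toy model where the addresses and the reward amount are made up, and where the hard part the proposal leaves open, building the access list trustlessly, is simply assumed:

```python
# Toy sketch of the reward scheme described above: each block carries a
# list of client addresses claiming to have verified the previous
# block, and rewards are paid only to addresses found in an access list
# of legitimate clients. Addresses and amounts are illustrative; how
# the access list is maintained trustlessly is exactly the open issue.

ACCESS_LIST = {"0xgeth", "0xparity", "0xcpp"}  # assumed, not trustless

def reward_verifiers(claimed_verifiers, reward_per_client=1):
    """Pay each distinct, whitelisted claimed verifier a fixed reward."""
    payouts = {}
    for addr in claimed_verifiers:
        if addr in ACCESS_LIST:          # legitimacy check
            payouts[addr] = reward_per_client
    return payouts

paid = reward_verifiers(["0xgeth", "0xmallory", "0xparity"])
```

Péter's later objection applies directly to the input here: nothing forces honest relayers to preserve the `claimed_verifiers` list as the block travels through the network.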
L
There are a number of proposals to do that; however, they all have difficulties, so they could be done, but they might not be ideal. I also initially had a proposal included to reward full nodes, but I decided to remove that, because Casper is going to do the incentivizing of validation — Casper FFG — so when that comes out we won't really need to incentivize full nodes for verification. However, there would still need to be incentives for relaying, with bandwidth, or downloading, as well as storing state and data, so that would still need to be incentivized, and proposals have been made elsewhere on that. So yeah, if anyone has any questions, feel free to ask; there are more details in the EIP.
G
I have one question. I haven't dug into this, so maybe what I'm going to ask is really stupid. If you are including some metadata in the blocks which isn't part of consensus and isn't verified — then let's suppose we have a block which has metadata saying that, for example, Parity verified it, or whichever Parity nodes verified it — and then I'm going to be mean and release a new version of geth which simply strips these out before forwarding to the network. Now, of course, what happens in this case?
L
Sorry — I'm not sure if this answers it, but let me know. It says that when a client verifies that a previous block is valid, that's according to the validity definition in the Yellow Paper. The Yellow Paper is a specification for all compliant clients, so that will determine whether a block is valid according to consensus rules. So if geth released a new version that had some change like that, it would need to make sure it complies with consensus.
G
What I'm saying is that if you are tracking the accounting of which client verified which blocks, there's no incentive for the other clients in the network to actually forward this metadata, this accounting info. So everyone would just send out their own info and forward the block by themselves only.
L
Sure, yes. Obviously you want multiple clients, not just one client, to be verifying blocks, and I believe the previous block verification field is an arbitrary array — any client could include that they verified it. So any node that is verifying it would insert the address of the client of the node that verified it.
D
So Peter mentioned earlier that there will be an incentivization to run full nodes once you can monetize light clients, but that's neither here nor there. Regarding the EIP itself, I still think it's a little thin on actual technical data. It's mostly a lot of discussion, and it's very brief about the actual technical implementation: what is hashed, and who does what. So I still think it's a bit early to actually discuss putting this into core code.
G
Okay, just another minor addition to it. So let's say, for example, we want to incentivize running full nodes. We have basically two problems. One of them is light clients, which are obviously leeching off the network — Zsolt is working on a few monetization schemes there; I won't go into that. And the other is basically just running a full node without serving light clients.
G
That's also still unsolved, because you are spending your bandwidth to help the network, and it's really, really hard to solve this problem, because it's really hard to prove that you verified a block, or that you indeed uploaded one gigabyte of traffic to whoever wanted to join the network. But maybe —
G
Swarm is actually the direction where we want to take this. The reason of being of Swarm would be to share data and to be able to run a tit-for-tat protocol: essentially, as long as you upload approximately the same amount of data as you download, nobody cares, and if you download more, then you need to somehow make amends.
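A tit-for-tat accounting of the kind described could look roughly like this — a toy sketch where the tolerance threshold and the idea of "settling a debt" are illustrative assumptions, not Swarm's actual protocol:

```python
# Toy tit-for-tat byte accounting in the spirit of the Swarm idea
# above: peers that stay roughly balanced are fine; a peer that
# downloads far more than it uploads owes the difference. The
# tolerance value and settlement mechanism are illustrative only.

class PeerAccount:
    def __init__(self, tolerance=1_000_000):  # bytes of allowed imbalance
        self.uploaded = 0     # bytes we sent to the peer
        self.downloaded = 0   # bytes we received from the peer
        self.tolerance = tolerance

    def record(self, sent=0, received=0):
        self.uploaded += sent
        self.downloaded += received

    def debt(self):
        """Bytes the peer owes us beyond the tolerated imbalance."""
        return max(0, self.uploaded - self.downloaded - self.tolerance)

peer = PeerAccount()
peer.record(sent=500_000, received=400_000)   # roughly balanced
balanced = peer.debt()                        # still zero
peer.record(sent=2_000_000)                   # now heavily one-sided
owed = peer.debt()                            # must be settled somehow
```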
B
Perfect. So next we have EIP-1057, ProgPoW — programmatic proof-of-work. It's written by IfDefElse — the three people who are working on it — and they've actually joined us on the call. So if the three of you — or at least one of you — want to talk about what this is and a little bit about how it works? The goal of this topic is not to decide exactly whether this is going in or not.
B
But this is a really good way to either replace the current proof-of-work scheme, or to have as a backup if we ever need to replace the current proof-of-work scheme. So at the very end of the topic I'm going to query the core devs and see if anyone has changed their mind with regard to having a strong opinion on implementing this now, or waiting till Casper gets implemented and not changing the proof-of-work algorithm. So to start, if you all could just introduce what this is and what you guys are doing.
A
So, no pressure! ProgPoW is a new proof-of-work algorithm that's designed to close the efficiency gap available to specialized ASICs. In the current implementation of Ethereum, an ASIC can get a potential two-times speed-up, and this is because a large portion of the actual programmable hardware — in this case, a GPU card — is not used. So what ProgPoW does is start utilizing more of the GPU.
A
It specifically engages more of the core, and in doing so it means that specialized hardware is not able to gain its large efficiency speed-up. We have gone to great lengths to ensure that implementing this is as easy as possible. We've already got the miner implementation up and running on our GitHub; it's public and ready for review.
A
We've also started with the first client — see, yes, Alex, someone should link it to you — we've already started with the C++ client; it's already committed as well. Our goal is to ensure that all of the Ethereum clients come with ProgPoW already implemented, so that the Ethereum developers do not have to spend any development time or effort on the implementation — just the testing. I want to introduce Def and Else; these two are hardware engineers that have worked on mining projects in the past.
O
Yeah, so basically the current Ethash isn't so much a proof-of-work as it is a proof of DRAM bandwidth: all the algorithm really requires is a large amount of DRAM to store the DAG, and then high-bandwidth access to that DAG. So a specialized ASIC can be made for that by basically removing the majority of the ASIC and only keeping the frame buffer interface.
O
The algorithm that we developed takes Ethash as the base, but then adds — using the register file — high-throughput math and caches, so that any ASIC that implements this also needs to implement those, and once you've implemented all of those, you've basically implemented a full GPU. So any implementation is going to be almost at the level of a GPU.
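The core idea — a seed-determined sequence of random math operations over a register file, so fixed-function hardware can't shortcut the work — can be illustrated with a toy. This is emphatically not the ProgPoW specification (wrong op set, wrong sizes, Python's `random` instead of the spec's KISS99 generator); it only shows the general shape:

```python
import random

# Toy illustration of "random math over a register file": a per-period
# seed determines a short program of operations; hardware must support
# the whole op set to run arbitrary programs. NOT the ProgPoW spec --
# the op set, widths and generator here are made up for illustration.

MOD = 2**32
OPS = [
    lambda a, b: (a + b) % MOD,                     # 32-bit add
    lambda a, b: (a * b) % MOD,                     # 32-bit multiply
    lambda a, b: a ^ b,                             # xor
    lambda a, b: ((a << (b % 32)) | (a >> ((32 - b % 32) or 32))) % MOD,  # rotl
]

def random_math(regs, seed, rounds=16):
    """Apply a seed-determined sequence of ops over the register file."""
    rng = random.Random(seed)
    regs = list(regs)
    for _ in range(rounds):
        dst, src = rng.randrange(len(regs)), rng.randrange(len(regs))
        regs[dst] = rng.choice(OPS)(regs[dst], regs[src])
    return regs

# Same seed -> same program -> same result; changing the seed every
# mining period is what forces hardware to stay general-purpose.
out1 = random_math([1, 2, 3, 4], seed=42)
out2 = random_math([1, 2, 3, 4], seed=42)
```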
A
Okay, so my main query right now at the developers meeting would be: what barriers do you see for adoption of ProgPoW? I will tell you what has been relayed to me on my side: the major arguments have been that we're not seeing enough adoption by both stakeholders and developers, and this, I believe, is an educational issue that we are working on.
G
I have a question with regard to — well, actually two issues that you may or may not have thought about. As far as I know, the Ethash in cpp-ethereum is a little-endian-only implementation, and we actually had to reimplement the whole thing in Go to be able to run Ethash on top of big-endian systems — MIPS and whatnot. Does this affect you? Probably not — just saying that it's probably something you need to take care of, or take care with.
G
The other interesting issue is that I'm not sure how your algorithm will scale to mobile use, because currently, with Ethash, if you want to mine a block you generate a one-gigabyte-plus DAG — I don't know the exact number, how large the DAG is now — but in order to verify it, you can do that using only a roughly 40-megabyte verification cache. Now the issue is that computing this verification cache is already insanely expensive on mobile phones; it takes a fairly modern mobile phone.
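The asymmetry Péter is relying on — miners materialize the full dataset, verifiers keep only a small cache from which any individual dataset item can be recomputed — can be shown with a toy structure. The real Ethash uses Keccak and FNV mixing with much larger parameters; this sketch only preserves the shape of the scheme:

```python
import hashlib

# Toy model of the Ethash-style asymmetry discussed above: a small
# cache is derived from a seed, the full dataset is derived from the
# cache, and a light verifier recomputes just the items it needs
# instead of holding the whole dataset. Sizes and hash functions are
# stand-ins, not the real Ethash parameters.

def h(*parts):
    m = hashlib.sha256()
    for p in parts:
        m.update(p.to_bytes(8, "big") if isinstance(p, int) else p)
    return m.digest()

def make_cache(seed, size=64):
    cache, item = [], h(seed)
    for _ in range(size):
        cache.append(item)
        item = h(item)
    return cache

def dataset_item(cache, i):
    # Each full-dataset item mixes a few cache entries (toy mixing).
    return h(cache[i % len(cache)], cache[(i * 7 + 1) % len(cache)], i)

def make_dataset(cache, size=1024):
    # The miner materializes everything (the "1 GB DAG" role).
    return [dataset_item(cache, i) for i in range(size)]

cache = make_cache(b"epoch-seed")
full = make_dataset(cache)        # miner's view: the whole dataset
spot = dataset_item(cache, 123)   # verifier recomputes one item on demand
```

Péter's mobile concern is about the `make_cache` step itself: even the small cache is expensive to regenerate on weak hardware, so anything that grows it hurts light verifiers first.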
A
Sure — let me just address your first question. We are working on the Go implementation next; we're hoping that this should be rolled out this week. So again, we are taking on the implementation work ourselves and ensuring that there's nothing the developers of the Ethereum community need to do. I will turn to Def — sorry, Mr. Else — on how it scales to mobiles. So, Mr. Else, take it from there.
O
For the verification side, calculating from the cache the DAG entries that need to be used — that's extremely expensive; it's a couple of thousand instructions. So the roughly thirty randomized instructions we're adding to every loop are basically going to be in the noise compared to generating the DAG data that you need. Right now the parameters end up generating twice as much DAG data; we could tune that so that the same amount of DAG data needs to be generated, so the verification times would be approximately the same.
C
So it's not quite the same, but I started fixing the endianness in the cpp implementation, and I also wanted to optimize the loading, so in the end it ended up being a separate project, and it has the endianness correct. Also, for Peter's information: the cache generation — the small cache generation — takes around half a second. I haven't benchmarked it on mobile, but if that's something performance-critical, we can compare numbers.
C
Okay, yeah, I can start benchmarking that on mobile as well, if that's something that is critical. I just wanted to inform you that there's this separate implementation of Ethash, if that helps, and we want to actually use it — I think in cpp-ethereum — to switch from the old library.
F
I apologize that I couldn't actually find my notes on this, but I do really like the ProgPoW proposal, although I really am not sure if it carries the same properties, due to the FNV function — assessing its validity is outside of my expertise. However, I noticed something when we initially adopted Ethash, and I'm curious if this was noticed by Mr. If, Miss Def or Mister Else — anyways, something you might look into: I believe that the light client DAG generation is actually somewhat redundant.
F
Basically, the property of this protocol is maxing out — like you said — the DRAM on the GPU, and I recognized that when we were looking at the hardware performance, because we noticed there was also a difference between NVIDIA and AMD and stuff like that. So the point is: I think that fewer cycles are required for the light client DAG.
F
I can't remember exactly, but I would encourage looking into that, because if you reduce that, it might compensate for the addition of the math operations. So I wanted to point that out, and then also to ask whether the protocol has the same security properties as far as being acyclic and quickly verifiable on the light client. I think that's assumed, but I'm curious.
O
The basic structure is the same as Ethereum's existing Ethash: there's an array of register file elements that are calculated, and one of them acts exactly like the existing Ethash, in that it loads an element value from the DAG, updates its value, and then goes on to load the next value.
F
Got it. And — a quick note, by the way, on the redundant DAG generation cycles: what actually happened at the time is that everything was ready to go and we'd already launched Olympic, and we were kind of ready to launch the mainnet, so nobody wanted to go and refactor that out. So I'd be curious — if you post on GitHub somewhere, if you look into that — it would be great if that small bit of redundancy was optimized away.
M
I had a comment. Another aspect of these proof-of-work proposals that I don't see mentioned often is verifying them in contracts. With Ethash we have some efficient ways to verify Ethash PoW headers in contracts, and that enables cross-chain atomic transactions, or cross-chain relays or bridges.
B
To fix this — if you're looking for an example of an Ethereum-based-chain to Ethereum-based-chain relay, look at Loi Luu's PeaceRelay; it's Ethereum Classic to Ethereum, and I don't even think it's in use right now — I think it was a proof of concept — but that's an example of verifying Ethereum-style headers on a different Ethereum blockchain, if I recall correctly. And then another one would be the Doge-Ethereum relay that uses —
A
So we'll definitely take a look at that. I just wanted to address that the key message for why we are pushing this, and why we are so motivated for this, is that while Casper FFG will provide finality and security during the hybrid period, there are still proof-of-work vulnerabilities — specifically censorship attacks — that can be accomplished by ASICs, which we've already identified are running, and this can all be mitigated by ProgPoW.
A
So later today we will have more of a complete statement on this attack — a detailed explanation of the attack — and this is our main motivation for ProgPoW. We want to ensure that proof of stake has its best chance for adoption by giving the transition period a fighting chance. Okay.
K
I think — I don't know — that that attack ends up being a lot more limited by the finality. I know you might be detailing something later today that has to do with censoring these transactions, but I will say — and this is me, not the entire Casper team; I'm not sure what everyone's opinions are — I do think that, you know, if there is an easy switch to make this thing potentially more ASIC-resistant...
P
We hear that. We are definitely, as was said, working on trying to make this as painless as possible, and furthermore we agree that it doesn't change the finality behavior — finality is a good thing — and this is totally independent of that; there are no weaknesses to the proof-of-stake side of Casper.
B
So the next agenda topic is going to be — let me see here — EIP-210, blockhash refactoring. Martin put a note up; it says see the summary document in the agenda — if you refresh the page, on item 6 — and that kind of gives an overview of it, but Martin can also go through and explain it again a little bit. So, Martin, go right ahead.
N
So this was really raised before. There are some technical details to work out, and some interesting points have been raised on the Magicians forum about whether this should be, you know, like a flag-and-trap operation, or a flag that's cleared, or what the best implementation is. But the reason I wanted to add it to the agenda was to get a general gist of what client teams think of overflow checking in general.
N
My view is that the vast majority of math operations want to be overflow-protected. There are definitely use cases for overflowing math in the EVM, but most people are working with the assumption that there won't be overflows, and presently you have to code around that explicitly, at significant additional runtime cost. So I think it would be useful to add a backwards-compatible way to support overflow checking in the EVM that lowers the cost to clients and makes it more practical and more secure to do so. So I'm curious what the client teams are thinking.
N
Personally — if you use a trap mechanism, then if you have an overflow you simply jump to a trap handler, and you don't need the flag. But if you're using a flags mechanism, then resetting the flag on entrance wouldn't be any more efficient, and it seems to me that it would kind of void some of the purpose of having a flag, because the idea is that you can do a whole basic block of operations and then afterwards check whether any overflows occurred, which means you only really want one check per block.
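The two styles being compared can be sketched for 256-bit EVM arithmetic: "trap" aborts at the first overflow, while a sticky "flag" lets a whole basic block run and be checked once at the end. The names and exact semantics below are assumptions for illustration, not the proposal's final design:

```python
MOD = 2**256

# Sketch of the two overflow-reporting styles discussed above for
# 256-bit wrapping arithmetic. Illustrative only, not the EIP's
# final opcode semantics.

class Overflow(Exception):
    pass

class EVMMath:
    def __init__(self, trap=False):
        self.trap = trap
        self.flag = False      # sticky until explicitly cleared

    def add(self, a, b):
        wide = a + b
        if wide >= MOD:
            if self.trap:
                raise Overflow  # trap style: abort immediately
            self.flag = True    # flag style: remember, keep running
        return wide % MOD

# Flag style: run a block of ops, check once afterwards.
vm = EVMMath(trap=False)
x = vm.add(MOD - 1, 5)          # wraps to 4 and sets the flag
y = vm.add(x, 10)               # fine; flag stays set
overflowed = vm.flag

# Trap style: the first overflow aborts execution.
trapping = EVMMath(trap=True)
try:
    trapping.add(MOD - 1, 5)
    trapped = False
except Overflow:
    trapped = True
```

Nick's point is visible here: with the flag, the check happens once per basic block rather than per operation, which is what resetting the flag on every call would throw away.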
C
Yeah, but does it mean you would like to have two flags, one for signed overflow and one for unsigned overflow? I'm not sure, actually. If you do a regular arbitrary-precision implementation, you mostly do it using unsigned numbers, and then you try to incorporate the sign later on. So I'm not so sure it's so easy to actually get the signed overflow information doing it this way. Maybe there's a way to actually transform the results afterwards, but it's not obvious to me at this moment how to do it.
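Paweł's concern can be made concrete: when the VM computes with unsigned 256-bit words, signed overflow is a different condition from the unsigned carry-out, and has to be recovered from the operands' sign bits afterwards (shown here for ADD, using the standard two's-complement rule):

```python
MOD = 2**256
SIGN = 2**255   # the two's-complement sign bit of a 256-bit word

# Signed vs. unsigned overflow for unsigned 256-bit ADD, illustrating
# the point above: the two conditions are independent.

def add256(a, b):
    wide = a + b
    return wide % MOD, wide >= MOD        # (result, unsigned carry-out)

def signed_add_overflowed(a, b, result):
    # Two's-complement rule: signed overflow iff both operands share a
    # sign bit and the result's sign bit differs from theirs.
    return (a ^ b) < SIGN and (a ^ result) >= SIGN

# INT_MAX + 1: no unsigned carry, but it IS a signed overflow.
int_max = SIGN - 1
r1, carry1 = add256(int_max, 1)
signed1 = signed_add_overflowed(int_max, 1, r1)

# (-1) + (-1): an unsigned carry-out, but NOT a signed overflow.
minus_one = MOD - 1
r2, carry2 = add256(minus_one, minus_one)
signed2 = signed_add_overflowed(minus_one, minus_one, r2)
```

For ADD the transformation is cheap; as Paweł suggests, for other operations recovering the signed condition from an unsigned bignum result is less obvious.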
L
Just wondering — with ewasm, I know it counts instructions for metering. I think the change is some quid pro quo — it's great — but I'm just wondering: ewasm has built-in metering, right? So does that kind of negate the need for having these kinds of changes to gas costs for optimization, or would these kinds of changes still be helpful with ewasm in production?
G
I know that there are some slight challenges, for example with division, or exponentiation — I think, yeah, exponentiation. It's based on some special algorithm that kind of assumes that you can just cut off anything above 256 bits. So I can't answer this question offhand.
Q
So I could get a 512-bit product, and I have to check whether the product is bigger than the overflow value, and trap; and if I'm not going to trap, then I have to cast the product back down to 256 bits. So it seems a significant amount of extra work, and that extra code will have to be added to every arithmetic operation.
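The cost being described can be spelled out for a checked MUL: materialize the full double-width product, compare, then truncate — steps a plain wrapping MUL skips entirely (a sketch, with Python's unbounded integers standing in for the 512-bit intermediate):

```python
MOD = 2**256

# Sketch of the extra work described above for a checked 256-bit MUL:
# the full (up to 512-bit) product is computed, compared against
# 2**256, and then cast back down to 256 bits.

def checked_mul(a, b):
    wide = a * b                  # double-width intermediate
    overflow = wide >= MOD        # the extra comparison
    return wide % MOD, overflow   # the cast back down to 256 bits

fits, of1 = checked_mul(2**200, 2**10)    # 2**210 fits in 256 bits
wraps, of2 = checked_mul(2**200, 2**100)  # 2**300 wraps to zero
```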
L
So, as I understand it, with metering ewasm can meter how much computation is done for an invocation — not storage, or memory loads and stores. I understand it's great — it was long, but I have read the spec — and I just thought that perhaps with metering you'd be able to measure how much computation is done, so that you don't need to play around with the gas costs individually so much.
I
Either way, this seems like a positive thing, and while we might have some concerns about the overhead it adds to some commonly used opcodes, it seems like something that might be worth implementing, especially since languages are kind of already pushing this stuff in the first place and adding overhead to their contract code. I don't know — I haven't thought about this a lot, so that's kind of just a gut reaction.
N
Yes, so this one's even more straightforward. There's an increasing number of use cases where, on chain, a contract wants to know whether the code of another contract matches a known template, in order to do various things such as trusted implementations and so on. Presently they have to fetch the entire code into memory.
I
On the surface this seems good. It seems like this would encourage — languages like Solidity include, I think, a Swarm hash of some metadata in the contract bytecode, which — well, I guess that already makes this problematic if you're checking the full code; it seems like it would discourage that practice even more. But in general it seems positive.
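The template check Nick describes — compare a single hash of the callee's code against known template hashes, instead of copying the whole code into memory — is simple to sketch. Here `sha3_256` is a stand-in for Ethereum's Keccak-256 (they are different functions), and the bytecode is a made-up example:

```python
import hashlib

# Sketch of the pattern discussed above: instead of fetching another
# contract's full code and comparing it byte by byte, compare one
# code hash against a table of known template hashes. sha3_256 is a
# stand-in for Keccak-256; the bytecode below is illustrative only.

TEMPLATES = {
    hashlib.sha3_256(b"\x60\x00\x60\x00\xf3").digest(): "trivial-return",
}

def code_matches_template(code):
    """Return the template name for this code, or None if unknown."""
    return TEMPLATES.get(hashlib.sha3_256(code).digest())

known = code_matches_template(b"\x60\x00\x60\x00\xf3")
unknown = code_matches_template(b"\xfe")
```

The caveat raised above also falls out of this: compiler-appended metadata (like a Swarm hash) changes the code hash, so two functionally identical contracts built with different metadata no longer match the same template.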
C
Yeah, from the VM perspective it's also very useful information, and it's one of the pieces of information I care about — it's used to actually have unique identifiers for executable code. For example, in EVMC this information is already available to VMs, so it wouldn't be very problematic to expose that to contracts as well.
D
I have the agenda note up, so — a couple of problems with how it's written right now. I divided it into two. The intent — and it would have been good if Vitalik was on the call, so I don't know if we can solve this now, or maybe we'll have to talk about it next time — the intent was to not change the semantics of the BLOCKHASH opcode, so the BLOCKHASH opcode would still return only the last 256 hashes.
D
That's what he wanted. However, as it is specified, and as it is written in the contract, the contract doesn't care whether you ask about only the most recent blocks or about older stuff. So as it's written, it would basically be the same thing whether I do BLOCKHASH on a very old block or actually call the contract for something way back.
D
So one alternative is to do it as Vitalik wanted from the beginning: modify it to make clear that when you use the BLOCKHASH opcode, you only get the latest 256, and in that version it doesn't change anything in the BLOCKHASH opcode semantics, so the clients would actually not need to make any changes for that opcode. The only thing that would be required would be to increase the gas cost for it.
D
The second alternative is to have the EIP as written. That would change the BLOCKHASH semantics, because you could reach further back — and I don't know, that makes it kind of: why would I ever call the contract if the BLOCKHASH opcode exists like that? And then the second issue — it's not really an issue, it's a question: should we add support to obtain the Genesis hash from this contract? It's absolutely a nice thing, and yeah, it complicates things a little bit, because we would need to insert it in there during contract creation.
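The contract being discussed can be pictured as layered ring buffers: a ring of the most recent hashes, plus a sparser ring that keeps every 256th hash so older history stays reachable. This is a toy model — the actual EIP-210 contract has additional levels, a specific storage-slot layout, and is fed by a system-level transaction each block:

```python
# Toy model of an EIP-210-style blockhash contract: a ring of the most
# recent 256 hashes plus a second ring keeping every 256th hash.
# Simplified assumption of the layout, not the EIP's exact storage map.

RING = 256
ZERO = b"\x00" * 32

class BlockhashStore:
    def __init__(self):
        self.recent = [ZERO] * RING   # last 256 block hashes
        self.sparse = [ZERO] * RING   # every 256th block hash

    def put(self, number, blockhash):
        """Called once per block (in the EIP, via a system transaction)."""
        self.recent[number % RING] = blockhash
        if number % RING == 0:
            self.sparse[(number // RING) % RING] = blockhash

    def get(self, current, number):
        if current - RING <= number < current:
            return self.recent[number % RING]
        if number % RING == 0 and current - RING * RING <= number < current:
            return self.sparse[(number // RING) % RING]
        return ZERO                   # out of the reachable range

store = BlockhashStore()
for n in range(1, 1000):
    store.put(n, n.to_bytes(32, "big"))
recent = store.get(1000, 999)   # within the last 256: reachable
old = store.get(1000, 512)      # multiple of 256: still reachable
gone = store.get(1000, 500)     # too old and not a multiple of 256
```

This also shows the semantic fork Martin describes: the contract itself happily answers for old multiples of 256, which is exactly the reach-further-back behavior the BLOCKHASH opcode does not have.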
C
Oh — so the current contract is implemented in Serpent, and Serpent only has signed integers, I believe. It's hard to — I mean, I had to guess that, but it looks like it from the behavior I get from writing unit tests. So there was some comparison inside the code that doesn't actually account for that, and so you can overflow this comparison and get some unspecified results — I mean, they are specified, but it's not what you intended.
C
So another check is needed in the Serpent code, to check that the argument from the call is not negative. There's a PR that fixes these two issues, and I can continue working on writing unit tests.
C
Okay, so I wanted to actually add more unit tests to make sure it works as specified, and then I would think about optimizing the contract, maybe using Solidity assembly or something like that. It's not mandatory to do that, but having unit tests, it's quite easy to check whether the results are the same and the costs are lower.
C
...to a different storage location. In this way you will keep all the information about some specific blocks all the time, so we never override some important information. And block number zero — it meets all these requirements, being divisible by any multiple of 256. So I believe you can change the logic of how you store information a bit in this way.
C
Also, on mainnet, that would somehow solve the Genesis hash problem. But I think it's quite complex compared to what we have right now in the contract. I mean, I haven't finished the prototype of the implementation; it seems it requires recursive functions or something like that — maybe it can be flattened down later on — and it's not really pretty.
C
So I cannot do it very quickly, but that would be one of the solutions to this blockhash question. Also, it might be a bit problematic to actually insert all these block hashes at the time of the hard fork. However, a good property of that solution is that in smart contracts you can assume the information about historic blocks is already there, according to the specification; you don't have to also account for the hard fork number in your contracts to actually compensate for this.
B
OK, let's summarize the two parts of the EIP discussion. I think that the PR that you opened — I think it was ten fifty-seven, no wait, what was it, ten ninety-four — is appropriate to include fixes for EIP-210, but we might also take those comments to EIP-210 itself, so that it's kind of kept up with by everybody, or at least linked to the PR. Great — so does that kind of cover most of your EIP-210 stuff, Martin?
B
Sounds good. It looks like we are pretty much out of time, so Nick and Greg, I'm going to have to move your items to the next core dev meeting — EIP-1087 and the concern about using native browser VMs while running ewasm. So I will move those to the next meeting; sorry we couldn't get to them today. Otherwise, we'll see everybody in two weeks. I'll also — if you go to the ethereum/pm...