From YouTube: Ethereum Core Devs Meeting #40 [06/15/18]
A
Sorry, I had a hard time finding the unmute button — that was the problem. So, regarding client updates, I don't think we have anything too spectacular; we're focusing mostly on performance. Our previous release was last week, and it is, give or take, 20-30 percent faster and lighter than the previous ones. We still have a lot of ideas that we're currently benchmarking and trying to implement. I think that's the only meaningful update I can give you right now.
G
We just released — and this release I have mentioned before — it's about a RocksDB update and some memory improvements. We also have a new one planned, so it's good that we've started to have some time for planning and some project management. We plan to start work on that release during next week, and we also plan to update our Casper implementation according to the latest changes in the EIP next weekend, and run it in Parity's test network. So that's all my updates, I guess. Thanks.
C
So, for my changes, I already posted a branch — a testing branch that is a work in progress. It's a modification to the existing C++ client, with which I ran the tests over RPC, and I had success executing the state tests. The state tests work really fine, but the timing is not exactly great.
C
It takes ten minutes to execute all of the tests, but that is mostly because the C++ codebase is not optimized to run the RPC requests, and some optimization is required on the client side. I also did some research: state tests could be generated via RPC into blockchain tests, and blockchain tests could be executed without mining — without the proof-of-work check. But the main stopper for me is that the C++ client is not ready; it's not optimized.
K
So it's more of an experimental platform at the moment, and yes, over the summer, with the help of Martin, I'm going to participate in the security audit of Whisper and update the documentation, and then I think the release will be complete.
A
Awesome.
D
Hello — I just want to say that the latest version of my client is still syncing; I think it has been syncing for about five days, and it's currently at block 5.07 million. This is actually the archive mode, which preserves the entire history, and currently the database size is about 210 gigabytes. That's pretty good, I mean, and I will be able to get it down even more, so I think.
D
Next is the eth protocol — being able to basically feed it whatever blocks at once and cause reorgs of arbitrary shapes — and hopefully maybe I'll be able to get some tests in there as well, but I'm not sure yet about the tests. That's my goal, and then after that, if I succeed in doing this, I will just clean up some of the RPC implementation and I think I'm going to do the first release. That's it for me.
D
Yes, but in a slightly different way. I did review your tests, and the way that you do it uses the RPC interface, which will have to be implemented in every client, but I'm trying to do it in a way that does not require any changes in existing clients — basically run the same tests, but without any changes, so you can just take any client and test it, regardless of whether they have this.
D
So basically I run my tests on two types of machines; they're both in Google Cloud. The specification is four CPUs — I'm not sure about the exact CPUs — 26 gigabytes of memory, and one of them has SSD, I think 500 gigabytes of SSD; the other one has 500 gigabytes of HDD, and I can kind of compare them with each other. That's it. And currently the memory consumption is actually fixed: it doesn't consume more than 6 gigs of memory in terms of heap allocation.
D
The numbers I just cited to you are from the SSD, so the HDD is falling behind, but not too much. In my blog post you can see the graphs showing that the HDD is falling behind but that it's still feasible. When they both sync to the end, I will publish another updated comparison for everybody to see, and I'll also publish the disk breakdown, because with the database I'm using it's very easy to break down.
M
Hey guys — Raul, working here on sharding for geth. I just wanted to do a quick update on where we are now. We have essentially finished the minimal sharding protocol implementation: we currently have proposers and notaries performing their responsibilities, interacting with the sharding manager contract, and we have implemented this using similar models as geth does, making sure that we're extensible with their interfaces.
M
Aside from that, we're holding off a little bit on the latest research — the kind of randomness-beacon and dynamic-shard-count properties that have been worked on in that research — just because those things are currently in flux, and we're trying to basically be as scrappy and as efficient as possible. Our first release will basically involve a kind of containerized system that anyone can download.
M
They can clone the repo, run maybe a Docker or a Kubernetes config, and see proposers and notaries all performing their responsibilities, and kind of observe the state of the shard network on their local computer, so it'll be more of a proof-of-concept release. That's essentially where we are right now. We're also exploring peer-to-peer networking through go-libp2p, and that's also something that's in flux with the research, so at the moment we're just trying to get something that works and is aligned with the current spec that's out there.
M
It's all being gossiped at the moment, and essentially we're mostly focusing on the workflow of just the proposers and the notaries interacting with each other, thinking about sync of shard chain data, and we still haven't really figured out reorgs — that's something we're going to be working on next. Apart from that, we're trying to not do anything that will be extremely disrupted by changes on the research side. Okay.
A
All right, awesome. So what we'll probably do now is go to ewasm, because we're kind of in the research / client / whatever section. So ewasm, then KEVM, and whatever else Everett is working on, and then we'll do Vitalik and his update with Danny and Justin and whoever else wants to be part of that update. So, for ewasm — Casey, did you want to give an update?
O
Yeah, sure. Mostly we've been focused on making a block explorer that's ewasm-friendly for the impending testnet launch, and also Jake has been working on an abstraction around Hera so we can swap out Binaryen, which is a wasm interpreter, for WAVM, which is a JIT engine, so it should be faster than Parity if we can do that right.
P
So these are like the call opcodes and the create opcodes, and things like BLOCKHASH, which queries something about the blockchain state. I've pulled them out into their own file that we're calling the EEI, for Ethereum Environment Interface, and they essentially just define the abstract updates that have to happen on the blockchain side, or queries that have to happen through the blockchain, to implement those particular pieces of functionality, and they allow you to kind of take two K definitions and mash them together.
P
So then I'm going to take KEVM, remove that functionality from its VM, and just mash it together with the EEI's functionality and make sure the resulting thing still passes the tests, basically. And then, once that thing passes the tests, it should be fairly straightforward to take the EEI and mash it together with KWasm and get a model of KEwasm. And yeah, there's been a lot of discussion about what the EEI is; to me it just seems like a bunch of abstract...
P
...state changes that you can trigger on the blockchain from the VM, and the VM should only handle execution-level details. So the VM for EVM is just, you know, the stack — the word stack — and the local memory, and pretty much nothing else. So things like ADD fall completely within the VM fragment and not at all on the EEI side, whereas something like SELFDESTRUCT falls on the EEI side.
P
Another thing that's come up: there are some kind of sticky opcodes in EVM, like CALLCODE — historically not-great things — and we'd like to limit their use in ewasm. So we're thinking it'd be nice to say: okay, there's this set of abstract state-update functions that clients can implement for the Ethereum blockchain, and each of the different execution engines — ewasm, for instance — can choose which subset of that it actually exposes to contracts.
P
So right now the subset is just the entire thing, and it's all exposed to EVM, but we're thinking, for instance, okay, we'll break SELFDESTRUCT into two operations: one which is just the self-destruct — all it does is destroy the account — and another, which is just a transfer of funds. And I've heard arguments against a transfer-funds primitive that honestly don't make sense to me, because you can already transfer funds...
P
...you just have to kind of jump through this create/self-destruct loop instead to do it, so I think we should just have a primitive for it. And maybe we can, you know, expose that primitive — the transfer-funds primitive — to ewasm, but the EVM, whenever it calls SELFDESTRUCT, always has to call both self-destruct and transfer-funds at the EEI level, to maintain backwards compatibility. So, sorry, that was probably a lot, but does the overall idea make sense?
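The split described above can be sketched in a few lines. This is a purely illustrative toy — the method names (`transfer_funds`, `self_destruct`) and the `EEI` class are hypothetical stand-ins, not the actual EEI function names:

```python
# Toy sketch of the proposed split: the EVM's legacy SELFDESTRUCT always
# invokes both EEI primitives to preserve old semantics, while a newer
# engine could expose only transfer_funds. Names are illustrative.
class EEI:
    def __init__(self, balances):
        self.balances = dict(balances)
        self.destroyed = set()

    def transfer_funds(self, src, dst, amount):
        assert self.balances.get(src, 0) >= amount
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount

    def self_destruct(self, account):
        # Only destroys the account; moving the balance is a separate primitive.
        self.destroyed.add(account)

def evm_selfdestruct(eei, account, beneficiary):
    # Legacy EVM semantics: move the whole balance, then destroy.
    eei.transfer_funds(account, beneficiary, eei.balances.get(account, 0))
    eei.self_destruct(account)

eei = EEI({"a": 10, "b": 0})
evm_selfdestruct(eei, "a", "b")
print(eei.balances["b"], "a" in eei.destroyed)  # 10 True
```

The point of the split is that an engine like ewasm could expose `transfer_funds` on its own, without the account-destruction side effect.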
N
The basic idea here is that this is a sort of substantial reworking of the intermediate steps in the roadmap, though not of the final product. The main kind of material difference is: what this specification says is that there is a beacon chain, which is basically a proof-of-stake chain that kind of hangs off of the main chain — you can sort of call it a side chain of the main chain — and this beacon chain implements Casper FFG validation. So it implements voting, justifying, and finalizing in the same way that Casper FFG did.
N
Basically, we just end up not going with the version that had Casper FFG as a contract on the Ethereum main chain. Instead of deploying hybrid proof of stake as a kind of prelude to full proof of stake and sharding, what we can do is just run the beacon chain, and because the beacon chain kind of implicitly links to the main chain, it can be used to finalize the main chain — but it does so without being sort of directly inside the main chain.
N
So the key thing that this would mean is that the only contract — written in Vyper, or written in whatever — that would exist at any point in the roadmap is basically a contract that allows you to call a function to send 32 ETH along with some arguments. It would burn the 32 ETH and it would create a log, and everything else would be processed by the rest of the system.
N
It is as if it were a contract, with the exception that this particular mechanism is one-way. So it basically means that if someone deposits 32 ETH into this contract, then that 32 ETH is basically just in there until a future hard fork implements the part that allows that 32 ETH to get withdrawn into a shard, where the shard state transition function is more fully enabled.
N
Basically, the main advantages of this are, one, that the Casper component is somewhat more separate from the main chain, which basically means that it can be developed kind of less intrusively in some ways. So basically it can be developed as a separate chain, and it can have its own rules about how messages are passed around the network, how blocks are passed on the network...
N
...how blocks are included — and you don't really have to worry about interactions between Casper-related transactions and non-Casper-related transactions, or what gets processed in parallel and what's not, and so forth, because there's a much clearer wall between the two systems. As for what the beacon chain does: the beacon chain is essentially a full proof-of-stake chain. So the beacon chain has a proof-of-stake-based block proposal mechanism, but every block in the full proof-of-stake chain also references a block in the main chain.
N
The spec describes, basically, what the blocks in the beacon chain are, what the fields in the beacon chain are, what the state is and the different parts the state is split into. It describes the fork-choice rule; it describes the state transition function. There is also a Python implementation — I mean, it's somewhat outdated now, but it implements most of the core logic — and then I believe Danny has his own private fork that's even more advanced at this point.
N
Another important feature of this: you might notice that in this particular proof-of-stake chain, the deposit size reduces from a minimum of 1,500 ETH to exactly 32 ETH. That's basically because, in the short term, we're using BLS signatures for signature aggregation, which has the property that if you have, say, a fairly large number — even a thousand — of validators that sign the same message, then all of those signatures can be combined into one signature.
N
And in order to verify that signature, you basically need just one signature verification operation plus one elliptic curve addition for every participant — so it'll look like about a thousand elliptic curve additions and one regular signature verification, and the cost of an elliptic curve addition is fairly tiny. So that's the short term; as far as the goal of wanting to have something that's kind of quantum-proof and ideally kind of purely hash-based...
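The cost argument above can be made concrete with a back-of-the-envelope model: one expensive pairing-based verification plus one cheap curve addition per signer, versus one full verification per signer without aggregation. The two cost constants below are made-up illustrative units, not benchmarks from the call:

```python
# Rough cost model of BLS aggregate verification as described above.
# Constants are hypothetical relative units, chosen only to show the
# orders of magnitude involved (a pairing check is vastly more expensive
# than a single elliptic-curve point addition).
VERIFY_COST = 1_000_000  # one pairing-based signature verification
ADD_COST = 500           # one elliptic-curve point addition

def aggregate_verify_cost(n_signers):
    # One verification of the combined signature + one addition per key.
    return VERIFY_COST + n_signers * ADD_COST

def naive_verify_cost(n_signers):
    # Verifying every signature individually.
    return n_signers * VERIFY_COST

print(aggregate_verify_cost(1000))  # 1500000
print(naive_verify_cost(1000))      # 1000000000
```

With a thousand signers, aggregation turns a thousand verifications into one verification plus a thousand cheap additions.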
N
In the long term, we are simultaneously exploring basically replacing the BLS-based system with STARKs, and there's reason to believe that if we use the right hash function, then using STARKs to verify aggregate signatures may not be that bad either, once the tech has developed a bit more. So I guess, just to kind of re-summarize all of this...
N
...at the same time, this beacon chain is also something that pushes us much further along the way toward a kind of final-product sharded system. And because of things being written natively, and kind of optimizations involving bitmasks that aren't really available in the EVM, as well as signature aggregation...
N
...this design can basically scale up to, well, the theoretical maximum — which is, assuming 10 million ETH participating, about 300,000 validators — and it can likely even survive the absolute practical max, which is 4 million validators, which is basically absolutely everyone staking. And there are numbers for why it could plausibly work even under those conditions. Basically, the kind of worst-case behavior, and even the average-case behavior, is substantially more feasible than it was under the old roadmap.
N
One thing that it does, though, arguably, is somewhat reduce the need for some of the fancier schemes, basically because the minimum staking amount goes down from 1,500 to 32 ETH, so individuals participating in staking directly becomes more viable. Danny or Justin, do you have anything that I missed, or that you think I should say more of, or that you want to say?
N
That's a very good question. So — I mean, I used the word cross-link a lot, right, so let me kind of define the term very clearly. The basic way that this category of designs works is that you have a separate chain for every shard, and these chains are kind of progressing on their own in parallel, and once in a while some particular block from every shard would basically be attested to by a randomly sampled committee of validators on the main chain.
N
So, basically, on the main chain, some randomly sampled committee of validators would come together and say: we've all verified the portion of the shard chain since the last cross-link, up until this block; we think it's all correct; and here's the hash of a block, and we all signed it.
N
So if 10% of all the validators are deposited, it would cover 10% of all the shards, and so you could see that, say, any given shard would have a new cross-link added to the main chain once every hour. And, I mean, if we want, we could always crank up the number of cross-links, at some cost to the efficiency of the system — we're just increasing overhead. Like, generally, basically, the cross-...
N
...links might happen something like once every hour for a shard, but even in between cross-links you can generally trust the shard chains' own growth to be fairly good at kind of not reverting in the short range. So if a block even has, say, two or three confirmations in one of the shard chains, that's already a pretty good level of security. But that's kind of a long-term goal for these shards — if we want to be lazy...
N
So if we want to just say, you know, we want quadratic sharding out the door as fast as possible, and we don't care about one-hour cross-link times — we just want the five-to-ten-thousand TPS already, damn it — then the strategy that we could take is basically: instead of cross-links referring to a shard chain, cross-links are basically just blobs that are connected to a shard, and there's some proposer...
N
...that's assigned to a shard, where that proposer is the one that would basically have the ability to create a cross-link, right. So one of the things that's not yet listed in this document is the exact rules by which shard proposers would be selected, but generally what would happen is that every validator would be randomly assigned to some shard for a fairly long period of time to be a proposer, and then, in this very naive, simplified design...
N
...basically, every time a cross-link is created, some new proposer would get selected, and that new proposer would have the ability to propose the next cross-link. But in the longer-term model, basically, the set of proposers in that shard would kind of, between themselves, end up creating and pushing forward the chain until some block in the chain gets cross-linked.
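The naive design sketched above — random shard assignment, plus a fresh proposer drawn after each cross-link — can be written as a toy. Everything here is hypothetical illustration; the actual selection rules were explicitly not yet specified in the document being discussed:

```python
# Toy of the naive proposer-selection flow described above.
# Not the real rules (those were unspecified at the time of this call).
import random

def assign_to_shards(validators, num_shards, seed):
    """Randomly assign every validator to one shard for a long period."""
    rng = random.Random(seed)
    shards = {s: [] for s in range(num_shards)}
    for v in validators:
        shards[rng.randrange(num_shards)].append(v)
    return shards

def next_proposer(shard_validators, crosslink_count, seed):
    """Draw a fresh proposer each time a cross-link is created."""
    rng = random.Random(seed * 1_000_003 + crosslink_count)
    return rng.choice(shard_validators)

shards = assign_to_shards(range(100), 4, seed=42)
assert sum(len(v) for v in shards.values()) == 100  # everyone is assigned
```

Each new cross-link bumps `crosslink_count`, deterministically rotating the proposer within the shard.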
L
One of the kind of advantages of this — I mean, of moving away from having Casper FFG in the current chain — is also more security, and that's kind of a consequence of the one-way deposit. So it's not possible to do withdrawals until there is state in the shards, and that means that effectively the deposits will be frozen for quite some time, and it means we can kind of launch with more confidence and potentially experiment and move faster.
L
Another thing is, because it's all native code in the beacon chain, not the EVM, there's going to be some simplification. We don't have to worry about gas; we don't have to worry about gas limits. And it's also more future-proof, because the kind of end goal — the endgame — of Ethereum is effectively to deprecate EVM 1.0 as we know it. So it's also more future-proof, as Vitalik said, because it's much closer to the final design.
L
Another thing is that, in general, there'll be more unity between Casper and sharding and the teams developing these two projects, so I think that's good — you know, more network effects there. Another thing to mention, maybe, is that — I mean, as Vitalik said — there's going to be a lot of reuse of capital and infrastructure between Casper and sharding, all the way from the capital that is deposited — the 32 ETH...
L
...all the way down to messages, gossip channels, signatures, aggregation mechanisms, accounting. And kind of from a high level, I'd say that large parts of Casper actually come for free if you're given sharding — so large parts of Casper are kind of a subset of the infrastructure that we want to build out for sharding.
L
You can't be a Casper validator without being a sharding validator, or vice versa, and it means that if, for some reason, we mess up the incentives in such a way that it's, for example, very profitable to be a Casper validator, but being a sharding validator is not so great — because you have to do all this work and you don't get much income from it — well, tough luck!
L
There's this recent idea of one-bit custody bonds, and these are really, really cool, and I just want to give an update. So, basically, in sharding you have validators voting on the availability of some piece of data, like a shard block, and, you know, we can have an honesty assumption and we can trust the votes that are being made.
L
What we can do is have this idea of a custody bond, which is that when you make a vote, there's some sort of crypto-economics which highly incentivizes you to actually have custody of the data whose availability you're voting for. And one of the recent discoveries is that, basically, we can have this enhanced voting mechanism — where the validator most likely has custody of the data — at basically the same cost as making a plain vote with a signature, and it all works really nicely with the BLS aggregation.
Q
So multiple writes of the same value to the same slot only get charged once, and if you reset it to its original value, then you don't even get charged for that — and the same for the read cost. So the idea here is to improve the usefulness of storage and also reduce excessive gas costs for operations that actually aren't as expensive as they're being charged for at the moment.
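The net-metering idea described above — charging by a transaction's net effect on a slot rather than per write — can be sketched as follows. The gas constants and the exact rules here are simplified illustrations, not the precise EIP wording:

```python
# Simplified sketch of net gas metering for storage, as described above:
# repeated writes to a slot within one transaction are charged once,
# and restoring the original value costs only a nominal amount.
# Gas numbers are illustrative, not the EIP's exact schedule.
SET_COST = 20000    # net effect: zero -> nonzero
RESET_COST = 5000   # net effect: value changed (nonzero original)
NOOP_COST = 200     # net effect: slot ends at its original value

def net_sstore_gas(original, writes):
    """Gas for a sequence of writes to one slot, judged by net effect only."""
    final = writes[-1] if writes else original
    if final == original:
        return NOOP_COST   # all writes cancelled out
    if original == 0:
        return SET_COST
    return RESET_COST

print(net_sstore_gas(0, [1, 2, 3]))  # 20000: one SET despite three writes
print(net_sstore_gas(5, [7, 5]))     # 200: slot restored to its original value
```

Under the pre-existing schedule, each of those writes would have been charged independently; netting them out is what removes the excess cost for cheap operations.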
A
Since those are in the list of potential EIPs for the hard fork — we've gone back and forth about when to do Constantinople, and I kind of just want to hear what everyone's thoughts are about maybe doing it before the end of the year, or as soon as possible, either one, because we don't have a timeline for some of the research ones that we might want to bundle into a huge hard fork. I think it would be prudent to talk about anyone's opinions on whether they want to have it done this year.
A
Yeah, and it looks like we have enough EIPs now to justify a hard fork, because before we just had bitwise shifting and the BLOCKHASH refactoring, and now we have a few other things — like skinny CREATE2 — that might want to go in there, and stuff like that. Of course, keeping in mind that it will take a lot of time for testing, and we're not going to rush anything like we've done in previous forks that needed to be rushed because of emergency situations.
F
I just mean, last year, I think there was a fork kind of soon before Devcon, and people were concerned about that, because what if something bad happened while we were all gone? But the counter to that was, if it was after Devcon, people might have too much work through Devcon. So that's just something to keep in mind as we plan this. I know we don't even know which ones are going in, but let's just not put ourselves in some weird predicament around it, yeah.
A
So it's the middle of June right now. That's one, two, three, four, five — about five months till Devcon, something around that area — so I think we can do a hard fork in less than five months. Does that sound right for the amount of EIPs we may have going in, which would be about three or four of varying levels of difficulty?
A
Actually, if you look at item six — I forgot about this — I put at item six some of the concerns that Martin Holst Swende had; specifically, the clarification is addressed to Vitalik. Let me look real quick... yeah, this is for Vitalik: so, clarifying that it does not change the current semantics when invoked via BLOCKHASH, so that it doesn't deliver older blocks. Do you have any comments on some of the concerns listed in the agenda, Vitalik?
N
Yeah, the 1-million-block limit is definitely something that should exist, regardless of gas limits, and I guess the other thing worth pointing out is that clients don't have to implement it by making it an opcode — that's just an alternative way of implementing it.
A
So if we can get that sorted out, we can put EIP-210 in. And then, if we want, the next one on the list is account abstraction. I think we already tabled that pretty much permanently, because it was too complicated — unless that's something that's needed for any of the future research things we would need to put in, I think.
Q
I mean, basically, the idea is we currently have a way to fetch code, but although we already have the pre-computed code hash, there's no efficient way to fetch it inside the VM, and there are a bunch of situations where it's useful to be able to do so, and there's no reason it should cost as much as it does. So this is just a suggestion to add a very simple opcode that fetches the code hash, without requiring EVM code to fetch and hash it itself.
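The argument above is that the hash already exists in the account record, so exposing it is a constant-time lookup rather than a copy-and-hash. A toy model (with a stand-in hash function and made-up names, not the EIP's):

```python
# Toy model of the proposal above: accounts already carry a precomputed
# code hash, so an opcode can return it directly instead of forcing a
# contract to copy the full code and hash it. Names are illustrative;
# real clients use keccak-256, stubbed here with sha256.
import hashlib

def keccak_stub(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class Account:
    def __init__(self, code: bytes):
        self.code = code
        self.code_hash = keccak_stub(code)  # computed once, at deploy time

def extcodehash(state, addr):
    # O(1): return the stored hash, never touching the (possibly large) code.
    return state[addr].code_hash

def hash_via_codecopy(state, addr):
    # What contracts must do today: fetch all the code, then hash it.
    return keccak_stub(state[addr].code)

state = {"0xabc": Account(b"\x60\x00\x60\x00")}
assert extcodehash(state, "0xabc") == hash_via_codecopy(state, "0xabc")
```

Both paths return the same hash; the difference is purely cost, which is the EIP's motivation.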
A
Cool — and there is an Ethereum Magicians thread that talks about it more. So one thing, Nick: I think that, at least on the EIP website, there's a lot of things that are TBD — oh, that's okay, never mind, those are just test cases and implementation, so that's something that would need to be done after we approve it, pretty much. Yeah, but maybe a little more detail in the EIP would be helpful for people to understand it more. It looks pretty straightforward, though, yeah.
A
And then, I know Martin's all for it, because he said so online in the past, and then the 0x team needed some clarification on a few things on the Magicians forum. So if you can take a look at that, Nick, before the next call, we can address those then. Wonderful. Any other comments on that EIP?
N
To answer Alex's question just now — do any of these EIPs have impacts on scalability and sharding — on the sharding side, basically, none of them do, and that doesn't matter, especially because the sharding roadmap is kind of starting with this separate beacon chain. Skinny CREATE2 makes state channels easier, and state channels are definitely a very valuable short-term scalability technology.
N
Another thing we could do to improve scalability is improve the scalability of privacy, and the way that we would do that is by adding an EIP that dramatically reduces the gas cost of ECADD and ECMUL and pairings, and making sure that all of the major implementations have optimized implementations of those operations, so that it doesn't introduce any kind of new DoS vulnerabilities. And I actually think that's something that probably could be valuable, right, because if we can knock the cost of any of these ring...
N
...signature applications down from the current 200,000-something gas per participant to something like 30,000 gas per participant, then I think we'll see a lot more Ethereum-based privacy solutions in the short term, and we'll probably see a lot more ZK-snark stuff, as ZK-snarks can also help with short-term scaling.
D
So they've done some benchmarks on actually putting Zcash-type transactions on Ethereum, and the main gas cost seems to be not the actual computation of the elliptic curve signatures, but actually the storage cost of the nullifiers, because they need to keep, forever, the registry of the nullifiers that have already been spent, so that nobody can double-spend them again. And that seems to be by far the major cost — the actual EC signatures are basically just insignificant compared to that.
N
So the cost of a storage slot is 20,000 gas, and if it's just a matter of indices — if you cut the size of the indices down — you can probably amortize it down to like five to seven thousand, probably more like seven to ten thousand gas. But the cost of verifying a snark, even with the more recent snark protocols, is still in the hundreds of thousands. Okay.
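The trade-off being discussed can be put into per-transaction arithmetic using the figures quoted on the call (20,000 gas per fresh storage slot, a snark verification on the order of 700,000 gas). The batch sizes are illustrative:

```python
# Per-spend gas arithmetic for the discussion above: aggregation shares
# the snark-verification cost across a batch, but each spend still stores
# its own nullifier, so storage dominates as batches grow.
SSTORE_GAS = 20_000         # one fresh storage slot (figure from the call)
SNARK_VERIFY_GAS = 700_000  # order-of-magnitude verification cost

def gas_per_spend(batch_size):
    # One nullifier slot per spend + an equal share of one verification.
    return SSTORE_GAS + SNARK_VERIFY_GAS / batch_size

print(gas_per_spend(1))    # 720000.0 — verification dominates
print(gas_per_spend(100))  # 27000.0 — storage is now ~74% of the cost
```

This is exactly Alexey's point: once signatures aggregate but nullifiers don't, the 20,000-gas slot becomes the floor you cannot amortize away.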
A
So I know previously we tabled the Blake2b addition to the precompiles, because we were waiting for ewasm, and we were saying we didn't want to add a bunch of precompiles in the meantime, because that would take away from other research, development, and improvements.
N
I would argue that the things that have changed now are: obviously, number one, this roadmap, by which ewasm is definitely going into the new sharding chain and not the EVM 1.0 main chain, which realistically kind of delays its availability; and number two is just that, given that most of the work is already done, in some ways it is actually lower complexity than a lot of the other candidates, because it's kind of fairly contained.
D
Sorry, Vitalik, I just remembered — the reason I said that the storage cost was actually overwhelming the elliptic signatures is because it is possible, at least with the STARKs, to actually aggregate many signatures — basically, many such transactions — into one, and you reduce the cost of the signature verification; but you cannot aggregate the nullifiers, so when you start aggregating, the cost of storing nullifiers becomes the main cost. Essentially, yeah — that's the context.
N
But the cost of verifying a snark is still very substantial, right. So, yes — okay, so the benefits of cutting the gas cost of a snark by a factor of — let's say, for the sake of example, we managed to cut it from like 700 thousand to 200 thousand: first of all, that's a scalability gain for whoever is broadcasting one, out of the 8 million gas per block.
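The example figures above (700k cut to 200k gas, against an 8-million-gas block) translate directly into how many verifications fit per block. A quick check of that arithmetic:

```python
# Arithmetic for the example above: cheaper snark verification means more
# verifications fit into one 8-million-gas block. All figures are the
# illustrative ones quoted in the discussion, not measured costs.
BLOCK_GAS = 8_000_000

def snarks_per_block(verify_gas):
    return BLOCK_GAS // verify_gas

print(snarks_per_block(700_000))  # 11
print(snarks_per_block(200_000))  # 40
```

The roughly 3.5x increase in per-block capacity is the same factor that shows up below as reduced latency: an aggregator needs to queue fewer items before one verification is worth its fee.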
N
It improves usability, because it means that you need to wait for a smaller number of things before you can aggregate all of them and publish a snark, and that reduces the latency. So, at any given acceptable fee level, the latency of an application could potentially go down by a factor of three. And third, I'm not just thinking of snarks — I'm also thinking of other tech where the verification cost dominates — to give very particular examples.
A
Okay, that sounds like a good argument for it, so we'll put the Blake2b stuff on the agenda next time, and I'll let Zooko and Jay know that this is back on as a potential for going into the next hard fork. Does anyone have any other comments? I think that Zooko and his people are willing to put some work into this, to figure out some implementations that are compatible across clients — I know they found some for Go and Rust and Python, I believe. So, yeah — any other comments on that?
Q
No — I mean, the really short version is, obviously, my particular use case is DNSSEC, where places like Cloudflare and others are starting to make P-256 signatures really widely used, but it does seem to be the curve of choice for a lot of asymmetric crypto applications outside blockchain, and it'd be really useful to be able to do an EC-verify for those curves as well.
D
Yeah, so I mean I think it's all good, but the trend that I could see with this kind of thing is, remember this thing about the RISC architectures of CPUs, where we used to have processors with really complex instructions, and some people came around and said: let's do the very simple things, but very cheaply. So maybe there is a way to essentially just sort of generalize it into a very small set of instructions which could implement any of the elliptic curves. So yeah.
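The RISC-style direction can be illustrated: over a short-Weierstrass curve y^2 = x^3 + a*x + b, point addition and doubling decompose into a handful of field primitives (modular add, multiply, inverse), with the curve itself reduced to parameters. A minimal Python sketch, not any client's implementation; the alt_bn128 parameters are the EIP-196 ones.

```python
# Affine point addition/doubling over y^2 = x^3 + a*x + b (mod p),
# built from just a few field primitives. Any curve of this shape
# is just a different (p, a) parameter choice. Omits the y = 0
# doubling edge case for brevity.
def inv(x, p):
    return pow(x, p - 2, p)  # Fermat inverse, p prime

def ec_add(P, Q, p, a):
    if P is None: return Q   # None is the point at infinity
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None          # P + (-P) = infinity
    if P == Q:
        s = (3 * x1 * x1 + a) * inv(2 * y1, p) % p   # tangent slope
    else:
        s = (y2 - y1) * inv(x2 - x1, p) % p          # chord slope
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

# Example on alt_bn128 (a = 0, b = 3), generator (1, 2):
p = 21888242871839275222246405745257275088696311157297823662689037894645226208583
G = (1, 2)
G2 = ec_add(G, G, p, 0)
```

The same three primitives (addmod, mulmod, modular inverse) would serve secp256k1, P-256, and so on, which is the "small instruction set" idea.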
D
N
Q
I feel like, and I'm talking here about adding precompiles, not new opcodes, that if we did so, clients can potentially implement multiple curves using the same underlying library, because I know there are semi-generic ECDSA libraries that can support multiple curves, so the additional complexity per curve should be fairly minimal. It's more like adding new standard libraries than new opcodes, yeah.
A
N
I was just making the comment that we're already most of the way there to having precompiles for finite field operations and the like, basically. And you might want to do this yourself: look through some of the contracts that I wrote that implemented elliptic curve multiplication [in Serpent], basically.
D
We have multiple opcodes, I mean multiple precompiles, and any actual ECDSA or whatever verification will use multiple calls, multiple invocations of the same opcodes or precompiles. And there's a really interesting issue: one thing that each and every one of those opcodes or precompiles does is verify that the curve point is actually valid, and that verification is insanely expensive. And because we have multiple fine-grained precompiles to create complex behavior, we actually repeat the same verification over and over and over again.
N
Like, if we try to do that, let's actually compute how much point verification costs, right? That's just verification of the equation y^2 = x^3 + b, which is cheap enough that I can see how it's comparable to the cost of an EC add. But an EC mul I expect to be much bigger than that, right? So I guess, yes.
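The validity check being priced here can be written down directly (alt_bn128 values as in EIP-196; the cost comparison in the comments restates the speaker's reasoning, it is not a benchmark):

```python
# Point-validity check y^2 = x^3 + b (mod p) for alt_bn128 (b = 3).
# Cost: three field multiplications and one comparison, i.e. the
# same order as the handful of field ops in a single EC add, and
# far below the hundreds of doublings/additions inside an EC mul.
p = 21888242871839275222246405745257275088696311157297823662689037894645226208583

def on_curve(x, y, b=3):
    return (y * y) % p == (x * x * x + b) % p

print(on_curve(1, 2))   # the alt_bn128 generator -> True
print(on_curve(1, 3))   # not on the curve -> False
```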
N
D
Q
N
N
Q
M
A
D
A
O
O
O
N
O
O
A
P
I don't think you even need a timeline for eWASM to add the precompiles in WebAssembly itself, because you're going to have to implement the precompiles in some language anyway, right? So you could, you know, go implement them in C or in whatever, or you can implement them in WebAssembly, and then it will smoothly transition to being just a normal WebAssembly, an ewasm, contract in the future. As opposed to, if you do it in C, then the transition will be less smooth, I mean.
P
When eWASM comes online, basically we're just going to implement those precompiles as, you know, WebAssembly contracts, and so if you do the work now to implement them in WebAssembly, I think it's going to be comparable to doing it, you know, in some other language, and it'll just be a little smoother in the future.
Q
Q
P
Q
You then end up with two VMs: one for the language you wrote the precompile in, and one for eWASM.
P
O
P
D
I guess, I see this as one of the main risks of implementing precompiles: their semantics is not easy, well, it could be quite complex, whereas the semantics of the opcodes should be pretty simple. So it introduces the risk of consensus failures, because if multiple clients, especially in different languages, use different libraries, which sometimes give you different results, then you have a potential consensus
D
failure. And another thing I wanted to note is that currently, as Vitalik mentioned, some part of this glue is the swapping and duping on the stack. You know, currently it's three gas when you do these operations on a stack, and I think we might see whether that's actually overpriced, you know, because I also saw that in go-ethereum, for example, there is a lot of cost in the stack allocation memory.
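A toy tally of the "glue" argument, with a hypothetical instruction mix (the 3-gas and 8-gas prices are the Yellow Paper values for SWAP/DUP/PUSH and MULMOD/ADDMOD; the sequence itself is made up for illustration):

```python
# Toy gas tally for one inner step of an on-EVM field operation:
# a single useful MULMOD (8 gas) typically needs several DUP/SWAP/
# PUSH (3 gas each) of stack shuffling around it. Prices are the
# Yellow Paper values; the instruction mix is hypothetical.
GAS = {"MULMOD": 8, "ADDMOD": 8, "DUP": 3, "SWAP": 3, "PUSH": 3}

step = ["DUP", "DUP", "PUSH", "MULMOD", "SWAP", "DUP"]

useful = sum(GAS[op] for op in step if op in ("MULMOD", "ADDMOD"))
glue = sum(GAS[op] for op in step) - useful
print(glue, useful, glue / (glue + useful))
```

In this made-up mix, well over half the gas goes to stack shuffling rather than the field arithmetic itself, which is the overhead a precompile (or cheaper SWAP/DUP) would remove.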
P
D
P
P
We don't have to, we don't have to say the clients have to have an ewasm interpreter, because you know wasm can be compiled to LLVM, I'm pretty sure. So we can just write the contract in some language that compiles to wasm, and then have the wasm bytecode say: this is the precompile that we agree the network will run, and then also compile it to, you know, a binary that can be linked into a C implementation, right? So it's not, you know, we're not losing that flexibility here, I think.
Q
P
A
Okay, so we have a few options here; we'll be talking about this more in the next meeting, definitely. The last item, which we won't even really go into that much, is skinny CREATE2. It seems like there's universal support for that. Is anyone against that? And then, yeah, because it's for better state channels and all that. And then, I think, Peter, you had a comment in the agenda about restrictions on the execution context. Let's see, is Peter gone? Peter might have had to leave. Okay, we could talk about that next time, and I
A
think Vitalik actually gave an agenda comment that addressed that anyway. So yeah, well, we'll have skinny CREATE2 as one of the options next time, and hopefully that EIP will be a little further along so we can talk about it, because I think it's pretty new. Vitalik, would you consider it pretty much done, or is there much more to be done with it?
N
Probably not much more. Okay.
A
Cool, then we'll bring that up next time. That's the end of the list, and we're over time already, so, well, in two weeks we'll reconvene and focus a lot more on which EIPs will go into Constantinople, so we can start having a timeline developed. Thanks, everybody, for coming today, and any of the other stuff we talked about, let's just take it offline to the core dev chat and Gitter. Thanks, everybody, bye! Thank you.