From YouTube: Ethereum Core Devs Meeting #31 [01/12/18]
A
Okay, no problem, let's see. The next item is the Yellow Paper. The Yellow Paper was put under an open-source license; specifically, Gavin put it under the Creative Commons free culture license. So that's really good; that's an awesome step, and I think it alleviates a lot of the problems we talked about last meeting. He also had an update in the chat log; let me pull it up.
B
What is the routine for updating it? In particular, I'm quite interested in discussing whether an EIP should include an update to the Yellow Paper as a pull request, for example, just to make maintenance that much more natural and part of the process. We don't need to discuss it now, but if we can keep it on the agenda, I would be grateful.
D
So the EIPs regarding ewasm that are on there can be disregarded. They are quite old and don't really reflect anything up to date. But we have an org on GitHub called ewasm, and more specifically there's a repo called design on it. So if anyone is interested in getting a better understanding of where ewasm is currently, they should go to github.com/ewasm/design.
D
I will tell a little anecdote about that after this, but so far it seems we are going with the sync version, at least for the sprint we are doing here in Lisbon. The main point of this sprint is to finish the C++ implementation we have, and to spin up a small internal testnet with two nodes [client names unclear in the recording] which will be able to run pure WebAssembly contracts.
D
That is in synchronous mode. And it has another part we have yet to finish: the metering in WebAssembly, which will be a contract itself, and that doesn't really rely on sync or async. Then I guess we need to make a decision on sync versus async soon enough, but probably we won't have a final say on that in the next few weeks.
C
We're prototyping using the synchronous API. The asynchronous API adds some callbacks, and the primary reason for it was if you want to use promises or fetches. So let's say the contract calls an SLOAD, but the browser doesn't have that storage location already loaded; then it has to fetch it from the network and then return it back to the wasm instance.
C
The problem is that the browser implementations, or maybe it's the browser spec, mean that the wasm instance does not play nice with promises, and so that led to an asynchronous version of the spec.
Now, there are some alternatives where the ewasm spec can remain synchronous, and then things get a little bit clunky in the browser. One of those alternatives is to use web workers and a new JavaScript feature called Atomics.wait.
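To make the shape of that alternative concrete, here is a minimal Python analogue; the names (sync_sload, async_fetch) and the storage contents are invented for illustration, and the threading.Event merely plays the role of the shared memory cell that Atomics.wait and Atomics.notify would operate on in a worker.

```python
import threading

# The "wasm instance" makes a plain synchronous call; behind the scenes,
# the asynchronous host completes the fetch on another thread and wakes
# the caller up. done.wait() stands in for Atomics.wait, done.set() for
# Atomics.notify.

def async_fetch(key, callback):
    # Stand-in for the browser asynchronously fetching a storage slot
    # from the network and invoking a callback with the result.
    def work():
        callback({"balance_slot": 42}.get(key, 0))
    threading.Thread(target=work).start()

def sync_sload(key):
    # The contract-facing side: blocks until the async side delivers.
    done = threading.Event()
    result = {}
    def deliver(value):
        result["value"] = value
        done.set()      # analogue of Atomics.notify
    async_fetch(key, deliver)
    done.wait()         # analogue of Atomics.wait
    return result["value"]

print(sync_sload("balance_slot"))  # 42
```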
A
Okay, awesome, thanks for the update. So, what I gathered from that, there are a few sub-points. Sub-point A: I talked about the EVM 2.0 EIP, which I believe is number 48, from 20 December 2015. Martin, can you go in and close that or deprecate it within the EIPs repo? Awesome, thanks. So that deals with sub-point A. Sub-point B is something about extending DUP and SWAP with DUPN and SWAPN; that is from Matt Di Ferrante.
A
So I guess I'm not up to date on where we're at for EVM 1.5 with ewasm. Is this something where there's going to be EVM 1.5 and then translation to ewasm, or is it still kind of up in the air? What's going to happen?
F
I have no idea. I haven't been on contract since September, so I've been working with Seed (I don't know his real name). He works with Yoichi on just cleaning up that proposal, and at a formal level he's writing a Lem formalization now; he's very interested in getting that part in. What will come of it, I have no idea. You know, we discussed it, and I don't think we have any particular mechanism or social practice for making a decision on this; it'll happen somehow.
A
Okay, item four: stateless client development. Someone just said they wanted to hear any updates on that. Does anyone have any updates? I think it was more that there was a blog post explaining it, and there was some discussion last time, but I don't think anyone took it upon themselves to research or perform anything further on it.
G
It's Alexey here. I just wanted to say shortly that, since Piper didn't join the meeting (I think he left a comment about that in the notes), there is some development of the stateless client, but I understand it is for sharding, because when Vitalik talks about it, he almost always talks about stateless clients and sharding together.
A
Okay, cool, so that's the update for stateless client development. The next one would be adding ECADD and ECMUL precompiles for secp256k1. That's an EIP that Matt opened, and I believe it's just an alternative to the precompiles we recently added in Byzantium. If I'm not mistaken, it's been argued that these precompiles would be a speedup. Let me see the comment that was posted inside the agenda; there's a comment from the Mobius development team.
A
They're doing ring-signature privacy solutions. Or actually, the team's not called Mobius; I believe that's their product. It's Clearmatics that's actually working on this, from what I understand. They basically argue that privacy on Ethereum is too expensive. They have a blog post about it, and I wanted to get everyone's opinion. I know that we talked about this months ago, and people had a pretty positive reaction to potentially adding this, if I remember correctly. So, does anyone have comments?
E
I had a nice email about this: the existing ECADD and ECMUL were never intended to be used for general-purpose cryptography. They were intended to be used in conjunction with the pairing, pretty much, and since the pairing precompiles are for a specific curve that is pairing-friendly, that's the reason why we chose exactly the same curve also for ECADD and ECMUL.
H
This is Silur. I don't know the plans for this ring-signature stuff after the blog post, but you know that we are working on a new ring signature with the Monero research team, called StringCT or RuffCT (we still didn't decide which name we will go with), and we also have a working Java implementation in our repository, if you are interested.
D
We don't have recent benchmarks. The last benchmark we had was regarding the SHA-256 precompile, and the speed I don't really remember, but it was a factor of ten slower: the code written in WebAssembly and metered using the WebAssembly rules was consuming probably ten times more gas than the subsidized gas we have chosen for the precompile. But that was a single example, for SHA-256, and probably we would really be better off choosing the subsidized gas values for everything, for every precompile.
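For reference, the subsidized cost being compared against is a fixed formula; a small sketch follows, where the factor of ten is the rough benchmark figure recalled above, not a constant from any spec:

```python
def sha256_precompile_gas(data_len: int) -> int:
    # Subsidized gas for the SHA-256 precompile (address 0x02) per the
    # Yellow Paper: 60 base plus 12 per 32-byte word of input.
    words = (data_len + 31) // 32
    return 60 + 12 * words

def metered_wasm_sha256_gas(data_len: int, factor: int = 10) -> int:
    # A metered-wasm cost roughly "a factor of ten" higher, per the
    # benchmark recollection above.
    return factor * sha256_precompile_gas(data_len)

print(sha256_precompile_gas(64))    # 84 (two words: 60 + 2 * 12)
print(metered_wasm_sha256_gas(64))  # 840
```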
C
There's also a point that, I mean, there's comparing the performance of wasm to native, but then there's also comparing wasm to EVM 1.0. So even if it's not as fast as a native precompile, it's probably still a lot faster than what it would be when implemented in EVM 1.0.
F
The issue is just that it needs to be done by the compiler; you have to compile at the time you load the contract. You can't do it as a JIT, unless the JIT is pretty carefully designed for the purpose, and maybe not even then; I think there's a pretty big risk of an attack vector.
D
The tricky part is, I mean, it's one thing that there need to be multiple implementations, but if I remember, for the Byzantium precompiles it took quite a while to figure out the actual subsidized gas values for them, partly of course because of the implementation differences. So that may be an important point: if we have WebAssembly metered, then we don't need to figure out those subsidized values.
D
There are reasons for WebAssembly, and probably other reasons against it, but we never properly rated which properties are more useful to us; speed is only one property out of that, and it definitely needs to be benchmarked again, because any benchmarks that were done were probably done two years ago, or a year and a half, and a lot has changed in the WebAssembly spec since.
E
Also, perhaps another generic comment on precompiles and the VM and things like that: adding a new precompile would only give us a constant speedup, or a constant reduction in costs, and if we can achieve the same or a similar thing with an improved virtual machine, it will get us much further, because an improved virtual machine gets this constant speedup for every conceivable routine. Furthermore, along with the speedup, it allows us to implement other scaling schemes like Plasma and TrueBit. Yeah.
G
Yes, so this is just to get people's opinion on something. What I noticed a couple of days ago: if you go to Etherscan and inspect the pending-transactions tab, sorted by gas price in descending order, you might notice that very often at the top there are some transactions which have been hanging there for probably minutes and hours, which pay like a thousand gwei in fees.
G
If you're familiar with Bitcoin, you probably know that there's this thing called child-pays-for-parent. I'm not sure how much benefit we can get in Ethereum from that, but I thought it might be an attack vector that you might want to close. So that's basically it.
K
The problem with that in Ethereum is that I can basically specify that I have a transaction, I give it 3 million gas, and I give it one thousand gwei in transaction fees, but the miner has no guarantee that the transaction will actually consume 3 million gas. The only guarantee the miner has is that the transaction will consume 21,000 gas, basically a plain ether transfer; anything above that is just hoped to be consumed.
K
So it means the attack vector is basically this: I push in a transaction which truly consumes 3 million gas with, for example, one wei, and then I push in another transaction which I say consumes again 3 million, with 1000 gwei, sorry. From the outside it would look like you could execute both transactions for 500 gwei each, but in reality my second transaction will immediately return.
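A quick back-of-the-envelope check of that pair, under the stated assumption that the second transaction declares 3 million gas but actually uses only the 21,000-gas minimum:

```python
GWEI = 10**9

# Each transaction is (gas_limit, gas_price, gas_actually_used), using
# the numbers from the example above.

def apparent_avg_price(txs):
    # What the pair looks like from the outside: the average gas price
    # weighted by the *declared* gas limits.
    total_fee = sum(limit * price for limit, price, _ in txs)
    total_gas = sum(limit for limit, _, _ in txs)
    return total_fee / total_gas

def actual_miner_income(txs):
    # What the miner actually collects: price times gas *used*.
    return sum(used * price for _, price, used in txs)

tx1 = (3_000_000, 1, 3_000_000)          # honestly burns all 3M gas, at 1 wei
tx2 = (3_000_000, 1000 * GWEI, 21_000)   # declares 3M but returns immediately

print(apparent_avg_price([tx1, tx2]) / GWEI)  # ~500 gwei
print(actual_miner_income([tx1, tx2]))        # 21_000_000_003_000_000 wei
```

The pair advertises an average price of about 500 gwei, while the miner's real income comes almost entirely from 21,000 gas of the expensive transaction.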
G
So basically, what I can do now as an attacker is, let's say, generate lots of pairs of transactions: the first transaction, with nonce zero, will be paying a very small fee, let's say one gwei, which will not be mined for days and days and days, and the second one, with nonce one, which is going to pay a higher fee. I can generate lots of those, and then they will be stuck in everybody's mempool for a very long time. So I just wonder if that's a problem.
K
Currently, at least in Geth, we maintain about 4,000 of the most expensive transactions. So if your transaction is too cheap, so to say, to make it into the top four thousand, then it will get evicted, and once your cheap transaction gets evicted, your expensive transaction gets evicted with it, because it's not executable anymore.
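That eviction rule can be sketched as follows; this is a toy model, not Geth's actual code, with the cap lowered to four so the effect is visible (the real cap is on the order of 4096) and every account assumed to start at nonce zero:

```python
POOL_CAP = 4

def prune(pool, cap=POOL_CAP):
    # pool: list of (sender, nonce, gas_price) tuples.
    # Keep only the `cap` most expensive transactions.
    kept = sorted(pool, key=lambda tx: tx[2], reverse=True)[:cap]
    # Repeatedly drop transactions whose same-sender predecessor nonce
    # is gone: they are no longer executable.
    changed = True
    while changed:
        nonces = {}
        for sender, nonce, _ in kept:
            nonces.setdefault(sender, set()).add(nonce)
        filtered = [tx for tx in kept
                    if tx[1] == 0 or (tx[1] - 1) in nonces.get(tx[0], set())]
        changed = len(filtered) != len(kept)
        kept = filtered
    return kept

# The attacker's cheap nonce-0 tx does not make the cut, so the
# expensive nonce-1 tx is evicted along with it:
txs = [("attacker", 0, 1), ("attacker", 1, 1000),
       ("b", 0, 500), ("c", 0, 400), ("d", 0, 300), ("e", 0, 200)]
print(prune(txs))  # [('b', 0, 500), ('c', 0, 400), ('d', 0, 300)]
```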
A
One of those being a relay network, and I know Alexey has some knowledge on this, because he's been discussing some of the detriments of having a relay network of nodes that directly connect to each other, mainly because that would create a kind of privileged network and make it less of a mesh network, which would not benefit small miners. So, Alexey, can you kind of define what people are talking about when they say a relay network, and then give your opinion on why that wouldn't be a good idea?
G
I mean, I'm not an expert in this subject, I must say; maybe I'm completely wrong in my assumptions. But from what I understand, the relay network is some kind of high-speed connection, or direct connection, between the major miners, so that they never really produce uncle blocks, because they can just route transactions to each other directly and very fast, and that would improve the uncle rate and probably bring it to zero.
G
But then, you know, the danger is that if I'm a small miner, I have to first get into that group; otherwise my profitability will be smaller, because I will have to be on the periphery, and I won't be included in this powerful ring of players who are just giving each other transactions very quickly. I don't know; at the moment it looks like, in Ethereum, people don't like this uncle rate, but I think it might be designed so that it's still a mesh network.
G
So that it doesn't actually form that ring of power for miners, which I see as sort of a good thing; but there is another side of the coin. That's why I think we need to try to improve the network without creating a second, isolated super-network, because we want things to be homogeneous. That's just my opinion; some people will say that it is all based on my own assumptions. Yeah.
K
So one of my questions, actually, is regarding what the issue actually is: is it a block propagation issue, or is it a transaction propagation issue?
A
It looks like it is a transaction propagation issue, from my understanding, and that's what Griff is describing here. So it's transaction propagation out of nodes with high throughput, like Bittrex and Infura. That's kind of the issue that has been going on. What I've seen is that there are exchanges where suddenly the clients stop relaying transactions, or they're not able to push out the transactions fast enough.
G
I'm just getting some more feedback from other channels. The flaw in my definition is that I am assuming that this relay network will only connect miners, but I think the idea was also that this relay network would include other kinds of operators, like exchanges and things like this. So maybe that changes the picture.
A
Yeah, so a lot of people just kind of make the assumption that clients like Geth and Parity aren't designed for high throughput, and that causes transaction propagation issues. I know the Geth team has been working on some of that, because there have been some fixes before for issues that were filed by exchanges, but I didn't know, Péter, if you had any other comments on that, or if you've noticed any of the same issues.
K
No; for the last quite a few releases, we haven't heard of any particular issues that would target Geth and transaction propagation itself. Perhaps one thing that we could check is how different clients propagate transactions, or how many transactions different clients keep.
K
For example, I can describe how go-ethereum handles transactions and how it propagates them, but I'm not sure how Parity does it. If there's a conflict between how the two of them do it, then, worst-case scenario, Geth filters out some transactions, Parity filters out some other transactions, and at the end only the lowest common denominator gets through. So it might also be something like that. It would be nice to explore whether there's a mismatch or some conflict, and what the different queue limits are for Parity and go-ethereum.
A
I'm back working full hours at the Ethereum Foundation, so I may have some time to start to write up some stuff about that, and there's a miner who does some videos on YouTube, named BitsBeTrippin, who sent me some really good documents outlining some of the possible ways that we could improve our processes. So I'm going to look at those more. Does anyone else have any comments on, say, when we should do Constantinople, and the ideas around having a release management process within Ethereum for hard forks?
A
Okay, when we have more people here in the next meeting, we'll probably discuss more about Constantinople. Between now and then, I might try to round up some of the EIPs that are potentially going to go into Constantinople, the most popular or most discussed one being account abstraction. I think the general consensus was that it was a very difficult problem to solve, but that it was something that people wanted.
A
Right, but Alec wanted feedback on that. Okay, so yeah, next meeting let's collect the feedback from that and have Vitalik or other researchers discuss what came of that thread, to see if we're any closer to cracking the problem of figuring out the best way to perform account abstraction, or implement account abstraction.
A
Okay, let's go ahead and get to client updates; we'll start with Geth. I saw there were some tweets from you, Péter, about some cool new features that are coming out in Geth 1.8, and some speedups that you've been able to do since our last core devs meeting, where we discussed that; if you want to elaborate.
K
Yeah, sure. Feature-wise, well, I'm not sure it necessarily affects the entire ecosystem, but there are nice features that we're trying to do; for example, a nice tracing API, so that anyone can write their own little JavaScript tracers and run them. You could also do that previously, but we put a lot of effort into cleaning that up.
K
We also put a lot of effort in now; even until now, we could generate Go APIs for contracts, so you could just plug in either the ABI or the Solidity code, and it could generate a Go wrapper around it. Now we have the same thing working for events too, or subscriptions. But these are kind of nice-to-haves, and what probably the rest of the team, at least I and a few others, will be focusing on for the next release is somehow to make Geth a lot more
K
performant. Nick had this idea way back, which we mentioned a few times on the core dev meetings, that he kind of passed on and I picked up. It's actually a really nice idea; it's kind of solid, it has a ton of corner cases, but it seems that we managed to reduce the database writes, both by quantity and size-wise, by about 60%.
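The effect of that kind of idea can be illustrated with a toy write buffer; this is purely illustrative, not Geth's implementation. Dirty trie nodes are held in memory and flushed periodically, so nodes that are overwritten or deleted between flushes never reach the database, which is where the reduction in write quantity comes from.

```python
class BufferedNodeStore:
    def __init__(self):
        self.buffer = {}       # dirty nodes held in memory
        self.disk_writes = 0   # counts actual database writes

    def put(self, key, node):
        self.buffer[key] = node        # overwrites coalesce in memory

    def delete(self, key):
        self.buffer.pop(key, None)     # dead before flush: never written

    def flush(self):
        self.disk_writes += len(self.buffer)
        self.buffer.clear()

store = BufferedNodeStore()
for block in range(100):
    store.put(f"node-{block}", b"...")  # every block creates one node
    if block % 2:                       # half the nodes die a block later
        store.delete(f"node-{block - 1}")
store.flush()
print(store.disk_writes)  # 50, versus 100 writes without buffering
```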
K
We kind of tried to do that a few times. Usually the limit is fast sync: the way fast sync was designed, it's really horrible from the perspective of garbage collection. So we were actually thinking about figuring out how we could do garbage collection properly, or how we could speed up the database properly, and then just roll out a brand-new synchronization to aid it. But that one would probably affect the network a bit.
L
Hudson, yeah, regarding your previous question about pending transactions: I just double-checked with the code, and there is some different logic that EthereumJ implements. If a transaction hasn't been included in a certain number of blocks, it's just evicted from the pool, so it doesn't matter; it doesn't look at the reason why it has not been included in a block. So yeah, that's how it copes with that. Okay, and some updates from us: we have started to work on the Casper implementation, and on the other hand, we are still fighting with performance improvements.
L
We have met some difficulties that we hadn't expected, and we plan to release the database improvements first; in the next release after that, we are planning to work on reducing the memory footprint and improving processing speed. I'm afraid to make an estimation regarding the database release, but this is the number one priority for our team. So, as soon as we manage that... at least, that's all from us. Thanks.
A
Okay, sounds good. Thank you. Let me just check to see what clients I'm missing. I don't think anyone from Py-EVM is here, but I know that Piper left an update, so I'll just read that. Where did he put that... He said that implementation of full node sync in Py-EVM is underway; the foundation for running Py-EVM as a stateless client is in place; an implementation of a simplified ETH Gas Station pricing algorithm is in progress for web3.py; and an alpha release of the Py-EVM-based client is happening.
A
He also said that the research team may or may not be there, but sharding and research development continues, which I know they're helping a little bit with; they're doing a little bit in collaboration with the research team. So that is the Py-EVM update. I believe that is all of the clients. Are there any other clients or major projects that want to give an update?
G
There is some data that I've collected. So, first of all, I would say that my plan with this is, first of all, it's experimental; there are some optimizations I can do, of course. When I did the fork of go-ethereum, I started making changes and broke a bunch of stuff, like the light client and fast sync; they absolutely don't work at the moment.
G
But I don't worry about this, because I'm just experimenting. So, there are a couple of things that I wanted to analyze, and I have some data now, if you saw my list of the improvements that I want to check out. One of them was reducing the state size on disk, and the idea there was that, the way the state is stored on disk, there is a lot of repetition; lots of hashes are written multiple times. So I'm currently running the analysis on the full state.
G
The hope is that it never actually touches the disk, so that everything stays in memory; I'm trying to push it to the limit. And what I realized when I analyzed it is how many nodes are actually, like, fully occupied and not fully occupied. So if you look at all the nodes in the hexary trie, about half of them are the actual leaves, the values.
G
Usually they connect through some so-called short nodes, and the other half is the full nodes, which are basically, like, arrays of 16 elements. If you imagine this visually, at the bottom there would be all these values, and then on top of them there would be, like, a sort of pyramid built up. The weight of this pyramid is approximately the same as the weight of the bottom of it.
G
And if I look at the trie which corresponds to about block 2 million, then there are about 140,000 full nodes, these 17-cell elements, and about 151,000 value nodes. The interesting bit is that out of the 140K full nodes, there are 44,000 nodes, which is a third, with only two children. This was my suspicion before. And then there's a similar amount of nodes with three children. So here there is a potential for improvement, by, you know...
G
You know, especially storing these nodes with only two children in a more compressed way, so that we can spend even more memory on holding state. So my hope, at least for this initial thing, is basically to try to fly through, like, the first 4 million blocks, if it's possible, just using memory, without even touching the disk; only for writes, of course, but not for reads. So that's what I'm trying to achieve at this moment, and that's it for me.
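As a rough sense of what the two-child compression could buy, here is back-of-the-envelope arithmetic over the figures quoted above; the byte sizes are assumptions for illustration (32-byte child references, 17 slots per branch node), not measurements from the experiment:

```python
SLOT = 32          # bytes per child reference (a hash)
BRANCH_SLOTS = 17  # 16 children plus a value slot in a hexary trie node

def naive_size(n_nodes):
    # Every full node stored as a complete 17-slot array.
    return n_nodes * BRANCH_SLOTS * SLOT

def two_child_size(n_nodes):
    # A two-child node stored compactly: two references plus one
    # position byte each (an assumed compact encoding).
    return n_nodes * 2 * (SLOT + 1)

full_nodes = 140_000      # full nodes at around block 2 million
two_child_nodes = 44_000  # the third of them with only two children

saved = naive_size(two_child_nodes) - two_child_size(two_child_nodes)
print(saved)                           # 21_032_000 bytes saved
print(saved / naive_size(full_nodes))  # as a fraction of all branch bytes
```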
A
I have "client improvements to alleviate issues" on the agenda, but we kind of went over that with the client updates and some of our discussion on what you just described. Someone also wanted to add to the agenda (this is Griff again) updates on scaling: will we see any scaling improvement from Constantinople?
A
We don't have finalized EIPs yet for Constantinople, so that's not really a question we can answer concretely, but he also asked if there are going to be improvements because of Casper FFG. So, let's see: guesses on the time frame for scaling improvements? Because there's a lot of network congestion right now, and people are finding that in order to get their transaction in, you know, under a few minutes, they need to pay high fees, so people are wondering about scaling.
K
I don't really see why Casper would help. Basically, what we need for scaling is more than that; currently we're kind of limited on the database and processing side, due to being limited by the database. So whether we run proof-of-work or Casper just kind of boils down to wasting or not wasting GPUs on it, but I don't think it matters much from a block processing standpoint. Yeah.
C
Since disk I/O is the bottleneck, and it's still not clear how much room we have to optimize (you know, by optimizing the database, how much more gas per second the average or even a high-end computer will be able to process), I think there's still plenty of room, at least until we verifiably hit the limit on disk I/O, by optimizing the database. That's the most important thing right now.
C
And there are some really interesting threads on the ethresear.ch forum about the state tree, so the Merkle tree format that will be used, you know, for sharding. There are some interesting ideas on asynchronous accumulators, or Merkle mountain ranges, that sound very promising. So there are a lot of exciting proposals being discussed.
A
Someone also mentioned in a comment that they wanted to talk about the decision process for EIPs. That is something that I also want to work on, hopefully in Q1. Specifically, what that means is updating EIP-1 a little bit more, because I know there's been a lot of confusion about where ERCs fit in, and Casey and Nick and others, Greg too, have talked before about doing improvements to the EIP process, so it definitely can use more improvements.
A
We've added an EIP editor, Nick Savers, on GitHub; he's been doing a great job cleaning up the EIPs. Yoichi has also been added as an editor, and he's been doing some cleanup of EIPs too, so we've been kind of trudging through that a bit, and I hope to dedicate more time personally to it in order to get some of that done; which I've said before, but this time I actually feel like I might have the time. So that's...
A
That's hopefully going to be good. As far as the decision process goes, it's community consensus, and then the editors getting enough time to go in and actually merge the commits; that's kind of how it goes down, which EIPs go in and which ones don't. But it should be noted that if you have an ERC or an EIP that you've created, it's good to actually start implementing it even before it gets officially approved, because part of the criteria for certain EIPs is having example implementations.
A
So, for example, the ERC-20 token was implemented well before the EIP was actually finalized, and I know that CryptoKitties, I think, had EIP-721, or something around that number, for non-fungible tokens, implemented before the EIP was finalized. So having an implementation like that in the wild is very positive for your EIP getting approved.
A
Okay, well, thanks everyone for joining; we'll meet back up in two weeks. Hopefully we'll have more people around, and some representation from some of the research team and Parity and others, so that we can get more updates and dive into some of these issues a little bit deeper. Thanks, everybody; see you in two weeks.