From YouTube: Ethereum 1.x Afternoon [Day 1]
B: So the main differences are: if you've seen the first proposal, it had this thing called linear cross-contract storage. This has now been removed, because we currently believe we can emulate it using the new CREATE2 opcode, which is supposed to arrive in Constantinople, and I also published an example of a smart contract which implements that approach for ERC-20 tokens. And then the priority queue for eviction has been removed.
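The cross-contract storage emulation mentioned here hinges on CREATE2 deriving a contract's address deterministically from the deployer, a salt, and the init code, so storage can be parked in contracts at predictable addresses. A minimal sketch of that derivation, using `hashlib.sha256` as a stand-in for keccak256 (which is not in the Python standard library):

```python
import hashlib

def create2_address(sender: bytes, salt: bytes, init_code: bytes) -> bytes:
    """CREATE2-style address: last 20 bytes of
    hash(0xff ++ sender ++ salt ++ hash(init_code)).
    sha256 stands in for keccak256 here."""
    assert len(sender) == 20 and len(salt) == 32
    code_hash = hashlib.sha256(init_code).digest()
    digest = hashlib.sha256(b"\xff" + sender + salt + code_hash).digest()
    return digest[-20:]
```

Because the address depends only on these inputs, the same (sender, salt, code) always lands at the same address, which is what makes the linear cross-contract storage emulatable.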
B: So the main reason why it was there in the first place: my idea was not to give miners any extra powers beyond what they have at the moment, and one of the powers we would be giving them here is to control the eviction. But I got myself convinced that it's okay to give them that power, because of the sort of censorship resistance of Ethereum which was demonstrated in 2016, after the attempted soft fork following the DAO attack.
B: Then another change is that we are introducing the calculation of the storage size before, and not during, the introduction of rent. Then there are the lockups, which I will look into further. We only discuss temporal replay protection here, and the rent price is fixed in this proposal, not floating as in the one before, but we can introduce the floating one if it's really necessary.
B: This proposal was also reorganized. The previous one had six steps, but this one is more of a dependency graph, and this diagram essentially outlines what these changes could be, from A to M, with their dependencies explained. The solid lines show what has to happen in two distinct hard forks, and the dashed lines show something which could happen in one hard fork.
B: We just discussed in the breakout session what a potential division into forks could be, like: how quickly can we do this? If we really push it, then in the first fork you would get the two easy changes, and then you would get up to the rent on the accounts and the eviction of dust. So after the second hard fork we would see some improvements in the state size by getting the dust accounts removed. We would also introduce the storage lockups.
B: So, let's see. Just to quickly remind you why we need the replay protection: one of the changes is essentially the eviction of dust accounts. By a dust account we mean a non-contract account which has zero balance. If we remove those, then the problem is that when such an account gets recreated by sending some ETH to it, the nonce gets reset to zero, and previously executed transactions become valid again.
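The replay hazard just described can be shown in a few lines. This is a toy model, not a client implementation: eviction plus recreation resets the nonce, so an already-executed transaction validates a second time.

```python
class Account:
    """Toy account illustrating the replay hazard of evicting dust accounts:
    recreation resets the nonce, so an old signed transaction validates again."""
    def __init__(self):
        self.nonce = 0
        self.balance = 0

    def execute(self, tx_nonce: int) -> bool:
        # A transaction is valid only if its nonce matches the account nonce.
        if tx_nonce != self.nonce:
            return False
        self.nonce += 1
        return True
```

While the account lives, its incrementing nonce blocks replay; deleting and recreating the account removes that protection.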
B: This change has been proposed by Martin from the go-ethereum team. It basically means that we're adding an optional field into the transaction, which is called valid-until, and so the users will be in charge of protecting themselves: this transaction will only last for, say, three minutes, or whatever they choose. In the first change, change A, it's optional, so that gives the ecosystem time to get used to this change, get it implemented, and things like that.
B: In change B it becomes mandatory, so everybody has to start thinking about what they're going to put in there. They can put an infinite amount if they want the old behavior, but essentially they will be forced to choose a value, and if they choose a value such that the validity ends before the eviction, then they will be protected from replay. There were other proposals which were based on changing the nonce after the account is recreated.
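The valid-until idea reduces to one comparison at inclusion time. A sketch, with an illustrative field name and `None` modeling the "infinite" choice that keeps today's behavior; the proposal's exact encoding may differ:

```python
def includable(tx: dict, current_block: int) -> bool:
    """A transaction carrying a (hypothetical) 'valid_until' field may only be
    included while current_block <= valid_until; None models the infinite
    choice. If valid_until is earlier than the eviction block, the transaction
    expires before the nonce can ever be reset, so it cannot be replayed."""
    return tx["valid_until"] is None or current_block <= tx["valid_until"]
```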
B
There
are
some
advantages
and
disadvantages
of
this,
but
for
the
concreteness,
I
just
left
this
proposal
in
in
here
so
change
see.
Is
that
so
what
I
call
the
net
contract
size,
accounting?
So
for
the
things
like
lockups
and
for
the
rent
we
require
to
have
access
to
the
accurate
number
of
storage
items
which
exist
within
the
contract,
which
we
don't
have
now
in
the
in
the
protocol.
So
introduction
of
such
a
account
needs
to
be
done
in
two
stages
because
they
did
the
the
because
of
the
blockchain
keeps
moving.
B: We cannot simply introduce it from a certain block, so we first introduce the net accounting, where each SSTORE now starts increasing the counter when an item is allocated and decreasing the counter when it's deallocated. And here we introduce something like a huge number. The reason we need this huge number: first of all, we don't want to have the storage size as a signed integer.
B
We
want
it
unsigned,
but
the
second
reason
is
that
later
on,
we
want
to
distinguish
between
the
contracts
where
the
storage
size
hasn't
been
even
introduced,
because
the
contract
who
hasn't
been
modified
and
the
cases
where
it
has
been
would
have
modified.
Then
the
the
the
the
accurate
contract
size
has
been
introduced,
so
I
think
I'm
not
going
to
go
into
the
much
details
of
here
is
just
I
will
show
you
what
the
contract
were
to
change
D
means.
So
when
we
go
from
a
net
to
gross
accounting
of
the
size.
B: Essentially, we split this into two changes. In block C we start the net accounting, and then we know that everything after block C is accounted. So the only thing we need to do to get the accurate count is to take the sizes at block C, and we can do that because we can compute it offline and include it in every client implementation.
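The two-stage scheme can be sketched as: an offline snapshot at block C shipped with the client, an on-chain net counter from block C onward, and a huge offset keeping that counter unsigned. All constants and addresses here are made up for illustration:

```python
# Illustrative offline snapshot of per-contract item counts at block C,
# computed once and shipped with every client.
GROSS_AT_C = {"0xaa": 7, "0xbb": 0}

HUGE_NUMBER = 2 ** 63  # offset keeping the on-chain counter unsigned

def stored_counter(net_since_c: int) -> int:
    # The on-chain value: net change since block C (possibly negative,
    # after deallocations) offset into the unsigned range.
    return HUGE_NUMBER + net_since_c

def accurate_size(contract: str, stored: int) -> int:
    # Accurate item count = offline snapshot at C + net change since C.
    return GROSS_AT_C[contract] + (stored - HUGE_NUMBER)
```

The offset also leaves room to distinguish "counter never initialised" from "counter present", the second reason given for the huge number.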
B: What was the size of each contract at time C? Then, by adding these two together, the net count and the gross count at time C, we get an accurate count dynamically. The huge number here is used, as I said, to be able to use unsigned integers, and also to distinguish the different cases. The important bit here is the notion of observable storage size, which will be used later on. We also try not to introduce any more transaction churn here.
B: We do not increase the number of modifications of the state simply for the sake of the accounting; the storage size is only introduced or changed when there are other reasons to modify the account. That's why we need this notion of what the observed value is: if there is a contract which never changed after block C, we still want to see the correct size of it.
B
So
if
this
is
the
observability
rules,
which
also
include
this
huge
number,
so
in
change
eb
introduced
the
storage
lockups.
So
the
idea
of
the
lockups
is
a.
We
wouldn't
introduce
it
if
we
were
starting
from
scratch,
if
we
didn't
have
any
existing
contracts,
if
we
simply
had
zero,
nothing
like
in
theorem
2.0,
you
probably
would
just
simply
introduce
rent,
but
because
we
have
some
contracts
in
the
state
that
some
people
might
want
to
keep
using
without
necessarily
rewriting
them
completely.
B: So we give them that option, although I personally think they would still rewrite them anyway. This actually also applies to recovery: I think we only need these features to give people the option, but given the cost they will probably decide to start from scratch anyway. What the lockups do, essentially (I know that Andre doesn't like this analogy), is this: imagine that each contract is kind of like a glass.
B
One
item
is,
is
like
one
centimeter
long,
and
so,
when
your
so
the
size
of
that
glass,
the
height
of
this
glasses,
is
that
is
the
storage
size
and
then
what
you
do
is
that
you
can
increase
that
size
or
decrease
it
by
free
and
storage
or
increasing
the
storage.
But
you
also
have
to
pour
some
water
in
it
in
it.
So
pouring
increasing
the
size
of
the
glass
requires
to
pouring
a
little
bit
of
water
like
to
fill
it
up
so
for
the
new
contracts.
B
It
means
that
whenever
you
add
to
the
contract
storage,
but
you
always
have
to
fill
it
up,
so
you
basically
have
to
keep
it
full.
We
keep
this
glass
always
full.
So
when
you
reduce
the
size
of
the
glass,
the
excess
water
gets
back
to
you.
So
you
can
release
those
funds.
Imagine
that
the
liquid
is
actually
the
funds
and
but
but
in
the
case
where
you
had
the
accounts,
which
were
full
previously.
B: Let's say, after we introduce the lockups, we have some accounts which were basically like empty glasses before. What are we going to do with those? We introduce rules where, when somebody changes a value inside, even without changing the size, they still have to contribute a little bit of liquid.
B: The semantics of this change are very similar to what I'm going to describe. The semantics depend on three values: original, current, and new value. Original is the value of the storage item before the transaction happened; current is the value of the storage item before this SSTORE operation happened; and value is what we're trying to set. Without loss of generality, we can think only about 0, 1, and 2, where 1 and 2 are any values which are distinct from each other.
B
So
when
we
describe
this
state
transition,
we
will
just
think
about
four
different
states,
something
I
call
ground
state
we
can
like
green
thing
is
when
we
allocated
new
storage,
so
we
went
from
0
to
1
or
2,
or
we
can
have
a
removed
storage
item
where
we
can't
go
from
1,
1
&
2
to
0,
or
we
can
have
something
which
just
changed.
The
values
which
is
the
orange
one
and
so
the
so
each
state
transition
is
essentially
going
to
be
the
arrow
that
goes
from
one
of
these
circles
to
another
circle.
B
So,
for
example,
we
go
from
the
ground
state
to
the
green
state,
which
means
that
we
are
located
a
new
new
value
or,
and
so
how
we
decide
where
the
the
arrow
starts
and
where
it
ends.
It's
very
simple
table
here.
So
the
the
the
place
where
the
arrow
starts
only
big
deep,
two
things
an
original
and
current
and
then
using
this
table
you
can
figure
it
out
and
in
the
same
exact
table.
But
are
you
looking
at
two
values
of
original
and
new
value?
B
You
can
figure
out
where
the
value
of
the
whether
the
arrow
points
to
and
so
for
example.
These
are
the
the
three
examples
of
regional,
current
and
value
and
what
are
the
state
transitions?
So
so?
What
I'm
trying
to
say
here
is
it's
very
easy
to
describe
and
you
don't
actually
need
to
seven
to
twenty
seven
possible
variations,
but
much
fewer
than
that,
because
so
here
you've
got
nine
and
nine
I.
B
Essentially,
that's
is
the
cost
you
pay
for.
You
know
like
if
the
contract
is
really
important
and
the
current
users
are
prepared
to
do
that.
Then
it's
fine
and
we
just
let
it
go.
And
then
if
the
contract
is
new
or
it's
already
been
filled
up
by
the
users,
then
it
behaves
this
way.
So
whenever
you
go,
you
create
a
new
item.
B: for example, when you free up an item, it doesn't give you back the ether; it just keeps it, so it's kind of more greedy. And then, when you change a value or add a new value, it will take ether from you anyway, even if you're basically changing somebody else's value. This particular thing removes the possibility of dust attacks, for example, because once the lockup is introduced there's no point in adding storage to anybody's contract, as I was explaining.
B
Why,
later
on?
So
here's
the
example
of
let's
say
that
we
are
in
the
glass
in
a
half-full
glass
and
that
was
original
equals.
One
current
equals
to
Val
equals
one,
and
so
we
can
figure
out
the
transition
and
and
looking
at
the
rules
before
we
can
figure
out
what
the
semantics
should
be
in
this
case.
B
So
and
what
I
want
to
say
here
is
that
you
might
have
noticed
that
I
didn't
try
to
piggyback
on
the
gas
here,
because
that's
probably
one
of
the
intuitively
one
of
the
first
thing
you
want
to
do
is
like
oh
well,
can't
you
just
use
the
gas
mechanisms
for
that.
Well,
the
problem
is
that
the
gas
behaves
differently
from
lockups.
So
when
transactions
revert,
the
gas
still
gets
spent,
it
doesn't
get
refunded,
but
with
the
lockups
and
with
the
releases,
they
need
to
be
reverted
when
the
transaction
is
reverted
right.
B
Think
because
now
it
might
the
amount
of
ether
that
will
be
deducted
from
TX
origin
is
only
limited
by
potential
number
of
s
stores
it
can
do
in
a
transaction,
and
this
balance,
so
we
might
need
to
introduce
a
new
field
and
in
a
transaction
to
say
like
this
is
how
the
maximum
lookups
I
am
prepared
to
do.
I
haven't
put
in
proposals
yet
because
this,
this
kind
of
was
a
bit
late,
late
thought
so
then
proposal
number
so
then
F
is
a
fixed
rent
on
accounts.
B
So
in
a
previous
version
of
statement
proposal
we
were
only
introducing
it
on
the
contract
accounts,
but
here
I'm
introducing
it
for
all
both
contracts
and
the
non
contract
accounts.
Just
rent
important
bit
is
no
eviction
here,
just
rent,
so
I
separated
eviction
from
the
rent
in
all
cases,
because
they,
so
what
the
rent
allows
you
to
do
is
simply
to
reduce
the
balance
but
doesn't
doesn't
decide
whether
the
account
will
be
removed
or
not,
and
so
here,
in
order
to
to
to
support
this,
we
need
to
constant
account.
B
Rent
is
how
much
you
charge
for
one
account
per
block
and
the
code
rent
is
how
much
you
charge
per
one
byte
of
quote
per
block,
and
this
could
be.
This
would
be
different
values
because
the
code
is
kind
of
more
stable
and
it
probably
it
probably
doesn't
cause
as
much
performance
issues
as
the
as
they
account
itself
and
obviously,
pre-compose
are
exempt
for
some
of
these
reasons,
so
this
is
how
it
works.
So
this
is
how
the
so,
whenever
the
account
any
account,
gets
modified.
B
It
recalculates
the
rent,
balance
and
ran
block
and
potentially
also
reduces
the
balance,
but
this
particular
operation
does
not
evict.
So
this
is
only
the
reducing
the
potentially
reducing
the
balance
or
rent
balance,
and
another
important
piece
here
is
that
the
rent
balance
could
become
negative
right
as
a
result
of
this.
So
if
there's
no
more
balance
left,
you
just
go
basically
start
accumulating
negative
rent
balance,
but
because
this
change
doesn't
include
eviction,
you
know
it's
just
gonna
be
run
negative.
Rent
balance.
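The lazy settlement just described can be sketched as one function that runs only when the account is modified for other reasons. The constant is illustrative, not a proposed value:

```python
ACCOUNT_RENT = 4  # illustrative per-account, per-block charge

def on_modification(account: dict, current_block: int) -> None:
    """Lazily settle rent whenever the account is modified anyway:
    update rent_balance and rent_block, never evict. The rent balance
    is allowed to go negative, exactly as described above."""
    blocks = current_block - account["rent_block"]
    account["rent_balance"] -= blocks * ACCOUNT_RENT
    account["rent_block"] = current_block
```

Because settlement happens only on modification, an untouched account accrues debt implicitly; the recorded rent_block says how far back the next settlement must reach.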
B
So
then,
during
the
proof-of-concept
implementation
adrian
figured
out
that
there's
something
needs
to
be
clarified
where
exactly
the
calculation
of
rent
happens.
And
then
I
looked
at
the
yellow
paper
and
then
it
states
that
during
the
block
finalization
there's
this
four
things
happening
and
illogically.
B
So
then,
after
we've
done
with
rent,
so
we
can
introduce
the
eviction.
So
here
we
only
in
this
particular
change.
We
only
evict
non
contracts.
And
again
this
has
been
asked
by
the
proof-of-concept
to
clarify
what
do
you
mean
by
non
contract,
accounts
and
C's
here
I'm
clarifying?
This
is
the
specific
code
hash
and
if
the
code
hash
is
equals
that
and
a
balance
equals
zero
way,
then
it
it's.
It's
deemed
to
be
dust
account
and
will
be
evicted
as
a
result
of
this
change,
and
this
is
how
it
happens.
B
So
the
eviction
check
is
performed
at
the
end
of
transaction
for
all
the
accounts
that
were
touched
during
this
transaction,
not
necessarily
modified
by
touched
by
touching
means
that
you
read
their
balance
or
you
try
to
send
zero
ether
to
it.
So
then
this
account
is
touched.
So
in
the
end
of
the
transaction,
you
have
this
loop,
which
goes
through
all
touched
account
and
figures
out
whether
they
need
to
be
evicted.
So
if
the,
if
the
account
is
not
going
to
be
evicted,
then
it's
not
modified
so
this.
B
This
is
why,
in
this
diagram,
we
don't
modify
any
actually
I
need
to
fix
this,
because
I
think
this.
This
implies
that
the
rent
balance
in
rent
block
is
modified,
but
actually
I
need
to
create
a
that's
a
good
point.
So
essentially
the
thing
is
that
it
does
not
introduce
the
change
unless
the
county's
it
gets
evicted.
So
then
change
H
is
where
we
start
charging
the
rent
for
the
storage
and
as
I.
B
So
now
you
would
understand
why
I
had
needed
lockups
for
the
existing
accounts,
because
the
rent
is
actually
charged
not
on
the
entire
storage
size,
but
on
difference
between
storage,
size
and
lockups.
So
it
means
that
if
the
lockups
equals
storage
size
then
there's
no
rent
on
storage,
the
only
rent
will
be
charged
in
this
cases
for
actual
account
and
the
code
so
which
means
that
any
new
account
new
contracts
created
after
lookups
will
pay
constant,
constant,
rent
and
but
everywhere,
of
all
the
empty
contracts
which
existed
before
and
nobody
cares
about.
B
They
will
pay
a
rent
on
a
full
storage
size
and
they
will
be
evicted
pretty
quickly,
but
it
is
possible.
If
somebody
really
cares
about
their
contracts,
they
can
fill
them
up
with
their
ether
and
prevent
them
from
being
kind
of
very
quickly.
Decayed
and
again,
we
need
to
introduce
the
third
constant
here
for
the
storage
so
yeah.
B
It
changes
the
formula
about
how
to
calculate
the
rent
due,
and
this
also
come
from
a
proof
of
concept
that
we
need
to
specify
here,
that
the
value
of
storage,
size,
lookups
and
code
size
need
to
be
taken
at
the
beginning
of
the
current
block,
because
otherwise
you
will
be
overcharging
or
under
charging
so
because,
because
basically,
the
calculation
of
rent
only
happens
when
an
account
is
modified,
so
it
could
be
like
for
last
hundred
blocks.
It
has
not
been
modified,
so
now
it's
finally
modified
and
you
need
to
calculate
the
rent.
B
You
shouldn't
be
calculating
it
on
something
which
is
currently
there.
So
you
need
to
say
what
was
the
state
of
the
beginning
of
the
block,
and
this
is
going
to
be
the
determining
the
charge
for
the
last
hundred
blocks,
and
this
is
where
I
refer
to
the
notion,
as
observability
of
which
describe
didn't
change
d.
B
So
now
we
come
into
the
eviction
and
recovery
of
the
contracts,
so
this
was
basically
copy
pasted
from
the
previous
proposal.
It
doesn't
really
change
a
lot.
So
the
only
thing
which
came
up
with
the
proof
of
concept
is
that
we
need
to
clearly
define
how
distinguish
hash
stubs
from
the
contracts
themselves
it's
possible
to
do,
but
it
needs
to
I,
haven't
clarified
it
yet,
but
but
it's
possible
to
do
so.
B
That's
what
I
was
going
to
say-
and
this
is
the
graphical
representation
of
how
the
restoration
works,
and
so
the
main
idea
here
is
that
so,
when
you
plan
to
restore
the
your
account,
so
your
contract,
for
example,
zoo
multi-sig,
that
the
sudden
LED
accidentally
got
evicted.
So
you
need
to
figure
out
what
was
it?
Storage
like
at
the
time
of
eviction,
recreate
it
the
same
exact
storage
in
the
new
contract?
You
because
you
can
code
up
the
new
contract,
the
way
that
you
simply
accept
the
storage
item
from
a
certain
account.
B
So
you
recreate
the
storage
items.
Then
you
create
a
second
contract
which
will
contain
exactly
the
same
code
as
there
is
the
the
one
that
you
need
it.
So
essentially
you
need
two
new
contracts.
One
of
them
will
contain
the
exactly
same
storages
that
you're
ready
to
contract
had
the
second
one
who
had
exactly
the
same
code
that
your
evicted
a
contract
had,
and
then
you
call
this
thing
called
restore
and
it
basically
merges
them
together
and
restores
your
contract
at
the
same
address
as
it
used
to
be.
B: This one is for the potential library contracts. If nobody is looking after them and they want to charge for their own existence, they can introduce this call fee. If it's a popular library that everybody is calling, they might be able to kind of immortalize themselves if they charge a small amount of ether to each caller. The important bit is that this charge doesn't go to the balance but to the rent balance, so it cannot be recovered.
B
It
can
only
be
used
to
prolong
the
life
of
this
contract,
and-
and
so
eventually
you
can.
This
contract
can
collect
enough
rent
to
last
for
a
hundred
years
or
something
like
that,
and
then
everybody
will
be
sure
that
this
is
gonna,
be
there
forever
practically
and
then
last,
oh,
no.
Actually,
the
second-last
is
that
if
you
have
a,
if
you
have
a
contract
which
doesn't
have
any
way
of
accepting
easter,
but
you
still
wanted
to
keep
it
around,
so
you
can
add
the
easter
directly
to
their
rent
balance
rather
than
to
the
balance.
B
By
using
this
new
up
code
and
the
last
one
big
after
we've
introduced
the
lockups,
the
locals
basically
give
much
more
straightforward
and
better
reward
for
the
clear
in
storage
and
and
if
we
also
so
basically,
we
can
remove
the
storage
refunds
at
all.
And
if
you,
if
you,
if
you
decide
to
go
even
further
and
then
say,
remove
the
refund
for
the
for
the
service
truck,
then
we
can
just
completely
remove
the
concept
of
refund
altogether,
which
just
simplified
the
protocol.
C: Hi everyone. Just a quick rundown of what we're working on, codenamed Mustekala: we're building a light client, primarily for browsers and environments with reduced networking capabilities and resources in general. It's built around this concept of slices, which are subtrees of the Merkle trie, both state and storage. Then we have a network called Kitsunet, which is a network of light clients. Traditional light clients right now, at least, connect to a node and download what they need.
C
It
relies
on
on
full
nodes
to
provide
access
to
the
state,
so
there's
gonna
be
some
are
bicycles
to
retrieve
State.
So
what
are
slices
again,
they're
their
miracle
sub
trees.
They
consist
of
some
parts
I.
Can
you
can
show
that
in
a
minute?
But
basically
it's
the
three
nodes,
the
branch
nodes
and
the
Leafs,
which
are
the
counts.
There's
a
VM
code
in
there
as
well
and
and
the
stem
and
the
cool
thing
is
that
we
can
identify
them
by.
C
We
call
the
stamp
pad
and
the
depth,
which
is
basically
the
four
navels
of
a
key
in
the
in
the
merkel
pressure
tree,
and
then
the
depth
identifies
how
how
large
that
chunk
is
going
to
be,
and
we
can
also
use
the
stage
or
the
storage
root
to
uniquely
identify
a
slice.
So
we
can,
we
can
use
just
the
stem
path
and
the
depth
chart
of
the
four
enables
and
like
the
Deaf,
it
could
be
anything
ten,
for
example,
oh
sure,
the
yes.
C
So
so
we
can
identify
the
slice
by
the
stem
pad
and
depth
which
basically
allows
us
to
grab
it
grab
it.
And
then,
if
you
have
something
like
pops
up
or
multi-class,
or
something
like
that,
you
can
use
those
as
identifiers
to
to
create
subscriptions,
and
then
you
can
near-real-time.
This
could
propagate
changes
to
to
some
subset
of
those
of
those
slices.
C
It's
again
p2p
notes.
They
see
this
state
across
across
this
disc
lines,
build
with
lip
to
be
a
sure
that
people
know
this,
but
it
basically
allows
you
to
run
peer-to-peer
ordinate,
so
some
semblance
of
peer-to-peer
in
a
browser
again
we're
using
the
pops
up
for
for
near
real-time
data
propagation
right
now,
there's
up
which
is
what's
available
on
only
p2p.
But
work
is
being
done
on
creating
something
a
little
bit
more
performant
and
yeah.
C
So
again,
I
kind
of
touched
on
that
already
data
propagation
is
being
built
on
top
of
pops.
Up
slices
can
be
identified
and
can
be
grouped.
They
can
be
created
as
topics
note
subscribe
to
those
topics
and
they
get
updates
as
soon
as
the
new
block
is
generated.
There's
a
slides
me
and
extract
it
and
propagated
for
the
network.
C
A
client
is
only
interested
in
on
a
subset
of
this,
of
the
slices
which
are,
they
can
be
based
on
user
account,
the
sum
of
the
absence,
the
tokens
that
are
being
interacted
with,
and
then
we
can
also
take
advantage
of
a
large
amount
of
this
discounts
in
the
network
and
basically
say
well.
You
know
if
a
client
can
dedicate
20
30
50
megabytes
to
store
a
portion
of
the
of
the
miracle
state.
Then,
if
we
have
a
million
clients,
then
we
can,
we
can
provide.
C: We can base it around some notion of a checkpoint. For example, we can hardcode the checkpoint in the client itself and distribute updated clients every now and then to refresh that checkpoint. Using this checkpoint, we can guarantee that the slices we're extracting are correct, because they're based on the state root or the storage root, so they're basically timestamped.
B: So let's start with the gas metering. I'm not going to read out the slide, but I'll just give you comments on it, and maybe you can describe what you're thinking; you have a good microphone. Okay, maybe the slide; just tell me if this isn't correct or anything like that. All right, so I would explain to the audience: as far as I understand from last time, there were two different methods of doing gas metering in ewasm. The first is injection, and the second is upper-bound estimation.
D: Yes, and there should be a third one. What is the third one? Exact: not upper-bound, but exact calculation. So, no matter what the input, the gas calculation is a function of the input and the current state. That might be intractable for a lot of contracts, but it might be reasonable for precompiles.
D
But
yes,
the
whole
question
of
metering
is
a
very
interesting
one.
Obviously,
pre-compiled
zanoits
I'm
contracts
have
to
be
metered.
You
have
to
pay
gas
for
them,
currently
its
benchmark
based
and
yes,
the
injection
way.
So
this
is
one
level
currently
like
EVM
excuse,
opcodes
and
each
opcode
uses
some
guess.
We
have
an
optimization
I,
don't
know
if
it's
the
best
one,
but
at
each
basic
block,
which
means
a
block
of
app
codes
that
will
they're
at
guaranteed
to
execute
in
sequence,
there's
no
branches
into
or
out
of
that
block
before
it.
D
We
inject
some
code.
That
says
you
know,
we
count
the
gas
used
by
that
block,
and
then
we
inject
some
code
into
the
web
assembly
to
use
that
amount
of
gas
at
the
beginning
of
the
block.
That's
what
injection
means
and
the
automatic
upper
bound.
My
understanding
of
it
may
be
your
your.
My
interpretation
of
it
is
a
little
different
means.
We
you
give
us
an
awesome
contract
and
we
give
you
an
upper
bound
of
the
gas
use.
D: So we would have to restrict things, and the other one would be even worse, because the number of paths through a given contract can just branch, you know, exponentially. So we can only do it for certain contracts; it's hopeless in the generic case. We would like to do the upper-bound estimation for precompiles, but there's a problem, because the input of hash functions, for example, can be arbitrarily long, well, obviously limited by gas or whatever. So it's not an upper bound, you know, the upper bound of the max.
E: The current prototype is not a program like this; this is what we want to have, but we're still working on it. So the current situation is to just do what was regularly done in precompiles: you have a fixed gas cost, you get charged a fixed amount, and then, depending on the input size, you add something, but it's really arbitrary.
F: The first option, the gas rule, is very complex, and it basically would make it impossible for clients to implement the precompile natively, so they would have to import a WebAssembly engine. But if we can extract an upper bound from the gas rules, then it gives clients the option to implement them natively, like the existing precompiles.
G
I
thought
I
would
share
an
observation.
I
had
these
guys
are
the
ye
Hwa's
of
experts.
I
just
have
a
lot
of
background
and
processors
and
is
a
so
I
thought.
I
would
join
that
group
today
and
I
thought
a
useful
point
that
I
learned
from
that
in
maybe
it's
obvious
to
everybody
here,
but
I
thought
I
would
just
say
it
in
case
it
wasn't
that
the
phase
one
when
they
say
pre
compiles
I
thought
that
there
might
have
been
some
implication
of
the
wasum
system.
G
They
can
from
first
principles
just
look
at
the
Oise
encode
like
they
look
at
a
formal
description
of
an
algorithm
and
then
recoded
in
go
or
recode
it
and
rust
or
java
or
whatever,
or
they
could
try
to
use
some
wasum
system
to
actually
help
them
with
the
compiling
effort.
And
so
that's
maybe
helpful
in
this
gas
metering
to
understand
this
first
phase
is
really
about
specifying
an
algorithm
that
people
can
under
you
know
have
that
gas
being
picked
automatically.
B: So my question is: how are we going to handle this in the ewasm engine included in the EVM? Is it going to be allocated every time we call the engine and then discarded after the call, or is it going to persist between the calls somehow? What is your current thinking? And if it's allocated and torn down, how is this going to be more performant? Could it actually be more performant than the EVM, because this is exactly what the EVM does at the moment at its core?
D
Torn
down
I
mean
you
can
zero
it
out.
There's
a
third
option.
You
can
zero
it
out
as
well,
because
there's
garbage
left
over
from
the
previous
run-
and
maybe
you
did
some
previous
run-
that
it
didn't
end
up
being
used.
So
you
would
have
to
save
this
old
stuff
anyway,
so
maybe
zeroing
out
I
think
it
makes
sense
to
either
zero
out
or
to
give
a
fresh
chunk
of
memory.
That's
what
we're
doing
currently
in
a
test
net.
It's
all
fresh
each
time
and
do.
H: I think it has to get zeroed out; otherwise you run a risk of Spectre-like attacks. You can just overflow or do something, and then all of a sudden you might have access to code to execute on somebody else's account, and if everyone's running the same system, it might just cause problems that way.
G: Am I correct? Let me just ask a question: so, assuming that there are memory instances and they get zeroed out, used-up memory spaces would go back into a pool, get zeroed, and then get dynamically allocated to other calls, so it's not like it would be statically assigned in any way. I mean, that's not a very high performance overhead at all; that's how all network processors work.
J: I just wanted to make a quick point regarding the Spectre attacks: any kind of cache-timing attack, or side-channel attacks in general, depend heavily on things like real-time, high-resolution clocks being available, and that's not going to be part of the wasm semantics, so this should not be a concern for us.
B: So question number three is about interaction with the Ethereum state. Essentially, I read some really old ewasm proposals, and there, in order to access some of the EVM state, some Ethereum state, while you're running inside the ewasm code, you would have some kind of external functions declared. This is not the actual function which was declared, I was just too lazy to look it up, but essentially they imagine
B
That
is
a
sanction
which
allows
you
to
do
some
sort
of
s
load
and
then
delivers
pulls
something
out
of
the
serum
state
and
delivers
into
the
EVAs
in
context.
So
alternative
approach
would
be
to
not
allow
any
of
the
state
access
from
within
the
USM
code,
but
simply
provided
as
the
argument.
So
essentially
like
anything,
you
want
to
tell
the
USM
code
about
the
state.
You
have
to
push
it
in
as
input
and
anything
you
want
to
modify
after
that.
B
You
have
to
take
it
out
of
the
output
and
put
it
in
storage
yourself,
but
this
of
course
makes
it's
difficult
or
impossible
to
use.
Even
for
maintaining
large
persistent
structures
like
if
you
want
to
implement
like
say
red
black
tree
with
Eva
zoom,
then
you
would
have
a
problem
here,
because
you
would
have
to
first
decide
what
to
push
in
as
the
input.
But
then,
by
that
time
you
already
read
or
wrote
most
of
the
red
black
tree
algorithm.
D
So
I
read
the
spec,
the
webassembly
spec
I,
implemented
it
I'm
implementing
it
again
now
and
I
can
tell
you
that
having
exchanged
inputs
and
outputs
would
be
difficult.
The
outputs
would
be
difficult
because
the
output
is
limited
to
one
value.
There's
a
proposal
to
the
webassembly
spec
to
allow
arbitrary
number
of
return
values,
but
they're
still
on
their
first
version.
They
just
want
to
make
it
as
simple
as
possible
for
now
so
the
output
will
be
a
will
hinder
it,
but
certainly
we
can
branch.
D
You know, we can fork the spec and return arbitrarily many values, and hopefully the WebAssembly spec will catch up to us. This is one of many questions, you know: should we fork the spec or should we not? And for now we're not forking the spec. But the input side is also an issue: the signature is static, you know, at compile time, at deploy time you know exactly how many arguments there are going to be, so this would only work for a fixed number of arguments.
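The usual workaround for the single-return-value limit of the WebAssembly MVP, sketched here with a `bytearray` standing in for wasm linear memory (the function names and layout are illustrative): write the extra results into memory and return only a pointer.

```python
import struct

# A bytearray stands in for wasm linear memory.
memory = bytearray(64)

def div_mod(memory, out_ptr, a, b):
    """Return *two* results through linear memory, since an MVP wasm
    function can only return a single value (here: the out pointer)."""
    q, r = divmod(a, b)
    struct.pack_into("<II", memory, out_ptr, q, r)  # two little-endian u32s
    return out_ptr

ptr = div_mod(memory, 0, 17, 5)
q, r = struct.unpack_from("<II", memory, ptr)  # caller reads both values back
# q, r == 3, 2
```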
D
We would have WebAssembly function calls with arguments and things like this, and that might be faster than having to, you know, get CALLDATASIZE, get the call data, move things into memory, bring it onto our stack, things like this. So this is a bottleneck. This is one of the sort of micro-optimizations that I think we're putting off a little bit, but it's a very good point, and yeah, it's very interesting.
E
You know, one of the interesting parts of the whole wasm model is the way you import dependencies, how you import modules, and one of the things you can do is actually either share an entire memory with all of your modules, or import memory from other modules. So one of the things we're looking at (but it would also depend on how storage happens) would be to do some sort of memory mapping, and then, yeah.
B
Yeah, this one. So this is about the interpreter and compiler guarantees. I remember I first learned about this when I talked to some of you in Prague. So, essentially: you probably heard a lot of discussion about just-in-time compilers essentially not being suitable for the things that we want, but I want to get your current thinking on it. I remember we discussed that in phase one we potentially want to just implement what I call, like, a dummy...
B
Oh sorry, a dummy, dumb, straightforward interpreter which basically implements the specification one-to-one, and then we put it into Ethereum, and then we have a motivation to work on more sophisticated interpreters, compilers and things like this. So: your current thoughts on that?
I
Could I just add a comment? It's not JITs, it's not just-in-time compilers that are the problem; it's optimizing compilers that are the problem. It's not that it's doing it just in time; it's that it's trying to apply some optimization algorithm that is attackable. But there are actually a number of JITs that are not optimizing compilers, they just scan linearly across the code, and those might work pretty well, yeah.
D
That's exactly correct: Firefox has a linear, single-pass compiler, where it passes through the code one time and compiles it. It's usually about 2 times slower than the optimized code. And yeah, you're right that there could be what we call a JIT bomb: you give it a WebAssembly module and it takes a long time, perhaps quadratic, to compile. This was the v8 one.
D
v8 used to do this, where the intermediate representation was some sort of directed graph that had cycles for loops, and then, if you had a lot of nested loops and a lot of nested control flow, you would get this sort of exponential growth in the compile time. So there were these pathological examples that took a very long time to compile in v8.
D
Thankfully, v8 shifted away from this, and they now have two compilers: they have their baseline, which is a single pass, and then they have their optimizing one. First they let the baseline compile it, so they can start running, and then they have the optimizing one working in the background, and then they swap it in eventually. Yes, it's very important, as you're saying, not to have these sorts of JIT bombs or compiler bombs. Ahead-of-time compilers usually have linear passes too, and then some optimization levels that take longer.
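One reason the single-pass property matters for consensus: if compilation is linear in module size, its cost can be charged as gas up front, so a "JIT bomb" cannot consume unpaid time. A toy sketch of such metering (the constants and names are made up, not from any client):

```python
# Toy metering for a single-pass compiler: if compile time is linear in the
# module size, the cost can be charged as gas before compiling.
# All constants below are illustrative only.

COMPILE_GAS_BASE = 1000     # fixed setup cost (made-up number)
COMPILE_GAS_PER_BYTE = 3    # per-byte cost; only valid for a linear-time pass

def charge_compile_gas(module_bytes, gas_left):
    """Deduct the (linear) compilation cost, or fail before any work is done."""
    cost = COMPILE_GAS_BASE + COMPILE_GAS_PER_BYTE * len(module_bytes)
    if cost > gas_left:
        raise RuntimeError("out of gas before compilation")
    return gas_left - cost

# A 1000-byte module against a 10,000 gas budget:
remaining = charge_compile_gas(b"\x00asm" + bytes(996), 10_000)
# remaining == 6000
```

A superlinear (e.g. quadratic) compiler breaks this scheme, because no per-byte price is safe; that is exactly the attack surface the speakers are describing.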
B
Part of my question is: I remember somebody mentioned to me in Prague that, essentially, at that point there were no existing compilers that would give you the guarantees that we want; so, first, guarantees on the compilation time, and, second, guarantees that the product of the compilation will also not be, like, exploding. And that is when I suggested the idea that we may have to write the compilers ourselves eventually, because if there are no good compilers around, we have to write them ourselves.
B
D
This goes into the question of auditing and verification, and certainly we have a hundred-and-fifty-page WebAssembly specification. It's written down. I submitted some fixes to it, so there are still some typos and still some things being worked out in the WebAssembly spec, but it's good, from what I've seen. And certainly I interpreted what they wrote on paper when I was writing my implementation of the spec.
D
Maybe there was a bug, maybe I interpreted something wrong, and maybe one of these compilers interpreted something wrong and someone knows of this edge case. One person who knows of one bug in one engine can sort of fork the network. This is a big risk, and when they were writing it, you know, the Firefox engineer maybe wasn't concerned about consensus; they knew that they could eventually push some sort of fix in the next version. But we are concerned with consensus, so, I think, yeah.
D
I think the point is, it would be reasonable to, you know: when they submit the crypto stuff to NIST, they have reference implementations, usually in C, and some of them are verified. So I think it would be reasonable to, at the very least, audit the implementations, and it would be better if we can verify them; maybe have a computer check our proofs that, in fact, this spec was implemented in this code. And, you know, a C expert knows that overflow of signed...
D
...integers is undefined behavior in C, so we need compiler flags for certain things. So everything is language-specific, but auditing: definitely, I think we need it. Verification would be great. Another option is redundancy, where one person would run multiple implementations. So we would have, you know, these three compilers on one machine executing the code, and you'd have a best N of M: you know, if N of them agree, then there's some threshold; or, if there's some disagreement, you don't include that transaction.
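The best-N-of-M redundancy idea just described can be sketched in a few lines (a toy, not any client's actual logic):

```python
from collections import Counter

def best_n_of_m(results, threshold):
    """Accept an execution result only if at least `threshold` of the
    engines agree on it; otherwise signal disagreement (toy sketch)."""
    value, count = Counter(results).most_common(1)[0]
    return value if count >= threshold else None

# Three engines executing the same code on one machine:
assert best_n_of_m(["0xabc", "0xabc", "0xabc"], threshold=2) == "0xabc"
# One engine disagrees but two still agree, so the result is accepted:
assert best_n_of_m(["0xabc", "0xdef", "0xabc"], threshold=2) == "0xabc"
# No quorum: the transaction would not be included.
assert best_n_of_m(["0xabc", "0xdef", "0x123"], threshold=2) is None
```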
D
These kinds of ideas. Yes, I think, to start, you're right: let's motivate people that we need this, and therefore we need audited, verified compilers. Do we trust the existing ones? Maybe. But yes, I think this is very important. Thank you for saying this; I've been trying to convince my colleagues of this point. So thank you very much, Alexey.
G
I have one other comment I wanted to make on this one here, Alexey, cuz this was a good point. We didn't discuss this at all when we were just having our breakout session here; I kind of wish we did. This looks to me like it pretty heavily interacts with the gas metering question. I'd just say, from the perspective of whether it's a JIT or, even more in phase two, what Alexey is saying here on the ahead-of-time compiler: I mean, doing any compilation...
G
...step is an investment in time now, with the assumption that you're gonna run that code enough times to recoup that cost and more over time. And so, you know, I think there should be a consideration in the gas cost model: if you're going to do a compilation step, there's a cost associated with that. Maybe, you know, the assumption is you do it every single time, for simplicity.
G
But maybe also, if you knew it's just throwaway code for one-time use and you don't want to do the compilation, you just run it in an interpreter mode. And yeah, the other comment was on the JIT: I think somebody over here made the comment that it's optimizing compilers that can get into a really hairy situation, spending a lot of time crunching an optimization. The goal with eWASM, I'm hoping, right, is that we're trying to make it a very simple mapping from that to the ISA, right?
G
J
So my question is this: the slide right now suggests that, indeed, optimizing compilers are a fundamentally more dangerous problem than ahead-of-time compilation. But isn't it more or less symmetric? It right now suggests that you cannot have a secure ahead-of-time compiler, but if you use the same linear passes, then an ahead-of-time compiler would work just as well as a JIT compiler, right?
G
F
You know, on the compilation time: we were more worried about this last year, particularly with the JIT bombs, and we did fuzz-test v8 and found JIT bombs for two versions of v8. Since that time, there's now a v8 tier called Liftoff, which they claim is explicitly linear-time. There are other WebAssembly compiler engines, around Firefox, that are supposed to be linear-time. We haven't fuzz-tested those to verify that we can't find any compiler bombs for these other engines.
F
But that's on the to-do list. I think the bigger issue with compiler engines isn't so much worrying about the security, whether it's security against consensus bugs or security against DoS attacks; it's just having compiler engine implementations available in the languages that the clients are written in. So, you know, for Parity there's a lot of good...
F
...you know, stuff coming out of Firefox that's written in Rust, which the Parity client could adopt and incorporate into the client. But for the Geth client, for go-ethereum, there are really no serious efforts at compiler, you know, WebAssembly engines that we can use. So I think that's the biggest blocker right now.
J
So my second question was: if I look at the way the EVM is currently used, I would imagine that ahead-of-time compilation makes a lot of sense, because a contract is deployed once and then used many times. Even if you account for people wanting to grief this by deploying lots of single-use contracts, you could still maintain a simple counter that says, like, "hey, the tenth time this contract is called, we ahead-of-time compile it and store the architecture-specific instructions instead of the EVM code."
J
F
Yeah, we'll see. One of the ideas was actually to compile one of the WebAssembly compilers to WebAssembly and then run that in a WebAssembly interpreter. In that case, we already have a WebAssembly interpreter written in Go that perhaps could run the ahead-of-time compiler at deploy time and spit out some machine code that Geth could use.
D
G
D
You're right that sometimes it might be better to compile things ahead of time. I think that's up to the implementation: we're only here for writing down a spec, and you can execute things however you want; that is the point. But yes, I agree with you that it might be wise to ahead-of-time compile everything, or most things.
G
I mean, the only caveat I would add is that, you know, if you were trying to include the cost of compiling in a gas cost, you'd want to do that explicitly; because no matter what, you can't get away from it: compiling has some fixed upfront cost and then a lower runtime cost over more iterations.
G
Whereas interpretation is, you know, zero up front and then a much steeper slope, and they cross after some number; no matter what, they cross after some number of calls. And, you know, I don't know that we'd necessarily want to mandate ahead-of-time. This is a phase-two question, so it's kind of far off, and we'll have a lot better data by then. And I don't know that there's really a JIT-versus-AOT question; I mean, I think there's not going to be, like, Java-style JIT...
G
...here, where, you know, you have really big code and you're compiling parts of it or whatever. It seems like you're gonna compile a contract lump-sum one time and then use the compiled, you know, machine-level version after that; or before that, potentially, you'd use the interpreted one. So, it's just like somebody was saying here: you could either explicitly say "do it at instance zero", you know, at run time zero, meaning the first call does it, or you could set a counter; and it actually could be something that the contract writer could say.
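The crossover being described can be written down directly. A toy cost model (all constants are illustrative, not measured): interpretation costs nothing up front but more per call; compilation costs a lump sum once and less per call afterwards.

```python
# Toy cost model for the interpret-vs-compile crossover.
# All constants below are illustrative only.

COMPILE_COST = 900      # one-time cost of compiling the contract
INTERP_PER_CALL = 10    # per-call cost when interpreting
COMPILED_PER_CALL = 1   # per-call cost after compilation

def break_even_calls():
    # The two total-cost lines cross after this many calls.
    return COMPILE_COST / (INTERP_PER_CALL - COMPILED_PER_CALL)

def total_cost(n_calls, compile_at):
    """Total cost under a counter-triggered AOT policy: interpret the first
    `compile_at` calls, then compile once and run compiled afterwards."""
    interpreted = min(n_calls, compile_at)
    compiled = max(0, n_calls - compile_at)
    cost = interpreted * INTERP_PER_CALL + compiled * COMPILED_PER_CALL
    if compiled > 0:
        cost += COMPILE_COST
    return cost

assert break_even_calls() == 100.0
# A one-shot contract is cheaper interpreted than compiled at call zero:
assert total_cost(1, compile_at=10) < total_cost(1, compile_at=0)
# A hot contract is cheaper compiled up front:
assert total_cost(500, compile_at=0) < total_cost(500, compile_at=10**9)
```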
G
E
I mean, I agree, sure, please, okay, you know, yeah, it should make sense. But I wouldn't trust the person to actually tell you the truth when it comes to telling you "yeah, it's only going to run once, you don't need to compile it ahead of time, because it will only be a single-use contract". Basically, yeah: if you do that, people can just lie and say "yeah, it will be a single-use contract".
E
K
Right, yeah, I'd just add a comment on that. "Ahead of time" here is when the client is executing the contract for the first time, essentially, right? But before that, when you make the deployment of the contract, it is kind of rational to run an optimizer and deploy optimized wasm code which has already gone through constant propagation and the vast majority of tricks that you can do at the intermediate-representation level.
K
So the only optimization the wasm engine really has to do is at the machine-code translation level, register allocation and stuff like that, but a lot of the heavy lifting has already been done, all the control flow and all. So your returns on optimizing are not necessarily that great either, whether you choose a JIT or you choose ahead-of-time, with "ahead of time" being at execution time, right? So it's still...
A
So here's the problem: the Ethereum 1.x stuff is complicated, and some of us are more builders than communicators. So whenever we try to bring things to the community, we're like, "let's dump this 30-page PDF document, and then everyone give me feedback". No, no, no offense, Alexey, just saying it happens. And so then there are some people who give feedback, which is great; it's usually less than 10 of them. And it's usually not...
A
...you know, the people who are using the Ethereum network, who feel like they have a say in things, who do have a say in things. So things that are obvious to us are not always obvious to others. Whenever we're coming up with these ideas, we're not always thinking of how this is gonna look from the outside. Words like "state rent" sound really scary. Some people have been saying "storage maintenance fee" instead. Earlier we were joking that we were going to call it "tipping", so it could be a little bit easier to swallow.
A
Like, "hey, you know, you tip your Uber driver; you have to tip the blockchain". But we're probably not going to do that. Also, there's a lot of trolls: there's trolls from different blockchains, there's trolls from, yeah, Ethereum, Ethereum Classic, all kinds of stuff. So they're gonna try to mess with your plans as well.
A
So what do we do? First thing, and I put "write a simple blog", but really it can be any kind of write-up on Medium, or a write-up on a gist, or something that's accessible to normal people, people who aren't very technical, I should say (not like we're weird or anything, but you know, more normal people). And after you do that, and when I say simple, I mean don't go into the technical details; have a link to your technical document from that one. Just say: "I want to do X. I think it's very important. I have run tests; the numbers are in the document, and I think we need to do it by this time. I understand the other side of the argument, but I will be explaining that in the technical document." If we talk about just that much in a blog post, it makes people feel really comfortable about the idea; it makes people feel like they can connect to it and they can understand it.
A
Get feedback. Definitely listen immediately, wherever you post it, for people who are saying "this is confusing", and watch for people who are not giving any feedback: if you don't get a lot of feedback, that might mean your idea is still too complicated, or your presentation of the idea is still too complicated. Spread it on Medium, Reddit, Twitter, the Ethereum Magicians forum, Gitter, everywhere you can. Gonna bring up Alexey again.
A
He does a great job with his rent proposal presentations of always going to Twitter. I see it on Twitter, I see it on Reddit, I see it on the Eth Magicians forum and on Gitter whenever he comes out with a new rent proposal document. So spreading it out like that makes sure it reaches the widest audience. We don't have as many signals and we don't have as many platforms as we really should right now; like, I...
A
...just listed the top five, and I can't think of any more than that. Maybe Telegram, if there are some really cool Telegram groups. The next one, and this one's overlooked a lot of the time: get support, or "endorsements" is the better word, from core developers and from people in the community that people really trust. So if you have a group of, like, most of the people from the core dev meetings commenting on something, that's gonna go through.
A
Something that might normally be controversial: if enough people who trust the core developers hear that they trust the idea, then that's gonna make it much easier to go through cleanly. Some ideas are real simple and they get through really easy, like whenever we switched to Snappy compression, you know, between Geth and Parity. There was a little bit of, I guess, discussion amongst people about that, but it wasn't anything major, and the community didn't care at all, because it was too technical and also didn't really affect them, except behind the scenes.
A
A lot of the stuff we're talking about, like state rent and some pruning initiatives, does affect the community, so they need to know how they're affected; they need to know what's gonna happen if we go through with this change; and the reasons we're doing it have to be really solid reasons. So getting endorsements from really smart and admired people in the community is super important. Iterate.
A
So if it doesn't work the first time, try to deliver the message in a new way, in a different way, with more endorsements, with more people talking about it. And then, finally, win, which could mean that, you know, your idea works and everybody wants to do it and we implement it, or it doesn't and the community doesn't want to implement it. If there's one thing you take away from this, it's that we can't do anything without community support; otherwise we're a cabal, and we don't want to be a cabal of developers.
A
We want to actually listen to the community, no matter how non-technical they are, because they are the ones that we're catering to: the user, the stakeholder and the ecosystem. They're the ones who matter in all this. So if they say, "rent's a bad idea, I don't want to pay for it, but I will trade that off for the network being slow and unusable", then that's the trade-off they're choosing.
I
I just wanted to add a quick comment. I think it's maybe less relevant for people here, but relevant for the community at large: when proposals are brought to the table, you always have to keep in mind that all the client dev teams are extremely busy and just don't have a lot of time to do stuff. And so, if there's a proposal that takes a lot of work from the clients, we might love that proposal and really want to do it, but on the core dev calls it'll be like, "yeah...
I
...that sounds great, but no, sorry". And getting an idea of what it takes to implement your proposal, and sort of putting in the work to show that you know what you're talking about and how to get it done, even if you don't do all the code yourself, even if that just means you interact with the teams and can guide some of the client developers on how to implement it: that goes a really long way toward actually getting it done.
A
Alright, let's head back in and take a seat, or a stand, or whatever you want to do, cuz we're about to start. Do you know how to do the extend-screen thingy on a Mac? Wonderful. And this will be the last presentation; then we're gonna go over the objectives again, and then we're going to have some breakouts, is my understanding.
L
Good, thank you. I'm getting you a cookie cake. Alright, hey, I'm Zak Cole, CTO. I want to talk about simulation stuff and testing. In regards to testing, I think this is gonna be applicable to a lot of you folks, so I'm gonna kind of breeze through this, because I think giving you a practical demonstration rather than reading off of slides is better.
L
Okay, cool, all right. So yeah, first I want to clarify the difference between a simulation and an emulation. A simulation is like a mathematical model, and it's really only as good as the data sets that you provide, so it's really hard to account for things that you can't really predict or don't really know about. If you have a good enough data set, you can run a lot of simulations; it's really fast and efficient.
L
An emulation is more of a functional model that can actually replace systems, right? With a simulation you're just mathematically modeling different processes; with an emulation you're actually replicating these processes and running them. It's practical; the system is actually functioning, right? So that's good for acquiring large data sets that can be highly accurate depending on the setup, and it's more indicative of real performance, though it can take more time. But so: the world is a big place; there's a lot of stuff going on.
L
So what I'm presenting for acquiring these data sets is kind of similar to what we're doing already with Ethstats, right? Everybody just provides their own nodes, we're collecting data, it's cool, but we don't really have granular views of this data. So we don't know how accurate or valid the data is going to be, because we don't have control over those nodes; we're not really aware of all of the environmental conditions that produced that data, right?
L
Okay, all right, sure, all right. So what I'm proposing is: we can set up nodes in different regions, and we control those nodes globally, so we can deploy them on the cloud or whatever. They're pretty much light clients that are passively receiving data and writing to a shared database, like Kafka or something like that. That allows us to understand, more granularly, exactly what's going on within the mainnet, so we can acquire these relevant data sets.
L
Then, on the emulation side: at Whiteblock we've developed a testing platform that allows us to provision multiple nodes running whichever client you choose, and to configure the network links between those nodes. Each node exists within its own VLAN and is assigned an IP address, which lets us provide logical separation between the nodes, which are each running independently of one another. Then we can configure the network links between those nodes with packet loss, latency or bandwidth constraints, so we can actually replicate a live, functioning network.
L
It's highly accurate, because we can observe how a client performs and how it responds to different environmental conditions, etc. And then we automate processes. So we're generating transactions, and they're real transactions, because it's actually running the Geth client, or whatever client we want it to. We're automating all of that, so we can test, say, TPS, and what effect network latency or packet loss or different bandwidth constraints have on TPS. That's a pretty high-level example.
L
We can also implement forking conditions, so we can test consensus and observe what happens within the network when the blockchain is segmented and forked. And then we can do a bunch of other stuff. So it's an actual full-mesh network; it's real. I just wanted to go over some tests that we ran earlier. First, we wanted to start with some basic stuff, like: what effect does latency have on miner profitability?
L
I was gonna show you guys this, if you don't mind. So, in the control group, each node had equal computational resources and no latency was applied; in the test group, we applied incremental latency to each one of them. We wanted to observe the wallet balance after a period of time of mining, and that was what we used to validate our hypothesis. The result at the end of the test was that the control group had a 25% higher average balance than the test group.
L
So what are the implications of this? Latency increases block propagation time; higher block propagation time reduces transactional throughput because there's an increased uncle rate; and that weakens the absolute security of the network, because not all of the hashing power is actually going toward securing the network; some of it is just going toward mining uncle blocks, right?
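A toy back-of-the-envelope for why a high uncle rate weakens security, under the simplifying assumption that uncled blocks represent wasted honest hashpower while an attacker mining a private chain wastes none. This is an illustrative model only, not the analysis referenced later in the talk:

```python
# Toy model: if a fraction u of honest blocks end up as uncles, only (1 - u)
# of honest hashpower effectively extends the canonical chain, while a
# private attacker chain wastes nothing. The attacker's fraction a then only
# needs to satisfy a > (1 - a) * (1 - u), i.e. a > (1 - u) / (2 - u).

def attack_threshold(uncle_rate):
    u = uncle_rate
    return (1 - u) / (2 - u)

# With no uncles you need the classic majority:
assert attack_threshold(0.0) == 0.5
# With a 57% uncle rate, the toy threshold drops well below 50%:
assert round(attack_threshold(0.57), 3) == 0.301
```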
L
Then, I was at, I think it was EDCON last year, and Shawn Douglas from Amberdata was talking about how, when the uncle rate is whatever, X, you don't actually need 51 percent control of the network in order to wage a successful "51%" attack. I sent him an email and I was like, "cool, but you didn't really provide any data"; it was literally just a bullet point. So I asked him for the data, and he never responded.
L
So I was like, you know, we'll just run the test on our own. So that's where our next test goes; we actually just built it out ourselves. We had four nodes; we forced a really high uncle rate through latency; and we had one node that was like a super node, controlling approximately 46 percent of the network hashing power. We had a low block time, because we were really just trying to observe these effects. There was a 57 percent...
L
...uncle rate; that was what we measured as a result, and we ran the test for a thousand blocks. These charts are kind of wacky, so let's go to the end of it: the node that had the lowest latency and the highest hashing power controlled most of the blocks that were produced. So it's pretty interesting. So what do we do with this information? One: we could raise the block time and the gas limit per block; higher gas limits mean bigger blocks, more transactions.
L
It's pretty much just a Geth fork, and they implemented their own custom difficulty algorithm to target an 88-second block time, which means they have larger blocks, and they wanted to see if they could raise the default gas limit from 4 million to 30 million. They wanted to make sure that it wouldn't negatively impact performance, and they wanted to benchmark it against Ethereum as well. So we set it all up; hold on; so, at the end, anyway...
L
I want to just show you guys a demo, because I'm not really good at doing these presentations, I'm sorry; I'm very good face-to-face, though, so you guys should come talk to me. So: throughput was higher for Ubiq under the same conditions, and the difficulty algorithm implied higher stability, more consistent block times.