From YouTube: Ethereum 1.x Morning [Day 3]
A
So the current — well, what we have with ewasm: we have a full-fledged implementation. It's available in several... the slides should have changed — why is that not happening? Okay. So we have the API called EVMC, we have a reference implementation called Hera that uses Binaryen as a back-end engine, and there's a Go version of that implementation. The full-fledged implementation — that's using wagon, so wagon is the least performant interpreter that we have so far, but it's also written in Go.
A
So it's pretty practical for testing, and yeah. We have a testnet running, and — well, I encourage you to go and visit it, because it doesn't receive a lot of visits. So if you want to learn about ewasm, if you want to play with it, you can, and if you have some feedback, we're really interested in that. When it comes to Eth 1.x, however, we're having a more — I mean.
A
So I started working on it like two weeks ago — one week ago — and it passed the unit tests. Right now I'm using Rinkeby — like, I'm syncing Rinkeby — to test, to try to find all the problems. So I managed to sync up to 25% of Rinkeby, and it's not because it's not syncing properly or there are errors; it's simply because I haven't been able to get stable connections with all the peers available in the last few days.
A
B
A
So there's a repository in the ewasm organization that's called ewasm-precompiles, and they are reimplementations, and some of them are fairly optimized. All of them are written in Rust, which has some disadvantages, because the binaries tend to be a bit big. But apart from that they're pretty functional, and yeah, we reimplemented everything and we're just comparing — it's just about finding challenges and errors. Yes, sir.
A
It's really taking much longer, by the way. I have five working contracts right now, and there are only three there, for two reasons: one of them is not used that much on Rinkeby, so I don't have enough data, and the other one is ecrecover — it's actually dwarfing the other precompiles. So I could include it here, but it's basically the same phenomenon: it's much longer. There are several reasons for that, which I will cover on the next slide.
A
I just want to point out that, even though it's not properly displayed, this is in microseconds. So even though it's way longer, it's still very fast for humans. And I think — yeah, I think that was it. So yeah, you see that benchmark, but please don't panic; there are several considerations. The first one is that we're not going to — I mean, we had a conversation, like I mentioned, with Fredrik yesterday or two days ago, about how we're not going to replace very critical-path
A
precompiles — we're just going to add the interpreter on the side, to be able to add functionality. But when it comes to something like ecrecover, which is used very often, I think it doesn't make sense to replace the native one, especially with an interpreter-based precompile. So yeah. What's — yes, sir.
A
So, where does the two-hundred-times difference come from? I mean, once again: it's not optimized at all. So where does it come from? Well, first, there's the actual execution: for each instruction you go through the interpreter, so for each operation you do loops, you do, you know, memory accesses and things like that. What else?
A
Okay, so precisely, the next step is to optimize — that's to get more data. I mean, the current step is about finding what the issues are going to be, so we're finding some issues, and then it's about trying to squeeze some performance out of this. So we want to try the JIT; we want to see if we can maybe use a different language, because Rust might not provide the fastest — I mean, that's.
A
It wasn't a proper proposal. There's — how do you link — like, so far I was saying it's a bit slow because you have a lot of data copying: it's the way we pass the data. We really need to investigate — or at least any proposal will have to suggest — how the data gets passed from the blockchain environment to the execution environment. What else? Yes?
A
So I had a very nice conversation with Brooke yesterday about the migration path. This is something that — I mean, you know, I'm also part of the Geth team. So it's about not just jumping the gun and pushing some interpreter and crossing your fingers, hoping everything is going to work. You have to be sure. So yeah, any proposal needs to cover that, and that definitely needs to be in the requirements, right. So that's what I'm doing!
A
Why not try to run several interpreters — several clients — and make sure that the result is completely consistent? And the last slide is about data that a proposal should include: a list of precompiles. So I think the ewasm team wants to pretty much introduce — what, Blake — I'm trying to remember — Blake, and is there another one? Some SNARK contracts. But that's still being debated on our end. And yes, on a personal level, I really find it really hard to work with —
A
— you know — I mean, not really hard, but it's sometimes not very practical to work with the tools we currently have, and that's also the interest of switching to Wasm: it's about grabbing tools from a different community. Some of the Wasm people come from the C++ world, and C++ has an excellent tradition of tooling, so getting that on board would be very nice, and I think a proposal should definitely list and explore that. And yeah, I think that was it. Yep.
B
A
So there's a Binaryen-based version of the VM, yes, and there's a wagon-based version.
D
F
E
B
A
G
So just a quick recap of the current proposal — this might be slightly simplified. Every contract has a storage size and a lock-up counter. Whenever someone modifies storage, if the current lock-ups are less than the storage size, then they will have to pay a lock-up fee and increment that counter; and whenever someone releases storage, if the lock-ups are less than the storage size, then they would release that lock-up. And the lock-up fee is always transferred to and from tx.origin.
G
So this is meant to provide pretty much full backwards compatibility for older smart contracts. We can imagine CryptoKitties, which, you know, currently is using a lot of storage but has no lock-ups. So every single time someone transfers a kitty, they would have to pay a lock-up and increment that counter, until the lock-ups actually match the storage size.
G
And this also means that any new contract post-fork would always have a lock-up equivalent to its storage size, and it would never have to actually pay rent on storage, because with the storage rent you're essentially only paying for storage slots that do not have a corresponding lock-up.
G
So before I explain why this breaks things, I want to talk about an extremely common smart-contract pattern: the approve-and-call pattern. Any contract that interacts with tokens typically uses this. A user will first approve the smart contract to transfer funds on their behalf, and then that smart contract will execute some logic and can transfer the user's tokens under whatever arbitrary conditions are defined in that smart contract.
G
So, let's look at a decentralized exchange. We have Alice, who holds 0 DAI and 10 wrapped ether — we can just think of wrapped ether as ether, but it's like the ERC-20 equivalent — and then we have Bob, who has a balance of both of these things. So when Alice purchases DAI, she will be paying a lock-up fee, because she now has a DAI balance, and that's using an extra storage slot that was not there before.
G
Now let's say she wants to sell this DAI back. So she places an order to sell the DAI, and Bob comes in and fills that order. And now this will, you know, completely get rid of Alice's DAI balance, release a storage slot, and release a lock-up — to Bob. So even though Alice originally placed the lock-up, Bob is now receiving the release of that lock-up.
G
So this drastically changes the economics of trades: now there's a much larger incentive to only fill orders rather than placing orders, and it's also susceptible to front-running and race conditions. Let's say Bob attempts to fill Alice's order, and Alice decides to front-run that by transferring more DAI into her account and transferring all the wrapped ether out of her account. Now Bob actually ends up paying a lock-up fee rather than releasing the fee.
G
So my proposal to fix this is: rather than making lock-ups a mandatory part of every transaction that modifies storage, we should just create opcodes for locking up and releasing ether, and now smart contracts can determine their own rules around how fees are locked up and released. This is less backwards compatible, because if we take the CryptoKitties example from before — now transferring the CryptoKitty would not actually change any lock-ups; you would have to write some sort of a wrapper smart contract.
G
These opcodes are pretty simple. They would both just take a to address. The lock-up opcode would always transfer ether from the caller of that opcode, and the release opcode would transfer ether from the caller as well. And now lock-up fees would be included in the value of a transaction, rather than — well, I guess it hasn't been fully defined in the current proposal where it's coming from exactly.
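A hedged sketch of those semantics as just described — both opcodes take a to-address and act from the caller's side. The flat per-address "locked" pool is my simplification and deliberately glosses over the permission question raised later in the discussion:

```python
# Toy semantics for the proposed LOCKUP/RELEASE opcodes. The flat per-address
# pools are an illustrative simplification, not the proposal's data model.

class State:
    def __init__(self):
        self.balance = {}  # address -> spendable ether
        self.locked = {}   # address -> ether locked up at that address

    def op_lockup(self, caller, to, amount):
        """Move ether from the caller's balance into lock-ups held at `to`."""
        assert self.balance.get(caller, 0) >= amount
        self.balance[caller] -= amount
        self.locked[to] = self.locked.get(to, 0) + amount

    def op_release(self, caller, to, amount):
        """Release lock-ups held at the caller out to `to`'s balance."""
        assert self.locked.get(caller, 0) >= amount
        self.locked[caller] -= amount
        self.balance[to] = self.balance.get(to, 0) + amount
```

In this reading, a wrapper contract would call `op_lockup` when users create storage through it and `op_release` under whatever rules it chooses.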
G
Just some discussion points. You know, given that this proposal is less backwards compatible — and kind of thinking about the presentation yesterday — I think we just have to basically define the types of things that we are willing to break and the types of things that we are not. You know, by introducing these things as opcodes we're breaking a lot of old contracts.
G
So contracts could, for example, decide not to use any lock-ups and just pay storage rent, or they could, you know, choose to force people to deposit ether, which would later be used for the storage slots that are relevant to that account — things like that. They can basically just define how lock-ups are actually included and released.
B
First of all, thank you very much for putting this together — really good. So one thing I would note is that — I think it was the second slide or the first slide, where you were talking about release — I think there was a bit of an inaccuracy there. No, before that. So yes — when you said the storage slot is cleared: actually, according to the current proposal, if the lock-ups are less than the storage size, then there's no release at all. So the release only happens when the contract is fully covered, and —
D
B
— that means that, for some time, if it's a pre-existing contract and it's really large, there will be no releases at all. And my other comment is that — you probably know that the first ever proposal for state rent was no lock-ups at all, right, and the main criticism there was: well, what is going to happen to the existing contracts, because it's really hard for them to migrate. So the lock-ups are essentially the feature that wouldn't have been there if we were creating state rent from scratch, without the legacy contracts.
B
So it's simply there to provide the migration path. So yes, I agree that it's sort of a bit weird, and — since it was actually Avi who first — we talked about it with Avi here when I started to think about this idea, and I heard exactly the same kind of criticism that you have about the lock-ups: essentially, it's tricky to figure out who is going to contribute a lock-up and who's going to —
B
— get it back, and things like this. And also I thought: if you're about to withdraw tokens from an exchange, the exchange now has to pay the lock-up for you, because you didn't have any tokens at that address before. And yes, so maybe, I thought, what is the exchange going to do — is it going to pay the lock-ups for everybody, or maybe it will start requiring people to have a nonzero balance before they can withdraw things?
B
Yes, so there are lots of these things, and thank you for starting to think about those; I just didn't really have time to list all these issues. Regarding — if you scroll back to your opcodes — I remember you asked me this question on the first day, yeah, about this. I remember that. So my question here, to clarify: the lock-up opcode, when you say to-address, is this the address of the contract? So —
B
Who has the authority to release that lock-up to themselves? So when you do a lock-up — essentially, if you imagine what the effect is — in that to-address contract there's additional data that needs to be there: it needs to remember who placed that lock-up, so that when you do release-lockup you can't simply just sweep somebody else's lock-ups from there. So there needs to be some permission mechanism there, right? Yeah.
G
B
G
You wouldn't lock up anything in a contract that didn't have a clear mechanism for you releasing it. Like, there are some potential attack vectors here, right — you could just arbitrarily lock your ether up in a certain contract, and then that contract has a function where, like, the owner could just release all the lock-ups. And yeah, that's so.
B
Is the release-lockup opcode really required? Like, does it actually effect a lock-up release or not? Or does the SSTORE which clears the storage effect the release? So which action actually releases the ether: is it the release-lockup, or is it the SSTORE which cleared the storage?
H
G
B
Does the contract — no. Basically, if you're thinking about the contract: in order for the contract to decide it, it needs a way to introspect the lock-ups. At the moment there's no such thing — there's no ability for a contract itself to introspect the lock-ups, so it will not be able to decide, because it doesn't have the information. So the only thing that can decide it —
B
G
H
C
I
C
I suggest that, instead of doing that, you can let the contracts choose which logic they want to go with. They could go with the logic of having, for every storage cell, the owner — which would cause them to double the storage — or they could go with a different logic of how you release the lock-up. So, for example, you could choose that the contract owner would pay it, or any other logic that you want to build on top of that. Yeah, and —
G
B
G
B
H
G
B
G
B
— that for the newly created contracts it's not ideal — the lock-ups are not ideal. So maybe I will go and think about it. Thank you for this proposal; I will go think about whether it's possible to not have lock-ups for new contracts, or what the implications are. Because, as we are kind of realizing, the lock-up and release opcodes are probably not gonna work for legacy contracts, we are basically now having three different considerations: rent, lock-ups for legacy things, and opcode lock-ups for non-legacy things. So yeah.
G
J
I've run a bunch of nodes for a long — I have been doing that for a long time, and I've got a little bit of a conservative background, because I come from more of the Bitcoin side of stuff. So my main takeaway from this meeting is that I thought there was a lot more consensus on keeping the 1.x chain sustainable — I really like that hashtag, #sustainable — and I think that I'm going to try to push forward for more people to realize that this is a huge problem.
J
Like, all through 2018 it's been a little bit of a pain to run a full node, and it seems to be becoming more of a pain every day. I just run faster machines with better hardware, but yeah, that's not sustainable. So I don't want to repeat everything, but these four numbers were something pretty interesting to have in mind for all of these conversations. Sorry.
J
So, first, the current state size: nine gigabytes. Running a full node takes 140 gigabytes — I knew that already. There are about 50 million accounts out there, and we could consider about 30 million of those dust accounts, and there are about 140 million storage items.
J
So first, with regards to burning accounts: I think that this is like the low-hanging fruit in terms of everything. We need some data to realize how much we are going to win from this, or if it's even worth it to prune a lot of account data, because of all the pains about resetting the nonces. But I think that it's worth it, because we don't know how much, or for how long, people will continue to use this chain, and making it sustainable over time is a lot better than just saying —
J
— there's going to be a cut-off, and after this moment there's not going to be — like, your account may live forever. Whereas the account eviction, I find that it may be, like, independent of the contract maintenance fees. I think that is pretty simple; I'm not going to explain it again, because we already saw this about three times — sorry — I understood it, I learned about the account eviction and how we should add this expires-at field.
J
This is what I think may become a tragedy of the commons for smart-contract developers: if there's not enough incentive for people to release storage over time, it's going to become too expensive to maintain a contract, and it may be that the contract gets evicted. And I think it may be okay, because it's a little bit of a separation of layers — I'm not sure about that.
J
It feels to me, as an outsider, that there needs to be a lot more coordination between the different teams, to work on proposals together. And then another thing that I learned was about increasing the gas limit, which I think is a little bit reckless right now — adding more features would be cool, but I think that we need to fix the scalability first — and reducing the gas cost for cold data, which I also think is kind of like increasing the gas limit. So I think it would be a little bit complicated.
J
B
So with the tx.origin — you said that using tx.origin for lock-ups is an anti-pattern. I did actually think about it before I arrived at this; I settled on tx.origin. My first idea was to use the message sender for that purpose, but then, for some reason, I quickly realized this was going to be an anti-pattern, and then there was another idea: to store where the lock-up came from.
B
J
Like — so the problem is legacy contracts, and legacy contracts usually use the message sender, so you cannot wrap that contract in a contract that, before every SSTORE or whatever, tries to do something else. But if we can find some kind of condition to, like, change the semantics of SSTORE if the wrapper contract calls it in some other way — maybe there's a solution. Yeah.
B
So yeah — something that I said on the first day, and then Robert pointed out that I probably need to put it into the presentation, is that in this framework for state management I am ignoring — we are ignoring — the cost of running the full node. So it's not addressed in this framework, and that's to make sure that we have some reasonable things that we can reason about, and we only concentrate on the performance along the other dimensions.
B
So page number nine. So here — you might recall that I was also talking about caching, for some of the performance degradation: like reading the state for the transactions and writing the state from the transactions, so writing the Merkle tree.
B
So all these things need to be analyzed if we want to make sure that this is actually happening, because sometimes you have these really weird slowdowns of Ethereum, like while mainnet is syncing. You know, if we started to analyze what was actually happening there, then we might see all sorts of things. There might have really been these DoS attacks, but they were just basically generally slowing down the network — and I didn't see analysis on that.
B
I
A quick question on the previous slide: do you mean cache hits, or is the DoS when you construct cache misses?
B
So if the adversary can basically make sure that this is not true — essentially defeating the efficiency of this policy, basically making sure that the transactions that are happening only use the items which haven't been accessed for a long time — that's cache misses, right, yeah: generating as many cache misses as possible. So what I'm saying is that you cannot say that you have improved performance by simply putting a cache on top of it, because that cache can be attacked.
K
B
Practically, you're probably still gonna run LRU for the case of simply processing the blocks. But as I discussed with some people here, if you're actually syncing, then LRU is not an optimal policy. The optimal policy for syncing is: if you know in advance what you're going to be touching, you can basically have an absolutely optimal policy. But this is what I was planning to do in Turbo-Geth at some point, when I have some time. So page 12.
B
So basically, here we were discussing possible mitigations of our performance issues, and you might recall there was a mitigation to improve latency but not throughput; then you can do advanced syncing. Actually, you can forget about that slide — it's last modified. Look — we had the little breakout today about the sync protocols, and I'm hoping to put together some presentation, maybe today in the afternoon, to summarize what we've done. But I now want to add a third item here.
B
One of the mitigations: we can ask people to increase the pruning threshold. Essentially that's saying that an Ethereum node is not 140 gigabytes anymore but 300 gigabytes, officially, and that means that everybody should try to keep a bit more history, to allow the other peers to sync properly. I know this is just a temporary mitigation, but it could help if we determined during our simulation that in two months' time it's going to be impossible to sync.
B
This is the easiest mitigation: just ask somebody who wants to run a node to increase the pruning threshold. And here is — well, yesterday I think Fredrik mentioned to me, when we were talking about the success rate of snapshot sync — I was previously thinking about only three variables that affect it, like three main variables: state size, bandwidth, and pruning threshold.
B
But now he said to me that there's another, fourth possible cause, which is peer uptime. So I kind of assumed before that all the peers are permanently online, but in reality they always drop in and out of the network — it's very ephemeral. So we also need to test for the peer uptime: what happens when the peers are only temporarily there. You know, you cannot have a basically persistent sync connection to the peer; you have to be able to find your — like.
B
I
B
J
B
So just before we started the presentations, we had a little breakout session over there about syncing mechanisms, and I'm hoping to put together — I have some PDF about this already, but if I have time before the afternoon, I'm gonna put something together and explain what we discussed. So first of all, this is, like, version 3.
B
Now — because there was a previous one; it was version 2, but version 3 is not published yet. So first of all, I renamed it from "state rent" to "state management", and it's not just because we don't like the word; it's because it has actually outgrown state rent. Now it's not just about state rent; it's about lots of other things. And so, let's see what we have here. You might recall there was this diagram of the changes before, so here —
B
— I started to make more clarifications. So I've realized that, for example, the replay protection EIP Marcin has written specifies that the first step, which is the optional replay protection, is a hard fork, but the second step is actually a so-called soft fork. So this is the cloudy bit — like, soft things — and I'll just explain what the difference is. The hard fork is essentially adding new — it's extending the protocol, meaning that certain things which were not valid before will become valid now.
B
So that's the hard fork — an extension of the protocol — and the soft fork is a restriction of the protocol: certain things that were valid before are not valid anymore. So in the example of the replay protection, when we make it optional — when we allow the optional field in the transaction — it means that we still allow the old stuff, and we also extend to the new, so we allow more transaction types to be valid. And then we restrict it: we remove the old way of doing things and only allow the new things.
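That extension-then-restriction sequence can be modeled as validity predicates over a toy transaction format — the `valid_until` key below stands in for the optional replay-protection field, and this is only a schematic, not the EIP's actual rules:

```python
# Toy model: a hard fork widens the set of valid transactions, while a
# soft fork narrows it. `valid_until` stands in for the new optional field.

def valid_before(tx):
    return "valid_until" not in tx      # only the old format is valid

def valid_after_hard_fork(tx):
    return True                         # extension: old and new formats valid

def valid_after_soft_fork(tx):
    return "valid_until" in tx          # restriction: only the new format
```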
B
That's the soft fork. Another thing I added here is the S in the inner left bottom corner, which is the advanced sync protocol, and I introduced a number of changes here with a circle — with an oval — which basically don't require a protocol upgrade, but they're still quite important and they should be on a roadmap.
B
This is kind of messy now, but I needed to re-lay out these things, because they're not really nice circles anymore. But what you can see is that the circles have now dropped out of the forks altogether, because they're not forks. And another thing which we didn't realize yesterday, when we were talking — so we were talking about what is the —
B
— what is the shortest path to, let's say, increasing the block gas limit? As Esteban correctly mentioned, increasing the block gas limit right now is a bit reckless, because we will probably see an acceleration of the state size growth. But we still want to do that — so what is the minimum set of changes that allows us to do it without being reckless? And if we are looking at this particular proposal here, probably storage lock-ups will slow down the state expansion sufficiently so that it's not going to be —
B
So it's conceivable that there could be some rogue contracts which have the goal of draining your account and putting it into lock-ups. So as a safety feature — in the same vein as having a maximum gas limit on a transaction — it would be prudent to introduce something like max lock-ups, which is, like, the number of lock-ups that you allow this transaction to do on your behalf. Let's say you put it at five: the moment it goes to the sixth lock-up —
B
— the transaction aborts — reverts — and no lock-ups are made. So you can basically limit how much ether your transaction can spend on lock-ups. If we agree, I mean, that this is important, then the situation changes: the storage lock-ups can only be introduced in a second fork, because before that we need to introduce the safety —
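A sketch of that safety check, on the model of the transaction gas limit. The all-or-nothing revert behavior is as described in the talk; the function and field names are invented for illustration:

```python
# Toy model of a per-transaction max-lockups field: exceeding it aborts
# the whole transaction, so no lock-ups are made at all.

class LockupLimitExceeded(Exception):
    pass

def execute_tx(max_lockups, requested_lockups):
    """Apply the lock-ups a transaction requests, or revert past the limit."""
    applied = []
    for amount in requested_lockups:
        if len(applied) >= max_lockups:
            # Transaction reverts: the applied list is discarded by the caller.
            raise LockupLimitExceeded("transaction reverts; nothing applied")
        applied.append(amount)
    return applied
```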
B
— the lock-up safety. So I included the safety into the — I combined it with the replay protection, because both of these changes modify the transaction format, and it's probably easier to do them together. So this is how it's gonna look: change A — optional temporal protection and lock-up safety. Now I have a little icon here to signify whether it's a hard fork, but I haven't finished that across the whole presentation. So the temporal protection we've seen before, but now, in general, this change looks like this.
B
So I also took into consideration the case you made about versioning. Let's say that we add three optional fields — they're either all there together, or none at all: version, valid-until, and max lock-ups. And then in change B, together with the protection, we also make version, valid-until, and max lock-ups mandatory. And so this now becomes, because of the safety feature for lock-ups — we added it as a prerequisite for the lock-ups.
B
Let's look again: you see that there's a little line between the B cloud and E — that's the safety thing — and because the B cloud now can only happen after A, the A will be in the first fork, and B together with E is in the second fork. But then the second fork gives you both the storage lock-ups and the dust account eviction, so it becomes sort of more impactful. So what are the things I changed here — so yeah, I haven't finished.
B
This is up to the client implementation, but they start maintaining the signed integer counter, and whenever they execute an SSTORE they apply these changes to the counter. And then in change D — this needs to be written up, by the way — they inject the correct value into the state after the block D. So this is how you can achieve it in one hard fork.
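One plausible reading of that counter, sketched below. I'm assuming it tracks net storage-slot creation via SSTORE (zero to non-zero creates a slot, non-zero to zero deletes one) — that rule is my assumption, since the exact bookkeeping isn't spelled out here:

```python
# Assumed SSTORE bookkeeping: a signed counter of net storage-slot changes,
# which a later fork could inject into the state as the storage size.

def apply_sstore(storage, counter, key, value):
    """Apply an SSTORE and update the signed slot counter."""
    old = storage.get(key, 0)
    if old == 0 and value != 0:
        counter += 1            # a slot is created
    elif old != 0 and value == 0:
        counter -= 1            # a slot is deleted
    if value == 0:
        storage.pop(key, None)  # EVM convention: zero value clears the slot
    else:
        storage[key] = value
    return counter
```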
B
So in this proposal there are three types of rent. There's a rent on the account, which applies to everything — contracts and non-contracts. Then there's a second type of rent which applies only to the code: the longer your code, the more you pay. And the third type of rent, which is the storage rent, only applies to the contracts which didn't lock up enough ether into them.
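Putting the three components together in one illustrative formula — all the rates are placeholders I've invented; only the structure, and the lock-up offset on the storage term, comes from the talk:

```python
# Illustrative per-block rent with the three components described: account
# rent, code-size rent, and storage rent on slots not covered by lock-ups.

ACCOUNT_RATE = 1   # placeholder rates, not proposal values
CODE_RATE = 1      # per byte of code
STORAGE_RATE = 1   # per uncovered storage slot

def rent_per_block(code_size, storage_size, lockups):
    uncovered = max(storage_size - lockups, 0)
    return ACCOUNT_RATE + CODE_RATE * code_size + STORAGE_RATE * uncovered
```

A fully locked-up contract pays only the account and code terms, which matches the earlier point that new post-fork contracts never pay storage rent.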
B
Yes — any other questions? Okay, thank you very much. Does anybody else want to make a presentation for now, or are we doing breakouts? Okay, I take it as no. Yeah, we're doing breakouts now, so the live stream will be suspended for a couple of hours, I guess. Thank you very much for listening.