From YouTube: Ethereum 1.x Morning [Day 2]
A: ...the 1.x meetings. So what we're going to do now: we have about four different presentations. First will be Frederick, then KC, then Remco, and then I will present. After that I'll present, very briefly, the things that have changed since yesterday, and then we're going to have a breakout session. Okay, and then, following that, we'll have individual time and more presentations.
B: All right, good morning. So we had some good discussion on chain pruning in a relatively small group yesterday. I can't say that we got to any sort of implementation details, or figured out exactly what to do or what an EIP should look like. But we had a lot of discussion around what we need more feedback on and what we need to figure out before we can write an EIP, and a lot of this comes down to community feedback and actually figuring out what users want and need.
B: The first point that we need to establish is what people actually want. When we're talking about a pruned node, we can talk about it in different contexts, but essentially imagine a node that just has a full copy of the state and the headers and nothing else. Who wants this? What is this for? You can imagine a use case, like for myself, where it's someone who wants to use dapps, and uses dapps regularly.
B: They don't necessarily want to use a light client and pay the network cost of going out over the network for every request for state, so they want the full copy of the state, but they don't have 150 gigs free on their laptop. So it's sort of a thing in between a light client and a full node. I can imagine that user, and I would be one of them myself, but how many of those users exist in the ecosystem? It's really hard to tell.

B: We've already, you know, pushed people to accept centralization and just use Infura, and everything is fine. Can we really take this step back and bring those users back to caring about data validity and other things like that? That's something I don't have the answer to, and I think we need more people saying that they actually want this before we just dive headfirst into doing stuff.
B: The second question is: why run this rather than a light client? Do people care about those network costs? How hard is it? Maybe, instead of focusing on building a pruned node, we should focus on light clients and incentivization, actually making the light client experience a first-class experience that works really well. If we solve the incentivization problem, that's also useful, and something that can further both the current ecosystem and the future one.
B: And the final question here, which is almost a question for Hudson, who's not here right now, is: how do we find out the answers to these questions? I don't really know. We need to engage the community to find out what they actually want, what they actually need, and what they would use.
B: But if we assume that this is something desired, how do we build an MVP to prove these assumptions? I think the first thing that is really easily implemented, like I said yesterday, something we could have next week if we went for it, is a feature flag on the CLI that just says: prune everything beyond N blocks. And I think there's a discussion here around whether this behavior is default versus non-default. Peter's proposal, I think, is mostly written up with the expectation that it's the default behavior, and I think going with default behavior is really dangerous and something we shouldn't do as an MVP. If we just ship a feature flag that says, hey, run this in pruned mode, and have it be non-default, then we can sort of gauge how many people are actually using this.
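As a rough illustration of how small that MVP surface could be, here is a minimal sketch of such a non-default CLI flag in Go. The flag names and the retention default are hypothetical; the transcript only specifies "a feature flag to prune everything beyond N blocks".

```go
package main

import (
	"flag"
	"fmt"
)

// Hypothetical flag names for illustration; the discussion only calls for
// a non-default switch that prunes everything beyond N blocks.
var (
	prune      = flag.Bool("prune-ancient", false, "enable pruned mode (non-default)")
	keepBlocks = flag.Uint64("prune-keep", 128, "number of recent blocks to keep")
)

func main() {
	flag.Parse()
	if *prune {
		fmt.Printf("pruned mode: discarding chain data older than %d blocks\n", *keepBlocks)
	} else {
		fmt.Println("default behavior unchanged: keeping full chain history")
	}
}
```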
B: I think it's relatively safe to assume that we won't have availability problems if this is just an optional, non-default feature that we ship. I think there's also much less of a coordination burden between clients if it's the non-default behavior and a, quote/unquote, insignificant number of users are using it. But if a significant part of the network starts using it, then coordination becomes more important, and we actually need to figure out how clients sync together and things like that.
B: The last thing that we discussed was: what is the best way to ensure availability into the future? Peter's top suggestion was to put blocks on IPFS. That comes with a couple of additional problems: we're not actually storing IPFS hashes in the header, so to be able to look up blocks, we'd have to either introduce another hash or have some sort of gateway service that converts one hash to the other, and that can be somewhat complicated to figure out.
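To make the mismatch concrete: the header commits to the Keccak-256 block hash, while IPFS addresses content by CID, so some service would have to hold the mapping. A hypothetical in-memory sketch of such a gateway (all names invented for illustration, not a real API):

```go
package gateway

import (
	"errors"
	"sync"
)

// HashGateway maps Keccak-256 block hashes (what headers commit to) onto
// IPFS CIDs (what IPFS retrieval needs). The mapping must be maintained
// out of band, which is exactly the complication described above.
type HashGateway struct {
	mu sync.RWMutex
	m  map[[32]byte]string // keccak block hash -> IPFS CID
}

func NewHashGateway() *HashGateway {
	return &HashGateway{m: make(map[[32]byte]string)}
}

// Register records the CID obtained when a block was added to IPFS.
func (g *HashGateway) Register(blockHash [32]byte, cid string) {
	g.mu.Lock()
	defer g.mu.Unlock()
	g.m[blockHash] = cid
}

// Resolve converts a block hash into the CID needed for an IPFS fetch.
func (g *HashGateway) Resolve(blockHash [32]byte) (string, error) {
	g.mu.RLock()
	defer g.mu.RUnlock()
	cid, ok := g.m[blockHash]
	if !ok {
		return "", errors.New("no CID known for this block hash")
	}
	return cid, nil
}
```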
B: Yeah, it requires cross-client coordination. So if we go with IPFS, we need to make sure that all clients have IPFS capability, to be able to read from IPFS, and so on. That adds another level of overhead. So something that we talked a little bit about was: what level of availability is required, what counts as a high probability of availability, and what incentive structures can we build around having availability? I didn't hear any really great suggestions on actual incentive structures, so if anyone has ideas, I'm still open to that.

B: And finally, something we talked quite a lot about, especially related to the discussion of IPFS, is where we store things. We currently don't have any discovery method for finding nodes that store a given section of blocks, so we're sort of dependent on Discovery v5, development of which is not that active or fast. We'd need Discovery v5 to be in place and deployed widely across nodes before we could use a mechanism inside the Ethereum network itself to distribute these blocks. But I think everyone in the circle sort of agreed that this would be the best scenario, because we keep everything in the Ethereum network.
B: We wouldn't have external dependencies on things like IPFS, and it paves an easier path for proper incentivization in the future. So yeah, there's an open question there of how we get Discovery v5 shipped as fast as possible, but I don't think we need that to start experimenting. I think we can build our MVP, have a feature flag that enables pruned mode, and basically see whether a lot of people want this and a lot of people use this.
B: Then we can look into how we ship Discovery v5 faster, or whatever else is necessary to get a higher probability of availability on the network. So that's roughly what we discussed yesterday, and I don't really think we can have a much more productive discussion with just the people here today. My hope is that everyone kind of goes away and thinks about this a little bit, and especially that we try to reach out to the community, try to see what people want, and whether this is indeed something that is desired and that people are asking for.
B: Yeah, so Discovery v5: I don't know exactly everything that's proposed in it. We're currently on version 4 of the devp2p discovery protocol, and this is the next version of devp2p discovery. It includes one important thing called node records, which is just a fixed-length string that can hold more or less arbitrary contents, and it becomes available in the DHT. So once you've downloaded the DHT, you know which nodes have which section of the blocks, so you don't actually need to go...
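For intuition, here is a simplified shape of a node record carrying a block-range advertisement. This is a hypothetical sketch, not the actual ENR encoding (the real records are signed, RLP-encoded key/value pairs):

```go
package discv5

// NodeRecord is a simplified stand-in for a Discovery v5 node record: a
// small, signed blob of more or less arbitrary key/value contents that is
// gossiped through the DHT. The "blocks" entry illustrates how a pruned
// node could advertise which block range it still stores.
type NodeRecord struct {
	Seq       uint64            // bumped on every update
	Signature []byte            // signature over the encoded record
	Pairs     map[string]string // e.g. "ip", "tcp", "blocks"
}

// AdvertiseBlockRange stores the retained range, e.g. "9000000-9100000",
// so peers can find block sections without extra round trips.
func (r *NodeRecord) AdvertiseBlockRange(rangeSpec string) {
	if r.Pairs == nil {
		r.Pairs = make(map[string]string)
	}
	r.Pairs["blocks"] = rangeSpec
	r.Seq++
}
```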
B: I think the argument is that IPFS is something that exists now and that we could put in next week if we wanted to. There are great bindings for Go, and we already have a bunch of stuff in Rust for the Parity client, so we could get it done really quickly. Making Swarm production-ready is a longer-term project, and we don't really have any estimates of how much effort that would actually take. But I would argue that we don't need to solve this problem right now.
B: I think so. The question is: can you incentivize both disk and bandwidth usage? I think you can, you know, but Swarm and a couple of others are working on decentralized, incentivized storage, and that's a hard enough problem. I haven't actually seen anyone even trying to approach incentivizing bandwidth. It's a lot harder. Yeah. Maybe.
E: Okay, so the idea (can you hear me okay?) with roadmapping in reverse is: we know what we want to achieve. We know what the final product is, which is compiler engines for both the major clients, Geth and Parity, and preferably the engine for Go would be written in pure Go, as the Go team likes their codebase to be entirely Go: not pulling in dependencies written in C++ or Java or various other languages, just pure Go. That would be ideal. So that's the ideal situation.
E: Interpreters are easy, but for compiler engine prototypes, the serious ones come from the browsers. In Chrome there's one written in C++, so there's a decent engine in C++. There are also good efforts on engines in Rust, because Mozilla is trying to rewrite basically all of Firefox in Rust. So for Parity, it's integrating the WebAssembly engine in Rust, and there's a lot of progress towards that.
E: Without a good compiler engine in Go, one workaround we could try is compiling the precompiles using an ahead-of-time compiler, whichever one works best, and then delivering the binaries, you know, the executables, to the clients, so that Geth would import and execute them. They might not like this. Well, we already know they're going to hate it, but this is one approach, short of having a full compiler engine written in Go.
E: Or we can just start out with an interpreter. This was a proposal from Alexey where, rather than trying to jump straight to having awesome compiler engines, we just start with integrating an interpreter engine. Then other teams will be incentivized, because they'll know: okay, WebAssembly is definitely in this area, so if we work on a faster engine for Ethereum, we know it'll actually be used and adopted on the mainnet or Eth 2.0, whichever. Without even an interpreter-based engine inside the Ethereum clients, they don't have that signal.
E
So
the
downside
is
this:
using
a
web
summary
interpreter
for
free
compiles.
It
only
worked
for
a
few
frickin
files.
A
lot
of
some
pre
compiles
are
prohibitively
expensive,
prohibitive,
prohibitively
slow
when
ran
inside
an
interpreter.
So
like
the
snark
pairing
free
compiles
are
way
too
slow
in
an
interpreter.
E: So another workaround, even less ambitious, is to simply use WebAssembly as a blueprint for precompiles. We would just be analyzing a WebAssembly blob to generate a gas rule, and this way the gas rule is simple enough to be implemented natively. This is basically how the existing precompiles are done.
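A sketch of what the output of such an analysis might look like: a linear rule distilled offline from benchmarking the wasm blob, simple enough for clients to implement natively. The struct and coefficients here are hypothetical, just to pin down the shape of a "gas rule":

```go
package gasrule

// LinearGasRule is one plausible output of analyzing a precompile's wasm
// blob offline: a fixed setup cost plus a per-input-byte cost, which a
// client can then implement natively without any wasm engine at runtime.
type LinearGasRule struct {
	Base    uint64 // fixed setup cost
	PerByte uint64 // cost per input byte, derived from benchmarking the blob
}

// Gas is what a client would charge for a call with the given input length.
func (r LinearGasRule) Gas(inputLen int) uint64 {
	return r.Base + r.PerByte*uint64(inputLen)
}
```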
E: This doesn't help introduce WebAssembly into the clients very much at all. It makes it maybe slightly easier to add new precompiles, but still, one of our estimates is that if we only do this, adding precompiles the old way, we can maybe do two new precompiles in 2019, when there's probably demand for five or seven new precompiles that people want. Doing it this way is a lot of work for each one.
E: Using it as the engine, I mean, you can probably specify it that way, but it's not going to be usable for all the precompiles that people will want. So be patient.
F: Alright, let's try that. So my question is: yesterday we talked about how difficult it is to estimate gas, and we identified a couple of scenarios. One is that it's constant gas; another is that you can cap it at an upper level; and the final one is that it's an undecidable problem, depending on the input. So my question is really: are there any precompiles we can start with that are easy to do, according to this increasing ladder of difficulty?
G: Yeah, things that are constant runtime, for example. If you know your elliptic curve operation, multiplying two points or whatever you're doing, is constant runtime, then that's the gas rule: you charge constant gas. Some have structure, like hashing: they work in blocks, so your input data is of arbitrary length, but they hash it in chunks, and, you know, they do digest updates and then finalize. So that has a fairly simple structure.
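For example, those two shapes translate into gas rules like the following sketch. The constants and the 64-byte chunk size are placeholders for illustration, not proposed prices:

```go
package gasrule

// Two of the shapes mentioned: constant-runtime operations (for example a
// fixed elliptic-curve operation) get constant gas, and chunked hash
// functions get a base cost plus a per-chunk cost.
const (
	ecOpGas      uint64 = 40000 // hypothetical flat charge
	hashBaseGas  uint64 = 60    // hypothetical setup/finalize cost
	hashChunkGas uint64 = 12    // hypothetical cost per 64-byte block
	chunkSize           = 64
)

// ConstantOpGas is the rule for a constant-runtime operation.
func ConstantOpGas() uint64 { return ecOpGas }

// ChunkedHashGas charges per processed block, rounding the input up to a
// whole number of chunks, mirroring how block-based digests consume data.
func ChunkedHashGas(inputLen int) uint64 {
	chunks := (uint64(inputLen) + chunkSize - 1) / chunkSize
	return hashBaseGas + hashChunkGas*chunks
}
```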
G: So yes, the precompiles we're looking at have this structure that we can analyze. Arbitrary code is hopeless, as you mentioned, that's the halting problem, but for what we're doing, we think we can have reasonable gas rules for the precompiles that people are interested in. One more point I just wanted to make: Casey said that the interpreters might be too slow for certain things, like pairings. I think that might not be the case; I think we need more benchmarks for that kind of stuff.
E: We can maybe do two new precompiles that are too complex to run fast inside the interpreter, and for those, yeah, we want to use the WebAssembly as a blueprint to generate the gas rules. But still, doing more than two seems unrealistic, with all the testing and implementation that clients would require to add those precompiles.
A: Okay, so, going back to my previous question: I'm just looking at the objectives, because I'm determined to keep people on the objectives today. So we established, sort of vaguely, the initial set of changes for May 2019; that's the first one. The second objective is to establish a framework for designing, evaluating, and comparing the change proposals.
A: So the question to you and to your team is: if somebody wants to propose something for ewasm, and maybe you could write this out or something like that, what are the things that they need to consider, so that they don't throw really un-thought-through proposals at you? What are the steps that they need to go through to describe it?
A: Do you actually want people to come up with alternative things? And, within the framework that I'm talking about, what are the questions people need to answer? What are the things that they need to consider before they come to you and say: hey, I've got an idea, I've got an alternative proposal to what you're doing, or something like that?
A: So, you might remember yesterday I put up the list of four questions, right? If we were to keep those questions, and obviously we added a third thing today, then another consideration you would add to that is the client code, and some of the client teams being opinionated about what should go in and what shouldn't. What other things would you say are important?
A: So if you have any ideas, through today or tomorrow, to add to it: I think I'm going to start with the questions that were asked yesterday, add what you've done today, and we can add some more things about all the considerations that people have to think about if they want to participate and give you alternative proposals. Okay, cool.
B: I just wanted to add some comments on specifically this, and on some of what you said about clients and codebases as well. Something we have to keep in mind with precompiles is that there are some precompiles that will never be anything but x86 assembly; anything else is just too slow. It doesn't matter if it's wasm compiled to assembly, that's going to be too slow. So there's a certain class of precompiles that are assembly-only, that require hand-optimized assembly code. Then there's a second class, which I think this fits really well, where it needs to be fast but not extremely fast, where it can be compiled WebAssembly. And then there's a third class where it just needs to be faster than the EVM, and with a really good interpreter, interpreted WebAssembly might actually be faster than that.
A: About that: if we think about this, is this the G minus two? Did you start with G? So if we think about G, right, was it G? Yeah. So if we have a very good compiler, maybe an ahead-of-time compiler or something, which can compile, let's say, pairing code written in WebAssembly into machine code, do you think it's going to be fast enough or not, or does it have to be handcrafted assembly?
A: Okay, I would still like to separate these things, because, you know, if we can increase the gas limit or something to make those things useful, then maybe. Because what it looks like you're saying is that for some of the precompiles the whole approach is probably kind of flawed, right? The whole approach of trying to do the precompiles in ewasm is flawed.
H: There are applications where you would need to do pairings infrequently, but I think the issue is, if you price the gas costs relative to the worst performance, you're discouraging developers from using it. So it's probably not the best approach to take in the short term, just because it's not going to encourage use. But actually, just following up on that, and maybe I'm misunderstanding: all the clients already have an optimized pairing implementation, yeah? What is hard about calling that from the interpreter?
B: So then you would gain nothing. But, I think, we will always have this super-optimized pairings stuff. I don't think it will discourage use; even currently its use is limited because it's too expensive, or too slow, and I think that will remain the case for those. But there's a whole sea of other precompiles that people want to add: they want to add different hash functions, they want to add a ton of different stuff.
A: So, just to clarify: one of the reasons why ewasm was included, why we're trying to get it earlier than Ethereum 2.0, is because we see it as a meta-feature. Instead of core client developers working on specific precompiles, which is really a lot of work, we basically deliver one engine which could implement any precompile.
A: So it's like an optimization of our time, sort of: instead of trying to optimize individual things which might be used by quite a few contracts, we actually just solve the whole class of problems. Probably not in the most optimal way, but that is kind of what we were going for; the breadth was the point.
G: I think Frederick made the best point: there's some class of problems that are reasonable for interpreters; there's some class of problems that are not reasonable for interpreters; and there's some class that we can't even use with Ethereum now, because we would need FPGAs or ASICs. So there are classes of problems.
G: The interpreters give us access to some sorts of problems, like hashing, like Blake and certain things, but perhaps not other things. But this is a process: as we go, as we improve, we shift our gas prices from interpreters to compilers, as our compiler infrastructure improves. We have to take a first step. This isn't going to just magically appear; compilers that are, you know, audited and everything aren't going to just appear. We have to take a first step, and that's the reason we want to start with interpreters.
I: Found it. So I just want to have a brief meta-discussion. Me and Greg are here from the perspective of dapp developers, which means that we're not as much in the know about the real problems that the core developers are having, but I think it's good to just step back a little bit from these specific implementations and look at the higher-level problem that we're trying to solve here, because I don't think there's a lot of clarity right now about what specifically the problem is.
I: Operations can suddenly become more expensive. Gas is a very crude signal that we have, and any sort of discrepancy between this gas cost and the actual resource consumption of the system, which could, in theory, grow without bounds over time, will inevitably become a huge performance problem in the system. So we need to make sure that whatever the actual cost of something is and whatever the gas cost of something is, they stay within some finite bound of each other; otherwise this will inevitably lead to problems.
I: So, now that we've sort of defined the problem space, the other thing we need to do is define the solution space. What are our options? What are we willing to do? What are we not willing to do? Where are the trade-offs? How much pain are we willing to suffer in order to solve certain things? How urgent are things? One thing I realized is that, at least in some respects, nodes are not fully optimized.
I: Fortunately, it seems that nodes in many critical areas are already heavily optimized, and where they aren't, we should be able to estimate how much of an improvement we can expect from future development, and how big the impact of certain protocol changes would be. Another goal that we have is to remain implementation-agnostic: we kind of want to give all the nodes the freedom to implement things however they see fit. But this is directly at odds with having accurate gas costs and solving concrete performance problems, because those are always dictated by a particular style of solution. So these two points in the solution space are actually a trade-off, and we need to collectively agree on where we want to be between one and the other.
I: Another question is: what do we want to break in the protocol? The protocol has a lot of things it has historically guaranteed by virtue of the yellow paper, and it is obvious now that we're going to have a series of hard forks that are going to break some of the things that were codified in the yellow paper. It's not entirely clear where we draw the lines here, what the boundaries of this are. For example, it seems that we are completely fine with destroying gas token; the storage changes and some of the other changes are going to utterly destroy it.
I: However, we've kind of talked about cases where an assumption broke: when storage under the 2300 gas stipend stopped working, it caused a big uproar. So there's a trade-off between core developer resources and urgency. This sort of defines the solution space, and again, I'd like to give the dapp developers' perspective here.
I: We specified the current behavior really nicely in the yellow paper, but what is not clear in the yellow paper is how it will evolve going forward, and it would be nice to add that now. Big shops, let's say Maker, 0x, you name it: honestly, break whatever you need to break; we can migrate. We have designed our systems to be upgradable.
I
We
have
developer
resources,
we
have
the
skills
in-house
to
deeply
understand
the
and
the
changes,
and
if
you
just
give
us
at
least
a
path
to
a
migration,
we
will
be
able
to
find
it
implement
it
and
take
it.
So
don't
worry
too
much
about
about
us.
Well,
you
should
be
worried
about.
I
presume
is
the
medium
scale
developers
and
the
individuals,
because
they
might
not
have
the
resources
to
implement
migration
paths,
or
they
might
have
locked
themselves
in
the
position
where
a
migration
part
is
really
hard.
I: Let's say, for example, that I'm an individual who thought it was fun to become an absolute 0x hodler, and I locked all my 0x tokens into a contract that locks them for three years, and now storage rent comes and this contract gets slowly drained. What happens to my tokens? Is there a migration path here? That's the sort of thing that I would assume is a bit more challenging. So yeah, my key takeaways here are: we need to...
A: Hello. So, you probably remember I showed this yesterday. My kind of premise was that we had four main...
A: If the sync takes more than an hour, for example, which it could, then by the time the sync is complete, the peers would have pruned the state that this new peer wants to sync, and that clearly gets worse as the state size increases. So what we want to do is run this simulation to determine what that function is.
A: We think that this function has three arguments: state size, bandwidth, and pruning threshold. So what is the success rate in this case? Maybe we could try to investigate the dependency there: what happens when your state size starts growing? And the next thing we could do is that, for the other things, for numbers 1, 2, and 3, we have an idea of what the function is, but we don't know the coefficients.
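A minimal sketch of that simulation's core predicate, under the assumption (mine, for illustration) that a sync attempt succeeds if the state download finishes inside the pruning window:

```go
package simulation

// Params are the three arguments named in the talk, plus the block time
// needed to turn the pruning threshold into a wall-clock window.
type Params struct {
	StateBytes       float64 // total state size to download
	BandwidthBps     float64 // effective download bandwidth, bytes/sec
	PruneThreshold   float64 // blocks kept before peers prune
	BlockTimeSeconds float64 // average inter-block time
}

// Succeeds reports whether a single sync attempt finishes before the
// serving peers prune the state the new peer started from. The success
// rate would come from sweeping these parameters.
func Succeeds(p Params) bool {
	syncTime := p.StateBytes / p.BandwidthBps
	pruneWindow := p.PruneThreshold * p.BlockTimeSeconds
	return syncTime < pruneWindow
}
```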
A: Different clients will probably have experienced this problem at different times, at different sizes of the state. So an essential thing to look for is: when is block processing going to take 10% of the inter-block time? When is it going to take half of the inter-block time? Because all these things will start affecting more and more facets of the performance. So that's kind of the plan. The next thing I'm going to show you is the additions that I've made to the state rent proposal since yesterday.
A: Yes, so here, remember, yesterday I was trying to explain the lockups with this analogy. I think Andrey came up with a different analogy, which I didn't put into any pictures, but yesterday I used an analogy where the contract is like a glass separated into sections. Here we've got six sections: basically, this contract has six storage items, and two of them currently have lockups on them.
A: So the idea is that, after the introduction of lockups, all the new contracts will always be full. In this example, you create the contract, it's got no storage, and then you have an SSTORE which sets some values, and it keeps growing. Then you see that at some point, when you simply change a value, the storage size doesn't change and no lockup is necessary, so the tx origin doesn't need to pay anything. But when you subsequently free an element of the storage, the excess lockup gets returned to the tx origin.

A: In the case where we have pre-existing contracts, this picture demonstrates that we start making some changes, which could be adding a new item, which needs to have a lockup with it; or, even if we just modify an item in a second transition, we still have to put up the lockup, although the storage size doesn't change. And then we do another modification, and then it...
A: The reason for that is that the whole reason the lockups were introduced is to provide the existing contracts with a migration path. And of course, if you need a new opcode, it means you'd have to rewrite the contract; so if we want to keep the existing contracts for a while, while they're migrating, we have no choice but to modify the semantics of the current operations. And actually, something came up this morning which I didn't put into the presentation.
A: The explanation is that if there are multiple contracts which have a synergetic relationship, let's say three of them using each other, then these three contracts will have to upgrade stepwise. The first contract rewrites its code but keeps serving its migrating clients; in the meantime, the contracts which use it have to use the old version. Then they say: okay, we're done migrating; now contract number two can switch to the new version and start migrating itself, and so forth.
A: We could discover it using simulations, or something like what I just mentioned: at what sort of storage size is everything going to stop working? Then we also have some sort of target lockup: how much ether do we want to be locked up when we completely reach that target storage size? And we just divide one by the other.
A: So if we, for example, take five hundred million items, which is about three times more than we have at the moment, and we say, okay, in order to completely fill this up we will require 10 million ether to be locked up, then we arrive at a lockup price of 0.02 ETH per item, which is quite expensive, I would say. And what else did I add? I fixed this diagram from yesterday.
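The arithmetic of that example, spelled out:

```go
package rent

// LockupPricePerItem reproduces the worked example from the talk: a target
// of 500 million storage items and 10 million ether locked at full capacity
// gives 10e6 / 500e6 = 0.02 ETH per item.
func LockupPricePerItem(targetLockedEther, targetItems float64) float64 {
	return targetLockedEther / targetItems
}

// LockupPricePerItem(10_000_000, 500_000_000) == 0.02
```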
A: Okay, so here I clarified that the pay-rent opcode actually has two arguments, the target contract and the amount, which means that you can call it from third-party contracts to top up somebody else's rent balance.
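A hypothetical sketch of those semantics; the account fields and names are invented for illustration, and the transcript only fixes the two-argument shape:

```go
package rent

import "errors"

// Account is a simplified account model: a regular balance plus a prepaid
// rent balance, as described in the state rent proposal.
type Account struct {
	Balance     uint64 // regular balance (simplified units)
	RentBalance uint64 // prepaid rent
}

// PayRent mirrors the two-argument pay-rent opcode: it moves value from the
// caller into the target's rent balance, so the caller and the target need
// not be the same account (third-party top-ups are allowed).
func PayRent(caller, target *Account, amount uint64) error {
	if caller.Balance < amount {
		return errors.New("insufficient balance for rent top-up")
	}
	caller.Balance -= amount
	target.RentBalance += amount
	return nil
}
```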
A: And lastly, again, we had a discussion yesterday about what happens if we get the constants wrong. In this proposal there are four different constants that have to be introduced: the account rent, code rent, and storage rent, for the different aspects of the rent, as well as the lockup price.
A: So if we realize that we have underpriced or overpriced the rent, then we could probably just change it with a hard fork, and at the moment I don't see any major problems with that. But the lockup price is a bit different, because we currently have one simple counter in each account that counts how many lockups we have. So if we decide to change the lockup price with a hard fork, what we'll have to do is introduce a second counter into each contract, for lockups at the new price.
A: In this case, the condition (remember, there was a condition that determines whether the contract is full, which means that it doesn't have to pay storage rent) will change from simply "storage size equals lockups" to something a bit more complicated but still manageable, which involves the two prices, the old one and the new one. It basically says that we track both of these counts, and if we need to do a third hard fork, then of course we can make this formula a bit more complicated.
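A sketch of that fullness condition, under the assumption (mine, for illustration) that the two counters simply partition the locked items between the pre-fork and post-fork prices:

```go
package rent

// ContractRentState tracks the per-contract counters described above.
// Before a price change only one lockup counter exists; after a hard fork
// changes the lockup price, items locked at the old and new prices are
// counted separately. Field names are hypothetical.
type ContractRentState struct {
	StorageSize uint64 // number of storage items
	LockupsOld  uint64 // items locked at the pre-fork price
	LockupsNew  uint64 // items locked at the post-fork price
}

// IsFull reports whether the contract is exempt from storage rent because
// every storage item is covered by a lockup at either price.
func (c ContractRentState) IsFull() bool {
	return c.StorageSize == c.LockupsOld+c.LockupsNew
}
```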