From YouTube: Ethereum 1.x Morning [Day 1]
A: Yeah, totally. All right, cool, we're just getting set up here. Who is and who isn't in the AllCoreDevs channel? Pretty much everyone here should be in the AllCoreDevs channel on Gitter — it's gitter.im/ethereum/AllCoreDevs, all one word. If you're not in there, join it; there's a lot of good discussion in there. That's where the core developers meet up for Ethereum, and if there's, you know, something major going on, we meet up in there.
B: So, a very short presentation just to give everyone an intro into the simulation working group. The simulation working group started in Prague as part of the Eth 1.x effort, and the goal is to do some analysis that will help the community develop the roadmap. One of the first things to do is talk with people, find out what analyses people have done, and then start exploring ways to do those analyses. That requires collaboration, communication, and openness, because we need datasets, we need code, and we need different groups to coordinate what they're doing.
B: Basically, there was a question that came up in Prague about why uncle rates are dropping, and what we can do to help understand what uncle rates represent on the mainnet and how they impact clients and network protocols. So, along the way, there are several steps: collect datasets, develop models and test plans, determine how these potential understandings are simulated and emulated, and share those results with the community.
B: We have a few datasets that we've collected. A lot of you have provided us with some ethstats — thanks, Hudson, for sharing stats from ethstats — and Etherscan, those public charts that are available. And Zach will talk about the nodes-on-the-network proposal, which is essentially launching nodes on the mainnet to collect some statistics.
B: There are two simulation and emulation frameworks being used and explored. Nicolas and Vanessa are working on the Wittgenstein simulator, in which you can model different consensus algorithms; and because uncle rates are related to network protocols, it's being used for that as well. And Zach will be able to talk about the testing platform, which is essentially a platform being designed to reduce some of the overhead that goes into network protocol analysis, usually associated with, say, latency and low-level network access.
B: The roadmap for the working group is collecting data, doing analyses, and working on the code bases. One of our goals here this weekend is to figure out who else we can talk to, who else wants to get involved, and to communicate with the other working groups. Even though we're working on uncle rate simulations right now, we believe the working group can help address other simulations, statistics, or analyses that need to be done: state rent, or state pruning.
B: There are a lot of statistics and analyses that have to be done, and we want to find a way to do them, perhaps more generically, with tools that can be reused rather than each group having to build its own — so, kind of join forces and help each other as well. Some of the other areas we're going to explore are gas costs, because things like state rent — all of these analyses deal with the cost of transacting on the network, and we need to better understand what impact each of these components has on those costs.
B: Okay, shall I invite the next group up? Shall I invite ewasm? Sure.
C: First, two problems. The first problem: WebAssembly is great — it's designed to execute arbitrary code at near-native speeds — but it's not designed for consensus. It's pretty close, though. It's close enough that we think we can choose a subset of it that is designed for consensus, where we have guarantees about what's going to execute, so we can avoid consensus bugs. So that's the first problem: WebAssembly is great, but it's not quite there yet with the engines — we have to modify things, we have to choose subsets of it.
C: The other problem is that precompiles are a burden. We haven't added a lot of precompiles for a long time, because the implementing and auditing is done by each client, gas metering is awkward, testing — who's going to do all this work? So people are reluctant to keep adding these precompiles. The idea was to have one precompile, called ewasm, and that is going to be the last precompile; then we can launch things on this ewasm precompile. So those are the two problems.
C: So the solution is to replace the current precompile infrastructure with WebAssembly, and this is a first step in a journey towards general, user-deployed ewasm contracts. The advantage of precompiles is that we have a subset of ewasm; we can start with a first, small step where we're sure we know what code we're executing. We can check things, and we can hopefully audit and even verify things before we even start.
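To make the "one last precompile" idea concrete, here is a minimal, hypothetical sketch of how a client could route the existing precompile addresses through a single wasm dispatch point. The module names and the `run_wasm` callback are assumptions for illustration only, not the actual ewasm project code:

```python
# Hypothetical sketch: dispatching precompile calls to audited wasm modules.
# The module filenames and the `run_wasm` runtime callback are placeholders.

WASM_PRECOMPILES = {
    # precompile address -> audited wasm module implementing it
    0x01: "ecrecover.wasm",
    0x02: "sha256.wasm",
    0x03: "ripemd160.wasm",
    0x04: "identity.wasm",
}

def run_precompile(address, input_data, gas_limit, run_wasm):
    """Route a precompile call through the embedded wasm engine.

    `run_wasm(module, input_data, gas_limit)` stands in for whatever metered
    wasm runtime the client embeds and returns (success, output, gas_used).
    """
    module = WASM_PRECOMPILES.get(address)
    if module is None:
        return (False, b"", 0)   # not a known precompile address
    return run_wasm(module, input_data, gas_limit)
```

Under this sketch, adding a new precompile means registering one more audited module rather than re-implementing it natively in every client — which is the burden described above.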
C: The deliverables are listed — I think some of it is being included; okay, there it is. So we're going to deliver wasm modules for the existing precompiles and a bunch of other precompiles, the specifications for the interface, and the implementations of those specifications. That's a lot of work in each client — Geth, Parity — all of these things take a lot of work, and the specification itself is a lot of work. Gas metering is a big thing that we'll talk about; it's really interesting.
C: Actually, it would be really interesting if we could find some way to automate the metering of precompiles, and there are some options. The goal of this first step is to have a streamlined process for deploying precompiles, where someone can give us a module and we can deploy it — and maybe eventually we'll let users deploy their own precompiles, with everything automated. That's the ultimate goal, but we're taking a first step; we're just exploring ideas and building experiments, and nothing is finalized yet.
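One commonly discussed way to automate metering is to statically inject a gas charge at the start of every basic block. The sketch below is only an illustration of that idea on a flattened instruction list with made-up costs — real wasm metering operates on the module's structured control flow:

```python
# Illustrative sketch of static gas-metering injection. Instruction costs
# and the flat instruction list are simplifying assumptions.

BLOCK_ENDERS = {"br", "br_if", "call", "return", "end"}

def inject_metering(instructions, cost_of):
    """Prepend a single ("charge_gas", n) pseudo-instruction to each basic
    block, where n is the summed cost of the block's instructions."""
    metered, block = [], []

    def flush():
        if block:
            metered.append(("charge_gas", sum(cost_of(op) for op in block)))
            metered.extend(block)
            block.clear()

    for op in instructions:
        block.append(op)
        if op in BLOCK_ENDERS:   # a branch or call ends the basic block
            flush()
    flush()
    return metered

# Straight-line arithmetic followed by a branch becomes one up-front charge:
print(inject_metering(["i32.add", "i32.mul", "br_if"], lambda op: 1))
```

The appeal is that the charge is derived mechanically from the module itself, so no one has to hand-price each new precompile.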
C: So we're still open and we're still discussing, but this is roughly the state we're at right now. There are discussions — perhaps we'll go straight to user-deployed ewasm contracts, perhaps we'll start with precompiles — and there are a lot of other questions. We'll talk about it a lot this week, so really exciting stuff, and we hope to hear from everyone. There are some discussion channels that are available as well.
D: Okay, so I think it makes sense to do a little introduction. This is my point of view on Ethereum 1.x and, of course, a lot of people can object, but I think it's useful to explain what binds us together, and why these four working groups came up and not others — some of it is sort of accidental and some of it is not. So let's look at the short history — the prehistory, I would say — of how this whole thing came about, from my point of view.
D: First of all, in Cancún at DevCon 3, Vitalik gave his talk about a "modest proposal" for Ethereum 2.0, in which we first heard the proposition that the existing Ethereum 1.0 should stay safe and conservative, and all the breaking innovation should go into Ethereum 2.0, into the shards. That included ewasm — I think this was the first time I heard that ewasm was probably going to be shifted towards 2.0 — and other things as well.
D: Then there was the "do you want to become a Casper validator?" question, which sort of signalled that Casper was very near — it was Casper FFG at that point — and people got really excited about it. One thing in particular was very exciting: the slashing condition was changed so that if your equivocation — voting on multiple blocks at the same height — didn't cause too much trouble, then your slashing would be less.
D: That meant there would be less risk in participating in validating pools, and another thing I remember essentially enabled people to validate from their laptops, which was really cool. But then, in June 2018, something changed. In one of the All Core Devs meetings — meeting number four, I have a link there — there was basically a pivot announced by the research team. Essentially, the initial idea had been to have Casper as a contract on Ethereum 1.0, which would be validating the votes and doing other things.
D: The sharding work was happening in a different research group, and they realized they were researching very similar things; they also decided that it was going to be very challenging to validate all the votes in the contract, simply because there were too many signatures to verify. And so the decision was made to join the Casper and sharding research groups together and not do contract-based Casper.
D: This is where we start hearing things like "beacon chain" and "Shasper", which is the hybrid of Casper and sharding. And then, if you look at what happened in October and November — which were actually the events that led to the creation of this experiment of ours, Ethereum 1.x — we came to Prague and we kind of realized something.
D: This is going to take a bit more time than we thought before. What I mean by "this" is Serenity, that is, Ethereum 2.0. And when I talk about Serenity, I don't mean the beacon chain being live — it might even be live at the end of this year — but actually, functionally superseding Ethereum 1.0 might take longer, because the current plan is to do phase 1, phase 2, phase 3, and it's phase 3 — or maybe phase 2, as they call it, if you count from zero — where you have the execution engine and everything starts working.
D: And also, I think somebody said that then you also need to go through the trial by fire, which means surviving a few attacks and things like that, until people become comfortable using the chain. So we're talking about, optimistically, three years; pessimistically, five years, maybe even more. As I was joking with Vitalik, I will retire by then.
D: So why did this whole thing start? When I came to Prague there was a kind of cognitive dissonance for me. Around the rooms I saw lots of excited people who were building stuff on Ethereum — they were really cheerfully excited — but I also saw some different people who were going around thinking that the sync is taking too long.
D: There's too much data for the nodes to carry, because it takes more and more space, and stuff like this. So among the people I talked to I started hearing this sentiment, and then eventually we just started talking to each other about it, and this is how Ethereum 1.x came about. Again, this is my personal perspective.
D: Please correct me if I'm wrong, but the main point of Ethereum 1.x, from my point of view, is to develop and design some changes that we can make to the existing network so that it survives until Ethereum 2.0 functionally supersedes it and people do migrate — and we don't know how long that is going to take. But there is a danger in not making any changes.
D: As for deliverables — again, this could change as well — out of the three working groups there are two things which definitely might, or rather will, require hard forks if they are ever going to be accepted; and storage pruning, at the moment, we think might not require a hard fork, but it might, if we find out there's something we have to do to make the whole thing better.
E: All right, I don't really have a presentation; I'm just going to talk through Péter's proposal, because this is originally Péter Szilágyi's proposal on chain pruning, and I've been involved in the discussion for a long time as well. So I think it's good to get a bit of the history and a sense of what chain pruning is all about. We all know that a full node takes a lot of space.
E: It takes about 140 gigs or so to run a full node, and it's not really a reasonable expectation for someone to run one on their laptop anymore, and that's something we want to address. This has been a long discussion — Bitcoin introduced this a long time ago, and pruned nodes are pretty common there — and Rob Habermeier, one of the main consensus developers at Parity, has raised it as well.
E: I would say it's been a discussion for a long time, but no one has really been willing to take action, and now we're starting to really feel the pain, because the number of nodes on the network dropped drastically over the course of 2018. I think we need to do something to encourage people to run more nodes, and this is one of those approaches. So, just going over the proposal from Péter: he highlights that cross-client coordination is important here. Chain pruning is something that any client can implement today.
E: They can do it on their own if they want to, and it's not really that big of a problem. But if Parity implemented chain pruning by putting all the blocks on IPFS, and Geth implemented it by putting them all on BitTorrent, then we can't really sync from each other anymore, and that's a bit of a problem. Furthermore, if Trinity then wants to join the network and has neither of these methods of retrieving historic blocks, they can't join the network anymore, because they would need to implement one of those methods to even get the history.
E: Péter presents a lot of good data on what we're actually talking about in terms of size. We have the block bodies — the header chain will always have to be downloaded and maintained by every client, but just the block bodies are on the order of 100 gigs. Then we have logs that can be deleted as well, and then, of course, the associated indexes for all these things take up some space too.
E: So the requirements of a system like this are around data retention: we need to ensure that we have data availability in some way. We could say there's probabilistic availability — some number of people will always run in a non-pruning mode — but it's an unsatisfactory solution to just say it will probably be fine, so we would like to have some stronger guarantees. That could mean doing things like putting the block bodies on IPFS, where they're discoverable and, if you want to download them, freely available.
E: We could say that, you know, the Internet Archive, Parity, and the Foundation will always pin all of this content, so we have multiple replicas of it — there are methods like that. If we start delving into actually incentivized — like on-chain incentivized — methods of ensuring availability, it becomes much more complicated; Péter's proposal doesn't cover that, and I think it goes far beyond what we should be dealing with today, but I'm interested if people have ideas about it.
E: Obviously, pruning history makes syncing different. Parity's warp sync right now doesn't rely on history — it downloads a snapshot and back-fills history. Fast sync is a little bit different, but it would also work well. A regular full sync, however, becomes much more complicated if it's hard to find historic blocks. So basically we would be crippling regular sync to some extent — but a regular sync takes weeks anyway.
E: So the question is whether anyone is actually doing that, and they're not. And just covering a little bit of what I hope to get out of today: I think the chain pruning proposal is the least controversial. It's the easiest one to talk about, the easiest one to implement, and it's something we could have a very minimal version of, like, next week.
E: But if we go 10x on the mainnet, then that's not going to be possible for much longer. So if the other proposals go through, then this is necessary; if they don't go through, this is a nice-to-have. So I'd say the mission is to get to a point during these couple of days where we agree on what we should put forward to client implementers as "this is what we want". Part of the goal of that is to establish what problems and concerns exist, especially from dapp developers and other people who use history in some way — and not just take the perspective of someone trying to run a full node on their laptop — and to figure out a bit of a roadmap. So, as I said, the simplest possible solution is something that we can do next week.
D: So I would like to present the framework, which is my attempt to set out the questions that a proposal should be able to answer, and the problems that it has to address, so that when we look at the proposals we don't have to do a lot of interrogation about "what about this, what about that". So I start with a list of questions about how we get from the beginning to the actual proposal.
D: We're going to go through every question in turn, but we'll start with this one. Again, this is my personal perspective, and I would encourage people to come and challenge me on it if they see something I've missed or something they disagree with — this is exactly why we're here; I want to hear how this needs to be changed. So I'm claiming that there are two main values that the state provides to the applications, or to people.
D: Essentially, the whole network works really hard at keeping your claims available for a very long time; and secondly, the state in Ethereum allows this kind of synergy between different smart contracts — that's the higher-level value. When I was writing the first proposal, I probably thought more about number one and not a lot about number two, because number one is where the beneficiaries of the value are the people who are holding the claims.
D: So then, why is this so? Again, this is my claim, which I need to run past everybody, and depending on the answers to these questions we either say, okay, we do need these measures to control the size, or we say no, we don't need these measures and there's no actual problem that we're solving. The important bit here is where I depart from earlier proposals.
D: When I read the historical proposals for state rent, most people were trying to solve the problem of the cost of storage — how much does it cost to buy this 4-terabyte hard drive, how much does it cost to do this and that. I completely ignore that question here, because I think it needs to be solved in a different way.
D: Here it looks like the same function, but the actual coefficient is much larger, because you basically end up reading much more data: you're not just reading the piece you're accessing, you also have to read the siblings in order to recompute the Merkle tree. And here people might say, okay, what about caching — can you just cache things?
D: But essentially, if you know the way a client caches things, you can specifically construct transactions which generate cache misses, and if you want to be resistant to those attacks, then the only caching strategy you can implement is randomized caching — you just hold a random sample of the state, and the sample is going to be slightly different on every client. Essentially, that leads me to believe that the read-cost function is simply logarithmic.
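As a rough, back-of-the-envelope illustration of why reads scale logarithmically with state size (ignoring extension and leaf node optimizations, which is a simplifying assumption):

```python
import math

def trie_nodes_touched(n_accounts, branching=16):
    """Approximate trie nodes read to look up one key in a hexary Merkle
    Patricia trie with n_accounts leaves (rough: real tries are shallower
    thanks to extension and leaf nodes)."""
    return max(1, math.ceil(math.log(n_accounts, branching)))

# Growing the state 100x only adds a couple of levels per lookup:
for n in (10**6, 10**7, 10**8):
    print(f"{n:>10} accounts -> ~{trie_nodes_touched(n)} nodes per read")
```

Each level also drags in its sibling hashes, which is the larger coefficient mentioned above.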
D: The third point is that currently the state, I think, with the most compact representation, is around maybe twelve gigabytes or something like that; it depends. And I think the most efficient way to sync is a snapshot sync: when a node joins the network, instead of reading all the blocks and trying to re-execute them from genesis, you can do a snapshot sync.
D: So you can see this is potentially an even bigger deal than the previous two. And then number four, which is very little known but is probably known to some of the client developers, is that as the state size grows, more nodes start pruning the state history more aggressively. So when your node is surrounded by nodes which are very aggressively pruning the state history, it could be that it takes you longer to sync than it takes them to prune away the state.
D: For example, you start syncing right now: you start syncing the state which is current now, it takes you two hours to sync, and in those two hours the chain progresses by five hundred blocks or so — and then it could happen that all the peers you're syncing from have already pruned that history, because for them it's too old, so they have nothing left to give you.
D: Well, you might already be halfway through, but the question is: are you going to start from the beginning and try again, or are you going to patch up the holes? I think Fred had — and I think Péter also had — some proposals to deal with that. And this is where I recently had an idea.
D: This is where we can involve the simulation group to help us figure out what the actual relationship is between the pruning threshold, the bandwidth, and the rate of success of the sync — basically, to simulate these sync failures: how often does the sync fail if you have this bandwidth and this pruning threshold?
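A minimal sketch of what such a simulation could look like — every number here is a made-up placeholder, not a measurement; the point is only the shape of the question (sync time versus pruning window):

```python
import random

def sync_success_rate(state_gb, bandwidth_mbps, prune_window_blocks,
                      block_time_s=15.0, trials=10_000):
    """Monte Carlo estimate of how often a sync finishes before the peers
    prune the state the syncing node still needs. The lognormal spread on
    the sync time is an arbitrary modelling assumption."""
    prune_window_s = prune_window_blocks * block_time_s
    ideal_sync_s = state_gb * 8_000 / bandwidth_mbps   # GB -> megabits
    successes = 0
    for _ in range(trials):
        sync_time_s = ideal_sync_s * random.lognormvariate(0, 0.5)
        if sync_time_s <= prune_window_s:
            successes += 1
    return successes / trials

# Placeholder inputs: 12 GB of state, 10 Mbps, peers keep ~1,000 blocks of history
print(sync_success_rate(12, 10, 1_000))
```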
D: Maybe we shouldn't just do rent; we could also do a little bit of mitigation — and if you know about other mitigations, please let me know, because we can add them to the list. The two mitigations I could think of are these. First, we could improve the latency of block processing by pre-warming the caches.
D: For example, we could look at the transaction pool, or get the miners to pre-announce the blocks, and figure out, with a certain probability, what part of the state those transactions are going to hit, and pre-warm it in the caches. So by the time the block comes, we already have a warm cache, and that reduces some of this latency. It is important to note, though, that it improves the latency but not the throughput.
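A tiny sketch of the pre-warming idea; the `txpool` and `state_cache` objects are hypothetical stand-ins for illustration, not any client's real API:

```python
def prewarm_caches(txpool, state_cache, max_txs=500):
    """Speculatively load the accounts that pending transactions are likely
    to touch, so block execution finds them already cached.
    `txpool.pending()` and `state_cache.load(address)` are assumed,
    placeholder interfaces."""
    warmed = set()
    for tx in txpool.pending()[:max_txs]:
        for address in (tx.sender, tx.to):
            if address and address not in warmed:
                state_cache.load(address)  # pulls the account and its trie path
                warmed.add(address)
    return warmed
```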
D: If you hit the point where processing the block takes almost as long as the block propagation — sorry, the inter-block time — then this is not going to help you; you're still hitting the limit. So if the processing of blocks takes longer and longer — for example, now it takes 400 or 500 milliseconds, and if we keep the state growing it will take one second, then 1.5 seconds — and we know the block time is only 15 seconds, then we would probably have to increase the block time or something like that if we don't want to hit some problems. So even if you do this latency mitigation, it doesn't address the problem of throughput. And another thing we can do — this is what we've been talking to Fred about — is to make some sort of clever syncing algorithms which let you improve the success rate of a sync, and things like this.
D: Okay, so I'm going to try to go faster from here. How could we manage the state? You're probably familiar with this stuff — I just took it from a book about feedback control systems. There is feed-forward control and there is feedback control; think about a thermostat — that's feedback control.
D: If you think about the EVM, then the gas schedule is the feed-forward control: we figure out how much each instruction should cost, and that should allow us to limit the number of computations the nodes will make. An obvious example of feedback control is the miner difficulty, which we can change after each block.
D: So if we want to do feedback control, then we need to have the state size observable within the protocol, and then somehow change the parameters to make sure that we stay within the bounds. If we want to use feed-forward control, we just use some benchmarking — for example, with the help of our simulation group — to figure out, say, if we want to fix the rent or fix other things, what the values should be; just do some simulations.
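To make the feedback idea concrete, here is a deliberately simplified, purely illustrative sketch of a difficulty-style controller that observes the state size each block and nudges the price of state-expanding operations towards a target; the target, the controlled parameter, and the gain are all arbitrary assumptions, not part of any proposal:

```python
def adjust_growth_price(current_price, observed_state_gb, target_state_gb,
                        gain=0.05, floor=1):
    """One step of a proportional feedback controller: raise the price of
    state-expanding operations when the state is above target, lower it
    when below. All constants are illustrative."""
    error = (observed_state_gb - target_state_gb) / target_state_gb
    return max(floor, int(current_price * (1 + gain * error)))

# Example: with the state above a 12 GB target the price drifts upwards
price = 20_000
for observed_gb in (14, 14, 13, 12, 11):
    price = adjust_growth_price(price, observed_gb, 12)
    print(observed_gb, price)
```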
D: Let me just stop there — I think this is enough granularity for this particular presentation. I'm just going to walk through this very quickly, to show you what the questions are that we want to answer, and maybe we can address the other bits in a different presentation. First of all, whether you think about feedback control or feed-forward control, what we need to think about is: what is our input and what is our output?
D: What do we want to regulate in our case? Do we want to regulate the actual state size, or do we want to regulate the rate of growth? You could come up with two different mechanisms there. And then we also need to define whether those metrics have to be in the state itself or outside the state. Obviously, if we do feed-forward control, they don't have to be in the state, because we just come up with some values.
D: If we want to do feedback control, they have to be in the state, because we need to be able to verify that the control is executed correctly — most clients now sync from a snapshot, without history, so they need to be able to verify, using only the data inside the state, that the control has been applied correctly.
D: Then another thing we need to think about is: what are we going to change in order to exercise our control? In this case we know there are about six different operations which can increase or decrease the state size, and we probably want to affect those operations somehow. I also suggest that we can do three different things: we can increase the cost of the actions that increase the state; we can make the actions which decrease the state more rewarding; or we can directly reduce the state size, which is what state rent does. These three approaches could be combined. The obvious way to increase the cost of the actions is to increase the gas cost of, let's say, SSTORE or something like that — say from 20,000 to 40,000. That has two effects, I would say: the first is scarcity, and the other is cost, via that scarcity.
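As a quick worked example of the scarcity effect (assuming, purely for illustration, a block gas limit of around 8 million, roughly what the mainnet had at the time):

```python
BLOCK_GAS_LIMIT = 8_000_000        # illustrative assumption

for sstore_new_slot_cost in (20_000, 40_000):
    slots_per_block = BLOCK_GAS_LIMIT // sstore_new_slot_cost
    print(f"SSTORE = {sstore_new_slot_cost:>6}: "
          f"at most ~{slots_per_block} new storage slots per block")
# Doubling the cost halves the maximum rate of state growth per block,
# regardless of any off-chain arrangement between senders and miners.
```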
D: But I would say that the second effect might actually be circumvented by some private agreement with the miners or with the mining pools, because in this sort of economic transaction there are only two parties — the transaction sender and the miner — so they can make some sort of agreement, and they can even make it appear that they don't have an agreement, by doing some sort of kickbacks or rebates.
D: So the open question is: do we think that the first effect is enough that we can ignore the fact that people can circumvent the second effect? Maybe it is. Another thing I want to talk about is the gas refunds. At the moment, the one idea for rewarding state-decreasing actions is the gas refund. Unfortunately, this mechanism is fraught with some drawbacks, because there is a cap there.
D: You cannot get more refunded than half of the gas you already spent, which makes it a bit tricky to actually use the refund. If you have a transaction that doesn't really spend much but does a lot of clearing, then most of the refund is unfortunately lost, because it would have to be much more than half of what was spent. There are reasons why this cap was introduced, but we could potentially come up with a solution to make this reward much more straightforward and effective.
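For reference, the current rule caps the refund at half of the gas used by the transaction, which is easy to see in a small worked sketch (the numbers are arbitrary):

```python
def effective_refund(gas_used, refund_counter):
    """Refund actually paid out today: capped at half the gas used."""
    return min(refund_counter, gas_used // 2)

# A cheap transaction that clears a lot of storage loses most of its refund:
gas_used = 60_000          # arbitrary example
refund_counter = 150_000   # e.g. accumulated from many storage clears
print(effective_refund(gas_used, refund_counter))   # -> 30_000; the rest is lost
```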
D: I'm not going to talk about the lock-ups right now, because they are described in detail in the second proposal, but this is one of the mechanisms to make the refund simpler. If you require everybody to lock up some ether when they expand the state, then you can actually return that almost in full, or in full, when they clear the state — and that refund should come not from the miners but from the protocol itself, which means you don't have to have a cap.
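A minimal sketch of that difference, assuming a hypothetical fixed lock-up per new storage slot (the amount and the accounting model are made up for illustration, not the actual proposal):

```python
LOCKUP_PER_SLOT = 10**15   # hypothetical lock-up in wei per new storage slot

def expand_state(account, new_slots):
    """Escrow a deposit, held by the protocol, when the state grows."""
    account["locked"] = account.get("locked", 0) + new_slots * LOCKUP_PER_SLOT

def clear_state(account, cleared_slots):
    """Return the deposit in full when the state shrinks: no half-of-gas cap,
    and nothing comes out of the miner's pocket."""
    released = cleared_slots * LOCKUP_PER_SLOT
    account["locked"] -= released
    account["balance"] = account.get("balance", 0) + released

acct = {"balance": 0}
expand_state(acct, 3)
clear_state(acct, 3)
print(acct)   # {'balance': 3000000000000000, 'locked': 0} -- the full lock-up returns
```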
D: I talked a lot about the hoarding problem: when you increase the cost of some old action, you have to announce it in advance, and people can try to prepare for that by hoarding some of the state and then reselling it, or just keeping it for themselves. Another example of how the system can try to evade the control is the private agreement with the miners, to avoid the increase in the cost.
D: So that's it for this part, but, to repeat what I said before, this is not actually finished. This is just an attempt to come up with the set of questions and things that every proposal has to look at, at least, and explain what it does about them — does this problem exist for it, and so on.
D: That way we can have a proposal from pretty much anybody, but every one of them has to have this set of questions answered, so we don't have to spend the time pulling that information out. Then we can just put the proposals next to each other and compare them based on these criteria — and if somebody knows about more things to add, please let me know. That's it.