From YouTube: Ethereum 1.x Afternoon [Day 2]
A
A
B
So let's take an example layer-two contract. The cost of call data is 68 gas per byte, which means 2,176 gas per word of 32 bytes. Now let's take a side chain that can accommodate 2 million users, each with a balance in two different token types. These are the rough numbers that we consider a minimum viable product at 0x. This won't even take us to a "the whole world is tokenized" scenario, in which case we need a million tokens and a billion users anyway. This already brings you to a balance tree.
B
It has 32 layers in it. So a Merkle proof in this balance tree is 32 words, which means about 70,000 gas of call data. Verifying it is super cheap: it's a couple of SHA3s and a couple of call data loads, only about 2k gas to verify the proof. But compare this to doing a storage update of the same single balance, which is only 5k gas. So we're talking about a more than tenfold more expensive solution to bring it off chain. This makes it entirely unfeasible right now.
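A rough sketch of the arithmetic just described, assuming the 68-gas-per-byte call data price and the speaker's approximate figures for verification and storage costs:

```python
# Back-of-the-envelope numbers from the discussion above (approximate, not
# exact protocol constants in every case).
CALLDATA_GAS_PER_BYTE = 68            # non-zero calldata byte
WORD_BYTES = 32

gas_per_word = CALLDATA_GAS_PER_BYTE * WORD_BYTES     # 2,176 gas per 32-byte word
proof_words = 32                                      # one sibling per tree layer
proof_calldata_gas = proof_words * gas_per_word       # ~69,632 gas of calldata

verify_gas = 2_000        # a couple of hashes plus calldata loads, roughly
sstore_update_gas = 5_000 # updating one existing storage slot on-chain

print(gas_per_word, proof_calldata_gas, proof_calldata_gas / sstore_update_gas)
# The calldata alone is ~14x the 5k-gas storage update, which is the
# "more than tenfold more expensive" point being made.
```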
B
This problem actually gets worse once you look at more sophisticated solutions. STARKs right now are limited by call data. You can verify a STARK in the EVM fairly easily, you don't even need a precompile for that, but the problem is that STARKs contain a lot of Merkle proofs, and these Merkle proofs right now cost millions in gas; it's hard to stay under the gas limit purely because of the call data. So if you want STARK-based layer-two solutions, call data needs to become significantly cheaper. Another thing is fraud proofs.
B
I know that the Truebit people have this mechanism where you can submit a fraud proof of incorrect execution, and they had trouble with this fraud proof, which contains a lot of Merkle roots into the memory that was used in WebAssembly. They had trouble guaranteeing that this fraud proof would actually stay under the gas limit, and if you have a system that depends on fraud proofs and you cannot guarantee that the proof actually fits in a block,
B
your system is set up for disaster: someone can do something fraudulent and you would not be able to prove it on mainnet. So it's essentially useless that way. Another thing: take, for example, the MiniMe token. This is a token that uses a tremendous amount of storage because it keeps all the old balances around, and the main purpose of doing that is that you can do things like forking the token at all balances, or implementing fork and voting mechanisms that use balances at a certain point.
B
So where does this number come from, the 2,176 gas per word? And Vitalik, please interrupt me if I get this wrong, but the way I understand it is that it started out with a particular maximum call data size per block of around 120 kilobytes. Dividing the gas limit by this gives you the minimum amount of gas per byte required to ensure that the call data is always under that particular threshold.
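A minimal sketch of that derivation, assuming an 8 million block gas limit purely for illustration:

```python
# If the worst case is a block stuffed entirely with calldata, the per-byte
# gas price must be at least gas_limit / max_allowed_bytes to keep the block
# under the target size. The 8M gas limit here is an assumption.
GAS_LIMIT = 8_000_000
MAX_CALLDATA_BYTES = 120 * 1024       # the ~120 KB figure mentioned above

min_gas_per_byte = GAS_LIMIT / MAX_CALLDATA_BYTES
print(round(min_gas_per_byte, 1))     # ~65, in the ballpark of the 68 gas/byte price
```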
B
So basically it starts from the most adversarial scenario of call data given the gas limit and the gas price of it, and asks how we need to set the gas price to prevent that from happening. I'm not exactly sure where this 120 kilobytes itself comes from. This could either be chain-growth type stuff, where we don't want the chain to grow by more than a maximum of a gigabyte per day or something like that, or it could be a bandwidth issue where transaction propagation grinds to a halt
B
if we go over this limit, or it could be a block propagation issue where blocks themselves don't move fast enough as soon as we increase it. I'm hoping that somebody can come up with a decent answer here soon of what actually breaks if we up this limit. That being said, in practice we don't even get close to this call data limit.
B
B
There are some auxiliary use cases that people have sporadically used, such as proof of knowledge: you can submit a hash of a document to the blockchain, it gets stored in the call data, and then I can always claim in court that I had this document in my possession before a certain time. The same with proof of publication: some algorithms require data availability, things where you need to prove that you actually published certain data; you can store that in call data. And the other way is to just use the Ethereum blockchain as your own distributed file system.
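A hedged sketch of the proof-of-knowledge use case with web3.py; the endpoint, file name, and use of an unlocked local account are placeholders, not recommendations:

```python
from hashlib import sha256
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))   # assumed local node

document = open("agreement.pdf", "rb").read()            # hypothetical document
digest = sha256(document).digest()

# A self-send whose calldata is the 32-byte hash: the commitment ends up in
# the chain's history (calldata), not in contract storage.
tx_hash = w3.eth.send_transaction({
    "from": w3.eth.accounts[0],
    "to": w3.eth.accounts[0],
    "value": 0,
    "data": "0x" + digest.hex(),
})
print(tx_hash.hex())
```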
B
C
B
...that it is safe to prune as soon as a decent amount of consensus and finality exists on a state. So, given all that, I propose the following. First of all: remove the zero/non-zero distinction. I didn't really get into this during this slide, but basically it is a remnant of using a run-length encoding in the wire format, which to me seems like an overemphasis in the gas model on a particular implementation, and as far as I understand, this run-length encoding isn't even used anymore. And right now people are mining
B
Ethereum addresses that have the maximum number of zeros in them, scattered throughout the address, which doesn't even help for run-length encoding, so yeah, I think this can go. I would also argue that we should lower it to a hundred gas per word in order to make Merkle proofs economically competitive with chain storage.
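A small sketch of why roughly 100 gas per word would make the earlier Merkle-proof example competitive with storage:

```python
proof_words = 32
current = proof_words * 2_176     # ~69.6k gas of calldata at today's price
proposed = proof_words * 100      # 3,200 gas at ~100 gas per 32-byte word
sstore_update = 5_000             # one existing-slot storage update

print(current, proposed, sstore_update)
# At the proposed price the proof's calldata is cheaper than a single 5k-gas
# storage write, so keeping balances off-chain starts to make economic sense.
```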
B
Now, the immediate consequence of this is that the maximum call data size of a block can increase, so I would say we can add an additional limit for a valid block: a block is only valid if the combined call data size is less than X, where I suggest as a starting point we make X twice what the current maximum is, 256 kilobytes, in order to make sure that chain growth and such things are kept in check.
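A minimal sketch (not client code) of the extra validity rule being proposed, with the suggested 256 KB starting point:

```python
MAX_BLOCK_CALLDATA_BYTES = 256 * 1024   # suggested starting point: 2x today's max

def block_calldata_ok(transactions) -> bool:
    """Assumes each transaction exposes its raw calldata as a .data bytes field."""
    total = sum(len(tx.data) for tx in transactions)
    return total <= MAX_BLOCK_CALLDATA_BYTES
```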
D
B
D
D
E
I wanted to ask, if you go back to the previous slide where you were describing the uses for call data: interestingly, what you said there is that the other usage, number four, is for example distributed storage. We were actually just talking about not using storage for that, because contract storage is basically a really valuable resource. It's probably more valuable than the call data, because it can be accessed from smart contracts, so I would rather say...
E
B
E
I would say this is actually very interesting in the context of chain pruning, and of state rent as well, so it sort of encompasses all these things. I think what we could get out of it is a recommendation for how storage should be used, in which cases call data should be used instead, and in which cases a log should be used.
E
D
So, just to give people context, one broad use case for this is: let's say I want to have a dapp that is accessible just from Eth, like in one line of code, which would basically say: load up the raw transaction of this, then de-hexify it and then execute that as JavaScript. That would let you have dapps without having to have a centralized UI or whatever. It's obviously a stupid workaround, but there are reasons to do it.
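A hedged sketch of the retrieval half of that workaround in Python (the on-chain side is just an ordinary transaction whose calldata carries the app's source); the node URL and transaction hash are placeholders:

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))    # assumed local node
tx = w3.eth.get_transaction("0x" + "00" * 32)             # placeholder tx hash

payload = bytes(tx["input"])                   # the raw calldata of the transaction
source = payload.decode("utf-8", errors="replace")   # if it was stored as text
print(source)    # in the described workaround this text would then be executed
```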
D
D
D
So what you do instead is you choose all five pieces of data, then you use erasure coding to extend that to five out of seven, and the sixth and the seventh are random junk. You publish those to the blockchain, and now you can recover the data with any three of the five human-memorable things plus the two random things that are publicly viewable.
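A minimal sketch of the 5-of-7 idea using Reed-Solomon-style polynomial interpolation over a prime field (Python 3.8+ for the modular inverse); this illustrates "any five of the seven shares recover the data", not necessarily the exact scheme being described:

```python
P = 2**61 - 1   # a prime large enough for small integer-encoded pieces

def _lagrange_at(x, points):
    """Evaluate the unique polynomial through `points` at x (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def encode(pieces, n_total=7):
    """Extend k data pieces (small integers) to n_total shares."""
    base = list(enumerate(pieces))                       # shares 0..k-1 are the data itself
    return [(x, _lagrange_at(x, base)) for x in range(n_total)]

def recover(shares, k=5):
    """Rebuild the original k pieces from any k of the shares."""
    pts = shares[:k]
    return [_lagrange_at(x, pts) for x in range(k)]

data = [11, 22, 33, 44, 55]               # five "human memorable" pieces
shares = encode(data)                      # the two extra shares look like random junk
assert recover([shares[0], shares[2], shares[3], shares[5], shares[6]]) == data
```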
D
E
I'll also make one more comment. It might have been designed so that logs would be cheaper to store than the call data, but in practice the growth of the logs is already faster than the growth of the call data. If you look at the archive node that I'm running, the size of all the receipts is already larger than the size of all the blocks, and, answering your question, they pretty much contain the call data plus some addresses. So yeah.
B
E
Also, one of the big differences between the narrative of Ethereum's kind of syncing and Bitcoin's is that in Bitcoin there's essentially only one way to sync the chain, which is to do it from genesis, because they don't have the state roots in the blocks, so essentially the whole Bitcoin blockchain is one giant Merkle tree. In Ethereum, however, you have the ability to sync from a snapshot, which means it allows you to prune everything and still sort of have some kind of version of everything.
E
E
And I think that tells me that the storage, so the space in the state, is more precious than the space in the blocks, and it has to be. And what you're saying is that, if you actually look at the current gas costs, it's the other way around: it's actually cheaper to put stuff in the state rather than to put it in the call data, which should be reversed. Yeah.
B
That's the main point. When it comes to your point on log messages: if we were to put security-critical information in log messages, it would get mixed in with a lot of non-security-critical information that we would love to prune. So then the argument would be to not use log messages for this, but instead use storage directly, because we do have security guarantees that storage will be maintained going forward, and then we can aggressively prune both call data and log messages.
A
B
Now, we're nowhere near the current limit, but the assumption is that if we build off-chain storage, we will get a significant increase in Merkle proofs that are submitted to the chain, so you can expect a significant uptick in the block size. And then, if you do this roughly ten-times gas decrease, you also get a ten-times block size increase, so it already becomes 1.2 megabytes, which is something worth discussing in its own right.
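A quick sketch of that block-size arithmetic, assuming an 8 million gas limit for illustration:

```python
GAS_LIMIT = 8_000_000                 # assumed for illustration
current_price = 68                    # gas per non-zero calldata byte
cheaper_price = current_price / 10    # a ~10x calldata price cut

mb = 1024 * 1024
print(GAS_LIMIT / current_price / mb)   # ~0.11 MB of calldata per block today
print(GAS_LIMIT / cheaper_price / mb)   # ~1.1-1.2 MB once calldata is 10x cheaper
```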
D
F
D
G
Minor comment: we've observed that uncles are smaller than regular blocks, so block size is not the impact that we think it is.
D
That justifies that some reduction from the status quo is good, but the question is how big of a reduction, because we still want to keep it balanced relative to other stuff. And it seems like, if our goal is to make Merkle proofs the cheaper way to build these kinds of applications, then we're actually attacking that problem from two angles: one of them is making the call data cheaper, and the other one is making storage slots more expensive, I mean.
B
H
Okay, so I'm here to talk about uncles, right, so the previous conversation kind of flows nicely into this. We've been looking at the historical data on uncle rates. Just to give some context: uncles are stale blocks that get mined and included into the chain to get some reward, which is about 7/8 depending on when they get referenced in the main chain, and you also get a reward for including an uncle. So why is this important? It can help you see what the state of the network is.
F
H
And it also relates to a larger block size, but that's not necessarily the case, as we've noticed in the last six months. So we looked at the uncle rates historically and we found different points in time where you have interesting changes. You had some hikes in August 2016, which is the DAO, or like another fork, then another fork in November 2016, and so on and so forth. But for us, what's been most interesting is this drop in the uncle rates over the last six months.
H
It just seems to have been going down, which is interesting, because the number of transactions per block doesn't seem to be going down. So there's not necessarily a relation between the transactions, the size of the block, and the uncle rates. And the idea here is to make it a little bit more interactive, for you to also give us some ideas of why this might be happening and some of the implications of it.
H
So essentially this is just a closer look at the last six months, and as you can see, the transactions don't really match this slope that we see in the decrease of uncle rates. So we thought about a number of different possible causes for why this could be happening. Maybe the data size is more important than the transaction count.
F
H
Maybe it's the number of miners or the number of nodes, protocol changes, or the Parity and Geth optimizations, or changes in the miners' strategies; maybe an increased connectivity between the miners; or, since there's been a migration of a lot of nodes and miners onto Amazon, maybe that's also causing better connectivity and thus resulting in lower uncle rates. And so we started testing
H
all of these hypotheses. We have collected data from Alethio, Etherscan and Etherchain, and we're going to share all this data so that people can also start doing their own analysis. Essentially these are the variables that we've used; they're all numbers there. So we started checking the number of miners, which doesn't seem to have changed much in the last couple of months. If anything, there are a few more miners, so it doesn't seem to be related to the uncle rates.
H
We can also see that there's a lot of centralization: if we look at the top 20 miners historically, it has decreased a lot. Now you have only about five main pools that have most of the hashing power, so that could also explain the increase in connectivity and the drop in uncle rates, but it's not conclusive. And these are the mining pools that have been doing the most mining of uncles in the last six months. So we started theorizing: what could cause this?
H
H
So again, we tested this idea that miners would stop working on blocks as they receive the header from other miners, and since they don't have complete information, all they can do is post empty blocks. But that doesn't seem to be the case. We were also looking at these empty blocks and found the data a bit inconclusive, as you might have a higher block size than the uncle size, or the inverse.
H
So we're a bit puzzled by why this is happening. As you can see here, you have the size of the uncles in the first column and then the size of the blocks in the second column, and you just see variation: sometimes the uncles are bigger, sometimes the blocks are bigger. And so we started doing some analysis on this data, trying to see what could be causing this behavior.
H
We also checked the number of nodes. It might have an effect in the sense that there are fewer participants, or the nodes are more connected between themselves, which decreases the uncle rates again, and that seems to have been happening: there are a lot fewer nodes in the network. Actually, I heard that the real number of nodes that are actually running in the network at this point is about 3,000, so very low.
H
We did conclude that some of the protocol changes and the optimizations have definitely helped in terms of the uncle rates, and we don't have anything on Amazon, so it's really hard to tell. But this is some of the regression analysis that we started running. This is data from last year, looking at what impacts the uncle rate the most: definitely the number of transactions, the hash rate, the block size, the number of accounts, but correlation doesn't necessarily mean causation, so it doesn't really matter; and also the rewards and the price of ether.
H
H
So we just started doing some regression analysis with all of the variables. These are the results of all the regression analyses, simple linear regression with each of the different variables, and these are the ones that seem to be the most related: hash rate, average block size, difficulty, mining return, uncle mining reward, ether price, gas limit and gas usage.
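A hedged sketch of those per-variable regressions; the CSV file and column names are assumptions about the shared data set, not its actual schema:

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("uncle_data.csv")    # hypothetical export of the collected data
candidates = ["hash_rate", "avg_block_size", "difficulty", "mining_return",
              "uncle_mining_reward", "ether_price", "gas_limit", "gas_used"]

for col in candidates:
    fit = sm.OLS(df["uncle_rate"], sm.add_constant(df[col])).fit()
    print(f"{col:>20}  R^2={fit.rsquared:.3f}  slope={fit.params[col]:+.4g}")
```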
H
So essentially the idea is that, with all this data that we've collected, we can actually build a mathematical model and start playing around with, say, the gas limit, and see how it impacts the uncle rates. Because essentially, if you have an increasing size of blocks but the uncle rates are still dropping, then you could potentially increase the size of the blocks to a certain degree.
H
And so the idea is that we're going to build a model with this, with multivariate regression, and try to find what's most impactful. And so, from all of these hypotheses, we concluded that maybe the data size has something to do with it; it's not the number of miners; maybe the nodes as well; definitely the protocol changes and the Geth and Parity optimizations have had an impact, as well as some mining strategies that have been employed: there's no selfish mining or SPV mining anymore.
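A hedged sketch of the multivariate model and the "what happens to the uncle rate if blocks get bigger" question; same hypothetical data set and column names as the sketch above:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("uncle_data.csv")
model = smf.ols(
    "uncle_rate ~ avg_block_size + hash_rate + difficulty + gas_limit + gas_used",
    data=df,
).fit()
print(model.summary())

# Naive what-if: scale the gas limit (and let block size follow) and predict
# the uncle rate over the most recent rows. As noted above, correlation is not
# causation, so this is only a starting point.
scenario = df.tail(30).copy()
scenario["gas_limit"] *= 1.25
scenario["avg_block_size"] *= 1.25
print(model.predict(scenario).mean())
```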
H
E
One of the things that I was expecting: you mentioned the Geth and Parity optimizations. One of the effects of those optimizations is that they actually break, or at least weaken, the relationship between the block gas limit and the uncle rate. This is because block propagation between the nodes, the speed of it, basically doesn't depend any more on how hard it is to execute the block.
E
G
I
Yeah, hey guys, I want to talk about some testing criteria. Some of this is gonna be a little redundant, but I just want to make sure that we're all on the same page and we have some clarification as to what it is we're trying to do here and how we can most efficiently go about achieving these goals that we're setting for ourselves. So I'm a proponent of obviously doing this
I
in, you know, an accurate and deterministic manner. So what we need to do is have a structure to the way that we're testing and validating all of these hypotheses. Obviously, first we want to formulate our hypotheses and we want to figure out which problems are valid, like what's worth spending our time on, right? And this isn't about any one particular project or issue that's at hand; it's just in general.
I
So I think that we should target low-hanging fruit, obviously: which optimizations are the best and which are the easiest to achieve and implement in a short period of time, and then just repeat this process going down the list. So we're gonna have these three criteria for evaluation, essentially, and it's nice to segment these and break them up into groups. So we have environmental factors, and that's going to be
I
you know, more protocol-related stuff, like the state size and the hashing rate, and another important thing is what can be controlled and what's uncontrollable in relation to these protocol-related variables. And then we have computational factors, which are going to account for the resources of a single node, like the I/O, the NIC capabilities, all that stuff, because we can't necessarily control the wide area network and we can't really control a lot of aspects of the protocol.
I
Peering: there's a lot of things that are gonna be difficult. And then there are also network factors, which would be things like bandwidth and latency and packet loss, the actual conditions that we experience when these nodes are communicating with one another within the network. So this is kind of like a brain dump, and I'm really just presenting this so we can start initiating some sort of dialogue amongst ourselves,
I
so we can formulate a more structured way of going about testing and validating these things. So: which environmental factors do we consider? How do a node's available computational or network resources affect its performance? What's the relationship between pruning threshold and bandwidth, like how often does a sync fail with X bandwidth and Y pruning threshold? If we account for state size within the protocol, which parameters would result in the most effective optimization, like tweaking and tuning which parameters?
I
And in regard to cache size, what's the minimum viable cache size? All of these are things that we should go about researching. So we have some assumptions. These are just my basic assumptions that I came up with; I wrote this last night, just based on some conversations and dialogue from yesterday. So, like, processing the block, I mean, this is redundant because everybody's been talking about it.
C
I
So I'm not really gonna go over that too much; I just want to share with you guys what I'm thinking. First, we should establish our testing objectives: identify the assumptions and formulate the hypothesis, like what's the bottom line, which criteria should we absolutely evaluate, and which factors have the most dramatic effects on the outputs. By identifying those factors, we can understand a few key points, like which optimizations should be prioritized, which are the most practical, and which are the most immediately achievable.
I
I think that's one of the most important parts, along with formulating a test plan. I wrote an example test plan for what we're doing right here, so we can identify, like, the client we want to test. I just kind of put this in and I haven't changed it; me and Alexey had a conversation, and we need to identify what the criteria is for a state size.
I
C
I
So I mean, this isn't something that we're gonna decide right now between the two of us, but before we move on and actually start testing these things, those variables need to be defined, so we know what to look for, right? So, within the test cases that we're running, we write out a test series like this, and we do that on Whiteblock, which I will run a demo of.
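A purely hypothetical example of what one entry in such a test series might look like; none of these field names are Whiteblock's real schema:

```python
test_case = {
    "client": "geth",
    "nodes": 5,
    "per_node_resources": {"cpus": 2, "memory_gb": 4},
    "genesis": {"gas_limit": 8_000_000, "difficulty": 1},
    "network": {"latency_ms": 50, "bandwidth_mbps": 10, "packet_loss_pct": 0},
    "workload": {"tx_per_second": 100, "duration_s": 600},
    "metrics": ["block_time", "uncle_rate", "sync_failures"],
}
print(f"deploy {test_case['nodes']} x {test_case['client']}, then run the workload")
```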
I
Let me just wrap up, though. Using our emulation platform, we can acquire large data sets that are going to be highly accurate or reflective of real-world data, and then, using those data sets, we can perform additional simulations, kind of similar to what Vanessa was talking about, and the output of one test series can influence our hypothesis and bring up new issues; that's just the process. But I think it's easier to just be practical, and I think we should just demo Whiteblock right now.
I
I
I
J
I
I
So we're building out this blockchain using Ethereum with the Geth client; we're just going to indicate five nodes. You can indicate the computational resources that are going to be assigned to each node, and then you can define the parameters of the blockchain, like gas limit or difficulty or whatever you would put in a genesis file, and then it's all dynamically created. Then our platform provisions all of those nodes, sets them up with the appropriate client, funds
I
the accounts, because we automate transactions and all that, and assigns a VLAN to each one of them, so they all have their own IP. Now we can dynamically configure any of the links between those nodes with packet loss or bandwidth constraints, latency, whatever. We can also add accounts or nodes after it's already been built. So right here you can see this was the initial balance.
I
E
K
What about modeling assumptions, or simplifications? Because on one hand we could use the real Geth and Parity clients and so on and be as close to the real thing as possible, but of course we'll have to reduce the difficulty. But still, if that proves to be computationally prohibitive, expensive, then we'll have more and more modeling assumptions, because at the end of the day we are not interested in any security aspect of it, or even that it's Ethereum or whatnot. We are interested in, given a certain sync protocol...
I
So one of the issues is: how do we test effectively starting out? Because the state needs to be generated before we start actually testing. So my idea was that we create a series of images with the state at size X, and then, after those have been pre-generated, we can just import those states into our framework and start testing with those. We can also automate transactional activity, if you want to show that, and deploying smart contracts as well. So for this, shall we do, what, send transactions? Yeah.
I
So let's show transactions. We're gonna start a stream of transactions between accounts, and you indicate the transactions per second. It's not the transactions per second that are going to be processed; it's just how many transactions we're going to be sending in total within the network, regardless of how many get in, so it's not the real TPS that everyone thinks of. So we're sending a hundred transactions per second with a value of 1/8.
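A hedged sketch of such a transaction stream with web3.py; the endpoint, unlocked accounts, rate and value are all placeholders:

```python
import time
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))
sender, receiver = w3.eth.accounts[0], w3.eth.accounts[1]   # assumes unlocked accounts

TPS = 100                       # how many we *send* per second, not what gets mined
for _ in range(TPS * 10):       # ten seconds of load
    w3.eth.send_transaction({
        "from": sender,
        "to": receiver,
        "value": 10**17,        # 0.1 ether in wei, as an arbitrary example value
    })
    time.sleep(1 / TPS)
```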
I
It just started, so we can do 'get stats', 'get stats past 10', and look at the past 10 blocks. It's going to take a second, so we're watching that. You can just dynamically observe how these TPS values change, or whatever values or data points you want. One thing to note is that we haven't applied any sort of latency or delay or bandwidth constraints or packet loss between those nodes.
I
E
J
J
I
I
Well, we're gonna be working together, so we're gonna be collaborating on these initiatives. In terms of my priorities, that's something that's very interesting to me, but I think my priority is finding out how we can be useful and contributing. So if Alexey sees value in this, then it's my highest priority that we're able to provide value and help solve problems and create solutions. That's my priority, so yes.
E
I think what we're hoping to do is that, basically, Zack would provide access to this platform. We have to decide about the cloud compute bills, like how we're gonna pay those bills and stuff like that, but I think we can figure that out, and, you know, essentially improve the documentation as we go, because there might be some gaps in the documentation. And eventually, that's one of the reasons why I want to dedicate Andre for some time.
E
I
So, okay, I built this platform because I was working on Ethereum and I didn't have the tools that I needed to do a lot of the low-level research that I wanted to do. This was made primarily for Ethereum, and I've been working and focusing on Ethereum for some time, but I think we should talk about that offline, because I don't want to, like, I...
L
I wanted to add a comment on testing syncing. You didn't include this, but it's something you might want to think about: an important factor in syncing, at least in warp sync, is that the snapshot is now so large that just downloading it takes hours, right, and you can't expect a single client to be online for that long.
L
So in the vast majority of cases, like 60 to 80 percent, the peer you're downloading from will actually disconnect you halfway through, and so you then need to go back into peering and find a new one. We now have it in place where you can actually restart a snapshot, but then you need to find a snapshot from the same block height. This will be fixed in the next version, but that's how it works right now. So...
F
L
E
I
So, just some things we need. I think the priority is identifying how we can effectively introduce the state within the environment, so that generating that state initially, from genesis, isn't occupying an exorbitant amount of time. But I don't think that's a very difficult problem; I think that's probably something we can solve in a couple of days, I guarantee, and we'll get started on it immediately. But does anybody else have any questions? I had...
M
I
Suppose we have precompiled images that we store, and then we just deploy them based on that client, by indicating which client you want to deploy. So what we're showing you here is just high level. Obviously, when we're testing we're not using these build wizards that are prompting and doing all that stuff; we're actually just running scripts that we define based off of a YAML file or something like that, and then we automate all of the tests.
I
So after you define them in a test file like I showed, we can just run all of those tests, and one of them would be, like: for this client we want a hundred nodes total, we want 50 of them to run Geth and 50 of them to run Parity; after you deploy those clients, we have flags like we want to run this many transactions, or we want to do this or do that, and that's what we want to do for this one test case.
I
So we've done a lot of work with Ethereum research on exploring sharding and libp2p using this framework and these testing methodologies, and it's been a pretty cool experience, so I would encourage you guys to check out what we've worked on before, 'cause we don't really do any marketing. This is just us hanging out and talking to people. So, you know, but I think that these are valuable tools.
I
They were useful to me, so I think they would be of immense value to you guys as well, and we want to work with you to build these tools to make your lives easier. And I want to open source all of this, and that's the plan, but right now I'm beholden to a board of directors, and I don't want to say anything on the livestream, but we can talk about my personal feelings and plans afterwards.
N
E
M
O
Yes, yes, metering tomorrow. So, so that people know that there is some beauty and some sophistication to WebAssembly: it's not just some bytes for an EVM that hopefully there are no bugs in. This is a real engineering project, and they have proofs of some things. Like most languages, they have a syntax, okay.
O
So this is a paper that they put out when they were first announcing the language, and they have this, I guess, syntax. I'm sure people in this room know about these kinds of things: there are standard ways to define languages, and usually there's a syntax. Like we have a syntax in the English language, there are grammar rules; likewise, they have some sort of grammar rules, or syntax, to build these programs with, and on top of this definition...
O
So this is a definition of a syntax; I don't know if they have all the rules, or if things have slightly changed since they published this. On top of this definition they define, and you don't have to know what this is by the way, each of these is a rule, and if you want to implement the spec you have to follow all of these rules. So these rules are for validity: everything that can be checked statically, they wanted to write down, and hopefully they wrote most of them down.
O
I think there are some more later on for some other things, but anyway, there's some beauty here: we're defining something called the grammar, and then defining another thing on top of this grammar. For each of these grammar rules we're defining these validity rules, and once we define these validity rules, we can prove things like the absence of undefined behavior. Actually, they have some undefined behavior that they identified, but we know about it and its source.
O
O
Then you can check the validity while you're parsing it, and you can also compile it at the same time, in one pass, and you can do it in parallel for each function. So it's engineered; it's a great engineering project, I think. And they have a compact representation, like 80% of native code size on average; it's portable, you can compile it to different architectures, and the ecosystem is growing. There are some growing pains, but it's growing.
O
O
You know, these proofs, just like Euclid proved things about geometry 2,500 years ago and those proofs are still good today, there are proofs about this stuff and they're gonna be true 2,500 years from now. So there's no obsolescence to these proofs, and I guess that's all I wanted to say.
K
You are using WebAssembly, and WebAssembly gives you some mechanisms and facilities for virtual machines, but LLVM can also give you certain facilities. It's one kind of thing that doesn't run in browsers directly, but I guess it can be compiled into WebAssembly, and that gives you a means to run a lot of that stuff in a browser. So have you compared using WebAssembly as your primary kind of tool set versus using LLVM as your primary tool set?
O
Yeah, this was a question... I was not working on it when this question was answered, but yes, there's an effort with Cardano where they want to use LLVM instead of WebAssembly, or they may be using both, and yeah, it might be a reasonable alternative. I don't think they designed... are you talking about the LLVM intermediate representation?
O
Yeah, sure, it's an alternative. I don't know if it has the same properties; I don't know if it's designed to be, you know, passed around for transportation. It might not have the same properties as WebAssembly, but certainly Cardano was justified in exploring it. They change their intermediate representation with every version of LLVM, so LLVM eight or seven, you know, they're starting to work on eight, so their IR is constantly changing. So you have to keep up, or you have to move with them, and they might...
They
might
introduce
something
strange,
yeah,
you're
right
that
this
is
an
alternative
and
yeah.
There
are
other.
You
know
the
the.net
CLI,
there's
there's
a
few
other
alternatives
that
people
might
suggest
and
I
think
it's
an
interesting
discussion
to
have,
but
certainly
webassembly
has
some
properties
that
might
be
hard
to
beat
so
I.
Don't
know
if
I
answered
your
question:
I'm
just
ma'am
I'm,
just
mumbling.
F
F
I might be able to answer part of that as well. It's not an either/or distinction: you can compile down to LLVM IR and then write a back end for WebAssembly. WebAssembly is the actual virtual machine, as opposed to this abstract intermediary, right? So really, any modern system is gonna go through several of these layers before you get all the way down, so we can absolutely do both. We can also use LLVM IR down to EVM as well.
F
There's nothing stopping us from doing that, and we could actually get, you know, obviously Wasm was designed from the ground up in a particular way, which is nice, but we could get a lot of these properties moving forward with the EVM prior to Wasm being implemented as well, and then plug that into the LLVM IR as well, write a back end for that. So we could get a lot of these benefits in the meantime, and I agree, I think using LLVM is a good strategy.
E
M
B
I wanted to add more to the decision for using WebAssembly. LLVM is designed around two assumptions. The first one is that this intermediate representation is ephemeral: it only exists for the duration of compilation, then you throw it away and redo it. The second is that you trust it. WebAssembly is designed on the principle that you don't trust the byte code that goes into it, and it needs to be standardized, and the reason is that WebAssembly is meant for the web, so you don't trust the code that goes into it.