My name is Alexey, and you have probably heard of what I have been doing, so I will keep the introduction short. It is just a fork of go-ethereum, which was taken around December 2017, and since then I have been experimenting with it. At the moment I still see it as an experimental project, although I do want to release a working version, which is not far away. I also wanted to say that I like giving these talks, although I don't do it very often, because each one makes me document one of my ideas a bit better.
So this is the short outline. I will skip very quickly through how it started, because I already showed that at Toronto, and we will go quickly through authenticated data structures; there is a little bit more there than last time. Then we compare how the state is persisted in geth and in turbo-geth. This is the crucial bit; this is where the biggest difference is. Then we are going to look at some alternatives to the Patricia tree, which exist in some products — and some of them don't exist yet. Then we will look at different processing architectures, and here, since last time, I have included Tendermint; I will tell you a little bit about that project and why it is interesting in this context. Then we will look at the latest data that I have from the two biggest nodes, and then we will discuss light clients shortly, because this is something I have largely ignored for a while, and now I have started to look into whether I would actually be able to support them.
So essentially, if you want to store data in a tamper-proof manner and then be able to prove something about that data, you usually use something like a Merkle tree, which allows you to do the proof of membership shown on the right. If you just stored the data plainly — here on the left — you can see that the proof would consist of the whole structure. The Merkle tree solves that, but in its naive version it does not solve the problem of proving non-membership.

We also don't want to repeat the whole structure when the data changes, so that is another property we want. You can see that the things shown on the right are the modifications made to the tree; the grey bits on the right are not repeated, I just put them there for clarity. So essentially we want to encode modifications compactly, and one of the solutions to this problem is the Merkle radix tree.
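As a minimal sketch of the proof-of-membership idea described above: the proof contains one sibling hash per level, and the verifier rehashes from the leaf up and compares against the known root. This uses SHA-256 and a plain binary tree for illustration; the real Ethereum trie uses Keccak-256 and RLP-encoded hexary nodes.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// leafHash hashes a leaf's raw data.
func leafHash(data []byte) []byte {
	h := sha256.Sum256(data)
	return h[:]
}

// hashPair combines two child hashes into a parent hash.
func hashPair(left, right []byte) []byte {
	h := sha256.New()
	h.Write(left)
	h.Write(right)
	return h.Sum(nil)
}

// verifyMembership walks from a leaf to the root using the sibling
// hashes; bit i of index selects left/right at level i.
func verifyMembership(root, leaf []byte, index int, siblings [][]byte) bool {
	cur := leafHash(leaf)
	for _, sib := range siblings {
		if index&1 == 0 {
			cur = hashPair(cur, sib)
		} else {
			cur = hashPair(sib, cur)
		}
		index >>= 1
	}
	return bytes.Equal(cur, root)
}

func main() {
	// Tiny 4-leaf tree built by hand.
	a, b, c, d := leafHash([]byte("a")), leafHash([]byte("b")), leafHash([]byte("c")), leafHash([]byte("d"))
	n01, n23 := hashPair(a, b), hashPair(c, d)
	root := hashPair(n01, n23)
	// Proof for leaf "c" (index 2): siblings are d's hash, then n01.
	fmt.Println(verifyMembership(root, []byte("c"), 2, [][]byte{d, n01})) // true
}
```

Note the proof size is one hash per level, which is exactly why tree depth matters later in this talk.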
In the go-ethereum code it is a little bit confusing: the branches are called full nodes, and the leaves and extensions are called short nodes. I have also introduced another type of node, called a duo node, which has exactly two children, because I realised that there are quite a lot of them and there might be some optimisations there. What you see on the left is how the radix tree looks, and on the right is how the Patricia tree looks, with some of the efficiencies. Unfortunately, that comes at the cost of the complexity of the algorithm. If you have ever looked at the Patricia tree processing code in any client, you might find it a bit complex, because you have to handle lots of special cases, as you can see in this picture — pretty much a switch with lots of case statements.
And, as mentioned, in Ethereum the radix is 16, and when we store the actual Ethereum state, the keys are always 256 bits, which is 32 bytes, which is 64 nibbles. The reason for that is that all the keys are hashed by the Keccak function, and the reason for that, in turn, is that we would like the tree to be balanced — we will see why that is important a little bit later. When we hash the keys, they become randomly distributed around the tree.
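The key preparation just described can be sketched in a few lines: hash the key, then split each byte into two 4-bit nibbles to form the trie path. SHA-256 stands in here for Keccak-256, which is not in the Go standard library; the sizes and the nibble split are the same.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// hashKey stands in for Keccak-256; the real client hashes every
// state key before inserting it, which balances the trie.
func hashKey(key []byte) []byte {
	h := sha256.Sum256(key)
	return h[:]
}

// toNibbles splits each byte into its high and low 4-bit halves,
// turning a 32-byte hash into the 64 nibbles used as the trie path.
func toNibbles(b []byte) []byte {
	n := make([]byte, 2*len(b))
	for i, v := range b {
		n[2*i] = v >> 4
		n[2*i+1] = v & 0x0f
	}
	return n
}

func main() {
	path := toNibbles(hashKey([]byte("some account address")))
	fmt.Println(len(path)) // 64: the maximum depth of the hexary trie
}
```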
This is just a very short reminder that whenever we talk about serialisation in an Ethereum client, it is usually done by means of RLP, which is "recursive length prefix". It is not very important for this particular presentation, but basically RLP can encode either single bytes, or byte arrays of different sizes, or it can recursively encode sequences of other RLP items. That will allow you to understand this sort of persistence.
So this is how geth — and probably most other Ethereum clients — stores the state in the database: in a key-value store. Essentially that is very straightforward. Eventually, as the block number increases during sync, the Patricia tree becomes very large, it no longer fits in memory, and some of it has to be offloaded to disk. So let's imagine that we have offloaded these three nodes to disk, and what we have in memory is just the hash of this node here. This one is not in memory, and eventually one of our transactions requests that piece of state — it needs to read it or write it. What geth and other clients usually do is a lookup operation: they take that hash, consult the database, and see, OK.
They get back the RLP of the node, deserialise it, instantiate it in memory, and then they need to go further — that is one lookup to the database. Then: oh, now we need that one; they take the next hash and do it again. So in this particular example there are two lookups, and you also need to notice that these lookups are necessarily sequential: you cannot do the second lookup until you have done the first, because the lookup key for the second is contained in the result of the first. There is a data dependency there, and this is why the depth of the tree actually matters — the deeper the tree, the more lookups you have to do, and because they are data-dependent, you cannot parallelise them.
So remember that depth is important — this is actually the main reason I started to experiment with a different database layout, because of this depth dependency. Let's see what the current depth looks like. The table is not entirely up to date — the chain is actually quite a bit longer now, but I didn't want to recalculate these numbers; they are here to give you the idea. The levels start from zero.
These are the levels of the Patricia tree, and you can see that levels 0 to 5 are pretty much fully occupied, which means that all 16 elements in those branch nodes are completely full. That is kind of good, because it means we don't waste too much memory. After that, at levels 6, 7 and 8, you can see more sparsity occurring. By the way, the rows are the levels of the tree and the columns are the different occupancy levels. Column number one is special because, as you might remember, in the Patricia tree there are no branch nodes with occupancy one — they are replaced by either extension nodes or leaf nodes; that is the point of the Patricia tree optimisation. So what you see in the first column are essentially the leaf nodes.
But eventually I got to the point where rebuilding the tree is not actually too expensive; the more expensive part is reading it out of the database. So what happens here — let's imagine the same scenario as before: this part is in memory and this node is not, and we only have its hash. Actually, we don't even need that hash — well, we might need it to do some sanity checks that what we loaded is correct. What we do instead is say: we need everything that starts with nibble one (a nibble is half a byte). So we issue a range query to the database, and that range query fetches the first two keys and values, and from them we essentially just rebuild that part of the Patricia tree, plug it in there, and so on all the way down.
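The range query described above relies only on the keys being stored sorted, so all entries under a given nibble prefix are contiguous. A minimal sketch of that access pattern, using a sorted slice of hex nibble paths in place of the real key-value store:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// prefixScan returns all keys in a sorted key set that start with
// the given nibble-path prefix — the shape of the range query a
// sorted store answers in one pass, with no per-node hash lookups
// and no dependence on tree depth.
func prefixScan(sortedKeys []string, prefix string) []string {
	// Binary search for the first key >= prefix.
	i := sort.SearchStrings(sortedKeys, prefix)
	var out []string
	for ; i < len(sortedKeys) && strings.HasPrefix(sortedKeys[i], prefix); i++ {
		out = append(out, sortedKeys[i])
	}
	return out
}

func main() {
	// Keys are hex nibble paths, kept sorted as in the database.
	keys := []string{"1a2b", "1a3c", "1b00", "2f11"}
	fmt.Println(prefixScan(keys, "1a")) // [1a2b 1a3c]
}
```

Everything needed to rebuild the subtree under prefix "1a" comes back from one contiguous scan, which is the point being made in the talk.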
The important thing here is that the depth doesn't matter: the performance of this particular mechanism does not depend on the depth of the tree, which kind of means that you don't really need to balance it. That is a thing I actually forgot to mention when I was talking about this at DevCon.
Now, just for educational purposes: you might have seen something called a sparse Merkle tree, which you might as well call a binary radix tree, because it is also a radix tree, but with radix 2. The idea is this: say we want to store the keys 00 and 11. Imagine a radix tree which potentially contains all possible keys — in the case of Ethereum that would be 256 bits deep. Of course, that tree would be humongously large, but we don't actually need to store it, because in this particular tree we only store two keys and everything else is empty. That is why it is called sparse. So what we do, first of all, is introduce some constants, starting at level 0.
We introduce one constant per level — the hash of the empty subtree at that level — and the same for level 2, where we have this bigger constant. Essentially, all we need to encode now is: when we are at any node, we compare whether its left or right child equals the empty-subtree constant. If it does, we don't have to look any further; we know it is empty. If it doesn't — like here — we know we need to look into it.
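The per-level empty-subtree constants just described can be precomputed in one pass: level 0 is the hash of an empty leaf, and each level above hashes two copies of the level below. A small sketch (SHA-256 again standing in for the production hash, and a tiny depth instead of 256):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// emptyHashes[i] is the hash of a completely empty subtree of
// height i: level 0 hashes an empty leaf, and each higher level
// hashes two copies of the level below.
func emptyHashes(depth int) [][]byte {
	out := make([][]byte, depth+1)
	h := sha256.Sum256(nil)
	out[0] = h[:]
	for i := 1; i <= depth; i++ {
		hh := sha256.New()
		hh.Write(out[i-1])
		hh.Write(out[i-1])
		out[i] = hh.Sum(nil)
	}
	return out
}

func main() {
	// With these constants precomputed, any child equal to
	// emptyHashes[level] needs no storage and no further traversal.
	for i, c := range emptyHashes(4) {
		fmt.Printf("level %d: %s...\n", i, hex.EncodeToString(c[:4]))
	}
}
```

This is why the sparse tree costs nothing for the empty parts: the 256 constants are shared by every empty region of the tree.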
As I remember, that has been written up on ethresear.ch, and I know a lot of people like this structure because it is very simple to implement. It hasn't really been put to the test yet, but I really want to make a benchmark of what would happen if we used that instead of the Patricia tree. It would be interesting to see whether there is going to be any difference, and I might actually be able to do it quite soon.
If you take a self-balancing tree and add two key-value pairs in a different order, the resulting tree will be different, so the order of the operations matters. That wouldn't be so bad, except that here I actually require that the order doesn't matter, so that I can batch the updates up.
So now, let's have a look at the processing architectures. This is a super-simplified picture of how geth goes about processing transactions and blocks. Here you have the virtual machine, which is essentially a kind of opcode interpreter; sometimes it accesses the state. During the processing of one single block there is this thing called the transient state — obviously for performance reasons we do not go into the Patricia tree or the database all the time during one block; we simply have some sort of hash maps or similar. We also use the journal, which was introduced around 2016; it is a way to quickly revert the state, because there are things like the REVERT opcode.
There is also running out of gas, so you have to be able to revert quickly — that is what the journal is for. At the end of the block there is something called commit, and normally what it does is take all the pending modifications, apply them to the Patricia trees, and then eventually write them out to the database. At the moment, in geth at least, the logic which updates the database is coupled to the Patricia tree: there is a callback in the Patricia tree which then goes into the database.
So people would use that to poll for which accounts have been modified, match that against the addresses they know about, and figure things out from there. Because that takes a long time, they have to do it again — by the time they get the results, another twenty blocks have arrived, so they have to keep doing it. And if you look at the implementation of this particular query, it actually goes through the Patricia tree: it reads up the Patricia tree at block ten and another Patricia tree at block twenty and then iterates over them. It is actually quite slow.
So now let's look at — actually, let's change the order and do this first. This is what I have ended up with at the moment in turbo-geth. These parts are pretty much the same code: there is a VM, there is a transient state. What is different is that when you commit at the end of the block, instead of writing to the database through the Patricia tree, I have a common interface which doesn't care whether it is the database or the Patricia tree. You can write to one, to the other, or to both with the same call, and so I have tried to separate them as much as possible. Unfortunately, there is still one link which I can't remove.
It is essentially this: when we store an account, it contains the root of its storage state, and that keeps things coupled. That is the only thing which still keeps a little bit of coupling; otherwise they would have become completely decoupled. Also, this slide previously pictured just one database, but recently I reworked it, because I realised there is big value in separating the history from the current state, and I will show you why later.
You know, on the geth slide you saw many trees, but there is only one tree here, and the reason is that turbo-geth currently can only hold one version of the state; it cannot hold multiple branches as geth can. If somebody comes in with an RPC query, it doesn't go through the tree, because it doesn't have to — it goes straight into the database. Lately I have started to realise that this database could potentially just be a SQL database, and then you would use SQL for your RPC queries. I have not confirmed it 100%, but I am pretty sure that could be tried.
So now, let's look at this. Some of you might know it: there is this protocol Tendermint, and Ethermint has been attempted on it by people from the Cosmos team and the Interchain Foundation. It is very hard, so they asked me to look at what Ethermint 2.0 could potentially look like, and I said: no, we are not going to modify go-ethereum, we are going to use it as a library. I remember Péter's talk at DevCon 2, which was basically "use go-ethereum as a library", and I thought, OK, why don't we try that — and actually Cosmos is also a library: there is the Cosmos SDK. So I tried to do some jigsaw work there, and essentially the architecture is like this: we do read and commit, and there is no Patricia tree here — I removed it. It was a bit tricky, but I managed to do it.
It uses the IAVL trees; we are not using any of the Patricia stuff — it all goes into the Tendermint stack, and eventually it might use some of their tools for state. What is interesting is that this particular piece of work made me realise that I might be able to do the same thing with turbo-geth: use go-ethereum as a library rather than as a fork. I need to look at it a bit more, but it gave me the idea. So let's look at the other difference.
This slide I have already presented, but there is a little bit of change here. First of all, I changed the database from LevelDB to BoltDB, and I wasn't able to say much about it last time because of time constraints, but I will go into it a bit further now. LevelDB is a database with a log-structured merge (LSM) architecture — there are actually quite a few of those — and the main idea is that it runs a background process called compaction, which basically rewrites things from one level to another. It tries to compact itself and merge the entries: for example, if there are two entries in the log which pertain to the same key, it will merge them together as it lifts them up a level, and if there is a deletion, it will also merge that with the earlier update.
These compactions are pretty much not on the critical path, but there is a cost to pay. First of all, there is a thing called write amplification, which means that every record you write to the database ends up being rewritten multiple times as it moves up the levels. Then there is also read amplification: an entry might be located in multiple levels, so you need to do a bit of work to figure out where it is now and what the current state is.
also
just
empirically
when
I
write
the
on
my
local
is
laptop
when
I
run
gas
and
level
the
B
it
actually
really.
No,
it's
really
noisy.
If
I
could
hear
the
noise
when
I
run
to
be
guessing
ball,
TV,
it's
very
quiet,
although
no,
it
I
think
it's
the
matter
of
the
background
processes
which
going
on
so
the
difference
with
the
both
to
be
both
Ibiza
is
more
kind
of
traditional.
To remind you, a B-tree is a tree which, unlike a binary tree, has multiple children on each level, and in a B+ tree the actual values are stored in the leaves, not in the branches. The good thing about the B+ tree is that it allows you to do better atomicity, so you can actually support sub-transactions.
This is the main reason: in the hash-based layout the keys are pretty random, and we don't really exploit the fact that LevelDB actually stores the keys sorted — we use it as a random key-value storage. In turbo-geth the predominant workload is instead something called traversal, which traverses the database in key order. Another thing I found along the way:
I had a problem that when I was batching up a lot of updates, it was very inefficient to batch them in BoltDB's own buffer, so I had to introduce my own buffer — I just picked up some red-black tree implementation and used that. And, as I said before, turbo-geth simply cannot maintain multiple tips of the chain. So what does that mean?
It means that if you are following the tip, sometimes you get these little forks — this way, that way. What geth does is sort of remember them: OK, the chain goes two blocks this way, two blocks that way, maybe two blocks another way; eventually one of them wins, but geth doesn't know yet which one, so it remembers them all for a while and then follows the winner. In turbo-geth, because of the way the database is organised, I did not introduce a mechanism for doing that. What it does instead is just: here is a batch of blocks; if the total difficulty is bigger than what I had before, then I will accept them; if not, I am not going to accept them for now. It still remembers the blocks, but it doesn't actually do anything until it knows that the difficulty is larger. That is why it cannot maintain multiple forks at the same time. So whenever a reorg happens,
it has to go back and rewind. I actually did some tests of the rewinding; it is pretty efficient, although I wasn't able to test it thoroughly. So now let's look at some of the latest performance data. The blue line is the time to sync, with the dates in 24-hour intervals — I borrowed this graph format from Afri's work, and I really like it; I have started to use it everywhere. This black line is the database size and how it grows. It went up to about 250 gigabytes — actually, I remember now: this is the version where the receipts are not yet optimised. In the version where the receipts are optimised it goes down quite a bit, about 80 gigabytes less.
You can see that there are some inflection points here, where the slope changes, and my current theory — I haven't verified it — is that these are where the block gas limit was raised, or where there was some sort of attack. I pretty much remember the block numbers where the attacks happened by now, because I have watched them so many times in the logs: OK, here it comes. And of course, this only displays the blocks,
the ones that are on the heap. The green bits are the number of Patricia tree nodes, and here is another thing I have done differently. In geth there is currently no way to restrict how many Patricia tree nodes are in memory; it is done in an indirect way, by saying how many generations of tree nodes can survive — I think 128. So what does that mean?
What I have done is introduce a linked list which goes through all the Patricia tree nodes — with some overhead — but I actually count them, I count how many there are, and I have an LRU sort of cache of them, so I have a hard limit on how many nodes can ever be in memory.
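The linked-list-plus-count idea just described is the classic LRU cache shape; a minimal sketch of it (the `nodeLRU` name and string hashes are illustrative, not the actual turbo-geth types):

```go
package main

import (
	"container/list"
	"fmt"
)

// nodeLRU keeps a hard cap on how many trie nodes stay in memory:
// every touch moves a node to the front of a doubly linked list,
// and inserting past the limit evicts from the back.
type nodeLRU struct {
	limit int
	order *list.List               // front = most recently used
	items map[string]*list.Element // node hash -> list element
}

func newNodeLRU(limit int) *nodeLRU {
	return &nodeLRU{limit: limit, order: list.New(), items: map[string]*list.Element{}}
}

func (c *nodeLRU) Touch(hash string) {
	if el, ok := c.items[hash]; ok {
		c.order.MoveToFront(el)
		return
	}
	c.items[hash] = c.order.PushFront(hash)
	if c.order.Len() > c.limit {
		back := c.order.Back()
		c.order.Remove(back)
		delete(c.items, back.Value.(string))
	}
}

func (c *nodeLRU) Len() int { return c.order.Len() }

func main() {
	c := newNodeLRU(2)
	c.Touch("node-a")
	c.Touch("node-b")
	c.Touch("node-a") // refresh a
	c.Touch("node-c") // evicts node-b, the least recently used
	fmt.Println(c.Len()) // 2
}
```

Unlike geth's generation counting, the count here is exact, which is what makes a hard memory limit possible.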
I was hoping that I would be able to get a nice correlation here between this and this, but it hasn't happened yet. When I was at Toronto there was a memory leak; that night I fixed it, and I thought the graph would then just flatten out like this, but it hasn't yet, so there is still some work there.
This
is
the
probably
the
most
interesting
part
for
now,
so
so
260
gigabyte,
if
I,
need
to
remind
you.
This
is
the
archive
node,
which
basically
the
archive
node,
is
the
node
which
has
expanded,
State
or
all
the
entire
history.
What
does
it
mean?
Expanded
State?
It
means
that
it
for
each
block.
it has a record of what each account was at that block — what the value of each account is and what the value of each storage entry is. Of course, people sometimes say: all the nodes have all of that information anyway. Yes, it only takes about 44 gigabytes to store all the bodies of the blocks, but you cannot query that, right? The difference is that you have to somehow expand and pre-process it in order to be able to query it.
That is why I said to compare this to the archive nodes, which I think have crossed over a terabyte. The previous versions of this did not separate the history of accounts and the history of storage from the actual current storage, and this is one of the first times I have done it, and I came to an interesting conclusion, which I will show you later. Essentially there is this one extra bit, which marks whether a specific account or storage item
has been changed in this block. Maybe I am also going to optimise it, but this allows you to do very quick rewinding, because you know which keys to delete: if I have to rewind ten blocks back, I first fetch the set of keys that have been updated, and to rewind, I simply delete them from the database. I need to think about whether to change this somehow. You can also see that the receipts are quite large; I have optimised them already — they used to be bigger.
The reason this optimisation is available is that receipts actually store a lot of redundant information: they store block hashes, transaction hashes and transaction indices, although all of this is already provided when you query them. When you request a receipt, you effectively say: I want the receipt from this block hash, for the transaction at this index — so this information is repeated.
What I have also done is this: the receipts contain bloom filters, and I basically decided not to store the bloom filters at all; I compute them when you query the receipt. It is quite easy — it is not as if your block is super large; it doesn't take three minutes, it takes maybe half a second to compute the filter.
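Recomputing a log bloom on demand, as described above, is cheap because the filter is just three bit positions per item. A minimal sketch of the 2048-bit scheme — SHA-256 stands in for Keccak-256, and the exact byte ordering of Ethereum's bloom layout is simplified here:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// addToBloom sets three bits of a 2048-bit filter for one log item,
// mirroring Ethereum's scheme of taking three byte pairs of the
// item's hash, each reduced modulo 2048.
func addToBloom(bloom *[256]byte, item []byte) {
	h := sha256.Sum256(item)
	for i := 0; i < 6; i += 2 {
		bit := binary.BigEndian.Uint16(h[i:i+2]) & 2047
		bloom[bit/8] |= 1 << (bit % 8)
	}
}

// bloomContains reports whether all three bits for the item are set
// (false positives are possible, false negatives are not).
func bloomContains(bloom *[256]byte, item []byte) bool {
	h := sha256.Sum256(item)
	for i := 0; i < 6; i += 2 {
		bit := binary.BigEndian.Uint16(h[i:i+2]) & 2047
		if bloom[bit/8]&(1<<(bit%8)) == 0 {
			return false
		}
	}
	return true
}

func main() {
	// Rebuild the filter from the stored logs instead of persisting it.
	var bloom [256]byte
	for _, addr := range [][]byte{[]byte("contract-1"), []byte("contract-2")} {
		addToBloom(&bloom, addr)
	}
	fmt.Println(bloomContains(&bloom, []byte("contract-1"))) // true
}
```

Since the logs themselves are stored anyway, dropping the 256-byte filter per receipt and redoing this loop at query time is a straightforward space-for-time trade.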
3.44 gigabytes is the current accounts, and current storage is 6.21 gigabytes. Remember these numbers; I will repeat them when we compare to the history. So it is actually not that bad — the current state is not that large. As for what I have left to do: I can actually get some work done on it; at least, from my point of view, I need to test that it all works, and I need to fix some of the RPC API calls, potentially to support
what was presented yesterday. But even then, the database layout is going to change — I am pretty much certain of that. So it is not going to be the case that there is a first version and I will support backwards compatibility. No, no backwards compatibility; at least, it is too much of a burden for me to do that. Very recently I started to look at light clients. I had completely ignored them for several months, because I thought, you know, I can't really hold all these things in my head, and only recently did I start.
You can see here: there are the eth and les protocols, which are both protocols served over the peer-to-peer network in Ethereum, and they have some common commands. For example, you do a handshake first, and you can retrieve block headers and block bodies from each other, so I can do that. The third line is called GetNodeData. This is, unfortunately, specific to the Patricia tree, which is something I wanted to avoid, and it is used by fast-sync clients to retrieve the current state. That is the reason I cannot support fast sync: I simply don't have this information. It is also not used by the light clients, I suppose. So we could do receipts pretty easily, we can do block hashes easily; the one thing that I cannot really do very easily is the proofs — actually, only the proofs, so this is fine.
The thing is that if the light client or the fast node queries GetNodeData, or a proof for the current state, I will be able to respond, because turbo-geth has the current state at this moment. But if they are a little bit behind — say they are three blocks behind me — and they ask for it, I don't have it anymore; I have already moved on and rebuilt my Patricia tree, because I only have one, for the current state.
In order to solve this particular problem, maybe we will make a compromise: we will store some of the Patricia trees from the recent past — maybe ten blocks, depending. If I really want to support light clients, I might do that: I store a ten-block history of the Patricia tree, but then I remove it, and if somebody asks me for history further back, then tough luck, I am not going to do anything. That is my contract, you see.
OK, now let's get to the interesting things that I haven't implemented yet. These are just ideas I am going to throw at you, and some of them only appeared a couple of days ago. We had this conversation where somebody said to me: did you actually try to put everything in memory?
So we replaced LevelDB with Redis. It was actually surprisingly simple, because it is all abstracted behind an interface at the lowest level; it took me about two hours to completely replace LevelDB with Redis. So we ran it — I didn't have a machine with that much memory, because it needed something like 400 gigabytes.
6.21 and 3.44 — what is that? About ten gigabytes. You can actually afford to store at least the current state in memory, and this is what matters for most of the Patricia tree recalculation — that is where most of the time is actually spent at the moment. And this is how I am going to do it: one feature of BoltDB that I didn't mention is
that it has very little code. I was able, within two days, to completely read the code of BoltDB, pretty much completely understand it, and then start modifying it. This is what I like — I wouldn't be able to do that with LevelDB. So I basically understand how BoltDB works pretty much inside out; I have already made a few modifications to it, and one of the modifications, which should be pretty easy to do, is this:
all the pages will just be stored in memory; they will be loaded on startup and stored on shutdown, and then we will see how it works. And if this gets corrupted, what are we going to do? We still have the history, from which you can reconstruct it: currently, the way the history is done, you can actually reconstruct the current state from the history by just looking at the latest entry, and there is already a query for that, so this shouldn't be a problem.
You don't really want the sorting in only one dimension; you need it in at least two dimensions, and then you have to start inventing all these little ad-hoc schemes for jumping around the database: OK, I am going to have a timestamp, and then we just have a snapshot — and first of all it becomes a bit messy, and it is not very transferable. So — I was actually familiar with graph databases before, and I thought:
for example, if one day I decide to support light clients, then in the graph I can say: OK, I am not going to store the whole history of Patricia trees, but maybe I will store one every thousand blocks, and I will just insert some Patricia tree hashes in there. There would be a trade-off around reconstruction — it would be quicker for me to reconstruct. Or maybe somebody wants to track ERC-20 tokens and they want to overlay that onto the database here,
so that you can have RPC queries and subscriptions for that. It would be quite interesting, because then you can latch on to specific block numbers and specific accounts, which are all represented as nodes in the graph. This picture shows it: here we have the accounts, here the block numbers, and at the moment, when you query some state of the database — say the state of this account at block 339 — this is actually what you query.
If it is explicitly marked with these edges in the graph, we might be able to just traverse it directly. And another thing: usually the graph databases support things called node types and edge types, so we can create one type for the block, for example, and the query languages that normally come with a graph database support discriminating on the types. And remember this:
A
D
A
We could actually introduce one node type per block, so basically nodes will be marked by a certain property or node type depending on which block they appear in. So all the records will basically be marked by which block they belong to, and then it becomes really easy to rewind, because we can just query for which records belong to the nodes after 339 and remove them, all in one query. I started to have more interest in this after this event, so very recently, but this is actually not a product for the general public.
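The block-tagged rewind might look like this sketch (hypothetical record layout; a real graph database would express the removal as a single query over the block property):

```go
package main

import "fmt"

// Each record is tagged with the block in which it appeared, mirroring
// a "one node type per block" scheme (hypothetical names; LemonGraph's
// real API differs).
type record struct {
	block uint64
	data  string
}

// rewind drops every record that belongs to a block after target,
// which is conceptually the one-query removal described above.
func rewind(records []record, target uint64) []record {
	var kept []record
	for _, r := range records {
		if r.block <= target {
			kept = append(kept, r)
		}
	}
	return kept
}

func main() {
	recs := []record{{338, "a"}, {339, "b"}, {340, "c"}}
	fmt.Println(len(rewind(recs, 339))) // prints 2
}
```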
A
A
It's actually a very interesting project, and I started to look into it. The underlying storage is called LMDB, the Lightning Memory-Mapped Database; essentially, it's another B+ tree database. And the architecture of LemonGraph is, well, they built it to be super fast. It's written in C with some Python bindings. At the moment I have managed to install it and did some tests, but I haven't figured out yet how to map it onto the Ethereum state. But one...
A
B
A
As it says, it's called a log database: whenever you add an entry, it goes into the log and it acquires a log ID. This log ID is ever-incrementing, and it's that log ID that then gets referenced any time we create some kind of object. So when you create, let's say, a node in the graph, it gets added as a log entry, and then there's an entry in the index, which...
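A minimal sketch of that log-ID scheme, under the assumption of an in-memory log and a simple name index (the real LemonGraph structures differ):

```go
package main

import "fmt"

// logStore is a sketch of a log-structured store: every write is
// appended to the log and receives an ever-incrementing log ID;
// objects (such as graph nodes) are referenced by that log ID
// through an index.
type logStore struct {
	entries [][]byte          // the append-only log; position = logID-1
	index   map[string]uint64 // object name -> log ID
}

func newLogStore() *logStore {
	return &logStore{index: make(map[string]uint64)}
}

// addNode appends the payload to the log and records the node's log ID
// in the index.
func (s *logStore) addNode(name string, payload []byte) uint64 {
	s.entries = append(s.entries, payload)
	id := uint64(len(s.entries)) // IDs start at 1 and only grow
	s.index[name] = id
	return id
}

func main() {
	s := newLogStore()
	fmt.Println(s.addNode("n1", []byte("x"))) // prints 1
	fmt.Println(s.addNode("n2", []byte("y"))) // prints 2
}
```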
A
A
B
A
So they also have a bit of a crutch, which is the key-value store. They also have a very clever mapping there. Let's say you have a lot of repeated information, which we actually do have in the Ethereum state: if you look at the storage of an account, you will probably find a lot of addresses, because a lot of them are tokens, right? Addresses are just all over the place, and there are probably lots of duplications there. So you might take some advantage of that and compress it somehow.
A
So they did something very simple but interesting. They have these scalars, which could be, say, the property values, and each scalar gets an ID, incrementing each time. They have two indices: one maps from scalar ID to the actual byte sequence, and the other maps back. For example, when you have a new byte sequence, you need to decide whether it is a new one or it already exists, and for that they use CRC32 (they don't use SHA-3). This allows them to actually store each value once.

And one of the interesting features of this database is that you can always have a historical view of the database, like a snapshot: what did my database look like at this point? The reason they can do it is that this is a log structure, so they can always ask: what is the state up to log ID number N? Then you just look at it as if the database were at that point, and that might potentially be very useful for historical queries again.
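As a rough sketch of the interning idea (hypothetical names; LemonGraph's actual layout differs), each distinct byte sequence is stored once, with one index from scalar ID to bytes and a CRC32-keyed index for the reverse lookup. CRC32 is not collision-free, so the bytes are compared on a hit:

```go
package main

import (
	"bytes"
	"fmt"
	"hash/crc32"
)

// internTable stores each distinct byte sequence once: byID maps
// scalar ID -> bytes, byCRC maps a CRC32 of the bytes back to
// candidate IDs.
type internTable struct {
	byID  [][]byte
	byCRC map[uint32][]uint64
}

func newInternTable() *internTable {
	return &internTable{byCRC: make(map[uint32][]uint64)}
}

// intern returns the existing ID for the value, or assigns a new one.
func (tbl *internTable) intern(v []byte) uint64 {
	crc := crc32.ChecksumIEEE(v)
	for _, id := range tbl.byCRC[crc] {
		if bytes.Equal(tbl.byID[id-1], v) {
			return id // already stored once
		}
	}
	tbl.byID = append(tbl.byID, append([]byte(nil), v...))
	id := uint64(len(tbl.byID))
	tbl.byCRC[crc] = append(tbl.byCRC[crc], id)
	return id
}

func main() {
	tbl := newInternTable()
	a := tbl.intern([]byte("0xdeadbeef"))
	b := tbl.intern([]byte("0xdeadbeef"))
	fmt.Println(a == b) // prints true: repeated values share one ID
}
```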
A
C
Thanks, that's a great presentation; I learned a ton about Turbo-Geth. I'm curious about downsides, right? You talked a little bit about it in the context of, like, you don't get light client support out of the box. Are there any other trade-offs or downsides to the path you're taking? Yes...
A
So the light client work might need to be bolted on, yes. Then fast sync is very hard to do at this stage, probably possible, but hard. Another downside is that there's potentially going to be a slow startup time, because, basically, since you don't have a Patricia tree, you have to rebuild it all.
A
At startup, I had a bit of a solution where I stored, like, the top million nodes from the Patricia tree, but now I might throw that away, because when I separated the current state from the history, I realized that loading up the state actually becomes much faster, and I might not need that little hash cache. So that's another thing, the slow startup, potentially. I'm still not sure about rewinds; I might actually find some blocker there. I might need to do some more stuff.
A
E
B
E
So he presented a lot of results about reducing the database size. Another goal, which I outlined in the initial blog post, was to reduce the latency of disk I/O when processing transactions, by using a layout that wouldn't require traversing the tree to do, like, an SLOAD or to read something. Oh...
E
E
A
OK yeah, one thing I forgot to mention (there are a lot of details that I just forget to mention). So one of the things that's also different is that when a transaction is executed in geth, for example, if an account or some storage value is not in memory, it will always be accessed through the Patricia tree, so it has to be loaded up through this expansion process. Turbo-Geth doesn't do that; it looks in the Patricia tree first.
A
If it's there in memory, then it uses it; but if it's not, it doesn't do the expansion. It just goes straight into the database, does the key-value lookup, and loads it up in one go. And when I look at the current profiles: I was worried that this could be a noticeable thing, but I don't even see it anywhere in the profiler, so it's not actually a very significant cost in terms of transaction latency. Now I remember one more thing, which I haven't implemented yet.
A
From the transactions in the transaction pool, you can kind of get an idea of which transactions will happen in the next block, I mean roughly, right? You can sort them as if you were a miner, by gas price, and you see that these transactions have a very good chance to be in the next block. And so what you do...
A
I'm going to heat up the paths in my Patricia tree, or whatever, just in anticipation of those; I'm going to prime my caches. And with very high probability you will have guessed right, because when the next block comes, you don't actually need to do any disk I/O; you're ready for that transaction. And this...
B
A
The first one is that the strata support is missing, and it's pretty important for them. And the second bit is that they claim that geth cannot keep up with the tip in a very stable manner, so they sometimes, quite often, fall behind. There is no data to support this claim. But in order to alleviate this, the idea is to essentially pre-load the caches as you look into the transaction pool. And there's another idea,
A
which is actually about using some sort of machine learning for better caching strategies. Basically, the caching strategy which we all use everywhere is LRU, least recently used, where we're assuming that whatever has been used recently will be used again. But maybe this is not the right strategy; maybe it's far from optimal. Maybe you can actually learn from the past what a good strategy is.
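For reference, the LRU baseline is simple enough to sketch with Go's container/list; this is a generic illustration, not the node's actual cache:

```go
package main

import (
	"container/list"
	"fmt"
)

// lru is a minimal LRU cache: whatever has been used most recently is
// assumed to be used again, so the least recently used entry is
// evicted first.
type lru struct {
	cap   int
	order *list.List               // front = most recently used
	items map[string]*list.Element // key -> element holding the key
}

func newLRU(cap int) *lru {
	return &lru{cap: cap, order: list.New(), items: make(map[string]*list.Element)}
}

// touch marks a key as used, inserting it and evicting if needed.
func (c *lru) touch(key string) {
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el)
		return
	}
	if c.order.Len() == c.cap { // evict the least recently used key
		back := c.order.Back()
		delete(c.items, back.Value.(string))
		c.order.Remove(back)
	}
	c.items[key] = c.order.PushFront(key)
}

func (c *lru) has(key string) bool { _, ok := c.items[key]; return ok }

func main() {
	c := newLRU(2)
	c.touch("a")
	c.touch("b")
	c.touch("a") // "a" becomes most recent
	c.touch("c") // evicts "b"
	fmt.Println(c.has("b"), c.has("a")) // prints false true
}
```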
D
A
I was aware of it. I was also aware of another database called BadgerDB, which is kind of similar: they took LevelDB, they took BoltDB, and they tried to do something in between. No, honestly, I didn't do that for any specific reason. I just chose BoltDB and decided to kind of torture it until... until I decided.
A
A
So that's an interesting question. We actually discussed it with the, well, not really: we did discuss that particular theme with Felix yesterday. He was around here, and I told him that I'm totally happy to merge things back, but, oh man, I can't do it, because it's just impossible in terms of my time. Everything that I've changed would actually require active work to back-port it into go-ethereum, so I can't just do that.