From YouTube: Devcon VI Bogotá | Workshop 2 - Day 4
Description
Official livestream from Devcon VI Bogotá.
For a decentralized version of the stream, visit: https://live.devcon.org
Devcon is an intensive introduction for new Ethereum explorers, a global family reunion for those already a part of our ecosystem, and a source of energy and creativity for all.
Agenda 👉 https://devcon.org/
Follow us on Twitter 👉 https://twitter.com/EFDevcon
Okay, hello everyone, welcome to the danksharding workshop, Devcon edition. Cool, so first, I guess half of you already know what danksharding is. Okay, so this is the latest Ethereum sharding design, proposed by Dankrad (he's here today) in 2021, and this new design unlocked so many scaling possibilities. I mean, the challenge is: it's not that we shard into many separate chains; we shard the data blobs rather than having many separate chain structures. But there are still some technical challenges in the danksharding protocol that we need to fix before we have full danksharding. Proto is also here, and many of us.

So that's danksharding, from Proto and Dankrad, and this is proto-danksharding. Okay, so there are many common features in both EIP-4844 and full danksharding, so today we will break down these topics. You can see that they both have the KZG commitments, so Dankrad will introduce the cryptography part first. They also both have the blob transactions, so we will also introduce what a blob is today, and the fee market is also a shared common feature here. And the challenging parts of danksharding are PBS and DAS.
I'll be starting with giving a motivation, to understand why we need these advanced polynomial commitments and why we can't do all this simply with Merkle roots, which we're all quite familiar with. So I'll be going through the motivation, then I'll quickly give an overview of the finite fields that we use in order to commit to polynomials. I will motivate KZG commitments as, in a sense, hashes of polynomials, then go to the actual meat, which is how KZG commitments work. And finally, because we also use these a lot in our construction, I will be going through a technique which I call random evaluation, which is a nice trick that you can often use to work with polynomials, and that makes a lot of the things you want to do with polynomials a lot more efficient. Cool. So let's talk about data availability sampling and erasure coding.

So what is data availability sampling? The idea is, we somehow have a large blob of data, and what we're working on is scalability. Scalability means that somehow we have to make it so that a node has to do less work to achieve the same thing that we do today. Right now, every node ensures that all Ethereum blocks are available by downloading all the blocks. That's just an implicit part of it; it seems obvious, because right now you also execute the full blocks, but it's one of the things that don't scale in the current Ethereum system. So we need a way to reduce this workload, but we want to do it in such a way that we don't lose any of the security that this provides, and that's what makes it tricky. And so the basic idea is: okay, what if we take our data blob and we just check that random samples of the data are available?
If we do this naively, if we just take the data as it is, then this does not really work, because even missing a tiny amount of data is potentially catastrophic for a blockchain, but by doing random sampling you can never find out whether a tiny bit is missing. You can only see whether major parts of the data are missing.

So what we need to do, in order for this technique to work, is encode the data in such a way that even having only some part of the data, say 50%, is enough to guarantee that all the data is available.

And the way we do this is we extend the data using a so-called Reed-Solomon code, and if you know a little bit about polynomials, a Reed-Solomon code is nothing else but extending the data using polynomials. So what you do is, let's say, in a simple example, we have four blocks of original data, and we will take these four as evaluations of a polynomial. There will always be a polynomial of degree 3 that goes through these four points, and then we can evaluate this polynomial at more points. And what this means, because four points always determine a polynomial of degree three, is that any four of these points are enough to reconstruct exactly the same polynomial.
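To make the extension concrete, here is a minimal Python sketch (an illustration added here, not code from the talk): four data chunks are treated as evaluations of a degree-3 polynomial over a small prime field, extended to eight points with Lagrange interpolation, and recovered from an arbitrary subset of four of the extended points. The field size and chunk values are made up for the example; the real construction works over the roughly 255-bit BLS12-381 scalar field.

P = 65537  # a small prime for the toy example

def lagrange_eval(points, x):
    """Evaluate, at x, the unique polynomial through the given (xi, yi) points, mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

data = [17, 42, 99, 7]                         # four original chunks
original = list(enumerate(data))               # chunk i is the evaluation at x = i
extended = [lagrange_eval(original, x) for x in range(8)]   # extend to eight points

# Any four of the eight points recover every original chunk.
subset = [(x, extended[x]) for x in (1, 3, 5, 6)]
assert [lagrange_eval(subset, x) for x in range(4)] == data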
It does not matter which four points you have, and so this is the basis of erasure coding. And now, because of this, the data availability sampling idea that we had here actually works, because now I don't need to ensure that every single bit of the data is available. Now I only need to know that at least 50% of the data is available, and then I can always reconstruct everything. Yeah, sure?

Yes, yeah, we have double the data now. No, so that's the trick: we do random sampling. So, as an example, we query 30 random blocks. If the data is not available, that means the attacker must have withheld 50% of it, because if they submitted more than 50% to the network, it's all there, okay? So if it's not available, then each of these samples (because we used local randomness to query them) has only a 50% chance of succeeding. That means, in aggregate, the probability that all of them succeed is 2 to the minus 30, which is about one in a billion. So this is why it scales: you don't need to query fifty percent of the data, you only need to do a tiny number of random samples, and this number is constant, so it does not depend on the amount of data.
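As a quick sanity check on those numbers, here is a tiny calculation (my own, using the figures from the talk, and treating the samples as independent, which is a close approximation when sampling a large blob):

from fractions import Fraction

def fooling_probability(num_samples: int) -> Fraction:
    # Each sample of data with 50% withheld succeeds with probability 1/2,
    # so being fooled requires every sample to succeed.
    return Fraction(1, 2) ** num_samples

print(fooling_probability(30))         # 1/1073741824, roughly one in a billion
print(float(fooling_probability(60)))  # adding samples shrinks the risk exponentially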
So the problem with this is that Merkle roots do not tell you anything about the content of the data; it could be anything. So in this case, let's say an attacker wants to trick our data availability system: they could just not use this polynomial extension, they could just put random data. So basically, in coding terms, they have provided an invalid code. What that means is that if you get four different chunks of this data, you would always get a different polynomial. Consensus is all about agreeing on something, and in this case we wouldn't actually have agreed on anything, because the data is different depending on which of these samples we've got.

So the only way to make this work is if we add fraud proofs to the system, where you basically prove that someone has provided this invalid code. But that isn't great; that has some problems. Fraud proofs add a lot of complexity, and particularly in this case, because this is about the layer 1 itself, it would make our system very, very difficult to design, because now validators would need to wait for this fraud proof in order to know which block to even vote for. So it would be very iffy to design this.

So the interesting question is: what if we could find some kind of commitment that instead always commits to a polynomial, so we always know that the encoding is valid? Cool, and that is why we will introduce KZG commitments, and we need to start a little bit earlier. So I will start by introducing finite fields a little bit, for those who are not familiar with them. Okay, so what's a finite field?
Okay, so to understand what a field is, basically think about the rational, real or complex numbers, which you've already learned about, and just remind yourself that we have basic operations we can do on them. We can add, subtract, multiply and divide, and we can do all of these except division by zero, so you're always able to do these operations, and you have some laws: associativity, commutativity, distributivity. I could give the formal laws here, but I don't think that would be the best illustration, because you're already very familiar with these rules from working with rational or real numbers. The big difference with finite fields is that, unlike these fields that we're very familiar with, which all have an infinite number of elements, finite fields have a finite number of elements. That's quite important, because otherwise we can't encode them with a finite number of bits, which is something that we need to be able to do. So that means that each element can be represented using the same number of bits. And as an example of how this works, here's a very small finite field. This is F5, and basically the way it works is:
F
Is
you
you
take
the
five
numbers:
zero
one,
two,
three
four
and
and
you
you
use
your
normal
integer
operations
to
compute
like
the
addition,
subtraction
and
multiplication,
but
whenever
you've
done
that,
you
take
the
result
and
do
module,
take
the
remainder
after
division
by
five.
So
you
take
it
modular
five
and
then,
when
you
write
it
out,
basically
you'll
find
that
for
each
element,
so
we
haven't
yet
defined
like
how
do
we
do
division?
F
So
if
you
write
down
the
multiplication
table,
you'll
find
that
each
element
has
actually
an
inverse.
So
basically,
if
you
take,
for
example,
here
like
a
2,
then
you
can
see
that
2
times
3
is
6,
but
modulo
five,
that's
one.
So
it
has
an
inverse.
Okay,
that's
nice
and
like
then,
the
other
way
around
three
the
inverse
is
two
and
for
four.
F
If
you
take
four
times
four,
it's
16
and
it's
and
that's
that's
again:
modulo
five,
that's
one,
and
so
we've
just
found
like
by
just
listing
these
these
numbers
that
every
element
has
an
inverse,
and
the
reason
for
that.
That
is
that
5
is
a
prime
number.
So whenever we take these modular operations modulo a prime number, we'll find that we actually have a finite field, and those are our finite fields, except that the fields we're going to be working with in practice will have a lot more elements. The prime that we're going to be using has 255 bits, so it's a very, very big number, because we want to be able to represent a lot of numbers in this field.
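Here is a small Python sketch of the F5 example above (an illustration added here, not protocol code), showing that the usual operations are just integer arithmetic followed by "mod p", including inverses, and that the same code works unchanged for the 255-bit prime used in practice:

def add(a, b, p):  return (a + b) % p
def mul(a, b, p):  return (a * b) % p
def inv(a, p):     return pow(a, -1, p)   # exists for every nonzero a when p is prime

p = 5
assert mul(2, 3, p) == 1 and inv(2, p) == 3   # 2*3 = 6 = 1 (mod 5)
assert mul(4, 4, p) == 1 and inv(4, p) == 4   # 4*4 = 16 = 1 (mod 5)

# The same works for the 255-bit BLS12-381 curve order used for blobs.
BLS_MODULUS = 52435875175126190479447740508185965837690552500527637822603658699938581184513
x = 123456789
assert mul(x, inv(x, BLS_MODULUS), BLS_MODULUS) == 1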
Hashing polynomials. Okay, so a quick reminder: what's a polynomial? A polynomial is an expression of this form: a sum over coefficients f_i times terms of the form x to the power of i. The property is that it has to be a finite sum, so it's a sum from zero to n, where n is the degree of the polynomial. And the other important thing that you always have to remind yourself of: there can never be any negative terms, so you cannot have x to the minus one; there are only terms of the form x to the power of 0, 1, 2, 3 and so on.

And each polynomial defines a polynomial function, and it's important to distinguish between the two. A polynomial is just an expression of this type; you could even think of it as a list of coefficients, and then it defines a function. But, for example, in finite fields you will have the property that the same polynomial function can have many polynomials corresponding to it, because there's only a finite number of functions, but there's an infinite number of polynomials. This property you don't have in infinite fields. And the core property that polynomials have is that for any k points there will always be a polynomial of degree k minus one or lower that goes through all of these points, and that polynomial is unique. The other property is that a polynomial of degree n that is not identically zero has at most n zeros.
So what would be cool is if we could imagine a hash function for polynomials. Let's imagine that we could have a hash function that takes a polynomial and hashes it; okay, that's easy, but it should have an extra property, which is that we can construct proofs of evaluation. So basically, what we want is that for any z, any point we want, we can evaluate the polynomial, compute y = f(z), and we want some proof that this is correct. That would be an interesting hash of polynomials that gives us something new, and this hash and the proof should be small in some sense.

So here's an idea: okay, what if we just choose a random number? For example, let's say we choose the number three. If we want to hash a polynomial, we just evaluate it at this random number three, so we set x equal to three. Here's a couple of examples of how that works if we stay in our small field F5 from before, with just those five numbers. If the polynomial is x squared plus 2x plus 4, then the hash is: x squared is 9, plus 2x is 6, plus 4, which is 19, and modulo 5 that's 4. And then here's a second example, a bit of a bigger polynomial; modulo 5, in this case, it's zero. Okay, that seems a bit stupid, to just evaluate it at one point.
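A tiny sketch of this "evaluate at a point" hash, reproducing the first example above (the second polynomial from the slide is not in the transcript, so only the first is checked here):

def poly_eval(coeffs, x, p):
    """Evaluate a polynomial given as [f0, f1, f2, ...] at x, mod p (Horner's rule)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def toy_hash(coeffs, p=5, point=3):
    # "Hash" a polynomial by evaluating it at one fixed point.
    return poly_eval(coeffs, point, p)

# f(x) = x^2 + 2x + 4  ->  f(3) = 9 + 6 + 4 = 19 = 4 (mod 5)
assert toy_hash([4, 2, 1]) == 4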
But the interesting thing is, if our modulus has 256 bits, which is what we're going to work with in practice, it's actually extremely unlikely that two randomly chosen polynomials have the same "hash", in quotation marks, just like it is for a normal hash function. So that's an interesting property: it seems like a very stupid and simple operation, but in one way it already has a property like a hash.

Okay, so if we accept this for now, let's have a look at some of the things we could do with it. For example, we can actually add two hashes of polynomials: if we take the hash of two functions, the hash of f and the hash of g, then the hash of the sum of the functions will just be the hash of f plus the hash of g, and that's because of this homomorphic property, which is trivial if you write it out in polynomials. And the same is true if you multiply two of these polynomials, and that's just because polynomial evaluation itself is homomorphic: you can either first add two polynomials and then evaluate, or you can evaluate them and then add the results, and the same goes for multiplication.
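The homomorphic property is easy to check numerically; a quick self-contained illustration (my own sketch, again over the toy field F5):

p = 5

def toy_hash(coeffs, point=3):
    return sum(c * pow(point, i, p) for i, c in enumerate(coeffs)) % p

f = [4, 2, 1]          # x^2 + 2x + 4
g = [1, 0, 3]          # 3x^2 + 1
f_plus_g = [(a + b) % p for a, b in zip(f, g)]

# Evaluating the sum equals summing the evaluations: hash(f + g) = hash(f) + hash(g) (mod p).
assert toy_hash(f_plus_g) == (toy_hash(f) + toy_hash(g)) % p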
Now, if you use this, then if someone knows this random number, they could easily create a collision for this polynomial hash function: while for random polynomials it's very unlikely that they evaluate to the same point, it is very easy to manually craft two polynomials that evaluate to the same value at this known random number. So it doesn't quite work as a hash function as we know it. But what would be different if, somehow, we could instead put this random number into a black box?

So if we could find a way of computing with these finite field elements, but instead of giving everyone who wants to evaluate this hash function the actual number, you give them a black box. We assume we have a cryptographic way of putting a number into a black box, and then we give them our random number s, and we also give them the random number squared, and s to the power of three, and so on, but all of them only inside the black box. And we do it in such a way that this black box has the property that you can multiply it by another number, and you can add two of these, but you cannot multiply two numbers that are in black boxes.

Then people cannot handcraft polynomials so that they evaluate to the same number at that point. And basically the cool thing is that elliptic curves actually give you exactly that: you can think of elliptic curves as a way of creating black-boxed finite field elements, and the finite field that you have to use is given by the curve order of that elliptic curve. So say we have an elliptic curve, which we call G1 (why we need this index 1 we'll come to later); it's just an elliptic curve that has a generator, which means a point such that,
if you add that point to itself again and again, it will generate the whole curve, and the order of the curve is p, so that's the number of points. And then we have the property that x times G1, where x is a finite field element, is basically this black box, and the reason for that is that it is hard to compute so-called discrete logarithms. So it's difficult: when you have computed this x times G1, it is difficult to compute x back from that point.

That's a cryptographic assumption. And so, if we have that, then if we take two elliptic curve elements G and H, we can multiply them by field elements (we can compute x times G), we can add the two, G plus H, and we can compute linear combinations like x times G plus y times H. But what we can't compute, without computing the discrete logarithm, which is hard, is something like G times H. And so, just like we said before, we want this black box, so we will introduce the notation [x]_1, meaning x in square brackets with a subscript one, to say that it's in this G1, which is the first elliptic curve we're going to use; later we'll need another one.

We define [x]_1 as x times G1. So basically, when you see these square brackets, think of it as a prime field element inside this elliptic curve black box. We can put stuff inside, and there's no easy way to take it back out, but we can do some computations while it's in there. Cool, and with this we are ready to introduce KZG commitments.
Okay, so what we're going to do is introduce a trusted setup. We're going to assume that someone has taken a random number s and has computed, inside this black box, and given to us, the powers of s: s to the power of 0, 1, 2, 3 and so on, in our black box (and actually, forget the second row for now, we'll come to that later). And so, if we take a polynomial function, which we've defined previously, so it looks like a sum of coefficients times powers of x, then we define the KZG commitment as this sum, which we can evaluate: we take the coefficients and we replace x to the power of i by s to the power of i inside the black box. And this, on the left, is something we can obviously compute: it's just a linear combination of these elements, which we have been given as part of the trusted setup. And the cool thing is, if you write this out, it is just f of s evaluated inside the black box. So effectively we've come back to what we said before.
And yes, this we call the KZG commitment to the function f. Now, in order to do interesting things with this, we'll need to introduce elliptic curve pairings. This is where we get our second group; we actually need a total of two groups. A pairing is a function from two elliptic curve groups into a target group, which is a different kind of group. It's actually not an elliptic curve, but that's not too important here. It takes these two elliptic curve elements, one in G1 and one in G2, and it has the cool property that it is what we call bilinear, and what that means is that you can compute linear combinations.

So, for example, if you have the pairing of a times x with z, you can take the a out, and the same in the second argument. And in addition, if you have a sum, it splits into two pairings. So it's like a distributive law: e(x plus y, z) equals e(x, z) plus e(y, z), and the same goes in the second parameter of the function. And the cool thing is that what we couldn't do before,

what we couldn't do previously between elliptic curve points, which is multiply two elliptic curve points, we can, in a way, do using the pairing. If we have one of our values in the first group and one in the second group, then, due to this bilinear property, the pairing actually computes something like x times y in the target group. We define this additional notation for the target group, and then we have this very clean and nice equation: the pairing of x as a black-box element in G1 and y as a black-box element in G2 is x times y as a black-box element in the target group. This is very important: basically, at this point, when we have pairings (and that's why we really need them), we can do one multiplication.
We can only do one, because afterwards we get this target group element, and with that we can't really do anything more. But it turns out that this is actually enough to do a lot of very useful stuff with elliptic curves.

Say we have two polynomials f and g, and we commit to those polynomials, but we commit to f in G1 and to g in G2, in the different groups. Then this pairing actually lets us compute the product of these two commitments in the target group. So with this really cool polynomial hash that we have defined, if we commit to the polynomials in the right groups, we can now multiply two polynomials that are committed in this way: we can multiply the commitments without even knowing the polynomials themselves. Okay, cool.

Okay, so we will need to introduce one last missing piece in order to fully get to how KZG commitments work and how we can construct proofs, and that is quotients of polynomials. Okay, so let's say we have a quotient of the form (f(x) minus y) divided by (x minus z). You can just see this as a formal expression, but sometimes this quotient is exact, so sometimes the quotient will actually result in another polynomial. And there's a theorem, called the factor theorem; it's relatively elementary math, you've probably learned it in school at some point without calling it that. It basically says that this polynomial division is exact exactly if f(z) equals y.
And you can kind of see that in one direction, because if f(z) equals y, then if you set x equal to z, the numerator f(x) minus y becomes zero, and you also get a zero in the denominator, x minus z. So, sorry, what I mean is: the denominator is zero at z, and that can only work out if the numerator is also zero at that z. So the division can only be exact if this condition is correct. The other direction is slightly more complicated.
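To see the factor theorem in action, here is a short sketch (my own illustration) that divides f(x) minus y by (x minus z) with synthetic division over a prime field: the remainder is zero exactly when y equals f(z), and in that case the quotient q(x) is itself a polynomial.

P = 5  # toy field again; the argument is identical for the big prime

def poly_eval(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def divide_by_linear(coeffs, z, y):
    """Divide f(x) - y by (x - z) using synthetic division; return (quotient, remainder)."""
    f = list(coeffs)
    f[0] = (f[0] - y) % P
    n = len(f) - 1
    q = [0] * n
    q[n - 1] = f[n]
    for k in range(n - 1, 0, -1):
        q[k - 1] = (f[k] + z * q[k]) % P
    remainder = (f[0] + z * q[0]) % P
    return q, remainder

f = [4, 2, 1]                 # f(x) = x^2 + 2x + 4
z = 3
q, r = divide_by_linear(f, z, poly_eval(f, z))
assert r == 0                 # exact division: the claimed y really is f(z)
_, r_bad = divide_by_linear(f, z, (poly_eval(f, z) + 1) % P)
assert r_bad != 0             # a wrong y leaves a nonzero remainder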
Okay, and now we get to how the KZG proofs work. If a prover wants to prove that f(z) equals y, they compute this quotient q(x), which is (f(x) minus y) divided by (x minus z), and send the proof pi, which is q of s in the black box, so the commitment to the polynomial q. And in order to verify this, what the verifier will do is take this quotient commitment and multiply it by the commitment to s minus z, and check that this is the same as the original polynomial commitment minus y. This is unfortunately not very readable on this background, but if you write it out in the pairing groups, then you get, on the right-hand side, q of s times s minus z in the target group equals f of s minus y, and this is the same as the second equation. So the cool thing is, we can verify this equation because we are able to multiply two polynomial commitments using the pairing, and this way we can verify that the quotient was actually computed correctly.

Cool, yeah, and that is basically how KZG commitments work. So the idea is just: if you can compute this quotient, then you'll be able to find something that fulfills this equation, and, using the factor theorem that we mentioned previously:
if f(z) is not y, then you cannot compute this quotient; it doesn't exist, it's not a polynomial, and we can only commit to polynomials. So yeah, this is the recap on the KZG commitment: we can commit to any polynomial using a single element in G1, and it is just the evaluation of the polynomial at the secret point s inside the black box. We can open the commitment at any point, so we can compute f of z, and, by computing the quotient q(x), we can compute this proof, which is q of s in the black box. And in order to verify that proof we use this pairing equation, and that shows a verifier that this evaluation is correct.
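Putting the pieces together, here is a toy end-to-end KZG flow in Python. This is only a sketch for intuition: it assumes the py_ecc library's BLS12-381 helpers (G1, G2, add, multiply, neg, pairing, curve_order) behave as described, and it uses a publicly known "secret" s, which a real trusted setup must never reveal.

from py_ecc.bls12_381 import G1, G2, add, multiply, neg, pairing, curve_order

p = curve_order
SECRET_S = 8927347823478352432985  # toy only: in reality nobody may know this value

# Trusted setup: powers of s hidden in the two groups.
setup_g1 = [multiply(G1, pow(SECRET_S, i, p)) for i in range(4)]
s_in_g2 = multiply(G2, SECRET_S)

def commit(coeffs, setup):
    """Commitment = sum_i coeffs[i] * [s^i], a linear combination of setup points."""
    acc = multiply(setup[0], coeffs[0] % p)
    for c, point in zip(coeffs[1:], setup[1:]):
        acc = add(acc, multiply(point, c % p))
    return acc

def poly_eval(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def quotient(coeffs, z, y):
    """q(x) = (f(x) - y) / (x - z); exact because y = f(z)."""
    f = list(coeffs); f[0] = (f[0] - y) % p
    n = len(f) - 1
    q = [0] * n
    q[n - 1] = f[n]
    for k in range(n - 1, 0, -1):
        q[k - 1] = (f[k] + z * q[k]) % p
    return q

f = [7, 3, 11, 2]                             # f(x) = 2x^3 + 11x^2 + 3x + 7
C = commit(f, setup_g1)                       # [f(s)]_1
z = 5
y = poly_eval(f, z)
proof = commit(quotient(f, z, y), setup_g1)   # pi = [q(s)]_1

# Check e(pi, [s - z]_2) == e(C - [y]_1, [1]_2), i.e. q(s) * (s - z) == f(s) - y.
lhs = pairing(add(s_in_g2, multiply(G2, (-z) % p)), proof)
rhs = pairing(G2, add(C, neg(multiply(G1, y))))
assert lhs == rhs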
Cool, so that is how KZG commitments work. Now I want to do something slightly more, which is a technique that we use quite a lot; we're even using it in EIP-4844. So I want to give a quick introduction to how it works, which is the random evaluation trick.

Okay, so let's recall that KZG commitments are nothing but evaluating a polynomial f at a secret point s inside this elliptic curve black box. So, in a way, this is already like a random evaluation: basically, what we've done is identify this polynomial using a random evaluation, and we somehow found that this is good enough to hash a polynomial in a way that makes it very difficult to create a collision. And, more generally, this random evaluation trick can be used to verify polynomial identities, and the reason for that is the Schwartz-Zippel lemma. There is a more general version, but let's say what it says in one dimension. So let's have a polynomial of degree less than n that is not identical to zero. There's one particular polynomial that is zero everywhere; that's just all zeros, right, that's a very special polynomial.
So let's say it's not that. Now let's take a random point z in F_p. Then the probability that f(z) is 0 is at most n over p, and that's because f can have at most n zeros. And this is a very useful thing, because our p is very, very large and our degrees are relatively small compared to it. So, for example, for BLS12-381, p is 255 bits; say we commit to a polynomial of degree 2 to the 12th.
And so here's the first way in which we can use this. We have these transaction blobs that we'll define for 4844: they are commitments to polynomials of degree 4095, so in total four thousand and ninety-six points. Computing such a commitment is not very expensive, but it is not free either; it takes around 50 milliseconds. Verifying one KZG proof, though, is quite a lot cheaper: it only costs about two milliseconds. So we can use this to our advantage.

So the idea is this: we take our commitment to the polynomial, C, and we take the polynomial f itself. What we want to verify is that we have the polynomial corresponding to the commitment; in this case we have all the data and we have the commitment. The naive way is to just compute the commitment from the polynomial, but that's expensive. Okay, how can we do it cheaper? We compute a random point, and one way to get a random point is actually a very cool technique called Fiat-Shamir: we take all our inputs, so we compute z as the hash of the commitment and the polynomial. Why is that kind of random? Because if an attacker tries to craft something, if they try to adversarially compute either C or f, it will always change the point z, so it's very hard for them to craft these in a way that breaks our construction.
So basically, this is a common technique in cryptography to get something random that the attacker cannot control. And so we evaluate this polynomial, getting y, at this random point z that we've derived, and then we compute a KZG proof that f(z) equals y. And then what we'll do is just add this proof to our transaction blob wrapper, which is the way we're sending transactions. And then, to verify this, you also compute f(z), which you can do because you have the data, and you check the KZG proof pi, and that's done. And that's much, much cheaper than computing the commitment. So that's one way in which we can use random evaluations to save us a lot of work and make things more efficient.
Okay, so here's another way in which we can use this random evaluation technique. ZK rollups use many different proof schemes, and only a handful, if any right now, natively use KZG commitments over BLS12-381. And so the question is: how do all the others make efficient use of the blob commitments that we want to add with 4844 and then full sharding? Because computing KZG commitments inside a proof, or computing pairings, is pretty expensive; that's a very expensive operation in a zero-knowledge proof.

The idea is basically that you commit to the data in a different way. So we have three different inputs: we have our blob data, which is this function f itself, and we have two different types of commitments. We have C, which is our blob commitment, which is what we'll use inside Ethereum for 4844, and we have another way of committing to this data, which is the ZK rollup's native commitment. So the rollup will also have some way of committing to data that works well for its own proof scheme. And so, in this case, what we'll do is take z as a hash of C and R, these two different commitments, and we will compute y again as f(z), and we'll add pi as a proof that f(z) equals y. And we'll add a precompile that allows us, in the Ethereum virtual machine, to verify the KZG proof pi.

So we will know that C is a correct commitment to f, and what we'll need to add is the proof that R is also a correct commitment, and the ZK rollup can do that inside its proof. So inside the proof they also have to somehow get C and R as inputs, hash them, and compute z. Then they can evaluate: they will also have f, because the rollup wants to use the data, so f is completely available to them, and they just have to compute y equals f(z), and use some technique to verify that the f they have is the same one. There are ways to make this easy, and then they can verify that they have the same data as was committed to through C. So that will make it much easier to use these commitments in ZK rollups.
Yeah, I collected some resources if you want to read further on this. Vitalik wrote, a while ago, a post on elliptic curve pairings. Because there was a lot of interest in it, I wrote some notes on this last part, how to use KZG commitments in ZK rollups. Vitalik recently wrote a summary of what the difficulties are with alternatives to KZG commitments. And here, if you want something very similar to this talk, I wrote a blog post about KZG commitments. And then, of course, if you want to dive deep, there's the original KZG paper. If you scan this QR code, you'll get all these links.
Yes, but in cryptography we already set our security parameter under the assumption that an attacker will do a lot of computation to try to break it, like 2 to the 50, 2 to the 60 or more computation power. This is much, much more than we would ever use in the actual protocol, so this is all already covered by the cryptographic construction.

I mean, you can do it, but the probability of randomly hitting that is extremely, extremely low: if you construct it so that the probabilities are less than two to the minus 200, or something like that, it's so low you cannot even think about it, yeah.

That's somewhere correct, but okay: we are always limiting the degree of our polynomials, right? Our trusted setup will only go up to a certain power, for example up to two to the power of twelve, and inside that space there's only the zero polynomial. Yes, yeah, so if you had no limit on the polynomial degree, then it doesn't work, but we always have a limit.
So I'm going to start with a small explanation of how all this math stuff goes into our protocol and how, you know, all the extra bandwidth of 4844 travels around and gets verified. Then Proto is going to take it and tone it down into more practical stuff, like how the L2s are going to use the data, and then Ansgar is going to tone it down even more and basically explain how people pay for this data. So, okay.

So basically, this is a graph that shows what Optimism, an L2, has as its costs, and you can see that this blue part is the data fees: how much money they are paying for the data they put on chain. The other, white part is some other costs, but you can see that the blue, the data, is dominating all the costs. So basically, what 4844 is, is a mechanism that drastically increases the amount of data people can post on chain.
And this is all it is, right? Okay, so basically what we want to do is increase the amount of data. On this very simple picture, on the left side, you can see our data, which we call a blob, because it's a bunch of data that also corresponds to a polynomial, and on the right-hand side you can see a small thing, a commitment, that represents that data, that commits to that data. And the rough idea is that the commitment goes on chain forever, whereas the blob is kind of, you know, there for a bit and then disappears. So this is the high-level strategy of how we increase bandwidth: we commit to data, we keep the commitment forever, but the data is ephemeral, in a way. Okay, so let's talk a bit about what this data is, what these blobs are, and how polynomials enter this picture.

So, okay, this is a polynomial; I think by now you're very familiar with it, based on the last talk. The question is: how do we put data into this polynomial? And the basic idea is, you know, you have these coefficients, the a1, a2 and so on, and you can basically put data into these coefficients. So if you have some data, one four one six, you can put it in the coefficients, so you make this little polynomial.
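A minimal sketch of that encoding step (added here for illustration): the numbers one, four, one, six from the slide become coefficients, and the resulting polynomial can be evaluated like any other.

data = [1, 4, 1, 6]                      # raw data chunks from the example

def as_polynomial(chunks):
    """Interpret data chunks directly as polynomial coefficients (lowest degree first)."""
    def f(x, p):
        acc = 0
        for c in reversed(chunks):
            acc = (acc * x + c) % p
        return acc
    return f

f = as_polynomial(data)
print(f(2, 2**31 - 1))   # 1 + 4*2 + 1*4 + 6*8 = 61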
And that's a very straightforward way to put data into a polynomial. So think of how, in our case, these numbers, one four one six, can resemble real data. In our case the numbers are going to be finite field elements, so they're going to be elements of a finite field, which means a number between zero and this insanely huge prime number. So each coefficient is going to be a number between these two things, and that's about 254 bits.

So, you know, now we know the way to store 128 kilobytes in a polynomial, and that's kind of interesting because, you know, right now rollups don't even use close to that number; maybe they use one kilobyte. So we're basically giving lots and lots of space, maybe even uncomfortably lots of space, to rollups to put their stuff in. But this is the whole idea of 4844.
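The 128-kilobyte figure falls out of the blob parameters mentioned earlier (4096 evaluations, each a roughly 254-bit field element stored in 32 bytes); a quick check of the arithmetic:

FIELD_ELEMENTS_PER_BLOB = 4096        # the degree-4095 polynomial has 4096 points
BYTES_PER_FIELD_ELEMENT = 32          # each element fits in 32 bytes (values stay below the 255-bit prime)

blob_size = FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT
print(blob_size, blob_size // 1024)   # 131072 bytes = 128 KiB per blob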
So you end up with a situation where, you know, you end up posting on chain lots of data and then a small commitment, and this is the rough idea. So, just to talk a bit about when this data travels, what the network is supposed to do: when you see a commitment that corresponds to lots of data, the way to make sure nobody is fooling us and giving us a wrong commitment to other data, which would be catastrophic, is to commit to p(x) yourself, using this black box again, and then check that the commitment the verifier computed matches the commitment that the guy gave you. So that's basically a pretty straightforward way to verify that a polynomial matches the commitment.

But then, you know, we have more data and more commitments: in a transaction you can have lots of those, in a block you can have lots of those, and that starts being quite expensive, because it's 50 milliseconds to do each of the commitments and it scales linearly. So that ends up being quite expensive, especially for the mempool and this kind of stuff.

So, in the end, what we're using is KZG proofs and this whole random evaluation trick that Dankrad taught you before: basically, for each blob of data and its commitment, we also put a proof of a random evaluation. So the proof is a helper that helps you do this small, cheap verification.
So, with EIP-4844 we're introducing a new transaction type to confirm these blobs in the EVM. However, something to note is that there's a new concept here, where we have a transaction type with data outside of the transaction that is now the responsibility of the consensus layer. So it's like a regular 1559-style transaction, but then the transaction contains some pointers, or hashes, which then commit to the blob data.

This is the transaction in a little bit more detail. Something else to note is that it's not RLP but SSZ, so it merkleizes nicely, which is better for layer 2. And then note here that we have these data hashes, committing to, or rather hashing, the KZG commitments, which in turn commit to the full blob data.

So the blob content is, unlike call data, not available in the EVM, and eventually we can prune this blob data. It's not a long-term commitment to store all of this blob data; rather, we are introducing this blob data just for its availability properties. A layer 2 needs this data to help users sync the latest state permissionlessly, without communicating directly with the sequencer or whichever operator exists on a rollup, and then people can reconstruct the latest state.
This is a task for the rollup operator: as a rollup operator, you publish your bundle to layer one with this new transaction type, and then in the transaction pool we have both the transaction that pays the fee as well as the wrapper data with the actual blob content. Then the layer one beacon proposer creates a block, and the blobs make their way from the transaction pool in the execution layer to the consensus layer. As this happens, the blobs don't get into the execution layer block; they're just the responsibility of the consensus layer. The proposer, on the beacon side, takes the blobs and bundles them together with blobs from other transactions as a sidecar. The execution payload stays on layer one, whereas the blobs stay available for a sufficient amount of time to secure layer 2, but can be pruned afterwards. So blob data is bounded.

We have the layer 2 sequencer communicating with the transaction pool, the execution engine communicating with the beacon proposer, then beacon nodes syncing the blobs with each other, and then there's the split of the data: the beacon nodes hand the execution payload to the execution layer to process the EVM, and the fees will be processed by everybody, while the blobs stay in the consensus layer, and a layer 2 node retrieves them to reconstruct the layer 2 state.
Dankrad already explained the proof-of-equivalence trick, so I'll give you just a simplified overview of how we do this in the EVM. We introduce two new things in the EVM: an opcode and a precompile. The opcode simply retrieves the data hash, which is this versioned hash of the commitment. And then, to use the data we're passing in, we supply a proof, the index of the blob we're pointing to, and the commitment that hashes to the hash we retrieved from the opcode, and then the precompile will verify everything.
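For intuition, the link between the hash the opcode exposes and the commitment the precompile checks looks roughly like this (a sketch based on the EIP-4844 draft; exact constants and input layout may differ in the final spec, and verify_kzg_proof below stands in for a pairing check like the one shown earlier):

import hashlib

VERSIONED_HASH_VERSION_KZG = b"\x01"

def kzg_to_versioned_hash(commitment: bytes) -> bytes:
    """The 'data hash' stored in the transaction: a version byte plus the hash of the commitment."""
    return VERSIONED_HASH_VERSION_KZG + hashlib.sha256(commitment).digest()[1:]

def point_evaluation_precompile(versioned_hash, commitment, z, y, proof, verify_kzg_proof):
    """Sketch of the precompile; the KZG verification function is passed in."""
    if kzg_to_versioned_hash(commitment) != versioned_hash:
        return False          # commitment doesn't match the hash the transaction committed to
    return verify_kzg_proof(commitment, z, y, proof)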
Similarly, we have the ZK state transition that needs to be verified. This is also rollup-specific, it's up to you to design this, but with the data that's verified and the ZK proof that's verified, we can then get some outputs that we can persist and then use to enable withdrawals.

And then this is the version for interactive optimistic rollups. Interactive optimistic rollups use this concept called a pre-image oracle, where we do not access all the data at the same time, but rather we load pre-images one at a time, and by bisecting an execution trace we only really have to do a proof for a single step, a single execution of a single VM instruction, and this might be loading some data. So, for example, we start from a layer one block header hash; then we retrieve the full block header as a pre-image; then we retrieve the transactions by digging into the Merkle commitment in the transactions hash; and then we can get the data hashes from the transaction, and from the data hash we can get the KZG commitment. And then it's not a regular hash commitment anymore, but a different type of commitment, where the same oracle loads one point from the blob that is committed to by the blob transaction.
Okay, yeah, hello everyone, Ansgar here. I'm going to talk a little bit about how, now that we hopefully will have this functionality in the future, you can pay for it, but also, kind of conceptually: the data bandwidth is already kind of pushing its limit, so where does the extra space come from, resource-wise? Basically, where does the efficiency gain here come from?

And to understand that, first we have to just look in general at how resource pricing on Ethereum works today. So this is just kind of my way of thinking about categorizing the different resources we have on Ethereum: there are things like bandwidth, compute, state access, memory, state growth, history growth. This is a non-exhaustive list, right, but this is
basically the kind of thing that actually causes effort for nodes while they are processing a transaction. And if you squint at this hard enough, you'll notice that there are basically two different types of resources here, and we call those the burst limits and the sustained limits. The burst limits, I think, are things that basically cause costs right at the moment that the block is propagated: the bandwidth to propagate a block, the compute to actually verify it. The critical point there is that they have to be bounded in order for blocks to still be propagated in a timely manner, and in order for nodes to be able to verify them at all; otherwise they might run out of resources. The sustained limits don't matter so much block to block; those are more things that accumulate over time, so state growth, history growth, these kinds of things: in a single block you can't really produce too much damage there, but over time it just basically makes it more and more costly to run a full node.

As it turns out, if you look at this, there's some sort of structure to it, and you can actually reorder it a little bit, and it turns out that usually there's a relatively good matching between a specific burst limit and a specific sustained limit. So bandwidth and history growth kind of correspond, right, because the bigger a block is, the more bandwidth you need to propagate it, but then also the more disk space you need to
keep it around, you know, forever for history purposes; and it's similar with state access and state growth, these kinds of things. Now, specifically for 4844, what we are introducing is this new type of data, so the resources we're talking about here are in this first row: on the burst limit side it's the bandwidth, how big can blocks get, and then on the sustained limit side it's just how much resources you need to store the history of Ethereum.

And if you dig into the EIP a little bit, you already know that there is this limit in terms of history growth: we basically introduce this new mechanism where blobs are only stored for a single month. And so this is basically why, on the history cost side, it does mean that there will be some extra requirement for node operators, but it's quite bounded, because, unlike normal history, which today is stored forever, and even after this nice EIP-4444 it's still going to be stored for a year, blobs are only stored for a month. So, basically, in terms of sustained limits it has a very limited impact.
The more interesting, and also more tricky, side of this picture is the burst limit, so bandwidth. And to understand what the situation is and how 4844 fits in, we first have to remember that today, on Ethereum, we basically only have a single gas price. Whenever you send a transaction, you don't actually specify how much bandwidth you want to use, how much compute, how much memory; you just specify one gas limit and then what kind of base fee you are willing to pay for this, right? It's all basically mapped down into what you can think of as a single dimension for pricing, and that comes with very real trade-offs in terms of resource efficiency.

So look at this kind of stylized picture of just two different dimensions; it could be, I don't know, data and compute, or data and memory, or whatever, two different dimensions. Basically the way the Ethereum gas works today, and that's purely for simplicity, because it's very simple for users to deal with one dimension, is that those two resources compete for usage in a block. So you could imagine, if a block is very full of compute, then there's very little room to put any data in it, or the other way around.
And actually, if you want to, open Etherscan, for example: for every block, on the detail page, they actually give you the size of the block, and usually it's something like 50 kilobytes, 100 kilobytes, but rarely more than that. But if you look at what a block would look like if it was just full of call data, which is where all the data comes from today, if we were all the way, say, on the lower part of the diagram, if resource B was data, it could actually be up to one or two megabytes, well, two megabytes, basically, of size. So what that means is: we basically determined in the past that two megabytes per block is kind of safe, and the reasons for that bound are there, right, it's basically sitting there, but an average block almost completely underutilizes data, and that is, again, just because it's simpler for us conceptually to price these things.
There shouldn't be this kind of competitive nature to it, and this is basically where 4844, on the burst limit side, gets its efficiency. Because full danksharding (we'll hear about it a bit more after this) does really clever things where people only sample the data, so the bandwidth constraints go quite far down, but for 4844 there is no fancy trick: everyone still downloads all the data, so it's a very real bandwidth strain. So the innovation on the burst limit side is purely trying to get to this upper right point, trying to make it so that the existing resource we already have today is just more efficiently utilized. And the way we do this is by going from what we were saying, that right now, today, pricing is one-dimensional: what we introduce with 4844 is, basically, we go 2D. And this is how that looks.
This is an open PR right now, it's not yet quite merged, but you can have a look; so small details might still change, but I think the general direction is pretty set. The idea is that we introduce what we call data gas, and, as you can kind of figure from the name, it's not blob gas: the aspiration would be that maybe in the future we can expand this to cover the entire data dimension, but for now it's only used for blobs. And we set it up in a way where, basically, one byte of blob will cost one data gas, and this data gas, importantly, is completely independently priced from normal gas. So it has its own 1559-style mechanism.

And I see Marius is not very happy about this, because, you know, he has to implement it at the end of the day, but this is really important for the EIP, because without it you basically wouldn't be able to get to this more efficient bandwidth usage. So what does it look like, how can you think about it? Well, it's just, you know, similar to how 1559 already looks.
The slide here is courtesy of Proto, I stole it. Every column here would be a separate slot. In this case, the target amount of blobs would be two and the maximum allowed would be four per block. So the first block comes in: it has exactly two, so nothing happens. The next one has three, right, the red one is basically one too many, so the price would go up. Then the next two are kind of stable again, and then one misses a blob, so the price goes back down. So it's very much like 1559, which you know and love. Basically, it is a bit different under the hood, though; it works a bit differently, so here's a bit more of a look at the details.

So, first of all, of course, we have the max data gas per block, just similar to 1559, and the target, which is half of that. Blob transactions specify an additional max fee per data gas field: how much they are maximally willing to pay per data gas to have their transaction included. Importantly, you know, this introduces a little bit of extra complexity for users, but the users in this case are not actually end users:
you know, they're rollups, and, fine, if you can't deal with that, maybe you shouldn't be in the rollup game, basically. And so, just to keep the complexity to a minimum, though, we did not opt for having a separate tip for this dimension, so we just reuse the existing tip. And then one thing where we deviate from 1559 a little bit: in 1559, basically, if the demand were to completely crash, theoretically one gas could, I think, be valued as little as seven wei, which is just the minimum after which updates don't go lower anymore, so the transactions would basically be free. We don't quite want to make the lowest-demand case of these transactions completely free, so we set a minimum data gas price that's at least somewhat meaningful; that's something like 10 to the minus 5 ETH per blob.
So of course it's priced in wei, I guess, but it comes out to roughly that if you compute the cost of a full blob. And the last thing, and again this is very technical, so if you just want to understand how this works conceptually, don't care about it, but if you ever pull up the EIP you might stumble across this and be confused: the way we track this. In 1559, right now, we usually track the base fee directly and then we update it every block, and it actually turned out, after we introduced it and looked at it, that it's slightly conceptually ugly, because there are some properties in the updating we don't quite love; it's a little path-dependent, and these kinds of things. So we moved to a conceptually simpler way of tracking for this dimension, where we track the excess data
gas that has basically been used over the existence of the EIP. So, basically, we have some sort of target that we want to be used, and then, every block, if a block uses more than that, we just add to this counter, and every block that uses less than that, while we're still above zero on this counter, we just reduce the counter.
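That counter update is only a few lines; here is a sketch matching the rule just described (names follow the draft EIP, but treat the exact constant as illustrative):

TARGET_DATA_GAS_PER_BLOCK = 2 ** 18   # illustrative: half of the per-block maximum (two blobs' worth)

def calc_excess_data_gas(parent_excess_data_gas: int, data_gas_used: int) -> int:
    """Running counter of data gas used above the target, floored at zero."""
    if parent_excess_data_gas + data_gas_used < TARGET_DATA_GAS_PER_BLOCK:
        return 0
    return parent_excess_data_gas + data_gas_used - TARGET_DATA_GAS_PER_BLOCK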
Yeah, sure, go ahead. Just like the base fee? Yeah, so this is one additional header field; good question, actually. And you can see that, because I just wanted to give you an impression of what calculating the cost looks like with this header field. So, as you can see, basically we have these kinds of functions: if you want to get the fee that a transaction actually has to pay, it depends on the header of the previous block, similar to 1559. So you first get the total data gas that the transaction consumes, which is just, you know, data gas per blob times the number of blobs, and then you calculate basically the base fee, but we don't call it base fee, because, again, there are no tips, so it's kind of unnecessary to have the base-fee versus tip distinction; we just call it the data gas price.
And so, basically, for each block you calculate its data gas price once, and you do that by taking this excess data gas and running it through this fake exponential function. That's a nice little tidbit, I don't know, maybe it's irrelevant, but it's something to talk about briefly, just because we want... maybe I can already go to the next slide to explain. So basically, this is kind of how the pricing develops.
So basically, if you were to continue to just keep using up all the data space in a block, not just the target, it would basically be on an exponential curve and would get more and more and more expensive. And you can see, basically, a thousand excess blobs, that's roughly, I don't know, something like 10 minutes or so; so within 10 minutes you'd really have super expensive blobs if it were to keep being fully used. Yeah, Marius? Doesn't that accumulate a lot of excess data gas?

Right, excess data gas: so it doesn't really matter if it was accumulated at the beginning or at the end. There will probably be a difference in that, in the beginning, it will probably be relatively cheap to use data gas, because rollups are still in the process of adopting it, so there's not that much demand. So basically, probably for the first month or so, we'd be in the very zoomed-in left part of this picture, and then later on, once it's all fully adopted and people use it, we'll be a bit more towards the right in this picture. But, basically, because it's so reactive, it's similar to 1559: every block does at most a 12.5% update. So the difference here is that, basically, you can
M
You
can
go
from
one
of
these
paradigms
to
the
other
within
five
minutes
of
of
of
high
usage
or
low
usage
blocks.
So
it's
not
like
it's
not
basically
something
where
it
matters
immensely.
What
and
what
what
was
done
in
the
in
the
past?
Basically,
a
lot
a
high
consumption
in
the
past
only
means
that,
like
basically,
you
have
like
five
five
minutes
of
of
reduced
Bob
usage
before
you're
back
to
your
normal
price
level.
So
it's
not
Yeah,
so,
basically,
there's
no
significant
kind
of
accumulation
effect
or
anything
right.
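A quick back-of-the-envelope check of that "five minutes" claim (purely illustrative; the exact per-block bound depends on the final constants — 12.5% is the figure mentioned above):

```python
# With 12-second slots there are ~25 blocks in five minutes; if every block
# moves the price by the maximum ~12.5%, the cumulative swing is roughly:
blocks = 5 * 60 // 12          # 25 blocks in five minutes
max_swing = 1.125 ** blocks    # ~19x upward (with a comparably large move downward)
print(blocks, round(max_swing, 1))
```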
M
Sure — no, no, but the thing is: the way to think about it is that the price is a pure function of the excess data gas. So at any given excess data gas — of course I put it down as excess blobs just to make it easier to think about, but it's tracked in excess data gas — once you reach something like, say, a thousand excess blobs...
M
...that would mean that sending one blob already costs around 30 ETH, and it doesn't matter whether the excess blobs were accumulated over one day or over a year. Basically, once the excess data gas field reaches that value, it would cost around 30 ETH per blob to send a blob, and of course we would expect that rollups are not willing to pay that much for blobs, right.
M
So if, for some weird reason, there was a spike in demand and the excess shot up to that level, it would quickly come back down and stay at some kind of steady level. The excess is not something that will continue to grow over time; just like the base fee, it doesn't grow over time — it just finds its equilibrium value, and of course it sometimes goes up and sometimes goes down temporarily, but it hovers around...
M
...some sort of, you know, 10-to-100-gwei-ish level. And similarly the excess blob counter can go back down, right: if a block uses less than the target, the number goes back down. So it will just find some equilibrium value that corresponds to some equilibrium price, and it'll just hover around that.
F
So, okay, I now want to introduce the two-dimensional KZG scheme, which we will need for full sharding. Sorry, this is a big jump.
F
Okay — so when we do full sharding, why do we not take all the data that we want to encode and put it into one big KZG commitment? The reason is that this would require a supernode — some powerful node that you probably can't easily run at home unless you have a very good internet connection and want to invest some money into it.
F
It would be fine if a failure [of such a node] only led to not being able to construct blocks, or maybe we'd have to make smaller blocks, or blocks without sharded data. But it would be really bad if the absence of the supernode could lead to a network split, where some people think the data is available and some people think it's not available. This is what we want to avoid.
F
So what we want is a construction where, yes, there will be a lot of data in the network, and maybe someone needs to be specialized in distributing that data, but once they've done their job, the very decentralized network — maybe your Raspberry Pis at home — can guarantee that it will always converge, that it will always be safe, and so on.
F
So what if we just use many different KZG commitments — just a list of KZG commitments?
F
If we do this naively — we just take many commitments and sample from each — then we'll need a lot of samples: before, I had some number of samples, say 30; now we'd need 30 samples per commitment. Okay, that would be a lot of samples. But there's another, much cooler way of doing this, where we use Reed-Solomon codes again: we take the m commitments for the m actual payload blobs and extend them to 2m commitments. Here's how this is going to work.
F
We have our original data commitments — in this case three commitments — and we'll define another four commitments that are an extension of these, so they are completely determined by the actual data commitments. Here's the math of how this works: we define a two-dimensional polynomial for the data, and it works the same way as before — basically we interpolate this polynomial.
F
We define it by this data region — the original data, which comes from many different transactions that include sharded data — and for simplicity we'll say that row k is just the evaluation of this polynomial where we set y equal to the number of the row, y = k.
F
So we evaluate the polynomial at k, and we get a one-dimensional polynomial: f_k(x) equals this. You can pull all of this together, and what you get is again an expression just in these powers of x, and then we can commit to those polynomials in our normal KZG way. So we have f_k(s): we replace the x by s, with some complicated sum in there.
F
But overall we have one elliptic curve element — this black-box evaluation — and we call it C(k). Now the cool thing is: if you look at this expression as a function of k, it is also a polynomial — it's just a sum of terms with powers of k. This is very cool, and what it means is that our commitments themselves lie on a polynomial: if we view the commitments, which are elliptic curve points, as a function of k, they lie on a polynomial.
F
So before, we started with each row being a polynomial that we commit to; we also have — this is just a property of a two-dimensional polynomial — that each column is a polynomial; but in addition, the commitments themselves lie on a polynomial, in this case of degree three, because they're determined by these four commitments.
F
So the way the 2D commitment scheme will work is that we'll have 2m row commitments, and — this is the cool thing — anyone who validates these commitments can easily verify that they lie on this polynomial, using the random evaluation trick again, which I introduced earlier. So what do we do?
F
We take the first m commitments and evaluate them at a random point, and we do the same for the second m commitments. If these two evaluations result in the same point — the result will in this case be an elliptic curve point — then the commitments actually lie on a polynomial of degree m minus one.
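To make the random-evaluation check more concrete, here is a toy Python sketch. Plain field elements stand in for the KZG commitments (which are really elliptic-curve points, but the same linear operations can be carried out on them "in the exponent"); all names and the choice of evaluation domain are illustrative.

```python
import random

# Scalar field modulus of BLS12-381, the curve used for KZG commitments.
P = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001

def lagrange_eval(xs, ys, r, p=P):
    """Evaluate, at point r, the unique degree-(len(xs)-1) polynomial through (xs, ys), mod p."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if i != j:
                num = num * (r - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

def consistent_extension(first_half, second_half, p=P):
    """Toy version of the check: do all 2m values lie on one degree-(m-1) polynomial?

    Interpolate each half at its own positions and compare the evaluations at a
    random point; a mismatch is overwhelmingly likely if they disagree.
    """
    m = len(first_half)
    r = random.randrange(p)
    xs1, xs2 = list(range(m)), list(range(m, 2 * m))
    return lagrange_eval(xs1, first_half, r, p) == lagrange_eval(xs2, second_half, r, p)
```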
F
For those who are interested: there is a way to do something very similar using a true 2D commitment, so you can make one commitment to the whole thing, but I won't go into the details here — there are basically some downsides, which is why we're not choosing that route.
F
With this scheme there are no fraud proofs required, but now we need a constant number of samples across all these commitments in order to get probabilistic data availability, and we get the property that if at least 75% of those samples are available, then all the data is available and can be reconstructed. And the cool thing is that validators and others only ever observe rows and columns, so nobody needs the whole thing.
F
What you'll notice is that this number is a bit higher than before: if we only have one commitment, we only need 50% of the samples to be available, whereas for the square we need 75%, so the number of samples you need will be a bit higher. Cool — and what we get with this is... I made a proposal.
F
I mean, this is all still under discussion, but one of the ideas for how we could extend this to a full sharding construction is that validators use this construction by downloading rows and columns.
F
They will each randomly choose two of each, and what we get is that if a block is unavailable, it can't get more than 1/16 of the attestations — so the consensus will automatically never vote for unavailable blocks — and at the same time validators can use the full rows and columns that they download to reconstruct any incomplete rows or columns.
F
So if any samples are missing, they can reconstruct them, and because there will be intersections — for each validator, if they take two of each, there will be these four intersections, and they can see the orthogonal rows and columns where samples may be missing — as an example, I did a computation that with about 55,000 online validators you get guaranteed reconstruction, where basically every sample will always be reconstructed...
F
...provided we initially had enough data available to do this. In practice this number will be much smaller, because most nodes don't run one validator, but tens, and some even hundreds.
F
And data availability sampling is basically just checking random samples on the square. What we want, again, is that the probability that an unavailable block passes is less than 2^-30, and if you do the math you find that you need about 75 random samples for that. The bandwidth to do that, in this example with 512-byte samples, would be about 2.5 kilobytes per second, which is a really nice, low number. Cool — okay, handing over to Danny.
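The arithmetic behind "about 75 samples" can be written out directly (a sketch; the exact numbers depend on the sample size and on how the samples are spread over a slot):

```python
import math

# If a block is unavailable, at most 75% of the extended square can be present
# (otherwise it could be fully reconstructed), so each uniformly random sample
# succeeds with probability at most 0.75. For a false-pass probability below
# 2^-30 we need 0.75^k < 2^-30:
k = math.ceil(-30 * math.log(2) / math.log(0.75))
print(k)  # ~73, i.e. "about 75 random samples"

# Bandwidth estimate for 512-byte samples spread over a 12-second slot —
# the same order of magnitude as the ~2.5 kB/s quoted above:
print(75 * 512 / 12)  # ≈ 3.2 kB/s
```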
N
Okay. So there's a lot of math, and there's an elegant construction, assuming we can do a constant amount of work for a large amount of data, and layer it in as something similar to a validity condition on our block tree: we don't consider invalid blocks in our block tree, and we don't consider unavailable blocks in our block tree. The math and the construction are very elegant, but when the rubber meets the road, data availability sampling on the networking layer is actually a non-trivial problem.
N
Everyone's seen this [trilemma] — it's not fundamental that scalability, security and decentralization cannot come together in one system, but it is hard, and it's hard primarily because we want home nodes to be able to run. We want standard computers to be able to validate the system, to give security in aggregate even against a malicious majority of our consensus participants.
N
Again, in that validity-condition sense: if there's an invalid block and all the validators or miners are saying that's the head, you say, well, that's not even real, because it is invalid — and so users, in the end, define what the network is. Similarly, we want to do that with our bandwidth consideration with respect to data availability, and thus we need to focus on the bandwidth here.
N
A lot of this is a quick recap — we've been talking about it all day — but we need to scale execution and we need to scale data availability. Essentially, rollups give us a sort of compression algorithm for the execution of transactions, whether via fraud proofs or validity proofs; for data availability, we use data availability sampling. Data availability, as we've been discussing all day, means that no network adversary — even including a supermajority of full nodes — has the ability to withhold data. Again, it kind of makes data availability a validity condition.
N
It already is one today, as we noted: you have to download full blocks. But once it's a lot of data, and we want those home nodes, that becomes very hard. So again, we want the amount of work to not really scale as those blocks become very large, so the network can scale. Data availability: assurance that the data is not withheld, and also assurance that the data was published. Real quick shout-out: Dankrad made most of these slides for another talk and I'm just reusing them.
N
You know, it kind of depends on the use case, and it's a bit more of a UX debate — it's about the onlineness requirement for people to be able to get this security guarantee without trusting, you know, someone else. So is it important? I don't think we need to get into this too much — optimistic rollups and ZK rollups...
N
...it's critically important, and who knows, the utility of solving this problem might extend beyond those two types of systems. So: networking and everything is hard, and we probably make it even harder on ourselves with some of our assumptions here. We could say, okay, we certainly want to make sure that block producers and consensus nodes...
N
...we want to not be fooled by a malicious majority — but maybe we have a neutral P2P network, and we can just assume that the P2P network is healthy and gives us what we want. This is certainly attractive: it ensures that each node really can see that they get the statistical security. But if we're assuming that the validators can be malicious — a very high fraction of them, at least, you know, maybe about two-thirds, some people like to say 99%, depending on the construction and what the real threat model is —
N
— then the assumption that the network is neutral is probably not a realistic assumption. Well, maybe it's realistic in most scenarios, but if we want to really harden against that majority adversary, we need to be thinking about an attacker-controlled P2P network, up to some threshold, however we define it. Again, this is a lot of exposition of the problem rather than total solutions to the problem. So if I'm thinking about designing data availability sampling, it's probably interesting to think about...
N
...what a good neutral-network solution looks like, but then, I think, when the rubber meets the road, we need to think about what thresholds we can actually harden against in a very attacker-controlled P2P network.
N
So in this model, certainly some nodes can be fooled, and it ends up being a collective guarantee — again, depending on the thresholds and how the system is tuned. Rather than "no node can be fooled", it's probably going to end up looking like "no more than a certain threshold of nodes can be fooled", maybe for a certain period of time, maybe until the network resolves itself. That is likely the correct model, but it does make the problem harder.
N
We want this P2P, distributed data structure that can reliably serve samples, so that people can do their job of getting the samples. We want low overhead on nodes, from multiple perspectives: on nodes that are participating in pulling down samples, but also — potentially — we want to leverage nodes that are not just validators and not just builders in this distributed P2P structure.
N
So we also want to consider the overhead on the nodes that participate in serving the samples, or in the dissemination of the samples. Other things: we want to be robust against attacks. I think one of the really, really scary things here is liveness attacks — DoSes, Sybil attacks, etc. — that happen on the network layer, because if a majority of nodes see data as unavailable, either temporarily or permanently, then they cannot follow the chain at all. Again, we want this to be essentially a validity condition.
N
You know: if there's an invalid transaction in this branch, I don't follow the branch; if that branch is unavailable, I don't follow the branch. So that is a very important, critical requirement, but also a very terrifying one, meaning it is very important that these P2P structures are hardened and that we understand their failure modes.
N
We need to understand where they operate and how they resolve, maybe after an attack — and we want low latency, on the order of seconds. I have a page of desiderata I'm going to get into in a second, and there are some distinct challenges, I think, when you're thinking about this problem. Dissemination into the P2P structure: we have a lot of data — how do you efficiently get it into this P2P structure without causing high load on the individual nodes of the structure?
N
So if every node only needs, you know, 1/100th of the data, but they had to touch 50% of the data to get it disseminated into the structure, we're kind of missing something there. Similarly, we want to support queries — we've disseminated the data and want samples served for x amount of time, which I can get into with the desiderata again. And...
N
...validators, certainly, with their row-and-column, kind of crypto-economic duty, can identify and reconstruct missing data, but we probably also want to consider whether this P2P structure itself should be able to identify and reconstruct missing data. So there are two kinds of potential reconstruction that we might want. Validators are very incentivized out of the gate: if things are missing from their rows and columns, they're incentivized to repair, patch, and make things whole.
N
But if, say, the P2P structure is supposed to serve data availability sampling for one week, then are those validators the same people who will then identify and reconstruct missing data, or is there some other, more distributed and less time-critical method to do so?
N
These are, you know, crypto-economically incentivized actors that we can try to leverage in this construction. They do have the rows and columns; they also perform data availability sampling, like a user node. And then we have users: users perform data availability sampling, and hopefully they can be leveraged in serving, and in making the whole P2P layer itself more resilient.
N
Some quick desiderata. Right now, you know, if I were thinking about building data availability sampling, if I were researching and doing stuff...
N
...these are some target numbers, but I would also be sweeping these numbers and understanding where they work and where they don't. Data size: 32 megabytes per block — that's per 12 seconds, or if the slot time were adjusted it might be per some other number of seconds, call it 16 or 20 — but with the 2D erasure coding that ends up being 128 megabytes of data being disseminated into the network.
N
Chunks — I think there are chunks and we sample the chunks, or they're samples and we sample the samples — on the order of 250,000. You can make these larger, but you still need the same constant number of samples, so you end up with more overhead per sample. Samples: as was said, about 75, something on that order — essentially we want to drive that false-pass probability down as we're doing the sampling.
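Rough arithmetic connecting those target numbers (the 512-byte sample size matches the earlier bandwidth example; everything here is only illustrative):

```python
block_data = 32 * 1024 * 1024   # 32 MiB of payload data per block
extended = block_data * 4       # 2D erasure coding doubles each dimension -> 128 MiB disseminated
sample_size = 512               # bytes per sample
print(extended // sample_size)  # 262,144 -> "on the order of 250,000" samples
```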
N
Latency: validators right now need to make decisions about what they see as the valid and available head on the order of four seconds — that could be tuned, depending on the constructions available to us — but if they could not regularly do data availability sampling...
N
...on the order of four seconds, we'd have a problem. Users could have a potentially more lax requirement, on the order of 12 seconds, on the order of a slot — or you could even consider that maybe they do it on the order of epochs and optimistically follow the head as available; maybe there's some play in the constructions there. Validator nodes: 100K is pretty optimistic; we're probably on the order of four thousand today.
N
User nodes: you know, 100K to a million. So really, if the user nodes cannot participate in the serving of samples, then — if we only relied on, say, incentivized actors like validators — the load to serve would actually scale with the number of user nodes. So it's probably very important to tie them into the data structure itself. Bandwidth assumption: I don't know, it's probably worth discussing.
N
The ethereum.org website suggests a minimum of 10 megabytes per second to run a full node, and, for good measure, 25 megabytes per second — I don't know who came up with that number; maybe it's a good place to start the conversation. And then persistence: obviously, like I said, data availability sampling is not for persistence — it's to ensure the data was made available — but if data was made available for half a second, no one is necessarily going to be able to prove to themselves that it was made available, or only a very small subset will.
N
So is it two epochs? Is it two weeks? There's much debate here. Ansgar, I think — what was your recent number? Is he still here?
N
Okay, okay — 10 minutes, an hour — whereas I think some are more like a week or two weeks, and that actually changes the requirements on nodes, especially in terms of storage.
N
My intuition here is that the onlineness requirement for users that want to get their state-transition changes out of ZK rollups, or users that want to submit fraud proofs for optimistic rollups — that dictates their onlineness requirement, and so I'd be... 10 min — oh man, I've got to get out of here — okay, cool, so it's debatable; an hour seems short. P2P designs.
N
So one easy thing you could do is just say there's a bunch of supernodes in the network, and if you connect to them, you do DAS, and if they give you the samples that you want, then things are available.
N
This is, I believe, Celestia's current design — although I could claim that statement was true a few months ago; I'm not sure today — and you could potentially do something similar in ethereum, where maybe, instead of each node having everything, you leverage ethereum validators and the rows and columns that they custody, and it looks kind of similar.
N
This is nice: if you connect to one honest supernode, you get what you need. But this doesn't really fit well into the home-node model — especially if a node that's running on the order of one, two, maybe three validators should be able to run on the order of home resources, which is definitely not the case for a supernode. DHTs: a DHT is a nice way to distribute data, a distributed data structure across the network.
N
It's a nice way to find data, and it intuitively seems like a very good direction, a very good start. It fits really well because each of these nodes can hold a very small amount of data, and it gives really nice scalability: as you add more nodes to the network, you can — depending on your redundancy factor — have similar or less data per node. But it's prone to liveness attacks; it's really easy to Sybil this thing.
N
Naively, you just make node IDs, you fill the tables, and if you're a malicious node, you can just return entries from your table that are full of malicious nodes. One thing that I think is very promising is looking at secure DHTs — Dankrad's been digging into the academic literature.
N
So you could use a standard, open DHT for average-case performance, and maybe a secondary fallback DHT leveraging the validator set in case of attack. There's some weirdness there, because then all of a sudden you're assuming you have a certain amount of honest validators for this — so does that suffice under the malicious-majority construction? Sure, you can probably tune the numbers, but you could also potentially layer in other types of crypto-economic sets.
N
You know — proof of humanity, Spruce ID, whatever — all sorts of stuff, and you could have layered DHTs, where they're ultimately just fallbacks in the event that the big main DHT starts failing. Validator privacy, and optionality in how they construct their node setups, is probably very important as well. I'm definitely over time — okay, cool, great.
N
Oh — and there are actually a handful of people in this room that have grants from the EF to dig into R&D around data availability sampling. If you are interested in this problem — there's a critical problem to solve and work on over the next 12 months — talk to me, or email the grants program at the EF or something. Thank you.
O
Hi, I'm Francesco, and I'll cover the last bit of this very large topic that we've gone over today: proposer-builder separation. I expect most people will be somewhat familiar with the concept, but this will be a fairly light introduction — it's not going to be too advanced; it's just for you to get a picture of...
O
...how it fits with danksharding, what it has to do with it in general, and also what the roadmap of fitting it into the protocol looks like. So, first of all, what is PBS? Oh — sorry, yeah — let's start from the pieces: we have the P and the B, proposing and building.
O
So, first of all, block building — the B — is essentially the task of actually creating and distributing execution payloads. We have beacon blocks, but inside them there's the execution payload, which is in some sense the valuable part, the part that actually changes the state of the execution layer, and this is the part that requires some specialization to deal with, whereas the beacon-block part is more of a consensus part.
O
Normally, today, we only think about the creating part — basically putting together a new execution payload — but the distribution part will also become critical, especially in the context of danksharding. And this distribution will involve the data that is committed to, which is eventually going to be very large. So that's why it's an important task eventually, and for these reasons — I'll get a bit more into them later —
O
it's a quite specialized activity that we don't really want normal validators to do, because it would increase their requirements too much for us to be comfortable with. And then there's proposing. Today...
O
...proposing includes both things: the consensus part of making a beacon block and including all the consensus messages in it — attestations and other things like slashing messages, anything that's critical to the good functioning of the beacon chain — but also putting an execution payload in it. Today it's still possible for anyone to do this by themselves and have both roles together.
O
But if we ignore the execution-payload part, this is really not a particularly specialized role, and we think it's always going to be possible — or rather, we really want it to always be possible — with low requirements, basically what we expect a validator to have today. The separation is just that these two things are split up.
O
By default, it would no longer be the case that the proposer, which is a validator, does both things; instead, the proposer does the beacon-block-relevant part — the consensus messages — and some other, more specialized actor comes in with the execution payload and, eventually, the distribution of the data. So why do we want to do this? I've already hinted at it, but it's simply that if we outsource the specialized stuff, we can keep the simple stuff decentralized.
O
We can keep the really consensus-critical things done by a very decentralized validator set, which is a really important goal in ethereum in general. And, practically, what are these things that we want to outsource? We've been talking about danksharding the whole day, and there's nothing really fundamental about sharding that requires this outsourcing — you could imagine other models; the original sharding design, before the "dank" part, didn't require it.
O
But it's really a major simplification — and not just a simplification; it also has consequences for latency: it gives us this really tight coupling between the execution payload and the blobs, and it just streamlines the whole process. So in danksharding, if we do want these simplifications, we kind of have to have something to outsource to, because the proposer has to compute these commitments really quickly, which is not easy to do on normal hardware, and — probably the most prohibitive part — there's the distribution of the data to the network, which would require really unacceptable upstream requirements for validators, probably multiple gigabits. So we don't want...
O
...we don't want to require this — it's orders of magnitude more than what someone would need today, because the most you might need to distribute is 128 megabytes per block. But again, this is not a fundamental reason: if there were no other reason for which we needed the separation, we might be a bit more skeptical about danksharding; we might think, well, we don't need these other actors — why are we introducing them into the system just to get the simplification?
O
That's not the ethos of ethereum — we really want everything to be as decentralized and as resilient as possible, and these actors probably do introduce some complexities into that vision. But the thing is, danksharding isn't the reason why we introduce these actors; the reason is MEV, and that's the fundamental reason.
O
I don't think anyone who has looked into MEV enough thinks there's any other way to go, essentially. The issue is simply that, as I said, these execution payloads are really valuable, and extracting value from them is a really sophisticated activity from many points of view — algorithmic, infrastructure: it potentially requires very good hardware and a very good connection; latency is really important — so there are all kinds of reasons. Oh, and also access to order flow.
O
So, you know, today we can think that order flow — essentially access to mempool transactions — is more or less available to everyone publicly, but it seems very naive to assume that that's going to be the case in the future, and already it's not quite true. Maybe there's always going to be a public mempool, for censorship-resistance reasons...
O
...among other reasons, but it's really naive to think that everyone is going to have access to the same raw material to build blocks — the transactions — and this access to order flow is a huge part of being able to create valuable payloads. So there are all kinds of reasons why it's just not realistic to think that validators will be able to profitably make their own blocks, and so there are these really strong centralization pressures...
O
...if we don't essentially provide them a way to just go and, you know, have someone else do it well — which is the whole point of the separation. There are different ways in which it could happen. Some ways are, for example, everyone just staking with pools, because that's the only way they can extract value — although that's actually a scenario that, in some sense, maybe we can avoid.
O
We already have PBS today. When we usually say PBS, we basically mean in-protocol, enshrined PBS, where the protocol knows about the separation — it has a concept of a builder, and in some sense negotiates this outsourcing — but today PBS is just not in protocol. It's called mev-boost — probably a lot of you know it — and essentially what it does is introduce a trusted third party between builder and proposer: these relays.
O
I don't think I have time to go into the details, but essentially: we want builders to not have to trust proposers, and we want proposers to not have to trust builders — there are reasons for that — and so we basically put a trusted third party in the middle, which negotiates the exchange. The proposer wants something that was built by the builder, the builder wants to get something to the proposer, and the third party makes sure that the exchange happens...
O
...in such a way that neither of the two parties can cheat the other, essentially. This already exists today — a lot of ethereum blocks are built this way — so it's the reality. And it's not something that the ethereum community... well, it is something the community made happen, but it's in some sense inevitable: anyone could always build some infrastructure of this kind, and people would use it if it's more profitable for them. So, you know, we already have this.
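To make the relay's role concrete, here is a toy Python model of the exchange just described. This is only a sketch of the trust relationship, not the actual mev-boost API; all class and method names are made up for illustration.

```python
class Relay:
    """Toy model of the trusted third party in today's out-of-protocol PBS."""

    def __init__(self):
        self.bids = []  # list of (payment, header, payload) tuples from builders

    def submit_bid(self, payment: int, header: str, payload: str) -> None:
        self.bids.append((payment, header, payload))

    def best_bid(self):
        # The proposer only ever sees the payment and an opaque header,
        # never the payload contents, so the builder's MEV stays private.
        payment, header, _ = max(self.bids)
        return payment, header

    def reveal(self, signed_header: str) -> str:
        # Only once the proposer has committed (signed the header) does the
        # relay publish the matching payload on the proposer's behalf.
        for _, header, payload in self.bids:
            if header == signed_header:
                return payload
        raise KeyError("unknown header")
```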
O
Why do we care about potentially putting this separation in protocol? As I said, relays are trusted third parties, and we don't usually like to have these sorts of entities in the protocol. They're not consensus-critical, in some sense — well, if things are set up properly — and I think there's a lot of improvement to be done on the infrastructure that exists today; it's very young infrastructure. But either way...
O
...there are always going to be some failure modes that we don't really like, or some requirements that we don't really like, from having these parties. One is that you basically have to whitelist them, because they're trusted — everyone has to go and configure some list of these entities that they're fine with, that they trust. We don't care if builders do that, but we don't really like validators having to do that — or, well, that's debatable, but...
O
...anyway, I think there's a future for relays to still exist and just have a full fallback in protocol that is not the default — but that's a conversation for another time. Another thing is that today we don't really have live monitoring for relays: locally, people don't have a way to observe the interactions a relay has had with other proposers and then disconnect from it if those interactions look suspicious.
O
That's something we could include — we can really improve the resilience of this whole system — so that people don't need to, you know, go on Twitter and find out:
O
"oh, this relay is malicious, I'm going to disconnect from it" — maybe this can just happen locally, essentially. So there's a lot of improvement there, but there's still some kind of really fundamental, catastrophic scenario that seems unavoidable to me if we keep having — or rather, if we only rely on — these entities for this outsourcing, with no fallback, especially with danksharding. Today you can always have a fallback.
O
Actually, it's not so fundamental to the state of things today: you could always have this fallback. The catastrophic scenario is that all the relays most people are connected to fail, for whatever reason — they're malicious, or they're attacked; anything could happen — they fail, and now all of a sudden... Today it's fine: once you manage to disconnect, because you realize, okay, these people haven't given me blocks for however many times I've tried, or if you have this monitoring system, that's fine.
O
You just fall back to building your own blocks — with geth, or whatever other execution client you're running. So liveness is not really threatened; maybe it's a temporary thing. But with danksharding — and also statelessness, in some sense — if, say, all the validators are stateless, they cannot build their own blocks, or with danksharding they cannot distribute the data, then this becomes a threat to our liveness. Well, with danksharding not exactly: you could still make blocks...
O
...you just cannot put a lot of data into them. But you could argue: well, is that really liveness? If all the rollups have to stop because they don't have access to data anymore, that is not really what we want. So this is one of the current ideas of what it could look like to put it in protocol. I think — well, yeah — I probably can't really go into it.
O
I don't think we have time to go into it in much detail, but basically it looks like this: as I said before, what are the relays? They're just these actors that negotiate the exchange. So what do we do? We want to remove these actors, so we basically have the protocol negotiate the exchange, and the protocol in this case is basically the other validators.
O
So there's a proposer, there's a builder, and we have the whole rest of the validator set — or, more likely, some committee — that basically observes the exchange and, with their attestations, makes sure that if the proposer tries to cheat the builder, they fail, and vice versa.
O
Essentially, it gives us the property that, for example, if the proposer accepts some block — or rather some bid, you could say, from a builder — and we have good latency, and things are fine from a fork-choice perspective, from a network perspective, then the proposer will get paid. It doesn't matter what the builder does: if they reveal their block, that's the good case; if they don't reveal their block, or they're really late...
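A very simplified model of that property — once a bid is accepted in time, the proposer's payment is unconditional, while the builder only gets the slot if they actually reveal. This is only an illustrative sketch; the real designs use committees, attestations and fork-choice rules rather than a single settlement function.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Bid:
    builder: str
    payment: int   # amount unconditionally paid to the proposer on acceptance
    header: str    # commitment to the (still hidden) execution payload

def settle(accepted: Bid, revealed_payload: Optional[str], on_time: bool) -> dict:
    """Once the proposer has accepted a bid, the payment is owed regardless of
    whether the builder later reveals a matching payload; a late or missing
    reveal only costs the builder their block."""
    return {
        "proposer_paid": True,                                   # unconditional after acceptance
        "builder_block_included": revealed_payload is not None and on_time,
    }
```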
O
...you know, tough luck for them: they're still going to pay the proposer and not even get their block included. So this is one design. There's this other design, which is kind of interesting — oh yeah, also, thanks to Vitalik for all of these things; I just took them from many of his research posts — and this one you could basically call in-protocol mev-boost, because it's really designed to look like mev-boost.
O
Again, we have this party in the middle — this time more explicitly than before — which in this case is a committee (it also was before, but anyway), and this party again negotiates the exchange. We could think of the party as basically an availability oracle: its job is to give guarantees to the proposer that what the builder sent is available. So the proposer will accept a header — basically an offer of "I want to give you this block and pay you this much" — and the builder will essentially erasure-code...
O
...so, hopefully, if you've followed the discussion, you know what erasure coding is by now — essentially the execution payload to the committee: erasure-code, then encrypt, and then split the parts among the committee, so that if some threshold of the committee is honest and online, they will be able to decrypt it, even if not all of the committee is. And the individual members of the committee attest to the fact that they have their part, so that if you see enough attestations, and the committee is sufficiently honest, then you know, as a proposer, that this thing will be able to be decrypted and the data will be there. So it actually fits in quite nicely with these data availability...
O
...discussions, because that is really the problem here: the proposer is accepting a bid, but the builder doesn't want to say what the bid's contents are, because that's their private, secret information, and we want some guarantee that, even if you don't know what it is, it is going to be there once the time comes — once you've accepted a bid and it's ready to go on chain. So that's what it looks like. And, last, I quickly want to comment on...
O
...the censorship-resistance questions about PBS. I think they're fairly well understood. There's clearly a way in which PBS — in protocol or not, it doesn't really depend on that; these are questions we also have today — degrades censorship resistance, but we already roughly know how to deal with that. There's this concept of...
O
...inclusion lists, and slight tweaks to that — there's basically a really wide design space of, roughly speaking, ways for validators (or proposers, but let's just say validators) to make sure that transactions that should go in the chain eventually get in the chain, even if builders don't want that. And this, by the way, is a really important reason why we want decentralization of the validator set, because if you don't have that, you just don't have this option: if you have 100 validators and they don't want some transaction on chain...
O
...that's it, there's no way — well, I mean, there are ways, like soft forking, or other approaches, but there's no automatic way to do it, whereas with a decentralized validator set we can always do that. So inclusion lists are quite simple in some sense — there are disagreements about exactly how they should work — but they're super simple today...
O
...because today we have the property that it is easy for a validator to say "this transaction is available" and "this transaction is valid". The validity part becomes a bit harder with account abstraction — there are some questions there, but I won't go into that; it's not really relevant here. The availability part becomes a bit harder with danksharding, because now, all of a sudden, there are all these blobs floating around the network...
O
...all this data that you're not supposed to download in full — you're only supposed to sample what actually ends up on chain. Okay, yeah — let me just finish this thought, and then I guess that's it. So, with danksharding, determining availability becomes a bit harder, and we would like to have some kind of sharded mempool construction, so that even for things that have not yet been included in a block...
O
...you can still, in some way, determine that they're available without everyone having to download everything. And this might not need to be the default route that all transactions go through — and probably won't be, for some of the reasons I've already hinted at before: it's unreasonable to expect that everything will go through a public mempool — but this is always the fallback for censorship resistance, and so we want to have some kind of construction like this. I think that's it; we're out of time.
O
Hopefully — maybe we still have time for some questions for everyone, but otherwise that's it.
E
Okay, so — yeah, we're obviously out of time, but we can still use this room for another 20 minutes. We have a special guest here, Vitalik, to answer some questions as well. Okay — any questions?
L
On the topic of multi-relay in this block-building ecosystem: there are many solutions, and I heard that in PBS it weakens censorship resistance. How do you approach improving the mempool with multiple relays?
P
Is network persistence for the blobs going to be dependent on finality? I would have expected that to be the case, which would rule out completely these notions of keeping them for only five minutes.
P
Well — oh, I see — so, if you're in the middle of an inactivity leak, that probably makes sense. I mean, I personally favor blobs being around for long enough — you know, at least a month or so — so that it's longer than any realistic inactivity leak that would happen. But there are different approaches.
R
Okay, just on the setup: was there consideration of — or is it even possible, or what's the problem with — actually making that a separate system? It reminds me a bit of Swarm and how it was integrated with ethereum nodes. Wouldn't it be possible, with precompiles or the right new EVM opcodes, to actually make that an independent system?
P
So the problem — the reason why we need data availability sampling in consensus, and why it's so different from, you know, IPFS and everything else out there — is that we want to actually have consensus on the fact that the data is available, right? IPFS does not provide that: there are ways to upload files so that some people think they're available and other people think they're not available, and for regular file publishing that's fine, because if a file is half-available you just publish it again.
P
But for rollups you need exact, global agreement on which data was published on time and which data was not published on time, because in order to figure out the current state of a rollup, you need to figure out which data blobs to include and which ones to skip over.
B
Hi — for the data blob storage period, are there any thoughts about challenges for validators to prove that they keep the data, or is that purely altruistic behavior?
P
I mean, there have been proof-of-custody designs that we've worked on over the years. I think it's kind of on the back burner, because we just know that these techniques exist, and we know that when the time comes we can probably just stick them in to get extra security.
S
Yeah, so I wanted to ask about the multi-dimensional fee market quickly. We talked about this excess data gas field — I was just wondering whether, in an ideal world, if we hadn't already done EIP-1559, the same construction would be wanted for the original gas as well? And could this still happen eventually?
M
I think there are already thoughts in that direction. I think over the medium term we would want to, and one of the nice side benefits it would give us is that it also makes other improvements to the 1559-like mechanism easier — for example, time-based instead of block-based throughput targeting. So yeah, we would probably want to homogenize this over time.
T
Yeah — so I could be wrong in one of these assumptions, but my understanding is that proposer-builder separation was motivated largely by the centralizing effects of MEV and us wanting to keep the proposer set decentralized; but then, later, with designs like full sharding, we realized we could utilize the builders — with extra hardware requirements — because they'd be incentivized, with the MEV they're extracting, to run these nodes. But then there's also research into completely mitigating MEV... okay, so maybe that's the wrong example.
P
There's a separate strand of research on trying to make builders themselves decentralized internally — so some kind of protocol would plug into the market and make bids instead of a single actor — and then there's also research on making applications that are MEV-minimized. So all three of those exist. Okay.
O
It doesn't fully go away — it just means we do as much as possible to reduce it. Again, I think anyone who's thought about MEV for some time will come to the conclusion that it's just not possible to assume there won't be an incentive to be a specialized actor. Even if all transactions are encrypted, there's always going to be some reason to be the first to touch some piece of state, you know.
M
Right — and just to briefly mention it: it's not just danksharding where a PBS-like architecture would help. Once we move, with verkle trees, to a world where it's easier to run stateless nodes, what PBS would also get us is that normal validators would all of a sudden have way, way lower storage requirements — and we don't get that if they still have to create blocks, because then they need the state.
M
But if you only validate, and you don't create your own blocks — you leave that to a specialized entity — then you can, as a validator, go stateless, and that's one more of the benefits we would get out of this.
U
Wouldn't it make sense to charge for sampling, or something like that, to motivate people to hold the data and be able to collect the charges?
F
I mean, I think that's definitely a possible construction — you could see a design where you have a specialized sample provider and you pay them. I think the downside is that it makes it much harder to run a node, because now you need to somehow set up this payment infrastructure. So I think it's not ideal, but it's possible.
V
Hello — will there be ways for somebody who wants to make data available to provide a proof through a smart contract as well, independent of this calldata layer and so on? Something smart-contract-specific — a generic infrastructure for proving that the data was available?
F
V
And also — would it need to be a special opcode to run the proof, then, to check that it was actually provided? Because...
F
V
...that it was provided — but then, let's say somebody wants to have a check inside Solidity whether the data was available, but...
F
V
W
W
Yes, but just from the network perspective — I know IPFS is vulnerable to Sybil attacks; is that something that is going to be addressed?
P
I mean, I think, yeah — from the network perspective of how the thing should be implemented: this thing has much higher requirements in terms of Byzantine fault tolerance, and in terms of real-time access — real-time meaning being able to change what you're accessing. Those are probably the biggest differences in requirements. Okay, great.
H
Thank you. Something I was thinking about: we assume that MEV exists and that people want to, you know, sandwich other people's transactions and things like that. So we have this proposer-builder separation, which makes a lot of sense, and then, once we have it, we start to utilize it to do heavier work, like danksharding and things like this.
H
One of the things I'd be concerned about is that presently we have the ability for users to simply not run MEV extraction — they just let the transactions come in as they will and, you know, lose a bit of money, but they're kind of genuine people. I'd just be interested to make sure that we don't rule this person out, and that we don't glue together the role of making really specialized, fancy sandwiching blocks with also doing all of the danksharding stuff. I think it'd be nice.
P
Yeah — no, this is actually one of those things that, yeah, I think I wrote a research post about last week, right? Basically: can we push the autonomy in choosing a block's contents back to the proposer? And that's a spectrum that could potentially go all the way up to the proposer having an option to build everything.
P
And one of the conclusions there was that if we want that kind of proposer-autonomy property, but also want the property of potentially low proposer requirements, we might need a third category of actor that handles the non-MEV-extraction part — basically the entire bundle of computationally expensive stuff: witness generation, state root calculation, in the future ZK-SNARKing, and in the meantime figuring out polynomial commitments and proofs, and broadcasting, and so forth.
F
I just want to comment — I think people need to stop thinking of MEV only as a bad thing. You think of sandwiching, but even without any front-running, any sandwiching, we still have lots of MEV, and it's actually a necessary part of the system: someone needs to do the arbitrage on the exchanges, someone needs to do the liquidations, someone needs to submit the fraud proofs — all of these are MEV.
F
So I would say, for example, you can eliminate the bad MEV using transaction encryption, but you'll still have loads of MEV left, yeah.
X
A question on PBS: so we would apply some slashing mechanism once we have a separation between the builders and the proposers, right? They probably have sort of different [slashing conditions]?
P
There are different slashing mechanisms at play, right. So there's a slashing mechanism that slashes the proposer if they make two conflicting blocks; in some of these partial block auction protocols, EigenLayer is used, which basically exposes the proposer to extra slashing if they violate the rules of the partial block auction protocol. Builders can get slashed in some contexts too — I forget exactly which ones, but there are definitely a few cases.
P
So, you know, there are definitely different forms of slashing to make sure the different participants follow the rules of the protocol. Yeah, thanks.
E
Thank you for joining us for these two hours and thirty minutes.
Y
Okay, before we start: this is going to be a hands-on Echidna workshop, so you're going to have a couple of exercises to do. You can already take a picture of the QR code and clone the repos, so that you'll have everything.
Y
I'm assuming everyone here is familiar with unit tests. You should use unit tests — they are usually good for checking that the system works as expected on the happy path. Something we have learned over our audits and other analyses we have done is that there is no correlation between the quality and quantity of the unit tests and the likelihood of having a high-severity vulnerability. We actually have an academic paper on this, where we have looked over...
Y
...all the audits we have done, and this is a correlation we have not found. The reason, our intuition tells us, is that when you write unit tests you try to cover the happy path — the things the code is supposed to do in a correct execution — while vulnerabilities usually lie in the edge cases, in the things you haven't thought about. A tool can tell you whether there is this type of bug or not — for example, you might know Slither, which is a static analyzer for Solidity.
Y
These types of techniques might give you false alarms, but they are also really powerful, because they might catch, you know, critical bugs. Oh yeah...
Y
Okay, so for Slither, the best approach is to spend one hour on it the first time you try it. There is a triage mode, so once you have triaged the findings, the dismissed ones won't show up in the next execution. If it takes you one hour and, at the end of the day, you might be able to catch a critical vulnerability, I would say it's worth going through the first results. We have a list of trophies for Slither...
Y
...that demonstrates we have found a lot of actual bugs using it. So yes, there is a false-positive path — there are false alarms — but it's not going to take you that much time, and it's going to provide you value. For example, we have a GitHub action for Slither: you can connect it to GitHub, and on every pull request or commit, depending on how you configure it, it will run to check whether you are introducing new vulnerabilities.
Y
And this is open source and free. Okay — the last technique you can use is semi-automated analysis. These are tools for which you provide some information, for which there is human intervention to explain to the tool what you are looking for, and this is a bit more difficult to use because it requires this interaction from the user.
Y
It's the technique we are going to see today: property-based testing with Echidna. So, what is property-based testing? To understand how it works, I have to introduce fuzzing. Fuzzing is a standard program-analysis technique that is used a lot in traditional security. The idea is basically that you provide random inputs to the program and you try to see what's going to happen — you try to stress it with random input. The most trivial fuzzer you can build...
Y
However, most traditional fuzzers look for memory corruption, for crashes in the program. We don't have a lot of memory corruption in Solidity — there is some, but it's not that common. What we're going to look for instead is a property of the system that can be broken, and this is why we call it property-based testing. Basically, the way it works is that the user defines invariants.
Y
The fuzzer will randomly explore your program and check whether the invariant holds or not. You can think of fuzzing as unit tests on steroids: with a unit test you try one specific value with the program, while fuzzing randomly tries a lot of different values.
Y
Y
So I'll also talk about Echidna. Echidna is a fuzzer for smart contracts written in Solidity. We have been using it for four or five years now, in all our audits. You can see a list of major codebases that actually use it and have integrated Echidna into their process. With Echidna we focus on ease of use, so the invariants are described in Solidity, we have a GitHub action similar to Slither's, and we support all the compilation frameworks.
Y
Okay, I was talking about invariants, so let's say you have a token, an ERC20 token. It has balances, and you can transfer tokens. What would be an invariant? It could be that, given the total supply, no user in the system should hold more tokens than the total supply. If there are 10 million tokens and one user holds 20 million, something is wrong.
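A minimal sketch of what that invariant can look like as an Echidna Boolean property in Solidity. The `Token` contract, its ERC20-style `balanceOf`/`totalSupply` functions, and the sender address are assumptions standing in for whatever system is being tested:

```solidity
// Hypothetical harness: the invariant is a view over the whole system,
// not a test of one specific function.
contract TestTokenSupply is Token {
    address constant USER = address(0x10000); // assumed to be one of Echidna's default senders (configurable)

    // Echidna calls any function prefixed with `echidna_` and reports a failure
    // if it ever returns false (or reverts).
    function echidna_balance_never_exceeds_supply() public view returns (bool) {
        return balanceOf(USER) <= totalSupply();
    }
}
```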
Y
Okay, so the question was: what are the benefits of using Echidna versus Foundry? First, I think Echidna has more features than Foundry at the moment. They have been developing Foundry's fuzzer for something like six months; we have been working on Echidna for four years. We also support any compilation framework.
Y
So let's say you are using Hardhat because you want to do integration tests and you need some complex setup using TypeScript or whatever. If you move to Foundry you are going to have issues, because it's more difficult to create that type of test there, so you end up in a situation where you need a setup with two different compilation frameworks.
Y
If you use some advanced options, then you need those advanced options in both compilation frameworks, if they even support them, and it's a lot of maintenance. Echidna is agnostic to the compilation framework in that sense; we will talk about that later.
Y
We also have a couple of advanced features that the others don't have. For example, something you can do with Echidna is that, instead of trying to find invariants that are broken, you can look for the functions that consume the most gas. Echidna will fuzz the contract and give you a summary like: "I can call this function with these parameters and it will use this amount of gas." If you are looking for that type of thing, it's very nice.
AB
Hey there, Brock from Foundry here. I'm curious: how do you go about benchmarking a fuzzer? Because for us it's just a black box.
Y
Okay, that's a really good question. Even in traditional fuzzing, if you go through the literature on how fuzzers are benchmarked, I would say that most of the benchmarks are poor. One of the issues is that when someone builds a benchmark to evaluate their own tool, there is a bias. We do have our own benchmark that we use to try to see.
AA
Yeah, it's also an open question what your benchmark should measure. If you're saying "this one is faster than that one", it could be executing the same thing over and over again; calling a constant function repeatedly is going to be faster than reaching some deep part of the call graph. On top of that you have bugs: what about finding bugs, how many bugs you found?
AA
There are a couple of academic papers where people compare this, with a plot of how many bugs were found, saying one fuzzer is better than the other; but you don't know whether in the next hour the other one would have had a spike and found a lot of things. There are other things you can measure, like coverage, but coverage is not going to be
AA
the ultimate answer either. So it's still a debate how long you should run a fuzzer for a benchmark, or even just for testing something. It's also a debate what we should use for benchmarking: complex DeFi applications? But how many of them do we have, 10 or 20? We don't have thousands of different DeFi systems. So we are definitely interested in a deeper discussion on how to build a good benchmark.
Y
And we have the same problem, for example, with Slither: how do we benchmark that our static analyzer provides good findings? It's tough. We usually tend to take a practical approach, in the sense that if the tool provides value during our audits, if it helps us find bugs and makes us faster,
AA
Z
then it provides value. So it's tough.
AA
Z
We will be here today, so please let us know. Okay.
Y
That's only for fuzzing. We have another tool for symbolic execution, which is called Manticore. However, and this is something we're actually going to discuss later, I think that in practice any formal-method-based approach is going to have a lower return on investment than fuzzing. If you have two or three weeks to work on a project and you want to invest some resources to increase your confidence in it, fuzzing is the best solution.
AA
And we also found that Echidna is a tool that works together with our static analyzer, Slither, and gets value from it. In some cases, let's say you have a test that checks whether x equals some specific value:
AA
some traditional fuzzing techniques have a hard time dealing with this. What we do is constant mining: we scan all your code, look for these magic values, and we replay these magic values, and some mutations of them, from time to time, so our fuzzer should be able to get past these comparisons.
AA
If you have a test case that is not working, please let us know and we can try to look at it; but in practice it seems like some of the typical use cases for symbolic execution, in which you have constant magic values to look for, can be replaced by constant mining and extraction.
AA
Yeah, so I wanted to highlight one little feature that we are testing in Echidna, that is also used in fuzzing: instead of testing a property, we do minimization or maximization of some value. This is a new thing we are testing; it is not property-based testing, but it's something we are trying to push.
AA
So if you want to know whether a user is capable of extracting tokens from your system without you realizing it, you can use that feature, just saying: "Hey Echidna, can you maximize the balance of this account?" and it will try to generate the sequence that maximizes it. It's a little bit outside property-based testing, but it's something we wanted to mention.
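A minimal sketch of what such a maximization target could look like. The `optimize_` prefix, the target contract, and the sender address are assumptions based on how the speakers describe the feature, so check the Echidna documentation for the exact mode name and configuration before relying on it:

```solidity
// Hypothetical harness for Echidna's value-maximization mode:
// instead of a true/false property, the fuzzer searches for the call
// sequence that maximizes the returned value.
contract TestExtraction is Token {
    address constant ATTACKER = address(0x30000); // assumed sender address

    // If Echidna can drive this above whatever the attacker legitimately holds,
    // a token-extraction path probably exists.
    function optimize_attacker_balance() public view returns (int256) {
        return int256(balanceOf(ATTACKER));
    }
}
```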
Y
Did anyone already solve it?
Y
It has a transfer function, like a classic ERC20 transfer function, and it inherits from a pausable contract, so it is a basic pausable token. What we want to try to do here is to create an invariant such as "no user should have a balance above the total supply" to test the token. The way we are going to do it is that we are going to inherit from the token, our target: we are going to create a contract TestToken.
Y
We are going to initialize the balance of the caller, the first user, to 10,000; this is the initialization. So you are creating a token, there are 10,000 tokens at one address, and now the invariant is simply that no user, so the Echidna caller, should ever have more than ten thousand tokens. Again: you deploy a token, give 10,000 tokens to one user, and this user should never end up with, say, 20,000 tokens. And if you run this with Echidna, Echidna is going to tell you that this invariant is broken.
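A minimal sketch of the harness just described. The `Token` base contract, its `balances` mapping, and the sender address are assumptions about the exercise code, so treat this as the shape of the test rather than the exercise itself:

```solidity
// Hypothetical Echidna harness for exercise 1.
contract TestToken is Token {
    address constant ECHIDNA_CALLER = address(0x10000); // assumed default Echidna sender (configurable)

    constructor() {
        // Initialization: one user starts with 10,000 tokens.
        balances[ECHIDNA_CALLER] = 10_000;
    }

    // Broken if the fuzzer finds any call sequence that gives the caller
    // more tokens than were ever minted.
    function echidna_balance_under_10000() public view returns (bool) {
        return balances[ECHIDNA_CALLER] <= 10_000;
    }
}
```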
Y
This was compiled with Solidity 0.7, so there is no built-in overflow and underflow protection. There was an underflow problem here: if you try to send more tokens than you have in your balance, the balance underflows and you end up with a really large balance. Something interesting here is that we defined the invariant without looking at the code, without looking at the functions; we were not looking for any specific issue in the transfer function.
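To make the failure concrete, here is a hedged sketch of the kind of pre-0.8 transfer that produces it; this is illustrative, not the exact exercise code:

```solidity
pragma solidity ^0.7.0;

// Illustrative only: with Solidity <0.8 and no SafeMath, arithmetic wraps around.
contract BuggyToken {
    mapping(address => uint256) public balances;

    function transfer(address to, uint256 value) public {
        // Missing: require(balances[msg.sender] >= value);
        // Sending more than you own underflows to a value close to 2**256,
        // breaking the "balance <= total supply" invariant.
        balances[msg.sender] -= value;
        balances[to] += value;
    }
}
```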
Y
So this is a kind of nice way of trying to find bugs, because you don't necessarily look at the individual functions. You can just define an invariant and the fuzzer is going to try to break the invariant for you. Does that make sense? Any questions?
Y
Okay, so the question is: does it execute a specific function, or how does it know which functions to call? The answer is that it is going to call everything. In this token, if you look at the whole source code, you have a transfer function, the pausable functions, and some additional functions; the fuzzer is just going to call everything that is external or public, everything that a user could call.
Y
Okay, the next question is: if you have a very large token or a very large contract, you have a lot of functions. Here you can take different approaches. Either you want Echidna to call everything, so you do nothing and just let it run, which might work depending on the target; or, if you know that some functions are more important and you want to target them, you can change the Echidna configuration file and tell it "call only these functions" or "don't call these functions".
Y
Okay, so the question is: can you get a better log? Obviously this is a simple example, and when you do random exploration you might call a lot of functions that are not necessary for what you are trying to trigger. The answer is yes: Echidna does what we call shrinking, where once it has found a way to break the invariant, it is going to try to reduce the call sequence. It keeps fuzzing more or less
Y
around the same failing sequence and tries to reduce the size of the reported trace.
Y
Okay, then we have the second exercise. On the same repo, just go to exercise two; it's on the same target, so you're going to try to write an invariant
Y
on the same token. The first invariant was that no user should have a balance above the total supply. Here, as we hinted before, this is a pausable system: the owner can pause or unpause it. The invariant we want to verify is that, if there is no owner and the system is paused, can someone unpause the system? And this is what we're going to try.
Y
And what we want to check is: if we drop the ownership and we pause the system, is it possible to unpause it? So here we have a bit of initialization to do, because we want to drop the ownership and we want to pause the system. We are doing this in the constructor: we are calling pause() and renouncing ownership, so from the moment the contract is deployed, it's paused and there is no owner.
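A hedged sketch of that initialization and invariant; the `pause()` call, the `owner` variable, and the `is_paused` flag are assumed names for whatever the exercise's pausable/ownable code exposes:

```solidity
// Hypothetical harness for exercise 2: deploy paused and owner-less,
// then ask Echidna whether any call sequence can unpause the system.
contract TestPausableToken is Token {
    constructor() {
        pause();            // start in the paused state (assumed inherited function)
        owner = address(0); // drop ownership (assumed owner variable)
    }

    // Should always hold: with no owner, nobody should be able to unpause.
    function echidna_always_paused() public view returns (bool) {
        return is_paused == true; // assumed paused flag name
    }
}
```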
Y
What Echidna finds is a bug that was common in old versions of Solidity. There was no constructor keyword: to write the constructor you needed a function whose name matched the contract name. Here you have a contract named Ownership and a function named Owner, and because of that mismatch, the Owner function is just a public function, so anyone can call it and become the owner. This does not happen anymore with modern versions of Solidity, but it's a type of bug that we found a bit too often in the past.
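A hedged sketch of that class of bug, with hypothetical names mirroring the description (contract `Ownership`, function `Owner`):

```solidity
// Illustrative only: pre-0.5 Solidity used a function named after the contract
// as the constructor. A typo in the name turns it into a normal public function.
contract Ownership {
    address public owner;

    // Intended as the constructor, but "Owner" != "Ownership",
    // so anyone can call it at any time and take over the contract.
    function Owner() public {
        owner = msg.sender;
    }
}
```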
Y
Something which is interesting, again, is that we did not look at the ownership contract; we did not look at the implementation itself. We just defined an invariant, ran the fuzzer, and it found the issue for us. That brings up the question of how to define invariants.
Y
So the question is: is defining invariants part of the auditor's work? Yes, we are using Echidna in our audits, and during an audit we are going to define invariants. Something we do is discuss them with the developers, because the developers know better than us what the system is supposed to do.
Y
So we have this collaboration with them to understand what the system is supposed to do and to define these invariants. How to define invariants matters, because if you have bad invariants, it doesn't matter what you are doing: whether you use fuzzing or formal methods, if your invariants are not good, you are just going to check things that don't matter.
Y
AA
Yeah, in our experience, when we work with clients and ask them to do step one and define invariants, they often realize bugs just by doing that. So it is already a very good thing to start thinking about invariants, even if you're not testing them yet.
Y
Okay, how to use it with other contracts: you can simply deploy the other contracts in the constructor. Something we are not going to cover here, but we have a tool called Etheno, which basically takes your unit tests, your whole test suite, and replays them in Echidna. So, for example, if you have a complex integration where your unit tests deploy ten different contracts.
AA
Yes, yes, and I think there's a little bit more to it: what if you want to define an invariant on a Uniswap contract that your contract is using? Then you will need to know how Uniswap works in order to put it in your invariant, like "if I'm swapping something, then I'm getting something else back." In that case you need to realize that it's difficult to write invariants about other people's code, even though that code is working and everyone is using it.
AC
There is a case where we had something special. I'm from Gearbox protocol; we work on composable leverage, and we have adapters because we provide leverage on top of other contracts. When you combine Gearbox with Uniswap you immediately get margin trading, and in this case the adapters incorrectly parsed the swap path when making checks after calling Uniswap; they were integrated against the real Uniswap, and someone working on a modified version
AC
wrote a small test, and this test showed us that if you add some additional data to the calldata, it can be interpreted incorrectly. So our system could be fooled into checking a different balance than the one that should be checked; it was a fault of the system, and the funds could be drained. In this case I think fuzzing could really have provided useful information.
AC
But this test has to run against the real Uniswap, because it behaves differently, and we couldn't catch it because we ran with mocks, and the mocks, of course, were created with the same wrong assumption. The mock was okay, but the real implementation was totally different, and I definitely believe fuzzing should find these mistakes.
AA
Yeah, definitely. When you are creating a mock, you are assuming how everything works, and if your assumption is not precise enough, you won't be able to detect some things. For us as auditors it is common to audit contracts that interact with other contracts, let's say Compound.
AA
So when we go into Compound, we have all the documentation that says Compound works like this or like that; then we look at the code and see some things that are not documented, and we go back to the developers and say: look, if your contract does this, the call will revert, and you're not testing for that. When you use a third-party contract you are importing its risk, so either you have really good tests or you make sure you understand everything.
Y
That's why we are putting so much emphasis on defining invariants, because this is the key component of this technique.
AC
In terms of speed, is it okay to put the whole deployment script into the constructor? Because, of course, if you deploy such a huge system, and you deploy some contracts from external repositories like Uniswap and so on, it requires time; and when you want to test a million operations, if you redeploy the whole system each time, it could require hours or days.
AA
Yeah, so when you use Echidna, you deploy it only once, and then when your test finishes it goes back to the state right after the contracts were deployed; there's no need to redeploy. That is why we ask the developer to have a fixed set of parameters in the constructor, because otherwise we won't know what to deploy. I think there is a question over there also.
AB
I'm just wondering: once you've defined your invariants, and I imagine in your audits you run through these, I'm trying to understand when you have confidence that these are good invariants, and whether there are any metrics that you use internally. I see it's outputting unique instructions and unique code hashes, those sorts of things, that give you confidence in what you've done.
Y
So in practice you can look at the coverage, but usually coverage is not a good indicator. And in fairness, when we do this, we do it in a time-boxed manner: we have two or three weeks to do it, and we are going to do our best in two or three weeks. That's the best that we can do.
AA
A bullet point for this: when we write a report, we list the invariants that we tested, so it's clear what we tested, and everything else was not tested with this tool; we may have done manual review or used other techniques like Slither to check some other things. Unfortunately there is no perfect way to define this, but I personally think that talking with the developer early on about the invariants is a really good thing.
AA
It's often the case that we think of an invariant, let's say that some value cannot be zero, and we go to the client and ask: is this an invariant? We don't know, because we have not designed the system, and sometimes they don't know either. And if they don't know, that's an issue: we should absolutely know what the behavior of the system is, and if nobody can say whether something should be an invariant or not, then we should go back and re-discuss it.
Y
G
Thank you, thanks. I have a question actually more related to the earlier question: a big class of bugs that's been occurring recently, and for a while now, are re-entrancy bugs. How do you deal with finding violations of invariants that correspond to external contracts in that way?
Y
Okay, so this is a really good question, and in my opinion the best tool to find re-entrancy is static analysis. The real question is which technique you should apply to which problem, and for things like re-entrancy, static analysis is just going to be better. You can use a fuzzer, you can create re-entrancy callbacks and things like that, but in practice a tool like Slither is just going to outperform any fuzzer on this kind of work.
Y
AC
And one more question: what's your advice if, for example, we have a complex system and we want to do classical fuzzing with Echidna, and it seems that to really cover many cases it requires a lot of computational power? Maybe you can advise a cloud provider, or how to run it, maybe for a week on a very powerful computer, to get something achievable. Of course, bugs in pretty simple contracts can be found on my MacBook, but if we go a little bit further, with many contracts and complex setups, maybe it requires more computational power.
Y
The question was: if you want to run Echidna in the cloud, or on a large system, how can you
AA
do it? Yeah, so the first thing you should know is that Echidna, on a very large contract, can take a fair amount of memory. So first, get a good server with a good amount of memory and CPUs.
AA
You can run, say, ten instances at a time, but we're not only going to run ten at a time: we are also going to randomly shuffle the parameters, because some issues can be easily found with three or ten transactions, and some other issues are more easily found with 200 transactions in a row.
AA
You can see all the code that was covered, and then we start again, but taking the output corpus of the previous generation, so you iterate over and over. You can see how your code is explored, and if there's some part of the code that is not explored with ten different instances, you can go back and say: no, I need to change this setup, because it doesn't depend on the actual execution. We can give you the link for that.
Y
Yeah, and it's also open source; like everything we are doing, it is open source.
Y
Okay, so it's really about spending time thinking about your invariants, and starting simple. If the very first invariant you write immediately leads to a bug, there may be something wrong about your approach: start with simple invariants that you know should hold in the system, then iterate over them.
Y
Okay, to give you some examples: let's say you are working with a math library. What invariants can you have? You can have commutative properties, a + b equals b + a; you can have identity, a multiplied by 1 should be a; or inverse, if you add something to its opposite it should be zero. These are not always true, but depending on what you are building, this might be the type of property you are looking for. For tokens, we already talked about the first one.
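A minimal sketch of such math-library properties written as Echidna assertion tests; `MathLib.add` is a hypothetical library function standing in for whatever the library under test exposes:

```solidity
// Hypothetical property tests for a math library using assertion mode.
contract TestMathLib {
    function test_add_commutative(uint256 a, uint256 b) public pure {
        // Commutativity: a + b == b + a for every pair the fuzzer tries.
        assert(MathLib.add(a, b) == MathLib.add(b, a));
    }

    function test_add_identity(uint256 a) public pure {
        // Identity: adding zero changes nothing.
        assert(MathLib.add(a, 0) == a);
    }
}
```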
Y
No user should have a balance above the total supply. Now let's say you want to look at the transfer function. What does transfer do? I'm transferring tokens to someone, so at the end my balance should have decreased by the amount, and the receiver should have seen its balance increased by the amount. And if you try to write something like that, you might quickly realize: what happens if the destination is myself? If I transfer tokens to myself, my balance is not going to increase or decrease at all.
Y
So this is an example where you might try to define an invariant on transfer; it might seem simple, you write the thing in Solidity, and if you do that, Echidna is going to tell you that there is an edge case where, if you transfer to yourself, the property as written is broken. In this example, if you go through it, it's not the code that was bad; it's the invariant that was bad.
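A hedged sketch of the function-level property described here, with the self-transfer edge case handled explicitly; the `Token` contract and its `balances` mapping are assumed names:

```solidity
// Hypothetical function-level property around transfer().
contract TestTransfer is Token {
    function test_transfer(address to, uint256 value) public {
        uint256 senderBefore = balances[msg.sender];
        uint256 receiverBefore = balances[to];
        transfer(to, value);

        if (to == msg.sender) {
            // Edge case the fuzzer finds in the naive version:
            // a self-transfer leaves the balance unchanged.
            assert(balances[msg.sender] == senderBefore);
        } else {
            assert(balances[msg.sender] == senderBefore - value);
            assert(balances[to] == receiverBefore + value);
        }
    }
}
```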
Y
So that's why having this iterative approach is really important, because sometimes you are going to make assumptions about your system and you might actually be wrong, and as the system gains in complexity, it's only going to get more difficult to refine the invariants.
Y
Something else which is important to consider is returning false versus reverting. For example, an invariant you can have is that, if you don't have enough funds, the transfer function should revert, or return false, depending on how you implemented the token. Once you have this list of invariants, you can usually split them into two categories: function-level invariants and system-level invariants. Function-level invariants are usually stateless.
Y
They are things where you can just look at a specific function and try to see if the property holds; so invariants that are stateless are our function-level invariants, and here you can craft simple scenarios just by calling the specific function. Then you have system-level invariants. System-level invariants are usually more complex, but they are also more powerful, and here you are stateful.
Y
You are going to change the state of the contract, and you are going to try to check the invariant no matter the state, and here is why it's important that Echidna is calling all the different functions, because that's actually what you want to exercise. The balance being below or equal to the total supply is an example of a system-level invariant. For function-level invariants,
Y
one thing that you can use is assertions; among the different modes Echidna supports, assertions are one, so you can just create a function, put an assertion in it, and try to see if it holds. For system-level invariants, as we already discussed, it might be more complex depending on the initialization of your system.
AA
All right, so let's look at this particular piece of code; let's take half a minute to read it. It's basically a buy function that calls an internal function, validBuy. What we are going to do is think about what types of invariants we can have here, what each of them would test, and of what type they are.
AA
Okay, so the question is about testing timestamp-dependent code; clearly that is not the case here, but for timestamp-dependent code, when Echidna runs it automatically increases either the block number or the block timestamp within some range, because it happens that some code will fail when the timestamp is increased to a really, really large number, like 100 years from now or the end of the universe.
AA
AC
For the first one, we can check that the number of tokens we get as a result follows our expectation given the message value we send, because, as you can see, there is a hard-coded rate of around 10. So basically it's a pretty simple formula: we can put different values into the function, we have a complete prediction of how much we should get, and then we can try to verify that it works.
AA
Yes, exactly: the property is related to the amount that we can get given the number of wei that is sent. The first thing is that this code depends on the state; we don't have the mint function, so we don't know what is inside, but we have the validBuy function, which is actually abstracting the thing we want to test, so we will start with validBuy, which is a pure, stateless function.
AA
Yes, so we were thinking about invariants here related to the amount that we can get. These are very simple, without going into specifics. A very simple invariant: if the amount of wei sent in is zero, then the user should receive no tokens at all. It's even simpler than working out exactly how much a user should receive, but it's a concrete case, and it's kind of a corner case, so it can be important to test.
AA
So how can we test this? There are a couple of ways; this is one. We can write a function that takes one parameter, and Echidna will put any number there; however, we're going to restrict the input of this function to be non-zero. Then we're going to execute validBuy, and we want to know whether Echidna can reach the statement after that, because validBuy will revert
AA
if the inputs are not the ones expected. So we want to know whether we can get tokens despite sending no value.
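A hedged sketch of that test; `Sale` and the `_validBuy(desiredTokens, weiSent)` signature are assumed names for the contract and internal function being described, so adapt them to the real code:

```solidity
// Hypothetical assertion test: can a non-zero number of tokens be bought with zero wei?
contract TestBuy is Sale {
    function test_free_tokens(uint256 desiredTokens) public {
        require(desiredTokens > 0); // precondition: ask for at least one token
        _validBuy(desiredTokens, 0); // assumed internal check, called with zero wei sent
        // If execution reaches this point, the zero-wei purchase was accepted: flag it.
        assert(false);
    }
}
```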
Z
AA
And perhaps you're wondering: what if the desired amount is zero? Then clearly this code will not do anything interesting; it is just going to revert. When you're writing tests you can put any number of requires, or preconditions as people usually call them. However, if the preconditions are too restrictive and your function reverts most of the time, it means you're not going to get value from the execution, because every execution that reverts in a test, in an invariant,
AA
every case that you exclude, is an execution that you waste. In this case you only waste one value out of the full range of uint256,
AA
so it's not a big deal. But if you put a lot of requires such that only a very small set of values satisfies them, it will be difficult to hit them randomly, or even with the techniques that we use; you will need a slightly different approach, and we will get to that later. Any questions?
AA
AD
Yeah, so as you mentioned, in this case we are basically sacrificing only one case, which is when it's zero, but is it going to run over the whole range of uint256? Because that's a really large range, and it doesn't make sense to test all of it in some cases.
AA
Yeah, exactly, it won't: there is no tool in the world that can run over the whole range. You can do it symbolically, but that's a different thing, and even then it's not testing all the values one by one. Fuzzing techniques are going to sample, let's say randomly, from the input space, or with heuristics, as in the case of Echidna.
AA
Since we are going to compile this code and it runs through our static analyzer, we will detect some interesting values, in this case 10, for instance. Ten is an interesting value there: it's a constant and it's going to be used somewhere, so definitely we want to test with that constant. Okay, so let's see what happens if you run it. Echidna will run a number of transactions and it will eventually detect that a certain call buying free tokens causes an assertion failure.
AA
However, this is going to be in the context of hundreds of random transactions that perhaps do something completely unrelated. What we do then is input minimization. Input minimization is a very old technique from testing, in which you have a list of bytes that triggers a bug
AA
and you remove those bytes one by one, or in some random way, in order to get a list of bytes that still triggers the bug but is minimal, or at least a local or global minimum, depending on the type of tool that you use. In this case Echidna will try to minimize every parameter; here we have only one parameter, and the parameter is actually the one useful for triggering the bug.
AA
If we had more parameters, they would be minimized towards zero, so if you have uints they will be reduced towards zero, because zero is the simplest value.
AA
But in this case the parameter cannot be zero, because if it's zero the test will pass, so here the minimal amount is one. It's not guaranteed that you will always get the smallest list of transactions that triggers the failure; this is an NP-complete problem,
AA
it cannot be solved in linear time, so it's always going to be a sampling, but in practice even randomly removing transactions or reducing the complexity of each value will give you good answers. All right, so a little bit about the Echidna APIs, maybe.
Y
We can just explain, yeah, just explain why it's happening.
Y
The issue here is that if you ask for one token, what's going to happen is that you do one divided by 10, and 1 divided by 10 is 0 because of rounding down, so the required amount of wei to be sent is zero. So if you ask for any number of tokens below 10, you are going to get them for free. And this is, again, an example where we defined an invariant without actually looking at the formula; we are not looking at how this formula works.
AA
AE
AA
So yeah, if I understand it correctly, this could be an internal part, and you could have a lot of code that gets the value from msg.value and then does something else; can you test that? Yeah, I mean, it depends on your code. Here we are testing an internal function directly.
AE
Y
Yeah, and if it was using msg.value, exactly.
AA
So yeah, properties can also take value; Echidna can send ether along with the calls. And if you have a constant in your code saying the message value should be 42, it will use that constant eventually, so you should be able to hit that particular case.
AC
But can you move back to the previous slide?
Z
AC
What I'm really wondering, because we talked about this error where the token amount is less than 10, so a very small amount: the symmetric problem here could be overflow. If I provide a huge amount of ether, because we have a multiplication by the decimals, it could break on the other side. We all know that the quantity of ether is limited;
AC
you can't even take a flash loan and get more ether than exists. So what is the best practice: follow the formal specification when you write fuzzing tests, or take some real-world limits as constraints, such as there being no more ether than exists at the moment, and we know it's deflationary, so we can simply assume that in the future nobody can obtain enough to overflow here?
AA
Yeah, exactly, this is an interesting question, and it goes into what your assumptions on the test are. If your assumption is "I have this token with this limited supply that should never go over something", then you can just say: I require that the value sent cannot be more than the total supply of that thing. In the case of ether it is a bit more tricky, and in fact, in Echidna, the externally owned accounts are simulated;
AA
we load them with ether at every transaction, so you can have a very large amount of ether available, as if you could take a flash loan. You probably cannot have enough ether to overflow a uint256, because that would be a real issue for the EVM itself, but we can define in the Echidna config the maximum amount of value that we send per transaction.
AA
So if you say "I don't care if the attacker has more than 10,000 ether, because that would mean they can do other things anyway", then Echidna will happily take that limit and never send more than that per call. However, it's still the case that over several transactions the accumulated amount of ether can go over that barrier, so you should be careful.
Y
So either you start with an invariant where the threshold is really limited, really small, like one ether, and you try to see whether it holds; if it's working with one ether, okay, you can continue like this, and if it's not breaking you increase the threshold from time to time. Or you take the opposite approach: you define an invariant where the threshold is really large, it breaks because it's really large, and then you decrease the threshold.
AA
If we start with very large values we could have false positives, and if we start with very small values we could have false negatives; the question is which of the two is going to cause you more trouble. That is something to think about, because missing a real issue can mean your protocol gets destroyed, while a spurious alarm usually just costs time. All right.
C
Hey there, in terms of fuzzing mutation, do you do any clever things? Say this function has a constant of 10: would you then see the constants in the function and use those as inputs?
Y
AA
Yes, there are some techniques. I can show you, a little bit later, a few lines of the Echidna code that show the mutations: we have interesting mutations on lists of transactions, in which we shuffle, and we also do splices, so we take one list of transactions and another one and splice them at a random position. There are a couple of fun things
Z
to look at, but yeah, I think we should move
AA
on a little bit. All right. So, as I was saying, we get this failure; now, even if you don't understand why it fails, well, that is a different beast. Sometimes you write your invariant, your invariant fails, and then you start the journey to understand why it fails. We are not going to talk about that here; some people like to re-run the failure as a unit test to make sure, step by step,
AA
they understand what is going on, but that's a completely different kind of beast, related to what happens afterwards and how you can fix it. All right, so a little bit about the Echidna APIs. This is a topic that is still an open debate in some cases: what is the best way to write properties? Echidna supports a couple of different ones. There are Boolean properties, in which a function is executed and
AA
it should return a Boolean, true or false, and if the function reverts for any reason, that is the same as returning false. If we go back a little bit, we can see over there "error: unrecognized opcode"; that is related to the assertion failure, because that is how old versions of Solidity encoded assertion failures. But if you use a Boolean property, you will just return false, so you know exactly how the property failed, or it could be a revert.
AA
So either you use Boolean properties, which are the classic way to define invariants, and these come from some very old techniques, in particular QuickCheck, which is a property-based testing tool for Haskell and a couple of other languages and was an inspiration for this; or you use assertion failures, so every time an assertion is called with false, the property will fail.
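A brief sketch contrasting the two styles the speakers describe, with hypothetical state names; the `echidna_` prefix for Boolean properties and the use of `assert` for assertion mode follow the description above:

```solidity
// Hypothetical contract showing the two property styles side by side.
contract PropertyStyles {
    uint256 public totalSupply;
    uint256 public totalDeposits;
    uint256 constant MAX_SUPPLY = 1_000_000;

    // Style 1: Boolean property. Returning false (or reverting) is a failure;
    // side effects of evaluating the property are rolled back.
    function echidna_supply_is_bounded() public view returns (bool) {
        return totalSupply <= MAX_SUPPLY;
    }

    // Style 2: assertion mode. Called like any other function, its side effects persist,
    // and a failure is reported whenever an assert is violated.
    function deposit(uint256 amount) public {
        totalDeposits += amount;
        totalSupply += amount; // hypothetical bookkeeping
        assert(totalSupply >= totalDeposits);
    }
}
```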
AA
If you care about reverts, you either use the Boolean type, or, if it's an external function, you can do a try/catch and check which type of revert happened; you can even fail on some types of revert and not on others, because you may want the user to get a good error message in some cases, and for other types of revert you just want to know. And finally, we have the Dapptools and Foundry style API, in which you have a function that fails if it reverts and passes otherwise; I hope the Foundry team agrees with us that it's correctly implemented.
AA
There is a "testing modes" page in the repository, so you can go there, where it's explained a little bit more; this is a very high-level overview. So again: Boolean properties are easy to define and have no side effects. That is what is interesting when you use Boolean properties: all the side effects of evaluating the property are reverted.
AA
But if you're using assert, the side effects, everything you change in the blockchain, will remain, so this can be really useful for testing some complex code. Assertions can be simple to define, and it will also
AA
be easy to see in the code coverage whether the assertion is covered or not. However, some code, especially some old Solidity code, uses assert as if it were require, and that is a really bad thing; you should not be doing it. Use require if you want an actual precondition in your code, and use assert for testing. Okay, and finally we have the Foundry and Dapptools compatibility. The only thing,
AA
well, the thing that we don't support is pranks; we don't like to prank people, so we don't support pranks. However, we do support some of the hevm cheat codes, the original ones. You should be careful using them; we know that there are some catches with that, especially related to what the Solidity compiler expects versus what you're doing in your transaction.
AA
So please be careful, because you could have some issues. We rarely use cheat codes; we try to keep all our code as close to plain Solidity as possible, so you can easily port it to another tool. There are a few Echidna-specific things, but we are also open to discussion if the community agrees that we need a specific cheat code, or that we need to avoid some specific cheat code.
AA
So, exercise four: we're going to deal with one of the exercises from Damn Vulnerable DeFi. How many of you know this amazing CTF? Yeah? Sorry.
Y
AA
Yeah, so we will go into a more interesting example, but to do that there is something you will need, which is called multi-ABI mode. Usually testing tools take a specific contract as your main contract to interact with, so in the default mode
AA
Echidna will only target the specific contract that you put on the command line, or, if you have only one contract, it will use that one. But there is something called multi-ABI, where we call every contract that is deployed after the constructor for which we have the ABI. If you deploy something from raw bytecode directly and Echidna doesn't have the ABI, it won't be able to call it, because it doesn't know what is there.
AA
We will need this in order to deal with the next example, because sometimes the bug you want to detect doesn't depend on the state of one contract; it depends on the state of many contracts, and in that case you can be surprised by the fact that changing the state of another contract can break your property, and we definitely want to avoid missing that.
Okay.
So
again,
how
many
of
you
know
about
them?
Vulnerable,
defile,
okay,
a
good
number,
and
did
you
actually
well?
This
is
these.
AA
Are
the
first
exercise,
the
first,
the
first
two,
so
I
hope
that
people
know.
So,
if
you
know
this
this,
how
to
solve
it,
you
should
it's
going
to
be
even
easier
for
you.
What
we're
going
to
do
is
we're
going
to
take
a
look
of
this
sample,
so
yeah
I,
assume
that
we
already
you
already
have
a
code.
AA
So
is
the
exercise
the
the
naive
receiver
one.
So
what
we
want
to
do
is
we
want
to
be
able
to
drain
the
the
funds
in
Flash
loan
receiver.
Y
Right, just to give a bit of a description of the challenge: here you have two contracts. You have the NaiveReceiverLenderPool, which, yeah,
Y
allows you to take a flash loan for a fee, and you have a second contract, which is a user contract that interacts with the pool. The user contract is deployed with some funds inside, and the goal is to see whether it's possible for this specific user contract to be drained.
Y
AA
So what we want is for you to review the exercise. If you have already done it, you know it requires you to send some number of transactions. In this case we are going to prepare everything for Echidna to rediscover this, without telling it how it can be solved. We will need two things: we will need to initialize the setup to match the initialization of the challenge, which the challenge actually shows us.
AA
Let's see. Okay, so this is the flash loan part of the pool. The interesting part about this is that we don't have to care about the specific details in the code: we want to give Echidna the same scenario that we have in the actual challenge, and we want to know if it can actually find a way to drain the contract. We can see how the receiver function works here, but
AA
the interesting part, perhaps, is the initialization. What we are going to do is, in the constructor of our test, deploy the contracts that we have here, prepare everything, and then use a suitable invariant, which should be really simple; you don't have to overthink it. We want you to use that in order to see if Echidna can drain the contract.
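A hedged sketch of such a harness. The contract names follow the Damn Vulnerable DeFi challenge, but the constructor arguments, the ether amounts, and the assumption that the harness itself is funded at deployment are illustrative, so adapt them to the real setup:

```solidity
// Hypothetical Echidna setup for the "naive receiver" challenge.
contract TestNaiveReceiver {
    NaiveReceiverLenderPool pool;
    FlashLoanReceiver receiver;

    constructor() payable {
        pool = new NaiveReceiverLenderPool();
        receiver = new FlashLoanReceiver(payable(address(pool)));
        // Fund both contracts roughly as the challenge does (amounts assumed;
        // the harness must itself be funded at deployment for this to work).
        payable(address(pool)).transfer(1000 ether);
        payable(address(receiver)).transfer(10 ether);
    }

    // Simple invariant: the receiver should never lose any of its 10 ether.
    // In multi-ABI mode, Echidna itself calls pool.flashLoan(...) while searching.
    function echidna_receiver_keeps_funds() public view returns (bool) {
        return address(receiver).balance >= 10 ether;
    }
}
```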
Z
With that, of course, with a suitable invariant. So yeah,
AA
we are running out of time, but let's take ten minutes.
Y
AA
Yes, there is an alternative way to do it, but the Solidity approach is easier. The other tool is called Etheno; it can replay the deployment in a simulated blockchain, like Ganache, and then hand it to the tool, so in that case you don't need to replicate the setup in Solidity manually. But you still need to know where everything is deployed:
AA
if you want to interact with the pool, you need to know the address of the pool, and then Echidna can actually call the pool's functions automatically, because it has everything it needs. In this case, though, we're going to go the simple route, which is to rewrite the setup; it's just a couple of lines.
AA
So yeah, the previous exercises are on the easy side; this is a bit more difficult, but it still shouldn't require more than a few lines. Please let us know if you need some hints; there are hints in one of the branches, the hint branch.
C
Hello, yeah, sorry, no worries. Is there any thought about, or current support for, mainnet forking or state forking of some sort?
AA
So, not yet. hevm has support for that, but it would require us to do input/output on transactions, so we need to check whether that would impact the actual performance of the tool.
G
Yeah, I have a kind of high-level question. I'm trying to think about what the limitations are in terms of expressing properties as invariants. For example, let's say we have a temporal property that we want to express: we have a wallet contract and we want to be able to say that any user who deposits their money is eventually able to withdraw it. Is that just a fundamental limitation of Echidna, or is there a way to express it?
AA
So the notion of "eventually": you need to have some definition of it on the blockchain, because eventually cannot mean "in 100 years". If you define it as, say, only allowing increments of time up to some limit, then yes, you can check a bounded version of that property, but you cannot use it as a theoretical statement,
AA
like "I have a state, which I don't know what it is, and then I transition to another state, which I also don't know"; in that case you would need to know whether the original state was even reachable, and things like that. Echidna works on concrete states: you always have concrete data, and you need to put boundaries on the things that you can do.
AA
So if you say a user should eventually receive this amount, and "eventually" means within some number of transactions, some number of blocks, or some timestamp range, then yes; otherwise it's more like a theoretical proof that you are doing, and you probably need another type of tool.
G
AA
You would do something like this: you would need state to track it down. You do a deposit and you need the user to eventually receive something, so you will need state that tracks the deposits, like a mapping, and you have a function that is your invariant: it receives the address of a user and checks the mapping.
AA
If the time since the last deposit is in one range, you check one thing, and if the time is in another range, you check something else, so it will be checked across the randomly generated transactions, and with that you will be able to cover it, given that you generate enough transactions and enough time steps.
AA
So yes, if the property you are testing requires adding state, you will need to add whatever state is needed; Echidna won't be able to track state outside the contracts. The only thing tracked outside is the increment between the different blocks, for instance: Echidna will show you that between this transaction and that transaction there were 10 blocks, but everything else.
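A hedged sketch of the bookkeeping just described. Note that, for simplicity, this variant replaces the time-range check the speaker describes with a solvency check that approximates "deposits are eventually withdrawable"; `Wallet` and its `deposit`/`withdraw` functions are assumed names:

```solidity
// Hypothetical harness that adds the state needed to check a bounded
// "depositors can get their money back" property.
contract TestWallet {
    Wallet wallet = new Wallet(); // assumed target contract
    uint256 public totalTracked;  // deposits made through this harness, minus withdrawals

    receive() external payable {}

    function do_deposit() public payable {
        wallet.deposit{value: msg.value}(); // assumed payable deposit function
        totalTracked += msg.value;
    }

    function do_withdraw(uint256 amount) public {
        wallet.withdraw(amount);            // assumed withdraw function
        totalTracked -= amount;
    }

    // If the wallet ever holds less ether than is still owed to depositors,
    // some depositor can no longer withdraw, no matter how long they wait.
    function echidna_wallet_is_solvent() public view returns (bool) {
        return address(wallet).balance >= totalTracked;
    }
}
```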
AF
Hi, is there any feature that allows me to guide the fuzzer, or the domain of values that I want it to try?
AA
So, in order to guide the fuzzer into a particular state, the easiest way to do it is to add a small piece of auxiliary code that moves the state of your contract into something specific. For instance, if you have a protocol that requires a deposit with a particular property,
AA
let's say you have three parameters and you need a deposit in which these parameters have the same value, or values that are otherwise difficult to find, you can add that piece of code. But it's important to let Echidna explore freely at the same time as you are adding information:
AA
you also want to allow the tool to explore things that you don't expect, because if you just restrict it and say "I only expect users to do this type of deposit", then you can be surprised later, because there is a way to break your property using things that you don't expect.
AA
So, did anyone manage to at least start creating a constructor, or even run it and have some invariant, or even think about the invariant that you need?
G
Just from an implementation side, I'm curious what actual virtual machine you use to deploy and execute contracts.
AA
So we use hevm, which is the EVM implementation written in Haskell. I know that it's in the process of being improved and rewritten; it was moved from the dapptools repository into the ethereum organization's repository, so we are eager to test new features.
AA
But yeah, if you use the Etheno companion tool to deploy your contracts, it's going to use Ganache, and it will serialize everything into a JSON file which you can then load into Echidna.
AA
Okay, yeah, we're going to quickly go over the solution, so we can keep a couple of minutes for this. The solution requires first deploying the contracts with enough ether to match what the challenge deploys: you can see here that we deploy all the contracts that we need and send the amount of ether that every contract needs. Then the property we're going to use is very simple: the balance of the receiver is at least 10 ether.
AA
We don't actually need to follow exactly what the exercise says about draining the contract completely: if there is one transaction that allows you to reduce the balance of the receiver, then something is wrong, and it will eventually be drained. So if you run it you will see something like this, in which the flashLoan function has a parameter that is the borrower, and we can control it in order to reduce the receiver's balance.
AA
So yeah, do you want to go to the conclusion? Okay.
Y
Yeah, there was a second exercise on Damn Vulnerable DeFi, exercise five, but we are running out of time. The idea was the same: we define an initialization, which was a bit more complex, and if you go through this exercise, the slightly more specific part is that there is a callback from the contract to the caller, so in the Echidna test you also need to implement the callback for the flash loan.
Y
Okay, so this is something we touched on a bit during our discussion: what about the other tools? There are a couple of other fuzzers out there: dapptools, Foundry, and at least these ones are open source.
Y
Foundry might be a bit better for simple tests and for ease of use, like your first invariant, because it is integrated within the compilation framework, but in the long run it might not be as powerful as Echidna, simply because we have used Echidna for a couple of years and we have tuned it in a way that provides the best value we can. We need to finish, okay, sorry. So I hope you enjoyed the workshop.
Y
We have more exercises in building-secure-contracts. Something I would recommend for you is to try to write invariants in your next project. So, who is going to try Echidna on their next project? Nice. Thank you.
AG
All right, so today I'm going to present how we can leverage storage proofs to build cool applications. Maybe a bit of info first: this is me, I work at Herodotus, and yeah, that's it. So today we're going to talk about storage proofs; I want to introduce this topic.
AG
I'm going to present storage proofs and explain why they're cool, how to work with them, why you need tooling to work with them, and a bunch of other things: why this is even possible, all the complexity behind it, the trade-offs, and so on. A few words about why I really believe storage proofs are cool, especially nowadays: my thesis is that Ethereum is pretty sharded nowadays, and with storage proofs we can essentially read state in an almost synchronous manner, which is a pretty nice thing to do,
AG
given the circumstances. Let me also explain why this is even possible. Storage proofs are essentially this idea that the entire state is committed to in a cryptographic manner, using some data structure like Merkle trees, Merkle Patricia tries, and so on, and we can essentially verify any specific piece of state, at any point in time, on any domain, which is pretty nice and doesn't introduce additional trust assumptions:
AG
we just rely on the security of the base chain. So that's storage proofs, TL;DR, and why they are cool. Now a bit of a sponsored section: what we're doing at Herodotus. Our goal is to make smart contracts state-aware, in a way, by providing access to historical state.
AG
Like I said, my thesis is that Ethereum is pretty sharded nowadays; we want to unshard it by using storage proofs, and we want to enable synchronous data reads, because today we do not have really nice ways to make synchronous data access without introducing new trust assumptions. So that's what we do, and how do we achieve it? We achieve it by using, obviously, storage proofs; we use SNARKs, STARKs, and so on. I will get to why we even use all this tooling, but first a few words about storage proofs,
AG
what they are, and so on. It's tricky, actually; I need to be multitasking. Okay, so what we're going to cover in today's workshop: all the basics required to properly understand this primitive, how to work with it, how you can generate these proofs, why they're pretty useful, how you can actually access these commitments (I'll get to what we call the commitment later) in a trustless manner, and how we make smart contracts state-aware and enable historical data reads. Cool.
AG
So
it's
pretty
that's
pretty
tricky
so
about
the
background
that
I
want
you
to
have
for
this
Workshop,
so
we're
gonna
like
start
from
the
biggest
Basics.
So
what
is
the
hashing
function?
AG
Just
a
very
quick
reminder:
I
hope
it
will
take
less
than
a
minute
like
generalized
blockchain
Anatomy,
how
I
ethereum
header
looks
like
why
ethereum
they're,
not
like
pretty
on
only
like
ethereum
focused,
however
I
think
that,
for
the
sake
of
this
Workshop,
it's
the
best
to
like
present
on
this
concrete
example,
Miracle
trees,
explain
me
like
on
five
I
will
just
quickly
explain
the
idea,
how
it
works
and
what
is
America
Patricia
tree
without
really
going
too
much
into
digitals
yeah.
Finally,
no,
not.
Finally,
the
anatomy
of
the
ethereum
state.
AG
It's pretty important to deal with this primitive, and finally, how to deal with the storage layout. Cool. So, a hashing function: essentially it's this idea that I can have a function that takes some input of any size and it always returns an output of a fixed size. What's also important: there are no two inputs that will generate the same output, and you cannot reverse the hashing function.
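A minimal sketch of the fixed-size-output property, not from the talk itself, using keccak256 from ethers v6 (the hash Ethereum uses); the inputs are arbitrary illustrations:

```typescript
import { keccak256, toUtf8Bytes } from "ethers";

// Inputs of very different sizes...
const short = keccak256(toUtf8Bytes("hi"));
const long = keccak256(toUtf8Bytes("a".repeat(10_000)));

// ...always map to a 32-byte digest (0x + 64 hex characters).
console.log(short, short.length); // 0x... 66
console.log(long, long.length);   // 0x... 66
```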
AG
AG
Why is it important? So, generalized blockchain anatomy: why do we call it a chain? Because we have a bunch of blocks linked together — each block header contains a reference, the parent hash, to the previous header — which is pretty cool. And let me remind you what the parent hash, or the block hash, on Ethereum is: it's essentially the hash of the header. This is pretty important for dealing with these primitives, making smart contracts state-aware and accessing historical state.
AG
Just keep that in mind. Let's get to the next part — so, no, I think I'm missing one slide... no, it's the correct one, okay. So this is an Ethereum block header; as I said, we're gonna go through the example of Ethereum concretely, so a bit of anatomy. To access state, obviously we need the state root. What is the state root? It's the root of the Merkle Patricia tree of the Ethereum state.
AG
We also have the transactions root, which is pretty useful if you want to access historical transactions, like their entire body, and the receipts root, which is very useful to access any events, logs and so on. All of these are roots of Merkle Patricia trees — Merkle Patricia trees, Merkle trees, just think of it that way. And, most importantly, we have the parent hash, and with the parent hash we can, in a way, go backwards.
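A small sketch of inspecting those header fields over standard JSON-RPC (assuming ethers v6; the endpoint URL and block number are placeholders, not from the talk):

```typescript
import { JsonRpcProvider } from "ethers";

// Any Ethereum JSON-RPC endpoint works here; the URL is a placeholder.
const provider = new JsonRpcProvider("https://example-rpc.invalid");

async function inspectHeader(blockNumber: number) {
  // eth_getBlockByNumber returns the header fields discussed above.
  const block = await provider.send("eth_getBlockByNumber", [
    "0x" + blockNumber.toString(16),
    false, // header only, no full transaction bodies
  ]);
  console.log("parentHash:      ", block.parentHash);
  console.log("stateRoot:       ", block.stateRoot);
  console.log("transactionsRoot:", block.transactionsRoot);
  console.log("receiptsRoot:    ", block.receiptsRoot);
}

inspectHeader(17_000_000).catch(console.error);
```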
AG
I think that's it. Let's get to Merkle trees. Essentially it's this idea that I can take whatever amount of data and commit to it in a cryptographic manner by using this data structure. On the left side we see a standard Merkle tree: essentially, all the data goes to the bottom and we hash it — you know what the hashing function is.
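A minimal sketch, not from the talk, of building a Merkle root by pairwise hashing (ethers v6 keccak256; odd levels just duplicate the last node, and real trees add more details like domain separation):

```typescript
import { keccak256, concat, toUtf8Bytes } from "ethers";

// Hash the leaves, then hash pairs level by level until one root remains.
function merkleRoot(leaves: string[]): string {
  let level = leaves.map((leaf) => keccak256(toUtf8Bytes(leaf)));
  while (level.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < level.length; i += 2) {
      const right = level[i + 1] ?? level[i]; // duplicate last node if odd count
      next.push(keccak256(concat([level[i], right])));
    }
    level = next;
  }
  return level[0];
}

console.log(merkleRoot(["a", "b", "c", "d"]));
```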
AG
What you see here — I hope you can see it — on the top we have the state root, and essentially the state root is the root of this tree. Now, how it works and how you should think of it: it's a pretty complex data structure, I don't want to bother you with it today, but essentially we have three types of nodes: leaf nodes, extension nodes and branch nodes. Leaf nodes contain data.
AG
Branch nodes contain data, and extension nodes, on a high level, just help us sort of navigate in that tree. To be honest, to deal with storage proofs you don't really need to understand this part, but to build on the low level as we do, obviously we need to deal with that part quite a lot. Okay, so the Ethereum state — how is it constructed? Most important takeaway:
AG
it's a two-level structure. I mentioned that the state root is the commitment to the entire state, but that's not the whole story, because Ethereum is — does this still work? Okay, it works — account-based, and essentially the state root is the commitment to all the accounts that exist on Ethereum. And what is an account made of? It's made of a balance, i.e.
AG
the ETH balance, a nonce (a transaction counter), and a storage root; the storage root is the root of another Merkle Patricia tree, and this Merkle Patricia tree contains the key-value database that holds the mapping from storage key to its actual value. And finally we have the code hash, essentially the hash of the bytecode. So, main takeaway: first we access accounts, and once we have the account's storage root, we can access its storage. Okay, cool.
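To keep the two-level structure straight, here is a small illustrative sketch of the account record as a TypeScript type (field names follow the common account layout; this is not an official interface and not from the talk):

```typescript
// The state trie maps address -> Account; each Account's storageRoot
// is itself the root of a second trie mapping storage key -> value.
interface Account {
  nonce: bigint;        // transaction counter
  balance: bigint;      // ETH balance in wei
  storageRoot: string;  // root of this account's storage trie
  codeHash: string;     // keccak256 of the contract bytecode
}

type StateTrie = Map<string /* address */, Account>;
type StorageTrie = Map<string /* storage key */, string /* 32-byte value */>;
```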
AG
So, to sum up the background, the main takeaways: given the block's state root, you can recreate any state for this specific block on this network, and given an initial trusted block hash, you can essentially recreate all the previous headers, which is pretty cool and important for the ideas that I will explain pretty soon. Okay. As this is a workshop, and a short one, I won't make you code, but I will show you some concrete examples.
AG
AG
So first of all, the question we need to answer for ourselves is: how does Polygon commit to Ethereum L1? Because if we want to, let's say, prove the ownership of a Lens profile on Optimism, we need to know the state root of Polygon, but there is Ethereum L1 in the middle. So how do we actually access this? On Ethereum L1, primarily. Polygon is a commit chain and it commits a bunch of things to Ethereum every some amount of time, and essentially, on L1...
AG
we do not validate the entire state transition; we just verify the consensus of Polygon, and these checkpoints, as they call them, essentially contain state roots — well, not directly, but we can access them; let's get to this part. This is taken from Polygon's documentation, and this is how a checkpoint looks. As you can see, the checkpoint is made of a proposer (who proposed the block), a start block and an end block — give me a second, I'll get to this — and, most importantly, we have the root hash.
AG
So the root hash is essentially the root of a Merkle tree — not a Merkle Patricia tree — that contains all the headers. Which headers? The headers in the range of start block and end block. Cool. So now, if we get back to the previous part, with this commitment we can essentially prove that we know a valid state root of Polygon.
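A sketch of the checkpoint fields as described above; the names are illustrative only and are not the exact Polygon contract ABI:

```typescript
// Shape of a Polygon checkpoint as described in the talk (illustrative names).
interface Checkpoint {
  proposer: string;   // who proposed the checkpoint
  startBlock: bigint; // first Polygon block covered
  endBlock: bigint;   // last Polygon block covered
  rootHash: string;   // Merkle root of the headers in [startBlock, endBlock]
}
```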
AG
AG
This is like the basic ERC-721 — as you can see, it's an abstract contract and it's slightly modified: instead of having a standard mapping from token ID to its owner, we have a mapping from token ID to token data, and token data is a struct. This struct is 32 bytes in total: 20 bytes is the actual owner and the remaining 12 bytes represent when the token was minted.
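A minimal sketch, not from the talk, of splitting such a packed 32-byte slot value back into its two fields; it assumes the layout just described (12 bytes of mint info followed by a 20-byte address — a real contract's packing may differ) and ethers v6:

```typescript
import { dataSlice, getAddress, toBigInt } from "ethers";

// Split a packed 32-byte storage word into its two fields.
// Assumes: first 12 bytes = mint info, last 20 bytes = owner address.
function unpackTokenData(slotValue: string) {
  const mintedAt = toBigInt(dataSlice(slotValue, 0, 12)); // first 12 bytes
  const owner = getAddress(dataSlice(slotValue, 12, 32)); // last 20 bytes
  return { owner, mintedAt };
}
```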
AG
Okay, but how do I actually prove it? Oh, and also a very important thing when dealing with storage layout: we have something called slot indices, so each variable has a given slot in some sort of meta layout — I call it like that, it's probably not the right term anyway. This mapping has slot index two — I will get to why it's two in a second — and we have a mapping from token ID, so a uint, to 32 bytes of data represented as a struct.
AG
Just think of it as some bytes. Okay, so I guess most of you use Hardhat, so I'm gonna present with Hardhat. There is a very, very cool tool to deal with storage layouts, called, obviously, hardhat-storage-layout. This is how you install it: it's literally yarn install hardhat-storage-layout. You add one line to your Hardhat config, you write a new script that contains literally eight lines of code, you run the script, and you get this weird table.
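A sketch of that config line and the short script, written from memory and not from the talk's slides — check the hardhat-storage-layout README for the exact API:

```typescript
// hardhat.config.ts — one extra import enables the plugin (assumption: plugin API as documented).
import "hardhat-storage-layout";

// scripts/print-layout.ts — the short script the speaker mentions.
import hre from "hardhat";

async function main() {
  await hre.run("compile");          // make sure artifacts are fresh
  await hre.storageLayout.export();  // prints the slot table for each contract
}

main().catch(console.error);
```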
AG
And what does it really tell you? Oh, and by the way, why this tool is pretty useful: as you see, this contract is abstract, so some other contracts can inherit from it, and obviously they inherit the storage layout too — with inheritance this can get trickier. Okay, so it's pretty hard to coordinate one hand with the other hand, even though I'm Italian. Okay, anyways, yeah: we know the slot index, and that's how we get it.
AG
We have a column called storage slot and, as you see, _tokenData is marked as two, and that's it. Okay, but what do we do with it? How do we get the storage key? And yeah, that's it — let me check the time. Okay, so a bit of hands-on: how do we get the actual storage key? It sounds scary and it's meant to be scary. So we know the slot index, the storage index; I want to prove that, like, 0x35... owns the token with ID 3594.
AG
How do we get the storage key? We essentially do this operation: we concatenate the slot — I mean the key in the mapping, which is 3594, because this is the token ID; as you know, we have a mapping from token ID to token data, and token data contains the owner — so we concatenate this with the storage index and we hash it all together. This is the storage key that we have. If you're interested in how to deal with more complex mappings and layouts, check the Solidity documentation.
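A minimal sketch of that computation for a uint256-keyed mapping, mirroring the standard Solidity layout rule keccak256(abi.encode(key, slot)); it uses ethers v6 and is not taken from the talk's slides:

```typescript
import { keccak256, AbiCoder } from "ethers";

// Storage key for mapping(uint256 => TokenData) declared at slot 2:
// keccak256(abi.encode(tokenId, slotIndex)) — both padded to 32 bytes.
const coder = AbiCoder.defaultAbiCoder();

function mappingStorageKey(tokenId: bigint, slotIndex: bigint): string {
  return keccak256(coder.encode(["uint256", "uint256"], [tokenId, slotIndex]));
}

// The token ID and slot index from the example above.
console.log(mappingStorageKey(3594n, 2n));
```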
AG
It's explained pretty well. So now, to make sure we got the proper storage key, let's just check it. How can we check it? Super easy: let's just make a one-liner RPC call, eth_getStorageAt, to get the storage at some specific key. So the parameters: we want to access the storage of what? Of the LensHub — LensHub is a contract that is essentially the representation of these profiles — and its address is 0xdd4... and so on, and the slot...
AG
Oh, is it better? Oh, it's much better. And the slot, that is, the storage key, is the 0x1... hash that we just computed, and the result is 0x000... We know that it's 32 bytes of data where we have 20 and 12, so let's split it into 12 and 20 bytes, and what we have is some number — as you can see, 0x, a lot of zeros, then 62...d — and this looks like a small number.
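A sketch of that sanity check (ethers v6). The RPC URL is a placeholder, and you pass in the full LensHub address and the storage key computed in the previous step; the returned word can then be split 12/20 with the unpackTokenData helper sketched earlier:

```typescript
import { JsonRpcProvider } from "ethers";

// Placeholder endpoint — use a real Polygon RPC URL.
const provider = new JsonRpcProvider("https://example-polygon-rpc.invalid");

async function readSlot(lensHub: string, storageKey: string): Promise<string> {
  // eth_getStorageAt params: contract address, storage key, block tag.
  return provider.send("eth_getStorageAt", [lensHub, storageKey, "latest"]);
}
```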
AG
AG
But how do we actually get to storage proofs? There is a standardized method in the JSON-RPC standard for Ethereum clients, and this method is called eth_getProof. Essentially, given the contract address — or, I'd better call it the account address in this specific case — it allows us to generate a state proof, and the second argument is an array that contains all the storage keys that we want to prove.
AG
There is another argument, which is 0x1a...: it's essentially the block number for which we prove the state. Yeah, let's call this method. Oh, by the way, you might have a question: how do we deal with this method on non-EVM chains, because, for example, on some specific rollups this method is not supported? Actually, it's not a big deal, because, if you think of it, we just need the database, and on top of this database we can literally build this method ourselves; we just need to know how the storage is constructed.
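A small sketch of calling eth_getProof as described (ethers v6; the endpoint URL is a placeholder, and the account, storage key and block tag are the three parameters discussed above):

```typescript
import { JsonRpcProvider } from "ethers";

// Placeholder endpoint — any client exposing eth_getProof works.
const provider = new JsonRpcProvider("https://example-rpc.invalid");

async function getStorageProof(account: string, storageKey: string, blockTag: string) {
  const proof = await provider.send("eth_getProof", [account, [storageKey], blockTag]);
  // accountProof: Merkle Patricia path from the state root to the account;
  // storageProof: path from the account's storage root to the requested slot.
  console.log("storageHash:       ", proof.storageHash);
  console.log("accountProof nodes:", proof.accountProof.length);
  console.log("storageProof nodes:", proof.storageProof[0].proof.length);
  return proof;
}
```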
AG
AG
It is scary, it's meant to be scary. One proof is more or less 600-700 bytes — it really depends: the bigger the storage is, the bigger the proof is, and also the more accounts we have, the bigger the account proof is. So that's a lot of call data, if you can imagine, and yeah, that's pretty bad. Why? Because we need to post this proof on chain, so it's a lot of call data. But okay, let's try.
AG
AG
So it's pretty bad — and why is it that bad? I explained on a high level what Merkle trees and Merkle Patricia trees are; on Ethereum we use Merkle Patricia trees, and essentially there is a trade-off: when using Merkle Patricia trees the proof is slightly bigger and it's harder to decode (we actually need to do a bit of decoding there), but we need to do less hashing.
AG
So this is a trade-off, and depending on where we actually verify this proof, it might be more feasible to verify a proof that is based on Merkle Patricia trees or on Merkle trees. Okay, but there is a solution, and the solution is: what if we snarkify such a proof and we verify this proof inside the snark? Why is it cool? Because — let's say that I'm gonna verify this proof inside a Groth16 circuit — yeah, the verification costs more or less 210k gas.
AG
The proof is way less than 600 bytes, so it's good: we essentially get rid of the call data, because the proof itself can be the private input to the circuit. Yeah, we can use multiple proving systems depending on the actual use case. And now, why is it very, very cool? First of all, it removes call data; second of all,
AG
it allows us to deal with hashing functions that are very unfriendly for the EVM, the ones that we don't have precompiles for — like, let's say, Pedersen. It might be super expensive to verify such a proof on the EVM, because, first of all, that's a lot of call data, and the hashing function is pretty unfriendly. But what if we can do it inside the snark and just verify a snark? Yeah. Another benefit: this really helps in abstracting the way
AG
we verify these proofs, because you don't need to have one verifier for each type of proof; you can essentially abstract it behind the snark, which is great. These numbers were taken from a very nice article written by a16z a few months ago, yeah, and I think that's pretty much it.
AG
So, I mentioned that we always need the state root, but because all of these systems have a native messaging system, we can send small commitments — like, for example, the block hash — to L1 (usually it goes through L1), and yeah, we can unroll it, or send the state root directly. We also don't need to rely on messaging: we can, for example, rely on the fact that Polygon is a commit chain, and all these rollups commit their batches from time to time, and so on.
AG
So if, let's say, Polygon commits to L1, I can then send this commitment to StarkNet and do the actual verification. Cool. So now, how do we actually do that? Let's break the entire flow into smaller pieces. The flow is the following: we need to have access to the commitment, which is either a block hash or a state root, and again, we can get it either by sending a message, or by relying on the fact that this chain commits — so in a sense it's still a message.
AG
We can relay it in an optimistic manner, or we can go even more crazy and verify the entire consensus. Okay, so this is step number one: we need to get the commitment. Step number two: we need to somehow access the state root, so the commitment of the state, from a previous block or the actual block — because keep in mind that these commitments are only block hashes, and with block hashes we can recreate headers, but we cannot access the state. Okay. Then, once we have the state root, we obviously need to verify the actual storage proofs.
AG
AG
So, approach number one: messaging. I can send a message from, let's say, Optimism to Ethereum L1 — I can get the block hash by just calling the proper opcode — and I get it. It takes some time, but still, I get it. This is approach number one: we rely on the built-in messaging system, which is, I think, fair, because its security is equal to the security of the rollup, and if you're deploying an application on this rollup, it's a fair assumption to make.
AG
Yeah — oh, and now about the downsides: the message must be delivered, so it introduces a significant delay, especially when dealing with a withdrawal period in the middle, and it requires interacting with multiple layers — first you need to send a message and then you actually need to consume it. So it's not ideal, but the trust assumptions are pretty reasonable.
AG
AG
Okay, so maybe a bit of an intro: right now we have proof of stake as the native consensus algorithm on Ethereum, which is pretty great, because verifying the consensus is finally doable. Before, the hashing function Ethash, which was used for proof of work, was very memory-intensive, so verifying it inside a snark or on chain directly was almost impossible.
AG
Now we also have this fork choice rule called LMD GHOST, which is implementable, but doing all of this directly is pretty expensive, so ideally we need to wrap it inside a snark. But there is another downside. A few words about the trust assumptions: well, you verify the consensus directly, so it's fine — do you introduce any trust assumptions? Not really. But the biggest downside is that generating the proof actually takes some time.
AG
So, to be honest, this approach is feasible, but compared to messaging the latency is quite often almost the same, and you pay a lot in proving time, and it requires having more advanced infrastructure. Okay, the last approach, the one we actually use, is something that we call an optimistic relayer based on MPC — MPC stands for multi-party computation. Maybe before I explain how it works, let me explain the image; I hope it's self-explanatory: it's an MPC protocol, we have multiple parties, and these multiple parties attest to something.
AG
AG
The reason why we have MPC is because the more validators you have, the more security, obviously — but the more validators you have in a standard multisig approach, the more signatures you have; so the more decentralized it is, the more expensive it is to verify, because you need to verify multiple signatures and you need to post those signatures — it's a lot of call data. Such an approach is not feasible on chains where call data is expensive, like optimistic rollups. Yeah, okay. So how does it work?
AG
What is the MPC part actually doing? The MPC part is very simple: it's essentially signing, over a specific curve, some specific payload, and the payload is the commitment itself. Okay, so this is how we actually attest. But now, why is this approach called optimistic, and why is it still secure?
AG
First of all, we just posted something on the actual L2, and, as you may know, we can send messages from L1 to L2, and such a message can contain the proper commitment. So essentially, even if the validator set lies, L1 will never lie, so you can just challenge such a claim. And participating in verifying these validators is super easy, because it's literally two RPC calls: one call checks the actual commitment on the actual chain, and the other one checks what the claimed commitment is.
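A sketch of such a watcher, not from the talk: two RPC calls, compare, challenge on mismatch. The endpoints are placeholders and readClaimedHash is a hypothetical helper standing in for however the claimed commitment is read on the target chain:

```typescript
import { JsonRpcProvider } from "ethers";

// Placeholder endpoints for the source chain (where the block lives)
// and the target chain (where the claim was posted).
const source = new JsonRpcProvider("https://example-l1-rpc.invalid");

async function watch(
  blockNumber: number,
  readClaimedHash: (n: number) => Promise<string> // hypothetical helper
) {
  // Call 1: the real block hash on the source chain.
  const block = await source.send("eth_getBlockByNumber", [
    "0x" + blockNumber.toString(16),
    false,
  ]);
  // Call 2: the hash the relayer claimed on the target chain.
  const claimed = await readClaimedHash(blockNumber);
  if (claimed.toLowerCase() !== block.hash.toLowerCase()) {
    console.log("mismatch — send the challenge / fraud-proof message here");
  }
}
```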
AG
If you disagree, you just send a message; it costs roughly 60k gas, and that's it — everyone can do that. And again, the fraud-proving window is pretty short, because it's essentially how long it takes to generate the proof of consensus, if that's possible, or how long it takes to deliver the message. And what is pretty cool in this approach: it's not gas-intensive — we verify just one signature. So that's about this approach. Let's make a recap and identify the trade-offs. We have three approaches.
AG
The first one is messaging, the second one is validating the consensus, and the third one is having this optimistic layer. I categorize them along four categories: the first one is latency, the second one is the gas cost, the third one is trust, and the last one is the off-chain computation overhead. Why do I even list it? Because if we do some sort of proving, then obviously it takes time, because we need to generate the proof. So, messaging, in terms of latency:
AG
we are quite sad, because, well, the message needs to get delivered, and by the time the message gets delivered to some specific L2, L1 will have been able to produce new blocks, so we don't have access to the newest values. In terms of gas cost, it's not bad but it's not perfect, because we need to interact with two chains at the same time: first we need to send the message and then consume it. In terms of trust, we are pretty happy, because we trust the rollup itself, and it's a fair assumption.
AG
Off-chain computation overhead: we're very happy, because there is no computation to do off-chain. Verifying the consensus: in terms of latency we are sad, because we need to generate the proof, and it takes a bit of time. In terms of gas cost we are, I would say, sad, because we need to verify an actual ZK proof, which is way more expensive than just consuming a message or verifying a signature. In terms of trust we are happy, because we verify the consensus itself. And computation overhead:
AG
it's significant, right, because we need to generate the proof. Final approach, the optimistic layer: in terms of latency we're happy, because we simply make a claim and we post it on the other chain, that's it. Gas cost: we're very happy, because, well, we just verify a signature. In terms of trust, well, we are not that happy but also not that sad at the same time, because it can still be challenged in an optimistic manner using a fraud proof. Off-chain computation overhead: we are pretty happy, because we just participate in an MPC protocol.
AG
AG
Okay, accessing the headers — I hope it's self-explanatory, because we literally unroll something from the trusted input, and the trusted input is again a block hash for a specific block X. If you follow the initial slides: given a block hash, you can recreate the block header, and knowing the block header we can access the parent hash; knowing the parent hash, you can recreate the previous block header, and so on, essentially down to the genesis block.
AG
So, given this very small input, we can essentially unroll the state, or whatever was present on the chain, from this block down to the genesis block. Okay. As I said, I'm gonna explain everything on the example of Ethereum, and today all the block headers together are roughly seven gigabytes of data.
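A minimal sketch of walking the header chain backwards from a trusted block hash, as just described (ethers v6; placeholder endpoint, and a real verifier would also re-hash the RLP-encoded header and check it equals the expected hash):

```typescript
import { JsonRpcProvider } from "ethers";

// Placeholder endpoint; stops after `steps` hops instead of going to genesis.
const provider = new JsonRpcProvider("https://example-rpc.invalid");

async function unrollHeaders(trustedBlockHash: string, steps: number) {
  let hash = trustedBlockHash;
  for (let i = 0; i < steps; i++) {
    const header = await provider.send("eth_getBlockByHash", [hash, false]);
    console.log(header.number, header.stateRoot); // each hop exposes another state root
    hash = header.parentHash;                     // follow the chain backwards
  }
}
```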
AG
So it's quite a lot, but okay. This is how we actually do that — this is the high-level concept — and what are the approaches? The first one we call on-chain accumulation: essentially, we do this procedure, this computation, directly on chain, so we provide all these properly encoded block headers inside the call data, plus the block hash that we might have received as the trusted input by sending a message, relaying it in an optimistic manner or validating the consensus, and yeah, we recursively go through all these headers and verify them.
AG
But there are many, many downsides, because, first of all, it's very call-data intensive and very computationally intensive. Now, we can store all these headers on the actual chain, but, you know, even storing seven gigabytes of data on an L2 is still a significant cost, because the state on an L2 is reflected as call data on L1 — so it's still expensive either way. But the cool thing is that I have direct access to the state root or anything else that I want to access.
AG
The next approach is on-chain compression: we can still use the same approach as previously, so literally unroll it and process these seven gigabytes of data, but instead of storing them, we just update a Merkle tree. It's a nice approach, but it again comes with a few downsides: it's very computationally intense, because if you have millions of headers, we need to perform millions of hashes on chain.
AG
The last downside is that we need to index all the headers that have been processed. Why do we need to index them? Because if I later want to access a specific block header, I need to provide a Merkle path: as we update the Merkle tree and we just store the root in the contract itself, I need to know the path, right? So I need to index the data, and essentially, the moment I want to access it, I need to provide a Merkle path.
AG
This approach is okay — I wouldn't say way better than the previous one, but it's way cheaper. Last approach: there is a very cool primitive called Merkle Mountain Ranges, love it, and the idea is: let's do the same as previously, but inside the snark. We can provide this tremendous amount of data as a private input to the circuit and essentially do the same computation — the unrolling — inside the circuit itself, and now we have a public input, which is the block hash, so essentially the commitment from which we unroll.
AG
So the trusted input, the public input, can literally be asserted when we do the on-chain verification, and while we unroll, we can accumulate inside a Merkle tree or a Merkle Mountain Range. Why is a Merkle Mountain Range cool? Because, well, let's imagine that you want to process seven gigabytes of data in one go — the proving time is going to be horrible. And why would you even prove this commitment for the entire history? Do you really need that?
AG
Probably not. So let's chunk it into smaller pieces, and Merkle Mountain Ranges are a pretty cool primitive that allows us to do this. To give you a bit of intuition on how it works: essentially, think of it as a tree of trees. Yeah. So once we do all this proving off chain, we simply verify the proof on chain — as you know, verifying the proof is way cheaper than doing this directly on chain — and still, we just provide a Merkle path, and that's it.
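A toy sketch of that "tree of trees" intuition, not from the talk: the accumulator is kept as a list of peaks, one perfect subtree per set bit of the element count, and appending merges equal-height peaks like a carry in binary addition (ethers v6 keccak256; real MMRs add more bookkeeping):

```typescript
import { keccak256, concat, toUtf8Bytes } from "ethers";

type Peak = { height: number; root: string };

function append(peaks: Peak[], leaf: string): Peak[] {
  let node: Peak = { height: 0, root: keccak256(toUtf8Bytes(leaf)) };
  const out = [...peaks];
  // Merge peaks of equal height, like carrying in binary addition.
  while (out.length > 0 && out[out.length - 1].height === node.height) {
    const left = out.pop()!;
    node = { height: node.height + 1, root: keccak256(concat([left.root, node.root])) };
  }
  out.push(node);
  return out;
}

// Appending headers one by one keeps only O(log n) peaks in the accumulator.
let peaks: Peak[] = [];
for (const h of ["header-1", "header-2", "header-3"]) peaks = append(peaks, h);
console.log(peaks.map((p) => p.height)); // [1, 0] after three appends
```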
AG
We essentially have access to any sort of data we want. Let's do a recap again. The approaches: number one, on-chain accumulation; on-chain compression; off-chain compression. Three categories: prover overhead, gas cost, storage cost — actually, gas cost should be computational cost. Okay. Prover overhead, on-chain accumulation: do we prove anything? Well, not really, so we are happy. On-chain compression: well,
AG
we still need to update the Merkle tree — actually, I think there is an issue here, so I'll just skip this part. Off-chain compression: you're very, very sad, because, well, we need to prove an actually significant computation, so the proving time is significant. Okay, now, in terms of gas cost, the third approach is horrible, because it just costs a lot, because we do the entire computation. On-chain compression:
AG
well, we're a bit happy, because we just do a bit of computation, but still, it's a lot of call data and a lot of computation — but at least not so much storage. Storage cost — oh sorry, gas cost — in the second approach: well, we just verify a proof, so it's cool. Okay, storage cost for the first approach: well, seven gigabytes of data, it is horrible, so we are very sad.
AG
On-chain compression — sorry, storage cost for on-chain compression: we just store the root of the Merkle tree, so we are happy. And in the second case we're even more happy, because again, we essentially just keep updating a tree, and we don't even need to post a lot of call data, because the call data we pass is literally just the proof, so we're very, very happy. But again, I don't want to say that one of these approaches is the best one, because, as you see, there are trade-offs, and yeah.
AG
So this part is actually pretty easy. As you may notice, here I was explaining the second step when it comes to tooling for storage proofs, and now there is the last part, which is essentially verifying the proof itself. Approach number one is verifying the proof directly on chain. Approach number two: let's verify the proof inside a snark and then verify the snark. Approach number three: let's verify multiple proofs inside the snark and then verify the snark.
AG
We can aggregate multiple snarks together and so on, but obviously there are some trade-offs, especially when it comes to proving time, yeah. So now, why is the first approach feasible on ZK rollups? For example, on StarkNet call data is very cheap, and what is heavy in this specific process is precisely call data. So this approach is, for example, feasible on StarkNet; but if you want to verify such a proof on Optimism or Polygon, it's very expensive.
AG
You want to reduce it as much as possible, so for that reason you might want to use a snark. And finally, if you have many slots that you want to prove, why can't you just verify them all inside one snark? You're gonna pay in prover time, but you just present one proof at the end. So this approach is the cheapest one, but only if you have multiple actions to take. So there are trade-offs.
AG
So let's identify them. Categories: prover overhead, latency, verification cost. Verifying the proof directly: prover overhead doesn't exist, latency doesn't exist, because we don't need to prove anything; verification cost, well, it is significant, because we need to pass call data and we need to do the actual computation — going through the entire path, and each step in the path is one hashing function. Oh, and also, let me get back to the previous slide — I forgot, this is very important — why wrapping inside the snark is pretty important.
AG
If you're dealing with a storage layout that is using a specific hashing function — let's say, for example, Pedersen — Pedersen is not available on the EVM; you'd just need to implement it, it's not a precompile, and it's gonna be costly. But if you do it inside the snark — and Pedersen is pretty snark-friendly, even though it's not EVM-friendly — then, well, you just verify a snark on L1 and you abstract it away, so it's going to be way, way cheaper. But again, there are trade-offs. Let me get back to this. So, I went through this:
AG
the normal computation. Then, snarkifying the proof: the prover overhead exists, so we are not super happy; latency, we are also not happy, because we actually need to spend time on proving this thing; verification cost, we are happy, because, well, we just verify a proof, so it's fine. And snarkifying multiple proofs: the prover overhead is still there.
AG
Okay, I went through quite a lot of things; let's put this all together. Let's imagine we have three chains and we want to have interoperability between them, so we have chain Z, chain X and chain Y. It all starts with a message, a.k.a. a commitment: we send a message in order to get the commitment. Let's say that we send a message from chain Z to chain X, because on chain X we want to access the state of chain Z.
AG
So what do we do once we have the commitment? We literally recreate all the headers using one of the three approaches, and once we've recreated the headers up to the point for which I want to prove the storage, I just verify a proof — and again, for verifying a proof there are multiple approaches. But now, let's say that on chain Y I want to access the state of chain Z, and there is no direct communication between chain Y and chain Z.
AG
So it must be routed through chain X — by the way, I'm talking about this in a pretty abstract way; by chain X I just mean Ethereum L1. Yeah, so from chain X I'm just gonna send, again, the commitment about chain Z as a message, and then simply recreate all these headers. As you may notice, it's pretty redundant, because we perform the same computation on two different chains, and we don't need to do that, especially if you use the third approach, which is generating the proof off chain.
AG
AG
We don't expect developers to deal with all these complexities, choosing the right approach for the right thing. Essentially, right now our API optimizes cost-wise; soon we'll be able to optimize latency-wise, and yeah, essentially that's it about our API — I highly, highly encourage you to check it out. A few final words about the API: it acts as a coordinator.
AG
It optimizes the costs, because we can batch multiple things, and once the job is done you get a notification — via a webhook, via an event, whatever you want — so essentially you don't need to be an infrastructure maintainer, and you can just focus on building on top of this primitive. And I think that's it. Questions?
AG
So the API is essentially a REST API for now; we also have a JSON-RPC interface. We have off-chain entry points, so you can request the data by making an off-chain call — calling the REST API or calling a JSON-RPC method — or, if a smart contract wants to access this data, then you just emit an event: we're gonna catch the event and later on, after a bit of time, feed the specific data into the smart contract.
AG
So we have a bunch of interfaces, and by the way, speaking of the off-chain entry points: once the entire legwork is done on our side, you can get a notification — it can be a webhook, we can send you a bit of information using a websocket — it can be essentially whatever you want.
AG
AG
Oh yeah, that's actually a great question. Different chains use a different storage architecture, I would say: they might commit to a Merkle Patricia tree, a Merkle tree, maybe even a Verkle tree, and obviously, like I said, having a generalized verifier is not a clean approach, so we essentially abstract it away by using a snark, and inside the snark itself we just do the proper work — like, you know,
AG
we go through the tree, through the elements of the proof, and then we can use a specific hashing function. So, for example, Poseidon is pretty popular now — I think that Scroll uses Poseidon and also zkSync uses Poseidon — and on the EVM, performing Poseidon would be pretty expensive. So for that reason you cannot verify the proof directly, but what you can do is do the entire verification inside the snark and then verify the snark on L1.
AG
AG
AG
AG
Oh yeah, that's actually a good question, because I think I went super technical. So, actually, what we do at Herodotus: every two weeks we have some internal hackathons, and right before the Merge we built a proof of concept that we called MergeSwap, and essentially we allowed anyone to dump their proof-of-work ETH on proof of stake. And the way it works:
AG
we literally built a bridge on top of this technology, and the bridge works in a way that you can lock your proof-of-work ETH inside a smart contract on the ETH proof-of-work chain, you can prove that you've done it on Ethereum proof of stake, and once the proof is verified you can mint the ERC-20 token and do whatever you want with this token. Then, if you want to withdraw back to ETH proof of work, you just burn it.
AG
You prove, on the other side, the fact that you burned it, and yeah, that's it. Also, in terms of other use cases, I think that cross-chain collateralization is pretty cool, because this is the place where you want to avoid latency as much as possible and you want to be as synchronous as possible, and essentially that's what we do here, because our latency comes only from the proving time — but again, using some optimistic approaches and so on, there are a lot of things we can do here. I hope that answers the question.