From YouTube: Devcon VI Bogotá | Workshop 4 - Day 1
Description
Official livestream from Devcon VI Bogotá.
For a decentralized version of the stream, visit: https://live.devcon.org
Devcon is an intensive introduction for new Ethereum explorers, a global family reunion for those already a part of our ecosystem, and a source of energy and creativity for all.
Agenda 👉 https://devcon.org/
Follow us on Twitter 👉 https://twitter.com/EFDevcon
B
For everyone that we interact with, if we wanted to. There's also an interesting use case for enabling cross-chain operations; there are many ways to do that. That's just something that becomes a lot easier once you have this functionality built into the protocol.

B
There are also some things that you get just from being able to batch different transactions together, and from being able to guarantee that the transactions are going to execute with atomicity. For some things, that's useful.

B
Everything happens together, and if it can't all happen together, then you don't want any of it to happen, because it wouldn't make sense for the transactions to execute separately. Whether this happens in a gaming environment, or in financial scenarios: the simplest one would be that you have to give some authorization to a contract to perform an action on your behalf, and then you want to perform that action, as in the sketch below.
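As a sketch of that approve-then-act flow: a contract account that exposes a batching method can do both steps in one atomic transaction. This is illustrative TypeScript; the wallet interface, the helper encoders, and the executeBatch method named in the comment are hypothetical, not any specific wallet's API.

```typescript
// Hypothetical helpers standing in for real ABI encoding (e.g. via ethers.js).
declare function encodeApprove(spender: string, amount: bigint): string;
declare function encodeSwap(token: string, amount: bigint): string;

interface Call { to: string; value: bigint; data: string; }

// Both calls land in one atomic batch: either the approval and the swap both
// execute, or neither does, so gas is never wasted on a dangling approval.
function approveAndSwap(token: string, router: string, amount: bigint): Call[] {
  return [
    { to: token,  value: 0n, data: encodeApprove(router, amount) }, // step 1: authorize
    { to: router, value: 0n, data: encodeSwap(token, amount) },     // step 2: act
  ];
}
// e.g. wallet.executeBatch(approveAndSwap(DAI, ROUTER, 100n)) as one operation
```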
B
Otherwise you've just wasted gas. And there's a sort of general use case that's interesting, where you can actually implement these time-delay flows. So, for example, let's say I want to be selling my ETH when it hits 5000, but I don't know when that's going to happen.

B
I don't want to sit in front of my computer, so it would be possible for me to just pre-create the transaction, but make it conditional on certain things happening: only if the price of ETH is 5000, or only after this time, or whatever the condition is. My wallet would agree to execute and pay the gas for that transaction, and maybe also a transaction fee, and you would put that into a registry of future time-delayed or event-driven transactions, and searchers...

B
They would be able to monitor this registry and see: these are transactions whose conditions have just been met, so they can compete on executing them for you. That opens up a whole range of interesting use cases, because now it doesn't have to be you that pulls the trigger at the exact moment the conditions are met. If the conditions are met, the transaction will be executed for you by searchers, and it could be time-delayed.

B
It could be based on essentially whatever conditions make sense.
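A sketch of the registry-and-searchers pattern just described. Everything here is hypothetical (the registry shape, the price feed, the submit hook); it only illustrates the flow, not an existing system.

```typescript
// Hypothetical entry in a registry of pre-signed, condition-gated operations.
interface PendingOp {
  signedOp: string;      // the pre-signed operation, e.g. "sell my ETH"
  notBefore?: number;    // time-delay condition (unix seconds), if any
  minEthPrice?: bigint;  // price condition, e.g. execute only at ETH >= 5000
  executionFee: bigint;  // the tip that makes execution worth a searcher's while
}

// A searcher polls the registry and submits any operation whose conditions
// now hold, competing with other searchers for the execution fee.
async function searcherTick(registry: PendingOp[],
                            ethPrice: bigint, now: number,
                            submit: (op: string) => Promise<void>): Promise<void> {
  for (const e of registry) {
    const timeOk  = e.notBefore   === undefined || now      >= e.notBefore;
    const priceOk = e.minEthPrice === undefined || ethPrice >= e.minEthPrice;
    if (timeOk && priceOk) await submit(e.signedOp); // wallet pays gas + fee
  }
}
```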
B
So this is the standard that we have been working on, and my colleague here is the guy who implemented the contracts, so please correct me if there are any technical mistakes. Okay, so this is the first step towards protocol-level account abstraction. The nice thing about our approach is that it doesn't require any change to the rules of consensus, so we can kind of experiment for free and we don't have to solve governance in advance.

B
The way we're doing it is: we essentially create a mempool, a new type of mempool, for anyone that wants to participate in this, and you don't need more than a single network. This mempool essentially accepts...

B
It accepts something that is essentially a transaction. We're calling it a user operation. A user operation is equivalent to a transaction, but it's a transaction that works with these account contracts.
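For reference, a minimal sketch of the user-operation shape, with field names following ERC-4337 as it stood at the time; the comments are informal paraphrases, not the normative spec.

```typescript
// Sketch of an ERC-4337 user operation (field names per the ERC).
interface UserOperation {
  sender: string;               // the account contract this operation acts for
  nonce: bigint;                // anti-replay value, validated by the account
  initCode: string;             // deploys the account on first use, else "0x"
  callData: string;             // what the account should execute
  callGasLimit: bigint;         // gas for the execution phase
  verificationGasLimit: bigint; // gas for the validation phase
  preVerificationGas: bigint;   // bundler overhead compensation
  maxFeePerGas: bigint;         // EIP-1559-style fee caps
  maxPriorityFeePerGas: bigint;
  paymasterAndData: string;     // optional sponsor address plus its data, else "0x"
  signature: string;            // opaque to the protocol; the account defines its meaning
}
```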
B
So what that does is it makes contract wallets a first-class citizen, and it totally does away with the need for a new way to control this account. Maybe a little bit about how that works.

B
It's kind of similar to Flashbots MEV and private mempools: the principle behind it is that you can have a mempool where bundlers are provided an incentive to submit your transaction, and essentially we just took that idea.

B
Originally this was Vitalik's idea, and we added the gas abstraction part to it to make it more general-purpose. The bundlers' job is to just take these user operations, bundle them together, and submit them when they're creating bundles that go into actual blocks.

B
Okay, the other advantage of doing things this way is that we're separating validation from execution. If I'm a bundler and I am paying for the gas of your transaction, there are some risks involved for me, because what happens if you don't pay me? What happens if I execute a transaction on chain and it turns out that it ends up... well.

B
At the very least, you expect to be repaid in gas, because otherwise why would you participate in the scheme? So, to make it very safe for bundlers to participate...

B
What we've done is we've provided a contract-level guarantee that you're always going to be paid back regardless of what happens with your transaction when it's executed on chain. So we've separated validation from execution, and what the bundler needs to do when they accept the transaction is just verify (and they're doing this off chain, initially): okay, if I accept your transaction and I'm calling this function, am I going to be paid back for the gas? That's all they're verifying.
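A sketch of the bundler-side rule this separation enables. simulateValidation and the prefund check are modeled on ERC-4337's design, but the signatures here are illustrative, not the actual API.

```typescript
type UserOperation = Record<string, unknown>; // shape sketched earlier

// Illustrative stand-ins for the entry point's off-chain simulation.
declare function simulateValidation(op: UserOperation):
  Promise<{ ok: boolean; prefund: bigint }>;
declare function requiredPrefund(op: UserOperation): bigint;

// The bundler runs only the validation phase off chain. If that passes, the
// contract guarantees gas repayment even when execution later reverts, so
// accepting the operation is safe.
async function shouldAccept(op: UserOperation): Promise<boolean> {
  const sim = await simulateValidation(op); // cheap: no execution performed
  return sim.ok && sim.prefund >= requiredPrefund(op);
}
```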
B
So it's pretty cheap for them to do that, and later, totally separately, when the transaction is submitted, it gets executed. But by then you don't really care as a bundler: even if it reverts, that's the sender's problem, just like with a normal transaction, because you're still going to get paid.

B
Without this, it wouldn't really be possible to create a permissionless pool of bundlers participating in this protocol, because the bundlers would have to trust that they're not going to get cheated.
C
We use the term bundler. A bundler is just like any node or validator that also supports account abstraction, and one of them is enough to run the network. The more there are, the more resilient the network is to censorship and other things. Eventually, hopefully, all validators will also be bundlers.
B
Right, so exactly: you don't need a consensus change. You don't need 51% of validators participating in this scheme, because ultimately you're generating just regular, valid blocks. And we already have a client implementation that is supporting this; the more clients support this, the faster your transactions are going to get executed. Okay, so with ERC-4337...

B
Once we have this scheme, we can also use it to make rollups cheaper, because you can batch transactions and you can aggregate signatures. So that's another advantage, and like I said, it doesn't require any protocol changes, so it works on any EVM chain.
C
Yeah, okay. Technically a bundler can run as a separate entity, but it is much, much better for it to be a node, to be more resilient. The way we're currently adding it, because it's still in testing, is as a separate server, and the wallets don't care about its implementation. But in order to be highly scalable and to be able to batch more, it has to be a node in the network.
B
So, what's next? Well, yes, we can start experimenting and have account abstraction on any EVM-compatible chain without consensus changes. But the goal is to do away with the EOAs.

B
Eventually we don't need EOAs, and we know we're going to have to move away from them at some point. There are various ways of thinking about this. We want to have account abstraction as a basic feature of the protocol, but we want to do it in a way that doesn't enshrine any particular wallet or give an unfair advantage to any particular wallet. And because EOAs are, in effect, very common, and they probably will be until we...

B
You know, until we move away from them; it will take a few years. We need a way to convert EOAs seamlessly to smart contracts. So there will be...

B
There will be some default implementation where, yes, everything in the future is an abstracted account, including EOAs. But if you haven't upgraded your EOA, if you haven't inserted code into your EOA, if you haven't activated it in some way, then behind the scenes it just continues behaving like an EOA, but it has the functionality allowing you to upgrade it. Of course, this would require a consensus change, and there are various ways it can be achieved; we're discussing them.

B
One way would be to create a new transaction type, and then you can set the code for your EOA: this is now the code that's running your EOA. Or, there's also been a suggestion, EIP-3074; maybe in combination with another EIP we could also set a default proxy contract for all addresses. Do you want to add anything? Yeah.
C
By default there are basically two options. One of them is to let a user decide the exact point in time when they want to upgrade their EOA into a contract wallet, either using a transaction type or a new opcode or such. The other way is to decide that at one point in time, all EOAs start using some default implementation that we've deployed and tested thoroughly beforehand, which behaves exactly like an EOA. So users will not notice a difference, except that from then on they have a way to replace the actual implementation. So...
B
You can start experimenting with ERC-4337 right away; we had eight wonderful submissions, and this is something that's already working, so you don't have to wait.

B
You can add useful features like the ones we discussed: batching, or key recovery, or any of the things that we've been talking about.

B
You could build features that were totally not possible with EOAs, ones we haven't thought about. And if you're building anything cool, then you should definitely apply for an EF grant, because we want to see this used and adopted, we want to see the experimentation, and we want to update this presentation with more interesting use cases. So definitely apply for an EF grant.

B
You know, multisigs are an example, but still many dapps assume that they're going to be interacting with an EOA, and that is just an obstacle for us to move forward. It means your dapp already can't interact with things like a Gnosis Safe wallet if you're making assumptions such as how signatures are validated.

B
If the caller has code, then there's a mechanism where the dapp can just invoke a function, and instead of assuming an ECDSA key it can rely on that. The other one is: if you can benefit from batching in your user interface, and many dapps, especially games, can, then you should check if you're connected to a contract wallet that supports it. That will create a better experience for your users and it will save gas costs.
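The "invoke a function instead of assuming a key" mechanism the speaker alludes to matches EIP-1271 contract signature validation (the magic value 0x1626ba7e is defined by that EIP). A sketch of the dapp-side check, written against ethers v6; treat the exact wiring as an assumption, not this project's code.

```typescript
import { Contract, recoverAddress, type Provider } from "ethers";

// EIP-1271: a contract wallet validates signatures itself via isValidSignature,
// returning the magic value 0x1626ba7e on success. Plain EOAs use ECDSA.
const ERC1271_ABI = ["function isValidSignature(bytes32, bytes) view returns (bytes4)"];
const ERC1271_MAGIC = "0x1626ba7e";

async function isValidSignatureFor(provider: Provider, signer: string,
                                   hash: string, sig: string): Promise<boolean> {
  if (await provider.getCode(signer) !== "0x") {
    // The signer has code: ask the contract instead of assuming an ECDSA key.
    const wallet = new Contract(signer, ERC1271_ABI, provider);
    return (await wallet.isValidSignature(hash, sig)) === ERC1271_MAGIC;
  }
  // Plain EOA: recover the ECDSA signer and compare addresses.
  return recoverAddress(hash, sig).toLowerCase() === signer.toLowerCase();
}
```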
B
The other thing is how gas is paid. If you have a dapp, you should think about different types of gas payment models. An easy example is: if you have a token, then it perhaps makes sense that your users should be able to pay for their transactions...

B
...in your token, when using your dapp. And if you don't have a token, or you want to subsidize your users, that's easily accomplished with account abstraction: you set up a contract that is authorized to reimburse your users under whatever criteria you feel comfortable with. Maybe it's the onboarding process, maybe they have to perform some action. But it's...

B
It's something that's possible now. A lot of the usability improvements we're going to get will also require some wallet support, so wallets are an important part of usability for dapps. As a dapp developer, you have some influence by collaborating with wallet devs, saying: okay, this is something that would be beneficial for my use case. You can have a bit of influence just by saying that.
B
So, other than talking with us (we're happy to help anyone that's implementing different use cases), we do have an SDK. We'll share the links on Twitter later, but there's an SDK.

B
Cool, okay, so that's the SDK; we will fix the link later. You can also read up on the ERC. And... oh, maybe that's the right link. No, no, that's for the... yeah.

B
Yeah, it's the ERC itself. You can read the ERC; it's very detailed, but you can get a very precise understanding of how it works. There's also a discussion on the Ethereum Magicians forum, and of course it's nice to be able to talk with people, so we now have a Discord server. You're very welcome to join and ask questions, even after this event, if there's something that you don't get to ask us in person. And yeah, maybe now we'll just take some questions.
C
Okay, the internal development roadmap. What we've developed are the interfaces and the core contract that performs this magic; we call it the entry point. It was audited, but then it was extensively modified to support the L2s, and that version is not yet audited; we still have some work on it. The API probably won't change, so a wallet built against it will keep working. It's not deployed on mainnet, only on testnet currently, but you can start creating and experimenting with wallets.

E
A sample, yeah.
C
A sample that adds account abstraction support to Gnosis Safe: you add the module and you basically make a single-owner Gnosis Safe that is also account-abstraction compatible.

C
So yes, you can create wallets, and there is some work on creating wallets today. In terms of applications, yes, it's a chicken-and-egg problem: an application needs a wallet in order to work. There is a way, basically for a hackathon, where an application can work without a wallet, but it's not something you'd want: your user would have to blindly sign a hash. If you're good with that, it's also possible for an application to work with account abstraction today.
C
The way gas abstraction works is that when you submit a user op, as I said, the account contract itself validates the signature and its nonce in order to accept the request. But there's also a pointer to what we call a paymaster. A paymaster is a contract that, before the transaction is submitted, has a chance to decide whether it agrees to pay or not. If it says okay (that is, it doesn't revert), it will use its own balance, its own stake, to pay for this transaction.

C
Now, what this paymaster does on chain depends on the paymaster. The most obvious example of a paymaster is one whose validation is: I will check that this user has a balance of enough DAI and has given me approval to use this DAI, and I will grab enough DAI to cover this transaction; at the end, I will refund the excess. So eventually the user pays for the transaction with tokens.
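The DAI paymaster just described, modeled in TypeScript for readability; a real paymaster is an on-chain contract, and all names and units here are illustrative.

```typescript
// Illustrative view of the user's token state as the paymaster sees it.
interface DaiView {
  balanceOf(user: string): bigint;
  allowanceToPaymaster(user: string): bigint;
}

// Validation phase: agree to sponsor only if the user holds enough DAI and
// has approved the paymaster to pull it. Rejecting here means "I won't pay".
function validatePaymasterOp(user: string, maxGasCostInDai: bigint,
                             dai: DaiView): boolean {
  return dai.balanceOf(user) >= maxGasCostInDai
      && dai.allowanceToPaymaster(user) >= maxGasCostInDai;
}

// Post-execution: the paymaster pre-charged a maximum, pays the actual gas
// in ETH from its own deposit, and refunds the user the excess DAI.
function refundDue(prechargedDai: bigint, actualCostInDai: bigint): bigint {
  return prechargedDai - actualCostInDai;
}
```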
C
This is the most obvious use case of a paymaster, but there are other use cases. Say you have a voting dapp and you don't want the user to pay anything. Okay, so the check is that the user is eligible to vote: if a user is eligible to vote and didn't vote yet, I agree to pay. That's another example of a paymaster. Other examples are still open; you can write whatever you like.
B
Just to contrast: right now, if you're using a Gnosis Safe, then someone on your team has to pay from their own account, right? Even if the Gnosis Safe has plenty of ETH, someone still needs to pay from their own account just to have that transaction finalized.

B
With account abstraction that wouldn't be necessary: your account will be able to pay for itself, and this doesn't even require a paymaster. But let's say, going back to the Safe example, the Gnosis Safe doesn't hold any ETH, or doesn't hold sufficient ETH, or you don't want to have to care about the balance and continually exchange and top it off. Whatever you have a balance of, assuming it's enough to pay for the gas...

B
You would essentially include in your transaction a reference to something we're calling a token paymaster. A token paymaster can be a completely autonomous contract on chain: it just accepts tokens, pays for the transaction in ETH, and then settles in some way. An older reference implementation we created used Uniswap in a single transaction, which was kind of expensive (there are cheaper ways of doing it), but it's very simple, and in one single atomic transaction it gets paid.

B
It gets a sort of allowance in whatever token you have, it pays for the gas, it charges a transaction fee, and then it gives you back the remaining tokens. So with account abstraction, if you want to use that, for example, you would just declare: the entity that is paying for the gas is a token paymaster, and there is a way for the token paymaster to receive a commitment from your account to pay it back.
D
All of that is going to be able to be consolidated because of the expressivity of smart contract wallets under account abstraction. I'm wondering if you can just check my understanding. I'm assuming many people here have many wallets spread out. If we can incorporate logic into a smart contract, all of a sudden we have the ability to kind of concentrate everything into one single wallet. To my...
C
Intuition, yes: it makes it possible. Right now you have to split it because of different concerns. If you need some corporate-level security, you use a Gnosis Safe; if you want to game, you use MetaMask; and to shield your private key, you use a Trezor. Now, if you want them all at a single address, yes, you will be able to do it with account abstraction. You still might have multiple abstracted accounts for different purposes, but for different reasons.

B
Because you can just limit your risk exposure on each device depending on how much you trust it.
C
Again, I'm not saying that there will be one wallet that gives you all these use cases; you'll find a wallet that gives you all the use cases you need and use that, and you can always switch. I think the first use case is changing the signing key. Just think of it: you start using MetaMask, and after a few years you've collected a lot of NFTs and a lot of money, but you can't change the security model.

C
Your browser holds your private key, so you have no idea if anyone hacked into your computer and grabbed a copy of it, and without changing the address, you can't change the security. With account abstraction, with a single change-owner operation, the Trezor now owns the same account. Even with the basic, simple account, I just change the owner and now I am really secure, because the previous private key is no longer relevant.
B
Yeah, even before you get into the really fancy stuff that you can do with account abstraction, the basics are actually pretty useful just by themselves. Because right now, if anything goes wrong, it's really hard to transfer everything from one EOA to another. You would have to create separate transactions for each asset that you hold, which could be pretty expensive, and if your computer got compromised, you might need to do that in a huge hurry.

B
It's just not the best situation to be in, so even very simple improvements like this will make a big difference.
I
We'll first start with a section about what crypto-economics theoretically is, and it's supposed to be interactive. So if you have any questions anywhere, please just raise your hand, and at the end of each section we'll have a moment for questions or discussion if you'd like. So, crypto-economics has a special place in a protocol. It's not as hard a guarantee as cryptography, in the sense of verifying what people are doing. It's not that we know 100% what's happening; instead, we use economic incentives to induce people to follow what they're supposed to do, what the protocol wants them to do. So if you're a network participant doubting whether you should do the right thing or adhere to another set of rules that you prefer (maybe it makes you more money, maybe it's easier, whatever preference you have), crypto-economics is here to guide you.

I
We have some incentives for you to follow what the protocol wants you to do, and we can punish you if you do something that's not aligned with the protocol. So then, why is crypto-economics different from just regular old economics? Well, it's the environment. We live in a very different environment, with decentralization and trustlessness, and that makes economics a lot more difficult. We cannot rely on outside law enforcement to make people follow the rules.

I
Instead, it's a very adversarial environment where we only assume that people are rational, meaning that people maximize their own payoffs and do what's best for them. This is also what makes it very exciting, in my opinion. So, where crypto-economics really started was with the game theory of the Bitcoin protocol. Game theory is the study of strategic behavior: how you respond to situations in which other people also make decisions about how they behave.
I
Importantly, for the Bitcoin protocol, you have a decision of which chain to mine on. You could either mine on the longest chain or on another fork, and this decision is something that every miner actually has to make. You're incentivized to mine on the longest chain, because you'll get some issuance rewards and fees, and if you mine on another chain and it's not included in the canonical chain, you only waste the energy you spend on that mining; so you're incentivized accordingly.

I
So, as you can see here: assume that you're a Bitcoin miner deciding whether to mine on the longest chain, which is what you're supposed to do, or on another fork. All the other miners we've aggregated into one group, and they're simultaneously deciding on their strategy as well. In this table, the first emoji corresponds to the utility, or payoff, of the person in the first column.

I
So that's you as the miner, and the second emoji corresponds to what all the other players are doing. We can see that, since all the other players in this situation are mining on the same chain, they'll always be happy, as that chain will be the canonical chain. However, for you, deciding which chain to mine on, it's important to think about whether you're going to mine on the longest chain or on the other chain, and the way you can determine your strategy is to see what's best for you given that other people are mining on the longest chain.

I
There we go, thank you. So, since every player in this situation has the same strategy, we actually end up at a point where there's a steady state: everyone mines on the same chain, hopefully. And this steady state is what we call a Nash equilibrium, because no one has an incentive to deviate from this situation. If you're mining on the longest chain and you switch to another chain, your payoff will be less, meaning you don't have an incentive to deviate.
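A stylized way to write the payoff table the speaker is describing, with the emojis replaced by payoffs; the symbols r (block reward) and c (wasted energy cost) are illustrative, not from the slides.

```latex
% Your payoff vs. the other miners', given that the others all mine the
% longest chain (r = reward on the canonical chain, c = wasted energy,
% both > 0; values illustrative).
\[
\begin{array}{l|c}
\text{your strategy} & (\text{your payoff},\ \text{others' payoff}) \\ \hline
\text{mine the longest chain} & (r,\ r) \\
\text{mine another fork}      & (-c,\ r)
\end{array}
\]
% Since r > -c, "mine the longest chain" is a best response to everyone else
% doing the same; no unilateral deviation pays, which is exactly the Nash
% equilibrium condition.
```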
I
So this was the game-theory part of crypto-economics, a small introduction, and now we'll introduce a bit more theory, called mechanism design. Mechanism design is really the study of designing strategic situations with game theory in mind: how can we make games so that the payoff, the outcome, is how we want it to be? An example could be when we're designing an auction.

I
We want people to have an easy way to bid, for example to bid their true valuation of something, and we want that to be incentive-compatible with the protocol. Incentive compatibility means that the designers have a goal in mind, and the strategy that users are going to deploy reaches that goal.

I
So in that case we can use game theory to see what the strategy is and how we can design around it. And it's always very important to take into account what you're actually designing for. There have been some famous mistakes, for example some Olympic games where the pools weren't made correctly and some teams actually tried to lose, which is a very weird setting.
I
So this was the section about game theory and mechanism design, a somewhat theoretical setting; next we'll dive more into applied settings. But if anyone has any suggestions, questions or anything, please just raise your hand. If not, we'll just continue to how the gas market works. Many of you will probably have heard Vitalik's speech this morning; he talked a bit about the gas markets, and I'll...

I
...try to elaborate a bit on that. The gas market is basically this: for any transaction that you send to Ethereum, you pay gas, and how much gas is determined by the number of operations and the type of operations you do in that transaction. Each operation, or opcode, has a fixed amount of gas units associated with it. For example, multiplying two numbers costs five units of gas, and adding two numbers costs three units of gas, and this ratio is defined relative to the other operations.

I
So a five-to-three ratio, and this doesn't change. But this may seem weird because, as you may have noticed, the amount you pay for your transactions isn't actually fixed. That's because the amount of ETH you pay per gas unit (these are two separate markets) is determined by supply and demand. It's important to keep this distinction between the amount of gas units and the amount of ETH that you pay per gas unit.
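In symbols, the two quantities the speaker distinguishes are, roughly:

```latex
\[
\text{fee} \;=\;
\underbrace{\textstyle\sum_{\text{op} \in \text{tx}} \text{gas}(\text{op})}_{\text{gas units: fixed per opcode}}
\;\times\;
\underbrace{\text{gas price}}_{\text{ETH per unit: set by supply and demand}}
\]
% Illustrative numbers: a transaction using 50,000 gas at 20 gwei per unit
% costs 50,000 x 20 = 10^6 gwei = 0.001 ETH.
```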
I
So we have a gas limit to preserve decentralization, which is of course a goal of the protocol. This is done because we could make the trade-off for a higher gas limit, more gas units per block, and this would mean lower fees and more transactions. But it would mean less decentralization, because fewer people would be able to participate in the protocol.

I
Fewer people would be able to validate, meaning we'd miss one of the goals, decentralization. And how blocks are in principle made is that a miner sees all of the transactions that come into the mempool, chooses the transactions that pay the highest fee per gas unit, and basically fills their block with the highest-paying transactions, following the rationality assumption that we talked about before. Important to note here as well: this is the pre-EIP-1559 gas market.

I
I'll tell you a bit about this EIP later, but this is the simplest setting, how it was some time ago in Ethereum. This auction for block space, an auction in which we sell a scarce resource (block space, which is what the Ethereum protocol sells), is actually a first-price auction. Players bid for their transaction to be included, and if they win the auction, they just pay their bid and they're included.
I
However, this is not an ideal setting, because it's very difficult, as a user, to know exactly what to bid. If you bid your true valuation, you get your transaction in, but you also pay everything in gas fees, so you're not really better off. So you're going to shade your bid and bid a bit lower.

I
So then, how could we design an auction mechanism for Ethereum in which block space is sold in an incentive-compatible manner, so that people can just bid their true valuation and not have to worry about shading? Well, we have second-price auctions. In principle these are very simple: we auction off the scarce resource, again block space, and if you win the auction, you pay the second-highest bid. So, for example, if I bid 10 and you bid 14, and two other people bid 5 and 1, you win and pay 10.
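Written compactly (a standard way to state the second-price rule; the symbols are mine, not from the slides):

```latex
\[
u_i \;=\;
\begin{cases}
v_i - \max_{j \neq i} b_j & \text{if } b_i > \max_{j \neq i} b_j \quad (\text{you win}) \\
0 & \text{otherwise,}
\end{cases}
\]
% where v_i is your true valuation and b_j are the bids. Your bid b_i only
% decides WHETHER you win, never what you pay, so bidding b_i = v_i is a
% dominant strategy. In the example: bids (14, 10, 5, 1), so the 14-bidder
% wins and pays 10, the second-highest bid.
```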
I
This means that every person has a strategy, independent of what other players are doing, to simply bid their true valuation. They're utility maximizers; they maximize their payoff. And because every user is going to do this, we end up in the Nash equilibrium we talked about before. This is actually great, because now we have a Nash equilibrium that we, as protocol designers, want. So why wouldn't we just implement the second-price auction? This is an open question. So, if...
I
So you mean that the miner knows what everyone bids, so you can't implement a second-price auction, because then they can just pick it themselves. Yeah, exactly. Because of the adversarial setting, miners will maximize their payoff. Let's say you have a block in which there are four transactions, paying 10 fees, 8 fees, 7 fees and 2 fees, and we'll assume here that the second-price auction works in the form where everyone just pays the lowest bid included in the block. Then every user will have to pay 2 fees...

I
...if the miner uses the real transactions. However, in the adversarial setting, the miner can maximize their payoff by stuffing the block with their own transaction: they switch out the transaction paying 2 fees and insert one of their own paying 6, and now three people pay 6 fees each instead of four people paying 2 each, meaning the miner maximized their payoff. This is something that can easily be done by a miner, and it's very difficult to detect.
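Worked out, the manipulation in that example looks like this (fees in arbitrary units):

```latex
% Honest block: include bids {10, 8, 7, 2}; the uniform price is the lowest
% included bid, 2, so the miner collects
\[ 4 \times 2 = 8. \]
% Stuffed block: replace the 2-bid with the miner's own fake 6-bid; the three
% real users now pay 6 each (the fake bid pays the miner itself), so
\[ 3 \times 6 = 18 \;>\; 8. \]
```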
I
Therefore, we can't implement these kinds of mechanisms, which is unfortunate because, as we have seen, there are quite a few negative consequences of first-price auctions. For example, the priority gas auction, or PGA for short. This means that if there is very valuable block space, you may want to have your transaction included before other players'. For example, if there is an NFT mint and there's only one NFT, you want to be the first one to mint the NFT, but other players may also want to mint it.

I
We see gas bids in gray, and on the x-axis we see time. The orange triangles are bids by one bot that is searching Ethereum, seeing if there are valuable transactions it wants to bid for, and the blue is a similar bot, just a different one. At the green star we see where the bot that won bid, and the red is where the bot that lost bid, and you see that the bids are increasing over time, outbidding each other.
I
It made sure that even transactions that don't win, transactions that aren't the green one, are also included in the block, because they pay a lot of gas fees, so miners are incentivized to include them. That means the block is filled up with transactions that revert: they do nothing, and they basically only waste block space. And since block space is now more scarce, gas fees go up, which is bad for everyone, of course.

I
So an elegant solution that was proposed for this is EIP-1559, as I mentioned before. Basically, what it does is transform the fee market into something that resembles more of a second-price auction. Up until EIP-1559, you just paid the gas fees and those went to the block builder, who could put all of the profits in their pocket. But now there's a base fee, determined by the protocol, plus an amount that you give to the block builder, and the base fee is burned.

I
So there's no incentive for people to try to make off-chain agreements to give miners parts of the base fee. And this makes bidding for block space very different, because now blocks in general aren't full, so miners are just incentivized to include whichever transaction pays them enough of a tip to be included, and so bidding is performed a lot more easily: you basically just pay the base fee.

I
You add a very small tip, which is roughly constant over time, and this means that your bidding strategy is basically incentive-compatible with what the protocol wants you to do. It resembles a kind of second-price auction, and you can just bid your true value. Also, it's a common misconception that EIP-1559 decreases the total fees or gas fees that users pay; that is not the case, because it's only a mechanism for how users bid for their transactions to be included.
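A sketch of how a user's bid resolves under EIP-1559, following the EIP: the base fee portion is burned and only the tip reaches the block producer.

```typescript
const min = (a: bigint, b: bigint): bigint => (a < b ? a : b);

// Per EIP-1559: a transaction carries maxFeePerGas and maxPriorityFeePerGas;
// the base fee is protocol-set and burned, the producer keeps only the tip.
function effectiveGasPrice(baseFee: bigint, maxFee: bigint,
                           maxPriorityFee: bigint): bigint {
  if (maxFee < baseFee) throw new Error("not includable: maxFee below baseFee");
  const tip = min(maxPriorityFee, maxFee - baseFee); // producer's incentive
  return baseFee + tip; // user pays this per gas; the baseFee part is burned
}
// e.g. effectiveGasPrice(15n, 30n, 2n) === 17n: 15 burned, 2 to the producer.
```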
I
If anyone has a question, EIP-1559 is really interesting, so I'd be happy to take anything; otherwise we'll just continue with maximal extractable value, which is also a very interesting subject. It's a subject with a lot of applications, and there are many papers written about it, so it's definitely worth checking out, but I'll give a brief introduction.

I
Maximal extractable value means that you extract as much value as you can from the Ethereum network. To see how this is done, we'll start by looking at how transactions come into the block. As we talked about before, users submit their transactions and they're included in the mempool at first, where all block builders and searchers (people extracting MEV) can look, and they try to maximize their own payoff.

I
So, for example, if you submit a transaction to the mempool trading token A for token B, and it's a very large trade, the price of this pair is going to move. As someone searching through the mempool, you can see that this is going to happen. So in that case, I'll place my transaction just before it in the block.

I
That way I can buy token B before it increases in price, meaning I have an arbitrage, a risk-free profit. This happens a lot, and the reason it's possible is that the ordering in the block is not fixed. It's not the case that if you submit your transaction, it's included in the block on a time basis; whoever builds the block can shift the order.
I
This all sounds very bad: users are having value extracted from them, and it makes execution worse for users. Why wouldn't we just forbid MEV? Well, it's not as simple; MEV is quite a powerful force. So we'll have a look at why some people think MEV is good and why some people think MEV is bad.

I
On the one hand, people argue that MEV is bad because searchers find almost all transactions in the mempool that they can do MEV on, and they make sure that your execution is as bad as possible, which is of course not something that you like. Also, interestingly, MEV incentivizes centralization.

I
This is, again, the comparison with high-frequency trading in traditional finance: you see corporations with multi-billion-dollar budgets and very big infrastructure operations, and similar arguments can be made for MEV.

I
You need to scan the mempool, and there are multiple strategies that require high investment, meaning there are economies of scale, which is centralizing. That, of course, is not something we want, and it's actually been touted as one of the threats to Ethereum. And searchers waste block space. This is what we saw before in the priority gas auctions, where transactions that revert or do nothing are included...

I
...pushing up gas prices. And MEV searchers are generally very smart, so they could put their time and effort into building other great projects that contribute to the ecosystem. So some people argue that this is very bad. On the other hand, there's an argument to be made that MEV is good, or maybe more nuanced: MEV might not be extremely good, but it's worth extracting, or at least the way to deal with MEV is not to just ignore it.

I
There's an argument that some searchers provide very valuable services to the network. For example, say there are two liquidity pools where token A and token B are trading: in one pool you can get five token B for one token A, and in the other pool you can get ten token B for one token A. This is, of course, a mismatch, and searchers can do an arbitrage transaction here, making the prices equal again.
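The arithmetic of that arbitrage, with the illustrative prices from the example and ignoring fees and price impact:

```latex
% Pool 1: 1 A = 5 B (A is cheap).  Pool 2: 1 A = 10 B (A is dear).
\[
5\,B \;\xrightarrow{\ \text{pool 1}\ }\; 1\,A \;\xrightarrow{\ \text{pool 2}\ }\; 10\,B
\]
% Start with 5 B, end with 10 B: a risk-free profit of 5 B. Both trades push
% the pool prices toward each other, which is the service the searcher provides.
```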
I
The result is that users in general have better execution if they trade in one of these pools randomly. There are also liquidations for lending platforms: if there's bad debt, searchers liquidate, and the people that lent out the money are protected. These are generally regarded as quite good kinds of MEV. MEV can also be redistributed.

I
That's an interesting line of research, where the idea is that you extract all of the MEV, but then the MEV is redistributed to users. For example, as a user, if you submit a large transaction that's going to shift prices, you could make an agreement with someone that's going to extract from you.

I
There are quite a few proposals to ensure user safety by other means, but extraction-and-redistribution seems like an approach which is very holistic, meaning that users don't fall through the cracks and there are no incentives to race to be the quickest.

I
Okay, so it's very difficult to say whether MEV is actually very good or particularly bad; it's easy to say that MEV cannot be ignored. Why it's not really settled is that there are lots of nuances as well. For example, some back-runs like the ones we talked about before, which make sure that prices in liquidity pools are equal, can still be seen as bad.
I
For example, if you have a lot of transactions trading ETH for Bitcoin and Bitcoin for ETH, you can first align all of the transactions trading ETH for Bitcoin and then back-run them: basically first having all the users pay up the price and then taking free arbitrage profits, which would be seen as a bad back-run. So not all MEV can be detected, nor can MEV be easily classified into good or bad, meaning it's not that easy to say that we should do particular things with it.

I
This is something that's very important, and there's an increasing line of research. We can't say that all of the responsibility lies with app developers, because some MEV cannot be mitigated by only one dapp; it's a contribution of multiple factors, multiple transactions that may be unrelated. That means there's also a role for the protocol in making sure that users aren't extracted from too much. Okay.

I
So now we'll go a bit into ongoing research that we do at the EF, at the Robust Incentives Group. I'd also like to invite questions for this: if you have any, do let us know; otherwise we'll be happy to talk about what we do. We basically do crypto-economic research on the foundation of the assumptions that we talked about earlier.
I
Yeah, sorry, I couldn't hear it. I think it's also independent of sharding in some sense. There are, for example, cross-domain MEV opportunities that don't simply disappear because of sharding, so no, I don't think it would disappear.
J
It could be mitigated somehow if most of the user transactions move to rollups, and rollups are the ones who use the data-sharding facilities that we're now building at the protocol level. In that case, most of the MEV may accumulate at the rollup level, and you might not see so much of it at the base layer of Ethereum. But yeah, as Julian said, rollups don't just live in their own single world.

J
For instance, you have designs for pooled liquidity, where different rollups could use the same liquidity that resides at the base layer or at some settlement layer. You could see that some of the MEV sort of percolates down to wherever the liquidity is. Many people, I think, are trying to build models, including us. Rollup economics is something that we're trying to think about, to see how the value flows from the users to the protocols, to protocols which are on top of Ethereum, and MEV is a part of it. Yeah.
I
So if there's a transaction moving prices, you can put your transaction in front of it and a transaction after it, so front-running and back-running, profiting on both sides. In this case you have a user's transaction in between two of your transactions, which makes it a sandwich, which is seen in general, I think, as a bad form of MEV. But yeah... you.

I
Yeah, also in traditional finance it's a difficult argument. I think it's not as nuanced there; arbitrage or market making isn't as atomic as it is here, where you have a hundred percent chance of making money, and if it's not profitable you simply let your transaction revert.
J
Yeah. And I think in traditional finance, when you see high-frequency trading, a lot of value goes to, I don't know, putting your computer next to the New York Stock Exchange, or billions of dollars to shave nanoseconds off your strategies. This is economic value that just leaves the market and goes towards the people who build all this infrastructure.

J
Maybe one of the opportunities that we have with a protocol, with respect to MEV, is that if it can be captured, and if it can be captured efficiently, this value could serve to strengthen protocol security rather than hamper it. Of course, it doesn't mean that, yeah, let's get users sandwiched because that gives us more value for protocol safety.
J
I think, of course, we should design dapps such that these bad outcomes don't happen. For sandwiches specifically, there are many different proposals that, I would say, realize different trade-offs that users might have. Just mentioning some off the top of my head: one is encryption, so your transaction could go encrypted, be committed to, and then executed. Then people can't sandwich you, because they don't know what happens.

J
The trade-off here, of course, is that the execution latency is a bit higher, but maybe as a user you're fine with this. Another idea is receive-time ordering consensus. There's this idea that if transaction A is seen by most of the network before transaction B, then transaction A should be included in the block before transaction B. In theory, I think, that's a really nice property, but again, because we're in a decentralized system...

J
There's no one that honestly reports "I've seen A before B, so A must be before B", and again you can have these games of colocation, so again there are trade-offs here as well. Another thing, which I would say is a relatively new idea, is the idea of selling your order flow: getting paid for your transaction, saying, well, if my transaction is so valuable to you, you should pay for it.
J
...make the game more fair. I don't want to comment on Flashbots specifically, because I'm not working for Flashbots, but I would say Flashbots and other people in this ecosystem are trying to understand MEV from first principles, where it comes from. The view, of course, is to use it as a force for good: trying to ensure that it doesn't destabilize the protocol and that it doesn't hurt the users. So part of that comes from mitigating it if it's bad, and part of it comes from containing it, and maybe capturing it...

J
...if it's good. Yeah, I would say these are broad strokes of the ecosystem, but...
I
Yeah, sure. So multidimensional gas is very different from MEV; it's not related. It means that right now we pay gas for any kind of operation that you do, whether you store something on the blockchain or whether you do just simple operations like multiplying.

I
We cram these costs, the cost of computation and the cost of storage, into one unit, which we call gas. But we could split this up into multiple units, so that you pay more directly for what you use. If you're trying to store things, you pay for the storage you use, and you don't congest the blockchain with it. In this case, the gas limit is set so that people's computers aren't overwhelmed.
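One way to write the idea, as a sketch (the resource names are illustrative):

```latex
% Today, one meter:   fee = gasUsed * gasPrice.
% Multidimensional:   each resource r gets its own meter, limit and price,
\[
\text{fee} \;=\; \sum_{r \in \{\text{compute},\ \text{storage},\ \text{data},\ \dots\}}
\text{used}_r \times \text{price}_r ,
\qquad \text{used}_r \le \text{limit}_r \ \text{per block},
\]
% so a block full of storage-heavy transactions no longer crowds out
% transactions that only need compute.
```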
I
But, for example, if you have lots of transactions only using one particular resource, say only transactions using storage, there are a lot of other operations that could still be executed by people. So in this case, multidimensional gas would mean that these computers are used more efficiently, basically, and more transactions could be executed.

J
Yeah, adding to this: if you've heard about rollups, the idea of rollups is that they're chains that exist outside of the Ethereum base layer. These chains, to secure themselves with Ethereum, have to post data to the Ethereum base layer, basically a kind of summary of what happened on the chain. This data is not executed, so it doesn't add execution cost to the base layer, but it needs to be made available and stored.
J
For instance, these are two separate types of resources. You've probably heard of EIP-4844: the idea of providing a much greater data capacity at the Ethereum base layer is separating the market between Ethereum execution and the market for the data that rollups are posting. In that case, you would have something like two base fees, a way to differentiate between the two markets.
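Under a design like EIP-4844, a rollup's cost roughly splits into two independently priced markets; this is a sketch of the direction being described, not the final fee rules:

```latex
\[
\text{fee} \;\approx\; \text{gas}_{\text{exec}} \cdot \text{baseFee}_{\text{exec}}
\;+\; \text{gas}_{\text{blob}} \cdot \text{baseFee}_{\text{blob}},
\]
% with each base fee adjusting to demand in its own market, so heavy rollup
% data posting no longer bids up the price of ordinary execution.
```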
I
Any more questions? Okay, so what we are personally working on is, for example, for me, multidimensional gas, proposer separation, and also block-space derivatives: ensuring that people can hedge against gas fees rising in the future. If you'd like to talk about that, please find us.

J
Yeah, and I would say crypto-economics is relatively new as a field. There are a lot of people who don't have a traditional economics background, or even a computer science background, who get interested in it. So, yeah, the barrier to entry feels a little lower, mostly because there are a lot of resources now that are available. If you go into the Devcon video archive, there are lots of talks on crypto-economics that are interesting. And if you think it's fun, I think both Julian and me would be happy to answer questions offline.
I
Talking about resources: we compiled a list. In the table there are some collectives or groups that publish research on crypto-economics, and at the bottom there are some links to personal blogs from people doing crypto-economic research. The slides should also be made available later. Yeah, so that was it. We ended a bit early, so if anyone has any questions, please feel free; but thank you very much for attending, and you can always ask your questions later as well.
E
How to manage modifying software along with 20 different teams, all changing at the same time, communicating and debugging. It's great that we have a decentralized environment, awesome, but you have to wait for the Australians to wake up for anything.

E
People waking up in Australia, then Americans, a lot of different things, and figuring out how to do all of this in a reliable manner and on a timeline was crazy. The last one was debug knowledge: we were really surprised that the type of debugging you need to do for CLs and ELs is totally different. We had to see how to bring all of that confidence into one place.
E
Hive tests, which you might have seen being referenced a couple of times, with the simulator: they essentially start up the clients and then run the tests against a predefined interface. It's a couple hundred tests.
K
Guys, I'm just going to get started now. What I'm going to go over today is basically scaling Ethereum and, of course, the pitfalls and the solutions. Now, I was told this was supposed to be an "explain it like I'm five" talk, but who gives that to an academic, you know?

K
So what we're going to start off with is something very simple, like a recap of how a blockchain works, and by the end of it hopefully it'll be a bit more technical, a bit more difficult, and we'll challenge everyone in the room. But just to get a round of hands: who here has a technical background, like who's here as a programmer? A few people. Then who's not a programmer, who's more on the business or product side?

K
Okay, that's awesome, this is a good mix, so hopefully I've targeted this correctly then. And remember, guys, this is a workshop; it's not really a lecture. I don't want to talk at you for an hour, because I'm pretty boring that way. What I've done is prepared content that I think will be useful.

K
But if you have any questions whatsoever over the next hour, just throw up your hand. I'm happy to explain an idea a bit further, or hopefully even diverge if that's useful for the people here, because if you have a question, five other people probably have the same question as well. So please don't be shy; there's no dumb question.
K
So, as I mentioned, the goals for today. Maybe I'll leave the laptop here, it makes it easier for me. Cool, so I guess what we're going to cover is: what is it we actually need to scale, and why; what do we mean by scalability; the bottlenecks of scalability today (I'm picking some core bottlenecks that we can go over that are pretty simple to grasp); and then, finally, the future roadmap that Ethereum is sort of considering, the whole modular approach I'm sure you've heard of already.

K
So what do we actually need to scale? This is a basic recap of what a blockchain is. As we all know, a block is just an ordered list of transactions. A block is produced every 12 seconds and is appended to something called the blockchain, you know, given the name, a chain of blocks, and it represents the canonical history of the entire network.

K
Now Alice. Alice is an inspector, so she's a cute little heart, and her job is to make sure that every new block that is produced is valid. So what she'll do is get a copy of the blockchain, replay and execute every transaction, and eventually compute a copy of Ethereum's database. Okay.

K
So the one thing I want to highlight is: we have the blockchain, which is the canonical history of the network and every single transaction that has ever occurred, and we have the database, which is your current account balances, smart contract byte code, you know, the actual programs, as well. And these are two very different things: the blockchain is history; the database is up to date on my current balance. It's very important to have that distinction, and, of course, the blockchain computes the database.
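The "blockchain computes the database" point, as a minimal sketch; the types are illustrative, not a real client's data model:

```typescript
// Replaying the same ordered history deterministically yields the same state,
// which is why anyone with the chain can reproduce everyone else's database.
type State = Map<string, bigint>; // address -> balance (much simplified)
interface Tx { from: string; to: string; value: bigint; }
interface Block { txs: Tx[]; }

function applyTx(s: State, tx: Tx): void {
  const from = s.get(tx.from) ?? 0n;
  if (from < tx.value) return; // an invalid transfer changes nothing here
  s.set(tx.from, from - tx.value);
  s.set(tx.to, (s.get(tx.to) ?? 0n) + tx.value);
}

function computeDatabase(genesis: State, chain: Block[]): State {
  const state = new Map(genesis); // start from the agreed genesis state
  for (const block of chain)      // same blocks, same order...
    for (const tx of block.txs) applyTx(state, tx);
  return state;                   // ...same database for everyone
}
```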
K
So anyone here who gets a copy of the blockchain can compute the same copy of the database as everyone else. It's widely replicated across the world, and I like to call the blockchain a cryptographic data trail, because it allows us to audit the database in real time. There's nothing else like it. Say, your bank: you can't audit your bank, can you? But you can audit the blockchain; it's an audit trail. Now, Alice the inspector could be any of us. Who's actually running a node here?

K
Is anyone running a node? One guy over there, there you go, we've got some inspectors. That's a small sample size, but still. So the peer-to-peer network, I mean, it changes from time to time, but it's normally around 10,000 computers that are online, fully synchronized with the network, and they're auditing the blockchain in real time. Now, the peer-to-peer network is responsible for propagating blocks and transactions. Its only goal is to gossip: if you have a transaction, it goes out to the peer-to-peer network and it spreads out to everyone.

K
So a single transaction will reach, you know, 10,000 computers, 100,000 computers, and it has to reach all of them within a few seconds. The same goes for blocks on the peer-to-peer network. There are also the block proposers: in Ethereum we call them the validators, on the proof-of-stake chain; in Bitcoin they're the miners. The block proposers are also on the peer-to-peer network, listening for users' transactions and, of course, new blocks.
K
So, as an example: a user sends their transaction, it flows across the network, and every block proposer will hear about it in about two to three seconds. Then a block proposer will produce a block based on the transactions they hear, and every single peer in the network will get this block, validate it, and then update their copy of the database. That's basically what's happening under the hood. A block is really like a batch update...

K
...for a database. You're just doing it every 12 seconds, and you're updating everyone's database across the world. So what we end up with is this public and global database. Conceptually, it's like a bulletin board: everyone here can see the exact same image. Under the hood there are thousands of copies of this database everywhere; it's widely replicated, and that's what helps secure the network, because if anyone can get a copy of it, then we can also check that it's correct. So where does scalability come in?
K
You know, why do we care about scalability? There are two parties we care about. One is the block proposers. It's really important that block proposers can get the most recent block right away, so they can take the block, check it's correct, and then extend the blockchain.

K
The block proposers want to converge (let me get back to the blockchain), the block proposers want to converge on a single blockchain. So when a new block is produced, they want to take it and then extend it: block one, block two, block three, block four. So it's really important they can get the blocks very quickly.

K
At the same time, we have the peer-to-peer network: we have Alice the auditor and some people in this room, and your job is to hold the block proposers accountable. You want to validate every transaction, and if they try to break the rules, say they try to include an invalid transaction, then the peer-to-peer network will reject it. It gets rejected, they don't make any money, they wasted their time, and of course other block proposers will then just not extend it.
K
So there are two parties we care about: block proposers and the verifiers. And what's important, what we really mean by scalability, is that what we care about are the resources: what compute is required, what bandwidth is required, and what storage is required. We have to consider these resources with the goal of decentralization in mind: what are the minimum requirements for someone to run a node, for someone to validate blocks, or even for someone to become a block proposer?

K
Do we have any stakers here, by the way? Any Eth2 stakers? There you go, over there, got a couple of stakers. We've got to make sure that the resource requirements are reasonable enough that you can run your staking setup at home. Hopefully you're running it at home anyway, I don't know how you're doing it, but we care about compute, bandwidth and storage.

K
Now, there's one takeaway here, and I hope this is the biggest takeaway you take: scalability cares about resources and the delicate balance between verifiers and proposers. Proposers commit blocks, and verifiers can check them in real time. It has absolutely nothing to do with transactions per second; that's more like a byproduct. You could have one transaction that explodes the database, and then no one here could be a verifier.

K
So, just to recap the basics there: we have block proposers, a transaction-ordering service; they propose blocks to the network. We have the peer-to-peer network, which has block proposers and verifiers. Anyone here can be a verifier, and your job is to hold the block proposers accountable and make sure that the consensus rules and the network rules are enforced in real time. And finally, scalability has nothing to do with TPS; it's about resources.
K
K
Exactly
yeah,
it's
we'll
get
to
this
sort
of
the
balance
there,
but
that's
a
good
point
Nick.
What
what
is
the
meaning
of
resources
and
for
the
goal
of
decentralization?
We
need
to
pick
the
right.
You
know
configuration
that
maximizes
the
population,
so
we
say
you
know
we
want
90
of
people
in
this
room
to
run
a
verifier.
What
are
the
requirements?
That's
what
we
have
to
have
on
the
network
and
the
other
questions
by
the
way.
Please
throw
your
hands
up.
You
know,
there's
no
dumb
question
cool!
Okay,
I'll
just
continue!
K
So let's have an overview of some of the scalability challenges that we face today, and we're going to do this again through storage, computation and bandwidth; you'll see computation and bandwidth sort of join together when we talk about the fork rate. So let's jump into storage. The storage requirement for a node is, well, how big is the database? There's a database of everyone's account balance.
K
How big is that? There's also something called the mempool, and it's more like a cache. You're on the peer-to-peer network, you're running a node, and you hear a new pending transaction: what you'll do is keep a copy of it and pass it on to your peers. If you hear the same transaction again, you'll just reject it. So it's really there to prevent a denial-of-service attack on the peer-to-peer network.
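A minimal sketch of that cache behavior — a hypothetical structure, not any client's real mempool; real nodes also enforce fees, nonces and size limits:

```python
# Minimal mempool sketch: remember what you've seen, forward it once,
# and reject duplicates so the same transaction can't be used to spam you.

class Mempool:
    def __init__(self):
        self.seen: set[str] = set()          # tx hashes we've already handled
        self.pending: dict[str, bytes] = {}  # tx hash -> raw transaction

    def on_transaction(self, tx_hash: str, raw_tx: bytes, peers) -> None:
        if tx_hash in self.seen:
            return  # duplicate: drop it, don't re-gossip
        self.seen.add(tx_hash)
        self.pending[tx_hash] = raw_tx
        for peer in peers:
            peer.send(tx_hash, raw_tx)  # gossip to everyone else exactly once
```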
K
So people can't spam it with the same transaction over and over again, but that's something you'll have to consider when you run a node. And of course there's the blockchain itself: how big is the blockchain, and even how long does it take to synchronize it — how long does it take us to compute a copy of the database? So before I begin: has anyone ever heard of an archival node, or a full node, or a pruned node? Okay, awesome. And raise your hand if you think they're really confusing terms.
K
Oh, look at this — there's one guy. Thank you, the honest guy over there. They're pretty confusing; I know the Bitcoin maxis have no idea, they're always misinterpreting these phrases. So let's talk about them. What's a full node? A full node is a piece of software that takes the entire blockchain, validates it from scratch from the beginning, and then computes a copy of the database. Importantly, they keep a whole copy of the blockchain locally: they've got all the blocks, they keep all the blocks.
K
So, has anyone ever heard of a block reorg before? Okay, we're going to jump into block reorgs very soon; that's part of the fork rate. But the idea is that if your transaction got confirmed in block seven and they're now at block ten, you have some guarantee that it's probably going to get finalized, but an alternative fork, maybe from block five, could emerge.
K
If that's the case, well, you need to have this database so you can quickly jump back and then deal with the new fork, and so we have to keep around about 100 copies of the same database just to deal with reorgs. But all the older databases can be deleted: you don't care about your very historical databases, just the most recent copies. An archival node is very different.
K
An archival node is something that Etherscan would run, or a block explorer. An archival node is for when you want to quickly look up historical data. Maybe you get a request: what was my balance at block two? That could have been a year ago. There's no reason to serve that from the peer-to-peer network, because the probability of a one-year reorg is very small — in fact impossible in proof-of-stake Ethereum — but an archival node will run this. And that's why, when you hear quotes like "an archival node is two terabytes of storage, so Ethereum isn't scalable", that's because they ran an archival node; you don't need all those historical databases, you just need the 100 or so most recent ones. Then a pruned node is where the node discards the historical blocks: a pruned node will just keep around the most recent blocks and the most recent copies of the database. They prune as much as they can and they have minimal resources, but they still validate everything.
K
You still go from the start to the end, you validate it all, but you just keep around the most recent data. So my question to you guys — it's my next slide — is: if you're going to run a node, as a block proposer or a verifier, which one would you run? Let's do a raise of hands. Would you run a full node, for either verifying or block proposing?
K
No? There's no need; it's a bit of a waste of resources. And what about a pruned node? Yep, exactly, that's probably what most people run today. Most people don't want to keep around the entire blockchain, so they just discard most of it, and as we're going to see, that's quite a lot of gigabytes. So a copy of the... oh yeah?
K
Oh, events! Yeah, an event, that's a great question. Basically, in Solidity, in a smart contract, you can define an event. Let's say it's the vote function: if I cast my vote, it will emit an event that notifies the world that Patrick has cast a vote. The way you get an event is that when you execute a transaction, it produces a transaction receipt, and in the receipt is the event.
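For instance, here is a small sketch with web3.py of where an emitted event actually lives — the RPC URL and transaction hash are placeholders, and decoding a specific event like a Vote would additionally need the contract's ABI:

```python
# Sketch: an emitted event lives in the logs of the transaction receipt.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-node.invalid"))  # placeholder RPC endpoint
tx_hash = "0x..."  # placeholder: hash of the transaction that called vote()

receipt = w3.eth.get_transaction_receipt(tx_hash)
# Each log entry is one emitted event: the emitting contract's address,
# the indexed topics (topic 0 is the hash of the event signature),
# and the ABI-encoded data payload.
for log in receipt["logs"]:
    print(log["address"], log["topics"], log["data"])
```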
K
I will touch on this — that's a great question. Just to summarize, Ethereum now has two layers: you have the beacon layer, which deals with proof of stake, and you have the execution layer, which deals with, obviously, the execution of smart contracts. For now I'm assuming they're the same thing. If you're joining the beacon chain, again you just care about the last finalized block; after it's finalized you're done — that's about 15 minutes — and you could technically just delete the rest of it.
K
Yeah, so the real difference is that in a full node you keep around the entire blockchain, and the reason you do that is to serve the peers on the network. A pruned node deletes most of the blockchain; they just keep around, as I said, the most recent blocks, and that's to deal with forks and reorgs — call it reorg safety. So you're assuming that if I keep around... oh yeah?
K
I think there's a default setting, but really, the more you store, the better you can handle reorgs. So one issue we had: I previously worked on a transaction relayer, where you'd send transactions and try to guarantee their delivery, so we wrote our own blockchain machinery to deal with reorgs.
K
We would keep around three to four hundred blocks and, obviously, copies of the database. But if you ran this on Ropsten — Ropsten was a very adversarial network — you'd wake up one day and there's a 20,000-block reorg, and the relayer would just tip over, because it just can't deal with 20,000-block reorgs. So it really comes down to what network you're running. Another example is Ethereum Classic.
K
Oh, for reorgs? Let me wait until the reorg section — we have a picture of it — but yeah, reorgs are less likely to happen in proof of stake; part of that was part of the puzzle for proof of work. I will touch on that, because there is still a good chance of them on proof of stake. Any other questions?
K
Yep, exactly. So in proof of stake, optimistically it should be about 15 minutes; the worst-case scenario could be like three weeks. Maybe we'll chat about that afterwards — it's a great topic. Okay, cool, any other questions, or are we all satisfied? Cool. I guess your goal is to make sure I don't finish my slides; that's very likely to happen. Okay, so that's a good segue: what does a node store? So, on Bitcoin...
K
This is from maybe four months ago, when I made this slide: the Bitcoin database was roughly about five gigabytes, but the archival-node version was about 35 gigabytes. On Ethereum, a normal pruned database was about 700 gigabytes, so the database is quite big on Ethereum. And according to the stat I have here, an archival node was 10 terabytes — that's normally the number you hear thrown around Twitter by all the Bitcoin maxis, but, as you now know, that's for block explorers.
K
And Erigon got this down to about 1.9 terabytes — I actually forget how, but we'll figure it out. Then what about the blockchain itself? I can just look here: for Bitcoin, you can see the blockchain is about 422 gigabytes, which is huge by the way, but a pruned node keeps around just seven gigabytes of the most recent blocks; they discard most of it. On Ethereum — next slide — the blockchain is about 200 gigabytes.
K
So it's actually smaller than Bitcoin, which is surprising, given there are blocks every 12 seconds. That's generally because blocks are smaller in Ethereum than they are in Bitcoin: we worry more about gossip than we do about byte size. Bitcoin is all about one-megabyte, two-megabyte blocks, and it's really about the size of the transactions.
K
But here it's about 200 gigabytes for the blockchain, and overall, including what's in memory and what's on disk, you're probably going to store around 560 gigabytes, give or take. Someone running a client got me this figure, by the way — I'm really thankful for that picture. And of course, flood protection: how do we deal with denial-of-service attacks on the network? On Ethereum it's, as a rough estimate, about 100 megabytes.
K
That's what you might store in the worst case, so the memory pool has nothing to do with scalability, really; so far we're not hitting any storage problems from dealing with pending transactions on the network. So that's storage: we covered the blockchain, the database and the mempool, and the different types of software you could run. Next, computation. There's this really great blog post, which I'm going to run through, by Jameson Lopp; he runs this every year: how long does it take to synchronize a node?
K
He has this pretty beefy machine: one terabyte of storage, 32 gigabytes of RAM. Let's see how well it works. So on Bitcoin, back in November 2021, it took about 400 minutes to synchronize the entire blockchain. That's pretty damn fast — what is that, five, six hours? — and you're fully caught up on every transaction that's ever occurred in Bitcoin. Yep?
K
Awesome. Well, that's the lot for computation for now — and we're not worried about latency here because... I'm actually not too sure; I don't think he defines it in the blog. Anyway, it took about 400 or 500 minutes for Bitcoin. What about Ethereum? His issue was that it ate a huge amount of storage while it was synchronizing, and it stopped after five days, but that's because he had one terabyte.
K
With one terabyte of storage — with more storage he could have finished synchronizing, but he estimated it would take about 10 days. The important bit here is: what's the bottleneck? Why does it take 10 days to synchronize Ethereum? You would think it's execution, but actually it's input and output — it's just reading and writing to the database. According to the stats he had, you would read 15 terabytes from disk and write 12 terabytes back to disk for just the first five days of processing the blocks.
K
And there's another five days to go after that, by the way, so that's probably 20 or 30 terabytes' worth of reading and writing. And why is this? Why are we doing 15 terabytes of reading from disk for a blockchain of about 500 gigabytes including the database? So the reason is that... oh, actually, just before I get into the reason: did I delete the reason? Oh, I haven't, okay, it's over there. Obviously I've messed up my slides.
K
The reason is that in Ethereum, in the block header, there's something called the state root, and this is good for snapshots: you want to download a copy of the database, and you want to make sure this database was correct for this block, and so in the block header you have a hash of the entire database.
K
It's also very expensive, so Erigon stopped doing that. Their reading and writing to disk is still about a terabyte, and instead of 10 days — actually, no, I heard that they synchronized in two days. So just removing that one part of the validation saves you eight days' worth of synchronization time. The question is: do you need to do this?
K
It's not an essential check, but it's useful if you want to download snapshots of the actual database itself from the network. The surprising takeaway here, though, is that execution isn't really the bottleneck. It is expensive — execution is expensive — it's just that reading and writing from the database is the current bottleneck for Ethereum.
K
They should, I mean — there is work on that, called an access list. In your transaction, if you define an access list (I think I touch upon it later), you can define which storage slots in the database you're accessing, and so if you have two transactions that don't access the same part of the database, you could run them in parallel. Right now access lists aren't heavily used, but they should be, because they help enable parallel execution. Awesome.
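A minimal sketch of that idea — a hypothetical scheduler, not client code; real access lists are per-address lists of storage keys, as in EIP-2930:

```python
# Minimal sketch: schedule transactions in parallel only when their
# declared access lists don't touch the same storage slots.

def conflicts(a: set[str], b: set[str]) -> bool:
    """Two transactions conflict if they touch any common slot."""
    return bool(a & b)

def parallel_batches(txs: list[tuple[str, set[str]]]) -> list[list[str]]:
    """Greedily group transactions into batches of pairwise-disjoint access lists."""
    batches: list[list[tuple[str, set[str]]]] = []
    for name, slots in txs:
        for batch in batches:
            if not any(conflicts(slots, other) for _, other in batch):
                batch.append((name, slots))
                break
        else:
            batches.append([(name, slots)])
    return [[name for name, _ in batch] for batch in batches]

txs = [
    ("tx1", {"0xA.slot0"}),
    ("tx2", {"0xB.slot0"}),            # disjoint from tx1: can run in parallel
    ("tx3", {"0xA.slot0", "0xC.s1"}),  # conflicts with tx1: must wait
]
print(parallel_batches(txs))  # [['tx1', 'tx2'], ['tx3']]
```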
K
So anyway, just a final joke — I mean, I don't mind Solana, I'm not a hater on Solana — but I made a joke that some projects gave up on the idea that people can synchronize the blockchain. Actually, I think this one is the Internet Computer: you have to get special hardware from their own suppliers to run a node on their network. Very permissionless, isn't it? But anyway, synchronizing is a fun topic to talk about. So, what about the fork rate?
K
Let's assume all block proposers are honestly following the protocol: they get a block, they extend it and they propose a new block — no malicious behavior whatsoever by the block proposers. The fork rate is the following. Let's say you have block one and block two; then a magical wild fork appears: you can have block 3A and block 3B, proposed by two different block proposers. The question is: which one do you extend?
K
Eventually you'll get block four, block five, and everyone will converge on the longest chain, or the heaviest chain. Block 3B becomes a stale block: its content is ignored, and it eventually ends up as an uncle block. So if you ever hear the term "uncle block", it's a fork that didn't make it into the canonical chain, but it did exist. In a way this is wasted resources: we had these two competing blocks, and you've wasted some resources because one of them never actually gets used.
K
You really want to maximize a single canonical chain, with no forks. Okay, any questions on this part before I continue? Because this is quite important. Fairly straightforward? Awesome. So why do we consider the fork rate? One reason is reliability: how reliable is the network? If you get your transaction confirmed in a block, but there's a 16% chance that it gets dropped and reconfirmed later, well, that sucks from a user-experience perspective.
K
If I only have to wait two or three confirmations, that's way better than waiting 20 confirmations, and the fork rate is really about how reliable a confirmation — a block — is. At the same time, there's a bandwidth and compute overhead: if I send everyone here a block, I've used your bandwidth; you then validate the block, so I've used your compute; but if in the end the block never gets into the blockchain, I've just wasted your resources on a block that wasn't actually useful.
K
You really want to minimize that fork rate, and there are two aspects to consider. One: what's the length of time between blocks — is it 12 seconds, is it 10 minutes — and how fast does a block reach the other block proposers? In other words, the block size and the frequency. This is sort of the big-blocks-versus-small-blocks debate; we're back in the 2015 world with the block size wars. This is pre... actually, I guess Ethereum was born around this time.
K
If you have a one-megabyte block every 10 minutes, and you imagine this room being the peer-to-peer network, with the block reaching all the peers in the network, then that one-megabyte block should fly across: everyone gets it within a second, not much issue. If we have a one-gigabyte block every 30 seconds, well, one gigabyte takes a long time to get across the network.
K
Then you may have a competing block at the same time, then another competing block, and you have lots of forks, and you've wasted time because there are three competing blocks — and of course all of that is bandwidth and compute. So these are the two extremes. Roughly: if you have a block that's greater than two megabytes on less than a one-minute interval, you increase the fork rate; smaller blocks and a longer interval give a smaller fork rate.
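As a rough back-of-the-envelope — the link speed here is an illustrative assumption, not a measurement — you can see why the big-block extreme breaks down:

```python
# Back-of-the-envelope: how long a block takes to cross one peer link,
# assuming a hypothetical 50 Mbit/s connection per peer.
link_mbit_per_s = 50

def hop_seconds(block_megabytes: float) -> float:
    return block_megabytes * 8 / link_mbit_per_s

for size_mb, interval_s in [(1, 600), (2, 12), (1000, 30)]:
    t = hop_seconds(size_mb)
    # If one hop already takes a large fraction of the block interval,
    # competing blocks (forks) become likely.
    print(f"{size_mb} MB block, {interval_s}s interval: "
          f"{t:.2f}s per hop ({100 * t / interval_s:.1f}% of the interval)")
# 1 MB / 600 s   -> 0.16 s per hop (~0.0%): forks are rare
# 2 MB / 12 s    -> 0.32 s per hop (~2.7%): still fine
# 1000 MB / 30 s -> 160 s per hop (~533%): the network forks constantly
```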
K
So is there a good way to get a feel for the numbers here? There's a study from back in 2016 for Bitcoin, and I would love it to be repeated for Ethereum, because it would be very useful for proof of stake. Oh, and one point: on Bitcoin we only consider megabytes, the size of the block; on Ethereum we consider gas, because gas takes into account bandwidth, storage and compute.
K
30 million gas is the maximum size of a block, and it tries to take into account all the resources that are required. And actually there's a Zcash article there, because right now Zcash is being spammed: it's costing about ten dollars a day, and they're growing the database by like a gigabyte per day or something. It's very cheap to attack the network.
K
So on Ethereum, blocks are around 120 kilobytes, they're at most 30 million gas, and they occur roughly every 12 seconds — even previously, on the proof-of-work chain. On Bitcoin it's one to two megabytes every 10 minutes. And on proof-of-work Ethereum, the fork rate was around five to six percent at any time. So that means, you know...
K
Then the validators in the committee vote on that block. If it takes longer than four seconds for a block proposer to get their block across the network, you'll end up with a fork, because the committee will vote on the parent block and not the current block. And you can see that right now there are very few forks, so clearly there's a good block size for the proof-of-stake chain.
K
I'm just about to get to that — it is a big part of it, yep, the Chinese firewall specifically — so I'll leave that for a second. Any other questions before we continue? Awesome, okay, cool. So let's just throw some numbers out there. Back in 2016: if we want to keep 90% of peers on the network, what do you think the ideal block size would have been, without increasing the fork rate?
K
20 megabytes? Oh wow, so we've found the Bitcoin maxis — Bitcoin Unlimited, if you know your history. No, that's great though, that's a great guess. So the ideal block size — if I haven't deleted the slide... of course I have — was actually around four megabytes; from what I remember it was about 4.2 megabytes or something, to keep 90% of peers on the network.
K
And just for that table: what it's really saying is how fast the top 10% of nodes got the most recent block, and then how long it took for 90% of nodes to get the same block. In the second row, for a one-megabyte block, 10% of nodes get it within 1.5 seconds, and 90% of nodes get it within 2.4 minutes. So that's a 2.4-minute difference between the fastest, well-connected nodes and the slowest on the network. So what impact does this have?
K
That's my little China logo. As I mentioned, back in 2016-2017 there were some forks on the network because, per the chart, about 70% of miners were in China and 30% were in the rest of the world, which implies the Chinese miners got the blocks faster.
K
Well, if they get the blocks faster, then they can start working on them before the rest of the world, so they may get a 30-second or a one-minute head start on solving the proof of work, and so there was a bias towards Chinese miners. What actually happened was that you had this private relay network between all the miners, so they just bypassed the peer-to-peer network altogether because of this issue. But basically, block proposers will fall behind if they're among the slower 90% of nodes in the network, and the same holds for proof of stake.
K
In proof of stake, if it takes longer than four seconds for you to get the new block and you vote for the wrong block — or even 12 seconds — you may incur some penalties. You won't get slashed; it's not like you lose all your money, but you might miss out on the little rewards, and your yield will go down a bit. So your yield is directly impacted by how well you're connected to other peers — and obviously the same goes for you as a verifier.
K
So typically, when we think about the size of blocks, we normally assume the block proposers are very powerful: they should be able to quickly get blocks, execute them and send them out within two to three seconds. We assume verifiers are weak, so maybe it takes them six or seven seconds to get the block, but that's okay, because 12 seconds is the deadline. So you normally assume different specs for different parties. And I do have it there — there you go.
K
Four megabytes was what the report recommended on Bitcoin while still having 90% of nodes participate on the network. It's probably much higher now, but that was six years ago. Why is this all important? Why do we care about this aspect of scalability? It really comes down to what it means to be decentralized, and everyone has a different take on what that means. My take is really: what percentage of the world's population can validate and protect the database in real time?
K
So regardless of whether you're in Bolivia, Australia, China or the US, you should have the right to run the software, get a copy of the database, validate blocks in real time, or participate as a proof-of-stake validator. It's the same for both, because that's what it means to be decentralized.
K
So let's summarize. Storage: the storage bottleneck is really how big the database is, how big the blockchain is, and, realistically, what hardware can deal with a database of that size. As we saw with Jameson Lopp's computer, he couldn't synchronize Ethereum because he ran out of space, so clearly his computer could not participate on the peer-to-peer network. So you have to consider storage and how big this database gets.
K
Two is compute: how long does it take me to get a copy of the database and be convinced it is indeed the one true database that we all have? Right now — on proof of work at least — you're supposed to objectively follow it from the beginning to the very end. And then: how long does it take for blocks to get across the network, and can we fall behind because we just can't get the blocks in time?
K
What are the latency issues around that? And the most important bit — this is why I don't like transaction throughput as a metric — is that if you just blow up the TPS, the tip of the chain can become unstable because there are too many forks, and it also becomes difficult for us to keep up. I remember hearing a stat for Polygon: on proof-of-stake Polygon, an archival node was growing by two megabytes every second, and that's pretty damn big.
K
Two megabytes every second — and at around 15 terabytes, AWS can no longer handle that in a straightforward manner. So anyway, that's again the whole point of scalability: the fine balance between block proposers and verifiers, and how big that database gets.
K
So, the uncle blocks: the entire block is the block header plus the block content. The block content — the transactions — gets thrown away; all we keep around is the block header, and it'll be included in a future block. So if block one was the uncle block, then maybe its header gets included in block five, so we're still aware that it existed.
K
Oh, definitely — I hope I allude to that point; I think I've got it on my slide. But what he's saying is that one of the ways we're going to solve these issues is zero-knowledge proofs. A zero-knowledge proof is really useful: it allows me to do a lot of the hard work — let's say I want to prove a transaction is valid — I do all the hard work, then I send you the result of the transaction and a small proof that will convince you that it was correct.
K
I think it's one or two seconds per transaction on a CPU, and if you have to do that when you're proposing a block — I create a block, I make a proof for every transaction — that's pretty slow.
K
So that's still very much a work in progress; that'll be more for the rollups. For the rollups you assume there's a very powerful executor: you can run GPUs, parallelize them and do the proofs in real time. For proof-of-stake Ethereum it's probably a little while off, because proving is still very expensive — though it's much cheaper than it was four years ago, anyway.
K
Yeah, I think it also depends on the byte size, so I think that's more of a problem for STARKs: STARKs grow based on how much you're proving, whereas a SNARK is constant size. It's funny, I forget the exact byte sizes, but STARKs have a size issue — I think a proof costs like five million gas on Ethereum to verify, just for the proof, because they're very big. But anyway, any other questions, guys, before I continue? Awesome, cool, okay.
K
So how are we going to solve these scalability issues? Just a reminder: when we consider scalability, for the block proposer we want to reduce the fork rate and make sure no one is wasting their resources when they propose a block, and on the verifier side we want to maximize the population of who can validate blocks in real time, so we reduce the resource requirements to run a node.
K
That's basically what we're trying to achieve. Now, over the years, from 2015 up to about 2020 and today, there have been lots of crazy wizardry tricks, built from basic engineering principles, to make it easier to run a node. One is that you can compress data before you send it across the wire. On Bitcoin we call that a compact block: I give you a block, but I don't actually give you the transactions — I give you the block header plus short transaction identifiers, and you rebuild the rest from what you already have.
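A minimal sketch of that compact-block idea — loosely modeled on Bitcoin's BIP 152; the structures here are simplified assumptions, and Python's built-in hash() stands in for a real transaction hash:

```python
# Compact-block sketch: send only the header plus transaction hashes;
# the receiver rebuilds the full block from transactions it already
# has in its mempool, and requests only the ones it's missing.

def make_compact(header: dict, txs: list[bytes]) -> dict:
    # hash() is a stand-in; real protocols use short IDs derived from a tx hash
    return {"header": header, "tx_hashes": [hash(tx) for tx in txs]}

def rebuild(compact: dict, mempool: dict) -> tuple[list, list]:
    have, missing = [], []
    for h in compact["tx_hashes"]:
        (have if h in mempool else missing).append(h)
    return have, missing  # fetch `missing` from the sender, reuse the rest
```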
K
You have private relay networks: in Bitcoin, all the miners had a private network that only they could connect to, to quickly get blocks, so you bypass the peer-to-peer network completely. Whether that's ideal for a censorship-resistant currency is a different question. You could do parallel execution — we had a question down there before — maybe via access lists: if I know two transactions don't conflict, execute them in parallel, and we speed up our ability to validate transactions in real time. Oh, and there's also set reconciliation.
K
That's in the same spirit as compact blocks. But the issue with all of these engineering approaches is that we're making it easier to do the job, yet nodes still have to do it. So now a lot of the scalability research is asking: do they need to do it at all? Could we take that responsibility away from the peer-to-peer network and give it to an external provider who can do it on its behalf, so the peer-to-peer network does the absolute minimum?
K
So what's the goal? It should look like this: to protect decentralization, we work out what the absolute minimum is that the peer-to-peer network has to do, and what we can offload to service providers and businesses. I always make the joke that Infura could do this: what could you pass off to Infura to protect the peer-to-peer network? So who's heard of this idea — the monolithic blockchain and, I guess, the modular blockchain, though this slide says microservices? Who's heard of that idea before? The monolithic blockchain? Okay — the last people I expected.
K
Actually, that's great. I stole this from a normal web2 company, because it isn't a new idea: you build this big monolithic codebase, difficult to maintain, difficult to upgrade, and what you really want to do is take out the little components and maintain them individually — and hopefully delete some of them as well. That's what Ethereum has been struggling with for the past six years: we had this monolithic blockchain that's trying to do everything at once.
K
Now what we're trying to do is define each of the microservices, or the modular components, and then solve each problem individually. So let's go through how we're doing this. First we have compute; compute was one of the resources we cared about: how long does it take to execute a block?
K
What if you could have a dedicated execution layer? There's an execution layer that's doing most of the work, and Ethereum doesn't really care about that; all Ethereum cares about is the result of that execution. If Ethereum doesn't have to do the execution, well, it's way easier to run a node: you don't have to execute anything, you pass it off to someone else.
K
The other one was bandwidth: right now we have to propagate all the transactions and blocks across the network. What if you could have a dedicated data availability layer, where you don't even care about the transaction content? It's just a blob of data: as long as the blob of data is there and you can get it, then you can eventually throw it away.
K
Could we have a dedicated layer just for data? Ethereum doesn't necessarily care about that either. And finally, storage: what if the node didn't have to have a database? Could you run a node, just delete the entire database and not care about it, because the database is stored somewhere else?
K
So if you want to transact, you talk to the provider, you get the database content, and then you send it off to the peer-to-peer network. Can we build a settlement layer, in a sense, where all it does is minimal computation — maybe it stores account balances, but otherwise it minimizes what it has to store, because you push that problem off somewhere else?
K
That's the idea behind the modular blockchain. It looks like a simple renaming — I mean, I could be a marketing person saying "we're going to solve compute with an execution layer", just renaming it — but actually, if you make each of these a dedicated layer, an abstraction, then you can think about how to solve each problem. So we're just renaming the resources, in a way, but in a way that makes more sense for how to solve them.
K
What this actually leads to is the rollup-centric roadmap for Ethereum. Has anyone heard of this, the rollup-centric roadmap? Okay, great, about five or six people. That's good though, because in 2016 Ethereum thought it would solve the world with execution sharding, and we've all realized otherwise.
K
This is how we've scaled cryptocurrencies for the past 10 years: raise your hand if you've ever used Coinbase, Binance or Bitstamp or whatever. There you go. Don't worry, I'm not the SEC, I'm not here to dox you. But realistically speaking, in a way, cryptocurrency exchanges are like sharding.
K
You deposit your funds into Coinbase, you go into Coinbase's execution layer, you transact as much as you want there, and then you bring the funds back — you're using Ethereum as a settlement layer. You get your funds on and off Coinbase, but otherwise Coinbase is where the execution happens.
K
Well, it sucks a bit, doesn't it? It's pretty custodial: you have to deal with their customer support if you get locked out; there's a private database; we have no idea whether the assets cover the liabilities; we have no proof of reserves. We have to blindly trust this execution layer, and we can't audit it in any way. We can do better than this, and that's the goal of this rollup-centric roadmap.
K
The goal is to build a bridge that connects to another blockchain system — an off-chain system that you can check in real time — and the bridge will hold your assets. You can mint them on this other system, transact there as much as you want, and then bring your funds back to Ethereum: you burn them on the other chain and bring them back to Ethereum.
K
So bridging really is at the heart of how we scale Ethereum. If we can build good bridges and move the computation elsewhere, we solve a big part of the scalability problem, and Ethereum then becomes a settlement layer that does minimal computation for the bridging and, of course, records everyone's account balances. So it really is about how we deal with bridging in the future.
K
Okay, so just to summarize what he means: when you bridge, you put the funds in the bridge and you go to this other network; there are gas fees there as well, and bridging back also costs funds. So yeah, I think Ethereum should be the most expensive chain, so that will always be expensive. Ideally most users — oh sorry, I'll just finish this one — most users should not have to interact with Ethereum.
K
They just live on these other layers and quickly transfer their funds, and because it's the execution layer, we can assume that they — like StarkNet, for example — can aggregate lots of transactions and amortize the cost across everyone. So there's still a cost, but hopefully a much smaller one.
K
Yes, that's a great point. What he's saying is that when you bridge your asset to another layer, you're taking on the risk of the bridge, and we've all seen the Binance bridge, the Nomad bridge, the Ronin bridge, the Wormhole bridge and others — they all keep getting hacked. There's clearly a smart contract risk in using a bridge, and then, depending on how you've designed your bridge, you may also have risk on the off-chain system as well.
K
So that's what the rollups are trying to build — let me just jump to that. Actually, let me finish this bit first: the rollups are trying to build a bridge where you don't have to trust the off-chain system at all. If the bridge is bug-free, then you should not have to trust the off-chain system at all. That's the long-term goal, but right now a lot of the bridges involve a lot of trust. Anyway.
K
So let's go to the settlement layer. Ethereum holds the set of funds, and the execution layers are these off-chain systems that offer a seamless user experience. The point here is that the off-chain database — the execution layer — records the liabilities, and the bridge records the assets, and the bridge is there just to make sure the assets cover the liabilities. So how are users protected on the off-chain system, and how does the bridge protect the assets?
K
With the bridge, you get an update from the off-chain system, and the bridge has to be convinced that this update is valid and correct; if it's correct, then it accepts that as the new state of the database. The bridge will always check: is every update to this off-chain system valid? Yes it is, so the funds are safe. The other approach is fraud proofs — the optimistic rollups.
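A conceptual sketch of that validity-proof bridge rule — a hypothetical interface, not any specific bridge's API; `verify_validity_proof` is a placeholder for a real SNARK/STARK verifier:

```python
# Conceptual bridge sketch: the contract only accepts a state update
# when it comes with a proof that the state transition was valid.

def verify_validity_proof(old_root: str, new_root: str, proof: bytes) -> bool:
    """Placeholder for a real proof-system verifier (assumption, not a real API).
    It must check: applying some valid batch of transactions to old_root
    yields new_root."""
    raise NotImplementedError

class Bridge:
    def __init__(self, genesis_root: str):
        self.state_root = genesis_root  # current root of the off-chain database

    def submit_update(self, new_root: str, proof: bytes) -> None:
        if not verify_validity_proof(self.state_root, new_root, proof):
            raise ValueError("invalid update rejected: funds stay safe")
        self.state_root = new_root
```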
K
What that assumes is that there's one honest party — someone in this room — who can come online, get a copy of the database and guarantee that all our transactions are eventually executed, so we can withdraw our funds from the system if that system is malicious. And this comes down to data, and there are three things to consider: why does the data need to be publicly available, what data needs to be publicly available, and how do we guarantee it is publicly available?
K
For these bridges, what you're assuming is that there's one honest party who can get the data, recompute the off-chain database, execute the transactions, propose an update to the bridge and let you get your funds out of the bridge. Now, this is very different to Ethereum: as we've just said for the past 40 minutes, there it's a trade-off between block proposers and verifiers and the resource constraints; here we just have to assume there's one honest party.
K
There's one honest party out there with enough resources to get a copy of the database and execute the transactions, and ideally anyone in this room could be that honest party. It doesn't let you escape the restrictions of the layer one entirely, but you can have this big beefy machine — I don't want to say supercomputer — on this network that reduces the fees for everyone, because now the main fee on a rollup is not execution.
K
The main fee is the data that you push to Ethereum; that's the biggest cost now for using rollups. And there's the question of what type of data you post: do you send the bridge the whole transaction history — all the transactions — or maybe just an update to the database, the state diff between the two databases?
K
How do we make this data available? Just to skip over this a bit: there are the challenge proofs for Plasma, which sort of failed; there's AnyTrust, like Arbitrum Nitro, or StarkNet with their data availability committee — a committee that guarantees the data is available and that one honest party can get the database; or a rollup, where you take all the data and you post it to Ethereum, and that's the next part.
K
They guarantee that one honest party can get the database for the off-chain system. So the long-term scalability goal for Ethereum is to make this data as cheap as possible: if you can give me data, and bandwidth, cheap, then you can have rollups that are humongous and deal with a crazy amount of transactions. This is danksharding, this is EIP-4844, and this is what they're going towards in the next few releases of Ethereum.
K
And there are also different networks emerging, because now that we've separated our concerns, maybe you have a dedicated data availability layer like Celestia, Polygon Avail, or Ethereum itself with danksharding — or a minimal Ethereum; actually, I don't know. We have the settlement layer here, which is Ethereum, because it's doing the rollups, and then all these different execution layers are emerging, all solving different parts of the puzzle of how we scale Ethereum.
K
Scalability really comes down to bridging: bridging is how we'll scale Ethereum, because we'll move the assets to another network and transact there. Bridges are, give or take, very insecure, but we're working towards building a secure version of bridges, and of course data availability is really the big bottleneck now for scaling Ethereum. And just to finish up, because I've got one minute left, here's the hardest last part. Oh god.
K
So the bridge should be able to independently check everything itself. When you give me an update for the bridge, I need to be able to check that it's a valid update: either you give me a zero-knowledge proof, so there's a mathematical guarantee, or I run a fraud-proof game — I get the update and there's, like, a one-week window for anyone to convince me that it's incorrect. The bridge has to be convinced.
K
Awesome, so maybe I'll just finish here. I think this was a great talk; I nearly — I didn't — finish the slides, but that's awesome, because we had a lot of great content. So I'll leave it here and let the next speaker come up, because I think he's waiting down there somewhere. So thank you guys. GG, awesome.
M
Okay, hello everyone, let's start today's Explain Like I'm Five session on zero-knowledge proofs. I'm very happy to have this session for the beginners here. By the way, this session is really aimed at a five-year-old kid, so if you are already familiar with zero-knowledge proofs, I strongly recommend you listen to the other session, Designing Public Goods Using ZKPs, which is run by Rachel, a super designer on the Ethereum Foundation's PSE team. She has, in three years at the team, designed a lot of ZKP-related user experiences and user interfaces, so it might be very inspiring. Okay, so let's start by thinking about where ZKPs are mostly used.
M
We use a zero-knowledge proof when you want to prove a fact but you don't want to share the underlying information. I want to give you a more detailed example, so let's think about immigration at the airport.
M
Let's assume that I'm an immigration officer here, and here's a traveler. When you go to the airport, the immigration officer says: where are you from? I say: I'm from Korea. Then the officer says: North or South? I definitely say I'm from the South, but unfortunately this immigration officer doesn't trust me that much, so he's like: I really think you look like you're from the North.
M
"So give me your passport." And, oh, this poor Korean guy wants the freedom to keep his personal information private, so I say: that's my personal information, I don't want to show my passport. Then what happens? They just kick him out. In this situation, what should I do?
M
Actually, I could show a passport number, or I could show the nationality on my passport card — but what if we give a zero-knowledge proof with which I can prove that I'm part of the South Koreans? This is just an example, but it could really happen in the future.
M
So let's think about what happens here. We can call the immigration officer the verifier, and the traveler — the poor South Korean — the prover. The prover prepares the witness using my passport: "witness" means I have a passport number here, I'm male, my registration number in Korea, the birthday, etc. The prover then generates a ZK proof and gives it to the verifier, which means the immigration officer here.
M
Actually, this is quite possible, and we're going to take a look later at the details of how we can prove membership — that I'm part of the South Korean people. By the way, let's see what the witness is and what information we have to share here.
M
So there is some information you can see here: maybe there's a passport number that I don't want to share with the immigration officer. I call these values private inputs. But I'm going to say I'm part of the South Koreans, so I open my nationality: this is the public input. And to prove that I'm part of the people, I calculate some mathematical values using those inputs.
M
We call these intermediate values, together with all the private inputs and public inputs, the witness. And this is just the basic model of how a zero-knowledge proving system works: we have private inputs, we have public inputs, and we also have the circuit. The circuit is about the relations.
M
It's like: I don't reveal my passport number, but my passport number is definitely part of the registered database of Korean people's passport numbers. So it's kind of a membership proof: there's a database, and I don't want to share my information, but my entry definitely exists in the database. This relation is what the circuit is. Do we have soundness there? Yes, we have soundness: if the proof is too small, we can't keep it — the probability of a false positive becomes too high — so the proof size is pretty important here. Okay, but imagine if we do this a thousand times at immigration, or in ordinary services: "okay, just tell me your answer again"... it totally doesn't make sense. So we need a non-interactive system here.
M
Okay, let's go back to the earlier case. Here the verifier tries to make up a random question every time — but what if we generate some random set of questions before the proving happens?
M
Yeah, right. So we need to homomorphically encrypt those values. Today we're not going to deal with the homomorphic concept in depth, but we should encrypt this in a verifiable way.
M
Before we go to the next slide, I just want to say: this is called a common reference string, because the verifier already made this for the prover, so this string is shared between the prover and the verifier — a commonly shared string that serves as the reference for the proving system. That's why it's called a common reference string. And because we need to encrypt those values...
M
I wrote this in Korean — there might be only a few people here who can read it — but it's saying go left, go right: if you go left and then right, then right twice, and so on. So let's assume there is something that can interpret this encrypted common reference string. Actually, "interpret" is not a good explanation here; in more detail, let me assume our five-year-old kid understands hash functions.
M
What we want to build is a protocol that can be used publicly and widely by everyone. For example, say we have a system where we can transfer ETH using a ZK proof: the ZK proof will include the information that I have enough balance and that I can generate a signature, and all of that information will be inside the ZK proof. But if you can cheat on that, you could just move a lot of ether to your account without the correct information, without the signature. So it's really important to make this safe for everyone.
M
So, anyone who wants to guess what it's called? Yes, this is called the trusted setup. To achieve this, we have to do the trusted setup, so let me explain it one step at a time.
M
Actually, this is how a ZKP system — I mean, a ZK-SNARK — works with the common reference string. There is a common reference string made by the trusted setup, shared with the prover and the verifier, and neither the prover nor the verifier knows the original seed of the reference string. Then the prover picks a random salt and shares it with the verifier, and once the prover has picked the salt, the prover can derive the set of questions, because there is a reference string — the questions the verifier would have asked first in the interactive system.
M
The verifier then uses the homomorphic characteristic of the reference string. It can be a little bit tricky, but this is a pretty important thing, so I want to explain the multi-party computation part here. Yeah, actually, this is pretty technical, but not that difficult.
M
There is a homomorphic hiding: g^a is, roughly, an encrypted value that we can make from the number a. There's a formal definition of homomorphic hiding, but just think about it like a hash function: think of g^a as something like the hash of a — it's a homomorphic hiding of a.
M
The key characteristic is that we can compute g^a from the number a pretty easily, but, in contrast, it is extremely difficult to compute a from g^a.
M
Actually, this is called the discrete logarithm assumption — but let's skip the details here. Also, if we have g^a and b, we can compute g^(ab) pretty easily. Okay, these are some key features of homomorphic hiding using elliptic curve cryptography. Now let's see how the trusted setup works: the trusted setup is creating a common reference string.
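Here is a tiny numerical sketch of those two properties, using modular exponentiation over a small prime instead of a real elliptic curve — toy parameters, insecure by construction; real systems use curve groups of roughly 256-bit order:

```python
# Toy homomorphic hiding: E(a) = g^a mod p. The forward direction is easy;
# inverting it (the discrete log) is the hard direction.
p = 2**127 - 1          # a toy prime modulus (NOT a cryptographic size)
g = 3                   # a base for the demo

def hide(a: int) -> int:
    return pow(g, a, p)  # fast even for huge exponents

a, b = 123456789, 987654321
assert hide(a * b) == pow(hide(a), b, p)   # g^(ab) from g^a and b: easy
# Going the other way -- recovering a from hide(a) -- is the discrete
# logarithm problem; at real parameter sizes no known shortcut exists.
```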
M
Actually, we did the encryption thing here, you remember: the left, left, right, left, right instructions are a, b, c, d, e, and the already-encrypted values are g^a, g^b, g^c and so on. So we can compute g^a through g^e, and Alice shares these g^a ... g^e values publicly with the people.
M
Then Bob joins this ceremony. Alice may discard the a, b, c, d, e values if Alice is honest, but if Alice is not honest, maybe she just stores the a, b, c, d, e values on her computer. By the way, Alice didn't share the a, b, c, d, e values with Bob yet. Okay, then Bob also picks his own set of random values, f, g, h, i, j, and then we're going to create a new reference string using g^a ... g^e and f, g, h, i, j.
M
So Bob can generate g^(af), g^(bg), g^(ch), without knowing the a, b, c, d, e values, and Carol does the same thing. Then, if any one of these three participants discarded and destroyed their randomly picked values, no one can know the exponents behind the final reference string — g^(afk), g^(bgl), g^(chm) and so on — because to know afk you need all of the values a, f and k. So it's infeasible to recover.
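A toy sketch of that sequential ceremony, in the same modular-arithmetic stand-in as above — toy parameters again; real ceremonies work over elliptic curve points and many powers of the secret:

```python
# Toy trusted-setup ceremony: each participant re-randomizes the public
# values with their own secret exponent, then (if honest) destroys it.
p = 2**127 - 1
g = 3

def contribute(public_values: list[int], secret: int) -> list[int]:
    """Raise every public value to the participant's secret exponent."""
    return [pow(v, secret, p) for v in public_values]

crs = [pow(g, a, p) for a in (11, 22, 33)]   # Alice's g^a, g^b, g^c
crs = contribute(crs, 777)                    # Bob mixes in f without seeing a, b, c
crs = contribute(crs, 555)                    # Carol mixes in k
# crs[0] is now g^(11*777*555); recovering the combined exponent needs
# ALL of the secrets, so one honest participant destroying theirs is enough.
assert crs[0] == pow(g, 11 * 777 * 555, p)
```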
M
Yeah, just go to the trusted setup ceremony — this is the page for the ceremony that opened today. You can just go to ceremony.ethereum.org and join the ceremony. And you just saw that it has to be conducted in a sequential manner, because Alice does something and shares it, and then Bob does something, and so on, so there might be some queue. But please don't lose your faith — you don't need to; please don't trust anyone — so get into the queue, and let's join the ceremony together.
M
Okay, great. Now I think I've explained almost every important concept of the story. Then let's rebuild why this is called a ZK-SNARK, using the concepts we just explored today.
M
Okay, so ZK-SNARK stands for zero-knowledge succinct non-interactive argument of knowledge. First, zero-knowledge just means hiding some information: you remember that I wanted to hide my passport number but still prove something. If you want to hide some value, that can be called zero-knowledge. And to talk about succinctness, we actually have to talk about soundness, you remember.
M
If we repeat the challenge only 10 times, the proof will be pretty small, but if we repeat the answering, like, 10,000 times, the proof size will be much larger.
M
So it's pretty important to keep the proof succinct while we keep the soundness; we need to find the right balance there. So "succinct" is here because of the soundness trade-off. And the non-interactive part: we can't do the thousand rounds at the immigration office, right? So we need a non-interactive system, and for the non-interactive system we need a common reference string, and because of that we need to do the trusted setup — and all of this...
M
...we did all these things to prove an argument of knowledge, right? So this is called a ZK-SNARK: zero-knowledge succinct non-interactive argument of knowledge. Does everyone understand now? Great, I'm pretty happy now. Okay, so I'll go through the applied-ZKP part. Where can we use ZKPs? Mostly people think: I can hide something, so it can be used for privacy — definitely. The usages are mainly privacy and scaling, and there are a lot of undiscovered usages.
M
And actually we've already gone through some difficult concepts, like multi-party computation, homomorphic hiding and the discrete logarithm assumption, so let me assume our kid now also knows hash functions and Merkle trees. And let's remind ourselves how the ZK proving system works here again: we have a circuit that represents the relations over the witness, including the public inputs and private inputs; the prover creates a ZKP, and the verifier verifies the proof using the circuit. Now, in the Merkle tree:
M
What we want to do here is prove that there is a leaf in the Merkle tree, without revealing any information about the leaf or the sibling information, which could reveal the path of the leaf and so serve as a hint. And here I'm going to share the Merkle root information between both the verifier and the prover.
M
So we can compute the Merkle root using the sibling values, and the sibling values should also be private inputs, because if they were revealed they could be a hint about the leaf. Then, to generate the Merkle proof, we need to compute the intermediate nodes — you compute the branch nodes of the Merkle tree when you compute the Merkle proof — and these intermediate values are also part of the witness. Strictly, the witness includes the private and public values too, but I'm going to call all of this the witness, and this is the relation over the witness.
M
This is the relation that we want to prove using the witness, while not revealing the private information. Okay, so we just lay these values out like this: there is a circuit — the logic is the blue color — the private inputs are the green color, and the public inputs are the red color here. Then we can generate a ZK proof, and the verifier can check: okay, you don't need to reveal the private inputs, but you've proven you have information connecting to this root value.
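A minimal sketch of the relation that circuit encodes — plain Python with SHA-256 standing in for a circuit-friendly hash; a real circuit would use something like Poseidon, and the proof would hide `leaf`, `siblings` and `path_bits` rather than run this in the clear:

```python
import hashlib

def H(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(left + right).digest()

def compute_root(leaf: bytes, siblings: list[bytes], path_bits: list[int]) -> bytes:
    """Recompute the Merkle root from a leaf and its sibling path.
    path_bits[i] == 0 means the current node is the LEFT child at level i."""
    node = hashlib.sha256(leaf).digest()
    for sib, bit in zip(siblings, path_bits):
        node = H(node, sib) if bit == 0 else H(sib, node)
    return node

# The circuit's statement: private (leaf, siblings, path_bits) hash up to
# the public root. The verifier only ever sees `root` and the proof.
leaves = [b"alice", b"bob", b"carol", b"dave"]
hashes = [hashlib.sha256(l).digest() for l in leaves]
level1 = [H(hashes[0], hashes[1]), H(hashes[2], hashes[3])]
root = H(level1[0], level1[1])
assert compute_root(b"carol", [hashes[3], level1[0]], [0, 1]) == root
```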
M
The public input is the red thing, the root, and there is also the relation logic over the witness: that is the Merkle proof. And this membership proof can be used for various purposes. First of all, privacy protocols, definitely, and for privacy protocols we have a very good example in identity.
M
We have the Semaphore protocol, which is a membership proof protocol that keeps your identity private but lets you vote on some agenda in an anonymous manner. We also have private transaction systems — Zcash, and also Aztec's zk.money, Polygon Nightfall, the PSE team's Zkopru and similar private cash systems — which are all implementing the same logic with this membership proof system. And this membership proof system can be implemented in various ways.
M
The first way is the method I shared here, the Merkle proof approach, and recently people have been exploring another methodology using vector commitments, which let us express a set of values using a polynomial. If you're interested, you can just Google "Caulk" and take a deep look at that. By the way, we have 10 more minutes, so I'm going to use them. The next example is scaling.
M
I think you guys are pretty familiar with the word "rollup", right? Actually, I guess the rollup started in 2018 with Barry, our PSE team's leader, and rollups started from the ZK rollup, so I'm going to explain the basic form of a ZK rollup here. Okay — oh my god, yeah — let's use this diagram. The first block is just a normal Ethereum block; let's assume this is normal Ethereum.
M
Then there are some transactions — transaction one, transaction two — and for each transaction, every externally owned account has to generate an ECDSA signature, right? So every transaction has its matching signature there, and finally we compute the block hash using some other values. But what if we make these signatures private inputs? What happens then?
M
Yeah, computation, because if we just compress all the signatures, then we don't need to verify all the elliptic curve signatures. So let's assume we have ten thousand signatures; then the zero-knowledge proof can be much smaller than the 10,000 signatures. So we can reduce the data size a lot, and we can also skip the computation, just using the cryptographic verification system. So we have two advantages here: the scaling of the computation and the scaling of the data usage. So yeah, this is the reason why we are using it.
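As rough back-of-envelope arithmetic (the byte sizes below are illustrative assumptions, not measurements): a compact ECDSA signature is about 65 bytes, while a Groth16 proof is a few group elements, roughly 200 bytes, and stays constant-size no matter how many signatures it covers.

```python
# Illustrative numbers only: exact sizes depend on encoding and proving system.
SIG_BYTES = 65      # typical compact ECDSA signature (r, s, v)
PROOF_BYTES = 200   # a Groth16 proof is a few group elements, ~200 bytes
NUM_SIGS = 10_000

naive = NUM_SIGS * SIG_BYTES   # post every signature on-chain
rollup = PROOF_BYTES           # post one proof covering all of them

print(f"naive:  {naive:,} bytes")   # 650,000 bytes
print(f"rollup: {rollup:,} bytes")  # 200 bytes
print(f"~{naive // rollup}x less signature data")  # ~3250x
# Verifiers also skip 10,000 ECDSA checks and run one succinct proof check.
```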
M
And actually, there is a tutorial where you can implement a simple ZK rollup by yourself. So if you want to deep dive into how it really works, then you can just go there and go through the tutorial. Okay, it will be very helpful for you to understand how it works.
M
Okay, and next I'm going to share other fun examples, like MACI and the rate-limiting nullifier.
M
MACI stands for Minimal Anti-Collusion Infrastructure. Actually, have you tried clr.fund before? It uses quadratic funding, and in quadratic funding it is very useful to buy votes, because the number of participants matters much more than the amount each participant gives, right? So in quadratic voting, buying votes is pretty effective.
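Here is a quick illustration of why participant count dominates, using the standard quadratic-funding matching weight proportional to the square of the sum of square roots of contributions; the numbers are my own toy values.

```python
from math import sqrt

def qf_weight(contributions):
    """Quadratic-funding matching weight: (sum of square roots)^2.

    Matching payouts are proportional to this weight across projects."""
    return sum(sqrt(c) for c in contributions) ** 2

one_whale = qf_weight([100.0])        # one donor giving 100
many_small = qf_weight([1.0] * 100)   # 100 donors giving 1 each

print(one_whale)    # 100.0   -> same total money...
print(many_small)   # 10000.0 -> ...but 100x the matching weight
# Splitting a bribe across many colluding voters is therefore profitable,
# which is the collusion MACI is designed to make unprovable.
```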
M
The coordinator mixes the results and proves, using a zero-knowledge proof, that they were processed correctly without modification. So we are also doing the clr.fund round for Devcon, so you can go to clr.fund to participate in the new quadratic funding round. And another really fun example is the rate-limiting nullifier.
M
This is pretty novel, and maybe it is pretty hard to arrive at this concept from ZKP, because we are usually thinking only about privacy or scaling, right? Then let's see what it is. So there is a polynomial, a degree-one polynomial, y = ax + b, and this is a polynomial that I just chose. Actually, the value b is my Ethereum secret key. Then, okay, I can show you some point on this line.
M
If you have two points, you can just compute this polynomial, right? Because this is degree one, you can recover a and b, and then you know my secret key, so my secret gets revealed, right? So the rate-limiting nullifier is using this. Actually, this primitive is Shamir's secret sharing protocol, and the rate-limiting nullifier is using it to prevent spam attacks.
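Here is a minimal sketch of the two-points-reveal-the-line mechanic, with toy field arithmetic and my own parameters; it is not the actual RLN circuit.

```python
# Toy illustration of the RLN mechanic over a prime field:
# the line is y = a*x + b, where b is the secret key.
P = 2**61 - 1        # stand-in prime modulus (illustrative, not RLN's field)

secret_b = 123456789  # "my Ethereum secret key" in this toy
slope_a = 987654321   # per-epoch slope (RLN derives it per message epoch)

def share(x):
    """One 'message' reveals one point (x, y) on the secret line."""
    return (x, (slope_a * x + secret_b) % P)

# One point reveals nothing about b; two points pin down the whole line.
(x1, y1), (x2, y2) = share(5), share(17)
a = (y2 - y1) * pow(x2 - x1, -1, P) % P  # slope via modular inverse
b = (y1 - a * x1) % P                    # intercept = the secret key

assert (a, b) == (slope_a, secret_b)     # secret recovered => spammer slashed
```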
M
Every message I send reveals a point on the polynomial to you, so if I just send you too many messages, you can actually recover my polynomial and take all my ETH from the account, right? But here we have to use ZKP to prove that all these shared points are really on the polynomial.
M
This is the only relation, pretty simple, right? Then we can use this as a spam protection protocol. We are doing a lot of experiments using this rate-limiting nullifier concept for the consensus layer and also for peer-to-peer networking. So, okay, this is the last part: for everyone from a five-year-old kid to a student, I want to recommend this curriculum. The first step is to just write a ZKP application using the tutorial I shared for the ZK rollup; it will help
M
you understand how ZKP works and how the proving system works there. And then you need to study and learn, at first, abstract algebra, because in the proving system we are using a specific set of numbers, and we need to understand how these numbers work and how the homomorphic hiding works. To understand this, actually, you need to understand abstract algebra and the group theory thing. After that,
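As a tiny example of what homomorphic hiding means, here is the standard discrete-log-style toy with my own small parameters; real systems use elliptic-curve groups.

```python
# Homomorphic hiding, toy version: E(x) = g^x mod p hides x,
# yet E(x) * E(y) = E(x + y), so you can compute on hidden values.
p = 2**127 - 1   # illustrative Mersenne prime modulus
g = 3            # base for the demo

def E(x: int) -> int:
    return pow(g, x, p)

x, y = 41, 58
assert E(x) * E(y) % p == E(x + y)  # the additive relation survives hiding
# Recovering x from E(x) is the discrete logarithm problem: believed hard.
```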
M
please study and learn about elliptic curve cryptography first, and then, after that, please study pairing-based cryptography. Then, maybe some of you guys have heard about PLONK and KZG and inner product arguments and stuff; they are all the kind of things that come after you study this pairing-based cryptography, and so on.
M
Then, after that, just go through the polynomial commitment schemes, which are like how to make the questions and how to make the answers, what we did with Ali Baba's cave and the witness. Actually, arithmetization is pretty related to the polynomial commitment scheme, so you are going to study R1CS arithmetization with Groth16, and you are going to study PLONK arithmetization with KZG or the inner product argument.
M
Okay, so thank you, everyone. I'm Wanseob from the Ethereum Foundation PSE team, and I hope this session helped you a lot. Thank you so much. I think that's it, yep.